This blog was x-posted on https://tabulareditor.com/blog/semantic-modeling-patterns-with-power-bi-and-databricks
Databricks, the inventor of the Lakehouse, has become a popular data platform that many organizations adopt. One of the more meaningful ways to leverage the data in Databricks is through visualization and reporting in Power BI. But how can we do this most effectively?
Integration between Power BI and Databricks has evolved significantly over time. As a result, multiple integration patterns have emerged, each tailored to different scenarios and use cases. This article provides an overview of the options for integrating Power BI with Databricks, with a focus on building robust semantic models directly on the Databricks Lakehouse that you can use for reporting, dashboarding, and AI queries.

The diagram shows the following approaches:
- Databricks SQL and Power BI Desktop.
- Publish to Power BI Service from Databricks, directly.
- Power BI task in Databricks workflows.
- Databricks, Tabular Editor, and Power BI.
We explain these approaches in the rest of the article.
Databricks SQL + Power BI Desktop

Most users begin their Power BI development journey with Power BI Desktop, which remains the most common starting point for building semantic models. A few years ago, Databricks introduced Databricks SQL, a service designed to run BI workloads. If you’re still using interactive clusters to connect Power BI to Databricks, switching to Databricks SQL warehouses can offer significant performance gains.
This pattern is a great starting point because Power BI Desktop is easy to use and widely adopted, while Databricks SQL warehouses are scalable and optimized for BI use cases.
Despite its popularity, this approach has a few limitations. Firstly, apart from configuring incremental refresh, Power BI Desktop does not support custom partitions in semantic models. Custom partitions are a technique for dividing tables into smaller, more manageable segments. Their absence can be a real limitation for large models, because partitioning has several benefits, such as:
- Faster refreshes: Partitioning lets you refresh multiple parts of a table at the same time (e.g., by configuring MaxParallelismPerQuery), which can drastically reduce refresh duration.
- Better resource management: You can leverage partitions for more efficient queries and manage these partitions separately, refreshing only the partitions that you need to, rather than the entire table.
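To make the benefit concrete, the sketch below builds a request body for the Power BI enhanced refresh REST API, which lets you refresh only selected partitions of a table and control parallelism. The table name, partition names, and the workspace/dataset IDs in the comment are placeholders for this example.

```python
# Sketch: refreshing only specific partitions of a large table via the
# Power BI enhanced refresh REST API, instead of refreshing the whole table.

def build_refresh_body(table, partitions, max_parallelism=4):
    """Build an enhanced-refresh request body targeting specific partitions."""
    return {
        "type": "full",
        "commitMode": "transactional",
        "maxParallelism": max_parallelism,  # how many partitions refresh in parallel
        "objects": [{"table": table, "partition": p} for p in partitions],
    }

# Hypothetical table/partition names for illustration:
body = build_refresh_body("FactSales", ["Sales2024", "Sales2025"])
# POST this body (with an Entra ID bearer token) to:
# https://api.powerbi.com/v1.0/myorg/groups/{workspaceId}/datasets/{datasetId}/refreshes
```

Only the two named partitions are refreshed, and up to four can run in parallel; the rest of the table is untouched.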
It is important not to confuse partitioning in Databricks with partitioning in Power BI. Partitioning in Power BI happens in the semantic model, while partitioning in Databricks happens in the Lakehouse, i.e., partitioning Delta tables, a technique that has largely been superseded by liquid clustering. Liquid clustering can boost Power BI query performance when the cluster keys are common query predicates. Liquid clustering is to Databricks as v-ordering is to Fabric.
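As a quick illustration of the Databricks side, the snippet below holds the kind of DDL involved in liquid clustering; the schema, table, and column names are made up for this example, and the statements would run on a Databricks SQL warehouse or via spark.sql(...) in a notebook.

```python
# Sketch: liquid clustering on a Delta table, so Power BI queries that
# filter on the cluster keys (common query predicates) can skip files.
# All object names below are illustrative.

CREATE_CLUSTERED_TABLE = """
CREATE TABLE sales.fact_orders (
  order_date  DATE,
  customer_id BIGINT,
  amount      DECIMAL(18, 2)
)
CLUSTER BY (order_date, customer_id)
"""

# An existing table can be converted, then re-clustered incrementally:
ALTER_CLUSTER_KEYS = "ALTER TABLE sales.fact_orders CLUSTER BY (order_date, customer_id)"
RECLUSTER = "OPTIMIZE sales.fact_orders"
```

Choosing cluster keys that match the filters your Power BI reports actually issue (here, date and customer) is what delivers the query speed-up.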
In summary, approach one involves creating and managing your semantic model from Power BI Desktop. While straightforward to get started, there are several limitations. Power BI Desktop is not available for macOS, excluding Mac users from semantic model development. Furthermore, to unlock other advanced modelling capabilities, tools like Tabular Editor are needed (to be covered later in this article).
Pros:
- Easy to get started with Power BI Desktop, so the adoption threshold is low. This pattern can serve as a starting point for your semantic modelling journey.
Cons:
- No custom partitions, so it is not suitable for partitioning use cases beyond simple time-based incremental refresh. This is especially important for developers working on large semantic models.
- Mac users will need to do modelling via a Windows virtual machine to use Power BI Desktop.
- Can be limited and/or slower for complex modelling tasks, so this option is not suitable for experienced developers, or those who are looking for automation and advanced modelling features.
Best for:
Desktop developers with simple/moderate modelling needs and no requirement for advanced partitioning or automation.
Publish to Power BI Service from Databricks, Directly

Databricks introduced Publish to Power BI Service last year – a web-based integration that connects a Databricks workspace to Power BI using Microsoft Entra ID. This feature allows users to publish a table or an entire schema directly from Databricks to a Power BI workspace, enabling quick data access in Power BI without needing Power BI Desktop.
All modelling is performed in the Power BI Service web UI, which makes it accessible to Mac users and ideal for ad hoc data exploration. However, this feature is best suited for lightweight scenarios, as it currently lacks API support, which means you cannot use this functionality by calling a REST API endpoint. In addition, the modelling capabilities are limited compared to what’s available in Power BI Desktop or external tools like Tabular Editor.
Pros:
- Point and click: easy to get started with the Databricks UI and Power BI web UI. Once the connectivity between the two services is configured, deployment only takes a few clicks.
- Since the solution is web-based, Mac users have the same user experience as Windows users.
- Good for data exploration of Databricks data in Power BI. If your use case is simply bringing some data from Databricks into Power BI and doing some quick ad hoc analysis, this is a great option.
Cons:
- Modelling capabilities are very limited, since the Power BI web UI offers only basic modelling features. Thus, this approach is not suitable for use cases that require more than basic modelling tasks.
- No API support means operationalization of data modelling done through this feature is not possible.
Best for:
SQL analysts doing ad hoc data exploration.
Power BI Task in Databricks Workflows

Announced in Q1 2025, the Power BI task in Databricks Workflows is an evolution of the “Publish to Power BI Service” feature. This enhancement allows Power BI to be configured as a task within Databricks workflows, enabling users to publish, update, and refresh Power BI semantic models in a fully orchestrated and automated way.
The feature supports DirectQuery, Import, and Composite modes, and is available with Databricks Jobs API and Databricks Asset Bundles, making it well-suited for DevOps and CI/CD pipelines. While powerful for automation, it’s important to note that the feature is still in preview and does not include modelling capabilities—modelling must be performed in the Power BI Service Web UI, similar to the previous Publish to Power BI feature.
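As a minimal sketch of what this automation looks like from a CI/CD pipeline, the snippet below triggers an existing Databricks job (which would contain the Power BI task configured in the workspace) through the Jobs API run-now endpoint, using only the standard library. The hostname, token, and job ID are placeholders.

```python
# Sketch: triggering a Databricks job that publishes/refreshes a Power BI
# semantic model, via POST /api/2.1/jobs/run-now. Host, token, and job_id
# are placeholders for this example.
import json
from urllib import request

def build_run_now(job_id):
    """Request body for the Jobs API run-now endpoint."""
    return {"job_id": job_id}

def trigger_job(host, token, job_id):
    req = request.Request(
        f"https://{host}/api/2.1/jobs/run-now",
        data=json.dumps(build_run_now(job_id)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with request.urlopen(req) as resp:  # response contains the new run_id
        return json.loads(resp.read())
```

Because the Power BI publish/refresh is just another task in the job, one API call kicks off the whole chain: ingestion, transformation, and semantic-model update.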
Pros:
- Built into Databricks Workflows, with Jobs API and DABs support, which means you can extend your DevOps pipeline beyond data engineering in Databricks to include publishing semantic models to Power BI.
- Possible to chain ingestion, data engineering, and Power BI semantic model publishing in a single, orchestrated pipeline.
Cons:
- Like the previous pattern, modelling capabilities are very limited given the constraints of the Power BI web UI. This is not suitable for developers who need to go beyond basic semantic modelling tasks.
- This feature was announced at FabCon 2025 and is still in Public Preview. Organizations whose policies prevent adopting preview features will need to wait until it reaches general availability.
Best for:
Data engineers owning semantic layer with very simple modelling needs.
Databricks, Tabular Editor, and Power BI

This pattern stands apart from the others by introducing an external tool—Tabular Editor—into the architecture. Tabular Editor serves as a powerful bridge between Databricks and Power BI, enabling both advanced semantic modelling and DevOps workflows. With a native Databricks connector, Tabular Editor supports complex modelling scenarios and full automation of tasks such as deployment, version control, and scripting.
This approach is ideal for teams looking to optimize and automate the full lifecycle of Power BI semantic models. However, it introduces additional tooling complexity, and since Tabular Editor is Windows-based, it excludes Mac users, just like Power BI Desktop (discussed earlier in approach one).
To dive deeper into modelling and automation features, visit Tabular Editor Learn. For DevOps integration patterns between Databricks and Power BI using Tabular Editor, check out my upcoming session at SQL Bits 2025: “Bridging DevOps Across Power BI and Databricks with Tabular Editor.”
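For instance, a pipeline step might call the Tabular Editor 2 command-line interface to deploy a model file to an XMLA endpoint. The sketch below only builds the command; the installation path, workspace URL, and database name are placeholders, and the exact CLI flags should be verified against the Tabular Editor documentation for your version.

```python
# Sketch: automating semantic-model deployment from a pipeline step by
# invoking the Tabular Editor 2 CLI. Paths and names are placeholders.
import subprocess  # noqa: F401  (used by the commented-out run below)

def build_deploy_command(model_path, xmla_endpoint, database):
    """Argument list for a TE2 deployment: -D deploys the model file to the
    given endpoint/database; -O allows overwriting an existing model."""
    return [
        r"C:\Program Files (x86)\Tabular Editor\TabularEditor.exe",
        model_path,
        "-D", xmla_endpoint, database,
        "-O",
    ]

cmd = build_deploy_command(
    "Model.bim",
    "powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace",  # hypothetical workspace
    "SalesModel",                                        # hypothetical database
)
# subprocess.run(cmd, check=True)  # run on a Windows build agent
```

Wrapped in a build agent step, this turns model deployment into a repeatable, version-controlled operation rather than a manual publish.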
Pros:
- Offers more advanced features for managing complex models, which means you can build better models faster using Tabular Editor.
- Advanced scripting and task automation for data modelling, which can help you improve productivity when working with semantic models.
- Enables version control and parallel development for deployment, which makes team collaboration possible when working on the same semantic models.
Cons:
- Requires some familiarity with Tabular Editor and Power BI modelling concepts. If you are just getting started with Power BI and want to learn the basics of Tabular Editor, you can check out this beginner course from Tabular Editor Learn.
- Requires setting up Tabular Editor and introducing one more component into the end-to-end development process. More components in the pipeline mean more complexity to manage, and you will need to set up the DevOps pipeline properly to orchestrate the end-to-end process.
Best for:
Advanced modellers who have needs for end-to-end DevOps, team collaboration, model optimization, and/or batch task automation.
Conclusion
In summary, there are several different approaches to set up a semantic model that uses data from Databricks:
- Databricks SQL and Power BI Desktop: With this approach, users connect to Databricks SQL from Power BI Desktop, where they create and manage their semantic model from the Windows application. While simple, this approach has limitations that can inhibit enterprise organizations with large models.
- Publish to Power BI Service from Databricks, directly: With this approach, you create the semantic model by publishing tables from the Databricks UI to Power BI Service. Then, you build the model in the web. This approach can be suitable when users are on macOS or have simple modelling requirements, but the web modelling experience does not offer the same features or utility as alternatives.
- Power BI task in Databricks workflows: With this approach, you can streamline and automate semantic model publishing from Databricks using Databricks workflows, but you still need to build the model in the Power BI service via the web. You can leverage more advanced CI/CD, but have the same limitations as Approach 2.
- Databricks, Tabular Editor, and Power BI: With this approach, you use Tabular Editor to connect to Databricks and build/manage your semantic model, which you deploy to Power BI. Here, you can leverage all the productivity enhancements of Tabular Editor to ensure that your semantic model is tuned to perfection. However, developers need to know how to use Tabular Editor and deploy/manage models via XMLA read/write endpoints.
Choosing the right approach depends on your scenario and needs!




