
Power BI Dataverse Integration: Dynamics 365 and Model-Driven Apps
A comprehensive guide to integrating Power BI with Microsoft Dataverse covering the Dataverse connector, TDS endpoint, virtual tables, Synapse Link for Dataverse, Dynamics 365 analytics, data modeling strategies, Delta Lake export, and Microsoft Fabric integration for enterprise analytics.
<h2>The Strategic Role of Dataverse in Enterprise Analytics</h2>
<p>Microsoft Dataverse is the data backbone of the Power Platform and Dynamics 365 ecosystem. Every Dynamics 365 application—Sales, Customer Service, Finance, Supply Chain Management, Field Service, Marketing—stores its operational data in Dataverse. Every model-driven Power App, every Power Automate flow that processes business data, every Copilot Studio agent that serves enterprise users reads from and writes to Dataverse. For organizations running Dynamics 365, Dataverse is not just a database—it is the single source of operational truth for CRM, ERP, and custom business applications.</p>
<p>The challenge for analytics teams is that Dataverse is an <strong>operational</strong> data store optimized for transactional workloads (reads and writes from application users), not an <strong>analytical</strong> data store optimized for complex queries across millions of rows. Directly querying Dataverse for analytical workloads—large aggregations, cross-table joins, time-series analysis—can degrade application performance for operational users and encounter throttling limits. The enterprise integration strategy must separate analytical workloads from operational workloads while maintaining data freshness and consistency.</p>
<p>Our <a href="/services/power-bi-consulting">Power BI consulting</a> team designs Dataverse-to-Power BI integration architectures for organizations running Dynamics 365 across <a href="/industries/healthcare-consulting">healthcare</a>, <a href="/industries/financial-services-consulting">financial services</a>, and <a href="/industries/government-consulting">government</a> sectors. This guide covers every integration method, from direct connections for small datasets to Synapse Link and Fabric integration for enterprise-scale analytics.</p>
<h2>Integration Method 1: The Dataverse Connector (Power Query)</h2>
<p>The native Dataverse connector in Power BI Desktop connects directly to Dataverse tables using the Dataverse Web API. This is the simplest integration method and is appropriate for small to mid-size datasets (under 500,000 rows total across all tables).</p>
<h3>How to Connect</h3>
<ol> <li>In Power BI Desktop, select <strong>Get Data > Dataverse</strong> (or search for "Dataverse" in the connector list).</li> <li>Authenticate with your Microsoft 365 credentials (the same credentials used to access Dynamics 365).</li> <li>Browse or search for tables. Dataverse tables appear with their display names (e.g., "Account", "Contact", "Opportunity") and schema names (e.g., "account", "contact", "opportunity").</li> <li>Select the tables you need and click <strong>Load</strong> (Import mode) or <strong>Transform Data</strong> to apply Power Query transformations before loading.</li> </ol>
<h3>Import Mode vs. DirectQuery</h3>
<p>The Dataverse connector supports both Import and DirectQuery modes:</p>
<ul> <li><strong>Import mode</strong>: Data is loaded into the Power BI in-memory engine during scheduled refresh. Best for datasets under 500K rows where data freshness of 30 minutes to a few hours is acceptable. Provides the fastest query performance for report users.</li> <li><strong>DirectQuery mode</strong>: Queries are sent to Dataverse in real time. Provides always-current data but with significantly slower query performance and with Dataverse API throttling limits that can cause query failures under concurrent user load. DirectQuery against Dataverse is generally <strong>not recommended</strong> for production dashboards due to performance and throttling constraints.</li> </ul>
<h3>Performance Considerations</h3>
<ul> <li><strong>Row limits</strong>: The Dataverse connector uses the Web API, which pages results in 5,000-row batches. Loading a table with 2 million rows requires 400 API calls, which is slow and can trigger throttling.</li> <li><strong>Column selection</strong>: Always select only the columns you need in Power Query. Dataverse tables (especially Account, Contact, Opportunity) can have 200+ columns, and loading all columns wastes memory and increases refresh time.</li> <li><strong>Query folding</strong>: The Dataverse connector supports limited query folding. Filters applied in Power Query may or may not be pushed to the API. Test by right-clicking a step in Power Query and checking whether "View Native Query" is available.</li> <li><strong>Incremental refresh</strong>: Configure incremental refresh on large Dataverse tables to avoid full refreshes. Use the <code>modifiedon</code> column (present on all Dataverse tables) as the date/time filter column.</li> </ul>
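<p>The paging arithmetic behind the first bullet is worth making concrete. A minimal Python sketch (page size per the Web API paging behavior described above; the row counts are hypothetical) shows why full refreshes of large tables are expensive and how an incremental-refresh window shrinks the call count:</p>

```python
import math

PAGE_SIZE = 5_000  # Dataverse Web API page size used by the connector

def api_calls_for(rows: int, page_size: int = PAGE_SIZE) -> int:
    """Number of paged Web API calls needed to read `rows` rows."""
    return math.ceil(rows / page_size)

# Full refresh of a 2M-row table vs. a one-day incremental window
full_refresh = api_calls_for(2_000_000)  # 400 calls
incremental = api_calls_for(15_000)      # 3 calls for one day of changes

print(full_refresh, incremental)
```

<p>Shrinking the refresh from the whole table to a day's worth of changes cuts the API traffic by two orders of magnitude, which is exactly what incremental refresh on <code>modifiedon</code> achieves.</p>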
<h2>Integration Method 2: TDS Endpoint (SQL Server Protocol)</h2>
<p>The Dataverse TDS (Tabular Data Stream) endpoint exposes Dataverse tables through the SQL Server wire protocol, allowing you to connect to Dataverse as if it were a SQL Server database. This enables the use of the SQL Server connector in Power BI, which is more mature and performant than the native Dataverse connector for many scenarios.</p>
<h3>Enabling the TDS Endpoint</h3>
<ol> <li>Navigate to the <strong>Power Platform Admin Center</strong> > <strong>Environments</strong> > select your environment > <strong>Settings</strong> > <strong>Features</strong>.</li> <li>Enable <strong>TDS endpoint</strong>.</li> <li>The endpoint URL follows the pattern: <code>your-org.crm.dynamics.com</code> (use port 5558 if specifying explicitly).</li> </ol>
<h3>Connecting from Power BI</h3>
<ol> <li>In Power BI Desktop, select <strong>Get Data > SQL Server</strong>.</li> <li>Enter the TDS endpoint URL as the server name.</li> <li>Authentication: Use <strong>Azure Active Directory</strong> authentication (organizational account).</li> <li>Browse tables or write SQL queries directly.</li> </ol>
<h3>TDS Endpoint Advantages</h3>
<ul> <li><strong>SQL query support</strong>: Write native T-SQL queries to filter, join, and aggregate data at the source rather than loading raw tables into Power BI. This is significantly more efficient for large datasets.</li> <li><strong>Better query folding</strong>: The SQL Server connector has robust query folding support, so Power Query transformations are more likely to be pushed to the source.</li> <li><strong>Views</strong>: Dataverse views (system and personal) are accessible as SQL views, providing pre-filtered datasets.</li> <li><strong>Familiar tooling</strong>: Database developers can use SQL Server Management Studio (SSMS) or Azure Data Studio to explore and test queries before building Power BI reports.</li> </ul>
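<p>To illustrate the first advantage, here is a sketch of building a T-SQL aggregate to run against the TDS endpoint instead of loading raw rows into Power BI. The <code>opportunity</code>, <code>estimatedvalue</code>, and <code>createdon</code> names follow common Dataverse schema naming but should be verified against your environment:</p>

```python
# Sketch: push the aggregation to the source over the TDS endpoint.
# In real use, parameterize the date rather than embedding it in the string.

def pipeline_by_month_sql(min_date: str) -> str:
    """Build a T-SQL aggregate suitable for the read-only TDS endpoint."""
    return (
        "SELECT YEAR(createdon) AS [Year], MONTH(createdon) AS [Month], "
        "COUNT(*) AS Opportunities, SUM(estimatedvalue) AS PipelineValue "
        "FROM opportunity "
        f"WHERE createdon >= '{min_date}' "
        "GROUP BY YEAR(createdon), MONTH(createdon);"
    )

sql = pipeline_by_month_sql("2024-01-01")
print(sql)
```

<p>You would execute a statement like this through SSMS, Azure Data Studio, or the SQL Server connector in Power BI with Azure AD authentication; because the endpoint is read-only, only SELECT statements of this kind are accepted.</p>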
<h3>TDS Endpoint Limitations</h3>
<ul> <li><strong>Read-only</strong>: The TDS endpoint supports SELECT queries only. No INSERT, UPDATE, DELETE, or DDL operations.</li> <li><strong>Not all data types supported</strong>: Some Dataverse column types (multi-select option sets, file/image columns) may not be available or may require special handling.</li> <li><strong>Throttling</strong>: Subject to the same Dataverse service protection (API throttling) limits as the native connector, so heavy concurrent query loads can still fail.</li> <li><strong>No change tracking</strong>: Unlike the native connector, the TDS endpoint does not natively support incremental refresh via change tracking.</li> </ul>
<h2>Integration Method 3: Synapse Link for Dataverse</h2>
<p>Synapse Link for Dataverse is the enterprise-grade integration method for organizations that need to analyze large Dataverse datasets without impacting operational performance. Synapse Link continuously exports Dataverse data to Azure Synapse Analytics or Microsoft Fabric, creating a read-optimized analytical copy that is completely decoupled from the operational Dataverse environment.</p>
<h3>How Synapse Link Works</h3>
<ol> <li><strong>Initial sync</strong>: When you configure Synapse Link for a Dataverse table, it performs a full export of all rows to the target (Synapse or Fabric lakehouse).</li> <li><strong>Continuous sync</strong>: After initial sync, Synapse Link captures changes (inserts, updates, deletes) from Dataverse and applies them to the analytical store incrementally, typically within minutes.</li> <li><strong>Delta Lake format</strong>: Data is stored in Delta Lake format (Delta Parquet files), providing ACID transactions, time travel, schema evolution, and high-performance analytics.</li> <li><strong>No impact on Dataverse</strong>: Synapse Link reads from the Dataverse change tracking system, not from the operational database, so there is zero impact on Dynamics 365 application performance.</li> </ol>
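<p>The continuous-sync step can be pictured as a keyed upsert/delete merge against the analytical copy. This Python sketch models the semantics only; the field names are illustrative, not the actual Synapse Link change-feed schema:</p>

```python
# Apply a batch of tracked changes (upserts and deletes keyed by row GUID)
# to the analytical copy, mimicking incremental sync semantics.

def apply_changes(store, changes):
    for change in changes:
        key = change["id"]
        if change["op"] == "delete":
            store.pop(key, None)   # remove the row from the analytical copy
        else:                      # insert or update -> upsert by key
            store[key] = change["row"]
    return store

accounts = {"a1": {"name": "Contoso"}, "a2": {"name": "Fabrikam"}}
batch = [
    {"op": "upsert", "id": "a2", "row": {"name": "Fabrikam Inc."}},
    {"op": "delete", "id": "a1"},
    {"op": "upsert", "id": "a3", "row": {"name": "Adventure Works"}},
]
apply_changes(accounts, batch)
print(accounts)
```

<p>Because each change carries the row's unique identifier, the merge is idempotent per key, which is what lets the sync run continuously without full reloads.</p>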
<h3>Synapse Link to Azure Synapse Analytics</h3>
<p>When linked to Synapse, Dataverse data lands in an Azure Data Lake Storage Gen2 account as Delta Parquet files. You can then:</p>
<ul> <li>Query the data using <strong>Synapse Serverless SQL Pool</strong> with standard T-SQL, creating views that Power BI connects to via DirectQuery.</li> <li>Run <strong>Spark notebooks</strong> in Synapse for data transformation, enrichment, and machine learning.</li> <li>Build a <strong>Synapse Dedicated SQL Pool</strong> (data warehouse) for the most demanding analytical workloads.</li> <li>Connect Power BI to any of these compute layers depending on performance and cost requirements.</li> </ul>
<h3>Synapse Link to Microsoft Fabric</h3>
<p>The most modern and streamlined option is linking Dataverse directly to a <a href="/blog/getting-started-microsoft-fabric-2025">Microsoft Fabric</a> lakehouse. Data lands as Delta tables in OneLake, immediately available for:</p>
<ul> <li><strong>Direct Lake</strong> Power BI models: Near-Import performance without scheduled refresh. Data updated in Dataverse is available in Power BI within minutes.</li> <li><strong>Fabric SQL Endpoint</strong>: Query the data using T-SQL without any additional compute provisioning.</li> <li><strong>Fabric notebooks</strong>: Transform and enrich the data using PySpark or Spark SQL.</li> <li><strong>Fabric data pipelines</strong>: Orchestrate additional data movement and transformation.</li> </ul>
<p>The Fabric integration path eliminates the need for a separate Azure Synapse workspace, Azure Storage account, and the associated networking and security configuration—everything runs within the Fabric capacity. For organizations already investing in <a href="/services/microsoft-fabric">Microsoft Fabric</a>, this is the recommended integration path.</p>
<h2>Integration Method 4: Virtual Tables</h2>
<p>Virtual tables (virtual entities) in Dataverse are tables that are mapped to external data sources. Instead of storing data in Dataverse, a virtual table queries the external source in real time when accessed. From a Power BI perspective, virtual tables appear identical to native Dataverse tables—they are accessible through the Dataverse connector, TDS endpoint, and Synapse Link.</p>
<h3>Virtual Table Use Cases for Power BI</h3>
<ul> <li><strong>Federated analytics</strong>: A Dynamics 365 Sales dashboard that combines native Dataverse data (accounts, opportunities) with data from an external SQL Server database (inventory levels, shipping status) via virtual tables, all through a single Dataverse connection in Power BI.</li> <li><strong>Real-time external data</strong>: Virtual tables backed by APIs provide real-time data access. A Power BI report can display Dynamics 365 CRM data alongside live data from external systems (market data feeds, IoT sensor readings) through virtual tables.</li> <li><strong>Simplifying Power BI models</strong>: Instead of creating multiple data connections in Power BI (one for Dataverse, one for SQL Server, one for an API), virtual tables consolidate external data into Dataverse, allowing a single-connection Power BI model.</li> </ul>
<h2>Data Modeling from Dataverse</h2>
<p>Dataverse has a rich and complex schema that requires careful data modeling when building Power BI semantic models. The following patterns address common Dataverse modeling challenges.</p>
<h3>Handling Option Sets (Picklist Columns)</h3>
<p>Dataverse option set columns (choice columns) store an integer value internally and display a label to users. In Power BI, the Dataverse connector exposes both the integer value column and a corresponding label column (suffixed with "@OData.Community.Display.V1.FormattedValue" or shown as a friendly name). Always use the label column for display and the integer column for filtering or sorting to maintain consistency with the Dynamics 365 user experience.</p>
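<p>Conceptually, the connector gives you both halves of each choice column: the stored integer and its display label. A small Python illustration (the rating values and labels below are made up, not a real Dataverse option set):</p>

```python
# Pair an option set's stored integer values with their display labels,
# the way the connector exposes both columns side by side.

RATING_LABELS = {1: "Hot", 2: "Warm", 3: "Cold"}  # hypothetical choice column

def decorate(rows):
    """Add the display label next to the stored integer value."""
    return [
        {**row, "rating_label": RATING_LABELS.get(row["rating"], "Unknown")}
        for row in rows
    ]

leads = decorate([{"name": "Contoso", "rating": 1}, {"name": "Fabrikam", "rating": 9}])
print(leads)
```

<p>Keeping both columns in the model lets you display the label while sorting and filtering on the integer, matching the Dynamics 365 experience.</p>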
<h3>Managing Lookup Columns</h3>
<p>Dataverse uses lookup columns to establish relationships between tables (similar to foreign keys). A lookup column on the Opportunity table might reference the Account table. In Power BI, the Dataverse connector typically exposes the lookup as a GUID column (the unique identifier of the related record), and you must create relationships between tables using these GUID columns. For example, use the <code>accountid</code> column on the Account table and the <code>_accountid_value</code> column on the Opportunity table to establish the relationship.</p>
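<p>The join that this relationship expresses can be sketched in a few lines of Python (the GUIDs and values below are illustrative):</p>

```python
# Opportunity rows carry the parent Account's GUID in _accountid_value;
# relate it to accountid on the Account table, exactly as in the Power BI model.
accounts = [
    {"accountid": "g-100", "name": "Contoso"},
    {"accountid": "g-200", "name": "Fabrikam"},
]
opportunities = [
    {"opportunityid": "o-1", "_accountid_value": "g-100", "estimatedvalue": 50_000},
    {"opportunityid": "o-2", "_accountid_value": "g-200", "estimatedvalue": 75_000},
]

account_by_id = {a["accountid"]: a for a in accounts}
joined = [
    {**opp, "account_name": account_by_id[opp["_accountid_value"]]["name"]}
    for opp in opportunities
]
print(joined)
```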
<h3>Dealing with Polymorphic Lookups</h3>
<p>Some Dataverse lookups are polymorphic—they can reference records from multiple tables. The "Regarding" column on the Activity table can point to an Account, Contact, Opportunity, or Case. In Power BI, polymorphic lookups require special handling: you may need to create separate filtered tables or use DAX to resolve the correct related record based on the entity type column.</p>
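<p>One common handling pattern—separate filtered tables per target entity—looks like this as a Python sketch (the column names mirror the Web API lookup pattern, but the data and the <code>regarding_type</code> discriminator name are illustrative):</p>

```python
# Split Activity rows by the "Regarding" target entity so each slice can
# relate 1:many to the right table (Account, Opportunity, etc.).

activities = [
    {"activityid": "t1", "_regardingobjectid_value": "g-100", "regarding_type": "account"},
    {"activityid": "t2", "_regardingobjectid_value": "o-1", "regarding_type": "opportunity"},
    {"activityid": "t3", "_regardingobjectid_value": "g-200", "regarding_type": "account"},
]

def split_by_regarding(rows):
    """One filtered table per target entity type."""
    buckets = {}
    for row in rows:
        buckets.setdefault(row["regarding_type"], []).append(row)
    return buckets

buckets = split_by_regarding(activities)
print({k: len(v) for k, v in buckets.items()})
```

<p>Each bucket then gets its own ordinary single-table relationship in the Power BI model, sidestepping the ambiguity of the polymorphic column.</p>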
<h3>Flattening the Data Model</h3>
<p>Dataverse's normalized schema can result in many tables with complex join paths. For Power BI star schema modeling:</p>
<ul> <li><strong>Create dimension tables</strong> by denormalizing Dataverse lookup chains. For example, create a "Full Account" dimension that combines Account, Parent Account, Industry, Territory, and Owner information into a single flat table.</li> <li><strong>Create fact tables</strong> by selecting only the transaction/event tables (Opportunities, Cases, Activities, Orders) with their key measures and foreign keys.</li> <li><strong>Remove unnecessary tables</strong>: Dataverse includes many system tables (metadata, security, audit) that are not needed for analytics. Load only the business-relevant tables.</li> </ul>
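<p>The dimension-flattening step above can be sketched as a walk up the lookup chain. The Python below emits one flat <code>DimAccount</code>-style row from illustrative account, territory, and user tables (all names and keys are made up):</p>

```python
# Denormalize an account lookup chain (account -> parent account, territory,
# owner) into a single wide dimension row for a star schema.

accounts = {
    "g-1": {"name": "Contoso", "parent": None, "territory": "t-1", "owner": "u-1"},
    "g-2": {"name": "Contoso West", "parent": "g-1", "territory": "t-1", "owner": "u-2"},
}
territories = {"t-1": "West Region"}
users = {"u-1": "Ada", "u-2": "Grace"}

def dim_account(account_id):
    acct = accounts[account_id]
    parent = accounts.get(acct["parent"]) if acct["parent"] else None
    return {
        "AccountKey": account_id,
        "Account": acct["name"],
        "ParentAccount": parent["name"] if parent else acct["name"],
        "Territory": territories[acct["territory"]],
        "Owner": users[acct["owner"]],
    }

print(dim_account("g-2"))
```

<p>In practice this denormalization runs in Power Query for small datasets or in a lakehouse Silver layer for large ones; the shape of the output row is the same either way.</p>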
<h2>Dynamics 365 Analytics Patterns</h2>
<h3>Dynamics 365 Sales Analytics</h3>
<p>The core Dataverse tables for Sales analytics include: Opportunity, Account, Contact, Lead, Quote, Order, Invoice, Activity (calls, emails, meetings), and Product. Key Power BI patterns:</p>
<ul> <li><strong>Pipeline analysis</strong>: Track opportunity progression through sales stages using the opportunity status and stage columns. Calculate stage conversion rates, average time in stage, and pipeline velocity.</li> <li><strong>Sales forecasting</strong>: Combine historical win rates by segment, deal size, and salesperson with current pipeline data to generate probability-weighted forecasts.</li> <li><strong>Activity correlation</strong>: Analyze the relationship between sales activities (calls, emails, meetings logged in Dataverse) and deal outcomes to identify the activity patterns that lead to closed-won deals.</li> <li><strong>Territory performance</strong>: Use Dataverse territory assignments and account hierarchies to create territory-level performance dashboards that roll up to regional and national views.</li> </ul>
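<p>As an illustration of the pipeline-analysis bullet, this Python sketch computes stage-to-stage conversion rates from a handful of hypothetical opportunities (the stage names are illustrative, not your sales process):</p>

```python
# Of the opportunities that reached a stage, what fraction advanced to the next?

STAGES = ["Qualify", "Develop", "Propose", "Close"]

def conversion_rates(opps):
    reached = {s: 0 for s in STAGES}
    for opp in opps:
        # an opportunity currently at stage i has reached every earlier stage too
        for stage in STAGES[: STAGES.index(opp["stage"]) + 1]:
            reached[stage] += 1
    return {
        f"{a}->{b}": reached[b] / reached[a]
        for a, b in zip(STAGES, STAGES[1:])
        if reached[a]
    }

opps = [{"stage": "Close"}, {"stage": "Develop"}, {"stage": "Qualify"}, {"stage": "Close"}]
print(conversion_rates(opps))
```

<p>The same logic is typically expressed as DAX measures over the opportunity stage history; the counting idea is identical.</p>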
<h3>Dynamics 365 Finance and Operations Analytics</h3>
<p>Finance and Operations (F&O) data follows a different pattern because F&O uses its own database (historically SQL Server-based) rather than Dataverse for core transactional data. However, Microsoft is progressively enabling <strong>dual-write</strong> and <strong>virtual tables</strong> to synchronize F&O data to Dataverse. For Power BI integration:</p>
<ul> <li><strong>Dual-write entities</strong>: Core entities (customers, vendors, products, sales orders, purchase orders) can be synchronized to Dataverse in near-real-time via dual-write, enabling a unified Power BI model that combines CRM and ERP data through Dataverse.</li> <li><strong>Entity Store (deprecated path)</strong>: The legacy approach used Aggregate Measurements in the Entity Store. Microsoft's recommended migration path is to Synapse Link for Finance and Operations, which exports F&O data to Delta Lake in Synapse/Fabric.</li> <li><strong>Synapse Link for Finance and Operations</strong>: Exports F&O tables directly to Synapse/Fabric without going through Dataverse, providing the most complete and performant analytical access to ERP data.</li> </ul>
<h3>Dynamics 365 Customer Service Analytics</h3>
<p>Customer Service analytics leverage the Case (incident), Queue, Knowledge Article, SLA, and Entitlement tables in Dataverse:</p>
<ul> <li><strong>Case resolution analysis</strong>: Track case lifecycle from creation to resolution, analyzing resolution time by priority, category, team, and channel.</li> <li><strong>SLA compliance</strong>: Monitor SLA timer data from Dataverse to calculate SLA compliance rates, identify breaching cases, and analyze patterns that lead to SLA failures.</li> <li><strong>Agent performance</strong>: Combine case assignment, resolution, and customer satisfaction data to create agent-level performance dashboards.</li> <li><strong>Knowledge effectiveness</strong>: Analyze knowledge article usage alongside case resolution to identify which articles effectively reduce resolution time.</li> </ul>
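<p>The SLA-compliance calculation reduces to comparing each case's resolution time against a per-priority target. A Python sketch with hypothetical targets (your SLA definitions in Dataverse will differ):</p>

```python
# Compliance rate = fraction of cases resolved within their priority's SLA target.

SLA_HOURS = {"high": 4, "normal": 24, "low": 72}  # illustrative targets

def sla_compliance(cases):
    met = sum(1 for c in cases if c["hours_to_resolve"] <= SLA_HOURS[c["priority"]])
    return met / len(cases)

cases = [
    {"priority": "high", "hours_to_resolve": 3},
    {"priority": "high", "hours_to_resolve": 6},   # breach
    {"priority": "normal", "hours_to_resolve": 20},
    {"priority": "low", "hours_to_resolve": 40},
]
print(sla_compliance(cases))  # 0.75
```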
<h2>Delta Lake Export and Data Lakehouse Patterns</h2>
<p>Synapse Link for Dataverse exports data in Delta Lake format, which is the foundation for modern lakehouse analytics architectures. The Delta Lake export enables:</p>
<ul> <li><strong>Time travel</strong>: Query Dataverse data as it existed at any point in the past (within the retention period), enabling historical analysis, audit compliance, and point-in-time reporting for regulatory requirements.</li> <li><strong>Schema evolution</strong>: When Dataverse tables are modified (new columns added, column types changed), Delta Lake handles schema evolution gracefully without breaking downstream Power BI reports.</li> <li><strong>Medallion architecture</strong>: Build a Bronze/Silver/Gold data architecture where Bronze = raw Dataverse data from Synapse Link, Silver = cleaned and conformed data with business rules applied, Gold = aggregated and modeled data optimized for Power BI Direct Lake consumption.</li> <li><strong>Cross-source analytics</strong>: Combine Dataverse data with data from other sources (Azure SQL, Snowflake, SAP, Salesforce) in the lakehouse, creating a unified analytical layer that Power BI consumes through Direct Lake or SQL endpoints.</li> </ul>
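<p>Time travel semantics can be modeled as picking the latest version at or before a timestamp. The Python below illustrates the idea only; real queries use Delta's <code>VERSION AS OF</code> / <code>TIMESTAMP AS OF</code> syntax through Spark or SQL, and the version history here is made up:</p>

```python
# Model Delta time travel: versioned snapshots, answered "as of" a timestamp.

versions = [
    (1, "2025-01-01", {"a1": "Contoso"}),
    (2, "2025-02-01", {"a1": "Contoso Ltd."}),
    (3, "2025-03-01", {"a1": "Contoso Ltd.", "a2": "Fabrikam"}),
]

def as_of(timestamp):
    """Return the latest snapshot at or before the given date."""
    snapshot = {}
    for _, ts, data in versions:
        if ts <= timestamp:  # ISO date strings compare chronologically
            snapshot = data
    return snapshot

print(as_of("2025-02-15"))  # the table as it looked in mid-February
```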
<h2>Microsoft Fabric Integration with Dataverse</h2>
<p>Microsoft Fabric represents the future of Dataverse analytics. The integration path is: Dataverse → Synapse Link → Fabric Lakehouse → Direct Lake Power BI model. This eliminates the need for Import refresh schedules, reduces data staleness to minutes, and provides near-Import query performance.</p>
<h3>Setting Up the Fabric Integration</h3>
<ol> <li><strong>Create a Fabric Lakehouse</strong> in your Fabric workspace.</li> <li><strong>Configure Synapse Link</strong> in the Power Platform Admin Center, selecting your Fabric workspace and lakehouse as the target.</li> <li><strong>Select tables</strong> to sync (start with the core tables for your analytics use case).</li> <li><strong>Monitor sync</strong>: Initial sync can take minutes to hours depending on data volume. Incremental sync runs continuously.</li> <li><strong>Build a Direct Lake semantic model</strong> in Power BI that reads from the Fabric lakehouse tables. The model uses Direct Lake storage mode, which loads Delta Parquet column segments into the VertiPaq engine on demand.</li> </ol>
<h3>Performance Optimization for Fabric + Dataverse</h3>
<ul> <li><strong>V-Order optimization</strong>: Fabric automatically applies V-Order optimization to Delta Parquet files for maximum VertiPaq performance. Ensure your Fabric capacity is sized appropriately for the data volume.</li> <li><strong>Partition strategy</strong>: For very large Dataverse tables (millions of rows), consider creating Silver-layer tables in the lakehouse that are partitioned by date or business segment for optimal Direct Lake framing.</li> <li><strong>Monitor Direct Lake fallback</strong>: If Direct Lake queries fall back to DirectQuery mode (due to exceeding framing limits or unsupported DAX patterns), query performance degrades significantly. Use Fabric capacity metrics to monitor fallback events and optimize the model or increase capacity.</li> <li><strong>Semantic model design</strong>: Apply <a href="/blog/power-bi-data-modeling-best-practices">data modeling best practices</a> including star schema design, appropriate cardinality, and optimized <a href="/blog/essential-dax-patterns">DAX patterns</a> for Direct Lake models.</li> </ul>
<h2>Security and Compliance Considerations</h2>
<p>Dataverse has a sophisticated security model (business units, security roles, field-level security, record sharing) that does not automatically translate to Power BI. When data is exported from Dataverse to Synapse/Fabric or imported into Power BI, the Dataverse security model is bypassed—Power BI accesses data through a service account or the report creator's credentials, not the end user's Dataverse permissions.</p>
<h3>Implementing Security in Power BI</h3>
<ul> <li><strong>Row-Level Security (RLS)</strong>: Recreate Dataverse business unit and security role restrictions using Power BI RLS. Map Dataverse business unit hierarchies to RLS role definitions that filter data based on the report user's identity.</li> <li><strong>Object-Level Security (OLS)</strong>: Use OLS to hide sensitive columns (salary data, SSN, medical records) from users who should not see them, mirroring Dataverse field-level security.</li> <li><strong>Service principal access</strong>: Use a dedicated service principal for Synapse Link and Power BI data access, with permissions scoped to only the Dataverse tables required for analytics.</li> <li><strong>Compliance considerations</strong>: For <a href="/industries/healthcare-consulting">healthcare</a> (HIPAA) and <a href="/industries/financial-services-consulting">financial services</a> (SOC 2) organizations, document the data flow from Dataverse through Synapse Link/Fabric to Power BI, including encryption at rest and in transit, access controls at each layer, and audit logging. Our <a href="/services/power-bi-architecture">Power BI architecture</a> team designs compliant integration architectures that satisfy regulatory audit requirements.</li> </ul>
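<p>The RLS mapping-table pattern can be simulated in a few lines. In Power BI the filter is expressed in DAX against <code>USERPRINCIPALNAME()</code>; the Python below just illustrates the mapping logic with made-up users and business units:</p>

```python
# A mapping table ties each user's UPN to the business units they may see;
# fact rows are filtered to those units, mirroring Dataverse BU restrictions.

user_business_units = {
    "ada@contoso.com": {"bu-west"},
    "grace@contoso.com": {"bu-west", "bu-east"},  # manager sees both units
}

opportunities = [
    {"id": "o-1", "business_unit": "bu-west", "value": 10},
    {"id": "o-2", "business_unit": "bu-east", "value": 20},
]

def visible_rows(upn):
    """Rows visible to a user; unknown users see nothing (deny by default)."""
    allowed = user_business_units.get(upn, set())
    return [row for row in opportunities if row["business_unit"] in allowed]

print([r["id"] for r in visible_rows("ada@contoso.com")])  # ['o-1']
```

<p>The deny-by-default behavior for unmapped users is deliberate: a user missing from the mapping table should see no data, not all data.</p>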
<h2>Choosing the Right Integration Method</h2>
<table> <thead><tr><th>Method</th><th>Data Volume</th><th>Freshness</th><th>Performance Impact on Dataverse</th><th>Best For</th></tr></thead> <tbody> <tr><td>Dataverse Connector (Import)</td><td>Under 500K rows</td><td>Scheduled refresh (30 min+)</td><td>Moderate (API calls during refresh)</td><td>Small teams, departmental reports</td></tr> <tr><td>TDS Endpoint</td><td>Under 1M rows</td><td>Real-time (DirectQuery) or scheduled</td><td>Moderate to high</td><td>SQL-savvy teams, ad-hoc analysis</td></tr> <tr><td>Synapse Link to Synapse</td><td>Unlimited</td><td>Near-real-time (minutes)</td><td>None (reads from change tracking)</td><td>Enterprise analytics, data warehouse</td></tr> <tr><td>Synapse Link to Fabric</td><td>Unlimited</td><td>Near-real-time (minutes)</td><td>None</td><td>Modern analytics, Direct Lake models</td></tr> <tr><td>Virtual Tables</td><td>Depends on source</td><td>Real-time</td><td>Minimal</td><td>Federated queries, external data integration</td></tr> </tbody> </table>
<p>For enterprise organizations, the recommended path is Synapse Link to Fabric with Direct Lake Power BI models. This provides the best combination of data freshness, query performance, zero operational impact on Dynamics 365, and unified governance through the Fabric platform.</p>
<p><a href="/contact">Contact EPC Group</a> to discuss your Dataverse integration strategy. Our <a href="/services/power-bi-consulting">Power BI consulting</a> and <a href="/services/microsoft-fabric">Microsoft Fabric consulting</a> teams design, implement, and optimize Dataverse-to-Power BI integration architectures for organizations running Dynamics 365 across healthcare, financial services, government, and other compliance-heavy industries.</p>
Frequently Asked Questions
Should I use the Dataverse connector or Synapse Link for Power BI reporting on Dynamics 365 data?
The choice depends on data volume, freshness requirements, and impact tolerance on Dynamics 365. The native Dataverse connector (Import mode) is appropriate for small datasets under 500,000 rows where scheduled refresh intervals of 30 minutes or more are acceptable and where the API overhead during refresh does not noticeably impact Dynamics 365 performance. For anything larger or more demanding, Synapse Link is the recommended approach. Synapse Link continuously replicates Dataverse data to Azure Synapse Analytics or Microsoft Fabric using change tracking, which has zero impact on Dynamics 365 operational performance. Data arrives in Delta Lake format within minutes of changes in Dataverse. When combined with Direct Lake Power BI models in Fabric, you get near-Import query performance with near-real-time freshness and no scheduled refresh to manage. The total cost of ownership for Synapse Link is higher (Fabric or Synapse capacity required), but for organizations with more than a few hundred thousand rows of Dataverse data or more than a handful of concurrent report users, the performance and reliability benefits justify the investment.
How do I handle Dataverse security (business units, security roles) in Power BI reports?
Dataverse security is not automatically enforced in Power BI. When data is imported into Power BI or accessed through Synapse Link, the Dataverse security model (business units, security roles, team membership, field-level security, record sharing) is bypassed because Power BI accesses data through a service account or the report creator's credentials, not the end user's Dataverse permissions. To enforce equivalent security in Power BI, you must implement Row-Level Security (RLS) that mirrors your Dataverse business unit hierarchy and security role restrictions. This typically involves creating a security mapping table that associates Azure AD user identities (email or UPN) with the business units, teams, or territories they should see, and then creating RLS role definitions that filter fact and dimension tables based on the authenticated user's identity, using the USERPRINCIPALNAME() function matched against the mapping table. For field-level security (hiding sensitive columns), use Object-Level Security (OLS) in Power BI to restrict specific columns or tables to authorized roles. For compliance-regulated industries, document the security model translation from Dataverse to Power BI and validate it during security audits.
What is the best way to model Dataverse data in Power BI using a star schema?
Dataverse uses a normalized relational schema with many tables connected by lookup columns (GUIDs), which does not directly translate to an efficient Power BI star schema. The recommended approach is to create a transformation layer (either in Power Query for small datasets or in a Fabric lakehouse Silver layer for large datasets) that denormalizes Dataverse tables into proper dimension and fact tables. For dimensions, flatten lookup chains into wide dimension tables: combine Account with its parent account, industry, territory, owner, and address information into a single DimAccount table. For facts, select only the transaction tables (Opportunity, Case, Order, Activity) with their key measures and foreign key references to dimensions. Remove the hundreds of system columns (created/modified metadata, versioning, state/status codes beyond what you need) to keep the model lean. Replace Dataverse GUID keys with integer surrogate keys if possible for optimal VertiPaq compression. Handle option set columns by including both the integer value (for sorting) and the display label (for report display) in your dimension tables. This star schema approach typically reduces model size by 60-80% compared to loading raw Dataverse tables and dramatically improves both query performance and report authoring experience.
How does Synapse Link for Dataverse work with Microsoft Fabric and Direct Lake?
Synapse Link for Dataverse can target a Microsoft Fabric lakehouse, continuously replicating Dataverse table data as Delta Lake tables in OneLake. The setup involves configuring Synapse Link in the Power Platform Admin Center, selecting the target Fabric workspace and lakehouse, and choosing which Dataverse tables to replicate. Initial synchronization copies all existing data, and then incremental sync continuously applies inserts, updates, and deletes as they occur in Dataverse, typically with latency of a few minutes. Once the data is in the Fabric lakehouse as Delta tables, you create a Power BI semantic model using Direct Lake storage mode. Direct Lake reads Delta Parquet column segments directly from OneLake into the VertiPaq in-memory engine on demand—no scheduled Import refresh is needed, and no DirectQuery latency is incurred for most queries. The result is near-Import query performance with data freshness measured in minutes rather than hours. This is the recommended integration architecture for enterprise Dynamics 365 analytics because it combines zero operational impact on Dataverse (change tracking based replication), near-real-time freshness, high query performance, and unified governance through the Fabric platform.
Can I combine Dynamics 365 Sales and Finance and Operations data in a single Power BI report?
Yes, but the integration approach differs because Dynamics 365 Sales stores data natively in Dataverse while Finance and Operations (F&O) historically uses its own SQL Server database. There are several approaches to unification. First, dual-write synchronizes core F&O entities (customers, vendors, products, orders) to Dataverse in near-real-time, allowing you to access both CRM and ERP data through a single Dataverse connection. However, dual-write supports only a subset of F&O entities and requires careful configuration to avoid sync conflicts. Second, Synapse Link for Finance and Operations exports F&O tables directly to Azure Synapse or Fabric alongside Synapse Link for Dataverse, putting both CRM and ERP data in the same lakehouse. You then build a unified semantic model that joins CRM dimensions (accounts from Dataverse) with ERP facts (financial transactions from F&O) in a shared star schema. This is the most comprehensive approach for enterprise analytics. Third, for simpler requirements, you can create a Power BI composite model that connects to Dataverse for CRM data (Import mode) and to the F&O database for ERP data (DirectQuery or Import), joining them in the Power BI model. This approach is simpler to set up but has limitations around cross-source relationship performance. The Synapse Link approach (both CRM and ERP data landing in Fabric) is recommended for production enterprise analytics.