
Power BI Composite Models: Combining Import and DirectQuery for Optimal Performance
A comprehensive guide to Power BI composite models covering how to combine Import and DirectQuery storage modes in a single model, aggregation tables, dual storage mode, model chaining, Direct Lake integration, performance optimization, security considerations, and when to use composite models in enterprise deployments.
<h2>What Are Composite Models and Why Do They Matter?</h2>
<p>A composite model in Power BI is a semantic model that contains tables using different storage modes within the same model. Before composite models, you had to choose a single storage mode for your entire model: either <strong>Import mode</strong> (data copied into Power BI's in-memory engine for fast queries) or <strong>DirectQuery mode</strong> (queries sent to the source system in real time). This forced a binary trade-off—Import mode delivered fast performance but required scheduled refreshes and consumed memory, while DirectQuery provided real-time data but depended on source system query performance and imposed DAX limitations.</p>
<p>Composite models eliminate this trade-off by allowing you to combine Import, DirectQuery, and Dual storage modes within a single model. A fact table with billions of rows can use DirectQuery to avoid memory constraints and provide near-real-time data, while dimension tables with thousands of rows use Import mode for fast filtering and slicing. Aggregation tables in Import mode accelerate common queries, with automatic fallback to DirectQuery detail tables when users drill below the aggregation grain. This architecture gives you the performance benefits of Import mode where it matters most and the real-time data access of DirectQuery where freshness is critical.</p>
<p>Our <a href="/services/power-bi-consulting">Power BI consulting</a> team implements composite models for enterprise organizations that need to balance performance, data freshness, dataset size, and source system load. This guide covers everything from foundational concepts to advanced patterns for production deployments.</p>
<h2>Storage Modes Explained</h2>
<p>Understanding the three storage modes is fundamental to designing effective composite models.</p>
<h3>Import Mode</h3>
<p>Import mode loads data into Power BI's VertiPaq columnar engine at refresh time:</p>
<ul> <li><strong>Performance</strong>: Fastest query performance because all data is in-memory with columnar compression, run-length encoding, and hash encoding optimizations</li> <li><strong>Data freshness</strong>: Data is as fresh as the last dataset refresh (scheduled or on-demand). Scheduled refresh supports up to 8 refreshes per day on Power BI Pro and up to 48 per day (30-minute intervals) on Premium, Premium Per User, and Fabric capacities</li> <li><strong>Size limits</strong>: Power BI Pro datasets are limited to 1 GB compressed; Premium Per User allows 100 GB; Premium Per Capacity and Fabric capacities with the large semantic model format support models up to roughly 400 GB on the largest SKUs</li> <li><strong>DAX support</strong>: Full DAX function library is available with no restrictions</li> <li><strong>Source impact</strong>: Data sources are queried only during refresh, not during report interactions—minimizing source system load during business hours</li> </ul>
<h3>DirectQuery Mode</h3>
<p>DirectQuery sends queries to the source system in real time as users interact with reports:</p>
<ul> <li><strong>Performance</strong>: Dependent on source system query performance, network latency, and query complexity. Typically slower than Import mode for analytical queries</li> <li><strong>Data freshness</strong>: Always current—queries retrieve live data from the source</li> <li><strong>Size limits</strong>: No Power BI-side storage limit since data remains at the source</li> <li><strong>DAX support</strong>: Some DAX functions are not supported in DirectQuery mode because they cannot be translated to the source system's native query language. Functions like RANKX, complex iterators, and certain time intelligence patterns may be restricted</li> <li><strong>Source impact</strong>: Every report interaction generates queries against the source system. High-concurrency report usage can create significant load on the data source</li> </ul>
<h3>Dual Storage Mode</h3>
<p>Dual mode is unique to composite models and is the key to making Import-DirectQuery joins efficient:</p>
<ul> <li><strong>Behavior</strong>: Tables set to Dual mode act as Import when joined with other Import tables (fast in-memory joins) and act as DirectQuery when joined with DirectQuery tables (the join is pushed to the source system)</li> <li><strong>Best use case</strong>: Dimension tables that need to join with both Import and DirectQuery fact tables. Setting dimensions to Dual ensures that regardless of which fact table is being queried, the join is handled optimally</li> <li><strong>Storage</strong>: Dual mode tables are cached in memory (like Import) and also accessible via DirectQuery. They consume memory equivalent to an Import table</li> <li><strong>Refresh</strong>: Dual mode tables are refreshed during dataset refresh, just like Import tables</li> </ul>
<h2>When to Use Composite Models</h2>
<p>Composite models are the right choice in specific scenarios. Understanding when to use them—and when simpler approaches are sufficient—prevents unnecessary architectural complexity.</p>
<h3>Use Composite Models When:</h3>
<ul> <li><strong>Dataset size exceeds Import limits</strong>: Your fact table has billions of rows that would exceed memory or storage limits in Import mode, but your dimension tables are small enough to import</li> <li><strong>Real-time data is required for some tables</strong>: Some tables need live data (current inventory levels, real-time transaction feeds, live pricing) while others are fine with periodic refresh (product catalog, customer master, geographic hierarchy)</li> <li><strong>Aggregation tables accelerate common queries</strong>: You can pre-aggregate data at a higher grain (daily instead of transactional, category instead of SKU) in Import mode for fast dashboard performance, with drill-through to DirectQuery detail</li> <li><strong>Combining data from multiple sources</strong>: You need to join data from a high-performance data warehouse (via DirectQuery) with supplemental data from Excel files, SharePoint lists, or small databases (via Import)</li> <li><strong>Chaining models for enterprise semantics</strong>: You want to extend a shared organizational semantic model with department-specific tables without duplicating the entire model</li> </ul>
<h3>Do NOT Use Composite Models When:</h3>
<ul> <li><strong>Data fits comfortably in Import mode</strong>: If your data fits within Import mode limits and refresh frequency meets requirements, pure Import mode is simpler and faster</li> <li><strong>All data comes from one fast source</strong>: If you need real-time data from a single well-optimized source, pure DirectQuery may be simpler</li> <li><strong>Direct Lake is available</strong>: If your data is in Microsoft Fabric OneLake, Direct Lake mode often provides a better alternative to traditional composite models (discussed below)</li> </ul>
<h2>Building a Composite Model: Step by Step</h2>
<h3>Step 1: Connect to Your Primary Data Source</h3>
<p>In Power BI Desktop, connect to your primary data source. If it is a large data warehouse, choose DirectQuery as the connection mode. This becomes the initial storage mode for all tables from this source.</p>
<h3>Step 2: Add a Second Data Source</h3>
<p>When you add a second data source with a different storage mode (for example, importing an Excel file into a DirectQuery model), Power BI prompts you to create a composite model. Accept the prompt. The model now contains tables with mixed storage modes.</p>
<h3>Step 3: Set Storage Modes for Each Table</h3>
<p>In the Model view, select each table and set its storage mode in the Properties pane:</p>
<ul> <li>Set <strong>large fact tables</strong> to DirectQuery (or leave them as DirectQuery if that was the initial connection mode)</li> <li>Set <strong>dimension tables</strong> to Dual mode so they work efficiently with both Import and DirectQuery fact tables</li> <li>Set <strong>aggregation tables</strong> and <strong>supplemental tables</strong> to Import mode for maximum query performance</li> </ul>
<p><strong>Important</strong>: Storage mode changes are one-directional. You can change a table from DirectQuery to Dual or Import, and from Dual to Import, but once a table is in Import mode you cannot switch it back to DirectQuery or Dual. Plan your storage modes before building extensive DAX calculations.</p>
<h3>Step 4: Create Relationships</h3>
<p>Create relationships between tables as you normally would. In composite models, the relationship storage mode depends on the tables involved:</p>
<table> <thead><tr><th>Table A Mode</th><th>Table B Mode</th><th>Relationship Behavior</th></tr></thead> <tbody> <tr><td>Import</td><td>Import</td><td>In-memory join (fast)</td></tr> <tr><td>DirectQuery</td><td>DirectQuery (same source)</td><td>Source-side join (single query pushed to source)</td></tr> <tr><td>DirectQuery</td><td>DirectQuery (different source)</td><td>Limited relationship (cross-source join handled by Power BI engine—slower)</td></tr> <tr><td>Import</td><td>DirectQuery</td><td>Limited relationship (Power BI filters DirectQuery table using Import table values—use Dual mode to avoid this)</td></tr> <tr><td>Dual</td><td>DirectQuery</td><td>Source-side join when querying DirectQuery facts (optimal)</td></tr> <tr><td>Dual</td><td>Import</td><td>In-memory join when querying Import facts (optimal)</td></tr> </tbody> </table>
<p>The key takeaway is: <strong>set shared dimension tables to Dual mode</strong> to avoid limited relationships, which degrade query performance because Power BI must coordinate data between in-memory and source-side engines.</p>
<h3>Step 5: Validate and Test</h3>
<p>Use <a href="/blog/power-bi-external-tools">external tools</a> like DAX Studio to inspect storage modes and test query performance:</p>
<ul> <li>Verify that queries hitting Import/Dual tables use the VertiPaq storage engine (fast)</li> <li>Verify that queries hitting DirectQuery tables generate efficient source queries</li> <li>Check for unexpected limited relationships that force cross-engine joins</li> <li>Test with realistic data volumes and concurrent user loads</li> </ul>
<h2>Aggregation Tables: The Performance Multiplier</h2>
<p>Aggregation tables are the most powerful feature of composite models for enterprise analytics. They provide Import-mode performance for common dashboard queries while preserving the ability to access detail-level data through DirectQuery.</p>
<h3>How Aggregations Work</h3>
<ol> <li>You create a summary table in Import mode that contains pre-aggregated data at a higher grain than the DirectQuery detail table. For example, if the detail table has individual sales transactions, the aggregation table might summarize sales by date, product category, and region.</li> <li>You configure <strong>aggregation mappings</strong> in Power BI that link the aggregation table columns to the detail table columns, specifying the aggregation function (Sum, Count, Min, Max, GroupBy).</li> <li>You <strong>hide</strong> the aggregation table from report view—users never interact with it directly.</li> <li>When a user creates a visual, Power BI's query engine automatically determines whether the aggregation table can satisfy the query. If the query's grouping columns and aggregation functions match the aggregation mappings, Power BI retrieves data from the fast Import-mode aggregation table. If the query requires finer granularity than the aggregation provides (drilling to individual transactions, filtering by a column not in the aggregation), Power BI falls back to the DirectQuery detail table.</li> </ol>
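<p>The redirection in step 4 is transparent at the DAX level: a measure defined over the detail table needs no special syntax. A minimal sketch, with illustrative table and column names rather than a specific model:</p>
<pre><code class="language-dax">-- Defined against the DirectQuery detail table. When a visual groups
-- only by columns covered in the aggregation mappings, the engine
-- answers this from the hidden Import aggregation table; otherwise
-- it falls back to a source query against the detail table.
Total Sales = SUM ( Sales[SalesAmount] )
</code></pre>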
<h3>Designing Effective Aggregation Tables</h3>
<p>The goal is to ensure that the vast majority of dashboard queries (typically 80-95%) are served from the aggregation table, with DirectQuery fallback only for ad-hoc drill-through analysis:</p>
<ul> <li><strong>Analyze query patterns</strong>: Use <a href="/blog/power-bi-monitoring-alerting-admin-best-practices-2026">Power BI monitoring</a> and DAX Studio to identify the most common query patterns—which columns are users grouping by? Which measures are most frequently calculated?</li> <li><strong>Choose aggregation grain</strong>: The aggregation grain must be coarser than the detail table but fine enough to answer most dashboard questions. Common patterns include daily aggregation (for transactional data), weekly or monthly for slower-moving metrics, and aggregation by dimension hierarchy level (category instead of product, region instead of store)</li> <li><strong>Include all commonly filtered dimensions</strong>: If users frequently filter by date, region, product category, and customer segment, include all four as GroupBy columns in the aggregation table</li> <li><strong>Pre-calculate frequently used measures</strong>: Sum of revenue, count of transactions, sum of cost, count of distinct customers—pre-aggregate the measures that appear in 80%+ of visuals</li> </ul>
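<p>As a sketch of the grain decision, a daily-by-category-by-region aggregation like the one described above has the shape of the following DAX query. Table names are hypothetical, and in practice the aggregation table is usually materialized in the source system or Power Query rather than written as a query:</p>
<pre><code class="language-dax">-- Shape of a daily aggregation table: GroupBy columns plus
-- pre-aggregated measures covering the most common visuals
EVALUATE
SUMMARIZECOLUMNS (
    'Date'[Date],
    'Product'[Category],
    'Geography'[Region],
    "Total Revenue", SUM ( Sales[SalesAmount] ),
    "Transaction Count", COUNTROWS ( Sales )
)
</code></pre>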
<h3>Aggregation Table DAX Patterns</h3>
<p>When writing DAX measures for reports that use aggregation tables, ensure measures use simple aggregation functions (SUM, COUNT, MIN, MAX, AVERAGE, DISTINCTCOUNT) over the detail table columns. Power BI can automatically redirect these to the aggregation table when the grain matches. Complex calculated columns, row-level calculations, or iterators that require row context on the detail table will force DirectQuery fallback. Structure your <a href="/blog/essential-dax-patterns">DAX patterns</a> to maximize aggregation hits:</p>
<ul> <li>Use SUM(Sales[Amount]) rather than SUMX(Sales, Sales[Quantity] * Sales[UnitPrice]) when possible—pre-calculate the Amount column in the data source</li> <li>Avoid FILTER() on detail columns that are not in the aggregation GroupBy—use slicer-based filtering instead</li> <li>Use <a href="/blog/power-bi-calculation-groups-advanced-patterns-2026">calculation groups</a> for time intelligence that works across both aggregation and detail levels</li> </ul>
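<p>The first bullet can be made concrete with a pair of measures (hypothetical columns):</p>
<pre><code class="language-dax">-- Hits the aggregation: a simple SUM over a column that has a
-- Sum mapping in the aggregation table
Total Revenue = SUM ( Sales[Amount] )

-- Forces DirectQuery fallback: the iterator needs row context on
-- the detail table, which no aggregation mapping can satisfy
Total Revenue (Detail) = SUMX ( Sales, Sales[Quantity] * Sales[UnitPrice] )
</code></pre>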
<h2>Chaining Models (Composite Models on Shared Datasets)</h2>
<p>Power BI supports creating composite models that chain to published semantic models in the Power BI Service. This enables a "hub and spoke" enterprise architecture:</p>
<h3>How Chaining Works</h3>
<ol> <li>A central data team publishes a certified <a href="/blog/power-bi-semantic-model-best-practices-datasets-2026">shared semantic model</a> containing the core organizational data model—star schema dimensions and facts, approved measures, and RLS rules</li> <li>Department teams create new Power BI Desktop files and connect to the published model using the "Power BI datasets" connector, which establishes a <strong>DirectQuery connection</strong> to the published model</li> <li>The department team adds their own tables (imported from local sources—departmental Excel files, specialized databases, survey data) to the model</li> <li>The department creates relationships between their local Import tables and the shared model's tables</li> <li>The result is a composite model that combines the enterprise shared model (via DirectQuery) with department-specific data (via Import)</li> </ol>
<h3>Benefits of Chaining</h3>
<ul> <li><strong>Single source of truth</strong>: All departments build on the same core data model with consistent dimension values, measure definitions, and business logic</li> <li><strong>Departmental flexibility</strong>: Teams can augment the shared model with their own data without modifying the central model or waiting for the central team to add their tables</li> <li><strong>RLS inheritance</strong>: <a href="/blog/power-bi-row-level-security">Row-level security</a> defined on the shared model is enforced in the chained model—department users see only the data they are authorized to access in the shared model</li> <li><strong>Reduced duplication</strong>: Instead of each department importing a copy of the same enterprise data, all chain through DirectQuery to the single published model</li> </ul>
<h3>Chaining Limitations</h3>
<ul> <li><strong>Performance</strong>: Queries against the shared model go through DirectQuery, which adds latency compared to local Import mode. Even when the shared model is itself Import mode, every query must round-trip over the network to the VertiPaq engine running in the Power BI service</li> <li><strong>Cannot modify shared model objects</strong>: You cannot rename, reorganize, or modify measures from the shared model in the chained model. You can only add new tables, relationships, and measures</li> <li><strong>Licensing</strong>: Chaining is available with Power BI Premium Per User, Premium Per Capacity, or Fabric capacity licenses</li> </ul>
<h2>Direct Lake and Composite Models in Microsoft Fabric</h2>
<p>Microsoft Fabric introduces <a href="/blog/power-bi-direct-lake-mode-fabric-guide-2026">Direct Lake mode</a>, which changes the composite model landscape significantly. Direct Lake reads Delta Parquet files directly from OneLake into the VertiPaq engine—no Import refresh needed and no DirectQuery source system queries required.</p>
<h3>Direct Lake as a Composite Model Alternative</h3>
<p>For organizations with data in Fabric OneLake, Direct Lake often replaces traditional Import/DirectQuery composite models:</p>
<ul> <li><strong>Performance</strong>: Direct Lake provides near-Import-mode performance because data is loaded into VertiPaq on demand from optimized Delta Parquet files</li> <li><strong>Data freshness</strong>: Direct Lake reflects data as soon as it is written to the Lakehouse Delta tables—no scheduled refresh required. If a Fabric pipeline updates a table at 10:15 AM, a Power BI report querying that table at 10:16 AM shows the updated data</li> <li><strong>No memory preloading</strong>: Unlike Import mode, Direct Lake loads column segments on demand rather than loading the entire dataset into memory at refresh time. This enables much larger effective dataset sizes</li> </ul>
<h3>Composite Models with Direct Lake</h3>
<p>You can create composite models that combine Direct Lake tables with Import tables and DirectQuery tables from non-Fabric sources:</p>
<ul> <li><strong>Direct Lake + Import</strong>: Core data from Fabric Lakehouse (Direct Lake) combined with supplemental reference data imported from external sources</li> <li><strong>Direct Lake + DirectQuery</strong>: Fabric Lakehouse data combined with real-time data from an operational database via DirectQuery</li> <li><strong>Chaining to Direct Lake models</strong>: Department teams can chain to a published Direct Lake model and add their own Import tables, getting the same benefits as chaining to Import or DirectQuery models</li> </ul>
<p>The key consideration is that Direct Lake tables fall back to DirectQuery under certain conditions: the model exceeds the capacity's Direct Lake guardrails (row counts, model size), the table is built on a SQL analytics endpoint view rather than a Delta table, or certain DAX patterns force row-level evaluation. Monitor fallback events using <a href="/blog/fabric-monitoring-hub">Fabric Monitoring Hub</a> and optimize your Lakehouse tables (V-Order, file compaction, Z-Order on filter columns) to minimize fallbacks.</p>
<h2>Performance Optimization for Composite Models</h2>
<p>Composite model performance requires attention to both the Import-mode and DirectQuery-mode components of the model.</p>
<h3>Import-Mode Optimization</h3>
<ul> <li><strong>Compression</strong>: Follow <a href="/blog/power-bi-performance-optimization">standard Import mode optimization</a>—reduce cardinality of text columns, remove unnecessary columns, use appropriate data types</li> <li><strong>Aggregation coverage</strong>: Maximize the percentage of queries served from aggregation tables by monitoring aggregation hit rates with DAX Studio or Fabric capacity metrics</li> <li><strong>Incremental refresh</strong>: Configure <a href="/blog/power-bi-incremental-refresh-data-partitioning-guide-2026">incremental refresh</a> on Import and Dual tables to minimize refresh duration and resource consumption</li> </ul>
<h3>DirectQuery Optimization</h3>
<ul> <li><strong>Source system indexing</strong>: Ensure the source database has indexes on columns used in relationships, filters, and aggregations in the Power BI model</li> <li><strong>Reduce query count</strong>: Each visual in a Power BI report generates one or more DirectQuery queries. Reduce the number of visuals per page, use <a href="/blog/power-bi-bookmarks">bookmarks</a> for progressive disclosure, and avoid pages crowded with card visuals, since each card issues its own query</li> <li><strong>Query reduction settings</strong>: Enable "Reduce the number of queries sent" in Power BI Desktop to batch slicer changes rather than sending a query for each slicer interaction</li> <li><strong>Auto page refresh limits</strong>: Set reasonable auto-refresh intervals, respecting the minimum interval configured by your capacity admin, to avoid overwhelming the source with continuous queries</li> </ul>
<h3>Cross-Source Join Optimization</h3>
<p>When relationships cross storage modes (Import to DirectQuery), Power BI must coordinate data between engines:</p>
<ul> <li><strong>Use Dual mode dimensions</strong>: As described above, Dual mode dimensions avoid limited relationships by adapting to the storage mode of the joined fact table</li> <li><strong>Minimize cross-source joins</strong>: Design the model so that most joins occur within the same storage mode. Avoid having a DirectQuery fact table join to an Import lookup table that joins to another DirectQuery table from a different source</li> <li><strong>Limit the data sent across engines</strong>: When a limited relationship exists, Power BI sends the distinct values from the Import side to the DirectQuery source as a filter. If the Import table has high cardinality (millions of distinct values), this filter transfer degrades performance significantly</li> </ul>
<h2>Security Considerations</h2>
<p>Composite models introduce security considerations that differ from pure Import or pure DirectQuery models.</p>
<h3>Row-Level Security (RLS) in Composite Models</h3>
<ul> <li><strong>Import tables</strong>: RLS is enforced by the VertiPaq engine in-memory, as with any Import model. RLS filters dimension tables, which restrict visible rows in related fact tables through relationship filtering</li> <li><strong>DirectQuery tables</strong>: RLS filters are translated to the source system's native query language and applied at the source. This means the source system only returns rows the user is authorized to see</li> <li><strong>Cross-mode RLS</strong>: When an RLS role filters a Dual-mode dimension table, the filter is applied both in-memory (for Import fact queries) and translated to the source (for DirectQuery fact queries). This works correctly but requires testing to confirm that the DAX filter expression produces the expected SQL translation</li> </ul>
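<p>A minimal RLS filter expression on a Dual-mode dimension, assuming a hypothetical column that stores each row's owner email, could look like:</p>
<pre><code class="language-dax">-- Role filter on a 'Region' dimension (hypothetical schema).
-- For Import fact queries VertiPaq evaluates this in memory; for
-- DirectQuery fact queries it is translated into the source query
-- as a WHERE condition.
[SalesManagerEmail] = USERPRINCIPALNAME ()
</code></pre>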
<h3>Single Sign-On (SSO) for DirectQuery</h3>
<p>DirectQuery tables in a composite model can use SSO to pass the Power BI user's Azure AD identity to the source system. This enables source-level security (database-level permissions, row-level filtering in the source) in addition to Power BI RLS. SSO is supported for Azure SQL Database, Azure Synapse Analytics, SQL Server (via gateway), Snowflake, Databricks, and other sources. Configure SSO in the dataset settings under the gateway or cloud connection configuration. Our <a href="/services/power-bi-consulting">Power BI consulting</a> team configures SSO for <a href="/blog/power-bi-security-best-practices-enterprise-2026">enterprise security architectures</a> that require end-to-end identity passthrough.</p>
<h3>Object-Level Security (OLS)</h3>
<p>Object-level security can be applied to composite models to hide specific tables, columns, or measures from certain user roles. OLS definitions are enforced on the Power BI side regardless of the table's storage mode, providing consistent column-level access control across Import and DirectQuery tables.</p>
<h2>Limitations and Constraints</h2>
<p>Understanding composite model limitations prevents design decisions that cause problems in production:</p>
<ul> <li><strong>Cannot publish to My Workspace</strong>: Composite models using DirectQuery to Power BI datasets (chaining) cannot be published to My Workspace—they require a Premium or Fabric workspace</li> <li><strong>Auto date/time disabled</strong>: The auto date/time feature is disabled in composite models. You must create explicit date tables, which is a <a href="/blog/power-bi-star-schema">best practice anyway for enterprise models</a></li> <li><strong>Many-to-many relationships</strong>: Composite models support many-to-many relationships, but cross-source many-to-many relationships (Import to DirectQuery) can produce unexpected results and should be tested thoroughly</li> <li><strong>Calculated columns on DirectQuery tables</strong>: You cannot add DAX calculated columns to DirectQuery tables—calculations must be done in the source or in measures</li> <li><strong>Source system compatibility</strong>: Not all DirectQuery sources support all SQL constructs that Power BI may generate. Complex DAX measures translated to SQL may produce errors or timeouts on some sources. Test thoroughly with your specific source system</li> <li><strong>Refresh behavior</strong>: Refreshing a composite model refreshes only the Import and Dual mode tables. DirectQuery tables are never refreshed because they always query the source live</li> </ul>
<h2>Enterprise Composite Model Patterns</h2>
<p>These patterns represent common enterprise implementations that our <a href="/services/data-analytics">data analytics</a> team deploys for large organizations.</p>
<h3>Pattern 1: Aggregation-Accelerated Data Warehouse</h3>
<p>The most common pattern: a large data warehouse connected via DirectQuery, with Import-mode aggregation tables providing dashboard performance:</p>
<ul> <li><strong>DirectQuery</strong>: Fact tables (billions of rows) connected to a <a href="/blog/connect-azure-synapse-power-bi-integration-guide-2026">Azure Synapse</a>, <a href="/blog/connect-snowflake-power-bi-integration-guide-2026">Snowflake</a>, or <a href="/blog/connect-databricks-power-bi-integration-guide-2026">Databricks</a> data warehouse</li> <li><strong>Dual</strong>: All dimension tables (date, product, customer, geography) set to Dual mode</li> <li><strong>Import</strong>: Aggregation tables at daily/weekly grain covering 80%+ of dashboard queries, plus any supplemental reference data</li> </ul>
<h3>Pattern 2: Real-Time Operational + Historical Analytics</h3>
<p>Combining near-real-time operational data with historical trends:</p>
<ul> <li><strong>DirectQuery</strong>: Operational database (Azure SQL, SQL Server) providing current-day transactions, live inventory, real-time order status</li> <li><strong>Import</strong>: Historical data warehouse tables (prior periods) providing trend analysis, year-over-year comparisons, and historical benchmarks</li> <li><strong>Dual</strong>: Shared dimensions that join to both current and historical tables</li> </ul>
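<p>Under this pattern, a combined measure can stitch the two fact tables into one continuous metric. The table names here are hypothetical, and the split assumes the historical Import table excludes the current day:</p>
<pre><code class="language-dax">-- Prior periods come from the Import table, today from DirectQuery
Total Sales (All Periods) =
    SUM ( 'Sales History'[Amount] )       -- Import: through yesterday
        + SUM ( 'Sales Current'[Amount] ) -- DirectQuery: today
</code></pre>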
<h3>Pattern 3: Enterprise Shared Model Extension</h3>
<p>Department teams extending the central semantic model:</p>
<ul> <li><strong>DirectQuery (chained)</strong>: Central shared semantic model with enterprise dimensions, core facts, and approved measures</li> <li><strong>Import</strong>: Department-specific tables (survey results, planning spreadsheets, specialized external data) joined to shared model dimensions</li> <li><strong>Custom measures</strong>: Department-specific DAX measures that reference both shared model columns and local Import table columns</li> </ul>
<h3>Pattern 4: Fabric Direct Lake + External Sources</h3>
<p>Organizations with Fabric as their primary analytics platform, supplemented by external data:</p>
<ul> <li><strong>Direct Lake</strong>: Core fact and dimension tables in Fabric Lakehouse, benefiting from near-Import performance without scheduled refresh</li> <li><strong>DirectQuery</strong>: Real-time operational data from an external OLTP database that has not been migrated to Fabric</li> <li><strong>Import</strong>: Small reference tables from external sources (industry benchmarks, mapping tables, configuration data)</li> </ul>
<h2>Monitoring and Troubleshooting Composite Models</h2>
<p>Composite models require more sophisticated monitoring than single-mode models because performance issues can originate from either the Import or DirectQuery side.</p>
<h3>Key Monitoring Metrics</h3>
<ul> <li><strong>Aggregation hit rate</strong>: Use DAX Studio or the DirectQuery diagnostics to measure what percentage of queries are served from aggregation tables vs. falling back to DirectQuery. Target 80-95% aggregation hit rate for dashboard workloads</li> <li><strong>DirectQuery query duration</strong>: Monitor the duration of DirectQuery queries sent to the source system. Queries exceeding your performance threshold indicate the need for source-side optimization (indexing, materialized views, query tuning)</li> <li><strong>Limited relationship warnings</strong>: In DAX Studio, check for limited relationships that force cross-engine joins—these are the most common performance bottleneck in composite models</li> <li><strong>Direct Lake fallback events</strong>: In Fabric, monitor for Direct Lake fallbacks to DirectQuery using the Fabric Monitoring Hub and <a href="/blog/fabric-capacity-metrics">capacity metrics</a></li> </ul>
<h3>Troubleshooting Common Issues</h3>
<ul> <li><strong>Slow dashboard load</strong>: Check whether slow visuals are hitting DirectQuery or Import. If DirectQuery, optimize the source query or improve aggregation coverage. If Import, follow standard <a href="/blog/fix-slow-power-bi-reports-performance-optimization-2026">performance optimization</a> techniques</li> <li><strong>Unexpected query results</strong>: Cross-source joins with limited relationships can produce unexpected results when filtering semantics differ between sources. Test filter propagation across storage mode boundaries</li> <li><strong>Refresh failures</strong>: Import and Dual tables refresh independently from DirectQuery tables. A refresh failure affects only the Import/Dual portion—DirectQuery tables continue to serve live data with potentially stale dimension values</li> </ul>
<p><a href="/contact">Contact EPC Group</a> to discuss your composite model requirements. Our <a href="/services/power-bi-consulting">Power BI consulting</a> and <a href="/services/power-bi-architecture">Power BI architecture</a> teams design, implement, and optimize composite models for enterprise organizations that need to balance query performance, data freshness, dataset scale, and source system impact across their Power BI deployments.</p>
Frequently Asked Questions
What is the difference between Import mode, DirectQuery mode, and Dual mode in Power BI?
Import mode loads data into Power BI's in-memory columnar engine (VertiPaq) during scheduled refresh, providing the fastest query performance but with data only as current as the last refresh. DirectQuery sends queries to the source database in real time every time a user interacts with a report, providing always-current data but with performance dependent on the source system's speed and with some DAX function limitations. Dual mode is exclusive to composite models and allows a table to behave as Import when joined with Import tables and as DirectQuery when joined with DirectQuery tables. Dual is the recommended storage mode for dimension tables in composite models because it ensures optimal join performance regardless of which fact table (Import or DirectQuery) is being queried. A Dual-mode table is cached in memory like an Import table and refreshed during scheduled refresh, but it can also be accessed via DirectQuery when the query engine needs to push a join to the source system. The practical recommendation is: set large fact tables to DirectQuery (or Direct Lake in Fabric), set shared dimension tables to Dual, and set aggregation tables and small supplemental tables to Import.
How do aggregation tables work in Power BI composite models?
Aggregation tables are pre-summarized Import-mode tables that accelerate query performance in composite models. You create a table that contains data aggregated to a higher grain than your DirectQuery detail table—for example, daily sales totals by product category and region instead of individual transactions. In Power BI Desktop, you configure aggregation mappings that tell the query engine how each column in the aggregation table corresponds to a column in the detail table (using Sum, Count, Min, Max, GroupBy, or CountTableRows functions). The aggregation table is then hidden from report view so users never see or interact with it directly. When a user creates a visual, the Power BI query engine automatically checks whether the query can be answered from the aggregation table. If the visual groups by columns that exist in the aggregation table and uses aggregation functions that match the mappings, data is retrieved from the fast Import-mode aggregation table. If the visual requires finer granularity (drilling to individual transactions, filtering by a column not in the aggregation), the engine falls back to the DirectQuery detail table. This is transparent to the report user—they see the same measures and columns regardless of whether the aggregation or detail table is serving the query. The key to effective aggregation design is ensuring that 80-95 percent of typical dashboard queries match the aggregation grain, with DirectQuery fallback only for ad-hoc exploration and drill-through.
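Report authors write measures against the detail table only; the routing decision between aggregation and detail happens per query inside the engine. A hedged DAX sketch of this behavior, with all table and column names illustrative rather than taken from this guide:

```dax
-- Measure defined against the DirectQuery detail table (FactSales).
Total Sales = SUM ( FactSales[SalesAmount] )

-- If a visual groups by DimDate[Year] and DimProduct[Category], and a
-- Sum mapping exists from Sales_Agg[SalesAmount_Sum] (Import) to
-- FactSales[SalesAmount], the engine answers from Sales_Agg.
-- Grouping by FactSales[TransactionID] breaks the match, so the very
-- same measure falls back to the DirectQuery detail table.
```

The measure definition never references the aggregation table, which is why the aggregation can be added, re-grained, or removed later without touching report-layer DAX.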
What is model chaining in Power BI and how does it support enterprise analytics?
Model chaining (also called composite models on Power BI datasets) allows you to create a new Power BI model that connects to an existing published semantic model via DirectQuery and adds additional local tables. This enables a hub-and-spoke architecture where a central data team publishes a certified shared semantic model containing the core organizational data (enterprise dimensions, fact tables, approved measures, and row-level security rules), and department teams create chained models that reference the shared model and augment it with their own department-specific data. The chained model establishes a live DirectQuery connection to the published model, so any updates to the central model (new measures, updated dimension values, refreshed fact data) are immediately available in all chained models without requiring republishing. Department teams can add their own Import tables (from Excel, SharePoint, specialized databases), create relationships between their local tables and the shared model tables, and write new DAX measures that reference both shared and local data. RLS defined on the shared model is enforced in the chained model, maintaining security. Chaining requires Power BI Premium Per User (PPU), Premium capacity, or Fabric capacity licensing. The main limitations are that you cannot modify objects from the shared model (no renaming, no changing measures), and queries against the shared model go through DirectQuery, which adds latency compared to a local Import model.
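In a chained model, local DAX can reference remote and local tables as if they were one model. A hypothetical example combining a shared fact table with a department's own budget data — table, column, and measure names are assumptions for illustration:

```dax
-- 'Sales' arrives via DirectQuery from the shared semantic model;
-- 'Dept Budget' is a local Import table added by the department team.
Budget Variance =
SUM ( Sales[SalesAmount] ) - SUM ( 'Dept Budget'[BudgetAmount] )

-- Variance as a share of budget; DIVIDE guards against divide-by-zero.
Budget Variance % =
DIVIDE ( [Budget Variance], SUM ( 'Dept Budget'[BudgetAmount] ) )
```

A relationship between 'Dept Budget' and a shared dimension (for example, a date or cost-center table) is what lets both measures slice consistently, and such cross-model relationships are created as limited relationships.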
How does Direct Lake mode in Microsoft Fabric relate to composite models?
Direct Lake is a storage mode introduced with Microsoft Fabric that reads Delta Parquet files directly from OneLake into the VertiPaq in-memory engine on demand, without requiring a scheduled Import refresh and without sending queries to an external source system like DirectQuery. Direct Lake provides near-Import-mode query performance with near-DirectQuery data freshness—when data is updated in a Fabric Lakehouse table, it is available in Power BI within seconds without triggering a refresh. Direct Lake can participate in composite models: you can combine Direct Lake tables from a Fabric Lakehouse with Import tables from external sources and DirectQuery tables from external databases in a single model. This is useful when your primary data platform is Fabric but you need to incorporate data from systems not yet migrated to Fabric. Direct Lake often reduces or eliminates the need for traditional Import/DirectQuery composite models because it provides the performance benefits of Import (data loaded into VertiPaq) without the freshness limitations (no scheduled refresh needed) and without the size constraints of traditional Import (column segments are loaded on demand rather than preloading the entire dataset). However, Direct Lake has its own considerations: tables must be stored in OneLake as Delta Parquet (V-Ordering is recommended for best performance), certain DAX patterns or exceeding capacity guardrails can cause fallback to DirectQuery mode, and monitoring for fallback events is essential. For organizations fully on Fabric, pure Direct Lake models may replace many composite model use cases.
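One way to surface fallback during testing is the model-level DirectLakeBehavior property: setting it to DirectLakeOnly makes queries fail outright instead of silently falling back to DirectQuery, which makes fallback-triggering visuals easy to find. A hypothetical TMSL fragment showing the setting (the model wrapper is abbreviated; valid values are automatic, directLakeOnly, and directQueryOnly):

```json
{
  "model": {
    "directLakeBehavior": "directLakeOnly"
  }
}
```

Switch the property back to automatic for production once problem queries have been identified and redesigned, so ad-hoc queries that legitimately exceed Direct Lake limits can still be served via DirectQuery.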
What are the security implications of using composite models in enterprise Power BI deployments?
Composite models introduce several security considerations beyond standard Import or DirectQuery models. Row-level security (RLS) works across storage modes but behaves differently: on Import tables, RLS filters are applied in-memory by VertiPaq; on DirectQuery tables, RLS DAX expressions are translated to the source system's native query language (SQL) and applied at the source. You must test that your RLS DAX expressions translate correctly to SQL for each DirectQuery source, as complex DAX may not translate as expected. For chained models (composite models on published datasets), RLS defined on the shared source model is enforced in the chained model—the chaining user does not see data they would not see when querying the source model directly. Single Sign-On (SSO) can be configured for DirectQuery connections so that the end user's Azure AD identity is passed to the source system, enabling source-level security enforcement in addition to Power BI RLS. This is supported for Azure SQL, Synapse, Snowflake, Databricks, and other sources. Object-Level Security (OLS) can hide specific tables or columns from certain roles and is enforced on the Power BI side regardless of storage mode. Data exfiltration risk differs between modes: Import data is stored in the Power BI service and can potentially be exported by authorized users, while DirectQuery data remains at the source and is only transmitted in query results. For compliance-sensitive deployments (HIPAA, GDPR, SOC 2), document which data resides where (in Power BI memory vs. at source) and ensure encryption at rest and in transit for all paths.
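Because an RLS DAX expression on a DirectQuery table is translated to source SQL, prefer simple, translatable patterns and verify the generated query. A common mapping-table pattern, sketched with illustrative table and column names (not from this guide):

```dax
-- Role filter expression applied to DimRegion: each user sees only
-- the regions mapped to their UPN in a SecurityUserRegion table.
DimRegion[RegionKey]
    IN CALCULATETABLE (
        VALUES ( SecurityUserRegion[RegionKey] ),
        SecurityUserRegion[UserEmail] = USERPRINCIPALNAME ()
    )
```

When the filtered table is DirectQuery, capture the SQL the engine generates (for example, via source-side tracing) and confirm the filter is actually pushed down before signing off on the role.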