
Power BI Direct Lake Mode in Microsoft Fabric: The Complete Enterprise Guide for 2026

Master Direct Lake mode in Microsoft Fabric — understand how it eliminates Import and DirectQuery trade-offs, leverages OneLake Delta tables, handles framing and fallback behavior, and delivers sub-second query performance at enterprise scale.

By EPC Group

<h2>What Is Direct Lake Mode in Power BI?</h2>

<p>Direct Lake is a connectivity mode exclusive to Microsoft Fabric that fundamentally changes how Power BI semantic models access data. For over a decade, Power BI architects faced a binary choice: Import mode (fast queries, stale data, memory-bound) or DirectQuery mode (real-time data, slower queries, source-dependent performance). Direct Lake eliminates this trade-off by reading Delta Parquet files directly from OneLake into the Power BI VertiPaq engine&rsquo;s columnar memory&mdash;without executing SQL queries against a compute endpoint and without scheduling data refreshes. The result is Import-like query speed with DirectQuery-like data freshness, and it is available today on every <a href="/services/microsoft-fabric">Microsoft Fabric capacity</a>.</p>

<p>When a user opens a Direct Lake report, Power BI loads column data on demand from the underlying Delta table files stored in OneLake. There is no ETL pipeline copying data into a Power BI dataset, no scheduled refresh window, and no SQL endpoint processing queries. The VertiPaq engine reads compressed Parquet column chunks directly from storage, caches them in memory, and serves interactive report visuals at sub-second latency. When the underlying Delta table is updated by a Spark notebook, data pipeline, or Dataflow Gen2, the next report interaction automatically detects the new data through a process called <strong>framing</strong>.</p>

<h2>How Direct Lake Differs from Import and DirectQuery</h2>

<p>Understanding the three connectivity modes is essential for making the right architectural decision. Here is a comprehensive comparison:</p>

<table> <thead> <tr><th>Characteristic</th><th>Import Mode</th><th>DirectQuery Mode</th><th>Direct Lake Mode</th></tr> </thead> <tbody> <tr><td>Data freshness</td><td>Stale until next scheduled refresh</td><td>Real-time (live queries)</td><td>Near-real-time (latest Delta commit)</td></tr> <tr><td>Query engine</td><td>VertiPaq in-memory</td><td>Source database SQL engine</td><td>VertiPaq in-memory (loaded from Parquet)</td></tr> <tr><td>Query performance</td><td>Sub-second (data pre-loaded)</td><td>Depends on source performance</td><td>Sub-second (on-demand column load)</td></tr> <tr><td>Refresh required</td><td>Yes (scheduled or on-demand)</td><td>No</td><td>No (automatic framing)</td></tr> <tr><td>Memory consumption</td><td>Full dataset in memory</td><td>Minimal (pass-through queries)</td><td>On-demand column caching</td></tr> <tr><td>Data source</td><td>Any supported source</td><td>Any supported source</td><td>OneLake Delta tables only</td></tr> <tr><td>Capacity requirement</td><td>Power BI Pro/Premium/Fabric</td><td>Power BI Pro/Premium/Fabric</td><td>Fabric capacity (F2 or higher)</td></tr> <tr><td>Max dataset size</td><td>Limited by capacity memory</td><td>No limit (queries sent to source)</td><td>Limited by capacity guardrails</td></tr> <tr><td>DAX calculation groups</td><td>Full support</td><td>Full support</td><td>Full support</td></tr> <tr><td>Composite model support</td><td>Yes</td><td>Yes</td><td>Yes (with limitations)</td></tr> </tbody> </table>

<p>The key insight is that Direct Lake delivers the best of both worlds: VertiPaq&rsquo;s compressed columnar engine for query speed, combined with automatic data freshness from OneLake without refresh orchestration. For organizations migrating to Fabric, this eliminates the most common pain points of both legacy modes. Our <a href="/services/power-bi-architecture">Power BI architecture team</a> helps enterprises design Direct Lake models that maximize this advantage.</p>

<h2>OneLake Integration and Delta Table Requirements</h2>

<p>Direct Lake mode has a strict prerequisite: the underlying data must reside as Delta tables in OneLake. This means your data must be stored in one of the following Fabric items:</p>

<ul> <li><strong>Fabric Lakehouse tables</strong> &mdash; Delta tables created through Spark notebooks, Dataflow Gen2, or data pipelines in a Fabric Lakehouse.</li> <li><strong>Fabric Warehouse tables</strong> &mdash; Tables in a Fabric Data Warehouse, which are stored as Delta Parquet files in OneLake.</li> <li><strong>Shortcut tables</strong> &mdash; OneLake shortcuts pointing to Delta tables in Azure Data Lake Storage Gen2, Amazon S3, or Google Cloud Storage (with specific requirements covered below).</li> </ul>

<p>The Delta format is non-negotiable because Direct Lake relies on the Delta transaction log (&#96;_delta_log&#96;) to identify which Parquet files constitute the current table version. When a Spark job writes new data, it commits a new entry to the Delta log. The next time Power BI frames the model, it reads the Delta log to discover new or changed Parquet files and loads the updated columns into VertiPaq memory.</p>
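To make the framing mechanism concrete, here is a minimal, simplified sketch of how an engine could derive the current file set of a Delta table by replaying the `_delta_log` commit files. It is illustrative only: it ignores checkpoints, protocol actions, and partitioned paths, and the function name `active_files` is our own, not a Fabric or Delta API.

```python
import json
from pathlib import Path

def active_files(delta_log_dir):
    """Replay Delta commit files in order, tracking which Parquet files
    make up the current table version (each commit is newline-delimited
    JSON containing 'add' and 'remove' actions)."""
    files = set()
    for commit in sorted(Path(delta_log_dir).glob("*.json")):
        for line in commit.read_text().splitlines():
            action = json.loads(line)
            if "add" in action:
                files.add(action["add"]["path"])
            elif "remove" in action:
                files.discard(action["remove"]["path"])
    return sorted(files)
```

Because only these small JSON files are read, discovering the latest table version is cheap, which is why framing completes in milliseconds even for multi-terabyte tables.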

<h3>Parquet File Optimization</h3>

<p>Not all Delta tables perform equally with Direct Lake. The physical layout of Parquet files directly impacts query performance:</p>

<ul> <li><strong>File count</strong> &mdash; Fewer, larger files outperform many small files. Each Fabric capacity SKU has a maximum Parquet file count per table (e.g., F64 supports up to 1,000 files per table, while F2 supports 200). Exceeding this threshold triggers a fallback to DirectQuery mode.</li> <li><strong>Row group size</strong> &mdash; Target row groups of 1 million rows. Parquet files with excessively small row groups increase metadata overhead and slow column loading.</li> <li><strong>Row and column counts</strong> &mdash; Each capacity SKU also enforces a maximum row count per table, and wide tables with hundreds of columns consume more memory during column loading.</li> <li><strong>Compaction</strong> &mdash; Run &#96;OPTIMIZE&#96; commands on Delta tables regularly to compact small files into larger ones. This is critical for streaming ingestion scenarios where many small files accumulate rapidly.</li> </ul>

<h2>V-Order Optimization</h2>

<p>V-Order is a write-time optimization exclusive to Microsoft Fabric that rearranges data within Parquet files to maximize VertiPaq compression efficiency. When data is written with V-Order enabled, the Parquet file&rsquo;s columnar segments are sorted and encoded in a way that closely mirrors how VertiPaq would organize the data internally. The result is dramatically faster column loading in Direct Lake because VertiPaq can consume V-Ordered Parquet data with minimal transformation.</p>

<p>V-Order benefits for Direct Lake include:</p>

<ul> <li><strong>3&ndash;4x faster column load times</strong> compared to standard Parquet files without V-Order.</li> <li><strong>Up to 50% better compression ratios</strong>, reducing the OneLake storage footprint and the memory consumed by VertiPaq.</li> <li><strong>Reduced CPU overhead</strong> during the transcoding process from Parquet format to VertiPaq in-memory format.</li> </ul>

<p>V-Order is enabled by default for all data written within Fabric workloads (Spark notebooks, data pipelines, Dataflow Gen2, Fabric Warehouse). For external data ingested via shortcuts, V-Order is not applied automatically&mdash;you should run an &#96;OPTIMIZE&#96; command with V-Order enabled on shortcut tables to achieve the same benefits. Our <a href="/services/data-analytics">data analytics consultants</a> benchmark V-Order impact as part of every Fabric performance assessment.</p>
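As a sketch of what the Spark-side commands look like, the helper below assembles the session property and &#96;OPTIMIZE&#96; statement you would run in a Fabric notebook against a shortcut table. The &#96;spark.sql.parquet.vorder.enabled&#96; property and the &#96;OPTIMIZE &hellip; VORDER&#96; / &#96;ZORDER BY&#96; syntax follow Fabric's Spark documentation at the time of writing &mdash; verify them against the docs for your runtime before relying on this.

```python
# Session property that enables V-Order writes in Fabric Spark
# (name per Fabric docs at time of writing; confirm for your runtime).
VORDER_SESSION_CONF = ("spark.sql.parquet.vorder.enabled", "true")

def optimize_statement(table: str, zorder_cols=None, vorder: bool = True) -> str:
    """Assemble an OPTIMIZE statement that compacts a Delta table,
    optionally Z-Ordering on filter columns and applying V-Order."""
    stmt = f"OPTIMIZE {table}"
    if zorder_cols:
        stmt += " ZORDER BY (" + ", ".join(zorder_cols) + ")"
    if vorder:
        stmt += " VORDER"
    return stmt

# In a Fabric notebook this would be executed roughly as:
#   spark.conf.set(*VORDER_SESSION_CONF)
#   spark.sql(optimize_statement("sales_fact", ["order_date"]))
```

Running this after each major load keeps shortcut tables both compacted and V-Ordered, so they load into VertiPaq as fast as native Fabric-written tables.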

<h2>Framing Behavior and DirectQuery Fallback</h2>

<p>Framing is the mechanism by which a Direct Lake semantic model discovers new data in OneLake without a traditional refresh. When a Power BI report is queried, the engine checks the Delta transaction log for the most recent table version. If the log indicates new commits since the last frame, Power BI updates its internal metadata to reference the latest Parquet files. This process is lightweight&mdash;it reads only the Delta log, not the data itself&mdash;and typically completes in milliseconds.</p>

<h3>Automatic Framing</h3>

<p>By default, Fabric triggers automatic framing at regular intervals (configurable per semantic model). The default framing interval is determined by the capacity SKU, but administrators can also trigger framing manually via the XMLA endpoint or the Fabric REST API using the &#96;refresh&#96; command with &#96;type: automatic&#96;.</p>
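A manual framing call through the REST API can be sketched as follows. The endpoint shape and the &#96;{"type": "automatic"}&#96; body follow the public Power BI enhanced-refresh API; the workspace and model IDs are placeholders, and token acquisition is out of scope here.

```python
def framing_request(workspace_id: str, model_id: str):
    """Return the (url, body) pair for a refresh call that triggers
    framing only -- no data is copied, metadata is re-pointed at the
    latest Delta table version."""
    url = (f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}"
           f"/datasets/{model_id}/refreshes")
    body = {"type": "automatic"}
    return url, body

# Actually sending it requires an Azure AD bearer token, e.g.:
#   import requests
#   url, body = framing_request(ws_id, model_id)
#   requests.post(url, json=body,
#                 headers={"Authorization": f"Bearer {token}"})
```

This is useful in data pipelines: add the call as a final activity after a Delta write so reports reflect new data immediately instead of waiting for the next automatic framing interval.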

<h3>DirectQuery Fallback</h3>

<p>Direct Lake includes an intelligent fallback mechanism. If any of the following conditions occur, the semantic model transparently falls back to DirectQuery mode, routing queries through the Lakehouse or Warehouse SQL analytics endpoint:</p>

<ul> <li><strong>Guardrail exceeded</strong> &mdash; The table exceeds the maximum Parquet file count or row count for the capacity SKU.</li> <li><strong>Memory pressure</strong> &mdash; The VertiPaq cache cannot accommodate the requested columns within the capacity&rsquo;s memory allocation.</li> <li><strong>Unsupported feature</strong> &mdash; The DAX query uses a feature not yet supported in Direct Lake mode (rare in current builds).</li> <li><strong>Column eviction</strong> &mdash; When memory is constrained, VertiPaq evicts least-recently-used columns. If an evicted column is needed and cannot be reloaded, the query falls back to DirectQuery.</li> </ul>

<p>Fallback behavior is configurable. Administrators can set the semantic model property &#96;DirectLakeBehavior&#96; to one of three options:</p>

<ul> <li><strong>Automatic</strong> (default) &mdash; Falls back to DirectQuery seamlessly when guardrails are exceeded.</li> <li><strong>DirectLakeOnly</strong> &mdash; Disables fallback entirely. Queries that would trigger fallback return an error instead. Use this to enforce performance SLAs and detect guardrail violations early.</li> <li><strong>DirectQueryOnly</strong> &mdash; Forces all queries through the SQL analytics endpoint, effectively disabling Direct Lake. Useful for debugging.</li> </ul>
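The property can be set through the XMLA endpoint. The hypothetical helper below builds a TMSL-style &#96;alter&#96; command for it &mdash; the property name and allowed values come from this article, but the exact command shape varies by tooling (TOM, Tabular Editor, SSMS), so treat this as illustrative rather than copy-paste ready.

```python
import json

ALLOWED = {"Automatic", "DirectLakeOnly", "DirectQueryOnly"}

def direct_lake_behavior_command(database: str, behavior: str) -> str:
    """Build an illustrative TMSL-style command that sets the
    DirectLakeBehavior property on a semantic model (database)."""
    if behavior not in ALLOWED:
        raise ValueError(f"behavior must be one of {sorted(ALLOWED)}")
    cmd = {
        "alter": {
            "object": {"database": database},
            "database": {"directLakeBehavior": behavior},
        }
    }
    return json.dumps(cmd)
```

Validating the value client-side matters in CI/CD: a typo that silently left a production model on &#96;Automatic&#96; would mask guardrail violations instead of surfacing them.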

<p>For enterprise deployments, EPC Group recommends initially deploying with &#96;Automatic&#96; fallback, monitoring fallback frequency through capacity metrics, and then switching high-priority models to &#96;DirectLakeOnly&#96; once guardrail compliance is confirmed.</p>

<h2>Performance Benchmarks</h2>

<p>Based on EPC Group&rsquo;s enterprise Fabric deployments across healthcare, financial services, and government organizations, here are representative Direct Lake performance benchmarks compared to traditional modes:</p>

<table> <thead> <tr><th>Scenario</th><th>Import Mode</th><th>DirectQuery (Synapse DW)</th><th>Direct Lake (V-Ordered)</th></tr> </thead> <tbody> <tr><td>Simple measure (SUM, 50M rows)</td><td>120 ms</td><td>1,800 ms</td><td>150 ms</td></tr> <tr><td>Complex DAX (YTD + filter, 200M rows)</td><td>350 ms</td><td>4,200 ms</td><td>420 ms</td></tr> <tr><td>Matrix visual (20 columns, 500M rows)</td><td>800 ms</td><td>8,500 ms</td><td>950 ms</td></tr> <tr><td>Report page load (8 visuals)</td><td>1.2 sec</td><td>12 sec</td><td>1.8 sec</td></tr> <tr><td>Data freshness latency</td><td>1&ndash;24 hours (scheduled)</td><td>Real-time</td><td>1&ndash;15 minutes (framing)</td></tr> <tr><td>Concurrent users (F64)</td><td>200+</td><td>50&ndash;80</td><td>200+</td></tr> </tbody> </table>

<p>Direct Lake delivers 85&ndash;95% of Import mode query performance while eliminating the scheduled refresh window entirely. For organizations with datasets in the hundreds-of-millions-of-rows range, Direct Lake&rsquo;s performance closely rivals Import while providing data freshness within minutes of the source update. This is a transformative improvement for <a href="/blog/power-bi-real-time-dashboards-fabric-streaming-2026">real-time dashboard scenarios</a> where users previously had to choose between speed and freshness.</p>

<h2>Shortcut Tables and Direct Lake</h2>

<p>OneLake shortcuts enable Direct Lake semantic models to read Delta tables stored outside of Fabric&mdash;in Azure Data Lake Storage Gen2, Amazon S3, or Google Cloud Storage&mdash;without physically copying data into a Fabric Lakehouse. This is a powerful capability for organizations that want to adopt Direct Lake without migrating terabytes of existing data.</p>

<h3>Requirements for Shortcut Tables</h3>

<ul> <li>The external data must be in Delta Lake format (Delta transaction log + Parquet data files).</li> <li>The shortcut must point to the root directory of the Delta table (the directory containing the &#96;_delta_log&#96; folder).</li> <li>Authentication must be configured (managed identity for ADLS Gen2, IAM role for S3, service account for GCS).</li> <li>V-Order is not applied to external data&mdash;run &#96;OPTIMIZE&#96; on the shortcut table within Fabric to apply V-Order for optimal Direct Lake performance.</li> <li>Network connectivity between Fabric and the external storage must be established (private endpoints for ADLS Gen2, cross-cloud networking for S3/GCS).</li> </ul>

<h3>Performance Considerations</h3>

<p>Shortcut tables may exhibit slightly higher initial column load latency compared to native OneLake tables because data must traverse the network boundary between the external storage and Fabric. For latency-sensitive dashboards, EPC Group recommends running a nightly &#96;OPTIMIZE&#96; job that compacts and V-Orders the shortcut table data, or migrating the most frequently queried tables to native OneLake storage while keeping cold/archive data on shortcuts.</p>

<h2>Capacity Requirements and Guardrails</h2>

<p>Direct Lake is available on all Fabric capacity SKUs (F2 and above), but each SKU enforces guardrails that determine the maximum dataset complexity:</p>

<table> <thead> <tr><th>Fabric SKU</th><th>Max Parquet Files per Table</th><th>Max Rows per Table (billions)</th><th>Max Model Size on Disk</th></tr> </thead> <tbody> <tr><td>F2</td><td>200</td><td>0.3</td><td>10 GB</td></tr> <tr><td>F4</td><td>200</td><td>0.3</td><td>10 GB</td></tr> <tr><td>F8</td><td>500</td><td>1.5</td><td>20 GB</td></tr> <tr><td>F16</td><td>500</td><td>1.5</td><td>20 GB</td></tr> <tr><td>F32</td><td>500</td><td>3</td><td>40 GB</td></tr> <tr><td>F64</td><td>1,000</td><td>6</td><td>80 GB</td></tr> <tr><td>F128</td><td>1,000</td><td>12</td><td>160 GB</td></tr> <tr><td>F256</td><td>1,000</td><td>24</td><td>320 GB</td></tr> <tr><td>F512</td><td>1,000</td><td>48</td><td>640 GB</td></tr> </tbody> </table>
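For capacity planning, the guardrail table above can be encoded as a simple lookup, as in the sketch below. The values mirror the table at the time of writing &mdash; Microsoft adjusts these limits over time, so confirm against current documentation before sizing a capacity.

```python
# SKU: (max Parquet files per table, max rows per table, max model size GB)
GUARDRAILS = {
    "F2":   (200,    0.3e9,  10), "F4":   (200,    0.3e9,  10),
    "F8":   (500,    1.5e9,  20), "F16":  (500,    1.5e9,  20),
    "F32":  (500,    3e9,    40), "F64":  (1000,   6e9,    80),
    "F128": (1000,  12e9,   160), "F256": (1000,  24e9,   320),
    "F512": (1000,  48e9,   640),
}

def fits_direct_lake(sku, file_count, row_count, model_size_gb):
    """True if the table stays within the SKU's Direct Lake guardrails;
    False means DirectQuery fallback (or an error under DirectLakeOnly)."""
    max_files, max_rows, max_gb = GUARDRAILS[sku]
    return (file_count <= max_files
            and row_count <= max_rows
            and model_size_gb <= max_gb)
```

Checking your largest fact table against this lookup (and the next SKU up, per the sizing advice below) is a quick way to catch guardrail violations before they surface as silent fallback in production.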

<p>When a table exceeds its SKU&rsquo;s guardrails, the semantic model falls back to DirectQuery for that table (if &#96;Automatic&#96; fallback is enabled) or returns an error (if &#96;DirectLakeOnly&#96; is set). To stay within guardrails, run regular &#96;OPTIMIZE&#96; compaction jobs on your Delta tables and partition large tables by date or business unit to keep individual table file counts manageable.</p>

<h2>Monitoring Direct Lake Queries</h2>

<p>Effective monitoring is critical for enterprise Direct Lake deployments. Fabric provides several monitoring surfaces:</p>

<ul> <li><strong>Fabric Capacity Metrics app</strong> &mdash; Displays Direct Lake vs. DirectQuery query counts, average query duration, and fallback frequency per semantic model. Track the ratio of Direct Lake queries to fallback queries&mdash;a healthy model should show less than 1% fallback.</li> <li><strong>XMLA endpoint DMVs</strong> &mdash; Connect to the semantic model via XMLA and query &#96;$SYSTEM.DISCOVER_STORAGE_TABLE_COLUMNS&#96; to see which columns are currently loaded in VertiPaq, their memory consumption, and compression ratios.</li> <li><strong>DAX Studio / Performance Analyzer</strong> &mdash; Use DAX Studio&rsquo;s Server Timings to see whether each query was served from the VertiPaq cache (Direct Lake) or routed to the SQL endpoint (DirectQuery fallback). The &#96;DirectLakeRequestCount&#96; and &#96;DirectQueryRequestCount&#96; counters in the query trace reveal exactly which mode served each visual.</li> <li><strong>Log Analytics integration</strong> &mdash; Route Fabric capacity logs to Azure Log Analytics for historical trend analysis. Build KQL dashboards that track fallback rates over time and alert when fallback frequency exceeds a threshold.</li> <li><strong>Semantic model properties</strong> &mdash; The &#96;LastFrameTime&#96; property on the semantic model indicates when the last successful framing operation occurred. Monitor this to ensure data freshness meets SLA requirements.</li> </ul>
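Whatever surface the query counts come from (Capacity Metrics export, trace counters, or Log Analytics), the health check itself reduces to a fallback-rate calculation against the 1% threshold above. The counts below are hypothetical inputs, not values from any specific API.

```python
def fallback_rate(direct_lake_queries: int, directquery_queries: int) -> float:
    """Share of queries answered via DirectQuery fallback."""
    total = direct_lake_queries + directquery_queries
    return directquery_queries / total if total else 0.0

def needs_attention(dl: int, dq: int, threshold: float = 0.01) -> bool:
    """Flag models whose fallback share exceeds the 1% health threshold."""
    return fallback_rate(dl, dq) > threshold
```

Wiring this into a scheduled check (or a KQL alert in Log Analytics) turns the weekly fallback review into an automated signal instead of a manual dashboard inspection.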

<h2>Limitations and Workarounds</h2>

<p>While Direct Lake is transformative, it has limitations that enterprise architects must account for:</p>

<ul> <li><strong>OneLake-only data source</strong> &mdash; Direct Lake cannot connect to Azure SQL Database, Synapse dedicated SQL pools, or any non-OneLake source. <em>Workaround:</em> Use Fabric data pipelines or Dataflow Gen2 to land external data into OneLake Delta tables before connecting Direct Lake.</li> <li><strong>No calculated tables</strong> &mdash; DAX calculated tables are not supported in Direct Lake mode. <em>Workaround:</em> Pre-compute calculated tables as Delta tables in the Lakehouse using Spark or SQL.</li> <li><strong>Limited composite model support</strong> &mdash; While you can add DirectQuery sources to a Direct Lake model to create a composite model, certain cross-source DAX patterns may trigger fallback. <em>Workaround:</em> Test composite model queries thoroughly and monitor fallback rates.</li> <li><strong>RLS with USERPRINCIPALNAME()</strong> &mdash; Row-level security works with Direct Lake but requires careful testing. The &#96;USERPRINCIPALNAME()&#96; function is supported, but dynamic RLS patterns involving multiple security tables may increase memory pressure. <em>Workaround:</em> Simplify RLS models and test with production-representative concurrent user loads.</li> <li><strong>Incremental refresh</strong> &mdash; Traditional Power BI incremental refresh policies do not apply to Direct Lake. Data freshness is managed through framing, not refresh partitions. <em>Workaround:</em> Use Delta table partitioning and &#96;OPTIMIZE&#96; strategies instead of Power BI refresh policies.</li> <li><strong>No aggregation tables</strong> &mdash; Power BI aggregation tables (user-defined aggregations) are not supported in Direct Lake. <em>Workaround:</em> Create pre-aggregated Delta tables in the Lakehouse and add them to the semantic model as separate tables.</li> </ul>

<h2>Best Practices for Enterprise Direct Lake Deployments</h2>

<p>Based on EPC Group&rsquo;s experience deploying Direct Lake across Fortune 500 organizations in healthcare, financial services, and government, these are our recommended best practices:</p>

<ol> <li><strong>Enable V-Order on all Delta tables</strong> &mdash; Ensure all data pipelines write with V-Order enabled. For existing tables, run &#96;OPTIMIZE VORDER&#96; as a one-time operation and schedule it weekly for continuously updated tables.</li> <li><strong>Compact files aggressively</strong> &mdash; Schedule &#96;OPTIMIZE&#96; jobs to run after major data loads. Target fewer than 100 Parquet files per table for optimal performance. Use &#96;ZORDER BY&#96; on frequently filtered columns to enhance predicate pushdown.</li> <li><strong>Right-size your Fabric capacity</strong> &mdash; Map your largest table&rsquo;s row count and file count to the capacity guardrail table above. Provision one SKU level above the minimum to accommodate growth and concurrent workloads.</li> <li><strong>Monitor fallback rates weekly</strong> &mdash; Use the Fabric Capacity Metrics app to track DirectQuery fallback. Any fallback above 1% indicates a guardrail violation, memory pressure, or file compaction issue that needs attention.</li> <li><strong>Use DirectLakeOnly for production models</strong> &mdash; After validating that a semantic model operates within guardrails, switch from &#96;Automatic&#96; to &#96;DirectLakeOnly&#96; to prevent silent performance degradation from undetected fallback.</li> <li><strong>Separate hot and cold data</strong> &mdash; Partition fact tables by date. 
Keep only the most recent 12&ndash;24 months in the primary Direct Lake table and archive older data in a separate table or shortcut that is not included in the high-performance semantic model.</li> <li><strong>Pre-compute complex calculations</strong> &mdash; Since calculated tables and aggregation tables are not supported, build pre-aggregated Delta tables in your Lakehouse medallion architecture (gold layer) and reference them directly in the semantic model.</li> <li><strong>Test RLS at scale</strong> &mdash; Deploy row-level security in a staging environment with realistic concurrent user counts. Direct Lake RLS performance depends on the cardinality of the security filter column and the number of distinct user-to-data mappings.</li> <li><strong>Implement governance with deployment pipelines</strong> &mdash; Use Fabric deployment pipelines to promote semantic models through development, test, and production stages. This prevents untested model changes from impacting production Direct Lake performance.</li> <li><strong>Plan for hybrid architectures</strong> &mdash; Not every dataset benefits from Direct Lake. Small, slowly changing dimension tables may perform just as well in Import mode. Use composite models strategically, combining Direct Lake for large fact tables with Import for small dimensions, while monitoring for fallback.</li> </ol>

<p>Direct Lake mode represents the most significant advancement in Power BI data connectivity since the introduction of DirectQuery. For organizations running Microsoft Fabric, it eliminates the decade-old compromise between query performance and data freshness. However, realizing its full potential requires careful attention to Delta table optimization, capacity sizing, guardrail management, and monitoring. <a href="/contact">Contact EPC Group</a> to schedule a Direct Lake architecture assessment and ensure your Fabric environment is optimized for enterprise-scale performance.</p>

Frequently Asked Questions

What is the difference between Direct Lake, Import, and DirectQuery modes in Power BI?

Import mode copies all data into Power BI in-memory storage for the fastest queries but requires scheduled refreshes and consumes significant memory. DirectQuery sends live SQL queries to the source database for real-time data but query speed depends on the source system performance. Direct Lake is a Fabric-exclusive mode that reads Delta Parquet files directly from OneLake into the VertiPaq engine on demand, delivering Import-like query speed with near-real-time data freshness and no scheduled refreshes. It combines the best characteristics of both legacy modes while eliminating their primary drawbacks. <a href="/contact">Contact EPC Group</a> for help choosing the right connectivity mode for your datasets.

What are the prerequisites for using Direct Lake mode in Power BI?

Direct Lake requires three things: (1) a Microsoft Fabric capacity (F2 SKU or higher), (2) data stored as Delta tables in OneLake (via a Fabric Lakehouse, Fabric Warehouse, or OneLake shortcut to external Delta tables), and (3) a semantic model created in a Fabric workspace assigned to that capacity. Direct Lake cannot connect to non-OneLake sources such as Azure SQL Database or Synapse dedicated SQL pools. If your data is in those sources, you must first land it in OneLake using Fabric data pipelines or Dataflow Gen2 before using Direct Lake.

What happens when a Direct Lake query falls back to DirectQuery mode?

Direct Lake includes an automatic fallback mechanism. When a table exceeds the capacity SKU guardrails (maximum Parquet file count or row count), when VertiPaq memory is insufficient to load the requested columns, or when an unsupported feature is invoked, the query transparently routes through the Lakehouse or Warehouse SQL analytics endpoint as a DirectQuery query. This ensures the report still functions but at DirectQuery-level performance. Administrators can disable fallback by setting the DirectLakeBehavior property to DirectLakeOnly, which returns errors instead of falling back, helping to detect and resolve guardrail violations proactively.

How does V-Order optimization improve Direct Lake performance?

V-Order is a Fabric-exclusive write-time optimization that rearranges data within Parquet files to match the VertiPaq engine's columnar format. Delta tables written with V-Order enabled load 3 to 4 times faster into the Direct Lake VertiPaq cache and achieve up to 50% better compression ratios compared to standard Parquet files. V-Order is enabled by default for all data written within Fabric workloads. For data ingested via OneLake shortcuts from external storage, you should run OPTIMIZE with V-Order on the shortcut table within Fabric to apply the optimization. <a href="/contact">Contact EPC Group</a> to benchmark V-Order impact on your specific workloads.

How do I monitor Direct Lake query performance and fallback rates?

Use the Fabric Capacity Metrics app to track Direct Lake vs. DirectQuery query counts, average duration, and fallback frequency per semantic model. Connect to the semantic model via the XMLA endpoint and query DMVs to inspect which columns are loaded in VertiPaq and their memory consumption. Use DAX Studio Server Timings to identify whether individual visuals are served from the VertiPaq cache or routed to DirectQuery fallback. Route Fabric capacity logs to Azure Log Analytics for historical trend analysis and configure alerts when fallback rates exceed acceptable thresholds. A healthy Direct Lake model should show less than 1% fallback queries.

Can I use Direct Lake with data stored outside of Microsoft Fabric?

Yes, through OneLake shortcuts. You can create shortcuts that point to Delta tables stored in Azure Data Lake Storage Gen2, Amazon S3, or Google Cloud Storage. Direct Lake reads the Delta transaction log and Parquet files through the shortcut as if they were native OneLake tables. However, the external data must be in Delta Lake format, and shortcut tables may have slightly higher initial column load latency due to network traversal. For optimal performance, run OPTIMIZE with V-Order on shortcut tables within Fabric and consider migrating your most frequently queried tables to native OneLake storage while keeping cold data on external shortcuts.

What Fabric capacity SKU do I need for Direct Lake with large datasets?

Each Fabric SKU enforces guardrails on maximum Parquet files per table, maximum rows per table, and maximum model size on disk. For example, an F64 capacity supports up to 1,000 files per table and 6 billion rows, while an F8 supports 500 files and 1.5 billion rows. If your dataset exceeds the guardrails for your SKU, Direct Lake falls back to DirectQuery or returns errors. Right-size your capacity by mapping your largest table dimensions to the guardrail table and provisioning one SKU level above the minimum to accommodate growth. <a href="/contact">Contact EPC Group</a> for a capacity sizing assessment tailored to your data volumes and concurrency requirements.

Power BI, Microsoft Fabric, Direct Lake, OneLake, Delta Lake, Data Analytics, Enterprise Architecture, Performance Optimization
