Quick Answer
For most 2026 deployments above 150 named users or 20 GB of semantic-model data, Fabric F64 with a one-year Azure Reserved Instance plus scheduled pause-and-resume is the lowest-TCO license. Below 150 users with no lakehouse dependency, PPU is still the simplest and often the cheapest option. Power BI Premium P SKUs are a legacy choice in 2026 and should not be renewed.
For storage mode: default to Import under 20 GB, move to Direct Lake above that when the source lives in OneLake as Delta Parquet, keep DirectQuery only for sub-minute-latency regulated sources, and use Dual for dimension tables in composite models. Every other combination is a special case.
1. The 2026 Licensing Landscape
Power BI licensing in 2026 is a three-way choice between Fabric capacity SKUs (F2 through F2048), legacy Power BI Premium SKUs (P1 through P3), and Per-User Premium (PPU). Microsoft is actively funneling all new investment into F SKUs, but the P and PPU paths remain commercially valid for specific workloads. Understanding where each path wins requires separating three independent dimensions: who can view content, what capacity features unlock, and what governance model fits.
Fabric F SKUs (F2 to F2048)
F SKUs are Azure-billed capacities priced per capacity unit (CU) per hour. They support pause and resume, Azure Reserved Instances, burstable autoscale, and every current Fabric workload (Power BI, Data Engineering, Data Warehouse, Real-Time Intelligence, Data Factory, Data Science). Pricing is linear: F64 is 2x F32, F128 is 2x F64, and so on. The critical threshold is F64: below it, viewers must hold Pro or PPU licenses; at F64 and above, free-license viewers can consume content, matching the old Premium per-capacity benefit.
Power BI Premium P SKUs (P1 to P3)
P SKUs are billed through Microsoft 365 as flat monthly subscriptions with no pause option. Compute equivalence to F SKUs is well-defined: P1 equals F64, P2 equals F128, P3 equals F256. P SKUs lack access to any Fabric-exclusive workload (Lakehouse, Warehouse, Direct Lake, Data Factory, Real-Time Intelligence, Data Activator, Copilot). They exist in 2026 only for organizations with multi-year Enterprise Agreement commitments that have not yet been renegotiated. We cover the migration path in detail in our Fabric vs Premium migration guide.
Per-User Premium (PPU)
PPU at approximately $24 per user per month bundles Pro and Premium-class features into a single named license. It is the only Premium-class license that does not require capacity management. PPU includes larger dataset sizes (up to 100 GB), XMLA read/write, deployment pipelines, and paginated reports. It does not include Fabric-exclusive workloads (no Lakehouse, no Direct Lake, no Copilot). PPU remains compelling for small and medium teams that need Premium features without the capacity-admin burden.
Rule of thumb: If your organization runs more than one workload beyond Power BI (warehouse, lakehouse, real-time, or data engineering), F SKU is the only rational choice. If Power BI is your only analytics workload and your user count is under 150, PPU is almost always simpler and often cheaper.
2. Full Feature Matrix: Fabric SKUs vs Premium vs PPU
The table below maps the features that matter most to enterprise decision-makers. Rows are ordered by commercial weight, not alphabetically.
| Feature | F2 to F32 | F64+ | P1 to P3 | PPU |
|---|---|---|---|---|
| Free-license viewers | No | Yes | Yes | No |
| Max semantic model size | 10 GB | 400 GB | 400 GB | 100 GB |
| Direct Lake mode | Yes | Yes | No | No |
| OneLake + shortcuts | Yes | Yes | No | No |
| Lakehouse + Warehouse workloads | Yes | Yes | No | No |
| Data Factory in Fabric | Yes | Yes | No | No |
| Real-Time Intelligence | Yes | Yes | No | No |
| Copilot (all workloads) | No | Yes | No | Limited |
| Paginated reports | Yes | Yes | Yes | Yes |
| XMLA read/write | Yes | Yes | Yes | Yes |
| Deployment pipelines | Yes | Yes | Yes | Yes |
| Azure Reserved Instance discount | Up to 41% | Up to 41% | No | No |
| Pause and resume billing | Yes | Yes | No | N/A |
| Burst autoscale | Yes | Yes | Limited | No |
| Capacity admin overhead | Medium | High | High | None |
| Minimum commitment | Hourly | Hourly | Monthly | Per user/month |
The matrix reduces to one governing observation: F64 and above is a strict superset of P1 through P3 in feature terms, with additional Fabric workloads and billing flexibility. PPU is a simpler governance model for teams that do not need capacity-scale features.
3. Fabric SKU Pricing Table (2026)
Pricing below reflects April 2026 US East pay-as-you-go rates, one-year Azure Reserved Instance rates, and the equivalent legacy P SKU monthly price. Regional variation is typically within 5 percent.
| SKU | Capacity Units | PAYG $/hr | 24x7 PAYG $/mo | 1-yr Reserved $/mo | Legacy P SKU Equiv. |
|---|---|---|---|---|---|
| F2 | 2 | $0.36 | $263 | $156 | — |
| F4 | 4 | $0.72 | $526 | $311 | — |
| F8 | 8 | $1.44 | $1,052 | $622 | A1 / EM1 |
| F16 | 16 | $2.88 | $2,104 | $1,244 | A2 / EM2 |
| F32 | 32 | $5.76 | $4,208 | $2,488 | A3 / EM3 |
| F64 | 64 | $11.52 | $8,410 | $4,963 | P1 ($4,995) |
| F128 | 128 | $23.04 | $16,820 | $9,925 | P2 ($9,995) |
| F256 | 256 | $46.08 | $33,640 | $19,850 | P3 ($19,995) |
| F512 | 512 | $92.16 | $67,280 | $39,700 | P4 |
| F1024 | 1024 | $184.32 | $134,560 | $79,400 | P5 |
| F2048 | 2048 | $368.64 | $269,120 | $158,800 | — |
The 24x7 PAYG column is a theoretical ceiling, not a realistic bill. Almost every production F SKU deployment pauses overnight and on weekends, bringing effective hours to 240 to 280 per month rather than 730. Combining that with a one-year reservation on the always-on portion typically produces effective monthly cost 45 to 60 percent lower than the 24x7 PAYG figure.
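The blend described above can be sanity-checked with a few lines of arithmetic. The sketch below combines a reserved (always-on) share billed 24x7 at the discounted rate with a PAYG share that bills only while resumed; the 260 active hours per month and the 50/50 split are illustrative assumptions, and the rates are the F64 figures from the table.

```python
# Effective monthly cost of an F64 under pause-and-resume plus a partial
# one-year reservation. 260 active hours/month and the 50/50 reserved/PAYG
# split are assumptions for illustration.

def effective_monthly_cost(payg_hourly, reserved_monthly,
                           active_hours=260, reserved_share=0.5):
    """Reserved share bills 24x7 at the discounted rate; the PAYG share
    bills only for the hours the capacity is actually resumed."""
    reserved_cost = reserved_monthly * reserved_share
    payg_cost = payg_hourly * active_hours * (1 - reserved_share)
    return reserved_cost + payg_cost

ceiling = 11.52 * 730                    # 24x7 PAYG ceiling, ~ $8,410/mo
blended = effective_monthly_cost(11.52, 4963)
savings = 1 - blended / ceiling          # lands inside the 45-60% band
print(f"${blended:,.0f}/mo effective vs ${ceiling:,.0f}/mo ceiling "
      f"({savings:.0%} lower)")
```

With these assumptions the effective cost comes out roughly 53 percent below the 24x7 ceiling, consistent with the 45-to-60-percent range above.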
4. TCO at 50 / 200 / 1,000 / 5,000 Users
Licensing decisions are ultimately made on TCO, not per-feature checklists. The four scenarios below model realistic mixes of creators, viewers, and capacity for the most common enterprise shapes. All figures are annual, in USD, and include both capacity and per-user license costs.
Scenario 1: 50 users (department BI)
Composition: 5 creators, 45 viewers. Single semantic model under 10 GB. No lakehouse.
- PPU only: 50 users x $24 x 12 = $14,400/yr. Simplest option, zero capacity admin.
- Pro + F2 reserved: 50 Pro licenses ($10 x 12 x 50 = $6,000) + F2 reserved ($156 x 12 = $1,872) = $7,872/yr. Cheaper but every viewer still needs Pro (F2 is below the free-viewer threshold).
- Pro creators + F64 reserved + free viewers: 5 Pro ($600) + F64 reserved ($4,963 x 12 = $59,556) = $60,156/yr. Overkill at this scale.
Winner: Pro + F2 reserved ($7,872/yr), unless governance complexity of capacity admin outweighs the savings versus PPU. Many teams pick PPU for the simpler operational model.
Scenario 2: 200 users (mid-market enterprise)
Composition: 20 creators, 180 viewers. Multiple semantic models totaling 40 GB. Lakehouse roadmap within 12 months.
- PPU only: 200 x $24 x 12 = $57,600/yr. No capacity overhead, but no lakehouse path.
- Pro creators + F64 reserved (paused nights/weekends) + free viewers: 20 Pro ($2,400) + F64 reserved ($4,963 x 12 = $59,556) + pause savings approx. minus $18,000 = effective $43,956/yr. Unlocks Fabric workloads.
- Pro + P1: 20 Pro ($2,400) + P1 flat ($4,995 x 12 = $59,940) = $62,340/yr. Same feature set as F64, minus pause flexibility and Fabric workloads.
Winner: F64 reserved + pause ($43,956/yr). Wins on both cost and future-proofing. This is the sweet-spot scenario where F SKU first clearly dominates.
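The scenario arithmetic above reduces to one formula: annual TCO equals per-user licenses plus capacity minus pause savings. A minimal sketch, using the article's 2026 prices ($10 Pro, $24 PPU per user per month), reproduces the Scenario 2 numbers:

```python
# Scenario 2 TCO (200 users: 20 creators, 180 viewers) as a reusable
# function. Prices are the article's 2026 figures; the $18,000 pause
# saving is the article's approximation for F64.

PRO_MO, PPU_MO = 10, 24

def annual_tco(pro_creators=0, ppu_users=0, capacity_per_mo=0, pause_savings=0):
    licenses = (pro_creators * PRO_MO + ppu_users * PPU_MO) * 12
    return licenses + capacity_per_mo * 12 - pause_savings

ppu_only   = annual_tco(ppu_users=200)                            # $57,600
f64_paused = annual_tco(pro_creators=20, capacity_per_mo=4963,
                        pause_savings=18_000)                     # $43,956
p1_flat    = annual_tco(pro_creators=20, capacity_per_mo=4995)    # $62,340
```

Swapping in the other scenarios' creator counts and capacity rates reproduces the Section 4 figures the same way.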
Scenario 3: 1,000 users (large enterprise)
Composition: 80 creators, 920 viewers. 200 GB across 30 semantic models. Active lakehouse deployment with Direct Lake.
- Pro creators + F128 reserved + pause + autoscale burst: 80 Pro ($9,600) + F128 reserved ($9,925 x 12 = $119,100) + pause savings approx. minus $36,000 + autoscale $4,000 = effective $96,700/yr.
- PPU only: 1,000 x $24 x 12 = $288,000/yr. Plus no Direct Lake, no lakehouse, no real-time. Eliminated.
- Pro + P2: 80 Pro ($9,600) + P2 flat ($9,995 x 12 = $119,940) = $129,540/yr. No pause savings, no Fabric workloads.
Winner: F128 reserved + pause ($96,700/yr), roughly 25 percent less than P2 and with full Fabric access. Above this scale, F SKU is the only defensible choice.
Scenario 4: 5,000 users (global enterprise)
Composition: 300 creators, 4,700 viewers, 800 GB across 80 semantic models. Multiple workspaces with Direct Lake, Lakehouse, Warehouse, and Real-Time Intelligence workloads. Global concurrency requirements.
- Pro creators + F256 reserved steady-state + F512 burst autoscale + pause weekends: 300 Pro ($36,000) + F256 reserved ($19,850 x 12 = $238,200) + burst approx. $15,000 + pause savings approx. minus $25,000 = effective $264,200/yr.
- Pro + P3: 300 Pro ($36,000) + P3 flat ($19,995 x 12 = $239,940) = $275,940/yr. No pause, no Fabric workloads, no burst.
- Pro + 2x F128 reserved (multi-capacity): 300 Pro ($36,000) + 2x F128 reserved ($238,200) + pause savings approx. minus $45,000 = effective $229,200/yr. Splits workload across capacities for better isolation.
Winner: Multi-capacity F128 strategy ($229,200/yr). At scale, splitting into multiple smaller F SKUs often beats a single larger capacity because pause windows compound and workload isolation reduces noisy-neighbor throttling. See our Fabric capacity cost optimization playbook for the multi-capacity pattern in detail.
5. Storage Modes: Direct Lake vs Import vs DirectQuery vs Dual
Storage mode is the second half of the decision framework and is often where cost models collapse. A wrong storage mode choice can consume 3x to 5x the CU of the right choice, making even the most favorable SKU uneconomic.
Import mode
Import loads the entire table into the VertiPaq columnar engine on refresh. Query latency is sub-second because the entire model lives in memory, but refresh windows grow with data volume. Import is the correct default for semantic models under 20 GB, for complex star schemas with calculated columns, and for high-concurrency dashboards where query performance dominates the decision.
DirectQuery
DirectQuery issues a SQL query to the source for every visual on every slicer change. Latency is whatever the source returns, typically 2 to 20 seconds per query. DirectQuery is the right mode only when source data changes faster than 15 minutes and freshness matters more than interactivity. Regulated industries with audit-trail requirements on every query sometimes require DirectQuery because the source system is the record of truth.
Direct Lake (Fabric-only)
Direct Lake reads Delta Parquet files from OneLake directly into the VertiPaq engine on demand. There is no refresh step: the Delta table is the source of truth. Columns are paged into memory lazily as queries reference them. Query latency approaches Import performance (typically 1 to 3 seconds) because the engine is still VertiPaq, but data freshness matches the upstream Delta commit frequency (typically seconds to minutes). Direct Lake requires a Fabric F SKU and data in OneLake. It is the defining storage mode of the Fabric era.
Dual mode
Dual is a hybrid that behaves like Import when queried alongside Import tables and like DirectQuery when queried alongside DirectQuery tables. It is specifically designed for dimension tables in composite models. Dual is a niche mode that solves one real problem well: small dimension tables that need to participate in both cached and live query paths. See our composite models guide for the full pattern.
Storage mode trade-off matrix
| Dimension | Import | DirectQuery | Direct Lake | Dual |
|---|---|---|---|---|
| Typical query latency | < 1 sec | 2 to 20 sec | 1 to 3 sec | Depends on neighbor |
| Data freshness | Refresh-bound | Real-time | Near real-time | Hybrid |
| Refresh CU cost | High | None | None | Partial |
| Query CU cost | Low | High | Low (native) | Mixed |
| Max practical size | 400 GB | Unlimited | Unlimited | Dimension-scale |
| Calculated columns | Full | Limited | Limited (2026) | Limited |
| RLS support | Full | Full | Full | Full |
| Governance complexity | Low | Medium | Medium | High |
| Requires Fabric | No | No | Yes | No |
Direct Lake 2026 improvements
Direct Lake in 2026 has shipped four material improvements over the 2024 preview:
- Composite model support: Direct Lake tables can now coexist with Import tables in the same semantic model, allowing hybrid designs that keep dimension tables in Import for speed and fact tables in Direct Lake for freshness.
- Row-level security parity: RLS on Direct Lake now matches Import in filter-context semantics, with no fallback-to-DirectQuery penalty in the common dimensional-filter cases.
- Incremental column paging: The engine now pages columns into memory on first reference and caches them for the duration of the query session, reducing first-query latency for large models from 5 to 10 seconds to 1 to 2 seconds.
- Warehouse-sourced Direct Lake: Direct Lake can now read directly from Fabric Warehouse tables, not just Lakehouse, unifying the semantic layer over both T-SQL and Spark sources.
For the hands-on tuning playbook, see our Direct Lake performance tuning guide.
6. The Binary Decision Tree
Six binary questions, answered in order, route any organization to the right SKU and storage mode. The tree is intentionally decisive: there are no "it depends" branches.
Q1. Do you need Fabric workloads beyond Power BI (Lakehouse, Warehouse, Data Factory, Real-Time Intelligence, or Copilot)?
Yes → F SKU (continue to Q3). No → continue to Q2.
Q2. Is your user count under 150 AND do you have no lakehouse roadmap in the next 18 months?
Yes → PPU; stop here. No → F SKU; continue to Q3.
Q3. Do you need free-license viewers (users who consume but never create)?
Yes → you need F64 or larger; continue to Q4. No → F2 through F32 is economic for small creator-only teams; continue to Q4.
Q4. Will your largest semantic model exceed 50 GB within 18 months?
Yes → Direct Lake is mandatory for that model; continue to Q5. No → Import remains the default; continue to Q6.
Q5. Is your source already a Delta table in OneLake (Lakehouse or Warehouse)?
Yes → Direct Lake is production-ready for this model. No → either build a Fabric Lakehouse first or fall back to Import with incremental refresh. Continue to Q6.
Q6. Does any source require sub-minute freshness with audit-trail guarantees?
Yes → that specific table must use DirectQuery; consider a composite model with Import or Direct Lake for the rest. No → Import or Direct Lake covers the entire model.
Running the tree against the four TCO scenarios above produces the expected answers: 50-user shop goes PPU, 200-user shop with lakehouse roadmap goes F64 + Import moving to Direct Lake, 1,000-user shop goes F128 + Direct Lake + composite, 5,000-user shop goes multi-F128 + Direct Lake + composite + DirectQuery for audit-critical tables.
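Because every branch is binary, the whole tree fits in one small function. The sketch below is a sketch of the tree above, not an official sizing tool; the answers map one-to-one onto Q1 through Q6.

```python
# The six-question decision tree as a function: answers in order,
# returning a (SKU tier, storage mode) pair.

def recommend(fabric_workloads, under_150_no_lakehouse, free_viewers,
              model_over_50gb, source_in_onelake, subminute_audit):
    # Q1/Q2: license path
    if not fabric_workloads and under_150_no_lakehouse:
        return "PPU", "Import"
    # Q3: free-license viewers require F64 or larger
    sku = "F64+" if free_viewers else "F2-F32"
    # Q4/Q5: storage mode for the largest model
    if model_over_50gb:
        mode = ("Direct Lake" if source_in_onelake
                else "Import + incremental refresh (build a Lakehouse first)")
    else:
        mode = "Import"
    # Q6: audit-critical tables go DirectQuery inside a composite model
    if subminute_audit:
        mode += " + DirectQuery (composite)"
    return sku, mode

# The 50-user and 1,000-user scenarios from Section 4:
assert recommend(False, True, False, False, False, False) == ("PPU", "Import")
assert recommend(True, False, True, True, True, False) == ("F64+", "Direct Lake")
```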
7. Migration Triggers
The decision tree determines the destination. Migration triggers determine when to move. Below are the concrete signals that a PPU shop should move to F-series, and that a P1 shop should move to Fabric F SKUs.
PPU shop should move to F-series when any of the following are true
- Total PPU license count exceeds 150 named users (TCO crossover).
- Any single semantic model is approaching the 100 GB PPU ceiling.
- The organization commits to a lakehouse or warehouse roadmap that requires OneLake.
- A business-critical workload requires free-license viewing for a customer-facing audience.
- Copilot or Fabric AI workloads appear in the 12-month roadmap.
- More than 30 percent of PPU users are viewers only (on F64 or larger, they could be free-license viewers).
- Any regulated workload requires audit trails that integrate with Fabric Purview features not available on PPU.
P1 shop should move to Fabric F SKUs when any of the following are true
- Any report in the environment would benefit from Direct Lake (lakehouse-backed fact tables above 50 GB or daily refresh exceeding 30 minutes).
- Capacity utilization drops below 50 percent for more than 12 hours per day (pause-and-resume savings dominate).
- The P SKU contract is within 12 months of renewal.
- The organization has begun Data Factory, Lakehouse, Warehouse, or Real-Time Intelligence projects on any capacity.
- Any Copilot use case has been proposed by the business.
- There is a 3-year TCO case showing greater than 15 percent savings from reservations plus pausing.
- A multi-capacity architecture would improve workload isolation for noisy-neighbor or tenant-isolation reasons.
If any two triggers are true, start the migration planning now. If any four are true, migration is overdue and every month of delay is measurable TCO waste plus feature debt.
8. Measuring Fit with the Capacity Metrics App
Every decision in this framework can be validated against real telemetry before committing. The Fabric Capacity Metrics app is the single tool that matters. It exposes CU consumption by operation type, by workspace, by artifact, and by hour. Running the app for 14 consecutive days against the current capacity produces the baseline needed to size any migration.
DAX queries against the Capacity Metrics semantic model
The following DAX measures can be added to a blank report connected to the Fabric Capacity Metrics semantic model. They surface the four numbers that drive SKU sizing.
```dax
// Peak CU consumed in any 30-second window
Peak CU (30s) =
MAXX (
    VALUES ( 'TimePoints'[TimePoint] ),
    CALCULATE ( SUM ( 'MetricsByItem'[CU] ) )
)

// Percentage of hours the capacity ran above 80% of the SKU limit
Hours Above 80% =
VAR TotalHours = COUNTROWS ( 'TimePoints' ) / 120
VAR HotHours =
    DIVIDE (
        COUNTROWS (
            FILTER ( 'TimePoints', [Peak CU (30s)] > [SKU CU Limit] * 0.8 )
        ),
        120
    )
RETURN DIVIDE ( HotHours, TotalHours )

// Average idle hours per day (candidates for pause)
Idle Hours per Day =
AVERAGEX (
    VALUES ( 'TimePoints'[Date] ),
    CALCULATE (
        COUNTROWS (
            FILTER ( 'TimePoints', [Peak CU (30s)] < [SKU CU Limit] * 0.1 )
        )
    ) / 120
)

// Refresh CU as a percentage of total (Direct Lake savings target)
Refresh CU Share =
DIVIDE (
    CALCULATE ( SUM ( 'Operations'[CU] ), 'Operations'[OperationType] = "DatasetRefresh" ),
    SUM ( 'Operations'[CU] )
)
```

Run these four measures for 14 consecutive days. If Peak CU (30s) stays below 60 percent of the SKU limit, you are oversized. If Hours Above 80% exceeds 10 percent, you are undersized. If Idle Hours per Day exceeds 10, pause-and-resume will deliver major savings. If Refresh CU Share exceeds 40 percent, Direct Lake migration is the single highest-ROI architectural change you can make.
9. Governance and Compliance Impact by Path
Licensing decisions reshape governance. The four paths have materially different operating models.
F SKU governance model
F SKUs are managed in the Azure portal by a Capacity Admin (typically a central Platform or FinOps team) and assigned to workspaces by Power BI Tenant Admins. This dual-admin model distributes authority: Azure controls cost and capacity shape, Power BI controls workspace-level governance. Sensitivity labels, DLP policies, and Purview integration function identically. For regulated industries (HIPAA, SOC 2, FedRAMP), Fabric inherits Azure compliance posture and often simplifies audit because everything runs in a single Azure subscription.
P SKU governance model
P SKUs are managed in the Microsoft 365 admin center by a Power BI Admin. Governance is centralized but inflexible: there is no concept of pause, no autoscale, and no integration with Azure FinOps tooling. Regulated deployments often need to build custom audit processes because the operating surface area is shared with Microsoft 365 rather than Azure.
PPU governance model
PPU has the simplest governance model: per-user licensing with no capacity concept at all. Governance reduces to workspace-level role management and sensitivity labeling. For small teams this is ideal. For regulated deployments, PPU can be limiting because some Fabric-integrated compliance tools are not available.
Governance risk summary
- F SKU + Direct Lake: highest flexibility, most sophisticated governance surface, best fit for regulated enterprises.
- F SKU + Import: flexible capacity management, simpler semantic model governance, lowest risk profile.
- P SKU + Import: legacy governance model, limited future-proofing, renewal risk.
- PPU: simplest governance, no capacity scaling, limited for regulated workloads above 100 users.
Organizations in regulated industries should treat the F SKU migration as a governance upgrade first and a cost optimization second. The audit and compliance footprint of Fabric in 2026 is materially more mature than P SKU Premium, and the value of that maturity typically exceeds the direct capacity savings.
10. Operational Patterns That Actually Save Money
The TCO numbers in Section 4 assume disciplined operations. Without the patterns below, F SKU deployments often cost more than the P SKU they replaced. Implementing the four patterns below delivers the savings the executive deck promised.
Pattern 1: Azure DevOps-driven pause schedule
Schedule capacity pause via an Azure DevOps pipeline or Logic App that calls the Fabric REST API at predictable times. Typical schedule: pause at 7 PM local, resume at 6 AM local weekdays; pause Friday 7 PM, resume Monday 6 AM. Document the schedule in the workspace and surface it on every Power BI admin dashboard. Pause savings of $18,000 to $45,000 per year on F64 are only real if the schedule actually runs.
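As a sanity check, the schedule above (resume 6 AM, pause 7 PM on weekdays, paused all weekend) implies active hours at the top of the 240-to-280-hour range cited in Section 3. The 21.7 weekdays per month is the 52-week average; real months vary.

```python
# Active hours per month implied by the pause schedule above.
WEEKDAYS_PER_MONTH = 52 * 5 / 12     # ~21.7, the 52-week average
ACTIVE_HOURS_PER_DAY = 19 - 6        # 6 AM to 7 PM = 13 hours

active_hours = WEEKDAYS_PER_MONTH * ACTIVE_HOURS_PER_DAY   # ~282 h/month
paused_share = 1 - active_hours / 730                      # ~61% of the month
```

Roughly 61 percent of the month is paused, which is the headroom the pause savings in Section 4 depend on.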
Pattern 2: One-year reservation on steady-state + PAYG for burst
Reserve 70 to 80 percent of capacity as a one-year Azure Reserved Instance. Let the remaining 20 to 30 percent burst on PAYG. This hybrid model protects against over-commitment while capturing the full 41 percent reservation discount on the always-on portion. Three-year reservations add another 15 percent but should only be used when a workload is forecast to run continuously for 36+ months.
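A minimal sketch of the blended rate, using the F128 figures from the pricing table: the reserved share bills 24x7 at the 41 percent discount, while the PAYG share pauses nights and weekends. The 282 active hours per month is an assumption carried over from the typical schedule in Pattern 1.

```python
# Blended monthly cost of a 75/25 reserved/PAYG split on an F128.
F128_PAYG_24X7 = 16_820      # from the pricing table
RESERVED_DISCOUNT = 0.41
reserved_share = 0.75
active_fraction = 282 / 730  # PAYG share pauses nights and weekends

reserved_cost = F128_PAYG_24X7 * reserved_share * (1 - RESERVED_DISCOUNT)
payg_cost = F128_PAYG_24X7 * (1 - reserved_share) * active_fraction
blended = reserved_cost + payg_cost   # ~ $9,067/mo
```

Under these assumptions the blend comes in below even the full one-year reservation ($9,925/mo), because the un-reserved quarter of the capacity spends most of the month paused.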
Pattern 3: Multi-capacity workload isolation
At enterprise scale, splitting workload across two or three F128 or F256 capacities often beats a single larger F SKU. Split by blast radius: one capacity for production Power BI, one for Fabric engineering workloads, one for development and test. Each capacity can pause on its own schedule (dev and test pause on nights and weekends, production pauses only on weekends). The multi-capacity pattern typically delivers 15 to 25 percent additional savings compared to a single equivalent SKU.
Pattern 4: Import to Direct Lake migration for large fact tables
Any Import model larger than 50 GB is a Direct Lake candidate. Migration typically reduces refresh CU to zero and reduces total capacity consumption by 30 to 50 percent on that model. The migration itself is usually a 2-to-4-week effort per model: build a Lakehouse, land the source as a Delta table, create a Direct Lake semantic model, validate DAX parity, redirect reports. For the hybrid approach that keeps dimension tables in Import, see the composite models guide.
Frequently Asked Questions
What is the cheapest Fabric capacity that can run Power BI production workloads?
F64 is the cheapest capacity that delivers the full Power BI Premium experience, including free-license viewers, paginated reports, AI workloads, and 400 GB semantic model size. F2 through F32 are technically capable of hosting Power BI content, but viewers must hold Pro licenses, Copilot is unavailable, and several Premium-only features (such as XMLA write with full feature set and large semantic models above 10 GB) are either disabled or constrained. For any deployment beyond a small departmental pilot, F64 is the practical production floor.
Is PPU still viable in 2026?
Per-User Premium (PPU) at approximately $24 per user per month remains viable for teams with 10 to 70 named users who need Premium-class features but do not have free-license viewers. Break-even against F64 typically lands between 150 and 220 PPU seats, depending on how aggressively pause-and-resume savings are modeled on the F64 side. Above that range, F SKU almost always wins on TCO. Below 150 seats, PPU is usually cheaper and carries zero capacity-management overhead.
Does Direct Lake work without Fabric?
No. Direct Lake is a Fabric-only storage mode that reads Delta Parquet files from OneLake directly into the VertiPaq engine. It cannot run on Power BI Premium P SKUs or PPU. This is the single largest architectural reason organizations migrate from P to F. If your roadmap includes lakehouse-backed semantic models, F SKU is mandatory.
When should I keep Import mode instead of moving to Direct Lake?
Keep Import when your source is not a Delta table in OneLake, when your model needs calculated columns or composite relationships that Direct Lake does not yet support, or when your data volume is small enough that full refresh completes in under 15 minutes. Import also remains the right answer when you need maximum query performance for interactive dashboards with many concurrent users, because Import loads the entire model into memory with no fallback behavior.
What does Direct Lake fallback to DirectQuery actually mean for performance?
Direct Lake reads Parquet columns on demand into the VertiPaq engine. When a query references a feature Direct Lake does not support, such as a calculation that triggers row-level security combined with complex relationships, Fabric transparently falls back to DirectQuery against the SQL endpoint of the Lakehouse or Warehouse. Fallback queries run 3 to 20 times slower than native Direct Lake queries. Monitoring fallback frequency through the Fabric Capacity Metrics app is essential: if more than 5 percent of queries fall back, the model needs redesign.
How big does a Direct Lake semantic model need to be to justify the architectural complexity?
Direct Lake begins to win on TCO when the underlying Delta table exceeds 20 to 50 GB or when refresh of an equivalent Import model exceeds 30 minutes. Under those thresholds, Import is simpler, faster at query time, and usually cheaper in CU consumption. Above those thresholds, the elimination of refresh CU cost and the near-instant data availability start to compound quickly. For tables above 200 GB, Direct Lake is often the only viable choice on a standard F SKU.
Can I mix Direct Lake and Import in the same semantic model?
Yes, through composite modeling. As of 2026, Direct Lake tables can coexist with Import tables in a single semantic model, allowing you to keep small dimension tables in Import for maximum query performance while placing high-volume fact tables in Direct Lake. This hybrid pattern is now the recommended architecture for enterprise lakehouse-backed models because it combines the best of both storage modes without forcing a single-mode decision.
What happens to my existing Power BI Premium P SKU contract?
Existing P SKU contracts continue to honor their renewal terms. Microsoft has not published a hard end-of-life date for P SKUs, but all new feature investment is flowing into F SKUs. New purchases and renewals of P SKUs are being discouraged through both commercial incentives and feature gating. Most organizations should plan a migration to F SKUs within their current licensing term rather than renew a P SKU for another three years.
Is Dual mode still relevant in 2026?
Dual mode remains useful for dimension tables in composite models that need to behave like Import when queried alongside Import-mode fact tables but like DirectQuery when queried alongside DirectQuery or Direct Lake fact tables. It is a niche but important storage mode for hybrid composite models. With Direct Lake coexisting with Import in the same model, Dual mode use is slowly declining, but it has not been deprecated.
How do I calculate CU consumption for Direct Lake vs Import?
Direct Lake charges CU on query time only, not on refresh, because there is no refresh step: the Delta table is the source of truth. Import charges CU on both refresh and query. For a daily-refreshed 50 GB Import model, refresh typically consumes 40 to 60 percent of total CU. Migrating to Direct Lake eliminates that refresh CU entirely. The Fabric Capacity Metrics app breaks down CU by operation type and allows you to model the before-and-after cost for any candidate model.
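The before-and-after arithmetic is straightforward. A minimal sketch, assuming refresh consumes 50 percent of the model's CU (the midpoint of the 40-to-60-percent range above) and query CU stays roughly flat after migration; the 1M CU total is illustrative.

```python
# Before/after CU for a daily-refreshed Import model moving to Direct Lake.
import_total_cu = 1_000_000          # illustrative monthly CU for the model
refresh_share = 0.50                 # midpoint of the 40-60% range

query_cu = import_total_cu * (1 - refresh_share)
direct_lake_cu = query_cu            # refresh CU drops to zero
cu_savings = 1 - direct_lake_cu / import_total_cu   # fraction of CU saved
```

In this simplified model the migration halves the model's CU footprint; the Capacity Metrics app supplies the real refresh share for any candidate model.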
Which F SKU do I need for 5,000 viewers?
At 5,000 viewers, F64 is the minimum license that permits free-license viewing. Actual capacity sizing depends on concurrency, model size, and report complexity rather than viewer count. A typical 5,000-viewer deployment with 500 concurrent users and 10 to 20 active semantic models runs comfortably on F128 or F256 during peak hours. Organizations often provision F256 as the steady-state capacity with autoscale to F512 during month-end peaks.
Does moving to F SKU affect my Pro and PPU licenses?
Pro licenses remain required for content creators regardless of whether the capacity is P, F, or PPU. F64 and larger let free-license users view content in premium-assigned workspaces, matching the Premium benefit. If you migrate from PPU to F64, you can drop PPU licenses for users who are viewers only, but creators still need Pro. The license audit typically saves $10 to $14 per viewer per month when migrating from PPU to F SKU with Pro creators.
The Advisory Bottom Line
In 2026, Fabric F64 with a one-year reservation, scheduled pause and resume, and Direct Lake for fact tables above 50 GB is the highest-ROI Power BI license architecture for the majority of enterprises. PPU remains the right answer for small teams with no lakehouse roadmap. P SKUs are a renewal trap: if your contract comes up in the next 12 months, plan the F SKU migration now.
Direct Lake has matured from preview curiosity to production default for any fact table above 50 GB sourced from OneLake. Import remains the right choice under that threshold. DirectQuery stays reserved for the narrow audit-critical real-time tier. Dual mode exists for composite-model dimensions. Every other storage-mode combination is an edge case.
For continued reading, our 2026 Power BI pricing and licensing guide covers the full license taxonomy, and our Power BI Embedded SaaS architecture guide addresses the multi-tenant ISV scenario that sits adjacent to the enterprise licensing question answered here.
Need a Fabric Licensing + Storage Mode Review?
Our consultants deliver a 10-day Fabric licensing and storage-mode review that models your exact TCO across F SKU, PPU, and P SKU paths, and identifies every semantic model that should move to Direct Lake. No forms required for the reading material: contact us when you want the custom analysis.