Power BI Premium & Fabric Capacity Planning Guide 2026: SKU Sizing, CU Optimization & Cost Strategy
Master Power BI Premium and Fabric capacity planning. Learn SKU sizing, CU consumption, autoscale, monitoring, and cost optimization for enterprise deployments.
Capacity planning is the single most consequential infrastructure decision in a Power BI Premium or Microsoft Fabric deployment. Undersize your capacity and users experience slow report renders, failed refreshes, and throttled queries during peak hours. Oversize it and you burn tens of thousands of dollars per month on idle compute. This guide provides the analytical framework, sizing methodology, and operational practices that our <a href="/services/enterprise-deployment">enterprise deployment team</a> uses to right-size capacity for organizations ranging from 500 to 50,000+ users.
Power BI Premium vs. Premium Per User vs. Fabric Capacity
Before diving into capacity planning, you need to understand which licensing model applies to your scenario. Each model has fundamentally different capacity characteristics:
Power BI Premium (P SKUs)
Power BI Premium dedicated capacity is purchased as P-series SKUs (P1 through P5) through an Enterprise Agreement or Microsoft Customer Agreement. Each P SKU provides a fixed allocation of backend v-cores and memory that is exclusively reserved for your organization. Premium capacity runs on dedicated infrastructure—your workloads do not compete with other tenants for compute resources.
Key characteristics:

- Dedicated infrastructure: Your v-cores and memory are reserved 24/7 regardless of utilization
- Unlimited viewers: Any user in your organization (or external guests) can view content in Premium workspaces without a Pro or PPU license
- Full feature set: Paginated reports, AI visuals, deployment pipelines, XMLA endpoint, dataflows, large datasets (up to 400 GB per model on P5)
- On-premises reporting: Power BI Report Server license included
- Predictable billing: Fixed monthly cost per SKU, no consumption-based surprises
Premium Per User (PPU)
PPU provides Premium-level features at a per-user cost of $20/user/month. There is no dedicated capacity—PPU workloads run on shared Microsoft-managed infrastructure with per-user resource limits.
Key characteristics:

- No capacity to manage: Microsoft handles all resource allocation
- Per-user throttling: Each user gets a fixed resource allocation; heavy workloads may be throttled
- Every user needs PPU: Both creators and viewers in PPU workspaces must have PPU licenses
- Feature parity: Most Premium features available (XMLA, deployment pipelines, paginated reports, AI)
- Best for: Organizations with 50-300 Premium feature users where dedicated capacity cost does not justify the user count
Fabric Capacity (F SKUs)
Microsoft Fabric capacity (F-series SKUs) is the evolution of Power BI Premium capacity. F SKUs provide Capacity Units (CUs) that are consumed by all Fabric workloads—Power BI, Data Engineering, Data Warehouse, Real-Time Intelligence, Data Factory, and Data Science. This is the model Microsoft is investing in going forward.
Key characteristics:

- Unified compute: One capacity pool serves all Fabric workloads, not just Power BI
- Consumption-based options: Available as reserved capacity (monthly commitment) or pay-as-you-go (per-second billing)
- Broader SKU range: F2, F4, F8, F16, F32, F64, F128, F256, F512, F1024, F2048—far more granular than P SKUs
- Pause/resume: F SKUs can be paused when not in use (P SKUs cannot)
- Azure integration: Managed through Azure portal, supports Azure RBAC, tags, and cost management tools
For organizations evaluating licensing models, see our detailed <a href="/blog/power-bi-pricing-licensing-guide-2026">Power BI pricing and licensing guide</a>.
SKU Sizing: P-Series and F-Series Comparison
The following table maps P SKUs to their F SKU equivalents and details the resource allocation for each tier:
| P SKU | Equivalent F SKU | Capacity Units (CU) | Backend V-Cores | Memory (GB) | Max Dataset Size | Monthly Cost (Approx.) |
|---|---|---|---|---|---|---|
| — | F2 | 2 | 2 | 6 | 3 GB | $262/mo |
| — | F4 | 4 | 4 | 12 | 3 GB | $524/mo |
| — | F8 | 8 | 8 | 25 | 10 GB | $1,049/mo |
| — | F16 | 16 | 16 | 50 | 10 GB | $2,098/mo |
| — | F32 | 32 | 32 | 100 | 10 GB | $4,195/mo |
| P1 | F64 | 64 | 8 | 25 | 25 GB | $4,995/mo |
| P2 | F128 | 128 | 16 | 50 | 50 GB | $9,990/mo |
| P3 | F256 | 256 | 32 | 100 | 100 GB | $19,980/mo |
| P4 | F512 | 512 | 64 | 200 | 200 GB | $39,960/mo |
| P5 | F1024 | 1024 | 128 | 400 | 400 GB | $79,920/mo |
| — | F2048 | 2048 | 256 | 800 | 400 GB | $159,840/mo |
Critical note: F SKUs below F64 do not support all Power BI Premium features. Specifically, F2 through F32 do not include unlimited viewer access (viewers still need Pro licenses), XMLA read/write endpoint, paginated reports, or deployment pipelines. If you need those features, you must deploy F64 or higher—or use PPU for a smaller user base.
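As a rough sketch, the sizing table above can be turned into a helper that picks the smallest F SKU covering measured peak CU demand plus growth headroom, with an F64 floor when full Premium features are required. The function and the 35% headroom default are illustrative, not an official Microsoft sizing formula:

```python
# Hypothetical SKU picker based on the approximate CU allocations tabled above.
# The 35% headroom default reflects the 30-40% guidance later in this guide.
F_SKUS = {"F2": 2, "F4": 4, "F8": 8, "F16": 16, "F32": 32, "F64": 64,
          "F128": 128, "F256": 256, "F512": 512, "F1024": 1024, "F2048": 2048}

def smallest_sku(peak_cu: float, headroom: float = 0.35,
                 needs_premium_features: bool = True) -> str:
    """Smallest F SKU covering peak_cu * (1 + headroom).

    When needs_premium_features is True, anything below F64 is skipped
    (sub-F64 SKUs lack unlimited viewers, XMLA read/write, etc.).
    """
    required = peak_cu * (1 + headroom)
    for sku, cu in F_SKUS.items():  # dict preserves insertion order (3.7+)
        if cu >= required and not (needs_premium_features and cu < 64):
            return sku
    raise ValueError("demand exceeds the largest available SKU")

print(smallest_sku(40))                                # F64 (54 CU required)
print(smallest_sku(60))                                # F128 (81 CU required)
print(smallest_sku(10, needs_premium_features=False))  # F16 (13.5 CU required)
```

Treat the output as a starting point only; validate against 2-4 weeks of measured consumption before committing to a reservation.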
How to Choose Between F and P SKUs
Microsoft has signaled that P SKUs will eventually be deprecated in favor of F SKUs. For new deployments, always choose F SKUs. For existing P SKU deployments, plan migration to F SKUs during your next Enterprise Agreement renewal. The functional capabilities are identical at equivalent tiers (P1 = F64), but F SKUs provide:
- Azure billing integration (consolidated invoice, cost management, tagging)
- Pause/resume (save money during off-hours)
- Pay-as-you-go option (no upfront commitment)
- Sub-P1 tiers (F2-F32 for development, testing, and small workloads)
- Future feature investment (new Fabric features target F SKUs first)
Understanding Capacity Units (CU) and Consumption
Capacity Units are the universal compute currency in Microsoft Fabric. Every operation—a Power BI report render, a dataset refresh, a Spark notebook execution, a SQL warehouse query—consumes CUs. Understanding how CUs are consumed is essential for accurate capacity planning.
CU Consumption Model
Fabric uses a smoothing mechanism for CU consumption. Operations are classified as:
- Interactive operations: Report renders, visual queries, DAX queries, dashboard tile refreshes. These consume CUs in real-time and are subject to interactive throttling if the capacity is overloaded.
- Background operations: Dataset refreshes, dataflow executions, Spark jobs, warehouse queries. These consume CUs that are smoothed over a 24-hour window. A refresh that uses 100 CU-seconds is spread across the day, so the effective per-second impact is minimal.
This smoothing model means that scheduled refreshes and batch jobs have a much smaller impact on perceived capacity utilization than interactive queries. A capacity that appears 80% utilized based on interactive workloads may still handle significant background processing.
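To see why smoothing makes background work so cheap on a per-second basis, here is a back-of-envelope calculation using the illustrative 100 CU-second refresh mentioned above:

```python
# Illustrative sketch of Fabric's 24-hour background smoothing.
# The 100 CU-second refresh is a hypothetical example, not a measured value.

SMOOTHING_WINDOW_S = 24 * 60 * 60  # background work is spread over 24 hours

def smoothed_cu_per_second(total_cu_seconds: float) -> float:
    """Effective per-second CU draw of a background operation after smoothing."""
    return total_cu_seconds / SMOOTHING_WINDOW_S

impact = smoothed_cu_per_second(100)
print(f"{impact:.6f} CU/s")  # ~0.001157 CU/s -- negligible against an F64's 64 CU
```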
CU Consumption by Workload
| Workload | CU Consumption Pattern | Typical Impact |
|---|---|---|
| Power BI report rendering | Interactive, per-visual query | 0.01-5 CU-seconds per query |
| Power BI dataset refresh (Import) | Background, smoothed over 24h | 10-500 CU-seconds per refresh |
| Direct Lake query | Interactive, very efficient | 0.001-0.5 CU-seconds per query |
| Paginated report render | Interactive, can be heavy | 0.1-50 CU-seconds per render |
| Spark notebook | Background, smoothed | 10-10,000 CU-seconds per execution |
| Data warehouse query | Background or interactive | 0.01-1,000 CU-seconds per query |
| Dataflow Gen2 | Background, smoothed | 5-200 CU-seconds per execution |
Throttling Behavior
When a capacity is overloaded, Fabric applies throttling in stages:
- Smoothing (0-10 min overuse): Background operations are delayed but interactive queries continue normally
- Interactive delay (10-60 min overuse): Interactive queries are queued with increasing delays (users notice slower report loads)
- Rejection (60+ min sustained overuse): Interactive requests may be rejected entirely—reports fail to load
Understanding these thresholds is critical. A capacity that occasionally hits smoothing is fine. A capacity that regularly enters interactive delay needs to be upsized. A capacity that hits rejection is undersized and causing business impact.
Autoscale and Burst Capacity
Power BI Premium Autoscale
Premium capacity supports autoscale through Azure. When enabled, autoscale automatically adds v-cores (in 1 v-core increments, up to a configurable maximum) when the capacity exceeds its base allocation. Key considerations:
- Billing: Each autoscale v-core is billed per 24-hour period at approximately $85/v-core/day. This can add up quickly—24 autoscale v-cores sustained for a month costs more than upgrading to the next P SKU.
- Response time: Autoscale activates within 1-2 minutes of threshold breach, but there is a lag. Users may experience throttling during the activation window.
- Maximum cap: You set the maximum number of autoscale v-cores (1-24). Always set a cap to avoid runaway costs from unexpected workload spikes.
- Evaluation window: Autoscale evaluates usage over 1-minute intervals. Short spikes may resolve before autoscale engages.
EPC Group recommendation: Use autoscale as a safety net, not as your primary capacity strategy. If autoscale activates more than 3 times per week, your base capacity is undersized. Upgrade to the next SKU tier—it will cost less than sustained autoscale charges.
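The breakeven behind this recommendation is simple arithmetic. A hypothetical sketch using the approximate $85/v-core/day rate and the SKU prices from the table earlier:

```python
# Back-of-envelope autoscale breakeven. Rates are the approximate figures
# quoted in this guide; check current Azure pricing before deciding.

AUTOSCALE_VCORE_PER_DAY = 85.0  # approx. $ per autoscale v-core per 24h

def autoscale_monthly_cost(vcores: int, days_active: int) -> float:
    """Cost of running `vcores` autoscale v-cores for `days_active` days."""
    return vcores * AUTOSCALE_VCORE_PER_DAY * days_active

sustained = autoscale_monthly_cost(8, 30)  # 8 v-cores, every day for a month
upgrade_delta = 9_990 - 4_995              # approx. P1 -> P2 incremental cost

print(f"sustained autoscale: ${sustained:,.0f}/mo")   # $20,400/mo
print(f"SKU upgrade instead: ${upgrade_delta:,.0f}/mo")  # ~$5,000/mo
```

The same math explains the "3 activations per week" rule of thumb: occasional bursts are cheap, but anything resembling a daily pattern pays for the next tier several times over.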
Fabric Capacity Burst
F SKU capacities support burst above the base CU allocation. Burst allows short-term consumption above the SKU CU limit, with the excess smoothed over the evaluation window. This provides headroom for legitimate peak usage without immediate throttling.
However, burst is not infinite. Sustained overuse triggers the same smoothing, delay, and rejection cascade described above. Burst is designed for spiky workloads (everyone opens reports at 9:00 AM) rather than sustained overload (continuous heavy refreshes all day).
Monitoring with the Capacity Metrics App
The Microsoft Fabric Capacity Metrics App is the primary tool for monitoring capacity utilization. Install it from AppSource and connect it to your capacity to surface the metrics below.
Key Metrics to Monitor
- CU utilization % (interactive): Percentage of CU allocation consumed by interactive operations. Target: below 70% sustained, below 90% peak.
- CU utilization % (background): Percentage consumed by background operations. This can exceed 100% because of 24-hour smoothing—but watch the trend.
- Throttling events: Count and duration of throttling episodes. Zero tolerance for rejection events. Investigate any interactive delay events exceeding 5 minutes.
- Overload minutes: Minutes per day the capacity spends in overloaded state. Target: zero for production capacities.
- Top consumers: Identifies which workspaces, datasets, and reports consume the most CUs. Essential for optimization targeting.
- Timepoint detail: Drill into specific time windows to correlate throttling with specific operations (e.g., a 2 GB dataset refresh triggering throttling during peak interactive hours).
Setting Up Alerts
Configure Azure Monitor alerts for:

- CU utilization exceeding 80% for more than 15 consecutive minutes
- Any throttling event (interactive delay or rejection)
- Autoscale activation (if enabled)
- Background processing queue depth exceeding baseline
Our <a href="/services/power-bi-architecture">Power BI architecture team</a> helps clients implement comprehensive monitoring dashboards that combine Capacity Metrics App data with Azure Log Analytics for end-to-end observability.
Performance Optimization for Capacity
Before upgrading to a larger SKU, exhaust optimization opportunities on your current capacity. These strategies can reduce CU consumption by 30-60%:
1. Optimize Data Models
- Reduce model size: Remove unused columns, reduce cardinality on high-cardinality text columns, use integer keys instead of string keys. A 2 GB model that drops to 800 MB after optimization uses proportionally fewer CUs for every query.
- Implement aggregations: Pre-aggregated tables for common query patterns (monthly summaries, department totals) can reduce query CU consumption by 90%+ for those patterns.
- Use Direct Lake mode: For Fabric capacities, Direct Lake eliminates the import refresh entirely—the model reads directly from Delta tables in OneLake. This eliminates the background CU cost of scheduled refreshes.
- Reduce visual count per page: Each visual fires at least one DAX query. A report page with 30 visuals fires 30+ queries simultaneously. Target 8-12 visuals per page.
2. Optimize Refresh Strategy
- Incremental refresh: Only refresh new and changed data partitions. A table with 3 years of history that receives daily data should only refresh the current month partition, not all 36 partitions.
- Stagger refresh schedules: If 50 datasets all refresh at 6:00 AM, the capacity experiences a massive spike. Distribute refreshes across a 2-hour window (5:00 AM - 7:00 AM) to flatten the curve.
- Refresh during off-peak hours: Background operations smoothed during low-interactive periods have minimal user impact.
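The staggering advice above is easy to automate. A minimal sketch (dataset names, dates, and the schedule consumer are illustrative) that spaces refresh start times evenly across a window:

```python
# Hypothetical scheduler that spreads dataset refreshes evenly across a
# window instead of piling them all at 6:00 AM. Names are illustrative.
from datetime import datetime, timedelta

def stagger_refreshes(datasets: list[str], start: datetime,
                      window_minutes: int) -> dict[str, datetime]:
    """Assign each dataset an evenly spaced start time within the window."""
    step = window_minutes / max(len(datasets), 1)
    return {name: start + timedelta(minutes=i * step)
            for i, name in enumerate(datasets)}

schedule = stagger_refreshes(
    [f"dataset_{i:02d}" for i in range(50)],   # the 50-dataset example above
    start=datetime(2026, 1, 5, 5, 0),          # 5:00 AM
    window_minutes=120,                        # the 5:00-7:00 AM window
)
print(schedule["dataset_00"].strftime("%H:%M"),
      schedule["dataset_49"].strftime("%H:%M"))  # 05:00 06:57
```

In practice you would feed these times into whatever drives your refreshes (Power BI scheduled refresh settings, or the refresh REST API from an orchestrator).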
3. Optimize Queries
- Reduce DAX complexity: Replace nested CALCULATE patterns with variables. Avoid FILTER() on large tables when KEEPFILTERS or simple filter arguments work.
- Implement query caching: Enable the query cache for reports with many viewers seeing identical data (executive dashboards).
- Use composite models: Combine Import tables (fast, CU-efficient) with DirectQuery connections (real-time, higher CU cost) strategically.
4. Workspace and Content Organization
- Separate dev/test/prod: Use deployment pipelines to isolate development workloads from production capacity. Better yet, put dev/test on a separate F8 or F16 capacity.
- Archive inactive content: Datasets and reports that have not been accessed in 90+ days still consume memory. Move them to a low-tier capacity or remove them.
- Enforce governance: Implement workspace creation policies and naming conventions. Ungoverned workspace sprawl is the number one cause of unexpected capacity consumption. Learn more about our <a href="/services/microsoft-fabric">Microsoft Fabric governance approach</a>.
Multi-Geo Deployment Considerations
For organizations with users and data across multiple geographic regions, multi-geo deployment allows you to pin Fabric capacity to specific Azure regions. This addresses:
Data Residency Requirements
- GDPR (EU): Data must remain within EU boundaries. Deploy a capacity in West Europe or North Europe for EU user data.
- HIPAA (US): While HIPAA does not mandate geographic restrictions, many healthcare organizations require US-only data residency. Pin capacity to US East or US West.
- Data sovereignty: Government agencies and regulated industries may require data in specific national boundaries (Canada Central, Australia East, UK South).
Multi-Geo Architecture
- Home region: Your tenant default region hosts the primary capacity
- Satellite regions: Additional capacities in other Azure regions for data residency compliance
- Workspace assignment: Each workspace is assigned to a specific capacity (and therefore region). Users access workspaces across regions transparently.
- Cost implication: Each region requires a separate capacity purchase. An organization needing US + EU coverage needs at minimum two F64 SKUs ($10,000+/month combined).
Performance Implications
- User proximity: Users querying a capacity in a distant region experience 50-200ms additional latency per query. For interactive reports with 10+ visuals, this adds 0.5-2 seconds to page load.
- Cross-region data access: A report in EU capacity accessing a data source in the US adds latency to every refresh and DirectQuery call.
- Recommendation: Place capacity in the region closest to the majority of users. Use data gateway or replication to bring source data close to the capacity rather than querying across regions.
Cost Optimization Strategies
1. Right-Size Through Monitoring
Run your production capacity for 30 days with comprehensive monitoring before making sizing decisions. Knee-jerk upgrades based on a single throttling event waste budget. Analyze:

- Average CU utilization across business hours (target 50-65%)
- Peak CU utilization windows (target below 85%)
- Throttling event frequency and duration
- Weekend/off-hours utilization (opportunity for pause/resume)
2. Pause and Resume for Non-Production Capacities
F SKUs can be paused via the Azure portal, PowerShell, or REST API. A development capacity that runs only 7:00 AM-11:00 PM on weekdays and stays paused all weekend avoids roughly half of its compute cost; a stricter 8-hour weekday schedule (paused 16 hours per day plus weekends) pushes the savings to about 76%. Automate pause/resume with Azure Automation runbooks:

- Resume at 7:00 AM local time on weekdays
- Pause at 11:00 PM on weekdays
- Keep paused all weekend unless weekend testing is scheduled
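A pause/resume runbook ultimately calls the ARM REST API. The sketch below builds the action URL and posts it with a bearer token; the `Microsoft.Fabric/capacities` resource path and the api-version shown are assumptions to verify against the current Azure REST reference, and Entra ID token acquisition is out of scope here:

```python
# Sketch of suspending/resuming a Fabric capacity via the Azure ARM REST API.
# Resource path and api-version are assumptions -- confirm against the
# current Azure REST documentation before relying on them.
import urllib.request

API_VERSION = "2023-11-01"  # assumed; check the published API versions

def capacity_action_url(subscription_id: str, resource_group: str,
                        capacity_name: str, action: str) -> str:
    """Build the ARM URL for a capacity 'suspend' or 'resume' POST action."""
    if action not in ("suspend", "resume"):
        raise ValueError("action must be 'suspend' or 'resume'")
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.Fabric/capacities"
        f"/{capacity_name}/{action}?api-version={API_VERSION}"
    )

def post_capacity_action(url: str, bearer_token: str) -> int:
    """POST the action with an AAD bearer token; returns the HTTP status."""
    req = urllib.request.Request(
        url, method="POST",
        headers={"Authorization": f"Bearer {bearer_token}"},
    )
    with urllib.request.urlopen(req) as resp:  # network call; not run here
        return resp.status
```

An Automation runbook on the weekday schedule above would call `post_capacity_action` with the `suspend` URL at 11:00 PM and the `resume` URL at 7:00 AM.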
3. Reserved Instances
Azure Reservations for Fabric capacity provide 20-40% savings over pay-as-you-go pricing with 1-year or 3-year commitments. For production capacities that run 24/7, a 1-year reservation pays for itself within 3-4 months. Always purchase reservations for production. Use pay-as-you-go for dev/test and experimental workloads.
4. Workspace Consolidation
Each workspace does not need its own capacity. Consolidate workspaces onto shared capacities with clear governance policies. A single F64 can serve 5-10 production workspaces if the total CU consumption stays within limits. Use the Capacity Metrics App to validate.
5. Hybrid Licensing Strategy
Not every user needs Premium features. Use a hybrid approach:

- Pro licenses ($10/user/month) for users who only need standard Power BI features and shared capacity
- PPU ($20/user/month) for individual power users who need Premium features but do not justify dedicated capacity
- Fabric capacity (F64+) for workspaces serving large viewer populations (500+ users) or requiring Fabric workloads
This hybrid approach can reduce total licensing cost by 40-60% compared to putting all users on PPU or buying oversized capacity.
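The PPU-versus-capacity breakeven behind this strategy is easy to compute. An illustrative sketch using the approximate prices quoted in this guide (the crossover lands near 250 users):

```python
# Illustrative PPU vs F64 breakeven, using the approximate prices quoted
# in this guide ($20/user/mo PPU, ~$4,995/mo F64). Verify current pricing.

PPU_PER_USER = 20.0
F64_MONTHLY = 4_995.0

def cheaper_option(premium_users: int) -> str:
    """Which licensing route is cheaper for this many Premium-feature users."""
    ppu_cost = premium_users * PPU_PER_USER
    return "F64 capacity" if F64_MONTHLY < ppu_cost else "PPU"

print(cheaper_option(200))  # PPU ($4,000 < $4,995)
print(cheaper_option(300))  # F64 capacity ($6,000 > $4,995)
```

Note this compares license cost only; F64 also brings unlimited viewers and Fabric workloads, which can tip the decision well below the raw breakeven.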
When to Upgrade Capacity Tiers
Upgrade your capacity SKU when any of the following conditions persist for more than 2 consecutive weeks:
- Sustained CU utilization above 80% during business hours after all optimization opportunities are exhausted
- Interactive throttling events occurring more than 3 times per week
- Autoscale activating daily (the autoscale cost likely exceeds the SKU upgrade cost)
- Dataset refresh windows expanding beyond acceptable SLA (e.g., refresh that used to complete in 20 minutes now takes 90 minutes due to resource contention)
- User complaints about report load times that correlate with capacity utilization peaks in the Metrics App
- New workloads onboarding (adding Fabric Data Engineering or Data Warehouse workloads to a capacity already near its Power BI limit)
Downgrade signals: If your capacity averages below 30% CU utilization during business hours for 30+ consecutive days and you are not in a growth phase, consider moving to a smaller SKU. The savings can be significant—dropping from F128 to F64 saves approximately $5,000/month.
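These triggers can be encoded as a simple weekly rule check against Capacity Metrics App data. A sketch using the thresholds from this section (the downgrade window is compressed to two weeks here for brevity; the text recommends 30+ days of low utilization before downsizing):

```python
# Sketch encoding the upgrade/downgrade signals above as a weekly rule check.
# Thresholds mirror this section's guidance; tune them to your own SLAs.
from dataclasses import dataclass

@dataclass
class CapacityWeek:
    avg_business_hours_util: float    # fraction, e.g. 0.82 = 82%
    interactive_throttle_events: int  # count for the week
    autoscale_activations: int        # count for the week

def sizing_signal(recent_weeks: list[CapacityWeek]) -> str:
    """'upgrade' if any trigger persists across every week, else hold/downgrade."""
    if not recent_weeks:
        return "hold"
    if all(w.avg_business_hours_util > 0.80
           or w.interactive_throttle_events > 3
           or w.autoscale_activations >= 7  # roughly daily
           for w in recent_weeks):
        return "upgrade"
    if all(w.avg_business_hours_util < 0.30
           and w.interactive_throttle_events == 0
           for w in recent_weeks):
        return "downgrade-candidate"
    return "hold"
```

Feeding this from a scheduled export of the Metrics App turns the upgrade decision from a debate into a dashboard tile.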
Fabric Capacity vs. Dedicated Power BI Capacity
For organizations choosing between a Fabric-first or Power BI-only capacity strategy:
| Factor | Fabric Capacity (F SKU) | Dedicated Power BI (P SKU) |
|---|---|---|
| Workload scope | All Fabric workloads (Power BI + Data Engineering + Warehouse + Real-Time + Data Science) | Power BI only |
| Billing flexibility | Pay-as-you-go or reserved | Annual commitment only |
| Pause/resume | Yes (F SKUs via Azure) | No (P SKUs run 24/7) |
| Granularity | 11 SKU tiers (F2-F2048) | 5 SKU tiers (P1-P5) |
| Management | Azure portal | Power BI Admin portal |
| Future investment | Primary platform | Maintenance mode (eventual deprecation) |
| Minimum for Premium features | F64 | P1 |
| Best for | Organizations adopting Fabric holistically | Organizations using Power BI only with no Fabric plans |
EPC Group recommendation: Choose Fabric capacity (F SKUs) for all new deployments. The flexibility, cost optimization options (pause/resume, pay-as-you-go), and unified management make it the superior choice even if you only use Power BI today. When you eventually adopt Fabric Data Engineering, Warehouse, or Real-Time Intelligence, the capacity is already in place.
Capacity Planning Methodology
Our <a href="/services/enterprise-deployment">enterprise deployment team</a> follows this proven methodology for capacity planning engagements:
Phase 1: Discovery (Week 1-2)

- Inventory all Power BI workspaces, datasets, reports, and users
- Measure current dataset sizes, refresh frequencies, and query volumes
- Identify peak usage patterns (time of day, day of week, month-end spikes)
- Document compliance and data residency requirements
- Map current licensing (Pro, PPU, existing Premium) and costs

Phase 2: Sizing Analysis (Week 2-3)

- Calculate total CU demand based on discovered workloads
- Apply optimization multipliers (how much CU reduction is achievable through model optimization, refresh staggering, etc.)
- Model growth projections (new users, new datasets, Fabric workload adoption)
- Evaluate multi-geo requirements and associated capacity needs
- Compare scenarios: single large capacity vs. multiple smaller capacities

Phase 3: Recommendation (Week 3-4)

- Deliver SKU recommendation with cost analysis
- Provide optimization roadmap (what to fix before going live)
- Define monitoring and alerting configuration
- Establish upgrade/downgrade triggers with specific thresholds
- Create capacity governance policy (workspace assignment, resource limits, admin procedures)

Phase 4: Implementation and Monitoring (Week 4-8)

- Provision capacity and migrate workspaces
- Implement monitoring dashboards and alerts
- Optimize top CU-consuming datasets and reports
- Validate performance against SLA targets
- Hand off to operations team with runbooks
Getting Started with Capacity Planning
Capacity planning is not a one-time exercise—it is an ongoing operational discipline. As your user base grows, datasets expand, and you adopt more Fabric workloads, your capacity needs will evolve. The organizations that succeed are those that monitor proactively, optimize continuously, and scale deliberately rather than reactively.
If your organization is evaluating Power BI Premium or Fabric capacity, or if you are experiencing throttling and performance issues on an existing capacity, contact our team for a complimentary capacity assessment. We will analyze your current workload profile, identify optimization opportunities, and recommend the right-sized capacity for your requirements and budget.
<a href="/contact">Schedule a free consultation</a> to get started. You can also explore our <a href="/services/power-bi-architecture">Power BI architecture services</a> and <a href="/services/microsoft-fabric">Microsoft Fabric consulting</a> for comprehensive platform strategy.
Frequently Asked Questions
What is the difference between Power BI Premium capacity and Fabric capacity?
Power BI Premium capacity (P SKUs) provides dedicated compute exclusively for Power BI workloads—report rendering, dataset refreshes, paginated reports, and dataflows. Fabric capacity (F SKUs) provides Capacity Units (CUs) that serve all Microsoft Fabric workloads including Power BI, Data Engineering (Spark), Data Warehouse, Real-Time Intelligence, Data Factory, and Data Science. Functionally, P1 equals F64 for Power BI workloads. The key advantages of F SKUs are pause/resume capability (P SKUs run 24/7), pay-as-you-go billing options, more granular SKU tiers (F2-F2048 vs P1-P5), and Azure portal management with cost tagging. Microsoft is investing in F SKUs going forward, and P SKUs will eventually be deprecated. For new deployments, always choose F SKUs.
How do I determine the right SKU size for my organization?
Start with discovery: inventory your datasets (count and size in GB), refresh frequency, concurrent user count during peak hours, and query complexity. Use the Capacity Metrics App on an existing capacity (or a trial) to measure actual CU consumption for 2-4 weeks. For a rough initial estimate: organizations with under 500 Power BI users and datasets under 10 GB typically start with F64 (equivalent to P1). Organizations with 500-2000 users or datasets between 10-50 GB often need F128 (P2 equivalent). Large enterprises with 2000+ users, complex composite models, or heavy refresh schedules may need F256 or higher. Always plan for 30-40% headroom above measured peak utilization to accommodate growth and avoid throttling during spikes.
Can I pause Fabric capacity to save costs, and what happens to my reports when capacity is paused?
Yes, F SKU capacities can be paused via the Azure portal, PowerShell, or REST API. When paused, you stop paying for compute—only OneLake storage charges continue. However, all workloads assigned to that capacity become unavailable: reports will not render, datasets will not refresh, Spark jobs will not run, and warehouse queries will fail. Users will see an error message indicating the capacity is paused. When you resume, all workloads become available again within 1-2 minutes, but datasets in Import mode will need to be reloaded into memory (the first report load after resume may be slower). For production capacities serving business-critical reports, do not pause during business hours. Pause/resume is best suited for development, testing, and UAT capacities that are only needed during work hours.
What causes throttling on Power BI Premium or Fabric capacity, and how do I fix it?
Throttling occurs when cumulative CU consumption exceeds the capacity allocation. Fabric applies throttling in three stages: smoothing (background operations delayed, interactive unaffected), interactive delay (queries queued with increasing wait times), and rejection (requests fail entirely). Common causes include: too many large dataset refreshes running simultaneously during peak hours, oversized data models generating expensive DAX queries, report pages with 20-30+ visuals each firing concurrent queries, and unoptimized DirectQuery data sources. To fix throttling: stagger refresh schedules across a 2-hour window, reduce dataset sizes by removing unused columns, limit visuals to 8-12 per page, implement aggregations for common query patterns, enable incremental refresh, and move dev/test workloads to a separate capacity. If throttling persists after optimization, upgrade to the next SKU tier.
How does autoscale work with Power BI Premium, and is it cost-effective?
Autoscale automatically adds v-cores to your Premium capacity when utilization exceeds the base allocation. It activates within 1-2 minutes and scales in 1 v-core increments up to a maximum you configure (1-24 additional v-cores). Each autoscale v-core costs approximately $85 per 24-hour period. Autoscale is cost-effective as an occasional safety net for unexpected spikes—a few activations per month cost less than upgrading to the next SKU. However, if autoscale activates daily or multiple times per week, the accumulated cost quickly exceeds the price of the next SKU tier. For example, 8 autoscale v-cores sustained for 30 days costs roughly $20,400—far more than the $5,000 incremental cost of upgrading from P1 to P2. Use autoscale to prevent throttling during rare peaks, but treat frequent activation as a signal to right-size your base capacity.
What is the minimum Fabric SKU needed for Power BI Premium features like paginated reports and XMLA endpoint?
F64 is the minimum Fabric SKU that provides full Power BI Premium feature parity. F SKUs below F64 (F2, F4, F8, F16, F32) support basic Power BI functionality but do not include: unlimited content viewing for free users (viewers still need Pro licenses), XMLA read/write endpoint for third-party tool connectivity, paginated reports, deployment pipelines, email subscriptions, or datamart creation. If your organization requires any of these features, you need F64 or higher. For organizations that only need Premium features for a small group of users (under 300), Premium Per User (PPU) at $20/user/month may be more cost-effective than an F64 at approximately $4,995/month—the breakpoint is around 250 users where F64 becomes cheaper than PPU.
How should I plan capacity for a multi-geo Power BI deployment?
Multi-geo deployments require a separate Fabric or Premium capacity in each Azure region where you need data residency. Each capacity is an independent purchase with its own SKU tier and cost. For example, an organization with users in the US and EU needing GDPR-compliant data residency in both regions needs at minimum two F64 capacities (approximately $10,000/month combined). Plan multi-geo by: identifying which workspaces contain EU personal data (must go to EU capacity), which contain US-regulated data (US capacity), and which contain non-sensitive global data (home region capacity). Assign workspaces to the appropriate capacity based on data residency, not user location—users can access workspaces in any region with 50-200ms additional latency for cross-region access. Minimize cross-region queries by co-locating data sources and capacity in the same region where possible.