Using the Fabric Monitoring Hub

Monitor all Microsoft Fabric activities from a central hub. Track pipeline runs, Spark jobs, data refreshes, and capacity utilization in real time.

By Administrator

The Microsoft Fabric Monitoring Hub provides unified visibility into every operation running across your Fabric environment—pipeline runs, Spark jobs, notebook executions, dataflow refreshes, and semantic model refreshes—all in a single interface. For administrators and data engineers managing production Fabric workloads, the Monitoring Hub is the first place to look when something fails, runs slowly, or consumes unexpected resources.

Accessing the Monitoring Hub

The Monitoring Hub is available from two entry points. At the workspace level, navigate to any workspace and select "Monitoring hub" from the left navigation pane—this shows activities for that specific workspace. At the Fabric homepage level, select "Monitoring hub" from the main navigation—this shows activities across all workspaces you have access to.

Permissions determine what you see. Workspace admins and members see all activities in their workspaces. Contributors see their own activities and activities on items they have contributor access to. Viewers see only their own triggered activities. Fabric administrators see everything across the entire tenant.

Understanding the Activity View

The Monitoring Hub displays a table of activities with key columns:

| Column | Description | Use For |
|---|---|---|
| Item Name | The artifact that ran (pipeline, notebook, semantic model) | Identifying which workload ran |
| Item Type | Category of the item (Pipeline, Spark Job, etc.) | Filtering by workload type |
| Status | Running, Succeeded, Failed, Cancelled | Identifying problems |
| Start Time | When the activity began | Correlating with incidents |
| Duration | Total elapsed time | Performance trending |
| Submitted By | User or service principal that triggered the run | Accountability |
| Workspace | Which workspace contains the item | Organizational context |

Filtering and Search

Effective monitoring depends on finding relevant activities quickly among potentially thousands of entries:

Status Filters: Filter by Running (currently active), Succeeded (completed successfully), Failed (completed with errors), or Cancelled (manually or automatically stopped). For troubleshooting, filter by Failed first.

Item Type Filters: Narrow to specific workload types—Pipeline activities, Spark job runs, Dataflow Gen2 refreshes, Semantic model refreshes, KQL queryset runs, or Notebook executions. This is essential when investigating a specific workload category.

Time Range Filters: The Monitoring Hub shows the last 30 days of activity by default. Adjust the time range to focus on recent issues (last 24 hours) or investigate historical patterns (last 7 days, last 30 days). For trend analysis beyond 30 days, export data to your own storage.

Workspace Filters: When monitoring cross-workspace from the Fabric homepage, filter by specific workspaces to focus on a particular team or project.
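The filters above compose: status, item type, and time range narrow the same activity table. A minimal sketch of that composition over exported activity rows is below; the field names are illustrative, not the Monitoring Hub's exact schema.

```python
from datetime import datetime, timezone

# Hypothetical in-memory representation of Monitoring Hub rows (e.g. as
# exported via the Fabric REST APIs). Field names are illustrative.
activities = [
    {"item_name": "Load Sales", "item_type": "Pipeline", "status": "Failed",
     "start_time": datetime(2024, 5, 2, 6, 0, tzinfo=timezone.utc)},
    {"item_name": "Daily Model Refresh", "item_type": "Semantic model",
     "status": "Succeeded",
     "start_time": datetime(2024, 5, 2, 7, 0, tzinfo=timezone.utc)},
]

def filter_activities(rows, status=None, item_type=None, since=None):
    """Mimic the Monitoring Hub's status / item-type / time-range filters."""
    result = rows
    if status:
        result = [r for r in result if r["status"] == status]
    if item_type:
        result = [r for r in result if r["item_type"] == item_type]
    if since:
        result = [r for r in result if r["start_time"] >= since]
    return result

# Troubleshooting workflow: failed activities first.
failed = filter_activities(activities, status="Failed")
print([r["item_name"] for r in failed])  # → ['Load Sales']
```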

Investigating Failed Activities

When an activity fails, the Monitoring Hub provides the starting point for root cause analysis:

Error Messages: Click on a failed activity to view the error details. Error messages range from clear (connection timeout, insufficient permissions) to cryptic (internal error codes). Document recurring error patterns for your team's runbook.

Activity Details: The detail pane shows parameters passed to the activity, input/output datasets, and execution stages. For pipelines, you can see which specific activity within the pipeline failed.

Spark Job Details: For failed Spark jobs (notebooks, Spark SQL), the Monitoring Hub links to the Spark application UI where you can examine executor logs, DAG visualization, and SQL query plans—essential for debugging distributed computation failures.

Common Failure Patterns:

- Connection failures: Source systems unreachable, expired credentials, firewall changes
- Timeout failures: Long-running queries exceeding configured limits, capacity throttling
- Data errors: Schema changes in source systems, null values in non-nullable columns, data type mismatches
- Capacity errors: Insufficient CUs for the workload, concurrent job limits exceeded
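A runbook can encode these categories as simple message matching, so recurring errors are classified automatically rather than re-diagnosed each time. The patterns below are assumptions for illustration; real Fabric error messages vary, so tune them against your own failure history.

```python
import re

# Illustrative mapping from error-message fragments to the failure
# categories above. Patterns are assumptions, not official message formats.
FAILURE_PATTERNS = {
    "connection": re.compile(r"timeout connecting|credential.*expired|unreachable|firewall", re.I),
    "timeout": re.compile(r"exceeded.*limit|timed out|throttl", re.I),
    "data": re.compile(r"schema mismatch|null value|cannot cast|type mismatch", re.I),
    "capacity": re.compile(r"capacity|concurrent.*limit|insufficient CU", re.I),
}

def classify_error(message: str) -> str:
    """Return the first runbook category whose pattern matches the message."""
    for category, pattern in FAILURE_PATTERNS.items():
        if pattern.search(message):
            return category
    return "unknown"

print(classify_error("Login credential has expired for source SQL01"))  # → connection
```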

Performance Monitoring

Beyond failure detection, the Monitoring Hub helps identify performance degradation:

Duration Trending: Sort by duration to find your slowest activities. If a pipeline that normally completes in 5 minutes is now taking 30 minutes, investigate before it becomes a failure. Compare current durations against historical baselines.

Queue Time: When activities show significant queue time (time between submission and execution start), your capacity is overloaded. Either scale up the capacity SKU, reschedule activities to off-peak times, or optimize existing workloads to consume fewer CUs.

Concurrent Activity Analysis: Multiple long-running activities competing for the same capacity cause mutual slowdown. Identify overlapping heavy workloads and stagger their schedules.
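Both checks above (abnormal duration and excessive queue time) reduce to comparing the current run against history. A minimal sketch, with an assumed 60-second queue threshold and the 2x-baseline heuristic:

```python
from statistics import mean

# Hypothetical run history for one pipeline: (queue_seconds, duration_seconds).
history = [(5, 300), (8, 290), (4, 310), (6, 305)]
current_queue, current_duration = 120, 1800  # today's run

baseline = mean(d for _, d in history)       # the "normal" duration
queued_too_long = current_queue > 60         # assumed threshold: 1 minute
degraded = current_duration > 2 * baseline   # 2x-baseline slowdown heuristic

print(baseline, queued_too_long, degraded)
```

A run that trips `queued_too_long` points at capacity contention (scale up, reschedule, or optimize); one that trips `degraded` warrants investigation before it becomes an outright failure.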

Building Custom Monitoring

The Monitoring Hub's 30-day retention and basic filtering may be insufficient for enterprise monitoring. Extend it with custom solutions:

Export to Lakehouse: Use the Fabric REST APIs to programmatically export Monitoring Hub data to a Lakehouse for long-term retention and advanced analysis. Build Power BI reports on top of this data for executive-level operational dashboards.
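A sketch of that export, assuming the Fabric REST API's job-instance listing endpoint; verify the exact route, auth, and response shape against the current API reference before relying on it. The `fetch` parameter is injected here so the function can be exercised without a live tenant.

```python
import json
import urllib.request
from urllib.parse import urljoin

FABRIC_API = "https://api.fabric.microsoft.com/v1/"

def job_instances_url(workspace_id: str, item_id: str) -> str:
    """Job-run history URL for one item (endpoint path assumed from the
    Fabric REST API; confirm against current documentation)."""
    return urljoin(FABRIC_API, f"workspaces/{workspace_id}/items/{item_id}/jobs/instances")

def export_job_history(workspace_id, item_id, token, fetch=None):
    """Fetch job runs as dicts ready to land in a Lakehouse table."""
    if fetch is None:
        def fetch(url):
            req = urllib.request.Request(
                url, headers={"Authorization": f"Bearer {token}"})
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
    payload = fetch(job_instances_url(workspace_id, item_id))
    return payload.get("value", [])

# Exercised with a stubbed fetch instead of a real tenant:
rows = export_job_history("ws-1", "item-1", token="",
                          fetch=lambda url: {"value": [{"status": "Completed"}]})
print(rows)
```

Schedule a notebook to run this daily and append the rows to a Lakehouse table, and the 30-day retention ceiling stops mattering.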

Azure Monitor Integration: Configure Fabric diagnostic settings to send activity logs to Azure Log Analytics. This enables KQL-based querying, alerting on failure patterns, and integration with broader Azure monitoring infrastructure (Grafana dashboards, PagerDuty alerts).

Data Activator Alerts: Connect Monitoring Hub data to Data Activator to create automated alerts based on conditions like "any pipeline failure in production workspaces," "Spark job duration exceeds 2x historical average," or "more than 3 failures in the last hour." Data Activator sends notifications to Teams, email, or Power Automate flows.
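A condition like "more than 3 failures in the last hour" is just a count over a sliding window. The sketch below shows the logic such a rule encodes; it is an illustration of the condition, not Data Activator's own configuration syntax.

```python
from datetime import datetime, timedelta, timezone

def too_many_failures(failure_times, now, window=timedelta(hours=1), threshold=3):
    """True when more than `threshold` failures fall inside the window --
    the kind of condition a Data Activator alert rule would encode."""
    recent = [t for t in failure_times if now - t <= window]
    return len(recent) > threshold

now = datetime(2024, 5, 2, 9, 0, tzinfo=timezone.utc)
failures = [now - timedelta(minutes=m) for m in (5, 12, 20, 40, 90)]
print(too_many_failures(failures, now))  # → True: 4 failures in the last hour
```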

Operational Best Practices

  • Check the Monitoring Hub daily as part of your morning operational review
  • Set up alerts for all production pipeline failures—do not rely on manual checking
  • Track mean time to recovery (MTTR) for failed activities as an operational KPI
  • Maintain a runbook documenting common failure patterns and their resolutions
  • Export monitoring data monthly for capacity planning and cost optimization analysis
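The MTTR KPI from the list above is straightforward to compute from exported monitoring data: for each incident, measure the gap between the failure and the next successful run. A minimal sketch on illustrative timestamps:

```python
from datetime import datetime, timedelta

# (failure time, time the next successful run completed) per incident --
# illustrative data for the MTTR KPI.
incidents = [
    (datetime(2024, 5, 1, 6, 0), datetime(2024, 5, 1, 7, 30)),
    (datetime(2024, 5, 3, 6, 0), datetime(2024, 5, 3, 6, 45)),
]

def mttr(pairs) -> timedelta:
    """Mean time to recovery across resolved incidents."""
    total = sum((fixed - failed for failed, fixed in pairs), timedelta())
    return total / len(pairs)

print(mttr(incidents))  # → 1:07:30
```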

Frequently Asked Questions

How long does Monitoring Hub retain historical data?

Monitoring Hub retains job history for 30 days by default. For longer retention, export data to your own storage or use Azure Log Analytics integration for extended historical analysis.

Can I set up alerts from Monitoring Hub?

Currently, direct alerting from Monitoring Hub is limited. Use Data Activator or Azure Monitor for comprehensive alerting on Fabric job status and performance metrics.

