The Ultimate Toolbox for Power BI
Explore the entire Power BI toolchain with this comprehensive guide and elevate your tool-building capabilities for FinOps and other data-oriented solutions on Power BI, Azure, and Fabric.

My grandfather was a dextrous craftsman. When I was a young boy, he taught me how to repair my bike. I needed that skill badly, as I loved experimenting with tricks.
As one might expect, my iterations often led to failures, sometimes causing harm to me, sometimes to my bike. But all of this resulted in learning and experience that you cannot get any other way, no matter how long you watch others do it or how many thick books you read. You need to do it yourself.
Grandpa always highlighted the importance of having the right tools for the trade and knowing how to use them well. This is how we achieved efficiency in bike repairs. With tools specifically designed for a purpose, repairs were faster and of higher quality.
No different in today's world. The things we work on have changed, from hardware to software, but tools still surround us in the digital world. When my team and I were building homegrown FinOps tooling on Power BI, I took stock of the available specialized tools, assessed how they could benefit us, and invested in learning them to improve our processes.
In this article, I am offering you this toolbox, along with my views on how it can benefit you when building homegrown FinOps tooling (or other data-oriented solutions) on Power BI, Azure, and Fabric.
So, here is the toolbox with the tools that we are going to have a closer look at:
- Core Power BI Tools
- Advanced Tools and Utilities for Power BI and Analysis Services
- Development, Analysis, AI/ML, and Science Tools
  - VS Code with Power BI Studio and Other Extensions
  - SQL Server Data Tools (SSDT) with Visual Studio 2022
  - Power BI Version Control Tools (Multiple Options)
  - PowerShell Analysis Services cmdlets and Power BI REST APIs for Automation
  - Testing Automation with UI.Vision RPA, Selenium, or Playwright
  - Jupyter and Other Notebooks
  - Parquet Viewer and Apache Arrow
  - Local Analytics Technologies (DuckDB, Polars, DataFusion, ClickHouse)
- Tools for Data in Azure/Fabric
- 😱 Not Enough Tools in the Box?
Let's start!
Core Power BI Tools
Power BI Desktop
Power BI Desktop is a desktop Business Intelligence (BI) application that runs only on Windows. It is the core all-in-one tool for creating reports with rich visuals and interactive filtering, as well as authoring the underlying semantic model, which encapsulates the logical data model (also referred to as the tabular model, schema, or dataset) and the data content itself.
I found three main ways of using Power BI Desktop particularly useful:
(1) ad-hoc analysis, (2) experimentation, and (3) authoring BI content to publish to the Power BI Service.
Power BI Service (Web UI)
Power BI Service is the place where you publish your reports and data that you authored in Power BI Desktop. It serves your reports to users through the web browser or mobile device, allows you to share them, schedule regular data refreshes, assemble apps with a hierarchical menu system, and more.
For my team, too, Power BI Service was the place for sharing and consuming our homegrown FinOps app, dashboards, reports, and semantic models. As builders and maintainers, our usage extended to configurations, such as setting the refresh schedule of our semantic models, switching to a large semantic model storage format (that utilizes SSDs), connecting multiple reports to the same semantic model, checking refresh status, or obtaining the connection details for XMLA endpoints.
Power BI Report Builder
Power BI Report Builder, a desktop application, allows you to author paginated reports that are viewed as multi-page documents, similar to PDF files, or printed in physical form (on real paper, bro', but I hope not). Apart from the obvious legacy format support, the real deal is that these reports are not constrained to the size of the screen. In the Power BI (non-paginated) format, if you place a table with a large number of rows on the screen, it appears with a scroll bar and gets printed cropped, cutting off your data rows that are currently not visible in the table's viewport. In contrast, paginated reports extend to as many pages as necessary and will keep all your rows visible. If you have a use case where you need to distribute or archive reports or data as complete documents, it is the way to go.
Excel and PowerPoint Integrations
Oh yeah, let's not forget about good old Excel as an option to analyse your data in Power BI semantic models and build worksheets with pivot tables and pivot charts on top of it. You can start the discovery from Power BI on the desktop or on the web with the "Analyze in Excel" feature, or begin with Excel's "Get Data" feature and discover all the Power BI semantic models that you have access to.
With the Power BI add-in for PowerPoint, you can embed live and interactive reports in your slides, which will boost your data storytelling capabilities and enable on-the-spot deep dives and answering questions that would otherwise take you way out of your presentation flow.
From a FinOps perspective, Excel has been most useful as a complementary tool to our Power BI application for two main use cases: (1) for ad-hoc analyses, and (2) for providing cyclical reporting that can also be stored as archives (e.g., monthly reports). For the second use case, always be aware of whether you are saving something as live data or as a snapshot. You don't want to get into a situation where you open an archived monthly report from several months back, only to see your data refreshed and overwritten with the latest month's data as you open the sheet. For Excel, I am not aware of a built-in snapshot feature, so you will need to create your own snapshots, but PowerPoint has this feature built in.
Fabric Web UI and Admin
Fabric, with its web UI, is the central facade for managing everything that Fabric offers to the various data personas (analyst, engineer, scientist, BI, AI/ML, etc.): Power BI, OneLake, Data Factory, Databases, Data Warehouses, Real-Time Intelligence, and Data Engineering and Data Science tools like Notebooks, Spark, and AI/ML models. If you are using Fabric for your FinOps cost visibility solution, this will be central to your workflow in one way or another.
If you or your organization manages Fabric Capacities, the Fabric Admin cockpit will be the central web UI to control your environment, including tenant settings (switching org-level features on/off), managing users, usage, capacity settings, workspaces, custom branding, featured content, organisational-level visuals and code, etc. From a FinOps solution perspective, this is only relevant if you are administering your own tenant or capacity; in larger enterprises, there is usually a dedicated team to take care of this for you.
Fabric Deployment Pipelines for Power BI
Fabric Deployment Pipelines for Power BI is your user-friendly visual service to manage multiple environments, like DEVELOPMENT, TEST, and PRODUCTION. This allows your users to use the production reports and data uninterrupted, while you develop and mature them in parallel. In turn, you propagate your semantic models, reports, and other assets across the different stages. The environments are user-defined, and you can have 2 to 10 stages. When you advance your assets to the subsequent stage, you can compare them with the previous (source) stage to identify any changes and differences, and then selectively deploy assets to the next stage.
Deployment Pipelines are great overall, but if you work with large semantic models, you will miss some of the more advanced deployment features. One such scenario is when you want to propagate a semantic model change for which it is not necessary to refresh the underlying data. With Deployment Pipelines, you may still be forced to refresh your data even if it is not logically necessary. But worry not, for these situations, you have a saviour: the ALM Toolkit, which I will also cover here.
Power BI Visuals
Power BI supports the development of custom visuals. Many folks who have already done this have shared their work on AppSource, and by now, you can browse among hundreds of free and paid Power BI Visuals. It is definitely worth taking a look if you're struggling to visualize something in a particular way, or if you're seeking creative inspiration. Of course, in enterprise environments, it may be trickier to use custom components from random developers, but the Power BI Certified badge on custom visuals might help you.
Advanced Tools and Utilities for Power BI and Analysis Services
DAX Studio
DAX Studio is a specialized tool for writing, testing, and optimizing DAX (Data Analysis Expressions) queries to run against Power BI or Analysis Services data.
DAX Studio features syntax highlighting, code completion, and query templates, making it an ideal place for developers and analysts to design and test query logic. Then, to improve the speed and reduce the resource consumption of queries, you can take advantage of query (execution) plans, tracing and diagnostics, and measured server timings detailing Formula Engine and Storage Engine operations, which help you pinpoint and fix performance bottlenecks. It also comes integrated with the VertiPaq Analyzer, which provides insightful metrics to help you understand and tune your in-memory datasets, and which I explain in detail in the next section.
VertiPaq Analyzer
VertiPaq Analyzer is a tool for inspecting the storage usage of Power BI or Analysis Services Datasets. The word VertiPaq refers to the storage engine and the in-memory, columnar data storage format that powers your models and enables lightning-fast querying. While VertiPaq Analyzer is integrated within DAX Studio, it is also available separately for use with an Excel file or as software libraries.
In practice, when building our FinOps systems, the VertiPaq Analyzer helped me understand the key characteristics of our billing dataset. It helped me realise, for example, that the columns containing the cost and quantity figures are the ones with the highest cardinality in a billing dataset. Consequently, they occupy the most storage space. In hindsight, this is self-evident, as quantity and cost numbers come in all shapes and sizes. In contrast, most other columns, such as service, SKU, or region, have a limited cardinality ranging from hundreds to thousands of distinct values.
This tool was also valuable in verifying relationships among table columns and identifying logical issues, such as referential integrity violations like missing values on one side of a 1:N relationship or blanks.
Tabular Editor
Tabular Editor is an authoring tool for tabular models that enables advanced modeling and scripting, which is not possible in Power BI Desktop. Note that Tabular Editor is available in two versions: Tabular Editor 2 is free and open-source, while Tabular Editor 3 is a commercial version.
I have been using the open-source version, and I found it particularly useful for editing the tabular model of semantic models live. I appreciated being able to avoid refreshing the data for changes that do not affect the data content, such as adding measures or updating metadata descriptions. The tool also allows for bulk editing, such as mass find-and-replace operations, which are otherwise tedious to perform one by one in Power BI.
Another neat use case was setting up incremental refresh. Once it had been configured in Power BI, I could apply custom refresh policies to set up the partition schema and the corresponding refresh configurations properly.
I did not get around to taking full advantage of scripting, but C# scripts were also an enticing idea for improving developer flow, especially combined with version control.
Then, there is the Best-Practice Analyser (BPA), which allows you to apply rules to your tabular model to promote the use of sane conventions and best practices. Microsoft has curated a list of over 70 best practice rules to be used with this feature.
ALM Toolkit
The ALM abbreviation in ALM Toolkit's name refers to Application Lifecycle Management. It is designed to enable the comparison and merging of different tabular models, thereby facilitating the deployment of changes across your various environments.
While developing our FinOps Power BI solution, we used it exactly to do this. It made it easy to deploy our tabular model changes or new features selectively from DEVELOPMENT to TEST to PRODUCTION environments. It saved a significant amount of time by not having to refresh our data for the types of changes where this was logically unnecessary (e.g., adding a new DAX-based metric).
It was also invaluable for generally checking differences between tabular models to understand them or to validate changes made with a different method. The tool enables you to export models to BIM (Business Intelligence Model) files, which can also be used for version control of your models.
SQL Server Management Studio
SQL Server Management Studio (SSMS) is the premier tool for managing data infrastructure, from SQL Server or Azure SQL databases to Synapse and Analysis Services or Power BI Datasets. It combines query editing (DDL and DQL), performance analysis, and administration, all in a single tool.
While building FinOps tooling, we have been using it to manage both Synapse and Power BI Datasets. For Synapse, it has helped manage the SQL views over our ADLS Gen2 data lakes, which contained the Azure billing exports. Still, it is not a complete substitute for Synapse Studio, which is a much more powerful and better-suited tool for tasks such as monitoring your running queries or managing things beyond SQL processing, like data integration or Spark jobs.
What you can also do with SQL Server Management Studio is to connect to Power BI Datasets via the XMLA Endpoint and manage partition refreshes in a very flexible way. This is a highly valuable feature for large in-memory datasets, as it enables you to selectively choose a set of discrete partitions and refresh them in one batch. Additionally, you can run the data refresh and database (re)calculation process steps separately. For example, suppose you have changed the calculation logic of a calculated column in your model. In that case, you can avoid reloading the data from the source and instead use (re)calculation only to bring your data to the correct state.
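The same selective, partition-level refresh (and a separate recalculate step) can also be scripted rather than clicked through in SSMS. As one option, here is a minimal sketch using the Power BI enhanced refresh REST API from Python; the workspace and dataset IDs, the table and partition names, and the access token are all placeholders, and you would normally obtain the token via a service principal:

```python
import requests

# Placeholders - replace with your own IDs and a valid Azure AD access token.
TOKEN = "<access-token>"
GROUP_ID = "<workspace-id>"
DATASET_ID = "<dataset-id>"

url = (
    "https://api.powerbi.com/v1.0/myorg/"
    f"groups/{GROUP_ID}/datasets/{DATASET_ID}/refreshes"
)

# Refresh only two partitions of a (hypothetical) Costs table, loading data
# without recalculating yet ("dataOnly"); a follow-up call with type="calculate"
# would then recalculate dependent structures without touching the source.
body = {
    "type": "dataOnly",
    "commitMode": "transactional",
    "objects": [
        {"table": "Costs", "partition": "Costs-2024-04"},
        {"table": "Costs", "partition": "Costs-2024-05"},
    ],
}

response = requests.post(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
response.raise_for_status()
# The request is accepted asynchronously; the Location header points to the
# refresh operation that you can poll for completion status.
print(response.status_code, response.headers.get("Location"))
```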
Another feature to take advantage of is performing backup and restore operations on your semantic models. For any serious system, you should have a disaster recovery strategy that includes regularly tested backup and restoration of your data.
SQL Server Profiler
SQL Server Profiler is a tracing tool that helps you monitor events in real-time, enabling you to debug performance or process issues at a granular level. It supports relational engines (SQL Server, Azure SQL, but not Synapse), as well as Power BI Datasets (Analysis Services). While it still works fine with relational engines, it is considered deprecated as more modern options like Extended Events (XEvents) and Distributed Replay exist. However, for Power BI (Analysis Services), it is still the go-to tool.
This is precisely what I found it most useful for. Notably, as we scaled our in-memory FinOps Power BI Datasets, we experienced memory issues during dataset refreshes. The symptom was typically an out-of-memory situation, and to understand why it happened, we needed to dig deeper. This is when we hooked up SQL Server Profiler to our Power BI Dataset, analysed what was happening, and applied a fix to the root cause (often a model change or a capacity-related tuning). Trace results could be saved locally and replayed, which was also helpful during these analyses.
Power BI Field Finder
Power BI Field Finder is a utility that helps you identify where fields and measures from your semantic model are used in your report visuals. It is a standalone Power BI file that can connect to your data model and inspect several of your Power BI report files at once. It generates crosstab statistics of tables/fields and reports/visuals, and even renders small, color-coded report layout images to enable you to find and navigate the visuals easily.
I found it invaluable when changing the names of fields or measures in our models or preparing to remove them. Of course, it is always better to think carefully upfront about naming conventions and which fields/measures to use in your visuals. However, sometimes naming changes will still be necessary, and this tool will ensure that you do not miss a dependent visual that would break as a result. Conversely, it is also useful to identify fields that are not used anywhere in your reports so you can delete them from your semantic model to save storage space.
Power BI Helper
Power BI Helper offers a range of utilities in one package, most notably generating documentation for both Power BI Desktop files and Power BI Service objects. Additionally, it provides features for exporting data and audit logs, performance tuning and cleanup, metadata related to visualization, and comparison of Power BI files. Among these, I found that the most valuable unique features for building and maintaining a homegrown FinOps tooling are the documentation and comparison features.
The easily generated documentation is comprehensive and provides very detailed information. All the pieces are viewable in a single place without the need to navigate a heavy UI: tables, columns, measures, relationships, data sources, Power Query transformations, DAX expressions, report pages and visuals, dependencies and reverse dependencies, and more. For the Power BI Service, it documents workspaces, datasets, reports, dashboards, dataflows, configurations, and more.
The other valuable feature, the comparison of Power BI files, comes in handy when version control is not in place, or when a report file takes on discrete evolutionary paths of its own because multiple people work on it at the same time. Truth be told, despite recent positive developments (e.g., the PBIP format that enables Git integration), version control and collaborative work on the same file remain challenging for Power BI.
Power BI Sidetools
Power BI Sidetools, as per its documentation, was born to increase productivity during report development in Power BI Desktop. It features a template-based DAX generator, a DAX debugger with sample data display, a DAX parser with expandable/collapsible tree visualization, including help, Excel data export, and the ability to manage your DAX and M formulas in an exported folder with one-click editing in VS Code. While I was not aware of this tool at the time we developed our FinOps tooling, it appears to be a powerful addition to the workflow of someone focusing on report development, particularly with DAX and M.
Translations Builder and Metadata Translator
So, you are one of the lucky ones to develop and support multi-language reports in Power BI? Then, Translations Builder and Metadata Translator are the tools for you.
Translations Builder simplifies the translation process by displaying localization metadata in a grid, hiding the underlying model complexity, and showing the captions and descriptions to be translated on the side, with the target language versions in separate columns. The tool comes with extensive, well-illustrated documentation, so using it should be a breeze.
Metadata Translator is a similar tool, and while it has a less powerful UI, it offers more powerful automation by supporting machine-assisted translations with Azure Cognitive Services. If you're looking for more automation, give this one a try. If you prefer more manual control and a more elaborate UI, opt for Translations Builder.
Fabric Capacity Metrics App
(formerly Power BI Premium Capacity Metrics App)
The Fabric Capacity Metrics App is a tool that provides monitoring capabilities for Fabric (formerly Power BI Premium) Capacities. The app is primarily targeted at Capacity admins, enabling insights into the health of the capacity (e.g., issues with throttling or query rejections), compute performance, storage usage, point-in-time analyses, autoscaling, and more.
When building your homegrown FinOps tooling, access to this app will be handy if your organization has its own Capacity. Whether you share that Capacity with other teams or you are the sole user, these reports will be invaluable for assessing your capacity utilisation levels, scaling needs, dealing with noisy neighbour problems (whether it is you or the others 😄), as well as understanding and troubleshooting capacity level issues (primarily compute performance or storage usage).
Development, Analysis, AI/ML, and Science Tools
VS Code with Power BI Studio and Other Extensions
VS Code is a free, open-source, extensible, and popular code editor. While it is officially a code editor, in reality, it is really somewhat of an IDE (Integrated Development Environment). At least a lightweight one, since there are literally tens of thousands of extensions available.
I have seen many data engineers and scientists work with and like VS Code while delivering our homegrown FinOps tooling. For Power BI and various Azure/Fabric data pipeline technologies, the most popular extensions to consider are listed here.
Power BI extensions:
- Power BI Studio for managing your Power BI tenant with REST APIs and your semantic models using TMDL (Tabular Model Definition Language),
- TMDL for TMDL language support
- Power Query / M Language for Power Query / M language support
- DAX for Power BI for DAX syntax highlighting
- PBI Document Generator for automatic doc generation for PBIT templates
Azure/Fabric and other data technology extensions:
- MSSQL for a complete SQL development experience for the provider's SQL technologies (Synapse SQL Serverless included): connect, explore, manage schemas, execute queries, visualize query plans, etc.
- Fabric Data Engineering VS Code (previously Synapse VS Code) for exploring Fabric lakehouses, authoring notebooks, and Spark job definitions
- OneLake VSCode for mounting OneLake folders right into VSCode Explorer
- Azure Storage, Azure Pipelines, Azure Functions, and many other Azure Extensions for managing the specific Azure technologies in VS Code
- Data Table Renderers for drastically boosted notebook data table rendering
Further popular extensions:
- Python for Python development
- Tabnine / GitHub Copilot for AI assistance
- Prettier for code formatting
- ESLint for static code analysis
- GitLens for advanced versioning
- Live Server / Docker / Remote - SSH for 'servering'
- Live Share for real-time collaboration
- WakaTime for productivity insights
- and endless further extensions...
SQL Server Data Tools (SSDT) with Visual Studio 2022
SQL Server Data Tools, integrated with Visual Studio (Community, Professional, and Enterprise editions), is a classic tool that was initially developed to support the database world and early BI offerings in the 2010s (SQL Server Integration, Analysis, and Reporting Services [SSIS, SSRS, SSAS]). Many of these services, in an evolved state, are foundational to today's Power BI, Azure, and Fabric offerings. Since then, however, Reporting Services has matured into Power BI Desktop, and Analysis Services is also increasingly managed through other tools, including the Power BI Service, Fabric with Deployment Pipelines, Tabular Editor, and the ALM Toolkit.
While I have not been using SSDT in the FinOps context, you may have a use case if it matches your tech stack (more SQL database/warehouse oriented rather than data lake oriented) and you have already been using it for other purposes, so it is a skill in your team that you can immediately tap into. SSDT can still handle Synapse SQL Dedicated Pools, Azure SQL Databases, and Fabric Warehouses, and you can also use DACPACs (Data-tier Application Packages) to manage and source control your SQL objects.
Power BI Version Control Tools (Multiple Options)
Power BI works with PBIT (template without data) and PBIX (report with data) files. These are zip-compressed files containing a folder structure and various text-encoded files that describe the model, reports, and other components. In June 2025, the provider added support for PBIP (Power BI Project) as a new (preview) feature, which saves your work to a folder structure with plain text files instead of a single compressed file, making it easier to version control.
Power BI version control tools (one here, another here, a third here) were developed before PBIP to address the same issue with the single-file saving model. The first two utilize SharePoint's check-out and check-in functionality, while the third extracts from and builds to the compressed files. The SharePoint option includes a file version history (as binary files) with comments and protection against overwriting in case multiple authors work on the same file. The extraction option works with the contents of the compressed files and Git.
For all options, there is still a monster out there. Merging the work of multiple authors working in parallel on the same file remains a major pain in the butt. So, it is best to split up the work between contributors mindfully: pre-align and replicate any data model changes, and avoid working on the same report pages or even files simultaneously.
All in all, if you want to source control your work, you will need to choose from several limited options: SharePoint check-out/check-in with or without a helper tool, Git versioning files either as binary or in the decompressed form, or jumping on the PBIP bandwagon, which is probably the most forward-looking path, and it can also be combined with Azure DevOps.
PowerShell Analysis Services cmdlets and Power BI REST APIs for Automation
PowerShell Analysis Services cmdlets (module here) are a great way to automate administrative tasks related to semantic models. The feature set is not super wide, but it includes four essential aspects:
- user management (adding and removing members to a database role),
- processing (refreshing) semantic models: databases, tables, partitions, as well as merging partitions,
- backup and restore of semantic models,
- running TMSL (Tabular Model Scripting Language) scripts, which give you some more flexibility.
As an alternative and extension to shell scripts, you can also use the Power BI REST APIs for even more automation. The APIs cover the management of Power BI semantic models, which are still referred to as Datasets there, as they were an exception to the naming change. But the APIs offer much more: the ability to manage various objects, including Capacities, Groups (Workspaces), Dashboards, Reports, Datasources (Gateways), Dataflows, Pipelines, Users, and more. We made very good use of the Report Rebinding capability to fix bindings when a report was accidentally hooked up to the wrong dataset instance during deployment with Deployment Pipelines.
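For illustration, here is a minimal sketch of that report rebinding call with Python and the requests library; the workspace, report, and dataset IDs, as well as the access token, are placeholders:

```python
import requests

# Placeholders - assumes an Azure AD access token with sufficient Power BI permissions.
TOKEN = "<access-token>"
GROUP_ID = "<workspace-id>"    # workspace (group) that contains the report
REPORT_ID = "<report-id>"      # report that got bound to the wrong dataset
DATASET_ID = "<dataset-id>"    # dataset it should be bound to

url = (
    "https://api.powerbi.com/v1.0/myorg/"
    f"groups/{GROUP_ID}/reports/{REPORT_ID}/Rebind"
)

response = requests.post(
    url,
    json={"datasetId": DATASET_ID},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
response.raise_for_status()  # success means the report now queries the intended dataset
```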
All in all, both cmdlets and REST APIs are excellent ways to save on your toil, improve user experience, and support business continuity through automation.
Testing Automation with UI.Vision RPA, Selenium, or Playwright
Yes, you read that right, "testing automation". Why do you need testing automation for Power BI? Just like in other areas of software development, end-to-end testing is the ultimate way to ensure everything works as expected. If you do not want to click through all critical user paths manually every time you deploy a change, it is better to have a comprehensive automated test suite in place in your CI/CD pipeline.
There are a few good options available. If your developers are data analysts or power users, consider UI.Vision RPA (Robotic Process Automation), a low-code option that offers the possibility of macro recording. Underneath this tool, a Selenium heart is beating. Using Selenium directly is a good fit for engineers who already have some experience with it. Finally, a newer kid on the block, Playwright, is a fresh option with which I had great success in my former team.
As we deployed across DEVELOPMENT, TEST, and PRODUCTION environments, end-to-end tests were used to verify that common user paths, including navigating menus, checking values in report visuals, interactions, and filtering, were functioning as expected, and that no key element was missing or failing to operate correctly. It also allowed us to compare numeric values displayed on the report face with the corresponding numbers in upstream data sources, which were queried using DAX or SQL.
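To give a flavour of what such a test looks like, here is a minimal Playwright sketch in Python; the report URL, the stored authentication state, the CSS selector, and the visual name are hypothetical placeholders, and a real suite would also handle sign-in and capture screenshots on failure:

```python
from playwright.sync_api import sync_playwright, expect

# Hypothetical report URL and pre-saved login state - placeholders only.
REPORT_URL = "https://app.powerbi.com/groups/<workspace-id>/reports/<report-id>"

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    context = browser.new_context(storage_state="auth_state.json")  # reuse a saved session
    page = context.new_page()

    page.goto(REPORT_URL)
    # Wait until the report canvas has rendered at least one visual (assumed selector).
    page.wait_for_selector("div.visualContainer", timeout=60_000)

    # Assert that a key visual (here a card assumed to be titled "Total Cost") is shown.
    total_cost_card = page.locator("div.visualContainer", has_text="Total Cost")
    expect(total_cost_card.first).to_be_visible()

    browser.close()
```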
Jupyter and Other Notebooks
Jupyter, and various 'hosted'/'branded' notebook flavours (see Azure Machine Learning, Azure Data Studio, Azure Synapse, Fabric, others), provide you with the interactive computational notebook experience: combining live code with narrative text in blocks following one another, and producing rich inline outputs with tables, charts, and other advanced outputs.
I have seen many data folks, including myself, fall in love with the concept. In the FinOps space, I find it super useful for several things:
- Ad-hoc analysis to answer questions that could not be answered with our standard reports, requiring us to tap into raw data and analyze it more deeply with visuals, data summaries, and data samples (a minimal sketch follows this list).
- Exploration of various data sources (profiling new raw data), as well as prototyping and experimenting with pipeline features (data cleaning, transformation, allocation, etc.), which were later brought to production maturity with engineering.
- AI and machine learning use cases, for example forecasting and anomaly detection in the FinOps space, or clustering high-cardinality columns in raw data to create more useful, business-oriented groupings of items.
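To make the ad-hoc analysis use case concrete, here is a minimal notebook-style sketch, assuming one month of the Azure cost export has already been pulled down locally as Parquet; the file name and column names are placeholders loosely modelled on the Azure billing export schema:

```python
import pandas as pd

# Hypothetical local sample of one month of the Azure cost export (placeholder path).
costs = pd.read_parquet("cost_export_2024_05.parquet")

# Top 10 resource groups by cost, plus their share of the monthly total.
top_rg = (
    costs.groupby("ResourceGroup", as_index=False)["CostInBillingCurrency"]
    .sum()
    .sort_values("CostInBillingCurrency", ascending=False)
    .head(10)
)
top_rg["ShareOfTotal"] = (
    top_rg["CostInBillingCurrency"] / costs["CostInBillingCurrency"].sum()
)

top_rg  # rendered inline as a table when this is the last expression of a notebook cell
```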
Parquet Viewer and Apache Arrow
Let's spend a few words on working with the Parquet data format, the underlying storage format of Delta Lake, which, in turn, powers the storage of OneLake warehouses and lakehouses in Fabric. Parquet provides an open-source, column-oriented storage format with high-performance compression and support for complex nested data structures. Delta Lake adds ACID table storage capabilities on top and works on cloud object stores. This makes the two technologies an ideal combination for massive parallel processing scenarios in the cloud, with ACID transactions and high performance, regardless of the compute technology applied on top.
In FinOps, you often get your data export for cloud cost and usage in the PARQUET format (Azure included). Power BI can directly connect to these files, but if you want to inspect them more closely outside of Power BI, a desktop viewer, like Parquet Viewer (Windows only), will be handy.
To read and write PARQUET files programmatically, your go-to technology will likely become Apache Arrow. We have been using it for FinOps tooling development, particularly in cases where more complex transformations were required, and where SQL proved less convenient and maintainable for handling the heavier business logic. Arrow comes with support for multiple languages: Python, Rust, Go, C++, R, Java, and several more.
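As a flavour of the Python bindings (pyarrow), here is a minimal sketch that reads a billing export, filters and projects it, and writes the result back to Parquet; the file and column names are placeholders:

```python
import pyarrow.compute as pc
import pyarrow.parquet as pq

# Read a (hypothetical) billing export into an in-memory Arrow table.
table = pq.read_table("cost_export_2024_05.parquet")

# Keep one subscription's rows and project a few columns of interest.
filtered = table.filter(
    pc.equal(table["SubscriptionName"], "prod-subscription")
).select(["Date", "ResourceGroup", "MeterCategory", "CostInBillingCurrency"])

# Write the slimmed-down table back out with a modern compression codec.
pq.write_table(filtered, "prod_costs_2024_05.parquet", compression="zstd")
```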
Local Analytics Technologies
(DuckDB, Polars, DataFusion, ClickHouse)
Undeniably, these are very much on the edge of our map here. Almost falling off. I was hesitant to add them, but eventually decided to do so. While many of them cannot connect directly to Power BI, they are still practical to have in the toolbox, in addition to Power BI, when working with PARQUET files for analysis and development in a local environment.
DuckDB is a gold standard in this regard, and it's great to work with in a local notebook context. Alternatives include Polars, Apache DataFusion, and ClickHouse. The latter can also be connected to from Power BI via ODBC.
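To give a flavour, here is a minimal DuckDB sketch that aggregates raw export files in place from a notebook or script; the glob path and column names are placeholders:

```python
import duckdb

con = duckdb.connect()  # in-memory database; no loading step needed for Parquet

monthly = con.execute(
    """
    SELECT MeterCategory, SUM(CostInBillingCurrency) AS TotalCost
    FROM read_parquet('exports/2024/05/*.parquet')
    GROUP BY MeterCategory
    ORDER BY TotalCost DESC
    LIMIT 20
    """
).df()  # fetch the result as a pandas DataFrame

print(monthly)
```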
Tools for Data in Azure/Fabric
This section covers tools that are not directly aimed at working with Power BI or Analysis Services objects, but are upstream in your data processing pipeline. These tools manage the storage and processing (including ingestion, preparation, transformations, aggregation, etc.) of your data before it is published to users in a semantic model and Power BI reports, dashboards, and applications.
Azure Portal
The Azure Portal is the unified web interface that enables you to build, manage, and monitor all Azure resources and services within the Azure Cloud. This will cover most of the components in your data pipeline, including data storage (e.g., Azure Blob Storage, Azure Data Lake Gen2) and various processing methods (e.g., Azure Functions, Azure Synapse, Azure Logic Apps).
Apart from the above components, in the FinOps field, you will also enjoy an intimate relationship with the Cost Management part of the Azure Portal. This will be your go-to place to configure cost and usage data exports, monitor the export history, and analyze the costs, comparing them to the figures in the exports for verification. Cost Management also offers further value-added features, including Budgets, Alerts, and Allocation, which may be useful for your practice.
Synapse Studio
Synapse Studio will be a must-use if your processing pipeline utilizes any of the integrated services in the Azure Synapse suite. These are SQL Pools (Serverless or Dedicated), Spark Pool, Synapse Pipelines (similar to Azure Data Factory), or Synapse Data Explorer (similar to Azure Data Explorer). In my practice, we have been primarily using Synapse SQL Serverless Pool and Synapse Pipelines.
Serverless SQL offered a cost-efficient, on-demand consumption model for querying our FinOps data lake. In this context, Synapse Studio served as both our tool for developing the SQL queries over our ADLS Gen2 FinOps data lake and also the best place to monitor the execution of those queries. Two major types of queries were involved: those used for refreshing our Power BI semantic models in Import mode, and those generated when users were using our reports over tables connected in Direct Query mode.
Synapse Pipelines (in addition to Azure Functions) were primarily used for data processing, and Synapse was particularly used for aggregation (e.g., daily to monthly granularity) scenarios. Synapse Studio provided a visual, drag-and-drop (low-code) authoring environment for these pipelines, along with automation (e.g., scheduling and event-based triggers, retry configurations) and monitoring (which can also alert and notify when integrated with Azure Monitor).
Azure Storage Explorer
Azure Storage Explorer is a desktop application to manage Azure storage resources (Blobs, Files, Queues, Tables). It is capable of managing Azure Blob Storage and Azure Data Lake Storage (ADLS) Gen2 resources, including connecting to multiple storage accounts via Shared Access Signatures (SAS), creating and removing containers and directories, viewing, uploading, and downloading blobs, and more.
For a FinOps data lake in Azure, ADLS Gen2 is a great choice for storing data, at least in its raw form. Azure Cost Management's default target for exporting billing data is also Azure Blob Storage, which is the underlying technology for ADLS Gen2, so it is compatible with both. I found Azure Storage Explorer to be a great tool for discovering our FinOps data lake, downloading samples of data (blobs) for inspection, or checking when data has arrived in the lake.
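When that occasional inspection needs scripting instead of clicking, the Azure Storage SDK for Python can do the same; here is a minimal sketch, assuming a container-level SAS URL and hypothetical export blob paths:

```python
from azure.storage.blob import ContainerClient

# Placeholder SAS URL for the export container (read/list permissions suffice).
SAS_URL = "https://<account>.blob.core.windows.net/<container>?<sas-token>"

container = ContainerClient.from_container_url(SAS_URL)

# See which export blobs have arrived for a given month (hypothetical path prefix).
for blob in container.list_blobs(name_starts_with="exports/2024/05/"):
    print(blob.name, blob.size)

# Download one (hypothetical) blob locally for closer inspection.
blob_client = container.get_blob_client("exports/2024/05/part-00000.parquet")
with open("sample.parquet", "wb") as local_file:
    local_file.write(blob_client.download_blob().readall())
```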
Azure Data Studio
I am including the lightweight Azure Data Studio desktop application for completeness. However, its retirement has been announced for the end of February 2026. Anyhow, its usefulness in developing and managing a homegrown FinOps cost visibility solution was limited to managing Synapse SQL, as it cannot connect to Power BI or Analysis Services semantic models. Managing the SQL side is also a redundant capability, as this can also be accomplished with Synapse Studio on the web or with SQL Server Management Studio (SSMS) on the desktop. On the plus side, unlike SSMS, this tool supports multiple operating systems, including Windows, Mac, and Linux.
😱 Not Enough Tools in the Box?
Hmm, I tried to be as complete as possible. If you feel that I failed you, please read on. I was certainly focusing on the set of tools that I was personally exposed to, mainly from the free and open-source (FOSS) options, and I might have missed something. If you are looking for more, I suggest:
- Checking out the Analysis Services repo for even more stuff.
- Searching for commercial tools, especially for governance use cases.
- Searching for smaller online tools or helpers for formatting, optimizing, or sandbox environments to run your code in.
If you still feel uneasy and empty-handed, feel free to drop me a line or two and tell me more about what things you were missing.
And that is the end of this article. I hope you learned something and enjoyed it!
It was my pleasure.