Posts

Direct Lake fallback reasons

PowerBI semantic models using the Direct Lake storage mode read data directly from delta tables stored in OneLake. However, if a DAX query hits a limitation of Direct Lake mode, it may switch to DirectQuery mode (this behavior can be controlled, details follow). As of the writing of this article, the limitations were as follows:

- Single lakehouse/warehouse only
- Composite models
- T-SQL based views
- Calculated columns and calculated tables
- String length > 4000 characters
- Complex/non-structured delta table column types (Binary, GUID)
- Relationships based on DateTime types
- Hierarchies / Excel drillthrough
- RLS/OLS in Warehouse
- Direct Lake model resource limits

Switching from Direct Lake mode to DirectQuery mode results in reduced performance and, in some cases, query failure, because not all DAX features are supported in DirectQuery mode.
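To make one of the triggers from the list concrete, here is a minimal sketch of a T-SQL view defined on the warehouse or SQL analytics endpoint (all object and column names are hypothetical). A Direct Lake model table built on such a view has no backing delta table to read directly, so queries touching it fall back to DirectQuery:

```sql
-- Hypothetical T-SQL view in the warehouse / SQL analytics endpoint.
-- A Direct Lake table based on this view cannot be read from the delta
-- files directly, so DAX queries against it fall back to DirectQuery.
CREATE VIEW dbo.vw_sales_enriched
AS
SELECT  s.SalesOrderId,
        s.OrderDate,
        c.CustomerName,
        s.Amount
FROM    dbo.Sales    AS s
JOIN    dbo.Customer AS c ON c.CustomerId = s.CustomerId;
```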

File comparison in SSMS

When I’m working with git, I like to compare files at the commit level: version X is displayed in the left window, version Y in the right window, and the differences between the two versions are highlighted in color - additions are shown in green, deletions in red. On one of my recent projects I needed to compare changes in a .sql script that was not available in SSDT, but only in SSMS.

PowerBI – Direct Query lazy loading optimization

Recently one of my friends asked me for help with a slow PowerBI report. The data source is a view in an Oracle database, built as a join of several tables and returning a lot of rows. Without parameters, this view runs forever (they cancelled it after one hour …). Because the source is an Oracle database, only DirectQuery can be used as the storage mode. The problem with DirectQuery is that it doesn’t support query parameters, so in our case all the data from the source view must be read into PowerBI Desktop and filters can only be applied later.
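To make the “query parameters” point concrete, this is roughly the shape of the filtered query we would like PowerBI to send to Oracle (the view, column, and bind-variable names are hypothetical); without such a filter the view, joining several large tables, is read in full:

```sql
-- Hypothetical: the parameterized filter we would like pushed down to Oracle.
-- Without it, the multi-table view is scanned completely.
SELECT *
FROM   sales_report_v
WHERE  order_date >= :p_date_from
AND    order_date <  :p_date_to;
```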

Small, but useful SSMS tip

In a recent project I ran into a situation where I had to turn off page compression on some of the indexes. Unfortunately, the indexes were dropped and re-created in stored procedures, so I had to identify those procedures and change the data_compression setting from PAGE to NONE. To do this I decided to query the sys.sql_modules catalog view and check the definition column, which contains the stored procedure text. I just copied the text to the clipboard, pasted it into a new query window and … no voilà effect :-(.
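A minimal sketch of that search step (the LIKE pattern is only an illustration; the actual scripts may spell the option differently):

```sql
-- Find stored procedures whose text re-creates indexes with page compression.
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS [schema],
       OBJECT_NAME(m.object_id)        AS procedure_name,
       m.definition
FROM sys.sql_modules AS m
JOIN sys.objects     AS o ON o.object_id = m.object_id
WHERE o.type = 'P'                                  -- stored procedures only
  AND m.definition LIKE '%DATA[_]COMPRESSION%PAGE%';
```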

How to get the Actual Execution Plan for a specific query without destroying your server

One of the techniques for query performance troubleshooting is to compare the Actual and Estimated Execution Plans. In most cases, both plan shapes are identical and the only difference is in the runtime statistics (ActualNumberOfRows, ActualExecution, MemoryGrant, DegreeOfParallelism, etc.), which are present in the Actual Execution Plan only. If the difference between the estimated and actual number of rows is too big, you have a cardinality estimation issue. For demonstration, let’s enlarge the AdventureWorks2017 sample database using J.
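For reference, one simple way to capture the actual plan for a single statement from SSMS is to wrap it in SET STATISTICS XML (a minimal sketch against AdventureWorks2017; the query below is just an example workload, not necessarily the approach the post develops):

```sql
-- Capture the actual execution plan for just this statement.
USE AdventureWorks2017;
GO
SET STATISTICS XML ON;
GO
SELECT p.Name,
       SUM(sod.OrderQty)  AS total_qty,
       SUM(sod.LineTotal) AS total_amount
FROM Sales.SalesOrderDetail AS sod
JOIN Production.Product     AS p ON p.ProductID = sod.ProductID
GROUP BY p.Name;
GO
SET STATISTICS XML OFF;
```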