Let’s set the scene. You’ve built a wonderful, useful, and descriptive report for your stakeholders with a variety of tooltips that offer deeper insights. They love the tooltips, and have requested more for other visuals, but when they get to the boardroom to present, they are unable to hover over any numbers without the tooltip blocking the context of that data point. Because they love the tooltips for their personal exploration, they don’t want them disabled permanently and have requested a way to temporarily turn them off for presentations. This blog was inspired by Steve Hughes (you may know him) complaining about “too many tooltips in the way of my data”.
This one is for you, Dad.
Tooltips are useful for personal exploration, but not shared exploration.
2. Open your Power BI report and duplicate the page you would like to disable tooltips on. To do this, right-click on the page tab and select “Duplicate Page”.
3. Rename the new page something like “[original page name] – No Tooltip” by right-clicking on the new page tab. We are also going to hide this page so end users only reach it through the navigation we’ll add later.
4. Next, navigate to the View tab in the upper ribbon and select the “Selection” pane. This will bring up a list of all the visuals on your page.
5. Click on the first visual in the list that has a tooltip, then in the “Visualizations” pane navigate to the “General” tab and toggle “Tooltip” off.
6. Repeat by clicking down the list in the Selection pane; Power BI will keep the Visualizations pane open to the General tab for easy toggling. Pretty slick. Keep going until all your visuals have this toggle turned off. Keep in mind, not all of your visuals have a tooltip (titles, text boxes, etc.). To close the Selection pane, select “Selection” in the upper ribbon again or select the “X” at the top of the pane.
7. Time to add our icon! Go to “Insert” in the top ribbon, select “Image”, then choose the downloaded magnifying glass icon without the check mark. This will signal to end users that tooltips are turned off for this page.
8. After resizing and positioning the magnifying glass, in the “Visualizations” pane turn on Action, set Type to “Page navigation”, and set Destination to your original page (Overview in this example). I also turn on Tooltip for this image (the only one on this entire page, promise!) and instruct users to “Click here to turn on tooltips”.
9. Finally, go back to your original page and repeat steps 7 and 8 with the magnifying glass that has a check mark to signify tooltips are on. To your end users, this will feel like a button, and by keeping the duplicated visuals on a separate page, you’ll see minimal impact on performance.
PRO TIP
To have a true button experience, use the “Properties” section of the “General” tab in the “Visualizations” pane to make the magnifying glass images on both pages align perfectly and be the exact same size.
Thank you so much to those of you who were able to attend this three-hour, C#-packed session! The Power BI Cruise was an incredible conference, and I cannot recommend it enough if you’re looking for an opportunity to dig deep on Power BI topics with other passionate individuals. From the first ever Power BI bingo to a three-hour session on TMDL from Mathias himself, this was a trip that will stick with me for a while. In this post, I’ll include notes from my session and the sessions I attended plus links to the GitHub. Let me know if anything really piques your interest and you’d like to see more detailed blog posts about it!
Huge shout out to the other speakers and of course the incredible organizers who stuck with this amazing vision through the pandemic and made it happen. Thank you all so much for making this event full of passion and unparalleled learning. I don’t know how you all made this happen without sponsors, but it was truly an incredible event. So thank you again Asgeir, Erik, Johan, and Just!
My Session – Power BI, C#, and TMDL!
Now you may have noticed I give this session – Power BI Meets Programmability – quite often. Bringing programmability to the world of Power BI is something I greatly enjoy, but this session was truly unique. Having three hours to take a deep dive into what C# can do for Power BI development was the opportunity of a lifetime. With two more hours than usual, we explored creating calculation groups for the first time as well as automatically generating a set of measures based on column data types. Not only that, but the attendees were good sports and stuck around for an extra half hour to create calculation groups in C# using TMDL! I plan on doing a very detailed blog on this in the future, but the GitHub folder for this conference (link below) contains TMDL code live coded by the creator himself – Mathias Thierbach! The script contains a section that will create a TMDL version of your data model, allow you to interact with TMDL files within your solution, and publish those changes back to the data model. It blew my mind, and I hope it blows yours as well!
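For a rough flavor of the TOM side (the full, session-tested scripts live in the GitHub folder linked below), here is a minimal sketch of those two demos using the Tabular Object Model; the XMLA endpoint, dataset name, measure names, and format strings are placeholders I made up for illustration, not the session code itself.

using System;
using Microsoft.AnalysisServices.Tabular;

// Placeholder workspace and dataset; point these at your own XMLA endpoint
var server = new Server();
server.Connect("powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace");
var model = server.Databases.GetByName("MyDataset").Model;

// 1) Auto-generate a SUM measure for every numeric column that doesn't have one yet
foreach (Table table in model.Tables)
{
    foreach (Column column in table.Columns)
    {
        bool isNumeric = column.DataType == DataType.Int64
                      || column.DataType == DataType.Decimal
                      || column.DataType == DataType.Double;
        if (!isNumeric) continue;

        string measureName = $"Sum of {column.Name}";
        if (table.Measures.Find(measureName) == null)
        {
            table.Measures.Add(new Measure
            {
                Name = measureName,
                Expression = $"SUM('{table.Name}'[{column.Name}])",
                FormatString = "#,0"
            });
        }
    }
}

// 2) Create a simple time-intelligence calculation group
model.DiscourageImplicitMeasures = true; // required before calculation groups can be added
var calcGroup = new Table
{
    Name = "Time Intelligence",
    CalculationGroup = new CalculationGroup()
};
calcGroup.Columns.Add(new DataColumn { Name = "Time Calculation", DataType = DataType.String, SourceColumn = "Name" });
calcGroup.Partitions.Add(new Partition { Name = "Time Intelligence", Source = new CalculationGroupSource() });
calcGroup.CalculationGroup.CalculationItems.Add(new CalculationItem
{
    Name = "YTD",
    Expression = "CALCULATE(SELECTEDMEASURE(), DATESYTD('Date'[Date]))"
});
model.Tables.Add(calcGroup);

model.SaveChanges(); // push both changes back to the dataset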
Speaking of TMDL, this conference hosted the incredible Mathias Thierbach for a session on TMDL & Source Control. Below are my notes from that session, but if you ever have a chance to hear him speak on the subject, I highly recommend it. The notes are in bullet-point format for now; as I dig further into TMDL, I look forward to writing more detailed blog posts about this incredible language and what we can do with it. A quick C# sketch of the serialize-edit-publish loop follows the notes.
Key goals of TMDL: readable, editable, and allows collaboration
There will be a VSCode extension for TMDL coming soon (we were given a special preview in the session) that will have syntax highlighting for both DAX and M as well as TMDL
Shortcut to get to the TOM docs: goto.pbi.tools/tom-docs or pbi.onl/tools
TMDL is completely case insensitive!
Default export of .tmd file is camel case
Boolean properties: you only have to say the Boolean property to enforce the default value
To explicitly set it, syntax is: property: true/false
No matter how many times you serialize TMDL, the order of the objects will always be the same
You can reorder objects to your specifications (aka no longer alphabetical)
Metadata IsParameterQuery = true makes it a parameter
Ordinal is not a table property in TOM, but you can create one in TMDL! It won’t show up in this order in Desktop…yet. But it will allow consistent order. It’s a weak ordinal; in cases of conflict it won’t cause any errors. It should default to alphabetical order on conflict.
Indentation for expressions is not white space sensitive (DAX and M)
It finds a shared whitespace to the left to create indentation in the final model
Indentation only matters for nested objects
To fold levels in VS Code use Ctrl + K + [level_number]
To unfold all levels in VS Code use Ctrl + K + J
Unicode is supported
Shared expressions = parameters
Preview 1 (what we have now): perspectives can be created but can’t be read back
Will be fixed in Preview 2
4 previews total, goal is for GA at end of the year
Objects are referenced as tableName.columnName, etc.
If there are spaces, use single quotes as a delimiter
Description is added by using /// text on top of the object declaration. Can be multi-line, and line breaks are maintained.
No comment support but you can comment within M and DAX
Triple quotes ('''[formula]''') will be a safe way to mark M/DAX expression sections. Likely coming in Preview 3 or 4 later this year
Language spec will be published
Default properties are available to find in Rui’s documentation
Default property is what is after the = for the object
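To tie these notes back to code: here is a rough sketch of the serialize-edit-publish loop, assuming a TOM build that ships the TMDL preview API (TmdlSerializer in Microsoft.AnalysisServices.Tabular); the endpoint, dataset name, and folder path below are placeholders, not the live-coded script from the session.

using Microsoft.AnalysisServices.Tabular;

// Placeholders: point these at your own workspace, dataset, and local folder
const string XmlaEndpoint = "powerbi://api.powerbi.com/v1.0/myorg/MyWorkspace";
const string TmdlFolder = @"C:\repos\MyDataset-tmdl";

using (var server = new Server())
{
    server.Connect(XmlaEndpoint);
    var database = server.Databases.GetByName("MyDataset");

    // 1) Write the model out as a folder of .tmdl files
    TmdlSerializer.SerializeDatabaseToFolder(database, TmdlFolder);

    // 2) Edit the .tmdl files in VS Code, or commit them to source control...

    // 3) Read the edited files back into an offline Database object
    var edited = TmdlSerializer.DeserializeDatabaseFromFolder(TmdlFolder);

    // 4) Copy the edited model over the connected one and publish the changes
    edited.Model.CopyTo(database.Model);
    database.Model.SaveChanges();
}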
Hacking = use various available means to get better insights from data.
This session absolutely blew my mind. Stepan does an incredible job showing the art of the possible by hacking into expression properties within Tabular Editor as well as calculation group formatting. We had an awesome time connecting at this conference and I hope to see him again at future conferences! In the meantime, below is a link to his GitHub as well as a blog post that covers a lot of the same material as his session. Be prepared to have your mind blown on this one.
\ used in a format string will escape the wildcard characters
Keep your audience in mind; for example, green is not positive in Japan
topProductByLocation = MAXX(TOPN(1, ALLSELECTED(Products[ProductName]), [# Total Quantity]), Products[ProductName])
# sold quantity by Top Product = VAR _product = [topProductByLocation] RETURN CALCULATE([# Total Quantity], Products[ProductName] = _product)
This allows us to use a measure as a filter by precalculating it in a variable
Best whitespace option is to use UNICHAR(8203) (a zero-width space); it’s a very, very small dot
Color picker: Just Color Picker
Use smart text in subtitles in charts to tell people when we hit a target or how close we are
You can use FORMAT() to change numbers to text with a thousands delimiter! See the subtitle measure
You can use ; within a format string to do an if statement (one format for positive values, another for negative)! BUT don’t forget to escape any special characters using \
Example: "Sales target was set to " & FORMAT(_target,"#,,,,,,,,,,#") & " and we have sold " & FORMAT(_sold,"#,,,,,,,,,,#") & " which means we " & FORMAT(_sold-_target,"\have;\haven't") & " fulfilled our target."
In this case “h” is a special character that is trying to get an hour value and will show 0 if you don’t escape it
If you create conditional formatting in bar charts, the colors will stay if you switch to a line chart even though conditional colors are not available in the UI
Same thing happens if you have an x constant line in a line chart and switch to bar chart
You can customize spacing below titles and subtitles.
Meeting Jay and chatting during various free time on the ship was definitely a highlight of the trip. He’s incredibly knowledgeable on Power BI (especially the elusive DAX language), and has a deep understanding of the Microsoft strategy on new feature creation. It was a pleasure getting to laugh and brainstorm together, and I can’t wait to hang out again at future conferences. Below are some questions that were asked during the AMA session as well as answers provided and links I found while Jay was talking.
Why can’t we use RLS and USERELATIONSHIP() together?
RLS blocks USERELATIONSHIP() since USERELATIONSHIP() can (but doesn’t always) go around the RLS relationship and create a security risk
Different people own different visuals which is why the options and customizations vary so widely
Miguel Myers did a session on the future of visuals in Power BI. Huge plans with him on board to own visuals and align everything
There is a LinkedIn group called PBI Core Vision; put comments there on what can be improved in the future. They’re also active on Twitter and LinkedIn.
When we change the hierarchy name or order via XMLA, it now breaks the visual. Can we get this to update in the visual in the service with adjusted level orders?
SEND TO JAY
Also ask him about Top N + Others
Tableau is the primary competition for PBI, Looker is up and coming as Google’s offering, and third is Qlik because it’s still getting new clients in Europe
These are taken into consideration when making a case for new items
Cannot blatantly copy items
There are specific people named in the agreement with Tableau who cannot even look at Tableau (they can only look as a user, not at the debugger or dll file)
In the future, hoping to have functions that could be shared within a community for DAX
Could we get Rulers or aligned grid lines in PBI Desktop?
Rosie or Ree-ann would own this
Can we customize grids?
Not part of the visuals team; it’s placed under onboarding
Advancements to API?
Hardening of the PBIX format will help with getting better Scanner APIs that go deeper (visual-level and page-level details on what’s available within the tenant)
DataZen got killed and unsure why
Jay would investigate how to get rid of filter direction in DAX
Why can I not put a measure name in smart narratives?
You can do this by creating a shape and using fx to put in the text you want
I had an amazing time adventuring to Wales with other members of the data community last week! It was my first international conference, and I had heard so many incredible things about it, but still it blew away all my expectations. I’m so honored to have met so many incredible people throughout the week! Thank you to new friends and old for making it a conference to remember.
Below are the links to GitHub for my session as well as some notes from the sessions I attended. Disclaimer: these notes are not even close to the real thing. Be sure to check out these sessions if you see them being held at your local user group or SQL Saturday in the future! I tend to take very bullet-point-oriented notes to best organize my thoughts; hopefully these spark your interest and guide you to professionals who know about specific technologies within the data realm.
Check out the agenda and feel free to reach out to folks with topics you’re interested in! They may have their slide deck easily accessible to folks who may not make it to the conference.
Thank you so much to everyone who made it out to my session!
It was incredible to have 48 people in person and another 21 virtual! It was a humbling experience to see that room fill up and know they were all there to learn something new and exciting to help with their day-to-day work. Can’t wait to look through the feedback and learn where to tweak my presentation to make an even bigger impact in the future!
GitHub with slide deck, sample C# code, and sample PBIX file (with the Excel file that powers it): GitHub Link
Fun tip someone showed me after the session: If you use /* and */ to comment out chunks of code (like I do frequently during the session), you can use /* and --*/ instead, so that you only need to comment out the /* to run that block. Pretty neat!
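For languages with // line comments, like the C# used throughout the session demos, the same trick works with // in place of --. This is just an illustration of the pattern; the WriteLine calls are placeholder code, not anything from the repo.

using System;

// Block is OFF: the /* opens a comment that runs all the way to the */ at the end of the //*/ line
/*
Console.WriteLine("This block is currently commented out.");
//*/

// Block is ON: commenting out the opening /* is the only change needed,
// because the trailing //*/ line is now just an ordinary line comment
///*
Console.WriteLine("This block now runs.");
//*/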
Supercharge Power BI with Azure Synapse Analytics with Mathias Halkjaer
Limitations of PBI
Source load minimization doesn’t exist in most cases (lots of queries against the same source)
Out of sight transformation layer
Not best tool for data warehouse tasks
No logs or output from quality checks
Doesn’t handle humongous data very well
Enchantments needed for PBI from Synapse
Time-travel (historical data & change tracking)
Enlarge (massive scale)
Counter spell (reverse ETL)
Wish (supports multiple programming languages)
Divination (AI)
Regenerate (CI/CD and source control)
Azure Synapse
A toolbox full of tools: serverless sql, data warehouse, spark engine, etc.
Data Lake
Landing zone for data
Cheap
Redundant
Quick and dirty analytics
Performance
Most efficient way to load data into data warehouse
Easily consumed
Report Design & Psychology (Quick Tips) with Jon Lunn
Dual process theory: intuitive vs attentive
You can mentally process up to 15 items intuitively. Past that it gets pushed to attentive.
Things can move from system 2 (attentive) to system 1 (intuitive) with repetition (aka driving or typing).
Layout – left to right, top to bottom. Move most important KPI to top left. Remove distractions (excess colors, 3d visuals, pie charts, etc). Titles in upper left make it easier to know what’s contained within the visual.
Tip – to visualize dots use dice patterns to move it to a system 1 process.
Ratios (how many, relative) vs percentages (how likely, chance risk probability). Both are good options to contextualize a number more. We usually understand ratios much better because how many is more intuitive than how likely.
Intuitive systems are more prone to errors or bias. If you want to hide something, go for percentage.
Use the power of 10’s (aka 8 out of 10 preferred it). Round and use a ratio to move it into intuitive process.
You can manage RLS roles within a datamart and join tables via the GUI, with a UI similar to the PBI Desktop modeling view. In preview only. You can use a SQL endpoint to connect to your datamart.
Why? Access data through a SQL endpoint. Do simple aggregation using SQL script instead of M. There are potential challenges since this is in preview.
It keeps getting easier to move Oracle and Open Source workloads to Azure with David Levy
Cloud migration phases:
Discover
Assess
Migrate
Optimize
Secure & Manage
Azure Migrate: central hub of tools for datacenter migration
Primary tool for discover and assess
Azure Database Migration Service – Gen 2 has much more control for end users to say how many resources that item gets
Use Oracle Assessments in Azure Data Studio to get recommendations for SKU size
Oracle databases in Oracle Cloud can use OCI Interconnect to pull data back and forth with Azure IaaS
A lot of people move to PostgreSQL to pay less and get open source resources
Migration strategy
Rehost
Refactor
Rearchitect
Replace
No better way to run software than by SaaS. Why invest the time and resources when the vendor can maintain this for you and allow you to focus on your business?
The Oracle Assessments in Azure Data Studio can see all the database details, SKU recommendations, and Feature Compatibility. Each section has a migration effort attached, very nice
Azure is a great place for PostgreSQL:
High availability
Maintained
The PostgreSQL migration extension can run an assessment in ADS to understand migration readiness
Integrated Assessment
Easy set up and config in ADS
To migrate PostgreSQL
pg_dump scripts out the PostgreSQL schema
psql allows creating databases and tables
Use the PostgreSQL to Azure Database for PostgreSQL Online Migration Wizard
You can start cutover with pending changes
Azure Database for MySQL
Flexible server is now available, allows mission critical apps to use zone redundancy and fine-grain maintenance scheduling
Azure Database Migration Service (ADMS)
Migrates schema and data
Preview = replicate changes
You can do this as an online migration
Optimizing your environment
Costs during & after migration
During: Azure TCO Calculator, Azure Migrate, Azure Hybrid Benefit & Reserved Instances, and join Azure Migration & Modernization Program
Do reservations early, you’ll lose money otherwise
AMMP = Azure Migration & Modernization Program, provides coaching with best practice
Make sure you get good data for trouble times so your migrated instance is ready for the worst case scenario
Execute iteratively
5 Things You Can Do to build better looking reports – James McGillivray
Design is subjective
No clear end point
No one solution
Design is often seen as less important
Blog: jimbabwe.co.za
Tip Techniques
Grid Layout
Pro: left brain, less layout tweaking happens, looks professional
Con: can look boxy
Gutters = white space between visuals
Margins = white space along edge of report
Maximize Real Estate
Pro: bigger visuals = more value, fewer visuals = less distractions
Con: often hard to convince stakeholders, all questions not answered by default
Beautiful Colors
Pro: use color to convey meaning, highlight data, show continuous data, qualitative sequential divergent
Con: many pitfalls, accessibility concerns, can be polarising
Consider the Audience
Pro: most important elements first, remove unneeded elements, hierarchy view (summary on top, grouped middle, detail on bottom)
Identifying and Preventing Unauthorized Power BI Gateways with Angela Henry
The problem: available to all, gateways are not only for PBI, results in chaos
How do we find gateways?
Manage connections & gateways. Let’s us identify gateways. Need to be Azure AD global, PBI service admin, or Gateway Admin
This past weekend, I had the pleasure of attending and speaking at SQL Saturday Atlanta! If you’re in the area, I highly recommend connecting with the local user group and getting to know fellow data nerds near you. Thank you to everyone who was able to make it out, it was great to see such a large in person SQL Saturday event post-pandemic! Big shout out to the volunteers, organizers, and sponsors who made it all happen. Thank you all for your time, hard work, and commitment that will lead to rebuilding the data community. This was the first conference I felt confident enough in my presentation to attend a session in every timeslot! Below are some takeaways I took from the sessions I attended, but there were a lot of other incredible sessions and I recommend checking out the schedule for any that interest you and reaching out to the speaker.
Attended Session Takeaways
Practical Use Cases for Composite Models (Kevin Arnold)
Never thought of using composite models for pulling in an enterprise model with a thin slice specific to the use case of the report. Genius method for maintaining a focused enterprise model while meeting the needs of your end users.
Perspectives can be used with personalized visuals instead of hiding columns, so end users are not overwhelmed by column and measure options.
Field parameters can also be used/created by end users for a cultivated experience that meets their business needs without impacting the larger audience of your enterprise model. If you haven’t heard of them (I hadn’t), highly recommend checking out this link.
Planning Steps for a Power BI Report (Belinda Allen)
Always ask stakeholders what their experience is with Power BI, it will help put all their questions and assumptions in context.
Ask your stakeholder for the scope of success. If they can’t define what success is for the project, you have the wrong person or the wrong client.
Show nothing in a needs gathering session. Listen and take notes. Similar to watching a movie before reading a book, it will severely limit the imagination necessary for an impactful report.
Ask who is maintaining the data currently and who will continue to do so.
Check out PowerBI.tips Podcast.
Ask if they want data access or data analytics. This will let you know if a visual report is a waste of resources for them and/or if paginated report or something similar better fits their needs.
Check out Chris Wagner’s blog, he has a great slide deck for a wireframing session with success owner after the needs gathering session.
Host office hours or something similar to foster on-going user growth
After project, always ask if we achieved defined success benchmarks. Try to give them a concrete ROI (ie x hours saved = x $ saved based on average salary).
Linktr.ee/msbelindaallen
Introduction to Azure Synapse Studio Development Tools (Russel Loski)
Synapse workspace can allow you to create T-SQL and python notebooks off items in Azure Data Lake Storage like csv and parquet files.
Notebooks allow markdown to be side-by-side with code
ctrl + space will bring up snippets to use within a notebook
No indexing since it’s serverless, prepare for some wait time.
We can promote column headers using the HEADER_ROW = TRUE option
DataOps 101 – A Better Way to Develop and Deliver Data Analytics (John Kerski)
To reduce the “other” line item in performance analyzer, limit the number of visual objects on a page.
Optimize DAX when the line item is over 120 milliseconds.
SLA for page loads is 5 seconds.
Using drop downs in your slicer will delay DAX from running for that visual object. That deferred execution aids in speeding up the initial load.
Tooltips run DAX behind the scenes on the initial load for visuals (you can see this by copying out the DAX query into the DAX Studio). To delay this execution until it’s needed, use a tooltip page.
If the storage engine time in DAX Studio is over 20 milliseconds, there’s opportunity to optimize.
Variables limit the number of times a sub-measure will be run and speed up DAX queries behind visuals.
Keep in mind while performance tuning, Power BI desktop has 3 caches – visual, report, and analysis services engine. You can clear all caches within the desktop tool except visual. To clear that cache, you need to close and reopen the PBIX file.
My Session
I cannot express enough how grateful I am for everyone who was able to make it to my session! To have so many established professionals in the field approach me afterwards telling me how well it went was a dream come true. If you’re interested in reviewing the slides and code, please check out my GitHub folder for all you will need to recreate the demo we went through. Miss it? No worries! I’ll be presenting this topic at SQLBits and the Power BI Cruise, so come join me! I’m also open to presenting at various user groups, feel free to reach out to me at kristyna@dataonwheels.com.
Again, thank you so much to everyone who made this weekend possible and please connect with me on LinkedIn and Twitter! I’d love to stay connected!
Making multilingual reports in Power BI requires a lot of different elements. Translations can be added to PBIX files to translate column names, visual titles, etc., but these translations will not impact the data itself. One way to translate the dimensions themselves (star schema introduction) is to have a translation table that contains a display language and a value column for each ID of the dimension. In the example below, the translation table would replace the original table, and we could apply RLS to the Language column.
Data Model Edits
To make RLS work in this manner, that is, to filter down one table and not the entire data model, there are a couple of things that need to be set up. First, you will need a bridge table for every translated table that has a unique list of the IDs. Second, the RLS filter needs to be added to each of the tables that has a translation. Lastly, the relationships need to be re-established so that the translated tables are only related to their bridge table.
Add RLS Filters
In order to apply RLS dynamically in an embedded Power BI scenario, the locale needs to be passed through the embedded token as an argument within the UPN (dynamic RLS instructions). Unfortunately, Power BI can only receive one value for each embedded token, so if you need true RLS on the report along with dynamic filtering for the translation, you will need to pass both arguments through the embedded token and separate them later. For example, if the UPN of the user was example@test.com, you can program the embedded token to send both the default browser locale from that user and the UPN of the signed-in user in this format: UPN|locale.
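As a rough illustration only (not the original setup), generating that token with the Power BI .NET SDK might look something like this; the report and dataset IDs, the role name “LanguageFilter”, and the access-token handling are all placeholders.

using System;
using System.Collections.Generic;
using Microsoft.PowerBI.Api;
using Microsoft.PowerBI.Api.Models;
using Microsoft.Rest;

// Placeholder identifiers for illustration only
var reportId = new Guid("00000000-0000-0000-0000-000000000001");
var datasetId = "00000000-0000-0000-0000-000000000002";
var upn = "example@test.com";
var locale = "en-US"; // e.g. taken from the user's browser locale

// An Azure AD access token for your service principal or master user goes here
var pbiClient = new PowerBIClient(new Uri("https://api.powerbi.com"),
                                  new TokenCredentials("<aad-access-token>", "Bearer"));

var identity = new EffectiveIdentity(
    username: $"{upn}|{locale}",                      // UPN|locale, split later with PATHITEM()
    datasets: new List<string> { datasetId },
    roles: new List<string> { "LanguageFilter" });    // placeholder role name

var request = new GenerateTokenRequestV2(
    reports: new List<GenerateTokenRequestV2Report> { new GenerateTokenRequestV2Report(reportId) },
    datasets: new List<GenerateTokenRequestV2Dataset> { new GenerateTokenRequestV2Dataset(datasetId) },
    identities: new List<EffectiveIdentity> { identity });

var embedToken = pbiClient.EmbedToken.GenerateToken(request);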
It looks a bit goofy, but the pipe delimiter will allow us to separate out the two RLS arguments via the DAX function PATHITEM(). With PATHITEM(), we can tell the RLS filter to either choose what’s before the pipe (the UPN in this case) or what’s after (the locale) to pass through for the role filter. The filter for the example above would be:
[Language] = PATHITEM(USERPRINCIPALNAME(),2)
Once you have applied this filter to all of the tables with the Language field, you’ll need to make some adjustments to your data model.
Reconnect Relationships
Per the diagram below, you will need to connect the bridge table into your existing data model. Remember, the ONLY relationship your translation table should have is to the bridge table. Every other fact table can join to the bridge table. The joins are as follows: the Translation Table has ONE bidirectional many-to-one (* to 1) join to the Bridge Table, and the Bridge Table joins to every fact table in a single-direction, one-to-many (1 to *) relationship, with the Bridge Table filtering the fact tables.
NOTE: do not use CROSSFILTER() in measures with tables that have RLS applied. Those measures will break in the online Power BI service.
And that’s all there is to it! Hopefully this saves you time when building a multilingual, embedded report. Happy coding!