Avoiding IT “Involvement Entitlement”

Avoid IT “Involvement Entitlement” by being a partner with the business, not a gatekeeper – if they’re not coming to you for help, don’t require it, but instead ask yourself why not and fix that.

You’re in the IT department, and you have some pretty solid processes in place – project request forms and approval committees, change approval and management, and some kind of steering committee. Instead of everything lining up nicely, though, you’re getting business units going around the process, building their own applications, writing their own code, and doing it without IT approval. With such a great funnel to go through and everything so neatly defined and documented, why would the business do that?

What is “Involvement Entitlement”?

This Programmers.StackExchange question is a great summary of this, and there’s a general feeling throughout many of the answers that I’ll call “Involvement Entitlement.” In summary, I’d define it this way:

All use of technology must be in a manner explicitly approved by IT, and any custom solutions in place must be written by a developer in the IT department. Any use of technology in a way not explicitly cleared by IT is forbidden.

This attitude can manifest itself in many different ways:

  • A policy requiring IT approval for any software
  • Locking down computers so nothing unapproved can be installed
  • Allowing custom development, but requiring that it clear some bar of “development” or “infosec” approval that’s never actually met in practice or is so complicated that it’s never actually attempted
  • Custom applications or processes being "allowed", but setting up requirements with learning curves so high that it's not feasible for anybody outside IT to meet them (for example, "all applications must be developed in C++" or "every process must have high-availability and load-balancing with automatic no-touch failover to multiple data centers"), when there are no technical reasons for these requirements

The problem is that these policies don't actually prevent custom solutions; they just prevent IT from being involved in (or often even finding out about) the things that are being developed. The business doesn't want to skirt the process or avoid IT, but they do want something very simple – for their jobs to be easier. When the business builds a custom solution like an Excel macro that takes a workbook and uses keypress automation to type a long list of transactions into the accounting software, or a small .NET desktop app that generates PDFs of customer paperwork for bankruptcy processing and emails them to the relevant state authority, they're doing it for one reason – it's easier than doing it the old way. If they're not involving IT, it's for the same reason – it's easier that way.

What’s really going on?

When you stumble across one of these processes that didn’t go through IT, ask yourself the following two things:

  • What are they trying to accomplish by developing something themselves?
  • Why is the business going around the IT processes?

If you don't like all the custom development that's going on, forbidding it is mistaking a symptom for the problem – you should instead be asking why they're going around IT, not just telling them it's not allowed. Remember that you (IT) exist to help make their job easier, not more complicated. Sometimes the value of IT comes in the form of ensuring that applications are fault-tolerant or highly available, or in the form of ensuring an application can be supported once the lone developer that built it leaves the company. At the end of the day, you're in the solution business, and if you and your processes aren't enabling the business to work better, you're not adding value.

But here’s something to remember: Every custom application that’s been developed outside of IT corresponds to a need that’s not being met by IT. There may be a good reason they’re not met – not a priority in the company, very specialized problem, not as good as other options, custom language your IT people don’t know, etc – and the lack of IT involvement may be legitimate, but these solutions were created because some department had a need that IT couldn’t (or wouldn’t) satisfy.

IT can still help – be a partner, not gatekeeper

Try to help them solve their problems, and if you don’t have the time or resources, let them solve them on their own. Mandating some language that has a steep learning curve, with the sole purpose of keeping people out of your business, only serves to enhance the elitist attitude most business users perceive IT to have. In the end, that kind of elite attitude only leads to more of the same problem, as users are afraid to approach IT for fear that they’ll either be told “no” outright or that it will add extra effort without adding any value.

There's no reason I'd forbid this sort of development, within reason – it eases up constraints on shared resources (mainly IT), and allows each group to empower themselves to solve their own problems (people versed in advanced Excel are pretty common – most departments have at least one).

However, you also shouldn't be expected to solve any problems that arise from these applications, or support them after the original developer leaves the company. As some of the comments in that original post mention, this doesn't stop the big boss from demanding that you support it, but if you keep an eye out for the kinds of custom applications and processes out there, you can get a feel for when something becomes critical and when you might need to get involved to bring it "in-house." Also, if something is connecting to and modifying systems that are under IT control, then IT should be involved, if only to ensure the security and integrity of their central systems – however, if it's something confined to a user's desktop, why feel the need to forbid it?

Postscript – don’t let perfect stand in the way of done

When I said I started this post 7 years ago, I meant it – it's a classic case of not knowing exactly what perfect looked like, so I let it stand in the way of done. Go find something you're not sure how to finish and figure out what keeps you from getting it done – and then deliver šŸ™‚ A moderately good idea delivered is worth 100x as much as an incredible idea stuck in your head!

Wow, I let that one sit around for a while….

Adding Failed Refresh Notification Emails to Power BI Server

A script to generate failed report refresh alerts in Power BI Server that can be scheduled with SQL Agent.

Power BI Server has its use cases, but it also has some gaps where it falls short of the functionality in the Power BI Service (app.PowerBI.com) – one of the major gaps is that it doesn't notify anybody when a report refresh fails. Report owners are left either reviewing the reports every time they're scheduled to ensure the data is refreshed, creating a report that shows refresh status (and hoping it actually refreshes itself), or implementing their own homebrew notification solution.

I opted for #3 – creating a custom notification process – and I want to share it with anybody else that’s wondering how to set up notifications for their own internal Power BI Server instance. If you just want the script, jump to the end for the link and variable instructions.

The process has a few high-level parts:

  • Query for failed report refreshes. Check the report catalog and refresh history to find reports where the last refresh attempt failed within the monitoring window. If a report has already been refreshed again successfully, or is currently being refreshed, I chose not to alert on it – that meant somebody was already on it, and we wanted to see reports that were currently failed and needed attention, not a log of every failure.
  • Create the email notification text. This is a multi-step process that inserts a header, builds a table with details on the reports, and then a footer with instructions and additional details.
  • Actually send the email. Don’t forget to do this part šŸ™‚ We’ll use DBMail to send it.
  • SQL Agent job that runs the notification code. This is the recurring schedule that checks for failures. We chose to run it every 5 minutes, so that’s what I have it set up for below (it can be customized as needed for any interval).

CAVEAT BEFORE WE START: We’ll be querying the PBI Report Server database, which could change at any time and is totally unsupported. I’ve never seen any issues caused by it and I’ve never had a table/view change from under code I’ve written to do this, but you’ve been warned. However, we’re only reading – don’t make any changes or write to this database or you could break your PBIRS installation. If you want to learn more about this in general, check out this write-up.

Query for failed report refreshes

We'll start by grabbing the last successful refresh for each of our reports. The process doesn't actively use this, but it keeps it around so that reports that have never been refreshed, or that haven't refreshed successfully in X amount of time, can be called out in the future:

  SELECT [SubscriptionID]
         ,max([EndTime]) as LastSuccessTime
    FROM [YourPBIRSDatabase].[dbo].[SubscriptionHistory]
   WHERE Status = 0 -- Success ("Completed Data Refresh")
GROUP BY SubscriptionID

The [SubscriptionHistory] table contains all the records for all scheduled refreshes, regardless of success or failure. In this case, we’re filtering for “Status = 0”, which is success – other numbers are error codes.

To grab a list of all the reports and their refresh status, we'll use two more tables: [Catalog] (which contains a list of all the reports) and [Subscriptions] (which contains the scheduled refresh jobs). Where [SubscriptionHistory] is the log of all the job executions, [Subscriptions] contains a list of every scheduled refresh that's set up. One of the nice things about this table is that it also contains details about the current state of the schedule – whether the last run was a success or a failure, and details on the last error if there was one.

[Subscriptions] also contains a column called [LastRunTime] with a timestamp when the schedule was last run. What’s handy here is that it’s a timestamp for the last STATUS of the schedule, not the time the last refresh started. Because of that, it will contain the timestamp of the end of execution in case of a failure. Here’s how they link together (the boxes in orange match, or match close enough to show that it’s end time and not start time):

When we join those together, here's what we get (you'll start to see some variables here – the script has a large block at the top where you can customize its behavior):

  select LEFT(c.Path, LEN(c.Path) - LEN(c.Name)) as Path,
         c.Name,
         @URLRootDataRefresh + c.Path as DataRefreshURL,
         s.Description, s.LastStatus, s.LastRunTime, ls.LastSuccessTime
    into #workingtable
    from [dbo].[Subscriptions] s
    join dbo.[Catalog] c
      on s.Report_OID = c.ItemID
    -- the "last success" query from earlier, so LastSuccessTime is available per subscription
    left join (select SubscriptionID, MAX(EndTime) as LastSuccessTime
                 from dbo.SubscriptionHistory
                where Status = 0
             group by SubscriptionID) ls
      on ls.SubscriptionID = s.SubscriptionID
   where s.LastStatus not in ('Completed Data Refresh', 'New Scheduled Refresh Plan', 'Refreshing Data')
     and c.Path not like @IgnorePath
     and DATEDIFF(ss, s.LastRunTime, GETDATE()) < @ThresholdSeconds

The first filter in the WHERE clause filters out reports that are either refreshing now or last refreshed successfully. The second allows you to use a variable to exclude a path (we keep our retired reports in a hidden folder and didn't want them triggering an alert) – this is optional. The third filter only returns reports that failed within the last X seconds – this should match how often the job runs to check for new failures so that it only picks up failures since the last execution. A more advanced version could check when the SQL Agent job last ran and get failures since then, but that wasn't needed in our case.

There’s also a column called DataRefreshURL for each report – this will link directly to the “Scheduled Refresh” settings page for the report, and uses the report management page under (yourserver)/reports/manage/catalogitem/listcaching_pbi/(yourreport). This will allow you to go right from the email to the page that lets you view error details and refresh the report again if the error is temporary.

Create the email notification text

Now that we have a temp table (#WorkingTable) that’s holding our failed reports, let’s build an email!

I wanted to line things up neatly in columns, so the first step was figuring out how wide each column should be:

SET @LongestName = (SELECT MAX(LEN(Name))           FROM #workingtable)
SET @LongestPath = (SELECT MAX(LEN(path))           FROM #workingtable)
set @LongestLink = (SELECT MAX(LEN(DataRefreshURL)) FROM #workingtable)

With those lengths set, we can build the headers. For each of the three columns, we’ll include the column title and then enough spaces to justify each column evenly – we’ll use the same formula for the detail lines as we will for the header:

SELECT @Body = @Body + ('Report Title:' + SPACE(@LongestName - 13 + 2))
                     + ('Path:' + SPACE(@LongestPath - 5 + 2))
                     + ('Refresh Status Link:' + SPACE(@LongestLink - 20))
                     + @CRLF
                     + REPLICATE('-', @LongestName) + SPACE(2)
                     + REPLICATE('-', @LongestPath) + SPACE(2)
                     + REPLICATE('-', @LongestLink)
                     + @CRLF

This uses both SPACE and REPLICATE to include a certain number of characters to ensure the final text is evenly column justified. There’s also an @CRLF variable that’s set at the beginning of the script – whenever I’m building multi-line text, I always create this variable as it makes breaking lines apart much, much easier further down and allows you to keep formatting consistent.
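
If you're adapting this yourself, @CRLF is nothing fancy – here's a minimal sketch of how it (and @Body) might be declared near the top of the script (these declarations are illustrative, not copied from the download):

-- A two-character line break (carriage return + line feed) for building multi-line text
DECLARE @CRLF CHAR(2) = CHAR(13) + CHAR(10);

-- Start the body as an empty string; concatenating onto a NULL variable would leave it NULL
DECLARE @Body VARCHAR(MAX) = '';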

The line items are built much the same way, though here's another handy trick that may be new to you. In the SELECT below, @Body is appended to by every single row in the resultset – the product is a variable that contains one additional line for every report in our table. This is useful any time you want to consolidate a set of values from a table into a single variable, not just when building multi-line text.

  SELECT @Body = @Body + Name + SPACE(@LongestName - LEN(Name) + 2)
                       + Path + SPACE(@LongestPath - LEN(Path) + 2)
                       + DataRefreshURL
                       + @CRLF
    FROM #workingtable
ORDER BY Name, Path

We’re also padding these values based on the “Longest” variables we set earlier – the result is something that looks like this:

At the end, I encourage you to add a line that includes the server that’s sending the email and, if applicable, a link to any documentation – I’ve gotten into a situation before where we were getting an alert every few minutes that was being generated by some T-SQL code somewhere and we had to go through every job on every one of dozens of servers to find it. We eventually resolved the cause, but not before we’d gotten over 1000 emails each about how it was broken! Here’s something simple:

SET @Body = @Body + 'This email was generated by ' + @@SERVERNAME + ' at ' + CONVERT(VARCHAR, GETDATE(), 120)

Actually send the email

This part is simple and uses the sp_send_dbmail system stored procedure. It’s pretty straightforward – everything you need is defined in a variable earlier in the script, so you just need to call the SP:

EXEC msdb.dbo.sp_send_dbmail
     @profile_name          = @DBMailProfile,
     @recipients            = @ToAddresses,
     @copy_recipients       = @CopyAddresses,
     @Blind_copy_recipients = @Blind_copy_recipients,
     @subject               = @Subject,
     @body                  = @Body

Since you'll be running this code from a SQL Agent job, make sure that the account you're using for SQL Agent (or the proxy account, if you've configured one) has this permission granted. From the Microsoft docs about sp_send_dbmail:

Execute permissions for sp_send_dbmail default to all members of the DatabaseMailUser database role in the msdb database.

If you don’t have those permissions, you’ll get a moderately cryptic error and your mail won’t send.
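
If the account is missing that role, granting it is quick – a minimal sketch, assuming your Agent job runs as a domain account (the account name below is just a placeholder):

USE msdb;
GO
-- Only needed if the account isn't already a user in msdb
CREATE USER [DOMAIN\YourAgentAccount] FOR LOGIN [DOMAIN\YourAgentAccount];
ALTER ROLE DatabaseMailUserRole ADD MEMBER [DOMAIN\YourAgentAccount];
GO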

SQL Agent job that runs the notification code

Once your script is set up, you need to schedule the job – to do that, follow any walk-through on creating a SQL Agent job – I like the one here: https://www.c-sharpcorner.com/UploadFile/raj1979/create-and-schedule-a-job-in-sql-server-2008/

For the “Steps Tab” from that walk-through, you’ll want to paste the script you’ve set up (once you update all the variables to match your environment). For the “Schedules Tab” step, set the job to run every X seconds, where X is the same number of seconds you set in your variables (the default in the script is 300 seconds, or 5 minutes). The notification/alerts are optional, but you can set those up if you like.
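
If you'd rather script the job than click through the wizard, here's a rough sketch using the msdb job procedures – the job and schedule names are made up, and the step command is just a placeholder for wherever you paste the notification script:

USE msdb;
GO
-- Create the job (the name is arbitrary)
EXEC dbo.sp_add_job @job_name = N'Power BI Server - Failed Refresh Alerts';

-- One T-SQL step - paste the full notification script in as @command
EXEC dbo.sp_add_jobstep
     @job_name      = N'Power BI Server - Failed Refresh Alerts',
     @step_name     = N'Check for failed refreshes and send email',
     @subsystem     = N'TSQL',
     @database_name = N'YourPBIRSDatabase',
     @command       = N'/* paste the notification script here */';

-- Every 5 minutes, matching the default @ThresholdSeconds of 300
EXEC dbo.sp_add_schedule
     @schedule_name        = N'Every 5 minutes',
     @freq_type            = 4,  -- daily
     @freq_interval        = 1,
     @freq_subday_type     = 4,  -- units of minutes
     @freq_subday_interval = 5;

EXEC dbo.sp_attach_schedule
     @job_name      = N'Power BI Server - Failed Refresh Alerts',
     @schedule_name = N'Every 5 minutes';

-- Register the job on the local server so the Agent actually picks it up
EXEC dbo.sp_add_jobserver @job_name = N'Power BI Server - Failed Refresh Alerts';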

Cool – so what do I need to do?

To use the script, the minimum you need to do is set the variables at the beginning to match your environment and then schedule the agent job. Here’s the default variable block from the script:

DECLARE @ThresholdSeconds INT                = 300,
	@ToAddresses VARCHAR(1000)           = 'You@SomeDomain.com',
	@CopyAddresses VARCHAR(1000)         = NULL,
	@Blind_copy_recipients VARCHAR(1000) = NULL,
	@URLRootDataRefresh VARCHAR(1000)    = 'https://(yourpbiserver)/reports',
	@IgnorePath VARCHAR(1000)            = '/IgnoreThisFolder/%',
	@DBMailProfile VARCHAR(500)          = 'Your DBMail Profile',
	@Subject VARCHAR(1000)               = 'Failed Power BI Refresh Alert',
	@SendEmail BIT                       = 0

Here’s what these are used for and when you should make changes:

  • @ThresholdSeconds. This is how far back you want to check for report failures. This setting should match how often you run the SQL Agent job – if the values don't match, you run the risk of missing some failed reports.
  • @ToAddresses. A list of people you want to email that should probably do something about this, separated by semicolons.
  • @CopyAddresses. Like To, but more of a heads up – leave this NULL if not applicable.
  • @Blind_Copy_Recipients. We include this so we can send an email to our team’s slack channel using email integration, but we don’t want this email address to get out to anybody that gets copied. Normally NULL, but some may have a special need like this.
  • @URLRootDataRefresh. This should be the root of your Power BI Server address, including the “/reports” at the end. If you visit this URL, you should be redirected to the root of your reporting site to browse reports. This URL is appended with the location of the subscription settings so you can browse directly to the details of a failure.
  • @IgnorePath. We had a folder where we store offline reports – failures here should be ignored. If you have multiple folders like this, you can modify this variable to add them or you can modify the query in the script to exclude these folders explicitly.
  • @DBMailProfile. The name of the profile for sending mail. If you haven't set up DB Mail, refer to the documentation to get it set up and name your profile. Here's another good walk-through.
  • @Subject. Make this sufficiently scary so that recipients pay attention!
  • @SendEmail. This is set up so you can test, and it defaults to 0 (meaning the script will print your email body rather than sending it). Once you're happy with your settings, change this to 1 so the mail actually gets sent – see the sketch after this list for the general pattern.
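
For reference, the print-versus-send toggle is just an IF around the sp_send_dbmail call – roughly this pattern (a sketch of the idea, not the exact code from the script):

IF @SendEmail = 1
BEGIN
    -- Really send it
    EXEC msdb.dbo.sp_send_dbmail
         @profile_name          = @DBMailProfile,
         @recipients            = @ToAddresses,
         @copy_recipients       = @CopyAddresses,
         @blind_copy_recipients = @Blind_copy_recipients,
         @subject               = @Subject,
         @body                  = @Body;
END
ELSE
BEGIN
    -- Test mode: just print what would have been sent
    PRINT @Body;
END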

Download the full script for yourself

Download the script

Feedback/Suggestions?

Feel free to add a comment below – I’ll update this code as people make suggestions or it improves through my own experience using it, and I hope it helps some others who miss this functionality in Power BI Server. Share below if you end up using it – I’d love to hear about it!

Getting a list of Power BI reports and Data Sources from the Power BI API

In changing our reports to point from an older database server to a newer one, we needed a way to tell which reports we’d migrated so far and which ones still had data connections to the old server. Keeping a list of reports would be the easiest way to do this, but why do that when you can write a query to track it for you!

I came across this post that provides the outline of what I was looking for:

https://community.powerbi.com/t5/Report-Server/Get-all-connection-string-for-reports-from-power-bi-report/td-p/920392

It provided the following M-code, which uses the Power BI API to fetch a list of all reports on the server and all the datasource details (it doesn’t fetch the contents of the actual M-code behind the data sources – that would be pretty amazing). Nonetheless, pretty neat on its own (you need to replace “<YOUR URL>” in a couple of spots):

let

    Source = OData.Feed("https://<YOUR URL>/pbireports/api/v2.0/PowerBIReports", null, [Implementation="2.0"]),

    #"Expanded DataSources" = Table.ExpandTableColumn(Source, "DataSources", {"Id", "ModifiedBy", "ModifiedDate", "ConnectionString", "DataModelDataSource"}, {"DataSources.Id", "DataSources.ModifiedBy", "DataSources.ModifiedDate", "DataSources.ConnectionString", "DataSources.DataModelDataSource"}),

    #"Expanded DataSources.DataModelDataSource" = Table.ExpandRecordColumn(#"Expanded DataSources", "DataSources.DataModelDataSource", {"Type", "Kind", "AuthType", "Username", "ModelConnectionName"}, {"DataSources.DataModelDataSource.Type", "DataSources.DataModelDataSource.Kind", "DataSources.DataModelDataSource.AuthType", "DataSources.DataModelDataSource.Username", "DataSources.DataModelDataSource.ModelConnectionName"}),

    #"Added Conditional Column" = Table.AddColumn(#"Expanded DataSources.DataModelDataSource", "DS.Connection_String", each if [DataSources.ConnectionString] = null then "No Data Source" else [DataSources.ConnectionString]),

    #"Removed Columns" = Table.RemoveColumns(#"Added Conditional Column",{"CacheRefreshPlans", "AccessToken", "Roles", "ContentType", "Content", "ParentFolder", "Properties", "Comments", "AlertSubscriptions", "AllowedActions", "Policies", "DependentItems","Id", "ParentFolderId", "DataSources.Id", "DataSources.DataModelDataSource.ModelConnectionName"}),

    #"Renamed Columns" = Table.RenameColumns(#"Removed Columns",{{"DataSources.ConnectionString", "DS.ConnectionString"}, {"DataSources.DataModelDataSource.Type", "DS.Type"}, {"DataSources.DataModelDataSource.Kind", "DS.Kind"}, {"DataSources.DataModelDataSource.AuthType", "DS.AuthType"}, {"DataSources.DataModelDataSource.Username", "DS.Username"}, {"DataSources.ModifiedBy", "DS.ModifiedBy"}, {"DataSources.ModifiedDate", "DS.ModifiedDate"}, {"Name", "Report"}, {"Path", "Report Path"}}),

    #"Add Report URL" = Table.AddColumn(#"Renamed Columns", "Report URL", each "https://<YOUR URL>/PBIReports/powerbi" &amp; [Report Path] &amp; "?rs:embed=true")

in

    #"Add Report URL"

This was a good framework, but I wanted to add a few things and combine multiple different servers together (My example shows a dev, qa, and prod server being combined, but they could be any number of servers or even both Power BI Server and SSRS if you wanted):

The M-code shown earlier became the highlighted item above – I converted it to a function (“PBI Data Sources” at the top of the “PBI Data Sources Function” group) that accepted both a “Server URL” (replacing the “<YOUR URL>” portion in the initial query) and a “Server Zone” allowing you to name the server in the results. Once you have the function, we move up to the last three objects in “Each Server”, which each call the function with different parameters (for each of my server zones), and then combine them into a single dataset (“Combined Data Sources”). Finally, the results are split into a list of Reports and a list of Data Sources that you can pull into your Power BI model.
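
I won't paste the whole function here, but it's essentially the original query with the URL swapped for a parameter and a zone column added at the end – a rough sketch (the parameter names are mine, and the middle steps are elided):

// Rough sketch of the function - ServerURL replaces the "https://<YOUR URL>" portion, ServerZone names the server
(ServerURL as text, ServerZone as text) as table =>
let
    Source = OData.Feed(ServerURL & "/pbireports/api/v2.0/PowerBIReports", null, [Implementation="2.0"]),

    // ...the same expand/rename/remove steps as the original query go here...

    // Tag every row with the zone so the combined table shows which server it came from
    #"Added Zone" = Table.AddColumn(Source, "Server Zone", each ServerZone, type text)
in
    #"Added Zone"

The three per-server queries then just call this function with their own URL and zone name, and "Combined Data Sources" is a simple Table.Combine over the three results.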

One of the issues I ran into was that I didn't have access to all the reports on my server – those broke my dataset with the following message:

DataSource.Error: OData Request failed: The remote server returned an error (500) Internal Server Error

Interestingly, it wasn't a "Permission Denied" error, but a 500 server error. I got around it by adding some code to drop any rows where I received an error, using the "try otherwise" error handling in M – something I'd never used before:

= Table.SelectRows(#"Sorted Rows", each ((try [DataSources]{0}[Id] otherwise null) <> null))

This code checks to see if it can view the very first value within the "DataSources" field, and if it can't, it returns null and the row gets filtered out. This isn't the most elegant way to do this – it drops any reports I don't have access to, rather than calling them out – but it got the job done and allowed my dataset to move forward without errors.

Once the data is in the model, you can use some simple visualizations to see what types of connections you have, where you’re connecting, and what user those connections are set up to use (so you can see if every one of your connections is using the correct proxy users).

I hope this helps you get a handle on all the data sources you're using, and make sure that everything is using the correct settings. I've included the empty PBIT file at the end of this post for anybody that wants to connect to their own data and see what it looks like (when you open the file, it prompts you for two variables – you can put whatever you want there; it only prompts because I've left the source query in the file).

If you do end up using this to create something, please let me know what as I’d love to see it in action!

Download the PBIT file

Removing columns named “Column” when importing Excel files with Power Query

When importing Excel files using Power Query, any columns without a name in the first row will end up titled "Column123", with a number at the end that increments for each column. This happens whether they contain actual data or whether they're just formatted columns that contain nothing at all.

Example of imported Excel file with two named columns and two just called “Column”.

If you're importing a single file, you can rename or delete these columns as you need to. However, if you're importing an entire folder of Excel files or the format changes unpredictably, you may want to automatically remove all of these columns at the end.

To do this, you’ll use the “Advanced Editor” to add a line of M-code:

"Remove Columns Named Column" = Table.RemoveColumns(#"Promoted Headers", List.FindText(Table.ColumnNames(#"Promoted Headers"), "Column")), 
After adding the new step, columns named “Column” are now gone.

After you add this step, you may need to modify your “Changed Type” step to only set the type on columns you’re keeping, as it automatically mentions every single column by name.
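
For example, if the only columns you actually keep are named "Name" and "Amount" (hypothetical names – use your own), the step shrinks to just those:

#"Changed Type" = Table.TransformColumnTypes(#"Remove Columns Named Column", {{"Name", type text}, {"Amount", type number}})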

Adding this step will remove the additional columns, dynamically, every time the data is imported. Now when you import a folder full of Excel files and some of them contain additional columns or rogue formatting, the step will drop them if they don’t get named.

Viewing active (and missing) permissions in SSRS and Power BI Server

To see who has folder/report level permissions in SSRS and Power BI Server, you have to join a couple of tables together – I started with a query I found at StackOverflow and built onto it – here’s my query (it pulls a bit more detail and shows not just permissions granted, but those not granted):

select case E.Path
            when '' then '/'
            else E.Path
       end as ItemPathAndName,
       concat(LEFT(E.Path, case when len(E.Path) - len(E.Name) = 0 then 0
                                else len(E.Path) - len(E.Name) - 1
                           end), '/') as ItemPath,
       E.Name as ItemName,
       case E.Type
            when 1 then 'Folder'
            when 13 then 'PBI Report'
            else concat('Unknown (', E.Type, ')')
       end as ItemType,
       case E.PolicyRoot
            when 0 then 'Inherited'
            when 1 then 'Customized'
            else 'Unknown'
       end as PermissionSource,
       C.UserName,
       D.RoleName,
       --D.Description, -- Uncomment for role description
       convert(bit, case when A.ID is null then 0 else 1 end) as ValidPermission
  from dbo.Catalog E
 cross join dbo.Users C
 cross join dbo.Roles D
  left join dbo.PolicyUserRole A
    on E.PolicyID = A.PolicyID
   and A.RoleID = D.RoleID
   and A.UserID = C.UserID
 order by E.Path, C.UserName

Rather than just selecting the permissions, this query actually shows all possible permissions, and shows where users have or don’t have them. In my case, I wanted to see a list of users on the server that lacked access to specific reports and the “show me the permissions” query wouldn’t do that – additionally, this query can show you all the places that your permissions are manually set, rather than inherited.

To interpret them, I copied the results to Excel, created a pivot table, and then filtered using PermissionSource=Customized (to see all the custom permissions) or ValidPermission=0 (to see the places where people didn’t have permission to view an item).

The cross joins can make the resultset a bit large, but it worked for me. If you want to filter things down a bit, you can add a WHERE clause near the end to filter on those columns for the specific situation you're looking for.
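
For example, either of these conditions (added just before the ORDER BY, using the same aliases as the query above) narrows things down to the interesting rows:

-- Only items where permissions were customized rather than inherited
where E.PolicyRoot = 1

-- ...or only the user/role/item combinations with no permission granted
where A.ID is null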

“Your Google account can’t go here” when logging into Chromebook with family link

I didn't find a good answer online when I ran into this issue, so I'm hoping this can be the answer for people who run into the same issue in the future.

We recently got Chromebooks for our kids to support their remote school for the rest of the school year and I wanted to set them up with monitoring – Google has a tool to do that called "Family Link". We set up accounts for our kids, but unfortunately, we received an error when we tried to log into the Chromebook:

There weren’t any details (and my parent account could log in just fine) – searching online came up with a bunch of frustrated posts that were almost a year old and talking about the change from supervised accounts and how Family Link didn’t work on Chromebooks. I received that error when trying to add a new user or log in – I couldn’t complete either process.

  Looks like your google account can't go here

The secret ended up being that the ChromeOS on my Chromebook was CRAZY out of date – even though it was ordered brand new from Amazon, it was running ChromeOS 56 when the CURRENT VERSION (as of March 2020) IS 80! All of the documentation I saw on Family Link said that it required at least ChromeOS 71, and this obviously wasn't cutting it. I would have expected a brand new Chromebook to be updated to something near current, but obviously not – according to https://www.chromium.org/developers/calendar, version 56 was released in November 2016 and is over 3 years old at this point.

ChromeOS version 56, installed on our Asus Chromebook when we received it.

It doesn't prompt you to automatically update to a newer version of ChromeOS – you have to do it manually. To do that, go to the system info in the lower-right corner and click on the gear to open settings, then click "About Chrome OS" at the top, then "Check for updates" and apply anything it shows. In our case, it updated to 72, rebooted, and then applied a few more security updates (and a few more reboots) before it finally landed on 80. Once it did, I could log in with the Family Link child account, approve my child's sign-on, and we were on our way with no more problems.

Good luck!

Power BI Server – Resolving “Web Portal URLs and Web Service URLs don’t match”

After replacing an expired SSL certificate on Power BI Server, we were no longer able to browse the site – we received an error that “The report server isn’t configured properly… The report server Web Portal URLs and Web Service URLs don’t match”:

We hadn’t changed the Power BI Server settings aside from updating the certificate, but now couldn’t load the site without getting this error message.

We were able to resolve it by manually updating both URLs (Web Service and Web Portal) in Report Server Configuration Manager to the Fully Qualified Domain Name (FQDN) – normally, the HTTP version of the URL is set to “All Assigned” (and shows the server name) and the HTTPS version shows the name on the certificate (in our case, both the old and new certificate had the FQDN). This is how it looked on the other servers that worked:

No obvious errors, but manually changing the HTTP URL for both Service and Portal to FQDN fixed the Power BI Server site – things started to work again instantly. This bothered me, though – none of the other servers needed this setting changed, and they were all perfectly fine with the server short name in the HTTP URL.

Cause/Solution:

The issue ended up being Internet Information Services installed on the same server – even though the primary site was disabled, it was bound to port 80 on all IPs without a host name, so it was claiming the "HTTP://+:80" binding that Power BI Server was also trying to use. Because that binding was failing, Power BI couldn't register matching URLs and gave us the error.

To resolve it, we added a phony hostname to the Default Web Site in IIS (even though it was stopped and shouldn't have caused an issue) so that it didn't claim everything at Port 80:

Once this was done, we set the Power BI configuration back to "All Assigned" and it re-registered its own URLs, and then things started working perfectly again. I'm not sure if IIS was installed by our server admins during our certificate update, or if some other configuration change caused IIS to change how it registered the URL, but this resolved the issue.
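
If you run into something similar, a couple of built-in Windows commands (run from an elevated prompt) will show what's reserved and registered in http.sys on port 80, which is where you can spot IIS and Power BI Server fighting over the same binding – these aren't Power BI specific, just general http.sys diagnostics:

netsh http show urlacl
netsh http show servicestate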

Hopefully it helps somebody else!

Installing Access Database Engine 2016 x64 alongside Office x86

In Power BI, you can open XLSX files without any additional drivers, but opening XLSB files requires you to first install the Access Database Engine driver (available for free at https://www.microsoft.com/en-us/download/details.aspx?id=54920) or you’ll get an error like this:

We encountered an error while trying to connect. Details: "Excel Workbook: the 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine. The 64-bit version of the Access Database Engine 2010 Access Database Engine OLEDB provider may be required to read.
‘Microsoft.ACE.OLEDB.12.0’ provider error message

If you have Microsoft Office installed, you already have this component, but most people use a version of Office that's x86 – since Power BI is x64, you'll need the x64 version of the driver as well. However, if you try to install it with Office x86 on your computer, you'll get this error and you'll be stuck:

Microsoft Access database engine 2016 x64 Installation Error screenshot - "You cannot install the 64-bit version of Microsoft Access Database Engine 2016 because you currently have 32-bit Office products installed. If you want to install 64-bit Microsoft Access Database Engine 2016, you will first need to remove the 32-bit installation of Office products. After uninstalling the following product(s), rerun setup in order to install 64-bit version of Microsoft Access Database Engine 2016: Office 16 Click-to-Run Extensibility Component"
Microsoft Access database engine 2016 x64 Installation Error screenshot

There’s a way to install both! To avoid this error and install the x64 database engine without removing Office x86, you need to download the installation for the Access Database Engine x64 and run the installation with a command line switch:

AccessDatabaseEngine_X64.exe /quiet
Command to install the Access database engine using the "/quiet" switch to run the installation without an error

Once you run this command, you’ll get a black command window for a few seconds, and then it will close once it’s done. To confirm that it’s installed successfully, you can either check for “Microsoft Access database engine 2016” in your Windows “Programs and Features” window, or you can just launch Power BI and attempt to import an XLSB file!

I haven’t noticed any ill effects to doing this, but if you do have issues, comment below.

SQL Differential backups failing with “current database backup does not exist” error

I recently set up an Azure VM and installed SQL Server 2017 – it worked great until it came time to set up the database maintenance plans. My normal routine is to set up a weekly full database backup, daily differential database backups, and hourly log backups. Thanks to Azure, I was able to send them to URL so they went directly to my BLOB storage container.

After setting up the backups, I tested them and everything went fine – I ran the weekly full successfully, then the daily differential, then the logs, and didn't get any errors. However, that night I got an email that the differential had failed, and the agent log had this note in it:

Code: 0xC002F210
Source: (Maintenance plan)
Execute SQL Task
Description: Executing the query "BACKUP DATABASE [MyDatabase] TO U…" failed with the following error: "Cannot perform a differential backup for database "MyDatabase", because a current database backup does not exist. Perform a full database backup by reissuing BACKUP DATABASE, omitting the WITH DIFFERENTIAL option. BACKUP DATABASE is terminating abnormally.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

I had tested everything earlier, so I wasn’t sure why the differential backup was failing now (the log backups were working without issue the whole time, so it wasn’t storage). I executed the differential again and it failed with the same error. I ran the full backup successfully and then tested the differential backup a few times – it now ran successfully each time. I shrugged and went on my way, chalking it up to something going wrong, only to have the differential backup fail again that night with the same error.

To see what was going on, I took a look at the backup sets and noticed some backups I hadn't scheduled going to a device I didn't recognize (all of my scheduled backups were to URL, so they were easy to spot because they all start with "HTTPS://"):

select top 100 *
from msdb.dbo.backupmediafamily mf
where TRY_CONVERT(uniqueidentifier, physical_device_name) is not null

The GUID indicates a backup taken through the VSS service – in this case, they were being created externally by the Azure VM Snapshot process. However, these backups are by default official database backups, and they interrupt the database backup chain – had I attempted to restore one of my full database backups followed by the log backups, I would have found that I couldn’t restore past the VSS backup timestamp because I didn’t have access to that media.

The good news is that there's a setting you can use to tell the VSS service that its database backups should be taken as COPY_ONLY (meaning they don't interrupt your backup chain). You can do this with the following registry key:

Windows Registry Editor Version 5.00 

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\BcdrAgent]

"UseVSSCopyBackup"="True"

This tells the server that when VSS takes a snapshot, it should do it as a non-authoritative copy-only backup. This also means it won’t reset any attributes in the VM that say a backup has happened, but in my case, I didn’t want that to happen.
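
Once the registry change is in place, you can confirm the VSS snapshots are landing as copy-only by checking the backup history – msdb records an is_copy_only flag for every backup:

-- Recent backups with their device name and copy-only flag;
-- the VSS snapshot backups are the ones with a GUID as the device
select top 100
       bs.database_name,
       bs.backup_start_date,
       bs.type,          -- D = full, I = differential, L = log
       bs.is_copy_only,
       mf.physical_device_name
  from msdb.dbo.backupset bs
  join msdb.dbo.backupmediafamily mf
    on bs.media_set_id = mf.media_set_id
 order by bs.backup_start_date desc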

To learn more, check the Microsoft articles that provide details about snapshot backups for Azure VMs – neither refers to the error message specifically, but they provide some additional details about configuration. They’re https://docs.microsoft.com/en-us/azure/backup/backup-azure-vms-introduction#snapshot-creation and https://docs.microsoft.com/en-us/azure/backup/backup-azure-vms-troubleshoot#troubleshoot-vm-snapshot-issues (both linked to the most relevant section).

I hope this saves you the hours of aggravation that it cost me – let me know if it helps or if you have any issues!

Slicing overlapping subgroups of data with a synthetic “Filter Table”

I saw a post from Marco Russo (Twitter) showing a trick to use a slicer to filter across multiple columns at once in Power BI with some DAX and a new dynamic filter table. I’ve done something similar in the past to use a slicer not to slice data directly, but to change a view in the data or change filtering criteria indirectly – I wanted to share an example here to show a slightly different use case than Marco’s for a similar technique.

I want to create a slicer that switches the dataset between different subsets of data, but with overlap – like just my sales, sales for my group, and sales for the company. This is normally accomplished with a calculated column for the group the data is in, along with a slicer where if you want to see everybody, you have to select all three values like this:

But what if you want to create slicer options that aren’t exclusive? Where selecting “My Department” includes you too? Here’s how you do it!

In my example, I’ll need a fake sales and person table, so I’ll just use the “Enter Data” interface to create them:

Creating a sales table
Creating a user table (just a list from the sales table)

At this point, we could just join them on name and add some slicers, but here’s where we get more advanced by creating the filter table. The names repeat, but they show once for each slicer option they belong with, so whatever you select, you’ll see the relevant people:

Creating the filter table with all the slicer options in it
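
If you'd rather paste the filter table in as a query than type it into "Enter Data", here's a minimal M sketch of what it contains – the people and the "Just Me" label are hypothetical, matching this example:

// Each person appears once for every slicer option that should include them
#table(
    {"Filter", "Name"},
    {
        {"Just Me",       "Bob"},
        {"My Department", "Bob"}, {"My Department", "Sally"},
        {"Everybody",     "Bob"}, {"Everybody", "Sally"}, {"Everybody", "Fred"}
    }
)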

Once you have those three tables, you need to link them up – note the two-way relationship between the filter table and the person table. Without that, filtering on the last table won’t flow through to the sales table and our slicer won’t work:

Create the relationship and make sure the filter-user link is bi-directional

That’s all it takes – now you can create a slicer on the “Filter” column in the “Team Filter” table, and you’ll get whatever non-exclusive subset of the data you’re looking for!

I’m looking at only Bob’s sales
I’ve added Sally’s sales, but can still see Bob
Now I’m looking at everybody!

I made the slicer single-select in this example to avoid confusion – it still works if you leave it a normal multi-select, but users will expect the options to be exclusive, and if they've already got "Everybody" selected, neither of the other two options will do anything šŸ™‚

If you want it, grab the PBIX file here.

I hope this is helpful to somebody – if you have any suggestions or other use cases for this, please add them below!