T-SQL Tuesday #110 – “Automate All the Things”

It is once more the second Tuesday of the month, better known as T-SQL Tuesday. This month, we got our writing assignment from Garry Bargsley (b|t). He asks us to write about automation: what automation are we proud of or do we want to complete, and using what technology?

However, I found that I was more triggered by the title of his request post than by the content, so I’ll allow myself the liberty of applying a very loose interpretation of the topic.

All the things?

Garry titled his invitation “Automate All the Things”, and I plagiarized that title because it is the word “all” that I want to focus on. As Garry points out, automating everything appeared to be a major focus area (or, if you prefer, buzzword) in 2018. But is it really possible to automate everything? And if it is, then where does that leave us, the workers? And more specifically, what does it mean for me, and my geeky hobby of dissecting execution plans?

Automation and me

I often say, somewhat jokingly, that a good developer never does the same thing twice. If I need to do a repetitive task, I will always look for ways to automate it. And I’ll admit that this sometimes backfires – investing three hours to avoid repeating a five-minute task twenty times is not really the best possible time investment. (Though it does reduce the risk of errors, and it’ll keep me interested in my job for longer.)

I won’t go into the tools I use. I typically just pick whatever I have available (based both on what is installed and on what I already know). That is probably very often not the best pick for the job, so I feel no need to brag about it. Plus, as already mentioned, I want to focus on another aspect of automation. But I did want to include this “about me” bit just so you understand that I do love automation. I really do.

Automation and execution plans

Tuning queries has for a long time been purely manual work. Skilled workers, also known as tuning wizards, would look at indexing-related DMVs, pore over execution plans, go into a trance-like state of meditation for a while, and then they’d have the magical combination of a new index and a rewrite of your query to make it go five thousand percent faster. (Or five percent – but any gain is useful, right?)

Early years

Microsoft has been trying to assist here by providing automated tools, and has been doing so for a long time. Anyone remember the “Index Tuning Wizard”? Don’t be ashamed if you don’t; I myself am still trying to erase it from my memory. Let’s just call it a pre-alpha release and move on.

But the Index Tuning Wizard was replaced by the Database Engine Tuning Advisor – a new name, but also a completely new tool trying to do the same thing. And while still far from perfect, it is definitely a much better tool. And it wasn’t just Microsoft anymore: based on data available in DMVs such as sys.dm_db_missing_index_details, sys.dm_db_index_usage_stats, and others, community members and tool vendors started building their own methods to automate the work of finding suggestions for adding, removing, or changing indexes. Not all equally successful – but it’s the thought that counts!
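
To give an idea of the raw material those tools work from, here is a minimal sketch of the kind of query one can run against the missing-index DMVs. The DMVs and columns shown are the real ones; the “estimated impact” expression is just one common community heuristic, not an official metric.

-- List the optimizer's missing-index suggestions, ordered by a rough
-- estimate of their potential impact. The "estimated_impact" formula
-- is a community heuristic, not an official Microsoft metric.
SELECT mid.[statement]                            AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks + migs.user_scans          AS times_wanted,
       migs.avg_user_impact * migs.avg_total_user_cost
         * (migs.user_seeks + migs.user_scans)    AS estimated_impact
FROM   sys.dm_db_missing_index_details     AS mid
JOIN   sys.dm_db_missing_index_groups      AS mig
  ON   mig.index_handle = mid.index_handle
JOIN   sys.dm_db_missing_index_group_stats AS migs
  ON   migs.group_handle = mig.index_group_handle
ORDER BY estimated_impact DESC;

Anyone who has run such a query knows the caveat: these suggestions are hints from the optimizer, not reviewed recommendations – which is exactly why the automated tools built on top of them vary so much in quality.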

Azure

Things really started taking off with Azure. Azure SQL Database, or whatever the name was back then, became popular. Microsoft now saw themselves hosting not dozens, not thousands, but probably millions of databases. That had two immediate results. The bad result (for Microsoft) was that queries that were slow and consumed lots of resources started hurting them way more than ever before. The good result was that they were able to start collecting loads upon loads of metric data. Data they could analyze (using automation, of course) to find common patterns in those slow workloads.

Microsoft employs a huge number of extremely smart people. As soon as they have both a problem caused by bad performance and the data to help find a fix, it is only a matter of time before they find solutions. And they have. And luckily, they are not keeping them for their own use only – the fruits of their labor have started to see the light.

In SQL Server 2016, the Query Store was added to the box product. In SQL Server 2017, Adaptive Query Processing was implemented – granted, very limited at the time, but again a step forward. SQL Server 2019 takes that to the next level and renames it to Intelligent Query Processing. Also added in SQL Server 2017 was a feature called “Automatic Tuning” (which in reality only does automatic plan forcing based on Query Store data, plus some verification after the fact). Finally, and currently only available in Azure SQL Database, there is an option called “automatic index management” that will create new indexes and drop unused ones.
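
For anyone curious what the plan-forcing flavor looks like in practice, the syntax is pleasantly short. A minimal sketch follows; the statements and the DMV shown are the documented ones, but do check the prerequisites (Query Store must be enabled before Automatic Tuning has anything to force).

-- Query Store (SQL Server 2016 and up) collects the plan and runtime
-- data; Automatic Tuning (SQL Server 2017 and up) can then force the
-- last known good plan when it detects a plan-choice regression.
ALTER DATABASE CURRENT SET QUERY_STORE = ON;
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- The recommendations, and the after-the-fact verification mentioned
-- above, are exposed through this DMV:
SELECT reason,
       score,
       details
FROM   sys.dm_db_tuning_recommendations;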

Future

So what does this mean for the future? Did I make a bad choice when I chose to spend time on creating the Execution Plan Reference?

I think not. I think that all those tools will be very good at finding and solving the 90% of performance issues that I myself, and many others, would be able to solve relatively easily. It isn’t hard to find a candidate for a covering index, and it doesn’t take much understanding of execution plans to do so. It isn’t rocket science to find unused indexes or to identify almost-overlapping indexes that can be combined. There is little challenge in finding that a query is slow because the fixed cardinality estimate of a multi-statement table-valued function is far off from reality. If Microsoft, or other vendors, can automatically find those cases, that is fine with me. I would build such tools for myself if I were smart enough and had access to the data that Microsoft has.
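
Purely as illustration of how little rocket science is involved, here is a minimal sketch of that “unused index” check: nonclustered indexes that are being maintained by writes but have not served a single read since the last restart. Usage statistics reset on restart, so treat the output as a hint, not a verdict.

-- Nonclustered indexes that cost writes but have served no reads
-- since the last restart. Indexes with no recorded activity at all
-- do not appear in this DMV, so this inner join only finds indexes
-- that are actively being maintained for nothing.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       ius.user_updates         AS writes_paid
FROM   sys.indexes AS i
JOIN   sys.dm_db_index_usage_stats AS ius
  ON   ius.object_id   = i.object_id
 AND   ius.index_id    = i.index_id
 AND   ius.database_id = DB_ID()
WHERE  i.type_desc = 'NONCLUSTERED'
  AND  ius.user_seeks + ius.user_scans + ius.user_lookups = 0
  AND  ius.user_updates > 0;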

But based on the execution plans I have seen in my career, I am convinced that there are still lots of execution plans out there that do … weird stuff. Things that are so uncommon that no tools, no machine learning algorithms, will be able to find a root cause and a fix by looking at those execution plans. Perhaps, one day, that will change. And I know that the pace of change has been increasing over the past century and is still increasing. But I still think it will take a long time before machines have sufficient fake intelligence to be able to tune each and every query.

For the years (and decades?) to come, I expect that automation of the query tuning process will mean that we get to look at far fewer problem queries than before. But the ones we do look at will be more complex, more interesting, and harder to get to the bottom of. So once you do get a query on your desk, being able to look at the execution plan and understand what is really happening will be even more important than it was in the past.

I’m not wasting time on the Execution Plan Reference. In the future, it is possible that fewer people will need it. But those who do will need it even more than they need it now!

Conclusion

Okay, I didn’t really stick to the topic that Garry suggested. But his title and intro gave me stuff to think about, and then to write about, so I still consider this a T-SQL Tuesday post.

Thanks for the inspiration, Garry!
