T-SQL Tuesday #139 – Execution plans in a hybrid world

Another month has passed. Another T-SQL Tuesday arrives, with a new host and a new subject.

The host is no less than the amazingly awesome (his words, not mine!) Ben Weissman (b|t). His selected topic, based on the realization that the data world is hybrid and will remain so, is to share our experiences on our journey towards, or in, that hybrid world of on-prem, cloud, and edge.

I don’t care …

I’ll admit, my first response was to just sit this one out. I am extremely specialized in execution plans, as a tool for query performance tuning. When your query runs slow and you ask me to tune it, I don’t really care what platform you’re on. Windows or Linux? Doesn’t matter to me. Cloud or on-prem? Doesn’t matter to me. Edge? Nice, new, and shiny … but still doesn’t matter to me.

There are tuning specialists who look at performance issues from a different side. Can the I/O be optimized? How is the memory configuration? For those people, the underlying hardware is very important. Here, Windows, Linux, or Kubernetes makes a big difference. Cloud, edge, and on-prem all have their own rules. I am so glad other people do this. It means I don’t have to.

My tuning focuses on the execution plan. If I can reduce the number of rows read, the CPU cycles burned, and the memory needed, then those other people are not even needed anymore. And my bag of tricks just happens to be completely oblivious to the underlying system. The beauty of SQL Server in a hybrid world is that Microsoft has enabled the use of the exact same engine on all those different platforms. Which means I can take my tuning skills everywhere, and don’t even need to know what platform I’m working on.
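
To give a tiny taste of just how portable that is, here is a minimal sketch (the dbo.Orders and dbo.Customers tables are made up for this example) that asks the engine to return the actual execution plan for a query. The exact same statements work on an on-prem instance, in Azure SQL Database, and on Azure SQL Edge.

-- Return the actual execution plan (as XML, including runtime statistics) for the next statement.
SET STATISTICS XML ON;

-- A hypothetical aggregation query; replace with whatever query needs tuning.
SELECT c.CustomerID,
       COUNT(*) AS OrderCount
FROM   dbo.Orders AS o
INNER JOIN dbo.Customers AS c
       ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID;

SET STATISTICS XML OFF;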

… but you should!

But then I realized that this is exactly the message people need to hear.

People associate tuning indexes, analyzing execution plans, and rewriting queries to get better performance with on-prem databases. When I submit a conference talk about execution plans or indexes, conference organizers put it in a DBA track or a developer track. Never in the cloud track. Or in the edge track.

Cloud and performance

The cloud track at conferences is where presenters talk about yet another new Azure service. About how to choose the best Azure offering. Or, if it’s not a Microsoft-centric conference, about how to choose between Microsoft’s, Google’s, or Amazon’s cloud. Not about a silly topic like performance tuning. We’re cloud people. We don’t need performance tuning. We have a slider that the Microsoft-sponsored speakers will happily slide all the way to the right during the keynote presentation to show off the amazing performance of the newest cloud product.

But that slider costs money. Every consultant who has seriously suggested that a customer just adjust the slider to fix a performance issue is like the consultant I met once, at the very start of my career, who bluntly told the customer to just keep buying hardware instead of fixing the exponentially scaling performance nightmare he had built.

I think cloud is where you can see the most direct benefits of performance tuning. When I work my magic on an on-prem database and I see CPU usage drop from 80% to 30% throughout the day, and response times go down a bit, I see happy faces on the work floor. But work that same magic in the cloud and then tell management they can safely scale down to a cheaper Azure service and save thousands of dollars each month, and I see happy faces from the people who pay my bill!
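
For Azure SQL Database specifically, that conversation is easy to back up with numbers. A rough sketch (the 40% threshold is just an example, not a hard rule): check the resource usage history the engine keeps, and see whether the tuned workload leaves enough headroom to move to a lower service objective.

-- sys.dm_db_resource_stats keeps roughly one hour of 15-second samples in Azure SQL Database.
-- If these averages sit well below the limits of the current tier (say, under 40%),
-- scaling down to a cheaper service objective is worth discussing.
SELECT AVG(avg_cpu_percent)          AS avg_cpu,
       AVG(avg_data_io_percent)      AS avg_data_io,
       AVG(avg_log_write_percent)    AS avg_log_write,
       MAX(avg_memory_usage_percent) AS max_memory
FROM   sys.dm_db_resource_stats;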

Edge and performance

The edge track at conferences is where we get to see all the wonderful solutions we can build now that we have the option to put a full SQL Server database in a sometimes-connected device in the magic world of IoT. Talks focus on the high compatibility level between SQL Edge and “standard” SQL Server. They talk about customer experiences, or ideas that are already being worked on. Not about performance. And granted, most of the edge applications I have seen use SQL Edge mostly as a data collector, meaning performance is likely not the biggest issue. But we’re just starting. We’ll want more once we realize we have more.

And remember, if it has ever been important to really have a full handle on performance, it’s in the edge world. Just imagine being the manager responsible for having to recall a product that has already shipped thousands of units because the built-in SQL Edge solution turns out not to scale to some real-world workloads. Or imagine being the manager having to convince the board that a much more expensive chipset must be built into what was supposed to be a cheap IoT device, just so the data collection and preprocessing scales.

Or … imagine being the developer or DBA who looks at the query, the tables and indexes, and the execution plans, and then finds the changes needed to make the entire workload go lots faster.

Conclusion

Query tuning is the same in the cloud or on the edge as it is in the traditional on-prem environment. And it’s just as relevant, if not more relevant, in those new environments. It’s time that the forerunners of cloud and edge realize this, and then take the ol’ grandpas and grandmas with deep query tuning experience on board.

Our knowledge and experience are here for you. You’re invited to come and take advantage of them!


3 Comments

  • In “And remember, if it has even been important to really have a full handle…” should the 6th word be “ever”?

    I, too, make my living getting performance out of SQL Server. And, like many, I find that the advent of off-premises platforms has driven proper attention toward real costs. For many shops, the days of a hulking server being amortized over 5 or 7 years as a shared cost among many departments are gone. Now people have the opportunity to understand just what they are paying, as well as what they are paying for.

    The temptation to push the slider to the right is one that I encounter almost every day. In some cases it makes sense. The initial rollout worked fine on an S1 in Azure. We planned to scale as the load increased. We built it and they came. As you note, the cautionary tale is to slide to the right for the right reason.

    Flexible platforms don’t keep you from writing terrible code. They can do a terrific job of exposing it. In my mind, proper performance has never been more important. Throwing RAM at an on-premises server is a one-time cost shared by many. In the narrower platforms typically used in the cloud it is a cost borne by a few, month after month after month.

    • Hugo Kornelis
      June 9, 2021 14:47

      Yikes, that’s a nasty typo. I hate all typos. But those that change the meaning, those I hate the most.
      Fixed now.

      Thanks for sharing your experiences; glad I’m not the only one feeling that way!

  • […] Hugo Kornelis […]

