Estimated I/O cost, a sign of an expected spill

Some execution plan properties get a lot of attention. Others tend to be mostly overlooked. But even those overlooked properties can sometimes convey interesting information.

Estimated Operator Cost

The Estimated Operator Cost property is one of those that does get a lot of attention. Perhaps even too much.

When you look at an execution plan in SQL Server Management Studio, or in Azure Data Studio, it conveniently marks each operator with a line such as “Cost: 34%”. And some third-party tools, such as SentryOne’s Plan Explorer, even go a step further and color code this cost in red, yellow, or green, so that you can really quickly find the operators with a high cost.

This number is not actually included in the execution plan. It is computed by whatever tool you use to look at the execution plan, by dividing the Estimated Operator Cost property of each operator by that of the entire plan (which is the Estimated Subtree Cost of the top left operator).

Just in case you overlooked it, the word “estimated” is used a lot in the above paragraph. All the costs you see are always estimates. That goes not only for an execution plan without run-time statistics (aka “estimated execution plan”), but also for an execution plan plus run-time statistics (aka “actual execution plan”). The only costs that the execution plan stores are estimated costs. Even if the query ran and actual run-time statistics are available, the costs are still estimates.

So if your query is slow, you look at the execution plan, and you instantly zoom in on the operators with high percentages, then you are looking at the area where SQL Server expected a lot of work to happen. But slow queries are very often caused by misestimates, so it is very much possible that in reality, the slowness is caused by operators that SQL Server expected to cost almost nothing, but that did a lot more work than estimated!

But that’s not what today’s blog post is about.

This property does not exist

As mentioned above, when you look at an execution plan and see “Cost: 34%” listed under an operator, you are not seeing a number that is stored somewhere in the execution plan. You are looking at a number that is computed by dividing the Estimated Operator Cost of that operator by the Estimated Subtree Cost of the top left operator (which is the total combined Estimated Operator Cost of all operators).

But here’s the catch. The Estimated Operator Cost of each individual operator is not taken from the execution plan either. If you look at the XML that represents the execution plan, you will not find a property with this name, nor with a sufficiently similar name. And you will not find the values that the tools show you. That’s because this property, too, is computed on the spot by the tool you are using, from other information.

Which other information, you ask? Easy. The Estimated Subtree Cost of the operator. A lot of people know and use this property too. But they only use it on the top-left operator, and then they call it the cost of the execution plan. Which makes sense: for the top-left operator, the subtree is the entire execution plan.

But there is in fact an Estimated Subtree Cost for each operator. It’s not really useful for us when doing plan analysis. But this property is in fact stored in the execution plan. And it is used by the front end tools to compute the Estimated Operator Cost for each individual operator: the Estimated Subtree Cost of an operator minus the Estimated Subtree Cost of all its direct descendants is that individual operator’s Estimated Operator Cost.
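To make that subtraction concrete, here is a minimal sketch of the computation the client tools do, written as T-SQL with XQuery against plans in the plan cache. It is an illustration under simplifying assumptions (real tools handle edge cases such as parallelism more carefully), not a production script:

```sql
-- Compute each operator's Estimated Operator Cost the way client tools do:
-- its own Estimated Subtree Cost minus the subtree costs of its direct
-- children. Attribute names are as stored in the showplan XML schema.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT op.value('@NodeId',                    'int')          AS NodeId,
       op.value('@PhysicalOp',                'varchar(128)') AS PhysicalOp,
       op.value('@EstimatedTotalSubtreeCost', 'float')        AS SubtreeCost,
       op.value('@EstimatedTotalSubtreeCost', 'float')
         - op.value('sum(*/RelOp/@EstimatedTotalSubtreeCost)', 'float')
                                                              AS OperatorCost
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY qp.query_plan.nodes('//RelOp') AS n(op);
```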

The Estimated Operator Cost property can (when used with appropriate care, as explained above) be very useful. Which means that even though we don’t directly use it ourselves, the Estimated Subtree Cost of each operator is actually very relevant to us.

But that’s not what today’s blog post is about.

Round and round and round

When you think about the entire process of optimizing an execution plan, returning it to the client, and then displaying it, this seems rather inefficient.

To estimate the cost of an execution plan, or rather of any possible subtree, SQL Server has to estimate the number of rows each operator will process, and use that to estimate the cost of that operator. It then adds those individual costs for all operators in a subtree to find the estimated cost of that subtree. And that number gets stored in the execution plan, not the individual cost of each operator.

So the optimizer knows each operator’s individual cost, adds them together, sends the total to the client, and then the client has to do subtraction to find the individual cost again. Weird, huh?

If you dive deeper into how exactly the optimization process works, this does make a bit more sense. The optimizer doesn’t simply try hundreds of plans and compare their estimated cost. It’s a bit more complex, and a bit smarter. For the full details, you should read everything ever written by smart people such as Paul White and Benjamin Nevarez, and then especially focus on explanations about “the memo”.

Simplified to my own level of understanding, let’s just say that for a complex query that joins multiple tables, there are very many possible execution plans. And a relevant subset of them will have a branch where tables A and B are joined. Instead of exploring the possible ways to join A and B for each of the plans in that subset, the optimizer explores the possible ways to just do a join between A and B, finds the best, and uses that in all possible plans that have this join order somewhere. And this obviously means that the subtree of the query that joins A and B is the part explored, and that the optimizer really cares about the cost of that subtree.

And larger subtrees that have a “join A and B” subcomponent somewhere simply use this component as a black box, knowing that the estimated cost of this component (subtree) is the best the optimizer could find.

So with that understanding, I do understand why the Estimated Subtree Cost is such an important property for the optimizer, and why it is stored at all levels – and hence in all operators. This does not explain, though, why there is no Estimated Operator Cost actually stored during the optimization and/or in the final execution plan. I’m not sure there really is much of a reason. I think the optimizer team just didn’t think it was relevant, and then later the tools team DID want to expose it and had to re-compute something that previously, during optimization, had already been computed.

However, this is also not what today’s blog post is about.

Deeper and deeper

We know that the Estimated Operator Cost is not included in the execution plan, but computed from the Estimated Subtree Cost, which does get included. We also know that this Estimated Subtree Cost is computed from an estimation of the individual cost of each operator in the subtree. And we know that this estimation is based on cardinality estimations.

But wait. There’s more!

To estimate the cost of an operator, SQL Server looks at two important cost components: I/O and CPU usage. Based on what the exact task of the operator is, combined with the amount of work expected to be done, SQL Server first computes two components for each operator: Estimated I/O Cost and Estimated CPU Cost. And yes, these two cost estimates DO appear in the execution plan. You can see them in your client tool, and you can also find them in the execution plan XML. So they are actually stored in the execution plan, not computed by the client.
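For illustration, here is roughly what that looks like in the plan XML. The fragment below is invented for this example, and most attributes are omitted; but the attribute names EstimateIO, EstimateCPU, and EstimatedTotalSubtreeCost are what you will actually find on each RelOp element:

```xml
<!-- Invented example; most attributes omitted for readability. -->
<RelOp NodeId="3" PhysicalOp="Index Seek" LogicalOp="Index Seek"
       EstimateIO="0.003125" EstimateCPU="0.0001581" EstimateRows="1"
       EstimatedTotalSubtreeCost="0.0032831" />
```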

At this point, you might wonder why the Estimated Operator Cost you see in the client is computed by doing complex stuff with the Estimated Subtree Cost of that operator and all its direct descendants. Why isn’t it simply computed by adding the Estimated CPU Cost and the Estimated I/O Cost?

The answer is: because that would show incorrect data. The total Estimated Operator Cost of an individual operator is, often, not equal to the sum of Estimated CPU Cost and Estimated I/O Cost.

One reason for this is that Estimated CPU Cost and Estimated I/O Cost are measured per execution of the operator, whereas Estimated Operator Cost is the total of all executions. And yet, multiplying by Estimated Number of Executions is not enough to get the same number as we get by subtracting the subtree costs. And that’s because of scaling. Repeated executions do not necessarily always incur the same cost. Reinitializing for a new execution might cost more, less, or the same as a “first” initialization. And this might depend on whether the execution is a rebind or a rewind. A first access to a data page, or to an index page, is estimated (for costing purposes) to be a physical read. Repeated executions in quick succession will do logical reads instead, and costing takes this into account. Read-ahead processes are probably also taken into account.

And then, there might be other factors besides I/O and CPU that get taken into account when estimating the cost of operators, and when using that to determine the Estimated Subtree Cost of a branch of the execution plan. I don’t know whether or not this is the case. But it’s definitely possible and it would not surprise me.

In short, there is no direct relation between the Estimated CPU Cost and Estimated I/O Cost of an operator and its Estimated Operator Cost. There is (as far as I know) no way to compute the Estimated Operator Cost that uses the Estimated CPU Cost and Estimated I/O Cost properties. The only way I know is as described above, based on Estimated Subtree Cost.

However, the Estimated CPU Cost and the Estimated I/O Cost do get computed, for a single execution, and they do get stored in the execution plan. I must admit, though, that I find them fairly useless. In most cases.

However, once more, this is not the subject of this post.

Cat, get out of that bag!

As stated above, the Estimated CPU Cost and the Estimated I/O Cost are stored in the execution plan, but they are fairly useless. In most cases.

Let’s review a special case where these properties suddenly get more useful.

Typically, when I look at an operator that does I/O, I expect to see values in both the Estimated CPU Cost and the Estimated I/O Cost. An Index Seek locates specific data in an index. The CPU has to do some work, but most of the cost of this operator is the actual I/O. So when I look at an execution plan and I see an Index Seek that has an Estimated CPU Cost of 0.0001787 and an Estimated I/O Cost of 0.003125 (almost 20 times as much), I am not surprised. That’s an expected ratio.
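If you want to see such a ratio yourself, any single-row lookup will do. Here is a hypothetical example; the table name and its clustered primary key are assumptions for illustration, not taken from a real demo database:

```sql
-- Hypothetical example: a single-row lookup on a clustered primary key.
-- In the estimated execution plan, inspect the (Clustered) Index Seek:
-- its Estimated I/O Cost should be far higher than its Estimated CPU Cost.
SELECT OrderID, OrderDate
FROM   dbo.Orders
WHERE  OrderID = 42;
```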

Conversely, operators that don’t do any I/O should of course be estimated to have a zero I/O cost. And that is indeed the case. Operators such as Nested Loops, Assert, or Row Count Spool will always have an Estimated I/O Cost of zero.

And then there are some operators where the Estimated I/O Cost may or may not be zero. A good example is Merge Join. When Merge Join does a one to many join, it performs all its actions in memory. But when the relationship between the data sources to be joined is many to many, then it has to write data to a worktable, and read that data back later, which of course incurs I/O. So in that case, you’ll see a non-zero Estimated I/O Cost. But of course, you can also just look at the Many to Many property to know what relationship the optimizer inferred between the data sources, so you don’t need to look at the Estimated I/O Cost for this (though you can still use it to gauge exactly how much more expensive the join is because of that many to many relationship).
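As a hedged sketch of how you might observe this, consider two hypothetical tables where neither side is unique on the join column, with a join hint to force the Merge Join:

```sql
-- Hypothetical example: neither dbo.Orders nor dbo.Invoices is unique on
-- CustomerID, so the forced Merge Join is many to many. Expect the operator
-- to show Many to Many = True and a non-zero Estimated I/O Cost (worktable).
SELECT o.OrderID, i.InvoiceID
FROM dbo.Orders AS o
INNER MERGE JOIN dbo.Invoices AS i
   ON i.CustomerID = o.CustomerID;
```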

But recently I was surprised, when looking at an execution plan, to see a non-zero value for Estimated I/O Cost on a Hash Match operator. This confused me at first. The Hash Match algorithm is supposed to use CPU only. Lots of it. And lots of memory. But no I/O. Or rather, hopefully no I/O. Hash Match will only do I/O if it runs out of memory, and has to spill to tempdb in order to complete its work.

Spilling is bad, and SQL Server tries to avoid it. It requests sufficient memory (the “memory grant”) to store all the data it needs to store while processing a Hash Match operator, plus a margin of error. And of course, if the number of rows to process, or more specifically to store in memory, is severely underestimated, then the Hash Match will spill to tempdb and ruin your day.

But this is never planned. Or is it?

Yes, this will spill!

All estimated numbers in execution plans are always based on the assumption that the cardinality estimations are correct. That the actual number of rows will be equal to the estimated number of rows. So if the requested memory grant is based on the estimated number of rows, and the estimated cost is based on the estimated number of rows, then surely the Hash Match operator should never be estimated to spill. Right?

Wrong!

Seeing a non-zero Estimated I/O Cost on a Hash Match means that, during optimization, the optimizer already realized that, based on the estimated number of rows, the operator will probably need more memory than what it requests in the Memory Grant plan property. So why does it not request more?

Well, perhaps it computed a memory requirement that is more than what you have available on your server. Or more than can be assigned due to a Resource Governor setting, or some other limitation.

And that’s when I realized how useful the Estimated I/O Cost property can be on operators such as Hash Match, or Sort, and perhaps a few more. Operators that only do I/O in bad cases. When they are estimated to do I/O, it means that the optimizer expects this operator to run slow, perhaps even very slow, and it still created this plan because all alternatives are even worse!
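Based on that realization, here is a minimal sketch of how you could hunt for such plans in the plan cache. Treat it as a starting point under assumptions (it scans the entire cache, so run it with care on a busy server), not as a finished diagnostic script:

```sql
-- Find cached plans where a Hash Match or Sort operator is already estimated
-- to do I/O, i.e. where the optimizer expects a spill to tempdb.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT cp.plan_handle,
       op.value('@NodeId',      'int')          AS NodeId,
       op.value('@PhysicalOp',  'varchar(128)') AS PhysicalOp,
       op.value('@EstimateIO',  'float')        AS EstimatedIOCost,
       op.value('@EstimateCPU', 'float')        AS EstimatedCPUCost
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY qp.query_plan.nodes('//RelOp') AS n(op)
WHERE op.value('@PhysicalOp', 'varchar(128)') IN ('Hash Match', 'Sort')
  AND op.value('@EstimateIO', 'float') > 0;
```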

And that’s really all the incentive I will ever need to scratch my head, look at the query, and find out where and how exactly I asked SQL Server to do such a hideous task. And then redesign the tables, rewrite the query, and/or change my indexing strategy, so that at least the performance issues are not so bad that the optimizer already knows it’s terrible before it even starts execution.

Conclusion

All operators have a non-zero Estimated CPU Cost property. All operators also have an Estimated I/O Cost property, which should be zero for operators that are not supposed to do I/O.

When you see a non-zero Estimated I/O Cost on an operator that should normally not do any I/O, you should always investigate why the optimizer expects that this operator will do I/O it normally should not do. Because I/O is expensive, the optimizer tries to avoid it. So seeing unexpected Estimated I/O Cost means you gave the optimizer an assignment that was so hard that it was unable to avoid this I/O.

Usually, that is not a good sign.
