How TOP wrecks performance (part 1)

The TOP keyword in the SELECT clause may not be ANSI standard, but I am sure we have all used it. But did you know that it comes with two optional keywords? And have you used them? They are PERCENT and WITH TIES.

TOP

Let’s first look at the basic functionality of TOP, and how it affects performance – which in fact, contrary to the title of this post, is usually quite good for performance.

Here is a simple example, using the AdventureWorks2012 sample database (which can be downloaded from Codeplex). Let’s start with a very basic example:

SELECT   CustomerID, OrderDate, SubTotal
FROM     Sales.SalesOrderHeader
ORDER BY CustomerID;

For this query, based on the available indexes, the query optimizer has two obvious choices (and probably hundreds of less obvious ones). One is to scan the table (using the clustered index on SalesOrderID) and then sort the results on CustomerID. The other is to do an ordered scan of the index on CustomerID, which eliminates the sort from the plan, but introduces a lookup to fetch the OrderDate and SubTotal columns. Based on the available statistics, the query optimizer estimates that the cost of doing a lookup for each of the 30K+ rows in the table exceeds the cost of a sort, so the plan I get for this query uses the first option:

(Screenshot: execution plan with a Clustered Index Scan followed by a Sort on CustomerID)
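If you want to see the rejected alternative for yourself, you can nudge the optimizer with an index hint. The index name below is the nonclustered index on CustomerID as I believe it ships in AdventureWorks2012; check sys.indexes if your copy names it differently.

-- Experiment only: force the ordered scan of the CustomerID index,
-- so the plan needs a lookup for every row returned.
-- Index name assumed from a default AdventureWorks2012 install.
SELECT   CustomerID, OrderDate, SubTotal
FROM     Sales.SalesOrderHeader WITH (INDEX (IX_SalesOrderHeader_CustomerID))
ORDER BY CustomerID;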

Adding a TOP clause to this query will result in a very nice performance boost. Here are the modified query and the plan I got after adding just a vanilla TOP:

SELECT   TOP(3) CustomerID, OrderDate, SubTotal
FROM     Sales.SalesOrderHeader
ORDER BY CustomerID;

(Screenshot: execution plan with a Top operator above a Nested Loops join between the ordered index scan and the lookup)

The obvious and predictable change is the injection of a Top operator that limits the result set to just the first three rows. But as you see, the rest of the plan also changed shape. This is because the optimizer knows that plans are executed left to right. The Top operator, when called by the SELECT operator, calls the Nested Loops operator to get a row; returns it to the SELECT operator; and then sits waiting until it is called again – which normally will happen fairly quickly, unless the SELECT operator has to wait for the application or network to send out the row. The same thing happens two more times, but when the SELECT operator calls the Top operator for the fourth time, the Top operator will immediately return an “end of data” condition without ever bothering to call its descendant Nested Loops node. No matter how many rows the SalesOrderHeader table has, the TOP clause guarantees that there will never be more than three lookups. The Sort operator we had in the first plan was cheaper than 31,465 lookups, but is far more expensive than just three lookups.
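A quick way to see this difference yourself (my own little check, and the exact numbers will depend on your copy of the database) is to compare the I/O of both queries:

SET STATISTICS IO ON;

-- Full result set: clustered index scan plus sort.
SELECT   CustomerID, OrderDate, SubTotal
FROM     Sales.SalesOrderHeader
ORDER BY CustomerID;

-- TOP(3): ordered index scan plus at most three lookups.
SELECT   TOP(3) CustomerID, OrderDate, SubTotal
FROM     Sales.SalesOrderHeader
ORDER BY CustomerID;

SET STATISTICS IO OFF;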

So vanilla TOP is actually good for performance. It not only reduces the amount of data returned, which reduces the load on the network, but also allows the optimizer to make choices that are optimal for the specified, lower number of rows rather than for the full result set.

WITH TIES

Unfortunately, both the WITH TIES and the PERCENT option change this. Let’s first focus on WITH TIES. Adding this option to a TOP clause has the same effect you sometimes see in sports results: the top three athletes are listed, but if numbers three and four finish in the exact same time or have the exact same score, both are considered to be in the number three spot and both are included in the results. However, if numbers two and three finish in the same time, number four is NOT included – so this is not like DENSE_RANK.
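To make that difference with DENSE_RANK concrete, here is a small self-contained sketch with made-up data (not part of the AdventureWorks demo):

-- Toy data: a tie between positions 2 and 3, but not between 3 and 4.
DECLARE @Results TABLE (Athlete varchar(10), Seconds int);
INSERT INTO @Results VALUES ('Anna', 10), ('Bob', 11), ('Carol', 11), ('Dave', 12);

-- TOP(3) WITH TIES returns Anna, Bob and Carol; Dave is not included,
-- because the third row is not tied with the fourth.
SELECT TOP(3) WITH TIES Athlete, Seconds
FROM   @Results
ORDER BY Seconds;

-- A DENSE_RANK filter would consider Dave to be in "third place" and include him.
SELECT Athlete, Seconds
FROM  (SELECT Athlete, Seconds,
              DENSE_RANK() OVER (ORDER BY Seconds) AS Ranking
       FROM   @Results) AS d
WHERE  Ranking <= 3;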

Because the concept of ties requires the concept of an order, the WITH TIES option requires that the query has an ORDER BY clause. (But using any TOP clause without ORDER BY is kind of an abomination anyway). Adding WITH TIES to our previous query has only a minimal impact on the plan:

SELECT   TOP(3) WITH TIES CustomerID, OrderDate, SubTotal
FROM     Sales.SalesOrderHeader
ORDER BY CustomerID;

(Screenshot: execution plan with the same shape as the previous one; the Top operator now reads four rows)

If you run this query you will see that there are still only three rows returned, but the actual number of rows returned from the Nested Loops operator to the Top operator is shown as four. Where does that extra row come from? Well, if you remember the results of the original query, you will know that there is no tie between rows #3 and #4. The way you saw that was to look at both rows and compare the customer number. And that is also exactly how SQL Server handles this. On the fourth call, the Top operator will request another row from its descendant and compare the value of its ORDER BY column(s) to the value those columns had in the third row. If they are the same, it will return the row, wait for the next call, and then repeat this until the value changes; only then will it return an “end of data” condition and stop calling its descendants. In this example the value changed right on that fourth call, so the number of rows processed is just one more than the number in the TOP expression. If you use TOP(4) WITH TIES, you will get six rows in the result and seven rows processed, because only after reading the seventh row does the Top operator know there are no more ties.
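You can check where these ties sit by just eyeballing the first few rows, for example:

-- Look at the leading rows to see which CustomerID values are tied.
-- Based on the behaviour described above, rows 4 through 6 share a
-- CustomerID and row 7 starts a new value; your copy of the data may differ.
SELECT   TOP(7) CustomerID, OrderDate, SubTotal
FROM     Sales.SalesOrderHeader
ORDER BY CustomerID;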

So far, all behavior is exactly as expected – with one exception. In the execution plan above, the estimated number of rows is three. This is clearly not correct; there is no way SQL Server can ever satisfy this query without processing at least four rows (to verify that there is no tie), or more (if there is one). Now you may think this is just a minor difference, and in this specific case it is – one row off for the TOP(3) WITH TIES and three rows off for TOP(4) WITH TIES will probably not have a major impact on plan choice. But the difference can be much bigger, as demonstrated by this query and execution plan:

-- Create index (for the demo)
CREATE INDEX IX_SalesOrderHeader_TerritoryID
ON Sales.SalesOrderHeader(TerritoryID);
GO

SELECT   TOP(3) WITH TIES CustomerID, OrderDate, SubTotal
FROM     Sales.SalesOrderHeader
ORDER BY TerritoryID;
GO

-- Drop index (we don't really need it)
DROP INDEX Sales.SalesOrderHeader.IX_SalesOrderHeader_TerritoryID;
GO

(Screenshot: execution plan with the lookup estimated at 3 executions but actually executed 4,595 times)

The difference between an estimate of 3 and an actual of 4,595 is way too big, and in this case it actually results in a bad plan choice. The lookup is estimated to execute three times, but in reality it executes 4,595 times. The result is a whopping 14,091 logical reads on a table that is only 689 pages in size. Here, a plan with a clustered index scan and a sort would have been much faster.
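If you want to verify that claim, you can (purely as an experiment, not something I would leave in production code) force the scan-and-sort plan with a table hint and compare the SET STATISTICS IO output while the TerritoryID index from the demo still exists:

-- Experiment only: INDEX(1) forces a scan of the clustered index,
-- which gives the scan + sort plan instead of the lookup-based plan.
SELECT   TOP(3) WITH TIES CustomerID, OrderDate, SubTotal
FROM     Sales.SalesOrderHeader WITH (INDEX(1))
ORDER BY TerritoryID;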

There is good news, though. All the above tests were done on my main machine, which runs SQL Server 2012 SP1. I also ran a test on SQL Server 2014, and the new cardinality estimator gave me much better results. The estimated row counts for the input to the Top operator now actually take data distribution and statistics into account when a query uses TOP WITH TIES. For the query above, the number of rows coming into the Top operator was estimated as 3146.5 when I enabled the new cardinality estimator, which is much closer to the actual number (4,595). As a result, the plans chosen now have a much better chance of being optimal for the actual data.
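If you are on SQL Server 2014 and want to compare the two estimators side by side without changing the database compatibility level, the documented QUERYTRACEON hints can be used for this (note that they require elevated permissions, and treat this as a test-only technique):

-- Trace flag 9481 forces the legacy cardinality estimator,
-- trace flag 2312 forces the new (SQL Server 2014) estimator.
SELECT   TOP(3) WITH TIES CustomerID, OrderDate, SubTotal
FROM     Sales.SalesOrderHeader
ORDER BY TerritoryID
OPTION   (QUERYTRACEON 9481);

SELECT   TOP(3) WITH TIES CustomerID, OrderDate, SubTotal
FROM     Sales.SalesOrderHeader
ORDER BY TerritoryID
OPTION   (QUERYTRACEON 2312);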

One error that was not fixed in SQL Server 2014 is that, even with WITH TIES, the estimated number of rows read by the Top operator is still equal to the estimated number of rows returned. Since a Top operator running in WITH TIES mode can only stop returning rows to its parent after seeing a row with different values come back from its child node, the estimated input should always be at least one more – but that is just an off-by-one error, which for estimates and plan choice is a minor issue.

Conclusion

So far we have seen that a vanilla TOP operator can give you great performance benefits, but a TOP operator with the WITH TIES option poses a severe danger. In SQL Server 2012 and before, as well as in SQL Server 2014 with the old cardinality estimator, the data distribution of the ORDER BY column(s) is not taken into account, which can result in a gross misestimate, especially if the ORDER BY columns include lots of duplicates. These wrong estimates can easily result in very bad plan choices, and hence bad performance.

The new cardinality estimator in SQL Server 2014 fixes this. It still has an off-by-one error, but that is just a minor issue that will hardly ever cause serious performance issues.

In the next episode, I will focus on the PERCENT option of TOP, and show how this option will wreck performance in an even worse way.


2 Comments

  • How about with ties and Over & Partition by clause…

  • Hugo Kornelis
    November 2, 2015 17:28

    Hi SWAT,

    WITH TIES is covered above. OVER and PARTITION BY are, unfortunately, not supported for TOP – they would be a great addition to the options available in T-SQL, and I know a lot of people have proposed this to Microsoft – but no luck (so far?).
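    The usual workaround for a "top n per group" requirement is ROW_NUMBER with a PARTITION BY; a minimal sketch (the column choices here are just for illustration):

    -- Hypothetical "top 3 most recent orders per territory" using ROW_NUMBER:
    SELECT CustomerID, OrderDate, SubTotal, TerritoryID
    FROM  (SELECT CustomerID, OrderDate, SubTotal, TerritoryID,
                  ROW_NUMBER() OVER (PARTITION BY TerritoryID
                                     ORDER BY OrderDate DESC) AS rn
           FROM   Sales.SalesOrderHeader) AS numbered
    WHERE  rn <= 3;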

