How a row goal affects estimates

Most of our queries are intended to process a complete data set. But not all. There are cases where we restrict the number of rows. Or cases where SQL Server does this. In all such cases, and a few others, the query optimizer uses a mechanism called a “row goal” to ensure that the execution plan is optimized for a subset of the total result set, not all of it.

Some examples

Without trying to be complete, let’s list a few cases where a row goal would be set.

Literal row limitation in the query

The simplest case is when the query literally tells SQL Server that you don’t want all rows returned. Everyone knows the TOP clause, which is most commonly used for this. For ANSI portability, and because it adds a few options, you should also be aware of the OFFSET and FETCH modifiers to the ORDER BY clause, which have similar functionality and are specifically designed to support paging. And there is of course the SET ROWCOUNT option, though I sincerely hope nobody actually uses that. All of these options literally tell SQL Server that we don’t want all results, only a part of them. The execution plan that would produce the entire set the fastest is not necessarily the fastest way to get the few rows we actually want, so it’s a good thing that the optimizer has a way to come up with a different execution plan for these cases.
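As a quick sketch against the AdventureWorks sample database used later in this article, both of the queries below ask for only the first ten rows; the OFFSET/FETCH form is the ANSI-style variant designed with paging in mind (the column choices here are mine, purely for illustration):

```sql
-- TOP: the classic T-SQL way to limit the result set
SELECT TOP (10) SalesOrderID, OrderDate
FROM   Sales.SalesOrderHeader
ORDER BY OrderDate;

-- OFFSET/FETCH: ANSI-style equivalent; changing the OFFSET value implements paging
SELECT SalesOrderID, OrderDate
FROM   Sales.SalesOrderHeader
ORDER BY OrderDate
OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;
```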

Implied row limitation in the query

There are also more subtle, implicit ways for a query to indicate that you don’t need all the rows. For instance, when you use [NOT] EXISTS with a subquery, all you ask SQL Server to verify is whether the subquery returns at least one row. This always gets translated to an execution plan that tries to find a single row, then stops. Nobody cares whether that would be the only row or whether there would be a billion, so why waste resources searching for other matching rows?

Depending on how the execution plan for the subquery of the EXISTS clause looks, you will often find that a row goal is used on these operators to ensure that the optimizer produces the fastest plan for finding that first matching row, instead of a plan that would be fastest if all matching rows were needed.
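As a minimal sketch (not one of the article’s own examples), the query below only needs to find a single SalesOrderHeader row per customer before it can move on to the next customer:

```sql
-- EXISTS stops probing as soon as one matching order header is found
SELECT c.CustomerID
FROM   Sales.Customer AS c
WHERE  EXISTS (SELECT *
               FROM   Sales.SalesOrderHeader AS soh
               WHERE  soh.CustomerID = c.CustomerID);
```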

Hidden row limitation in the query

Sometimes the presence of a row goal is even less obvious from a query. Take for instance a look at the sample query below, based on the AdventureWorks sample database:

SELECT c.CustomerID
FROM   Sales.Customer AS c
CROSS APPLY
       (SELECT     MAX(sod.OrderQty) AS OrderQty
        FROM       Sales.SalesOrderDetail AS sod
        INNER JOIN Sales.SalesOrderHeader AS soh
           ON      soh.SalesOrderID = sod.SalesOrderID
        WHERE      soh.CustomerID = c.CustomerID
        AND        sod.OrderQty > 3
        GROUP BY   soh.CustomerID) AS Sales
WHERE  c.TerritoryID = 3;

Upon inspecting the query there seems to be no reason to assume any row goal would be set. There is nothing that tells us that we don’t need the entire set. Until we look a bit deeper.

The CROSS APPLY returns the highest order quantity of all orders above a certain threshold that this customer placed. That’s always a single row. Or no row at all if the customer has no qualifying orders. And in the latter case, because the query uses CROSS APPLY and not OUTER APPLY, the customer gets omitted from the results.

But if you then look at what the outer query does, you see that the data coming from the subquery is not used at all. So the actual net effect the subquery has on the query is to remove customers that have no sales and show the customers that do. It’s actually a simple query on the Customers table with an EXISTS condition “in disguise”. And though writing it this way might fool your co-workers, or even “future you” if you need to maintain the query later, it doesn’t fool the optimizer:

This is the execution plan for the query above. Note that even though the top left Nested Loops is marked as an Inner Join, the presence of the Top on its inner input, combined with the details in its Output List property, makes it effectively equivalent to a Semi Join, which implements the EXISTS logic.

Query hint

The last method I’ll call out to introduce a row goal in an execution plan is the FAST n query hint, as shown below:

SELECT     soh.OrderDate
FROM       Sales.SalesOrderHeader AS soh
INNER JOIN Sales.SalesOrderDetail AS sod
   ON      sod.ModifiedDate = soh.ModifiedDate
WHERE      soh.SalesPersonID = 278
OPTION (FAST 10);

For your reference, here is a side by side comparison of the execution plan for the query above (left side) and the execution plan for the exact same query without the FAST 10 query hint (right side):

The FAST 10 hint used in the query tells SQL Server that I want an execution plan that returns the first ten rows as fast as possible, even if that means that it takes more time to return the full result set. And sure enough, if you run both versions you will see that with the FAST 10 hint included, the total execution time is a lot higher (on my system, 4.8 seconds versus 0.2 seconds). This simple query is too fast to see if the first 10 rows do indeed appear faster, but I assume they do.

The method SQL Server uses to achieve this is to set a row goal in the execution plan. Think of it as telling the optimizer it needs to find the execution plan for this query with a TOP(10) clause added, but without the operator(s) to actually implement that TOP(10).
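In other words, the optimizer costs the hinted query roughly as if it had been written as below, except that no Top operator ends up in the plan, so the full result set is still returned (the SELECT list here is my own illustration):

```sql
-- Conceptual equivalent used for costing only; the actual plan has no Top operator
SELECT TOP (10) soh.OrderDate
FROM       Sales.SalesOrderHeader AS soh
INNER JOIN Sales.SalesOrderDetail AS sod
   ON      sod.ModifiedDate = soh.ModifiedDate
WHERE      soh.SalesPersonID = 278;
```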

Row goal described

If you want to understand what a row goal is and how it works, just imagine that the engine tells the optimizer “for optimization purposes, please assume that we’ll only ask for the first N rows from this query”. (Or rather from an operator, since row goals typically start somewhere within the plan and then affect only the subtree of that operator).

The optimizer, together with the cardinality estimator, makes sure to “translate” the row goal appropriately to the operators in that subtree. Note that this is not a simple copy/paste operation. If a Filter operator, or any other operator that applies a filter to its rows, has for example a row goal of 200 rows, and the selectivity of the Predicate is estimated to be 5%, then the optimizer knows, or rather estimates, that in order for the Filter operator to return 200 rows it will likely have to read 4,000 rows. So that number now becomes the row goal of its input. Similarly, if a Stream Aggregate operator has a row goal of 5, and the statistics indicate that there are on average 10 rows per group, then the row goal of the Stream Aggregate’s input will be set to 50. Joins, too, are processed similarly (though the computation there is even more complicated).
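The arithmetic behind those translations can be verified with a trivial query (the selectivity and rows-per-group figures are of course just the illustrative numbers from the text):

```sql
-- Filter: to return 200 rows at 5% selectivity, read 200 / 0.05 = 4,000 rows
-- Stream Aggregate: to return 5 groups at ~10 rows/group, read 5 * 10 = 50 rows
SELECT 200 / 0.05 AS FilterInputRowGoal,
       5 * 10     AS AggregateInputRowGoal;
```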

The reverse can also happen. If a blocking operator is used, like for example an eager Table Spool, then the row goal stops at that operator. It doesn’t matter how many rows will eventually be requested from the Table Spool; it reads its entire input before returning the first row anyway, so its subtree will run to completion.

Note that the actual implementation of all of this is probably more subtle than this, and likely uses lots of hardcore mathematics and set logic, but my goal is to make you understand the generic process, not the exact details.

Visibility in execution plans

Now that you know what row goals are, why they are used and how they work, let’s look at how information related to row goals is exposed in execution plans. There is some good news, but some bad news too. Let’s start on a positive note.

Was a row goal used?

Let’s look at a sample query where a row goal is used. The query below is equivalent to the sample query presented in the section on a hidden row limitation, but rewritten to make the limitation more directly visible.

SELECT c.CustomerID
FROM   Sales.Customer AS c
WHERE  c.TerritoryID = 3
AND    EXISTS
       (SELECT     *
        FROM       Sales.SalesOrderDetail AS sod
        INNER JOIN Sales.SalesOrderHeader AS soh
           ON      soh.SalesOrderID = sod.SalesOrderID
        WHERE      soh.CustomerID = c.CustomerID
        AND        sod.OrderQty > 3
        GROUP BY   soh.CustomerID);

The execution plan returned for this is almost the same as the execution plan we saw before:

If you compare this execution plan to the one shown before, you will see that in this case the Nested Loops has changed from Inner Join to Left Semi Join. Any semi join on Nested Loops ensures that the lower input is only executed until a matching row is found, so no explicit Top operator is required anymore; there still is an implied row goal, though.

But is this row goal in any way exposed in the execution plan?

Estimated Number of Rows

The most obvious effect of a row goal is of course on the property it is supposed to influence: the estimated number of rows returned by the operator. Here is another copy of the same execution plan, but this time I have added the value of the Estimated Number of Rows property (called Estimated Number of Rows Per Execution on recent versions of Management Studio) for each operator.

The top line of operators is simple. The optimizer expects that 132 customers match the pushed down predicate on TerritoryID, and that a bit over 90 of these remain after applying the EXISTS check.

The numbers on the lower input of the Nested Loops are of course per execution. The logic here is executed once for each of the estimated 132 rows from the top input, as can be seen in the Estimated Number of Executions property on these operators. If you know the data in the SalesOrderHeader and SalesOrderDetail tables, you might realize that these numbers are suspiciously low. This can alert you to the presence of a row goal: the numbers are artificially lowered because we won’t read the entire set anyway. As soon as a match is found, execution of this branch stops. That’s why the number of rows the Nested Loops is estimated to return is exactly 1.

Note that it should actually be even lower. Remember, of the estimated 132 customers matching the filter on TerritoryID, only 91.5 remain after the Nested Loops; the other 40.5 are discarded because for these customers no row is returned from its bottom input. Out of 132 executions, 91.5 are estimated to return 1 row; the other 40.5 are estimated to return no rows, so mathematically the estimate here should have been 0.693620. However, SQL Server always rounds estimates between 0 and 1 up to 1.
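To check the math, the weighted average per execution can be computed directly; 91.56 is my approximation of the optimizer’s unrounded internal estimate:

```sql
-- Approximately 91.56 executions return 1 row, the remaining 40.44 return 0 rows
SELECT (91.56 * 1 + 40.44 * 0) / 132.0 AS MathematicalEstimate;  -- about 0.6936
```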

The estimate of 1.00841 on the Index Seek is also interesting. As mentioned above, the optimizer and cardinality estimator ensure to “translate” the row goal to child operators. In this case the statistics apparently indicate that it will sometimes, but very rarely, happen that the first order header found for a customer has no qualifying order detail rows, yet a second header exists and is then returned. Remember, for an individual execution of the lower branch to return 2 rows from the Index Seek requires that for the first header returned the Clustered Index Seek on SalesOrderDetail (with the pushed down filter on OrderQty) returns no rows, and that at least one more order header exists for the customer. Based on the estimated number of rows after applying the row goal, the likelihood for this to happen is apparently really low.

We can see how the estimates were affected by the row goal. But is the reverse true? Can we automatically conclude from these low estimates that a row goal was used? Well, that depends on your knowledge of the data and its distribution.

Perhaps your knowledge of the data in the tables is limited and you don’t immediately know that the estimates are too low to correspond to the full set. Or perhaps some filters are pushed down and you might believe that the low estimates are due to the high selectivity of these filters. You cannot be 100% sure that these numbers are due to a row goal. And yet, for a long time this was all we had.


EstimateRowsWithoutRowGoal

Luckily, newer versions of SQL Server make it easier to recognize the presence of a row goal. A new property called EstimateRowsWithoutRowGoal was introduced in SQL Server 2017, and later backported to SQL Server 2016 in Service Pack 2. This property is included on operators that have a row goal (with one exception).

If you see this property on an operator, you always know that a row goal was in effect for that operator. If the property is not there, it usually means that no row goal applies to the operator. However, the EstimateRowsWithoutRowGoal property is also omitted if a row goal was set but never evaluated and used. In my experience this typically happens when the row goal exceeds the total estimated number of rows for the operator.

To the right is a screenshot of the properties of the Index Seek from the execution plan shown above. I have highlighted the Estimated Number of Rows Per Execution, the same value shown in the plan earlier; but as you can see there is also a property EstimateRowsWithoutRowGoal. And this is set to 1.64575, more than 60% higher than the Estimated Number of Rows Per Execution.

This is actually quite cool. We not only see that this operator was affected by a row goal, we also see the exact impact. Without a row goal, the Estimated Number of Rows Per Execution would have been 1.64575; this is apparently the average number of order headers per customer. The row goal that forces the Nested Loops to return just a single row to its parent (the other Nested Loops) ensures that, on average, 0.63734 of those rows are not read because they are no longer needed.

So this is a really great addition to the information in execution plans, and as mentioned it’s in all versions starting from SQL Server 2016 Service Pack 2. It tells you at a glance which operators were affected by a row goal, and also gives you a lot of insight in how much difference the row goal made to the estimated cardinality for the operator. The actual value of the row goal used is, unfortunately, not exposed. Perhaps in a future update?
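If you want to hunt for affected plans, the property surfaces in the showplan XML under the same name, so a blunt search of the plan cache can surface candidates. This is a rough sketch using standard DMVs; casting the plan XML to nvarchar(max) for a LIKE search is crude but effective:

```sql
-- Find cached plans whose showplan XML contains the row goal property
SELECT TOP (10)
       st.text       AS query_text,
       qp.query_plan AS plan_xml
FROM   sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE  CAST(qp.query_plan AS nvarchar(max)) LIKE N'%EstimateRowsWithoutRowGoal%';
```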

A possible confusion …

But a row goal can also lead to enormous confusion. Especially if you are not aware that a row goal was used. To see what I mean, look at these two simple queries:

SELECT SalesOrderID FROM Sales.SalesOrderDetail;
SELECT SalesOrderID FROM Sales.SalesOrderDetail
OPTION (FAST 200);

Here’s what I get when I put these in a single batch and request the estimated execution plan:

The estimated cost for the first query is 99% of the batch; for the second query it is 1%. Even though both execution plans use the same Index Scan operator. With the same properties. In fact, the execution plans are effectively fully identical. The row goal was used in compilation, but did in this case not result in a different execution plan because even for returning just 200 rows, this Index Scan is the best option.

So why does Management Studio tell me that the second query is much, much cheaper?

It’s actually quite simple. The FAST 200 hint resulted in a row goal. We told the optimizer to pretend during its entire operation that only 200 rows would be needed. Up to and including the cost estimates – no, in fact, especially for the cost estimates. We wanted the optimizer to care only about the cost of returning the first 200 rows, and so that is exactly what it did. It computed the cost of returning the first 200 rows.

Just a bit of a shame that Management Studio presents this in a way that might make it seem as if it’s the total cost. It isn’t. The percentages you see tell you that the second query is expected to return its first 200 rows in 1% of the cost it takes the first query to return the full set. But the second query does not really stop after 200 rows, so this percentage (that is already quite meaningless) is even more meaningless in this case.

… and a bug (probably)

The misleading estimated cost values can trick you, but can be avoided by paying more attention. Or by simply not looking at these percentages at all.

But unfortunately, that is not the only problem that a row goal introduces in what the execution plan tells us. And this other problem is harder to avoid. Let’s look at this query, where a row goal will very clearly be used:

SELECT TOP (2000) SalesOrderID
FROM   Sales.SalesOrderDetail
WHERE  OrderQty < 3;

No need to run the query, just ask for the execution plan only and this is what you should see:

In this case, to see what I consider to be a bug you need to check the properties of the Clustered Index Scan operator on the far right, as shown in the screenshot to the right.

The Estimated Number of Rows Per Execution property shows that the operator will return 2000 rows, no surprise. Also expected is the Predicate property for the pushed down filter on OrderQty < 3. With a predicate pushed into the operator, I expect that it will likely need to read more than 2000 rows before it has found the 2000 requested rows. After all, there are probably some rows in there that will be skipped due to this Predicate.

And indeed, the operator reports the Estimated Number of Rows to be Read to be higher. Much higher … a whopping 121317. That’s the entire table! This might be valid if the statistics indicate that only 1.65% of the data matches the filter. But I don’t think that is the case here.

To double check, you could change the row goal by changing TOP (2000) to, for instance, TOP (40000) or TOP (3). Or any other value. No matter which value you put in, the Estimated Number of Rows to be Read will always be 121317.

The problem here is that Estimated Number of Rows to be Read, like EstimateRowsWithoutRowGoal, represents the value before applying the row goal. In the screenshot above, you can see that the EstimateRowsWithoutRowGoal is almost 81,000; given how the data in the OrderQty column is distributed it is indeed reasonable to estimate that 81,000 out of the total 121,000 rows in the table will match. The Estimated Number of Rows Per Execution is then scaled down to match the row goal, but the Estimated Number of Rows to be Read is not. It should have been reduced to, if my math is right, 2,996.52 rows.
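Scaling the total reads down by the same factor as the output estimate shows where that number comes from; 80,972 is my approximation of the "almost 81,000" value from the plan:

```sql
-- rows to read with row goal ≈ total rows * (row goal / estimate without row goal)
SELECT 121317 * (2000.0 / 80972) AS EstRowsToReadWithRowGoal;  -- roughly 2,996.5
```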

So we have a confusing situation. For the number of rows produced, the Estimated Number of Rows (both Per Execution and for All Executions) is scaled down to match the row goal, but the EstimateRowsWithoutRowGoal property, as suggested by the name, is not. So far so good. Until you look at the Estimated Number of Rows to be Read. Based on the observations so far, you would expect properties that don’t have “Without Row Goal” in their name to be scaled down. But that is not the case. The Estimated Number of Rows to be Read actually represents an estimation of the number of rows that would be read without a row goal, and there is no way to directly see what number of rows the optimizer estimated it would need to read with the row goal in place, even though it did compute that and take it into account for costing the operator.

Which brings me to a few more properties that have this same problem. Look at the Estimated CPU Cost, Estimated I/O Cost, and Estimated Operator Cost. Does it make sense that the total cost is far less than the sum of its components? Of course not. And here, too, you will see that the Estimated CPU Cost and Estimated I/O Cost will not change at all if you change the value for the TOP clause or even remove it, but the Estimated Operator Cost does.

So once more, we have a weird discrepancy where two of the three very similarly named properties are computed without the row goal, and the third is a lot lower because it does take the row goal into account.

If you agree that this confusing and misleading situation should be improved, then please vote for my suggestion.


Conclusion

Row goals are a useful instrument. If we only want 50 rows out of a two million row dataset, we are happy that the optimizer has a mechanism to help it find an execution plan that is fast for that small set, instead of sticking to the execution plan that is optimal for finding all two million rows.

Visibility of row goals in execution plans is lacklustre, though. The introduction of the EstimateRowsWithoutRowGoal property has improved things, but there are still situations where SQL Server represents information in a technically correct way that can catch you off guard if you don’t understand the mechanics involved, or even in an outright misleading way.
