T-SQL User-Defined Functions: the good, the bad, and the ugly (part 1)

So you thought that encapsulating code in user-defined functions for easy reuse is a good idea? Think again!

SQL Server supports three types of user-defined functions. Only one of them qualifies as good. The other two – well, the title says it all, doesn’t it?

The bad: scalar functions

A scalar user-defined function (UDF) is very much like a stored procedure, except that it always returns a single value of a predefined data type – and because of that property, it isn’t invoked with an EXECUTE statement, but embedded in an expression where the returned value is immediately used. I won’t explain all the basics, but assume that you are either already familiar with the concept, or that you at least have read the description in Books Online. It is allowed to read (but not modify!) table data from within a scalar UDF, but in this blog post I will focus on scalar UDFs that include computations and other expressions only, without doing any data access.

The code below defines and then uses a very simple scalar UDF that simply triples the input:

CREATE FUNCTION dbo.Triple(@Input int)
RETURNS int
AS
BEGIN;
DECLARE @Result int;
SET @Result = @Input * 3;
RETURN @Result;
END;
go
SELECT DataVal, dbo.Triple(DataVal) AS Triple
FROM   dbo.LargeTable;

This example is obviously overly simple – even the most enthusiastic devotee of isolating and reusing code will never bother to define and use a function for something so simple. But if the calculation in the function is actually very complex, it’s easy to see how code that defines the calculation once and then simply invokes the function every time it’s needed is easier to build, understand, debug, and maintain than code that repeats the complex expression at several locations. In traditional programming languages, like C# or VB.Net, it’s easy to see why using functions to encapsulate and reuse common computations is considered a best practice.

But SQL Server isn’t a traditional programming language. It’s a declarative language, with an optimizer that has been designed to optimize the execution order within queries to make sure that the results are returned as fast as possible – without impacting correctness, of course. And that optimizer simply cannot optimize code that calls scalar UDFs as well as it can optimize code that has the same logic inlined. Let’s first take a look – in order to test the performance, I’ll create a table with 100,000 rows with random values from 1 to 10 in the DataVal column.

CREATE TABLE dbo.LargeTable
(KeyVal int NOT NULL PRIMARY KEY,
DataVal int NOT NULL CHECK (DataVal BETWEEN 1 AND 10)
);

WITH Digits
AS (SELECT d FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS d(d))
INSERT INTO dbo.LargeTable (KeyVal, DataVal)
SELECT 10000 * tt.d + 1000 * st.d
+ 100 * h.d + 10 * t.d + s.d + 1,
10 * RAND(CHECKSUM(NEWID())) + 1
FROM   Digits AS s,  Digits AS t,  Digits AS h,
Digits AS st, Digits AS tt;

The code may be a bit complex and you may be tempted to write a simple loop to insert 100,000 rows. But that would take a lot more time – the code above runs in less than 1 second on my laptop, whereas a loop takes almost five seconds. When we need more rows (later), this difference becomes even more noticeable.
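
For comparison, a loop-based version would look roughly like the sketch below (my own reconstruction, not code from the original post). Each iteration is a separate single-row INSERT, and that per-statement overhead is where the extra seconds go.

DECLARE @i int = 1;
WHILE @i <= 100000
BEGIN
    -- one round trip through the engine per row inserted
    INSERT INTO dbo.LargeTable (KeyVal, DataVal)
    VALUES (@i, 10 * RAND(CHECKSUM(NEWID())) + 1);
    SET @i = @i + 1;
END;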

The first test

Now it’s time to test. Below are two queries. The first query is designed to calculate the triple of each DataVal value in the table. (Note that I added the MAX aggregate to ensure that the actual performance would not be impacted by the overhead of returning 100,000 rows to SSMS and rendering them in the result display). The second query is exactly the same, except that it doesn’t use the scalar UDF, but includes (“inlines”) the actual formula in the query.

SELECT MAX(dbo.Triple(DataVal)) AS MaxTriple
FROM   dbo.LargeTable;

SELECT MAX(3 * DataVal) AS MaxTriple
FROM   dbo.LargeTable;

If you select the two queries, activate the option to include the actual execution plan, hit execute, and then switch to the execution plan tab, you may be pretty happy:

[Figure: actual execution plans for the two queries, showing nearly identical cost percentages]

I must add that I didn’t get these plans all the time. In my first tests, the plans were equal, with the only difference being the actual content (visible only in the Defined Values property) of the Compute Scalar iterator – the arithmetic formula 3 * DataVal vs. invoking dbo.Triple; in those cases both plans were said to have a cost of 50%. In later tests, the plans changed to the above; the call to dbo.Triple is now hidden several levels deep in the Defined Values property of the Stream Aggregate iterator, and though the same work is still done, the first query is now suddenly said to be slightly cheaper than the second. But either way, whether 50% each or 49% vs 51%, the scalar UDF seems to be a very good choice.

However, you may not be aware that the “Actual Execution Plan” is a dirty rotten liar. Or maybe I should say that the terms “Actual Execution Plan” and “Estimated Execution Plan” are misleading. There is only one execution plan, it gets created when the queries are compiled, and then the queries are executed. The only difference between the “Actual” and the “Estimated” execution plan is that the estimated plan only tells you the estimates for how many rows flow between iterators and how often iterators are executed, and the actual plan adds the actual data for that. But no “actual operator cost” or “actual subtree cost” is added to the corresponding estimated values – and since those costs are the values that the percentages are based on, the percentages displayed in an actual execution plan are still based only on the estimates.

To get a better idea of the actual query cost, let’s run these queries again – this time without the execution plan, but with the actual duration displayed. You can enable this display with the option Query / Query Options / Advanced / SET STATISTICS TIME. Or you can simply add the statement SET STATISTICS TIME ON; at the start of the batch (and SET STATISTICS TIME OFF; at the end). Now, if you run the queries, you’ll get some extra information returned (in the Text tab of the results pane) that tells you exactly how long the query took (elapsed time) and how much CPU time it used (CPU time). On my laptop, the query that uses the UDF takes 889 ms CPU and 900 ms elapsed, and the query with the logic inlined takes only a fraction of that: 47 ms CPU and 52 ms elapsed! Not 49% versus 51%, but 95% versus 5%.
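
For clarity, here is the full measurement batch (the same two queries as above, just wrapped in the timing statements):

SET STATISTICS TIME ON;

SELECT MAX(dbo.Triple(DataVal)) AS MaxTriple
FROM   dbo.LargeTable;

SELECT MAX(3 * DataVal) AS MaxTriple
FROM   dbo.LargeTable;

SET STATISTICS TIME OFF;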

Why?

This huge performance difference is caused by the overhead of calling and executing a scalar UDF. The computation of 3 * DataVal in the second query is entirely executed inside an iterator (the Compute Scalar), which is very efficient. The computation of dbo.Triple(DataVal) in the first query is also executed in an iterator (the Stream Aggregate, in this case) – but since this is a call to a separate module, SQL Server will have to invoke this module for each of the 100,000 rows flowing through that iterator. Each of those 100,000 calls introduces some overhead: start up the execution, step through the two executable statements of the function body, and clean up afterwards. Some sources claim that the function text is interpreted (compiled) on each call; I found that this is (at least on SQL Server 2012) not the case – when executing this code with a profiler trace running, only a single cache hit (or cache insert if the function is not in the procedure cache) event is fired.
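
If you want to check this without running a profiler trace, the plan cache offers an alternative view. The query below is my own sketch, not from the original test (it assumes you have permission to query the server-scoped DMVs): after running the UDF query, the function shows up as a single cached entry with a use count, rather than being compiled over and over.

SELECT cp.usecounts, cp.cacheobjtype, cp.objtype
FROM   sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE  st.text LIKE '%dbo.Triple%';  -- the UDF appears as one cached 'Proc' entry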

This overhead is invisible in the “Actual Execution Plan”, but the execution time and the profiler trace tell a different story. And so does the “Estimated Execution Plan” – if I select the query that uses the function and then request an “Estimated Execution Plan”, I get two execution plans: one for the query that we saw before, and one for the function body, with two iterators that represent the executable statements: a SET with a calculation, and a RETURN.

[Figure: estimated execution plans – one for the query, and one for the function body, with iterators for the SET and the RETURN statements]

But note that, again, the execution plan is lying. First, it implies that the UDF is invoked only once, which is not the case. Second, look at the cost. You may think that the 0% is the effect of rounding down, since a single execution of the function costs so little in relation to the cost of accessing and aggregating 100,000 rows. But if you check the properties of the iterators of the plan for the function, you’ll see that all operator and subtree costs are actually estimated to be exactly 0. This lie is maybe the worst of all – because it’s not just the plan lying to us, it is SQL Server lying to itself. This cost estimate of 0 is actually used by the query optimizer, so all plans it produces are based on the assumption that executing the function is free. As a result, the optimizer will not even consider optimizations it might use if it knew how costly calling a scalar UDF actually is.

Determinism

You may have wondered why SQL Server even bothers to invoke the UDF 100,000 times. Based on the CHECK constraint, there can never be more than 10 distinct values in the DataVal column – so why doesn’t the optimizer transform the execution plan to first get the distinct values of DataVal, then call the UDF only for those? Surely, that would be more efficient? Yes, it would, as we can easily verify by making that transformation ourselves:

SET STATISTICS TIME ON;
SELECT MAX(dbo.Triple(DataVal)) AS MaxTriple
FROM  (SELECT DISTINCT DataVal
FROM   dbo.LargeTable) AS d;

SELECT MAX(3 * DataVal) AS MaxTriple
FROM  (SELECT DISTINCT DataVal
FROM   dbo.LargeTable) AS d;
SET STATISTICS TIME OFF;

If you check the returned execution times, you will see that this technique even helps the performance of the query without function, if only by a little bit – 47 ms CPU and 50 ms elapsed on my laptop. For the version with scalar UDF, the saving is significant, as it is now almost as efficient as the version without scalar UDF: 62 ms CPU and 51 ms elapsed.

So why does the optimizer not make this transformation by itself? There are two reasons for that. The first is that with this version of the UDF, it can’t guarantee that this transformation won’t change the results, because of a property called “determinism”. If a function is deterministic, we can be sure that when it is invoked multiple times with the same arguments, it will always return the same result. If a function is not deterministic, it might return different results, even when the same parameters are passed in. Our scalar UDF is not deterministic, as this query shows:

SELECT OBJECTPROPERTY(OBJECT_ID('dbo.Triple'), 'IsDeterministic');

You can check Books Online for a list of all the requirements a function has to meet to be deterministic. In our case, the only problem is that the UDF is not schemabound, so let’s remedy that:

ALTER FUNCTION dbo.Triple(@Input int)
RETURNS int
WITH SCHEMABINDING
AS
BEGIN;
DECLARE @Result int;
SET @Result = @Input * 3;
RETURN @Result;
END;
go
SELECT OBJECTPROPERTY(OBJECT_ID('dbo.Triple'), 'IsDeterministic');

The function is now marked as deterministic – but if you rerun the previous tests, you’ll see that this does not affect plan choice for these queries at all! The optimizer still won’t shuffle the plan of the first query to match that of the second, even though they are now (with the function marked deterministic) guaranteed to be equivalent. That is because there is a second reason why the optimizer won’t make this change – and that is that the optimizer thinks that invoking the function has a zero cost. Why would it even consider a complicated plan transform that saves 99,990 function calls if it thinks that those calls are free? After all, zero multiplied by 99,990 is still zero. Unfortunately, whereas we can affect determinism of a function, we cannot influence the cost estimate the optimizer uses for it.

This same zero cost estimate leads to more bad plan choices. For instance, in the query below, the optimizer will happily invoke the scalar UDF two times for each row: once for the WHERE and once for the SELECT:

SELECT 1 - dbo.Triple(DataVal)
FROM   dbo.LargeTable
WHERE  dbo.Triple(DataVal) > 20;
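
If you cannot get rid of the UDF, you can at least try to rewrite such a query so that the function is referenced only once, for instance with CROSS APPLY; a sketch is below. Be aware that this is no guarantee – the optimizer is free to expand the expression and still evaluate it twice, so always verify the resulting plan.

SELECT 1 - ca.Triple
FROM   dbo.LargeTable
CROSS APPLY (SELECT dbo.Triple(DataVal) AS Triple) AS ca  -- name the result once
WHERE  ca.Triple > 20;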

It gets worse

Unfortunately, these two (overhead times the number of rows, and a bad cost estimate affecting plan choice) are not the only problems with scalar UDFs. There is a third problem: SQL Server will never use parallelism in a plan that uses scalar UDFs. This only becomes visible with larger tables, so let’s get rid of our 100,000 test rows and replace them with ten million fresh ones:

TRUNCATE TABLE dbo.LargeTable;

WITH Digits
AS (SELECT d FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS d(d))
INSERT INTO dbo.LargeTable (KeyVal, DataVal)
SELECT 1000000 * sm.d
+ 100000 * ht.d + 10000 * tt.d + 1000 * st.d
+ 100 * h.d + 10 * t.d + s.d + 1,
10 * RAND(CHECKSUM(NEWID())) + 1
FROM   Digits AS s,  Digits AS t,  Digits AS h,
Digits AS st, Digits AS tt, Digits AS ht,
Digits AS sm;

If we now execute our original queries again, we will see two changes compared to the first time, when we used 100,000 rows. The first change is that the plans are no longer the same; the plan for the query with scalar UDF is still the same, but the plan for the query without scalar UDF now introduces parallelism.

[Figure: execution plans for the ten-million-row test – the plan with the scalar UDF remains serial, while the plan without it uses parallelism]

The second change is maybe not really a change – it’s the observation that the percentages in the plan are still way off. On my laptop, the query with UDF takes 40108 ms CPU and 43760 ms elapsed to process all ten million rows; the query without UDF does the same in 4397 ms CPU and 808 ms elapsed. Based on CPU usage, the UDF version takes 90% of the batch (making the difference slightly less than in the non-parallel version – this is caused by the overhead of synchronizing over all the threads and combining the results); based on elapsed time, it’s even 98% (because all cores in my laptop are working instead of just one).

I made another interesting (and slightly disturbing) observation when I looked at the execution plans of the queries that force the optimizer to first find the distinct values of DataVal and then compute the triple of only those distinct values:

[Figure: execution plans for the DISTINCT variants – the version without UDF reads the rows in a parallel section and computes the triples serially; the version with UDF is entirely serial]

The version without UDF uses the parallel part of the plan to read all rows and find the distinct DataVal values, then computes the triple for those distinct values in the serial part. I would have expected a similar plan for the version with UDF (since the UDF would only be called in the serial part), but apparently, the mere presence of a scalar UDF in the query prevents any form of parallelism for the entire execution plan!

The remedy

If you care about performance, you should avoid the use of scalar UDFs, except in situations where their performance hit doesn’t hurt. For instance, in a SET statement without subquery, a UDF does not hurt you, because it will be invoked only once. And in a SELECT statement that processes only a very small table and is not part of a time-critical part of your system, the performance hit doesn’t really matter much (but don’t think that it’s safe to use a scalar UDF in a query that returns only a few rows from a large table – sometimes the optimizer will produce a plan where the evaluation of the UDF is pushed down to a part of the plan that is executed before the majority of the rows is filtered out!)
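
For example, an assignment like the sketch below invokes the function body exactly once, so the per-call overhead is a one-time cost that nobody will ever notice:

DECLARE @Tripled int;
SET @Tripled = dbo.Triple(14);  -- a single invocation; the call overhead is paid only once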

The obvious workaround is to not use a scalar UDF at all, but instead inline the code. For the Triple function I used here, this is dead simple. If you have a UDF that contains multiple statements, calculating and storing intermediate results in variables, doing conditional logic based on IF … ELSE blocks, etc. – this can be quite hard. You may have to use complicated CASE expressions, and you may have to repeat expressions multiple times (or use CTEs to avoid that duplication). But the performance gain will make up for the effort! Just don’t forget to carefully comment and document the sometimes hideous queries this may result in. As an example of what I mean, look at this scalar UDF and the corresponding inline rewrite (and if you want to know what the use of this UDF is, there is none; it’s just some nonsense I made up).

CREATE FUNCTION dbo.Nonsense(@Input int)
RETURNS int
WITH SCHEMABINDING
AS
BEGIN;
DECLARE @Result int,
@BaseDate date,
@YearsAdded date,
@WeekDiff int;
SET @BaseDate = '20000101';
SET @YearsAdded = DATEADD(year, @Input, @BaseDate);
IF @Input % 2 = 0
BEGIN;
SET @Result = DATEDIFF(day, @YearsAdded, @BaseDate)
- DATEDIFF(month, @YearsAdded, @BaseDate);
END;
ELSE
BEGIN;
SET @WeekDiff = DATEDIFF(week, @BaseDate, @YearsAdded);
SET @Result = (100 + @WeekDiff) * (@WeekDiff - 100);
END;
RETURN @Result;
END;
go

SELECT KeyVal, DataVal,
dbo.Nonsense(DataVal)
FROM   dbo.LargeTable
WHERE  KeyVal <= 100;

WITH MyCTE
AS (SELECT KeyVal, DataVal,
CAST('20000101' AS date) AS BaseDate,
DATEADD(year, DataVal, CAST('20000101' AS date)) AS YearsAdded
FROM   dbo.LargeTable)
SELECT KeyVal, DataVal,
CASE WHEN DataVal % 2 = 0
THEN DATEDIFF(day, YearsAdded, BaseDate)
- DATEDIFF(month, YearsAdded, BaseDate)
ELSE (100 + DATEDIFF(week, BaseDate, YearsAdded))
* (DATEDIFF(week, BaseDate, YearsAdded) - 100)
END
FROM   MyCTE
WHERE  KeyVal <= 100;

The query with the logic inlined may not look pretty – but if I execute the query for all ten million rows in the table (again using the MAX aggregate to reduce I/O and rendering overhead), the performance difference makes it worth it: over a minute for the UDF version, versus less than 4 seconds for the inlined version!
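
These test queries follow the same pattern as before – both versions wrapped in a MAX aggregate (reconstructed here along the lines of the earlier examples, since only the timings are quoted above):

SELECT MAX(dbo.Nonsense(DataVal)) AS MaxNonsense
FROM   dbo.LargeTable;

WITH MyCTE
AS (SELECT DataVal,
           CAST('20000101' AS date) AS BaseDate,
           DATEADD(year, DataVal, CAST('20000101' AS date)) AS YearsAdded
    FROM   dbo.LargeTable)
SELECT MAX(CASE WHEN DataVal % 2 = 0
                THEN DATEDIFF(day, YearsAdded, BaseDate)
                     - DATEDIFF(month, YearsAdded, BaseDate)
                ELSE (100 + DATEDIFF(week, BaseDate, YearsAdded))
                     * (DATEDIFF(week, BaseDate, YearsAdded) - 100)
           END) AS MaxNonsense
FROM   MyCTE;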

What’s next?

In the next part of this series, I’ll look at how data access in a scalar UDF changes the situation – for the worse! After that, I’ll check the two types of table-valued functions. Where scalar UDFs are bad, table-valued UDFs can be downright ugly – or they can be good, depending on the type. I will also present a way to replace scalar functions with inline table-valued functions, so that you can encapsulate and reuse code without paying the gigantic performance penalty of scalar UDFs – but at the cost of complicating your query. So stay tuned!


39 Comments

  • Hugo Kornelis
    May 20, 2012 23:21

    I guess I should add (thank you for pointing it out, Adam) that all of the above applies to user-defined functions that are created in T-SQL. You can also create user-defined functions with .Net languages (CLR). I have not done much research into them yet; I do know that they usually perform better than their T-SQL counterparts.

    I may do some investigation into CLR user-defined functions in the future, and if I do I’ll write my findings in a blog post.

  • Why hasn’t Microsoft done anything to fix the problems with scalar T-SQL UDFs? They’ve had the opportunity in SQL Server 2008 and 2012. Why don’t they care about SQL Server performance?

  • Hugo Kornelis
    May 21, 2012 15:51

    @Randall: I would not go as far as to say that Microsoft "don’t care about SQL Server performance". There have been many changes to help performance, just not in the area of scalar T-SQL UDFs.

    I can’t tell you why MS has chosen not to invest in the area of scalar T-SQL UDF performance. Only they can tell you. I can give you my best guess: they had more things they wanted to do than they had developer hours available, so they had to prioritize, and other issues were considered more important.

    Personally, I’d love to see some work done here. Just adding a special syntax for an inline scalar UDF (similar to the inline table-valued UDF) would be a great addition. There is a very popular connect item for this (281 up votes; 2 down votes) at https://connect.microsoft.com/SQLServer/feedback/details/273443/the-scalar-expression-function-would-speed-performance-while-keeping-the-benefits-of-functions – if you haven’t voted yet, do so now!

  • "Why hasn’t Microsoft done anything to fix the problems with scalar T-SQL UDFs? They’ve had the opportunity in SQL Server 2008 and 2012. Why don’t they care about SQL Server performance?"

    Agreed.

    There is no reason that this still has to be a problem. At the very least, SQL Server could inline the code when it is compiled or give you the option to have that happen.

    I can’t fathom why this hasn’t been fixed. We’ve recently in-lined a bunch of functions to improve performance in a number of our procedures, and it would be much nicer to have the logic contained in one place.

  • Hugo Kornelis
    May 22, 2012 00:45

    @Geoff: "We’ve recently in-lined a bunch of functions to improve performance in a number of our procedures, and it would be much nicer to have the logic contained in one place."

    There is a way to do that. I plan to cover it in a future blog post, but if you’re impatient you can also read about it here:

    http://sqlblog.com/blogs/adam_machanic/archive/2006/08/04/scalar-functions-inlining-and-performance-an-entertaining-title-for-a-boring-post.aspx

  • GrumpyOldDBA
    May 22, 2012 14:29

    We’ve really known the issues with T-SQL UDFs since SQL 2000. It’s a key point of much performance tuning to just remove UDFs. Typically queries which ran for hours now run in secs and with reduced io usually.

    I did try a bunch of ( simple ) CLR functions to replace T-SQL functions but couldn’t reproduce in test the issues I had in production so the results were inconclusive, some time I will have to revisit.

  • Hugo, thanks.

    We can’t use an inline TVF because we are still stuck on 2000 compatibility mode. We’ve made the changes to get us off of compatibility mode, but they have to be tested.

  • SQL Server 2012 should make this easier with a DETERMINISTIC keyword for the function like Oracle has had for years.

  • Hugo Kornelis
    May 25, 2012 21:13

    Ted, SQL Server uses WITH SCHEMABINDING to mark user-defined functions as deterministic, so no need for a DETERMINISTIC keyword.

    The difference between SQL Server and Oracle, apart from the term used, is that SQL Server can’t force determinism on a function that is actually not deterministic. If what I just found on the internet is correct, Oracle doesn’t check; if you declare a non-deterministic function to be deterministic, you’ll simply get unexpected results.

  • Hugo Kornelis
    May 25, 2012 21:21

    NOTE:

    I just found that I made a mistake with the rowcount in the table used for the demo. I corrected this by editing the text in the blog post – everywhere where I first had 10,000 has been changed to 100,000, and all occurrences of a million have been changed to read ten million. The actual code has not been changed; the text now accurately describes the actual number of rows in the table.

    My apologies for this stupid mistake.

  • TheSQLGuru
    May 26, 2012 17:23

    >>>>>>>>>>>>

    There is no reason that this still has to be a problem. At the very least, SQL Server could inline the code when it is compiled or give you the option to have that happen.

    I can’t fathom why this hasn’t been fixed. We’ve recently in-lined a bunch of functions to improve performance in a number of our procedures, and it would be much nicer to have the logic contained in one place.

    >>>>>>>>>>>>

    VERY good reason: It is really, REALLY f’in HARD to get the engine to know about, understand, and integrate the virtually infinite array of code that could go into any UDF into the optimizer and engine of SQL Server.  Since there are known workarounds (often several) and SOOOOO many other things that need to be fixed/improved in the engine, integrating UDFs has never made and never will make the cut IMNSHO.  ESPECIALLY with Steve Ballmer’s "the cloud – we’re ALL IN" mantra redirecting hundreds of thousands of developer hours to windows/sql azure.  🙁

  • Hugo Kornelis
    May 26, 2012 17:45

    Kevin, I totally agree with your remark if you mean that it would cost too many developer hours to have the optimizer automatically inline all scalar user-defined functions. However, if you think about extending the syntax to allow a special kind of inline function that allows only a single expression (similar to how inline table-valued functions allow only a single query) and inlining them automatically, that would cost far fewer developer hours, place the burden of rewriting functions to a single expression on us, but give us the possibility to encapsulate logic without taking a performance hit.

  • TheSQLGuru
    May 26, 2012 19:07

    Still disagree.  I can write a thousand-line-long SELECT that returns a single scalar value.  Absolutely no way that can be effectively integrated.  You are also ignoring the optimization shortcoming entirely – as in how would an estimate of number of rows or value distribution possibly be extracted from arbitrary code, especially when such output is dependent on the per-row input from the outer calling query?

    Perhaps you are speaking of a non-data-hitting scalar, something like RETURN int AS (@input + 3)?  Again I can make an arbitrarily complex statement that could take extraordinary efforts to get accurate statistics sucked out of it.  

  • RBarryYoung
    May 26, 2012 21:24

    The strange thing is that Microsoft has had an object-type for inline scalar functions since SQL Server 2005, and they even document how they work in a query in every version of the doc since then.  They just never included the syntax to implement one.  Weird.

  • cinahcaM madA
    May 26, 2012 21:46

    Kevin, I’m not sure what data access has to do with anything. The query optimizer generally has no problem inlining even large/complex correlated scalar subqueries or table expressions. I write large table expressions — even scalar-valued — all the time that the QO has absolutely no issue producing accurate statistics for.

    And in cases where expressions can’t be inlined, they’re still much faster when evaluated via a nested subtree rather than in a separate context, as occurs with scalar UDFs.

    … you can read my prior blog post on the topic, linked above in Hugo’s response to Geoff, for more information.

  • Hugo Kornelis
    May 26, 2012 23:31

    @RBarry: Interesting, I didn’t know that. Can you create such objects through the object model, or can you only read them?

    Kevin: It’s not about the number of lines, or the number of values returned – it’s about what goes between the BEGIN and END of the function definition. For the inline scalar function I would like to see added to the product, that would only be one statement: RETURN, followed by an expression that evaluates to a single scalar value. And that expression can be data-hitting (by including subqueries), or not.

    Yes, you can make it complicated, and that may be hard to optimize. How does that differ from manually copying and pasting that same complicated expression into the query? I don’t expect a query that uses an inline scalar function to perform better than one where I inline the expression myself, but it would be great if they would perform the same. That would be a huge step forward. And it is possible, since all it takes is for the parser to replace the reference to the function with its definition, as a sort of macro expansion – exactly how views and inline table-valued functions are already treated today.

  • Nice article.

    A fast way to optimize code that makes use of a scalar function in a situation where the logic is too complex to be re-written is to create a table with all the values and join it.  

    It is not elegant and I don’t know if it performs well under most situations but it helped us cut the execution time tremendously many times.  Something like:

    SELECT
        D.DataVal,
        T.MaxTriple
    FROM (
        SELECT
            DataVal,
            'MaxTriple' = dbo.Triple(DataVal)
        FROM (
            SELECT DISTINCT DataVal
            FROM dbo.LargeTable) T) T
    INNER JOIN dbo.LargeTable D
        ON D.DataVal = T.DataVal

    Frantz

  • TheSQLGuru
    May 27, 2012 23:00

    Here are 2 exceptionally simple UDFs, one scalar and one TVF.  Both result in horrible estimates and suboptimal executions, including memory grants 2 and 3 times larger.  I am sure I am missing something Adam, but am not sure what it is.

    USE AdventureWorks2008R2
    SET NOCOUNT ON
    GO

    --useful index
    CREATE NONCLUSTERED INDEX [idx_fullname] ON [Person].[Person]
    (
    [LastName] ASC,
    [FirstName] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF,
    DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    GO

    CREATE FUNCTION dbo.fn_FullName (@FirstName nvarchar(50), @MiddleName nvarchar(50), @LastName nvarchar(50))
    RETURNS nvarchar(153) WITH SCHEMABINDING
    AS
    BEGIN
      RETURN @LastName + N', ' + @FirstName + N' ' + ISNULL(@MiddleName, N'')
    END
    GO

    --show actual execution plan and note estimated/actual differences and the query shape/operations
    SELECT dbo.fn_FullName(p.FirstName, p.MiddleName, p.LastName) as FullName,
          od.ProductID,
          SUM((UnitPrice - UnitPriceDiscount) * OrderQty) AS TotalPrice
     FROM Sales.SalesOrderHeader oh
    INNER JOIN Sales.SalesOrderDetail od ON od.SalesOrderID = oh.SalesOrderID
    INNER JOIN Sales.SalesPerson sp ON sp.BusinessEntityID = oh.SalesPersonID
    INNER JOIN Person.Person p ON p.BusinessEntityID = sp.BusinessEntityID
    WHERE dbo.fn_FullName(p.FirstName, p.MiddleName, p.LastName) = N'Abbas, Syed E'
    GROUP BY p.LastName, p.FirstName, p.MiddleName, od.ProductID

    .13 duration, .15 cpu, 1231 reads, 2192 memory grant
    significant disparity between estimated and actual rowcounts

    SELECT p.LastName + N', ' + p.FirstName + N' ' + ISNULL(p.MiddleName, '') as FullName,
          od.ProductID,
          SUM((UnitPrice - UnitPriceDiscount) * OrderQty) AS TotalPrice
     FROM Sales.SalesOrderHeader oh
    INNER JOIN Sales.SalesOrderDetail od ON od.SalesOrderID = oh.SalesOrderID
    INNER JOIN Sales.SalesPerson sp ON sp.BusinessEntityID = oh.SalesPersonID
    INNER JOIN Person.Person p ON p.BusinessEntityID = sp.BusinessEntityID
    WHERE p.LastName = N'Abbas'
      AND p.FirstName = N'Syed'
      AND p.MiddleName = N'E'
    GROUP BY p.LastName, p.FirstName, p.MiddleName, od.ProductID

    0 duration, 0 cpu, 63 reads, 1024 (default sized?) memory grant

    CREATE FUNCTION dbo.tvf_FullName (@FirstName nvarchar(50), @MiddleName nvarchar(50), @LastName nvarchar(50))
    RETURNS TABLE WITH SCHEMABINDING
    AS
      RETURN (SELECT @LastName + N', ' + @FirstName + N' ' + ISNULL(@MiddleName, N'') AS FullName)
    GO

    SELECT tvp.FullName,
          od.ProductID,
          SUM((UnitPrice - UnitPriceDiscount) * OrderQty) AS TotalPrice
     FROM Sales.SalesOrderHeader oh
    INNER JOIN Sales.SalesOrderDetail od ON od.SalesOrderID = oh.SalesOrderID
    INNER JOIN Sales.SalesPerson sp ON sp.BusinessEntityID = oh.SalesPersonID
    INNER JOIN Person.Person p ON p.BusinessEntityID = sp.BusinessEntityID
    CROSS APPLY dbo.tvf_FullName (p.FirstName, p.MiddleName, p.LastName) tvp
    WHERE tvp.FullName = N'Abbas, Syed E'
    GROUP BY tvp.FullName, od.ProductID

    .13 duration, .15 cpu, 1231 reads, 3712 memory grant
    significant disparity between estimated and actual rowcounts

  • Dave Maiden
    May 28, 2012 05:03

    I personally think this started when Microsoft decided to create the .NET Framework and go OOP. I appreciate SQL as a language but have since moved on to VB.NET/C#, though I still actively use SQL. When you compile your code within the Framework, the COMPILER is designed to calibrate your code (and the result varies between each language).

    I assume (incorrectly) that the in-built SQL functions are written within the Framework, which would make more sense to me, as a change in one function would not have a devastating effect. First problem: LEGACY – if there is a problem, then they either change it or leave it, the latter being the problem for compatibility. I do, however, understand that legacy systems are in use and use some of the deprecated commands.

    You should test CLRs. I use them all the time, and in these days of asynchronous results surely we should have an essential bible (framework) which everything relies on and is standardised throughout. For example, DATEDIFF("D") is different to Microsoft.VisualBasic.DateDiff(DateInterval.Day) as it uses Timespan.

    Dave

    If I use timestamps, e.g. SYSUTCDATETIME(), instead of statistics time, then I see a different picture. It’s still bad in comparison, but using STATISTICS TIME has a direct and fundamental effect on the observation.

    My Figures are:

    Using SYSUTCDATETIME with LargeTable

    multiplication: ~24 ms

    dbo.triple: ~250 ms

    Using STATISTICS TIME with LargeTable

    multiplication: ~24 ms

    dbo.triple: ~450 ms

  • Madhukar Sreeramoju
    December 17, 2012 08:59

    How do you find the DB memory consumption in SQL Server using a query?

    madhukars@live.com

    Let A, B, x in R such that x > 0. If A > B then xA > xB… so you should SELECT 3 * MAX(DataVal) instead of SELECT MAX(3 * DataVal).

  • I found your post when I was searching the web for the answer to the question: IF I have a UDF call in the SELECT part and the same UDF call in the WHERE part, would optimiser be smart enough to call UDF only once.

    SELECT
        dbo.PortalElevators.elevatorsId AS ElevatorID
        , dbo.GetElevatorLastPlaybackStats(dbo.PortalElevators.elevatorsId) AS MaxT
    FROM
        dbo.PortalElevators
    WHERE
        dbo.GetElevatorLastPlaybackStats(dbo.PortalElevators.elevatorsId) IS NOT NULL

    And you answered that it will call it twice. Alas. It is good to know and there is an easy  workaround.

    But, I’d like to note that in my case use of UDFs made my queries significantly faster.

    I’m using MS SQL 2008. I’ve been researching UDFs for a while trying to optimise my queries and discovered that everybody calls UDFs evil.

    What you and other authors say about UDF is true and helped me to understand how it all works, but still in my particular case use of UDFs made my query run significantly faster.

    In short, I have a relatively large table with logs – around 10 million rows. These logs are events that happen at approximately 2000 computers. Essentially, TableLogs has these columns:

    ID int not null (primary key, identity)
    ComputerID int not null (foreign key to the table of computers)
    EventDateTime datetime not null (when it happened)
    EventDurationMSec int not null (event duration in milliseconds)

    Usually event duration is 10 – 20 seconds, but when things go wrong it is less than 5 seconds. Events normally should happen non-stop. If there are no events for a while for a given computer – there is a problem with the computer.

    So, I have a monitoring screen that shows 2000 rows – one row for each computer. For each computer I show the timestamp of the last event that was longer than 5 seconds. Result is ordered by the timestamp desc and at the top of the list are computers that have problems.

    I could use this simple query:

    select
       ComputerID
       , max(EventDateTime) as LastTimeUp
    from
       TableLogs
    where
       ComputerID = @ParamComputerID
       and EventDurationMSec >= 5000
    group by ComputerID
    order by LastTimeUp desc;

    But, no matter what indexes I created it was still scanning the whole table (or index, but still it was a scan with plenty of reads).

    Then I created scalar UDF with one parameter GetLastTimeUp(@ParamComputerID):

    select @Result = max(EventDateTime)
    from TableLogs
    where ComputerID = @ParamComputerID AND EventDurationMSec >= 5000;

    I created filtered index

    CREATE NONCLUSTERED INDEX [IX_ComputerID_EventDurationMSec] ON [dbo].[TableLogs]
    (
    [ComputerID] ASC,
    [EventDateTime] ASC
    )
    WHERE ([EventDurationMSec]>=(5000));

    And changed query to:

    select
       ComputerID
       ,dbo.GetLastTimeUp(ComputerID) as LastTimeUp
    from TableComputers
    order by LastTimeUp desc;

    Each call to UDF now is not a scan of an index, it is an instant seek of a filtered index.

    Yes, there are 2000 calls and 2000 seeks, but it is way better than scanning through 10 million of all old events, while I need only the latest event.

    I would appreciate it if you could comment this. Thank you.

  • Hugo Kornelis
    July 21, 2013 12:58

    Hi Vladimir,

    That’s a great example of why "it depends" is the most common answer when it comes to SQL Server questions. There are always exceptions to every rule.

    For the record, I think the filtered index would have been useful for the original query as well (but see below). And to make it really work, some massaging and rewriting might have been a good idea. For instance (not sure if this will be best, just throwing out an idea as a starting point):

    SELECT c.ComputerID,
      (SELECT MAX(l.EventDateTime)
       FROM   dbo.TableLogs AS l
       WHERE  l.ComputerID = c.ComputerID
       AND    l.EventDurationMSec >= 5000) AS LastTimeUp
    FROM   dbo.TableComputers AS c
    ORDER BY LastTimeUp DESC;

    I expect this to use a scan of TableComputers and a loop join with a seek in the filtered index. (You may need to INCLUDE the EventDurationMSec column in the index, because of what Microsoft calls a "functionality gap" in the optimizer wrt filtered indexes; if you tried a filtered index for the original query and it was not selected, this is why). That’s essentially the same as you now get with the UDF, but without the overhead.

    On the other hand – I can imagine that the query is currently fast enough already that you have better things to spend your time on…

  • Hi Hugo,

    Thank you for reply.

    Unfortunately, the query that you suggest still scans the index. It uses the filtered index, but scans it instead of seeking.

    I made few scripts to generate test tables and test data and checked performance.

    If you want, I can post the scripts.

    I tried a slightly different approach and tried to use TOP() instead of MAX:

    SELECT
    TableComputers.ID
    , (
    SELECT TOP(1)
    EventDateTime
    FROM
    TableLogs
    WHERE
    EventDurationMSec >= 5000
    AND TableLogs.ComputerID = TableComputers.ID
    ORDER BY EventDateTime DESC
    ) as LastTimeUp
    FROM TableComputers
    ;

    And this query has the plan that I was looking for – it is a Nested loop left join between TableComputers and seek&top of index. Actual plan shows that actual number of rows and number of executions of Seek is 2048 – one for each computer ID.

    I checked number of reads in profiler.

    Variant with TOP has 6,797 reads.

    Variant with UDF has 6,157 reads. Still slightly better.

    I can only guess why.

    With UDF there is scan of TableComputers (11 pages), then 2048 calls of UDF, 3 reads each:

    2048*3+11 = 6155 (almost 6157 that profiler tells me)

    With TOP there are 2048 seeks, 3 reads each = 6144, so 6797-6144 = 653 reads must be these nested loops that join TableComputers and results of index seek. With UDF there is no join.

    But with UDF there is no chance to get parallel plan.

    In my case the table is not too big, but if it grows to 100 million rows and optimiser would be smart enough to generate a parallel plan, it could become better than UDF.

    So, my point is that UDF can be useful from the performance point of view. I think of it as using hints to optimiser in the query. Sometimes it could be difficult to get the plan you want and UDF could be one way to force it.

  • Sure, Scalar UDFs are bad because they basically make everything RBAR (Row By Agonizing Row) – there is overhead in having SQL Server prepare and execute the UDF for each and every row.

    In the interests of DRY (Don’t Repeat Yourself) have you considered keeping your algorithms inside UDFs but rewriting them as Inline Table Valued Functions for speed?

    For example:

    if exists(select * from sys.objects where object_id=object_id(N'dbo.TripleITVF'))
    drop function dbo.TripleITVF
    go
    create function dbo.TripleITVF(@Input int)
    returns table as
    return select [Triple] = 3 * @Input;
    go

    Usage comparison:

    set statistics time on;
    set statistics io on;
    select [MaxTriple]=max(dbo.Triple(DataVal)) from dbo.LargeTable;
    select [MaxTriple]=max(3 * DataVal) from dbo.LargeTable;
    select [MaxTriple]=max(Triple) from dbo.LargeTable cross apply dbo.TripleITVF(DataVal);
    set statistics io off;
    set statistics time off;

    For your example there is no performance difference between inlining the algorithm inside the max() aggregate and using the ITVF version of the UDF.

  • "But SQL Server isn’t a traditional programming language."

    Not to nit pick but SQL Server is not a language at all. I know that Microsoft developers have developed this annoying habit recently where they don’t even bother to say the word "Server" anymore, not to mention that 99.999999% don’t know the difference between Sequel and SQL.

  • Hugo Kornelis
    August 27, 2013 22:40

    Mark: You can pick all you want, but do allow me to nit back at you, okay?

    You are totally right about the messed up sentence. I am pretty sure I thought "T-SQL isn’t a traditional programming language", but my fingers apparently had a mind of their own, and I didn’t notice. I guess I need a better proofreader.

    Leaving out the word "Server" is indeed incredibly annoying; I hate it for a multitude of reasons. The most practical being that I lost count of the number of times I see a job ad for a "SQL expert" and don’t know if they mean me, or if the content of the ad will reveal it’s actually PL/SQL or MySQL they are looking for.

    Third: "Sequel" – are you talking about the ’70s language developed by IBM that was later renamed to SQL for trademark reasons, about the MySQL database management product for Mac OS X, or about the "official" pronunciation of SQL in SQL Server (but not in SQL, the language)?

  • Hugo Kornelis
    August 27, 2013 22:42

    Antman: You are way ahead of me with that comment. (My own fault, I guess; I should have finished the series before giving you that chance).

    After covering scalar UDF (the bad), I plan to move on to multi-statement UDFs (the ugly) – and then wrap it up with the good: inline UDFs. You took away all the tension. (I still plan to write those other parts – when I’ve got time again)

  • Hugo,

    It is really a nice post and helped me to understand the difference between a UDF and a TVF. However, I am relatively new to T-SQL programming, so perhaps someone can help me here….

    I have the below UDF, which is being called by some procedure and is taking a huge amount of time. After reading this post I understand that using a UDF is bad and hurts performance. Can anyone help me change the below UDF into a TVF, or suggest any other alternative code?

    ################

    Create function [dbo].[ufnGetOrgHierarchyByPassingOrgUnitTypeId](@OrgUnitID INT, @OrgUnitTypeId as INT)
    returns varchar(max)
    AS
    BEGIN
    DECLARE @UnitType INT
    DECLARE @DeptCode VARCHAR(100)
     ,@FuncCode VARCHAR(100)
     ,@BUCode VARCHAR(100)
     ,@SPUCode VARCHAR(100)
     ,@SubSegCode VARCHAR(100)
     ,@SegCode VARCHAR(100),
     @FirstOrgUnitID INT,
     @OutputString varchar(max)
    SET @FirstOrgUnitID = @OrgUnitID
    --SET @OrgUnitID = (select OrgUnitID FROM GiftLocalThreshold)
    DECLARE @DeptID INT = 0
     ,@FuncID INT = 0
     ,@BUID INT = 0
     ,@SPUID INT = 0
     ,@SubSegID INT = 0
     ,@SegID INT = 0
    ---- Use SET clause to initialize the value instead of SELECT keyword
    SELECT @DeptCode = ''
     ,@FuncCode = ''
     ,@BUCode = ''
     ,@SPUCode = ''
     ,@SubSegCode = ''
     ,@SegCode = ''
    SELECT @UnitType = OrgUnitTypeID FROM [GPD_OrgUnitDetails] WHERE OrgUnitID = @OrgUnitID
    DECLARE @NextLevelunitType INT = 0
    SELECT @NextLevelunitType = @UnitType
    DECLARE @Code VARCHAR(15) = ''
    IF @NextLevelunitType = 6
    BEGIN
     SELECT @DeptCode = OrgUnitDesc + case when isnull(Active,0)=0 then ' (Inactive)' else '' end, @DeptID = @OrgUnitID
      ,@Code = ParentCode
     FROM [GPD_OrgUnitDetails]
     WHERE OrgUnitID = @OrgUnitID;
     SELECT @NextLevelunitType = OrgUnitTypeID
      ,@OrgUnitID = OrgUnitId
     FROM [GPD_OrgUnitDetails]
     WHERE Code = @Code
    END
    IF @NextLevelunitType = 5
    BEGIN
     SELECT @FuncCode = OrgUnitDesc + case when isnull(Active,0)=0 then ' (Inactive)' else '' end, @FuncID = @OrgUnitID
      ,@Code = ParentCode
     FROM [GPD_OrgUnitDetails]
     WHERE OrgUnitID = @OrgUnitID
     SELECT @NextLevelunitType = OrgUnitTypeID
      ,@OrgUnitID = OrgUnitId
     FROM [GPD_OrgUnitDetails]
     WHERE Code = @Code
    END
    IF @NextLevelunitType = 4
    BEGIN
     SELECT @BuCode = OrgUnitDesc + case when isnull(Active,0)=0 then ' (Inactive)' else '' end
      ,@Code = ParentCode, @BUID = @OrgUnitID
     FROM [GPD_OrgUnitDetails]
     WHERE OrgUnitID = @OrgUnitID
     SELECT @OrgUnitID = OrgUnitId
      ,@NextLevelunitType = OrgUnitTypeID
     FROM [GPD_OrgUnitDetails]
     WHERE Code = @Code
    END
    IF @NextLevelunitType = 3
    BEGIN
     SELECT @SPUCode = OrgUnitDesc + case when isnull(Active,0)=0 then ' (Inactive)' else '' end
      ,@Code = ParentCode
      ,@NextLevelunitType = OrgUnitTypeID, @SPUID = @OrgUnitID
     FROM [GPD_OrgUnitDetails]
     WHERE OrgUnitID = @OrgUnitID
     SELECT @OrgUnitID = OrgUnitId, @NextLevelunitType = OrgUnitTypeID
     FROM [GPD_OrgUnitDetails]
     WHERE Code = @Code
    END
    IF @NextLevelunitType = 2
    BEGIN
     SELECT @SubSegCode = OrgUnitDesc + case when isnull(Active,0)=0 then ' (Inactive)' else '' end
      ,@Code = ParentCode, @SubSegID = @OrgUnitID
     FROM [GPD_OrgUnitDetails]
     WHERE OrgUnitID = @OrgUnitID
     SELECT @NextLevelunitType = OrgUnitTypeID
      ,@OrgUnitID = OrgUnitId
     FROM [GPD_OrgUnitDetails]
     WHERE Code = @Code
    END
    IF @NextLevelunitType = 1
    BEGIN
     SELECT @SegCode = OrgUnitDesc + case when isnull(Active,0)=0 then ' (Inactive)' else '' end
      ,@Code = ParentCode, @SegID = @OrgUnitID
     FROM [GPD_OrgUnitDetails]
     WHERE OrgUnitID = @OrgUnitID
     SELECT @NextLevelunitType = OrgUnitTypeID
      ,@OrgUnitID = OrgUnitId
     FROM [GPD_OrgUnitDetails]
     WHERE Code = @Code
    END
    -- SELECT @OutputString = ( @SegCode + case when(@SubSegCode!='') then '/'+@SubSegCode else '' end
    --   + case when(@SPUCode!='') then '/'+@SPUCode else '' end
    --   + case when(@BUCode!='') then '/'+@BUCode else '' end
    --   + case when(@FuncCode!='') then '/'+@FuncCode else '' end
    --   + case when(@DeptCode!='') then '/'+@DeptCode else '' end )
    if (@OrgUnitTypeId=1)
    Select @OutputString = @SegCode
    else if (@OrgUnitTypeId=2)
    Select @OutputString = @SubSegCode
    else if (@OrgUnitTypeId=3)
    Select @OutputString = @SPUCode
    else if (@OrgUnitTypeId=4)
    Select @OutputString = @BUCode
    else if (@OrgUnitTypeId=5)
    Select @OutputString = @FuncCode
    else
    Select @OutputString = @DeptCode
    return @OutputString;
    END

    ###############################

  • Hi Hugo

    Thanks for the article. I have a quick question about schemabinding.

    I came here while troubleshooting one of our queries that takes longer than it should, and the issue is with a scalar UDF our query uses.

    As per Adam Machanic’s article quoted here, I added schemabinding, and the query performed at the same speed as the inline code.

    But the question is: in your example that’s not the case, so I am a bit confused whether to commit my changes or whether I should still use inline code. This function is used in a lot of places, so I want to avoid rewriting them all if that’s not necessary.

    Thanks for the advice.

  • Hugo Kornelis
    October 3, 2014 09:55

    Hi John,

    Thanks for your comment!

    I am not aware of any Adam Machanic article that recommends schemabinding as a performance measure. And I would be surprised, because I am also not aware of any performance benefit of schemabinding. Schemabinding is a method you can use to ensure that objects used in a view or function cannot be changed as long as the view/function exists, which prevents some bugs. It is also a requirement for indexed views – maybe some confusion comes from this, because indexed views can sometimes help performance and they do need schemabinding.

    But as far as I know, schemabinding by itself does not help performance. I can’t explain why you saw a performance gain, but I guess you just were lucky.

    If possible, change the UDF to an "inline table-valued function", and you will probably gain a performance benefit. But beware: if the current UDF is scalar, this will require some changes to the calling code as well.
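
    Roughly, the calling-code change looks like the sketch below (with made-up names; dbo.FnCalc stands for your rewritten inline table-valued function returning a column named Result):

    -- before: SELECT dbo.FnCalc(t.SomeColumn) FROM dbo.YourTable AS t;
    SELECT f.Result
    FROM   dbo.YourTable AS t
    CROSS APPLY dbo.FnCalc(t.SomeColumn) AS f;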

  • Hi Hugo,

    What a great post! I thank you for all your valuable advice.

    It should be added to the growing list of "what every developer should know about MS SQL Server" 😉

    .NET developers are often focused on patterns and best practices to build their applications and forget to care about querying data efficiently. I think that a lot of developers rely on their knowledge in their programming language (like C#) and will apply the same principles – like you mentioned, encapsulation – on their databases… without ever thinking that it could be wrong! I was one of those. Now I am aware of it. Again, thanks a lot.

  • Lars Haggqvist
    October 19, 2014 15:36

    All very interesting but rather academic. SQL as a language is based on a misunderstood technology. Mr Codd’s original theory was never properly implemented (else why do we have a DISTINCT keyword). Therefore SQL is necessarily flawed.

  • Hugo Kornelis
    October 19, 2014 23:48

    Hi Lars,

    Thanks for your comment!

    I must admit that I don’t really understand it, though. Yes, I will immediately agree that SQL is not a true implementation of the relational model as proposed by Codd. But for me, that leads to the reverse of your conclusion: until we see an implementation of a true relational database that reaches a coverage of at least 5% of the current market for SQL-based databases, I would consider discussions about true relational theory academic. (Not that there’s anything wrong with that, a lot of practical progress starts with academic discussion).

    The blog post you are commenting on does not pretend to be purely relational. It is, or maybe I should say it tries to be, a very pragmatic approach to a pragmatic problem in a language that may not be as Codd intended it, but is still widely used in many systems. Any blog post that tries to solve practical problems in actual database systems is, in my opinion, anything but academic.

    Sorry for the rant. 🙂

  • upload a user defined scalar function

  • dsds

  • @Lars Haggqvist: "else why do we have a DISTINCT keyword"… Well, we do have it because even though primary keys guarantee uniqueness of a row (as the theory requires), you cannot (and should not!) guarantee that every attribute will always be unique. That would be a pretty useless theory, wouldn’t it?

  • I’m surprised that there seems to be so little curiosity about why SQL Server has such horrific performance with scalar UDFs, compared to TVFs, when the computational work performed is identical.

