Sorting is one of the most frequently used operations in SQL Server. It is used when creating indexes. It allows us to retrieve data in a particular order. It is also used to filter out repeated rows in the context of a DISTINCT request. The SQL Server Team spent a lot of effort optimizing the sort algorithms to give the best possible performance in each context. They even implemented sort avoidance, where the expensive sort gets removed from the query plan if it is known that the input is already sorted or will contain only a single distinct value.
One of those optimizations is the TOP N Sort that I would like to look at in a little more detail today.
Sorting requires memory – potentially a lot of memory. SQL Server estimates the size of the data to be sorted based on statistics and row size estimates and then requests about 150 percent of that amount as memory for the sort operation.
To see how much memory a query is using, you can use the sys.dm_exec_query_memory_grants DMV. It shows the current memory usage for each SPID that requested additional memory for one (or more) of the memory consuming operators.
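For example, a simple look at this DMV, restricted to the columns we are interested in here, could look like this:

-- one row per active memory grant; the row disappears as soon as the grant is released
SELECT mg.session_id,
       mg.requested_memory_kb,
       mg.granted_memory_kb,
       mg.ideal_memory_kb,
       mg.used_memory_kb,
       mg.max_used_memory_kb
FROM sys.dm_exec_query_memory_grants AS mg;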
The problem with this DMV is that its values change quickly, so it is difficult to get meaningful information for a particular query. There is however a trick you can use: every memory-consuming iterator has a build phase, in which most of the memory is used, and a delivery phase, in which the rows get sent to the parent operator. The memory consumption during the build phase is usually a lot higher. SQL Server recognizes this and reuses the additional memory during the delivery phase for other memory-consuming iterators.
The sys.dm_exec_query_memory_grants DMV contains the columns used_memory_kb and max_used_memory_kb, which report the current and the maximum memory usage. If we can ensure that we get the data from this DMV during the delivery phase of the operator in question, the max_used_memory_kb column will tell us exactly how much memory was used during the build phase. This however only works for queries with a single memory-consuming operator, and that operator needs to be blocking.
Sort is a blocking operator. That means that during the build phase no rows get transmitted. If we include the request for the memory data in the query and place it before ("left of" in the graphical plan) the sort, we know that the sort will have finished its build phase when the data gets retrieved from the DMV.
Let us look at an example:
CREATE TABLE dbo.FixedWidth(
  Id INT PRIMARY KEY CLUSTERED,
  c1 INT NOT NULL,
  fill VARCHAR(1000) NOT NULL
);

INSERT INTO dbo.FixedWidth(Id, c1, fill)
SELECT n, CHECKSUM(NEWID()), REPLICATE('X', 192)
FROM dbo.GetNums(10000);

SELECT SUM(DATALENGTH(Id) + DATALENGTH(c1) + DATALENGTH(fill))
FROM dbo.FixedWidth;
This snippet creates the table dbo.FixedWidth and inserts 10,000 rows into it, using Itzik Ben-Gan's GetNums function to generate the numbers. It then sums up the DATALENGTH of all columns over all rows; the size of all the data totals 2,000,000 bytes.
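If you do not have the GetNums helper installed, a minimal single-parameter version in the spirit of Itzik Ben-Gan's cross-join based number generators could look like this (a sketch, not necessarily the exact version used here):

CREATE FUNCTION dbo.GetNums(@n AS BIGINT)
RETURNS TABLE
AS
RETURN
  WITH
  L0 AS (SELECT 1 AS c UNION ALL SELECT 1),               -- 2 rows
  L1 AS (SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),  -- 4 rows
  L2 AS (SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),  -- 16 rows
  L3 AS (SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),  -- 256 rows
  L4 AS (SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),  -- 65,536 rows
  L5 AS (SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),  -- > 4 billion rows
  Nums AS (SELECT ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS n FROM L5)
  SELECT TOP(@n) n FROM Nums ORDER BY n;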
The following query will sort the data in this table and report its memory usage:
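A query along these lines could do the job (just a sketch); the DMV data for the current session is captured as a single XML value by a correlated subquery:

SELECT fw.Id,
       fw.c1,
       fw.fill,
       -- current session's memory grant information as one XML value
       (SELECT mg.ideal_memory_kb,
               mg.granted_memory_kb,
               mg.used_memory_kb,
               mg.max_used_memory_kb
        FROM sys.dm_exec_query_memory_grants AS mg
        WHERE mg.session_id = @@SPID
        FOR XML PATH('memory_grant'), TYPE
       ) AS memory_info
FROM dbo.FixedWidth AS fw
ORDER BY fw.c1;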
In the execution plan for this query, the access to the sys.dm_exec_query_memory_grants DMV happens after the sort has finished. Therefore, the max_used_memory_kb column tells us the amount of memory actually used to sort the entire 2MB table. The information from the memory DMV is returned as a single XML value, and because the DMV result is spooled in this query, all rows show the same XML value.
The XML value shows that 7,352KB is the ideal memory grant size the optimizer came up with. This was also the amount that was granted. However, only 2,456KB were actually used during the sort.
If you are interested only in the first few rows based on a sort order, you do not need to completely sort the rest of the rows.
To find the topmost row it is enough to take the first row of the data set and compare it to each remaining row, keeping each time the row that comes first according to the sort order. If you need more than one row, you can follow the same principle; you just have to keep the first n rows after each step.
SQL Server implements this with the TOP N Sort operator:
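A simple example could look like this (the row count of 10 and the column list are just for illustration):

-- return the ten rows with the smallest c1 values
SELECT TOP(10) fw.Id, fw.c1, fw.fill
FROM dbo.FixedWidth AS fw
ORDER BY fw.c1;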
In the execution plan for this simple query, you can see that the sort operator is of type "TOP N Sort".
You can use the same trick as shown above to get its memory consumption, but you have to add an outer layer, because otherwise SQL Server removes the TOP N Sort optimization:
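A sketch of such a query: the TOP query goes into a derived table and the DMV subquery is attached in the outer SELECT:

SELECT t.Id,
       t.c1,
       t.fill,
       -- current session's memory grant information as one XML value
       (SELECT mg.ideal_memory_kb,
               mg.granted_memory_kb,
               mg.used_memory_kb,
               mg.max_used_memory_kb
        FROM sys.dm_exec_query_memory_grants AS mg
        WHERE mg.session_id = @@SPID
        FOR XML PATH('memory_grant'), TYPE
       ) AS memory_info
FROM (SELECT TOP(10) fw.Id, fw.c1, fw.fill
      FROM dbo.FixedWidth AS fw
      ORDER BY fw.c1
     ) AS t;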
This query used only 24KB, clearly not enough to hold the entire table, so the TOP N Sort optimization was used.
The algorithms SQL Server uses for sorting are not documented. However, it is clear that an algorithm to find the top N rows cannot remain as efficient as a normal sort once N gets large. For that reason there is a maximum value of N for which SQL Server uses the TOP N Sort optimization. That number was set to 100.
If you run the above query with TOP(100), you will see a memory usage of 128KB. If you use TOP(101), the memory usage goes up to 2,456KB. At that point all the memory numbers match the ones from the whole-table sort, so SQL Server decided to execute a standard sort followed by a standard TOP operation instead of using the TOP N Sort optimization.
This boundary is fixed and cannot be configured. There is also no hint to force this optimization. That is rather unfortunate, as 100 seems to be a quite arbitrary boundary value and a TOP N Sort would often perform a lot better than a full table sort followed by a TOP.
We all know that while the optimizer is in general very good at finding a (close to) optimal plan, sometimes it does not succeed.
The TOP N Sort optimization is a great enhancement if you are dealing with 100 or fewer rows. However, at 101 rows you will see a sudden drop in performance. To deal with this, you can either restrict the number of rows to return to 100, if the business allows, or you can help SQL Server deal with the "all rows" sort. One of the most obvious options here is to add an index to the table.
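For example, a covering index on the sort column could look like this (a sketch, assuming the query needs all three columns). With it, SQL Server can read the rows already in c1 order and avoid the sort, and with it the memory grant, altogether:

CREATE NONCLUSTERED INDEX IX_FixedWidth_c1
ON dbo.FixedWidth(c1)
INCLUDE(fill);
-- Id is the clustering key and is therefore automatically part of the nonclustered index.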