Heaps of Trouble?
Introduction
Brad Schulz recently wrote about optimizing a query run against tables with no indexes at all. The problem was, predictably, that performance was not very good. The catch was that we were not allowed to create any indexes (or even new statistics) as part of our optimization efforts.
In this post, I'm going to look at the problem from a different angle, and present an alternative solution to the one Brad found.
Inevitably, there will be some overlap between our entries, and while you don't necessarily need to read Brad's post before this one, I do recommend you read it at some stage. He covers some important points that I won't cover again here.
The Example
We will use data from the AdventureWorks sample database, copied to temporary unindexed heap tables.
A script to create these structures is shown below:
CREATE TABLE #Custs
(
    CustomerID integer NOT NULL,
    TerritoryID integer NULL,
    AccountNumber varchar(10)
        COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL
);
GO
CREATE TABLE #Prods
(
    ProductMainID integer NOT NULL,
    ProductSubID integer NOT NULL,
    ProductSubSubID integer NOT NULL,
    [Name] nvarchar(50)
        COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL
);
GO
CREATE TABLE #OrdHeader
(
    SalesOrderID integer NOT NULL,
    OrderDate datetime NOT NULL,
    SalesOrderNumber nvarchar(25)
        COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
    CustomerID integer NOT NULL
);
GO
CREATE TABLE #OrdDetail
(
    SalesOrderID integer NOT NULL,
    OrderQty smallint NOT NULL,
    LineTotal numeric(38,6) NOT NULL,
    ProductMainID integer NOT NULL,
    ProductSubID integer NOT NULL,
    ProductSubSubID integer NOT NULL
);
GO
INSERT #Custs
(
    CustomerID,
    TerritoryID,
    AccountNumber
)
SELECT
    C.CustomerID,
    C.TerritoryID,
    C.AccountNumber
FROM Sales.Customer AS C
    WITH (TABLOCK);
GO
INSERT #Prods
(
    ProductMainID,
    ProductSubID,
    ProductSubSubID,
    [Name]
)
SELECT
    P.ProductID,
    P.ProductID,
    P.ProductID,
    P.[Name]
FROM Production.Product AS P
    WITH (TABLOCK);
GO
INSERT #OrdHeader
(
    SalesOrderID,
    OrderDate,
    SalesOrderNumber,
    CustomerID
)
SELECT
    H.SalesOrderID,
    H.OrderDate,
    H.SalesOrderNumber,
    H.CustomerID
FROM Sales.SalesOrderHeader AS H
    WITH (TABLOCK);
GO
INSERT #OrdDetail
(
    SalesOrderID,
    OrderQty,
    LineTotal,
    ProductMainID,
    ProductSubID,
    ProductSubSubID
)
SELECT
    D.SalesOrderID,
    D.OrderQty,
    D.LineTotal,
    D.ProductID,
    D.ProductID,
    D.ProductID
FROM Sales.SalesOrderDetail AS D
    WITH (TABLOCK);
The test query itself is a straightforward join of the four tables:
SELECT
    P.ProductMainID AS PID,
    P.[Name],
    D.OrderQty,
    H.SalesOrderNumber,
    H.OrderDate,
    C.TerritoryID
FROM #Prods AS P
JOIN #OrdDetail AS D
    ON P.ProductMainID = D.ProductMainID
    AND P.ProductSubID = D.ProductSubID
    AND P.ProductSubSubID = D.ProductSubSubID
JOIN #OrdHeader AS H
    ON D.SalesOrderID = H.SalesOrderID
JOIN #Custs AS C
    ON H.CustomerID = C.CustomerID
ORDER BY
    P.ProductMainID ASC
OPTION
(
    RECOMPILE,
    MAXDOP 1
);
Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).
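If you want to confirm which statistics SQL Server has created at any point, a query along the following lines should work. This is a sketch: temp-table metadata lives in tempdb, and the auto-created statistics names are system-generated.

```sql
-- Lists the statistics objects on #Custs, and the column each one covers.
-- Run after the test query has executed at least once, so that the
-- auto-created statistics exist.
SELECT
    S.[name] AS stats_name,
    S.auto_created,
    C.[name] AS column_name
FROM tempdb.sys.stats AS S
JOIN tempdb.sys.stats_columns AS SC
    ON SC.[object_id] = S.[object_id]
    AND SC.stats_id = S.stats_id
JOIN tempdb.sys.columns AS C
    ON C.[object_id] = SC.[object_id]
    AND C.column_id = SC.column_id
WHERE S.[object_id] = OBJECT_ID(N'tempdb..#Custs');
```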
The estimated execution plan (using the original cardinality estimation model, version 70) produced for the test query looks like this:
The Problem
The problem here is one of cardinality estimation: the number of rows SQL Server expects to find at each step of the plan.
The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.
Every join in the plan shown above estimates that it will produce just a single row as output. Brad covers the factors that lead to the low estimates in his post.
In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.
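You can verify that figure for yourself with a simple aggregate over the same join predicates (a quick sanity check, not part of the test query):

```sql
-- Counts the rows produced by the #Prods / #OrdDetail join on its own.
-- Returns 121,317 on the AdventureWorks data used here.
SELECT COUNT_BIG(*) AS join_rows
FROM #Prods AS P
JOIN #OrdDetail AS D
    ON P.ProductMainID = D.ProductMainID
    AND P.ProductSubID = D.ProductSubID
    AND P.ProductSubSubID = D.ProductSubSubID;
```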
It should not surprise you that this has rather dire consequences for the remainder of the query plan. In particular, it makes a nonsense of the optimizer's decision to use Nested Loops to join to the two remaining tables.
Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each. The query takes somewhere in the region of 20 minutes to run to completion on my laptop.
A Solution
At this point, you may be thinking the same thing I was: If we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.
A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint means adding OPTION (HASH JOIN) at the bottom of the query. The difference is that using join hints also forces the order of the joins, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion.
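To make the two forms concrete, here is what each looks like on a cut-down, two-table version of the test query:

```sql
-- Query-hint form: OPTION (HASH JOIN) applies to every join in the query,
-- but leaves the optimizer free to reorder the joins.
SELECT P.[Name], D.OrderQty
FROM #Prods AS P
JOIN #OrdDetail AS D
    ON P.ProductMainID = D.ProductMainID
    AND P.ProductSubID = D.ProductSubID
    AND P.ProductSubSubID = D.ProductSubSubID
OPTION (HASH JOIN);

-- Join-hint form: INNER HASH JOIN fixes both the algorithm for this join
-- and the order in which the tables are joined.
SELECT P.[Name], D.OrderQty
FROM #Prods AS P
INNER HASH JOIN #OrdDetail AS D
    ON P.ProductMainID = D.ProductMainID
    AND P.ProductSubID = D.ProductSubID
    AND P.ProductSubSubID = D.ProductSubSubID;
```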
Adding the OPTION (HASH JOIN) hint results in this estimated plan:
That produces the correct output in around 7 seconds, which is quite an improvement!
As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there. (We can actually improve the hashing solution a bit; I'll come back to that later on.)
Faster Nested Loops
It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like A = B can be expressed as (A <= B) AND (A >= B). Armed with this insight, we can rewrite the join predicates like so:
SELECT
    P.ProductMainID AS PID,
    P.[Name],
    D.OrderQty,
    H.SalesOrderNumber,
    H.OrderDate,
    C.TerritoryID
FROM #OrdDetail AS D
JOIN #OrdHeader AS H
    ON D.SalesOrderID >= H.SalesOrderID
    AND D.SalesOrderID <= H.SalesOrderID
JOIN #Custs AS C
    ON H.CustomerID >= C.CustomerID
    AND H.CustomerID <= C.CustomerID
JOIN #Prods AS P
    ON P.ProductMainID >= D.ProductMainID
    AND P.ProductMainID <= D.ProductMainID
    AND P.ProductSubID = D.ProductSubID
    AND P.ProductSubSubID = D.ProductSubSubID
ORDER BY
    D.ProductMainID
OPTION
(
    RECOMPILE,
    LOOP JOIN,
    MAXDOP 1,
    FORCE ORDER
);
I have also added LOOP JOIN and FORCE ORDER query hints, to ensure that only nested loops joins are used, and that the tables are joined in the order they appear in the query. The new estimated execution plan is:
This new query runs in under 2 seconds.
Why Is It Faster?
The main reason for the improvement is the appearance of the Eager Index Spools, which are also known as index-on-the-fly spools.
If you read my Inside The Optimizer series, you might be interested to know that the rule responsible is called JoinToIndexOnTheFly.
An Eager Index Spool consumes all rows from the table (or other row source) it sits above, into an internal rowset that has an index suitable for the join to seek on.
Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and stores those rows in a rowset that has an index keyed on CustomerID.
The term "eager" means that the spool consumes all of its input rows when it starts up (during the Open call). The rowset is built in tempdb and only exists until the query finishes executing. The index has no associated statistics object.
The end result of this strategy is that each unindexed table is only scanned once, and only for the columns necessary for the temporary index. From that point on, every execution of the inner side of the join is answered by a seek using the temporary indexānot the base table.
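Purely as an illustration of what the spool builds (and emphatically not something the rules of this exercise allow us to do ourselves), the spool over #Custs is logically similar to materializing an indexed copy by hand. The #CustsSpool name here is invented:

```sql
-- Illustration only: a manual analogue of the eager index spool over #Custs.
-- The spool reads CustomerID and TerritoryID once, and indexes the copy so
-- the join can seek. (Creating real indexes like this is exactly what the
-- rules of the exercise forbid.)
CREATE TABLE #CustsSpool
(
    CustomerID integer NOT NULL,
    TerritoryID integer NULL
);

CREATE CLUSTERED INDEX cx ON #CustsSpool (CustomerID);

INSERT #CustsSpool (CustomerID, TerritoryID)
SELECT C.CustomerID, C.TerritoryID
FROM #Custs AS C;
```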
A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.
The optimizer has a good estimate for the number of rows it needs to sort at that stage: it is just the cardinality of the table itself. The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.
Nested loops join preserves the order of rows on its outer input, so sorting early is safe. (Hash joins do not preserve order in this way, of course).
The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the "outer reference") hasn't changed from the last row received on the outer input. It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator sequentially.
The optimizer is quite conservative about introducing index spools into a plan, because adding rows to the temporary index is a relatively expensive operation. Its presence in a plan is often an indication that a useful permanent index is missing.
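For comparison, if the rules did allow index creation, indexes along these lines would give the optimizer the same seek targets as the three spools, without the per-query build cost (the index names are illustrative):

```sql
-- Indexes matching the keys the three eager index spools build at run time.
CREATE CLUSTERED INDEX cx_Custs ON #Custs (CustomerID);
CREATE CLUSTERED INDEX cx_OrdHeader ON #OrdHeader (SalesOrderID);
CREATE CLUSTERED INDEX cx_Prods
    ON #Prods (ProductMainID, ProductSubID, ProductSubSubID);
```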
I want to stress that I rewrote the query in this way primarily as an educational exercise: I can't imagine having to do something so horrible to a production system.
Improving the Hash Join
I promised I would return to the solution that uses hash joins. You might be puzzled by the fact that SQL Server can populate three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.
The answer, again, is down to the poor information available to the optimizer. Let's look at the hash join plan again:
Two of the hash joins have single-row estimates on their build inputs. SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory.
This results in the join spilling hash partitions to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.
The join process then continues, and may again run out of memory. This is a recursive process that may eventually result in SQL Server resorting to a bailout join algorithm: one that is guaranteed to complete, but may be very slow.
The data sizes in the example tables are not large enough to force a hash bailout, but they do result in multiple levels of hash recursion. You can see this for yourself by tracing the Hash Warning event using the Profiler tool; on more modern versions of SQL Server, the same information is reported in a post-execution (actual) plan.
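On versions with Extended Events, a session along these lines should capture the same information as the Hash Warning and Sort Warnings trace events (the session name is arbitrary):

```sql
-- Captures hash recursion/bailout and sort spill warnings server-wide
-- while the test queries run.
CREATE EVENT SESSION SpillWarnings ON SERVER
ADD EVENT sqlserver.hash_warning,
ADD EVENT sqlserver.sort_warning
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION SpillWarnings ON SERVER STATE = START;
```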
The final sort in the plan also suffers from a similar problem. It receives very little memory and has to perform multiple passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).
Notice also that because hash joins don't preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as it was in the nested loops plan.
OK, so now that we understand the problems, what can we do to fix them? We can address the hash spilling by forcing a different order for the joins:
SELECT
    P.ProductMainID AS PID,
    P.[Name],
    D.OrderQty,
    H.SalesOrderNumber,
    H.OrderDate,
    C.TerritoryID
FROM #Prods AS P
JOIN #Custs AS C
    JOIN #OrdHeader AS H
        ON H.CustomerID = C.CustomerID
    JOIN #OrdDetail AS D
        ON D.SalesOrderID = H.SalesOrderID
    ON P.ProductMainID = D.ProductMainID
    AND P.ProductSubID = D.ProductSubID
    AND P.ProductSubSubID = D.ProductSubSubID
ORDER BY
    D.ProductMainID
OPTION
(
    MAXDOP 1,
    HASH JOIN,
    FORCE ORDER
);
The execution plan is:
With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.
The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.
Even so, the query runs to completion in under 4 seconds. That's around half the time of the previous hashing solution, but still not as fast as the nested loops trickery.
Final Thoughts
The SQL Server query optimizer makes cost-based decisions, so it is vital to provide it with accurate information. We can't really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics.
I should probably stress that the nested loops solution shown above is not one I would contemplate in the real world. It's there primarily for its educational and entertainment value. I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index.
My thanks to Brad for granting permission to reuse some of his material for this post.