Storage structures 3 – Memory-optimized

Update February 26: The maximum of 8 memory-optimized indexes per table was dropped in SQL Server 2017.

After discussing traditional on-disk rowstore storage in part 1 and columnstores in part 2, it is now time to turn our eye towards memory-optimized storage structures in SQL Server.

Memory-optimized storage was introduced in SQL Server 2014, as part of a project that was codenamed “Hekaton” and later renamed to in-memory OLTP. Whereas columnstore indexes were specifically targeted towards large scale analytical work, Hekaton and memory-optimized tables are specifically geared towards high volume OLTP workloads. By fully eliminating locks and latches, and using precompiled machine code where possible, the processing time of transactions is significantly reduced, allowing for throughput numbers that were previously impossible to achieve.

The name memory-optimized is very deliberately chosen. This feature does not just replace on-disk storage with memory-based storage. The very structure of the data has been completely redesigned in order to benefit from the speed of modern memory, and to allow for safe concurrency without using locks or latches.

Note that memory-optimized columnstore indexes, available since SQL Server 2016, will be described in a separate post. This post is about memory-optimized rowstore indexes.

Varheap

The first thing you will notice when looking into memory-optimized storage structures is that SQL Server only supports nonclustered indexes on a memory-optimized table. There is no such thing as a memory-optimized clustered index. Of course, we know that SQL Server stores any data either in a clustered index or in a heap. Since the former is not supported for in-memory OLTP, that means that, by definition, all memory-optimized tables store their data in a heap. However, for memory-optimized tables, Microsoft uses the term varheap (variable-length heap).

A varheap consists of one or more blocks of memory, each 64 MB in size. (Note that this size is not documented; I deduced it from tests on my own system, but the actual size might be different on different hardware, and Microsoft might change it without notice.) These blocks are connected by a pointer chain. Whenever a new block is allocated, it is added at the start of the chain, as the “first” in the chain. So the last block in the chain will be the first block that was allocated to the table.

Each row from the table is then stored in one of those blocks, wherever there is space available. So the physical order roughly depends on the insertion order, with the caveat above, or on the order in which data is loaded back into memory after a server restart. Either way, it will not be an order that is in any way useful.
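To make the block chain concrete, here is a minimal sketch in Python. This is purely illustrative and not SQL Server code; the block capacity, class names, and list-based chain are all my own stand-ins for the real structures.

```python
# Illustrative sketch (not SQL Server internals): a varheap as a chain of
# fixed-size memory blocks. Each newly allocated block is prepended, so the
# head of the chain is always the most recently allocated block.
BLOCK_CAPACITY = 4  # tiny stand-in for the ~64 MB blocks observed in tests

class Varheap:
    def __init__(self):
        self.blocks = []  # blocks[0] is the head: the newest block

    def insert(self, rowversion):
        # Place the row wherever space is available; allocate a new block
        # at the *front* of the chain when all current blocks are full.
        for block in self.blocks:
            if len(block) < BLOCK_CAPACITY:
                block.append(rowversion)
                return
        self.blocks.insert(0, [rowversion])

    def scan(self):
        # A Table Scan walks the chain from the head, newest block first.
        for block in self.blocks:
            yield from block
```

Inserting five rows fills one block with rows 1 through 4 and then prepends a new block for row 5, so a scan returns row 5 before rows 1 through 4: not a useful order for anything.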

In order to allow for lock- and latch-free concurrent access without concurrency conflicts, SQL Server uses a form of snapshot isolation for all memory-optimized tables. This means that old versions of rows are kept (until no longer needed), but marked as no longer current. This is done by adding two extra internal columns to each row, called the begin timestamp and end timestamp. These names are a bit misleading, because the data in these columns is not related to time at all. The values are, in fact, drawn from an internal, ever-increasing transaction counter. But Microsoft chose to use the term timestamp, so in this blog, I will use that terminology. Just keep in mind that there is no direct relation to any wall clock.

When you insert data in a memory-optimized table, a new row is stored in memory, in the varheap. The row data will of course have the data you specify. The begin timestamp will be set to the timestamp of the current transaction. The end timestamp will be set to a special value to represent infinity. From now on, the row is available to all transactions that started with or after the insert. Older transactions will encounter the row when reading the data, but will pretend it is not there, because they have a timestamp that is lower than the begin timestamp of that row.

When you delete an existing row, the end timestamp is set to the timestamp of the delete transaction. Future read operations will skip the row and pretend it no longer exists if their timestamp is higher than this end timestamp. But older transactions, with a lower timestamp, will still see the deleted row.

For updating, the two actions above are combined. The current row in memory is left unchanged, but its end timestamp is set so that it is logically deleted. A new copy of the row, with the updated columns changed to the new values, is inserted, with the begin timestamp set to the transaction’s timestamp and the end timestamp as infinity. Any read action with the same or a higher timestamp will pretend the original version does not exist and return the new version instead. If it’s a still active older transaction, then it will only see the original version of the row and not the new copy, because its timestamp is lower than the end timestamp of the original and the begin timestamp of the new row.

So far, I have been using the term row in the above for the structures stored in the varheap. But as you see by now, that is not actually correct. Any update causes a new version of the row to be stored, in addition to the old version. So instead of rows, what is actually stored in the varheap are rowversions, each marked with a begin and end timestamp to indicate for which transactions they are valid and hence visible.
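The visibility rule described above can be sketched in a few lines of Python. Again, this is an illustrative simulation under my own assumptions, not SQL Server code; `INF` stands in for the special infinity value in the end timestamp.

```python
# Illustrative sketch: rowversions carrying a begin and end "timestamp" drawn
# from an ever-increasing transaction counter, with the visibility rule from
# the text: a version is visible when begin_ts <= tx_ts < end_ts.
INF = float("inf")

class RowVersion:
    def __init__(self, data, begin_ts):
        self.data = data
        self.begin_ts = begin_ts
        self.end_ts = INF  # infinity: this version is current

def visible(rv, tx_ts):
    # Visible only at or after the insert, and before the (logical) delete.
    return rv.begin_ts <= tx_ts < rv.end_ts

def update(versions, key, new_data, tx_ts):
    # An update = logical delete of the current version + insert of a new one.
    for rv in versions:
        if rv.data["key"] == key and rv.end_ts == INF:
            rv.end_ts = tx_ts
    versions.append(RowVersion(dict(new_data, key=key), tx_ts))
```

If a row is inserted at timestamp 10 and updated at timestamp 20, a transaction with timestamp 15 sees only the old version, while a transaction with timestamp 25 sees only the new one.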

Nonclustered indexes

Memory-optimized nonclustered indexes are not stored as separate structures. Instead, they are implemented as pointer chains that connect the rowversions in various ways. In SQL Server 2014 and 2016, each rowversion in the varheap was always stored with room for eight pointers, which is why memory-optimized tables did not support more than eight nonclustered indexes in those versions. Since SQL Server 2017, the number of pointers is variable, and memory-optimized tables now support up to 999 indexes. (But remember, just because you can does not mean you should.)

On SQL Server 2014 and 2016, declaring fewer indexes did not save any memory. The space for eight pointers was always allocated, even when fewer indexes existed. While this wasted memory for tables with fewer than eight indexes, it had the benefit that extra indexes could be created without having to reshuffle data in memory: the space for the extra pointers was already there. As of SQL Server 2017, on the other hand, adding more indexes does increase the size of the rows in the varheap. While I have not found any documentation about the internals of this, I have to assume that this means a full rebuild of the table and all indexes whenever you add an index later.

We saw earlier that on-disk nonclustered indexes are stored as separate structures, with a data-based (or, in the case of heaps, location-based) pointer to the data in the clustered index or heap. That is not the case for memory-optimized nonclustered indexes. These are implemented as pointers: for each nonclustered index, one of the pointer slots in each rowversion is used to chain it to the “next” rowversion, according to the logic of that index.

SQL Server supports two distinct types of nonclustered index on memory-optimized tables: nonclustered hash indexes and nonclustered indexes. The latter term is in my opinion very confusing, because that name is also used for on-disk rowstore nonclustered indexes. So for clarity, I usually refer to them by the name of their storage structure: Bw-index. And for brevity, I often shorten the term nonclustered hash index to just hash index.

Hash index

A memory-optimized nonclustered hash index is based on the hash of one or more columns in the table. A hash function translates the values in those columns to a hash value in the range that is indicated by the specified number of buckets.

Somewhere in memory, at a location stored in metadata, SQL Server allocates an array of pointers equal in size to that number of buckets. So for each bucket, one pointer exists. That pointer then indicates the location of one of the rowversions for which the indexed columns hash to that value. The pointer in that rowversion then points to another rowversion that also belongs to that bucket, and so on. So in that way, each bucket is the start of a pointer chain that connects all rowversions that belong in that bucket, according to the hash function on the indexed columns.

So let’s say that you have a Persons table, with two hash indexes. One on (LastName, FirstName); the other on only City. The rowversion for Jane Smith, living in New York, would be somewhere in the pointer chain that connects to bucket 164 for the index on name (since Smith, Jane hashes to 164), and in the chain to bucket 883 (the hash of New York) for the city index. If she then moves to Chicago, the new rowversion would be added somewhere in the same pointer chain for 164 for her unchanged name, but it would go to the chain from bucket 504 (the hash of Chicago), whereas the old rowversion still remains connected to bucket 883.

(Note that the hash values in the above example are purely made up)
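The bucket-and-chain structure can be sketched as follows. This is an illustrative Python simulation, not SQL Server internals: the bucket count, the `Node`/`HashIndex` names, and the use of Python's built-in `hash` instead of SQL Server's hash function are all my own assumptions.

```python
# Illustrative sketch: a hash index as an array of bucket pointers, each the
# head of a chain linking every rowversion whose indexed columns hash to that
# bucket. Bucket numbers here come from Python's hash, not SQL Server's.
BUCKET_COUNT = 8

class Node:
    def __init__(self, row):
        self.row = row
        self.next = None  # next rowversion in the same bucket's chain

class HashIndex:
    def __init__(self, key_cols):
        self.key_cols = key_cols
        self.buckets = [None] * BUCKET_COUNT  # one pointer per bucket

    def bucket_of(self, row):
        key = tuple(row[c] for c in self.key_cols)
        return hash(key) % BUCKET_COUNT

    def insert(self, row):
        # New rowversions are simply linked at the head of their bucket chain.
        node = Node(row)
        b = self.bucket_of(row)
        node.next = self.buckets[b]
        self.buckets[b] = node

    def chain(self, b):
        # Walk the pointer chain that starts at bucket b.
        node = self.buckets[b]
        while node:
            yield node.row
            node = node.next
```

Inserting two New York residents into an index on City links both rowversions into the same bucket's chain, newest first, mirroring the Jane Smith example above.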

Bw-tree index

A Bw-tree is very similar to the B-tree we know from disk-based rowstore indexes. I will not go into detail on the differences; if you want all the nitty-gritty details, you can read the paper here. Like a B-tree, a Bw-tree has a single page at the root level, multiple intermediate levels, and a lot of pages at the leaf level. All of these combined are called the “internal pages”.

Where a B-tree has one entry per row on its leaf level, a Bw-tree has one entry per distinct value (or combination of values) of its key column(s) in all its rowversions. This leaf level entry stores the value, and a pointer to one of the rowversions with that value or set of values in the indexed column(s). That rowversion then has a pointer to another rowversion with the same values. This might be an older or newer version of the same row, or, if the index is not declared as unique, it might be a version of a different row that happens to have the same values.

Another important difference between Bw-tree and B-tree, is that the internal pages (the tree structure) in a Bw-tree do not point directly to each other. Instead, every reference goes through the so-called “page mapping table”. So, where entries in the root page of a regular B-tree point to entries on the first intermediate level, the entries in the root page of a Bw-tree point to an entry in the page mapping table, that then in turn points to the actual level one entry.

Obviously, this slows down every process that navigates the tree. Every step to another page in the structure now takes two hops instead of one. Granted, each hop is a direct access to a memory location, so blindingly fast, but it still seems like wasted performance. And when you only think about reading data, you’d be right. However, this extra structure is necessary to facilitate completely lock- and latch-free processes for all modifications, including adding entries on an intermediate page, page splits, and even adding a level. No page is ever modified (because that would necessitate at least a latch). Instead, for every affected page, a new version is created, and then the pointer in the page mapping table is updated to point to the new page.

While this process is fine for the root and intermediate pages, it turned out that the leaf pages were modified too frequently, and this would become a burden. So the last additional complexity for Bw-indexes is that, for the leaf pages only, so-called delta pages are used. A modification is not implemented by creating a new version of the page and then relinking the page mapping table, but by allocating a new “delta page” that describes the changes and that is linked to the leaf page. To read such a leaf page, SQL Server reads the original page, then reads all its delta pages and applies their changes, to construct the current version of the leaf page in memory. A background cleanup process periodically goes through the leaf pages to consolidate all changes and create a new version of each affected leaf page.
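The delta-page mechanics can be sketched like this. An illustrative simulation under my own assumptions (a leaf page as a dict of key-to-pointer entries, deltas as add/delete records), not the actual on-page format.

```python
# Illustrative sketch: a Bw-tree leaf page with a chain of delta pages that
# describe changes. Readers apply the deltas to reconstruct the current page;
# a cleanup step consolidates them into a brand-new page version.
def current_leaf(base_entries, deltas):
    # base_entries: {key: pointer}; deltas: list of ("add"/"del", key, ptr).
    page = dict(base_entries)  # the base page itself is never modified
    for op, key, ptr in deltas:
        if op == "add":
            page[key] = ptr
        else:  # "del"
            page.pop(key, None)
    return page

def consolidate(base_entries, deltas):
    # Build a fresh page with all deltas applied; the page mapping table
    # would then be re-pointed at it, leaving the old page untouched.
    return current_leaf(base_entries, deltas), []
```

Note that neither function mutates the base page: every change is either an appended delta or a whole new page, which is exactly what makes the scheme latch-free.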

Reading from a memory-optimized table or index

After describing the structure of the index, let’s now look at how the data can be accessed.

Table Scan

A Table Scan on a memory-optimized table reads rowversions directly from the varheap. It starts at the first memory block in the chain. (Remember, this is the most recently allocated memory block.) For every rowversion in that memory block, it checks the begin timestamp and end timestamp to verify whether the rowversion is visible to the current transaction, then verifies the Predicate (if any), and then returns the row.

Index Scan

How an Index Scan on a memory-optimized index is processed depends on the type of memory-optimized index.

Index Scan on a Bw-tree index

An Index Scan on a Bw-tree index starts at the first entry in the page mapping table, which always points to the root page of the tree. It uses the first pointer to (once more through the page mapping table) find the logical first page of the first intermediate level. That repeats, until the logical first leaf level page is found. The changes on the delta pages are applied to construct the current version, the first value (or set of values for a composite index) is found, and then the pointer to the first rowversion for that set of values is followed. After checking the begin timestamp and end timestamp, and if applicable the Predicate, the row is returned.

On the next call, the pointer to the next rowversion with the same values in the key column is used. This repeats until the end of the pointer chain. At that point, Index Scan returns to the logical first leaf page to find the second distinct value (or set of values), to repeat the process.

Once all entries on the first leaf page are processed, Index Scan uses the pointer to the next leaf page (once more through the page mapping table), and then repeats the process. This continues until the last leaf page has been processed, as indicated by a null in its next page pointer.

The result is that any Index Scan will always return data in order of the indexed columns. The optimizer might set the Ordered property to false if that order is not required, but that will not affect the actual access strategy.
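The scan logic described above can be condensed into a short sketch: one sorted entry per distinct key value, each heading a chain of rowversions, with the visibility check from earlier. Illustrative Python only; the tuple layout of a rowversion is my own simplification.

```python
# Illustrative sketch: a Bw-tree leaf level holds one entry per distinct key
# value, in key order; each entry heads a chain of rowversions sharing that
# value. A full scan therefore always returns data in key order.
def index_scan(leaf_entries, tx_ts):
    # leaf_entries: sorted list of (key, [rowversions]); a rowversion is a
    # (begin_ts, end_ts, data) tuple, checked with the usual visibility rule.
    for key, chain in leaf_entries:
        for begin_ts, end_ts, data in chain:
            if begin_ts <= tx_ts < end_ts:
                yield data
```

Because the outer loop walks the leaf entries in key order, the output is ordered on the indexed column regardless of which rowversion in each chain happens to be visible.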

Index Scan on a hash index

An Index Scan on a hash index is not supported.

You might still see one in an execution plan, though. As far as I know, the only way to get this is to literally force SQL Server by specifying the index in an index hint. But if you do that, then SQL Server will obey your command and generate an execution plan with an Index Scan on that hash index.

However, this is not what actually happens when you execute the query. Though the execution plan shows an Index Scan on the specified index, the data is actually returned by scanning the varheap, one memory block at a time. This is the exact same method as a Table Scan on a memory-optimized table.

So, when you see an Index Scan on a hash index in your execution plan, SQL Server is just lying to you, and it will in reality perform a Table Scan.

Index Seek

The processing of an Index Seek on a memory-optimized index of course also depends on the type of memory-optimized index.

Index Seek on a Bw-tree index

For an Index Seek, SQL Server once more starts at the first entry in the page mapping table, which always points to the root page of the tree. It now looks at the values on this page to find the correct first-level intermediate page to go to, and then (through the page mapping table) finds that intermediate page. That process repeats, until the leaf level is reached. The changes on the delta pages are applied to construct the current version, and then the requested value (or set of values) is found. The operator follows the pointer to the first rowversion for that set of values, checks the begin timestamp and end timestamp, checks the Predicate (if given), and then that row is returned.

For a singleton lookup, that is the end of the story. This only happens when the execution plan specifies an equality search on all indexed columns in a unique index. There might still be more rowversions connected to the pointer chain. But in a unique index, only one of those rowversions can be valid for any transaction, so after reading the first, any other rowversions that might exist are guaranteed to be invalid for that same transaction.

If the index is nonunique, if only a left-side subset of columns is specified, or if an inequality predicate is used, the operation is always a range seek, which means that the operator will be called again for the next matching row. At this point, the pointer to the next rowversion with the same values in the key columns is used, and the same checks on begin timestamp and end timestamp, plus Predicate if needed, are repeated. Eventually, all rowversions for that set of values have been processed. For an equality search on all columns in a nonunique index, this is the end.

For equality search on a subset of the columns, as well as any inequality predicate, the process then still continues. We return to the leaf page and take the next entry, on the same leaf page or on the next leaf page, to check whether that value still fits the predicate. If yes, then all rowversions chained to that entry are processed in the same way. If no, then we have reached the end of the qualifying data.

An Index Seek on a Bw-tree index is always ordered. Supported predicates are the same as for an Index Seek on a disk-based rowstore index: equality on all columns, equality on a left subset of the columns (optionally with inequality on the immediate next column), or inequality on the leftmost column.

Index Seek on a hash index

A hash index only supports Index Seek with an equality predicate on all indexed columns. For inequality search, and for searches on a subset of the indexed columns, a hash index cannot be used.

The process is quite simple. Index Seek computes the hash of the value (or values) that it needs to find, and finds the corresponding pointer in the array of pointers. That pointer is the location of the first rowversion in that bucket. As always, the begin timestamp and end timestamp (and Predicate, if any) are checked before the row is returned. However, in this case the Seek Predicates property must also be checked, because hash collisions might result in non-matching rows in the same hash bucket.

If the index is declared as unique, then this is a singleton lookup. There might be more rowversions in the same bucket, but they are either for different key values that have a hash collision, or for the same key values but not valid for the current transaction.

If the index is not unique, then there can be more matching rows, so we do a range seek. After returning the first matching row, Index Seek will be called again and continue the pointer chain to search for more rowversions with the correct data in the key columns, and that are valid for the current transaction.
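The seek, including the residual check for hash collisions, can be sketched as follows. Illustrative Python under my own assumptions: a deliberately tiny bucket count so collisions are likely, plain lists for the chains, and Python's `hash` in place of SQL Server's hash function.

```python
# Illustrative sketch: an Index Seek on a hash index walks only the chain of
# the bucket the sought value hashes to, and must re-check the actual key
# values because hash collisions can put non-matching rows in that bucket.
BUCKET_COUNT = 4  # deliberately tiny, so collisions are likely

def bucket_of(key):
    return hash(key) % BUCKET_COUNT

def hash_seek(buckets, key):
    # buckets: list of chains (lists) of (key, data) pairs.
    for row_key, data in buckets[bucket_of(key)]:
        if row_key == key:  # the residual check from the Seek Predicates
            yield data
```

Seeking "New York" touches only one bucket's chain, and any "Chicago" row that happens to collide into the same bucket is filtered out by the key comparison.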

Key Lookup / RID Lookup

Because memory-optimized tables do not have clustered indexes, Key Lookup is not supported on these tables. And while they are a kind of heap, RID Lookup is not supported either. That is because in this case, the nonclustered index is not a separate structure, but a pointer chain superimposed on the data. Following the pointers takes us to the full data and we do not need a separate step to retrieve additional columns from the heap.

Conclusion

Memory-optimized tables are stored in memory in a heap-like structure that is called a varheap. The Table Scan operator uses this structure to find data to return.

Every memory-optimized table must have at least one nonclustered index. More are possible, up to a maximum of eight in SQL Server 2014 and 2016, or 999 as of SQL Server 2017. These nonclustered indexes come in two flavors.

Hash indexes use a hash function on the indexed columns to specify in which bucket a rowversion must be stored. Rowversions in each bucket are connected with a pointer chain. This structure is very efficient to find rows based on an exact match of all indexed columns, but can’t be used for other types of data access.

Bw-tree indexes use a structure that is very similar to a B-tree to store all distinct values in the indexed columns in a tree structure, and then connect the rowversions through pointer chains to those values. These indexes are more versatile, but slower for exact searches.

I hope this gives good insight in how memory-optimized data is used. For the next blog post in this series, I plan to look at memory-optimized columnstore indexes.
