Memory-optimized Logging

In a previous post I talked about how the transaction log works. What about when we use a memory-optimized table?

SQL Server has a feature called Memory-Optimized Objects to improve performance. In-memory nonclustered indexes are implemented using a data structure called a Bw-Tree. A Bw-Tree is a lock- and latch-free variation of a B-Tree.

In-memory architecture:
[figure: In-Memory OLTP architecture]

To enable an application to use In-Memory OLTP, you need to complete the following tasks:

  • Create a memory-optimized data filegroup and add a container to the filegroup (see the sketch after this list).
  • Create memory-optimized tables and indexes.
  • Load data into the memory-optimized table and update statistics after loading the data and before creating the compiled stored procedures.
  • Create natively compiled stored procedures to access data in memory-optimized tables. You can also use traditional, interpreted Transact-SQL to access data in memory-optimized tables.
  • As needed, migrate data from existing tables to memory-optimized tables.
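For the first step, the filegroup and container can be added along these lines (a minimal sketch; the database name, filegroup name, and path are assumptions for illustration):

-- Add a filegroup that can hold memory-optimized data
ALTER DATABASE TestDB
    ADD FILEGROUP TestDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO

-- Add a container (a folder on disk) to that filegroup
ALTER DATABASE TestDB
    ADD FILE (NAME = 'TestDB_mod_dir', FILENAME = 'C:\Data\TestDB_mod_dir')
    TO FILEGROUP TestDB_mod;
GO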

Now that we have this new architecture, let’s see what happens at the logging level.

I created a table using the following code:

CREATE TABLE InMemoryTable (
 ID INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED
 , FIRST_NAME VARCHAR(100)
) WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY)

As you can see, the first part is the same as for a normal table; to make it an in-memory table we add the MEMORY_OPTIMIZED and DURABILITY options. In this case, I’d like you to pay attention to the DURABILITY option, which is set to SCHEMA_ONLY.

Let’s take a look at the transaction log after creating the table.
[screenshot: transaction log records generated by CREATE TABLE]
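One way to look at these records yourself is the undocumented fn_dblog function. A minimal sketch of such a query (the two NULL arguments mean no LSN range filter; the columns listed are standard fn_dblog output columns):

-- Read the active portion of the transaction log for the current database
SELECT [Current LSN], Operation, Context, [Transaction ID], AllocUnitName
FROM sys.fn_dblog(NULL, NULL)
ORDER BY [Current LSN];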

We can see all the system-table changes made to create the table. Now let’s insert a row into the table and look at the log again:

INSERT INTO InMemoryTable VALUES('Douglas Correa')
GO

[screenshot: transaction log after the insert, showing no new log records]

You can see that nothing changed, but where’s my data? The data is in the table, but only in memory. As you can imagine, both logging and saving the data to disk are expensive operations.

That means that with DURABILITY = SCHEMA_ONLY the data won’t be there after a crash or a server restart. If we change it to SCHEMA_AND_DATA and look in the log file again, we will see the log operations generated when inserting data.
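As far as I know, the durability of an existing memory-optimized table can’t be changed with ALTER TABLE, so for this test the table is dropped and recreated; a sketch of that:

-- Recreate the table as a durable memory-optimized table
DROP TABLE InMemoryTable;
GO

CREATE TABLE InMemoryTable (
 ID INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED
 , FIRST_NAME VARCHAR(100)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

INSERT INTO InMemoryTable VALUES ('Douglas Correa');
GO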

[screenshot: transaction log after the insert into the SCHEMA_AND_DATA table, showing the LOP_HK operation]

As you can see, the log operation is LOP_HK (HK for Hekaton, the in-memory engine). The row is in the log, so in case of a crash SQL Server can redo the insert.

Conclusion

Memory-optimized tables are fast and can improve performance, especially if you don’t need to persist the data, but there are limitations. One limitation I didn’t like is that you can’t detach and attach the database to recreate the log file.

 

SQL Server Undocumented 1

SQL Server has many undocumented functions and commands. I will write a series of posts about these functions so there is a library of them online.

First, I’d like to write about %%PhysLoc%% and the function fn_PhysLocCracker.

With these, I can see the physical location (file, page, and slot) of each row in my table and then use DBCC PAGE to see the data in each page.

-- Return every row of MyTable together with its physical location (file, page, slot)
SELECT *
FROM dbo.MyTable AS m
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS plc;

-- Inspect one of the pages returned above (print option 3 shows each row in detail)
DBCC PAGE(MyDatabase, File_ID, Page_ID, 3) WITH TABLERESULTS;

Temporary Tables

Sometimes we need to use temporary tables to do aggregations, joins, or simply to make a job easier. SQL Server supports a number of options that you can use to store data temporarily: temporary tables and table variables. Before we start with code, we need to know some concepts.

Local temporary tables are named with a single number sign as a prefix; for example, #T1.

They are visible only to the session that created them. Different sessions can actually create temporary tables with the same name, and each session will see only its own table. Behind the scenes, SQL Server adds unique suffixes to make the names unique in the database, but this is transparent to the sessions.

Local temporary tables are visible throughout the level that created them, across batches, and in all inner levels of the call stack. So if you create a temporary table in a specific level in your code and then execute a dynamic batch or a stored procedure, the inner batch can access the temporary table. If you don’t drop the temporary table explicitly, it is destroyed when the level that created it terminates.
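A small sketch of that visibility behavior (the table name is just for illustration):

-- Outer level: create the temporary table
CREATE TABLE #T1 (ID INT NOT NULL);
INSERT INTO #T1 VALUES (1);

-- An inner dynamic batch can see the outer level's temporary table
EXEC ('SELECT ID FROM #T1;');

-- When the creating level terminates (for example, the connection closes),
-- #T1 is destroyed automatically unless it was dropped explicitly before that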

Global temporary tables are named with two number signs as a prefix; for example, ##T1.
They are visible to all sessions. They are destroyed when the session that created them terminates and there are no active references to them.
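For example (assuming two separate sessions):

-- Session 1: create and populate a global temporary table
CREATE TABLE ##T1 (ID INT NOT NULL);
INSERT INTO ##T1 VALUES (1);

-- Session 2: any other session can read it while it still exists
SELECT ID FROM ##T1;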

Table variables are declared, as opposed to being created. They are named with the at sign (@) as a prefix; for example, @T1. They are visible only to the batch that declared them and are destroyed automatically at the end of the batch. They are not visible across batches in the same level, and are also not visible to inner levels in the call stack.
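A quick sketch of that batch scope:

DECLARE @T1 TABLE (ID INT NOT NULL);
INSERT INTO @T1 VALUES (1);
SELECT ID FROM @T1;   -- works: same batch
GO

SELECT ID FROM @T1;   -- fails: the table variable no longer exists in this new batch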

Temporary tables are created in tempdb in the dbo schema. As already mentioned, you can create two temporary tables with the same name in different sessions, because SQL Server internally adds a unique suffix to each. But if you create temporary tables in different sessions with the same constraint name, only one will be created and the other attempts will fail.
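A sketch of that constraint-name collision (run in two different sessions):

-- Session 1: succeeds
CREATE TABLE #T1 (ID INT NOT NULL CONSTRAINT PK_T1 PRIMARY KEY);

-- Session 2: fails, because the explicit constraint name PK_T1 already exists in tempdb
CREATE TABLE #T1 (ID INT NOT NULL CONSTRAINT PK_T1 PRIMARY KEY);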

Physical Representation in tempdb

There’s a common misconception that only temporary tables have a physical representation in tempdb and that table variables reside only in memory. This isn’t true. Both temporary tables and table variables have a physical representation in tempdb.

You can find entries in the sys.objects view for the internal tables that SQL Server creates in tempdb to implement your temporary tables and table variables. As an example, the following code creates a temporary table called #T1 and then queries the sys.objects view in tempdb looking for table names starting with #.
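A sketch of that code (I also declare a table variable here so that both show up in the results; the names are just for illustration):

-- Create a temporary table and declare a table variable
CREATE TABLE #T1 (col1 INT NOT NULL);
DECLARE @T1 TABLE (col1 INT NOT NULL);

-- List the internal objects SQL Server created for them in tempdb
SELECT name, object_id, type_desc, create_date
FROM tempdb.sys.objects
WHERE name LIKE '#%';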

As we can see, both the table variable and the temporary table are created in tempdb, one implicitly and the other explicitly.

So when should I use one or the other?

When it comes to performance, there’s a very important difference between temporary tables and table variables. SQL Server maintains distribution statistics (histograms) for temporary tables but not for table variables. This means that, generally speaking, you tend to get more optimal plans for temporary tables. This comes at the cost of maintaining histograms, and at the cost of recompilations that are associated with histogram refreshes.
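The plans discussed below came from a test along these lines (a sketch only; the table name, column names, and row counts are assumptions, and the inline index syntax in the table variable requires SQL Server 2014 or later):

-- Temporary table version: SQL Server maintains a histogram on col2
CREATE TABLE #T (
   col1 INT NOT NULL PRIMARY KEY
 , col2 INT NOT NULL
 , filler CHAR(200) NOT NULL DEFAULT 'a'
);
CREATE NONCLUSTERED INDEX idx_col2 ON #T (col2);

INSERT INTO #T (col1, col2)
SELECT n, n
FROM (
    SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b
) AS nums;

SELECT col1, col2, filler
FROM #T
WHERE col2 <= 10;       -- selective filter: index seek plus a few key lookups

-- Table variable version: no histogram on col2
DECLARE @T TABLE (
   col1 INT NOT NULL PRIMARY KEY
 , col2 INT NOT NULL INDEX idx_col2 NONCLUSTERED
 , filler CHAR(200) NOT NULL DEFAULT 'a'
);

INSERT INTO @T (col1, col2)
SELECT col1, col2 FROM #T;

SELECT col1, col2, filler
FROM @T
WHERE col2 <= 10;       -- no histogram: the optimizer guesses low selectivity and scans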

[execution plan for the temporary table query: an index seek on col2 with a few key lookups]

This plan is very efficient. The optimizer examined the histogram on col2 and estimated that a very small number of rows would match the filter. It decided to use the index on col2 because the filter is very selective; only a small number of key lookups are required to obtain the respective data rows. For such a selective filter, this plan is preferable to one that does a full clustered index scan.

[execution plan for the table variable query: a full clustered index scan]

Unlike with temporary tables, SQL Server doesn’t maintain histograms for table variables. Not being able to accurately estimate the selectivity of the filter, the optimizer relies on hardcoded estimates that assume fairly low selectivity (30 percent). It just doesn’t realize that the filter is actually very selective, and that a plan more similar to the one shown earlier in the temporary table would have been much more efficient.

A common question is whether table expressions such as common table expressions (CTEs) also get persisted like temporary tables and table variables. The answer is no. When SQL Server optimizes a query against a table expression, it unnests the inner query’s logic and interacts directly with the underlying tables. This means that unlike temporary tables and table variables, table expressions have no physical side to them.
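For example, in a query like the following sketch (table and column names are illustrative), the optimizer works directly against dbo.MyTable; the CTE is never persisted anywhere:

-- The CTE is just a named query expression; it is unnested at optimization time
WITH FilteredRows AS (
    SELECT col1, col2
    FROM dbo.MyTable
    WHERE col2 <= 10
)
SELECT col1, col2
FROM FilteredRows;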

The conclusion from this example is that when plan efficiency depends on the existence of histograms, you should use temporary tables. Table variables are fine to use in two general cases. One is when the volume of data is so small, like a page or two, that the efficiency of the plan isn’t important. The other case is when the plan is trivial. A trivial plan means that there’s only one sensible plan and the optimizer doesn’t really need histograms to come to this conclusion. An example of such a plan is a range scan in a clustered index or a covering index; such a plan is not dependent on the selectivity of the filter, it’s simply always a better option than a full scan.