Published on February 4, 2020

Understanding In-Memory Optimization in SQL Server

In today’s fast-paced world, businesses need to adapt quickly to meet new competition and drive innovation. As a database administrator, it is crucial to adapt to changing needs and formulate the right design strategy to meet the demands of your services and enterprise. One technology that can significantly improve application performance is In-Memory Optimization in SQL Server.

In-Memory Optimization is a specialized, memory-optimized relational data management engine and native stored procedure compiler integrated into SQL Server. It is designed to handle the most demanding OLTP workloads and offers several benefits:

  • Low latency, high throughput, and faster response time
  • High efficiency and performance
  • No locking or lock escalation to manage, thanks to an optimistic concurrency model
  • Elimination of contention and efficient data retrieval
  • Minimization of I/O logging and code execution time

If your existing SQL Server applications require performance and scalability gains, or if you are experiencing database bottlenecks, In-Memory OLTP is worth considering. It is especially useful for workloads such as high-throughput data ingestion, heavy data loads, and real-time analytical queries.

When implementing In-Memory Optimization, there are a few key concepts to understand:

  • In-Memory optimized tables and indexes
  • Non-durable (SCHEMA_ONLY) tables as an alternative to traditional temp tables
  • Natively compiled stored procedures and UDFs
  • Memory-optimized table types for table variables (see the sketch after this list)
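
To give a flavor of the last item, here is a minimal sketch of a memory-optimized table type used for a table variable; the type name and columns are illustrative rather than from the original article, and the database must already have a MEMORY_OPTIMIZED_DATA filegroup (covered below):

-- Hypothetical memory-optimized table type; a memory-optimized filegroup must already exist in the database.
CREATE TYPE dbo.OrderIdList AS TABLE (
  OrderId INT NOT NULL PRIMARY KEY NONCLUSTERED
) WITH (MEMORY_OPTIMIZED = ON);
GO

-- A table variable of this type lives in memory rather than in tempdb.
DECLARE @Orders dbo.OrderIdList;
INSERT INTO @Orders (OrderId) VALUES (1), (2), (3);
SELECT COUNT(*) AS OrderCount FROM @Orders;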

One important consideration when using In-Memory Optimization is memory sizing. The data structures that make up memory-optimized tables live entirely in memory; even for durable tables, disk storage is used only for recovery, so the active data must always fit in available memory. If sufficient memory is not available to store the memory-optimized rows, inserts and updates will fail. It is essential to estimate the memory required and to account for workloads that produce multiple versions of rows, which require additional memory allocations.

To get started with In-Memory Optimization, create a filegroup with the CONTAINS MEMORY_OPTIMIZED_DATA option and add a container (a logical file) to that filegroup. Then create a table with the MEMORY_OPTIMIZED option, specifying the durability level (SCHEMA_ONLY or SCHEMA_AND_DATA).
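
As a rough sketch (the database name DemoDB, the filegroup name, and the file path are placeholders, not from the article), the filegroup setup looks like this:

-- Add a memory-optimized filegroup and a container to hold its checkpoint files.
-- DemoDB, InMemFG, and the path below are placeholder names; adjust them for your environment.
ALTER DATABASE DemoDB
  ADD FILEGROUP InMemFG CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE DemoDB
  ADD FILE (NAME = N'InMemFG_Container', FILENAME = N'C:\SQLData\InMemFG_Container')
  TO FILEGROUP InMemFG;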

Here is an example of creating an in-memory optimized table with durability set to SCHEMA_ONLY:

CREATE TABLE InMemDemo (
  Id INT NOT NULL,
  data VARCHAR(25),
  CONSTRAINT pk_id PRIMARY KEY NONCLUSTERED (id)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

Once the table is created, you can insert data into it and validate the number of records. Keep in mind that with SCHEMA_ONLY durability the data is transient and bound to memory: if the SQL Server instance restarts, the data is flushed out of memory and only the table definition survives.
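
For example, a quick sanity check (the values are placeholders):

-- Insert a sample row and confirm it is visible.
INSERT INTO InMemDemo (Id, data) VALUES (1, 'SQLShackDemo');
SELECT COUNT(*) AS TotalRows FROM InMemDemo;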

If you need data persistence along with high performance, create the table with durability set to SCHEMA_AND_DATA (dropping the earlier SCHEMA_ONLY version first, since the name is reused):

DROP TABLE IF EXISTS InMemDemo;

CREATE TABLE InMemDemo (
  Id INT NOT NULL,
  data VARCHAR(25),
  CONSTRAINT pk_id PRIMARY KEY NONCLUSTERED (Id)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

To demonstrate the performance difference between traditional disk-based tables and in-memory optimized tables, consider the following example.
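
The script compares inserts into InMemDemo with inserts into a traditional disk-based table named Demo. The article does not show Demo's definition; a minimal sketch, assuming the same columns as InMemDemo, would be:

-- Hypothetical disk-based counterpart with the same columns as InMemDemo.
CREATE TABLE Demo (
  Id INT NOT NULL,
  data VARCHAR(25),
  CONSTRAINT pk_demo_id PRIMARY KEY (Id)
);

With both tables in place, the timing script is: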

SET NOCOUNT ON;

-- Time row-by-row inserts into the memory-optimized table.
DECLARE @start DATETIME = GETDATE();
DECLARE @id INT = 1;

WHILE @id < 10000
BEGIN
  INSERT INTO InMemDemo (Id, data)
  VALUES (@id, 'SQLShackDemo');

  SET @id = @id + 1;
END;

SELECT DATEDIFF(ms, @start, GETDATE()) AS [MemInsert (ms)];
GO

-- Time the same row-by-row inserts into the disk-based table.
DECLARE @start DATETIME = GETDATE();
DECLARE @id INT = 1;

WHILE @id < 10000
BEGIN
  INSERT INTO Demo (Id, data)
  VALUES (@id, 'SQLShackDemo');

  SET @id = @id + 1;
END;

SELECT DATEDIFF(ms, @start, GETDATE()) AS [Insert (ms)];
GO

-- Clean up the demo tables.
DROP TABLE InMemDemo;
DROP TABLE Demo;

In this test, the inserts into the in-memory optimized table completed roughly 37 times faster than those into the traditional disk-based table.

When working with in-memory optimized tables, memory sizing is critical. Durable tables are persisted to disk through checkpoint data and delta files, but the tables themselves must fit entirely in memory, so ensure sufficient memory is available for the tables, their indexes, and the row versions your workload generates. You can use the sys.dm_db_xtp_table_memory_stats view to monitor memory usage.
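
For instance, the following query (the column selection is my own, but the DMV and its columns are standard) reports per-table memory consumption in the current database:

-- Report memory allocated and used by each memory-optimized table in the current database.
SELECT OBJECT_NAME(t.object_id) AS table_name,
       t.memory_allocated_for_table_kb,
       t.memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats AS t
ORDER BY t.memory_used_by_table_kb DESC;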

In conclusion, In-Memory Optimization in SQL Server is a powerful feature that can significantly improve application performance. By understanding its concepts and considerations, you can design and implement high-performance systems that meet the demands of your business. Stay tuned for more articles on this topic!

Let me know what you think…
