SQLintersection session – In-Memory OLTP Overview

I attended the SQLintersection conference in November 2014 in Las Vegas, Nevada. I am including my notes from the sessions in this blog, primarily for my own reference later. For anyone who comes across these notes, please take them with a grain of salt – I may very easily have misheard something or written it down incorrectly.

Session Title:  In-Memory OLTP Overview
Instructor: Kevin Farlee

  • Hekaton = In-Memory OLTP
  • Columnstore = In-Memory DW
  • CPU clock rates stalled out about 2005, but more power now coming in the form of multiple cores
  • Memory costs continue to plummet
  • Saying “It’s fast because it’s in memory” doesn’t really cover the Hekaton story
    • You can add enough memory to load disk-based tables entirely into the buffer pool
    • Hekaton has many more speed optimizations
  • Focus on data hotspots, and load that data into Hekaton. Doesn’t have to be the entire dataset or even a significant portion of the dataset
  • Compiled stored procedures don’t have to context-switch (but are still subject to the scheduler quantum)
  • Cannot currently replicate from a Hekaton table to another system, but can replicate from a disk-based table to a Hekaton table
  • Based on optimistic concurrency – assumes that no other transaction will touch the same rows simultaneously
  • Validation errors cause rollback; treat them like deadlocks in code (i.e., retry) – see the retry sketch after these notes
    • SQL does retry automatically in some very limited conditions
  • Works with AlwaysOn Availability Groups, backups, etc.
  • Not recommended for work queues
  • Can enable delayed durability for transactions on Hekaton tables (example after these notes)
  • If you run out of memory, you can still delete rows, but you cannot update or insert
  • Indexes are not persisted on disk; they are always built in memory
  • SCHEMA_ONLY tables do not even write to the transaction log (other than the table creation)
  • Case example of online gambling company that accessed web session state from SQL
    • Using Hekaton, went from 15,000 to 250,000 txn/sec
  • Uses more efficient logging to improve write throughput
  • Need to set the bucket count an order of magnitude (or two) larger than the expected row count – see the table DDL sketch after these notes
  • Tables can be schema and data or schema only
  • Can create natively compiled SPs – see the sketch after these notes
    • SQL Server calls out to the C compiler to create them
    • The entire SP body is one atomic block
  • Some transactions cannot run in read committed
    • Must use snapshot, repeatable read, or serializable – see the snapshot-hint sketch after these notes
  • Cannot comment on whether this will work its way into Azure, but wouldn’t be surprised
  • Don’t necessarily go big bang – convert tables a few at a time until goals are achieved
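
To make the bucket-count and durability notes concrete, here is a minimal sketch of the table DDL. The table and column names (dbo.SessionState, etc.) and the bucket count are hypothetical, loosely modeled on the session-state example above; the syntax itself is standard SQL Server 2014.

```sql
-- Hypothetical memory-optimized session-state table.
-- BUCKET_COUNT is sized well above the expected row count, per the session guidance.
CREATE TABLE dbo.SessionState
(
    SessionId   UNIQUEIDENTIFIER NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000000),
    UserData    VARBINARY(8000)  NOT NULL,
    LastTouched DATETIME2        NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
-- DURABILITY = SCHEMA_ONLY would skip transaction logging entirely
-- (beyond the table creation itself), losing the data on a restart.
```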
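
And a minimal sketch of a natively compiled procedure against that table. In SQL Server 2014, native procs require SCHEMABINDING and an EXECUTE AS clause, and the body is a single atomic block; the procedure name and logic here are hypothetical.

```sql
CREATE PROCEDURE dbo.TouchSession
    @SessionId UNIQUEIDENTIFIER
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    -- The entire body runs as one atomic block: it commits or rolls back as a unit.
    UPDATE dbo.SessionState
    SET LastTouched = SYSUTCDATETIME()
    WHERE SessionId = @SessionId;
END;
```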
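
The “treat validation errors like deadlocks” point translates to retry logic around the call. A sketch, assuming the hypothetical procedure above; the error numbers (41302 update conflict, 41305/41325 validation failures, 41301 dependency failure) are the retryable errors documented for memory-optimized tables.

```sql
DECLARE @Id UNIQUEIDENTIFIER = NEWID();  -- hypothetical session key
DECLARE @retries INT = 0;

WHILE @retries < 5
BEGIN
    BEGIN TRY
        EXEC dbo.TouchSession @SessionId = @Id;
        BREAK;  -- success, exit the loop
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() IN (41301, 41302, 41305, 41325)
            SET @retries += 1;  -- conflict: already rolled back, try again
        ELSE
            THROW;              -- anything else is a real error
    END CATCH
END;
```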
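
Delayed durability, mentioned above, is actually allowed at the database level and then requested per commit; the database name here is hypothetical.

```sql
ALTER DATABASE SessionDB SET DELAYED_DURABILITY = ALLOWED;
GO

-- Opt in on a specific commit: the transaction returns before the log flush,
-- trading a small window of potential data loss for lower commit latency.
DECLARE @Id UNIQUEIDENTIFIER = NEWID();  -- hypothetical session key
BEGIN TRANSACTION;
    -- SNAPSHOT hint needed inside an explicit transaction (see next sketch)
    UPDATE dbo.SessionState WITH (SNAPSHOT)
    SET LastTouched = SYSUTCDATETIME()
    WHERE SessionId = @Id;
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```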
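
Finally, the isolation-level restriction in practice: in interop T-SQL, an explicit transaction cannot touch a memory-optimized table under plain read committed, so a table hint (or the MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT database option) supplies a supported level. A sketch:

```sql
DECLARE @Id UNIQUEIDENTIFIER = NEWID();  -- hypothetical session key

BEGIN TRANSACTION;
    -- Plain READ COMMITTED would fail here; the SNAPSHOT hint runs the
    -- access under a supported isolation level.
    SELECT UserData
    FROM dbo.SessionState WITH (SNAPSHOT)
    WHERE SessionId = @Id;
COMMIT TRANSACTION;
```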
