A few years ago when SAP first announced its SAP BI Accelerator, I wrote a brief explanation of the in-memory approach to storing and accessing data. Given the expected renewed emphasis on in-memory databases at next week's Sapphire, I thought it would be worthwhile to repeat that explanation here.
In a conventional BI system, queries work against a large relational database stored on a disk drive. Sorting through that database and accessing the data on the disk drive create a significant delay in getting results. Query developers rely on precalculation and aggregate building to compensate, but the benefit of these techniques diminishes as the data set grows.
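To make the precalculation idea concrete, here is a minimal sketch in Python. The table and column names (`region`, `sales`) are invented for illustration; the point is the trade-off between scanning raw rows on every query and answering from an aggregate built once in advance.

```python
# Illustrative sketch of aggregate precalculation.
# The rows below stand in for a large fact table.
from collections import defaultdict

rows = [
    {"region": "East", "sales": 120},
    {"region": "West", "sales": 200},
    {"region": "East", "sales": 80},
    {"region": "West", "sales": 150},
]

# Without precalculation: every query scans all rows.
def total_sales_scan(region):
    return sum(r["sales"] for r in rows if r["region"] == region)

# With precalculation: build the aggregate once up front,
# then answer each query with a single lookup.
aggregate = defaultdict(int)
for r in rows:
    aggregate[r["region"]] += r["sales"]

def total_sales_precalc(region):
    return aggregate[region]

print(total_sales_scan("East"))     # 200
print(total_sales_precalc("East"))  # 200
```

The lookup stays fast, but the aggregate must be chosen and maintained ahead of time, which is exactly the overhead that becomes harder to manage as data volumes grow.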
Placing all the data in memory makes locating it virtually immediate and eliminates the latency of retrieving it from a disk drive. Companies using in-memory databases report that performance is so fast that users often don't trust the results. They think something must have gone wrong with the query.
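The same idea can be demonstrated with SQLite, which can hold an entire database in RAM via its `:memory:` connection string. The table and figures below are invented for the example; this is only a sketch of the principle, not of SAP's implementation.

```python
# Rough illustration of an in-memory database using SQLite.
import sqlite3

# The whole database lives in RAM; nothing is written to disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("East", 120), ("West", 200), ("East", 80)],
)

# Every read is served from memory, so there is no disk latency.
total = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE region = 'East'"
).fetchone()[0]
print(total)  # 200
conn.close()
```

At this toy scale the difference is invisible, but against billions of rows the absence of disk seeks is what produces the response times users find hard to believe.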
Performance itself is not the biggest benefit of in-memory databases. The real significance is that the technology breaks down barriers to BI usage. Companies can make BI available to more people in the organization without worrying about the effect on system performance. And users will more readily accept BI if they know they will get fast results.