In-Memory Technology Speeds Up Data Analytics

June 26, 2013

"We will end up with a record in there for every user who goes to a piece of content that can be served an ad through our system," Lindquist says, adding that the user data will amount to hundreds of millions of records.

That data store will grow even larger, since AdJuggler customers will be permitted to load their own proprietary audience data into the Terracotta data management system. As for throughput, the new platform will be able to scale to at least 1 million transactions per second, Lindquist notes.

The in-memory shift expands the possibilities for a database involved in real-time decision making, Lindquist says. Previously, getting a database to perform at the now-required level would call for a significant amount of tuning: configuring memory and carving out a data cache in RAM to improve performance.

A cache hit is quicker than going back to disk for data, but a cache typically represents a small portion of the data stored in a database. Lindquist notes that MySQL performance depends on having the right piece of data in memory at the right time.
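To illustrate the pattern Lindquist describes, here is a minimal cache-aside sketch in Java. The class and the loadFromDatabase method are hypothetical stand-ins, not AdJuggler's code: a hit is served from RAM, while a miss falls back to the much slower disk-backed database and populates the cache for the next lookup.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal cache-aside sketch (hypothetical; not AdJuggler's code).
// A hit is served from RAM; a miss falls back to the disk-backed
// database, which is orders of magnitude slower.
public class UserProfileCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    public String get(String userId) {
        String profile = cache.get(userId);
        if (profile == null) {
            // Cache miss: go back to disk, then keep the row in
            // memory so the next lookup is fast.
            profile = loadFromDatabase(userId);
            cache.put(userId, profile);
        }
        return profile;
    }

    // Stand-in for a disk-backed lookup (e.g., a MySQL query).
    private String loadFromDatabase(String userId) {
        return "profile-for-" + userId;
    }
}
```

The catch, as the article notes, is that such a cache holds only a slice of the data, so performance hinges on the right slice being resident at the right time.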

Why not put all the crucial data there? "We decided it's all got to be in memory," Lindquist explains, "so you don't have to worry about the tremendous amounts of database tuning you typically would have to do." AdJuggler will run a Terracotta cluster, using the distributed version of the company's BigMemory data management software.
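Terracotta's BigMemory is typically accessed through the Ehcache API. The sketch below assumes an ehcache.xml on the classpath that declares a cache named userProfiles with off-heap (BigMemory) storage and Terracotta clustering enabled; the cache name and configuration are illustrative assumptions, not details from the article.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class BigMemoryExample {
    public static void main(String[] args) {
        // Reads ehcache.xml from the classpath; assumed to declare a
        // "userProfiles" cache with off-heap (BigMemory) storage and,
        // for the distributed version, Terracotta clustering.
        CacheManager manager = CacheManager.newInstance();
        Cache users = manager.getCache("userProfiles");

        // Reads and writes stay in memory: no disk round trip, and
        // no cache-vs-database tuning to worry about.
        users.put(new Element("user:42", "segment=sports,geo=US"));

        Element hit = users.get("user:42");
        if (hit != null) {
            System.out.println(hit.getObjectValue());
        }

        manager.shutdown();
    }
}
```

With the whole data set in memory, the hit-or-miss question from the earlier sketch disappears, which is the tuning burden Lindquist says the move to Terracotta eliminates.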
