
New Infographic: Decades of Database Innovation (and My First Database)

You always remember your first…database. You didn’t quite know what you were doing and it was a bit awkward, but you figured it out and eventually ran your first query. Whether you built it, maintained it, used it or cursed it, I’m guessing that you have at least one memorable database in your past. Can you plot it on our new infographic below and share it?

We worked with industry analyst Robin Bloor to highlight decades of data technology milestones. This timeline visually shows why it’s a challenge for enterprises to choose which database technologies to adopt and when. With nine- to twelve-month implementation cycles, a wrong bet can be a costly mistake, wasting time you won’t get back.

That’s all changing with Big Data as a Service and the cloud. For example, Cazena uses multiple database engines to power our Data Lake as a Service and Data Mart as a Service solutions. Our secure cloud service has built-in data movers, making it easy to shift analytic workloads between different engines. We regularly benchmark and add new engines to the platform, ensuring our customers always get the benefits of the latest database technologies — MPP SQL, Hadoop, Spark and whatever becomes The Next Big Thing.

While the journalist in me cringes at the buzzword I’m about to drop, it’s apt here: Big Data as a Service can “future-proof” enterprises. It’s a new way of looking at data processing, and it’s a major evolution from early database technologies, as you can see on the graphic.

My first was a Microsoft Access database, which I built when my customer spreadsheet at a 1990s startup became unwieldy. It evolved as the company grew – and as I learned more. I got advice from my dad, who worked on databases for a large insurance company. Along the way, he explained mainframes, the new columnar databases the insurer was testing and why he had to build “indexes” so that queries ran faster. It was fascinating (a testament to my dad’s teaching skills), challenging and clearly important. Eventually, my Access database had to be replaced, and I learned about a whole new set of databases. Not long after that startup, I became a tech journalist covering data and analytics, and I’ve worked in this industry ever since. Cue violins.

That was #myfirstdatabase – what was yours?



Learn more about Big Data as a Service >>
