By: Molly Folan
Predictive, in-memory, and Big Data technologies are all the rage right now, and their flashy marketing campaigns often distract us from some of the more fundamental – albeit not as “cool” – issues that must be addressed before it makes sense to invest in one of these emerging technologies.
Take, for instance, Big Data technologies like HANA. With their promise of taming exploding data volumes at the speed of light, they’re hard to resist.
Or take your pick of the BI output tools on the market today, each able to quickly and easily transform rows and rows of data into a visually appealing, interactive dashboard – or a perfectly formatted report.
But these buzzworthy technologies that work wonders with your data all assume the data is 100% accurate. At the end of the day, they’re only as good as the data you’re feeding them.
And why shouldn’t these technologies rely on good data? You’ve been governing and managing your data perfectly over the years…haven’t you? And I’m sure there isn’t any duplicate data or anything crazy like that, right?
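If you’re not sure, even a few lines of code can answer the duplicate question before you spend a dime on new tooling. Here’s a minimal sketch of one such check – the customer records and field names are purely hypothetical, for illustration only:

```python
def normalize(record):
    """Collapse trivial formatting differences (case, stray whitespace)
    so they can't hide duplicate records."""
    return (
        record["name"].strip().lower(),
        record["email"].strip().lower(),
    )

def find_duplicates(records):
    """Group records by normalized key and return only the groups
    containing more than one record."""
    groups = {}
    for rec in records:
        groups.setdefault(normalize(rec), []).append(rec)
    return {key: recs for key, recs in groups.items() if len(recs) > 1}

# Hypothetical sample data: two of these rows are the same customer.
customers = [
    {"name": "Jane Doe",   "email": "jane.doe@example.com"},
    {"name": "JANE DOE ",  "email": "Jane.Doe@example.com"},
    {"name": "John Smith", "email": "john.smith@example.com"},
]

dupes = find_duplicates(customers)
print(len(dupes))  # one duplicate group: the two "Jane Doe" rows
```

If a throwaway script like this turns up duplicates in your master data, no amount of in-memory horsepower will make the reports built on it trustworthy.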
If your data is crap, then what’s the sense in speeding it up? Or making it look pretty in a perfectly formatted report or dashboard?
So, before you spend millions chasing the latest and greatest new technology offering, stop and think “Am I just putting lipstick on a pig?” (And if you haven’t figured it out by now, data is the “pig” here and an expensive new technology offering would be the “lipstick.”)
If you answered “Yes,” then it’s time to take a few steps back and reevaluate your data strategy. Establish and maintain enterprise-wide data management, governance, and integrity. This way, you won’t be speeding up bad data. Or making crappy data look pretty.
…or putting lipstick on a pig.