What are your options for virtualization for SAP HANA? How does VMware support an SAP HANA deployment? Why take a virtualized deployment approach? Long-time SAPinsider conference speaker and SAP expert, Deloitte's Kurt Hollis, outlines the various deployment models, what announcements about virtualized HANA environments mean, and his insights from recent testing on disaster recovery and risk management for in-memory computing.
Listen to our interview with Kurt Hollis from the SAP TechEd exhibit hall in Las Vegas, or read our transcript of our conversation here.
Kristine Erickson, SAPinsider: Hello and welcome, I’m Kristine Erickson, and I’m here at SAP TechEd with Kurt Hollis. Kurt is a manager at Deloitte Consulting and a long-time speaker at SAPinsider conferences, including our most recent GRC 2014 event in the US.
Most recently, he’s been focusing on SAP HANA implementation, administration, and security. So Kurt, welcome and thanks for joining me here at TechEd && d-code Las Vegas!
Kurt Hollis, Deloitte: Thank you very much, glad to be here.
SAPinsider: It’s great to have you. To start, I wanted to get your impressions of TechEd so far: what you’ve seen in terms of the announcements, and your thoughts on those.
Kurt Hollis: Yeah, sure. So we’re here in Las Vegas this week, and there were some announcements from SAP regarding SAP HANA. Two things caught my attention. SAP HANA’s been out there for a few years now, and it’s starting to get some traction.
Customers are putting HANA in, and they’re seeing lots of demonstrations from actual customers who have adopted HANA. And now it has been announced that there is a simpler choice - a new, clear path to adopt SAP HANA. A hesitation for some customers in the past was the complexity of it, and in some cases the cost. Now there’s a simpler choice and a clearer path, which also allows you to adopt it and run it in the SAP cloud as well.
The other announcements were about new capabilities: the new API management technology that was announced, and the new capabilities in Support Pack 9 of HANA.
About twice a year there’s a new support pack release; we’ve been on SP8 here, and that gave us some significant capabilities. But SP9 includes key innovations such as multi-tenancy, dynamic tiering, smart data integration and smart data streaming, plus ACID-compliant graph storage and UDFs (user-defined functions), and things like that. So that’s very exciting.
SAPinsider: And you’re here at TechEd to give two sessions. Let’s start with your session on implementing HANA in a virtual environment. Do you want to first talk about the concept of virtualization, as well as the concept of deploying HANA in a virtual environment?
Kurt Hollis: Sure. I’m doing two presentations here at SAP TechEd && d-code conference this year. One of them is on overall virtualization based on a project that I had done for a large energy client in Washington State.
Let’s just discuss what virtualization is, and why it’s being looked at by some of our customers and clients with HANA. In the deployment scenarios that exist, you can run multiple components on one system (MCOS), or you can run multiple components on one database (MCOD). But virtualization provides another option that lets you maximize resources running together on one server by using the traditional VMware approach.
One of the main reasons for taking the VMware approach is that your company has already adopted VMware and has a philosophy of running all of the servers in the on-premise data center on it. So virtualization is much like running your own cloud, where you’re maximizing the resources and have a central control point, with vSphere and vCenter, for all your deployed products and resources. It really lowers the total cost of ownership.
With HANA, though, it’s important to understand the restrictions. It’s not quite the same as before, where you could maximize use of memory, CPU, and other resources by sharing them based on the utilization load of the various servers. In the case of HANA, it’s important not to over-commit: the CPU and memory are reserved for those systems. So you’re losing a little bit of VMware’s capability in that aspect, but having it adopted and managed by central resources, and being able to deploy virtual systems from templates more rapidly, is definitely a big advantage.
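As a rough illustration of what "not over-committing" means in practice, a HANA VM's reservations might be expressed in its .vmx file along these lines. The parameter names follow common vSphere conventions, but treat the whole fragment as a hedged sketch, and the sizes as placeholders rather than a sizing recommendation:

```ini
# Hypothetical .vmx fragment for a HANA VM (illustrative values only)
memSize = "524288"        # 512 GB of RAM configured for the VM
sched.mem.min = "524288"  # reserve all of it: no ballooning or swapping
sched.mem.pin = "TRUE"    # pin the full memory reservation
numvcpus = "32"           # vCPUs sized to dedicated physical cores
sched.cpu.min = "64000"   # CPU reservation in MHz (sized to the host)
```

Reserving memory and CPU one-to-one is what gives up some of VMware's usual consolidation benefit, as Kurt notes, while keeping the management and templating advantages.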
The other big advantage is that on your virtualized resources, each VMware-based HANA system runs in its own operating system with its own database, independent of any other virtualized system on that hardware. So it’s almost like a container for your system.
In the MCOS or MCOD deployment scenarios, the systems share the same operating system, and if anything happens to that operating system, it impacts all of those systems. But in a virtualized environment, it only impacts the one HANA system. That’s also a big advantage.
SAPinsider: The second session that you’re doing is on disaster recovery, again for a HANA environment. So I wanted to ask about the big takeaways from that session.
Kurt Hollis: With appliances delivered by hardware vendors, you take the route of a non-virtualized environment: each of those hardware vendors provides HANA on an appliance, on a pre-set server that’s certified by SAP and that hardware vendor. They also provide disaster recovery and high availability techniques, such as system replication or storage replication, for the disaster recovery site. Then you can fail over to another complete site in case of a disaster at your primary site, such as a flood, hurricane, fire, or water damage.
With virtualized environments running SAP HANA, there was a lack of these types of solutions. So Deloitte partnered with some hardware vendors to come up with a solution, certify it, and test it in a proof of concept. This other presentation I’m doing describes how disaster recovery can be implemented for virtualized HANA environments, using a product that works at the storage level.
What this allows you to do is perform data replication over a long distance, and do it over a VPN tunnel at low cost - you don’t necessarily need a leased line. It uses the RecoverPoint storage product from EMC. We’ve tested that with HANA and looked at some of the results.
SAPinsider: Sure, let’s take a look at those results. And can you talk specifically about what’s different in disaster recovery for HANA as well, as you’re looking at these results?
Kurt Hollis: A key difference of SAP HANA over traditional databases is that the entire database is running in-memory. Because it is in-memory, when you have a disaster, or require high-availability or disaster recovery, you want to minimize the loss of in-flight data as much as possible.
So the key areas to be concerned with are: the frequency of writing the log data from memory to the persistent store on disk; and the frequency of commits during long-running writes in memory, to get that information committed to disk.
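For context, the savepoint frequency Kurt refers to is a configurable HANA persistence setting; to the best of my knowledge it lives in global.ini, along the lines of the sketch below (verify the parameter name and default against your HANA revision):

```ini
# Hypothetical excerpt of SAP HANA's global.ini.
# savepoint_interval_s controls how often in-memory changes are
# flushed to the persistent store (commonly defaulting to 300 s);
# a shorter interval shrinks the window of unflushed in-flight data.
[persistence]
savepoint_interval_s = 300
```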
We did some tests in a project that we ran, and we were able to come up with some results. How long did it take to log into HANA at the DR site and restart a report after the disaster recovery failover? About 15 minutes. How much of the in-flight data was lost because of the disaster? Less than 5%. So that’s not 5% of the entire system, just 5% of the in-flight data - that’s important to keep in mind. Those are very nice statistics, and it’s nice to be able to present the solution and the proof-of-concept tests for how this is achieved in a virtualized environment.
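To make the "percentage of in-flight data" idea concrete, here is a minimal back-of-the-envelope sketch. All rates and lags below are made-up illustrative numbers, not figures from the Deloitte proof of concept:

```python
# Rough RPO arithmetic for asynchronous storage replication.
# Every number here is an illustrative assumption, not a measured value.

write_rate_mb_s = 20.0       # average change rate hitting the persistent store
replication_lag_s = 5.0      # how far the DR copy trails the primary site
in_flight_window_s = 2000.0  # span of "in-flight" work considered in a test

data_at_risk_mb = write_rate_mb_s * replication_lag_s      # lost on failover
in_flight_total_mb = write_rate_mb_s * in_flight_window_s  # all in-flight data

loss_pct = 100.0 * data_at_risk_mb / in_flight_total_mb
print(f"estimated in-flight loss: {loss_pct:.2f}%")  # 0.25% with these inputs
```

The point of the distinction Kurt makes is visible here: the loss is a fraction of the replication lag's worth of writes, not of the database as a whole.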
A couple of key points on why this is important: production support for virtualized HANA environments was just announced in April 2014 at SAPPHIRE, so you can now run this in production. With VMware, you need to support high availability and disaster recovery. Now, VMware itself can provide HA capability, but for disaster recovery between two data centers, a solution like this was required.
A significant improvement in this proof of concept was the ability to transfer changes that occurred in the persistent store, at the block level on disk, across a wide-area network connection that only needed a VPN tunnel between the two sites, instead of an expensive leased line. That was a key factor.
Now we’re testing greater amounts of data, so we can see how much we can push through the VPN tunnel before we saturate it and it becomes a bottleneck. That’s where we are with testing right now. But the testing looks good for the VPN tunnel synchronizing between the two sites at the block level of the storage device using a RecoverPoint product.
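The bottleneck question Kurt describes reduces to comparing the block-level change rate against the tunnel's usable throughput. A sketch of that check, with purely illustrative numbers (neither figure comes from the testing described above):

```python
# Back-of-the-envelope check: can the VPN tunnel keep up with the
# block-level change rate? Illustrative numbers only.

vpn_bandwidth_mbit_s = 200.0  # usable tunnel throughput after VPN overhead
change_rate_mb_s = 15.0       # block-level changes produced by persistence

required_mbit_s = change_rate_mb_s * 8  # convert MB/s to Mbit/s

headroom = vpn_bandwidth_mbit_s - required_mbit_s
if headroom > 0:
    print(f"OK: {headroom:.0f} Mbit/s headroom; replication keeps up")
else:
    print(f"Bottleneck: short by {-headroom:.0f} Mbit/s; lag and RPO grow")
```

Once sustained change rate exceeds the tunnel's capacity, replication lag grows without bound, which is exactly why the larger-data tests matter.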
SAPinsider: Well, thank you very much, Kurt, for the overview and the look at virtualization for HANA.