


Key Considerations in Architecting SAP HANA for the Cloud

by Jake Echanove, HANA Distinguished Engineer; Director, Solutions Engineering

September 29, 2015

Moving any mission-critical workload to the cloud can be daunting, and deploying SAP HANA in the cloud requires a particular understanding of the available options. Misconceptions persist about what it takes to implement and run SAP HANA in a multi-tenant environment while remaining compliant with business continuity requirements. Before selecting the best fit for those requirements, businesses should understand the SAP HANA deployment alternatives and the architectural design behind each.

One major barrier to entry for SAP HANA has been the hardware investment required to support the platform. Its memory-heavy appliances are expensive, and architects must design with that cost in mind. There is always a balance between cost and the resiliency of the high availability (HA) and disaster recovery (DR) design. This is not unique to SAP HANA, but paying per appliance means that many alternatives should be weighed before settling on a final design.

One SAP HANA deployment alternative is virtualized SAP HANA (vHANA), which can provide significant cost benefits. First, customers can eliminate the upfront capital expense of an SAP HANA appliance. Second, they can take advantage of the vSphere tools VMware offers for HA and DR, eliminating the need for two dedicated SAP HANA systems: one for HA and one for DR. Bear in mind that the cloud provider must have the correct architecture in place, along with spare vHANA capacity to absorb any failover that may occur. Finally, depending on the cloud service provider, there may be an option for consumption-based billing, under which customers are billed for the resources they use rather than the resources allocated to them. Consumption-based billing can be very beneficial, since an SAP HANA system can have large amounts of memory and CPU sitting idle.
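The savings from consumption-based billing follow directly from the gap between allocated and used resources. The sketch below illustrates the arithmetic; the rate, instance size, and utilization figure are all hypothetical, chosen only to make the comparison concrete, and do not reflect any particular provider's pricing.

```python
# Illustrative comparison of allocation-based vs. consumption-based billing
# for a virtualized SAP HANA (vHANA) instance. All figures are hypothetical.

ALLOCATED_RAM_GB = 1024   # memory reserved for the vHANA instance
RATE_PER_GB_HOUR = 0.05   # hypothetical price per GB-hour
HOURS_PER_MONTH = 730

# Average fraction of allocated memory actually in use; a real meter
# would sample utilization continuously rather than use one average.
avg_utilization = 0.40

# Allocation-based: pay for everything reserved, used or not.
allocation_bill = ALLOCATED_RAM_GB * RATE_PER_GB_HOUR * HOURS_PER_MONTH

# Consumption-based: pay only for the resources actually consumed.
consumption_bill = (ALLOCATED_RAM_GB * avg_utilization
                    * RATE_PER_GB_HOUR * HOURS_PER_MONTH)

print(f"Billed on allocation:  ${allocation_bill:,.2f}/month")
print(f"Billed on consumption: ${consumption_bill:,.2f}/month")
```

With an instance averaging 40% utilization, the consumption-based charge here is 40% of the allocation-based one; the idle memory and CPU that would otherwise be billed at full price are exactly where the savings come from.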

Even with expanding virtualization support, many SAP HANA implementations will still require physical appliances. This applies to both scale-out and scale-up scenarios. For scale-out HA, the option is to add one or more standby nodes that can take over for any active node that fails. For scale-up HA, there are two options. The first is similar to the scale-out approach: using IBM General Parallel File System (GPFS) or a storage connector, a standby node takes over the persistence layer when the primary node fails and loads the data into memory. The second is SAP HANA system replication, which uses a combination of snapshots and logs to replicate data to a target system. The benefit of system replication is a quicker recovery time objective (RTO), because data can be pre-loaded into memory on the target. Both SAP HANA system replication and storage-level replication are also relevant for DR, since they send data to a target system in a secondary data center.
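The RTO difference between the two scale-up HA options comes down to whether the standby must load data into memory after failover. A rough back-of-the-envelope model is sketched below; the detection, takeover, data-size, and load-throughput numbers are hypothetical assumptions, not measurements from any real system.

```python
# Rough, illustrative RTO comparison for the two scale-up HA options:
# storage/GPFS takeover (standby must load data into memory after
# failover) vs. SAP HANA system replication with data pre-loaded on
# the secondary. All parameter values are hypothetical.

def rto_storage_takeover(data_gb, load_gb_per_min,
                         detect_min=2, takeover_min=3):
    """Standby mounts the persistence layer, then loads data into memory."""
    return detect_min + takeover_min + data_gb / load_gb_per_min

def rto_replication_preloaded(detect_min=2, takeover_min=1):
    """Data is already resident in memory on the secondary; no load phase."""
    return detect_min + takeover_min

data_gb = 2048    # in-memory data set size (hypothetical)
load_rate = 100   # column-store load throughput in GB/min (hypothetical)

print(f"Storage takeover RTO:   ~{rto_storage_takeover(data_gb, load_rate):.0f} min")
print(f"System replication RTO: ~{rto_replication_preloaded():.0f} min")
```

Under these assumed numbers, the storage-takeover path is dominated by the time to reload a 2 TB data set into memory, while the pre-loaded replication target recovers in minutes — which is precisely why system replication yields the quicker RTO.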

It is critical for SAP HANA architects to understand the various options and what impact each will have on cost, RTO, and recovery point objective (RPO). Working with a cloud service provider that has extensive experience in architecting SAP HANA in the cloud is crucial not only in ensuring a successful SAP HANA deployment, but also in ensuring that business continuity and DR requirements are met.
