SAP HANA is an in-memory database engine and application platform that offers unique advantages for reporting and high-performance operations. Customers have many questions to answer on their path to running on SAP HANA. How can you architect your SAP HANA solution to fit your needs? Once you are up and running, how can administrators monitor the system to ensure it is running correctly? How can you integrate your SAP HANA solution with new features?
HANA 2017 speaker Kurt Hollis recently answered readers' questions on SAP HANA architecture, performance, security, and more. If your organization has recently implemented or is planning to implement SAP HANA, get answers on topics such as:
- Scale up/Scale Out
Meet the panelist:
Kurt Hollis, Deloitte
Kurt Hollis is a manager at Deloitte Consulting, LLP in the SAP Solutions Network located in Philadelphia, Pennsylvania. Kurt has 16 years of experience with SAP products along with an SAP NetWeaver 7.0 Associate Certification and has performed expert consulting for SAP clients for over 12 of those years, including 4 years as an employee of Deloitte, 10 years as an employee of SAP Americas, and 2 years in a Big Pharma organization. Kurt’s primary specialty is SAP HANA systems, performing administration, operations, and monitoring of several HANA systems. This includes SAP HANA environments based on HP, IBM, Dell, and Hitachi appliance platforms.
SAPinsiderMatt: Welcome to today’s live Q&A. I am pleased to be joined by HANA 2017 speaker Kurt Hollis of Deloitte Consulting, LLP as he shares his insights on implementing and administering SAP HANA. Kurt’s specialty is with SAP HANA Systems, performing administration, operations, and monitoring of several HANA systems. This includes SAP HANA environments based on HP, IBM, Dell, and Hitachi appliance platforms.
Kurt Hollis: For the purposes of this transcript, I grouped the questions and answers by these categories:
- Architecture Questions
- Performance Questions
- MDC Questions
- Scale up/Scale Out Questions
- HA/DR Questions
- Security Questions
Comment From Roberto Bob: How do you compare TDI and appliance architecture in terms of performance?
Kurt Hollis: TDI and appliance architecture are built on basically the same SAP-certified and approved hardware configurations. Performance is the same with TDI as long as proper setup and configuration are followed. The TDI architecture can be integrated the same as appliance architecture, and oftentimes it can be set up with newer and faster hardware products. The flexibility of TDI is a great advantage for custom tailoring storage requirements, adding memory, and updating components. The one disadvantage of TDI is that the integration and certification of the HANA servers is in the hands of your own technology team, and it requires you to put in the time and effort to perform these important tasks.
Comment From Raju: How does SAP calculate the licenses for SAP HANA? Is it based on memory or the number of users?
Kurt Hollis: Licensing is based on memory, not the number of users. But memory is critical to performance, so do not make it too small just for license reasons.
Comment From Roberto Bob: I saw in the quicksizer for SAP HANA applications that the minimum is 200GB RAM memory, even for initial installation. Did I get it correctly?
Kurt Hollis: The minimum I would suggest is 64GB for a simple practice system. It is possible to deploy a smaller size than that for development or learning purposes. For use with SAP NetWeaver based systems, the minimum recommended is 256GB from my experience. Appliances usually start at 128GB sizes, though Dell offers one at 64GB. The quicksizer is looking at the requirements for running an SAP system in memory, so yes, 200GB and up is usually recommended for that purpose.
Comment From Dave Cuff: We are busy with an implementation of IS-Retail on HANA. There are now various requests for Fiori analytical apps and reporting tools to access the HANA DB directly. Our concern is, how do we manage these database connections that are not coming through the application server layer?
Kurt Hollis: Web-based connections are handled through the XS Engine and are included as part of the HANA system. The reporting users will need to be defined in HANA and granted access to the objects. Single sign on can be set up to support seamless integration.
Comment From Guest: What is SAP's present "ceiling" of memory for vHANA systems?
Kurt Hollis: The limit for a VMware-based system is 1TB on VMware 5.5 and up to 4TB on VMware 6.0/6.5.
Comment From Guest: Do you have any recommendations on which backup tool - using the 'backint' interface - works optimally with HANA systems?
Kurt Hollis: See the references from SAP at http://global.sap.com/community/ebook/2013_09_adpd/enEN/search.html#search=HANA-brint. Popular solutions include Commvault, EMC Avamar, Networker, IBM Tivoli/Spectrum Connect, Netbackup, and others. See the SAP note 1730932 - Using backup tools with Backint for HANA.
Comment From Hector: Is there any online training or guideline documentation where you can start learning about SAP Hana architecture?
Kurt Hollis: http://saphanatutorial.com/ is one good site. The best information, which I use, is the SAP HANA guides and FAQ notes, see http://help.sap.com/hana. Additionally, good learning is available from the SAPinsider conferences and the SAP Press books.
Comment From Hector: In addition to SAP HANA studio, what other software can be used for reporting?
Kurt Hollis: Both the capabilities of the Studio and the Workload Analysis tools are helpful together. All analysis information is collected from the performance collection tables in HANA using SQL. More tools are better.
Comment From Roberto Bob: What´s the main difference in terms of performance between using SSD storage and standard storage?
Kurt Hollis: SSD storage is typically much faster than traditional spinning disks and has a longer life span. Some spinning disk arrays have caches included, which help to make them acceptable for HANA. Disk and SAN speed is important for the persistent store as logs must be written during commit cycles. Faster storage is better with HANA, as is the newer E7 Broadwell processor type.
Comment From Bryan F: How many VMs are supported on a single HANA production system?
Kurt Hollis: The servers are set up as ESX hosts for VMware. Each host can support multiple HANA VMs. For production it is recommended not to overcommit resources. On VMware 6.0 you can go up to 4TB of memory, but the number of CPU cores must meet the minimum requirements for the CPU-to-memory ratio. You can have multiple production systems on VMware but need to follow the guidelines from SAP and VMware.
Comment From Adam: What's the best training path for preparing to move into a position that does admin, performance tuning, etc. In my case I am coming from a Business Objects reporting/BI position.
Kurt Hollis: Take the SAP class HA200. Or since you have reporting experience, are you interested in HANA development? That would be the HA300 class.
Comment From Scott: Actually, regarding that question on SSD storage ... TDI rules dictate that every storage vendor has to hit the exact same metrics. Regardless of whether SAS or SSD is used, those same metrics must be hit. Thus the real benefit of SSD over SAS is a smaller footprint for HANA. For example, I may need 10 SAS spindles to hit my KPIs vs. 2 SSDs to hit the same metric. Thus, a smaller footprint and less cooling is the biggest factor. Regardless of disk type, everyone needs to hit the same metrics.
Kurt Hollis: Very nice follow up, thanks. Overall, SSD is better, although more expensive. But that expense may be offset somewhat by the number of SAS disks needed to match performance, as you stated. Good information for justifying SSD based storage solutions. Look at EMC Extreme IO or Pure Storage using Flash.
Comment From Scott: The number one storage operating system in the world and only TDI vendor that can do both FC and NFS for HANA. ; )
Kurt Hollis: Good comment back. Fiber channel and both NFS/NAS can be helpful. NAS is good for backups and HA.
Comment From Robert LaLonde: Does the recommendation to avoid OLTP and OLAP in the same MCOD DB still apply to HANA?
Kurt Hollis: OLAP and OLTP do not need to be avoided in the same HANA server, as the memory serves both equally well. One difference is that the CPU requirement for OLAP is two times higher than that for OLTP in the HANA server. If running OLAP, make sure that CPU allocation and processor speed are the main driving factors in sizing.
Comment From Andrea: What is your opinion in regards to having ERP and BW in the same MDC? Is this recommended and/or an SAP best practice?
Kurt Hollis: It is acceptable. One difference is that the CPU requirement for OLAP/BW is two times higher than that for OLTP/ERP in the HANA server. If running OLAP, make sure CPU allocation and processor speed are the main driving factors, similar to the other question. One other consideration is that the BW system’s free memory requirement is usually higher. Having these two systems together in production may be OK but should be discussed with experts regarding the applications you are planning to run there. Growth is another concern: sharing one server between two systems could become an issue during times of growth. Cost savings are improved, though.
Comment From Guest: Do you have any insights or info on the number of production nodes that can be supported on a single ESX box when using VSphere 6 without having overallocation of memory or cpus?
Kurt Hollis: The servers are set up as ESX hosts for VMware. Each host can support multiple HANA VMs. For production it is recommended not to overcommit resources. On VMware 6.0 you can go up to 4TB of memory, but the number of CPU cores must meet the minimum requirements of the CPU-to-memory ratio. You can have multiple production systems on VMware servers but need to follow the guidelines from SAP and VMware.
Comment From Tina: We are using an HP appliance and were told that SAN to SAN replication was not supported for HANA, so we have been using the HANA system replication. Do you know which hardware vendors support SAN to SAN replication in HANA?
Kurt Hollis: Good question. I have not worked on any storage sync methods. I have seen the backups synced to another data center, which could be used for recovery with logs being written there, too. This was done using a third party storage product. Usually people use HANA storage replication or system replication methods for HA and DR.
Comment From Scott: FYI on the vmware question, here is a great FAQ from vmware. vSphere 6 only supports a single production instance.
Kurt Hollis: Yes, I see that, but this may be newer: 1995460 - Single SAP HANA VM on VMware vSphere in production
Comment From Pradeep: How can you monitor your system’s current performance, and how can you fine tune it?
Kurt Hollis: The most convenient way to monitor the performance of a HANA system is by using the HANA Studio and going through the tabs for memory, CPU, sessions, and especially system information. Checking the logs and performing traces can be helpful. Also look at expensive SQL statements, run the SQL mini-check tools (see SAP Notes) and use the output of the checks to help with parameters.
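For the expensive SQL statements mentioned above, a sketch of a query against the monitoring views follows. It assumes the expensive statements trace has been enabled (it is off by default), and the column selection is illustrative:

```sql
-- Assumes the expensive statements trace is enabled
-- (global.ini -> [expensive_statement] -> enable = true).
SELECT TOP 10
       START_TIME,
       DURATION_MICROSEC,
       STATEMENT_STRING
FROM   M_EXPENSIVE_STATEMENTS
ORDER  BY DURATION_MICROSEC DESC;
```

Sorting by duration surfaces the statements most worth tuning first; the same view also records the executing user and connection for follow-up.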
Comment From Srini: What's the optimum lock wait time, and how can we find what’s causing it?
Kurt Hollis: Lock wait timeout is important to keep tables from being locked for too long. Look at the threads and see what is causing the timeout errors, and look for blocked transactions in HANA. It’s usually a sign of expensive SQL statements. Thirty minutes is a typical timeout setting; expressed in milliseconds, it would look like this: SET TRANSACTION LOCK WAIT TIMEOUT 1800000 (30 minutes). See SAP Note 1999998 - FAQ: SAP HANA Lock Analysis for more detail. Long-running jobs, large updates, or data loads usually cause lock wait times to exceed the threshold. Use the HANA Studio to investigate the locks and find out what is causing them. You can trace a lock back to the process and the user causing it.
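As a sketch of the blocked-transaction check described above, the monitoring view can be queried directly; the column list is illustrative and may vary by HANA revision:

```sql
-- Show transactions currently waiting on locks and who owns the lock.
SELECT BLOCKED_TRANSACTION_ID,
       LOCK_OWNER_TRANSACTION_ID,
       WAITING_SCHEMA_NAME,
       WAITING_TABLE_NAME,
       BLOCKED_TIME
FROM   M_BLOCKED_TRANSACTIONS;
```

From the lock owner's transaction ID you can work back through M_TRANSACTIONS to the connection and user holding the lock.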
Comment From Denise: We occasionally get out of memory dumps. Should we analyze this ourselves or send it to SAP?
Kurt Hollis: Both—look at it and open the incident with SAP in parallel. You should look at the traces from the index server (usually the area having issues) or another process that is having the dumps. You should also look at the dumps and see what errors exist in the logs. Often you can see what happened before the dump and take steps to resolve it. Make sure the hardware is not having issues, as writing to persistent store is a critical operation and problems there may cause dumps. Review SAP notes regarding the specific errors you are finding. Definitely open a high priority SAP incident to have the dump analyzed. In many cases the resolution is to implement the next patch level of HANA or move up to the next release level. In some cases a parameter setting provided by SAP support can help to avert the dumps. There is a method for zipping up the dump files to send to SAP. Make sure you practice that procedure and refer to the SAP note on how to do it.
Comment From Guest: What general checks can be performed in order to optimize the performance of the SAP HANA database? For what purpose did SAP HANA separate the concept of schema and ownership of objects while all other RDBMS's use the concept of schema and ownership as one? What benefit does it bring? SAP Hana's approach certainly can cause issues if caution is not exercised.
Kurt Hollis: The flexibility it brings to security is the main reason I can think of. This is also an application platform with development capabilities in addition to the database objects. Use the mini-checks (refer to SAP notes) for performance optimization and take a look at the HANA performance guide located at help.sap.com/hana.
Comment From Lee: Sometimes we cannot cancel sessions/threads using HANA studio. Is there any better way to kill a session?
Kurt Hollis: That may be due to a few reasons. The thread/session may still be in a critical wait/busy state, and killing it may cause serious system or data problems. Or it may be in a long rollback. Later HANA releases have helped with these issues. Check the logs. You could also try some SQL commands, such as: ALTER SYSTEM DISCONNECT SESSION '400210'; or ALTER SYSTEM CANCEL SESSION '400210';
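To find the connection ID to pass to those commands, the connections view can be queried first; a sketch (the filter value is illustrative):

```sql
-- Identify candidate sessions by user and status before cancelling.
SELECT CONNECTION_ID, USER_NAME, CONNECTION_STATUS, CLIENT_HOST
FROM   M_CONNECTIONS
WHERE  CONNECTION_STATUS = 'RUNNING';

-- Then cancel the offending session by its connection ID:
ALTER SYSTEM CANCEL SESSION '400210';
```

CANCEL SESSION attempts a graceful rollback of the current statement, while DISCONNECT SESSION drops the connection itself.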
Comment From Vikram Das: Is HANA limited by the size of the database? If the database is a petabyte in size, then can it not fit in the memory of any existing server?
Kurt Hollis: Databases that are larger than the memory size can be considered in certain cases. Data is loaded in memory only when needed for column storage, which helps. Combining it with near line storage (NLS) or dynamic tiering can also reduce memory size requirements.
Comment From Guest: A link to said blogs would be helpful if possible. And any other useful information that is not normally linked like the Master Guides are would be appreciated.
Kurt Hollis: Check out help.sap.com/hana. Search for HANA using Google. A lot of good information exists out there.
Comment From Sankara Bavirisetti: What are some other parameters or memory parameters we need to set after an installation in a multi-SID environment, ideally ones used to set the "Global Allocation Limit" according to RAM?
Kurt Hollis: “Global allocation limit” is the primary parameter to set. The processes will already be set up for normal operations, and the associated parameters will also already be in place. You can use the mini-checks to validate whether the parameters are within the correct ranges. Make sure you set up the alerts, and emails from them, to closely monitor any issues that come up after running the system.
Comment From SAP Basis: Have you experienced HANA server crashes, and if so, please comment on the common causes and steps necessary to prevent this, besides the obvious step of upgrading to a newer HANA version or patch.
Kurt Hollis: I would start by looking at the logs to see what is leading up to the memory dumps. SAP is needed to help provide answers to the causes of the memory dumps and possible HANA software fixes needed in patched versions. I would do both, start an incident with SAP and try to analyze the issue locally. Maybe a trace can help.
Comment From Sankara Bavirisetti: What are the other parameters needed after a HANA installation in a multi SID environment? Ideally, I used set “Global Allocation Limit” to restrict memory limits.
Kurt Hollis: Yes, the main important item is to limit the memory per SID with “Global allocation limit.” With several databases sharing the same memory, these settings are applied for each database to set the limit for each. This is for MCOS or MCOD. The new way is to use multitenant database containers (MDC). Memory is managed better that way.
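A minimal sketch of setting the global allocation limit via configuration SQL follows; the value is in MB, and the number shown is purely illustrative:

```sql
-- Cap this database at roughly 256 GB (value is in MB; illustrative).
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('memorymanager', 'global_allocation_limit') = '262144'
  WITH RECONFIGURE;
```

With several SIDs sharing one host, the sum of the per-database limits should stay below physical RAM minus the OS and service overhead.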
Comment From Raju: If we are unloading the data from the main memory to persistence storage, is there any performance issue that could happen? If developers are updating the data on same table at the same time, how will it react with that table? Should they wait until they receive the update or what?
Kurt Hollis: This question is about forcing a column store unload from memory, from what I gather. The unload will occur and then get loaded again for the updates to happen. This action will slow down the update process but may free up some memory due to partial loading of the column store table. Look at the delta merge operation as well. What are you trying to accomplish with this action?
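The unload/reload behavior described above can be triggered and inspected explicitly; a sketch with hypothetical schema and table names:

```sql
-- Force a column table out of memory; it reloads on next access.
UNLOAD "MYSCHEMA"."SALES_ITEMS";

-- Explicitly reload all columns rather than waiting for lazy loading.
LOAD "MYSCHEMA"."SALES_ITEMS" ALL;

-- Check the current load state of the table.
SELECT TABLE_NAME, LOADED
FROM   M_CS_TABLES
WHERE  SCHEMA_NAME = 'MYSCHEMA' AND TABLE_NAME = 'SALES_ITEMS';
```

Updates against an unloaded table simply wait for the implicit reload, which is the slowdown referred to in the answer.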
Comment From Sridhar: How can you handle the alerts in alert logs to partition the tables?
Kurt Hollis: You should be able to change the thresholds for these alerts. This can be an annoyance, I know from experience, and there have been some issues with the alerts in some releases. The best practice is to partition tables when necessary, as there is a 2 billion record limit per non-partitioned table (or per partition).
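As a hedged example of the partitioning recommendation (table and column names are hypothetical, and the partition count should come from your own sizing):

```sql
-- Split a large column table into 4 hash partitions so each stays
-- well under the ~2 billion record per-partition limit.
ALTER TABLE "MYSCHEMA"."SALES_ITEMS"
  PARTITION BY HASH ("ITEM_ID") PARTITIONS 4;
```

Hash partitioning on a high-cardinality key spreads records evenly; range partitioning is the usual alternative when data ages out by date.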
Comment From Tina: Lately, Idera launched their latest monitoring tool: Workload Analysis for SAP HANA. The tool can be useful to monitor performance, trend system resource consumption, and capture/rank HANA SQL statements with their execution plans, etc. Is SAP HANA Studio not able to trend the system resource consumption based on the history? Do we have to rely on HANA SQL to do so?
Kurt Hollis: Yes, the HANA Studio is limited in this way. The Idera product is a graphical view of HANA ‘M’ views and stores history in its own database. One downside is that it does not have alerting, threshold, and user level drill down capabilities. The HANA cockpit and new Fiori-based access is a nice addition.
Comment From Bryan F: What are your recommendations for running multi-tenant database containers in a SAP landscape?
Kurt Hollis: Multi-tenant database containers (MDC) are a good fit for several scenarios. The main point of MDC is to replace earlier MCOS deployments (Multiple components one system) and provide a more robust solution for deploying common scenarios such as ERP-CRM-BW, QA/DEV, and datamarts together in the same database server. A single database server for a single application is a great choice but may not take full advantage of the cost savings and benefits of multiple applications in a single HANA database server.
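For illustration, tenants in an MDC system are created from the system database; a sketch (the tenant name and password are placeholders):

```sql
-- Run from the SystemDB of an MDC installation.
CREATE DATABASE ERP SYSTEM USER PASSWORD MyInitialPwd1;

-- Tenants can then be listed from the SystemDB.
SELECT DATABASE_NAME, ACTIVE_STATUS FROM M_DATABASES;
```

Each tenant gets its own SYSTEM user, catalog, and users, which is what isolates the ERP-CRM-BW scenarios described above.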
Comment From Andrea: During a system refresh in MDC, can we refresh a single container in an MDC from a single container of a different MDC?
Kurt Hollis: Yes, this can be done via backup/restore. However, you need to ensure it meets the prerequisites, such as the system DB version (for example, moving from the latest version to an older version is not supported). You will need to build an interim system where you can convert the single DB to an MDC and then, using backup/restore of that container, refresh it in the QA MDC. However, as you know, you cannot revert from MDC to a single DB system.
Comment From Luanne: Have you seen performance issues with MDCs? Do you have recommendations for which applications should not run together as tenants? Do you have recommendations for which applications run well together?
Kurt Hollis: Typically, the performance of running multiple tenants and multiple applications in those tenants is not a problem. If you are running BW in an MDC, especially as a scale out, there are some cautions about putting other systems together. The main point of MDC is to replace earlier MCOS deployments (multiple components one system) and provide a more robust solution for deploying common scenarios, such as ERP-CRM-BW, QA/DEV, and datamarts, together in the same database server. This also provides fast interactions between tenant databases using Smart Data Access. The performance should not be a problem as long as the proper guidelines and memory management recommendations are followed. The OSS Note 2096000 – SAP HANA MDC additional information talks about these steps as well.
Comment From Scott: Why the push for MDC? MCOD never took off as it makes a complex SAP landscape more complex. MDC increases complexity. My experience suggests that virtualizing SAP instances (single instance) is easier than implementing an MDC landscape. MDC also removes the HANA snapshot abilities. I’m just curious on your take.
Kurt Hollis: (repeated from above) The main point of MDC is to replace earlier MCOS deployments (multiple components one system) and provide a more robust solution for deploying common scenarios such as ERP-CRM-BW, QA/DEV, and datamarts together in the same database server. A single database server for a single application is a great choice but may not take full advantage of the cost savings and benefits of multiple applications in a single HANA database server.
Comment From Andrea: In HANA database replication, can we replicate a single container from an MDC?
Kurt Hollis: Yes, backup and restore can be managed for a single container and restored on a different HANA server using the SWPM tool.
Scale Up/Scale Out Questions
Comment From Tina: Is it still true that SAP recommends BW on HANA to be scale out rather than scale up?
Kurt Hollis: BW scale out is only recommended when the size exceeds that of a reasonable scale up solution. The performance of scale up is better than that of scale out. Scale out has a performance impact because node to node is across a network and not totally in memory. When BW sizing exceeds 3TB, the need for scale out is a serious consideration.
Comment From DRQ: We have HA for HANA scale up, but are there disaster recovery options other than duplicate stand-by servers?
Kurt Hollis: Standby servers are an excellent choice for an HA solution for scale out systems. For DR solutions the usual choice is to replicate the storage to a QA server, which is set up exactly the same as production. The QA can be shut down, and the copy of production can be started on that server. The requirement for the DR solution is that the scale out environment for DR must match the production system. When doing scale out, the DR set up becomes more difficult. There is an extended storage option when using dynamic tiering that may be leveraged for DR options as well, with different size systems using failover groups. In general, scaling up offers some performance advantages over scaling out, as memory access is local and minor overhead associated with inter-node network communication is avoided.
Comment From Bryan F: How many backup nodes are recommended for a BI/BW system?
Kurt Hollis: One standby node for a scale out with BW is all that is needed.
Comment From Scott: The fastest most efficient way of refreshing a QA system from a copy of production is to get a fast HANA snapshot/storage snapshot and then mount the data to the QA system (assuming QA has already been built). If the production system is non-MDC and your QA system resides within an MDC system, what is the best method for refreshing it? File backup/backint backup? If so, that's like 1998 architecture.
Kurt Hollis: It is possible to backup/restore MDC to MDC systems, and that is the recommended approach. For a non-MDC to MDC refresh, there is a conversion process that is supported.
Comment From Vikram Das: Since HANA is in-memory, is it possible to have an active-active type of high availability, similar to Oracle RAC, that shares common disk space and uses high-speed networking to make it possible? Since there is no technology for two servers to share RAM, it seems that it is probably not possible?
Kurt Hollis: This is a work in progress for SAP. It’s not available yet that I know of.
Comment From Guest: We are in the process of designing our SAP HANA HA/DR architecture. What kind of deployment would you recommend for SAP HANA HA with a single node system? We are thinking of async system replication because it provides DR with minimum downtime, but it cannot provide an HA scenario.
Kurt Hollis: Using the SAP-provided system replication using sync or async (for more distance) between the database servers is the recommended best practice. The sync method in the same data center works as HA, and the async method between distant data centers works as DR. There are storage replication methods which work as well.
HANA Security Questions
Comment From Guest: We recently turned the Audit Logging on in our HANA system, but unfortunately left the system users included. This exploded the CS_AUDIT_LOG_ table's size, which is bigger than the top 5 other tables. Is there any way to delete those users from the entries in that table while keeping the other users?
Kurt Hollis: I have not done this myself, but I am sure you can write a SQL statement to perform this delete activity. Another way may be to stop audit logging temporarily and clean up the logs, and then start it again with the correct logging enabled.
Comment From AAia: Is there an SAP roadmap for BW push down security for HANA native development?
Kurt Hollis: I’m not sure about any push-down security yet. This is maintained separately in BW and HANA. One option to consider is to use SAP GRC to do the user creation, which will handle both BW and HANA at the same time.
Comment From Pradeep: What is best way to refresh and export /import user security in the HANA DB?
Kurt Hollis: Unlike with other SAP systems, export and import of user security in HANA is limited. This is due to the integration of the grants with the database objects. There are SQL scripts you could use which may help (found in blogs).
Comment From Guest: Do you know of any way to drop or delete some entries (specifically the entries from the system users) from the CS_AUDIT_LOG_ table?
Kurt Hollis: Interesting question. This is not good policy, but if you turn audit logging off and, as the SYSTEM user, delete the entries using SQL commands, it should be possible. If logging is on, the delete actions will themselves be logged. This action seems against security best practices.
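Rather than deleting rows out of the audit table directly, HANA provides a truncation command for the audit trail; a sketch (the timestamp is illustrative, and the AUDIT OPERATOR privilege is assumed):

```sql
-- Remove audit trail entries older than the given timestamp.
ALTER SYSTEM CLEAR AUDIT LOG UNTIL '2017-01-01 00:00:00';
```

This clears everything up to the cutoff, so it cannot selectively keep non-system users; excluding system users from the audit policy going forward is the cleaner fix.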
SAPinsiderMatt: Thank you Kurt for all your insightful answers, and thank you to everyone who participated in today’s chat. You can review the Q&A chat replay at any time, and I will alert you by email when the transcript of today’s discussion is posted.
Kurt Hollis: Thanks for all the questions today.