With increasingly powerful hardware available at ever more affordable prices, many customers are considering consolidating their IT systems onto a single global instance, one that runs all software components on a single server or database. This consolidation, which I'll call "going global" (see Figure 1), promises standardized processes across languages and countries, financial gains through easier maintenance, streamlined data management, and simplified reporting — among other benefits.
Figure 1: Consolidating IT systems on a single global instance and data center
But beyond these business benefits, consider the impact of going global on performance KPIs like response time and system throughput. Consolidating all systems on one global instance surely incurs more load, since users across different time zones will all be tapping into the same system. Consider also the additional data being used in background jobs. What's more, all this global activity leaves IT with a much smaller window for system maintenance, as there's no longer a clear distinction between day and night.
Accordingly, it's important for companies going global not to forget about performance. In fact, I'd advise any company, especially if that company has tight business application requirements (see sidebar), to address performance throughout its software architecture.
Going Global? Consider the Impact on Performance
When evaluating the performance impact of consolidated — and therefore often quite large — SAP systems, companies need to look at two major considerations that at first glance have little in common:
- In the planning phase, companies moving to a single global instance need to determine the feasibility of such a move. Will the software be able to, for example, run payroll for 500,000 employees in three hours? Can the system handle our 100 million business partners? What is the largest database table we will have to deal with?
- The second set of questions deals with the root causes of poor perceived performance. Two recent studies showed that the number-one culprit for performance problems in large enterprise systems is custom coding.1 It is fairly safe to say that the performance repercussions of custom-developed reports and interfaces are all too often underestimated ("Oh, but we only wrote 50 BAPIs!").
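Feasibility questions like these can often be narrowed down with back-of-envelope throughput arithmetic before any load test. Here is a minimal sketch in Python; the payroll volume, time window, and single-job rate are illustrative assumptions, not measured values:

```python
import math

def required_parallel_jobs(total_items, window_hours, items_per_sec_per_job):
    """How many parallel jobs are needed to finish total_items in the window?"""
    required_rate = total_items / (window_hours * 3600)   # items/sec overall
    return math.ceil(required_rate / items_per_sec_per_job)

# Assumed figures: 500,000 employees, a three-hour window,
# and a measured single-job rate of 5 employees/sec.
jobs = required_parallel_jobs(500_000, 3, 5)
print(jobs)  # 10 parallel payroll jobs would be needed
```

If the required number of jobs exceeds what the hardware or the application's parallelization support can deliver, either the time window or the architecture has to change.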
In the following sections, I will provide an overview of the typical performance metrics within each architectural layer of SAP software. I will also offer some hints about which potential bottlenecks to safeguard against and where to focus the performance optimization of custom coding.
Performance Implications at Each SAP System Architecture Layer
Figure 2 depicts the different layers of a typical SAP system architecture. End-to-end response time, a major consideration when determining perceived performance, comprises the individual service times at each layer — starting from the front end (the browser) to the network, Web server, application server, database server, and back again.
Figure 2: The layers of a typical SAP system architecture; end-to-end response times comprise the individual service times on each layer
Let's look at the performance implications at each layer. I'll also offer system tuning and coding recommendations within each section.
The Front End and the Front-End Network
When more users access the central system through a wide area network (WAN), consider the following aspects:
- Complexity of the user interface (UI). With more data being transferred to the UI, more bandwidth will need to be provided, and increased latency issues may occur because of a growing number of roundtrips. A note here: Most people think bandwidth is the culprit for poor response time, but latency is more often the real issue (see Latency sidebar).
- Browser settings. Very often, browser settings prevent the effective compression and caching of data, forcing content to be fetched from the server rather than served from the browser cache.
- Internet infrastructure. While sufficient bandwidth is usually not an issue (although in some regions, it can be quite costly), latency and its variations certainly are issues (again, see Latency sidebar).
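The latency point is easy to verify with simple arithmetic: once payloads are small, synchronous roundtrips dominate the response time. A quick Python sketch with illustrative numbers for payload, bandwidth, and WAN latency:

```python
def interaction_time_ms(payload_kb, roundtrips, bandwidth_mbps, latency_ms):
    """Approximate time for one user interaction step over a WAN."""
    transfer_ms = payload_kb * 8 / (bandwidth_mbps * 1000) * 1000  # payload transfer
    return transfer_ms + roundtrips * latency_ms                   # plus roundtrip cost

# A 20KB payload on a 2 Mbit/s line transfers in only ~80 ms,
# but 8 synchronous roundtrips at 150 ms WAN latency add 1,200 ms.
print(interaction_time_ms(20, 8, 2, 150))  # 1280.0
```

With these numbers, doubling the bandwidth would save only 40 ms, while halving the number of roundtrips would save 600 ms — which is why reducing roundtrips usually pays off more than buying bandwidth.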
So what are some possibilities for improving front-end performance? I've divided Figure 3 into system tuning recommendations and tips for ensuring that custom code performs well.
Figure 3: Front-end performance improvement recommendations
The Application Layer
Consolidating onto one server means that more user-driven load and background-driven load will need to be processed, especially because these loads may be running in parallel (as there is no longer a clear distinction between day and night). Because of the increased load, check your custom applications for:
- Linear resource consumption of CPU and memory. It is important to ensure that CPU time increases linearly with the size of the processed objects, because each excess millisecond is multiplied by the number and size of those objects. Also, avoid memory leaks at all costs. A memory leak occurs when a program allocates memory and does not release it when it finishes; the leaked memory accumulates until no free memory is left.2
- Parallel processing. More background jobs need to be scheduled, and these jobs may have to process more data in the same or in an even smaller time window. Make sure your custom application is able to split data volume into small packages for load-balanced distribution across different application servers.
- Integration. If you were to consolidate onto one server, you might gain performance by decreasing redundancy and thus saving communication time. If, for whatever reason, you still need to integrate different servers and applications, make sure that the interfaces will be able to process the potentially increased load caused by the consolidation. Also note that your middleware will likely have to process more messages in a shorter time frame.
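The package-splitting idea can be sketched in a few lines; here Python's concurrent.futures stands in for parallel background work processes, and the package size and worker count are illustrative choices, not recommendations:

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_packages(items, package_size):
    """Split a data volume into small packages for load-balanced distribution."""
    return [items[i:i + package_size] for i in range(0, len(items), package_size)]

def process_package(package):
    # Stand-in for real work, e.g. posting one package of documents
    return sum(package)

items = list(range(1, 101))
packages = split_into_packages(items, 10)   # many small packages, not a few large ones

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_package, packages))

print(len(packages), sum(results))  # 10 5050
```

Giving preference to many small packages over a few large ones keeps all workers busy until the very end of the job, instead of leaving most of them idle while one worker finishes an oversized package.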
Figure 4 includes my recommendations for improving application performance.
Figure 4: Application-layer performance improvement recommendations
The Database Layer
With single global instances, one database holds all the data. This can have four major effects:
- Increased costs. Handling large databases — and, accordingly, large database tables — often involves increased costs (including managing unplanned downtime, backups, and recovery). You may also run into additional costs for the storage subsystem, as each gigabyte of disk space is mirrored and system-copied.
- Increased access and response times. Access times may increase with the growing size of database tables, thus affecting server response time. And when database accesses are not supported by proper index design, response times may increase significantly since the statement will need to search through a very large amount of data.
- Increased disk input/output. This factor is often overlooked, but with a larger amount of data to be inserted and read from the database, the input/output will increase as well. Make sure you tune the system accordingly — for example, increase the database cache size.
- Resource competition. Background jobs are usually CPU-intensive on the database. In a single global instance, however, there is no longer a distinct nighttime during which background jobs can run, so end users working while these jobs execute may experience poorer response times.
Figure 5 includes my recommendations for improving performance at the database level.
Figure 5: Database-layer performance improvement recommendations
Make sure you do not keep data in the database any longer than it is needed:
- Achieve this through a combination of data archiving and retention management policies
- Be sure not to delete data you may still legally need!*
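Housekeeping rules of this kind can be expressed as a small policy function. The sketch below is illustrative only; the retention periods and the legal-hold flag are invented values, not recommendations:

```python
from datetime import date

RETENTION_DAYS = 365 * 2     # assumed business retention period (online)
LEGAL_HOLD_DAYS = 365 * 10   # assumed legal retention period (archived)

def archiving_action(created: date, legally_relevant: bool, today: date) -> str:
    """Decide what to do with a record: keep online, archive, or delete."""
    age = (today - created).days
    if age <= RETENTION_DAYS:
        return "keep"                                 # still needed online
    if legally_relevant and age <= LEGAL_HOLD_DAYS:
        return "archive"                              # offload, but never delete
    return "delete"                                   # past all retention periods
```

The key point the policy encodes: data leaves the database as soon as the business no longer needs it online, but legally relevant data is archived rather than deleted until its hold expires.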
Consider external scalability when dealing with:
- Number of CPUs and processor architectures
Consider disk input/output:
- First and foremost, ensure that the database cache grows with the amount of processed data
- Make sure you have a sufficient number of spindles
- Use transaction ST04 to check the input/output rate
- Check the data transfer rate
In custom coding, the number-one culprit for performance problems is inefficient index design. Therefore:
- Ensure that all frequently executed database accesses (selects, updates, and deletes) are supported by indexes
- A highly modified table, such as one with many INSERT, UPDATE, or DELETE operations, should not have too many indexes; these indexes must be maintained by the database management system (DBMS)
- The more columns an index has, the higher the chance that an UPDATE will affect one of the indexed columns; use small indexes
- In the Performance Trace tool, a value of more than 10 milliseconds in the Minimum Time/Record column suggests improper indexing
- Good index design requires coordination, and potentially compromise, among all developers who need to access the data; for example, one country may want to add an index that only it uses, but it must understand that every additional index adds maintenance overhead to INSERT, UPDATE, and DELETE operations
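The effect of a supporting index can be demonstrated outside SAP with any relational database. Here is a minimal sketch using Python's built-in sqlite3; the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, country TEXT, amount REAL)")
conn.executemany("INSERT INTO orders (country, amount) VALUES (?, ?)",
                 [("DE" if i % 2 else "US", i * 1.5) for i in range(10000)])

def plan(sql):
    """Return the database's access plan for a statement."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Without a supporting index, the statement scans the whole table
print(plan("SELECT * FROM orders WHERE country = 'DE'"))  # a full "SCAN"

conn.execute("CREATE INDEX idx_orders_country ON orders (country)")

# With the index, the optimizer switches to an index search
print(plan("SELECT * FROM orders WHERE country = 'DE'"))  # "...USING INDEX..."
```

The same reasoning applies in reverse: every index created here would also have to be maintained on each INSERT, UPDATE, and DELETE, which is why frequently modified tables should carry as few indexes as possible.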
|* For more information, see "SAP's Strategy for End-to-End ILM Success: The Information Lifecycle Management Solution from SAP Bridges the Gap Between Applications and Storage Technology for Legal Compliance," a Performance & Data Management Corner column by Dr. Axel Herbst and Tanja Kaufmann in the January-March 2008 issue of SAP Insider (www.SAPinsideronline.com).
I'd like to leave you with three pieces of advice for evaluating — and improving — system performance, especially if you're considering consolidating on a single global instance.
First, define your performance baseline using actual business requirements, and involve both the implementation and technical teams when setting up this baseline. Second, when dealing with your custom code, remember that you can often boost system performance by allowing your teams sufficient time to analyze and optimize that coding. And third, devise an efficient housekeeping policy whereby you create rules to limit data growth. Remember that bigger isn't always better!
Many scalability issues on the application layer can be resolved by adding hardware capacity:
- You can add physical application servers in your landscape
- You can add processing power:
- Batch-oriented systems may be run with many, perhaps slower, processors to improve parallel processing capabilities
- User-driven systems may show better end-to-end performance with powerful processors
Additional tips concern the configuration of application servers:
- Configure a sufficient number of application servers, including work processes in ABAP
- Configure up to 500 users per application server on the ABAP stack
- Budget roughly 4GB memory per core (faster cores may need more memory)
Whichever approach you take, be aware that adding hardware does not necessarily solve performance issues!
Ensure linear scalability when dealing with:
- CPU time — Perform linearity measurements to ensure linear resource consumption
- Memory usage — Note that there might be dynamically growing memory consumption through internal tables, ABAP/Java objects, and strings; also be sure that there are no memory leaks
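A linearity measurement need not involve wall-clock timing; counting the operations performed at doubling input sizes already reveals the growth class. A minimal, self-contained illustration (the two cost functions are invented stand-ins for real workload steps):

```python
def growth_ratios(cost_fn, sizes=(1000, 2000, 4000)):
    """Cost ratios between successive doublings: ~2 means linear, ~4 quadratic."""
    costs = [cost_fn(n) for n in sizes]
    return [round(b / a, 1) for a, b in zip(costs, costs[1:])]

def scan_cost(n):
    # Linear search per item -> total comparisons grow quadratically
    data = list(range(n))
    return sum(data.index(x) + 1 for x in data)  # index position + 1 = comparisons

def hash_cost(n):
    # Hashed lookup per item -> total cost grows linearly
    data = set(range(n))
    return sum(1 for x in range(n) if x in data)

print(growth_ratios(scan_cost))  # [4.0, 4.0] -> quadratic, will not scale
print(growth_ratios(hash_cost))  # [2.0, 2.0] -> linear, safe to grow
```

Running such checks on representative object sizes before go-live is far cheaper than discovering non-linear behavior on the consolidated production system.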
Enable parallel processing:*
- Allow for workload balance within your programs
- Improve the processing time of a single step; this in turn will improve the processing time of the whole chain
- Determine whether to use fixed or dynamic distribution
- Give preference to a higher number of small packages rather than to only a few large packages
- Avoid deadlocks by following a sorted sequence and keeping locks as short as possible by choosing the right granularity
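The sorted-sequence advice can be illustrated with two workers that always acquire locks in the same order, which rules out the classic situation where each holds one lock and waits for the other's. The account names and amounts below are illustrative:

```python
import threading

locks = {"acct_a": threading.Lock(), "acct_b": threading.Lock()}
balances = {"acct_a": 100, "acct_b": 100}

def transfer(src, dst, amount):
    # Always lock in sorted key order, regardless of transfer direction,
    # so two opposing transfers can never deadlock each other.
    first, second = sorted((src, dst))
    with locks[first], locks[second]:
        balances[src] -= amount
        balances[dst] += amount

t1 = threading.Thread(target=lambda: [transfer("acct_a", "acct_b", 1) for _ in range(1000)])
t2 = threading.Thread(target=lambda: [transfer("acct_b", "acct_a", 1) for _ in range(1000)])
t1.start(); t2.start(); t1.join(); t2.join()
print(balances)  # {'acct_a': 100, 'acct_b': 100}
```

Keeping the locked section this short also honors the second half of the advice: the right granularity means holding each lock only for the moment the shared data is actually touched.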
Consider additional CPU time caused by communication to other services:
- Fixed overhead per message adds to processing time
- Serialization, compression, and authorization add to processing time
|*For more information, please see "Speed Up High-Throughput Business Transactions with Parallel Processing — No Programming Required!" by Susanne Janssen and Werner Schwarz in the January/February 2002 issue of SAP Professional Journal (www.SAPpro.com). See also "How to Build Optional Parallel Processing into Your Applications for Increased Throughput and Reduced Processing Time" in the March/April 2002 issue of SAP Professional Journal (www.SAPpro.com).
Consider employing a browser setting policy to avoid roundtrips:
- Set cache size to approximately 100MB
- Allow compression by selecting HTTP 1.1 (accept encoding)
- De-select the following options:
- Do not save encrypted data to disk
- Empty temporary Internet files folder
Balance infrastructure costs with perceived performance:
- Users manipulating lots of data — while culling reports, for example — may require a different level of UI access than users working on "light" applications
Consider what might add load to the WAN bandwidth, for example:
- Whether central services, such as local printing, will need to be sent over the Internet as well
- Whether you upload data from scanners or RFID devices
If you can influence data transfer and roundtrips, keep them to a minimum:
- Use compression techniques
- Avoid synchronous roundtrips in WAN
- For standard scenarios that do not involve much data or complex UIs, try to keep the average bandwidth per user interaction step in a browser between 10KB and 20KB
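How much compression helps depends heavily on the payload: repetitive generated markup compresses very well, while already-compressed content (images, archives) barely shrinks. A quick check with Python's built-in gzip module; the sample payload is invented:

```python
import gzip

# Repetitive HTML-like payload, typical of generated UI markup
payload = b"<tr><td>row</td><td>value</td></tr>" * 1000

compressed = gzip.compress(payload)
print(len(payload), len(compressed))  # the compressed size is a small fraction
```

Measuring your own representative payloads this way shows quickly whether enabling HTTP compression will make a noticeable dent in WAN traffic.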
- The Administration and Infrastructure 2009 conference in Orlando, March 24-27, 2009, for lessons on optimizing the performance of your global system landscape (www.sapadmin2009.com)
- SAP Performance Optimization Guide (7th Edition) by Thomas Schneider (SAPinsider Store)
- "Speed Up High-Throughput Business Transactions with Parallel Processing — No Programming Required!" by Susanne Janssen and Werner Schwarz (SAP Professional Journal, January/February 2002, www.SAPpro.com)
Susanne Janssen (firstname.lastname@example.org) joined SAP in 1997 after finishing her studies in applied linguistics and cognitive science at the Universities of Mainz and Edinburgh. Since 1998, she has been a member of the SAP Performance, Data Management, and Scalability team, where she manages sizing processes, projects, and customer contacts. She also supports the SAP field in feasibility studies.