Lower TCO and a Better End-User Experience Through Automated Performance Testing and Analysis

by Armin Hechler-Stark, SAP AG

SAPinsider - 2006 (Volume 7), April (Issue 2)

Every minute that your organization loses due to poor system performance can incur significant cost. Consider the impact on systems with even just a two-second response time lag. Multiply those two seconds by the number of system requests users make in a day, and the cost in lost time is staggering.1

Performance testing and analysis — especially when automated and conducted early in the development cycle — can help your development teams find bottlenecks in your organization's processes and programs, and solve crucial performance problems before they interfere with mission-critical processes in the costly, high-load productive stage.

Unfortunately, there's a misconception that performance tests are only good for monitoring your system's response time, and that they're too expensive and inflexible to run in a test environment. This is simply not the case. Automating performance testing and analysis during application development eliminates the costly and time-consuming work involved in manual testing. This leads to:

  • Lower TCO, since well-performing programs require less hardware

  • Better end-user experiences, with faster and more predictable response times

  • Reduced production interruptions, slowdowns, or standstills

  • Faster, more predictable run times for batch jobs

At SAP, performance testing and optimization have been an integral part of the development process for a long time — all SAP solutions are subject to thorough performance testing as part of the standard development process. More recently, SAP has made great strides in moving from manual, more compartmentalized performance testing toward a large-scale automation of performance tests on a business-scenario level. To reap the benefits outlined above, customer development teams that are writing additional code in SAP systems should likewise invest some upfront time in testing their own code for possible performance improvements. For programmers, development managers, and project leads, this article will offer performance testing best practices and reveal the four key requirements of automated, replicable performance tests.

What Performance Testing and Analysis Can Do for Your Programs

Performance tests at the application level help to quickly detect excessive resource consumption that can lead to degraded performance of the entire system. Highly performance-critical, day-to-day resources include database, memory, and CPU, but excessive network communication can also significantly reduce the performance of a business solution, for example, by increasing end-user response times. The goal of performance testing is to find out where program performance is less than optimal; that of analysis is to remedy these situations by interpreting performance test results. Whenever the term "performance testing" is used in this article, it refers by extension also to "performance analysis."

Traditional Performance Testing Methods, and How They Fall Short

Many common performance problems (see sidebar below) — unnecessary or inefficient database accesses, for example — can be detected in small development systems by manually testing single programs or transactions. In a manual test, you usually start the program to be tested and use standard performance analysis tools — transactions ST05, SE30, and STAD, for example — to analyze the program's behavior and find the code sequences that are causing performance problems. Individual developers can perform this manual testing, and its advantage is that problems that would affect all system users in the productive stage can be detected and fixed early on.

The reality, though, is that software has become increasingly complex. System landscapes now consist of a large variety of heterogeneous component types, and more and more business processes run across system boundaries. Manual performance testing of individual system components is no longer sufficient, since performance depends on the interaction of several diverse components. And from the end user's point of view, response time reflects the combined performance behavior of all components in the system landscape. It becomes necessary to monitor and test performance across system boundaries rather than within a single component only. With this in mind, automating performance testing and analysis is especially important to reduce the overall testing and analysis effort.

Major Culprits of Poor Performance: A Checklist for Programmers

Years of experience in performance testing have shown that business application programmers repeatedly stumble into a number of performance pitfalls. Some of these pitfalls are "low-hanging fruit," meaning the performance bottlenecks can be avoided or removed without much effort while yielding significant positive effects on the overall performance and scalability of the business process. Programmers can avoid such pitfalls by asking:

Is the number of database accesses too high, or are the accesses themselves inefficient? These performance problems can be remedied by checking whether all frequently executed accesses to the persistence layer are necessary. Eliminating identical accesses to the persistence layer within one business transaction is particularly effective. To help make data accesses more efficient, data from database buffers (such as data from buffered customizing tables) should be used; bypassing the buffer must be avoided. The number of records to be searched can be kept small if database accesses are supported by appropriate indexes.

Is the volume of data being transferred to and from the database too high? Database accesses should read only those data records that the application needs. For example, complete WHERE clauses should be specified (see the code sketch at the end of this checklist).

Are users or tasks competing for system processing power and locking each other out? To enable scalable parallel processing, locking and load balancing mechanisms should be employed.

Does your resource consumption (memory, CPU, etc.) scale linearly with the amount of processed data? For example, freeing up memory after use prevents memory leaks and promotes more linear scalability, preventing the application from becoming slower and slower over time.*

Are there too many round trips per user interaction step — either between the front end and the back end or between two application servers? Is too much data being transferred? Nowadays, programmers must assume that solutions might be executed in a wide area network (WAN), where each round trip requires a minimum runtime of 0.5 seconds if a satellite connection is involved. For optimal WAN performance, the number of round trips between the front end and the application layer per user interaction step should not exceed two.

*See the section "Running Performance Test Series" for more on linear scalability.
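To illustrate the first two questions, here is a minimal ABAP sketch of the difference between fetching everything and filtering afterward versus pushing a complete WHERE clause down to the database. The report name, the selection criteria, and the use of the sales order header table VBAK are purely illustrative assumptions, not code from the article or from SAP's test tools.

REPORT z_perf_where_demo.

* Hypothetical selection criteria, for illustration only
DATA: lt_orders    TYPE STANDARD TABLE OF vbak,
      lv_from_date TYPE vbak-erdat VALUE '20060101',
      lv_sales_org TYPE vbak-vkorg VALUE '1000'.

* Inefficient: transfers every record from the database and then
* discards most of the data on the application server
SELECT * FROM vbak INTO TABLE lt_orders.
DELETE lt_orders WHERE erdat < lv_from_date OR vkorg <> lv_sales_org.

* Better: a complete WHERE clause restricts the result set in the
* database, so only the records the application needs are read and
* transferred; an appropriate index can then support the access
SELECT * FROM vbak INTO TABLE lt_orders
  WHERE erdat >= lv_from_date
    AND vkorg  = lv_sales_org.

* Releasing large internal tables after use also keeps memory
* consumption scalable (see the memory question above)
FREE lt_orders.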

Automated Performance Testing and Analysis Benefits

Automating performance testing and analysis means that a particular business process or scenario can be started automatically and repeatedly, involving the appropriate components in the system landscape and collecting and centrally storing relevant performance data. The advantages of cross-system automation of performance tests, especially for complex landscapes, are numerous:

  • Most importantly, automation allows tests to be reproduced efficiently and enables the test results (performance figures) to be retrieved and reused at any time. This reusability enables regression tests2 and simplifies the fulfillment of legal requirements. For example, a company's custom development may be subject to detailed documentation in accordance with legal mandates within their industry; the reusability and ready access of performance test results would be invaluable for compliance efforts.

  • Automation reduces the need for developers or expert testers to conduct manual tests, which can be costly in terms of manpower and time. With automated tests, you don't need to repeatedly devote expensive expert-user time (apart from the initial setup); most any user can start the test and view the performance results.

  • Since performance degradation needs to be detected as early as possible during a software development cycle and its improvement needs to be monitored by repeated test execution, automation simplifies the running of performance test series to monitor performance behavior over time.

  • Automation ensures the comparability of performance tests — even over extended time intervals — because tests are executed exactly the same way for each measurement.

4 Key Requirements of Automated Performance Testing and Analysis

When automating performance testing, developers must be mindful of four important requirements for automated tests. SAP tools and functionality are already in place to help developers migrate toward automated performance testing and adhere to these requirements.

Tests must be executed for complete transactions or business scenarios

SAP's extended Computer Aided Test Tool (eCATT), transaction SECATT, can be employed effectively here. This tool is usually intended for functional testing, but it is eminently suitable for performance testing, too. eCATTs are based on test scripts that contain all the user interaction steps of the scenario to be tested. eCATT scripts, once recorded or written, can be modified by manual editing at any time (see Figure 1). As eCATT test scripts can execute entire business scenarios that run across system boundaries on a system landscape, they lend themselves to the automation of complex performance test runs.

Figure 1: Editing an eCATT Script Parameter

The process of measuring performance must not affect the performance result of the scenario being tested

To avoid any adverse effects of performance testing on the performance behavior and results of the scenario being tested, GUI scripting can be employed. In the GUI scripting process, the test driver (running an eCATT script) is initiated from outside the part of the system landscape being tested, analogous to a user interaction. This ensures that the measurement itself has no performance-relevant influence on the tested system components, which yields highly stable and reliable results.

The performance test must be repeatable

Performance tests must be executed repeatedly to obtain reliable results. Each test run must be exactly the same, working on precisely the same data and customizing settings. Any data created or modified during a previous test run must not influence subsequent test runs. Before each test run, any business or customizing data that was changed during the previous run must be reset to its starting values.

Automated performance test results must be reliable

To guarantee that the measured performance figures (CPU time, memory, etc.) are reliable, multiple runs of each test script are necessary. Based on the results of these runs, an average value for each performance figure can be computed, taking into consideration the corresponding standard deviation (see sidebar). Averaging over multiple runs minimizes the impact of statistical variation, so test results are more stable and reliable.

Running Performance Test Series

Once your performance tests are automated, you can run whole performance test series. To obtain information about changes in performance within a development cycle or between different release versions, you can set up performance test series for regression testing. By comparing the results (CPU time, memory, etc.) of successive test runs, you can monitor for positive or negative performance trends in the tested scenarios.

You can also set up performance test series to prove the scalability of your solution. A scalable application, scenario, component, or system can be expanded and reduced — in size, volume, number of concurrent users, and so on — and still continue to function properly and predictably. The ideal performance goal is linear scalability, meaning that resource consumption grows linearly (or less than linearly) with the required load; in other words, tripling the amount of data to be processed requires at most three times as much CPU time.

Scalability can be considered on three different levels:

  • For a whole system — Performance factors (such as CPU load and memory consumption) depend on all users working with the system

  • Within an application or scenario — Performance factors depend on the number of processed objects

  • For the parallel execution of an application or scenario — Performance factors depend on the number of applications and scenarios running simultaneously

Key Term: Standard Deviation

Standard deviation is a value that can be used to determine the quality of a corresponding average value. The smaller the standard deviation, the better the average value. For example, 5 microseconds as an average value derived from 4.5 and 5.5 microseconds is good. But 5 microseconds as an average value derived from 0 and 10 microseconds is bad because the original values vary too much. As a rule of thumb, a standard deviation of 10% or less of the average value (0.5 microseconds, in this case) is considered good.
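As a simple illustration of the arithmetic behind this rule of thumb, the following ABAP sketch computes the average and standard deviation of a series of measured values. The report name, variable names, and the two sample values are invented for illustration; SAP's test tools aggregate these figures automatically.

REPORT z_perf_stddev_demo.

* Illustrative measurements (in microseconds) from repeated test runs
DATA: lt_values TYPE STANDARD TABLE OF f,
      lv_value  TYPE f,
      lv_count  TYPE i,
      lv_mean   TYPE f,
      lv_var    TYPE f,
      lv_stddev TYPE f.

lv_value = '4.5'. APPEND lv_value TO lt_values.
lv_value = '5.5'. APPEND lv_value TO lt_values.

DESCRIBE TABLE lt_values LINES lv_count.

* Average value over all runs
LOOP AT lt_values INTO lv_value.
  lv_mean = lv_mean + lv_value.
ENDLOOP.
lv_mean = lv_mean / lv_count.

* Standard deviation as a measure of the quality of that average
LOOP AT lt_values INTO lv_value.
  lv_var = lv_var + ( lv_value - lv_mean ) ** 2.
ENDLOOP.
lv_var    = lv_var / lv_count.
lv_stddev = sqrt( lv_var ).

WRITE: / 'Average:           ', lv_mean,
       / 'Standard deviation:', lv_stddev.

* Rule of thumb from the sidebar: the average is considered good if
* lv_stddev is no more than 10% of lv_mean.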

Let's take a closer look at the "application or scenario" case. The most important performance figures here are CPU time, memory consumption, and overall response time. On one hand, testing scalability within an application means testing performance with a series of performance tests — for 1, 10, and 100 processed objects in a scenario, for example. On the other hand, it means executing a scenario — say with 1, 2, and 10 parallel runs in a test series — to check for linear resource consumption of CPU and memory.

If the behavior of these performance figures is greater than linear — say, it's quadratic, meaning that if the number of objects is tripled, the CPU load is nine (3²) times as high — this must be investigated and improved. By extrapolating the results of a scalability test you can also predict how the tested scenario will behave in a load situation that extends beyond the capabilities of your test system environment. For example, you can extrapolate the test for a case with millions of objects (see sidebar below). If the result of an extrapolation is worse than linear, this must be remedied.

A Quick Lesson in Extrapolation

To extrapolate a performance result, developers must carry out a so-called linear or quadratic regression using the performance figures of the test runs.* In this way, a function f can be determined that extrapolates results for cases that need not be tested (a simplified code sketch follows this sidebar).

Let's look at an example. Say that five performance tests are executed for 1, 2, 5, 10, and 20 processed objects within an application. The application fulfills scalability conditions if the resulting CPU time also behaves in accordance with the factors 2, 5, 10, and 20, compared to the value for 1 processed object. For an untested case of 2,000 objects, the factor for CPU time is not allowed to exceed 2,000 — compared to the value for 1 processed object — to fulfill linear dependency. This factor can be computed using the above-determined function f.

* Quadratic regression testing is sufficient to detect all nonlinear behaviors.
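To make the mechanics concrete, here is a minimal ABAP sketch that fits a straight line f(x) = a + b * x to a hypothetical measurement series by least-squares linear regression and then extrapolates the CPU-time factor for the untested case of 2,000 objects. The report name, the measured values, and all variable names are invented for illustration, and the sketch is deliberately simplified: reliably detecting nonlinear behavior requires the quadratic regression mentioned in the footnote, which this linear fit does not perform.

REPORT z_perf_extrapolation_demo.

TYPES: BEGIN OF ty_point,
         objects TYPE f,    " number of processed objects
         cpu     TYPE f,    " measured CPU time (illustrative units)
       END OF ty_point.

DATA: lt_points TYPE STANDARD TABLE OF ty_point,
      ls_point  TYPE ty_point,
      lv_n      TYPE f,
      lv_sx     TYPE f,
      lv_sy     TYPE f,
      lv_sxx    TYPE f,
      lv_sxy    TYPE f,
      lv_a      TYPE f,    " intercept of f(x) = a + b * x
      lv_b      TYPE f,    " slope
      lv_f1     TYPE f,    " extrapolated value for 1 object
      lv_f2000  TYPE f,    " extrapolated value for 2,000 objects
      lv_factor TYPE f.

* Hypothetical measurement series for 1, 2, 5, 10, and 20 objects
ls_point-objects = 1.  ls_point-cpu = 12.  APPEND ls_point TO lt_points.
ls_point-objects = 2.  ls_point-cpu = 23.  APPEND ls_point TO lt_points.
ls_point-objects = 5.  ls_point-cpu = 56.  APPEND ls_point TO lt_points.
ls_point-objects = 10. ls_point-cpu = 110. APPEND ls_point TO lt_points.
ls_point-objects = 20. ls_point-cpu = 215. APPEND ls_point TO lt_points.

* Least-squares fit of f(x) = a + b * x
LOOP AT lt_points INTO ls_point.
  lv_n   = lv_n   + 1.
  lv_sx  = lv_sx  + ls_point-objects.
  lv_sy  = lv_sy  + ls_point-cpu.
  lv_sxx = lv_sxx + ls_point-objects * ls_point-objects.
  lv_sxy = lv_sxy + ls_point-objects * ls_point-cpu.
ENDLOOP.
lv_b = ( lv_n * lv_sxy - lv_sx * lv_sy ) / ( lv_n * lv_sxx - lv_sx * lv_sx ).
lv_a = ( lv_sy - lv_b * lv_sx ) / lv_n.

* Extrapolate to the untested case of 2,000 objects and compute the
* CPU-time factor relative to the value for 1 processed object
lv_f1     = lv_a + lv_b * 1.
lv_f2000  = lv_a + lv_b * 2000.
lv_factor = lv_f2000 / lv_f1.

WRITE: / 'Extrapolated CPU-time factor for 2,000 objects:', lv_factor.
IF lv_factor > 2000.
  WRITE: / 'Worse than linear: scalability must be investigated.'.
ELSE.
  WRITE: / 'Linear scalability condition fulfilled.'.
ENDIF.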

SAP's Performance Testing Outlook

SAP already offers robust support for investigating key performance killers through its Code Inspector (transaction SCI, for analyzing static program code), statistical records, SQL trace data, and ABAP trace data (transaction SE30). By default, statistical records are written by the system and can be displayed using transaction STAD, along with transactions ST03N, ST03G, and STATTRACE. SQL trace data, in contrast, needs to be generated explicitly using transaction ST05.

Down the road, SAP will further extend its support for automated performance testing. For example, SAP is integrating the concept of eCATT test configurations within a central performance test tool (see Figure 2). This tool:

  • Integrates and extends the functionality of the above-mentioned performance measurement tools

  • Collects performance data from the involved system components in a central database

  • Automatically aggregates the data of repeated runs in the central test system's database

  • Presents the performance data as average values resulting from multiple runs of the scenarios that are tested

  • Compares performance figures automatically between different test runs (regression tests/scalability)

  • Evaluates performance results automatically against predefined conditions with regard to the key performance indicators — for example, a maximum of two round trips per interaction step

 

Figure 2: SAP's Support for the Automated Testing Process

Legend:

  1. The central performance test tool invokes eCATTs on the central test system (this is usually also where the central test tool is executed)

  2. The eCATT scripts run the recorded scenarios to be tested, involving all the necessary components of a system landscape

  3. The central performance test tool retrieves the relevant performance figures from the components used in the system landscape (e.g., data also displayed in transactions STAD, ST05)

  4. The central performance test tool calls Code Inspector functionality (transaction SCI) to automatically perform a coding check of all involved programs detected

  5. The central performance test tool stores the collected performance results on the central database

SAP provides comprehensive support for performance testing and analysis and has a strong commitment to continue to share the performance best practices SAP has gained in working with thousands of customers. As a result, development teams are equipped to incorporate performance testing — especially automated, repeatable performance testing — into their company's development practices from the start.

For background information on testing and eCATTs, please visit the SAP Service Marketplace (http://service.sap.com/netweaver --> SAP NetWeaver in Detail --> Solution Life-Cycle Management --> Customizing and Testing Tools --> Customizing & Testing Tools In Detail --> Testing). For more information on automated performance testing, please refer to my Weblog on the SAP Developer Network (www.sdn.sap.com).


1 At one customer call center, a two-second system response time delay resulted in 15% fewer daily orders.

2 See the section "Running Performance Test Series" for more on regression testing.


Armin Hechler-Stark finished his studies in computer science and electrical engineering at the University of Saarbruecken in Germany in 1991. After working as a software developer for a small software company, he joined SAP in 1994 as an application developer for master data management (R/3). He has been a member of SAP's Performance, Data Management, and Scalability team since 2000, where he is currently responsible for developing automated performance measurement tools.
