Tips from Joerg Boeke on tuning your SAP BW systems: Recent Q&A on improving query performance (transcript)

by Bridget Kotelly

I recently moderated an online forum with BI 2012 speaker and BI expert Joerg Boeke. During this one-hour Q&A on April 24, Joerg took questions on improving performance and avoiding slow SAP BW queries, based on his session A technical guide to SAP NetWeaver BW landscape optimization for BI 2012. You can attend this session at the upcoming BI 2012 Milan conference, June 6-8.

Questions covered topics such as performance issues with BEx Analyzer after Office 2010 upgrades, virtual characteristics and keyfigures, how BWA can improve BW performance and how to analyze it, diagnosing memory allocation problems, when to use the high cardinality flag, partitioning InfoProviders, APD, and combining compression and partitioning.

You can view the Forum discussion at any time, or read the edited transcript of the discussion here:

Bridget Kotelly

Conference Producer, BI 2012


Bridget Kotelly: We’re looking forward to some great discussion and tips on tuning your BW landscape for better query performance today. Thanks to consultant and BI expert Joerg Boeke for joining us and taking these questions during the hour.

Joerg recently presented a number of sessions at our recent BI 2012 conference in Las Vegas, including a session on this topic. SAPinsider’s upcoming BI 2012 Europe event, June 6-8, also has a number of sessions on BW performance, and I invite you to join us there as well! 

I encourage you to post your questions early in the hour. We expect quite a bit of discussion today and hope to get to as many questions as possible before the Forum ends at 12:30pm EDT.

CBBIC: At my client, we have experienced performance issues with BW queries via BEx Analyzer ever since we upgraded to Office 2010. Our current configuration:

BI Components: 7.20 Patch 402

Check done on: 23-4-2012 17:48:11

Excel Version: Excel 2010 (14.0.6024.1000)

Windows Version: Windows 7 Service Pack 1 (7601)

Default Browser: Internet Explorer (9.0.8112.16421)

SAP GUI: 720 Patch 7

What could be the issue? When executed via RSRT, queries run within seconds, but through the Analyzer the same query takes 10-20 minutes.

Thanks in advance for your suggestions.

Christian Brokken - CBBIC

Joerg Boeke: Hello,

Yes, indeed, this is a common problem. You can even test it in the frontend by opening new queries with the BEx 3.5 frontend.

The new world works differently: the old 3.x environment stored variables and settings in two hidden sheets, while the new frontend stores that information in the Excel structure itself.

This cannot be changed, but there are a lot of performance settings you can apply; I cover them in my session in Milano.

First of all, experience with customers shows that migrated workbooks store some orphan data, so try to build new ones.

Secondly, check these OSS notes:

- Note 912367 (RSADMIN Performance Parameter)

- Note 1510916 (Collective Note: RS_FRONTEND_INIT parameters)

These will help a lot with new workbooks.
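[Editor's note: the RSADMIN parameters from these notes are normally maintained with the standard report SAP_RSADMIN_MAINTAIN (run via SE38), not by editing the table directly. A minimal sketch for reviewing what is already set, assuming only the standard RSADMIN layout (columns OBJECT and VALUE):]

```abap
" Minimal sketch: list current RSADMIN entries before applying the
" parameters from the notes above. RSADMIN is the standard BW parameter
" table; OBJECT holds the parameter name and VALUE its setting.
REPORT z_list_rsadmin.

SELECT object, value
  FROM rsadmin
  ORDER BY object
  INTO TABLE @DATA(lt_params).

LOOP AT lt_params INTO DATA(ls_param).
  WRITE: / ls_param-object, ls_param-value.
ENDLOOP.
```

[Changes themselves should then be made with SAP_RSADMIN_MAINTAIN rather than direct table updates.]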


petergmk: Hi Joerg,

My client is a merchant bank implementing BW and BO (WEBI) for their financial reporting. My role is to create all the BEx queries required for building the universe on which the WEBI reports are created.

In these BEx queries, based on the standard DSO 0FIGL_O10, Financial Statement Item (0GLACCEXT) is used as a hierarchy in the rows, and the key figure Balance (0BALANCE) is placed in the columns.

0GLACCEXT is something I added to 0FIGL_O10, and I created a BAdI for a virtual characteristic to derive the value of 0GLACCEXT at query runtime based on the value of 0BALANCE. For example, account 30001010 appears under two different parent nodes, Assets and Liabilities, in the 0GLACCEXT hierarchy; in the BEx report it should go to Assets if the runtime balance is greater than zero, otherwise to Liabilities.

Without this BAdI solution, the data would be double-counted in the report.

But I am worried about the performance. I was told the number of records generated for the G/L account transactions is about half a million per day.

Any suggestions on the design and performance would be much appreciated. I have not had the chance to gauge performance yet because not much test data is available at the moment, though I believe there will definitely be a performance issue with the data volume described. Thank you in advance.

Ming Kai


Joerg Boeke: Yes, your concern is justified.

Virtual characteristics and keyfigures eat up a lot of time, because they are computed at query runtime across the different aggregation levels of the data.

If possible, try to populate those keyfigures as real keyfigures in your design.

This shifts the performance load to warehouse management (WHM), not to reporting execution time.

Otherwise your users will get bored.

Another option: try cache warming. Once the records are in the cache, execution is faster because the DB time disappears.

You may need to contact me outside of this session, as a full answer would take longer than we have here.
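[Editor's note: for readers unfamiliar with the pattern discussed here, virtual characteristics are typically implemented via BAdI RSR_OLAP_BADI. Below is a simplified, hypothetical sketch of the COMPUTE method for Ming Kai's scenario; the attribute names (p_cha_glaccext, p_kyf_balance) and node constants stand in for values normally resolved in the DEFINE/INITIALIZE methods:]

```abap
" Simplified sketch of a virtual-characteristic COMPUTE method in an
" RSR_OLAP_BADI implementation class. Component positions and hierarchy
" node keys are hypothetical; in a real implementation they are
" determined in DEFINE/INITIALIZE.
METHOD if_ex_rsr_olap_badi~compute.
  FIELD-SYMBOLS: <lv_glaccext> TYPE any,   " 0GLACCEXT in the data row
                 <lv_balance>  TYPE any.   " 0BALANCE in the data row

  ASSIGN COMPONENT p_cha_glaccext OF STRUCTURE c_s_data TO <lv_glaccext>.
  ASSIGN COMPONENT p_kyf_balance  OF STRUCTURE c_s_data TO <lv_balance>.
  CHECK <lv_glaccext> IS ASSIGNED AND <lv_balance> IS ASSIGNED.

  " Route the account at runtime: positive balance -> Assets node,
  " otherwise -> Liabilities node (hypothetical node keys).
  IF <lv_balance> > 0.
    <lv_glaccext> = gc_node_assets.
  ELSE.
    <lv_glaccext> = gc_node_liabilities.
  ENDIF.
ENDMETHOD.
```

[As Joerg notes, this logic runs per data row at query time, which is exactly why materializing the result as a real keyfigure during data load is usually the faster option.]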


jgmccabe: We are implementing an SAP BW system next month. Can you recommend some classes/papers on performance tuning and monitoring? Thanks in advance,

John M 

Joerg Boeke: Check OSS as follows:

- Note 1510916 Collective Note RS_FRONTEND_INIT parameters

- Note 912367 RSADMIN Performance Parameter

Optimizing indexing and buffering is also very important.

In the Milano session, you will get an overview of several more areas.


Dave R: We will be implementing BWA in the next couple months. Can you recommend some resources to use to help identify which queries will benefit the most from BWA?

We also have a large cube that changes hourly; will this present any problems if loaded into BWA? Any tips or tricks we need to be aware of in order to pull this off without problems or high maintenance?

Also, are there other system settings that need to be tweaked to get the full benefit, or the most efficient use, from adding BWA to our BW 7.3 system?

Thanks in advance,

Dave R

Joerg Boeke: A great way to check is to use the BW technical content. There is a set of cubes tracking all kinds of information.

One tracks query performance in terms of time spent in OLAP and on the DB. With the help of BWA, you can almost eliminate the DB time.

In parallel, you can also check the tables storing query execution statistics: RSDDSTAT_OLAP holds all the OLAP-side information for each query, and RSDDSTAT_DM shows the time spent on the DB and how many records were retrieved.

Just go to Transaction SE11 or SE16 and type in those table names.
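[Editor's note: a minimal sketch of how one might aggregate the statistics tables Joerg mentions to shortlist BWA candidates. The INFOPROV, QDBSEL, and QDBTRANS fields (InfoProvider, records selected on the DB, records transferred) are quoted from memory; verify the exact column names in SE11 before running:]

```abap
" Sketch: rank InfoProviders by DB records selected - a rough proxy
" for the queries that would benefit most from BWA.
SELECT infoprov,
       SUM( qdbsel )   AS db_selected,
       SUM( qdbtrans ) AS db_transferred
  FROM rsddstat_dm
  GROUP BY infoprov
  ORDER BY db_selected DESCENDING
  INTO TABLE @DATA(lt_dm_stats)
  UP TO 20 ROWS.

LOOP AT lt_dm_stats INTO DATA(ls_stat).
  " A high selected-to-transferred ratio means heavy DB-side filtering
  " and aggregation - a classic BWA win.
  WRITE: / ls_stat-infoprov, ls_stat-db_selected, ls_stat-db_transferred.
ENDLOOP.
```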


CsabaGrunblatt: Hello Joerg,

I would be interested in improving analysis process performance. I need to run queries in the background and store the results in DSOs. Some queries are very complex and may return 30,000-50,000 rows. I cannot reduce this; I need to store the results at the material/plant/month level.

In some cases (more than 40,000 rows), the analysis process fails with TSV_TNEW_PAGE_ALLOC_FAILED.

BASIS cannot help...much.

Any ideas or suggestions?



Joerg Boeke: The problem with huge queries is memory allocation (though in your case I am surprised, because yours is not really huge).

I assume your BW system is not properly set up in terms of BW parameters.

The error you describe is very common. Check OSS for that error.

If your system is too weak to increase the shared memory (SAP recommends at least 200 MB), you can use the Analysis Process Designer (APD, transaction RSANWB) to distribute queries to a DSO.

I did a session with WIS in Las Vegas some years ago on that topic. APD allows you to execute queries in packages (similar to loading into BW) to limit memory allocation.


gdoudalis: Hello Joerg,

Does the high cardinality flag, when creating a dimension, have any impact on an MS SQL database? Is it necessary to use this flag along with the line item flag?

Kind regards,



Joerg Boeke: As a rule of thumb, the high cardinality flag should be used for dimensions whose number of entries exceeds roughly 20% of the fact table's rows.

It creates a different index type (optimized for huge dimensions) instead of the regular one.

I am not 100% sure why SQL Server would not support it; MS SQL Server is normally one of the fastest DBs.

The line item flag really matters! It is commonly used improperly. Line item dimensions should be used for characteristics that have no second use, like sales document number (characteristics that may grow infinitely).

Also, make sure to use as many dimensions as possible.

Check report SAP_INFOCUBE_DESIGNS (via SE38); it gives you an idea of which cubes might need to be optimized (shown in red), but act only if you are actually facing performance problems.

I once redesigned a cube using optimal dimension settings, and query performance came down from 5 minutes to 10 seconds (without aggregates or supersonic tricks).
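[Editor's note: the 20% rule of thumb can be checked directly against the dimension and fact tables. A sketch using the standard /BIC/D<cube><n> and /BIC/F<cube> table-name patterns; the cube name ZSALES and the dimension number are hypothetical:]

```abap
" Sketch: compare dimension-table vs. fact-table row counts to judge
" whether a dimension crosses the ~20% high-cardinality threshold.
DATA(lv_dim_tab)  = '/BIC/DZSALES1'.  " hypothetical dimension table 1
DATA(lv_fact_tab) = '/BIC/FZSALES'.   " hypothetical F fact table

SELECT COUNT(*) FROM (lv_dim_tab)  INTO @DATA(lv_dim_rows).
SELECT COUNT(*) FROM (lv_fact_tab) INTO @DATA(lv_fact_rows).

IF lv_fact_rows > 0 AND lv_dim_rows * 5 > lv_fact_rows.
  " More than ~20% of fact rows: candidate for the high cardinality
  " flag, or for the line item flag if it holds one characteristic.
  WRITE: / |{ lv_dim_tab }: { lv_dim_rows } rows vs. { lv_fact_rows } fact rows - review its flags|.
ENDIF.
```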


peter_stoffersen: Hello Joerg,

What are the impacts (if any) on background jobs running in the BW system at the same time as the users are executing their reports?

Is there a connection between poor query response times and different background jobs running - even though those background jobs are not reading/writing to the same tables as the users are reading from their reports?

And if so – why will the system suffer from poor performance if the background jobs are not reading / writing to the same tables as the users?

Thanks, Peter

Joerg Boeke: That is a VERY IMPORTANT question and fact.

Because you have a limited number of dialog and background work processes sharing your CPUs, loading data can drag query performance down immensely.
Try running a video on your PC and opening two BEx sessions with huge data in parallel -- your CPU will get warm! :)

The same will happen with your BW server.

To see what is going on in your system, use transaction SM50 and choose 'List' > 'CPU' from the menu. In the resulting display you should have at least one entry each (for dialog and batch) with 0:00 usage time; otherwise you need more processes, because your workload is being queued.

CPU usage can be monitored with transaction OS07 ('details'); those are just two major areas to check. It takes precise fine-tuning to get the best out of your BW. I do it very often, and it helps a lot.
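[Editor's note: the SM50 check Joerg describes can also be scripted. A sketch using function module TH_WPINFO, which returns the work process list; the WPINFO field names below are quoted from memory, so verify them in SE11/SE37 first:]

```abap
" Sketch: list dialog (DIA) and batch (BTC) work processes and their
" status, similar in spirit to the SM50 overview.
DATA lt_wplist TYPE STANDARD TABLE OF wpinfo.

CALL FUNCTION 'TH_WPINFO'
  TABLES
    wplist = lt_wplist.

LOOP AT lt_wplist INTO DATA(ls_wp) WHERE wp_typ = 'DIA' OR wp_typ = 'BTC'.
  " If no process of a type is ever idle, the workload is queuing up.
  WRITE: / ls_wp-wp_no, ls_wp-wp_typ, ls_wp-wp_status.
ENDLOOP.
```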


Enrique Quispe: Hello Joerg,

I have several cubes with more than 200,000,000 rows. Is it necessary to partition these InfoProviders?

Can all cubes be partitioned, or are there limitations?

How much does partitioning increase the speed of a cube?


Enrique Quispe


Joerg Boeke: Partitioning the InfoProvider, either by time (the standard) or logically by any characteristic, will reduce the time needed to retrieve data from the DB.

So if you have 200 million rows in total but only, say, 2 million for the current year, the query only needs to retrieve that data, and everything gets faster.

I do a lot with logical partitioning. There is no real limitation; just consider that with time-based partitioning (period, month, etc.), the partition range you define may become outdated in 10-20 years, whatever you decide. In that case you need to repartition.

In terms of speed, partitioning plus optimal indexing can improve performance by a factor of 10 or more.


Dave R: Can Excel and PowerPivot directly read data stored in BWA?

Can SSIS or any 3rd party ETL tool load data to BWA?

Joerg Boeke: As far as I know, you cannot read BWA directly.

But if you access BW data through queries, via OLE DB or 3rd-party tools, the query will automatically read the data from BWA whenever it is processed by BWA.

MS SSIS can retrieve such data in the way described above; I have used that in a customer project.


CsabaGrunblatt: Hello Jörg,

Would you consider implementing BWA (now called BOA?) at all without upgrading to version 7.3? We are on 7.0 now.

How long does a BWA implementation take (medium-size company with about 1 TB of data, 4 years of history, ca. 60-70 cubes, 250-300 users)? What type of resources do we need?

Can HANA now be considered as a mature product to implement?



Joerg Boeke: Implementing BWA will help a lot, eliminating almost all of the data retrieval time on the DB. Whether you are on version 7.0 or 7.3 does not really matter.

In terms of your BW size (1 TB), I think there might be other ways to optimize your BW, because 1 TB is not a huge installation.

Maybe you can consider partitioning to scale down actual data retrieval for reporting.

The user load can be analyzed with the technical content. I'd check when the reporting peak times are and how your system behaves then. It may be that processes are queued, or that insufficient CPU power is available at peak times.

During one of my customer BW system optimizations (you need a full day just to analyze a system and spot the major problems), this was exactly the case. We added another application server and the problem was gone.

HANA is another story.

I really like HANA because, besides speeding up queries, it takes away a lot of pain in WHM processes as well as in design considerations.

It is new, but for regular BW usage I think it's usable.

I started with BW 1.0, and 1.2A was my first customer experience. Compared to those times, SAP has learned a lot, and I think this new technology will change the way we operate BW.


CsabaGrunblatt: Jörg, thanks for your reply on APD. Actually, I am already using the APD (RSANWB) to store the query results in DSOs. The error occurs during the APD run.

I have also set the 'divide data collection into packages' flag; no success, it failed again. I selected plant as the partitioning characteristic.

Admittedly, this may be a fairly big run, with more than 60,000-80,000 records.

I have also set the 'process data in memory' flag (Goto > Performance Settings > Process Data in Memory).

Any other ideas?

I will have the shared memory set with BASIS.



Joerg Boeke: In your case, try to DESELECT the 'process data in memory' parameter. It will take longer, but it should run stably.

As mentioned, there are a lot of OSS notes around your allocation error.

I once had a similar, very rare case where an Oracle parameter was causing this as well.

Just contact me privately; I need to look up that parameter and will let you know (cross your fingers that I find it!). :)



ajaygupta: We have a query on a VirtualProvider based on the 0EC_PCA_1 DataSource. The query calculates the balance for the current year to date and the previous year. Query performance is very poor: it takes 1,400 to 1,500 seconds. Can you please suggest something we can do?

Joerg Boeke: First I would check (in transaction RSRT) where the time is being spent: on the DB or in OLAP.

If the time is spent on the DB, I recommend cache warming: use a process chain to run the query ahead of the users, with its properties changed in RSRT so that it reads all values, not just those needed for the current navigation step.

If the time is spent in OLAP, with hundreds of thousands or millions of records being processed there, there may be no real way to reduce it.

In that case, it might be wise to add the calculated keyfigures as physical keyfigures in the cube. Doing so transfers the 1,500 seconds to WHM, not to the user's query execution.
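[Editor's note: cache warming as described here is usually just the query executed in the background, e.g. from a program scheduled in a process chain. A sketch using RRW3_GET_QUERY_VIEW_DATA, a standard function module for executing a query; the parameter and type names below are quoted from memory, and the query/InfoProvider names are hypothetical, so verify the interface in SE37 before use:]

```abap
" Sketch: execute a query in batch so its result set lands in the
" OLAP cache before users run it interactively.
DATA: lt_axis_data   TYPE rrws_thx_axis_data,
      lt_cell_data   TYPE rrws_t_cell,
      lt_axis_info   TYPE rrws_thx_axis_info,
      lt_txt_symbols TYPE rrws_t_text_symbols.

CALL FUNCTION 'RRW3_GET_QUERY_VIEW_DATA'
  EXPORTING
    i_infoprovider = 'ZPCA_V01'       " hypothetical VirtualProvider
    i_query        = 'ZPCA_V01_Q001'  " hypothetical query name
  IMPORTING
    e_axis_data    = lt_axis_data
    e_cell_data    = lt_cell_data
    e_axis_info    = lt_axis_info
    e_txt_symbols  = lt_txt_symbols.
" The returned tables can be discarded; the point is filling the cache.
```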


nirajchalwade: Hello Joerg,

We are doing compression and reorganization on our SAP BI database; the DB is Oracle. Will there be any improvement in query performance?

Joerg Boeke: Yes! Compress compress compress!

Query performance may go up by a factor of 10 or more. The reason: Oracle partitions the F-tables (non-compressed data) by request ID and builds an index for each request.

Let's assume you have 100 million records in your cube but only 10K records for the previous month (a column in your report). The system recognizes the tiny per-request indexes as not usable and starts a full table scan (reading all the data).

With compression (in combination with partitioning, the best approach), the index is based on, for example, the period, and the system reads only the data (10K) for the selected period.

Compressing is even more important if you use non-cumulative keyfigures (e.g., stocks).

The reference pointer will be updated only during compression, so compress whatever you can! :)

The pity about compression is (I know this question will arise): you cannot delete a request anymore, because that request ID/key is gone after compression.

So, if you're not sure whether your data will have to be revised, wait 1 or 2 days before compressing it. I recommend keeping the last 20 requests (in a business month) out of compression.

If you need to change old data, just use a repair request.

So again, YES compression will help VERY MUCH.
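[Editor's note: a quick way to see how much uncompressed data a cube is carrying is to compare its E-table (compressed) and F-table (per-request) row counts. A sketch using the standard /BIC/E<cube> and /BIC/F<cube> name patterns; the cube name is hypothetical:]

```abap
" Sketch: compare compressed (E) vs. uncompressed (F) fact-table sizes.
" A large F-table share means many requests are awaiting compression.
DATA(lv_e_tab) = '/BIC/EZSALES'.   " hypothetical E (compressed) table
DATA(lv_f_tab) = '/BIC/FZSALES'.   " hypothetical F (request) table

SELECT COUNT(*) FROM (lv_e_tab) INTO @DATA(lv_e_rows).
SELECT COUNT(*) FROM (lv_f_tab) INTO @DATA(lv_f_rows).

WRITE: / |E-table: { lv_e_rows } rows, F-table: { lv_f_rows } rows|.
```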


girishsavadi: Hi

We have a VM image with BW 7.1 installed on a Windows 2003 server. We have allocated 25 GB of memory, and system performance is way too slow. I don't know how to make SAP BW use all that memory; even though the VM is allocated 25 GB, I don't think BW uses all of it.

Could this be the reason that even a simple BEx query or a simple t-code like RSA1 takes forever to come up on the screen?

Another area that I think could be the problem: the BW database (SQL 2005) is on the same VM image. How do I move the DB to another dedicated server to take load off this VM?

Joerg Boeke: It depends on what you mean by 25 GB assigned to BW. If that is pure memory, it should be plenty. Just remember that in your VM, BW shares that memory, as it would on a physical system.

The operating system of that VM eats up some of the memory (let's assume 8 GB assigned to Windows), and the DB running in that VM uses memory as well; commonly the DB cache is set to 4 GB. Then the SAP system itself needs some memory...

You see, 25 GB is not 25 GB for BW.

What makes me wonder is that you say performance is weak. I run a laptop BW demo system with 8 GB and it is pretty pleasant to use.

What is important in your system is the IO channel. When your VM changes data in BW or reads data from the DB, it accesses the physical disk assigned to your VM. Maybe that is the weak point?

Check transaction ST03 and analyze the IO wait times.

Another idea: check transaction RSRT and execute your query in debug mode with statistical information displayed (and without using the cache).

Under event number 9000 you will see the DB access times.

I think your problem might be located in that area.


peter_stoffersen: Speaking of the daily fine-tuning of the BW system: which transactions do you see as the top X most important ones?



Joerg Boeke: It’s hard to say which are the X most important transactions.

It depends on the area of performance where you might have a problem.

I commonly use the following transactions during my customer onsite (or via WebEx) remote system analysis sessions:


- Spot problems that occur due to index issues (Oracle tends to create primary indexes up to 10 times the size of the original data). As an example, in combination with orphan data, I once freed 1 TB (of an overall 4 TB BW installation) by creating new indexes.

- Spot the biggest data graves (obsolete PSA data) - see my session in Milano

OS07: Spot CPU or memory issues (CPU idle time less than 25% => hardware bottleneck)

ST03: Spot the individual areas losing time (IO channel, wait times, time in extraction or staging)

SM50: Sufficient and optimized processes (see one of my previous answers). Do you swap day and night processes? (More batch during the night)

ST13: My favorite. For this you need the ST-A/PI add-on installed in your system; just Google 'SAP transaction ST13', or see my session. :) It is a toolset for spotting problems in process chains and dimension buffering, and for fast access to data in a specific InfoProvider.


CBBIC: Hi Joerg,

Thank you for your suggestions; I will evaluate them in the coming days. Is there any way I can contact you in case I have additional questions?

Best regards,

Christian Brokken - CBBIC


Joerg Boeke: For all in the chat, feel free to contact me at


Bridget Kotelly:

Thank you again to Joerg Boeke of BIAnalyst for joining us today - and thanks to all who posted questions and followed the discussion!

A full summary of all the questions will be available here in Insider Learning Network's BI/BW Group. If you have registered for this Q&A, you will receive an email alerting you when the transcript is posted.

For more discussion of BW query performance, I invite you to join me at BI 2012 in Milan this June. Unfortunately, Joerg cannot attend, but he authored the session A technical guide to SAP NetWeaver BW landscape optimization, which will be presented in Milan. There is also an entire track on System Performance & Landscape Optimization. You'll find full details on the BI 2012 conference web site. I hope to see you there!
