Q and A


Summary: Forum with Dr. Berg on boosting BI performance

by Kristin Bent

On March 9, 2011, BI expert Dr. Bjarne Berg answered questions from Insider Learning Network members about improving BI reporting performance in an exclusive moderated BI-BW Forum.


The following is an edited transcript from this forum. You can also see Dr. Berg’s own summary of the forum in his blog.


About Dr. Berg


Dr. Berg is an internationally recognized expert in Business Intelligence and is a frequent speaker at major BI and SAP conferences worldwide. Previously, Dr. Berg was a Senior Manager in PricewaterhouseCoopers' global data warehouse group, the Director of BI at VIP consulting, and the Vice President of IT at Comerit Inc. Dr. Berg was a featured speaker at BI 2011 in Las Vegas on reporting and BI, including a session on “A Comprehensive Guide to Plan for, Implement, and Maintain SAP NetWeaver BW Accelerator.”


Kristin Bent: Welcome to the BI-BW Forum hosted by Dr. Bjarne Berg.


Thank you for joining us today, Berg!  I see that there are some questions already posted, so please feel free to take those in advance, and then move on to other questions as they roll in.


KiranKumarGangavaram: Hello Dr. Berg,


My question concerns the performance improvement tools available in BW for DSOs compared to those for InfoCubes.


Can we use tools that are available for InfoCubes, such as partitioning and rebuilding statistics, for DSOs as well?


How should we handle large base DSOs where data flows in every year? These base DSOs feed many other data targets, so we cannot archive data in them.




DR. BERG: Hi Kiran,


There are several ways to address this.  As DSOs are loaded and data is activated, the size of the data store and the number of requests become important for data activation time.  Large DSOs with more than 100 million records and a lot of load requests increase data activation time.


These DSOs should be partitioned to reduce the size and number of requests for faster activation times. There are also known activation issues when parallel activation is attempted on very large DSOs (see Note 634458, 'ODS object: Activation fails - DEADLOCK', and Note 84348, 'Oracle deadlocks, ORA-00060').


So first, physically partition large objects by, for example, fiscal year; second, you then only have to build statistics and indexes on the most recent DSO, which makes everything go faster.  A 'hidden bonus' is that by reloading the historical data into their own DSOs, you also reduce the number of requests to one per DSO, which speeds up future data loads for the current-period DSO.


So: divide and win (take a look at the new semantically partitioned object (SPO) feature and wizards in BW 7.3 that may help you do this).


Dr. Berg
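To make the "divide and win" idea concrete, here is a minimal Python sketch of the routing logic behind partitioning by fiscal year. It illustrates the concept only; it is not the SPO wizard or any SAP code, and the FISCYEAR field name is an assumption.

```python
# Illustrative only: route incoming records to a per-fiscal-year partition,
# so historical partitions end up consolidated while only the current-year
# partition keeps accumulating new load requests.
from collections import defaultdict

def route_by_fiscal_year(records):
    """Group incoming records by their fiscal year key (assumed: FISCYEAR)."""
    partitions = defaultdict(list)
    for rec in records:
        partitions[rec["FISCYEAR"]].append(rec)
    return partitions

load = [
    {"FISCYEAR": 2009, "AMOUNT": 100.0},
    {"FISCYEAR": 2010, "AMOUNT": 250.0},
    {"FISCYEAR": 2011, "AMOUNT": 75.0},
    {"FISCYEAR": 2011, "AMOUNT": 30.0},
]
for year, recs in sorted(route_by_fiscal_year(load).items()):
    # Statistics and indexes only need to be rebuilt on the current partition.
    print(year, len(recs), "records")
```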

SURESH: Query performance is one of our major issues. Some of the report filters get processed sequentially, and if the SID table is large, this takes longer.


Do you recommend building indexes on all InfoObjects used as filters in BI queries?


My other question is about indexes and BI statistics. During the data load process, when is it mandatory to build them, and when can we skip them? Sometimes creating indexes takes so long that it increases the extraction time. We also find that even when BI statistics have run successfully, the log does not say so. Why is that?


What is the impact of using large hierarchies in BI query reports?


Looking forward to meeting you online.


Regards, Suresh Sharma


DR. BERG:  Hi Suresh,


This is a multipart question, so I will try to be as specific as possible.  First, a trick is to look at the log files during data activation. If you see very long log file sync times, you can buffer the dimension IDs (DIMIDs), surrogate IDs (SIDs), and hierarchy IDs (HIEIDs) to decrease the time the system spends committing individual records to the NRIV table and syncing the log files.


Go to SNRO and choose Edit -> Set Buffering -> Main memory, then try setting this to a higher number (normally up to 1,000, but for a very high number of IDs you can set it as high as 10,000). I would increase this to reduce the load time and the log sync times (fewer commits). I would also remove the BEx flag on DSOs that are not used for reporting and partition these objects to reduce the number of SIDs, DIMIDs, and HIEIDs.
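As a rough Python model of why the main-memory buffering described above helps (illustrative only; the one assumption beyond the text is that an unbuffered number range commits once per ID, while a buffered one commits once per reserved block):

```python
# Conceptual model: IDs are handed out from an in-memory block, and the
# persistent counter (standing in for the NRIV table entry) is only updated
# once per block instead of once per record. Not SAP's SNRO implementation.
class BufferedNumberRange:
    def __init__(self, buffer_size=1000):
        self.buffer_size = buffer_size
        self.persistent_counter = 0   # stands in for the NRIV table entry
        self.commits = 0              # how often we "hit the database"
        self._next = 0
        self._ceiling = 0

    def next_id(self):
        if self._next >= self._ceiling:
            # Reserve a whole block with a single persistent update/commit.
            self.persistent_counter += self.buffer_size
            self._ceiling = self.persistent_counter
            self._next = self._ceiling - self.buffer_size
            self.commits += 1
        self._next += 1
        return self._next

unbuffered = BufferedNumberRange(buffer_size=1)
buffered = BufferedNumberRange(buffer_size=1000)
for _ in range(10_000):
    unbuffered.next_id()
    buffered.next_id()
print("commits without buffering:", unbuffered.commits)  # 10000
print("commits with buffering:   ", buffered.commits)    # 10
```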


As to indexes, maintaining valid database indexes for InfoCubes is critical for fast data reads. This is a particular problem for InfoCubes with the high-cardinality flag set (it causes the index type to change from bitmap to b-tree). Check this in RSA1 -> Manage (InfoCube) -> Performance tab and repair these indexes in RSRV for the affected InfoCubes.


Background: When you flag a dimension as “high cardinality,” SAP BI uses a b-tree index instead of a bitmap index. This can be substantially slower if high cardinality does not actually exist in the data (star joins cannot be used with b-trees).  Consider flagging a dimension as high cardinality only if a substantial number of records exists (more than 8-10 million) or if the dimension is substantially unbalanced (i.e., 30+%). All of these items lead to better query performance.


Dr. Berg
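A small, illustrative Python sketch of the rule of thumb in the background paragraph above. The record and percentage thresholds are the ones quoted in this forum; the function itself and the reading of "unbalanced" as the dimension-to-fact row ratio are assumptions for illustration, not an SAP check.

```python
# Rule of thumb: only accept the bitmap -> b-tree switch when the dimension
# is genuinely large or clearly unbalanced relative to the fact table.
# (The later answer in this transcript applies both conditions together.)
def consider_high_cardinality(dim_rows: int, fact_rows: int) -> bool:
    substantial_size = dim_rows > 10_000_000                       # "more than 8-10 million"
    unbalanced = fact_rows > 0 and dim_rows / fact_rows >= 0.30    # "30+%" (assumed ratio reading)
    return substantial_size or unbalanced

print(consider_high_cardinality(8_000_000, 100_000_000))   # False: small and balanced
print(consider_high_cardinality(40_000_000, 100_000_000))  # True: large and unbalanced
```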

Laszlo Torok: Hello Sir,


I am interested in tuning queries with custom SQL. Is there any user exit or other way to tune the SQL statement that SAP sends to the DB?


Is there any best practice for using master data tables with large record counts? For example, 100,000 entities that, because the attributes are time dependent, can actually add up to a million records? The number of attributes is also huge (which makes time dependency a major problem). For context: this is an HR BI system with 0EMPLOYEE extended by most of the important infotype fields (100-250 attributes). If we chose to load all master data into cubes or ODS objects, that would mean a lot of duplicated effort in the extractor programs and redundancy in the data we store.


Any information on the impact of real-time data extraction and remote cubes with remote master data on the source system?


Are there performance issues when using virtual characteristics and key figures?


Thank you, Laszlo

DR. BERG: Hi Laszlo,


First, I would not recommend custom SQL in queries; it can create serious issues in the BI analytical engine. For time-dependent master data, the trick is to reduce the master data in each object by partitioning the object and assigning each partition where it is used.


For example, master data for 1995-2005 in one object assigned to one InfoCube, and later master data in another object. This is normally not an issue except for very, very large MD tables. A simpler way may be to split the master data into mini-dimensions; take a look at Ralph Kimball's data warehousing book for some tips on this. The quick solution may be to split the attributes you want to track over time into separate MD objects.


Dr. Berg
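For illustration, a minimal Python sketch of the mini-dimension split mentioned above: stable employee attributes stay in one object, while rapidly changing, time-dependent attributes move to a small separate object keyed by their combination. The attribute names are invented; this is not the 0EMPLOYEE data model.

```python
# Stable attributes rarely change; volatile attributes are time dependent and
# drive the record explosion, so they get their own small "mini-dimension".
stable_attrs = {"EMPLOYEE_ID", "BIRTH_DATE", "HIRE_DATE"}
volatile_attrs = {"ORG_UNIT", "JOB", "PAY_GRADE"}

def split_record(rec):
    stable = {k: v for k, v in rec.items() if k in stable_attrs}
    volatile = tuple(sorted((k, rec[k]) for k in volatile_attrs))
    return stable, volatile

employees = [
    {"EMPLOYEE_ID": 1, "BIRTH_DATE": "1980-01-01", "HIRE_DATE": "2005-06-01",
     "ORG_UNIT": "SALES", "JOB": "REP", "PAY_GRADE": "A2"},
    {"EMPLOYEE_ID": 1, "BIRTH_DATE": "1980-01-01", "HIRE_DATE": "2005-06-01",
     "ORG_UNIT": "SALES", "JOB": "MANAGER", "PAY_GRADE": "A3"},
]

mini_dimension = {}   # volatile attribute combination -> surrogate key
for emp in employees:
    stable, volatile = split_record(emp)
    key = mini_dimension.setdefault(volatile, len(mini_dimension) + 1)
    print("employee", stable["EMPLOYEE_ID"], "-> mini-dimension key", key)
```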

Scott Wallask: Hi Dr. Berg -- Nice to meet you on the chat. A reader of ours mentioned that her activation hangs for large objects and she has to restart them manually. Do you have any advice to troubleshoot this?




DR. BERG: Hi Scott,


Unfortunately, this is a well-known issue and SAP has issued several notes. The first problem is multiple threads accessing the same NRIV table (which can be helped by buffering); the other is locking on the log files.


As data stores are loaded and data is activated, the size of the data store and the number of requests become important for data activation time.  Large DSOs with more than 100 million records and a lot of load requests increase data activation time. These DSOs should be partitioned to reduce the size and number of requests for faster activation times. There are also known lock issues when parallel activation is attempted on very large DSOs (see Note 634458, 'ODS object: Activation fails - DEADLOCK', and Note 84348, 'Oracle deadlocks, ORA-00060').


Dr. Berg

Suresh: Dr. Berg,


1. We have high cardinality in a cube, but we have fewer than 8 million records. Will it help if we turn on the high-cardinality flag now?


2. What are the ideal query property settings for web reports with more than 60 million records of data?


DR. BERG: Hi Suresh,


For indexing and performance reasons, SAP recommends flagging these as ‘high-cardinality’ dimensions. However, this has only a minor impact on smaller cubes. I recommend setting the high-cardinality flag for large InfoCubes with more than 10 million rows and a 'balance' of more than 30% (somewhat smaller for very, very large cubes).


The reason for not following the 20% rule-of-thumb is the impact of the switch from bitmap to b-tree indexes (size and performance).


1. For a small cube with only 8 million rows, I would not set the high-cardinality flag, unless the cube was 50+% 'unbalanced'.


2. For 'queries' with 60 million rows, parallel query execution does not make sense. To avoid a memory overflow, parallel processing is cancelled as soon as the collected result contains 30,000 rows or more and there is at least one incomplete subprocess. The MultiProvider query is then restarted automatically and processed sequentially. What appears to be parallel processing is actually sequential processing plus the startup phase of parallel processing.


You can change the default value of 30,000 rows, or increase the size of the result set (MPRO_MAX_RESULT) to more than 5 MB for queries with large result sets that hit MultiProviders with more than one InfoCube and/or DSO (see Notes 629541, 622841, 607164, and 630500).


However, I would simply change the read mode to 1 ('Read all data' - all data is read from the database and stored in user memory space) and turn off parallel query execution.


Dr. Berg
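A conceptual Python sketch of the fallback behavior described in point 2: parallel processing is abandoned once the collected intermediate result reaches the row threshold while subprocesses are still open, and the query is rerun sequentially. Only the 30,000-row threshold comes from the answer; everything else is illustrative, not SAP's OLAP processor.

```python
# Model of "parallel until the collected result is too big, then restart
# sequentially". part_results holds the row counts each sub-query returns.
def run_multiprovider_query(part_results, row_threshold=30_000):
    collected = 0
    for i, rows in enumerate(part_results):
        collected += rows
        still_open = i < len(part_results) - 1
        if collected >= row_threshold and still_open:
            # Parallel attempt abandoned: restart and read everything serially.
            return "sequential", sum(part_results)
    return "parallel", collected

# A 60-million-row result blows past the threshold almost immediately,
# so the "parallel" run is really a sequential one plus startup overhead.
print(run_multiprovider_query([20_000_000, 20_000_000, 20_000_000]))
print(run_multiprovider_query([5_000, 8_000, 4_000]))
```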

Chavi Asdhir: How can we improve the query performance for queries running on the Multiprovider? There is a Multiprovider Hint table -- not sure if it is reliable and works 100% of the time. Thanks. Chavi.


DR. BERG: Hi Chavi,


We can add “hints” to guide the query execution. In the RRKMULTIPROVHINT table, you can specify one or several characteristics for each MultiProvider, which are then used to partition the MultiProvider into its BasicCubes. If a query has a restriction on such a characteristic, the OLAP processor checks in advance which part cubes can return data for the query. The data manager can then completely ignore the remaining cubes.


An entry in RRKMULTIPROVHINT only makes sense if a few values of this characteristic (that is, only a few data slices) are affected in the majority of, or the most important, queries (see SAP Note 911939; see also Notes 954889 and 1156681). Alternatively, some of the MultiProvider joins can be handled in BWA 7.2.


Dr. Berg
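To illustrate the pruning effect a RRKMULTIPROVHINT entry enables, here is a minimal Python sketch: with a restriction on the hint characteristic, only the part cubes that can contain matching values are read. The cube names and data structures are illustrative assumptions, not the actual hint table or data manager.

```python
# Each part cube "knows" which values of the hint characteristic it holds.
part_cubes = {
    "SALES_2009": {"FISCYEAR": {2009}},
    "SALES_2010": {"FISCYEAR": {2010}},
    "SALES_2011": {"FISCYEAR": {2011}},
}

def cubes_to_read(restriction):
    """Return only the part cubes whose values overlap the query restriction."""
    char, wanted = restriction
    selected = []
    for cube, values in part_cubes.items():
        if values.get(char, set()) & wanted:
            selected.append(cube)
    return selected

# A query restricted to fiscal year 2011 touches one part cube, not three.
print(cubes_to_read(("FISCYEAR", {2011})))   # ['SALES_2011']
```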

Laszlo Torok: Thank you Dr. Berg,


If the master data just for one year is around 1 million records, do you think we should consider splitting the data into even smaller logical partitions? I think there is a huge overhead cost if we have to create new InfoObjects, e.g., every quarter, and set up new cubes and extraction routes. The 200-250 attributes can be split into different InfoObjects (which means fewer records per InfoObject), but don't you think that makes querying complicated, since users would have to remember which attribute belongs to which object?


Kind regards, Laszlo


DR. BERG: Hi Laszlo,


For only 1 million MD records, I would simply look at adding secondary indexes, pre-running (broadcasting) query results to the OLAP cache, adding aggregates, or adding BWA. It is hard to cost-justify the effort of partitioning cubes based on so few MD records (if that were the only reason).


Dr. Berg

Scott Priest: One thing our readers have asked about is the performance functionality within SAP NetWeaver BW Accelerator 7.2. How is it different from past versions?


DR. BERG: Hi Scott,


I am actually doing a session called "Proven Techniques for Improving Reporting and Analytics Performance Using SAP NetWeaver BW Accelerator - Including BWA 7.2 new features" at the BI conference next week.


And yes, there are many new performance features. Here are a few:


1. In BWA 7.2 you can complete many of the joins and aggregations that were previously performed in the BI analytical engine and were therefore hard to performance-tune. This also reduces the data being transferred to the application server, which makes things much faster as well.


2. A major benefit of BWA 7.2 is that you can move more of the calculations over to the accelerator and materialize the results in memory. This improves reads, and the calculation load in the OLAP/BI analytical engine on the BW side is substantially reduced. The following calculations are now included:


SUM, MIN, MAX, CNT (count all detailed values), CN0 (count all values not zero or null), AVG (average), and AV0 (average of all values not zero or null).


There are still some limitations. For example, you cannot use exception aggregation for a single key figure in BWA if it uses:


  1. Virtual key figures

  2. Conversion before aggregation

  3. Formula calculation before aggregation

  4. Non-cumulative key figures

  5. Key figures with elimination of internal business volume




Dr. Berg
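A plain-Python restatement of the aggregation variants listed above, particularly the less familiar CN0 and AV0. It simply spells out the definitions quoted in the answer and is not BWA code.

```python
def cnt(values):                       # count all detailed values
    return len(values)

def cn0(values):                       # count values that are not zero or null
    return sum(1 for v in values if v not in (0, None))

def avg(values):                       # average over all values (null counted as 0)
    return sum(v or 0 for v in values) / len(values) if values else 0

def av0(values):                       # average over non-zero, non-null values
    kept = [v for v in values if v not in (0, None)]
    return sum(kept) / len(kept) if kept else 0

sample = [10, 0, None, 30]
print(cnt(sample), cn0(sample), avg(sample), av0(sample))  # 4 2 10.0 20.0
```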

Suresh: Dr. Berg, we have large hierarchies (GL) in our system that we use in report rows. What are the do's and don'ts for using hierarchies this way? What are the recommended settings?


Regards, Suresh


DR. BERG: Hi Suresh,


One way to make this feel fast and reduce the number of records transferred from the DB to the application server is to change the read mode of the queries to read mode 3, "Read data during navigation and when expanding the hierarchy": data is read only when requested by users during navigation and then transferred from the DB to the application server and on to the client.


Dr. Berg
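A minimal Python sketch of the effect of read mode 3: children of a hierarchy node are fetched only when the user expands that node, so untouched branches never travel from the database to the application server. The GL hierarchy shown is invented for illustration.

```python
# Parent node -> child nodes; stands in for a large GL hierarchy in the DB.
gl_hierarchy = {
    "GL": ["ASSETS", "LIABILITIES"],
    "ASSETS": ["CASH", "RECEIVABLES"],
    "LIABILITIES": ["PAYABLES", "LOANS"],
}

fetched = []

def expand(node):
    """Simulate fetching only the children of the node being expanded."""
    children = gl_hierarchy.get(node, [])
    fetched.extend(children)
    return children

# The user opens the root and then only the ASSETS branch.
expand("GL")
expand("ASSETS")
print(fetched)   # the LIABILITIES branch was never read from the database
```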

Kevin: We currently use R/3 4.7 EC-CS for consolidations. The BW 7.0 refresh requires a complete reload. Are you aware of any options to provide delta loading for EC-CS?


DR. BERG: Hi Kevin,


Yes, there is a substantial amount of standard content in FI for BW 7.0 content package 5. I have compiled this recently for a client in Florida and posted it on the Internet. Take a look at the overview picture on page #3 and see the remote cubes for GL available.


Dr. Berg

Laszlo Torok: Thanks Dr. Berg,


How many master data records (for one InfoObject) do you think is too many for SAP BW to handle? What is the maximum number of attributes from a performance point of view? (We do NOT have BWA.)


Kind regards, Laszlo


DR. BERG: Laszlo,


Hard question. I guess the correct answer is the 'consulting' answer -- It depends :-)


The problem is that it depends on available memory (i.e., the OLAP cache size available for pre-running reports), the speed of the processors, the I/O of the RAID sets/SAN, the number of processors and available threads, the user interface, and the network speed. So instead of giving a 'hard answer,' I would say that on an average well-tuned system, 500,000 to 1 million MD records would not cause me any worries.


But I would recommend a quick look at the DBA Cockpit available in SAP_BASIS Support Package 12 (transaction DBACOCKPIT).  It has more functions than you had before, and you no longer need to use the 'old' transaction codes; they are all available in the new DBA Cockpit: DB02 (space management), DB12 (backup), DB13/DB13C (configuration), DB24 (jobs), and ST04 (performance).


While I remember: make sure you also upgrade to the latest kernel patch before using the cockpit, or you may get a bunch of error messages (missing programs).


Dr. Berg

Kevin: We currently use BEx 7.0 for the majority of our reporting. Users like the Excel feel; however, the workbooks require lots of VLOOKUPs that slow performance and require maintenance. Many of the workbooks contain multiple queries with multiple presentation pages and Excel formulas. We also have some users demanding Essbase for their reporting needs.


Do you have suggestions for improvements?  Does the new Business Analytics tool address any of these issues?  Do we need BO suite to start using Business Analytics?




DR. BERG: Hi Kevin,


You are not alone in wanting Excel to do more and in an easier manner. In Barcelona at the Reporting and Analytics Conference in November last year, I gave a whole session on how to build as many of the features into BEx and the query as possible to take the workload off Excel.


It does not remove all the VLOOKUPs or HLOOKUPs, but it gives you more options. You may also want to look at the BEx Broadcaster to pre-run the workbooks and mail them in advance.


Dr. Berg

Megan Daley: Hi Dr. Berg,


I've been hearing from some of our customers that their SAP ERP extractions aren't as fast as they'd like. Are there any best practices you could offer to optimize the extraction process? Thanks!


DR. BERG: Hi Megan,


It has become increasingly common for large organizations to add a dedicated application server for the ECC/ERP extraction alone. This is normally only done when the extract times are tight, or the volume is causing stress on the system. But it is an option that is easy to implement.


Dr. Berg

Nick Bertz: Hello Dr. Berg,


With respect to a Layered Scalable Architecture (LSA), should we be creating a DSO for each DataSource as the first point to which data is loaded?


Especially in regard to the LO Cockpit DataSources (LBWE), should we select all the available fields for each DataSource and load them to their own DSO before updating further to other cubes/DSOs for analysis?  (This would prevent the need for system downtime for reloads and for adding additional fields.)


What are your thoughts on this sort of architecture/approach? How would you manage the data volume in this approach?


Thank you,




DR. BERG: Hi Nick,


No, I would not add all fields in the LO Cockpit (LBWE); that is overkill. But I would stage most data in write-optimized DSOs as the first layer of a scalable architecture. The data is then moved to consolidated DSOs, where extremely few are flagged as reportable (reducing activation), and then to InfoCubes.


I would do the lookups as much as possible on the source before data is sent to BW, and use transformations between the first-layer DSOs and the second/conformed layer.


For user access, consider using BWA 7.2 and skipping the InfoCubes. That will make your system much, much faster.


Dr. Berg
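A small Python sketch of the layering described above, assuming only what the answer states: a write-optimized first layer that is loaded without activation, and a consolidated second layer built from it. The field names and the transformation shown are illustrative.

```python
staging_layer = []        # write-optimized DSOs: append-only, fast loads, no activation
consolidated_layer = {}   # standard DSOs: keyed, harmonized records

def load_to_staging(records):
    staging_layer.extend(records)          # no activation step in this layer

def transform_to_consolidated():
    for rec in staging_layer:
        key = (rec["DOC_NO"], rec["ITEM"])
        # Example harmonization rule; real transformations live in BW.
        consolidated_layer[key] = {**rec, "CURRENCY": rec.get("CURRENCY", "USD")}

load_to_staging([
    {"DOC_NO": 1, "ITEM": 10, "AMOUNT": 100.0},
    {"DOC_NO": 1, "ITEM": 10, "AMOUNT": 120.0},   # later, corrected record
])
transform_to_consolidated()
print(len(staging_layer), "staged,", len(consolidated_layer), "consolidated")
```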

Suresh: Looking at query performance stats, we notice that we spend 90% or more of the time in the DB. Is that normal, or do we need to do something to fix it? We have a DB2-based BI system.


DR. BERG: Hi Suresh,


90% on the DB side is an indication of major data transfers. The reason can be a high number of diverse users (a low cache hit ratio) or, more likely, downloads of huge volumes of data into spreadsheets.


It may also be the wrong read mode for the queries. There are three query read modes that determine the amount of data fetched from the database and sent to the application server:


  1. Read all data - all data is read from the database and stored in user memory space.

  2. Read data during navigation - data is read from the database only on demand during navigation.

  3. Read data during navigation and when expanding the hierarchy - data is read only when requested by users during navigation or hierarchy expansion.


You may want to make sure yours are not defaulted to read mode 1.
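As a rough illustration of why defaulting to read mode 1 drives up database time: mode 1 moves the full result set to the application server up front, while modes 2 and 3 move only what the user is currently navigating. The row counts below are invented.

```python
# Compare how many rows each read mode transfers for the same navigation step.
database_rows = list(range(1_000_000))

def fetch(read_mode, requested_slice=slice(0, 100)):
    if read_mode == 1:                       # read all data up front
        return database_rows[:]
    return database_rows[requested_slice]    # modes 2 and 3: on demand

print("mode 1 transfers", len(fetch(1)), "rows")
print("mode 3 transfers", len(fetch(3)), "rows")
```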


Finally, there may be BOBJ reports based on OLAP universes, such as WebI reports, that are transferring large volumes of data between the servers. For those, consider BICS connections or the query stripping option instead.


So without knowing the system, I cannot pinpoint it. But 90% on the DB is not normal.


Dr. Berg

Kristin Bent: Thanks to everyone who is participating today. 


If you have any other questions for Dr. Berg, you can reach him on Insider Learning Network here. Dr. Berg will also be presenting a number of sessions, including “What you need to know to get the most out of using SAP NetWeaver BW as your enterprise data warehouse” and “A comprehensive guide to plan for, implement, and maintain SAP NetWeaver BW Accelerator” at BI 2011.


As a reminder, if you have any further questions for Dr. Berg, you can follow his blog on Insider Learning Network or meet him at BI 2011.


A summary of questions and answers will also be available on Insider Learning Network in the BI-BW Group.


Thank you all for joining us! Thank you, Berg, for fielding these questions!


DR. BERG: Thanks for your time.  I hope this was helpful, and I look forward to seeing you at the BI 2011 / IT 2011 conference next week in Las Vegas.


I will be at the Ask-the-Experts table and also deliver a series of presentations. Feel free to stop by and ask any questions.


I also strongly recommend the book "SAP NetWeaver Business Warehouse: Administration and Monitoring " 2010 edition by Olaf Klostermann and Milco Österholm. It is 590 pages of really useful info.



Dr. Berg

