If 10-20% of your data is obsolete, do you really want to pay to process it on HANA?
That’s the question that BI 2012 speaker Joerg Boeke addressed in his recent one-hour online Q&A on Wednesday, July 11 in Insider Learning Network’s BI-BW Forum. In addition to specific next steps for cleaning up your database, he also offered his advice on running BW on HANA vs. side-by-side implementations, BPC on HANA, and how the role of the DBA will change in a HANA install.
To view all the questions, you can review the Q&A Forum archives, or read the edited transcript below. (And join Joerg’s next Q&A on September 13, where he’ll take your questions on upgrading to BW 7.3. We’re looking forward to seeing how everyone is doing with their BW upgrades!)
Bridget Kotelly, BI 2012: Welcome to today’s BI/BW Forum, and thanks to BI expert Joerg Boeke for joining us! We’re looking specifically at BW and HANA – what are your questions about preparing BW data for in-memory technology, and how will it affect everything from licensing costs to design to performance?
Joerg is a consultant with BIAnalyst and a speaker at this year’s BI 2012 conference in Las Vegas, and was also featured in a past Q&A here on Insider Learning Network on BW performance. Joerg will be here to review your posts and respond with his answers during the hour.
Joerg, thanks for joining us again today. I see there’s already a question in the forum – I’ll hand it over to you to get started!
Robert Moore: Hi there,
We are looking at running SAP BW 7.3 on HANA, but also will soon have SAP BPC NW v10. Can the latter run on HANA too? And are there any special considerations around BPC connectivity when we do this?
Joerg Boeke: Hi Rob,
Actually, I am not the real BPC expert - I can only speak for BW. :)
To answer your question: yes, HANA can be used to speed up BPC as well.
Because HANA supports multiple concurrent accesses (BPC sometimes uses Excel :) ), grabbing data and precalculating speed up dramatically. One opportunity with HANA that you may even think about is reading data not from BW but directly from the HANA-replicated ECC tables.
When using BW powered by HANA, all BW objects can be accessed, and planning will become a fast experience.
There are a bunch of customers already using BPC and HANA (here’s a link to some responses by SAP customers).
RamachandraKudamala: We are implementing BW 7.3, BI 4.0 and BPC NW 10. In the future we would like to add HANA to the landscape. What design considerations should we keep in mind to avoid rework?
Joerg Boeke: Hello
I assume you intend to use BW powered by HANA, not just HANA standalone.
The design consideration is a good question - with HANA, a lot of the old design will go with the wind.
In earlier times you needed to think about dimensional design - what to put in which dimension, or whether to use partitioned cubes of whatever type. With BW running on top of HANA and the cubes switched to in-memory optimized, the cubes become similar to flat tables.
So from the design point of view, the complete staging and reporting layers will get a new face.
Sometimes - and I like to say most often - there is no need to design cubes at all.
Why not think directly of creating MultiProviders on top of DSO objects?
In terms of virtual InfoProviders, with the latest SP you can even read data directly from a HANA model.
So from a design point of view, the near-real-time approach (using a daemon) to temporarily grab data from ECC might not be a valid consideration anymore, because you can report in real time directly, without loading data into BW cubes.
You see that, in addition to performance, there are a lot of new design areas to think about. I think this is as important as the gain in performance.
BridgetKotelly: I know in the session on BW and HANA in Milan, customers were not all aware of how cleaning up data affects licensing and scaling a HANA project. What is your advice there?
Joerg Boeke: You are right. Talking to customers, the costs of implementing HANA (licensing fees) are the most important thing. Cleaning up BW will save thousands of dollars.
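To make that concrete, here is a rough back-of-the-envelope sketch in Python of how cleanup shrinks the memory you have to license. All figures - the compression factor, the working-space multiplier, the database sizes - are illustrative assumptions, not SAP sizing numbers:

```python
# Hypothetical sizing sketch: every number here is an assumption for
# illustration, not an SAP-published sizing rule.

def estimated_hana_ram_gb(source_db_gb, compression=5.0, work_factor=2.0):
    """Rule-of-thumb estimate: compressed footprint times a working-space
    multiplier (both factors are assumed values)."""
    return source_db_gb / compression * work_factor

db_before = 2000.0            # GB on disk before cleanup (assumed)
db_after = db_before * 0.85   # cleaning ~15% obsolete data (PSA, indices, stats)

ram_before = estimated_hana_ram_gb(db_before)
ram_after = estimated_hana_ram_gb(db_after)

print(f"RAM estimate before cleanup: {ram_before:.0f} GB")
print(f"RAM estimate after cleanup:  {ram_after:.0f} GB")
print(f"Memory you no longer license: {ram_before - ram_after:.0f} GB")
```

Since HANA is licensed by memory size, every hundred gigabytes of obsolete data deleted before migration translates directly into a smaller appliance and license.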
There are several areas you need to clean up; I'll list the most important:
- PSA (make sure all PSA data is REALLY deleted)
1. In table RSTSODSPART you can display the # of entries.
2. Set the flag DELFLAG to 'X', then display the # of entries again.
The tables and requests now on display are "deleted" PSA entries that are still on the DB.
In big systems, this can be several hundred GB. Taking the technical name from this display and looking it up in table RSTSODS (technical name field) will unveil the DataSource this data comes from. Using the technical name even allows you to display the data itself.
- Indices
1. Most of the indices keep orphaned information in their primary and secondary tables. Those tables can be 10 times the size of the real tables (master data, DSO tables and monitoring).
2. Before the physical migration I'd delete the content of all index tables (not the definitions, only the content), because in HANA they will be rebuilt HANA-specific.
- Technical content (performance statistics)
I'd delete all technical content tables RSDDSTAT* before migrating to HANA, and even the content of the technical content cubes.
I assume you don't want to pay to keep this information (how slowly your system responded before HANA)!
… just to name a few.
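The PSA check above happens in the SAP GUI, but its filter logic can be sketched in plain Python. The rows below are invented for illustration; real entries live in table RSTSODSPART, and the field names here are simplified:

```python
# Hypothetical PSA partition entries; in the real system these come from
# table RSTSODSPART, with DELFLAG = 'X' marking logically deleted partitions.
rows = [
    # (technical name, DELFLAG, record count)
    ("/BIC/B0000123000", " ", 1_200_000),  # live PSA partition
    ("/BIC/B0000124000", "X", 4_800_000),  # "deleted", but rows still on the DB
    ("/BIC/B0000125000", "X", 2_500_000),  # "deleted", but rows still on the DB
]

# Partitions flagged 'X' are the deleted-but-still-stored entries described
# above: logically gone, physically still occupying database space.
orphans = [(name, count) for name, flag, count in rows if flag == "X"]
wasted_rows = sum(count for _, count in orphans)

for name, count in orphans:
    print(f"{name}: {count:,} orphaned rows")
print(f"Total orphaned rows still on the DB: {wasted_rows:,}")
```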
JoeMcconville: With the implementation of BW on HANA, how does the traditional DBA role change? Are there any unique skillsets that are now required to support/troubleshoot with the in-memory database?
Joerg Boeke: The role of DBA will shift when using HANA. You have to think of HANA as a DB.
When migrating to HANA, it's like migrating from Oracle to Microsoft SQL Server or vice versa. After the migration you will see all tables in the SAP BW DDIC in the same way as before. The difference is that you do not have a classic DB anymore (a physical, disk-based database).
Your DDIC tables live in RAM cells, and all the historical DB patches and additional tasks - like checking indices or running BRCONNECT jobs (in terms of Oracle) - are gone.
So a lot of work will become obsolete with the HANA approach.
On the other hand, there will be new tasks, like developing a HANA backup concept (e.g., into Sybase) to keep data not only in memory or on flash, but also as an offline copy of your system.
New tasks like HANA monitoring - checking whether sufficient memory is available and all processes have sufficient CPU, etc. - will arise.
So simply put: Your work (I assume you still have DB-driven systems) will gain new areas to learn and educate yourself - and some of the old work will become obsolete.
Hope this answers your question.
BridgetKotelly: Can you tell us what manual tasks, if any, are necessary to prepare for migration to HANA?
Joerg Boeke: Yes, this is often asked. Most people think that with BW powered by HANA, everything is optimized automatically after the migration.
No, it’s not. BW is just using a very fast technology (I will not use the word DB in this case), but all InfoProviders remain in the DDIC as they were before. You need to run some additional steps.
One of those steps is to convert InfoCubes to become "in-memory" optimized. This can be done manually with the help of the context menu or via an SAP report.
After running this report, all (or the selected) cubes will be flattened, the snowflake dimensions will be gone (except the request dimension), and so will all partitioning that is obsolete on HANA.
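What "flattening" means can be pictured with a toy example (the table contents are hypothetical): a classic InfoCube fact table stores DIM IDs pointing into dimension tables, and the in-memory optimized version resolves them into one wide table:

```python
# Hypothetical mini-cube: a fact table referencing two dimension tables
# (the snowflake), as in a classic BW InfoCube.
fact_rows = [  # (DIM ID customer, DIM ID material, revenue)
    (1, 10, 500.0),
    (2, 10, 300.0),
]
dim_customer = {1: "C100", 2: "C200"}  # DIM ID -> customer key
dim_material = {10: "M900"}            # DIM ID -> material key

# "Flattening": resolve every DIM ID so the cube becomes one flat table,
# with the characteristics sitting directly next to the key figures.
flat_rows = [
    (dim_customer[c], dim_material[m], revenue)
    for c, m, revenue in fact_rows
]
print(flat_rows)
```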
For DSOs this can be done with report RSMIGRHANADB.
After the run, DSO loads will become very fast, and activation problems will be a thing of the past.
So a few tasks remain on the customer side: choosing when to optimize which specific InfoProviders.
Scott Wallask: Hi Joerg -- I hear a lot from our readers about their decision-making when choosing how to roll out HANA. What are the big considerations as teams decide between running HANA side-by-side with SAP NetWeaver BW and running BW on HANA?
Joerg Boeke: Hi Scott,
This is a neck-breaking question :) but let me try to answer it.
With HANA side-by-side you will have no performance gain on the BW reporting side.
With the latest BW 7.3 and SPs, you will be able to access HANA models with virtual cubes. That means you can retrieve data from HANA and report on it in BW with HANA speed. That is a great idea for reporting real-time data when your ERP/ECC/HCM... lies in HANA.
The side effect: when combining those virtual InfoProviders with regular BW-driven InfoProviders, performance is driven by the weakest member, so old BW wins and performance will be like before - but at least you have real-time data without any loading or storage.
When using BW powered by HANA, all InfoProviders and BW objects get much faster. You may even think about (depending on the size of the HANA system) replicating some additional ECC tables into HANA (this should become possible) and integrating this approach with the help of virtual providers. Currently you can see the BW objects in HANA Studio, but not the HANA objects in the BW DDIC. This should change with the latest SP.
So I'd prefer the BW powered by HANA approach - its advantages beat the side-by-side approach.
The pitfall is that you have to store and load data into BW (though this, too, will become faster due to the DSO changes with HANA).
BridgetKotelly: Joerg, thank you again for joining us today!
And thanks to all who posted questions and followed the discussion! I also invite you to mark your calendars for additional BW and HANA events:
- Join us for Joerg’s next Q&A on September 13, taking your questions on moving to BW 7.3. Follow us @ILN4SAP or join the BI/BW Group for more details and registration.
- Come to the HANA Seminar in Chicago and 4 more cities in the US and Europe. Check out the details here - the first seminar is coming soon, on July 25.
- There’s an entire track on SAP HANA & Advanced Analytics at BI 2012 in Singapore this October. I personally invite you to join me there!
- Don’t forget to register for a Q&A on implementing HANA here on Insider Learning Network on July 25, with Penny Silvia & Bjarne Berg.
And finally, check out this quick list of HANA resources on Insider Learning Network.
Thanks again for joining us today!