Allison Martin | Conference Producer, SAPinsider
I recently hosted a webinar with Scott Cairncross of TruQua Enterprises, a speaker at our SAPinsider BPC Seminar. After his webinar, “Streamline Your Planning Processes with SAP Business Planning and Consolidation on SAP HANA,” Scott took attendees’ questions. We’ve compiled a few of his answers on performance and functionality when running SAP BPC on HANA here.
You can also view a replay of the full webinar, and don’t miss Scott’s extensive presentations at our 3-day BPC Seminars in Copenhagen (October 21-23) and Orlando (November 20-22).
Q. Could you please specify what “lower data volumes” means when evaluating potential performance enhancements that would come with SAP HANA?
What do I mean by lower data volumes? In our example, notice the legal-to-management BPC query. There, we really didn’t see a huge boost in reports. So what was the number of records?

Here we had half a million records, but we were pulling all the detail from those records, versus a million records where we needed only an aggregated level of detail.

So I would say that for the ad hoc queries that use report definitions, we got huge performance improvements. But with lower data volumes – under a million records – or when you pull a lot of detail into the columns, HANA isn’t really going to give you a lot of bang for the buck.

HANA’s acceleration comes from its columnar store: when your query selects only a subset of the columns and aggregates across them, that’s when you’re really going to see a huge improvement.
So let’s say lower data volumes means under a million records. Once you get over a million records, you see a lot of improvement.
I mean, we see improvement across the board, but the higher the data volume, the higher the impact.
I would say you see an exponential curve: it starts out kind of linear until you hit that million-record mark, and then you see a huge spike. That’s what we saw in this example.
Q. Why did this example use EVDRE and not EPM Add-In?
Scott Cairncross: In the case study [presented in the webinar], what we were looking at was what type of performance benefit we would get immediately.
Migrating from EVDRE isn’t that challenging, but there were a number of different reports we would have had to migrate. Leaving them as EVDRE reports was a way to say: hey, if you do a technical upgrade, right out of the box, this is what you’re going to get.
We wanted to compare apples to apples to see what the technical platform would really give us in acceleration.
Obviously, there are additional enhancements we can make here. For consolidations, when we were benchmarking, we weren’t leveraging some of the performance enhancements that came with HANA, like incremental consolidation. Likewise, we weren’t using report definitions.
But to give you a little more information, the customer this data is from did end up choosing HANA, and they are migrating these reports to report definitions. So we’ll see shortly what additional bang for the buck we get.
Q. What is the largest volume of data you have seen in any BPC model on HANA?
I’m not sure how many of you saw the ASUG presentation from HP. HP really has the largest BPC on HANA implementation. They gave a public presentation at ASUG, which was really great. I don’t remember the statistics exactly, but it’s terabytes of data – terabytes and terabytes. I think they’re probably closing in on petabytes. They have tons of data in their system.
From a dimension member standpoint, they had something like 800,000 members, with multiple hierarchies – and it’s fast. It’s really fast.
I think you can access their presentation on the ASUG website. I don’t have all the statistics to rattle off the top of my head, but that implementation is the largest BPC on HANA implementation I know of.
Q. With BPC on HANA, what functions have been optimized to run at the HANA database level?
Queries have been optimized. Consolidations, to a certain degree.
This is one of the things I was excited about: SP11, which comes out in August, has a lot of new capabilities.
One of the really nice things here – one of the key optimizations we now have – is the capability to leverage MDX functions from HANA directly within our script logic editor.
So you can leverage HANA MDX functions now. This was introduced with SP11, and you can access it early, as I did, with an SAP Note. Within the Logic Scripts area, if I jump into any of these, you can see the MDX keywords.

These are the HANA MDX keywords – a whole slew of them, with helper functions – that actually run on HANA and that you can leverage directly within your script logic editor. (If you look on experiencesaphana.com, there’s documentation on each of these keywords. The note is 1832527 – Support SAP HANA MDX Functions in Script Logic Editor.)
This tab is new – part of SP11, which isn’t out yet – and it’s where you can see these future enhancements and capabilities and leverage them today.
So a lot of this script logic is now pushed down in HANA. That’s one example.
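To give a rough sense of what gets pushed down, here is a minimal sketch of the kind of BPC script logic calculation that, on BPC on HANA, can run in the database rather than on the application server. The dimension and member names (ACCOUNT, REVENUE, PLAN_REVENUE, CATEGORY) are hypothetical placeholders, not from the webinar:

```
*XDIM_MEMBERSET CATEGORY = PLAN
*WHEN ACCOUNT
*IS REVENUE
*REC(EXPRESSION = %VALUE% * 1.1, ACCOUNT = PLAN_REVENUE)
*ENDWHEN
*COMMIT
```

The *WHEN/*REC scan-and-write pattern is exactly the kind of set-based work a columnar database handles well.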
Another example – one I’ve heard about but haven’t seen yet – is that allocations are being pushed down into HANA as well. That’s going to be huge from a performance standpoint, because of some nuances of the allocation keyword in BPC NetWeaver.
The way the WHAT parameter of the RUNALLOCATION keyword is processed can actually double or triple numbers – RUNALLOCATION in the Microsoft version works, I would say, more accurately than in NetWeaver. There’s a note that addresses this with a workaround using a FOR loop, but that workaround causes pretty significant performance degradation.
But with HANA and this improvement, pushing it down into the database, I’m hoping that won’t matter anymore. We’ll see.
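As an illustration of the FOR-loop workaround, here is a hedged sketch: the allocation is run once per source member so the WHAT scope is processed one member at a time. All names here (the ENTITY dimension, BU1–BU3, ALL_ENTITIES, the HEADCOUNT driver) are hypothetical, and the exact *DIM parameters would depend on your model:

```
*FOR %BU% = BU1, BU2, BU3
*RUNALLOCATION
*ALLOCATION ALLOC_COSTS
*DIM ENTITY WHAT=%BU%; WHERE=BAS(ALL_ENTITIES); USING=HEADCOUNT; TOTAL=<<<
*ENDALLOCATION
*NEXT
```

Each loop iteration triggers a separate allocation run, which is what causes the performance degradation Scott mentions.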
That’s another example of something that’s coming. So: consolidations, querying, writes being pushed down, and dimension processing – not needing to calculate the dimension IDs when you run a load – which is a benefit we get out of the box with BW on HANA. And now these MDX functions.
Q. We have RSDRI performance issues, running it up to 800 times. Is it worth trying to fix this, or should we just test on HANA? Can we test before we buy?
You can absolutely test before you buy.
Those RSDRI performance issues should not be happening. I’ve done a lot of these performance reviews and analyses. It could be a bottleneck in your database, it could be the sizing of the application, the memory allocated, or your work processes.
There could be a number of things causing that performance degradation in RSDRI. But it should be fast.
So I would actually do both: look at optimizing your system as is, and evaluate what additional bang for your buck you’ll get with HANA – which is what this customer did. They asked: what can we get just by tuning our system? Then, if you have the data volumes that warrant HANA, you’ll really get that incremental performance. This customer didn’t have drastic volumes, but they did have demand planning on their roadmap. We all know demand planning involves massive quantities of data, and they wanted to do it in BPC because of the adoption it had among their end users – they’re using it as their standard.
Anyhow, that’s what I would do – both. I would look at optimizing first: really tuning RSDRI, and looking not just at your BPC application but at your BW system and your database to see what you could do to tune them. And then eventually do a pilot where you evaluate HANA as an option.
Q. Can predictive analytics solutions from HANA be integrated with BPC?
Yes. BPC is very customizable. There are quite a few hooks within the system for custom development.
When I say custom development, I know a lot of people get skittish. But a lot of these enhancements came from customer requirements. If you look here, you’ll see an enhancement that came from China Nuclear, where an approval process within their workflow required consensus among a group before the approval went through from one step to the next.
We have retraction enhancements (UJD_RETRACT). Bringing parity to BPC ETL is a big one: a lot of customers want to be able to write start routines and end routines in ABAP, just like you can within BW. And there’s the write-back hook in the shared query engine, where customers wanted to implement their own security models – a table to capture data before it’s written to the database, or before it’s displayed to the end user.
One of the big ones – probably the most widely used – is UJ_CUSTOM_LOGIC: the ability to enhance script logic by adding your own custom keyword to the script logic language. By doing that, you’re actually invoking ABAP from a Data Manager package.
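As a sketch of how the custom logic BAdI is typically invoked from script logic, here is the *START_BADI/*END_BADI pattern. The filter value ZPREDICT and the FORECAST_HORIZON parameter are hypothetical – the BAdI implementation registered under that filter is where your own ABAP (and, in this scenario, the wrapper around a HANA procedure) would run:

```
*START_BADI ZPREDICT
  QUERY = ON
  WRITE = ON
  FORECAST_HORIZON = 12
*END_BADI
```

QUERY and WRITE control whether BPC reads the scoped data before the BAdI runs and writes its result back afterward; any other name/value pairs are passed to the implementation as parameters.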
Now, with NetWeaver 7.3, it’s a little trickier to invoke stored procedures within HANA. But those HANA stored procedures are where you would execute the predictive libraries I’m talking about. You could create a wrapper around the stored procedure that executes these predictive algorithms.
So you can do it. It’s complex, but it’s made a lot easier by 7.4, where you can call stored procedures directly from ABAP rather than going through special classes. You’d still have to use this custom logic hook, and it takes a specialized skill set at the moment, but it is possible.
For more from Scott Cairncross on BPC, be sure to join us for our 3-day BPC Seminars in Copenhagen (October 21-23) and Orlando (November 20-22).