Structuring Large Integration Flows in SAP HANA Cloud Integration

by Dr. Volker Stiehl, Professor, Ingolstadt Technical University of Applied Sciences

June 6, 2016

See how to use SAP HANA Cloud Integration (SAP HCI) to model really large and complex integration flows. You use the local integration process as a means to structure your process models into manageable sizes. In the parent process you apply the Process Call shape to actually invoke the sub-process. In addition, understand how data is transferred back and forth between parent and child processes.

With SAP HANA Cloud Integration (SAP HCI) you can model fairly large integration scenarios. Due to the flexible pipeline of the underlying Apache Camel integration engine, you could potentially add as many processing steps in your route as are necessary to fulfill your integration needs. However, when you use a graphical modeler, those large models can easily become quite confusing and you lose all the benefits of a graphical notation. In this installment of my series about modeling and running integration flows on SAP HCI, I explain how to structure large process models using sub-processes.

Getting Hold of Complexity by Modularization

Putting overly complex logic in one module is never a good idea. This holds true for classical programming languages, such as Java, as well as for graphical environments, such as the web user interface (UI) used in SAP HCI.

You have certainly learned how to slice large programs into manageable logical units and treat them separately (separation of concerns). The same can be applied to graphically modeled integration flows. The means to achieve this in SAP HCI is the use of sub-processes, or local processes, as they are called there. (The terms are interchangeable.)

Working with sub-processes is not that difficult. I encourage you to make use of them and to keep your individual processes and sub-processes at a reasonable size. As a rule of thumb, process models shouldn’t contain more than 10 elements. If your models become larger, you should refactor them and reduce their size by moving parts into newly created sub-processes.

To apply this rule, you need to know how sub-processes can be modeled and how parameters are exchanged between parent and child processes. For example, examine how the process model shown in Figure 1 works.

Figure 1
Process model calling a local process named doTheWork

First, look at the execution sequence before diving into the detailed configuration of each step. The main process, modeled at the top of the diagram, is triggered by an incoming Simple Object Access Protocol (SOAP) message. A Content Modifier step sets some header and exchange property variables. (For details on header variables, exchange properties, and how to work with the Camel data model, refer to the first article of this series titled “Your First SAP HCI Integration Flow.” For your convenience and as a reminder, I’ve added Figure 2, depicting the exchange again.)

Figure 2
The exchange

The exchange is the central container carrying all necessary data, including the message’s payload and header information from step to step inside an integration flow.
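To make the idea concrete, here is a minimal sketch in Python of what such an exchange looks like conceptually: one container object carrying body, headers, and properties from step to step. This is an illustration only, not the real Camel or SAP HCI API.

```python
from dataclasses import dataclass, field

# Conceptual sketch of a Camel exchange: a single container that travels
# through the pipeline, carrying the payload (body), the message headers,
# and the exchange properties.
@dataclass
class Exchange:
    body: str = ""
    headers: dict = field(default_factory=dict)
    properties: dict = field(default_factory=dict)

# Each pipeline step receives the exchange, may read or modify it,
# and hands the very same object on to the next step.
exchange = Exchange(body="<order>10300</order>")
exchange.headers["orderNumber"] = "10300"
print(exchange.headers["orderNumber"])  # -> 10300
```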

In the model in Figure 1, the main process invokes the sub-process called doTheWork. Within the sub-process, a Content Modifier step works with the variables set in the parent process. This step showcases the availability of those variables in the child process, even though they were set in the parent process. Additionally, the first Content Modifier step in the local process adds a new header variable to the exchange. The goal is to demonstrate how variables created in a child process are also available in the parent process once the sub-process has finished.

To add some more logic, you use a gateway (the diamond shape in the doTheWork sub-process of Figure 1) representing a Content Based Router to distinguish between different order number ranges. (See the respective labels at the two gates.) The two Content Modifier steps following the gateway just set the content of the reply message.

Once executed, the sub-process finishes and process execution continues in the main process with the last Content Modifier step. Here you access the variable set in the local process, verifying its availability even though the sub-process has already finished.

As you can see, the focus is really on the cooperation between the parent and child process and on how the parameter transfer (back and forth) between the two works. So let’s see how to configure the individual steps to make the collaboration executable.

Configuring the Collaboration Between the Parent and Child Processes

I begin with the configuration of the first Content Modifier step in the main process. The settings are shown in Figures 3 and 4.

Figure 3
Setting a Message Header variable

Figure 4
Set an Exchange Property variable

Obviously, two variables are being set: one named orderNumber in the Message Header area (Figure 3) and the other named msg in the Exchange Property area (Figure 4). This should remind you of the first article in this series, in which I followed the same procedure. As a result, the exchange contains the two variables in their respective locations.
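In terms of the simple exchange model, the first Content Modifier does nothing more than write one entry into the headers map and one into the properties map. The sketch below illustrates this; the concrete values are placeholders of my own (the real ones are defined in Figures 3 and 4), only the names orderNumber and msg come from the article.

```python
# Hedged sketch of the first Content Modifier in the main process:
# it sets one Message Header (orderNumber) and one Exchange Property (msg).
def content_modifier_main(exchange):
    exchange["headers"]["orderNumber"] = "10300"       # placeholder value
    exchange["properties"]["msg"] = "Hello from main"  # placeholder value
    return exchange

exchange = {"body": "", "headers": {}, "properties": {}}
content_modifier_main(exchange)
print(exchange["headers"], exchange["properties"])
```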

Now comes the interesting part: the integration flow invokes the sub-process. The first question to answer is: How do you model the sub-process and its invocation? You have to begin with the sub-process. This is important because the parent process has to reference the sub-process later, so the sub-process must already be in place; otherwise, the reference could not be established. You model it alongside the main process by picking the Local Integration Process entry from the palette, which is beneath the Process main menu entry (Figure 5).

Figure 5
Model a local integration process

After you have positioned the sub-process beneath the main process, you get a new pool containing an empty flow (Figure 6).

Figure 6
Newly positioned local integration flow

Note the new local process icon in the upper left corner of the pool, signifying it as a sub-process that cannot be started by an incoming message or by a timer start event. It can only be invoked from a parent process by a respective Process Call shape, which I explain soon. Because of this invocation relationship to a parent process, the sub-process starts with an empty start event. The only attribute you can change when selecting the sub-process is its name. You should adjust it and give it a self-explanatory name. Within the sub-process you can model any integration logic just as you would for the main process.

Next, you model the invocation of the local integration process from the main process, that is, the reference from the parent to the child process I was talking about above. This is done by positioning the Process Call shape shown in Figure 7 inside the main process. It is a sub-node of the external call root node.

Figure 7
Choose the Process Call shape from the palette

The last step is to connect the newly positioned Process Call shape with the sub-process itself. This is done by selecting the Process Call rectangle in the main process and adjusting the local integration process field in the associated properties area beneath the process model (Figure 8).

Figure 8
Connect the Process Call step with the sub-process

Click the Select button to open another dialog listing all the modeled local integration processes. Pick the one you want to invoke (as you have modeled only one local process, there should be only one entry). The dialog closes automatically after you have chosen an entry from the list. That’s all you need to do to model a sub-process invocation from the main process.

However, you may ask yourself whether you need to define an interface for your sub-process describing which data the sub-process expects from its parent process and which data it returns after it finishes its execution. The answer is: You don’t have to define such an interface. The simple reason is that the called sub-process relies on the same exchange the main process works on, which is automatically handed over from step to step within the main process as well as within the sub-process.

This stresses again the importance of the exchange as the central data container within integration flows while working with SAP HCI and its underlying Camel framework. It should be clear now how the data transfer between parent and child processes works. There is no need for local or global variables as you typically find them in normal programming languages. The only container carrying variables and their values is the exchange that is being transferred back and forth between parent and child.
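This call-by-reference behavior can be sketched in a few lines of Python. The sketch is purely conceptual (not the SAP HCI API): the Process Call hands the very same exchange object to the child, so changes made on either side are visible to both, and no interface or parameter mapping is needed.

```python
# Conceptual sketch of a Process Call: parent and child operate on the
# same exchange object; nothing is copied or mapped.
def sub_process(exchange):
    # The child can read what the parent set ...
    order = exchange["headers"]["orderNumber"]
    exchange["body"] = f"<order>{order}</order>"
    # ... and leave something behind for the parent.
    exchange["headers"]["VarFromSubprocess"] = "set in child"

def main_process(exchange):
    exchange["headers"]["orderNumber"] = "10300"
    sub_process(exchange)  # same object handed over, like a Process Call
    # After the child finishes, its header is visible in the parent.
    return exchange["headers"]["VarFromSubprocess"]

ex = {"body": "", "headers": {}, "properties": {}}
print(main_process(ex))  # -> set in child
```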

I continue with the configuration of the first Content Modifier inside the called local integration flow doTheWork (Figure 1). Figure 9 shows its configuration.

Figure 9
Set the message’s payload in the sub-process

I set the message’s payload by filling its body with two XML tags and the contents of the two variables that were set previously in the parent process. This demonstrates the availability in the child process of data that was set before by the parent process. To showcase the data transfer in the reverse direction (from child to parent process), you can create a new variable named VarFromSubprocess (Figure 10) within the same Content Modifier.

Figure 10
Sub-process setting a variable in the Message Header area of the Exchange
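The effect of this first Content Modifier in the sub-process can be sketched as follows. The tag names and the constant text are placeholders of my own; the article only fixes the variable names orderNumber, msg, and VarFromSubprocess, with the real values defined in Figures 9 and 10.

```python
# Hedged sketch of the sub-process's first Content Modifier: it rebuilds
# the body from the two variables the parent set, and adds the new header
# VarFromSubprocess for the trip back to the parent.
def content_modifier_sub(exchange):
    exchange["body"] = (
        f"<orderNumber>{exchange['headers']['orderNumber']}</orderNumber>"  # assumed tag
        f"<msg>{exchange['properties']['msg']}</msg>"                       # assumed tag
    )
    exchange["headers"]["VarFromSubprocess"] = "greetings from the child"   # placeholder text
    return exchange

ex = {"body": "", "headers": {"orderNumber": "10300"}, "properties": {"msg": "hi"}}
content_modifier_sub(ex)
print(ex["body"])
```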

The new variable contains a string constant that later is added to the response message by the parent process. Following the first Content Modifier in the sub-process, a Content-based router represented by the diamond-shaped Exclusive Gateway takes care of adding more information to the response message. Figure 11 depicts the gateway’s configuration.

Figure 11
Configuration of the Gateway
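The routing logic can be approximated like this. I am assuming the threshold 10250 and that the upper route handles the larger numbers (the article only states the lower route's text, "OrderNumber lower 10250"); the exact Camel Simple condition is defined in Figure 11, and the upper route's text is a placeholder.

```python
# Hedged sketch of the content-based router plus the two Content Modifier
# steps behind it: the condition evaluates the header set by the parent,
# and each branch wraps the current payload in <result2> tags with a text
# revealing which path was taken.
def route(exchange):
    body = exchange["body"]
    if int(exchange["headers"]["orderNumber"]) >= 10250:  # assumed condition
        text = "OrderNumber not lower 10250"              # placeholder text
    else:
        text = "OrderNumber lower 10250"
    exchange["body"] = f"<result2>{body}{text}</result2>"
    return exchange

low = route({"body": "", "headers": {"orderNumber": "10000"}, "properties": {}})
print(low["body"])
```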

Note the Condition Expression column in the first row of the Routing Condition table. For the decision about which route to follow, the condition again relies on the header variable set by the parent process. Now continue with the two Content Modifier steps following the gateway. Actually, they just add some static text to the already existing payload. The configuration of the upper Content Modifier is shown in Figure 12.

Figure 12
Set the body’s content using the Content Modifier

Note the two new surrounding XML tags labeled result2. They wrap the current payload referenced by the Camel variable ${in.body}. In addition, the constant text later reveals whether the right path was chosen. The lower Content Modifier is configured in the same way; only the text is changed to OrderNumber lower 10250. That finishes the explanation of the sub-process. You can continue with the last Content Modifier in the main process, following the step that caused the sub-process’s invocation. Its settings are shown in Figure 13.

Figure 13
Configuration of the last Content Modifier in the main process

The final missing piece is the proof that variables set in the sub-process can be accessed by the parent process. That’s why you find the expression ${header.VarFromSubprocess} inside the Message Body’s definition. It accesses exactly the variable that I set before in the called local integration process (Figure 10). Note also the new XML tags labeled result3 that again wrap the current payload plus the contents of the variable VarFromSubprocess. If everything works correctly, you should get three nicely nested tags labeled result3, result2, and result1, respectively. So let’s see that integration flow in action. After the invocation of my demo process, you get the reply depicted in Figure 14.

Figure 14
Response message produced by the integration flow

Obviously, everything worked as expected. The result tags are nested and, depending on the entered order number, you get the respective text message stating whether the number was lower than 10250 or not.
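As a recap, the whole round trip can be condensed into one toy function. Everything marked as assumed is my own reconstruction (tag contents, placeholder texts, the result1 wrapping in the sub-process, the routing threshold); only the nesting order result3 > result2 > result1, the variable names, and the boundary 10250 come from the article.

```python
# Toy end-to-end reconstruction of the demo flow, under assumptions:
# main process sets the variables, the sub-process builds a <result1>
# payload and the router wraps it in <result2>, and the final Content
# Modifier in the main process wraps everything in <result3> together
# with the header set by the child.
def run(order_number):
    ex = {"body": "", "headers": {}, "properties": {}}
    # main process: first Content Modifier
    ex["headers"]["orderNumber"] = str(order_number)
    ex["properties"]["msg"] = "demo"                       # placeholder value
    # sub-process doTheWork
    ex["body"] = f"<result1>{ex['properties']['msg']}</result1>"  # assumed tag
    ex["headers"]["VarFromSubprocess"] = "from child"      # placeholder text
    branch = "lower" if order_number < 10250 else "not lower"
    ex["body"] = f"<result2>{ex['body']}OrderNumber {branch} 10250</result2>"
    # back in the main process: last Content Modifier
    ex["body"] = f"<result3>{ex['body']}{ex['headers']['VarFromSubprocess']}</result3>"
    return ex["body"]

print(run(10300))
```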

Real-life integration scenarios can grow quite large. That’s why a means to structure large process models is urgently required. SAP HCI supports structuring of large process models by using local integration processes to keep each individual process model at a reasonable size. The parameter transfer between parent and child processes is solved by the exchange, the standard container for managing data within an integration flow.

As you have seen, the exchange is handed over from step to step on one process level. The exchange is also the vehicle for moving data from parent to child processes and vice versa. This makes defining global or local variables superfluous. You are now able to model, run, and monitor really complex scenarios. If you follow my recommendations, you also ensure manageable process sizes, making it fun to work with them.

Dr. Volker Stiehl

Prof. Dr. Volker Stiehl studied computer science at the Friedrich-Alexander-University of Erlangen-Nuremberg. After 12 years as a developer and senior system architect at Siemens, he joined SAP in 2004. As chief product expert, Volker was responsible for the success of the products SAP Process Orchestration, SAP Process Integration, and SAP HANA Cloud Integration (now SAP HANA Cloud Platform, integration service). He left SAP in 2016 and accepted a position as professor at the Ingolstadt Technical University of Applied Sciences where he is currently teaching business information systems. In September 2011, Volker received his Ph.D. degree from the University of Technology Darmstadt. His thesis was on the systematic design and implementation of applications using BPMN. Volker is also the author of Process-Driven Applications with BPMN as well as the co-author of SAP HANA Cloud Integration and a regular speaker at various national and international conferences.

