Business Process, Interrupted: The High Cost of Master Data Mis-Management

by Tom Kennedy | SAPinsider

April 1, 2010

If your data governance went awry, would you recognize the symptoms? This special section explains these signs, the price of mis-managed master data efforts, and how to solve such problems by implementing “passive” and “active” data governance models.

Any solution is only as good as the data that powers it. This is especially true of master data management (MDM), the hub of all the enterprise data your business processes rely on. Regardless of which MDM solution you use, data quality is everything.

Data quality, in turn, relies on effective data governance. In a recent study, 95% of organizations said that data governance was anywhere from “important” to “critical” to the successful implementation and rollout of MDM strategies — yet nearly half of those same companies rated their own data governance maturity as “low” or “very low.”1 There’s clearly a gap between what companies need to do and what they are actually doing (see sidebar).

The disparity between the data quality required to effectively power an MDM solution and the data quality you currently have may be considerable. This is why I encourage companies to focus their efforts on improving data quality.

Signs That Your Data Governance Is Amiss

When master data management goes awry, the impact may not show up for days, weeks, or even months. The symptoms of poor or insufficient master data quality are rarely evident when the data is first entered or imported into the MDM system. These symptoms include:

  • Failure to ship products. A company starts getting calls from customers who want to know why their product orders have not been shipped. When the supplier checks the system, it discovers that the product is a manufactured item and therefore requires a bill of materials (BOM) and a routing, per their data standards. These omissions, which occurred during the creation of the data in the company’s manufacturing division (not its distribution division), are discovered only after customers never received their shipments.
  • Missing materials management. It can happen hundreds of times a day — a delivery truck shows up at a company, the plant crew slaps a bar code on the materials to be delivered, but they can’t enter the SKU into the system. Someone has forgotten to extend the material to the delivery plant. It will take time to fix this error and, in the meantime, boxes are stacking up on the delivery plant’s loading dock.
  • Error-prone data entry. A pharmaceutical company decides to switch its supplier of a time-sensitive ingredient used to manufacture one of its products. While the company enters all of the correct information into the system in order to receive the ingredient from the new vendor, it neglects to change the expiration date of the end material and, as a result, may be fined or subjected to a product recall in the next government audit.

In each of these examples, the issues with master data are only caught downstream — not when the error occurs.
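Checks like these can be automated rather than discovered through customer calls. The sketch below, in plain Python with illustrative field names (not actual SAP table columns), flags manufactured items that violate a "BOM and routing required" data standard:

```python
# Hypothetical material master records; field names are illustrative.
materials = [
    {"id": "MAT-100", "type": "manufactured", "has_bom": True,  "has_routing": True},
    {"id": "MAT-101", "type": "manufactured", "has_bom": False, "has_routing": True},
    {"id": "MAT-102", "type": "purchased",    "has_bom": False, "has_routing": False},
]

def audit_manufactured_items(records):
    """Flag manufactured items that violate the 'BOM + routing required' standard."""
    violations = []
    for rec in records:
        if rec["type"] == "manufactured":
            missing = [f for f in ("has_bom", "has_routing") if not rec[f]]
            if missing:
                violations.append((rec["id"], missing))
    return violations
```

Run daily, a check like this surfaces the missing BOM the day the record is created, not weeks later when an order fails to ship.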

The Source of Master Data Management Problems

Master data becomes troublesome for three major reasons: complexity, incongruent or incompatible systems, and errors and omissions.

Complexity. In a global enterprise, if business processes are not properly managed, there can be a code for everything. We recently performed an audit for a global company and found that it used more than 1,300 different terms codes. Our analysis of those codes showed that if the company reduced that number to 12, those 12 codes could handle 99% of its customers.

Maintaining 1,300 terms codes is unwieldy. Maintaining 12 master terms codes is much more manageable.
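The consolidation analysis behind a finding like this is straightforward frequency counting. A minimal sketch, assuming customer payment terms are available as a simple list of code strings (the code names below are invented for illustration):

```python
from collections import Counter

def coverage_codes(customer_terms, target=0.99):
    """Return the smallest set of terms codes (most-used first) that
    covers at least `target` of all customers."""
    counts = Counter(customer_terms)
    total = len(customer_terms)
    covered, chosen = 0, []
    for code, n in counts.most_common():
        chosen.append(code)
        covered += n
        if covered / total >= target:
            break
    return chosen

# Toy data: three codes dominate, one is a rarely used variant.
sample = ["NET30"] * 60 + ["NET60"] * 30 + ["NET90"] * 9 + ["NET31X"] * 1
```

On this toy data, three codes cover 99% of customers and the one-off variant drops out, which is exactly the kind of result that justified collapsing 1,300 codes down to 12.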

Incongruent or incompatible systems. One of the main reasons for practicing master data management is to give companies a single view of their data, regardless of which system that data stems from. But it’s never as easy as it sounds. For example, suppose a record exists in two different systems and consists of the same 12 fields, but those fields have completely different requirements in each system. Integrating these systems (and their records) with the master data hub can require months of integration effort and, if the project is not executed properly, the single view of corporate data is rarely achieved.

Many of the companies we work with have multiple application instances and as many as 2,400 legacy systems. No master data management project in the world will integrate systems so diverse unless the quality and consistency of data is addressed in advance.

Errors and omissions. While complexity and incongruent systems usually result in fairly predictable issues, many of the data issues result from errors in or omission of the data itself. In many MDM strategies, there is no single person responsible for creating the data for a single record. For a customer record, for example, a salesperson will input the name and address, and accounting will enter payment terms and credit limits. Usually, three or four people contribute to a single customer record. In the case of a new vendor, up to 10 people could be involved in creating the vendor record.

Data quality is the single biggest challenge when it comes to master data, since errors and omissions made upstream usually do not become evident until much later in a business process. During one of our data audits, we will typically find tens of thousands of data standards violations, and any one of those violations represents the risk of business process interruptions and potentially high costs to your company (see Figure 1). It is imperative that you coordinate and govern the creation of that data — this is data governance.


Procurement

  • Problem: Incorrect and incomplete supplier and parts information leads to purchases from many more suppliers than required. Solution: Improve procurement efficiency by reducing the sourcing costs incurred from supporting multiple parts types from multiple vendors; vendor and parts consolidation makes this possible.
  • Problem: The opportunity to get volume-based discount pricing for parts purchases is lost because of an excess of parts types and suppliers in the procurement process. Solution: Obtain global best prices through the consolidation of suppliers and parts types, based on an accurate, enterprise-wide view of parts and suppliers.

Logistics

  • Problem: Delays in product delivery caused by data errors (in purchase order processing, freight scheduling, warehouse handling, and receiving) lead to higher buffer stock. Solution: Lower safety stock by reducing the delays in product deliveries caused by item data errors.
  • Problem: Companies pay a penalty for relying on an excessive number of suppliers and parts types by having to carry excess inventory. Solution: Lower inventory levels through the consolidation of vendors and parts, enabled by procurement practices based on a common, accurate, and central view of all suppliers and parts.

Order entry

  • Problem: Field salespeople spend much of their time resolving invoice disputes that involve disagreements with retailers over item information. Solution: Reduce the sales force time spent resolving invoice disputes through immediate access to item details.
  • Problem: Invoice disputes that cannot be resolved have to be written off. Solution: Reduce invoice write-offs through better data synchronization with retailers, which cuts down on invoice disputes that stem from disagreements over item information.
  • Problem: Customer service costs are high because service reps cannot refer customers to data sites containing accurate information (particularly at companies that use catalogs extensively). Solution: Reduce the production cycle time for print and online catalogs: data can be ported directly to marketing materials, eliminating time-intensive data extracts, while direct access to product master data reduces manual reviews and retyping and lowers the risk of order errors and inaccurate product weights.

Figure 1 Data governance issues may be the root cause of your business process interruptions

The High Cost of Master Data Mis-Management

We recently worked with one client whose business processes required more than six weeks to introduce a new material into its landscape. The delay itself was not the only issue: the client was plagued by business process interruptions throughout those six weeks. Several different people were responsible for setting up a material record, and if one person made an error or forgot to fill out a field, there were problems downstream.

Another client, a transportation company, wanted to measure the impact of business process interruptions that resulted from its master data management issues. The company estimated that each business process interruption cost the company US$75, a fairly conservative estimate.

After 30 days, the company found that business process interruptions were costing it US$10 million each month. With numbers like these, it had no problem making the business case to bring in help to analyze the situation and implement a solution.

When errors happen, we generally blame the system. But it's not a system problem — it's a data problem.

Solving Your Master Data Management Problems with Effective Data Governance

The key to overcoming master data management challenges is effective data governance. Surprisingly, we find that many of our clients have master data systems in place, and yet they practice little or no data governance. These organizations typically have many users (with a wide range of data knowledge) entering data with an unchecked ability to make errors. The resulting data entry problems affect business processes downstream. There are few, if any, mechanisms or processes in place to catch these errors. These companies have invested sometimes millions in an MDM system, only to find that they still suffer from poor data quality.

When companies realize that they need data governance, they usually begin with a centralized model: all data entry is controlled by a center of excellence (COE) or similar organization, and the COE becomes the data governance strategy. This, however, can create a workflow bottleneck in which, even as data quality improves, it may take days or even weeks to input a new customer, vendor, or material record. The COE cannot know every business process across the company’s entire supply chain. Its members don’t always know the relative importance of a data quality error, and if that error involves a high-value item or a major customer, a delay of just a few hours, let alone days or weeks, can result in lost business or returned goods. Sales, marketing, and operations teams remain unhappy because they continue to have real problems despite the best efforts of the dedicated COE, and people quickly lose faith in their systems.

Obviously, having no data governance is ineffective. But we also believe that a centralized model is only a modest improvement. We have defined two models of data governance — passive and active — that, when implemented in stages, will dramatically improve your master data quality and, from there, directly improve your master data management (see Figure 2).

Figure 2 A typical delineation of data quality over time for a company that first chooses to use a centralized model for its master data management but then switches to the passive/active data governance model

Passive Data Governance

Many of the errors of omission that affect data quality occur during either the input or the migration of data, so the majority of these exceptions can be resolved by connecting the business interruption directly to the owner of the data that caused the issue in the first place. This requires a workflow solution in which data owners are alerted to omission errors in their data as soon as a business interruption begins; this is a central tenet of passive data governance.

The most important benefit of passive data governance is that it modifies the behavior of data owners without requiring them to abandon or change their current business processes. Passive data governance also demands visibility into master data quality metrics, such as manufactured items missing BOMs and routings, and materials with poor descriptions.
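As a rough illustration, a passive governance report can be modeled as a set of after-the-fact rules, each tied to the data owner who gets alerted on every violation. The rule names, field names, and notify hook below are all hypothetical:

```python
# Hypothetical passive governance rules; field names, rule names, and the
# notify hook are illustrative, not part of any real MDM product.
RULES = [
    ("missing_bom", lambda r: r.get("type") == "manufactured" and not r.get("bom")),
    ("poor_description", lambda r: len(r.get("description", "")) < 10),
]

# Each rule is owned by the group responsible for that data.
OWNERS = {"missing_bom": "manufacturing", "poor_description": "marketing"}

def daily_report(records, notify):
    """Run all rules over existing data and alert each violation's owner."""
    findings = []
    for rec in records:
        for name, violates in RULES:
            if violates(rec):
                findings.append((rec["id"], name))
                notify(OWNERS[name], rec["id"], name)
    return findings

# Example run: collect alerts in a list instead of sending real notifications.
alerts = []
records = [
    {"id": "MAT-200", "type": "manufactured", "bom": None, "description": "Steel bracket, 40 mm"},
    {"id": "MAT-201", "type": "purchased", "bom": None, "description": "bolt"},
]
findings = daily_report(records, lambda owner, mat, rule: alerts.append((owner, mat, rule)))
```

The key design point is that the data itself is untouched: the report only surfaces violations and routes them to the people who can fix them, which is what changes behavior over time.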

Active Data Governance

An active data governance model, by comparison, requires more programmatic solutions. For one, the business rules behind entering master data must be automated so that errors of omission are detected as the data is entered, rather than when they cause a business interruption. A flexible system must also be in place that enables collaboration and coordination of data authorship when more than one individual creates the data.
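A point-of-entry validation of this kind might look like the following sketch, in which a record that violates the business rules is rejected before it ever reaches the master data hub (the required fields shown are illustrative, not a real vendor data standard):

```python
class ValidationError(Exception):
    """Raised when a record violates the data standards at point of entry."""

# Illustrative point-of-entry rules; a real implementation would encode
# the company's own data standards for each object type.
REQUIRED_VENDOR_FIELDS = ("name", "address", "payment_terms", "bank_account")

def validate_vendor(record):
    missing = [f for f in REQUIRED_VENDOR_FIELDS if not record.get(f)]
    if missing:
        raise ValidationError(f"cannot post vendor record, missing: {missing}")
    return record

def post_vendor(record, hub):
    # Only records that pass validation are ever posted to the hub.
    hub.append(validate_vendor(record))
```

With rules like these in force, an incomplete vendor record raises an error at entry time instead of surfacing weeks later as a blocked purchase order.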

Implementing passive and active data governance is a staged process. A company should begin with passive data governance, which helps identify where issues are occurring in the first place. Once the sources of master data management issues are identified and mapped, business rules can be implemented and automated to prevent those issues in the future.

We recommend that companies first use passive data governance to understand their data standards and business processes, or the lack thereof. Then, once passive data governance is in place, you can begin to build an active data governance strategy, introducing it for any object that is high volume or high complexity.

Achieving One View of the Truth: An Example

For a global cargo company, passive data governance helped stabilize its production data: the company uncovered and defined the meaning and context of each real data error and began to measure both how frequently the error occurred and how it was remediated. Because the company was able to rationalize and lock down its rules and methods, it could measure data quality defects almost immediately.

Daily passive data governance reports were run to measure, find, and fix errors within 24 hours. New rules became immediately visible, and the capability for rapid correction minimized transactional impacts.

Once errors were being corrected daily, the company implemented active data governance, which creates workflow and speeds the collection and approval process for all material, vendor, and customer master data. All validations are performed at the point of entry, ensuring that only perfect data is collected and posted the first time, every time.

Conclusion: Data Governance Is a Journey

Taking full advantage of your SAP solutions requires an effective data governance strategy that produces business-ready data every day. Remember, complaining about data is not a strategy. You must put a process in place to ensure its quality. 

Tom Kennedy is Co-Founder and CTO of BackOffice Associates, a provider of data migration and data governance solutions for global SAP customers.

Tom was formerly CEO of Kennedy & Associates, which provided fourth-generation languages (4GLs) as well as integrated accounting and custom-developed applications for Johnson & Johnson, UPS, Otis Elevator, Citrus World, and many other global corporations.

Fourteen years ago, Kennedy co-founded BackOffice Associates and focused on the SAP data quality market. His vision and design have led to industry-leading solutions and methodologies that provide a high value proposition for customers.

Tom holds bachelor’s degrees in mathematics and economics from the University of South Florida.

1 Rob Karel, “Trends 2009: Master Data Management (Findings From Forrester’s August 2009 Global Master Data Management/Data Quality Online Survey),” October 23, 2009.
