Summary: Projects that involve data: CRM projects, MDM initiatives, ERP implementations, Business
Intelligence and data warehouse projects, data governance programs, migrations, consolidations,
harmonizations… all offer the opportunity to improve data quality. This paper is a phase-by-phase guide
that identifies, for both business team members and IT resources, what tasks should be incorporated into
a project plan for each team function as they relate to data quality. This provides a roadmap for optimal
effectiveness and coordination. These data quality essentials are based upon best practices collected
from experiences on thousands of data management projects and successes over the past 30 years.
Corporate Headquarters
+ 1 (978) 436-8900
trilliumsoftware@trilliumsoftware.com
TRILLIUM SOFTWARE
Designed for collaboration and information sharing, the Trillium Software System lets businesses
individually define what data quality means to their organizations. The Trillium Software System
comprises:
TS Discovery - Collaborate across business and IT resources to assess large volumes of data
within and across systems. Robust data profiling capabilities allow users to understand data
domains, formats, patterns, and relationships as they exist within the data itself, as well as to see
whether data conforms to specific business rules and defined data standards. Ongoing
monitoring assesses data to ensure that high quality is maintained at all times.
TS Quality - Cleanses, standardizes, and matches any data: name and address data; product
data; asset, material, and location data; transactions; etc. World-class global capabilities and
automated, rules-based intelligence give organizations a simple but complete solution to handle
massive volumes of data—out of the box. Organizations can further customize rules and adapt to
meet changing business needs.
TS Enrichment - Complements, supplements, and amplifies data by drawing on over 5000 third-
party sources. This service provides a fully automated process for appending third-party data
seamlessly for storage and distribution.
TS Insight - Monitor business rules and data quality metrics graphically through a customizable
Data Quality Dashboard. Use scorecards and trending information to communicate data
initiatives, results, and goals. TS Insight shows which data sources have data quality issues and
which do not meet minimum corporate threshold and acceptability levels, helping you forecast
and allocate the right resources to improve and optimize the business processes that impact
them.
Usage Notice
Permission to use this document is granted, provided that: (1) The copyright notice “©2008 by Harte-Hanks
Trillium Software” appears in all copies, along with this permission notice. (2) Use of this document is only
for informational and noncommercial or personal use and does not include copying or posting the document
on any network computer or broadcasting the document through any medium. (3) The document is not
modified from the original version.
It is illegal to reproduce, distribute, or broadcast this document in any context without express written
permission from Trillium Software®. Use for any other purpose is expressly prohibited by law, and may result
in severe civil and criminal penalties. Violators will be prosecuted to the maximum extent possible.
Incorporating an effective data quality solution into your project requires a number of
additional activities throughout the lifecycle of your project plan. Some tasks are more
suited for business user resources to take the lead on while other tasks are primarily
technical activities.
This white paper introduces some of the techniques used by successful companies to plan
and successfully implement data quality processes as part of an initiative. While
technology greatly facilitates and automates data quality management, it should be
applied in accordance with a measurable, objective methodology to assure success and a
high ROI for the project. As you’ll see in the pages to follow, process, people, and
business expertise are major components in achieving an improvement in data quality,
leaving technology as a way to automate and improve processes.
This paper is aimed squarely at project managers and describes the step-by-step process
for implementing data quality as part of a project. Specifically, this white paper highlights:
• The importance of involving business users in the project, and the best ways to do so, to ensure
their needs are met
In this phase, you will evaluate what resources and time are
needed to execute the project and what issues, roadblocks and
risks will need to be overcome. Start by setting up your team,
defining the scope, expectations, and deliverables of the project,
and conducting an analysis of the current state of your data.
Each type of team member has a clearly defined role in making the initiative a
success and must be accountable for his/her part. Here is how roles and responsibilities
are typically defined with regards to the data quality initiative elements of a project.
Role: Executive Leaders (CIO, CFO, VP)
Responsibility: Publicly endorse the data quality initiative. Foster support. Secure funding for the project. Resolve issues and remove roadblocks.
The right technology will help keep team members engaged and communicating. A data
analysis environment with a central repository provides the right architecture for multi-role,
multi-member projects where resources need to interact and communicate about source
data and target environment designs.
This architecture provides an infrastructure that allows a common understanding about the
issues around the quality of data, recommendations for use of data, and what
transformations may need to take place as data is migrated.
In the short term, begin improving data by starting small and keeping the scope well-
defined. In the long term, keep in mind that if all goes well, you will have success, and you
will be asked to replicate this success across the company.
Scope
Scoping draws clear parameters around the data you are capturing, moving, cleansing,
standardizing, linking, and enriching, and its use. Each requirement must be assessed to
determine whether or not the data involved in this project can or will meet the requirement
to the satisfaction of the business. There are several basic questions to answer:
3. What is the level of quality within each source, for this information?
In a data migration, for example, you might be looking for certain key elements to appear
in the target data model. You may first need to confirm that the anticipated target data
physically exists within source systems and may next need to determine the best source
for the data, or most trusted source. If taking data from multiple sources, you may have to
establish a set of standards that all source systems conform to in order to produce a
consistent representation of that data in the new target system.
Understanding the scope of the project early is key to its successful and timely delivery.
Be sure to categorize the need-to-have data and the nice-to-have data. Be prepared to
drop off the nice-to-haves if time becomes short or if the effort of moving, cleansing,
standardizing, etc. outweighs the anticipated business benefit.
There are ways to limit scope. For example, if you’re integrating multiple data sources, will
it be one large movement of data or several smaller movements? Will the data need to be
the entire database, or is six months of data enough? Working through these issues with the
business team and IT will keep the project on-time and on target, and will help manage
expectations during the project lifecycle so there are no surprises as the project nears a
close.
If a process or technology exists that meets user expectations, can it be leveraged within
the new solution? If so, can it also be leveraged for other solutions in accordance with
long-term business objectives? If not, is it a good source of standards or logic, which can
be designed into a new data quality solution that offers more options for future growth?
Often solutions to data quality problems might be point solutions, without regard to the
entire enterprise. The key is to develop a solution that can serve the needs of the entire
organization.
If data does not meet expectations, what are the root causes of these gaps and how must
they be addressed before proceeding with your project? Does project scope need to be
revisited or do isolated requirements need to be classified as high risk?
Use a data discovery process on the source data to determine if the data is viable. If the
data cannot support key business requirements, the project is at a high risk of failure
despite investments of time and money. Thus, before committing to development, first
assess data to ensure that the project can ultimately meet user expectations.
Data discovery is the process of discovering the unknown about your data by bringing
together IT and business team members, familiar with the meaning of the data content
and how the data is to be used. This team will address issues that arise early in the
project lifecycle and create workable solutions. For example, you may not want to
incorporate specific data elements into your CRM system if there is a high degree of null
values, since the data will not meet your business needs.
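To make that null-value check concrete, the sketch below (Python with pandas; the file name, column names, and the 30% threshold are illustrative assumptions, not features of any particular tool) reports how sparse each candidate field is before it is mapped into the CRM system.

```python
import pandas as pd

# Load a sample extract of the candidate source table (file name is hypothetical).
extract = pd.read_csv("crm_source_extract.csv", dtype=str)

def blank_rate(column: pd.Series) -> float:
    """Fraction of values that are missing or blank."""
    blank = column.isna() | (column.str.strip() == "")
    return blank.mean()

rates = extract.apply(blank_rate).sort_values(ascending=False)

# Flag fields too sparse to be useful in the target CRM (threshold is illustrative).
for field, rate in rates.items():
    status = "review before migrating" if rate > 0.30 else "ok"
    print(f"{field:30s} {rate:6.1%}  {status}")
```

Other data conditions that typically surface during discovery, together with their potential business impact, include: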
Data Condition: Number of records with duplicate primary keys
Business Impact: Unique keys must be generated by IT, thus causing delays in the project if unexpected.

Data Condition: Blank values for critical fields of data such as quantity per box in supply chain data or shipping dimensions
Business Impact: Does the customer get the right quantity of items ordered? Can the customer logistically handle the package they receive?

Data Condition: Adherence to standards such as the metric or English systems of measurement
Business Impact: Do the same exact or similar parts exist in the supply chain, but under different measurements?

Data Condition: Total dollar value of bills with no invoices; total dollar value of invoices with no bills
Business Impact: Is the billing system in compliance with regulations? Is revenue being reported properly? Are orders being fulfilled without a purchase order and invoice?
Technology can play a significant role in uncovering data conditions such as those listed
above and in establishing a recorded baseline of these conditions. Not only will technology help
you organize and document results, but it can further be used to manage conditions going
forward. Automated data profiling analysis and exception reporting along with drill-down
functionality gives you results and the tools to involve non-technical users in the analysis
of these results. You can set conditions, such as those listed above, and understand
immediately to what degree the metrics are met.
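As an illustration of how conditions like those in the table can be expressed as automated, repeatable checks, here is a minimal Python sketch; the extract file names, column names, and rules are hypothetical and stand in for whatever conditions your project defines (or the equivalent configuration in your profiling tool).

```python
import pandas as pd

orders = pd.read_csv("orders_extract.csv")    # hypothetical source extract
billing = pd.read_csv("billing_extract.csv")  # hypothetical source extract

conditions = {
    # Duplicate primary keys: IT must generate new keys, delaying the project if unexpected.
    "orders_with_duplicate_keys": int(orders["order_id"].duplicated().sum()),
    # Blank critical fields, e.g. quantity per box in supply chain data.
    "orders_missing_qty_per_box": int(orders["qty_per_box"].isna().sum()),
    # Total dollar value of bills that have no matching invoice.
    "bill_value_without_invoice": float(
        billing.loc[billing["invoice_id"].isna(), "bill_amount"].sum()
    ),
}

for name, value in conditions.items():
    print(f"{name:30s} {value}")
```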
Moreover, do other members of the business community agree with the relationships that
have been identified between data, its quality, and the impact of that data on the
business? For an effective and useful ROI, it is essential to establish buy-in as part of an
early communication plan task, and follow up with updated metrics at pre-determined
milestones.
Define Standards
Project team members representing the business play a key role in standards definition.
The team members involved in this step should be a fair representation of the ultimate
user audience. For example, if the end user audience will include sales and marketing
and potentially shipping, someone from each of the named departments should be
involved in defining system standards. Also, a representative from each of the company's
departments should act as a data steward to make sure data adheres to the defined
standards in the new system, if not ALSO in the source systems.
With every business, there are certain standards that can be applied to every piece of
data. For example, a name and address should almost always conform to postal
standards. E-mail addresses conform to a certain shape with a user name, internet
domain and an “@” sign in the middle. However, there may be data for which your team
needs to define a new standard. This typically includes part numbers, item descriptions, supply
chain data, and other non-address data. For this, you need to set the definition with the
business team. As part of the process, explore the current data, decide what special data
exists in your required fields, and establish system standards that can then be automated
and monitored for compliance.
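For instance, a conformance check for the e-mail shape described above might look like the following sketch (plain Python; the patterns are deliberately simplified, and real postal or e-mail standards supplied by a data quality tool are far richer).

```python
import re

# Simplified illustrative patterns; production rules (e.g. full postal standards)
# would normally come from the data quality tool or the standards the team defines.
EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
US_ZIP = re.compile(r"^\d{5}(-\d{4})?$")

def conforms(value: str, pattern: re.Pattern) -> bool:
    """True if the value matches the agreed standard."""
    return bool(pattern.match(value.strip()))

print(conforms("jane.doe@example.com", EMAIL))  # True
print(conforms("jane.doe at example", EMAIL))   # False
print(conforms("01876-2000", US_ZIP))           # True
```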
business. Most project managers find it helpful at this point to seek out the endorsement
of a ranking executive. Use the data quality metrics and business impact generated in
the previous step to keep executives in the loop about your initiative; this will help you
maintain executive endorsement of the data quality initiative, foster support, and secure funding
for future projects or additional resources. If there are any internal political challenges,
executives can help resolve issues and remove roadblocks. If they are already well
informed of your efforts, status, and potential positive impact, it will be much easier to
invoke their support.
Access Data
At this point in design, it is necessary to take a deep dive into data extracts, representative
of the actual data that will be used as part of the production system. The purpose here is
to understand what mappings, transformations, processing, cleansing, etc. must be
established to create and maintain data that meets the needs and standards of the new
system or solution.
IT resources are generally responsible for defining data extracts and gaining appropriate
access to source systems. This data can then be shared with other team members to
support detailed design tasks.
It is very feasible to leverage the same technology used for risk mitigation and data metric
definition to now help with this up-front analysis of the data. Additional functionality,
available within a data discovery tool, presents users with statistics, results, and a window
into the data, so that information can be easily digested and navigated by IT and business
users alike. A special purpose data browser makes it possible for users to identify and
review issues in the data, collaborate and reach consensus about what should be done.
Use technology to facilitate source system analysis. Employ advanced profiling and data
discovery functions for comprehensive column and attribute analysis. Identify potential
problems within structured data fields such as dates, postal codes, product codes,
customer codes, addresses or any attributes that should conform to a particular format
and structure. Configure custom data quality rules, and flag any attributes that do not
conform.
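One common way to surface format problems in structured fields is pattern analysis: each value is generalized into a pattern (digits to '9', letters to 'A') and the distinct patterns per column are counted. The sketch below shows the idea in plain Python; the sample values and expected patterns are illustrative assumptions.

```python
from collections import Counter

def pattern_of(value: str) -> str:
    """Generalize a value: digits -> '9', letters -> 'A', other characters kept."""
    return "".join("9" if c.isdigit() else "A" if c.isalpha() else c for c in value)

def profile_formats(values) -> Counter:
    """Count how many values share each generalized pattern."""
    return Counter(pattern_of(str(v).strip()) for v in values)

postal_codes = ["01876", "01876-2000", "O1876", "", "12345"]
for pattern, count in profile_formats(postal_codes).most_common():
    print(f"{pattern!r:15} {count}")
# Any pattern other than '99999' or '99999-9999' flags values that need cleansing.
```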
When you’re done with the analysis, you will have a very good idea of the challenges you
face in integrating data and the information necessary to develop designs that address the
challenges proactively. At this time it is also a good idea to revisit the project plan and
confirm that appropriate time and resources have been allocated to deal with any data
issues that have been uncovered.
Capture a Baseline
Business team members have defined the data quality metrics and business impact in a
previous step. Now is the time to take a baseline measurement. As part of the source
system analysis, a baseline of each source system should be captured and stored as well
as how multiple systems conform to expected metrics or business rules. In some cases, it
will make sense to look not only at each source system in isolation, but across systems.
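A baseline only has long-term value if it is recorded in a form you can re-run and compare against later. The following sketch (plain Python; the file name and metric names are assumptions) appends a timestamped snapshot of metrics for one source system so that post-implementation runs can be diffed against it.

```python
import json
from datetime import datetime, timezone

def capture_baseline(source_name: str, metrics: dict,
                     path: str = "dq_baselines.jsonl") -> dict:
    """Append a timestamped snapshot of data quality metrics for one source system."""
    snapshot = {
        "source": source_name,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "metrics": metrics,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(snapshot) + "\n")
    return snapshot

# Metric values would come from the profiling checks run earlier (figures illustrative).
capture_baseline("billing_system",
                 {"duplicate_keys": 42, "qty_per_box_blank_rate": 0.07})
```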
To take things a step further and offer more long-term value however, as the designs are
being set, and if technology investments are being made, revisit the long term business
objectives outlined during the Blueprint phase. Evaluate vendor tools using your longer
term vision and ensure that you have options for future connectivity requirements. Provide
your organization with the flexibility to extend the data quality processes you design for
your immediate project to other systems, in different environments, and on other platforms
that exist within your technical enterprise infrastructure.
It’s up to you and your business team members to decide how to standardize each of
these name formats for optimal efficiency in the target systems. Should ‘John and Jan
Smith’ be linked, but separate records in your master file or remain as a single entry?
Set up a test file or database of records that present these common data situations for QA
purposes during this stage of the project. A quality assurance (QA) task will be completed
prior to going live with new data. This test case scenario definition effectively begins to
build a list of data quality anomalies, which you can leverage to build and test business
rules and quality processes. Some of the business rules and test cases will come standard
with the cleansing process of packaged data quality solutions. These should be highly
tunable to meet your organization’s specific needs. Others, you can begin to build, based
on your needs.
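One lightweight way to hold those test-case scenarios is as input/expected pairs that can be replayed against the cleansing process after every rule change. In the sketch below (plain Python), standardize_record is a stand-in for whatever cleansing routine or tool interface your project actually uses, and the example records are hypothetical.

```python
# Each case pairs a known-problematic input with the output the business expects.
TEST_CASES = [
    ({"name": "john & jan smith"}, {"name": "John and Jan Smith"}),
    ({"name": "ACME corp."},       {"name": "Acme Corporation"}),
    ({"zip": "1876"},              {"zip": "01876"}),
]

def run_qa(standardize_record) -> list:
    """Run every test case through the cleansing routine and collect mismatches."""
    failures = []
    for raw, expected in TEST_CASES:
        actual = standardize_record(raw)
        if any(actual.get(field) != value for field, value in expected.items()):
            failures.append({"input": raw, "expected": expected, "actual": actual})
    return failures
```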
When a data quality exception occurs, the data steward must resolve the exception and
decide whether the anomaly is an unusual occurrence, or whether new rules should
become part of the data quality process. Your project should define a clear way to handle
exceptions, including automated distribution (of error records) where possible, areas of
responsibility for correcting, and a method to report anomalies back to the source.
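A minimal sketch of such an exception-handling path is shown below (plain Python; the rule names, steward assignments, and output file are purely illustrative): each failed record is logged with the responsible steward and the owning source system so the anomaly can be corrected and reported back.

```python
import csv

# Project-specific mapping of business rule -> responsible data steward (illustrative).
STEWARD_FOR_RULE = {
    "missing_postal_code": "customer_data_steward",
    "negative_quantity": "supply_chain_steward",
}

def route_exception(record: dict, rule: str, source_system: str,
                    path: str = "dq_exceptions.csv") -> None:
    """Log a failed record for the responsible steward and the owning source system."""
    row = {
        "rule": rule,
        "steward": STEWARD_FOR_RULE.get(rule, "unassigned"),
        "source": source_system,
        **record,
    }
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:      # write a header only for a new, empty file
            writer.writeheader()
        writer.writerow(row)
```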
Phase 3: Implement
When all the planning is done, it’s time to begin to put the
technology in place to improve data using automation wherever
possible. For the technology resources implementation tasks,
we recommend “Trillium Software Data Quality Methodology”, a
white paper detailing how to standardize, enrich, and match
data, and how to fine-tune the business rules to optimize data.
Although this is the most technical of the plan’s phases,
business users still play an important role in this phase.
QA Initial Results
The most important part of the data quality process should be that business users
are happy with the results. As you begin to implement new data quality process
designs, project managers should have business users run sample data through the data
quality processes to ensure results meet their expectations. Business users can compare
results before and after processing with the same data discovery tool that they have been
using all along. Coarse-tune processes using sample data, then switch over to a complete
data set for formal QA.
Once results have been verified, it’s time to load sample data into the target applications
and begin testing it more thoroughly. By taking the extra step with the business during the
QA cycle, you’re much more likely to be successful the first time you load data and will
avoid loading and reloading data repeatedly.
Validate Rules
In phases one and two, you have determined both what you have and what you need. Rules
are developed in an iterative analytic process. This requires access to knowledge about
the intended meaning of the data. Business users and data analysts should work together on
this process, applying the same technology and process described for analyzing source
data, if additional questions come up. Give business users an opportunity to set up test
data scenarios and allow them to review the results after the cleansing process.
This is also your opportunity to review and add your specific terminology (e.g., industry-
specific terms, company-specific definitions, and regional colloquialisms) not initially part of
the standardization terminology. It is also a chance to determine whether your data will
require geography-specific standardization.
Involving business team members directly with the tuning process ensures that the rules
exactly meet their needs and removes the risk of failed expectations late in the game.
Although your project’s solution may not need the capability to expand and grow over
time, having the option of extending to any and all applications, even those that may come
to your company through mergers and acquisitions, is a best practice that should
be seriously considered. This also includes the ability to carry the rules from one
application to the next.
This will likewise meet any need to expand from a scheduled batch process to a multi-use
real-time process wherever and whenever you need it within enterprise systems.
Data discovery tools can once again aid the project during
the UAT process by giving both business users and technical users a view into the data.
Teams can collaborate and view the results of any data quality process, before and after
the process is run.
It’s valuable to test inside the target application, too. Things to test include:
• All forms—particularly important when using a real time interface into the data
quality tool
• All reports—ensures the results from the reports are as expected
• Test scenarios—test the results of the data quality process’s impact on
systems and applications that interface with the ones included in your project
Throughout your UAT, make sure your business users have easy access to the data,
whether it be through tools and technologies used throughout the project, or otherwise, to
quickly address any questions that arise.
• Any new required fields or formats as they enter data into the system
• Any new screens or pop-ups requesting validation of automated cleansing and
matching of data
• The positive impact and business benefits of new, cleaner data
• The involvement of both business users and IT users in the process of creating
high quality data.
Most project managers will create a schedule and plan to engage the new system with
newly cleansed data. The migration to production generally occurs during an off-peak
time. The decisions you need to make include training, if/how to phase the rollout,
expertise needed when the cutover occurs, whether to run multiple systems (old and new)
in tandem, and if so, for how long, whether to hire additional resources (e.g., consultants
or contractors) to assist, and any additional security considerations.
If you have executed the tasks communicated so far, you have significantly reduced the
likelihood of any of the above-mentioned issues occurring on your project. By taking
the time upfront to thoroughly investigate source system data, incorporate necessary
processing into your designs, and perform UAT that includes anticipated problematic data
conditions, you have proactively addressed the issues that cause most project teams
severe headaches late in the game.
Should something unexpected occur and require attention, you already have the
resources and infrastructure in place to quickly react: your team of both IT and business
users is already familiar with the project, the data, and any technology you have been
using (i.e., your data discovery tool) and can swiftly look at the data and assess the
problem for a quick resolution.
Phase 5: Go Live
SWOT Team
At this stage, it’s a good idea to have in place a cross-functional
SWOT (Strengths, Weaknesses, Opportunities, Threats) team
including – business analysts or departmental resources familiar
with business processes, performance engineers, data
architects, field technicians, and contacts from any vendors, to
be available on an emergency basis to provide rapid problem
resolution.
Teams should meet to complete a post mortem, discussing how the project went and how
to further improve on data quality during the next round.
Problem Resolution
All support organizations have some form of processes and procedures in place for
helping to resolve user and system-generated queries, issues, or problems in a consistent
manner. In some organizations these processes are very structured; in others they are
more informal.
In addition to efficient processes, it is also very important that the support team have well
defined roles and responsibilities to reduce response time to customer needs. Here is an
example of an escalation hierarchy, along with the individuals who perform these tasks, for
a fairly large implementation. For this example, when a problem is identified, it is escalated
as follows.
Tier 1: Help Desk - Help Desk technicians provide first-line support to the user community
and perform any additional training and remote operations to resolve issues. If the help
desk is unable to resolve the issue, it is escalated to Tier 2.
Tier 4: Project Managers - An issue usually reaches this level if an architectural change is
required to resolve it. The project managers will have to analyze the situation and take the
appropriate actions to resolve it.
The hierarchy just described is merely one example of a support and escalation hierarchy.
No matter what type of support hierarchy you have, it is crucial that each group within it
understand its role and responsibilities. Moreover, the team must be able to quickly
resolve or escalate any issues that arise.
Post Mortem
Re-run your baseline processes and collect updated results for a quantified measurement
of your impact. Gather up your metrics, your support log, your exceptions processing log,
and other relevant documentation. Call a meeting to:
• List the lessons learned during the project; use them as input to improve future project
delivery
To facilitate this process, many organizations leverage the technology used for risk
assessment, baseline measurements, source system analysis and design, and user
acceptance testing. For example, a data discovery tool that business users have been trained
to use to investigate data, collaborate with IT, and measure data metrics, along with all the
knowledge capital built into the environment, is easily adapted to perform scheduled audits
and ongoing monitoring. Email workflows can be defined to alert key
players when problems arise or when metrics exceed or fall below defined thresholds.
Monitoring ensures that you continue to meet or even exceed user expectations over time
so that your data assets become a trusted source, actively USED by the business.
Phase 6: Maintain
Announce Successes
One of the keys to maintaining funding for your project is to
internally publicize the successes you’ve had. In reality, a data
quality initiative should be constantly re-sold at every
opportunity, to continue to reinforce in people’s minds the value
you are introducing to your organization.
This is also a very good time to remind the company that data quality is everyone’s
problem and to point out ways they can help solve data quality issues.
Monitor
You can keep track of data quality in a number of ways. A full analysis with your data
discovery tool is one way. Each time you compare it to your baseline or the previously
measured baseline, you will get a very detailed idea of how your data quality initiative is
progressing.
Some tools include a way to automatically keep track of data quality, too. For example,
they may include an e-mail notification feature to inform key personnel when business
rules are violated such as when data values do not meet predefined requirements,
threshold values are exceeded, or nulls are present where unacceptable. These powerful
features prevent errors from impacting your business, should your enterprise use data
sources that are prone to change.
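The sketch below shows the shape of such a notification feature in plain Python using the standard library; the thresholds, addresses, and SMTP host are placeholders, and a packaged tool would provide this configuration through its own interface.

```python
import smtplib
from email.message import EmailMessage

THRESHOLDS = {"qty_per_box_blank_rate": 0.05, "duplicate_keys": 0}  # illustrative limits

def alert_on_breaches(metrics: dict, recipients,
                      smtp_host: str = "mail.example.com") -> dict:
    """Email data stewards when any monitored metric exceeds its threshold."""
    breaches = {name: value for name, value in metrics.items()
                if name in THRESHOLDS and value > THRESHOLDS[name]}
    if breaches:
        msg = EmailMessage()
        msg["Subject"] = "Data quality threshold exceeded"
        msg["From"] = "dq-monitor@example.com"
        msg["To"] = ", ".join(recipients)
        msg.set_content("\n".join(f"{name}: {value} (limit {THRESHOLDS[name]})"
                                  for name, value in breaches.items()))
        with smtplib.SMTP(smtp_host) as server:
            server.send_message(msg)
    return breaches
```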
Data stewards, system owners, and/or key business contacts can receive alerts on critical
changes and errors. The tools can then allow users to call up the violation(s) and drill
down on the error(s).
Organizations with action-oriented governance programs use such features to alert key
stakeholders and responsible parties of data anomalies. Each day, stewards can address
the issues at hand and create prioritized tasks to resolve the issues identified.
Hold a meeting, or a series of meetings, to gather new requirements for version 1.1 of the
project.
2. The assigned resource, perhaps the data steward, assesses the change request
to see if it's worth investigating. Compare the benefits to how difficult the change
is (impact). Weed out the obvious change requests with limited benefit and place
them in a nice-to-have list for future reference. Assess risks in making the
changes necessary.
3. The data steward and project manager should document and communicate the
assessment to key stakeholders.
4. If the change sounds reasonable and has obvious benefit, ask the corporate
sponsor to accept the changes in schedule, cost, quality and risk. Remain
objective and let the sponsor decide if the change request has merit.
5. Communicate the schedule and status of the change request to key stakeholders.
We have covered a lot of ground herein, but if you distill this paper down into its essence,
there are just six guiding principles for incorporating a successful and complete data
quality strategy, regardless of whether you have a project that is related to CRM, ERP,
SCM, CDI, DW, BI, MDM, SOA, data harmonization, OLTP, legacy migration, and so on.
When building a solution for data quality, make sure your solution is designed for
expansion over time, as your organization develops new needs and faces new
challenges. The guiding principles that will best prepare you for extension and growth
over time recommend a solution that is:
Principle: Comprehensive
Description: Deliver fit-for-purpose data for all types of data, everywhere, anytime. It includes the ability to support global business, not just information from the US and UK, but from China, Japan, Germany, Mexico, etc. Not just single-byte, but double-byte data. Not just name and address data, but all types of data. Important for consolidation/migration projects with international or cross-functional reach.

Principle: Intelligent
Description: Contains intelligence to identify and address problems in context so you do not have to apply heavy human resources to fix the data. Important for lowering IT costs and saving you money on human resources.

Principle: Seamless
Description: Does your solution have the capability to expand and grow over time, extending to any and all applications, even those that may come to your company through mergers and acquisitions? Important when you want to apply data quality to key enterprise applications.

Principle: Dynamic
Description: Can you quickly and precisely change the rules if you need to, to adapt to meet changing business needs? Important because business models and business processes can quickly change as technology advances.

Principle: Measurable
Description: Can you measure that your solution is working both immediately and over time? Does it produce quantifiable results that can impact business? Important as an internal self-justification for your team, a way to continue improvement, and a way to justify expenditures.
The Trillium Software System answers these challenges with a scalable, flexible
framework that supports the integration of data quality processes into any system, at any
time, anywhere in the world. From tactical projects to strategic practices, the Trillium
Software System increases integration efficiency, lowers development costs, and provides
faster return on investment (ROI) from data quality initiatives through:
The Trillium Software System includes TS Discovery, a data discovery tool that you can
leverage across the life of your project. It facilitates the inclusion of business users in all
phases of your project to reduce risk, and better meet the demands of the business. The
Trillium Software System also includes TS Quality, a rules-based engine that promotes
reusable data quality processes, business user-defined and -tuned rules, and the most
deployment options of any data quality product on the market. The Trillium Software
System also offers TS Enrichment, a data enrichment service available for supplementing
and increasing data available within source systems.