Data Vault Issues

As you might imagine, the Data Vault (just like any other data modeling technique) has its issues, pitfalls, and limitations. Some of the limitations of the original modeling technique have been overcome, but not all. This is an evolutionary step in the process of information modeling, and as such it will have additional refinements going forward that improve it. As the inventor, I feel it necessary to share such pitfalls with the community, thus providing the true picture and not just a "colorized" version of it that is all rosy and warm. If you have comments, please sign up for the forums and offer critical review, or post issue statements there.

On this page we take a look at some of the risks of using a Data Vault model, along with some of the successes. We also look at some of the risks of other data modeling concepts as well, both from a business and a technical standpoint. The first white paper in the series explains some of the evolution, timelines, and pros and cons, so we will not reiterate those issues here. What we will discuss are some of the business battles, performance concerns, and the data versus information debate. We will shed some light on what we've done in the past to make the Data Vault a success in different businesses.

Dave Wells of TDWI wrote an interesting article about the Data Vault and its position in the warehouse. In this article he also discusses the concepts of Master Dimensions and Master Fact Tables. We'll explore some of those issues below. See the article here.

The following are the concepts which we cover on this page:

* Data Vault Issues
* Data Vault Versus Information

Data Vault Issues

The Data Vault is just that: a "data" based architecture. While it is purposely architected around the business, it is driven by the data that we keep in our source systems. It is juxtaposed to the knowledge proposition, which is driven from information. In order to make sense of the data beyond pattern recognition, integration, and flexibility, we must 1) improve the data quality, 2) merge, mix, and match different data sets, and 3) multi-dimensionalize the data. We have proposed an architecture that separates and moves the business processing or information processing rules downstream, beyond the Data Vault. In other words, data moves out of the Data Vault and back into the presentation layers for use by the business. In doing so, we have defined a storage format (form) of information into which we load raw data. The only rules that apply to the data coming into the Data Vault produce an integrated and consolidated view of the lowest level of grain of data. This data is separated by rate of change, type of information, and semantic meaning.
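To make this loading discipline a little more concrete, here is a minimal Python sketch. The table layout, column names, and the choice of which attributes change slowly or quickly are hypothetical, invented for illustration; the point is simply that the business key lands in a Hub structure while descriptive attributes are split into separate Satellite structures by rate of change, with no cleansing or business rules applied on the way in.

```python
from datetime import datetime, timezone

def split_source_row(row, record_source):
    """Split one raw source row into Data Vault style structures.

    Hypothetical illustration: the business key goes to a Hub entry,
    while descriptive attributes are grouped into separate Satellite
    entries by their expected rate of change. The data is stored exactly
    as it arrived; no quality rules are applied.
    """
    load_ts = datetime.now(timezone.utc)

    hub_customer = {
        "customer_key": row["customer_number"],  # business key, lowest grain
        "record_source": record_source,
        "load_ts": load_ts,
    }

    # Slowly changing descriptive data kept in one satellite.
    sat_customer_profile = {
        "customer_key": row["customer_number"],
        "name": row["name"],
        "birth_date": row["birth_date"],
        "record_source": record_source,
        "load_ts": load_ts,
    }

    # Frequently changing contact data kept in its own satellite.
    sat_customer_contact = {
        "customer_key": row["customer_number"],
        "email": row["email"],
        "phone": row["phone"],
        "record_source": record_source,
        "load_ts": load_ts,
    }

    return hub_customer, [sat_customer_profile, sat_customer_contact]
```

Keeping the satellites separated by rate of change means a frequently updated attribute does not force a new history row for the slowly changing attributes stored alongside it.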
The architecture doesn't care or pay attention to how the information can or should be used, nor does it discern which information is "right" versus "wrong" (mostly because right and wrong are matters of perspective, depending on who is examining the data). Thus, the data in the Data Vault is not for end-user (direct) access; it is for power-user access and data mining or discovery operations. This leaves us to figure out what to do downstream from the Data Vault. Questions like: How do we turn our data into information? Does it make sense to "stop" at different information stores for certain data? How can real-time be realized if the data has to stop in different places? But before we get into answering some of these questions, let's take a look at some of the business and technical issues that face us if we build a Data Vault.

Business Issues

* Data in the Data Vault is not end-user accessible.
* Data in the DV is not "cleansed or quality checked".
* Benefits of the DV are indirect, but very real.
* More up-front work is required for long-term payoff.
* Business users believe (in the beginning) that they don't need an "extra copy of the data".
* Elegant architecture is secondary to business churn.
* Using a DV forces examination of source data processes and source business processes; some business users don't want to be accountable and will fight this notion.
* Businesses believe their existing operational reports are "right"; the DV architecture proves this is not always the case.
* Business users from different units MUST agree on the elements (scope) they need in the Data Vault before parts of it can be built.
* Currently there is only one source of information exchange; there are no books on the Data Vault (yet).
* Some businesses fight the idea of implementing a new architecture, claiming it is as yet unproven.

Technical Issues

* Modelers struggle to grasp the reasons behind "not enforcing relationships" at the data model level.
* The Data Vault model introduces many, many joins.
* The Data Vault model is based on MPP computing, not SMP computing, and is not necessarily a clustered architecture.
* The Data Vault contains all deltas; deletes and updates are only recorded as status flags on the data itself.
* Data must be made into information BEFORE delivering it to the business.
* Modelers must accept that there is no "snowflaking" in the Data Vault.
* Stand-alone tables for calendar, geography, and sometimes codes and descriptions are acceptable.
* 60% to 80% of source data typically is not tracked by change, forcing a re-load and delta comparison on the way into the DV (a sketch of this comparison appears below).
* Tracking queries becomes paramount to charging different user groups for "data utilization rates" and to funding new projects.
* Businesses must define the metadata at the column level in order to make sense of the Data Vault storage paradigm.

Just because we've listed the issues doesn't mean there aren't mitigation strategies. In fact, About the Data Vault and the forums describe some of the mitigation strategies that will help you overcome these issues. On the other hand, as with any evolving or changing architecture, there will come a time when this one will be changed too (to meet future needs that aren't yet seen).

There are issues with understanding the application of the Data Vault, which come in different forms. Understanding how the Hubs and Links are utilized, built, and represented is the most difficult part. Once these techniques are mastered, they can be applied anywhere in the model, in a repeatable and redundant fashion.

Take the Hub, for example. The question I commonly get is: why store only one copy of the business key, especially if the key arrives from multiple sources? The answer is quite simple: integration. Raw-level integration of the business information is of utmost importance. In this manner, the same key may represent different descriptive data and tie together the currently dis-integrated source data. If, on the other hand, the two keys actually represent physically different semantic meanings, then they must be separated into two Hubs. If the two keys represent two different "customers", for example, but are at the same semantic layer, then a record source must be added to the business key (unique index) of the Hub. This is actually a business problem that must be recorded and fixed.
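A minimal Python sketch of these two mechanics follows: storing a business key only once no matter how many source systems deliver it, and comparing untracked source attributes against the most recent satellite row to detect real deltas. The structures, hashing scheme, and field names are illustrative assumptions, not a prescribed implementation.

```python
import hashlib
from datetime import datetime, timezone

def load_hub(hub_rows, business_key, record_source):
    """Insert a business key into the Hub only if it is not already present.

    Raw-level integration: the same key arriving from several source
    systems still produces exactly one Hub entry.
    """
    if business_key not in hub_rows:
        hub_rows[business_key] = {
            "record_source": record_source,  # first system that delivered the key
            "load_ts": datetime.now(timezone.utc),
        }

def attribute_hash(attributes):
    """Hash the descriptive attributes so deltas can be detected cheaply."""
    payload = "|".join(str(attributes[key]) for key in sorted(attributes))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def load_satellite(sat_rows, business_key, attributes, record_source):
    """Append a new satellite row only when the attributes actually changed.

    Source systems that do not track changes force this full delta
    comparison on the way into the Data Vault.
    """
    history = sat_rows.setdefault(business_key, [])
    new_hash = attribute_hash(attributes)
    if not history or history[-1]["hash_diff"] != new_hash:
        history.append({
            "hash_diff": new_hash,
            "attributes": dict(attributes),
            "record_source": record_source,
            "load_ts": datetime.now(timezone.utc),
        })
```

The satellite keeps every delta as a new row; nothing is updated or deleted in place, which matches the "all deltas, status flags only" point in the Technical Issues list above.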
Dave Wells and I sat down and discussed the nature of the Data Vault, and perhaps it's a very fitting name. We both agree that there is something more, something bigger to be had, and that's a common or master set of "information". So the next question we face is: how do we turn data into information?

Data Vault Versus Information

The Data Vault houses data, as we've discussed above: unedited, unchanged, flat-out data. Information can be found when the data and structures are processed with specific functionality, say for instance data mining, or further integration and aggregation of the data to build and load a star schema. Dave has come up with something interesting he calls "Master Dimensions, Master Facts, and Master Cubes" as an information platform. In the past, this type of processing has happened on the way out of the Data Vault and on the way to "report collections" (flat, wide, highly denormalized reporting tables), which is just another name for a data mart. It has typically been built into a second layer of staging tables (that was in 1997, when the hardware and software hadn't advanced to what they are today).

These days, turning the data into information certainly requires additional stomping grounds, additional processing, and complex business rules. After all, a "version of the truth" is hard to come by, and it can change as often as the business management team, or the business itself, changes. If we look at this from a different angle, batch versus real-time, a whole host of issues crops up, ranging from data refresh rates to the size and volume of data going through those complex processing rules. We also end up with an N-dimensional set of perspectives from which the business users want to examine the data. In other words: Sales by Finance by Contracts, crossed with Executives vs. Decision Makers vs. Management vs. Line Workers, crossed with Campaign Management vs. CRM vs. ERP, and so on. There are N dimensions by which to examine this information (that's not the tricky part).

The tricky part is getting the answers to agree across all the dimensions while sourcing off the same data. It requires a great deal of patience on the part of the business users, in order to allow IT the proper time to identify the grain in between. It also requires an audit phase prior to release of the data into regular business operations, and the ability to aggregate by X dimensions in the same data store in order to provide the queries with consistent answers. Finally, it requires the ability to strip out bad, old, or unwanted data before processing, and to separate that information into separate stores. Oh yes, we nearly forgot: business user training is necessary (by the business, for the business) in order to understand which collections or data marts have which data, and what grain it's stored at. These are the business users who will write the new reports against the information stores housed within the warehouse.

Sometimes the warehouse team is required to manage accountability through this process as well, and in doing so must meet at least "two versions of the truth with the same answer at the same time." We had a case where we had to match Sales revenue to Finance revenue, and provide both answers at the same time. Finance wanted to see all the adjustments in the time period to which they applied; Sales wanted to see an aggregate of the adjustments and what the total was at the end of each month. We solved this problem with a dual-entry report collection. By dual-entry, we mean dual entry date. We had two dates: the applied date and the changed date (or loaded date). Accounting rolled up the grain by applied date; Sales rolled up the numbers by changed date. Each had their version of the truth, and they "agreed with each other."
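Here is a minimal Python sketch of that dual-entry idea, with made-up adjustment figures: the same rows are rolled up once by applied date for Accounting and once by changed (loaded) date for Sales, so both "versions of the truth" come from the same data.

```python
from collections import defaultdict

# Hypothetical revenue adjustments: the amount, the month it applies to,
# and the month in which the change actually arrived (was loaded).
adjustments = [
    {"amount": 1200.0, "applied_month": "2006-05", "changed_month": "2006-06"},
    {"amount": -300.0, "applied_month": "2006-05", "changed_month": "2006-07"},
    {"amount":  450.0, "applied_month": "2006-06", "changed_month": "2006-06"},
]

def roll_up(rows, date_field):
    """Aggregate the same adjustment rows by the chosen date field."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[date_field]] += row["amount"]
    return dict(totals)

finance_view = roll_up(adjustments, "applied_month")  # adjustments in the period they apply to
sales_view = roll_up(adjustments, "changed_month")    # adjustments in the month they arrived

print(finance_view)  # {'2006-05': 900.0, '2006-06': 450.0}
print(sales_view)    # {'2006-06': 1650.0, '2006-07': -300.0}
```

Both roll-ups sum to the same grand total; they only differ in which period each adjustment lands in, which is exactly how the two answers can "agree with each other."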
In terms of turning data into information, this is different for each business; this is where the business rules certainly do apply. However, there are a few things to note about scalability and complexity.

* As the complexity of the business rules increases, scalability decreases. They are in inverse proportion to one another.
* As scalability decreases, the same amount of data either takes twice as long to pass through the process (an exponential rise), or IT must limit the data set to half as much in order to process it in the same time frame as previous run-times.
* Data which is in error must be separated from data which is not in error; this is where the colored lenses of "truth" change the view of data/information. The data in error must be sent to "error marts" if it is marked bad enough to cause mistakes in the aggregate calculations downstream. Otherwise, if it doesn't cause mistakes in the aggregate calculations, it can be marked with default values and doesn't need to be re-routed to error marts (a small sketch of this split appears at the end of this page).
* Complexity limits the possibilities for parallelism and scalability; therefore, sending 100% of the data through all business rules for every load becomes an impossible task (based on the exponential rise of complexity). In other words, analyzing the entire data set over and over again eventually costs more than the business can bear.
* Information is useful when it holds summaries of the data in bite-sized and understandable chunks. This is when the business can make better decisions based on aggregate information. That's not to say that lots of small aggregates presented in the right way can't help the business; it's to say that 1) too much data can easily overwhelm the decision maker, and 2) too much data aggregated into averages can easily lead the user astray. A fine balance must be reached between the two.

There are other issues and knowledge points which will be discussed within the forums going forward. Please sign up for the forums today and offer your opinion on how to handle such information. As we all know, there are pros and cons to every technique on the market; this one is no different. What is different about this approach is that it is an evolutionary step to the next level of information modeling.
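As a small closing illustration of the error-handling bullet above, the sketch below (Python, with a hypothetical rule and default value) routes records that would distort downstream aggregates into an error mart, while records with tolerable problems are given default values and allowed through.

```python
def route_records(records):
    """Separate records that would distort the aggregate calculations.

    Hypothetical rule: a missing or negative amount would corrupt a sum,
    so the record goes to the error mart; a missing region is tolerable
    and is simply defaulted.
    """
    clean, error_mart = [], []
    for record in records:
        amount = record.get("amount")
        if amount is None or amount < 0:
            error_mart.append(record)               # would cause mistakes downstream
        else:
            record.setdefault("region", "UNKNOWN")  # marked with a default value
            clean.append(record)
    return clean, error_mart

clean, errors = route_records([
    {"amount": 100.0, "region": "EMEA"},
    {"amount": None, "region": "APAC"},  # routed to the error mart
    {"amount": 75.0},                    # region defaulted to "UNKNOWN"
])
```

Which errors are "bad enough" to re-route is itself a business decision, and it changes with the colored lenses of truth described above.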
