
WHITEPAPER


User's Guide to the Emerging Database Landscape: Row vs. Columnar vs. NoSQL
Overview
Businesses today are challenged by the ongoing explosion of data. Organizations capture, track, analyze and store more information than ever before, from mass quantities of transactional, online and mobile data to growing amounts of machine-generated data. In fact, machine-generated data represents the fastest-growing category of Big Data. How can you effectively address the impact of data overload on application performance, speed and reliability? Where do newer technologies such as columnar databases and NoSQL come into play? The first thing to recognize is that, in the new data management paradigm, one size will not fit all data needs. The right IT solution may encompass one, two or even three technologies working together. Figuring out which of the several technologies (and even subvariants of these technologies) meets your needs, while also fitting your IT staffing and budget parameters, is no small issue. We hope this User Guide will help clarify which data management approach is best for which of your company's data challenges.

INFOBRIGHT Corporate Headquarters 47 Colborne Street, Suite 403 Toronto, Ontario M5E1P8 Canada Tel. 416 596 2483 Toll Free 877 596 2483 info@infobright.com www.infobright.com Sales: North America Tel. 312-924-1695 EMEA Tel. +353 (0)87 743 7107


Today's Top Data-Management Challenge


Businesses today are challenged by the ongoing explosion of data. Gartner predicts data growth will exceed 650% over the next five years.1 Organizations capture, track, analyze and store everything from mass quantities of transactional, online and mobile data to growing amounts of machine-generated data. In fact, machine-generated data, including sources ranging from web, telecom network and call-detail records to data from online gaming, social networks, sensors, computer logs, satellites, financial transaction feeds and more, represents the fastest-growing category of Big Data. High-volume web sites can generate billions of data entries every month.

As volumes expand into the tens of terabytes and even the petabyte range, IT departments are being pushed by end users to provide enhanced analytics and reporting against these ever-increasing volumes of data. Managers need to be able to quickly understand this information, but, all too often, extracting useful intelligence can be like finding the proverbial needle in the haystack. Using traditional row-based databases that were not designed to analyze this amount of data, IT managers typically try to mitigate plummeting response times in several ways. Unfortunately, each method has a significant adverse impact on analytic effectiveness and/or costs. A recent survey from Unisphere Research2 highlighted the most typical approaches:

- Tuning or upgrading the existing database, the most common response, translates into significantly increased costs, either through administration costs or licensing fees
- Upgrading hardware processing capabilities increases overall TCO
- Expanding storage systems increases overall costs in direct proportion to the growth of data
- Archiving old data translates into less data your analysts and business users can analyze at any one time; frequently, this results in less comprehensive analysis of user patterns and can greatly impact forward-looking analytic conclusions
- Upgrading network infrastructure leads to both increased costs and, potentially, more complex network configurations

So, if throwing money at your database problem doesn't really solve the issues, what should you do? How can you effectively address the impact of data overload on application performance, speed and reliability? Where do newer technologies such as columnar databases and NoSQL come into play?

Figure 1. Machine-Generated Data Drives Big Data

1 Gartner IT Infrastructure, Operations & Management Summit 2009 Post Event Brief.
2 Keeping Up with Ever-expanding Enterprise Data, Joseph McKendrick, Research Analyst, Unisphere Research, October 2010.


Coexistence. Not Competition.


The first thing to recognize is that, in the new data management paradigm, one size will not fit all data needs. Instead of building the one, single, ultimate database, the driving force behind the behemoth data-warehousing efforts of the last decade or so, IT managers need to identify the right technologies to solve their particular business and data problems. The right IT solution may encompass one, two or even three technologies working together. Open-source technology will coexist with proprietary software. Row-based databases will live peacefully next to columnar databases, and both will share data with NoSQL solutions.

Sounds simple, doesn't it? Almost idyllic. Of course, there's a bit more to it than that. As Mike Vizard of IT Business Edge recently noted, "[T]here is more diversity in the database world than any time in recent memory."3 Figuring out which of the several technologies (and even subvariants of these technologies) meets your needs, while also fitting your IT staffing and budget parameters, is no small issue. We hope this User Guide will help clarify which data management approach is best for which of your company's data challenges.

3 The Rise of the Columnar Database, Mike Vizard, IT Business Edge, June 14, 2011.

Transactional Powerhouse

The Ubiquity of Thinking in Rows


Organizing data in rows has been the standard approach for so long that it can seem like the only way to do it. An address list, a customer roster, inventory information: you can just envision the neat rows of fields and data going from left to right on your screen. Databases such as Oracle, MS SQL Server, DB2 and MySQL are the best-known row-based databases.

Row-based databases are ubiquitous because so many of our most important business systems are transactional. Row-oriented databases are well suited for transactional environments, such as a call center where a customer's entire record is required when their profile is retrieved and/or when fields are frequently updated. Other examples include:

- Mail merging and customized emails
- Inventory transactions
- Billing and invoicing

Where row-based databases run into trouble is when they are used to handle analytic loads against large volumes of data, especially when user queries are dynamic and ad hoc. To see why, let's look at a database of sales transactions with 50 days of data and 1 million rows per day. Each row has 30 columns of data, so this database has 30 columns and 50 million rows. Say you want to see how many toasters were sold in the third week of this period. A row-based database would return 7 million rows (1 million for each day of the third week) with 30 columns for each row, or 210 million data elements. That's a lot of data elements to crunch to find out how many toasters were sold that week. As the data set increases in size, disk I/O becomes a substantial limiting factor, since a row-oriented design forces the database to retrieve all column data for any query.

Figure 2. Example Data Set (Row-based Database)

As we mentioned above, many companies try to solve this I/O problem by creating indices to optimize queries. This may work for routine reports (i.e., you always want to know how many toasters you sold in the third week of a reporting period), but there is a point of diminishing returns, as load speed degrades because indices need to be recreated as data is added. In addition, users are severely limited in their ability to quickly run ad-hoc queries (e.g., how many toasters did we sell through our first Groupon offer? Should we do it again?) that can't depend on indices to optimize results.
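To make the toaster arithmetic concrete, here is a minimal Python sketch; the figures are taken directly from the example above, and nothing here is specific to any product:

```python
# Row-store cost for the toaster query: every column of every matching
# row is read, even though only two columns matter. Figures match the
# example in the text: 50 days x 1 million rows/day, 30 columns.
ROWS_PER_DAY = 1_000_000
COLUMNS_PER_ROW = 30
DAYS_IN_THIRD_WEEK = 7

elements_read = DAYS_IN_THIRD_WEEK * ROWS_PER_DAY * COLUMNS_PER_ROW
print(f"{elements_read:,} data elements scanned")  # 210,000,000
```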

Pivoting Your Perspective: Columnar Technology

Lightning Analytics


Column-oriented databases allow data to be stored column-by-column rather than row-by-row. This simple pivot in perspective, looking down rather than looking across, has profound implications for analytic speed. Column-oriented databases are better suited for analytics where, unlike transactions, only portions of each record are required. By grouping the data together this way, the database only needs to retrieve columns that are relevant to the query, greatly reducing the overall I/O. Returning to the example in the section above, we see that a columnar database would not only eliminate 43 days of data, it would also eliminate 28 columns of data. Returning only the columns for toasters and units sold, the columnar database would return only 14 million data elements, or 93% less data. By returning so much less data, columnar databases are much faster than row-based databases when analyzing large data sets.
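Extending the earlier sketch, the same query against a columnar layout touches only the two relevant columns; the column names here are illustrative:

```python
# Columnar cost for the same toaster query: only the relevant columns
# (say, product and units_sold) are read for the 7 days in question.
ROWS_PER_DAY = 1_000_000
DAYS_IN_THIRD_WEEK = 7
RELEVANT_COLUMNS = 2          # product, units_sold (illustrative names)

row_store = DAYS_IN_THIRD_WEEK * ROWS_PER_DAY * 30
column_store = DAYS_IN_THIRD_WEEK * ROWS_PER_DAY * RELEVANT_COLUMNS
saving = 100 * (1 - column_store / row_store)
print(f"{column_store:,} elements, {saving:.0f}% less than the row store")
# -> 14,000,000 elements, 93% less than the row store
```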

Figure 3. Pivoting Data for Columnar View

In addition, some columnar databases (such as Infobright) compress data at high rates because each column stores a single data type (as opposed to rows, which typically contain several data types), allowing compression to be optimized for each particular data type. Row-based databases mix multiple data types with a limitless range of values, making compression less efficient overall. Read the sidebar Infobright: Putting Intelligence in Columns to learn how Infobright improves query speed even more, while simplifying administration and lowering costs, with its Knowledge Grid and Domain Expertise™ capabilities.
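The effect is easy to see with a toy experiment using general-purpose zlib compression; this illustrates only the principle, not any vendor's actual compression scheme:

```python
import zlib

# A column holds one data type with many repeated values; a row store
# interleaves types, which dilutes the patterns a compressor can find.
status_column = ",".join(["OK"] * 900 + ["FAIL"] * 100).encode()
mixed_rows = "\n".join(
    f"{i},OK,2011-06-14,{i * 0.37:.2f}" for i in range(1000)
).encode()

for label, data in [("column", status_column), ("rows", mixed_rows)]:
    ratio = len(data) / len(zlib.compress(data))
    print(f"{label}: {ratio:.0f}:1 compression")
```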


Will the Real NoSQL Please Stand Up?


A term coined by Carlo Strozzi in 1998,4 NoSQL has been a hard term to pin down from the beginning. For one thing, while most people now translate the term to mean "Not Only SQL," there are other accepted variations. More importantly, the term refers to a broad, emerging class of non-relational database solutions. NoSQL technologies have evolved to address specific business needs that row technologies could not scale to meet and column technologies were unsuited to address. Currently, there are over 112 products or open-source projects in the NoSQL space, with each solution matching a specific business need. For example:

- Real-time data logging, such as in finance or web analytics
- Web apps, or any app which needs better performance without having to define columns in an RDBMS
- Storing frequently requested data for a web app

Infobright: Putting Intelligence in Columns


Infobright's high-performance analytic database is designed to handle business-driven queries on large volumes of data, without IT intervention. Easy to implement and manage, Infobright provides the answers your business users need at a price you can afford.

How is this achieved? Infobright combines a columnar database with intelligence we call the Knowledge Grid to deliver fast query response with unmatched administrative simplicity: no indexes, no data partitioning, and no manual tuning. Infobright uses intelligence, not hardware, to drive query performance:

- Creates information about the data upon load, automatically
- Uses this to eliminate or reduce the need to access data to respond to a query
- The less data that needs to be accessed, the faster the response

What this means to customers:

- Self-managing: 90% less administrative effort
- Low cost: more than 50% less than alternative solutions
- Scalable, high performance: up to 50 TB using a single industry-standard server
- Fast queries: ad-hoc queries are as fast as anticipated queries, so users have total flexibility
- Compression: data compression of 10:1 to 40:1 means a lot less storage is needed

Infobright offers an open-source and a commercial edition of its software. Both products are designed to handle data volumes up to 50 TB. Try it yourself: download our Community Edition at www.infobright.org, or a free trial of our Enterprise Edition at www.infobright.com.
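A minimal sketch of the load-time-metadata idea described above, assuming min/max values are recorded per block of a column at load time so that blocks whose range cannot match a predicate are never read. This is illustrative only, not Infobright's actual design:

```python
# Record min/max per block at load time; at query time, consult the
# metadata first and open only blocks whose range can match.
blocks = []                                   # (min, max, values)
for i in range(100):                          # time-ordered data clusters well
    values = list(range(i * 1000, (i + 1) * 1000))
    blocks.append((values[0], values[-1], values))

def count_equal(target):
    hits = opened = 0
    for lo, hi, values in blocks:
        if lo <= target <= hi:                # metadata check, no data access
            opened += 1
            hits += values.count(target)
    return hits, opened

print(count_equal(42_500))                    # (1, 1): 99 blocks never read
```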

While each technology addresses different problems, they all share certain attributes: huge volumes of data and high transaction rates, a distributed architecture, and often unstructured (or semi-structured) data with heavy read/write workloads. Unstructured information is typically text-heavy but may contain data such as dates and other numbers as well. The resulting irregularities and ambiguities make this data unsuitable for traditional row-based or column-based structured databases. In short, NoSQL solutions are typically beasts in terms of their data capacity, lookup speed and ability to handle streaming data, especially over highly scaled environments. On the other hand, they generally lack a SQL interface and often come with little or no programmatic interfaces, meaning that setup and administration may require some specialized skills. In addition, NoSQL solutions can be limited in their ability to execute complex queries, restricting the types of actionable analytics they can deliver. For example, queries that JOIN two tables or employ nested SELECTs are typically not possible using these technologies.
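To see what the missing JOIN means in practice, here is a sketch of the application-side stitching a developer does instead; all names and record structures are illustrative:

```python
# Without JOIN support, the application fetches related records itself
# and combines them in code; a SQL engine would do this internally.
users = {"u1": {"name": "Jane"}, "u2": {"name": "Ravi"}}
orders = [
    {"order_id": "o9", "user_id": "u1", "item": "toaster"},
    {"order_id": "o10", "user_id": "u2", "item": "kettle"},
]

for order in orders:                      # hand-rolled "join" on user_id
    user = users[order["user_id"]]
    print(f'{user["name"]} bought a {order["item"]}')
```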

Below, we go a bit deeper into each of the three main NoSQL subvariants: key-value stores, document stores and column stores.

4 Wikipedia, http://en.wikipedia.org/wiki/NoSQL


Key-value Store

A key-value store does what it sounds like it does: values are stored and indexed by a key, usually built on a hash or tree data structure.5 Key-value pairs are widely used in tables and configuration files. Key-value stores allow the application to store its data without predefining a schema; there is no need for a fixed data model. In a key-value store, for example, a record may look like:

12345 => img456.jpg,checkout.js,20

Companies turn to key-value stores when they require the functionality of key-values but do not require the technology overhead of a traditional RDBMS, either because they require more efficient, cost-effective scalability or because they are working with unstructured or semi-structured data. Key-value stores are great for unstructured data centered on a single object, and where data is stored in memory with some persistent backup. Consequently, they are typically used as a cache for data frequently requested by web applications such as online shopping carts or social-media sites. As these web pages are created on the fly, the static components are quickly retrieved and served up to the user.
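A minimal in-memory sketch of the idea, with plain Python dicts standing in for a real key-value store such as Redis or Memcached (which add networking, persistence and eviction):

```python
# One opaque value per key: no schema, no joins, fast lookups.
cache = {}

def put(key, value):
    cache[key] = value

def get(key, default=None):
    return cache.get(key, default)

# The record from the example above:
put("12345", ["img456.jpg", "checkout.js", 20])
print(get("12345"))        # -> ['img456.jpg', 'checkout.js', 20]
```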
Document Store

As with a key-value store, companies turn to NoSQL document stores when they are dealing with huge volumes of data and transactions requiring massive horizontal scaling or sharding. And, similarly, there is no need for a pre-set schema. However, the data in document stores can contain several keys, so queries aren't as limited as they are in key-value stores. For example, in a document store a record could read:

id => 12345, name => Jane, age => 22, email => jane@gmail.com

While multiple keys increase the types of possible queries, the data stored in these documents do not need to be predefined and can change from document to document. The tradeoff for the more complex query options is speed: queries with a key-value store are much simpler and often faster. Document stores are often deployed for web-traffic analysis, user-behavior/action analysis, or log-file analysis in real time. However, while document stores allow more query capabilities than key-value stores, there are still limitations, given the non-relational basis of the document-store database.
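A sketch of the same record as a Python dictionary shows why multiple keys widen the queries you can run; the second document is an illustrative addition:

```python
# Each document carries its own keys; simple filters can target any field.
documents = [
    {"id": 12345, "name": "Jane", "age": 22, "email": "jane@gmail.com"},
    {"id": 12346, "name": "Ravi", "age": 31},   # schema varies per document
]

# Query on a non-key field, something a pure key-value store cannot do:
over_21 = [d["name"] for d in documents if d.get("age", 0) >= 21]
print(over_21)                                  # -> ['Jane', 'Ravi']
```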

5 For more on hash functions, see http://en.wikipedia.org/wiki/Hash_function. For more on tree data structures, see http://en.wikipedia.org/wiki/Tree_%28data_structure%29.


Column Store

Column stores are an emerging NoSQL option, created in response to very specific database problems involving beyond-massive amounts of data across a hugely distributed system. Think Google. Think Facebook. Imagine the colossal amount of data that Google stores in its data farms, and then imagine how many permutations of data sets need to be compiled to respond to all possible Google searches. Clearly, this task could never be accomplished in any reasonable time frame with a traditional relational database. It requires the ability to handle massive amounts of data but with more query complexity than either key-value stores or document stores would deliver. Most column stores also use MapReduce, a fault-tolerant framework for processing huge datasets on certain kinds of distributable problems using a large number of computers. This technology is still emerging, and use cases may eventually overlap with document stores as both technologies mature. But at the moment, the use cases in production for column stores are generally limited to applications such as Google and Facebook.

A Column by Any Other Name...

It should go without saying, but we'll say it anyway: a column store is only similar to a column-based database in that they both have the word "column" in their names. A column-based database is still a structured relational database, albeit one optimized for analytics. A column store is firmly in the NoSQL camp; this is a system for handling huge volumes of data and transactions, in a massively distributed manner, without the need to define the database structure up front, though it tends to have more SQL traits than either a key-value store or document store.

LiveRail: Infobright & Hadoop Power Video Advertising Analytics


LiveRail delivers technology solutions that enable and enhance the monetization of internet-distributed video. By focusing specifically on challenges and opportunities created by online video, LiveRail's tools are designed to be easier, more efficient and more effective than traditional display ad servers at delivering and tracking advertising in this medium. Their platform enables publishers, advertisers, ad networks and media groups to manage, target, display and track advertising in online video.

The Challenge: With a growing number of customers, LiveRail was faced with managing increasingly large data volumes. They also needed to provide near real-time access to their customers for reporting and ad hoc analysis.

The Solution: LiveRail chose two complementary technologies to manage hundreds of millions of rows of data each day: Apache Hadoop and Infobright. Detail is loaded hourly into Hadoop and at the same time summarized and loaded into Infobright. Customers access Infobright 7x24 for ad-hoc reporting and analysis, and can schedule time if needed to access cookie-level data stored in Hadoop.

"Infobright and Hadoop are complementary technologies that help us manage large amounts of data while meeting diverse customers' needs to analyze the performance of video advertising investments." Andrei Dunca, CTO of LiveRail

Can I Get a Hadoop From Anyone?


While this User Guide addresses the emerging database landscape, no conversation would be complete without mentioning Hadoop. Hadoop is a scalable, fault-tolerant distributed system for data storage and processing (open source under the Apache license). It has two main parts:

- Hadoop Distributed File System (HDFS): self-healing, high-bandwidth clustered storage
- MapReduce: a fault-tolerant distributed processing framework

The data typically stored in Hadoop is complex, comes from multiple data sources and, well, there's always lots and lots of it. Beyond being a mass-storage system, Hadoop, through MapReduce, is also used for batch processing and computation executed in parallel across a cluster of servers. While running MapReduce jobs is a common way to access data stored in Hadoop, technologies such as HBase and Hive, which sit on top of HDFS, are also used to query the data.
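A toy, single-machine sketch of the MapReduce pattern; Hadoop runs the same map, group, reduce shape fault-tolerantly across a cluster:

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield word, 1

def reduce_phase(pairs):
    """Shuffle + reduce: group pairs by key and sum each group."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

print(reduce_phase(map_phase(["big data", "big distributed data"])))
# -> {'big': 2, 'data': 2, 'distributed': 1}
```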


Summary and Next Steps


The world of the one-size-fits-all database is over. Myriad technology approaches have been (and are being) developed to meet the challenges of Big Data. This activity impels corporate IT groups to look beyond row-based solutions to find the right fit for their analytic needs, staffing and budget requirements. We hope that this paper, and the following Emerging Database Landscape chart, serves as a useful resource for figuring out the strengths and weaknesses of the various database approaches available today.

Infobright: High-performance Analytics for Machine-generated Data

Infobright's high-performance database is the preferred choice for applications and data marts that analyze large volumes of machine-generated data such as web data, network logs, telecom records, stock tick data and sensor data. Easy to implement, and with unmatched data compression, operational simplicity and low cost, Infobright is being used by enterprises, SaaS and software companies in online businesses, telecommunications, financial services and other industries to provide rapid access to critical business data. If you decide that a columnar database has a place in your analytic solutions, you can try it for yourself, free: either download our Community Edition at www.infobright.org, or a free trial of our Enterprise Edition at www.infobright.com. For more information, please visit http://www.infobright.com or join our open source community at http://www.infobright.org.


The Emerging Database Landscape


This chart gives a quick overview of the strengths, weaknesses and use cases for row-based, columnar and NoSQL databases.

Row-Based
  Basic description: Data structured in rows
  Common use cases: Transaction processing; interactive transactional applications
  Strengths: Robust, proven technology; capturing and inputting new records
  Weaknesses: Scale issues; less suitable for analytical queries, especially against large databases
  Key players: MySQL, Oracle, SQL Server, Sybase ASE

Columnar
  Basic description: Data is vertically striped and stored in columns
  Common use cases: Historical data analysis; data warehousing; business intelligence
  Strengths: Fast query support, especially for ad hoc queries on large datasets; compression
  Weaknesses: Not suited for transactions; import and export speed; heavy computing resource utilization
  Typical database size range: Several GBs to 50 TB
  Key players: Infobright, Aster Data, Sybase IQ, Vertica, ParAccel

NoSQL: Key-Value Store
  Basic description: Data stored usually in memory with some persistent backup
  Common use cases: Used as a cache for storing frequently requested data for a web app
  Strengths: Scalability; very fast storage and retrieval of unstructured and partly structured data
  Weaknesses: Usually all data must fit into memory; no complex query capabilities
  Typical database size range: Several GBs to several TBs
  Key players: Memcached, Amazon S3, Redis, Voldemort

NoSQL: Document Store
  Basic description: Persistent storage for unstructured or semi-structured data, along with some SQL-like querying functionality
  Common use cases: Web apps or any app which needs better performance and scalability without having to define columns in an RDBMS
  Strengths: Persistent store with scalability features such as sharding built in; better query support than key-value stores
  Weaknesses: Lack of sophisticated query capabilities
  Typical database size range: Few TBs to several PBs
  Key players: MongoDB, CouchDB, SimpleDB

NoSQL: Column Store
  Basic description: Very large data storage; MapReduce support
  Common use cases: Real-time data logging, such as in finance or web analytics
  Strengths: Very high throughput for Big Data; strong partitioning support; random read-write access
  Weaknesses: Low-level API; inability to perform complex queries; high latency of response to queries
  Typical database size range: Few TBs to several PBs
  Key players: HBase, BigTable, Cassandra

Copyright 2011 Infobright Inc. Infobright is a registered trademark of Infobright Inc. All other trademarks and registered trademarks are the property of their respective owners.
