
Computer Science and Management Faculty
Wrocław, May 2011

Data Warehouses

Supervisor: Tomasz Kajdanowicz
Author: Ömer Selçuk Coşkun
Student Number: 174958
Specialization: Computer Science
1. General Task List Information

In this task list, students will learn how to create OLAP cubes with MS SQL Server 2008 (SQL Server Analysis Services) and process them using Integration Services.

Brief Introduction:
An OLAP (online analytical processing) cube is a data structure that allows fast analysis of data. It can also be defined as the capability to manipulate and analyze data from multiple perspectives. Arranging data into cubes overcomes some limitations of relational databases.

2. Tasks – The asterisk symbol (*) denotes optional items

a) Create at least one cube using any online transactional (OLTP) database. You can use either a sample OLTP database (e.g. AdventureWorks or FoodMart) or any external database (sample Microsoft databases). This repeats the previous task list, but for another data source (a transactional one instead of one already prepared for data warehousing). Prepare screenshots with a description for each of the following steps.
Answer:
In this exercise I decided to use the FoodMart sample database, since AdventureWorks is well known and we have already covered it in both theoretical and practical exercises.
The problem with the FoodMart database is that it was created for MSSQL 2000, so it requires a careful restore and migration before we can process its data and create cubes.

The MSSQL 2005 version can be downloaded from:
http://www.e-tservice.com/downloads.html
I have migrated it to MSSQL 2008 R2 as a backup file (.bak); anyone who wishes to use it can download it from:
http://www.multiupload.com/CK9AKVHF25
It can be easily imported using SQL Server Management Studio.
1. Migrating and Importing the Database

We have successfully migrated and imported the FoodMart database into MSSQL 2008 R2.
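The import can also be scripted instead of done through the Management Studio dialogs. A minimal T-SQL restore sketch (the file paths and logical file names are assumptions; run RESTORE FILELISTONLY against the .bak first to find the real logical names):

```sql
-- Restore the downloaded FoodMart backup on SQL Server 2008 R2.
-- Paths and logical file names below are illustrative.
RESTORE DATABASE [FoodMart]
FROM DISK = N'C:\Backups\FoodMart.bak'
WITH MOVE N'FoodMart_Data' TO N'C:\Data\FoodMart.mdf',
     MOVE N'FoodMart_Log'  TO N'C:\Data\FoodMart_log.ldf',
     RECOVERY;
```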

The FoodMart database looks similar to the AdventureWorks database; it contains tables such as account, category, inventory, currency, customer, department, inventory_fact_1997, inventory_fact_1998, etc. As we can see, this data is very old, predating even the MSSQL 2000 release.

2. Creating Our Cube Using the FoodMart Database

We need to create an Analysis Services project, which allows us to define the cubes, data sources, and dimensions related to a particular cube.

After creating our project, we need to configure the connection type and database source.

We have defined our data source and connection type; our database is stored on my local SQL server.
We have successfully defined our data source and connection.

3. Defining the Fact Table and Dimensions

Selecting the data source view from the Solution Explorer menu, we will see the tables of our database. This sample database consists of different entities and various kinds of information. We will select some related tables in order to create our fact table.
I selected tables such as Store, Inventory_Fact_1997, Time_By_Day, Warehouse, and Product.

I chose these tables because I plan to query the 1997 facts for each activity over a defined time period.

This is how our tables look in SQL Server; they are correlated with each other. We will see how these relationships are represented in our cube. Let's go on.
Creating Our Cube

By selecting the Cube tab from the Solution Explorer menu, we can create our cube. We will use the sample database's tables to create it.
Additionally: Linked Objects

Having created our cube, we can also define linked objects. By right-clicking the measures area, we can select the linked-object option to create a new one.

By using a linked object, we can create, store, and maintain a dimension or measure group in one database and still make that dimension or measure group available to users of multiple databases. To users, a linked object appears like any other dimension or measure group.
Defining Aggregations

I chose inventory_fact_1997 as the measure group table. Based on this, we can define aggregations.

In this step we need to specify the aggregation usage; we can adjust these options to our needs.
In this wizard we need to perform a count of the objects we defined previously in the aggregation design. A green bar means that everything is all right and the count is done!

As we can observe, we have successfully created our aggregation.

After completing the aggregation and other customization options, we can process our project to see whether it works or not. Fortunately, it works!
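Processing can also be triggered outside the IDE. A minimal XMLA ProcessFull command (the database and cube IDs below are assumptions taken from this walkthrough; adjust them to your project), which can be run from an XMLA query window in Management Studio:

```xml
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <!-- IDs are illustrative; use the IDs from your Analysis Services database -->
    <DatabaseID>task3</DatabaseID>
    <CubeID>z</CubeID>
  </Object>
  <Type>ProcessFull</Type>
</Process>
```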

Now we can analyze our data on the basis of different measures. In this example I computed product-level inventory counts, warehouse sales, warehouse costs, and similar estimates. We can extend or change the analysis using the measures and table dimensions on the Browser tab.
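The same browsing can be expressed as an MDX query. A sketch along these lines (the measure, dimension, and level names are assumptions based on the FoodMart schema; verify them against the cube's metadata):

```mdx
-- Product-level warehouse measures for 1997; names are illustrative.
SELECT
  { [Measures].[Units Shipped], [Measures].[Warehouse Sales] } ON COLUMNS,
  NON EMPTY [Product].[Product Name].Members ON ROWS
FROM [z]
```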

In software companies, chief executives are often senior programmers, but we still cannot expect our boss to learn MSSQL, Integration Services, and so on. In this case we can allow him or her to connect through MS Excel and easily perform the appropriate measurements without interrupting or overloading the SQL Server process.
By specifying the data source name and the server name, our cubes can easily be accessed. In particular, this feature allows connecting to a server on any workstation in our network.

As we can see, task3 is the name of our analysis project and z is the name of the cube created within this particular project.
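For reference, Excel's Data Connection Wizard builds an OLE DB connection string roughly like the one below (the server and catalog match this walkthrough; the exact provider name is an assumption, since it depends on the installed OLE DB provider version):

```text
Provider=MSOLAP;Data Source=localhost;Initial Catalog=task3
```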

If we reach this screen, it means that the granted user has the right to access and analyze the cubes.
Then, based on the accessed information, we can generate tables and graphs that represent a particular measure.

We have finished creating the cube, defining the facts and measures, and building the related comparison tables based on this information.

Generating Reports

Similarly, we need to create a new Report Server project.

We need to define a data source, i.e. which information will be used by our reporting project.

Data source = localhost
Initial catalog = "FoodMart 2008"

As our database is located on my local machine (localhost = 127.0.0.1), I defined it as the data source; the initial catalog corresponds to the name of the data schema.
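A report dataset over this relational source could then be defined with a query along these lines (the table and column names are assumptions based on the FoodMart schema described earlier; verify them against your copy of the database):

```sql
-- Product-level warehouse figures for 1997; names are illustrative.
SELECT p.product_name,
       SUM(f.units_shipped)   AS units_shipped,
       SUM(f.warehouse_sales) AS warehouse_sales
FROM inventory_fact_1997 AS f
JOIN product AS p ON p.product_id = f.product_id
GROUP BY p.product_name
ORDER BY warehouse_sales DESC;
```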

Designing Reports

From Solution Explorer, right-clicking the Reports folder icon, we can reach the query designer. Depending on our needs, various different reports can be generated visually with this utility. The chains between tables represent their relationships, and the yellow keys correspond to foreign keys.

In this step we check how our report will be depicted visually in the window.

Then we can see our report in the preview window. Our first report is rather naive; we can construct better reports in the same way that would appeal to a real-world enterprise.

Similarly, for the 2nd report: as in our cube, we generate a report based on products.


Let's verify whether our report is valid!

When I run the reporting project, the report preview window opens. This means that our report is valid and working.

Automated Cube Processing


Similar to the previous tasks, we need to create an Integration Services project.

Having created the Integration Services project, we need to set the connection options and select the initial catalog that we used for cube creation.

During configuration we need to select the correct analysis project name that we previously created, in order to process the cube.

As I store both projects and SQL Server on my local computer, my connection wizard looks like this.
In the next step we need to select the objects we are going to process: the task3 cube, along with its measures and dimensions.

Finally, we can run it! As we can see, our integration project works properly.

Scheduling Cube Processing Job Using MSSQL

We need to create a new scheduled job and link it to the integration project that processes our cubes.

Creating a new job in SQL Server Management Studio, we go to the Steps section.

Step name = any arbitrary name

Type = SQL Server Integration Services Package

Package source = File System

Package = location of the compiled Integration Services package (*.dtsx)

In the next step we need to specify the connection manager.

We can similarly add more steps to our MSSQL job, or add specific options such as executing (invoking) the job at regular intervals.

As we can see, our MSSQL job is ready to go; we have configured and set it up.
Finally, we can sit back and watch our cube being processed by SQL Server on its own. :)
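The same schedule can also be created in T-SQL through the msdb job procedures. A sketch (the job name, package path, and schedule are assumptions; point the /FILE argument at your compiled .dtsx package):

```sql
USE msdb;
GO
-- Create the job and a step that runs the SSIS package; names/paths are illustrative.
EXEC sp_add_job @job_name = N'ProcessFoodMartCube';
EXEC sp_add_jobstep
    @job_name  = N'ProcessFoodMartCube',
    @step_name = N'Run SSIS package',
    @subsystem = N'SSIS',
    @command   = N'/FILE "C:\Projects\Task3\CubeProcessing.dtsx"';
-- Run the job every night at 02:00.
EXEC sp_add_jobschedule
    @job_name = N'ProcessFoodMartCube',
    @name     = N'Nightly',
    @freq_type = 4,            -- daily
    @freq_interval = 1,
    @active_start_time = 020000;
-- Attach the job to the local server so SQL Server Agent can run it.
EXEC sp_add_jobserver @job_name = N'ProcessFoodMartCube';
GO
```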

END