
Universe Designer Interview Questions

1. What is Business Objects?
2. What are the various Business Objects products?
3. What are the advantages of Business Objects over other DSS tools?
4. How many modes are there in BO Designer?
5. What are Enterprise and Workgroup modes?
6. How do you save a Business Objects document so that it can be accessed by all users in workgroup mode?
7. What is online and offline mode?
8. What is a universe?
9. Can a universe connect to multiple databases?
10. How do you define universe parameters?
11. What is a database connection?
12. What are the types of connections we use when connecting to the database?
13. What are the different types of joins available in universe design? Explain each.
14. How do you design a universe?
15. What are the components of the Designer interface?
16. What are classes/objects?
17. What are classes?
18. What are objects?
19. What are dimension, measure, and detail objects?
20. What is a hierarchy?
21. How do you create hierarchies in BO?
22. What are contexts?
23. What are aggregated tables and how would you use them in a BO universe?
24. What is incompatibility?
25. What is the typical strategy employed in developing/maintaining/distributing universes?
26. I have a Customer dimension table and a fact table with cust_to_ship_key and cust_to_bill_key. How do I get the corresponding customer names?
27. What are strategies?
28. What are the different types of strategies?
29. How do you specify external strategies?
30. What are the visualization options available?
31. What is a join path problem?
32. How do you add an aggregate table to a universe in a real-life scenario?
33. If we have a user group and we want to give it access to the report from 1990 to 2000, and from that group we want one user restricted to seeing the report from 1990 to 1995, what do we do in BO Designer so that this is possible (not at report level)?
34. What is a shortcut join? What is its use? Explain with an example.
35. What is an isolated join? Explain with an example.
36. What is cardinality and its significance in a BO universe?
37. How will you know the version of BO Designer you are using?
38. What is a loop in a universe? Explain its problem and the different methodologies to resolve it.
39. What is a chasm trap and how do you resolve it?
40. When any new universe changes are deployed, how does the end user get a view of the new classes/objects added (apart from the specs doc)?
41. I have two universes, u1 and u2. From u1 I created one report, r1. Now I want to connect r1 to u2 and at the same time delete the connection from u1 to r1. How is this possible? Explain.

42. What is meant by ZABO and FC (full-client)?
43. What happens if cardinalities are not resolved?
44. What is aggregate navigation?
45. What is Index Awareness in a universe?
46. What are @functions?
47. What is a core universe?
48. What is a derived universe?
49. What are linked universes? Explain with advantages and disadvantages.
50. What is object qualification?
51. How do you create a filter in a universe, and what are its advantages and disadvantages?
52. Why do we need to create a derived table in a universe?
53. Explain security levels in a BO universe.
54. How do you implement row-level security in a universe?
55. How do you determine when to use an alias and when to use a context?
56. What are the different ways to link universes?
57. How do you distribute a universe?
58. What is list mode?
59. What is parse checking?
60. What are the disadvantages of an alias?
61. Explain universe design methodology.
62. Explain the universe development lifecycle.
63. What is failover and fault over?
64. What is the role of the CMS?
65. Working with the FRS.
66. Pruning and tracing.
67. What is a PRM file?
68. Universe parameters/data source connection.
69. What is the SQL Editor?
70. What is the File Repository Server?

1. What is Business Objects?

BusinessObjects is an integrated query, reporting and analysis solution for business professionals that allows them to access the data in their corporate databases directly from their desktop and present and analyze this information in a BusinessObjects document. It is an OLAP tool that high-level management can use as part of a Decision Support System (DSS). BusinessObjects makes it easy to access the data, because you work with it in business terms that are familiar to you, not technical database terms like SQL.

2. What are the various Business Objects products?

User Module, Designer, Supervisor, Auditor, Set Analyzer, InfoView (Web Intelligence), Business Objects Software Development Kit (SDK), Broadcast Agent, etc.

3. What are the advantages of Business Objects over other DSS tools?

- User friendly.
- Familiar business terms.
- Graphical interface with drag and drop.
- Powerful reports in less time.

- Enterprise-wide deployment of documents using WebI.
- Customized dashboards using Application Foundation and the Business Objects SDK.

4. How many modes are there in BO Designer?

There are two: Enterprise mode and Workgroup mode.

5. What are Enterprise and Workgroup modes?

Designer lets you save universes in either enterprise or workgroup mode. Enterprise mode means that you are working in an environment with a repository; workgroup mode means that you are working without a repository. The mode in which you save your universe determines whether other designers are able to access it. By default, a universe is saved in the mode in which you are already working. For example, if you launched a session in enterprise mode, any universe you save is automatically in that mode.

6. How do you save a Business Objects document so that it can be accessed by all users in workgroup mode?

If you want to make a universe accessible to another designer working without a repository, click the Save For All Users check box in the Save Universe As dialog box.

7. What is online and offline mode?

Online mode is the default mode of operation for Designer when you are working in an environment with a repository. Offline mode is the mode of operation when you are not connected to a repository; it is only available if you have previously connected in online mode. If you want a universe to be accessible in offline mode, you must first ensure that the universe has been opened at least once in online mode and saved with the Save For All Users check box selected in the Save Universe As box. In offline mode you can open universes stored on your local computer only if those universes have been opened previously in online mode, and you can access databases where the connection and security information are stored on your local machine (personal and shared connections). You can use offline mode when you do not have access to the repository, for example when working with a laptop off site, or when the network is not available.

8. What is a universe?

A universe provides a semantic layer between you and the database. It consists of classes and objects named in business terms: it is basically a mapping of the tables and columns in the database to classes and objects in the query panel — in other words, a logical mapping of data in business terms. In the BusinessObjects User module, universes enable end users to build queries from which they can generate reports and perform analysis. Universes isolate end users from the complexities of the database structure as well as the intricacies of SQL syntax.
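The semantic-layer idea in question 8 can be sketched in a few lines of Python. This is a toy model, not the Designer API: the table, column, and object names are invented, and the point is only that business-named objects map to SQL expressions the user never has to see.

```python
# Hypothetical universe mapping: business terms -> SQL table.column expressions.
UNIVERSE = {
    "Customer Name": "customer.cust_name",   # dimension object
    "City":          "customer.city",        # dimension object
    "Revenue":       "SUM(orders.amount)",   # measure object
}

def build_select(objects):
    """Build the SELECT clause a query tool could derive from the business
    terms the user picked -- no SQL knowledge required from the user."""
    return "SELECT " + ", ".join(UNIVERSE[o] for o in objects)

sql = build_select(["Customer Name", "Revenue"])
# sql == "SELECT customer.cust_name, SUM(orders.amount)"
```

A real universe adds joins, contexts, and conditions on top of this mapping, but the end user still only ever drags the business names.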

9. Can a universe connect to multiple databases?

No.

10. How do you define universe parameters?

The first step in creating a universe is to specify its parameters. These include the definition of the universe, which comprises the universe name, a description of the universe, and a connection to an RDBMS. You enter universe parameters from the Universe Parameters dialog box. This dialog box also lets you set up database options, external strategies, graphic options, and print settings.

11. What is a database connection?

A connection is a set of parameters that provides access to an RDBMS. These parameters include system information such as the data account, user identification, and the path to the database. Designer provides three types of connections: secured, shared, and personal.

12. What are the types of connections we use when connecting to the database?

There are three types of connections: secured, shared, and personal. A secured connection is used to centralize and control access to sensitive or critical data; it is the safest type of connection for protecting access to data. A shared connection is used to access common resources such as universes or documents, and can be used by several users. A personal connection is specific to one user and can be used only from the computer on which it was created.

13. What are the different types of joins available in universe design? Explain each.

A join is a relational operation that combines two or more tables with a common domain into a single result set. The purpose of joins is to restrict the result set of a query run against multiple tables.

- Equi (inner, natural, or simple) join: based on the equality between the values in a column of one table and the values in a column of another. Because the same column is present in both tables, the join synchronizes the two tables.
- Self-join: joins a table to itself; you create a self-join to find rows in a table that have values in common with other rows in the same table.
- Theta (non-equi) join: links tables based on a relationship other than equality between two columns.
- Outer join: links two tables, one of which has rows that do not match those in the common column of the other table. A left outer join returns all records from the first table with matching rows from the second; a right outer join returns all records from the second table with matching rows from the first; a full outer join includes all rows from all joined tables, whether they are matched or not.
- Shortcut join: can be used in schemas containing redundant join paths leading to the same result, regardless of direction; it improves SQL performance.

14. How do you design a universe?

The design method consists of two major phases. During the first phase, you create the underlying database structure of your universe. This structure includes the tables and columns of a database and the joins by which they are linked. You may need to resolve loops that occur in the joins using aliases or contexts. You conclude this phase by testing the integrity of the overall structure. During the second phase, you enhance the components of your universe, and you can prepare certain objects for multidimensional analysis. As with the first phase, you should test the integrity of your universe structure. Finally, you distribute your universes to users by exporting them to the repository or via your file system.

15. What are the components of the Designer interface?

In Designer, you create a universe using three areas: the Universe pane, the Structure pane, and the Table Browser. The Universe pane displays the components of the universe from the point of view of BusinessObjects — that is, the classes, objects, and conditions. The Structure pane reflects the underlying database structure of the universe, including the tables, columns, and joins. The Table Browser is the component that lets you create the classes and objects of the universe from the tables and columns of a database.

16. What are classes/objects?

An object maps to data or a derivation of data in the database. For the purposes of multidimensional analysis, an object can be qualified as one of three types: a dimension, a detail, or a measure. A class is a collection of objects based on business categories. A universe is a set of classes and objects intended for a specific application or group of users.
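The inner and left outer join behavior described in question 13 can be mirrored on plain Python data. This is a hedged sketch with invented table and column names — it shows what the generated SQL does with matched and unmatched rows, not how Designer itself evaluates joins.

```python
# Hypothetical rows: a Customer table and an Orders table sharing cust_id.
customers = [{"cust_id": 1, "name": "Acme"}, {"cust_id": 2, "name": "Bayer"}]
orders    = [{"cust_id": 1, "amount": 100}]

def inner_join(left, right, key):
    """Equi/inner join: keep only rows whose key values match on both sides."""
    return [{**l, **r} for l in left for r in right if l[key] == r[key]]

def left_outer_join(left, right, key):
    """Left outer join: keep every left row; unmatched rows get None (NULL)
    for the right-hand columns."""
    rows = []
    for l in left:
        matches = [r for r in right if r[key] == l[key]]
        if matches:
            rows.extend({**l, **m} for m in matches)
        else:
            rows.append({**l, "amount": None})
    return rows
```

With this data, the inner join drops customer 2 (no order), while the left outer join keeps it with a NULL amount — exactly the difference that matters when you choose an outer join in a universe schema.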
17. What are classes?

A class is a logical grouping of objects within a universe. In general, the name of a class reflects a business concept that conveys the category or type of objects. For example, in a universe pertaining to human resources, one class might be Employees. A class can be further divided into subclasses; in the human resources universe, a subclass of the Employees class could be Personal Information. As the designer, you are free to define hierarchies of classes and subclasses in a model that best reflects the business concepts of your organization.

18. What are objects?

An object is the most refined component in a universe. It maps to data or a derivation of data in the database. Using objects, end users can build queries to generate reports. The name of an object suggests a concept drawn from the terminology of a business or discipline. For a human resources manager, objects might be Employee Name, Address, Salary, or Bonus, while for a financial analyst, objects might be Profit Margin, Return on Investment, etc. For the purposes of multidimensional analysis, objects are qualified as one of three types: dimension, detail, or measure.

19. What are dimension, measure, and detail objects?

When creating universes, universe designers define and qualify objects. The qualification of an object reveals how it can be used in analysis in reports. An object can be qualified as a dimension, a detail, or a measure. A dimension object is the object being tracked; in other words, it can be considered the focus of the analysis. A dimension can be an object such as Service, Price, or Customer. Dimension objects retrieve the data that provides the basis for analysis in a report; they typically retrieve character-type data (customer names, resort names, etc.) or dates (years, quarters, reservation dates, etc.). A detail object provides descriptive data about a dimension object (an attribute of a dimension). It is always associated with a specific dimension object, but it cannot be used in drill-down analysis. E.g. Address and Phone Number can be attributes of the Customer dimension. A measure object is derived from one of the aggregate functions Count, Sum, Minimum, Maximum, or Average, or is a numeric data item on which you can apply, at least locally, one of those functions. This type of object provides statistical information. Examples of measure objects include Revenue and Unit Price.

20. What is a hierarchy?

Groups of related dimension objects are referred to as dimension hierarchies. A hierarchy is an ordered series of related dimensions that can be used in multidimensional analysis. An example of a dimension hierarchy is Geography, which can consist of City, Region, and Country; geography and time are good examples of hierarchies.

21. How do you create hierarchies in BO?

A hierarchy, which the designer sets up when creating the universe, consists of dimension objects ranked from less detailed to more detailed. The objects that belong to hierarchies are the ones you can use to define a scope of analysis.

22. Can a universe have more than one fact table?

Yes. Typically a universe can have more than one fact table and numerous aggregated tables.

23. What are contexts?

A context is a rule that determines which of two paths is chosen when more than one path is possible in the database from one table to another. It helps in resolving the loops created by various joins in the universe tables. With certain database structures, you may need to use contexts rather than aliases to resolve loops. A situation where this commonly occurs is a transactional database with multiple fact tables (multiple stars) that share lookup tables.

24. What are aggregated tables and how would you use them in a BO universe?

Aggregate tables are tables that contain summarized data at different levels, depending on the needs of the reports. Imagine a fact table that contains granular data down to minute-level transactions. If you develop reports with hour, day, week, month, quarter, and year level summaries, the queries for these summary values will scan millions of records, which in turn results in poor report performance. You can address this issue by creating aggregate summary tables.

Possible problems of using aggregate tables: aggregate tables are good for the performance of high-level summary queries. However, if there are multiple aggregate tables containing summary values, choosing between them becomes an issue. Consider the following example with two aggregate tables:

Table 1: AggregateID | Year | Quarter | Month | Sales Revenue
Table 2: AggregateID | Year | Quarter | Month | ServiceType | Sales Revenue

Now suppose one of your reports displays year-wise sales revenue, whereas another report displays year-wise sales revenue by ServiceType. How would you tell the first report to use Table 1 and the second report to use Table 2?

Using aggregate tables in Business Objects: Business Objects provides a function for using aggregated tables, Aggregate_aware(). This function determines which aggregate table to use based on the attributes used in the query.

Syntax: Aggregate_aware(<expression1>, <expression2>, ...) where each expression is a field or a valid SQL expression or calculation. So the formula for sales revenue might be:

Aggregate_aware(table1.salesrevenue, table2.salesrevenue, sum(sometable.column))

Notice the arrangement of the columns used in the function: from most summarized to least summarized. Now, if while creating a report you use only the Year attribute and Sales Revenue, Aggregate_aware will use Table 1 to get the sales revenue; if you use ServiceType in the report, it will use Table 2; in all other cases it will use sum(sometable.column), which could be a fact table.
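The selection logic behind Aggregate_aware() can be modelled as a short Python sketch. This is only a toy of the resolution rule — pick the most summarized expression whose table can answer every object in the query — and the table and object names (table1, table2, ServiceType, Article) are the hypothetical ones from the example above.

```python
# Candidate expressions in Aggregate_aware order: most summarized first.
# The set lists which query objects the table is compatible with;
# None marks the fact-table fallback, compatible with anything.
AGG_CHOICES = [
    ("table1.salesrevenue", {"Year", "Quarter", "Month"}),
    ("table2.salesrevenue", {"Year", "Quarter", "Month", "ServiceType"}),
    ("sum(fact.amount)",    None),
]

def aggregate_aware(query_objects):
    """Return the first (highest-level) expression whose table is
    compatible with every object used in the query."""
    for expr, supported in AGG_CHOICES:
        if supported is None or set(query_objects) <= supported:
            return expr
    raise ValueError("no compatible expression")
```

A query on Year alone resolves to table1; adding ServiceType forces table2; an incompatible object such as Article falls through to the fact-table expression — mirroring the behavior described in the answer.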

Rules for using Aggregate_aware:

1. If an object is at the same level of aggregation as the summary table, or higher, it is COMPATIBLE with the summary table.
2. If an object is at a lower level of aggregation, it is INCOMPATIBLE.
3. If an object has nothing to do with the summary table, it is INCOMPATIBLE. E.g. in the example above, neither table contains an aggregated value for Article, so an Article object would be incompatible; you cannot use Aggregate_aware with the Article object.

25. What is incompatibility?

The set of incompatible objects you specify determines which aggregate tables are disregarded during the generation of SQL. With respect to an aggregate table, an object is either compatible or incompatible. The rules for compatibility are as follows: when an object is at the same or a higher level of aggregation than the table, it is compatible with the table; when an object is at a lower level of aggregation than the table (or if it is not related to the table at all), it is incompatible with the table.

26. What is the typical strategy employed in developing/maintaining/distributing universes?

Phase 1: Break down the informational system into functional areas.
Phase 2: Analyze the information needs of users.
Phase 3: Design a conceptual schema and design the specification for the user.
Phase 4: Create a universe with Designer, test the universe with the Business Objects module, distribute the universe, and repeat these steps for other universes.
Phase 5: Update and maintain the universe, and notify end users of changes.

27. I have a Customer dimension table and a fact table with cust_to_ship_key and cust_to_bill_key. How do I get the corresponding customer names?

Create an alias table for the Customer dimension table. Join cust_to_ship_key with the customer key of the actual Customer table, and join cust_to_bill_key with the customer key of the alias.

28. What are strategies?

A strategy is a script that automatically extracts structural information from a database or flat file.
29. What are the different types of strategies?

In Designer we can specify two types of strategies: built-in strategies and external strategies.

Built-in strategies: Designer provides a number of default strategies which we can use. These are strategies for extracting joins, detecting cardinalities, and creating default classes and objects. Options for indicating default strategies are located in the Database tab of the Options dialog box.

External strategies: We can also create our own strategies. Such strategies are referred to as external strategies. With an external strategy, we can specify the exact way that objects and joins are to be extracted from the database structure. The strategy we use, for example, can be a script generated from a CASE Access tool. An external strategy is specific to one RDBMS.

30. How do you specify external strategies?

With an external strategy, you can specify the exact way that objects and joins are to be extracted from the database structure. All external strategies are contained within the same text file. The name of this text file is indicated in the .prm file specific to your RDBMS. In the .prm file, the strategy file is declared as follows: STG=[StrategyFileName], where StrategyFileName is the name of the strategy file. An external strategy, whether for objects or for joins, is made up of the following sections: a name and description (these are visible in the Strategies tab of the Universe Parameters dialog box); a type parameter (object or join); an SQL parameter or file parameter; and an optional parameter that points to a connection other than the universe connection. An external strategy can be based on SQL or a file.

31. What are the visualization options available?

Designer contains a variety of features for organizing and viewing the tables and columns in the Structure pane. Among these features are:

- List mode, which adds three panes to the Structure pane for viewing the names of tables, joins, and contexts. When you click a component in a pane, its corresponding graphical representation in the schema is highlighted.
- Graphic options, which let you customize the shape or appearance of the tables, columns, joins, and cardinalities in the Structure pane.
- Arrange tables, a feature that reorganizes the tables in the Structure pane to produce an orderly display.
- Gridlines, a command that displays a grid you can use to align tables in the Structure pane.
- Table (Column) Values, commands that display the data values associated with a particular table or column.

32. What is a join path problem?

A one-to-many join links a table which is in turn linked by another one-to-many join. This fanning out of one-to-many joins can lead to a join path problem called a fan trap. The fanning-out effect of one-to-many joins can cause incorrect results to be returned when a query includes objects based on both tables.

33. How do you add an aggregate table to a universe in a real-life scenario?

- Decide which reports use high-level aggregates.
- Create the aggregate table in the database.
- Insert it into the universe.
- Join it with the dimension tables.
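The fan-trap inflation described in question 32 can be demonstrated with a few rows of Python. The data is entirely hypothetical (one order worth 100, with two order lines); the sketch only shows why summing a measure across a one-to-many-to-many join path double-counts it.

```python
# Fan trap: Customer -(1..N)-> Orders -(1..N)-> Order_Lines.
orders = [{"order_id": 1, "amount": 100}]      # one order worth 100
lines  = [{"order_id": 1, "qty": 2},           # ...with two order lines
          {"order_id": 1, "qty": 3}]

# A single query joining both tables repeats the order row once per line:
joined = [{**o, **l} for o in orders for l in lines
          if o["order_id"] == l["order_id"]]

inflated_amount = sum(r["amount"] for r in joined)  # 100 counted twice
correct_amount  = sum(o["amount"] for o in orders)  # the true total
```

Here `inflated_amount` is 200 while the true total is 100 — the order's amount is repeated for every matching line, which is exactly the incorrect result a fan trap produces when a report includes measures from both tables.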

34. If we have a user group and we want to give it access to the report from 1990 to 2000, and from that group we want one user restricted to seeing the report from 1990 to 1995, what do we do in BO Designer so that this is possible (not at report level)?

This can be done using row-level security in the universe parameters.

35. What is a shortcut join? What is its use? Explain with an example.

A shortcut join is a join that links two tables by bypassing intervening tables that exist in the universe. It is used when, in certain circumstances, it can make the SQL more efficient. E.g. if you want to get the client list and their countries, you can simply join the country ID of the Client table to the country ID of the Country table. However, this would introduce a loop, so instead of a normal join you make it a shortcut join. If a query then contains objects from the Client table and the Country table, it uses the shortcut join, resulting in a more efficient query that avoids the extra join through Region.

36. What is an isolated join? Explain with an example.

Isolated joins are joins that are not included in any context. Suppose you have 15 joins in your universe; context A includes 7 joins and context B includes 7 joins. The remaining join is an isolated join.

37. What is cardinality and its significance in a BO universe?

Cardinality expresses the minimum and maximum number of instances of an entity B that can be associated with an instance of an entity A. The minimum and maximum number of instances can be equal to 0, 1, or N. Because a join represents a bidirectional relationship, it must always have two cardinalities. There are two main methods for detecting or editing cardinalities: the Detect Cardinalities command and the Edit Join dialog box. If you selected the Detect cardinalities in joins option in the Database tab of the Options dialog box, Designer detects and retrieves the cardinalities of the joins. If you do not use this option, you can still retrieve the cardinalities for one or all joins in the universe.

SQL Traps in a Business Objects Universe: How to Solve the Chasm Trap

A chasm trap is a join path problem between three tables, where two many-to-one join paths converge on a single table and there is no context to separate the converging paths. However, even if we have this type of join in a universe, we experience the chasm trap problem only when:

1. There is a many-to-one-to-many relationship between three tables.
2. The reporting query has objects on the tables at the "many" ends.
3. There is more than one value for a single dimension value.

Let's see it in detail. When a query includes objects from table B and table C (the "many" ends) together with objects from table A, the chasm trap causes the query to return every possible combination of one measure with the other. The result gets multiplied by the number of rows in the result set, and the output is similar to a Cartesian product. The chasm trap can be resolved by executing a separate query for each measure and then merging the results.

How to detect a chasm trap in a universe

A chasm trap cannot be detected automatically; you need to use several approaches to identify a possible chasm trap issue:

- Arrange the one-to-many tables from left to right in the universe and analyze the one-to-many relationships to spot a possible chasm trap.
- Use the automatic Detect Contexts tool to detect possible contexts in the universe and apply them in order to avoid chasm traps.
- Test many-to-one tables by creating reports using objects from the tables at the "many" end, then try adding an additional dimension object to the report. If there is a chasm trap, the aggregated values will be inflated, which might help you detect it.

A practical chasm trap example in a universe

Consider a universe in which three tables are joined by a many-to-one-to-many relationship. If I want to see the number of guests for the Sports service, the report returns the following result:

Service | Number of Guests
Sports  | 145

If I want to see the number of future guests for the Sports service, the report returns the following result:

Service | Number of Future Guests
Sports  | 8

However, if I include both measures together in the same query:

Service | Number of Guests | Number of Future Guests
Sports  | 188              | 96

The result is inflated due to the chasm trap.

How does a chasm trap inflate the result of a query? The chasm trap causes the query to return every possible combination of one measure with the other, which makes the query return a Cartesian-product result; and since the result is grouped against a single dimension value, it gets aggregated. In the above example:

- number of guest transactions × number of future guest transactions
- number of future guest transactions × number of guest transactions
Let's go deeper to understand what has happened. To examine which rows are included in the aggregation, we need to split the aggregated data to a more granular level. For Number of Guests we include the additional dimension Days Billed to see the granular data:

Service | Number of Guests | Days Billed
Sports  | 4   | 3
Sports  | 133 | 4
Sports  | 8   | 6

For Number of Future Guests we include the additional dimension Days Reserved to see the granular data:

Service | Number of Future Guests | Days Reserved
Sports  | 7 | 1
Sports  | 1 | 2

Now let's combine the results (with "generate separate query for each measure" disabled in the universe parameters):

Service | Days Billed | Number of Guests | Days Reserved | No. of Future Guests
Sports  | 3 | 4   | 1 | 3
Sports  | 3 | 4   | 2 | 1
Sports  | 4 | 129 | 1 | 75
Sports  | 4 | 35  | 2 | 9
Sports  | 6 | 8   | 1 | 6
Sports  | 6 | 8   | 2 | 2

Sum of Number of Guests: 188. Sum of No. of Future Guests: 96.

You can see that the query returns every possible combination of future guests with number of guests, and when the result is aggregated it gives wrong numbers.

How to solve the chasm trap? You can solve the chasm trap using contexts. In the above example you can create contexts as follows:

1. Analyze the many-to-one-to-many relationships to detect a possible chasm trap.
2. Use Detect Contexts to create the contexts:
   1. Select the contexts and click Add.
   2. Select File -> Parameters to launch the Universe Parameters box.
   3. Click the SQL tab.
   4. Select the "Multiple SQL statements for each context" option.
   5. Click OK.
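Why splitting into separate statements fixes the inflation can be sketched in Python. The data below is hypothetical (not the Sports numbers above): two fact tables share the Service dimension, and the sketch contrasts one combined query against two separate queries merged on the common dimension.

```python
# Two hypothetical fact tables sharing the Service dimension.
guests        = [{"service": "Sports", "guests": 100},
                 {"service": "Sports", "guests": 50}]
future_guests = [{"service": "Sports", "future": 10},
                 {"service": "Sports", "future": 5}]

# One combined query effectively cross-joins the two fact tables
# through the shared dimension -- each row pairs with every row opposite:
combined = [(g, f) for g in guests for f in future_guests
            if g["service"] == f["service"]]
inflated_guests = sum(g["guests"] for g, _ in combined)  # each row counted twice
inflated_future = sum(f["future"] for _, f in combined)

# Two separate queries (what the contexts generate), merged on Service:
merged = {
    "service": "Sports",
    "guests": sum(r["guests"] for r in guests),          # correct: 150
    "future": sum(r["future"] for r in future_guests),   # correct: 15
}
```

The combined query reports 300 guests and 30 future guests instead of 150 and 15 — each fact row is repeated once per row of the other fact table — while the merged separate queries return the true totals.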

Now when you create a query, two separate queries will be generated and the results will be merged on the common dimension. This is how the chasm trap gets resolved using contexts.

Using multiple statements per measure to solve the chasm trap: if you have only measure objects defined for both fact tables, you can use the universe parameters option "Multiple SQL statements for each measure". This forces the generation of a separate SQL query for each measure that appears in the Query pane. This solution does not work for dimension and detail objects.

How to Define Cardinalities in a Business Objects Universe, by BIDW TEAM

In a previous post we learned how to set up joins in a BO universe. In this post we will learn what cardinality is and how to define cardinalities in a SAP Business Objects universe.

What is cardinality? Cardinality is a relationship between two tables based on a join: how many rows of one table will match rows of the other table when the tables are joined. Setting up cardinality is very important for resolving loops in a BO universe. Let's take a practical example: a manager can have many employees reporting to him, so the relationship between the Manager and Employee tables is 1-N. The cardinality can be any one of these types:

- One-to-one (1-1)
- One-to-many (1-N)
- Many-to-many (N-N)
- Many-to-one (N-1)

Setting up cardinality manually or using the automatic detection tool

Cardinality detection in universe design is based on a logical algorithm that uses physical record counts from the tables. The automatic detection tool only gives correct cardinality if the database is populated with realistic data. Also, the automatic detection tool fires three queries for every join to set the cardinality, so if you have many tables in the schema, automated cardinality detection is not a good idea, as it might overload the database with queries. Let's take an example of how the cardinality detection tool works. The Manager table has multiple employees reporting to each manager, so the cardinality between the Manager and Employee tables is 1-N. The automated detection tool determines the cardinality for this join as follows:

1. One query to find the number of rows in the Manager table.
2. One query to find the number of rows in the Employee table.
3. One query to find the number of rows when the two tables are joined.

If the Manager table has 10 rows and the Employee table has 20 rows, the first query will return 10, the second query will return 20, and the third query will return 20, which tells the tool that the Employee table is on the "many" side and the Manager table is on the "1" side. The output of these queries is very important to the automated tool, and that is why the database should contain realistic data.

Detecting cardinality using the automatic tool

To detect the cardinality of all joins:

1. Select Tools -> Automated Detection -> Detect Cardinality.
2. If no join is selected, Designer asks whether you want to detect cardinality for all joins.
3. Click OK.

To detect the cardinality of a specific join:

1. Right-click the join.
2. Click Detect Cardinality.

To set cardinality manually

1. Double-click the join for which you want to set the cardinality.
2. The Edit Join dialog appears with the join expression.
3. Check the Cardinality check box.
4. Select the appropriate 1 or N radio button on each side.
5. Click OK.
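As a hedged sketch, the three-count comparison the automatic tool performs can be modelled in a few lines of Python. This is a simplification that ignores unmatched rows and NULL keys; the 10/20/20 numbers are the Manager/Employee example from this article.

```python
def detect_cardinality(rows_a, rows_b, rows_joined):
    """Infer the A-B join cardinality from three row counts.

    If the join returns more rows than table B, then B's rows repeat,
    i.e. several A rows match one B row, so the A side is 'many';
    symmetrically for the B side."""
    a_side = "N" if rows_joined > rows_b else "1"
    b_side = "N" if rows_joined > rows_a else "1"
    return f"{a_side}-{b_side}"

# Manager (10 rows) joined to Employee (20 rows) yields 20 rows: 1-N.
result = detect_cardinality(10, 20, 20)
```

This also shows why unrealistic data misleads the tool: with an empty or sparsely populated Employee table, all three counts collapse and the comparison can no longer tell the sides apart.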

After reading this article you should be comfortable with the cardinality concept and its usage in SAP Business Objects universe design.

List of Values (LOV) in a Business Objects Universe, by BIDW TEAM

A list of values, or LOV, is a distinct list of data values associated with an object. When any dimension or detail object is created, an LOV is assigned to the object automatically.

Use of a list of values: when a user needs to filter data in a query on specific object values, the user can simply view the LOV of that object and choose the values on which to filter the data. E.g. if the COUNTRY dimension has the distinct values A, B, and C, and the user wants to filter the data for country B, the user can put a filter on the Country dimension and choose B as the filter value while executing the query.

How to create an LOV for an object:

1. Double-click the object in Designer to view its properties.
2. Click the Properties tab.
3. Check the Associate a List of Values checkbox.
4. Select the other LOV options as required.
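Conceptually, the LOV behind an object is just the distinct values of its underlying column, and a report filter is a choice from that list. A minimal sketch (table and data invented) using Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales(country TEXT, amount INTEGER);
    INSERT INTO sales VALUES ('A', 5), ('B', 7), ('B', 3), ('C', 9);
""")

# The LOV for a COUNTRY object: the distinct values of its column.
lov = [r[0] for r in con.execute(
    "SELECT DISTINCT country FROM sales ORDER BY country")]
print(lov)  # ['A', 'B', 'C']

# A user filtering on country 'B' picks that value from the LOV:
total = con.execute(
    "SELECT SUM(amount) FROM sales WHERE country = ?", ("B",)).fetchone()[0]
print(total)  # 10
```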

When the first LOV is created, it is stored in a .LOV file in the universe subfolder on the local file system. The default location is C:\Documents and Settings\<UserName>\Application Data\Business Objects\Business Objects 12.0\Universes\@<ServerName>\<UniverseName>

LOV options

List Name. The name of the LOV file under which it is stored on the local file system. You can override the default name and enter your own; the maximum length is 8 characters.

Allow Users to Edit List of Values. When checked, this option allows report users to edit the object's list of values. The purpose of a list of values is usually to limit the set of values available to a user; if they can edit the list, you no longer have control over the values they choose. Normally, unless you are using a personal data file as the list-of-values source, you clear this option to ensure that users do not edit lists of values.

Automatic Refresh before Use. When selected, the LOV is refreshed each time it is referenced and used in a report. Choose this option only if the contents of the underlying column change frequently; it should be used very carefully, after evaluation. If it is not selected, the LOV is refreshed the first time the object is used in a user session.

Hierarchical Display. Select this property to display a cascading list of values as a hierarchy in Web Intelligence.

Export with Universe. When selected, the LOV file associated with the object is exported with the universe to the CMS, where it is stored as XML.

Viewing the LOV of an object. To view an object's LOV, click the Display button on the Properties tab of the object.

Modifying the LOV of an object. You can remove values from an object's LOV by applying a filter, or add values by adding a column.

To apply a condition on a LOV:

1. Click the Edit button on the object's Properties tab.
2. The Designer query panel appears, showing the default object of the LOV.
3. Drag and drop the condition object into the conditions pane and specify the appropriate condition.
4. You can view the SQL of the LOV query by clicking the SQL icon on the toolbar.
5. Run the query to test the values after applying the condition.

View and edit the LOVs of the whole universe. You can also view all the objects that have a LOV associated with them, and edit them:

1. Click Tools -> List of Values -> Edit.
2. The List of Values dialog appears.
3. Select the LOV object and click Edit if you want to edit its LOV.

In addition to a query, you can also define a LOV for an object using a personal data file such as a CSV, whose values are then used as the object's LOV. To do so:

1. Click Personal Data.
2. Provide the details in the Personal Data LOV dialog box.

Cascading LOV. A cascading LOV is a LOV associated with a hierarchy of objects in the universe. Once a cascading LOV is created, if any of its objects is used as a prompt filter in a report query, the user answers a series of prompts drawn from the cascading LOV.

How to create a cascading LOV:

1. Click Tools -> List of Values -> Create Cascading LOV.
2. Add the objects and arrange them according to your hierarchy.
3. Click Generate LOVs.
4. Click OK.
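The behaviour this produces at prompt time — each answered prompt narrowing the next list — can be sketched as follows (data and names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE geo(country TEXT, city TEXT);
    INSERT INTO geo VALUES ('France', 'Paris'), ('France', 'Lyon'),
                           ('US', 'Boston');
""")

# First prompt: the distinct countries.
countries = [r[0] for r in con.execute(
    "SELECT DISTINCT country FROM geo ORDER BY country")]
print(countries)  # ['France', 'US']

# Second prompt: the cities, restricted by the answer to the first prompt.
cities = [r[0] for r in con.execute(
    "SELECT DISTINCT city FROM geo WHERE country = ? ORDER BY city",
    ("France",))]
print(cities)  # ['Lyon', 'Paris']
```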

Now if you use any of these objects as a prompt in a query, the user is prompted with the hierarchical LOV.

Using Derived Tables in a SAP Business Objects Universe

What is a derived table, and what is it for? A derived table is not a physical table in the database; it is a logical table created in the Business Objects universe using SQL. A derived table can be thought of like a view in the database, in that its structure is defined by a SELECT statement.

Advantages of derived tables:

A derived table lets you write a SQL statement that fetches data using expressions and joins that are not possible through the universe structure alone. It lets you use inline views (a SELECT statement in the FROM clause), which are not normally possible in a universe.
e.g. select agg1_id as id from (select * from Agg_yr_qt_mt_mn_wk_rg_cy_sn_sr_qt_ma)

A derived table can be treated like a normal table and joined to actual tables in the universe. It lets you merge data from different tables in ways not otherwise possible in the universe with the underlying data sources. You can embed prompts in a derived table's definition. You can use a derived table as a lookup when you have multiple fact tables separated by contexts. Normally, if you want measures from different fact tables, Business Objects generates two queries, one per measure, which can sometimes cause performance issues. You can avoid this by building a lookup over the fact tables with a derived table.
e.g.

Suppose you have measure1 in fact1, measure2 in fact2, and a dimension dim1. If you create a query with dim1, measure1, and measure2, you get two separate queries. Instead, you can create a derived table that includes dim1, measure1, and measure2.

Disadvantages of derived tables. Since a derived table is not an actual table, you may face performance issues if the underlying SQL query performs poorly.

How to create a derived table:

1. From the menu bar select Insert -> Derived Tables.
2. Write the SELECT statement that defines the structure of the derived table. Remember to alias a column if you use an expression in the column list.
3. Click Check Syntax to confirm the definition of the derived table.
4. Click OK.
5. Join the newly created derived table to the existing tables.
6. If you have contexts, include the join in the relevant context.
7. Save the universe.

Now you can create objects on the derived table just as on normal tables.
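The fact-table lookup described earlier — one derived table serving both measures instead of two generated queries — can be sketched with Python's sqlite3 (all table and column names invented):

```python
import sqlite3

# Toy version of the dim1/measure1/measure2 example above.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE fact1(dim1 TEXT, measure1 INTEGER);
    CREATE TABLE fact2(dim1 TEXT, measure2 INTEGER);
    INSERT INTO fact1 VALUES ('A', 100), ('B', 200);
    INSERT INTO fact2 VALUES ('A', 10),  ('B', 20);
""")

# A derived table combining both measures in one query, used as an
# inline view in the FROM clause like a normal table.
derived = """
    SELECT f1.dim1, f1.measure1, f2.measure2
    FROM fact1 f1 JOIN fact2 f2 ON f1.dim1 = f2.dim1
"""
rows = con.execute(f"SELECT * FROM ({derived}) dt ORDER BY dim1").fetchall()
print(rows)  # [('A', 100, 10), ('B', 200, 20)]
```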
Nested Derived Tables

A nested derived table is simply a derived table that uses another derived table in its definition. It behaves like a normal derived table. Nested derived tables are generally used when the underlying derived table would be complex to build in one piece: you can create several small derived tables and then use them in the main derived table. The advantage is simplicity in building the derived table; Business Objects combines the definitions into a single SQL statement, which is treated as one query. Business Objects does not limit the number of derived tables, but nesting is limited to 20 levels. You create a nested derived table by using an existing derived table in the FROM clause.

Linking Universes in Universe Designer

Often a universe development task is too big for a single person to complete in the stipulated time. To cater to this need, Business Objects provides a facility to divide the universe design task among several designers and then integrate the work into a single universe using universe linking.

What is universe linking? Linked universes are universes that share common components such as objects, classes, and joins. When two universes are linked, one is called the core universe: the main universe containing the common components. The other is called the derived universe. Changes made to the core universe are automatically propagated to the derived universes.

Uses and advantages of universe linking.

When you develop multiple universes that share common components, you can create a core universe of those components and link it into the other universes. Linking universes lets you distribute the design task among several developers and follows code-reusability practice. If a common object changes, the change needs to be made only in the core universe and is propagated to all derived universes. Linking thus helps with universe maintenance.
Universe linking strategies.

Core strategy: used when you need to develop universes for different functions. You create a universe for each function and then link them all into a single universe. This strategy lets each common object be created only once and helps split the design task among developers.

Master strategy: suppose you need a clone of an existing universe. You could simply copy it, but that increases maintenance, since you would then maintain two universes. To avoid that, use the master linking strategy: the existing universe is linked into a new blank universe, which creates a copy of it with a different CUID, and only the one core universe needs to be maintained.

Multiple core strategy: if you want to divide the development task, each developer builds a universe and, at the end, all the universes are linked into one.

Limitations and restrictions of linking universes:

Both universes (core and derived) must use the same connection and connect to the same database. Both must be present in the same repository in order to link. Only one level of linking is allowed: you cannot create a derived universe from another derived universe. The universes should have unique objects and classes; duplicate objects or classes will be renamed. Tables from the two universes must be joined after linking in order to avoid a Cartesian product. When a core universe is linked into a derived universe, only classes, objects, and tables are made available; contexts and LOVs need to be recreated in the derived universe.

How to link universes? To link universes, make sure that:

the core universe is exported to the repository, and it is open in Designer. Then:

1. Open the universe parameters from File -> Parameters.
2. Click the Links tab.
3. Click Add Link.
4. Select the core universe to link, then click OK.

After this, the components from the core universe are available in the derived universe, shown grayed out. Now analyze the derived universe and create joins between the tables added from the core universe. Create contexts and aliases wherever required. Save and export the derived universe.
Using Include to import one universe into another. When linking, components are not copied into the derived universe, and you cannot edit the core universe's components there. Sometimes, however, you need to merge two universes into one. For this you can use the Include approach instead of linking: when a universe is included, its components are copied into the other universe.

Difference between linking and including universes. The choice between including and linking depends on your needs; the following points may help.

Linking:

The core universe structure is created once and used in many derived universes. Only one copy of the core components exists in the repository. Changes need to be made only to the core universe and are propagated to all derived universes. Linking requires both universes to be present in the repository. Only one level of linking is allowed. Contexts and LOVs must be recreated in the derived universe. Both universes must use the same connection and connect to the same database.
Including

It is the easiest and fastest way to copy one universe into another. Contexts need to be redefined after including. Changes are not propagated from the source universe to the one that includes it. Both universes must exist in the repository, as with linking. You maintain a single universe rather than several, so maintenance becomes a bit easier.

Business Objects Universe Optimization


Every Business Objects universe designer faces performance problems at least once in his or her career, and in most cases will use a push-down strategy: ask the DBA to optimize the warehouse. That works very well, but there are other things that can also help optimize the performance of a universe.

Analyze report SQL for unnecessary joins
1. First, get the list of reports that are performing poorly and capture their SQL.
2. Analyze the report SQL for joins. An unnecessary join can cause a query to perform poorly; change your universe accordingly so it generates queries with optimized joins.

Analyze report queries for indexes (work with your DBA on this)
1. Get the report SQL and check the WHERE clause.
2. Check that indexes are used properly in the SQL and that they exist in the database.
3. Also check that the database statistics are up to date; if they are not, the database may not generate an optimized plan.

Use aggregates for measures
1. Use aggregate awareness for your measure objects so queries use summary tables in the database.
2. You can also consider automatic query rewrite instead of aggregate awareness in Universe Designer; however, this requires careful planning and heavy DBA involvement.

Use partitions for high-volume fact tables
1. Partitioning the fact table can boost your query performance.
2. Work with your DBA to get it done.

Array Fetch Size
1. Experiment with the universe options to find the optimal value of the Array Fetch Size parameter.

Analyze the universe for shortcut joins
1. Analyze your reports and universe for possible use of shortcut joins, as they play a small but important role in performance.

Index awareness
1. Try using index awareness on the universe side to generate optimized queries.

Note: this requires thorough testing of report data and a detailed understanding of the data warehouse data.

Universe cleaning
1. Make sure LOVs are disabled for measure objects.
2. Make sure LOVs are disabled for unnecessary dimension objects.

JOIN_BY_SQL
1. Try evaluating the JOIN_BY_SQL universe parameter.

I will try posting practical examples of the above tuning practices one by one in time; a few experiments are needed first. Business Objects universe optimization is not an overnight task: it requires careful planning and effort.

Difference between CMC, CMS and CCM

CMS = Central Management Server, a process running as part of your Business Objects Enterprise servers. The CMS is the heart of a Business Objects Enterprise system: a service/daemon that manages the entire BOXI server deployment, including authentication, the object repository, services, scheduling, and so on. It maintains the CMS database (the system database) and the audit database, authenticates users, and stores access rights; it acts as the auditor, not the audited database. In short, the CMS keeps track of security details, the object hierarchy, server management, and user activity.

CMC = Central Management Console, the web-based administration interface for your Business Objects Enterprise system, where you add users and groups, create folders, set access rights, configure SSO, configure the Enterprise server services, and so on. Most server-management tasks are now handled through the CMC, not the CCM. It is the web-based tool for day-to-day administrative tasks: data (content) management, server management (for example, stopping a process), and user management. The CMC is used by SAP Business Objects administrators to access and configure the SAP Business Objects BI system.
The CMC provides management of and configuration for the following system elements:

- Security
- Authentication
- User and group creation and management
- Object rights
- License keys
- Folder and category management
- Scheduling
- Services/server configuration
- Server groups (clusters)
- Universes and data connections
- User interface settings and preferences
- Business calendars

CCM = Central Configuration Manager, an application that lets you configure, add, remove, and stop Business Objects server services. The CCM is a server troubleshooting and node configuration tool: it allows you to view and modify server settings only while the Business Objects server processes are offline. The CMC is used to stop server processes; the CCM is then used to modify performance settings or change server port numbers. Say your CMS is down: can you log in to the CMC to start it? Of course not. You need the CCM.

Differences between XIR2 and XIR3:
1. XIR3 can use an Excel document as a data provider, which was not possible in XIR2.
2. We can open XIR2 universes using the XIR3 Designer, but an XIR3 universe cannot be opened using earlier versions of Designer.

3. We can create a universe from a stored procedure in BOXI3 but not in BOXIR2.
4. When a Web Intelligence report is saved as Excel and contains more than 65K rows, the overflow automatically populates the next sheet.

Central Management Server (CMS)


The aptly named Central Management Server (CMS) is the main server in the BO XI collection. The CMS maintains a database of information about your BusinessObjects Enterprise system, known as the CMS database. All the platform services are managed and controlled by the CMS, which handles communication with the RDBMS tables that store the metadata about the BO XI objects. Any commands issued by the SDK to the servers are communicated via the CMS. The CMS has also been known as:

in Application Foundation 6.x and earlier, the BusinessObjects repository; before the rebranding effort, the Crystal Management Server; and before that, the Automated Process Scheduler (APS).
There are still a few active properties named for the old APS designation. One of these APS references is found in the ServerKind property of the Server class: the CMS ServerKind designation of the Central Management Server is still aps. The CMS also manages:

the auditing database, and all schedule and custom events.


The CMS can also maintain an optional auditing database of information about user actions, and files with the File Repository Servers. File events alone are handled by the Event Server. The CMS manages:

security, and controls authentication of all users as well as license management.


Because the Central Management Server is the principal server, it cannot be stopped from within the SAP BOBJ Central Management Console (CMC); you must use the Central Configuration Manager. In a production environment, it's a good idea to disable all servers first so they can finish any pending requests before shutting them down, with the CMS the last to close. If you're working with a cluster, shutting down one CMS shifts the workload to the other active ones, a feature that allows maintenance without causing downtime.

The CMS also manages access to the system file store where the physical documents are kept. CMS data includes information about:

users and groups, security levels, content, services, and licenses.

Main tasks
This data allows the CMS to perform four main tasks:

Maintaining security
The CMS enforces the application of rights at both the folder and object level, and supports inheritance at the user and group level. It also supports aggregation through a flexible group-user membership model. An integrated security system is available for customers who do not currently use an entitlement database, although BusinessObjects Enterprise is designed for integration with multiple concurrent third-party security systems, such as LDAP, SiteMinder, or Microsoft Active Directory. When a change is made to a user in the entitlement database, the change is then propagated to BusinessObjects Enterprise.

Managing objects
The CMS keeps track of object locations and maintains the folder hierarchy. InfoObjects are system metadata objects that contain index information; the actual documents or objects are stored in a file store. The separation of the object definition (metadata) from the document allows for fast object processing, as only the required information is retrieved from the system's repository. The CMS also runs scheduled report jobs.

Managing servers
The CMS monitors server processes and allocates work to the less busy ones. It also adds or removes service instances as workloads change or services become unavailable. The CMS handles load balancing and automated clustering to avoid bottlenecks and maximize hardware efficiency. In some multi-server environments, BusinessObjects Enterprise may not require a separate third-party load-balancing system.

Managing auditing
User actions can be monitored and written to a central audit database. This information allows system administrators to better track and manage their BusinessObjects Enterprise deployment. The auditing functionality lets administrators understand which users accessed the enterprise system, which documents they interacted with, and the overall system metrics, for system optimization. Usage data is collected from the system interactions recorded in the auditing database. A sample universe and sample auditing reports are also available to provide fast access to information such as the most accessed reports, peak system-use times, and average user session times. It is strongly recommended that you back up and audit the CMS system database frequently. The CMS database should not be accessed directly; system information should be retrieved only through the calls provided in the BusinessObjects Enterprise software development kit (SDK).

The Central Management Server (CMS) Repository


The content of the Business Objects Enterprise (BOE) system consists of the physical files plus the metadata about those files. For a Crystal Report, both the physical file and its metadata exist in the BOE system: the report is stored as a file with an .rpt extension on the File Repository Server (FRS), while the metadata (report name, type, report ID, path, etc.) is stored as an InfoObject in the CMS repository. The CMS metadata is physically stored in a database as InfoObjects, in six tables whose purposes are given below.

1. CMS_VersionInfo: contains the current version of BOE.
2. CMS_InfoObjects6: each row stores a single InfoObject. This is the main table in the repository.
3. CMS_Aliases6: maps a user's alias(es) to the corresponding user ID. For example, a user may have both a Win NT alias and an LDAP alias; regardless of how many aliases a user has, in the BI platform each user has only one user ID. The map is kept in a separate table to enable fast logins.
4. CMS_IdNumbers6: the CMS uses this table to generate unique object IDs and type IDs. It has only two rows, an Object ID row and a Type ID row. The CMSs in a cluster use this table when generating unique ID numbers.
5. CMS_Relationships6: relationship tables store the relations between InfoObjects; each row stores one edge in the relation. For example, the relation between a Web Intelligence document and a universe would be stored in a row of the WebI-Universe relation table. Each relationship table has these columns: Parent Object ID, Child Object ID, Relationship InfoObject ID, member, version, ordinal, data.
6. CMS_LOCKS6: an auxiliary table of CMS_Relationships6.

The CMS repository tables cannot be queried directly. Query Builder is the tool used to retrieve Business Objects metadata through the virtual tables CI_SYSTEMOBJECTS, CI_INFOOBJECTS, and CI_APPOBJECTS.

Working with FRS Pruning and Tracing


Have you ever had the chance to see how a Crystal or WebI document or instance is stored internally in the file system? Here it is: the document is saved in one or more folders whose names are randomly generated. What happens when the report or instance is deleted? Only the report or instance itself is deleted, leaving the temporary folders in place. As a result, over time there will be thousands of folders in the FRS, which becomes a headache for the administrator at FRS backup time: the backup process becomes very time-consuming, occupies more space, and the FRS ends up inefficient.

How to get rid of this? The -Prune command, added at the end of the command line of the File Repository Servers, is handy here: it triggers the server to go through the Input or Output folders in the internal Filestore folder of Business Objects Enterprise and clean up all empty directories. The -Trace command, added at the end of a server's command line, logs the activity of that server in the Logging folder of the BOE installation directory. The empty FRS directories need to be deleted periodically to clean up the disk, but not manually: instead, the FRS should be started with the -Prune command-line switch. While this switch is in effect, the FRS's status remains Starting until the deletion is done; once deletion finishes, the servers stop. The -Prune switch then has to be removed manually to allow the servers to start normally.

Adding -Trace and -Prune:
1. Stop the File Repository Servers (both IFRS and OFRS) in the CCM (XIR2) or in the CMC (XI 3.x).
2. Add the -Prune command at the end of the command line of each FRS (Input and Output), and the -Trace command as well, to check that it is cleaning up empty files and folders.
3. Start the servers and monitor the pruning process; you should regain hard disk space.

Removing -Trace and -Prune:
1. Stop the servers and remove the -trace and -prune switches from the command-line parameters of the FRS.

2. Start the servers again normally.

Viewing log files. You can find the log files in the following location (for XI 3.x): C:\Program Files\Business Objects\BusinessObjects Enterprise 12.0\Logging

Points to remember:

If any empty folders still exist after pruning, they may be kept by BO for its housekeeping. Don't leave the prune option enabled after pruning has completed: once pruning finishes successfully the FRS stops, and you must modify the command line again, removing -Prune and -Trace, and start the server manually. The pruning process does not clean up any CMS objects that have lost the FRS files they point to. Please note: -Prune is an undocumented feature of Business Objects.

What is PRM File?


The PRM file is a text file that lists parameters used to configure universe creation and SQL query generation in Web Intelligence. There is a PRM file for each supported RDBMS. PRM files are located in the database folders under <INSTALLDIR>\win32_x86\dataAccess\ConnectionServer\

Verifying which PRM file is used by a connection


To verify which PRM file is used by a universe connection:

Select File > Parameters.


The Parameters dialog box appears.

Click the Test button.


The Test Connection message box appears.

Click the Details button.


The details of your connection appear in a drop down message box.

Scroll down the message box to the line that starts with PRM.

This line indicates the file path and name of the PRM file currently used by the active universe.

Universe Parameters / Data Source Connection


A connection is a named set of parameters that defines how a Business Objects application accesses data in a database, and it is defined in a universe via Universe Designer.

Create a new connection


From the parameters dialog
You can create a new connection from the Definition page of the Universe Parameters dialog box (File > Parameters > Definition).

Connection Wizard
You can view all connections available to a universe from the Connections list (Tools > Connections).

Selecting strategies
A strategy is a script that automatically extracts structural information from a database or flat file. Strategies have two principal roles:

Automatic join and cardinality detection (Join strategies) Automatic class, object, and join creation (Objects and Joins strategies)
Strategies can be useful if you want to automate the detection and creation of structures in your universe based on the SQL structures in the database. In Designer you can specify two types of strategies:

Built-in strategy: the default strategies shipped with Designer. Built-in strategies cannot be customized.
External strategy: a user-defined script that contains the same type of information as a built-in strategy, but customized to optimize information retrieval from a database.

SQL parameters
Many of the parameters common to most supported RDBMS middleware are available for editing in the Parameters tab in the universe parameters dialog box (File > Parameters > Parameter). These parameters apply only to the active universe, and are saved in the UNV file. When you modify an SQL parameter for a universe in Designer, the value defined in Designer is used, and not the value defined in the PRM file (parameters file) associated with the data access driver for the connection.

To know the list of available parameters, see the product guide xi3_designer.pdf page 88 section Universe SQL parameters reference.

PRM File
The PRM file is a text file that lists parameters used to configure universe creation and SQL query generation in Web Intelligence. There is a PRM file for each supported RDBMS.

Support
Connections through ODBC to Excel and text files
You can create connections through ODBC to Excel files, and to text files in .csv format. For Web Intelligence to use a universe based on a text file or an Excel file accessed through ODBC, you must edit the msjet.prm file for the connection. This file is located in the folder $INSTALLDIR$/BusinessObjects Enterprise 12.0/win32_x86/dataAccess/connectionserver/odbc, where $INSTALLDIR$ is the directory in which your Business Objects applications are installed. In the msjet.prm file, change the DB_TYPE parameter as follows:

From: <Parameter Name="DB_TYPE">MS Jet Engine</Parameter>
To: <Parameter Name="DB_TYPE">MS Jet</Parameter>


You must stop and restart the Business Objects Enterprise server after making this change. If you are running Designer on the same machine as your Web Intelligence server and want to create additional universes based on text or Excel files after changing this value, you must reset the value to <Parameter Name="DB_TYPE">MS Jet Engine</Parameter>.
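The edit itself is a one-token change inside the DB_TYPE element. A hypothetical helper for making the swap programmatically (the element structure is assumed from the lines above; always back up the real file first):

```python
import re

# Hypothetical sketch: swap the value of the DB_TYPE <Parameter> element
# in PRM-file text. The quoting/structure is assumed, not verified against
# every PRM variant.
def set_db_type(prm_text: str, new_value: str) -> str:
    return re.sub(r'(<Parameter Name="DB_TYPE">)[^<]*(</Parameter>)',
                  lambda m: m.group(1) + new_value + m.group(2),
                  prm_text)

sample = '<Parameter Name="DB_TYPE">MS Jet Engine</Parameter>'
print(set_db_type(sample, "MS Jet"))
# <Parameter Name="DB_TYPE">MS Jet</Parameter>
```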

Sql Editor

About
You can use an SQL editor to help you define the Select statement or a Where clause for an object. The SQL Editor is a graphical editor that lists tables, columns, objects, operators, and functions in tree views. You can double click any listed structure to insert it into the Select or Where boxes.

Functions

Number, character, and date functions, plus @Functions specific to Business Objects products. The available functions are listed under the Functions entry in the PRM parameters file for the target database.

Show object SQL


When selected, the SQL syntax is displayed for the objects that appear in the Select, or Where boxes.

Parse
When clicked, parses the syntax. If the syntax is not valid, a message box appears describing the problem.

Input and Output File Repository Servers

Input and Output File Repository Server (FRS) processes run on each BusinessObjects Enterprise server machine. The Input FRS manages report and program objects that have been published to the system by administrators or end users using:

the SAP BOBJ Publishing Wizard, the SAP BOBJ Central Management Console (CMC), the Import Wizard, or a Business Objects designer component (such as Crystal Reports or the SAP BO Web Intelligence report panels).
Objects with associated files, such as text files, Microsoft Word files, or PDFs, are stored on the Input File Repository Server. The Output FRS manages all of the report instances generated by the Report Job Server or the Web Intelligence Processing Server, and the program instances generated by the Program Job Server. If you use the BusinessObjects Enterprise SDK, you can also publish reports from within your own code. The FRSes are responsible for listing files on the server, querying for the size of a file, querying for the size of the entire file repository, adding files to the repository, and removing files from the repository. To avoid conflicts between input and output objects, the Input and Output FRSes cannot share the same file system directory. In larger deployments, there may be multiple Input and Output FRSes. However, only one set is active at any given time. In this case, all Input File Repository Servers share the same directory. Likewise, all Output File Repository Servers share the same directory.

Frequent Interview Questions of Business Objects


What are the benefits of a data warehouse?
A data warehouse integrates data and stores it historically so that we can analyze different aspects of the business, including performance analysis, trend detection, and prediction, over a given time frame, and use the results of the analysis to improve the efficiency of business processes.

Why Data Warehouse is used?


For a long time in the past, and even today, data warehouses have been built to facilitate reporting on the key business processes of an organization, known as KPIs (Key Performance Indicators). Data warehouses also help to integrate data from different sources and show a single point of truth for the business measures. A data warehouse can further be used for data mining, which helps with trend prediction, forecasting, pattern recognition, etc.

What is data mart?


Data marts are generally designed for a single subject area. An organization may have data pertaining to different departments like Finance, HR, Marketing, etc. stored in the data warehouse, and each department may have a separate data mart. These data marts can be built on top of the data warehouse.

What is the difference between OLTP and OLAP?


OLTP is the transaction system that collects business data, whereas OLAP is the reporting and analysis system on that data. OLTP systems are optimized for INSERT and UPDATE operations and are therefore highly normalized. On the other hand, OLAP systems are deliberately denormalized for fast data retrieval through SELECT operations.

What is ER model?
The ER (entity-relationship) model is a data-modeling methodology whose goal is to normalize the data by reducing redundancy. This is different from dimensional modeling, where the main goal is to improve the data retrieval mechanism.

What is dimensional modeling?


A dimensional model consists of dimension and fact tables. Fact tables store transactional measurements along with foreign keys to the dimension tables that qualify the data. The goal of a dimensional model is not to achieve a high degree of normalization but to facilitate easy and fast data retrieval. Ralph Kimball is one of the strongest proponents of this very popular data-modeling technique, which is used in many enterprise-level data warehouses.
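A minimal T-SQL sketch of such a star schema, using hypothetical DimProduct, DimCustomer, and FactSales tables (all names are illustrative, not from any real warehouse):

```sql
-- Dimension tables qualify the facts
CREATE TABLE DimProduct (
    ProductKey  INT PRIMARY KEY,
    ProductName VARCHAR(50)
);
CREATE TABLE DimCustomer (
    CustomerKey  INT PRIMARY KEY,
    CustomerName VARCHAR(50)
);
-- Fact table: measures plus foreign keys to the dimensions
CREATE TABLE FactSales (
    ProductKey  INT REFERENCES DimProduct(ProductKey),
    CustomerKey INT REFERENCES DimCustomer(CustomerKey),
    SaleDate    DATE,
    QuantityKg  DECIMAL(10,2)   -- additive measure
);
```

Queries then join the fact table to whichever dimensions are needed to slice the measures.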

What is dimension?

A dimension is something that qualifies a quantity (measure). For example, if I just say 20 kg, it does not mean anything. But if I say 20 kg of Rice (product) was sold to Ramesh (customer) on 5th April (date), then that gives a meaningful sense. Product, customer, and date are dimensions that qualify the measure 20 kg. Dimensions are mutually independent. Technically speaking, a dimension is a data element that categorizes each item in a data set into non-overlapping regions.

What is Fact?
A fact is something that is quantifiable (Or measurable). Facts are typically (but not always) numerical values that can be aggregated.

What are additive, semi-additive and non-additive measures?


Non-additive measures are those which cannot be used inside any numeric aggregation function (e.g. SUM(), AVG()). One example of a non-additive fact is any kind of ratio or percentage: a 10% profit margin, a revenue-to-asset ratio, etc. Non-numerical data can also be a non-additive measure when it is stored in a fact table, e.g. some kind of varchar flag.

Semi-additive measures are those to which only a subset of aggregation functions can be applied. Take account balance: a SUM() over balances does not give a useful result, but the MAX() or MIN() balance might be useful. Consider a price rate or currency rate: a sum is meaningless on a rate, but an average might be useful.

Additive measures can be used with any aggregation function, like SUM() or AVG(). Sales quantity is an example.

What is a Cursor? A cursor is a database object used by applications to manipulate data in a set on a row-by-row basis, instead of the typical SQL commands that operate on all the rows in the set at one time. In order to work with a cursor we need to perform some steps in the following order:

Declare the cursor
Open the cursor
Fetch a row from the cursor
Process the fetched row
Close the cursor
Deallocate the cursor
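The steps above can be sketched in T-SQL; the Employees table and Name column here are hypothetical:

```sql
-- Walk through a hypothetical Employees table row by row
DECLARE @Name VARCHAR(50);

DECLARE emp_cursor CURSOR FOR          -- 1. declare
    SELECT Name FROM Employees;
OPEN emp_cursor;                       -- 2. open
FETCH NEXT FROM emp_cursor INTO @Name; -- 3. fetch
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @Name;                       -- 4. process the fetched row
    FETCH NEXT FROM emp_cursor INTO @Name;
END
CLOSE emp_cursor;                      -- 5. close
DEALLOCATE emp_cursor;                 -- 6. deallocate
```

In practice a set-based statement is usually preferable; cursors are a fallback for genuinely row-by-row logic.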
What is Collation? Collation refers to a set of rules that determine how data is sorted and compared. Character data is sorted using rules that define the correct character sequence, with options for specifying case sensitivity, accent marks, kana character types, and character width.

What is the difference between a Function and a Stored Procedure? A UDF can be used in SQL statements anywhere in the WHERE/HAVING/SELECT section, whereas a stored procedure cannot be. UDFs that return tables can be treated as another rowset and used in JOINs with other tables. Inline UDFs can be thought of as views that take parameters and can be used in JOINs and other rowset operations.

What is a sub-query? Explain the properties of a sub-query. Sub-queries are often referred to as sub-selects, as they allow a SELECT statement to be executed arbitrarily within the body of another SQL statement. A sub-query is executed by enclosing it in a set of parentheses. Sub-queries are generally used to return a single row as an atomic value, though they may be used to compare values against multiple rows with the IN keyword. A subquery is a SELECT statement that is nested within another T-SQL statement. A subquery SELECT statement, if executed independently of the T-SQL statement in which it is nested, will return a result set; that is, a subquery SELECT statement can stand alone and is not dependent on the statement in which it is nested. A subquery SELECT statement can return any number of values and can appear in the column list of a SELECT statement or in the FROM, GROUP BY, HAVING, and/or ORDER BY clauses of a T-SQL statement. A subquery can also be used as a parameter to a function call. Basically, a subquery can be used anywhere an expression can be used.

What are the different types of Join?

Cross Join A cross join that does not have a WHERE clause produces the Cartesian product of the tables involved in the join.
The size of a Cartesian product result set is the number of rows in the first table multiplied by the number of rows in the second table. The common example is when a company wants to combine each product with a pricing table to analyze each product at each price.
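The pricing example might look like this sketch (Product and PriceLevel are hypothetical tables):

```sql
-- Every product paired with every price level:
-- result rows = rows(Product) * rows(PriceLevel)
SELECT p.ProductName, pl.Price
FROM Product p
CROSS JOIN PriceLevel pl;
```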

Inner Join A join that displays only the rows that have a match in both joined tables is known as an inner join. This is the default type of join in the Query and View Designer.

Outer Join A join that includes rows even if they do not have related rows in the joined table is an outer join. You can create three different outer joins to specify the unmatched rows to be included:

Left Outer Join: All rows in the first-named (left) table, which appears leftmost in the JOIN clause, are included; unmatched rows in the right table do not appear.
Right Outer Join: All rows in the second-named (right) table, which appears rightmost in the JOIN clause, are included; unmatched rows in the left table are not included.
Full Outer Join: All rows in all joined tables are included, whether they are matched or not.
Self Join This is the particular case when a table joins to itself, with one or two aliases to avoid confusion. A self join can be of any type, as long as the joined tables are the same. It is unique in that it involves a relationship with only one table. The common example is when a company has a hierarchical reporting structure whereby one member of staff reports to another. A self join can be an outer join or an inner join.

What are primary keys and foreign keys? Primary keys are the unique identifiers for each row. They must contain unique values and cannot be null. Due to their importance in relational databases, primary keys are the most fundamental of all keys and constraints. A table can have only one primary key. Foreign keys are both a method of ensuring data integrity and a manifestation of the relationship between tables.

What are User-Defined Functions? What kinds of User-Defined Functions can be created? User-defined functions let you define your own T-SQL functions that accept 0 or more parameters and return a single scalar data value or a table data type. The different kinds of user-defined functions are: Scalar User-Defined Function A scalar user-defined function returns one of the scalar data types (text, ntext, image, and timestamp are not supported). These are the type of user-defined functions that most developers are used to in other programming languages: you pass in 0 to many parameters and you get a return value.
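The reporting-structure self join mentioned above can be sketched like this (the Employee table and its columns are hypothetical):

```sql
-- Each employee row joined to its manager row in the same table
SELECT e.Name AS Employee, m.Name AS Manager
FROM Employee e
LEFT JOIN Employee m                 -- same table, second alias
    ON e.ManagerID = m.EmployeeID;
```

The LEFT JOIN keeps employees with no manager (e.g. the CEO) in the result.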

Inline Table-Valued User-Defined Function An inline table-valued user-defined function returns a table data type and is an exceptional alternative to a view, as the user-defined function can pass parameters into a T-SQL SELECT command and in essence provide a parameterized, non-updateable view of the underlying tables.

Multi-statement Table-Valued User-Defined Function A multi-statement table-valued user-defined function returns a table and is also an exceptional alternative to a view, as the function can support multiple T-SQL statements to build the final result, where a view is limited to a single SELECT statement. The ability to pass parameters into the T-SQL commands gives us the capability to, in essence, create a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command you must define the table structure that is being returned. After creating this type of user-defined function, it can be used in the FROM clause of a T-SQL command, unlike a stored procedure, which can also return record sets.

What is Identity? Identity (or AutoNumber) is a column that automatically generates numeric values. A start and increment value can be set, but most DBAs leave these at 1. A GUID column also generates numbers; its value cannot be controlled. Identity/GUID columns do not need to be indexed.

What is Data Warehousing? A data warehouse is a repository of an organization's data, designed to be:
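An inline table-valued function of the kind described above might look like this sketch (dbo.Orders and fn_OrdersByCustomer are hypothetical names):

```sql
-- Parameterized, view-like function over a hypothetical Orders table
CREATE FUNCTION dbo.fn_OrdersByCustomer (@CustomerID INT)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderID, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
);
GO

-- Used like a table, including in JOINs
SELECT * FROM dbo.fn_OrdersByCustomer(42);
```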

Subject-oriented, meaning that the data in the database is organized so that all the data elements relating to the same real-world event or object are linked together.
Time-variant, meaning that changes to the data in the database are tracked and recorded so that reports can be produced showing changes over time.
Non-volatile, meaning that data in the database is never over-written or deleted; once committed, the data is static, read-only, and retained for future reporting.
Integrated, meaning that the database contains data from most or all of an organization's operational applications, and that this data is made consistent.
Which TCP/IP port does SQL Server run on? How can it be changed? SQL Server runs on port 1433 by default. It can be changed from the Network Utility TCP/IP properties -> Port number, on both the client and the server.

What is the difference between a clustered and a non-clustered index? A clustered index is a special type of index that reorders the way records in the table are physically stored; therefore a table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.

A non-clustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf nodes of a non-clustered index do not consist of the data pages; instead, the leaf nodes contain index rows.

What are the different index configurations a table can have? A table can have one of the following index configurations:

No indexes
A clustered index
A clustered index and many non-clustered indexes
A non-clustered index
Many non-clustered indexes
What are the different types of Collation Sensitivity? Case sensitivity: A and a, B and b, etc. Accent sensitivity: a and á, o and ó, etc. Kana sensitivity: when the Japanese kana characters Hiragana and Katakana are treated differently, the collation is kana-sensitive. Width sensitivity: when a single-byte character (half-width) and the same character represented as a double-byte character (full-width) are treated differently, the collation is width-sensitive.

What is OLTP (Online Transaction Processing)? In OLTP (online transaction processing) systems, relational database design uses the discipline of data modeling and generally follows the Codd rules of data normalization in order to ensure absolute data integrity. Using these rules, complex information is broken down into its simplest structures (tables) where all of the individual atomic-level elements relate to each other and satisfy the normalization rules.

What's the difference between a primary key and a unique key? Both a primary key and a unique key enforce uniqueness of the column on which they are defined. But by default a primary key creates a clustered index on the column, whereas a unique key creates a non-clustered index by default. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows one NULL only.

What is the difference between the DELETE & TRUNCATE commands? The DELETE command removes rows from a table based on the condition that we provide in a WHERE clause. TRUNCATE will actually remove all the rows from a table, and there will be no data in the table after we run the command. TRUNCATE

TRUNCATE is faster and uses fewer system and transaction log resources than DELETE.
TRUNCATE removes the data by deallocating the data pages used to store the table's data, and only the page deallocations are recorded in the transaction log.
TRUNCATE removes all rows from a table, but the table structure, its columns, constraints, indexes and so on remain.
The counter used by an identity column for new rows is reset to the seed for the column.
You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint.
Because TRUNCATE TABLE is not logged row by row, it cannot activate a trigger.
TRUNCATE cannot be rolled back unless it is issued inside a transaction.
TRUNCATE is a DDL command.
TRUNCATE resets the identity of the table.
DELETE

DELETE removes rows one at a time and records an entry in the transaction log for each deleted row.
If you want to retain the identity counter, use DELETE instead.
If you want to remove the table definition and its data, use the DROP TABLE statement.
DELETE can be used with or without a WHERE clause.
DELETE activates triggers.
DELETE can be rolled back.
DELETE is a DML command.
DELETE does not reset the identity of the table.
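The contrast can be sketched on a hypothetical staging table (dbo.SalesStaging and LoadDate are illustrative names):

```sql
-- DELETE: logged per row, honors a WHERE clause, fires triggers,
-- keeps the identity counter
DELETE FROM dbo.SalesStaging
WHERE LoadDate < '2008-01-01';

-- TRUNCATE: deallocates the pages, removes all rows,
-- resets the identity counter
TRUNCATE TABLE dbo.SalesStaging;
```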
When is the UPDATE_STATISTICS command used? This command is basically used after a large amount of data processing has occurred. If a large number of deletions, modifications, or bulk copies into the tables has occurred, the statistics on the indexes have to be updated to take these changes into account; UPDATE_STATISTICS updates the statistics on these tables accordingly.

What is the difference between a HAVING clause and a WHERE clause? Both specify a search condition, but HAVING applies to a group or an aggregate. HAVING can be used only with the SELECT statement and is typically used with a GROUP BY clause. When GROUP BY is not used, HAVING behaves like a WHERE clause. The HAVING clause is basically used only

with the GROUP BY function in a query, whereas the WHERE clause is applied to each row before it becomes part of the GROUP BY function in a query.

What are the properties and different types of sub-queries? Properties of a sub-query:

A sub-query must be enclosed in parentheses.
A sub-query must be put on the right-hand side of the comparison operator.
A sub-query cannot contain an ORDER BY clause.
A query can contain more than one sub-query.

Types of Sub-query

Single-row sub-query, where the sub-query returns only one row;
Multiple-row sub-query, where the sub-query returns multiple rows; and
Multiple-column sub-query, where the sub-query returns multiple columns.
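The first two types can be sketched as follows (Product, Customer, and Orders are hypothetical tables):

```sql
-- Single-row sub-query: returns one scalar value for comparison
SELECT Name
FROM Product
WHERE Price = (SELECT MAX(Price) FROM Product);

-- Multiple-row sub-query: compared against with the IN keyword
SELECT Name
FROM Customer
WHERE CustomerID IN (SELECT CustomerID FROM Orders);
```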
What is SQL Profiler? SQL Profiler is a graphical tool that allows system administrators to monitor events in an instance of Microsoft SQL Server. You can capture and save data about each event to a file or SQL Server table to analyze later. For example, you can monitor a production environment to see which stored procedures are hampering performance by executing too slowly. Use SQL Profiler to monitor only the events in which you are interested. If traces are becoming too large, you can filter them based on the information you want, so that only a subset of the event data is collected. Monitoring too many events adds overhead to the server and the monitoring process and can cause the trace file or trace table to grow very large, especially when the monitoring process takes place over a long period of time.

What are the authentication modes in SQL Server? How can they be changed? Windows mode and Mixed mode (SQL & Windows). To change the authentication mode in SQL Server, click Start, Programs, Microsoft SQL Server, and click SQL Enterprise Manager to run SQL Enterprise Manager from the Microsoft SQL Server program group. Select the server, then from the Tools menu select SQL Server Configuration Properties and choose the Security page.

Which command using Query Analyzer will give you the version of SQL Server and the operating system?

SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY('productlevel'), SERVERPROPERTY('edition')


What is SQL Server Agent? SQL Server Agent plays an important role in the day-to-day tasks of a database administrator (DBA). It is often overlooked as one of the main tools for SQL Server management. Its purpose is to ease the implementation of tasks for the DBA, with its full-function scheduling engine, which allows you to schedule your own jobs and scripts.

Can a stored procedure call itself (a recursive stored procedure)? How many levels of SP nesting are possible? Yes. Because Transact-SQL supports recursion, you can write stored procedures that call themselves. Recursion can be defined as a method of problem solving wherein the solution is arrived at by repetitively applying it to subsets of the problem. A common application of recursive logic is to perform numeric computations that lend themselves to repetitive evaluation by the same processing steps. Stored procedures are nested when one stored procedure calls another or executes managed code by referencing a CLR routine, type, or aggregate. You can nest stored procedures and managed code references up to 32 levels.

What is Log Shipping? Log shipping is the process of automating the backup of database and transaction log files on a production SQL server, and then restoring them onto a standby server. Only Enterprise Editions support log shipping. In log shipping, the transaction log file from one server is automatically applied to the backup database on the other server. If one server fails, the other server will have the same database and can be used as the disaster recovery plan. The key feature of log shipping is that it will automatically back up transaction logs throughout the day and automatically restore them on the standby server at a defined interval.

Name three ways to get an accurate count of the number of records in a table.

SELECT * FROM table1
SELECT COUNT(*) FROM table1
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table1') AND indid < 2

What does it mean to have QUOTED_IDENTIFIER ON? What are the implications of having it OFF? When SET QUOTED_IDENTIFIER is ON, identifiers can be delimited by double quotation marks, and literals must be delimited by single quotation marks. When SET QUOTED_IDENTIFIER is OFF, identifiers cannot be quoted and must follow all Transact-SQL rules for identifiers.

What is the difference between a local and a global temporary table? A local temporary table exists only for the duration of a connection or, if defined inside a compound statement, for the duration of the compound statement. A global temporary table remains in the database permanently, but the rows exist only within a given connection. When the connection is closed, the data in the global temporary table disappears; however, the table definition remains with the database for access when the database is opened next time.

What is the STUFF function and how does it differ from the REPLACE function? The STUFF function is used to overwrite existing characters. In the syntax STUFF(string_expression, start, length, replacement_characters), string_expression is the string that will have characters substituted, start is the starting position, length is the number of characters in the string that are substituted, and replacement_characters are the new characters interjected into the string. The REPLACE function replaces all occurrences of existing characters: in the syntax REPLACE(string_expression, search_string, replacement_string), every incidence of search_string found in string_expression is replaced with replacement_string.

What is a PRIMARY KEY? A PRIMARY KEY constraint is a unique identifier for a row within a database table. Every table should have a primary key constraint to uniquely identify each row, and only one primary key constraint can be created per table. Primary key constraints are used to enforce entity integrity.

What is a UNIQUE KEY constraint?
A UNIQUE constraint enforces the uniqueness of the values in a set of columns, so no duplicate values are entered. The unique key constraints are used to enforce entity integrity as the primary key constraints.

What is a FOREIGN KEY? A FOREIGN KEY constraint prevents any actions that would destroy links between tables with the corresponding data values. A foreign key in one table points to a primary key in another table. Foreign keys prevent actions that would leave rows with foreign key values when there are no primary keys with that value. Foreign key constraints are used to enforce referential integrity.

What is a CHECK constraint? A CHECK constraint is used to limit the values that can be placed in a column. Check constraints are used to enforce domain integrity.

What is a NOT NULL constraint? A NOT NULL constraint enforces that the column will not accept null values. Not null constraints, like check constraints, are used to enforce domain integrity.

How do you get @@ERROR and @@ROWCOUNT at the same time? If @@ROWCOUNT is checked after the error-checking statement, it will have a value of 0, as it will have been reset; and if @@ROWCOUNT is checked before the error-checking statement, @@ERROR will be reset. To get @@ERROR and @@ROWCOUNT at the same time, read both in the same statement and store them in local variables:

SELECT @RC = @@ROWCOUNT, @ER = @@ERROR

What is a Scheduled Job (or Scheduled Task)? Scheduled tasks let the user automate processes that run on regular or predictable cycles. The user can schedule administrative tasks, such as cube processing, to run during times of slow business activity, and can determine the order in which tasks run by creating job steps within a SQL Server Agent job, e.g. back up the database, update statistics of tables. Job steps give the user control over the flow of execution: if one job step fails, the user can configure SQL Server Agent to continue to run the remaining tasks or to stop execution.

What are the advantages of using stored procedures?

Stored procedures can reduce network traffic and latency, boosting application performance.
Stored procedure execution plans can be reused, staying cached in SQL Server's memory, reducing server overhead.
Stored procedures help promote code reuse.

Stored procedures can encapsulate logic; you can change stored procedure code without affecting clients.
Stored procedures provide better security for your data.
What is a table called if it has neither a clustered nor a non-clustered index? What is it used for? An unindexed table, or heap. Microsoft Press books and Books Online (BOL) refer to it as a heap. A heap is a table that does not have a clustered index and whose pages, therefore, are not linked by pointers; the IAM pages are the only structures that link the pages in the table together. Unindexed tables are good for fast storing of data. Many times it is better to drop all indexes from a table, do the bulk inserts, and then restore the indexes.

Can SQL Server be linked to other servers like Oracle? SQL Server can be linked to any server provided there is an OLE DB provider to allow the link. E.g. there is an OLE DB provider for Oracle that Microsoft provides to add it as a linked server to the SQL Server group.

What is BCP? When is it used? BulkCopy (BCP) is a tool used to copy huge amounts of data from tables and views. BCP does not copy the structures from source to destination. The BULK INSERT command helps to import a data file into a database table or view in a user-specified format.

What commands do we use to rename a database, a table, and a column? To rename a database:

sp_renamedb 'oldname' , 'newname'


If someone is using the database, it will not accept sp_renamedb. In that case, first bring the database to single-user mode using sp_dboption, use sp_renamedb to rename the database, then use sp_dboption to bring the database back to multi-user mode. E.g.

USE master;
GO
EXEC sp_dboption 'AdventureWorks', 'Single User', True
GO
EXEC sp_renamedb 'AdventureWorks', 'AdventureWorks_New'
GO
EXEC sp_dboption 'AdventureWorks_New', 'Single User', False
GO
To rename a table, we can use sp_rename as follows:

sp_rename 'oldTableName', 'newTableName'


E.g.

sp_RENAME 'Table_First', 'Table_Last'
GO


To rename a column, the script is:

sp_rename 'TableName.[OldcolumnName]', 'NewColumnName', 'Column'


E.g.

sp_RENAME 'Table_First.Name', 'NameChange', 'COLUMN'
GO


What are sp_configure commands and SET commands? Use sp_configure to display or change server-level settings. To change database-level settings, use ALTER DATABASE. To change settings that affect only the current user session, use the SET statement. E.g.

sp_CONFIGURE 'show advanced', 0
GO
RECONFIGURE
GO
sp_CONFIGURE
GO
You can run the following command to check the advanced global configuration settings.

sp_CONFIGURE 'show advanced', 1
GO
RECONFIGURE
GO
sp_CONFIGURE
GO


How do you implement one-to-one, one-to-many, and many-to-many relationships while designing tables? A one-to-one relationship can be implemented as a single table, or rarely as two tables with primary and foreign key relationships. One-to-many relationships are implemented by splitting the data into two tables with a primary key and foreign key relationship. Many-to-many relationships are implemented using a junction table, with the keys from both tables forming the composite primary key of the junction table.

What is an execution plan? When would you use it? How would you view the execution plan? An execution plan is basically a road map that graphically or textually shows the data retrieval methods chosen by the SQL Server query optimizer for a stored procedure or ad-hoc query. It is a very useful tool for a developer to understand the performance characteristics of a query or stored procedure, since the plan is what SQL Server places in its cache and uses to execute the stored procedure or query. Within Query Analyzer there is an option called Show Execution Plan (located on the Query drop-down menu); if this option is turned on, the query execution plan is displayed in a separate window when the query is run.

What are the basic functions of the master, msdb, model, tempdb, and resource databases? The master database holds information for all databases located on the SQL Server instance and is the glue that holds the engine together. Because SQL Server cannot start without a functioning master database, you must administer this database with care.
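The junction-table pattern for a many-to-many relationship can be sketched like this (Student, Course, and StudentCourse are hypothetical tables):

```sql
-- Many-to-many: students and courses resolved through a junction table
CREATE TABLE Student (StudentID INT PRIMARY KEY, Name  VARCHAR(50));
CREATE TABLE Course  (CourseID  INT PRIMARY KEY, Title VARCHAR(50));

CREATE TABLE StudentCourse (
    StudentID INT REFERENCES Student(StudentID),
    CourseID  INT REFERENCES Course(CourseID),
    PRIMARY KEY (StudentID, CourseID)   -- composite key from both tables
);
```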

The msdb database stores information regarding database backups, SQL Server Agent information, DTS packages, SQL Server jobs, and some replication information such as for log shipping. The tempdb database holds temporary objects such as global and local temporary tables and stored procedures. The model database is essentially a template used in the creation of any new user database in the instance. The resource database is a read-only database that contains all the system objects that are included with SQL Server. SQL Server system objects, such as sys.objects, are physically persisted in the resource database, but they logically appear in the sys schema of every database. The resource database does not contain user data or user metadata.

What is Service Broker? Service Broker is a message-queuing technology in SQL Server that allows developers to integrate SQL Server fully into distributed applications. Service Broker is a feature which provides the facility for SQL Server to send an asynchronous, transactional message. It allows a database to send a message to another database without waiting for the response, so the application will continue to function if the remote database is temporarily unavailable.

Where are SQL Server user names and passwords stored? They are stored in the system catalog views sys.server_principals and sys.sql_logins.

What is Policy Management? Policy Management in SQL Server 2008 allows you to define and enforce policies for configuring and managing SQL Server across the enterprise. Policy-Based Management is configured in SQL Server Management Studio (SSMS): navigate to the Object Explorer and expand the Management node and the Policy Management node; you will see the Policies, Conditions, and Facets nodes.

What are Replication and Database Mirroring? Database mirroring can be used with replication to provide availability for the publication database.
Database mirroring involves two copies of a single database that typically reside on different computers. At any given time, only one copy of the database is available to clients; this is known as the principal database. Updates made by clients to the principal database are applied to the other copy, known as the mirror database. Mirroring involves applying the

transaction log from every insertion, update, or deletion made on the principal database onto the mirror database.

What are Sparse Columns? A sparse column is another tool used to reduce the amount of physical storage used in a database. Sparse columns are ordinary columns that have optimized storage for null values: they reduce the space requirements for null values at the cost of more overhead to retrieve non-null values.

What does the TOP operator do? The TOP operator is used to specify the number of rows to be returned by a query. The TOP operator has a new addition in SQL Server 2008: it accepts variables as well as literal values, and can be used with INSERT, UPDATE, and DELETE statements.

What is a CTE? CTE is an abbreviation for Common Table Expression. A Common Table Expression (CTE) is an expression that can be thought of as a temporary result set defined within the execution of a single SQL statement. A CTE is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query.

What is the MERGE statement? MERGE is a new feature that provides an efficient way to perform multiple DML operations. In previous versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; now, using the MERGE statement, we can include the logic of such data modifications in one statement that updates the data when it is matched and inserts it when it is unmatched. One of the most important advantages of the MERGE statement is that all the data is read and processed only once.

What is a Filtered Index? A filtered index indexes a portion of the rows in a table: it applies a filter on the INDEX, which improves query performance and reduces index maintenance and storage costs compared with full-table indexes. When we see an index created with a WHERE clause, that is a filtered index.
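A typical MERGE "upsert" of the kind described above might look like this sketch (the ProductTarget and ProductSource tables are hypothetical):

```sql
-- Update matched rows, insert unmatched ones, in a single statement
MERGE dbo.ProductTarget AS t
USING dbo.ProductSource AS s
    ON t.ProductID = s.ProductID
WHEN MATCHED THEN
    UPDATE SET t.Price = s.Price
WHEN NOT MATCHED THEN
    INSERT (ProductID, Price) VALUES (s.ProductID, s.Price);
```

The source rowset is read once, and each target row falls into exactly one WHEN branch.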
What are the new data types introduced in SQL SERVER 2008?

The GEOMETRY Type: The GEOMETRY data type is a .NET common language runtime (CLR) system data type in SQL Server. This type represents data in a two-dimensional Euclidean coordinate system.

The GEOGRAPHY Type: The GEOGRAPHY data type's functions are the same as those of GEOMETRY. The difference between the two is that when you specify GEOGRAPHY, you are usually specifying points in terms of latitude and longitude.

New Date and Time Datatypes: SQL Server 2008 introduces four new datatypes related to date and time: DATE, TIME, DATETIMEOFFSET, and DATETIME2.

DATE: The new DATE type stores just the date itself. It is based on the Gregorian calendar and handles years from 1 to 9999.

TIME: The new TIME(n) type stores time with a range of 00:00:00.0000000 through 23:59:59.9999999. Precision is configurable with this type: TIME supports seconds down to 100 nanoseconds, and the n in TIME(n) defines the level of fractional-second precision, from 0 to 7 digits.

The DATETIMEOFFSET Type: DATETIMEOFFSET(n) is the time-zone-aware version of a datetime datatype. The name will appear less odd when you consider what it really is: a date + a time + a time-zone offset. The offset is based on how far behind or ahead you are of Coordinated Universal Time (UTC).

The DATETIME2 Type: This is an extension of the datetime type in earlier versions of SQL Server. The new datatype has a date range covering dates from January 1 of year 1 through December 31 of year 9999. This is a definite improvement over the 1753 lower boundary of the datetime datatype. DATETIME2 not only includes the larger date range, but also offers the same fractional precision that the TIME type provides.
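A quick sketch of the four new types in use (the values are chosen only for illustration):

```sql
-- Declare and assign each of the new date/time types (SQL Server 2008+).
DECLARE @d   DATE            = '2008-08-22';
DECLARE @t   TIME(7)         = '23:59:59.9999999';
DECLARE @dt2 DATETIME2(7)    = '2008-08-22 23:59:59.9999999';
DECLARE @dto DATETIMEOFFSET  = '2008-08-22 23:59:59.9999999 +05:30';

SELECT @d AS d, @t AS t, @dt2 AS dt2, @dto AS dto;
```

Note that DATETIMEOFFSET carries the +05:30 offset along with the value, which is what makes it time-zone aware.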
What are the Advantages of using CTE?

Using a CTE improves readability and makes maintenance of complex queries easy. The query can be divided into separate, simple, logical building blocks, which can then be used to build more complex CTEs until the final result set is generated. A CTE can be defined in functions, stored procedures, triggers, or even views. After a CTE is defined, it can be used like a table or a view, and you can SELECT, INSERT, UPDATE, or DELETE data through it.
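For example, a CTE can be the target of a DML statement, not just a SELECT. A minimal sketch (the table and columns are hypothetical):

```sql
-- Use a CTE to isolate the rows of interest, then UPDATE through it.
WITH StaleOrders AS
(
    SELECT OrderID, Status
    FROM dbo.Orders
    WHERE OrderDate < '2000-01-01'
)
UPDATE StaleOrders
SET Status = 'Archived';
```

This keeps the row-selection logic in one named, readable block instead of burying it in the UPDATE's WHERE clause.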
How can we rewrite sub-queries as simple select statements or with joins? We can do this using a Common Table Expression (CTE). A Common Table Expression (CTE) is an expression that can be thought of as a temporary result set defined within the execution of a single SQL statement. A CTE is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query.

E.g.

USE AdventureWorks
GO
WITH EmployeeDepartment_CTE AS
(
    SELECT EmployeeID, DepartmentID, ShiftID
    FROM HumanResources.EmployeeDepartmentHistory
)
SELECT ecte.EmployeeID, ed.DepartmentID, ed.Name, ecte.ShiftID
FROM HumanResources.Department ed
INNER JOIN EmployeeDepartment_CTE ecte
    ON ecte.DepartmentID = ed.DepartmentID
GO
What is CLR?

In SQL Server 2008, SQL Server objects such as user-defined functions can be created using CLR languages. This CLR language support extends not only to user-defined functions, but also to stored procedures and triggers. You can develop such CLR add-ons to SQL Server using Visual Studio 2008.

What are synonyms?

Synonyms give you the ability to provide alternate names for database objects. You can alias object names; for example, the Employee table can be referenced as Emp. You can also shorten names. This is especially useful when dealing with three- and four-part names; for example, shortening server.database.owner.object to object.

What is LINQ?

Language Integrated Query (LINQ) adds the ability to query objects using .NET languages. The LINQ to SQL object/relational mapping (O/RM) framework provides the following basic features:

Tools to create classes (usually called entities) mapped to database tables
Compatibility with LINQ's standard query operations
The DataContext class, with features such as entity record monitoring, automatic SQL statement generation, record concurrency detection, and much more
What are Isolation Levels?

Transactions specify an isolation level that defines the degree to which one transaction must be isolated from resource or data modifications made by other transactions. Isolation levels are described in terms of which concurrency side-effects, such as dirty reads or phantom reads, are allowed. Transaction isolation levels control:

Whether locks are taken when data is read, and what type of locks are requested.
How long the read locks are held.
Whether a read operation referencing rows modified by another transaction:
  blocks until the exclusive lock on the row is freed,
  retrieves the committed version of the row that existed at the time the statement or transaction started, or
  reads the uncommitted data modification.
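An isolation level is set per session before the transaction begins. A minimal sketch (the table name is hypothetical):

```sql
-- Raise the isolation level for this session, then read inside a transaction.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;
    -- Shared locks taken here are held until COMMIT, so other
    -- transactions cannot modify the rows we have read.
    SELECT COUNT(*) FROM dbo.Orders;
COMMIT TRANSACTION;
```

Under READ COMMITTED (the default) those shared locks would be released as soon as each row was read, allowing non-repeatable reads.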
What is the use of the EXCEPT Clause?

The EXCEPT clause is similar to the MINUS operation in Oracle. An EXCEPT (or MINUS) query returns all rows from the first query that are not returned by the second query. Each SQL statement within the EXCEPT/MINUS query must have the same number of fields in its result set, with similar data types.

What is XPath?

XPath uses a set of expressions to select nodes to be processed. The most common expression that you'll use is the location path expression, which returns a set of nodes called a node set. XPath can use both an unabbreviated and an abbreviated syntax. The following is the unabbreviated syntax for a location path:

/axisName::nodeTest[predicate]/axisName::nodeTest[predicate]

What is NOLOCK?

Using the NOLOCK query optimizer hint is generally considered good practice in order to improve concurrency on a busy system. When the NOLOCK hint is included in a SELECT statement, no locks are taken when data is read. The result is a dirty read, which means that another process could be updating the data at the exact time you are reading it. There are no guarantees that your query will retrieve the most recent data. The advantage to performance is that your reading of data will not block updates from taking place, and updates will not block your reading of data. SELECT statements normally take Shared (Read) locks. This means that multiple SELECT statements are allowed simultaneous access, but other processes are blocked from modifying the data. The updates will queue until all the reads have completed, and reads requested after the update will wait for the updates to complete. The result to your system is delay (blocking).

How would you handle errors in SQL SERVER 2008?

SQL Server now supports the use of TRY...CATCH constructs for providing rich error handling. TRY...CATCH lets us build error handling at the level we need, in the way we need to, by setting a region where, if any error occurs, execution will break out of the region and head to an error handler. The basic structure is as follows:

BEGIN TRY
<code>
END TRY
BEGIN CATCH
<code>
END CATCH

So if any error occurs in the TRY block, execution is diverted to the CATCH block, where the error can be dealt with.

What is RAISERROR?

RAISERROR generates an error message and initiates error processing for the session. RAISERROR can either reference a user-defined message stored in the sys.messages catalog view or build a message dynamically. The message is returned as a server error message to the calling application or to an associated CATCH block of a TRY...CATCH construct.

How to rebuild the Master Database?

The master database is a system database, and it contains information about the running server's configuration. When SQL Server 2005 is installed, it creates the master, model, msdb, tempdb, resource, and distribution system databases by default. The master database is the only one that is absolutely required; without it, SQL Server cannot be started. This is the reason it is extremely important to back up the master database.

To rebuild the master database, run Setup.exe to verify and repair the SQL Server instance and rebuild the system databases. This procedure is most often used to rebuild the master database for a corrupted installation of SQL Server.

What is the XML Datatype?

The xml data type lets you store XML documents and fragments in a SQL Server database. An XML fragment is an XML instance that is missing a single top-level element. You can create columns and variables of the xml type and store XML instances in them. The xml data type and associated methods help integrate XML into the relational framework of SQL Server.

What is Data Compression?

In SQL SERVER 2008, data compression comes in two flavors:

Row Compression
Page Compression


Row Compression

Row compression changes the format of the physical storage of data. It minimizes the metadata (column information, length, offsets, etc.) associated with each record. Numeric data types and fixed-length strings are stored in a variable-length storage format, just like Varchar.

Page Compression

Page compression allows common data to be shared between rows for a given page. It uses the following techniques to compress data:

Row compression.

Prefix compression. For every column in a page, duplicate prefixes are identified. These prefixes are saved in compression information (CI) headers, which reside after the page header. A reference number is assigned to these prefixes, and that reference number replaces the prefix wherever it is used.

Dictionary compression. Dictionary compression searches for duplicate values throughout the page and stores them in the CI. The main difference between prefix and dictionary compression is that prefix compression is restricted to one column, while dictionary compression is applicable to the complete page.

What is the use of DBCC Commands?

The Transact-SQL programming language provides DBCC statements that act as Database Console Commands for SQL Server. DBCC commands are used to perform following tasks.

Maintenance tasks on a database, index, or filegroup.
Tasks that gather and display various types of information.
Validation operations on a database, table, index, catalog, filegroup, or allocation of database pages.
Miscellaneous tasks such as enabling trace flags or removing a DLL from memory.
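One common command from each category above, as a sketch (the database name is hypothetical):

```sql
DBCC CHECKDB ('AdventureWorks');        -- validation: verify database integrity
DBCC SQLPERF (LOGSPACE);                -- information: log-space usage per database
DBCC SHRINKDATABASE ('AdventureWorks'); -- maintenance: shrink data and log files
DBCC TRACEON (1222, -1);                -- miscellaneous: enable a trace flag globally
```

Most DBCC commands require elevated permissions, so they are typically run by administrators rather than application logins.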
How to find tables without Indexes? Run the following query in the Query Editor.

USE <database_name>;
GO
SELECT SCHEMA_NAME(schema_id) AS schema_name,
       name AS table_name
FROM sys.tables
WHERE OBJECTPROPERTY(OBJECT_ID, 'IsIndexed') = 0
ORDER BY schema_name, table_name;
GO
How to copy tables, schemas, and views from one SQL Server to another? There are multiple ways to do this:

1. Detach the database from one server and attach it to the other server.
2. Manually script all the objects using SSMS and run the script on the new server.
3. Use the wizard in SSMS.

How to copy data from one table to another table? There are multiple ways to do this:

1) INSERT INTO SELECT

This method is used when the table has already been created in the database and data is to be inserted into it from another table. If the columns listed in the INSERT clause and the SELECT clause are the same, it is not required to list them.

2) SELECT INTO

This method is used when the table has not been created earlier and needs to be created when data from one table is inserted into the newly created table. The new table is created with the same data types as the selected columns.

What are Catalog Views?

Catalog views return information that is used by the SQL Server Database Engine. Catalog views are the most general interface to the catalog metadata and provide the most efficient way to obtain, transform, and present customized forms of this information. All user-available catalog metadata is exposed through catalog views.

What are PIVOT and UNPIVOT?

A pivot table can automatically sort, count, and total the data stored in one table or spreadsheet and create a second table displaying the summarized data. The PIVOT operator turns the values of a specified column into column names, effectively rotating a table. UNPIVOT is the reverse of PIVOT.

What is Filestream?

Filestream allows you to store large objects in the file system and have these files integrated within the database. It enables SQL Server-based applications to store unstructured data such as documents, images, audio, and video in the file system. FILESTREAM integrates the SQL Server Database Engine with the New Technology File System (NTFS); it stores the data in the varbinary(max) data type. Using this data type, the unstructured data is stored in the NTFS file system, and the SQL Server Database Engine manages the link between the Filestream column and the actual file located in NTFS. Using Transact-SQL statements, users can insert, update, delete, and select the data stored in FILESTREAM-enabled tables.

What is a Dirty Read?

A dirty read occurs when two operations, say a read and a write, occur together, giving incorrect or unedited data. Suppose A has changed a row but has not committed the changes; B reads the uncommitted data, but his view of the data may be wrong. That is a dirty read.

What is SQLCMD?
sqlcmd is an enhanced version of isql and osql, and it provides much more functionality than the other two options. In other words, sqlcmd is a better replacement for isql (which will eventually be deprecated) and osql (not included in SQL Server 2005 RTM). sqlcmd can work in two modes: i) batch and ii) interactive.

What are Aggregate Functions?

Aggregate functions perform a calculation on a set of values and return a single value. Aggregate functions ignore NULL values, except for the COUNT function. The HAVING clause is used, along with GROUP BY, for filtering a query using aggregate values. The following are aggregate functions: AVG, MIN, CHECKSUM_AGG, SUM, COUNT, STDEV, COUNT_BIG, STDEVP, GROUPING, VAR, MAX, VARP.

What do you mean by Table Sample?

TABLESAMPLE allows you to extract a sampling of rows from a table in the FROM clause. The rows retrieved are random, and they are not in any order. The sampling can be based on a percentage or on a number of rows. You can use TABLESAMPLE when only a sampling of rows is necessary for the application instead of a full result set.

What is ROW_NUMBER()?

ROW_NUMBER() returns a column, as an expression, that contains the row's number within the result set. This is only a number used in the context of the result set; if the result changes, the ROW_NUMBER() will change.

What are Ranking Functions?

Ranking functions return a ranking value for each row in a partition. All the ranking functions are non-deterministic. The different ranking functions are:

ROW_NUMBER() OVER ([<partition_by_clause>] <order_by_clause>)
Returns the sequential number of a row within a partition of a result set, starting at 1 for the first row in each partition.

RANK() OVER ([<partition_by_clause>] <order_by_clause>)
Returns the rank of each row within the partition of a result set.

DENSE_RANK() OVER ([<partition_by_clause>] <order_by_clause>)
Returns the rank of rows within the partition of a result set, without any gaps in the ranking.

What is the difference between UNION and UNION ALL?

UNION: The UNION command is used to select related information from two tables, much like the JOIN command. However, when using the UNION command, all corresponding selected columns need to be of the same data type. With UNION, only distinct values are selected.

UNION ALL: The UNION ALL command is equal to the UNION command, except that UNION ALL selects all values. The difference between UNION and UNION ALL is that UNION ALL will not eliminate duplicate rows; instead, it pulls all rows from all tables fitting your query specifics and combines them into one result.

What is a B-Tree?

The database server uses a B-tree structure to organize index information. A B-tree generally has the following types of index pages or nodes:

Root node: a root node contains node pointers to branch nodes; there can be only one.
Branch nodes: a branch node contains pointers to leaf nodes or other branch nodes; there can be two or more.
Leaf nodes: a leaf node contains index items and horizontal pointers to other leaf nodes; there can be many.
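Returning to the UNION versus UNION ALL answer above, the difference is easy to see in a sketch (the table names are hypothetical):

```sql
-- Both queries must return the same number of columns with compatible types.
SELECT City FROM dbo.Customers
UNION
SELECT City FROM dbo.Suppliers;     -- duplicates removed (distinct step)

SELECT City FROM dbo.Customers
UNION ALL
SELECT City FROM dbo.Suppliers;     -- duplicates kept; usually faster
```

Because UNION ALL skips the duplicate-elimination step, prefer it whenever duplicates are impossible or acceptable.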

What is Star-schema?
This schema is used in data warehouse models where one centralized fact table references a number of dimension tables, so that the primary keys from all the dimension tables flow into the fact table (as foreign keys), where the measures are stored. The entity-relationship diagram looks like a star, hence the name. Consider a fact table that stores sales quantity for each product and customer at a certain time: sales quantity will be the measure, and keys from the customer, product, and time dimension tables will flow into the fact table.
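A typical star-schema query joins the fact table to its dimensions on those keys and aggregates the measure. A sketch, with all table and column names hypothetical:

```sql
-- Sum the sales-quantity measure by product and year across the star.
SELECT p.ProductName,
       t.CalendarYear,
       SUM(f.SalesQuantity) AS total_quantity
FROM   FactSales   f
JOIN   DimProduct  p ON f.ProductKey  = p.ProductKey
JOIN   DimCustomer c ON f.CustomerKey = c.CustomerKey
JOIN   DimTime     t ON f.TimeKey     = t.TimeKey
GROUP BY p.ProductName, t.CalendarYear;
```

Every join follows a foreign key from the central fact table out to one dimension, which is exactly the star shape described above.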

How to test Universe in SAP Business Objects


There have been a lot of questions from developers about the best strategy for testing a universe. In this post, I will talk about how to test a universe once the development/design is completed.

Before you begin testing the universe, it is good practice to refresh the universe structure. Refreshing the universe structure detects whether any columns were added to or removed from the tables, whether any tables were removed from the database, and whether any tables were renamed in the database.

Testing a universe needs to be done in two phases:

1. Testing metadata

In this phase, you will test the integrity of the entire universe. In other words, you will:

Test the syntax (parse) of all the objects in the universe
Test the syntax of all the joins in the universe
Test the syntax of all the predefined conditions in the universe
Make sure that there are no loops
Make sure that there are no fan/chasm traps
Make sure there are no isolated tables, which means that each table is added to at least one context (if there are contexts defined in the universe)
Make sure that there are no loops within contexts (if there are contexts defined in the universe)

Thanks to SAP Business Objects, there is a check integrity tool included in the universe designer application that will assist you in performing all the aforementioned tasks. However, for the tool to rightly detect and resolve loops, traps, etc., you will have to set the cardinality correctly. You should never use the "detect cardinality" option in the designer application. Why? Because the application has no idea about your data; you should set the cardinality for all the joins yourself, manually, as you know your data well.

Besides what has been listed above, you will have to make sure that for each measure object there is an appropriate aggregate function defined in the select clause and an appropriate projection defined on the properties tab of the object.

Though this is not directly related to testing, it will help you enhance the performance of the reports by pushing the pain of aggregating data down to the database server. The projection setting will display data in reports at the appropriate level of detail.

2. Testing data

In this phase, you will test the actual data that will be extracted from the database using the objects, joins, etc. defined in the universe. This is a bit tricky compared to testing the metadata of the universe. Here are a couple of methods that I use to test the actual data:

Unit Testing or System Testing:

Create reports on top of the universe and verify the numbers against already existing reports.
Create ad hoc reports using the universe by dragging and dropping objects into the query panel (WebI or DeskI). Take a look at the generated SQL and see if it makes sense; especially check whether the joins are defined as per the data model.
Create enough reports so that all the objects in the universe are tested.

User Acceptance Testing:

Let the business users create ad hoc reports using the universe and verify the data. Please be sure to include all the users from whom you got the requirements for the universe. This is probably the best method in my opinion: users are well aware of the data, and this way you can also get their sign-off.

Top 5 BusinessObjects (BO) Scenario Based Questions


In this tutorial, we will look into some of the fundamental and widely asked scenario based questions in BusinessObjects (BO) interviews. Let's get started.

Scenario 1:
Suppose in a universe structure we have tables as shown in the diagram below. Tables A, B, and C are in context ABC, and C, E, and F are in context CDE. Now if there is a requirement that needs a join between Table E and Table B, we could define a new context BCDE. But what is the easiest way to implement this?

Define a shortcut join between tables B and E. To do this, join the tables normally, then open the join editor and check the "Shortcut join" box.

The join will show as a dotted line between the two tables. This kind of join does not create a loop and cannot be placed in any context. The shortcut join between Table B and Table E will only work when objects from both tables are selected in the query panel of the report.

Scenario 2:
We have objects from 3 tables A, B, and C in the query panel of a report. Among them, C is a lookup table which holds values with respect to keys. Table B holds the foreign key to table C. A filter condition is applied to Table C at the query level. The resulting query is:
SELECT A.a, B.b
FROM A, B, C
WHERE A.bfk = B.pk
  AND B.cfk = C.pk
  AND C.val = 'XXX'

Now, we define primary key and foreign key relations for tables B and C. Suppose the surrogate key corresponding to the value 'XXX' is 12. How will the query change after implementing this index awareness?
The resulting query will be:

SELECT A.a, B.b
FROM A, B
WHERE A.bfk = B.pk
  AND B.cfk = 12

Table C is eliminated from the query, and the foreign key to C in table B is equated to 12, the key corresponding to 'XXX'. The join with C is eliminated.

Scenario 3:
A user named User1 wants the privilege of running a BO report for 40 minutes and retrieving a report with a row limit of 40,000. However, in the SQL parameters of the universe, the row limit is set to 10,000 and the execution time limit is set to 10 minutes. How can you give the user the required rights?
Go to Tools -> Manage Security. Click on Manage Access Restrictions and create a new restriction. In the Controls tab of the restriction, set the row limit to 40,000 and the execution time limit to 40 minutes. In the main window, apply this restriction to User1.

Scenario 4:
In a report we have a table like this:
Dim A   Dim B   Measure 1
AA      12      100
BB      34      50
CC      21      40
DD      43      90
EE      45      200
FF      54      75

There is a report filter applied on this block which restricts both Dim A and Dim B in the table, i.e. only selected values of Dim A, Dim B, and the corresponding measures from the query are displayed in the table. Another column needs to be added which will calculate, for each row, its share of the sum of Measure 1 in the table (not of all values in the report). What would be the formula?
For this we require the sum of Measure 1 in the table, which can be achieved only with the In Block keyword. The formula will be: Sum(Measure 1) / Sum(Measure 1 In Block)

Scenario 5:
In the embedded sheet of an Xcelsius dashboard, we have data like:
Field A   Field B   Field C   Field D
Xxx       Tyu       100       98
Xxx       Yyy       45        76
Xxx       Dev       56        87
Yyy       Wes       78        13
Yyy       Rid       200       106

In the dashboard, we need a selector on Field A for a chart which will plot the values of Field C and Field D against all Field B values for one Field A value. How can we achieve that?
This can be achieved using filtered rows. In the selector properties, set the Insertion Type to "Filtered Rows" and map the labels by selecting all values of Field A (including the duplicates).

The labels will display unique values, and when a particular value is selected, all rows corresponding to that value of Field A will be selected as output. The chart component needs to be mapped to the output of this selector.

One Report, Many Folders


It's a common problem: you have a folder structure in place that allows users to see only the content applicable to their region, division, department, etc. But there are some reports that should be visible across all regions (or divisions, departments, etc.). This post shows how to allow your users to see both their specific reports and shared reports, for a streamlined experience.

I'm Using: the CMC in BusinessObjects Enterprise 4.0 SP4.

I'm Assuming: you have several folders, several users or groups, and some reports that should be group-specific and others that can be seen by multiple groups. You're familiar with the concepts of users, user groups, folders, and reports.

Categories
By now, you may well be thinking why categories aren't the obvious answer. Good point. Categories are built to do exactly this. They work like tags, or labels in something like Gmail, or the libraries that Windows has used to abstract folders for the last few versions. That is, one report can be assigned to many categories. However, they can be fiddly when it comes to permissions, and confusing for casual users who may not understand the concept of a report residing in a folder and several categories. Having said that, please do investigate whether they'll work for you, and at the very least, let your users know about personal categories so that they can manage their reports in a way that suits them.

The Scenario
I have four countries sharing a BusinessObjects installation: Australia, USA, UK, and India. Each region has its own reports, but I also have a world sales report that all countries must be able to see. My folder structure looks like this:

The Short Version


Here's what I'm going to do:

1. Create a user group for each folder. Each user group will have access to its folder.

2. Put a shortcut to the shared report in each country folder.

3. Block view access to the Shared Reports folder, but not let that inherit down to the reports in the folder. So technically the users are allowed to see the shared reports, but they have no way to navigate to them; they will only see the report (shortcut) in their country folder.

The Long Version


I have my country-specific user groups in a hierarchy so that I can easily define permissions for all of these users at once.

Bonus Tip 1: The easiest way to create a group hierarchy is to select the parent group in the group hierarchy tree, then click Create New Group; the group will automatically be added as a child. The same applies when adding a user: select the group first to save having to add the user to the group afterwards.

And I'm not ashamed to say that I only worked this out when taking the screenshot! Must have been at the back of the manual.

Security Setup

The Top-Level Security concept is a bit tricky. If you're having trouble, there are plenty of articles out there on the interwebs; make sure whatever you read applies to your version. I'll cover the process quickly here.

1. In the CMC, go to Folders and click Manage > Top-Level Security > All Folders, then click OK to the message that appears.

2. Add the Report Users parent group, but don't assign an access level.

3. Go to the Advanced tab, then click Add/Remove Rights. Give them the View Objects right, but un-tick the last column. This means the permissions won't apply to the objects below.

4. Click OK; the Advanced tab should look like this:

Note that the Apply To icon is just a single page.

5. Click OK and Close.

6. Next I'll give the Australia group access to the Australia folder (choose whichever access level you normally would), and the same for the other three countries.

If I log in as a user in the Australia group, I see just that folder. Perfect.

7. Now I'll give the Shared Reports group View rights on the Shared Reports folder. The Access Levels tab will look like this:

8. But before I save, I want to revoke the granular View Objects right. So I go to the Advanced tab, click Add/Remove Rights, and specifically deny that right. It's very important that when you do this you un-tick the box on the right.

With these settings you're saying: "The user isn't allowed to see this folder, but I'm not saying anything about whether they can see objects within the folder." This would be logically pointless, except for the fact that we're going to be creating a shortcut to an object within this folder next.

9. Still in the CMC, go to the Folders area and navigate to your shared report.

10. Right-click the report and click Organize > Create Shortcut In.

11. Hold down the Control key and click each of the folders that you want to create a shortcut in, then click the > arrow to move them to the right.

12. Click Create Shortcut. (You can create shortcuts in the launch pad too, but the CMC lets you do it in bulk.)

13. Now check that in each of your folders, you can see the shortcut.

14. You may want to go and rename each of the shortcuts to just the report name (i.e. remove the "Shortcut to" prefix). This way the report will appear in the correct position alphabetically.

15. Log in as a user that's a member of one of the groups and check that they can see their reports, and the shortcut to the shared report.

Bonus Tip 2: The new launch pad home page is great, but I want to make the appropriate folder the default location for my users. To do this for all users in a group, click on the group in the CMC and select BI Launch Pad Preferences.

Then set the desired folder as the default location.

That's it! You now have a seamless layout for your users to access both their own reports and shared reports. It's also low maintenance, since it's the parent group that has the permissions set on the Shared Reports folder; any new users or groups require no more setup than they normally would.

Database Delegated Measures or Smart Measures in Web Intelligence


A Brief Overview of Measure Objects
A universe has three basic types of result objects: dimensions, details, and measures. Measure objects are analytical values like dollars or quantities. Measure objects have two extra settings that dimensions and details do not have: a SQL aggregate function and a report projection function. The SQL aggregate function is not enforced by the application, but it should always be present. (That's a subject for another blog post. Or two.) The SQL aggregate function is performed by the database server, and the projection function is used by the report engine. This is what makes measures roll up when you slice and dice or drill on a report. The problem is that certain calculations cannot be re-computed by the report engine because there is no valid projection function. Designer 3.x gives me a way to address that by introducing a new projection function setting of "Database delegated." This post will explain why that's important and how it works.

How Projection Works


The projection function is designed to complement the SQL aggregation operation. For example, the Revenue object from Island Resorts has the following SQL formula:

sum(INVOICE_LINE.DAYS * INVOICE_LINE.NB_GUESTS * SERVICE.PRICE)

The object also has a Sum projection function. If I run a query that returns the Resort, the Year, and the Revenue, the output looks like this:

Using a simple drag-and-drop technique, I can remove the Year from the output block, and the projection function causes the data to roll up to this result:

This is all controlled via the projection function, which is part of the object properties screen, as shown here:

When Projection Fails


This all works very well until a special function like average is considered. Averages cannot easily be projected because the source data could be very skewed. A sum operation can be applied recursively: 1 + 5 generates the same result as 1 + (2 + 3). An average is not recursive. An average of 1 and 2.5 is not the same as an average of 1, 2, and 3. For the record, an average of 1 and 2.5 is 1.75, and an average of 1, 2, and 3 is 2. Even with a very small set of data, the results of an average projection can be very wrong. The basic problem here is that averages have to work with the source data: I cannot apply an average to an average and expect to get the correct result. It is for this reason that report developers have had to create average and percentage calculations in their reports rather than reusing an object from a universe. In order to deliver the correct result, I have to work with the source data.
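The non-recursive nature of averages is easy to demonstrate in SQL itself (the table and column names below are hypothetical): averaging per-year averages generally gives a different answer than averaging the source rows, because each year gets equal weight regardless of how many rows it contains.

```sql
-- Correct: the true average over all source rows.
SELECT AVG(revenue) AS true_avg
FROM invoice_line;

-- Generally wrong: the average of per-year averages weights every
-- year equally, regardless of how many rows each year contributed.
SELECT AVG(year_avg) AS avg_of_avgs
FROM (SELECT AVG(revenue) AS year_avg
      FROM invoice_line
      GROUP BY invoice_year) AS per_year;
```

The two results agree only when every year happens to contain the same number of rows, which is exactly why the report engine cannot safely project an average and must delegate it to the database.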

Averages Do Not Average Well


As I said in the prior paragraph, the only way to generate the correct result for an average is to recalculate it from the source data. In order to demonstrate this I have created an Average Revenue object in my universe. For this screen shot I have used the average object in two different queries. The first shows Average Revenue by Resort and Year, and the second shows Average Revenue by Resort only.

The object is created with the following SQL: avg(INVOICE_LINE.DAYS * INVOICE_LINE.NB_GUESTS * SERVICE.PRICE) and the projection function is set to Average. As I did before I can apply a simple drag-and-drop operation to remove the Year object from the first block, allowing the report engine to project the Average Revenue using the selected projection function of Average. Are the results correct?

The results are wrong because the projection in the first block is taking three years of data, summing them up, and then dividing the total by three to get the new average value. The true result based on a database calculation is shown in the right block. The second block was not affected by the Year values since the query did not include that object in the result set.

Delegated Measures
This is where the delegation process comes in. As a universe designer I can now create an object that will project correctly (yay) at the expense of having to run a database query (boo). Instead of projecting my average calculation using the Average function, I will use the Database Delegated option. Here's how that looks:

When I run the same query with my new measure definition here is what the initial output looks like:

The difference becomes apparent when I drag-and-drop the Year object away from my block:

The note #TOREFRESH is telling me that before I can see the numbers for that column I have to refresh my document. I think it's nice that it doesn't refresh right away, as it gives me the opportunity to make more adjustments. Perhaps on a more complex report I want to remove (or add) more than one object from the block. In any case, when I click the refresh button the results are displayed.

Note that the two blocks are 100% the same now. The option to delegate the average calculation to the database has given me the power to create an entirely new type of object that I could not have done before.
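The behavior of the two approaches can be sketched with an in-memory database (the table and figures here are hypothetical stand-ins for the Island Resorts data, and Python's sqlite3 plays the role of both the report engine and the database server):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE invoice (resort TEXT, year INT, revenue REAL)")
con.executemany("INSERT INTO invoice VALUES (?, ?, ?)",
                [("Bahamas", 2004, 100.0),
                 ("Bahamas", 2005, 200.0),
                 ("Bahamas", 2005, 600.0)])

# Initial query: Average Revenue by Resort and Year.
by_year = con.execute(
    "SELECT resort, year, AVG(revenue) FROM invoice GROUP BY resort, year"
).fetchall()

# Wrong: projecting with Average re-averages the yearly averages.
projected = sum(row[2] for row in by_year) / len(by_year)

# Right: a delegated measure re-runs the aggregation without Year.
delegated = con.execute(
    "SELECT AVG(revenue) FROM invoice GROUP BY resort"
).fetchone()[0]

print(projected, delegated)  # 250.0 vs the true average 300.0
```

The refresh triggered by #TOREFRESH corresponds to the second query here: the corrected value comes back from the database, not from the numbers already sitting on the report.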

Conclusion
This is another nice new feature for Designer. It will provide me with better control over how my measure objects are handled. In this case, the solution is not without a cost: I may have to refresh the report in order to force the data to be updated. If the query takes a long time to run, there is a cost involved. It may ultimately still be easier to do this type of calculation on the report. Here is another challenge that is not solved by this technique: I can't do calculations that cross contexts in the universe. Suppose that I have one context for current year measures, and a second context for prior year measures. If I want to compare current to prior year values, that has to be done on the report. Delegating that calculation to the database is not possible because the values come from two different contexts. But it's nice to have options.

Web Intelligence Glossary


This is a glossary of Web Intelligence system terms:

Block: A block is a set of data, be that a table, crosstab or chart. Multiple blocks can be held in a section. These blocks may be related or not, e.g. a table and a chart showing the same information.

Cell: A cell contains either fixed text, formulas or report variables. Fixed-text cells are referred to as constants; although we say fixed text, they could be URLs or images. Cells whose properties change are called variables. When an object is selected it becomes a variable in the report, or a variable may be created using a formula, e.g. adding two objects together or some other function.

Class: A class allows the grouping of related objects. The relationship between objects is in terms of the business terminology, e.g. loan-related objects are grouped together in one class. A class can hold sub-classes.

Crosstab: A crosstab is another type of table, which displays values for dimensions across the top axis and on the left axis. The body of the report displays values corresponding to the intersection of the dimensions. You can reformat a vertical table into a crosstab, if you wish.

Data Manager Pane: Classes and objects are displayed along the left side of the screen.

Designer: A client-based tool for creating and loading universes. Designer also controls database connections and user restrictions.

Detail: A detail object provides additional information about a particular dimension.

Dimension: A dimension object is typically textual information by which users analyze numeric measures.

Document: Web Intelligence objects are often called documents. Documents are made up of queries and reports.

InfoView: A portal to view objects. It is also possible to refresh and distribute documents, to schedule objects to be refreshed, and to access Web Intelligence from InfoView.

Instances: When an object is scheduled to be run and refreshed, the resulting document is saved as an instance of the original document. These instances are saved as documents and are copies of the document which reflect the data at the time the object was scheduled.

Measure: A measure is a number that users wish to analyze.

Merged Dimension: The ability to combine dimensions from various sources into one dimension for display on a report. The original dimensions are not changed.

Object: Objects are any type of file or report within the system. These could be Desktop Intelligence reports, Web Intelligence reports, Crystal Reports, or third-party files that can be uploaded, called agnostic objects. This is also the generic term for dimensions, details and measures in a universe.

Prompt: A prompt is a special type of query filter. It is a dynamic filter that displays a question every time you refresh the data in a document. You answer prompts by typing or selecting the value(s) you want to view before you refresh the data.

Purge: Allows the removal of all the data in a document while still leaving the document structure intact. If a document has multiple queries, you can purge data from specific queries or all queries.

Query: A query is the selection of objects you wish to report on, with any query filters required. The query is converted into SQL, run against the source database, and the results returned for use in a document.

Query Filters: Query filters are conditions that limit the data returned in a query. A query filter is created in the Query Panel and affects the SQL generated. Objects are used to narrow your search range.

Query Panel: The area within Web Intelligence where queries are built. It is made up of four sections: the Query Panel Toolbar, the Data Manager Pane, the Result Objects Pane, and the Query Filters Pane.

Query Panel Toolbar: Allows the user to perform various functions such as executing a query.

Report: A report is the formatted output from a query or queries, which can be viewed and refreshed in InfoView and amended in Web Intelligence and Desktop Intelligence.

Report Filter: A report filter is created within a report and limits the data displayed on the report. Report filters are created in the Report Panel and only affect the data being displayed.

Result Objects Pane: These are the objects a user has chosen to be displayed in a report.

Section: A section is part of a report. It is possible to section a report based on an object, e.g. location. The blocks within each section are then displayed in terms of that object.

Universe: A universe is a group of objects that are mapped to the relevant attributes in the database. These objects are given familiar names, e.g. borrower type or item type. There are different universes for different business areas.

Variable: A variable is a named formula. Variables provide a mechanism for reusing formulas without having to set them up every time you use them in a report.

Variable Editor: A dialog box that offers all of the selection options for creating and editing variables.

Vertical Table: Vertical tables display header cells at the top of the report and the corresponding data in columns.

Web Intelligence: A web-based document creation tool that uses universes to create documents. It is also possible to format data in multiple ways and formats.

Data Modeling: Basic Concepts


There are actually two types of inter-related data models:

1. Logical Model: the business view of the data, which strictly follows the rules of normalization.
2. Physical Model: the physical view of the data. Rules of normalization may be relaxed somewhat for efficiency.

The basic steps in data modeling are:

1. Identify the things you are interested in. These are called entities in the logical model and tables in the physical model.
2. Identify relationships between entities. These become the lines drawn between each entity. For example, a manager has workers, workers are assigned workflows, and workflows are supported by tools. Note that managers are not directly related to tools, but are indirectly related through workers and workflows. Relationships can be either one or many. For example, a manager can have many workers, but each worker only has one manager.
3. Identify the keys for each entity. Keys are used to look up a row of data. Keys describe the minimum amount of data necessary to identify a particular thing. Often, a computer-generated number is used for the key. For example, an employee ID number (generated by the HR software) might be used to identify each manager and each worker.
4. Identify the attributes for each entity. Attributes are the information which needs to be kept about each entity. For example, a worker's attributes might include the worker's name and email address.

Data models are expressed in an Entity Relationship Diagram (ERD) and a Data Element Dictionary (DED). The ERD is a schematic representation of the database while the DED is a text representation. The two must be combined to get a clear picture of the data model.

Entity Relationship Diagram (ERD)

In the ERD, tables are represented as a pair of stacked boxes. The SQL name for the table is just above the box. The physical database key for the table is listed in the top box. The attributes (columns) of the table are listed in the bottom box. The symbology of the table links is: O for zero, a plain line for one, and a crow's foot (/|\) for many. Table links are read by looking at one table and then at the link attached to the second table. For example, consider Table_A (key Key_A; attributes A1, A2, A3) linked to Table_B (key Key_B; attributes B1, B2, plus Key_A marked FK).

Here, each row in Table_A can have zero to many connections to Table_B. Each row in Table_B must have one and only one connection to a row in Table_A. An example of this type of relationship might be managers and workers. A new manager might not be assigned any workers; however, managers are typically assigned several workers. Each worker must have one and only one manager. The actual column which connects the tables is identified by finding a common key column between the two tables, in this case Key_A. The (FK) behind Key_A in Table_B indicates that Key_A is a foreign key. A foreign key is used to look up a unique row in another table, in this case Table_A.

Here are some sample relationships with English translations. A manager may be associated with any number of workers, while a worker is associated with one and only one manager; the implication is that a manager must exist before a worker can be added. Alternatively, a manager must be associated with one or more workers, while a worker may optionally be associated with a single manager; the implication is that a worker must exist before a manager can be added.

One-to-many relationships can be either identifying or non-identifying. Identifying relationships are designated with a solid line and mean that if a row in the master table is deleted, then the row in the child table must be deleted. A corollary to this rule is that the key of the master table must be part of the key in the child table. Non-identifying relationships are designated with a broken line. A non-identifying relationship means that if a row in the one table is deleted, the corresponding row of the detail table need not be deleted. A corollary to that rule is that non-identifying relationship columns are attributes, not keys.

Physical data models sometimes aggregate a set of tables where each table consists of a one-up numeric key and a tag/name/label. These tables typically drive pick lists in the application. The aggregated tables are described in a master pick-list table while the items in each list are in the pick-item table. Since the numbers are used to store references to the labels, the text of a label can be changed without affecting the data stored in other tables. In other words, each place the tag is used will automatically get the new name associated with the number.
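The Table_A/Table_B relationship described above can be sketched in SQL. The manager and worker tables below are hypothetical, and Python's sqlite3 is used only because it lets the foreign key rule be demonstrated end to end:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK checks off by default

# Table_A equivalent: the 'one' side of the relationship.
con.execute("CREATE TABLE manager (manager_id INTEGER PRIMARY KEY, name TEXT)")

# Table_B equivalent: the 'many' side; manager_id is the foreign key (FK).
con.execute("""CREATE TABLE worker (
    worker_id  INTEGER PRIMARY KEY,
    name       TEXT,
    manager_id INTEGER NOT NULL REFERENCES manager(manager_id))""")

con.execute("INSERT INTO manager VALUES (1, 'Alice')")
con.execute("INSERT INTO worker VALUES (10, 'Bob', 1)")  # OK: manager 1 exists

# A worker whose manager does not exist violates the foreign key.
try:
    con.execute("INSERT INTO worker VALUES (11, 'Carol', 99)")
    fk_violation = False
except sqlite3.IntegrityError:
    fk_violation = True

print(fk_violation)  # True: the insert was rejected
```

This is the "a manager must exist before a worker can be added" rule from the sample relationships, enforced by the database rather than by convention.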

Step By Step Method to do Clustering In Business Object


Following are the steps you need to follow for clustering. Before adding a CMS to a cluster, make sure that:

- Machines are at the same OS and database patch levels
- CMS machines use the same hardware
- Each CMS uses the same database and access method
- Server date and time are synchronized between machines

Follow these steps to configure a cluster between two servers:

1. Install and configure a BO XI 3.1 server (server1). Test InfoView and the Central Management Console (CMC) to ensure that it is up and running.
2. Install BO XI 3.1 on server2 (Expand Install option), pointing to the same CMS database as server1, using the same database credentials.
3. Both servers are now part of a cluster with the same name as the first server (server1).
4. Bring up the Central Configuration Manager (CCM) on server1 (and later on server2): stop the SIA, right-click Properties, go to the Configuration tab, and change or provide a cluster name (e.g. BOPROD).
5. Starting the CMS on server1 updates the cluster name in the CMS database.
6. Stop the SIA on server2, right-click Properties, go to the Configuration tab, and provide the same cluster name (e.g. @BOPROD).
7. Configure shared storage (FileStore). Identify a location for the shared FileStore and ensure that the users running server1 and server2 have read/write access to this location, e.g. \\xyz\share. Create a folder structure called FileStore with two subfolders, Input and Output.
8. On both servers, using the CMC with an enterprise admin account, change the Input FRS and Output FRS login accounts (in Properties) to the admin user which has access to the shared FileStore. (You may have to stop the processes to do this and start them again when done.)
9. On server1 (and later server2), using the CMC with all processes running, go to Home > Servers, click Input FRS, and in the Properties tab change Root Directory to point to \\xyz\share\FileStore\Input. Click Update and restart the Input FRS from the CMC.
10. Similarly, go to Home > Servers, click Output FRS, and in the Properties tab change Root Directory to point to \\xyz\share\FileStore\Output. Click Update and restart the Output FRS from the CMC.
11. On both server1 and server2, ensure the cluster is configured properly:
a. Log in to the CMC and go to Home > Settings > Cluster. This tells you whether the cluster and the cluster members are configured properly.
b. In regedit, check HKEY_LOCAL_MACHINE\SOFTWARE\Business Objects\Suite 12.0\Enterprise\CMSClusterMembers for an @clustername entry with values such as server1;server2.
12. There are a few other places where the cluster name change needs to be made.
a. For IIS: in <BusinessObjects Installdir>\BusinessObjects Enterprise 12.0\WebContent\InfoViewApp\InfoViewApp\Web.config, search for cms and change the value to @clustername.
b. For Java application servers like Tomcat, the following changes are required in the web.xml files for InfoView. Change the context-param for the cms.default entry to look like this:

<context-param>
<param-name>cms.default</param-name>
<param-value>@CLUSTERNAME</param-value>
</context-param>

Then change the context-param for the cms.clusters entry to look like this (you may need to copy/paste the snippet right after the comment in your web.xml):

<context-param>
<param-name>cms.clusters</param-name>
<param-value>@CLUSTERNAME</param-value>
</context-param>
<context-param>
<param-name>cms.clusters.CLUSTERNAME</param-name>
<param-value>server1,server2</param-value>
</context-param>

Besides this, you will also need to search for cms and change the entry to @clustername in the web.xml files for the CmcApp and businessobjects applications as well, and in Initconfig.properties if you have Performance Management installed.
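Step 12b lends itself to scripting when many web.xml files need the same change. The sketch below is an assumption about how one might automate it with Python's standard library; the file content and cluster name are illustrative only:

```python
# Point an InfoView web.xml cms.default entry at a CMS cluster.
import xml.etree.ElementTree as ET

WEB_XML = """<web-app>
  <context-param>
    <param-name>cms.default</param-name>
    <param-value>server1:6400</param-value>
  </context-param>
</web-app>"""

def set_cms_default(xml_text, cluster_name):
    root = ET.fromstring(xml_text)
    for param in root.iter("context-param"):
        if param.findtext("param-name") == "cms.default":
            # Cluster names are always prefixed with the @ symbol.
            param.find("param-value").text = "@" + cluster_name
    return ET.tostring(root, encoding="unicode")

updated = set_cms_default(WEB_XML, "BOPROD")
print("@BOPROD" in updated)  # True
```

In practice you would read and write the real web.xml files for InfoViewApp, CmcApp, and businessobjects, and take backups before editing.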

DaysBetween function
Description: Returns the number of days between two dates.
Function group: Date
Syntax: integer DaysBetween(date first_date, date last_date)

Input: first_date is the first date in the range; last_date is the last date in the range.
Output: The number of days between the two dates.

Example: DaysBetween(<Reservation Date>, <Invoice Date>) returns 93 when <Reservation Date> is 2 January 2002 and <Invoice Date> is 5 April 2002.

Remarks: If you use a constant as input, surround it with single quotes, for example DaysBetween('5/4/2002','5/7/2002'). If last_date is before first_date, DaysBetween() returns a negative number.
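The same behavior is easy to mimic with Python's datetime, shown here only as a stand-in for the report function, not as its implementation:

```python
from datetime import date

def days_between(first_date, last_date):
    # Mirrors DaysBetween(): the day count is signed, so swapping the
    # arguments yields a negative number, as the Remarks note.
    return (last_date - first_date).days

print(days_between(date(2002, 7, 1), date(2002, 7, 31)))  # 30
print(days_between(date(2002, 7, 31), date(2002, 7, 1)))  # -30
```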

Changing the name of a CMS cluster


By default, a CMS cluster name reflects the name of the first CMS that you install, but the cluster name is prefixed by the @ symbol. For instance, if your existing CMS is called BUSINESSOBJECTSCMS, then the default cluster name is @BUSINESSOBJECTSCMS. This procedure allows you to change the name of a cluster that is already installed and running. To change the cluster name, you need only stop one of the CMS cluster members. The remaining CMS cluster members are dynamically notified of the change. For optimal performance, after changing the name of the CMS cluster, reconfigure each Business Objects server so that it registers with the CMS cluster, rather than with an individual CMS.

To change the cluster name on Windows


1. Use the CCM to stop any Central Management Server that is a member of the cluster.
2. With the CMS selected, click Properties on the toolbar.
3. Click the Configuration tab.
4. Select the Change Cluster Name to check box.
5. Type the new name for the cluster.
6. Click OK and then start the Central Management Server. The CMS cluster name is now changed. All other CMS cluster members are dynamically notified of the new cluster name (although it may take several minutes for your changes to propagate across cluster members).
7. Go to the Servers management area of the CMC and check that all of your servers remain enabled. If necessary, enable any servers that have been disabled by your changes.
To change the cluster name on UNIX


Use the cmsdbsetup.sh script. For reference, see the BusinessObjects Enterprise Administrator's Reference Guide.

To register servers with the CMS cluster on Windows


1. Use the CCM to stop a Business Objects server.
2. Select the server from the list, and then click Properties.
3. Click the Configuration tab.
4. In the CMS Name box, type the name of the cluster. The name of the cluster begins with the @ symbol. For example, if the cluster name was changed to ENTERPRISE, type @ENTERPRISE in the box.
5. Click OK, and then start the server. Repeat for each Business Objects server in your installation.

To register servers with the CMS cluster on UNIX

1. Use ccm.sh to stop each server.
2. Use a text editor such as vi to open the ccm.config file found in the root directory of your BusinessObjects Enterprise installation.
3. Find the -ns command in the launch string for each server, and change the name of the CMS to the name of the CMS cluster. The name of the cluster begins with the @ symbol. For example, if the cluster name was changed to ENTERPRISE, type @ENTERPRISE. Do not include a port number with the cluster name.
4. Save the file, and then use ccm.sh to restart the servers.

DataProvider function
Description: Returns the name of a data provider.
Function group: Data Provider
Syntax: string DataProvider(variable any_variable)

Input: any_variable is any variable in a report.
Output: The name of the data provider.

Example: DataProvider(<Revenue>) returns "Query 1 with eFashion" if the data provider is called eFashion.

Remarks: You should use DataProvider() as input to all other data provider functions.

Cascading Prompts using Custom Hierarchies


Cascading Prompts
Prompting a user, when they run a Webi report, to select from a list of values is easy stuff. But sometimes you may want to provide your users with a hierarchical structure in a prompt. This is easy to set up, but it is asked about often enough that I thought a quick post couldn't hurt.

Environment:

- BOBJ XI3.1 SP3 FP3.5
- Universe Designer
- InfoView/Webi
- MS SQL Server 2008 R2
- Microsoft AdventureWorks database

Assumptions:

- Familiarity with the basics of working with Webi
- Familiarity with SQL databases

Let's get started. In this example, I want to prompt my user to select a city. Rather than present them with a flat list of hundreds of cities, I would like to display a tree view that allows them to see a list of countries, expand a country to see a list of states, and expand a state to see a list of cities.

1. In Universe Designer, click Tools > Lists of Values > Create cascading Lists of Values.

2. Select Custom Hierarchies in the bottom left if you have custom hierarchies set up.

3. Select the hierarchy that you want to use for a cascading list. In this example, I'll select Location, which contains Country, State and City. Then click the right arrow to move that hierarchy to the right.

4. Click Generate LOVs.

5. If you had already created a list of values for one of the objects, you will be prompted to overwrite, in which case, click OK.

6. Click OK.

7. Open up Webi in InfoView and edit the query.

8. Drag the lowest level of the hierarchy into the Query Filters section and set it as a prompt.

9. Click Run Query and you will see the prompt showing a tree structure with Country, State and City.

Multiple queries, combined queries and synchronized queries compared in WEBI reports
Multiple queries

You can include one or multiple queries in a Web Intelligence document. When you include multiple queries, those queries can be based on a single universe or on multiple universes available in InfoView. For example, you can include product sales data and customer data in the same document. In this case, your corporate data for product line sales is available on one universe and data on customers is available on another universe. You want to present product line sales results and information on customer age groups in the same report. To do this, you create a single document that includes two queries, one query on each universe. You can then include and format results from both queries on the same report. Defining multiple queries in a single document is necessary when the data you want to include in a document is available on multiple universes, or when you want to create several differently-focused queries on the same universe. You can define multiple queries when you build a new document or add more queries to an existing document. You can present the information from all of the queries on a single report or on multiple reports in the same document.

Multiple queries, combined queries and synchronized queries compared

Multiple queries can be related in a Web Intelligence document in different ways. Basic multiple queries draw unrelated data from different sources. Synchronized queries relate the data from different queries around a dimension that contains data common to both queries. These dimensions are called merged dimensions. You merge dimensions in the Web Intelligence reporting interface after you have created and run your multiple queries. Combined queries are a special kind of query created in the Web Intelligence query interface. Combined queries generate SQL containing the UNION, INTERSECT and MINUS operators (if the database supports them) or simulate the effect of these operators. Combined queries allow you to answer complex business questions that are difficult to formulate using standard queries. You cannot create combined queries in Query HTML.

To add a query

1. Click Add a Query. The Add Query window appears.
2. Select the universe you want to use to build the query. You can create a new query on a universe already used in the document or select a different universe. By default, the universe used in the current document is displayed first.
3. Click OK.
4. Define the objects, filters, scope of analysis, and properties you want for the query. The data content, scope of analysis, and filters you define here only apply to the selected query, as do the query properties.

To duplicate a query

If you want to build a different query on a universe already included in the document, you can duplicate the existing query on that universe and then modify it, instead of starting from scratch.

1. Select the query you want to duplicate by right-clicking the appropriate Query tab at the bottom of the report panel.
2. Select Duplicate.
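The set operators behind combined queries can be sketched directly in SQL. The customer tables below are hypothetical, and SQLite (used here through Python) spells MINUS as EXCEPT:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ordered (customer TEXT)")
con.execute("CREATE TABLE invoiced (customer TEXT)")
con.executemany("INSERT INTO ordered VALUES (?)", [("Ann",), ("Bob",), ("Cid",)])
con.executemany("INSERT INTO invoiced VALUES (?)", [("Bob",), ("Cid",), ("Dee",)])

def combined(op):
    # Combine the two queries with the given set operator.
    rows = con.execute(
        f"SELECT customer FROM ordered {op} SELECT customer FROM invoiced")
    return sorted(r[0] for r in rows)

print(combined("UNION"))      # in either query
print(combined("INTERSECT"))  # in both queries
print(combined("EXCEPT"))     # ordered but never invoiced (MINUS elsewhere)
```

INTERSECT and EXCEPT are how a combined query answers questions like "which customers placed orders but were never invoiced", which is awkward to express as a single standard query.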

How to Resolve Ambiguous Relationships


Resolving Ambiguous Relationships

Ambiguous relationships occur when the data represented by a query subject or dimension can be viewed in more than one context or role, or can be joined in more than one way. The most common ambiguous relationships are:

- Role-Playing Dimensions
- Loop Joins
- Reflexive and Recursive Relationships

You can use the Model Advisor to highlight relationships that may cause issues for query generation and resolve them in one of the ways described below. The main goal is to enable clear query paths.

Role-Playing Dimensions: A table with multiple valid relationships between itself and another table is known as a role-playing dimension. This is most commonly seen in dimensions such as Time and Customer.

Loop Joins: Loop joins in the model are typically a source of unpredictable behavior. This does not include star-schema loop joins.

Reflexive and Recursive Relationships: Reflexive and recursive relationships imply two or more levels of granularity. IBM Cognos Framework Manager imports reflexive relationships but does not use them when executing queries. Reflexive relationships, which are self-joins, are shown in the model for the purpose of representation only.

CMS DB and Audit DB in SAP BO


The CMS system database is used to store BI platform information, such as user, server, folder, document, configuration, and authentication details. It is maintained by the Central Management Server (CMS), and is sometimes referred to as the system database or repository. During installation of BI platform you are asked to which database you want to connect. Once you select a database, the setup program creates the tables and views necessary to utilize the database

as the system database. During the installation, the default servers, users, groups, and content are added to this database. For the Linux pattern, a Sybase ASE 15.7 database client and server was used. Before deploying our pattern we requested that a database user and schema be created for our CMS database. The database users require read and write rights, as well as table creation rights, on the schema. This pattern reviews the Sybase client configuration in Sybase Middleware. The CMS database is a central and critical component of the Business Intelligence platform architecture. A single database server is used in the Linux pattern to host the CMS database, but in a production environment, redundancy and appropriate database recovery policies are necessary. For more information on the CMS database and other servers within the BI platform, see the BIP 4.0 Administrator's Guide.

Each BI platform environment requires a unique set of users/schemas. If you use an existing schema, the data is overwritten and your existing system is lost.
Below is an example of how you could name your user accounts/schemas. The user name and schema are often the same.

Stage of Deployment | CMS User/Schema Name | Audit User/Schema Name
Proof of concept (POC) | BI4CMSPOC | BI4AUDPOC
Development | BI4CMSDEV | BI4AUDDEV
Quality Assurance | BI4CMSQA | BI4AUDQA
Production | BI4CMSPROD | BI4AUDPROD

Details on the Sybase ASE 15.7 database used for the CMS database in our pattern
Two requirements for Sybase ASE are the following:

You must use a unicode character set. You must set a page size of 8 KB.

CMS Database Overview for our Linux pattern

Version: Sybase ASE 15.7
Database Name: Cms57u05
Character Encoding: UTF-8
Page Size: 8KB
Server Name: Cms57u05
Machine: Cmsdb05
Schema: SAPCMS
Username: SAPCMS

For our database server we used the system parameters outlined below. These are not the official recommendations; you should consult with your Database Administrator before making any changes to your database servers.

Data cache (buffer pool): make 250MB of memory available
EXEC sp_configure 'max memory', 500000
go
EXEC sp_cacheconfig 'default data cache', '200.000M'
go

Procedure cache
EXEC sp_configure 'procedure cache size', 27680
go

Lock granularity
EXEC sp_configure 'lock scheme', 0, datarows
go

Parallel processing: set number of engines equal to the number of CPUs
EXEC sp_configure 'max online engines', 8
go
EXEC sp_configure 'number of engines at startup', 8
go

Number of connections
EXEC sp_configure 'number of remote connections', 100
go
EXEC sp_configure 'number of remote logins', 100
go
EXEC sp_configure 'number of user connections', 100
go

Number of tablespaces
EXEC sp_configure 'number of devices', 25
go

Number of open objects, indexes, and partitions
EXEC sp_configure 'open objects', 2000
go
EXEC sp_configure 'open indexes', 4000
go
EXEC sp_configure 'open partitions', 3000
go
