
http://forums.sdn.sap.com/thread.jspa?threadID=357803

1. Different kinds of extractors: LO Cockpit extractors are SAP standard, pre-defined extractors/DataSources for loading data into BW. CO-PA is a customer-generated, application-specific DataSource; when we create a CO-PA DataSource we get different field selections, and there are no standard BI cubes for CO-PA. Generic extractors are created on top of tables, views, InfoSet queries or function modules.

2. What is the difference between the extraction structure and the table in a DataSource?
a) The extraction structure is just a technical definition; it does not hold any physical data on the database. The reason you have it in addition to the table/view is that you can hide or deselect fields here, so that the complete table does not need to be transferred to BW.
b) In short: the extract structure defines the fields that will be extracted, and the table contains the records in that structure.
c) The table holds data, but the extract structure does not. The extract structure is formed from the table, and here we have the option to select the fields required for extraction; so the extract structure tells us which fields are used for extraction.

3. Define V3 update (serialized and unserialized), direct delta and queued delta.
a) Direct delta: used when the number of document changes between two delta extractions is small. The recommended limit is 10,000, i.e. if the number of document changes (creation, change, deletion) between two successive delta runs stays within 10,000, direct delta is recommended. Here the number of LUWs is higher, as they are not bundled into one LUW.
b) Queued delta: used if the number of document changes is high (more than 10,000). Data is written into an extraction queue and from there moved to the delta queue; up to 10,000 document changes are cumulated into one LUW.
c) Unserialized V3 update: used only when it is not important that the data be transferred to BW in exactly the same sequence in which it was generated in R/3.
d) Serialized V3 update: the conventional update method, in which document data is collected in the sequence in which it arose and transferred to BW by a batch job. The sequence of the transfer does not always match the sequence in which the data was created. The basic difference lies in the sequence of data transfer: in queued delta it is the same as the order in which the documents were created, whereas in serialized V3 update it is not always the same.

4. Difference between costing-based and account-based CO-PA

Account-based CO-PA is tied to G/L account postings; costing-based CO-PA is derived from value fields. Account-based is more exact to tie out to the G/L; costing-based is not easy to balance against the G/L, is more analytical, and differences are to be expected. Costing-based offers additional revaluation and costing features. Implementing costing-based CO-PA is much more work, but it also gives far more reporting possibilities, especially for margin analysis. Even without paying special attention to it while implementing costing-based CO-PA, you get account-based along with it, with the advantage of reconciled data. Account-based CO-PA gives an abstract-level view, whereas costing-based gives the detailed level; in roughly 90% of cases costing-based is chosen. Account-based CO-PA is organized around G/L account numbers, whereas costing-based is organized around value fields. CO-PA tables: the account-based CO-PA tables are COEJ, COEP, COSS and COSP.

1. What is data integrity? Data integrity is about eliminating duplicate entries in the database; in other words, no duplicate data.

2. What is the difference between SAP BW 3.0B and SAP BW 3.1C/3.5? The best answer here is Business Content: there is additional Business Content provided with BW 3.1C that was not found in BW 3.0B. SAP has a fairly decent reference library on its website that documents the additional objects found in 3.1C.

3. What is the difference between SAP BW 3.5 and 7.0? SAP BW 7.0 is called SAP BI and is one of the components of SAP NetWeaver 2004s. There are many differences between them in areas such as extraction, EDW, reporting, analysis and administration; for a detailed description, please refer to the documentation on help.sap.com. In brief:
1. No update rules or transfer rules (they are no longer mandatory in the data flow).
2. Instead of update rules and transfer rules, a new concept called transformations was introduced.
3. A new DataStore type was introduced in addition to the standard and transactional ones.
4. The ODS is renamed DataStore, in line with global data warehousing standards. There are also many changes in the functionality of the BEx Query Designer, WAD, etc.
5. InfoSets can now include InfoCubes as well.
6. The remodeling transaction helps you add new key figures and characteristics, and handles historical data as well without much hassle. This facility is available only for InfoCubes.
7. The BI Accelerator (for now, only for InfoCubes) helps reduce query runtime by a factor of roughly 10 to 100. The accelerator is a separate appliance and costs extra; vendors are HP and IBM.
8. Monitoring has been improved with a new portal-based cockpit, which means you need an Enterprise Portal resource on the project to implement the portal.
9. Search functionality has improved: you can search for any object, unlike in 3.5.
10. Transformations are in and routines are passé, although you can always revert to the old transactions.

4. What is an index? Indexes are used to locate needed records in a database table quickly. BW uses two types of indexes: B-tree indexes for regular database tables and bitmap indexes for fact tables and aggregate tables.

5. What are KPIs (Key Performance Indicators)? (1) Predefined calculations that render summarized and/or aggregated information, useful in making strategic decisions. (2) Also known as performance measures or performance metrics, KPIs are put in place, and made visible to the organization, to indicate the level of progress and the status of change efforts. KPIs are industry-recognized measurements on which to base critical business decisions. In SAP BW, Business Content KPIs have been developed based on input from customers, partners and industry experts to ensure that they reflect best practices.

6. What is the use of a process chain? Process chains are used to automate the data load process, including the data load itself and all administrative tasks such as index creation/deletion, cube compression, etc., giving highly controlled data loading.

7. Difference between display attribute and navigational attribute? The basic difference is that navigational attributes can be used to drill down in a BEx report, whereas display attributes cannot. A navigational attribute functions more or less like a characteristic within a cube; to enable this, the attribute needs to be switched to navigational in the cube in addition to the master data InfoObject. Navigational attributes can be used for navigation in queries (filtering, drill-down, etc.), and you can also use hierarchies on them, just as with characteristics. An extra feature is the possibility of changing your history (see the relevant time-dependency scenarios): if a navigational attribute changes for a characteristic, it changes for all records in the past. A disadvantage is a slow-down in performance.

8. If there is duplicate data in a cube, how would you fix it? Delete the request ID, fix the data in the PSA or ODS, and reload from the PSA/ODS.

9. What are the differences between an ODS and an InfoCube? An ODS holds transactional-level data in a flat table; it is not based on the multidimensional model. An ODS has three tables:
1. Active data table (contains the active data).
2. Change log table (contains the change history for delta updating from the ODS object into other data targets, such as ODS objects or InfoCubes).
3. Activation queue table (holds ODS records that are to be updated but have not yet been activated; the data is deleted after the records have been activated).
A cube, by contrast, holds aggregated data that is not as detailed as an ODS, and it is based on the multidimensional model.

An ODS is a flat structure, essentially one table that contains all the data; most of the time you use an ODS for line-item data and then aggregate that data into an InfoCube. One major difference is the manner of data storage: in an ODS, data is stored in flat (ordinary transparent) tables, whereas a cube is composed of multiple tables arranged in a star schema joined by SIDs, the purpose being multidimensional reporting. In an ODS we can delete or overwrite data during a load, but in a cube only additive loading is possible, with no overwrite.

10. What is the use of the change log table? The change log is used for delta updates to the target; it stores all changes per request and updates the target.

11. Difference between InfoSet and MultiProvider
a) The operation in a MultiProvider is a union, whereas in an InfoSet it is either an inner join or an outer join.
b) You can add InfoCubes, ODS objects and InfoObjects to a MultiProvider, whereas in an InfoSet you can only have ODS objects and InfoObjects.
c) An InfoSet is an InfoProvider that joins data from ODS objects and InfoObjects (with master data); the join may be an outer join or an inner join. A MultiProvider, on the other hand, can be created on all types of InfoProviders (cubes, ODS objects, InfoObjects), and these InfoProviders are connected to one another by a union operation.
d) A union operation combines the data from these objects into the MultiProvider: the system constructs the union set of the data sets involved, so all values of these data sets are combined. By comparison, InfoSets are created using joins, which only combine values that appear in both tables; in contrast to a union, joins form the intersection of the tables (a small ABAP sketch of this set logic follows below).
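To make the union-versus-join distinction concrete, here is a minimal ABAP sketch; the data is invented, and the two internal tables simply stand in for two InfoProviders sharing a customer key:

REPORT z_union_vs_join.

DATA: lt_a     TYPE TABLE OF string,   " rows of provider A
      lt_b     TYPE TABLE OF string,   " rows of provider B
      lt_union TYPE TABLE OF string,
      lv_key   TYPE string.

APPEND 'C100' TO lt_a. APPEND 'C200' TO lt_a.
APPEND 'C200' TO lt_b. APPEND 'C300' TO lt_b.

* MultiProvider-style UNION: every row of both sets is kept
* -> C100, C200, C200, C300
lt_union = lt_a.
APPEND LINES OF lt_b TO lt_union.

* InfoSet-style INNER JOIN: only keys present in BOTH sets survive
* -> C200
LOOP AT lt_a INTO lv_key.
  READ TABLE lt_b WITH KEY table_line = lv_key TRANSPORTING NO FIELDS.
  IF sy-subrc = 0.
    WRITE: / lv_key.
  ENDIF.
ENDLOOP.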

12. What is the transaction code for data archiving, and what is its advantage? SARA. Advantages: minimized space, better query performance and better load performance.

13. What are the data-loading tuning measures from R/3 to BW, and from flat file to BW?
a) If you have enhanced an extractor, check your code in user exit RSAP0001 for expensive SQL statements and nested SELECTs, and rectify them.
b) Watch out for ABAP code in transfer and update rules; this can slow down performance.
c) If several extraction jobs run concurrently, there probably are not enough system resources to dedicate to any single extraction job; schedule the jobs judiciously.
d) If you have multiple application servers, balance the load by distributing it among the different servers.
e) Build secondary indexes on the underlying tables of a DataSource to correspond to the fields in the DataSource's selection criteria (indexes on source tables).
f) Try to increase the number of parallel processes so that packages are extracted in parallel instead of sequentially (use the "PSA and data target in parallel" option in the InfoPackage).
g) Buffer the SID number ranges if you load a lot of data at once.
h) Load master data before loading transaction data.
i) Use SAP-delivered extractors as much as possible.
j) If your source is not an SAP system but a flat file, make sure the file resides on the application server and not on the client machine. Files stored in ASCII format load faster than those stored in CSV format.

14. Performance monitoring and analysis tools in BW
a) System trace: transaction ST01 lets you run various levels of system trace, such as authorization checks, SQL traces, table/buffer traces, etc. It is a general Basis tool but can be leveraged for BW.
b) Workload analysis: transaction ST03.
c) Database performance analysis: transaction ST04 tells you everything you need to know about what is happening at the database level.
d) Performance analysis: transaction ST05 enables performance traces in different areas, namely SQL trace, enqueue trace, RFC trace and buffer trace.
e) BW technical content analysis: the SAP standard Business Content 0BWTCT, which needs to be activated; it contains several InfoCubes, ODS objects and MultiProviders with a variety of performance-related information.
f) BW monitor: reached independently of an InfoPackage by running transaction RSMO, or via an InfoPackage. An important feature of this tool is the ability to retrieve important IDoc information.
g) ABAP runtime analysis: use transaction SE30 to run a runtime analysis of a transaction, program or function module. It is a very helpful tool if you know the program or routine you suspect of causing a performance bottleneck.

15. Difference between transfer rules and update rules
a) Transfer rules: when we maintain the transfer structure and the communication structure, we use the transfer rules to determine how the transfer structure fields are assigned to the communication structure InfoObjects. We can arrange a 1:1 assignment, or fill InfoObjects using routines, formulas or constants. Update rules: update rules specify how the data (key figures, time characteristics, characteristics) is updated into data targets from the communication structure of an InfoSource; you are therefore connecting an InfoSource to a data target.
b) Transfer rules are linked to an InfoSource; update rules are linked to an InfoProvider (InfoCube, ODS).
i. Transfer rules are source-system dependent, whereas update rules are data-target dependent.
ii. The number of transfer rules equals the number of source systems for a data target.
iii. Transfer rules are mainly for data cleansing and data formatting, whereas in the update rules you write the business rules for your data target.
iv. Currency translations are possible in update rules.
c) Using transfer rules, you assign DataSource fields to the corresponding InfoObjects of the InfoSource; transfer rules give you the possibility of cleansing data before it is loaded into BW. Update rules describe how the data is updated into the InfoProvider from the communication structure of an InfoSource; if several InfoCubes or ODS objects are connected to one InfoSource, you can adjust the data for each of them using update rules.
Only in update rules:
a. You can use return tables in update rules to split an incoming data package record into multiple records; this is not possible in transfer rules (see the sketch below).
b. Currency conversion is not possible in transfer rules.
c. If you have a key figure calculated from base key figures, you do that calculation only in the update rules.
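As an illustration of point (a), here is a minimal sketch of the body of a BW 3.x update-rule routine with a return table. The frame (RESULT_TABLE, COMM_STRUCTURE, RETURNCODE) is generated by the system; the /BIC/ZMONTH and /BIC/ZAMOUNT fields and the "split a yearly amount into twelve monthly rows" logic are purely hypothetical:

* Routine body only; the surrounding FORM frame is generated by BW.
DATA: l_row LIKE LINE OF result_table.

DO 12 TIMES.
  MOVE-CORRESPONDING comm_structure TO l_row.
  l_row-/bic/zmonth  = sy-index.                        " months 1..12
  l_row-/bic/zamount = comm_structure-/bic/zamount / 12.
  APPEND l_row TO result_table.
ENDDO.

returncode = 0.   " 0 = rows accepted

One incoming record thus becomes twelve rows in the data target, which is exactly what a plain transfer-rule routine cannot do.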

16. What is OSS? OSS is the Online Service System run by SAP to support its customers. You can access it with transaction OSS1, or visit service.sap.com and log in with your user name and password.

17. How do you transport a BW object? Follow these steps:
i. RSA1 > Transport Connection.
ii. In the right-hand window there is a category "All objects according to type".
iii. Select the type of object you want to transport.
iv. Expand it; there is a "Select objects" entry. Double-click it to get the list of objects and select yours.
v. Continue.
vi. Proceed with the selection, picking all the objects you want to transport.
vii. Click the Transport Objects icon (truck symbol).
viii. This creates a request; note down the request number.
ix. Go to the Transport Organizer (transaction SE01).
x. On the Display tab, enter the request and display it.
xi. Check whether the transport request contains the required objects; if not, edit it, and if it does, release the request. That's it; your coordinator/Basis person will move the request to quality or production.

18. How do you unlock objects in the Transport Organizer? Go to SE03 --> Request/Task --> Unlock Objects, enter your request, select Unlock and execute. This will unlock the request.

19. What is an InfoPackage group? An InfoPackage group is a collection of InfoPackages.

20. Differences between InfoPackage groups and process chains
i. InfoPackage groups are used only to group InfoPackages, whereas process chains are used to automate all processes.
ii. InfoPackage groups: used to group all relevant InfoPackages (automation of a group of InfoPackages, for data loads only), with the possibility of sequencing the loads. Process chains: used to automate all processes, including data loads and all administrative tasks such as index creation/deletion, cube compression, etc., giving highly controlled data loading.
iii. InfoPackage groups/event chains are older methods of scheduling/automation. Process chains are newer and provide more capabilities: we can use ABAP programs and many additional features, such as ODS activation and sending e-mails to users based on the success or failure of data loads (a sketch of starting a chain from ABAP follows below).
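On point iii, a chain can also be triggered from your own ABAP, for example at the end of an upstream job. A minimal sketch, assuming the standard API function RSPC_API_CHAIN_START is available in your release; the chain name ZPC_SALES_LOAD is hypothetical:

REPORT z_start_chain.

DATA: lv_logid TYPE rspc_logid.

* Start the chain and remember the log id of this run,
* e.g. for later status checks in RSPC.
CALL FUNCTION 'RSPC_API_CHAIN_START'
  EXPORTING
    i_chain = 'ZPC_SALES_LOAD'          " hypothetical chain name
  IMPORTING
    e_logid = lv_logid.

WRITE: / 'Chain started, log id:', lv_logid.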

21. What are the critical issues you faced and how did you solve them? Find your own answer based on your experience.

22. What is a conversion routine?
a) Conversion routines are used to convert data types from internal format to external/display format, or vice versa.
b) They are implemented as function modules.
c) There are many such function modules, following the naming pattern CONVERSION_EXIT_XXXXX_INPUT / CONVERSION_EXIT_XXXXX_OUTPUT, for example CONVERSION_EXIT_ALPHA_INPUT and CONVERSION_EXIT_ALPHA_OUTPUT.

23. Difference between start routine and conversion routine: in a start routine you can modify whole data packages during data loading, whereas a conversion routine usually refers to routines bound to InfoObjects (or data elements) for conversion between internal and display format, as sketched below.
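To make the internal/external distinction concrete, here is a minimal sketch calling the standard ALPHA conversion exits directly (the value '4711' is invented):

REPORT z_alpha_demo.

DATA: lv_internal(18) TYPE c,
      lv_external(18) TYPE c.

* External/display -> internal format: pads with leading zeros.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
  EXPORTING
    input  = '4711'
  IMPORTING
    output = lv_internal.
* lv_internal is now '000000000000004711'

* Internal -> external/display format: strips the leading zeros.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_OUTPUT'
  EXPORTING
    input  = lv_internal
  IMPORTING
    output = lv_external.
* lv_external is now '4711'

WRITE: / lv_internal, / lv_external.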

24. What is the use of setup tables in LO extraction? Setup tables store your historical data before it is updated into the target system. Once you have filled the setup tables, you do not need to hit the application tables again and again, which improves system performance.

25. The R/3-to-ODS delta update is fine, but the ODS-to-cube delta is broken. How do you fix it?
i. Check the monitor (RSMO) for the error explanation; based on the explanation, you can determine the reason.
ii. Check the timing of the delta loads from R/3 to ODS to cube, in case they conflict after the ODS load.
iii. Check the mapping in the transfer/update rules.
iv. The RFC connection may have failed.
v. BW may not be set up as a source system (needed for the ODS-to-cube datamart load).
vi. A short dump may have occurred (for many reasons: full tablespace, timeout, SQL errors, an IDoc not received correctly, ...).
vii. There may be a failed load before the last one, and so on.

26. What is a short dump and how do you rectify it? A short dump indicates that an ABAP runtime error has occurred; the error messages are written to R/3 database tables. You can view short dumps with transaction ST22. You get them because of runtime errors, for example the termination of a background job, which can have many causes. In ST22 you can enter the job's technical name and your user ID to see the status of jobs in the system, and analyze the dump from there; ST22 works in both R/3 and BW. Alternatively, to call the analysis method, choose Tools --> ABAP Workbench --> Test --> Dump Analysis from the SAP Easy Access menu.

SAP BW Questions - Some Real Questions


1. Differences between 3.0 and 3.5
2. Differences between 3.5 and BI 7.0
3. Can you explain a life cycle in brief?
4. Difference between a table and a structure
5. Steps of LO extraction
6. Steps of LIS extraction
7. Steps of generic extraction
8. What is an index, and how do you increase performance using indexes?
9. How do you load deltas into an ODS and a cube?
10. Examples of errors while loading data, and how do you resolve them?
11. How do you maintain work history until a ticket is closed?
12. What is reconciliation?
13. What methodology do you use before implementation?
14. What are the roles and responsibilities during an implementation, and in support?

Major differences between SAP BW 3.5 and SAP BI 7.0:
1. InfoSets can now include InfoCubes as well.
2. The remodeling transaction helps you add new key figures and characteristics, and handles historical data as well without much hassle. This is only for InfoCubes.
3. The BI Accelerator (for now, only for InfoCubes) helps reduce query runtime by a factor of roughly 10 to 100. The accelerator is a separate appliance and costs extra; vendors are HP and IBM.
4. Monitoring has been improved with a new portal-based cockpit, which means you need an Enterprise Portal resource on the project to implement the portal.
5. Search functionality has improved: you can search for any object, unlike in 3.5.
6. Transformations are in and routines are passé, although you can always revert to the old transactions.
7. The Data Warehousing Workbench replaces the Administrator Workbench.
8. Functional enhancements have been made to the DataStore object: a new type of DataStore object, and enhanced settings for performance optimization of DataStore objects.
9. The transformation replaces the transfer and update rules.
10. New authorization objects have been added.
11. Remodeling of InfoProviders supports you in Information Lifecycle Management.

12. The DataSource: there is a new object concept for the DataSource, options for direct access to data have been enhanced, and DataSources in SAP source systems can be activated remotely from BI.
13. There are functional changes to the Persistent Staging Area (PSA).
14. BI supports real-time data acquisition.
15. SAP BW is now formally known as BI (part of NetWeaver 2004s) and implements Enterprise Data Warehousing (EDW). The new features/major differences include:
a) ODS renamed to DataStore.
b) Inclusion of the write-optimized DataStore, which has no change log and whose requests do not need any activation.
c) Unification of transfer and update rules.
d) Introduction of the end routine and the expert routine.
e) Push of XML data into the BI system (into the PSA) without the Service API or delta queue.
f) Introduction of the BI Accelerator, which significantly improves performance.
g) Loading through the PSA has become a must (I am not too sure about this; it looks as though we no longer have the option to bypass the PSA).
16. Yes, loading through the PSA has become mandatory. You cannot skip it, and there is also no IDoc transfer method in BI 7.0. The DTP (Data Transfer Process) replaced the transfer and update rules, and in the transformation we can now write a start routine, expert routine and end routine during the data load.

New features in BI 7.0 compared to earlier versions:
i. New data flow capabilities, such as the Data Transfer Process (DTP) and Real-time Data Acquisition (RDA).
ii. Enhanced, graphical transformation capabilities, such as drag-and-relate options.
iii. One level of transformation, replacing the transfer rules and update rules.
iv. Performance optimization, including the new BI Accelerator feature.
v. User management (including the new concept of analysis authorizations) for more flexible BI end-user authorizations.

2A. A complete life-cycle implementation of SAP BW includes data modeling, data extraction, data loading, reporting and support.

3A. A "full life cycle" refers to the ASAP methodology, which SAP recommends for all its projects:
1. Project Preparation.
2. Business Blueprint.

3. Realization.
4. Final Preparation (fit-gap analysis and testing).
5. Go-Live and Support.
Normally, in the first phase all the management stakeholders sit together in discussion. In the second phase you produce a functional specification and, based on that, a technical specification. In the third phase you actually implement the project, and finally, after testing, you deploy it to production, i.e. go-live. You might get involved in the realization phase; if it is a support project, you come into the picture only after successful deployment.

5A. LO extraction is a one-stop shop for all logistics extractions. LO DataSources come as part of the Business Content in BW; we need to transfer them from Business Content and then activate them from the delivered D version to the active A version. The main transactions:
RSA5 -- transfer the desired DataSource.
LBWE -- maintain the extract structure.
LBWG -- delete the setup tables (see below for why).
OLI*BW -- statistical setup (initialization) runs.
LBWQ -- delete the extractor queue.
SM13 -- delete the update queue.
LBWF -- display the log.
It is always recommended to use queued delta as the update method, since it uses a collective run to schedule all the changed records. Once the init run is over, set the queued-delta collective run to periodic for the further delta loads. Why delete the setup tables? We need to delete the data that is already in them, and also because we change the extract structure by adding the required fields from the R/3 communication structures; we can also select and hide fields here (all fields in blue are mandatory). If the required fields are not available, we go for DataSource enhancement.

6A. LIS uses information structures (IS) to extract the data; it falls under application-specific, customer-generated extraction. The IS number range is 000 to 999: 000-499 is for SAP-defined structures and 500-999 for customer-defined ones. We need to consider two cases, SAP-defined and customer-defined. Let's look at the SAP-defined case first:

In transaction LBW0, enter the information structure and select the "display settings" option to check its status. If you select the "generate DataSource" option, it throws an error saying that you cannot create one in the SAP name range. If you select the "set up LIS environment" option, it creates two tables and one structure with the naming convention 2LIS_<appl. no.>_BIW1, 2LIS_<appl. no.>_BIW2 and 2LIS_<appl. no.>_BIWS. The two tables are used alternately to enable delta records; which one is currently active can be seen in table TMCBIW. Then go to LBW1 to change the version, and to LBW2 to set the "no update" update method. Now you do the full load; after that, go back to LBW1 to change the version, then to LBW2 to set the delta update and the periodic job. Now you can load the delta updates.

For the customer-defined case, you need to create the information structure yourself: MC18 (create), MC19 (change), MC20 (display), then MC21 to MC26 to create the update rules. Then in LBW0, enter the IS name and select "set up LIS environment"; it creates the two tables and the structure. You can use OMO1 to fill the tables, do the full upload, and then set up the delta by selecting "set up delta". You can switch the delta on or off with the activate/deactivate delta option.

In both cases, while migrating the data you need to lock the setup tables (SE14) to prevent users from entering transactions, and unlock them after completion. But LO is preferred to LIS in all aspects: LO provides information structures down to the level of detail, offers enhanced performance, and allows deletion of the setup tables after updating, which we never do in LIS.

7A. We opt for generic extraction whenever the desired DataSource is not available in Business Content, or when it is already in use and we need to regenerate it. When you want to extract data from a table, view, InfoSet or function module, you use generic extraction: you create your own DataSource and activate it.

Steps:
1. The transaction code is RSO2.
2. Give the DataSource name and assign it to a particular application component.
3. Give the short, medium and long descriptions (mandatory).
4. Give the table / function module / view / InfoSet.
5. Continue to the detail screen, where the hide, selection, inversion and field-only options are available:
HIDE is used to hide fields; they will not be transferred from R/3 to BW.
SELECTION makes a field available on the selection screen of the InfoPackage when you schedule it.
INVERSION applies to key figures, which are multiplied by -1 to nullify the original value.
Once the DataSource is generated, you can extract data with it. As for the delta, there are three delta-relevant field types: 0CALDAY, numeric pointer and timestamp. 0CALDAY: the delta should be run only once a day, at the end of the day, with a safety interval of about five minutes. Numeric pointer: used for tables that only allow appending of records, with no changes, e.g. CATSDB, the HR time-management table. Timestamp: with this you can run deltas as often as you like, with an upper-limit safety interval. Whenever there is a 1:1 relation you use a view; for 1:m you use a function module (a skeleton sketch follows below).
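For the function-module case, the extractor is normally created from the SAP template RSAX_BIW_GET_DATA_SIMPLE. Below is a heavily abbreviated sketch of the idea only: the interface is shown as comments, and the source table ZORDERS and extract structure ZBW_ORDERS are hypothetical. Generate the real frame from the template in SE37 and keep its parameter names:

FUNCTION z_bw_get_orders.
* Interface (abbreviated, modeled on RSAX_BIW_GET_DATA_SIMPLE):
*   IMPORTING  i_requnr  i_dsource  i_maxsize  i_initflag
*   TABLES     i_t_select  i_t_fields  e_t_data STRUCTURE zbw_orders
*   EXCEPTIONS no_more_data  error_passed_to_mess_handler

  STATICS: s_cursor TYPE cursor.        " kept between the calls

  IF i_initflag = 'X'.
*   Initialization call: validate and store the selection criteria.
  ELSE.
    IF s_cursor IS INITIAL.
*     First data call: open a database cursor on the source table.
      OPEN CURSOR WITH HOLD s_cursor FOR
        SELECT * FROM zorders.
    ENDIF.
*   Every further call hands one package of i_maxsize rows to BW.
    FETCH NEXT CURSOR s_cursor
      APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
      PACKAGE SIZE i_maxsize.
    IF sy-subrc <> 0.
      CLOSE CURSOR s_cursor.
      RAISE no_more_data.               " signals: extraction finished
    ENDIF.
  ENDIF.
ENDFUNCTION.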

8A. INDEX: indexes are used to improve the performance of data retrieval when executing queries or workbooks. The moment we execute a query and place values in the selection criteria, the indexes act as the retrieval point for the data and fetch it faster. A common analogy is the index of a book: it gives the exact location of each topic, so you can go straight to the right page; indexes at the database level act the same way on the BW side.

9A. Common load errors and solutions:
1. Timestamp error. Solution: activate the DataSource, replicate it, and load.
2. Data error in the PSA. Solution: correct the erroneous data in the PSA and load from there.
3. RFC connection failed. Solution: raise a ticket to the Basis team to restore the connection.
4. Short dump error. Solution: delete the request and load once again.
Loads can fail because of:
a) Invalid characters.
b) A deadlock in the system.
c) A previous load failure, if the load is dependent on other loads.
d) Erroneous records.
e) RFC connection problems (solution: raise a ticket to the Basis team to restore the connection).
f) Missing master data.
g) No data found in the source system.
h) Invalid characters while loading: when you load data you may encounter special characters like @#$%, etc., and BW throws an "invalid characters" error. Go to transaction RSKC, enter all the characters to be permitted, and execute; they are stored in table RSALLOWEDCHAR. Then reload the data; the error disappears because these are now eligible characters. Alternatively, cleanse the field in a routine (see the sketch after this list).
i) The ALEREMOTE user is locked. Normally, ALEREMOTE gets locked because an SM59 RFC destination entry has an incorrect password. You can get a list of all SM59 RFC destinations using ALEREMOTE by using transaction SE16 to search field RFCOPTIONS for the value "*U=ALEREMOTE". You will also need to look for this information in any external R/3 instances that call the instance in which ALEREMOTE is getting locked.
j) Lowercase letters not allowed: look at your InfoObject definition and its "lowercase letters allowed" setting.
k) Extraction job aborted in R/3: it might have been cancelled for running longer than expected, or cancelled by R/3 users if it was hampering performance.
l) DataSource not replicated: if a new DataSource is created in the source system, you should replicate it in dialog mode. During replication you can decide whether it should be replicated as a 3.x DataSource or as the new DataSource type; if you do not run the replication in dialog mode, the DataSource is not replicated.
m) ODS activation errors, which occur mainly for the following reasons:
1. Invalid characters (characters like #).
2. Invalid data values for units/currencies, etc.
3. Invalid values for the data types of characteristics and key figures.
4. Errors generating SID values for some data.
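As referenced in item (h), you can also cleanse a field in a transfer or start routine instead of registering every stray character via RSKC. A minimal sketch: the allowed set below only mirrors the BW default (uppercase letters, digits, common punctuation) and must be extended to whatever your system actually permits, and the example text is invented:

* Mask every character BW would reject with '#'.
CONSTANTS gc_allowed(60) TYPE c VALUE ' !"%&''()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'.

DATA: lv_text(40) TYPE c VALUE 'SMITH & SÖHNE',
      lv_off      TYPE i,
      lv_len      TYPE i.

lv_len = strlen( lv_text ).
DO lv_len TIMES.
  lv_off = sy-index - 1.
  IF lv_text+lv_off(1) CN gc_allowed.   " char not in the allowed set
    lv_text+lv_off(1) = '#'.
  ENDIF.
ENDDO.
* lv_text is now 'SMITH & S#HNE' and will pass the load.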

10A. A ticket is nothing but an issue or a process error that needs to be addressed. There are two types of tickets:
* ITO tickets, which are usually generated automatically by the system when a process fails; for example, when a process chain fails to run, it generates an ITO ticket that we need to address to find the fault.
* Non-ITO tickets, which are issues the client faces and which are forwarded for correction or alternative action.
As for maintaining the work history until a ticket is closed: if you are using Remedy for tickets, this is unfortunately not possible; but it depends on the software you are using, so ask your administrator.

11A. Reconciliation is the comparison of values between the BW target data and the source system data (R/3, JD Edwards, Oracle, ECC, SCM or SRM). In general this is done in three places: comparing the InfoProvider data with the R/3 data, comparing the query display data with the R/3 or ODS data, and checking the key figure data in the InfoProvider against the PSA key figure values.

12A. ASAP methodology:
1. Project Preparation, in which the project team is identified and mobilized, the project standards are defined, and the project work environment is set up.
2. Business Blueprint, in which the business processes are defined and the business blueprint document is designed.
3. Realization, in which the system is configured, knowledge transfer occurs, extensive unit testing is completed, and data mappings and data requirements for migration are defined.
4. Final Preparation, in which final integration testing, stress testing and conversion testing are conducted, and all end users are trained.
5. Go-Live and Support, in which the data is migrated from the legacy systems, the new system is activated, and post-implementation support is provided.

13A. Responsibilities in an implementation project: say it is a fresh implementation of BI, or of SAP in general. First and foremost comes requirements gathering from the client. Based on the requirements, you create the business blueprint of the project, which covers the entire process from the start to the end of the implementation. After the blueprint phase is signed off, the realization phase begins, where the actual development happens. In our example, after installing the necessary software and patches for BI, we discuss with the end users who are going to use the system to gather inputs, such as how they want a report to look and what the key performance indicators (KPIs) for the reports are; basically, a question-and-answer session with the business users. After collecting that information, the development happens on the development servers. When development is complete, the same objects are tested on the quality servers for bugs and errors; when all tests pass, we move all the objects to the production environment and test again that everything works. Then comes the go-live, where actual postings happen from the users and reports are generated from those inputs, available as analytical reports for management decision-making. The responsibilities vary depending on the requirement: initially the business analyst interacts with the end users/managers, then the consultants develop against the requirements, the testers test, and finally the go-live happens.

What do we do in a production support project? In production support, most projects mainly work in the monitoring area for their loads (from R/3 or non-SAP sources to the BW data targets); the details vary from project to project, since some use process chains and some use event chains.

What are the different transactions that we use frequently in a production support project? Please explain them in detail. Generally, in a production support project we check the loads using RSMO for monitoring, and rectify errors there with a step-by-step analysis. The consultant needs access to the following transactions in R/3:
1. ST22
2. SM37
3. SM58
4. SM51
5. RSA7
6. SM13
Authorizations for the following transactions are required in BW:
1. RSA1
2. SM37
3. ST22
4. ST04
5. SE38
6. SE37
7. SM12
8. RSKC
9. SM51
10. RSRV
Process Chain Maintenance (transaction RSPC) is used to define, change and view process chains. The Upload Monitor is transaction RSMO, or RSRQ if the request is known. The Workload Monitor (transaction ST03) shows important overall key performance indicators (KPIs) for system performance. The OS Monitor (transaction ST06) gives an overview of the current CPU, memory, I/O and network load on an application server instance. The Database Monitor (transaction ST04) checks important performance indicators in the database, such as database size, database buffer quality and database indexes. The SQL trace (transaction ST05) records all activities on the database and enables you to check long runtimes on a DB table or several similar accesses to the same data. The ABAP runtime analysis is transaction SE30. The Cache Monitor (accessible with transaction RSRCACHE or from RSRT) shows, among other things, the cache size and the currently cached queries. The export/import shared buffer determines the cache size; it should be at least 40 MB.

*-- Anu radha
