
Friday, August 26, 2011

Activation of Inactive Objects in Production

Sometimes, in spite of a successful transport, a migrated BW object continues to remain inactive in the target system. There are also cases where we mistakenly forget to transport a dependent object, and hence that object remains inactive in the Production system.
For such scenarios SAP provides programs that activate the objects directly, without the need to transport them again from Development to Production.
Here is a list of standard programs which can be used to activate the inactive objects in the system:

  • RSDG_IOBJ_ACTIVATE - activates InfoObjects
  • RSDG_CUBE_ACTIVATE - activates InfoCubes
  • RSDG_ODSO_ACTIVATE - activates DataStore Objects
  • RSDG_TRFN_ACTIVATE - activates transformations
  • RSDG_MPRO_ACTIVATE - activates MultiProviders
  • RS_TRANSTRU_ACTIVATE_ALL - activates transfer structures
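For example, a transformation that arrived inactive can be reactivated directly in Production. A minimal sketch (the affected object is selected on the program's own screen):

    " Open the activation program's selection screen, enter the
    " transformation ID(s), and execute.
    SUBMIT rsdg_trfn_activate VIA SELECTION-SCREEN AND RETURN.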
Monday, August 15, 2011

Below are the steps required to create an Open Hub Destination in BI 7.0

1.  Click on Open Hub Destination from left side panel after executing RSA1.
Right click on the InfoArea where the Open Hub Destination will reside and select “Create Open Hub Destination”

Fill in the following:
  • Open Hub Destination:  Z_IFSFFPT (up to 9 characters only)
  • Description:  IFS FFP Interface Test
  • Object Type:  DataStore Object
  • Name:  ZRV_O904


2.  Click on the Destination Tab.
Select “File” from the drop down box

Select the directory… for testing purposes, save to your desktop

Click on the Application Server Box… notice the Application Server “MVSADBDV1” will be displayed initially as the default application server.

For migration to production, you will want to work with Basis to set up a ‘logical file path/name’ (logical paths and names are maintained in transaction FILE). This ensures that the logical path and logical name can be migrated into all environments without having to specify the server name.

For R8.1 IFS, we are using the newly created Logical File Name “Z_OPENHUB_IFS”

From AL11 in BA8, you can see the Logical File Name “Z_OPENHUB_IFS” directories

3.  Now you are ready to specify the fields for the Open Hub Destination.
Select the Tab “Field Def”.

4.  Now you are ready to create the Transformation (source).
Right click on the saved Open Hub Destination “Z_IFSFFPT”, and select “Create Transformation” from dropdown box.

5.  Select “DataStore Object” from the Object Type drop down box.


Select “ZRV_O904” from the drop down box.
Click on the Green Check Mark to continue.

6.  Now, you can do all of the activities available to a Transformation:
              Create Start Routine
              Create End Routine (see the sketch below)
              Create Formulas
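
For illustration, here is a minimal end routine sketch. In the generated template, RESULT_PACKAGE is the internal table of target-structure rows; the field MATERIAL is a hypothetical example (replace it with a field from your own Field Def tab):

      METHOD end_routine.
        " (generated signature omitted; your code goes between the
        "  template's begin/end markers)
        " Drop rows with an empty material number so that they never
        " reach the Open Hub file. MATERIAL is a hypothetical field
        " of the target structure defined on the Field Def tab.
        DELETE result_package WHERE material IS INITIAL.
      ENDMETHOD.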


Activate the transformation when completed.

7.  Now, you are ready to create the Data Transfer Process for the Open Hub Destination.
Right click on the Open Hub Destination “Z_IFSFFPT”, and select “Create Data Transfer Process” from the drop down box.

The target and source will be prepopulated, so you may not need to do anything; however, you can also specify the details. 
Click on the Green Check Mark to continue.

8.  From the Extraction Tab:
Select Extraction Mode:  Full
Click on the Radio Button “Active Table (Full Extract Only)”

No steps required in the Tab “Update”.

No steps required in the Tab “Execute”.

9.  Finally, click on the Activation Icon.



Monday, August 1, 2011

Steps to copy queries from one cube to another



  • Log into the BW system and enter transaction code RSZC.




  • Enter the source and target InfoCubes on the screen and click OK.






  • This screen also lets you move structures and key figures, as shown in the Select Component section. The list of queries will be displayed only if the structures of both InfoProviders are the same; otherwise a pop-up message appears saying the queries cannot be copied.



    Select the queries you want to move to the target cube and click the Transfer Selections button at the bottom of the screen. A new screen will pop up to allow you to rename the technical names of the queries and variables. Change the names and click the check mark. The queries will be copied to the target cube and a success message is displayed.


    Friday, July 29, 2011

    How to Reconstruct a Load From a DSO

    All the screenshots in this guide illustrate reconstructing a load in the Opportunities Header DSO, but the process is similar for any other object, and this can serve as a reference for any DSO where we need to fix data without taking new data from the source system (CRM or R/3).

    1. Delete the existing load from the DSO.
    2. Go to the Reconstruction tab, select the request we want to reconstruct, and click the Reconstruction/Insert button.
    3. Come back to the Request tab and monitor the progress of the load by clicking the refresh button; keep monitoring until the request reaches green status.
    4. Although the load completes, the request is not yet activated (there is no value in the Request ID column).
    5. To activate it, choose the request in the Manage screen and click the Activate button.
    6. A new window with the list of available requests in the DSO is shown; most of the time there will be only one, in the first row of the list. Select it and click the Start button.
    7. A new window appears asking on which server the process should run; accept what is proposed by default and click the green button to allow the process to continue.
    8. Once the activation is done, the request in the Manage screen shows all its information populated, and the data is ready to be moved to the cubes or accessed by routines.

    Tuesday, July 26, 2011

    Hierarchy / Attribute Change Run

    If hierarchies and attributes for characteristics have changed, then it is necessary to make structural changes to the aggregates in order to adjust the data accordingly.
    The attribute change run is nothing but adjusting master data after it has been loaded, so that the SIDs are generated or adjusted and you do not run into problems when loading transaction data into data targets. When master data is modified, it is loaded into the master data table as version "M"; only after an attribute change run does the master data become active, i.e. version "A".
    Does the attribute change run replace the activation of master data? Yes, both do the same thing: they turn "M" status records into "A" status records.
    The ACR activates master data only if the characteristic's attributes are used in an aggregate, and it also refreshes the aggregate with the new attribute values. That is why you cannot see all characteristics available in the system in the list of characteristics on the ACR screen. For characteristics whose attributes are not used in any aggregate, the ACR is not required, and of course you cannot use it to activate them.
    Use: when we load master data, the attribute change run is executed to activate the new data and adjust it with the existing data. Unless the change run is executed, transaction data loads may fail for lack of active master data, and reports may not display correctly even though the master data has been loaded. If we have aggregates on the cubes, the new or changed master data is adjusted in the aggregate tables during the change run.
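    The change run can also be started manually outside of a process chain. A minimal sketch, assuming the standard change-run program RSDDS_AGGREGATES_MAINTAIN is available on your release (the changed characteristics and hierarchies are selected on its own screen):

        " Open the change-run selection screen, choose the changed
        " characteristics and hierarchies there, and execute.
        SUBMIT rsdds_aggregates_maintain VIA SELECTION-SCREEN AND RETURN.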
    Debug:
    I would suggest double-clicking the cancelled job (in SM37) and viewing the job log to find out the reason for the failure. You can also restart the change run using transaction CHANGERUNMONI; refer to Note 915515 for more details.
    Or
    look in SM50 to see whether the process is still active, and also check ST22 for dumps.
    Or
    there could be a lock situation; remember that SAP does not allow two attribute change runs at the same time.

    Monday, July 25, 2011

    BI - R/3 Configuration Steps

    Here are a few checks for a successful connection between R/3 and BW:
    - Define the logical systems (SPRO)
    - Assign the logical systems to clients (SPRO)
    - Create background users in both systems: ALEREMOTE and BWREMOTE (SU01)
    - Test the system RFC connection (SM59)
    - Create the source system connection (RSA1 or RSA1OLD)
    - Activate the DataSources (RSA6)
    - Transfer the Application Component Hierarchy (RSA9)
    - Source system assignment
    - Transfer Global Settings
    - Replicate DataSources

    Saturday, June 4, 2011

    SAP BI vs. R/3 Reporting

     

    Many companies that use core SAP ERP modules never implement a SAP BI solution, choosing instead to use R/3-based reporting. Since R/3 is by nature a transactional OLTP system that was never designed for analytics and reporting, and given the importance of a unified view of a company's data, it is somewhat surprising that more businesses haven't found, and leveraged the power of, SAP Business Intelligence.

    Comparisons

    R/3 Reporting

    R/3 is an OLTP transactional system used for real-time operational data.  Reporting is only available from a single system and is transaction based.  R/3 reports are more likely to break, since the system changes frequently with the needs of the business, and they therefore carry ongoing maintenance and support costs.  Only limited web reporting is available, and most reports are simple and list based.  Many users querying significant data volumes will slow down the system and disrupt the experience of the average user.

    SAP BI Reporting

    BW is an OLAP reporting system mainly used for summarized data, although drill-down to line items is possible.  It houses historical data from multiple systems (including non-SAP third-party sources) and even data that no longer exists in the source systems.  SAP BI offers a wide range of data analysis tools including Excel and web based analysis, scorecards, dashboards and pixel-perfect static printed-style reporting capabilities.  Most reports are not real-time but a day or more old; for small data volumes, near-time (<5 minute) reporting is possible.  Users querying the dedicated BI system will not slow down the source system, and as the data is stored in a more optimal way, the reports are available to the user much quicker. 
    In addition to the reporting options, it also provides ETL functions, better security, the option to 'broadcast' reports to users in a number of different ways and formats, and allows planning based on historical data.


    So in summary if you are faced with this decision consider these advantages that BI reporting has over R/3:
    • By offloading ad-hoc and long running queries from production R/3 system to BI system, overall system performance should improve on R/3.
    • SAP BI is specifically designed for query processing, not data updating and OLTP. Within BI, the data structures are designed differently and are much better suited for reporting than R/3 data structures.
    • Better front-end reporting within BI. Although the BI Excel front-end has its problems, it provides more flexibility and analysis capability than the R/3 reporting screens.
    • BI has the ability to pull data from other SAP or non-SAP sources into a consolidated cube.

    For up to the minute operational reporting you should use R/3 but for more historical strategic reporting a dedicated BI solution provides faster performing, fully featured reports with stronger data analysis capabilities. 

    Sunday, May 15, 2011

    Creating a Process Chain


    • Create a new Process Chain
    • Define the Start Process
    • Create a new Process Variant
    • Define the Start Process Variant
    • Define the start time
    • Add an info package
    • Select an info package
    • Create a link between the Start Process and the info package
    • Save and Activate the Process Chain
    In SAP BW to create a Process Chain:
                  Enter transaction code /nrspc.
    1. The Process Chain Maintenance Planning View window opens.

    2. Click the Create icon (or F5).
      The New Process Chain dialog opens.
    3. Enter a Process Chain name and a Long Description. Click the Enter icon.
      After you specify the name of the new Process Chain, the Insert Start Process dialog opens. It lets you insert a Start Process for the Process Chain.

    4. Define the Start Process. The Start Process must be unique for each Process Chain.
    5. Create the Start Process Variant. A variant is a collection of predefined criteria, similar to a group of values used as parameters.
      Variants are attached to various processes that are defined for Process Chains. 
      Click the New icon on the Insert Start Process dialog to create a new Process Variant.
      The Start Process dialog opens.

    6. Define the Start Process Variant. When the Maintain Process Variant dialog opens, create a variant for the start process by selecting Direct Scheduling. This allows you to customize the Start Time options. Click Change Selections.
      The Start Time dialog opens.

              Enter a Start Time value by clicking the Immediate button, which enables an immediate start. Click the Check button and Save.
    7. Select the InfoPackage that is used to load data into the InfoCube and drag it to the right panel. Select the InfoPackage and press Continue.
    8. The load step is added to the process chain.
    9. Join the Start step with the Load step by selecting the Start step and dragging a line to the Load step.
    10. Check the process chain and activate it.
    11. The process chain will start executing. Once completed, the status appears as successful; in case of any failure, the status appears as cancelled.
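
    An activated chain can also be triggered programmatically rather than by the scheduler. A minimal sketch, assuming the standard API function module RSPC_API_CHAIN_START; the chain name ZPC_DEMO is a hypothetical example:

        DATA lv_logid TYPE rspc_logid.

        " Start the activated process chain and capture the log ID,
        " which can be used to look the run up in the log view.
        CALL FUNCTION 'RSPC_API_CHAIN_START'
          EXPORTING
            i_chain = 'ZPC_DEMO'
          IMPORTING
            e_logid = lv_logid.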

    Tuesday, May 10, 2011

    General Usage/ Detail about Process Chain



    • A process chain (PC) is a sequence of processes linked together.
    • Each process has a type (a BW activity, e.g. activating an ODS) and a variant (containing its parameters).
    • The start process describes when the chain will start (immediately, scheduled job, metaprocess, API).
    • A connector links processes; you can choose one of three options for starting the next process: when the previous one finishes with success (green arrow), with failure (red), or always (black).
    • A variant is a set of parameters passed to the process, such as the name of the InfoPackage to use for loading, or the name of the InfoObject to perform a change run for.
    Selected icon bar buttons:
    • The planning view enables you to create and modify process chains.
    • The checking view checks the consistency of the process chain selected in the planning view.
    • The log view shows the log of the execution of the process chain selected in the planning view.
    A simple PC might load transaction data into an ODS and then into an InfoCube.


    Monday, May 2, 2011

    What is difference between PSA and ODS?


    PSA: This is just an intermediate data container, NOT a data target. Its main purpose is data quality maintenance. It holds the original, unchanged data from the source system.


    ODS: This is a data target, and reporting can be done on an ODS. ODS data is overwritable. For DataSources for which delta is not enabled, an ODS can be used to upload delta records to an InfoCube.


    You can do reporting on an ODS. On the PSA you can't do reporting directly.


    An ODS contains detail-level data. In the PSA, the requested data is saved unchanged from the source system: request data is stored in the transfer structure format in transparent, relational database tables in the Business Information Warehouse, and the data format remains unchanged, meaning that no summarization or transformation takes place.
    An ODS has three tables (active data, new data, and change log); the PSA does not.

    Saturday, April 30, 2011

    Difference Between PSA, ALE IDoc, ODS


    What is the difference between PSA and ALE IDoc? And how is data transferred using each of them?
    The following update types are available in SAP BW:
    1. PSA
    2. ALE (data IDoc) 



    You determine the PSA or IDoc transfer method in the transfer rule maintenance screen. The process for loading the data for both transfer methods is triggered by a request IDoc to the source system. Info IDocs are used in both transfer methods; Info IDocs are transferred exclusively using ALE.
    A data IDoc consists of a control record, a data record, and a status record. The control record contains administrative information such as the receiver, the sender, and the client. The status record describes the status of the IDoc, for example "Processed". If you use the PSA for data extraction, you benefit from increased flexibility (treatment of incorrect data records): since you are storing the data temporarily in the PSA before updating it into the data targets, you can check the data and change it if necessary. Unlike a data request with IDocs, the PSA gives you various options for additional data updates into data targets:


    InfoObject/Data Target Only - This option means that the PSA is not used as a temporary store. You choose this update type if you do not want to check the source system data for consistency and accuracy, or you have already checked this yourself and are sure that you no longer require this data since you are not going to change the structure of the data target again.


    PSA and InfoObject/Data Target in Parallel (Package by Package) - BW receives the data from the source system, writes the data to the PSA and at the same time starts the update into the relevant data targets.  Therefore, this method has the best performance.


    The parallel update is described in detail in the following: a dialog process is started per data package, in which the data of this package is written into the PSA table. If the data is posted successfully into the PSA table, the system releases a second, parallel dialog process that writes the data to the data targets. In this dialog process the transfer rules are applied to the data records of the data package, the data is transferred to the communication structure, and then written to the data targets. The first dialog process (data posting into the PSA) confirms to the source system that it is complete, and the source system sends a new data package to BW while the second dialog process is still updating the data into the data targets.
    The parallelism relates to the data packages; that is, the system writes the data packages into the PSA table and into the data targets in parallel.  Caution: the maximum number of processes set in the source system in customizing for the extractors does not restrict the number of processes in BW. Therefore, BW can require many dialog processes for the load process. Ensure that there are enough dialog processes available in the BW system; if there are not enough processes on the system side, errors occur. For this reason, this method is the least recommended. 


    PSA and then into InfoObject/Data Targets (Package by Package) - Updates data in series into the PSA table and into the data targets by data package. The system starts one process that writes the data packages into the PSA table. Once the data is posted successfully into the PSA table, it is then written to the data targets in the same dialog process. Updating in series gives you more control over the overall data flow compared to the parallel data transfer, since there is only one process per data package in BW. In the BW system, the maximum number of dialog processes required for each data request corresponds to the setting you made in customizing for the extractors in the control parameter maintenance screen. In contrast to the parallel update, the system confirms that the process is completed only after the data has been updated into the PSA and into the data targets for the first data package.
    Only PSA - The data is not posted further from the PSA table immediately. It is useful to transfer the data only into the PSA table if you want to check its accuracy and consistency and, if necessary, modify the data. You then have the following options for updating data from the PSA table:


    Automatic update - To update the data automatically in the relevant data target after all data packages are in the PSA table and have been updated successfully there, choose Update Subsequently in Data Targets on the Processing tab page when scheduling the InfoPackage in the scheduler.

    Thursday, April 28, 2011

    AL08 (Tcode)- List of all users

    AL08 shows the list of all users who are logged on to the system globally, i.e. across all active instances in the system. It shows all the active instances and the number of active users in the system. It contains the following columns.


    1) Instance - It shows the Instance into which the user logged in

    2) Client - It displays the SAP client into which the user is Logged in

    3) User Names - SAP user name

    4) Terminal - Terminal at which the user is working

    5) T-code - Last executed transaction code

    6) Time - Time at which the user last initiated a dialog step by entering data

    7) External Sessions - Number of External sessions the user has opened

    8) Internal Sessions - Number of Internal sessions the user has opened



    The Difference between External and Internal sessions

    Internal Session: This is the memory allocated to a program during execution. When we call a program using SUBMIT or CALL TRANSACTION, it is loaded in a new internal session. To exchange data between internal sessions we can use ABAP memory.

    External Session: This is essentially a window, which we can create using System -> Create Session.
    We can open up to 6 external sessions (this limit is set by Basis, of course).
    We can use SAP memory to exchange data between external sessions within one logon.
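
    A minimal sketch of both mechanisms (the memory ID ZDEMO_ID, the report ZSOME_REPORT, and the use of the standard material parameter ID 'MAT' are examples):

        DATA lv_value TYPE c LENGTH 18 VALUE 'MAT-001'.

        " ABAP memory: shared between internal sessions of one external session.
        EXPORT lv_value FROM lv_value TO MEMORY ID 'ZDEMO_ID'.
        SUBMIT zsome_report AND RETURN.  " runs in a new internal session
        " ...inside ZSOME_REPORT the value can be read back with:
        " IMPORT lv_value TO lv_value FROM MEMORY ID 'ZDEMO_ID'.

        " SAP memory: shared between all external sessions of one logon.
        SET PARAMETER ID 'MAT' FIELD lv_value.
        GET PARAMETER ID 'MAT' FIELD lv_value.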

    Sunday, April 24, 2011

    SAP Q&A PART - 2

    1. The following transactions are relevant to the data sources in an SAP BW source system. 
    a. RSA3
    b. RSA4
    c. RSA5
    d. RSA6 

    ANSWER(S): A, C, D 
    Transaction RSA3, or extractor checker, is used in the BW source system to check data sources for various extraction modes, including full update, delta update and delta initialization. 
    RSA5 is for installing standard business content data sources and RSA6 is for maintaining data sources

    2. True or False? A reference characteristic will use the SID table and master data table of the referred characteristic. 
    a. True
    b. False 

    ANSWER(S): A 
    If an info object is created as a characteristic with a reference characteristic, it won't have its own sid and master data tables. The info object will always use the tables of the referred characteristic. 

    3. The following statements are not true about navigational attributes. 
    a. An attribute of an info object cannot be made navigational if the attribute-only flag on the attribute info object has been checked.
    b. Navigational attributes can be used to create aggregates.
    c. It is possible to make a display attribute navigational in an info cube without deleting all the data from the info cube.
    d. Once an attribute is made navigational in an info cube, it is possible to change it back to a display attribute if the data has been deleted from the info cube. 

    ANSWER(S): D 
    All the statements except D are true. It is possible to change a navigational attribute back to a display attribute in an info cube, without deleting all data from the info cube

    4. True or False? It is possible to create a key figure without assigning currency or unit. 
    a. True
    b. False 

    ANSWER(S): A 
    Yes, it is possible to create a key figure without assigning a unit if the data type is one of these four: Number, Integer, Date or Time. 

    5. The following statements are true for compounded info objects. 
    a. An info cube needs to contain all info objects of the compounded info object if it has been included in the info cube.
    b. An info object cannot be included as a compounding object if it is defined as an attribute only.
    c. An info object can be included as an attribute and a compounding object simultaneously.
    d. The total length of a compounded info object cannot exceed 60. 

    ANSWER(S): A, B, D 
    When a compounded info object is included in an info cube, all corresponding info objects are added to the info cube. If an info object is defined as an attribute, it cannot be included as compounding object. The total length of the compounding info objects cannot exceed 60 characters. 


    6. The following statements are true for an info cube. 
    a. Each characteristic of info cube should be assigned to at least one dimension.
    b. One characteristic can be assigned to more than one dimension.
    c. One dimension can have more than one characteristic.
    d. More than one characteristic can be assigned to one line item dimension. 

    ANSWER(S): A, C 
    Any characteristic in the info cube should be assigned to a dimension. One characteristic cannot be assigned to more than one dimension. One dimension can have more than one characteristic, provided it is not defined as a line item dimension.


    7. The following statements are true for info cubes and aggregates. 
    a. Requests cannot be deleted if info cubes are compressed.
    b. A request cannot be deleted from an info cube if that request (is compressed) in the aggregates.
    c. Deleting a request from the cube will delete the corresponding request from the aggregate, if the aggregate has not been compressed.
    d. All of the above. 

    ANSWER(S): A, C 
    Once the info cubes are compressed it is not possible to delete data based on the requests. There won't be request IDs anymore. Requests can be deleted even if the requests in aggregates have been compressed. But the aggregates will have to be de-activated. Deleting an uncompressed request from an info cube will automatically delete the corresponding request from aggregate if the aggregate request has not been compressed

    8. The following statements are true regarding the ODS request deletion. 
    a. It is not possible to delete a request from ODS after the request has been activated.
    b. Deleting an (inactive) request will delete all requests that have been loaded into the ODS after this request was loaded.
    c. Deleting an active request will delete the request from the change log table.
    d. None of the above. 

    ANSWER(S): C 
    It is possible to delete requests from an ODS, even if the request has been activated. The "before and after images" of the data are stored in the change log table, and these are used to delete the request. 
    Deleting a request which has not been activated in the ODS will not delete the requests which were loaded after it. But if the request has been activated, then the requests loaded and activated after it will be deleted as well, and the change log entries for that request will also be deleted. 


    9. The following statements are true for aggregates. 
    a. An aggregate stores data of an info cube redundantly and persistently in a summarized form in the database.
    b. An aggregate can be built on characteristics or navigational attributes from the info cube.
    c. Aggregates enable queries to access data quickly for reporting.

    d. None of the above. 

    ANSWER(S): A, B, C 
    Aggregates summarize and store data from an info cube. Characteristics and navigational attributes of an info cube can be used to create aggregates. Since aggregates contain summarized data, the amount of data in an aggregate is much less than in the cube, which makes queries run faster when they access aggregates. 

    10. True or False? If an info cube has active aggregates built on it, the new requests loaded will not be available for reporting until the rollup has been completed successfully. 
    a. True
    b. False
     

    ANSWER(S): A 
    Newly-loaded requests in an info cube with aggregates will not be available for reporting until the aggregate rollup has been completed successfully. This is to make sure that the cube and aggregates are consistent while reporting. 

    11. What is the primary purpose of having multi-dimensional data models? 
    a. To deliver structured information that the business user can easily navigate by using any possible combination of business terms to show the KPIs.
    b. To make it easier for developers to build applications, that will be helpful for the business users.
    c. To make it easier to store data in the database and avoid redundancy.
    d. All of the above. 

    ANSWER(S): A 
    The primary purpose of multi-dimensional modeling is to present data to business users in a way that corresponds to their normal understanding of their business. Multi-dimensional models also provide a basis for fast access to the data through the OLAP engine.

    12. The following statements are true for partitioning. 
    a. If a cube has been partitioned, the E table of the info cube will be partitioned on time.
    b. The F table of the info cube is partitioned on request.
    c. The PSA table is partitioned automatically with several requests on one partition.
    d. It is not possible to partition the info cube after data has been loaded, unless all the data is deleted from the cube. 

    ANSWER(S): A, B, C, D 
    BW allows partitioning of the info cubes based on time. If the info cube is partitioned, the e-fact table of the info cube will be partitioned on the time characteristic selected. 
    The F fact table is partitioned on request ids automatically during the loads. PSA tables are also partitioned during the loads and can accommodate more than one request. For an info cube to be partitioned, all data needs to be removed from the info cube. 

    13. The following statements are true for OLAP CACHE. 
    a. Query navigation states and query results are stored in the application server memory.
    b. If the same query has been executed by another user the result sets can be used if the global cache is active.
    c. Reading query results from OLAP cache is faster than reading from the database.
    d. Changing the query will invalidate the OLAP cache for that query. 

    ANSWER(S): A, B, C, D 
    Query results are stored in the memory of application server, which can be retrieved later by another user running the same query. This will make the query faster since the results are already calculated and stored in the memory. By changing the query, the OLAP Cache gets invalidated

    14. The following statements are true about the communication structure. 
    a. It contains all the info objects that belong to an info source.
    b. All the data is updated into the info cube with this structure.
    c. It is dependent on the source system.

    d. All of the above. 

    ANSWER(S): A, B 
    The communication structure contains all info objects in the info source, and it is used to update the info cube by temporarily storing the data that needs to be updated to the data target. It doesn't depend on the source system.

    15. The following statements are untrue about ODSs. 
    a. It is possible to create ODSs without any data fields.
    b. An ODS can have a maximum of 16 key fields.
    c. Characteristics and key figures can be added as key fields in an ODS.
    d. After creating and activating, an export data source is created automatically

    ANSWER(S): A,C 
    An ODS cannot be created without any data fields, and it can have a maximum of only 16 key fields. Key figures cannot be included as a key field in an ODS. The export data source is created after an ODS has been created and activated

    Wednesday, April 20, 2011

    SAP Q&A PART - 1

    1. Identify the statement(s) that is/are true. A change run... 
    a. Activates the new Master data and Hierarchy data
    b. Aggregates are realigned and recalculated
    c. Always reads data from the InfoCube to realign aggregates
    d. Aggregates are not affected by change run 

    ANSWER(S): A, B 
    Change run activates the Master data and Hierarchy data changes. Before the activation of these changes, all the aggregates that are affected by these changes are realigned. Realignment is not necessarily done by reading InfoCubes. If these are part of another aggregate that can be used to read data for the realignment, change run uses that aggregate

    2. Which statement(s) is/are true about Multiproviders? 
    a. This is a virtual Infoprovider that does not store data
    b. They can contain InfoCubes, ODSs, info objects and info sets
    c. More than one info provider is required to build a Multiprovider
    d. It is similar to joining the data tables 

    ANSWER(S): A, B 
    Multiproviders are like virtual Infoproviders that do not store any data. Basic InfoCubes, ODSs, info sets or Info objects can be used to build a Multiprovider. Multiproviders can even be built on a single Infoprovider

    3. The structure of the PSA table created for an info source will be... 
    a. Featuring the exact same structure as Transfer structure
    b. Similar to the transfer rules
    c. Similarly structured as the Communication structure
    d. The same as Transfer structure, plus four more fields in the beginning 


    ANSWER(S): D 
    The structure of PSA tables will have an initial four fields: request id, packet number, partition value and record number. The remaining fields will be exactly like Transfer Structure. 

    4. In BW, special characters are not permitted unless it has been defined using this transaction: 
    a. rrmx
    b. rskc
    c. rsa15
    d. rrbs 

    ANSWER(S): B 
    RSKC is the transaction used to enter the permitted characters in BW.


    5. Select the true statement(s) about info sources: 
    a. One info source can have more than one source system assigned to it
    b. One info source can have more than one data source assigned to it provided the data sources are in different source systems
    c. Communication structure is a part of an info source
    d. None of the above 

    ANSWER(S): A, C 
    Info sources can be assigned to multiple source systems. Also, info sources can have multiple data sources within the same source system. The communication structure is a part of the info source.

    6. Select the statement(s) that is/are true about the data sources in a BW system: 
    a. If the hide field indicator is set in a data source, this field will not be transferred to BW even after replicating the data source
    b. A field in a data source won't be usable unless the selection field indicator has been set in the data source
    c. A field in an info package will not be visible for filtering unless the selection field has been checked in the data source
    d. All of the above 

    ANSWER(S): A, C 
    If the hide field is checked in a data source, that field will not be transferred to a BW system from the source system even after replication. If the selection field is not checked, that field won't be available for filtering the info package

    7. Select the statement(s) which is/are true about the 'Control parameters for data transfer from the Source System': 
    a. The table used to store the control parameters is ROIDOCPRMS
    b. Field max lines is the maximum number of records in a packet
    c. Max Size is the maximum number of records that can be transferred to BW
    d. All of the above 

    ANSWER(S): A 
    ROIDOCPRMS is the table in the BW source system that is used to store the parameters for transferring data to BW. Max size is the size in KB which is used to calculate the number of records in each packet. Max lines is the maximum number of records in each packet

    8. The indicator 'Do not condense requests into one request when activation takes place' during ODS activation applies to condensation of multiple requests into one request to store it in the active table of the ODS. 
    a. True
    b. False 

    ANSWER(S): B 
    This indicator is used to make sure that the change log data is not compressed to one request when activating multiple requests at the same time. If these requests are combined to one request in change log table, individual deletion will not be possible. 

    9. Select the statement(s) which is/are not true related to flat file uploads: 
    a. CSV and ASCII files can be uploaded
    b. The table used to store the flat file load parameters is RSADMINC
    c. The transaction for setting parameters for flat file upload is RSCUSTV7
    d. None of the above 

    ANSWER(S): C 
    Transaction for setting flat file upload parameters is RSCUSTV6.

    10. Which statement(s) is/are true related to Navigational attributes vs. Dimensional attributes? 
    a. Dimensional attributes have a performance advantage over Navigational attributes for queries
    b. Change history will be available if an attribute is defined as navigational
    c. History of changes is available if an attribute is included as a characteristic in the cube
    d. All of the above 

    ANSWER(S): A, C 
    Dimensional attributes have a performance advantage while running queries since the number of table joins will be less compared to navigational attributes. For navigational attributes, the history of changes will not be available. But for dimensional attributes, the InfoCube will have the change history

    11. When a Dimension is created as a line item dimension in a cube, Dimensions IDs will be same as that of SIDs. 
    a. True
    b. False 

    ANSWER(S): A 
    When a Dimension is created as a line item dimension, the SIDs of the characteristic are stored directly in the fact table and are used as Dimension IDs. The dimension table will be a view on the SID table and the fact table.

    12. Select the true statement(s) related to the start routine in the update rules: 
    a. All records in the data packet can be accessed
    b. Variables declared in the global area is available for individual routines
    c. Returncode greater than 0 will abort the whole packet
    d. None of the above 

    ANSWER(S): A, B, C 
    In the start routine, all records are available for processing. Variables declared in the global area can be used in individual routines. Returncode greater than 0 will abort processing of all records in the packet
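
    For reference, a simplified 3.x start routine skeleton (the generated form also carries additional USING parameters; the structure /BIC/CS8ZSALES and the field /BIC/ZAMOUNT are hypothetical examples):

        FORM startroutine
          TABLES   data_package STRUCTURE /bic/cs8zsales
          CHANGING abort LIKE sy-subrc.
          " All records of the data packet can be read and changed here,
          " e.g. dropping records with an initial amount.
          DELETE data_package WHERE /bic/zamount IS INITIAL.
          " Any value > 0 in ABORT (the routine's return code) aborts
          " processing of the whole packet.
          abort = 0.
        ENDFORM.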

    13. If a characteristic value has been entered in InfoCube-specific properties of an InfoCube, only these values can be loaded to the cube for that characteristic. 
    a. True
    b. False 

    ANSWER(S): A 
    If a constant is entered in the InfoCube-specific properties, only that value will be allowed in the InfoCube for that characteristic. This value will be fixed in the update rules and it is not possible to do the change in update rules for that characteristic.

    14. After any changes have been done to an info set it needs to be adjusted using transaction RSISET. 
    a. True
    b. False 

    ANSWER(S): A 
    After making any type of change to an info set, it needs to be adjusted using the transaction RSISET. 


    15. Select the true statement(s) about read modes in BW: 
    a. Read mode determines how the OLAP processor retrieves data during query execution and navigation
    b. Three different types of read modes are available
    c. Can be set only at individual query level
    d. None of the above 

    ANSWER(S): A, B 
    Read mode determines how an OLAP processor retrieves data during query execution and navigation. Three types of read modes are available:
    1. Read data during expand hierarchies
    2. Read data during navigation
    3. Read data all at once
    Read mode can be set at info provider level and query level.

    Saturday, April 16, 2011

    Selective deletion process chain





    This article tells you how to use selective deletion in process chains, i.e. how to generate the selective deletion program using the "DELETE_FACTS" transaction code, and then how to use that program to delete the data in an InfoCube.


    Sometimes, before loading data into an InfoCube, we need to delete existing data based on some selective criterion, e.g. date, and then load the new data into the InfoCube. We don't want to do this activity manually every time, so we need to automate the process.


    In some cases we need to delete InfoCube data based on a selective deletion before loading.
    E.g.: we have a planning InfoCube in the BW system, and the data comes from an APO system. In APO they run planning on a weekly basis, planning from SY-DATUM to the next 30 days. Once the APO system completes the SNP weekly run, the BW system needs to extract the data from APO.
    But before loading the data into the BW plan InfoCube, we first need to delete the existing data from SY-DATUM to the next 30 days, and after that load the data. Because the APO SNP run happens every week, if we load the data directly into the plan InfoCube, it will give wrong information in reports, since every time we load from SY-DATUM to the next 30 days. (E.g.: suppose the first APO SNP run date is 01.02.2009; after that run we load plan data into the BW InfoCube from 01.02.2009 (SY-DATUM) to 01.03.2009. Then on 08.02.2009 the second SNP run happens, so if we load data from 08.02.2009 (SY-DATUM) to 08.03.2009, the data is duplicated in the InfoCube, because we already loaded 01.02.2009 to 01.03.2009 in the first SNP run. The InfoCube then has data from 08.02.2009 to 01.03.2009 from the first run and from 08.02.2009 to 08.03.2009 from the second run, which is wrong. So first we need to delete the data from 08.02.2009 to 01.03.2009 and then load the data from 08.02.2009 to 08.03.2009.)
    We can achieve this by using the "DELETE_FACTS" transaction code, and we can automate the complete process using process chains.
    Enter DELETE_FACTS in the command field, press Enter, then enter the InfoCube name, select the "Generate selection program" option, and execute.

    Take this generated program, go to SE38, enter the program name and click on the Variants option.
    Give the variant name ZVAR_DEL1 and click Create.
    The selection screen is displayed. Our intention is to delete the data based on 0CALDAY (Calendar Day), so press F1 to find the screen field for Calendar Day.

    On the technical information screen you can find the screen field for Calendar Day, i.e. C006-LOW. Once you have found the screen field number, close the screen. 

    Click on the Technical Name button, then click the Attributes button. Click Selection Variable and select "D: Dynamic date calculation". Then select the Name of Variable and double-click it; this opens the date calculation screen. We want to delete the data from the current day to the next 30 days, so select "Current date -xxx, current date +yyy" and double-click.

    Save. Come back to SE38, select the Variants option and click the Display button. Check the values for the variant: it shows SY-DATUM to the next 30 days. So we have created a selective deletion program to delete data from the InfoCube; whenever you execute this program with the variant ZVAR_DEL1, it will delete the data from that day to the next 30 days.
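
    The generated program can then be run from another ABAP program or a scheduled job. A minimal sketch (GP_DELETE_CUBE stands in for the actual generated program name):

        " Execute the generated selective-deletion program with the
        " variant created above; control returns here afterwards.
        SUBMIT gp_delete_cube USING SELECTION-SET 'ZVAR_DEL1' AND RETURN.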

    Add the program to the process chain (as an ABAP Program process step) and automate it using the SAP scheduler.
