
Saturday, April 30, 2011

Difference Between PSA, ALE IDoc, ODS


What is the difference between PSA and ALE IDoc, and how is data transferred using each of them?
The following transfer methods are available in SAP BW:
1. PSA
2. ALE (data IDoc) 



You determine the PSA or IDoc transfer method in the transfer rule maintenance screen. The process for loading the data for both transfer methods is triggered by a request IDoc to the source system. Info IDocs are used in both transfer methods; they are transferred exclusively using ALE.
A data IDoc consists of a control record, a data record, and a status record. The control record contains administrative information such as the receiver, the sender, and the client. The status record describes the status of the IDoc, for example, "Processed". If you use the PSA for data extraction, you benefit from increased flexibility (treatment of incorrect data records). Since you are storing the data temporarily in the PSA before updating it into the data targets, you can check the data and change it if necessary. Unlike a data request with IDocs, the PSA gives you various options for additional data updates into data targets:


InfoObject/Data Target Only - With this option the PSA is not used as a temporary store. You choose this update type if you do not want to check the source system data for consistency and accuracy, or if you have already checked it yourself and are sure that you no longer require this data, since you are not going to change the structure of the data target again.


PSA and InfoObject/Data Target in Parallel (Package by Package) - BW receives the data from the source system, writes the data to the PSA and at the same time starts the update into the relevant data targets.  Therefore, this method has the best performance.


The parallel update is described in detail in the following: a dialog process is started per data package, in which the data of this package is written into the PSA table. If the data is posted successfully into the PSA table, the system releases a second, parallel dialog process that writes the data to the data targets. In this dialog process the transfer rules are applied to the data records of the data package, the data is transferred to the communication structure, and then written to the data targets. The first dialog process (data posting into the PSA) confirms to the source system that it is completed, and the source system sends a new data package to BW while the second dialog process is still updating the data into the data targets.
The parallelism relates to the data packages, that is, the system writes the data packages into the PSA table and into the data targets in parallel. Caution: the maximum number of processes set in the source system in customizing for the extractors does not restrict the number of processes in BW. Therefore, BW can require many dialog processes for the load process. Ensure that there are enough dialog processes available in the BW system; if there are not enough processes on the system side, errors occur. For this reason, this method is the least recommended.


PSA and then into InfoObject/Data Targets (Package by Package) - Updates data in series into the PSA table and into the data targets by data package. The system starts one process that writes the data packages into the PSA table. Once the data is posted successfully into the PSA table, it is then written to the data targets in the same dialog process. Updating in series gives you more control over the overall data flow than the parallel transfer, since there is only one process per data package in BW. In the BW system the maximum number of dialog processes required for each data request corresponds to the setting that you made in customizing for the extractors in the control parameter maintenance screen. In contrast to the parallel update, the system confirms that the process is completed only after the data has been updated into the PSA and also into the data targets for the first data package.
Only PSA - The data is not posted further from the PSA table immediately. It is useful to transfer the data only into the PSA table if you want to check its accuracy and consistency and, if necessary, modify the data. You then have the following options for updating data from the PSA table:


Automatic update - To update the data automatically into the relevant data targets once all data packages are in the PSA table and have been updated successfully there, choose Update Subsequently in Data Targets on the Processing tab page when you schedule the InfoPackage in the scheduler.

Thursday, April 28, 2011

AL08 (Tcode) - List of all users

AL08 shows the list of all users who are logged on to the system globally, i.e. across all active instances in the system. It shows all the active instances and the number of active users in the system, and it contains the following columns.


1) Instance - The instance into which the user is logged on

2) Client - The SAP client into which the user is logged on

3) User Names - The SAP user name

4) Terminal - The terminal at which the user is working

5) T-code - The last executed transaction code

6) Time - The time at which the user last initiated a dialog step by entering data

7) External Sessions - The number of external sessions the user has opened

8) Internal Sessions - The number of internal sessions the user has opened



The Difference between External and Internal sessions

Internal Session: The memory allocated for a program during execution. When we call a program using SUBMIT or CALL TRANSACTION, it is loaded into a new internal session. To exchange data between internal sessions we can use ABAP memory.

External Session: Essentially a separate window, which we can create using System -> Create Session.
We can open up to 6 external sessions (this limit is set by Basis, of course).
We can use SAP memory to exchange data between external sessions within one logon.
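
The following is a minimal sketch of both mechanisms in ABAP. The report names Z_CALLER and Z_CALLED and the memory ID 'ZMAT' are made up for illustration; 'MAT' is the standard material number parameter ID.

REPORT z_caller.

DATA lv_matnr TYPE c LENGTH 18 VALUE 'MAT-001'.

* ABAP memory: shared between internal sessions of the same
* external session, e.g. with a report started via SUBMIT.
EXPORT mat = lv_matnr TO MEMORY ID 'ZMAT'.
SUBMIT z_called AND RETURN.            " hypothetical called report

* SAP memory: shared across all external sessions of one logon.
SET PARAMETER ID 'MAT' FIELD lv_matnr.

* In Z_CALLED, the counterparts would be:
*   IMPORT mat = lv_matnr FROM MEMORY ID 'ZMAT'.
*   GET PARAMETER ID 'MAT' FIELD lv_matnr.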

Sunday, April 24, 2011

SAP Q&A PART - 2

1. The following transactions are relevant to the data sources in an SAP BW source system. 
a. RSA3
b. RSA4
c. RSA5
d. RSA6 

ANSWER(S): A, C, D 
Transaction RSA3, the extractor checker, is used in the BW source system to check data sources for various extraction modes, including full update, delta update and delta initialization. 
RSA5 is for installing standard Business Content data sources, and RSA6 is for maintaining data sources.

2. True or False? A reference characteristic will use the SID table and master data table of the referred characteristic. 
a. True
b. False 

ANSWER(S): A 
If an info object is created as a characteristic with a reference characteristic, it won't have its own SID and master data tables. The info object will always use the tables of the referred characteristic. 

3. The following statements are not true about navigational attributes. 
a. An attribute of an info object cannot be made navigational if the attribute-only flag on the attribute info object has been checked.
b. Navigational attributes can be used to create aggregates.
c. It is possible to make a display attribute navigational in an info cube without deleting all the data from the info cube.
d. Once an attribute is made navigational in an info cube, it is possible to change it back to a display attribute if the data has been deleted from the info cube. 

ANSWER(S): D 
All the statements except D are true. It is possible to change a navigational attribute back to a display attribute in an info cube without deleting all data from the info cube.

4. True or False? It is possible to create a key figure without assigning currency or unit. 
a. True
b. False 

ANSWER(S): A 
Yes, it is possible to create a key figure without assigning a unit if the data type is one of these four: Number, Integer, Date or Time. 

5. The following statements are true for compounded info objects. 
a. An info cube needs to contain all info objects of the compounded info object if it has been included in the info cube.
b. An info object cannot be included as a compounding object if it is defined as an attribute only.
c. An info object can be included as an attribute and a compounding object simultaneously.
d. The total length of a compounded info object cannot exceed 60. 

ANSWER(S): A, B, D 
When a compounded info object is included in an info cube, all corresponding info objects are added to the info cube. If an info object is defined as attribute-only, it cannot be included as a compounding object. The total length of a compounded info object cannot exceed 60 characters. 


6. The following statements are true for an info cube. 
a. Each characteristic of info cube should be assigned to at least one dimension.
b. One characteristic can be assigned to more than one dimension.
c. One dimension can have more than one characteristic.
d. More than one characteristic can be assigned to one line item dimension. 

ANSWER(S): A, C 
Any characteristic in the info cube should be assigned to a dimension. One characteristic cannot be assigned to more than one dimension. One dimension can have more than one characteristic, provided it is not defined as a line item dimension.


7. The following statements are true for info cubes and aggregates. 
a. Requests cannot be deleted if info cubes are compressed.
b. A request cannot be deleted from an info cube if that request has been compressed in the aggregates.
c. Deleting a request from the cube will delete the corresponding request from the aggregate, if the aggregate has not been compressed.
d. All of the above. 

ANSWER(S): A, C 
Once an info cube is compressed, it is not possible to delete data based on requests; there are no request IDs anymore. Requests can be deleted even if the requests in the aggregates have been compressed, but the aggregates will have to be deactivated. Deleting an uncompressed request from an info cube will automatically delete the corresponding request from the aggregate, if the aggregate request has not been compressed.

8. The following statements are true regarding the ODS request deletion. 
a. It is not possible to delete a request from ODS after the request has been activated.
b. Deleting an (inactive) request will delete all requests that have been loaded into the ODS after this request was loaded.
c. Deleting an active request will delete the request from the change log table.
d. None of the above. 

ANSWER(S): C 
It is possible to delete requests from an ODS even if the request has been activated. The before- and after-images of the data are stored in the change log table, and these are used to delete the request. 
Deleting a request which has not been activated in the ODS will not delete the requests which were loaded after it. But if the request has been activated, then the requests loaded and activated after it will also be deleted, and the change log entries for that request will be deleted as well. 


9. The following statements are true for aggregates. 
a. An aggregate stores data of an info cube redundantly and persistently in a summarized form in the database.
b. An aggregate can be built on characteristics or navigational attributes from the info cube.
c. Aggregates enable queries to access data quickly for reporting.

d. None of the above. 

ANSWER(S): A, B, C 
Aggregates summarize and store data from an info cube. Characteristics and navigational attributes of an info cube can be used to create aggregates. Since aggregates contain summarized data, the amount of data in aggregates will be much less than in the cube, which makes queries run faster when they access aggregates. 

10. True or False? If an info cube has active aggregates built on it, the new requests loaded will not be available for reporting until the rollup has been completed successfully. 
a. True
b. False
 

ANSWER(S): A 
Newly-loaded requests in an info cube with aggregates will not be available for reporting until the aggregate rollup has been completed successfully. This is to make sure that the cube and aggregates are consistent while reporting. 

11. What is the primary purpose of having multi-dimensional data models? 
a. To deliver structured information that the business user can easily navigate by using any possible combination of business terms to show the KPIs.
b. To make it easier for developers to build applications, that will be helpful for the business users.
c. To make it easier to store data in the database and avoid redundancy.
d. All of the above. 

ANSWER(S): A 
The primary purpose of multi-dimensional modeling is to present data to business users in a way that corresponds to their normal understanding of their business. It also provides the basis for fast, easy access to the data through the OLAP engine.

12. The following statements are true for partitioning. 
a. If a cube has been partitioned, the E table of the info cube will be partitioned on time.
b. The F table of the info cube is partitioned on request.
c. The PSA table is partitioned automatically with several requests on one partition.
d. It is not possible to partition the info cube after data has been loaded, unless all the data is deleted from the cube. 

ANSWER(S): A, B, C, D 
BW allows partitioning of the info cubes based on time. If the info cube is partitioned, the e-fact table of the info cube will be partitioned on the time characteristic selected. 
The F fact table is partitioned on request ids automatically during the loads. PSA tables are also partitioned during the loads and can accommodate more than one request. For an info cube to be partitioned, all data needs to be removed from the info cube. 

13. The following statements are true for OLAP CACHE. 
a. Query navigation states and query results are stored in the application server memory.
b. If the same query has been executed by another user the result sets can be used if the global cache is active.
c. Reading query results from OLAP cache is faster than reading from the database.
d. Changing the query will invalidate the OLAP cache for that query. 

ANSWER(S): A, B, C, D 
Query results are stored in the memory of the application server, from which they can be retrieved later by another user running the same query. This makes the query faster, since the results are already calculated and stored in memory. Changing the query invalidates the OLAP cache for that query.

14. The following statements are true about the communication structure. 
a. It contains all the info objects that belong to an info source.
b. All the data is updated into the info cube with this structure.
c. It is dependent on the source system.

d. All of the above. 

ANSWER(S): A, B 
The communication structure contains all info objects of the info source, and it is used to update the info cube by temporarily storing the data that needs to be updated to the data target. It does not depend on the source system.

15. The following statements are untrue about ODSs. 
a. It is possible to create ODSs without any data fields.
b. An ODS can have a maximum of 16 key fields.
c. Characteristics and key figures can be added as key fields in an ODS.
d. After creating and activating, an export data source is created automatically

ANSWER(S): A, C 
An ODS cannot be created without any data fields, and it can have a maximum of only 16 key fields. Key figures cannot be included as a key field in an ODS. The export data source is created after an ODS has been created and activated.

Wednesday, April 20, 2011

SAP Q&A PART - 1

1. Identify the statement(s) that is/are true. A change run... 
a. Activates the new Master data and Hierarchy data
b. Aggregates are realigned and recalculated
c. Always reads data from the InfoCube to realign aggregates
d. Aggregates are not affected by change run 

ANSWER(S): A, B 
Change run activates the master data and hierarchy data changes. Before the activation of these changes, all the aggregates that are affected by them are realigned. Realignment is not necessarily done by reading InfoCubes: if the affected data is part of another aggregate that can be used to read the data for the realignment, change run uses that aggregate.

2. Which statement(s) is/are true about Multiproviders? 
a. This is a virtual Infoprovider that does not store data
b. They can contain InfoCubes, ODSs, info objects and info sets
c. More than one info provider is required to build a Multiprovider
d. It is similar to joining the data tables 

ANSWER(S): A, B 
Multiproviders are like virtual Infoproviders that do not store any data. Basic InfoCubes, ODSs, info sets or info objects can be used to build a Multiprovider. Multiproviders can even be built on a single Infoprovider.

3. The structure of the PSA table created for an info source will be... 
a. Featuring the exact same structure as Transfer structure
b. Similar to the transfer rules
c. Similarly structured as the Communication structure
d. The same as Transfer structure, plus four more fields in the beginning 


ANSWER(S): D 
The structure of a PSA table has four initial fields: request ID, packet number, partition value and record number. The remaining fields are exactly those of the Transfer Structure. 
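
As a hedged illustration of that layout: PSA table names are generated per system, so the name below is only a placeholder, and the field types are simplified, but the four technical fields carry the generated names REQUEST, DATAPAKID, PARTNO and RECORD.

TYPES: BEGIN OF ty_psa_key,
         request   TYPE c LENGTH 30,   " request ID
         datapakid TYPE n LENGTH 6,    " data packet number
         partno    TYPE i,             " partition value
         record    TYPE i,             " record number
       END OF ty_psa_key.

DATA: lv_psa_tab TYPE tabname VALUE '/BIC/B0000123000', " placeholder
      lt_keys    TYPE STANDARD TABLE OF ty_psa_key.

* Read the four technical key fields that precede the
* transfer structure fields in every PSA table.
SELECT request datapakid partno record
  FROM (lv_psa_tab)
  INTO TABLE lt_keys
  UP TO 10 ROWS.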

4. In BW, special characters are not permitted unless it has been defined using this transaction: 
a. rrmx
b. rskc
c. rsa15
d. rrbs 

ANSWER(S): B 
RSKC is the transaction used to enter the permitted characters in BW.


5. Select the true statement(s) about info sources: 
a. One info source can have more than one source system assigned to it
b. One info source can have more than one data source assigned to it provided the data sources are in different source systems
c. Communication structure is a part of an info source
d. None of the above 

ANSWER(S): A, C 
Info sources can be assigned to multiple source systems. Also, info sources can have multiple data sources within the same source system. The communication structure is a part of the info source.

6. Select the statement(s) that is/are true about the data sources in a BW system: 
a. If the hide field indicator is set in a data source, this field will not be transferred to BW even after replicating the data source
b. A field in a data source won't be usable unless the selection field indicator has been set in the data source
c. A field in an info package will not be visible for filtering unless the selection field has been checked in the data source
d. All of the above 

ANSWER(S): A, C 
If the hide field indicator is checked in a data source, that field will not be transferred to the BW system from the source system even after replication. If the selection field indicator is not checked, that field won't be available for filtering in the info package.

7. Select the statement(s) which is/are true about the 'Control parameters for data transfer from the Source System': 
a. The table used to store the control parameters is ROIDOCPRMS
b. Field max lines is the maximum number of records in a packet
c. Max Size is the maximum number of records that can be transferred to BW
d. All of the above 

ANSWER(S): A 
ROIDOCPRMS is the table in the BW source system that is used to store the parameters for transferring data to BW. Max size is the size in KB used to calculate the number of records in each packet. Max lines is the maximum number of records in each packet.
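
For illustration, a hedged snippet that reads these control parameters for one receiving system; the logical system name is made up, and SLOGSYS is assumed to be the key field.

DATA ls_prms TYPE roidocprms.

* ROIDOCPRMS is keyed by the logical system of the receiving BW.
SELECT SINGLE * FROM roidocprms
  INTO ls_prms
  WHERE slogsys = 'BWPCLNT100'.        " hypothetical logical system

IF sy-subrc = 0.
  WRITE: / 'Max. size (KB) per packet:', ls_prms-maxsize,
         / 'Max. lines per packet:', ls_prms-maxlines.
ENDIF.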

8. The indicator 'Do not condense requests into one request when activation takes place' during ODS activation applies to condensation of multiple requests into one request to store it in the active table of the ODS. 
a. True
b. False 

ANSWER(S): B 
This indicator is used to make sure that the change log data is not compressed into one request when activating multiple requests at the same time. If these requests were combined into one request in the change log table, individual deletion would not be possible. 

9. Select the statement(s) which is/are not true related to flat file uploads: 
a. CSV and ASCII files can be uploaded
b. The table used to store the flat file load parameters is RSADMINC
c. The transaction for setting parameters for flat file upload is RSCUSTV7
d. None of the above 

ANSWER(S): C 
Transaction for setting flat file upload parameters is RSCUSTV6.

10. Which statement(s) is/are true related to Navigational attributes vs. Dimensional attributes? 
a. Dimensional attributes have a performance advantage over Navigational attributes for queries
b. Change history will be available if an attribute is defined as navigational
c. History of changes is available if an attribute is included as a characteristic in the cube
d. All of the above 

ANSWER(S): A, C 
Dimensional attributes have a performance advantage when running queries, since fewer table joins are needed than for navigational attributes. For navigational attributes, the history of changes is not available; but for dimensional attributes, the InfoCube keeps the change history.

11. When a Dimension is created as a line item dimension in a cube, Dimensions IDs will be same as that of SIDs. 
a. True
b. False 

ANSWER(S): A 
When a dimension is created as a line item dimension, the SIDs of the characteristic are stored directly in the fact tables and are used as dimension IDs. The dimension table is a view on the SID table and the fact table.

12. Select the true statement(s) related to the start routine in the update rules: 
a. All records in the data packet can be accessed
b. Variables declared in the global area is available for individual routines
c. Returncode greater than 0 will abort the whole packet
d. None of the above 

ANSWER(S): A, B, C 
In the start routine, all records of the data packet are available for processing. Variables declared in the global area can be used in individual routines. A returncode greater than 0 will abort processing of all records in the packet.
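
A sketch of a BW 3.x update-rule start routine; the form signature loosely follows the generated template, while the communication structure /BIC/CS8ZSALES and the field /BIC/ZAMOUNT are hypothetical.

FORM startup
  TABLES   monitor       STRUCTURE rsmonitor
           data_package  STRUCTURE /bic/cs8zsales    " hypothetical
  USING    record_all    LIKE sy-tabix
           source_system LIKE rsupdsimulh-logsys
  CHANGING abort         LIKE sy-subrc.

* All records of the packet are visible here, so packet-wide
* filtering is possible before the individual routines run.
  DELETE data_package WHERE /bic/zamount <= 0.       " hypothetical field

* A return code (ABORT) greater than 0 cancels processing
* of the entire data packet.
  IF data_package[] IS INITIAL.
    abort = 4.
  ENDIF.
ENDFORM.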

13. If a characteristic value has been entered in InfoCube-specific properties of an InfoCube, only these values can be loaded to the cube for that characteristic. 
a. True
b. False 

ANSWER(S): A 
If a constant is entered in the InfoCube-specific properties, only that value will be allowed in the InfoCube for that characteristic. This value is fixed in the update rules, and it is not possible to change it in the update rules for that characteristic.

14. After any changes have been done to an info set it needs to be adjusted using transaction RSISET. 
a. True
b. False 

ANSWER(S): A 
After making any type of change to an info set, it needs to be adjusted using the transaction RSISET. 


15. Select the true statement(s) about read modes in BW: 
a. Read mode determines how the OLAP processor retrieves data during query execution and navigation
b. Three different types of read modes are available
c. Can be set only at individual query level
d. None of the above 

ANSWER(S): A, B 
Read mode determines how an OLAP processor retrieves data during query execution and navigation. Three types of read modes are available:
1. Read data during expand hierarchies
2. Read data during navigation
3. Read data all at once
Read mode can be set at info provider level and query level.

Saturday, April 16, 2011

Selective deletion process chain





This article tells you how to use selective deletion in process chains, i.e. how to generate the selective deletion program using the "DELETE_FACTS" transaction code, and then how to use that program to delete data from an InfoCube.


Sometimes, before loading data into an InfoCube, we need to delete existing data based on some selection, e.g. a date range, and then load the new data into the InfoCube. We don't want to do this manually every time, so we need to automate the process.


InfoCube data deletion using selective deletion through process chains: in some cases we need to delete the InfoCube data based on a selective deletion.
E.g.: we have a planning InfoCube in the BW system, and its data comes from an APO system. In APO, planning runs on a weekly basis and covers the period from SY-DATUM to the next 30 days. So once the APO system completes the SNP weekly run, the BW system needs to extract the data from APO.
But before loading the data into the BW plan InfoCube, we first need to delete the existing data from SY-DATUM to the next 30 days, and after that load the new data. Because the APO SNP run happens every week, loading directly into the plan InfoCube would give wrong information in reports, since every time we load from SY-DATUM to the next 30 days. (E.g.: suppose the first APO SNP run date is 01.02.2009. After that run we load plan data into the BW InfoCube, i.e. from-date = SY-DATUM (01.02.2009) and to-date = 01.03.2009.
Then on 08.02.2009 the second SNP run happens. If we now load data from SY-DATUM (08.02.2009) to 08.03.2009, the data is duplicated in the InfoCube, because we already loaded 01.02.2009 to 01.03.2009 in the first SNP run and are now again loading 08.02.2009 to 08.03.2009. The InfoCube then holds data from 08.02.2009 to 01.03.2009 from the first SNP run and from 08.02.2009 to 08.03.2009 from the second run, which is wrong. So first we need to delete the data from 08.02.2009 to 01.03.2009 and then load the data from 08.02.2009 to 08.03.2009.)
We can achieve this by using the "DELETE_FACTS" transaction code, and the complete process can be automated using process chains.
Enter DELETE_FACTS in the command field, press Enter, give the InfoCube name, select the 'Generate selection program' option, and execute.

Note the generated program name, then go to SE38, enter the program name and click on the Variants option.
Give the variant name ZVAR_DEL1 and then click on Create.
It will display the selection screen. Our intention is to delete the data based on 0CALDAY (Calendar Day), so press F1 and find the screen field for Calendar Day.








In the following screen you can find the screen field for Calendar Day, i.e. C006-LOW. Once you have found the screen field name, close the screen. 

Click on the Technical Name button, then click on the Attributes button. Click on Selection Variable and select "D: Dynamic date calculation". Then select the Name of Variable and double-click on it; the following screen opens. We want to delete the data from the current day to the next 30 days, so select "Current date -xxx, current date +yyy" and double-click.








  






Save. Come back to SE38, select the Variants option and click on the Display button to see the values for the variant. It shows the range from SY-DATUM to the next 30 days. So we have created a selective deletion program to delete data from the InfoCube, and whenever you execute this program with the variant ZVAR_DEL1, it will delete the data from that day to the next 30 days.

Add the program to a process chain (using the ABAP Program process type) and automate it with the SAP scheduler, as shown in the sketch below.
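
In the process chain, point the ABAP Program process type at the generated program with the variant. Equivalently, a tiny wrapper report can run it; the generated program name ZDEL_CUBE_FACTS is a placeholder, while ZVAR_DEL1 is the variant created above.

REPORT z_run_selective_deletion.

* Run the generated selective-deletion program with the dynamic
* variant, so each run deletes SY-DATUM to SY-DATUM + 30 days.
SUBMIT zdel_cube_facts
  USING SELECTION-SET 'ZVAR_DEL1'
  AND RETURN.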

Tuesday, April 12, 2011

How to reprocess a failed request (update from PSA)


Click the failed request, and go to the header tab 

Next double click on the data target.


Go to the Requests tab and delete the failed request.



Note: You must delete the bad requests from all data targets

Exit the monitor and click on PSA.  Next, do a search for the request #.

Right-click on the request and choose 'Start the update immediately'.

A background job will be created to reprocess the request. Refresh; if successful, the status will be green.



Friday, April 8, 2011

How to Fill Random data in Cube


There may be a situation in which one of the BW InfoCubes you are working on is empty. Not only is it empty, it may not even have any transformations or loading processes yet. In such a case you can use the program CUBE_SAMPLE_CREATE to fill the cube with random data in no more than a minute.
This program allows you to fill an existing InfoCube with data, based either on master data, manual data entry, or randomly generated data.
First, open SE38 and run the program CUBE_SAMPLE_CREATE.

Execute it...

Output: the InfoCube will have a new request with the generated data.
It's very useful for a quick demonstration of a data model to a business user.
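
If you prefer to trigger it from your own code rather than from SE38, one hedged option (since the program's parameter names are not assumed here) is to submit it via its own selection screen:

* Show the program's selection screen so the InfoCube name and
* generation options can be entered, then run it.
SUBMIT cube_sample_create VIA SELECTION-SCREEN AND RETURN.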





Monday, April 4, 2011

BW Useful Programs


RSIMPCURR -- To Transfer Exchange Rates
RSIMPCUST -- To Transfer Global Settings from source system
RS_TRANSTRU_ACTIVATE_ALL -- To Activate Transfer Rules
-- Useful whenever we need to activate transfer rules in Quality or Production system after transports. 
RSAU_UPDR_REACTIVATE_ALL -- To Activate Update Rules
SAP_CONVERT_TO_TRANSACTIONAL -- To change Basic Cube to Transactional Cube
RSAR_PSA_CLEANUP_DIRECTORY -- To Clean PSA and Change log
SAP_INFOCUBE_DESIGN -- To know statistics (size) of Cubes
-- Useful to know the size of Fact Tables and Dimension Tables
RSSM_SET_REPAIR_FULL_FLAG -- To change request status from Full load to Repair Full
-- Useful to start delta loads if full loads are already present in the data target from the same data source
RSDDS_AGGREGATES_MAINTAIN -- For Hierarchy/Attribute Change run
RSDDS_CHANGERUN_MONITOR -- To Check Change run Status
RSDG_ODSO_ACTIVATE -- To Activate ODS objects in background. Very useful when BEx reporting is switched on.
RSDG_IOBJ_ACTIVATE -- To Activate Infoobjects(Mass Activation)
RSDG_MPRO_ACTIVATE -- To Activate MultiProviders
RSDG_CUBE_ACTIVATE -- Activation of InfoCubes
RS_COMSTRU_ACTIVATE_ALL -- Activate all inactive communication structures 
RSCONN07 -- SAP Connect Administration (System Status)
RSAOS_METADATA_UPLOAD_BATCH -- To replicate single datasource from Source(R/3) 
RSDRD_DELETE_FACTS -- To delete data selectively from infoprovider(ODS or CUBE)
RSAR_LOGICAL_SYSTEMS_ACTIVATE -- Activate All SAP Source Systems (After BW Upgrade)
RSDS_DATASOURCE_ACTIVATE_ALL -- Activate All DataSources of a Log System
RSTCC_ACTIVATE_ADMIN_COCKPIT -- Perform all steps to activate the content for the BI Admin Cockpit
RSTCC_ACTIVATEADMINCOCKPIT_NEW -- Activate Content for the BI Admin Cockpit
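
Many of these programs run long, so they are best scheduled as background jobs rather than run in dialog. Below is a minimal sketch using the standard job API; the job name Z_CUBE_STATS is arbitrary, and SAP_INFOCUBE_DESIGN is taken from the list above.

DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_CUBE_STATS',
      lv_jobcount TYPE tbtcjob-jobcount.

* Open a background job, attach the report as a job step,
* and release the job to start immediately.
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

SUBMIT sap_infocube_design
  VIA JOB lv_jobname NUMBER lv_jobcount
  AND RETURN.

CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    strtimmed = 'X'.                   " start immediately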