ABAP Basic Tips

What is the use of a HASHED TABLE?

A hashed table is useful when you have to work with a very big internal table and read it by key: the access time is constant, regardless of table size.
Definition of a Hashed Table:
"Defines the table as one that is managed with an internal hash procedure. You can imagine a hashed table as a set, whose elements you can address using their unique key. Unlike standard and sorted tables, you cannot access hash tables using an index. All entries in the table must have a unique key.

Access time using the key is constant, regardless of the number of table entries.
You can only access a hashed table using the generic key operations or other generic operations (SORT, LOOP, and so on). Explicit or implicit index operations (such as LOOP ... FROM or INSERT itab within a LOOP) are not allowed."
As long as your records have a unique key, using a hashed table will give you a huge performance gain when dealing with a large dataset. Assuming, in your case, 10,000 records and a unique key, use a hashed table. The main use of hashed tables is looking up fixed information from a key. So if you have a report that has a personnel number and you want to display the person's name, you could use a hashed table.
types: begin of typ_vbeln,
         vbeln like vbak-vbeln,
         ernam like vbak-ernam,
       end of typ_vbeln.

data: ls_vbeln type typ_vbeln,
      lt_vbeln type hashed table of typ_vbeln with unique key vbeln.

select vbeln ernam into table lt_vbeln from vbak.

loop at itab.
  read table lt_vbeln into ls_vbeln
       with table key vbeln = itab-vbeln.
  if sy-subrc = 0.
    write: / ls_vbeln-ernam.
  endif.
endloop.
Submit ABAP report with SQL traces
You use ST05, Trace Request, to do a SQL trace.
When using the SQL trace, it is good to run the program in the debugger and, just before the SQL executes, go to ST05 in another session and turn the trace on.
After the SQL has finished, turn the trace off.
Then click on List Trace to see the details.
This is the way to control the trace: if you simply turn the trace on and then execute the whole program, you will trace everything and be flooded with unnecessary information.
Testing and Debugging ABAP Codes, Functions
First, test your code by typing '/h' in the command window, which takes you into the debugger, or by setting break-points with F8. After testing the program from the code, go to transaction SE30 (runtime analysis), where you can find out what the drawbacks are. Then run transaction SLIN, or use the extended check from the editor's menu options, to do an extended syntax check, and the program is ready without errors.
Test for division by zero if it involves calculation or code accordingly.
Test by leaving all parameters in selection screen blank.
Test by entering wrong values in selection screen and display a pop-up if the user enters wrong selection screen values.
You can also do ABAP trace and SQL trace to make sure that your program is efficient.
As mentioned above, type /h in the command line and execute the program. Another way is to set a break-point at the function module or the required line and use single-step execute or execute. Once you have finished debugging, you can choose Delete to clear all break-points.

A third option, in certain cases, is to check for sy-subrc <> 0 and proceed from there by setting sy-subrc to 0 in the debugger, to see how the program behaves with the value you entered in the debug screen. In certain cases, try the WATCHPOINT feature of the debugger.
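As a small illustration of this technique (the work area wa_vbak and parameter p_vbeln are assumed names, not from any specific program), a hard-coded break-point before a sy-subrc check lets you overwrite sy-subrc in the debugger and watch the program take the other branch:

```abap
REPORT zdebug_demo.

DATA wa_vbak TYPE vbak.
PARAMETERS p_vbeln TYPE vbak-vbeln.

BREAK-POINT.  " execution stops here in the debugger

SELECT SINGLE * FROM vbak INTO wa_vbak WHERE vbeln = p_vbeln.
IF sy-subrc <> 0.
  " In the debugger, change sy-subrc to 0 after the SELECT
  " to test how the success branch behaves
  WRITE: / 'Document not found'.
ENDIF.
```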

Fields of Internal Tables
SY-TABIX contains the current line of an internal table. SY-TABIX is set by the statements below, but only for index tables. The field is either not set or is set to 0 for hashed tables.
APPEND sets SY-TABIX to the index of the last line of the table, that is, it contains the overall number of entries in the table.
COLLECT sets SY-TABIX to the index of the existing or inserted line in the table. If the table has the type HASHED TABLE, SY-TABIX is set to 0.
LOOP AT sets SY-TABIX to the index of the current line at the beginning of each loop pass. At the end of the loop, SY-TABIX is reset to the value that it had before entering the loop. It is set to 0 if the table has the type HASHED TABLE.
READ TABLE sets SY-TABIX to the index of the table line read. If you use a binary search and the system does not find a line, SY-TABIX contains the total number of lines, or one more than the total number of lines. SY-TABIX is undefined if a linear search fails to return an entry.
SEARCH FOR sets SY-TABIX to the index of the table line in which the search string is found.
After the statements DESCRIBE TABLE, LOOP AT, and READ TABLE, SY-TFILL contains the number of lines in the relevant internal table.
After the statements DESCRIBE TABLE, LOOP AT, and READ TABLE, SY-TLENG contains the length of the lines in the relevant internal table.
After the statement DESCRIBE TABLE, SY-TOCCU contains the initial amount of memory allocated to the relevant internal table (the OCCURS value).
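A minimal sketch of how DESCRIBE TABLE fills these system fields (table and variable names are made up for the example):

```abap
REPORT zdescribe_demo.

DATA: itab     TYPE TABLE OF i,
      lv_lines TYPE i.

APPEND 10 TO itab.
APPEND 20 TO itab.

" DESCRIBE TABLE fills sy-tfill (number of lines)
" and sy-tleng (line width in bytes)
DESCRIBE TABLE itab LINES lv_lines.

WRITE: / 'Lines:',    lv_lines,
       / 'sy-tfill:', sy-tfill,
       / 'sy-tleng:', sy-tleng.
```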

Difference between Work Area and Header Line
While adding records to, or retrieving records from, an internal table, we have to keep the current record somewhere temporarily.
The area where this record is kept is called the work area of the internal table. This area must have the same structure as a line of the internal table. An internal table consists of a body and an optional header line.
The header line is an implicit work area for the internal table. Whether the itab has a header line depends on how the internal table is declared.
data: begin of itab occurs 10,
        ab type c,
        cd type i,
      end of itab.               " this table will have a header line
data: wa_itab like itab.         " explicit work area for itab
data: itab1 like itab occurs 10. " table without a header line
The header line is a field string with the same structure as a row of the body, but it can only hold a single row.
It is a buffer used to hold each record before it is added or each record as it is retrieved from the internal table. It is the default work area for the internal table.
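The difference can be sketched like this (the field names ab and cd are made up for the example):

```abap
data: begin of itab occurs 0,
        ab type c,
        cd type i,
      end of itab.

data: wa like itab.   " explicit work area with the line structure

" via the header line (the implicit work area also named itab):
itab-ab = 'X'. itab-cd = 1.
append itab.          " appends the header line to the body

" via the explicit work area:
wa-ab = 'Y'. wa-cd = 2.
append wa to itab.    " appends wa to the body of itab
```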
Parallel cursor is a technique to increase the performance of a program that contains nested loops.
For example, if we use a nested SELECT in our program instead of the FOR ALL ENTRIES addition, performance definitely goes down. In the same way, nested loops over internal tables also degrade performance.
For example:
if the code contains this type of logic:
loop at itab into wa.
  loop at itab1 into wa1.
  endloop.
endloop.
In the above logic, for each record of itab, the table itab1 is looped over in full. If itab contains many records, and itab1 contains, say, double that number, this results in a performance problem. You can modify it as follows (both tables must be sorted by field1):
loop at itab into wa.
  read table itab1 into wa1 with key field1 = wa-field1 binary search.
  if sy-subrc eq 0.
    v_tabix = sy-tabix.
    loop at itab1 into wa1 from v_tabix. "loop from that index
      if wa1-field1 ne wa-field1.
        exit. "leave the inner loop once the key changes
      endif.
    endloop.
  endif.
endloop.
Parallel Cursor helps in performance tuning.
Reasons for using Parallel Cursor:
Nested loops are one of the fear factors for ABAP developers, as they consume a lot of program execution time. If the number of entries in the internal tables is huge, the situation becomes even worse. The solution is to use the parallel cursor method whenever a nested loop is needed.
Program (Nested Loop using Parallel Cursor):
REPORT zparallel_cursor2.

TABLES: likp, lips.

DATA: t_likp TYPE TABLE OF likp,
      t_lips TYPE TABLE OF lips.

DATA: w_runtime1 TYPE i,
      w_runtime2 TYPE i,
      w_index    LIKE sy-index.

SELECT *
  FROM likp
  INTO TABLE t_likp.

SELECT *
  FROM lips
  INTO TABLE t_lips.

GET RUN TIME FIELD w_runtime1.

SORT t_likp BY vbeln.
SORT t_lips BY vbeln.

LOOP AT t_likp INTO likp.
  LOOP AT t_lips INTO lips FROM w_index.
    IF likp-vbeln NE lips-vbeln.
      w_index = sy-tabix.
      EXIT.
    ENDIF.
*   process the matching lips line here
  ENDLOOP.
ENDLOOP.

GET RUN TIME FIELD w_runtime2.
w_runtime2 = w_runtime2 - w_runtime1.
WRITE w_runtime2.
Explain the use of secondary indexes in queries.
An index in SAP is like an index for a book. If you want to look up something from the book you refer the book's index page. Right?
In the same way in SAP, when you fire a query, the system can find a particular record in the database table more easily if the fields in the WHERE clause of your SELECT statement are covered by an index.

So in that sense, there is no difference between the query based on a primary index or a secondary index.
They are fired exactly the same way.

For example, in a SELECT with WHERE matnr = ..., matnr could be part of a primary or a secondary index, depending on the table.
Primary Index
The primary index is the one created on the declared primary key fields of the table.

It is not always possible to stick to the primary index when querying a particular table for your report.

For example, at our company we use special stock indicators "E" and "O" for stocks. Almost 95% of the finished and semi-finished products move within SAP with the tag "E".

If I want to develop a Z report to process material documents with "E" stock, I have to query the mseg table on sale order number (MAT_KDAUF) and item number (MAT_KDPOS).

But the primary index of mseg does not include the sale order number and item. So, to improve the performance of my query, I have to create another index on mseg with the MAT_KDAUF and MAT_KDPOS fields.

Then my query will be something like this,

select * into table i_mseg from mseg where matnr = p_matl and mat_kdauf = p_so and mat_kdpos = p_item.

But there are many things to take care of while creating secondary indexes.

1. A large number of indexes on a single table swells the database and also reduces database performance. An index is nothing but a copy of the index key columns of the main table, together with a record pointer, sorted on the index keys. As you create more and more indexes, the database has to spend more time finding the suitable index for your query.
2. An index with too many fields is practically useless, because as you keep adding fields to the index, it becomes more and more like searching your entire table for the filter criteria.
3. As a rule of thumb, a maximum of three fields per index and a maximum of six indexes per table will produce the desired results.
4. If the table size is small and the growth rate is low, more fields or indexes can be added.
5. Very important: create an index on a set of fields only if you need that querying criterion in many programs or reports. In the above example, whatever material-document report I develop, I will use the sale order number and item number in the query, so the requirement is generic. Don't create an index for one-time use.
6. Maintain the index's field sequence in the WHERE clause when you are trying to hit an index of the table. It can improve performance considerably.
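Points 5 and 6 can be sketched as follows, continuing the mseg example above (i_mseg, p_so, and p_item are the names used earlier; the index itself is assumed to exist on MAT_KDAUF, MAT_KDPOS):

```abap
" The WHERE clause lists the fields in the same order as the
" assumed secondary index on mseg: MAT_KDAUF, then MAT_KDPOS.
SELECT * FROM mseg
  INTO TABLE i_mseg
  WHERE mat_kdauf = p_so      " sale order number (first index field)
    AND mat_kdpos = p_item.   " sale order item   (second index field)
```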
Quick Note on Design of secondary database indexes and logical databases
Secondary Database Indexes
First it must be stated that table design is a more logical task, while index design is rather technical. In table design it might make sense to place certain fields (client, company code, ...) at the beginning. In index design this is not advisable: it is very important that an index begins with very selective fields, such as object numbers. Client and company code are not selective.
Indexes should be small (few fields). The Database optimizer can combine two or more indexes to execute a query.
Indexes of one table should be disjoint (have few common fields), in order not to confuse the optimizer which index to use.
Note that each index slows the inserts into the table down. Updates are only slowed down if indexed fields are updated. In general, heavy inserted tables should have only few indexes while heavy selected tables might have more.
Logical Database
Using logical databases is a good way to write reports, since logical databases can be optimized centrally. But ensure that the structure of the logical database fits your report well; otherwise the effect can be the opposite.

1. What is meant by secondary index?
2. What is the purpose of cursor?
If you cannot use the primary index to determine the result set because, for example, none of the fields of the primary index occur in the WHERE or HAVING clause, the system searches through the entire table (full table scan). For this case, you can create secondary indexes, which can restrict the number of table entries searched to form the result set.
You should only create secondary indexes for database tables from which you mainly read, since indexes have to be updated each time the database table is changed. Secondary indexes should contain columns that you use frequently in selections, and that are as selective as possible, to improve performance.
A cursor is used to read data:
When you use a cursor to read data, you decouple the process from the SELECT statement. To do this, you must open a cursor for a SELECT statement. Afterwards, you can place the lines from the selection into a flat target area. An open cursor points to an internal handler, similar to a reference variable pointing to an object.
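A minimal sketch of reading with a database cursor (the table spfli and the selection on carrid are just example choices):

```abap
REPORT zcursor_demo.

DATA: c  TYPE cursor,
      wa TYPE spfli.

" Open a cursor for the SELECT; no data is read yet
OPEN CURSOR c FOR
  SELECT * FROM spfli WHERE carrid = 'LH'.

" Fetch the result set line by line into a flat work area
DO.
  FETCH NEXT CURSOR c INTO wa.
  IF sy-subrc <> 0.
    EXIT.  " no more rows
  ENDIF.
  WRITE: / wa-connid.
ENDDO.

CLOSE CURSOR c.
```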
A cursor serves one more purpose: you can set the cursor on the current list dynamically from within your program, for example to help the user enter values into input fields or select fields or lines. To set the cursor we use the SET CURSOR statement:
SET CURSOR col lin.
This statement sets the cursor to column col of line lin of the output window.
Create a table maintenance program for a Z table
In transaction SE11, in the attributes of your Z table, check the table maintenance check box. Then go to transaction SM30, enter the Z-table name and click the Maintain button. Here you can enter new entries into the Z table.
You can create a PARAMETER TRANSACTION for SM30.
Follow these steps :
1. Go to transaction SE93 and give your own transaction code, say ZTRAN_TAB, for maintaining your Z table.
2. Click on create button and check the radio button Transaction with parameters (PARAMETER TRANSACTION) and click on the tick button.
3. In the next screen enter default values:
transaction : SM30
check the Skip initial screen check box
4. Scroll down you will find a table control for default values
Name of the screen field | value
VIEWNAME | your ztable name
Save your work.
Now that you have created a custom transaction for maintaining your Z table, it can be called from any program with CALL TRANSACTION 'ZTRAN_TAB'.
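Two short sketches of calling the maintenance dialog (the transaction name ZTRAN_TAB comes from step 1 above; the table name ZMYTABLE is an assumed placeholder):

```abap
" Call the parameter transaction created above
CALL TRANSACTION 'ZTRAN_TAB'.

" Or call the generated maintenance dialog directly:
CALL FUNCTION 'VIEW_MAINTENANCE_CALL'
  EXPORTING
    action    = 'U'          " U = update (change) mode
    view_name = 'ZMYTABLE'   " your Z table / maintenance view name
  EXCEPTIONS
    OTHERS    = 1.
```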
