BW on HANA
Performance Optimisation
Ajay Kumar Uppal
BW on HANA: Scale-Out Versus Scale-Up Configuration

Capacity ramp-up, flexibility, scalability
  Scale-Out: Can be ramped up easily to almost 200 TB; allows maximum flexibility and scalability.
  Scale-Up: Limited scalability.
Capacity ramp-down
  Scale-Out: Available.
  Scale-Up: No provision.
SAP certification for ramp-up from 1 TB to 4 TB and beyond
  Scale-Out: Already available.
  Scale-Up: Not certain; dependent on SAP.
Business downtime during capacity ramp-up
  Scale-Out: Zero.
  Scale-Up: Equal to the migration time (roughly 18 hours or more).
Costs associated with capacity ramp-up
  Scale-Out: No project fees.
  Scale-Up: Has to be done as a migration project.
Installed base
  Scale-Out: >95% of customers.
  Scale-Up: <5% of customers.
Largest production instance size
  Scale-Out: >60 TB.
  Scale-Up: <4 TB.
Architecture, operations and support
  Scale-Out: Multi-node system(s) with more parts; calls for a little extra management effort compared to Scale-Up, but allows better flexibility.
  Scale-Up: Single node; calls for less management and support effort, but if the single node fails, the only fallback is DR.
Table re-distribution
  Scale-Out: May call for table re-distribution once or twice a year; this can be done during planned maintenance.
  Scale-Up: Little to no need, as everything is stored on a single node.
Reliability
  Scale-Out: Higher reliability; a spare node can be provisioned in case one node goes down.
  Scale-Up: Low reliability; if the single node goes down, the entire system fails.
BW on HANA: Performance Optimisation
I am prepared to share, in brief, some items that reliably deliver performance optimisation:
• DataStore Object (DSO) activation is a critical step in transferring data from source systems to the business warehouse. We have achieved activation times that are 54 times faster than with the previous process.
• Faster DSO activation makes data available more quickly and supports more frequent loading and updating of data, for availability closer to real time.
• It speeds the flow of data from source systems.
• Conversion of in-memory objects, whereby the extended star schema is slimmed down and dimensions are realigned, provides scope for re-architecting some data flows without disruption.
• Suggest a Data Volume Optimization strategy to keep the system clean and green.
• In a tool-supported post-migration step, InfoCubes can be selected and converted to SAP HANA-optimized objects.
Housekeeping is the single most significant contributor.

SAP HANA Data Volume Management Tasks (Sample Template)
Priority | Action | Deadline

High | Define retention times for PSA records individually for all DataSources and delete outdated data. | After BW on HANA go-live
High | Schedule a periodic batch job to delete outdated entries from table ODQDATA_F. | After BW on HANA go-live
High | Enable the non-active data concept as suggested in SAP Note 1767880. | After BW on HANA go-live
High | Archive (or move to NLS) old/unused data from DSOs and InfoCubes. | After BW on HANA go-live
High | End-to-end review with the Philips team and recommendations regarding the top BW schema tables. | After BW on HANA go-live
High | Frequent review and check of HANA DB parameter settings. | After BW on HANA go-live
High | Check whether power-save mode is active. For more information, see SAP Note 1890444. | After BW on HANA go-live
High | Weekly check of HANA DB trace settings. | After BW on HANA go-live
High | Review and propose a best practice for the backup procedure. | Ongoing
High | At least weekly review and monitoring of the recommendations for the alerts generated in the HANA DB system. | After BW on HANA go-live
High | Use report RSDRI_RECONVERT_DATASTORE to convert HANA-optimized DSOs back to classic objects, since from BW 7.3 SP5 standard DSOs support the HANA schema algorithm. | After BW on HANA go-live
Medium | Consider partitioning for tables that are expected to grow rapidly, to ensure parallelization and adequate performance. | After BW on HANA go-live
Medium | Propose re-partitioning for tables that are expected to grow; HP recommends re-partitioning tables before inserting mass data, or while they are still small. | After BW on HANA go-live
Medium | Review, test and implement SAP basis and memory-management parameter recommendations to avoid out-of-memory (OOM) issues. | After BW on HANA go-live
Optimization 1: Recommendations to reduce the data footprint on the HANA database. As a rule of thumb, at least 45-50% of SAP HANA memory should be reserved for SQL computations, SAP HANA services, analytics and other OS-related services; the rest can be occupied by the actual data in the column and row stores. Check frequently how much memory of the BW on HANA system is occupied by data and how much is left for computations. If data crowds out this computation headroom, performance suffers: the number of table unloads from memory to disk increases, which deteriorates performance further and leads to high memory peaks in SAP HANA. By keeping up this monitoring we can always keep the BW on HANA system in line with best practices.
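The rule of thumb above can be expressed as a simple check. A minimal sketch, assuming a 55% data-share ceiling (the threshold and function name are illustrative, not an SAP-defined metric):

```python
def data_footprint_ok(total_memory_gb: float, data_memory_gb: float,
                      max_data_share: float = 0.55) -> bool:
    """Return True if column/row-store data stays within the recommended
    share of total HANA memory, leaving roughly 45-50% free for SQL
    computations, HANA services and the OS."""
    return data_memory_gb <= max_data_share * total_memory_gb

# Example: a 1 TB appliance holding 620 GB of table data violates the rule,
# since only ~40% of memory is left for computations.
print(data_footprint_ok(1024, 620))  # False
```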
Optimization 2: Provider proposes frequent analysis of the HANA database configuration and review of HANA DB parameters, CPU frequency settings and trace settings. Objects with a high number of records should be analysed for table partitioning if these tables are expected to grow rapidly in the future.
Optimization 3: To reduce the data footprint in the HANA database, review and implement these recommendations:
• Keep track of the size of the top ~30 PSA tables and assess the data retention policies.
• Cold/warm data can be unloaded to disk.
• Define retention times for PSA records individually for all DataSources and delete outdated data, starting with the largest PSA tables.
BW on HANA project: additional questions
Optimization 4: Delete outdated entries from table ODQDATA_F by scheduling the periodic batch job ODQ_CLEANUP, as suggested in SAP Note 1836773.
Recommendation: Table ODQDATA_F is part of the operational delta queue. Refer to SAP Note 1836773 (How to delete outdated entries from delta queues - SAP Data Services) and delete the outdated entries from this table using the batch job ODQ_CLEANUP.
Once a day a cleanup process removes all outdated entries from the delta queues so they do not fill up. This is a regular batch job and can be maintained as such. The job and the retention interval can be configured with transaction ODQMON:
• In transaction ODQMON, choose menu Goto -> Reorganize Delta Queues.
• Schedule a job for reorganization, e.g. ODQ_CLEANUP_CLIENT_004.
• By default the job is scheduled daily at 01:23:45 system time.
• If needed, adapt the start time and frequency in transaction SM37.
• If needed, adapt the retention time for recovery (see the F1 help for details).
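The retention idea behind this cleanup can be sketched as a simple cutoff computation; entries older than the cutoff are candidates for deletion. The function name and default retention are illustrative only; the actual interval is maintained in ODQMON:

```python
from datetime import datetime, timedelta

def odq_retention_cutoff(now: datetime, retention_hours: int = 24) -> datetime:
    """Delta-queue entries older than the returned timestamp are candidates
    for cleanup. retention_hours mirrors the retention interval that would
    be maintained in transaction ODQMON."""
    return now - timedelta(hours=retention_hours)

# With the default daily schedule at 01:23:45, a 24-hour retention keeps
# exactly one day of delta-queue history.
print(odq_retention_cutoff(datetime(2016, 1, 2, 1, 23, 45)))
# 2016-01-01 01:23:45
```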
Optimization 5: Enable the non-active data concept for BW on SAP HANA; review and implement the code corrections contained in SAP Note 1767880 (Non-active data concept for BW on SAP HANA DB).
After implementing the code corrections, follow the manual steps to ensure that the unload priorities of all tables are set correctly. This ensures that Persistent Staging Area (PSA) tables, change-log tables, and write-optimized DataStore objects are flagged as EARLY UNLOAD by default, which means these objects are displaced from memory before other BW objects (such as InfoCubes and standard DataStore objects).
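Under the hood, flagging tables as EARLY UNLOAD comes down to ALTER TABLE ... UNLOAD PRIORITY statements on the affected tables. A hedged sketch that merely generates such statements (the schema and table names are made-up examples; in practice the corrections from SAP Note 1767880 should set the priorities, not hand-written SQL):

```python
def early_unload_statements(schema: str, tables: list[str],
                            priority: int = 7) -> list[str]:
    """Generate HANA ALTER TABLE statements that flag the given tables for
    early unload (priority 7 = EARLY UNLOAD). Table names here follow the
    PSA naming pattern purely for illustration."""
    return [f'ALTER TABLE "{schema}"."{t}" UNLOAD PRIORITY {priority}'
            for t in tables]

for stmt in early_unload_statements("SAPBW", ["/BIC/B0000123000"]):
    print(stmt)
```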
Optimization 6: Understand and review the CPU type, CPU clock frequency, and the hosts. If the CPU clock frequency is
set too low, this has a negative impact on the overall performance of the SAP HANA system. Usually the CPU clock
frequency should be above 2000 MHz.
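A quick way to spot a too-low clock is to scan /proc/cpuinfo on the HANA host. A minimal sketch, parsing sample text rather than the live file:

```python
def low_frequency_cpus(cpuinfo_text: str, min_mhz: float = 2000.0) -> list[int]:
    """Parse /proc/cpuinfo-style text and return the logical CPU numbers
    whose current clock is below min_mhz (a hint that power-save mode or a
    low governor setting is throttling the system)."""
    cpus, current = [], None
    for line in cpuinfo_text.splitlines():
        if line.startswith("processor"):
            current = int(line.split(":")[1])
        elif line.startswith("cpu MHz") and current is not None:
            if float(line.split(":")[1]) < min_mhz:
                cpus.append(current)
    return cpus

sample = "processor : 0\ncpu MHz : 2893.0\nprocessor : 1\ncpu MHz : 1200.0\n"
print(low_frequency_cpus(sample))  # [1]
```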
Optimization 7: If an inappropriate trace level is set for SAP HANA database components, a high amount of trace
information may be generated during routine operation. This can impair system performance and lead to unnecessary
consumption of disk space.
Recommendation: For production usage of your SAP HANA database, we recommend setting the trace level of all components to the recommended values.
Background: Traces can be configured in the 'Trace Configuration' tab of the SAP HANA studio Administration Console.
Optimization 8: Largest non-partitioned column tables. There are objects with a high number of records (more than 300 million). This is not yet critical with regard to the technical limit of SAP HANA (2 billion records), but table partitioning should
be considered if these tables are expected to grow rapidly in the future.
Recommendation: Consider partitioning for tables that are expected to grow rapidly in order to ensure parallelization and
adequate performance. We recommend that you partition tables before inserting mass data or while they are still small.
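The 300-million and 2-billion figures suggest a simple sizing heuristic. An illustrative sketch (the per-partition target is an assumption for demonstration, not an SAP formula):

```python
def suggested_partitions(record_count: int,
                         max_per_partition: int = 300_000_000) -> int:
    """Suggest a partition count so that no partition is expected to exceed
    max_per_partition records, keeping each partition well clear of the
    2-billion-record hard limit for a HANA column table (partition)."""
    return max(1, -(-record_count // max_per_partition))  # ceiling division

print(suggested_partitions(900_000_000))  # 3
```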
Optimization 9: Largest partitioned column tables (by record count). Consider re-partitioning tables that are expected to grow; re-partition them before inserting mass data, or while they are still small.
For more information, see SAP Note 1650394 or refer to the SAP HANA Administration Guide.
Optimization 10: Largest column tables in terms of delta size. The separation into main and delta storage allows high compression and high write performance at the same time. Write operations are performed on the delta store, and changes are transferred from the delta store to the main store asynchronously during the delta merge. The column store automatically performs a delta merge according to several technical limits that are defined by parameters. If applications require more direct control over the merge process, the smart merge function can be used for certain tables (for example, BW prevents delta merges during data loading for performance reasons).
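The parameterised merge limits can be illustrated with a toy decision function. The thresholds below are invented for the sketch; the real column store uses its own configured limits:

```python
def should_merge(delta_rows: int, main_rows: int,
                 abs_threshold: int = 10_000_000,
                 rel_threshold: float = 0.1) -> bool:
    """Illustrative auto-merge decision: merge the delta store into the main
    store when the delta exceeds an absolute row count, or a fraction of the
    main store's size. Both thresholds are made-up stand-ins for the actual
    HANA merge parameters."""
    return (delta_rows >= abs_threshold
            or (main_rows > 0 and delta_rows / main_rows >= rel_threshold))

# 2 M delta rows on a 15 M-row main store exceeds the 10% relative limit.
print(should_merge(delta_rows=2_000_000, main_rows=15_000_000))  # True
```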
Optimization 11: Memory utilization details for HANA services.
The memory usage of the SAP HANA engines (services) is only a snapshot at the time the data was collected.
Different aspects of the memory consumption of the HANA database are highlighted: "Physical Memory Used by Services" corresponds to the "Database Resident Size" in the SAP HANA studio and can be compared with the resident size of the service in the operating system. The sum of "Heap Memory Used Size" and "Shared Allocated Size" roughly corresponds to the memory usage of the SAP HANA database, which is shown in the SAP HANA studio as "Database Memory Usage". The difference between the "Database Memory Usage" and the "Resident Database Memory" can usually be explained by the "Allocated Heap Size".
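The relationships between these metrics are plain arithmetic; a small sketch that recomputes them from sample values (the figures are made up):

```python
def reconcile_memory(heap_used_gb: float, shared_alloc_gb: float,
                     resident_gb: float) -> dict:
    """Recompute the relationships described above: 'Database Memory Usage'
    is roughly heap used + shared allocated, and its gap to the resident
    size is usually explained by allocated-but-unused heap."""
    db_usage = heap_used_gb + shared_alloc_gb
    return {"database_memory_usage_gb": db_usage,
            "gap_to_resident_gb": db_usage - resident_gb}

print(reconcile_memory(heap_used_gb=400, shared_alloc_gb=50, resident_gb=420))
```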
Optimization 12: Reducing table sizes. All tables located in the row store are loaded into main memory when the database is started. Furthermore, row-store tables cannot be compressed as much as tables located in the column store. Therefore, we need to keep the row store as small as possible.
RSDDSTAT* data: BW statistical data saved in the RSDDSTAT* tables is located in the row store. Since new data is continuously loaded into the Business Warehouse (BW), the amount of statistical data is always increasing. It is therefore essential to keep the statistical tables, which also provide information about the performance of your queries, as small as possible.
Recommendation: Reduce the number of records saved in the RSDDSTAT* tables. Consider the following:
• When you maintain the settings for the query statistics, deactivating the statistics is the same as activating the statistics internally with detail.
• The settings on the "InfoProvider" tab page affect the collection of statistical data for queries, as do the settings on the "Query" tab page (transaction RSDDSTAT). For Web templates, workbooks, and InfoProviders, you can only choose between activating and deactivating the statistics. If you did not maintain settings for an individual object, the default setting for the object is used; if you did not change the default settings, the statistics are activated.
• You can delete statistics data using report RSDDSTAT_DATA_DELETE or via the corresponding graphical interface accessible through transaction RSDDSTAT.
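The retention logic behind RSDDSTAT_DATA_DELETE amounts to deleting statistics records older than a cutoff. A sketch under that assumption (the record layout is invented for illustration):

```python
from datetime import date, timedelta

def stats_to_delete(records: list[tuple[str, date]], keep_days: int = 30,
                    today: date = date(2016, 6, 30)) -> list[str]:
    """Return the keys of statistics records older than keep_days - the same
    retention idea as deleting RSDDSTAT* data periodically. Each record is a
    (key, logged_on) pair; the layout is hypothetical."""
    cutoff = today - timedelta(days=keep_days)
    return [key for key, logged in records if logged < cutoff]

recs = [("Q1", date(2016, 5, 1)), ("Q2", date(2016, 6, 20))]
print(stats_to_delete(recs))  # ['Q1']
```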
Optimization 13: Conversion of InfoCubes and DataStore objects. After an upgrade to SAP NetWeaver BW 7.30 SP5 or later on SAP HANA, all DataStore objects and InfoCubes remain unchanged. In a tool-supported post-processing step (transaction RSMIGRHANADB or report RSDRI_CONVERT_CUBE_TO_INMEMORY), DataStore objects and InfoCubes can be selected and converted to SAP HANA-optimized objects.
All InfoCubes should be converted to benefit fully from the advantages provided by SAP BW powered by SAP HANA. On the other hand, we do not recommend converting DataStore objects, as the advantages of the converted objects can be achieved without modifying the DSOs.
Optimization 14: SAP HANA-optimized DataStore objects.
Background: All advantages of HANA-optimized DataStore objects are now available for standard DSOs too, which renders conversion unnecessary. While HANA-optimized DSOs will still be supported in the future, we do not recommend converting DSOs; rather, reconvert any existing HANA-optimized DSOs back to classic objects. Starting with BW 7.30 SP10 (BW 7.31 SP09, BW 7.40 SP04), converting classic DSOs to HANA-optimized DSOs is no longer possible.
SAP HANA-optimized DataStore objects cannot be included in an SAP BW 3.x data flow. If you want to optimize a DataStore object that is part of a 3.x data flow, you first have to migrate the data flow itself. Furthermore, an SAP HANA-optimized DataStore object cannot be populated directly with real-time data acquisition (RDA).
The 'unique records' property does not provide any performance gain. In addition, the uniqueness check is not performed in BW at all; instead, uniqueness is checked by an SQL statement (DBMS exit).
Never Generate SIDs: with this option, SIDs are never generated. It is useful for all DSOs that are used only for further processing into other DSOs or InfoCubes, as it is not possible to run a query directly on this kind of DSO.
Optimization 15: SAP HANA-optimized InfoCubes. With SAP HANA-optimized InfoCubes, the star schema is transferred to a flat structure, which means that the dimension tables are no longer physically available. Since no dimension IDs have to be created for SAP HANA-optimized InfoCubes, the loading process is accelerated. The accelerated insertion of data into SAP HANA-optimized InfoCubes means that the data is available for reporting earlier.
Optimization 16: Analytic indexes. Analytic indexes can be created in the APD (transaction RSANWB), or they can be an SAP HANA model published to the SAP BW system. If you want to use SAP BW OLAP functions to report on SAP HANA analytic or calculation views, you can publish these SAP HANA models to the SAP BW system (transaction RSDD_HM_PUBLISH).
Optimization 17: MultiProvider queries.
For MultiProvider queries based on SAP HANA, the standard setting for the "operations in BWA" query property (transaction RSRT) is "3 Standard". However, if the MultiProvider spans a mixed landscape (both SAP HANA-optimized and non-converted InfoProviders underneath), performance problems might occur.
Recommendation: If you are running queries on top of a MultiProvider containing SAP HANA-optimized as well as standard InfoProviders, either convert all InfoProviders to SAP HANA-optimized, or keep the query property set to "3 Standard".
Last but not least: always remember to test, and to take a full system backup, before implementing any changes in a productive environment.
Performance testing – Query
[Charts: distribution of query run times (buckets: less than 10 s, 10-30 s, 30-60 s, more than 60 s) before and after the move to Scale-Out BW on HANA, and average query run times before vs. after the move, per query type.]
Data-load results: 'Customer' 12 TB BW on HANA PoC

Application | Load type | Number of loads | Improvement
A2A | Long running load | 1 | 78%
CL SCM | Impacting load | 1 | 95%
CORE 1 | Long running load | 1 | 15%
Master data | Long running load | 1 | 87%
One PI | Long running load | 1 | 91%
PDS | Impacting load | 1 | 98%
POS | Impacting load | 1 | 89%
QXP | Impacting load | 2 | 91%
SCM Dashboards | Impacting load | 3 | 81%
SMART - SRM | Impacting load | 3 | 57%
SMART - VBM | Impacting load | 1 | 88%
SMART - VBM | Long running load | 2 | 66%
VIPP LI | Impacting load | 3 | 64%
VIPP PH | Impacting load | 2 | 45%
VIPP PH | Long running load | 4 | 71%
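The improvement figures in the table are straightforward to derive from before/after run times; for reference, a one-line helper (the example timings are made up):

```python
def improvement_pct(before_s: float, after_s: float) -> int:
    """Percentage run-time improvement, as reported in the table above:
    (before - after) / before, rounded to a whole percent."""
    return round(100 * (before_s - after_s) / before_s)

# e.g. a load that dropped from 10 h to 2.2 h is a 78% improvement
print(improvement_pct(10.0, 2.2))  # 78
```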
BPC results from recent 'Customer' 12 TB PoC
[Charts: BPC input-layout and report performance results.]
ChatGPT and Beyond - Elevating DevOps ProductivityChatGPT and Beyond - Elevating DevOps Productivity
ChatGPT and Beyond - Elevating DevOps Productivity
 
Design Guidelines for Passkeys 2024.pptx
Design Guidelines for Passkeys 2024.pptxDesign Guidelines for Passkeys 2024.pptx
Design Guidelines for Passkeys 2024.pptx
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
AI in Action: Real World Use Cases by Anitaraj
AI in Action: Real World Use Cases by AnitarajAI in Action: Real World Use Cases by Anitaraj
AI in Action: Real World Use Cases by Anitaraj
 
JavaScript Usage Statistics 2024 - The Ultimate Guide
JavaScript Usage Statistics 2024 - The Ultimate GuideJavaScript Usage Statistics 2024 - The Ultimate Guide
JavaScript Usage Statistics 2024 - The Ultimate Guide
 
Choreo: Empowering the Future of Enterprise Software Engineering
Choreo: Empowering the Future of Enterprise Software EngineeringChoreo: Empowering the Future of Enterprise Software Engineering
Choreo: Empowering the Future of Enterprise Software Engineering
 
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
AI+A11Y 11MAY2024 HYDERBAD GAAD 2024 - HelloA11Y (11 May 2024)
 
ADP Passwordless Journey Case Study.pptx
ADP Passwordless Journey Case Study.pptxADP Passwordless Journey Case Study.pptx
ADP Passwordless Journey Case Study.pptx
 
Six Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal OntologySix Myths about Ontologies: The Basics of Formal Ontology
Six Myths about Ontologies: The Basics of Formal Ontology
 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 
Quantum Leap in Next-Generation Computing
Quantum Leap in Next-Generation ComputingQuantum Leap in Next-Generation Computing
Quantum Leap in Next-Generation Computing
 
CNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In PakistanCNIC Information System with Pakdata Cf In Pakistan
CNIC Information System with Pakdata Cf In Pakistan
 
Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
 
Vector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptxVector Search -An Introduction in Oracle Database 23ai.pptx
Vector Search -An Introduction in Oracle Database 23ai.pptx
 
Elevate Developer Efficiency & build GenAI Application with Amazon Q​
Elevate Developer Efficiency & build GenAI Application with Amazon Q​Elevate Developer Efficiency & build GenAI Application with Amazon Q​
Elevate Developer Efficiency & build GenAI Application with Amazon Q​
 

BW on HANA optimisation answers

  • 1. BW on HANA Performance Optimisation Ajay Kumar Uppal
  • 2. BW on HANA: Scale-Up Versus Scale-Out Configuration
Criterion | Scale-Out Configuration for BW on HANA | Scale-Up Configuration for BW on HANA
Capacity ramp-up, flexibility, scalability | Can be ramped up easily to almost 200 TB; allows maximum flexibility and scalability. | Limited scalability.
Capacity ramp-down | Available. | No provision.
SAP certification for ramp-up from 1 TB to 4 TB and beyond | Already available. | Not certain; at the mercy of SAP.
Business downtime during capacity ramp-up | Zero. | Equal to migration time (~18+ hours).
Costs associated with capacity ramp-up | No project fees. | Has to be done as a migration project.
Installed base | >95% of customers | <5% of customers
Largest production instance size | >60 TB | <4 TB
Architecture, operations and support | Multi-node system(s) with more parts; calls for a little extra management effort compared to Scale-Up, but allows better flexibility. | Single node; calls for less management and support effort, but in case of failure of the single node the only fallback is DR.
Table redistribution | May call for table redistribution once or twice a year; this can be done during planned maintenance. | Little to no need, as everything is stored on a single node.
Reliability | Higher reliability; a spare node can be provisioned in case one node goes down. | Lower reliability; if the single node goes down, the entire system fails.
  • 3. BW on HANA: Performance Optimisation
I am prepared to share in brief some items that certainly result in performance optimisation:
• DataStore Object (DSO) activation is a critical step in the process of transferring data from source systems to the business warehouse. We have achieved activation times that are 54 times faster than the previous process.
• Faster DSO activation makes data available more quickly, and supports more frequent loading of data and more frequent updates, for availability closer to real time.
• It speeds the flow of data from source systems.
• Conversion to in-memory objects, whereby the extended star schema is slimmed down and dimensions are realigned, provides scope for re-architecting some data flows without disruption.
• Suggest a Data Volume Optimization strategy to keep the system clean and green.
• In a tool-supported post-migration step, InfoCubes can be selected and converted to SAP HANA-optimized objects.
  • 4. Housekeeping is the single most significant contributor. SAP HANA Data Volume Management Tasks (Sample Template)
Priority | Action Description | Deadline
High | Define retention times for PSA records individually for all DataSources and delete outdated data. | After BW on HANA live
High | Schedule a periodic batch job for deletion of outdated entries from table ODQDATA_F. | After BW on HANA live
High | Enable the non-active data concept as suggested in SAP Note 1767880. | After BW on HANA live
High | Archive / move to NLS old and unused data from the DSOs and InfoCubes. | After BW on HANA live
High | End-to-end review with the Philips team and recommendations regarding the top BW schema tables. | After BW on HANA live
High | Frequent review and check of HANA DB parameter settings. | After BW on HANA live
High | Check whether power-save mode is active. For more information, see SAP Note 1890444. | After BW on HANA live
High | Weekly check of HANA DB trace settings. | After BW on HANA live
High | Review and propose a best practice for the backup procedure. | Ongoing
High | At least weekly review and monitoring of the recommendations for the alerts generated in the HANA DB system. | After BW on HANA live
High | Use report RSDRI_RECONVERT_DATASTORE to convert HANA-optimized DSOs back to classic objects, since from BW 7.3 SP5 standard DSOs support the HANA schema algorithm. | After BW on HANA live
Medium | Consider partitioning for tables that are expected to grow rapidly, in order to ensure parallelization and adequate performance. | After BW on HANA live
Medium | Propose and suggest re-partitioning of tables that are expected to grow. HP recommends that you re-partition tables before inserting mass data or while they are still small. | After BW on HANA live
Medium | Review, test and implement SAP Basis and memory management parameter recommendations to avoid OOM (out of memory) issues. | After BW on HANA live
  • 5. Optimization 1: Recommendations to reduce the data footprint on the HANA database. As a rule of thumb, at least 45-50% of SAP HANA memory should be reserved for SQL computations, SAP HANA services, analytics and other OS-related services; the rest can be occupied by the actual data in the column and row stores. Frequently check how much of the BW on HANA system's memory is occupied by data and how much is left for computations. If the data share grows too large, performance is adversely affected, because the number of table unloads from memory to disk increases, which further deteriorates performance and leads to high memory peaks in SAP HANA. With continuous monitoring we can keep the BW on HANA system in line with best practices.
Optimization 2: The provider proposes frequent analysis of the HANA database configuration and review of HANA DB parameters, CPU frequency settings and trace settings. Objects with a high number of records should be analysed, and table partitioning should be considered if these tables are expected to grow rapidly in the future.
Optimization 3: To reduce the data footprint in the HANA database, review and implement the following recommendations:
• Keep track of the size of the top (say, 30) PSA tables and assess the data retention policies.
• Cold/warm data can be unloaded to disk.
• Define retention times for PSA records individually for all DataSources and delete outdated data, starting with the largest PSA tables.
BW on HANA project: additional questions
  • 6. Optimization 4: Delete the outdated entries from table ODQDATA_F by scheduling the periodic batch job ODQ_CLEANUP as suggested in SAP Note 1836773.
Recommendation: Table ODQDATA_F is part of the operational delta queue. Refer to SAP Note 1836773 (How to delete outdated entries from delta queues - SAP Data Services) and delete the outdated entries from this table using the batch job ODQ_CLEANUP. Once a day, a cleanup process removes all outdated entries from the delta queues so they do not fill up. This is a regular batch job and can be maintained as such. The job and the retention interval can be configured in transaction ODQMON:
• In transaction ODQMON, choose menu Goto -> Reorganize Delta Queues and schedule a job for reorganization, e.g. ODQ_CLEANUP_CLIENT_004.
• By default the job is scheduled each day at 01:23:45 system time.
• If needed, adapt the start time and frequency in transaction SM37.
• If needed, adapt the retention time for recovery (see the F1 help for details).
Optimization 5: To enable the non-active data concept for BW on SAP HANA DB, review and implement the code corrections contained in SAP Note 1767880 (Non-active data concept for BW on SAP HANA DB). After implementing the code corrections, follow the manual steps to ensure that the unload priorities of all tables are set correctly. This ensures that Persistent Staging Area tables, change log tables, and write-optimized DataStore objects are flagged as EARLY UNLOAD by default, which means that these objects are displaced from memory before other BW objects (such as InfoCubes and standard DataStore objects).
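The retention logic that such a cleanup applies can be sketched in miniature. This is a simplified model for illustration, not the actual ODQ_CLEANUP implementation; the queue contents and timestamps are invented:

```python
from datetime import datetime, timedelta

def outdated_entries(entries, retention_days, now=None):
    """Return delta-queue entries whose confirmation timestamp is older
    than the retention interval (a toy model of the daily cleanup)."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=retention_days)
    return [(req, ts) for req, ts in entries if ts < cutoff]

# Hypothetical queue contents and clock:
now = datetime(2024, 5, 15)
queue = [("req1", datetime(2024, 5, 1)),   # confirmed two weeks ago
         ("req2", datetime(2024, 5, 14))]  # confirmed yesterday
print(outdated_entries(queue, retention_days=7, now=now))  # only req1 qualifies
```

The real job additionally respects the recovery retention time configured in ODQMON, which this sketch collapses into a single `retention_days` parameter.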
  • 7. Optimization 6: Understand and review the CPU type, CPU clock frequency, and the hosts. If the CPU clock frequency is set too low, this has a negative impact on the overall performance of the SAP HANA system. Usually the CPU clock frequency should be above 2000 MHz.
Optimization 7: If an inappropriate trace level is set for SAP HANA database components, a large amount of trace information may be generated during routine operation. This can impair system performance and lead to unnecessary consumption of disk space.
Recommendation: For production use of your SAP HANA database, set the trace level of all components according to SAP's recommendations.
Background: Traces can be switched in the 'Trace Configuration' tab of the SAP HANA studio Administration Console.
Optimization 8: Largest non-partitioned column tables: there are objects with a high number of records (more than 300 million). This is not yet critical with regard to the technical limit of SAP HANA (2 billion records), but table partitioning should be considered if these tables are expected to grow rapidly in the future.
Recommendation: Consider partitioning for tables that are expected to grow rapidly, in order to ensure parallelization and adequate performance. We recommend that you partition tables before inserting mass data or while they are still small.
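A back-of-the-envelope way to plan partition counts against the 2-billion-record limit can be sketched as follows. The growth figures and the 50% fill target are illustrative assumptions, not SAP sizing guidance:

```python
import math

HANA_ROW_LIMIT = 2_000_000_000  # technical limit per column-store partition

def partitions_needed(current_rows, yearly_growth_rows, horizon_years,
                      fill_target=0.5):
    """Minimum number of partitions so that each stays below
    `fill_target` * 2 billion rows over the planning horizon."""
    projected = current_rows + yearly_growth_rows * horizon_years
    capacity_per_partition = HANA_ROW_LIMIT * fill_target
    return math.ceil(projected / capacity_per_partition)

# Hypothetical example: a 300-million-row table growing by 400 million
# rows per year, planned over three years -> 1.5 billion rows projected.
print(partitions_needed(300_000_000, 400_000_000, 3))  # 2
```

This also illustrates why partitioning early is cheap: splitting a 300-million-row table is far less disruptive than re-partitioning it after mass inserts.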
  • 8. Optimization 9: Largest partitioned column tables (records): Consider re-partitioning tables that are expected to grow. We also need to re-partition tables before inserting mass data or while they are still small. For more information, see SAP Note 1650394 or refer to the SAP HANA Administration Guide.
Optimization 10: Largest column tables in terms of delta size: The separation into main and delta storage allows high compression and high write performance at the same time. Write operations are performed on the delta store, and changes are transferred from the delta store to the main store asynchronously during the delta merge. The column store automatically performs a delta merge according to several technical limits that are defined by parameters. If applications require more direct control over the merge process, the smart merge function can be used for certain tables (for example, BW prevents delta merges during data loading for performance reasons).
Optimization 11: Memory utilization details for HANA services: The memory usage of the SAP HANA engines (services) is only a snapshot at the time of the download collection. Different aspects of the memory consumption of the HANA database are highlighted: "Physical Memory Used by Services" corresponds to the "Database Resident Size" in the SAP HANA studio and can be compared with the resident size of the service in the operating system. The sum of "Heap Memory Used Size" and "Shared Allocated Size" roughly corresponds to the memory usage of the SAP HANA database, which is shown in the SAP HANA studio as "Database Memory Usage". The difference between the "Database Memory Usage" and the "Resident Database Memory" can usually be explained by the "Allocated Heap Size".
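The main/delta split described in Optimization 10 can be modelled in miniature. This is a toy illustration of the concept, not HANA's actual merge algorithm; the threshold and class name are invented for the sketch:

```python
class ColumnTable:
    """Toy model of HANA column-store writes: inserts land in an
    uncompressed delta store, and a merge moves them asynchronously
    into the read-optimized, compressed main store."""

    def __init__(self, auto_merge_threshold=1000):
        self.main, self.delta = [], []
        self.auto_merge_threshold = auto_merge_threshold

    def insert(self, row):
        self.delta.append(row)                      # writes hit delta only
        if len(self.delta) >= self.auto_merge_threshold:
            self.merge()                            # automatic delta merge

    def merge(self):
        # Smart-merge hook: callers such as BW loads may defer calling
        # this until a load is finished, for performance reasons.
        self.main.extend(self.delta)
        self.delta.clear()

t = ColumnTable(auto_merge_threshold=3)
for r in range(5):
    t.insert(r)
print(len(t.main), len(t.delta))  # 3 2
```

The point of the model: a large delta store means slower reads and higher memory use, which is why "largest column tables in terms of delta size" is worth monitoring.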
  • 9. Optimization 12: Reducing table sizes. All tables located in the row store are loaded into main memory when the database is started. Furthermore, row store tables cannot be compressed as much as tables located in the column store. Therefore, we need to keep the row store as small as possible.
RSDDSTAT* data: BW statistical data saved in the RSDDSTAT* tables is located in the row store. Since new data is continuously loaded into the Business Warehouse (BW), the amount of statistical data is always increasing. Therefore, it is essential to keep these statistical tables, which also provide information about the performance of your queries, as small as possible.
Recommendation: Reduce the number of records saved in the RSDDSTAT* tables (transaction RSDDSTAT). Consider the following:
• When you maintain the settings for the query statistics, deactivating the statistics is the same as activating the statistics internally with detail.
• The settings on the "InfoProvider" tab page affect the collection of statistical data for queries, as do the settings on the "Query" tab page.
• For Web templates, workbooks, and InfoProviders, you can only decide between activating and deactivating the statistics.
• If you did not maintain settings for the individual objects, the default setting for the object is used. If you did not change the default settings, the statistics are activated.
• You can delete statistics data using report RSDDSTAT_DATA_DELETE or via the corresponding graphical interface accessible through transaction RSDDSTAT.
  • 10. Optimization 13: Conversion of InfoCubes and DataStore objects. After an upgrade to SAP NetWeaver BW 7.30 SP5 or later with SAP HANA, all DataStore objects and InfoCubes remain unchanged. In a tool-supported post-processing step (transaction RSMIGRHANADB or report RSDRI_CONVERT_CUBE_TO_INMEMORY), DataStore objects and InfoCubes can be selected and converted to SAP HANA-optimized objects. All InfoCubes should be converted to benefit fully from the advantages provided by SAP BW powered by HANA DB. On the other hand, we do not recommend converting DataStore objects, as the advantages of the converted objects can be achieved without modifying the DSOs.
Optimization 14: SAP HANA-optimized DataStore objects.
Background: All advantages of HANA-optimized DataStore objects are now available for standard DSOs too, which renders conversion unnecessary. While HANA-optimized DSOs will still be supported in the future, we do not recommend converting DSOs; rather, reconvert any existing HANA-optimized DSOs back to classic objects. Starting with BW 7.30 SP10 (BW 7.31 SP09, BW 7.40 SP04), converting classic DSOs to HANA-optimized DSOs is no longer possible.
SAP HANA-optimized DataStore objects cannot be included in an SAP BW 3.x data flow. If you want to optimize a DataStore object that is part of a 3.x data flow, you first have to migrate the data flow itself. Furthermore, an SAP HANA-optimized DataStore object cannot be populated directly with real-time data acquisition (RDA). The 'unique records' property does not provide any performance gain; in addition, the uniqueness check is not performed in BW at all; instead, uniqueness is checked by an SQL statement (DBMS exit).
Never Generate SIDs: SIDs are never generated. This option is useful for all DSOs that are used only for further processing in other DSOs or InfoCubes, as it is not possible to run a query directly on this kind of DSO.
  • 11. Optimization 15: SAP HANA-optimized InfoCubes. With SAP HANA-optimized InfoCubes, the star schema is transferred to a flat structure, which means that the dimension tables are no longer physically available. Since no dimension IDs have to be created for SAP HANA-optimized InfoCubes, the loading process is accelerated. The accelerated insertion of data into SAP HANA-optimized InfoCubes means that the data is available for reporting earlier.
Optimization 16: Analytic indexes. Analytic indexes can be created in the APD (transaction RSANWB), or they can be an SAP HANA model published in the SAP BW system. If you want to use SAP BW OLAP functions to report on SAP HANA analytic or calculation views, you can publish these SAP HANA models to the SAP BW system (transaction RSDD_HM_PUBLISH).
Optimization 17: MultiProvider queries. For MultiProvider queries based on SAP HANA, the standard setting for the "operations in BWA" query property (transaction RSRT) is "3 Standard". However, if the MultiProvider consists of a mixed landscape (there are SAP HANA-optimized as well as non-converted InfoProviders underneath), performance problems might occur.
Recommendation: If you are running queries on top of a MultiProvider containing SAP HANA-optimized InfoProviders as well as standard InfoProviders, either convert all InfoProviders to SAP HANA-optimized, or keep the query property set to "3 Standard".
Last but not least: always remember to test, and to make a full system backup, before implementing any changes in a productive environment.
  • 12. Performance testing: query impact of current run time, using Scale-Out BW on HANA
[Bar chart "Query before & after BW on HANA move per type": average query runtime in seconds before vs. after the move, grouped by query type (less than 10 s, 10 s to 30 s, 30 s to 60 s, more than 60 s). Data labels: Avg Before 4, 22, 40, 73; Avg After 2, 10, 7, 25. A second panel shows detail for the "less than 10 s" bucket.]
  • 13. Data-load results: 'Customer' 12 TB BW on HANA PoC
Application | Impacting / long | Number of loads | Improvement
A2A | Long running load | 1 | 78%
CL SCM | Impacting load | 1 | 95%
CORE 1 | Long running load | 1 | 15%
Master data | Long running load | 1 | 87%
One PI | Long running load | 1 | 91%
PDS | Impacting load | 1 | 98%
POS | Impacting load | 1 | 89%
QXP | Impacting load | 2 | 91%
SCM Dashboards | Impacting load | 3 | 81%
SMART - SRM | Impacting load | 3 | 57%
SMART - VBM | Impacting load | 1 | 88%
SMART - VBM | Long running load | 2 | 66%
VIPP LI | Impacting load | 3 | 64%
VIPP PH | Impacting load | 2 | 45%
VIPP PH | Long running load | 4 | 71%
  • 14. BPC results from recent 'Customer' 12 TB PoC
[Charts: BPC performance for input layouts and reports; numeric detail is not recoverable from the transcript.]