Oracle Real Application Testing (RAT) was created to give DBAs the ability to make preemptive modifications while knowing exactly what to expect when a change is made. It considerably reduces the possibility of any harmful consequences.
Oracle RAT is made up of two major components. The first is Database Replay, which allows you to capture a workload in one environment and replay it in another: simply capture the workload, transfer the capture files to a test environment, and re-create the actions your real users performed. The second component, SQL Performance Analyzer, lets you assess the impact of a change on your workload through detailed, SQL-level analysis.
Oracle Database Replay is an Oracle Database feature that allows you to capture and replay database workloads. It enables you to precisely measure the impact of system changes, such as upgrades, patches, and configuration changes, on your database system's overall performance and stability. This allows the business to adopt new technologies and changes quickly while minimizing risk. Database Replay helps you test performance as well as application functionality.
USE CASES
Use Case 1: You need to run a load test but do not have enough real users to generate a realistic load.
Use Case 2: If you want to increase the SGA in production, you can first make the change in your test environment and replay the production load against it, verifying the benefit without trial and error on the production system.
Use Case 3: Upgrading database versions, consolidating schemas into a single database, introducing new hardware, and more.
Users can capture the workload using either the Enterprise Manager GUI or the command-line interface provided by the DBMS_WORKLOAD_CAPTURE and DBMS_WORKLOAD_REPLAY packages.
There are mainly four phases involved in Oracle Database replay:
Workload capture
Workload pre-processing
Workload replay
Analysis and reporting
Workload capture
In this phase, users capture the production workload from the database. Enabling workload capture involves tracking and recording all external requests from users, applications, etc.; the changes are stored in binary files called capture files. These files contain information such as the SCN, SQL text, bind variables, and so on.
Create a directory where you want to keep all your capture files:
mkdir -p /u01/app/oracle/rply

SQL> Create or replace directory db_replay_dir AS '/u01/app/oracle/rply';
SQL> grant read, write on directory db_replay_dir to sys;
Use START_CAPTURE procedure to start workload capture:
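A minimal sketch (the capture name and duration below are placeholders; the directory object is the one created above):

BEGIN
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE (
    name     => 'capture_name',
    dir      => 'DB_REPLAY_DIR',
    duration => 3600);   -- capture duration in seconds
END;
/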
Replace capture_name with a name for the workload capture, capture_directory with the directory where the capture files will be stored, and duration with the duration of the capture in seconds.
If the DURATION parameter value is not specified then workload capture will continue until the FINISH_CAPTURE procedure is used.
Use capture_sts parameter to capture a SQL tuning set in parallel with the workload capture.
The STS_CAP_INTERVAL parameter specifies the capture interval of the SQL tuning set in seconds; its default value is 300.
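A minimal sketch combining workload capture with SQL tuning set capture (the capture name and duration are placeholders):

BEGIN
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE (
    name             => 'capture_name',
    dir              => 'DB_REPLAY_DIR',
    duration         => 3600,
    capture_sts      => TRUE,
    sts_cap_interval => 300);
END;
/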
To check the status of workload captures:

col name format a15
col status for a15
col dir_path for a30
set lines 200
set pages 9999
select id, name, status, start_time, end_time, connects, user_calls, dir_path
from dba_workload_captures;
Use the FINISH_CAPTURE procedure to stop the workload capture:
BEGIN
  DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE ();
END;
/
To add filters to the workload use the ADD_FILTER procedure and to delete the filter use DELETE_FILTER.
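A minimal sketch, assuming you want to filter out an Enterprise Manager monitoring user (the filter name and value are placeholders):

BEGIN
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER (
    fname      => 'filter_dbsnmp',
    fattribute => 'USER',
    fvalue     => 'DBSNMP');
END;
/

BEGIN
  DBMS_WORKLOAD_CAPTURE.DELETE_FILTER (fname => 'filter_dbsnmp');
END;
/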
Workload pre-processing
In this phase, the information in the capture files is preprocessed to create all the metadata needed for replaying the workload. This step can be resource- and time-consuming, so it is recommended to perform it on the test system where you want to replay the database workload.
Copy the capture files from production to test server
It's recommended to create a restore point so that changes can be reverted after the replay and the same database can be used for other workload tests:
SQL> CREATE RESTORE POINT before_replay1;
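A sketch of reverting to this restore point after a replay (assumes a guaranteed restore point or flashback logging is in place; the database must be mounted):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO RESTORE POINT before_replay1;
SQL> ALTER DATABASE OPEN RESETLOGS;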
Create a directory on the test server where you want to copy the files:
mkdir -p /u01/app/oracle/rply

Create or replace directory db_replay_dir AS '/u01/app/oracle/rply';
grant read, write on directory db_replay_dir to sys;
Use the PROCESS_CAPTURE procedure to preprocess the captured workload:
BEGIN
  DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE (capture_dir => 'DB_REPLAY_DIR');
END;
/
It's important that preprocessing is done on the same database version as the system on which the workload will be replayed. Users must copy the capture files from production to the test system.
Workload replay
In this phase, users replay the workload that has been captured and preprocessed. All the recorded external requests are performed on the test system with the same timing and concurrency.
Use the INITIALIZE_REPLAY procedure to initialize workload replay:
BEGIN
  DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY (
    replay_name => 'tst_prd_reply1',
    replay_dir  => 'DB_REPLAY_DIR');
END;
/
Use the PREPARE_REPLAY procedure to prepare workload replay on the test database system:
BEGIN
  DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY (
    synchronization  => TRUE,
    capture_sts      => TRUE,
    sts_cap_interval => 300);
END;
/
The synchronization parameter defaults to SCN, which means the COMMIT order is preserved and replay actions are executed only after all dependent COMMIT actions are complete.
Before you run the replay procedures, make sure to estimate the number of replay clients and hosts needed to replay the captured workload using the wrc utility (workload replay client):
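For example (a sketch; replace the password and replay directory with your own, and note that replay clients must be started before calling START_REPLAY):

wrc mode=calibrate replaydir=/u01/app/oracle/rply
wrc system/password mode=replay replaydir=/u01/app/oracle/rply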
Use the START_REPLAY procedure to start a workload replay:
BEGIN
  DBMS_WORKLOAD_REPLAY.START_REPLAY ();
END;
/
Check Status:
SELECT id, name, status FROM dba_workload_replays WHERE name = 'tst_prd_reply1';
Analysis and Reporting:
In this phase, users perform a detailed analysis of the reports from both the workload capture and workload replay phases. These reports include information about errors, statistics such as database time and average time spent, user calls, session information, and data divergence.
For advanced analysis and comparison, users can use AWR (Automatic Workload Repository) reports and SQL Performance Analyzer to compare the SQL tuning sets from capture and replay.
Query the DBA_WORKLOAD_REPLAYS view to see the information about all the workload replays.
Use DBMS_WORKLOAD_CAPTURE.REPORT function to generate Workload capture report:
set long 20000 longchunksize 20000 pagesize 0 linesize 1000 feedback off verify off trimspool on
column report format a1000

select DBMS_WORKLOAD_CAPTURE.REPORT (capture_id => &capture_id, format => 'HTML') report
from dual;
Where:
dir specifies the directory which contains the workload capture files.
capture_id specifies the ID of the workload capture in that directory.
To get the capture_id, use the DBMS_WORKLOAD_CAPTURE.GET_CAPTURE_INFO function (on a replay system, DBMS_WORKLOAD_REPLAY.GET_REPLAY_INFO also returns the capture ID).
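A minimal sketch, assuming the capture files are in the DB_REPLAY_DIR directory object:

SET SERVEROUTPUT ON
DECLARE
  v_capture_id NUMBER;
BEGIN
  v_capture_id := DBMS_WORKLOAD_CAPTURE.GET_CAPTURE_INFO (dir => 'DB_REPLAY_DIR');
  DBMS_OUTPUT.PUT_LINE ('capture_id = ' || v_capture_id);
END;
/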
The report contains information about the workload capture such as filters, the date, time, and SCN of the capture, and overall statistics such as total database time captured, number of logins and transactions, and any limitations due to version, etc.
Use DBMS_WORKLOAD_REPLAY.REPORT function to generate Workload replay report:
DECLARE
  v_report CLOB;
  v_capid  NUMBER;
  v_repid  NUMBER;
BEGIN
  v_capid := DBMS_WORKLOAD_REPLAY.GET_REPLAY_INFO (dir => 'DB_REPLAY_DIR');

  SELECT MAX (id) INTO v_repid
  FROM   dba_workload_replays
  WHERE  capture_id = v_capid;

  v_report := DBMS_WORKLOAD_REPLAY.REPORT (
                replay_id => v_repid,
                format    => DBMS_WORKLOAD_REPLAY.TYPE_HTML);
END;
/
Where:
dir specifies the directory which contains the workload replay files.
replay_id specifies the ID related to the directory which contains the workload replay files. To get the replay_id use DBMS_WORKLOAD_REPLAY.GET_REPLAY_INFO function.
format specifies the report format; three formats are available.
To Generate a TEXT report use DBMS_WORKLOAD_REPLAY.TYPE_TEXT
To Generate a HTML report use DBMS_WORKLOAD_REPLAY.TYPE_HTML
To Generate a XML report use DBMS_WORKLOAD_REPLAY.TYPE_XML
The report contains information about the workload replay and capture such as job name, duration, status, time, path, database information, and replay clients, plus overall statistics such as total database time captured and replayed, number of logins, transactions, replay divergence, error divergence, etc.
Best Practices for Database Testing
Avoid workload interference: Disable maintenance windows and background jobs to avoid workload interference. Background jobs may already exist on the test system, so there is no need to duplicate them during replay. For example: auto optimizer stats collection, auto space advisor, or auto SQL tuning advisor.
Filter out background monitoring activity: Filter out Enterprise Manager's dbsnmp sessions, other monitoring sessions, and RMAN backup schedule sessions.
Understand application workload profile details and limitations: Before using SQL Performance Analyzer (SPA) or Database Replay, understand the application workload profile details and the database features used. Check whether the application is using features that are not supported by RAT.
Perform SPA testing prior to Database Replay: The goal of SPA testing is to identify SQL-level performance regressions such as optimizer plan changes or Exadata cell I/O behavior. This can significantly help the Database Replay testing, especially when testing a database upgrade or an Exadata migration.
Set up restore points to ensure a consistent replay start point: Set up guaranteed restore points using Database Flashback on the test database before starting Database Replay so that you can repeatedly replay the captured workload from the same point at which the capture was taken. Alternatively, reset the database to the point of capture using RMAN, a snapshot standby, imp/exp, Data Pump, etc. The test database should be isolated from the production database.
Run a test replay for a short-duration workload capture: When preparing a capture replay, first verify that a representative workload operates as expected, to avoid wasting time if a larger workload does not work later. This way, potential issues during replay are easier to debug. As a rule of thumb, we recommend starting with a Database Replay of a representative workload captured over one hour. Once a one-hour Database Replay run has worked satisfactorily, you can move on to a longer-duration replay; do not jump directly to a larger capture such as 12 or 24 hours. As far as possible, replicate the production hardware and O/S in test.
Perform Workload Replay Analysis before performance analysis: For Workload Replay Analysis, compare replay to replay in the same environment and database server as far as possible. Review divergences first and then move on to performance analysis.
Explore AWR and ASH data for comparison: At the end of the capture, always export the AWR data using DBMS_WORKLOAD_CAPTURE.EXPORT_AWR and import it in test replay Database using DBMS_WORKLOAD_CAPTURE.IMPORT_AWR. Importing AWR data enables detailed analysis of the workload and is also required if you plan to run the AWR Compare Period report on a pair of workload captures or replays.
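A minimal sketch, assuming capture ID 1 and a staging schema named AWR_STAGE that already exists on the test system:

-- On the capture (production) system, after FINISH_CAPTURE:
BEGIN
  DBMS_WORKLOAD_CAPTURE.EXPORT_AWR (capture_id => 1);
END;
/

-- On the replay (test) system, after copying the capture files:
DECLARE
  v_new_dbid NUMBER;
BEGIN
  v_new_dbid := DBMS_WORKLOAD_CAPTURE.IMPORT_AWR (capture_id     => 1,
                                                   staging_schema => 'AWR_STAGE');
END;
/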
Inflight transactions: These need to be factored into replay analysis. Long-running background jobs may start before the capture starts, and these may affect replay quality. The best way to minimize inflight transactions is to restart the database, but this is not mandatory and may not be feasible in most real-life scenarios. Given this, it is recommended to factor them into the replay analysis and adjust the capture window appropriately to encompass the interesting period that includes the job you want to test.
Database Replay synchronization: The recommendation is to first run the replay TIME-based (synchronization => FALSE), but each workload has characteristics that may make a particular synchronization type more suitable, as in the sketch below.
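A minimal sketch of a time-based replay preparation:

BEGIN
  DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY (synchronization => FALSE);
END;
/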
SELECT SCN_TO_TIMESTAMP(TIMESTAMP_TO_SCN(startup_time)) "DB Shutdown Time",
       startup_time "DB Startup Time"
FROM   v$instance;

SELECT startup_time "DB Startup Time" FROM v$instance;

select to_char(startup_time, 'dd-mon-yyyy hh24:mi:ss') start_time from v$instance;
Startup History
select startup_time open_time
from   dba_hist_database_instance
order by 1 desc;

COL INSTANCE_NAME FOR A10
SELECT INSTANCE_NAME, TO_CHAR(STARTUP_TIME, 'HH24:MI DD-MON-YY')
FROM   DBA_HIST_DATABASE_INSTANCE
ORDER BY STARTUP_TIME DESC;
Check DB uptime
SET LINE 60
COLUMN HOSTNAME FOR A60
COLUMN INSTANCE_NAME FOR A60
COLUMN STIME FOR A60
COLUMN UPTIME FOR A60
SELECT 'HOSTNAME : ' || HOST_NAME,
       'INSTANCE NAME : ' || INSTANCE_NAME,
       'STARTED AT : ' || TO_CHAR(STARTUP_TIME, 'DD-MON-YYYY HH24:MI:SS') STIME,
       'UPTIME : ' || FLOOR(SYSDATE - STARTUP_TIME) || ' DAYS(S) '
                   || TRUNC(24 * ((SYSDATE - STARTUP_TIME) - TRUNC(SYSDATE - STARTUP_TIME))) || ' HOUR(S) '
                   || MOD(TRUNC(1440 * ((SYSDATE - STARTUP_TIME) - TRUNC(SYSDATE - STARTUP_TIME))), 60) || ' MINUTE(S) '
                   || MOD(TRUNC(86400 * ((SYSDATE - STARTUP_TIME) - TRUNC(SYSDATE - STARTUP_TIME))), 60) || ' SECONDS' UPTIME
FROM   SYS.V_$INSTANCE;

SELECT instance_number, startup_time, shutdown_time
FROM   dba_hist_database_instance;
set pagesize 180
col OWNER for a25
break on Tablespace on Owner
column Objects format A20
select Tablespace_Name, Owner, COUNT(*) || ' tables' Objects
from   DBA_TABLES
group by Tablespace_Name, Owner
union
select Tablespace_Name, Owner, COUNT(*) || ' indexes' Objects
from   DBA_INDEXES
group by Tablespace_Name, Owner;
Physical Size of the Oracle database (Multi-Tenant)
select con_id, name, open_mode, total_size/1024/1024/1024 "PDB_SIZE_GB"
from   v$pdbs;
Physical Size of the Oracle Database (Logfile, Tempfile, Datafile)
col "Database Size" format a20col "Free space" format a20col "Used space" format a20select round(sum(used.bytes) /1024/1024/1024 ) ||' GB'"Database Size", round(sum(used.bytes) /1024/1024/1024 ) -round(free.p /1024/1024/1024) ||' GB'"Used space", round(free.p /1024/1024/1024) ||' GB'"Free space"from (select bytesfrom v$datafileunion allselect bytesfrom v$tempfileunion allselect bytesfrom v$log) used, (select sum(bytes) aspfrom dba_free_space) freegroup by free.p/
select"Reserved_Space(GB)", "Reserved_Space(GB)"-"Free_Space(GB)""Used_Space(GB)","Free_Space(GB)"from(select (select sum(bytes/(1014*1024*1024)) from dba_data_files) "Reserved_Space(GB)",(select sum(bytes/(1024*1024*1024)) from dba_free_space) "Free_Space(GB)"from dual);
Logical Size (Space utilised by the data in the database)
select sum(bytes)/1024/1024/1024"Size in GB" from dba_segments;
Tablespace Utilisation
Set lines 180 pages 100
Col CONTAINER for a20
col TABLESPACE for a30
with CONT as (select con_id, name from v$containers)
select decode(c.con_id, 0, 'Database', c.name) CONTAINER,
       a.tablespace_name "Tablespace",
       round(a.bytes_alloc / 1024/1024/1024) "Size_GB",
       round(maxbytes / 1024/1024/1024) "MaxSize_GB",
       round((a.bytes_alloc - nvl(b.bytes_free, 0)) / 1024/1024/1024) "Used_GB",
       round(nvl(b.bytes_free, 0) / 1024/1024/1024) "Free_GB",
       round(maxbytes / 1024/1024/1024 - (a.bytes_alloc - nvl(b.bytes_free, 0)) / 1024/1024/1024) "FreeMaxSize_GB",
       100 - round((nvl(b.bytes_free, 0) / a.bytes_alloc) * 100, 2) "%Used",
       round(100 * (a.bytes_alloc - nvl(b.bytes_free, 0)) / maxbytes, 2) "%Used_MaxSize"
from   ( select con_id, tablespace_name,
                sum(f.bytes) bytes_alloc,
                sum(decode(f.autoextensible, 'YES', f.maxbytes, 'NO', f.bytes)) maxbytes
         from   cdb_data_files f
         group by con_id, tablespace_name ) a,
       ( select ts.con_id, ts.name tablespace_name,
                sum(fs.blocks) * ts.blocksize bytes_free
         from   CDB_LMT_FREE_SPACE fs, containers(sys.ts$) ts
         where  ts.ts# = fs.tablespace_id
         and    ts.con_id = fs.con_id
         group by ts.con_id, ts.name, ts.blocksize ) b,
       cont c
where  a.tablespace_name = b.tablespace_name (+)
and    a.con_id = b.con_id
and    a.con_id = c.con_id
--and  a.tablespace_name like '%SYSAUX%'
--and  a.tablespace_name in ('UNDOTBS1')
order by "%Used_MaxSize" asc;
Schema Size
set linesize 150
set pagesize 5000
col owner for a15
col segment_name for a30
col segment_type for a20
col TABLESPACE_NAME for a30
clear breaks
clear computes
compute sum of SIZE_IN_GB on report
break on report
select OWNER, sum(bytes)/1024/1024/1000 "SIZE_IN_GB"
from   dba_segments
group by owner
order by owner;
Tables inside a specific schema with size in MB
COLUMN TABLE_NAME FORMAT A32
COLUMN OBJECT_NAME FORMAT A32
COLUMN OWNER FORMAT A10
SELECT owner, table_name, TRUNC(sum(bytes)/1024/1024) MB,
       ROUND(ratio_to_report(sum(bytes)) over () * 100) Percent
FROM  (SELECT segment_name table_name, owner, bytes
       FROM   dba_segments
       WHERE  segment_type IN ('TABLE', 'TABLE PARTITION', 'TABLE SUBPARTITION')
       UNION ALL
       SELECT i.table_name, i.owner, s.bytes
       FROM   dba_indexes i, dba_segments s
       WHERE  s.segment_name = i.index_name
       AND    s.owner = i.owner
       AND    s.segment_type IN ('INDEX', 'INDEX PARTITION', 'INDEX SUBPARTITION')
       UNION ALL
       SELECT l.table_name, l.owner, s.bytes
       FROM   dba_lobs l, dba_segments s
       WHERE  s.segment_name = l.segment_name
       AND    s.owner = l.owner
       AND    s.segment_type IN ('LOBSEGMENT', 'LOB PARTITION')
       UNION ALL
       SELECT l.table_name, l.owner, s.bytes
       FROM   dba_lobs l, dba_segments s
       WHERE  s.segment_name = l.index_name
       AND    s.owner = l.owner
       AND    s.segment_type = 'LOBINDEX')
WHERE owner IN (UPPER('&owner'))
GROUP BY table_name, owner
HAVING SUM(bytes)/1024/1024 > 10   /* Ignore really small tables */
ORDER BY SUM(bytes) desc;
Installing Oracle Database 19c involves several steps, and the process can vary based on your operating system and specific requirements. Here’s a general outline of the steps you would follow for installing Oracle Database 19c:
Prerequisites and Preparation:
Review the Oracle Database 19c documentation and installation guides for your specific operating system.
Ensure your system meets the hardware and software prerequisites, including memory, disk space, and compatible operating system versions.
Download the Oracle Database 19c installation files from Oracle’s official website.
Create a dedicated user (often named oracle) to perform the installation and manage the Oracle software.
Update the /etc/hosts file so that the server's DNS name and IP address resolve correctly:
vi /etc/hosts
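For example, an entry might look like the following (the IP address and hostname below are placeholders for your own server):

192.168.1.50   oradb19c.localdomain   oradb19c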
After updating the hosts file, we will proceed further.
We have two options: manual configuration and RPM-based configuration. We are going to look at the RPM-based installation first.
RPM based Configuration
yum install -y oracle-database-preinstall-19c
Now, after the RPM installation, we need to set the oracle user's password.
passwd oracle
Note: In the case of an Active Directory server configuration, you first need to edit nsswitch.conf and remove the sss entries:
vi /etc/nsswitch.conf
After setting the password for the oracle user, we need to create the directory for ORACLE_HOME.
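For example, as root, assuming the ORACLE_HOME path used in the bash profile below and the oinstall group created by the preinstall RPM:

mkdir -p /u01/app/oracle/product/19.3/db_home
chown -R oracle:oinstall /u01
chmod -R 775 /u01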
We need to switch to the oracle user and set up its bash profile.
su - oracle
vi .bash_profile
Add the below entries:
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
## Adding for Oracle Database 19c##
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/19.3/db_home
export ORACLE_SID=ORCL
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export NLS_LANG=american_america.al32utf8
export NLS_DATE_FORMAT="yyyy-mm-dd:hh24:mi:ss"
PATH=$PATH:$HOME/.local/bin:$ORACLE_HOME/bin
export PATH
On SUSE Linux Enterprise Server, to install the latest bc package using YaST, run the following command:
yast --install bc
Add the following lines to the “/etc/sysctl.conf” file, or in a file called “/etc/sysctl.d/98-oracle.conf”.
# oracle-database-preinstall-19c setting for fs.file-max is 6815744
fs.file-max = 6815744
# oracle-database-preinstall-19c setting for kernel.sem is '250 32000 100 128'
kernel.sem = 250 32000 100 128
# oracle-database-preinstall-19c setting for kernel.shmmni is 4096
kernel.shmmni = 4096
# oracle-database-preinstall-19c setting for kernel.shmall is 1073741824 on x86_64
kernel.shmall = 1073741824
# oracle-database-preinstall-19c setting for kernel.shmmax is 4398046511104 on x86_64
kernel.shmmax = 4398046511104
# oracle-database-preinstall-19c setting for kernel.panic_on_oops is 1 per Orabug 19212317
kernel.panic_on_oops = 1
# oracle-database-preinstall-19c setting for net.core.rmem_default is 262144
net.core.rmem_default = 262144
# oracle-database-preinstall-19c setting for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304
# oracle-database-preinstall-19c setting for net.core.wmem_default is 262144
net.core.wmem_default = 262144
# oracle-database-preinstall-19c setting for net.core.wmem_max is 1048576
net.core.wmem_max = 1048576
# oracle-database-preinstall-19c setting for net.ipv4.conf.all.rp_filter is 2
net.ipv4.conf.all.rp_filter = 2
# oracle-database-preinstall-19c setting for net.ipv4.conf.default.rp_filter is 2
net.ipv4.conf.default.rp_filter = 2
# oracle-database-preinstall-19c setting for fs.aio-max-nr is 1048576
fs.aio-max-nr = 1048576
# oracle-database-preinstall-19c setting for net.ipv4.ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500
Run one of the following commands to change the current kernel parameters, depending on which file you edited.
/sbin/sysctl -p
# Or
/sbin/sysctl -p /etc/sysctl.d/98-oracle.conf
Add the following parameters to the file /etc/security/limits.d/oracle-database-preinstall-19c.conf:
# oracle-database-preinstall-19c setting for nofile soft limit is 1024
oracle soft nofile 1024
# oracle-database-preinstall-19c setting for nofile hard limit is 65536
oracle hard nofile 65536
# oracle-database-preinstall-19c setting for nproc soft limit is 16384
# refer orabug15971421 for more info.
oracle soft nproc 16384
# oracle-database-preinstall-19c setting for nproc hard limit is 16384
oracle hard nproc 16384
# oracle-database-preinstall-19c setting for stack soft limit is 10240KB
oracle soft stack 10240
# oracle-database-preinstall-19c setting for stack hard limit is 32768KB
oracle hard stack 32768
# oracle-database-preinstall-19c setting for memlock hard limit is maximum of 128GB on x86_64 or 3GB on x86 OR 90 % of RAM
oracle hard memlock 134217728
# oracle-database-preinstall-19c setting for memlock soft limit is maximum of 128GB on x86_64 or 3GB on x86 OR 90% of RAM
oracle soft memlock 134217728
# oracle-database-preinstall-19c setting for data soft limit is 'unlimited'
oracle soft data unlimited
# oracle-database-preinstall-19c setting for data hard limit is 'unlimited'
oracle hard data unlimited
Switch to the oracle user again and reload the bash profile created earlier so the environment variables take effect:

su - oracle
. .bash_profile
Database Installation
We are going to perform the database installation using the silent method.
Download the database software from the Oracle eDelivery website using wget, or download it on a Windows machine and later transfer it to the server using WinSCP.
Now unzip the software to the oracle home location.
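For example (a sketch; the staging path and zip file name below are placeholders and may differ for your download):

cd $ORACLE_HOME
unzip -oq /u01/software/LINUX.X64_193000_db_home.zip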