Oracle Database 23ai introduces new V$ views to diagnose Data Pump performance issues. These views help monitor and troubleshoot Data Pump activities by providing detailed insight into job execution, resource usage, and potential bottlenecks.
Key V$ Views:
(G)V$DATAPUMP_PROCESS_INFO: Provides information about active Data Pump processes
(G)V$DATAPUMP_PROCESSWAIT_INFO: Details the waits encountered by Data Pump processes
(G)V$DATAPUMP_SESSIONWAIT_INFO: Provides information about session waits related to Data Pump operations.
These views enable administrators to efficiently manage and optimize Data Pump tasks. Their output, together with the Data Pump job logs, traces, and other job-specific information, is needed for diagnostic purposes.
(G)V$DATAPUMP_PROCESS_INFO
SQL> desc gv$datapump_process_info
 Name           Null?    Type
 -------------- -------- --------------
 INST_ID                 NUMBER
 CUR_DATE                VARCHAR2(19)
 PROGRAM                 VARCHAR2(84)
 SESSIONID               NUMBER
 STATUS                  VARCHAR2(8)
 USERNAME                VARCHAR2(128)
 JOBNAME                 VARCHAR2(128)
 SPID                    VARCHAR2(24)
 SERIALNUMBER            NUMBER
 PROCESSID               NUMBER
 CON_ID                  NUMBER

Sample output:

select * from v$datapump_process_info;

CUR_DATE             PROGRAM               SESSIONID  STATUS  USERNAME  JOBNAME             SPID/SERIALNUMBER/PROCESSID/CON_ID
--------------------------------------------------------------------------------------------------------------------------
2023-01-09 13:56:07  ude@orcl (TNS V1-V3)  42         ACTIVE  SYSTEM    SYS_EXPORT_FULL_01  389148011829700
2023-01-09 13:56:07  oracle@orcl (DW00)    48         ACTIVE  SYSTEM    SYS_EXPORT_FULL_01  389150036244820
2023-01-09 13:56:07  oracle@orcl (DM00)    149        ACTIVE  SYSTEM    SYS_EXPORT_FULL_01  389149442966790
NetworkManager is a utility in Linux that simplifies the management of network connections and devices. It provides both command-line and graphical interfaces for configuring and controlling network settings. Two of the key tools associated with NetworkManager are nmtui and nmcli.
1. NetworkManager (nmcli):
nmcli Overview:
nmcli is the command-line client for NetworkManager, offering a powerful and flexible way to manage network connections in a text-based environment.
Features:
Connection Management: nmcli allows users to create, edit, activate, and deactivate network connections, including wired, wireless, and virtual connections.
Device Status: Users can check the status and details of network devices, such as their IP addresses, MAC addresses, and connection profiles.
Profile Management: nmcli enables users to manage connection profiles, modify settings, and prioritize connections.
Usage Examples:
Displaying all available connections:
nmcli connection show
Activating a specific connection:
nmcli connection up ConnectionName
Displaying details of a specific device:
nmcli device show DeviceName
2. NetworkManager TUI (nmtui):
nmtui Overview:
nmtui stands for NetworkManager Text User Interface. It is a user-friendly, console-based interface that provides a simple and interactive way to manage network connections.
Features:
Intuitive Menu System: nmtui presents a menu-driven interface that allows users to navigate through various options to configure and control network settings.
Connection Management: Users can easily create, edit, activate, and deactivate network connections using the interactive menus.
Device Configuration: nmtui provides options to configure settings for specific network devices, such as Ethernet or Wi-Fi.
Usage Examples:
Launching the nmtui interface:
nmtui
Navigating through the menus to configure a new connection or modify an existing one.
Activating or deactivating a connection using the on-screen options.
3. Advantages of NetworkManager, nmtui, and nmcli:
User-Friendly Interfaces: Both nmtui and nmcli provide interfaces that cater to users with different preferences – interactive and menu-driven (nmtui) or command-line and scriptable (nmcli).
Versatility: NetworkManager supports a wide range of connection types and devices, making it suitable for various networking scenarios.
Consistency: nmtui and nmcli provide consistent interfaces across different Linux distributions, promoting a standardized approach to network management.
Scripting and Automation: nmcli allows users to automate network-related tasks by scripting commands, making it a powerful tool for system administrators and advanced users.
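As an illustration of that scriptability, here is a minimal sketch that brings up any inactive Ethernet connection profiles. The script itself is a hypothetical example; it relies on nmcli's terse output mode (-t) and field selection (-f), and the "802-3-ethernet" type string is how nmcli reports wired profiles.

```shell
#!/bin/sh
# List all connection profiles as NAME:TYPE:DEVICE, colon-separated.
nmcli -t -f NAME,TYPE,DEVICE connection show | while IFS=: read -r name type device; do
  # An empty DEVICE field means the profile is not currently active.
  if [ "$type" = "802-3-ethernet" ] && [ -z "$device" ]; then
    echo "Activating $name"
    nmcli connection up "$name"
  fi
done
```

Because nmcli's terse output is stable and machine-parseable, the same pattern extends to health checks, bulk reconfiguration, or cron-driven automation.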
CentOS 7 and Oracle Linux 8 are both popular Linux distributions that serve different purposes within the open-source community.
CentOS 7:
1. Origin and Community-driven:
CentOS (Community ENTerprise Operating System) is a free and open-source Linux distribution derived from the sources of Red Hat Enterprise Linux (RHEL).
CentOS 7, based on RHEL 7, was known for its stability, long-term support, and compatibility with RHEL packages.
Oracle Linux 8:
Origin and Enterprise Focus:
Oracle Linux is an open-source Linux distribution sponsored by Oracle Corporation. Oracle Linux 8 is based on the source code of RHEL 8, emphasizing enterprise features and performance.
Transition from CentOS 7 to Oracle Linux 8:
This guide explains how to combine a software upgrade and migration into a single process, taking you from CentOS 7 directly to Oracle Linux 8. It demonstrates using the open-source projects Leapp and Elevate to achieve this streamlined migration and upgrade in one go.
The first step is to update your existing CentOS 7 system to the latest available package releases and then reboot. To do this, execute the following:
# update YUM packages
yum update -y

# reboot
init 6
Next, install the ELevate release RPM, which provides access to the ELevate utilities.
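A hedged sketch of that installation, following the AlmaLinux ELevate project's published instructions. The repository URL and package names below are assumptions based on those instructions; verify them against the current ELevate documentation before running.

```shell
# Install the ELevate release RPM (adds the ELevate yum repository)
yum install -y http://repo.almalinux.org/elevate/elevate-release-latest-el7.noarch.rpm

# Install Leapp plus the Oracle Linux migration data for ELevate
yum install -y leapp-upgrade leapp-data-oraclelinux
```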
Before upgrading, use the preupgrade check to assess your system’s readiness. This analysis will highlight potential conflicts without making any changes. The report files and debug log it creates offer valuable insights into your system’s configuration and possible upgrade challenges. To initiate this process, simply run the leapp preupgrade command.
leapp preupgrade
Edit /var/log/leapp/answerfile, uncomment the confirm option, and set it to True.
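For illustration, a typical answerfile section looks like the fragment below. The section name remove_pam_pkcs11_module_check is a common example from Leapp runs; your answerfile may contain different sections.

```ini
[remove_pam_pkcs11_module_check]
confirm = True
```

The same change can usually be made non-interactively with the leapp answer subcommand (check leapp answer --help on your system), e.g. leapp answer --section remove_pam_pkcs11_module_check.confirm=True.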
leapp upgrade
reboot
Post-Migration Steps:
Verify Successful Migration:
Once the system reboots, log in and confirm that the operating system is now Oracle Linux 8.
Review Leftover Packages:
Check for any remaining CentOS 7 packages that might not have been migrated automatically. You can then decide whether to remove them manually or update them to their Oracle Linux 8 equivalents.
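A quick, hypothetical way to perform both post-migration checks from the shell:

```shell
# Confirm the OS release now identifies as Oracle Linux 8
cat /etc/os-release

# List any packages still branded CentOS; these are candidates for
# manual removal or replacement with Oracle Linux 8 equivalents
rpm -qa | grep -i centos
```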
Examine Upgrade Logs:
Finally, review the upgrade logs to ensure the migration process completed successfully and identify any potential issues that require attention.
SRVCTL, short for Server Control Utility, is a command-line tool used to manage various entities in an Oracle Cluster environment, including databases, instances, listeners, services, and more. It allows you to perform tasks such as starting, stopping, and modifying these resources.
Oracle Real Application Testing (RAT) was created to give DBAs the ability to make preemptive modifications while also knowing exactly what to expect when a change is done. It considerably reduces the possibility of any harmful consequences.
Oracle RAT is made up of two major parts. The first component is referred to as Database Replay. This program allows you to capture and replay workloads from one environment into another.
Simply capture the workload, transfer the files to a test environment, and then recreate the action that your actual users completed.
Oracle Database Replay is an Oracle Database feature that allows you to capture and replay database workloads. It enables you to precisely measure the impact of system changes, such as upgrades, patches, and configuration changes, on your database system’s overall performance and stability. This allows the business to adopt new technologies and changes quickly while minimizing risk. Database Replay helps you test both performance and application functionality.
USE CASES
Use Case 1: you need to conduct a load test but don’t have enough real users to drive it easily.
The other component is SQL Performance Analyzer, which allows you to assess the impact of the load on your test system through detailed analysis.
Use Case 2: if you want to increase SGA in production, you could first make the update in your test environment and use the production load to see if there’s any value in doing this without trial and error in a production system.
Use Case 3: upgrading database versions, consolidating schemas into a single database, introducing new hardware, and more.
Users can capture the workload using either the Enterprise Manager GUI or the command-line interface provided by the DBMS_WORKLOAD_xxxxx packages.
There are mainly four phases involved in Oracle Database replay:
Workload capture
Workload pre-processing
Workload replay
Analysis and reporting
Workload capture
In this phase, users capture the production workload from the database. Enabling workload capture tracks and records all external requests from users, applications, etc., and the changes are stored in binary files called capture files. These files contain information such as the SCN, SQL text, and bind variables.
Create a directory where you want to keep all your capture files:
mkdir -p /u01/app/oracle/rply

SQL> CREATE OR REPLACE DIRECTORY db_replay_dir AS '/u01/app/oracle/rply';
SQL> GRANT read, write ON DIRECTORY db_replay_dir TO sys;
Use START_CAPTURE procedure to start workload capture:
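A minimal sketch of such a call. The capture name, directory object, 900-second duration, and SQL tuning set settings are illustrative values, not from the original text:

```sql
BEGIN
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
    name             => 'capture_name',   -- illustrative capture name
    dir              => 'DB_REPLAY_DIR',  -- directory object created above
    duration         => 900,              -- capture window in seconds
    capture_sts      => TRUE,             -- also capture a SQL tuning set
    sts_cap_interval => 300);             -- STS capture interval in seconds
END;
/
```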
Replace capture_name with a name for the workload capture, capture_directory with the directory where the capture files will be stored, and duration with the duration of the capture in seconds.
If the DURATION parameter value is not specified then workload capture will continue until the FINISH_CAPTURE procedure is used.
Use the capture_sts parameter to capture a SQL tuning set in parallel with the workload capture.
The default value of STS_CAP_INTERVAL is 300; this parameter specifies the capture interval (in seconds) of the SQL tuning set.
col name format a15
col status for a15
col dir_path for a30
set lines 200
set pages 9999
select id, name, status, start_time, end_time, connects, user_calls, dir_path
from dba_workload_captures;
Use the FINISH_CAPTURE procedure to stop the workload capture:
BEGIN
  DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE();
END;
/
To add filters to the workload capture, use the ADD_FILTER procedure; to delete a filter, use DELETE_FILTER.
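For example, a hedged sketch that excludes one user's sessions from the capture (the filter name and value are illustrative):

```sql
BEGIN
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER(
    fname      => 'exclude_dbsnmp',  -- illustrative filter name
    fattribute => 'USER',            -- filter on the logged-in user
    fvalue     => 'DBSNMP');
END;
/

-- Remove the filter later if needed:
BEGIN
  DBMS_WORKLOAD_CAPTURE.DELETE_FILTER(fname => 'exclude_dbsnmp');
END;
/
```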
Workload pre-processing
In this phase, the information in the capture files is preprocessed to create all the metadata needed for replaying the workload. This step can be resource- and time-consuming, so it is recommended to perform it on the test system where you will replay the database workload.
Copy the capture files from production to test server
It’s recommended to create a restore point so that changes can be reverted after the replay and the same database can be reused for other workload tests:
SQL> CREATE RESTORE POINT before_replay1;
Create a directory on the test server where you want to copy the files:
mkdir -p /u01/app/oracle/rply

CREATE OR REPLACE DIRECTORY db_replay_dir AS '/u01/app/oracle/rply';
GRANT read, write ON DIRECTORY db_replay_dir TO sys;
Use the PROCESS_CAPTURE procedure to preprocess the captured workload:
BEGIN
  DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'DB_REPLAY_DIR');
END;
/
It’s important that preprocessing is done on the same database version as the replay. Users must copy the capture files from production to the test system.
Workload replay
In this phase, users replay the captured and preprocessed workload. All recorded external requests are performed on the test system with the same timing and concurrency.
Use the INITIALIZE_REPLAY procedure to initialize workload replay:
BEGIN
  DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(replay_name => 'tst_prd_reply1',
                                         replay_dir  => 'DB_REPLAY_DIR');
END;
/
Use the PREPARE_REPLAY procedure to prepare workload replay on the test database system:
BEGIN
  DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY(synchronization  => TRUE,
                                      capture_sts      => TRUE,
                                      sts_cap_interval => 300);
END;
/
The synchronization parameter defaults to SCN, which means the COMMIT order is preserved and replay actions are executed only after all dependent COMMIT actions are complete.
Before you run the replay procedures, estimate the number of replay clients and hosts necessary to replay the captured workload using the workload replay client (wrc):
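A sketch of both wrc steps; the directory path, credentials, and connect string below are illustrative:

```shell
# Calibrate mode estimates the number of replay clients and hosts needed.
# It only reads the preprocessed capture files; no database connection.
wrc mode=calibrate replaydir=/u01/app/oracle/rply

# Start a replay client against the test database (run as many clients
# as calibrate recommended).
wrc system/password@testdb mode=replay replaydir=/u01/app/oracle/rply
```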
Use the START_REPLAY procedure to start a workload replay:
BEGIN
  DBMS_WORKLOAD_REPLAY.START_REPLAY();
END;
/
Check Status:
SELECT id, name, status FROM dba_workload_replays WHERE name = 'tst_prd_reply1';
Analysis and Reporting:
In this phase, users perform a detailed analysis of the reports from both the workload capture and workload replay phases. These reports include information about errors, database time statistics, average time spent, user calls, session info, and data divergence.
For advanced analysis and comparison, users can use AWR (Automatic Workload Repository) reports and SQL Performance Analyzer to compare the SQL tuning sets from capture and replay.
Query the DBA_WORKLOAD_REPLAYS view to see the information about all the workload replays.
Use DBMS_WORKLOAD_CAPTURE.REPORT function to generate Workload capture report:
set long 20000 longchunksize 20000 pagesize 0 linesize 1000 feedback off verify off trimspool on

select DBMS_WORKLOAD_CAPTURE.REPORT(capture_id => &captureid, format => 'HTML') report from dual;
dir specifies the directory which contains the workload capture files.
capture_id specifies the ID of the capture associated with that directory.
To get the capture_id, use the DBMS_WORKLOAD_CAPTURE.GET_CAPTURE_INFO function.
The report contains the information about workload capture such as filters, date, time, SCN of capture and overall statistics such as total database time captures, no of logins and transactions and any limitations due to version etc.
Use DBMS_WORKLOAD_REPLAY.REPORT function to generate Workload replay report:
DECLARE
  V_REPORT CLOB;
  V_CAPID  NUMBER;
  V_REPID  NUMBER;
BEGIN
  V_CAPID := DBMS_WORKLOAD_REPLAY.GET_REPLAY_INFO(dir => 'DB_REPLAY_DIR');
  SELECT MAX(id) INTO V_REPID FROM dba_workload_replays WHERE capture_id = V_CAPID;
  V_REPORT := DBMS_WORKLOAD_REPLAY.REPORT(replay_id => V_REPID,
                                          format    => DBMS_WORKLOAD_REPLAY.TYPE_HTML);
END;
/
Where:
dir specifies the directory which contains the workload replay files.
replay_id specifies the ID related to the directory which contains the workload replay files. To get the replay_id use DBMS_WORKLOAD_REPLAY.GET_REPLAY_INFO function.
format specifies the report format; three formats are available:
To Generate a TEXT report use DBMS_WORKLOAD_REPLAY.TYPE_TEXT
To Generate a HTML report use DBMS_WORKLOAD_REPLAY.TYPE_HTML
To Generate a XML report use DBMS_WORKLOAD_REPLAY.TYPE_XML
The report contains information about the workload replay and capture, such as job name, duration, status, time, path, database information, and replay clients, plus overall statistics like total database time captured and replayed, number of logins, transactions, replay divergence, error divergence, etc.
Best Practices for Database Testing
Avoid workload interference: Disable maintenance windows and background jobs to avoid workload interference. Background jobs may already exist on the test system, so there is no need to duplicate them during replay. Examples include auto optimizer stats collection, auto space advisor, and auto SQL tuning advisor.
Filter out background monitoring activity: Filter out Enterprise Manager’s dbsnmp sessions or other monitoring and RMAN backup schedule sessions.
Understand application workload profile details and limitations: Before using SQL Performance Analyzer (SPA) or Database Replay, understand the application workload profile details and the database features used. Check whether the application is using features that are not supported by RAT.
Perform SPA testing prior to Database Replay: The goal of SPA testing is to identify SQL-level performance regressions, such as optimizer plan changes or Exadata cell I/O behavior. This can significantly help Database Replay testing, especially when testing a database upgrade or Exadata migration.
Set up restore points to ensure a consistent replay start point: Set up guaranteed restore points using Database Flashback on the test database before starting Database Replay, so that you can repeatedly replay the captured workload from the same point at which the capture was taken. Alternatively, reset the database to the point of capture using RMAN, Snapshot Standby, imp/exp, Data Pump, etc. The test database should be isolated from the production database.
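A hedged sketch of that flashback cycle (the restore point name is illustrative):

```sql
-- Before the first replay:
CREATE RESTORE POINT before_replay GUARANTEE FLASHBACK DATABASE;

-- After a replay, rewind so the next replay starts from the same state:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_replay;
ALTER DATABASE OPEN RESETLOGS;
```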
Run a test replay for a short duration of workload capture: When preparing a capture replay, first test that a representative workload operates as expected, to avoid wasting time if a larger workload fails later. This way, potential issues during replay are easier to debug. As a rule of thumb, we recommend a Database Replay run against a representative one-hour workload capture. Once a one-hour run has worked satisfactorily, you can move toward a longer-duration replay; do not jump directly to a 12- or 24-hour capture. As far as possible, replicate the production hardware and OS in test.
Perform Workload Replay Analysis before performance analysis: For Workload Replay Analysis, compare replay to replay in the same environment and database server as far as possible. Review divergences first and then move on to performance analysis.
Explore AWR and ASH data for comparison: At the end of the capture, always export the AWR data using DBMS_WORKLOAD_CAPTURE.EXPORT_AWR and import it in test replay Database using DBMS_WORKLOAD_CAPTURE.IMPORT_AWR. Importing AWR data enables detailed analysis of the workload and is also required if you plan to run the AWR Compare Period report on a pair of workload captures or replays.
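A minimal sketch of the export/import pair; the capture ID and staging schema name are illustrative:

```sql
-- On the capture system, after FINISH_CAPTURE:
BEGIN
  DBMS_WORKLOAD_CAPTURE.EXPORT_AWR(capture_id => 1);
END;
/

-- On the replay system, after the capture files are copied over
-- (IMPORT_AWR is a function; it returns the imported AWR identifier):
DECLARE
  v_id NUMBER;
BEGIN
  v_id := DBMS_WORKLOAD_CAPTURE.IMPORT_AWR(capture_id     => 1,
                                           staging_schema => 'AWR_STAGE');
END;
/
```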
Inflight transactions: These need to be factored into replay analysis. Long-running background jobs may start before the capture starts, and these may affect replay quality. The best way to minimize in-flight transactions is to restart the database, but this is not mandatory and may not be feasible in most real-life scenarios. Given this, it is recommended to factor them into the replay analysis and adjust the capture window appropriately to encompass the interesting period that includes the job you want to test.
Database Replay Synchronization: The recommendation is to first run the replay TIME-based (synchronization => FALSE). However, each workload has characteristics that may make a particular synchronization type more suitable.
SELECT SCN_TO_TIMESTAMP(TIMESTAMP_TO_SCN(startup_time)) "DB Shutdown Time",
       startup_time "DB Startup Time"
FROM v$instance;

SELECT startup_time "DB Startup Time" FROM v$instance;

select to_char(startup_time, 'dd-mon-yyyy hh24:mi:ss') start_time from v$instance;
Startup History
select startup_time open_time from dba_hist_database_instance order by 1 desc;

COL INSTANCE_NAME FOR A10
SELECT INSTANCE_NAME, TO_CHAR(STARTUP_TIME, 'HH24:MI DD-MON-YY')
FROM DBA_HIST_DATABASE_INSTANCE
ORDER BY STARTUP_TIME DESC;
Check DB_UP time
SET LINE 60
COLUMN HOSTNAME FOR A60
COLUMN INSTANCE_NAME FOR A60
COLUMN STIME FOR A60
COLUMN UPTIME FOR A60
SELECT 'HOSTNAME      : ' || HOST_NAME,
       'INSTANCE NAME : ' || INSTANCE_NAME,
       'STARTED AT    : ' || TO_CHAR(STARTUP_TIME, 'DD-MON-YYYY HH24:MI:SS') STIME,
       'UPTIME        : ' || FLOOR(SYSDATE - STARTUP_TIME) || ' DAY(S) ' ||
       TRUNC(24 * ((SYSDATE - STARTUP_TIME) - TRUNC(SYSDATE - STARTUP_TIME))) || ' HOUR(S) ' ||
       MOD(TRUNC(1440 * ((SYSDATE - STARTUP_TIME) - TRUNC(SYSDATE - STARTUP_TIME))), 60) || ' MINUTE(S) ' ||
       MOD(TRUNC(86400 * ((SYSDATE - STARTUP_TIME) - TRUNC(SYSDATE - STARTUP_TIME))), 60) || ' SECONDS' UPTIME
FROM SYS.V_$INSTANCE;

SELECT instance_number, startup_time, shutdown_time FROM dba_hist_database_instance;