Author: dbapranjan

  • Oracle Data Pump – New Diagnostic V$ Views For Performance Issues in 23ai

    Oracle Database 23ai introduces new V$ views to diagnose Data Pump performance issues. These views help monitor and troubleshoot Data Pump activities by providing detailed insights into job execution, resource usage, and potential bottlenecks.

    Key V$ Views:

    • (G)V$DATAPUMP_PROCESS_INFO: Provides information about active Data Pump processes
    • (G)V$DATAPUMP_PROCESSWAIT_INFO: Details the waits encountered by Data Pump processes
    • (G)V$DATAPUMP_SESSIONWAIT_INFO: Provides information about session waits related to Data Pump operations.

    These views enable administrators to efficiently manage and optimize Data Pump tasks. Their output, along with the Data Pump job logs, traces, and other job-specific information, is typically needed for diagnostic purposes; a collection-script sketch follows the sample outputs below.

    (G)V$DATAPUMP_PROCESS_INFO

    SQL> desc gv$datapump_process_info;
    
    Name    Null?    Type
    ---------------------------
    INST_ID          NUMBER
    CUR_DATE         VARCHAR2(19)
    PROGRAM          VARCHAR2(84)
    SESSIONID        NUMBER
    STATUS           VARCHAR2(8)
    USERNAME         VARCHAR2(128)
    JOBNAME          VARCHAR2(128)
    SPID             VARCHAR2(24)
    SERIALNUMBER     NUMBER
    PROCESSID        NUMBER
    CON_ID           NUMBER
    
    Sample output:
    
    select * from v$datapump_process_info;
    
    CUR_DATE PROGRAM SESSIONID STATUS USERNAME JOBNAME SPID SERIALNUMBER PROCESSID CON_ID
    ------------------- -------------------------------------- --------- -------- ---------- ------------------
    2023-01-09 13:56:07 ude@orcl (TNS V1-V3) 42 ACTIVE SYSTEM SYS_EXPORT_FULL_01 3891480 11829 70 0
    2023-01-09 13:56:07 oracle@orcl (DW00)   48 ACTIVE SYSTEM SYS_EXPORT_FULL_01 3891500 36244 82 0
    2023-01-09 13:56:07 oracle@orcl (DM00)  149 ACTIVE SYSTEM SYS_EXPORT_FULL_01 3891494 42966 79 0

     (G)V$DATAPUMP_PROCESSWAIT_INFO

    SQL> desc gv$datapump_processwait_info;
    
    Name              Null?   Type
    -------------------------------------
    INST_ID                   NUMBER
    WAITING_SESSION           NUMBER
    HOLDING_SESSION           NUMBER
    SERIAL_NUMBER             NUMBER
    EVENT                     VARCHAR2(64)
    PROGRAM_WAITSESSION       VARCHAR2(84)
    PROGRAM_HOLDINGDSESSION   VARCHAR2(84)
    MODULE_WAITSESSION        VARCHAR2(64)
    MODULE_HOLDINGSESSION     VARCHAR2(64)
    DATAPUMP_LOCKID           NUMBER
    CON_ID                    NUMBER
    
     Sample output:
    
    SQL> select * from gv$datapump_processwait_info;
    
    INST_ID WAITING_SESSION HOLDING_SESSION SERIAL_NUMBER EVENT PROGRAM_WAITSESSION PROGRAM_HOLDINGDSESSION MODULE_WAITSESSION MODULE_HOLDINGSESSION DATAPUMP_LOCKID CON_ID
    -------------------------------------------------------------------------------------------------------------------------------------------------------------
          1   174   57 40525 enq: TM - contention oracle@orcl (DM00) oracle@orcl (DW00) Data Pump Master Data Pump Worker 79473 1

    (G)V$DATAPUMP_SESSIONWAIT_INFO

    SQL> desc gv$datapump_sessionwait_info;
    
    Name    Null?    Type
    ------------------------------------
    INST_ID             NUMBER
    WAITING_SESSION     NUMBER
    SERIAL_NUMBER       NUMBER
    SEQ_NUMBER          NUMBER
    EVENT               VARCHAR2(64)
    DP_WAITTIME         NUMBER
    DP_SECONDS_IN_WAIT  NUMBER
    DP_STATE_IN_WAIT    VARCHAR2(19)
    DP_P1TEXT           VARCHAR2(64)
    DP_P1               NUMBER
    DP_P2TEXT           VARCHAR2(64)
    DP_P2               NUMBER
    DP_P3TEXT           VARCHAR2(64)
    DP_P3               NUMBER
    CON_ID              NUMBER
    
    Sample output:
    
    SQL> select * from gv$datapump_sessionwait_info;
    
    INST_ID WAITING_SESSION SERIAL_NUMBER SEQ_NUMBER EVENT DP_WAITTIME DP_SECONDS_IN_WAIT DP_STATE_IN_WAIT DP_P1TEXT DP_P1 DP_P2TEXT DP_P2 DP_P3TEXT DP_P3 CON_ID
    --------------------------------------------------------------------------------------------------------------------------------------------
    1 46 65244 1319 enq: TM - contention 0 8086 WAITING name|mode 1414332419 object # 80996 table/partition 0
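
    When collecting diagnostics for a slow or hanging job, it can be convenient to snapshot all three views into a single spool file and supply it together with the Data Pump job log and traces. A minimal SQL*Plus sketch (the spool file name dp_diag.lst is just an example):

    set pagesize 200 linesize 300 trimspool on
    spool dp_diag.lst

    -- active Data Pump processes
    select * from gv$datapump_process_info;

    -- blocker/waiter relationships between Data Pump processes
    select * from gv$datapump_processwait_info;

    -- current waits of Data Pump sessions
    select * from gv$datapump_sessionwait_info;

    spool off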

  • Postgres DB Installation in Linux

    Red Hat Enterprise Linux 8

    Step 1: Set up the EDB repository

    To determine if your repository exists, enter this command:

    dnf repolist | grep enterprisedb

    If no output is generated, the repository isn’t installed.

    To set up the EDB repository:

    • Go to EDB repositories.
    • Select the button that provides access to the EDB repository.
    • Select the platform and software that you want to download.
    • Follow the instructions for setting up the EDB repository.

    Install the EPEL repository:

    sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm

    Enable additional repositories to resolve dependencies:

    ARCH=$( /bin/arch ) subscription-manager repos --enable "codeready-builder-for-rhel-8-${ARCH}-rpms"

    Disable the built-in PostgreSQL module:

    dnf -qy module disable postgresql

    Install the packages:

    sudo dnf -y install postgresql<xx>-server postgresql<xx>-contrib

    Where <xx> is the version of PostgreSQL you are installing. For example, if you are installing version 16, the package names would be postgresql16-server and postgresql16-contrib.
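
    For example, a minimal sketch for version 16 on RHEL 8, assuming the PGDG-style package layout (paths and service names may differ if you install EDB’s own server packages):

    # install the server and contrib packages for PostgreSQL 16
    sudo dnf -y install postgresql16-server postgresql16-contrib

    # initialize the data directory and enable the service
    sudo /usr/pgsql-16/bin/postgresql-16-setup initdb
    sudo systemctl enable --now postgresql-16

    # quick sanity check
    sudo -u postgres psql -c "select version();"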

  • NetworkManager tool in Linux

    NetworkManager is a utility in Linux that simplifies the management of network connections and devices. It provides both command-line and graphical interfaces for configuring and controlling network settings. Two of the key tools associated with NetworkManager are nmtui and nmcli.

    1. NetworkManager (nmcli):

    • nmcli Overview:
      • nmcli is the command-line client for NetworkManager, offering a powerful and flexible way to manage network connections in a text-based environment.
    • Features:
      • Connection Management: nmcli allows users to create, edit, activate, and deactivate network connections, including wired, wireless, and virtual connections (a sketch of creating a new connection follows the examples below).
      • Device Status: Users can check the status and details of network devices, such as their IP addresses, MAC addresses, and connection profiles.
      • Profile Management: nmcli enables users to manage connection profiles, modify settings, and prioritize connections.
    • Usage Examples:
      • Displaying all available connections:
        • nmcli connection show
      • Activating a specific connection:
        • nmcli connection up ConnectionName
      • Displaying details of a specific device:
        • nmcli device show DeviceName
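
    Building on the connection-management commands above, a minimal sketch of creating a static IPv4 connection profile (the interface name eth0 and all addresses are placeholders for illustration):

    # create a static IPv4 profile on eth0 (placeholder values)
    nmcli connection add type ethernet con-name static-eth0 ifname eth0 \
        ipv4.method manual ipv4.addresses 192.168.1.50/24 \
        ipv4.gateway 192.168.1.1 ipv4.dns "8.8.8.8 1.1.1.1"

    # bring the new profile up
    nmcli connection up static-eth0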

    2. NetworkManager TUI (nmtui):

    • nmtui Overview:
      • nmtui stands for NetworkManager Text User Interface. It is a user-friendly, console-based interface that provides a simple and interactive way to manage network connections.
    • Features:
      • Intuitive Menu System: nmtui presents a menu-driven interface that allows users to navigate through various options to configure and control network settings.
      • Connection Management: Users can easily create, edit, activate, and deactivate network connections using the interactive menus.
      • Device Configuration: nmtui provides options to configure settings for specific network devices, such as Ethernet or Wi-Fi.
    • Usage Examples:
      • Launching the nmtui interface:
        • nmtui
      • Navigating through the menus to configure a new connection or modify an existing one.
      • Activating or deactivating a connection using the on-screen options.

    3. Advantages of NetworkManager, nmtui, and nmcli:

    • User-Friendly Interfaces: Both nmtui and nmcli provide interfaces that cater to users with different preferences – interactive and menu-driven (nmtui) or command-line and scriptable (nmcli).
    • Versatility: NetworkManager supports a wide range of connection types and devices, making it suitable for various networking scenarios.
    • Consistency: nmtui and nmcli provide consistent interfaces across different Linux distributions, promoting a standardized approach to network management.
    • Scripting and Automation: nmcli allows users to automate network-related tasks by scripting commands, making it a powerful tool for system administrators and advanced users.
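
    As an illustration of the scripting point above, a small sketch that re-activates a connection profile if it has dropped (ConnectionName is a placeholder):

    #!/bin/bash
    # re-activate a connection profile if it is not currently active
    CON="ConnectionName"

    if ! nmcli -t -f NAME connection show --active | grep -qx "$CON"; then
        echo "$CON is down, bringing it up..."
        nmcli connection up "$CON"
    fi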

  • CentOS 7 to Oracle Linux 8

    CentOS 7 and Oracle Linux 8 are both popular Linux distributions that serve different purposes within the open-source community.

    CentOS 7:

    1. Origin and Community-driven:

    • CentOS (Community ENTerprise Operating System) is a free and open-source Linux distribution derived from the sources of Red Hat Enterprise Linux (RHEL).
    • CentOS 7, based on RHEL 7, was known for its stability, long-term support, and compatibility with RHEL packages.

    Oracle Linux 8:

    Origin and Enterprise Focus:

    • Oracle Linux is an open-source Linux distribution sponsored by Oracle Corporation. Oracle Linux 8 is based on the source code of RHEL 8, emphasizing enterprise features and performance.

    Transition from CentOS 7 to Oracle Linux 8:

    This guide explains how to combine a software upgrade and migration into a single process, taking you from CentOS 7 directly to Oracle Linux 8. It demonstrates using the open-source projects Leapp and Elevate to achieve this streamlined migration and upgrade in one go.

    • The first step is to get your existing CentOS 7 system updated to the latest package releases available and then reboot. To do this, execute the following steps:
    #update YUM package
    yum update -y
    
    #reboot
    init 6
    • Installing the “elevate” release RPM is necessary to access and utilize the Elevate utilities.
    yum install -y http://repo.almalinux.org/elevate/elevate-release-latest-el$(rpm --eval %rhel).noarch.rpm
    • Install “leapp” packages and migration data for Oracle Linux, as the target to migrate to
    yum install -y leapp-upgrade leapp-data-oraclelinux

    Before upgrading, use the preupgrade check to assess your system’s readiness. This analysis will highlight potential conflicts without making any changes. The report files and debug log it creates offer valuable insights into your system’s configuration and possible upgrade challenges. To initiate this process, simply run the leapp preupgrade command.

    leapp preupgrade

    Edit /var/log/leapp/answerfile, uncomment the confirm line in the section flagged by the preupgrade report, and set it to True.
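
    If you prefer not to edit the answerfile by hand, the same confirmation can usually be recorded non-interactively with the leapp answer command. The section name below (remove_pam_pkcs11_module_check) is the one that commonly requires confirmation; check your own answerfile or preupgrade report to see which section applies:

    # record the confirmation without editing the answerfile manually
    leapp answer --section remove_pam_pkcs11_module_check.confirm=True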

    leapp upgrade
    
    reboot

    Post-Migration Steps:

    1. Verify Successful Migration:
      • Once the system reboots, log in and confirm that the operating system is now Oracle Linux 8.
    2. Review Leftover Packages:
      • Check for any remaining CentOS 7 packages that might not have been migrated automatically. You can then decide whether to remove them manually or update them to their Oracle Linux 8 equivalents (see the checks below).
    3. Examine Upgrade Logs:
      • Finally, review the upgrade logs to ensure the migration process completed successfully and identify any potential issues that require attention.

    cat /etc/oracle-release
    cat /etc/os-release
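
    For steps 2 and 3 above, a couple of simple checks (the report and log paths are the default locations used by the leapp tooling):

    # packages still branded as CentOS that may need replacing
    rpm -qa | grep -i centos

    # summary report and detailed log produced by leapp
    cat /var/log/leapp/leapp-report.txt
    less /var/log/leapp/leapp-upgrade.log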

    Thank you ..!

  • Basic SRVCTL Commands

    SRVCTL, short for Server Control Utility, is a command-line tool used to manage various entities in an Oracle Cluster environment, including databases, instances, listeners, services, and more. It allows you to perform tasks such as starting, stopping, and modifying these resources.

    Common Basic SRVCTL Commands:

    • Starting and Stopping Services:

    Start a database instance:

    SYNTAX – srvctl start database -d db_name [-o start_options]

    start_option – nomount/mount/open(default)

    srvctl start database -d PRODB -o nomount
    srvctl start database -d PRODB -o mount
    srvctl start database -d PRODB -o open

    Stop a running database:

    SYNTAX – srvctl stop database -d db_name [-o stop_options]

    stop_options – normal /immediate(default)/transactional/abort

    srvctl stop database -d PRODB -o normal
    srvctl stop database -d PRODB -o immediate
    srvctl stop database -d PRODB -o transactional
    srvctl stop database -d PRODB -o abort

    • Checking Status

    View the status of all database instances:

    srvctl status database -d db_name

    Get detailed information about a service:

    srvctl config service -d db_name -s service_name

    • Adding and Removing Resources:

    Add a new database:

    srvctl add database -d db_name -o oracle_home (caution: advanced usage)

    Remove an existing service:

    srvctl remove service -d db_name -s service_name
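
    As an illustration of adding a resource, a sketch of creating and starting a service with preferred and available instances on a two-node RAC (the service and instance names are placeholders):

    srvctl add service -d PRODB -s oltp_svc -r PRODB1 -a PRODB2
    srvctl start service -d PRODB -s oltp_svc
    srvctl status service -d PRODB -s oltp_svc
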
  • Profile Metadata

    Run this first, so that the complete metadata will be printed:

    set long 20000 longchunksize 20000 pagesize 0 linesize 1000 feedback off verify off trimspool on
    column ddl format a1000
    
    begin
       dbms_metadata.set_transform_param (dbms_metadata.session_transform, 'SQLTERMINATOR', true);
       dbms_metadata.set_transform_param (dbms_metadata.session_transform, 'PRETTY', true);
    end;
    /

    Now use DBMS_METADATA to extract the profile DDL:

    SELECT DBMS_METADATA.GET_DDL('PROFILE', 'C##_PROFILE_ROLL') AS profile_ddl
    FROM dual;
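
    To extract the DDL for every profile in the database rather than a single one, a sketch along the same lines:

    SELECT DBMS_METADATA.GET_DDL('PROFILE', profile) AS profile_ddl
    FROM   (SELECT DISTINCT profile FROM dba_profiles);
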
  • Real Application Testing – Database Replay

    Oracle Real Application Testing (RAT) was created to give DBAs the ability to make preemptive modifications while also knowing exactly what to expect when a change is done. It considerably reduces the possibility of any harmful consequences.

    Oracle RAT is made up of two major parts. The first component is referred to as Database Replay. This program allows you to capture and replay workloads from one environment into another.

    Simply capture the workload, transfer the files to a test environment, and then recreate the action that your actual users completed.

    Oracle Database Replay is an Oracle Database feature that allows you to capture and replay database workloads. It enables you to precisely measure the impact of system changes, such as upgrades, patches, and configuration changes, on your database system’s overall performance and stability. This allows the business to adopt new technologies and changes quickly while minimizing risk. Database Replay helps you test performance as well as application functionality.

    USE CASES

    Use Case 1: You want to conduct a load test, but you don’t have the number of users needed to do it easily.

    (The other component of RAT is SQL Performance Analyzer, which allows you to assess the impact of a change on your test system through detailed SQL-level analysis.)

    Use Case 2: If you want to increase the SGA in production, you could first make the update in your test environment and use the production load to see whether there’s any value in doing this, without trial and error on a production system.

    Use Case 3: Upgrading database versions, consolidating schemas into a single database, introducing new hardware, and more.

    Users can capture the workload either using the Enterprise Manager GUI or the command-line interface provided by the DBMS_WORKLOAD_CAPTURE and DBMS_WORKLOAD_REPLAY packages.

    There are four main phases involved in Oracle Database Replay:

    • Workload capture
    • Workload pre-processing
    • Workload replay
    • Analysis and reporting

    Workload capture

    In this phase users capture the production workload from the database. Enabling workload capture involves tracking and recording all external requests from users, applications, etc.; the recorded calls are stored in binary files called capture files. These files contain information such as SCNs, SQL text, bind variables, and so on.

    Create a directory where you want to keep all your capture files:

    mkdir -p /u01/app/oracle/rply
    SQL>	Create or replace directory  db_replay_dir  AS '/u01/app/oracle/rply';
    SQL>	grant read, write on directory db_replay_dir to sys;

    Use START_CAPTURE procedure to start workload capture:

    BEGIN
    DBMS_WORKLOAD_CAPTURE.START_CAPTURE (name     => 'tst_prd_capture',
    				dir      => 'DB_REPLAY_DIR',
    				duration => 300,
    				capture_sts => TRUE,
    				sts_cap_interval => 300);
    END;
    /

    In this example, tst_prd_capture is the name of the workload capture, DB_REPLAY_DIR is the directory object in which the capture files will be stored, and duration is the duration of the capture in seconds.

    If the DURATION parameter is not specified, the workload capture continues until the FINISH_CAPTURE procedure is called.

    Use the capture_sts parameter to capture a SQL tuning set in parallel with the workload capture.

    The STS_CAP_INTERVAL parameter specifies the capture interval (in seconds) of the SQL tuning set; the default value is 300.

    col name format a15
    col status for a15
    col dir_path for a30
    set lines 200
    set pages 9999
     
    select id, name, status, start_time, end_time , connects, user_calls, dir_path from dba_workload_captures;
    

    Use the FINISH_CAPTURE procedure to stop the workload capture:

    BEGIN
    	DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE ();
    END;
    /

    To add filters to the workload capture use the ADD_FILTER procedure, and to delete a filter use DELETE_FILTER.
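
    For example, a minimal sketch of excluding the Enterprise Manager monitoring user from a capture (the filter name skip_dbsnmp is arbitrary; filters must be defined before START_CAPTURE is called):

    BEGIN
    	DBMS_WORKLOAD_CAPTURE.ADD_FILTER (fname      => 'skip_dbsnmp',
    				fattribute => 'USER',
    				fvalue     => 'DBSNMP');
    END;
    /

    -- and to remove the filter again
    BEGIN
    	DBMS_WORKLOAD_CAPTURE.DELETE_FILTER (fname => 'skip_dbsnmp');
    END;
    /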

    Workload pre-processing

    In this phase, the information in the capture files is preprocessed to create all the metadata needed for replaying the workload. This step can be resource- and time-consuming, so it’s recommended to perform it on the test system where you intend to replay the database workload.

    Copy the capture files from the production server to the test server.

    It’s recommended to create a restore point, so that changes can be reverted after the replay and the same database can be used for other workload tests:

    SQL> CREATE RESTORE POINT before_replay1;

    Create a directory on the test server where you want to copy the files:

    mkdir -p /u01/app/oracle/rply
    Create or replace directory  db_replay_dir  AS '/u01/app/oracle/rply';
    grant read, write on directory db_replay_dir to sys;

    Use the PROCESS_CAPTURE procedure to preprocess the captured workload:

    BEGIN
    	DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE (capture_dir => 'DB_REPLAY_DIR');
    END;
    /

    It’s important that preprocessing is done on the same database version as the system where the workload will be replayed. Users must copy the capture files from production to the test system before preprocessing.

    Workload replay

    In this phase users replay the workload that was captured and preprocessed. All the recorded external requests are performed on the test system with the same timing and concurrency as in production.

    Use the INITIALIZE_REPLAY procedure to initialize workload replay:

    BEGIN
    	DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(
    			replay_name=> 'tst_prd_reply1',
    			replay_dir => 'DB_REPLAY_DIR');
    END;
    /

    Use the PREPARE_REPLAY procedure to prepare workload replay on the test database system:

    BEGIN
    	DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY (synchronization => TRUE,
    					capture_sts => TRUE,
    					   sts_cap_interval => 300);
    END;
    /

    The synchronization parameter’s default value is SCN; this means the COMMIT order is preserved and replay actions are executed only after all dependent COMMIT actions have completed.

    Before you run the replay procedures, make sure to estimate the number of replay clients and hosts necessary to replay the captured workload using the wrc utility (workload replay client):

    wrc  mode=calibrate  replaydir=/u01/app/oracle/rply
    
    wrc system/"S4Tv[\X.zRsEc_"@CCLIDRX mode=replay replaydir=/u01/app/oracle/rply

    Use the START_REPLAY procedure to start a workload replay:

    BEGIN
    	DBMS_WORKLOAD_REPLAY.START_REPLAY ();
    END;
    /

    Check Status:

    SELECT id, name, STATUS FROM dba_workload_replays where name = 'tst_prd_reply1';

    Analysis and Reporting:

    In this phase users perform a detailed analysis of the reports from both the workload capture and workload replay phases. These reports include information about errors, statistics about database time, average time spent, user calls, session info, and data divergence.

    For advanced analysis and comparison, users can use AWR (Automatic Workload Repository) reports and SQL Performance Analyzer to compare the SQL tuning sets from the capture and the replay.

    Query the DBA_WORKLOAD_REPLAYS view to see the information about all the workload replays.

    Use the DBMS_WORKLOAD_CAPTURE.REPORT function to generate the workload capture report:

    set long 20000 longchunksize 20000 pagesize 0 linesize 1000 feedback off verify off trimspool on
    
    select DBMS_WORKLOAD_CAPTURE.REPORT(capture_id=>&captureid, format=>'HTML') tt
    from dual;

    DECLARE
    	V_CAPID	NUMBER;
    	V_REPORT	CLOB;
    BEGIN
    	V_CAPID  := DBMS_WORKLOAD_CAPTURE.GET_CAPTURE_INFO(dir => 'DB_REPLAY_DIR');
    	V_REPORT := DBMS_WORKLOAD_CAPTURE.REPORT(capture_id => V_CAPID,
    				format => DBMS_WORKLOAD_CAPTURE.TYPE_HTML);
    END;
    /

    Where:

    • dir specifies the directory which contains the workload capture files.
    • capture_id specifies the ID of the capture associated with that directory. To get the capture_id, use the DBMS_WORKLOAD_CAPTURE.GET_CAPTURE_INFO function.

    The report contains information about the workload capture such as filters, date, time, and SCN of the capture, plus overall statistics such as total database time captured, number of logins and transactions, and any limitations due to version, etc.

    Use the DBMS_WORKLOAD_REPLAY.REPORT function to generate the workload replay report:

    DECLARE
    	V_REPORT	CLOB;
    	V_CAPID	NUMBER;
    	V_REPID	NUMBER;
    BEGIN
    	V_CAPID := DBMS_WORKLOAD_REPLAY.GET_REPLAY_INFO(dir => 'DB_REPLAY_DIR');
    	SELECT MAX (id) INTO V_REPID
    		FROM dba_workload_replays
    		WHERE capture_id = V_CAPID;
    	V_REPORT := DBMS_WORKLOAD_REPLAY.REPORT(replay_id => V_REPID,
    				format => DBMS_WORKLOAD_REPLAY.TYPE_HTML);
    END;
    /

    Where:

    • dir specifies the directory which contains the workload replay files.
    • replay_id specifies the ID of the replay associated with that directory. To get the replay_id, use the DBMS_WORKLOAD_REPLAY.GET_REPLAY_INFO function.
    • format specifies the report format; three formats are available:
    • To generate a TEXT report, use DBMS_WORKLOAD_REPLAY.TYPE_TEXT
    • To generate an HTML report, use DBMS_WORKLOAD_REPLAY.TYPE_HTML
    • To generate an XML report, use DBMS_WORKLOAD_REPLAY.TYPE_XML

    The report contains information about the workload replay and capture such as job name, duration, status, time, path, database information, and replay clients, plus overall statistics like total database time captured and replayed, number of logins, transactions, replay divergence, error divergence, etc.

    Best Practices for Database Testing

    1. Avoid workload interference: Disable maintenance windows and background jobs to avoid workload interference. Background jobs may already exist on the test system, so there is no need to duplicate them during replay, for example auto optimizer stats collection, auto space advisor, or auto SQL tuning advisor (a sketch of disabling these follows this list).
    2. Filter out background monitoring activity: Filter out Enterprise Manager’s dbsnmp sessions, other monitoring sessions, and RMAN backup schedule sessions.
    3. Understand application workload profile details and limitations: Before using SQL Performance Analyzer (SPA) or Database Replay, understand the application workload profile details and the database features used. Check whether the application is using features that are not supported by RAT.
    4. Perform SPA testing prior to Database Replay: The goal of SPA testing is to identify SQL-level performance regressions such as optimizer plan changes or Exadata cell I/O behavior. This can significantly help the Database Replay testing, especially when testing a database upgrade or an Exadata migration.
    5. Set up restore points to ensure a consistent replay start point: Set up guaranteed restore points using Database Flashback on the test database before starting Database Replay so that you can repeatedly replay the captured workload from the same point at which the capture was taken. Alternatively, reset the database to the point of capture using RMAN, a snapshot standby, imp/exp, Data Pump, etc. The test database should be isolated from the production database.
    6. Run a test replay for a short duration of workload capture: When preparing a capture replay, test that a representative workload operates as expected, to avoid wasting time if you find that a larger workload does not work later. This way, potential issues during replay are easier to debug. As a rule of thumb, start with a Database Replay of a representative workload capture of about one hour. Once a one-hour Database Replay run has worked satisfactorily, move towards a longer-duration replay; do not jump directly to a larger capture such as 12 or 24 hours. As far as possible, replicate production hardware and OS in test.
    7. Perform workload replay analysis before performance analysis: For workload replay analysis, compare replay to replay in the same environment and database server as far as possible. Review divergences first and then move on to performance analysis.
    8. Explore AWR and ASH data for comparison: At the end of the capture, always export the AWR data using DBMS_WORKLOAD_CAPTURE.EXPORT_AWR and import it into the test replay database using DBMS_WORKLOAD_CAPTURE.IMPORT_AWR. Importing AWR data enables detailed analysis of the workload and is also required if you plan to run the AWR Compare Period report on a pair of workload captures or replays.
    9. In-flight transactions: These need to be factored into replay analysis. Long-running background jobs may start before the capture starts, and these may affect replay quality. The best way to minimize in-flight transactions is to restart the database, but this is not mandatory and may not be feasible in most real-life scenarios. Given this, it is recommended to factor them into replay analysis and to adjust the capture window appropriately to encompass the interesting period that includes the job you want to test.
    10. Database Replay synchronization: The recommendation is to first run the replay time-based (synchronization => FALSE). However, each workload can have characteristics that make a particular synchronization type more suitable.
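
    For the first point above, a minimal sketch of disabling the automated maintenance tasks on the replay system before starting the replay (re-enable them afterwards with DBMS_AUTO_TASK_ADMIN.ENABLE; the client names below are the standard autotask client names):

    BEGIN
    	DBMS_AUTO_TASK_ADMIN.DISABLE (client_name => 'auto optimizer stats collection',
    				operation => NULL, window_name => NULL);
    	DBMS_AUTO_TASK_ADMIN.DISABLE (client_name => 'auto space advisor',
    				operation => NULL, window_name => NULL);
    	DBMS_AUTO_TASK_ADMIN.DISABLE (client_name => 'sql tuning advisor',
    				operation => NULL, window_name => NULL);
    END;
    /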

  • Oracle Database startup and shutdown Time

    Shutdown and Startup of Database

    SELECT SCN_TO_TIMESTAMP(TIMESTAMP_TO_SCN(startup_time)) "DB Shutdown Time" , startup_time "DB Startup Time" FROM v$instance;
    
    
    SELECT startup_time "DB Startup Time" FROM v$instance;
    
    
    select to_char (startup_time, 'dd-mon-yyyy hh24:mi:ss') start_time from V$instance;

    Startup History

    select startup_time open_time from dba_hist_database_instance order by 1 desc;
    
    
    COL INSTANCE_NAME FOR A10
    SELECT INSTANCE_NAME,TO_CHAR(STARTUP_TIME, 'HH24:MI DD-MON-YY') FROM DBA_HIST_DATABASE_INSTANCE ORDER BY STARTUP_TIME DESC;

    Check DB_UP time

    SET LINE 60
    COLUMN HOSTNAME FOR A60
    COLUMN INSTANCE_NAME FOR A60
    COLUMN STIME FOR A60
    COLUMN UPTIME FOR A60
    SELECT
    'HOSTNAME : ' || HOST_NAME
    ,'INSTANCE NAME : ' || INSTANCE_NAME
    ,'STARTED AT : ' || TO_CHAR(STARTUP_TIME,'DD-MON-YYYY HH24:MI:SS') STIME
    ,'UPTIME : ' || FLOOR(SYSDATE - STARTUP_TIME) || ' DAYS(S) ' ||
    TRUNC( 24*((SYSDATE-STARTUP_TIME) -
    TRUNC(SYSDATE-STARTUP_TIME))) || ' HOUR(S) ' ||
    MOD(TRUNC(1440*((SYSDATE-STARTUP_TIME) -
    TRUNC(SYSDATE-STARTUP_TIME))), 60) ||' MINUTE(S) ' ||
    MOD(TRUNC(86400*((SYSDATE-STARTUP_TIME) -
    TRUNC(SYSDATE-STARTUP_TIME))), 60) ||' SECONDS' UPTIME
    FROM
    SYS.V_$INSTANCE;
    
    SELECT instance_number, startup_time, shutdown_time FROM dba_hist_database_instance;

  • Update .bash_profile to display the user and current path in the prompt and set Home and environment variables

    PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:.
    
    export PATH
    
    if [ -s "$MAIL" ]           # This is at Shell startup.  In normal
    then echo "$MAILMSG"        # operation, the Shell checks
    fi                          # periodically.
    
    export PS1="$USER-`uname -n`:\$PWD>"
    
    umask 022
    export TERM=ansi
    export ORACLE_TERM=hft
    export ORACLE_HOME=$HOME
    export ORACLE_BASE=/home/data/u01
    export TNS_ADMIN=$ORACLE_HOME/network/admin
    export PATH=$ORACLE_HOME/bin:$PATH
    
    if [ "$DISPLAY" = "" ]
    then
    whoFlags="-m"
    case "$( uname -s )" in
    HP-UX) display="$(who -mR | sed 's/^.*(//' | sed 's/).*$//')" ;;
    AIX) display="$(who -m | sed 's/^.*(//' | sed 's/).*$//')" ;;
    SunOS) display="$(who -m | sed 's/^.*(//' | sed 's/).*$//')" ;;
    IRIX*) display="$REMOTEHOST" ;;
    Linux) display="$(who -m | sed 's/^.*(//' | sed 's/).*$//')" ;;
    esac
    if [ "$display" != "" -a "$display" != ":0.0" ]
    then
    export DISPLAY="$display:0.0"
    echo
    echo "DISPLAY set to $DISPLAY"
    echo
    fi
    if [ "$display" = ":0.0" ]; then export DISPLAY=":0.0"; fi
    fi
    
    user=$USER
    ps -fu $user
    
  • Install Oracle Instant Client on Linux server

    1. Download the Oracle Instant Client Basic package for Linux
    2. Download the Oracle Instant Client SQL*Plus package for Linux
    3. Install the Instant Client and SQL*Plus packages
    4. Set up tnsnames.ora
    5. Set the TNS_ADMIN Linux environment variable
    6. Test database connectivity using sqlplus
    7. For example: vi /root/tnsnames.ora and export TNS_ADMIN=/root (a fuller sketch follows below)
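
    A minimal end-to-end sketch of the steps above on Oracle Linux 8, using the Instant Client RPM repository (package names, the connect string, and the TNS_ADMIN location are illustrative; adjust them for your version and environment):

    # 1-3: install the Basic and SQL*Plus packages from the Instant Client repository
    sudo dnf -y install oracle-instantclient-release-el8
    sudo dnf -y install oracle-instantclient-basic oracle-instantclient-sqlplus

    # 4-5: create /root/tnsnames.ora with an entry like the following,
    #      then point TNS_ADMIN at that directory
    #
    #   ORCLPDB =
    #     (DESCRIPTION =
    #       (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    #       (CONNECT_DATA = (SERVICE_NAME = orclpdb))
    #     )
    vi /root/tnsnames.ora
    export TNS_ADMIN=/root

    # 6: test connectivity (prompts for the password)
    sqlplus system@ORCLPDB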