
Automating Oracle Database Backup to Amazon S3 / GCS / Azure

This blog post explains how to set up backup and restore operations for Oracle Database instances deployed on Kubernetes, exporting data with Oracle Data Pump (expdp/impdp) and storing the dump files in Amazon S3, Google Cloud Storage, or Azure Blob Storage.

Prerequisites

  • kubectl installed
  • An Oracle instance deployed on a Kubernetes cluster
  • The AWS CLI, gcloud (with gsutil), or Azure CLI installed, depending on your cloud provider
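
Before starting, it can help to confirm the tooling is in place and the cluster is reachable. The commands below are a minimal sanity-check sketch; only the CLI for the cloud provider you actually use needs to be present:

    kubectl version --client
    kubectl get nodes
    aws --version        # if using Amazon S3
    gcloud --version     # if using Google Cloud Storage
    az --version         # if using Azure Blob Storage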

Setting Up a Backup

  1. Get the Oracle pod name and open a SQL*Plus session inside it:

    kubectl get pods -A | grep "oracle"
    kubectl exec -it <pod-name> -n <namespace> -- sqlplus / as sysdba
  2. Create a directory object (a logical pointer to a physical directory inside the pod) and grant your database user access to it:

    CREATE DIRECTORY EXPORT_BK AS '/opt/oracle/EXPORT_BK';
    GRANT READ, WRITE ON DIRECTORY EXPORT_BK TO <db_username>;
  3. Run your backup commands:

    • In a first terminal, find the Oracle service and port-forward it:
    kubectl get svc -A | grep "oracle"
    kubectl port-forward -n <namespace> service/<oracle-svc-name> 1521:1521
    • Open a second terminal and run the commands below:
    export DB_USERNAME=<db_username>
    export DB_PASSWORD=<db_password>
    export DB_SERVICE_NAME=<oracle_service_name>   # the Oracle database service name, not the Kubernetes Service
    export DB_HOST="127.0.0.1"
    export DB_PORT=1521
    export POD_NAME="<oracle-pod-name>"
    export NAMESPACE="<namespace>"

    export TIMESTAMP=$(date +%Y%m%d_%H%M)
    export BACKUP_FILE="oracle_backup_${TIMESTAMP}.dmp"
    export LOG_FILE="oracle_backup_${TIMESTAMP}.log"

    # Ensure the directory /opt/oracle/EXPORT_BK exists
    kubectl exec -n $NAMESPACE $POD_NAME -- mkdir -p /opt/oracle/EXPORT_BK

    kubectl exec -n $NAMESPACE $POD_NAME -- expdp $DB_USERNAME/$DB_PASSWORD@$DB_HOST:$DB_PORT/$DB_SERVICE_NAME \
    schemas=$DB_USERNAME directory=EXPORT_BK dumpfile=$BACKUP_FILE logfile=$LOG_FILE
  4. Copy the exported dump file out of the pod and upload it to your cloud bucket (a verification sketch follows this list):

    kubectl cp $NAMESPACE/$POD_NAME:/opt/oracle/EXPORT_BK/$BACKUP_FILE ./$BACKUP_FILE

    export BUCKET=<bucket-or-container-name>

    aws s3 cp $BACKUP_FILE s3://$BUCKET/$BACKUP_FILE
    gsutil cp $BACKUP_FILE gs://$BUCKET/$BACKUP_FILE
    az storage blob upload --file $BACKUP_FILE --container-name $BUCKET --name $BACKUP_FILE --account-name <storage-account-name>
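
To confirm that the export succeeded and the object actually landed in the bucket, you can run a few quick checks. This is a minimal sketch reusing the variables from steps 3 and 4 ($NAMESPACE, $POD_NAME, $BACKUP_FILE, $LOG_FILE, $BUCKET); run only the listing command that matches your provider:

    # Confirm the dump file exists in the pod and scan the Data Pump log for errors
    kubectl exec -n $NAMESPACE $POD_NAME -- ls -lh /opt/oracle/EXPORT_BK/$BACKUP_FILE
    kubectl exec -n $NAMESPACE $POD_NAME -- grep -i "ORA-" /opt/oracle/EXPORT_BK/$LOG_FILE || echo "No ORA- errors in export log"

    # Confirm the object was uploaded (pick your provider)
    aws s3 ls s3://$BUCKET/$BACKUP_FILE
    gsutil ls -l gs://$BUCKET/$BACKUP_FILE
    az storage blob show --container-name $BUCKET --name $BACKUP_FILE --account-name <storage-account-name> --query name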

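Because the whole flow is plain kubectl and cloud CLI commands, it can also be wrapped in a script and scheduled, which is what turns this into an automated backup rather than a manual one. The script below is an illustrative sketch only (the file name oracle_backup.sh, the placeholders, and the S3 upload are assumptions; swap in gsutil or az for other providers):

    #!/usr/bin/env bash
    # oracle_backup.sh - illustrative wrapper around steps 3 and 4
    set -euo pipefail

    NAMESPACE="<namespace>"
    POD_NAME="<oracle-pod-name>"
    DB_USERNAME="<db_username>"
    DB_PASSWORD="<db_password>"
    DB_SERVICE_NAME="<oracle_service_name>"
    BUCKET="<bucket-name>"

    TIMESTAMP=$(date +%Y%m%d_%H%M)
    BACKUP_FILE="oracle_backup_${TIMESTAMP}.dmp"
    LOG_FILE="oracle_backup_${TIMESTAMP}.log"

    # Export the schema inside the Oracle pod
    kubectl exec -n "$NAMESPACE" "$POD_NAME" -- mkdir -p /opt/oracle/EXPORT_BK
    kubectl exec -n "$NAMESPACE" "$POD_NAME" -- expdp "$DB_USERNAME/$DB_PASSWORD@127.0.0.1:1521/$DB_SERVICE_NAME" \
      schemas="$DB_USERNAME" directory=EXPORT_BK dumpfile="$BACKUP_FILE" logfile="$LOG_FILE"

    # Copy the dump file out of the pod and upload it (S3 shown; use gsutil/az for GCS/Azure)
    kubectl cp "$NAMESPACE/$POD_NAME:/opt/oracle/EXPORT_BK/$BACKUP_FILE" "./$BACKUP_FILE"
    aws s3 cp "$BACKUP_FILE" "s3://$BUCKET/$BACKUP_FILE"

Run from a machine that already has kubectl and the cloud CLI configured, a crontab entry such as 0 2 * * * /path/to/oracle_backup.sh would take a backup nightly.
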
Restoring a Backup File

  1. Download your backup file from the cloud bucket:

    aws s3 cp s3://$BUCKET/$FILE_NAME $LOCAL_PATH
    gsutil cp gs://$BUCKET/$FILE_NAME $LOCAL_PATH
    az storage blob download --container-name $BUCKET --name $FILE_NAME --file $LOCAL_PATH --account-name $STORAGE_ACCOUNT
  2. Run the restore command (a verification sketch follows this step):

    export DB_USERNAME=<db_username>
    export DB_PASSWORD=<db_password>
    export DB_SERVICE_NAME=<oracle_service_name>
    export DB_HOST="127.0.0.1"
    export DB_PORT=1521
    export POD_NAME="<oracle-pod-name>"
    export NAMESPACE="<namespace>"
    export FILE_NAME="<file-name>"

    # Copy the backup file to the Oracle pod
    kubectl cp ./$FILE_NAME $NAMESPACE/$POD_NAME:/opt/oracle/EXPORT_BK/$FILE_NAME

    # Execute the impdp command inside the Oracle pod
    kubectl exec -n $NAMESPACE $POD_NAME -- impdp $DB_USERNAME/$DB_PASSWORD@$DB_HOST:$DB_PORT/$DB_SERVICE_NAME \
    schemas=$DB_USERNAME directory=EXPORT_BK dumpfile=$FILE_NAME table_exists_action=replace

    echo "Import complete."