
Working with RDS Oracle


This is a guest post from Chris Eastland, database architect. Chris provides Big Data solutions and DBA services, and specializes in Amazon RDS migrations, moving Oracle and MySQL databases from on-premises to Amazon RDS.

When working with RDS Oracle instances that need relatively frequent datapump export and import to refresh instances of small to medium size, I find it convenient to use a small EC2 instance loaded with an Oracle 11g home and an 11g database, plus a few simple scripts to automate the export and import (depending on your security setup, you could also use an Oracle server outside of AWS).

The scripts are listed below, but the essential process is:

(a) Create database links in the local EC2 database (I left it as the default ORCL) pointing to each of the RDS instances that need to use datapump for export or import.
These database links use the local EC2 tnsnames.ora entries.
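
For example, a minimal sketch of one such link, assuming a tnsnames.ora entry named SOURCEDB on the EC2 host (the endpoint below is a placeholder; substitute your own RDS endpoint and master credentials):

SOURCEDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = sourcedb.xxxxxxxx.us-east-1.rds.amazonaws.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = SOURCEDB))
  )

-- in the local ORCL database
CREATE DATABASE LINK SOURCEDB
  CONNECT TO rdsdba IDENTIFIED BY rdsdbapw
  USING 'SOURCEDB';

Repeat for each RDS instance (e.g. TARGETDB) that will take part in an export or import.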

(b) Run the script create_datapump_files.sh on the local EC2 instance to build 4 scripts that can in turn be executed to export the desired schemas from the source database, transfer the dump file to the EC2 ORCL datapump directory, transfer the file in turn to the target RDS datapump directory, and run the import. (I leave a few steps manual, as I prefer not to run drop user ... cascade from a script.) As you can see, this involves 2 network transfers (from source to local, then local to target data pump directories), but it allows the centralization of the database links in the local EC2 database and is well worth the extra network time. If the dump files are large, the source RDS instance can instead contain a direct database link to the target, as sketched below.
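
A minimal sketch of that direct-link variant (the link name, endpoint, and credentials are placeholders, and your security groups must allow traffic between the two instances). Connected to the source RDS instance as its master user:

CREATE DATABASE LINK to_target
  CONNECT TO rds2dba IDENTIFIED BY rds2dbapw
  USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=targetdb.xxxxxxxx.us-east-1.rds.amazonaws.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=TARGETDB)))';

BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'DATA_PUMP_DIR',
    source_file_name             => 'export.dmp',
    destination_directory_object => 'DATA_PUMP_DIR',
    destination_file_name        => 'export.dmp',
    destination_database         => 'TO_TARGET');
END;
/

This pushes the dump file straight from the source RDS instance to the target, skipping the EC2 hop entirely.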

First run create_datapump_files.sh with 7 arguments as follows:

(1) source database name (actually the name of the database link in ORCL, which in my case is also the TNS service name of the database)
(2) target database service name
(3) schema list (a single schema, or several delimited with commas and surrounded by double quotes)
(4) source RDS master username
(5) source RDS master password
(6) target RDS master username
(7) target RDS master password

For example:

oracle:/home/oracle/datapump>./create_datapump_files.sh SOURCEDB TARGETDB "SCOTT,HR" rdsdba rdsdbapw rds2dba rds2dbapw

This will build 4 files in /home/oracle/datapump/temp, which should be checked for accuracy.
Modify execute_datapump.sh for your directory structure and execute each step as the oracle user on the local EC2 server.

(1) sqlplus to the target RDS instance as the master username and drop user sampleschema(s) cascade; (see the SQL sketch after this list)
(2) expdp parfile=/home/oracle/datapump/temp/rdsexport_file.par
(3) sqlplus system/pass@ORCL @/home/oracle/datapump/temp/dbms_get_file.sql
(4) sqlplus system/pass@ORCL @/home/oracle/datapump/temp/dbms_put_file.sql
(5) impdp parfile=/home/oracle/datapump/temp/rdsimport_file.par
(6) sqlplus to the target RDS instance as the master username and grant resource, connect, unlimited tablespace and any other desired privileges to the target schema
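
Steps (1) and (6) are the manual SQL steps. A minimal sketch, assuming a schema named SCOTT (substitute your own schema names):

-- step (1): on the target RDS instance, as the master user, before the import
DROP USER scott CASCADE;

-- step (6): on the target RDS instance, after the import completes
GRANT resource, connect, unlimited tablespace TO scott;
-- plus any other privileges the application schema needs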

Of course, you can modify the basic parameter files to adjust to your situation, such as using multiple dump files or skipping grant exports; a sketch of such changes follows.
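
For instance, a minimal sketch of rdsexport_file.par adjusted to write multiple dump files and skip grants (the %U substitution, filesize, and exclude are standard Data Pump parameters; the file size here is arbitrary):

userid=rdsdba/rdsdbapw@SOURCEDB
directory=DATA_PUMP_DIR
dumpfile=export_%U.dmp
filesize=5G
logfile=export.log
schemas=SCOTT,HR
exclude=grant

Note that with multiple dump files, the DBMS_FILE_TRANSFER steps must copy each generated file, and the import parameter file needs the matching dumpfile=export_%U.dmp entry.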

------------------------
create_datapump_files.sh
------------------------
#!/bin/bash
#
# NOTE: change /home/oracle/datapump to the directory where the scripts are installed
# and create a temp subdirectory - make sure $ORACLE_HOME etc. are defined and in PATH
#
cd temp
rm -f /home/oracle/datapump/temp/*
# $1=source db
# $2=target db
# $3=schemas (e.g. SCHEMA1 or "SCHEMA1,SCHEMA2")
# $4=source rds dba user
# $5=source rds pass
# $6=target rds dba user
# $7=target rds pass
/home/oracle/datapump/create_rdsexport.sh $1 $3 $4 $5
/home/oracle/datapump/create_rdsimport.sh $2 $3 $6 $7
/home/oracle/datapump/datapump_get.sh $1
/home/oracle/datapump/datapump_put.sh $2
cd ..

------------------------
create_rdsexport.sh
------------------------

#!/bin/bash
# $1=source db link/service, $2=schemas, $3=source rds dba user, $4=source rds pass
cat > rdsexport_file.par << EOF
userid=$3/$4@$1
directory=DATA_PUMP_DIR
dumpfile=export.dmp
logfile=export.log
schemas=$2
EOF

------------------------
create_rdsimport.sh
------------------------

#!/bin/bash
# $1=target db service, $2=schemas, $3=target rds dba user, $4=target rds pass
cat > rdsimport_file.par << EOF
userid=$3/$4@$1
directory=DATA_PUMP_DIR
dumpfile=export.dmp
logfile=import.log
schemas=$2
EOF

------------------------
datapump_get.sh
------------------------

#!/bin/bash
# $1=source db link name; pulls export.dmp from the source into the local ORCL DATA_PUMP_DIR
cat > dbms_get_file.sql << EOF
BEGIN
  DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object      => 'DATA_PUMP_DIR',
    source_file_name             => 'export.dmp',
    destination_directory_object => 'DATA_PUMP_DIR',
    destination_file_name        => 'export.dmp',
    source_database              => '$1'
  );
END;
/
exit
EOF

------------------------
datapump_put.sh
------------------------
#!/bin/bash
# $1=target db link name; pushes export.dmp from the local ORCL to the target's DATA_PUMP_DIR
cat > dbms_put_file.sql << EOF
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'DATA_PUMP_DIR',
    source_file_name             => 'export.dmp',
    destination_directory_object => 'DATA_PUMP_DIR',
    destination_file_name        => 'export.dmp',
    destination_database         => '$1'
  );
END;
/
exit
EOF

------------------------
execute_datapump.sh
------------------------

#!/bin/sh
#
# Uses scripts in /home/oracle/datapump/temp that were created by create_datapump_files.sh
# First drop target users/schemas cascade in target db
# e.g. drop user sampleschema cascade;
#
expdp parfile=/home/oracle/datapump/temp/rdsexport_file.par
sqlplus system/pass@ORCL @/home/oracle/datapump/temp/dbms_get_file.sql
sqlplus system/pass@ORCL @/home/oracle/datapump/temp/dbms_put_file.sql
impdp parfile=/home/oracle/datapump/temp/rdsimport_file.par
#
# now connect as target dba user to the destination db and grant to each target schema
# e.g. grant resource,connect,unlimited tablespace to targetschema
# and optionally change password for each
# e.g. alter user targetschema identified by newpassword
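
After the import completes, you may want to check the import log and remove the dump file from the target RDS instance without shell access. A minimal sketch, assuming the rdsadmin.rds_file_util package is available on your RDS Oracle version (verify before relying on it):

-- connected to the target RDS instance as the master user
SELECT text FROM TABLE(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','import.log'));

-- remove the dump file once the import is verified
EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','export.dmp');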

=====================================================================================

Photo Credit: https://www.flickr.com/photos/50108326@N00/2191307775/
