This section describes how to export data from an Oracle database to CSV format, so that non-Oracle target databases can be instantiated with data as of a specific SCN before replication is started.
This document assumes that the reader has the following knowledge:
- Basic experience with Dbvisit Replicate, including familiarity with the setup wizard.
- Knowledge of Oracle: the ability to connect to the database, the ability to execute scripts, and the required login credentials.
- Some knowledge of the Unix OS: the ability to run commands, navigate the filesystem, and view/edit files.
1. Set up the replication from Oracle to the required non-Oracle database. It is important to choose single-scn and ddl_file for the following questions in Step 2 - Replicate Pairs:
Lock and copy the data initially one-by-one or at a single SCN?
 one-by-one : Lock tables one by one and capture SCN
 single-scn : One SCN for all tables
 ddl-only   : Only DDL script for target objects
 resetlogs  : Use SCN from last resetlogs operation (standby activation, rman incomplete recovery)
 no-lock    : Do not lock tables. Captures previous SCN of oldest active transaction. Requires pre-requisite running of pre-all.sh script
(one-by-one/single-scn/ddl-only/resetlogs/no-lock) [one-by-one] single-scn
What data instantiation script to create?
 ddl_file : DDL file created (APPLY.sql)
 ddl_run  : DDL is automatically executed on target
 load     : All replicated data is created and loaded automatically
 none
(ddl_file/ddl_run/load/none) [ddl_file] ddl_file
2. Exclude the unsupported columns, if any. The process is described in detail in the "Oracle to MySQL using Swingbench" guide, which can be used as a reference when setting up other database types.
3. Run the *-all.sh script (or *-all.bat script, depending on your OS). Check the output and note down the SCN from the prepare section. The lines of interest look like this:
Table SOME_SCHEMA.SOME_TABLE instantiated at SCN 4575498
If for some reason these lines were not noticed at the time of script execution, it is always possible to find them in the script output.
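If the script output was captured to a file, the SCN can be recovered with standard Unix tools. A minimal sketch, using a hypothetical file name all.log (use whatever file captured the *-all.sh output; the sample line is written inline for illustration):

```shell
# Write a sample line for illustration; in practice all.log already holds
# the captured *-all.sh output.
printf 'Table SOME_SCHEMA.SOME_TABLE instantiated at SCN 4575498\n' > all.log

# Extract the instantiation SCN; with single-scn all tables report the same
# SCN, so taking the last (highest) matching line is safe.
scn=$(grep 'instantiated at SCN' all.log | awk '{print $NF}' | sort -n | tail -1)
echo "$scn"
```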
4. If the previous step is successful, the APPLY.sql file contains DDL statements to create the objects on the target database. Examine APPLY.sql to make sure the generated statements satisfy your requirements, then execute it on the target database to create the objects prior to instantiation.
Dbvisit Replicate offers several ways to load the initial data from the source database to the target database (to perform data instantiation). While the load method is convenient (it is described, for example, in "Oracle to MySQL using Swingbench"), we feel it is necessary to also describe the manual data instantiation procedure, which, while time consuming, gives more control over the process. The method described in this section uses CSV files as intermediate storage for the data transfer; however, it is possible to use any available tools to transfer the data from the Oracle objects bound for replication to the newly created MySQL objects, depending on the customer's preference.
Exporting the Data to CSV Format
In this section the csv_exp tool, available here, will be used to export data from Oracle to CSV files.
Please note that csv_exp is provided AS-IS, without any warranty and/or support (see LICENSE for more details).
The whole schema can be exported in a format compatible with the MySQL CSV importer using the following command:
csv_exp -s MY_SCHEMA --scn SCN_number --xc --null-as '\N' USER/PASSWORD@DATABASE
--null-as '\N' is required for the CSV import to work correctly on MySQL and other databases; otherwise, rows containing NULL columns will cause warnings and might not be imported. With this parameter, all NULL values are exported as \N, as in the examples below.
MY_SCHEMA should be replaced with the desired schema name and SCN_number with the SCN captured earlier while running the *-all.sh (or *-all.bat) script (see step 3 above). This will generate a number of files named MY_SCHEMA.*.csv in the current folder.
Execution without the --null-as parameter (NULLs are exported as nothing):
[oracle@oel712csrc 2]$ csv_exp.py --sql \
"SELECT 'VALUE1' as COLUMN1,NULL as NULL_COLUMN2, 'VALUE3' AS COLUMN3, NULL as NULL_COLUMN4 from DUAL" \
test1/test1@unitsrc
COLUMN1,NULL_COLUMN2,COLUMN3,NULL_COLUMN4
VALUE1,,VALUE3,
Execution with --null-as '\N' (NULLs are exported as \N):
[oracle@oel712csrc 2]$ csv_exp.py --sql \
"SELECT 'VALUE1' as COLUMN1,NULL as NULL_COLUMN2, 'VALUE3' AS COLUMN3, NULL as NULL_COLUMN4 from DUAL" \
--null-as '\N' \
test1/test1@unitsrc
COLUMN1,NULL_COLUMN2,COLUMN3,NULL_COLUMN4
VALUE1,\N,VALUE3,\N
Importing data from CSV
Import the data using the target database's conventional bulk-load methods described in its documentation (for MySQL, LOAD DATA INFILE; the example below uses PostgreSQL). This is the generic form of the import command for a table, taking into account the '\N' NULL marker used during the export:
copy TABLE_NAME from '/home/TABLE_NAME.csv' with delimiter as ',' NULL AS '\N' csv;
Example CSV file (/tmp/TEST_TABLE.csv) containing one row, with a NULL in the third column:
1,Column 1,\N,Column 3
Creating table on PostgreSQL:
postgres=# create table TEST_TABLE (ID INTEGER,COL1 VARCHAR(50),COL2 VARCHAR(40),COL3 VARCHAR(30));
CREATE TABLE
Loading data from CSV file to TEST_TABLE:
postgres=# copy TEST_TABLE from '/tmp/TEST_TABLE.csv' with delimiter as ',' null as '\N' csv;
COPY 1
postgres=# select * from TEST_TABLE;
 id |   col1   | col2 |   col3
----+----------+------+----------
  1 | Column 1 |      | Column 3
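When many tables were exported, the per-table import commands can be generated from the CSV file names rather than typed by hand. A minimal sketch, assuming the MY_SCHEMA.TABLE_NAME.csv naming from the export step and the psql \copy syntax shown above (MY_SCHEMA is a placeholder):

```shell
# Build a psql \copy command for one exported CSV file. The table name is
# derived by stripping the schema prefix and the .csv suffix; the generated
# commands can be fed into psql once APPLY.sql has created the tables.
gen_copy_cmd() {
    f=$1
    t=${f%.csv}          # drop the .csv extension
    t=${t#MY_SCHEMA.}    # drop the schema prefix
    printf '%s\n' "\\copy $t from '$f' with delimiter as ',' null as '\\N' csv"
}

gen_copy_cmd MY_SCHEMA.TEST_TABLE.csv
```

Looping over MY_SCHEMA.*.csv and piping the output of gen_copy_cmd into psql would load all tables in one pass.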
It is always a good idea to verify the loading results: compare row counts on the source and target, selectively compare data rows, and make sure that any tricky values have made it to the target database (if there are such).
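The row-count comparison can be scripted. A minimal sketch that counts the data rows in one exported CSV (a sample file is written inline for illustration); whether the export contains a header line depends on how csv_exp was invoked, so the header subtraction below is an assumption to adjust:

```shell
# Stand-in for a real MY_SCHEMA.TABLE_NAME.csv export: one header line and
# one data row.
printf 'COLUMN1,COLUMN2\nVALUE1,\\N\n' > sample.csv

# Count data rows, subtracting the header line (assumption - drop the "- 1"
# if your export has no header).
csv_rows=$(( $(wc -l < sample.csv) - 1 ))
echo "$csv_rows"

# Compare against the target count, obtained e.g. with:
#   psql -At -c 'select count(*) from TABLE_NAME'
```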
After all the described steps have been completed, the replication can be started. Please refer to the other sections of this User Manual.