When working directly with MySQL's data files you have to stop the MySQL server. To avoid downtime on your live machine, use a backup/virtualised machine running the SAME VERSION of MySQL server. While the BACKUP MySQL server is stopped, copy the table files (.frm, .MYI and .MYD for MyISAM tables) into the filesystem in /var/lib/mysql/BACKUP_DB (the BACKUP_DB database's corresponding directory).
Start the BACKUP MySQL server and verify the data has loaded correctly, using scripts or the CLI. Once verified, mysqldump the BACKUP_DB database so it can be loaded into the live server:
mysqldump --extended-insert BACKUP_DB > /root/sql/BACKUP_DB.sql
You have now converted your raw backup data into SQL statements, which can be loaded into MySQL without downtime (unlike the raw data). Move BACKUP_DB.sql to the live machine.
Import BACKUP_DB.sql into your LIVE MySQL instance as a different database. Note that the dump does not contain a CREATE DATABASE statement, so create the empty database first:
mysql -e "CREATE DATABASE BACKUP_DB;"
mysql BACKUP_DB < /root/sql/BACKUP_DB.sql
You should now have the backup database loaded into MySQL as BACKUP_DB.
Now merge, choosing between INSERT IGNORE and REPLACE INTO statements (are you overwriting old data, or "filling the blanks" around existing rows?). For INSERT IGNORE:
mysqldump --no-create-db --no-create-info --extended-insert --insert-ignore MERGE_SOURCE | mysql BACKUP_DB
Or, for REPLACE INTO action:
mysqldump --no-create-db --no-create-info --extended-insert MERGE_SOURCE | sed 's/INSERT INTO/REPLACE INTO/g' | mysql BACKUP_DB
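You can sanity-check that sed substitution on a trivial line before trusting it with a full dump (the table name t here is just an illustration):

```shell
# Demonstrate the INSERT INTO -> REPLACE INTO rewrite on a sample statement
echo "INSERT INTO t VALUES (1,'a');" | sed 's/INSERT INTO/REPLACE INTO/g'
# Prints: REPLACE INTO t VALUES (1,'a');
```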
Alternatively, instead of piping the output straight back into MySQL, send it to a file and review the SQL statements first:
mysqldump --no-create-db --no-create-info --extended-insert --insert-ignore MERGE_SOURCE > /root/sql/merge.sql
mysqldump --no-create-db --no-create-info --extended-insert MERGE_SOURCE | sed 's/INSERT INTO/REPLACE INTO/g' > /root/sql/merge.sql
Finally, still avoiding downtime, dump the merged backup database over the live one:
mysqldump BACKUP_DB | mysql CURRENT_DB
You could always lock the database first, to prevent data being written mid-dump to (for example) the z table, which has a foreign key into the a table, after a has already been written over:
FLUSH TABLES WITH READ LOCK;
(perform dump as previous step)
UNLOCK TABLES;
Add the FLUSH command to the start of your dump .sql file and UNLOCK to the end.
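Rather than editing the .sql file by hand, the wrapping can be done from the shell. A minimal sketch, using a throwaway stand-in for the dump file (substitute the real dump path from the steps above; merge_locked.sql is an arbitrary output name):

```shell
# Stand-in for the real dump file, purely for illustration
printf 'REPLACE INTO t VALUES (1);\n' > merge.sql

# Prepend the lock statement and append the unlock around the dump
{
  echo "FLUSH TABLES WITH READ LOCK;"
  cat merge.sql
  echo "UNLOCK TABLES;"
} > merge_locked.sql
```

This works because the mysql client runs the whole file over a single connection, so the lock taken at the top is still held when the final UNLOCK TABLES runs.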
Be sure to quadruple-check your DB names in this scenario! Ask any follow-up questions you're unsure of, as this is high-risk data-mashing stuff. Perform (and note in detail) the exact steps required on a dev server if possible, or virtualise your test, or create a small-scale test. Just test. And, of course, take enough backups to cover every data-loss eventuality.
When you built your database, you should have been prompted at some point for a password for the SYSTEM and SYS accounts. That's the password you need to use.
The comment above about logging in with "as sysdba" with the user SYSTEM is incorrect. SYSTEM does not have SYSDBA privilege.
If you've forgotten or don't know the password(s) you specified for these accounts, you can sneak in through the back door by doing the following (assuming Windows):
- Add your Windows user ID to the ORA_DBA group.
- In a command window, set ORACLE_SID appropriately and enter "sqlplus / as sysdba"
- You should be logged in as SYSDBA. You can now reset the passwords as @Gaius stated.
- One other potential gotcha is that you may need to modify the sqlnet.ora file in your ORACLE_HOME/network/admin folder to contain the line SQLNET.AUTHENTICATION_SERVICES=(NTS) if you continue to get the "insufficient privileges" error in step 2.
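The steps above, sketched as a session (assumes Windows; ORCL and new_password are placeholders, not values from the original answer):

```
:: Command prompt, after adding your user to ORA_DBA
set ORACLE_SID=ORCL
sqlplus / as sysdba

-- Now connected as SYSDBA; reset the passwords
ALTER USER SYSTEM IDENTIFIED BY new_password;
ALTER USER SYS IDENTIFIED BY new_password;
```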
Import is intended for loading whole schemas: objects and data. Although there is some flexibility - you can load individual tables, or just the data - it does not gracefully handle the existence of tables with data in the target schema. This is especially a problem if the old data in the target schema contains records which have been updated or deleted in the newer source schema.
If you are using Oracle 10g or higher the solution is to use Data Pump instead. This utility has much more powerful functionality, including the ability to replace existing tables or data in existing tables in the target schema with the data from the source schema. Find out more.
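A sketch of how that replacement works with Data Pump (the schema name, credentials and dump file name here are illustrative; DATA_PUMP_DIR must be a directory object that exists on the server):

```
expdp system/password SCHEMAS=SOURCE_SCHEMA DIRECTORY=DATA_PUMP_DIR DUMPFILE=source.dmp
impdp system/password SCHEMAS=SOURCE_SCHEMA DIRECTORY=DATA_PUMP_DIR DUMPFILE=source.dmp TABLE_EXISTS_ACTION=REPLACE
```

TABLE_EXISTS_ACTION=REPLACE drops and recreates each table that already exists in the target; use APPEND or TRUNCATE instead if you want to keep the existing table definitions.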
If you are using an earlier version of the database there are some workarounds. Find out more. But you may have to disable all the constraints and fix up the data in a round of post-processing.