In this post, I will demonstrate how to create and restore Oracle Database backups using the exp and imp utilities, which allow you to generate a full backup of an owner and restore it to another server.
Setting Environment Variables in Windows
SET ORACLE_SID=ORCLTESTE
SET ORACLE_HOME=C:\oracle\product\11.2.0\dbhome_1
SET NLS_LANG=AMERICAN_AMERICA.AL32UTF8
Setting environment variables in Linux
export ORACLE_SID=ORCLTESTE
export ORACLE_HOME=/oracle/product/11.1.0/db_1/
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
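With the variables set, it is worth confirming that you are connected to the right instance before running exp or imp. A minimal check, assuming a local connection with SYSDBA privileges is available (adjust the credentials to your environment):

# Connect locally (uses ORACLE_SID) and confirm the instance name and status
sqlplus / as sysdba
SQL> select instance_name, status from v$instance;
SQL> exit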
Export Parameters
Here I will demonstrate how to export database data to a dump file, which can be used to restore the data and to replicate the database to another instance or server.
Here are some parameters:
- file = file.dmp: Defines the name of the dump file that will be generated. If not specified, the file name will be "EXPDAT.DMP".
- log = file.log: Defines the name of the log file that will be generated. If not specified, no log will be generated.
- owner = owner1,owner2: Defines the owner(s) that will be exported.
- full = y: Indicates whether to export the entire database, with users, tablespaces, grants, etc.
- grants = y: Defines whether owner-granted privileges are exported
- indexes = y: Defines whether index objects will be exported
- compress = y: Sets whether the dump file will be compressed to reduce final file size
- consistent = y: Defines whether the export will be consistent to a single point in time, so that data changed during the export is not reflected in the dump file. Oracle uses the undo area to reconstruct a read-consistent image of the data as of the start of the export.
- constraints = y: Defines whether PK and FK constraints will be exported in the dump file.
- direct = y: Defines whether the exported data will skip the Oracle SQL command-processing layer (evaluating buffer), which validates the syntax of the commands; bypassing it makes the export faster (generally around 10%).
- feedback = 0: Sets after how many records the exp utility will display a progress marker on screen. 0 disables it.
- filesize = 10MB: Sets the size limit per file. If you specify 2 MB, for example, exp will break the dump into 2 MB files, which must all be listed in the FILE parameter. If not specified, the dump will consist of a single file.
- flashback_scn: Allows you to export the data as of a specific SCN, guaranteeing that changes made after that point will not appear in the dump file (see the example after this parameter list). To find the current SCN of the database, just run the query:
select current_scn from v$database;
- flashback_time: Allows you to generate the export with data from a specific date.
- tables = regions: Allows you to define the tables to export
- query = "Condition": Allows you to define a condition to filter the exported data. The TABLES parameter must be specified and the condition must be applicable to all of those tables.
- parfile = "File path": Allows you to define a parameter file for the export, where each parameter must be on its own line; the utility will read that file and execute the exp command using those parameters.
Parfile Example:
userid=usuario/senha
file=exp_HR_20150319.dmp
log=exp_HR_20150319.log
owner=hr
full=n
buffer=409600000
rows=n
statistics=none
direct=y
consistent=y
- statistics = none: Defines whether the statistics generated by the database for performance should be exported. The options are none, compute, and estimate.
- triggers = y: Defines whether table triggers should be exported.
- tablespaces = tablespace1,tablespace2: If specified, exp exports the data of all tables allocated in the given tablespaces.
- buffer = 4096: Defines the size (in bytes) of the buffer used by export to fetch rows (only applies when direct = n).
- help = y: Help and Documentation Information
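As a quick reference, here is a minimal sketch combining some of the parameters above (FLASHBACK_SCN and FILESIZE). The SCN value, file names and credentials are placeholders; adjust them to your environment:

# Export the HR owner consistent as of a specific SCN, split into files of at most 100 MB
# (the SCN value is a placeholder; obtain the current one with: select current_scn from v$database;)
exp usuario/senha owner=hr flashback_scn=1234567 filesize=100MB file=exp_HR_1.dmp,exp_HR_2.dmp,exp_HR_3.dmp log=exp_HR.log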
Exporting the data - Generating the backup file
# Generating a backup of an owner, without data, only tables (rows=n)
exp usuario/senha file=exp_HR_20150319.dmp log=exp_HR_20150319.log owner=hr buffer=409600000 rows=n

# Generating a backup of the HR and SYS owners, with data (rows=y)
exp usuario/senha file=exp_HR_20150319.dmp owner=hr,sys full=n rows=y statistics=none direct=y consistent=y

# Generating a backup of all objects of all owners in the instance (full=y)
exp usuario/senha file=exp_HR_20150319.dmp full=y rows=y statistics=none direct=y consistent=y

# Generating a backup of all objects of an owner, in 3 files of at most 10 MB each
exp usuario/senha file=exp_HR_20150319_1.dmp,HR_20150319_2.dmp,HR_20150319_3.dmp filesize=10MB

# Generating a backup of the data returned by a query
exp usuario/senha file=exp_HR_20150319.dmp tables=hr.jobs query=\"where max_salary <= 10000\"

# Generating a backup using a parameter file (PARFILE)
exp parfile="C:\parfile.dat"

# How I usually use it - I create a parfile.dat file with the userid
exp parfile="C:\parfile.dat" file=exp_<owner>_<instancia>.dmp log=exp_<owner>_<instancia>.log buffer=409600000 statistics=none direct=y consistent=y
Import Parameters
Here I will demonstrate how to import database data from a dump file.
Here are some parameters, in addition to those shown above for export, which also apply to import:
- commit = n: If commit = y, imp commits after each buffer insert. Otherwise, a commit is performed only after each imported table.
- compile = y: Defines whether imp will compile procedures, functions and triggers after their creation.
- fromuser = user: Identifies the source owner(s) of the exported data.
- touser = user: Identifies the target owner(s) into which the data will be imported. The owners must exist on the target instance before executing the import.
- full = y: Imports the entire dump file without object filters
- ignore = y: Defines whether imp will ignore errors in object creation or abort the operation on error.
- show = y: Defines whether imp will only list the contents of the dump instead of importing it (see the example after this list).
- statistics = value: Defines how imp will handle the statistics collected from the database during the export. Possible options are: always (always imports statistics), none (does not import or recalculate statistics), safe (imports statistics if they are not questionable; otherwise recalculates them) or recalculate (does not import statistics, but recalculates them).
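Before importing into a new environment, it can be useful to inspect the dump first. A minimal sketch using show=y (file name and credentials are placeholders):

# List the contents of the dump file without importing anything (the DDL is written to the log)
imp usuario/senha file=exp_HR_20150319.dmp show=y full=y log=show_HR_20150319.log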
Importing data to new owner - Preparing the environment
Before importing, we will need to create the users and tablespaces used by that user in the target instance.
Note that the size of the tablespaces and datafiles must be calculated so that there is sufficient space to receive the source data. You can get this by replicating the same values as the source tablespaces, or by enabling the datafile's autoextend option, which causes it to grow as needed. In addition, I set the user's quota on the created tablespace to unlimited so that the user can allocate the entire tablespace if necessary.
This is not good practice, because as a DBA you should evaluate the allocation of space in the export/import process, but for testing it is ideal.
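To size the target tablespaces, a simple starting point is to check the datafile sizes of the source tablespaces on the source instance. A minimal sketch (the HR owner is just a placeholder):

-- Size of each datafile of the tablespaces used by the owner, in MB
select tablespace_name, file_name, bytes/1024/1024 as mb, autoextensible
from dba_data_files
where tablespace_name in (select distinct tablespace_name from dba_segments where owner = 'HR')
order by tablespace_name, file_name;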
Commands Used:
create tablespace TS_novo_hr datafile 'C:\ORACLE\ORADATA\ORCLTESTE\novo_hr.dbf' size 10M;
create user novo_hr identified by teste default tablespace TS_novo_hr;
ALTER DATABASE DATAFILE 'C:\ORACLE\ORADATA\ORCLTESTE\NOVO_HR.DBF' autoextend on;
alter user novo_hr quota unlimited on TS_novo_hr;
Importing data for new owner
# Simple example, importing from owner HR to NOVO_HR and generating a log file
imp dirceu/dirceu file=exp_HR_20150319.dmp log=imp_HR_20150319.log fromuser=hr touser=novo_hr

# Performing the import without bringing additional structures
imp dirceu/dirceu file=exp_HR_20150319.dmp log=imp_HR_20150319.log fromuser=hr touser=novo_hr grants=n commit=n ignore=y analyze=n constraints=n indexes=n

# How I usually use it - I create a parfile.dat file with the userid
imp parfile="C:\parfile.dat" file=exp_<owner>_<instancia>.dmp buffer=409600000 log=imp_<owner>_<instancia>.dmp fromuser=<owner> touser=<owner> grants=y commit=n ignore=y analyze=n constraints=y
Importing structure and data for existing owner
A widely used operation for replicating or creating new environments, importing structure and data for an existing owner consists of deleting all objects under the owner in the target environment and then importing.
Deleting the entire structure of an owner can be tricky at times, which is why many DBAs prefer to use the command drop user <usuario> cascade;
to erase the user and their objects quickly and easily.
I particularly don't like this solution, since I don't have control over what will be deleted, let alone a log of the operation. To address that, I created a script that performs this task: it generates a new script with commands to drop all of the owner's objects and records a log of each operation performed (script attached at the footer of the post).
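The idea behind that script is simple: query the data dictionary and spool the DROP statements to a file so you can review them before running anything. This is only a minimal sketch of the approach, not the attached script itself; the owner name and spool path are placeholders:

-- Generate DROP commands for all objects of an owner into a script for review
set heading off feedback off pages 0 lines 200
spool C:\drop_novo_hr.sql
select 'drop ' || object_type || ' NOVO_HR.' || object_name ||
       decode(object_type, 'TABLE', ' cascade constraints;', ';')
from dba_objects
where owner = 'NOVO_HR'
and object_type in ('TABLE','VIEW','SEQUENCE','PROCEDURE','FUNCTION','PACKAGE','SYNONYM');
spool off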
After executing the generated script and deleting all of the owner's objects, you can import normally, which will recreate all of the owner's objects according to what exists in the dump file. (Be sure to check the size of the tablespaces and datafiles, as above.)
# How I usually use it - I create a parfile.dat file with the userid
imp parfile="C:\parfile.dat" file=exp_<owner>_<instancia>.dmp buffer=409600000 log=imp_<owner>_<instancia>.dmp fromuser=<owner> touser=<owner> grants=n commit=n ignore=n analyze=n constraints=y
Importing data only for existing owner
This is a more complex operation than importing structure and data, and is intended to update the data of an environment without changing its structure in the target environment. To accomplish this, we will need to delete all data from the tables, disable the constraints and triggers, and delete the sequences (script attached at the bottom of the post).
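Here too the data dictionary can generate most of the preparation work. This is only a minimal sketch of the idea, not the attached script; the owner name and spool path are placeholders:

-- Generate commands to disable constraints and triggers, truncate tables and drop sequences
set heading off feedback off pages 0 lines 200
spool C:\prepare_novo_hr.sql
select 'alter table NOVO_HR.' || table_name || ' disable constraint ' || constraint_name || ';' from dba_constraints where owner = 'NOVO_HR';
select 'alter trigger NOVO_HR.' || trigger_name || ' disable;' from dba_triggers where owner = 'NOVO_HR';
select 'truncate table NOVO_HR.' || table_name || ';' from dba_tables where owner = 'NOVO_HR';
select 'drop sequence NOVO_HR.' || sequence_name || ';' from dba_sequences where sequence_owner = 'NOVO_HR';
spool off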
Keep in mind that if there are any new objects in the source environment where the export was generated, those objects will be created during the import. Therefore, we will need a script to delete these new objects if this situation occurs.
The easiest way to do this is with Toad for Oracle, which allows visual comparison and generates a script to match the environments. This script should be validated by you, so that it only handles the removal of the newly created objects (if any). There is no one-size-fits-all recipe here, as each environment and owner requires a custom script for each situation.
If there is a difference between the objects, the data import will fail. You, as a DBA, should alert the requester that the import will not be possible, or apply the Toad script to match the environments and, after the import, apply the reverse Toad script to undo the changes and keep the environment structure as it was before.
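If Toad is not available, a rough way to spot the differences is to list the owner's objects on both instances and compare the output. A minimal sketch (run it on source and target; the HR owner is a placeholder):

-- List the owner's objects to compare between source and target
select object_type, object_name from dba_objects where owner = 'HR' order by object_type, object_name;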
# How I usually use it - I create a parfile.dat file with the userid
imp parfile="C:\parfile.dat" file=exp_<owner>_<instancia>.dmp buffer=409600000 log=imp_<owner>_<instancia>.dmp fromuser=<owner> touser=<owner> grants=n commit=n ignore=y analyze=n constraints=n
Some Import Tips
- show parameter archive: Checks the location where the transaction logs (archived redo logs) are stored, so you can verify that there is space available for the logs that will be generated by the import. Otherwise, it will fail.
- Check the size of the UNDO tablespace to calculate if it will support UNDO operations that will be generated by Import.
- Check the size of the owner and of its largest object; the largest object should not be larger than the undo tablespace.
- Check for disabled constraints before the import. After the import, constraints may become invalid or raise data inconsistency errors, so you need to make sure these errors already existed in the database before the import.
- Check and compare the size of source and target tablespaces and datafiles to ensure that data will not overflow your tablespace or datafile on the target instance.
- The scripts used in this walkthrough are available here.
-- Total size of the owner (in MB)
select sum(bytes)/1024/1024 || ' mb' from dba_segments where owner = 'HR';

select owner, sum(bytes)/1024/1024 || ' mb' from dba_segments where owner = 'HR' group by owner;

-- Tablespaces used by the owner
select distinct tablespace_name from dba_segments where owner = 'HR' order by tablespace_name;

@tablespace_tamanho HR
@info_tablespace undo

-- Largest non-LOB segment of the owner
select owner, segment_name, segment_type, bytes/1024/1024 || ' MB'
from dba_segments
where bytes = (select max(bytes) from dba_segments where owner = 'HR' and segment_type not like '%LOB%')
and owner = 'HR';

-- Largest LOB segment of the owner
select owner, segment_name, segment_type, bytes/1024/1024 || ' MB'
from dba_segments
where bytes = (select max(bytes) from dba_segments where owner = 'HR' and segment_type like '%LOB%')
and owner = 'HR';

-- Location of the archived redo logs
show parameter archive

@free_asm

-- Constraints already disabled before the import
select * from dba_constraints where status = 'DISABLED' and owner = 'HR';

select * from v$instance;
select * from gv$instance;
Scripts used to import structure and data, and to import data only.
For more detailed technical information, please refer to Oracle Official Website - Export and Import.
Good night!
I need to create the tables below, however, with different users!
WITH USER FOCVS
create table FOCVS.SIS_SYNC_LOGITEM
(
ID_SYNC_LOGITEM VARCHAR2(50),
ID_INTEGER LICENSE default 1,
ID_EXECUTION VARCHAR2(50),
)
WITH USER FOCVS01
create table FOCVS.SIS_SYNC_LOGITEM
(
ID_SYNC_LOGITEM VARCHAR2(50),
ID_INTEGER LICENSE default 1,
ID_EXECUTION VARCHAR2(50),
)
How do I do that?
Eg
I have an entire dump with OWNER FOCVS and I want to create a TEST environment with this dump for USER FOCVS01, how do I do this?
Thank you!
Very good indeed. I'll keep coming back to see if something similar appears for impdp/expdp which, although very similar, have their peculiarities. Before any visitor asks why I don't do it myself: I don't have a blog, let alone the didactic touch I found here. Success always!
Antonio, a similar post about Data Pump is in the plans. Stay tuned!
Hello,
I'm new to database administration and I have a question: after importing, can I delete the .dmp file from the path given at the time of import?
I need to delete this file because it is too large to be stored on the server.
Thanks.
Maiara Gradim
Hi Maiara, good afternoon.
First, thanks for stopping by. 🙂
After the import, yes, you can delete the dump file with no problem.
Good afternoon Dirceu,
I performed the steps you mentioned above, but when importing I get "constraint violated" errors.
Can you tell me how to fix it? I used the available file (import_structure_data).
Sincerely.
Maiara, do you want to import data only or structure and data?
I did not understand how to enter the password in the file. Thank you.
Ulysses, is your question about the export or the import?
I'm starting to use Oracle doing dumps, but I'm having a problem in the target database: when saving a record it gives me an "existing record" error. I was told that the sequences from database 1 were not generated by exp and imported by imp into database 2. Is there any way to avoid these sequence errors when importing?
Patrick,
Good morning and thanks for stopping by.
If you are importing the entire owner and all of its objects to another server/owner, you can delete all objects from the destination and let the import recreate them (remember to save the permissions on the source to apply to the destination). This way I believe you will not have this problem.
Take a look at this post, because it has a script (import_structure_data) that already does all of that for you.
If you still have questions, just let me know.
I'm using 1 server with 2 instances, one PRD and another SML. The databases have the same structure: one for production and one for simulation. The owner is the same in both databases.
Is your script for Windows? Should it only run in sqlplus, or can it run in other tools?
Patrick, the scripts were developed for sqlplus on Windows, but they work in any environment as long as sqlplus is used. Which tools do you mean?
It would be SQL Tools, the connection manager.
Very good!