
Creating and restoring logical backups (DUMPs) in Oracle Database 11g (exp and imp)


In this post, I will demonstrate how to create and restore Oracle Database backups using the exp and imp tools, which allow you to generate a full backup of an owner (schema) and restore it to another server.

Setting Environment Variables in Windows

(Screenshot: Oracle - SET)
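The screenshot above showed the commands; as a minimal sketch (the ORACLE_HOME path and SID are placeholders for your environment), setting the variables in a Windows command prompt usually looks like this:

    rem Adjust the path and SID to your installation
    set ORACLE_HOME=C:\oracle\product\11.2.0\dbhome_1
    set ORACLE_SID=ORCL
    set PATH=%ORACLE_HOME%\bin;%PATH%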

Setting Environment Variables in Linux
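A minimal sketch for Linux, assuming a bash shell and placeholder path/SID:

    # Adjust the path and SID to your installation
    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
    export ORACLE_SID=ORCL
    export PATH=$ORACLE_HOME/bin:$PATH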

Export Parameters

Here I will demonstrate how to export the database data to a dump file, which can be used to restore the database and replicate it to another instance/server.

Here are some parameters:

  • file = file.dmp: Defines the name of the dump file that will be generated. If not informed, the file name will be “EXPDAT.DMP”
  • log = file.log: Defines the name of the log file that will be generated. If not informed, no log will be generated.
  • owner = owner1, owner2: Defines the name(s) of the owner(s) (schemas) that will be exported
  • full = y: Indicates whether to export the entire database, with users, tablespaces, grants, etc.
  • grants = y: Defines whether owner-granted privileges are exported
  • indexes = y: Defines whether index objects will be exported
  • compress = y: Despite the name, this does not compress the dump file; it consolidates the extent information of each table so that, on import, the table is created with a single initial extent large enough to hold all of its data
  • consistent = y: Defines whether Oracle will prevent data changed during the export from being written to the dump file. The export uses the undo area to reconstruct a read-consistent image of the data as of the moment the export started.
  • constraints = y: Defines whether PK and FK constraints will be exported in the dump file.
  • direct = y: Defines whether the exported data will bypass the Oracle SQL command-processing layer (the evaluating buffer), which validates command syntax; skipping it makes the export faster (about 10% in general)
  • feedback = 0: Sets after how many rows exp displays a progress indicator (a dot) on screen. 0 disables the indicator.
  • filesize = 10MB: Sets the size limit per file. If you specify 2 MB, for example, exp will break the dump into 2 MB files, whose names must be listed in the FILE parameter. If not informed, the dump will consist of a single file.
  • flashback_scn: Allows you to export the data as of a specific SCN, where you get the current value from the database and guarantee that data changed after that point will not be exported. To find the current SCN, just run the query select current_scn from v$database;
  • flashback_time: Allows you to generate the export with data from a specific date.
  • tables = regions: Allows you to define the tables to export
  • query = “condition”: Allows you to define a WHERE condition to filter the exported data. The TABLES parameter must be informed and the condition must be applicable to all of those tables.
  • parfile = “file path”: Allows you to define a parameter file for the export, where each parameter must be on its own line; the utility will read that file and execute the exp command using those parameters.

    Parfile Example:

    userid=username/password
    file=exp_HR_20150319.dmp
    log=exp_HR_20150319.log
    owner=hr
    full=n
    buffer=409600000
    rows=n
    statistics=none
    direct=y
    consistent=y

  • statistics = none: Allows you to define whether the statistics generated by the database for performance should be exported or not. The options are none, compute, and estimate.
  • triggers = y: Defines whether table triggers should be exported.
  • tablespaces = tablespace1, tablespace2: If entered, this parameter causes exp to generate data from all tables that are allocated in the given tablespaces.
  • buffer = 4096: Defines the size (in bytes) of the buffer used by export to fetch rows (Only applies when direct = n)
  • help = y: Displays help information about the utility and its parameters

Exporting the data - Generating the backup file

(Screenshot: Oracle - Exp)
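The screenshot showed the export running; as a hedged sketch (credentials, file names, and the HR owner are placeholders), a typical command line combining the parameters above would be:

    exp userid=system/password file=exp_HR_20150319.dmp log=exp_HR_20150319.log owner=hr consistent=y direct=y statistics=none

Alternatively, with the parfile shown earlier saved as exp_hr.par:

    exp parfile=exp_hr.par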

Import Parameters

Here I will demonstrate how to import database data from a dump file.

The following are some parameters, besides those shown above for export, that are also applicable to import:

  • commit = n: If commit = y, imp will commit after each input block (array insert). Otherwise, a commit is performed after each imported table.
  • compile = y: Defines whether imp will compile the procedures, functions, and triggers after their creation.
  • fromuser = user: Identifies the source owner(s) of the data that was exported
  • touser = user: Identifies the target owner(s) where the data will be imported. The owners must exist in the target instance before executing the import
  • full = y: Imports the entire dump file without object filters
  • ignore = y: Defines whether imp will ignore errors in object creation (such as an object that already exists) and continue, or abort the operation on error
  • show = y: Defines whether imp will only list the contents of the dump instead of importing it
  • statistics = value: Defines how imp will handle the statistics collected from the database during export. Possible options are: always (always imports statistics), none (does not import or recalculate statistics), safe (imports statistics if they are not questionable; otherwise, recalculates them), or recalculate (does not import statistics, but recalculates them)

Importing data to new owner - Preparing the environment

Before importing, we will need to create the user and the tablespaces used by that user in the target instance.

Note that the size of the tablespaces and datafiles must be calculated so that there is sufficient space to receive the source data. You can do this by replicating the same values as the source tablespaces or by enabling the datafiles' autoextend option, which lets them grow as needed. In addition, I set the user's quota on the created tablespace to unlimited so that the user can allocate the entire tablespace if necessary.

This is not good practice, because as a DBA you should evaluate the space allocation in the export/import process, but for testing it is ideal.

(Screenshot: Oracle - User and Tablespace Creation)

Commands Used:
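The exact commands were shown in the screenshot above; as a minimal sketch, assuming a placeholder tablespace TS_HR, a placeholder datafile path, and the HR owner (adjust names, paths, and sizes to your environment):

    -- Tablespace with autoextend enabled, as discussed above
    create tablespace ts_hr
      datafile '/u01/app/oracle/oradata/ORCL/ts_hr01.dbf'
      size 100m autoextend on next 10m maxsize unlimited;

    -- Owner with unlimited quota on its tablespace
    create user hr identified by password
      default tablespace ts_hr
      temporary tablespace temp
      quota unlimited on ts_hr;

    grant connect, resource to hr;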

Importing data for new owner

(Screenshot: Oracle - Import in New Owner)
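As a hedged sketch of what the screenshot showed (credentials, file names, and owner names are placeholders), the import into the newly created owner would look like:

    imp userid=system/password file=exp_HR_20150319.dmp log=imp_HR_20150319.log fromuser=hr touser=hr commit=y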

Importing structure and data for existing owner

Widely used for replication or for creating new environments, importing structure and data into an existing owner consists of dropping all objects owned by the target owner and then running the import.

Deleting the entire structure of an owner can be tricky at times, which is why many DBAs prefer to use the command drop user <username> cascade; to erase the user and their objects quickly and easily.

Personally, I don't like this solution, since it gives me no control over what will be deleted, let alone a log of the operation. Instead, I created a script that performs this task, generating a new script with commands to drop all of the owner's objects and recording a log of each operation performed (script attached at the footer of the post):

(Screenshot: Oracle - Drop Owner)
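The attached script is the complete version; as a simplified, hedged sketch of the idea (spooling a drop script from dba_objects for a placeholder owner HR, run from SQL*Plus):

    set pages 0 lines 200 feedback off
    spool drop_hr_objects.sql
    -- Generate one drop command per object; dropping a table also drops
    -- its indexes and triggers, and cascade constraints removes the FKs
    select 'drop ' || object_type || ' ' || owner || '.' || object_name ||
           case object_type when 'TABLE' then ' cascade constraints purge' else '' end || ';'
      from dba_objects
     where owner = 'HR'
       and object_type in ('TABLE', 'VIEW', 'SEQUENCE', 'SYNONYM',
                           'PROCEDURE', 'FUNCTION', 'PACKAGE');
    spool off

Review the spooled drop_hr_objects.sql before executing it; this is exactly the control (and log) that the drop user ... cascade shortcut does not give you.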

After executing the generated script and deleting all of the owner's objects, you can run the import normally, which will recreate all of the owner's objects according to what exists in the dump file. (Be sure to check the size of tablespaces and datafiles, as above.)

Importing data only for existing owner

This is a more complex operation than importing structure and data, and it is intended to refresh the data of an environment without changing the structure of the target environment. To accomplish this, we will need to delete all data from the tables, disable the constraints and triggers, and drop the sequences (script attached at the bottom of the post).

(Screenshot: Oracle - Truncate Owner)
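Again, the attached script is the complete version; a hedged sketch of the approach, generating the statements from the dictionary views for a placeholder owner HR:

    -- Disable foreign keys first (the decode puts type 'R' first), so that
    -- primary keys can be disabled and tables truncated afterwards
    select 'alter table ' || owner || '.' || table_name ||
           ' disable constraint ' || constraint_name || ';'
      from dba_constraints
     where owner = 'HR'
     order by decode(constraint_type, 'R', 0, 1);

    select 'alter trigger ' || owner || '.' || trigger_name || ' disable;'
      from dba_triggers
     where owner = 'HR';

    select 'truncate table ' || owner || '.' || table_name || ';'
      from dba_tables
     where owner = 'HR';

    -- Sequences are dropped so that imp can recreate them with the
    -- current values from the source environment
    select 'drop sequence ' || sequence_owner || '.' || sequence_name || ';'
      from dba_sequences
     where sequence_owner = 'HR';

After running the generated statements, the data-only import uses ignore=y so that the "object already exists" errors are skipped and the rows are inserted into the existing tables (placeholders as before):

    imp userid=system/password file=exp_HR_20150319.dmp log=imp_HR_data.log fromuser=hr touser=hr ignore=y

Remember to re-enable the constraints and triggers after the import.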

Note that if any new objects exist in the source environment where the export was generated, those objects will be created during the import. Therefore, we will need a script to delete these new objects, should this situation occur.

The easiest way to do this is with the software Toad for Oracle, which allows visual comparison and generates a script to match the environments. You should validate this script so that it only removes the newly created objects (if any). There is no one-size-fits-all recipe here, as each environment and owner requires custom scripts for each situation.

If there are differences between the objects, the data import will fail. You, as a DBA, should either alert the requester that the import will not be possible, or apply the Toad script to match the environments and, after the import, apply the reverse script to undo the changes and keep the environment structure as it was before.

Some Import Tips

  • show parameter archive: Shows where the transaction logs (archived redo logs) are stored, so you can verify that there is space available for the logs that will be generated by the import. Otherwise, the import will fail.
  • Check the size of the UNDO tablespace to calculate if it will support UNDO operations that will be generated by Import.
  • Check the size of the owner and of its largest object; the largest object should not be larger than the undo tablespace.
  • Check for disabled constraints before the import. After the import, constraints may become invalid or raise data-inconsistency errors; you need to ensure that these errors already existed in the database before the import.
  • Check and compare the size of source and target tablespaces and datafiles to ensure that data will not overflow your tablespace or datafile on the target instance.
  • The scripts used in this walkthrough are available here.

Scripts used to import structure and data, and data only

For more detailed technical information, please refer to Oracle Official Website - Export and Import.