Oracle Automatic Workload Repository (AWR) maintains performance statistics for detecting problems in the database by collecting and processing the data needed for self-tuning. The data gathered resides both in memory and in the database, and you can use reports and views to access it.
Oracle allows you to transport the data AWR collects between systems, so you can perform analysis of the AWR data on a separate system when required. To transport AWR snapshot data, you must extract the data from the database on the source system and then load it into the database on the target system.
Note that this process is available in all Oracle Database versions beginning with Oracle 10g.
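Before extracting, it can help to confirm the database identifier and the snapshot range AWR currently holds. A minimal query against the standard AWR dictionary view DBA_HIST_SNAPSHOT (assuming you have the privileges to select from it) might look like:

```sql
-- Identify the DBID and the range of snapshots available for extraction.
SELECT dbid,
       MIN(snap_id)             AS first_snap,
       MAX(snap_id)             AS last_snap,
       MIN(begin_interval_time) AS oldest,
       MAX(end_interval_time)   AS newest
FROM   dba_hist_snapshot
GROUP  BY dbid;
```

The DBID and snapshot Ids returned here are the values the extract script will prompt you for below.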
How to Extract AWR Data
Use the awrextr.sql script to extract the AWR data from the database into a Data Pump export file for a specified range of snapshots. After the dump file is created, it can be moved to another system, where the extracted data can be loaded into the new database.
You must be connected to the database as the SYS user to run the awrextr.sql script.
- From the SQL prompt, enter:
@?/rdbms/admin/awrextr.sql
This returns a list of the databases in the AWR schema.
- Enter the database from which the AWR data will be extracted:
Enter value for db_id: [YOUR DB ID]
For example, to select the database with the database identifier of 123456789 enter the value 123456789.
- Specify the number of days for the list of Snapshot Ids.
Enter value for num_days: [DAYS]
A list of existing snapshots for the specified time range is displayed. For example, snapshots captured in the last 2 days are displayed when a value of 2 is entered.
- Define the snapshot range for the AWR data to be extracted by specifying a beginning and an ending Snapshot Id:
Enter value for begin_snap: [x]
Enter value for end_snap: [y]
For example, to select a snapshot with a snapshot Id of 20 as the beginning snapshot, and the snapshot with a snapshot Id of 30 as the ending snapshot, enter 20 for [x] and 30 for [y].
- After defining the range, a list of directory objects is displayed.
- Enter the name of the directory object pointing to the directory where the export dump file will be stored.
Enter value for directory_name: [Your DATA PUMP Directory]
For example, to select the directory object MY_DATA_PUMP, enter MY_DATA_PUMP at the prompt.
- Finally specify the prefix for the export dump file name (a .dmp suffix will be automatically appended to the file name):
Enter value for file_name: awrdata_20_30
In this example, an export dump file named awrdata_20_30.dmp will be created in the directory corresponding to the directory object you specified:
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
D:\ORACLE\PRODUCT\[VERSION]\DB_1\RDBMS\LOG\AWRDATA_20_30.DMP
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at [TimeStamp]
The amount of AWR data to be extracted determines the processing time of the extract operation. After the dump file is created, Data Pump can be used to transport the file to another system.
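If the target database is reachable over a database link, the dump file can also be copied without leaving the database by using the DBMS_FILE_TRANSFER package. A sketch, assuming a directory object named DATA_PUMP_DIR exists on each side and a database link named target_db (both names are illustrative):

```sql
-- Copy the extract dump file from the local directory object to the
-- target database's directory object over the target_db database link.
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'DATA_PUMP_DIR',
    source_file_name             => 'awrdata_20_30.dmp',
    destination_directory_object => 'DATA_PUMP_DIR',
    destination_file_name        => 'awrdata_20_30.dmp',
    destination_database         => 'target_db');
END;
/
```

Copying the file at the operating-system level (for example with scp or a shared mount) works just as well; the only requirement is that the file ends up in a directory covered by a directory object on the target system.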
How to Load AWR Data
After transporting the export dump file to the target system, you load the extracted AWR data by using the awrload.sql script. The awrload.sql script will create a staging schema for the snapshot data to be transferred from the Data Pump file into the database. The data is then transferred from the staging schema into the AWR tables.
You must be connected to the database as the SYS user to run the awrload.sql script.
- From the SQL prompt, enter:
@?/rdbms/admin/awrload.sql
This returns a list of directory objects.
- Specify the directory object pointing to the directory where the export dump file is located:
Enter value for directory_name: [Your DATA PUMP Directory]
For example, to select the directory object MY_DATA_PUMP, enter MY_DATA_PUMP at the prompt.
- Next specify the prefix for the export dump file name (a .dmp suffix will be automatically appended to the file name):
Enter value for file_name: awrdata_20_30
For this example, enter awrdata_20_30 to load the export dump file named awrdata_20_30.dmp.
- Now enter the name of the staging schema where the AWR data will be loaded:
Enter value for schema_name: [Your SCHEMA]
For this example, to use a staging schema named MY_AWR_STAGE for the AWR data, simply enter MY_AWR_STAGE at the prompt.
- You must then specify the default tablespace for the staging schema:
Enter value for default_tablespace: [Your TABLESPACE]
For example, enter SYSAUX to select that tablespace.
- Finally, specify a temporary tablespace for the staging schema:
Enter value for temporary_tablespace: [LOCATION]
In this example, for [LOCATION] you could enter TEMP as the tablespace.
- A staging schema named MY_AWR_STAGE is created, and the AWR data is loaded into it. After the AWR data is loaded into the MY_AWR_STAGE schema, the data is transferred into the AWR tables in the SYS schema. The output should be similar to the following, with the number of objects [w] and the time in seconds [z] varying with the specifics of your data:
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Completed [w] CONSTRAINT objects in [z] seconds
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Completed [w2] REF_CONSTRAINT objects in [z2] seconds
Job "SYS"."SYS_IMPORT_FULL_03" successfully completed at [TIMESTAMP] ... Dropping MY_AWR_STAGE user
End of AWR Load
The amount of AWR data loaded affects the duration of the load operation. Once the load completes, the staging schema is dropped automatically by the script.
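Once the load finishes, you can confirm that the imported snapshots are visible on the target system; the DBID of the source database should now appear alongside the target's own. From there, the awrrpti.sql script (which, unlike awrrpt.sql, prompts for a DBID and instance number) can generate an AWR report against the imported data:

```sql
-- Confirm the imported snapshots are present on the target system.
SELECT dbid, COUNT(*) AS snapshots
FROM   dba_hist_snapshot
GROUP  BY dbid;

-- Generate an AWR report, supplying the imported DBID and snapshot range
-- at the prompts.
@?/rdbms/admin/awrrpti.sql
```

This is the usual end-to-end check that the extract, transport, and load all succeeded.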