

Archiving Considerations

Archiving allows you to recover your database after a media failure and should be enabled as a general good practice. However, a batch data load consists largely of INSERT operations that you can easily regenerate by re-executing the load, so it definitely helps to disable archiving for the duration of the load.


Caution:  
If there are other transactions going on concurrently with your batch job and a media failure occurs during the load, you may not be able to recover the other transactions.


Tip:  
Use the following to put the database in ARCHIVELOG mode:

On UNIX,

$ svrmgrl

SVRMGR> connect internal

SVRMGR> shutdown

SVRMGR> startup mount

SVRMGR> alter database archivelog;

SVRMGR> alter database open;



Tip:  
Use the following to put the database in NOARCHIVELOG mode:

On UNIX,

$ svrmgrl

SVRMGR> connect internal

SVRMGR> shutdown

SVRMGR> startup mount

SVRMGR> alter database noarchivelog;

SVRMGR> alter database open;


  Disable logging for large tables by using the NOLOGGING attribute. When NOLOGGING is set for a table, direct-path operations against it generate minimal redo, which results in improved performance. This option gives you the flexibility of keeping the database in ARCHIVELOG mode while leaving the large tables in NOLOGGING mode.


Tip:  
If a table contains BLOB or CLOB columns, you can set just the LOB segments of the table to NOLOGGING.
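As a minimal sketch of both settings (the table name sales_hist and the CLOB column notes are assumptions, not from the text):

     Alter table sales_hist nologging;
     Alter table sales_hist modify lob (notes) (nocache nologging);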
  Keep your indexes efficient by rebuilding them after major changes such as bulk inserts, updates, and deletes. Inefficient indexes are directly reflected in overall system performance.
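For example, assuming a hypothetical index named sales_hist_idx, a rebuild after a bulk load might look like this:

     Alter index sales_hist_idx rebuild nologging;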

Use the following tips to enhance the performance of data loads:

  Disable table constraints.
  Disable triggers on the table. (Tables that are bulk loaded should not have triggers to begin with.)
  Drop indexes prior to data load, and then re-create the indexes.
  Use the direct-path option of SQL*Loader for the data load. This improves performance because the data is loaded above the high-water mark.
Full table scans read every block up to the high-water mark, so performance can degrade if you have deleted a large number of rows.
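A hedged sketch combining these steps; the table, constraint, trigger, index, and control-file names are assumptions chosen for illustration. First prepare the table:

     Alter table sales_hist disable constraint sales_hist_fk;
     Alter trigger sales_hist_trg disable;
     Drop index sales_hist_idx;

Then run the load from the operating system with the direct-path option:

$ sqlldr userid=scott/tiger control=sales_hist.ctl direct=true

Finally, re-create the index and re-enable the trigger and constraint:

     Create index sales_hist_idx on sales_hist (sale_date);
     Alter trigger sales_hist_trg enable;
     Alter table sales_hist enable constraint sales_hist_fk;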


Tip:  
There are two ways to reset the high-water mark: drop and re-create the table, or truncate the table.
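For instance, assuming a hypothetical staging table named sales_stage:

     Truncate table sales_stage;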
  Use the APPEND hint when populating aggregate tables. Aggregate tables are usually created by using information from another table. The APPEND hint causes rows to be inserted above the high-water mark and can be a significant performance improvement. (A sketch appears after the caution below.)
  Wherever possible, use the TRUNCATE command to delete large amounts of data. Combined with the partitioning feature, TRUNCATE can be applied to individual partitions.
  The optimizer is partition aware, but if you know the partition that needs to be considered, you can query the partition by specifying it in the FROM clause. For example:

     Select *
     From sales partition (part4)
     Where month between 10 and 12;


Caution:  
Querying a partition by name, as in the previous example, should be used carefully, because if for some reason you change the information stored in a partition, or rearrange the partitions so that part4 is no longer the partition containing that data, the previous query will no longer return the correct results.
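As a hedged sketch of the APPEND hint mentioned above, assuming an aggregate table named sales_summary built from the sales table (both names are illustrative):

     Insert /*+ APPEND */ into sales_summary
     Select month, sum(amount)
     From sales
     Group by month;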

Consider rearranging the partition structure of a partitioned table after it has been in use for some time.


Tip:  
Use the ALTER TABLE command to add, drop, exchange, move, modify, rename, split, and truncate partitions.
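A few illustrative forms, assuming a range-partitioned table named sales; the partition names and boundary values are assumptions:

     Alter table sales add partition part5 values less than (200001);
     Alter table sales truncate partition part1;
     Alter table sales split partition part4 at (199812)
          into (partition part4a, partition part4b);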

Operation of a Large Database

The definition of a very large database keeps changing because operating system and hardware enhancements continue at a rapid pace, so database size alone is no longer the factor that defines a database as being “large.” It is more realistic to look at the recovery time and define a database as “large” if it cannot be recovered in 24 hours. VLDB systems need special considerations compared to smaller systems, and some of the suggestions for dealing with them might even be the opposite of what a DBA would do for a smaller database.

A VLDB typically consists of:

  A small number of large tables that account for about 90 percent of the data. These tables change dramatically in size with the passage of time.
  A lot of small code tables that are generally static and do not change much in size as time goes by. You should be able to size these tables properly. Try to use storage parameters so a code table can fit in one extent. These tables should generally be free from fragmentation.
  A lot of aggregate tables that are derived from the large tables. These may or may not change in time, depending on your application. The purpose of these tables is to improve performance.
  Frequently needed data.
  Complex queries that repeatedly perform the same data massaging and aggregation.
  Several temporary tables. These tables are used during data loads.


Tip:  
Code tables of roughly the same size should be placed in a tablespace whose default storage parameters are appropriate for all of them. This improves the reusability of extents.
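A hedged sketch of such a tablespace; the name, file path, and sizes are purely illustrative:

     Create tablespace codes_ts
     Datafile '/u02/oradata/prod/codes01.dbf' size 50M
     Default storage (initial 64K next 64K pctincrease 0);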


Tip:  
Code tables are static in nature, and therefore it is beneficial to use a low pctfree setting.


Tip:  
If large transaction tables are truncated and then reloaded with information, set pctfree to a low value; otherwise set pctfree to a high value.


Tip:  
If a large transaction table grows at a constant rate, set pctincrease to 0. Otherwise, set it to an appropriate non-zero value.
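Putting the last two tips together, a hedged sketch of a hypothetical large transaction table that is truncated and reloaded and grows at a constant rate:

     Create table sales_hist (
        sale_date  date,
        amount     number(12,2) )
     Pctfree 5
     Storage (initial 500M next 100M pctincrease 0);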

Rollback Segments Design

A VLDB usually has three different types of transactions. Table 33.3 shows the rollback segments that should be used for these types of transactions.

Table 33.3. Rollback segments to use based on transaction types.

Transaction Type                     Rollback Segment

Batch loads                          Large, dedicated rollback segments
Batch aggregates                     Medium-sized rollback segments
Small and short user transactions    A lot of small rollback segments
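As one hedged illustration, a large rollback segment dedicated to batch loads might be created and then selected explicitly by the load job; the segment name, tablespace, and sizes are assumptions:

     Create rollback segment rbs_batch
     Tablespace rbs_large
     Storage (initial 100M next 100M optimal 500M);

     Set transaction use rollback segment rbs_batch;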

