Parallel Server Initialization Parameters
All initialization parameters prefixed with GC_ are OPS-related. Refer to the Parallel Server Concepts and Administration Guide for a complete list of these initialization parameters and how to use them. The most important parameters are presented in Table 31.5.
| Parameter | Definition |
|---|---|
| GC_FILES_TO_LOCKS | Specifies the number and size of PCM locks |
| GC_ROLLBACK_LOCKS | Limit of DLM locks per public rollback segment |
| GC_RELEASABLE_LOCKS | Limit of DLM locks per session |
| GC_DB_LOCKS | Database-wide limit of DLM locks |
| GC_LOCK_PROCS | Number of lock processes per instance |
GC_FILES_TO_LOCKS allows us to predefine PCM locks, specify whether they are hashed or fine-grained, and designate which datafiles they will be used for.
The format of GC_FILES_TO_LOCKS is quite sophisticated. One of its simpler forms is <file#1>=<Nbr Locks>:<file#2>=<Nbr Locks>: ... Consider the following example of a GC_FILES_TO_LOCKS setting:
GC_FILES_TO_LOCKS = 1=500:2=100:3=200:4=500:5=50
This setting defines 500 PCM locks (hash by default) for use with data file 1 (the system tablespace). It also defines various numbers of locks to be used with other data files, which presumably contain application tables, indexes, temporary tablespace, or rollback segments.
Each PCM lock protects one database block by default. If we want locks to protect more blocks, we append an !x to the lock specification, where x is the number of blocks assigned to each lock. Let's revisit a slightly modified version of our example:
GC_FILES_TO_LOCKS = 1=500!50:2=100!25:3=200:4=500:5=50
In our modified version, each of the 500 locks for the system tablespace protects 50 database blocks. If our block size were 2KB, each PCM lock used for the system tablespace would cover 100KB of data.
Because PCM locking greatly affects the performance of OLTP applications running OPS, you'll spend a great deal of time tuning these parameters to be sure that transactions aren't waiting for locks to become available. GC_DB_LOCKS and GC_RELEASABLE_LOCKS effectively dictate how many PCM locks are available database-wide. Not all locks provided by GC_DB_LOCKS and GC_RELEASABLE_LOCKS need to be statically assigned to individual data files. We can leave some free for Oracle to use as needed.
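As a rough sketch of how this tuning work proceeds, the OPS dynamic performance views can show whether instances are contending for PCM locks. The queries below assume the v$lock_activity and v$ping views are available in your release; SELECT * is used deliberately, since column layouts vary between releases:

```sql
-- Summary of lock conversion activity since instance startup;
-- a high rate of conversions suggests PCM lock contention
SELECT * FROM v$lock_activity;

-- Blocks that have been "pinged" (written to disk so another
-- instance could read them); files appearing here frequently
-- are candidates for more, or finer-grained, PCM locks
SELECT * FROM v$ping;
```

Rows that recur in v$ping point you at the datafiles whose GC_FILES_TO_LOCKS allocations need revisiting.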
This section addresses database configuration considerations for large OLTP applications. To a large extent, the configuration issues are the same as for other types of databases; however, there are significant differences. Several that we will discuss involve rollback segment definitions, as well as setting the transaction attributes on indexes and tables.
I/O Considerations and File/Disk Placement
As with all databases, large OLTP databases should have tables and indexes placed on separate devices. Often I will also place temporary tablespace and rollback segment space on devices separate from tables and indexes. Although we would like to separate temporary tablespace from rollback segment space, putting both on the same device is an acceptable concession because very few operations use them both. Chapter 24, Oracle8 Database Tuning, has more details on the placement of data files.
If you're using OPS, consider placing all read-only tables, indexes, and clusters in a read-only tablespace. This will reduce PCM-locking activity for these tables. The disadvantage is that changing data in a read-only tablespace requires an ALTER TABLESPACE command to make the tablespace writable before the change and another to make it read-only again afterward. Typically, ALTER TABLESPACE is not a privilege given to application users or developers.
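The sequence looks like the following sketch, where hist_data and the table being changed are hypothetical names:

```sql
-- Make the tablespace writable long enough to apply the change
ALTER TABLESPACE hist_data READ WRITE;

-- Hypothetical maintenance change to a table in that tablespace
UPDATE hist.rate_table SET rate = rate * 1.05;
COMMIT;

-- Return the tablespace to read-only to restore the reduced
-- PCM-locking behavior
ALTER TABLESPACE hist_data READ ONLY;
```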
Because of the large number of users, you're more likely to have many concurrent transactions updating the same block, so consider increasing INITRANS and MAXTRANS for both indexes and tables. MAXTRANS sets the maximum number of transactions that can simultaneously hold row-level locks on the same table or index block. If MAXTRANS isn't set high enough, users will experience periodic hanging.
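A sketch of how these attributes might be raised; the object names and the values 10 and 50 are illustrative, and you would tune them to your own concurrency levels:

```sql
-- Reserve 10 transaction slots per block up front,
-- allowing growth to 50 under heavy concurrent update
CREATE TABLE orders (
    order_id  NUMBER,
    status    VARCHAR2(10)
) INITRANS 10 MAXTRANS 50;

CREATE INDEX orders_status_idx ON orders (status)
    INITRANS 10 MAXTRANS 50;

-- Existing segments can also be altered, although only blocks
-- formatted after the change pick up the new INITRANS value
ALTER TABLE orders INITRANS 10 MAXTRANS 50;
```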
OLTP applications' SQL operations typically affect small result sets, and there are typically very few sequential read operations. With this type of load, large block sizes are relatively inefficient. For example, reading or writing in 16KB blocks is inefficient when you are typically using only a few hundred bytes within that block. I recommend 2KB or 4KB block sizes for OLTP applications as a general rule of thumb.
If you are using clusters, make sure that the database block size is larger than the estimated amount of space required by each cluster key and its associated rows. Clusters physically force the order in which rows are stored in one or more tables, and typically are used to reduce the overhead associated with joins. For example, if the estimated space occupied by a key value and its associated rows is 6KB, we would define the database with an 8KB block size even though it breaks our rule of thumb. Clusters become very inefficient if keys and the associated rows span blocks. Elsewhere in this chapter we will discuss clusters in more detail.
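A hypothetical cluster definition following this sizing rule; the names are illustrative, and SIZE 6144 reflects the 6KB per-key estimate from the example above:

```sql
-- SIZE tells Oracle roughly how much space each cluster key
-- (a department and its associated employee rows) needs; with
-- an 8KB block size, the 6KB estimate keeps each key's rows
-- from spanning blocks
CREATE CLUSTER dept_emp_cluster (deptno NUMBER(4))
    SIZE 6144;

CREATE INDEX dept_emp_cluster_idx ON CLUSTER dept_emp_cluster;

-- Rows from both tables sharing a deptno are stored together
CREATE TABLE dept (
    deptno  NUMBER(4) PRIMARY KEY,
    dname   VARCHAR2(30)
) CLUSTER dept_emp_cluster (deptno);

CREATE TABLE emp (
    empno   NUMBER(6) PRIMARY KEY,
    ename   VARCHAR2(30),
    deptno  NUMBER(4)
) CLUSTER dept_emp_cluster (deptno);
```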