Eloquence B.08.30 may require the configuration of additional HP-UX kernel
parameters or adjustment of previously configured kernel parameters.
As of Eloquence B.08.00, the eloqdb database server is a multi-threaded
process that opens a set of files (volume files, log files, etc.),
allocates its internal database BufferCache, and waits for incoming
client connections on TCP sockets. For each client connection it creates
an OS-level thread, and by default it uses Sys V IPC semaphores and
shared memory to communicate with clients running on the same host.
The following sections discuss the HP-UX kernel parameters involved.
You may have to increase some of the HP-UX kernel parameters beyond
their default values, depending on the number of eloqdb servers and
their eloqdb.cfg settings, such as Threads or BufferCache.
Note that the discussion below only covers the requirements of the
eloqdb database server(s). When adjusting kernel parameters, you also
need to take into account the requirements of other applications as
well as the operating system itself, so you will typically add to the
existing settings to be on the safe side.
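The examples in the following sections refer to kctune(1M), the kernel
tuning tool of HP-UX 11i v2 and later. These are sketches only; older
releases use kmtune(1M) or SAM instead, and all values shown below are
placeholders, not recommendations:
# display the current value of a tunable, for example nproc
kctune nproc
# change a tunable (static tunables take effect after a reboot)
kctune nproc=4200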
Processes / Threads
- nproc - limits the number of processes allowed to exist simultaneously
- maxuprc - limits the maximum number of concurrent user processes per user
- nkthread - limits the number of threads allowed to run simultaneously
- max_thread_proc - defines the maximum number of concurrent threads allowed per process
The default settings for nproc and maxuprc should typically be sufficient.
However, you may need to increase the nkthread and/or max_thread_proc
parameters, depending on the number of eloqdb servers and their
eloqdb.cfg [config] Threads settings.
Each eloqdb server process creates a small number of internal OS threads,
typically below 10, and one additional OS thread for every concurrent DB
client connection (regardless of the number of DBOPENs by each client).
Use
nkthread >= SUM of "10 + max number of clients (Threads)" per eloqdb
max_thread_proc >= MAX of "10 + max number of clients (Threads)" per eloqdb
In other words: nkthread depends on the total number of client threads
across your eloqdb servers, whereas max_thread_proc depends on the eloqdb
server with the largest number of client threads.
For example:
eloqdb server A: Threads = 1000
eloqdb server B: Threads = 300
eloqdb server C: Threads = 200
nkthread >= 1530 ( 3*10+1000+300+200 )
max_thread_proc >= 1010 ( 10+1000 )
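For instance, continuing the example above, the current values could be
checked and raised with kctune(1M). This is a sketch only; 2048 and 1100
are illustrative values with some headroom beyond the calculated
minimums of 1530 and 1010:
kctune nkthread
kctune max_thread_proc
kctune nkthread=2048
kctune max_thread_proc=1100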
Files / Sockets
- maxfiles_lim - hard maximum number of file descriptors per process
Each eloqdb server opens a typically small to moderate number of files,
depending on your specific eloqdb.cfg settings. This includes, for
example, the DATA and LOG volumes, the LogFile, StatFile and
SessionStatFile, and forward logs. However, each eloqdb server also
listens for incoming TCP connections on the ports [Server] Service
(and ServiceHttp), accepting up to [config] Threads concurrent socket
connections from DB clients.
The HP-UX default for maxfiles_lim is typically sufficient, unless you
have a server with a very large maximum number of concurrent clients.
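As a sketch, assuming the example server A above (Threads = 1000) and
the kctune(1M) tool, you could verify that the per-process descriptor
limit comfortably exceeds the number of client sockets plus volume and
log files (2048 is an illustrative value, not a recommendation):
kctune maxfiles_lim
kctune maxfiles_lim=2048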
Sys V IPC semaphores
- semmns - number of System V IPC system-wide semaphores
- semmni - number of System V IPC system-wide semaphore identifiers
- semmnu - maximum number of System V IPC undo structures for processes
- semume - maximum number of System V IPC undo entries per process
Each eloqdb server process uses Sys V IPC semaphores and shared memory
for communicating with the database clients running on the local system,
unless eloqdb.cfg is configured for [Server] EnableIPC = 0. For remote
database clients, only the TCP socket connection is used.
When using Sys V IPC semaphores, the eloqdb server allocates a semaphore
identifier with 2 semaphores for each concurrent client connection and
also makes use of SEM_UNDO operations for each of these client sessions.
Unless you have a large number of eloqdb servers, the HP-UX default for
semmnu will typically be sufficient. However, you may need to increase
the semmns, semmni and especially the semume setting in some cases.
Use
semmni >= SUM of "max number of clients (Threads)" per eloqdb
semmns >= SUM of 2 * "max number of clients (Threads)" per eloqdb
semmnu >= number of eloqdb servers
semume >= MAX of "max number of clients (Threads)" per eloqdb
In other words: semmni and semmns depend on the total number of client
threads across your eloqdb servers, whereas semume depends on the eloqdb
server with the largest number of client threads (and semmnu depends on
the number of eloqdb servers).
For example:
eloqdb server A: Threads = 1000
eloqdb server B: Threads = 300
eloqdb server C: Threads = 200
semmni >= 1500 ( 1000+300+200 )
semmns >= 3000 ( 2*1000+2*300+2*200 )
semmnu >= 3
semume >= 1000
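As a sketch for the example above (illustrative values only, with some
headroom beyond the calculated minimums), the current semaphore usage
could be inspected with ipcs(1) and the limits raised with kctune(1M):
# list the Sys V semaphore sets currently allocated on the system
ipcs -s
# raise the limits (semmnu is usually fine at its default)
kctune semmni=2048
kctune semmns=4096
kctune semume=1024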
Sys V IPC shared memory
- shmmni - number of System V shared memory segment identifiers in the system
- shmmax - maximum size (in bytes) for a System V shared memory segment
- shmseg - maximum number of System V shared memory segments per process
Each eloqdb server process uses Sys V IPC semaphores and shared memory
for communicating with the database clients running on the local system,
unless eloqdb.cfg is configured for [Server] EnableIPC = 0. For remote
database clients, only the TCP socket connection is used.
For EnableIPC=2 (the default) the eloqdb server allocates a single shared
memory segment for communicating with local database clients. The segment
size depends on the configured maximum number of clients, i.e. [config]
Threads.
For EnableIPC=1 the eloqdb server allocates a separate 32 KB segment for
each database client.
Unless you have a large number of eloqdb servers or use EnableIPC=1,
the HP-UX defaults for shmmni and shmseg will typically be sufficient.
With EnableIPC=2 use
shmmni >= number of eloqdb servers
With EnableIPC=1 use
shmmni >= SUM of "max number of clients (Threads)" per eloqdb
plus number of eloqdb servers
shmseg >= MAX of "max number of clients (Threads)" per eloqdb
Using EnableIPC=2 is recommended for efficiency reasons.
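As a sketch for the EnableIPC=1 case in the example above (illustrative
values with some headroom beyond the calculated minimums of 1503 and
1000; with EnableIPC=2 the defaults are usually sufficient), the current
shared memory segments could be inspected with ipcs(1) and the limits
raised with kctune(1M):
# list the Sys V shared memory segments currently allocated
ipcs -m
kctune shmmni=2048
kctune shmseg=1024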
Process memory / Address space
- maxdsiz, maxdsiz_64bit - maximum size (in bytes) of the data segment for any user process
- max_mem_window - maximum number of group-private 32-bit shared memory windows configurable by users
The memory allocation of the eloqdb servers depends on a number of
factors, including the eloqdb.cfg settings for [Config] BufferCache
and [Config] Threads, and differs between the 32-bit and 64-bit servers.
The data segment sizes of processes are limited by the maxdsiz or
maxdsiz_64bit kernel parameters. For 32-bit processes there are also
architecture-specific limitations on the address space available for
global shared memory segments.
The 32-bit eloqdb servers allocate the stack space for the OS-level
threads handling concurrent client sessions from their data segment,
and the memory for the dedicated BufferCache from the shared memory
address space.
Depending on the eloqdb server with the largest number of concurrent
client sessions (limited by [Config] Threads), you may need to increase
the maxdsiz parameter or switch to the 64-bit eloqdb program.
Depending on the number of eloqdb server instances and their settings
for [Config] BufferCache, you may also run into limits or fragmentation
issues with the global shared memory address space. This may require
switching to HP-UX memory windows or to the 64-bit eloqdb program.
The 64-bit eloqdb servers allocate not only the stack space for the
OS-level threads handling concurrent client sessions from their data
segment, but also the memory for the dedicated BufferCache. They only
use shared memory for the client communication (with EnableIPC > 0).
Depending on your eloqdb server settings for [Config] Threads as well
as [Config] BufferCache, you may need to increase the maxdsiz_64bit
parameter.
Note that the BufferCache memory is allocated during startup of the
eloqdb server process whereas the stack space for OS level threads
handling concurrent client sessions grows as these sessions connect.
Note that HP-UX on PA-RISC uses 64 KB stack size per thread,
whereas HP-UX on Itanium uses 256 KB stack size per thread.
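As a rough sketch of sizing maxdsiz_64bit with kctune(1M) (assuming the
256 KB Itanium thread stacks noted above, a hypothetical BufferCache of
512 MB and Threads = 1000; the real requirement also includes program
code, heap and other data):
# BufferCache plus worst-case thread stacks, in bytes
echo $(( 512*1024*1024 + 1000*256*1024 ))
# raise maxdsiz_64bit with generous headroom (illustrative: 2 GB)
kctune maxdsiz_64bit=2147483648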
HP-UX evp driver is required
The Eloquence database requires the HP-UX evp driver to be enabled
in the kernel configuration. The following commands may be used to
verify whether the evp driver is configured:
lsdev -C pseudo | grep evp
ls /dev/poll
If the evp driver is configured it is listed in the lsdev output and
the /dev/poll device file is present. If not, the evp driver
needs to be enabled in the kernel configuration or the database server
will not start.
Upgrading from Eloquence versions before B.08.00
When upgrading from Eloquence versions before B.08.00, please refer to
the HP-UX kernel config section of the Eloquence B.08.00 Release Notes.