[oracle@grid26 db_1]$ cat /etc/redhat-release
CentOS Linux release 8.5.2111

[oracle@grid26 db_1]$ ./runInstaller -ignorePrereqFailure -waitforcompletion -silent \
>  -responseFile $ORACLE_HOME/install/response/db_install.rsp \
>  oracle.install.option=INSTALL_DB_SWONLY \
>  UNIX_GROUP_NAME=oinstall \
>  INVENTORY_LOCATION=$ORA_INVENTORY \
>  SELECTED_LANGUAGES=en,en_GB \
>  ORACLE_HOME=$ORACLE_HOME \
>  ORACLE_BASE=$ORACLE_BASE \
>  oracle.install.db.InstallEdition=EE \
>  oracle.install.db.OSDBA_GROUP=dba \
>  oracle.install.db.OSBACKUPDBA_GROUP=dba \
>  oracle.install.db.OSDGDBA_GROUP=dba \
>  oracle.install.db.OSKMDBA_GROUP=dba \
>  oracle.install.db.OSRACDBA_GROUP=dba \
>  oracle.install.db.isRACOneInstall=false \
>  oracle.install.db.rac.serverpoolCardinality=0 \
>  oracle.install.db.config.starterdb.type=GENERAL_PURPOSE \
>  oracle.install.db.ConfigureAsContainerDB=false \
>  SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
>  DECLINE_SECURITY_UPDATES=true
/u01/app/oracle/product/19.0.0/db_1/perl/bin/perl: error while loading shared libraries: libnsl.so.1: cannot open shared object file: No such file or directory

CentOS 8 ships only libnsl.so.2; the legacy libnsl.so.1 that the installer's bundled perl links against is no longer present:

[oracle@grid26 db_1]$ ls -l /usr/lib64/libnsl*
lrwxrwxrwx. 1 root root     15 May 11  2019 /usr/lib64/libnsl.so.2 -> libnsl.so.2.0.0
-rwxr-xr-x. 1 root root 120592 May 11  2019 /usr/lib64/libnsl.so.2.0.0

Fix: install the libnsl compatibility package as root:

[root@grid26 ~]# dnf install libnsl
Last metadata expiration check: 0:34:10 ago on Fri 06 Mar 2026 12:23:51 GMT.
Dependencies resolved.
================================================================================
 Package        Architecture     Version              Repository         Size
================================================================================
Installing:
 libnsl         x86_64           2.28-164.el8         BaseOS            103 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 103 k
Installed size: 160 k
Is this ok [y/N]: y
Downloading Packages:
libnsl-2.28-164.el8.x86_64.rpm                      96 kB/s | 103 kB     00:01
--------------------------------------------------------------------------------
Total                                               94 kB/s | 103 kB     00:01
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1
  Installing       : libnsl-2.28-164.el8.x86_64                             1/1
  Running scriptlet : libnsl-2.28-164.el8.x86_64                            1/1
  Verifying        : libnsl-2.28-164.el8.x86_64                             1/1

Installed:
  libnsl-2.28-164.el8.x86_64

Complete!
The installer also needs CV_ASSUME_DISTID set, because CentOS 8 is not a distribution the 19c installer recognizes; OEL7.8 makes it behave as on Oracle Linux 7.8. Rerun as oracle:

[oracle@grid26 db_1]$ export CV_ASSUME_DISTID=OEL7.8
[oracle@grid26 db_1]$ ./runInstaller -ignorePrereqFailure -waitforcompletion -silent \
>  -responseFile $ORACLE_HOME/install/response/db_install.rsp \
>  oracle.install.option=INSTALL_DB_SWONLY \
>  UNIX_GROUP_NAME=oinstall \
>  INVENTORY_LOCATION=$ORA_INVENTORY \
>  SELECTED_LANGUAGES=en,en_GB \
>  ORACLE_HOME=$ORACLE_HOME \
>  ORACLE_BASE=$ORACLE_BASE \
>  oracle.install.db.InstallEdition=EE \
>  oracle.install.db.OSDBA_GROUP=dba \
>  oracle.install.db.OSBACKUPDBA_GROUP=dba \
>  oracle.install.db.OSDGDBA_GROUP=dba \
>  oracle.install.db.OSKMDBA_GROUP=dba \
>  oracle.install.db.OSRACDBA_GROUP=dba \
>  oracle.install.db.isRACOneInstall=false \
>  oracle.install.db.rac.serverpoolCardinality=0 \
>  oracle.install.db.config.starterdb.type=GENERAL_PURPOSE \
>  oracle.install.db.ConfigureAsContainerDB=false \
>  SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
>  DECLINE_SECURITY_UPDATES=true

When the install finishes, run root.sh as root:

[root@grid26 ~]# /u01/app/oracle/product/19.0.0/db_1/root.sh
Check /u01/app/oracle/product/19.0.0/db_1/install/root_grid26_2026-03-06_13-05-01-372106860.log for the output of root script

Then create the database with dbca as oracle:

[oracle@grid26 db_1]$ pwd
/u01/app/oracle/product/19.0.0/db_1
[oracle@grid26 db_1]$ cd bin
[oracle@grid26 bin]$ ./dbca -silent -createDatabase \
>  -templateName General_Purpose.dbc \
>  -gdbname ORCL \
>  -sid ORCL \
>  -responseFile NO_VALUE \
>  -characterSet AL32UTF8 \
>  -memoryMgmtType auto_sga \
>  -totalMemory 2048 \
>  -storageType FS \
>  -datafileDestination "/u01/app/oracle/oradata" \
>  -emConfiguration NONE \
>  -sysPassword vihaan123 \
>  -systemPassword vihaan123
[WARNING] [DBT-06208] The 'SYS' password entered does not conform to the Oracle recommended standards.
CAUSE: a.
Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b. The password entered is a keyword that Oracle does not recommend to be used as password
ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
[WARNING] [DBT-06208] The 'SYSTEM' password entered does not conform to the Oracle recommended standards.
CAUSE: a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b. The password entered is a keyword that Oracle does not recommend to be used as password
ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
Prepare for db operation
10% complete
Registering database with Oracle Restart
14% complete
Copying database files
43% complete
Creating and starting Oracle instance
45% complete
49% complete
53% complete
56% complete
62% complete
Completing Database Creation
68% complete
70% complete
71% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
/u01/app/oracle/cfgtoollogs/dbca/ORCL.
Database Information:
Global Database Name: ORCL
System Identifier (SID): ORCL
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/ORCL/ORCL.log" for further details.
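Once dbca finishes, a quick sanity check from sqlplus confirms the database is open (a sketch; assumes ORACLE_SID=ORCL is set in the oracle user's environment):

```sql
-- Connect with OS authentication: sqlplus / as sysdba
SELECT name, open_mode FROM v$database;          -- expect ORCL / READ WRITE
SELECT instance_name, status FROM v$instance;    -- expect ORCL / OPEN
```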
Friday, 6 March 2026
runInstaller error while loading shared libraries: libnsl.so.1: cannot open shared object file: No such file or directory
oracle command
--------------------------------------------------------------------------------
ORACLE NOTES CONTENTS:
- MISC ORACLE
- INSTALL - CREATE - UPGRADE DATABASE
- ORACLE NET - NET8 - SQL*NET - SQLNET
- REAL APPLICATION CLUSTER - ORACLE PARALLEL SERVER and RAW DEVICES – RAC/OPS
- SGA - ORACLE PROCESSES
- TUNING
- SQL - PL/SQL
- BACKUP - RESTORE - RECOVER
- ERRORS
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
########## MISC ORACLE ##########
--------------------------------------------------------------------------------
WEB SITES:
- Oracle documentation: http://otn.oracle.com/documentation/index.html
- 11g: http://www.oracle.com/pls/db111/homepage?framed=0
- 10g (10.1): http://otn.oracle.com/pls/db10g/portal.portal_demo3?selected=3
- 9.2: http://otn.oracle.com/pls/db92/db92.homepage
- 8.1.7: http://otn.oracle.com/pls/tahiti/tahiti.homepage
- Oracle MetaLink: http://metalink.oracle.com
- Oracle Technology Network: http://technet.oracle.com
- DBA page: http://www.oracle.com/technology/dba/index.html
- Oracle partner network: http://oraclepartnernetwork.oracle.com/
- Oracle Applications Network: http://appsnet.oracle.com
- Ixora: http://www.ixora.com.au
- Ask Tom: http://asktom.oracle.com
- Oracle Base: http://www.oracle-base.com
--------------------------------------------------------------------------------
AUTHENTICATION METHODS:
- data dict: V$PWFILE_USERS
- SYSDBA and SYSOPER reference the SYS schema
- SYSOPER: basic DB operation without ability to look at user data
- SYSDBA: SYSOPER privs plus full system privs with ADMIN option
- unix group (dba) for SYSDBA and SYSOPER privs set in:
$ORACLE_HOME/rdbms/lib/config.c (or config.s)
- connecting as SYSDBA or SYSOPER:
- example connect as SYSDBA
unix> sqlplus / as sysdba OR
unix> sqlplus "/ as sysdba" (quotes required pre-10g)
unix> sqlplus /nolog
sql> connect / as sysdba
- example connect as SYSDBA to remote database
unix> sqlplus dbauser/dbawinter@subs as sysdba
OR
unix> sqlplus /nolog
sql> connect /@ccp as sysdba
- example connect as SYSDBA to remote database
unix> sqlplus "/ as sysdba"
SQL > select * from global_name; -> dbv9
SQL > disconnect
SQL > set instance repdb -> where repdb is defined in tnsnames.ora
SQL > connect / as sysdba
SQL > select * from global_name; -> repdb
- OS authentication:
- OS authentication has priority over password authentication
- oracle verifies the unix user is in the OSDBA (dba) or OSOPER OS role/group
- local admin only
- setup:
1. create unix account in special group (dba) with special process rights
2. remote_login_passwordfile=none (this is the default)
3. connect using: connect / as sysdba OR sysoper OR connect internal
- password file - password security file:
- $ORACLE_HOME/dbs/orapwd
- remote admin allowed
- but note, the local init file will be used
- remote instance cannot be started/stopped through MTS connection
- changing passwords: alter user ... automatically updates the password file
- setup:
1. create password file:
orapwd file=<filename> password=<password> entries=<max_users>
2. remote_login_passwordfile=exclusive
3. add users to the password file:
grant sysdba to <user>; OR grant sysoper to <user>;
4. connect using: connect / as sysdba;
connect <user>/<password>@<alias> as sysdba;
connect sys/<password> as sysdba;
- oracle user set up as IDENTIFIED EXTERNALLY
- remote_os_authent = TRUE/FALSE
allow non-secure remote clients to use auto-logon accounts
- os_authent_prefix = OPS$
if oracle user accounts will be in the form: OPS$<os_username>
--------------------------------------------------------------------------------
TABLESPACE - TABLESPACES:
- data dict: DBA_TABLESPACES, DBA_DATA_FILES, DBA_TEMP_FILES, V$TEMPFILE,
DBA_FREE_SPACE, DBA_FREE_SPACE_COALESCED, FET$ (free extents), UET$ (used extents),
TS$, V$SORT_SEGMENT (by TS), V$SORT_USAGE (by user - gives sqladdr),
DBA_TABLESPACE_USAGE_METRICS, DBA_HIST_TBSPC_SPACE_USAGE
- select TABLESPACE_NAME, CONTENTS from sys.dba_tablespaces;
- Usage statistics
- DBA_TABLESPACE_USAGE_METRICS: displays tablespace size, used space and used percent (not documented)
- DBA_HIST_TBSPC_SPACE_USAGE: displays historical tablespace usage statistics.
- create TS example
create tablespace order_data
datafile '/oracle/oradata/zevqa2/order_data_01' size 1990m
reuse autoextend off extent management local uniform size
- resize datafile
- size is absolute size, not size being added
- alter database datafile '/u08/oradata/rmgntgi9/ESTAR_01.DBF' resize 10g
- autoextend
- DBA_DATA_FILES.autoextensible (DBA_TEMP_FILES.autoextensible)
- when ON, size of datafiles is extended automatically
- autoextend must be turned off at the datafile level
alter database datafile '<filename>'
autoextend off;
- CREATE TABLESPACE tbs_02
DATAFILE 'diskb:tbs_f5.dat' SIZE 500K REUSE
AUTOEXTEND ON NEXT 500K MAXSIZE 100M;
- extent management
- dictionary managed
- system TS must be dictionary managed
- locally managed tablespace: LMT
- create tablespace ... extent management local uniform size;
- see DBMS_SPACE_ADMIN package for maintenance procedures
- bitmap in each datafile to track free/used extents
- only INITIAL clause used
- uniform or autoallocate:
- uniform next extent sizes (default 1m)
- autoallocate: next determined by system (default 64k)
- fragmentation: to control frag all extents in a TS should be uniform sizes
- segment space management
- how free and used space is managed
- manual or auto
- manual: free lists used. PCTUSED, FREELISTS, and FREELISTS GROUPS must be managed
- auto: oracle manages free space within segments with bitmaps
- Only permanent, locally managed tablespaces can specify automatic segment-space management
- CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL uniform size
SEGMENT SPACE MANAGEMENT AUTO;
- switching between dictionary and locally managed:
- dbms_space_admin.tablespace_migrate_from_local()
- dbms_space_admin.tablespace_migrate_to_local()
- transportable tablespaces:
- move TS from source DB to target DB
- comprised of datafiles and an export of data dict info
- steps:
1. ensure compatible >= 8.1
2. dbms_tts.transport_set_check
3. put source TS in read only mode
4. export with transport_tablespace=y, tablespaces=<ts_list>
5. copy datafiles to target
6. put source TS back in read/write
7. import with transport_tablespace=y, datafiles=<datafile_list>
8. put target TS in read/write
- database-wide default tablespace
ALTER DATABASE DEFAULT TABLESPACE users;
- coalesce
- only dictionary managed tablespaces need to be coalesced. Locally managed do not
because bitmaps automatically track adjacent free space
- 8.x: alter tablespace <ts> coalesce;
- 7.x: alter session set events 'immediate trace name coalesce level <ts#>';
- modify TS storage example
alter tablespace livindex
default storage (maxextents 2147483645)
- tablespace renaming – rename (10g)
ALTER TABLESPACE temp RENAME TO temp01;
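The usage views listed above can be queried like this (10g and later; a sketch — the metrics views report sizes in blocks, so the join to DBA_TABLESPACES converts them to MB):

```sql
-- Current size, used space and percent used per tablespace.
SELECT m.tablespace_name,
       ROUND(m.used_space      * t.block_size / 1048576) AS used_mb,
       ROUND(m.tablespace_size * t.block_size / 1048576) AS size_mb,
       ROUND(m.used_percent, 1)                          AS pct_used
FROM   dba_tablespace_usage_metrics m
JOIN   dba_tablespaces t ON t.tablespace_name = m.tablespace_name
ORDER  BY m.used_percent DESC;
```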
--------------------------------------------------------------------------------
TEMPORARY TABLESPACE:
- check free space
select tablespace_name, current_users, total_extents,
total_blocks, used_blocks, free_blocks
from v$sort_segment
- used to improve sorts and store temporary segments
- to see temporary datafiles use V$TEMPFILE or DBA_TEMP_FILES
- they do NOT appear in DBA_DATA_FILES
- assigning a default temporary tablespace:
alter database default temporary tablespace <ts>;
- allocates extents as needed, does not deallocate. Extents marked as FREE for
next sort operation. Eliminates allocate/deallocate
- creating
- locally managed tablespaces are ideal for temporary tablespaces
create temporary tablespace temp
tempfile '/u02/oracle/data/temp01.dbf' size 20m reuse
autoextend off extent management local uniform size 16m
- dictionary managed TS
create tablespace temp datafile '/u02/oracle/data/sort01.dbf' size 50m
default storage (initial 2m next 2m minextents 1 pctincrease 0)
extent management dictionary
temporary
- adding a file to an existing temp TS
alter tablespace temp
add tempfile '/oracle/oradata/zevqa1/temp02.dbf' size 500m reuse
autoextend off
- resize
alter database tempfile '/oracle/oradata/zevqa1/temp02.dbf' resize 750m
- dropping a tempfile (and removing the file from the OS): note the TS remains
alter database tempfile '/oracle/oradata/zevqa1/temp02.dbf'
drop including datafiles
- temporary tablespace groups (10g)
- benefits
- spread temporary tablespace input/output across multiple tablespaces eliminating bottleneck
- even more of an advantage when you are performing operations in parallel
- creating
CREATE TEMPORARY TABLESPACE temp01 TEMPFILE '/oradata/db01/temp01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M GROUP temp_group01;
CREATE TEMPORARY TABLESPACE temp02 TEMPFILE '/oradata/db02/temp02.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M GROUP temp_group01;
Having created the temp tablespaces, make the group the database default temporary tablespace
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_group01;
--------------------------------------------------------------------------------
DROP COLUMN:
- data dict: DBA_UNUSED_COL_TABS
- for parent keys or columns with check constraints, CASCADE CONSTRAINTS must be used
- unused
- specifying unused does not actually remove the column from each row in the table
- unused columns are treated as if they were dropped
- you can add to the table a new column with the same name as an unused column
- when a column is dropped, all columns marked UNUSED in that table are dropped
- drop unused: to remove all columns currently marked as unused
- statements
- alter table <table> drop column <column>;
- alter table <table> set unused column <column>;
- alter table <table> drop unused columns;
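A minimal worked example of the statements above (table and column names are illustrative):

```sql
CREATE TABLE emp_demo (empno NUMBER, ename VARCHAR2(30), notes VARCHAR2(200));

-- Metadata-only: instant, the column is simply hidden.
ALTER TABLE emp_demo SET UNUSED COLUMN notes;
-- SELECT * FROM dba_unused_col_tabs;   -- shows 1 unused column for EMP_DEMO

-- Physically remove all unused columns later, during a quiet period.
ALTER TABLE emp_demo DROP UNUSED COLUMNS;

-- Or drop directly in one step (rewrites every row immediately).
ALTER TABLE emp_demo DROP COLUMN ename;
```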
--------------------------------------------------------------------------------
DROP DATAFILE – DATAFILE DROP:
- drop datafiles (10g)
- Restrictions on Dropping Files, the datafile or tempfile:
1. Must be empty (not in DBA_EXTENTS)
2. Cannot be the first file that was created in the tablespace
In such cases, drop the tablespace instead
3. Cannot be in a read-only tablespace
- ALTER TABLESPACE tbs_03 DROP DATAFILE 'tbs_f04.dbf';
- query to find unused datafiles:
select t.name tablespace_name, f.name file_name, bytes/1048576
from v$datafile f, v$tablespace t
where f.ts# = t.ts#
and t.name not like 'NX%'
and f.file# not in (select distinct file_id from dba_extents)
--------------------------------------------------------------------------------
SYSAUX TABLESPACE:
- required in 10g
- only the SYSAUX tablespace datafile location is specified by the DBA; Oracle sets the remaining properties:
- online – permanent - read write - extent management local - segment space management auto
--------------------------------------------------------------------------------
TEMPORARY SEGMENTS - TEMPORARY TABLES:
- data dict: DBA_TABLES.temporary, DBA_SEGMENTS.segment_type,
V$TEMPFILE, DBA_TEMP_FILES
- select * from dba_segments where segment_type = 'TEMPORARY';
- description
- other sessions can see table definition but not data
- data truncated when transaction/session (commit/rollback/exit) ends
- sessions get bound to temporary tables with the first insert into the table
- segments allocated to table at insert time not create time like normal tables
- creating
- session specific:
CREATE GLOBAL TEMPORARY TABLE <table> (...) ON COMMIT PRESERVE ROWS;
- transaction specific:
CREATE GLOBAL TEMPORARY TABLE <table> (...) ON COMMIT DELETE ROWS;
- example:
CREATE GLOBAL TEMPORARY TABLE Profile_values_temp
(Profile_option_name VARCHAR(60) NOT NULL,
Profile_option_id NUMBER(4) NOT NULL,
CONSTRAINT Profile_values_temp_pk PRIMARY KEY (Profile_option_id))
ON COMMIT PRESERVE ROWS ORGANIZATION INDEX;
INSERT INTO Profile_values_temp
(Profile_option_name, Profile_option_id)
SELECT Profile_option_name, Profile_option_id
FROM Profile_values_view;
COMMIT;
--------------------------------------------------------------------------------
ORACLE E-BUSINESS SUITE - ORACLE APPLICATIONS - ORACLE FINANCIALS:
- data dict: FA_...
- releases: 11i (11.5.5), 11, 10.7
- online documentation
- http://download-east.oracle.com/docs/cd/B12190_01/html/docset.html
- http://otn.oracle.com/documentation/index.html
- http://www.oracle.com/appsnet/products/documentation/index.html
- http://www.oracle.com/appsnet
- Oracle strongly recommends NEVER using SQL*Plus to modify 11i data
- AIW (Applications Implementation Wizard):
- coordinates implementation of any/all applications products
- OEM has Oracle Management Pack for Applications
- concurrent manager
- must be running for each financials DB
- starting: startmgr sysmgr="/" mgrname=""
- stopping: CONCSUB / SYSADMIN 'System Administrator' SYSADMIN \
CONCURRENT FND ABORT
--------------------------------------------------------------------------------
MATERIALIZED VIEWS - SNAPSHOTS:
- $ORACLE_HOME/rdbms/admin/catsnap.sql & dbmssnap.sql
- dbms_mview, dbms_olap, dbms_job
- data dict:
DBA_MVIEWS, DBA_SNAPSHOTS, DBA_SNAPSHOT_LOGS, DBA_REGISTERED_SNAPSHOTS,
DBA_REFRESH, DBA_RGROUP, DBA_JOBS
- errors to alert log and trace files with ORA-12012
- for automatic mview refresh job_queue_processes must be > 0
- snapshots are automatically registered at the master site:
use sys.dba_registered_snapshots at master site to see registered snapshots
- grant create materialized view to <user>;
- mview types: primary key, rowid, subquery
- refresh types:
- fast (incremental):
- needs mview log
- if mview is based on multiple tables, each table should have a mview log
- exec dbms_mview.refresh('<mview>', 'F');
- complete:
- fkeys referencing the mview must be disabled before refresh
- if snapshot log: exec dbms_mview.purge_log('<master_table>', 1);
- exec dbms_mview.refresh('<mview>', 'C');
- snapshot logs - materialized view log:
- always create snap log before snapshot
- mlog$_ prefixes
- each log associated with a single master table
- each log can support multiple snapshots on the master table
- export/import:
snapshots and snap logs are exported with schema name explicitly set in DDL,
therefore imports cannot be performed into different schemas.
Even fromuser/touser will fail.
- primary key mview:
- mview log:
create materialized view log on <owner>.<table>
tablespace snap_log
storage (initial 500k next 500k pctincrease 0)
with primary key;
- mview:
create materialized view <owner>.<mview>
tablespace snap_data
storage (initial 500k next 500k pctincrease 0)
using index
tablespace snap_index
storage (initial 500k next 500k pctincrease 0)
refresh fast on demand as
select * from <owner>.<table>@<dblink>;
- rowid mview:
- mview log:
create materialized view log on customer
tablespace snap_log
nologging
storage (initial 50k next 50k pctincrease 0) with rowid;
- mview
create materialized view customer
tablespace snap_data
storage (initial 100k next 100k pctincrease 0)
refresh fast on demand with rowid as
select * from customer@proddb;
- refresh groups:
- mviews in group are refreshed at the same time to ensure
transactionally consistent data
- mview group refreshes done via delete/re-insert (instead of truncate/re-insert)
so other users will not see a gap where the mview is empty
- a refresh group should always contain at least 2 mviews: Oracle optimizes a
single-mview group refresh to truncate/re-insert, so a dummy mview is added
to preserve the delete/re-insert behavior and avoid the "no data found" gap during refresh.
- exec dbms_refresh.make(name => 'events_group',
list => 'events_mv,events_dual_mv',
next_date => sysdate,
interval => 'sysdate + (30/1440)',
implicit_destroy => FALSE);
- removing a refresh group
exec dbms_refresh.destroy('<group_name>');
- steps for rebuilds after schema changes:
1. TARGET: find grantees' granted table privileges on <mview>
2. TARGET: drop materialized view <mview>
3. SOURCE: drop materialized view log on <table>
4. SOURCE: create materialized view log on <table>
5. TARGET: create materialized view <mview>
6. TARGET: grant select on <mview> to <grantee>
- updateable snapshots:
CREATE SNAPSHOT <snapshot> FOR UPDATE AS SELECT * FROM <table>@<dblink>;
- query rewrite:
- disabled by default, must enable with ... enable query rewrite ...
- init parameters:
query_rewrite_enabled = true
query_rewrite_integrity = trusted
job_queue_processes >= 1
job_queue_interval = ?
optimizer_mode = choose, first_rows, or all_rows
compatible >= 8.1
- grants: grant query rewrite to <user>; OR grant global query rewrite to <user>;
- dbms_olap: mview analysis package
- estimate mview size
- recommend mviews
- report on mview usage
- dimensions: hierarchical info about data
- Oracle7 to Oracle8 problem: ORA-12028
create snapshots with 'refresh complete with rowid'
--------------------------------------------------------------------------------
VIEWS:
- data dict: DBA_VIEWS, DBA_DEPENDENCIES, DBA_UPDATABLE_COLUMNS
- views are logical tables based on one or more tables or views
- views contains no data
- the tables upon which views are based are called base tables
- privileges: work like stored procedure privileges
- only the owner of the view needs access to the underlying objects
- only objects owned by or granted explicitly to the owner are accessible
- users of the view only need to be granted privs on the view
- creating
- CREATE VIEW clerk
(id_number, person, department, position)
AS SELECT empno, ename, deptno, job FROM emp WHERE job = 'CLERK';
- CREATE VIEW ed
AS SELECT e.empno, e.ename, d.deptno, d.loc
FROM emp e, dept d WHERE e.deptno = d.deptno ;
- updatable views:
- query USER_UPDATABLE_COLUMNS for updatable columns in views
- columns in updatable views must be key preserved:
if every key of the table can also be a key of the result of the join.
- object view: view of a user-defined type, where each row contains objects,
each object with a unique object identifier.
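The key-preserved rule can be sketched with the emp/dept tables used in the examples above (assumes the usual scott demo schema):

```sql
-- EMP rows are key-preserved in this join (each EMP row appears at most
-- once, since deptno is DEPT's key), so EMP columns are updatable;
-- DEPT columns are not.
CREATE VIEW emp_dept AS
  SELECT e.empno, e.ename, e.sal, d.deptno, d.dname
  FROM   emp e, dept d
  WHERE  e.deptno = d.deptno;

SELECT column_name, updatable
FROM   user_updatable_columns
WHERE  table_name = 'EMP_DEPT';

UPDATE emp_dept SET sal = sal * 1.1 WHERE empno = 7788;  -- allowed
-- UPDATE emp_dept SET dname = 'OPS';  -- fails: DEPT is not key-preserved
```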
--------------------------------------------------------------------------------
DBMS_SCHEDULER:
- data dict: DBA_SCHEDULER_JOBS and the other DBA_SCHEDULER_* views
- Modifying a schedule, use set_attribute procedure
BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE (
name => 'WEEKEND_WINDOW',
attribute => 'REPEAT_INTERVAL',
value => 'freq=daily;byday=SAT,SUN;byhour=12;byminute=0;bysecond=0');
END;
BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE (
name => 'WEEKEND_WINDOW',
attribute => 'DURATION',
value => interval '5' hour);
END;
--------------------------------------------------------------------------------
DBMS_JOB - JOB QUEUES – DBA_JOBS:
- $ORACLE_HOME/rdbms/admin/dbmsjob.sql
- data dict: DBA_JOBS, DBA_JOBS_RUNNING
- need at least one SNPn background process
- job_queue_processes: # of SNPn processes
- errors to alert log and trace files with ORA-12012
- broken jobs: marked broken after 16 failed automatic retries
- SNPn will not execute jobs when DB is in restricted session mode
- DB links used in jobs must contain user & password, anonymous links will fail
- check jobs:
col job format 9990
col d1 format a20 trunc heading "last"
col d2 format a20 trunc heading "next"
col what format a25 trunc
col FAILURES format 90 heading "FA"
select JOB, to_char(LAST_DATE, 'DD-MON-YYYY HH24:MI:SS') d1,
to_char(NEXT_DATE, 'DD-MON-YYYY HH24:MI:SS') d2,
what, broken, failures
from dba_jobs;
- fixing broken jobs:
1. select job, schema_user, what from dba_jobs where broken = 'Y';
2. fix the problem
3. run the job: exec dbms_job.run(<job#>);
- submit a job:
declare
jobno number;
begin
dbms_job.submit(jobno, 'statspack.snap;', trunc(sysdate+1/24,'HH'),
'trunc(SYSDATE+1/24,''HH'')', TRUE);
commit;
end;
- change the interval: exec dbms_job.interval(<job#>, 'sysdate+(1/48)');
- removing a job: exec dbms_job.remove(<job#>);
--------------------------------------------------------------------------------
DATE - DATE FORMAT:
- SELECT SYSTIMESTAMP AS timestamp FROM DUAL;
- modify using: alter session set nls_date_format = '<format>'
- default:
- explicitly set with NLS_DATE_FORMAT (nls_date_format=YYYY-MM-DD)
- implicitly set with NLS_TERRITORY
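A quick illustration of the session-level setting (the format mask is just an example):

```sql
ALTER SESSION SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

SELECT sysdate FROM dual;   -- now rendered with the mask above

-- Verify the current session setting:
SELECT value FROM nls_session_parameters
WHERE  parameter = 'NLS_DATE_FORMAT';
```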
--------------------------------------------------------------------------------
FIXED_DATE:
- Constant date that SYSDATE will always return instead of the current date
- FIXED_DATE parameter can be set in INIT.ORA or can be set dynamically using the command
ALTER SYSTEM SET FIXED_DATE = 'DD-MON-YY';
OR
ALTER SYSTEM SET FIXED_DATE = 'DD-MON-YY HH:MI';
- Unset FIXED_DATE by using the command:
ALTER SYSTEM SET FIXED_DATE=NONE;
--------------------------------------------------------------------------------
PRODUCT USER PROFILES:
- as SYSTEM: $ORACLE_HOME/sqlplus/admin/pupbld.sql
- to see profiles: select * from system.PRODUCT_PROFILE;
- when a disabled command is executed, the following displays:
invalid command:
- automate disconnect of idle sessions
- must be done through profiles
- either modify the default profile OR create a profile and attach to users
- modify default profile
1. ALTER PROFILE default LIMIT IDLE_TIME <minutes>;
- create new profile and attach to user
1. alter system set resource_limit = true; OR set parameter in init file and restart
2. create profile <profile> limit idle_time <minutes>;
3. alter user <user> profile <profile>;
--------------------------------------------------------------------------------
PASSWORD TRICKERY:
- alter user <user> identified by values '<password_hash>';
--------------------------------------------------------------------------------
ADVANCED PASSWORD MANAGEMENT:
- $ORACLE_HOME/rdbms/admin/utlpwdmg.sql
--------------------------------------------------------------------------------
PASSWORD SECURITY - PROFILES:
- data dict: DBA_PROFILES, DBA_USERS, USER_PASSWORD_LIMITS
- profiles: used to manage password security
- utlpwdmg.sql: set default profile
- profile settings:
- account locking
- password aging
- password expiration
- password history
- custom password authentication
- account locking
- DBA_USERS.account_status, lock_date, expiry_date
- alter user <user> account lock;
- alter user <user> account unlock;
- SYS must own custom password verify functions
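A sketch profile tying the settings above together (the limits are illustrative; verify_function is the verify function created by utlpwdmg.sql, whose name varies by version):

```sql
CREATE PROFILE secure_users LIMIT
  FAILED_LOGIN_ATTEMPTS  5        -- lock the account after 5 bad passwords
  PASSWORD_LOCK_TIME     1/24     -- keep it locked 1 hour (times are in days)
  PASSWORD_LIFE_TIME     90       -- force a change every 90 days
  PASSWORD_GRACE_TIME    7        -- warn for 7 days before expiring
  PASSWORD_REUSE_MAX     5        -- remember the last 5 passwords
  PASSWORD_VERIFY_FUNCTION verify_function;

-- Attach it to a user:
ALTER USER scott PROFILE secure_users;
```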
--------------------------------------------------------------------------------
ORACLE COMMENTS:
- data dict: SYS.DBA_TAB_COMMENTS, SYS.DBA_COL_COMMENTS
- comment on table <owner>.<table> is '<text>';
- comment on column <owner>.<table>.<column> is '<text>';
--------------------------------------------------------------------------------
PRIVILEGES - ROLES:
- data dict: DBA_ROLES
- default role:
- role disabled for a user if SYS.DBA_ROLE_PRIVS.default_role = NO
- alter user <user> default role all;
- alter user <user> default role <role>;
--------------------------------------------------------------------------------
PARTITIONING - PARTITIONS:
- data dict:
DBA_PART_TABLES, DBA_PART_INDEXES, DBA_PART_KEY_COLUMNS, DBA_TAB_PARTITIONS,
DBA_IND_PARTITIONS, DBA_PART_COL_STATISTICS, DBA_TAB_COL_STATISTICS,
DBA_PART_HISTOGRAMS
- partition key
- Consists of an ordered list of 1 to 16 columns
- Cannot contain a LEVEL, ROWID, or MLSLABEL pseudocolumn or a column of type ROWID
- Can contain columns that are NULLable
- maxvalue needed for null partition keys
- methods:
- 8i
- Range
- Hash: cannot split, drop, or merge hash partitions
- Composite: partition by range, then subpartition by hash; segments are created only at the subpartition level
- 9i
- Range
- Hash
- List
- Composite range-hash
- Composite range-list
- storage parameters and TS's may be different
- partition independence: availability status is independent
- must use cost based optimizer to benefit from partition pruning
- exporting/importing a single partition:
exp/imp with tables=<owner>.<table>:<partition>
- SQL:
- select count(*) from <owner>.<table>
partition (<partition>);
- alter table <owner>.<table>
truncate partition <partition> drop storage;
- delete from <owner>.<table>
partition (<partition>);
- local indexes:
- defined
- equipartitioned with the underlying table, meaning that it is partitioned on the
same columns as the underlying table, creates the same number of partitions or
subpartitions, and gives them the same partition bounds as corresponding partitions of
the underlying table
- index partitioning is maintained automatically when partitions are affected by maintenance
activity. This ensures that the index remains equipartitioned with the underlying table.
- when new partitions are added to the parent table, new local index
partitions will also be created in the TABLE's TS with the table partition's
name. Index partitions should be rebuilt into their own TS and renamed.
- rebuild:
alter index <index> rebuild partition <partition>
tablespace <ts> storage (...);
- rename:
alter index <index> rename partition <partition> to <new_name>;
- store in (<ts_list>): clause on create index ... local to name tablespaces for local index partitions
- initial extents on new partitions should be small because:
1. the initial extents for the table and local index will be created in the
table's TS(?). The index will need to be moved afterward.
2. the table and index's next extent can be preallocated to a named instance
in an OPS/RAC environment:
alter table <table>
modify partition <partition>
allocate extent (size <n>M instance 1 datafile '...');
- for each LOB an additional equipartitioned data and index segment is created
- enable row movement
- updates to partition keys supported with ... enable row movement ... clause
- equipartitioning: 2 or more objects (table & index) share partition def.
- availability status is tied together
- prefixed: if leftmost index partition key same as leftmost table partition key
- dbms_utility.analyze_part_object()
- global indexes
- the following operations on partitions mark all global indexes as unusable:
- ADD (HASH)
- COALESCE (HASH)
- DROP
- EXCHANGE
- MERGE
- MOVE
- SPLIT
- TRUNCATE
- UPDATE GLOBAL INDEXES
- can be maintained by appending the clause UPDATE GLOBAL INDEXES
to the SQL statements for the operation
- The two advantages to maintaining global indexes
- The index remains available and online throughout the operation.
- The index doesn't have to be rebuilt after the operation.
- migrate table from non-partitioned to partitioned use dbms_redefinition
- sample commands
- add partition
ALTER TABLE ppc_mast.account_image
ADD PARTITION PPC_ACCTIMA_TBL_20071201
VALUES LESS THAN (20071201)
STORAGE( FREELISTS 8) PCTFREE 5 PCTUSED 90 TABLESPACE PPC_AI_DATA
- rename local index partition after automatic creation
ALTER INDEX ppc_mast.i_acctima_carrier
RENAME PARTITION PPC_ACCTIMA_TBL_20071201 TO PPC_ACCTIMA_NDX_20071201
- rebuild local index
ALTER INDEX ppc_mast.i_acctima_carrier
REBUILD PARTITION PPC_ACCTIMA_NDX_20071201
STORAGE( FREELISTS 8) PCTFREE 10 TABLESPACE PPC_AI_NDX NOLOGGING
- drop partition
ALTER TABLE PPC_MAST.ACCOUNT_IMAGE DROP PARTITION PPC_ACCTIMA_TBL_20070701;
--------------------------------------------------------------------------------
PARALLEL QUERY - PARALLEL DML - PDML - PARALLEL EXECUTION:
- data dict: V$PX_PROCESS, V$PX_SYSSTAT, V$PQ_SYSSTAT, V$PQ_SLAVE,
DBA_TABLES.degree, DBA_INDEXES.degree
- init:
- parallel_min_servers = n ora_p0NN_SID servers created at startup
- parallel_max_servers = n
- parallel_server_idle_time = n servers terminated if idle this long
- parallel_threads_per_cpu default recommended
- parallel_min_percent = n if n servers requested not available, do
not run query and return an error
- parallel DML (PDML):
- must explicitly enable pdml: alter session enable parallel DML;
- direct path insert (direct load insert): use /*+ append */ hint
- inserted data is placed above the high water mark
- triggers must be disabled during parallel DML
- parallel statement MUST be committed before issuing another statement
- set transaction use rollback segment ... not valid with pdml
- updates and deletes not parallelized on non-partitioned tables
- hints override parallel defaults set in create table statement
delete /*+ parallel(<table_alias>, <degree>) */ from <table_name>
...
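- end-to-end PDML sketch (table names are hypothetical):

```sql
-- parallel DML must be enabled per session
alter session enable parallel dml;

-- direct path insert: /*+ append */ loads above the high water mark
insert /*+ append parallel(sales_hist, 4) */ into sales_hist
select * from sales where sale_date < date '2003-01-01';

-- must commit before issuing another statement against sales_hist
commit;
```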
- create table emp1_dept parallel 5
as select * from scott.emp where deptno=10;
- ALTER TABLE emp PARALLEL 4; note, the 4 is optional, oracle can set degree
- space management implications: objects created in parallel will have at least
as many extents as parallel execution servers used.
- parallelization of functions/packages: must be created with
create function ... parallel_enable ...
- PQ performs direct reads so blocks not written to buffer cache
--------------------------------------------------------------------------------
DATATYPES:
- character
- char: fixed length character string
- nchar: fixed length national character set character string
- nvarchar2: variable-length national character set character string
- varchar2: variable-length character string
- max length = 4000
- varchar: synonymous with varchar2; varchar2 is the recommended form
- number
- number(9,2):
- 9 is the precision or total digits
- 2 is the scale or digits to the right of the decimal
- number(9): integer
- number: floating point number with decimal precision of 38
- float: also floating point number with decimal precision of 38
- float(9): floating point number with decimal precision of 9
- long: character string up to 2G
- date
- translate a number into words. Handy for writing out a check for an amount in words
- select to_char(to_date(112004,'J'), 'JSP') from dual;
- lob
- bfile
- blob
- clob
- nclob
- raw: data that is not to be interpreted by oracle
- raw
- long raw
- rowid
- restricted: Oracle7 rowids
- extended: Oracle8 and above row ids (extended for partitioning)
- urowid: universal rowids
- logical rowids for objects that are not physical or permanent or were not
generated by oracle
- SQL statements that create tables and clusters can also use ANSI datatypes and
datatypes from IBM's products SQL/DS and DB2
- user defined datatypes
- object types
- REFs
- varrays
- nested tables
--------------------------------------------------------------------------------
LONG DATATYPE:
- character string up to 2G
- inserting long datatype into a varchar2
- setup
create table long_test (trig_name varchar2(2000), trig_body_str varchar2(2000));
- procedure to insert long to varchar2:
declare
tname varchar2(2000);
tbody varchar2(2000);
cursor aaa_cur is
select trigger_name, trigger_body
from sys.dba_triggers
where owner = 'SBLIVE';
begin
for aaa in aaa_cur loop
tbody := substr(aaa.trigger_body, 1, 1999);
insert into long_test
(trig_name, trig_body_str)
values
(aaa.trigger_name, tbody);
end loop;
end;
--------------------------------------------------------------------------------
LOB DATATYPE - LOBS - LARGE OBJECTS:
- data dict: DBA_LOBS
- $ORACLE_HOME/rdbms/admin/dbmslob.sql
- DBMS_LOB
- mechanism to manipulate LOBs
- work using LOB locators
- max 4G (except BFILE which can be any size)
- stored inline if less than 4,000 bytes (unless DISABLE STORAGE IN ROW is set)
- components:
- locator: stored inline in the segment
- selecting from a LOB returns the LOB locator
- value: stored in another segment if over 4,000 bytes or externally
- chunks: smallest unit of storage. contiguous blocks bound together
- buffer cache use turned on/off with the cache/nocache keyword
- types:
1. BLOB: binary LOB
2. CLOB: character LOB
3. NCLOB: NLS Character LOB
4. BFILE: external binary file
- BFILE's are read only
- directory object: pathname to locate and read the object
create or replace directory <dir_name> as '<os_path>';
grant read on directory <dir_name> to <user>;
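- BFILE usage sketch with hypothetical names (doc_dir, doc_files), using the built-in bfilename():

```sql
create or replace directory doc_dir as '/u01/app/docs';
grant read on directory doc_dir to scott;

-- the BFILE column stores only a locator; the file stays on the OS
create table doc_files (doc_id number, doc bfile);
insert into doc_files values (1, bfilename('DOC_DIR', 'readme.txt'));
```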
- CREATE TABLE lob_table
(col1 char(20), col2 clob, col3 blob)
TABLESPACE table_ts
STORAGE (initial 10m next 5m pctincrease 0)
LOB (col2, col3) STORE AS
(TABLESPACE lob_ts
DISABLE STORAGE IN ROW
STORAGE (initial 10m next 5m pctincrease 0)
CHUNK 4096
NOCACHE
PCTVERSION 10)
- query:
- example shows piece wise fetch
DECLARE
lobloc CLOB;
buffer VARCHAR2(32000);
amount NUMBER := 10;
amount_in_buffer NUMBER;
offset NUMBER := 1;
BEGIN
-- Initialize buffer with data to be inserted
SELECT document INTO lobloc -- get LOB handle
FROM lob_store
WHERE lob_id = 2;
dbms_lob.read(lobloc,amount,offset,buffer);
-- using length built-in function to find the length of the buffer
amount_in_buffer := length(buffer);
dbms_output.put_line(buffer);
dbms_output.put_line(to_char(amount_in_buffer));
COMMIT;
END;
- write to LOB:
- dbms_lob.write
(lob_loc IN OUT CLOB,
amount IN BINARY_INTEGER,
offset IN INTEGER,
buffer IN VARCHAR2);
- example:
DECLARE
lobloc CLOB;
buffer VARCHAR2(32000);
amount NUMBER := 20;
offset NUMBER := 1;
BEGIN
-- Initialize buffer with data to be inserted
buffer := 'abcdefghijklmnopqrstuvwxyz';
amount := length(buffer);
-- get LOB handle
SELECT document
INTO lobloc -- get LOB handle
FROM lob_store
WHERE lob_id = 1 FOR UPDATE;
-- write
dbms_lob.write(lobloc,amount,1,buffer);
COMMIT;
END;
--------------------------------------------------------------------------------
USER DEFINED DATATYPES - UDD - OBJECT TYPES - COLLECTION TYPES:
- $ORACLE_HOME/rdbms/admin/catadt.sql (called from catalog.sql)
- data dict: DBA_TYPES, DBA_TYPE_ATTRS, DBA_SOURCE, DBA_COLL_TYPES, DBA_VARRAYS,
DBA_TYPE_METHODS, DBA_METHOD_PARAMS, DBA_METHOD_RESULTS
- object types:
- comprised of attributes and methods
- type definition stored in:
- select type_name, attr_name, attr_type_name
from dba_type_attrs
where type_name = '';
- DBA_SOURCE.text with DBA_SOURCE.type = 'TYPE'
- access to remote or distributed objects is not allowed (8.1.5 PL/SQL p.23)
- DML: user_info is a column in site_users of UDD type USER_TY
- select u.user_info.USER_NAME
from site_users u
where user_id = xx;
- update site_users u
set u.user_info.password = '<value>'
where u.user_id = xx;
- DML: updates executed inside a PL/SQL block using variable for object type:
DECLARE
div_id number;
div_info_v division_ty;
BEGIN
BEGIN
select d.division_id, d.division_info
into div_id, div_info_v
from smcmain.division d
where d.division_id = 354727;
div_info_v.d_alias := 'Xaxay1';
update division
set DIVISION_INFO = div_info_v
where division_id = div_id;
END;
END;
- like a package, object types have a specification and body:
create type <type_name> as object (
<attr_name> <datatype>,
<attr_name> <datatype>,
member function <func1> (x number) return number,
member function <func2> (x number) return <datatype>);
create type body <type_name> as
member function <func1> (x number) return number is
begin
...
end;
member function <func2> (x number) return <datatype> is
begin
...
end;
end;
- method types: member, static, comparison
- constructor: method that makes a new object. Its name is the object type
ex. external_person('John Smith', '1-800-555-1212');
- comparison methods: map and order
- REF: pointer to object row
- object table (row objects): relational view of attributes in object types
- create table <table_name> of <object_type>;
- manipulating:
- single column: select value(p) from <table_name> p
where p.<attr_name> = 'John Smith';
- multi-column: insert into <table_name> values
('John Smith', '1-800-555-1212');
- updating: update <table_name> t set t.<attr_name> = 'abcd';
- deleting: delete from <table_name> t where t.<attr_name> = 0;
- collections:
- comprised of varrays and nested tables
- create type <type_name> as varray(10) of number(12,2);
- elements in varray (sic_codes is varray inside object type specialty_info)
declare
div_id number;
varray_v ID_LIST_TY;
begin
select ds.division_id, ds.specialty_info.sic_codes
into div_id, varray_v
from division_specialty ds
where ds.division_id = 332792;
dbms_output.put_line('div id = ' || div_id);
dbms_output.put_line('count = ' || varray_v.COUNT);
exception
when COLLECTION_IS_NULL then null;
when SUBSCRIPT_BEYOND_COUNT then null;
when others then null;
end;
/
--------------------------------------------------------------------------------
LIBRARY OBJECTS - EXTERNAL PROCEDURES:
- allows PL/SQL to call external C programs
- create or replace library <library_name> as '<path_to_shared_library>';
- to execute: PL/SQL block calls the library
- flow:
1. PL/SQL block calls the library
2. the request is passed to the listener
3. extproc is spawned by the listener. extproc is linked to the
listener by extproc_connection_data defined in tnsnames.ora
4. extproc then loads the program, executes it, and returns any data
- tnsnames: extproc_connection_data must be a service name
--------------------------------------------------------------------------------
AQ - ADVANCED QUEUING:
- $ORACLE_HOME/rdbms/admin/catqueue.sql
- $ORACLE_HOME/rdbms/admin/dbmsaqad.sql
- data dict:
- sys: sys.AQ$, DBA_QUEUES, DBA_QUEUE_TABLES
- system: AQ$_QUEUE_TABLES, AQ$_QUEUES, AQ$_SCHEDULES
- check queues:
col owner format a5 trunc
col name format a20 trunc
col queue_table format a30 trunc
col enqueue_enabled format a7 trunc
col dequeue_enabled format a7 trunc
select OWNER, NAME, QUEUE_TABLE, ENQUEUE_ENABLED, DEQUEUE_ENABLED
from sys.dba_queues
where owner = 'INTR'
and QUEUE_TYPE = 'NORMAL_QUEUE'
order by name;
- select from queue table:
select ...
from AQ$<queue_table_name>;
- drop queue problems:
SQL> alter session set events '10851 trace name context forever, level 2';
- expired messages: rows whose queue is the exception queue AQ$_<queue_table>_E
are expired and can be deleted
select queue
from aq$<queue_table>;
- time manager: handles delayed dequeing and message expiration
- aq_tm_processes = 1
- DBMS_AQADM: creation and management of queues tables and queues
- access granted with AQ_ADMINISTRATOR_ROLE
- grant_type_access(<user>); must be executed 1st to grant AQ privs
- dbms_aqadm.alter_queue
- to queue users: grant_queue_privilege('ALL', '<queue_name>', '<grantee>')
- DBMS_AQ: contains procedures to read and write messages to queues
- access granted with AQ_USER_ROLE
- steps:
1. grant AQ_ADMINISTRATOR_ROLE to <queue_owner>; * for queue owners
2. exec grant_type_access(<queue_owner>); * for queue owners
3. grant AQ_USER_ROLE to <queue_user>;
4. connect as <queue_owner>
5. create queue table:
exec dbms_aqadm.create_queue_table
(queue_table => '<queue_table>',
queue_payload_type => 'RAW',
storage_clause => 'tablespace bos_queue
storage(initial 10M
next 5M
pctincrease 0)');
6. create queue:
exec dbms_aqadm.create_queue('<queue_name>','<queue_table>');
7. start queue:
exec dbms_aqadm.start_queue('<queue_name>');
8. exec dbms_aqadm.grant_queue_privilege('ALL', '<queue_name>', '<queue_user>')
9. enqueue:
DECLARE
enqueue_options dbms_aq.enqueue_options_t;
message_properties dbms_aq.message_properties_t;
message_handle raw(16);
rfq_id_v smcmain.rfq_header.rfq_id%TYPE;
BEGIN
select rfq_id
into rfq_id_v
from smcmain.rfq_header
where rownum < 2;
dbms_aq.enqueue('<queue_name>', enqueue_options,
message_properties, hextoraw(rfq_id_v), message_handle);
END;
10. dequeue:
DECLARE
dequeue_options dbms_aq.dequeue_options_t;
message_properties dbms_aq.message_properties_t;
message_handle raw(16);
message raw(16);
rfq_id_out smcmain.rfq_header.rfq_id%TYPE;
BEGIN
dbms_aq.dequeue('<queue_name>', dequeue_options, message_properties,
message, message_handle);
rfq_id_out := rawtohex(message);
dbms_output.put_line('rfq_id = ' || rfq_id_out);
EXCEPTION when others then null; -- no messages to dequeue
END;
--------------------------------------------------------------------------------
CONSTRAINTS:
- data dict:
DBA_CONS_COLUMNS, DBA_CONSTRAINTS, DBA_CROSS_REFS, EXCEPTIONS (utlexcpt.sql)
- enforced: affects new data but is not enforced on old data
1. alter table <table_name>
modify (<column> constraint <constraint_name> NOT NULL disable);
2. alter table <table_name> enforce constraint <constraint_name>;
- deferred constraints: defer checking of integrity constraints until commit:
alter session set constraints=deferred;
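- a constraint must be created DEFERRABLE for this to apply to it; a sketch with hypothetical table/constraint names:

```sql
alter table order_lines
add constraint fk_order_lines_hdr foreign key (order_id)
references order_headers (order_id)
deferrable initially immediate;

alter session set constraints = deferred;
-- child rows may now be inserted before their parent row;
-- the constraint is checked at commit time instead
```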
- disable: alter table <table_name>
modify constraint <constraint_name> disable;
- constraint types:
- P = primary key - R = foreign key
- U = unique key - C = check or not null
- O = with read only on a view - V = with check option on a view
DBA_CONSTRAINTS.search_condition to distinguish check and not null
- primary and unique keys:
- "using index" to define storage parameters
ex. ... using index tablespace <ts_name> storage (...) ...
- creating
alter table csr_security_level
add constraint csr_security_level_pk
primary key (security_level_id)
using index
storage (initial 50k next 50k pctincrease 0)
tablespace livindex nologging;
- foreign keys:
- fkey columns should be indexed
- fkeys must reference primary or unique keys on parent table
- datatypes of the dependent and referenced columns must match
- creating
alter table Order_Exception_Queues
add constraint fk_Order_Exception_Queues#g_id foreign key (group_id)
references Order_Exception_groups(group_id)
- on delete cascade:
By default you cannot delete pkey data if there is an fkey.
But if the fkey is created using "on delete cascade", when parent
data is deleted all child data will be automatically deleted.
- truncate: parent table cannot be truncated if referenced by an enabled fkey
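- one workaround sketch, using hypothetical table/constraint names:

```sql
-- disable the referencing foreign key, truncate, then re-enable
alter table child_t modify constraint fk_child_parent disable;
truncate table parent_t;
alter table child_t modify constraint fk_child_parent enable;
```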
- constraint exceptions:
1. create an exceptions table using utlexcpt.sql
2. create or enable a constraint using "exceptions into <exceptions_table>"
ex. alter table <table_name>
enable primary key
exceptions into <exceptions_table>;
3. select * from <exceptions_table>;
- referencing keys:
col r_constraint_name format a27 trunc heading "parent constraint"
col constraint_name format a27 trunc heading "child constraint"
col table_name format a24 trunc heading "child table"
select c1.r_constraint_name, c1.constraint_name, c1.table_name
from sys.dba_constraints c1
where c1.r_constraint_name in (select c2.constraint_name
from sys.dba_constraints c2
where c2.table_name = '<PARENT_TABLE>');
--------------------------------------------------------------------------------
ROWID:
- dbms_rowid: $ORACLE_HOME/rdbms/admin/dbmsutil.sql
- rowid_to_extended()
- rowid_to_restricted()
- rowid_info(rowid_in in rowid, rowid_type out number,
object_number out number, relative_fno out number,
block_number out number, row_number out number)
- rowid_type(): 0 = oracle7 1 = oracle8
- Oracle8:
- object id - relative datafile id - block id - row or slot number
- 10 bytes to store
--------------------------------------------------------------------------------
OEM - ORACLE ENTERPRISE MANAGER:
- data dict: SMP_...
- 10g
- installed with DB
- starting
1. dbconsole must be running on server (to connect from client browser)
- unix: $ORACLE_HOME/bin/emctl start dbconsole
- windows: Start->Control Panel->Administrative Tools->Services
service = OracleDBConsole
2. from the web browser: http://<hostname>:<port>/em
- port numbers are listed in $ORACLE_HOME/install/portlist.ini
- example, if db server = comp42 and the EM console HTTP port number = 5500
http://comp42:5500/em
3. login using SYS or SYSTEM or another user authorized to access the Database Control
- dbconsole
- status: $ORACLE_HOME/bin/emctl status dbconsole
- starting: $ORACLE_HOME/bin/emctl start dbconsole
- stopping: $ORACLE_HOME/bin/emctl stop dbconsole
- 3 tier architecture:
- user console (java)
- management server/repository
- nodes/intelligent agents
- configuring - required steps (see OEM Config Guide p.1-6 to 1-8)
1. create DB for repository schema - init parameter processes must be > 200
2. create repository: Enterprise Manager Configuration Assistant (from MS)
3. start management server: on repository server
4. start intelligent agent: on local servers
5. start console
6. create administrator accounts
7. discover services
8. set preferred credentials on nodes
9. configure job system on unix
10. configure DB for remote sysdba administration
- management server - OMS (tier 2):
- OEM super administrator account
- commands (ENV set to OEM's ORACLE_HOME):
- start: oemctrl start oms &
- stop: oemctrl stop oms <user>/<password>
oemctrl stop oms oemdb_user/oemdb_user
- status: oemctrl status oms <user>/<password>
- log: $ORACLE_HOME/sysman/log/.
- configuration file: $ORACLE_HOME/sysman/config/omsconfig.properties
- middle tier application. multiple management servers can be added
- intelligent agent (tier 3):
- agent session
select username
from v$session
where username = 'DBSNMP'
- process on remote node to run jobs, monitor events, discover services, etc.
- version 9:
- listing of parameters: agentctl
- stopping dbsnmp and dbsnmpwd: agentctl stop
- starting dbsnmp and dbsnmpwd: agentctl start
- status of dbsnmp and dbsnmpwd: agentctl status
- dbsnmpwd is UNIX script that ensures the dbsnmp (Intelligent Agent) is running
- version 8:
- start: lsnrctl dbsnmp_start
- stop: lsnrctl dbsnmp_stop
- status: lsnrctl dbsnmp_status (ps -ef | grep dbsnmp should see 2)
- logs: $ORACLE_HOME/network/log/nmiconf.log dbsnmpc.log dbsnmpw.log
- monitors DBs in oratab
- maintains 2 sessions, so the DB cannot be shut down with shutdown normal
V$SESSION.username = DBSNMP V$SESSION.program = dbsnmp@<host>
- see $ORACLE_HOME/network/doc/README.oemagent & readme.wri
- agent software in $ORACLE_HOME/network/agent/.
- configures itself by creating files: $TNS_ADMIN/snmp_ro.ora and snmp_rw.ora
- tracing: include the following in $TNS_ADMIN/snmp_rw.ora
nmi.trace_level = 13
nmi.trace_mask = (106)
- console (tier 1):
- start (ENV set to OEM's ORACLE_HOME):
1. start management server in tier 2 (see above)
2. oemapp console (sysman/spy4dba)
- discovering nodes:
1. click Navigator drop-down
2. choose Discover Nodes...
3. enter DB server
- Change Management Pack:
- MS Windows only
- tracking apps:
- DB capture: capture DB object definitions into a baseline or SQL DDL script
- DB diff
- change apps:
- DB diff sync wizard: exemplars
- DB quick change: directives
- DB alter: directives
- DB propagate: exemplars
- change plan: must be created before creating or modifying change plans
- directive: changes for an existing object
- exemplar: complete object definition
- all change plans can be viewed with the Plan Manager
- data gatherer:
- commands:
- start: vppcntl -start
- stop: vppcntl -stop
- verify running: vppcntl -ping (ps -ef | grep vppdc)
- version: vppcntl -version
- starting steps:
1. rm alert_dg.log
2. rm $ORACLE_HOME/odg/reco/*
3. vppcntl -start
- log: $ORACLE_HOME/odg/log/alert_dg.log
- installed with Oracle Intelligent Agent
- used by the Oracle Capacity Planner and Oracle Performance Manager
- state and data files are in $ORACLE_HOME/odg/reco/.
--------------------------------------------------------------------------------
DATABASE STARTUP - DATABASE SHUTDOWN:
- startup:
- startup nomount: new DB creation, create SGA
- startup mount: rename datafiles, recover DB
- alter database mount;
- startup open: startup
- alter database open;
- startup pfile=<path>/<init_file>.ora
- startup restrict: only users with restricted session privilege
- v8 select logins from v$instance; if ALLOWED then DB is open
- alter system enable/disable restricted session;
- startup recover: start and begin recovery
- startup exclusive: does not allow other instances to mount the DB
- startup force: like shutdown abort
- alter database open read only;
- startup open retry: RAC only
- shutdown:
- shutdown normal: transactions finish
- shutdown transactional: transactions finish, OPS users failover
- shutdown immediate: terminates sessions, stops and rolls back transactions
- shutdown abort: kills everything, DB recovery required
- suspend/resume: suspends I/O, queries, and locks
- alter system suspend;
- alter system resume;
- $ORACLE_HOME/bin/dbshut: shuts down all DB's in oratab
- $ORACLE_HOME/bin/dbstart: starts all DB's in oratab
- configuring dbshut/dbstart at boot time:
1. edit stop/start flag to Y in /var/opt/oracle/oratab
2. modify dbshut to 'shutdown immediate' and not shutdown normal
3. create file /etc/init.d/dbora
4. add the following to dbora:
ORA_HOME=<oracle_home>
ORA_OWNER=oracle
if [ ! -f $ORA_HOME/bin/dbstart ]
then
echo "Oracle startup: cannot start"
exit
fi
case "$1" in
'start')
su - $ORA_OWNER -c $ORA_HOME/bin/dbstart &
;;
'stop')
su - $ORA_OWNER -c $ORA_HOME/bin/dbshut &
;;
esac
5. link dbora by entering:
ln -s /etc/init.d/dbora /etc/rc0.d/K10dbora
ln -s /etc/init.d/dbora /etc/rc2.d/S99dbora
6. test
/etc/rc2.d/S99dbora start
/etc/rc0.d/K10dbora stop
--------------------------------------------------------------------------------
PRECISE - PRECISE/SQL (Precise 2.8.2):
- components: Inspect and Analyze
- GUI: 1. options 2. quick connect 3. set ports to match server ports
- server:
- env: INSPECT_HOME and ANALYZE_HOME
- Inspect and Analyze MUST be restarted every time the DB is restarted
- ports:
- $ANALYZE_HOME/<SID>/analyze.cfg (analyze_service=5002)
- $INSPECT_HOME/<SID>/etc/inspect.cfg (inspect_service=5003)
- starting steps:
1. login as precise and set env
2. $INSPECT_HOME/bin/inspect.sh -a -k
3. $ANALYZE_HOME/bin/analyze.sh -k
- stopping steps:
1. login as precise and set env
2. $INSPECT_HOME/bin/inspshut.sh -a -k
3. $ANALYZE_HOME/bin/anashut.sh -k
--------------------------------------------------------------------------------
OERR MESSAGES - ERROR MESSAGES:
- message text in $ORACLE_HOME/rdbms/mesg/.
- error codes above 20000 are USUALLY reserved for user defined errors
--------------------------------------------------------------------------------
AUDIT - AUDITING:
- data dict:
- SYS.AUD$: database audit trail
- DBA_STMT_AUDIT_OPTS: statement audits set
- DBA_PRIV_AUDIT_OPTS: privilege audits set
- DBA_OBJ_AUDIT_OPTS: object audits set
- AUDIT_ACTIONS: maps audit # to audit name
- DBA_AUDIT_TRAIL: raw audit trail of all audit records (view of SYS.AUD$)
- enable auditing:
1. execute $ORACLE_HOME/rdbms/admin/cataudit.sql as SYS to create DD views
2. set audit_trail = DB, OS, or NONE
3. if using OS audit trail: set audit_file_dest or use the default
4. specify auditing options with AUDIT command
- audit trails are not generated by SYS or connections with admin privs
- DML statements against SYS.AUD$ are OK ( ex. delete from sys.aud$; )
- OS audit trail:
- audit_file_dest (default $ORACLE_HOME/rdbms/audit)
- always audited into OS audit trail: startup, shutdown, admin priv connects
- to protect SYS.AUD$ itself:
audit insert, update, delete on SYS.AUD$ by access;
stuff in SYS.AUD$ about SYS.AUD$ can only be deleted with admin privs
- audit objects should be rebuilt out of the system TS
--------------------------------------------------------------------------------
RESOURCE LIMITS:
- tablespace quotas:
- data dict: DBA_TS_QUOTAS
- grant unlimited tablespace to <user>;
- alter user <user> quota unlimited on <ts_name>;
--------------------------------------------------------------------------------
SQL*LOADER - SQLLOADER - SQL LOADER:
- sqlldr <user>/<password> control=sqlload.ctl
- to set maximum errors to allow:
- sqlldr <user>/<password> errors=500 control=sqlload.ctl OR
- in control file: OPTIONS (errors = 500)
- direct path load:
- execute: $O_H/rdbms/admin/catldr.sql
- in control file: direct=true
- example: file with comma separated fields, variable length records, and multiple infiles
OPTIONS (ERRORS=500000)
LOAD DATA
INFILE cs03132003.csv
INFILE cs03142003.csv
INFILE cs03152003.csv
INFILE cs03162003.csv
BADFILE big_honkin_dead_file.bad
DISCARDFILE big_honkin_dead_file.discard
DISCARDMAX 20000
INTO TABLE dead_shoppers_20040212
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY " "
TRAILING NULLCOLS
(SHOPPER_ID, EMAIL, IS_MSN, MARKED_DEAD_DATE)
- example: file with comma separated fields, variable length records, and lower() function
LOAD DATA
INFILE '/raid/ab/mastercard_final_parse.txt'
BADFILE '/export/home/abonetti/work/fraud_1_3_sqlload.bad'
DISCARDFILE '/export/home/abonetti/work/fraud_1_3_sqlload.discard'
DISCARDMAX 20000
INTO TABLE sblive.mc_reject_cc_range_temp INSERT
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY " "
(cc_num_min,
cc_num_max,
primary_phone,
secondary_phone,
bank_name "lower(:bank_name)")
- example: fields separated by TABS allowing many errors with input data substr()
OPTIONS (ERRORS=500000)
LOAD DATA
INFILE a20030818_Click_Detail.txt
BADFILE a20030818_Click_Detail.bad
DISCARDFILE a20030818_Click_Detail.discard
INTO TABLE befree_recovery_18
FIELDS TERMINATED BY X'09'
(timestamp,
ip_address,
url1 CHAR(4000) "substr(:url1, 1, 3500)",
url2 CHAR(4000) "substr(:url2, 1, 3500)",
source_id)
- example:
OPTIONS (errors=50,rows=50)
LOAD DATA
INFILE url_builder.dat
DISCARDMAX 50
INSERT
INTO TABLE url_builder
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( "EMAIL_TEMPLATE_ID" INTEGER EXTERNAL,
"EXCHANGE_ID" INTEGER EXTERNAL,
"SEQ_NUM" INTEGER EXTERNAL,
"SQL_STATEMENT" CHAR (512))
- example with CLOB:
OPTIONS (errors=50,rows=100050) LOAD DATA
INFILE exchange_email.dat
DISCARDMAX 50
INSERT
INTO TABLE exchange_email
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( "EMAIL_TEMPLATE_ID" INTEGER EXTERNAL,
"EXCHANGE_ID" INTEGER EXTERNAL,
"ACTIVE" INTEGER EXTERNAL,
EXT_FNAME FILLER CHAR(400),
"EMAIL_TEMPLATE" LOBFILE(EXT_FNAME) TERMINATED BY EOF)
- example: appending data into table from multiple sqlldr infiles
LOAD DATA
INFILE cs03132003.csv
INFILE cs03142003.csv
INFILE cs03152003.csv
INFILE cs03162003.csv
BADFILE 'lead_gen_sqlldr.bad'
DISCARDFILE 'lead_gen_sqlldr.discard'
DISCARDMAX 20000
APPEND
INTO TABLE sblive.lead_gen_shopper
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY " "
(email)
--------------------------------------------------------------------------------
INTERMEDIA - CONTEXT:
- $ORACLE_HOME/ctx/.
- data dict
- owned by sys:
DBA_INDEXES.index_type='DOMAIN', ityp_name='CONTEXT', ityp_owner='CTXSYS',
domidx_status, domidx_opstatus
select owner, index_name, status, domidx_status, domidx_opstatus
from sys.dba_indexes
where index_type = 'DOMAIN';
- owned by ctxsys:
CTX_INDEXES, CTX_PENDING, CTX_SERVERS, CTX_SECTION_GROUPS,
CTX_INDEX_ERRORS, CTX_PARAMETERS, CTX_SECTIONS, CTX_THESAURI
- owned by index owner:
DR$<index_name>$I.token_text, token_info, token_type
- ctxapp role: create indexes and issue intermedia queries
- listener.ora and tnsnames.ora must support external procedures
- extproc_connection_data must be present in tnsnames.ora
- SID_DESC may need to be present in listener.ora (not always)
- install
- performed during DB create in 8i
- steps
1. as sys: $ORACLE_HOME/ctx/admin/dr0csys.sql ctxsys drsys temp
creates CTXSYS user
2. as ctxsys: $ORACLE_HOME/ctx/admin/dr0inst.sql
when prompted for the library path: <ORACLE_HOME>/ctx/lib/libctxx8.so
3. as ctxsys: $ORACLE_HOME/ctx/admin/defaults/drdefus.sql
- deinstall
1. as ctxsys: $ORACLE_HOME/ctx/admin/dr0drop.sql drops objects
2. as sys: $ORACLE_HOME/ctx/admin/dr0dsys.sql drop CTXSYS user
- DDL:
- grant ctxapp role to user
- ctxsrv SHOULD NOT be running
- indexed table MUST have a pkey
- simple create index
create index <index_name> on <table_name>(<column>)
indextype is ctxsys.context;
- section group
exec ctxsys.ctx_ddl.drop_section_group('rfqsearch');
exec ctxsys.ctx_ddl.create_section_group('rfqsearch', 'XML_SECTION_GROUP');
- replace section group
alter index rfq_searchindex
rebuild parameters ('replace section group rfqsearch');
- field section
exec ctxsys.ctx_ddl.add_field_section('rfqsearch','material','mt', TRUE);
- create index using section group
create index rfq_searchindex ON rfq_search(parameters)
indextype is ctxsys.context parameters ('section group rfqsearch');
- zone section
exec ctxsys.ctx_ddl.add_zone_section('rfqsearch','material','mt');
- queries:
- grant ctxapp role to user
- global query:
- select <columns> from <table_name> where contains(<column>, '<term>%') > 0;
- select rfq_id from rfq_search where contains(description, 'steel%') > 0;
- field section query:
- select <columns> from <table_name>
where contains(<column>, '<term>% WITHIN <section>') > 0;
- select rfq_id from rfq_search
where contains(parameters,'steel% WITHIN material') > 0;
- stemming / about / label query:
select <columns> from <table_name>
where contains(<column>, '$(<term>), about(<phrase>)', 1) > 0;
- $() performs stemming
- about() allows queries on concepts increasing hits
- , 1 is the label which specifies the relevancy score (1 is the highest)
- DML:
- ctx_ddl:
- ctx_ddl.sync_index('');
- ctx_schedule:
- install with ctx_schedule.sql (only available in metalink doc id 132693.1)
- ctx_schedule places a job in DBA_JOBS
- starting:
- sync every minute: exec ctx_schedule.startup('ex1_index', 'SYNC', 1);
- optimize every 2 hours:
exec ctx_schedule.startup('ex1_index', 'OPTIMIZE FAST', 120);
- stopping: assumes you are stopping a sync unless otherwise noted
- exec ctx_schedule.stop('ex1_index');
- exec ctx_schedule.stop('ex1_index', 'OPTIMIZE FAST');
- ctxsrv (context server/m server):
- ctxsrv is being deprecated and replaced by ctx_schedule
- start: nohup ctxsrv -user ctxsys/ctxsys -log <logfile> &
OR nohup ctxsrv -user ctxsys/ctxsys -log <logfile> -sqltrace &
- stop: exec ctx_adm.shutdown;
- manual:
alter index <index_name> rebuild online parameters ('sync');
alter index <index_name> rebuild online parameters ('sync memory 2M');
- inserts and updates are queued in ctxsys.CTX_PENDING (ctxsys.DR$PENDING)
- deleted data invalidated immediately
- logging:
- session level logging
- ctx_output.start_log('<logfile>') - uses init parameter log_directory
- ctx_output.end_log;
- ctx_output.logfilename; - this is a function returning a varchar2
- optimization / fragmentation:
- OK to run while ctxsrv is running
- fast optimization: glues fragmented rows
alter index <index_name> rebuild online parameters ('optimize fast');
- full optimization: removes invalidated data and glues fragmented rows
alter index <index_name> rebuild online parameters ('optimize full');
alter index <index_name> rebuild online parameters ('optimize full maxtime 5');
- fragmentation target is less than 10:
SELECT AVG(COUNT(*))
FROM DR$<index_name>$I
GROUP BY TOKEN_TEXT HAVING COUNT(*) > 1;
- knowledge base:
- $ORACLE_HOME/bin/ctxkbtc = compiler
- compiles a thesaurus for use in theme indexes and ABOUT queries in English
- thesaurus must first be loaded with ctxload -thescase Y
- thesaurus:
- ctx_thes.create_thesaurus()
- used to extend the knowledge base to enhance ABOUT queries in English
- themes:
- ctx_doc.themes()
- generate themes for a document
- can be stored in theme tables or in-memory PL/SQL table
--------------------------------------------------------------------------------
ODBC INSTALL:
1. log into NT: user = administrator, domain = smcadm
2. Oracle8i Client CD - Custom Install
3. check ODBC and complete install
4. log into NT as user
5. from ODBC Data Source Administrator: add new "User DSN"
6. choose Oracle ODBC Driver
7. fill in data:
- Data Source Name: SMC Production Oracle DB
- Service Name: smcprod
- UserID: smcmain
8. copy nn:\smcadmin\common\CT Tables\RFQ Production Lookup AND
nn:\smcadmin\common\Market Making\SMC RFQ Admin to desktop
9. in both MS Access icons ...
10. tools -> link tables manager
11. check always prompt for new location
--------------------------------------------------------------------------------
MOVE TABLE CLAUSE - ALTER TABLE MOVE:
- move a table to new TS
- indexes on moved tables become unusable (ORA-01502), must rebuild or recreate
- for normal tables:
alter table <owner>.<table_name> move
tablespace <ts_name> storage (initial xxM next yyM pctincrease 0);
- for IOTs:
alter table <owner>.<table_name>
move online
tablespace <ts_name> storage (initial xxM next yyM pctincrease 0);
--------------------------------------------------------------------------------
STORED OUTLINES:
- data dict: DBA_OUTLINES, DBA_OUTLINE_HINTS, outln.OL$, outln.ol$hints
- replaces hints
- ensure the CBO will use the execution plan you choose
- user = OUTLN
- $O_H/rdbms/admin/catol.sql, dbmsol.sql, prvtol.plb (called from catproc.sql)
- dynamically:
- alter session set create_stored_outlines = true;
- execute queries
- alter session set create_stored_outlines = false;
- on specific SQL:
create or replace outline <outline_name>
for category <category_name> on <sql_statement>;
- to use:
- alter session set use_stored_outlines = true; OR
alter session set use_stored_outlines = <category_name>;
--------------------------------------------------------------------------------
WEBDB:
- installing: see Oracle WebDB manual
* MUST be a new ORACLE_HOME
- DAD - Database Access Descriptor:
- config file: $ORACLE_HOME/listener/cfg/wdbsvr.app
- after modifying DAD settings, stop/restart WebDB listener
- JInitiator: must be installed on client
--------------------------------------------------------------------------------
OAS - ORACLE APPLICATION SERVER - OAS PL/SQL TOOLKIT:
- installing PL/SQL toolkit for OAS:
- used by WebDB
- user: OAS_PUBLIC
1. mount WebDB CD
2. cd to owa40 directory on CD
3. sqlplus sys/<password>
4. @owaload.sql oas_public tools temp oas_public
--------------------------------------------------------------------------------
JAVA - JVM - JSERVER:
- data dict: CREATE$JAVA$LOB$TABLE: stores java class files
- to see if java has been loaded into DB:
select count(*) from dba_objects where object_type like 'JAVA%';
- requirements:
- shared_pool_size: 15M min.
- java_pool_size: 20M min
- SYSTEM TS: 150M min
- JVM installed using: $ORACLE_HOME/javavm/install/initjvm.sql
- DB roles:
- javauserpriv
- javasyspriv
- javadebugpriv
- java stored procedures:
1. write java code
2. compile to a java class file (bytecode) using JDK or JDeveloper
3. use loadjava to load java class file (bytecode) into DB
loadjava -user <user>/<password>@<db> <file>.class
loadjava -help
4. publish: create PL/SQL procedure to call the java class file
create procedure ... as language java name '<java_method_spec>' ...
- drop java class file using dropjava
- DBMS_JAVA.longname(): to get full java class name
- create or replace java system;
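Steps 3-4 above might look like this (class, method, and schema names are hypothetical):

```sql
-- After loading with: loadjava -user scott/tiger@db Hello.class
-- publish the static Java method as a PL/SQL function
create or replace function hello_world return varchar2
as language java
name 'Hello.world() return java.lang.String';
/

select hello_world from dual;
```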
--------------------------------------------------------------------------------
DATA WAREHOUSE:
- ETL: Extraction, Transformation, Load
- misc
- the most natural way to model a data warehouse is as a star schema
- only one join establishes the relationship between the fact table
and any one of the dimension tables
- A star schema optimizes performance by keeping queries simple and
providing fast response time
- All the information about each level is stored in one row.
- each dimension table is joined to the fact table using a primary key to foreign key join,
but the dimension tables are not joined to each other
- star schema:
- fact table in center surrounded by dimension tables
- denormalized
- ex.
- fact table: sales
- dimension tables: customer, product, time
- example
SELECT ch.channel_class, c.cust_city, t.calendar_quarter_desc,
SUM(s.amount_sold) sales_amount
FROM sales s, times t, customers c, channels ch
WHERE s.time_id = t.time_id
AND s.cust_id = c.cust_id
AND s.channel_id = ch.channel_id
AND c.cust_state_province = 'CA'
AND ch.channel_desc in ('Internet','Catalog')
AND t.calendar_quarter_desc IN ('1999-Q1','1999-Q2')
GROUP BY ch.channel_class, c.cust_city, t.calendar_quarter_desc;
- fact tables
- large tables in your warehouse schema that store business measurements
- typically contain facts and foreign keys to the dimension tables
- represent data, usually numeric and additive, that can be analyzed and examined
- examples include sales, cost, and profit.
- dimension tables
- also known as lookup or reference tables
- contain the relatively static data in the warehouse
- store the information you normally use to constrain queries
- Dimension tables are usually textual and descriptive and
you can use them as the row headers of the result set
- Examples are customers or products.
- materialized views with query rewrite take advantage of star schema designs
--------------------------------------------------------------------------------
OLAP - ONLINE ANALYTICAL PROCESSING:
- defined
- data warehouses use a star schema, OLAP environments use a hierarchical cube
- hierarchical cubes include both detail data and aggregated data
- data sets where data is aggregated along the rollup hierarchy of each of its dimensions
and these aggregations are combined across dimensions
- sample case
- Consider a sales data set with two dimensions, each of which has a 4-level hierarchy
- All time, year, quarter, and month
- All products, division, brand, and item
- this means there are 16 aggregate groups in the hierarchical cube:
four levels of time are multiplied by four levels of product to produce the cube
- GROUP BY clause needed to create the hierarchical cube:
GROUP BY ROLLUP(year, quarter, month),
ROLLUP(Division, brand, item)
- using OLAP cubes
- typical OLAP queries slice and dice different parts of the cube, comparing aggregations from
  one level to aggregations from another level
- for instance, a query might find sales of the grocery division for the month of January, 2002
and compare them with total sales of the grocery division for all of 2001.
- example
- this consists of a hierarchical cube query nested in a slicing query
- the inner hierarchical cube has two dimensions and four levels in each dimension
it would generate 16 groups (4 Time levels * 4 Product levels)
- the GROUPING_ID function in the query identifies the specific group each row belongs to,
based on the aggregation level of the grouping-columns in its argument
- the outer query slices a small chunk of data from the cube
SELECT month, division, sum_sales FROM
(SELECT year, quarter, month, division, brand, item, SUM(sales) sum_sales,
GROUPING_ID() gid
FROM sales, products, time
WHERE <join conditions>
GROUP BY
ROLLUP(year, quarter, month),
ROLLUP(division, brand, item))
WHERE division = 25
AND month = 200201
AND gid = <gid_value>
- OLAP mviews
- OLAP requires fast response times, so parts of OLAP cubes can be precomputed as mviews
- full OLAP cube materialized: requires the most storage space but ensures quick response
- partial materialization: saves storage space, but not as fast for some queries
- If an OLAP environment's queries cover the full range of aggregate groupings,
it may be best to materialize the whole hierarchical cube.
- example 1: complete hierarchical cube mview
CREATE MATERIALIZED VIEW sales_hierarchical_cube_mv
REFRESH FAST ON DEMAND
ENABLE QUERY REWRITE
AS
SELECT country_id, cust_state_province, cust_city, prod_category,
prod_subcategory, prod_name, calendar_month_number,
day_number_in_month, day_number_in_week,
GROUPING_ID(country_id, cust_state_province, cust_city,
prod_category, prod_subcategory, prod_name,
calendar_month_number, day_number_in_month,
day_number_in_week) gid,
SUM(amount_sold) s_sales,
COUNT(amount_sold) c_sales,
COUNT(*) c_star
FROM sales s, products p, customers c, times t
WHERE s.cust_id = c.cust_id AND s.prod_id = p.prod_id
AND s.time_id = t.time_id
GROUP BY
ROLLUP(country_id, (cust_state_province, cust_city)),
ROLLUP(prod_category, (prod_subcategory, prod_name)),
ROLLUP(calendar_month_number, (day_number_in_month, day_number_in_week))
PARTITION BY LIST (gid)
- example 2: partial hierarchical cube mview
CREATE MATERIALIZED VIEW sales_mv
REFRESH FAST ON DEMAND
ENABLE QUERY REWRITE
AS
SELECT country_id, cust_state_province, cust_city,
prod_category, prod_subcategory, prod_name,
GROUPING_ID(country_id, cust_state_province, cust_city,
prod_category, prod_subcategory, prod_name) gid,
SUM(amount_sold) s_sales,
COUNT(amount_sold) c_sales,
COUNT(*) c_star
FROM sales s, products p, customers c
WHERE s.cust_id = c.cust_id and s.prod_id = p.prod_id
GROUP BY GROUPING SETS
((country_id, cust_state_province, cust_city),
(country_id, prod_category, prod_subcategory, prod_name),
(prod_category, prod_subcategory, prod_name),(country_id,prod_category))
PARTITION BY LIST (gid)
--------------------------------------------------------------------------------
ORACLE AND WINDOWS:
- set
- c:\> set - shows all env vars
- c:\> set ORA - shows env vars starting with ORA
- c:\> set ORACLE_HOME=c:\orant
- c:\> set ORACLE_SID=<SID>
- services: Start -> Settings -> Control Panel -> Services
- oradim:
- manually starting, stopping, creating, deleting, or modifying DB
- log: $ORACLE_BASE\$ORACLE_HOME\$DATABASE\<log file>
- help: oradim -HELP
- creating an instance:
- only creates password file and service, not the datafiles
- oradim -new -sid <SID> -intpwd <password> -startmode auto -pfile c:\...
- starting an instance:
    - oradim -startup -sid <SID> -starttype srvc,inst
- stopping an instance:
    - oradim -shutdown -sid <SID> -shuttype srvc,inst
--------------------------------------------------------------------------------
ORACLE BILLING:
- CPU model:
- one time bill
- Enterprise Edition cost = number of CPU's * $40,000
- ex. 8 * 40000 = $320,000
- Standard Edition cost = number of CPU's * $15,000
- named user model:
- one time bill
- $800 per named user
- minimum 10 users per CPU
- ex. 8 CPU's: 80 users (10 per CPU) * 800 = $64,000
- UPU model:
- one time bill
- UPU = number of CPU's * processor speed * 1.5
- cost = UPU * 100
- ex. 8 CPU's * 400 MHz * 1.5 = 4800 UPU's then 4800 * 100 = $480,000
- RAC/OPS: number of CPU's * $20,000
- partitioning option: number of CPU's * $10,000
- support: cost * 22% billed yearly
- ESL: Embedded Software Licenses (ISV)
- models
1. 80% discount on Oracle list prices
2. VBM (Value Based Model)
- pay Oracle 10% of total application price
- support is 19% of VBM price
3. ASFU (partially embedded): 40% of list price
- must apply for ESL certification
- must be a member of the Oracle Partner Network
- conditions
- Installation
- The third party customer cannot see the Oracle install.
- Oracle must be installed as part of the application
- Administration and support must be done by the Oracle customer with the ESL agreement,
not the third party customer
- Data Access
- Must be via canned reports and other features of the application
- The third party customer cannot access the database directly
--------------------------------------------------------------------------------
ENCRYPTION:
- dbms_crypto (10g)
- DBMS_CRYPTO replaces the obfuscation toolkit in 10g,
although dbms_obfuscation_toolkit is still available for backwards compatibility
- encrypt and decrypt functions and procedures in dbms_crypto package work on RAW data type only
- obfuscation toolkit - dbms_obfuscation_toolkit
- example
declare
data_to_encrypt varchar2(24) := 'Must be a multiple of 8 ';
encrypt_key varchar2(15) := 'abcdefghijklmno';
encrypted_data varchar2(2048);
data_after_decrypt varchar2(24);
begin
----- encrypting -----
dbms_output.put_line('data to encrypt = ' || data_to_encrypt);
dbms_obfuscation_toolkit.desencrypt(input_string => data_to_encrypt,
key_string => encrypt_key, encrypted_string => encrypted_data);
dbms_output.put_line('after encryption = ' || encrypted_data);
----- decrypting -----
dbms_obfuscation_toolkit.desdecrypt(input_string => encrypted_data,
key_string => encrypt_key, decrypted_string => data_after_decrypt);
dbms_output.put_line('after decryption = ' || data_after_decrypt);
end;
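A comparable 10g sketch using DBMS_CRYPTO (the key and algorithm choices here are illustrative; note DBMS_CRYPTO works on RAW only, so VARCHAR2 data must be converted, and SYS must grant execute on the package):

```sql
declare
  l_mode pls_integer := dbms_crypto.encrypt_aes128
                      + dbms_crypto.chain_cbc
                      + dbms_crypto.pad_pkcs5;
  l_key  raw(16)     := utl_raw.cast_to_raw('0123456789abcdef'); -- 128-bit key
  l_src  raw(2000)   := utl_raw.cast_to_raw('any length of data');
  l_enc  raw(2000);
  l_dec  raw(2000);
begin
  -- PKCS5 padding removes the multiple-of-8 restriction of the DES toolkit
  l_enc := dbms_crypto.encrypt(src => l_src, typ => l_mode, key => l_key);
  l_dec := dbms_crypto.decrypt(src => l_enc, typ => l_mode, key => l_key);
  dbms_output.put_line(utl_raw.cast_to_varchar2(l_dec));
end;
/
```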
--------------------------------------------------------------------------------
GUID:
- Globally Unique Identifier
- select sys_guid() from dual;
--------------------------------------------------------------------------------
HIERARCHICAL QUERY:
- To issue a hierarchical query, you must know two things about your data
1. the conditions identifying root rows (those at the top of a hierarchy)
- START WITH <column> IS NULL generally identifies the root nodes
- root (parent) rows have the parent-pointer column equal to NULL
2. the column or columns in a child row that point to its parent.
- CONNECT BY PRIOR = defines relationship between parent and child
- example
SELECT affiliate_id, affiliate_name, affiliate_parent
FROM affiliate_header
START WITH affiliate_id = 19
CONNECT BY PRIOR affiliate_parent = affiliate_id
- example selecting only top level (NULL parent):
SELECT cat.cat_id
FROM category cat
WHERE cat.parent_cat_id IS NULL START WITH cat.cat_id = 5061
CONNECT BY PRIOR cat.parent_cat_id = cat.cat_id;
- example with pseudocolumn "level":
SELECT id, parent_id, level
FROM table
START WITH id = 199
CONNECT BY PRIOR parent_id = id;
- example selecting only level 2:
SELECT v.employee_id, v.last_name, v.lev
FROM (SELECT employee_id, last_name, LEVEL lev
FROM employees v
START WITH employee_id = 100
CONNECT BY PRIOR employee_id = manager_id) v
WHERE (v.employee_id, v.lev) IN (SELECT employee_id, 2 FROM employees);
- example with SYS_CONNECT_BY_PATH:
SELECT LPAD(' ', 2*level-1) || SYS_CONNECT_BY_PATH(last_name, '/') "Path"
FROM employees
START WITH last_name = 'Kochhar'
CONNECT BY PRIOR employee_id = manager_id;
- ORDER SIBLINGS BY : sorts rows at each level BUT preserves hierarchy
SELECT RPAD(' ', 2*(LEVEL-1)) || assembly_name assembly_name
FROM bill_of_materials
START WITH assembly_id = 200
CONNECT BY parent_assembly = PRIOR assembly_id
ORDER SIBLINGS BY assembly_name;
--------------------------------------------------------------------------------
IN-DOUBT TRANSACTION:
- data dict: DBA_2PC_PENDING, PENDING_TRANS$
- dbms_transaction:
- dbms_transaction.purge_mixed('<local_tran_id>')
- dbms_transaction.purge_lost_db_entry('<local_tran_id>')
- resolving "prepared" or "transmitting":
1. select tran_comment, advice from SYS.DBA_2PC_PENDING;
2. select * from SYS.PENDING_TRANS$ where LOCAL_TRAN_ID = '4.15.95157';
SYS.PENDING_TRANS$.state should be prepared or transmitting
3. rollback force '4.15.95157'; (OR commit force '4.15.95157';)
4. select * from SYS.PENDING_TRANS$ where LOCAL_TRAN_ID = '4.15.95157';
SYS.PENDING_TRANS$.state should be 'forced rollback' if fixed
- resolving "collecting":
1. select local_tran_id, state from SYS.PENDING_TRANS$;
2. exec dbms_transaction.purge_lost_db_entry('<local_tran_id>');
--------------------------------------------------------------------------------
DEFAULT DATA - DATA DEFAULT:
- data dict: DBA_TAB_COLUMNS
- act on INSERT
- allows you to specify a value for a column if an INSERT statement omits a value for the column
- operate as though the INSERT statement actually specified the default value
- default value applied before pre-insert triggers, so :new references contain the default value
- default values cause AFTER UPDATE triggers to be fired
- if adding a new column with a default, the default value is added to all rows
- if adding to an existing column, the value is not back populated
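A minimal illustration of the rules above (table and column names are hypothetical):

```sql
create table orders (
  id         number,
  status     varchar2(10) default 'NEW',   -- applied when INSERT omits the column
  created_on date         default sysdate
);

insert into orders (id) values (1);
-- row gets status = 'NEW' and created_on = current date, as though specified

-- adding a NEW column with a default populates all existing rows;
-- adding a default to an EXISTING column does not back-populate old rows
alter table orders add (priority varchar2(6) default 'LOW');
```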
--------------------------------------------------------------------------------
MICROSOFT TRANSACTION SERVER and COM+ SUPPORT:
1. as sys: $ORACLE_HOME/rdbms/admin/xaview.sql
2. grant select on DBA_PENDING_TRANSACTIONS to public;
--------------------------------------------------------------------------------
CHANGE DATABASE NAME = CHANGE DB NAME
- prior to 9i a control file rebuild is required
- 9i and later
1. Shutdown database
2. Startup mount
3. $ nid target=/ dbname=BRDSTN
4. Shutdown database
5. Change init.ora/spfile name, i.e.: initBRDSTN.ora
6. Change db_name parameter in init.ora/spfile, i.e.: db_name='brdstn'
7. Change ORACLE_SID to point to new database name
8. Startup mount
9. Alter database open resetlogs
10. Take a backup
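The same steps as a rough command transcript (DB name BRDSTN from the example above; the spfile rename and db_name edit between the two sqlplus sessions vary by platform):

```shell
sqlplus / as sysdba <<EOF
shutdown immediate
startup mount
exit
EOF
nid target=/ dbname=BRDSTN        # answer Y when prompted to change the name
# rename init.ora/spfile to initBRDSTN.ora, set db_name='brdstn' in it, then:
export ORACLE_SID=BRDSTN
sqlplus / as sysdba <<EOF
startup mount
alter database open resetlogs;
exit
EOF
```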
--------------------------------------------------------------------------------
SYS_CONTEXT
- Oracle provides a built-in namespace called USERENV,
  which describes attributes of the current session
- select
SYS_CONTEXT('USERENV','TERMINAL') terminal,
SYS_CONTEXT('USERENV','LANGUAGE') language,
SYS_CONTEXT('USERENV','SESSIONID') sessionid,
SYS_CONTEXT('USERENV','INSTANCE') instance,
SYS_CONTEXT('USERENV','ENTRYID') entryid,
SYS_CONTEXT('USERENV','ISDBA') isdba,
SYS_CONTEXT('USERENV','NLS_TERRITORY') nls_territory,
SYS_CONTEXT('USERENV','NLS_CURRENCY') nls_currency,
SYS_CONTEXT('USERENV','NLS_CALENDAR') nls_calendar,
SYS_CONTEXT('USERENV','NLS_DATE_FORMAT') nls_date_format,
SYS_CONTEXT('USERENV','NLS_DATE_LANGUAGE') nls_date_language,
SYS_CONTEXT('USERENV','NLS_SORT') nls_sort,
SYS_CONTEXT('USERENV','CURRENT_USER') current_user,
SYS_CONTEXT('USERENV','CURRENT_USERID') current_userid,
SYS_CONTEXT('USERENV','SESSION_USER') session_user,
SYS_CONTEXT('USERENV','SESSION_USERID') session_userid,
SYS_CONTEXT('USERENV','PROXY_USER') proxy_user,
SYS_CONTEXT('USERENV','PROXY_USERID') proxy_userid,
SYS_CONTEXT('USERENV','DB_DOMAIN') db_domain,
SYS_CONTEXT('USERENV','DB_NAME') db_name,
SYS_CONTEXT('USERENV','HOST') host,
SYS_CONTEXT('USERENV','OS_USER') os_user,
SYS_CONTEXT('USERENV','EXTERNAL_NAME') external_name,
SYS_CONTEXT('USERENV','IP_ADDRESS') ip_address,
SYS_CONTEXT('USERENV','NETWORK_PROTOCOL') network_protocol,
SYS_CONTEXT('USERENV','BG_JOB_ID') bg_job_id,
SYS_CONTEXT('USERENV','FG_JOB_ID') fg_job_id,
SYS_CONTEXT('USERENV','AUTHENTICATION_TYPE') authentication_type,
SYS_CONTEXT('USERENV','AUTHENTICATION_DATA') authentication_data,
SYS_CONTEXT('USERENV','CURRENT_SQL') current_sql,
SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER') client_identifier,
SYS_CONTEXT('USERENV','GLOBAL_CONTEXT_MEMORY') global_context_memory
from dual;
--------------------------------------------------------------------------------
SEQUENCES:
- RAC does not support the CACHE option with the ORDER option. Oracle cannot guarantee
an order if each instance has some sequence values cached. if created with both,
they will be ordered but not cached.
- CREATE SEQUENCE <sequence_name>
INCREMENT BY 1
START WITH 1266
MINVALUE 1
MAXVALUE 9999999999999999999999999999
NOCYCLE
ORDER
- create sequence <owner>.<sequence_name>
start with 14804200
minvalue 10000000
maxvalue 99999999999999999999
noorder
cycle
cache 500
--------------------------------------------------------------------------------
LONG OPERATIONS:
- data dict: V$SESSION_LONGOPS
- requires
- init: timed_statistics or sql_trace must be TRUE
- cost based stats must be present: use analyze or dbms_stats
- processes longer than 6 seconds
- DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS: for app specific long running operations
- see session_longops.sql for queries
- time remaining for rollback
select time_remaining
from v$session_longops
where sid = <sid>;
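A common progress query against V$SESSION_LONGOPS (the column selection is a typical sketch, not the only option):

```sql
-- operations still in flight, with an estimated percent complete
select sid, serial#, opname, sofar, totalwork,
       round(sofar / totalwork * 100, 2) pct_done,
       time_remaining, elapsed_seconds
  from v$session_longops
 where totalwork > 0
   and sofar <> totalwork;
```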
--------------------------------------------------------------------------------
SPACE MANAGEMENT in 10g
- see: http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/schema.htm
- segment advisor
- online segment shrink
- deallocating unused space
- Capacity Planning
- Estimating the Space Use of a Table
- Estimating the Space Use of an Index
- Obtaining Object Growth Trends
--------------------------------------------------------------------------------
ADDM - Automatic Database Diagnostic Monitor (10g):
- performance diagnosis and monitoring
- major components
- Automatic Workload Repository (AWR)
- Automatic Database Diagnostic Monitor (ADDM)
- Oracle Enterprise Manager (EM)
--------------------------------------------------------------------------------
ASM – AUTOMATIC STORAGE MANAGEMENT:
- Only one ASM instance is required per node regardless of the number of database instances on the node
- The primary component of Automatic Storage Management is the disk group
- A disk group consists of a grouping of disks that are managed together as a unit
- These disks are referred to as ASM disks
- Disks in a given disk group should have similar size and performance characteristics.
- ASM instances can be created with DBCA
- Queries
- V$ASM_DISKGROUP
select group_number, name, total_mb, free_mb, state
from V$ASM_DISKGROUP
- V$ASM_DISK
select path, REDUNDANCY, TOTAL_MB, DISK_NUMBER, GROUP_NUMBER, state
from V$ASM_DISK
- V$ASM_FILE
select count(*)
from V$ASM_FILE
- V$ASM_CLIENT
- srvctl commands
srvctl stop asm -n a1e1b1 -i +ASM1
srvctl stop asm -n a1e1b2 -i +ASM2
srvctl start asm -n a1e1b1 -i +ASM1
srvctl start asm -n a1e1b2 -i +ASM2
srvctl remove asm -n udb-e1b3     (removes ASM from Clusterware)
- oracleasm commands
- /etc/init.d/oracleasm listdisks
- /etc/init.d/oracleasm scandisks
- /etc/init.d/oracleasm querydisk {DISKNAME | devicename}
- /etc/init.d/oracleasm createdisk DISK1 /dev/raw/raw3
- /etc/init.d/oracleasm deletedisk DISKNAME
- disk group commands
- create diskgroup DG_ASM_1 external redundancy disk '/dev/raw/raw3', '/dev/raw/raw4'
- alter diskgroup DG_ASM_1 ADD DISK '/devices/diska5'
- drop diskgroup DG_ASM_1 including contents
- alter diskgroup DG_ASM_1 drop disk ORCL:DISK1
- alter diskgroup DG_ASM_1 check all
- create a disk group
1. create diskgroup RMANDG1 external redundancy
disk '/dev/raw/raw5', '/dev/raw/raw6'
2. alter diskgroup RMANDG1 mount;
3. select * from v$asm_diskgroup;
4. show parameter asm
5. If using an init.ora for the ASM instances you must add the raw devices and the added disk group.
ASM_DISKSTRING='/dev/raw/raw3','/dev/raw/raw4','/dev/raw/raw5','/dev/raw/raw6'
ASM_DISKGROUPS='ASMDG1','RMANDG1'
--------------------------------------------------------------------------------
STREAMS:
- data dict: DBA_CAPTURE, DBA_QUEUES, DBA_QUEUE_TABLES
- Oracle Streams can capture, stage (queue), and apply events in the database or another database
- works with Advanced Queuing (AQ)
- see http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96571/strmover.htm#50145
- Oracle-Supplied PL/SQL Packages
- DBMS_STREAMS_ADM Package
- DBMS_CAPTURE_ADM Package
- DBMS_PROPAGATION_ADM Package
- DBMS_APPLY_ADM Package
- DBMS_RULE_ADM Package
- DBMS_RULE Package
- DBMS_STREAMS Package
- conflict resolution
- Streams offers a variety of prebuilt conflict resolution handlers or
you can build your own conflict resolution handlers
- If a conflict is not resolved, or if a handler procedure raises an error,
then all events in the transaction that raised the error are saved in an exception queue for
later analysis and possible reexecution.
- query for capture process parameters
SELECT PARAMETER,
VALUE,
SET_BY_USER
FROM DBA_CAPTURE_PARAMETERS
WHERE CAPTURE_NAME = 'CAPTURE';
- query to display all of the Streams queues
SELECT q.OWNER, q.NAME, t.QUEUE_TABLE, q.USER_COMMENT
FROM DBA_QUEUES q, DBA_QUEUE_TABLES t
WHERE t.OBJECT_TYPE = 'SYS.ANYDATA' AND
q.QUEUE_TABLE = t.QUEUE_TABLE AND
q.OWNER = t.OWNER;
--------------------------------------------------------------------------------
TRACE ANOTHER SESSION – TRACE SESSION:
- Traces all SQL issued by another session and includes all bind variable values in the trace file
- steps
1. Log in as SYS
2. To turn on
execute sys.dbms_system.set_ev(<sid>, <serial#>, 10046, 12, '');
3. To turn off
execute sys.dbms_system.set_sql_trace_in_session(<sid>, <serial#>, FALSE);
- To create the trace on and off statements dynamically
- example to turn on
select 'execute sys.dbms_system.set_ev('|| sid ||','|| serial# ||',10046,12, '''');'
from v$session
where username = 'USERNAME'
and program like 'JDBC%';
- example to turn off
select 'execute sys.dbms_system.set_sql_trace_in_session('|| sid ||' ,'|| serial# ||',FALSE);'
from v$session
where username = 'USERNAME'
and program like 'JDBC%';
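From 10g onward the supported interface for the same trace is DBMS_MONITOR (waits + binds is the equivalent of event 10046 level 12; the sid/serial# values below are placeholders):

```sql
-- turn on
exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, -
     waits => true, binds => true);
-- turn off
exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);
```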
--------------------------------------------------------------------------------
WRAP UTILITY:
- notes
- Hides (encrypts - sorta) the algorithm and prevents reverse-engineering
- Wrap only the body, not the spec, so others can still see how to call the package
- Cannot un-wrap. The source code must be saved
- usage:
Enter the wrap command at your operating system prompt using the following syntax:
wrap iname=input_file [oname=output_file]
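For example (file names and credentials hypothetical):

```shell
wrap iname=my_pkg_body.sql oname=my_pkg_body.plb
# install the wrapped source as usual; only the obfuscated body is stored
sqlplus scott/tiger @my_pkg_body.plb
```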
--------------------------------------------------------------------------------
DBMS_METADATA – DEPENDENT DDL:
- sample script to get dependent DDL:
sqlplus -s <user>/<password>@<db> << EOF
SET LONG 2000000000
SET PAGESIZE 0
SET NEWPAGE 0
SET SPACE 0
SET LINESIZE 80
SET ECHO OFF
SET FEEDBACK OFF
SET VERIFY OFF
SET HEADING OFF
SET MARKUP HTML OFF SPOOL OFF
SET TERMOUT OFF
EXEC DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',true);
column db_stamp new_value file_name
select sys_context('userenv','db_name') db_stamp from dual;
spool get_bdr_grants_&file_name..sql
SELECT DBMS_METADATA.GET_DEPENDENT_DDL('OBJECT_GRANT','BDR','RESPDEV') from dual;
spool off
EOF
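The same session settings also work for base-object DDL via GET_DDL; following the BDR/RESPDEV example above:

```sql
-- DDL for the base table itself, then its dependent indexes
select dbms_metadata.get_ddl('TABLE', 'BDR', 'RESPDEV') from dual;
select dbms_metadata.get_dependent_ddl('INDEX', 'BDR', 'RESPDEV') from dual;
```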
--------------------------------------------------------------------------------
RAID:
- RAID
- RAID is an acronym for Redundant Arrays of Inexpensive Disks
- For load balancing and securing your data across multiple disks
- RAID levels
- none: No data protection
- 0: Block level striping of data across multiple disks to balance the load across the disk array.
No protection from data loss
- 1: Disk mirroring. Complete copy of each disk on at least one other disk.
- 1+0 / 0+1: Combination of RAID 0 and RAID 1.
Benefits of block level striping across the array and the security of disk mirroring.
Ideal RAID configuration in terms of performance and availability
But it can be expensive as it requires at least twice the amount of disk space
- 5: Stripes data and parity information across 3 or more disks
Overhead associated with this RAID level make it slower than the previous methods.
Requires fewer disks so it is cost effective.
- Oracle and RAID
- Write-intensive applications should be implemented using RAID 0+1/1+0
- Read-intensive applications can be implemented using RAID 5
- Oracle10g with Automatic Storage Management (ASM) recommends RAID 1+0
--------------------------------------------------------------------------------
HETEROGENEOUS SERVICES:
- From Oracle, allows use of SQL to access data stored in non-oracle systems.
- Generic connectivity
- allows access to any ODBC or OLE DB data store for which a third party ODBC
or OLE DB driver is available
- generic connectivity is free to use through Oracle,
BUT the third party ODBC driver (ex. DataDirect) must be purchased.
- tracing
- In the inithsagent.ora file, change the following to "on"
HS_FDS_TRACE_LEVEL = off
- Writes trace file to: $ORACLE_HOME/hs/log
- Installation
- An HS agent must be configured and can reside in the following places:
- On the same machine as the non-Oracle system
- On the same machine as the Oracle server
- On a machine different from either of these two
- For this sample install, the agent resides on the same machine as the Oracle server
1. Make sure Generic Connectivity using ODBC is installed in your Oracle DB
For this config, the qanxt on qa0 was used.
Run a query on SYS.HS_FDS_CLASS
If not found, run the caths.sql script located in
ORACLE_HOME\RDBMS\ADMIN\ as user sys or internal.
2. Setup a tns service name
For this config, the tnsnames.ora on qa0 was used.
hsagent =
(DESCRIPTION=
(address_list =
(ADDRESS=(PROTOCOL=tcp) (HOST=10.144.12.32) (PORT=1529))
)
(CONNECT_DATA=(SID=hsagent))
(HS = OK)
)
3. Add entry to listener
For this config, the qanxt_listener on qa0 was used.
sid_list_qanxt_listener =
(sid_list =
(sid_desc = (oracle_home = /apps/oracle/10.2) (sid_name = qanxt) )
(sid_desc= (SID_NAME=hsagent) (ORACLE_HOME=/apps/oracle/10.2) (PROGRAM=hsodbc))
)
4. Create the Database Link to the Non-Oracle System
The service name that is used in the USING clause of the CREATE DATABASE LINK
statement is the Oracle Net service name in the tnsnames.ora
create public database link hsagent
connect to "dbadmin"
identified by "<password>"
USING 'hsagent'
5. Configure ODBC
- A third party ODBC or OLE DB driver is required, as Oracle does not provide these
Used: http://www.datadirect.com/...
- For this config: ODBC is installed in /opt/odbc32v52
(tried /opt/odbc64v52/, but 32-bit was required)
- configure odbc.ini Sample:
[Sybase_Wire_Protocol]
QEWSD=39176
Driver=/opt/odbc32v52/lib/ivase22.so
Description=DataDirect 5.2 Sybase Wire Protocol
AlternateServers=
ApplicationName=
ApplicationUsingThreads=1
ArraySize=50
AuthenticationMethod=0
Charset=
ConnectionRetryCount=0
ConnectionRetryDelay=3
CursorCacheSize=1
Database=SAV7db
DefaultLongDataBuffLen=1024
EnableDescribeParam=0
EnableQuotedIdentifiers=0
GSSClient=
InitializationString=
Language=
LoadBalancing=0
LogonID=dbadmin
NetworkAddress=BEDVPS02,1600
OptimizePrepare=1
PacketSize=0
Password=
RaiseErrorPositionBehavior=0
ReportCodePageConversionErrors=0
SelectMethod=0
ServicePrincipalName=
TruncateTimeTypeFractions=0
WorkStationID=
6. Configure Heterogeneous Services init parameters in $ORACLE_HOME/hs/admin/.
- cd $ORACLE_HOME/hs/admin/.
- copy inithsodbc.ora to init<sid>.ora and configure,
  where <sid> matches the SID defined in the listener above
HS_FDS_CONNECT_INFO = Sybase_Wire_Protocol
HS_FDS_TRACE_LEVEL = off
HS_FDS_SHAREABLE_NAME = /opt/odbc32v52/lib/libodbc.so
set ODBCINI=/opt/odbc32v52/odbc.ini
7. Source the ODBC env file (includes ODBC lib in LD_LIBRARY_PATH):
$ cd /opt/odbc32v52
$ . odbc.sh
8. restart listener
9. Attempt an HS query
$ sqlplus "/ as sysdba"
> select * from "dbo.sysroles"@hsagent;
--------------------------------------------------------------------------------
ORACLE PARTNER NETWORK - OPN :
- ordering CD Packs - Media Packs
- Goto Oracle Store and get CD Pack name
- http://oraclestore.oracle.com
- Media Packs -> choose Operating System -> copy CD Pack name
- Goto Oracle Partner Network website:
http://partner.oracle.com (need username / password)
- Development -> Order Technology Software
- Click on link to download "Development and Demonstration Ordering Document"
- Complete all fields of the "Development and Demonstration Ordering Document" spreadsheet
- Partner Name: Boston Communication Group, Inc
- paste CD Pack name (copied above) into "enter Media pack name" field
- Goto 2nd tab in spreadsheet
- Place an "X" in "You elect to receive the programs only via shipment of the Media pack(s) listed in section A."
- At bottom, fill in Name and Title
- Email to prn-nas_in@oracle.com
- Will receive fax back, sign fax and return to Oracle
- Should receive package in 5-7 business days
--------------------------------------------------------------------------------
SETUP USER WITH SYSOPER PRIVILEGES TO SHUTDOWN/STARTUP DATABASES:
- if password file does not exist, create one
1. login as sys and shutdown the database.
2. cd $ORACLE_HOME/dbs
3. orapwd file=orapw<SID> password=<password> entries=5
4. edit the init.ora for the database and set REMOTE_LOGIN_PASSWORDFILE=exclusive
5. restart the database
- create user
create user <username>
identified by <password>
default tablespace users
temporary tablespace temp;
- grant access
grant connect, resource, sysoper to <username>;
--------------------------------------------------------------------------------
########## INSTALL - CREATE - UPGRADE DATABASE ##########
--------------------------------------------------------------------------------
ORAINVENTORY:
- info about oracle on a server
- /var/opt/oracle/oraInst.loc: location of inventory
--------------------------------------------------------------------------------
DATABASE VERSIONS - DATABASE OPTIONS - DATABASE COMPONENTS - INSTALLED OPTIONS:
- data dict: DBA_REGISTRY, DBA_PROPERTIES, PRODUCT_COMPONENT_VERSION, V$OPTION
- DBA_REGISTRY
- SELECT comp_name, status, substr(version,1,10) as version from dba_registry;
- version and compatible:
declare
v_ver varchar2(100);
v_com varchar2(100);
begin
dbms_utility.db_version(v_ver, v_com);
dbms_output.put_line('version = ' || v_ver);
dbms_output.put_line('compatibility = ' || v_com);
end;
- database options: V$OPTION (TRUE/FALSE if installed)
- PRODUCT_COMPONENT_VERSION: select * from product_component_version;
--------------------------------------------------------------------------------
MOUNTING CD:
- HP-UX: 11.00
1. ps -ef | grep pfs - if nothing continue with steps 2 & 3
2. as root: nohup /usr/sbin/pfs_mountd &
3. as root: nohup /usr/sbin/pfsd &
4. create directory for mount point, if necessary (ex. /SD_CDROM)
5. insert disk (then wait a few seconds)
6. as root: ioscan -funC disk to get CD disk info
7. as root: /usr/sbin/pfs_mount -t rrip -x unix /dev/dsk/<device> /SD_CDROM
...
8. when finished: /usr/sbin/pfs_umount /SD_CDROM
--------------------------------------------------------------------------------
INSTALLATIONS - RUNINSTALLER - UNIVERSAL INSTALLER:
- logs: <oraInventory location>/logs/<log file>
- universal installer:
- <cd mount point>/runInstaller
- silent mode:
1. edit response file: copy from CD <stage dir>/response/*.rsp
2. ./runInstaller -responsefile <response file> -silent -nowelcome
- new installations (will automatically call ODMA for upgrades)
1. create oracle unix user on server - put in dba group - set env
2. create directory /var/opt/oracle
3. ensure OS parameters (/etc/system) have been set
4. mount Oracle Enterprise Edition CD
5. as root: /usr/openwin/bin/xhost +
6. as oracle: set env to new version
7. as oracle: DISPLAY=<hostname>:0; export DISPLAY
8. as oracle: /cdrom/cdrom0/runInstaller
9. when asked to supply the datafile directory, only give root dir,
the oradata/ dirs are created for you
- installing additional products:
1. mount Oracle Enterprise Edition CD
2. as root: /usr/openwin/bin/xhost +
3. as oracle: set env to current version
4. shutdown all DB's using this ORACLE_HOME
5. stop listener
6. kill leftover SQL*Net connections
7. as oracle: DISPLAY=:0; export DISPLAY
8. as oracle: /cdrom/cdrom0/runInstaller
9. custom install then choose new products
careful not to choose unnecessary products, recompiling errors are common
--------------------------------------------------------------------------------
CREATING - MODIFYING - REMOVING - DBASSIST - DBCA:
- dbassist (DBCA):
1. as root: /usr/openwin/bin/xhost +
2. as oracle: set env to new version
3. as oracle: DISPLAY=:0; export DISPLAY
4. in version 8: $ORACLE_HOME/bin/dbassist
in version 9: $ORACLE_HOME/bin/dbca
- at the end, dbca can save the DB creation scripts instead of running
- custom create:
- allows for sizing of tablespaces, redo logs, SGA, etc.
- takes about 1 1/2 hours to create a database
--------------------------------------------------------------------------------
SQL.BSQ - sql.bsq:
- called at create database
- contains table definitions that make up the oracle server
- Oracle recommends limiting modifications
- you may modify storage parameters (initial, next, ...) and TS's
--------------------------------------------------------------------------------
OPATCH:
- OPatch supports:
- Applying an interim patch
- Rolling back the application of an interim patch
- Conflict resolution when applying an interim patch after previous patches have been applied
- Reporting on installed products and interim (one-off) patches
- download OPatch from metalink, patch number 2617419
- OPatch is perl-based: Perl may be obtained the following ways
- Perl is included in 9.2.0.1 under APACHE with a 'Typical' install of Enterprise Server
- if a Typical install was not performed, add by running the installer and selecting
HTTP Server under the 'Custom' installation
- Download a self-contained version of perl available on MetaLink under patch 2646235
- Freeware versions of perl are available for download at
http://www.sunfreeware.com/programlistsparc8.html#perl (Solaris only) OR
http://www.perl.com (all platforms).
- OPatch utilizes the same inventory as the Oracle Universal Installer (OUI).
- logs
- Unix: $ORACLE_HOME/.patch_storage/<patch_id>/
- NT: %ORACLE_HOME%\.patch_storage\<patch_id>\
- help
- opatch -help [ apply | lsinventory | rollback | version ] OR
- opatch.pl -help [ apply | lsinventory | rollback | version ]
- OPatch lsinventory lists the inventory for an ORACLE_HOME
- lsinventory [-all] [-oh <oracle_home>]
--------------------------------------------------------------------------------
UPGRADES - MIGRATIONS - ODMA:
- ODMA - Oracle Database Migration Assistant:
1. clean shutdown
2. backup
3. as root: /usr/openwin/bin/xhost +
4. as oracle: set env to new version
5. as oracle: DISPLAY=:0; export DISPLAY
6. set job_queue_processes = 0
7. $ORACLE_HOME/bin/odma
8. logs: $ORACLE_HOME/assistants/dbma/.
9. copy init, config, password, and Net8 files to new version and modify
10. init:
- job_queue_processes back to original
- compatible =
- service_name = (if necessary)
11. recompile invalids
12. test Net8
- manual upgrades:
1. clean shutdown
2. backup
3. install kernel code with runInstaller
4. exit after code install
5. set env to new version
6. copy init and Net8 files to new version
7. set job_queue_processes = 0
8. execute: $ORACLE_HOME/rdbms/admin/u0801050.sql (if 8.1.5 to 8.1.6)
9. may need to manually execute catparr.sql and catrep.sql
10. init:
- job_queue_processes back to original
- compatible =
- service_name = (if necessary)
11. if necessary, alter system disable restricted session;
12. recompile invalids
13. test Net8
14. edit oratab
--------------------------------------------------------------------------------
CATALOG:
- $ORACLE_HOME/rdbms/admin
- catalog.sql: data dictionary
- standard.sql: basic procedures and functions
- cataudit.sql
- catexp.sql
- catldr.sql
- catpart.sql
- catadt.sql: type and object views
- catsum.sql
- catproc.sql: procedures and functions
- catprc.sql
- catjobq.sql
- catrpc.sql
- pl/sql
- catspace.sql
- catsvrmg.sql: server manager
- rdbms functionality
- rules engine
- catsnap.sql
- dbmssnap.sql
- dbmspool.sql: access to shared pool
- prvtpool.plb
- pupbld.sql: product user profiles (as system)
- misc:
- catparr.sql: parallel server stats
- catrep.sql: replication option
- catio.sql: I/O stats
- catblock.sql: lock views
- catsnmp.sql: snmp for OEM
--------------------------------------------------------------------------------
OPTIMAL FLEXIBLE ARCHITECTURE - OFA:
$ORACLE_BASE default = /u01/app/oracle
$ORACLE_HOME: $ORACLE_BASE/product/<version>
/bin binaries
/dbs links to init files (for backward compatibility)
/network Net8
/ops ODCA (Oracle Database Creation Assistant)
/opsm Parallel Server management
/admin ops.sql and init file templates
/admin administration files
/<SID> admin files for this SID
/adhoc ad hoc SQL scripts
/arch archived redo logs
/bdump background traces
/cdump core dumps
/create create DB scripts
/exp export files
/pfile init files
/udump user SQL trace files
--------------------------------------------------------------------------------
ENV VARS - ENVIRONMENTAL VARIABLES:
- recommended
- ORACLE_HOME=$ORACLE_BASE/product/<version>
- CLASSPATH=$ORACLE_HOME/jlib for java functionality
- LD_LIBRARY_PATH
- for 32 bit: $ORACLE_HOME/lib:$ORACLE_HOME/ctx/lib
- for 64 bit: $ORACLE_HOME/lib32:$ORACLE_HOME/ctx/lib
- LD_LIBRARY_PATH_64: for 64 bit
$ORACLE_HOME/lib:$ORACLE_HOME/ctx/lib
- SHLIB_PATH: for HP
$ORACLE_HOME/lib32
- LIBPATH: for AIX
$ORACLE_HOME/lib32:$ORACLE_HOME/lib
- ORACLE_SID: instance name - recommended no more than 4 chars
- PATH:
- solaris: $ORACLE_HOME/bin:/usr/ccs/bin:/usr/bin:/etc:/usr/openwin/bin:
/usr/local/bin
- linux: $ORACLE_HOME/bin:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
- HP: $ORACLE_HOME/bin:/usr/bin:/etc:/usr/bin/X11:/usr/local/bin
- AIX: $ORACLE_HOME/bin:/etc:/usr/bin:/usr/bin/X11:/usr/lbin:/usr/local/bin
- TNS_ADMIN: $ORACLE_HOME/network/admin
- TWO_TASK: default connect string identifier to be used if one is not given
- misc oracle
- ORACLE_BASE=/u01/app/oracle
- NLS_LANG=AMERICAN_AMERICA.UTF8
- ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
- ORACLE_DOC=
- DBCA_RAW_CONFIG=
- SDD_BIN=$ORACLE_HOME
- others
- DISPLAY=localhost:0
- TZ=GMT
--------------------------------------------------------------------------------
SPFILE - SERVER PARAMETER FILE:
- data dict: V$PARAMETER, V$SPPARAMETER
- stores initialization parameters
- if spfile exists, init.ora is NOT read
- spfile may be removed if init.ora is present
- at instance startup, order of precedence:
1. spfile<SID>.ora
2. spfile.ora if spfile<SID>.ora does not exist
3. init<SID>.ora
- creating from existing pfile:
- create spfile from pfile='/usr/app/oracle/admin/dbv9/scripts/init.ora';
- exporting spfile contents to a file
create pfile = '/oracle/admin/zevdev1/pfile/initzevdev.ora'
from spfile = '/oracle/product/9.2.0/dbs/spfilezevdev1.ora';
- alter system set ...
- ALTER SYSTEM allows you to set, change, or delete (restore to default value) init parameters
- if using spfile, changes do not also have to be made manually to the init file
- scope
- BOTH (default if spfile is in use)
- MEMORY (default if no spfile): change not written to spfile
- SPFILE: change only performed to spfile, not to DB
--------------------------------------------------------------------------------
HIGH TRANSACTION SUPPORT:
- Oracle Configuration Parameters Recommended
- Processes=3X the maximum number of connections expected in the database
- Sessions =(1.1 * PROCESSES) + 5
- Transactions = (1.1 * SESSIONS)
- db_cache_size = large
- shared_pool = large
- large_pool = large, especially with MTS
- pga_aggregate_target = large
- open_cursors = 1000
- distributed_transactions = Transactions
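The sizing formulas above can be checked with quick shell arithmetic; this is a hypothetical helper (not an Oracle utility), and max_connections is an assumed workload figure:

```shell
# Hypothetical sizing helper for the formulas above; max_connections is assumed.
max_connections=200
processes=$(( max_connections * 3 ))            # Processes = 3x expected connections
sessions=$(( (processes * 11 + 9) / 10 + 5 ))   # Sessions = ceil(1.1 * PROCESSES) + 5
transactions=$(( (sessions * 11 + 9) / 10 ))    # Transactions = ceil(1.1 * SESSIONS)
echo "processes=$processes sessions=$sessions transactions=$transactions"
```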
--------------------------------------------------------------------------------
UNIX KERNEL PARAMETERS - DB SERVER PARAMETERS:
- solaris: /etc/system
- for Oracle8i: other programs may require values larger than these
set shmsys:shminfo_shmmax=1610612736 # .75 * server physical memory
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmni=100
set semsys:seminfo_semmsl=100 # 10 + PROCESSES init parameter
# semmns = 2x largest processes + 10x num DB's + processes for other DB's
set semsys:seminfo_semmns=2010 # max semaphores
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767
- HP-UX:
- use SAM (System Administrator Menu) to configure
- for Oracle8i: other programs may require values larger than these
- shared memory:
SHMMAX=1G
SHMMNI=200 # default
SHMSEG=120 # default - max = SHMMNI
- semaphores:
SEMVMX=? # max allowed semaphore value
SEMMNI=350 # max semaphores system wide - SEMMNI <= SEMMNS
SEMMNS=350 # max user accessible semaphores - SEMMNS <= (SEMMNI*SEMMSL)
- threads:
MAX_THREAD_PROC=256
- other:
maxdsiz & maxdsiz_64bit=1G # 1.9G is max
maxssiz & maxssiz_64bit=400M # max allowed
maxtsiz & maxtsiz_64bit=1G # 200M is min
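The semmns comment in the Solaris list above encodes a rule of thumb; a minimal shell sketch with assumed per-box figures (largest PROCESSES setting, number of databases, and PROCESSES of the other databases are example values):

```shell
# Hypothetical semmns calculator per the rule of thumb above; inputs are assumed.
largest_processes=400   # largest PROCESSES init parameter among the DBs
num_dbs=3               # number of databases on the server
other_processes=600     # sum of PROCESSES for the remaining databases
semmns=$(( 2 * largest_processes + 10 * num_dbs + other_processes ))
echo "set semsys:seminfo_semmns=$semmns"
```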
--------------------------------------------------------------------------------
NATIONAL LANGUAGE SUPPORT - NLS - DATABASE CHARACTER SET:
- data dict: V$NLS_PARAMETERS, NLS_DATABASE_PARAMETERS, V$PARAMETER
- NLS/character set info:
- NLS_DATABASE_PARAMETERS.nls_characterset
- select * from NLS_DATABASE_PARAMETERS;
- select * from V$NLS_PARAMETERS;
- select name, value from V$PARAMETER where name like '%nls%';
- select ksppinm, ksppdesc from x$ksppi where ksppinm like '%nls%';
- checking if languages are installed in kernel:
1. cd $ORACLE_HOME/ocommon/nls/admin/data
2. strings * | grep -i utf8
- If no values come back, go to the Installer, choose custom, install all languages.
- set in create database ... character set "<character_set>" ...
- altering character set:
- current character set must be a strict subset of the new character set
1. full backup
2. startup restrict
3. alter database character set utf8;
- UTF2: Multi-byte (2 byte) character set
- UTF8: Variable length character set
- problem when going to a DB with a UTF8 character set from DB's below 7.3.4
ERROR: ORA-03106: fatal two-task communication protocol error
- NLS datatypes: nchar, nvarchar2, nclob
- querying NLS datatypes: add an 'N'
select * from emp where emp_name like N'Mike%';
--------------------------------------------------------------------------------
32-BIT or 64-BIT ORACLE:
- determine by executing: file $ORACLE_HOME/bin/oracle
--------------------------------------------------------------------------------
SWAP SPACE FOR ORACLE:
- swap space recommended by Oracle, checked during 10g install
- if RAM = 1024MB to 2048MB then 1.5 times RAM
- if RAM > 2048MB and < 8192MB then match RAM
- if RAM > 8192MB then .75 times RAM
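The guidance above can be sketched as a small shell function (a hypothetical helper, not an Oracle check; the exact 8192MB boundary is treated as the 0.75x case here):

```shell
# Hypothetical calculator for the swap guidance above; ram_mb is in megabytes.
swap_for_ram() {
  ram_mb=$1
  if [ "$ram_mb" -le 2048 ]; then
    echo $(( ram_mb * 3 / 2 ))   # 1.5 x RAM for 1024-2048 MB
  elif [ "$ram_mb" -lt 8192 ]; then
    echo "$ram_mb"               # match RAM between 2 GB and 8 GB
  else
    echo $(( ram_mb * 3 / 4 ))   # 0.75 x RAM above 8 GB
  fi
}
swap_for_ram 1024    # 1536 MB
swap_for_ram 4096    # 4096 MB
swap_for_ram 16384   # 12288 MB
```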
--------------------------------------------------------------------------------
########## ORACLE NET - NET8 - SQL*NET - SQLNET ##########
--------------------------------------------------------------------------------
DATABASE INSTANCE REGISTRATION - SERVICE REGISTRATION:
- performed by PMON
- allows connection load balancing and connect time failover
- instances register themselves with listeners on the local machine:
31-MAR-2000 10:22:39 * service_register * * 0 in listener.log
- service_names (init): service_names = <db_name>.<db_domain>, <alias>
- local_listener (init): if 1521 default port not used
local_listener="(address_list=(address=(protocol=tcp)(host=<host>)(port=<port>)))"
- SID_LIST_<listener_name> is not required for DBs in listener.ora
- It is required for external procedures (PLSExtProc)
- .world suffix on tnsnames net service names is obsolete
--------------------------------------------------------------------------------
LISTENER - LISTENER.ORA:
- lsnrctl: Listener Control
- starting: lsnrctl start [<listener_name>] - name defined in listener.ora
- stopping: lsnrctl stop [<listener_name>]
- reload: lsnrctl reload [<listener_name>] - applies changes without stopping
- status: lsnrctl status
- services: lsnrctl services
- help: lsnrctl help
- new log: lsnrctl set log_file <file_name>
- trace: lsnrctl trace 16
- listener parameters:
- logging_<listener_name>=off - default is on
- log_directory_<listener_name>=<dir>
default is dir where listener was started
- log_file_<listener_name>=<file> - default is <listener_name>.log
- trace_level_<listener_name>=<val> - val=off, user, admin, or support
- trace_directory_<listener_name>=<dir>
default is $ORACLE_HOME/network/trace
- trace_file_<listener_name>=<file>
- queuesize = modify for large connect volumes
- global_names = FALSE allows link names to be different from SIDs
- tracing
- command line
lsnrctl trace 16 to turn the trace on
lsnrctl trace off to turn the trace off
- Or set the following listener.ora parameters and reload:
TRACE_LEVEL_<listener_name>=16
TRACE_FILE_<listener_name>=listener.trc - listener.trc is the default
TRACE_DIRECTORY_<listener_name>=$ORACLE_HOME/network/trace
- listener load balancing: multiple listeners for a single DB
- defining TCP/IP port:
file = /etc/services - entry = oralistener <port>/tcp
- sample with database instance registration (no SID_LIST for DBs):
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.5.34.163)(PORT = 2020)(queuesize=100))
)
)
)
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /app/oracle/product/8.1.6)
(PROGRAM = extproc)
)
)
TRACE_LEVEL_LISTENER = OFF
- sample without database instance registration:
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ditto)(PORT = 1521))
)
)
(DESCRIPTION =
(PROTOCOL_STACK =
(PRESENTATION = GIOP)
(SESSION = RAW)
)
(ADDRESS = (PROTOCOL = TCP)(HOST = ditto)(PORT = 2481))
)
)
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u01/app/oracle/product/8.1.6)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = xxx)
(ORACLE_HOME = /u01/app/oracle/product/8.1.6)
(SID_NAME = xxx)
)
)
TRACE_LEVEL_LISTENER = OFF
--------------------------------------------------------------------------------
TNSNAMES - TNSNAMES.ORA - NET SERVICE NAMES:
- DB links: net_service_name = SYS.DBA_DB_LINKS.host
- confirming DB link: select * from global_name@<db_link>;
- TWO_TASK is the default service name
- 8.0:
<net_service_name> =
(description=
(address_list=
(address=...)
)
(connect_data=(sid=sales))
)
- 8.1:
<net_service_name> =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = <host>)(PORT = <port>))
)
(CONNECT_DATA =
(service_name=<service_name>)
(server=dedicated) - optional parameter to disable MTS
)
)
- 8.1 with connect time failover (NOTE: requires local_listener init parameter)
<net_service_name> =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = <host1>)(PORT = <port1>))
(ADDRESS = (PROTOCOL = TCP)(HOST = <host2>)(PORT = <port2>))
)
(CONNECT_DATA =
(service_name=<service_name>)
)
)
- CTF for instances with different names
- modify service_names in inst2 to include an entry for inst1
- on inst1: service_names=inst1
- on inst2: service_names=inst2, inst1
- bounce inst2
- configure tnsnames.ora service as usual for CTF with a SERVICE_NAME = inst1
- 8.1 with connect time failover and TAF (transparent application failover):
<net_service_name> =
(DESCRIPTION =
(failover=on)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = <host1>)(PORT = <port1>))
(ADDRESS = (PROTOCOL = TCP)(HOST = <host2>)(PORT = <port2>))
)
(CONNECT_DATA =
(service_name=<service_name>)
(instance_name=<instance_name>) - (OPS only)
(failover_mode=
(backup=<backup_net_service_name>)
(type=select) - select, session, none
(method=basic) - basic (connect to backup at failover time), preconnect (backup pre-established)
)
)
)
- NOTE: failover=on will not work with static service configuration (SID_LIST entries),
which may be required for external procedures, OEM, and HS
- external procedures:
EXTPROC_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
(CONNECT_DATA =
(SID = PLSExtProc)
(PRESENTATION = RO)
)
)
- sample:
XXX =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = ditto)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = xxx)
)
)
EXTPROC_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
(CONNECT_DATA =
(SID = PLSExtProc)
(PRESENTATION = RO)
)
)
--------------------------------------------------------------------------------
MTS - MULTITHREADED SERVER - SHARED SERVER:
- data dict: V$MTS, V$DISPATCHER, V$DISPATCHER_RATE (V8), V$SHARED_SERVER,
V$QUEUE, V$SESSION.server
- user processes share a limited number of server processes (shared server)
- SGA:
- large_pool: approx 300k per connection, 100 sessions = 30m
- shared pool:
- with MTS, PROCESSES can be small. So, set SESSIONS to approx 1.1 * total connections
- total dispatchers and shared servers limited by OS limit for processes
- listener
- should be started before DB
- lsnrctl serv to see dispatchers registered
- flow:
1. listener gets an incoming connection request and determines if a
shared server process can be used
2. listener gives the user process the address of a dispatcher process
3. dispatcher places the user request on the request queue in the SGA
4. shared server process picks up the request and interacts with the DB
5. shared server returns results to the response queue of that dispatcher
6. dispatcher returns results to the user process
- dispatchers:
- manual increase: alter system set mts_dispatchers = ...;
- ex. alter system set mts_dispatchers = '(protocol=tcp) (dispatchers=10)
(index=0)';
note: index selected from V$DISPATCHER.conf_index
- shutdown dispatchers: alter system shutdown immediate 'D002';
note: 'D002' selected from V$DISPATCHER.name
- administer connections between user and shared server processes
- DB registers dispatchers with listener (local_listener)
- set number of dispatchers to concurrent sessions / OS connections per process
- shared server processes:
- manual change: alter system set mts_servers = <n>;
- to temporarily disable MTS set mts_servers = 0
- Oracle will automatically add server processes based on the length of the
request queue in the SGA
- start with one shared server for every ten connections, then tweak
- connection pooling:
- allows dispatchers to support more users
- configured with pool=on keywords in mts_dispatchers
- connection manager - multiplexing:
- enables several users to connect to one dispatcher at the same time
- configured with multiplex=on keywords in mts_dispatchers
- unix processes:
- dispatchers: ora_d0nn_SID
- shared servers: ora_s0nn_SID
- init parameters:
local_listener="(address_list=(address=(protocol=tcp)(host=<host>)(port=<port>)))"
mts_servers=<n> - default 1
mts_max_servers=<n> - default 20 or 2*mts_servers
mts_dispatchers="(address=(protocol=tcp)(host=<host>))(dispatchers=<n>)"
mts_max_dispatchers=<n> - default 5
mts_circuits=<n> - default = sessions
mts_sessions=<n> - default is lesser of mts_circuits or (sessions - 5)
processes - increase to allow for mts_max_servers
large_pool - increase for MTS use
shared_pool
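The starting points above (dispatchers = concurrent sessions / OS connections per process, roughly one shared server per ten connections) can be sketched with shell arithmetic; all inputs here are assumed example figures, not measured values:

```shell
# Hypothetical starting-point calculator for the MTS sizing rules above.
concurrent_sessions=500
conns_per_process=100   # assumed OS limit on connections per dispatcher process
dispatchers=$(( (concurrent_sessions + conns_per_process - 1) / conns_per_process ))
shared_servers=$(( (concurrent_sessions + 9) / 10 ))   # ~1 per 10 connections
echo "mts_dispatchers=$dispatchers mts_servers=$shared_servers"
```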
--------------------------------------------------------------------------------
JDBC:
- thin driver:
- connecting without using SQL*Net on the client side
- example: jdbc:oracle:thin:<user>/<password>@<host>:<port>:<SID>
- jdbc-oci:
- example: jdbc:oracle:oci8:@<tns_alias>
- sample thin driver with connect time failover (flattened from a JBoss -ds.xml datasource):
datasource name: ampdDS
connection url: jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=x.x.x.x)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=x.x.x.x)(PORT=1521))(LOAD_BALANCE=YES)(FAILOVER=YES))(CONNECT_DATA=(SERVICE_NAME=LSPM)))
driver class: oracle.jdbc.driver.OracleDriver
user: user
password: pass
exception sorter: org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter
pool settings: 20 / 70 / 5000000 / 1
--------------------------------------------------------------------------------
SQLNET.ORA:
- server domain:
- names.default_domain = us.acme.com
- appended to unqualified service names (ex. server -> server.us.acme.com)
- naming methods:
- names.directory_path = (tnsnames, onames, hostname)
- naming methods, in order, the server can use
- DCD: Dead Connection Detection
- SQLNET.EXPIRE_TIME =
- detects connections terminated unexpectedly, and releases associated resources
- if the probe finds a terminated connection, or a connection that is no longer in use,
it returns an error, causing the server process to exit
- intended for the database server
- restrict Oracle Net (SQL*Net) access from clients: valid node checking
1. tcp.validnode_checking = yes
2. tcp.excluded_nodes = (10.3.2.222, 10.100.1.81)
3. stop/start listener (note: lsnrctl reload does not enforce valid node checking)
--------------------------------------------------------------------------------
########## REAL APPLICATION CLUSTER - ORACLE PARALLEL SERVER and RAW DEVICES – RAC/OPS ##########
--------------------------------------------------------------------------------
SRVCTL - SERVER CONTROL UTILITY
- notes
- not required for RAC
- to use srvctl, config info must have been created using the srvctl add command OR DBCA
- log: $ORACLE_HOME/srvm/log/.
- examples
- start ASM (for each node): srvctl start asm -n <node_name> -i <asm_inst_name>
- stop ASM (for each node): srvctl stop asm -n <node_name> -i <asm_inst_name>
- start database: srvctl start database -d <db_name>
- stop database: srvctl stop database -d <db_name>
- start instance: srvctl start instance -d pmpr -i <inst_name>
- stop instance: srvctl stop instance -d pmpr -i <inst_name>
- start listener (vip, gsd, and ons): srvctl start nodeapps -n <node_name>
- stop listener (vip, gsd, and ons): srvctl stop nodeapps -n <node_name>
- add instance: srvctl add instance -d <db_name> -i <inst_name> -n <node_name>