Hartmut und Angela Inerle

Oracle

As an Oracle DBA working for an international company I gather experience with Oracle, some of which I will publish here.

Presentation: Backup with Oracle RMAN, held at the DOAG 2002

This is a presentation I held five years ago at the DOAG conference in 2002. The presentation is in German. The related scripts can be found here.

Workflow to migrate tablespaces from dictionary managed to locally managed

This is a workflow to migrate a database from dictionary managed to locally managed tablespaces. The SYSTEM tablespace can only be locally managed if all other tablespaces are locally managed.

-- tablespaces to migrate
select t.tablespace_name, EXTENT_MANAGEMENT, ENABLED
from dba_tablespaces t, v$datafile d, dba_data_files f
where t.tablespace_name = f.tablespace_name and f.file_id = d.file#;

-- set temporary tablespace to system
alter database default temporary tablespace system;
--
select * from dba_data_files;

--
-- drop temporary tablespace
drop tablespace temp;
-- recreate tablespace temp
create temporary tablespace temp tempfile '/home1_liverpool/oradata/cqlhs34/temp/temp_1.dbf' size 40M extent management local uniform size 256k;
alter database default temporary tablespace temp;
-- set tablespaces into read only mode (not RBS, TEMP)
alter tablespace data read only;

-- check the compatible parameter; it should be 9.2.0
select * from v$parameter where name='compatible';

-- restart the database in restricted mode and activate the change of the compatible parameter
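-- a sketch of this step, assuming an spfile is in use (the value must match your target release):
alter system set compatible='9.2.0' scope=spfile;
shutdown immediate
startup restrict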

-- migrate system tablespace
begin dbms_space_admin.TABLESPACE_MIGRATE_TO_LOCAL('SYSTEM'); end;
/

-- set tablespaces into read write mode
alter tablespace data read write;

-- problem with the RBS tablespace: if the tablespace has been migrated, drop the rollback segments and recreate them before restarting the database

-- restart the database to make sure that everything is running fine
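-- afterwards the extent management of all tablespaces can be verified, for example (a sketch):
select tablespace_name, extent_management, allocation_type from dba_tablespaces;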

Get all Oracle parameters, hidden and unhidden

Connecting as SYS with SYSDBA rights is necessary.
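For example (assuming operating system authentication is configured):

sqlplus / as sysdba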

select x.ksppinm name, ksppdesc description, y.kspftctxvl value, y.kspftctxdf isdefault, decode(bitand(y.kspftctxvf,7),1,'MODIFIED',4,'SYSTEM_MOD','FALSE') ismod,
decode(bitand(y.kspftctxvf,2),2,'TRUE','FALSE') isadj
from sys.x$ksppi x, sys.x$ksppcv2 y
where x.inst_id = userenv('Instance') and y.inst_id = userenv('Instance') and x.indx+1 = y.kspftctxpn
order by name ;
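To look up a single parameter the query can be reduced and filtered on the name; the parameter name below is only an example (a sketch):

select x.ksppinm name, y.kspftctxvl value
from sys.x$ksppi x, sys.x$ksppcv2 y
where x.inst_id = userenv('Instance') and y.inst_id = userenv('Instance')
and x.indx+1 = y.kspftctxpn
and x.ksppinm like '%optimizer_index_cost_adj%';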

Workflow to change the size of a temporary tablespace

Starting with Oracle 9.2 a simple resize of an enlarged temporary tablespace is no longer possible.
This is a workflow to create a new, smaller temporary tablespace and to set the user defaults to it.
Finally the previous temporary tablespace will be dropped.

If the temporary tablespace is still in use an error message will be generated. In this case wait until the session has finished its sorting.
The next sort will then use TEMP1 as the sort tablespace.

-- locate your temporary tablespaces
select * from dba_temp_files;

create temporary tablespace TEMP1 tempfile '<PATH_TO_TEMPFILE>/TEMP1_1.dbf' size 2000M extent management
local uniform size 128K;
 
-- Generate statements to set the new tablespace as the default temporary tablespace for all users in the database.
 select 'alter user '||username||' temporary tablespace temp1;' from dba_users where temporary_tablespace='TEMP';
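-- the generated statements can be spooled to a file and executed, for example
-- (a sketch; the spool file name is just an example):
set heading off
set feedback off
spool /tmp/set_temp1.sql
select 'alter user '||username||' temporary tablespace temp1;' from dba_users where temporary_tablespace='TEMP';
spool off
@/tmp/set_temp1.sql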

-- change default temporary tablespace
alter database default temporary tablespace TEMP1;
 
-- Drop the old tablespace.
 drop tablespace temp including contents and datafiles;
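-- the new default temporary tablespace can be verified like this (a sketch):
select property_value from database_properties where property_name = 'DEFAULT_TEMP_TABLESPACE';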


Purge SYSAUX tablespace

This is a set of commands which can be used to purge the SYSAUX tablespace.
1) PURGE_STATS is executed in smaller steps; otherwise the RBS tablespace would be blown up.
2) Oracle sometimes builds new extents for the SYSAUX stats tables in other tablespaces. They will be moved back to the SYSAUX tablespace.
3) The index rebuild will decrease the size of the indexes. They are mostly larger than the raw data.
4) The indexes are partly function based. Therefore the order in which the index rebuilds are done is important; otherwise you have to re-execute these steps again and again.
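Before and after the purge the space consumption of the single SYSAUX components can be checked, for example with a query like this (a sketch):

select occupant_name, schema_name, space_usage_kbytes
from v$sysaux_occupants
order by space_usage_kbytes desc;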

exec DBMS_STATS.PURGE_STATS(SYSDATE-180);
exec DBMS_STATS.PURGE_STATS(SYSDATE-160);
exec DBMS_STATS.PURGE_STATS(SYSDATE-140);
exec DBMS_STATS.PURGE_STATS(SYSDATE-120);
exec DBMS_STATS.PURGE_STATS(SYSDATE-100);
exec DBMS_STATS.PURGE_STATS(SYSDATE-80);
exec DBMS_STATS.PURGE_STATS(SYSDATE-60);
exec DBMS_STATS.PURGE_STATS(SYSDATE-40);
exec DBMS_STATS.PURGE_STATS(SYSDATE-20);
exec DBMS_STATS.PURGE_STATS(SYSDATE-7);
alter table WRI$_OPTSTAT_TAB_HISTORY move tablespace sysaux;
alter table WRI$_OPTSTAT_IND_HISTORY move tablespace sysaux;
alter table WRI$_OPTSTAT_HISTHEAD_HISTORY move tablespace sysaux;
alter table WRI$_OPTSTAT_HISTGRM_HISTORY move tablespace sysaux;
alter table WRI$_OPTSTAT_AUX_HISTORY move tablespace sysaux;
alter table WRI$_OPTSTAT_OPR move tablespace sysaux;
alter table WRH$_OPTIMIZER_ENV move tablespace sysaux;
Alter index SYS.I_WRI$_OPTSTAT_IND_ST rebuild TABLESPACE SYSAUX;
Alter index SYS.I_WRI$_OPTSTAT_IND_OBJ#_ST rebuild TABLESPACE SYSAUX;
Alter index SYS.I_WRI$_OPTSTAT_HH_ST rebuild TABLESPACE SYSAUX;
Alter index SYS.I_WRI$_OPTSTAT_HH_OBJ_ICOL_ST rebuild TABLESPACE SYSAUX;
Alter index SYS.I_WRI$_OPTSTAT_TAB_ST rebuild TABLESPACE SYSAUX;
Alter index SYS.I_WRI$_OPTSTAT_TAB_OBJ#_ST rebuild TABLESPACE SYSAUX;
Alter index SYS.I_WRI$_OPTSTAT_OPR_STIME rebuild TABLESPACE SYSAUX;
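Afterwards the size of the moved statistics tables and their indexes can be checked, for example (a sketch):

select segment_name, segment_type, tablespace_name, round(bytes/1024/1024) mb
from dba_segments
where owner = 'SYS' and (segment_name like 'WRI$_OPTSTAT%' or segment_name like 'I_WRI$_OPTSTAT%')
order by bytes desc;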

Oracle Restart and ACFS Filesystems on Solaris x86

In a single node environment Oracle Restart is not able to handle ACFS file systems during an automatic startup. Oracle's suggestion is to put a startup script into /etc/rc3.d, but this does not work. <<LINK>>

A definition for the restart in /etc/rc3.d:

-rwxr--r--   5 root     sys          824 Aug 16  2006 S84appserv
lrwxrwxrwx   1 root     root          17 Jun 27 18:18 S96ohasd -> /etc/init.d/ohasd
lrwxrwxrwx   1 root     root          25 Jul  4 16:43 S97acfsload -> /etc/init.d/acfs_start.sh

This leads to problems during a reboot. Although acfsload is started after ohasd, ohasd is a daemon that keeps running in the background, so the rc startup sequence continues immediately with the next script. The acfsload script is therefore started before the ohasd daemon has started the +ASM instance, and the startup of +ASM then unloads the ACFS modules again.

This can be seen in /var/adm/messages.

The ohasd daemon starts and the ACFS / ADVM modules made available by the acfs_start script are unloaded.

/var/adm/messages :
Jul  4 16:15:12 sylt root: [ID 702911 user.alert] Oracle HA daemon is enabled for autostart.
Jul  4 16:15:12 sylt grid: [ID 702911 user.error] exec /oracle/grid/11.2.0/grid/perl/bin/perl -I/oracle/grid/11.2.0/grid/perl/lib /oracle/grid/11.2.0/grid/bin/crswrapexece.pl /oracle/grid/11.2.0/grid/crs/install/s_crsconfig_sylt_env.txt /oracle/grid/11.2.0/grid/bin/ohasd.bin "reboot"
Jul  4 16:15:20 sylt mac: [ID 736570 kern.info] NOTICE: bnx1 unregistered
Jul  4 16:15:21 sylt oracleadvm: [ID 980447 kern.notice] NOTICE: [Oracle ADVM] Module unloaded.
Jul  4 16:15:21 sylt oracleoks: [ID 679573 kern.notice] NOTICE: ACFSK-0039: Module unloaded.

To get the autostart working, the startup script has to wait until the +ASM instance has been started.

This can be done by using a forking startup script which does the following:

acfs_startup:
/etc/init.d/acfs_enable.sh &
exit 0

acfs_enable.sh:
#  Load acfs filesystem driver
# Wait until the ASM instance has been started; about 26 processes are expected
 countasm=0
 while [ $countasm -lt 20 ]; do
      sleep 10
      countasm=`ps -ef | grep '+ASM' | grep -v grep | wc -l`
done
# now we can start
ORACLE_HOME=/oracle/grid/11.2.0/grid; export ORACLE_HOME
$ORACLE_HOME/bin/acfsload start > /tmp/acfs_load.log 2>&1
# Get the acfs driver status after the load
$ORACLE_HOME/bin/acfsdriverstate -orahome $ORACLE_HOME version > /tmp/acfsversion.log 2>&1
$ORACLE_HOME/bin/acfsdriverstate -orahome $ORACLE_HOME loaded > /tmp/acfsloaded.log 2>&1
$ORACLE_HOME/bin/acfsdriverstate -orahome $ORACLE_HOME supported > /tmp/acfssupported.log 2>&1
$ORACLE_HOME/bin/acfsdriverstate -orahome $ORACLE_HOME installed > /tmp/acfsstate.log 2>&1
# Before enabling a volume it has to be disabled explicitly
su - grid -c "/etc/init.d/acfs_disable_grid.sh" > /tmp/acfs_disable.log 2>&1
# Enable the volume
su - grid -c "/etc/init.d/acfs_enable_grid.sh" > /tmp/acfs_enable.log 2>&1
# Mount the ACFS file systems
/usr/lib/fs/acfs/mount -o all > /tmp/acfs_mount.log 2>&1
ls -l /acfsvol > /tmp/acfsvol_ls.log 2>&1
exit 0

Comments
At first the script waits until the +ASM instance is up and running by counting the number of its processes.
Then we can set the environment and load the drivers. The redirections ("> ... 2>&1") are for debugging purposes and can be removed.
The explicit disabling of the ACFS volume is necessary because ACFS seems to be in an undefined state after a restart.
This leads to the situation that acfs_enable.sh can be executed but the volume stays disabled. Only an explicit disable running before the enable script puts it back into a state where it can be enabled.
Finally the file system can be mounted.

acfs_disable_grid.sh:
# Disable the filesystem
/etc/init.d/acfs_disable_grid.sh 2>&1 1>/dev/null >/dev/null
exit 0
 
acfs_enable_grid.sh:
# enable volume
/oracle/grid/11.2.0/grid/bin/asmcmd  volenable -G ACFSDG ACFSVOL
exit 0
 
acfs_disable_grid.sh:
# disable volume
/oracle/grid/11.2.0/grid/bin/asmcmd  voldisable -G ACFSDG ACFSVOL
exit 0
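Whether the volume has really been enabled can also be checked inside the +ASM instance, for example with a query like this (a sketch):

select volume_name, state, usage, mountpath from v$asm_volume;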

This piece of code unmounts the ACFS volumes during shutdown as /etc/rc3.d/K01acfs_disable:
# unmount acfs file systems
/usr/sbin/umountall -F acfs