PRKP-1001 : Error starting instance

srvctl start instance or srvctl start database fails with an error, but SQL*Plus can start the instance without any problem.

srvctl start instance -d FDRPRD -i FDRPRD1
PRKP-1001 : Error starting instance fdrprd1 on node ls1orafdrr1sb
CRS-0215: Could not start resource 'ora.fdrprd.fdrprd1.inst'.


The issue was that the instance name registered in CRS and the instance name in the spfile did not have the same CASE.
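
One quick way to confirm the case mismatch (a side check, not part of the original steps; it assumes the instance has been started with SQL*Plus and you are connected as SYSDBA) is to compare the instance name the database reports, and the SID prefixes stored in the spfile, with the ora.<db>.<inst>.inst resource names shown by crs_stat:

-- name the running instance reports for itself
SELECT instance_name FROM v$instance;

-- SID prefixes recorded in the spfile; their case should match the CRS resource names
SELECT DISTINCT sid FROM v$spparameter WHERE sid <> '*';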

1) Run crs_stat (this is a customized wrapper script); it shows the resources registered in lower case:

HA Resource Target State
ora.fdrlup.db OFFLINE OFFLINE
ora.fdrlup.fdrlup1.inst ONLINE OFFLINE
ora.fdrlup.fdrlup2.inst ONLINE OFFLINE
ora.fdrlup.fdrlup3.inst ONLINE OFFLINE
ora.ls1orafdrr1sb.gsd ONLINE ONLINE on ls1orafdrr1sb
ora.ls1orafdrr1sb.ons ONLINE ONLINE on ls1orafdrr1sb
ora.ls1orafdrr1sb.vip ONLINE ONLINE on ls1orafdrr1sb
ora.ls1orafdrr2sb.gsd ONLINE ONLINE on ls1orafdrr2sb
ora.ls1orafdrr2sb.ons ONLINE ONLINE on ls1orafdrr2sb
ora.ls1orafdrr2sb.vip ONLINE ONLINE on ls1orafdrr2sb
ora.ls1orafdrr3sb.gsd ONLINE ONLINE on ls1orafdrr3sb
ora.ls1orafdrr3sb.ons ONLINE ONLINE on ls1orafdrr3sb
ora.ls1orafdrr3sb.vip ONLINE ONLINE on ls1orafdrr3sb



2) Check the imon log file for errors:
/opt/oracle/product/10.2.0/log/ls1orafdrr1sb/racg/imon_FDRPRD.log
You should see an error similar to:
SQL> ORA-01506: missing or illegal database name Disconnected


The corresponding resource log, ora.FDRPRD.FDRPRD1.inst.log, shows:

Oracle Database 10g CRS Release 10.2.0.1.0 Production Copyright 1996, 2005 Oracle. All rights reserved.
2009-02-20 15:08:10.022: [ RACG][3086904064] [2840][3086904064][ora.default]: Cannot read string from /opt/oracle/product/10.2.0/racg/init/ls1orafdrr1sb/FDRPRD/.connect
2009-02-20 15:08:12.450: [ RACG][3086904064] [2840][3086904064][ora.FDRPRD.FDRPRD1.inst]: clsrcremaction: clsrcrpcreq message returned: 1 '
SQL*Plus: Release 10.2.0.3.0 - Production on Fri Feb 20 15:08:11 2009
Copyright (c) 1982, 2006, Oracle. All Rights Reserved.
Enter user-name: Connected to an idle instance.
SQL> ORA-01506: missing or illegal database name
SQL> Disconnected



3) Remove the instance and database registrations from CRS

srvctl remove instance -d FDRPRD -i FDRPRD1
Remove instance FDRPRD1 from the database FDRPRD? (y/[n]) y
srvctl remove instance -d FDRPRD -i FDRPRD2
Remove instance FDRPRD2 from the database FDRPRD? (y/[n]) y
srvctl remove instance -d FDRPRD -i FDRPRD3
[ls1orafdrr1sb:oracle]/opt/oracle/product/10.2.0/log/ls1orafdrr1sb/racg>

srvctl remove database -d FDRPRD
Remove the database FDRPRD? (y/[n]) y


4) Check the OCR dump file by running the ocrdump command. You should no longer see any reference to the instance or database.


5) Add the database and instances back


srvctl add database -d FDRPRD -o $ORACLE_HOME -p
/dev/raw/spfileFDRPRDRD -r PHYSICAL_STANDBY -s mount
[ls1orafdrr1sb:oracle]/opt/oracle/product/10.2.0/log/ls1orafdrr1sb/racg>

srvctl add instance -d FDRPRD -i FDRPRD3 -n ls1orafdrr3sb
[ls1orafdrr1sb:oracle]/opt/oracle/product/10.2.0/log/ls1orafdrr1sb/racg>

srvctl add instance -d FDRPRD -i FDRPRD2 -n ls1orafdrr2sb
[ls1orafdrr1sb:oracle]/opt/oracle/product/10.2.0/log/ls1orafdrr1sb/racg>

srvctl add instance -d FDRPRD -i FDRPRD1 -n ls1orafdrr1sb


6) Recheck the OCR dump file by running ocrdump; you should now see the references for the instance and database.



7) Rerun crs_stat to check the contents; the resources should now show the correct case:


HA Resource Target State
----------- ------ -----
ora.FDRPRD.FDRPRD1.inst OFFLINE OFFLINE
ora.FDRPRD.FDRPRD2.inst OFFLINE OFFLINE
ora.FDRPRD.FDRPRD3.inst OFFLINE OFFLINE
ora.FDRPRD.db OFFLINE OFFLINE
ora.ls1orafdrr1sb.gsd ONLINE ONLINE on ls1orafdrr1sb
ora.ls1orafdrr1sb.ons ONLINE ONLINE on ls1orafdrr1sb
ora.ls1orafdrr1sb.vip ONLINE ONLINE on ls1orafdrr1sb
ora.ls1orafdrr2sb.gsd ONLINE ONLINE on ls1orafdrr2sb
ora.ls1orafdrr2sb.ons ONLINE ONLINE on ls1orafdrr2sb
ora.ls1orafdrr2sb.vip ONLINE ONLINE on ls1orafdrr2sb
ora.ls1orafdrr3sb.gsd ONLINE ONLINE on ls1orafdrr3sb
ora.ls1orafdrr3sb.ons ONLINE ONLINE on ls1orafdrr3sb
ora.ls1orafdrr3sb.vip ONLINE ONLINE on ls1orafdrr3sb


8) srvctl start database -d FDRPRD

How to drop Lobs

LOBs can be dropped using the ALTER TABLE command.
A particular column of LOB type can be dropped using

ALTER TABLE <table_name> DROP COLUMN <lob_column_name>;

The LOB segments are also dropped when the table is dropped with the DROP TABLE command.

From 10g onwards, a dropped table goes to the recycle bin, so the dropped LOB objects are still visible in the dictionary.
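
To see what the recycle bin is still holding for a schema (a quick check, not part of the original steps; it reuses the same &UserName substitution as the script below), query dba_recyclebin:

-- dropped tables, LOB segments and LOB indexes still held by the recycle bin
SELECT owner, original_name, object_name, type, droptime
FROM dba_recyclebin
WHERE owner = upper('&UserName')
ORDER BY droptime;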

I dropped all the objects using,

select 'DROP '||object_type||' '||owner||'.'||object_name||';'
from dba_objects
where owner in (upper('&UserName'))
and object_type in ('TABLE', 'VIEW', 'PROCEDURE', 'FUNCTION', 'PACKAGE', 'SEQUENCE', 'TRIGGER')
order by object_id desc;

But even then, the LOB objects were still present.


select count(1) from dba_objects where owner = 'RCUSER';
COUNT(1)
----------
84
1 row selected.


These were coming from the recycle bin. To clear the recycle bin for all users:


SQL> purge dba_recyclebin;
DBA Recyclebin purged.


After this, the LOBs were no longer found.


SQL> select count(1) from dba_objects where owner = 'RCUSER';
COUNT(1)
----------
0
1 row selected.

NFS and Oracle Backups

While writing a backup with Data Pump or an RMAN autobackup, Oracle checks the options set on the mount point to see whether it is suitable for backups. If the correct options are not set, it refuses to write the backup file. The same script works fine on a cooked file system, and even on NFS the log file is still written; the issue shows up only for the dump/backup files.

The following errors appear when the mount point options are incorrect.


In case of Data Pump, the export is unable to write its dump file to the NFS mount:

ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/u01/app/oracle/admin/ges/export/dmp/expdp.ges1.full.xrows.20090126.11244801.dmp"
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
Additional information: 3
In case of RMAN backup

RMAN-03009: failure of Control File and SPFILE Autobackup command on c1 channel at 02/12/2009 13:41:26
ORA-19504: failed to create file "/apps/warehouse2/rman/wbeqdrac07p_ges/cf_c-1576423539-20090212-03.bak"
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
Additional information: 3
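
For the Data Pump case it can also help to confirm which path the directory object used by the export actually points at, since that is where the dump file lands (a side check, not from the original note; directory names vary per site):

SELECT directory_name, directory_path
FROM dba_directories
ORDER BY directory_name;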



To correct the problem,
set the following options for the mount point in the /etc/fstab file, then unmount and remount it.

rw,bg,nfsvers=3,tcp,timeo=600,rsize=32768,wsize=32768,hard,nolock,intr,noac,addr=10.110.116.243

Mandatory options are:
hard, rsize>=32768 and wsize>=32768

noac is required on RAC servers.

Options can be checked using
mount -v

The following services also need to be running on both the NFS server and the NFS client:



> service nfs status
rpc.mountd (pid 25815) is running...
nfsd (pid 25812 25811 25810 25809 25808 25807 25806 25805) is running...

> service portmap status
portmap (pid 16735) is running...

> service nfslock status
rpc.statd (pid 16774) is running...

> ps -ef | grep rpc.rquotad
root 6593 5730 0 11:46 pts/0 00:00:00 grep rpc.rquotad
root 23692 1 0 11:26 ? 00:00:00 rpc.rquotad
root 23727 1 0 11:26 ? 00:00:00 rpc.rquotad
root 25801 1 0 11:31 ? 00:00:00 rpc.rquotad


After these options were set, the backups succeeded for both RMAN and Data Pump.

Resolving GAP for Standby

Missing Archives and Standby Recovery

Introduction

Sometimes it is necessary to ship archived logs from one server to another. This becomes difficult when the archived logs sit on an ASM file system, because ASM files cannot be accessed directly. RMAN can come to the rescue here.

I had an issue with missing archives on the standby server (a log gap): the archives had been backed up to tape, so MRP could not access them. I had to restore them and then get them shipped to the standby server. Since the gap was quite large, I decided to restore the logs and ship them manually rather than let MRP handle it. This is the process I followed; I am sure there are other ways, but this one worked for me.
Audience
This document applies to standby recovery when the missing archived logs reside on ASM. It is intended for DBAs.

Approach
- I identified the log gap using a query on the standby database (a cross-check query is also shown at the end of these steps):

SELECT THREAD#, MAX(SEQUENCE#) AS "LAST_APPLIED_LOG"
FROM V$LOG_HISTORY
GROUP BY THREAD#
/

- Then copy/restore the ASM logs to the cooked file system.

RMAN> run {
allocate channel c2 device type 'sbt_tape';
set archivelog destination to '/u01/app/oracle/admin/infra/arch';
restore archivelog from sequence 8852 until sequence 8856 thread 2;
release channel c2;
}
2> 3> 4> 5> 6>
allocated channel: c2
channel c2: sid=2228 instance=infra1 devtype=SBT_TAPE
channel c2: Data Protection for Oracle: version 5.3.3.0
executing command: SET ARCHIVELOG DESTINATION
Starting restore at 22-JAN-09
archive log thread 2 sequence 8856 is already on disk as file /u01/app/oracle/admin/infra/arch/infra_2_8856_641059495.arc
channel c2: starting archive log restore to user-specified destination
archive log destination=/u01/app/oracle/admin/infra/arch
channel c2: restoring archive log
archive log thread=2 sequence=8852
channel c2: restoring archive log
archive log thread=2 sequence=8853
channel c2: restoring archive log
archive log thread=2 sequence=8854
channel c2: restoring archive log
archive log thread=2 sequence=8855
channel c2: reading from backup piece qek5f68i_1_1
channel c2: restored backup piece 1
piece handle=qek5f68i_1_1 tag=TAG20090122T161057
channel c2: restore complete, elapsed time: 00:00:55
Finished restore at 22-JAN-09
released channel: c2
RMAN> exit

- Once the logs are restored, scp or sftp them to the standby destination. This will again be a cooked file system.
# scp infra_2_*.arc <stby serverName>:/tmp
infra_2_8853_641059495.arc 100% 131MB 11.0MB/s 00:12
infra_2_8854_641059495.arc 100% 267MB 12.2MB/s 00:22
infra_2_8855_641059495.arc 100% 60MB 8.5MB/s 00:07
infra_2_8856_641059495.arc 100% 542KB 541.5KB/s 00:00
- Now, if the standby database's log recovery destination is an ASM file system or otherwise different from the scp destination, recover the database manually, or feed each file name with its full path at the recovery prompt.
SQL> alter database recover managed standby database cancel;
SQL> recover from '/tmp' standby database;
- Start the managed recovery
SQL> alter database recover managed standby database disconnect from session;
- Verify the log shipping and log apply (see the queries below), and that's it.
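
A couple of queries can back up these steps (sketches, assuming SYSDBA on the standby; they are not part of the original procedure): v$archive_gap shows the missing range before the restore, and v$managed_standby shows whether MRP is applying logs again afterwards.

-- before the restore: the gap as the standby currently sees it
SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;

-- after restarting managed recovery: MRP0 should show APPLYING_LOG or WAIT_FOR_LOG
SELECT process, status, thread#, sequence# FROM v$managed_standby;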

Database Growth Monitor

The following package monitors database growth. It can be used for capacity planning.

1. Create the necessary tables

CREATE GLOBAL TEMPORARY TABLE SEGMENT_START_INFO AS
SELECT OWNER, SEGMENT_NAME, SEGMENT_TYPE, PARTITION_NAME, BYTES START_SIZE, SYSDATE START_DATE
FROM DBA_SEGMENTS WHERE 1=2
/
CREATE GLOBAL TEMPORARY TABLE SEGMENT_CURRENT_INFO AS
SELECT OWNER, SEGMENT_NAME, SEGMENT_TYPE, PARTITION_NAME, BYTES CURRENT_SIZE, SYSDATE CURRENT_DATE
FROM DBA_SEGMENTS WHERE 1=2
/
CREATE TABLE GROWTH_METRIC_LOG(STAGE VARCHAR2(255), RUN_DATE DATE) TABLESPACE USERS;
CREATE PUBLIC SYNONYM GROWTH_METRIC_LOG FOR GROWTH_METRIC_LOG;
GRANT SELECT ON GROWTH_METRIC_LOG TO PUBLIC;
CREATE TABLE SEGMENT_GROWTH_METRICS
(
OWNER VARCHAR2(30 BYTE),
SEGMENT_NAME VARCHAR2(81 BYTE),
PARTITION_NAME VARCHAR2(30 BYTE),
SEGMENT_TYPE VARCHAR2(18 BYTE),
BYTES NUMBER,
EXTENTS NUMBER,
ADD_DATE DATE,
DEL_DATE DATE
)
/
CREATE TABLE TABLESPACE_GROWTH_METRICS AS SELECT SYSDATE RUN_DATE, TABLESPACE_NAME, BYTES DAILY_AVG_GROWTH_MB
FROM DBA_SEGMENTS WHERE 1=2
/
CREATE PUBLIC SYNONYM TABLESPACE_GROWTH_METRICS FOR TABLESPACE_GROWTH_METRICS
/
GRANT SELECT ON TABLESPACE_GROWTH_METRICS TO PUBLIC
/


2. Create the Package which will monitor the growth


GRANT SELECT ON DBA_SEGMENTS TO OEM;
--@CREATE_TABS.SQL
CREATE OR REPLACE PACKAGE PKG_DATABASE_GROWTH_METRIC IS
PROCEDURE SP_DATABASE_GROWTH_METRIC;
PROCEDURE SP_CAPTURE_SEGMENT_GROWTH;
PROCEDURE SP_ANALYZE_SEGMENT_GROWTH;
END PKG_DATABASE_GROWTH_METRIC;
/

CREATE OR REPLACE PACKAGE BODY PKG_DATABASE_GROWTH_METRIC IS
/***********************************************************************
**CREATED BY: MAKARAND G
**CREATE DATE: 10/28/2004
**OVERVIEW: RUNS ALL OTHER PROCEDURES
**VERSION 01.01.00
***********************************************************************/
PROCEDURE SP_DATABASE_GROWTH_METRIC IS
BEGIN
INSERT INTO GROWTH_METRIC_LOG VALUES ('START SP_CAPTURE_SEGMENT_GROWTH', SYSDATE);
COMMIT;
SP_CAPTURE_SEGMENT_GROWTH;
INSERT INTO GROWTH_METRIC_LOG VALUES ('END SP_CAPTURE_SEGMENT_GROWTH', SYSDATE);
COMMIT;
END SP_DATABASE_GROWTH_METRIC;
PROCEDURE SP_ANALYZE_SEGMENT_GROWTH IS
BEGIN
--FIND OUT THE CURRENT SIZE FOR EACH OBJECT
INSERT INTO SEGMENT_CURRENT_INFO (OWNER, SEGMENT_NAME, SEGMENT_TYPE, PARTITION_NAME, CURRENT_SIZE, CURRENT_DATE)
SELECT OWNER, SEGMENT_NAME, SEGMENT_TYPE, PARTITION_NAME, SUM(BYTES) CURRENT_SIZE, MAX(ADD_DATE) CURRENT_DATE
FROM SEGMENT_GROWTH_METRICS A
WHERE DEL_DATE IS NULL
GROUP BY OWNER, SEGMENT_NAME, SEGMENT_TYPE, PARTITION_NAME;
--FIND OUT THE BASE SIZE FOR EACH OBJECT
INSERT INTO SEGMENT_START_INFO (OWNER, SEGMENT_NAME, SEGMENT_TYPE, PARTITION_NAME, START_SIZE, START_DATE)
SELECT OWNER, SEGMENT_NAME, SEGMENT_TYPE, PARTITION_NAME, SUM(BYTES) START_SIZE, MIN(ADD_DATE) START_DATE
FROM SEGMENT_GROWTH_METRICS A
WHERE DEL_DATE IS NULL
AND ADD_DATE = (
SELECT MIN(ADD_DATE) FROM SEGMENT_GROWTH_METRICS B
WHERE DEL_DATE IS NULL
AND A.OWNER = B.OWNER
AND A.SEGMENT_NAME = B.SEGMENT_NAME
AND A.SEGMENT_TYPE = B.SEGMENT_TYPE
AND NVL(A.PARTITION_NAME, '**') = NVL(B.PARTITION_NAME, '**'))
GROUP BY OWNER, SEGMENT_NAME, SEGMENT_TYPE, PARTITION_NAME;

INSERT INTO TABLESPACE_GROWTH_METRICS
--FIND OUT TABLESPACE DAILY AVG GROWTH, WHICH WOULD COME FROM THE INDIVIDUAL SEGMENTS
SELECT SYSDATE, TABLESPACE_NAME, ROUND(SUM(DAILY_AVG_GROWTH)/1024/1024) DAILY_AVG_GROWTH_MB FROM (
SELECT A.OWNER, A.SEGMENT_NAME, A.SEGMENT_TYPE, A.PARTITION_NAME, C.TABLESPACE_NAME, (B.CURRENT_SIZE - A.START_SIZE)/ROUND(B.CURRENT_DATE-A.START_DATE) DAILY_AVG_GROWTH
FROM SEGMENT_START_INFO A, SEGMENT_CURRENT_INFO B, TEMP_DBA_SEGMENTS C
WHERE A.OWNER = B.OWNER
AND A.SEGMENT_NAME = B.SEGMENT_NAME
AND A.SEGMENT_TYPE = B.SEGMENT_TYPE
AND NVL(A.PARTITION_NAME, '**') = NVL(B.PARTITION_NAME, '**')
AND A.OWNER = C.OWNER
AND A.SEGMENT_NAME = C.SEGMENT_NAME
AND A.SEGMENT_TYPE = C.SEGMENT_TYPE
AND NVL(A.PARTITION_NAME, '**') = NVL(C.PARTITION_NAME, '**')
AND ROUND(B.CURRENT_DATE-A.START_DATE) > 0)
GROUP BY TABLESPACE_NAME
HAVING ROUND(SUM(DAILY_AVG_GROWTH)/1024/1024) > 0
ORDER BY TABLESPACE_NAME;
COMMIT;
END SP_ANALYZE_SEGMENT_GROWTH;
PROCEDURE SP_CAPTURE_SEGMENT_GROWTH IS
CURSOR C1 IS
SELECT A.OWNER, A.SEGMENT_NAME,
A.PARTITION_NAME,
A. SEGMENT_TYPE
FROM TEMP_DBA_SEGMENTS A, (
SELECT OWNER,
SEGMENT_NAME,
PARTITION_NAME,
SEGMENT_TYPE,
SUM(BYTES) PREV_SEG_BYTES,
SUM(EXTENTS) PREV_SEG_EXTS
FROM SEGMENT_GROWTH_METRICS
WHERE DEL_DATE IS NULL
GROUP BY OWNER,
SEGMENT_NAME,
PARTITION_NAME,
SEGMENT_TYPE) B
WHERE A.OWNER = B.OWNER
AND A.SEGMENT_NAME = B.SEGMENT_NAME
AND A.SEGMENT_TYPE = B.SEGMENT_TYPE
AND NVL(A.PARTITION_NAME, '**') = NVL(B.PARTITION_NAME, '**')
AND (A.EXTENTS < B.PREV_SEG_EXTS OR A.BYTES < B.PREV_SEG_BYTES);
CURSOR C2 IS
SELECT OWNER, SEGMENT_NAME, SEGMENT_TYPE, PARTITION_NAME FROM SEGMENT_GROWTH_METRICS WHERE DEL_DATE IS NULL
MINUS
SELECT OWNER, SEGMENT_NAME, SEGMENT_TYPE, PARTITION_NAME FROM TEMP_DBA_SEGMENTS ;


BEGIN
/* CREATE A SNAPSHOT OF DBA_SEGMENTS, WHICH WILL IMPROVE THE PROCESSING TIME, ALSO IT CAN BE USED FOR BACKTRACKING TILL NEXT RUN
*/
EXECUTE IMMEDIATE 'DROP TABLE TEMP_DBA_SEGMENTS';
EXECUTE IMMEDIATE 'CREATE TABLE TEMP_DBA_SEGMENTS PARALLEL AS SELECT * FROM DBA_SEGMENTS';
EXECUTE IMMEDIATE 'ANALYZE TABLE TEMP_DBA_SEGMENTS ESTIMATE STATISTICS';
EXECUTE IMMEDIATE 'ANALYZE TABLE SEGMENT_GROWTH_METRICS ESTIMATE STATISTICS';
/* FIRST, FOR ALL THE OBJECTS WHICH HAVE SHRUNK DUE TO A REBUILD, MOVE OR ANY OTHER ACTION, SOFT DELETE THEM
** THESE OBJECTS WILL AGAIN BE INSERTED AT A LATER STAGE
*/
FOR X IN C1
LOOP
UPDATE SEGMENT_GROWTH_METRICS A
SET DEL_DATE = SYSDATE
WHERE A.OWNER = X.OWNER
AND A.SEGMENT_NAME = X.SEGMENT_NAME
AND NVL(A.PARTITION_NAME,'**') = NVL(X.PARTITION_NAME, '**')
AND A.SEGMENT_TYPE = X.SEGMENT_TYPE
AND DEL_DATE IS NULL;
END LOOP;
INSERT INTO GROWTH_METRIC_LOG VALUES ('STEP1:TOTAL4', SYSDATE);
COMMIT;
/* INSERT OBJECTS WHICH ARE NEW/WHO ARE NOT MARKED FOR SOFT DELETE.
** THESE WERE NEVER FOUND IN GROWTH METRIC SO FAR OR SOFT DELETED EARLIER
*/
INSERT INTO SEGMENT_GROWTH_METRICS
SELECT OWNER ,
SEGMENT_NAME,
PARTITION_NAME,
SEGMENT_TYPE,
BYTES,
EXTENTS,
SYSDATE ADD_DATE,
NULL
FROM TEMP_DBA_SEGMENTS A
WHERE NOT EXISTS (SELECT 1 FROM SEGMENT_GROWTH_METRICS B
WHERE A.OWNER = B.OWNER
AND A.SEGMENT_NAME = B.SEGMENT_NAME
AND A.SEGMENT_TYPE = B.SEGMENT_TYPE
AND NVL(A.PARTITION_NAME, '**') = NVL(B.PARTITION_NAME, '**')
AND B.DEL_DATE IS NULL)
AND SEGMENT_TYPE NOT LIKE '%UNDO%'
AND SEGMENT_TYPE != 'TEMPORARY';
INSERT INTO GROWTH_METRIC_LOG VALUES ('STEP2:TOTAL4', SYSDATE);
COMMIT;
/* INSERT THE EXISTING OBJECTS GROWTH. WE ONLY CONSIDER THE # OF EXTENTS AND BYTES ADDED SINCE THE LAST TIME AND CONSIDERS IT AS A GROWTH
*/
INSERT INTO SEGMENT_GROWTH_METRICS
SELECT A.OWNER,
A.SEGMENT_NAME,
A.PARTITION_NAME,
A.SEGMENT_TYPE,
A.BYTES - B.PREV_SEG_BYTES BYTES_ADDED,
A.EXTENTS - B.PREV_SEG_EXTS EXTENTS_ADDED,
SYSDATE ADD_DATE,
NULL DEL_DATE
FROM TEMP_DBA_SEGMENTS A, (
SELECT OWNER,
SEGMENT_NAME,
PARTITION_NAME,
SEGMENT_TYPE,
SUM(BYTES) PREV_SEG_BYTES,
SUM(EXTENTS) PREV_SEG_EXTS
FROM SEGMENT_GROWTH_METRICS
WHERE DEL_DATE IS NULL
GROUP BY OWNER,
SEGMENT_NAME,
PARTITION_NAME,
SEGMENT_TYPE) B
WHERE A.OWNER = B.OWNER
AND A.SEGMENT_NAME = B.SEGMENT_NAME
AND A.SEGMENT_TYPE = B.SEGMENT_TYPE
AND NVL(A.PARTITION_NAME, '**') = NVL(B.PARTITION_NAME, '**')
AND (A.EXTENTS > B.PREV_SEG_EXTS OR A.BYTES > B.PREV_SEG_BYTES);
INSERT INTO GROWTH_METRIC_LOG VALUES ('STEP3:TOTAL4', SYSDATE);
COMMIT;
/* THESE ARE THE OBJECTS WHICH WERE PRESENT LAST TIME BUT DROPPED NOW, SO NOT PART OF GROWTH ANYMORE
*/
/*
UPDATE SEGMENT_GROWTH_METRICS A
SET DEL_DATE = SYSDATE
WHERE NOT EXISTS (SELECT 1 FROM TEMP_DBA_SEGMENTS B
WHERE A.OWNER = B.OWNER
AND A.SEGMENT_NAME = B.SEGMENT_NAME
AND A.SEGMENT_TYPE = B.SEGMENT_TYPE
AND NVL(A.PARTITION_NAME, '**') = NVL(B.PARTITION_NAME, '**'))
AND DEL_DATE IS NULL;
*/
FOR B IN C2
LOOP
UPDATE SEGMENT_GROWTH_METRICS A
SET DEL_DATE = SYSDATE
WHERE A.OWNER = B.OWNER
AND A.SEGMENT_NAME = B.SEGMENT_NAME
AND A.SEGMENT_TYPE = B.SEGMENT_TYPE
AND NVL(A.PARTITION_NAME, '**') = NVL(B.PARTITION_NAME, '**')
AND DEL_DATE IS NULL;
END LOOP;
INSERT INTO GROWTH_METRIC_LOG VALUES ('STEP4:TOTAL4', SYSDATE);
COMMIT;
END SP_CAPTURE_SEGMENT_GROWTH;

END PKG_DATABASE_GROWTH_METRIC;
/
GRANT EXECUTE ON PKG_DATABASE_GROWTH_METRIC TO PUBLIC
/
CREATE PUBLIC SYNONYM PKG_DATABASE_GROWTH_METRIC FOR PKG_DATABASE_GROWTH_METRIC
/


3. Schedule a growth monitoring script by calling

exec PKG_DATABASE_GROWTH_METRIC.SP_DATABASE_GROWTH_METRIC;
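
One way to schedule it (a sketch, not part of the original setup; the job name and interval are arbitrary choices) is a DBMS_SCHEDULER job that runs the capture procedure once a day:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'DB_GROWTH_METRIC_JOB',      -- hypothetical job name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'PKG_DATABASE_GROWTH_METRIC.SP_DATABASE_GROWTH_METRIC',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',        -- run daily at 2 AM
    enabled         => TRUE);
END;
/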

4. Analyze the growth on-demand

exec PKG_DATABASE_GROWTH_METRIC.SP_ANALYZE_SEGMENT_GROWTH;
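
The analysis populates TABLESPACE_GROWTH_METRICS, so the latest run can also be inspected directly (just a convenience query, not in the original):

SELECT run_date, tablespace_name, daily_avg_growth_mb
FROM tablespace_growth_metrics
WHERE run_date = (SELECT MAX(run_date) FROM tablespace_growth_metrics)
ORDER BY daily_avg_growth_mb DESC;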

5. Interpret the result


SELECT A.TABLESPACE_NAME, ROUND(A.TOTAL_SPACE) TOTAL_SPACE_MB,
ROUND(SUM(B.BYTES)/1024/1024) FREE_SPACE_MB,
ROUND((SUM(B.BYTES)/(1024*1024)/A.TOTAL_SPACE)*100,2) PERCENT_FREE,
C.DAILY_AVG_GROWTH_MB,
ROUND(SUM(B.BYTES)/1024/1024/C.DAILY_AVG_GROWTH_MB) "#OF DAYS TO FAIL"
FROM (
SELECT TABLESPACE_NAME, SUM(BYTES)/1024/1024 TOTAL_SPACE FROM DBA_DATA_FILES GROUP BY TABLESPACE_NAME) A,
DBA_FREE_SPACE B, TABLESPACE_GROWTH_METRICS C
WHERE A.TABLESPACE_NAME = B.TABLESPACE_NAME
AND A.TABLESPACE_NAME = C.TABLESPACE_NAME
AND C.RUN_DATE = (SELECT MAX(RUN_DATE) FROM TABLESPACE_GROWTH_METRICS)
GROUP BY A.TABLESPACE_NAME, A.TOTAL_SPACE, C.DAILY_AVG_GROWTH_MB
ORDER BY 6
/


6. Additional queries to troubleshoot


SELECT EXECUTIONS FROM V$SQLAREA WHERE SQL_TEXT LIKE 'UPDATE SEGMENT_GROWTH_METRICS%';

SELECT COUNT(1) FROM (
SELECT OWNER, SEGMENT_NAME, SEGMENT_TYPE, PARTITION_NAME FROM SEGMENT_GROWTH_METRICS WHERE DEL_DATE IS NULL
MINUS
SELECT OWNER, SEGMENT_NAME, SEGMENT_TYPE, PARTITION_NAME FROM TEMP_DBA_SEGMENTS);


SELECT COUNT(1),SUM(BYTES)/1024/1024 DB_SIZE FROM SEGMENT_GROWTH_METRICS WHERE DEL_DATE IS NULL;


SELECT COUNT(1),SUM(BYTES)/1024/1024 DB_SIZE FROM TEMP_DBA_SEGMENTS WHERE SEGMENT_TYPE NOT LIKE '%UNDO%' AND SEGMENT_TYPE != 'TEMPORARY' ;

--PROBLEM SEGMENTS
SELECT SEGMENT_NAME, SUM(BYTES) FROM SEGMENT_GROWTH_METRICS WHERE DEL_DATE IS NULL GROUP BY SEGMENT_NAME
MINUS
SELECT SEGMENT_NAME, SUM(BYTES) FROM TEMP_DBA_SEGMENTS WHERE SEGMENT_TYPE NOT LIKE '%UNDO%' GROUP BY SEGMENT_NAME
/

Copy Tablespace to ASM

In order to copy a tablespace to an ASM diskgroup, RMAN is the only utility that can be used. Here is an example of how to do it (the commands below are run from RMAN, with <tablespace_name> as a placeholder).

--Take the tablespace offline
sql 'alter tablespace <tablespace_name> offline';

--Back it up as a copy onto the ASM diskgroup. This creates an image copy
backup as copy tablespace <tablespace_name> format '+DATA02';

--Now just tell the controlfile to use the new copy as the tablespace
switch tablespace <tablespace_name> to copy;

--Bring the tablespace back online
sql 'alter tablespace <tablespace_name> online';



The same can be done for individual datafiles. This approach can also be used to copy or move a tablespace or datafile from one diskgroup to another.
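
To confirm the switch, dba_data_files should now show the files for that tablespace under the +DATA02 diskgroup (a verification sketch; substitute your own tablespace name):

SELECT file_name, tablespace_name, round(bytes/1024/1024) size_mb
FROM dba_data_files
WHERE tablespace_name = upper('&tablespace_name');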

To remove ASM Files

In order to delete RMAN backup files in ASM, which have a file type of "ARCHIVELOG" or "BACKUPSET", the following two queries can be run and their output spooled to separate files.

COL gsql FORMAT a300
SET ECHO OFF
SET VERIFY OFF
SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SET LINESIZE 600
SET TRIMSPOOL ON


--generates file list for ARCHIVELOG
SELECT 'alter diskgroup '||dg.name||' drop file ''+'||dg.name||SYS_CONNECT_BY_PATH(al.name,'/')||''';'
FROM v$asm_alias al, v$asm_file fi, v$asm_diskgroup dg
WHERE al.file_number = fi.file_number(+)
AND al.group_number = dg.group_number
AND fi.type = 'ARCHIVELOG'
START WITH alias_index = 0
CONNECT BY PRIOR al.reference_index = al.parent_index
/


--generates file list for BACKUPSET
SELECT 'alter diskgroup '||dg.name||' drop file ''+'||dg.name||SYS_CONNECT_BY_PATH(al.name,'/')||''';'
FROM v$asm_alias al, v$asm_file fi, v$asm_diskgroup dg
WHERE al.file_number = fi.file_number(+)
AND al.group_number = dg.group_number
AND fi.type = 'BACKUPSET'
START WITH alias_index = 0
CONNECT BY PRIOR al.reference_index = al.parent_index
/


--get database name
ACCEPT database PROMPT 'Enter the database name: '
--generates file list
SELECT 'ALTER DISKGROUP '||gname||' DROP FILE '''||full_path||''';' gsql FROM
(SELECT CONCAT('+'||gname, SYS_CONNECT_BY_PATH(aname,'/')) full_path, gname FROM
(SELECT g.name gname, a.parent_index pindex, a.name aname,
a.reference_index rindex, a.ALIAS_DIRECTORY adir
FROM v$asm_alias a, v$asm_diskgroup g
WHERE a.group_number = g.group_number)
WHERE adir='N'
START WITH (MOD(pindex, POWER(2, 24))) = 0
CONNECT BY PRIOR rindex = pindex)
WHERE full_path LIKE UPPER('%&database%');
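
A typical way to use these queries (a sketch; the spool file name is arbitrary) is to spool the generated statements, review them, and then run the resulting script while connected to the ASM instance:

SPOOL drop_asm_files.sql
-- run one of the generator queries from above here
SPOOL OFF
-- review drop_asm_files.sql, then execute it against the ASM instance:
-- @drop_asm_files.sql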

Restoring the stats

Sometimes it is necessary to restore optimizer statistics from history. Oracle 10g automatically retains old statistics in the dictionary, by default for 31 days, and they can be restored at any time.


History information is available in dba_tab_stats_history.

Two functions report the retention period and how far back the history is available:

SQL> select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;
GET_STATS_HISTORY_RETENTION
---------------------------
31
1 row selected.

SQL> select DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY from dual;
GET_STATS_HISTORY_AVAILABILITY
---------------------------------------------------------------------------
08-JAN-09 10.01.52.927345000 PM -05:00
1 row selected.
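
The 31-day default can be changed if a longer (or shorter) window is needed; for example, to keep 60 days of history (a one-liner, not part of the original note):

exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(60);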

****************
Now, to restore the stats as of any point within the retention window, we just have to use the restore_schema_stats, restore_table_stats, etc. procedures.
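
For a single table the call looks like this (a sketch; MAKTEST.A is the table used in the case study below, and the timestamp is whatever point within the retention window you want to go back to):

exec DBMS_STATS.RESTORE_TABLE_STATS(ownname => 'MAKTEST', tabname => 'A', as_of_timestamp => SYSDATE - 1);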


Case Study

13:22:42 SYS@ptdev SQL> create table maktest.a (id int);
Table created.
Elapsed: 00:00:00.14


13:22:54 SYS@ptdev SQL> insert into maktest.a values(1);
1 row created.
Elapsed: 00:00:00.00
13:23:06 SYS@ptdev SQL> /
1 row created.
Elapsed: 00:00:00.00
13:23:06 SYS@ptdev SQL> /
1 row created.
Elapsed: 00:00:00.00
13:23:07 SYS@ptdev SQL> commit;
Commit complete.
Elapsed: 00:00:00.01


13:23:14 SYS@ptdev SQL> exec dbms_stats.gather_schema_stats('MAKTEST');
PL/SQL procedure successfully completed.
Elapsed: 00:00:01.83


13:24:28 SYS@ptdev SQL> select table_name, num_rows from dba_tables where owner = 'MAKTEST';
TABLE_NAME NUM_ROWS
------------------------------ ----------
ORDER_DETAILS 0
A 3
2 rows selected.
Elapsed: 00:00:00.12

13:24:46 SYS@ptdev SQL> select table_name, stats_update_time from dba_tab_stats_history where owner = 'MAKTEST' and table_name = 'A';
TABLE_NAME STATS_UPDATE_TIME
------------------------------ ---------------------------------------------------------------------------
A 09-FEB-09 01.24.28.094433 PM -05:00
1 row selected.
Elapsed: 00:00:00.03

13:25:46 SYS@ptdev SQL> truncate table maktest.a;
Table truncated.
Elapsed: 00:00:00.05

13:25:58 SYS@ptdev SQL> exec dbms_stats.gather_schema_stats('MAKTEST');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.42

13:26:05 SYS@ptdev SQL> select table_name, num_rows from dba_tables where owner = 'MAKTEST';
TABLE_NAME NUM_ROWS
------------------------------ ----------
ORDER_DETAILS 0
A 0
2 rows selected.
Elapsed: 00:00:00.15

13:26:09 SYS@ptdev SQL> select table_name, stats_update_time from dba_tab_stats_history where owner = 'MAKTEST' and table_name = 'A';
TABLE_NAME STATS_UPDATE_TIME
------------------------------ ---------------------------------------------------------------------------
A 09-FEB-09 01.24.28.094433 PM -05:00
A 09-FEB-09 01.26.05.034477 PM -05:00
2 rows selected.
Elapsed: 00:00:00.00

13:26:20 SYS@ptdev SQL> exec dbms_stats.restore_schema_stats(ownname=>'MAKTEST', as_of_timestamp=>sysdate-2/1400);
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.27

13:27:27 SYS@ptdev SQL> select table_name, num_rows from dba_tables where owner = 'MAKTEST';
TABLE_NAME NUM_ROWS
------------------------------ ----------
ORDER_DETAILS 0
A 3
2 rows selected.
Elapsed: 00:00:00.06



Conclusion
We do not need any manual script to capture the stats history. It is kept by default in 10g for 31 days and can be restored at any time.