Finding unused indexes

Oracle has a built-in facility to monitor the usage of some objects, including indexes. This is useful for identifying indexes that are never used; in many cases you can improve DML performance by dropping them. The method is not 100% accurate (some uses of an index are not recorded), so you should still be careful with the drops.
The DDL to turn on monitoring is "ALTER INDEX .. MONITORING USAGE" (and NOMONITORING USAGE to turn it off). The command is well documented elsewhere; what I can contribute is a shell script to automate and report on this infrastructure. The script is below, but first a few notes about the process.
  1. The Oracle-supplied view that reports on what is being monitored is worse than useless to a DBA: it only works on your current schema. That is such a big drawback that I wrote my own view to cover all schemas.
  2. The script avoids monitoring certain indexes that you would not want to drop anyway, even if they turn out to be unused. Indexes owned by default schemas are ignored, unique indexes are ignored (they enforce uniqueness, so you want them regardless of usage), and only NORMAL and BITMAP index types are considered.
  3. Please read the usage text; on the first run in each database you will want to use -v to create the view mentioned above.
  4. With no options, the script turns off monitoring for any monitored indexes that have since been used, then writes a report file of the still-unused indexes. The -n option additionally turns on monitoring for indexes that have never been monitored.
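The monitoring DDL itself is a one-liner per index; for a single index it looks like this (hypothetical owner and index name):

```sql
-- Start recording whether this index is used (hypothetical names)
ALTER INDEX app_owner.ix_orders_status MONITORING USAGE;

-- ... let a representative workload run, check the usage view, then stop monitoring
ALTER INDEX app_owner.ix_orders_status NOMONITORING USAGE;
```

The script below simply generates these statements in bulk from dba_indexes.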
#!/bin/bash
# Automates index usage monitoring and reports unused indexes
EXEC_CMD="sqlplus -s / as sysdba"
CMD_FILE=index_monitor-${ORACLE_SID}.sql
:> $CMD_FILE

usage() {
echo "Usage: $0 [-r] [-n] [-v]"
echo "  -r - only report on unused indexes, no changes are made"
echo "  -n - new indexes, monitor any indexes that have never been monitored"
echo "  -v - create the view (only needed on the first run)"
exit 1
}

install_view() {
sqlplus -s / as sysdba <<!
CREATE VIEW sys.v\$all_object_usage (owner, index_name, table_name, monitoring, used, start_monitoring, end_monitoring )
AS
SELECT do.owner, io.name, t.name, DECODE (BITAND (i.flags, 65536), 0, 'NO', 'YES'),
DECODE (BITAND (ou.flags, 1), 0, 'NO', 'YES'), ou.start_monitoring, ou.end_monitoring
FROM sys.obj\$ io, sys.obj\$ t, sys.ind\$ i, sys.object_usage ou, dba_objects do
WHERE i.obj# = ou.obj# AND io.obj# = ou.obj# AND t.obj# = i.bo# AND ou.obj# = do.object_id;
EXIT
!
}

# Start monitoring any new indexes
turn_on_monitoring() {
(sqlplus -s / as sysdba <<!
set lines 100
set pages 0
set feed off
PROMPT set pages 0
PROMPT set echo on
SELECT 'alter index '||owner||'."'||index_name||'" monitoring usage;' "Monitoring changes" FROM dba_indexes
WHERE index_type IN ('NORMAL', 'BITMAP') AND uniqueness = 'NONUNIQUE'
AND owner NOT LIKE '%SYS%' AND owner NOT LIKE 'APEX%' AND owner NOT IN ('OUTLN', 'PUBLIC', 'DBSNMP', 'XDB', 'ORDDATA')
AND (owner, index_name) NOT IN (SELECT index_owner, index_name FROM dba_ind_columns c
WHERE (c.table_owner, c.table_name, c.column_name, c.column_position) IN
(SELECT c.owner, c.table_name, cc.column_name, cc.position
FROM dba_constraints  c, dba_constraints  r, dba_cons_columns cc, dba_cons_columns rc
WHERE c.constraint_type = 'R' AND c.r_owner = r.owner AND c.r_constraint_name = r.constraint_name
AND c.constraint_name = cc.constraint_name AND c.owner = cc.owner AND r.constraint_name = rc.constraint_name
AND r.owner = rc.owner AND cc.position = rc.position
AND c.owner NOT LIKE '%SYS%' AND c.owner NOT LIKE 'APEX%' AND c.owner NOT IN ('OUTLN', 'PUBLIC', 'DBSNMP', 'XDB', 'ORDDATA')))
AND (owner, index_name) NOT IN (SELECT owner, index_name FROM sys.v\$all_object_usage);
prompt exit
EXIT
!
)
}

# Turn off monitoring of used indexes
turn_off_monitoring() {
(sqlplus -s / as sysdba <<!
set lines 100
set pages 0
set feed off
PROMPT set pages 0
PROMPT set echo on
select 'alter index ' || owner || '."' || index_name || '" nomonitoring usage;' from sys.v\$all_object_usage where used = 'YES' and monitoring = 'YES';
prompt exit
EXIT
!
)
}

# Unused Index Report
report() {
export REP_NAME=unused_indexes.lst
tty -s && echo "Unused index report: ${REP_NAME}"
(sqlplus -s / as sysdba <<!
set lines 200
set pages 0
select owner || '.' || index_name || ' on ' || table_name || ' not used since ' || to_char(trunc(to_date(start_monitoring,'MM-DD-YYYY HH24:MI:SS')),'MM/DD/YY')
from sys.v\$all_object_usage
where used = 'NO' and monitoring = 'YES' order by trunc(to_date(start_monitoring,'MM-DD-YYYY HH24:MI:SS')), owner, table_name, index_name;
EXIT
!
) > ${REP_NAME}
}

# Handle parameters
while getopts ":rnv" opt; do
case $opt in
r)
EXEC_CMD="true"
;;
n)
# only run this if requested, it is slow
turn_on_monitoring > $CMD_FILE
;;
v)
install_view
;;
\?)
echo "Invalid option: -$OPTARG" >&2
usage
;;
: )
echo "Option -$OPTARG requires an argument." >&2
usage
;;
esac
done

# always run this
turn_off_monitoring >> $CMD_FILE

LNCNT=`wc -l $CMD_FILE|cut -f1 -d" "`
if [ "$LNCNT" -gt 3 ] ; then
more $CMD_FILE
$EXEC_CMD @$CMD_FILE
echo " "
fi

# always run the report
report


Managing empty tables

Does your database have a lot of empty tables of (potentially) unknown origin? While the reclaimed space is usually insignificant, there are some simple things you can do to manage them.
An empty table may or may not have a segment in a tablespace. It does not need one now that we have deferred segment creation, so let's get rid of the unneeded segments. This SQL helps with that:
-- empty table management
SELECT 'exec DBMS_STATS.GATHER_TABLE_STATS('''||owner||''','''||table_name||''');' stats_cmd,
'select count(*) from '||owner||'.'||table_name||';' cnt_cmd, 'truncate table '||owner||'.'||table_name||' drop all storage;' trunc_cmd
FROM dba_tables WHERE num_rows = 0 AND temporary = 'N' AND nested = 'NO' AND segment_created = 'YES'
AND owner NOT LIKE '%SYS%' AND owner NOT LIKE 'APEX%' AND owner NOT IN ('OUTLN', 'PUBLIC', 'DBSNMP', 'XDB', 'ORDDATA');
There are three columns here. The first gathers fresh stats for the tables; this is safe, fast, and will eliminate from consideration any tables that actually have rows.
The second column is a set of count commands you can run to verify that the tables really are empty, because the third column is DDL to truncate them and remove their segments. You want to be careful with this, of course, hence the first two columns. I would visually check the listing before running it, and a backup of the tables is always a good idea. In fact, exporting the tables is another way to confirm they are truly empty, since the expdp log shows the records exported.
Note that if a table has foreign key dependencies, the truncate will fail. It is up to you whether it is worth disabling and re-enabling the dependent FKs around the truncate.
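If you do decide a truncate is worth it despite a foreign key, the pattern looks like this (hypothetical table and constraint names; find the real ones in dba_constraints):

```sql
-- An enabled FK on the child table blocks truncating the parent, so disable it first
ALTER TABLE app.child_tab DISABLE CONSTRAINT fk_child_parent;
TRUNCATE TABLE app.parent_tab DROP ALL STORAGE;
ALTER TABLE app.child_tab ENABLE CONSTRAINT fk_child_parent;
```

Since both tables are supposed to be empty, re-enabling the constraint should validate instantly.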
If these tables were created by your own development process, you might want to give the list to development for possible removal. You can check that the tables are not used by any objects inside the database with the following SQL. This does not prove that no outside code (or dynamic SQL inside PL/SQL) uses the tables, so you must still be cautious and remove them first in an environment where thorough testing takes place.
-- empty tables with no "in db" dependencies
-- empty tables
SELECT owner, table_name FROM dba_tables
WHERE num_rows = 0 AND temporary = 'N' AND NESTED = 'NO'
AND owner NOT LIKE '%SYS%' AND owner NOT LIKE 'APEX%' AND owner NOT IN ('OUTLN', 'PUBLIC', 'DBSNMP', 'XDB', 'ORDDATA')
MINUS
-- empty tables with "in db" dependencies
SELECT referenced_owner, referenced_name FROM dba_dependencies d
WHERE (referenced_owner, referenced_name) IN
(SELECT owner, table_name FROM dba_tables
WHERE num_rows = 0 AND temporary = 'N' AND NESTED = 'NO' AND owner NOT LIKE '%SYS%' AND owner NOT LIKE 'APEX%'
AND owner NOT IN ('OUTLN', 'PUBLIC', 'DBSNMP', 'XDB', 'ORDDATA'));

Is it safe to drop that tablespace?

When dropping tablespaces, I like to use the "including datafiles" clause so I don't have to find and remove the underlying datafiles. That presents a problem, because that clause is only accepted if you also add the "contents" clause (INCLUDING CONTENTS AND DATAFILES), meaning it will drop any objects with segments in the tablespace. Not a problem, since you checked dba_segments first, right?
Wrong! If you tell Oracle to drop the contents, it will also drop objects that have no segments yet but that would create one there as soon as a row was inserted. By default, 11g defers segment creation for a table or index that has no rows when the object is created (a very common case). So you had better check for those objects before you drop the tablespace. It would be nice if Oracle allowed us to leave off the contents clause (fail if there are any dependent objects, but still drop the datafiles when the tablespace is dropped), but alas it does not, so we must be diligent about this check if we want to drop datafiles as part of the tablespace drop.
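To see the behavior for yourself, create a table in a scratch tablespace and note that no segment appears until a row is inserted (hypothetical names; assumes the 11g default deferred_segment_creation=TRUE):

```sql
CREATE TABLE scott.seg_demo (id NUMBER) TABLESPACE scratch_ts;

-- No segment yet, so dba_segments is silent about this table:
SELECT segment_created FROM dba_tables WHERE owner = 'SCOTT' AND table_name = 'SEG_DEMO';

-- Yet DROP TABLESPACE scratch_ts INCLUDING CONTENTS AND DATAFILES would silently drop SEG_DEMO
```

This is why the check script below looks at the object views, not just dba_segments.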
Here are the contents of a SQL script I use to check that a tablespace has no dependencies. Try it on a tablespace you know has objects, and also on a non-existent tablespace, to see the output.
set ver off pages 0 feed off
SELECT decode(COUNT(*), 0, 'OK', 'Dependency found in dba_segments!') FROM dba_segments WHERE tablespace_name = '&&1';
SELECT decode(COUNT(*), 0, 'OK', 'Dependency found in dba_tables!') FROM dba_tables WHERE tablespace_name = '&&1';
SELECT decode(COUNT(*), 0, 'OK', 'Dependency found in dba_tab_partitions!') FROM dba_tab_partitions WHERE tablespace_name = '&&1';
SELECT decode(COUNT(*), 0, 'OK', 'Dependency found in dba_tab_subpartitions!') FROM dba_tab_subpartitions WHERE tablespace_name = '&&1';
SELECT decode(COUNT(*), 0, 'OK', 'Dependency found in dba_indexes!') FROM dba_indexes WHERE tablespace_name = '&&1';
SELECT decode(COUNT(*), 0, 'OK', 'Dependency found in dba_ind_partitions!') FROM dba_ind_partitions WHERE tablespace_name = '&&1';
SELECT decode(COUNT(*), 0, 'OK', 'Dependency found in dba_ind_subpartitions!') FROM dba_ind_subpartitions WHERE tablespace_name = '&&1';
exit

Cleaning up invalid objects

Many databases seem to accumulate invalid objects over time. If you monitor for objects becoming invalid, you have to learn to ignore the long-standing ones, and then we develop a tendency to ignore all warnings about invalid objects, and that is not good.
Since these objects cannot be used, why not remove them? Getting rid of code is a bit scary (maybe we will have to write something like it again in the future), so it would be nice to have a backup. Regular backups from before the time you dropped the objects would work, but will you still have them, or want to use them, when the time comes to get that code back? Better to make a specific backup of those objects and keep it around for a long time, "just in case".
This SQL will list the invalid objects, along with DDL to back up the code and to drop them after the backup is completed and archived somewhere safe.
SELECT 'Invalid ' || object_type || ': ' || owner || '.' || object_name listing,
'select dbms_metadata.get_ddl(''' || object_type || ''',''' || object_name || ''',''' || owner || ''') from dual;' backup_cmd,
'drop ' || object_type || ' ' || owner || '.' || object_name || ';' drop_cmd
FROM dba_objects WHERE status = 'INVALID' ORDER BY owner, object_name;
The first column could be emailed to interested parties to let them know what you are about to do.
The second column can be put into a script and wrapped with spool commands to make the backup file.
The third column is the DDL to actually perform the drops.
I used these commands around the generated backup lines to spool them into a .sql file:
set long 300000 pages 0 lines 140
set trimspool on
set head off
spool invalid_code.sql
-- (run the get_ddl selects generated above here)
spool off

Bash script to run a command on all databases

It is very common to want to perform some operation on many databases. Logging into each database to do this is cumbersome, time consuming, and error prone, especially if you manage many databases on a server.
The script shown below creates a shell script with a line for every database running on the server, then runs that script for you, unless you want to edit and run it yourself. It makes use of some of the functions we created in the last post. It is very flexible in the command to run: it could be sqlplus, another program, or a shell script. There are also several filters to exclude certain types of databases.
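The list of running instances comes from parsing the pmon process names, the same trick used in the script below. A minimal sketch of just that parsing step, with made-up process names standing in for real ps output:

```shell
# Sample ps command-name column for three background processes (hypothetical SIDs)
pmon_list="ora_pmon_ORCL
ora_pmon_TESTDB
asm_pmon_+ASM"

# The SID is everything after the second underscore; ASM instances are excluded
sids=$(echo "$pmon_list" | grep ora_pmon | cut -d_ -f3- | grep -v ASM)
echo "$sids"
```

Note that the script itself greps for `ora_\\pmon`: the backslash in the grep process's own command line keeps grep from matching itself in the ps output.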

# Iterates a command across all running DB instances
#
. `dirname “${BASH_SOURCE[0]}”`/ora_funcs.sh

RUN_CMDS=Y

usage() {
echo "Usage: $0 -c command string [-n ] [-e exclude type] [-l list file]"
echo "  -c command string - the string of commands to run once the DB environment is set"
echo "  -n - no changes, only create files with commands to run, so you can edit and run manually"
echo "  -e exclude type - exclude these types of databases: clonedb standby"
echo "  -l list file - name of text file that lists the databases to process, 1 per line"
exit 1
}

# Handle parameters
while getopts ":c:e:l:n" opt; do
case $opt in
n)
RUN_CMDS=N
;;
c)
CMD_STRING=$OPTARG
;;
l)
LIST_FILE=$OPTARG
;;
e)
EXCLUDE_TYPE=$OPTARG
echo "Exclude: $EXCLUDE_TYPE"
;;
\?)
echo "Invalid option: -$OPTARG" >&2
usage
;;
: )
echo "Option -$OPTARG requires an argument." >&2
usage
;;
esac
done

REPORT_NAME=RunForAllDatabases-`date "+%y%m%d%H%M"`.sh
:>${REPORT_NAME}
# Create the file listing all the databases to operate on
if [ -z "${LIST_FILE}" ] ; then
LIST_FILE=/tmp/Run4All-listing.txt
ps -ef|grep ora_\\pmon|cut -d_ -f3-|grep -v ASM > $LIST_FILE
fi
#echo “List file: $LIST_FILE”

for ORACLE_SID in `cat $LIST_FILE`
do
if [ "$EXCLUDE_TYPE" = "clonedb" ] ; then
is_clonedb ${ORACLE_SID} && continue
else
test_connect ${ORACLE_SID} || continue
fi

if [ "$EXCLUDE_TYPE" = "standby" -a "${DB_ROLE}" = "PHYSICAL STANDBY" ] ; then
continue
fi

echo "export ORACLE_SID=${ORACLE_SID};export ORACLE_HOME=${ORACLE_HOME};${CMD_STRING}" >>${REPORT_NAME}
done

echo "Commands written to ${REPORT_NAME}"

if [ "${RUN_CMDS}" = "Y" ] ; then
sh ${REPORT_NAME}
fi

# cleanup
if [ -r "/tmp/Run4All-listing.txt" ] ; then
rm /tmp/Run4All-listing.txt
fi


Bash script functions for the Oracle DBA

The bash script below can be sourced from your .bash_profile and/or .bashrc to create some useful commands. Copy it into a file called ora_funcs.sh somewhere on your path and make it executable. Log out and back in, then run "typeset -F"; you should see these functions defined:
declare -f dbname
declare -f is_clonedb
declare -f oe
declare -f pathmunge
declare -f test_connect
Now you can set the Oracle environment to a local instance with a command like this:
oe mydb
This sets ORACLE_SID and ORACLE_HOME. If you give it a partial name, it will make its best guess.
I use these functions in many other scripts; that is why I'm introducing them here, so that other scripts can build upon them.
This is the contents of my ora_funcs.sh script file:

#!/bin/bash
pathmunge () {
if ! echo $PATH | /bin/egrep -q "(^|:)$1($|:)" ; then
if [ "$2" = "after" ] ; then
PATH=$PATH:$1
else
PATH=$1:$PATH
fi
fi
}

# Sets the Oracle environment, if possible
oe ()
{
# if no argument, try the existing value of ORACLE_SID
TMP_ORACLE_SID=${1:-$ORACLE_SID}
if [ -z "${TMP_ORACLE_SID}" ] ; then
echo "No SID"
return 1
fi
# Set the SID
ORACLE_SID=`ps -ef|grep ora_\\\\pmon|sed -e "s/.*pmon_//"|grep ${TMP_ORACLE_SID}|sort -u`
# check that only one SID was found
if [ `echo $ORACLE_SID|wc -w` -gt 1 ] ; then
echo "Too many matching SIDs (${ORACLE_SID})"
# Just take the first one
ORACLE_SID=`echo $ORACLE_SID|cut -d" " -f1`
echo "Setting ORACLE_SID to $ORACLE_SID"
fi
if [ -z "${ORACLE_SID}" ] ; then
echo "SID not found"
return 1
fi
# Use a here-string so cmd and pid are set in this shell, not in a pipeline subshell
read cmd pid <<< "$(ps -e -o command,pid | grep ora_\\pmon | grep ${ORACLE_SID})"
# Find ORACLE_HOME from the process id
if [ -r /proc/$pid/exe ] ; then
TMP_ORACLE_HOME=$(ls -l /proc/$pid/exe | awk -F'> ' '{print $2}' | sed 's/\/[^\/]*\/[^\/]*$//')
else
# Get the OH from oratab if the process files are not readable
TMP_ORACLE_HOME=`grep "${TMP_ORACLE_SID}:" /etc/oratab|cut -d: -f2`
# If it wasn't found, strip off the last char to work on RAC
if [ -z "${TMP_ORACLE_HOME}" ]; then
INST=`expr substr ${TMP_ORACLE_SID} ${#TMP_ORACLE_SID} 1`
TMP_ORACLE_HOME=`grep "${TMP_ORACLE_SID%$INST}:" /etc/oratab|cut -d: -f2`
fi
fi

# If we found an OH and it's valid, set the real OH, otherwise leave it unchanged and hope for the best
if [ -d "${TMP_ORACLE_HOME}" ] ; then
ORACLE_HOME=${TMP_ORACLE_HOME}
fi

# Check that the ORACLE_SID we finally decided on has a running db
ps -ef|grep ora_\\pmon_${ORACLE_SID} >/dev/null
if [ $? -eq 0 ] ; then
# Everything works! exit with success
echo "${ORACLE_SID}"
pathmunge $ORACLE_HOME/bin
export ORACLE_SID ORACLE_HOME PATH
return 0
else
echo "SID not found"
return 1
fi
}

test_connect ()
{
if [ $# -gt 0 ] ; then
oe $1
fi

# Test the connection
export DB_ROLE=`sqlplus -s / as sysdba <<!
set pages 0
select DATABASE_ROLE from v\\$database;
exit
!`
echo $DB_ROLE | grep ORA- && return 1
return 0
}

is_clonedb ()
{
if [ $# -gt 0 ] ; then
oe $1
fi

# Test the connection
(sqlplus -s / as sysdba <<!
set pages 0
select VALUE from v\$parameter where NAME = 'clonedb';
exit
!
) | grep TRUE >/dev/null && return 0
return 1
}

dbname ()
{
if [ $# -gt 0 ] ; then
oe $1
fi

# Get the real database name
export DBNAME=`sqlplus -s / as sysdba <<!
set pages 0
select NAME from v\\$database;
exit
!`
return 0
}
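One subtlety in the oe function above: RAC instance names carry a trailing instance number (ORCL1, ORCL2) while /etc/oratab usually lists only the database name, so the oratab fallback strips the last character before its second lookup. The same effect in pure parameter expansion, with a hypothetical SID:

```shell
ORACLE_SID="RACDB1"      # hypothetical RAC instance name
DB_NAME=${ORACLE_SID%?}  # drop the trailing character (the instance number)
echo "$DB_NAME"
```

This is equivalent to the `expr substr` arithmetic in the function, just with less typing.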


Managing Oracle Jobs during maintenance windows

Oracle now has two different ways of scheduling jobs. We sometimes want to disable jobs for a period in order to do maintenance of some kind, and then re-enable them. Care must be taken not to re-enable jobs that were not enabled before we started, so the correct approach is to generate the re-enable script before we disable anything.

The following shell script creates the disable and enable SQL scripts for you, and handles both types of job queues. You can run the SQL "as is" or edit it to leave some jobs running.

# Generate SQL to maintain Oracle jobs - both DBA_SCHEDULER_JOBS and DBA_JOBS
CRED=${CRED:-/}

usage() {
echo "Usage: $0 [-i] [-c] [-d database name]"
echo "  -i - interactive (allows you to run the SQL that is generated)"
echo "  -c - check or count, only counts the number of jobs that are enabled"
echo "  -d database name - defaults to $ORACLE_SID"
echo " "
echo "This script creates 2 files, containing commands to disable and enable the currently enabled jobs."
echo "It handles both DBMS_JOBS and DBMS_SCHEDULER_JOBS"
echo "The .sql files contain the database name and a timestamp to avoid overwriting existing files"
echo "Run the disable SQL before the maintenance starts, and the enable SQL after it completes, to restore operations as they were"
echo "If you are running this from a remote host (using TNS), you will need to set the CRED env var to a user/passwd with SYSDBA established in the orapwd files"
exit 1
}

# Handle parameters
while getopts ":d:ic" opt; do
case $opt in
d)
TWO_TASK=$OPTARG
DB_NAME=$OPTARG
;;
c)
COUNT_ONLY=Y
;;
i)
INTERACTIVE=Y
;;
\?)
echo "Invalid option: -$OPTARG" >&2
usage
;;
: )
echo "Option -$OPTARG requires an argument." >&2
usage
;;
esac
done

DB_NAME=${DB_NAME:-${ORACLE_SID}}
if [ "${COUNT_ONLY}" = "Y" ] ; then
echo "${DB_NAME}"
(sqlplus -s ${CRED} as sysdba <<!
set pages 0
set feed off
set head off
set lines 130
SELECT 'DBMS_SCHEDULER enabled: '||count(*) FROM dba_scheduler_jobs WHERE enabled = 'TRUE';
SELECT 'DBMS_JOBS enabled: '||count(*) FROM dba_jobs WHERE broken = 'N';
exit
!
)
exit 0
fi

DISABLE_SQL=JobDisable_${DB_NAME}-`date "+%y%m%d%H%M"`.sql
DISABLE_LOG=JobDisable_${DB_NAME}-`date "+%y%m%d%H%M"`.log
ENABLE_SQL=JobEnable_${DB_NAME}-`date "+%y%m%d%H%M"`.sql
ENABLE_LOG=JobEnable_${DB_NAME}-`date "+%y%m%d%H%M"`.log

echo "Generating SQL to disable and enable Oracle jobs"
echo
(sqlplus -s ${CRED} as sysdba <<!
set pages 0
set feed off
set head off
set lines 130
spool ${DISABLE_SQL}
prompt set sqlprompt "_CONNECT_IDENTIFIER> ";
prompt set feed off;
prompt set echo on;
prompt spool ${DISABLE_LOG};
SELECT 'exec sys.dbms_scheduler.disable('''||owner||'.'||job_name||''')' disable_cmd FROM dba_scheduler_jobs WHERE enabled = 'TRUE';
SELECT 'exec sys.dbms_ijob.broken('||job||', TRUE)' FROM dba_jobs WHERE broken = 'N';
prompt SELECT 'exec sys.dbms_scheduler.stop_job('''||owner||'.'||job_name||''', TRUE)' stop_cmd FROM dba_scheduler_jobs WHERE enabled = 'TRUE';;
prompt SELECT 'exec sys.dbms_scheduler.disable('''||owner||'.'||job_name||''')' disable_cmd FROM dba_scheduler_jobs WHERE enabled = 'TRUE';
spool ${ENABLE_SQL}
prompt set sqlprompt "_CONNECT_IDENTIFIER> ";
prompt set feed off;
prompt set echo on;
prompt spool ${ENABLE_LOG};
SELECT 'exec sys.dbms_scheduler.enable('''||owner||'.'||job_name||''')' enable_cmd FROM dba_scheduler_jobs WHERE enabled = 'TRUE';
SELECT 'exec sys.dbms_ijob.broken('||job||', FALSE)' FROM dba_jobs WHERE broken = 'N';
spool off
SELECT 'DBMS_SCHEDULER enabled: '||count(*) FROM dba_scheduler_jobs WHERE enabled = 'TRUE';
SELECT 'DBMS_JOBS enabled: '||count(*) FROM dba_jobs WHERE broken = 'N';
exit
!
)

echo
echo "job disable commands for ${DB_NAME} are in ${DISABLE_SQL}"
echo "job enable commands for ${DB_NAME} are in ${ENABLE_SQL}"

if [ "${INTERACTIVE}" = "Y" ] ; then
echo "Do you want to run the Disable SQL in ${DB_NAME} ?"
read ans
if [ "$ans" = "Y" ] ; then
sqlplus ${CRED} as sysdba @${DISABLE_SQL}
echo "Output spooled to ${DISABLE_LOG}"
fi
echo "Do you want to run the Enable SQL in ${DB_NAME} ?"
read ans
if [ "$ans" = "Y" ] ; then
sqlplus ${CRED} as sysdba @${ENABLE_SQL}
echo "Output spooled to ${ENABLE_LOG}"
fi
fi


Data Uniqueness in Oracle

It is very common for a data model to define a set of columns that must be unique in a table. This is enforced with a primary key, a unique key, or a unique index; these are slightly different but obviously related concepts. Any constraint that declares uniqueness (either a PK or UK) needs an index, normally a unique one, to help enforce it. A unique index maintains this by inserting a row in its b-tree for each table row that has at least one non-null key column. Checking uniqueness on insert or update is then fast; in fact it is simply a side effect of inserting the new leaf into the b-tree: if that succeeds, the uniqueness constraint is satisfied.
This implies that the columns in the constraint and the index are an exact match. But what if that is not true? You might think that is an impossible case, but it can happen, and it is worth monitoring for because of the potentially horrible effect of the index being missing (worst case: a full table scan to verify uniqueness before each insert or update).
Fortunately, the check for this is very simple:
SELECT cc.owner, cc.table_name, c.index_name, cc.column_name, cc.position
FROM dba_cons_columns cc
JOIN dba_constraints c ON (c.owner = cc.owner AND c.constraint_name = cc.constraint_name)
WHERE cc.owner NOT LIKE '%SYS%' AND c.constraint_type IN ('P', 'U')
MINUS
SELECT table_owner, table_name, index_name, column_name, column_position
FROM sys.dba_ind_columns;

This query finds all the columns involved in a unique constraint (a primary key also implies uniqueness, hence the P in the constraint_type filter), then removes from that set every column that is indexed in the same position. What remains are constraint columns with no matching index column in number and order. It may not be strictly necessary to include the column position in this check, but I like to find ordering discrepancies too, because they imply a logical mismatch in the design.


Bash script to gather statistics

The following shell script will gather table stats, and optionally, various system-wide stats. Copy the contents into a file and make it executable as the oracle owner account.

By default it connects with OS authentication as SYSDBA; you can alter that behavior by setting an env variable before running the script. I do not recommend adding passwords to this script, for obvious security reasons.

You can show the usage by just passing an invalid command line parameter, for example ./GatherStats.sh -?

# Gather table stats - table list filtered by last analyzed time and (optionally) size (based on the number of records)
#
# File $Id: GatherStats.sh  $
# Modified $Author: dheltzel $
# Date $Date:  $
# Revision $Revision: $

CRED=${CRED:-/}

usage() {
echo "Usage: $0 [-t days] [-r num_recs] [-d database name] [-s] [-f filter_string]"
echo "  -t days - only select tables that have not had stats gathered in at least this many days (default 7)"
echo "  -r num_recs - only select tables with fewer than X thousand records (default is unlimited)"
echo "  -d database name - defaults to $ORACLE_SID"
echo "  -s - also gather system stats (takes a long time)"
echo "  -f filter_string - arbitrary filter for the select query (ex. \"AND s.owner='DBADMIN'\")"
exit 1
}

# Handle parameters
while getopts ":d:t:r:sf:" opt; do
case $opt in
d)
TWO_TASK=$OPTARG
DB_NAME=$OPTARG
;;
t)
AGE_DAYS=$OPTARG
;;
r)
NUM_RECS="AND num_rows < $OPTARG * 1000"
;;
s)
SYS_STATS=Y
;;
f)
OPT_FILTER=$OPTARG
;;
\?)
echo "Invalid option: -$OPTARG" >&2
usage
;;
: )
echo "Option -$OPTARG requires an argument." >&2
usage
;;
esac
done

DB_NAME=${DB_NAME:-${ORACLE_SID}}
AGE_DAYS=${AGE_DAYS:-7}
REPORT_NAME=GatherStats_${DB_NAME}-`date "+%y%m%d%H%M"`.sql
LOG_NAME=GatherStats_${DB_NAME}-`date "+%y%m%d%H%M"`.log
#echo "${AGE_DAYS}"
#echo "${NUM_RECS}"
#echo "${REPORT_NAME}"
#echo "${OPT_FILTER}"
echo "Gathering stats for tables in ${DB_NAME} with stats older than ${AGE_DAYS} days . . ."

(sqlplus -s ${CRED} as sysdba <<!
set pages 0
set feed off
set head off
set lines 130
prompt set sqlprompt "_CONNECT_IDENTIFIER> ";
prompt set feed off;
prompt set echo on;
--prompt spool ${LOG_NAME}
SELECT 'exec DBMS_STATS.GATHER_TABLE_STATS('''||t.owner||''','''||t.table_name||''');' FROM dba_tables t
JOIN dba_tab_statistics s ON (s.OWNER = t.OWNER AND s.TABLE_NAME = t.TABLE_NAME)
WHERE t.owner NOT LIKE '%SYS%' AND t.owner NOT IN ('XDB') AND t.temporary = 'N' AND t.secondary = 'N' AND t.segment_created = 'YES'
AND s.stattype_locked IS NULL AND (t.last_analyzed < SYSDATE - ${AGE_DAYS} OR t.last_analyzed IS NULL) AND t.table_name NOT LIKE 'SYS%' ${NUM_RECS} ${OPT_FILTER}
ORDER BY t.num_rows;
exit
!
) > ${REPORT_NAME}

# Add commands to gather system stats if requested
if [ -n "${SYS_STATS}" ] ; then
echo "exec DBMS_STATS.GATHER_SYSTEM_STATS" >>${REPORT_NAME}
echo "exec DBMS_STATS.GATHER_DICTIONARY_STATS" >>${REPORT_NAME}
echo "exec DBMS_STATS.GATHER_FIXED_OBJECTS_STATS" >>${REPORT_NAME}
fi
echo "exit" >>${REPORT_NAME}

sqlplus ${CRED} as sysdba @${REPORT_NAME}

Editioning Implementation – Moving Triggers

You've got the tables renamed and the editioning views (EVs) in place; now it's time to move the triggers from the base tables to the EVs.
  1. Drop the triggers from the base tables
  2. Run the DDL file you saved the trigger source into earlier

That’s it!

You probably want to write some validation process here to check that all the triggers you had before are still there and in the same state. I would save the owner, trigger_name, and status into text files before and after, and diff those, but you can do something else if you prefer. The goal is to make sure you didn't miss anything.

This shows the triggers and their status; spool the output and run it before and after as a validity check:

SELECT owner, trigger_name, status FROM dba_triggers WHERE owner IN (SELECT owner FROM editioned_users_obj_cnt WHERE object_type = 'TRIGGER');
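The before/after comparison can then be a plain diff of the two spooled listings; a minimal sketch with hypothetical file names and contents:

```shell
# Hypothetical spooled trigger listings taken before and after the move
printf 'HR TRG_EMP_AUDIT ENABLED\nHR TRG_DEPT_AUDIT ENABLED\n' > triggers_before.lst
printf 'HR TRG_EMP_AUDIT ENABLED\nHR TRG_DEPT_AUDIT ENABLED\n' > triggers_after.lst

# diff is silent and exits 0 when nothing was lost in the move
if diff triggers_before.lst triggers_after.lst >/dev/null ; then
echo "trigger state preserved"
else
echo "MISMATCH - investigate before proceeding" >&2
fi
```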

This SQL will generate the trigger drop commands:

SELECT 'drop trigger ' || owner || '.' || trigger_name || ';' FROM dba_triggers WHERE owner IN (SELECT owner FROM editioned_users_obj_cnt WHERE object_type = 'TRIGGER');