Setup commands for SQL deploy scripts

These are the commands I put at the beginning of each SQL script prior to running.

SET TIME ON
SET TIMI ON
SET DEF OFF
ALTER SESSION SET NLS_LENGTH_SEMANTICS = 'CHAR';
ALTER SESSION SET DDL_LOCK_TIMEOUT=60;
SPOOL "ticket #"_"environment".lst

SET TIME ON - Provides timestamps for each operation, important for recordkeeping/auditing

SET TIMI ON (short for SET TIMING ON) - Most useful while testing. Which operations are taking a long time? Will this be an issue in production systems during the deploy?

SET DEF OFF - For DML statements, allow any &'s to be interpreted literally

ALTER SESSION SET NLS_LENGTH_SEMANTICS = 'CHAR'; - Usually you want the VARCHAR2 data types to be CHAR and not BYTE (the default). While it is best to explicitly set this in the DDL commands, this will pick up any that were missed.

ALTER SESSION SET DDL_LOCK_TIMEOUT=60 - If the script tries to execute a DDL command on an object in use, it may encounter a lock. This allows the script to retry for up to 60 seconds before timing out.

SPOOL - Write all output to a file. This should be kept as a record of what was done during the script's run.
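
Putting it together, the top of a deploy script for a made-up ticket JIRA-1234 being run in the QA environment would look like this, with a matching SPOOL OFF as the last line of the script:

SET TIME ON
SET TIMI ON
SET DEF OFF
ALTER SESSION SET NLS_LENGTH_SEMANTICS = 'CHAR';
ALTER SESSION SET DDL_LOCK_TIMEOUT=60;
SPOOL JIRA-1234_QA.lst

-- deploy statements go here

SPOOL OFF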

Compiling invalid objects with a script

I often compile invalid objects with the sys.utl_recomp package. It is an easy way to pick up any invalid objects, but there are faster and smarter ways to compile them - if you know ahead of time what will go invalid. How would you know that? Consider a scripted database deploy: it can be written well in advance of the deploy to production systems, and as it is run in different databases, you can easily see what goes invalid.
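
For reference, the utl_recomp route is a one-liner; recomp_parallel takes a thread count and, optionally, a schema name to limit the scope:

EXEC sys.utl_recomp.recomp_parallel(4);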

The SQL below will write DDL to compile any invalid objects. In addition to running it to compile the currently invalid objects, you can copy this set of DDL into your deploy scripts.

SELECT decode(object_type,'PACKAGE BODY','alter package '||owner||'.'||object_name||' compile body;',
'alter '||object_type||' '||owner||'.'||object_name||' compile;') cmd
FROM all_objects WHERE object_type IN ('FUNCTION','JAVA SOURCE','PACKAGE','PACKAGE BODY','PROCEDURE','SYNONYM','TRIGGER','TYPE','TYPE BODY','VIEW')
AND status = 'INVALID';

Incorporating the compile commands directly into your deploy scripts has several advantages:

  • Move high-priority objects to the top of the list. I put triggers first, then views, as the views are often used by the PL/SQL objects. Synonyms are usually fine to go last.
  • Move compile commands to immediately after the objects become invalid. This requires some knowledge about what is causing the objects to become invalid. This is especially helpful if there are many changes in a deploy and you have a way to run parallel streams of changes. Allowing the compiles to happen in each stream as the objects are invalidated will significantly lower the disruption to running sessions.

With a bit of imagination, you can use the SQL above as a starting point for other actions. For example, if you want to recompile all triggers for user SCOTT, just change the SQL where clauses.
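
For instance, a version that recompiles all of SCOTT's triggers (not just the invalid ones, so the status predicate is dropped) might look like:

SELECT 'alter trigger '||owner||'.'||object_name||' compile;' cmd
FROM all_objects
WHERE object_type = 'TRIGGER' AND owner = 'SCOTT';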

One additional note: I used all_objects in the SQL to allow any user to run this, but if you wish to use dba_objects, that works also.

Analyzing cache sizes for sequences

When creating sequences, most people give no thought to the size of the cache, just accepting the default of 20. That is perhaps a reasonable trade-off for sequences where you have no idea about the velocity of allocation. After a system has been in production for a while, though, it is worth taking a look at the cache sizes and adjusting them to a more optimal value. The following SQL looks for sequences that have a cache set, but such a low number of allocations that it would be reasonable to turn off the caching.

SELECT sequence_owner,sequence_name,cache_size,last_number,'alter sequence '||sequence_owner||'."'||sequence_name||'" nocache;' nocache_cmd
FROM dba_sequences WHERE last_number - min_value > 0 AND cache_size > 0
AND sequence_owner NOT LIKE 'APEX%' AND sequence_owner NOT IN (SELECT owner FROM dba_logstdby_skip WHERE statement_opt = 'INTERNAL SCHEMA')
AND last_number < 1000 ORDER BY 1, 2;

You should manually look over this list before just executing the alter commands in the last column, but generally these are safe to run unless the sequence is newly created.

Now for the other end of the spectrum: which sequences would perform better if the cache size was increased? Try this SQL:

SELECT sequence_owner,sequence_name,cache_size,last_number,'alter sequence '||sequence_owner||'."'||sequence_name||'" cache '||(cache_size * 10)||';' cache_cmd
FROM dba_sequences WHERE last_number - min_value > cache_size * 500 AND cache_size < 1000000 AND last_number > 1000000 ORDER BY 1, 2;

Increasing the size of the cache is a bit trickier, and you should think about the usage of the sequence before increasing this value. If the value is too low, increasing it can have a dramatic effect on insert performance. Be particularly careful about changing sequences from nocache (cache size 0) to cache. This can change the behavior of the sequence assignment in subtle ways, so make sure you understand that effect and confirm that it will not affect the application.
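
As a worked example with a made-up sequence: if SCOTT.ORDER_SEQ has cache_size 20 and a last_number over a million, it meets the criteria above and the query generates:

alter sequence SCOTT."ORDER_SEQ" cache 200;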

Oracle password change script

I always script operations that I do a lot, and with ever-shrinking password age settings, scripting a password change is a big time saver. This is the script I use:

#!/bin/bash
# change_password.sh - changes a user's Oracle password in the current database
#
# written by Dennis Heltzel

usage() {
echo "Usage: $0 <username>"
exit 1
}

if [ $# -lt 1 ]; then
usage
exit 1
fi

PASSWORD_SIZE=20
USER=$1
# 2 options for easily generating a random password
NEWPASS=`openssl rand -base64 ${PASSWORD_SIZE}`
#NEWPASS=`date|md5sum|cut -c-${PASSWORD_SIZE}`

sqlplus -s / as sysdba <<!
alter user $USER identified by "$NEWPASS" account unlock;
prompt Your new password is $NEWPASS
prompt You can change your password anytime with:
prompt alter user $USER identified by "<new password>" replace "$NEWPASS";
prompt Your password will expire in 60 days, please change it before then.

prompt connect $USER/”$NEWPASS”

exit
!

The script assumes your ORACLE_SID is set to the correct database. If you change passwords in a number of databases, you could show the current value and allow it to be changed if desired.

I included two options for generating a random password; pick whichever one you prefer.

The script outputs text you can send (securely) to the user. It also outputs the connect string you can use to verify the new password.
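
A sample session (the user and generated password here are made up):

$ ./change_password.sh scott
Your new password is u4mW0b9hT2xQ8rKpL5nJc3==
You can change your password anytime with:
alter user scott identified by "<new password>" replace "u4mW0b9hT2xQ8rKpL5nJc3=="
Your password will expire in 60 days, please change it before then.
connect scott/"u4mW0b9hT2xQ8rKpL5nJc3=="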

Partition a table with no downtime

Oracle’s Online Table Redefinition can be used for a lot of things, but I have used it the most for partitioning an existing table. Because I do this so much, I created some “helper” shell scripts to make it easier and ensure I don’t miss something important. The scripts are available in my personal git repo: https://github.com/dwheltzel/Shell-Scripts-for-Oracle-DBAs/tree/master/online_partitioning

Preparing the scripts

You will need to copy 3 scripts and edit at least 2 of them. For this exercise, I am assuming you are starting with the scripts directly from the repo and want to partition a table called SCOTT.ORDERS by order_date (a date type column), and that you want monthly partitions.

Copy/create scripts

  • cp partition_CUST-1.sql partition_ORDERS-1.sql
  • cp partition_CUST-setup.sql partition_ORDERS-setup.sql
  • touch partition_CUST-local.sql

Edit setup script

vi partition_ORDERS-setup.sql - This file has 3 variables defined. Change these to match your table. The PART_TABLE variable holds the name of a temporary table that is needed during the redefinition; after the job completes, it will remain as the old, unpartitioned table. You will need to drop it manually once you have determined that you don't need it as a backup.
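
For the ORDERS example, the defines might end up looking something like this (illustrative only; use the actual variable names from the repo's copy of the script):

DEFINE OWNER = SCOTT
DEFINE TABLE_NAME = ORDERS
DEFINE PART_TABLE = ORDERS_TEMP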

Edit -1.sql script

vi partition_ORDERS-1.sql - This file has all the details about the new table structure. Make these edits:

  • First line - Change "CUST" to "ORDERS" so the setup file gets called correctly.
  • DEFINE V_TS - this is the tablespace the final partitions should reside in, by default
  • dbms_redefinition.can_redef_table - uncomment the correct line, depending on whether a PK exists for this table.
  • create table - change this section to create the table you are using. Column names must match and the number of columns needs to remain the same, but you can change the order of the columns if you wish (see the sketch after this list).
  • PARTITION BY RANGE - make this match your partitioning column name (order_date in our example). You can also change the partition range here - INTERVAL(numtoyminterval(1,'MONTH'))
  • PARTITION cust_ - change the names of the partitions, update the dates, and add or delete rows as needed to make the partitions match what you want.
  • dbms_redefinition.start_redef_table - If you are partitioning a table without a PK, edit these to run the version with ROWID
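
As a sketch of what the edited create table section might look like for the ORDERS example (column list abbreviated, table and partition names illustrative):

CREATE TABLE scott.orders_temp (
  order_id   NUMBER NOT NULL,
  order_date DATE NOT NULL
  -- remaining columns must match the original table
)
TABLESPACE &V_TS
PARTITION BY RANGE (order_date) INTERVAL(numtoyminterval(1,'MONTH'))
(PARTITION orders_p2021_01 VALUES LESS THAN (TO_DATE('2021-02-01','YYYY-MM-DD')),
 PARTITION orders_p2021_02 VALUES LESS THAN (TO_DATE('2021-03-01','YYYY-MM-DD')));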

Run the -1 script you just edited

Run this from sqlplus - @partition_ORDERS-1.sql. It can take a while to run, as it copies all the records to the new partitioned table you defined. If it errors, note the error and run the abort script as shown in the output. Fix the error and try again.

Once it completes without error, run the -2 script. The syntax is output at the end of the -1 script (e.g. @partition-2.sql ORDERS).

When the second script completes, test the resulting table that is now partitioned. You might count rows or run your application. Once you are convinced it worked as expected, you can drop the temporary table.
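
To verify, you can count rows and check the new partitions (adjust the owner and table name to yours):

SELECT count(*) FROM scott.orders;
SELECT partition_name, high_value FROM dba_tab_partitions
WHERE table_owner = 'SCOTT' AND table_name = 'ORDERS' ORDER BY partition_position;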

Do large deletes or updates the safe way in Oracle

Performing a large amount of DML in a single transaction can be disastrous in an Oracle database. It may run for a long time, creating large amounts of redo and undo and overwhelming a database tuned for smaller transactions. Also, if interrupted, all the work will be rolled back, creating locking and performance issues, sometimes for hours. Even restarting the DB will not fix this, and it has been the cause of much downtime in Oracle shops.

The obvious solution is to break up the DML into a series of smaller transactions, committing between each statement. While this works, it takes time to set up, is prone to errors that get committed as part of the script, and even when done well, can cause blocks to be changed multiple times, creating much more total redo than would be needed if it were done as a single statement.

There is a way to get the best of both worlds, using an Oracle package called DBMS_PARALLEL_EXECUTE. This package is fairly new and somewhat confusing in its documentation, but it works very well. I have written a helper procedure (purge_utils.purge_large_table) to hide the complexity and provide a tested, easy implementation for doing DML on large tables.

Theory of Operation
The goal is to perform all the needed DML on a set of blocks at once, rather than searching for a subset of records and changing all the blocks containing those records, which can result in some blocks being affected by many separate changes. The package performs the subsetting by blocks and issues commits and checkpoints at appropriate intervals. Commit frequency is determined by the number of records changed; this allows the process to traverse large areas with few or no affected records without adding commit load to the database, while still committing frequently enough to keep blocking locks to a minimum.

After each commit, the process sleeps to allow the redo stream to catch up and other database operations to use the resources. Tuning the sleep time, chunk size, and commit/checkpoint intervals can have a dramatic impact on the run time of the process and its load on the database. The defaults are meant to be conservative and favor low impact on the database over fast completion time.
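
For reference, the standard DBMS_PARALLEL_EXECUTE flow that the helper builds on looks roughly like this; the helper adds its own commit, sleep, and checkpoint handling on top, so treat this as a sketch rather than its exact implementation (the task name is made up):

DECLARE
  l_task VARCHAR2(30) := 'purge_employees';
BEGIN
  dbms_parallel_execute.create_task(l_task);
  -- chunk the table by ROWID ranges; by_row => FALSE means chunk_size is in blocks
  dbms_parallel_execute.create_chunks_by_rowid(task_name => l_task,
    table_owner => 'RE', table_name => 'EMPLOYEES',
    by_row => FALSE, chunk_size => 500);
  -- run the DML once per chunk; the package binds :start_id and :end_id
  dbms_parallel_execute.run_task(task_name => l_task,
    sql_stmt => 'DELETE /*+ ROWID (EMPLOYEES) */ FROM EMPLOYEES WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag => DBMS_SQL.NATIVE);
  dbms_parallel_execute.drop_task(l_task);
END;
/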

Procedure Usage
purge_utils.purge_large_table(p_owner VARCHAR2,
p_table_name VARCHAR2,
p_sql VARCHAR2,
p_keep_days PLS_INTEGER DEFAULT 365,
p_sleep_time PLS_INTEGER DEFAULT 2,
p_chunk_size PLS_INTEGER DEFAULT 500,
p_commit_interval PLS_INTEGER DEFAULT 1000,
p_chkpnt_interval PLS_INTEGER DEFAULT 100,
p_max_run_time PLS_INTEGER DEFAULT 0,
p_exit_time VARCHAR2 DEFAULT NULL,
p_log_always VARCHAR2 DEFAULT 'N');

The only mandatory parameters are the first 3: the table owner, the table name, and the SQL that is to be run for the DML. This SQL needs to be specially crafted to look like this:
'DELETE /*+ ROWID (EMPLOYEES) */ FROM EMPLOYEES WHERE rowid BETWEEN :start_id AND :end_id AND start_date < sysdate - :keep_days'

Notes about the SQL parameter

  • Any single quote in the statement must be escaped by adding an extra single quote before it.
  • The ROWID hint is needed and must have the name of the table.
  • The BETWEEN clause must remain exactly as shown; it is what allows for range operations.
  • The :keep_days variable needs to be included to prevent errors when executing, but if you do not wish to use it, add it as a throwaway clause like " :keep_days > 0 " (it defaults to 365, so this will always be true)

Notes about the tuning parameters

  • p_sleep_time – This is the number of seconds to sleep after each commit. This is to allow other database operations to use resources for a time and prevent the DML process from hogging all the resources and impacting DB operations.
  • p_chunk_size – The number of blocks in each processing unit. The entire table will be processed, but only this many blocks at a time. Increasing this speeds the DML process but increases the time for each operation and the chance of other processes being blocked.
  • p_commit_interval – Once this many (or more) records are affected by the DML (regardless of the number of actual blocks processed), a commit will be issued and the sleep will take place. Increasing this has the same risks/benefits as p_chunk_size
  • p_chkpnt_interval – Once this many commits have been made, a checkpoint is performed to avoid any problems with online redo logs not archiving fast enough.

Example

This will delete all the rows from the EMPLOYEES table where the start_date is at least 60 days ago.

BEGIN
  purge_utils.purge_large_table(p_owner => 'RE', p_table_name => 'EMPLOYEES',
    p_sql => 'DELETE /*+ ROWID (EMPLOYEES) */ FROM EMPLOYEES WHERE rowid BETWEEN :start_id AND :end_id AND start_date < sysdate - :keep_days',
    p_keep_days => 60);
END;
/

Where to get the package code

You can get the code for this package from my github repository: https://github.com/dwheltzel/dbadmin_packages

Oracle 19c Upgrade – Symlinks in Directories

Oracle 19c does not allow a directory path to contain a symlink. There is a utility you can run after the upgrade to help fix these, but I like to clear up as many issues as possible before the upgrade is run. To this end, I wrote a bash script that connects to the current database (ORACLE_SID) and outputs DDL that will replace the symlink directory path with the real path (bypasses the symlinks).

#!/bin/bash
# fix_oracle_dirs.sh
#
# Written by Dennis Heltzel, August 2021
# This will scan the Oracle directories in the current database and write DDL to:
# 1) fix any paths with symlinks (will not work in 19c)
# 2) drop dirs with invalid paths (unreachable)
#
DIRLIST=${ORACLE_SID}-directories.lst
FIXSQL=${ORACLE_SID}-fix_dirs.sql
:> ${DIRLIST}
:> ${FIXSQL}
echo "processing database ${ORACLE_SID}"
echo "list of all directories: ${DIRLIST}"
echo "DDL to fix directories: ${FIXSQL}"
# Get directory info from database
sqlplus -s / as sysdba <<! >${DIRLIST}
set pages 0
set lines 200
set head off
set feed off
select DIRECTORY_PATH||':'||owner||':'||DIRECTORY_NAME from dba_directories;
exit
!
# Loop through each found directory and write DDL if needed
cat ${DIRLIST} | while IFS=: read -r old_path owner dirname; do
  if ls -d "$old_path" >/dev/null 2>&1 ; then
    #echo "$old_path exists"
    new_path=`readlink -m "$old_path"`
    if [ "$new_path" != "$old_path" ] ; then
      echo "as $owner, CREATE OR REPLACE DIRECTORY $dirname AS '$new_path';" >>${FIXSQL}
    fi
  else
    #echo "$old_path does not exist"
    echo "as $owner, DROP DIRECTORY $dirname;" >>${FIXSQL}
  fi
done
# Display the results
cat ${FIXSQL}
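
The resulting fix script has one line per directory that needs attention; the output looks something like this (made-up names and paths):

as SYS, CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/u01/app/oracle/admin/ORCL/dpdump';
as SCOTT, DROP DIRECTORY OLD_EXPORT_DIR;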

Find sessions locking objects on RAC

DBA query with kill commands:

SELECT b.inst_id||' '||c.object_type||' '||c.owner||'.'||c.object_name||' '||c.subobject_name||
' user:'||b.username||' '||b.osuser||' on '||b.machine||' '||b.status||' '||b.sid||','||b.serial# "Instance Object Session Info",
'alter system kill session '''||b.sid||','||b.serial#||''';' kill_cmd
FROM gv$locked_object a JOIN gv$session b ON (a.inst_id = b.inst_id AND b.sid = a.session_id)
JOIN dba_objects c ON (a.object_id = c.object_id)
ORDER BY c.owner,c.object_name,c.subobject_name,b.inst_id,b.machine;
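
Note that ALTER SYSTEM KILL SESSION acts on the instance you are connected to. On 11g and later you can target another instance with the @inst_id form, so a variant of the kill_cmd column (my suggested tweak, not part of the original query) is:

'alter system kill session '''||b.sid||','||b.serial#||',@'||b.inst_id||''';' kill_cmd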

User accessible view:

CREATE OR REPLACE VIEW sys.my_locked_objects AS
SELECT b.inst_id,c.object_type,c.owner||'.'||c.object_name object,c.subobject_name,b.username,b.osuser,b.machine,b.status,b.sid,b.serial#
FROM gv$locked_object a JOIN gv$session b ON (a.inst_id = b.inst_id AND b.sid = a.session_id)
JOIN dba_objects c ON (a.object_id = c.object_id)
WHERE c.owner = USER OR b.username = USER;

You can grant select on this view to PUBLIC and make a public synonym if you wish.
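
Those grants look like this:

GRANT SELECT ON sys.my_locked_objects TO PUBLIC;
CREATE PUBLIC SYNONYM my_locked_objects FOR sys.my_locked_objects;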

Emergency cleanup of files on Exadata compute node

This is an aggressive script to clear space in an Exadata compute node. Use with caution as it wipes out a lot of logs quickly.

## Cleanup of filesystems by the root user
# First, cleanup system files
journalctl --vacuum-time=1s

# Remove un-needed system files
rm -rf /tmp/OraInstall*
rm -f /var/log/*.1
rm -f /var/log/*gz
# Clear the ones we do need to keep
:> /var/spool/mail/root
:> /var/log/rdbmsaudit.log
:> /var/log/asmaudit.log

## Oracle file cleanup
. /u01/app/oracle/bin/ora_funcs.sh
# Migrate schemas before we run purgeLogs
# (oe is a site-specific helper from ora_funcs.sh that sets the Oracle environment)
oe PRD
ADR_HOME=`ls -d /u01/app/oracle/diag/rdbms/ftsprd*/$ORACLE_SID|cut -c17-`
#echo $ADR_HOME
export ORACLE_HOME=/u01/app/oracle/product/18.0.0.0/DbHome_1;/u01/app/oracle/product/18.0.0.0/DbHome_1/bin/adrci exec="set homepath $ADR_HOME; migrate schema"
# Fix permissions - If these pv dirs are owned by root, you can't login locally
chown -R oracle /u01/app/oracle/diag/rdbms/*/*/metadata_pv
/u01/app/oracle/bin/purgeLogs -orcl 1 -osw 1 -oda 1 -extra /tmp:1,/var/log:1

DIAGDIR=/u01/app/oracle/diag/rdbms
cd $DIAGDIR

# remove core dumps
rm -rf ${DIAGDIR}/*/*/trace/cdmp*
find /u01/app -name "cdmp*" -type d -delete

# remove incident files
rm -rf ${DIAGDIR}/*/*/incident/incdir*

DIAGDIR=/u01/app

# remove audit files
find /u01/app -mount -name "*.aud" -type f -delete

# remove zero length files older than 6 hours
find $DIAGDIR -mount -name "*.trc" -type f -size 0 -mmin +360 -delete
find $DIAGDIR -mount -name "*.trm" -type f -size 0 -mmin +360 -delete

# clear older trace files
#find $DIAGDIR -mount -name "*.trc" -type f -size +0 -mmin +10 -execdir cp /dev/null {} \;
#find $DIAGDIR -mount -name "*.trm" -type f -size +0 -mmin +10 -execdir cp /dev/null {} \;

# Clear all large trace files
find $DIAGDIR -mount -name "*.trc" -type f -size +0 -execdir cp /dev/null {} \;
find $DIAGDIR -mount -name "*.trm" -type f -size +0 -execdir cp /dev/null {} \;

# Check ownership of directories
ls -ld /u01/app/oracle/diag/rdbms/*/*/metadata_pv

Control which instance a scheduled job will run in

If you are scheduling jobs on a RAC, it would be nice to be able to control which instance the job will run in. This is easy to do using job classes with these steps:

  1. Create a new job class and assign a service name to the class (see the example after this list)
  2. Grant execute in that job class to all the Oracle accounts that you want to use it with
  3. Assign the new job class to existing (or new) jobs.
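
For step 1, assuming a service (ETL_SVC here, a made-up name) has already been created and configured with srvctl to run on the desired instance, the job class creation looks like this:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB_CLASS(
    job_class_name => 'ETL_JOB_CLASS',
    service        => 'ETL_SVC',
    comments       => 'Jobs tied to the instance(s) running ETL_SVC');
END;
/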

Here are some SQL snippets to help with steps 2 and 3:

-- make executable grants for job classes where needed
select distinct 'grant execute on SYS.'||o.object_name||' to '||j.owner || ';'
from dba_objects o, dba_scheduler_jobs j
where o.object_type = 'JOB CLASS' and o.oracle_maintained = 'N' and j.owner not like 'SYS%'
minus
select distinct 'grant execute on SYS.'||p.table_name||' to '||p.grantee || ';'
from dba_tab_privs p where p.privilege = 'EXECUTE' and type = 'JOB CLASS';

-- fix job classes
select owner,job_name,job_class,'exec SYS.DBMS_SCHEDULER.SET_ATTRIBUTE(name => '''||owner||'.'||job_name||''', attribute => ''JOB_CLASS'', value => ''ETL_JOB_CLASS'');'
from dba_scheduler_jobs j where owner not in ('SYS','APEX_040200') and job_class = 'DEFAULT_JOB_CLASS'
order by 1, 2;