Wednesday, December 17, 2014

UNIX - More on searching for files

From the previous post....

To locate large directories, I extended the find-plus-script idea from the previous post with another executable script (ascr2).

In this example, I use the du -sm command to locate directories exceeding a certain size in MB.

File - ascr2. Parameter 1 is the directory name; parameter 2 is the minimum size (in MB) we want to display. Give it execute permissions.

#!/bin/sh
# ascr2 - print the directory name and its size in MB when it exceeds the threshold in $2
size=`du -sm "$1" | cut -f1`
if [ $size -gt $2 ]; then
echo "$1 - $size"
fi

The find command. This will only display directories over 100 MB. Obviously the parent directories will also appear, since they are at least as large as their children.

find /app/oracle -type d -exec ./ascr2 {} 100 \;


Happyjohn

UNIX - finding directories with more than x number of files

I had to scratch my head a little when I was supporting Oracle in a Unix/Linux/AIX environment. We would keep getting tickets saying partition /app/oracle exceeds 90%.

So I'd log into the server and do a df -h /app/oracle or df -g /app/oracle (on AIX), and sure enough it was over 90% full.
But which files should I delete or gzip?

So I'd do this...

find /app/oracle -type f -size +100000 -exec ls -l {} \;

to locate large files (with no unit suffix, -size counts 512-byte blocks, so +100000 is roughly 50 MB), and this to locate dump files.

find /app/oracle -name '*.dmp' -exec ls -l {} \;

But the biggest problem is large numbers of small log files clogging up directories. I couldn't find a suitable Unix command, so I wrote one.

File - ascr1 (this will display the count of files in directory $1, but only if it exceeds the number in $2). Give it execute privs.

#!/bin/sh
# ascr1 - print the directory name and its file count when it exceeds the threshold in $2
count=`ls "$1" | wc -l`
if [ $count -gt $2 ]; then
echo "$1 - $count"
fi

Then use it in a find command. In this example it will only display directories exceeding 1000 files.

find /app/oracle -type d -exec ./ascr1 {} 1000 \;



Happyjohn.

Wednesday, November 19, 2014

How to demonstrate Oracle In-Doubt Transactions

An 'in-doubt transaction' occurs in Oracle when you update both a local and a remote table via a database link, but the remote database fails before the distributed transaction can complete its two-phase commit.

Here is the example.

Using databases TEST1 and TEST2, create a table on each. Also create a database link from TEST1 to TEST2.

conn user1/pwd1@TEST1
create table loc_tab (afld varchar2(10));
create database link alink connect to user2 identified by pwd2 using 'TEST2';

conn user2/pwd2@TEST2
create table rem_tab (afld varchar2(10));

Then connect to TEST1 and insert a row into both.

conn user1/pwd1@TEST1
insert into loc_tab values ('A');
insert into rem_tab@alink values ('B');

Then connect another session as SYS AS SYSDBA on TEST2 and abort the database.

shutdown abort

Back on TEST1, commit the transaction. You will get an error.

SQL> commit;
commit
*
ERROR at line 1:
ORA-02054: transaction 6.21.2068 in-doubt
ORA-03150: end-of-file on communication channel for database link
ORA-02063: preceding line from ALINK

Select * from dba_2pc_pending to see the in doubt transaction on TEST1.
Startup TEST2 and you'll see that the in doubt transaction remains.
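
For example, a minimal sketch using some of the view's standard columns:

select local_tran_id, state, mixed, advice, fail_time
from dba_2pc_pending;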

The options for dealing with this are COMMIT FORCE or ROLLBACK FORCE. Because we are creating standby databases from TEST1 and don't want the users getting ORA-01591 errors, we will force the transaction to roll back.

SQL> select local_tran_id from dba_2pc_pending WHERE state not like 'forced%';
LOCAL_TRAN_ID
----------------------
6.21.2068

SQL> rollback force '6.21.2068';
Rollback complete.

** Note the WHERE clause, which avoids showing other in-doubt transactions that have already been forced recently.

If you choose to COMMIT FORCE '6.21.2068', then it will only commit the local part of the transaction.
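
Once a transaction has been forced, its entry lingers in DBA_2PC_PENDING in a 'forced' state. A hedged sketch of clearing it manually, using the standard DBMS_TRANSACTION package (check the Oracle documentation before doing this in production):

-- Remove the forced transaction's bookkeeping entry from DBA_2PC_PENDING.
EXECUTE DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('6.21.2068');
COMMIT;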

I did find that when I restarted TEST2 before performing a 'commit force' on TEST1, the command hung. I'm not sure about that; it didn't hang on later tests.


Another thing you can do is simulate the failure on TEST2.
This is done by not aborting TEST2, but instead committing with a special comment.

COMMIT COMMENT 'ORA-2PC-CRASH-TEST-7';

Here the 7 can be any number between 1 and 10. Choosing the value requires some thought; I've listed the values and what they mean at the end of this note.

The result, however, is the same: you still have to ROLLBACK FORCE or COMMIT FORCE the in-doubt transaction, though you'll get an error when you do.

Finally, there are options to enable or disable the automatic resolution of in-doubt transactions by the RECO process.

ALTER SYSTEM ENABLE DISTRIBUTED RECOVERY;
ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;

Doc ID 100664.1 from Oracle will give you more detailed information on this matter, and it also has links to other related documents.


COMMIT COMMENT Options.

1 - Crash commit point after collect
2 - Crash non-commit-point site after collect
3 - Crash before prepare (non-commit-point site)
4 - Crash after prepare (non-commit-point site)
5 - Crash commit point site before commit
6 - Crash commit point site after commit
7 - Crash non-commit-point site before commit
8 - Crash non-commit-point site after commit
9 - Crash commit point site before forget
10 - Crash non-commit-point site before forget

Monday, November 3, 2014

Data Guard Transport Lag

I need to thank Emre Baransel's Oracle Blog for pulling me through a tight spot for something I should have already known.

OEM 12c reports the following :-

There are approximately 3,753 seconds of redo not yet available on this standby database.

I know this means that there is a delay or a gap in the transport of redo to a Data Guard standby database, but having the commands to hand for a quick diagnosis was briefly out of my grasp. So this is what you do.

Locate your Primary and Standby databases and get a sqlplus session running as sysdba on each.

On the primary

SELECT a.thread#,
       b.last_seq,
       a.applied_seq,
       a.last_app_timestamp,
       b.last_seq - a.applied_seq arc_diff
FROM (SELECT thread#,
             MAX(sequence#) applied_seq,
             MAX(next_time) last_app_timestamp
      FROM gv$archived_log
      WHERE applied = 'YES'
      GROUP BY thread#) a,
     (SELECT thread#,
             MAX(sequence#) last_seq
      FROM gv$archived_log
      GROUP BY thread#) b
WHERE a.thread# = b.thread#;

Sadly, at this point I find that the arc_diff is now zero and the problem has gone from OEM too.
However, the principle still applies, and when this happens again, I'll put the figures in.

  THREAD#   LAST_SEQ APPLIED_SEQ LAST_APP_TIMESTAMP     ARC_DIFF
---------- ---------- ----------- -------------------- ----------
        1     568343      568343 03-NOV-2014 12:02:23          0

Now looking on the standby.

SELECT thread#, max(SEQUENCE#) FROM V$ARCHIVED_LOG where APPLIED='YES' group by thread#;

  THREAD# MAX(SEQUENCE#)
---------- --------------
        1         568343

And

select process, status, thread#, sequence#, block#, blocks from v$managed_standby ;

PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS
--------- ------------ ---------- ---------- ---------- ----------
ARCH      CLOSING               1     568343      12289       1833
ARCH      CONNECTED             0          0          0          0
ARCH      CLOSING               1     568342       8193       2034
ARCH      CONNECTED             0          0          0          0
MRP0      APPLYING_LOG          1     568344       7008     204800
RFS       IDLE                  1     568344       7010        116
RFS       IDLE                  0          0          0          0
RFS       IDLE                  0          0          0          0

8 rows selected.

Standby database process status: the v$managed_standby query above shows what the MRP and RFS processes are doing, and which block of which archivelog sequence is being shipped or applied.

Next, apply/transport lags: the v$dataguard_stats view shows the general synchronization status of the standby database.

 

set lines 200
col name format a40
col value format a20
select * from v$dataguard_stats;

NAME                                     VALUE                UNIT                           TIME_COMPUTED
---------------------------------------- -------------------- ------------------------------ ------------------------------
apply finish time                        +00 00:00:00.0       day(2) to second(1) interval   03-NOV-2014 12:58:01
apply lag                                +00 00:00:06         day(2) to second(0) interval   03-NOV-2014 12:58:01
estimated startup time                   15                   second                         03-NOV-2014 12:58:01
standby has been open                    N                                                   03-NOV-2014 12:58:01
transport lag                            +00 00:00:00         day(2) to second(0) interval   03-NOV-2014 12:58:01

SQL>

 

Finally, the apply rate: to find the speed of media recovery on a standby database, you can use this query.

 

set lines 200
col type format a30
col item format a20
col comments format a20
select * from v$recovery_progress;

START_TIME           TYPE                           ITEM                 UNITS                                 SOFAR      TOTAL TIMESTAMP
-------------------- ------------------------------ -------------------- -------------------------------- ---------- ---------- --------------------
03-NOV-2014 08:41:01 Media Recovery                 Log Files            Files                                     4          0
03-NOV-2014 08:41:01 Media Recovery                 Active Apply Rate    KB/sec                                  241          0
03-NOV-2014 08:41:01 Media Recovery                 Average Apply Rate   KB/sec                                    3          0
03-NOV-2014 08:41:01 Media Recovery                 Redo Applied         Megabytes                                48          0
03-NOV-2014 08:41:01 Media Recovery                 Last Applied Redo    SCN+Time                         2147483647          0 03-NOV-2014 12:50:07
03-NOV-2014 08:41:01 Media Recovery                 Active Time          Seconds                                  13          0
03-NOV-2014 08:41:01 Media Recovery                 Apply Time per Log   Seconds                                   2          0
03-NOV-2014 08:41:01 Media Recovery                 Checkpoint Time per  Seconds                                   0          0
                                                    Log
03-NOV-2014 08:41:01 Media Recovery                 Elapsed Time         Seconds                               15598          0

9 rows selected.

 

 

 

Happyjohn

 




Wednesday, October 22, 2014

Is my Materialized View Log being used?

We have a large number of production databases that are being used as the source for Materialized Views.

Often the developers just add an MV log to a new table on the chance that it might be used. As we all know, MV logs (MLOGs) record changed rows in a table so that an MV based on that source table can perform a fast refresh.

I needed a way to locate all MLOGs and work out whether they are still being used, or have ever been used. The solution is quite simple.

The dictionary table SYS.MLOG$ contains various details of each MLOG's usage. The information is better in 11g, but adequate in 10g.

The following query lists all tables whose name starts with MLOG$, and then adds the SYS.MLOG$ details. I've used an outer join so that tables which have never been used as MV logs still show. The v$database join is only there because I was running this across a lot of databases.

select db.name, tabs.owner, tabs.table_name, mlog.master, mlog.log, mlog.youngest
from sys.mlog$ mlog, v$database db, dba_tables tabs
where tabs.owner = mlog.mowner(+) and tabs.table_name = mlog.log(+)
and tabs.table_name like 'MLOG$%' and tabs.table_name not in ('MLOG$','MLOG_REF$')
order by 2,3;
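
As a cross-check, the supported DBA_MVIEW_LOGS view maps each log table back to its master table. A minimal sketch (this is a standard dictionary view in 10g and 11g):

select log_owner, master, log_table
from dba_mview_logs
order by 1, 2;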

Let me know what you think of my solution.

Happyjohn

Sunday, October 19, 2014

Resumable Timeouts caused by ORA-01652 but Tablespace is not full

* This applies when you have a resumable timeout set, so that SQL statements waiting for tablespace resources can pause until the space becomes available.
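
For reference, a minimal sketch of how resumable waits are typically enabled (the 7200-second value is purely illustrative):

-- Instance-wide resumable timeout, in seconds.
ALTER SYSTEM SET resumable_timeout = 7200;
-- Or for a single session:
ALTER SESSION ENABLE RESUMABLE TIMEOUT 7200;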

I had this interesting problem the other day, and while the solution is reasonably obvious, it didn't occur to me immediately at the time.

You can detect resumable pauses with this statement...

select user_id, instance_id, status, name, error_msg, suspend_time, resume_time from dba_resumable;

This shows one or more sessions where the resumable wait has either cleared, or an error is displayed showing a wait in progress. At my company we monitor for waits that run too long and then report them to call-out.

In this case, this was the error message shown. There were two sessions waiting on a large data warehouse.

ORA-01652: unable to extend temp segment by string in tablespace cma_ts1

Earlier in the day, I had got the same message, and when I checked, the tablespace was indeed almost completely full. It was on ASM and the single file in the tablespace was 1.5 TB. Because it's a production database I didn't hang around, and increased the datafile to 2 TB.

ALTER DATABASE DATAFILE '+DATA_DG/dgb_tsm1/datafile/cma_ts1.1145.851597989' AUTOEXTEND ON MAXSIZE 2000G
database datafile '+DATA_DG/DGB_TSM1/DATAFILE/CMA_TS1.1145.851597989' altered.

So with an additional 500 GB, I figured we'd be OK for some time. However, 3 hours later I was called out again. Same error.

However, now only 91% of the datafile was being used. This equates to 180 GB still free.

Of course what I didn't immediately spot was that just checking the size of the free space isn't enough. You must also consider the amount of space being requested by running processes.

This statement gives you the tablespace usage details.

select file_name, file_id, round(user_bytes/1024/1024,2) "Used(Mb)", round(user_bytes/1024/1024/1024,2) "Used(Gb)", status, autoextensible
from dba_data_files c
where tablespace_name = 'CMA_TS1' order by file_id desc;

And this shows the percentage used (there are other ways).

select tablespace_name, used_percent, tablespace_size, used_space, (tablespace_size - used_space) free
from dba_tablespace_usage_metrics
where tablespace_name = 'CMA_TS1';

Having obtained the space used figures, take a look at the amount of space currently being requested by the running processes.

select segment_name, bytes/1048576/1024 from dba_segments where tablespace_name='CMA_TS1'
order by 2 desc;

If the segment name is a number rather than an object name, then it represents one of the temporary segments we're looking for. In my case there were two (as expected), sized at 337 GB and 240 GB. Clearly neither of these next-extent requirements would fit into the free space available.

I could have added more diskspace, but the prudent thing to do was to locate the session owners and have a quiet word with them. Having done this and explained the realities of initial and next extents, we killed the processes and the scripts were rewritten.

HappyJohn

Tuesday, September 16, 2014

ORA-30036 and no free UNDO space

We have a database with a large amount (10 GB) of undo that had never needed to be increased since the database was built. One morning we found lots of sessions suspended because someone had taken all of the undo.

Firstly, I identified the session that was causing the problem.

SELECT a.sid, a.username, b.used_urec, b.used_ublk
FROM v$session a, v$transaction b
WHERE a.saddr = b.ses_addr
ORDER BY b.used_ublk DESC;

The result of this query showed one huge amount of undo held by a single session. In my case it was SID 177. So next I display details about the session.

SELECT s.sid, s.serial#, s.username, s.osuser, p.spid, s.machine, p.terminal, s.program
FROM v$session s, v$process p
WHERE s.paddr = p.addr
and sid = 177;

I've displayed not only the SID and serial# I need to kill the session, but also the SPID, so I can kill it from Unix if necessary.

I also located the SQL for that session and found it was a massive DELETE statement with no commits.
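
A minimal sketch of that lookup, joining v$session to v$sql on sql_id (177 was the SID in this case):

select s.sid, q.sql_text
from v$session s, v$sql q
where s.sql_id = q.sql_id
and s.sid = 177;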

Having got agreement from the user that I could end the session, I killed it.

alter system kill session '177,13989' immediate;


Once the session was killed, I increased the size of the undo tablespace to allow other sessions to run while the rollback occurred.
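
A minimal sketch of such a resize, assuming an undo tablespace named UNDOTBS1 (the file path and sizes are illustrative, not from this incident):

-- Add a datafile to the undo tablespace; names and sizes are examples only.
ALTER TABLESPACE undotbs1
ADD DATAFILE '/u02/oradata/proddb/undotbs02.dbf'
SIZE 4G AUTOEXTEND ON MAXSIZE 8G;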

This statement shows how much undo is still being used.

select status,
round(sum_bytes / (1024*1024), 0) as MB,
round((sum_bytes / undo_size) * 100, 0) as PERC
from
(
select status, sum(bytes) sum_bytes
from dba_undo_extents
group by status
),
(
select sum(a.bytes) undo_size
from dba_tablespaces c
join v$tablespace b on b.name = c.tablespace_name
join v$datafile a on a.ts# = b.ts#
where c.contents = 'UNDO'
and c.status = 'ONLINE'
);

STATUS            MB       PERC
--------- ---------- ----------
ACTIVE         10276         96
EXPIRED            1          0
UNEXPIRED        396          4

I continued to monitor the recovery of the database thus.

select * from v$fast_start_transactions;

Note: UNDOBLOCKSDONE increases up to a fixed batch which is then removed from UNDOBLOCKSTOTAL.

Once UNDOBLOCKSTOTAL reached 0, the available blocks in undo went back to normal.

Once the recovery has completed, the above query returns this result.

STATUS            MB       PERC
--------- ---------- ----------
ACTIVE            81          1
EXPIRED            2          0
UNEXPIRED      10590         99

The quantity of unexpired blocks is now OK.




HappyJohn.

Friday, August 1, 2014

ADDM & Reports

The Automatic Database Diagnostic Monitor (ADDM) analyses data in the Automatic Workload Repository (AWR) to identify potential performance bottlenecks. For each of the identified issues it locates the root cause and provides recommendations for correcting the problem. An ADDM analysis task is performed and its findings and recommendations stored in the database every time an AWR snapshot is taken.

The ADDM analysis includes the following.

CPU load
Memory usage
I/O usage
Resource intensive SQL
Resource intensive PL/SQL and Java
RAC issues
Application issues
Database configuration issues
Concurrency issues
Object contention

Note:

The analysis of I/O performance is affected by the DBIO_EXPECTED parameter which should be set to the average time (in microseconds) it takes to read a single database block from disk. Typical values range from 5000 to 20000 microseconds. The parameter can be set using the following.

EXECUTE DBMS_ADVISOR.set_default_task_parameter('ADDM', 'DBIO_EXPECTED', 8000);

To activate, set statistics_level to typical or all:
alter system set statistics_level = typical;
alter system set statistics_level = all;

To deactivate:
alter system set statistics_level = basic;

---------------------------------------------------------------------------------
The addmrpt.sql script can be used to create an ADDM report from SQL*Plus.

-- Windows
@C:\app\oracle\product\12.1.0\dbhome_2\rdbms\admin\addmrpt.sql

(The script asks for a range of snapshots (without a database restart in between) and the output report filename. The report is created in the local directory unless you state otherwise.)

---------------------------------------------------------------------------------
The basic SQL for this report is listed below, just with a few parameters and a spool slapped around it.

BEGIN
-- Create an ADDM task.
DBMS_ADVISOR.create_task (
advisor_name => 'ADDM',
task_name => '455_478_AWR_SNAPSHOT',
task_desc => 'Advisor for snapshots 455 to 478.');

-- Set the start and end snapshots.
DBMS_ADVISOR.set_task_parameter (
task_name => '455_478_AWR_SNAPSHOT',
parameter => 'START_SNAPSHOT',
value => 455);

DBMS_ADVISOR.set_task_parameter (
task_name => '455_478_AWR_SNAPSHOT',
parameter => 'END_SNAPSHOT',
value => 478);

-- Execute the task.
DBMS_ADVISOR.execute_task(task_name => '455_478_AWR_SNAPSHOT');
END;
/

-- Display the report.
SET LONG 100000
SET PAGESIZE 50000
SELECT DBMS_ADVISOR.get_task_report('455_478_AWR_SNAPSHOT') AS report
FROM dual;
SET PAGESIZE 24
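
Task names must be unique, so before re-running the block above with the same name you may need to drop the earlier task. A hedged sketch using the standard DBMS_ADVISOR.DELETE_TASK procedure:

EXEC DBMS_ADVISOR.delete_task(task_name => '455_478_AWR_SNAPSHOT');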

---------------------------------------------------------------------------------
Snapshot collection is controlled by dba_hist_wr_control; by default it is hourly.

col snap_interval for a20
col retention for a20
select * from dba_hist_wr_control;

      DBID SNAP_INTERVAL        RETENTION            TOPNSQL        CON_ID
---------- -------------------- -------------------- ---------- ----------
3043802518 +00000 01:00:00.0    +00008 00:00:00.0    DEFAULT             1



---------------------------------------------------------------------------------

To change the collection settings (both interval and retention are in minutes; 43200 minutes is 30 days):

exec dbms_workload_repository.modify_snapshot_settings ( interval => 60, retention => 43200);

---------------------------------------------------------------------------------
To display snapshots

col begin_interval_time for a25
col end_interval_time for a25
select snap_id, begin_interval_time, end_interval_time from dba_hist_snapshot order by 1;

---------------------------------------------------------------------------------
Create a snapshot : exec dbms_workload_repository.create_snapshot;

Delete snapshots : exec dbms_workload_repository.drop_snapshot_range (low_snap_id=>455, high_snap_id=>478);

---------------------------------------------------------------------------------
The report itself.....

ADDM Report for Task 'TASK_830'
-------------------------------

Analysis Period
---------------
AWR snapshot range from 455 to 478.
Time period starts at 31-JUL-14 11.04.16
Time period ends at 01-AUG-14 10.00.34

Analysis Target
---------------
Database 'JHCDB' with DB ID 3043802518.
Database version 12.1.0.1.0.
ADDM performed an analysis of instance jhcdb, numbered 1 and hosted at
Serverxxx

Activity During the Analysis Period
-----------------------------------
Total database time was 742 seconds.
The average number of active sessions was .01.

Summary of Findings
-------------------
   Description             Active Sessions        Recommendations
                           Percent of Activity
   ---------------------   -------------------    ---------------
1  Top SQL Statements      .01 | 61.11            2
2  "User I/O" wait Class   0 | 7.69               0
3  Hard Parse              0 | 4.3                0
4  Soft Parse              0 | 2.45               2

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Findings and Recommendations
----------------------------

Finding 1: Top SQL Statements
Impact is .01 active sessions, 61.11% of total activity.
--------------------------------------------------------
SQL statements consuming significant database time were found. These
statements offer a good opportunity for performance improvement.

Recommendation 1: SQL Tuning
Estimated benefit is 0 active sessions, 31.94% of total activity.
-----------------------------------------------------------------
Action
Run SQL Tuning Advisor on the SELECT statement with SQL_ID
"bj08huw9tc01y".
Related Object
SQL statement with SQL_ID bj08huw9tc01y.
select /*+ no_monitor */ CON_ID, OBJOID, CLSOID, RUNTIME, PRI,
JOBTYPE, SCHLIM, WT, INST, RUNNOW, ENQ_SCHLIM from ( select
...
DESC, OBJOID
Action
Use bigger fetch arrays while fetching results from the SELECT statement
with SQL_ID "bj08huw9tc01y".
Related Object
SQL statement with SQL_ID bj08huw9tc01y.
select /*+ no_monitor */ CON_ID, OBJOID, CLSOID, RUNTIME, PRI,
...
DESC, OBJOID
Rationale
The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
This part of database time may be improved by the SQL Tuning Advisor.
Rationale
Database time for this SQL was divided as follows: 100% for SQL
execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
execution.
Rationale
SQL statement with SQL_ID "bj08huw9tc01y" was executed 60 times and had
an average elapsed time of 5.5 seconds.
Rationale
At least one execution of the statement ran in parallel.
Recommendation 2: SQL Tuning
Estimated benefit is 0 active sessions, 29.17% of total activity.
-----------------------------------------------------------------
Action
Run SQL Tuning Advisor on the SELECT statement with SQL_ID
"bj08huw9tc01y".
Related Object
SQL statement with SQL_ID bj08huw9tc01y.
select /*+ no_monitor */ CON_ID, OBJOID, CLSOID, RUNTIME, PRI,
JOBTYPE, SCHLIM, WT, INST, RUNNOW, ENQ_SCHLIM from ( select
...
and ik.logins = 'ALLOWED') order by RUNTIME, JOBTYPE, CLSOID, PRI, WT
DESC, OBJOID
Rationale
The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
This part of database time may be improved by the SQL Tuning Advisor.
Rationale
Database time for this SQL was divided as follows: 100% for SQL
execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
execution.

Finding 2: "User I/O" wait Class
Impact is 0 active sessions, 7.69% of total activity.
-----------------------------------------------------
Wait class "User I/O" was consuming significant database time.
The throughput of the I/O subsystem was not significantly lower than expected.
The Oracle instance memory (SGA and PGA) was adequately sized.

No recommendations are available.

Finding 3: Hard Parse
Impact is 0 active sessions, 4.3% of total activity.
----------------------------------------------------
Hard parsing of SQL statements was consuming significant database time.
Hard parses due to cursor environment mismatch were not consuming significant
database time.
Hard parsing SQL statements that encountered parse errors was not consuming
significant database time.
Hard parses due to literal usage and cursor invalidation were not consuming
significant database time.
The Oracle instance memory (SGA and PGA) was adequately sized.

No recommendations are available.

Finding 4: Soft Parse
Impact is 0 active sessions, 2.45% of total activity.
-----------------------------------------------------
Soft parsing of SQL statements was consuming significant database time.

Recommendation 1: Application Analysis
Estimated benefit is 0 active sessions, 2.45% of total activity.
----------------------------------------------------------------
Action
Investigate application logic to keep open the frequently used cursors.
Note that cursors are closed by both cursor close calls and session
disconnects.

Recommendation 2: Database Configuration
Estimated benefit is 0 active sessions, 2.45% of total activity.
----------------------------------------------------------------
Action
Consider increasing the session cursor cache size by increasing the
value of parameter "session_cached_cursors".
Rationale
The value of parameter "session_cached_cursors" was "50" during the
analysis period.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Additional Information
----------------------

Miscellaneous Information
-------------------------
Wait class "Application" was not consuming significant database time.
Wait class "Commit" was not consuming significant database time.
Wait class "Concurrency" was not consuming significant database time.
Wait class "Configuration" was not consuming significant database time.
CPU was not a bottleneck for the instance.
Wait class "Network" was not consuming significant database time.
Session connect and disconnect calls were not consuming significant database
time.

The database's maintenance windows were active during 17% of the analysis
period.

Happyjohn






Thursday, July 31, 2014

Setting up Dataguard in 12c (inc active data guard)

See Document http://docs.oracle.com/database/121/SBYDB/create_ps.htm#SBYDB5017

NOTE: My Primary is jhcdb and my new standby will be jhstby

I think the hardest part of this is setting everything back if it goes wrong.

1. enable force logging on primary (jhcdb).

SQL> ALTER DATABASE FORCE LOGGING;

2. Set the primary to receive redo (this will allow the existing database to switch roles).
Make sure the file size matches existing redo logs (select members, bytes/1024/1024 from v$log;)

SQL> ALTER DATABASE ADD STANDBY LOGFILE ('C:\app\oracle\oradata\jhcdb\slog01.rdo') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE ('C:\app\oracle\oradata\jhcdb\slog02.rdo') SIZE 50M;

3. Define additional init params for redo transport services on the existing primary
-- already set

DB_NAME=jhcdb
DB_UNIQUE_NAME=jhcdb
CONTROL_FILES='C:\APP\ORACLE\ORADATA\JHCDB\CONTROL01.CTL', 'C:\APP\ORACLE\FAST_RECOVERY_AREA\JHCDB\CONTROL02.CTL'

-- Require setting

alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(jhcdb,jhstby)' scope=spfile;
alter system set LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=jhcdb' scope=spfile;
alter system set LOG_ARCHIVE_DEST_2='SERVICE=jhstby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=jhstby' scope=spfile;
alter system set REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE scope=spfile;
alter system set LOG_ARCHIVE_FORMAT='%t_%s_%r.arc' scope=spfile;

-- Standby role parameters

alter system set FAL_SERVER=jhstby scope=spfile;
alter system set DB_FILE_NAME_CONVERT='\jhcdb\','\jhstby\' scope=spfile;
alter system set LOG_FILE_NAME_CONVERT='\jhcdb\','\jhstby\' scope=spfile;
alter system set STANDBY_FILE_MANAGEMENT=AUTO scope=spfile;

4. Enable archiving, though this should already be done. But it's worth at least shutting down
because you've likely changed LOG_ARCHIVE_DEST_1.

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

5. For the creation of the standby database, the following steps are required; the bookmarks listed are from the Oracle doc (see top).

Reference       Step   Task                                                           Database
-------------   ----   ------------------------------------------------------------   --------
Section 3.2.1   -      Create a Backup Copy of the Primary Database Data Files         Primary
Section 3.2.2   6      Create a Control File for the Standby Database                  Primary
Section 3.2.3   7,8    Create a Parameter File for the Standby Database                Primary
Section 3.2.4   9      Copy Files from the Primary System to the Standby System       Primary
Section 3.2.5   -      Set Up the Environment to Support the Standby Database         Standby
Section 3.2.6   -      Start the Physical Standby Database                             Standby
Section 3.2.7   -      Verify the Physical Standby Database Is Performing Properly    Standby

6. Create a standby control file (on jhcdb, the primary).

SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'c:\app\tmp\jhcdb.ctl';

7. Create a parameter file for the standby database.

SQL> CREATE PFILE='c:\app\tmp\initjhstby.ora' FROM SPFILE;

8. Amend the necessary parameters to make the init file suitable for the new standby database

DB_NAME='jhcdb'
DB_UNIQUE_NAME='jhstby'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(jhcdb,jhstby)'
CONTROL_FILES='C:\APP\ORACLE\ORADATA\JHSTBY\CONTROL01.CTL', 'C:\APP\ORACLE\FAST_RECOVERY_AREA\JHSTBY\CONTROL02.CTL'
DB_FILE_NAME_CONVERT='\jhcdb\','\jhstby\'
LOG_FILE_NAME_CONVERT='\jhcdb\','\jhstby\'
LOG_ARCHIVE_FORMAT='log%t_%s_%r.arc'
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=jhstby'
LOG_ARCHIVE_DEST_2='SERVICE=jhcdb ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=jhcdb'
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
STANDBY_FILE_MANAGEMENT=AUTO
FAL_SERVER=jhcdb

I'm not sure what to do about the other parameters; I did change the audit dest.

9. Shutdown jhcdb and copy the files

SQL> shutdown immediate

create C:\app\oracle\audit\jhstby

copy C:\app\oracle\oradata\jhcdb to C:\app\oracle\oradata\jhstby

**** Copy the orapwd file from jhcdb to jhstby (when I created a fresh one instead, the redo log transfer kept failing with ORA-01017 in the arc1 trace file).

set oracle_sid=jhstby
copy the control file created earlier to both locations
run orapwd file=PWDjhstby.ora password=oracle entries=10

SQL-JHSTBY> STARTUP MOUNT
SQL-JHSTBY> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

10. Startup the primary

SQL> startup
SQL> select max(sequence#) from v$loghist;

11. Check

SQL-JHSTBY> SELECT CLIENT_PROCESS, PROCESS, THREAD#, SEQUENCE#, STATUS FROM V$MANAGED_STANDBY WHERE CLIENT_PROCESS='LGWR' OR PROCESS='MRP0';
SQL-JHCDB> SELECT MAX(SEQUENCE#) FROM V$LOGHIST;


12. Making it active data guard.
SQL-JHSTBY> recover managed standby database cancel;
SQL-JHSTBY> alter database open read only;
SQL-JHSTBY> recover managed standby database disconnect using current logfile;

And hey presto, it's open in read-only mode. Try this:

On jhcdb: create table jh1 (adate date);
Then on JHSTBY: desc jh1 ..... and you should get an immediate result.

But remember, Active Data Guard is separately licensable.



Happyjohn


Wednesday, July 30, 2014

RMAN - Automatic backup in windows 7

1. Create a simple backup script

In C:\app\rman_scripts create a file backup1.bat containing this...

cd C:\app\rman_scripts
set oracle_home=C:\app\oracle\product\12.1.0\dbhome_2
set oracle_sid=jhcdb
rman target / @backup1.bck log log\log1.log
set t=%date:~6,4%-%date:~3,2%-%date:~0,2%_%time:~0,2%.%time:~3,2%.%time:~6,2%
set t=%t: =0%
ren log\log1.log log-%t%.log

And create a log sub-directory in c:\app\rman_scripts

Then create another file called backup1.bck containing this...

run
{
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default

backup incremental level 0 database tag 'Level 0' plus archivelog delete input tag 'Level 0 - Arch';
backup archivelog all delete input;
delete force noprompt obsolete;
}
list backup summary;
exit;

Notes:

a. The 1st set command in the bat file builds a date/time string. The 2nd set changes hours before 10am to include a leading zero.

b. The configures are the default settings, but best to include them to prevent changes to the defaults adversely affecting your backup.

c. The plus archivelog should do enough, but I've included a catchall backup afterwards. Not sure if this is needed.

d. Just to keep things tidy, I'm removing the obsolete backups.

2. Test it.

3. Set up an automatic schedule for the job. I typically leave my PC on overnight, so I'm running the backup at 6am. Of course, you must be sure the database is running then too.

Go to the Control Panel, then Administrative Tools, and select Task Scheduler.
Click 'Create a Basic Task'
Enter name and description, click next.
Trigger is Daily, next.
Set a start date and time, tomorrow, 6am, reoccur every day.
Action is 'Start a Program', next
Enter script name C:\app\rman_scripts\backup1, next, then Finish.
Click on 'Task Scheduler Library' (left) to see your tasks.

Note: Make sure your default environment works for rman to run. Obvious perhaps, but important.

Also, if you want an email of the results log, change the bat file from rename to copy, thus keeping log1.log, and then in the scheduler, after the run of the script add an email and make log1.log the attachment. You'll have to set the other details yourself.



Happyjohn

Tuesday, July 29, 2014

Further notes on Sessions and Blocking

-- The following are just a few commands that are useful.
-- take particular note of the blocking locks statement that doesn't use the lock table. It's just different.

-- Show all my sessions
select *
from v$session sess
where sess.osuser = 'j36563';

-- Show specific details for my sessions
select sess.sid, sess.serial#, sess.username, sess.osuser, sess.schemaname, sess.program, sess.sql_id
from v$session sess
where sess.osuser = 'j36563';

-- Show sql details for my sessions.
select sess.sid, sess.serial#, sess.username, sess.osuser, sess.schemaname, sess.program, sess.sql_id,
stxt.sql_text
from v$session sess,
V$sqltext stxt
where sess.osuser = 'j36563'
and sess.sql_id = stxt.sql_id;

-- Show v$accesss locks for a given object
SELECT * FROM v$access WHERE lower(object) = 'pkg_rege_loss_objection';

-- Show detailed information regarding access locks for a given package
select sess.sid, sess.serial#, sess.username, sess.osuser, sess.schemaname, sess.program, sess.sql_id,
'alter system kill session '''||sid||','||serial#||''' immediate;' comm
from v$session sess where sid in (
SELECT sid FROM v$access WHERE lower(object) = 'pkg_xml_schema_util');

-- show blocking locks
select sess1.username || ' on ' || sess1.machine || ' (' || sess1.sid || '/' || sess1.serial# || ') ' ||
'Is blocking ' ||
sess2.username || ' on ' || sess2.machine || ' (' || sess2.sid || '/' || sess2.serial# || ') ' ||
' => ' || dobj1.owner || '.' || dobj1.object_name || ' (' || dobj1.object_type || ')' "object"
from v$lock lock1,
v$session sess1,
v$lock lock2,
v$session sess2,
v$locked_object lobj1,
dba_objects dobj1
where sess1.sid=lock1.sid and sess2.sid=lock2.sid
and lock1.BLOCK=1 and lock2.request > 0
and lock1.id1 = lock2.id1
and lock1.id2 = lock2.id2
and sess1.sid = lobj1.session_id
and lobj1.object_id = dobj1.object_id
/

-- Another way to view blocking locks
select sid, serial#, username, program, blocking_session_status, blocking_instance, blocking_session, p2text, p2, sql_exec_start,
'alter system kill session '''||sid||','||serial#||''' immediate;' comm
from v$session sess
where blocking_session_status = 'VALID';
--where (sess.osuser = 'j36563' or sid = 20);

-- or
select sess2.username || ' on ' || sess2.machine || ' (' || sess.blocking_session || '/' || sess2.serial# || ') ' ||
'Is blocking ' ||
sess.username || ' on ' || sess.machine || ' (' || sess.sid || '/' || sess.serial# || ') '
from v$session sess, v$session sess2
where sess.blocking_session_status = 'VALID'
and sess.blocking_session = sess2.sid;

-- Kill a session (change the sid,serial# for values)
alter system kill session 'sid,serial#' immediate;


happyjohn

Monday, July 28, 2014

Oracle Database Startup Problems

SQL> startup

ORACLE instance started.

Total System Global Area 1653518336 bytes
Fixed Size                  2403304 bytes
Variable Size             989856792 bytes
Database Buffers          654311424 bytes
Redo Buffers                6946816 bytes
Database mounted.
ORA-03113: end-of-file on communication channel
Process ID: 5712
Session ID: 242 Serial number: 3

What could this be? So I looked at the alert log.

Errors in file C:\APP\ORACLE\diag\rdbms\jhcdb\jhcdb\trace\jhcdb_ora_5768.trc:

ORA-19815: WARNING: db_recovery_file_dest_size of 12884901888 bytes is 100.00% used, and has 0 remaining bytes available.

Mon Jul 28 09:59:40 2014
************************************************************************
You have following choices to free up space from recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN
BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to
reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating
system command was used to delete files, then use RMAN CROSSCHECK and
DELETE EXPIRED commands.
************************************************************************
ARCH: Error 19809 Creating archive log file to 'C:\APP\ORACLE\FAST_RECOVERY_AREA\JHCDB\ARCHIVELOG\2014_07_28\O1_MF_1_408_%U_.ARC'
Mon Jul 28 09:59:40 2014
Errors in file C:\APP\ORACLE\diag\rdbms\jhcdb\jhcdb\trace\jhcdb_ora_5768.trc:
ORA-16038: log 3 sequence# 408 cannot be archived
ORA-19809: limit exceeded for recovery files
ORA-00312: online log 3 thread 1: 'C:\APP\ORACLE\ORADATA\JHCDB\REDO03.LOG'

 
I think that says it all.

Of course I cannot get into RMAN without the database running, so I must increase db_recovery_file_dest_size to get the database open, and then I can tidy up.

sqlplus / as sysdba
              SQL> startup mount
              SQL> alter system set db_recovery_file_dest_size = 20G;
              System altered.
              SQL> shutdown immediate
              ORA-01109: database not open
              Database dismounted.
              ORACLE instance shut down.
              SQL> startup
              ORACLE instance started.

              Total System Global Area 1653518336 bytes
              Fixed Size                  2403304 bytes
              Variable Size             989856792 bytes
              Database Buffers          654311424 bytes
              Redo Buffers                6946816 bytes
              Database mounted.
              Database opened.
               SQL>

 

Now I can go into RMAN and sort things out.

 

Happyjohn.




Locked Package (ORA-04021)

When releasing a package to test, I got the following error, after a significant wait.


Calling "pkg_loss _spec.sql"...

CREATE OR REPLACE PACKAGE pkg_loss IS

*

ERROR at line 1:

ORA-04021: timeout occurred while waiting to lock object

 

So I checked for blocking locks but that doesn’t apply to DDL. Instead you need to check the V$ACCESS view.


SELECT * FROM v$access WHERE object = 'PKG_LOSS';

 

This showed that there were certainly some sessions holding the object. Having discussed the matter with the developer, they authorised me to kill the sessions holding it, so I put this next statement together. It shows the relevant session information, plus the kill statement.

 

select sess.sid, sess.serial#, sess.username, sess.osuser, sess.schemaname, sess.program, sess.sql_id,
       'alter system kill session '''||sid||','||serial#||''' immediate;' comm
from v$session sess where sid in (
  SELECT sid FROM v$access WHERE lower(object) = 'pkg_rege_loss_objection');

 

I then copied & pasted the kill statements I needed, and was then able to replace the package.

 

alter system kill session '230,2911' immediate;

 

 

HappyJohn.

 



Tuesday, July 22, 2014

ORA-28031: maximum enabled roles exceeded

So this is what I got…..

 

ORA-28031: maximum of 150 enabled roles exceeded

ORA-06512: at "SYS.UTL_RECOMP", line 865

 

I checked the parameter….

 

show parameter max_enabled_roles

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_enabled_roles                    integer     150

 

But if you try to increase this, it won’t work.

 

SQL> alter system set max_enabled_roles = 200 scope = spfile;

alter system set max_enabled_roles = 200 scope = spfile

*

ERROR at line 1:

ORA-00068: invalid value 200 for parameter max_enabled_roles, must be between 1 and 148

 

Which is curious, because it's already set to 150 regardless of what the message says.

 

So I had to find out which users were to blame:

 

select "Grantee", count(*) "Role Number" from
(select distinct connect_by_root grantee "Grantee", granted_role
from dba_role_privs
connect by prior granted_role=grantee)
group by "Grantee"
having count(*)>=148
order by count(*),"Grantee","Role Number"
/

And then revoke some roles to bring the number down; see the example below.
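
For illustration (the role and grantee names here are hypothetical):

REVOKE reporting_role FROM app_user;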

 

 

 

Monday, July 21, 2014

Blocking Locks

In order to identify who is blocking a session in Oracle, try the following example.

 

1.       Create a table and then lock it.

create table tab1 (acolumn date);
lock table tab1 in exclusive mode;



2.       Now from another session, try to insert into it.

insert into tab1 select sysdate from dual;

Of course because of the lock, this session will now hang waiting for the lock to be released.


3.       From a third session, take a look at the blocking lock.

set linesize 132
set heading off
col "Session 1" for a50
col "Session 2" for a50
select sess1.username || ' on ' || sess1.machine || ' (' || sess1.sid || '/' || sess1.serial# || ') ' ||
'Is blocking ' ||
sess2.username || ' on ' || sess2.machine || ' (' || sess2.sid || '/' || sess2.serial# || ') ' ||
' => ' || dobj1.owner || '.' || dobj1.object_name || ' (' || dobj1.object_type || ')' "object"
from v$lock    lock1,
v$session sess1,
v$lock    lock2,
v$session sess2,
v$locked_object lobj1,
dba_objects dobj1
where sess1.sid=lock1.sid and sess2.sid=lock2.sid
and lock1.BLOCK=1 and lock2.request > 0
and lock1.id1 = lock2.id1
and lock1.id2 = lock2.id2
and sess1.sid = lobj1.session_id
and lobj1.object_id = dobj1.object_id
/
set heading on



Which will return something like this. I included the sid/serial# so you can alter system kill session 'sid,serial#' immediate; if you need to.

SYS on U-DOM1\HP20014881 (64/3207)    Is blocking  SYS on U-DOM1\HP20014881 (307/3513)


4.       Finally release the lock. On the original ‘lock’ session.

rollback;

You might want to rollback or commit the insert session and drop the table.


Happy Heth

Wednesday, July 16, 2014

Windows 7 - Overriding company settings on login.

Often when you're working for a company, they preset things in order to prevent you spoiling the machine for other users.

This often includes things like screen savers, desktop wallpapers and power setting.

This doesn't extend to user privilege settings: as an Oracle administrator I have administrator access, yet I was having to reset everything each time I logged in.

 

This is a little script that you can try, to put your settings back to what you want.

 

Rem Firstly we wait for other things to complete, but remove this if you like.
Rem The following lines manipulate the default mount points to what I want.
ping 1.1.1.1 -n 1 -w 10000
net use g: /delete
ping 1.1.1.1 -n 1 -w 3000
net use g: \\dom1.net\Root0Drive /p:yes
net use s: \\dom1.net\GB539user$\MyDocs /p:yes

Rem I took the time to create a power plan but kept losing it, so I saved it
Rem and use the next line to reset it.
powercfg -setactive 4f871d8c-1b18-4933-a841-00dbf4d9c491

Rem The next line imports a file of settings into the registry,
Rem however it's better just to list the entries in this file.
rem regedit /s "h:\my bits\jhSSreg1.reg"

Rem To prevent the screensaver coming on and unlock the visual options.
@echo off
REG ADD "HKCU\SOFTWARE\POLICIES\Microsoft\Windows\Control Panel\Desktop" /V "ScreenSaverIsSecure" /D "0" /F
REG ADD "HKCU\SOFTWARE\POLICIES\Microsoft\Windows\Control Panel\Desktop" /V "ScreenSaveTimeOut" /D "10000" /F

Rem The following lines set the default home page in 3 different web browsers.
REG ADD "HKCU\SOFTWARE\MICROSOFT\INTERNET EXPLORER\MAIN" /V "START PAGE" /D "http://home.intranet.eon.com/ebs/en.html" /F
REG ADD "HKCU\SOFTWARE\MICROSOFT\GOOGLE CHROME\MAIN" /V "START PAGE" /D "http://home.intranet.eon.com/ebs/en.html" /F
REG ADD "HKCU\SOFTWARE\MICROSOFT\MOZILLA FIREFOX\MAIN" /V "START PAGE" /D "http://home.intranet.eon.com/ebs/en.html" /F

 

 

Place all of the entries into a file with the BAT suffix and add it into the Startup directory on the Start menu.

 

 



Thursday, July 10, 2014

Cloning and Deleting a Pluggable Database

The following example uses Windows 7 64bit, so the filenames will differ to those of a Linux environment.

And don't forget to amend the tnsnames.ora and register any new PDBs with the listener.

 

To clone a local/remote pluggable database

 

1.       Create a directory to hold the data files.

md C:\app\oracle\oradata\jhcdb\jhpdb3


2.       Make sure you’re in the root cdb. I’m using a common user with DBA granted to all containers.

alter session set container = cdb$root;


3.       Close the source PDB and re-open it read only.

alter pluggable database jhpdb1 close;
alter pluggable database jhpdb1 open read only;


4.       Create the clone with some filename remapping and sizing info.

create pluggable database jhpdb3 from jhpdb1
file_name_convert = ('C:\app\oracle\oradata\jhcdb\jhpdb1', 'C:\app\oracle\oradata\jhcdb\jhpdb3')
storage (maxsize 2G max_shared_temp_size 100M);


If you want to clone a remote PDB, you can use a database link. It's the same as above, except you first create a database link from the target DB, then open the PDB on the source read only; the CREATE PLUGGABLE DATABASE on the target then needs a @dblink after the source name (e.g. jhpdb1@rem_db).

 

Don’t forget that a PDB must be opened at least once before you can do anything with it.

 

To remove a PDB

 

                drop pluggable database jhpdb3 including datafiles;     
-- this leaves the directory but none of the files.

 

If you miss off the including datafiles, it obviously leaves them and they can be reintroduced to the database later (see below).

 

To unplug a pluggable database

 

                Alter pluggable database jhpdb3 close;

                Alter pluggable database jhpdb3 unplug into 'C:\app\oracle\unplug\jhpdb3.xml';

 

Because the unplugged PDB is still part of the database, the files remain in the same location and they still form part of the rman backups.

 

To plug in an unplugged pluggable database

 

To replug an unplugged PDB into the same name, you must first drop the existing PDB but keep the datafiles

 

                drop pluggable database jhpdb3 keep datafiles;

                create pluggable database jhpdb3 using 'C:\app\oracle\unplug\jhpdb3.xml' nocopy tempfile reuse;

Alter pluggable database jhpdb3 open;

 

To replug an unplugged PDB into a new name (perhaps on another database) with new file location on another database

 

create pluggable database jhpdb5 using 'C:\app\oracle\unplug\jhpdb4.xml'

file_name_convert = ('C:\app\oracle\oradata\jhcdb\jhpdb4', 'C:\app\oracle\oradata\jhcdb\jhpdb5')

move;

 

Note you will get an ORA-65122 if you try to plug in a clone on the same CDB as its source; you must clone instead. The only way around this is to plug in a dropped, unplugged PDB as above.

 

 

Using DBMS_PDB to check plug compatibility

 

Run the DBMS_PDB.CHECK_PLUG_COMPATIBILITY function to determine whether the unplugged PDB is compatible with the CDB.

 

SET SERVEROUTPUT ON
DECLARE
  compatible CONSTANT VARCHAR2(3) :=
    CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
           pdb_descr_file => '/home/oracle/devpdb3.xml')
      WHEN TRUE THEN 'YES'
      ELSE 'NO'
    END;
BEGIN
  DBMS_OUTPUT.PUT_LINE(compatible);
END;
/

 

Happyjohn

 



Monday, July 7, 2014

Automatically Starting Pluggable Databases

At this time, Oracle has not provided a way of auto-starting pluggable databases in a multitenant environment.

 

This has the effect that should you need to restart your container database (CDB), then you must also connect to it, in order to manually restart the pluggables, otherwise they’ll sit in mounted state forever.

 

To avoid this, try something like the following. You could add something a bit cleverer that checks the state of the database so it only opens them under certain circumstances, and might include a table containing a single row per PDB with a preset indicator, so you can control what will reopen the next time the database starts (see the sketch after the trigger).

 

CREATE OR REPLACE TRIGGER start_pluggables
   AFTER STARTUP ON DATABASE
BEGIN
   EXECUTE IMMEDIATE 'alter pluggable database all open';
END start_pluggables;
/
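
As a sketch of the control-table idea mentioned above (the PDB_STARTUP_CONTROL table and its columns are hypothetical, not a standard dictionary object):

-- Hypothetical control table: one row per PDB with an auto-open flag, e.g.
-- CREATE TABLE pdb_startup_control (pdb_name VARCHAR2(128), auto_open CHAR(1));
CREATE OR REPLACE TRIGGER start_pluggables
   AFTER STARTUP ON DATABASE
BEGIN
   FOR r IN (SELECT pdb_name FROM pdb_startup_control WHERE auto_open = 'Y')
   LOOP
      EXECUTE IMMEDIATE 'alter pluggable database ' || r.pdb_name || ' open';
   END LOOP;
END start_pluggables;
/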

 

Happyjohn.

Tuesday, June 10, 2014

Oracle 12c : Creating Pluggable Databases

So I’ve started taking my first steps in Oracle 12c.

It is my intention to make myself an indispensable Oracle 12c asset so I can make a fortune as a consultant. I figured the best way to start was to fully understand the new 'pluggable database' concepts. It makes sense to use this new feature if your company is limited in the number of instances it can legitimately stand up because of licence restrictions. Where I currently work there are no such restrictions, so they've got no intention of upgrading. But to me it still makes sense that if you are going to have more than one development copy of a database, they should all live together as pluggable databases within a single traditional database.
So I’ve got a small schema with a number of tables in. I want to create a new CDB (Container Database) and several PDB’s (Pluggable databases). I’ll call the CDB jhcdb and the PDB’s will be jhpdb1, jhpdb2 etc.
I'm also using a copy of Oracle's Developer Days 12c virtual Linux environment in VirtualBox (see the Oracle website).
The first thing I noticed is that the virtual server already has a CDB database with a PDB, and that they've used the $TWO_TASK variable to let you access the PDB/CDB. I didn't like this much, so I unset the TWO_TASK variable in the .bash_profile file, and instead placed the necessary entries in the tnsnames.ora file, which I will use from now on.
 
Tnsnames.ora
jhcdb =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = jhcdb)
)
)

jhpdb1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = jhpdb1)
)
)

jhpdb2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = jhpdb2)
)
)

Initjhcdb.ora

I created all of the necessary folders as indicated by the initjhcdb.ora file. I’m sure much of this isn’t necessary, but it’s what I used.
 
audit_file_dest='/u01/app/oracle/admin/jhcdb/adump'
audit_trail ='db'
compatible ='12.1.0.0.0'
control_files = ('/u02/oradata/jhcdb/control_01.dbf','/u02/oradata/jhcdb/control_02.dbf')
db_block_size=8192
db_domain=''
db_name='JHCDB'
db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
db_recovery_file_dest_size=2G
diagnostic_dest='/u01/app/oracle'
enable_pluggable_database=true
open_cursors=300
pga_aggregate_target=200M
processes = 150
sga_target=600M
remote_login_passwordfile='EXCLUSIVE'
undo_tablespace='UNDOTBS1'

Creation Script

I chose to create the database manually. If I’d used the DBCA utility, I’m sure I wouldn’t have needed to think about it a great deal, but this way helps the understanding better.
You need to be sure you've precreated each of the directories included below, including the cdb and seed directories. The SEED directory holds the DBFs that will be used when a PDB is created.
 
CREATE DATABASE jhcdb
USER SYS IDENTIFIED BY oracle      
USER SYSTEM IDENTIFIED BY oracle  
LOGFILE GROUP 1 ('/u02/oradata/jhcdb/redo01a.log','/u02/oradata/jhcdb/redo01b.log')
           SIZE 100M BLOCKSIZE 512,
        GROUP 2 ('/u02/oradata/jhcdb/redo02a.log','/u02/oradata/jhcdb/redo02b.log')
           SIZE 100M BLOCKSIZE 512,
        GROUP 3 ('/u02/oradata/jhcdb/redo03a.log','/u02/oradata/jhcdb/redo03b.log')
           SIZE 100M BLOCKSIZE 512
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
EXTENT MANAGEMENT LOCAL
DATAFILE '/u02/oradata/jhcdb/system01.dbf'
  SIZE 700M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
SYSAUX DATAFILE '/u02/oradata/jhcdb/sysaux01.dbf'
  SIZE 550M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
DEFAULT TABLESPACE deftbs
   DATAFILE '/u02/oradata/jhcdb/deftbs01.dbf'
   SIZE 500M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE tempts1
   TEMPFILE '/u02/oradata/jhcdb/temp01.dbf'
   SIZE 20M REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
UNDO TABLESPACE undotbs1
   DATAFILE '/u02/oradata/jhcdb/undotbs01.dbf'
   SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
ENABLE PLUGGABLE DATABASE
  SEED
  FILE_NAME_CONVERT = ('/u02/oradata/jhcdb/',
                       '/u02/oradata/jhseed/')
    SYSTEM DATAFILES SIZE 125M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED
  SYSAUX DATAFILES SIZE 100M
USER_DATA TABLESPACE usertbs
  DATAFILE '/u02/oradata/jhseed/usertbs01.dbf'
  SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
 
 
The main CDB can be built in the same way as any other database so I won’t go into the ins and outs, just to say that with the ENABLE PLUGGABLE DATABASE option, you get what you need.
 
Creating the Pluggable database.
 
Important note here. After creating the database, I opened it and ran the usual CATPROC, CATALOG, etc. scripts. But when I tried to create the PDB, it wouldn't work.
 
  ERROR at line 1:
  ORA-00604: error occurred at recursive SQL level 1
  ORA-00942: table or view does not exist
 
So instead, it seems you need to run the catalog scripts like this, using Perl from the Linux command prompt.
 
  PERL5LIB=$ORACLE_HOME/rdbms/admin:$PERL5LIB; export PERL5LIB
  perl $ORACLE_HOME/rdbms/admin/catcon.pl -n 1 -l /u01/oracle -b catalog $ORACLE_HOME/rdbms/admin/catalog.sql;
  perl $ORACLE_HOME/rdbms/admin/catcon.pl -n 1 -l /u01/oracle -b catproc $ORACLE_HOME/rdbms/admin/catproc.sql;
  perl $ORACLE_HOME/rdbms/admin/catcon.pl -n 1 -l /u01/oracle -b pupbld -u SYSTEM/oracle $ORACLE_HOME/sqlplus/admin/pupbld.sql;
 
Now the first PDB must be created from the seed (/u02/oradata/jhseed). My admin user is to be family as that is also the user I will import my tables into. You need to precreate directory jhpdb1.
 
CREATE PLUGGABLE DATABASE jhpdb1 ADMIN USER family IDENTIFIED BY family
  STORAGE (MAXSIZE 2G MAX_SHARED_TEMP_SIZE 100M)
  DEFAULT TABLESPACE familyts1
  DATAFILE '/u02/oradata/jhpdb1/familyts01.dbf' SIZE 250M   AUTOEXTEND ON
  PATH_PREFIX = '/u02/oradata/jhpdb1/'
  FILE_NAME_CONVERT = ('/u02/oradata/jhseed/', '/u02/oradata/jhpdb1/');
 
alter pluggable database jhpdb1 open;
 
Now I import the dump file of my schema. I needed to create an Oracle directory object for Data Pump, and this is done in the PDB rather than in the CDB.
 
sqlplus system/oracle@jhpdb1 <<EOF
create or replace directory jh_dp as '/u02/oradata/family';
quit
EOF
 
impdp directory=jh_dp dumpfile=jh_family.dmp userid=system/oracle@jhpdb1
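
As a quick sanity check (assuming the dump maps into the FAMILY schema created above), count what arrived:

sqlplus family/family@jhpdb1
SQL> select object_type, count(*) from user_objects group by object_type;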
 
Creating the second PDB from the first
 
Creating another copy of the PDB is easy, but you must put the source PDB into read-only mode first, so that the copy is taken from a consistent set of files.
 
alter pluggable database jhpdb1 close immediate;
alter pluggable database jhpdb1 open read only;
CREATE PLUGGABLE DATABASE jhpdb2 FROM jhpdb1
  PATH_PREFIX = '/u02/oradata/jhpdb2'
  FILE_NAME_CONVERT = ('/u02/oradata/jhpdb1/', '/u02/oradata/jhpdb2/');
alter pluggable database jhpdb2 open;
alter pluggable database jhpdb1 close;
alter pluggable database jhpdb1 open;
 
 
Now both copies of the family schema are present in the same container database, but in separate pluggable databases. They can be opened, closed and populated independently.

To see their status, join DBA_PDBS to V$PDBS on the PDB name (a straight comma join of the two views would give a Cartesian product):

select p.pdb_id, p.pdb_name, v.con_id, p.status, v.open_mode
  from dba_pdbs p, v$pdbs v
 where p.pdb_name = v.name;
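
You can also hop between containers within a single SYSDBA session:

conn / as sysdba
alter session set container = jhpdb1;
show con_name
alter session set container = cdb$root;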
 
 
 
HappyJohn





Wednesday, May 7, 2014

ORA-00942: table or view does not exist

It’s been a while since I had anything worth sharing, but here is a small note on identifying which table or view is missing behind a complex view’s select statement.

If you look at DBA_ERRORS for an invalid view, all you’ll see is an ORA-00942: table or view does not exist.

You need to dig a little deeper to locate the missing object. The following query is simple but effective.

 

select owner, name, type, referenced_owner, referenced_name, referenced_type
  from dba_dependencies
 where (owner, name) in
       (select owner, object_name from dba_objects
         where object_type = 'VIEW' and status <> 'VALID')
   and not exists
       (select 1 from dba_objects a
         where referenced_owner = a.owner
           and referenced_name  = a.object_name
           and referenced_type  = a.object_type)
/

 

All this does is search through the dependencies of every invalid view, looking for a referenced object that no longer exists.
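
To see it in action, a quick sandbox demonstration (object names are made up):

create table t1 (c1 number);
create view v1 as select c1 from t1;
drop table t1;
-- V1 is now invalid, and the query above lists T1 as a
-- referenced object that no longer exists.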

 

Happyjohn

Wednesday, February 5, 2014

Moving Oracle Database files (part 2)

Yesterday I wrote about an automated script for moving some or all of the files that make up an Oracle database. This is occasionally needed to relocate the database files to a new mount point.

 

Since then I have tried to make the scripts more user-friendly and have made the following modifications.

 

1. Improved documentary comments to guide the user through the process.
2. The ability to review the generated script before running it.
3. The option of not running the generated script and keeping it for later.
4. The option to watch the progress of the file moves.

 

The move_files.sql script now uses rsync to move/copy the files, which gives a rolling percentage progress display; useful when the files are large. Swapping a single define in the script switches it back to a plain mv. (Newer rsync releases name the option --remove-source-files rather than --remove-sent-files.)
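
For each file, the generated script contains a host copy plus a rename, of this shape (paths taken from the example run below):

!rsync -aP --remove-sent-files /u02/oradata/jh/system.dbf /u03/oradata/orclsa/system.dbf
alter database rename file '/u02/oradata/jh/system.dbf' to '/u03/oradata/orclsa/system.dbf';

Control files are the exception: they are relocated via alter system set control_files ... scope = spfile while the database is down.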

 

Example (using same database files as previously).

 

SQL> @move_files

You are connected to database ORCLSA. Ctrl+C if this is incorrect
******************************************************************

=======================
Relocate Database Files
=======================

Move any files (control,redo,temp,data) in a given directory
to a new directory. Any file on the database in that directory
will be moved. All other files will remain in their own
directories.

You will be prompted for the from and to directory names.
A script will then be prepared which you can review and run
immediately, or keep until another time.

A list of existing files will be provided.

******************************************************************
CTRL+C to abort, Press <ENTER> to continue

:ORCLSA:CTRL:/u02/oradata/jh/control01.ctl
:ORCLSA:CTRL:/u02/oradata/jh/control02.ctl
:ORCLSA:LOG:/u02/oradata/jh/redo_1a.log
:ORCLSA:LOG:/u02/oradata/jh/redo_1b.log
:ORCLSA:LOG:/u02/oradata/jh/redo_2a.log
:ORCLSA:LOG:/u02/oradata/jh/redo_2b.log
:ORCLSA:LOG:/u02/oradata/jh/redo_3a.log
:ORCLSA:LOG:/u02/oradata/jh/redo_3b.log
:ORCLSA:DATA:/u02/oradata/jh/system.dbf
:ORCLSA:DATA:/u02/oradata/jh/sysaux.dbf
:ORCLSA:DATA:/u02/oradata/jh/undo.dbf
:ORCLSA:DATA:/u02/oradata/jh/users01.dbf
:ORCLSA:DATA:/u02/oradata/jh/examples01.dbf
:ORCLSA:DATA:/u02/oradata/jh/xmldb_01.dbf
:ORCLSA:TEMP:/u02/oradata/jh/temp.tmp

Enter the FROM directory (default /u02/oradata/jh)?
Enter the TO directory (default /u03/oradata/orclsa)?

******************************************************************

Would you like to review the generated script <default-Y>?

CTRL+C to abort, Press <ENTER> to perform changes

******************************************************************

Performing File moves on ORCLSA

Before:ORCLSA:CTRL:/u02/oradata/jh/control01.ctl
Before:ORCLSA:CTRL:/u02/oradata/jh/control02.ctl
Before:ORCLSA:LOG:/u02/oradata/jh/redo_1a.log
Before:ORCLSA:LOG:/u02/oradata/jh/redo_1b.log
Before:ORCLSA:LOG:/u02/oradata/jh/redo_2a.log
Before:ORCLSA:LOG:/u02/oradata/jh/redo_2b.log
Before:ORCLSA:LOG:/u02/oradata/jh/redo_3a.log
Before:ORCLSA:LOG:/u02/oradata/jh/redo_3b.log
Before:ORCLSA:DATA:/u02/oradata/jh/system.dbf
Before:ORCLSA:DATA:/u02/oradata/jh/sysaux.dbf
Before:ORCLSA:DATA:/u02/oradata/jh/undo.dbf
Before:ORCLSA:DATA:/u02/oradata/jh/users01.dbf
Before:ORCLSA:DATA:/u02/oradata/jh/examples01.dbf
Before:ORCLSA:DATA:/u02/oradata/jh/xmldb_01.dbf
Before:ORCLSA:TEMP:/u02/oradata/jh/temp.tmp

Amending SPFILE for control files
Stopping Database
Database closed.
Database dismounted.
ORACLE instance shut down.
Moving Control files
building file list ...
1 file to consider
control01.ctl
    10076160 100%   59.86MB/s    0:00:00 (xfer#1, to-check=0/1)

sent 10077520 bytes  received 42 bytes  20155124.00 bytes/sec
total size is 10076160  speedup is 1.00

building file list ...
1 file to consider
control02.ctl
    10076160 100%   11.84MB/s    0:00:00 (xfer#1, to-check=0/1)

sent 10077520 bytes  received 42 bytes  6718374.67 bytes/sec
total size is 10076160  speedup is 1.00

Starting Database in mount mode
ORACLE instance started.

Total System Global Area  456146944 bytes
Fixed Size                  1344840 bytes
Variable Size             385878712 bytes
Database Buffers           62914560 bytes
Redo Buffers                6008832 bytes
Database mounted.
Moving Data : /u02/oradata/jh/system.dbf to /u03/oradata/orclsa/system.dbf
building file list ...
1 file to consider
system.dbf
   461381632 100%   16.83MB/s    0:00:26 (xfer#1, to-check=0/1)

sent 461438081 bytes  received 42 bytes  17412759.36 bytes/sec
total size is 461381632  speedup is 1.00

Moving Data : /u02/oradata/jh/sysaux.dbf to /u03/oradata/orclsa/sysaux.dbf
building file list ...
1 file to consider
sysaux.dbf
   125837312 100%   13.75MB/s    0:00:08 (xfer#1, to-check=0/1)

sent 125852801 bytes  received 42 bytes  14806216.82 bytes/sec
total size is 125837312  speedup is 1.00

Moving Data : /u02/oradata/jh/undo.dbf to /u03/oradata/orclsa/undo.dbf
building file list ...
1 file to consider
undo.dbf
   209723392 100%    9.42MB/s    0:00:21 (xfer#1, to-check=0/1)

sent 209749119 bytes  received 42 bytes  9755774.93 bytes/sec
total size is 209723392  speedup is 1.00

Moving Data : /u02/oradata/jh/users01.dbf to /u03/oradata/orclsa/users01.dbf
building file list ...
1 file to consider
users01.dbf
   209723392 100%    9.98MB/s    0:00:20 (xfer#1, to-check=0/1)

sent 209749122 bytes  received 42 bytes  10231666.54 bytes/sec
total size is 209723392  speedup is 1.00

Moving Data : /u02/oradata/jh/examples01.dbf to /u03/oradata/orclsa/examples01.dbf
building file list ...
1 file to consider
examples01.dbf
   209723392 100%   11.70MB/s    0:00:17 (xfer#1, to-check=0/1)

sent 209749125 bytes  received 42 bytes  11337792.81 bytes/sec
total size is 209723392  speedup is 1.00

Moving Data : /u02/oradata/jh/xmldb_01.dbf to /u03/oradata/orclsa/xmldb_01.dbf
building file list ...
1 file to consider
xmldb_01.dbf
   954212352 100%   13.04MB/s    0:01:09 (xfer#1, to-check=0/1)

sent 954328963 bytes  received 42 bytes  13536581.63 bytes/sec
total size is 954212352  speedup is 1.00

Moving Temp : /u02/oradata/jh/temp.tmp to /u03/oradata/orclsa/temp.tmp
building file list ...
1 file to consider
temp.tmp
    20979712 100%   58.58MB/s    0:00:00 (xfer#1, to-check=0/1)

sent 20982399 bytes  received 42 bytes  41964882.00 bytes/sec
total size is 20979712  speedup is 1.00

Moving REDO logs
building file list ...
1 file to consider
redo_1a.log
    52429312 100%   46.74MB/s    0:00:01 (xfer#1, to-check=0/1)

sent 52435842 bytes  received 42 bytes  34957256.00 bytes/sec
total size is 52429312  speedup is 1.00

building file list ...
1 file to consider
redo_1b.log
    52429312 100%   16.23MB/s    0:00:03 (xfer#1, to-check=0/1)

sent 52435842 bytes  received 42 bytes  14981681.14 bytes/sec
total size is 52429312  speedup is 1.00

building file list ...
1 file to consider
redo_2a.log
    52429312 100%   13.12MB/s    0:00:03 (xfer#1, to-check=0/1)

sent 52435842 bytes  received 42 bytes  11652418.67 bytes/sec
total size is 52429312  speedup is 1.00

building file list ...
1 file to consider
redo_2b.log
    52429312 100%   13.71MB/s    0:00:03 (xfer#1, to-check=0/1)

sent 52435842 bytes  received 42 bytes  11652418.67 bytes/sec
total size is 52429312  speedup is 1.00

building file list ...
1 file to consider
redo_3a.log
    52429312 100%   12.78MB/s    0:00:03 (xfer#1, to-check=0/1)

sent 52435842 bytes  received 42 bytes  11652418.67 bytes/sec
total size is 52429312  speedup is 1.00

building file list ...
1 file to consider
redo_3b.log
    52429312 100%   17.34MB/s    0:00:02 (xfer#1, to-check=0/1)

sent 52435842 bytes  received 42 bytes  14981681.14 bytes/sec
total size is 52429312  speedup is 1.00

Opening Database

Finished on ORCLSA

After:ORCLSA:CTRL:/u03/oradata/orclsa/control01.ctl
After:ORCLSA:CTRL:/u03/oradata/orclsa/control02.ctl
After:ORCLSA:LOG:/u03/oradata/orclsa/redo_1a.log
After:ORCLSA:LOG:/u03/oradata/orclsa/redo_1b.log
After:ORCLSA:LOG:/u03/oradata/orclsa/redo_2a.log
After:ORCLSA:LOG:/u03/oradata/orclsa/redo_2b.log
After:ORCLSA:LOG:/u03/oradata/orclsa/redo_3a.log
After:ORCLSA:LOG:/u03/oradata/orclsa/redo_3b.log
After:ORCLSA:DATA:/u03/oradata/orclsa/system.dbf
After:ORCLSA:DATA:/u03/oradata/orclsa/sysaux.dbf
After:ORCLSA:DATA:/u03/oradata/orclsa/undo.dbf
After:ORCLSA:DATA:/u03/oradata/orclsa/users01.dbf
After:ORCLSA:DATA:/u03/oradata/orclsa/examples01.dbf
After:ORCLSA:DATA:/u03/oradata/orclsa/xmldb_01.dbf
After:ORCLSA:TEMP:/u03/oradata/orclsa/temp.tmp
SQL>

 

The scripts are as follows.

 

Script review_file.sh requires execute permission (chmod +x review_file.sh).

 

cat list_files.sql
set head off
set feedback off
set pages 100
set verify off
select '&1:'||b.name||':CTRL:'||a.name from v$controlfile a, v$database b
union all
select '&1:'||b.name||':LOG:'||a.member from v$logfile a, v$database b
union all
select '&1:'||b.name||':DATA:'||a.name from v$datafile a, v$database b
union all
select '&1:'||b.name||':TEMP:'||a.name from v$tempfile a, v$database b;
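
When called as @list_files 'Before' or @list_files 'After', the parameter becomes the prefix on each row; that is how the Before:/After: listings in the run above were produced.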

 

cat review_file.sh
#!/bin/ksh

par1=$1
par2=$2

if [ "${par1}" = "Y" ] || [ "${par1}" = "y" ]; then
   view ${par2}
fi

 

[oracle@localhost sql]$ cat move_files.sql
define fromdir = '/u02/oradata/jh'
define todir = '/u03/oradata/orclsa'

-- The following rsync command gives a move that displays progress.
-- Useful in a large file situation.
define rsmv = 'rsync -aP --remove-sent-files '
-- define rsmv = 'mv -i '

set termout off
column var new_value _var
select name as var from v$database;
set termout on
prompt
accept aaa prompt 'You are connected to database &_var.. Ctrl+C if this is incorrect '

prompt ******************************************************************
prompt
prompt =======================
prompt Relocate Database Files
prompt =======================
prompt
prompt Move any files (control,redo,temp,data) in a given directory
prompt to a new directory. Any file on the database in that directory
prompt will be moved. All other files will remain in their own
prompt directories.
prompt
prompt You will be prompted for the from and to directory names.
prompt A script will then be prepared which you can review and run
prompt immediately, or keep until another time.
prompt
prompt A list of existing files will be provided.
prompt

prompt ******************************************************************
accept aaa prompt 'CTRL+C to abort, Press <ENTER> to continue '
prompt

set head off
set lines 200
set verify off
set feedback off
set trimspool on

@list_files ' '

prompt

accept fromdir default '&fromdir' prompt 'Enter the FROM directory (default &fromdir)? '

accept todir default '&todir' prompt 'Enter the TO directory (default &todir)? '

set termout off
prompt ****************************************************************
spool move_files_&_var..sql

-- List all files before the change occurs
select 'prompt Performing File moves on &_var' from dual;

select '@list_files ''Before''' from dual;
select 'prompt' from dual;

select 'set head off' from dual;

-- Control files parameter must be set before closing the database (spfile only)
select 'prompt Amending SPFILE for control files' from dual;
select aaa from
(
select 0 anum,'alter system set control_files = ' aaa from dual
union
select rownum, ''''||replace(name,'&fromdir','&todir')||''''||
        case rownum when (select count(0) from v$controlfile) then null else ',' end
   from v$controlfile
union
select 99999999, 'scope = spfile;' from dual
) order by anum;

-- shutdown the database completely
select 'prompt Stopping Database' from dual;
select 'shutdown immediate' from dual;

-- move the control files to their new location while the database is completely closed
select 'prompt Moving Control files' from dual;
select '!&rsmv '||name||' '||replace(name,'&fromdir','&todir')
   from v$controlfile
where name like '&fromdir%';

-- startup mount for the movement of all other files
select 'prompt Starting Database in mount mode' from dual;
select 'startup mount' from dual;

-- move the files themselves (datafiles)
select 'prompt Moving Data : '||name||' to '||replace(name,'&fromdir','&todir'),
'!&rsmv '||name||' '||replace(name,'&fromdir','&todir'),
'alter database rename file '''||name||''' to '''||replace(name,'&fromdir','&todir')||''';'
from v$datafile
where name like '&fromdir%'
/

-- move the files themselves (tempfiles)
select 'prompt Moving Temp : '||name||' to '||replace(name,'&fromdir','&todir'),
'!&rsmv '||name||' '||replace(name,'&fromdir','&todir'),
'alter database rename file '''||name||''' to '''||replace(name,'&fromdir','&todir')||''';'
from v$tempfile
where name like '&fromdir%'
/

-- move the files themselves (logfiles)
select 'prompt Moving REDO logs' from dual;
select '!&rsmv '||member||' '||replace(member,'&fromdir','&todir'),
'alter database rename file '''||member||''' to '''||replace(member,'&fromdir','&todir')||''';'
from v$logfile
where member like '&fromdir%'
/

-- Open the database
select 'prompt Opening Database' from dual;
select 'alter database open;' from dual;

select 'select ''Finished on '||name||''' from dual;' from v$database;

--set feedback on
--set verify on
--set head on

-- List all files after the change occurs
select '@list_files ''After''' from dual;

spool off
set termout on

prompt
prompt ******************************************************************
prompt
accept aaa default 'Y' prompt 'Would you like to review the generated script <default-Y>? '
prompt
host ./review_file.sh &aaa move_files_&_var..sql

accept aaa prompt 'CTRL+C to abort, Press <ENTER> to perform changes '
prompt
prompt ******************************************************************
prompt
spool move_files_&_var..lst
@move_files_&_var..sql
spool off

[oracle@localhost sql]$

 

 

 

Happyjohn