ORA-01552: cannot use system rollback segment

Hi all,

When I tried to open my mounted secondary database, ORA-01552 was thrown. After a bit of googling, the error was resolved through the simple steps below.

*** First, all the rollback segments were checked.

select segment_name,tablespace_name,status from dba_rollback_segs;

*** Brought the UNDO tablespace datafile online.

alter database datafile '/home/app/host/undotbs01.dbf' online;

*** Set the undo management to AUTO. You will need to restart the DB.

alter system set undo_management=AUTO scope=spfile;
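
*** Restarted the DB so the static parameter change takes effect.

shutdown immediate
startup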

*** Brought the rollback segments online.
ALTER ROLLBACK SEGMENT "_SYSSMU1_99961370$" online;


Oracle Data Guard ~ Standard Edition 11g

As you know, Oracle Enterprise Edition is very expensive. With 11g, Oracle introduced "Active Data Guard", which lets you query the Physical Standby Database while it is continuously updated to reflect the state of the Primary Database, but at an extra charge.

We can have a Data Guard-style setup with Standard Edition; however, the manual work (archive log transfer and the archive log apply process) has to be done ourselves.

Let's see how we can achieve this using Oracle 11g Standard Edition.

Environment

OS: Linux Red Hat 6

Primary DB Server: primhostserv

Primary DB: phostdb

Secondary DB Server: sechostserv

Install Oracle 11g SE on both the primhostserv and sechostserv servers. Create a new database named phostdb. This is our primary database for Oracle Data Guard.

It will be easier if you install Oracle in the same paths on both servers.

Step 01

Enable archive logging in the Primary DB.

You can refer to my earlier post Flashback Database ~ Oracle.
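
In short (see the linked post for details):

shutdown immediate
startup mount
alter database archivelog;
alter database open;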

Step 02

Create the same folder structure on the Secondary server as on the Primary server.

e.g: /home/app/oracle/admin/PHOSTDB/adump

Step 03

Create a password file and a pfile from the Primary DB, and copy them to the Secondary DB server.

> create pfile='/home/oracle/pfile_PHOSTDB.ora' from spfile;
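
The password file can be created with orapwd (the file name below assumes the standard orapw<SID> convention):

orapwd file=$ORACLE_HOME/dbs/orapwPHOSTDB password=<sys password>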

Step 04

You will now need to create a backup of the Primary DB and apply it to the Secondary. You can do this using RMAN, but a simple file copy is easier. Keep your Primary DB shut down during the copy.
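
For example, with the Primary shut down (paths follow the ones used in Step 06):

scp -r /home/app/PHOSTDB oracle@sechostserv:/home/newapp/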

Step 05

Create a Standby control file from Primary DB.

>alter database create standby controlfile as '/home/app/standby.ctl';

Please note that you will need to edit the pfile used to start the Standby DB so that it points to the newly created control file.

*.control_files='/home/app/PHOSTDB/standby.ctl'

Step 06

Now you will have all your files copied to the Secondary DB server. Edit the pfile used for the Secondary DB to tell Oracle where the Primary's files are located on the Secondary.

DB_FILE_NAME_CONVERT = '/home/app/PHOSTDB','/home/newapp/PHOSTDB'

Step 07

Now you can start up the Standby database.

> startup nomount pfile='/home/oracle/pfile_PHOSTDB.ora'

>alter database mount standby database;

Step 08

You can now recover your Standby database. Please note that, as we are using Oracle SE here, archive logs will not be shipped to the Secondary server automatically. You can add a script to copy the newly created archive files every 30-60 minutes. The recovery can also be automated with a script; you can find sample scripts at the end of this post.

> recover standby database

AUTO

Step 09

To verify that the Secondary DB is in sync, you can open it in read-only mode.

>alter database open read only;

Please note that you will need to put the Secondary DB back into standby mode to continue recovery.

> shutdown immediate

> startup nomount pfile='/home/oracle/pfile_PHOSTDB.ora'

>alter database mount standby database;

Below are the scripts used to automatically sync the archive logs from the Primary to the Secondary server, along with the script that applies the archive logs to the standby DB.

rsync.sh – Add to the Primary server. This script syncs the archive logs (the Linux version is shown here). Key-based SSH authentication (see the sketch after the script below) lets rsync run without prompting for the remote server's password.

recover.sql – Add to the Secondary server. This is the SQL that recovers/applies the archive logs.

recover.sh – Add to the Secondary server. Configure the path to recover.sql correctly. This script exports the environment variables and invokes recover.sql.

rsync.sh

cd /home/app/oracle/PHOSTDB/archive
# copy only archive logs that do not already exist on the standby
rsync --ignore-existing * oracle@sechostserv:/home/app/oracle/phostdb/archive/
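
For the rsync above to run unattended, set up key-based SSH authentication from the Primary to the Secondary server, e.g.:

ssh-keygen -t rsa
ssh-copy-id oracle@sechostserv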


recover.sql

spool log.recovery replace
select to_char(sysdate,'yyyy-mm-dd hh:mi:ss') from dual;
SELECT MAX(RECID) "Log id now" FROM V$LOG_HISTORY;
recover standby database
auto
spool off
exit;


recover.sh

# set the Oracle environment for the standby instance
ORACLE_BASE=/home/app/oracle
export ORACLE_BASE
ORACLE_HOME=/home/app/oracle/product/11.2.0/db_1
export ORACLE_HOME
PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
export PATH
ORACLE_SID=phostdb
export ORACLE_SID
# run the recovery script as SYSDBA
sqlplus -S "/ as sysdba" @/home/app/oracle/phostdb/archive/recover.sql
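
To run these on the 30-60 minute cycle mentioned earlier, crontab entries like the following could be used (the script locations are assumptions; place them wherever suits your setup):

# Primary server: ship new archive logs every 30 minutes
*/30 * * * * /home/oracle/scripts/rsync.sh

# Secondary server: apply the shipped logs every 30 minutes
*/30 * * * * /home/oracle/scripts/recover.sh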

Enjoy!!!

Add a sequence to an existing table

There are situations where a sequence needs to be added to an existing table's primary key. To achieve this, we can create the sequence dynamically, taking the "START WITH" value from the table's primary key column.

e.g:
Table Name: students
Primary Column: student_no (integer)

DECLARE
  maxid INTEGER;
  sql_stmt VARCHAR2(200);
BEGIN
  SELECT MAX(student_no)+1 INTO maxid FROM students;
  sql_stmt := 'CREATE SEQUENCE seq_students_no INCREMENT BY 1
    START WITH '||maxid||'
    MINVALUE '||maxid||'
    MAXVALUE 9999999999999999
    NOCYCLE
    NOORDER
    CACHE 20';
  EXECUTE IMMEDIATE sql_stmt;
END;
/
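
Once the sequence exists, it can supply new primary key values; for example (the name column here is hypothetical):

INSERT INTO students (student_no, name) VALUES (seq_students_no.NEXTVAL, 'John');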

Restore point ~ Oracle

Restore points can be created either as normal restore points or as guaranteed restore points. The difference between the two is that a guaranteed restore point allows you to flash back to the restore point regardless of the DB_FLASHBACK_RETENTION_TARGET initialization parameter, i.e. it is always available (assuming you have enough space in the flash recovery area).

The database should be in OPEN mode with Flashback enabled.

* Log in as SYS:

* Create a restore point :
SQL>create restore point INITDB guarantee flashback database;

If you need to flash back to the created restore point, follow the steps below.

* Shutdown the database :
SQL> shutdown immediate

* Startup the database (mount):
SQL> startup mount

*You can view the created restore point :
SQL> select * from v$restore_point;

* Flashback to the restore point
SQL> flashback database to restore point INITDB;
Flashback complete.

* You can view the created restore point :
SQL>select * from v$restore_point;
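
* Open the database; after a flashback it must be opened with RESETLOGS:
SQL>alter database open resetlogs;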

Note: Check my note on Oracle flash back.

Flashback Database ~ Oracle

Introduction

* Flashback Database is an alternative to point-in-time recovery (PITR): datafiles are rewound to the contents they had at a past time.
* Old database block images are stored in the flash recovery area, which allows fast rollback of the database (as no online redo logs are required).

* Relevant parameters:
o db_recovery_file_dest
o db_recovery_file_dest_size
o db_flashback_retention_target

* Relevant Commands:
o alter database flashback on | off
o alter tablespace flashback on | off

STEPS

* Log in as SYS:
CMD>set oracle_sid=DB01
CMD>sqlplus / as sysdba

* Database should be operated in spfile mode :
SQL>show parameter spfile;

* Set the Archive Log Destination:
SQL>alter system set log_archive_dest_1='location=C:\oracle\SAMPLE\archivefiles';

* Change the archive log format:
SQL>alter system set log_archive_format='arch_%t_%s_%r.log' scope=spfile;

* Check the recovery file destination parameters:
SQL>show parameter recover

It should show the parameters like db_recovery_file_dest, db_recovery_file_dest_size, recovery_parallelism

* Change the parameter values : Ex:
SQL>alter system set db_recovery_file_dest_size=3G scope=both;
SQL>alter system set db_recovery_file_dest='C:\oracle\SAMPLE\archivefiles\recoveryarea' scope=both;

* Check the retention parameters:
SQL>show parameter reten

It should show the parameters like db_flashback_retention_target, undo_retention

* Change the parameter values : Ex:
SQL>alter system set db_flashback_retention_target=14400 scope=both;

* Check the lag parameters:
SQL>show parameter lag

It should show the parameters like archive_lag_target

* Change the parameter values : Ex:
SQL>alter system set archive_lag_target=3600 scope=both;

* Then do a checkpoint :
SQL>alter system checkpoint;

* Shutdown the database :
SQL>shutdown immediate;

* Startup the database (mount):
SQL>startup mount;

* Execute following statements to enable flashback :

SQL>alter database archivelog;
SQL>alter database force logging;
SQL>alter database flashback on;

* Shutdown the database :
SQL>shutdown immediate;

* Startup the database:
SQL>startup

Note: Check my note on Oracle restore point to create and work with flash back.

ORACLE ~ Locks

Normally a deadlock generates a dump file and is automatically released by an Oracle system process:

1) Check v$session
sqlplus> select sid, serial#, username, command, lockwait, osuser from v$session where lockwait is not null;

2) To kill a locked session, first find its sid and serial#, then use
sqlplus>alter system kill session 'sid, serial#';
*** You need to have DBA privileges to kill sessions

3) To find which SQL has a lock wait
sqlplus>select sql_text from v$sqltext where (address,hash_value) in (select sql_address,sql_hash_value from v$session where lockwait is not null) order by address, hash_value, piece;

V$LOCK -> Lists the locks currently held by Oracle Database and outstanding requests for a lock or latch
DBA_BLOCKERS-> Displays a session if it is holding a lock on an object for which another session is waiting
DBA_WAITERS -> Displays a session if it is waiting for a locked object
DBA_DDL_LOCKS-> Lists all DDL locks held in the database and all outstanding requests for a DDL lock
DBA_DML_LOCKS-> Lists all DML locks held in the database and all outstanding requests for a DML lock
DBA_LOCK-> Lists all locks or latches held in the database and all outstanding requests for a lock or latch
DBA_LOCK_INTERNAL-> Displays a row for each lock or latch that is being held, and one row for each outstanding request for a lock or latch
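
As a quick illustration, the BLOCKING_SESSION column of V$SESSION (available in 10g and later) shows who is blocking whom:

select sid, serial#, blocking_session, seconds_in_wait from v$session where blocking_session is not null;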

Oracle Sessions

Situations arise where we need to find the current sessions in the database; these can be viewed through the SYS view v$session.

> select * from v$session

This view gives you loads of information: schema name, application, session id, etc.

By retrieving the session id, unwanted sessions can be removed (disconnected) from the DB.

>ALTER SYSTEM KILL SESSION 'sid,serial#';

Create a new UNDO tablespace

There can be situations where the UNDO tablespace grows continuously and the space needs to be reclaimed. We cannot simply drop the tablespace and create a new one; we need to follow the steps below:

Steps

1) Switch log files
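
alter system switch logfile;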

2) Create a new UNDO tablespace

CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE '<path>/undotbs201.dbf' SIZE 100M AUTOEXTEND ON NEXT 10240K MAXSIZE 32767M;

3) Assign the newly created tablespace as the UNDO tablespace

alter system set undo_tablespace = UNDOTBS2;

4) Drop the old UNDO tablespace

drop tablespace "UNDOTBS" INCLUDING CONTENTS AND DATAFILES;

Note: Use the query below to check the current UNDO tablespace settings.

select name,value from v$parameter where name in ('undo_management','undo_tablespace');

Oracle 11g password expiration

In Oracle 11g, schema passwords expire periodically, by default every 180 days. If required, we can alter this setting and make the password permanent. The change is made to the schema's profile, so all schemas under that profile are affected.

select PROFILE from dba_users where username = 'User you are interested in';

ALTER PROFILE <Profile from ABOVE query> LIMIT PASSWORD_LIFE_TIME UNLIMITED;

Oracle expdp

Oracle Data Pump export (expdp) is the newer utility for backing up data, replacing the old "exp" utility. expdp has many new and important features, and it is much faster and more reliable. However, as with the exp utility, you cannot write a backup to a remote location; dumps are written only on the server where the DB you need to back up resides.
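
Note that expdp writes through an Oracle directory object, which must exist and be readable/writable by the exporting user; a sketch (the OS path is an assumption):

create directory testdirectory as '/home/oracle/dumps';
grant read, write on directory testdirectory to system;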

Basic Example

/*Exporting the source DB */

expdp
userid=system/<system password>@//<ip>/<db> /*Source DB details*/
dumpfile=testdump.dmp /*Name of dump file*/
logfile=testdump.log /*Name of log file*/
full=y /*Type of dump (all schemas or only specific schema(s))*/
directory=testdirectory /*Directory where dump is created*/

/*Importing the destination DB */
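
A matching import on the destination could look like this (the logfile name is an assumption):

impdp
userid=system/<system password>@//<ip>/<db> /*Destination DB details*/
dumpfile=testdump.dmp /*Name of dump file*/
logfile=testdumpimp.log /*Name of log file*/
full=y
directory=testdirectory /*Directory where the dump resides*/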

More Examples

1) Full dump

expdp userid=system/password@SOURCE directory=DIR dumpfile=fullexample1.dmp logfile=expdpexample1.log full=y
impdp userid=system/password@TARGET dumpfile=fullexample1.dmp logfile=impdpexample1.log full=y directory=DIR CONTENT=METADATA_ONLY

*CONTENT={ALL / METADATA_ONLY / DATA_ONLY}

ALL – Imports both data and metadata to destination DB (Default)
METADATA_ONLY – Imports only object definitions, no table data
DATA_ONLY – Imports only table data, no object definitions

2) Table dump

expdp userid=system/password@SOURCE tables=user1.example2 directory=DIR dumpfile=table1.dmp logfile=expdpexample2.log
impdp userid=system/password@TARGET directory=DIR dumpfile=table1.dmp full=y logfile=impdptable.log

expdp userid=mubasher_oms/password@SOURCE tables=example2 directory=DIR dumpfile=table2.dmp logfile=expdptable2.log
impdp userid=system/password@TARGET directory=DIR dumpfile=table2.dmp full=y logfile=impdptable2.log TABLE_EXISTS_ACTION=APPEND

*TABLE_EXISTS_ACTION={APPEND / REPLACE / TRUNCATE}

APPEND – Rows from the dump are added to the existing table; existing rows are kept
REPLACE – The existing table is dropped, then recreated and loaded from the dump
TRUNCATE – Existing rows are deleted, then rows from the dump are loaded

3) Schema dump

expdp userid=system/password@SOURCE schemas=schema1 directory=DIR dumpfile=schema1.dmp logfile=expdpschema1.log
impdp userid=system/password@TARGET full=y directory=DIR dumpfile=schema1.dmp logfile=impdpschema1.log

/*Exports only a particular schema*/

4) INCLUDE and EXCLUDE

INCLUDE

/*This option is used when you are required to export particular objects*/

Only tables
expdp userid=system/password@SOURCE schemas=schema2 include=TABLE directory=DIR dumpfile=include1.dmp logfile=expdpinclude1.log

Only tables and procedures
expdp userid=system/password@SOURCE schemas=schema2 include=TABLE include=PROCEDURE directory=DIR dumpfile=include2.dmp logfile=expdpinclude2.log

expdp userid=system/password@SOURCE schemas=schema2 include=TABLE,PROCEDURE directory=DIR dumpfile=include3.dmp logfile=expdpinclude3.log

Only table “Prac1”
expdp userid=system/password@SOURCE schemas=schema2 include=TABLE:"\='PRAC1'" directory=DIR dumpfile=include4.dmp logfile=expdpinclude4.log

Only tables that start with “P”
expdp userid=system/password@SOURCE schemas=schema2 include=TABLE:"LIKE 'P%'" directory=DIR dumpfile=include5.dmp logfile=expdpinclude5.log

Only tables “PRAC1” & “PRAC2”
expdp userid=system/password@SOURCE schemas=schema2 include=TABLE:"IN ('PRAC1','PRAC2')" directory=DIR dumpfile=include6.dmp logfile=expdpinclude6.log

EXCLUDE

Exclude all tables
expdp userid=system/password@SOURCE schemas=schema2 exclude=TABLE directory=DIR dumpfile=exclude1.dmp logfile=expdpexclude1.log

Exclude table “PRAC1”
expdp userid=system/password@SOURCE schemas=schema2 exclude=TABLE:"\='PRAC1'" directory=DIR dumpfile=exclude2.dmp logfile=expdpexclude2.log

Export only the metadata of all tables except "PRAC1"
expdp userid=system/password@SOURCE schemas=schema2 exclude=TABLE:"\='PRAC1'" directory=DIR dumpfile=exclude3.dmp logfile=expdpexclude3.log content=metadata_only

5) Network Exports/Imports (NETWORK_LINK)

grant IMP_FULL_DATABASE to source_schema1;
grant IMP_FULL_DATABASE to destination_schema1;
grant EXP_FULL_DATABASE to destination_schema1;

impdp userid=source_schema1/password@TARGET DIRECTORY=DIR NETWORK_LINK=REMOTE_LINK

*NETWORK_LINK = DB link to the Source DB. (Here, impdp connects to the source DB over the link, reads the required data, and writes it directly into the destination DB; no dump file is created.)
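
The database link itself must exist on the target side; a sketch (the credentials and the TNS alias SOURCE are assumptions):

create database link REMOTE_LINK connect to source_schema1 identified by password using 'SOURCE';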

6) Query dump

/*Export data matching specific conditions only (all rows where st_id=1 in the PRAC1 table)*/

expdp userid=mubasher_test2/password@SOURCE directory=DIR dumpfile=query1.dmp logfile=query1.log tables=prac1 query=prac1:\”where st_id=1\”

7) Remap schemas

/*Change the schema name in the destination DB */

expdp userid=system/password@SOURCE schemas=schema2 directory=DIR dumpfile=remapschema1.dmp logfile=expdpremapschema1.log
impdp userid=system/password@TARGET full=y directory=DIR dumpfile=remapschema1.dmp logfile=impdpremapschema1.log remap_schema=schema2:test2

8) Remap tablespaces

/*Change the tablespace (and schema) names in the destination DB */

expdp userid=system/password@SOURCE schemas=schema2 directory=DIR dumpfile=remapdatafiles1.dmp logfile=expdpremapdatafiles1.log
impdp userid=system/password@TARGET full=y directory=DIR dumpfile=remapdatafiles1.dmp logfile=impdpremapdatafiles1.log remap_schema=schema2:test2 remap_tablespace=schema2_ts:test2_ts

9) Estimate

/*Get an estimation of the dump*/

expdp userid=system/password@SOURCE schemas=mubasher_test2 estimate_only=y
expdp userid=system/password@SOURCE schemas=mubasher_test2 estimate_only=y estimate=blocks
expdp userid=system/password@SOURCE schemas=mubasher_test2 estimate_only=y estimate=statistics

10) Files

/*Export data into more than one dump file depending on size*/

expdp userid=system/password@SOURCE full=y dumpfile=file%U.dmp filesize=15M directory=DIR logfile=expdpfile.log
impdp userid=system/password@TARGET full=y directory=DIR dumpfile=file%U.dmp logfile=impdpfile.log

11) Attach and detach

/*Exports can be stopped, paused, resumed accordingly*/

expdp userid=system/password@SOURCE directory=DIR dumpfile=attach1.dmp logfile=expdpattach1.log full=y job_name=fullexport

Export> STOP_JOB=IMMEDIATE /*pause the export*/

expdp userid=system/password@SOURCE attach=fullexport

Export> CONTINUE_CLIENT /*continue with the export*/
Export> KILL_JOB /*kill the export*/

12) Version

/*Export data when the destination DB version differs from the source DB*/

expdp userid=system/password@SOURCE directory=DIR dumpfile=fullexample1.dmp logfile=expdpexample1.log full=y version=10.2

For more information : http://docs.oracle.com/cd/B14117_01/server.101/b10825/dp_export.htm#i1006388