
Friday 30 December 2011

CREATING A USER IN ORACLE



Here is a simple example of creating a user in Oracle 10g.

Purpose:
A user is created with the CREATE USER statement. A user is an account through which someone can log in to the database and access data according to the privileges granted to that account.

SYNTAX:

SQL> CREATE USER username IDENTIFIED [BY password | EXTERNALLY]
DEFAULT TABLESPACE tablespace
TEMPORARY TABLESPACE [tablespace | tablespace_group_name]
QUOTA [UNLIMITED | value] ON tablespace
PROFILE profile_name
PASSWORD EXPIRE
ACCOUNT [LOCK | UNLOCK];

Username:
It specifies the name of the user account which is to be created.

IDENTIFIED BY:
It specifies how the user is identified, i.e. how the Oracle database authenticates the user account.

BY password:
It specifies that the user is a local database user who can log in to the database directly using the given password.

EXTERNALLY:
It specifies that the user is created as an external user; the user is then authenticated outside the database (by the operating system, or, as in the example below, via the password file).

DEFAULT TABLESPACE:
This clause specifies the default tablespace in which objects created and owned by that user are stored. If it is not specified, the user's objects are stored in the database's default tablespace.

TEMPORARY TABLESPACE:
It specifies the user's default temporary tablespace, which is used to store temporary (sort) data during SQL operations performed by the user. A single tablespace or a tablespace group can be assigned to the user.

QUOTA:
This clause allocates a specific amount of space (quota) to the user on a tablespace, so the user cannot use more than that quota on the particular tablespace. Quotas can be set on several tablespaces.
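If a quota needs to be changed later, ALTER USER can be used; a minimal sketch, assuming a user named sample and the USERS tablespace (both illustrative):

SQL> ALTER USER sample QUOTA 100M ON users;
SQL> SELECT tablespace_name, bytes, max_bytes
     FROM dba_ts_quotas
     WHERE username = 'SAMPLE';

In dba_ts_quotas, max_bytes = -1 indicates an unlimited quota.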

PROFILE:
This clause assigns a profile to the user, which sets resource limits such as database resources, password limits, and OS resources.

PASSWORD EXPIRE:
It specifies that the password set by the DBA in the IDENTIFIED clause is expired immediately, so the user is prompted to set a new password for the account at the first login.

ACCOUNT:
It specifies whether the user account is locked or unlocked. The user cannot access the account if LOCK is specified; with UNLOCK, the account is available for login.

Example for creating a user:

SQL> CREATE USER sample IDENTIFIED BY sample  
DEFAULT TABLESPACE TBL_SAMPLE_DATA
QUOTA UNLIMITED ON IDX_SAMPLE_DATA
PROFILE default
PASSWORD EXPIRE
ACCOUNT UNLOCK;
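Note that a newly created user cannot connect until it is granted at least the CREATE SESSION privilege; a typical follow-up (the exact privileges depend on what the account is for):

SQL> GRANT CREATE SESSION TO sample;
SQL> GRANT CREATE TABLE TO sample;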

Example for creating an external user:
In this example the authentication of the user is done by using the password file.

  1. The password file should be created in the $ORACLE_HOME/dbs location
$ cd $ORACLE_HOME/dbs
$ orapwd file=orapw<SID> password=<password> entries=<max_users>
                  Ex: $ orapwd file=orapwACDB2 password=oracle entries=5

  2. Set the initialization parameter REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
  3. Create the user sample
    SQL> CREATE USER SAMPLE IDENTIFIED EXTERNALLY;
Note: The users registered in the password file can be viewed through the v$pwfile_users view.
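For example:

SQL> SELECT username, sysdba, sysoper FROM v$pwfile_users;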




Wednesday 28 December 2011

TABLESPACE POINT IN TIME RECOVERY


           I have a table in my production database which is updated every week. By mistake, one of the developers dropped that table yesterday (Monday) morning around 10 am. My manager asked me to get the table back into the database, as it is very important for some transactions. Now I have two options to get the table back into the database.

a.      Duplicate the database on the stage server using the Sunday night full backup, recover it until 10 am Monday morning using the archive logs, export the table, and import it into the production database.
b.      Tablespace point-in-time recovery on the production database itself.

If there is enough space on the stage server to duplicate the database, and if the database is small, option a is reasonable. If the database is large, it takes more time because all the tablespaces have to be duplicated. I did not have enough time or space on the stage server to choose option a.

If there is enough space on the production server and a maintenance window is available to recover the tablespace, option b can be chosen. I had enough space on the production server and the tablespace was small, so I opted for option b.
Here is the process I followed:

Target database:DBACLASS
Auxiliary database: AUX
Version:10.2.0

1.       Create the pfile for an auxiliary instance from the target database in which we want to do the tablespace point in time recovery.
SQL>create pfile from spfile;

2.       Modify the pfile for the auxiliary database.
InitAUX.ora
----------------
large_pool_size=150M
shared_pool_size=150M
compatible='10.2.0.1.0'
control_files='/u02/oradata/AUX/control01.ctl','/u02/oradata/AUX/control02.ctl','/u02/oradata/AUX/control03.ctl'
db_block_size=8192
db_name='DBACLASS'
remote_login_passwordfile='EXCLUSIVE'
#db_unique_name='DBACLASS'
DB_FILE_NAME_CONVERT= ('/u02/oradata/DBACLASS/DBACLASS/','/u02/oradata/AUX/')
LOG_FILE_NAME_CONVERT= ('/u02/oradata/DBACLASS/DBACLASS/','/u02/oradata/AUX/')
LOG_ARCHIVE_DEST_1='LOCATION=/u02/oradata/AUX/arch'
LOG_ARCHIVE_FORMAT=%r_%t_%s.arc
service_names='AUX'
LOCK_NAME_SPACE ='DBACLASS'

db_name:
The database name of the auxiliary database should be the same as that of the target database.

LOCK_NAME_SPACE:
When the primary database and the auxiliary (cloned) database are on the same server with the same name, the LOCK_NAME_SPACE parameter has to be added to the auxiliary parameter file, because Oracle needs something unique to distinguish between two databases with the same name. The distributed lock manager generates the lock names using this parameter.

DB_FILE_NAME_CONVERT:
This parameter is used to create the datafiles for the clone database in the specified location.

3.       Export the oracle_sid for the auxiliary database.
$ export ORACLE_SID=AUX
4.       Create the password file for the auxiliary database in $ORACLE_HOME/dbs.
$ cd $ORACLE_HOME/dbs
$ orapwd file=orapwAUX password=abacus entries=5

5.       Configure the listener and tnsnames for the AUX database.

Listener: 
SID_LIST_AUX =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /opt/app/oracle/product/10.2.0)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = AUX)
      (ORACLE_HOME = /opt/app/oracle/product/10.2.0)
    )
  )
AUX =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.210)(PORT = 1521))
    )
  )
Tnsnames:
AUX =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.210)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = AUX)
      (UR = A)
    )
  ) 
6.       Create table TABLE1 in the users tablespace in the Primary database (DBACLASS).
SQL>create table TABLE1(num number, tname varchar2(20));
Table created.

7.       Insert some rows in the table TABLE1.
SQL>select count(*) from TABLE1;
Count(*)
---------
6 
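For reference, the six rows counted above could have been loaded with simple inserts such as these (the values are purely illustrative):

SQL> insert into TABLE1 values (1, 'row one');
SQL> insert into TABLE1 values (2, 'row two');
SQL> commit;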
8.       Connect to the rman and take the backup of the database.(dg2.abacus.com)
$rman catalog rmn/rmn target sys/abacus@DBACLASS 
Rman>backup database;
9.       Now drop the table TABLE1 in the users tablespace(DBACLASS).
SYS>set time on;
09:47:41 SQL> drop table table1; 
Now, to get the table TABLE1 back into the database, tablespace point-in-time recovery has to be done until just before 09:47.

10.    Now start up the AUX database in nomount state.
SQL>startup nomount;
11.   Now connect to the rman catalog and run tspitr.rmn (RMAN server).

tspitr.rmn
------------
CONNECT AUXILIARY sys/****@aux
CONNECT TARGET sys/****@dbaclass
CONNECT CATALOG rmn/***@acdbrmn
run {
  allocate auxiliary channel c1 device type disk;
  recover tablespace users until time "TO_DATE('2011-DEC-22:21:47:45','yyyy-mon-dd:hh24:mi:ss')";
}

$rman catalog rmn/rmn target sys/abacus@DBACLASS auxiliary sys/abacus@AUX 
Rman>@tspitr.rmn 
12.   After that, check that the tablespace USERS is in offline mode. Bring it online and check for the table TABLE1 in the primary database (DBACLASS).
SQL>alter tablespace users online;
SQL>select count(*) from table1;

Mechanism followed by TSPITR:
1.       Takes the tablespace to be recovered offline in the target database.
2.       Restores the control file for the clone database from the RMAN backup taken before the target time specified in the recover ... until time statement.
3.       Restores the datafiles for the auxiliary database to the location specified in DB_FILE_NAME_CONVERT.
4.       Recovers the datafiles until the time specified in the tablespace recovery statement, and the auxiliary database is opened with the RESETLOGS option.
5.       Exports the metadata of the tablespace being recovered, and the auxiliary database is shut down.
Eg:
'exp userid =\"sys/abacus@aux as sysdba\" point_in_time_recover=y tablespaces=USERS file=tspitr_a.dmp';
6.       Imports the metadata from the auxiliary database into the target database and removes the auxiliary database files.
7.       Leaves the recovered tablespace offline (it has to be brought online manually, as in step 12).

Objects to check after the TSPITR: 
 When the tablespace is recovered up to a certain time t, objects created in it after time t are lost. So, those objects need to be backed up (for example, exported) before the TSPITR.

Check the objects created after time t using the query below:
select owner, name, creation_time, tablespace_name
from ts_pitr_objects_to_be_dropped
where tablespace_name = '****'
and creation_time >= to_date(********);
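Before starting the TSPITR it is also worth confirming that the tablespace is self-contained; the TS_PITR_CHECK view lists dependencies that would block the recovery. A sketch for the USERS tablespace (adjust the tablespace name as needed):

SQL> select obj1_owner, obj1_name, reason
     from sys.ts_pitr_check
     where (ts1_name = 'USERS' and ts2_name != 'USERS')
        or (ts1_name != 'USERS' and ts2_name = 'USERS');

If this query returns rows, the listed objects have to be dealt with before the tablespace can be recovered on its own.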

Issues you may face:
--------------------- 
If $ORACLE_HOME/dbs/lk$ORACLE_SID exists, remove it. If it is not removed, an error is raised while opening the auxiliary database saying the database cannot be opened in exclusive mode.
               
Note:
------- 
The AUX database is cloned into the directory specified in DB_FILE_NAME_CONVERT. After the TSPITR is completed, RMAN automatically deletes the datafiles, redo logs, controlfiles, and temp files of the AUX (auxiliary) database.

Friday 16 December 2011

Dataguard configuration using cold backup



Versions:
CentOs-5.5
Oracle 10.2.0

Database information:
Primary database name: ACDBP
Primary db_unique_name: ACDBP
Standby database name: ACDBP
Standby db_unique_name: ACDBS

1.       Create the primary database
2.       Set the parameter file for the primary database:

initACDBP.ora

db_cache_size=125M
java_pool_size=8M
large_pool_size=8M
shared_pool_size=100M
streams_pool_size=20M
audit_file_dest='/opt/app/oracle/admin/ACDBP/adump'
background_dump_dest='/opt/app/oracle/admin/ACDBP/bdump'
compatible='10.2.0'
control_files='/opt/app/oracle/oradata/ACDBP/control01.ctl','/opt/app/oracle/admin/ACDBP/controlfile/control02.ctl'
core_dump_dest='/opt/app/oracle/admin/ACDBP/cdump'
db_block_size=8192
db_domain=''
db_file_multiblock_read_count=16
db_name='ACDBP'
db_recovery_file_dest='/opt/app/oracle/flash_recovery_area'
db_recovery_file_dest_size=2048M
job_queue_processes=10
open_cursors=100
pga_aggregate_target=70M
processes=50
remote_login_passwordfile='EXCLUSIVE'
sga_target=536M
undo_management='AUTO'
undo_tablespace='UNDOTBS1'
user_dump_dest='/opt/app/oracle/admin/ACDBP/udump'
service_names=ACDBP
#data guard parameters
db_unique_name='ACDBP'
fal_client='ACDBP'
fal_server='ACDBS'
log_archive_config='DG_CONFIG=(ACDBP,ACDBS)'
log_archive_dest_1='LOCATION=/opt/app/oracle/admin/ACDBP/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=ACDBP'
log_archive_dest_2='SERVICE=ACDBS LGWR ASYNC VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ACDBS'
#log_archive_dest_2='SERVICE=ACDBS lgwr sync affirm reopen=10 VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ACDBS'
log_archive_dest_state_1='ENABLE'
log_archive_dest_state_2='ENABLE'
log_archive_format='%t_%s_%r.dbf'
ACDBP.log_archive_format='%t_%s_%r.dbf'
log_archive_max_processes=2
ACDBP.standby_archive_dest='/opt/app/oracle/admin/ACDBP/arch'
standby_file_management='AUTO'
log_archive_min_succeed_dest=1

3.       Create the appropriate directories for the stand by database and prepare the parameter file for the standby database.

initACDBS.ora

db_cache_size=125M
java_pool_size=8M
large_pool_size=8M
shared_pool_size=100M
streams_pool_size=20M
audit_file_dest='/opt/app/oracle/admin/ACDBS/adump'
background_dump_dest='/opt/app/oracle/admin/ACDBS/bdump'
compatible='10.2.0'
control_files='/opt/app/oracle/oradata/ACDBS/standby1.ctl'
core_dump_dest='/opt/app/oracle/admin/ACDBS/cdump'
db_block_size=8192
db_domain=''
db_file_multiblock_read_count=16
db_name='ACDBP'
db_recovery_file_dest='/opt/app/oracle/flash_recovery_area'
db_recovery_file_dest_size=2048M
job_queue_processes=10
open_cursors=100
pga_aggregate_target=70M
processes=50
remote_login_passwordfile='EXCLUSIVE'
sga_target=536M
undo_management='AUTO'
undo_tablespace='UNDOTBS1'
user_dump_dest='/opt/app/oracle/admin/ACDBS/udump'
service_names=ACDBS
#data guard parameters
db_unique_name='ACDBS'
fal_client='ACDBS'
fal_server='ACDBP'
log_archive_config='DG_CONFIG=(ACDBS,ACDBP)'
log_archive_dest_1='LOCATION=/opt/app/oracle/admin/ACDBS/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=ACDBS'
#log_archive_dest_2='SERVICE=ACDBS LGWR ASYNC VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ACDBS'
log_archive_dest_2='SERVICE=ACDBP LGWR ASYNC VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ACDBP'
log_archive_dest_state_1='ENABLE'
log_archive_dest_state_2='ENABLE'
log_archive_format='%t_%s_%r.dbf'
db_file_name_convert='/opt/app/oracle/oradata/ACDBP/datafile','/opt/app/oracle/oradata/ACDBS/datafile'
log_file_name_convert='/opt/app/oracle/oradata/ACDBP/logfile','/opt/app/oracle/oradata/ACDBS/logfile','/opt/app/oracle/admin/ACDBP/logfile','/opt/app/oracle/admin/ACDBS/logfile'
log_archive_max_processes=2
standby_archive_dest='/opt/app/oracle/admin/ACDBS/arch'
standby_file_management='AUTO'
log_archive_min_succeed_dest=1



4.       Configure the listener and tns entries on both primary and standby database servers:

Primary listener .ora and tnsnames.ora:
listener.ora
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = ACDBP))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.210)(PORT = 1521))
      )
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /opt/app/oracle/product/10.2.0)
      (SID_NAME = ACDBP)
    )
  
    (SID_DESC =
      (ORACLE_HOME = /opt/app/oracle/product/10.2.0)
      (SID_NAME = PLSExtProc)
      (PROGRAM = extproc)
     )
    )

tnsnames.ora:

ACDBS =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.211)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ACDBS)
    )
  )



Standby listener .ora and tnsnames.ora:

listener.ora

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = ACDBS))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.211)(PORT = 1521))
      )
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /opt/app/oracle/product/10.2.0)
      (SID_NAME = ACDBS)
    )
  
    (SID_DESC =
      (ORACLE_HOME = /opt/app/oracle/product/10.2.0)
      (SID_NAME = PLSExtProc)
      (PROGRAM = extproc)
     )
    )

tnsnames.ora:

ACDBP =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.210)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ACDBP)
    )
  )


5.       Enable the archive log mode for the primary database.

SQL>shut immediate;
SQL>startup mount;
SQL>alter database archivelog;
SQL>alter database open;
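A quick check that archiving is really enabled:

SQL> archive log list;
SQL> select log_mode from v$database;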

6.       Create the standby logfiles in the primary database.

ALTER DATABASE ADD STANDBY LOGFILE GROUP 3 ('/opt/app/oracle/admin/ACDBP/logfile/redo_stand_1a.log',
'/opt/app/oracle/oradata/ACDBP/logfile/redo_stand_1b.log') size 10M;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/opt/app/oracle/admin/ACDBP/logfile/redo_stand_2a.log',
'/opt/app/oracle/oradata/ACDBP/logfile/redo_stand_2b.log') size 10M;
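The standby redo log groups can be verified from v$standby_log (standby redo logs are normally sized the same as the online redo logs, 10M here):

SQL> select group#, thread#, bytes/1024/1024 size_mb, status from v$standby_log;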

7.       Back up the database (cold) and scp the files to the standby server:

SQL>shut immediate;

$ cd /opt/app/oracle/oradata/ACDBP/datafile
$ scp * oracle@192.168.1.211:/opt/app/oracle/oradata/ACDBS/datafile
$ cd /opt/app/oracle/oradata/ACDBP/logfile
$ scp * oracle@192.168.1.211:/opt/app/oracle/oradata/ACDBS/logfile
$ cd /opt/app/oracle/admin/ACDBP/logfile
$ scp * oracle@192.168.1.211:/opt/app/oracle/admin/ACDBS/logfile


8.       Create the password file for the primary database and scp it to the $ORACLE_HOME/dbs location on the standby server.

$ cd $ORACLE_HOME/dbs
$ orapwd file=orapw$ORACLE_SID password=abacus entries=5
$ cp orapwACDBP orapwACDBS
$ scp orapwACDBS oracle@192.168.1.211:$ORACLE_HOME/dbs
$ rm orapwACDBS

Note: The primary's orapwACDBP file must end up in the same location on the standby under the name orapwACDBS.

9.       Create the standby controlfile and scp the file to the standby server:

SQL> shut immediate;
SQL>startup mount;
SQL> alter database create standby controlfile as '/opt/app/oracle/admin/ACDBP/controlfile/stndby01.ctl';
SQL>exit

$cd /opt/app/oracle/admin/ACDBP/controlfile/
$ scp stndby01.ctl oracle@192.168.1.211:/opt/app/oracle/admin/ACDBS/controlfile/

10.   Start and verify the standby database (on the standby server).

SQL>startup nomount;
SQL>alter database mount standby database;
SQL>archive log list;

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /opt/app/oracle/admin/ACDBS/arch
Oldest online log sequence     201
Next log sequence to archive   0
Current log sequence           202
           
            Check the archive log sequence on both databases (ACDBP and ACDBS).
            On ACDBP (PRIMARY):
            SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /opt/app/oracle/admin/ACDBP/arch
Oldest online log sequence     201
Next log sequence to archive   202
Current log sequence           202

SQL>alter database recover managed standby database using current logfile disconnect;
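Redo apply can be monitored from v$managed_standby on the standby; for example:

SQL> select process, status, thread#, sequence# from v$managed_standby;

The MRP0 process should normally show APPLYING_LOG or WAIT_FOR_LOG together with the current sequence number.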

    11.  Identify the archived logs already present on the standby database.

  SQL> select sequence#, first_time, next_time from v$archived_log order by sequence#;
   
Switch a log on the primary database:

  SQL> alter system switch logfile;

- Re-run the same query to make sure that the logs are received and applied on
  the standby server.


Now verify that the logs created in the primary database were applied:

 SQL> select sequence#, applied from v$archived_log order by sequence#;

Note: Monitor the alert logs on both the primary and standby databases to verify that redo shipping and redo apply are working.


Issue in the primary:

Fri Jul 22 06:23:45 2011
Errors in file /opt/app/oracle/admin/ACDBP/bdump/acdbp_arc0_15992.trc:
ORA-16191: Primary log shipping client not logged on standby
PING[ARC0]: Heartbeat failed to connect to standby 'ACDBS'. Error is 16191.
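ORA-16191 generally indicates that the standby could not authenticate the primary's redo transport session; a common cause is a password file mismatch between the two servers. Re-copying the primary password file to the standby as in step 8 and checking the parameter below on both databases is the usual first step (a general note, not something specific to this occurrence):

SQL> show parameter remote_login_passwordfile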


Issue in standby database:
SQL> alter database recover managed standby database using current logfile;
alter database recover managed standby database using current logfile
*
ERROR at line 1:
ORA-01093: ALTER DATABASE CLOSE only permitted with no sessions connected


When the standby database is open in read-only mode and managed recovery needs to be restarted, there must be no other active sessions connected to the standby database.
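In practice that means stopping redo apply before opening the standby read only, and restarting it afterwards from a session with no other connections; a minimal sketch of the sequence on the standby:

SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
-- ... run the read-only queries, then make sure no other sessions are connected ...
SQL> alter database recover managed standby database using current logfile disconnect;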



Wednesday 7 December 2011

Query level export using expdp utility





Today I used a feature in Oracle that I had never used before in my DBA career: query-level export of a table using the expdp utility. There is a table gds_sales, and series_cd is one of its columns. I needed the rows whose series_cd value is less than 9.8 (series_cd < 9.8) to be exported, so I used a query-level export. The process is as follows:

Parfile
--------
tables = rtd.gds_sales
dumpfile= expdp.gds.qry.dmp
logfile= expdp.gds.qry.log
directory=DPUMP
query= rtd.gds_sales:"WHERE SERIES_CD<9.8"

Note: In the first parameter, tables = rtd.gds_sales, rtd is the owner of the table. Since the export is run as the system user, the schema name has to be specified before the table name.
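Before or after running the export, it is easy to sanity-check how many rows the predicate should select, so the row count reported by expdp can be cross-checked (table and column names as used above):

SQL> select count(*) from rtd.gds_sales where series_cd < 9.8;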

[oracle@rac-a exp]$ expdp parfile=expdp.gds.qry.par

Export: Release 10.2.0.1.0 - 64bit Production on Friday, 19 November, 2010 2:28:21

Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Username: system
Password: ******

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** parfile=/home/oracle/admin/ACDB2/exp/expdp.gds.qry.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "RTD"."GDS_PRODUCT"                         29.16 MB   46963 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /home/oracle/admin/ACDB2/exp/expdp.gds.qry.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 02:28:41



> The table name must be qualified in the QUERY parameter exactly as in the parfile above; otherwise the error below is raised.


query="where series_cd<9.8"
ORA-31658: specifying a schema name requires a table name