Saturday, April 27, 2013

NULL Has No Equivalents



One aspect of NULL in SQL often stumps people. SQL conditions are three-valued: every comparison can evaluate to true, false, or unknown (NULL). This affects all kinds of comparisons, operators, and logic, as you've already seen.
But one nuance of this logic is occasionally forgotten, so we'll repeat it explicitly.



NULL has no equivalents.
No other value is the same as NULL, not even other NULL values.


If you run the following query, can you guess your results?

select first_name, last_name
from hr.employees
where commission_pct = NULL;

The answer is that no rows will be selected. NULL values do exist in the COMMISSION_PCT column, but NULL is never equal to anything, so the COMMISSION_PCT = NULL criterion will never find a match, and you will never see results from this query.
Always use IS NULL and IS NOT NULL to find or exclude NULL values.
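
For example, here is the corrected query (a quick sketch against the same HR sample schema) that actually finds the employees with no commission; swap in IS NOT NULL to get everyone who does have one:

select first_name, last_name
from hr.employees
where commission_pct is null;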

Saturday, April 20, 2013

Automating Database Startup and Shutdown on Linux 11g R2


I followed this link


Create a file called "/etc/init.d/dbora" as the root user, containing the following.

vi /etc/init.d/dbora

#!/bin/sh
# chkconfig: 345 99 10
# description: Oracle auto start-stop script.
#
# Set ORA_OWNER to the user id of the owner of the
# Oracle database software.

ORA_OWNER=oracle

case "$1" in
'start')
# Start the Oracle databases:
# The following command assumes that the oracle login
# will not prompt the user for any values
su $ORA_OWNER -c "/home/oracle/scripts/startup.sh >> /home/oracle/scripts/startup_shutdown.log 2>&1"
touch /var/lock/subsys/dbora
;;
'stop')
# Stop the Oracle databases:
# The following command assumes that the oracle login
# will not prompt the user for any values
su $ORA_OWNER -c "/home/oracle/scripts/shutdown.sh >> /home/oracle/scripts/startup_shutdown.log 2>&1"
rm -f /var/lock/subsys/dbora
;;
esac


Use the chmod command to set the privileges to 750.

chmod 750 /etc/init.d/dbora

Associate the dbora service with the appropriate run levels and set it to auto-start using the following command.

chkconfig --add dbora

Next, we must create the "startup.sh" and "shutdown.sh" scripts in the "/home/oracle/scripts" directory. First create the directory.

# mkdir -p /home/oracle/scripts
# chown oracle.oinstall /home/oracle/scripts


The "/home/oracle/scripts/startup.sh" script should contain the following commands.


#!/bin/bash

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=maniner.domain
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export PATH=/usr/sbin:$ORACLE_HOME/bin:$PATH

export ORACLE_SID=orcl
ORAENV_ASK=NO
. oraenv
ORAENV_ASK=YES

# Start Listener
lsnrctl start

# Start Database
sqlplus / as sysdba << EOF
STARTUP;
EXIT;
EOF


The "/home/oracle/scripts/shutdown.sh" script is similar.


#!/bin/bash

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=maniner.domain
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export PATH=/usr/sbin:$ORACLE_HOME/bin:$PATH

export ORACLE_SID=orcl
ORAENV_ASK=NO
. oraenv
ORAENV_ASK=YES

# Stop Database
sqlplus / as sysdba << EOF
SHUTDOWN IMMEDIATE;
EXIT;
EOF

# Stop Listener
lsnrctl stop



Note: You could move the environment settings into the "dbora" file or into a separate file that is sourced in the startup and shutdown scripts. I kept them local to the scripts so you can see the kind of things that need to be set in case you have to write a script to deal with multiple installations, instances and listeners.

Make sure the permissions and ownership of the files are correct.

# chmod u+x /home/oracle/scripts/startup.sh /home/oracle/scripts/shutdown.sh
# chown oracle.oinstall /home/oracle/scripts/startup.sh /home/oracle/scripts/shutdown.sh



The listener and database will now start and stop automatically with the machine. You can test them using the following command as the "root" user.

# service dbora start
# service dbora stop




Check by rebooting the system. It's magic :) Oracle will start by itself.

error in invoking target 'ntcontab.o' of makefile




I was stuck on this error while installing Oracle on my VMware CentOS 5 machine.

I finally found the solution; the problem was with the RPMs.
Since I did not have internet access on the machine,
I was not using the yum command to install the RPMs.
I was downloading the RPMs and then installing them manually,
and getting errors like

libstdc++-devel-4.1.2-54.el5.i386 conflicts with file from package gcc-c++-4.1.2-33.i386

Reversing the installation order of the RPMs did not work either.

As a solution, I managed to get internet access on my system
and ran

yum remove gcc-c++-4.1.2-33.i386
yum remove libstdc++-devel-4.1.2-54.el5.i386
yum remove libstdc++-devel
yum install gcc-c++

and finally got it resolved :)

Tuesday, April 16, 2013

mysql clustering step by step


I have followed this link.
For the basics about MySQL clustering, see: mysql clustering basics

Setting up the Cluster

To set up the cluster, you need three servers: two data nodes and one management node. I should point out that the Management node is not required after the cluster install, but I strongly recommend keeping it, as it gives you the automatic failover capability. I will use these three servers as examples:


Server1 192.168.52.128 (Cluster Management Server)
Server2 192.168.52.129 (Data Node 1)
Server3 192.168.52.130 (Data Node 2)

#############################################################################################!
The first step is to install the MySQL Cluster Management Server on Server1.
Let's download MySQL Cluster 6.2 from the MySQL website (http://dev.mysql.com/downloads/cluster/).
This guide is intended for Debian-based systems, so we will download the non-RPM package (mysql-cluster-gpl-6.2.15-linux-i686-glibc23.tar.gz).
Here are the steps to follow to set up the MySQL Cluster Management Server (ndb_mgmd) and the cluster management client (ndb_mgm):
mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
tar xvfz mysql-cluster-gpl-6.2.15-linux-i686-glibc23.tar.gz
cd mysql-cluster-gpl-6.2.15-linux-i686-glibc23
mv bin/ndb_mgm /usr/bin
mv bin/ndb_mgmd /usr/bin
chmod 755 /usr/bin/ndb_mg*
cd /usr/src
rm -rf /usr/src/mysql-mgm

Next step is to create the Cluster configuration file:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi config.ini

(I prefer to use nano as the text editor, just because it is much easier to use than vi.)


Here is the sample config file:


[NDBD DEFAULT]
NoOfReplicas=2

[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]

[TCP DEFAULT]

# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (server1)
HostName=192.168.52.128

# Section for the storage nodes
[NDBD]
# IP address of the first data node (Server2)
HostName=192.168.52.129
DataDir= /var/lib/mysql-cluster

[NDBD]
# IP address of the second storage node (Server3)
HostName=192.168.52.130
DataDir=/var/lib/mysql-cluster

# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]


Now let's start the Management Server:

ndb_mgmd -f /var/lib/mysql-cluster/config.ini

$$$$
[root@localhost mysql]# ndb_mgmd -f /var/lib/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-5.5.29 ndb-7.2.10
$$$$


Now, we would want to start the Management Server automatically in case of a system reboot,
so we add an init script to do that:


echo 'ndb_mgmd -f /var/lib/mysql-cluster/config.ini' > /etc/init.d/ndb_mgmd
chmod 755 /etc/init.d/ndb_mgmd
update-rc.d ndb_mgmd defaults

#######################################################################################################################
Data Nodes Configuration (Server2 and Server3):

Now let's set up the data nodes. Here are the steps to do that (do on both data nodes)


groupadd mysql
useradd -g mysql mysql
cd /usr/local/
wget <pick any mirror URL from MySQL's website>
tar xvfz mysql-cluster-gpl-6.2.15-linux-i686-glibc23.tar.gz
ln -s mysql-cluster-gpl-6.2.15-linux-i686-glibc23 mysql
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root:mysql .
chown -R mysql data
cp support-files/mysql.server /etc/init.d/
chmod 755 /etc/init.d/mysql.server
update-rc.d mysql.server defaults
cd /usr/local/mysql/bin
mv * /usr/bin
cd ../
rm -fr /usr/local/mysql/bin
ln -s /usr/bin /usr/local/mysql/bin

#################################################################################
Next we need to create the MySQL config file /etc/my.cnf on both data nodes:

vi /etc/my.cnf


Here is the sample file:

[mysqld]
ndbcluster
# IP address of the cluster management server (Server1)
ndb-connectstring=192.168.52.128

[mysql_cluster]
# IP address of the cluster management Server (Server1)
ndb-connectstring=192.168.52.128


Our MySQL installation is almost complete, now let's create the data directories and start the MySQL Server on both nodes:


mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
ndbd --initial
/etc/init.d/mysql.server start


2013-02-08 08:52:46 [ndbd] INFO -- Angel connected to '192.168.52.128:1186'
2013-02-08 08:52:46 [ndbd] INFO -- Angel allocated nodeid

(Important: we need to run ndbd --initial
only when we start MySQL for the first time, or when /var/lib/mysql-cluster/config.ini on the Management Server changes.)

MySQL installation is complete, now let's set a root password for our MySQL servers:

mysqladmin -u root password newrootpassword


Again, it makes sense to start up the cluster nodes automatically in case of a system restart/failure.
Here are the ndbd init script and system startup links for that:

echo 'ndbd' > /etc/init.d/ndbd
chmod 755 /etc/init.d/ndbd
update-rc.d ndbd defaults


This completes our Cluster installation process. Next, let's test it.



################################################################################################
Test:

On Cluster Management Server, run the Cluster Management Client:

ndb_mgm

It will take you to the ndb_mgm prompt:


-- NDB Cluster -- Management Client --
ndb_mgm>


Now type show on the prompt:

ndb_mgm> show;


You should see an output similar to this:

ndb_mgm> show;
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.52.129 (Version: version number, Nodegroup: 0, Master)
id=3 @192.168.52.130 (Version: version number, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.52.128 (Version: version number)

[mysqld(API)] 2 node(s)
id=4 @192.168.52.129 (Version: version number)
id=5 @192.168.52.130 (Version: version number)

ndb_mgm>



We should see our data nodes connected in the previous output. Now type quit to close the Management client:

ndb_mgm>quit;



Test the Cluster:

Now, let's create a Test database on Server2 (192.168.52.129) and run some tests

On Server2:

mysql -u root -p
CREATE DATABASE testdb;
USE testdb;
CREATE TABLE tblCustomer (ID INT) ENGINE=NDBCLUSTER;
INSERT INTO tblCustomer VALUES (1);
SELECT * FROM tblCustomer;
quit;

Pay attention to the CREATE TABLE statement:
we must specify ENGINE=NDBCLUSTER for every table that we want to be clustered. As stated earlier,
MySQL Cluster only supports the NDB engine, so if you use any other engine the table simply won't be clustered (see the quick engine check after the result below).

The result of the SELECT statement would be:

mysql> SELECT * FROM tblCustomer;
+------+
| ID |
+------+
| 1 |
+------+
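
If you want to double-check which engine a table ended up with, here is a quick sketch (assuming the testdb database created above):

mysql> SHOW CREATE TABLE tblCustomer;
mysql> SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'testdb';

Clustered tables report their engine as ndbcluster; tables using any other engine (MyISAM, InnoDB) live only on the local node.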


Since clustering in MySQL is at the table level, not at the database level, we would have to create the database separately on
Server3 (192.168.52.130) as well, but afterwards tblCustomer would be replicated with all its data (since its engine is NDBCLUSTER):


On Server3.......192.168.52.130

mysql -u root -p
CREATE DATABASE testdb;
USE testdb;
SELECT * FROM tblCustomer;

Now, if we insert a row of data on Server3, it should be replicated back to Server2:192.168.52.129

INSERT INTO tblCustomer VALUES (2);


If we run a SELECT query on Server2, here is what we should see:


mysql> SELECT * FROM tblCustomer;

+------+
| ID |
+------+
| 1 |
| 2 |
+------+


Test Node shutdown:

Now run the following on Server2....192.168.52.129.... to test what happens if a node goes offline:


killall ndbd

and run this command to make sure that all ndbd processes have terminated:


ps aux | grep ndbd | grep -iv grep


If you still see any processes, run this again:

killall ndbd

Now, let's go to the management server (Server1)....192.168.52.128... and run the following to check the cluster status:


ndb_mgm

On the ndb_mgm console, run:

show;

It should bring up an output similar to the following:

ndb_mgm> show;
Connected to Management Server at: localhost:1186

Cluster Configuration
---------------------

[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from 192.168.52.129)..
id=3 @192.168.52.130 (Version: -----, Nodegroup: 0, Master).....

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.52.128 (Version: -----)

[mysqld(API)] 2 node(s)
id=4 @192.168.52.129 (Version: --------)
id=5 @192.168.52.130 (Version: --------)

ndb_mgm>

You see, Server2 is not connected anymore.

Type quit; to leave the ndb_mgm management console. Now, let's check on Server3
whether our database is still up and we can still connect to it:

mysql -u root -p
USE testdb;
SELECT * FROM tblCustomer;
quit;

It should bring up the following result set:


mysql> SELECT * FROM tblCustomer;

+------+
| ID |
+------+
| 1 |
| 2 |
+------+


Now, let's start MySQL on Server2 again by issuing the following command:

ndbd


How to Restart MySQL Cluster:

In managing a production MySQL environment, or any other transactional database environment,
there are times when we have to restart or shut down our systems. So, let's see how we would shut down our MySQL Cluster:

On Server1, open the management console:

ndb_mgm

then type:

shutdown;

it would bring up an output like this:

ndb_mgm> shutdown;
Node 3: Cluster shutdown initiated
Node 2: Node shutdown completed.
2 NDB Cluster node(s) have shutdown.
NDB Cluster management server shutdown.
ndb_mgm>

This means that the cluster nodes Server2 and Server3 and also the Management node (Server1) have shut down.

To leave the Management console, run:

quit;


To start the cluster management server again, run the following (on Server1, Management Server):

ndb_mgmd -f /var/lib/mysql-cluster/config.ini


and on Server2 and Server3, run the following:

ndbd

In case /var/lib/mysql-cluster/config.ini on the Management Server has changed, you should run the following instead:


ndbd --initial


You can go back to the Management node and verify that the cluster started OK, without any errors:

ndb_mgm

on the Management console run the following:

show;

This should bring up the cluster configuration.


steps to do replication in mysql (both ways)



################################################################################################
MASTER-MASTER Replication on 2 Servers (Primary Server (10.135.44.3) - Standby Server (10.135.44.4))
################################################################################################

################################################################################################
Creating the replication user on both nodes :
################################################################################################


MASTER SERVER ON 10.135.44.3
################################################################################################
grant replication slave, replication client on *.* to repl@"10.135.44.4" identified by "repl";
grant replication slave, replication client on *.* to repl@"Airtel_Ghana_4" identified by "repl";


STANDBY (SLAVE) SERVER ON 10.135.44.4
################################################################################################
grant replication slave, replication client on *.* to repl@"10.135.44.3" identified by "repl";
grant replication slave, replication client on *.* to repl@"airtel_Ghana_3" identified by "repl";



################################################################################################
PRIMARY Server My.cnf file
################################################################################################
server-id=1
log-bin=/var/lib/mysql/mysql-bin.log
binlog-do-db=voicechat
replicate-do-db=voicechat
auto_increment_increment = 10
auto_increment_offset = 1

################################################################################################
Standby Server my.cnf file
################################################################################################
server-id=2
log-bin=/var/lib/mysql/mysql-bin.log
binlog-do-db=voicechat
replicate-do-db=voicechat
auto_increment_increment = 10
auto_increment_offset = 2

################################################################################################
Check the log position on both servers
MASTER SERVER
################################################################################################

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000009 | 107 | voicechat | |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

################################################################################################
SLAVE SERVER
################################################################################################
mysql> show master status;
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000003 | 107 | voicechat | |
+------------------+----------+--------------+------------------+
1 row in set (0.01 sec)

################################################################################################
After noting the master status on each server, run the following commands to point each server at the other
################################################################################################
MASTER SERVER
################################################################################################
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='10.135.44.4', MASTER_USER='repl', MASTER_PASSWORD='repl', MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=107;
START SLAVE;
quit;


################################################################################################
SLAVE SERVER
################################################################################################
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='10.135.44.3', MASTER_USER='repl', MASTER_PASSWORD='repl', MASTER_LOG_FILE='mysql-bin.000009', MASTER_LOG_POS=107;
START SLAVE;
quit;


################################################################################################
Test Replication (Master-Master)
################################################################################################
On the primary server (10.135.44.3):

create database voicechat;
use voicechat;
create table primary1 (name varchar(15));

insert into primary1(name) values('test_primary');
insert into primary1(name) values('test_primary');
insert into primary1(name) values('test_primary');


On the standby server (10.135.44.4):

use voicechat;
delete from primary1;
insert into primary1(name) values('test1_standby');
insert into primary1(name) values('test2_standby');
insert into primary1(name) values('test3_standby');




To check or restart replication on either server:

show slave status\G
stop slave;
start slave;

steps to do replication in mysql (source to target -one way)


replication of a master server onto a slave server

steps

step 1
on the master

You must enable binary logging and
configure a unique server ID.

This can be done by setting the following parameters in the my.cnf file:

[mysqld]
server-id=1
log-bin=mysql-bin


step 2

take a backup of the databases or tables for which you want to set up replication


step 3

create user

CREATE USER 'repl'@'%' IDENTIFIED BY 'slavepass';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';


restart mysql


service mysql restart


note the following (record the File and Position values):

mysql>FLUSH TABLES WITH READ LOCK;
mysql>SHOW MASTER STATUS;

+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 | 333 | | |
+------------------+----------+--------------+------------------+



step 4

on the slave

[mysqld]
server-id=2


import the backup which you took on the master


restart the MySQL service

at the mysql prompt run the following command

mysql>CHANGE MASTER TO
MASTER_HOST='192.168.163.130', ##ip of master
MASTER_USER='repl',
MASTER_PASSWORD='slavepass',
MASTER_LOG_FILE='mysql-bin.000001', ##the File value noted from SHOW MASTER STATUS
MASTER_LOG_POS=333;

mysql>START SLAVE;



It's done :)
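
To verify that replication is actually running, a quick check on the slave (both replication threads should report Yes):

mysql> SHOW SLAVE STATUS\G

-- look for these fields in the output:
--   Slave_IO_Running: Yes
--   Slave_SQL_Running: Yes
--   Seconds_Behind_Master: 0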



If you want to replicate only a specific table, add the following to the slave server's my.cnf:

replicate-do-table=mani.abc

Friday, April 12, 2013

how to convert a cooked file system oracle database to ASM



First create an ASM instance by following the link:
steps-to-create-asm-instance-on-windows

Once that is done, follow these steps:


sqlplus>select name, state, type, total_mb, free_mb from v$asm_diskgroup;
sqlplus>select path,name from v$asm_disk;


set oracle_sid=test

take a backup to be safe


create a table just for verification

sqlplus>create table friend (name varchar(20));
sqlplus>insert into friend values ('mani');
sqlplus>insert into friend values ('kirnai');
sqlplus>insert into friend values ('lovi');
sqlplus>commit;

sqlplus>create pfile from spfile;

edit the pfile as follows:
change the value of the control_files parameter to '+dgroup1'
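
As an alternative (a hedged sketch, not the exact steps used above), you could make the same change directly in the spfile before shutting down, instead of hand-editing the pfile and recreating the spfile from it:

sqlplus>alter system set control_files='+dgroup1' scope=spfile;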

sqlplus>shutdown immediate;

sqlplus / as sysdba


SQL> create spfile from pfile;

File created.



RMAN> startup nomount;

Oracle instance started

Total System Global Area 293601280 bytes

Fixed Size 1248624 bytes
Variable Size 113246864 bytes
Database Buffers 176160768 bytes
Redo Buffers 2945024 bytes

RMAN> restore controlfile from 'C:\oracle\product\10.2.0\oradata\orcl\CONTROL01.CTL';

Starting restore at 12-APR-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=157 devtype=DISK

channel ORA_DISK_1: copied control file copy
output filename=+DGROUP1/orcl/controlfile/backup.256.812557745
Finished restore at 12-APR-13

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN> backup as copy database format '+dgroup1';

Starting backup at 12-APR-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=157 devtype=DISK
channel ORA_DISK_1: starting datafile copy
input datafile fno=00001 name=C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSTEM01.DBF
output filename=+DGROUP1/orcl/datafile/system.257.812558199 tag=TAG20130412T143637 recid=7 stamp=812558268
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:15
channel ORA_DISK_1: starting datafile copy
input datafile fno=00003 name=C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\SYSAUX01.DBF
output filename=+DGROUP1/orcl/datafile/sysaux.258.812558273 tag=TAG20130412T143637 recid=8 stamp=812558310
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting datafile copy
input datafile fno=00005 name=C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\EXAMPLE01.DBF
output filename=+DGROUP1/orcl/datafile/example.259.812558319 tag=TAG20130412T143637 recid=9 stamp=812558335
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile fno=00002 name=C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\UNDOTBS01.DBF
output filename=+DGROUP1/orcl/datafile/undotbs1.260.812558345 tag=TAG20130412T143637 recid=10 stamp=812558358
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:16
channel ORA_DISK_1: starting datafile copy
input datafile fno=00004 name=C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF
output filename=+DGROUP1/orcl/datafile/users.261.812558361 tag=TAG20130412T143637 recid=11 stamp=812558361
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
copying current control file
output filename=+DGROUP1/orcl/controlfile/backup.262.812558363 tag=TAG20130412T143637 recid=12 stamp=812558366
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
including current SPFILE in backupset
channel ORA_DISK_1: starting piece 1 at 12-APR-13
channel ORA_DISK_1: finished piece 1 at 12-APR-13
piece handle=+DGROUP1/orcl/backupset/2013_04_12/nnsnf0_tag20130412t143637_0.263.812558367 tag=TAG20130412T143637 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 12-APR-13

RMAN>



alter diskgroup dgroup1 add disk ' ';
# run this (with the path of an additional disk) only if you get an error about insufficient space in the disk group

RMAN> switch database to copy;

datafile 1 switched to datafile copy "+DGROUP1/orcl/datafile/system.257.812558199"
datafile 2 switched to datafile copy "+DGROUP1/orcl/datafile/undotbs1.260.812558345"
datafile 3 switched to datafile copy "+DGROUP1/orcl/datafile/sysaux.258.812558273"
datafile 4 switched to datafile copy "+DGROUP1/orcl/datafile/users.261.812558361"
datafile 5 switched to datafile copy "+DGROUP1/orcl/datafile/example.259.812558319"


RMAN> alter database open;

database opened


SQL> select file_name from dba_data_files;

FILE_NAME
----------------------------------------------
+DGROUP1/orcl/datafile/users.261.812558361
+DGROUP1/orcl/datafile/sysaux.258.812558273
+DGROUP1/orcl/datafile/undotbs1.260.812558345
+DGROUP1/orcl/datafile/system.257.812558199
+DGROUP1/orcl/datafile/example.259.812558319


SQL> select name from v$controlfile;

NAME
------------------------------------------------------
+DGROUP1/orcl/controlfile/backup.256.812557745


SQL> select member from v$logfile;

MEMBER
-----------------------------------------------------
C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO03.LOG
C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG
C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO01.LOG



SQL> alter database drop logfile group 2;

Database altered.


SQL> alter database add logfile group 2 ('+DGROUP1/orcl/REDO02.LOG') size 10m;

Database altered.
SQL> alter database drop logfile group 3;

Database altered.


SQL> alter database add logfile group 3 ('+DGROUP1/orcl/REDO03.LOG') size 10m;

Database altered.

SQL> alter system switch logfile;

System altered.

SQL> alter system switch logfile;

System altered.

SQL> alter database drop logfile group 1;

Database altered.


SQL> alter database add logfile group 1 ('+DGROUP1/orcl/REDO01.LOG') size 10m;

Database altered.


SQL> select group#,status from v$log;

GROUP# STATUS
---------- ----------------
1 UNUSED
2 INACTIVE
3 CURRENT


SQL> select member from v$logfile;

MEMBER
-------------------------------------
+DGROUP1/orcl/redo03.log
+DGROUP1/orcl/redo02.log
+DGROUP1/orcl/redo01.log


SQL> select name from v$tempfile;

NAME
----------------------------------------------------
C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\TEMP01.DBF


SQL> alter tablespace temp add tempfile '+DGROUP1' SIZE 10M;

Tablespace altered.

SQL> select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\TEMP01.DBF
+DGROUP1/orcl/tempfile/temp.267.812559725


SQL> alter tablespace temp drop tempfile 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\TEMP01.DBF';

Tablespace altered.

SQL> select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
+DGROUP1/orcl/tempfile/temp.267.812559725


SQL> select * from friend
2 ;

NAME
--------------------
mani
kirnai
lovi


That's it :)

Wednesday, April 10, 2013

ORA-29701: unable to connect to Cluster Manager


Run the following commands from the $ORACLE_HOME/bin directory to resolve this error:
localconfig delete
localconfig add

Thursday, April 4, 2013

sqlplus easy connect to other database server from local


sqlplus username/password@//server_ip_or_name:1521/servicename
e.g. sqlplus scott/tiger@//PODS.GATE.COM:1521/ODS

Wednesday, April 3, 2013

block change tracking speeds up the incremental backup


Block change tracking is new in 10g and enables fast incremental backups.

When this feature is enabled, Oracle does not need to scan every database block looking for changes;
the change tracking file records which blocks have changed since the last backup, so the incremental backup runs much faster.


To check whether this feature is enabled or not:


SQL> select * from v$block_change_tracking;

To enable it, use the following:

SQL> alter database enable block change tracking using file 'e:\file.txt';

Database altered.
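
Once enabled, RMAN level 1 incremental backups consult this file automatically. A quick sketch to confirm the tracking file and, if ever needed, to turn the feature off again:

SQL> select status, filename from v$block_change_tracking;

SQL> alter database disable block change tracking;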

lost undo datafile, what to do?



You can offline drop the undo datafile, set undo management to manual,
recreate another undo tablespace, alter the system to use the new undo tablespace,
set undo management back to auto, and then drop the old undo tablespace.


follow the steps


SQL> startup
ORACLE instance started.

Total System Global Area 535662592 bytes
Fixed Size 1334380 bytes
Variable Size 176161684 bytes
Database Buffers 352321536 bytes
Redo Buffers 5844992 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 3 - see DBWR trace file
ORA-01110: data file 3: 'E:\DATA\TEST\UNDOTBS01.DBF'



SQL> startup mount

SQL> alter system set undo_management=manual scope=spfile;

System altered.

SQL> alter database datafile 'E:\DATA\TEST\UNDOTBS01.DBF' offline drop;

Database altered.

SQL> shutdown immediate;

SQL> startup

SQL> select file_name,status,online_status from dba_data_files;

SQL> create undo tablespace undomani datafile 'E:\DATA\TEST\UNDOTBSMANI01.DBF' SIZE 10M;

Tablespace created.

SQL> alter system set undo_management=auto scope=spfile;

SQL> alter system set undo_tablespace ='undomani';

SQL> drop tablespace undotbs1;

Tuesday, April 2, 2013

RAC and Grid Computing , are these similar or different or one is subpart or other ?


RAC existed as OPS (Oracle Parallel Server) back in Oracle v7. So the concept of a clustered database
(shared everything database running across several servers) is not new.

What did change was the RAC s/w stack and architecture - as did HPC (High Performance Computing) in general.
The clusters we build today are very different from those we built back in the 90's.

So as the h/w technology changed, so too did the s/w technology for clustering. And Oracle (mostly for marketing reasons, IMO)
chose the term Grid to describe their cluster s/w stack.

Grid in Oracle-speak basically means a s/w stack that provides the basic foundation for a clustered system.
RAC happens to be the house most frequently built on this foundation.

Monday, April 1, 2013

dbms_scheduler create,schedule,drop a job



create a procedure that is to be scheduled as a job

create or replace procedure insert_employees
as
begin
insert into employees values('aak');
end;
/


create a job and schedule it
begin
dbms_scheduler.create_job(
job_name => 'insertintoemployees2',
job_type => 'stored_procedure',
job_action => 'insert_employees',
start_date => sysdate,
repeat_interval => 'FREQ=DAILY;BYHOUR=4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22;BYMINUTE=0,5,10,15,20,25,30,35,40,45,50,55',
enabled => true,
auto_drop => false );
end;
/
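
If you don't want to wait for the schedule, here is a small sketch that runs the job immediately with DBMS_SCHEDULER.RUN_JOB and then checks its run history in DBA_SCHEDULER_JOB_RUN_DETAILS:

begin
dbms_scheduler.run_job(job_name => 'insertintoemployees2');
end;
/

select job_name, status, actual_start_date
from dba_scheduler_job_run_details
where job_name = 'INSERTINTOEMPLOYEES2'
order by actual_start_date desc;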


mani>select job_name,state from dba_scheduler_jobs where job_name='INSERTINTOEMPLOYEES2';

JOB_NAME STATE
------------------------------ ---------------
INSERTINTOEMPLOYEES2 SUCCEEDED



drop a job

SQL> begin
2 dbms_scheduler.drop_job('INSERTINTOEMPLOYEES1');
3 end;
4 /

PL/SQL procedure successfully completed.

SQL> select job_name,state from dba_scheduler_jobs where job_name='INSERTINTOEMPLOYEES1';

no rows selected

SQL>


useful link