Sunday, January 12, 2014

exclusive lock v/s shared lock


Think of a lockable object as a blackboard in a classroom containing a teacher (writer) and many students (readers).

While a teacher is writing something (exclusive lock) on the board:

Nobody can read it, because it is still being written and she is blocking the students' view => If an object is exclusively locked, shared locks cannot be obtained.

Other teachers won't come up and start writing either, because that would make the board unreadable and confuse the students => If an object is exclusively locked, other exclusive locks cannot be obtained.

When the students are reading (shared locks) what is on the board:

They all can read what is on it, together => Multiple shared locks can co-exist.

The teacher waits for them to finish reading before she clears the board to write more => If one or more shared locks already exist, exclusive locks cannot be obtained.
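
The same behaviour can be seen in Oracle with explicit table locks. Here is a minimal sketch, assuming a hypothetical table named BLACKBOARD:

-- Sessions 1 and 2 (readers): shared locks can co-exist
LOCK TABLE blackboard IN SHARE MODE;   -- session 1: granted
LOCK TABLE blackboard IN SHARE MODE;   -- session 2: also granted immediately

-- Session 3 (the writer): an exclusive lock has to wait
-- until both shared locks are released (commit or rollback)
LOCK TABLE blackboard IN EXCLUSIVE MODE;

-- And the other way round: while session 3 holds the exclusive lock,
-- new LOCK TABLE ... IN SHARE MODE requests wait until it commits or rolls back.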

Sunday, October 6, 2013

Shrink data file and reclaim unused space


Shrinking a data file and reclaiming the unused space inside it can be achieved in a simple way by
moving all the objects inside the tablespace to another tablespace and then dropping or shrinking the empty datafiles.

steps are
create new tablespace
move all the tables to the new tablespace
move all the indexes to the new tablespace
move all the partitions of the partitioned tables to the new tablespace
move all the LOBs to the new tablespace
check if any object is still left in the older tablespace
shrink the old data files or drop the old tablespace
grant default tablespace and quota on the new tablespace to the relevant users (a sketch of this step and the first one is shown below)
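
A minimal sketch of the "create new tablespace" and "grant default tablespace and quota" steps, assuming the new tablespace is PROMO02, the application user is RFC_PROMO2, and the datafile path and size are only placeholders:

-- create the new tablespace (datafile path and size are examples only)
CREATE TABLESPACE promo02
  DATAFILE '/u01/app/oracle/oradata/ORCL/promo02_01.dbf' SIZE 1G AUTOEXTEND ON;

-- point the application user at the new tablespace and give it quota there
ALTER USER rfc_promo2 DEFAULT TABLESPACE promo02;
ALTER USER rfc_promo2 QUOTA UNLIMITED ON promo02;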


note : while moving tables and indexes, the status of some indexes may go to UNUSABLE.
take the structure (DDL) of those indexes, drop them and create them again, or try rebuilding them.
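
A quick way to find them and generate the rebuild statements (a sketch; adjust the owner and tablespace to your environment):

-- list unusable indexes and generate REBUILD statements for them
-- (partitions of locally partitioned indexes are tracked in dba_ind_partitions instead)
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' REBUILD TABLESPACE PROMO02;'
FROM dba_indexes
WHERE owner = 'RFC_PROMO2'
  AND status = 'UNUSABLE';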


------ to see the free space in the datafiles ------------------

SELECT SUBSTR (df.NAME, 1, 60) file_name,
       dfs.tablespace_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes, dfs.tablespace_name
ORDER BY file_name;


-----move tables -------------------
SELECT 'ALTER TABLE RFC_PROMO2.' || OBJECT_NAME || ' MOVE TABLESPACE PROMO02;'
FROM ALL_OBJECTS
WHERE OWNER = 'RFC_PROMO2'
AND OBJECT_TYPE = 'TABLE';


-----move indexes -------------------
SELECT 'ALTER INDEX RFC_PROMO2.'||INDEX_NAME||' REBUILD TABLESPACE PROMO02 ONLINE;'
FROM ALL_INDEXES
WHERE OWNER = 'RFC_PROMO2';

-----move lobs-----------------
SELECT 'ALTER TABLE RFC_PROMO2.'||LOWER(TABLE_NAME)||' MOVE LOB('||LOWER(COLUMN_NAME)||') STORE AS (TABLESPACE PROMO02);'
FROM DBA_TAB_COLS
WHERE OWNER = 'RFC_PROMO2' AND DATA_TYPE like '%LOB%';



-----confirm if any object is still left in old tablespace

select segment_name,segment_type,owner
from dba_segments
where tablespace_name ='PROMO1';


-----to see the names of partitioned tables and their partitions
SELECT TABLESPACE_NAME,PARTITION_NAME FROM USER_TAB_PARTITIONS WHERE TABLE_NAME='tablename';


-----move partitioned table partition

alter table partitioned move partition part_3 tablespace users;

SELECT 'ALTER TABLE ' || table_name || ' MOVE PARTITION ' || partition_name || ' TABLESPACE REPORT;'
FROM all_tab_partitions
WHERE table_name = 'requestLog';

---------------shrinking temp file
SELECT tablespace_name, file_name, bytes
FROM dba_temp_files WHERE tablespace_name like 'TEMP%';

alter database tempfile 'D:\APP\ADMINISTRATOR\ORADATA\MONETA\TEMP01.DBF' resize 256M;


--- to shrink datafiles:
select 'alter database datafile ''' || file_name || ''' resize ' ||
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) || 'm;' as cmd,
       bytes/1024/1024
from dba_data_files a,
     ( select file_id, max(block_id+blocks-1) as hwm
       from dba_extents
       group by file_id ) b
where a.file_id = b.file_id(+)
  and ceil(blocks*&&blksize/1024/1024) - ceil((nvl(hwm,1)*&&blksize)/1024/1024) > 0;
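
The script above prompts for the substitution variable &&blksize, the database block size in bytes. One way to set it automatically in SQL*Plus before running it (a sketch):

-- capture the db_block_size parameter into the blksize substitution variable
column blksize new_value blksize
select value as blksize from v$parameter where name = 'db_block_size';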



Thursday, July 11, 2013

BTREE v/s BITMAP indexes


btree index

when the values in a particular column are all different or have a
large variance, a B-tree index is useful.
it is based on the concept of branch blocks and leaf blocks:
a branch block holds key values and pointers; each pointer points either to a lower-level branch block or to a leaf block.
a leaf block holds the indexed key values and the rowids of the matching table rows.

bitmap index
when the data in a particular column is repetitive,
e.g. a sex field that has only the values male and female,
it is not good to create a B-tree index in such a case;
a bitmap index is useful there.
it will look like the following:

male   0 1 0 0 0 0 0 0 0 0 0 0

female 1 0 1 1 1 1 1 1 1 1 1 1
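
For example, a minimal sketch assuming a hypothetical EMPLOYEES table with a high-cardinality EMP_ID column and a low-cardinality SEX column:

-- B-tree index: suited to a column with many distinct values
CREATE INDEX emp_id_idx ON employees (emp_id);

-- bitmap index: suited to a column with only a few distinct values
CREATE BITMAP INDEX emp_sex_bix ON employees (sex);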


Saturday, April 27, 2013

NULL Has No Equivalents



One aspect involving NULL in SQL often stumps people. SQL expressions are tri-valued, meaning every expression
can be true, false, or NULL. This affects all kinds of comparisons, operators, and logic as you've already seen.
But a nuance of this kind of logic is occasionally forgotten, so we'll repeat it explicitly.



NULL has no equivalents.
No other value is the same as NULL, not even other NULL values.


If you run the following query, can you guess your results?

select first_name, last_name
from hr.employees
where commission_pct = NULL;

The answer is that no rows will be selected. Even though NULL values exist in the COMMISSION_PCT column, the COMMISSION_PCT = NULL criterion
will never evaluate to true, so you will never see results from this query.
Always use IS NULL and IS NOT NULL to find or exclude your NULL values.
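
For example, the query above rewritten with IS NULL does return the employees who have no commission:

select first_name, last_name
from hr.employees
where commission_pct IS NULL;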

Saturday, April 20, 2013

Automating Database Startup and Shutdown on Linux 11g R2


I followed this link


Create a file called "/etc/init.d/dbora" as the root user, containing the following:

vi /etc/init.d/dbora

#!/bin/sh
# chkconfig: 345 99 10
# description: Oracle auto start-stop script.
#
# Set ORA_OWNER to the user id of the owner of the
# Oracle database software.

ORA_OWNER=oracle

case "$1" in
'start')
# Start the Oracle databases:
# The following command assumes that the oracle login
# will not prompt the user for any values
su $ORA_OWNER -c "/home/oracle/scripts/startup.sh >> /home/oracle/scripts/startup_shutdown.log 2>&1"
touch /var/lock/subsys/dbora
;;
'stop')
# Stop the Oracle databases:
# The following command assumes that the oracle login
# will not prompt the user for any values
su $ORA_OWNER -c "/home/oracle/scripts/shutdown.sh >> /home/oracle/scripts/startup_shutdown.log 2>&1"
rm -f /var/lock/subsys/dbora
;;
esac


Use the chmod command to set the privileges to 750.

chmod 750 /etc/init.d/dbora

Associate the dbora service with the appropriate run levels and set it to auto-start using the following command.

chkconfig --add dbora

Next, we must create the "startup.sh" and "shutdown.sh" scripts in the "/home/oracle/scripts". First create the directory.

# mkdir -p /home/oracle/scripts
# chown oracle.oinstall /home/oracle/scripts


The "/home/oracle/scripts/startup.sh" script should contain the following commands.


#!/bin/bash

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=maniner.domain
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export PATH=/usr/sbin:$ORACLE_HOME/bin:$PATH

export ORACLE_SID=orcl
ORAENV_ASK=NO
. oraenv
ORAENV_ASK=YES

# Start Listener
lsnrctl start

# Start Database
sqlplus / as sysdba << EOF
STARTUP;
EXIT;
EOF


The "/home/oracle/scripts/shutdown.sh" script is similar.


#!/bin/bash

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=maniner.domain
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export PATH=/usr/sbin:$ORACLE_HOME/bin:$PATH

export ORACLE_SID=orcl
ORAENV_ASK=NO
. oraenv
ORAENV_ASK=YES

# Stop Database
sqlplus / as sysdba << EOF
SHUTDOWN IMMEDIATE;
EXIT;
EOF

# Stop Listener
lsnrctl stop



Note. You could move the environment settings into the "dbora" file or into a separate file that is sourced in the startup and shutdown script. I kept it local to the script so you could see the type of things that need to be set in case you have to write a script to deal with multiple installations, instances and listeners.

Make sure the permissions and ownership of the files are correct.

# chmod u+x /home/oracle/scripts/startup.sh /home/oracle/scripts/shutdown.sh
# chown oracle.oinstall /home/oracle/scripts/startup.sh /home/oracle/scripts/shutdown.sh



The listener and database will now start and stop automatically with the machine. You can test them using the following command as the "root" user.

# service dbora start
# service dbora stop




Check by rebooting the system, it's magic :) Oracle will start by itself.

error in invoking target 'ntcontab.o' of makefile




I was stuck on this error while installing Oracle on my VMware CentOS 5 machine.

I finally got the solution; the problem was with the RPMs.
As I did not have internet access on my machine,
I was not using the yum command to install the RPMs;
I was downloading the RPMs and then installing them manually,
and getting errors like:

libstdc++-devel-4.1.2-54.el5.i386 conflicts with file from package gcc-c++-4.1.2-33.i38

Reversing the order in which the RPMs were installed did not work either.

Then, as a solution, I managed to get internet access on my system
and ran:

yum remove gcc-c++-4.1.2-33.i386
yum remove libstdc++-devel-4.1.2-54.el5.i386
yum remove libstdc++-devel
yum install gcc-c++
and finally got it resolved :)

Tuesday, April 16, 2013

mysql clustering step by step


I have followed this link.
For the basics of MySQL clustering, see: mysql clustering basics

Setting up the Cluster

To set up the cluster, you need three servers: two data nodes and one management node. I should point out that the management node is not required after the cluster install, but I strongly recommend keeping it, as it gives you automatic failover capability. I will use these three servers as examples:


Server1 192.168.52.128 (Cluster Management Server)
Server2 192.168.52.129 (Data Node 1)
Server3 192.168.52.130 (Data Node 2)

#############################################################################################!
The first step is to install the MySQL Cluster Management Server on Server1.
Let's download MySQL Cluster 6.2 from the MySQL website (http://dev.mysql.com/downloads/cluster/).
This guide is intended for Debian-based systems, so we will download the non-RPM package (mysql-cluster-gpl-6.2.15-linux-i686-glibc23.tar.gz).
Here are the steps to follow to set up the MySQL Cluster Management Server (ndb_mgmd) and the cluster management client (ndb_mgm):
mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
tar xvfz mysql-cluster-gpl-6.2.15-linux-i686-glibc23.tar.gz
cd mysql-cluster-gpl-6.2.15-linux-i686-glibc23
mv bin/ndb_mgm /usr/bin
mv bin/ndb_mgmd /usr/bin
chmod 755 /usr/bin/ndb_mg*
cd /usr/src
rm -rf /usr/src/mysql-mgm

Next step is to create the Cluster configuration file:

mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
vi config.ini

(I prefer to use nano as the text editor, just because it is much easier to use than vi.)


Here is the sample config file:


[NDBD DEFAULT]
NoOfReplicas=2

[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]

[TCP DEFAULT]

# Section for the cluster management node
[NDB_MGMD]
# IP address of the management node (server1)
HostName=192.168.52.128

# Section for the storage nodes
[NDBD]
# IP address of the first data node (Server2)
HostName=192.168.52.129
DataDir= /var/lib/mysql-cluster

[NDBD]
# IP address of the second storage node (Server3)
HostName=192.168.52.130
DataDir=/var/lib/mysql-cluster

# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]


Now let's start the Management Server:

ndb_mgmd -f /var/lib/mysql-cluster/config.ini

$$$$
[root@localhost mysql]# ndb_mgmd -f /var/lib/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-5.5.29 ndb-7.2.10
$$$$


Now, we want the Management Server to start automatically in case of a system reboot,
so we add an init script to do that:


echo 'ndb_mgmd -f /var/lib/mysql-cluster/config.ini' > /etc/init.d/ndb_mgmd
chmod 755 /etc/init.d/ndb_mgmd
update-rc.d ndb_mgmd defaults

#######################################################################################################################
Data Nodes Configuration (Server2 and Server3):

Now let's set up the data nodes. Here are the steps to do that (do on both data nodes)


groupadd mysql
useradd -g mysql mysql
cd /usr/local/
wget [URL of any mirror from MySQL's website]
tar xvfz mysql-cluster-gpl-6.2.15-linux-i686-glibc23.tar.gz
ln -s mysql-cluster-gpl-6.2.15-linux-i686-glibc23 mysql
cd mysql
scripts/mysql_install_db --user=mysql
chown -R root:mysql .
chown -R mysql data
cp support-files/mysql.server /etc/init.d/
chmod 755 /etc/init.d/mysql.server
update-rc.d mysql.server defaults
cd /usr/local/mysql/bin
mv * /usr/bin
cd ../
rm -fr /usr/local/mysql/bin
ln -s /usr/bin /usr/local/mysql/bin

#################################################################################
Next we need to create the MySQL config file /etc/my.cnf on #####both nodes:####

vi /etc/my.cnf


Here is the sample file:

[mysqld]
ndbcluster
# IP address of the cluster management server (Server1)
ndb-connectstring=192.168.52.128

[mysql_cluster]
# IP address of the cluster management Server (Server1)
ndb-connectstring=192.168.52.128


Our MySQL installation is almost complete, now let's create the data directories and start the MySQL Server on both nodes:


mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
ndbd --initial
/etc/init.d/mysql.server start


You should see output like this:

2013-02-08 08:52:46 [ndbd] INFO -- Angel connected to '192.168.52.128:1186'
2013-02-08 08:52:46 [ndbd] INFO -- Angel allocated nodeid

(Important: we need to run ndbd --initial
only when we start MySQL for the first time, or if /var/lib/mysql-cluster/config.ini on the Management Server changes.)

MySQL installation is complete, now let's put in a root password for our MySQL Servers:

mysqladmin -u root password newrootpassword


Again, it makes sense to start up the cluster nodes automatically in case of a system restart/failure.
Here are the ndbd init script and system startup links for that:

echo 'ndbd' > /etc/init.d/ndbd
chmod 755 /etc/init.d/ndbd
update-rc.d ndbd defaults


This completes our cluster installation process; next, let's test it.



################################################################################################
Test:

On Cluster Management Server, run the Cluster Management Client:

ndb_mgm

It will take you to the ndb_mgm prompt:


-- NDB Cluster -- Management Client --
ndb_mgm>


Now type show on the prompt:

ndb_mgm> show;


You should see an output similar to this:

ndb_mgm> show;
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.52.129 (Version: version number, Nodegroup: 0, Master)
id=3 @192.168.52.130 (Version: version number, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.52.128 (Version: version number)

[mysqld(API)] 2 node(s)
id=4 @192.168.52.129 (Version: version number)
id=5 @192.168.52.130 (Version: version number)

ndb_mgm>



We should see our data nodes connected in the previous screen. Now type quit to close the Management client:

ndb_mgm>quit;



Test the Cluster:

Now, let's create a Test database on Server2 (192.168.52.129) and run some tests

On Server2:

mysql -u root -p
CREATE DATABASE testdb;
USE testdb;
CREATE TABLE tblCustomer (ID INT) ENGINE=NDBCLUSTER;
INSERT INTO tblCustomer VALUES (1);
SELECT * FROM tblCustomer;
quit;

Pay attention to the CREATE TABLE statement:
we must specify ENGINE=NDBCLUSTER for all tables that we want to be clustered. As stated earlier,
MySQL Cluster only supports the NDB engine for clustering, so if you use any other engine, the table simply won't get clustered.

The result of the SELECT statement would be:

mysql> SELECT * FROM tblCustomer;
+------+
| ID |
+------+
| 1 |
+------+
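
To confirm that the table really is clustered, you can check which engine it uses (a quick sketch; run it on either data node):

-- the ENGINE column should show ndbcluster for clustered tables
SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = 'testdb'
  AND table_name = 'tblCustomer';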


Since clustering in MySQL is at the table level, not at the database level, we would have to create the database separately on
Server3 (192.168.52.130) as well, but afterwards tblCustomer would be replicated with all its data (since the engine is NDBCLUSTER):


On Server3.......192.168.52.130

mysql -u root -p
CREATE DATABASE testdb;
USE testdb;
SELECT * FROM tblCustomer;

Now, if we insert a row of data on Server3, it should be replicated back to Server2:192.168.52.129

INSERT INTO tblCustomer VALUES (2);


If we run a SELECT query on Server2, here is what we should see:


mysql> SELECT * FROM tblCustomer;

+------+
| ID |
+------+
| 1 |
| 2 |
+------+


Test Node shutdown:

Now run the following on Server2 (192.168.52.129) to test what happens if a node goes offline:


killall ndbd

and run this command to make sure that all ndbd processes have terminated:


ps aux | grep ndbd | grep -iv grep


If you still see any processes, run this again:

killall ndbd

Now, let's go to the management server, Server1 (192.168.52.128), and run the following to check the cluster status:


ndb_mgm

On the ndb_mgm console, run:

show;

It should bring up an output similar to the following:

ndb_mgm> show;
Connected to Management Server at: localhost:1186

Cluster Configuration
---------------------

[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from 192.168.52.129)
id=3 @192.168.52.130 (Version: -----, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.52.128 (Version: -----)

[mysqld(API)] 2 node(s)
id=4 @192.168.52.129 (Version: --------)
id=5 @192.168.52.130 (Version: --------)

ndb_mgm>

You see, Server2 is not connected anymore.

Type quit; to leave the ndb_mgm management console. Now, let's check on Server3
whether our database is still up and we can make connections to it:

mysql -u root -p
USE testdb;
SELECT * FROM tblCustomer;
quit;

It should bring up the following result set:


mysql> SELECT * FROM tblCustomer;

+------+
| ID |
+------+
| 1 |
| 2 |
+------+


Now, let's start MySQL on Server2 again by issuing the following command:

ndbd


How to Restart MySQL Cluster:

In managing a production MySQL environment, or any other transactional database environment,
there are times when we have to restart or shut down our systems. So, let's see how we would shut down our MySQL Cluster:

On Server1, open the management console:

ndb_mgm

then type:

shutdown;

it would bring up an output like this:

ndb_mgm> shutdown;
Node 3: Cluster shutdown initiated
Node 2: Node shutdown completed.
2 NDB Cluster node(s) have shutdown.
NDB Cluster management server shutdown.
ndb_mgm>

This means that the cluster nodes Server2 and Server3 and also the Management node (Server1) have shut down.

To leave the Management console, run:

quit;


To start the cluster management server again, run the following (on Server1, Management Server):

ndb_mgmd -f /var/lib/mysql-cluster/config.ini


and on Server2 and Server3, run the following:

ndbd

In case /var/lib/mysql-cluster/config.ini on the Management Server has changed, you should run the following instead:


ndbd --initial


You can go back to the Management node and verify that the cluster started OK, without any errors:

ndb_mgm

on the Management console run the following:

show;

This should bring up the cluster configuration.