GFS mini-HOWTO

2009.07.21 11:13

Step 1: iSCSI Setup (Using tgtd)

1. Configure the iSCSI target (e.g. gfs001, gfs002)

  • Install the iSCSI target daemon

># yum -y install scsi-target-utils

  • Initialize the hard drives (e.g. /dev/sda ~ /dev/sdd)

Where are the hard drives?

># dmesg | grep -i '^sd.:'
sda: Write Protect is off
...
sdb: Write Protect is off
...
sdb: Mode Sense: b3 00 10 08
sdc: Write Protect is off
...
sdc: Mode Sense: b3 00 10 08
sdd: Write Protect is off
...
sdd: Mode Sense: b3 00 10 08

Set up the disk partitions
METHOD 1: Using the fdisk command
># fdisk /dev/sda
> d (only if an existing partition needs to be deleted)
> n -> Enter -> Enter
> w
># fdisk /dev/sdb
...
># fdisk /dev/sdc
...
># fdisk /dev/sdd
...

METHOD 2: Using gparted (GUI)
># rpm -ivh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-3.noarch.rpm
># yum install gparted
># gparted

  • Configure the iSCSI target daemon

Start the tgtd daemon
># service tgtd start
># chkconfig tgtd on

Create and export iSCSI target devices
># tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2009-03.kr.re.etri.vine:disk001
># tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sda3
># tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

># tgtadm --lld iscsi --op new --mode target --tid 2 -T iqn.2009-03.kr.re.etri.vine:disk002
># tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 -b /dev/sdb1
># tgtadm --lld iscsi --op bind --mode target --tid 2 -I ALL

># tgtadm --lld iscsi --op new --mode target --tid 3 -T iqn.2009-03.kr.re.etri.vine:disk003
># tgtadm --lld iscsi --op new --mode logicalunit --tid 3 --lun 1 -b /dev/sdc1
># tgtadm --lld iscsi --op bind --mode target --tid 3 -I ALL

># tgtadm --lld iscsi --op new --mode target --tid 4 -T iqn.2009-03.kr.re.etri.vine:disk004
># tgtadm --lld iscsi --op new --mode logicalunit --tid 4 --lun 1 -b /dev/sdd1
># tgtadm --lld iscsi --op bind --mode target --tid 4 -I ALL

NOTE: Add the tgtadm commands above to /etc/rc.local so that the targets
are recreated after a reboot (tgtd does not persist them on its own).
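
To confirm the targets and LUNs are exported as intended, tgtadm can list the current configuration:
># tgtadm --lld iscsi --op show --mode target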

  • Open the iSCSI port in the iptables firewall

># vi /etc/sysconfig/iptables
...
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 3260 -j ACCEPT
...
># service iptables restart

2. Configure the iSCSI initiator (e.g. nodeXXX)

  • Install the iSCSI initiator daemon

># yum install iscsi-initiator-utils

  • Configure the iSCSI initiator daemon

Set the initiator name and alias (each node gets its own pair; e.g. on node001)
># vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2009-02.kr.re.etri.glory:node001
InitiatorAlias=node001

and on node002:
InitiatorName=iqn.2009-02.kr.re.etri.glory:node002
InitiatorAlias=node002

Start the service
># chkconfig iscsi on
># service iscsi start

Discover the iSCSI targets
># iscsiadm -m discovery -t sendtargets -p 192.168.13.51:3260
># iscsiadm -m discovery -t sendtargets -p 192.168.13.52:3260

Log in (attaching the devices; the target names and portals must match
the ones exported in Step 1)
># iscsiadm -m node -T iqn.2009-03.kr.re.etri.vine:disk001 -p 192.168.13.51:3260 -l
># iscsiadm -m node -T iqn.2009-03.kr.re.etri.vine:disk002 -p 192.168.13.52:3260 -l
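
To make the logins survive a reboot, the node records can be marked for automatic login (one way to do it; node.startup can also be set globally in /etc/iscsi/iscsid.conf):
># iscsiadm -m node -T iqn.2009-03.kr.re.etri.vine:disk001 -p 192.168.13.51:3260 --op update -n node.startup -v automatic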

Verify the imported devices
># fdisk -l
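
The active sessions can also be checked from the initiator side:
># iscsiadm -m session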

Step 2: Cluster Setup

Method 1: Using system-config-cluster (recommended)

  1. Create the cluster.conf file with system-config-cluster
    (creating cluster.conf in a text editor such as vi and distributing
    it to the nodes works just as well)

># cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster alias="VINEU_Cluster" config_version="1" name="VINEU_Cluster">
    <fence_daemon post_fail_delay="0" post_join_delay="3"/>
    <clusternodes>
        <clusternode name="192.168.13.16" nodeid="1" votes="1"/>
        <clusternode name="192.168.13.15" nodeid="2" votes="1"/>
        <clusternode name="192.168.13.14" nodeid="3" votes="1"/>
        <clusternode name="192.168.13.13" nodeid="4" votes="1"/>
        <clusternode name="192.168.13.12" nodeid="5" votes="1"/>
        <clusternode name="192.168.13.11" nodeid="6" votes="1"/>
        <clusternode name="192.168.13.17" nodeid="7" votes="1"/>
    </clusternodes>
    <cman/>
    <fencedevices/>
    <rm/>
</cluster>

  2. Distribute the file to every nodeXXX (see: pssh/pscp - ParallelSSH)

># pscp -h /root/hosts.txt /root/cluster.conf /etc/cluster/cluster.conf
># pssh -h /root/hosts.txt cat /etc/cluster/cluster.conf
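
pssh and pscp read the target hosts from a plain text file, one host per line; a minimal sketch of /root/hosts.txt, assuming the node names used later in this guide:
># cat /root/hosts.txt
node001
node002
node003
node004
node005
node006
node007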

Method 2: Using Luci (web interface)
On nodemgmt:

># yum -y install luci

Initialize admin password
># luci_admin init

Start luci daemon
># chkconfig luci on
># service luci restart

># vi /etc/sysconfig/iptables
...
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 8084 -j ACCEPT
...

Disable SELinux and reboot
># vi /etc/selinux/config
SELINUX=disabled
># /sbin/reboot

Set the release string (presumably so luci recognizes the OS on rebranded distributions)
># vi /etc/redhat-release
Red Hat Enterprise Linux Server release 5 (Tikanga)

On nodeXXX:

  1. Install and configure ricci & httpd

># pssh -h /root/hosts.txt -P 'yum -y install ricci httpd'

Start the daemons
># chkconfig ricci on
># chkconfig httpd on
># service ricci start
># service httpd start

  2. Open the cluster ports in iptables (nodeXXX)

># vi /etc/sysconfig/iptables
...
# cman (cluster manager)
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 5404 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 5405 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 6809 -j ACCEPT

# ricci (conga remote agent)
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 11111 -j ACCEPT

# gnbd (global network block device)
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 14567 -j ACCEPT

# dlm (distributed lock manager)
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 21064 -j ACCEPT

# rgmanager (high-availability service management)
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 41966 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 41967 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 41968 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 41969 -j ACCEPT

# ccsd (cluster configuration system daemon)
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 50006 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 50007 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 50008 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 50009 -j ACCEPT
...

># service iptables restart

  3. Build the cluster in Luci (access https://192.168.12.1:8084)

See the Red Hat Cluster Guide for the web UI steps.

Step 3: GFS Setup

># pssh -h /root/hosts.txt -P yum -y install gfs2-utils gfs-utils \
rgmanager kmod-gfs2-xen kmod-gfs-xen lvm2-cluster

NOTE: change the locking type permanently (type 3 enables cluster-aware locking for clvmd)
># vi /etc/lvm/lvm.conf
...
locking_type = 3
...

NOTE: to change the locking type without a reboot, make sure the clvmd
service is running, then enable clustered locking:
># service clvmd status
># lvmconf --enable-cluster
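
Either way, the resulting locking type can be verified afterwards:
># grep locking_type /etc/lvm/lvm.conf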

See what we have ...
># fdisk -l

># fdisk /dev/sdb or fdisk /dev/sdc ...
...
Command (m for help): n (add a new partition)
Command action
e extended
p primary partition (1-4)
p (choose primary partition)
Partition number (1-4): 1
First cylinder (##-##, default #): Press Enter
Using default value #
Last cylinder or +size or +sizeM or +sizeK (#-##, default ##): Press Enter

Using default value ##

Command (m for help): t (change a partition's system id)
Selected partition 1
Hex code (type L to list codes): 8e (choose the Linux LVM partition type)
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w (write table to disk and exit)
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22:
Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
># partprobe

># pvcreate /dev/sdb1 or pvcreate /dev/sdc1 ...
># pvdisplay

In case it needs to be removed:
># pvremove /dev/sdb1

># vgcreate VolGroupGFS /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 ...
># vgdisplay
># vgremove VolGroupGFS

># lvcreate -L 100G -n GFS VolGroupGFS
># lvdisplay
># lvremove /dev/VolGroupGFS/GFS

># gfs_mkfs -p lock_dlm -t VINEU_Cluster:GFS -j 10 /dev/VolGroupGFS/GFS
(the lock table name before the colon must match the cluster name in
cluster.conf; -j sets the number of journals, one per node that will
mount the filesystem)
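
To double-check the lock table written to the superblock, gfs_tool can print it back:
># gfs_tool sb /dev/VolGroupGFS/GFS table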

Restart iscsi on all nodeXXX
># pssh -h /root/hosts.txt -P service iscsi restart

Step 4: Start/Shutdown cluster services

  1. Start the cluster services (nodeXXX) (see: cssh - ClusterSSH)

># cat /etc/clusters
node node001 node002 node003 node004 node005 node006 node007

># cssh node &
># cd /etc/init.d
># ./cman start
># ./clvmd start
># ./gfs start
># ./rgmanager start
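
To confirm that every node has joined, clustat (shipped with rgmanager) prints the member and service status:
># clustat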

Mount the volume on all nodeXXX
># mkdir /mnt/vine_data
># mount /dev/VolGroupGFS/GFS /mnt/vine_data
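
To mount the volume automatically at boot, an fstab entry can be added on each node; the gfs init script mounts gfs entries from /etc/fstab when it starts (a sketch, assuming the mount point above):
/dev/VolGroupGFS/GFS  /mnt/vine_data  gfs  defaults  0 0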

  2. Shut down the cluster services

># cssh node &
># cd /etc/init.d
># umount /mnt/vine_data
># ./rgmanager stop
># ./gfs stop
># ./clvmd stop
># ./cman stop

Troubleshooting

  1. If the cman daemon will not stop, kill the openais daemon first.

># service openais stop
># service cman stop
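
When a stop hangs, checking the membership and quorum state first can help:
># cman_tool status
># cman_tool nodes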

Q&A

  1. Q: What is needed to add a new physical node?
    A:

    1. Install and configure the iSCSI initiator
    2. Install the cluster packages
    3. Start the cluster daemons in the right order

(NOTE: to remove a physical node, run the same steps in reverse order)

  2. Q: How do I increase the logical volume size?
    A:

># lvextend -L +100G /dev/VolGroupGFS/GFS
># gfs_grow -v /mnt/vine_data
># df -h
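
Note that only as many nodes can mount the filesystem as there are journals (10 in this guide); if more are needed, journals can be added with gfs_jadd after extending the volume (e.g. two more journals):
># gfs_jadd -j 2 /mnt/vine_data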

  3. Q: I'm confused about the port numbers.
    A:

IP Port #             Protocol   Component
5404, 5405            UDP        cman
11111                 TCP        ricci
14567                 TCP        gnbd
16851                 TCP        modclusterd
21064                 TCP        dlm
41966~41969           TCP        rgmanager
50006, 50008, 50009   TCP        ccsd
50007                 UDP        ccsd

Tips

  1. Luci does not install properly on GINUX OS 2.2, so on Ginux it is
    better to set up GFS using system-config-cluster.
