Hello, this is nakada.
This time I'll try building DRBD9 on AWS, and configure two of DRBD9's new features: the DRBD pool and the DRBD client.
Environment
This setup uses 3 servers:
- Storage nodes × 2
- DRBD client server × 1
- An EBS volume for DRBD is attached to each node, separate from the OS volume
- OS: CentOS 7
Creating a template
First, create an AMI to use as a template. Launch an EC2 instance from a CentOS 7 AMI and install the packages needed to build DRBD.
$ sudo yum install kernel-devel gcc make flex libtool pygobject2 help2man libxslt wget lvm2
$ sudo yum update
$ sudo shutdown -r now
Next, download the DRBD sources.
$ mkdir drbd
$ cd drbd
$ wget http://oss.linbit.com/drbd/9.0/drbd-9.0.0.tar.gz
$ wget http://oss.linbit.com/drbd/drbd-utils-8.9.4.tar.gz
$ wget http://oss.linbit.com/drbdmanage/drbdmanage-0.50.tar.gz
Installing drbd
$ tar xvzf drbd-9.0.0.tar.gz
$ cd drbd-9.0.0
$ make
$ sudo make install
$ sudo lsmod | grep drbd
$ sudo /sbin/modprobe drbd
$ sudo lsmod | grep drbd
Installing drbd-utils
$ tar xvzf drbd-utils-8.9.4.tar.gz
$ cd drbd-utils-8.9.4
$ ./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc
$ make
$ sudo make install
Installing drbdmanage
$ tar xvzf drbdmanage-0.50.tar.gz
$ cd drbdmanage-0.50
$ sudo python setup.py install
Once the installation is done, create an image from this instance and launch three servers from it.
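For reference, the "create an image, then launch three servers" step can also be scripted with the AWS CLI. The sketch below only prints the commands for review, since every ID in it is a placeholder and not a value from this article:

```shell
# Hypothetical sketch: all instance/AMI IDs below are placeholders.
TEMPLATE_INSTANCE="i-xxxxxxxxxxxxxxxxx"   # the template instance prepared above

# Create the AMI from the template instance:
CREATE_IMAGE="aws ec2 create-image --instance-id ${TEMPLATE_INSTANCE} --name drbd9-template"
echo "${CREATE_IMAGE}"

# After the AMI becomes available, launch the three servers from it:
for name in drbd-storage01 drbd-storage02 drbd-app01; do
    echo "aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro --count 1  # ${name}"
done
```

Remove the `echo` wrappers (and substitute real IDs) to actually run the commands.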
On each of the copied servers, pin the hostname and the private IP address.
This article uses the following hostnames:
- drbd-storage01
- drbd-storage02
- drbd-app01
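A minimal sketch of the hostname/IP pinning, assuming the private IPs used later in this article (192.168.1.11/.12/.15). The entries are built in a variable first so they can be reviewed before touching `/etc/hosts`:

```shell
# Hosts entries for the three nodes (IPs taken from the drbdmanage commands
# later in this article):
HOSTS_ENTRIES="192.168.1.11 drbd-storage01
192.168.1.12 drbd-storage02
192.168.1.15 drbd-app01"
echo "${HOSTS_ENTRIES}"

# To apply on each node (requires root):
#   echo "${HOSTS_ENTRIES}" | sudo tee -a /etc/hosts
# And pin the hostname, e.g. on the first storage node:
#   sudo hostnamectl set-hostname drbd-storage01
```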
Also, for this article, drbd-storage01 is set up to log in to each of the other servers over ssh with key authentication.
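This matters because drbdmanage executes the join step on each new node over ssh. A minimal sketch of the key distribution (the loop only prints the commands; drop the `echo` capture to actually run them):

```shell
# Run once on drbd-storage01 beforehand to generate a key pair:
#   ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Commands to push the public key to each peer node:
SSH_SETUP=$(for node in drbd-storage02 drbd-app01; do
    echo "ssh-copy-id root@${node}"
done)
echo "${SSH_SETUP}"
```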
Configuring DRBD
1. Create the volume group used by the DRBD pool
[root@drbd-storage01 ~]# vgcreate drbdpool /dev/xvdb
  Physical volume "/dev/xvdb" successfully created
  Volume group "drbdpool" successfully created
Create the same volume group on the other machines as well.
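With the ssh keys set up earlier, the second storage node can be prepared remotely from drbd-storage01. The sketch below prints the command for review (remove the outer `echo` to run it; `/dev/xvdb` is the EBS device used in this article):

```shell
# The same "drbdpool" VG must exist on every storage node before drbdmanage
# is initialized.
REMOTE_CMD="vgcreate drbdpool /dev/xvdb"
echo "ssh root@drbd-storage02 '${REMOTE_CMD}'"
```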
Initialize drbdmanage.
[root@drbd-storage01 ~]# drbdmanage init 192.168.1.11

You are going to initalize a new drbdmanage cluster.
CAUTION! Note that:
  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation
    that still exist on this system will no longer be managed by drbdmanage

Confirm:

  yes/no: yes
  Failed to find logical volume "drbdpool/.drbdctrl_0"
  Failed to find logical volume "drbdpool/.drbdctrl_1"
  Logical volume ".drbdctrl_0" created.
  Logical volume ".drbdctrl_1" created.
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
empty drbdmanage control volume initialized.
empty drbdmanage control volume initialized.
Operation completed successfully
2. Add drbd-storage02 to the cluster
[root@drbd-storage01 ~]# drbdmanage new-node drbd-storage02 192.168.1.12
Operation completed successfully
Operation completed successfully
Executing join command using ssh.
IMPORTANT: The output you see comes from drbd-storage02
IMPORTANT: Your input is executed on drbd-storage02

You are going to join an existing drbdmanage cluster.
CAUTION! Note that:
  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation
    that still exist on this system will no longer be managed by drbdmanage

Confirm:

  yes/no: yes
  Failed to find logical volume "drbdpool/.drbdctrl_0"
  Failed to find logical volume "drbdpool/.drbdctrl_1"
  Logical volume ".drbdctrl_0" created.
  Logical volume ".drbdctrl_1" created.
NOT initializing bitmap
initializing activity log
Writing meta data...
New drbd meta data block successfully created.
NOT initializing bitmap
initializing activity log
Writing meta data...
New drbd meta data block successfully created.
Operation completed successfully

[root@drbd-storage01 ~]# drbdadm statuts
drbdadm: Unknown command 'statuts'
[root@drbd-storage01 ~]# drbdadm status

  --== Thank you for participating in the global usage survey ==--
The server's response is:

you are the 447th user to install this version
.drbdctrl role:Secondary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  drbd-storage02 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate

[root@drbd-storage01 ~]# drbdmanage list-nodes
+----------------------------------------------------------+
| Name           | Pool Size | Pool Free | Site |    State |
+----------------------------------------------------------+
| drbd-storage01 |      8188 |      8180 |  N/A |       ok |
| drbd-storage02 |      8188 |      8180 |  N/A |       ok |
+----------------------------------------------------------+
[root@drbd-storage01 ~]# drbdmanage list-volumes
No resources defined
3. Create a volume
[root@drbd-storage01 ~]# drbdmanage new-volume appdata 3GB --deploy 2
Operation completed successfully
Operation completed successfully
[root@drbd-storage01 ~]# drbdmanage list-volumes
+--------------------------------------------+
| Name    | Vol ID | Size | Minor |    State |
+--------------------------------------------+
| appdata |      0 | 2861 |   100 |       ok |
+--------------------------------------------+
[root@drbd-storage01 ~]# drbdmanage list-nodes
+----------------------------------------------------------+
| Name           | Pool Size | Pool Free | Site |    State |
+----------------------------------------------------------+
| drbd-storage01 |      8188 |      5316 |  N/A |       ok |
| drbd-storage02 |      8188 |      5316 |  N/A |       ok |
+----------------------------------------------------------+
4. Connect a DRBD client
[root@drbd-storage01 ~]# drbdmanage new-node -s drbd-app01 192.168.1.15
Operation completed successfully
Operation completed successfully
Executing join command using ssh.
IMPORTANT: The output you see comes from drbd-app01
IMPORTANT: Your input is executed on drbd-app01

You are going to join an existing drbdmanage cluster.
CAUTION! Note that:
  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation
    that still exist on this system will no longer be managed by drbdmanage

Confirm:

  yes/no: yes
  Failed to find logical volume "drbdpool/.drbdctrl_0"
  Failed to find logical volume "drbdpool/.drbdctrl_1"
  Logical volume ".drbdctrl_0" created.
  Logical volume ".drbdctrl_1" created.
NOT initializing bitmap
initializing activity log
Writing meta data...
New drbd meta data block successfully created.
NOT initializing bitmap
initializing activity log
Writing meta data...
New drbd meta data block successfully created.
Operation completed successfully
[root@drbd-storage01 ~]#
Because the node joined in client mode, it shows up as "unknown" / "no storage".
[root@drbd-storage01 ~]# drbdmanage list-nodes
+------------------------------------------------------------+
| Name           | Pool Size | Pool Free | Site |      State |
+------------------------------------------------------------+
| drbd-app01     |   unknown |   unknown |  N/A | no storage |
| drbd-storage01 |      8188 |      5316 |  N/A |         ok |
| drbd-storage02 |      8188 |      5316 |  N/A |         ok |
+------------------------------------------------------------+
[root@drbd-storage01 ~]# drbdmanage assign --client appdata drbd-app01
Operation completed successfully
[root@drbd-storage01 ~]# drbdmanage list-assignments
+------------------------------------------------+
| Node           | Resource | Vol ID |     State |
+------------------------------------------------+
| drbd-app01     | appdata  |      * |    client |
| drbd-storage01 | appdata  |      * |        ok |
| drbd-storage02 | appdata  |      * |        ok |
+------------------------------------------------+

[root@drbd-app01 ~]# drbdadm status

  --== Thank you for participating in the global usage survey ==--
The server's response is:

you are the 449th user to install this version
.drbdctrl role:Secondary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  drbd-storage01 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate
  drbd-storage02 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate

appdata role:Primary disk:Diskless
  drbd-storage01 role:Secondary peer-disk:UpToDate
  drbd-storage02 role:Secondary peer-disk:UpToDate

[root@drbd-app01 ~]# drbdmanage list-nodes
+------------------------------------------------------------+
| Name           | Pool Size | Pool Free | Site |      State |
+------------------------------------------------------------+
| drbd-app01     |   unknown |   unknown |  N/A | no storage |
| drbd-storage01 |      8188 |      5316 |  N/A |         ok |
| drbd-storage02 |      8188 |      5316 |  N/A |         ok |
+------------------------------------------------------------+
[root@drbd-app01 ~]# ll /dev/drbd
drbd/     drbd0     drbd1     drbd100   drbdpool/
[root@drbd-app01 ~]# mkfs.xfs /dev/drbd100
meta-data=/dev/drbd100           isize=256    agcount=4, agsize=183106 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=732422, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@drbd-app01 ~]# mount /dev/drbd100 /data
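One note on the mount step above: `mount` fails if the target directory does not exist, so on a fresh server the mount point has to be created first. A minimal sketch of the full sequence on drbd-app01, printed here for review since it needs root and the DRBD device:

```shell
# Device and mount point used in this article:
DEVICE="/dev/drbd100"
MOUNT_POINT="/data"

# Print the commands in order; run them as root on drbd-app01 instead.
for cmd in "mkdir -p ${MOUNT_POINT}" "mkfs.xfs ${DEVICE}" "mount ${DEVICE} ${MOUNT_POINT}"; do
    echo "${cmd}"
done
```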
Confirm that the client has become Primary.
[root@drbd-storage01 ~]# drbdadm status
.drbdctrl role:Secondary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  drbd-app01 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate
  drbd-storage02 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate

appdata role:Secondary disk:UpToDate
  drbd-app01 role:Primary peer-disk:Diskless
  drbd-storage02 role:Secondary peer-disk:UpToDate

[root@drbd-app01 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      8.0G  1.4G  6.7G  17% /
devtmpfs        480M     0  480M   0% /dev
tmpfs           497M     0  497M   0% /dev/shm
tmpfs           497M   13M  484M   3% /run
tmpfs           497M     0  497M   0% /sys/fs/cgroup
/dev/drbd100    2.8G   33M  2.8G   2% /data
[root@drbd-app01 ~]# cd /data/
[root@drbd-app01 data]# ll
total 0
[root@drbd-app01 data]# pwd
/data
[root@drbd-app01 data]# touch test.txt
[root@drbd-app01 data]# echo "test" > test.txt
[root@drbd-app01 data]# cat test.txt
test
Next time I'll experiment with this setup some more.