
QuantriNet, a forum for sharing networking, internet, and information-security tips for Vietnamese IT professionals, hoping to be a useful resource for the community (http://quantrinet.com/forum/index.php)
-   Linux Applications (http://quantrinet.com/forum/forumdisplay.php?f=160)
-   -   Replacing a GlusterFS Server: Best Practice (http://quantrinet.com/forum/showthread.php?t=9743)

hoctinhoc 29-04-2015 03:53 PM

Replacing a GlusterFS Server: Best Practice

Posted by Joe Julian 2 years, 6 months ago



Last month, I received two new servers to replace two of our three (replica 3) GlusterFS servers. My first inclination was to just down the server, move the hard drives into the new server, re-install the OS (moving from 32 bit to 64 bit), and voila, done deal. Probably would have been okay if I hadn't used a kickstart file that formatted all the drives. Oops. Since the drives were now blank, I decided to just put it in place, using the same gfid and let it self-heal everything back over.


This idea sucked. I have 15 volumes and 4 bricks per server. Self-healing 60 bricks brought the remaining 32-bit server to its knees (and I filed multiple bugs against 3.3.0, including that the self-heal load doesn't balance between the sane servers). After a day (luckily I don't have that much data) of having everyone in the company mad at me, the heal was completed and I was a bit wiser.
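For reference, heal progress can be checked per volume from the CLI while this is going on. The two lines below are a minimal sketch, not from the original post; the volume name is a placeholder.

# Show the entries still pending self-heal on one volume; repeat per volume.
volname=home    # placeholder: substitute your actual volume name
gluster volume heal ${volname} info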


Today I installed the other new server. I installed CentOS 6.3, created the LVs (I use LVM to partition up the disks, both to make resizing volumes easier should the need arise and to let me take snapshots before any major changes), and added one new hard drive (my drives aren't that old; no need to replace them all).
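The post doesn't show the LVM work itself. A minimal sketch of turning the new disk into a brick LV might look like the following; the device name, volume group name, and LV size are assumptions, and only the mount point matches the brick path used further down.

# Assumed device /dev/sdb, VG name "gluster", and LV size; adjust to your setup.
pvcreate /dev/sdb                           # put the new disk under LVM control
vgcreate gluster /dev/sdb                   # one VG to hold the brick LVs
lvcreate -L 500G -n home_a gluster          # leave free space in the VG for growth and snapshots
mkfs.xfs -i size=512 /dev/gluster/home_a    # XFS with 512-byte inodes, commonly recommended for bricks
mkdir -p /data/glusterfs/home/a
mount /dev/gluster/home_a /data/glusterfs/home/a

Leaving unallocated space in the volume group is what makes the later resizes (lvextend plus xfs_growfs) and pre-change snapshots (lvcreate -s) possible.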


I then added the new server to the trusted pool and used replace-brick to migrate one brick at a time to the new server. I also changed my placement of bricks to fit our newer best practices.


# Old and new servers and brick paths; set ${volname} to the volume
# being migrated before running the commands below.
oldserver=ewcs4
newserver=ewcs10
oldbrickpath=/var/spool/glusterfs/a_home
newbrickpath=/data/glusterfs/home/a

gluster peer probe $newserver
gluster volume replace-brick ${volname} ${oldserver}:${oldbrickpath} ${newserver}:${newbrickpath} start

I monitored the migration.


watch gluster volume replace-brick ${volname} ${oldserver}:${oldbrickpath} ${newserver}:${newbrickpath} status

Then committed the change after all the files were finished moving.


gluster volume replace-brick ${volname} ${oldserver}:${oldbrickpath} ${newserver}:${newbrickpath} commit

Repeat as necessary.
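Since the same start/status/commit sequence repeats for every brick, one way to script a single move end to end is sketched below. The grep pattern in the until-loop is an assumption about the status output wording and may need adjusting for your release, and all three arguments are operator-supplied; the example values in the comments come from the brick used above.

#!/bin/bash
# Hedged helper: move one brick, wait for the migration to finish, then commit.
volname=$1     # e.g. home
oldbrick=$2    # e.g. ewcs4:/var/spool/glusterfs/a_home
newbrick=$3    # e.g. ewcs10:/data/glusterfs/home/a

gluster volume replace-brick ${volname} ${oldbrick} ${newbrick} start

# Poll until the status output reports completion (wording may vary by version).
until gluster volume replace-brick ${volname} ${oldbrick} ${newbrick} status | grep -qi complete
do
    sleep 60
done

gluster volume replace-brick ${volname} ${oldbrick} ${newbrick} commit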


As for performance, it met my requirements: nobody called or emailed me to say that anything wasn't working or was too slow. My VMs continued without interruption, as did MySQL (both hosted on their own volumes). As long as nobody noticed, I'm happy.

