Wise people learn when they can; fools learn when they must - Arthur Wellesley

Tuesday, 7 October 2014

Solaris Volume Manager -5 [Hot Spare Pool / Hot Spare]

                     
                           SVM-5

What we will learn in the next few pages

·         HOT SPARE


So, what is a hot spare?

A hot spare allows automatic replacement of failed submirror / RAID-5 components, provided spare components are available and reserved.

HOT SPARE STATES:

There are 3 states:
A. Available
B. In-use
C. Broken

a.  Available:

Available hot spares are running and ready to accept data, but are not currently being written to or read from.


b.  In-use:

In-use hot spares are currently being written to and read from.

c.  Broken:

Broken hot spares are out of service.
A hot spare is placed in the broken state when an I/O error occurs.
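
If a hot spare ends up in the broken state, it can be returned to the available state once the underlying problem has been repaired. A minimal sketch, assuming c1t2d0s3 is the broken spare (not run in this session):

root@sol-test-2:>/# metahs -e c1t2d0s3    # re-enable the repaired hot spare; it returns to "available"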


·         The number of hot spare pools is limited to 1000 (existing pools can be listed at any time; see the sketch after this list).
·         Hot spare pools are named “hspnnn”, where “nnn” is a number in the range 000-999.
·         A metadevice cannot be configured as a hot spare.
·         Avoid using hot spares that are smaller than the submirror / RAID-5 components they may replace.
·         If all the hot spares are in use and a submirror fails due to errors, that portion of the mirror will no longer be replicated.
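
The pools defined on a system can be listed with metahs -i; a minimal sketch (output omitted, not from this session):

root@sol-test-2:>/# metahs -i             # status of every hot spare pool
root@sol-test-2:>/# metahs -i hsp100      # status of a single pool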

CREATE

root@sol-test-2:>/# metainit hsp100 c1t2d0s3
hsp100: Hotspare pool is setup

root@sol-test-2:>/# metainit hsp200 c1t2d0s4 c1t2d0s05
hsp200: Hotspare pool is setup


root@sol-test-2:>/# metastat -c
d23              p   10MB d2
d22              p   10MB d2
d21              p   10MB d2
    d2           r  298MB d11 d12 d13 d15
        d11      p  100MB c1t2d0s0
        d12      p  100MB c1t2d0s0
        d13      p  100MB c1t2d0s0
        d15      p  100MB c1t2d0s0
hsp200           h      - c1t2d0s4 c1t2d0s5
hsp100           h      - c1t2d0s3


Look, hsp100 and c1t2d0s3 are two different things:

hsp100 is the hot spare pool, and c1t2d0s3 is the hot spare.

So if we want to remove them, we must remove the spares first and then the pool.

HOTSPARE DELETE

root@sol-test-2:>/# metahs -d hsp200 c1t2d0s5
hsp200: Hotspare is deleted

root@sol-test-2:>/# metahs -d hsp200 c1t2d0s4
hsp200: Hotspare is deleted

POOL DELETE

root@sol-test-2:>/# metahs -d hsp200
hsp200: Hotspare pool is cleared

root@sol-test-2:>/# metastat -c
d23              p   10MB d2
d22              p   10MB d2
d21              p   10MB d2
    d2           r  298MB d11 d12 d13 d15
        d11      p  100MB c1t2d0s0
        d12      p  100MB c1t2d0s0
        d13      p  100MB c1t2d0s0
        d15      p  100MB c1t2d0s0
hsp100           h      - c1t2d0s3

HS POOL CREATE
root@sol-test-2:>/# metainit hsp399 c1t2d0s4 c1t2d0s05
hsp399: Hotspare pool is setup
HS DELETE
root@sol-test-2:>/# metahs -d hsp399 c1t2d0s4
hsp399: Hotspare is deleted


HS ADD TO EXISTING HS POOL
root@sol-test-2:>/# metahs -a hsp399 c1t2d0s4
hsp399: Hotspare is added

root@sol-test-2:>/# metastat -c
d23              p   10MB d2
d22              p   10MB d2
d21              p   10MB d2
    d2           r  298MB d11 d12 d13 d15
        d11      p  100MB c1t2d0s0
        d12      p  100MB c1t2d0s0
        d13      p  100MB c1t2d0s0
        d15      p  100MB c1t2d0s0
hsp399           h      - c1t2d0s5 c1t2d0s4
hsp100           h      - c1t2d0s3

=============================
root@sol-test-2:>/# metahs -d hsp100 c1t2d0s3
hsp100: Hotspare is deleted
root@sol-test-2:>/# metahs -d hsp100
hsp100: Hotspare pool is cleared
=============================   I did this to free c1t2d0s3

REPLACE THE HOTSPARE

root@sol-test-2:>/# metahs -r hsp399 c1t2d0s5 c1t2d0s3
hsp399: Hotspare c1t2d0s5 is replaced with c1t2d0s3

root@sol-test-2:>/# metahs -r <POOLNAME> <OLD SPARE> <NEW SPARE>
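
If the same slice is used as a spare in more than one pool, the keyword "all" can be given in place of the pool name so that the replacement happens in every pool containing the old spare. A sketch reusing the slices above (not run in this session):

root@sol-test-2:>/# metahs -r all c1t2d0s5 c1t2d0s3    # replace c1t2d0s5 with c1t2d0s3 in all pools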


==============================
root@sol-test-2:>/# metaclear -f d2
metaclear: sol-test-2: d2: metadevice in use

root@sol-test-2:>/# metaclear -f d21 d22 d23
d21: Soft Partition is cleared
d22: Soft Partition is cleared
d23: Soft Partition is cleared
root@sol-test-2:>/# metaclear -f d2
d2: RAID is cleared
root@sol-test-2:>/# metaclear -f d15 d13 d12 d11

==============================  removed the raid5

Created a new RAID-5 volume, d3:
root@sol-test-2:>/# metainit d3 -r c1t2d0s0 c1t2d0s1 c1t2d0s3
d3: RAID is setup

Created a hot spare pool:

root@sol-test-2:>/# metahs -a hsp199 c1t2d0s4 c1t2d0s5
hsp199: Hotspares are added
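
Note: creating hsp199 by itself does not protect d3; a hot spare pool only goes to work for a volume once it has been associated with that volume (metaparam -h, demonstrated below for d9). A minimal sketch of that step for d3, not actually run here:

root@sol-test-2:>/# metaparam -h hsp199 d3    # associate pool hsp199 with RAID-5 volume d3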

root@sol-test-2:>/# metainit d9 -r c1t3d0s0 c1t4d0s0 c1t5d0s0
d9: RAID is setup

root@sol-test-2:>/# metahs -a hsp009 c1t6d0s0
root@sol-test-2:>/# metahs -a hsp009 c1t8d0s0

ATTACHING HSP TO RAID VOLUME

root@sol-test-2:>/# metaparam -h hsp009 d9

root@sol-test-2:>/# metastat d9
d9: RAID
    State: Okay
    Hot spare pool: hsp009
    Interlace: 32 blocks
    Size: 8335360 blocks (4.0 GB)
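
The association can also be verified from the volume side with metaparam itself; a minimal sketch (output omitted, not run in this session):

root@sol-test-2:>/# metaparam d9    # display d9's parameters, including its hot spare pool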

root@sol-test-2:>/# newfs /dev/md/rdsk/d9
root@sol-test-2:>/# mount /dev/md/dsk/d9 /raidtest/

root@sol-test-2:>/raidtest# ls -l
-rw------T   1 root     root     104857600 Oct  2 05:58 f1
-rw------T   1 root     root     104857600 Oct  2 05:59 f2


Now I removed one disk from the RAID volume.

root@sol-test-2:>/raidtest# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 1563 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c1t1d0 <DEFAULT cyl 1564 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,1976@10/sd@1,0
       2. c1t2d0 <DEFAULT cyl 1563 alt 2 hd 255 sec 63>
          /pci@0,0/pci15ad,1976@10/sd@2,0
       3. c1t3d0 <DEFAULT cyl 1020 alt 2 hd 128 sec 32>
          /pci@0,0/pci15ad,1976@10/sd@3,0
       4. c1t4d0 <drive type unknown>
          /pci@0,0/pci15ad,1976@10/sd@4,0
       5. c1t5d0 <DEFAULT cyl 1020 alt 2 hd 128 sec 32>
          /pci@0,0/pci15ad,1976@10/sd@5,0
       6. c1t6d0 <DEFAULT cyl 1020 alt 2 hd 128 sec 32>
          /pci@0,0/pci15ad,1976@10/sd@6,0
       7. c1t8d0 <DEFAULT cyl 1020 alt 2 hd 128 sec 32>
          /pci@0,0/pci15ad,1976@10/sd@8,0
Specify disk (enter its number): ^C

Then rebooted the system…

root@sol-test-2:>/# metastat d9
d9: RAID
    State: Okay
    Hot spare pool: hsp009
    Interlace: 32 blocks
    Size: 8335360 blocks (4.0 GB)
Original device:
    Size: 8338752 blocks (4.0 GB)
        Device     Start Block  Dbase        State Reloc  Hot Spare
        c1t3d0s0       4426        No         Okay   Yes
        c1t4d0s0       4426        No         Okay   Yes c1t6d0s0
        c1t5d0s0       4426        No         Okay   Yes

root@sol-test-2:>/# cd /raidtest/
root@sol-test-2:>/raidtest# ls
f1          f2          lost+found

So… is it working?

Check for yourself:

root@sol-test-2:>/# metastat d9
d9: RAID
    State: Okay
    Hot spare pool: hsp009
    Interlace: 32 blocks
    Size: 8335360 blocks (4.0 GB)
Original device:
    Size: 8338752 blocks (4.0 GB)
        Device     Start Block  Dbase        State Reloc  Hot Spare
        c1t3d0s0       4426        No         Okay   Yes
        c1t4d0s0       4426        No         Okay   Yes c1t6d0s0
        c1t5d0s0       4426        No         Okay   Yes c1t8d0s0

root@sol-test-2:>/# ls -l /raidtest/
total 409872
-rw------T   1 root     root     104857600 Oct  2 05:58 f1
-rw------T   1 root     root     104857600 Oct  2 05:59 f2
drwx------   2 root     root        8192 Oct  2 05:56 lost+found

OK…
Now I have attached both disks again:
c1t4d0
c1t5d0


Now, let's get the hot spares back:



root@sol-test-2:>/# metareplace -e d9 c1t4d0s0
d9: device c1t4d0s0 is enabled
root@sol-test-2:>/# metareplace -e d9 c1t5d0s0
metareplace: sol-test-2: d9: resync in progress

*[metareplace replaces the faulty disk with a new one, and -e tells it to re-enable the component at the same location. metareplace automatically starts resynchronizing the new component with the rest of the RAID-5 volume.]
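
If the original slice cannot be brought back, the failed component can instead be moved to a different slice with the two-argument form of metareplace (old component, then new component). A sketch with a hypothetical replacement slice c1t9d0s0, not run in this session:

root@sol-test-2:>/# metareplace d9 c1t4d0s0 c1t9d0s0    # rebuild d9's data onto c1t9d0s0 instead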

root@sol-test-2:>/# metastat d9
d9: RAID
    State: Resyncing
    Hot spare pool: hsp009
    Interlace: 32 blocks
    Size: 8335360 blocks (4.0 GB)
Original device:
    Size: 8338752 blocks (4.0 GB)
        Device     Start Block  Dbase        State Reloc  Hot Spare
        c1t3d0s0       4426        No         Okay   Yes
        c1t4d0s0       4426        No    Resyncing   Yes c1t6d0s0
        c1t5d0s0       4426        No         Okay   Yes c1t8d0s0


root@sol-test-2:>/# metastat d9
d9: RAID
    State: Okay
    Hot spare pool: hsp009
    Interlace: 32 blocks
    Size: 8335360 blocks (4.0 GB)
Original device:
    Size: 8338752 blocks (4.0 GB)
        Device     Start Block  Dbase        State Reloc  Hot Spare
        c1t3d0s0       4426        No         Okay   Yes
        c1t4d0s0       4426        No         Okay   Yes c1t6d0s0
        c1t5d0s0       4426        No         Okay   Yes c1t8d0s0

root@sol-test-2:>/# metareplace -e d9 c1t5d0s0
d9: device c1t5d0s0 is enabled
root@sol-test-2:>/# metastat d9
d9: RAID
    State: Resyncing
    Hot spare pool: hsp009
    Interlace: 32 blocks
    Size: 8335360 blocks (4.0 GB)
Original device:
    Size: 8338752 blocks (4.0 GB)
        Device     Start Block  Dbase        State Reloc  Hot Spare
        c1t3d0s0       4426        No         Okay   Yes
        c1t4d0s0       4426        No         Okay   Yes c1t6d0s0
        c1t5d0s0       4426        No    Resyncing   Yes c1t8d0s0


We can see the status of our hot spares:

root@sol-test-2:>/# metastat hsp009
hsp009: 2 hot spares
        Device     Status      Length           Reloc
        c1t6d0s0   In use       4173824 blocks  Yes
        c1t8d0s0   In use       4173824 blocks  Yes
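
Once both resyncs finish, the spares should be released and drop back to the "Available" status. To tear down this test setup afterwards, the pool would first be detached from the volume and the spares removed before clearing the pool; a minimal sketch, assuming the spares are no longer in use (not run in this session):

root@sol-test-2:>/# metaparam -h none d9        # detach hsp009 from d9
root@sol-test-2:>/# metahs -d hsp009 c1t6d0s0   # remove the spares...
root@sol-test-2:>/# metahs -d hsp009 c1t8d0s0
root@sol-test-2:>/# metahs -d hsp009            # ...then clear the now-empty pool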



