Wise people learn when they can; fools learn when they must - Arthur Wellesley

Sunday, 28 December 2014

Solaris Practice-1 [zfs-Answers]


               PRACTICE WORK-SHEET-1-ANSWERS [zfs]

Answers start from question 11.

1.      What is zfs?
2.      List some benefits of zfs
3.      Compare ufs & zfs
4.      Why we should move to zfs?
5.      What is the basic requirement for zfs?
6.      Limitations of zfs
7.      What is “COW” in zfs?
8.      What is “dataset” and how many datasets zfs can support?
9.      What is zpool and how many datasets zpool can support?
10.  What is “vdev” in zfs and also define the types of vdevs?
11.  What is clone in zfs?


Clones can only be created from a snapshot.
Clones turn a snapshot into a separate read/write dataset.
A clone acts like any other dataset but initially consumes no extra space.
Clones can be used for testing without disturbing the original data.
A clone stays bound to the snapshot it was created from (a minimal sketch of the workflow follows).
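
A minimal sketch of the snapshot-to-clone workflow (dataset names here are only examples; the same steps are shown for real under questions 57-58 below):

# zfs snapshot zm1/data@snap1              [a clone can only be created from a snapshot]
# zfs clone zm1/data@snap1 zm1/dataclone   [writable dataset, initially consumes no extra space]
# zfs promote zm1/dataclone                [optional: reverse the clone/origin dependency]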

12.  Create a stripe volume “zstripe” with 2 disks                    [raid0]

root@sol-test-1:>/# zpool create zstripe c1t4d0 c1t5d0
root@sol-test-1:>/# zpool list
NAME      SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
zstripe  3.97G    80K  3.97G     0%  ONLINE  -
root@sol-test-1:>/# zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
zstripe    79K  3.91G    31K  /zstripe


13.  Create a mirrored volume “zmirror” with 2 disks                        [raid1]

root@sol-test-1:>/# zpool create zmirror mirror c1t4d0 c1t5d0
root@sol-test-1:>/# zpool list
NAME      SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
zmirror  1.98G  78.5K  1.98G     0%  ONLINE  -
root@sol-test-1:>/# zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
zmirror  77.5K  1.95G    31K  /zmirror

14.  Run “zpool list” and “zpool status” to verify the creation

root@sol-test-1:>/# zpool list
NAME      SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
zmirror  1.98G  78.5K  1.98G     0%  ONLINE  -

root@sol-test-1:>/# zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
zmirror  77.5K  1.95G    31K  /zmirror

15.  Add 2 disks to “zmirror” to make it raid1+0 volume.       [raid1+0]

root@sol-test-1:>/# zpool add zmirror mirror c1t6d0 c1t8d0
root@sol-test-1:>/# zpool list
NAME      SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
zmirror  3.97G   138K  3.97G     0%  ONLINE  -
root@sol-test-1:>/# zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
zmirror  92.5K  3.91G    31K  /zmirror
root@sol-test-1:>/# zpool status
  pool: zmirror
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zmirror     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t8d0  ONLINE       0     0     0

errors: No known data errors

16.  Destroy the “zstripe” & “zmirror”

root@sol-test-1:>/# zpool destroy zmirror

(zstripe had to be destroyed earlier, before its disks c1t4d0 and c1t5d0 could be reused for zmirror; the command is the same: zpool destroy zstripe.)

17.  Again using 4 disks create “zraid” pool with raid1+0       [raid1+0]

root@sol-test-1:>/# zpool create zraid mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t8d0
root@sol-test-1:>/# zpool status zraid
  pool: zraid
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zraid       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t8d0  ONLINE       0     0     0

errors: No known data errors
root@sol-test-1:>/# zpool list
NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
zraid  3.97G    86K  3.97G     0%  ONLINE  -
root@sol-test-1:>/# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
zraid  80.5K  3.91G    31K  /zraid

18.  Create a raidz1 volume “zm1” with 3 disks                       [raidz/raidz1]

root@sol-test-1:>/# zpool create zm1 raidz1 c1t4d0 c1t5d0 c1t6d0
root@sol-test-1:>/# zpool status
  pool: zm1
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zm1         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0

errors: No known data errors
root@sol-test-1:>/# zpool list
NAME   SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
zm1   5.94G   166K  5.94G     0%  ONLINE  -
root@sol-test-1:>/# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
zm1    109K  3.89G  34.6K  /zm1

19.  Create a raidz2 volume “zm2” with 6 disks                       [raidz2]

root@sol-test-1:>/# zpool create zm2 raidz2 c1t4d0 c1t5d0 c1t6d0 c1t8d0 c1t9d0 c1t10d0
root@sol-test-1:>/# zpool status
  pool: zm2
 state: ONLINE
 scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        zm2          ONLINE       0     0     0
          raidz2-0   ONLINE       0     0     0
            c1t4d0   ONLINE       0     0     0
            c1t5d0   ONLINE       0     0     0
            c1t6d0   ONLINE       0     0     0
            c1t8d0   ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0

errors: No known data errors
root@sol-test-1:>/# zpool list
NAME   SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
zm2   11.9G   177K  11.9G     0%  ONLINE  -
root@sol-test-1:>/# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
zm2    116K  7.79G  57.9K  /zm2

20.  Create a raidz3 volume “zm3” with 9 disks                       [raidz3]

root@sol-test-1:>/# zpool create zm3 raidz3 c1t4d0 c1t5d0 c1t6d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0

21.  List & define the stats in “zpool status” command

ONLINE        Normal operation
DEGRADED      Partial failure of a vdev
FAULTED       Device failure
OFFLINE       Device made unavailable by the administrator
REMOVED       Device removed while in use
UNAVAIL       Device inaccessible

22.  Add a spare disk to volume “zm1”

root@sol-test-1:>/# zpool add zm1 spare c1t8d0
root@sol-test-1:>/# zpool status
  pool: zm1
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zm1         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
        spares
          c1t8d0    AVAIL

errors: No known data errors

23.  Fire the command to check all the properties of zpool “zm1”

root@sol-test-1:>/# zpool get all zm1
NAME  PROPERTY       VALUE       SOURCE
zm1   size           5.94G       -
zm1   capacity       0%          -
zm1   altroot        -           default
zm1   health         ONLINE      -
zm1   guid           11830464581855624321  default
zm1   version        29          default
zm1   bootfs         -           default
zm1   delegation     on          default
zm1   autoreplace    off         default
zm1   cachefile      -           default
zm1   failmode       wait        default
zm1   listsnapshots  on          default
zm1   autoexpand     off         default
zm1   free           5.94G       -
zm1   allocated      270K        -
zm1   readonly       off         -

24.  Turn the autoreplace flag on for “zm1”

root@sol-test-1:>/# zpool set autoreplace=on zm1
root@sol-test-1:>/# zpool get all zm1 |grep autoreplace
zm1   autoreplace    on          local

25.  Remove one disk from “zm1” & reboot (remove it from the VM only; don’t pull a disk from a physical system)

root@sol-test-1:>/# zpool status zm1
  pool: zm1
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zm1         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
        spares
          c1t8d0    AVAIL

errors: No known data errors

*** We can remove any one of c1t4d0, c1t5d0 or c1t6d0 ***

After removing the disk and rebooting, we should run a scrub so that ZFS realizes that something went wrong:

root@sol-test-1:>/# zpool scrub zm1

root@sol-test-1:>/# zpool status zm1
  pool: zm1
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
 scan: scrub repaired 0 in 0h0m with 0 errors on Fri Dec 19 13:13:39 2014
config:

        NAME        STATE     READ WRITE CKSUM
        zm1         DEGRADED     0     0     0
          raidz1-0  DEGRADED     0     0     0
            c1t4d0  FAULTED      0     0     0  too many errors
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
        spares
          c1t8d0    AVAIL

errors: No known data errors

Check the status again after a few minutes.

26.  Now see the status of spare disk in ‘zpool status’ of zpool “zm1”

root@sol-test-1:>/# zpool status
  pool: zm1
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
 scan: resilvered 67.5K in 0h0m with 0 errors on Fri Dec 19 13:13:55 2014
config:

        NAME          STATE     READ WRITE CKSUM
        zm1           DEGRADED     0     0     0
          raidz1-0    DEGRADED     0     0     0
            spare-0   DEGRADED     0     0     0
              c1t4d0  FAULTED      0     0     0  too many errors
              c1t8d0  ONLINE       0     0     0
            c1t5d0    ONLINE       0     0     0
            c1t6d0    ONLINE       0     0     0
        spares
          c1t8d0      INUSE     currently in use

errors: No known data errors

27.  Now reattach the disk in the VM, replace it in “zm1” and see the status of the spare disk.

When we add a disk to the VM it goes back into the same vacant slot, but in a VM the system needs a reboot before the disk can be used.

I added one disk, it went to the same slot, and I also ran devfsadm, but:

root@sol-test-1:>/# echo |format |grep c1t4d0
       4. c1t4d0 <drive type unknown>

root@sol-test-1:>/# zpool replace -f zm1 c1t4d0
cannot replace c1t4d0 with c1t4d0: one or more devices is currently unavailable

On a physical server the above command would have worked, but in the VM it shows an error, so we need to reboot the system:

root@sol-test-1:>/# reboot -- -r
root@sol-test-1:>/# zpool status
  pool: zm1
 state: ONLINE
 scan: resilvered 65.5K in 0h0m with 0 errors on Fri Dec 19 13:27:44 2014
config:

        NAME        STATE     READ WRITE CKSUM
        zm1         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
        spares
          c1t8d0    AVAIL

errors: No known data errors

Here we can see that just after rebooting, all the disks are back in their correct places.

28.  Create a root mirror

c1t0d0 is my ZFS root disk, and I want to mirror it with c1t1d0.

root@sol-test-4:>/# fdisk -B /dev/rdsk/c1t1d0s2
WARNING: Device /dev/rdsk/c1t1d0s2:
The device does not appear to include absolute
sector 0 of the PHYSICAL disk (the normal location for an fdisk table).
Fdisk is normally used with the device that represents the entire fixed disk.
(For example, /dev/rdsk/c0d0p0 on x86 or /dev/rdsk/c0t5d0s2 on sparc).
Are you sure you want to continue? (y/n) y

root@sol-test-4:>/# prtvtoc /dev/rdsk/c1t0d0s2 |fmthard -s - /dev/rdsk/c1t1d0s2
fmthard:  New volume table of contents now in place.

root@sol-test-4:>/# zpool attach rpool c1t0d0s0 c1t1d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t1d0s0 overlaps with /dev/dsk/c1t1d0s2
root@sol-test-4:>/# zpool attach -f rpool c1t0d0s0 c1t1d0s0
Make sure to wait until resilver is done before rebooting.

root@sol-test-4:>/# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since Sat Dec 20 20:26:46 2014
    2.14G scanned out of 6.21G at 10.2M/s, 0h6m to go
    2.13G resilvered, 34.45% done
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0  (resilvering)

errors: No known data errors

root@sol-test-4:>/# zpool status
  pool: rpool
 state: ONLINE
 scan: resilvered 6.21G in 0h13m with 0 errors on Sat Dec 20 20:39:58 2014
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

errors: No known data errors

root@sol-test-4:>/# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)

root@sol-test-4:>/# ls -l /dev/dsk/c1t1d0s0
lrwxrwxrwx   1 root     root          46 Dec 20 19:19 /dev/dsk/c1t1d0s0 -> ../../devices/pci@0,0/pci15ad,1976@10/sd@1,0:a

root@sol-test-4:>/# eeprom altbootpath=/pci@0,0/pci15ad,1976@10/sd@1,0:a

root@sol-test-4:>/# cat /boot/solaris/bootenv.rc |grep altboot
setprop altbootpath '/pci@0,0/pci15ad,1976@10/sd@1,0:a'

29.  Create a snapshot for zpool “zm1”

root@sol-test-1:>/# zfs snapshot -r zm1@bkp
root@sol-test-1:>/# zfs list -t snapshot
NAME      USED  AVAIL  REFER  MOUNTPOINT
zm1@bkp      0      -  34.6K  -

30.  Create a file system “myfs” within “zm1”

root@sol-test-1:>/# zfs list -t snapshot
NAME      USED  AVAIL  REFER  MOUNTPOINT
zm1@bkp      0      -  34.6K  -
root@sol-test-1:>/# zfs create zm1/myfs
root@sol-test-1:>/# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
zm1        416K  3.89G  34.6K  /zm1
zm1@bkp       0      -  34.6K  -
zm1/myfs  34.6K  3.89G  34.6K  /zm1/myfs

31.  Take a snapshot of zm1/myfs, then create some files within zm1/myfs, and then restore from the snapshot

root@sol-test-1:>/# zfs snapshot -r zm1/myfs@bkp2
root@sol-test-1:>/# cd zm1/myfs/
root@sol-test-1:>/zm1/myfs# touch 1 2 3
root@sol-test-1:>/zm1/myfs# ls -l
total 3
-rw-r--r--   1 root     root           0 Dec 19 13:58 1
-rw-r--r--   1 root     root           0 Dec 19 13:58 2
-rw-r--r--   1 root     root           0 Dec 19 13:58 3

root@sol-test-1:>/# zfs rollback zm1/myfs@bkp2
root@sol-test-1:>/# cd zm1/myfs/
root@sol-test-1:>/zm1/myfs# ls -l
total 0

32.  Find where your snapshot is located?

root@sol-test-1:>/# zfs list -t all
NAME            USED  AVAIL  REFER  MOUNTPOINT
zm1             302K  3.89G  36.0K  /zm1
zm1@bkp        20.6K      -  34.6K  -
zm1/myfs       36.0K  3.89G  34.6K  /zm1/myfs
zm1/myfs@bkp2  1.33K      -  34.6K  -
root@sol-test-1:>/# zfs list -t snapshot
NAME            USED  AVAIL  REFER  MOUNTPOINT
zm1@bkp        20.6K      -  34.6K  -
zm1/myfs@bkp2  19.3K      -  34.6K  -

root@sol-test-1:>/# cd zm1/myfs/
root@sol-test-1:>/zm1/myfs# ls -la
total 8
drwxr-xr-x   2 root     root           2 Dec 19 14:01 .
drwxr-xr-x   3 root     root           3 Dec 19 14:01 ..
root@sol-test-1:>/zm1/myfs# cd .zfs  [.zfs is a hidden directory which is not shown even by ls -la]
root@sol-test-1:>/zm1/myfs/.zfs# ls -l
total 0
dr-xr-xr-x   2 root     root           2 Dec 19 14:02 snapshot
root@sol-test-1:>/zm1/myfs/.zfs# cd snapshot/
root@sol-test-1:>/zm1/myfs/.zfs/snapshot# ls -l
total 4
drwxr-xr-x   2 root     root           2 Dec 19 14:01 bkp2
root@sol-test-1:>/zm1/myfs/.zfs/snapshot# cd bkp2/
root@sol-test-1:>/zm1/myfs/.zfs/snapshot/bkp2# ls -l
total 0

33.  Create 2 files within zm1/myfs and take a snapshot; create 2 more files and take another snapshot; then create 2 more files and take a third snapshot

root@sol-test-1:>/zm1/myfs/.zfs/snapshot/bkp2# cd ../../..
root@sol-test-1:>/zm1/myfs# touch file1 file2
root@sol-test-1:>/zm1/myfs# zfs snapshot -r zm1/myfs@bkp3
root@sol-test-1:>/zm1/myfs# touch file3 file4
root@sol-test-1:>/zm1/myfs# zfs snapshot -r zm1/myfs@bkp4
root@sol-test-1:>/zm1/myfs# touch file5 file6
root@sol-test-1:>/zm1/myfs# zfs snapshot -r zm1/myfs@bkp5

34.  List all the snapshots

root@sol-test-1:>/zm1/myfs# zfs list -t snapshot
NAME            USED  AVAIL  REFER  MOUNTPOINT
zm1@bkp        20.6K      -  34.6K  -
zm1/myfs@bkp2  20.6K      -  34.6K  -
zm1/myfs@bkp3  20.6K      -  34.6K  -
zm1/myfs@bkp4  20.6K      -  34.6K  -
zm1/myfs@bkp5      0      -  34.6K  -

35.  Try to find out the differences between snapshots

root@sol-test-1:>/zm1/myfs# zfs diff zm1/myfs@bkp2 zm1/myfs@bkp3
M       /zm1/myfs/
+       /zm1/myfs/file1
+       /zm1/myfs/file2

Here the snapshot bkp2 is compared with bkp3 and lists the changes made in bkp3
root@sol-test-1:>/zm1/myfs# zfs diff zm1/myfs@bkp3 zm1/myfs@bkp4
M       /zm1/myfs/
+       /zm1/myfs/file3
+       /zm1/myfs/file4

root@sol-test-1:>/zm1/myfs# zfs diff zm1/myfs@bkp4 zm1/myfs@bkp5
M       /zm1/myfs/
+       /zm1/myfs/file5
+       /zm1/myfs/file6

OK… every output is almost the same, so let me make some different kinds of changes:
root@sol-test-1:>/zm1/myfs# ls -l
total 6
-rw-r--r--   1 root     root           0 Dec 19 14:09 file1
-rw-r--r--   1 root     root           0 Dec 19 14:09 file2
-rw-r--r--   1 root     root           0 Dec 19 14:09 file3
-rw-r--r--   1 root     root           0 Dec 19 14:09 file4
-rw-r--r--   1 root     root           0 Dec 19 14:09 file5
-rw-r--r--   1 root     root           0 Dec 19 14:09 file6
root@sol-test-1:>/zm1/myfs# mv file6 mynewfile
root@sol-test-1:>/zm1/myfs# rm file5
root@sol-test-1:>/zm1/myfs# zfs snapshot -r zm1/myfs@bkp6  [this snapshot was omitted from the original transcript but is needed for the diff below]

root@sol-test-1:>/zm1/myfs# zfs diff zm1/myfs@bkp5 zm1/myfs@bkp6
M       /zm1/myfs/
-       /zm1/myfs/file5
R       /zm1/myfs/file6 -> /zm1/myfs/mynewfile


M       item has been modified
R       item has been renamed
+       item has been added
-       item has been removed

36.  Try to roll back to an older snapshot and see what it says

root@sol-test-1:>/zm1/myfs# zfs rollback zm1/myfs@bkp3
cannot rollback to 'zm1/myfs@bkp3': more recent snapshots exist
use '-r' to force deletion of the following snapshots:
zm1/myfs@bkp5
zm1/myfs@bkp6
zm1/myfs@bkp4

This means that if newer snapshots exist and we try to roll back to an older one, all snapshots newer than the target will be deleted (hence the -r flag). Let’s do it:

root@sol-test-1:>/zm1/myfs# zfs rollback -r zm1/myfs@bkp3

37.  List the snapshots

root@sol-test-1:>/zm1/myfs# zfs list -t snapshot
NAME            USED  AVAIL  REFER  MOUNTPOINT
zm1@bkp        20.6K      -  34.6K  -
zm1/myfs@bkp2  20.6K      -  34.6K  -
zm1/myfs@bkp3  1.33K      -  34.6K  -

38.  Migrate a ufs root disk to zfs

======================/////====================
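
No worked example was captured for this one, so the following is only a sketch of the usual Solaris 10 Live Upgrade procedure (the pool, slice and BE names are examples; the target slice must carry an SMI label):

# zpool create rpool c1t1d0s0              [create the ZFS root pool on a labeled slice]
# lucreate -c ufsBE -n zfsBE -p rpool      [copy the running UFS BE into the pool as a new BE]
# lustatus                                 [verify both boot environments]
# luactivate zfsBE                         [activate the ZFS boot environment]
# init 6                                   [reboot into the ZFS root]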


39.  How to patch a zfs file system, explain in detail

Though I don’t have a patch bundle right now, I will just list the steps/commands:

#lucreate -n 10u6be-1
#cd /var/tmp
#unzip 10_x86_Recommended.zip
#luupgrade -t -n 10u6be-1 -s /var/tmp/10_x86_Recommended
#luactivate 10u6be-1
#lustatus
#init 6

40.  Make a dry run for creating a mirror zpool

root@sol-test-1:>/# zpool create -n zmirror mirror c1t10d0 c1t11d0
would create 'zmirror' with the following layout:

        zmirror
          mirror
            c1t10d0
            c1t11d0

41.  Define quota & refquota

Quota           Absolute maximum space the FS can consume, including its snapshots & clones
Refquota        Limits only the FS data itself; does not include snapshots & clones

42.  Create 2 FS zm1/myfs/test1 & zm1/myfs/test2 and set quota of 20mb on each with compression off

root@sol-test-1:>/# zfs create zm1/myfs/test1
root@sol-test-1:>/# zfs create zm1/myfs/test2
root@sol-test-1:>/# zfs set quota=20m zm1/myfs/test1
root@sol-test-1:>/# zfs set quota=20m zm1/myfs/test2

root@sol-test-1:>/# zfs set compression=off zm1/myfs/test1
root@sol-test-1:>/# zfs set compression=off zm1/myfs/test2


43.  Check whether the quota is working, then remove the quota from both FSs.

root@sol-test-1:>/zm1/myfs/test1# mkfile 16m f1
root@sol-test-1:>/zm1/myfs/test1# du -sh
  16M   .

root@sol-test-1:>/zm1/myfs/test1# mkfile 4m f2
warning: couldn't set mode to 01600
root@sol-test-1:>/zm1/myfs/test1# du -sh
  20M   .
root@sol-test-1:>/zm1/myfs/test1# touch aaa
touch: cannot create aaa: Disc quota exceeded

root@sol-test-1:>/zm1/myfs# zfs set quota=none zm1/myfs/test1
root@sol-test-1:>/zm1/myfs# zfs set quota=none zm1/myfs/test2

44.  Now set a refquota of 30mb on one FS and check whether the refquota is working.

root@sol-test-1:>/zm1/myfs# zfs set refquota=30m zm1/myfs/test1

It is already filled with 20m, so we need to create a 10m file to reach the full 30m:

root@sol-test-1:>/zm1/myfs/test1# ls -l
total 40937
-rw------T   1 root     root     16777216 Dec 19 14:38 f1
-rw-------   1 root     root     4194304 Dec 19 14:39 f2
root@sol-test-1:>/zm1/myfs/test1# mkfile 10m f3
f3: initialized 10354688 of 10485760 bytes: Disc quota exceeded
root@sol-test-1:>/zm1/myfs/test1# ls -l
total 61405
-rw------T   1 root     root     16777216 Dec 19 14:38 f1
-rw-------   1 root     root     4194304 Dec 19 14:39 f2
-rw-------   1 root     root     10485760 Dec 19 15:58 f3
root@sol-test-1:>/zm1/myfs/test1# du -sh
  30M   .
root@sol-test-1:>/zm1/myfs/test1# touch tfile1
touch: cannot create tfile1: Disc quota exceeded

root@sol-test-1:>/zm1/myfs/test1# zfs snapshot -r zm1/myfs/test1@test1bkp
root@sol-test-1:>/zm1/myfs/test1# zfs list -t snapshot
NAME                      USED  AVAIL  REFER  MOUNTPOINT
zm1@bkp                  20.6K      -  34.6K  -
zm1/myfs@bkp2            20.6K      -  34.6K  -
zm1/myfs@bkp3            20.6K      -  34.6K  -
zm1/myfs/test1@test1bkp      0      -  30.0M  -

Well, the snapshot is created, and we know that it resides inside test1 in the hidden .zfs directory:

root@sol-test-1:>/zm1/myfs/test1# cd .zfs
root@sol-test-1:>/zm1/myfs/test1/.zfs# ls -l
total 0
dr-xr-xr-x   2 root     root           2 Dec 19 15:59 snapshot
root@sol-test-1:>/zm1/myfs/test1/.zfs# cd snapshot/
root@sol-test-1:>/zm1/myfs/test1/.zfs/snapshot# ls -l
total 4
drwxr-xr-x   2 root     root           5 Dec 19 15:56 test1bkp
root@sol-test-1:>/zm1/myfs/test1/.zfs/snapshot# cd test1bkp/

root@sol-test-1:>/zm1/myfs/test1/.zfs/snapshot/test1bkp# ls -l
total 61405
-rw------T   1 root     root     16777216 Dec 19 14:38 f1
-rw-------   1 root     root     4194304 Dec 19 14:39 f2
-rw-------   1 root     root     10485760 Dec 19 15:58 f3
root@sol-test-1:>/zm1/myfs/test1/.zfs/snapshot/test1bkp# du -sh
  30M   .
So touch could not create even a 1 KB file, yet we were still allowed to take a 30 MB snapshot, because refquota does not count snapshot space.

45.  Create a ZFS file system ‘zm1/home’ and set a reservation on it

root@sol-test-1:>/# zfs create zm1/home
root@sol-test-1:>/# zfs list zm1
NAME   USED  AVAIL  REFER  MOUNTPOINT
zm1   30.9M  3.86G  37.3K  /zm1
root@sol-test-1:>/# zfs set reservation=1G zm1/home
root@sol-test-1:>/# zfs list zm1
NAME   USED  AVAIL  REFER  MOUNTPOINT
zm1   1.03G  2.86G  37.3K  /zm1

Now we can see the difference in the AVAIL field after setting the reservation: 1 GB is now guaranteed to zm1/home. The FS may grow beyond 1 GB or stay within it, but no other FS in the pool can use that reserved 1 GB.
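
To confirm the setting, the property can be queried directly (for example):

# zfs get reservation zm1/home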

46.  Remove all quotas

root@sol-test-1:>/zm1/myfs# zfs set quota=none zm1/myfs/test1
root@sol-test-1:>/zm1/myfs# zfs set quota=none zm1/myfs/test2
root@sol-test-1:>/# zfs set reservation=none zm1/home


47.  Share the FS ‘zm1/home’ to other systems

root@sol-test-1:>/# zfs set sharenfs=on zm1/home

48.  Unshare the ‘zm1/home’

root@sol-test-1:>/# zfs unshare zm1/home
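
Note that zfs unshare only stops sharing until the next share or reboot; to stop sharing persistently, the property itself can be turned off (assuming the same dataset):

# zfs set sharenfs=off zm1/home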

49.  Replace the faulty disk with new one.

root@sol-test-1:>/# zpool status
  pool: zm1
 state: ONLINE
 scan: resilvered 65.5K in 0h0m with 0 errors on Fri Dec 19 13:27:44 2014
config:

        NAME        STATE     READ WRITE CKSUM
        zm1         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
        spares
          c1t8d0    AVAIL

errors: No known data errors
root@sol-test-1:>/# zpool replace zm1 c1t4d0 c1t9d0
root@sol-test-1:>/# zpool status
  pool: zm1
 state: ONLINE
 scan: resilvered 15.3M in 0h0m with 0 errors on Fri Dec 19 16:20:08 2014
config:

        NAME        STATE     READ WRITE CKSUM
        zm1         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t9d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
        spares
          c1t8d0    AVAIL

errors: No known data errors

50.  We created the snapshot zm1/myfs/test1@test1bkp earlier; now restore this snapshot to zm1/home and see the results.

root@sol-test-1:>/# zfs send -R zm1/myfs/test1@test1bkp |zfs recv -v -F -d zm1/home
receiving full stream of zm1/myfs/test1@test1bkp into zm1/home/myfs/test1@test1bkp
received 30.1MB stream in 114 seconds (271KB/sec)
root@sol-test-1:>/# cd zm1/home
root@sol-test-1:>/zm1/home# ls -l
total 4
drwxr-xr-x   3 root     root           3 Dec 19 16:33 myfs
root@sol-test-1:>/zm1/home# cd myfs/
root@sol-test-1:>/zm1/home/myfs# ls -l
total 4
drwxr-xr-x   2 root     root           5 Dec 19 15:56 test1
root@sol-test-1:>/zm1/home/myfs# cd test1/
root@sol-test-1:>/zm1/home/myfs/test1# ls -l
total 61401
-rw------T   1 root     root     16777216 Dec 19 14:38 f1
-rw-------   1 root     root     4194304 Dec 19 14:39 f2
-rw-------   1 root     root     10485760 Dec 19 15:58 f3

51.  Delete zm1/home and then recover it

(The answer below actually destroys and recovers the whole pool zm1, since only pools can be recovered with zpool import -D.)

root@sol-test-1:>/# zpool destroy -f zm1
root@sol-test-1:>/# zpool list
no pools available
root@sol-test-1:>/# zpool import -D
  pool: zm1
    id: 11830464581855624321
 state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

        zm1         ONLINE
          raidz1-0  ONLINE
            c1t9d0  ONLINE
            c1t5d0  ONLINE
            c1t6d0  ONLINE
        spares
          c1t8d0
root@sol-test-1:>/# zpool import zm1
cannot import 'zm1': no such pool available
root@sol-test-1:>/# zpool import -D zm1
root@sol-test-1:>/# zpool list
NAME   SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
zm1   5.94G  46.3M  5.89G     0%  ONLINE  -

52.  What is an Emulated Volume in zfs? Create a 500mb emulated volume and use it as swap

For swap in ZFS we need a raw device, i.e. a volume that is not formatted with a ZFS file system; such devices in ZFS are called EMULATED VOLUMES (zvols).

root@sol-test-1:>/# zfs create -V 500M zm1/zswap
root@sol-test-1:>/# mkfs -F ufs /dev/zvol/rdsk/zm1/zswap 500M  [this step is not actually required for swap; swap -a uses the raw volume directly]
root@sol-test-1:>/# swap -a /dev/zvol/dsk/zm1/zswap
root@sol-test-1:>/# swap -l
swapfile             dev  swaplo blocks   free
/dev/dsk/c1t0d0s1   30,1       8 2104504 2104504
/dev/zvol/dsk/zm1/zswap 181,9       8 1023992 1023992
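
To keep this swap device across reboots, an entry can be added to /etc/vfstab (a sketch; the standard swap-entry format is assumed):

/dev/zvol/dsk/zm1/zswap   -   -   swap   -   no   -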

53.  In a similar way, create a dump device
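
No transcript was captured here; a minimal sketch, assuming a dedicated volume called zm1/zdump (a dump device cannot share a volume with swap on ZFS):

# zfs create -V 500M zm1/zdump         [create a dedicated emulated volume]
# dumpadm -d /dev/zvol/dsk/zm1/zdump   [point the dump configuration at the zvol]
# dumpadm                              [verify the dump configuration]
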
54.  Create a 100m volume zm1/scsivol2 and use it as an iSCSI volume on another system

root@sol-test-1:>/# zfs create -V 100M zm1/scsivol2
root@sol-test-1:>/# zfs share zm1/scsivol2
cannot share 'zm1/scsivol2': 'shareiscsi' property not set
set 'shareiscsi' property or use iscsitadm(1M) to share this volume
root@sol-test-1:>/# zfs set shareiscsi=on zm1/scsivol2
root@sol-test-1:>/# iscsitadm list target
Target: zm1/scsivol2
    iSCSI Name: iqn.1986-03.com.sun:02:bd7db28b-f6b6-c17c-ba98-856ee23e32b3
    Connections: 0

ON OTHER SYSTEM…

root@sol-tst-2:>/# svcadm enable iscsitgt
root@sol-tst-2:>/# iscsiadm add static-config iqn.1986-03.com.sun:02:bd7db28b-f6b6-c17c-ba98-856ee23e32b3,192.168.234.133:3260
root@sol-tst-2:>/# iscsiadm list static-config
Static Configuration Target: iqn.1986-03.com.sun:02:bd7db28b-f6b6-c17c-ba98-856ee23e32b3,192.168.234.133:3260
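
If the new LUN does not show up after this, static discovery may also need to be enabled on the initiator (this step was not captured in the transcript):

# iscsiadm modify discovery --static enable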

root@sol-tst-2:>/# devfsadm
root@sol-tst-2:>/# format

[……..]
    5. c2t600144F05494136700000C295EB26000d0 <DEFAULT cyl 97 alt 2 hd 64 sec 32>
          /scsi_vhci/disk@g600144f05494136700000c295eb26000


55.  Rename the zm1/scsivol2 to zm1/scsiNEW

root@sol-test-1:>/# zfs rename zm1/scsivol2 zm1/scsiNEW
root@sol-test-1:>/# zfs list zm1/scsiNEW
NAME          USED  AVAIL  REFER  MOUNTPOINT
zm1/scsiNEW   103M  3.26G  16.7K  -

56.  What is a clone in zfs, and how is it related to a snapshot?  [answered under question 11 above]
57.  Create a FS zm1/ctest, create two 10m files f1&f2 inside, create clone of ctest in zm1/myfs2

root@sol-test-1:>/# zfs create zm1/ctest
root@sol-test-1:>/# cd zm1/ctest/
root@sol-test-1:>/zm1/ctest# zfs set compression=off zm1/ctest
root@sol-test-1:>/zm1/ctest# mkfile 10m f1
root@sol-test-1:>/zm1/ctest# mkfile 10m f2
root@sol-test-1:>/# zfs snapshot -r zm1/ctest@ctest.bkp  [a clone can be created only from a snapshot]
root@sol-test-1:>/# zfs create zm1/myfs2
root@sol-test-1:>/# zfs clone zm1/ctest@ctest.bkp zm1/myfs2/ctestclone
root@sol-test-1:>/# zfs list -r zm1/ctest
NAME                  USED  AVAIL  REFER  MOUNTPOINT
zm1/ctest            20.0M  3.14G  20.0M  /zm1/ctest
zm1/ctest@ctest.bkp      0      -  20.0M  -
root@sol-test-1:>/# zfs list -r zm1/myfs2/ctestclone
NAME                         USED  AVAIL  REFER  MOUNTPOINT
zm1/myfs2/ctestclone  1.33K  3.14G  20.0M  /zm1/myfs2/ctestclone

58.  Now that ctest is cloned at zm1/myfs2/ctestclone, promote the clone and rename it to ctest

root@sol-test-1:>/# zfs promote zm1/myfs2/ctestclone

root@sol-test-1:>/# zfs list -r zm1/myfs2/ctestclone
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
zm1/myfs2/ctestclone            20.0M  3.14G  20.0M  /zm1/myfs2/ctestclone
zm1/myfs2/ctestclone@ctest.bkp  1.33K      -  20.0M  -

See… now the clone occupies the space; also see what happened to zm1/ctest:

root@sol-test-1:>/# zfs list -r zm1/ctest
NAME        USED  AVAIL  REFER  MOUNTPOINT
zm1/ctest      0  3.14G  20.0M  /zm1/ctest

USED is now showing 0.

root@sol-test-1:>/# zfs rename zm1/ctest zm1/ctest--OLD

root@sol-test-1:>/# zfs rename zm1/myfs2/ctestclone zm1/ctest

root@sol-test-1:>/# zfs list -r zm1/ctest
NAME                  USED  AVAIL  REFER  MOUNTPOINT
zm1/ctest            20.0M  3.14G  20.0M  /zm1/ctest
zm1/ctest@ctest.bkp  1.33K      -  20.0M  -

root@sol-test-1:>/# ls -l zm1/ctest
total 40930
-rw------T   1 root     root     10485760 Dec 19 18:25 f1
-rw------T   1 root     root     10485760 Dec 19 18:25 f2

root@sol-test-1:>/# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
zm1                       774M  3.14G  40.6K  /zm1
zm1@bkp                  22.0K      -  34.6K  -
zm1/ctest                20.0M  3.14G  20.0M  /zm1/ctest
zm1/ctest@ctest.bkp      1.33K      -  20.0M  -
zm1/ctest--OLD               0  3.14G  20.0M  /zm1/ctest--OLD
zm1/myfs                 30.2M  3.14G  40.6K  /zm1/myfs
zm1/myfs@bkp2            20.6K      -  34.6K  -
zm1/myfs@bkp3            20.6K      -  34.6K  -
zm1/myfs/test1           30.0M      0  30.0M  /zm1/myfs/test1
zm1/myfs/test1@test1bkp  20.0K      -  30.0M  -
zm1/myfs/test2           34.6K  3.14G  34.6K  /zm1/myfs/test2
zm1/myfs2                34.6K  3.14G  34.6K  /zm1/myfs2
zm1/scsiNEW               103M  3.24G  16.7K  -
zm1/scsivol1              103M  3.23G  7.58M  -
zm1/zswap                 516M  3.61G  30.1M  -

root@sol-test-1:>/# ls -l zm1/myfs2
total 0

root@sol-test-1:>/# ls -l zm1
total 16
drwxr-xr-x   2 root     root           4 Dec 19 18:25 ctest
drwxr-xr-x   2 root     root           4 Dec 19 18:25 ctest--OLD
drwxr-xr-x   6 root     root           8 Dec 19 14:35 myfs
drwxr-xr-x   2 root     root           2 Dec 19 18:36 myfs2

59.  Send the snapshot to other system using zfs via ssh

First I created a zpool “bkp” on the other system:

root@sol-tst-2:>/# zpool create bkp c2t600144F0549411B700000C295EB26000d0

Now, from the first system:

root@sol-test-1:>/# zfs send zm1/myfs@bkp3 | ssh 192.168.234.134 zfs recv bkp/backup@today
Password:

Back on the other system:
root@sol-tst-2:>/# ls -l bkp/backup/
total 2
-rw-r--r--   1 root     root           0 Dec 19 14:09 file1
-rw-r--r--   1 root     root           0 Dec 19 14:09 file2


60.  Take device c1t9d0 of zm1 offline temporarily, then bring it online; also clear the device errors

root@sol-test-1:>/# zpool offline -t zm1 c1t9d0
root@sol-test-1:>/# zpool status
  pool: zm1
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scan: resilvered 15.3M in 0h0m with 0 errors on Fri Dec 19 16:20:08 2014
config:

        NAME        STATE     READ WRITE CKSUM
        zm1         DEGRADED     0     0     0
          raidz1-0  DEGRADED     0     0     0
            c1t9d0  OFFLINE      0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
        spares
          c1t8d0    AVAIL

errors: No known data errors

root@sol-test-1:>/# zpool online zm1 c1t9d0
root@sol-test-1:>/# zpool status
  pool: zm1
 state: ONLINE
 scan: resilvered 51K in 0h0m with 0 errors on Fri Dec 19 19:15:34 2014
config:

        NAME        STATE     READ WRITE CKSUM
        zm1         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t9d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
        spares
          c1t8d0    AVAIL

errors: No known data errors


There are no data errors, so we do not need to clear anything, although the command would be:

root@sol-test-1:>/# zpool clear zm1 c1t9d0

61.  Command to display pool statistics

root@sol-test-1:>/# zpool iostat 2 5
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zm1          135M  5.81G      0      1     39  23.7K
zm1          135M  5.81G      0      0      0      0
zm1          135M  5.81G      0      0      0      0
zm1          135M  5.81G      0      0      0      0
zm1          135M  5.81G      0      0      0      0

62.  Suppose you migrated a zpool from an older version of Solaris, and zpool status shows that some features are unavailable. What is the cause and what is the solution?

The pool is still formatted with the older on-disk version it was created with; upgrading the pool to the version supported by the current OS enables the new features:

root@sol-test-1:>/# zpool upgrade -a
This system is currently running ZFS pool version 29.

All pools are formatted using this version.

63.  How to see the current version of zfs running on system?

root@sol-test-1:>/# zpool upgrade -v
This system is currently running ZFS pool version 29.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
[…………….]
28  Multiple vdev replacements
29  RAID-Z/mirror hybrid allocator

64.  How to check a pool’s integrity and get detailed error information?

root@sol-test-1:>/# zpool scrub zm1
root@sol-test-1:>/# zpool status -x
all pools are healthy
root@sol-test-1:>/# zpool status -v
  pool: zm1
 state: ONLINE
 scan: scrub in progress since Sat Dec 20 13:32:16 2014
    130M scanned out of 135M at 8.11M/s, 0h0m to go
    0 repaired, 95.88% done
config:

        NAME        STATE     READ WRITE CKSUM
        zm1         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t9d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
        spares
          c1t8d0    AVAIL

errors: No known data errors
root@sol-test-1:>/# zpool status -v
  pool: zm1
 state: ONLINE
 scan: scrub repaired 0 in 0h0m with 0 errors on Sat Dec 20 13:32:33 2014
config:

        NAME        STATE     READ WRITE CKSUM
        zm1         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t9d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
        spares
          c1t8d0    AVAIL

errors: No known data errors



SOME TROUBLESHOOTING…


65.  How to recover lost root password in zfs system?

On a Solaris 10 system, boot into failsafe mode:
ok boot -F failsafe
Mount the ZFS BE on /a when prompted:
.
.
.
ROOT/zfsBE was found on rpool.
Do you wish to have it mounted read-write on /a? [y,n,?] y
mounting rpool on /a
Starting shell.
Change to the /a/etc directory.
# cd /a/etc
Correct the passwd or shadow file.
# vi passwd
Reboot the system.
# init 6
                                                                “OR”

ok boot cdrom -s
ok boot net -s
If you don't use the -s option, you will need to exit the installation program.
Import the root pool and specify an alternate mount point:
# zpool import -R /a rpool
Mount the ZFS BE specifically because canmount is set to noauto by default.
# zfs mount rpool/ROOT/zfsBE
Change to the /a/etc directory.
# cd /a/etc
Correct the passwd or shadow file.
# vi shadow
Reboot the system.
# init 6

       
66.  How to stop an ongoing scrub operation?

root@sol-test-1:>/# zpool scrub -s zm1

67.  A newly created pool “testpool” has some errors and is inaccessible; diagnose the problem and figure out the solution.

root@sol-test-1:>/# zpool status testpool
  pool: testpool
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-HC
 scan: scrub repaired 0 in 0h0m with 0 errors on Sat Dec 20 15:00:10 2014
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    UNAVAIL      0     0     0  insufficient replicas
          c1t10d0   FAULTED      0     0     0  too many errors
          c1t11d0   ONLINE       0     0     0

root@sol-test-1:>/# zpool status -v testpool
  pool: testpool
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-HC
 scan: scrub repaired 0 in 0h0m with 0 errors on Sat Dec 20 15:00:10 2014
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    UNAVAIL      0     0     0  insufficient replicas
          c1t10d0   FAULTED      0     0     0  too many errors
          c1t11d0   ONLINE       0     0     0

Almost the same result with #zpool status -x

It seems that one of the disks, c1t10d0, has a problem and needs to be replaced. So I added a new disk in the VM (it went to the same slot) and then tried to replace it, but:

root@sol-test-1:>/# zpool replace -f testpool c1t10d0
cannot replace c1t10d0 with c1t10d0: pool I/O is currently suspended
root@sol-test-1:>/# zpool get failmode testpool
NAME      PROPERTY  VALUE     SOURCE
testpool  failmode  wait      default
root@sol-test-1:>/# zpool set failmode=continue testpool
cannot set property for 'testpool': pool I/O is currently suspended

Then I rebooted the system,
root@sol-test-1:>/# zpool status -xv testpool
  pool: testpool
 state: ONLINE
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        testpool    ONLINE       0     0     0
          c1t10d0   UNAVAIL      0     0     0  corrupted data
          c1t11d0   ONLINE       0     0     0

errors: No known data errors

root@sol-test-1:>/# zpool replace -f testpool c1t10d0
root@sol-test-1:>/# zpool status -xv testpool
pool 'testpool' is healthy

WE CAN ALSO USE THE FOLLOWING COMMAND TO FIND THE PROBLEM:

root@sol-test-1:>/# fmadm faulty -a
--------------- ------------------------------------  -------------- ---------
TIME            EVENT-ID                              MSG-ID         SEVERITY
--------------- ------------------------------------  -------------- ---------
Dec 20 15:00:25 382cd102-f2a6-ec37-84d5-91886c480b6d  ZFS-8000-FD    Major

Host        : sol-test-1
Platform    : VMware-Virtual-Platform   Chassis_id  : VMware-56-4d-b1-7b-a9-53-aa-80-d5-02-cc-26-6d-5e-b2-60
Product_sn  :

Fault class : fault.fs.zfs.vdev.io
Affects     : zfs://pool=testpool/vdev=77b1e58bdd1887a0
                  faulted and taken out of service
Problem in  : zfs://pool=testpool/vdev=77b1e58bdd1887a0
                  not present

Description : The number of I/O errors associated with a ZFS device exceeded
                     acceptable levels.  Refer to http://sun.com/msg/ZFS-8000-FD
              for more information.

Response    : The device has been offlined and marked as faulted.  An attempt
                     will be made to activate a hot spare if available.

Impact      : Fault tolerance of the pool may be compromised.

Action      : Run 'zpool status -x' and replace the bad device.

After the correction, run this so that fmadm knows the problem has been resolved:

root@sol-test-1:>/# fmadm repair zfs://pool=testpool/vdev=77b1e58bdd1887a0
fmadm: recorded repair to zfs://pool=testpool/vdev=77b1e58bdd1887a0

Check for any other outstanding faults:

root@sol-test-1:>/# fmadm faulty

68.  Here is a situation: I created a mirrored pool named “testpool” with 2gb disks, and now I realize that 2gb is not enough, so I want to increase the capacity. How can I do that?

I added two 5gb disks to the VM and ran “devfsadm”. Then:

root@sol-test-1:>/# zfs list testpool
NAME       USED  AVAIL  REFER  MOUNTPOINT
testpool    91K  1.95G    31K  /testpool

root@sol-test-1:>/# zpool status testpool
  pool: testpool
 state: ONLINE
 scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        testpool     ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
            c1t10d0  ONLINE       0     0     0

errors: No known data errors

root@sol-test-1:>/# zpool replace testpool c1t9d0 c1t14d0
root@sol-test-1:>/# zpool replace testpool c1t10d0 c1t15d0

root@sol-test-1:>/#  zpool status testpool
  pool: testpool
 state: ONLINE
 scan: resilvered 95.5K in 0h0m with 0 errors on Sat Dec 20 17:32:43 2014
config:

        NAME         STATE     READ WRITE CKSUM
        testpool     ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c1t14d0  ONLINE       0     0     0
            c1t15d0  ONLINE       0     0     0

errors: No known data errors

root@sol-test-1:>/# zfs list testpool
NAME       USED  AVAIL  REFER  MOUNTPOINT
testpool   103K  1.95G    31K  /testpool

Still there is no change in size, so we need to turn on autoexpand:

root@sol-test-1:>/# zpool set autoexpand=on testpool
root@sol-test-1:>/# zfs list testpool
NAME       USED  AVAIL  REFER  MOUNTPOINT
testpool   110K  4.91G    31K  /testpool
root@sol-test-1:>/#

69.  What to do when a damaged pool causes a panic/reboot loop or prevents the system from importing pools

Boot to the none milestone by using the -m milestone=none boot option.
               ok boot -m milestone=none
Remount your root file system as writable.
Rename or move the /etc/zfs/zpool.cache file to another location (see the one-liner after this list).
These actions cause ZFS to forget that any pools exist on the system, preventing it from trying to access the bad pool that is causing the problem. If you have multiple pools on the system, do these additional steps:
* Determine which pool might have issues by using the fmdump -eV command to display the pools with reported fatal errors.
* Import the pools one by one, skipping the pools that are having issues, as described in the fmdump output.
* Once the system is back up, run the svcadm milestone all command.
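
Renaming the cache file is a single command (any destination name will do):

# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad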

70.  One disk in your ZFS root mirror suddenly went faulty, and you found this output:

root@sol-test-4:>/# zpool status -x
  pool: rpool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scan: resilvered 6.21G in 0h13m with 0 errors on Sat Dec 20 20:39:58 2014
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         DEGRADED     0     0     0
          mirror-0    DEGRADED     0     0     0
            c1t0d0s0  UNAVAIL      0     0     0  cannot open
            c1t1d0s0  ONLINE       0     0     0

errors: No known data errors

Now we just need to reattach the device to our VM and then….. let’s see.

I attached a new disk of the same size and it went to the same slot.

Ohhhh…… unfortunately I made a disastrous mistake…. I ran #!prtvtoc

And you know what happened? The command that got fired was:

# prtvtoc /dev/rdsk/c1t0d0s2 |fmthard -s - /dev/rdsk/c1t1d0s2

It was the previous command I had used for mirroring the root disk; now, after attaching the new c1t0d0, I re-ran it, which wrote the (blank) label of the new disk over the good mirror disk c1t1d0. Boom….

Anyway… I am not going to create the mirror again now; I am just writing down the steps & commands.

After attaching the disk, run devfsadm, then:
#fdisk -B /dev/rdsk/c1t0d0p0
# prtvtoc /dev/rdsk/c1t1d0s2 |fmthard -s - /dev/rdsk/c1t0d0s2

Check the zpool status.

#zpool replace -f rpool c1t0d0s0

Check the zpool status again; if everything is fine, there is no problem, otherwise run zpool clear rpool.

Install GRUB on c1t0d0s0 and also set the altbootpath [the commands are under question 28 above; a recap follows].
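
For reference, these are the same commands used in question 28, pointed at the replaced disk (the physical device path must be taken from the ls output for c1t0d0s0, so the path below is only a placeholder):

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
# ls -l /dev/dsk/c1t0d0s0                      [note the ../../devices/... physical path]
# eeprom altbootpath=<physical-path-from-ls>   [set the alternate boot path to that device]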

71.   


72.   


