Creating ZFS filesystem on Solaris 10, x86

I redid the installation of Solaris 10 on the X4200 and patched everything.
Here is the process for creating the filesystem.
(It will be expanded when I get more disks.)

bash-3.00# zpool create pool c0t0d0s7 c0t1d0s7
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c0t0d0s7 is currently mounted on /export/home. Please see umount(1M).
bash-3.00# umount /export/home
bash-3.00# vi /etc/vfstab
"/etc/vfstab" 12 lines, 424 characters
#device                 device                  mount            FS      fsck    mount   mount
#to mount               to fsck                 point            type    pass    at boot options
#
fd                      -                       /dev/fd          fd      -       no      -
/proc                   -                       /proc            proc    -       no      -
/dev/dsk/c0t0d0s1       -                       -                swap    -       no      -
/dev/dsk/c0t0d0s0       /dev/rdsk/c0t0d0s0      /                ufs     1       no      -
#/dev/dsk/c0t0d0s7      /dev/rdsk/c0t0d0s7      /export/home     ufs     2       yes     -
/devices                -                       /devices         devfs   -       no      -
ctfs                    -                       /system/contract ctfs    -       no      -
objfs                   -                       /system/object   objfs   -       no      -
swap                    -                       /tmp             tmpfs   -       yes     -
~
"/etc/vfstab" 12 lines, 425 characters
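
The only real change is commenting out the /export/home line. For the record, the same edit can be scripted; Solaris 10's sed has no -i flag, so a sketch using a backup copy looks like this:

bash-3.00# cp /etc/vfstab /etc/vfstab.orig
bash-3.00# sed 's|^/dev/dsk/c0t0d0s7|#&|' /etc/vfstab.orig > /etc/vfstab

With the entry gone from vfstab, trying again: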
bash-3.00# zpool create pool c0t0d0s7 c0t1d0s7
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c0t0d0s7 contains a ufs filesystem.
bash-3.00# zpool create -f pool c0t0d0s7 c0t1d0s7
bash-3.00# zpool list
NAME        SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
pool        130G    80K    130G    0%   ONLINE   -
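
Note that handing zpool two slices with no mirror keyword creates a dynamic stripe: data is spread across both disks with no redundancy, so losing either disk loses the pool. A mirrored pool would have been created like this instead (a sketch, not what I ran above):

bash-3.00# zpool create -f pool mirror c0t0d0s7 c0t1d0s7

Either way, zpool status pool shows the resulting vdev layout.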

bash-3.00# zfs create export/home
cannot create 'export/home': no such pool 'export'
bash-3.00# zfs create pool/home
bash-3.00# zfs list
NAME        USED   AVAIL   REFER   MOUNTPOINT
pool        107K    128G   25.5K   /pool
pool/home  24.5K    128G   24.5K   /pool/home
bash-3.00# cd /pool/home
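
Since the old UFS /export/home is gone, the new filesystem can take over that path with a ZFS property instead of a vfstab entry (a sketch; it assumes nothing else is mounted there):

bash-3.00# zfs set mountpoint=/export/home pool/home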

That was easy.

Installing ZFS on Solaris 10, x86

The web has many entries on how to install ZFS; here are the commands I used.

The machine is an X4200 with (for now) two internal disks.

I need to use this server for Perforce (version control) software and need a lot more disk space.

First, create a pool of all the available disk space. In this case the existing /export/home slice, which was created during installation, will be used as well.

bash-3.00# umount /export/home
bash-3.00# zpool create -f perforcepool c0t0d0s7
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c0t0d0s7 is normally mounted on /export/home according to /etc/vfstab. Please remove this entry to use this device.

So remove the entry from /etc/vfstab and try again:

bash-3.00# zpool create -f perforcepool c0t0d0s7
bash-3.00#

Success!!!
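
(A quick zpool list perforcepool should now show the pool as ONLINE.)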

There is one extra disk, so lets add that one.

Here is the output from the format command.

bash-3.00# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t0d0
          /pci@…,0/pci1022,…@…/pci1000,…@…/sd@0,0
       1. c0t1d0
          /pci@…,0/pci1022,…@…/pci1000,…@…/sd@1,0
Specify disk (enter its number): 0
selecting c0t0d0
[disk formatted]
Warning: Current Disk has mounted partitions.
/dev/dsk/c0t0d0s0 is currently mounted on /. Please see umount(1M).
/dev/dsk/c0t0d0s1 is currently used by swap. Please see swap(1M).
/dev/dsk/c0t0d0s3 is currently mounted on /metadb1. Please see umount(1M).
/dev/dsk/c0t0d0s4 is currently mounted on /metadb2. Please see umount(1M).
/dev/dsk/c0t0d0s7 is part of active ZFS pool perforcepool. Please see zpool(1M).

So the second disk is c0t1d0
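
(A handy shortcut: format </dev/null just prints the disk list and exits instead of waiting at the menu.)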

Can we just add s2 (the whole disk)?

bash-3.00# zpool add perforcepool c0t1d0s2
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c0t1d0s2 overlaps with /dev/dsk/c0t1d0s7

Argh! So during installation the hardware RAID was chosen.
My bad! But it was the first time I installed an X4200. Well, that's my excuse anyway.

So back to the drawing board. Maybe I have to install again, which is a pain since I cannot set up a JumpStart server and have to install everything from the CDs.
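
Before reinstalling, though, one thing may be worth trying (a sketch, untested on this box): slice 2 by convention maps the entire disk, so it overlaps any other slice by design, and the error may only mean that an s7 slice exists on the second disk as well. Handing ZFS the whole device should sidestep the overlap and let it write its own EFI label:

bash-3.00# zpool add perforcepool c0t1d0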

Whatever.

More to come in later posts.

Asbjorn