Creating new partitions on AFS file servers

This is a very brief note on how to add new partitions (vicepa, vicepb, etc.) to an AFS file server.

1. Create your file space

Note: this covers the Nexsan boxes. It is probably safe to assume that any new AFS partitions will be on RAIDed disk space on one of the School's (S)ATA disks. The first step therefore is to create an appropriate volume on the RAID array to serve as the new partition. On the web interface of the RAID array, click on 'Configure Volumes' in the left-hand menu, then the 'add volume' tab. Choose the array you want the new volume to be on (if there is more than one), then fill in the appropriate fields. So far, all AFS partitions have been 50GB in size. The volume should be named servername-partitionname. Create the volume, noting the LUN which has been assigned to it. Click on the 'LUN mask' tab. If you know the WWN of the AFS server (there is a partial list of WWNs at ServicesUnitPortnameInfo), then click the button beside that WWN; otherwise click the 'Select All' button. Make sure to click the 'Mask LUN' button afterwards.

Specifics for the Evo boxes are not covered yet.

  • Possibly useful: if you want to end up with 500GB (as reported by vos partinfo) of usable AFS partition space, then (on the Evos at least) I use a multiplication factor of 1096 to turn that into the number of MBs for the size of the volume to create on the array, e.g. 500 * 1096 = 548000MB. It more or less works out; see the sketch below.
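A minimal sketch of that arithmetic, assuming the empirical factor of 1096 above:

# usable GB wanted, times 1096, gives the size in MB to create on the array
echo $(( 500 * 1096 ))   # prints 548000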

2. Make sure the server can see the new volume

The new volume should appear in the output from the format command. There are some guidelines on how to make this happen at ServicesSANStuff. Partition the new volume (slice 0 should cover the whole volume), then label the disk.
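A very rough sketch of that step on a Solaris-style host (disk selection and the exact menu prompts will vary; treat this as illustrative only):

format
# select the newly visible disk from the list, then at the format> prompt:
#   partition   - define slice 0 to cover the whole volume
#   label       - write the new label to the disk
#   quit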

2.1 Nowadays we use "parted" to partition the disks, with a "gpt" disk label, e.g.:

(parted) mktable gpt
(parted) unit %
(parted) mkpart viceps ext3 0 33.34
(parted) mkpart vicept ext3 33.34 66.67
(parted) mkpart vicepu ext3 66.67 100
(parted) unit mib
(parted) print
Model: Linux device-mapper (multipath) (dm)
Disk /dev/mapper/36000402001fc148f78a6b9fa00000000: 1430649MiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start      End         Size       File system  Name    Flags
 1      192MiB     476928MiB   476736MiB               viceps
 2      476928MiB  953856MiB   476928MiB               vicept
 3      953856MiB  1430592MiB  476736MiB               vicepu

3. Mount the volume on the server

Create new filesystems on the new volumes using mke2fs. Add an entry to /etc/fstab, preferably using the LCFG OPENAFS_VICE_PARTITION() macro, to mount the volume in the appropriate place (/vicepa, /vicepb, etc.).
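A sketch of this step, with an illustrative multipath device name (the real name will be the WWID of your volume) and a plain fstab line shown in place of the site-specific LCFG macro:

# create an ext3 filesystem on the first partition of the new volume
mke2fs -j /dev/mapper/<wwid>-part1
# equivalent /etc/fstab entry, if not using OPENAFS_VICE_PARTITION():
# /dev/mapper/<wwid>-part1  /vicepa  ext3  defaults  0  0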

Also, for large volumes that would take forever to fsck at boot time, you should probably use tune2fs -c 0 -i 0 /dev/.... to stop the automatic "fsck after x mounts / x days" checks. The filesystem will still be fscked if it is shut down uncleanly and journaling can't save the day.

Mount the new partition.
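For example (device name and mount point are illustrative):

tune2fs -c 0 -i 0 /dev/mapper/<wwid>-part1
mount /vicepa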

4. Restart the file server

Using your AFS admin token, run the command bos restart servername fs. Then run bos status servername to make sure the fileserver process restarted OK. vos listpart servername should now show your new AFS partition!
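The whole sequence, assuming servername is your file server and you already hold an admin token:

bos restart servername fs
bos status servername
vos listpart servername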

SL7 Gotcha

An issue spotted on SL7 with systemd. If you partition up your disk, and then realise you need to redo the partitions, you may have some fun.

What I'd normally do is (sketched below):

  1. umount all the /vicep? mounts on the affected "disk",
  2. then use "parted rm" to remove the partitions,
  3. then just recreate them as before, but with the sizes/layout you now want,
  4. update /etc/fstab as appropriate.
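A sketch of that sequence (device path, partition numbers and the new layout are all illustrative):

umount /viceps /vicept /vicepu
parted /dev/mapper/<wwid> rm 3
parted /dev/mapper/<wwid> rm 2
parted /dev/mapper/<wwid> rm 1
parted /dev/mapper/<wwid> mkpart viceps ext3 0% 50%
parted /dev/mapper/<wwid> mkpart vicept ext3 50% 100%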

What seems to happen is that when you make your first change (rm, or even print), systemd will automatically remount the partitions, so the next "rm" you do will complain about the partition already being in use. Even if you remove them from fstab, you still get this behaviour.

It seems to be related to the files automatically created in /run/systemd/generator.

The solution seems to be to indeed remove them from /etc/fstab, then run systemctl daemon-reload. That clears out the old mount info from the generator directory. Googling also suggested that:

  • systemctl restart remote-fs.target
  • systemctl restart local-fs.target
were also needed, but not in my case.
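So, in full (the target restarts are the ones suggested online; they weren't needed in my case):

# remove the stale /vicep? entries from /etc/fstab first, then:
systemctl daemon-reload
# and only if that isn't enough:
systemctl restart remote-fs.target
systemctl restart local-fs.target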

-- CraigStrachan - 01 Sep 2006
