Getting at old RAID sets

After I got Ubuntu 8.10 working with MD RAID on SATA drives, I wanted to move my old data onto the new drives.  As I explained previously, the system would not boot with the old drives plugged into the primary IDE controller and the CD and extra drive plugged into the secondary IDE.  It would boot with the old RAID set plugged into the secondary IDE and the primary left unused.
Now, running on the SATA drives, I wanted to access the old drives, which were set up as a number of MD RAID-1 sets.  After the break I’ll explain step by step how to find and mount the old RAID sets.

First, I looked at /proc/mdstat to see if the kernel had noticed them:

root@arctic:~# cat /proc/mdstat
md3 : active raid1 sda5[0] sdb5[1]
      945417088 blocks [2/2] [UU]
md2 : active raid1 sda3[0] sdb3[1]
      29294400 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
      1951808 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
      96256 blocks [2/2] [UU]
unused devices: <none>

No.  I suspect /proc/mdstat only shows arrays the kernel has actually started, and at boot only the arrays that /etc/mdadm/mdadm.conf lists get assembled (plus some magic for the boot path to read mdadm.conf in the first place), so anything not in that file never gets started.

root@arctic:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=c8e5de34:961c19ff:76057e05:202e9e31
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=aec97020:f0d5a2aa:9a4fcaff:367dee9f
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=4479e18c:46028497:e57d763b:9df20a5f
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=6a00caef:013f5002:fd252cbf:c8fed836
# This file was auto-generated on Mon, 04 Feb 2008 03:58:53 +0000
# by mkconf $Id$

That shows only the new arrays, not the old ones.  /proc/partitions, however, does know about all the drives:

root@arctic:~# cat /proc/partitions
major minor  #blocks  name
   8     0  976762584 sda
   8     1      96358 sda1
   8     2    1951897 sda2
   8     3   29294527 sda3
   8     4          1 sda4
   8     5  945417186 sda5
   8    16  976762584 sdb
   8    17      96358 sdb1
   8    18    1951897 sdb2
   8    19   29294527 sdb3
   8    20          1 sdb4
   8    21  945417186 sdb5
   8    32  244198584 sdc
   8    33    1028128 sdc1
   8    34     104422 sdc2
   8    35    1959930 sdc3
   8    36          1 sdc4
   8    37   35985568 sdc5
   8    38  205117888 sdc6
   8    48  244198584 sdd
   8    49    1028128 sdd1
   8    50     104422 sdd2
   8    51    1959930 sdd3
   8    52          1 sdd4
   8    53   35985568 sdd5
   8    54  205117888 sdd6
   9     0      96256 md0
   9     1    1951808 md1
   9     2   29294400 md2
   9     3  945417088 md3
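As a sanity check on the running arrays, mdadm --detail reports the member partitions and UUID of an assembled array; running it against md0 through md3 would confirm that they really live on the new SATA drives.  I did not capture the output here, but the command is simply:

root@arctic:~# mdadm --detail /dev/md3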

After looking at the mdadm man page (read the directions?  Who knew?) I tried mdadm --examine --scan:

root@arctic:~# mdadm --examine --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=c8e5de34:961c19ff:76057e05:202e9e31
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=aec97020:f0d5a2aa:9a4fcaff:367dee9f
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=4479e18c:46028497:e57d763b:9df20a5f
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=6a00caef:013f5002:fd252cbf:c8fed836
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=58f11fc0:21a26499:4ee83ba0:522d3147
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=e8c090ca:ef6c4740:fbb18e02:6e49d116
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=8420f770:444f6a79:a9527619:dbae0820
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=e4da4408:caf7d24f:0294b21a:06112a53
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=e370a7f8:bab77935:265bc735:5494d178

That finds the old arrays, in an alarming way: it doesn’t say which drives they are on, and the old arrays claim the same /dev/md0 through /dev/md3 names where they used to be assembled.  Did the system boot correctly by accident, just because the order of discovery happens to find the new arrays first?  Or last?  The good news is that I can identify which group of arrays is which by comparing the UUIDs with the ones in mdadm.conf.  Almost: I don’t remember which ARRAY was swap; otherwise I think they go /boot, /, and /home.
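In hindsight, I believe the verbose form of the scan would have answered the which-drives question directly: according to the man page, mdadm --examine --scan --verbose appends a devices= list to each ARRAY line.  A sketch of what I expect it would show (not captured from my session):

root@arctic:~# mdadm --examine --scan --verbose
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=58f11fc0:21a26499:4ee83ba0:522d3147
   devices=/dev/sdc1,/dev/sdd1
...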
In any case, mdadm has another command that will let me figure out which array is which, from the sizes:

root@arctic:~# mdadm -E /dev/sdc1
           UUID : 58f11fc0:21a26499:4ee83ba0:522d3147
  Creation Time : Wed Apr  5 19:10:09 2006
     Raid Level : raid1
     Array Size : 1028032 (1004.11 MiB 1052.70 MB)
   Raid Devices : 2

/dev/sdc1 was probably space for Windows, shudder.  Is it being improperly recognized as a RAID array?  Either way, I can ignore it: at about 1 GB it is too small to be anything I want.  (Note how the Array Size nearly matches sdc1’s 1028128 blocks in /proc/partitions; the small difference is md superblock overhead and rounding.  That is how sizes map arrays to partitions.)

root@arctic:~# mdadm -E /dev/sdc2
           UUID : e8c090ca:ef6c4740:fbb18e02:6e49d116
  Creation Time : Wed Apr  5 19:34:53 2006
     Raid Level : raid1
     Array Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 2

/dev/sdc2 looks like my old /boot.

root@arctic:~# mdadm -E /dev/sdc3
           UUID : 8420f770:444f6a79:a9527619:dbae0820
  Creation Time : Sat Apr  1 16:05:11 2006
     Raid Level : raid1
     Array Size : 1959808 (1914.20 MiB 2006.84 MB)

/dev/sdc3 looks like my old swap partition; it has about twice physical memory’s worth of space.

root@arctic:~# mdadm -E /dev/sdc4
mdadm: No md superblock detected on /dev/sdc4.

Oh, right, the extended partition.  Who thought that was a sensible design?

root@arctic:~# mdadm -E /dev/sdc5
/dev/sdc5:
           UUID : e4da4408:caf7d24f:0294b21a:06112a53
  Creation Time : Sat Apr  1 16:05:30 2006
     Raid Level : raid1
     Array Size : 35985472 (34.32 GiB 36.85 GB)

There is my old / (root) partition.

root@arctic:~# mdadm -E /dev/sdc6
/dev/sdc6:
           UUID : e370a7f8:bab77935:265bc735:5494d178
  Creation Time : Sat Apr  1 16:05:49 2006
     Raid Level : raid1
     Array Size : 205117824 (195.62 GiB 210.04 GB)

And my old /home.
Now I can edit mdadm.conf to add lines for the ones I want, namely my old root and /home.  I may as well add /boot too.  I will assign them higher md numbers.

# mdadm.conf
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=c8e5de34:961c19ff:76057e05:202e9e31
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=aec97020:f0d5a2aa:9a4fcaff:367dee9f
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=4479e18c:46028497:e57d763b:9df20a5f
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=6a00caef:013f5002:fd252cbf:c8fed836
ARRAY /dev/md4 level=raid1 num-devices=2 UUID=e8c090ca:ef6c4740:fbb18e02:6e49d116
ARRAY /dev/md5 level=raid1 num-devices=2 UUID=e4da4408:caf7d24f:0294b21a:06112a53
ARRAY /dev/md6 level=raid1 num-devices=2 UUID=e370a7f8:bab77935:265bc735:5494d178
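Rather than typing the UUIDs in by hand, I believe the usual idiom is to append the scan output and then edit:

root@arctic:~# mdadm --examine --scan >> /etc/mdadm/mdadm.conf

followed by deleting the duplicate entries for the new arrays and renumbering the old arrays’ /dev/mdN names, as above.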

Now I can start the arrays with mdadm, and they will also probably auto-start on the next boot.
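One caveat, if I understand the Debian/Ubuntu boot machinery: the initramfs carries its own copy of mdadm.conf, so refreshing it should make the boot-time auto-start more reliable:

root@arctic:~# update-initramfs -u

First, though, the manual assembly: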

root@arctic:/etc/mdadm# mdadm -A /dev/md6 -u e370a7f8:bab77935:265bc735:5494d178
mdadm: /dev/md6 has been started with 2 drives.
root@arctic:/etc/mdadm# mdadm -A /dev/md5 -u e4da4408:caf7d24f:0294b21a:06112a53
mdadm: /dev/md5 has been started with 2 drives.
root@arctic:/etc/mdadm# mdadm -A /dev/md4 -u e8c090ca:ef6c4740:fbb18e02:6e49d116
mdadm: /dev/md4 has been started with 2 drives.
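I did them one at a time, by UUID, to be careful, but I believe a single command would have assembled everything listed in mdadm.conf that was not already running:

root@arctic:/etc/mdadm# mdadm --assemble --scan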

Let’s try to look at the files:

root@arctic:/etc/mdadm# mkdir /mnt/old
root@arctic:/etc/mdadm# mount /dev/md5 /mnt/old
root@arctic:/etc/mdadm# mount /dev/md6 /mnt/old/home
root@arctic:/etc/mdadm# mount /dev/md4 /mnt/old/boot
root@arctic:/etc/mdadm# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md2              28834620   2762164  24607736  11% /
/dev/md0                 90195     24721     60662  29% /boot
/dev/md3             930582752 205238388 678073512  24% /home
/dev/md5              35984368   9025652  26958716  26% /mnt/old
/dev/md6             205111560 204432048    679512 100% /mnt/old/home
/dev/md4                101018      8514     87288   9% /mnt/old/boot
root@arctic:/etc/mdadm#
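If I wanted the old sets mounted on every boot, lines like the following in /etc/fstab should do it; I am guessing ext3 as the filesystem type (the 2006-era default), and mounting read-only since I only need to read from them:

/dev/md5   /mnt/old        ext3   ro   0   0
/dev/md6   /mnt/old/home   ext3   ro   0   0
/dev/md4   /mnt/old/boot   ext3   ro   0   0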

Now I can copy my old files to the new drives.
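For the copy itself I would reach for rsync, which preserves ownership, permissions, and hard links; a sketch using the mount points above (the destination path is illustrative):

root@arctic:~# rsync -aH /mnt/old/home/ /home/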
May I rant?  This is way harder than it should be.  I suspect the problem is documentation.  I am a great fan of presenting documentation as reference plus examples.  I have a hard time with reference material without examples, because the authors usually don’t define their vocabulary, so it is hard to tell which option you want.  A good set of examples usually gets me over the hump.  The difficulty with the Linux MD machinery is that the examples you can find on the web are usually about creating new RAID sets, rather than discovering and attaching to old ones.  As a consequence I was nervous the whole time that I would accidentally use commands that erase my old files, rather than find and access them.
Reference material without examples is useful if you use the commands often enough to know the vocabulary and generally know what to do; the reference helps with the one new feature you need.  With something like mdadm, you can go years without using it and forget even the basics.  Examples help a lot.  I am not talking about tutorial material, which I always find too slow and not on point.  Tutorials lead you painfully through the one path the author thought was useful; they don’t provide quick examples of a dozen use cases.
Well-done GUIs are another approach.  Sometimes it is just obvious what to do.  With all the options laid out, a GUI can be just the thing for an operation you don’t use often enough to remember.  The Mac OS Disk Utility is pretty good.  I also like hover popups that explain what a button would do, should you click it.
