Linux Software RAID

One of the drives in my RAID died, so I went and bought a 3TB drive as a replacement. The RAID is only 1TB, so I'll use the rest of the new drive for something else.

It drives me nuts that every site out there describes in great detail how to do different things. Anyone can read the man page or issue parted's “help mkpart”, for example; what I usually look for is a quick start. So, here's a quick rundown on how I re-set up the RAID drive. At the bottom is a full rundown on how I think I originally set up the RAID, but don't quote me on it. 😀

We run through the basic use of parted and mdadm.

Create and Add RAID Mirror

# partition the new drive and flag the partition for RAID
parted /dev/sdb
mklabel gpt
mkpart primary 1 1TB
set 1 raid on
align-check optimal 1
quit

# add the new partition to the existing mirror and check its status
mdadm --manage /dev/md10 -a /dev/sdb1
cat /proc/mdstat
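Once the partition is added, the mirror resyncs in the background. Before trusting the array it's worth confirming the rebuild finished; these are just the standard mdadm status checks, using the same device names as above.

# watch the resync progress until it completes
watch -n 5 cat /proc/mdstat

# both devices should show as active/sync when done
mdadm --detail /dev/md10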


RAID Setup

# create a mirror with two devices
mdadm --create /dev/md10 --force --level=1 --raid-devices=2 /dev/sdb1 /dev/sde1

That’s really all there is to it. From there, you can just use /dev/md10 as you would any other drive or partition. You can use it as a raw disk with partitions, or turn it into an encrypted volume, or a physical volume for LVM, etc.
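For example, here is a minimal sketch of putting LVM on top of the array; the volume group and logical volume names (vg_raid, data) and the size are placeholders, not anything from my actual setup.

# use the mirror as an LVM physical volume
pvcreate /dev/md10
vgcreate vg_raid /dev/md10
lvcreate -L 500G -n data vg_raid
mkfs.ext4 /dev/vg_raid/data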


RAID Cheat Sheet

# create a mirror with only one device
mdadm --create /dev/md10 --force --level=1 --raid-devices=1 /dev/sdb3

# create a mirror with two devices
mdadm --create /dev/md10 --force --level=1 --raid-devices=2 /dev/sdb3 /dev/sdd1

# grow a one-device mirror to two devices and add the new device in one step
mdadm --grow /dev/md10 --raid-devices=2 -a /dev/sdd1

# fail and remove a device
mdadm --manage /dev/md10 --fail /dev/sdd1
mdadm --manage /dev/md10 -r /dev/sdd1

# convert a one-disk mirror into a RAID 5 (striped with parity) array, then add a second device
mdadm --grow /dev/md10 --level=raid5
mdadm --grow /dev/md10 --raid-devices=2 -a /dev/sdd1
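Whichever of these you run, check the result before relying on the array; these are the standard mdadm status commands and aren't specific to any one example above.

# show level, state, and member devices
mdadm --detail /dev/md10

# watch a rebuild or reshape as it progresses
cat /proc/mdstat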

Ubuntu won’t boot raided root

Well, I've had a lot of trouble switching my system from LVM to RAID1+LVM on Ubuntu 10.04. I got another drive for my system, created a mirror with one drive (temporarily, of course), asked LVM to move my entire system over to that physical device, added the previous drive to the RAID array, and rebooted (oops). I'm listing a few things that are important to know when you're both new to Ubuntu and doing RAID post-installation.
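For reference, the migration itself looked roughly like the following. This is a sketch from memory; the array name, partition names, and the sys volume group (which matches the error message further down) are the pieces you'd adjust for your own setup.

# create a degraded one-disk mirror on the new drive
mdadm --create /dev/md0 --force --level=1 --raid-devices=1 /dev/sdb1

# move the LVM volume group onto the new mirror
pvcreate /dev/md0
vgextend sys /dev/md0
pvmove /dev/sda1 /dev/md0
vgreduce sys /dev/sda1
pvremove /dev/sda1

# add the old drive's partition into the mirror
mdadm --grow /dev/md0 --raid-devices=2 -a /dev/sda1

# then run the update-grub / update-initramfs steps below before rebooting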

First and foremost, it is very important to realize that you need to run both update-initramfs and update-grub. update-grub re-configures grub based on your current system configuration; things like where /boot resides will be automagically updated in the grub configuration. update-initramfs takes care of any booting issues you might have after grub hands off, such as assembling RAID. So, any time you make a system change that might affect booting, run these commands before rebooting (thanks to Jordan_U on irc.freenode.org for the tips)...

update-grub
update-initramfs -ck all # all kernels and create (not update) the initramfs.

In the event of changing your /boot filesystem to another location, you should run the following as well. Thanks again to Jordan_U.

dpkg-reconfigure grub-pc

This ensures that:

  1. You can configure the devices that grub will install to
  2. Ubuntu is aware of the proper system configuration, so that it doesn't write an improper grub.cfg when upgrades occur
  3. update-grub is run automatically
  4. grub is installed to the configured devices

Now, the particular problem I was having, even though I was running the above commands, is that I kept getting dropped to a shell with the error “ALERT! /dev/mapper/sys-ubuntu does not exist. Dropping to a shell”. So, as we can see, it was not finding my LVM root filesystem. The big question is “Why?” Well, after poking around the system a bit to find out what update-initramfs was doing, I found the hook script /usr/share/initramfs-tools/hooks/mdadm, which copies /etc/mdadm/mdadm.conf verbatim into the initramfs. That configuration is supposed to “auto detect” my RAID arrays, but it doesn't. So, the way I fixed it was to run the following commands...

# regenerate mdadm.conf from the arrays that are currently running
mdadm --detail --scan > /etc/mdadm/mdadm.conf
# rebuild the initramfs for all installed kernels
update-initramfs -ck all
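For context, the scan output is just one ARRAY line per array, roughly like the line below; the device name, metadata version, and UUID here are made up for illustration.

# example line written into mdadm.conf (values are illustrative only)
ARRAY /dev/md0 metadata=1.2 name=myhost:0 UUID=a1b2c3d4:e5f60718:293a4b5c:6d7e8f90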

I then rebooted, and my system was very happy and booted up very quickly. I have since filed a bug report on Ubuntu's Launchpad.
