Usage: mdadm --create md-device --chunk=X --level=Y --raid-devices=Z devices
This usage will initialise a new md array, associate some devices
with it, and activate the array.
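A minimal invocation following this usage might look as below; the device
names are placeholders, the command needs root privileges, and any data on
the named partitions will be destroyed:

```shell
# Create a 3-device RAID5 array with a 64K chunk size.
# /dev/sdb1, /dev/sdc1 and /dev/sdd1 are illustrative partitions.
mdadm --create /dev/md0 --level=5 --chunk=64 --raid-devices=3 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1
```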
The named device will normally not exist when mdadm --create is
run, but will be created by udev once the array becomes active.
As devices are added, they are checked to see if they contain
RAID superblocks or filesystems. They are also checked to see if
the variance in device size exceeds 1%.
If any discrepancy is found, the array will not automatically be
run, though the presence of --run can override this caution.
To create a "degraded" array in which some devices are missing,
simply give the word "missing" in place of a device name. This
will cause mdadm to leave the corresponding slot in the array
empty. For a RAID4 or RAID5 array at most one slot can be
"missing"; for a RAID6 array at most two slots. For a RAID1
array, only one real device needs to be given. All of the others
can be "missing".
When creating a RAID5 array, mdadm will automatically create a
degraded array with an extra spare drive. This is because
building the spare into a degraded array is in general faster
than resyncing the parity on a non-degraded, but not clean,
array. This feature can be overridden with the --force option.
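A sketch of overriding this behaviour, again with placeholder devices:

```shell
# Build the RAID5 array non-degraded from the start, forcing an
# initial parity resync instead of the degraded-plus-spare build.
mdadm --create /dev/md0 --level=5 --raid-devices=3 --force \
      /dev/sdb1 /dev/sdc1 /dev/sdd1
```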
When creating an array with version-1 metadata a name for the
array is required. If this is not given with the --name option,
mdadm will choose a name based on the last component of the name
of the device being created. So if /dev/md3 is being created,
then the name 3 will be chosen. If /dev/md/home is being
created, then the name home will be used.
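Giving the name explicitly avoids relying on that derivation; for example
(device names are illustrative):

```shell
# Name the array explicitly rather than letting mdadm derive it
# from the device path.
mdadm --create /dev/md/home --metadata=1.2 --name=home \
      --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
```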
When creating a partition-based array, using mdadm with
version-1.x metadata, the partition type should be set to 0xDA
(Non-FS data). This type allows for greater precision, since
using any other type, such as RAID auto-detect (0xFD) or a
GNU/Linux partition (0x83), might create problems in the event
of array recovery through a live CD-ROM.
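One way to set that type, assuming sfdisk from util-linux and an
illustrative disk and partition number:

```shell
# Set partition 1 on /dev/sdb to MBR type 0xDA (Non-FS data).
sfdisk --part-type /dev/sdb 1 da
```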
A new array will normally get a randomly assigned 128-bit UUID
which is very likely to be unique. If you have a specific need,
you can choose a UUID for the array by giving the --uuid= option.
Be warned that creating two arrays with the same UUID is a recipe
for disaster. Also, using --uuid= when creating a v0.90 array
will silently override any --homehost= setting.
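A sketch of supplying a UUID at creation time; the UUID value and device
names here are purely illustrative:

```shell
# Create an array with an explicitly chosen UUID.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --uuid=89a7fb47:2f9f1b7d:04a26f33:b9c1e7c2 \
      /dev/sdb1 /dev/sdc1
```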
If the array type supports a write-intent bitmap, and if the
devices in the array exceed 100G in size, an internal write-
intent bitmap will automatically be added unless some other
option is explicitly requested with the --bitmap option or a
different consistency policy is selected with the
--consistency-policy option. In any case space for a bitmap will
be reserved so that one can be added later with --grow
--bitmap=internal.
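Adding the bitmap after the fact, on an illustrative array device:

```shell
# Add an internal write-intent bitmap to an existing array.
mdadm --grow --bitmap=internal /dev/md0
```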
If the metadata type supports it (currently only 1.x and IMSM
metadata), space will be allocated to store a bad block list.
This allows a modest number of bad blocks to be recorded,
allowing the drive to remain in service while only partially
functional.
When creating an array within a CONTAINER, mdadm can be given
either the list of devices to use, or simply the name of the
container. The former case gives control over which devices in
the container will be used for the array. The latter case allows
mdadm to automatically choose which devices to use based on how
much spare space is available.
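The two styles can be sketched as follows, assuming an IMSM container and
illustrative device names; passing only the container name in the second
command leaves device selection to mdadm:

```shell
# Create an IMSM container, then an array inside it.
mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 \
      /dev/sdb /dev/sdc
mdadm --create /dev/md/vol0 --level=1 --raid-devices=2 /dev/md/imsm0
```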
The General Management options that are valid with --create are:
--run
insist on running the array even if some devices look like
they might be in use.
--readonly
start the array in readonly mode.