graid: unbreak Promise RAID1 with 4+ providers

Fix a problem in the graid implementation of Promise RAID1 created with 4+ disks.
Such an array generally works fine only until reboot, due to a bug
in the metadata writing code. Before the fix, the next taste erroneously created
RAID1E (a kind of RAID10) instead of RAID1, so graid used wrong offsets
for I/O operations.

The bug did not affect Promise RAID1 arrays with only 2 or 3 disks.

Reviewed by:	mav
MFC after:	3 days
Eugene Grosbein 2024-02-12 14:24:28 +07:00
parent ed27ae8df4
commit 81092e92ea


@@ -1762,8 +1762,9 @@ g_raid_md_write_promise(struct g_raid_md_object *md, struct g_raid_volume *tvol,
 	meta->total_disks = vol->v_disks_count;
 	meta->stripe_shift = ffs(vol->v_strip_size / 1024);
 	meta->array_width = vol->v_disks_count;
-	if (vol->v_raid_level == G_RAID_VOLUME_RL_RAID1 ||
-	    vol->v_raid_level == G_RAID_VOLUME_RL_RAID1E)
+	if (vol->v_raid_level == G_RAID_VOLUME_RL_RAID1)
+		meta->array_width = 1;
+	else if (vol->v_raid_level == G_RAID_VOLUME_RL_RAID1E)
 		meta->array_width /= 2;
 	meta->array_number = vol->v_global_id;
 	meta->total_sectors = vol->v_mediasize / 512;
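
For illustration only, a minimal userland sketch of the old versus new array_width computation, assuming (as the commit message implies) that taste treats a Promise volume with array_width of 1 as RAID1 and anything wider as RAID1E. The enum, function names, and main() harness below are hypothetical and are not taken from g_raid_md_promise.c; only the two arithmetic expressions mirror the diff above.

	/*
	 * Sketch: why the old formula breaks only with 4+ disks.
	 * With integer division, 2/2 == 1 and 3/2 == 1, so 2- and
	 * 3-disk RAID1 volumes still recorded array_width 1; a 4-disk
	 * volume recorded 2, which reads back as RAID1E on taste.
	 */
	#include <stdio.h>

	enum raid_level { RL_RAID1, RL_RAID1E };	/* hypothetical names */

	/* Old (buggy) computation: RAID1 handled like RAID1E. */
	static int
	array_width_old(enum raid_level level, int disks)
	{
		int width = disks;

		if (level == RL_RAID1 || level == RL_RAID1E)
			width /= 2;
		return (width);
	}

	/* Fixed computation: RAID1 always records a width of 1. */
	static int
	array_width_new(enum raid_level level, int disks)
	{
		int width = disks;

		if (level == RL_RAID1)
			width = 1;
		else if (level == RL_RAID1E)
			width /= 2;
		return (width);
	}

	int
	main(void)
	{
		int disks = 4;	/* a 4-provider Promise RAID1 */

		/* Old code writes 2; a width above 1 reads back as RAID1E. */
		printf("old array_width: %d\n", array_width_old(RL_RAID1, disks));
		/* New code writes 1, which reads back as plain RAID1. */
		printf("new array_width: %d\n", array_width_new(RL_RAID1, disks));
		return (0);
	}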