If you don't have a RAID controller with battery-backed cache, disable the RAID in
the BIOS. Fake RAID is just like software RAID: it uses the CPU for the calculations,
and it can be a real problem if the motherboard dies.
Software RAID (controlled by mdadm) and LVM RAID (the same driver, but controlled from
LVM) can easily be recovered on any type of Linux (as long as it is not too old, of
course) and do not have the downsides of fake RAID.
Strahil Nikolov

On Jul 4, 2019 13:41, rubentrindade(a)live.com wrote:
So, following your advice, what I did was run "dd status=progress if=/dev/zero
of=/dev/sda count=1", and that did indeed fix the anaconda startup issue.
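For context on why that command works: with dd's default block size of 512 bytes, `count=1` zeroes only the first sector of the disk, i.e. the MBR and its partition table, which is presumably the stale metadata anaconda was choking on. A minimal sketch of that behavior, run against a scratch file rather than a real disk (do NOT point this at /dev/sdX unless you mean to wipe its partition table):

```shell
# Create a 1 MiB scratch "disk" filled with random (nonzero) data.
scratch=$(mktemp)
dd if=/dev/urandom of="$scratch" bs=1024 count=1024 2>/dev/null

# Zero just the first sector, as in the original command.
# conv=notrunc is needed here because the target is a regular file
# (dd would otherwise truncate it); on a block device it is a no-op.
dd if=/dev/zero of="$scratch" conv=notrunc count=1 2>/dev/null

# The first 512 bytes are now all NUL bytes...
head -c 512 "$scratch" | tr -d '\0' | wc -c   # prints 0

# ...and the rest of the file is untouched (still 1 MiB in size).
wc -c < "$scratch"
```

Note that this only clears the front of the disk; a GPT-partitioned disk also keeps a backup header in the last sectors, which `count=1` does not touch.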
By the way, when I said mirrored, I meant RAID 1, so whatever I do to one disk happens
to both at the same time. The RAID is being set up and managed in the BIOS. Maybe poor
wording on my side.
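Since the mirror was created in the BIOS, both disks may still carry firmware RAID signatures that survive zeroing the first sector. One way to check is wipefs from util-linux, which lists every signature it recognizes (and erases them with `-a`). A sketch, demonstrated on a scratch file carrying only a fake MBR boot signature; on real hardware you would point it at /dev/sda and /dev/sdb:

```shell
# Build a 1 MiB scratch file and plant the 0x55AA MBR boot
# signature at offset 510, where a real partition table keeps it.
img=$(mktemp)
truncate -s 1M "$img"
printf '\125\252' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null

# Without options wipefs only reports what it finds ("dos" here);
# "wipefs -a" would actually erase every signature it lists.
wipefs "$img"
```

On a disk that was part of a BIOS/fake RAID set, wipefs typically also reports a RAID member signature (e.g. `isw_raid_member` for Intel firmware RAID) stored near the end of the disk, which `dd ... count=1` does not touch; `wipefs -a /dev/sdX` removes those as well.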
The thing is, as soon as I try to create partitions/LVMs, it returns an error.
Here are the three different outputs:
LVM Thin Provisioning: https://i.imgur.com/tfbi0oo.jpg (curiously, it's the same
error output as when it crashed)
Standard Partition: https://i.imgur.com/j6gNRpo.jpg
After taking the screenshots above, I tried the same process using a CentOS 7 minimal
image in text mode to see the output, and there's no error output whatsoever.
Out of curiosity, I tried with a 4.3.3 oVirt Node image and got a similar error to the
one above (above I'm using 4.3.4), with slightly different wording, but still the same
error.
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/