Using Ansible to automate failed drive replacement

Do we have any built-in Ansible modules/tasks that can help automate replacing a failed drive/volume in an oVirt hypervisor?

On Mon, Sep 7, 2020 at 11:51 AM, <kushagra2agarwal@gmail.com> asked the question above.
Hi, do you mean a failed drive within a Gluster Hyperconverged deployment?
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/TJOIKBZSJJPKSX...
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com
*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours. <https://mojo.redhat.com/docs/DOC-1199578>*

That's correct, Sandro!

I think replacing a failed disk won't be much different from replacing a failing host. There's a session today at the conference, *Replacing gluster host in oVirt-Engine <https://youtu.be/dFWW5fEupYQ>*; Prajith Kesava Prasad <https://twitter.com/PrajithKPrasad> <pkesavap@redhat.com> or Gobinda Das <godas@redhat.com> can probably elaborate on this.

I'm unable to view the original thread, so apologies if the content of my message is already repeated. Judging by the title of the email, I would say Sandro is right: if one of the nodes in your Gluster-enabled cluster has a failed disk, you could replace the disk with a new one (thus losing all your old data, thereby making the host effectively new), run the "same node FQDN" replace-host procedure, and then follow these instructions to set up your replace-host playbook: <https://github.com/gluster/gluster-ansible/blob/master/playbooks/hc-ansible-deployment/README#L52>.

Kind regards,
Prajith Kesava Prasad

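To make the replace-host flow above a little more concrete: the playbook described in the linked README is driven by an inventory that describes the rebuilt node and its new storage. The fragment below is an illustrative sketch only; the hostname, device name, and sizes are placeholders, and the variable layout is an assumption based on the gluster-ansible-infra backend-setup roles rather than a copy of the README, so follow the linked README for the authoritative format.

```yaml
# Illustrative inventory fragment for the node whose disk was replaced.
# ASSUMPTIONS: the FQDN, /dev/sdb, and the VG/thinpool/LV names and sizes
# are placeholders; see the gluster-ansible hc-ansible-deployment README
# for the real layout expected by the replace-host playbook.
hc_nodes:
  hosts:
    host1.example.com:                  # same FQDN as the failed node
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb              # the new replacement disk
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdb
          thinpoolname: gluster_thinpool_sdb
          poolmetadatasize: 3G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_sdb
          lvname: gluster_lv_data
          lvsize: 500G
```

The playbook is then run against this inventory in the usual way (e.g. `ansible-playbook -i <inventory file> <replace-host playbook>`), after which Gluster heals the rebuilt bricks from the surviving replicas.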
participants (3)
- kushagra2agarwal@gmail.com
- Prajith Kesava Prasad
- Sandro Bonazzola