Thanks for pointing me in the right direction; I was able to add the server
to the cluster by adding /etc/vdsm/vdsm.id.
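For the record, the fix was essentially the one from Leo's link below; a
minimal sketch (the vdsmd restart is my assumption about the setup, adjust
as needed):

uuidgen > /etc/vdsm/vdsm.id    # give the rebuilt host its own unique id
systemctl restart vdsmd        # so VDSM picks up the new id
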
I will now try to create the new bricks and attempt a brick replacement. I
think I will have to do this part through the command line, because my
hyperconverged replica 3 setup is as follows:
/dev/sdb = /gluster_bricks/engine 100G
/dev/sdb = /gluster_bricks/vmstore1 2600G
/dev/sdc = /gluster_bricks/data1 2700G
/dev/sdd = /gluster_bricks/data2 2700G
/dev/sde = caching disk.
The issue I see here is that I don't see an option in the web UI to create
two bricks on the same /dev/sdb (one of 100GB for the engine and one of
2600GB for vmstore1).
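In case it helps anyone searching the archives later, this is roughly what
I plan to run from the shell; every VG/LV/pool name and the 500G cache size
are my own placeholders, not anything oVirt generates:

pvcreate /dev/sdb
vgcreate gluster_vg_sdb /dev/sdb
# thick LV for the engine brick, thin pool + thin LV for vmstore1
# (sizes from the layout above; thin-pool metadata needs some headroom,
# so shrink the pool slightly if the VG is tight)
lvcreate -L 100G -n gluster_lv_engine gluster_vg_sdb
lvcreate -L 2600G -T gluster_vg_sdb/vmstore1_pool
lvcreate -V 2600G -T gluster_vg_sdb/vmstore1_pool -n gluster_lv_vmstore1
mkfs.xfs -i size=512 /dev/gluster_vg_sdb/gluster_lv_engine
mkfs.xfs -i size=512 /dev/gluster_vg_sdb/gluster_lv_vmstore1
mkdir -p /gluster_bricks/engine /gluster_bricks/vmstore1
mount /dev/gluster_vg_sdb/gluster_lv_engine /gluster_bricks/engine
mount /dev/gluster_vg_sdb/gluster_lv_vmstore1 /gluster_bricks/vmstore1
# plus matching /etc/fstab entries, and the same again for sdc and sdd
# optional: attach the SSD as an lvmcache to the vmstore1 thin pool
pvcreate /dev/sde
vgextend gluster_vg_sdb /dev/sde
lvcreate -L 500G -n cachepool gluster_vg_sdb /dev/sde
lvconvert --type cache-pool gluster_vg_sdb/cachepool
lvconvert --type cache --cachepool gluster_vg_sdb/cachepool gluster_vg_sdb/vmstore1_pool
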
So if you have any ideas they are most welcome.
thanks again.
On Mon, Jun 10, 2019 at 4:35 PM Leo David <leoalex(a)gmail.com> wrote:
https://stijn.tintel.eu/blog/2013/03/02/ovirt-problem-duplicate-uuids
On Mon, Jun 10, 2019, 18:13 Adrian Quintero <adrianquintero(a)gmail.com>
wrote:
> OK, I have tried reinstalling the server from scratch with a different
> name and IP address, and when trying to add it to the cluster I get the
> following error:
>
> Event details
> ID: 505
> Time: Jun 10, 2019, 10:00:00 AM
> Message: Host myshost2.virt.iad3p installation failed. Host
> myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> I am at a loss here; I don't have a brand new server to do this and I
> need to re-use what I have.
>
>
> From the oVirt engine log (/var/log/ovirt-engine/engine.log):
> 2019-06-10 10:57:59,950-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-37744) [9b88055] EVENT_ID:
> VDS_INSTALL_FAILED(505), Host myhost2.mydomain.com installation failed.
> Host myhost2.mydomain.com reports unique id which already registered for
> myhost1.mydomain.com
>
> So in the host-deploy log on the engine,
> /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20190610105759-myhost2.mydomain.com-9b88055.log,
> I see that host-deploy runs the following command to identify the system.
> If that is the case then it will never work :( because it identifies each
> host by its hardware system UUID.
>
> dmidecode -s system-uuid
> b64d566e-055d-44d4-83a2-d3b83f25412e
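>
> (Assumption on my side: as far as I can tell, host-deploy only falls back
> to dmidecode when /etc/vdsm/vdsm.id does not exist, so comparing both
> values on each host should confirm the clash:
>
> dmidecode -s system-uuid            # hardware UUID, identical on clones
> cat /etc/vdsm/vdsm.id 2>/dev/null   # used instead when the file exists
>
> and writing fresh uuidgen output into that file should break the tie.)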
>
>
> Any suggestions?
>
> Thanks
>
> On Sat, Jun 8, 2019 at 11:23 AM Adrian Quintero <adrianquintero(a)gmail.com>
> wrote:
>
>> Leo,
>> I did try putting it under maintenance and checking the option to ignore
>> gluster, and it did not work.
>> Error while executing action:
>> - Cannot remove host. Server having gluster volume.
>>
>> Note: the server was already reinstalled, so gluster will never see the
>> volumes or bricks for this server.
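>>
>> (Read-only sanity checks, to see what gluster itself still believes;
>> the grep pattern is just the dead host's name:
>>
>> gluster peer status                  # the dead node should still be listed
>> gluster volume info | grep myhost1   # its bricks are still in the volumes
>>
>> which is presumably why the engine refuses the removal.)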
>>
>> I will rename the server to myhost2.mydomain.com and try to replace the
>> bricks; hopefully that will work. However, it would be good to know that
>> you can re-install an existing cluster server from scratch and put it
>> back into the cluster.
>>
>> Still doing research; hopefully we can find a way.
>>
>> thanks again
>>
>> Adrian
>>
>>
>>
>>
>> On Fri, Jun 7, 2019 at 2:39 AM Leo David <leoalex(a)gmail.com> wrote:
>>
>>> You will need to remove the storage role from that server first (so it
>>> is no longer part of the gluster cluster).
>>> I cannot test this right now on production, but maybe putting the host,
>>> although it has already died, under "maintenance" while checking the
>>> option to ignore the gluster warning will let you remove it.
>>> Maybe I am wrong about the procedure; can anybody offer advice to help
>>> with this situation?
>>> Cheers,
>>>
>>> Leo
>>>
>>>
>>>
>>>
>>> On Thu, Jun 6, 2019 at 9:45 PM Adrian Quintero <
>>> adrianquintero(a)gmail.com> wrote:
>>>
>>>> I tried removing the bad host but ran into the following issue; any
>>>> idea?
>>>> Operation Canceled
>>>> Error while executing action:
>>>>
>>>> host1.mydomain.com
>>>>
>>>> - Cannot remove Host. Server having Gluster volume.
>>>>
>>>>
>>>>
>>>>
>>>> On Thu, Jun 6, 2019 at 11:18 AM Adrian Quintero <
>>>> adrianquintero(a)gmail.com> wrote:
>>>>
>>>>> Leo, I forgot to mention that I have 1 SSD disk for caching purposes;
>>>>> how should that setup be achieved?
>>>>>
>>>>> thanks,
>>>>>
>>>>> Adrian
>>>>>
>>>>> On Wed, Jun 5, 2019 at 11:25 PM Adrian Quintero <
>>>>> adrianquintero(a)gmail.com> wrote:
>>>>>
>>>>>> Hi Leo, yes, this helps a lot; it confirms the plan we had in mind.
>>>>>>
>>>>>> Will test tomorrow and post the results.
>>>>>>
>>>>>> Thanks again
>>>>>>
>>>>>> Adrian
>>>>>>
>>>>>> On Wed, Jun 5, 2019 at 11:18 PM Leo David <leoalex(a)gmail.com> wrote:
>>>>>>
>>>>>>> Hi Adrian,
>>>>>>> I think the steps are:
>>>>>>> - reinstall the host
>>>>>>> - join it to virtualisation cluster
>>>>>>> And if it was a member of the gluster cluster as well:
>>>>>>> - go to host - storage devices
>>>>>>> - create the bricks on the devices, as they are on the other hosts
>>>>>>> - go to storage - volumes
>>>>>>> - replace each failed brick with the corresponding new one.
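>>>>>>> For that last step, a rough command-line equivalent in case the UI
>>>>>>> refuses (volume name, host names and brick paths below are only
>>>>>>> placeholders):
>>>>>>>
>>>>>>> gluster volume replace-brick engine \
>>>>>>>   deadhost:/gluster_bricks/engine newhost:/gluster_bricks/engine \
>>>>>>>   commit force
>>>>>>> # repeat for each volume, then drop the dead peer:
>>>>>>> gluster peer detach deadhost
>>>>>>>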
>>>>>>> Hope it helps.
>>>>>>> Cheers,
>>>>>>> Leo
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Jun 5, 2019, 23:09 <adrianquintero(a)gmail.com> wrote:
>>>>>>>
>>>>>>>> Has anybody had to replace a failed host in a 3-, 6-, or 9-node
>>>>>>>> hyperconverged setup with gluster storage?
>>>>>>>>
>>>>>>>> One of my hosts is completely dead; I need to do a fresh install
>>>>>>>> using the oVirt Node ISO. Can anybody point me to the proper steps?
>>>>>>>>
>>>>>>>> thanks,
>>>>>>>>
>>>>>>> --
>>>>>> Adrian Quintero
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Adrian Quintero
>>>>>
>>>>
>>>>
>>>> --
>>>> Adrian Quintero
>>>>
>>>
>>>
>>> --
>>> Best regards, Leo David
>>>
>>
>>
>> --
>> Adrian Quintero
>>
>
>
> --
> Adrian Quintero
>