[ovirt-users] Problems with some vms

Endre Karlson endre.karlson at gmail.com
Wed Jan 17 06:54:45 UTC 2018


It's there now for each of the hosts. ovirt1 is not in service yet.

2018-01-17 5:52 GMT+01:00 Gobinda Das <godas at redhat.com>:

> In the above URL only the data and ISO mount logs are present; the engine
> and vmstore mount logs are missing.
>
> On Wed, Jan 17, 2018 at 1:26 AM, Endre Karlson <endre.karlson at gmail.com>
> wrote:
>
>> Hi, all the mount logs are located here:
>> https://www.dropbox.com/sh/3qzmwe76rkt09fk/AABzM9rJKbH5SBPWc31Npxhma?dl=0
>>
>> Additionally, we replaced a broken disk, which has now resynced.
>>
>> 2018-01-15 11:17 GMT+01:00 Gobinda Das <godas at redhat.com>:
>>
>>> Hi Endre,
>>>  The mount logs will be in the following format inside /var/log/glusterfs:
>>>
>>>      /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*\:_engine.log
>>>     /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*\:_data.log
>>>     /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*\:_vmstore.log
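[A quick way to scan those mount logs for trouble is sketched below. This is not part of the original thread: the function name is made up, and the severity pattern assumes the gluster 3.x log format, where each line carries a severity letter (E = error, W = warning) after the timestamp.]

```shell
# Hypothetical sketch: scan a directory of gluster mount logs for
# recent error/warning lines. Pass the log directory as an argument,
# e.g. scan_mount_logs /var/log/glusterfs
scan_mount_logs() {
    dir="$1"
    for log in "$dir"/rhev-data-center-mnt-glusterSD-*.log; do
        [ -e "$log" ] || continue      # glob matched nothing
        echo "== $log =="
        # keep only lines whose severity letter is E (error) or W (warning)
        grep -E ' [EW] \[' "$log" | tail -n 20
    done
}

scan_mount_logs /var/log/glusterfs
```

[Pointing it at a copied log directory works too, which is handy when collecting logs from several hosts for a list post like this one.]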
>>>
>>> On Mon, Jan 15, 2018 at 11:57 AM, Endre Karlson <endre.karlson at gmail.com
>>> > wrote:
>>>
>>>> Hi.
>>>>
>>>> What are the gluster mount logs?
>>>>
>>>> I have these gluster logs:
>>>>
>>>>     cli.log
>>>>     cmd_history.log
>>>>     etc-glusterfs-glusterd.vol.log
>>>>     glfsheal-data.log
>>>>     glfsheal-engine.log
>>>>     glfsheal-iso.log
>>>>     glusterd.log
>>>>     glustershd.log
>>>>     nfs.log
>>>>     rhev-data-center-mnt-glusterSD-ovirt0:_data.log
>>>>     rhev-data-center-mnt-glusterSD-ovirt0:_engine.log
>>>>     rhev-data-center-mnt-glusterSD-ovirt0:_iso.log
>>>>     rhev-data-center-mnt-glusterSD-ovirt3:_iso.log
>>>>     statedump.log
>>>>
>>>> I am running these versions:
>>>> glusterfs-server-3.12.4-1.el7.x86_64
>>>> glusterfs-geo-replication-3.12.4-1.el7.x86_64
>>>> libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.7.x86_64
>>>> glusterfs-libs-3.12.4-1.el7.x86_64
>>>> glusterfs-api-3.12.4-1.el7.x86_64
>>>> python2-gluster-3.12.4-1.el7.x86_64
>>>> glusterfs-client-xlators-3.12.4-1.el7.x86_64
>>>> glusterfs-cli-3.12.4-1.el7.x86_64
>>>> glusterfs-events-3.12.4-1.el7.x86_64
>>>> glusterfs-rdma-3.12.4-1.el7.x86_64
>>>> vdsm-gluster-4.20.9.3-1.el7.centos.noarch
>>>> glusterfs-3.12.4-1.el7.x86_64
>>>> glusterfs-fuse-3.12.4-1.el7.x86_64
>>>>
>>>> // Endre
>>>>
>>>> 2018-01-15 6:11 GMT+01:00 Gobinda Das <godas at redhat.com>:
>>>>
>>>>> Hi Endre,
>>>>>  Can you please provide glusterfs mount logs?
>>>>>
>>>>> On Mon, Jan 15, 2018 at 6:16 AM, Darrell Budic <budic at onholyground.com
>>>>> > wrote:
>>>>>
>>>>>> What version of gluster are you running? I’ve seen a few of these
>>>>>> since moving my storage cluster to 12.3, but still haven’t been able
>>>>>> to determine what’s causing it. It seems to happen most often on VMs
>>>>>> that haven’t been switched over to libgfapi mounts yet, but even one
>>>>>> of those has paused once so far. They generally restart fine from the
>>>>>> GUI, and nothing seems to need healing.
>>>>>>
>>>>>> ------------------------------
>>>>>> *From:* Endre Karlson <endre.karlson at gmail.com>
>>>>>> *Subject:* [ovirt-users] Problems with some vms
>>>>>> *Date:* January 14, 2018 at 12:55:45 PM CST
>>>>>> *To:* users
>>>>>>
>>>>>> Hi, we are getting errors on some of our VMs in a 3-node server
>>>>>> setup.
>>>>>>
>>>>>> 2018-01-14 15:01:44,015+0100 INFO  (libvirt/events) [virt.vm]
>>>>>> (vmId='2c34f52d-140b-4dbe-a4bd-d2cb467b0b7c') abnormal vm stop
>>>>>> device virtio-disk0  error eother (vm:4880)
>>>>>>
>>>>>> We are running glusterfs for shared storage.
>>>>>>
>>>>>> I have tried setting global maintenance on the first server and then
>>>>>> issuing 'hosted-engine --vm-start', but that leads nowhere.
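[For reference, the usual sequence for that kind of manual engine start is roughly the sketch below. The `hosted-engine` subcommands are the standard ovirt-hosted-engine-setup CLI, but verify the flags against your installed version; the guard clause is only there so the snippet degrades gracefully off a hosted-engine host.]

```shell
#!/bin/sh
# Hedged sketch of a manual hosted-engine start, assuming an oVirt
# hosted-engine host. Global maintenance stops the HA agents from
# fighting the manual start.
if command -v hosted-engine >/dev/null 2>&1; then
    hosted-engine --set-maintenance --mode=global   # pause HA agent actions
    hosted-engine --vm-status                       # inspect current engine VM state
    hosted-engine --vm-start                        # attempt to start the engine VM
else
    echo "hosted-engine CLI not found; run this on a hosted-engine host"
fi
```

[If the start still goes nowhere, the agent and broker logs under /var/log/ovirt-hosted-engine-ha/ are usually the next place to look.]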
>>>>>> _______________________________________________
>>>>>> Users mailing list
>>>>>> Users at ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Thanks,
>>>>> Gobinda
>>>>> +91-9019047912
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Gobinda
>>> +91-9019047912
>>>
>>
>>
>
>
> --
> Thanks,
> Gobinda
> +91-9019047912
>