[ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 25.1.17 ] [hotplug_disk]

Eyal Edri eedri at redhat.com
Thu Jan 26 08:43:48 UTC 2017


On Thu, Jan 26, 2017 at 10:41 AM, Nir Soffer <nsoffer at redhat.com> wrote:

> On Thu, Jan 26, 2017 at 9:46 AM, Yaniv Kaul <ykaul at redhat.com> wrote:
> > I'm seeing a failure in 4.1, which I've never seen in the past:
> > 2017-01-26 02:21:00,124 ERROR (mailbox-spm) [storage.MailBox.SpmMailMonitor]
> > SPM_MailMonitor: mailbox 2 checksum failed, not clearing mailbox, clearing
> > newMail. (storage_mailbox:618)
>
> I have seen this in the past; it is a very rare error.
>
> I'm not clear why this happens, whether it has any effect, or how we can
> avoid it.
>
> Best to open a bug about it; we should investigate it.
>

Not sure if it's the same error, but we have an open bug on the same test:

https://bugzilla.redhat.com/show_bug.cgi?id=1416386
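For context on what that checksum error means: the SPM mailbox monitor verifies a checksum over each mailbox before acting on it, and on a mismatch it does not clear the mailbox but drops the accumulated newMail. A minimal sketch of that kind of check; the additive checksum and the trailing 4-byte field here are illustrative, not vdsm's actual on-disk format (see storage_mailbox.py for the real one):

```python
def checksum(data):
    # Additive checksum over the payload bytes, truncated to 32 bits
    # (illustrative; vdsm's real scheme lives in storage_mailbox.py).
    return sum(data) & 0xFFFFFFFF

def read_mailbox(mailbox):
    # Illustrative layout: payload followed by a 4-byte little-endian checksum.
    payload = mailbox[:-4]
    stored = int.from_bytes(mailbox[-4:], "little")
    if checksum(payload) != stored:
        # Mirrors the monitor's behaviour on mismatch: drop the new mail.
        return None
    return payload
```

A mismatch like the one in the log would mean a reader observed the mailbox block while a writer was mid-update, or the block was otherwise torn or corrupted.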


>
> > (from host1).
> >
> > There are also some new warnings around monitoring, such as:
> > 2017-01-26 02:20:58,035 WARN  (libvirt/events) [virt.vm]
> > (vmId='85e8df3c-8bdf-470f-8c28-baa521d663de') unknown eventid 8 args
> > ('/rhev/data-center/27d461cd-03b2-4b2b-a1f8-157c27bb710e/2e10aa41-24f1-4c83-8e29-15f772a2d04b/images/304c3f03-61a6-4b79-a748-174a4cd24f84/3f96ca05-e8f3-41f2-8b9c-c7d170b24f92',
> > 3, 0, 8) (clientIF:540)
>
> This means that libvirt sent an event that we are not handling; there is
> nothing to warn about.
>
> Since this event is a change in system state, an info-level log would be
> nice.
>
> The number 8 is not very helpful; we need to map the number to the event
> name as documented in libvirt:
> https://libvirt.org/html/libvirt-libvirt-domain.html#virDomainEventID
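One possible sketch of such a mapping: scan the bindings' namespace for VIR_DOMAIN_EVENT_ID_* constants and invert them. A stand-in namespace is used here instead of the real libvirt module so the snippet is self-contained; with libvirt-python installed, `vars(libvirt)` would be passed instead. The two values shown follow the enum order documented at the URL above (LIFECYCLE is 0, BLOCK_JOB is 8).

```python
from types import SimpleNamespace

def event_id_names(ns, prefix="VIR_DOMAIN_EVENT_ID_"):
    # Map numeric event IDs to their enum names by scanning the namespace.
    return {value: name[len(prefix):]
            for name, value in vars(ns).items()
            if name.startswith(prefix) and isinstance(value, int)}

# Stand-in for the real bindings, with an illustrative subset of the enum.
fake_libvirt = SimpleNamespace(
    VIR_DOMAIN_EVENT_ID_LIFECYCLE=0,
    VIR_DOMAIN_EVENT_ID_BLOCK_JOB=8,
)

names = event_id_names(fake_libvirt)
# names[8] == "BLOCK_JOB", so the warning could log the name instead of "8"
```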
>
> Ala is working on handling this and other events during live merge for 4.2.
>
> > On Wed, Jan 25, 2017 at 2:17 PM, Gil Shinar <gshinar at redhat.com> wrote:
> >>
> >> Link to job:
> >> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4951/
> >>
> >> Sorry for the mistake
> >>
> >> On Wed, Jan 25, 2017 at 1:43 PM, Gil Shinar <gshinar at redhat.com> wrote:
> >>>
> >>> Hi,
> >>>
> >>> The test in $subject has failed. Please find below the exception I
> >>> found in engine.log.
> >>>
> >>> {"jsonrpc": "2.0", "id": "4b8eb810-c52d-4c92-a792-e554f87c9493",
> "error":
> >>> {"message": "Cannot deactivate Logical Volume: ('General Storage
> Exception:
> >>> (\"5 [] [\\'  WARNING: Not using lvmetad because config setting
> >>> use_lvmetad=0.\\', \\'  WARNING: To avoid corruption, rescan devices
> to make
> >>> changes visible (pvscan --cache).\\', \\'  Logical volume
> >>> f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5
> in
> >>> use.\\', \\'  Logical volume
> >>> f9dce023-0282-4185-9ad9-fe71c3975106/ab4e8962-6196-485e-be2a-d5791a38eaeb
> in
> >>> use.\\']\\\\nf9dce023-0282-4185-9ad9-fe71c3975106/[\\'
> 778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5\\',
> >>> \\'ab4e8962-6196-485e-be2a-d5791a38eaeb\\']\",)',)", "code": 552}}�
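For triage, the interesting parts of a response like the one above are the numeric code and the first line of the message. A small helper to extract them; this is a sketch for log analysis, not part of the engine or vdsm code:

```python
import json

def parse_rpc_error(raw):
    """Return (code, first line of message) from a JSON-RPC error response."""
    err = json.loads(raw).get("error") or {}
    return err.get("code"), (err.get("message") or "").split("\n")[0]

code, summary = parse_rpc_error(
    '{"jsonrpc": "2.0", "id": "4b8eb810", '
    '"error": {"message": "Cannot deactivate Logical Volume: ...", "code": 552}}'
)
# code == 552; summary starts with "Cannot deactivate Logical Volume"
```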
> >>>
> >>> 2017-01-25 05:05:49,037-05 DEBUG
> >>> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) []
> >>> Message received: {"jsonrpc": "2.0", "id":
> >>> "4b8eb810-c52d-4c92-a792-e554f87c9493", "error": {"message": "Cannot
> >>> deactivate Logical Volume: ('General Storage Exception: (\"5 [] [\\'
> >>> WARNING: Not using lvmetad because config setting use_lvmetad=0.\\', \\'
> >>> WARNING: To avoid corruption, rescan devices to make changes visible
> >>> (pvscan --cache).\\', \\'  Logical volume
> >>> f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5
> >>> in use.\\', \\'  Logical volume
> >>> f9dce023-0282-4185-9ad9-fe71c3975106/ab4e8962-6196-485e-be2a-d5791a38eaeb
> >>> in use.\\']\\\\nf9dce023-0282-4185-9ad9-fe71c3975106/[\\'778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5\\',
> >>> \\'ab4e8962-6196-485e-be2a-d5791a38eaeb\\']\",)',)", "code": 552}}
> >>> 2017-01-25 05:05:49,047-05 ERROR
> >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >>> (DefaultQuartzScheduler5) [59ab00f1] EVENT_ID:
> >>> VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: null,
> >>> Custom Event ID: -1, Message: VDSM lago-basic-suite-master-host0 command
> >>> TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General
> >>> Storage Exception: ("5 [] [\'  WARNING: Not using lvmetad because config
> >>> setting use_lvmetad=0.\', \'  WARNING: To avoid corruption, rescan devices
> >>> to make changes visible (pvscan --cache).\', \'  Logical volume
> >>> f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5
> >>> in use.\', \'  Logical volume
> >>> f9dce023-0282-4185-9ad9-fe71c3975106/ab4e8962-6196-485e-be2a-d5791a38eaeb
> >>> in use.\']\\nf9dce023-0282-4185-9ad9-fe71c3975106/[\'778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5\',
> >>> \'ab4e8962-6196-485e-be2a-d5791a38eaeb\']",)',)
> >>> 2017-01-25 05:05:49,047-05 ERROR
> >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand]
> >>> (DefaultQuartzScheduler5) [59ab00f1] Command
> >>> 'TeardownImageVDSCommand(HostName = lago-basic-suite-master-host0,
> >>> ImageActionsVDSCommandParameters:{runAsync='true',
> >>> hostId='60e52527-b637-445c-b408-0275d347e76a'})' execution failed:
> >>> VDSGenericException: VDSErrorException: Failed in vdscommand to
> >>> TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General
> >>> Storage Exception: ("5 [] [\'  WARNING: Not using lvmetad because config
> >>> setting use_lvmetad=0.\', \'  WARNING: To avoid corruption, rescan devices
> >>> to make changes visible (pvscan --cache).\', \'  Logical volume
> >>> f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5
> >>> in use.\', \'  Logical volume
> >>> f9dce023-0282-4185-9ad9-fe71c3975106/ab4e8962-6196-485e-be2a-d5791a38eaeb
> >>> in use.\']\\nf9dce023-0282-4185-9ad9-fe71c3975106/[\'778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5\',
> >>> \'ab4e8962-6196-485e-be2a-d5791a38eaeb\']",)',)
> >>> 2017-01-25 05:05:49,047-05 DEBUG
> >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand]
> >>> (DefaultQuartzScheduler5) [59ab00f1] Exception:
> >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> >>> VDSGenericException: VDSErrorException: Failed in vdscommand to
> >>> TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General
> >>> Storage Exception: ("5 [] [\'  WARNING: Not using lvmetad because config
> >>> setting use_lvmetad=0.\', \'  WARNING: To avoid corruption, rescan devices
> >>> to make changes visible (pvscan --cache).\', \'  Logical volume
> >>> f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5
> >>> in use.\', \'  Logical volume
> >>> f9dce023-0282-4185-9ad9-fe71c3975106/ab4e8962-6196-485e-be2a-d5791a38eaeb
> >>> in use.\']\\nf9dce023-0282-4185-9ad9-fe71c3975106/[\'778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5\',
> >>> \'ab4e8962-6196-485e-be2a-d5791a38eaeb\']",)',)
> >>> at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:182) [vdsbroker.jar:]
> >>> at org.ovirt.engine.core.vdsbroker.vdsbroker.ImageActionsVDSCommandBase.executeVdsBrokerCommand(ImageActionsVDSCommandBase.java:20) [vdsbroker.jar:]
> >>> at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:111) [vdsbroker.jar:]
> >>> at org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:73) [vdsbroker.jar:]
> >>> at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) [dal.jar:]
> >>> at org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14) [vdsbroker.jar:]
> >>> at org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:407) [vdsbroker.jar:]
> >>> at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
> >>> at org.ovirt.engine.core.bll.storage.disk.image.ImagesHandler.teardownImage(ImagesHandler.java:1007) [bll.jar:]
> >>> at org.ovirt.engine.core.bll.storage.disk.image.ImagesHandler.getQemuImageInfoFromVdsm(ImagesHandler.java:856) [bll.jar:]
> >>> at org.ovirt.engine.core.bll.storage.disk.image.BaseImagesCommand.endSuccessfully(BaseImagesCommand.java:367) [bll.jar:]
> >>> at org.ovirt.engine.core.bll.CommandBase.internalEndSuccessfully(CommandBase.java:736) [bll.jar:]
> >>> at org.ovirt.engine.core.bll.CommandBase.endActionInTransactionScope(CommandBase.java:694) [bll.jar:]
> >>> at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2057) [bll.jar:]
> >>> at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164) [utils.jar:]
> >>> at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103) [utils.jar:]
> >>> at org.ovirt.engine.core.bll.CommandBase.endAction(CommandBase.java:559) [bll.jar:]
> >>> at org.ovirt.engine.core.bll.Backend.endAction(Backend.java:536) [bll.jar:]
> >>>
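As for the "Logical volume ... in use" failure itself: LVM refuses to deactivate an LV that some process (for example, a qemu still holding the disk open) has open. The open state shows up in the lv_attr field reported by `lvs`; a tiny helper to check it, assuming the standard lvs(8) attribute layout where the sixth character is the device-open flag:

```python
def lv_is_open(lv_attr):
    # lv_attr as printed by `lvs -o lv_attr`, e.g. "-wi-ao----";
    # per lvs(8), the sixth character is the device-open state ('o' = open).
    return len(lv_attr) >= 6 and lv_attr[5] == "o"

# An open (in-use) LV is exactly the case where `lvchange -an` fails
# the way TeardownImageVDS did above.
```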
> >>>
> >>> Link to suspected patches: https://gerrit.ovirt.org/#/c/71132/
> >>> Link to Job:
> >>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4950/
> >>> Link to all logs:
> >>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4951/artifact/exported-artifacts/basic_suite_master.sh-el7/test_logs/basic-suite-master/post-004_basic_sanity.py/
> >>>
> >>> Thanks
> >>> Gil
> >>
> >>
> >>
> >> _______________________________________________
> >> Devel mailing list
> >> Devel at ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/devel
> >
> >
>
>
>


-- 
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
