Weird ordering of target milestones in bugzilla
by Sandro Bonazzola
Hi,
not sure who touched the target milestones in oVirt Bugzilla, but
ovirt-4.1.7 was missing (I added it back to the projects I'm working on, as
needed). The remaining milestones have a very weird ordering:
Milestone       Sortkey  Active  Action
ovirt-4.1.7       0      Yes     Delete
ovirt-3.6.11     10      No      Delete
ovirt-4.0.7      20      No      Delete
ovirt-4.0.8      30      No      Delete
ovirt-4.1.1      40      No      Delete
ovirt-4.1.1-1    50      No      Delete
ovirt-4.1.2      60      No      Delete
ovirt-4.1.3      70      No      Delete
ovirt-4.4.0      70      Yes     Delete
ovirt-4.1.4      80      Yes     Delete
ovirt-4.5.0      80      Yes     Delete
ovirt-4.1.5      90      Yes     Delete
ovirt-4.1.6     100      Yes     Delete
ovirt-4.1.8     120      Yes     Delete
ovirt-4.1.9     130      Yes     Delete
ovirt-4.2.0     140      Yes     Delete
ovirt-4.3.0     150      Yes     Delete
Any good reason for such ordering?
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
hosted engine job fails starting VM
by Sandro Bonazzola
http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/20/...
In hosted engine setup logs:
2017-09-04 23:06:33,705-0400 DEBUG otopi.plugins.gr_he_setup.vm.runvm
mixins._create_vm:283 {u'status': {'message': 'Done', 'code': 0},
u'emulatedMachine': u'pc', u'vmId':
u'cef091fe-ff3e-4a90-b2f1-674773eac4ce', u'devices': [{u'device':
u'scsi', u'model': u'virtio-scsi', u'type': u'controller'},
{u'device': u'console', u'specParams': {u'enableSocket': u'true'},
u'type': u'console', u'deviceId':
u'494ce6e1-b43d-4c57-acd7-7a6c4943cc5a', u'alias': u'console0'},
{u'index': u'2', u'iface': u'ide', u'specParams': {}, u'readonly':
u'true', u'deviceId': u'cfa4e0a2-6261-4055-b815-91bff9b78ab6',
u'address': {u'bus': u'1', u'controller': u'0', u'type': u'drive',
u'target': u'0', u'unit': u'0'}, u'device': u'cdrom', u'shared':
u'false', u'path': u'/tmp/tmpnlD4s1/seed.iso', u'type': u'disk'},
{u'index': u'0', u'iface': u'virtio', u'format': u'raw', u'bootOrder':
u'1', u'poolID': u'00000000-0000-0000-0000-000000000000', u'volumeID':
u'e2a2e6cb-6f0f-4824-b581-e949ce07f612', u'imageID':
u'93d9c21e-b029-45a2-997f-ba5e65fcc9a2', u'specParams': {},
u'readonly': u'false', u'domainID':
u'6ae9f9dd-930b-4894-92e0-c9e3bfc0d875', u'optional': u'false',
u'deviceId': u'93d9c21e-b029-45a2-997f-ba5e65fcc9a2', u'address':
{u'slot': u'0x06', u'bus': u'0x00', u'domain': u'0x0000', u'type':
u'pci', u'function': u'0x0'}, u'device': u'disk', u'shared':
u'exclusive', u'propagateErrors': u'off', u'type': u'disk'},
{u'nicModel': u'pv', u'macAddr': u'54:52:c0:a8:c8:63', u'linkActive':
u'true', u'network': u'ovirtmgmt', u'specParams': {}, u'deviceId':
u'977a4b82-06bb-48ee-b71c-5c87b88488af', u'address': {u'slot':
u'0x03', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci',
u'function': u'0x0'}, u'device': u'bridge', u'type': u'interface'},
{u'device': u'vga', u'alias': u'video0', u'type': u'video'},
{u'device': u'vnc', u'type': u'graphics'}, {u'device': u'virtio',
u'specParams': {u'source': u'urandom'}, u'model': u'virtio', u'type':
u'rng'}], u'guestDiskMapping': {}, u'vmType': u'kvm', u'smp': u'2',
u'display': u'vnc', u'memSize': 3171, u'cpuType': u'SandyBridge',
u'clientIp': u'', u'statusTime': u'4295135080', u'vmName':
u'HostedEngine', u'spiceSecureChannels':
u'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
u'maxVCpus': u'2'}
2017-09-04 23:06:33,722-0400 DEBUG otopi.plugins.gr_he_setup.vm.runvm
mixins._create_vm:300 {'status': {'message': 'Done', 'code': 0},
'items': [{u'username': u'Unknown', u'displayInfo': [], u'hash':
u'4313467795400122059', u'acpiEnable': u'true', u'guestFQDN': u'',
u'monitorResponse': u'0', u'vmId':
u'cef091fe-ff3e-4a90-b2f1-674773eac4ce', u'kvmEnable': u'true',
u'elapsedTime': u'0', u'vmType': u'kvm', u'session': u'Unknown',
u'status': u'WaitForLaunch', u'guestCPUCount': -1, u'appsList': [],
u'timeOffset': u'0', u'memUsage': u'0', u'guestIPs': u'',
u'statusTime': u'4295135100', u'vmName': u'HostedEngine', u'clientIp':
u''}]}
2017-09-04 23:06:36,734-0400 DEBUG otopi.plugins.gr_he_setup.vm.runvm
mixins._create_vm:300 {'status': {'message': 'Done', 'code': 0},
'items': [{u'status': u'Down', u'exitMessage': u'invalid argument:
could not find capabilities for arch=x86_64 domaintype=kvm ', u'vmId':
u'cef091fe-ff3e-4a90-b2f1-674773eac4ce', u'exitReason': 1,
u'statusTime': u'4295138120', u'exitCode': 1}]}
2017-09-04 23:06:36,735-0400 DEBUG otopi.context
context._executeMethod:142 method exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132,
in _executeMethod
method['method']()
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/vm/runvm.py",
line 168, in _boot_from_hd
self._create_vm()
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/mixins.py",
line 315, in _create_vm
'The VM is not powering up: please check VDSM logs'
RuntimeError: The VM is not powering up: please check VDSM logs
In VDSM logs I see:
2017-09-04 23:06:33,702-0400 INFO (vm/cef091fe) [vdsm.api] START
prepareImage(sdUUID=u'6ae9f9dd-930b-4894-92e0-c9e3bfc0d875',
spUUID=u'00000000-0000-0000-0000-000000000000',
imgUUID=u'93d9c21e-b029-45a2-997f-ba5e65fcc9a2',
leafUUID=u'e2a2e6cb-6f0f-4824-b581-e949ce07f612', allowIllegal=False)
from=internal, task_id=127e77e0-29cd-47e2-870f-491fb34a001d (api:46)
2017-09-04 23:06:33,708-0400 ERROR (jsonrpc/6) [virt.vm]
(vmId='cef091fe-ff3e-4a90-b2f1-674773eac4ce') Error fetching vm stats
(vm:1676)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1672,
in _getRunningVmStats
vm_sample.interval)
File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py", line
47, in produce
balloon(vm, stats, last_sample)
File "/usr/lib/python2.7/site-packages/vdsm/virt/vmstats.py", line
153, in balloon
balloon_info = vm.get_balloon_info()
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4947,
in get_balloon_info
dev = self._devices[hwclass.BALLOON][0]
IndexError: list index out of range
2017-09-04 23:06:33,713-0400 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer]
RPC call VM.getStats succeeded in 0.00 seconds (__init__:630)
And
2017-09-04 23:06:34,180-0400 ERROR (vm/cef091fe) [virt.vm]
(vmId='cef091fe-ff3e-4a90-b2f1-674773eac4ce') The vm start process
failed (vm:877)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 811,
in _startUnderlyingVm
self._run()
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2543, in _run
dom = self._connection.createXML(domxml, flags)
File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
line 125, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 586, in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: invalid argument: could not find capabilities for
arch=x86_64 domaintype=kvm
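The libvirtError at the bottom looks like the actual failure: libvirt does not
advertise a kvm domain type for x86_64 on that host, which usually means
/dev/kvm is unavailable (e.g. nested virtualization disabled on the CI slave)
or the qemu-kvm emulator is missing. A minimal diagnostic sketch, not part of
the setup code, that checks the same condition through the libvirt Python
bindings:

# Rough check of what the host actually exposes; illustrative only.
import os
import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.openReadOnly(None)            # local hypervisor
caps = ET.fromstring(conn.getCapabilities())

# Does libvirt advertise a kvm-capable guest domain for x86_64?
kvm_guests = caps.findall(".//guest/arch[@name='x86_64']/domain[@type='kvm']")

print('/dev/kvm present: %s' % os.path.exists('/dev/kvm'))
print('kvm domain for x86_64 advertised: %s' % bool(kvm_guests))
# If both are False, the "could not find capabilities" error above is
# expected: the VM cannot be created with domaintype=kvm on this host.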
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
vdsm failed to start
by Piotr Kliczewski
All,
I pulled the latest master and updated my vdsm. It failed to start, and
I found the following failure.
It was introduced by [1].
Am I missing something in my env?
Thanks,
Piotr
[1] https://gerrit.ovirt.org/#/c/81088/
Sep 4 10:53:46 f20 vdsmd_init_common.sh[8508]: Traceback (most recent
call last):
Sep 4 10:53:46 f20 vdsmd_init_common.sh[8508]: File
"/usr/bin/vdsm-tool", line 219, in main
Sep 4 10:53:46 f20 vdsmd_init_common.sh[8508]: return
tool_command[cmd]["command"](*args)
Sep 4 10:53:46 f20 vdsmd_init_common.sh[8508]: File
"/usr/lib/python2.7/site-packages/vdsm/tool/__init__.py", line 38, in
wrapper
Sep 4 10:53:46 f20 vdsmd_init_common.sh[8508]: func(*args, **kwargs)
Sep 4 10:53:46 f20 vdsmd_init_common.sh[8508]: File
"/usr/lib/python2.7/site-packages/vdsm/tool/configurator.py", line
160, in isconfigured
Sep 4 10:53:46 f20 vdsmd_init_common.sh[8508]: m = [c.name for c
in pargs.modules if _isconfigured(c) == configurators.NO]
Sep 4 10:53:46 f20 vdsmd_init_common.sh[8508]: File
"/usr/lib/python2.7/site-packages/vdsm/tool/configurator.py", line 98,
in _isconfigured
Sep 4 10:53:46 f20 vdsmd_init_common.sh[8508]: return
getattr(module, 'isconfigured', lambda: configurators.NO)()
Sep 4 10:53:46 f20 vdsmd_init_common.sh[8508]: File
"/usr/lib/python2.7/site-packages/vdsm/tool/configurators/sebool.py",
line 74, in isconfigured
Sep 4 10:53:46 f20 vdsmd_init_common.sh[8508]: if not
all(sebool_status[sebool_variable]):
Sep 4 10:53:46 f20 vdsmd_init_common.sh[8508]: KeyError: 'virt_use_glusterd'
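The KeyError suggests the new sebool configurator queries a boolean
(virt_use_glusterd) that the SELinux policy on this host does not define, so
the status dict has no such key. A rough sketch of the kind of guard that
would avoid the crash (illustrative only, not the actual vdsm code):

# Treat SELinux booleans that the running policy does not define as
# "not configured" instead of letting the lookup raise KeyError.
import subprocess

def defined_sebools():
    # 'getsebool -a' prints one "name --> on|off" line per defined boolean
    out = subprocess.check_output(['getsebool', '-a'])
    return set(line.split()[0] for line in out.decode().splitlines() if line.strip())

def sebools_configured(sebool_status, wanted):
    defined = defined_sebools()
    for name in wanted:
        if name not in defined:
            # e.g. 'virt_use_glusterd' on a policy that does not ship it
            return False
        if not all(sebool_status.get(name, [False])):
            return False
    return True

# hypothetical usage:
# sebools_configured(current_status, ['virt_use_nfs', 'virt_use_glusterd'])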
[ OST Failure Report ] [ oVirt master ] [ 2017-09-01 ] [add_host]
by Barak Korren
Test failed: [ add_host ]
Link to suspected patches:
https://gerrit.ovirt.org/#/c/79474/9
Note: tests with ovirt-host-deploy had been failing since:
https://gerrit.ovirt.org/#/c/77650/1
so that patch may be the root cause of the failure.
Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2230
Link to all logs:
(host-deploy-logs)
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2230/artifa...
Error snippet from log:
<error>
2017-09-01 10:55:06,625-0400 DEBUG otopi.context
context._executeMethod:128 Stage packages METHOD
otopi.plugins.otopi.packagers.yumpackager.Plugin._packages
2017-09-01 10:55:06,625-0400 DEBUG
otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum
Building transaction
2017-09-01 10:55:08,784-0400 ERROR
otopi.plugins.otopi.packagers.yumpackager yumpackager.error:85 Yum
[u'glusterfs-rdma-3.7.9-12.el7.centos.x86_64 requires
glusterfs(x86-64) = 3.7.9-12.el7.centos',
u'glusterfs-3.7.9-12.el7.centos.x86_64 requires glusterfs-libs(x86-64)
= 3.7.9-12.el7.centos']
2017-09-01 10:55:08,784-0400 DEBUG otopi.context
context._executeMethod:142 method exception
Traceback (most recent call last):
File "/tmp/ovirt-1RyqhT7Wyt/pythonlib/otopi/context.py", line 132,
in _executeMethod
method['method']()
File "/tmp/ovirt-1RyqhT7Wyt/otopi-plugins/otopi/packagers/yumpackager.py",
line 256, in _packages
if self._miniyum.buildTransaction():
File "/tmp/ovirt-1RyqhT7Wyt/pythonlib/otopi/miniyum.py", line 919,
in buildTransaction
raise yum.Errors.YumBaseError(msg)
YumBaseError: [u'glusterfs-rdma-3.7.9-12.el7.centos.x86_64 requires
glusterfs(x86-64) = 3.7.9-12.el7.centos',
u'glusterfs-3.7.9-12.el7.centos.x86_64 requires glusterfs-libs(x86-64)
= 3.7.9-12.el7.centos']
2017-09-01 10:55:08,785-0400 ERROR otopi.context
context._executeMethod:151 Failed to execute stage 'Package
installation': [u'glusterfs-rdma-3.7.9-12.el7.centos.x86_64 requires
glusterfs(x86-64) = 3.7.9-12.el7.centos',
u'glusterfs-3.7.9-12.el7.centos.x86_64 requires glusterfs-libs(x86-64)
= 3.7.9-12.el7.centos']
2017-09-01 10:55:08,785-0400 DEBUG otopi.transaction
transaction.abort:119 aborting 'Yum Transaction'
</error>
Note: the error does not seem to be directly related to the contents of
the suspected patches, but it reproduces consistently with them and
does not reproduce without them.
Here is a link to another reproducing run:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2239/
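For reference, the dependency mismatch can be confirmed directly against the
repositories, independently of otopi. A quick sketch, assuming yum-utils
(repoquery) is installed and the same repos are enabled as in the failing job:

# List what glusterfs-rdma requires vs. what the configured repos provide.
import subprocess

for cmd in (['repoquery', '--requires', 'glusterfs-rdma'],
            ['repoquery', 'glusterfs', 'glusterfs-libs']):
    print('$ ' + ' '.join(cmd))
    print(subprocess.check_output(cmd).decode())
# If glusterfs-rdma requires glusterfs(x86-64) = 3.7.9-12.el7.centos but the
# repos provide a different glusterfs/glusterfs-libs build, yum cannot build
# the transaction, which matches the YumBaseError above.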
--
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
Re: [ovirt-devel] Slack channel
by Dan Kenigsberg
Last time I checked, Slack was not ready for real use. It lost my
confidence when it decided to hide my two-week-old chat because my group
account had used up all its free resources.
That's closed-source mentality at its worst.
On Sep 3, 2017 16:09, "Roy Golan" <rgolan(a)redhat.com> wrote:
I think the SLA team mostly uses it, and it works well for them, but there
isn't much presence from the other teams on Slack.
Opening the discussion here: I think we need to give our community a push
and modernize our communication channel. Let's consider:
- slack
- gitter
- self hosted service
The Slack experience is good but, again, it wasn't adopted much more widely
by oVirt, and some folks prefer an open-source solution.
I think Gitter plays nice for communities and uses your GitHub identity, so
it's open to everyone. Slack is a bit different in its approach. I don't have
any experience with Gitter at all, though, so help out here, people who do.
A self-hosted service, like Rocket.Chat, means $$$ and time, and is less
visible on the internet, but it has other advantages of course.
Sandro, Eyal, maybe you already have something up your sleeve?
On Thu, 24 Aug 2017 at 16:00 Eyal Edri <eedri(a)redhat.com> wrote:
> Marc,
> I just sent you an invitation, see if you can signup.
>
> On Thu, Aug 24, 2017 at 3:53 PM, Eyal Edri <eedri(a)redhat.com> wrote:
>
>> I think Roy is mostly using it, so maybe he can update the settings.
>>
>> On Thu, Aug 24, 2017 at 3:48 PM, Marc Young <3vilpenguin(a)gmail.com>
>> wrote:
>>
>>> That slack team requires either an invite or an `(a)redhat.com` email to
>>> sign up
>>>
>>> On Wed, Aug 23, 2017 at 11:58 PM, Yaniv Kaul <ykaul(a)redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On Thu, Aug 24, 2017 at 6:03 AM, Greg Sheremeta <gshereme(a)redhat.com>
>>>> wrote:
>>>>
>>>>> Some of the teams have dedicated slack channels. We don't have a
>>>>> global ovirt team one that I know of.
>>>>>
>>>>
>>>> There's https://ovirt.slack.com/ - but I'm not sure how used it is.
>>>> Y.
>>>>
>>>>
>>>>>
>>>>> You can disable connect / disconnect chatter with a setting in
>>>>> hexchat. And you can catch what you missed by using an irc proxy.
>>>>>
>>>>> Full disclosure : I love slack and would love to see us fully cut over.
>>>>>
>>>>> Greg Sheremeta, MBA
>>>>> Sr. Software Engineer
>>>>> Red Hat, Inc.
>>>>> gshereme(a)redhat.com
>>>>>
>>>>> On Aug 23, 2017 10:31 PM, "Marc Young" <3vilpenguin(a)gmail.com> wrote:
>>>>>
>>>>> Is there hope for slack over IRC?
>>>>>
>>>>> The problem with IRC is all the connect/disconnect chatter (and
>>>>> offline being a black hole)
>>>>>
>>
>>
>>
>> --
>>
>> Eyal edri
>>
>>
>> ASSOCIATE MANAGER
>>
>> RHV DevOps
>>
>> EMEA VIRTUALIZATION R&D
>>
>>
>> Red Hat EMEA <https://www.redhat.com/>
>> <https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>> phone: +972-9-7692018 <+972%209-769-2018>
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>
>
>
> --
>
> Eyal edri
>
>
> ASSOCIATE MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA <https://www.redhat.com/>
> <https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
planned Gerrit maintenance
by Evgheni Dereveanchin
Hi everyone,
I will be performing updates on gerrit.ovirt.org during the next two hours.
Within this period the Gerrit UI and Git repositories may be unavailable.
I will follow up as soon as maintenance activities are over.
--
Regards,
Evgheni Dereveanchin
[ OST Failure Report ] [ oVirt master ] [ 2017-09-01 ] [add_hosts]
by Barak Korren
Test failed: [ add_hosts ]
Link to suspected patches:
https://gerrit.ovirt.org/#/c/81088/4
Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2234/
Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2234/artifa...
Error snippet from log:
<error>
(From supervdsm.log)
MainThread::ERROR::2017-09-01
14:27:32,291::initializer::53::root::(_lldp_init) Failed to enable
LLDP on eth2
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/network/initializer.py",
line 51, in _lldp_init
Lldp.enable_lldp_on_iface(device)
File "/usr/lib/python2.7/site-packages/vdsm/network/lldp/lldpad.py",
line 30, in enable_lldp_on_iface
lldptool.enable_lldp_on_iface(iface, rx_only)
File "/usr/lib/python2.7/site-packages/vdsm/network/lldpad/lldptool.py",
line 46, in enable_lldp_on_iface
raise EnableLldpError(rc, out, err, iface)
EnableLldpError: (1,
"timeout\n'M00000001C3040000000c04eth2000badminStatus0002rx' command
timed out.\n", '', 'eth2')
</error>
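The failure is the lldptool call timing out on eth2, which usually points at
lldpad not answering rather than at the interface itself. A quick manual
check (the exact lldptool arguments are an assumption inferred from the
encoded error payload, not copied from vdsm/network/lldpad/lldptool.py):

# Reproduce the LLDP enable step by hand on the affected host.
import subprocess

def check_lldp(iface='eth2'):
    # lldptool talks to the lldpad daemon; a stopped or hung lldpad is the
    # usual reason for the "command timed out" error seen above.
    subprocess.check_call(['systemctl', 'is-active', 'lldpad'])
    subprocess.check_call(['lldptool', 'set-lldp', '-i', iface, 'adminStatus=rx'])
    subprocess.check_call(['lldptool', 'get-lldp', '-i', iface, 'adminStatus'])

if __name__ == '__main__':
    check_lldp()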
--
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted