Error: Adding new Host to ovirt-engine
by Ahmad Khiet
Hi,
Can't add a new host to the oVirt engine because of the following error:
2019-06-12 12:23:09,664 p=4134 u=engine | TASK [ovirt-host-deploy-facts : Set facts] *************************************
2019-06-12 12:23:09,684 p=4134 u=engine | ok: [10.35.1.17] => {
"ansible_facts": {
"ansible_python_interpreter": "/usr/bin/python2",
"host_deploy_vdsm_version": "4.40.0"
},
"changed": false
}
2019-06-12 12:23:09,697 p=4134 u=engine | TASK [ovirt-provider-ovn-driver : Install ovs] *********************************
2019-06-12 12:23:09,726 p=4134 u=engine | fatal: [10.35.1.17]: FAILED! => {}
MSG:
The conditional check 'cluster_switch == "ovs" or (ovn_central is defined
and ovn_central | ipaddr and ovn_engine_cluster_version is
version_compare('4.2', '>='))' failed. The error was: The ipaddr filter
requires python's netaddr be installed on the ansible controller
The error appears to be in
'/home/engine/apps/engine/share/ovirt-engine/playbooks/roles/ovirt-provider-ovn-driver/tasks/configure.yml':
line 3, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- block:
- name: Install ovs
^ here
2019-06-12 12:23:09,728 p=4134 u=engine | PLAY RECAP *********************************************************************
2019-06-12 12:23:09,728 p=4134 u=engine | 10.35.1.17 : ok=3 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
What's missing!?
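The message itself seems to point at the gap: the ipaddr filter used in that
conditional imports Python's netaddr on the Ansible controller. A minimal
check (a sketch; it assumes the /usr/bin/python2 interpreter reported in the
facts above, so run it with that interpreter):

    import sys

    try:
        import netaddr
        print("netaddr %s is available" % netaddr.__version__)
    except ImportError:
        # If this triggers, install netaddr for the controller's
        # interpreter (e.g. the python-netaddr package, or pip).
        sys.exit("netaddr is missing on the Ansible controller")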
Thanks
--
Ahmad Khiet
Red Hat <https://www.redhat.com/>
akhiet(a)redhat.com
M: +972-54-6225629
<https://red.ht/sig>
1 year, 3 months
Merge rights changes in the oVirt Engine project
by Tal Nisan
Hi everyone,
As you probably know, we are now in a mode in which we develop our next
zstream version on the master branch, as opposed to how we worked before,
where master was dedicated to the next major version. This means the rapid
changes in master are delivered to customers at a much higher cadence,
affecting stability.
Because of that, we think it's best that from now on, merges to the master
branch are done only by stable-branch maintainers, after inspecting the
patches closely.
What you need to do in order to get your patch merged:
- Have it pass Jenkins
- Have it get a Code-Review +2
- Have it marked Verified +1
- It's always encouraged to have it tested by OST; for bigger changes it's
a must
Once you have all those covered, please add me as a reviewer and I'll
examine the patch and merge it if everything looks right. If I haven't done
so in a timely manner, feel free to ping me.
3 years, 7 months
Error Java SDK Issue??
by Geschwentner, Patrick
Dear Ladies and Gentlemen!
I am currently working with the java-sdk and I encountered a problem.
When I try to retrieve the disk details, I get the following error:
Disk currDisk = ovirtConnection.followLink(diskAttachment.disk());
The error occurs in this line: [inline screenshot omitted]
The getResponse looks quite OK (I inspected it [inline screenshot omitted] and it looks fine).
Error:
wrong number of arguments
The code is quite similar to what you published on GitHub (https://github.com/oVirt/ovirt-engine-sdk-java/blob/master/sdk/src/test/j... ).
Can you confirm the defect?
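In case it helps to narrow this down, the equivalent traversal through the
Python SDK could serve as a cross-check of whether the problem is specific
to the Java binding (a minimal sketch; the connection details are
placeholders):

    import ovirtsdk4 as sdk

    # Placeholder engine URL and credentials - replace with real ones.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='...',
        insecure=True,
    )
    try:
        vms_service = connection.system_service().vms_service()
        vm = vms_service.list()[0]  # assumes at least one VM exists
        attachments = vms_service.vm_service(vm.id) \
            .disk_attachments_service().list()
        for attachment in attachments:
            # follow_link resolves the disk reference held by the
            # attachment - the same step that fails in the Java snippet.
            disk = connection.follow_link(attachment.disk)
            print(disk.name)
    finally:
        connection.close()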
Best regards
Patrick
3 years, 7 months
CentOS Stream support
by Michal Skrivanek
Hi all,
we would like to gauge the interest in the community in oVirt moving to CentOS Stream.
There were some requests before, but it's hard to tell how many people would really like to see that.
With CentOS releases lagging months behind RHEL, it's interesting to consider moving to CentOS Stream, as it is much more up to date and allows us to fix bugs faster, with fewer workarounds and less overhead for maintaining old code. E.g. our current integration tests do not really pass on CentOS 8.1, and we can't do much about that other than wait for more up-to-date packages. It would also bring us closer to making oVirt run smoothly on RHEL, as RHEL is much closer to Stream than it is to the outdated CentOS.
So... would you like us to support CentOS Stream?
We don't really have the capacity to support 3 different platforms. Would you still want oVirt to support CentOS Stream if it means “less support” for regular CentOS?
There are some concerns about Stream being a bit less stable; do you share those concerns?
Thank you for your comments,
michal
3 years, 9 months
planned Jenkins restart
by Evgheni Dereveanchin
Hi everyone,
I'll be performing a planned Jenkins restart within the next hour.
No new CI jobs will be scheduled during this maintenance period.
I will inform you once it is back online.
--
Regards,
Evgheni Dereveanchin
3 years, 11 months
Branching out 4.3 in ovirt-system-tests
by Marcin Sobczyk
Hi all,
after minimizing the usage of lago in the basic suite
and some minor adjustments in the network suite, we are finally
able to remove the lago OST plugin as a dependency [1].
This, however, comes at the price of keeping lots of ugly ifology, e.g. [2][3].
There's a big disparity between the OST runs we have on el7 and el8.
There's also tons of symlink-based code sharing between suites - be it 4.3
suites and master suites, or simply different types of suites.
The basic suite has its own 'test_utils', which is copied/symlinked
in multiple places. There's also 'ost_utils', which is really messy ATM.
It's very hard to keep track of and maintain all of this...
At this moment, we are able to run basic suite and network suite
on el8, with prebuilt ost-images and without lago plugin.
HE suites should be the next step. We have patches that make them
py3-compatible, but they probably still need some attention [4][5].
We don't have any prebuilt HE ost-images, but this will be handled
in the near future.
I think it's a good time to detach ourselves from the legacy stuff
and start with a clean slate. My proposal would be to branch
out 4.3 in ovirt-system-tests and not use py2/el7 in the master
branch at all. This would allow us to focus on the py3, el8 and ost-images
efforts while keeping the legacy stuff intact.
WDYT?
Regards, Marcin
[1] https://gerrit.ovirt.org/#/c/111643/
[2] https://gerrit.ovirt.org/#/c/111643/6/basic-suite-master/control.sh
[3] https://gerrit.ovirt.org/#/c/111643/6/basic-suite-master/test-scenarios/c...
[4] https://gerrit.ovirt.org/108809
[5] https://gerrit.ovirt.org/110097
4 years
virt-sparsify failed (was: [oVirt Jenkins] ovirt-system-tests_basic-suite-master_nightly - Build # 479 - Failure!)
by Yedidyah Bar David
Hi all,
On Mon, Oct 12, 2020 at 5:17 AM <jenkins(a)jenkins.phx.ovirt.org> wrote:
>
> Project: https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/
> Build: https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_night...
The above failed with:
https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_night...
vdsm.log has:
https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_night...
2020-10-11 22:05:14,695-0400 INFO (jsonrpc/1) [api.host] FINISH
getJobs return={'jobs': {'05eaea44-7e4c-4442-9926-2bcb696520f1':
{'id': '05eaea44-7e4c-4442-9926-2bcb696520f1', 'status': 'failed',
'description': 'sparsify_volume', 'job_type': 'storage', 'error':
{'code': 100, 'message': 'General Exception: (\'Command
[\\\'/usr/bin/virt-sparsify\\\', \\\'--machine-readable\\\',
\\\'--in-place\\\',
\\\'/rhev/data-center/mnt/192.168.200.4:_exports_nfs_share1/8b292c13-fd8a-4a7c-903c-5724ec742c10/images/a367c179-2ac9-4930-abeb-848229f81c97/515fcf06-8743-45d1-9af8-61a0c48e8c67\\\']
failed with rc=1 out=b\\\'3/12\\\\n{ "message": "libguestfs error:
guestfs_launch failed.\\\\\\\\nThis usually means the libguestfs
appliance failed to start or crashed.\\\\\\\\nDo:\\\\\\\\n export
LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1\\\\\\\\nand run the command
again. For further information, read:\\\\\\\\n
http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\\\\\\\\nYou
can also run \\\\\\\'libguestfs-test-tool\\\\\\\' and post the
*complete* output\\\\\\\\ninto a bug report or message to the
libguestfs mailing list.", "timestamp":
"2020-10-11T22:05:08.397538670-04:00", "type": "error" }\\\\n\\\'
err=b"virt-sparsify: error: libguestfs error: guestfs_launch
failed.\\\\nThis usually means the libguestfs appliance failed to
start or crashed.\\\\nDo:\\\\n export LIBGUESTFS_DEBUG=1
LIBGUESTFS_TRACE=1\\\\nand run the command again. For further
information, read:\\\\n
http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\\\\nYou
can also run \\\'libguestfs-test-tool\\\' and post the *complete*
output\\\\ninto a bug report or message to the libguestfs mailing
list.\\\\n\\\\nIf reporting bugs, run virt-sparsify with debugging
enabled and include the \\\\ncomplete output:\\\\n\\\\n virt-sparsify
-v -x [...]\\\\n"\',)'}}}, 'status': {'code': 0, 'message': 'Done'}}
from=::ffff:192.168.201.4,43318,
flow_id=365642f4-2fe2-45df-937a-f4ca435eea38 (api:54)
2020-10-11 22:05:14,695-0400 DEBUG (jsonrpc/1) [jsonrpc.JsonRpcServer]
Return 'Host.getJobs' in bridge with
{'05eaea44-7e4c-4442-9926-2bcb696520f1': {'id':
'05eaea44-7e4c-4442-9926-2bcb696520f1', 'status': 'failed',
'description': 'sparsify_volume', 'job_type': 'storage', 'error':
{'code': 100, 'message': 'General Exception: (\'Command
[\\\'/usr/bin/virt-sparsify\\\', \\\'--machine-readable\\\',
\\\'--in-place\\\',
\\\'/rhev/data-center/mnt/192.168.200.4:_exports_nfs_share1/8b292c13-fd8a-4a7c-903c-5724ec742c10/images/a367c179-2ac9-4930-abeb-848229f81c97/515fcf06-8743-45d1-9af8-61a0c48e8c67\\\']
failed with rc=1 out=b\\\'3/12\\\\n{ "message": "libguestfs error:
guestfs_launch failed.\\\\\\\\nThis usually means the libguestfs
appliance failed to start or crashed.\\\\\\\\nDo:\\\\\\\\n export
LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1\\\\\\\\nand run the command
again. For further information, read:\\\\\\\\n
http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\\\\\\\\nYou
can also run \\\\\\\'libguestfs-test-tool\\\\\\\' and post the
*complete* output\\\\\\\\ninto a bug report or message to the
libguestfs mailing list.", "timestamp":
"2020-10-11T22:05:08.397538670-04:00", "type": "error" }\\\\n\\\'
err=b"virt-sparsify: error: libguestfs error: guestfs_launch
failed.\\\\nThis usually means the libguestfs appliance failed to
start or crashed.\\\\nDo:\\\\n export LIBGUESTFS_DEBUG=1
LIBGUESTFS_TRACE=1\\\\nand run the command again. For further
information, read:\\\\n
http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\\\\nYou
can also run \\\'libguestfs-test-tool\\\' and post the *complete*
output\\\\ninto a bug report or message to the libguestfs mailing
list.\\\\n\\\\nIf reporting bugs, run virt-sparsify with debugging
enabled and include the \\\\ncomplete output:\\\\n\\\\n virt-sparsify
-v -x [...]\\\\n"\',)'}}} (__init__:356)
/var/log/messages has:
Oct 11 22:04:51 lago-basic-suite-master-host-0 kvm[80601]: 1 guest now active
Oct 11 22:05:06 lago-basic-suite-master-host-0 journal[80557]: Domain
id=1 name='guestfs-hl0ntvn92rtkk2u0'
uuid=05ea5a53-562f-49f8-a8ca-76b45c5325b4 is tainted: custom-argv
Oct 11 22:05:06 lago-basic-suite-master-host-0 journal[80557]: Domain
id=1 name='guestfs-hl0ntvn92rtkk2u0'
uuid=05ea5a53-562f-49f8-a8ca-76b45c5325b4 is tainted: host-cpu
Oct 11 22:05:06 lago-basic-suite-master-host-0 kvm[80801]: 2 guests now active
Oct 11 22:05:08 lago-basic-suite-master-host-0 journal[80557]:
internal error: End of file from qemu monitor
Oct 11 22:05:08 lago-basic-suite-master-host-0 kvm[80807]: 1 guest now active
Oct 11 22:05:08 lago-basic-suite-master-host-0 journal[80557]: cannot
resolve symlink /tmp/libguestfseTG8xF/console.sock: No such file or
directory
Oct 11 22:05:08 lago-basic-suite-master-host-0 journal[80557]: cannot
resolve symlink /tmp/libguestfseTG8xF/guestfsd.sock: No such file or
directory
Oct 11 22:05:08 lago-basic-suite-master-host-0 journal[80557]: cannot
lookup default selinux label for /tmp/libguestfs1WkcF7/overlay1.qcow2
Oct 11 22:05:08 lago-basic-suite-master-host-0 journal[80557]: cannot
lookup default selinux label for
/var/tmp/.guestfs-36/appliance.d/kernel
Oct 11 22:05:08 lago-basic-suite-master-host-0 journal[80557]: cannot
lookup default selinux label for
/var/tmp/.guestfs-36/appliance.d/initrd
Oct 11 22:05:08 lago-basic-suite-master-host-0 vdsm[74096]: ERROR Job
'05eaea44-7e4c-4442-9926-2bcb696520f1' failed#012Traceback (most
recent call last):#012 File
"/usr/lib/python3.6/site-packages/vdsm/jobs.py", line 159, in run#012
self._run()#012 File
"/usr/lib/python3.6/site-packages/vdsm/storage/sdm/api/sparsify_volume.py",
line 57, in _run#012
virtsparsify.sparsify_inplace(self._vol_info.path)#012 File
"/usr/lib/python3.6/site-packages/vdsm/virtsparsify.py", line 40, in
sparsify_inplace#012 commands.run(cmd)#012 File
"/usr/lib/python3.6/site-packages/vdsm/common/commands.py", line 101,
in run#012 raise cmdutils.Error(args, p.returncode, out,
err)#012vdsm.common.cmdutils.Error: Command ['/usr/bin/virt-sparsify',
'--machine-readable', '--in-place',
'/rhev/data-center/mnt/192.168.200.4:_exports_nfs_share1/8b292c13-fd8a-4a7c-903c-5724ec742c10/images/a367c179-2ac9-4930-abeb-848229f81c97/515fcf06-8743-45d1-9af8-61a0c48e8c67']
failed with rc=1 out=b'3/12\n{ "message": "libguestfs error:
guestfs_launch failed.\\nThis usually means the libguestfs appliance
failed to start or crashed.\\nDo:\\n export LIBGUESTFS_DEBUG=1
LIBGUESTFS_TRACE=1\\nand run the command again. For further
information, read:\\n
http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\\nYou
can also run \'libguestfs-test-tool\' and post the *complete*
output\\ninto a bug report or message to the libguestfs mailing
list.", "timestamp": "2020-10-11T22:05:08.397538670-04:00", "type":
"error" }\n' err=b"virt-sparsify: error: libguestfs error:
guestfs_launch failed.\nThis usually means the libguestfs appliance
failed to start or crashed.\nDo:\n export LIBGUESTFS_DEBUG=1
LIBGUESTFS_TRACE=1\nand run the command again. For further
information, read:\n
http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\nYou can
also run 'libguestfs-test-tool' and post the *complete* output\ninto a
bug report or message to the libguestfs mailing list.\n\nIf reporting
bugs, run virt-sparsify with debugging enabled and include the
\ncomplete output:\n\n virt-sparsify -v -x [...]\n"
The next run of the job (480) finished successfully. No idea if it
was already fixed by a patch, or if it's simply a random/environment issue.
Is it possible to pass LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1 without
patching vdsm? Not sure. In any case, if this does not cause too much
excess debug logging, perhaps it's better to always pass them, to help
analyze such failures retroactively. Or, patch
virt-sparsify/libguestfs/whatever to always log at least enough
information on failure, even without passing these.
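For a manual reproduction outside vdsm, something along these lines would
run the same command with the debug variables set (a sketch; the volume
path is a placeholder - take the real one from vdsm.log):

    import os
    import subprocess

    # Same invocation as in the log above, plus the libguestfs
    # debug environment.
    env = dict(os.environ, LIBGUESTFS_DEBUG='1', LIBGUESTFS_TRACE='1')
    cmd = [
        '/usr/bin/virt-sparsify', '--machine-readable', '--in-place',
        '/rhev/data-center/mnt/<server>/<sd>/images/<img>/<vol>',  # placeholder
    ]
    proc = subprocess.Popen(cmd, env=env, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, universal_newlines=True)
    out, err = proc.communicate()
    print('rc=%d' % proc.returncode)
    print(err)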
Best regards,
> Build Number: 479
> Build Status: Failure
> Triggered By: Started by timer
>
> -------------------------------------
> Changes Since Last Success:
> -------------------------------------
> Changes for Build #479
> [hbraha] network: bond active slave test
>
>
>
>
> -----------------
> Failed Tests:
> -----------------
> 1 tests failed.
> FAILED: basic-suite-master.test-scenarios.004_basic_sanity.test_sparsify_disk1
>
> Error Message:
> AssertionError: False != True after 600 seconds
>
> Stack Trace:
> api_v4 = <ovirtsdk4.Connection object at 0x7fe717c60e50>
>
> @order_by(_TEST_LIST)
> def test_sparsify_disk1(api_v4):
> engine = api_v4.system_service()
> disk_service = test_utils.get_disk_service(engine, DISK1_NAME)
> with test_utils.TestEvent(engine, 1325): # USER_SPARSIFY_IMAGE_START event
> disk_service.sparsify()
>
> with test_utils.TestEvent(engine, 1326): # USER_SPARSIFY_IMAGE_FINISH_SUCCESS
> > pass
>
> ../basic-suite-master/test-scenarios/004_basic_sanity.py:295:
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> /usr/lib64/python2.7/contextlib.py:24: in __exit__
> self.gen.next()
> ../ost_utils/ost_utils/engine_utils.py:44: in wait_for_event
> lambda:
> ../ost_utils/ost_utils/assertions.py:97: in assert_true_within_long
> assert_equals_within_long(func, True, allowed_exceptions)
> ../ost_utils/ost_utils/assertions.py:82: in assert_equals_within_long
> func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
> func = <function <lambda> at 0x7fe7176dfb18>, value = True, timeout = 600
> allowed_exceptions = [], initial_wait = 0
> error_message = 'False != True after 600 seconds'
>
> def assert_equals_within(
> func, value, timeout, allowed_exceptions=None, initial_wait=10,
> error_message=None
> ):
> allowed_exceptions = allowed_exceptions or []
> with _EggTimer(timeout) as timer:
> while not timer.elapsed():
> try:
> res = func()
> if res == value:
> return
> except Exception as exc:
> if _instance_of_any(exc, allowed_exceptions):
> time.sleep(3)
> continue
>
> LOGGER.exception("Unhandled exception in %s", func)
> raise
>
> if initial_wait == 0:
> time.sleep(3)
> else:
> time.sleep(initial_wait)
> initial_wait = 0
> try:
> if error_message is None:
> error_message = '%s != %s after %s seconds' % (res, value, timeout)
> > raise AssertionError(error_message)
> E AssertionError: False != True after 600 seconds
>
> ../ost_utils/ost_utils/assertions.py:60: AssertionError
--
Didi
4 years