Error: Adding new Host to ovirt-engine
by Ahmad Khiet
Hi,
I can't add a new host to ovirt-engine because of the following error:
2019-06-12 12:23:09,664 p=4134 u=engine | TASK [ovirt-host-deploy-facts : Set facts] *************************************
2019-06-12 12:23:09,684 p=4134 u=engine | ok: [10.35.1.17] => {
"ansible_facts": {
"ansible_python_interpreter": "/usr/bin/python2",
"host_deploy_vdsm_version": "4.40.0"
},
"changed": false
}
2019-06-12 12:23:09,697 p=4134 u=engine | TASK [ovirt-provider-ovn-driver : Install ovs] *********************************
2019-06-12 12:23:09,726 p=4134 u=engine | fatal: [10.35.1.17]: FAILED! => {}
MSG:
The conditional check 'cluster_switch == "ovs" or (ovn_central is defined
and ovn_central | ipaddr and ovn_engine_cluster_version is
version_compare('4.2', '>='))' failed. The error was: The ipaddr filter
requires python's netaddr be installed on the ansible controller
The error appears to be in
'/home/engine/apps/engine/share/ovirt-engine/playbooks/roles/ovirt-provider-ovn-driver/tasks/configure.yml':
line 3, column 5, but may be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- block:
- name: Install ovs
^ here
2019-06-12 12:23:09,728 p=4134 u=engine | PLAY RECAP *********************************************************************
2019-06-12 12:23:09,728 p=4134 u=engine | 10.35.1.17 : ok=3 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
What's missing?
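The error text itself names the missing piece: the ipaddr filter needs
python's netaddr package on the Ansible controller, which here is the
engine machine. A minimal check of the dependency, assuming Ansible runs
with /usr/bin/python2 as the facts above show (installing netaddr for that
interpreter, e.g. via the distro's python-netaddr package, should clear it):

    # run with the same interpreter Ansible uses on the engine machine;
    # an ImportError here reproduces the failure above
    import netaddr
    print(netaddr.valid_ipv4("10.35.1.17"))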
Thanks
--
Ahmad Khiet
Red Hat <https://www.redhat.com/>
akhiet(a)redhat.com
M: +972-54-6225629
Error Java SDK Issue??
by Geschwentner, Patrick
Dear Ladies and Gentlemen!
I am currently working with the java-sdk and I encountered a problem.
When I try to retrieve the disk details, I get the following error:
Disk currDisk = ovirtConnection.followLink(diskAttachment.disk());
The error occurs in this line: [inline screenshot not preserved in the archive]
The getResponse looks quite OK (I inspected it: [inline screenshot not preserved in the archive]; it looks fine).
Error:
wrong number of arguments
The code is quite similar to what you published on GitHub (https://github.com/oVirt/ovirt-engine-sdk-java/blob/master/sdk/src/test/j... ).
Can you confirm the defect?
Best regards
Patrick
unicode sandwich in otopi/engine-setup
by Yedidyah Bar David
Hi all,
This is in a sense a continuation of the thread "Why filetransaction
needs to encode the content to utf-8?", but I decided that a new
thread is better.
I started to systematically convert the code to use a unicode
sandwich. I admit it was harder than I expected, and made me think
somewhat differently about the move to python3, and about how
reasonable (or not) it is to develop in the common subset of python2
and python3 vs ditching python2 and moving fully to python3. It seems
like at least parts of our (integration team) code will still have to
run in python2 also in oVirt 4.4, so I guess we'll not have much
choice :-)
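For reference, the shape I'm converting to is a minimal sketch like this
(assuming utf-8 at the I/O boundaries; it behaves the same on python2 and
python3):

    import io

    def read_text(path):
        # decode at the boundary: bytes on disk -> text in memory
        with io.open(path, 'r', encoding='utf-8') as f:
            return f.read()  # unicode on python2, str on python3

    def write_text(path, text):
        # encode at the boundary: text in memory -> bytes on disk
        with io.open(path, 'w', encoding='utf-8') as f:
            f.write(text)

Inside the sandwich everything stays text; bytes appear only at the edges.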
Current patches are only for otopi and engine-setup, and are by no
means thorough - I didn't check each and every open() call and similar
ones. But it's enough for getting engine-setup finish successfully on
both python2 and python3 (EL7 and Fedora 29), with some utf-8 inserted
in relevant places of the input (for the plugins already handled).
I didn't bother trying non-utf-8 encodings. Perhaps I should, but it's
not completely clear to me what the best approach is [2].
Currently, you must have both otopi and engine updated to get things
working. If there is demand, I might spend some time
splitting/rebasing/etc to make it possible to update just one of them
and only later the other, but not sure it's worth it.
I don't mind splitting/squashing if it makes reviews simpler, but I
think the patches are ok as-is. These are the bottom patches of each
stack:
otopi: https://gerrit.ovirt.org/102085
engine-setup: https://gerrit.ovirt.org/102934
[1] http://python-future.org/unicode_literals.html
[2] https://stackoverflow.com/questions/4012571/python-which-encoding-is-used...
Thanks and best regards,
--
Didi
[VDSM] storage python 3 tests status
by Nir Soffer
Sharing progress on python 3 work.
## 2 weeks ago
git checkout @{two.weeks.ago}
tox -e storage-py37
...
1448 passed, 103 skipped, 100 deselected, 322 xfailed, 236 warnings in 71.40 seconds
## Today
git checkout
tox -e storage-py37
...
1643 passed, 103 skipped, 100 deselected, 105 xfailed, 82 xpassed, 107 warnings in 79.72 seconds
Change:
*passed: +195*
*skips: 0*
*xfail: -217*
*xpass: +107*
*warnings: -129*
## XPASS
The 107 XPASS are most likely an easy fix: just remove the xfail() mark
from the test or parameter.
Patches should not introduce new XPASS; you should remove all xfail()
marks that a patch fixes. To verify that your patch does not introduce new
XPASS, add this option to tox.ini:
[pytest]
xfail_strict = True
This converts XPASS to FAIL.
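For reference, a sketch of the kind of marks to remove (the test names and
reasons here are hypothetical, not taken from the vdsm tree):

    import pytest

    # whole-test mark: drop it once the test passes on python 3
    @pytest.mark.xfail(reason="needs python 3 port")
    def test_volume_size():
        ...

    # per-parameter mark: drop only the fixed parameter's mark
    @pytest.mark.parametrize("fmt", [
        "cow",
        pytest.param("raw", marks=pytest.mark.xfail(reason="needs python 3 port")),
    ])
    def test_create(fmt):
        ...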
These modules need to be fixed:
$ egrep '^storage.+_test.py.+X' storage-py37.out
storage/filevolume_test.py ...XXXxXXXXxXXX [ 19%]
storage/formatconverter_test.py XXXXXxxss [ 20%]
storage/merge_test.py XXXXXXxxxxXXxXxxxxxXxxxxxx [ 38%]
storage/nbd_test.py ssssssssssssX. [ 49%]
storage/sd_manifest_test.py XXXXXXXXXxxxxxxxxxxxxxxxxxxxxxxx............ [ 65%]
storage/sdm_amend_volume_test.py xXxXxXxX [ 66%]
storage/sdm_copy_data_test.py xXXxxxxXXXXXxxxXXXxxXXxxXXx [ 67%]
storage/sdm_merge_test.py xxxxXXX [ 79%]
storage/sdm_update_volume_test.py xXxXxxXXxxXXxXxXxXxX....... [ 81%]
storage/testlib_test.py xXXXxxXXXXXXXXxxxxxxxxxxxxxXxXxXxX.......... [ 87%]
storage/volume_metadata_test.py ..........................X............. [ 90%]
## xfail
We must fix these xfails before we can run OST. If we start running OST
now, we will waste days debugging OST; debugging failing tests is 1000X
faster.
$ egrep '^storage.+_test.py.+x' storage-py37.out
storage/blocksd_test.py ........x..........sssssssssss............ [  4%]
storage/blockvolume_test.py ...........xxxxxxxxx [  5%]
storage/fileutil_test.py ..xx....ss............................ [ 19%]
storage/filevolume_test.py ...XXXxXXXXxXXX [ 19%]
storage/formatconverter_test.py XXXXXxxss [ 20%]
storage/merge_test.py XXXXXXxxxxXXxXxxxxxXxxxxxx [ 38%]
storage/sd_manifest_test.py XXXXXXXXXxxxxxxxxxxxxxxxxxxxxxxx............ [ 65%]
storage/sdm_amend_volume_test.py xXxXxXxX [ 66%]
storage/sdm_copy_data_test.py xXXxxxxXXXXXxxxXXXxxXXxxXXx [ 67%]
storage/sdm_merge_test.py xxxxXXX [ 79%]
storage/sdm_update_volume_test.py xXxXxxXXxxXXxXxXxXxX....... [ 81%]
storage/testlib_test.py xXXXxxXXXXXXXXxxxxxxxxxxxxxXxXxXxX.......... [ 87%]
## Skips
Skips should be used only when a test cannot run in a specific
environment, but I think we have some wrong skips that should have been
xfails.
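To illustrate the distinction with hypothetical tests: skip when the
environment cannot run the test at all, xfail when the code under test is
known to be broken:

    import os
    import pytest

    # a proper skip: the environment lacks a hard requirement
    @pytest.mark.skipif(os.geteuid() != 0, reason="requires root")
    def test_mount():
        ...

    # a wrong skip that should be an xfail: the code, not the environment, is broken
    @pytest.mark.xfail(reason="not ported to python 3 yet")
    def test_blockvolume_metadata():
        ...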
$ egrep '^storage.+_test.py.+s' storage-py37.out
storage/backends_test.py ss [  2%]
storage/blockdev_test.py ssss....s [  2%]
storage/blocksd_test.py ........x..........sssssssssss............ [  4%]
storage/devicemapper_test.py s. [  9%]
storage/fileutil_test.py ..xx....ss............................ [ 19%]
storage/formatconverter_test.py XXXXXxxss [ 20%]
storage/loopback_test.py ssss [ 31%]
storage/lvm_test.py ....................ssssssssssssssssssss [ 33%]
storage/lvmfilter_test.py .....ss.......... [ 35%]
storage/managedvolume_test.py ssssssssssssssssss.... [ 36%]
storage/misc_test.py .................sssssssss...............ssss...... [ 41%]
storage/mount_test.py .........ss........ssssss [ 47%]
storage/nbd_test.py ssssssssssssX. [ 49%]
storage/udev_multipath_test.py .......................s [ 88%]
## Warnings
See "warnings summary" in pytest output.
Looks less useful, report issues in packages we use instead of issues in
our code.
But this may be an issue using old versions of packages.
See attached results created with:
tox -e storage-py37 > storage-py37.out 2>&1
Nir
Ovirt Manager - FATAL: the database system is in recovery mode
by Sameer Sardar
PACKAGE_NAME="ovirt-engine"
PACKAGE_VERSION="3.6.2.6"
PACKAGE_DISPLAY_VERSION="3.6.2.6-1.el6"
OPERATING SYSTEM="Centos 6.7"
It's been running error-free for over 3 years. Over the past few weeks,
the oVirt Manager application has frequently been losing its connection to
its own database, which resides on the same server. It throws errors
saying: “Caused by org.postgresql.util.PSQLException: FATAL: the database
system is in recovery mode”. The only way we have found to recover is to
hard-reboot the server, after which it works fine for a couple of days
before the errors reappear.
Here are some logs:
Caused by: org.postgresql.util.PSQLException: FATAL: the database system is in recovery mode
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:293)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:108)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:66)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:125)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:22)
at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:32)
at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:24)
at org.postgresql.Driver.makeConnection(Driver.java:393)
at org.postgresql.Driver.connect(Driver.java:267)
at org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createLocalManagedConnection(LocalManagedConnectionFactory.java:322)
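"FATAL: the database system is in recovery mode" is what PostgreSQL reports
to new connections while it replays WAL after a backend process crashed, so
the server log around the first occurrence usually names the real trigger
(often a backend terminated by a signal, e.g. by the OOM killer). A rough
sketch for scanning the logs; the directory is an assumption based on the
EL6 default data directory, adjust it for your setup:

    import glob
    import io

    # look for crash markers in the postgres server logs
    for path in sorted(glob.glob("/var/lib/pgsql/data/pg_log/*.log")):
        with io.open(path, 'r', errors='replace') as f:
            for line in f:
                if 'terminated by signal' in line or 'out of memory' in line:
                    print(path, line.strip())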
changing mkIsoFs path
by Dan Kenigsberg
I stumbled by accident on https://gerrit.ovirt.org/#/c/102698/
("py3: mkimage: Remove md5 part in getFileName")
and I don't think it is right: if a VM is created in a 4.3 cluster and then
migrated to a 4.4 cluster, qemu would not find the ISO image where it
expects it, and it would break.
I don't recall WHY we had the md5 included in the path. It may well have
been a mistake. I just do not see an (easy) backward-compatible way to
drop it now.
[oVirt] [CQ weekly status] [30-08-2019]
by Dusan Fodor
Hi,
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to below colour map for further information on the meaning of
the colours.
*CQ-4.2*: NOT RELEVANT
This CQ is not followed anymore, since it's about to be decommissioned.
*CQ-4.3*: YELLOW (#1)
Last failure was on 28-08 for ovirt-web-ui. It was solved soon after by
change 57c5c4b.
*CQ-Master:* RED (#1)
Last failure was on 30-08 for ovirt-engine in host-deployment. This issue
is under investigation.
Current running jobs for 4.2 [1], 4.3 [2] and master [3] can be found here:
[1] http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.2_change-queue-tester/
[2] https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change-queue-tester/
[3] http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/
Have a nice week!
Dusan
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test
coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** intermittent failures are expected in a healthy project, as a number of
failures occur during the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active Failures. The colour will change based on the amount of time the
project/s has been broken. Only active regressions would be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
Failure adding host due to missing 'ovirt_host_deploy'
by Eyal Shenitzky
Hi all,
I am failing to add a new host to an oVirt environment; the engine runs on
fedora-30.
ovirt_host_deploy isn't found under
http://resources.ovirt.org/pub/ovirt-master-snapshot/rpm/fc30 so
I installed the fc29 version.
*Error:*
File "/tmp/ovirt-biYslZJETn/otopi-plugins/ovirt-host-deploy/kdump/packages.py", line 37, in <module>
    from ovirt_host_deploy import constants as odeploycons
otopi.main.PluginLoadException: No module named 'ovirt_host_deploy'
*Installed packages:*
ovirt-host-deploy-common-1.9.0-0.0.master.20190722100027.git138fb90.fc29.noarch
python2-ovirt-host-deploy-1.9.0-0.0.master.20190722100027.git138fb90.fc29.noarch
ovirt-host-deploy-common-1.9.0-0.0.master.20190722100027.git138fb90.fc29.noarch
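Note that only the python2 binding appears in that list. A quick check of
which interpreter can import the module (run it with both python2 and
python3 to compare; on fc30 otopi presumably runs under python3):

    # an ImportError here reproduces the PluginLoadException above
    try:
        import ovirt_host_deploy
        print("ok:", ovirt_host_deploy.__file__)
    except ImportError as e:
        print("missing:", e)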
Has anyone encountered this issue?
Thanks,
--
Regards,
Eyal Shenitzky