Clicking on Compute, Templates, the HE complains that the export domain is
Isn't the export domain deprecated, and shouldn't the import window default to
a storage domain?
Should I open a bug in Bugzilla?
On Thu, Jun 20, 2019 at 6:53 AM <bugzilla(a)redhat.com> wrote:
> Shirly Radco <sradco(a)redhat.com> changed:
> What |Removed |Added
> Status|POST |MODIFIED
> Flags|needinfo?(sradco(a)redhat.com) |
> You are receiving this mail because:
> You reported the bug.
We have a new random failure in this test:
[2019-06-18T16:05:56.576Z] =================================== FAILURES
[2019-06-18T16:05:56.576Z] self = <ssl_test.SSLTests
[2019-06-18T16:05:56.576Z] def testConnectWithoutCertificateFails(self):
[2019-06-18T16:05:56.576Z] Verify that the connection without a
[2019-06-18T16:05:56.576Z] with self.assertRaises(cmdutils.Error):
[2019-06-18T16:05:56.576Z] args = ["openssl", "s_client", "-connect", "%s:%d" % self.address]
[2019-06-18T16:05:56.576Z] > commands.run(args)
[2019-06-18T16:05:56.576Z] E AssertionError: Error not raised
[2019-06-18T16:05:56.576Z] ssl_test.py:173: AssertionError
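For context, a rough standalone approximation of what the failing check does (just a
sketch; the real test lives in vdsm's ssl_test.py and uses vdsm's commands/cmdutils
helpers, so the function name, address and port below are illustrative only):

import subprocess

def connect_without_certificate(address):
    # address is the (host, port) of the TLS server under test.
    # openssl s_client should exit non-zero when the handshake is rejected,
    # which is what the test expects when no client certificate is presented.
    args = ["openssl", "s_client", "-connect", "%s:%d" % address]
    return subprocess.run(args, input=b"", capture_output=True)

# The flaky assertion, roughly: the command is expected to fail, but in the
# failing runs it apparently does not, hence "AssertionError: Error not raised".
result = connect_without_certificate(("127.0.0.1", 4433))
assert result.returncode != 0, "Error not raised"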
Marcin, can you look at it?
I've set up oVirt HCI across 3 servers.
Each server has 1x 480 GB SSD and 5x 1.8 TB SAS HDDs, running oVirt 4.3.3,
but I am still getting low performance.
Can anyone suggest tuning variables for the Gluster volume?
Remark: what should network.remote-dio be set to, enable or disable?
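In case it helps clarify the question, this is the kind of change I mean, applying a
single volume option from Python via the gluster CLI (the volume name "data" is just
a placeholder, and I'm not sure whether enable or disable is the right value, hence
the question):

import subprocess

def set_volume_option(volume, option, value):
    # Apply a single Gluster volume option via the gluster CLI.
    subprocess.run(["gluster", "volume", "set", volume, option, value], check=True)

# Example: the option asked about above; the correct value likely depends on the
# oVirt/Gluster versions and volume layout, so please verify before applying.
set_volume_option("data", "network.remote-dio", "enable")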
When there are image chain problems, sometimes it is not possible to
determine the correct chain due to lost information such as rotated logs. In
these cases it is usually desirable to remove a volume from the chain but
not from the storage until the user confirms that the VM is healthy and contains
the correct data. Currently this is done by moving files (file storage) or
renaming LVM tags (block storage) so that VDSM does not consider the particular
volume to be part of the chain.
Ideally I would like to have this done by a vdsm-tool command talking to the VDSM API.
The tool would be able to mark a volume as ignored or not, list all
ignored volumes, and perhaps even delete them (to free space).
How would you recommend proceeding with such a tool? Some ideas:
- Have a reserved image UUID which contains all ignored volumes, so they
are not part of any chain.
- Add an "ignore" metadata attribute to each volume.
I like the first option more, but I'm looking for opinions and ideas.
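To make the first option a bit more concrete, here is a minimal sketch of the idea
(purely illustrative; the reserved UUID, the images/<image-uuid>/<volume-uuid>
layout helpers and the function names are my assumptions, not existing VDSM code):

import os

# Assumption: a well-known reserved image UUID dedicated to ignored volumes.
IGNORED_IMAGE_UUID = "00000000-0000-0000-0000-00000000dead"

def ignore_volume(domain_path, image_uuid, volume_uuid):
    # Move the volume files out of their image directory into the reserved
    # "ignored" image, so chain reconstruction no longer sees them.
    src_dir = os.path.join(domain_path, "images", image_uuid)
    dst_dir = os.path.join(domain_path, "images", IGNORED_IMAGE_UUID)
    os.makedirs(dst_dir, exist_ok=True)
    for suffix in ("", ".meta", ".lease"):
        src = os.path.join(src_dir, volume_uuid + suffix)
        if os.path.exists(src):
            os.rename(src, os.path.join(dst_dir, volume_uuid + suffix))

def list_ignored_volumes(domain_path):
    # List volumes currently parked under the reserved image.
    dst_dir = os.path.join(domain_path, "images", IGNORED_IMAGE_UUID)
    if not os.path.isdir(dst_dir):
        return []
    return sorted(v for v in os.listdir(dst_dir)
                  if not v.endswith((".meta", ".lease")))

The nice property of this option is that the normal chain-walking code needs no
changes; block storage would need an equivalent move (or tag change) into the
reserved image.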
This mail is to provide the current status of CQ and allow people to review the
status before and after the weekend.
Please refer to the colour map below for further information on the meaning of
each status.
*CQ-4.2*: GREEN (#3)
The last failure was on 05-06, for project ovirt-ansible-cluster-upgrade, due to
a failure in test 002_bootstrap.add_master_storage_domain.
The fix was created and merged.
*CQ-4.3*: GREEN (#1)
There were two main failures this week.
1. vdsm was failing due to the sos change, which caused the metrics tests
to fail because they used the vdsm sos plugin, which is no longer available.
This was fixed by mperina:
2. There was a general failure due to packages. Two patches were merged to
fix the issue:
*CQ-Master:* GREEN (#1)
Master was failing on the same issues as 4.3, which are now fixed.
Currently running jobs for 4.2, 4.3 and master can be found
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test coverage
1-3 days GREEN (#1)
4-7 days GREEN (#2)
Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or current
regressions
** intermittent failures would still indicate a healthy project, as we expect a
number of failures during the week
** I will not report any of the solved failures or regressions.
Solved job failures YELLOW (#1)
Solved regressions YELLOW (#2)
Red = job has been failing
** Active failures. The colour will change based on the amount of time the
project(s) has been broken. Only active regressions will be reported.
1-3 days RED (#1)
4-7 days RED (#2)
Over 7 days RED (#3)
I have been testing the latest oVirt (4.3.4) by adding various bare-metal
hosts with various CPUs. I'm asking on this mailing list whether I should open
a ticket, and if so, what kind (bug? RFE?).
- AMD Ryzen 5
- Xeon SP (first gen)
- Xeon SP (second gen)
- Upcoming unannounced Xeon CPU.
The main oVirt machine (and the HE) is running on Xeon E3-1225 V3 CPU.
The big problem is that when you add, for example, any AMD processor
(whether it's a low-end Ryzen 5 or a high-end 32-core EPYC) to such a
setup, it will fail because oVirt tries to add the AMD machine to the default
cluster, which is defined as Intel-based VT-x. If I try to add a non-Xeon-V3
Intel CPU, it will succeed, but any live migration of VMs inside the
cluster will fail, since different Xeon generations have different VT flags.
So my suggestion is that when nodes/hosts are added to oVirt, they are placed
in a new "non-clustered" area with no features enabled (live
migration, for example), and after adding the machines, the admin
can assign nodes to a defined cluster (an existing one or one he/she creates).
Another problem with cluster definition is that the CPU type is too
complicated for any admin who doesn't know the nuts and bolts of each CPU.
For example, the Haswell family alone has *6* CPU type entries. Do you really expect
an admin to know which one to select? IMHO, a good solution would be to add
a "detect" button to hosts, and the result of clicking this button
should be a window with options like:
- Add this host to an existing cluster (which has the same CPU type)
- Auto-create a new cluster with the CPU type detected from the host
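To illustrate what such a "detect" step could look like, here is a rough, hypothetical
sketch that only checks the CPU vendor from /proc/cpuinfo (the real logic would of
course have to map the full CPU flag set to oVirt's cluster CPU types):

def read_cpuinfo(path="/proc/cpuinfo"):
    # Collect the first value seen for each /proc/cpuinfo key.
    info = {}
    with open(path) as f:
        for line in f:
            if ":" in line:
                key, value = line.split(":", 1)
                info.setdefault(key.strip(), value.strip())
    return info

def suggest_placement(cluster_vendor):
    # cluster_vendor: "GenuineIntel" or "AuthenticAMD", i.e. the vendor the
    # existing cluster's CPU type implies (hypothetical input).
    host_vendor = read_cpuinfo().get("vendor_id", "unknown")
    if host_vendor == cluster_vendor:
        return "Add this host to the existing cluster"
    return "Auto-create a new cluster for vendor %s" % host_vendor

print(suggest_placement("GenuineIntel"))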
I'll be happy to hear your thoughts about this issue.