[PTHREADING] No build for ppc64le and s390x, test failure on x86_64
by Nir Soffer
After adding Victor to jenkins whitelist, CI started to test
https://gerrit.ovirt.org/#/c/94974/
But we have only x86_64 jobs. This kind of project must have jobs for all
platforms.
The last maintainer was Yaniv Bronhaim, so I guess nobody remembers why we
have only x86_64 builds.
I hope someone can help move the project to the new stdci and add
ppc64le and s390x builds.
While I was writing this the build completed with this failure:
00:01:01.195 ========== Running the shellscript automation/check-patch.sh
00:01:01.204 ./automation/check-patch.sh: line 3: tox: command not found
This worked when I added tox for Fedora 25. Looks like we need
a simple patch to get tox on el7/fc28.
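For reference, a minimal sketch of the kind of guard check-patch.sh could grow to detect a missing tox; the command and package names here are assumptions, not the actual patch:

```shell
# Hypothetical sketch for automation/check-patch.sh: detect a missing tox
# before running it, instead of failing with "tox: command not found".
ensure_tox() {
    if command -v tox >/dev/null 2>&1; then
        echo "tox found: $(command -v tox)"
    else
        # On el7/fc28 this could be:  yum install -y python-tox
        # or a user-level fallback:   pip install --user tox
        echo "tox missing"
        return 1
    fi
}
ensure_tox || echo "would install tox here"
```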
Nir
3 years, 8 months
Re: List of Queries related to RHV
by Yaniv Lavi
YANIV LAVI
SENIOR TECHNICAL PRODUCT MANAGER
Red Hat Israel Ltd. <https://www.redhat.com/>
34 Jerusalem Road, Building A, 1st floor
Ra'anana, Israel 4350109
ylavi(a)redhat.com T: +972-9-7692306/8272306 F: +972-9-7692223 IM: ylavi
On Tue, Oct 16, 2018 at 4:35 PM Mahesh Falmari <Mahesh.Falmari(a)veritas.com>
wrote:
> Hi Nir,
>
> We have few queries with respect to RHV which we would like to understand
> from you.
>
>
>
> *1. Does RHV maintain the virtual machine configuration file in the back
> end?*
>
> Just like we have configuration files for other hypervisors (.vmx for
> VMware, .vmcx for Hyper-V) which capture most of the virtual machine
> configuration information, does RHV also maintain such a file? If not, is
> there another way to get all the virtual machine configuration information
> from a single API?
>
There is an OVF store, but it is not meant for direct consumption.
Please follow the first section in:
*https://ovirt.org/develop/release-management/features/storage/backup-restore-disk-snapshots/*
It explains how to use the API to get an OVF for a VM.
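As an illustration only (not the authoritative API), fetching a VM's configuration over REST might be sketched like this; the endpoint path and the All-Content header are assumptions based on common oVirt REST conventions, so verify them against the linked feature page:

```python
# Hypothetical sketch: build the REST request for fetching a VM's
# configuration. The endpoint path and the All-Content header are
# assumptions, not confirmed by this thread.

def build_vm_config_request(engine_url, vm_id):
    url = "%s/api/vms/%s" % (engine_url.rstrip("/"), vm_id)
    headers = {
        "Accept": "application/xml",
        "All-Content": "true",  # ask the engine for the full configuration
    }
    return url, headers

# The actual call would be something like (requires the 'requests' package):
# resp = requests.get(url, headers=headers,
#                     auth=("admin@internal", password), verify="ca.pem")
url, headers = build_vm_config_request(
    "https://engine.example.com/ovirt-engine", "1234-abcd")
print(url)
```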
>
>
> *2. Is the VM UUID the only way to uniquely identify a virtual machine in
> the RHV infrastructure?*
>
> Our understanding is that the VM UUID is the way to uniquely identify a
> virtual machine in the RHV infrastructure. Is there any other way?
>
UUID is the way to do this.
>
>
> *3. Do we have any version associated with the virtual machine?*
>
> Just like we have a hardware version in VMware and a virtual machine
> version in Hyper-V, does RHV also associate such a version with the
> virtual machine?
>
The HW version is based on the VM machine type.
>
>
> *4. Is it possible to create virtual machines with QCOW2 as base disks
> instead of RAW?*
>
> We would like to understand if there are any use cases where customers
> prefer creating virtual machines with QCOW2 base disks instead of RAW ones.
>
That is a possibility in the case of thin disks on file storage.
>
>
> *5. RHV Deployment*
>
> What kind of deployments have you come across in the field? Do customers
> scale their infrastructure by adding more datacenters/clusters/nodes, or
> do they add more RHV managers? What scenarios trigger having more than one
> RHV manager?
>
We see all kinds with oVirt. It depends on the use case.
>
> *6. Image transfer*
>
> We are trying to download disk chunks using multiple threads to improve
> the performance of reading data from RHV. Downloading 2 disk chunks
> simultaneously via threads should take approximately the same time as a
> single chunk, but from our observations it takes roughly 1.5 times as long.
> Can RHVM serve requests in parallel, and if so, are there any settings that
> need to be tweaked?
>
> Here is an example:
> Request 1 for chunk 1 from thread 1, Range: bytes=0-1023
> Request 2 for chunk 2 from thread 2, Range: bytes=1024-2047
> Takes roughly 1.5 seconds, whereas a single request would take 1 second.
> Expecting it to take just around 1 second.
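To make the range math concrete, here is a minimal sketch of splitting a download into the byte ranges shown above and fetching them from worker threads; the actual HTTP GET against the image transfer URL is stubbed out as a placeholder, and whether the requests truly run in parallel depends on the server side:

```python
import threading

def split_ranges(size, chunk_size):
    """Split [0, size) into inclusive (start, end) byte ranges, matching
    the Range headers in the example above (bytes=0-1023, bytes=1024-2047)."""
    return [(start, min(start + chunk_size, size) - 1)
            for start in range(0, size, chunk_size)]

def download_chunk(start, end, results, index):
    # Placeholder for the real HTTP GET with a
    # "Range: bytes=<start>-<end>" header against the transfer URL.
    results[index] = (start, end)

def parallel_download(size, chunk_size):
    """Fetch all chunks concurrently, one thread per chunk."""
    ranges = split_ranges(size, chunk_size)
    results = [None] * len(ranges)
    threads = [threading.Thread(target=download_chunk,
                                args=(s, e, results, i))
               for i, (s, e) in enumerate(ranges)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```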
>
>
>
> *7. Freeze and Thaw operations*
>
> For a Cinder-based VM, these APIs are recommended for an FS-consistent
> backup:
>
> - POST /api/vms/<ID>/freezefilesystems
> - POST /api/vms/<ID>/thawfilesystems
>
> Why do we need this, whereas it is not required for other storage?
>
Creating a snapshot does this for you in the case where you have the oVirt
guest agent installed on the guest.
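For illustration, the ordering that matters with these endpoints (always thaw, even if the snapshot fails) can be sketched like this; `post` and `take_snapshot` are hypothetical caller-supplied callables, not SDK functions:

```python
def fs_action_url(engine_url, vm_id, action):
    """Build the POST URL for the freeze/thaw endpoints quoted above."""
    assert action in ("freezefilesystems", "thawfilesystems")
    return "%s/api/vms/%s/%s" % (engine_url.rstrip("/"), vm_id, action)

def consistent_backup(post, engine_url, vm_id, take_snapshot):
    """Freeze, snapshot, then always thaw. `post` issues the HTTP POST and
    `take_snapshot` performs the backup step (both hypothetical)."""
    post(fs_action_url(engine_url, vm_id, "freezefilesystems"))
    try:
        return take_snapshot()
    finally:
        # Thaw even if the snapshot fails, so the guest FS is not left frozen.
        post(fs_action_url(engine_url, vm_id, "thawfilesystems"))
```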
>
>
> Thanks & Regards,
> Mahesh Falmari
>
>
>
VDSM API schema inconsistencies
by Marcin Sobczyk
Hi,
when I was working on BZ#1624306, it turned out that even if I fixed the
help-printing code in vdsm-client, there would still be a lot of
inconsistencies in the API's schema files.
After some investigation, I found that there is code for reporting
schema inconsistencies, but it's turned off because it bloats the logs.
There's also a "strict" mode that raises an exception each time an
inconsistency is found, but it's also off because there are so many of them.
I think that no one wants to maintain both the actual code and the schema
files. My idea would be to make the schema derived from the code at
build time as a long-term effort. Before we can do that, though, we need
to address the inconsistencies first.
https://gerrit.ovirt.org/#/q/status:open+project:vdsm+branch:master+topic...
This is a series of patches that helps fix schema issues. It adds
information from the stack to vastly ease the process of fixing schema
errors. The logging level is also changed from 'WARNING' to 'DEBUG'.
Here's an example of a logged entry:
Host.setKsmTune
With message: Required property pages_to_scan is not provided when
calling Host.setKsmTune
With backtrace: [
[
"_verify_object_type",
{
"call_arg_keys":[
"merge_across_nodes",
"run"
],
"schema_type_name":"KsmTuneParams"
}
],
[
"_verify_complex_type",
{
"schema_type_type":"object"
}
]
]
To make it work, a patch for OST's 'basic-master-suite' is on the way
that switches the 'schema.inconsistency' logger's level to 'DEBUG'.
That way, we can find all reported errors in 'vdsm-schema-error.log' files.
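For illustration, switching a named logger to DEBUG and routing it to its own file looks like this in stock Python logging; the file name mirrors the one mentioned above, but the handler setup is a sketch, not VDSM's actual logging config:

```python
import logging

# Route the schema inconsistency reports to their own logger at DEBUG,
# mirroring the 'schema.inconsistency' logger mentioned above.
logger = logging.getLogger("schema.inconsistency")
logger.setLevel(logging.DEBUG)

handler = logging.FileHandler("vdsm-schema-error.log", delay=True)
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
logger.addHandler(handler)
logger.propagate = False  # keep the reports out of the main log

logger.debug("Required property %s is not provided when calling %s",
             "pages_to_scan", "Host.setKsmTune")
```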
An initial report that groups reported errors by namespace/method has
also been made. Please note that an implementation that completely
avoids error duplicates is really hard and IMHO not worth the effort.
You can find the results here:
* Host: https://pastebin.com/YM2XnvTV
* Image: https://pastebin.com/GRc1yErL
* LVMVolumeGroup: https://pastebin.com/R1276gTu
* SDM: https://pastebin.com/P3tfYEDD
* StorageDomain: https://pastebin.com/Q0DxKgF5
* StoragePool: https://pastebin.com/VXFWpfRC
* VM: https://pastebin.com/tDC60c29
* Volume: https://pastebin.com/TYkr9Zsd
* |jobs|: https://pastebin.com/jeXpYyz9
* |virt|: https://pastebin.com/nRREuEub
Regards, Marcin
Upgrade to WildFly 14.0.1
by Martin Perina
Hi,
WildFly 14 packages are already available for both CentOS 7 and FC28, and
both oVirt engine master and 4.2 already contain fixes for compatibility
with WildFly 13 and 14. Please update your development machines to
contain both the ovirt-engine-wildfly-14.0.1 and
ovirt-engine-wildfly-overlay-14.0.1 packages.
The patch adding the requirement for WildFly 14 is prepared [1]; it should
be merged tomorrow (Thursday), and the backport to 4.2 will follow
immediately.
Anyway, if you find any issues related to this WildFly upgrade, please
report a bug asap; we need to have everything fixed in 4.2.7.
Thanks
Martin
[1] https://gerrit.ovirt.org/94824
--
Martin Perina
Associate Manager, Software Engineering
Red Hat Czech s.r.o.
diskunmap hooks and oVirt Node
by Sandro Bonazzola
Hi,
we just got Bug 1638317
<https://bugzilla.redhat.com/show_bug.cgi?id=1638317> - "missing VDSM hook
diskunmap in Node NG releases" -
opened as a bug, not an RFE, because the
"Pass discard from guest to underlying storage
<https://www.ovirt.org/develop/release-management/features/storage/pass-di...>"
feature introduced in oVirt 4.1 is not yet implemented for Cinder storage,
for which Bug 1440230 <https://bugzilla.redhat.com/show_bug.cgi?id=1440230>
- "[RFE] Allow "Pass discard from guest to underlying storage" for Cinder"
has been opened.
I'm writing to the people involved in the hook introduction
(https://gerrit.ovirt.org/#/c/29770/) to understand how safe it is to
include the hook in oVirt Node as a default installed hook.
I understand that the hook adds "discard=unmap" always, not only
on Cinder.
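For context, the effect of the hook on the domain XML can be sketched as follows; this is a standalone approximation (the real hook goes through vdsm's hooking module to read and write the domxml), shown here only to illustrate what adding "discard=unmap" means:

```python
import xml.dom.minidom

def add_discard_unmap(domxml):
    """Add discard='unmap' to every disk driver element in a libvirt
    domain XML string (rough sketch of what the diskunmap hook does)."""
    dom = xml.dom.minidom.parseString(domxml)
    for disk in dom.getElementsByTagName("disk"):
        for driver in disk.getElementsByTagName("driver"):
            driver.setAttribute("discard", "unmap")
    return dom.toxml()

sample = """<domain><devices>
<disk type='file' device='disk'><driver name='qemu' type='qcow2'/></disk>
</devices></domain>"""
print(add_discard_unmap(sample))
```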
I don't know the implications of it being enabled, other than supposedly
fixing the issue with Cinder storage. Looking at the feature page, it seems
this won't work with NFS storage, but other than not working, will it cause
issues?
I see Bug 1440230 <https://bugzilla.redhat.com/show_bug.cgi?id=1440230> is
un-targeted; is there any plan to get it into oVirt 4.3?
We have supported Cinder/Ceph since 3.6, and pass discard has been supported
since 4.1; I'm not sure what prevented pass discard from being implemented
for Cinder as well in 4.1. Can someone elaborate?
Thanks,
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
<https://www.redhat.com/en/events/red-hat-open-source-day-italia?sc_cid=70...>
discussing the future of the upgrade suite in ost
by Dafna Ron
Hi All,
I was reviewing the upgrade suites in ost and there are some issues that I
am seeing in the suite tests-scenarios which I want to discuss and decide
the future of.
At it current state, I think we should remove the upgrade suite or most of
the post test-scenarios as it is not testing what it should.
The tests currently only test engine upgrade and basic sanity after the
upgrade.
This is problematic in a few ways:
1. upgrade should test the upgrade of rhv and not just a clean engine
upgrade (i.e host, storage, vm).
2. as we have limited resources I do not think that the upgrade suite
should be longer then the basic suite (and as we are currently running the
basic suite after the upgrade it is longer)
That brings me to the question: what should be essential to test during
upgrade in the CI?
I would also need someone in dev to volunteer and take ownership of the
upgrade testing scenarios - is there anyone who can help?
Thanks,
Dafna
Fwd: VDSM API schema inconsistencies
by Nir Soffer
Adding devel.
---------- Forwarded message ---------
From: Nir Soffer <nsoffer(a)redhat.com>
Date: Wed, 10 Oct 2018, 11:07
Subject: Re: VDSM API schema inconsistencies
To: Piotr Kliczewski <pkliczew(a)redhat.com>
Cc: Dan Kenigsberg <danken(a)redhat.com>, Edward Haas <ehaas(a)redhat.com>,
Milan Zamazal <mzamazal(a)redhat.com>, Tomasz Baranski <tbaransk(a)redhat.com>,
Francesco Romani <fromani(a)redhat.com>, Irit Goihman <igoihman(a)redhat.com>,
Marcin Sobczyk <msobczyk(a)redhat.com>
On Wed, 10 Oct 2018, 10:21 Piotr Kliczewski, <pkliczew(a)redhat.com> wrote:
> +Nir
>
> On Wed, Oct 10, 2018 at 8:39 AM Marcin Sobczyk <msobczyk(a)redhat.com>
> wrote:
>
>> Hi,
>> I would love to get some feedback from you about it on the VDSM call
>> today so if you have a moment, please take a look.
>> Marcin
>>
>> ---------- Forwarded message ---------
>> From: Marcin Sobczyk <msobczyk(a)redhat.com>
>> Date: Mon, Oct 8, 2018 at 2:40 PM
>> Subject: VDSM API schema inconsistencies
>> To: devel <devel(a)ovirt.org>
>>
>>
>> Hi,
>>
>> when I was working on BZ#1624306 it turned out, that even if I'd fix the
>> help-printing code in vdsm-client, there's still a lot of inconsistencies
>> in API's schema files.
>>
>> After some investigation, I found that there's a code for reporting
>> schema inconsistencies, but it's turned off because it bloats the logs.
>>
> Right, and it should remain like this. In runtime
we never want these warnings.
>> There's also a "strict" mode that raises an exception each time an
>> inconsistency is found, but it's also off because there are so many of
>> them.
>>
>
Same.
Both are debugging aids for developers, to
make it easy to check the schema.
> I think that no one wants to maintain both: the actual code and schema
>> files.
>>
>
This is not a big issue. We rarely add or
change APIs. But it is nice if we don't need to
manually make the schema correct.
> My idea would be to make the schema derived from the code at build time as
>> a long-term effort.
>>
>
This cannot work, since clients depend on the schema; it should not change
when someone makes an unintended change in the code.
It should work the opposite way: you change the schema, and generate new
code from the schema.
There are existing tools like gRPC and Thrift that
work in this way. In the long term we should drop
our jsonrpc/stomp stack and move to gRPC.
In the short term we can just fix the schema to
reflect actual engine interaction. When engine
and schema do not match, the engine is usually
right. For responses, if the engine can handle the
response, the code is right.
Please do not make the RPC code more complex
and inefficient to improve schema issue
detection. I would rather remove this code
instead.
>> Before we can do that, though, we need to address the inconsistencies
>> first.
>>
>>
>> https://gerrit.ovirt.org/#/q/status:open+project:vdsm+branch:master+topic...
>>
>> This is a series of patches that helps fixing schema issues. What it
>> does, is it adds a bunch of information from stack to vastly ease the
>> process of fixing them. The logging level is also changed from 'WARNING' to
>> 'DEBUG'. Here's an example of a logged entry:
>>
>> Host.setKsmTune
>> With message: Required property pages_to_scan is not provided when
>> calling Host.setKsmTune
>> With backtrace: [
>> [
>> "_verify_object_type",
>> {
>> "call_arg_keys":[
>> "merge_across_nodes",
>> "run"
>> ],
>> "schema_type_name":"KsmTuneParams"
>> }
>> ],
>> [
>> "_verify_complex_type",
>> {
>> "schema_type_type":"object"
>> }
>> ]
>> ]
>>
>> To make it work, a patch for OST's 'basic-master-suite' is on the way
>> that switches 'schema.inconsistency' logger's logging level do 'DEBUG'.
>> That way, we can find all reported errors in 'vdsm-schema-error.log' files.
>>
>> An initial report that groups reported errors by namespaces/methods has
>> also been made. Please note, that an implementation that completely avoids
>> error duplicates is really hard and IMHO not worth the effort. You can find
>> the results here:
>>
>> - Host: https://pastebin.com/YM2XnvTV
>> - Image: https://pastebin.com/GRc1yErL
>> - LVMVolumeGroup: https://pastebin.com/R1276gTu
>> - SDM: https://pastebin.com/P3tfYEDD
>> - StorageDomain: https://pastebin.com/Q0DxKgF5
>> - StoragePool: https://pastebin.com/VXFWpfRC
>> - VM: https://pastebin.com/tDC60c29
>> - Volume: https://pastebin.com/TYkr9Zsd
>> - |jobs|: https://pastebin.com/jeXpYyz9
>> - |virt|: https://pastebin.com/nRREuEub
>>
>>
Please file a bug per vertical (infra, storage, virt) for
these issues. Since we don't know about real user issues with these APIs,
the schema must be what's incorrect.
Nir