Python 3 vdsm RPM packages
by Marcin Sobczyk
Hi,
On yesterday's vdsm weekly call, we were discussing the need to make
Python 3 vdsm RPM packages.
Some facts:
- it doesn't make a lot of sense to spend much time trying to package
everything - it's completely impossible, e.g., to run vdsm without the
'sanlock' module
- our current vdsm.spec file is crap
Two non-exclusive proposals were raised:
- let's try to make a quick-and-dirty patch that completely overwrites
the existing 'vdsm.spec' (effectively making it Python 3-only) for
testing purposes, and maintain it for a while
- in the meantime, let's write a completely new, clean and beautiful
spec file (also Python 3-only) in a package-by-package, incremental
manner, which would eventually replace the original one
The quick-and-dirty spec file would be completely unsupported by CI. The
new one would get a proper CI sub-stage in 'build-artifacts' stage.
The steps that need to be done are:
- prepare autotools/Makefiles to differentiate Python 2/Python 3 RPM builds
- prepare the new spec file (for now including only 'vdsm-common' package)
- split 'build-artifacts' stage into 'build-py27' and 'build-py36'
sub-stages (the latter currently running on fc28 only)
The only package we can start with, when making the new spec file, is
'vdsm-common', as it doesn't depend on anything else (or at least I hope
so...).
There were also proposals about how the new spec file should differ
from the old one (like making the 'vdsm' package a meta-package). This
is a good time for these proposals to be raised, reviewed and
documented (something like this maybe?
https://docs.google.com/document/d/13EXN1Iwq-OPoc2A5Y3PJBpOiNC10ugx6eCE72...),
so we can align the new spec file as we build it.
I can lay the groundwork by doing the autotools/Makefiles work and the
'build-artifacts' splitting. Gal Zaidman agreed to start working on
the new spec file. Milan mentioned that he has something like the
quick-and-dirty patch already - maybe he can share it with us.
Questions, comments are welcome.
Regards, Marcin
Re: VDSM and safelease
by Michal Skrivanek
On 5 Apr 2019, at 09:31, Martin Tessun <mtessun(a)redhat.com> wrote:
Hi Sandro, Michal,
On 4/5/19 9:01 AM, Sandro Bonazzola wrote:
On Thu, Apr 4, 2019 at 19:15 Michal Skrivanek <michal.skrivanek(a)redhat.com> wrote:
>
>
> On 3 Apr 2019, at 13:24, Martin Perina <mperina(a)redhat.com> wrote:
>
>
>
> On Wed, Apr 3, 2019 at 10:22 AM Martin Tessun <mtessun(a)redhat.com> wrote:
>
>> On 4/3/19 8:25 AM, Sandro Bonazzola wrote:
>>
>> So, according to the thread we have a few action items:
>> - Decide if we'll drop export domain and iso domain in 4.4
>>
>>
> Just please don't forget that the 4.4 engine will need to support 4.2, 4.3 and
> 4.4 hosts, and we will need to allow migrating VMs from 4.2/4.3 hosts (EL7)
> to 4.4 hosts (EL8), so we need to be careful about the implications of removing
> the ISO and export data domains.
>
>
> In general we can't remove anything while the corresponding cluster level
> is still supported. So feel free to drop anything we used in <4.2, but
> think twice (and run it through the virt team at least) before you remove
> anything used in 4.2+
>
Ok, so: do we need to take safelease with us in 4.4? If the answer is yes,
I need to ask for a maintainer to be found for it, since we'll need to package
it for RHEL 8 / CentOS 8.
This is currently blocking pre-integration testing with RHEL 8 Beta, so it
needs to be addressed quickly in order to be able to proceed.
First question: up to which cluster level is safelease needed? Is it needed
in cluster level 4.2?
Also 4.3
If so: safelease is probably only required on RHEL 7 hypervisors then. In
that case we don't need it for RHEL 8 hypervisors.
If we won't have it on RHEL 8/4.4 then you are effectively cutting off <4.4
support in 4.4 vdsm, which also means no live migration at 4.3 compatibility
level, which means no way to upgrade to 4.4 with running VMs.
I do not think that's acceptable.
So in case we remove safelease from RHV 4.4/RHEL 8-based hypervisors, we
should be fine.
In case safelease is needed for the engine, we need to think about whether
we want to move the engine to RHEL 8.
Cheers,
Martin
>
>> If we do so, we still need a way to clean these old domains up, i.e.
>> moving the ISOs to a data domain or "migrating" an existing ISO domain into
>> a data domain.
>> The export domain is probably easier, as the OVAs can simply be copied to a
>> central location. Maybe having an export domain available as a second way
>> to upload VMs (e.g. for bulk imports) still makes sense, especially as I
>> believe v2v relies on export domains today.
>>
>> So while I am in favor of getting rid of unneeded code, we need to think
>> about the benefits they both have and how to get them implemented in case
>> we agree on removing the old domains.
>>
>> So what are the benefits of the ISO domain:
>>
>> - Easy to add new ISOs (just copy them to the NFS location)
>> - Simple way of sharing between DCs/RHV-Ms
>> - Having a central place for the ISOs
>>
>> The 3rd item can be achieved by admins simply using one storage domain
>> just for ISOs.
>> The 2nd would probably require some sort of read-only SDs or a way to
>> have one SD shared between DCs with just one DC having write access.
>> The 1st one is probably the hardest, as there is no easy way of adding data
>> to the storage domain without tooling. Maybe there are also other ways of
>> achieving this; the above are just my ideas.
>>
>>
>> Next - what are the benefits of Export Domain:
>>
>> - Unattended import
>> - Bulk import and export
>> - Central location
>> - Easily sharable between DCs/RHV-Ms
>>
>> The 4th one is already achieved, as we have a common import/export tool,
>> so the OVAs can be easily shared and used by different DCs/RHV-Ms.
>> The 3rd one is something that could easily be achieved administratively.
>> The 2nd one is already more complicated, but can probably be solved with
>> Ansible (as can the 1st one, probably).
>>
>>
>> So from my PoV it is easiest to remove the Export Domain while still
>> having all needed features available. The ISO domain seems a bit harder to
>> me.
>> Please think about how to solve this, before we decide on removing both
>> of them.
>>
>>
>> Cheers,
>> Martin
>>
>>
>> - Move requirements from safelease to vdsm for numactl, dmidecode and
>> virt-v2v if not already done
>> - Elect a maintainer for safelease for 4.3 scope
>> - Deprecate safelease in 4.3 and remove it on master if we agree on
>> removing iso and export domain in 4.4
>>
>> On Tue, Apr 2, 2019 at 18:14 Nir Soffer <nsoffer(a)redhat.com> wrote:
>>
>>> On Tue, Apr 2, 2019 at 6:40 PM Dan Kenigsberg <danken(a)redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On Tue, Apr 2, 2019 at 6:07 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
>>>>
>>>>> On Tue, Apr 2, 2019 at 5:00 PM Sandro Bonazzola <sbonazzo(a)redhat.com>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>> I stumbled upon the safelease package, introduced in oVirt 3.6.
>>>>>> I realigned the spec file with Fedora Rawhide:
>>>>>> https://gerrit.ovirt.org/#/c/99123/
>>>>>> and then I stopped working on it and decided to open a thread here.
>>>>>>
>>>>>> The safelease package is required by vdsm.
>>>>>> I searched for the home page for this package since it moved and
>>>>>> found: https://ovirt.org/develop/developer-guide/vdsm/safelease.html
>>>>>> This says that sanlock is meant to obsolete safelease.
>>>>>> I'm assuming that safelease was used in 3.6 and later replaced by
>>>>>> sanlock, then kept for backward compatibility.
>>>>>> In 4.3 we dropped support for 3.6-level clusters; is this package
>>>>>> still needed?
>>>>>>
>>>>>
>>>>> safelease is our clusterlock with V1 storage domains - export and iso
>>>>> domains.
>>>>>
>>>>> https://github.com/oVirt/vdsm/blob/f433ed5aaf67729b787cf82ee21b0f17af968b...
>>>>> https://github.com/oVirt/vdsm/blob/master/lib/vdsm/storage/sd.py#L320
>>>>>
>>>>> Once we remove these domains we can also remove safelease.
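>>>>>
>>>>> Roughly, the selection looks like this (a simplified sketch, not the
>>>>> actual vdsm code - the real logic is in the sd.py link above):
>>>>>
>>>>> class SafeLease: ...   # wraps the external safelease helper
>>>>> class SANLock: ...     # wraps libsanlock
>>>>>
>>>>> def cluster_lock_for(domain_version):
>>>>>     # Legacy V1 (export/ISO) domains still need safelease;
>>>>>     # newer domain versions use sanlock.
>>>>>     return SafeLease() if domain_version == 1 else SANLock()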
>>>>>
>>>>> If it's still needed, why is it requiring:
>>>>>> # Numactl is not available on s390[x] and ARM
>>>>>> %ifnarch s390 s390x %{arm}
>>>>>> Requires: numactl
>>>>>> %endif
>>>>>>
>>>>>> %ifarch x86_64
>>>>>> Requires: python2-dmidecode
>>>>>> Requires: dmidecode
>>>>>> Requires: virt-v2v
>>>>>> %endif
>>>>>>
>>>>>
>>>>> These are hacks Yaniv added so we could make vdsm a noarch package.
>>>>> Since then we reverted back to an arch-specific vdsm package, but the
>>>>> bad requirements remained in safelease.
>>>>>
>>>>> We can safely remove the requirements from safelease if vdsm requires
>>>>> these packages, but
>>>>> I'm not sure who has time to work on safelease.
>>>>>
>>>>> I think it is time to remove the export and ISO domains in 4.4.
>>>>>
>>>>
>>>> Would it be possible?
>>>> If an ovirt-4.3 storage pool has an ISO domain, and we add an ovirt-4.4
>>>> host to it, we would like it to be able to become SPM.
>>>>
>>>
>>> In RHEL 8.1 / vdsm 4.4, I don't want to support export or ISO domains
>>> regardless of the cluster version.
>>>
>>> We don't have the time to port all the code in vdsm to Python 3. If you
>>> want Python 3, you need to remove some features.
>>>
>>> If you want to mix 4.4 hosts with a 4.3 env, detach the ISO domain and
>>> export domain?
>>>
>>> Tal, what do you think?
>>>
>>>>
>>
>
>
> --
> Martin Perina
> Manager, Software Engineering
> Red Hat Czech s.r.o.
>
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
--
Martin Tessun
Senior Technical Product Manager KVM, Red Hat GmbH (Munich Office)
mobile +49.173.6595494
desk +49.89.205071-107
fax +49.89.205071-111
GPG Fingerprint: EDBB 7C6A B5FE 9199 B861 478D 3526 E46D 0D8B 44F0
Red Hat GmbH, http://www.de.redhat.com/ Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Michael O'Neill, Tom Savage, Eric Shander
Network Labels and API Access (Error in documentation)
by jorr@streamguys.com
The self-hosted documentation for network labels appears to be inaccurate.
https://{{ MY_DOMAIN }}/ovirt-engine/apidoc/#/services/network_labels/methods/add
The documentation asks the user to post to the following URI:
POST /ovirt-engine/api/networks/123/labels
However, that results in a 404. The following URI should be posted to instead for a successful response:
POST /ovirt-engine/api/networks/123/networklabels
I discovered the discrepancy when trying to automate adding networks to my cluster with a curl loop, and kept getting 404s. The 'links' section of the specific network actually does contain the right URI to hit for proper label tagging:
"link": [
{
"href": "/ovirt-engine/api/networks/929fec34-7a34-4c1b-9451-e6abf6733ac6/networklabels",
"rel": "networklabels"
},
{
"href": "/ovirt-engine/api/networks/929fec34-7a34-4c1b-9451-e6abf6733ac6/permissions",
"rel": "permissions"
},
{
"href": "/ovirt-engine/api/networks/929fec34-7a34-4c1b-9451-e6abf6733ac6/vnicprofiles",
"rel": "vnicprofiles"
}
]
The hosted documentation is also confusing: 'networklabels' is used in the results of some requests, but this section here just says 'labels':
https://ovirt.github.io/ovirt-engine-api-model/4.1/#services/network_labe...
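For reference, here is a minimal Python sketch of the working call (the engine URL, credentials, network UUID and label id are placeholders, and the XML body is my assumption based on the apidoc's add method):

import requests

ENGINE = "https://engine.example.com/ovirt-engine/api"  # placeholder
NETWORK_ID = "929fec34-7a34-4c1b-9451-e6abf6733ac6"    # placeholder
AUTH = ("admin@internal", "password")                  # placeholder

# Note the .../networklabels path - posting to .../labels returns a 404.
resp = requests.post(
    f"{ENGINE}/networks/{NETWORK_ID}/networklabels",
    auth=AUTH,
    headers={"Content-Type": "application/xml", "Accept": "application/xml"},
    data='<network_label id="my_label"/>',
    verify=False,  # quick test only; point this at the engine CA bundle instead
)
resp.raise_for_status()
print(resp.status_code, resp.text)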
VDSM and safelease
by Sandro Bonazzola
Hi,
I stumbled upon the safelease package, introduced in oVirt 3.6.
I realigned the spec file with Fedora Rawhide:
https://gerrit.ovirt.org/#/c/99123/
and then I stopped working on it and decided to open a thread here.
The safelease package is required by vdsm.
I searched for the home page for this package since it moved and found:
https://ovirt.org/develop/developer-guide/vdsm/safelease.html
This says that sanlock is meant to obsolete safelease.
I'm assuming that safelease was used in 3.6 and later replaced by sanlock,
then kept for backward compatibility.
In 4.3 we dropped support for 3.6-level clusters; is this package still
needed?
If it's still needed, why is it requiring:
# Numactl is not available on s390[x] and ARM
%ifnarch s390 s390x %{arm}
Requires: numactl
%endif
%ifarch x86_64
Requires: python2-dmidecode
Requires: dmidecode
Requires: virt-v2v
%endif
while none of the above packages seem to be used within the source code of
this package?
The currently listed maintainers for this package are:
Saggi Mizrahi (doesn't seem to be in the oVirt project anymore)
Yaniv Bronhaim (doesn't seem to be in the oVirt project anymore)
Yoav Kleinberger (doesn't seem to be in the oVirt project anymore)
Any thoughts?
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
Suspend resume and DHCP
by Dominik Holler
Hello,
would you help me understand whether the DHCP client in an oVirt guest
should refresh its DHCP configuration after the guest is resumed?
If this is the case, how should this be triggered?
The reason why I ask is that if a VM is suspended on a first host and
resumed on a second one, libvirt's nwfilter loses the IP address of
the guest, which means that the guest is not reachable until it
refreshes its DHCP config, if the clean-traffic filter with
CTRL_IP_LEARNING=dhcp is used.
This scenario might happen in OST basic-suite-master and
basic-suite-4.3 in verify_suspend_resume_vm0.
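If it turns out the guest itself has to do the refresh, I would imagine
something as simple as this running inside the guest (just a sketch - it
assumes dhclient manages the NIC, the interface name is a placeholder, and
how to trigger it on resume is exactly my question):

import subprocess

IFACE = "eth0"  # placeholder: the DHCP-managed guest NIC

def renew_dhcp_lease(iface):
    # Release the current lease and request a new one, so the resumed guest
    # emits fresh DHCP traffic from which nwfilter can re-learn the IP.
    subprocess.run(["dhclient", "-r", iface], check=False)
    subprocess.run(["dhclient", iface], check=True)

if __name__ == "__main__":
    renew_dhcp_lease(IFACE)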
Thanks
Dominik
[ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 22-03-2019 ] [004_basic_sanity.hotplug_cpu ]
by Dafna Ron
Hi,
We are failing ovirt-engine master on test 004_basic_sanity.hotplug_cpu.
Looking at the logs, we can see that for some reason libvirt reports a
VM as non-responsive, which fails the test.
The first CQ failure was for patch:
https://gerrit.ovirt.org/#/c/98553/ - core: Add display="on" for mdevs, use
nodisplay to override
But I do not think this is the cause of the failure.
Adding Marcin, Milan and Dan as well, as I think it may be network related.
You can see the libvirt log here:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/arti...
you can see the full logs here:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13516/artif...
Evgheni and I confirmed this is not an infra issue and that the problem is
the SSH connection to the internal VM.
Thanks,
Dafna
error:
2019-03-22 15:08:22.658+0000: 22068: warning : qemuDomainObjTaint:7521 : Domain id=3 name='vm0' uuid=a9443d02-e054-40bb-8ea3-ae346e2d02a7 is tainted: hook-script
2019-03-22 15:08:22.693+0000: 22068: error : virProcessRunInMountNamespace:1159 : internal error: child reported: unable to set security context 'system_u:object_r:virt_content_t:s0' on '/rhev/data-center/mnt/blockSD/91d97292-9ac3-4d77-a152-c7ea3250b065/images/e60dae48-ecc7-4171-8bfe-42bfc2190ffd/40243c76-a384-4497-8a2d-792a5e10d510': No such file or directory
2019-03-22 15:08:28.168+0000: 22070: error : qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest agent is not connected
2019-03-22 15:08:58.193+0000: 22070: error : qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest agent is not connected
2019-03-22 15:13:58.179+0000: 22071: error : qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest agent is not connected