Error: Adding new Host to ovirt-engine
by Ahmad Khiet
Hi,
I can't add a new host to ovirt-engine because of the following error:
2019-06-12 12:23:09,664 p=4134 u=engine | TASK [ovirt-host-deploy-facts : Set facts] *************************************
2019-06-12 12:23:09,684 p=4134 u=engine | ok: [10.35.1.17] => {
"ansible_facts": {
"ansible_python_interpreter": "/usr/bin/python2",
"host_deploy_vdsm_version": "4.40.0"
},
"changed": false
}
2019-06-12 12:23:09,697 p=4134 u=engine | TASK [ovirt-provider-ovn-driver : Install ovs] *********************************
2019-06-12 12:23:09,726 p=4134 u=engine | fatal: [10.35.1.17]: FAILED! => {}
MSG:
The conditional check 'cluster_switch == "ovs" or (ovn_central is defined
and ovn_central | ipaddr and ovn_engine_cluster_version is
version_compare('4.2', '>='))' failed. The error was: The ipaddr filter
requires python's netaddr be installed on the ansible controller
The error appears to be in
'/home/engine/apps/engine/share/ovirt-engine/playbooks/roles/ovirt-provider-ovn-driver/tasks/configure.yml':
line 3, column 5, but may be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- block:
- name: Install ovs
^ here
2019-06-12 12:23:09,728 p=4134 u=engine | PLAY RECAP *********************************************************************
2019-06-12 12:23:09,728 p=4134 u=engine | 10.35.1.17 : ok=3 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
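The message itself points at the likely fix: the Ansible "ipaddr" filter needs the netaddr Python library on the controller (here, the engine machine). A guess at the install commands (package name assumed for EL8, and assuming Ansible runs from the system Python; adjust for a virtualenv):

```shell
# Assumed package name for EL8; check which Python interpreter Ansible uses.
dnf install -y python3-netaddr
# or, if the engine's Ansible runs from a virtualenv / pip-managed Python:
pip install netaddr
```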
What's missing!?
Thanks
--
Ahmad Khiet
Red Hat <https://www.redhat.com/>
akhiet(a)redhat.com
M: +972-54-6225629
<https://red.ht/sig>
Alpha 4.5 Test Day | Deploy on Ceph Storage
by Jöran Malek
I tried deploying oVirt 4.5 Alpha (using the oVirt Node NG master
installer ISO for el8) with converged Ceph using following steps:
* Install two oVirt Node-nodes
* Install cephadm on both, single 240GB OSD per node (this is a nested
virtualization test environment)
* Deploy ceph cluster with
cephadm --skip-monitoring-stack --single-host-defaults
* Added OSDs to Ceph
* Created CephFS with "ceph fs volume create cephfs"
* Deployed NFS ganesha using ceph orch apply
* Added export "/ovirt" to NFS ganesha for CephFS "/", mounted CephFS
temporarily, changing its owner to 36:36
* Added RBD Pool "ovirt"
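For reference, the Ceph-side steps above might look roughly like this (a sketch based on the Pacific cephadm docs; hostnames, IPs, and device paths are placeholders, and the exact NFS export syntax varies by release, so treat these as assumptions rather than the exact commands used):

```shell
# Bootstrap on the first node with the flags mentioned above.
cephadm bootstrap --mon-ip 192.0.2.10 --skip-monitoring-stack --single-host-defaults

# Add one OSD per node (240 GB devices in this nested-virt test).
ceph orch daemon add osd node1:/dev/vdb
ceph orch daemon add osd node2:/dev/vdb

# CephFS plus an NFS ganesha service on every node, then an RBD pool
# for the managed block storage domain.
ceph fs volume create cephfs
ceph orch apply nfs ovirt --placement="node1,node2"
ceph osd pool create ovirt
rbd pool init ovirt
```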
At this point: Ceph is running, CephFS is working, NFS exports are available.
Performed the usual ovirt-hosted-engine-setup, answers file attached.
Hosted Engine: check, on an NFS storage domain over Ceph (not taking
the iSCSI route, as that is a black box to me). NFS "just works", and I
can connect to localhost as the server, making it the perfect candidate
for HA storage (NFS is deployed on every node in the cluster, because
Ceph is installed on every node).
Copied over the ceph.conf and ceph.client.admin.keyring to engine-VM,
and changed their owner to ovirt.
Applied cinderlib integration on Engine with
> engine-setup --reconfigure-optional-components
Added a block storage domain (configuration as in the blog post) and
got the following error, attached (engine.log, cinderlib.log).
German:
Fehler beim Ausführen der Aktion: Kann Speicher nicht hinzufügen.
Verbinden mit verwalteter Blockdomäne fehlgeschlagen.
English (translated):
Error while performing action: Unable to add storage. Failed to
connect to managed block storage domain.
Is there anything I can provide to help figure this out?
Ceph Version:
16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503)
pacific (stable)
For the Trello-board:
Installation on Ceph works as expected - iSCSI and NFS are
well-supported, and deployment with NFS is a bit easier than iSCSI.
Adding a managed block storage domain failed for me, though it works on
oVirt 4.4 with the exact same procedure.
Best,
Jöran
Re: [ovirt-users] Re: [=EXTERNAL=] Re: help using nvme/tcp storage with cinderlib and Managed Block Storage
by Nir Soffer
On Wed, Feb 23, 2022 at 6:24 PM Muli Ben-Yehuda <muli(a)lightbitslabs.com> wrote:
>
> Thanks for the detailed instructions, Nir. I'm going to scrounge up some hardware.
> By the way, if anyone else would like to work on NVMe/TCP support, for NVMe/TCP target you can either use Lightbits (talk to me offline for details) or use the upstream Linux NVMe/TCP target. Lightbits is a clustered storage system while upstream is a single target, but the client side should be close enough for vdsm/ovirt purposes.
I played with NVMe/TCP a little bit, using qemu to create a virtual
NVMe disk, exporting it using the kernel on one VM, and consuming it on
another VM.
https://futurewei-cloud.github.io/ARM-Datacenter/qemu/nvme-of-tcp-vms/
One question about device naming - do we always get the same device
name on all hosts? To support VM migration, every device must have a
unique name across the cluster.
With multipath we always have a unique name, since we disable "friendly
names", so we always have:
/dev/mapper/{wwid}
With rbd we also do not use /dev/rbdN but a unique path:
/dev/rbd/poolname/volume-vol-id
How do we ensure a cluster-unique device path? If os_brick does not handle it, we
can do it in oVirt, for example:
/run/vdsm/managedvolumes/{uuid} -> /dev/nvme7n42
but I think this should be handled in cinderlib, since OpenStack has
the same problem with migration.
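A minimal sketch of that aliasing idea (directory and names are illustrative, not vdsm's actual layout): resolve the unstable kernel name once at attach time and publish a stable, UUID-based symlink that the VM definition can reference on every host:

```shell
RUN_DIR=$(mktemp -d)/managedvolumes  # stand-in for e.g. /run/vdsm/managedvolumes
UUID=vol-123                         # hypothetical volume id
DEV=/dev/nvme7n42                    # whatever unstable name this host assigned

mkdir -p "$RUN_DIR"
ln -sfn "$DEV" "$RUN_DIR/$UUID"  # -f -n: replace any stale link in place
readlink "$RUN_DIR/$UUID"        # the stable alias now resolves to the device
```

The VM would then always reference the same alias, regardless of which /dev/nvmeXnY name the kernel happened to assign on the destination host.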
Nir
>
> Cheers,
> Muli
> --
> Muli Ben-Yehuda
> Co-Founder and Chief Scientist @ http://www.lightbitslabs.com
> LightOS: The Special Storage Sauce For Your Cloud
>
>
> On Wed, Feb 23, 2022 at 4:55 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
>>
>> On Wed, Feb 23, 2022 at 4:20 PM Muli Ben-Yehuda <muli(a)lightbitslabs.com> wrote:
>> >
>> > Thanks, Nir and Benny (nice to run into you again, Nir!). I'm a neophyte in ovirt and vdsm... What's the simplest way to set up a development environment? Is it possible to set up a "standalone" vdsm environment to hack support for nvme/tcp or do I need "full ovirt" to make it work?
>>
>> It should be possible to install vdsm on a single host or vm, and use vdsm
>> API to bring the host to the right state, and then attach devices and run
>> vms. But I don't know anyone who can pull this off, since simulating what
>> engine is doing is hard.
>>
>> So the best way is to set up at least one host and engine host using the
>> latest 4.5 rpms, and continue from there. Once you have a host, building
>> vdsm on the host and upgrading the rpms is pretty easy.
>>
>> My preferred setup is to create vms using virt-manager for hosts, engine
>> and storage and run all the vms on my laptop.
>>
>> Note that you must have some traditional storage (NFS/iSCSI) to bring up
>> the system even if you plan to use only managed block storage (MBS).
>> Unfortunately, when we added MBS support we did not have time to fix the huge
>> technical debt, so you still need a master storage domain using one of the
>> traditional legacy options.
>>
>> To build a setup, you can use:
>>
>> - engine vm: 6g ram, 2 cpus, centos stream 8
>> - hosts vm: 4g ram, 2 cpus, centos stream 8
>> you can start with one host and add more hosts later if you want to
>> test migration.
>> - storage vm: 2g ram, 2 cpus, any os you like, I use alpine since it
>> takes very little
>> memory and its NFS server is fast.
>>
>> See the vdsm README for instructions on how to set up a host:
>> https://github.com/oVirt/vdsm#manual-installation
>>
>> For engine host you can follow:
>> https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_...
>>
>> And after that this should work:
>>
>> dnf install ovirt-engine
>> engine-setup
>>
>> Accepting all the defaults should work.
>>
>> When you have engine running, you can add a new host with
>> the ip address or dns name of your host(s) vm, and engine will
>> do everything for you. Note that you must install the ovirt-release-master
>> rpm on the host before you add it to engine.
>>
>> Nir
>>
>> >
>> > Cheers,
>> > Muli
>> > --
>> > Muli Ben-Yehuda
>> > Co-Founder and Chief Scientist @ http://www.lightbitslabs.com
>> > LightOS: The Special Storage Sauce For Your Cloud
>> >
>> >
>> > On Wed, Feb 23, 2022 at 4:16 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
>> >>
>> >> On Wed, Feb 23, 2022 at 2:48 PM Benny Zlotnik <bzlotnik(a)redhat.com> wrote:
>> >> >
>> >> > So I started looking in the logs and tried to follow along with the
>> >> > code, but things didn't make sense and then I saw it's ovirt 4.3 which
>> >> > makes things more complicated :)
>> >> > Unfortunately because GUID is sent in the metadata the volume is
>> >> > treated as a vdsm managed volume[2] for the udev rule generation and
>> >> > it prepends the /dev/mapper prefix to an empty string as a result.
>> >> > I don't have the vdsm logs, so I am not sure where exactly this fails,
>> >> > but if it's after [4] it may be possible to workaround it with a vdsm
>> >> > hook
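An illustrative reduction of that failure mode (a sketch, not vdsm's actual code): when the GUID in the metadata is empty, prepending the prefix still "succeeds", producing the directory path seen in the logs rather than a device node:

```shell
# Illustrative only: vdsm addresses managed multipath volumes as
# /dev/mapper/<WWID>; an empty GUID degenerates to the directory itself.
GUID=""                       # what 4.3 effectively gets for this driver
path="/dev/mapper/${GUID}"
echo "$path"                  # /dev/mapper/ - a directory, not a device node
```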
>> >> >
>> >> > In 4.4.6 we moved the udev rule triggering to the volume mapping phase,
>> >> > before starting the VM. But it could still not work because we check
>> >> > the driver_volume_type in[1], and I saw it's "driver_volume_type":
>> >> > "lightos" for lightbits
>> >> > In theory it looks like it wouldn't take much to add support for your
>> >> > driver in a future release (as it's pretty late for 4.5)
>> >>
>> >> Adding support for nvme/tcp in 4.3 is probably not feasible, but we will
>> >> be happy to accept patches for 4.5.
>> >>
>> >> To debug such issues vdsm log is the best place to check. We should see
>> >> the connection info passed to vdsm, and we have pretty simple code using
>> >> it with os_brick to attach the device to the system and setting up the udev
>> >> rule (which may need some tweaks).
>> >>
>> >> Nir
>> >>
>> >> > [1] https://github.com/oVirt/vdsm/blob/500c035903dd35180d71c97791e0ce4356fb77...
>> >> >
>> >> > (4.3)
>> >> > [2] https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be7...
>> >> > [3] https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be7...
>> >> > [4] https://github.com/oVirt/vdsm/blob/b42d4a816b538e00ea4955576a5fe762367be7...
>> >> >
>> >> > On Wed, Feb 23, 2022 at 12:44 PM Muli Ben-Yehuda <muli(a)lightbitslabs.com> wrote:
>> >> > >
>> >> > > Certainly, thanks for your help!
>> >> > > I put cinderlib and engine.log here: http://www.mulix.org/misc/ovirt-logs-20220223123641.tar.gz
>> >> > > If you grep for 'mulivm1' you will see for example:
>> >> > >
>> >> > > 2022-02-22 04:31:04,473-05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (default task-10) [36d8a122] Command 'HotPlugDiskVDSCommand(HostName = client1, HotPlugDiskVDSParameters:{hostId='fc5c2860-36b1-4213-843f-10ca7b35556c', vmId='e13f73a0-8e20-4ec3-837f-aeacc082c7aa', diskId='d1e1286b-38cc-4d56-9d4e-f331ffbe830f', addressMap='[bus=0, controller=0, unit=2, type=drive, target=0]'})' execution failed: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = Failed to bind /dev/mapper/ on to /var/run/libvirt/qemu/21-mulivm1.mapper.: Not a directory, code = 45
>> >> > >
>> >> > > Please let me know what other information would be useful and I will provide it.
>> >> > >
>> >> > > Cheers,
>> >> > > Muli
>> >> > >
>> >> > > On Wed, Feb 23, 2022 at 11:14 AM Benny Zlotnik <bzlotnik(a)redhat.com> wrote:
>> >> > >>
>> >> > >> Hi,
>> >> > >>
>> >> > >> We haven't tested this, and we do not have any code to handle nvme/tcp
>> >> > >> drivers, only iscsi and rbd. Given the path seen in the logs
>> >> > >> '/dev/mapper', it looks like it might require code changes to support
>> >> > >> this.
>> >> > >> Can you share cinderlib[1] and engine logs to see what is returned by
>> >> > >> the driver? I may be able to estimate what would be required (it's
>> >> > >> possible that it would be enough to just change the handling of the
>> >> > >> path in the engine)
>> >> > >>
>> >> > >> [1] /var/log/ovirt-engine/cinderlib/cinderlib.log
>> >> > >>
>> >> > >> On Wed, Feb 23, 2022 at 10:54 AM <muli(a)lightbitslabs.com> wrote:
>> >> > >> >
>> >> > >> > Hi everyone,
>> >> > >> >
>> >> > >> > We are trying to set up ovirt (4.3.10 at the moment, customer preference) to use Lightbits (https://www.lightbitslabs.com) storage via our openstack cinder driver with cinderlib. The cinderlib and cinder driver bits are working fine but when ovirt tries to attach the device to a VM we get the following error:
>> >> > >> >
>> >> > >> > libvirt: error : cannot create file '/var/run/libvirt/qemu/18-mulivm1.dev/mapper/': Is a directory
>> >> > >> >
>> >> > >> > We get the same error regardless of whether I try to run the VM or try to attach the device while it is running. The error appears to come from vdsm, which passes /dev/mapper as the preferred device?
>> >> > >> >
>> >> > >> > 2022-02-22 09:50:11,848-0500 INFO (vm/3ae7dcf4) [vdsm.api] FINISH appropriateDevice return={'path': '/dev/mapper/', 'truesize': '53687091200', 'apparentsize': '53687091200'} from=internal, task_id=77f40c4e-733d-4d82-b418-aaeb6b912d39 (api:54)
>> >> > >> > 2022-02-22 09:50:11,849-0500 INFO (vm/3ae7dcf4) [vds] prepared volume path: /dev/mapper/ (clientIF:510)
>> >> > >> >
>> >> > >> > Suggestions for how to debug this further? Is this a known issue? Did anyone get nvme/tcp storage working with ovirt and/or vdsm?
>> >> > >> >
>> >> > >> > Thanks,
>> >> > >> > Muli
>> >> > >> >
>> >> > >> > _______________________________________________
>> >> > >> > Users mailing list -- users(a)ovirt.org
>> >> > >> > To unsubscribe send an email to users-leave(a)ovirt.org
>> >> > >> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> >> > >> > oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>> >> > >> > List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/I3PAG5HMBHU...
>> >> > >>
>> >> > >
>> >> > > Lightbits Labs
>> >> > > Lead the cloud-native data center transformation by delivering scalable and efficient software defined storage that is easy to consume.
>> >> > >
>> >> > > This message is sent in confidence for the addressee only. It may contain legally privileged information. The contents are not to be disclosed to anyone other than the addressee. Unauthorized recipients are requested to preserve this confidentiality, advise the sender immediately of any error in transmission and delete the email from their systems.
>> >> > >
>> >> > >
>> >>
>> >
>> >
>>
>
>
4.5.0 beta compose delayed to April 4th 2022
by Sandro Bonazzola
oVirt 4.5.0 beta compose has been delayed to next week on April 4th as
ovirt-engine and gluster-ansible-role support for ansible-core 2.12 missed
today's deadline.
Test day has been rescheduled accordingly to April 5th.
Testers: please continue testing the released Alpha and providing feedback
https://trello.com/b/3FZ7gdhM/ovirt-450-test-day
A few packages that were meant to be shipped today as part of the beta
release have already been pushed to the testing repositories, so you can
already provide some feedback on the beta's sanity.
Known issues:
- hyperconverged deployment doesn't work due to missing updated
gluster-ansible-roles packages
- ovirt-engine is still using ansible 2.9.27 and wasn't updated from 4.5.0
Alpha
- ovirt-appliance and oVirt Node have not been built with current content
of the testing repos as beta has been rescheduled
Be aware that RHEL 8.6 Beta was released yesterday, so you can already
try running on top of it. Rocky Linux announced they're already building
8.6 beta as well, so it may soon be possible to start testing on top of
that too.
Professional Services, Integrators and Backup vendors: please run a test
session against your additional services, integrated solutions,
downstream rebuilds, and backup solutions on the released alpha and
report issues as soon as possible.
If you're not listed here:
https://ovirt.org/community/user-stories/users-and-providers.html
consider adding your company there.
If you're willing to help update the localization for oVirt 4.5.0 please
follow https://ovirt.org/develop/localization.html
If you're willing to help promoting the oVirt 4.5.0 release you can submit
your banner proposals for the oVirt home page and for the
social media advertising at https://github.com/oVirt/ovirt-site/issues no
later than April 5th
As an alternative please consider submitting a case study as in
https://ovirt.org/community/user-stories/user-stories.html
Feature owners: please submit a presentation of your feature for the
oVirt YouTube channel (https://www.youtube.com/c/ovirtproject) no later
than April 5th.
If you have some new feature requiring community feedback / testing please
add your case under the "Test looking for volunteer" section no later than
April 4th.
Do you want to contribute to getting ready for this release?
Read more about oVirt community at https://ovirt.org/community/ and join
the oVirt developers https://ovirt.org/develop/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
github commit hashes at CI/before merge/after merge
by Yedidyah Bar David
Hi all,
I just spent a few minutes verifying that OST indeed failed [1] on the
results of [2] and not on something else. The generated rpm's name is
ovirt-hosted-engine-setup-2.6.2-0.0.master.20220322151459.git8af2a1f.el8.noarch.rpm
, where 8af2a1f is a result of [4]:
```
/usr/bin/git checkout --progress --force refs/remotes/pull/33/merge
Note: switching to 'refs/remotes/pull/33/merge'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to false
HEAD is now at 8af2a1f Merge 1dcab4385f4e4b9e40feac9d2e00e78d8dde3df4 into b1e3bd386dcf781f62bde4d58ea26f93649c8f93
```
You have to press "Run actions/checkout@v2" and then "Checking out the
ref" to see this, or the "ship wheel" icon and then "View raw logs". I
prefer the latter, but the link it generates is temporary and has
already expired, so I am not pasting it here.
1dcab4385f4e4b9e40feac9d2e00e78d8dde3df4 is the commit of [2] and
b1e3bd386dcf781f62bde4d58ea26f93649c8f93 is current HEAD (master). It
seems like I can force it to not merge using [3] (I didn't try), but I'm
not sure that makes much more sense either. *Perhaps* 8af2a1f is the
commit that would eventually enter the git log, in which case it would
make sense to be able to match it against the CI results [1]. I think I
still prefer not having to go through all of this - instead, have a
"merge strategy" of "just move HEAD to this commit and that's it", AKA
"fast-forward" AFAIK. But as discussed internally a few months ago, I do
not think there is a nice way to achieve this, other than force pushing
directly to master remotely - and I failed to find a way to do that in
the web UI (or to make force pushing allowed there, either).
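The "fast-forward" strategy mentioned above can be demonstrated in a throwaway repository (plain git, nothing GitHub-specific; requires git >= 2.28 for `init -b`):

```shell
# A repo where the feature branch is strictly ahead of main.
git init -q -b main demo && cd demo
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m base
git checkout -q -b feature
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m change
git checkout -q main

# --ff-only just moves main's HEAD; no merge commit is created, so the
# hash that lands in the log is exactly the hash that was tested.
git merge --ff-only feature
git log --oneline
```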
Comments/opinions, anyone?
Do we want [3]?
Does anyone care at all, other than me?
Best regards,
[1] https://rhv-devops-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ds-ost-bar...
[2] https://github.com/oVirt/ovirt-hosted-engine-setup/actions/runs/2023236328
[3] https://github.com/actions/checkout#Checkout-pull-request-HEAD-commit-ins...
[4] https://github.com/oVirt/ovirt-hosted-engine-setup/runs/5646552158?check_...
--
Didi