Re: [ovirt-devel] Foreman needs a release of ovirt-engine-sdk-ruby
by Guillaume Pavese
We were just starting to depend on this workflow...
On Fri, Jan 26, 2024 at 2:02 PM Ewoud Kohl van Wijngaarden <
ewoud+ovirt(a)kohlvanwijngaarden.nl> wrote:
> Hello everyone,
>
> Foreman is a bit late in updating Ruby to a newer version. Looking ahead
> we're aiming at Ruby 3.1+ but ovirt-engine-sdk-ruby doesn't compile on
> it.
>
> https://github.com/oVirt/ovirt-engine-sdk-ruby/pull/3 was merged in
> September 2022 and a request to release it was opened a year ago:
> https://github.com/oVirt/ovirt-engine-sdk-ruby/issues/4
>
> The Foreman community is currently discussing dropping oVirt support:
> https://community.theforeman.org/t/proposal-to-drop-support-for-ovirt/36324
>
> Is there anyone who can still perform this release, or should we proceed
> with removal?
>
> Regards,
> Ewoud Kohl van Wijngaarden
> _______________________________________________
> Devel mailing list -- devel(a)ovirt.org
> To unsubscribe send an email to devel-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ESA4LFSQ5JP...
>
onn pv
by Michael Thomas
I think I may have just messed up my cluster.
I'm running an older 4.4.2.6 cluster on CentOS-8 with 4 nodes and a
self-hosted engine. I wanted to assemble the spare drives on 3 of the 4
nodes into a new gluster volume for extra VM storage.
Unfortunately, I did not look closely enough at one of the nodes before
running sfdisk+parted+pvcreate, and now it looks like I may have broken
my onn storage. pvs shows missing UUIDs:
# pvs
  WARNING: Couldn't find device with uuid RgTaWg-fR1T-J3Nv-uh03-ZTi5-jz9X-cjl1lo.
  WARNING: Couldn't find device with uuid 0l9CFI-Z7pP-x1P8-AJ78-gRoz-ql0e-2gzXsC.
  WARNING: Couldn't find device with uuid fl73h2-ztyn-y9NY-4TF4-K2Pd-G2Ow-vH46yH.
  WARNING: VG onn_ovirt1 is missing PV RgTaWg-fR1T-J3Nv-uh03-ZTi5-jz9X-cjl1lo (last written to /dev/nvme0n1p3).
  WARNING: VG onn_ovirt1 is missing PV 0l9CFI-Z7pP-x1P8-AJ78-gRoz-ql0e-2gzXsC (last written to /dev/nvme1n1p1).
  WARNING: VG onn_ovirt1 is missing PV fl73h2-ztyn-y9NY-4TF4-K2Pd-G2Ow-vH46yH (last written to /dev/nvme2n1p1).
  PV             VG                   Fmt  Attr PSize    PFree
  /dev/md2       vg00                 lvm2 a--  <928.80g       0
  /dev/nvme2n1p1 gluster_vg_nvme2n1p1 lvm2 a--     2.91t       0
  /dev/nvme3n1p1 onn_ovirt1           lvm2 a--     2.91t       0
  [unknown]      onn_ovirt1           lvm2 a-m   929.92g 100.00g
  [unknown]      onn_ovirt1           lvm2 a-m  <931.51g       0
  [unknown]      onn_ovirt1           lvm2 a-m     2.91t       0
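When triaging this kind of damage, it can help to flatten those warnings into a device-to-UUID map so you know which physical disk each missing PV lived on. A throwaway sed one-liner over the output above (the heredoc just replays the warnings; nothing here touches LVM):

```shell
# Extract "device: uuid" pairs from the pvs "missing PV" warnings.
sed -n 's/.*missing PV \([^ ]*\) (last written to \([^)]*\)).*/\2: \1/p' <<'EOF'
WARNING: VG onn_ovirt1 is missing PV RgTaWg-fR1T-J3Nv-uh03-ZTi5-jz9X-cjl1lo (last written to /dev/nvme0n1p3).
WARNING: VG onn_ovirt1 is missing PV 0l9CFI-Z7pP-x1P8-AJ78-gRoz-ql0e-2gzXsC (last written to /dev/nvme1n1p1).
WARNING: VG onn_ovirt1 is missing PV fl73h2-ztyn-y9NY-4TF4-K2Pd-G2Ow-vH46yH (last written to /dev/nvme2n1p1).
EOF
# first line printed: /dev/nvme0n1p3: RgTaWg-fR1T-J3Nv-uh03-ZTi5-jz9X-cjl1lo
```

In a real session you would pipe `pvs 2>&1` into the same sed expression instead of the heredoc.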
Here's what I don't understand:
* This onn volume group only exists on one of the 4 nodes. I expected
it to be present on all 4?
* lsblk and /etc/fstab don't show any reference to onn.
* What is the onn volume group used for, and how bad is it that it's now
missing? I note that my VMs all continue to run, and I've been able to
migrate them off of the affected node with no apparent problems.
* Is it possible that this onn volume group was already broken before I
messed with the nvme3n1 disk? When oVirt was originally installed
several years ago, I went through the install process multiple times and
might not have cleaned up properly each time.
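For reference, LVM keeps timestamped metadata archives under /etc/lvm/archive, which is the usual starting point for this situation. A hypothetical recovery sketch follows; the archive filename is a placeholder, the device/UUID pairs are taken from the pvs warnings above, and none of it should be run without checking the real archive contents first. It only helps if the PV labels were clobbered but the data underneath survived; re-partitioning plus pvcreate may already have destroyed it.

```shell
# 1. List the archived metadata versions that exist for the damaged VG.
vgcfgrestore --list onn_ovirt1

# 2. Recreate one lost PV label with its original UUID, restoring from the
#    newest archive that predates the accident (filename is a placeholder).
pvcreate --uuid RgTaWg-fR1T-J3Nv-uh03-ZTi5-jz9X-cjl1lo \
    --restorefile /etc/lvm/archive/onn_ovirt1_00001-1234567890.vg \
    /dev/nvme0n1p3

# 3. After repeating step 2 for each missing PV, restore the VG metadata
#    and re-activate. VGs containing thin pools (as onn layouts do) may
#    require vgcfgrestore --force on recent LVM versions.
vgcfgrestore onn_ovirt1
vgchange -ay onn_ovirt1
```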
--Mike
Shared and local storage, together
by duparchy@esrf.fr
Hi,
We're running some VMs from local storage.
These are VMs for management and monitoring tools, such as Dell Storage Manager.
So we don't want them to be stored on the shared storage they are managing and monitoring.
If something goes wrong with either the storage or the iSCSI network, we want those VMs to stay up and running.
Is there any way to enable both shared storage and some local storage in a cluster?
For the sake of using a host most efficiently, and not only for a couple of VMs?