Re: CentOS 8 is dead
by marcel d'heureuse
So, I think we should keep the live system on oVirt 4.3 to be sure it still works after 2021?
Which distribution has 10 years of support? CentOS 7 has support up to June 2024.
Has anyone started evaluating Gentoo?
marcel
On 8 December 2020 21:15:48 CET, "Vinícius Ferrão via Users" <users(a)ovirt.org> wrote:
>CentOS Stream is unstable at best.
>
>I’ve used it recently and it was just a mess. There’s no binary
>compatibility with the current point release and there’s no version
>pinning. So it will be really difficult to keep track of things.
>
>I’m really curious how oVirt will handle this.
>
>From: Wesley Stewart <wstewart3(a)gmail.com>
>Sent: Tuesday, December 8, 2020 4:56 PM
>To: Strahil Nikolov <hunter86_bg(a)yahoo.com>
>Cc: users <users(a)ovirt.org>
>Subject: [ovirt-users] Re: CentOS 8 is dead
>
>This is a little concerning.
>
>But it seems pretty easy to convert:
>https://www.centos.org/centos-stream/
>
>However, I would be curious to see if someone tests this on an active
>oVirt node!
>
>On Tue, Dec 8, 2020 at 2:39 PM Strahil Nikolov via Users
><users(a)ovirt.org<mailto:users@ovirt.org>> wrote:
>Hello All,
>
>I'm really worried about the following news:
>https://blog.centos.org/2020/12/future-is-centos-stream/
>
>Did anyone try to port oVirt to SLES/openSUSE or any Debian-based
>distro?
>
>Best Regards,
>Strahil Nikolov
>_______________________________________________
>Users mailing list -- users(a)ovirt.org<mailto:users@ovirt.org>
>To unsubscribe send an email to
>users-leave(a)ovirt.org<mailto:users-leave@ovirt.org>
>Privacy Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/HZC4D4OSYL6...
--
This message was sent from my Android device with K-9 Mail.
what's error?
by tommy
Hi, I installed oVirt on my hosts, but there are many errors on the
console of the host, such as:
Why???
Thanks.
Upgrade 4.3 to 4.4 with migration CentOS7 to CentOS8.3
by Ilya Fedotov
Good day,
I encountered the following problem when migrating to oVirt 4.4.
On
hosted-engine --deploy --restore-from-file=backup.bck
I get the error below:
[ INFO ] Upgrading engine extension configuration: /etc/ovirt-engine/extensions.d/xx-xxxx.properties
[ INFO ] Upgrading CA
[ INFO ] Creating CA: /etc/pki/ovirt-engine/qemu-ca.pem
[ ERROR ] Failed to execute stage 'Misc configuration': [Errno 17] File exists: '/etc/pki/ovirt-engine/ca.pem' -> '/etc/pki/ovirt-engine/apache-ca.pem'
[ INFO ] DNF Performing DNF transaction rollback
[ INFO ] Stage: Clean up
When setting the initial parameters I selected "No" for
'Renew engine CA on restore if needed? Please notice that if you choose
Yes, all hosts will have to be later manually reinstalled from the
engine. (@VALUES@)[@DEFAULT@]'
There is no need to renew the CA certificate: this is an upgrade, and the
connections with the nodes don't need to be re-made!
Even with this setting, it still tries to create a new certificate.
I found a similar question here:
https://www.mail-archive.com/users@ovirt.org/msg61114.html
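For what it's worth, the "[Errno 17] File exists" is the EEXIST error raised when engine-setup tries to create /etc/pki/ovirt-engine/apache-ca.pem from ca.pem and a stale apache-ca.pem is already present (e.g. left over from an earlier failed run). A minimal sketch of the failure mode and of the usual manual workaround (back up the stale target, then retry); the temp directory here merely stands in for /etc/pki/ovirt-engine:

```shell
# Simulate engine-setup linking ca.pem -> apache-ca.pem when the target
# already exists (this is what produces Errno 17 / EEXIST).
pki=$(mktemp -d)
touch "$pki/ca.pem" "$pki/apache-ca.pem"   # apache-ca.pem left over from a failed run
ln -s "$pki/ca.pem" "$pki/apache-ca.pem" 2>&1 | grep -o 'File exists'
# Workaround sketch: move the stale file aside, after which the link succeeds.
mv "$pki/apache-ca.pem" "$pki/apache-ca.pem.bak"
ln -s "$pki/ca.pem" "$pki/apache-ca.pem" && echo linked
rm -rf "$pki"
```

Whether it is safe to move the real apache-ca.pem aside before re-running the restore depends on your setup, so please verify against your own engine VM first.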
Package Data:
ovirt-hosted-engine-setup-2.4.8-1.el8.noarch
ovirt-hosted-engine-ha-2.4.5-1.el8.noarch
ovirt-engine-appliance-4.4-20201110154142.1.el8.x86_64
CentOS Linux release 8.3.2011
4.18.0-240.1.1.el8_3.x86_64
Please help, programmers...
with br, Ilya Fedotov
hosted engine wrong bios
by Michael Rohweder
Hi,
I ran into an old mistake with oVirt Node 4.4.2.
I changed the cluster default to UEFI weeks ago.
Today the node had to be restarted, and now I cannot work: the engine VM
tries to boot with UEFI, and all the other VMs are down because I cannot
start any of them from the CLI.
How can I change that setting (some config file or something else) for this
VM back to legacy BIOS?
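In case it helps, one approach I've seen for this kind of lock-out (please verify against your ovirt-hosted-engine-ha version; the vm.conf path, the --vm-conf override flag, and the emulatedMachine key are assumptions on my side) is to start the engine VM once from an edited copy of vm.conf, then fix the BIOS type properly from the Admin Portal. The edit itself is demonstrated below on a stand-in file:

```shell
# Hypothetical recovery sketch (verify flags/paths on your node first):
#   cp /var/run/ovirt-hosted-engine-ha/vm.conf /root/vm.conf
#   ...edit /root/vm.conf as below...
#   hosted-engine --vm-start --vm-conf=/root/vm.conf
# The edit, shown on a stand-in vm.conf with illustrative contents:
conf=$(mktemp)
printf 'emulatedMachine=q35\ncpuType=Skylake\n' > "$conf"
sed -i 's/^emulatedMachine=.*/emulatedMachine=pc/' "$conf"  # back to a legacy-BIOS machine type
grep '^emulatedMachine=' "$conf"                            # -> emulatedMachine=pc
rm -f "$conf"
```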
Greetings
Michael
Hosted engine deployment w/ two networks (one migration, one management).
by Gilboa Davara
Hello all,
I'm slowly building a new ovirt over glusterfs cluster with 3 fairly beefy
servers.
Each of the nodes has the following network configuration:
3x1GbE: ILO, ovirtmgmt and SSH.
4x10GbE: Private and external VM network(s).
2x40GBE: GlusterFS and VM migration.
Now, for some odd reason, I'd rather keep the two 40GbE networks
disconnected from my normal management network.
My question is simple: I remember that I can somehow configure oVirt to use
two different networks for management / migration, but as far as I can
see, I cannot configure the cluster to use a different network for
migration purposes.
1. Am I missing something?
2. Can I somehow configure the hosted engine to have an IP in more than one
network (management and migration)?
3. More of a gluster question: As the 40GbE NICs and the 1GbE NIC are sitting
on different switches, can I somehow configure gluster to fall back to the
1GbE NIC if the main 40GbE link fails? AFAIR bond doesn't support asymmetrical
network device configuration. (And rightly so, in this case.)
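On question 3: as far as I know an active-backup bond (unlike the LACP modes) does tolerate mixed-speed members, so a bond with the 40GbE port as primary and the 1GbE port as backup may do what you want, at the cost of dropping to 1GbE on failover. A hypothetical nmcli sketch — the connection and interface names are made up, and this is a config fragment to adapt, not a tested recipe:

```shell
# Active-backup bond: 40GbE primary, 1GbE fallback (interface names illustrative).
nmcli con add type bond con-name bond-gluster ifname bond-gluster \
      bond.options "mode=active-backup,primary=ens40f0,miimon=100"
nmcli con add type ethernet con-name bond-gluster-40g ifname ens40f0 master bond-gluster
nmcli con add type ethernet con-name bond-gluster-1g  ifname eno1    master bond-gluster
nmcli con up bond-gluster
```

Note the two members don't have to share a switch in active-backup mode, which is what makes the asymmetric layout workable here.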
Thanks,
Gilboa
VMs shut down after backup: "moved from 'Up' --> 'Down'" on RHEL host
by Łukasz Kołaciński
Hello,
Thank you for helping with the previous issue. Unfortunately we have run into another one. We have a RHV manager with several different hosts. After a backup, a VM placed on a RHEL host shuts down. In engine.log I found the moment when this happens: "VM '35183baa-1c70-4016-b7cd-528889876f19'(stor2rrd) moved from 'Up' --> 'Down'". I attached the whole logs to the email. It doesn't matter whether the backup is full or incremental, the results are always the same. RHVH hosts work properly.
Log fragment:
2020-12-08 10:18:33,845+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (default task-76) [60da31e3-92f6-4555-8c43-2f8afee272e0] Updating image transfer 87bdb42e-e64c-460d-97ac-218e923336a1 (image e57e4af0-5d0b-4f60-9e6c-e217c666e5e6) phase to Finalizing Success
2020-12-08 10:18:33,940+01 INFO [org.ovirt.engine.core.bll.StopVmBackupCommand] (default task-76) [89ec1a77-4b46-42b0-9d0f-15e53d5f952a] Running command: StopVmBackupCommand internal: false. Entities affected : ID: 35183baa-1c70-4016-b7cd-528889876f19 Type: VMAction group BACKUP_DISK with role type ADMIN, ID: e57e4af0-5d0b-4f60-9e6c-e217c666e5e6 Type: DiskAction group BACKUP_DISK with role type ADMIN
2020-12-08 10:18:33,940+01 INFO [org.ovirt.engine.core.bll.StopVmBackupCommand] (default task-76) [89ec1a77-4b46-42b0-9d0f-15e53d5f952a] Stopping VmBackup 'aae03819-cea6-45a1-9ee5-0f831af8464d'
2020-12-08 10:18:33,952+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.StopVmBackupVDSCommand] (default task-76) [89ec1a77-4b46-42b0-9d0f-15e53d5f952a] START, StopVmBackupVDSCommand(HostName = rhv-2, VmBackupVDSParameters:{hostId='afad6b8b-78a6-4e9a-a9bd-783ad42a2d47', backupId='aae03819-cea6-45a1-9ee5-0f831af8464d'}), log id: 78b2c27a
2020-12-08 10:18:33,958+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.StopVmBackupVDSCommand] (default task-76) [89ec1a77-4b46-42b0-9d0f-15e53d5f952a] FINISH, StopVmBackupVDSCommand, return: , log id: 78b2c27a
2020-12-08 10:18:33,975+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-76) [89ec1a77-4b46-42b0-9d0f-15e53d5f952a] EVENT_ID: VM_BACKUP_FINALIZED(10,794), Backup <UNKNOWN> for VM stor2rrd finalized (User: admin@internal-authz).
2020-12-08 10:18:35,221+01 INFO [org.ovirt.engine.core.sso.servlets.OAuthRevokeServlet] (default task-73) [] User admin@internal successfully logged out
2020-12-08 10:18:35,236+01 INFO [org.ovirt.engine.core.bll.aaa.TerminateSessionsForTokenCommand] (default task-81) [1b35276c] Running command: TerminateSessionsForTokenCommand internal: true.
2020-12-08 10:18:35,236+01 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (default task-81) [1b35276c] Not removing session '90TxdK0PBueLijy+sCrFoHC/KNUGNzNpZuYMK/yKDAkbAefFr+8wOJsATsDKv18LxpyxCl+eX7hTHNxN23anAw==', session has running commands for user 'admin@internal-authz'.
2020-12-08 10:18:35,447+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-31) [] VM '35183baa-1c70-4016-b7cd-528889876f19' was reported as Down on VDS 'afad6b8b-78a6-4e9a-a9bd-783ad42a2d47'(rhv-2)
2020-12-08 10:18:35,448+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-31) [] START, DestroyVDSCommand(HostName = rhv-2, DestroyVmVDSCommandParameters:{hostId='afad6b8b-78a6-4e9a-a9bd-783ad42a2d47', vmId='35183baa-1c70-4016-b7cd-528889876f19', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 4f473135
2020-12-08 10:18:35,451+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-31) [] FINISH, DestroyVDSCommand, return: , log id: 4f473135
2020-12-08 10:18:35,451+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-31) [] VM '35183baa-1c70-4016-b7cd-528889876f19'(stor2rrd) moved from 'Up' --> 'Down'
2020-12-08 10:18:35,466+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-31) [] EVENT_ID: VM_DOWN_ERROR(119), VM stor2rrd is down with error. Exit message: Lost connection with qemu process.
2020-12-08 10:18:35,484+01 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand] (EE-ManagedThreadFactory-engine-Thread-34085) [3da40159] Running command: ProcessDownVmCommand internal: true.
The environment of faulty host is:
OS Version: RHEL - 8.3 - 1.0.el8
OS Description: Red Hat Enterprise Linux 8.3 (Ootpa)
Kernel Version: 4.18.0 - 240.1.1.el8_3.x86_64
KVM Version: 5.1.0 - 14.module+el8.3.0+8438+644aff69
LIBVIRT Version: libvirt-6.6.0-7.module+el8.3.0+8424+5ea525c5
VDSM Version: vdsm-4.40.26.3-1.el8ev
SPICE Version: 0.14.3 - 3.el8
GlusterFS Version: [N/A]
CEPH Version: librbd1-12.2.7-9.el8
Open vSwitch Version: openvswitch-2.11-7.el8ev
Nmstate Version: nmstate-0.3.4-13.el8_3
Regards
Łukasz Kołaciński
Junior Java Developer
e-mail: l.kolacinski(a)storware.eu
Storware
ul. Leszno 8/44
01-192 Warszawa
www.storware.eu
oVirt and RHEV
by tommy
1. Can oVirt be used to manage RHEV?
2. What is the relation between oVirt and RHEV?
Thanks!
OPNsense / FreeBSD 12.1
by Jorge Visentini
Hi all.
I tried to install OPNsense 20.7.6 (FreeBSD 12.1) and it was not possible
to detect the NICs.
I tried both the virtio driver and the e1000: virtio NICs are not detected,
and the e1000 crashes at startup.
In pure KVM, it works, so I believe there is some incompatibility with
oVirt 4.4.4.
Any tips?
--
Att,
Jorge Visentini
+55 55 98432-9868
Recent news & oVirt future
by Charles Kozler
I guess this is probably a question for all current open source projects
that Red Hat runs, but:
Does this mean oVirt will effectively become a rolling release type
situation as well?
How exactly is oVirt going to stay open source and stay in cadence with all
the other updates happening around it on packages/etc that it depends on if
the streams are rolling release? Do they now need to fork every piece of
dependency?
What exactly does this mean for oVirt going forward and its overall
stability?
--
*Notice to Recipient*: https://www.fixflyer.com/disclaimer