Re: oVirt and ARM
by Sandro Bonazzola
On Fri, 9 Jul 2021 at 11:00, Marko Vrgotic <M.Vrgotic(a)activevideo.com> wrote:
> Hi Sandro and the rest of oVirt gurus,
>
>
>
> My managers are positive about helping to provide some ARM hardware, but
> it would not happen earlier than three months from now, as we are in the
> process of establishing a relationship with an ARM HW vendor.
>
Great news!
>
>
> In the meantime, I was asked to check whether the current 4.4 version or
> the coming 4.5 has (or will have) any capability or option for emulating
> aarch64 on an x86_64 platform and, if so, what the steps to test/enable it
> would be.
>
+Arik Hadas <ahadas(a)redhat.com> , +Milan Zamazal <mzamazal(a)redhat.com> ?
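For experimenting before the hardware arrives: oVirt itself has no option to
emulate aarch64 on x86_64, but plain QEMU can do it in software (TCG), which
is enough for early boot and packaging tests outside oVirt. A minimal sketch;
the firmware and image paths are hypothetical and distribution-dependent:

    # full-system emulation of an aarch64 guest on an x86_64 host (no KVM)
    qemu-system-aarch64 \
        -machine virt -cpu cortex-a57 -m 2048 \
        -bios /usr/share/AAVMF/AAVMF_CODE.fd \
        -drive file=aarch64-guest.qcow2,if=virtio \
        -nographic

Expect it to be roughly an order of magnitude slower than native
virtualization.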
>
>
> Kindly awaiting your reply.
>
>
>
> Marko Vrgotic
>
>
>
> From: Marko Vrgotic <M.Vrgotic(a)activevideo.com>
> Date: Monday, 28 June 2021 at 15:38
> To: Sandro Bonazzola <sbonazzo(a)redhat.com>, Evgheni Dereveanchin <ederevea(a)redhat.com>
> Cc: Zhenyu Zheng <zhengzhenyulixi(a)gmail.com>, Joey Ma <majunjiev(a)gmail.com>, users(a)ovirt.org <users(a)ovirt.org>
> Subject: Re: [ovirt-users] oVirt and ARM
>
> Hi Sandro,
>
>
>
> I will check with my managers whether we have, and could spare, some
> hardware to contribute to oVirt development.
>
>
>
>
>
> -----
>
> kind regards/met vriendelijke groeten
>
> Marko Vrgotic
> Sr. System Engineer @ System Administration
>
> ActiveVideo
>
> o: +31 (35) 6774131
> m: +31 (65) 5734174
> e: m.vrgotic(a)activevideo.com
> w: www.activevideo.com
>
>
>
>
>
>
>
>
>
>
> From: Sandro Bonazzola <sbonazzo(a)redhat.com>
> Date: Friday, 25 June 2021 at 15:26
> To: Marko Vrgotic <M.Vrgotic(a)activevideo.com>, Evgheni Dereveanchin <ederevea(a)redhat.com>
> Cc: Zhenyu Zheng <zhengzhenyulixi(a)gmail.com>, Joey Ma <majunjiev(a)gmail.com>, users(a)ovirt.org <users(a)ovirt.org>
> Subject: Re: [ovirt-users] oVirt and ARM
>
>
>
>
>
>
> On Fri, 25 Jun 2021 at 14:20, Marko Vrgotic <M.Vrgotic(a)activevideo.com> wrote:
>
> Hi Sandro,
>
>
>
> Thank you for the update. I am not equipped to help on the development
> side, but I can most certainly do test deployments once there is something
> available.
>
>
>
> We are a big oVirt shop and are moving to ARM64 with a new product; it
> would be great if oVirt started supporting it.
>
>
>
> If we are able to help somehow, let me know.
>
>
>
> I guess a start could be adding an arm64 machine to the oVirt
> infrastructure so developers can build for it.
>
> You can have a look at
> https://ovirt.org/community/get-involved/donate-hardware.html
>
>
> Looping in +Evgheni Dereveanchin <ederevea(a)redhat.com> in case you can
> share some resources.
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> From: Sandro Bonazzola <sbonazzo(a)redhat.com>
> Date: Thursday, 24 June 2021 at 18:21
> To: Marko Vrgotic <M.Vrgotic(a)activevideo.com>, Zhenyu Zheng <zhengzhenyulixi(a)gmail.com>, Joey Ma <majunjiev(a)gmail.com>
> Cc: users(a)ovirt.org <users(a)ovirt.org>
> Subject: Re: [ovirt-users] oVirt and ARM
>
>
>
>
>
>
> On Thu, 24 Jun 2021 at 16:34, Marko Vrgotic <M.Vrgotic(a)activevideo.com> wrote:
>
> Hi oVirt,
>
>
>
> Where can I find information about whether oVirt supports the arm64 CPU
> architecture?
>
>
>
> Right now oVirt does not support arm64. An initiative to support it was
> started some time ago by the openEuler oVirt SIG.
>
> I haven't received any further updates on this topic, so I'm looping in
> those I remember looking into it.
>
> I think that if someone contributed arm64 support, it would be a feature
> worth a 4.5 release :-)
>
>
>
>
>
>
>
>
>
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MQ3XND2NKIL...
>
>
>
>
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
Create new disk failure
by Gangi Reddy
Software version: 4.4.6.7-1.el8
Error: VDSM server command HSMGetAllTasksStatusesVDS failed: value=Error creating a new volume: ("Volume creation 8f509d4b-6d37-44c5-aa37-acba17391143 failed: (28, 'Sanlock resource write failure', 'No space left on device')",) abortedcode=205
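The "No space left on device" from Sanlock usually means the storage domain
itself is out of free space rather than the host's local filesystem. A first
check, assuming a block (iSCSI/FC) storage domain where each domain is an LVM
volume group (the VG name matches the storage domain UUID):

    # on the SPM host: free space in each storage domain's volume group
    vgs --units g -o vg_name,vg_size,vg_free

If vg_free is (near) zero, the new volume simply doesn't fit and space has to
be freed or the domain extended.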
Ovirt Storage not reclaim
by Ranesh Tharanga Perera
I attached a 7 TB disk (preallocated) to a VM (Red Hat 8). Due to a space
issue, we detached the disk from the VM and deleted it, but the free space
still has not changed and we can't use that space for other purposes.
What could be the issue?
Thanks,
Ranesh..
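A guess at the cause, assuming a block (iSCSI/FC) storage domain: deleting
the disk returns the space to oVirt's volume group, but the storage array
only reclaims it if discards reach it, which is what the storage domain's
"Discard After Delete" option is for. It is worth checking whether the
backing LUN supports discard at all (sdX below is a placeholder for the
LUN's device node):

    # 0 means the device does not accept discard/UNMAP at all
    cat /sys/block/sdX/queue/discard_max_bytes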
Unable to migrate VMs to or from oVirt node 4.4.7
by nroach44@nroach44.id.au
Hi All,
After upgrading some of my hosts to 4.4.7, and after fixing the policy issue, I'm no longer able to migrate VMs to or from 4.4.7 hosts. Starting them works fine regardless of the host version.
HE 4.4.7.6-1.el8, Linux and Windows VMs.
The log on the receiving end (4.4.7 in this case):
VDSM:
2021-07-09 22:02:17,491+0800 INFO (libvirt/events) [vds] Channel state for vm_id=5d11885a-37d3-4f68-a953-72d808f43cdd changed from=UNKNOWN(-1) to=disconnected(2) (qemuguestagent:289)
2021-07-09 22:02:55,537+0800 INFO (libvirt/events) [virt.vm] (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') underlying process disconnected (vm:1134)
2021-07-09 22:02:55,537+0800 INFO (libvirt/events) [virt.vm] (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Release VM resources (vm:5313)
2021-07-09 22:02:55,537+0800 INFO (libvirt/events) [virt.vm] (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Stopping connection (guestagent:438)
2021-07-09 22:02:55,539+0800 INFO (libvirt/events) [virt.vm] (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Stopping connection (guestagent:438)
2021-07-09 22:02:55,539+0800 INFO (libvirt/events) [vdsm.api] START inappropriateDevices(thiefId='5d11885a-37d3-4f68-a953-72d808f43cdd') from=internal, task_id=7abe370b-13bc-4c49-bf02-2e40db142250 (api:48)
2021-07-09 22:02:55,544+0800 WARN (vm/5d11885a) [virt.vm] (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Couldn't destroy incoming VM: Domain not found: no domain with matching uuid '5d11885a-37d3-4f68-a953-72d808f43cdd' (vm:4046)
2021-07-09 22:02:55,544+0800 INFO (vm/5d11885a) [virt.vm] (vmId='5d11885a-37d3-4f68-a953-72d808f43cdd') Changed state to Down: VM destroyed during the startup (code=10) (vm:1895)
syslog shows:
Jul 09 22:35:01 HOSTNAME abrt-hook-ccpp[177862]: Process 177022 (qemu-kvm) of user 107 killed by SIGABRT - dumping core
qemu:
qemu-kvm: ../util/yank.c:107: yank_unregister_instance: Assertion `QLIST_EMPTY(&entry->yankfns)' failed.
2021-07-09 14:02:54.521+0000: shutting down, reason=failed
When migrating from 4.4.7 to 4.4.6, syslog shows:
Jul 09 22:36:36 HOSTNAME libvirtd[2775]: unsupported configuration: unknown audio type 'spice'
Any suggestions or further logs I can chase up?
Thanks,
N
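One thing worth ruling out (a guess prompted by the "unknown audio type
'spice'" message, not a confirmed fix): a qemu/libvirt version gap between
the 4.4.6 and 4.4.7 hosts. The destination has to understand every device in
the migrating domain XML, and newer qemu/libvirt builds describe an audio
device that older libvirt does not recognize. Comparing the virt stacks on
both hosts is quick:

    # run on both source and destination hosts and compare versions
    rpm -q qemu-kvm libvirt-daemon vdsm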
Q: Strange ping timeouts/network issues when 1 node is down
by Andrei Verovski
Hi,
I have observed strange ping timeouts/network issues when one of the nodes
is down, for example for a RAM upgrade.
Powering the node up and connecting it back solves this.
Each node has two networks (connected to different Ethernet interfaces):
ovirtmgmt 192.168.0.x and DMZ 192.168.1.x.
Additionally, time-consuming operations like VM cloning cause increased ping
latency to ALL nodes from local workstations.
VM cloning should load only one node; it should not affect the whole oVirt
management network on all nodes.
How can I fix this issue?
Thanks in advance for any help.
Andrei
glusterfs health-check failed, (brick) going down
by Jiří Sléžka
Hello,
I have a 3-node HCI cluster with oVirt 4.4.6 and CentOS 8.
From time to time, a (I believe) random brick on a random host goes down
because of a failed health check. It looks like
[root@ovirt-hci02 ~]# grep "posix_health_check" /var/log/glusterfs/bricks/*
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
07:13:37.408184] M [MSGID: 113075]
[posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix:
health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
07:13:37.408407] M [MSGID: 113075]
[posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix: still
alive! -> SIGTERM
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
16:11:14.518971] M [MSGID: 113075]
[posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix:
health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
16:11:14.519200] M [MSGID: 113075]
[posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix: still
alive! -> SIGTERM
on other host
[root@ovirt-hci01 ~]# grep "posix_health_check" /var/log/glusterfs/bricks/*
/var/log/glusterfs/bricks/gluster_bricks-engine-engine.log:[2021-07-05
13:15:51.983327] M [MSGID: 113075]
[posix-helpers.c:2214:posix_health_check_thread_proc] 0-engine-posix:
health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-engine-engine.log:[2021-07-05
13:15:51.983728] M [MSGID: 113075]
[posix-helpers.c:2232:posix_health_check_thread_proc] 0-engine-posix:
still alive! -> SIGTERM
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-05
01:53:35.769129] M [MSGID: 113075]
[posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix:
health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-05
01:53:35.769819] M [MSGID: 113075]
[posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix: still
alive! -> SIGTERM
I cannot link these errors to any storage/fs issue (in dmesg or
/var/log/messages), and the brick devices look healthy (smartd).
I can force-start a brick with
gluster volume start vms|engine force
and after some healing all works fine for a few days.
Did anybody observe this behavior?
The vms volume has this structure (two bricks per host, each a separate
JBOD SSD disk); the engine volume has one brick on each host...
gluster volume info vms
Volume Name: vms
Type: Distributed-Replicate
Volume ID: 52032ec6-99d4-4210-8fb8-ffbd7a1e0bf7
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.0.4.11:/gluster_bricks/vms/vms
Brick2: 10.0.4.13:/gluster_bricks/vms/vms
Brick3: 10.0.4.12:/gluster_bricks/vms/vms
Brick4: 10.0.4.11:/gluster_bricks/vms2/vms2
Brick5: 10.0.4.13:/gluster_bricks/vms2/vms2
Brick6: 10.0.4.12:/gluster_bricks/vms2/vms2
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.stat-prefetch: off
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
user.cifs: off
network.ping-timeout: 30
network.remote-dio: off
performance.strict-o-direct: on
performance.low-prio-threads: 32
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
Cheers,
Jiri
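For anyone else chasing this: the brick's posix health check is tunable, so
transient I/O stalls (for example during heavy healing) can be distinguished
from real disk failures by relaxing the timeout. A sketch; the option names
are from the GlusterFS documentation and the values are examples to adapt:

    # show the current health-check settings for the volume
    gluster volume get vms storage.health-check-interval
    gluster volume get vms storage.health-check-timeout
    # relax the timeout so brief stalls don't take the brick down
    gluster volume set vms storage.health-check-timeout 30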
export VM from oVirt engine 3.5
by fddi@comcast.net
Hello,
I am trying to export a bunch of VMs from oVirt version 3.5.
The problem is that there is no OVA support for exporting them. I also tried
to export the VMs to OVA format using ovirtsdk4, which in any case won't work
with oVirt 3.5.
I can successfully export to an NFS partition, but for each VM there are
several qcow2 files, which I suppose are the VM image and its snapshots.
How do I identify the correct file representing the current VM state?
If I delete all the snapshots, will I end up with only one valid qcow2 image
that I can use?
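On identifying the right file: each snapshot is a qcow2 overlay whose backing
file is the previous volume, so the current VM state is the top of the
backing chain, and deleting all snapshots merges everything back into a
single volume. qemu-img can show the chain directly; the paths below are
hypothetical (on an export domain the volumes live under
images/<disk-group-uuid>/):

    # print the full backing chain of a volume; the volume that no other
    # file lists as its backing file is the current (top) one
    qemu-img info --backing-chain /export/images/<disk-uuid>/<volume-uuid>
    # alternatively, flatten the whole chain into one standalone image
    qemu-img convert -O qcow2 /export/images/<disk-uuid>/<top-volume> flat.qcow2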
passing data between guacamole and xrdp server
by Jason Keltz
Hi.
I have a custom use case where I would like two Guacamole RDP connections
that point to the same host, but have the underlying xrdp startup script
initialize one connection slightly differently than the other. I don't see
a simple way to pass anything between Guacamole and the xrdp server that
would let the script determine which connection the user chose in Guacamole.
I thought *maybe* "client name" under "Basic Settings", but it seems to
apply to Windows only. It would be neat to be able to pass environment
variables in.
Thanks,
Jason.
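One avenue that might work, sketched with hypothetical names: Guacamole's RDP
protocol accepts a client-name parameter (the hostname the client reports to
the RDP server), so two connections to the same host can differ only in that
value. Whether xrdp then exposes it to the session startup script is exactly
the part that would need verifying. On the Guacamole side (user-mapping.xml
style configuration):

    <connection name="myhost-variant-a">
        <protocol>rdp</protocol>
        <param name="hostname">myhost.example.com</param>
        <param name="client-name">variant-a</param>
    </connection>
    <connection name="myhost-variant-b">
        <protocol>rdp</protocol>
        <param name="hostname">myhost.example.com</param>
        <param name="client-name">variant-b</param>
    </connection>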
Gluster deploy error!
by Patrick Lomakin
Hello! I have tried to deploy a single node with Gluster, but if I select "Compression and deduplication" I get an error:
TASK [gluster.infra/roles/backend_setup : Create thick logical volume] *********
failed: [host01********] (item={'vgname': 'gluster_vg_sda4', 'lvname': 'gluster_lv_engine', 'size': '1970G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " Volume group \"gluster_vg_sda4\" has insufficient free space (504319 extents): 504320 required.\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "1970G", "vgname": "gluster_vg_sda4"}, "msg": "Creating logical volume 'gluster_lv_engine' failed", "rc": 5}
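The arithmetic in the error explains the failure: at the default 4 MiB extent
size, the requested 1970G thick LV is 1970 x 1024 / 4 = 504320 extents, but
the VG has only 504319 free, one extent short, presumably because the VDO
(compression/deduplication) layer consumes a little of the device. Checking
and working around it (the VG name is taken from the error; the smaller size
is an example):

    # confirm the extent size and free extents in the volume group
    vgdisplay gluster_vg_sda4 | grep -E 'PE Size|Total PE|Free'
    # then re-run the deployment with the engine LV slightly smaller,
    # e.g. 1969G instead of 1970G, so it fits in the free extents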
3 years, 8 months