New oVirt setup with OVN : Hypervisor with LACP bond : queries
by ravi k
Hello team,
Thank you for all the wonderful work you've been doing. I'm starting out new with oVirt and OVN. So please excuse me if the questions are too naive.
We intend to do a POC to check whether we can migrate VMs off our current VMware environment to oVirt. The intention is to migrate the VMs into oVirt with the same IPs. We've set up oVirt with three hypervisors, each with four Ethernet adapters. We have SDN implemented in our network, and LACP bonds are created at the switch level, so we've created two bonds, bond0 and bond1, on each hypervisor. bond0 carries the VLAN-tagged logical networks, e.g. bond0.101, bond0.102, etc.
As part of the POC we also want to explore OVN, to check whether we can implement a zero-trust security policy. Here are the questions now :)
1. We would like to migrate VMs into oVirt keeping their current IPs. Is it possible to achieve this? I've been reading notes and pages that mention extending the physical network into OVN, but it's a bit confusing how to implement it.
How do we connect OVN to the physical network? Does the fact that we have an SDN make it easier to get this done?
I am still reading the ovn-architecture page. It mentions that the gateway is the component that extends a tunnel-based logical network into a physical network.
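From what I've read so far, the piece that extends a logical switch onto a physical VLAN seems to be a "localnet" port plus a bridge mapping on each chassis. A rough sketch of my understanding (the bridge, switch, and VLAN names here are placeholders, not a tested config):

```shell
# On each hypervisor: map a network name to the physical bridge
ovs-vsctl set open_vswitch . external-ids:ovn-bridge-mappings=physnet1:br-phys

# In the OVN northbound DB: a logical switch with a localnet port
ovn-nbctl ls-add ls-vlan101
ovn-nbctl lsp-add ls-vlan101 ln-vlan101
ovn-nbctl lsp-set-type ln-vlan101 localnet
ovn-nbctl lsp-set-addresses ln-vlan101 unknown
ovn-nbctl lsp-set-options ln-vlan101 network_name=physnet1
ovn-nbctl set logical_switch_port ln-vlan101 tag=101
```

Is this the right direction, or does oVirt's OVN provider handle this mapping for us?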
2. We have the IP for the hypervisor assigned on a logical network (ovirtmgmt) on bond0. I read in https://lists.ovirt.org/archives/list/users@ovirt.org/thread/CIE6MZ47GRCE... that oVirt does not care how the IP is configured when creating the tunnels.
3. Once we have OVN set up, OVN logical networks created, and VMs created/migrated, how do we establish the zero-trust policy? From what I've read there are ACLs and security groups. Any pointers on where to learn more about implementing them would be appreciated.
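On the ACL side, the pattern I've seen described for zero trust is a low-priority default drop plus explicit allows, usually attached to a port group. A rough, untested sketch (the port group and port names are made up for illustration):

```shell
# Hypothetical port group holding the web VMs' logical ports
ovn-nbctl pg-add pg_web lsp-web1 lsp-web2

# Default deny in both directions at low priority
ovn-nbctl acl-add pg_web from-lport 1 "inport == @pg_web" drop
ovn-nbctl acl-add pg_web to-lport 1 "outport == @pg_web" drop

# Explicitly allow only inbound HTTPS
ovn-nbctl acl-add pg_web to-lport 100 "outport == @pg_web && tcp.dst == 443" allow-related
```

Is this roughly what oVirt's security groups do underneath?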
If you've read till here, thank you for your patience.
Regards,
ravi
2 years, 2 months
VM Migrations failing to newly upgraded host
by k.gunasekhar@non.keysight.com
I am able to power on VMs on the newly upgraded host, but I am not able to migrate VMs from other hosts to the new host, or from the newly upgraded host to other hosts. This worked fine before the upgrade.
I see the logs below:
Unable to read from monitor: Connection reset by peer
internal error: qemu unexpectedly closed the monitor: 2022-01-24T17:51:46.598571Z qemu-kvm: get_pci_config_device: Bad config >
2022-01-24T17:51:46.598627Z qemu-kvm: Failed to load PCIDevice:config
2022-01-24T17:51:46.598635Z qemu-kvm: Failed to load pcie-root-port:parent_obj.parent_obj.parent_obj
2022-01-24T17:51:46.598642Z qemu-kvm: error while loading state for instance 0x0 of device '0000:00:02.0/pcie-root-port'
2022-01-24T17:51:46.598830Z qemu-kvm: load of migration failed: Invalid argument
Guest agent is not responding: QEMU guest agent is not connected
Guest agent is not responding: QEMU guest agent is not connected
Guest agent is not responding: QEMU guest agent is not connected
OS Version:
RHEL - 8.6 - 1.el8
OS Description:
CentOS Stream 8
Kernel Version:
4.18.0 - 358.el8.x86_64
KVM Version:
6.1.0 - 5.module_el8.6.0+1040+0ae94936
LIBVIRT Version:
libvirt-7.10.0-1.module_el8.6.0+1046+bd8eec5e
VDSM Version:
vdsm-4.40.100.2-1.el8
SPICE Version:
0.14.3 - 4.el8
GlusterFS Version:
[N/A]
CEPH Version:
librbd1-16.2.7-1.el8s
Open vSwitch Version:
openvswitch-2.11-1.el8
Nmstate Version:
nmstate-1.2.1-0.1.alpha1.el8
2 years, 2 months
Migration failed due to an Error: Fatal error during migration
by Gunasekhar Kothapalli
I am able to power on VMs on the newly upgraded host, but I am not able to migrate VMs from other hosts to the new host, or from the newly upgraded host to other hosts. This worked fine before the upgrade.
Host logs
==============================================================================
Unable to read from monitor: Connection reset by peer
internal error: qemu unexpectedly closed the monitor: 2022-01-24T17:51:46.598571Z
qemu-kvm: get_pci_config_device: Bad config >
2022-01-24T17:51:46.598627Z qemu-kvm: Failed to load PCIDevice:config
2022-01-24T17:51:46.598635Z qemu-kvm: Failed to load
pcie-root-port:parent_obj.parent_obj.parent_obj
2022-01-24T17:51:46.598642Z qemu-kvm: error while loading state for instance 0x0 of device
'0000:00:02.0/pcie-root-port'
2022-01-24T17:51:46.598830Z qemu-kvm: load of migration failed: Invalid argument
Guest agent is not responding: QEMU guest agent is not connected
Guest agent is not responding: QEMU guest agent is not connected
Guest agent is not responding: QEMU guest agent is not connected
Engine Logs
===========================================================
2022-01-24 11:31:25,080-07 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [] Adding VM '9838c44b-710f-407a-b775-56bb0a3d4221'(zzz2019) to re-run list
2022-01-24 11:31:25,099-07 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-2331914) [] EVENT_ID: VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed due to an Error: Fatal error during migration (VM: zzz2019, Source: lcoskvmp07.cos.is.keysight.com, Destination: lcoskvmp03.cos.is.keysight.com).
2022-01-24 18:39:47,897-07 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-771) [a78e85d4-068a-41c1-a8fa-b3acd8c69317] EVENT_ID: VM_MIGRATION_START(62), Migration started (VM: zzz2019, Source: lcoskvmp07.cos.is.keysight.com, Destination: lcoskvmp03.cos.is.keysight.com, User: k.gunasekhar@non.keysight.com(a)KEYSIGHT).
2022-01-24 18:40:01,417-07 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-27) [] VM '9838c44b-710f-407a-b775-56bb0a3d4221'(zzz2019) was unexpectedly detected as 'Down' on VDS 'ee23b44d-976d-4889-8769-59b56e4b23c0'(lcoskvmp03.cos.is.keysight.com) (expected on '0d58953f-b3cc-4bac-b3b2-08baeeee1bca')
2022-01-24 18:40:01,589-07 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-27) [] VM '9838c44b-710f-407a-b775-56bb0a3d4221'(zzz2019) was unexpectedly detected as 'Down' on VDS 'ee23b44d-976d-4889-8769-59b56e4b23c0'(lcoskvmp03.cos.is.keysight.com) (expected on '0d58953f-b3cc-4bac-b3b2-08baeeee1bca')
2022-01-24 18:40:01,589-07 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-27) [] Migration of VM 'zzz2019' to host 'lcoskvmp03.cos.is.keysight.com' failed: VM destroyed during the startup.
2022-01-24 18:40:01,591-07 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-17) [] VM '9838c44b-710f-407a-b775-56bb0a3d4221'(zzz2019) moved from 'MigratingFrom' --> 'Up'
2022-01-24 18:40:01,591-07 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-17) [] Adding VM '9838c44b-710f-407a-b775-56bb0a3d4221'(zzz2019) to re-run list
2022-01-24 18:40:01,611-07 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-2348837) [] EVENT_ID: VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed due to an Error: Fatal error during migration (VM: zzz2019, Source: lcoskvmp07.cos.is.keysight.com, Destination: lcoskvmp03.cos.is.keysight.com).
2 years, 2 months
Import an exported VM using Ansible
by paolo@airaldi.it
Hello everybody!
I'm trying to automate a copy of a VM from one Datacenter to another using an Ansible playbook.
I'm able to:
- create a snapshot of the source VM
- create a clone from the snapshot
- remove the snapshot
- attach an Export Domain
- export the clone to the Export Domain
- remove the clone
- detach the Export Domain from the source Datacenter and attach it to the destination.
Unfortunately I cannot find a module to:
- import the VM from the Export Domain
- delete the VM image from the Export Domain.
Any hint on how to do that?
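In case it helps frame an answer: my best guess so far is the ovirt.ovirt.ovirt_vm module with state: registered, though I'm not sure registration covers Export Domains rather than data domains, and the names below are placeholders:

```yaml
- name: Import the exported VM (untested sketch, placeholder names)
  ovirt.ovirt.ovirt_vm:
    auth: "{{ ovirt_auth }}"
    state: registered
    storage_domain: export1
    cluster: destination-cluster
    name: myvm-clone
```

If that's wrong, pointers to the right module (or a REST API call I can wrap in a task) would be great.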
Thanks in advance. Cheers.
Paolo
PS: if someone is interested I can share the playbook.
2 years, 2 months
Can't upgrade ovirt 4.4.3 to 4.4.9
by jihwahn1018@naver.com
Hello,
I tried to upgrade oVirt from 4.4.3 to 4.4.9.
When I run 'engine-setup' I get this error:
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[ ERROR ] Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine-dwh/dbscripts/schema.sh' failed to execute
[ INFO ] DNF Performing DNF transaction rollback
[ ERROR ] DNF module 'dnf.history' has no attribute 'open_history'
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
From the log I found that the column "count_threads_as_cores" does not exist:
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2022-01-18 16:36:26,309+0900 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine_dwh.db.schema plugin.execute:926 execute-output: ['/usr/share/ovirt-engine-dwh/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', '-u', 'ovirt_engine_history', '-d', 'ovirt_engine_history', '-l', '/var/log/ovirt-engine/setup/ovirt-engine-setup-20220118162426-z0o4xg.log', '-c', 'apply'] stderr:
psql:/usr/share/ovirt-engine-dwh/dbscripts/create_views_4_4.sql:148: ERROR: column "count_threads_as_cores" does not exist
LINE 10:     count_threads_as_cores as count_threads_as_cores,
             ^
FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine-dwh/dbscripts/create_views_4_4.sql

2022-01-18 16:36:26,309+0900 DEBUG otopi.context context._executeMethod:145 method exception
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine-dwh/db/schema.py", line 367, in _misc
    odwhcons.DBEnv.PGPASS_FILE
  File "/usr/lib/python3.6/site-packages/otopi/plugin.py", line 931, in execute
    command=args[0],
RuntimeError: Command '/usr/share/ovirt-engine-dwh/dbscripts/schema.sh' failed to execute
2022-01-18 16:36:26,330+0900 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine-dwh/dbscripts/schema.sh' failed to execute
2022-01-18 16:36:26,330+0900 DEBUG otopi.transaction transaction.abort:119 aborting 'DNF Transaction'
2022-01-18 16:36:26,331+0900 DEBUG otopi.plugins.otopi.packagers.dnfpackager dnfpackager.verbose:75 DNF Closing transaction with rollback
2022-01-18 16:36:26,731+0900 INFO otopi.plugins.otopi.packagers.dnfpackager dnfpackager.info:79 DNF Performing DNF transaction rollback
2022-01-18 16:36:27,570+0900 ERROR otopi.plugins.otopi.packagers.dnfpackager dnfpackager.error:84 DNF module 'dnf.history' has no attribute 'open_history'
2022-01-18 16:36:27,571+0900 DEBUG otopi.transaction transaction.abort:125 Unexpected exception from abort() of 'DNF Transaction'
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
When I searched for this problem, I found that the column "count_threads_as_cores" was added in ovirt-engine 4.4.6.7 and ovirt-dwh 4.4.7.
Do I need to upgrade from 4.4.3 to 4.4.6 (or 4.4.7) first, and then from there to 4.4.9?
Thanks,
Jihwan
2 years, 2 months
HostedEngine deployment fails
by Christoph Köhler
Hello,
on CentOS Stream the deployment of HE fails with the following message:
[ INFO ] TASK [ovirt.ovirt.engine_setup : Gather facts on installed
packages]
[ INFO ] ok: [localhost -> 192.168.1.239]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Fail when firewall manager is
not installed]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Install required packages for
oVirt Engine deployment]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Install oVirt Engine package]
[ ERROR ] fatal: [localhost -> 192.168.1.239]: FAILED! => {"changed":
false, "msg": "Failed to download metadata for repo
'ovirt-4.4-centos-ceph-pacific': Cannot download repomd.xml: Cannot
download repodata/repomd.xml: All mirrors were
tried", "rc": 1, "results": []}
It's because we have to use a proxy. It is configured on the host, but setup does not propagate it into the appliance (the temporary HE). So what can we do? How do we tell setup to use a proxy inside the HE appliance?
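My working assumption is that inside the appliance the failing download goes through dnf, so presumably a proxy line in the appliance's dnf configuration would help if setup could inject it (the proxy URL below is a placeholder):

```ini
# /etc/dnf/dnf.conf inside the engine appliance
[main]
proxy=http://proxy.example.com:3128
```

But is there a supported way to have hosted-engine setup do this, instead of pausing the deployment and editing the appliance by hand?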
Thank you for any hints!
Chris
2 years, 2 months
About go-ovirt Library
by Yusuf Papurcu
After v4.3.4 this package can no longer be fetched via Go modules. Please check this out.
2 years, 2 months
Re: Failed HostedEngine Deployment
by Strahil Nikolov
yum downgrade qemu-kvm-block-gluster-6.0.0-33.el8s libvirt-daemon-driver-qemu-6.0.0-33.el8s qemu-kvm-common-6.0.0-33.el8s qemu-kvm-hw-usbredir-6.0.0-33.el8s qemu-kvm-ui-opengl-6.0.0-33.el8s qemu-kvm-block-rbd-6.0.0-33.el8s qemu-img-6.0.0-33.el8s qemu-kvm-6.0.0-33.el8s qemu-kvm-block-curl-6.0.0-33.el8s qemu-kvm-block-ssh-6.0.0-33.el8s qemu-kvm-ui-spice-6.0.0-33.el8s ipxe-roms-qemu-6.0.0-33.el8s qemu-kvm-core-6.0.0-33.el8s qemu-kvm-docs-6.0.0-33.el8s qemu-kvm-block-6.0.0-33.el8s
Best Regards,
Strahil Nikolov
On Sun, Jan 23, 2022 at 22:47, Robert Tongue <phunyguy(a)neverserio.us> wrote:
Ahh, I did some repoquery commands and can see that a good bit of the qemu* packages are coming from appstream rather than ovirt-4.4-centos-stream-advanced-virtualization.
What's the recommended fix?
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: Sunday, January 23, 2022 3:41 PM
To: users <users(a)ovirt.org>; Robert Tongue <phunyguy(a)neverserio.us>
Subject: Re: [ovirt-users] Failed HostedEngine Deployment
I've seen this.
Ensure that all qemu-related packages are coming from centos-advanced-virtualization repo (6.0.0-33.el8s.x86_64).
There is a known issue with the latest packages in the CentOS Stream.
Also, you can set the following alias on the hypervisors:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
Best Regards,
Strahil Nikolov
On Sunday, January 23, 2022 at 21:14:20 GMT+2, Robert Tongue <phunyguy(a)neverserio.us> wrote:
Greetings oVirt people,
I am having a problem with the hosted-engine deployment, and unfortunately after a weekend spent trying to get this far, I am finally stuck, and cannot figure out how to fix this.
I am starting with 1 host, and will have 4 when this is finished. Storage is GlusterFS, hyperconverged, but I am managing that myself outside of oVirt. It's a single-node GlusterFS volume, which I will expand out across the other 4 nodes as well. I get all the way through the initial hosted-engine deployment (via the cockpit interface) pre-storage, then get most of the way through the storage portion of it. It fails at starting the HostedEngine VM in its final state after copying the VM disk to shared storage.
This is where it gets weird.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM IP address is while the engine's he_fqdn ovirt.deleted.domain resolves to 192.168.x.x. If you are using DHCP, check your DHCP reservation configuration"}
I've masked out the domain and IP for obvious reasons. However I think this deployment error isn't really the reason for the failure, it's just where it is at when it fails. The HostedEngine VM is starting, but not actually booting. I was able to change the VNC password with `hosted-engine --add-console-password`, and see the local console display with that, however it just displays "The guest has not initialized the display (yet)".
I also did:
# hosted-engine --console
The engine VM is running on this host
Escape character is ^]
Yet that doesn't move any further, nor allow any input. The VM does not respond on the network. I am thinking it's just not making it to the initial BIOS screen and booting at all. What would cause that?
Here is the glusterfs volume for clarity.
# gluster volume info storage

Volume Name: storage
Type: Distribute
Volume ID: e9544310-8890-43e3-b49c-6e8c7472dbbb
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: node1:/var/glusterfs/storage/1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
network.ping-timeout: 5
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1024
cluster.locking-scheme: full
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: disable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 58
model name      : Intel(R) Xeon(R) CPU E3-1280 V2 @ 3.60GHz
stepping        : 9
microcode       : 0x21
cpu MHz         : 4000.000
cache size      : 8192 KB
physical id     : 0
siblings        : 8
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx f16c rdrand lahf_lm cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit srbds
bogomips        : 7199.86
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:
[ plus 7 more ]
Thanks for any insight that can be provided.
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/JZQYGXQP5DO...
2 years, 2 months
Major problems after upgrading 2 (of 4) Red Hat hosts to 4.4.10
by David White
I have a Hyperconverged cluster with 4 hosts.
Gluster is replicated across 2 hosts, and a 3rd host is an arbiter node.
The 4th host is compute only.
I updated the compute-only node, as well as the arbiter node, early this morning. I didn't touch either of the actual storage nodes. That said, I forgot to upgrade the engine.
oVirt Manager thinks that all but 1 of the hosts in the cluster are unhealthy. However, all 4 hosts are online. oVirt Manager (Engine) also keeps deactivating at least 1, if not 2 of the 3 (total) bricks behind each volume.
Even though the Engine thinks that only 1 host is healthy, VMs are clearly running on some of the other hosts. However, in troubleshooting, some of the customer VMs were turned off, and oVirt is refusing to start those VMs, because it only recognizes that 1 of the hosts is healthy -- and that host's resources are maxed out.
This afternoon, I went ahead and upgraded (and rebooted) the Engine VM, so it is now up-to-date. Unfortunately, that didn't resolve the issue. So I took one of the "unhealthy" hosts which didn't have any VMs on it (which was the host that is our compute-only server hosting no gluster data), and I used oVirt to "reinstall" the oVirt software. That didn't resolve the issue for that host.
How can I troubleshoot this? I need:
- To figure out why oVirt keeps trying to deactivate volumes
- From the command line, `gluster peer status` shows all nodes connected, and all volumes appear to be healthy
- More importantly, I need to get these VMs that are currently down back online. Is there a way to somehow force oVirt to launch the VMs on the "unhealthy" nodes?
What logs should I be looking at? Any help would be greatly appreciated.
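For reference, my assumption is that the relevant places to look are the engine log, the VDSM logs on the hosts, and the gluster heal state, e.g.:

```shell
# On the engine VM
tail -n 200 /var/log/ovirt-engine/engine.log

# On each host
tail -n 200 /var/log/vdsm/vdsm.log
gluster volume heal <VOLNAME> info
gluster volume status
```

but I don't know which of these oVirt's health checks actually key off.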
Sent with ProtonMail Secure Email.
2 years, 2 months
Re: Failed HostedEngine Deployment
by Strahil Nikolov
I've seen this.
Ensure that all qemu-related packages are coming from centos-advanced-virtualization repo (6.0.0-33.el8s.x86_64).
There is a known issue with the latest packages in the CentOS Stream.
Also, you can set the following alias on the hypervisors:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
Best Regards,
Strahil Nikolov
2 years, 2 months