<feature policy='require' name='md-clear'/>
by thomas.c.albrecht@lmco.com
I had oVirt 4.3 running on CentOS just fine for months. I did a yum update, and now the hosted engine isn't starting up.
[root@nazgul qemu]# hosted-engine --vm-status
--== Host nazgul.glamdring.offline (id: 1) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : nazgul.glamdring.offline
Host ID : 1
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score : 0
stopped : False
Local maintenance : False
crc32 : fdab6d19
local_conf_timestamp : 2357
Host timestamp : 2357
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2357 (Sat Jan 25 22:14:55 2020)
host-id=1
score=0
vm_conf_refresh_time=2357 (Sat Jan 25 22:14:55 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Wed Dec 31 19:47:06 1969
Digging through the logs, I'm seeing an error that seems to be caused by an updated version of libvirt.
Jan 25 22:13:05 nazgul libvirtd: 2020-01-26 03:13:05.879+0000: 4307: error : virCPUx86Compare:1785 : the CPU is incompatible with host CPU: Host CPU does not provide required features: md-clear
In my /etc/libvirt/qemu/HostedEngine.xml, I'm thinking this line is the problem:
<feature policy='require' name='md-clear'/>
Looking at my CPU flags, md-clear isn't one of them:
model name : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
stepping : 7
microcode : 0x714
cpu MHz : 3000.061
cache size : 20480 KB
physical id : 1
siblings : 16
core id : 7
cpu cores : 8
apicid : 47
initial apicid : 47
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm epb ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts spec_ctrl intel_stibp flush_l1d
Any idea how to remove that from the required CPU flags? And why is it needed in the first place?
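For what it's worth, md-clear normally only becomes available after a CPU microcode update, at which point the kernel lists it in /proc/cpuinfo as md_clear (libvirt spells it md-clear). A minimal sketch to confirm whether the host exposes it, assuming nothing beyond the feature name itself:

#!/usr/bin/python
# Minimal check: does the host CPU expose the feature that
# <feature policy='require' name='md-clear'/> asks for?
# The kernel spells the flag md_clear while libvirt uses md-clear,
# so the names are normalised before comparing.

def host_cpu_flags(path='/proc/cpuinfo'):
    with open(path) as f:
        for line in f:
            if line.startswith('flags'):
                return {flag.replace('_', '-') for flag in line.split(':', 1)[1].split()}
    return set()

missing = {'md-clear'} - host_cpu_flags()
if missing:
    print('host CPU is missing required feature(s): %s' % ', '.join(sorted(missing)))
else:
    print('all required CPU features are present')

If the flag really is unavailable, the usual options are a CPU microcode/firmware update or selecting a cluster CPU type that does not require md-clear; hand-editing /etc/libvirt/qemu/HostedEngine.xml is likely to be undone, since that definition is regenerated from the hosted-engine VM configuration.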
4 years, 11 months
ovirt_vm ansible module -- how to wait for ova export to finish
by Jayme
I wrote a simple task that uses the ovirt_vm module
https://docs.ansible.com/ansible/latest/modules/ovirt_vm_module.html -- it
essentially loops over a list of VMs and exports them to OVA.
The problem I have is that the task is deemed changed as soon as it successfully
submits the export task to oVirt. This means that if I gave it a list of
100 VMs, I believe it would start an export task on all of them. I want to
prevent this and have it export only one VM at a time. To do this
I believe I need a way for the task to wait and verify
that the export has completed before submitting a task for the next VM
export.
Any ideas?
- name: Export the VM
  ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: "{{ item }}"
    state: exported
    cluster: default
    export_ova:
      host: Host0
      filename: "{{ item }}"
      directory: /backup/
  with_items: "{{ vms }}"
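One way to serialize the exports, sketched here with the ovirtsdk4 Python SDK rather than the ovirt_vm module (the engine URL, credentials, VM names, export host and paths below are placeholders), is to submit one export at a time and poll the engine's jobs service until no export job for that VM is still running:

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details -- substitute your engine and credentials.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

system = connection.system_service()
vms_service = system.vms_service()
jobs_service = system.jobs_service()
host = system.hosts_service().list()[0]   # the export host ("Host0" in the task above)

def export_still_running(vm_name):
    # An export job is matched by its description mentioning the VM name.
    return any(
        vm_name in (job.description or '') and job.status == types.JobStatus.STARTED
        for job in jobs_service.list()
    )

for name in ['vm1', 'vm2']:               # your list of VM names
    vm = vms_service.list(search='name=%s' % name)[0]
    vms_service.vm_service(vm.id).export_to_path_on_host(
        host=types.Host(id=host.id),
        directory='/backup',
        filename='%s.ova' % name,
    )
    while export_still_running(name):
        time.sleep(30)

connection.close()

The same polling could be wrapped around the Ansible task instead, for example by looping over an included task file per VM and adding a follow-up task that queries the jobs API until the export job for that VM has finished.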
4 years, 11 months
upgrading ovirt host from 4.1 to 4.2
by Crazy Ayansh
Hi Team,
I need some quick help. I need to upgrade an oVirt host from 4.1
to 4.2, but I am getting the errors below:
--> Finished Dependency Resolution
Error: Package: nfs-ganesha-nullfs-2.3.3-1.el7.x86_64
(@ovirt-4.1-centos-gluster38)
Requires: nfs-ganesha = 2.3.3-1.el7
Removing: nfs-ganesha-2.3.3-1.el7.x86_64
(@ovirt-4.1-centos-gluster38)
nfs-ganesha = 2.3.3-1.el7
Updated By: nfs-ganesha-2.5.5-1.el7.x86_64
(ovirt-4.2-centos-gluster312)
nfs-ganesha = 2.5.5-1.el7
Available: nfs-ganesha-2.5.2-1.el7.x86_64
(ovirt-4.2-centos-gluster312)
nfs-ganesha = 2.5.2-1.el7
Available: nfs-ganesha-2.5.3-1.el6.x86_64
(ovirt-4.2-centos-gluster312)
nfs-ganesha = 2.5.3-1.el6
Available: nfs-ganesha-2.5.3-1.el7.x86_64
(ovirt-4.2-centos-gluster312)
nfs-ganesha = 2.5.3-1.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
This host also has the hosted engine running on it, and I am using local
storage for the hosted engine. Can anyone help with the above?
Thanks
Shashank
4 years, 11 months
gluster shard size
by Alex McWhirter
Building a new Gluster volume this weekend, trying to optimize it fully
for virt. RHGS states that it supports only a 512MB shard size, so
why is the default for oVirt 64MB?
4 years, 11 months
Re: KVM (libvirt) VM Import
by jorgevisentini@gmail.com
Hi Paul, thanks for your reply.
I imported the VM with the command below, but I had to create an Export Domain in oVirt for this task. The only problem is that the task is in two parts: the first part is the export to the Export Domain on NFS, and the second part is the import into oVirt from the Export Domain.
This was the only way I managed to import it.
virt-v2v -ic qemu+ssh://root@10.255.255.23/system TEMPLATE-WINDOWS2019 -o rhev -os 10.255.255.22:/export --network TEMPLATE-WINDOWS2019
Thanks again.
4 years, 11 months
API OVA export - getting job id/job status
by lars.stolpe@bvg.de
Hi,
I tried this with API 4.2 and 4.3.
The purpose of the following script is to export a given list of VMs as OVA, one after another.
To achieve that I need to monitor the job status and pause the script until the current export is done.
The script works fine, except for restricting the returned jobs to the one specific job I need to monitor.
Therefore the script pauses on *any* running job.
The working script:
#!/usr/bin/python
import logging
import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://ovirtman12/ovirt-engine/api',
    username='admin@internal',
    password='***',
    ca_file='/etc/pki/ovirt-engine/ca-ovirtman12.pem',
)

hosts_service = connection.system_service().hosts_service()
hosts = hosts_service.list()[0]
vms_service = connection.system_service().vms_service()
vms = vms_service.list(search='name=blxlic954')

for vm in vms:
    # print("%s (%s)" % (vm.name, vm.id))
    vm_service = vms_service.vm_service(vm.id)
    start_time = (time.strftime('%Y%m%d_%H%M%S', time.localtime(int(time.time()))))
    vm_service.export_to_path_on_host(
        host=types.Host(id=hosts.id),
        directory='/nfs_c3/export',
        filename=('%s_backup_%s.ova' % (vm.name, start_time)),
        wait=True,
    )
    # time.sleep(5)
    jobs_service = connection.system_service().jobs_service()
    jobs = jobs_service.list(search='')
    for job in jobs:
        print(job.id, job.description)
        # job = jobs_service.job_service(job.id).get()
        while job.status == types.JobStatus.STARTED:
            time.sleep(10)
            job = jobs_service.job_service(job.id).get()
            print('job-status: %s' % (job.status))

connection.close()
The line
jobs = jobs_service.list(search='')
works fine as long as the search pattern is empty.
If I try to restrict the returned results like this:
jobs = jobs_service.list(search='description=*blxlic954*')
I get an error:
bad sql grammar [select * from (select * from job where ( job id in (select distinct job.job id from job where ( ) )) order by start time asc) as t1 offset (1 -1) limit 2147483647]; nested exception is org.postgresql.util.psqlexception: error: syntax error at or near ")"
It looks like the 'where' clause is not filled correctly.
Am I wrong with my syntax, or is that a bug?
Is there another way to get the correct job id/status?
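If the search parameter keeps tripping that SQL error, one workaround (a sketch, not a claim about which search syntax the jobs collection supports) is to skip the server-side search entirely and filter the job list on the client, matching description and status in Python:

import time
import ovirtsdk4.types as types

def running_export_jobs(jobs_service, vm_name):
    # Jobs whose description mentions the VM and that are still running.
    # This replaces the search= parameter with a plain substring match.
    return [
        job for job in jobs_service.list()
        if vm_name in (job.description or '')
        and job.status == types.JobStatus.STARTED
    ]

# Inside the "for vm in vms:" loop of the script above, after the export call:
#     while running_export_jobs(jobs_service, vm.name):
#         time.sleep(10)

That way the script only pauses on jobs that mention the VM currently being exported, instead of on any running job.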
4 years, 11 months
Hyperconverged solution
by Benedetto Vassallo
Hello Guys,
Here at the University of Palermo (Italy) we are planning to switch from
VMware to oVirt using the hyperconverged solution.
Our design is a 6-node cluster, each node with this configuration:
- 1x Dell PowerEdge R7425 server;
- 2x AMD EPYC 7301 Processor;
- 512GB of RAM (8x 64GB LRDIMM, 2666MT/s, Quad Rank);
- 2x Broadcom 57412 Dual Port 10Gb SFP+ ethernet card;
- 3x 600GB 10K RPM SAS for the OS (Raid1 + hotspare);
- 5x 1.2TB 10K RPM SAS for the hosted storage domain (Raid5 + hotspare);
- 11x 2.4TB 10K RPM SAS for the VM data domain (Raid6 + hotspare);
- 4x 960GB SSD SAS for an additional SSD storage domain (Raid5 + hotspare);
Is this configuration supported, or do I have to change something?
Thank you and Best Regards.
--
Benedetto Vassallo
Responsabile U.O. Sviluppo e manutenzione dei sistemi
Sistema Informativo di Ateneo
Università degli studi di Palermo
Phone: +3909123860056
Fax: +3909123860880
4 years, 11 months
ovirt upgrade
by David David
Hi all,
How do I upgrade from 4.2.5 to 4.3.7?
What steps are necessary? Are they the same as when upgrading from 4.1 to 4.2?
# engine-upgrade-check
# yum update ovirt*setup*
# engine-setup
thanks
4 years, 11 months
Ovirt 4.3
by eevans@digitaldatatechs.com
I set up the repository as per the oVirt instructions on CentOS 7 x64, but when
I run yum install ovirt-engine I get missing dependencies.
Here are the errors:
Error: Package: ovirt-engine-backend-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: ebay-cors-filter
Error: Package: ovirt-engine-setup-plugin-ovirt-engine-4.3.7.2-1.el7.noarch
(ovirt-4.3)
Requires: rh-postgresql10-postgresql-contrib
Error: Package: ovirt-provider-ovn-1.2.27-1.el7.noarch (ovirt-4.3)
Requires: python-ovsdbapp
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: openstack-java-keystone-client >= 3.2.7
Error: Package: ovirt-engine-metrics-1.3.5.1-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.8.3
Available: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-provider-ovn-1.2.27-1.el7.noarch (ovirt-4.3)
Requires: openvswitch >= 2.7
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: openstack-java-cinder-model >= 3.2.7
Error: Package: ovirt-ansible-manageiq-1.1.14-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Installing: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-ansible-vm-infra-1.1.22-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Available: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-provider-ovn-1.2.27-1.el7.noarch (ovirt-4.3)
Requires: openvswitch-ovn-common >= 2.7
Error: Package: ovirt-engine-backend-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: novnc >= 0.5.0
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: collectd-write_http
Error: Package: ovirt-ansible-repositories-1.1.5-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Available: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-ansible-infra-1.1.13-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Installing: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: openstack-java-glance-model >= 3.2.7
Error: Package: ovirt-provider-ovn-1.2.27-1.el7.noarch (ovirt-4.3)
Requires: python-openvswitch >= 2.7
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: openstack-java-keystone-model >= 3.2.7
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: collectd-write_syslog
Error: Package: ovirt-ansible-image-template-1.1.12-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Available: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-engine-dwh-setup-4.3.6-1.el7.noarch (ovirt-4.3)
Requires: rh-postgresql10-postgresql-server
Error: Package: ovirt-engine-websocket-proxy-4.3.7.2-1.el7.noarch
(ovirt-4.3)
Requires: python-websockify >= 0.6.0
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: nimbus-jose-jwt >= 5.12
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: openstack-java-glance-client >= 3.2.7
Error: Package: ovirt-engine-dwh-4.3.6-1.el7.noarch (ovirt-4.3)
Requires: rh-postgresql10-postgresql-server
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: openstack-java-resteasy-connector >= 3.2.7
Error: Package: ovirt-ansible-engine-setup-1.1.9-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Available: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-ansible-hosted-engine-setup-1.0.32-1.el7.noarch
(ovirt-4.3)
Requires: ansible >= 2.7
Available: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-ansible-disaster-recovery-1.2.0-1.el7.noarch
(ovirt-4.3)
Requires: ansible >= 2.7.2
Available: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: rh-postgresql10-postgresql-server
Error: Package: ovirt-ansible-hosted-engine-setup-1.0.32-1.el7.noarch
(ovirt-4.3)
Requires: ansible >= 2.7
Installing: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: python2-ovirt-engine-lib-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: python-daemon
Error: Package: ovirt-engine-tools-backup-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: rh-postgresql10-postgresql
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Available: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7
Available: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-ansible-vm-infra-1.1.22-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Installing: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: openstack-java-cinder-client >= 3.2.7
Error: Package: ovirt-imageio-common-1.5.2-0.el7.x86_64 (ovirt-4.3)
Requires: qemu-img-rhev
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: json-smart >= 2.2
Error: Package: ovirt-engine-setup-plugin-ovirt-engine-4.3.7.2-1.el7.noarch
(ovirt-4.3)
Requires: openvswitch-ovn-central >= 2.7
Error: Package: ovirt-ansible-infra-1.1.13-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Available: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Installing: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-ansible-disaster-recovery-1.2.0-1.el7.noarch
(ovirt-4.3)
Requires: ansible >= 2.7.2
Installing: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-ansible-engine-setup-1.1.9-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Installing: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-engine-dwh-4.3.6-1.el7.noarch (ovirt-4.3)
Requires: rh-postgresql10-postgresql-contrib
Error: Package: ovirt-ansible-manageiq-1.1.14-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Available: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: archaius-core
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: rxjava
Error: Package: ovirt-ansible-roles-1.1.7-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Available: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-ansible-roles-1.1.7-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Installing: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-engine-dwh-setup-4.3.6-1.el7.noarch (ovirt-4.3)
Requires: rh-postgresql10-postgresql-contrib
Error: Package: ovirt-provider-ovn-1.2.27-1.el7.noarch (ovirt-4.3)
Requires: openvswitch-ovn-central >= 2.7
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: hystrix-core
Error: Package: ovirt-engine-backend-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: novnc < 0.6.0
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: collectd-postgresql
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: rh-postgresql10-postgresql-contrib
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: openstack-java-client >= 3.2.7
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: collectd-disk
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: collectd
Error: Package: ovirt-ansible-image-template-1.1.12-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Installing: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: openstack-java-quantum-client >= 3.2.7
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: openstack-java-quantum-model >= 3.2.7
Error: Package: ovirt-ansible-cluster-upgrade-1.1.14-1.el7.noarch
(ovirt-4.3)
Requires: ansible >= 2.7.2
Available: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-engine-4.3.7.2-1.el7.noarch (ovirt-4.3)
Requires: hystrix-metrics-event-stream
Error: Package: ovirt-ansible-repositories-1.1.5-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7.2
Installing: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-ansible-cluster-upgrade-1.1.14-1.el7.noarch
(ovirt-4.3)
Requires: ansible >= 2.7.2
Installing: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.7
Installing: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
Error: Package: ovirt-engine-setup-plugin-ovirt-engine-4.3.7.2-1.el7.noarch
(ovirt-4.3)
Requires: rh-postgresql10-postgresql-server
Error: Package: ovirt-engine-metrics-1.3.5.1-1.el7.noarch (ovirt-4.3)
Requires: ansible >= 2.8.3
Installing: ansible-2.4.2.0-2.el7.noarch (extras)
ansible = 2.4.2.0-2.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
My understanding is that all the dependencies should be in the oVirt repository.
Also, the repos point to fc$releasever; I had to change the fc to
el$releasever before it even worked.
What did I miss?
Eric Evans
Digital Data Services LLC.
304.660.9080
4 years, 11 months
Re: OVA export to NFS share slow
by Jürgen Walch
➢ I have a very similar setup as you and have just very recently started testing OVA exports for backup purposes to NFS attached storage.
➢ I have a three node HCI on GlusterFS (SSD backed) with 10Gbit and my ovirt management network is 10Gbit as well. My NFS storage server is an 8 x 8Tb 7200 RPM drives in RAID10 running CentOS 8x with 10Gbit link.
Our setups are indeed similar, the main difference being that my management network, including the connection to the NFS server, is only 1Gbit. Only GlusterFS has 10Gbit here.
➢ I haven't done specific measurement yet as I just setup the storage today but a test export of a 50Gb VM took just about ~10 minutes start to finish.
Doing the maths, this is ~80MiB/s, about 20 times faster than in my setup. Lucky you 😊
Much less than your 10Gbit link between NFS Server and nodes could provide, but maybe close to the limit of the drives in your NFS server.
The interesting thing is that when I set up an export domain, stop the VM and do an export to the *same* NFS server, I get write speeds as expected.
Only the OVA export is terribly slow.
The main difference I can see is the use of a loop device when exporting to OVA.
The export to the export domain does something like
/usr/bin/qemu-img convert -p -t none -T none -f raw {source disk on GlusterFS} {target disk on NFS server}
whereas the OVA export will do
/usr/bin/qemu-img convert -T none -O qcow2 {source snapshot on GlusterFS} /dev/loopX
with /dev/loopX pointing to the NFS OVA target image.
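One way to narrow down whether the loop-device path (rather than the NFS link or the drives) is the limit would be to measure plain sequential write throughput on the same NFS mount and compare it with the few MiB/s the OVA export reaches. A rough sketch, with a made-up mount path:

#!/usr/bin/python
# Rough sequential-write test against an NFS-mounted path, for comparing raw
# throughput of the export target with what the OVA export achieves.
import os
import time

TARGET = '/rhev/data-center/mnt/nfsserver:_export/write_test.bin'  # hypothetical path
SIZE_MB = 1024
CHUNK = b'\0' * (1024 * 1024)

start = time.time()
with open(TARGET, 'wb') as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())
elapsed = time.time() - start

print('%d MiB in %.1f s -> %.1f MiB/s' % (SIZE_MB, elapsed, SIZE_MB / elapsed))
os.unlink(TARGET)

This is only a page-cache-plus-fsync figure, not an O_DIRECT one, but it is usually enough to tell a ~4MiB/s bottleneck apart from a link or disk limit in the 100MiB/s range.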
If you have the time and are willing to test, I would be interested in how fast your exports to an export domain are.
--
juergen
4 years, 11 months