oVirt survey - May 2019
by Sandro Bonazzola
As we continue to develop oVirt 4.3 and future releases, the Development
and Integration teams at Red Hat would value insights on how you are
deploying the oVirt environment.
Please help us hit the mark by completing this short survey. The survey
will close on June 7th.
If you're managing multiple oVirt deployments with very different use cases
or very different configurations, you can consider answering this survey
multiple times (once per deployment).
Please note that the answers to this survey will be publicly accessible. This
survey is governed by the oVirt Privacy Policy, available at
https://www.ovirt.org/site/privacy-policy.html
The survey is available here: https://forms.gle/8uzuVNmDWtoKruhm8
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
<https://redhat.com/summit>
[ANN] oVirt 4.3.4 Third Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.4 Third Release Candidate, as of May 30th, 2019.
This update is a release candidate of the fourth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]
Additional Resources:
* Read more about the oVirt 4.3.4 release highlights:
http://www.ovirt.org/release/4.3.4/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.4/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
<https://redhat.com/summit>
[ANN] oVirt 4.3.4 Second Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.4 Second Release Candidate, as of May 22nd, 2019.
This update is a release candidate of the fourth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]
Additional Resources:
* Read more about the oVirt 4.3.4 release highlights:
http://www.ovirt.org/release/4.3.4/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.4/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
<https://redhat.com/summit>
Re: [ovirt-users] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available
by Nir Soffer
On Thu, May 16, 2019 at 10:12 PM Darrell Budic <budic(a)onholyground.com>
wrote:
> On May 16, 2019, at 1:41 PM, Nir Soffer <nsoffer(a)redhat.com> wrote:
>
>
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic <budic(a)onholyground.com>
> wrote:
>
>> I tried adding a new storage domain on my hyper converged test cluster
>> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster
>> volume fine, but it’s not able to add the gluster storage domain (as either
>> a managed gluster volume or directly entering values). The created gluster
>> volume mounts and looks fine from the CLI. Errors in VDSM log:
>>
>> ...
>
>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
>> file system doesn't supportdirect IO (fileSD:110)
>> 2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [vdsm.api] FINISH
>> createStorageDomain error=Storage Domain target is unsupported: ()
>> from=::ffff:10.100.90.5,44732, flow_id=31d993dd,
>> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>>
>
> The direct I/O check has failed.
>
>
> So something is wrong in the file system.
>
> To confirm, you can try to do:
>
> dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
>
> This will probably fail with:
> dd: failed to open '/path/to/mountpoint/test': Invalid argument
>
> If it succeeds, but oVirt fails to connect to this domain, file a bug and
> we will investigate.
>
> Nir
>
>
> Yep, it fails as expected. Just to check, it is working on pre-existing
> volumes, so I poked around at gluster settings for the new volume. It has
> network.remote-dio=off set on the new volume, but enabled on old volumes.
> After enabling it, I’m able to run the dd test:
>
> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
> volume set: success
> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1
> oflag=direct
> 1+0 records in
> 1+0 records out
> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
>
> I’m also able to add the storage domain in ovirt now.
>
> I see network.remote-dio=enable is part of the gluster virt group, so
> apparently it's not getting set by ovirt during the volume creation/optimize
> for storage?
>
I'm not sure who is responsible for changing these settings. oVirt has
always required direct I/O, and we never had to change anything in gluster.
Sahina, maybe gluster changed the defaults?
Darrell, please file a bug, probably for RHHI.
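For anyone hitting this, one way to audit volumes is to feed the output of
`gluster volume get <vol> network.remote-dio` to a small parser. A rough
sketch (hypothetical helpers, not part of oVirt; the exact CLI table layout
is an assumption):

```python
def parse_volume_options(cli_output):
    """Parse the table printed by `gluster volume get <vol> <option>`
    into an {option: value} dict. Header and separator lines are skipped.
    (Assumes the two-column "Option / Value" layout of the gluster CLI.)"""
    options = {}
    for line in cli_output.splitlines():
        parts = line.split()
        if len(parts) != 2 or parts[0] in ("Option", "------"):
            continue
        options[parts[0]] = parts[1]
    return options


def remote_dio_enabled(cli_output):
    """True when network.remote-dio is reported as 'enable' or 'on'."""
    value = parse_volume_options(cli_output).get("network.remote-dio", "off")
    return value in ("enable", "on")
```

Pipe `gluster volume get test network.remote-dio` into a file and pass its
contents to `remote_dio_enabled` to see whether a given volume matches the
virt-group setting.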
Nir
Re: [ovirt-users] Re: Re: [ANN] oVirt 4.3.4 First Release Candidate is now available
by Nir Soffer
On Thu, May 16, 2019 at 10:02 PM Strahil <hunter86_bg(a)yahoo.com> wrote:
> This is my previous e-mail:
>
> On May 16, 2019 15:23, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>
> It seems that the issue is within the 'dd' command as it stays waiting for
> input:
>
> [root@ovirt1 mnt]# /usr/bin/dd iflag=fullblock of=file
> oflag=direct,seek_bytes seek=1048576 bs=256512 count=1
> conv=notrunc,nocreat,fsync ^C0+0 records in
> 0+0 records out
> 0 bytes (0 B) copied, 19.3282 s, 0.0 kB/s
>
> Changing the dd command works and shows that the gluster is working:
>
> [root@ovirt1 mnt]# cat /dev/urandom | /usr/bin/dd of=file
> oflag=direct,seek_bytes seek=1048576 bs=256512 count=1
> conv=notrunc,nocreat,fsync 0+1 records in
> 0+1 records out
> 131072 bytes (131 kB) copied, 0.00705081 s, 18.6 MB/s
>
> Best Regards,
>
> Strahil Nikolov
>
> ----- Forwarded message -----
>
> *From:* Strahil Nikolov <hunter86_bg(a)yahoo.com>
>
> *To:* Users <users(a)ovirt.org>
>
> *Sent:* Thursday, May 16, 2019, 5:56:44 AM GMT-4
>
> *Subject:* ovirt 4.3.3.7 cannot create a gluster storage domain
>
> Hey guys,
>
> I have recently updated (yesterday) my platform to the latest available (v
> 4.3.3.7) and upgraded to gluster v6.1. The setup is a hyperconverged 3-node
> cluster with ovirt1/gluster1 & ovirt2/gluster2 as replica nodes (glusterX
> is for gluster communication) while ovirt3 is the arbiter.
>
> Today I have tried to add new storage domains but they fail with the
> following:
>
> 2019-05-16 10:15:21,296+0300 INFO (jsonrpc/2) [vdsm.api] FINISH
> createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock',
> u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases',
> 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1',
> 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]'
> err="/usr/bin/dd: error writing
> '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases':
> Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied,
> 0.0138582 s, 0.0 kB/s\n" from=::ffff:192.168.1.2,43864, flow_id=4a54578a,
> task_id=d2535d0f-c7f7-4f31-a10f-704923ce1790 (api:52)
>
>
This may be another issue. This command works only for storage with a
512-byte sector size.
Hyperconverged systems may use VDO, and it must be configured in
compatibility mode to support a 512-byte sector size.
I'm not sure how this is configured, but Sahina should know.
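One quick way to see whether this applies is to check the logical sector
size the device reports. Note also that bs=256512 is a multiple of 512 but
not of 4096, so a direct write of that size fails with EINVAL on 4K-sector
storage. A minimal sketch (hypothetical helper, not vdsm code; reads Linux
sysfs):

```python
import os


def logical_block_size(device, sysfs="/sys/block"):
    """Return the logical sector size a block device reports (512 or 4096).
    `device` is a bare name like 'sda' or a VDO volume name; `sysfs` is
    overridable for testing. (Hypothetical helper; vdsm probes differently.)"""
    path = os.path.join(sysfs, device, "queue", "logical_block_size")
    with open(path) as f:
        return int(f.read().strip())
```

A VDO volume created without 512-byte emulation reports 4096 here, which
would make the xleases dd command above fail exactly as shown.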
Nir
> 2019-05-16 10:15:21,296+0300 ERROR (jsonrpc/2) [storage.TaskManager.Task]
> (Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') Unexpected error (task:875)
> Traceback (most recent call last):
> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
> in _run
> return fn(*args, **kargs)
> File "<string>", line 2, in createStorageDomain
> File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
> method
> ret = func(*args, **kwargs)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2614,
> in createStorageDomain
> storageType, domVersion, block_size, alignment)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 106,
> in create
> block_size)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 466,
> in _prepareMetadata
> cls.format_external_leases(sdUUID, xleases_path)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 1255,
> in format_external_leases
> xlease.format_index(lockspace, backend)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line
> 681, in format_index
> index.dump(file)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line
> 843, in dump
> file.pwrite(INDEX_BASE, self._buf)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line
> 1076, in pwr
>
>
> It seems that the 'dd' check is having trouble with the new gluster volume.
> The output is from RC1, but as you can see, Darrell's situation may be
> the same.
> On May 16, 2019 21:41, Nir Soffer <nsoffer(a)redhat.com> wrote:
>
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic <budic(a)onholyground.com>
> wrote:
>
> I tried adding a new storage domain on my hyper converged test cluster
> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new
> gluster volume fine, but it’s not able to add the gluster storage domain
> (as either a managed gluster volume or directly entering values). The
> created gluster volume mounts and looks fine from the CLI. Errors in VDSM
> log:
>
> ...
>
> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
> file system doesn't supportdirect IO (fileSD:110)
> 2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [vdsm.api] FINISH
> createStorageDomain error=Storage Domain target is unsupported: ()
> from=::ffff:10.100.90.5,44732, flow_id=31d993dd,
> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>
>
> The direct I/O check has failed.
>
> This is the code doing the check:
>
> 98 def validateFileSystemFeatures(sdUUID, mountDir):
> 99 try:
> 100 # Don't unlink this file, we don't have the cluster lock yet
> as it
> 101 # requires direct IO which is what we are trying to test for.
> This
> 102 # means that unlinking the file might cause a race. Since we
> don't
> 103 # care what the content of the file is, just that we managed to
> 104 # open it O_DIRECT.
> 105 testFilePath = os.path.join(mountDir, "__DIRECT_IO_TEST__")
> 106 oop.getProcessPool(sdUUID).directTouch(testFilePath)
>
>
> 107 except OSError as e:
> 108 if e.errno == errno.EINVAL:
> 109 log = logging.getLogger("storage.fileSD")
> 110 log.error("Underlying file system doesn't support"
>
>
Re: [ovirt-users] Re: Re: [ANN] oVirt 4.3.4 First Release Candidate is now available
by Nir Soffer
On Thu, May 16, 2019 at 8:38 PM Darrell Budic <budic(a)onholyground.com>
wrote:
> I tried adding a new storage domain on my hyper converged test cluster
> running Ovirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster
> volume fine, but it’s not able to add the gluster storage domain (as either
> a managed gluster volume or directly entering values). The created gluster
> volume mounts and looks fine from the CLI. Errors in VDSM log:
>
> ...
> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying
> file system doesn't supportdirect IO (fileSD:110)
> 2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [vdsm.api] FINISH
> createStorageDomain error=Storage Domain target is unsupported: ()
> from=::ffff:10.100.90.5,44732, flow_id=31d993dd,
> task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>
The direct I/O check has failed.
This is the code doing the check:
98 def validateFileSystemFeatures(sdUUID, mountDir):
99 try:
100 # Don't unlink this file, we don't have the cluster lock yet as
it
101 # requires direct IO which is what we are trying to test for.
This
102 # means that unlinking the file might cause a race. Since we
don't
103 # care what the content of the file is, just that we managed to
104 # open it O_DIRECT.
105 testFilePath = os.path.join(mountDir, "__DIRECT_IO_TEST__")
106 oop.getProcessPool(sdUUID).directTouch(testFilePath)
107 except OSError as e:
108 if e.errno == errno.EINVAL:
109 log = logging.getLogger("storage.fileSD")
110 log.error("Underlying file system doesn't support"
111 "direct IO")
112 raise se.StorageDomainTargetUnsupported()
113
114 raise
The actual check is done in ioprocess, using:
319 fd = open(path->str, allFlags, mode);
320 if (fd == -1) {
321 rv = fd;
322 goto clean;
323 }
324
325 rv = futimens(fd, NULL);
326 if (rv < 0) {
327 goto clean;
328 }
With:
allFlags = O_WRONLY | O_CREAT | O_DIRECT
See:
https://github.com/oVirt/ioprocess/blob/7508d23e19aeeb4dfc180b854a5a92690...
According to the error message:
Underlying file system doesn't support direct IO
We got EINVAL, which is possible only from open(), and is likely an issue
opening the file with O_DIRECT.
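In Python terms the probe boils down to something like the sketch below (a
simplification of vdsm's ioprocess call, not the real implementation;
`supports_direct_io` is a hypothetical helper and `os.O_DIRECT` is
Linux-only):

```python
import errno
import os


def supports_direct_io(dirpath):
    """Try to create a file with O_DIRECT, the same flags ioprocess uses
    (O_WRONLY | O_CREAT | O_DIRECT). EINVAL from open() means the file
    system rejects direct I/O; any other error is re-raised.
    The test file is intentionally not unlinked, mirroring vdsm's check."""
    path = os.path.join(dirpath, "__DIRECT_IO_TEST__")
    try:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
    except OSError as e:
        if e.errno == errno.EINVAL:
            return False
        raise
    os.close(fd)
    return True
```

Running this against the gluster mountpoint should agree with the dd test
below: EINVAL here is the same failure vdsm logs as "Storage Domain target
is unsupported".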
So something is wrong in the file system.
To confirm, you can try to do:
dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
This will probably fail with:
dd: failed to open '/path/to/mountpoint/test': Invalid argument
If it succeeds, but oVirt fails to connect to this domain, file a bug and we
will investigate.
Nir
>
> On May 16, 2019, at 11:55 AM, Nir Soffer <nsoffer(a)redhat.com> wrote:
>
> On Thu, May 16, 2019 at 7:42 PM Strahil <hunter86_bg(a)yahoo.com> wrote:
>
>> Hi Sandro,
>>
>> Thanks for the update.
>>
>> I have just upgraded to RC1 (using gluster v6 here) and the issue I
>> detected in 4.3.3.7 - where gluster Storage domain fails creation - is
>> still present.
>>
>
> What is this issue? Can you provide a link to the bug/mail about it?
>
> Can you check if the 'dd' command executed during creation has been
>> recently modified?
>>
>> I've received update from Darrell (also gluster v6) , but haven't
>> received an update from anyone who is using gluster v5 -> thus I haven't
>> opened a bug yet.
>>
>> Best Regards,
>> Strahil Nikolov
>> On May 16, 2019 11:21, Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>>
>> The oVirt Project is pleased to announce the availability of the oVirt
>> 4.3.4 First Release Candidate, as of May 16th, 2019.
>>
>> This update is a release candidate of the fourth in a series of
>> stabilization updates to the 4.3 series.
>> This is pre-release software. This pre-release should not be used in
>> production.
>>
>> This release is available now on x86_64 architecture for:
>> * Red Hat Enterprise Linux 7.6 or later
>> * CentOS Linux (or similar) 7.6 or later
>>
>> This release supports Hypervisor Hosts on x86_64 and ppc64le
>> architectures for:
>> * Red Hat Enterprise Linux 7.6 or later
>> * CentOS Linux (or similar) 7.6 or later
>> * oVirt Node 4.3 (available for x86_64 only)
>>
>> Experimental tech preview for x86_64 and s390x architectures for Fedora
>> 28 is also included.
>>
>> See the release notes [1] for installation / upgrade instructions and a
>> list of new features and bugs fixed.
>>
>> Notes:
>> - oVirt Appliance is already available
>> - oVirt Node is already available[2]
>>
>> Additional Resources:
>> * Read more about the oVirt 4.3.4 release highlights:
>> http://www.ovirt.org/release/4.3.4/
>> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
>> * Check out the latest project news on the oVirt blog:
>> http://www.ovirt.org/blog/
>>
>> [1] http://www.ovirt.org/release/4.3.4/
>> [2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>>
>> --
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA <https://www.redhat.com/>
>> sbonazzo(a)redhat.com
>> <https://red.ht/sig>
>> <https://redhat.com/summit>
>>
>> _______________________________________________
>> Users mailing list -- users(a)ovirt.org
>> To unsubscribe send an email to users-leave(a)ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/REDV54BH7CI...
>>
> _______________________________________________
> Announce mailing list -- announce(a)ovirt.org
> To unsubscribe send an email to announce-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/announce@ovirt.org/message/ABFECS5E...
>
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RO6PQQ4XQ6K...
>
Re: [ovirt-users] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available
by Nir Soffer
On Thu, May 16, 2019 at 7:42 PM Strahil <hunter86_bg(a)yahoo.com> wrote:
> Hi Sandro,
>
> Thanks for the update.
>
> I have just upgraded to RC1 (using gluster v6 here) and the issue I
> detected in 4.3.3.7 - where gluster Storage domain fails creation - is
> still present.
>
What is this issue? Can you provide a link to the bug/mail about it?
Can you check if the 'dd' command executed during creation has been
> recently modified?
>
> I've received update from Darrell (also gluster v6) , but haven't
> received an update from anyone who is using gluster v5 -> thus I haven't
> opened a bug yet.
>
> Best Regards,
> Strahil Nikolov
> On May 16, 2019 11:21, Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>
> The oVirt Project is pleased to announce the availability of the oVirt
> 4.3.4 First Release Candidate, as of May 16th, 2019.
>
> This update is a release candidate of the fourth in a series of
> stabilization updates to the 4.3 series.
> This is pre-release software. This pre-release should not be used in
> production.
>
> This release is available now on x86_64 architecture for:
> * Red Hat Enterprise Linux 7.6 or later
> * CentOS Linux (or similar) 7.6 or later
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
> for:
> * Red Hat Enterprise Linux 7.6 or later
> * CentOS Linux (or similar) 7.6 or later
> * oVirt Node 4.3 (available for x86_64 only)
>
> Experimental tech preview for x86_64 and s390x architectures for Fedora 28
> is also included.
>
> See the release notes [1] for installation / upgrade instructions and a
> list of new features and bugs fixed.
>
> Notes:
> - oVirt Appliance is already available
> - oVirt Node is already available[2]
>
> Additional Resources:
> * Read more about the oVirt 4.3.4 release highlights:
> http://www.ovirt.org/release/4.3.4/
> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
> * Check out the latest project news on the oVirt blog:
> http://www.ovirt.org/blog/
>
> [1] http://www.ovirt.org/release/4.3.4/
> [2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbonazzo(a)redhat.com
> <https://red.ht/sig>
> <https://redhat.com/summit>
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/REDV54BH7CI...
>
[ANN] oVirt 4.3.4 First Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.4 First Release Candidate, as of May 16th, 2019.
This update is a release candidate of the fourth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]
Additional Resources:
* Read more about the oVirt 4.3.4 release highlights:
http://www.ovirt.org/release/4.3.4/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.4/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
<https://redhat.com/summit>