Re: [ovirt-announce] Re: Re: [ANN] oVirt 4.3.4 First Release Candidate is now available
by Strahil
On May 21, 2019 06:00, Satheesaran Sundaramoorthi <sasundar(a)redhat.com> wrote:
>
>
> On Fri, May 17, 2019 at 1:12 AM Nir Soffer <nsoffer(a)redhat.com> wrote:
>>
>> On Thu, May 16, 2019 at 10:12 PM Darrell Budic <budic(a)onholyground.com> wrote:
>>>
>>> On May 16, 2019, at 1:41 PM, Nir Soffer <nsoffer(a)redhat.com> wrote:
>>>>
>>>>
>>>> On Thu, May 16, 2019 at 8:38 PM Darrell Budic <budic(a)onholyground.com> wrote:
>>>>>
>>>>> I tried adding a new storage domain on my hyperconverged test cluster running oVirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume fine, but it's not able to add the gluster storage domain (as either a managed gluster volume or directly entering values). The created gluster volume mounts and looks fine from the CLI. Errors in VDSM log:
>>>>>
>>>> ...
>>>>>
>>>>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file system doesn't supportdirect IO (fileSD:110)
>>>>> 2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [vdsm.api] FINISH createStorageDomain error=Storage Domain target is unsupported: () from=::ffff:10.100.90.5,44732, flow_id=31d993dd, task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>>>>
>>>>
>>>> The direct I/O check has failed.
>>>>
>>>>
>>>> So something is wrong in the file system.
>>>>
>>>> To confirm, you can try to do:
>>>>
>>>> dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
>>>>
>>>> This will probably fail with:
>>>> dd: failed to open '/path/to/mountpoint/test': Invalid argument
>>>>
>>>> If it succeeds, but oVirt fails to connect to this domain, file a bug and we will investigate.
>>>>
>>>> Nir
>>>
>>>
>>> Yep, it fails as expected. Just to check, it is working on pre-existing volumes, so I poked around at gluster settings for the new volume. It has network.remote-dio=off set on the new volume, but enabled on old volumes. After enabling it, I’m able to run the dd test:
>>>
>>> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
>>> volume set: success
>>> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
>>> 1+0 records in
>>> 1+0 records out
>>> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
>>>
>>> I’m also able to add the storage domain in ovirt now.
>>>
>>> I see network.remote-dio=enable is part of the gluster virt group, so apparently it's not getting set by oVirt during the volume creation/optimize for storage?
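(For reference: the "virt" group mentioned above is an option bundle shipped with gluster, defined in /var/lib/glusterd/groups/virt on the server. A quick sketch of applying and inspecting it - the volume name "test" is a placeholder:)

# apply the virt option group to a volume
gluster volume set test group virt
# then confirm the two options under discussion
gluster volume get test network.remote-dio
gluster volume get test performance.strict-o-direct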
>>
>>
>> I'm not sure who is responsible for changing these settings. oVirt always required directio, and we
>> never had to change anything in gluster.
>>
>> Sahina, maybe gluster changed the defaults?
>>
>> Darrell, please file a bug, probably for RHHI.
>
>
> Hello Darrell & Nir,
>
> Do we have a bug available now for this issue?
> I just need to make sure performance.strict-o-direct=on is enabled on that volume.
>
>
> Satheesaran Sundaramoorthi
>
> Senior Quality Engineer, RHHI-V QE
>
> Red Hat APAC
>
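(Checking and enabling that option on a volume is a one-liner each - a quick sketch, with the volume name as a placeholder:)

gluster volume get data_fast4 performance.strict-o-direct
gluster volume set data_fast4 performance.strict-o-direct on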
Please check https://bugzilla.redhat.com/show_bug.cgi?id=1711054
Best Regards,
Strahil Nikolov
Re: [ANN] oVirt 4.3.4 First Release Candidate is now available
by Strahil
Hi Sandro,
Thanks for the update.
I have just upgraded to RC1 (using gluster v6 here) and the issue I detected in 4.3.3.7 - where gluster storage domain creation fails - is still present.
Can you check whether the 'dd' command executed during creation has been modified recently?
I've received an update from Darrell (also gluster v6), but haven't heard from anyone who is using gluster v5 - thus I haven't opened a bug yet.
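(One way to see exactly which dd command vdsm runs during domain creation - a sketch assuming the default vdsm log location:)

grep '/usr/bin/dd' /var/log/vdsm/vdsm.log | tail -n 5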
Best Regards,
Strahil Nikolov
On May 16, 2019 11:21, Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>
> The oVirt Project is pleased to announce the availability of the oVirt 4.3.4 First Release Candidate, as of May 16th, 2019.
>
> This update is a release candidate of the fourth in a series of stabilization updates to the 4.3 series.
> This is pre-release software. It should not be used in production.
>
> This release is available now on x86_64 architecture for:
> * Red Hat Enterprise Linux 7.6 or later
> * CentOS Linux (or similar) 7.6 or later
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
> * Red Hat Enterprise Linux 7.6 or later
> * CentOS Linux (or similar) 7.6 or later
> * oVirt Node 4.3 (available for x86_64 only)
>
> Experimental tech preview for x86_64 and s390x architectures for Fedora 28 is also included.
>
> See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed.
>
> Notes:
> - oVirt Appliance is already available
> - oVirt Node is already available [2]
>
> Additional Resources:
> * Read more about the oVirt 4.3.4 release highlights: http://www.ovirt.org/release/4.3.4/
> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
> * Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/
>
> [1] http://www.ovirt.org/release/4.3.4/
> [2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
> sbonazzo(a)redhat.com
Virtual office, how?
by andres.b.dev@gmail.com
I'm trying to create different virtual LANs where, for example, I have 2 groups of PCs:
A and B belong to network N1
C and D belong to network N2
N1 and N2 each have their own public IP. For example:
A: Local ip: 192.168.122.100
B: Local ip: 192.168.122.101
C: Local ip: 192.168.122.102
D: Local ip: 192.168.122.103
where A and B share the same public IP, and C and D share the same public IP.
Now, I want A to be able to ssh to B, but not to C or D. The same goes for C, which should be able to reach D via ssh but not A or B.
I'm not sure whether OVS solves this problem, or whether this is possible at all.
Is this possible? How?
Every now and then Backup question
by Markus Schaufler
Hi,
looking for a backup solution comparable to VMware/Veeam - i.e. features like agent-less, incremental, LAN-free backups with dedup and single-item restore.
As I understand it, the main construction site for all commercial backup solutions available for KVM/RHEV was the underlying QEMU and its CBT implementation. The main functions should now be implemented, as there are solutions like vprotect which state that they support incremental backups for RHEV. But as I read at https://www.openvirtualization.pro/agent-less-backup-strategies-for-ovirt... there might still be some drawbacks.
Are there resources or a roadmap covering the state of the backup/recovery process, i.e. what is possible now and what's in progress? I'm sure a big showstopper for open-source virtualization projects is the uncertainty around backup/recovery and especially disaster recovery processes.
Follow up questions to the users with bigger environments:
Whats your current backup and DR strategy?
Does anybody have experience with commercially available backup solutions like Bacula, SEP, Commvault, etc.?
newbie questions about installation using synology NAS
by bmillett@gmail.com
I'm starting fairly fresh.
oVirt engine 4.3.3.7 on a host
2 nodes installed with ovirt-release-host-node-4.3.3.1
Synology DS418 with 5 TB disks
I've configured the master engine and am ready to add the nodes.
My questions have to do with the order of steps.
Do I need to mount /volume1/data/images/rhev from the DS418 on each of the nodes before I add them, or
do I just add the nodes, then define a storage domain using the NFS export from the DS418?
Thanks.
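(Not answered in this thread, but worth double-checking against the oVirt storage docs: oVirt mounts the NFS export on the hosts itself when you attach the storage domain, so there is no need to pre-mount it on the nodes; and the export must be writable by vdsm:kvm (uid/gid 36). A sketch, using the path from the question:)

# on the export served by the DS418
chown -R 36:36 /volume1/data/images/rhev
chmod 0755 /volume1/data/images/rhev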
Re: oVirt upgrade version from 4.2 to 4.3
by Strahil
I would recommend postponing your upgrade if you use gluster (without the API), as creation of virtual disks via the UI on gluster is having issues - only preallocated disks can be created.
Best Regards,
Strahil Nikolov
On May 19, 2019 09:53, Yedidyah Bar David <didi(a)redhat.com> wrote:
>
> On Thu, May 16, 2019 at 3:40 PM <dmarini(a)it.iliad.com> wrote:
> >
> > I cannot find an official upgrade procedure from oVirt 4.2 to 4.3 on this page: https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide.html
> >
> > Can you help me?
>
> As others noted, the above should be sufficient, for general upgrade
> instructions, even though it does require some updates.
>
> You probably want to read also:
>
> https://ovirt.org/release/4.3.0/
>
> as well as all the other relevant pages in:
>
> https://ovirt.org/release/
>
> Best regards,
>
> >
> > Thanks
>
>
>
> --
> Didi
Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available
by Strahil
OK,
Can we summarize it:
1. VDO must have 'emulate512=true'
2. 'network.remote-dio' should be off?
As per this: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/h...
We should have these:
quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=on
quorum-type=auto
server-quorum-type=server
I'm a little bit confused here.
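(For anyone wanting to apply the documented set wholesale while this gets sorted out - a sketch, with the volume name as a placeholder; note that remote-dio and strict-o-direct are exactly the knobs under dispute here:)

for opt in performance.quick-read=off performance.read-ahead=off performance.io-cache=off performance.stat-prefetch=off cluster.eager-lock=enable network.remote-dio=on cluster.quorum-type=auto cluster.server-quorum-type=server; do
    gluster volume set data_fast4 "${opt%%=*}" "${opt#*=}"
done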
Best Regards,
Strahil Nikolov
On May 19, 2019 07:44, Sahina Bose <sabose(a)redhat.com> wrote:
>
>
>
> On Sun, 19 May 2019 at 12:21 AM, Nir Soffer <nsoffer(a)redhat.com> wrote:
>>
>> On Fri, May 17, 2019 at 7:54 AM Gobinda Das <godas(a)redhat.com> wrote:
>>>
>>> From the RHHI side, we set the volume options below by default:
>>>
>>> { group: 'virt',
>>> storage.owner-uid: '36',
>>> storage.owner-gid: '36',
>>> network.ping-timeout: '30',
>>> performance.strict-o-direct: 'on',
>>> network.remote-dio: 'off'
>>
>>
>> According to the user reports, this configuration is not compatible with oVirt.
>>
>> Was this tested?
>
>
> Yes, this is set by default in all test configurations. We’re checking on the bug, but the error likely occurs when the underlying device does not support 512b writes.
> With network.remote-dio off, gluster will ensure O_DIRECT writes.
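(The 512b point can be tested directly on the brick's filesystem - a sketch, path is a placeholder; on a device that only accepts 4K-aligned direct I/O, such as VDO without emulate512, this should fail with "Invalid argument":)

dd if=/dev/zero of=/gluster_bricks/testfile bs=512 count=1 oflag=direct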
>>
>>
>>> }
>>>
>>>
>>> On Fri, May 17, 2019 at 2:31 AM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>>>>
>>>> Ok, setting 'gluster volume set data_fast4 network.remote-dio on' allowed me to create the storage domain without any issues.
>>>> I set it on all 4 new gluster volumes and the storage domains were successfully created.
>>>>
>>>> I have created bug for that:
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1711060
>>>>
>>>> If someone else already opened - please ping me to mark this one as duplicate.
>>>>
>>>> Best Regards,
>>>> Strahil Nikolov
>>>>
>>>>
>>>> On Thursday, May 16, 2019, 22:27:01 GMT+3, Darrell Budic <budic(a)onholyground.com> wrote:
>>>>
>>>>
>>>> On May 16, 2019, at 1:41 PM, Nir Soffer <nsoffer(a)redhat.com> wrote:
>>>>
>>>>>
>>>>> On Thu, May 16, 2019 at 8:38 PM Darrell Budic <budic(a)onholyground.com> wrote:
>>>>>>
>>>>>> I tried adding a new storage domain on my hyperconverged test cluster running oVirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume fine, but it's not able to add the gluster storage domain (as either a managed gluster volume or directly entering values). The created gluster volume mounts and looks fine from the CLI. Errors in VDSM log:
>>>>>>
>>>>> ...
>>>>>>
>>>>>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file system doesn't supportdirect IO (fileSD:110)
>>>>>> 2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [vdsm.api] FINISH createStorageDomain error=Storage Domain target is unsupported: () from=::ffff:10.100.90.5,44732, flow_id=31d993dd, task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>>>>>
>>>>>
>>>>> The direct I/O check has failed.
>>>>>
>>>>>
>>>>> So something is wrong in the file system.
>>>>>
>>>>> To confirm, you can try to do:
>>>>>
>>>>> dd if=/dev/zero of=/path/to/mountpoint/test bs=4096 count=1 oflag=direct
>>>>>
>>>>> This will probably fail with:
>>>>> dd: failed to open '/path/to/mountpoint/test': Invalid argument
>>>>>
>>>>> If it succeeds, but oVirt fails to connect to this domain, file a bug and we will investigate.
>>>>>
>>>>> Nir
>>>>
>>>>
>>>> Yep, it fails as expected. Just to check, it is working on pre-existing volumes, so I poked around at gluster settings for the new volume. It has network.remote-dio=off set on the new volume, but enabled on old volumes. After enabling it, I’m able to run the dd test:
>>>>
>>>> [root@boneyard mnt]# gluster vol set test network.remote-dio enable
>>>> volume set: success
>>>> [root@boneyard mnt]# dd if=/dev/zero of=testfile bs=4096 count=1 oflag=direct
>>>> 1+0 records in
>>>> 1+0 records out
>>>> 4096 bytes (4.1 kB) copied, 0.0018285 s, 2.2 MB/s
>>>>
>>>> I’m also able to add the storage domain in ovirt now.
>>>>
>>>> I see network.remote-dio=enable is part of the gluster virt group, so apparently it's not getting set by oVirt during the volume creation/optimize for storage?
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>>
>>>
>>> Thanks,
>>> Gobinda
Re: [ovirt-announce] Re: [ANN] oVirt 4.3.4 First Release Candidate is now available
by Strahil
This is my previous e-mail:
On May 16, 2019 15:23, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
It seems that the issue is within the 'dd' command as it stays waiting for input:
[root@ovirt1 mnt]# /usr/bin/dd iflag=fullblock of=file oflag=direct,seek_bytes seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync
^C
0+0 records in
0+0 records out
0 bytes (0 B) copied, 19.3282 s, 0.0 kB/s
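(Worth noting: with no if= operand, dd reads from standard input, so run interactively it simply blocks until it gets data - vdsm presumably feeds it over a pipe. Redirecting stdin makes the same command complete, or fail with a real error, instead of hanging - a sketch, assuming the target file already exists because of nocreat:)

/usr/bin/dd if=/dev/zero iflag=fullblock of=file oflag=direct,seek_bytes seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync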
Changing the dd command works and shows that the gluster is working:
[root@ovirt1 mnt]# cat /dev/urandom | /usr/bin/dd of=file oflag=direct,seek_bytes seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync
0+1 records in
0+1 records out
131072 bytes (131 kB) copied, 0.00705081 s, 18.6 MB/s
Best Regards,
Strahil Nikolov
----- Forwarded message -----
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
To: Users <users(a)ovirt.org>
Sent: Thursday, May 16, 2019, 5:56:44 GMT-4
Subject: ovirt 4.3.3.7 cannot create a gluster storage domain
Hey guys,
I have recently updated (yesterday) my platform to the latest available (v4.3.3.7) and upgraded to gluster v6.1. The setup is a hyperconverged 3-node cluster with ovirt1/gluster1 & ovirt2/gluster2 as replica nodes (glusterX is for gluster communication) while ovirt3 is the arbiter.
Today I have tried to add new storage domains but they fail with the following:
2019-05-16 10:15:21,296+0300 INFO (jsonrpc/2) [vdsm.api] FINISH createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/glusterSD/gluster1:_data__fast2/591d9b61-5c7d-4388-a6b7-ab03181dff8a/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.0138582 s, 0.0 kB/s\n" from=::ffff:192.168.1.2,43864, flow_id=4a54578a, task_id=d2535d0f-c7f7-4f31-a10f-704923ce1790 (api:52)
2019-05-16 10:15:21,296+0300 ERROR (jsonrpc/2) [storage.TaskManager.Task] (Task='d2535d0f-c7f7-4f31-a10f-704923ce1790') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File "<string>", line 2, in createStorageDomain
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2614, in createStorageDomain
storageType, domVersion, block_size, alignment)
File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 106, in create
block_size)
File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 466, in _prepareMetadata
cls.format_external_leases(sdUUID, xleases_path)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 1255, in format_external_leases
xlease.format_index(lockspace, backend)
File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 681, in format_index
index.dump(file)
File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 843, in dump
file.pwrite(INDEX_BASE, self._buf)
File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1076, in pwr
It seems that the 'dd' is having trouble checking the new gluster volume.
The output is from RC1, but as you can see, Darrell's situation may be the same.
On May 16, 2019 21:41, Nir Soffer <nsoffer(a)redhat.com> wrote:
>
> On Thu, May 16, 2019 at 8:38 PM Darrell Budic <budic(a)onholyground.com> wrote:
>>
>> I tried adding a new storage domain on my hyperconverged test cluster running oVirt 4.3.3.7 and gluster 6.1. I was able to create the new gluster volume fine, but it's not able to add the gluster storage domain (as either a managed gluster volume or directly entering values). The created gluster volume mounts and looks fine from the CLI. Errors in VDSM log:
>>
> ...
>>
>> 2019-05-16 10:25:09,584-0500 ERROR (jsonrpc/5) [storage.fileSD] Underlying file system doesn't supportdirect IO (fileSD:110)
>> 2019-05-16 10:25:09,584-0500 INFO (jsonrpc/5) [vdsm.api] FINISH createStorageDomain error=Storage Domain target is unsupported: () from=::ffff:10.100.90.5,44732, flow_id=31d993dd, task_id=ecea28f3-60d4-476d-9ba8-b753b7c9940d (api:52)
>
>
> The direct I/O check has failed.
>
> This is the code doing the check:
>
> 98 def validateFileSystemFeatures(sdUUID, mountDir):
> 99     try:
> 100         # Don't unlink this file, we don't have the cluster lock yet as it
> 101         # requires direct IO which is what we are trying to test for. This
> 102         # means that unlinking the file might cause a race. Since we don't
> 103         # care what the content of the file is, just that we managed to
> 104         # open it O_DIRECT.
> 105         testFilePath = os.path.join(mountDir, "__DIRECT_IO_TEST__")
> 106         oop.getProcessPool(sdUUID).directTouch(testFilePath)
> 107     except OSError as e:
> 108         if e.errno == errno.EINVAL:
> 109             log = logging.getLogger("storage.fileSD")
> 110             log.error("Underlying file system doesn't support"