oVirt 4.2.1 rc1 and upload iso to data domain test

Hello,
I see in the release notes this:

BZ 1530730 [downstream clone - 4.2.1] [RFE] Allow uploading ISO images to data domains and using them in VMs
It is now possible to upload an ISO file to a data domain and attach it to a VM as a CDROM device. In order to do so the user has to upload an ISO file via the UI (which will recognize the ISO by its header and will upload it as ISO) or via the APIs, in which case the request should define the disk container "content_type" property as "iso" before the upload. Once the ISO exists on an active storage domain in the data center it will be possible to attach it to a VM as a CDROM device either through the "Edit VM" dialog or through the APIs (see example in comment #27).

So I'm trying it on an HCI Gluster environment of mine for testing. I get this in image-proxy.log:

(Thread-39 ) INFO 2018-01-14 18:35:38,066 web:95:web:(log_start) START [192.168.150.101] PUT /images/0d852f7a-b19e-447d-82ad-966755070701
(Thread-39 ) WARNING 2018-01-14 18:35:38,067 web:112:web:(log_error) ERROR [192.168.150.101] PUT /images/0d852f7a-b19e-447d-82ad-966755070701: [401] Not authorized (0.00s)
(Thread-40 ) INFO 2018-01-14 18:35:38,106 web:95:web:(log_start) START [192.168.150.101] PUT /images/0d852f7a-b19e-447d-82ad-966755070701
(Thread-40 ) WARNING 2018-01-14 18:35:38,106 web:112:web:(log_error) ERROR [192.168.150.101] PUT /images/0d852f7a-b19e-447d-82ad-966755070701: [401] Not authorized (0.00s)

Does this mean the functionality is not completely ready yet, or what? Has anyone already tried it on iSCSI and/or FC?

Thanks,
Gianluca
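(For reference, the API flow described in the BZ text above would look roughly like the sketch below, using the ovirt-sdk4 Python bindings. This is only a minimal outline, not taken from this thread: the engine URL, credentials, storage domain name and ISO path are placeholders, and attribute names such as content_type / DiskContentType.ISO / proxy_url are my assumption for the 4.2 SDK and should be checked against the installed version.)

import os
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

ISO_PATH = '/tmp/some.iso'  # placeholder

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Create the disk container on the data domain with content_type "iso",
# as the BZ text requires for API uploads.
disks_service = connection.system_service().disks_service()
disk = disks_service.add(
    types.Disk(
        name='my-iso',
        content_type=types.DiskContentType.ISO,  # assumed 4.2 attribute
        format=types.DiskFormat.RAW,
        provisioned_size=os.path.getsize(ISO_PATH),
        storage_domains=[types.StorageDomain(name='data')],  # placeholder SD name
    )
)

# Wait for the disk to unlock before starting the transfer.
disk_service = disks_service.disk_service(disk.id)
while disk_service.get().status != types.DiskStatus.OK:
    time.sleep(1)

# Start an image transfer; once it leaves the INITIALIZING phase, the
# proxy_url it reports is the /images/<ticket> endpoint that shows up
# in image-proxy.log above, and the client PUTs the ISO bytes there.
transfers_service = connection.system_service().image_transfers_service()
transfer = transfers_service.add(
    types.ImageTransfer(image=types.Image(id=disk.id))
)
transfer_service = transfers_service.image_transfer_service(transfer.id)
while transfer.phase == types.ImageTransferPhase.INITIALIZING:
    time.sleep(1)
    transfer = transfer_service.get()

# ... HTTP PUT of the ISO contents to transfer.proxy_url goes here ...

transfer_service.finalize()
connection.close()

Once the ISO disk reaches OK state on an active data domain, it should then be selectable as a CDROM in the "Edit VM" dialog, as the release note describes.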

Hi,
I will look into it. Is it also not working for non-ISO images?
Thanks,
Fred

On Tue, Jan 16, 2018 at 6:48 AM, Fred Rolland <frolland@redhat.com> wrote:
Hi, I will look into it.
Is it also not working for non-ISO images?
Thanks, Fred
Hello,
I get the same with a disk. I have tried with a raw disk of size 1 GB. When I upload I can set the size (why?), while with the ISO image I could not. The disk is recognized as "data" in the upload window (the ISO file was correctly recognized as "iso"), but in image-proxy.log I get:

(Thread-42 ) INFO 2018-01-16 14:15:50,530 web:95:web:(log_start) START [192.168.150.101] GET /info/
(Thread-42 ) INFO 2018-01-16 14:15:50,532 web:102:web:(log_finish) FINISH [192.168.150.101] GET /info/: [200] 20 (0.00s)
(Thread-43 ) INFO 2018-01-16 14:16:12,659 web:95:web:(log_start) START [192.168.150.105] PUT /tickets/
(Thread-43 ) INFO 2018-01-16 14:16:12,661 auth2:170:auth2:(add_signed_ticket) Adding new ticket: <Ticket id=u'81569ab3-1b92-4744-8a58-f1948afa20b7', url=u'https://ovirt01.localdomain.local:54322' at 0x7f048c03c610>
(Thread-43 ) INFO 2018-01-16 14:16:12,662 web:102:web:(log_finish) FINISH [192.168.150.105] PUT /tickets/: [200] 0 (0.00s)
(Thread-44 ) INFO 2018-01-16 14:16:13,800 web:95:web:(log_start) START [192.168.150.101] OPTIONS /images/81569ab3-1b92-4744-8a58-f1948afa20b7
(Thread-44 ) INFO 2018-01-16 14:16:13,814 web:102:web:(log_finish) FINISH [192.168.150.101] OPTIONS /images/81569ab3-1b92-4744-8a58-f1948afa20b7: [204] 0 (0.02s)
(Thread-45 ) INFO 2018-01-16 14:16:13,876 web:95:web:(log_start) START [192.168.150.101] PUT /images/81569ab3-1b92-4744-8a58-f1948afa20b7
(Thread-45 ) WARNING 2018-01-16 14:16:13,877 web:112:web:(log_error) ERROR [192.168.150.101] PUT /images/81569ab3-1b92-4744-8a58-f1948afa20b7: [401] Not authorized (0.00s)

Gianluca

BTW: I don't know if it is in some way related to the upload problems, but in my engine.log I see these kinds of messages every 5 seconds or so:

2018-01-16 14:27:38,428+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler3) [61e72c38] START, GlusterServersListVDSCommand(HostName = ovirt02.localdomain.local, VdsIdVDSCommandParametersBase:{hostId='cb9cc605-fceb-4689-ad35-43ba883f4556'}), log id: 65e60794
2018-01-16 14:27:38,858+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler3) [61e72c38] FINISH, GlusterServersListVDSCommand, return: [192.168.150.103/24:CONNECTED, ovirt03.localdomain.local:CONNECTED, ovirt01.localdomain.local:CONNECTED], log id: 65e60794
2018-01-16 14:27:38,867+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler3) [61e72c38] START, GlusterVolumesListVDSCommand(HostName = ovirt02.localdomain.local, GlusterVolumesListVDSParameters:{hostId='cb9cc605-fceb-4689-ad35-43ba883f4556'}), log id: 6e01993d
2018-01-16 14:27:39,221+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler3) [61e72c38] Could not associate brick 'ovirt02.localdomain.local:/gluster/brick1/engine' of volume '6e2bd1d7-9c8e-4c54-9d85-f36e1b871771' with correct network as no gluster network found in cluster '582badbe-0080-0197-013b-0000000001c6'
2018-01-16 14:27:39,231+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler3) [61e72c38] Could not associate brick 'ovirt02.localdomain.local:/gluster/brick2/data' of volume '2238c6db-48c5-4071-8929-879cedcf39bf' with correct network as no gluster network found in cluster '582badbe-0080-0197-013b-0000000001c6'
2018-01-16 14:27:39,253+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler3) [61e72c38] Could not associate brick 'ovirt02.localdomain.local:/gluster/brick4/iso' of volume '28f99f11-3529-43a1-895c-abf1c66884ab' with correct network as no gluster network found in cluster '582badbe-0080-0197-013b-0000000001c6'
2018-01-16 14:27:39,255+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler3) [61e72c38] FINISH, GlusterVolumesListVDSCommand, return: {2238c6db-48c5-4071-8929-879cedcf39bf=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@aa6e9a1e, df0ccd1d-5de6-42b8-a163-ec65c3698da3=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@31c29088, 6e2bd1d7-9c8e-4c54-9d85-f36e1b871771=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@ae82860f, 28f99f11-3529-43a1-895c-abf1c66884ab=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@1b6a11e5}, log id: 6e01993d

Actually the gluster network seems OK.
E.g.:

[root@ovirt01 glusterfs]# gluster volume info data

Volume Name: data
Type: Replicate
Volume ID: 2238c6db-48c5-4071-8929-879cedcf39bf
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt01.localdomain.local:/gluster/brick2/data
Brick2: ovirt02.localdomain.local:/gluster/brick2/data
Brick3: ovirt03.localdomain.local:/gluster/brick2/data (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
transport.address-family: inet
[root@ovirt01 glusterfs]#

[root@ovirt01 glusterfs]# gluster volume heal data info
Brick ovirt01.localdomain.local:/gluster/brick2/data
Status: Connected
Number of entries: 0

Brick ovirt02.localdomain.local:/gluster/brick2/data
Status: Connected
Number of entries: 0

Brick ovirt03.localdomain.local:/gluster/brick2/data
Status: Connected
Number of entries: 0
[root@ovirt01 glusterfs]#

Gianluca

At host side:

[root@ovirt01 glusterfs]# systemctl status ovirt-imageio-daemon -l
● ovirt-imageio-daemon.service - oVirt ImageIO Daemon
   Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2018-01-13 06:49:20 CET; 3 days ago
 Main PID: 1004 (ovirt-imageio-d)
   CGroup: /system.slice/ovirt-imageio-daemon.service
           └─1004 /usr/bin/python /usr/bin/ovirt-imageio-daemon

Jan 13 06:49:19 ovirt01.localdomain.local systemd[1]: Starting oVirt ImageIO Daemon...
Jan 13 06:49:20 ovirt01.localdomain.local systemd[1]: Started oVirt ImageIO Daemon.
[root@ovirt01 glusterfs]#

and in /var/log/ovirt-imageio-daemon/daemon.log:

2018-01-16 14:16:12,625 INFO (ticket.server) [web] START [/] PUT /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7
2018-01-16 14:16:12,626 INFO (ticket.server) [tickets] Adding ticket <Ticket active=False expires=4580999 filename=None ops=[u'write'] size=1073741824 transferred=0 url=u'file:///rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/93b51148-5a79-4c1a-a173-9510f3807822/9dd4cdcb-5785-496e-8b46-7b25a3e97791' uuid=u'81569ab3-1b92-4744-8a58-f1948afa20b7' at 0x24cde10>
2018-01-16 14:16:12,628 INFO (ticket.server) [web] FINISH [/] PUT /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7: [200] 0 (0.00s)
2018-01-16 14:16:13,800 INFO (ticket.server) [web] START [/] GET /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7
2018-01-16 14:16:13,800 INFO (ticket.server) [tickets] Retrieving ticket 81569ab3-1b92-4744-8a58-f1948afa20b7
2018-01-16 14:16:13,802 INFO (ticket.server) [web] FINISH [/] GET /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7: [200] 355 (0.00s)
2018-01-16 14:16:15,846 INFO (ticket.server) [web] START [/] GET /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7
2018-01-16 14:16:15,846 INFO (ticket.server) [tickets] Retrieving ticket 81569ab3-1b92-4744-8a58-f1948afa20b7
2018-01-16 14:16:15,847 INFO (ticket.server) [web] FINISH [/] GET /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7: [200] 355 (0.00s)
2018-01-16 14:16:19,865 INFO (ticket.server) [web] START [/] GET /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7
2018-01-16 14:16:19,865 INFO (ticket.server) [tickets] Retrieving ticket 81569ab3-1b92-4744-8a58-f1948afa20b7
2018-01-16 14:16:19,866 INFO (ticket.server) [web] FINISH [/] GET /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7: [200] 355 (0.00s)

and it seems the file is there???

[root@ovirt01 ovirt-imageio-daemon]# ll /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/93b51148-5a79-4c1a-a173-9510f3807822/9dd4cdcb-5785-496e-8b46-7b25a3e97791
-rw-rw----. 1 vdsm kvm 1073741824 Jan 16 14:15 /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/93b51148-5a79-4c1a-a173-9510f3807822/9dd4cdcb-5785-496e-8b46-7b25a3e97791
[root@ovirt01 ovirt-imageio-daemon]#

[root@ovirt01 ovirt-imageio-daemon]# qemu-img info /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/93b51148-5a79-4c1a-a173-9510f3807822/9dd4cdcb-5785-496e-8b46-7b25a3e97791
image: /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/93b51148-5a79-4c1a-a173-9510f3807822/9dd4cdcb-5785-496e-8b46-7b25a3e97791
file format: raw
virtual size: 1.0G (1073741824 bytes)
disk size: 0
[root@ovirt01 ovirt-imageio-daemon]#

but size zero:

[root@ovirt01 ovirt-imageio-daemon]# ls -ls /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/93b51148-5a79-4c1a-a173-9510f3807822/9dd4cdcb-5785-496e-8b46-7b25a3e97791
0 -rw-rw----. 1 vdsm kvm 1073741824 Jan 16 14:15 /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/93b51148-5a79-4c1a-a173-9510f3807822/9dd4cdcb-5785-496e-8b46-7b25a3e97791
[root@ovirt01 ovirt-imageio-daemon]#

And the same if I go back to when I tried to upload the ISO:

2018-01-14 18:23:24,867 INFO (ticket.server) [web] START [/] PUT /tickets/b89f35d3-c09b-4dcb-bc13-abf18feb8cd0
2018-01-14 18:23:24,868 INFO (ticket.server) [tickets] Adding ticket <Ticket active=False expires=4423031 filename=None ops=[u'write'] size=830472192 transferred=0 url=u'file:///rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/74c1ff00-efbc-494f-ac5c-b3c84ea02ae4/cb5d0551-e8ff-4a93-b107-a6138770d894' uuid=u'b89f35d3-c09b-4dcb-bc13-abf18feb8cd0' at 0x24cde50>
2018-01-14 18:23:24,868 INFO (ticket.server) [web] FINISH [/] PUT /tickets/b89f35d3-c09b-4dcb-bc13-abf18feb8cd0: [200] 0 (0.00s)

and

[root@ovirt01 ovirt-imageio-daemon]# ll /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/74c1ff00-efbc-494f-ac5c-b3c84ea02ae4/cb5d0551-e8ff-4a93-b107-a6138770d894
-rw-rw----. 1 vdsm kvm 830472192 Jan 14 18:23 /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/74c1ff00-efbc-494f-ac5c-b3c84ea02ae4/cb5d0551-e8ff-4a93-b107-a6138770d894
[root@ovirt01 ovirt-imageio-daemon]#

ls -s gives 0, and in fact if I try to mount the supposed ISO file as a loop device I get the error about "wrong fs type"...

[root@ovirt01 ovirt-imageio-daemon]# ls -ls /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/74c1ff00-efbc-494f-ac5c-b3c84ea02ae4/cb5d0551-e8ff-4a93-b107-a6138770d894
0 -rw-rw----. 1 vdsm kvm 830472192 Jan 14 18:23 /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/74c1ff00-efbc-494f-ac5c-b3c84ea02ae4/cb5d0551-e8ff-4a93-b107-a6138770d894
[root@ovirt01 ovirt-imageio-daemon]#

Hope it helps debugging,
Gianluca
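(A side note on the listings above: an apparent size of 1073741824 bytes with 0 blocks reported by ls -ls is the usual sparse-file pattern, i.e. the volume was created with the expected size but no data was ever written into it, which is consistent with the PUT being rejected with 401. A tiny sketch showing both numbers, with a placeholder path:)

import os

# Placeholder; substitute one of the image paths from the listings above.
path = '/rhev/data-center/mnt/glusterSD/.../9dd4cdcb-5785-496e-8b46-7b25a3e97791'

st = os.stat(path)
print("apparent size (what ll shows):", st.st_size)
print("allocated bytes (what ls -ls shows):", st.st_blocks * 512)  # st_blocks is in 512-byte units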

Hi,
I tested uploading an ISO to Gluster on my setup and it worked fine. The size is OK:

# ls -lsh /rhev/data-center/mnt/glusterSD/*********:_fred1/f80e6d34-7c4c-4c0b-9451-dd140812c4ee/images/fa2209bc-d34c-4f3e-a425-9a085d72c3ba/0deef094-05a3-49c4-a347-ca423bd57a87
1.6G -rw-rw----. 1 vdsm kvm 1.6G Jan 17 10:06 /rhev/data-center/mnt/glusterSD/********:_fred1/f80e6d34-7c4c-4c0b-9451-dd140812c4ee/images/fa2209bc-d34c-4f3e-a425-9a085d72c3ba/0deef094-05a3-49c4-a347-ca423bd57a87

Are you seeing any other issues with your Gluster setup? Creating regular disks, copy/move disks to this SD?

Thanks,
Fred

On Wed, Jan 17, 2018 at 9:28 AM, Fred Rolland <frolland@redhat.com> wrote:
Hi,
I tested uploading an ISO to Gluster on my setup and it worked fine.
Ok.
Are you seeing any other issues with your Gluster setup? Creating regular disks, copy/move disks to this SD?
Thanks, Fred
Nothing particular. Indeed it is a nested environment and not with great performance, but it doesn't have particular problems. At this moment I have 3 VMs defined on this SD and one of them is powered on: I just created another 3 GB disk on it and then formatted a file system without problems.

Then I try to copy from the engine VM (that is on its engine gluster domain) to this VM (that is on the data gluster domain).

First I create a 2 GB file on the hosted-engine VM, so the only I/O is on the engine gluster volume:

[root@ovengine ~]# time dd if=/dev/zero bs=1024k count=2048 of=/testfile
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 98.1768 s, 21.9 MB/s

real    1m38.188s
user    0m0.023s
sys     0m14.687s
[root@ovengine ~]#

Then I copy this file from the engine VM to the CentOS 6 VM with its disk on the data gluster volume:

[root@ovengine ~]# time dd if=/testfile bs=1024k count=2048 | gzip | ssh 192.168.150.111 "gunzip | dd of=/testfile bs=1024k"
root@192.168.150.111's password:
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 41.9347 s, 51.2 MB/s
0+62205 records in
0+62205 records out
2147483648 bytes (2.1 GB) copied, 39.138 s, 54.9 MB/s

real    0m42.634s
user    0m29.727s
sys     0m5.421s
[root@ovengine ~]#

[root@centos6 ~]# ll /testfile
-rw-r--r--. 1 root root 2147483648 Jan 17 11:47 /testfile
[root@centos6 ~]#

And right after the end of the command, also from the gluster point of view all seems consistent (I also see replication go at about 50 MB/s):

[root@ovirt01 ovirt-imageio-daemon]# gluster volume heal data info
Brick ovirt01.localdomain.local:/gluster/brick2/data
Status: Connected
Number of entries: 0

Brick ovirt02.localdomain.local:/gluster/brick2/data
Status: Connected
Number of entries: 0

Brick ovirt03.localdomain.local:/gluster/brick2/data
Status: Connected
Number of entries: 0

[root@ovirt01 ovirt-imageio-daemon]# gluster volume heal engine info
Brick ovirt01.localdomain.local:/gluster/brick1/engine
Status: Connected
Number of entries: 0

Brick ovirt02.localdomain.local:/gluster/brick1/engine
Status: Connected
Number of entries: 0

Brick ovirt03.localdomain.local:/gluster/brick1/engine
Status: Connected
Number of entries: 0

[root@ovirt01 ovirt-imageio-daemon]#

Could it be in any way related to the fact that this environment was created in 4.0.5 and then gradually updated to 4.2.1rc1? The detailed history:

Nov 2016 installed 4.0.5 with ansible and gdeploy
Jun 2017 upgrade to 4.1.2
Jul 2017 upgrade to 4.1.3
Nov 2017 upgrade to 4.1.7
Dec 2017 upgrade to 4.2.0
Jan 2018 upgrade to 4.2.1rc1

I had a problem with enabling libgfapi due to the connection to gluster volumes being of type host:volume instead of the now-default node:/volume, see here:
https://bugzilla.redhat.com/show_bug.cgi?id=1530261

Just a guess...

Thanks,
Gianluca