[ovirt-users] oVirt 4.2.1 rc1 and upload iso to data domain test
Fred Rolland
frolland at redhat.com
Wed Jan 17 08:28:53 UTC 2018
Hi,
I tested uploading an ISO to Gluster on my setup and it worked fine.
The size is OK:
# ls -lsh /rhev/data-center/mnt/glusterSD/*********:_fred1/f80e6d34-7c4c-4c0b-9451-dd140812c4ee/images/fa2209bc-d34c-4f3e-a425-9a085d72c3ba/0deef094-05a3-49c4-a347-ca423bd57a87
1.6G -rw-rw----. 1 vdsm kvm 1.6G Jan 17 10:06 /rhev/data-center/mnt/glusterSD/********:_fred1/f80e6d34-7c4c-4c0b-9451-dd140812c4ee/images/fa2209bc-d34c-4f3e-a425-9a085d72c3ba/0deef094-05a3-49c4-a347-ca423bd57a87
Are you seeing any other issues with your Gluster setup, e.g. creating regular disks, or copying/moving disks to this SD?
Thanks,
Fred
On Tue, Jan 16, 2018 at 4:25 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com>
wrote:
>
>
> On Tue, Jan 16, 2018 at 2:30 PM, Gianluca Cecchi <
> gianluca.cecchi at gmail.com> wrote:
>
>>
>>
>> On Tue, Jan 16, 2018 at 2:21 PM, Gianluca Cecchi <
>> gianluca.cecchi at gmail.com> wrote:
>>
>>> On Tue, Jan 16, 2018 at 6:48 AM, Fred Rolland <frolland at redhat.com>
>>> wrote:
>>>
>>>> Hi,
>>>> I will look into it.
>>>>
>>>> Is it also not working for non-ISO images?
>>>>
>>>> Thanks,
>>>> Fred
>>>>
>>>>
>>> Hello,
>>> I get the same with a disk.
>>> I tried with a raw disk of 1 GB.
>>> When uploading I can set the size (why?), while with the ISO image I could
>>> not.
>>> The disk is recognized as "data" in the upload window (the ISO file was
>>> correctly recognized as "iso"), but in image-proxy.log I get
>>>
>>> (Thread-42 ) INFO 2018-01-16 14:15:50,530 web:95:web:(log_start) START [192.168.150.101] GET /info/
>>> (Thread-42 ) INFO 2018-01-16 14:15:50,532 web:102:web:(log_finish) FINISH [192.168.150.101] GET /info/: [200] 20 (0.00s)
>>> (Thread-43 ) INFO 2018-01-16 14:16:12,659 web:95:web:(log_start) START [192.168.150.105] PUT /tickets/
>>> (Thread-43 ) INFO 2018-01-16 14:16:12,661 auth2:170:auth2:(add_signed_ticket) Adding new ticket: <Ticket id=u'81569ab3-1b92-4744-8a58-f1948afa20b7', url=u'https://ovirt01.localdomain.local:54322' at 0x7f048c03c610>
>>> (Thread-43 ) INFO 2018-01-16 14:16:12,662 web:102:web:(log_finish) FINISH [192.168.150.105] PUT /tickets/: [200] 0 (0.00s)
>>> (Thread-44 ) INFO 2018-01-16 14:16:13,800 web:95:web:(log_start) START [192.168.150.101] OPTIONS /images/81569ab3-1b92-4744-8a58-f1948afa20b7
>>> (Thread-44 ) INFO 2018-01-16 14:16:13,814 web:102:web:(log_finish) FINISH [192.168.150.101] OPTIONS /images/81569ab3-1b92-4744-8a58-f1948afa20b7: [204] 0 (0.02s)
>>> (Thread-45 ) INFO 2018-01-16 14:16:13,876 web:95:web:(log_start) START [192.168.150.101] PUT /images/81569ab3-1b92-4744-8a58-f1948afa20b7
>>> (Thread-45 ) WARNING 2018-01-16 14:16:13,877 web:112:web:(log_error) ERROR [192.168.150.101] PUT /images/81569ab3-1b92-4744-8a58-f1948afa20b7: [401] Not authorized (0.00s)
>>>
>>> Gianluca
>>>
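
A hedged sketch for narrowing down a 401 like the one above, assuming the oVirt 4.2 defaults (imageio proxy on the engine host, port 54323; daemon on the hypervisor, port 54322 as in the ticket URL); the engine hostname below is a placeholder, and the proxy's /info/ endpoint is the same one that returns 200 in the log:

# From the machine running the browser, check that both TLS endpoints answer:
curl -k https://engine.localdomain.local:54323/info/
curl -kv https://ovirt01.localdomain.local:54322/ 2>&1 | head
# If /info/ answers but PUT /images/<id> still comes back 401, the signed
# ticket in the browser's request is likely what the proxy is rejecting,
# since the engine's own PUT /tickets/ succeeded with a 200 above.
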
>>
>>
>> BTW:
>> I don't know if it is in some way related to the upload problems, but in
>> my engine.log I see this kind of message every 5 seconds or so:
>>
>> 2018-01-16 14:27:38,428+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler3) [61e72c38] START, GlusterServersListVDSCommand(HostName = ovirt02.localdomain.local, VdsIdVDSCommandParametersBase:{hostId='cb9cc605-fceb-4689-ad35-43ba883f4556'}), log id: 65e60794
>> 2018-01-16 14:27:38,858+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler3) [61e72c38] FINISH, GlusterServersListVDSCommand, return: [192.168.150.103/24:CONNECTED, ovirt03.localdomain.local:CONNECTED, ovirt01.localdomain.local:CONNECTED], log id: 65e60794
>> 2018-01-16 14:27:38,867+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler3) [61e72c38] START, GlusterVolumesListVDSCommand(HostName = ovirt02.localdomain.local, GlusterVolumesListVDSParameters:{hostId='cb9cc605-fceb-4689-ad35-43ba883f4556'}), log id: 6e01993d
>> 2018-01-16 14:27:39,221+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler3) [61e72c38] Could not associate brick 'ovirt02.localdomain.local:/gluster/brick1/engine' of volume '6e2bd1d7-9c8e-4c54-9d85-f36e1b871771' with correct network as no gluster network found in cluster '582badbe-0080-0197-013b-0000000001c6'
>> 2018-01-16 14:27:39,231+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler3) [61e72c38] Could not associate brick 'ovirt02.localdomain.local:/gluster/brick2/data' of volume '2238c6db-48c5-4071-8929-879cedcf39bf' with correct network as no gluster network found in cluster '582badbe-0080-0197-013b-0000000001c6'
>> 2018-01-16 14:27:39,253+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler3) [61e72c38] Could not associate brick 'ovirt02.localdomain.local:/gluster/brick4/iso' of volume '28f99f11-3529-43a1-895c-abf1c66884ab' with correct network as no gluster network found in cluster '582badbe-0080-0197-013b-0000000001c6'
>> 2018-01-16 14:27:39,255+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler3) [61e72c38] FINISH, GlusterVolumesListVDSCommand, return: {2238c6db-48c5-4071-8929-879cedcf39bf=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@aa6e9a1e, df0ccd1d-5de6-42b8-a163-ec65c3698da3=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@31c29088, 6e2bd1d7-9c8e-4c54-9d85-f36e1b871771=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@ae82860f, 28f99f11-3529-43a1-895c-abf1c66884ab=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@1b6a11e5}, log id: 6e01993d
>>
>> Actually the gluster network seems OK.
>>
>> E.g.:
>>
>> [root@ovirt01 glusterfs]# gluster volume info data
>>
>> Volume Name: data
>> Type: Replicate
>> Volume ID: 2238c6db-48c5-4071-8929-879cedcf39bf
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt01.localdomain.local:/gluster/brick2/data
>> Brick2: ovirt02.localdomain.local:/gluster/brick2/data
>> Brick3: ovirt03.localdomain.local:/gluster/brick2/data (arbiter)
>> Options Reconfigured:
>> performance.strict-o-direct: on
>> nfs.disable: on
>> user.cifs: off
>> network.ping-timeout: 30
>> cluster.shd-max-threads: 6
>> cluster.shd-wait-qlength: 10000
>> cluster.locking-scheme: granular
>> cluster.data-self-heal-algorithm: full
>> performance.low-prio-threads: 32
>> features.shard-block-size: 512MB
>> features.shard: on
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> network.remote-dio: off
>> cluster.eager-lock: enable
>> performance.stat-prefetch: off
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> performance.readdir-ahead: on
>> transport.address-family: inet
>> [root@ovirt01 glusterfs]#
>>
>> [root@ovirt01 glusterfs]# gluster volume heal data info
>> Brick ovirt01.localdomain.local:/gluster/brick2/data
>> Status: Connected
>> Number of entries: 0
>>
>> Brick ovirt02.localdomain.local:/gluster/brick2/data
>> Status: Connected
>> Number of entries: 0
>>
>> Brick ovirt03.localdomain.local:/gluster/brick2/data
>> Status: Connected
>> Number of entries: 0
>>
>> [root@ovirt01 glusterfs]#
>>
>> Gianluca
>>
>>
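
A note on the "no gluster network found in cluster" warnings above: the engine is complaining about the cluster's logical networks (none carries the "gluster" role), not about the volume itself, so healthy gluster volume info / heal output is expected either way. A hedged way to check from the command line (hostname, credentials, and the cluster id are placeholders; the endpoints are standard oVirt 4.2 REST API collections):

# List clusters to find the cluster id (it should match the one in the WARN lines):
curl -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
  'https://engine.localdomain.local/ovirt-engine/api/clusters'
# Then inspect that cluster's networks; a network carrying the gluster role
# should show <usages><usage>gluster</usage></usages>:
curl -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
  'https://engine.localdomain.local/ovirt-engine/api/clusters/CLUSTER-ID/networks'
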
>
> On the host side:
>
> [root@ovirt01 glusterfs]# systemctl status ovirt-imageio-daemon -l
> ● ovirt-imageio-daemon.service - oVirt ImageIO Daemon
>    Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; disabled; vendor preset: disabled)
>    Active: active (running) since Sat 2018-01-13 06:49:20 CET; 3 days ago
>  Main PID: 1004 (ovirt-imageio-d)
>    CGroup: /system.slice/ovirt-imageio-daemon.service
>            └─1004 /usr/bin/python /usr/bin/ovirt-imageio-daemon
>
> Jan 13 06:49:19 ovirt01.localdomain.local systemd[1]: Starting oVirt ImageIO Daemon...
> Jan 13 06:49:20 ovirt01.localdomain.local systemd[1]: Started oVirt ImageIO Daemon.
> [root@ovirt01 glusterfs]#
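
The daemon side looks healthy here; since the 401 was logged by the proxy, the corresponding check on the engine host may help too (a sketch; ovirt-imageio-proxy is the proxy's service name in oVirt 4.2, and the thread's image-proxy.log lives on that host):

# On the engine host (not the hypervisor):
systemctl status ovirt-imageio-proxy -l
journalctl -u ovirt-imageio-proxy --since today
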
>
> and in /var/log/ovirt-imageio-daemon/daemon.log
>
> 2018-01-16 14:16:12,625 INFO (ticket.server) [web] START [/] PUT /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7
> 2018-01-16 14:16:12,626 INFO (ticket.server) [tickets] Adding ticket <Ticket active=False expires=4580999 filename=None ops=[u'write'] size=1073741824 transferred=0 url=u'file:///rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/93b51148-5a79-4c1a-a173-9510f3807822/9dd4cdcb-5785-496e-8b46-7b25a3e97791' uuid=u'81569ab3-1b92-4744-8a58-f1948afa20b7' at 0x24cde10>
> 2018-01-16 14:16:12,628 INFO (ticket.server) [web] FINISH [/] PUT /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7: [200] 0 (0.00s)
> 2018-01-16 14:16:13,800 INFO (ticket.server) [web] START [/] GET /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7
> 2018-01-16 14:16:13,800 INFO (ticket.server) [tickets] Retrieving ticket 81569ab3-1b92-4744-8a58-f1948afa20b7
> 2018-01-16 14:16:13,802 INFO (ticket.server) [web] FINISH [/] GET /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7: [200] 355 (0.00s)
> 2018-01-16 14:16:15,846 INFO (ticket.server) [web] START [/] GET /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7
> 2018-01-16 14:16:15,846 INFO (ticket.server) [tickets] Retrieving ticket 81569ab3-1b92-4744-8a58-f1948afa20b7
> 2018-01-16 14:16:15,847 INFO (ticket.server) [web] FINISH [/] GET /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7: [200] 355 (0.00s)
> 2018-01-16 14:16:19,865 INFO (ticket.server) [web] START [/] GET /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7
> 2018-01-16 14:16:19,865 INFO (ticket.server) [tickets] Retrieving ticket 81569ab3-1b92-4744-8a58-f1948afa20b7
> 2018-01-16 14:16:19,866 INFO (ticket.server) [web] FINISH [/] GET /tickets/81569ab3-1b92-4744-8a58-f1948afa20b7: [200] 355 (0.00s)
>
> and it seems the file is there???
>
> [root@ovirt01 ovirt-imageio-daemon]# ll /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/93b51148-5a79-4c1a-a173-9510f3807822/9dd4cdcb-5785-496e-8b46-7b25a3e97791
> -rw-rw----. 1 vdsm kvm 1073741824 Jan 16 14:15 /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/93b51148-5a79-4c1a-a173-9510f3807822/9dd4cdcb-5785-496e-8b46-7b25a3e97791
> [root@ovirt01 ovirt-imageio-daemon]#
>
> [root@ovirt01 ovirt-imageio-daemon]# qemu-img info /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/93b51148-5a79-4c1a-a173-9510f3807822/9dd4cdcb-5785-496e-8b46-7b25a3e97791
> image: /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/93b51148-5a79-4c1a-a173-9510f3807822/9dd4cdcb-5785-496e-8b46-7b25a3e97791
> file format: raw
> virtual size: 1.0G (1073741824 bytes)
> disk size: 0
> [root@ovirt01 ovirt-imageio-daemon]#
>
> but the allocated size is zero:
> [root@ovirt01 ovirt-imageio-daemon]# ls -ls /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/93b51148-5a79-4c1a-a173-9510f3807822/9dd4cdcb-5785-496e-8b46-7b25a3e97791
> 0 -rw-rw----. 1 vdsm kvm 1073741824 Jan 16 14:15 /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/93b51148-5a79-4c1a-a173-9510f3807822/9dd4cdcb-5785-496e-8b46-7b25a3e97791
> [root@ovirt01 ovirt-imageio-daemon]#
>
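
For context: the first column of ls -ls is allocated blocks, and qemu-img's "disk size: 0" reports the same thing, so both outputs say the file was created at its full apparent size but never had any data written into it, which matches the PUT never getting past the proxy's 401. A minimal illustration of such a sparse file (the path is a scratch placeholder):

truncate -s 1G /tmp/sparse-demo          # full apparent size, no data written
ls -ls /tmp/sparse-demo                  # first column (allocated blocks): 0
du -h --apparent-size /tmp/sparse-demo   # 1.0G
du -h /tmp/sparse-demo                   # 0
rm /tmp/sparse-demo
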
>
> And the same happens if I go back to when I tried to upload the ISO:
>
> 2018-01-14 18:23:24,867 INFO (ticket.server) [web] START [/] PUT /tickets/b89f35d3-c09b-4dcb-bc13-abf18feb8cd0
> 2018-01-14 18:23:24,868 INFO (ticket.server) [tickets] Adding ticket <Ticket active=False expires=4423031 filename=None ops=[u'write'] size=830472192 transferred=0 url=u'file:///rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/74c1ff00-efbc-494f-ac5c-b3c84ea02ae4/cb5d0551-e8ff-4a93-b107-a6138770d894' uuid=u'b89f35d3-c09b-4dcb-bc13-abf18feb8cd0' at 0x24cde50>
> 2018-01-14 18:23:24,868 INFO (ticket.server) [web] FINISH [/] PUT /tickets/b89f35d3-c09b-4dcb-bc13-abf18feb8cd0: [200] 0 (0.00s)
>
> and
>
> [root@ovirt01 ovirt-imageio-daemon]# ll /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/74c1ff00-efbc-494f-ac5c-b3c84ea02ae4/cb5d0551-e8ff-4a93-b107-a6138770d894
> -rw-rw----. 1 vdsm kvm 830472192 Jan 14 18:23 /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/74c1ff00-efbc-494f-ac5c-b3c84ea02ae4/cb5d0551-e8ff-4a93-b107-a6138770d894
> [root@ovirt01 ovirt-imageio-daemon]#
>
> ls -s gives 0, and in fact if I try to mount the supposed ISO file as a
> loop device I get the "wrong fs type" error ...
>
> [root@ovirt01 ovirt-imageio-daemon]# ls -ls /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/74c1ff00-efbc-494f-ac5c-b3c84ea02ae4/cb5d0551-e8ff-4a93-b107-a6138770d894
> 0 -rw-rw----. 1 vdsm kvm 830472192 Jan 14 18:23 /rhev/data-center/mnt/glusterSD/ovirt01.localdomain.local:data/190f4096-003e-4908-825a-6c231e60276d/images/74c1ff00-efbc-494f-ac5c-b3c84ea02ae4/cb5d0551-e8ff-4a93-b107-a6138770d894
> [root@ovirt01 ovirt-imageio-daemon]#
>
> Hope this helps debugging,
> Gianluca
>