[ovirt-users] oVirt 4.2.1 rc1 and upload iso to data domain test
Gianluca Cecchi
gianluca.cecchi at gmail.com
Tue Jan 16 13:30:39 UTC 2018
On Tue, Jan 16, 2018 at 2:21 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com>
wrote:
> On Tue, Jan 16, 2018 at 6:48 AM, Fred Rolland <frolland at redhat.com> wrote:
>
>> Hi,
>> I will look into it.
>>
>> Is it also not working for non-ISO images?
>>
>> Thanks,
>> Fred
>>
>>
> Hello,
> I get the same with a disk.
> I have tried with a raw disk of size 1 GB.
> When I upload I can set the size (why?), while with the ISO image I could
> not.
> The disk is recognized as "data" in the upload window (the ISO file was
> correctly recognized as "iso"), but in image-proxy.log I get
>
> (Thread-42 ) INFO 2018-01-16 14:15:50,530 web:95:web:(log_start) START
> [192.168.150.101] GET /info/
> (Thread-42 ) INFO 2018-01-16 14:15:50,532 web:102:web:(log_finish) FINISH
> [192.168.150.101] GET /info/: [200] 20 (0.00s)
> (Thread-43 ) INFO 2018-01-16 14:16:12,659 web:95:web:(log_start) START
> [192.168.150.105] PUT /tickets/
> (Thread-43 ) INFO 2018-01-16 14:16:12,661 auth2:170:auth2:(add_signed_ticket)
> Adding new ticket: <Ticket id=u'81569ab3-1b92-4744-8a58-f1948afa20b7',
> url=u'https://ovirt01.localdomain.local:54322' at 0x7f048c03c610>
> (Thread-43 ) INFO 2018-01-16 14:16:12,662 web:102:web:(log_finish) FINISH
> [192.168.150.105] PUT /tickets/: [200] 0 (0.00s)
> (Thread-44 ) INFO 2018-01-16 14:16:13,800 web:95:web:(log_start) START
> [192.168.150.101] OPTIONS /images/81569ab3-1b92-4744-8a58-f1948afa20b7
> (Thread-44 ) INFO 2018-01-16 14:16:13,814 web:102:web:(log_finish) FINISH
> [192.168.150.101] OPTIONS /images/81569ab3-1b92-4744-8a58-f1948afa20b7: [204] 0 (0.02s)
> (Thread-45 ) INFO 2018-01-16 14:16:13,876 web:95:web:(log_start) START
> [192.168.150.101] PUT /images/81569ab3-1b92-4744-8a58-f1948afa20b7
> (Thread-45 ) WARNING 2018-01-16 14:16:13,877 web:112:web:(log_error) ERROR
> [192.168.150.101] PUT /images/81569ab3-1b92-4744-8a58-f1948afa20b7: [401]
> Not authorized (0.00s)
>
> Gianluca
>
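In case it helps to rule out the browser/WebAdmin side, I can also try to drive
the same upload through the Python SDK and see whether the proxy answers with
the same 401. A rough sketch of what I mean (engine URL, credentials, the
storage domain name "data", the disk name and the file "test.raw" are just
placeholders for my setup, and I have not verified this exact script):

import time
import requests
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# All of these connection values are placeholders for my environment.
connection = sdk.Connection(
    url='https://ovirt-engine.localdomain.local/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Create a 1 GiB raw disk on the data domain (domain name is a placeholder).
disks_service = connection.system_service().disks_service()
disk = disks_service.add(
    types.Disk(
        name='upload-test',
        format=types.DiskFormat.RAW,
        provisioned_size=1 * 1024**3,
        storage_domains=[types.StorageDomain(name='data')],
    )
)
disk_service = disks_service.disk_service(disk.id)
while disk_service.get().status != types.DiskStatus.OK:
    time.sleep(1)

# Start an image transfer and wait until it leaves the INITIALIZING phase.
transfers_service = connection.system_service().image_transfers_service()
transfer = transfers_service.add(
    types.ImageTransfer(image=types.Image(id=disk.id))
)
transfer_service = transfers_service.image_transfer_service(transfer.id)
while transfer_service.get().phase == types.ImageTransferPhase.INITIALIZING:
    time.sleep(1)

# PUT the image through ovirt-imageio-proxy; a 401 here would match what
# image-proxy.log shows for the upload started from the WebAdmin.
with open('test.raw', 'rb') as image:
    response = requests.put(
        transfer_service.get().proxy_url,
        data=image,
        verify=False,  # quick test only; normally verify the proxy certificate
    )
print(response.status_code, response.reason)

transfer_service.finalize()
connection.close()

If this also ends with a 401 on the PUT, the problem would seem to be on the
proxy authorization side rather than in the UI.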
BTW:
I don't know if it is in some way related to the upload problems, but in
my engine.log I see this kind of message every 5 seconds or so:
2018-01-16 14:27:38,428+01 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler3) [61e72c38] START,
GlusterServersListVDSCommand(HostName = ovirt02.localdomain.local,
VdsIdVDSCommandParametersBase:{hostId='cb9cc605-fceb-4689-ad35-43ba883f4556'}),
log id: 65e60794
2018-01-16 14:27:38,858+01 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler3) [61e72c38] FINISH, GlusterServersListVDSCommand,
return: [192.168.150.103/24:CONNECTED, ovirt03.localdomain.local:CONNECTED,
ovirt01.localdomain.local:CONNECTED], log id: 65e60794
2018-01-16 14:27:38,867+01 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler3) [61e72c38] START,
GlusterVolumesListVDSCommand(HostName = ovirt02.localdomain.local,
GlusterVolumesListVDSParameters:{hostId='cb9cc605-fceb-4689-ad35-43ba883f4556'}),
log id: 6e01993d
2018-01-16 14:27:39,221+01 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler3) [61e72c38] Could not associate brick
'ovirt02.localdomain.local:/gluster/brick1/engine' of volume
'6e2bd1d7-9c8e-4c54-9d85-f36e1b871771' with correct network as no gluster
network found in cluster '582badbe-0080-0197-013b-0000000001c6'
2018-01-16 14:27:39,231+01 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler3) [61e72c38] Could not associate brick
'ovirt02.localdomain.local:/gluster/brick2/data' of volume
'2238c6db-48c5-4071-8929-879cedcf39bf' with correct network as no gluster
network found in cluster '582badbe-0080-0197-013b-0000000001c6'
2018-01-16 14:27:39,253+01 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler3) [61e72c38] Could not associate brick
'ovirt02.localdomain.local:/gluster/brick4/iso' of volume
'28f99f11-3529-43a1-895c-abf1c66884ab' with correct network as no gluster
network found in cluster '582badbe-0080-0197-013b-0000000001c6'
2018-01-16 14:27:39,255+01 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler3) [61e72c38] FINISH, GlusterVolumesListVDSCommand,
return:
{2238c6db-48c5-4071-8929-879cedcf39bf=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@aa6e9a1e,
df0ccd1d-5de6-42b8-a163-ec65c3698da3=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@31c29088,
6e2bd1d7-9c8e-4c54-9d85-f36e1b871771=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@ae82860f,
28f99f11-3529-43a1-895c-abf1c66884ab=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@1b6a11e5},
log id: 6e01993d
Actually the gluster network seems OK.
E.g.:
[root@ovirt01 glusterfs]# gluster volume info data
Volume Name: data
Type: Replicate
Volume ID: 2238c6db-48c5-4071-8929-879cedcf39bf
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt01.localdomain.local:/gluster/brick2/data
Brick2: ovirt02.localdomain.local:/gluster/brick2/data
Brick3: ovirt03.localdomain.local:/gluster/brick2/data (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
transport.address-family: inet
[root@ovirt01 glusterfs]#
[root@ovirt01 glusterfs]# gluster volume heal data info
Brick ovirt01.localdomain.local:/gluster/brick2/data
Status: Connected
Number of entries: 0
Brick ovirt02.localdomain.local:/gluster/brick2/data
Status: Connected
Number of entries: 0
Brick ovirt03.localdomain.local:/gluster/brick2/data
Status: Connected
Number of entries: 0
[root@ovirt01 glusterfs]#
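That warning, though, seems to be about the engine-side network role rather
than about gluster itself, so maybe the thing to check is whether any logical
network attached to the cluster actually has the "gluster" usage assigned.
Something along these lines with the Python SDK should show it (again just a
sketch; the cluster name "Default" and the connection details are
placeholders):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details, same as in the sketch above.
connection = sdk.Connection(
    url='https://ovirt-engine.localdomain.local/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

clusters_service = connection.system_service().clusters_service()
cluster = clusters_service.list(search='name=Default')[0]  # cluster name is a placeholder
networks_service = clusters_service.cluster_service(cluster.id).networks_service()

# Print each cluster network and whether it carries the gluster role;
# the engine warning suggests none of them does.
for network in networks_service.list():
    usages = network.usages or []
    print(network.name, 'gluster role:', types.NetworkUsage.GLUSTER in usages)

connection.close()

If none of the networks shows the gluster role, the warning would be expected
(and probably harmless here), since the engine simply cannot map the bricks to
a dedicated gluster network.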
Gianluca