vm with disks on multiple glusterfs domains fails if a gluster host goes down
by g.vasilopoulos@uoc.gr
It seems that a VM with 3 disks (a boot disk in the engine domain, another disk in the vol1 domain, and a third in the vol3 domain) became non-responsive when one gluster host went down.
To explain the situation a bit: I have 3 GlusterFS hosts with 3 volumes.
The hosts are g1, g2 and g3, and each has 3 bricks:
g1 has vol1, vol2 and the vol3 arbiter
g2 has vol1, the vol2 arbiter and vol3
g3 has the vol1 arbiter, vol2 and vol3
libgfapi is enabled. I put a host in maintenance to update the BIOS, and the VM that had disks in two domains became unresponsive.
Is this normal? The QEMU logs show which host it tries to connect to. The domain configuration shows host1 as primary for vol1 and host2 as primary for vol3, with the other two hosts as backup-volfile-servers.
It seems it always tries to connect to the server that is down and not to one of the alternative hosts...
Is this a libgfapi/libvirt problem?
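For reference, this is roughly how the disk source can be checked on the hypervisor; the VM name below is just a placeholder, and the expected XML is only a sketch of what a gluster network disk with working backup hosts should look like:

# On the hypervisor running the VM (the VM name is illustrative):
virsh -r dumpxml myvm | grep -B 2 -A 6 "protocol='gluster'"
#
# With libgfapi and backup-volfile-servers applied, each network disk
# source would be expected to list all three hosts, e.g.:
#   <source protocol='gluster' name='vol1/<image path>'>
#     <host name='g1' port='24007'/>
#     <host name='g2' port='24007'/>
#     <host name='g3' port='24007'/>
#   </source>
# If only a single <host> element shows up, the client has no
# alternative volfile server to fall back to, which would match the
# reconnect loop in the logs below.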
Here are some libvirt logs showing what it tries to do:
[2018-09-10 19:43:42.876114] T [socket.c:3133:socket_connect] 0-vol1-client-2: connecting 0x55ed673525c0, state=2 gen=0 sock=-1
[2018-09-10 19:43:42.876124] T [name.c:243:af_inet_client_get_remote_sockaddr] 0-vol1-client-2: option remote-port missing in volume vol1-client-2. Defaulting to 24007
[2018-09-10 19:43:42.878566] D [socket.c:3051:socket_fix_ssl_opts] 0-vol1-client-2: disabling SSL for portmapper connection
[2018-09-10 19:43:42.878770] T [socket.c:834:__socket_nodelay] 0-vol1-client-2: NODELAY enabled for socket 30
[2018-09-10 19:43:42.878780] T [socket.c:920:__socket_keepalive] 0-vol1-client-2: Keep-alive enabled for socket: 30, (idle: 20, interval: 2, max-probes: 9, timeout: 0)
[2018-09-10 19:43:42.878830] T [rpc-clnt.c:406:rpc_clnt_reconnect] 0-vol3-client-1: attempting reconnect
[2018-09-10 19:43:42.878846] T [socket.c:3133:socket_connect] 0-vol3-client-1: connecting 0x55ed673546c0, state=2 gen=0 sock=-1
[2018-09-10 19:43:42.878856] T [name.c:243:af_inet_client_get_remote_sockaddr] 0-vol3-client-1: option remote-port missing in volume vol3-client-1. Defaulting to 24007
[2018-09-10 19:43:42.881229] D [socket.c:3051:socket_fix_ssl_opts] 0-vol3-client-1: disabling SSL for portmapper connection
[2018-09-10 19:43:42.881255] T [socket.c:834:__socket_nodelay] 0-vol3-client-1: NODELAY enabled for socket 38
[2018-09-10 19:43:42.881264] T [socket.c:920:__socket_keepalive] 0-vol3-client-1: Keep-alive enabled for socket: 38, (idle: 20, interval: 2, max-probes: 9, timeout: 0)
[2018-09-10 19:43:45.569298] T [socket.c:724:__socket_disconnect] 0-vol3-client-1: disconnecting 0x55ed673546c0, state=2 gen=0 sock=38
[2018-09-10 19:43:45.569308] T [socket.c:724:__socket_disconnect] 0-vol1-client-2: disconnecting 0x55ed673525c0, state=2 gen=0 sock=30
[2018-09-10 19:43:45.570000] T [socket.c:728:__socket_disconnect] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fddbadade9b] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x4ea0)[0x7fdda7bbfea0] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x530a)[0x7fdda7bc030a] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x9a08)[0x7fdda7bc4a08] (--> /lib64/libglusterfs.so.0(+0x883c4)[0x7fddbae093c4] ))))) 0-vol3-client-1: tearing down socket connection
[2018-09-10 19:43:45.570020] D [socket.c:686:__socket_shutdown] 0-vol3-client-1: shutdown() returned -1. Transport endpoint is not connected
[2018-09-10 19:43:45.570038] D [socket.c:733:__socket_disconnect] 0-vol3-client-1: __socket_teardown_connection () failed: Transport endpoint is not connected
[2018-09-10 19:43:45.570043] D [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2018-09-10 19:43:45.570907] T [socket.c:728:__socket_disconnect] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fddbadade9b] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x4ea0)[0x7fdda7bbfea0] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x530a)[0x7fdda7bc030a] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x9a08)[0x7fdda7bc4a08] (--> /lib64/libglusterfs.so.0(+0x883c4)[0x7fddbae093c4] ))))) 0-vol1-client-2: tearing down socket connection
[2018-09-10 19:43:45.570928] D [socket.c:686:__socket_shutdown] 0-vol1-client-2: shutdown() returned -1. Transport endpoint is not connected
[2018-09-10 19:43:45.570936] D [socket.c:733:__socket_disconnect] 0-vol1-client-2: __socket_teardown_connection () failed: Transport endpoint is not connected
[2018-09-10 19:43:45.570940] D [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2018-09-10 19:43:45.570960] D [rpc-clnt-ping.c:99:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fddbadade9b] (--> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7fddbab7828b] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x5f)[0x7fddbab7460f] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fddbab75130] (--> /lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fddbab70ea3] ))))) 0-: 10.xxx.xxx.130:24007: ping timer event already removed
[2018-09-10 19:43:45.571098] D [rpc-clnt-ping.c:99:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fddbadade9b] (--> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7fddbab7828b] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x5f)[0x7fddbab7460f] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fddbab75130] (--> /lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fddbab70ea3] ))))) 0-: 10.xxx.xxx.130:24007: ping timer event already removed
[2018-09-10 19:43:45.878885] T [rpc-clnt.c:406:rpc_clnt_reconnect] 0-vol1-client-2: attempting reconnect
[2018-09-10 19:43:45.881546] T [socket.c:834:__socket_nodelay] 0-vol1-client-2: NODELAY enabled for socket 38
[2018-09-10 19:43:45.881555] T [socket.c:920:__socket_keepalive] 0-vol1-client-2: Keep-alive enabled for socket: 38, (idle: 20, interval: 2, max-probes: 9, timeout: 0)
[2018-09-10 19:43:45.883839] D [socket.c:3051:socket_fix_ssl_opts] 0-vol3-client-1: disabling SSL for portmapper connection
[2018-09-10 19:43:45.883878] T [socket.c:834:__socket_nodelay] 0-vol3-client-1: NODELAY enabled for socket 30
[2018-09-10 19:43:45.883886] T [socket.c:920:__socket_keepalive] 0-vol3-client-1: Keep-alive enabled for socket: 30, (idle: 20, interval: 2, max-probes: 9, timeout: 0)
[2018-09-10 19:43:48.575316] T [socket.c:724:__socket_disconnect] 0-vol3-client-1: disconnecting 0x55ed673546c0, state=2 gen=0 sock=30
[2018-09-10 19:43:48.575329] T [socket.c:724:__socket_disconnect] 0-vol1-client-2: disconnecting 0x55ed673525c0, state=2 gen=0 sock=38
[2018-09-10 19:43:48.576022] T [socket.c:728:__socket_disconnect] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fddbadade9b] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x4ea0)[0x7fdda7bbfea0] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x530a)[0x7fdda7bc030a] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x9a08)[0x7fdda7bc4a08] (--> /lib64/libglusterfs.so.0(+0x883c4)[0x7fddbae093c4] ))))) 0-vol3-client-1: tearing down socket connection
[2018-09-10 19:43:48.576045] D [socket.c:686:__socket_shutdown] 0-vol3-client-1: shutdown() returned -1. Transport endpoint is not connected
[2018-09-10 19:43:48.576054] D [socket.c:733:__socket_disconnect] 0-vol3-client-1: __socket_teardown_connection () failed: Transport endpoint is not connected
[2018-09-10 19:43:48.576059] D [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2018-09-10 19:43:48.576079] T [socket.c:728:__socket_disconnect] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fddbadade9b] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x4ea0)[0x7fdda7bbfea0] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x530a)[0x7fdda7bc030a] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x9a08)[0x7fdda7bc4a08] (--> /lib64/libglusterfs.so.0(+0x883c4)[0x7fddbae093c4] ))))) 0-vol1-client-2: tearing down socket connection
[2018-09-10 19:43:48.576099] D [socket.c:686:__socket_shutdown] 0-vol1-client-2: shutdown() returned -1. Transport endpoint is not connected
[2018-09-10 19:43:48.576106] D [socket.c:733:__socket_disconnect] 0-vol1-client-2: __socket_teardown_connection () failed: Transport endpoint is not connected
[2018-09-10 19:43:48.576111] D [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2018-09-10 19:43:48.576879] D [rpc-clnt-ping.c:99:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fddbadade9b] (--> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7fddbab7828b] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x5f)[0x7fddbab7460f] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fddbab75130] (--> /lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fddbab70ea3] ))))) 0-: 10.xxx.xxx.130:24007: ping timer event already removed
[2018-09-10 19:43:48.576958] D [rpc-clnt-ping.c:99:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fddbadade9b] (--> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7fddbab7828b] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x5f)[0x7fddbab7460f] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fddbab75130] (--> /lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fddbab70ea3] ))))) 0-: 10.xxx.xxx.130:24007: ping timer event already removed
[2018-09-10 19:43:48.881651] T [rpc-clnt.c:406:rpc_clnt_reconnect] 0-vol1-client-2: attempting reconnect
[2018-09-10 19:43:48.881667] T [socket.c:3133:socket_connect] 0-vol1-client-2: connecting 0x55ed673525c0, state=2 gen=0 sock=-1
[2018-09-10 19:43:48.881689] T [name.c:243:af_inet_client_get_remote_sockaddr] 0-vol1-client-2: option remote-port missing in volume vol1-client-2. Defaulting to 24007
[2018-09-10 19:43:48.884056] T [rpc-clnt.c:406:rpc_clnt_reconnect] 0-vol3-client-1: attempting reconnect
[2018-09-10 19:43:48.884072] T [socket.c:3133:socket_connect] 0-vol3-client-1: connecting 0x55ed673546c0, state=2 gen=0 sock=-1
[2018-09-10 19:43:48.884084] T [name.c:243:af_inet_client_get_remote_sockaddr] 0-vol3-client-1: option remote-port missing in volume vol3-client-1. Defaulting to 24007
[2018-09-10 19:43:48.884190] D [socket.c:3051:socket_fix_ssl_opts] 0-vol1-client-2: disabling SSL for portmapper connection
[2018-09-10 19:43:48.886524] T [socket.c:834:__socket_nodelay] 0-vol3-client-1: NODELAY enabled for socket 30
[2018-09-10 19:43:48.886532] T [socket.c:920:__socket_keepalive] 0-vol3-client-1: Keep-alive enabled for socket: 30, (idle: 20, interval: 2, max-probes: 9, timeout: 0)
[2018-09-10 19:43:51.581293] T [socket.c:724:__socket_disconnect] 0-vol3-client-1: disconnecting 0x55ed673546c0, state=2 gen=0 sock=30
[2018-09-10 19:43:51.581293] T [socket.c:724:__socket_disconnect] 0-vol1-client-2: disconnecting 0x55ed673525c0, state=2 gen=0 sock=38
[2018-09-10 19:43:51.582009] T [socket.c:728:__socket_disconnect] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fddbadade9b] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x4ea0)[0x7fdda7bbfea0] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x530a)[0x7fdda7bc030a] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x9a08)[0x7fdda7bc4a08] (--> /lib64/libglusterfs.so.0(+0x883c4)[0x7fddbae093c4] ))))) 0-vol1-client-2: tearing down socket connection
[2018-09-10 19:43:51.582030] D [socket.c:686:__socket_shutdown] 0-vol1-client-2: shutdown() returned -1. Transport endpoint is not connected
[2018-09-10 19:43:51.582036] D [socket.c:733:__socket_disconnect] 0-vol1-client-2: __socket_teardown_connection () failed: Transport endpoint is not connected
[2018-09-10 19:43:51.582040] D [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2018-09-10 19:43:51.582084] T [socket.c:728:__socket_disconnect] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fddbadade9b] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x4ea0)[0x7fdda7bbfea0] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x530a)[0x7fdda7bc030a] (--> /usr/lib64/glusterfs/3.12.13/rpc-transport/socket.so(+0x9a08)[0x7fdda7bc4a08] (--> /lib64/libglusterfs.so.0(+0x883c4)[0x7fddbae093c4] ))))) 0-vol3-client-1: tearing down socket connection
[2018-09-10 19:43:51.582105] D [socket.c:686:__socket_shutdown] 0-vol3-client-1: shutdown() returned -1. Transport endpoint is not connected
[2018-09-10 19:43:51.582111] D [socket.c:733:__socket_disconnect] 0-vol3-client-1: __socket_teardown_connection () failed: Transport endpoint is not connected
[2018-09-10 19:43:51.582116] D [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2018-09-10 19:43:51.582812] D [rpc-clnt-ping.c:99:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fddbadade9b] (--> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7fddbab7828b] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x5f)[0x7fddbab7460f] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fddbab75130] (--> /lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fddbab70ea3] ))))) 0-: 10.xxx.xxx.130:24007: ping timer event already removed
[2018-09-10 19:43:51.582865] D [rpc-clnt-ping.c:99:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fddbadade9b] (--> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7fddbab7828b] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x5f)[0x7fddbab7460f] (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x2a0)[0x7fddbab75130] (--> /lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fddbab70ea3] ))))) 0-: 10.xxx.xxx.130:24007: ping timer event already removed
[2018-09-10 19:43:51.884349] T [rpc-clnt.c:406:rpc_clnt_reconnect] 0-vol1-client-2: attempting reconnect
[2018-09-10 19:43:51.884367] T [socket.c:3133:socket_connect] 0-vol1-client-2: connecting 0x55ed673525c0, state=2 gen=0 sock=-1
[2018-09-10 19:43:51.884376] T [name.c:243:af_inet_client_get_remote_sockaddr] 0-vol1-client-2: option remote-port missing in volume vol1-client-2. Defaulting to 24007
[2018-09-10 19:43:51.886644] T [rpc-clnt.c:406:rpc_clnt_reconnect] 0-vol3-client-1: attempting reconnect
[2018-09-10 19:43:51.886659] T [socket.c:3133:socket_connect] 0-vol3-client-1: connecting 0x55ed673546c0, state=2 gen=0 sock=-1
[2018-09-10 19:43:51.886669] T [name.c:243:af_inet_client_get_remote_sockaddr] 0-vol3-client-1: option remote-port missing in volume vol3-client-1. Defaulting to 24007
[2018-09-10 19:43:51.887251] D [socket.c:3051:socket_fix_ssl_opts] 0-vol1-client-2: disabling SSL for portmapper connection
[2018-09-10 19:43:51.887281] T [socket.c:834:__socket_nodelay] 0-vol1-client-2: NODELAY enabled for socket 38
[2018-09-10 19:43:51.887290] T [socket.c:920:__socket_keepalive] 0-vol1-client-2: Keep-alive enabled for socket: 38, (idle: 20, interval: 2, max-probes: 9, timeout: 0)
[2018-09-10 19:43:51.889141] D [socket.c:3051:socket_fix_ssl_opts] 0-vol3-client-1: disabling SSL for portmapper connection
:
Install failing at DNS resolution check (closing up).
by Kristian Petersen
I am trying to set up a small oVirt 4.2 cluster on CentOS with 2 hosts to
do some testing with. I got everything set up to the point where I am
running hosted-engine --deploy, but it fails with an error saying it
was unable to resolve some host name or IP address (though it isn't very
clear which one). On the same machine I have tested its ability to resolve
the hosts in the cluster both forward and reverse, and they work fine with
nslookup. That includes the DNS entry for the hosted engine. I tried putting
them all into the /etc/hosts files of each computer, but it still fails at
the same step. The log file it refers to does not seem very helpful. Maybe
I'm just not sure what to look for.
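For completeness, a quick loop that checks forward and reverse resolution for every name involved (the FQDNs below are placeholders for the real host, engine and hosted-engine names):

# Check forward and reverse lookup for every name involved
for h in host1.example.org host2.example.org engine.example.org; do
    echo "== $h =="
    dig +short "$h"                      # forward lookup
    ip=$(dig +short "$h" | head -n1)
    dig +short -x "$ip"                  # reverse lookup should return the FQDN
done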
--
Kristian Petersen
System Administrator
BYU Dept. of Chemistry and Biochemistry
Question on gluster sharding maturity/stability
by Brian Sipos
I'm currently running oVirt 4.1 (with gluster 3.8.15), and as this is a cluster which has been upgraded from (I believe) oVirt 3.5 originally, the storage volumes use the old "Optimize for Virt Store" configuration, which does not enable sharding. I see now that the oVirt 4.1 "optimize" configuration does enable sharding, but I am a bit wary after seeing some horror stories from about a year ago about corruption and data loss caused by sharding in earlier gluster versions. Is the gluster version associated with oVirt 4.1 now at a stable point where I can and should trust sharding?
I plan on doing some simple experimenting first, but I see a great benefit for balancing distributed-replicated volumes and for healing times, and I would like to take advantage of these benefits now. Since the oVirt engine is now 'recommending' the use of sharding for disk store volumes, this leads me to believe that it is trusted by the oVirt crew. Does this seem accurate?
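For reference, my plan for the experiment is along these lines; the volume name is just an example and, as I understand it, sharding must only ever be enabled on a new, empty volume:

# On a scratch volume only - never on a volume that already holds data
gluster volume set testvol features.shard on
gluster volume set testvol features.shard-block-size 64MB
gluster volume info testvol | grep -i shard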
Info about procedure to shutdown hosted engine VM
by Gianluca Cecchi
Hello,
I'm writing a workflow covering the operations to perform for planned
maintenance, where one has to stop all hypervisors and therefore also the
hosted engine VM.
At the moment I have imagined:
- shut down all VMs except the Hosted Engine
- put into maintenance, and then shut down, all the hosts where the Engine is
not running, one by one
- put the environment into global maintenance
Now the next step would be to shut down the Hosted Engine VM.
As my workflow is for users who are not necessarily Linux experts, I was
wondering what an alternative would be to connecting directly via
ssh as root and running "shutdown -h now" on the engine.
Is there anything I can do from the GUI of the web admin portal or the cockpit of the
host where it is running?
Can I, for example, trigger a shutdown of the engine VM from the web admin UI with a
certain time delay, so that I can disconnect and wait?
Or is there anything I can do from the cockpit of the host?
Of course I'm searching for a supported flow.
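For completeness, the CLI flow I have in mind so far (commands are from memory and worth double-checking against the documentation) is:

# On any hosted-engine host:
hosted-engine --set-maintenance --mode=global
# On the host currently running the engine VM:
hosted-engine --vm-shutdown
hosted-engine --vm-status    # repeat until the engine VM is reported down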
Thanks in advance,
Gianluca
Re: Failed to synchronize networks of Provider ovirt-provider-ovn
by Mail SET Inc. Group
Yes, I used the same manual to change the WebUI SSL certificate.
ovirt-ca-file= points to the same SSL file that the WebUI uses.
Yes, I restarted ovirt-provider-ovn, I restarted the engine, I restarted everything I could restart. Nothing...
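For reference, a check like the following (using the host name and paths from the config quoted below) should show whether the configured CA actually validates the certificate served on port 443:

openssl s_client -connect engine.set.local:443 -showcerts </dev/null 2>/dev/null \
    | openssl x509 > /tmp/engine-https.pem
openssl verify -CAfile /etc/pki/ovirt-engine/apache-ca.pem /tmp/engine-https.pem
# "OK" means ovirt-ca-file matches the HTTPS certificate; an error here
# would explain the CERTIFICATE_VERIFY_FAILED in ovirt-provider-ovn.log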
> On 12 Sep 2018, at 16:11, Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Wed, 12 Sep 2018 14:23:54 +0300
> "Mail SET Inc. Group" <mail(a)set-pro.net> wrote:
>
>> Ok!
>
> Not exactly, please use users(a)ovirt.org for such questions.
> Other should benefit from this questions, too.
> Please write the next mail to users(a)ovirt.org and keep me in CC.
>
>> What I did:
>>
>> 1) installed oVirt out of the box (4.2.5.2-1.el7);
>> 2) generated my own SSL certificate for my engine using my FreeIPA CA, installed it and
>
> What does "Install it" mean? You can use the doc from the following link
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/...
>
> Ensure that ovirt-ca-file= in
> /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf
> points to the correct file and ovirt-provider-ovn is restarted.
>
>> got this issue;
>>
>>
>> [root@engine ~]# tail -n 50 /var/log/ovirt-provider-ovn.log
>> 2018-09-12 14:10:23,828 root [SSL: CERTIFICATE_VERIFY_FAILED]
>> certificate verify failed (_ssl.c:579) Traceback (most recent call
>> last): File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py",
>> line 133, in _handle_request method, path_parts, content
>> File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py",
>> line 175, in handle_request return
>> self.call_response_handler(handler, content, parameters) File
>> "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in
>> call_response_handler return response_handler(content, parameters)
>> File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py",
>> line 62, in post_tokens user_password=user_password) File
>> "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26, in
>> create_token return auth.core.plugin.create_token(user_at_domain,
>> user_password) File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", line
>> 48, in create_token timeout=self._timeout()) File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 75,
>> in create_token username, password, engine_url, ca_file, timeout)
>> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line
>> 91, in _get_sso_token timeout=timeout File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 54,
>> in wrapper response = func(*args, **kwargs) File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 47,
>> in wrapper raise BadGateway(e) BadGateway: [SSL:
>> CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
>>
>>
>> [root@engine ~]# tail -n 20 /var/log/ovirt-engine/engine.log
>> 2018-09-12 14:10:23,773+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685] Lock
>> Acquired to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}' 2018-09-12 14:10:23,778+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685]
>> Running command: SyncNetworkProviderCommand internal: true.
>> 2018-09-12 14:10:23,836+03 ERROR
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685]
>> Command
>> 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand'
>> failed: EngineException: (Failed with error Bad Gateway and code
>> 5050) 2018-09-12 14:10:23,837+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685] Lock
>> freed to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}' 2018-09-12 14:14:12,477+03 INFO
>> [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default
>> task-6) [] User admin@internal successfully logged in with scopes:
>> ovirt-app-admin ovirt-app-api ovirt-app-portal
>> ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all
>> ovirt-ext=token-info:authz-search
>> ovirt-ext=token-info:public-authz-search
>> ovirt-ext=token-info:validate ovirt-ext=token:password-access
>> 2018-09-12 14:14:12,587+03 INFO
>> [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default
>> task-6) [1bf1b763] Running command: CreateUserSessionCommand
>> internal: false. 2018-09-12 14:14:12,628+03 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-6) [1bf1b763] EVENT_ID: USER_VDC_LOGIN(30), User
>> admin@internal-authz connecting from '10.0.3.61' using session
>> 's8jAm7BUJGlicthm6yZBA3CUM8QpRdtwFaK3M/IppfhB3fHFB9gmNf0cAlbl1xIhcJ2WX+ww7e71Ri+MxJSsIg=='
>> logged in. 2018-09-12 14:14:30,972+03 INFO
>> [org.ovirt.engine.core.bll.provider.ImportProviderCertificateCommand]
>> (default task-6) [ee3cc8a7-4485-4fdf-a0c2-e9d67b5cfcd3] Running
>> command: ImportProviderCertificateCommand internal: false. Entities
>> affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
>> SystemAction group CREATE_STORAGE_POOL with role type ADMIN
>> 2018-09-12 14:14:30,982+03 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-6) [ee3cc8a7-4485-4fdf-a0c2-e9d67b5cfcd3] EVENT_ID:
>> PROVIDER_CERTIFICATE_IMPORTED(213), Certificate for provider
>> ovirt-provider-ovn was imported. (User: admin@internal-authz)
>> 2018-09-12 14:14:31,006+03 INFO
>> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
>> (default task-6) [a48d94ab-b0b2-42a2-a667-0525b4c652ea] Running
>> command: TestProviderConnectivityCommand internal: false. Entities
>> affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
>> SystemAction group CREATE_STORAGE_POOL with role type ADMIN
>> 2018-09-12 14:14:31,058+03 ERROR
>> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
>> (default task-6) [a48d94ab-b0b2-42a2-a667-0525b4c652ea] Command
>> 'org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand'
>> failed: EngineException: (Failed with error Bad Gateway and code
>> 5050) 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'default' is using 0 threads out of 1, 5 threads waiting for
>> tasks. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'engine' is using 0 threads out of 500, 16 threads waiting for
>> tasks and 0 tasks in queue. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'engineScheduled' is using 0 threads out of 100, 100 threads
>> waiting for tasks. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads
>> waiting for tasks. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'hostUpdatesChecker' is using 0 threads out of 5, 2 threads
>> waiting for tasks. 2018-09-12 14:15:23,843+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f] Lock
>> Acquired to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}' 2018-09-12 14:15:23,849+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f]
>> Running command: SyncNetworkProviderCommand internal: true.
>> 2018-09-12 14:15:23,900+03 ERROR
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f]
>> Command
>> 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand'
>> failed: EngineException: (Failed with error Bad Gateway and code
>> 5050) 2018-09-12 14:15:23,901+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f] Lock
>> freed to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}'
>>
>>
>> [root@engine ~]#
>> cat /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf #
>> This file is automatically generated by engine-setup. Please do not
>> edit manually [OVN REMOTE] ovn-remote=ssl:127.0.0.1:6641
>> [SSL]
>> https-enabled=true
>> ssl-cacert-file=/etc/pki/ovirt-engine/ca.pem
>> ssl-cert-file=/etc/pki/ovirt-engine/certs/ovirt-provider-ovn.cer
>> ssl-key-file=/etc/pki/ovirt-engine/keys/ovirt-provider-ovn.key.nopass
>> [OVIRT]
>> ovirt-sso-client-secret=Ms7Gw9qNT6IkXu7oA54tDmxaZDIukABV
>> ovirt-host=https://engine.set.local:443
>> ovirt-sso-client-id=ovirt-provider-ovn
>> ovirt-ca-file=/etc/pki/ovirt-engine/apache-ca.pem
>> [PROVIDER]
>> provider-host=engine.set.local
>>
>>
>>> On 12 Sep 2018, at 13:59, Dominik Holler <dholler(a)redhat.com>
>>> wrote:
>>>
>>> On Wed, 12 Sep 2018 13:04:53 +0300
>>> "Mail SET Inc. Group" <mail(a)set-pro.net> wrote:
>>>
>>>> Hello Dominik!
>>>> I have the same issue with the OVN provider and SSL
>>>> https://www.mail-archive.com/users@ovirt.org/msg47020.html
>>>> <https://www.mail-archive.com/users@ovirt.org/msg47020.html> But
>>>> changing the certificate did not help to resolve it. Maybe you can help me
>>>> with this?
>>>
>>> Sure. Can you please share the relevant lines of
>>> ovirt-provider-ovn.log and engine.log, and say whether you
>>> are using the certificates generated by engine-setup, with
>>> users(a)ovirt.org? Thanks,
>>> Dominik
Admin Portal
by mattias.kihl@gmail.com
The redirection URI for client is not registered
How can I add a new hostname from which I can use the admin portal? I found this:
> Create a new conf file /etc/ovirt-engine/engine.conf.d/99-sso.conf and add:
> SSO_CALLBACK_PREFIX_CHECK=false
>
> then
> systemctl restart ovirt-engine
>
> This will turn off the additional security check for the callback prefix.
But I want to keep the security check and learn how to do it correctly.
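From what I can tell (and assuming the setting is still available in 4.2), the alternative that keeps the check enabled would be to register the extra FQDN instead; the file name and FQDN below are only examples:

# /etc/ovirt-engine/engine.conf.d/99-alternate-fqdn.conf
SSO_ALTERNATE_ENGINE_FQDNS="portal.example.org"

# then
systemctl restart ovirt-engine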
Thanks
upload ISO from webui failed,"Paused by System"
by henaumars@sina.com
Hi,
I can't upload an ISO image from the web UI; I always get "Paused by System".
system info:
engine:4.2.6.4-1.el7
vdsm:4.20.35-1.el7
imageio-proxy:1.4.4-0.el7
imageio-daemon:1.4.4-0.el7
Operation:
1. download and install the CA certificate in the browser
2. select the ISO file to upload
3. click Test; an alert prompts to install the CA
4. install the CA again (optional)
5. upload
6. wait, and get "Paused by System"
log:
engine:
2018-09-11 11:13:22,273+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-61) [02c871da-a599-4201-9dd5-92a468dee952] START, HSMClearTaskVDSCommand(HostName = 21, HSMTaskGuidBaseVDSCommandParameters:{hostId='68f27646-da12-480b-9887-42ada2911132', taskId='90541bd1-65e7-4185-a051-3d8d9c1e3a5f'}), log id: 3ff168d7
2018-09-11 11:13:22,278+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-61) [02c871da-a599-4201-9dd5-92a468dee952] FINISH, HSMClearTaskVDSCommand, log id: 3ff168d7
2018-09-11 11:13:22,278+08 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-61) [02c871da-a599-4201-9dd5-92a468dee952] FINISH, SPMClearTaskVDSCommand, log id: 43d30861
2018-09-11 11:13:22,280+08 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-61) [02c871da-a599-4201-9dd5-92a468dee952] BaseAsyncTask::removeTaskFromDB: Removed task '90541bd1-65e7-4185-a051-3d8d9c1e3a5f' from DataBase
2018-09-11 11:13:22,280+08 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-61) [02c871da-a599-4201-9dd5-92a468dee952] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity '4ff92f68-8353-40ac-a7c5-f0efbd054841'
2018-09-11 11:13:26,098+08 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-8) [352651c9-ddde-4e1e-b95c-05ad967ed0b1] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK with role type USER
2018-09-11 11:13:28,836+08 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] Command 'AddDisk' id: '4b72410d-e5a0-4c43-b7db-a5324a32d012' child commands '[4ff92f68-8353-40ac-a7c5-f0efbd054841]' executions were completed, status 'SUCCEEDED'
2018-09-11 11:13:28,836+08 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] Command 'AddDisk' id: '4b72410d-e5a0-4c43-b7db-a5324a32d012' Updating status to 'SUCCEEDED', The command end method logic will be executed by one of its parent commands.
2018-09-11 11:13:28,862+08 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] Successfully added Upload disk 'oVirt-toolsSetup-4.2-1.el7.centos.iso' (disk id: '97179509-65eb-4b45-ad8e-ce112cfd016a', image id: '6ca217cc-015e-4d00-872a-faf60a8954ac') for image transfer command '4cdff02c-7fe4-4000-9c92-8ef613597d13'
2018-09-11 11:13:28,892+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] START, PrepareImageVDSCommand(HostName = 21, PrepareImageVDSCommandParameters:{hostId='68f27646-da12-480b-9887-42ada2911132'}), log id: 19407261
2018-09-11 11:13:28,901+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] FINISH, PrepareImageVDSCommand, return: PrepareImageReturn:{status='Status [code=0, message=Done]'}, log id: 19407261
2018-09-11 11:13:28,901+08 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeLegalityVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] START, SetVolumeLegalityVDSCommand( SetVolumeLegalityVDSCommandParameters:{storagePoolId='ab65cc43-3c9e-4c02-ba75-ccd9fa8691b1', ignoreFailoverLimit='false', storageDomainId='ffc618b9-fd75-4ca2-ace4-15a1b7b58ecf', imageGroupId='97179509-65eb-4b45-ad8e-ce112cfd016a', imageId='6ca217cc-015e-4d00-872a-faf60a8954ac'}), log id: 5068b962
2018-09-11 11:13:28,914+08 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeLegalityVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] FINISH, SetVolumeLegalityVDSCommand, log id: 5068b962
2018-09-11 11:13:28,915+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.AddImageTicketVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] START, AddImageTicketVDSCommand(HostName = 21, AddImageTicketVDSCommandParameters:{hostId='68f27646-da12-480b-9887-42ada2911132', ticketId='3564a630-be37-407f-802d-ce38a01cc104', timeout='300', operations='[write]', size='382971904', url='file:///rhev/data-center/mnt/_data/ffc618b9-fd75-4ca2-ace4-15a1b7b58ecf/images/97179509-65eb-4b45-ad8e-ce112cfd016a/6ca217cc-015e-4d00-872a-faf60a8954ac', filename='null', sparse='true'}), log id: 64950261
2018-09-11 11:13:28,920+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.AddImageTicketVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] FINISH, AddImageTicketVDSCommand, return: StatusOnlyReturn [status=Status [code=0, message=Done]], log id: 64950261
2018-09-11 11:13:28,920+08 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] Started transfer session with ticket id 3564a630-be37-407f-802d-ce38a01cc104, timeout 300 seconds
2018-09-11 11:13:28,920+08 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] Adding image ticket to ovirt-imageio-proxy, id 3564a630-be37-407f-802d-ce38a01cc104
2018-09-11 11:13:28,934+08 INFO [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] Updating image transfer 4cdff02c-7fe4-4000-9c92-8ef613597d13 (image 97179509-65eb-4b45-ad8e-ce112cfd016a) phase to Transferring
2018-09-11 11:13:28,935+08 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [02c871da-a599-4201-9dd5-92a468dee952] Returning from proceedCommandExecution after starting transfer session for image transfer command '4cdff02c-7fe4-4000-9c92-8ef613597d13'
2018-09-11 11:13:29,948+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetImageTicketVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-56) [02c871da-a599-4201-9dd5-92a468dee952] START, GetImageTicketVDSCommand(HostName = 21, GetImageTicketVDSCommandParameters:{hostId='68f27646-da12-480b-9887-42ada2911132', ticketId='3564a630-be37-407f-802d-ce38a01cc104', timeout='null'}), log id: 1b9af167
2018-09-11 11:13:29,952+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetImageTicketVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-56) [02c871da-a599-4201-9dd5-92a468dee952] FINISH, GetImageTicketVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.ImageTicketInformation@558504b3, log id: 1b9af167
2018-09-11 11:13:30,111+08 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-6) [c1647c4b-63fe-4ca7-b4a1-adb77c85e076] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK with role type USER
2018-09-11 11:13:42,126+08 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-6) [32ab474a-5bd4-4026-b653-67b2a6d776b9] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK with role type USER
2018-09-11 11:13:44,039+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetImageTicketVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-44) [02c871da-a599-4201-9dd5-92a468dee952] START, GetImageTicketVDSCommand(HostName = 21, GetImageTicketVDSCommandParameters:{hostId='68f27646-da12-480b-9887-42ada2911132', ticketId='3564a630-be37-407f-802d-ce38a01cc104', timeout='null'}), log id: 66ab6e50
2018-09-11 11:13:44,044+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetImageTicketVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-44) [02c871da-a599-4201-9dd5-92a468dee952] FINISH, GetImageTicketVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.ImageTicketInformation@558504c1, log id: 66ab6e50
2018-09-11 11:13:46,135+08 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-6) [1e65c0eb-95b0-4152-9887-5fa9521bb0a9] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK with role type USER
2018-09-11 11:13:48,327+08 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-6) [4ad6f0ea-2ae5-4067-b1c2-c0503b2e3676] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK with role type USER
2018-09-11 11:13:48,328+08 INFO [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (default task-6) [4ad6f0ea-2ae5-4067-b1c2-c0503b2e3676] Updating image transfer 4cdff02c-7fe4-4000-9c92-8ef613597d13 (image 97179509-65eb-4b45-ad8e-ce112cfd016a) phase to Paused by System (message: 'Sent 0MB')
2018-09-11 11:13:48,335+08 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-6) [4ad6f0ea-2ae5-4067-b1c2-c0503b2e3676] EVENT_ID: UPLOAD_IMAGE_NETWORK_ERROR(1,062), Unable to upload image to disk 97179509-65eb-4b45-ad8e-ce112cfd016a due to a network error. Make sure ovirt-imageio-proxy service is installed and configured, and ovirt-engine's certificate is registered as a valid CA in the browser. The certificate can be fetched from https://<engine_url>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA
2018-09-11 11:13:50,220+08 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-6) [2a9835b2-a08d-4efd-9fa0-208d14dd4bdc] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK with role type USER
2018-09-11 11:13:54,080+08 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-60) [02c871da-a599-4201-9dd5-92a468dee952] Transfer was paused by system. Upload disk 'oVirt-toolsSetup-4.2-1.el7.centos.iso' (disk id: '97179509-65eb-4b45-ad8e-ce112cfd016a', image id: '6ca217cc-015e-4d00-872a-faf60a8954ac')
2018-09-11 11:13:54,196+08 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-6) [3742a3f4-4352-46c4-8480-0035e8d7a5fb] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK with role type USER
imageio-proxy:
(Thread-4 ) INFO 2018-09-11 11:12:32,476 auth:201:auth2:(delete_ticket) Deleting ticket u'9a8117e4-2ba0-40c3-ae3d-aadc43b7857d'
(Thread-5 ) INFO 2018-09-11 11:13:28,929 auth:187:auth2:(add_signed_ticket) Adding new ticket: <Ticket id=u'3564a630-be37-407f-802d-ce38a01cc104', url=u'https://172.22.224.21:54322' timeout=35999.070008039474 at 0x7fa30cdd6b50>
imageio-daemon:
2018-09-11 11:13:28,402 INFO (Thread-12) [tickets] [local] ADD ticket={u'uuid': u'3564a630-be37-407f-802d-ce38a01cc104', u'ops': [u'write'], u'url': u'file:///rhev/data-center/mnt/_data/ffc618b9-fd75-4ca2-ace4-15a1b7b58ecf/images/97179509-65eb-4b45-ad8e-ce112cfd016a/6ca217cc-015e-4d00-872a-faf60a8954ac', u'sparse': True, u'timeout': 300, u'size': 382971904}
I've been stuck on this for days and need some help!
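A few checks I can run (the ports are the oVirt defaults and the daemon address is taken from the proxy log above; they may differ in other setups):

# Is the proxy service up on the engine?
systemctl status ovirt-imageio-proxy

# Does the proxy port (54323 by default) answer and present a certificate
# signed by the engine CA that the browser must trust?
openssl s_client -connect engine.example.org:54323 </dev/null | head -n 20

# Can the proxy reach the daemon on the host (URL from the proxy log above)?
curl -kv https://172.22.224.21:54322/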
Ovirt and gluster deployment - host name
by florentl
Hello,
I want to deploy ovirt (hosted engine) and gluster on three servers. I
have thought about a configuration but I'm not sure that it's the most
suitable one.
All the servers have two 10 Gigabit Ethernet NICs.
I read that, ideally, I should dedicate one NIC to the storage and
another one to the system.
So I will have two network address spaces:
- for the system: 192.168.176.0/24, with a gateway configured
- for the storage: 10.255.255.0/24, without a gateway
Inside my hosts file on all the servers I will have:
192.168.176.1 server1
192.168.176.2 server2
192.168.176.3 server3
10.255.255.1 gluster1
10.255.255.2 gluster2
10.255.255.3 gluster3
When I start the gluster deployment I will use the gluster1, gluster2 and
gluster3 names, and after the gluster deployment, during the
hyperconverged setup, I will use the server1, server2 and server3 names (see the sketch below).
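To illustrate how I intend to use the two name sets (the volume name and brick paths below are just examples), the storage-network names would only ever appear on the gluster side:

# Gluster side: peers and bricks use the 10.255.255.0/24 names
gluster peer probe gluster2
gluster peer probe gluster3
gluster volume create engine replica 3 \
    gluster1:/gluster_bricks/engine/brick \
    gluster2:/gluster_bricks/engine/brick \
    gluster3:/gluster_bricks/engine/brick

# oVirt side: the hosts are added to the engine as server1/server2/server3
# on 192.168.176.0/24, while the storage domain paths keep pointing at the
# gluster* names.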
May I have your opinion about this deployment?
Thanks in advance for the help.
Florent LORNE