SPM and Task error ...
by Enrico
Hi all,
my oVirt cluster has three hypervisors running CentOS 7.5.1804, vdsm is
4.20.39.1-1.el7,
the oVirt engine is 4.2.4.5-1.el7, and the storage systems are HP MSA P2000 and
2050 (Fibre Channel).
I need to stop one of the hypervisors for maintenance, but this host is
currently the Storage Pool Manager (SPM).
For this reason I decided to manually select the SPM on one of the other
nodes, but this operation is not
successful.
In the oVirt engine (engine.log) the error is:
2019-07-25 12:39:16,744+02 INFO
[org.ovirt.engine.core.bll.storage.pool.ForceSelectSPMCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] Running command:
ForceSelectSPMCommand internal: false. Entities affected : ID:
81c9bd3c-ae0a-467f-bf7f-63ab30cd8d9e Type: VDSAction group
MANIPULATE_HOST with role type ADMIN
2019-07-25 12:39:16,745+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopOnIrsVDSCommand(
SpmStopOnIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false'}), log id: 37bf4639
2019-07-25 12:39:16,747+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
ResetIrsVDSCommand(
ResetIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
vdsId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
ignoreStopFailed='false'}), log id: 2522686f
2019-07-25 12:39:16,749+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopVDSCommand(HostName = infn-vm05.management,
SpmStopVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7'}), log id: 1810fd8b
2019-07-25 12:39:16,758+02 *ERROR*
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] SpmStopVDSCommand::Not
stopping SPM on vds 'infn-vm05.management', pool id
'18d57688-6ed4-43b8-bd7c-0665b55950b7' as there are uncleared tasks
'Task 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e', status 'running''
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopVDSCommand, log id: 1810fd8b
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
ResetIrsVDSCommand, log id: 2522686f
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopOnIrsVDSCommand, log id: 37bf4639
2019-07-25 12:39:16,760+02 *ERROR*
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] EVENT_ID:
USER_FORCE_SELECTED_SPM_STOP_FAILED(4,096), Failed to force select
infn-vm07.management as the SPM due to a failure to stop the current SPM.
while on the hypervisor (the SPM), vdsm.log shows:
2019-07-25 12:39:16,744+02 INFO
[org.ovirt.engine.core.bll.storage.pool.ForceSelectSPMCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] Running command:
ForceSelectSPMCommand internal: false. Entities affected : ID:
81c9bd3c-ae0a-467f-bf7f-63ab30cd8d9e Type: VDSAction group
MANIPULATE_HOST with role type ADMIN
2019-07-25 12:39:16,745+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopOnIrsVDSCommand(
SpmStopOnIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false'}), log id: 37bf4639
2019-07-25 12:39:16,747+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
ResetIrsVDSCommand(
ResetIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
vdsId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
ignoreStopFailed='false'}), log id: 2522686f
2019-07-25 12:39:16,749+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopVDSCommand(HostName = infn-vm05.management,
SpmStopVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7'}), log id: 1810fd8b
2019-07-25 12:39:16,758+02 *ERROR*
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] SpmStopVDSCommand::Not
stopping SPM on vds 'infn-vm05.management', pool id
'18d57688-6ed4-43b8-bd7c-0665b55950b7' as there are uncleared tasks
'Task 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e', status 'running''
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopVDSCommand, log id: 1810fd8b
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
ResetIrsVDSCommand, log id: 2522686f
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopOnIrsVDSCommand, log id: 37bf4639
2019-07-25 12:39:16,760+02 *ERROR*
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] EVENT_ID:
USER_FORCE_SELECTED_SPM_STOP_FAILED(4,096), Failed to force select
infn-vm07.management as the SPM due to a failure to stop the current SPM.
2019-07-25 12:39:18,660+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,660+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,750+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,750+02 *ERROR*
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
BaseAsyncTask::logEndTaskFailure: Task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' (Parent Command 'Unknown',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
with failure:
2019-07-25 12:39:18,750+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,751+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 34ae2b2f
2019-07-25 12:39:18,752+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: d3a78ad
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: d3a78ad
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: 34ae2b2f
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
BaseAsyncTask::onTaskEndSuccess: Task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' (Parent Command 'Unknown',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
successfully.
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 42de0c2b
2019-07-25 12:39:18,759+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 4895c79c
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: 4895c79c
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: 42de0c2b
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Cleaning zombie
tasks: Clearing async task 'Unknown' that started at 'Fri May 03
14:48:50 CEST 2019'
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,765+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: da77af2
2019-07-25 12:39:18,766+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 530694fb
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: 530694fb
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: da77af2
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
There seems to be some relation between this error and a task that has
remained hanging. From the SPM server:
# vdsm-client Task getInfo taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
{
"verb": "prepareMerge",
"id": "fdcf4d1b-82fe-49a6-b233-323ebe568f8e"
}
# vdsm-client Task getStatus taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
{
"message": "running job 1 of 1",
"code": 0,
"taskID": "fdcf4d1b-82fe-49a6-b233-323ebe568f8e",
"taskResult": "",
"taskState": "running"
}
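Would it be safe to stop and then clear this task manually on the SPM host
before retrying the SPM selection? Something like this (only a guess on my
side, not yet tried, and I'm not sure those verbs are safe to use on a live
prepareMerge task):
# vdsm-client Task stop taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
# vdsm-client Task clear taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e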
How can I solve this problem?
Thanks a lot for your help!!
Best Regards
Enrico
--
_______________________________________________________________________
Enrico Becchetti Servizio di Calcolo e Reti
Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY)
Phone:+39 075 5852777 Mail: Enrico.Becchetti<at>pg.infn.it
_______________________________________________________________________
5 months, 3 weeks
Changing disk QoS causes segfault with IO-Threads enabled (oVirt 4.3.0.4-1.el7)
by jloh@squiz.net
We recently upgraded to 4.3.0 and have found that changing disk QoS settings on VMs while IO-Threads is enabled causes them to segfault and the VM to reboot. We've been able to replicate this across several VMs. VMs with IO-Threads disabled do not segfault when changing the QoS.
Mar 1 11:49:06 srvXX kernel: IO iothread1[30468]: segfault at fffffffffffffff8 ip 0000557649f2bd24 sp 00007f80de832f60 error 5 in qemu-kvm[5576498dd000+a03000]
Mar 1 11:49:06 srvXX abrt-hook-ccpp: invalid number 'iothread1'
Mar 1 11:49:11 srvXX libvirtd: 2019-03-01 00:49:11.116+0000: 13365: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
Happy to supply more logs if they'll help, but I'm just wondering whether anyone else has experienced this or knows of a current fix other than turning IO-Threads off.
Cheers.
1 year
Unable to install oVirt on RHEL7.5
by SS00514758@techmahindra.com
Hi All,
I am unable to install oVirt on RHEL 7.5. To install it I am following the link below:
https://www.ovirt.org/documentation/install-guide/chap-Installing_oVirt.html
However, it is not working for me: a couple of dependencies are not getting installed, and because of this I am not able to run ovirt-engine. Below are the dependency packages that fail to install:
Error: Package: collectd-write_http-5.8.0-6.1.el7.x86_64 (@ovirt-4.2-centos-opstools)
Requires: collectd(x86-64) = 5.8.0-6.1.el7
Removing: collectd-5.8.0-6.1.el7.x86_64 (@ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-6.1.el7
Updated By: collectd-5.8.1-1.el7.x86_64 (epel)
collectd(x86-64) = 5.8.1-1.el7
Available: collectd-5.7.2-1.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.7.2-1.el7
Available: collectd-5.7.2-3.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.7.2-3.el7
Available: collectd-5.8.0-2.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-2.el7
Available: collectd-5.8.0-3.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-3.el7
Available: collectd-5.8.0-5.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-5.el7
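(I wonder whether the conflict comes from the newer collectd update in EPEL. Would excluding it be a valid workaround? Just a guess on my side, something like:
# yum install ovirt-engine --disablerepo=epel
or adding exclude=collectd* to the [epel] section of /etc/yum.repos.d/epel.repo.)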
Please help me to install this.
Looking forward to resolving this issue.
Regards
Sumit Sahay
1 year, 4 months
Re: Failed to synchronize networks of Provider ovirt-provider-ovn
by Mail SET Inc. Group
Yes, I used the same manual to change the WebUI SSL.
ovirt-ca-file= points to the same SSL file that the WebUI uses.
Yes, I restarted ovirt-provider-ovn, I restarted the engine, I restarted everything I could restart. Nothing...
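Maybe it's also worth checking whether the CA file actually validates the engine's web certificate, for example with something like this (just a guess, not a verified procedure):
# openssl s_client -connect engine.set.local:443 -CAfile /etc/pki/ovirt-engine/apache-ca.pem </dev/null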
> On 12 Sep 2018, at 16:11, Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Wed, 12 Sep 2018 14:23:54 +0300
> "Mail SET Inc. Group" <mail(a)set-pro.net> wrote:
>
>> Ok!
>
> Not exactly, please use users(a)ovirt.org for such questions.
> Others should benefit from these questions, too.
> Please write the next mail to users(a)ovirt.org and keep me in CC.
>
>> What i did:
>>
>> 1) install oVirt out of the box (4.2.5.2-1.el7);
>> 2) generate my own ssl for my engine using my FreeIPA CA, Install it and
>
> What does "Install it" mean? You can use the doc from the following link
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/...
>
> Ensure that ovirt-ca-file= in
> /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf
> points to the correct file and ovirt-provider-ovn is restarted.
>
>> get this issue;
>>
>>
>> [root@engine ~]# tail -n 50 /var/log/ovirt-provider-ovn.log
>> 2018-09-12 14:10:23,828 root [SSL: CERTIFICATE_VERIFY_FAILED]
>> certificate verify failed (_ssl.c:579) Traceback (most recent call
>> last): File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py",
>> line 133, in _handle_request method, path_parts, content
>> File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py",
>> line 175, in handle_request return
>> self.call_response_handler(handler, content, parameters) File
>> "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in
>> call_response_handler return response_handler(content, parameters)
>> File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py",
>> line 62, in post_tokens user_password=user_password) File
>> "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26, in
>> create_token return auth.core.plugin.create_token(user_at_domain,
>> user_password) File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", line
>> 48, in create_token timeout=self._timeout()) File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 75,
>> in create_token username, password, engine_url, ca_file, timeout)
>> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line
>> 91, in _get_sso_token timeout=timeout File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 54,
>> in wrapper response = func(*args, **kwargs) File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 47,
>> in wrapper raise BadGateway(e) BadGateway: [SSL:
>> CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
>>
>>
>> [root@engine ~]# tail -n 20 /var/log/ovirt-engine/engine.log
>> 2018-09-12 14:10:23,773+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685] Lock
>> Acquired to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}' 2018-09-12 14:10:23,778+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685]
>> Running command: SyncNetworkProviderCommand internal: true.
>> 2018-09-12 14:10:23,836+03 ERROR
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685]
>> Command
>> 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand'
>> failed: EngineException: (Failed with error Bad Gateway and code
>> 5050) 2018-09-12 14:10:23,837+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685] Lock
>> freed to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}' 2018-09-12 14:14:12,477+03 INFO
>> [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default
>> task-6) [] User admin@internal successfully logged in with scopes:
>> ovirt-app-admin ovirt-app-api ovirt-app-portal
>> ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all
>> ovirt-ext=token-info:authz-search
>> ovirt-ext=token-info:public-authz-search
>> ovirt-ext=token-info:validate ovirt-ext=token:password-access
>> 2018-09-12 14:14:12,587+03 INFO
>> [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default
>> task-6) [1bf1b763] Running command: CreateUserSessionCommand
>> internal: false. 2018-09-12 14:14:12,628+03 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-6) [1bf1b763] EVENT_ID: USER_VDC_LOGIN(30), User
>> admin@internal-authz connecting from '10.0.3.61' using session
>> 's8jAm7BUJGlicthm6yZBA3CUM8QpRdtwFaK3M/IppfhB3fHFB9gmNf0cAlbl1xIhcJ2WX+ww7e71Ri+MxJSsIg=='
>> logged in. 2018-09-12 14:14:30,972+03 INFO
>> [org.ovirt.engine.core.bll.provider.ImportProviderCertificateCommand]
>> (default task-6) [ee3cc8a7-4485-4fdf-a0c2-e9d67b5cfcd3] Running
>> command: ImportProviderCertificateCommand internal: false. Entities
>> affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
>> SystemAction group CREATE_STORAGE_POOL with role type ADMIN
>> 2018-09-12 14:14:30,982+03 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-6) [ee3cc8a7-4485-4fdf-a0c2-e9d67b5cfcd3] EVENT_ID:
>> PROVIDER_CERTIFICATE_IMPORTED(213), Certificate for provider
>> ovirt-provider-ovn was imported. (User: admin@internal-authz)
>> 2018-09-12 14:14:31,006+03 INFO
>> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
>> (default task-6) [a48d94ab-b0b2-42a2-a667-0525b4c652ea] Running
>> command: TestProviderConnectivityCommand internal: false. Entities
>> affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
>> SystemAction group CREATE_STORAGE_POOL with role type ADMIN
>> 2018-09-12 14:14:31,058+03 ERROR
>> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
>> (default task-6) [a48d94ab-b0b2-42a2-a667-0525b4c652ea] Command
>> 'org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand'
>> failed: EngineException: (Failed with error Bad Gateway and code
>> 5050) 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'default' is using 0 threads out of 1, 5 threads waiting for
>> tasks. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'engine' is using 0 threads out of 500, 16 threads waiting for
>> tasks and 0 tasks in queue. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'engineScheduled' is using 0 threads out of 100, 100 threads
>> waiting for tasks. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads
>> waiting for tasks. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'hostUpdatesChecker' is using 0 threads out of 5, 2 threads
>> waiting for tasks. 2018-09-12 14:15:23,843+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f] Lock
>> Acquired to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}' 2018-09-12 14:15:23,849+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f]
>> Running command: SyncNetworkProviderCommand internal: true.
>> 2018-09-12 14:15:23,900+03 ERROR
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f]
>> Command
>> 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand'
>> failed: EngineException: (Failed with error Bad Gateway and code
>> 5050) 2018-09-12 14:15:23,901+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f] Lock
>> freed to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}'
>>
>>
>> [root@engine ~]#
>> cat /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf #
>> This file is automatically generated by engine-setup. Please do not
>> edit manually [OVN REMOTE] ovn-remote=ssl:127.0.0.1:6641
>> [SSL]
>> https-enabled=true
>> ssl-cacert-file=/etc/pki/ovirt-engine/ca.pem
>> ssl-cert-file=/etc/pki/ovirt-engine/certs/ovirt-provider-ovn.cer
>> ssl-key-file=/etc/pki/ovirt-engine/keys/ovirt-provider-ovn.key.nopass
>> [OVIRT]
>> ovirt-sso-client-secret=Ms7Gw9qNT6IkXu7oA54tDmxaZDIukABV
>> ovirt-host=https://engine.set.local:443
>> ovirt-sso-client-id=ovirt-provider-ovn
>> ovirt-ca-file=/etc/pki/ovirt-engine/apache-ca.pem
>> [PROVIDER]
>> provider-host=engine.set.local
>>
>>
>>> On 12 Sep 2018, at 13:59, Dominik Holler <dholler(a)redhat.com>
>>> wrote:
>>>
>>> On Wed, 12 Sep 2018 13:04:53 +0300
>>> "Mail SET Inc. Group" <mail(a)set-pro.net> wrote:
>>>
>>>> Hello Dominik!
>>>> I have the same issue with the OVN provider and SSL
>>>> https://www.mail-archive.com/users@ovirt.org/msg47020.html
>>>> <https://www.mail-archive.com/users@ovirt.org/msg47020.html> But
>>>> changing the certificate does not help to resolve it. Maybe you can help me
>>>> with this?
>>>
>>> Sure. Can you please share the relevant lines of
>>> ovirt-provider-ovn.log and engine.log, and the information if you
>>> are using the certificates generated by engine-setup with
>>> users(a)ovirt.org ? Thanks,
>>> Dominik
>>>
>>
>
>
1 year, 6 months
engine-setup on 4.3.2 -> 4.3.3 fails during Engine schema refresh
by Edward Berger
I was trying to upgrade a hyperconverged oVirt hosted engine and failed in
the engine-setup command with these errors and warnings.
...
[ INFO ] Creating/refreshing Engine database schema
[ ERROR ] schema.sh: FATAL: Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema
refresh failed
...
[ INFO ] Yum Verify: 16/16: ovirt-engine-tools.noarch 0:4.3.3.5-1.el7 - e
[WARNING] Rollback of DWH database postponed to Stage "Clean up"
[ INFO ] Rolling back database schema
...
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
Attaching engine-setup logfile.
1 year, 10 months
Out-of-sync networks can only be detached
by Sakhi Hadebe
Hi,
I have a 3-node oVirt cluster. I have configured 2 logical networks:
ovirtmgmt and public. The public logical network is attached on only 2 nodes
and fails to attach on the 3rd node with the error below:
Invalid operation, out-of-sync network 'public' can only be detached.
Please help; I have been stuck on this for almost the whole day now. How do I fix
this error?
--
Regards,
Sakhi Hadebe
2 years, 2 months
Network Address Change
by Paul.LKW
Hi All:
I just had a case: I need to change the oVirt host and engine IP addresses
due to a data center decommission. I checked on the hosted-engine host and
there are some files I could change;
in ovirt-hosted-engine/hosted-engine.conf
ca_subject="O=simple.com, CN=1.2.3.4"
gateway=1.2.3.254
and of course I need to change the ovirtmgmt interface IP too. I think
just changing the above lines could do the trick, but where can I change
the other hosts' IPs in the cluster?
I think I will lose all the hosts once the hosted-engine host IP is
changed, as it is in a different subnet.
Is there any command line tool that could do that, or can someone with such
experience share it?
Best Regards,
Paul.LKW
2 years, 4 months
OVS switch type for hosted-engine
by Devin A. Bougie
Is it possible to set up a hosted engine using the OVS switch type instead of Legacy? If it's not possible to start out as OVS, instructions for switching from Legacy to OVS after the fact would be greatly appreciated.
Many thanks,
Devin
3 years
USB3 redirection
by Rik Theys
Hi,
I'm trying to assign a USB3 controller to a CentOS 7.4 VM in oVirt 4.1
with USB redirection enabled.
I've created the following file in /etc/ovirt-engine/osinfo.conf.d:
01-usb.properties with content
os.other.devices.usb.controller.value = nec-xhci
and have restarted ovirt-engine.
If I disable USB-support in the web interface for the VM, the xhci
controller is added to the VM (I can see it in the qemu-kvm
commandline), but usb redirection is not available.
If I enable USB-support in the UI, no xhci controller is added (only 4
uhci controllers).
Is there a way to make the controllers for usb redirection xhci controllers?
Regards,
Rik
--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>
3 years
OVN routing and firewalling in oVirt
by Gianluca Cecchi
Hello,
how do we manage routing between different OVN networks in oVirt?
And between OVN networks and physical ones?
Based on architecture read here:
http://openvswitch.org/support/dist-docs/ovn-architecture.7.html
I see terms for logical routers and gateway routers respectively, but how do
they apply to the oVirt configuration?
Do I have to choose between setting up a specialized VM or a physical one?
Is it applicable/advisable to put the gateway functionality on the oVirt
host itself?
Is there any security policy (like security groups in Openstack) to
implement?
Thanks,
Gianluca
3 years
Install hosted-engine - Task Get local VM IP failed
by florentl
Hi all,
I am trying to install hosted-engine on node ovirt-node-ng-4.2.3-0.20180518.
Every time I get stuck on:
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed":
true, "cmd": "virsh -r net-dhcp-leases default | grep -i
00:16:3e:6c:5a:91 | awk '{ print $5 }' | cut -f1 -d'/'", "delta":
"0:00:00.108872", "end": "2018-06-01 11:17:34.421769", "rc": 0, "start":
"2018-06-01 11:17:34.312897", "stderr": "", "stderr_lines": [],
"stdout": "", "stdout_lines": []}
I tried with a static IP address and with DHCP, but both failed.
To be more specific, I installed three nodes and deployed glusterfs with
the wizard. I'm in a nested virtualization environment for this lab
(VMware ESXi hypervisor).
My node IP is 192.168.176.40, and I want the hosted-engine VM to have
192.168.176.43.
Thanks,
Florent
3 years, 2 months
Host needs to be reinstalled after configuring power management
by Andrew DeMaria
Hi,
I am running ovirt 4.3 and have found the following action item immediately
after configuring power management for a host:
Host needs to be reinstalled as important configuration changes were
applied on it.
The thing is - I've just freshly installed this host and it seems strange
that I need to reinstall it.
Is there a better way to install a host and configure power management
without having to reinstall it after?
Thanks,
Andrew
3 years, 3 months
Import an exported VM using Ansible
by paolo@airaldi.it
Hello everybody!
I'm trying to automate a copy of a VM from one Datacenter to another using an Ansible playbook.
I'm able to:
- Create a snapshot of the source VM
- create a clone from the snapshot
- remove the snapshot
- attach an Export Domain
- export the clone to the Export Domain
- remove the clone
- detach the Export domain from the source Datacenter and attach to the destination.
Unfortunately I cannot find a module to:
- import the VM from the Export Domain
- delete the VM image from the Export Domain.
Any hint on how to do that?
Thanks in advance. Cheers.
Paolo
PS: if someone is interested I can share the playbook.
3 years, 3 months
Lots of storage.MailBox.SpmMailMonitor
by Fabrice Bacchella
My vdsm log files are huge:
-rw-r--r-- 1 vdsm kvm 1.8G Nov 22 11:32 vdsm.log
And this is just half an hour of logs:
$ head -1 vdsm.log
2018-11-22 11:01:12,132+0100 ERROR (mailbox-spm) [storage.MailBox.SpmMailMonitor] mailbox 2 checksum failed, not clearing mailbox, clearing new mail (data='...lots of data', expected='\xa4\x06\x08\x00') (mailbox:612)
I just upgraded vdsm:
$ rpm -qi vdsm
Name : vdsm
Version : 4.20.43
3 years, 4 months
Template for Ubuntu 18.04 Server Issues
by jeremy_tourville@hotmail.com
I have built a system as a template on oVirt. Specifically, Ubuntu 18.04 server.
I am noticing an issue when creating new vms from that template. I used the check box for "seal template" when creating the template.
When I create a new Ubuntu VM I am getting duplicate IP addresses for all the machines created from the template.
It seems like the checkbox doesn't fully function as intended. I would need to do further manual steps to clear up this issue.
Has anyone else noticed this behavior? Is this expected or have I missed something?
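(One guess on my side, not verified: since systemd-networkd/netplan derives the DHCP client identifier from the machine ID, maybe the machine ID also needs to be cleared in the template before sealing, e.g.:
sudo truncate -s 0 /etc/machine-id
sudo rm /var/lib/dbus/machine-id
sudo ln -s /etc/machine-id /var/lib/dbus/machine-id
so that each new VM regenerates a unique ID, and hence gets a unique DHCP lease, on first boot.)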
Thanks for your input!
3 years, 8 months
Libgfapi considerations
by Jayme
Are there currently any known issues with using libgfapi in the latest
stable version of ovirt in hci deployments? I have recently enabled it and
have noticed a significant (over 4x) increase in io performance on my vms.
I’m concerned however since it does not seem to be an ovirt default
setting. Is libgfapi considered safe and stable to use in ovirt 4.3 hci?
3 years, 10 months
poweroff and reboot with ovirt_vm ansible module
by Nathanaël Blanchet
Hello, is there a way to poweroff or reboot (without stopped and running
state) a vm with the ovirt_vm ansible module?
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
3 years, 12 months
supervdsm failing during network_caps
by Alan G
Hi,
I have issues with one host where supervdsm is failing in network_caps.
I see the following trace in the log.
MainProcess|jsonrpc/1::ERROR::2020-01-06 03:01:05,558::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) Error in network_caps
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 98, in wrapper
res = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 56, in network_caps
return netswitch.configurator.netcaps(compatibility=30600)
File "/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py", line 317, in netcaps
net_caps = netinfo(compatibility=compatibility)
File "/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py", line 325, in netinfo
_netinfo = netinfo_get(vdsmnets, compatibility)
File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/cache.py", line 150, in get
return _stringify_mtus(_get(vdsmnets))
File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/cache.py", line 59, in _get
ipaddrs = getIpAddrs()
File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/addresses.py", line 72, in getIpAddrs
for addr in nl_addr.iter_addrs():
File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/addr.py", line 33, in iter_addrs
with _nl_addr_cache(sock) as addr_cache:
File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/__init__.py", line 92, in _cache_manager
cache = cache_allocator(sock)
File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/libnl.py", line 469, in rtnl_addr_alloc_cache
raise IOError(-err, nl_geterror(err))
IOError: [Errno 16] Message sequence number mismatch
A restart of supervdsm will resolve the issue for a period, maybe 24 hours, then it will occur again. So I'm thinking it's resource exhaustion or a leak of some kind?
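(For clarity, the restart I mean is simply restarting the service, i.e. something like:
# systemctl restart supervdsmd
)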
Running 4.2.8.2 with VDSM at 4.20.46.
I've had a look through the bugzilla and can't find an exact match, closest was this one https://bugzilla.redhat.com/show_bug.cgi?id=1666123 which seems to be a RHV only fix.
Thanks,
Alan
4 years, 1 month
OVN and change of mgmt network
by Gianluca Cecchi
Hello,
I previously had OVN running on engine (as OVN provider with northd and
northbound and southbound DBs) and hosts (with OVN controller).
After changing the mgmt IP of the hosts (the engine has instead retained the
same IP), I executed the command again on them:
vdsm-tool ovn-config <ip_of_engine> <nel_local_ip_of_host>
Now I think I have to clean up some things, eg:
1) On engine
where I get these lines below
systemctl status ovn-northd.service -l
. . .
Sep 29 14:41:42 ovmgr1 ovsdb-server[940]: ovs|00005|reconnect|ERR|tcp:
10.4.167.40:37272: no response to inactivity probe after 5 seconds,
disconnecting
Oct 03 11:52:00 ovmgr1 ovsdb-server[940]: ovs|00006|reconnect|ERR|tcp:
10.4.167.41:52078: no response to inactivity probe after 5 seconds,
disconnecting
The two IPs are the old ones of two hosts.
It seems that a restart of the services has fixed this...
Can anyone confirm whether I have to do anything else?
2) On hosts (there are 3 hosts with OVN on ip 10.4.192.32/33/34)
where I currently have this output
[root@ov301 ~]# ovs-vsctl show
3a38c5bb-0abf-493d-a2e6-345af8aedfe3
Bridge br-int
fail_mode: secure
Port "ovn-1dce5b-0"
Interface "ovn-1dce5b-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.4.192.32"}
Port "ovn-ddecf0-0"
Interface "ovn-ddecf0-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.4.192.33"}
Port "ovn-fd413b-0"
Interface "ovn-fd413b-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.4.168.74"}
Port br-int
Interface br-int
type: internal
ovs_version: "2.7.2"
[root@ov301 ~]#
The IPs of kind 10.4.192.x are ok.
But there is a left-over from an old host I initially used for tests,
corresponding to 10.4.168.74, which doesn't exist anymore.
How can I clean records for 1) and 2)?
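For 2), my current guess (untested) would be to remove the stale chassis from
the OVN southbound DB on the engine, so that ovn-controller drops the old
geneve port, e.g.:
# ovn-sbctl show
# ovn-sbctl chassis-del <stale_chassis_name_or_uuid>
but I'm not sure this is the supported way.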
Thanks,
Gianluca
4 years, 3 months
encrypted GENEVE traffic
by Pavel Nakonechnyi
Dear oVirt Community,
From my understanding oVirt does not support Open vSwitch IPSEC tunneling for GENEVE traffic (which is described on pages http://docs.openvswitch.org/en/latest/howto/ipsec/ and http://docs.openvswitch.org/en/latest/tutorials/ipsec/).
Are there plans to introduce such support? (or explicitly not to..)
Is it possible to somehow manually configure such tunneling for existing virtual networks? (even in a limited way)
Alternatively, is it possible to deploy oVirt on top of the tunneled (i.e. via VXLAN, IPSec) interfaces? This will allow to encrypt all management traffic.
Such requirement arises when using oVirt deployment on third-party premises with untrusted network.
Thank in advance for any clarifications. :)
--
WBR, Pavel
+32478910884
4 years, 3 months
"gluster-ansible-roles is not installed on Host" error on Cockpit
by Hesham Ahmed
On a new 4.3.1 oVirt Node installation, when trying to deploy HCI
(also when trying adding a new gluster volume to existing clusters)
using Cockpit, an error is displayed "gluster-ansible-roles is not
installed on Host. To continue deployment, please install
gluster-ansible-roles on Host and try again". There is no package
named gluster-ansible-roles in the repositories:
[root@localhost ~]# yum install gluster-ansible-roles
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
package_upload, product-id, search-disabled-repos,
subscription-manager, vdsmupgrade
This system is not registered with an entitlement server. You can use
subscription-manager to register.
Loading mirror speeds from cached hostfile
* ovirt-4.3-epel: mirror.horizon.vn
No package gluster-ansible-roles available.
Error: Nothing to do
Uploading Enabled Repositories Report
Cannot upload enabled repos report, is this client registered?
This is due to a check introduced here:
https://gerrit.ovirt.org/#/c/98023/1/dashboard/src/helpers/AnsibleUtil.js
Changing the line from:
[ "rpm", "-qa", "gluster-ansible-roles" ], { "superuser":"require" }
to
[ "rpm", "-qa", "gluster-ansible" ], { "superuser":"require" }
resolves the issue. The above code snippet is installed at
/usr/share/cockpit/ovirt-dashboard/app.js on oVirt node and can be
patched by running "sed -i 's/gluster-ansible-roles/gluster-ansible/g'
/usr/share/cockpit/ovirt-dashboard/app.js && systemctl restart
cockpit"
4 years, 5 months
Error exporting into ova
by Gianluca Cecchi
Hello,
I'm playing with export_vm_as_ova.py downloaded from the examples github:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/export...
My environment is oVirt 4.3.3.7 with iSCSI storage domain.
It fails leaving an ova.tmp file
In webadmin gui:
Starting to export Vm enginecopy1 as a Virtual Appliance
7/19/1911:55:12 AM
VDSM ov301 command TeardownImageVDS failed: Cannot deactivate Logical
Volume: ('General Storage Exception: ("5 [] [\' Logical volume
fa33df49-b09d-4f86-9719-ede649542c21/0420ef47-0ad0-4cf9-babd-d89383f7536b
in
use.\']\\nfa33df49-b09d-4f86-9719-ede649542c21/[\'a7480dc5-b5ca-4cb3-986d-77bc12165be4\',
\'0420ef47-0ad0-4cf9-babd-d89383f7536b\']",)',)
7/19/1912:25:36 PM
Failed to export Vm enginecopy1 as a Virtual Appliance to path
/save_ova/base/dump/myvm2.ova on Host ov301
7/19/1912:25:37 PM
During export I have this qemu-img process creating the disk over the loop
device:
root 30878 30871 0 11:55 pts/2 00:00:00 su -p -c qemu-img convert
-T none -O qcow2
'/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b'
'/dev/loop1' vdsm
vdsm 30882 30878 10 11:55 ? 00:00:00 qemu-img convert -T none -O
qcow2
/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b
/dev/loop1
The ova.tmp file is getting filled while command runs
eg:
[root@ov301 ]# du -sh /save_ova/base/dump/myvm2.ova.tmp
416M /save_ova/base/dump/myvm2.ova.tmp
[root@ov301 sysctl.d]#
[root@ov301 sysctl.d]# du -sh /save_ova/base/dump/myvm2.ova.tmp
911M /save_ova/base/dump/myvm2.ova.tmp
[root@ov301 ]#
and the final generated / not completed file is in this state:
[root@ov301 ]# qemu-img info /save_ova/base/dump/myvm2.ova.tmp
image: /save_ova/base/dump/myvm2.ova.tmp
file format: raw
virtual size: 30G (32217446400 bytes)
disk size: 30G
[root@ov301 sysctl.d]#
But I notice that the timestamp of the file is about 67 minutes after start
of job and well after the notice of its failure....
[root@ov301 sysctl.d]# ll /save_ova/base/dump/
total 30963632
-rw-------. 1 root root 32217446400 Jul 19 13:02 myvm2.ova.tmp
[root@ov301 sysctl.d]#
[root@ov301 sysctl.d]# du -sh /save_ova/base/dump/myvm2.ova.tmp
30G /save_ova/base/dump/myvm2.ova.tmp
[root@ov301 sysctl.d]#
In engine.log the first error I see is 30 minutes after start
2019-07-19 12:25:31,563+02 ERROR
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]
(EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] Ansible
playbook execution failed: Timeout occurred while executing Ansible
playbook.
2019-07-19 12:25:31,563+02 INFO
[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor]
(EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] Ansible
playbook command has exited with value: 1
2019-07-19 12:25:31,564+02 ERROR
[org.ovirt.engine.core.bll.CreateOvaCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] Failed to
create OVA. Please check logs for more details:
/var/log/ovirt-engine/ova/ovirt-export-ova-ansible-20190719115531-ov301-2001ddf4.log
2019-07-19 12:25:31,565+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] START,
TeardownImageVDSCommand(HostName = ov301,
ImageActionsVDSCommandParameters:{hostId='8ef1ce6f-4e38-486c-b3a4-58235f1f1d06'}),
log id: 3d2246f7
2019-07-19 12:25:36,569+02 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-64) [2001ddf4] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ov301 command TeardownImageVDS
failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5
[] [\' Logical volume
fa33df49-b09d-4f86-9719-ede649542c21/0420ef47-0ad0-4cf9-babd-d89383f7536b
in
use.\']\\nfa33df49-b09d-4f86-9719-ede649542c21/[\'a7480dc5-b5ca-4cb3-986d-77bc12165be4\',
\'0420ef47-0ad0-4cf9-babd-d89383f7536b\']",)',)
In the Ansible playbook log file suggested above I don't see anything useful.
It ends with timestamps from when the script was launched.
Last lines are:
2019-07-19 11:55:33,877 p=5699 u=ovirt | TASK [ovirt-ova-export-pre-pack :
Retrieving the temporary path for the OVA file] ***
2019-07-19 11:55:34,198 p=5699 u=ovirt | changed: [ov301] => {
"changed": true,
"dest": "/save_ova/base/dump/myvm2.ova.tmp",
"gid": 0,
"group": "root",
"mode": "0600",
"owner": "root",
"secontext": "system_u:object_r:nfs_t:s0",
"size": 32217446912,
"state": "file",
"uid": 0
}
2019-07-19 11:55:34,204 p=5699 u=ovirt | TASK [ovirt-ova-pack : Run
packing script] *************************************
It seems to be a 30-minute timeout? But a timeout on what, the Ansible job?
Or possibly the implicit user session created when running the Python script?
The snapshot has been correctly deleted (as I see also in engine.log), I
don't see it in webadmin gui.
Any known problem?
Just as a test I executed it again at 14:24, and I see the same Ansible error at
14:54.
The snapshot gets deleted, while the qemu-img command still continues....
[root@ov301 sysctl.d]# ps -ef | grep qemu-img
root 13504 13501 0 14:24 pts/1 00:00:00 su -p -c qemu-img convert
-T none -O qcow2
'/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b'
'/dev/loop0' vdsm
vdsm 13505 13504 3 14:24 ? 00:01:26 qemu-img convert -T none -O
qcow2
/rhev/data-center/mnt/blockSD/fa33df49-b09d-4f86-9719-ede649542c21/images/59a4a324-4c99-4ff5-abb1-e9bbac83292a/0420ef47-0ad0-4cf9-babd-d89383f7536b
/dev/loop0
root 17587 24530 0 15:05 pts/0 00:00:00 grep --color=auto qemu-img
[root@ov301 sysctl.d]#
[root@ov301 sysctl.d]# du -sh /save_ova/base/dump/myvm2.ova.tmp
24G /save_ova/base/dump/myvm2.ova.tmp
[root@ov301 sysctl.d]# ll /save_ova/base/dump/myvm2.ova.tmp
-rw-------. 1 root root 32217446400 Jul 19 15:14
/save_ova/base/dump/myvm2.ova.tmp
[root@ov301 sysctl.d]#
and then it continues until the image copy completes, but at that point the job
has already aborted and so the composition of the OVA doesn't go
ahead... and I'm left with the ova.tmp file...
How can I extend the timeout?
Thanks in advance,
Gianluca
4 years, 8 months
deprecating export domain?
by Charles Kozler
Hello,
I recently read on this list from a redhat member that export domain is
either being deprecated or looking at being deprecated
To that end, can you share details? Can you share any notes/postings/bz's
that document this? I would imagine something like this would be discussed
in larger audience
This seems like a somewhat significant change to make and I am curious
where this is scheduled? Currently, a lot of my backups rely explicitly on
an export domain for online snapshots, so I'd like to plan accordingly
Thanks!
4 years, 8 months
Support for Shared SAS storage
by Vinícius Ferrão
Hello,
I have two compute nodes with SAS direct-attached storage sharing the same disks.
Looking at the supported types I can’t see this in the documentation: https://www.ovirt.org/documentation/admin-guide/chap-Storage.html
There is local storage in this documentation, but my case is two machines, both using SAS, connected to the same disks. It’s the VRTX hardware from Dell.
Is there any support for this? It should be just like Fibre Channel and iSCSI, but with SAS instead.
Thanks,
4 years, 9 months
Shutdown procedure for single host HCI Gluster
by Gianluca Cecchi
Hello,
I'm testing the single node HCI with ovirt-node-ng 4.3.9 iso.
Very nice and many improvements over the last time I tried it. Good!
I have a doubt related to shutdown procedure of the server.
Here below my steps:
- Shutdown all VMs (except engine)
- Put into maintenance data and vmstore domains
- Enable Global HA Maintenance
- Shutdown engine
- Shutdown hypervisor
It seems that the last step doesn't end and I had to brutally power off the
hypervisor.
Here the screenshot regarding infinite failure in unmounting
/gluster_bricks/engine
https://drive.google.com/file/d/1ee0HG21XmYVA0t7LYo5hcFx1iLxZdZ-E/view?us...
What would be the right step to do before the final shutdown of hypervisor?
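(My guess, not verified: the hang comes from the engine brick still being in
use, so perhaps the gluster processes should be stopped explicitly before the
poweroff, e.g.:
# systemctl stop glusterd
# /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
assuming that helper script is shipped with the gluster packages.)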
Thanks,
Gianluca
4 years, 9 months
Upgrade ovirt from 3.4 to 4.3
by lu.alfonsi@almaviva.it
Good morning,
I have a difficult environment with 20 hypervisors based on oVirt 3.4.3-1 and I would like to reach version 4.3. What are the best steps to achieve this objective?
Thanks in advance
Luigi
4 years, 11 months
Re: Single instance scaleup.
by Strahil
Hi Leo,
As you do not have a distributed volume, you can easily switch to replica 2 arbiter 1 or replica 3 volumes.
You can use the following for adding the bricks:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Ad...
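For example, something along these lines for the engine volume (only a sketch; the brick paths on the new hosts are my assumption, mirroring host 1):
gluster volume add-brick engine replica 3 192.168.80.192:/gluster_bricks/engine/engine 192.168.80.193:/gluster_bricks/engine/engine
and similarly with 'replica 3 arbiter 1' for the data volume, putting the arbiter brick on the third host.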
Best Regards,
Strahil Nikolov
On May 26, 2019 10:54, Leo David <leoalex(a)gmail.com> wrote:
>
> Hi Strahil,
> Thank you so much for your input!
>
> gluster volume info
>
>
> Volume Name: engine
> Type: Distribute
> Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.80.191:/gluster_bricks/engine/engine
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> performance.low-prio-threads: 32
> performance.strict-o-direct: off
> network.remote-dio: off
> network.ping-timeout: 30
> user.cifs: off
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> cluster.eager-lock: enable
> Volume Name: ssd-samsung
> Type: Distribute
> Volume ID: 76576cc6-220b-4651-952d-99846178a19e
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.80.191:/gluster_bricks/sdc/data
> Options Reconfigured:
> cluster.eager-lock: enable
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> user.cifs: off
> network.ping-timeout: 30
> network.remote-dio: off
> performance.strict-o-direct: on
> performance.low-prio-threads: 32
> features.shard: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> transport.address-family: inet
> nfs.disable: on
>
> The other two hosts will be 192.168.80.192/193 - this is a dedicated gluster network over a 10 Gb SFP+ switch.
> - host 2 will have an identical hardware configuration to host 1 (each disk is actually a raid0 array)
> - host 3 has:
> - 1 ssd for OS
> - 1 ssd - for adding to engine volume in a full replica 3
> - 2 ssd's in a raid 1 array to be added as arbiter for the data volume ( ssd-samsung )
> So the plan is to have "engine" scaled to a full replica 3, and "ssd-samsung" scaled to a replica 3 arbitrated volume.
>
>
>
>
> On Sun, May 26, 2019 at 10:34 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
>>
>> Hi Leo,
>>
>> Gluster is quite smart, but in order to provide any hints, can you provide the output of 'gluster volume info <glustervol>'?
>> If you have 2 more systems, keep in mind that it is best to mirror the storage on the second replica (2 disks on 1 machine -> 2 disks on the new machine), while for the arbiter this is not necessary.
>>
>> What is your network and NICs? Based on my experience, I can recommend at least 10 gbit/s interface(s).
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On May 26, 2019 07:52, Leo David <leoalex(a)gmail.com> wrote:
>>>
>>> Hello Everyone,
>>> Can someone help me to clarify this ?
>>> I have a single-node 4.2.8 installation (only two gluster storage domains - distributed single-drive volumes). Now I just got two identical servers and I would like to go for a 3-node bundle.
>>> Is it possible ( after joining the new nodes to the cluster ) to expand the existing volumes across the new nodes and change them to replica 3 arbitrated ?
>>> If so, could you share with me what would it be the procedure ?
>>> Thank you very much !
>>>
>>> Leo
>
>
>
> --
> Best regards, Leo David
4 years, 11 months
Failed to add storage domain
by thunderlight1@gmail.com
Hi!
I have installed oVirt using the ISO ovirt-node-ng-installer-4.3.2-2019031908.el7. I then ran the Hosted-Engine deployment through Cockpit.
I got an error when it tried to create the storage domain. It successfully mounted the NFS share on the host. Below is the error I got:
2019-04-14 10:40:38,967+0200 INFO ansible skipped {'status': 'SKIPPED', 'ansible_task': u'Check storage domain free space', 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-04-14 10:40:38,967+0200 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fb6918ad9d0> kwargs
2019-04-14 10:40:39,516+0200 INFO ansible task start {'status': 'OK', 'ansible_task': u'ovirt.hosted_engine_setup : Activate storage domain', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-04-14 10:40:39,516+0200 DEBUG ansible on_any args TASK: ovirt.hosted_engine_setup : Activate storage domain kwargs is_conditional:False
2019-04-14 10:40:41,923+0200 DEBUG var changed: host "localhost" var "otopi_storage_domain_details" type "<type 'dict'>" value: "{
"changed": false,
"exception": "Traceback (most recent call last):\n File \"/tmp/ansible_ovirt_storage_domain_payload_xSFxOp/__main__.py\", line 664, in main\n storage_domains_module.post_create_check(sd_id)\n File \"/tmp/ansible_ovirt_storage_domain_payload_xSFxOp/__main__.py\", line 526, in post_create_check\n id=storage_domain.id,\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in add\n return self._internal_add(storage_domain, headers, query, wait)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232, in _internal_add\n return future.wait() if wait else future\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in wait\n return self._code(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in callback\n self._check_fault(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in _check_fault\n self._raise_error(response
, body)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in _raise_error\n raise error\nError: Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400.\n",
"failed": true,
"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code is 400."
}"
2019-04-14 10:40:41,924+0200 DEBUG var changed: host "localhost" var "ansible_play_hosts" type "<type 'list'>" value: "[]"
2019-04-14 10:40:41,924+0200 DEBUG var changed: host "localhost" var "play_hosts" type "<type 'list'>" value: "[]"
2019-04-14 10:40:41,924+0200 DEBUG var changed: host "localhost" var "ansible_play_batch" type "<type 'list'>" value: "[]"
2019-04-14 10:40:41,924+0200 ERROR ansible failed {'status': 'FAILED', 'ansible_type': 'task', 'ansible_task': u'Activate storage domain', 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True, u\'exception\': u\'Traceback (most recent call last):\\n File "/tmp/ansible_ovirt_storage_domain_payload_xSFxOp/__main__.py", line 664, in main\\n storage_domains_module.post_create_check(sd_id)\\n File "/tmp/ansible_ovirt_storage_domain_payload_xSFxOp/__main__.py", line 526', 'task_duration': 2, 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
2019-04-14 10:40:41,924+0200 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7fb691843190> kwargs ignore_errors:None
2019-04-14 10:40:41,928+0200 INFO ansible stats {
"ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
"ansible_playbook_duration": "00:37 Minutes",
"ansible_result": "type: <type 'dict'>\nstr: {u'localhost': {'unreachable': 0, 'skipped': 6, 'ok': 23, 'changed': 1, 'failures': 1}}",
"ansible_type": "finish",
"status": "FAILED"
}
2019-04-14 10:40:41,928+0200 INFO SUMMARY:
Duration Task Name
-------- --------
[ < 1 sec ] Execute just a specific set of steps
[ 00:01 ] Force facts gathering
[ 00:01 ] Check local VM dir stat
[ 00:01 ] Obtain SSO token using username/password credentials
[ 00:01 ] Fetch host facts
[ < 1 sec ] Fetch cluster ID
[ 00:01 ] Fetch cluster facts
[ 00:01 ] Fetch Datacenter facts
[ < 1 sec ] Fetch Datacenter ID
[ < 1 sec ] Fetch Datacenter name
[ 00:02 ] Add NFS storage domain
[ 00:01 ] Get storage domain details
[ 00:01 ] Find the appliance OVF
[ 00:01 ] Parse OVF
[ < 1 sec ] Get required size
[ FAILED ] Activate storage domain
2019-04-14 10:40:41,928+0200 DEBUG ansible on_any args <ansible.executor.stats.AggregateStats object at 0x7fb69404eb90> kwargs
Any suggestions on how to fix this?
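For reference, a rough sketch of how the failing step can be inspected and retried by hand through the REST API; the engine FQDN, the credentials and the data center name below are placeholders, not values from the log:

ENGINE=https://engine.example.com/ovirt-engine/api
# list the data center and its attached storage domains to see their status
curl -ks -u 'admin@internal:password' "$ENGINE/datacenters?search=name%3DDefault"
curl -ks -u 'admin@internal:password' "$ENGINE/datacenters/<dc_id>/storagedomains"
# retry the activation the Ansible task performs; engine.log on the engine VM
# usually carries a more detailed fault than the empty "[]" returned here
curl -ks -u 'admin@internal:password' -X POST -H 'Content-Type: application/xml' \
  -d '<action/>' "$ENGINE/datacenters/<dc_id>/storagedomains/<sd_id>/activate"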
4 years, 11 months
How to connect to a guest with vGPU ?
by Josep Manel Andrés Moscardó
Hi,
I got vGPU through mdev working but I am wondering how I would connect
to the client and make use of the GPU. So far I have tried to access the
console through SPICE, but at some point in the boot process it switches
to the GPU and I cannot see anything else.
Thanks.
--
Josep Manel Andrés Moscardó
Systems Engineer, IT Operations
EMBL Heidelberg
T +49 6221 387-8394
4 years, 11 months
Issues encountered performing HE install on 4.3
by Alan G
Hi,
I hit a few issues while performing a recent HE install of 4.3. While I managed to find solutions/workarounds to all the problems, I thought I might share them here.
* As defined in the Ansible defaults, the temp dir for building the local HE VM is /var/tmp. I was 80M short of the required space and there did not appear to be a (supported) way to specify a different location. I ended up having to do a bind mount over /var/tmp to get me through the install (a sketch of this workaround follows this list). It would be nice to be able to specify a custom location.
* Permissive umask required. Our CIS CentOS 7 build requires that the default umask is 027. This breaks the installer, as it creates the VM image under /var/tmp as root and cannot then access it as the qemu user. As the temp files are cleaned up on failure, it took me a while to track this one down. My solution was to temporarily set the umask to 022 for the session while running the installer. It would be nice if the installer either handled this by doing a chmod/chown as required, or at least did a umask pre-check and failed with a meaningful error.
* SSH root login required on the host. Again for CIS we have "PermitRootLogin no" configured in sshd. This means the add-host task fails on the Engine, but instead of a hard failure we get a timeout on the installer, which left me chasing some imagined routing/bridging/DNS issue. Eventually I realised I could get to the engine logs and found the issue, but it took several hours. It would be nice if the installer could either support a sudo option or at least perform a root-login pre-check and fail with a meaningful error.
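For reference, a rough sketch of the first two workarounds described above (the spare directory is only an example path, not what was actually used):

mkdir -p /home/hetmp                 # any filesystem with enough free space
mount --bind /home/hetmp /var/tmp    # give the installer a larger temp dir
umask 022                            # relax the CIS umask for this session only
hosted-engine --deploy
umount /var/tmp                      # undo the bind mount afterwards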
Thanks,
Alan
4 years, 11 months
Ubuntu 18.04 and 16.04 cloud images hang at boot up
by suaro@live.com
I'm using oVirt 4.3 (latest) and am able to successfully provision CentOS VMs without any problems.
When I attempt to provision Ubuntu VMs, they hang at startup.
The console shows :
...
...
[ 4.010016] Btrfs loaded
[ 101.268594] random: nonblocking pool is initialized
It stays like this indefinitely.
Again, I have no problems with CentOS images, but I need Ubuntu.
Any tips greatly appreciated.
5 years
Re: vm console problem
by David David
Tested on four different workstations with: Fedora 20, Fedora 31 and
Windows 10 (remote-manager, latest version).
On Sun, 29 Mar 2020 at 12:39, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
> On March 29, 2020 9:47:02 AM GMT+03:00, David David <dd432690(a)gmail.com>
> wrote:
> >I did as you said:
> >copied from engine /etc/ovirt-engine/ca.pem onto my desktop into
> >/etc/pki/ca-trust/source/anchors and then run update-ca-trust
> >it didn’t help, still the same errors
> >
> >
> >On Fri, 27 Mar 2020 at 21:56, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
> >
> >> On March 27, 2020 12:23:10 PM GMT+02:00, David David
> ><dd432690(a)gmail.com>
> >> wrote:
> >> >here is debug from opening console.vv by remote-viewer
> >> >
> >> >2020-03-27 14:09 GMT+04:00, Milan Zamazal <mzamazal(a)redhat.com>:
> >> >> David David <dd432690(a)gmail.com> writes:
> >> >>
> >> >>> yes i have
> >> >>> console.vv attached
> >> >>
> >> >> It looks the same as mine.
> >> >>
> >> >> There is a difference in our logs, you have
> >> >>
> >> >> Possible auth 19
> >> >>
> >> >> while I have
> >> >>
> >> >> Possible auth 2
> >> >>
> >> >> So I still suspect a wrong authentication method is used, but I
> >don't
> >> >> have any idea why.
> >> >>
> >> >> Regards,
> >> >> Milan
> >> >>
> >> >>> 2020-03-26 21:38 GMT+04:00, Milan Zamazal <mzamazal(a)redhat.com>:
> >> >>>> David David <dd432690(a)gmail.com> writes:
> >> >>>>
> >> >>>>> copied from qemu server all certs except "cacrl" to my
> >> >desktop-station
> >> >>>>> into /etc/pki/
> >> >>>>
> >> >>>> This is not needed, the CA certificate is included in console.vv
> >> >and no
> >> >>>> other certificate should be needed.
> >> >>>>
> >> >>>>> but remote-viewer is still didn't work
> >> >>>>
> >> >>>> The log looks like remote-viewer is attempting certificate
> >> >>>> authentication rather than password authentication. Do you have
> >> >>>> password in console.vv? It should look like:
> >> >>>>
> >> >>>> [virt-viewer]
> >> >>>> type=vnc
> >> >>>> host=192.168.122.2
> >> >>>> port=5900
> >> >>>> password=fxLazJu6BUmL
> >> >>>> # Password is valid for 120 seconds.
> >> >>>> ...
> >> >>>>
> >> >>>> Regards,
> >> >>>> Milan
> >> >>>>
> >> >>>>> 2020-03-26 2:22 GMT+04:00, Nir Soffer <nsoffer(a)redhat.com>:
> >> >>>>>> On Wed, Mar 25, 2020 at 12:45 PM David David
> ><dd432690(a)gmail.com>
> >> >>>>>> wrote:
> >> >>>>>>>
> >> >>>>>>> ovirt 4.3.8.2-1.el7
> >> >>>>>>> gtk-vnc2-1.0.0-1.fc31.x86_64
> >> >>>>>>> remote-viewer version 8.0-3.fc31
> >> >>>>>>>
> >> >>>>>>> can't open vm console by remote-viewer
> >> >>>>>>> vm has vnc console protocol
> >> >>>>>>> when click on console button to connect to a vm, the
> >> >remote-viewer
> >> >>>>>>> console disappear immediately
> >> >>>>>>>
> >> >>>>>>> remote-viewer debug in attachment
> >> >>>>>>
> >> >>>>>> You have an issue with the certificates:
> >> >>>>>>
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.238:
> >> >>>>>> ../src/vncconnection.c Set credential 2 libvirt
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> >> >>>>>> ../src/vncconnection.c Searching for certs in /etc/pki
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> >> >>>>>> ../src/vncconnection.c Searching for certs in /root/.pki
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> >> >>>>>> ../src/vncconnection.c Failed to find certificate
> >CA/cacert.pem
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> >> >>>>>> ../src/vncconnection.c No CA certificate provided, using
> >GNUTLS
> >> >global
> >> >>>>>> trust
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> >> >>>>>> ../src/vncconnection.c Failed to find certificate CA/cacrl.pem
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> >> >>>>>> ../src/vncconnection.c Failed to find certificate
> >> >>>>>> libvirt/private/clientkey.pem
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> >> >>>>>> ../src/vncconnection.c Failed to find certificate
> >> >>>>>> libvirt/clientcert.pem
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> >> >>>>>> ../src/vncconnection.c Waiting for missing credentials
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> >> >>>>>> ../src/vncconnection.c Got all credentials
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
> >> >>>>>> ../src/vncconnection.c No CA certificate provided; trying the
> >> >system
> >> >>>>>> trust store instead
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
> >> >>>>>> ../src/vncconnection.c Using the system trust store and CRL
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
> >> >>>>>> ../src/vncconnection.c No client cert or key provided
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
> >> >>>>>> ../src/vncconnection.c No CA revocation list provided
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.241:
> >> >>>>>> ../src/vncconnection.c Handshake was blocking
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.243:
> >> >>>>>> ../src/vncconnection.c Handshake was blocking
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.251:
> >> >>>>>> ../src/vncconnection.c Handshake was blocking
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
> >> >>>>>> ../src/vncconnection.c Handshake done
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
> >> >>>>>> ../src/vncconnection.c Validating
> >> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.301:
> >> >>>>>> ../src/vncconnection.c Error: The certificate is not trusted
> >> >>>>>>
> >> >>>>>> Adding people that may know more about this.
> >> >>>>>>
> >> >>>>>> Nir
> >> >>>>>>
> >> >>>>>>
> >> >>>>
> >> >>>>
> >> >>
> >> >>
> >>
> >> Hello,
> >>
> >> You can try to take the engine's CA (maybe it's useless) and put it
> >on
> >> your system in:
> >> /etc/pki/ca-trust/source/anchors (if it's EL7 or a Fedora) and then
> >run
> >> update-ca-trust
> >>
> >> Best Regards,
> >> Strahil Nikolov
> >>
>
> Hey David,
>
> What is you workstation's OS ?
> Also, have you tried from another workstation ?
>
> Best Regards,
> Strahil Nikolov
>
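For reference, this is a minimal sketch of the CA-trust steps discussed above, assuming an EL7/Fedora desktop and a placeholder engine hostname:

scp root@engine.example.com:/etc/ovirt-engine/ca.pem \
    /etc/pki/ca-trust/source/anchors/ovirt-engine-ca.pem
update-ca-trust extract
remote-viewer --debug console.vv    # re-test and capture the gtk-vnc output again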
5 years
Sometimes paused due to unknown storage error on gluster
by Gianluca Cecchi
Hello,
having deployed oVirt 4.3.9 single-host HCI with Gluster, I sometimes see a
VM going into the paused state with the error above and needing to be manually
run again (sometimes this resume operation fails).
Actually it only happened with an empty (thin provisioned) disk and sudden
high I/O during the initial phase of the OS install; it didn't happen
during normal operation (even with 600MB/s of throughput).
I suspect something related to metadata extension not being able to keep
pace with the speed at which the physical disk grows... similar to what happens for
block-based storage domains, where the LVM layer has to extend the logical
volume representing the virtual disk.
My real-world reproduction of the error is during the install of an OCP 4.3.8
master node, when Red Hat CoreOS boots from the network, wipes the disk
and then, I think, transfers an image, so doing high immediate I/O.
The VM used as the master node was created with a 120GB thin provisioned
disk (virtio-scsi type) and starts with the disk just initialized and empty,
going through a PXE install.
I get this line inside events for the VM
Mar 27, 2020, 12:35:23 AM VM master01 has been paused due to unknown
storage error.
Here logs around the time frame above:
- engine.log
https://drive.google.com/file/d/1zpNo5IgFVTAlKXHiAMTL-uvaoXSNMVRO/view?us...
- vdsm.log
https://drive.google.com/file/d/1v8kR0N6PdHBJ5hYzEYKl4-m7v1Lb_cYX/view?us...
Any suggestions?
The disk of the VM is on vmstore storage domain and its gluster volume
settings are:
[root@ovirt tmp]# gluster volume info vmstore
Volume Name: vmstore
Type: Distribute
Volume ID: a6203d77-3b9d-49f9-94c5-9e30562959c4
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: ovirtst.mydomain.storage:/gluster_bricks/vmstore/vmstore
Options Reconfigured:
performance.low-prio-threads: 32
storage.owner-gid: 36
performance.read-ahead: off
user.cifs: off
storage.owner-uid: 36
performance.io-cache: off
performance.quick-read: off
network.ping-timeout: 30
features.shard: on
network.remote-dio: off
cluster.eager-lock: enable
performance.strict-o-direct: on
transport.address-family: inet
nfs.disable: on
[root@ovirt tmp]#
What about the config above? Are there any optimizations to be done based
on having a single host? (See also the command sketch after the listings below.)
And comparing with the virt group of options:
[root@ovirt tmp]# cat /var/lib/glusterd/groups/virt
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
cluster.eager-lock=enable
cluster.quorum-type=auto
cluster.server-quorum-type=server
cluster.data-self-heal-algorithm=full
cluster.locking-scheme=granular
cluster.shd-max-threads=8
cluster.shd-wait-qlength=10000
features.shard=on
user.cifs=off
cluster.choose-local=off
client.event-threads=4
server.event-threads=4
performance.client-io-threads=on
[root@ovirt tmp]#
?
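For reference, a couple of gluster commands that may help here; the first compares the live settings, the second applies the whole upstream virt group in one step. The volume name is the one above; this is only a sketch, not a recommendation:

gluster volume get vmstore all | grep -E 'remote-dio|strict-o-direct|shard|eager-lock|quorum'
gluster volume set vmstore group virt    # applies every option in /var/lib/glusterd/groups/virt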
Thanks Gianluca
5 years, 1 month
DR on hyperconverged deployment
by wodel youchi
Hi,
I need to understand some things about DR on oVirt-HI:
- What does "Scheduling regular backups using geo-replication" mean
(point 3.3.4 of the RHHI 1.7 "Maintaining RHHI" doc)? (See the command sketch below.)
- Does this mean creating a check-point?
- If yes, does this mean that the geo-replication process will sync
data up to that check-point, then stop the synchronization and repeat
the same cycle the day after? Does this mean that the minimum RPO is one
day?
- I created a snapshot of a VM on the source Manager, synced the
volume and then executed a DR. The VM was started on the target Manager, but
the VM didn't have its snapshot. Any idea?
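For context, a hedged sketch of the gluster geo-replication commands that the scheduled-backup procedure is built on; the volume and slave names are placeholders:

gluster volume geo-replication vmstore backup.example.com::vmstore config checkpoint now
gluster volume geo-replication vmstore backup.example.com::vmstore status detail
gluster volume geo-replication vmstore backup.example.com::vmstore stop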
Regards, be safe.
5 years, 1 month
Ovirt and Dell Compellent in ISCSI
by dalmasso@cines.fr
hi all,
we use oVirt 4.3 on Dell R640 servers running CentOS 7.7 and a Dell Compellent SCv3020 storage array over iSCSI.
We use two 10Gb interfaces for the iSCSI connection on each Dell server.
If we configure the iSCSI connection directly from the web UI, we can’t specify the two physical Ethernet interfaces, and there are missing paths (only 4 paths out of 8).
So, in the shell of the hypervisor we use these commands to configure the connections:
iscsiadm -m iface -I em1 --op=new # 1st ethernet interface
iscsiadm -m iface -I p3p1 --op=new # 2d ethernet interface
iscsiadm -m discovery -t sendtargets -p xx.xx.xx.xx
iscsiadm -m node -o show
iscsiadm -m node --login
After this, in the web UI we can connect our LUN with all paths.
Also, I don’t understand how to configure multipath in the web UI. By default the configuration is failover:
multipath -ll :
36000d3100457e4000000000000000005 dm-3 COMPELNT,Compellent Vol
size=500G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
|- 23:0:0:1 sdb 8:16 active ready running
|- 24:0:0:1 sdd 8:48 active ready running
|- 25:0:0:1 sdc 8:32 active ready running
|- 26:0:0:1 sde 8:64 active ready running
|- 31:0:0:1 sdf 8:80 active ready running
|- 32:0:0:1 sdg 8:96 active ready running
|- 33:0:0:1 sdh 8:112 active ready running
|- 34:0:0:1 sdi 8:128 active ready running
I think round-robin or another policy would perform better.
So can we make this configuration (selecting the physical interfaces and configuring multipath) in the web UI, for easier maintenance and for adding other servers?
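For what it's worth, a hedged sketch of a local multipath override: VDSM owns /etc/multipath.conf, so custom settings normally go into /etc/multipath/conf.d/. The vendor/product strings are taken from the multipath -ll output above; the policy values are example choices, not a recommendation:

cat > /etc/multipath/conf.d/compellent.conf <<'EOF'
devices {
    device {
        vendor  "COMPELNT"
        product "Compellent Vol"
        path_grouping_policy "multibus"
        path_selector "round-robin 0"
    }
}
EOF
systemctl reload multipathd
multipath -ll    # all paths should now sit in one round-robin path group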
Thank you.
Sylvain.
5 years, 1 month
oVirt 4.4.0 Beta release is now available for testing
by Sandro Bonazzola
oVirt 4.4.0 Beta release is now available for testing
The oVirt Project is excited to announce the availability of the beta
release of oVirt 4.4.0 for testing, as of March 27th, 2020
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics on top of oVirt 4.3.
Important notes before you try it
Please note this is a Beta release.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
In particular, please note that upgrades from 4.3 to this beta, and future upgrades
from this beta to the final 4.4 release, are not supported.
Some of the features included in oVirt 4.4.0 Beta require content that will
be available in CentOS Linux 8.2 which are currently included in Red Hat
Enterprise Linux 8.2 beta. If you want to have a better experience you can
test oVirt 4.4.0 Beta on Red Hat Enterprise Linux 8.2 beta.
Known Issues
- ovirt-imageio development is still in progress. In this beta you can’t upload images to data domains. You can still copy iso images into the deprecated ISO domain for installing VMs.
Installation instructions
For the engine: either use appliance or:
- Install CentOS Linux 8 minimal from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
- dnf install
https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- dnf module enable -y javapackages-tools pki-deps 389-ds
- dnf install ovirt-engine
- engine-setup
For the nodes:
Either use oVirt Node ISO or:
- Install CentOS Linux 8 from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
; select minimal installation
- dnf install
https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- Attach the host to engine and let it be deployed.
What’s new in oVirt 4.4.0 Beta?
- Hypervisors based on CentOS Linux 8 (rebuilt from award winning RHEL8), for both oVirt Node and standalone CentOS Linux hosts
- Easier network management and configuration flexibility with NetworkManager
- VMs based on a more modern Q35 chipset with legacy seabios and UEFI firmware
- Support for direct passthrough of local host disks to VMs
- Live migration improvements for High Performance guests.
- New Windows Guest tools installer based on WiX framework now moved to VirtioWin project
- Dropped support for cluster level prior to 4.2
- Dropped SDK3 support
- 4K disks support
- Exporting a VM to a data domain
- Editing of floating disks
- Integrating ansible-runner into engine, which allows a more detailed monitoring of playbooks executed from engine
- Adding/reinstalling hosts are now completely based on Ansible
- The OpenStack Neutron Agent cannot be configured by oVirt anymore, it should be configured by TripleO instead
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.1 or newer
* CentOS Linux (or similar) 8.1 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.1 or newer
* CentOS Linux (or similar) 8.1 or newer
* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
If you manage more than one oVirt instance, OKD or RDO we also recommend to
try ManageIQ <http://manageiq.org/>.
In such a case, please be sure to take the qc2 image and not the ova image.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.0 release highlights:
http://www.ovirt.org/release/4.4.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.0/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
<https://www.redhat.com/en/summit?sc_cid=7013a000002D2QxAAK>
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
5 years, 1 month
Re: oVirt Storage quota problems
by Vrgotic, Marko
Dear oVirt,
Is there anyone willing and able to assist me in troubleshooting what seems to be multiple issues regarding quotas:
1. UI Exception each time I add Consumer to Quota.
2. Users getting a warning that they are not authorized to ‘ConsumeQuota’ when running AddVM or AddVMFromTemplate, even though the user is in the Consumers list.
3. Fishy quota usage percentage.
I see this strange behavior on all my platforms, so it’s starting to look like a bug to me.
Happy to provide any relevant logs or do tests needed.
Kindly awaiting your reply.
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
ActiveVideo
From: "Staniforth, Paul" <P.Staniforth(a)leedsbeckett.ac.uk>
Date: Friday, 13 March 2020 at 18:19
To: "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>, "users(a)ovirt.org" <users(a)ovirt.org>
Cc: Darko Stojchev <D.Stojchev(a)activevideo.com>
Subject: Re: oVirt Storage quota problems
Sorry, I haven't used quota in a while; one thought: is there a template quota getting used?
Regards,
Paul S.
________________________________
From: Vrgotic, Marko <M.Vrgotic(a)activevideo.com>
Sent: 13 March 2020 15:15
To: Staniforth, Paul <P.Staniforth(a)leedsbeckett.ac.uk>; users(a)ovirt.org <users(a)ovirt.org>
Cc: Stojchev, Darko <D.Stojchev(a)activevideo.com>
Subject: Re: oVirt Storage quota problems
Hey Paul,
I did the check for another user that has the same issue, and what I found is a bit strange:
* DC1 / infrastructure1 – quota storage_inf1 – user is added in Consumers list - gets “2020-03-13 08:26:18,057Z WARN [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (default task-3804) [a6ba871a-dc42-4d4b-a01a-a72cd5a45959] Validation of action 'AddVmFromTemplate' failed for user azabaleta@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA”
* DC1 / development2 – quota storage_dev2 – user is added in Consumers list – works
The following logs confirm that:
* Fails against infrastructure1 :
“2020-03-13 08:26:18,057Z INFO [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (default task-3804) [a6ba871a-dc42-4d4b-a01a-a72cd5a45959] No permission found for user '699687a1-da37-4b4
0-a86d-dc744208302d' or one of the groups he is member of, when running action 'AddVmFromTemplate', Required permissions are: Action type: 'USER' Action group: 'CONSUME_QUOTA' Object type: '
Quota' Object ID: '254f2582-839d-11e9-aaa2-00163e4f2a6d'.
2020-03-13 08:26:18,057Z WARN [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (default task-3804) [a6ba871a-dc42-4d4b-a01a-a72cd5a45959] Validation of action 'AddVmFromTemplate' failed for user azabaleta@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA
2020-03-13 08:27:03,426Z INFO [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (default task-3813) [1e11522e-b25a-4972-8ff8-1d9a08bd57ca] Lock Acquired to object 'EngineLock:{exclusiveLocks='[azabaleta-runner=VM_NAME]', sharedLocks='[a80c1f1f-9cd3-4e8c-bfe9-670aa36d2aff=DISK, 9c710118-bb68-45aa-bafd-0e90cb07b9cf=TEMPLATE]'}'
* Works against development2:
2020-03-13 08:27:04,323Z INFO [org.ovirt.engine.core.bll.AddGraphicsDeviceCommand] (default task-3813) [2cd83827] Running command: AddGraphicsDeviceCommand internal: true. Entities affected : ID: d75c0d37-4640-49b0-afd1-7db27541f4d4 Type: VMAction group EDIT_VM_PROPERTIES with role type USER
2020-03-13 08:27:04,379Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-3813) [2cd83827] EVENT_ID: USER_ADD_VM_STARTED(37), VM azabaleta-runner creation was initiated by azabaleta@ictv.com(a)ictv.com-authz.
2020-03-13 08:27:04,777Z INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-39) [1e11522e-b25a-4972-8ff8-1d9a08bd57ca] Command 'CreateCloneOfTemplate' (id: '806f2e38-fc42-45c3-87ab-20bec50c226c') waiting on child command id: 'd3826f75-1095-4b5f-98c3-973f4683616b' type:'CopyImageGroupWithData' to complete
2020-03-13 08:27:04,777Z INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-39) [1e11522e-b25a-4972-8ff8-1d9a08bd57ca] Command 'CopyImageGroupWithData' (id: 'd3826f75-1095-4b5f-98c3-973f4683616b') waiting on child command id: 'eb9c3360-00a8-4539-bda4-89d70d22da68' type:'CreateVolumeContainer' to complete
2020-03-13 08:27:05,003Z INFO [org.ovirt.engine.core.bll.network.vm.ReorderVmNicsCommand] (default task-3813) [e9f67a0f-aaf8-45a6-bdae-d501e757a3b6] Running command: ReorderVmNicsCommand internal: false. Entities affected : ID: d75c0d37-4640-49b0-afd1-7db27541f4d4 Type: VMAction group CREATE_VM with role type USER
Please assist, as it currently seems the Quota permissions are not working as expected. The Quotas storage_inf1 and storage_dev2 are configured in exactly the same way, except that they are for different data centers.
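For reference, a rough sketch of how the quota object ID from the log can be mapped back to a quota through the REST API (the engine URL, credentials and <dc_id> are placeholders):

curl -ks -u 'admin@internal:password' \
  'https://engine.example.com/ovirt-engine/api/datacenters/<dc_id>/quotas' \
  | grep -B 1 -A 3 '254f2582-839d-11e9-aaa2-00163e4f2a6d'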
From: "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>
Date: Thursday, 12 March 2020 at 20:19
To: "Staniforth, Paul" <P.Staniforth(a)leedsbeckett.ac.uk>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>, Darko Stojchev <D.Stojchev(a)activevideo.com>
Subject: Re: oVirt Storage quota problems
Hey Paul,
Thank you. I believe I already added him via the Consumers tab, but it does not hurt to double-check.
Sent from my iPhone
On 12 Mar 2020, at 17:53, Staniforth, Paul <P.Staniforth(a)leedsbeckett.ac.uk> wrote:
Hello Marko,
If you select Quota in the administration menu, then select the quota it is using, you will see the Consumers option, which will allow you to add users or groups.
https://www.ovirt.org/documentation/admin-guide/chap-Quotas_and_Service_L...
Regards,
Paul S.
________________________________
From: Vrgotic, Marko <M.Vrgotic(a)activevideo.com>
Sent: 12 March 2020 15:46
To: users(a)ovirt.org <users(a)ovirt.org>
Cc: Stojchev, Darko <D.Stojchev(a)activevideo.com>
Subject: [ovirt-users] Re: oVirt Storage quota problems
I found the following:
“2020-03-12 14:03:05,260Z INFO [org.ovirt.engine.core.bll.AddVmCommand] (default task-3613) [b9635e29-4b8a-44c4-9611-e5af362c783c] No permission found for user '7f906bbf-d194-425f-b313-08a777b764ab' or one of the groups he is member of, when running action 'AddVm', Required permissions are: Action type: 'USER' Action group: 'CONSUME_QUOTA' Object type: 'Quota' Object ID: '47b5e9f4-67a4-4b78-abfd-175e3d9de8da'.
2020-03-12 14:03:05,260Z WARN [org.ovirt.engine.core.bll.AddVmCommand] (default task-3613) [b9635e29-4b8a-44c4-9611-e5af362c783c] Validation of action 'AddVm' failed for user esuilen@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA”
Does anyone know which permission setting that is, or which section it belongs under, as I cannot find a clear check for it under the Roles definition?
Kindly awaiting your reply.
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
ActiveVideo
From: "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>
Date: Thursday, 12 March 2020 at 16:28
To: "users(a)ovirt.org" <users(a)ovirt.org>
Cc: Darko Stojchev <D.Stojchev(a)activevideo.com>
Subject: Re: oVirt Storage quota problems
Additionally, WARN entries from the log file for one of the users:
2020-03-12 14:03:05,218Z WARN [org.ovirt.engine.core.bll.AddVmCommand] (default task-3607) [2bb50857-d018-48aa-b4ca-ed6fb5cd76bf] Validation of action 'AddVm' failed for user esuilen@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA
2020-03-12 14:03:05,236Z WARN [org.ovirt.engine.core.bll.AddVmCommand] (default task-3583) [4ef095e5-59fa-4c73-9035-c5251372066d] Validation of action 'AddVm' failed for user esuilen@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA
2020-03-12 14:03:05,238Z WARN [org.ovirt.engine.core.bll.AddVmCommand] (default task-3608) [a571bd5d-98af-4a92-bf74-40681c208d32] Validation of action 'AddVm' failed for user esuilen@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA
2020-03-12 14:03:05,240Z WARN [org.ovirt.engine.core.bll.AddVmCommand] (default task-3599) [a77740ff-e725-4632-a189-cac9979238df] Validation of action 'AddVm' failed for user esuilen@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA
2020-03-12 14:03:05,260Z WARN [org.ovirt.engine.core.bll.AddVmCommand] (default task-3613) [b9635e29-4b8a-44c4-9611-e5af362c783c] Validation of action 'AddVm' failed for user esuilen@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA
2020-03-12 14:03:30,584Z WARN [org.ovirt.engine.core.bll.AddVmCommand] (default task-3599) [90da1953-a2d6-45bb-8a5b-8f8e9a1b7e2c] Validation of action 'AddVm' failed for user esuilen@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA
2020-03-12 14:03:30,597Z WARN [org.ovirt.engine.core.bll.AddVmCommand] (default task-3613) [fb938e77-1b95-4405-bebc-b0dd89e99ff0] Validation of action 'AddVm' failed for user esuilen@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA
2020-03-12 14:03:30,600Z WARN [org.ovirt.engine.core.bll.AddVmCommand] (default task-3608) [d3c4cd69-134d-49c7-a977-c27d79ce2e24] Validation of action 'AddVm' failed for user esuilen@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA
2020-03-12 14:03:43,460Z WARN [org.ovirt.engine.core.bll.AddVmCommand] (default task-3599) [99b158c6-c476-43e4-838f-45cadf73c7e9] Validation of action 'AddVm' failed for user esuilen@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA
2020-03-12 14:03:43,462Z WARN [org.ovirt.engine.core.bll.AddVmCommand] (default task-3608) [300289be-d55d-468e-aeda-d845180621e2] Validation of action 'AddVm' failed for user esuilen@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA
2020-03-12 14:03:43,467Z WARN [org.ovirt.engine.core.bll.AddVmCommand] (default task-3613) [a6d267af-d14e-44d5-9573-de33a4d45586] Validation of action 'AddVm' failed for user esuilen@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA
2020-03-12 14:03:45,101Z WARN [org.ovirt.engine.core.bll.AddVmCommand] (default task-3613) [8efba949-8f58-427b-9add-ead7b54e69d5] Validation of action 'AddVm' failed for user esuilen@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA
2020-03-12 14:03:55,071Z WARN [org.ovirt.engine.core.bll.AddVmCommand] (default task-3613) [3444440d-5c3a-4ba7-9a6b-4de6ba71a2cd] Validation of action 'AddVm' failed for user esuilen@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA
2020-03-12 14:03:55,093Z WARN [org.ovirt.engine.core.bll.AddVmCommand] (default task-3608) [34e8e6f9-7a5d-4d86-92bc-a595a3ae7688] Validation of action 'AddVm' failed for user esuilen@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_CONSUME_QUOTA
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
ActiveVideo
From: "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>
Date: Thursday, 12 March 2020 at 16:09
To: "users(a)ovirt.org" <users(a)ovirt.org>
Cc: Darko Stojchev <D.Stojchev(a)activevideo.com>
Subject: oVirt Storage quota problems
Dear oVirt,
I am experiencing a quota related problem regarding two things:
1. Quota usage and exception trigger
2. User assigned as quota consumer
First issue:
I have defined a quota for storage (qstorage), setting all of the following items:
a. Defining quota for storage, with threshold and grace
b. Setting quota mode to enforced in datacenter
c. Added Users as consumers of the Quota
d. Updated quota from Default to qstorage for Templates and related images
e. Updated quota from Default to qstorage from all existing VMs and the related Disks
Problem:
* the percentage of the Storage Consumption is jumping up and down, between 3% and 97%, and at a certain point goes back to Unlimited
* when checking under the Storage tab inside the quota name, the value “Used MB out of Total MB” is correct; however, when attempting to exceed the threshold, I am not getting a warning, even though I am a consumer of the quota
Second issue:
The user is added to the qstorage quota as a consumer. Still, when he tries to launch the VMs, a warning is displayed:
Cannot add VM. The user is not a consumer of the Quota assigned to the resource.
Have I missed something? Could it be a permission issue rather than a consumer issue?
All my users are using the Administration Portal with a limited set of permissions. They all have the same Role assigned, but only a few of them are having the problem I described as the second issue.
Please assist.
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
e: m.vrgotic(a)activevideo.com<mailto:m.vrgotic@activevideo.com>
w: www.activevideo.com
From: "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>
Date: Thursday, 27 February 2020 at 10:19
To: "users(a)ovirt.org" <users(a)ovirt.org>
Cc: Darko Stojchev <D.Stojchev(a)activevideo.com>
Subject: oVirt Storage quota questions
Dear oVirt,
My platform is running SHE oVirt version 4.3.8.
Why is my StorageQuota consumption still showing “0 out of 3700”? Did I forget something?
<image001.png>
I have setup storage quota “StorageQuota” for one of domains.
It has 4400GB actual storage.
• Quota is set to 3700GB.
o Threshold for Quota is 85% and Grace is set to 110%.
• Datacenter Quota is set to Enforced
• The template image has been changed from Default to StorageQuota
• I see that Vms created from Template are automatically assigned StorageQuota
To present the following situation in screenshots:
• Actual Quota:
<image002.png>
• VMs (more than on the screenshot) which have the StorageQuota assigned:
<image003.png>
• Templates which have the StorageQuota assigned
<image004.png>
Kindly awaiting your reply.
Additionally, is there an easy and safe way to update the quota of all running VMs?
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
e: m.vrgotic(a)activevideo.com<mailto:m.vrgotic@activevideo.com>
w: www.activevideo.com
5 years, 1 month
Problem with connecting the Storage Domain (Host host2 cannot access the Storage Domain(s) <UNKNOWN>)
by Patrick Lomakin
Hello, everyone! I need specialist help, because I'm already desperate. My company has four hosts that are connected to the storage. Each host has its own IPs to access the storage: host 1 has the IPs 10.42.0.10 and 10.42.1.10, and host 2 has the IPs 10.42.0.20 and 10.42.0.20 respectively. Host 1 cannot ping the address 10.42.0.20. I have tried to describe the hardware in more detail below.
Host 1 has ovirt node 4.3.9 installed and hosted-engine deployed.
When trying to add host 2 to the cluster it is installed, but not activated. There is an error in the oVirt manager - "Host host2 cannot access the Storage Domain(s) <UNKNOWN>" - and host 2 goes to "Not operational" status. On host 2, "connect to 10.42.1.10:3260 failed (No route to host)" is written to the logs and repeats indefinitely. I manually connected host 2 to the storage using iscsiadm to the IP 10.42.0.20, but the error does not go away. At the same time, while the host is trying to activate, I can run virtual machines on it until the host shows the error message. VMs that were started on host 2 continue to run even when the host is in "Not operational" status.
I assume that when adding host 2 to the cluster, oVirt tries to connect it to the same storage that host 1 is connected to, via the IP 10.42.1.10. Is there a way to get oVirt to connect to another IP address instead of the address the storage domain uses for the first host? I'm attaching logs:
----> /var/log/ovirt-engine/engine.log
2020-03-31 09:13:03,866+03 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghan dling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-90) [7fa128f4] EVENT_ID: VDS_SET_NONOPERATIONAL_DOMAIN(522), Host host2.school34.local c annot access the Storage Domain(s) <UNKNOWN> attached to the Data Center DataCenter. Setting Host state to Non-Operational.
2020-03-31 10:40:04,883+03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-12) [7a48ebb7] START, ConnectStorageServerVDSCommand(HostName = host2.school34.local, StorageServerConnectionManagementVDSParameters:{hostId='d82c3a76-e417-4fe4-8b08-a29414e3a9c1', storagePoolId='6052cc0a-71b9-11ea-ba5a-00163e10c7e7', storageType='ISCSI', connectionList='[StorageServerConnections:{id='c8a05dc2-f8a2-4354-96ed-907762c29761', connection='10.42.0.10', iqn='iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='0ec6f34e-01c8-4ecc-9bd4-7e2a250d589d', connection='10.42.1.10', iqn='iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]', sendNetworkEventOnF
ailure='true'}), log id: 2c1a22b5
2020-03-31 10:43:05,061+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-12) [7a48ebb7] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM host2.school34.local command ConnectStorageServerVDS failed: Message timeout which can be caused by communication issues
----> vdsm.log
2020-03-31 09:34:07,264+0300 ERROR (jsonrpc/5) [storage.HSM] Could not connect to storageServer (hsm:2420)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2417, in connectStorageServer
conObj.connect()
File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 488, in connect
iscsi.addIscsiNode(self._iface, self._target, self._cred)
File "/usr/lib/python2.7/site-packages/vdsm/storage/iscsi.py", line 217, in addIscsiNode
iscsiadm.node_login(iface.name, target.address, target.iqn)
File "/usr/lib/python2.7/site-packages/vdsm/storage/iscsiadm.py", line 337, in node_login
raise IscsiNodeError(rc, out, err)
IscsiNodeError: (8, ['Logging in to [iface: default, target: iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0, portal: 10.42.1.10,3260] (multiple)'], ['iscsiadm: Could not login to [iface: default, target: iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0, portal: 10.42.1.10,3260].', 'iscsiadm: initiator reported error (8 - connection timed out)', 'iscsiadm: Could not log into all portals'])
2020-03-31 09:36:01,583+0300 WARN (vdsm.Scheduler) [Executor] Worker blocked: <Worker name=jsonrpc/0 running <Task <JsonRpcTask {'params': {u'connectionParams': [{u'port': u'3260', u'connection': u'10.42.0.10', u'iqn': u'iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0', u'user': u'', u'tpgt': u'2', u'ipv6_enabled': u'false', u'password': '********', u'id': u'c8a05dc2-f8a2-4354-96ed-907762c29761'}, {u'port': u'3260', u'connection': u'10.42.1.10', u'iqn': u'iqn.2002-09.com.lenovo:01.array.00c0ff3bfcb0', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false', u'password': '********', u'id': u'0ec6f34e-01c8-4ecc-9bd4-7e2a250d589d'}], u'storagepoolID': u'6052cc0a-71b9-11ea-ba5a-00163e10c7e7', u'domainType': 3}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', 'id': u'64cc0385-3a11-474b-98f0-b0ecaa6c67c8'} at 0x7fe1ac1ff510> timeout=60, duration=60.00 at 0x7fe1ac1ffb10> task#=316 at 0x7fe1f0041ad0>, traceback:
File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap
self.__bootstrap_inner()
File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
self.run()
File: "/usr/lib64/python2.7/threading.py", line 765, in run
self.__target(*self.__args, **self.__kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 260, in run
ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run
self._execute_task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task
task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__
self._callable()
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__
self._handler(self._ctx, self._req)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest
response = self._handle_request(req, ctx)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request
res = method(**params)
File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod
result = fn(*methodArgs)
File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1102, in connectStorageServer
connectionParams)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 74, in wrapper
result = ctask.prepare(func, *args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper
return m(self, *a, **kw)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare
result = self._run(func, *args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File: "<string>", line 2, in connectStorageServer
File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2417, in connectStorageServer
conObj.connect()
File: "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 488, in connect
iscsi.addIscsiNode(self._iface, self._target, self._cred)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/iscsi.py", line 217, in addIscsiNode
iscsiadm.node_login(iface.name, target.address, target.iqn)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/iscsiadm.py", line 327, in node_login
portal, "-l"])
File: "/usr/lib/python2.7/site-packages/vdsm/storage/iscsiadm.py", line 122, in _runCmd
return misc.execCmd(cmd, printable=printCmd, sudo=True, sync=sync)
File: "/usr/lib/python2.7/site-packages/vdsm/common/commands.py", line 213, in execCmd
(out, err) = p.communicate(data)
File: "/usr/lib64/python2.7/site-packages/subprocess32.py", line 924, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File: "/usr/lib64/python2.7/site-packages/subprocess32.py", line 1706, in _communicate
orig_timeout)
File: "/usr/lib64/python2.7/site-packages/subprocess32.py", line 1779, in _communicate_with_poll
ready = poller.poll(self._remaining_time(endtime)) (executor:363)
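For reference, a quick reachability check from host 2 against the two portals the engine passes in ConnectStorageServerVDS (addresses taken from the engine.log above):

ping -c 3 10.42.0.10
ping -c 3 10.42.1.10
iscsiadm -m discovery -t sendtargets -p 10.42.0.10
iscsiadm -m session -P 1    # shows which portals are actually logged in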
5 years, 1 month
Failing to redeploy self hosted engine
by Maton, Brett
I keep running into this error when I try to (re)deploy self-hosted engine.
# ovirt-hosted-engine-cleanup
# hosted-engine --deploy
...
...
[ INFO ] TASK [ovirt.hosted_engine_setup : Fail with error description]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
host has been set in non_operational status, deployment errors: code
9000: Failed to verify Power Management configuration for Host
physhost01.example.com, fix accordingly and re-deploy."}
I shut down all of the VMs and detached the storage before cleaning up and
trying to re-deploy the hosted engine. This is the first time I've run into this
particular problem.
Any help appreciated
Brett
5 years, 1 month
Artwork: 4.4 GA banners
by Sandro Bonazzola
Hi,
in preparation for oVirt 4.4 GA it would be nice to have some graphics we
can use for launching oVirt 4.4 GA on social media and the oVirt website.
If you don't have coding skills but you have marketing or design skills,
this is a good opportunity to contribute back to the project.
Looking forward to your designs!
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
<https://www.redhat.com/en/summit?sc_cid=7013a000002D2QxAAK>
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
5 years, 1 month
NFS permissions error on ISODomain file with correct permissions
by Shareef Jalloq
Hi,
I asked this question in another thread but it seems to have been lost in
the noise so I'm reposting with a more descriptive subject.
I'm trying to start a Windows VM and use the virtio-win VFD floppy to get
the drivers but the VM startup fails due to a permissions issue detailed
below. The permissions look fine to me so why can't the VFD be read?
Shareef.
I found a permissions issue in the engine.log:
2020-03-25 21:28:41,662Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ForkJoinPool-1-worker-14) [] EVENT_ID: VM_DOWN_ERROR(119), VM win-2019 is
down with error. Exit message: internal error: qemu unexpectedly closed the
monitor: 2020-03-25T21:28:40.324426Z qemu-kvm: -drive
file=/rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/11111111-1111-1111-1111-111111111111/virtio-win_servers_amd64.vfd,format=raw,if=none,id=drive-ua-0b9c28b5-f75c-4575-ad85-b5b836f67d61,readonly=on:
Could not open '/rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/11111111-1111-1111-1111-111111111111/virtio-win_servers_amd64.vfd':
Permission denied.
But when I look at that path on the node in question, every folder and the
final file have the correct vdsm:kvm permissions:
[root@ovirt-node-01 ~]# ll /rhev/data-center/mnt/nas-01.phoelex.com:
_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/11111111-1111-1111-1111-111111111111/virtio-win_servers_amd64.vfd
-rwxrwxrwx. 1 vdsm kvm 2949120 Mar 25 21:24
/rhev/data-center/mnt/nas-01.phoelex.com:
_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/11111111-1111-1111-1111-111111111111/virtio-win_servers_amd64.vfd
The files were uploaded to the ISO domain using:
engine-iso-uploader --iso-domain=iso_storage upload virtio-win.iso
virtio-win_servers_amd64.vfd
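For reference, a short diagnostic sketch: since the plain file modes look fine, the usual suspects are SELinux labels, NFS squash options and whether the qemu user (not just vdsm) can read the file. The path is the one from the error above:

VFD='/rhev/data-center/mnt/nas-01.phoelex.com:_volume2_isostore/41cebb4b-c164-4956-8f44-6426170cd9f5/images/11111111-1111-1111-1111-111111111111/virtio-win_servers_amd64.vfd'
ls -lZ "$VFD"                                   # SELinux context on the file
sudo -u qemu head -c 1 "$VFD" >/dev/null && echo "qemu can read it"
ausearch -m avc -ts recent | grep -i qemu       # any recent SELinux denials?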
5 years, 1 month
Re: Windows VirtIO drivers
by eevans@digitaldatatechs.com
OK. Personally, I like having an ISO repository, but I like the fact that
it will be optional.
Thanks.
Eric Evans
Digital Data Services LLC.
304.660.9080
-----Original Message-----
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Wednesday, March 25, 2020 4:26 PM
To: eevans(a)digitaldatatechs.com; 'Shareef Jalloq' <shareef(a)jalloq.co.uk>;
users(a)ovirt.org
Subject: Re: [ovirt-users] Re: Windows VirtIO drivers
It does work that way.
I found that out as I was testing oVirt and had not created a separate ISO
Domain. I believe it was Strahil who pointed me in the right direction.
So if one does not have an ISO Domain, it is no longer required. Added to that,
ISO Domains are deprecated or in the process of being deprecated.
________________________________________
From: eevans(a)digitaldatatechs.com <eevans(a)digitaldatatechs.com>
Sent: Wednesday, March 25, 2020 3:13 PM
To: Robert Webb; 'Shareef Jalloq'; users(a)ovirt.org
Subject: RE: [ovirt-users] Re: Windows VirtIO drivers
That may be true, but with the ISO domain, when you open virt-viewer you can
change the CD very easily... maybe it works that way as well.
Eric Evans
Digital Data Services LLC.
304.660.9080
-----Original Message-----
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Wednesday, March 25, 2020 2:35 PM
To: eevans(a)digitaldatatechs.com; 'Shareef Jalloq' <shareef(a)jalloq.co.uk>;
users(a)ovirt.org
Subject: [ovirt-users] Re: Windows VirtIO drivers
Don't think you have to use the ISO Domain any longer.
You can upload to a Data Domain, and when you highlight the VM in the
management GUI and select the three dots in the top left for extra options,
there is a "Change CD" option. That option allows attaching an ISO
from a Data Domain.
That is what I recall when I was using oVirt a month or so ago.
________________________________________
From: eevans(a)digitaldatatechs.com <eevans(a)digitaldatatechs.com>
Sent: Wednesday, March 25, 2020 2:28 PM
To: 'Shareef Jalloq'; users(a)ovirt.org
Subject: [ovirt-users] Re: Windows VirtIO drivers
You have to copy the .iso and .vfd files to the ISO domain to make them
available to the VMs that need the drivers.
# engine-iso-uploader [options] list
# engine-iso-uploader [options] upload [file] [file] ...
Documentation is found here: https://www.ovirt.org/documentation/admin-guide/chap-Utilities.html
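For example (the ISO domain name is whatever yours is called; the file names are the ones shipped by the virtio-win package):

engine-iso-uploader list
engine-iso-uploader --iso-domain=ISO_DOMAIN upload virtio-win.iso virtio-win_servers_amd64.vfd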
Eric Evans
Digital Data Services LLC.
304.660.9080
From: Shareef Jalloq <shareef(a)jalloq.co.uk>
Sent: Wednesday, March 25, 2020 1:51 PM
To: users(a)ovirt.org
Subject: [ovirt-users] Windows VirtIO drivers
Hi,
it seems the online documentation regarding the Windows installation steps
is well out of date. Is there any current documentation on where to
get the VirtIO drivers for a Windows installation?
From a bit of Googling, it seems that I need to 'yum install virtio-win' on
the engine VM and then copy the relevant .iso/.vfd to the ISO domain. Is
that correct?
Where is the documentation maintained and how do I open a bug on it?
Thanks, Shareef.
5 years, 1 month
Engine status : unknown stale-data on single node
by Wood, Randall
I have a three-node oVirt cluster where one node has stale data for the hosted engine, but the other two nodes do not:
Output of `hosted-engine --vm-status` on a good node:
```
!! Cluster is in GLOBAL MAINTENANCE mode !!
--== Host ovirt2.low.mdds.tcs-sec.com (id: 1) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt2.low.mdds.tcs-sec.com
Host ID : 1
Engine status : {"health": "good", "vm": "up", "detail": "Up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : f91f57e4
local_conf_timestamp : 9915242
Host timestamp : 9915241
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=9915241 (Fri Mar 27 14:38:14 2020)
host-id=1
score=3400
vm_conf_refresh_time=9915242 (Fri Mar 27 14:38:14 2020)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host ovirt1.low.mdds.tcs-sec.com (id: 2) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt1.low.mdds.tcs-sec.com
Host ID : 2
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 48f9c0fc
local_conf_timestamp : 9218845
Host timestamp : 9218845
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=9218845 (Fri Mar 27 14:38:22 2020)
host-id=2
score=3400
vm_conf_refresh_time=9218845 (Fri Mar 27 14:38:22 2020)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host ovirt3.low.mdds.tcs-sec.com (id: 3) status ==--
conf_on_shared_storage : True
Status up-to-date : False
Hostname : ovirt3.low.mdds.tcs-sec.com
Host ID : 3
Engine status : unknown stale-data
Score : 3400
stopped : False
Local maintenance : False
crc32 : 620c8566
local_conf_timestamp : 1208310
Host timestamp : 1208310
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1208310 (Mon Dec 16 21:14:24 2019)
host-id=3
score=3400
vm_conf_refresh_time=1208310 (Mon Dec 16 21:14:24 2019)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
!! Cluster is in GLOBAL MAINTENANCE mode !!
```
I tried the steps in https://access.redhat.com/discussions/3511881, but `hosted-engine --vm-status` on the node with stale data shows:
```
The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
```
On the stale node, ovirt-ha-agent and ovirt-ha-broker are continually restarting. Since the agent seems to depend on the broker, the broker log includes this snippet, repeating roughly every 3 seconds:
```
MainThread::INFO::2020-03-27 15:01:06,584::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run) ovirt-hosted-engine-ha broker 2.3.6 started
MainThread::INFO::2020-03-27 15:01:06,584::monitor::40::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Searching for submonitors in /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors
MainThread::INFO::2020-03-27 15:01:06,585::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor engine-health
MainThread::INFO::2020-03-27 15:01:06,585::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor storage-domain
MainThread::INFO::2020-03-27 15:01:06,585::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor network
MainThread::INFO::2020-03-27 15:01:06,587::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load-no-engine
MainThread::INFO::2020-03-27 15:01:06,587::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mem-free
MainThread::INFO::2020-03-27 15:01:06,587::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor network
MainThread::INFO::2020-03-27 15:01:06,588::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mgmt-bridge
MainThread::INFO::2020-03-27 15:01:06,588::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor storage-domain
MainThread::INFO::2020-03-27 15:01:06,589::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load-no-engine
MainThread::INFO::2020-03-27 15:01:06,589::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor engine-health
MainThread::INFO::2020-03-27 15:01:06,589::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mgmt-bridge
MainThread::INFO::2020-03-27 15:01:06,589::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load
MainThread::INFO::2020-03-27 15:01:06,590::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load
MainThread::INFO::2020-03-27 15:01:06,590::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mem-free
MainThread::INFO::2020-03-27 15:01:06,590::monitor::50::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Finished loading submonitors
MainThread::INFO::2020-03-27 15:01:06,678::storage_backends::373::ovirt_hosted_engine_ha.lib.storage_backends::(connect) Connecting the storage
MainThread::INFO::2020-03-27 15:01:06,678::storage_server::349::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server
MainThread::INFO::2020-03-27 15:01:06,717::storage_server::356::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server
MainThread::INFO::2020-03-27 15:01:06,732::storage_server::413::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Refreshing the storage domain
MainThread::WARNING::2020-03-27 15:01:08,940::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__) Can't connect vdsm storage: [Errno 5] Input/output error: '/rhev/data-center/mnt/glusterSD/ovirt2:_engine/182a4a94-743f-4941-89c1-dc2008ae1cf5/ha_agent/hosted-engine.lockspace'
```
I restarted the stale node yesterday, but it still shows stale data from December of last year.
What is the recommended way for me to try to recover from this?
(This came to my attention when warnings concerning space on the /var/log partition began popping up.)
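For reference, a rough recovery sequence on the stale node might look like the sketch below (the Gluster volume name "engine" is only inferred from the mount path in the broker log above, so adjust to your setup):
```
# Check the HA services and whether the hosted-engine storage is actually mounted
systemctl status ovirt-ha-broker ovirt-ha-agent
grep glusterSD /proc/mounts
ls -l /rhev/data-center/mnt/glusterSD/ovirt2:_engine/*/ha_agent/

# If the engine volume is Gluster-backed, look for pending or failed heals
gluster volume heal engine info

# Re-connect the hosted-engine storage, restart the HA services, then re-check
hosted-engine --connect-storage
systemctl restart ovirt-ha-broker ovirt-ha-agent
hosted-engine --vm-status
```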
Thank you,
Randall
5 years, 1 month
Re: Local network
by Tommaso - Shellrent
This is what I've got:
*ovs-vsctl show*
03a038d4-e81c-45e0-94d1-6f18d6504f1f
Bridge br-int
fail_mode: secure
Port "ovn-765f43-0"
Interface "ovn-765f43-0"
type: geneve
options: {csum="true", key=flow, remote_ip="xxx.169.yy.6"}
Port br-int
Interface br-int
type: internal
Port "vnet1"
Interface "vnet1"
Port "ovn-b33f6e-0"
Interface "ovn-b33f6e-0"
type: geneve
options: {csum="true", key=flow, remote_ip="xxx.169.yy.2"}
Port "vnet3"
Interface "vnet3"
Port "ovn-8678d9-0"
Interface "ovn-8678d9-0"
type: geneve
options: {csum="true", key=flow, remote_ip="xxx.169.yy.8"}
Port "ovn-fdd090-0"
Interface "ovn-fdd090-0"
type: geneve
options: {csum="true", key=flow, remote_ip="xxx.169.yy.4"}
ovs_version: "2.11.0"
I suppose that the vNICs are:
Port "vnet1"
Interface "vnet1"
Port "vnet3"
Interface "vnet3"
on the engine:
*ovn-nbctl show*
switch a1f30e99-3ab7-46a4-925d-287871905cab
(ovirt-local_network_definitiva-d58aea97-bb20-4e8f-bcc3-5277754846bb)
port b82f3479-b459-4c26-aff0-053d15c74ddd
addresses: ["56:6f:96:b1:00:4c"]
port 52f09a28-1645-45ff-9b84-1e53a81bb399
addresses: ["56:6f:96:b1:00:4b"]
*ovn-sbctl show*
Chassis "ab5bdfdd-8df4-4e9b-9ce9-565cfd513a4d"
hostname: "pvt-41f18-002.serverlet.com"
Encap geneve
ip: "aaa.31.bbb.224"
options: {csum="true"}
Port_Binding "b82f3479-b459-4c26-aff0-053d15c74ddd"
Port_Binding "52f09a28-1645-45ff-9b84-1e53a81bb399"
On 31/03/20 13:39, Staniforth, Paul wrote:
> The engine runs the controller so ovn-sbctl won't work, on the hosts,
> use ovs-vsctl show
>
> Paul S.
> ------------------------------------------------------------------------
> *From:* Tommaso - Shellrent <tommaso(a)shellrent.com>
> *Sent:* 31 March 2020 12:13
> *To:* Staniforth, Paul <P.Staniforth(a)leedsbeckett.ac.uk>;
> users(a)ovirt.org <users(a)ovirt.org>
> *Subject:* Re: [ovirt-users] Local network
>
>
> Hi.
>
> on engine all seems fine.
>
> on host the command "ovn-sbctl show" is stuck, and with a strace a se
> the following error:
>
>
> connect(5, {sa_family=AF_LOCAL,
> sun_path="/var/run/openvswitch/ovnsb_db.sock"}, 37) = -1 ENOENT (No
> such file or directory)
>
>
>
>
>
>
> On 31/03/20 11:18, Staniforth, Paul wrote:
>>
>> Hello Tommaso,
>> on your oVirt engine host run
>> check the north bridge controller
>> ovn-nbctl show
>> this should show a software switch for each ovn logical network witch
>> any ports that are active( in your case you should have 2)
>>
>> check the south bridge controller
>> ovn-sbctl show
>> this should show the software switch on each host with a geneve tunnel.
>>
>> on each host run
>> ovs-vsctl show
>> this should show the virtual switch with a geneve tunnel to each
>> other host and a port for any active vnics
>>
>> Regards,
>> Paul S.
>>
>> ------------------------------------------------------------------------
>> *From:* Tommaso - Shellrent <tommaso(a)shellrent.com>
>> <mailto:tommaso@shellrent.com>
>> *Sent:* 31 March 2020 09:27
>> *To:* users(a)ovirt.org <mailto:users@ovirt.org> <users(a)ovirt.org>
>> <mailto:users@ovirt.org>
>> *Subject:* [ovirt-users] Local network
>>
>>
>> Hi to all.
>>
>> I'm trying to connect two vm, on the same "local storage" host,
>> with an internal isolated network.
>>
>> My setup;
>>
>> VM A:
>>
>> * eth0 with an external ip
>> * eth1, with 1922.168.1.1/24
>>
>> VM B
>>
>> * eth0 with an external ip
>> * eth1, with 1922.168.1.2/24
>>
>> the eth1 interfaces are connetter by a network created on external
>> provider ovirt-network-ovn , whithout a subnet defined.
>>
>> Now, the external ip works fine, but the two vm cannot connect
>> through the local network
>>
>> ping: ko
>> arping: ko
>>
>>
>> any idea to what to check?
>>
>>
>> Regards
>>
>> --
>> --
>> Shellrent - Il primo hosting italiano Security First
>>
>> *Tommaso De Marchi*
>> /COO - Chief Operating Officer/
>> Shellrent Srl
>> Via dell'Edilizia, 19 - 36100 Vicenza
>> Tel. 0444321155 <tel:+390444321155> | Fax 04441492177
>>
> --
> --
> Shellrent - Il primo hosting italiano Security First
>
> *Tommaso De Marchi*
> /COO - Chief Operating Officer/
> Shellrent Srl
> Via dell'Edilizia, 19 - 36100 Vicenza
> Tel. 0444321155 <tel:+390444321155> | Fax 04441492177
>
--
--
Shellrent - Il primo hosting italiano Security First
*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155 <tel:+390444321155> | Fax 04441492177
5 years, 1 month
How to install Ovirt Node without ISO
by raphael.garcia@centralesupelec.fr
Hello
Is it possible to install an Ovirt node on a CentOs 7 server without iso (CD or USB).
Sorry for this newbie question.
5 years, 1 month
Re: Local network
by Tommaso - Shellrent
Hi.
On the engine all seems fine.
On the host the command "ovn-sbctl show" is stuck, and with strace I see
the following error:
connect(5, {sa_family=AF_LOCAL,
sun_path="/var/run/openvswitch/ovnsb_db.sock"}, 37) = -1 ENOENT (No such
file or directory)
On 31/03/20 11:18, Staniforth, Paul wrote:
>
> Hello Tommaso,
> on your oVirt engine host run
> check the north bridge controller
> ovn-nbctl show
> this should show a software switch for each ovn logical network witch
> any ports that are active( in your case you should have 2)
>
> check the south bridge controller
> ovn-sbctl show
> this should show the software switch on each host with a geneve tunnel.
>
> on each host run
> ovs-vsctl show
> this should show the virtual switch with a geneve tunnel to each other
> host and a port for any active vnics
>
> Regards,
> Paul S.
>
> ------------------------------------------------------------------------
> *From:* Tommaso - Shellrent <tommaso(a)shellrent.com>
> *Sent:* 31 March 2020 09:27
> *To:* users(a)ovirt.org <users(a)ovirt.org>
> *Subject:* [ovirt-users] Local network
>
>
> Hi to all.
>
> I'm trying to connect two vm, on the same "local storage" host,
> with an internal isolated network.
>
> My setup;
>
> VM A:
>
> * eth0 with an external ip
> * eth1, with 1922.168.1.1/24
>
> VM B
>
> * eth0 with an external ip
> * eth1, with 1922.168.1.2/24
>
> the eth1 interfaces are connetter by a network created on external
> provider ovirt-network-ovn , whithout a subnet defined.
>
> Now, the external ip works fine, but the two vm cannot connect through
> the local network
>
> ping: ko
> arping: ko
>
>
> any idea to what to check?
>
>
> Regards
>
> --
> --
> Shellrent - Il primo hosting italiano Security First
>
> *Tommaso De Marchi*
> /COO - Chief Operating Officer/
> Shellrent Srl
> Via dell'Edilizia, 19 - 36100 Vicenza
> Tel. 0444321155 <tel:+390444321155> | Fax 04441492177
>
--
--
Shellrent - Il primo hosting italiano Security First
*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155 <tel:+390444321155> | Fax 04441492177
5 years, 1 month
Local network
by Tommaso - Shellrent
Hi to all.
I'm trying to connect two VMs, on the same "local storage" host, with
an internal isolated network.
My setup:
VM A:
* eth0 with an external IP
* eth1, with 192.168.1.1/24
VM B:
* eth0 with an external IP
* eth1, with 192.168.1.2/24
The eth1 interfaces are connected by a network created on the external
provider ovirt-network-ovn, without a subnet defined.
Now, the external IPs work fine, but the two VMs cannot connect through
the local network:
ping: ko
arping: ko
Any idea of what to check?
Regards
--
--
Shellrent - Il primo hosting italiano Security First
*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155 <tel:+390444321155> | Fax 04441492177
5 years, 1 month
New HCI with gluster
by Ovirt User
I am performing a new install of 3 nodes:
ovnode1 - 3 disks, install on first two disks
ovnode2 - 3 disks, install on first two disks
ovnode3 - 3 disks, install on first two disks
Storage network interfaces are configured with 192.168.xxx.10; no gateways are configured for this network.
Data network interfaces are configured with 10.3.x.10.
When I run the wizard, it always fails. All 3 hosts are running the latest stable (4.3.9). I have attached the log files from the runs on these hosts.
5 years, 1 month
ovirt-guest-agent for CentOS 8
by Eduardo Mayoral
Hi,
Just like many of you I am testing my first CentOS 8 VMs on top of ovirt.
I am not finding the package ovirt-guest-agent.noarch. The closest I can
find is qemu-guest-agent.x86_64.
After installing and starting it, I do see information reported on the
"Guest info" tab.
Can anybody confirm if this is indeed the agent we should be using? Is
there - or will there be - a more specific package for ovirt guests?
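For reference, the guest-side setup for an EL8 guest amounts to something like this sketch:
```
# Inside the CentOS 8 guest
dnf install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent
systemctl status qemu-guest-agent
```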
Thanks!
--
Eduardo Mayoral Jimeno
Systems engineer, platform department. Arsys Internet.
emayoral(a)arsys.es - +34 941 620 105 - ext 2153
5 years, 1 month
install oVirt on Fedora31
by eldaeron@mail.ru
Hello! Tell me, can I install oVirt on Fedora 31? I try to install "http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm" but only "ovirt-4.2.repo" appears in "yum.repos.d". "ovirt-4.2-dependencies.repo" is missing. ok, I installed ovirt-release42.rpm on a virtual machine with centos7 and downloaded both files from it. but when I try to run "yum install ovirt-hosted-engine-setup" I get "Error: Failed to download metadata for repo 'ovirt-4.2': Cannot prepare internal mirrorlist: No URLs in mirrorlist" and "No match for argument: ovirt -hosted-engine-setup "
sorry for my english, this is a translator :)
5 years, 1 month
Debian guest Agent
by eevans@digitaldatatechs.com
I am revisiting this as I have had no luck getting it resolved.
I have 2 Debian servers, one is an email appliance and one is a virtual pbx.
The guest agent works in the email appliance after following an article to get the agent to work properly, but the agent in the pbx has never worked. I have an exclamation point saying the guest agent needs to be installed. It is installed and running.
I'm not sure why the agent is not checking in but any help would be appreciated.
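For reference, a quick way to check the non-reporting guest might be the sketch below (it assumes the Debian package is named ovirt-guest-agent, as on recent Debian releases):
```
# Inside the Debian guest
apt-get install ovirt-guest-agent
systemctl enable --now ovirt-guest-agent
systemctl status ovirt-guest-agent
journalctl -u ovirt-guest-agent --no-pager | tail -n 20

# The agent talks to VDSM over a virtio-serial channel; check that one is present
ls -l /dev/virtio-ports/
```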
Thanks.
5 years, 1 month
HCI cluster single node error making template
by Gianluca Cecchi
Hello,
I'm on 4.3.9
I have created a VM with 4 vCPUs, 16 GB of memory, a NIC and a thin provisioned
disk of 120 GB.
I installed nothing on it, only defined it.
Now I'm trying to make a template from it, but I get an error.
I left the prefilled value of raw for the format.
The target storage domain has 900 GB free, almost empty.
In the events pane:
Creation of Template ocp_node from VM ocp_node_template was initiated by
admin@internal-authz.
3/25/20 12:42:35 PM
Then the error I get in the events pane is:
VDSM ovirt.example.local command HSMGetAllTasksStatusesVDS failed: low
level Image copy failed: ("Command ['/usr/bin/qemu-img', 'convert', '-p',
'-t', 'none', '-T', 'none', '-f', 'raw',
u'/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/61689cb2-fdce-41a5-a6d9-7d06aefeb636/30009efb-83ed-4b0d-b243-3160195ae46e',
'-O', 'qcow2', '-o', 'compat=1.1',
u'/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/5642b52f-d7e8-48a8-adf9-f79022ce4594/982dd5cc-5f8f-41cb-b2e7-3cbdf2a656cf']
failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading
sector 18620412: No such file or directory\\n')",)
3/25/20 12:43:39 PM
Is there a problem templating a thin provisioned disk in gluster?
In vdsm.log
2020-03-25 12:42:40,331+0100 ERROR (tasks/9) [storage.Image] conversion
failure for volume 30009efb-83ed-4b0d-b243-3160195ae46e (image:836)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 831,
in copyCollapsed
self._run_qemuimg_operation(operation)
File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 98,
in _run_qemuimg_operation
operation.run()
File "/usr/lib/python2.7/site-packages/vdsm/storage/qemuimg.py", line
343, in run
for data in self._operation.watch():
File "/usr/lib/python2.7/site-packages/vdsm/storage/operation.py", line
106, in watch
self._finalize(b"", err)
File "/usr/lib/python2.7/site-packages/vdsm/storage/operation.py", line
179, in _finalize
raise cmdutils.Error(self._cmd, rc, out, err)
Error: Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T',
'none', '-f', 'raw',
u'/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/61689cb2-fdce-41a5-a6d9-7d06aefeb636/30009efb-83ed-4b0d-b243-3160195ae46e',
'-O', 'raw',
u'/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/bb167b28-94fa-434c-8fb6-c4bedfc06c62/53d3ab96-e5d1-453a-9989-2f858e6a9e0a']
failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading
sector 22028283: No data available\n')
2020-03-25 12:42:40,331+0100 ERROR (tasks/9) [storage.Image] Unexpected
error (image:849)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 837,
in copyCollapsed
raise se.CopyImageError(str(e))
CopyImageError: low level Image copy failed: ("Command
['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f',
'raw',
u'/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/61689cb2-fdce-41a5-a6d9-7d06aefeb636/30009efb-83ed-4b0d-b243-3160195ae46e',
'-O', 'raw',
u'/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/bb167b28-94fa-434c-8fb6-c4bedfc06c62/53d3ab96-e5d1-453a-9989-2f858e6a9e0a']
failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading
sector 22028283: No data available\\n')",)
2020-03-25 12:42:40,332+0100 ERROR (tasks/9) [storage.TaskManager.Task]
(Task='48476954-310f-45b6-9e1b-921d94e5495b') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
in _run
return fn(*args, **kargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336,
in run
return self.cmd(*self.argslist, **self.argsdict)
File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line
79, in wrapper
return method(self, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1633, in
copyImage
postZero, force, discard)
File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 837,
in copyCollapsed
raise se.CopyImageError(str(e))
CopyImageError: low level Image copy failed: ("Command
['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f',
'raw',
u'/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/61689cb2-fdce-41a5-a6d9-7d06aefeb636/30009efb-83ed-4b0d-b243-3160195ae46e',
'-O', 'raw',
u'/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/bb167b28-94fa-434c-8fb6-c4bedfc06c62/53d3ab96-e5d1-453a-9989-2f858e6a9e0a']
failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading
sector 22028283: No data available\\n')",)
2020-03-25 12:42:40,384+0100 INFO (tasks/9) [storage.Volume]
createVolumeRollback:
repoPath=/rhev/data-center/9ce6ed92-6c6c-11ea-a971-00163e0acd5c
sdUUID=81b97244-4b69-4d49-84c4-c822387adc6a
imgUUID=bb167b28-94fa-434c-8fb6-c4bedfc06c62
volUUID=53d3ab96-e5d1-453a-9989-2f858e6a9e0a
imageDir=/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/bb167b28-94fa-434c-8fb6-c4bedfc06c62
(volume:1119)
2020-03-25 12:42:40,387+0100 INFO (tasks/9) [storage.Volume] Request to
delete volume 53d3ab96-e5d1-453a-9989-2f858e6a9e0a (fileVolume:555)
...
2020-03-25 12:43:38,161+0100 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
call VM.getStats succeede
d in 0.00 seconds (__init__:312)
2020-03-25 12:43:39,778+0100 INFO (jsonrpc/4) [vdsm.api] START
getAllTasksStatuses(spUUID=None, opt
ions=None) from=::ffff:172.16.0.31,40506,
task_id=1ec1f913-641c-44c4-8eb6-06a398af7b0c (api:48)
2020-03-25 12:43:39,778+0100 INFO (jsonrpc/4) [vdsm.api] FINISH
getAllTasksStatuses return={'allTas
ksStatus': {'013f3fc3-d9e8-4acd-8e95-aef4001255a1': {'code': 261,
'message': 'low level Image copy f
ailed: ("Command [\'/usr/bin/qemu-img\', \'convert\', \'-p\', \'-t\',
\'none\', \'-T\', \'none\', \'
-f\', \'raw\',
u\'/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d4
9-84c4-c822387adc6a/images/61689cb2-fdce-41a5-a6d9-7d06aefeb636/30009efb-83ed-4b0d-b243-3160195ae46e
\', \'-O\', \'qcow2\', \'-o\', \'compat=1.1\',
u\'/rhev/data-center/mnt/glusterSD/ovirtst.example.st
orage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/5642b52f-d7e8-48a8-adf9-f79022ce4594/982d
d5cc-5f8f-41cb-b2e7-3cbdf2a656cf\'] failed with rc=1 out=\'\'
err=bytearray(b\'qemu-img: error while
reading sector 18620412: No such file or directory\\\\n\')",)',
'taskState': 'finished', 'taskResul
t': 'cleanSuccess', 'taskID': '013f3fc3-d9e8-4acd-8e95-aef4001255a1'},
'48476954-310f-45b6-9e1b-921d
94e5495b': {'code': 261, 'message': 'low level Image copy failed: ("Command
[\'/usr/bin/qemu-img\',
\'convert\', \'-p\', \'-t\', \'none\', \'-T\', \'none\', \'-f\', \'raw\',
u\'/rhev/data-center/mnt/g
lusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/61689cb2-fdce-
41a5-a6d9-7d06aefeb636/30009efb-83ed-4b0d-b243-3160195ae46e\', \'-O\',
\'raw\', u\'/rhev/data-center
/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/bb167b28-94fa-434c-8fb6-c4bedfc06c62/53d3ab96-e5d1-453a-9989-2f858e6a9e0a\']
failed with rc=1 out=\'\' err=bytearray(b\'qemu-img: error while reading
sector 22028283: No data available\\\\n\')",)', 'taskState': 'finished',
'taskResult': 'cleanSuccess', 'taskID':
'48476954-310f-45b6-9e1b-921d94e5495b'}}} from=::ffff:172.16.0.31,40506,
task_id=1ec1f913-641c-44c4-8eb6-06a398af7b0c (api:54)
2020-03-25 12:43:39,778+0100 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC
call Host.getAllTasksStatuses succeeded in 0.00 seconds (__init__:312)
2020-03-25 12:43:39,851+0100 INFO (jsonrpc/0) [vdsm.api] START
deleteImage(sdUUID=u'81b97244-4b69-4d49-84c4-c822387adc6a',
spUUID=u'9ce6ed92-6c6c-11ea-a971-00163e0acd5c',
imgUUID=u'bb167b28-94fa-434c-8fb6-c4bedfc06c62', postZero=u'false',
force=u'false', discard=False) from=::ffff:172.16.0.31,40512,
flow_id=3540fed2, task_id=5f048634-f07f-4643-8aeb-88a6c562f038 (api:48)
2020-03-25 12:43:39,856+0100 ERROR (jsonrpc/0) [storage.HSM] Empty or not
found image bb167b28-94fa-434c-8fb6-c4bedfc06c62 in SD
81b97244-4b69-4d49-84c4-c822387adc6a.
{u'2f48ba54-11a5-4fc3-a8da-eba41c345c0e':
ImgsPar(imgs=(u'55d7c95e-8f54-41c8-8422-3daac542a917',), parent=None),
u'54c43445-7066-423f-9c07-2082d1442c78':
ImgsPar(imgs=(u'08238512-aa7a-4ebb-82e4-6831e8805516',), parent=None),
u'30009efb-83ed-4b0d-b243-3160195ae46e':
ImgsPar(imgs=(u'61689cb2-fdce-41a5-a6d9-7d06aefeb636',), parent=None)}
(hsm:1507)
2020-03-25 12:43:39,856+0100 INFO (jsonrpc/0) [vdsm.api] FINISH
deleteImage error=Image does not exist in domain:
u'image=bb167b28-94fa-434c-8fb6-c4bedfc06c62,
domain=81b97244-4b69-4d49-84c4-c822387adc6a' from=::ffff:172.16.0.31,40512,
flow_id=3540fed2, task_id=5f048634-f07f-4643-8aeb-88a6c562f038 (api:52)
2020-03-25 12:43:39,856+0100 ERROR (jsonrpc/0) [storage.TaskManager.Task]
(Task='5f048634-f07f-4643-8aeb-88a6c562f038') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
in _run
return fn(*args, **kargs)
File "<string>", line 2, in deleteImage
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1508,
in deleteImage
raise se.ImageDoesNotExistInSD(imgUUID, sdUUID)
ImageDoesNotExistInSD: Image does not exist in domain:
u'image=bb167b28-94fa-434c-8fb6-c4bedfc06c62,
domain=81b97244-4b69-4d49-84c4-c822387adc6a'
2020-03-25 12:43:39,856+0100 INFO (jsonrpc/0) [storage.TaskManager.Task]
(Task='5f048634-f07f-4643-8aeb-88a6c562f038') aborting: Task is aborted:
"Image does not exist in domain:
u'image=bb167b28-94fa-434c-8fb6-c4bedfc06c62,
domain=81b97244-4b69-4d49-84c4-c822387adc6a'" - code 268 (task:1181)
2020-03-25 12:43:39,857+0100 ERROR (jsonrpc/0) [storage.Dispatcher] FINISH
deleteImage error=Image does not exist in domain:
u'image=bb167b28-94fa-434c-8fb6-c4bedfc06c62,
domain=81b97244-4b69-4d49-84c4-c822387adc6a' (dispatcher:83)
engine.log
2020-03-25 12:43:39,778+01 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFac
tory-engineScheduled-Thread-35) [] Task id
'48476954-310f-45b6-9e1b-921d94e5495b' has passed pre-pol
ling period time and should be polled. Pre-polling period is 60000 millis.
2020-03-25 12:43:39,778+01 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThrea
dFactory-engineScheduled-Thread-35) [] Polling and updating Async Tasks: 2
tasks, 1 tasks to poll no
w
2020-03-25 12:43:39,778+01 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFac
tory-engineScheduled-Thread-35) [] Task id
'48476954-310f-45b6-9e1b-921d94e5495b' has passed pre-pol
ling period time and should be polled. Pre-polling period is 60000 millis.
2020-03-25 12:43:39,781+01 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-35) [] Failed in
'HSMGetAllTasksStatusesVDS' method
2020-03-25 12:43:39,784+01 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-35) [] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovirt.example.com command
HSMGetAllTasksStatusesVDS failed: low level Image copy failed: ("Command
['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f',
'raw',
u'/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/61689cb2-fdce-41a5-a6d9-7d06aefeb636/30009efb-83ed-4b0d-b243-3160195ae46e',
'-O', 'qcow2', '-o', 'compat=1.1',
u'/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/5642b52f-d7e8-48a8-adf9-f79022ce4594/982dd5cc-5f8f-41cb-b2e7-3cbdf2a656cf']
failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading
sector 18620412: No such file or directory\\n')",)
2020-03-25 12:43:39,784+01 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-35) [] Failed in
'HSMGetAllTasksStatusesVDS' method
2020-03-25 12:43:39,785+01 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-35) [] Task id
'48476954-310f-45b6-9e1b-921d94e5495b' has passed pre-polling period time
and should be polled. Pre-polling period is 60000 millis.
2020-03-25 12:43:39,785+01 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-35) []
SPMAsyncTask::PollTask: Polling task '48476954-310f-45b6-9e1b-921d94e5495b'
(Parent Command 'CreateAllTemplateDisks', Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned
status 'finished', result 'cleanSuccess'.
2020-03-25 12:43:39,787+01 ERROR
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-35) []
BaseAsyncTask::logEndTaskFailure: Task
'48476954-310f-45b6-9e1b-921d94e5495b' (Parent Command
'CreateAllTemplateDisks', Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended with
failure:
-- Result: 'cleanSuccess'
-- Message: 'VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = low level Image copy failed: ("Command
['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f',
'raw',
u'/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/61689cb2-fdce-41a5-a6d9-7d06aefeb636/30009efb-83ed-4b0d-b243-3160195ae46e',
'-O', 'raw',
u'/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/bb167b28-94fa-434c-8fb6-c4bedfc06c62/53d3ab96-e5d1-453a-9989-2f858e6a9e0a']
failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading
sector 22028283: No data a
vailable\\n')",), code = 261',
-- Exception: 'VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = low level Image copy failed: ("Command
['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f',
'raw',
u'/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/61689cb2-fdce-41a5-a6d9-7d06aefeb636/30009efb-83ed-4b0d-b243-3160195ae46e',
'-O', 'raw',
u'/rhev/data-center/mnt/glusterSD/ovirtst.example.storage:_vmstore/81b97244-4b69-4d49-84c4-c822387adc6a/images/bb167b28-94fa-434c-8fb6-c4bedfc06c62/53d3ab96-e5d1-453a-9989-2f858e6a9e0a']
failed with rc=1 out='' err=bytearray(b'qemu-img: error while reading
sector 22028283: No data available\\n')",), code = 261'
2020-03-25 12:43:39,793+01 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-35) []
CommandAsyncTask::endActionIfNecessary: All tasks of command
'76c06966-36b5-4e06-9426-f3187c5a07b9' has ended -> executing 'endAction'
2020-03-25 12:43:39,793+01 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-35) []
CommandAsyncTask::endAction: Ending action for '1' tasks (command ID:
'76c06966-36b5-4e06-9426-f3187c5a07b9'): calling endAction '.
2020-03-25 12:43:39,793+01 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(EE-ManagedThreadFactory-engine-Thread-9168) []
CommandAsyncTask::endCommandAction [within thread] context: Attempting to
endAction 'CreateAllTemplateDisks',
2020-03-25 12:43:39,805+01 ERROR
[org.ovirt.engine.core.bll.storage.disk.CreateAllTemplateDisksCommand]
(EE-ManagedThreadFactory-engine-Thread-9168)
[76349650-8bc6-4769-a011-49d4c2324052] Ending command
'org.ovirt.engine.core.bll.storage.disk.CreateAllTemplateDisksCommand' with
failure.
2020-03-25 12:43:39,810+01 ERROR
[org.ovirt.engine.core.bll.storage.disk.image.CreateImageTemplateCommand]
(EE-ManagedThreadFactory-engine-Thread-9168)
[76349650-8bc6-4769-a011-49d4c2324052] Ending command
'org.ovirt.engine.core.bll.storage.disk.image.CreateImageTemplateCommand'
with failure.
...
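For reference, a couple of checks that might narrow this down (a sketch; the volume name "vmstore" is inferred from the mount path in the logs above, and the image path is the one from the qemu-img error):
```
# Look for pending or failed heals and brick status on the data volume
gluster volume heal vmstore info
gluster volume status vmstore

# Read the source image end-to-end with direct I/O to see whether the
# read error reproduces outside of qemu-img
dd if=<source image path from the error above> of=/dev/null bs=1M iflag=direct status=progress
```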
Thanks,
Gianluca
5 years, 1 month
How to debug a failed Run Once launch
by Shareef Jalloq
Hi,
I'm trying to create a Windows Server 2019 VM and, having found the
virtio-win package that needed to be installed, am facing the next hurdle.
I've followed the documentation and I'm using the Run Once option with the
following boot options:
Attach Floppy: virtio-win_servers_amd64.vfd
Attach CD: Win 2019 ISO
CD-ROM at top of Predefined Boot Sequence
Clicking OK starts the VM but it immediately fails with a Failed Launching
pop up.
How do I go about debugging this?
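For reference, a simple way to start is watching the relevant logs while reproducing the failure; a sketch (the VM name is a placeholder):
```
# On the engine
tail -f /var/log/ovirt-engine/engine.log

# On the host the VM is scheduled to run on
tail -f /var/log/vdsm/vdsm.log | grep -i -A5 "<vm-name>"

# If libvirt/QEMU got far enough to start, there may also be a per-VM log
less /var/log/libvirt/qemu/<vm-name>.log
```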
Shareef.
5 years, 1 month
"engine-setup" 4.3.9 on fresh centos 7 install with PKI error
by edsonrichter@hotmail.com
I'm a newbie on oVirt, despite having many years of server administration experience, Docker and VMware included.
Installation is no mystery, but I'm stuck with the error described below.
I've just installed a new machine with Centos 7 with all updates:
[root@mgmt ~]# uname -a
Linux mgmt.simfrete.com 3.10.0-1062.12.1.el7.x86_64 #1 SMP Tue Feb 4 23:02:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
[root@mgmt ~]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)
I've installed the repository:
[root@mgmt ~]# sudo yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm -y
and then the ovirt-engine:
[root@mgmt ~]# yum install ovirt-engine -y
Then I ran engine-setup with all defaults (except fqdn = "mgmt.mydomain.com").
Everything runs smoothly, except that when I open the management interface at "https://mgmt.mydomain.com/ovirt-engine" I get the error:
sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
In the log, I get
2020-03-23 02:20:23,883-03 ERROR [org.ovirt.engine.core.aaa.servlet.SsoPostLoginServlet] (default task-8) [] server_error: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
I don't know how to proceed from here.
Would you please guide me how to fix this?
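For reference, a sketch of checks that might show which certificate the SSO validation is rejecting (paths are the oVirt 4.3 defaults; the truststore password is assumed to still be the default "mypass"):
```
# What is Apache actually serving, and does it chain to the engine CA?
openssl s_client -connect mgmt.mydomain.com:443 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates

openssl x509 -in /etc/pki/ovirt-engine/ca.pem -noout -subject -dates
openssl verify -CAfile /etc/pki/ovirt-engine/ca.pem /etc/pki/ovirt-engine/certs/apache.cer

# Is the engine CA present in the engine's Java truststore used by SSO?
keytool -list -keystore /etc/pki/ovirt-engine/.truststore -storepass mypass | head
```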
5 years, 1 month
Error installing single hyperconverged host using CentOS7
by wodel youchi
Hi,
I am using the latest version of oVirt 4.3.9
I am using nested KVM for a LAB.
the CentOS is updated with the latest packages available.
The goal of my lab is to test DR on a hyperconverged environment. Why use a
CentOS host rather than oVirt Node? Because my Internet connection is not
great, and I find it simpler to just update the new packages on the host
rather than update the whole host image.
I installed a CentOS host using the basic profile, then I added the oVirt
repository and then installed the needed packages, and by the way the oVirt
documentation on hyperconverged setup doesn't mention gluster-ansible-roles
as a prerequisite.
The Gluster phase is done correctly, but the deployment of the VM Engine
fails every time at the same task : TASK [ovirt.hosted_engine_setup : Get
local VM IP
I repeated the deployment three times and I get the same result.
Then I tested with ovirt-node and it worked fine.
You can find the logs of my last attempt attached.
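For what it's worth, while the deployment sits on that task, a few checks on the CentOS host can show whether the local bootstrap engine VM ever got a DHCP lease on libvirt's default network (a sketch; the nested check assumes an Intel host, use kvm_amd otherwise):
```
# Is the local bootstrap VM running?
virsh -r list --all

# Did it get a lease on the libvirt "default" (NATed) network?
virsh -r net-dhcp-leases default
ip a show virbr0

# Is nested virtualization actually enabled on this L1 host?
cat /sys/module/kvm_intel/parameters/nested
```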
Regards.
5 years, 1 month
cannot manually migrate vm's
by eevans@digitaldatatechs.com
I upgraded from 4.3.8 to 4.3.9. Before the upgrade, I could manually migrate VMs and it would also automatically load balance the hosts. Now I cannot manually migrate, and I have one server with no VMs and 2 with several.
If I put a host into maintenance mode, it will migrate the VMs off to other hosts, but before the upgrade I noticed it would move VMs around to load balance, and now it does not.
Not sure if this is a bug or not.
How it happens:
When I right-click on a VM and click Migrate, the migrate screen flashes on screen and then disappears. Same behavior if I highlight the VM and click the Migrate button at the top of the VM screen.
It's not critical, but it's something that needs to be corrected.
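For reference, reproducing the flash-and-disappear while tailing the UI and engine logs usually exposes the underlying exception; a sketch:
```
# On the engine machine, then right-click a VM and hit Migrate in the webadmin
tail -f /var/log/ovirt-engine/ui.log /var/log/ovirt-engine/engine.log
```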
Any help or advice is very much appreciated.
Thanks.
5 years, 1 month
Cinderlib db not contained in engine-backup
by Thomas Klute
Dear oVirt users,
I just noticed that the ovirt_cinderlib database does not seem to be
contained in the archive file created by engine-backup.
Is it missing there?
Or is there any other way to restore the data of the ovirt_cinderlib
database in case a restore from the engine-backup is required?
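For reference, until engine-backup covers it, the database could be dumped separately with something like the sketch below (it assumes the default local PostgreSQL used by oVirt 4.3, i.e. the rh-postgresql10 software collection):
```
# Dump the cinderlib database in PostgreSQL custom format
su - postgres -c "scl enable rh-postgresql10 -- pg_dump -F c ovirt_cinderlib" \
  > /var/tmp/ovirt_cinderlib.dump

# A restore would then be roughly the reverse
# su - postgres -c "scl enable rh-postgresql10 -- pg_restore -d ovirt_cinderlib" < /var/tmp/ovirt_cinderlib.dump
```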
Best regards,
Thomas
5 years, 1 month
Re: vm console problem
by David David
I did as you said:
I copied /etc/ovirt-engine/ca.pem from the engine onto my desktop into
/etc/pki/ca-trust/source/anchors and then ran update-ca-trust.
It didn't help; I still get the same errors.
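For reference, it may be worth confirming the CA really landed in the client's trust store and matches the engine; a sketch:
```
# On the desktop: what did we actually import, and is it in the consolidated store?
openssl x509 -in /etc/pki/ca-trust/source/anchors/ca.pem -noout -subject -issuer -dates
trust list | grep -i -B2 -A2 "<CN printed by the command above>"
```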
Sun, 29 Mar 2020 at 10:47, David David <dd432690(a)gmail.com>:
> I did as you said:
> copied from engine /etc/ovirt-engine/ca.pem onto my desktop into
> /etc/pki/ca-trust/source/anchors and then run update-ca-trust
> it didn’t help, still the same errors
>
>
> Fri, 27 Mar 2020 at 21:56, Strahil Nikolov <hunter86_bg(a)yahoo.com>:
>
>> On March 27, 2020 12:23:10 PM GMT+02:00, David David <dd432690(a)gmail.com>
>> wrote:
>> >here is debug from opening console.vv by remote-viewer
>> >
>> >2020-03-27 14:09 GMT+04:00, Milan Zamazal <mzamazal(a)redhat.com>:
>> >> David David <dd432690(a)gmail.com> writes:
>> >>
>> >>> yes i have
>> >>> console.vv attached
>> >>
>> >> It looks the same as mine.
>> >>
>> >> There is a difference in our logs, you have
>> >>
>> >> Possible auth 19
>> >>
>> >> while I have
>> >>
>> >> Possible auth 2
>> >>
>> >> So I still suspect a wrong authentication method is used, but I don't
>> >> have any idea why.
>> >>
>> >> Regards,
>> >> Milan
>> >>
>> >>> 2020-03-26 21:38 GMT+04:00, Milan Zamazal <mzamazal(a)redhat.com>:
>> >>>> David David <dd432690(a)gmail.com> writes:
>> >>>>
>> >>>>> copied from qemu server all certs except "cacrl" to my
>> >desktop-station
>> >>>>> into /etc/pki/
>> >>>>
>> >>>> This is not needed, the CA certificate is included in console.vv
>> >and no
>> >>>> other certificate should be needed.
>> >>>>
>> >>>>> but remote-viewer is still didn't work
>> >>>>
>> >>>> The log looks like remote-viewer is attempting certificate
>> >>>> authentication rather than password authentication. Do you have
>> >>>> password in console.vv? It should look like:
>> >>>>
>> >>>> [virt-viewer]
>> >>>> type=vnc
>> >>>> host=192.168.122.2
>> >>>> port=5900
>> >>>> password=fxLazJu6BUmL
>> >>>> # Password is valid for 120 seconds.
>> >>>> ...
>> >>>>
>> >>>> Regards,
>> >>>> Milan
>> >>>>
>> >>>>> 2020-03-26 2:22 GMT+04:00, Nir Soffer <nsoffer(a)redhat.com>:
>> >>>>>> On Wed, Mar 25, 2020 at 12:45 PM David David <dd432690(a)gmail.com>
>> >>>>>> wrote:
>> >>>>>>>
>> >>>>>>> ovirt 4.3.8.2-1.el7
>> >>>>>>> gtk-vnc2-1.0.0-1.fc31.x86_64
>> >>>>>>> remote-viewer version 8.0-3.fc31
>> >>>>>>>
>> >>>>>>> can't open vm console by remote-viewer
>> >>>>>>> vm has vnc console protocol
>> >>>>>>> when click on console button to connect to a vm, the
>> >remote-viewer
>> >>>>>>> console disappear immediately
>> >>>>>>>
>> >>>>>>> remote-viewer debug in attachment
>> >>>>>>
>> >>>>>> You an issue with the certificates:
>> >>>>>>
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.238:
>> >>>>>> ../src/vncconnection.c Set credential 2 libvirt
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Searching for certs in /etc/pki
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Searching for certs in /root/.pki
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Failed to find certificate CA/cacert.pem
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c No CA certificate provided, using GNUTLS
>> >global
>> >>>>>> trust
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Failed to find certificate CA/cacrl.pem
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Failed to find certificate
>> >>>>>> libvirt/private/clientkey.pem
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Failed to find certificate
>> >>>>>> libvirt/clientcert.pem
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Waiting for missing credentials
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Got all credentials
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c No CA certificate provided; trying the
>> >system
>> >>>>>> trust store instead
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
>> >>>>>> ../src/vncconnection.c Using the system trust store and CRL
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
>> >>>>>> ../src/vncconnection.c No client cert or key provided
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
>> >>>>>> ../src/vncconnection.c No CA revocation list provided
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.241:
>> >>>>>> ../src/vncconnection.c Handshake was blocking
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.243:
>> >>>>>> ../src/vncconnection.c Handshake was blocking
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.251:
>> >>>>>> ../src/vncconnection.c Handshake was blocking
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
>> >>>>>> ../src/vncconnection.c Handshake done
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
>> >>>>>> ../src/vncconnection.c Validating
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.301:
>> >>>>>> ../src/vncconnection.c Error: The certificate is not trusted
>> >>>>>>
>> >>>>>> Adding people that may know more about this.
>> >>>>>>
>> >>>>>> Nir
>> >>>>>>
>> >>>>>>
>> >>>>
>> >>>>
>> >>
>> >>
>>
>> Hello,
>>
>> You can try to take the engine's CA (maybe it's useless) and put it on
>> your system in:
>> /etc/pki/ca-trust/source/anchors (if it's EL7 or a Fedora) and then run
>> update-ca-trust
>>
>> Best Regards,
>> Strahil Nikolov
>>
>
5 years, 1 month
Import storage domain with different storage type?
by Rik Theys
Hi,
We have an oVirt environment with a FC storage domain. Multiple LUNs on
a SAN are exported to the oVirt nodes and combined in a single FC
storage domain.
The SAN replicates the disks to another storage box that has iSCSI
connectivity.
Is it possible to - in case of disaster - import the existing,
replicated, storage domain as an iSCSI domain and import/run the VM's
from that domain? Or is import of a storage domain only possible if they
are the same type? Does it also work if multiple LUNs are needed to form
the storage domain?
Are there any special actions that should be performed beyond the
regular import action?
Regards,
Rik
--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>
5 years, 1 month
Doubts related to single HCI and storage network
by Gianluca Cecchi
Hello,
I deployed HCI 4.3.9 with gluster and single node from the cockpit based
interface.
During install I specified the storage network, using
1) For the mgmt network and hostname of the hypervisor
172.16.0.30 ovirt.mydomain
2) for the storage network (even if not used in single host... but in case
of future addition..)
10.50.50.11 ovirtst.mydomain.storage
All went well, and the system runs quite OK: I was able to deploy an OCP 4.3.8
cluster with 3 workers and 3 masters... apart from erratic "vm paused"
messages, for which I'm going to send a dedicated mail...
I see warning messages of this kind in engine.log:
2020-03-27 00:32:08,655+01 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [15cbd52e] Could not associate brick
'ovirtst.mydomain.storage:/gluster_bricks/engine/engine' of volume
'40ad3b5b-4cc1-495a-815b-3c7e3436b15b' with correct network as no gluster
network found in cluster '9cecfa02-6c6c-11ea-8a94-00163e0acd5c'
I would have expected the setup to create a gluster network as it was part
of the initial configuration.... could this be a subject for an RFE?
What can I do to fix this warning?
Thanks,
Gianluca
5 years, 1 month
vm console problem
by David David
ovirt 4.3.8.2-1.el7
gtk-vnc2-1.0.0-1.fc31.x86_64
remote-viewer version 8.0-3.fc31
can't open vm console by remote-viewer
vm has vnc console protocol
when click on console button to connect to a vm, the remote-viewer
console disappear immediately
remote-viewer debug in attachment
5 years, 1 month
Speed Issues
by Christian Reiss
Hey folks,
A Gluster-related question: I have SSDs in a RAID that can do 2 GB/s writes
and reads (actually more, but meh) in a 3-way HCI cluster connected with a
10 Gbit connection, yet things are pretty slow inside Gluster.
I have these settings:
Options Reconfigured:
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.shd-max-threads: 8
features.shard: on
features.shard-block-size: 64MB
server.event-threads: 8
user.cifs: off
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.eager-lock: enable
performance.low-prio-threads: 32
network.ping-timeout: 30
cluster.granular-entry-heal: enable
storage.owner-gid: 36
storage.owner-uid: 36
cluster.choose-local: true
client.event-threads: 16
performance.strict-o-direct: on
network.remote-dio: enable
performance.client-io-threads: on
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
cluster.readdir-optimize: on
cluster.metadata-self-heal: on
cluster.data-self-heal: on
cluster.entry-self-heal: on
cluster.data-self-heal-algorithm: full
features.uss: enable
features.show-snapshot-directory: on
features.barrier: disable
auto-delete: enable
snap-activate-on-create: enable
Writing inside /gluster_bricks yields those 2 GB/s writes, and reading is
the same.
Inside the /rhev/data-center/mnt/glusterSD/ dir, reads go down to
366 MB/s while writes plummet to 200 MB/s.
Summed up: writing into the SSD RAID in the LVM/XFS gluster brick
directory is fast, while writing into the mounted Gluster dir is horribly slow.
The above can be seen and repeated on all 3 servers. The network can do
the full 10 Gbit (tested with, among others, rsync and iperf3).
Anyone with some idea of what's missing / going on here?
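For reference, comparing the brick and the FUSE mount with the same direct-I/O pattern (rather than plain dd) tends to give more comparable numbers; a sketch (paths are placeholders, and the test files should be removed afterwards):
```
# Directly on the brick filesystem
fio --name=brick --filename=/gluster_bricks/<brick>/fio.test \
    --rw=write --bs=1M --size=4G --direct=1 --ioengine=libaio --iodepth=16

# Through the FUSE mount oVirt actually uses
fio --name=fuse --filename=/rhev/data-center/mnt/glusterSD/<host>:_<volume>/fio.test \
    --rw=write --bs=1M --size=4G --direct=1 --ioengine=libaio --iodepth=16
```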
Thanks folks,
as always stay safe and healthy!
--
with kind regards,
mit freundlichen Gruessen,
Christian Reiss
5 years, 1 month
bare-metal to self-hosted engine
by kim.kargaard@noroff.no
Hi,
We currently have an oVirt engine running on a server. The server has CentOS installed and the ovirt-engine installed, but it is not a node that hosts VMs. I would like to move the ovirt-engine to a self-hosted engine, and it seems like this article is the one to follow: https://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_...
Am I correct that I can migrate from a bare-metal CentOS server engine to a self-hosted engine VM, and is the documentation above the only documentation I will need to complete this process?
Kind regards
Kim
5 years, 1 month
can't run VM
by garcialiang.anne@gmail.com
Hi,
I created a VM on the oVirt engine, but I can't run this VM. The message is:
2020-03-26 21:28:02,745+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-147) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM VirtMachine due to a failed validation: [Cannot run VM. There is no host that satisfies current scheduling constraints. See below for details:, The host xxxxxxxxx did not satisfy internal filter Network because display network ovirtmgmt was missing.] (User: admin@internal-authz).
Could you help me ?
Thanks
Anne
5 years, 1 month
Orphaned ISO Storage Domain
by bob.franzke@mdaemon.com
Greetings all,
Full disclosure, complete OVIRT novice here. I inherited an OVIRT system and had a complete ovirt-engine back in December-January. Because of time and my inexperience with OVIRT, I had to resort to hiring consultants to rebuild my OVIRT engine from backups. That’s a situation I never want to repeat.
Anyway, we were able to piece it together and at least get most functionality back. The previous setup had a ISO storage domain called ‘ISO-COLO’ that seems to have been hosted on the engine server itself. The engine hostname is ‘mydesktop’. We restored the engine from backups I had taken of the SQL DB and various support files using the built in OVIRT backup tool.
So now when looking into the OVIRT console, I see the storage domain listed. It has a status of ‘inactive’ showing in the list of various storage domains we have setup for this. We tried to ‘activate’ it and it fails activation. The path listed for the domain is mydesktop:/gluster/colo-iso. On the host however there is no mountpoint that equates to that path:
[root@mydesktop ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 47G 0 47G 0% /dev
tmpfs 47G 12K 47G 1% /dev/shm
tmpfs 47G 131M 47G 1% /run
tmpfs 47G 0 47G 0% /sys/fs/cgroup
/dev/mapper/centos-root 50G 5.4G 45G 11% /
/dev/sda2 1014M 185M 830M 19% /boot
/dev/sda1 200M 12M 189M 6% /boot/efi
/dev/mapper/centos-home 224G 15G 210G 7% /home
tmpfs 9.3G 0 9.3G 0% /run/user/0
The original layout looked like this on the broken engine:
[root@mydesktop ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos_mydesktop-root 50G 27G 20G 58% /
devtmpfs 24G 0 24G 0% /dev
tmpfs 24G 28K 24G 1% /dev/shm
tmpfs 24G 42M 24G 1% /run
tmpfs 24G 0 24G 0% /sys/fs/cgroup
/dev/mapper/centos_mydesktop-home 25G 45M 24G 1% /home
/dev/sdc1 1014M 307M 708M 31% /boot
/dev/mapper/centos_mydesktop-gluster 177G 127G 42G 76% /gluster
tmpfs 4.7G 0 4.7G 0% /run/user/0
So it seems the orphaned storage domain is just point to a path that does not exist on the new Engine host.
Also noticed some of the hosts are trying to access this storage domain and getting errors:
The error message for connection mydesktop:/gluster/colo-iso returned by VDSM was: Problem while trying to mount target
3/17/20 10:47:05 AM
Failed to connect Host vm-host-colo-2 to the Storage Domains ISO-Colo.
3/17/20 10:47:05 AM
So it seems hosts are trying to connect to this storage domain but cannot because it's not there. None of the files from the original path are available, so I am not even sure what we are missing, if anything.
So what are my options here? Destroy the current ISO domain and recreate it, or somehow provide the correct path on the engine server? Currently the storage space I can use is mounted under /home, which is a different path than the original one. I'm not sure if anything can be done with the disk layout at this point to get the gluster path back on the engine server itself. Right now we cannot attach CDs to VMs for booting. No choices show up when doing a 'Run Once' on an existing VM, so I would like to get this working so I can fix a broken VM that I need to boot off of ISO media.
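For reference, a quick way to see what 'mydesktop' can still serve at that path today (a sketch; whether the old domain was backed by Gluster or plain NFS is an assumption to verify):
```
gluster volume list        # if the old ISO domain was backed by a Gluster volume
showmount -e mydesktop     # if it was exported over NFS instead
ls -ld /gluster/colo-iso   # the path the domain definition still points at
```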
Thanks in advance for any help you can provide.
5 years, 1 month
Gluster permissions HCI
by Strahil Nikolov
Hello All,
can someone assist me with an issue?
Could you check the ownership of some folders for me?
1. ls -l /rhev/data-center/mnt/glusterSD
2. ls -l /rhev/data-center/mnt/glusterSD/<gluster_node>_<engine volume>
3. ls -l /rhev/data-center/mnt/glusterSD/<gluster_node>_<engine volume>/<volume uuid>/images
4. ls -l /rhev/data-center/mnt/glusterSD/<gluster_node>_<any data domain>
5. ls -l /rhev/data-center/mnt/glusterSD/<gluster_node>_<any data domain>/<volume uuid>/images
Also mention your gluster version.
Thanks in advance.
Best Regards,
Strahil Nikolov
5 years, 1 month
how to use idrac interface as a vm network
by Nathanaël Blanchet
Hello,
I noticed an iDRAC interface has been available in the host network tab since
some oVirt version I can't pin down.
Mine are already plugged into a dedicated VLAN to administer the hosts at the
lowest level.
I tried to use it as a VM network on the same VLAN, but it doesn't work.
What am I supposed to do with this available interface?
PS: on HP hosts, no iLO interface appears in the host network tab.
--
Nathanaël Blanchet
Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
5 years, 1 month
Re: Windows VirtIO drivers
by eevans@digitaldatatechs.com
That may be true, but with the ISO domain, when you open virt-viewer you can
change the CD very easily... maybe it works that way as well.
Eric Evans
Digital Data Services LLC.
304.660.9080
-----Original Message-----
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Wednesday, March 25, 2020 2:35 PM
To: eevans(a)digitaldatatechs.com; 'Shareef Jalloq' <shareef(a)jalloq.co.uk>;
users(a)ovirt.org
Subject: [ovirt-users] Re: Windows VirtIO drivers
Don't think you have to use the ISO Domain any longer.
You can upload to a Data Domain and when you highlight the VM in the
management GUI, select the three dots in the top left for extra options and
there is a change cd option. That option will allow for attaching an ISO
from a Data Domain.
That is what I recall when I was using oVirt a month or so ago.
________________________________________
From: eevans(a)digitaldatatechs.com <eevans(a)digitaldatatechs.com>
Sent: Wednesday, March 25, 2020 2:28 PM
To: 'Shareef Jalloq'; users(a)ovirt.org
Subject: [ovirt-users] Re: Windows VirtIO drivers
You have to copy the ISO and VFD files to the ISO domain to make them
available to the VMs that need drivers. The tool's basic usage is:
engine-iso-uploader [options] list
engine-iso-uploader [options] upload [file] [file] ... [file]
Documentation is found here: https://www.ovirt.org/documentation/admin-guide/chap-Utilities.html
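For example, uploading the files shipped by the virtio-win package might look like this sketch (the ISO domain name is a placeholder, and the tool prompts for the admin@internal password):
```
engine-iso-uploader --iso-domain=<ISO_DOMAIN> upload \
    /usr/share/virtio-win/virtio-win.iso \
    /usr/share/virtio-win/virtio-win_servers_amd64.vfd
```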
Eric Evans
Digital Data Services LLC.
304.660.9080
From: Shareef Jalloq <shareef(a)jalloq.co.uk>
Sent: Wednesday, March 25, 2020 1:51 PM
To: users(a)ovirt.org
Subject: [ovirt-users] Windows VirtIO drivers
Hi,
it seems the online documentation regarding the windows installation steps
is well out of date. Where is there any current documentation on where to
get the VirtIO drivers for a Windows installation?
From a bit of Googling, it seems that I need to 'yum install virtio-win' on
the engine VM and then copy the relevant .iso/.vfd to the ISO domain. Is
that correct?
Where is the documentation maintained and how do I open a bug on it?
Thanks, Shareef.
5 years, 1 month
Re: Windows VirtIO drivers
by eevans@digitaldatatechs.com
The ISOs and VFDs are in /usr/share/virtio-win.
Eric Evans
Digital Data Services LLC.
304.660.9080
-----Original Message-----
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Wednesday, March 25, 2020 2:35 PM
To: eevans(a)digitaldatatechs.com; 'Shareef Jalloq' <shareef(a)jalloq.co.uk>;
users(a)ovirt.org
Subject: [ovirt-users] Re: Windows VirtIO drivers
Don't think you have to use the ISO Domain any longer.
You can upload to a Data Domain and when you highlight the VM in the
management GUI, select the three dots in the top left for extra options and
there is a change cd option. That option will allow for attaching an ISO
from a Data Domain.
That is what I recall when I was using oVirt a month or so ago.
________________________________________
From: eevans(a)digitaldatatechs.com <eevans(a)digitaldatatechs.com>
Sent: Wednesday, March 25, 2020 2:28 PM
To: 'Shareef Jalloq'; users(a)ovirt.org
Subject: [ovirt-users] Re: Windows VirtIO drivers
You have to copy the iso and vfd files to the ISO domain to make them
available to the vm's that need drivers.
engine-iso-uploader options list
# engine-iso-uploader options upload file file file Documentation is found
here: https://www.ovirt.org/documentation/admin-guide/chap-Utilities.html
Eric Evans
Digital Data Services LLC.
304.660.9080
From: Shareef Jalloq <shareef(a)jalloq.co.uk>
Sent: Wednesday, March 25, 2020 1:51 PM
To: users(a)ovirt.org
Subject: [ovirt-users] Windows VirtIO drivers
Hi,
it seems the online documentation regarding the windows installation steps
is well out of date. Where is there any current documentation on where to
get the VirtIO drivers for a Windows installation?
From a bit of Googling, it seems that I need to 'yum install virtio-win' on
the engine VM and then copy the relevant .iso/.vfd to the ISO domain. Is
that correct?
Where is the documentation maintained and how do I open a bug on it?
Thanks, Shareef.
5 years, 1 month
Windows VirtIO drivers
by Shareef Jalloq
Hi,
it seems the online documentation regarding the Windows installation steps
is well out of date. Is there any current documentation on where to
get the VirtIO drivers for a Windows installation?
From a bit of Googling, it seems that I need to 'yum install virtio-win' on
the engine VM and then copy the relevant .iso/.vfd to the ISO domain. Is
that correct?
Where is the documentation maintained and how do I open a bug on it?
Thanks, Shareef.
5 years, 1 month
oVirt 4.4.0 Alpha release refresh is now available for testing
by Sandro Bonazzola
oVirt 4.4.0 Alpha release refresh is now available for testing
The oVirt Project is excited to announce the availability of the alpha
release refresh of oVirt 4.4.0 for testing, as of March 6th, 2020
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics on top of oVirt 4.3.
Important notes before you try it
Please note this is an Alpha release.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not to be used in production, and it is not feature
complete.
In particular, please note that upgrades from 4.3, and future upgrades from
this alpha to the final 4.4 release, are not supported.
Some of the features included in oVirt 4.4.0 Alpha require content that
will be available in CentOS Linux 8.2 which are currently included in Red
Hat Enterprise Linux 8.2 beta. If you want to have a better experience you
can test oVirt 4.4.0 Alpha on Red Hat Enterprise Linux 8.2 beta.
Known Issues
- After installation open the Default cluster and hit “Save”, for any other new Cluster using CPU autodetection the dialog needs to be explicitly saved after the detection happens, after first host is added. (bug https://bugzilla.redhat.com/1770697)
Installation instructions
For the engine: either use appliance or:
- Install CentOS Linux 8 minimal from http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- dnf module enable -y javapackages-tools pki-deps 389-ds
- dnf install ovirt-engine
- engine-setup
For the nodes:
Either use oVirt Node ISO or:
- Install CentOS Linux 8 from http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-... ; select minimal installation
- dnf config-manager --set-enabled PowerTools
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- Attach the host to engine and let it be deployed.
What’s new in oVirt 4.4.0 Alpha?
- Hypervisors based on CentOS Linux 8 (rebuilt from award winning RHEL8), for both oVirt Node and standalone CentOS Linux hosts
- Easier network management and configuration flexibility with NetworkManager
- VMs based on a more modern Q35 chipset with legacy seabios and UEFI firmware
- Support for direct passthrough of local host disks to VMs
- Live migration improvements for High Performance guests.
- New Windows Guest tools installer based on WiX framework now moved to VirtioWin project
- Dropped support for cluster level prior to 4.2
- Dropped SDK3 support
- 4K disks support
- Exporting a VM to a data domain
- Editing of floating disks
- Integrating ansible-runner into engine, which allows a more detailed monitoring of playbooks executed from engine
- Adding/reinstalling hosts are now completely based on Ansible
- The OpenStack Neutron Agent cannot be configured by oVirt anymore, it should be configured by TripleO instead
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.1 or newer
* CentOS Linux (or similar) 8.1 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.1 or newer (8.2 beta recommended)
* CentOS Linux (or similar) 8.1 or newer
* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
If you manage more than one oVirt instance, OKD or RDO, we also recommend
trying ManageIQ <http://manageiq.org/>.
In that case, please be sure to take the qc2 image and not the ova image.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.0 release highlights:
http://www.ovirt.org/release/4.4.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.0/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.
5 years, 1 month
Migrate Hosted Engine to new Storage
by Anton Louw
Hi Everybody,
Does anybody know when we will most likely be able to do a live storage migration of the Hosted Engine? I've been battling with the backup-and-restore approach for a couple of days now, and it is starting to feel like a fruitless exercise.
Thanks
Anton Louw
Cloud Engineer: Storage and Virtualization
______________________________________
D: 087 805 1572 | M: N/A
A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
anton.louw(a)voxtelecom.co.za
www.vox.co.za
5 years, 1 month
upload image using python api
by David David
Hi,
I can't upload a disk image with this script:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload...
This error message appears when I try to upload the image:
# python upload_disk.py --engine-url https://alias-e.localdomain
--username admin@internal --disk-format raw --sd-name iscsi-test-7 -c
ca.pem /home/linux1.raw
Checking image...
Image format: raw
Disk format: raw
Disk content type: data
Disk provisioned size: 42949672960
Disk initial size: 42949672960
Disk name: linux1.raw
Connecting...
Password:
Creating disk...
Creating transfer session...
Uploading image...
Traceback (most recent call last):
File "upload_disk.py", line 288, in <module>
with ui.ProgressBar() as pb:
TypeError: __init__() takes at least 2 arguments (1 given)
Software in use:
ovirt-engine 4.3.8.2-1
python-ovirt-engine-sdk4-4.3.2-2
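A guess on my side: the script above comes from the SDK master branch, and the
traceback suggests its ui.ProgressBar call expects a newer version of the ui
helper than the one installed with the 4.3 packages. One thing I plan to try is
running the example that matches the installed SDK instead (a sketch; the doc
path is an assumption, adjust to wherever the installed package ships its examples):
rpm -q python-ovirt-engine-sdk4 ovirt-imageio-common
ls /usr/share/doc/python-ovirt-engine-sdk4*/examples/upload_disk.py
python /usr/share/doc/python-ovirt-engine-sdk4*/examples/upload_disk.py --help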
5 years, 1 month
does SPM still exist?
by yam yam
Hello,
I heard some say that SPM disappeared as of 3.6.
Nevertheless, SPM still appears in the oVirt admin portal and even in RHV's manual.
So I am wondering whether SPM still exists now.
Also, how could I get more detailed information about oVirt internals?
Is reviewing the code the best way?
5 years, 1 month
Re: safe to have perf and dstat on ovirt node?
by Gianluca Cecchi
On Mon, Mar 23, 2020 at 11:56 PM Shirly Radco <sradco(a)redhat.com> wrote:
> Hi,
>
> I can't answer about perf, but would Collectd be useful for you?
> It is already installed on the hosts and engine.
>
> Best,
> Shirly
>
>
Thanks for your answer Shirly,
do you mean implementing Metrics Store?
Gianluca
5 years, 1 month
connect from public to private network
by nikkognt@gmail.com
Good morning,
I have an oVirt infrastructure and it is configured on a private network.
How can I connect from the public network to the web portal and to the virtual machines via the SPICE client?
My infrastructure consists of 1 engine and 2 hosts.
How can I do this? Any suggestions?
Thanks!
Best regards
Nikkognt
5 years, 1 month
Data Center, Cluster and Storage Domains show as "Down" after backup and restore
by anton.louw@voxtelecom.co.za
Hi Everybody,
So I did a full backup of our HE, redeployed and restored. After the restore, I ran engine-setup on the newly deployed HE. I can access the environment as per normal; the only issue is that everything still shows as "down", including hosts and VMs.
Is there something I need to do in order to get everything to display as "Up" again?
Thanks
5 years, 1 month
storage allocation of template
by yam yam
Hello,
I want to know whether a base disk image is physically copied whenever a VM is created from a template.
I've just created a VM based on a template with 1 disk and checked the VM's virtual disk images.
There were (1) a base image copied(?) from the template, and (2) a new delta image.
Here, image (1) was identical to that of the template, and I know that image is used in read-only mode.
So I am wondering why the base (disk) image is inefficiently copied instead of directly referencing the template's image as a backing file.
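In case it helps, this is roughly how I compared the images on the host (a
sketch; the mount point and UUIDs are placeholders for my environment):
# inspect the VM disk's backing chain, if any
qemu-img info --backing-chain /rhev/data-center/mnt/<server:_export>/<sd_uuid>/images/<img_uuid>/<vol_uuid>
# compare the VM's base volume with the template's volume
md5sum <path_to_vm_base_volume> <path_to_template_volume>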
5 years, 1 month
safe to have perf and dstat on ovirt node?
by Gianluca Cecchi
Hello,
I would need to have on ovirt node these three packages:
- ipmitool
- perf
- dstat
Comparing latest RHVH (4.3.8) and latest ovirt-node-ng (4.3.9) I see that:
- ipmitool is present on both
- perf is present as an installable package in RHVH and not in ovirt-node-ng
- dstat is not available in either of them
Why the difference regarding perf?
Is it safe to install perf and dstat on ovirt-node-ng taking them from
CentOS updates?
This is for a lab and the packages would be used to compute some metrics
about performance of the node.
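To be concrete, what I would run on the node is simply the following (a sketch,
assuming the CentOS base/updates repositories are reachable from the node):
yum install perf dstat
# my understanding (to be confirmed) is that packages layered onto oVirt Node
# this way may need to be reinstalled after a node image upgrade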
Thanks,
Gianluca
5 years, 1 month
Re: Moving Hosted Engine
by Joseph Goldman
Hi Anton,
I believe so - but I believe you can do a hosted-engine deploy with
the backup file specified so it auto-restores from it.
I need to try it on a test system; I haven't done much DR testing yet.
My builds have been rushed into production due to time constraints, and
although I back up and ship off-site so I have it all, I actually haven't
done a full breakdown and re-restore yet :/
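The flow I have in mind is roughly this (only a sketch, file names are examples
and I have not verified it end to end yet):
# on the current engine VM
engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log
# copy the backup to the host that will run the deployment, then on that host:
hosted-engine --deploy --restore-from-file=engine-backup.tar.gz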
I'll be interested to know how you go, if you don't mind.
Thanks,
Joe
On 2020-03-23 3:32 PM, Anton Louw wrote:
>
>
> Hi Joseph,
>
> Thanks for the reply. So in short, the process will then be to 1) Take
> a backup of the Hosted Engine, 2) Do a clean-up of the Hosted Engine,
> and then lastly redeploy on one of the nodes using hosted-engine
> deploy? After that then I will do a restore?
>
> Thanks
>
>
> *Anton Louw*
> *Cloud Engineer: Storage and Virtualization* at *Vox*
> ------------------------------------------------------------------------
> *T:* 087 805 0000 | *D:* 087 805 1572
> *M:* N/A
> *E:* anton.louw(a)voxtelecom.co.za <mailto:anton.louw@voxtelecom.co.za>
> *A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> www.vox.co.za <http://www.vox.co.za>
>
>
> *From:*Joseph Goldman <joseph(a)goldman.id.au>
> *Sent:* 20 March 2020 14:23
> *To:* users(a)ovirt.org
> *Subject:* [ovirt-users] Re: Moving Hosted Engine
>
> It is my understanding that yes, taking a full backup and then redeploying
> into a new storage domain using the backup is the correct way - I don't
> think you'd need a new physical server for it, though, unless you mean you
> are running on bare metal.
>
> On 2020-03-20 11:04 PM, anton.louw(a)voxtelecom.co.za
> <mailto:anton.louw@voxtelecom.co.za> wrote:
> > Hi All,
> >
> > I am in serious need of some help. We currently have our Hosted
> Engine running on a Storage Domain, but that storage Domain is
> connected to a San that needs to be decommissioned. What would I need
> to do in order to get my Hosted Engine running on a different Storage
> Domain?
> >
> > Will I need to build a new CentOS server, download and Install the
> oVirt Setup, and then restore the backup that I took of my original
> Hosted Engine, using “engine-backup --mode=backup”?
> >
> > I tried looking around at backup solutions that would restore the
> whole VM to a different Storage Domain, but I can’t seem to find anything.
> >
> > Any advise would be greatly appreciated.
>
5 years, 1 month
Can't connect storage domain when using VLAN
by Brian Dumont
Hello,
First post!
I'm trying to add an NFS storage domain with VLAN tagging turned on for the
logical network. It errors out with "Error while executing action Add
Storage Connection: Problem while trying to mount target". I am able to
attach to this NFS export without VLAN tagging. I have also verified that
I can attach to this NFS export with VLAN tagging using a non-oVirt RHEL host.
Here's my config:
- Logical network created at Data Center
- Enable Vlan Tagging - ID 15
- Not a VM Network
- Cluster status
- Network - Up
- Assign all - checked
- Require - checked
- no other boxes checked
Host status
- Host 1 and Host 2 each show
- Status - up
- BootP - None (same issue with static IPs)
- Link Layer info shows VLAN ID 15 on switch port
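As a sanity check I can also try a manual mount from one of the hosts over the
tagged interface (a sketch; the server and export names are placeholders):
ip -d link show                     # confirm a VLAN 15 interface exists on the host
mkdir -p /mnt/test
mount -t nfs -o vers=3 nfs-server:/export /mnt/test
umount /mnt/test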
Appreciate any help!
Brian
5 years, 1 month
bonding and SR-IOV
by Nathanaël Blanchet
Hello,
I successfully attached a VF to a VM on a single-PF NIC following
https://access.redhat.com/articles/3215851.
I want the VM with the VF to be migratable, but the NIC hotplug/replug takes too long.
too long. So I tested the mode 1 bonding (active-backup) with a virtio
nic following this
https://www.ovirt.org/develop/release-management/features/network/liveMig...
On a single PF, the resilience tests with migration are correct with 1
or 2 second downtime.
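For reference, the guest-side bond in that working single-PF setup looks roughly
like this (a sketch; interface names are examples from my lab):
nmcli con add type bond ifname bond0 con-name bond0 bond.options "mode=active-backup,primary=ens7,miimon=100"
nmcli con add type bond-slave ifname ens7 master bond0   # the SR-IOV VF, preferred path
nmcli con add type bond-slave ifname ens3 master bond0   # the virtio nic, fallback during migration
nmcli con up bond0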
But now I want to do the same on a typical host with LACP PF:
Considering vlan13, I must bond a virtio NIC on a brv13/brv13 profile
with the SR-IOV/brv13 profile as the primary interface.
Everything works when the primary interface is the brv13/brv13 profile (it works
as a traditional bridge config), but not when switching to the
SR-IOV/brv13 profile.
I know that in this case the SR-IOV profile is connected to a LACP
interface, but does that mean I have to configure the VF vNIC in the
VM to be part of the LACP (bond1) as a single member, and in that case
do an active-backup bond0 with two members, bond1 and the brv13 nic?
I hope this is clear enough.
Thanks for your help.
--
Nathanaël Blanchet
Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
5 years, 1 month
Moving Hosted Engine
by anton.louw@voxtelecom.co.za
Hi All,
I am in serious need of some help. We currently have our Hosted Engine running on a Storage Domain, but that storage Domain is connected to a San that needs to be decommissioned. What would I need to do in order to get my Hosted Engine running on a different Storage Domain?
Will I need to build a new CentOS server, download and Install the oVirt Setup, and then restore the backup that I took of my original Hosted Engine, using “engine-backup --mode=backup”?
I tried looking around at backup solutions that would restore the whole VM to a different Storage Domain, but I can’t seem to find anything.
Any advice would be greatly appreciated.
5 years, 1 month
When and how to change VDSM REVISION in multipath.conf
by Gianluca Cecchi
Hello,
I created some hosts at the time of 4.3.3 or similar and, connecting them to iSCSI,
I set this in multipath.conf to indicate the customization:
"
# VDSM REVISION 1.5
# VDSM PRIVATE
# This file is managed by vdsm.
...
"
Then I updated the environment gradually to 4.3.8 but of course the file
remained the same because of its configuration and the "PRIVATE" label.
Now I install new hosts in 4.3.8 and I see that by default they have these
lines
"
# VDSM REVISION 1.8
# This file is managed by vdsm.
"
So the question is:
suppose I customize the multipath.conf file using the "# VDSM PRIVATE"
line, how should I manage the REVISION number as time goes on and I
apply minor updates to the hosts?
What changes between 1.5 above and 1.8? Is there any impact if I leave 1.5,
for example, across all my 4.3.8 hosts?
Thanks in advance for clarification,
Gianluca
5 years, 1 month
HCI with Ansible deployment steps: oVirt 4.3.7 + Gluster
by adrianquintero@gmail.com
Hello Community,
Wanted to contribute with the little I know after working a few times with oVirt in an HCI environment.
One of the questions for which I was not able to find a straightforward answer is how to deploy an HCI oVirt environment, so after a few struggles I was able to achieve that goal.
Below is an example of what I used to fully deploy a 3 node cluster with oVirt 4.3.7.
Please note I am no expert and just providing what I have accomplished in my lab, so use the below at your own risk.
1.-Hardware used in my lab :
3 x Dell R610s, 48Gb Ram per server, Intel Xeon L5640 @ 2.27GHz Hex core x 2
2 x 146GB HDDs for OS
1 x 600GB HDD for Gluster Bricks
Dual 10Gb Nic.
2.-Deployed the servers using oVirt 4.3.7 image.
3.-Added ssh keys to be able to ssh from host 1 over to host 2 and host 3 without a password
4.-Modified the following files for ansible deployment on Host 1
4.1.-Under etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
-- Modify file he_gluster_vars.json with the required Hosted Engine VM information
-- Modify file gluster_inventory.yml with the hosts and gluster information
You can find the modified files that I used here https://github.com/viejillo/ovirt-hci-ansible
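With those two files in place, the deployment is kicked off from host 1 roughly
like this (a sketch; the playbook file name may differ between versions, so check
the README in that directory):
cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
ansible-playbook -i gluster_inventory.yml hc_deployment.yml --extra-vars='@he_gluster_vars.json'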
I hope this helps and if any questions feel free to reach out.
Regards,
AQ
5 years, 1 month
ovirt-engine service in hosted engine appliance cannot start properly
by it@rafalwojciechowski.pl
Hello,
I have a problem with oVirt in my home lab - probably after a power outage (all hypervisors + the hosted engine appliance went down).
Current state: the hosted engine appliance is starting from the OS perspective, however some JBoss-related stuff seems to be malfunctioning.
/var/log/ovirt-engine:
grep ERROR engine.log | tail -n2
2020-03-17 17:35:52,769+01 ERROR [org.ovirt.engine.core.bll.network.macpool.MacPoolPerCluster] (ServerService Thread Pool -- 50) [] Error initializing: Duplicate key VM [***some_vm_name***]
2020-03-17 17:35:52,922+01 ERROR [org.ovirt.engine.core.bll.Backend] (ServerService Thread Pool -- 50) [] Error during initialization: javax.ejb.EJBException: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
grep ERROR server.log | tail -n2
2020-03-17 17:36:00,755+01 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "engine.ear")]) - failure description: {"WFLYCTL0080: Failed services" => {"jboss.deployment.subunit.\"engine.ear\".\"bll.jar\".component.Backend.START" => "java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
2020-03-17 17:36:01,021+01 ERROR [org.jboss.as] (Controller Boot Thread) WFLYSRV0026: WildFly Full 14.0.1.Final (WildFly Core 6.0.2.Final) started (with errors) in 56493ms - Started 1492 of 1728 services (6 services failed or missing dependencies, 387 services are lazy, passive or on-demand)
Probably because of that I am receiving a 503 while connecting to the web GUI:
grep 503 /var/log/httpd/access_log | tail -n1
***some_intenal_ip*** - - [18/Mar/2020:13:01:26 +0100] "GET /ovirt-engine/services/health HTTP/1.1" 503 299 "-" "Python-urllib/2.7"
Ovirt Engine version: ovirt-engine-4.2.8.2-1.el7.noarch
If you could help me or at least give some hints/advice, that would be great. I tried to google it, however without any luck. Thanks in advance.
BTW - in this case the ovirt-engine service status is "fine" when checking via systemctl - I am not sure whether it should reflect the state "active (running)" when it is only partially working.
5 years, 1 month
3.6-- hosted_storage volume not importing
by briany@alleninstitute.org
Hello,
I have been working to upgrade our long-neglected oVirt cluster from v3.5 to v4.3. Our upgrade from v3.5 to 3.6 is mostly complete except the hosted_storage volume is not importing. None of the tricks that I have come across have helped so far. I am hoping someone can help us complete this step so we can continue with the next set of upgrades.
Our upgrade to 3.6 was not completely straightforward. We needed to move the hosted engine itself to new NFS storage. This was done by performing an in-place upgrade of the engine from 3.5 to 3.6, taking a backup, then restoring it to a new engine appliance hosted on the new NFS storage target. The new engine was never able to fully import the hosted_engine volume. This volume remained in a locked state. Following [0], I destroyed the volume from the Admin UI. The volume has yet to be re-imported.
Log snippets and installed rpm lists are below.
We have two Data Centers configured, each with one Cluster and two data domains (all nfs). All components have been updated to v3.6 compatibility levels.
Any ideas on what our next steps should be so we can continue the upgrade process?
Many thanks,
-Brian
[0] https://access.redhat.com/solutions/2592761
Installed packages
libgovirt-0.3.3-4.el7.x86_64
ovirt-engine-appliance-3.6-20160623.1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.7.0-1.el7.centos.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-hosted-engine-ha-1.3.5.7-1.el7.centos.noarch
ovirt-hosted-engine-setup-1.3.7.2-1.el7.centos.noarch
ovirt-release36-3.6.7-3.el7.centos.noarch
ovirt-setup-lib-1.0.1-1.el7.centos.noarch
ovirt-vmconsole-1.0.2-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.2-1.el7.centos.noarch
vdsm-4.17.32-1.el7.noarch
vdsm-cli-4.17.32-1.el7.noarch
vdsm-hook-vmfex-dev-4.17.32-1.el7.noarch
vdsm-infra-4.17.32-1.el7.noarch
vdsm-jsonrpc-4.17.32-1.el7.noarch
vdsm-python-4.17.32-1.el7.noarch
vdsm-xmlrpc-4.17.32-1.el7.noarch
vdsm-yajsonrpc-4.17.32-1.el7.noarch
--- agent.log ---
This set of log entries are on repeat
MainThread::INFO::2020-03-18 09:39:48,230::hosted_engine::613::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) Initializing VDSM
MainThread::INFO::2020-03-18 09:39:48,260::hosted_engine::658::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Connecting the storage
MainThread::INFO::2020-03-18 09:39:48,260::storage_server::218::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server
MainThread::INFO::2020-03-18 09:39:48,279::storage_server::222::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server
MainThread::INFO::2020-03-18 09:39:48,287::storage_server::230::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Refreshing the storage domain
MainThread::INFO::2020-03-18 09:39:48,413::hosted_engine::685::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Preparing images
MainThread::INFO::2020-03-18 09:39:48,414::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images) Preparing images
MainThread::INFO::2020-03-18 09:39:48,545::hosted_engine::688::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Reloading vm.conf from the shared storage domain
MainThread::INFO::2020-03-18 09:39:48,546::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2020-03-18 09:39:48,608::ovf_store::100::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:00ad81a7-5637-40ff-8635-c039347f69ee, volUUID:ec87b10a-e601-44bc-bd87-fbe6de274cd4
MainThread::INFO::2020-03-18 09:39:48,662::ovf_store::100::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:63491779-c7cf-434c-84c9-7878694a8946, volUUID:2f086a70-e97d-4161-a232-1268bb3145de
MainThread::INFO::2020-03-18 09:39:48,662::ovf_store::109::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2020-03-18 09:39:48,662::ovf_store::116::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/63491779-c7cf-434c-84c9-7878694a8946/2f086a70-e97d-4161-a232-1268bb3145de
MainThread::ERROR::2020-03-18 09:39:48,670::ovf_store::121::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) Unable to extract HEVM OVF
MainThread::ERROR::2020-03-18 09:39:48,670::config::235::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf
MainThread::INFO::2020-03-18 09:39:48,714::hosted_engine::462::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 3400)
MainThread::INFO::2020-03-18 09:39:58,762::states::421::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm running on localhost
--- vsdm.log ---
Reactor thread::INFO::2020-03-18 09:39:48,253::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42646
Reactor thread::DEBUG::2020-03-18 09:39:48,257::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,257::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42646
Reactor thread::DEBUG::2020-03-18 09:39:48,258::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42646)
BindingXMLRPC::INFO::2020-03-18 09:39:48,258::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42646
Thread-450::INFO::2020-03-18 09:39:48,258::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42646 started
Thread-450::DEBUG::2020-03-18 09:39:48,258::bindingxmlrpc::1262::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-450::DEBUG::2020-03-18 09:39:48,259::bindingxmlrpc::1269::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'ProLiant DL380 Gen10', 'systemUUID': '32303250-3534-4D32-3239-333730323753', 'systemSerialNumber': '2M2937027S', 'systemFamily': 'ProLiant', 'systemManufacturer': 'HPE'}}
Thread-450::INFO::2020-03-18 09:39:48,260::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42646 stopped
Reactor thread::INFO::2020-03-18 09:39:48,261::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42648
Reactor thread::DEBUG::2020-03-18 09:39:48,264::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,265::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42648
Reactor thread::DEBUG::2020-03-18 09:39:48,265::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42648)
BindingXMLRPC::INFO::2020-03-18 09:39:48,265::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42648
Thread-451::INFO::2020-03-18 09:39:48,265::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42648 started
Thread-451::DEBUG::2020-03-18 09:39:48,265::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-451::DEBUG::2020-03-18 09:39:48,265::task::595::Storage.TaskManager.Task::(_updateState) Task=`084fd462-75f9-4a76-93c8-3e4432d988d9`::moving from state init -> state preparing
Thread-451::INFO::2020-03-18 09:39:48,266::logUtils::48::dispatcher::(wrapper) Run and protect: getStorageDomainInfo(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', options=None)
Thread-451::INFO::2020-03-18 09:39:48,266::fileSD::357::Storage.StorageDomain::(validate) sdUUID=331e6287-61df-48dd-9733-a8ad236750b1
Thread-451::DEBUG::2020-03-18 09:39:48,267::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=00000002-0002-0002-0002-000000000357', 'REMOTE_PATH=aidc-nap1-n1.corp.alleninstitute.org:/netapp_engine_36', 'ROLE=Regular', 'SDUUID=331e6287-61df-48dd-9733-a8ad236750b1', 'TYPE=NFS', 'VERSION=3', '_SHA_CKSUM=87c4db40d7fcee58986a7e2669f5bd20e99cf807']
Thread-451::DEBUG::2020-03-18 09:39:48,267::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`028654fb-b9a6-4c40-93d6-bc8f223d2681`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2838' at 'getStorageDomainInfo'
Thread-451::DEBUG::2020-03-18 09:39:48,267::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-451::DEBUG::2020-03-18 09:39:48,267::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-451::DEBUG::2020-03-18 09:39:48,267::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`028654fb-b9a6-4c40-93d6-bc8f223d2681`::Granted request
Thread-451::DEBUG::2020-03-18 09:39:48,268::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`084fd462-75f9-4a76-93c8-3e4432d988d9`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-451::DEBUG::2020-03-18 09:39:48,268::task::993::Storage.TaskManager.Task::(_decref) Task=`084fd462-75f9-4a76-93c8-3e4432d988d9`::ref 1 aborting False
Thread-451::INFO::2020-03-18 09:39:48,269::logUtils::51::dispatcher::(wrapper) Run and protect: getStorageDomainInfo, Return response: {'info': {'uuid': u'331e6287-61df-48dd-9733-a8ad236750b1', 'version': '3', 'role': 'Regular', 'remotePath': 'aidc-nap1-n1.corp.alleninstitute.org:/netapp_engine_36', 'type': 'NFS', 'class': 'Data', 'pool': ['00000002-0002-0002-0002-000000000357'], 'name': 'hosted_storage'}}
Thread-451::DEBUG::2020-03-18 09:39:48,269::task::1191::Storage.TaskManager.Task::(prepare) Task=`084fd462-75f9-4a76-93c8-3e4432d988d9`::finished: {'info': {'uuid': u'331e6287-61df-48dd-9733-a8ad236750b1', 'version': '3', 'role': 'Regular', 'remotePath': 'aidc-nap1-n1.corp.alleninstitute.org:/netapp_engine_36', 'type': 'NFS', 'class': 'Data', 'pool': ['00000002-0002-0002-0002-000000000357'], 'name': 'hosted_storage'}}
Thread-451::DEBUG::2020-03-18 09:39:48,269::task::595::Storage.TaskManager.Task::(_updateState) Task=`084fd462-75f9-4a76-93c8-3e4432d988d9`::moving from state preparing -> state finished
Thread-451::DEBUG::2020-03-18 09:39:48,269::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-451::DEBUG::2020-03-18 09:39:48,270::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-451::DEBUG::2020-03-18 09:39:48,270::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-451::DEBUG::2020-03-18 09:39:48,270::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-451::DEBUG::2020-03-18 09:39:48,270::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-451::DEBUG::2020-03-18 09:39:48,270::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-451::DEBUG::2020-03-18 09:39:48,270::task::993::Storage.TaskManager.Task::(_decref) Task=`084fd462-75f9-4a76-93c8-3e4432d988d9`::ref 0 aborting False
Thread-451::INFO::2020-03-18 09:39:48,270::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42648 stopped
Reactor thread::INFO::2020-03-18 09:39:48,271::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42650
Reactor thread::DEBUG::2020-03-18 09:39:48,274::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,274::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42650
Reactor thread::DEBUG::2020-03-18 09:39:48,274::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42650)
BindingXMLRPC::INFO::2020-03-18 09:39:48,275::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42650
Thread-452::INFO::2020-03-18 09:39:48,275::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42650 started
Thread-452::DEBUG::2020-03-18 09:39:48,275::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-452::DEBUG::2020-03-18 09:39:48,275::task::595::Storage.TaskManager.Task::(_updateState) Task=`5060fa58-3c70-404c-a037-1438228d69c2`::moving from state init -> state preparing
Thread-452::INFO::2020-03-18 09:39:48,275::logUtils::48::dispatcher::(wrapper) Run and protect: getStorageDomainInfo(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', options=None)
Thread-452::INFO::2020-03-18 09:39:48,275::fileSD::357::Storage.StorageDomain::(validate) sdUUID=331e6287-61df-48dd-9733-a8ad236750b1
Thread-452::DEBUG::2020-03-18 09:39:48,276::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=00000002-0002-0002-0002-000000000357', 'REMOTE_PATH=aidc-nap1-n1.corp.alleninstitute.org:/netapp_engine_36', 'ROLE=Regular', 'SDUUID=331e6287-61df-48dd-9733-a8ad236750b1', 'TYPE=NFS', 'VERSION=3', '_SHA_CKSUM=87c4db40d7fcee58986a7e2669f5bd20e99cf807']
Thread-452::DEBUG::2020-03-18 09:39:48,276::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`222a340e-20b8-4bd0-ade1-9b0ca6cb320d`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2838' at 'getStorageDomainInfo'
Thread-452::DEBUG::2020-03-18 09:39:48,276::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-452::DEBUG::2020-03-18 09:39:48,276::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-452::DEBUG::2020-03-18 09:39:48,277::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`222a340e-20b8-4bd0-ade1-9b0ca6cb320d`::Granted request
Thread-452::DEBUG::2020-03-18 09:39:48,277::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`5060fa58-3c70-404c-a037-1438228d69c2`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-452::DEBUG::2020-03-18 09:39:48,277::task::993::Storage.TaskManager.Task::(_decref) Task=`5060fa58-3c70-404c-a037-1438228d69c2`::ref 1 aborting False
Thread-452::INFO::2020-03-18 09:39:48,278::logUtils::51::dispatcher::(wrapper) Run and protect: getStorageDomainInfo, Return response: {'info': {'uuid': u'331e6287-61df-48dd-9733-a8ad236750b1', 'version': '3', 'role': 'Regular', 'remotePath': 'aidc-nap1-n1.corp.alleninstitute.org:/netapp_engine_36', 'type': 'NFS', 'class': 'Data', 'pool': ['00000002-0002-0002-0002-000000000357'], 'name': 'hosted_storage'}}
Thread-452::DEBUG::2020-03-18 09:39:48,278::task::1191::Storage.TaskManager.Task::(prepare) Task=`5060fa58-3c70-404c-a037-1438228d69c2`::finished: {'info': {'uuid': u'331e6287-61df-48dd-9733-a8ad236750b1', 'version': '3', 'role': 'Regular', 'remotePath': 'aidc-nap1-n1.corp.alleninstitute.org:/netapp_engine_36', 'type': 'NFS', 'class': 'Data', 'pool': ['00000002-0002-0002-0002-000000000357'], 'name': 'hosted_storage'}}
Thread-452::DEBUG::2020-03-18 09:39:48,278::task::595::Storage.TaskManager.Task::(_updateState) Task=`5060fa58-3c70-404c-a037-1438228d69c2`::moving from state preparing -> state finished
Thread-452::DEBUG::2020-03-18 09:39:48,278::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-452::DEBUG::2020-03-18 09:39:48,279::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-452::DEBUG::2020-03-18 09:39:48,279::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-452::DEBUG::2020-03-18 09:39:48,279::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-452::DEBUG::2020-03-18 09:39:48,279::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-452::DEBUG::2020-03-18 09:39:48,279::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-452::DEBUG::2020-03-18 09:39:48,279::task::993::Storage.TaskManager.Task::(_decref) Task=`5060fa58-3c70-404c-a037-1438228d69c2`::ref 0 aborting False
Thread-452::INFO::2020-03-18 09:39:48,279::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42650 stopped
Reactor thread::INFO::2020-03-18 09:39:48,280::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42652
Reactor thread::DEBUG::2020-03-18 09:39:48,283::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,283::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42652
Reactor thread::DEBUG::2020-03-18 09:39:48,283::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42652)
BindingXMLRPC::INFO::2020-03-18 09:39:48,284::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42652
Thread-453::INFO::2020-03-18 09:39:48,284::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42652 started
Thread-453::DEBUG::2020-03-18 09:39:48,284::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-453::DEBUG::2020-03-18 09:39:48,284::task::595::Storage.TaskManager.Task::(_updateState) Task=`024ef770-6c34-4f8d-bee6-f1828e4a5369`::moving from state init -> state preparing
Thread-453::INFO::2020-03-18 09:39:48,284::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'protocol_version': 3, 'connection': 'aidc-nap1-n1.corp.alleninstitute.org:/netapp_engine_36', 'user': 'kvm', 'id': 'd24257f1-b48c-44f0-a5cb-9061e7ce30ba'}], options=None)
Thread-453::DEBUG::2020-03-18 09:39:48,286::hsm::2413::Storage.HSM::(__prefetchDomains) nfs local path: /rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36
Thread-453::DEBUG::2020-03-18 09:39:48,287::hsm::2437::Storage.HSM::(__prefetchDomains) Found SD uuids: (u'331e6287-61df-48dd-9733-a8ad236750b1',)
Thread-453::DEBUG::2020-03-18 09:39:48,287::hsm::2497::Storage.HSM::(connectStorageServer) knownSDs: {331e6287-61df-48dd-9733-a8ad236750b1: storage.nfsSD.findDomain}
Thread-453::INFO::2020-03-18 09:39:48,287::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': 'd24257f1-b48c-44f0-a5cb-9061e7ce30ba'}]}
Thread-453::DEBUG::2020-03-18 09:39:48,287::task::1191::Storage.TaskManager.Task::(prepare) Task=`024ef770-6c34-4f8d-bee6-f1828e4a5369`::finished: {'statuslist': [{'status': 0, 'id': 'd24257f1-b48c-44f0-a5cb-9061e7ce30ba'}]}
Thread-453::DEBUG::2020-03-18 09:39:48,287::task::595::Storage.TaskManager.Task::(_updateState) Task=`024ef770-6c34-4f8d-bee6-f1828e4a5369`::moving from state preparing -> state finished
Thread-453::DEBUG::2020-03-18 09:39:48,287::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-453::DEBUG::2020-03-18 09:39:48,287::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-453::DEBUG::2020-03-18 09:39:48,287::task::993::Storage.TaskManager.Task::(_decref) Task=`024ef770-6c34-4f8d-bee6-f1828e4a5369`::ref 0 aborting False
Thread-453::INFO::2020-03-18 09:39:48,287::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42652 stopped
Reactor thread::INFO::2020-03-18 09:39:48,288::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42654
Reactor thread::DEBUG::2020-03-18 09:39:48,291::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,291::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42654
Reactor thread::DEBUG::2020-03-18 09:39:48,292::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42654)
BindingXMLRPC::INFO::2020-03-18 09:39:48,292::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42654
Thread-454::INFO::2020-03-18 09:39:48,292::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42654 started
Thread-454::DEBUG::2020-03-18 09:39:48,292::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-454::DEBUG::2020-03-18 09:39:48,292::task::595::Storage.TaskManager.Task::(_updateState) Task=`87423fea-b0d6-4ce8-8da4-65fe9fdd060f`::moving from state init -> state preparing
Thread-454::INFO::2020-03-18 09:39:48,292::logUtils::48::dispatcher::(wrapper) Run and protect: getStorageDomainStats(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', options=None)
Thread-454::DEBUG::2020-03-18 09:39:48,292::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`bc4187c3-65c8-48d5-90b8-74eac77fd4f0`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2856' at 'getStorageDomainStats'
Thread-454::DEBUG::2020-03-18 09:39:48,292::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-454::DEBUG::2020-03-18 09:39:48,292::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-454::DEBUG::2020-03-18 09:39:48,293::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`bc4187c3-65c8-48d5-90b8-74eac77fd4f0`::Granted request
Thread-454::DEBUG::2020-03-18 09:39:48,293::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`87423fea-b0d6-4ce8-8da4-65fe9fdd060f`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-454::DEBUG::2020-03-18 09:39:48,293::task::993::Storage.TaskManager.Task::(_decref) Task=`87423fea-b0d6-4ce8-8da4-65fe9fdd060f`::ref 1 aborting False
Thread-454::DEBUG::2020-03-18 09:39:48,293::misc::750::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-454::DEBUG::2020-03-18 09:39:48,293::misc::753::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-454::DEBUG::2020-03-18 09:39:48,293::misc::750::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-454::DEBUG::2020-03-18 09:39:48,293::misc::753::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-454::DEBUG::2020-03-18 09:39:48,293::iscsi::434::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
Thread-454::DEBUG::2020-03-18 09:39:48,293::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/taskset --cpu-list 0-31 /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-454::DEBUG::2020-03-18 09:39:48,305::misc::760::Storage.SamplingMethod::(__call__) Returning last result
Thread-454::DEBUG::2020-03-18 09:39:48,305::misc::750::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan)
Thread-454::DEBUG::2020-03-18 09:39:48,306::misc::753::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-454::DEBUG::2020-03-18 09:39:48,306::hba::56::Storage.HBA::(rescan) Starting scan
Thread-454::DEBUG::2020-03-18 09:39:48,356::hba::62::Storage.HBA::(rescan) Scan finished
Thread-454::DEBUG::2020-03-18 09:39:48,356::misc::760::Storage.SamplingMethod::(__call__) Returning last result
Thread-454::DEBUG::2020-03-18 09:39:48,357::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/taskset --cpu-list 0-31 /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
Thread-454::DEBUG::2020-03-18 09:39:48,396::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-454::DEBUG::2020-03-18 09:39:48,397::utils::671::root::(execCmd) /usr/bin/taskset --cpu-list 0-31 /sbin/udevadm settle --timeout=5 (cwd None)
Thread-454::DEBUG::2020-03-18 09:39:48,403::utils::689::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-454::DEBUG::2020-03-18 09:39:48,404::lvm::497::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-454::DEBUG::2020-03-18 09:39:48,404::lvm::499::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-454::DEBUG::2020-03-18 09:39:48,404::lvm::508::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-454::DEBUG::2020-03-18 09:39:48,404::lvm::510::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-454::DEBUG::2020-03-18 09:39:48,404::lvm::528::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-454::DEBUG::2020-03-18 09:39:48,404::lvm::530::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-454::DEBUG::2020-03-18 09:39:48,404::misc::760::Storage.SamplingMethod::(__call__) Returning last result
Thread-454::DEBUG::2020-03-18 09:39:48,408::fileSD::157::Storage.StorageDomainManifest::(__init__) Reading domain in path /rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1
Thread-454::DEBUG::2020-03-18 09:39:48,409::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-454::DEBUG::2020-03-18 09:39:48,410::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=00000002-0002-0002-0002-000000000357', 'REMOTE_PATH=aidc-nap1-n1.corp.alleninstitute.org:/netapp_engine_36', 'ROLE=Regular', 'SDUUID=331e6287-61df-48dd-9733-a8ad236750b1', 'TYPE=NFS', 'VERSION=3', '_SHA_CKSUM=87c4db40d7fcee58986a7e2669f5bd20e99cf807']
Thread-454::DEBUG::2020-03-18 09:39:48,411::fileSD::671::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []
Thread-454::INFO::2020-03-18 09:39:48,411::sd::442::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 331e6287-61df-48dd-9733-a8ad236750b1_imageNS already registered
Thread-454::INFO::2020-03-18 09:39:48,411::sd::450::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace 331e6287-61df-48dd-9733-a8ad236750b1_volumeNS already registered
Thread-454::INFO::2020-03-18 09:39:48,412::logUtils::51::dispatcher::(wrapper) Run and protect: getStorageDomainStats, Return response: {'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '98607497216', 'disktotal': '102005473280', 'mdafree': 0}}
Thread-454::DEBUG::2020-03-18 09:39:48,412::task::1191::Storage.TaskManager.Task::(prepare) Task=`87423fea-b0d6-4ce8-8da4-65fe9fdd060f`::finished: {'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '98607497216', 'disktotal': '102005473280', 'mdafree': 0}}
Thread-454::DEBUG::2020-03-18 09:39:48,412::task::595::Storage.TaskManager.Task::(_updateState) Task=`87423fea-b0d6-4ce8-8da4-65fe9fdd060f`::moving from state preparing -> state finished
Thread-454::DEBUG::2020-03-18 09:39:48,412::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-454::DEBUG::2020-03-18 09:39:48,412::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-454::DEBUG::2020-03-18 09:39:48,412::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-454::DEBUG::2020-03-18 09:39:48,412::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-454::DEBUG::2020-03-18 09:39:48,412::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-454::DEBUG::2020-03-18 09:39:48,412::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-454::DEBUG::2020-03-18 09:39:48,413::task::993::Storage.TaskManager.Task::(_decref) Task=`87423fea-b0d6-4ce8-8da4-65fe9fdd060f`::ref 0 aborting False
Thread-454::INFO::2020-03-18 09:39:48,413::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42654 stopped
Reactor thread::INFO::2020-03-18 09:39:48,414::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42656
Reactor thread::DEBUG::2020-03-18 09:39:48,418::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,418::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42656
Reactor thread::DEBUG::2020-03-18 09:39:48,418::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42656)
BindingXMLRPC::INFO::2020-03-18 09:39:48,418::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42656
Thread-461::INFO::2020-03-18 09:39:48,418::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42656 started
Thread-461::DEBUG::2020-03-18 09:39:48,419::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-461::DEBUG::2020-03-18 09:39:48,419::task::595::Storage.TaskManager.Task::(_updateState) Task=`0e25784b-3d90-414c-906f-9568b6a9e08c`::moving from state init -> state preparing
Thread-461::INFO::2020-03-18 09:39:48,419::logUtils::48::dispatcher::(wrapper) Run and protect: getImagesList(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', options=None)
Thread-461::DEBUG::2020-03-18 09:39:48,419::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`cd0116aa-4843-41ef-b5f7-fedca913c4de`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3313' at 'getImagesList'
Thread-461::DEBUG::2020-03-18 09:39:48,419::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-461::DEBUG::2020-03-18 09:39:48,419::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-461::DEBUG::2020-03-18 09:39:48,419::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`cd0116aa-4843-41ef-b5f7-fedca913c4de`::Granted request
Thread-461::DEBUG::2020-03-18 09:39:48,419::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`0e25784b-3d90-414c-906f-9568b6a9e08c`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-461::DEBUG::2020-03-18 09:39:48,420::task::993::Storage.TaskManager.Task::(_decref) Task=`0e25784b-3d90-414c-906f-9568b6a9e08c`::ref 1 aborting False
Thread-461::INFO::2020-03-18 09:39:48,420::logUtils::51::dispatcher::(wrapper) Run and protect: getImagesList, Return response: {'imageslist': []}
Thread-461::DEBUG::2020-03-18 09:39:48,420::task::1191::Storage.TaskManager.Task::(prepare) Task=`0e25784b-3d90-414c-906f-9568b6a9e08c`::finished: {'imageslist': []}
Thread-461::DEBUG::2020-03-18 09:39:48,420::task::595::Storage.TaskManager.Task::(_updateState) Task=`0e25784b-3d90-414c-906f-9568b6a9e08c`::moving from state preparing -> state finished
Thread-461::DEBUG::2020-03-18 09:39:48,420::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-461::DEBUG::2020-03-18 09:39:48,420::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-461::DEBUG::2020-03-18 09:39:48,420::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-461::DEBUG::2020-03-18 09:39:48,420::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-461::DEBUG::2020-03-18 09:39:48,420::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-461::DEBUG::2020-03-18 09:39:48,420::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-461::DEBUG::2020-03-18 09:39:48,420::task::993::Storage.TaskManager.Task::(_decref) Task=`0e25784b-3d90-414c-906f-9568b6a9e08c`::ref 0 aborting False
Thread-461::INFO::2020-03-18 09:39:48,421::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42656 stopped
Reactor thread::INFO::2020-03-18 09:39:48,422::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42658
Reactor thread::DEBUG::2020-03-18 09:39:48,425::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,425::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42658
Reactor thread::DEBUG::2020-03-18 09:39:48,425::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42658)
BindingXMLRPC::INFO::2020-03-18 09:39:48,425::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42658
Thread-462::INFO::2020-03-18 09:39:48,425::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42658 started
Thread-462::DEBUG::2020-03-18 09:39:48,426::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-462::DEBUG::2020-03-18 09:39:48,426::task::595::Storage.TaskManager.Task::(_updateState) Task=`21627f7f-0ce8-49d9-858f-bb08682b0915`::moving from state init -> state preparing
Thread-462::INFO::2020-03-18 09:39:48,426::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumesList(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='b9fd2434-60b0-4a5d-abb2-adc358d0dfd1', options=None)
Thread-462::DEBUG::2020-03-18 09:39:48,426::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`7ed6ca2b-709c-439b-afbc-7782d275d9f7`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3291' at 'getVolumesList'
Thread-462::DEBUG::2020-03-18 09:39:48,426::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-462::DEBUG::2020-03-18 09:39:48,426::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-462::DEBUG::2020-03-18 09:39:48,426::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`7ed6ca2b-709c-439b-afbc-7782d275d9f7`::Granted request
Thread-462::DEBUG::2020-03-18 09:39:48,426::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`21627f7f-0ce8-49d9-858f-bb08682b0915`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-462::DEBUG::2020-03-18 09:39:48,427::task::993::Storage.TaskManager.Task::(_decref) Task=`21627f7f-0ce8-49d9-858f-bb08682b0915`::ref 1 aborting False
Thread-462::INFO::2020-03-18 09:39:48,429::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumesList, Return response: {'uuidlist': [u'53d31c6e-bfc3-4dee-99be-f0fa77006cad']}
Thread-462::DEBUG::2020-03-18 09:39:48,429::task::1191::Storage.TaskManager.Task::(prepare) Task=`21627f7f-0ce8-49d9-858f-bb08682b0915`::finished: {'uuidlist': [u'53d31c6e-bfc3-4dee-99be-f0fa77006cad']}
Thread-462::DEBUG::2020-03-18 09:39:48,429::task::595::Storage.TaskManager.Task::(_updateState) Task=`21627f7f-0ce8-49d9-858f-bb08682b0915`::moving from state preparing -> state finished
Thread-462::DEBUG::2020-03-18 09:39:48,429::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-462::DEBUG::2020-03-18 09:39:48,429::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-462::DEBUG::2020-03-18 09:39:48,430::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-462::DEBUG::2020-03-18 09:39:48,430::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-462::DEBUG::2020-03-18 09:39:48,430::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-462::DEBUG::2020-03-18 09:39:48,430::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-462::DEBUG::2020-03-18 09:39:48,430::task::993::Storage.TaskManager.Task::(_decref) Task=`21627f7f-0ce8-49d9-858f-bb08682b0915`::ref 0 aborting False
Thread-462::INFO::2020-03-18 09:39:48,430::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42658 stopped
Reactor thread::INFO::2020-03-18 09:39:48,431::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42660
Reactor thread::DEBUG::2020-03-18 09:39:48,434::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,434::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42660
Reactor thread::DEBUG::2020-03-18 09:39:48,434::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42660)
BindingXMLRPC::INFO::2020-03-18 09:39:48,434::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42660
Thread-463::INFO::2020-03-18 09:39:48,434::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42660 started
Thread-463::DEBUG::2020-03-18 09:39:48,435::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-463::DEBUG::2020-03-18 09:39:48,435::task::595::Storage.TaskManager.Task::(_updateState) Task=`c62cf17c-ac63-49d1-ac7b-cf1fa5494aee`::moving from state init -> state preparing
Thread-463::INFO::2020-03-18 09:39:48,435::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='b9fd2434-60b0-4a5d-abb2-adc358d0dfd1', leafUUID='53d31c6e-bfc3-4dee-99be-f0fa77006cad')
Thread-463::DEBUG::2020-03-18 09:39:48,435::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`e20d6046-0f92-461d-a221-ddcfae6acd37`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3213' at 'prepareImage'
Thread-463::DEBUG::2020-03-18 09:39:48,435::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-463::DEBUG::2020-03-18 09:39:48,435::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-463::DEBUG::2020-03-18 09:39:48,435::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`e20d6046-0f92-461d-a221-ddcfae6acd37`::Granted request
Thread-463::DEBUG::2020-03-18 09:39:48,435::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`c62cf17c-ac63-49d1-ac7b-cf1fa5494aee`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-463::DEBUG::2020-03-18 09:39:48,435::task::993::Storage.TaskManager.Task::(_decref) Task=`c62cf17c-ac63-49d1-ac7b-cf1fa5494aee`::ref 1 aborting False
Thread-463::DEBUG::2020-03-18 09:39:48,438::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 53d31c6e-bfc3-4dee-99be-f0fa77006cad
Thread-463::DEBUG::2020-03-18 09:39:48,440::fileSD::560::Storage.StorageDomain::(activateVolumes) Fixing permissions on /rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/b9fd2434-60b0-4a5d-abb2-adc358d0dfd1/53d31c6e-bfc3-4dee-99be-f0fa77006cad
Thread-463::DEBUG::2020-03-18 09:39:48,440::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1 mode: None
Thread-463::WARNING::2020-03-18 09:39:48,440::fileUtils::152::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1 already exists
Thread-463::DEBUG::2020-03-18 09:39:48,440::fileSD::517::Storage.StorageDomain::(createImageLinks) Creating symlink from /rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/b9fd2434-60b0-4a5d-abb2-adc358d0dfd1 to /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/b9fd2434-60b0-4a5d-abb2-adc358d0dfd1
Thread-463::DEBUG::2020-03-18 09:39:48,440::fileSD::522::Storage.StorageDomain::(createImageLinks) img run dir already exists: /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/b9fd2434-60b0-4a5d-abb2-adc358d0dfd1
Thread-463::DEBUG::2020-03-18 09:39:48,441::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 53d31c6e-bfc3-4dee-99be-f0fa77006cad
Thread-463::INFO::2020-03-18 09:39:48,441::logUtils::51::dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'info': {'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/b9fd2434-60b0-4a5d-abb2-adc358d0dfd1/53d31c6e-bfc3-4dee-99be-f0fa77006cad', 'volumeID': u'53d31c6e-bfc3-4dee-99be-f0fa77006cad', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/b9fd2434-60b0-4a5d-abb2-adc358d0dfd1/53d31c6e-bfc3-4dee-99be-f0fa77006cad.lease', 'imageID': 'b9fd2434-60b0-4a5d-abb2-adc358d0dfd1'}, 'path': u'/var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/b9fd2434-60b0-4a5d-abb2-adc358d0dfd1/53d31c6e-bfc3-4dee-99be-f0fa77006cad', 'imgVolumesInfo': [{'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/b9fd2434-60b0-4a5d-abb2-adc358d0dfd1/53d31c6e-bfc3-4dee-99be-f0fa77006cad', 'volumeID': u'53d31c6e-bfc3-4dee-99be-f0fa77006cad', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/b9fd2434-60b0-4a5d-abb2-adc358d0dfd1/53d31c6e-bfc3-4dee-99be-f0fa77006cad.lease', 'imageID': 'b9fd2434-60b0-4a5d-abb2-adc358d0dfd1'}]}
Thread-463::DEBUG::2020-03-18 09:39:48,441::task::1191::Storage.TaskManager.Task::(prepare) Task=`c62cf17c-ac63-49d1-ac7b-cf1fa5494aee`::finished: {'info': {'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/b9fd2434-60b0-4a5d-abb2-adc358d0dfd1/53d31c6e-bfc3-4dee-99be-f0fa77006cad', 'volumeID': u'53d31c6e-bfc3-4dee-99be-f0fa77006cad', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/b9fd2434-60b0-4a5d-abb2-adc358d0dfd1/53d31c6e-bfc3-4dee-99be-f0fa77006cad.lease', 'imageID': 'b9fd2434-60b0-4a5d-abb2-adc358d0dfd1'}, 'path': u'/var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/b9fd2434-60b0-4a5d-abb2-adc358d0dfd1/53d31c6e-bfc3-4dee-99be-f0fa77006cad', 'imgVolumesInfo': [{'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/b9fd2434-60b0-4a5d-abb2-adc358d0dfd1/53d31c6e-bfc3-4dee-99be-f0fa77006cad', 'volumeID': u'53d31c6e-bfc3-4dee-99be-f0fa77006cad', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/b9fd2434-60b0-4a5d-abb2-adc358d0dfd1/53d31c6e-bfc3-4dee-99be-f0fa77006cad.lease', 'imageID': 'b9fd2434-60b0-4a5d-abb2-adc358d0dfd1'}]}
Thread-463::DEBUG::2020-03-18 09:39:48,441::task::595::Storage.TaskManager.Task::(_updateState) Task=`c62cf17c-ac63-49d1-ac7b-cf1fa5494aee`::moving from state preparing -> state finished
Thread-463::DEBUG::2020-03-18 09:39:48,441::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-463::DEBUG::2020-03-18 09:39:48,441::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-463::DEBUG::2020-03-18 09:39:48,441::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-463::DEBUG::2020-03-18 09:39:48,442::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-463::DEBUG::2020-03-18 09:39:48,442::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-463::DEBUG::2020-03-18 09:39:48,442::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-463::DEBUG::2020-03-18 09:39:48,442::task::993::Storage.TaskManager.Task::(_decref) Task=`c62cf17c-ac63-49d1-ac7b-cf1fa5494aee`::ref 0 aborting False
Thread-463::INFO::2020-03-18 09:39:48,443::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42660 stopped
Reactor thread::INFO::2020-03-18 09:39:48,443::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42662
Reactor thread::DEBUG::2020-03-18 09:39:48,446::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,447::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42662
Reactor thread::DEBUG::2020-03-18 09:39:48,447::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42662)
BindingXMLRPC::INFO::2020-03-18 09:39:48,447::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42662
Thread-464::INFO::2020-03-18 09:39:48,447::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42662 started
Thread-464::DEBUG::2020-03-18 09:39:48,447::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-464::DEBUG::2020-03-18 09:39:48,447::task::595::Storage.TaskManager.Task::(_updateState) Task=`55928ea1-0bb9-434a-839e-e339893ee197`::moving from state init -> state preparing
Thread-464::INFO::2020-03-18 09:39:48,447::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumesList(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='a667b14a-1f92-40f1-8379-210e0c42fc26', options=None)
Thread-464::DEBUG::2020-03-18 09:39:48,448::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`db8e29d0-dc40-4e30-982c-7040544b86d0`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3291' at 'getVolumesList'
Thread-464::DEBUG::2020-03-18 09:39:48,448::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-464::DEBUG::2020-03-18 09:39:48,448::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-464::DEBUG::2020-03-18 09:39:48,448::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`db8e29d0-dc40-4e30-982c-7040544b86d0`::Granted request
Thread-464::DEBUG::2020-03-18 09:39:48,448::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`55928ea1-0bb9-434a-839e-e339893ee197`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-464::DEBUG::2020-03-18 09:39:48,448::task::993::Storage.TaskManager.Task::(_decref) Task=`55928ea1-0bb9-434a-839e-e339893ee197`::ref 1 aborting False
Thread-464::INFO::2020-03-18 09:39:48,450::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumesList, Return response: {'uuidlist': [u'43cc29d0-6919-4493-95fe-6d58f97acdfc']}
Thread-464::DEBUG::2020-03-18 09:39:48,450::task::1191::Storage.TaskManager.Task::(prepare) Task=`55928ea1-0bb9-434a-839e-e339893ee197`::finished: {'uuidlist': [u'43cc29d0-6919-4493-95fe-6d58f97acdfc']}
Thread-464::DEBUG::2020-03-18 09:39:48,450::task::595::Storage.TaskManager.Task::(_updateState) Task=`55928ea1-0bb9-434a-839e-e339893ee197`::moving from state preparing -> state finished
Thread-464::DEBUG::2020-03-18 09:39:48,450::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-464::DEBUG::2020-03-18 09:39:48,450::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-464::DEBUG::2020-03-18 09:39:48,450::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-464::DEBUG::2020-03-18 09:39:48,451::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-464::DEBUG::2020-03-18 09:39:48,451::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-464::DEBUG::2020-03-18 09:39:48,451::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-464::DEBUG::2020-03-18 09:39:48,451::task::993::Storage.TaskManager.Task::(_decref) Task=`55928ea1-0bb9-434a-839e-e339893ee197`::ref 0 aborting False
Thread-464::INFO::2020-03-18 09:39:48,451::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42662 stopped
Reactor thread::INFO::2020-03-18 09:39:48,451::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42664
Reactor thread::DEBUG::2020-03-18 09:39:48,455::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,455::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42664
Reactor thread::DEBUG::2020-03-18 09:39:48,455::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42664)
BindingXMLRPC::INFO::2020-03-18 09:39:48,455::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42664
Thread-465::INFO::2020-03-18 09:39:48,455::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42664 started
Thread-465::DEBUG::2020-03-18 09:39:48,456::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-465::DEBUG::2020-03-18 09:39:48,456::task::595::Storage.TaskManager.Task::(_updateState) Task=`d239dab8-78ab-4fc0-8deb-9cd694f7b5c6`::moving from state init -> state preparing
Thread-465::INFO::2020-03-18 09:39:48,456::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='a667b14a-1f92-40f1-8379-210e0c42fc26', leafUUID='43cc29d0-6919-4493-95fe-6d58f97acdfc')
Thread-465::DEBUG::2020-03-18 09:39:48,456::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`9d41db4b-4b09-4805-8cb8-098b391447e6`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3213' at 'prepareImage'
Thread-465::DEBUG::2020-03-18 09:39:48,456::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-465::DEBUG::2020-03-18 09:39:48,456::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-465::DEBUG::2020-03-18 09:39:48,456::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`9d41db4b-4b09-4805-8cb8-098b391447e6`::Granted request
Thread-465::DEBUG::2020-03-18 09:39:48,456::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`d239dab8-78ab-4fc0-8deb-9cd694f7b5c6`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-465::DEBUG::2020-03-18 09:39:48,456::task::993::Storage.TaskManager.Task::(_decref) Task=`d239dab8-78ab-4fc0-8deb-9cd694f7b5c6`::ref 1 aborting False
Thread-465::DEBUG::2020-03-18 09:39:48,459::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 43cc29d0-6919-4493-95fe-6d58f97acdfc
Thread-465::DEBUG::2020-03-18 09:39:48,460::fileSD::560::Storage.StorageDomain::(activateVolumes) Fixing permissions on /rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/a667b14a-1f92-40f1-8379-210e0c42fc26/43cc29d0-6919-4493-95fe-6d58f97acdfc
Thread-465::DEBUG::2020-03-18 09:39:48,460::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1 mode: None
Thread-465::WARNING::2020-03-18 09:39:48,461::fileUtils::152::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1 already exists
Thread-465::DEBUG::2020-03-18 09:39:48,461::fileSD::517::Storage.StorageDomain::(createImageLinks) Creating symlink from /rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/a667b14a-1f92-40f1-8379-210e0c42fc26 to /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/a667b14a-1f92-40f1-8379-210e0c42fc26
Thread-465::DEBUG::2020-03-18 09:39:48,461::fileSD::522::Storage.StorageDomain::(createImageLinks) img run dir already exists: /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/a667b14a-1f92-40f1-8379-210e0c42fc26
Thread-465::DEBUG::2020-03-18 09:39:48,461::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 43cc29d0-6919-4493-95fe-6d58f97acdfc
Thread-465::INFO::2020-03-18 09:39:48,462::logUtils::51::dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'info': {'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/a667b14a-1f92-40f1-8379-210e0c42fc26/43cc29d0-6919-4493-95fe-6d58f97acdfc', 'volumeID': u'43cc29d0-6919-4493-95fe-6d58f97acdfc', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/a667b14a-1f92-40f1-8379-210e0c42fc26/43cc29d0-6919-4493-95fe-6d58f97acdfc.lease', 'imageID': 'a667b14a-1f92-40f1-8379-210e0c42fc26'}, 'path': u'/var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/a667b14a-1f92-40f1-8379-210e0c42fc26/43cc29d0-6919-4493-95fe-6d58f97acdfc', 'imgVolumesInfo': [{'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/a667b14a-1f92-40f1-8379-210e0c42fc26/43cc29d0-6919-4493-95fe-6d58f97acdfc', 'volumeID': u'43cc29d0-6919-4493-95fe-6d58f97acdfc', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/a667b14a-1f92-40f1-8379-210e0c42fc26/43cc29d0-6919-4493-95fe-6d58f97acdfc.lease', 'imageID': 'a667b14a-1f92-40f1-8379-210e0c42fc26'}]}
Thread-465::DEBUG::2020-03-18 09:39:48,462::task::1191::Storage.TaskManager.Task::(prepare) Task=`d239dab8-78ab-4fc0-8deb-9cd694f7b5c6`::finished: {'info': {'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/a667b14a-1f92-40f1-8379-210e0c42fc26/43cc29d0-6919-4493-95fe-6d58f97acdfc', 'volumeID': u'43cc29d0-6919-4493-95fe-6d58f97acdfc', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/a667b14a-1f92-40f1-8379-210e0c42fc26/43cc29d0-6919-4493-95fe-6d58f97acdfc.lease', 'imageID': 'a667b14a-1f92-40f1-8379-210e0c42fc26'}, 'path': u'/var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/a667b14a-1f92-40f1-8379-210e0c42fc26/43cc29d0-6919-4493-95fe-6d58f97acdfc', 'imgVolumesInfo': [{'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/a667b14a-1f92-40f1-8379-210e0c42fc26/43cc29d0-6919-4493-95fe-6d58f97acdfc', 'volumeID': u'43cc29d0-6919-4493-95fe-6d58f97acdfc', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/a667b14a-1f92-40f1-8379-210e0c42fc26/43cc29d0-6919-4493-95fe-6d58f97acdfc.lease', 'imageID': 'a667b14a-1f92-40f1-8379-210e0c42fc26'}]}
Thread-465::DEBUG::2020-03-18 09:39:48,462::task::595::Storage.TaskManager.Task::(_updateState) Task=`d239dab8-78ab-4fc0-8deb-9cd694f7b5c6`::moving from state preparing -> state finished
Thread-465::DEBUG::2020-03-18 09:39:48,462::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-465::DEBUG::2020-03-18 09:39:48,462::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-465::DEBUG::2020-03-18 09:39:48,462::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-465::DEBUG::2020-03-18 09:39:48,462::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-465::DEBUG::2020-03-18 09:39:48,462::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-465::DEBUG::2020-03-18 09:39:48,462::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-465::DEBUG::2020-03-18 09:39:48,462::task::993::Storage.TaskManager.Task::(_decref) Task=`d239dab8-78ab-4fc0-8deb-9cd694f7b5c6`::ref 0 aborting False
Thread-465::INFO::2020-03-18 09:39:48,463::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42664 stopped
Reactor thread::INFO::2020-03-18 09:39:48,464::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42666
Reactor thread::DEBUG::2020-03-18 09:39:48,467::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,467::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42666
Reactor thread::DEBUG::2020-03-18 09:39:48,467::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42666)
BindingXMLRPC::INFO::2020-03-18 09:39:48,467::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42666
Thread-466::INFO::2020-03-18 09:39:48,467::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42666 started
Thread-466::DEBUG::2020-03-18 09:39:48,468::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-466::DEBUG::2020-03-18 09:39:48,468::task::595::Storage.TaskManager.Task::(_updateState) Task=`5d96e07f-0897-4066-a15f-33e8900f1cdd`::moving from state init -> state preparing
Thread-466::INFO::2020-03-18 09:39:48,468::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumesList(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='00ad81a7-5637-40ff-8635-c039347f69ee', options=None)
Thread-466::DEBUG::2020-03-18 09:39:48,468::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`2eb15abe-9a97-437c-a79e-1fb7a8f2a1d9`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3291' at 'getVolumesList'
Thread-466::DEBUG::2020-03-18 09:39:48,468::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-466::DEBUG::2020-03-18 09:39:48,468::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-466::DEBUG::2020-03-18 09:39:48,468::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`2eb15abe-9a97-437c-a79e-1fb7a8f2a1d9`::Granted request
Thread-466::DEBUG::2020-03-18 09:39:48,468::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`5d96e07f-0897-4066-a15f-33e8900f1cdd`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-466::DEBUG::2020-03-18 09:39:48,468::task::993::Storage.TaskManager.Task::(_decref) Task=`5d96e07f-0897-4066-a15f-33e8900f1cdd`::ref 1 aborting False
Thread-466::INFO::2020-03-18 09:39:48,470::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumesList, Return response: {'uuidlist': [u'ec87b10a-e601-44bc-bd87-fbe6de274cd4']}
Thread-466::DEBUG::2020-03-18 09:39:48,471::task::1191::Storage.TaskManager.Task::(prepare) Task=`5d96e07f-0897-4066-a15f-33e8900f1cdd`::finished: {'uuidlist': [u'ec87b10a-e601-44bc-bd87-fbe6de274cd4']}
Thread-466::DEBUG::2020-03-18 09:39:48,471::task::595::Storage.TaskManager.Task::(_updateState) Task=`5d96e07f-0897-4066-a15f-33e8900f1cdd`::moving from state preparing -> state finished
Thread-466::DEBUG::2020-03-18 09:39:48,471::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-466::DEBUG::2020-03-18 09:39:48,471::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-466::DEBUG::2020-03-18 09:39:48,471::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-466::DEBUG::2020-03-18 09:39:48,471::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-466::DEBUG::2020-03-18 09:39:48,471::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-466::DEBUG::2020-03-18 09:39:48,471::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-466::DEBUG::2020-03-18 09:39:48,471::task::993::Storage.TaskManager.Task::(_decref) Task=`5d96e07f-0897-4066-a15f-33e8900f1cdd`::ref 0 aborting False
Thread-466::INFO::2020-03-18 09:39:48,471::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42666 stopped
Reactor thread::INFO::2020-03-18 09:39:48,472::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42668
Reactor thread::DEBUG::2020-03-18 09:39:48,475::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,475::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42668
Reactor thread::DEBUG::2020-03-18 09:39:48,475::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42668)
BindingXMLRPC::INFO::2020-03-18 09:39:48,476::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42668
Thread-467::INFO::2020-03-18 09:39:48,476::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42668 started
Thread-467::DEBUG::2020-03-18 09:39:48,476::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-467::DEBUG::2020-03-18 09:39:48,476::task::595::Storage.TaskManager.Task::(_updateState) Task=`79c265c9-44b0-49ae-a7d0-df1f1fffbd28`::moving from state init -> state preparing
Thread-467::INFO::2020-03-18 09:39:48,476::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='00ad81a7-5637-40ff-8635-c039347f69ee', leafUUID='ec87b10a-e601-44bc-bd87-fbe6de274cd4')
Thread-467::DEBUG::2020-03-18 09:39:48,476::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`0f37c05b-323f-4806-80c8-bbedbbd79570`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3213' at 'prepareImage'
Thread-467::DEBUG::2020-03-18 09:39:48,476::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-467::DEBUG::2020-03-18 09:39:48,476::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-467::DEBUG::2020-03-18 09:39:48,476::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`0f37c05b-323f-4806-80c8-bbedbbd79570`::Granted request
Thread-467::DEBUG::2020-03-18 09:39:48,477::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`79c265c9-44b0-49ae-a7d0-df1f1fffbd28`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-467::DEBUG::2020-03-18 09:39:48,477::task::993::Storage.TaskManager.Task::(_decref) Task=`79c265c9-44b0-49ae-a7d0-df1f1fffbd28`::ref 1 aborting False
Thread-467::DEBUG::2020-03-18 09:39:48,479::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for ec87b10a-e601-44bc-bd87-fbe6de274cd4
Thread-467::DEBUG::2020-03-18 09:39:48,480::fileSD::560::Storage.StorageDomain::(activateVolumes) Fixing permissions on /rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/00ad81a7-5637-40ff-8635-c039347f69ee/ec87b10a-e601-44bc-bd87-fbe6de274cd4
Thread-467::DEBUG::2020-03-18 09:39:48,481::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1 mode: None
Thread-467::WARNING::2020-03-18 09:39:48,481::fileUtils::152::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1 already exists
Thread-467::DEBUG::2020-03-18 09:39:48,481::fileSD::517::Storage.StorageDomain::(createImageLinks) Creating symlink from /rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/00ad81a7-5637-40ff-8635-c039347f69ee to /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/00ad81a7-5637-40ff-8635-c039347f69ee
Thread-467::DEBUG::2020-03-18 09:39:48,481::fileSD::522::Storage.StorageDomain::(createImageLinks) img run dir already exists: /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/00ad81a7-5637-40ff-8635-c039347f69ee
Thread-467::DEBUG::2020-03-18 09:39:48,481::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for ec87b10a-e601-44bc-bd87-fbe6de274cd4
Thread-467::INFO::2020-03-18 09:39:48,482::logUtils::51::dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'info': {'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/00ad81a7-5637-40ff-8635-c039347f69ee/ec87b10a-e601-44bc-bd87-fbe6de274cd4', 'volumeID': u'ec87b10a-e601-44bc-bd87-fbe6de274cd4', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/00ad81a7-5637-40ff-8635-c039347f69ee/ec87b10a-e601-44bc-bd87-fbe6de274cd4.lease', 'imageID': '00ad81a7-5637-40ff-8635-c039347f69ee'}, 'path': u'/var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/00ad81a7-5637-40ff-8635-c039347f69ee/ec87b10a-e601-44bc-bd87-fbe6de274cd4', 'imgVolumesInfo': [{'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/00ad81a7-5637-40ff-8635-c039347f69ee/ec87b10a-e601-44bc-bd87-fbe6de274cd4', 'volumeID': u'ec87b10a-e601-44bc-bd87-fbe6de274cd4', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/00ad81a7-5637-40ff-8635-c039347f69ee/ec87b10a-e601-44bc-bd87-fbe6de274cd4.lease', 'imageID': '00ad81a7-5637-40ff-8635-c039347f69ee'}]}
Thread-467::DEBUG::2020-03-18 09:39:48,482::task::1191::Storage.TaskManager.Task::(prepare) Task=`79c265c9-44b0-49ae-a7d0-df1f1fffbd28`::finished: {'info': {'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/00ad81a7-5637-40ff-8635-c039347f69ee/ec87b10a-e601-44bc-bd87-fbe6de274cd4', 'volumeID': u'ec87b10a-e601-44bc-bd87-fbe6de274cd4', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/00ad81a7-5637-40ff-8635-c039347f69ee/ec87b10a-e601-44bc-bd87-fbe6de274cd4.lease', 'imageID': '00ad81a7-5637-40ff-8635-c039347f69ee'}, 'path': u'/var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/00ad81a7-5637-40ff-8635-c039347f69ee/ec87b10a-e601-44bc-bd87-fbe6de274cd4', 'imgVolumesInfo': [{'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/00ad81a7-5637-40ff-8635-c039347f69ee/ec87b10a-e601-44bc-bd87-fbe6de274cd4', 'volumeID': u'ec87b10a-e601-44bc-bd87-fbe6de274cd4', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/00ad81a7-5637-40ff-8635-c039347f69ee/ec87b10a-e601-44bc-bd87-fbe6de274cd4.lease', 'imageID': '00ad81a7-5637-40ff-8635-c039347f69ee'}]}
Thread-467::DEBUG::2020-03-18 09:39:48,482::task::595::Storage.TaskManager.Task::(_updateState) Task=`79c265c9-44b0-49ae-a7d0-df1f1fffbd28`::moving from state preparing -> state finished
Thread-467::DEBUG::2020-03-18 09:39:48,482::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-467::DEBUG::2020-03-18 09:39:48,482::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-467::DEBUG::2020-03-18 09:39:48,482::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-467::DEBUG::2020-03-18 09:39:48,482::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-467::DEBUG::2020-03-18 09:39:48,482::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-467::DEBUG::2020-03-18 09:39:48,482::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-467::DEBUG::2020-03-18 09:39:48,482::task::993::Storage.TaskManager.Task::(_decref) Task=`79c265c9-44b0-49ae-a7d0-df1f1fffbd28`::ref 0 aborting False
Thread-467::INFO::2020-03-18 09:39:48,483::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42668 stopped
Reactor thread::INFO::2020-03-18 09:39:48,483::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42670
Reactor thread::DEBUG::2020-03-18 09:39:48,487::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,487::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42670
Reactor thread::DEBUG::2020-03-18 09:39:48,487::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42670)
BindingXMLRPC::INFO::2020-03-18 09:39:48,488::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42670
Thread-468::INFO::2020-03-18 09:39:48,488::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42670 started
Thread-468::DEBUG::2020-03-18 09:39:48,488::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-468::DEBUG::2020-03-18 09:39:48,488::task::595::Storage.TaskManager.Task::(_updateState) Task=`f2658559-934e-4136-a702-50f303954be0`::moving from state init -> state preparing
Thread-468::INFO::2020-03-18 09:39:48,488::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumesList(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='50cc8c63-9929-4cbc-aec8-f1d196874b72', options=None)
Thread-468::DEBUG::2020-03-18 09:39:48,488::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`f1ae25bc-437f-4971-bee4-bfa8c4ebee45`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3291' at 'getVolumesList'
Thread-468::DEBUG::2020-03-18 09:39:48,488::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-468::DEBUG::2020-03-18 09:39:48,488::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-468::DEBUG::2020-03-18 09:39:48,489::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`f1ae25bc-437f-4971-bee4-bfa8c4ebee45`::Granted request
Thread-468::DEBUG::2020-03-18 09:39:48,489::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`f2658559-934e-4136-a702-50f303954be0`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-468::DEBUG::2020-03-18 09:39:48,489::task::993::Storage.TaskManager.Task::(_decref) Task=`f2658559-934e-4136-a702-50f303954be0`::ref 1 aborting False
Thread-468::INFO::2020-03-18 09:39:48,491::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumesList, Return response: {'uuidlist': [u'e8a4e709-0c98-4f1d-a42f-c5f0d499dca0']}
Thread-468::DEBUG::2020-03-18 09:39:48,491::task::1191::Storage.TaskManager.Task::(prepare) Task=`f2658559-934e-4136-a702-50f303954be0`::finished: {'uuidlist': [u'e8a4e709-0c98-4f1d-a42f-c5f0d499dca0']}
Thread-468::DEBUG::2020-03-18 09:39:48,491::task::595::Storage.TaskManager.Task::(_updateState) Task=`f2658559-934e-4136-a702-50f303954be0`::moving from state preparing -> state finished
Thread-468::DEBUG::2020-03-18 09:39:48,492::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-468::DEBUG::2020-03-18 09:39:48,492::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-468::DEBUG::2020-03-18 09:39:48,492::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-468::DEBUG::2020-03-18 09:39:48,492::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-468::DEBUG::2020-03-18 09:39:48,492::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-468::DEBUG::2020-03-18 09:39:48,492::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-468::DEBUG::2020-03-18 09:39:48,492::task::993::Storage.TaskManager.Task::(_decref) Task=`f2658559-934e-4136-a702-50f303954be0`::ref 0 aborting False
Thread-468::INFO::2020-03-18 09:39:48,492::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42670 stopped
Reactor thread::INFO::2020-03-18 09:39:48,493::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42672
Reactor thread::DEBUG::2020-03-18 09:39:48,496::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,496::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42672
Reactor thread::DEBUG::2020-03-18 09:39:48,496::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42672)
BindingXMLRPC::INFO::2020-03-18 09:39:48,496::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42672
Thread-469::INFO::2020-03-18 09:39:48,497::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42672 started
Thread-469::DEBUG::2020-03-18 09:39:48,497::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-469::DEBUG::2020-03-18 09:39:48,497::task::595::Storage.TaskManager.Task::(_updateState) Task=`224a4641-f77a-4fca-9ea7-393a17b105f1`::moving from state init -> state preparing
Thread-469::INFO::2020-03-18 09:39:48,497::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='50cc8c63-9929-4cbc-aec8-f1d196874b72', leafUUID='e8a4e709-0c98-4f1d-a42f-c5f0d499dca0')
Thread-469::DEBUG::2020-03-18 09:39:48,497::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`bc5c6e82-a5d1-43db-87ac-135224c3bcf5`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3213' at 'prepareImage'
Thread-469::DEBUG::2020-03-18 09:39:48,497::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-469::DEBUG::2020-03-18 09:39:48,497::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-469::DEBUG::2020-03-18 09:39:48,497::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`bc5c6e82-a5d1-43db-87ac-135224c3bcf5`::Granted request
Thread-469::DEBUG::2020-03-18 09:39:48,498::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`224a4641-f77a-4fca-9ea7-393a17b105f1`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-469::DEBUG::2020-03-18 09:39:48,498::task::993::Storage.TaskManager.Task::(_decref) Task=`224a4641-f77a-4fca-9ea7-393a17b105f1`::ref 1 aborting False
Thread-469::DEBUG::2020-03-18 09:39:48,500::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for e8a4e709-0c98-4f1d-a42f-c5f0d499dca0
Thread-469::DEBUG::2020-03-18 09:39:48,501::fileSD::560::Storage.StorageDomain::(activateVolumes) Fixing permissions on /rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/50cc8c63-9929-4cbc-aec8-f1d196874b72/e8a4e709-0c98-4f1d-a42f-c5f0d499dca0
Thread-469::DEBUG::2020-03-18 09:39:48,502::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1 mode: None
Thread-469::WARNING::2020-03-18 09:39:48,502::fileUtils::152::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1 already exists
Thread-469::DEBUG::2020-03-18 09:39:48,502::fileSD::517::Storage.StorageDomain::(createImageLinks) Creating symlink from /rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/50cc8c63-9929-4cbc-aec8-f1d196874b72 to /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/50cc8c63-9929-4cbc-aec8-f1d196874b72
Thread-469::DEBUG::2020-03-18 09:39:48,502::fileSD::522::Storage.StorageDomain::(createImageLinks) img run dir already exists: /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/50cc8c63-9929-4cbc-aec8-f1d196874b72
Thread-469::DEBUG::2020-03-18 09:39:48,502::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for e8a4e709-0c98-4f1d-a42f-c5f0d499dca0
Thread-469::INFO::2020-03-18 09:39:48,503::logUtils::51::dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'info': {'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/50cc8c63-9929-4cbc-aec8-f1d196874b72/e8a4e709-0c98-4f1d-a42f-c5f0d499dca0', 'volumeID': u'e8a4e709-0c98-4f1d-a42f-c5f0d499dca0', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/50cc8c63-9929-4cbc-aec8-f1d196874b72/e8a4e709-0c98-4f1d-a42f-c5f0d499dca0.lease', 'imageID': '50cc8c63-9929-4cbc-aec8-f1d196874b72'}, 'path': u'/var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/50cc8c63-9929-4cbc-aec8-f1d196874b72/e8a4e709-0c98-4f1d-a42f-c5f0d499dca0', 'imgVolumesInfo': [{'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/50cc8c63-9929-4cbc-aec8-f1d196874b72/e8a4e709-0c98-4f1d-a42f-c5f0d499dca0', 'volumeID': u'e8a4e709-0c98-4f1d-a42f-c5f0d499dca0', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/50cc8c63-9929-4cbc-aec8-f1d196874b72/e8a4e709-0c98-4f1d-a42f-c5f0d499dca0.lease', 'imageID': '50cc8c63-9929-4cbc-aec8-f1d196874b72'}]}
Thread-469::DEBUG::2020-03-18 09:39:48,503::task::1191::Storage.TaskManager.Task::(prepare) Task=`224a4641-f77a-4fca-9ea7-393a17b105f1`::finished: {'info': {'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/50cc8c63-9929-4cbc-aec8-f1d196874b72/e8a4e709-0c98-4f1d-a42f-c5f0d499dca0', 'volumeID': u'e8a4e709-0c98-4f1d-a42f-c5f0d499dca0', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/50cc8c63-9929-4cbc-aec8-f1d196874b72/e8a4e709-0c98-4f1d-a42f-c5f0d499dca0.lease', 'imageID': '50cc8c63-9929-4cbc-aec8-f1d196874b72'}, 'path': u'/var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/50cc8c63-9929-4cbc-aec8-f1d196874b72/e8a4e709-0c98-4f1d-a42f-c5f0d499dca0', 'imgVolumesInfo': [{'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/50cc8c63-9929-4cbc-aec8-f1d196874b72/e8a4e709-0c98-4f1d-a42f-c5f0d499dca0', 'volumeID': u'e8a4e709-0c98-4f1d-a42f-c5f0d499dca0', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/50cc8c63-9929-4cbc-aec8-f1d196874b72/e8a4e709-0c98-4f1d-a42f-c5f0d499dca0.lease', 'imageID': '50cc8c63-9929-4cbc-aec8-f1d196874b72'}]}
Thread-469::DEBUG::2020-03-18 09:39:48,503::task::595::Storage.TaskManager.Task::(_updateState) Task=`224a4641-f77a-4fca-9ea7-393a17b105f1`::moving from state preparing -> state finished
Thread-469::DEBUG::2020-03-18 09:39:48,503::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-469::DEBUG::2020-03-18 09:39:48,503::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-469::DEBUG::2020-03-18 09:39:48,503::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-469::DEBUG::2020-03-18 09:39:48,503::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-469::DEBUG::2020-03-18 09:39:48,503::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-469::DEBUG::2020-03-18 09:39:48,503::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-469::DEBUG::2020-03-18 09:39:48,503::task::993::Storage.TaskManager.Task::(_decref) Task=`224a4641-f77a-4fca-9ea7-393a17b105f1`::ref 0 aborting False
Thread-469::INFO::2020-03-18 09:39:48,504::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42672 stopped
Reactor thread::INFO::2020-03-18 09:39:48,504::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42674
Reactor thread::DEBUG::2020-03-18 09:39:48,508::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,508::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42674
Reactor thread::DEBUG::2020-03-18 09:39:48,508::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42674)
BindingXMLRPC::INFO::2020-03-18 09:39:48,508::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42674
Thread-470::INFO::2020-03-18 09:39:48,508::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42674 started
Thread-470::DEBUG::2020-03-18 09:39:48,508::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-470::DEBUG::2020-03-18 09:39:48,509::task::595::Storage.TaskManager.Task::(_updateState) Task=`5a2cb9d9-1806-4370-8c9a-82a5bbc6f55e`::moving from state init -> state preparing
Thread-470::INFO::2020-03-18 09:39:48,509::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumesList(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='f470821b-d9a9-4835-8ed9-2ab358e06b41', options=None)
Thread-470::DEBUG::2020-03-18 09:39:48,509::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`579eed7d-8b30-484a-b351-8a099e71663c`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3291' at 'getVolumesList'
Thread-470::DEBUG::2020-03-18 09:39:48,509::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-470::DEBUG::2020-03-18 09:39:48,509::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-470::DEBUG::2020-03-18 09:39:48,509::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`579eed7d-8b30-484a-b351-8a099e71663c`::Granted request
Thread-470::DEBUG::2020-03-18 09:39:48,509::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`5a2cb9d9-1806-4370-8c9a-82a5bbc6f55e`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-470::DEBUG::2020-03-18 09:39:48,509::task::993::Storage.TaskManager.Task::(_decref) Task=`5a2cb9d9-1806-4370-8c9a-82a5bbc6f55e`::ref 1 aborting False
Thread-470::INFO::2020-03-18 09:39:48,512::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumesList, Return response: {'uuidlist': [u'40f29b86-0eaa-4e64-a670-69ed7bc1011d']}
Thread-470::DEBUG::2020-03-18 09:39:48,512::task::1191::Storage.TaskManager.Task::(prepare) Task=`5a2cb9d9-1806-4370-8c9a-82a5bbc6f55e`::finished: {'uuidlist': [u'40f29b86-0eaa-4e64-a670-69ed7bc1011d']}
Thread-470::DEBUG::2020-03-18 09:39:48,512::task::595::Storage.TaskManager.Task::(_updateState) Task=`5a2cb9d9-1806-4370-8c9a-82a5bbc6f55e`::moving from state preparing -> state finished
Thread-470::DEBUG::2020-03-18 09:39:48,512::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-470::DEBUG::2020-03-18 09:39:48,512::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-470::DEBUG::2020-03-18 09:39:48,512::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-470::DEBUG::2020-03-18 09:39:48,512::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-470::DEBUG::2020-03-18 09:39:48,512::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-470::DEBUG::2020-03-18 09:39:48,512::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-470::DEBUG::2020-03-18 09:39:48,512::task::993::Storage.TaskManager.Task::(_decref) Task=`5a2cb9d9-1806-4370-8c9a-82a5bbc6f55e`::ref 0 aborting False
Thread-470::INFO::2020-03-18 09:39:48,513::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42674 stopped
Reactor thread::INFO::2020-03-18 09:39:48,513::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42676
Reactor thread::DEBUG::2020-03-18 09:39:48,516::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,516::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42676
Reactor thread::DEBUG::2020-03-18 09:39:48,517::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42676)
BindingXMLRPC::INFO::2020-03-18 09:39:48,517::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42676
Thread-471::INFO::2020-03-18 09:39:48,517::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42676 started
Thread-471::DEBUG::2020-03-18 09:39:48,517::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-471::DEBUG::2020-03-18 09:39:48,517::task::595::Storage.TaskManager.Task::(_updateState) Task=`0b77417d-7665-446d-9c5d-fa9d774a805b`::moving from state init -> state preparing
Thread-471::INFO::2020-03-18 09:39:48,517::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='f470821b-d9a9-4835-8ed9-2ab358e06b41', leafUUID='40f29b86-0eaa-4e64-a670-69ed7bc1011d')
Thread-471::DEBUG::2020-03-18 09:39:48,517::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`4a1db10f-c03c-412f-9d63-a7edeb6015ac`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3213' at 'prepareImage'
Thread-471::DEBUG::2020-03-18 09:39:48,517::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-471::DEBUG::2020-03-18 09:39:48,518::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-471::DEBUG::2020-03-18 09:39:48,518::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`4a1db10f-c03c-412f-9d63-a7edeb6015ac`::Granted request
Thread-471::DEBUG::2020-03-18 09:39:48,518::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`0b77417d-7665-446d-9c5d-fa9d774a805b`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-471::DEBUG::2020-03-18 09:39:48,518::task::993::Storage.TaskManager.Task::(_decref) Task=`0b77417d-7665-446d-9c5d-fa9d774a805b`::ref 1 aborting False
Thread-471::DEBUG::2020-03-18 09:39:48,520::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 40f29b86-0eaa-4e64-a670-69ed7bc1011d
Thread-471::DEBUG::2020-03-18 09:39:48,521::fileSD::560::Storage.StorageDomain::(activateVolumes) Fixing permissions on /rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/f470821b-d9a9-4835-8ed9-2ab358e06b41/40f29b86-0eaa-4e64-a670-69ed7bc1011d
Thread-471::DEBUG::2020-03-18 09:39:48,521::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1 mode: None
Thread-471::WARNING::2020-03-18 09:39:48,522::fileUtils::152::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1 already exists
Thread-471::DEBUG::2020-03-18 09:39:48,522::fileSD::517::Storage.StorageDomain::(createImageLinks) Creating symlink from /rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/f470821b-d9a9-4835-8ed9-2ab358e06b41 to /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/f470821b-d9a9-4835-8ed9-2ab358e06b41
Thread-471::DEBUG::2020-03-18 09:39:48,522::fileSD::522::Storage.StorageDomain::(createImageLinks) img run dir already exists: /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/f470821b-d9a9-4835-8ed9-2ab358e06b41
Thread-471::DEBUG::2020-03-18 09:39:48,522::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 40f29b86-0eaa-4e64-a670-69ed7bc1011d
Thread-471::INFO::2020-03-18 09:39:48,523::logUtils::51::dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'info': {'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/f470821b-d9a9-4835-8ed9-2ab358e06b41/40f29b86-0eaa-4e64-a670-69ed7bc1011d', 'volumeID': u'40f29b86-0eaa-4e64-a670-69ed7bc1011d', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/f470821b-d9a9-4835-8ed9-2ab358e06b41/40f29b86-0eaa-4e64-a670-69ed7bc1011d.lease', 'imageID': 'f470821b-d9a9-4835-8ed9-2ab358e06b41'}, 'path': u'/var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/f470821b-d9a9-4835-8ed9-2ab358e06b41/40f29b86-0eaa-4e64-a670-69ed7bc1011d', 'imgVolumesInfo': [{'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/f470821b-d9a9-4835-8ed9-2ab358e06b41/40f29b86-0eaa-4e64-a670-69ed7bc1011d', 'volumeID': u'40f29b86-0eaa-4e64-a670-69ed7bc1011d', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/f470821b-d9a9-4835-8ed9-2ab358e06b41/40f29b86-0eaa-4e64-a670-69ed7bc1011d.lease', 'imageID': 'f470821b-d9a9-4835-8ed9-2ab358e06b41'}]}
Thread-471::DEBUG::2020-03-18 09:39:48,523::task::1191::Storage.TaskManager.Task::(prepare) Task=`0b77417d-7665-446d-9c5d-fa9d774a805b`::finished: {'info': {'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/f470821b-d9a9-4835-8ed9-2ab358e06b41/40f29b86-0eaa-4e64-a670-69ed7bc1011d', 'volumeID': u'40f29b86-0eaa-4e64-a670-69ed7bc1011d', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/f470821b-d9a9-4835-8ed9-2ab358e06b41/40f29b86-0eaa-4e64-a670-69ed7bc1011d.lease', 'imageID': 'f470821b-d9a9-4835-8ed9-2ab358e06b41'}, 'path': u'/var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/f470821b-d9a9-4835-8ed9-2ab358e06b41/40f29b86-0eaa-4e64-a670-69ed7bc1011d', 'imgVolumesInfo': [{'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/f470821b-d9a9-4835-8ed9-2ab358e06b41/40f29b86-0eaa-4e64-a670-69ed7bc1011d', 'volumeID': u'40f29b86-0eaa-4e64-a670-69ed7bc1011d', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/f470821b-d9a9-4835-8ed9-2ab358e06b41/40f29b86-0eaa-4e64-a670-69ed7bc1011d.lease', 'imageID': 'f470821b-d9a9-4835-8ed9-2ab358e06b41'}]}
Thread-471::DEBUG::2020-03-18 09:39:48,523::task::595::Storage.TaskManager.Task::(_updateState) Task=`0b77417d-7665-446d-9c5d-fa9d774a805b`::moving from state preparing -> state finished
Thread-471::DEBUG::2020-03-18 09:39:48,523::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-471::DEBUG::2020-03-18 09:39:48,523::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-471::DEBUG::2020-03-18 09:39:48,523::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-471::DEBUG::2020-03-18 09:39:48,523::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-471::DEBUG::2020-03-18 09:39:48,523::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-471::DEBUG::2020-03-18 09:39:48,523::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-471::DEBUG::2020-03-18 09:39:48,523::task::993::Storage.TaskManager.Task::(_decref) Task=`0b77417d-7665-446d-9c5d-fa9d774a805b`::ref 0 aborting False
Thread-471::INFO::2020-03-18 09:39:48,524::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42676 stopped
Reactor thread::INFO::2020-03-18 09:39:48,524::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42678
Reactor thread::DEBUG::2020-03-18 09:39:48,528::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,528::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42678
Reactor thread::DEBUG::2020-03-18 09:39:48,528::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42678)
BindingXMLRPC::INFO::2020-03-18 09:39:48,528::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42678
Thread-472::INFO::2020-03-18 09:39:48,528::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42678 started
Thread-472::DEBUG::2020-03-18 09:39:48,528::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-472::DEBUG::2020-03-18 09:39:48,529::task::595::Storage.TaskManager.Task::(_updateState) Task=`8d6ac831-7be0-411f-be32-1b1455d0c5d1`::moving from state init -> state preparing
Thread-472::INFO::2020-03-18 09:39:48,529::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumesList(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='63491779-c7cf-434c-84c9-7878694a8946', options=None)
Thread-472::DEBUG::2020-03-18 09:39:48,529::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`10cffda5-c176-4262-871f-c4df5607df89`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3291' at 'getVolumesList'
Thread-472::DEBUG::2020-03-18 09:39:48,529::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-472::DEBUG::2020-03-18 09:39:48,529::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-472::DEBUG::2020-03-18 09:39:48,529::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`10cffda5-c176-4262-871f-c4df5607df89`::Granted request
Thread-472::DEBUG::2020-03-18 09:39:48,529::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`8d6ac831-7be0-411f-be32-1b1455d0c5d1`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-472::DEBUG::2020-03-18 09:39:48,529::task::993::Storage.TaskManager.Task::(_decref) Task=`8d6ac831-7be0-411f-be32-1b1455d0c5d1`::ref 1 aborting False
Thread-472::INFO::2020-03-18 09:39:48,532::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumesList, Return response: {'uuidlist': [u'2f086a70-e97d-4161-a232-1268bb3145de']}
Thread-472::DEBUG::2020-03-18 09:39:48,533::task::1191::Storage.TaskManager.Task::(prepare) Task=`8d6ac831-7be0-411f-be32-1b1455d0c5d1`::finished: {'uuidlist': [u'2f086a70-e97d-4161-a232-1268bb3145de']}
Thread-472::DEBUG::2020-03-18 09:39:48,533::task::595::Storage.TaskManager.Task::(_updateState) Task=`8d6ac831-7be0-411f-be32-1b1455d0c5d1`::moving from state preparing -> state finished
Thread-472::DEBUG::2020-03-18 09:39:48,533::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-472::DEBUG::2020-03-18 09:39:48,533::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-472::DEBUG::2020-03-18 09:39:48,533::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-472::DEBUG::2020-03-18 09:39:48,533::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-472::DEBUG::2020-03-18 09:39:48,533::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-472::DEBUG::2020-03-18 09:39:48,533::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-472::DEBUG::2020-03-18 09:39:48,533::task::993::Storage.TaskManager.Task::(_decref) Task=`8d6ac831-7be0-411f-be32-1b1455d0c5d1`::ref 0 aborting False
Thread-472::INFO::2020-03-18 09:39:48,534::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42678 stopped
Reactor thread::INFO::2020-03-18 09:39:48,534::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42680
Reactor thread::DEBUG::2020-03-18 09:39:48,537::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,537::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42680
Reactor thread::DEBUG::2020-03-18 09:39:48,537::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42680)
BindingXMLRPC::INFO::2020-03-18 09:39:48,538::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42680
Thread-473::INFO::2020-03-18 09:39:48,538::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42680 started
Thread-473::DEBUG::2020-03-18 09:39:48,538::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-473::DEBUG::2020-03-18 09:39:48,538::task::595::Storage.TaskManager.Task::(_updateState) Task=`023b8d2f-c581-4da4-b03e-7c2b00fb0d51`::moving from state init -> state preparing
Thread-473::INFO::2020-03-18 09:39:48,538::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='63491779-c7cf-434c-84c9-7878694a8946', leafUUID='2f086a70-e97d-4161-a232-1268bb3145de')
Thread-473::DEBUG::2020-03-18 09:39:48,538::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`823a80ee-eaee-4c8c-93a3-711e7eaa0c22`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3213' at 'prepareImage'
Thread-473::DEBUG::2020-03-18 09:39:48,538::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-473::DEBUG::2020-03-18 09:39:48,538::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-473::DEBUG::2020-03-18 09:39:48,539::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`823a80ee-eaee-4c8c-93a3-711e7eaa0c22`::Granted request
Thread-473::DEBUG::2020-03-18 09:39:48,539::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`023b8d2f-c581-4da4-b03e-7c2b00fb0d51`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-473::DEBUG::2020-03-18 09:39:48,539::task::993::Storage.TaskManager.Task::(_decref) Task=`023b8d2f-c581-4da4-b03e-7c2b00fb0d51`::ref 1 aborting False
Thread-473::DEBUG::2020-03-18 09:39:48,541::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 2f086a70-e97d-4161-a232-1268bb3145de
Thread-473::DEBUG::2020-03-18 09:39:48,542::fileSD::560::Storage.StorageDomain::(activateVolumes) Fixing permissions on /rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/63491779-c7cf-434c-84c9-7878694a8946/2f086a70-e97d-4161-a232-1268bb3145de
Thread-473::DEBUG::2020-03-18 09:39:48,543::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1 mode: None
Thread-473::WARNING::2020-03-18 09:39:48,543::fileUtils::152::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1 already exists
Thread-473::DEBUG::2020-03-18 09:39:48,543::fileSD::517::Storage.StorageDomain::(createImageLinks) Creating symlink from /rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/63491779-c7cf-434c-84c9-7878694a8946 to /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/63491779-c7cf-434c-84c9-7878694a8946
Thread-473::DEBUG::2020-03-18 09:39:48,543::fileSD::522::Storage.StorageDomain::(createImageLinks) img run dir already exists: /var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/63491779-c7cf-434c-84c9-7878694a8946
Thread-473::DEBUG::2020-03-18 09:39:48,544::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 2f086a70-e97d-4161-a232-1268bb3145de
Thread-473::INFO::2020-03-18 09:39:48,544::logUtils::51::dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'info': {'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/63491779-c7cf-434c-84c9-7878694a8946/2f086a70-e97d-4161-a232-1268bb3145de', 'volumeID': u'2f086a70-e97d-4161-a232-1268bb3145de', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/63491779-c7cf-434c-84c9-7878694a8946/2f086a70-e97d-4161-a232-1268bb3145de.lease', 'imageID': '63491779-c7cf-434c-84c9-7878694a8946'}, 'path': u'/var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/63491779-c7cf-434c-84c9-7878694a8946/2f086a70-e97d-4161-a232-1268bb3145de', 'imgVolumesInfo': [{'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/63491779-c7cf-434c-84c9-7878694a8946/2f086a70-e97d-4161-a232-1268bb3145de', 'volumeID': u'2f086a70-e97d-4161-a232-1268bb3145de', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/63491779-c7cf-434c-84c9-7878694a8946/2f086a70-e97d-4161-a232-1268bb3145de.lease', 'imageID': '63491779-c7cf-434c-84c9-7878694a8946'}]}
Thread-473::DEBUG::2020-03-18 09:39:48,544::task::1191::Storage.TaskManager.Task::(prepare) Task=`023b8d2f-c581-4da4-b03e-7c2b00fb0d51`::finished: {'info': {'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/63491779-c7cf-434c-84c9-7878694a8946/2f086a70-e97d-4161-a232-1268bb3145de', 'volumeID': u'2f086a70-e97d-4161-a232-1268bb3145de', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/63491779-c7cf-434c-84c9-7878694a8946/2f086a70-e97d-4161-a232-1268bb3145de.lease', 'imageID': '63491779-c7cf-434c-84c9-7878694a8946'}, 'path': u'/var/run/vdsm/storage/331e6287-61df-48dd-9733-a8ad236750b1/63491779-c7cf-434c-84c9-7878694a8946/2f086a70-e97d-4161-a232-1268bb3145de', 'imgVolumesInfo': [{'domainID': '331e6287-61df-48dd-9733-a8ad236750b1', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/63491779-c7cf-434c-84c9-7878694a8946/2f086a70-e97d-4161-a232-1268bb3145de', 'volumeID': u'2f086a70-e97d-4161-a232-1268bb3145de', 'leasePath': u'/rhev/data-center/mnt/aidc-nap1-n1.corp.alleninstitute.org:_netapp__engine__36/331e6287-61df-48dd-9733-a8ad236750b1/images/63491779-c7cf-434c-84c9-7878694a8946/2f086a70-e97d-4161-a232-1268bb3145de.lease', 'imageID': '63491779-c7cf-434c-84c9-7878694a8946'}]}
Thread-473::DEBUG::2020-03-18 09:39:48,544::task::595::Storage.TaskManager.Task::(_updateState) Task=`023b8d2f-c581-4da4-b03e-7c2b00fb0d51`::moving from state preparing -> state finished
Thread-473::DEBUG::2020-03-18 09:39:48,544::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-473::DEBUG::2020-03-18 09:39:48,544::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-473::DEBUG::2020-03-18 09:39:48,544::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-473::DEBUG::2020-03-18 09:39:48,544::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-473::DEBUG::2020-03-18 09:39:48,544::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-473::DEBUG::2020-03-18 09:39:48,544::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-473::DEBUG::2020-03-18 09:39:48,545::task::993::Storage.TaskManager.Task::(_decref) Task=`023b8d2f-c581-4da4-b03e-7c2b00fb0d51`::ref 0 aborting False
Thread-473::INFO::2020-03-18 09:39:48,545::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42680 stopped
Reactor thread::INFO::2020-03-18 09:39:48,546::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42682
Reactor thread::DEBUG::2020-03-18 09:39:48,550::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,550::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42682
Reactor thread::DEBUG::2020-03-18 09:39:48,550::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42682)
BindingXMLRPC::INFO::2020-03-18 09:39:48,550::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42682
Thread-474::INFO::2020-03-18 09:39:48,550::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42682 started
Thread-474::DEBUG::2020-03-18 09:39:48,550::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-474::DEBUG::2020-03-18 09:39:48,550::task::595::Storage.TaskManager.Task::(_updateState) Task=`160804a8-d1cf-4b69-9a07-99c4b4c6f8ff`::moving from state init -> state preparing
Thread-474::INFO::2020-03-18 09:39:48,550::logUtils::48::dispatcher::(wrapper) Run and protect: getImagesList(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', options=None)
Thread-474::DEBUG::2020-03-18 09:39:48,551::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`fda7ff0a-f3c4-4733-91df-868b9240d625`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3313' at 'getImagesList'
Thread-474::DEBUG::2020-03-18 09:39:48,551::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-474::DEBUG::2020-03-18 09:39:48,551::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-474::DEBUG::2020-03-18 09:39:48,551::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`fda7ff0a-f3c4-4733-91df-868b9240d625`::Granted request
Thread-474::DEBUG::2020-03-18 09:39:48,551::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`160804a8-d1cf-4b69-9a07-99c4b4c6f8ff`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-474::DEBUG::2020-03-18 09:39:48,551::task::993::Storage.TaskManager.Task::(_decref) Task=`160804a8-d1cf-4b69-9a07-99c4b4c6f8ff`::ref 1 aborting False
Thread-474::INFO::2020-03-18 09:39:48,551::logUtils::51::dispatcher::(wrapper) Run and protect: getImagesList, Return response: {'imageslist': []}
Thread-474::DEBUG::2020-03-18 09:39:48,551::task::1191::Storage.TaskManager.Task::(prepare) Task=`160804a8-d1cf-4b69-9a07-99c4b4c6f8ff`::finished: {'imageslist': []}
Thread-474::DEBUG::2020-03-18 09:39:48,551::task::595::Storage.TaskManager.Task::(_updateState) Task=`160804a8-d1cf-4b69-9a07-99c4b4c6f8ff`::moving from state preparing -> state finished
Thread-474::DEBUG::2020-03-18 09:39:48,551::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-474::DEBUG::2020-03-18 09:39:48,552::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-474::DEBUG::2020-03-18 09:39:48,552::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-474::DEBUG::2020-03-18 09:39:48,552::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-474::DEBUG::2020-03-18 09:39:48,552::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-474::DEBUG::2020-03-18 09:39:48,552::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-474::DEBUG::2020-03-18 09:39:48,552::task::993::Storage.TaskManager.Task::(_decref) Task=`160804a8-d1cf-4b69-9a07-99c4b4c6f8ff`::ref 0 aborting False
Thread-474::INFO::2020-03-18 09:39:48,552::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42682 stopped
Reactor thread::INFO::2020-03-18 09:39:48,553::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42684
Reactor thread::DEBUG::2020-03-18 09:39:48,556::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,556::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42684
Reactor thread::DEBUG::2020-03-18 09:39:48,557::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42684)
BindingXMLRPC::INFO::2020-03-18 09:39:48,557::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42684
Thread-475::INFO::2020-03-18 09:39:48,557::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42684 started
Thread-475::DEBUG::2020-03-18 09:39:48,557::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-475::DEBUG::2020-03-18 09:39:48,557::task::595::Storage.TaskManager.Task::(_updateState) Task=`ac68ce82-549c-4165-bc93-52a0ab4869b4`::moving from state init -> state preparing
Thread-475::INFO::2020-03-18 09:39:48,557::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumesList(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='b9fd2434-60b0-4a5d-abb2-adc358d0dfd1', options=None)
Thread-475::DEBUG::2020-03-18 09:39:48,557::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`93c014c9-1c13-40db-906f-794242525271`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3291' at 'getVolumesList'
Thread-475::DEBUG::2020-03-18 09:39:48,557::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-475::DEBUG::2020-03-18 09:39:48,558::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-475::DEBUG::2020-03-18 09:39:48,558::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`93c014c9-1c13-40db-906f-794242525271`::Granted request
Thread-475::DEBUG::2020-03-18 09:39:48,558::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`ac68ce82-549c-4165-bc93-52a0ab4869b4`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-475::DEBUG::2020-03-18 09:39:48,558::task::993::Storage.TaskManager.Task::(_decref) Task=`ac68ce82-549c-4165-bc93-52a0ab4869b4`::ref 1 aborting False
Thread-475::INFO::2020-03-18 09:39:48,560::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumesList, Return response: {'uuidlist': [u'53d31c6e-bfc3-4dee-99be-f0fa77006cad']}
Thread-475::DEBUG::2020-03-18 09:39:48,560::task::1191::Storage.TaskManager.Task::(prepare) Task=`ac68ce82-549c-4165-bc93-52a0ab4869b4`::finished: {'uuidlist': [u'53d31c6e-bfc3-4dee-99be-f0fa77006cad']}
Thread-475::DEBUG::2020-03-18 09:39:48,560::task::595::Storage.TaskManager.Task::(_updateState) Task=`ac68ce82-549c-4165-bc93-52a0ab4869b4`::moving from state preparing -> state finished
Thread-475::DEBUG::2020-03-18 09:39:48,560::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-475::DEBUG::2020-03-18 09:39:48,561::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-475::DEBUG::2020-03-18 09:39:48,561::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-475::DEBUG::2020-03-18 09:39:48,561::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-475::DEBUG::2020-03-18 09:39:48,561::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-475::DEBUG::2020-03-18 09:39:48,561::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-475::DEBUG::2020-03-18 09:39:48,561::task::993::Storage.TaskManager.Task::(_decref) Task=`ac68ce82-549c-4165-bc93-52a0ab4869b4`::ref 0 aborting False
Thread-475::INFO::2020-03-18 09:39:48,561::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42684 stopped
Reactor thread::INFO::2020-03-18 09:39:48,562::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42686
Reactor thread::DEBUG::2020-03-18 09:39:48,565::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,565::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42686
Reactor thread::DEBUG::2020-03-18 09:39:48,565::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42686)
BindingXMLRPC::INFO::2020-03-18 09:39:48,565::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42686
Thread-476::INFO::2020-03-18 09:39:48,566::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42686 started
Thread-476::DEBUG::2020-03-18 09:39:48,566::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-476::DEBUG::2020-03-18 09:39:48,566::task::595::Storage.TaskManager.Task::(_updateState) Task=`9ac96c69-475c-4e0d-bf79-71e33f387255`::moving from state init -> state preparing
Thread-476::INFO::2020-03-18 09:39:48,566::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumeInfo(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='b9fd2434-60b0-4a5d-abb2-adc358d0dfd1', volUUID='53d31c6e-bfc3-4dee-99be-f0fa77006cad', options=None)
Thread-476::DEBUG::2020-03-18 09:39:48,566::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`07603064-78b1-474c-99ee-43fe7cb23267`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3159' at 'getVolumeInfo'
Thread-476::DEBUG::2020-03-18 09:39:48,566::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-476::DEBUG::2020-03-18 09:39:48,566::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-476::DEBUG::2020-03-18 09:39:48,566::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`07603064-78b1-474c-99ee-43fe7cb23267`::Granted request
Thread-476::DEBUG::2020-03-18 09:39:48,567::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`9ac96c69-475c-4e0d-bf79-71e33f387255`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-476::DEBUG::2020-03-18 09:39:48,567::task::993::Storage.TaskManager.Task::(_decref) Task=`9ac96c69-475c-4e0d-bf79-71e33f387255`::ref 1 aborting False
Thread-476::DEBUG::2020-03-18 09:39:48,567::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 53d31c6e-bfc3-4dee-99be-f0fa77006cad
Thread-476::INFO::2020-03-18 09:39:48,568::volume::915::Storage.Volume::(getInfo) Info request: sdUUID=331e6287-61df-48dd-9733-a8ad236750b1 imgUUID=b9fd2434-60b0-4a5d-abb2-adc358d0dfd1 volUUID = 53d31c6e-bfc3-4dee-99be-f0fa77006cad
Thread-476::INFO::2020-03-18 09:39:48,570::volume::943::Storage.Volume::(getInfo) 331e6287-61df-48dd-9733-a8ad236750b1/b9fd2434-60b0-4a5d-abb2-adc358d0dfd1/53d31c6e-bfc3-4dee-99be-f0fa77006cad info is {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': 'hosted-engine.lockspace', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': 'b9fd2434-60b0-4a5d-abb2-adc358d0dfd1', 'ctime': '1581543998', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1048576', 'children': [], 'pool': '', 'capacity': '1048576', 'uuid': '53d31c6e-bfc3-4dee-99be-f0fa77006cad', 'truesize': '1056768', 'type': 'PREALLOCATED'}
Thread-476::INFO::2020-03-18 09:39:48,570::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumeInfo, Return response: {'info': {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': 'hosted-engine.lockspace', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': 'b9fd2434-60b0-4a5d-abb2-adc358d0dfd1', 'ctime': '1581543998', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1048576', 'children': [], 'pool': '', 'capacity': '1048576', 'uuid': '53d31c6e-bfc3-4dee-99be-f0fa77006cad', 'truesize': '1056768', 'type': 'PREALLOCATED'}}
Thread-476::DEBUG::2020-03-18 09:39:48,570::task::1191::Storage.TaskManager.Task::(prepare) Task=`9ac96c69-475c-4e0d-bf79-71e33f387255`::finished: {'info': {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': 'hosted-engine.lockspace', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': 'b9fd2434-60b0-4a5d-abb2-adc358d0dfd1', 'ctime': '1581543998', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1048576', 'children': [], 'pool': '', 'capacity': '1048576', 'uuid': '53d31c6e-bfc3-4dee-99be-f0fa77006cad', 'truesize': '1056768', 'type': 'PREALLOCATED'}}
Thread-476::DEBUG::2020-03-18 09:39:48,570::task::595::Storage.TaskManager.Task::(_updateState) Task=`9ac96c69-475c-4e0d-bf79-71e33f387255`::moving from state preparing -> state finished
Thread-476::DEBUG::2020-03-18 09:39:48,570::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-476::DEBUG::2020-03-18 09:39:48,570::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-476::DEBUG::2020-03-18 09:39:48,570::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-476::DEBUG::2020-03-18 09:39:48,570::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-476::DEBUG::2020-03-18 09:39:48,570::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-476::DEBUG::2020-03-18 09:39:48,570::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-476::DEBUG::2020-03-18 09:39:48,570::task::993::Storage.TaskManager.Task::(_decref) Task=`9ac96c69-475c-4e0d-bf79-71e33f387255`::ref 0 aborting False
Thread-476::INFO::2020-03-18 09:39:48,571::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42686 stopped
Reactor thread::INFO::2020-03-18 09:39:48,572::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42688
Reactor thread::DEBUG::2020-03-18 09:39:48,575::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,575::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42688
Reactor thread::DEBUG::2020-03-18 09:39:48,575::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42688)
BindingXMLRPC::INFO::2020-03-18 09:39:48,575::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42688
Thread-477::INFO::2020-03-18 09:39:48,575::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42688 started
Thread-477::DEBUG::2020-03-18 09:39:48,576::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-477::DEBUG::2020-03-18 09:39:48,576::task::595::Storage.TaskManager.Task::(_updateState) Task=`65f3cb5c-2c78-4256-9ade-d50be15739e0`::moving from state init -> state preparing
Thread-477::INFO::2020-03-18 09:39:48,576::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumesList(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='a667b14a-1f92-40f1-8379-210e0c42fc26', options=None)
Thread-477::DEBUG::2020-03-18 09:39:48,576::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`c585444c-4f57-4e14-9e1c-d0d2f82e8475`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3291' at 'getVolumesList'
Thread-477::DEBUG::2020-03-18 09:39:48,576::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-477::DEBUG::2020-03-18 09:39:48,576::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-477::DEBUG::2020-03-18 09:39:48,576::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`c585444c-4f57-4e14-9e1c-d0d2f82e8475`::Granted request
Thread-477::DEBUG::2020-03-18 09:39:48,576::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`65f3cb5c-2c78-4256-9ade-d50be15739e0`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-477::DEBUG::2020-03-18 09:39:48,576::task::993::Storage.TaskManager.Task::(_decref) Task=`65f3cb5c-2c78-4256-9ade-d50be15739e0`::ref 1 aborting False
Thread-477::INFO::2020-03-18 09:39:48,579::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumesList, Return response: {'uuidlist': [u'43cc29d0-6919-4493-95fe-6d58f97acdfc']}
Thread-477::DEBUG::2020-03-18 09:39:48,579::task::1191::Storage.TaskManager.Task::(prepare) Task=`65f3cb5c-2c78-4256-9ade-d50be15739e0`::finished: {'uuidlist': [u'43cc29d0-6919-4493-95fe-6d58f97acdfc']}
Thread-477::DEBUG::2020-03-18 09:39:48,579::task::595::Storage.TaskManager.Task::(_updateState) Task=`65f3cb5c-2c78-4256-9ade-d50be15739e0`::moving from state preparing -> state finished
Thread-477::DEBUG::2020-03-18 09:39:48,579::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-477::DEBUG::2020-03-18 09:39:48,579::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-477::DEBUG::2020-03-18 09:39:48,579::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-477::DEBUG::2020-03-18 09:39:48,579::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-477::DEBUG::2020-03-18 09:39:48,579::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-477::DEBUG::2020-03-18 09:39:48,579::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-477::DEBUG::2020-03-18 09:39:48,579::task::993::Storage.TaskManager.Task::(_decref) Task=`65f3cb5c-2c78-4256-9ade-d50be15739e0`::ref 0 aborting False
Thread-477::INFO::2020-03-18 09:39:48,580::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42688 stopped
Reactor thread::INFO::2020-03-18 09:39:48,580::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42690
Reactor thread::DEBUG::2020-03-18 09:39:48,583::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,583::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42690
Reactor thread::DEBUG::2020-03-18 09:39:48,584::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42690)
BindingXMLRPC::INFO::2020-03-18 09:39:48,584::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42690
Thread-478::INFO::2020-03-18 09:39:48,584::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42690 started
Thread-478::DEBUG::2020-03-18 09:39:48,584::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-478::DEBUG::2020-03-18 09:39:48,584::task::595::Storage.TaskManager.Task::(_updateState) Task=`00ffc02c-e764-416b-85ea-e8cc8944aefb`::moving from state init -> state preparing
Thread-478::INFO::2020-03-18 09:39:48,584::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumeInfo(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='a667b14a-1f92-40f1-8379-210e0c42fc26', volUUID='43cc29d0-6919-4493-95fe-6d58f97acdfc', options=None)
Thread-478::DEBUG::2020-03-18 09:39:48,584::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`86ee411e-0888-41ce-a36d-e0544d0f0485`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3159' at 'getVolumeInfo'
Thread-478::DEBUG::2020-03-18 09:39:48,584::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-478::DEBUG::2020-03-18 09:39:48,585::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-478::DEBUG::2020-03-18 09:39:48,585::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`86ee411e-0888-41ce-a36d-e0544d0f0485`::Granted request
Thread-478::DEBUG::2020-03-18 09:39:48,585::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`00ffc02c-e764-416b-85ea-e8cc8944aefb`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-478::DEBUG::2020-03-18 09:39:48,585::task::993::Storage.TaskManager.Task::(_decref) Task=`00ffc02c-e764-416b-85ea-e8cc8944aefb`::ref 1 aborting False
Thread-478::DEBUG::2020-03-18 09:39:48,585::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 43cc29d0-6919-4493-95fe-6d58f97acdfc
Thread-478::INFO::2020-03-18 09:39:48,586::volume::915::Storage.Volume::(getInfo) Info request: sdUUID=331e6287-61df-48dd-9733-a8ad236750b1 imgUUID=a667b14a-1f92-40f1-8379-210e0c42fc26 volUUID = 43cc29d0-6919-4493-95fe-6d58f97acdfc
Thread-478::INFO::2020-03-18 09:39:48,588::volume::943::Storage.Volume::(getInfo) 331e6287-61df-48dd-9733-a8ad236750b1/a667b14a-1f92-40f1-8379-210e0c42fc26/43cc29d0-6919-4493-95fe-6d58f97acdfc info is {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': 'HostedEngineConfigurationImage', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': 'a667b14a-1f92-40f1-8379-210e0c42fc26', 'ctime': '1581543997', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '20480', 'children': [], 'pool': '', 'capacity': '1073741824', 'uuid': '43cc29d0-6919-4493-95fe-6d58f97acdfc', 'truesize': '20480', 'type': 'PREALLOCATED'}
Thread-478::INFO::2020-03-18 09:39:48,588::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumeInfo, Return response: {'info': {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': 'HostedEngineConfigurationImage', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': 'a667b14a-1f92-40f1-8379-210e0c42fc26', 'ctime': '1581543997', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '20480', 'children': [], 'pool': '', 'capacity': '1073741824', 'uuid': '43cc29d0-6919-4493-95fe-6d58f97acdfc', 'truesize': '20480', 'type': 'PREALLOCATED'}}
Thread-478::DEBUG::2020-03-18 09:39:48,588::task::1191::Storage.TaskManager.Task::(prepare) Task=`00ffc02c-e764-416b-85ea-e8cc8944aefb`::finished: {'info': {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': 'HostedEngineConfigurationImage', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': 'a667b14a-1f92-40f1-8379-210e0c42fc26', 'ctime': '1581543997', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '20480', 'children': [], 'pool': '', 'capacity': '1073741824', 'uuid': '43cc29d0-6919-4493-95fe-6d58f97acdfc', 'truesize': '20480', 'type': 'PREALLOCATED'}}
Thread-478::DEBUG::2020-03-18 09:39:48,588::task::595::Storage.TaskManager.Task::(_updateState) Task=`00ffc02c-e764-416b-85ea-e8cc8944aefb`::moving from state preparing -> state finished
Thread-478::DEBUG::2020-03-18 09:39:48,588::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-478::DEBUG::2020-03-18 09:39:48,588::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-478::DEBUG::2020-03-18 09:39:48,588::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-478::DEBUG::2020-03-18 09:39:48,588::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-478::DEBUG::2020-03-18 09:39:48,588::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-478::DEBUG::2020-03-18 09:39:48,588::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-478::DEBUG::2020-03-18 09:39:48,589::task::993::Storage.TaskManager.Task::(_decref) Task=`00ffc02c-e764-416b-85ea-e8cc8944aefb`::ref 0 aborting False
Thread-478::INFO::2020-03-18 09:39:48,589::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42690 stopped
Reactor thread::INFO::2020-03-18 09:39:48,590::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42692
Reactor thread::DEBUG::2020-03-18 09:39:48,593::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,593::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42692
Reactor thread::DEBUG::2020-03-18 09:39:48,593::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42692)
BindingXMLRPC::INFO::2020-03-18 09:39:48,593::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42692
Thread-479::INFO::2020-03-18 09:39:48,593::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42692 started
Thread-479::DEBUG::2020-03-18 09:39:48,594::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-479::DEBUG::2020-03-18 09:39:48,594::task::595::Storage.TaskManager.Task::(_updateState) Task=`34f20abc-8aef-4083-a8b9-86c949d7c21e`::moving from state init -> state preparing
Thread-479::INFO::2020-03-18 09:39:48,594::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumesList(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='00ad81a7-5637-40ff-8635-c039347f69ee', options=None)
Thread-479::DEBUG::2020-03-18 09:39:48,594::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`9f38dff1-8a79-4c6d-8545-24be38b5f3fb`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3291' at 'getVolumesList'
Thread-479::DEBUG::2020-03-18 09:39:48,594::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-479::DEBUG::2020-03-18 09:39:48,594::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-479::DEBUG::2020-03-18 09:39:48,594::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`9f38dff1-8a79-4c6d-8545-24be38b5f3fb`::Granted request
Thread-479::DEBUG::2020-03-18 09:39:48,594::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`34f20abc-8aef-4083-a8b9-86c949d7c21e`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-479::DEBUG::2020-03-18 09:39:48,594::task::993::Storage.TaskManager.Task::(_decref) Task=`34f20abc-8aef-4083-a8b9-86c949d7c21e`::ref 1 aborting False
Thread-479::INFO::2020-03-18 09:39:48,596::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumesList, Return response: {'uuidlist': [u'ec87b10a-e601-44bc-bd87-fbe6de274cd4']}
Thread-479::DEBUG::2020-03-18 09:39:48,596::task::1191::Storage.TaskManager.Task::(prepare) Task=`34f20abc-8aef-4083-a8b9-86c949d7c21e`::finished: {'uuidlist': [u'ec87b10a-e601-44bc-bd87-fbe6de274cd4']}
Thread-479::DEBUG::2020-03-18 09:39:48,596::task::595::Storage.TaskManager.Task::(_updateState) Task=`34f20abc-8aef-4083-a8b9-86c949d7c21e`::moving from state preparing -> state finished
Thread-479::DEBUG::2020-03-18 09:39:48,597::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-479::DEBUG::2020-03-18 09:39:48,597::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-479::DEBUG::2020-03-18 09:39:48,597::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-479::DEBUG::2020-03-18 09:39:48,597::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-479::DEBUG::2020-03-18 09:39:48,597::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-479::DEBUG::2020-03-18 09:39:48,597::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-479::DEBUG::2020-03-18 09:39:48,597::task::993::Storage.TaskManager.Task::(_decref) Task=`34f20abc-8aef-4083-a8b9-86c949d7c21e`::ref 0 aborting False
Thread-479::INFO::2020-03-18 09:39:48,597::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42692 stopped
Reactor thread::INFO::2020-03-18 09:39:48,598::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42694
Reactor thread::DEBUG::2020-03-18 09:39:48,601::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,601::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42694
Reactor thread::DEBUG::2020-03-18 09:39:48,601::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42694)
BindingXMLRPC::INFO::2020-03-18 09:39:48,601::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42694
Thread-480::INFO::2020-03-18 09:39:48,601::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42694 started
Thread-480::DEBUG::2020-03-18 09:39:48,602::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-480::DEBUG::2020-03-18 09:39:48,602::task::595::Storage.TaskManager.Task::(_updateState) Task=`4b80c7cf-9e00-46fa-bbbf-cd562fa8bd0e`::moving from state init -> state preparing
Thread-480::INFO::2020-03-18 09:39:48,602::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumeInfo(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='00ad81a7-5637-40ff-8635-c039347f69ee', volUUID='ec87b10a-e601-44bc-bd87-fbe6de274cd4', options=None)
Thread-480::DEBUG::2020-03-18 09:39:48,602::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`e11c8a21-42f1-40bc-8d40-5ebc2f5dc5ce`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3159' at 'getVolumeInfo'
Thread-480::DEBUG::2020-03-18 09:39:48,602::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-480::DEBUG::2020-03-18 09:39:48,602::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-480::DEBUG::2020-03-18 09:39:48,602::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`e11c8a21-42f1-40bc-8d40-5ebc2f5dc5ce`::Granted request
Thread-480::DEBUG::2020-03-18 09:39:48,603::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`4b80c7cf-9e00-46fa-bbbf-cd562fa8bd0e`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-480::DEBUG::2020-03-18 09:39:48,603::task::993::Storage.TaskManager.Task::(_decref) Task=`4b80c7cf-9e00-46fa-bbbf-cd562fa8bd0e`::ref 1 aborting False
Thread-480::DEBUG::2020-03-18 09:39:48,604::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for ec87b10a-e601-44bc-bd87-fbe6de274cd4
Thread-480::INFO::2020-03-18 09:39:48,604::volume::915::Storage.Volume::(getInfo) Info request: sdUUID=331e6287-61df-48dd-9733-a8ad236750b1 imgUUID=00ad81a7-5637-40ff-8635-c039347f69ee volUUID = ec87b10a-e601-44bc-bd87-fbe6de274cd4
Thread-480::INFO::2020-03-18 09:39:48,607::volume::943::Storage.Volume::(getInfo) 331e6287-61df-48dd-9733-a8ad236750b1/00ad81a7-5637-40ff-8635-c039347f69ee/ec87b10a-e601-44bc-bd87-fbe6de274cd4 info is {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': '{"Updated":true,"Size":10240,"Last Updated":"Tue Mar 03 10:33:30 PST 2020","Storage Domains":[{"uuid":"331e6287-61df-48dd-9733-a8ad236750b1"}],"Disk Description":"OVF_STORE"}', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': '00ad81a7-5637-40ff-8635-c039347f69ee', 'ctime': '1583260407', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '10240', 'children': [], 'pool': '', 'capacity': '134217728', 'uuid': 'ec87b10a-e601-44bc-bd87-fbe6de274cd4', 'truesize': '12288', 'type': 'PREALLOCATED'}
Thread-480::INFO::2020-03-18 09:39:48,607::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumeInfo, Return response: {'info': {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': '{"Updated":true,"Size":10240,"Last Updated":"Tue Mar 03 10:33:30 PST 2020","Storage Domains":[{"uuid":"331e6287-61df-48dd-9733-a8ad236750b1"}],"Disk Description":"OVF_STORE"}', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': '00ad81a7-5637-40ff-8635-c039347f69ee', 'ctime': '1583260407', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '10240', 'children': [], 'pool': '', 'capacity': '134217728', 'uuid': 'ec87b10a-e601-44bc-bd87-fbe6de274cd4', 'truesize': '12288', 'type': 'PREALLOCATED'}}
Thread-480::DEBUG::2020-03-18 09:39:48,607::task::1191::Storage.TaskManager.Task::(prepare) Task=`4b80c7cf-9e00-46fa-bbbf-cd562fa8bd0e`::finished: {'info': {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': '{"Updated":true,"Size":10240,"Last Updated":"Tue Mar 03 10:33:30 PST 2020","Storage Domains":[{"uuid":"331e6287-61df-48dd-9733-a8ad236750b1"}],"Disk Description":"OVF_STORE"}', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': '00ad81a7-5637-40ff-8635-c039347f69ee', 'ctime': '1583260407', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '10240', 'children': [], 'pool': '', 'capacity': '134217728', 'uuid': 'ec87b10a-e601-44bc-bd87-fbe6de274cd4', 'truesize': '12288', 'type': 'PREALLOCATED'}}
Thread-480::DEBUG::2020-03-18 09:39:48,607::task::595::Storage.TaskManager.Task::(_updateState) Task=`4b80c7cf-9e00-46fa-bbbf-cd562fa8bd0e`::moving from state preparing -> state finished
Thread-480::DEBUG::2020-03-18 09:39:48,607::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-480::DEBUG::2020-03-18 09:39:48,607::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-480::DEBUG::2020-03-18 09:39:48,607::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-480::DEBUG::2020-03-18 09:39:48,607::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-480::DEBUG::2020-03-18 09:39:48,607::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-480::DEBUG::2020-03-18 09:39:48,607::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-480::DEBUG::2020-03-18 09:39:48,607::task::993::Storage.TaskManager.Task::(_decref) Task=`4b80c7cf-9e00-46fa-bbbf-cd562fa8bd0e`::ref 0 aborting False
Thread-480::INFO::2020-03-18 09:39:48,608::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42694 stopped
Reactor thread::INFO::2020-03-18 09:39:48,609::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42696
Reactor thread::DEBUG::2020-03-18 09:39:48,612::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,612::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42696
Reactor thread::DEBUG::2020-03-18 09:39:48,612::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42696)
BindingXMLRPC::INFO::2020-03-18 09:39:48,612::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42696
Thread-481::INFO::2020-03-18 09:39:48,613::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42696 started
Thread-481::DEBUG::2020-03-18 09:39:48,613::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-481::DEBUG::2020-03-18 09:39:48,613::task::595::Storage.TaskManager.Task::(_updateState) Task=`2e2c57c4-a55c-4bab-bf5e-2f534e3677bc`::moving from state init -> state preparing
Thread-481::INFO::2020-03-18 09:39:48,613::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumesList(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='50cc8c63-9929-4cbc-aec8-f1d196874b72', options=None)
Thread-481::DEBUG::2020-03-18 09:39:48,613::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`b17a1185-64d3-4916-8bc9-a0a6054efd6a`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3291' at 'getVolumesList'
Thread-481::DEBUG::2020-03-18 09:39:48,613::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-481::DEBUG::2020-03-18 09:39:48,613::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-481::DEBUG::2020-03-18 09:39:48,613::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`b17a1185-64d3-4916-8bc9-a0a6054efd6a`::Granted request
Thread-481::DEBUG::2020-03-18 09:39:48,614::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`2e2c57c4-a55c-4bab-bf5e-2f534e3677bc`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-481::DEBUG::2020-03-18 09:39:48,614::task::993::Storage.TaskManager.Task::(_decref) Task=`2e2c57c4-a55c-4bab-bf5e-2f534e3677bc`::ref 1 aborting False
Thread-481::INFO::2020-03-18 09:39:48,615::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumesList, Return response: {'uuidlist': [u'e8a4e709-0c98-4f1d-a42f-c5f0d499dca0']}
Thread-481::DEBUG::2020-03-18 09:39:48,615::task::1191::Storage.TaskManager.Task::(prepare) Task=`2e2c57c4-a55c-4bab-bf5e-2f534e3677bc`::finished: {'uuidlist': [u'e8a4e709-0c98-4f1d-a42f-c5f0d499dca0']}
Thread-481::DEBUG::2020-03-18 09:39:48,615::task::595::Storage.TaskManager.Task::(_updateState) Task=`2e2c57c4-a55c-4bab-bf5e-2f534e3677bc`::moving from state preparing -> state finished
Thread-481::DEBUG::2020-03-18 09:39:48,616::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-481::DEBUG::2020-03-18 09:39:48,616::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-481::DEBUG::2020-03-18 09:39:48,616::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-481::DEBUG::2020-03-18 09:39:48,616::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-481::DEBUG::2020-03-18 09:39:48,616::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-481::DEBUG::2020-03-18 09:39:48,616::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-481::DEBUG::2020-03-18 09:39:48,616::task::993::Storage.TaskManager.Task::(_decref) Task=`2e2c57c4-a55c-4bab-bf5e-2f534e3677bc`::ref 0 aborting False
Thread-481::INFO::2020-03-18 09:39:48,616::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42696 stopped
Reactor thread::INFO::2020-03-18 09:39:48,617::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42698
Reactor thread::DEBUG::2020-03-18 09:39:48,620::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,620::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42698
Reactor thread::DEBUG::2020-03-18 09:39:48,620::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42698)
BindingXMLRPC::INFO::2020-03-18 09:39:48,620::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42698
Thread-482::INFO::2020-03-18 09:39:48,620::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42698 started
Thread-482::DEBUG::2020-03-18 09:39:48,621::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-482::DEBUG::2020-03-18 09:39:48,621::task::595::Storage.TaskManager.Task::(_updateState) Task=`a684e482-3427-417b-8b21-54d1d5565f5e`::moving from state init -> state preparing
Thread-482::INFO::2020-03-18 09:39:48,621::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumeInfo(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='50cc8c63-9929-4cbc-aec8-f1d196874b72', volUUID='e8a4e709-0c98-4f1d-a42f-c5f0d499dca0', options=None)
Thread-482::DEBUG::2020-03-18 09:39:48,621::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`2f072630-b931-4da7-9829-4722fa60b0b4`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3159' at 'getVolumeInfo'
Thread-482::DEBUG::2020-03-18 09:39:48,621::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-482::DEBUG::2020-03-18 09:39:48,621::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-482::DEBUG::2020-03-18 09:39:48,621::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`2f072630-b931-4da7-9829-4722fa60b0b4`::Granted request
Thread-482::DEBUG::2020-03-18 09:39:48,621::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`a684e482-3427-417b-8b21-54d1d5565f5e`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-482::DEBUG::2020-03-18 09:39:48,621::task::993::Storage.TaskManager.Task::(_decref) Task=`a684e482-3427-417b-8b21-54d1d5565f5e`::ref 1 aborting False
Thread-482::DEBUG::2020-03-18 09:39:48,622::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for e8a4e709-0c98-4f1d-a42f-c5f0d499dca0
Thread-482::INFO::2020-03-18 09:39:48,622::volume::915::Storage.Volume::(getInfo) Info request: sdUUID=331e6287-61df-48dd-9733-a8ad236750b1 imgUUID=50cc8c63-9929-4cbc-aec8-f1d196874b72 volUUID = e8a4e709-0c98-4f1d-a42f-c5f0d499dca0
Thread-482::INFO::2020-03-18 09:39:48,625::volume::943::Storage.Volume::(getInfo) 331e6287-61df-48dd-9733-a8ad236750b1/50cc8c63-9929-4cbc-aec8-f1d196874b72/e8a4e709-0c98-4f1d-a42f-c5f0d499dca0 info is {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': 'hosted-engine.metadata', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': '50cc8c63-9929-4cbc-aec8-f1d196874b72', 'ctime': '1581543999', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1028096', 'children': [], 'pool': '', 'capacity': '1028096', 'uuid': 'e8a4e709-0c98-4f1d-a42f-c5f0d499dca0', 'truesize': '1032192', 'type': 'PREALLOCATED'}
Thread-482::INFO::2020-03-18 09:39:48,625::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumeInfo, Return response: {'info': {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': 'hosted-engine.metadata', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': '50cc8c63-9929-4cbc-aec8-f1d196874b72', 'ctime': '1581543999', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1028096', 'children': [], 'pool': '', 'capacity': '1028096', 'uuid': 'e8a4e709-0c98-4f1d-a42f-c5f0d499dca0', 'truesize': '1032192', 'type': 'PREALLOCATED'}}
Thread-482::DEBUG::2020-03-18 09:39:48,625::task::1191::Storage.TaskManager.Task::(prepare) Task=`a684e482-3427-417b-8b21-54d1d5565f5e`::finished: {'info': {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': 'hosted-engine.metadata', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': '50cc8c63-9929-4cbc-aec8-f1d196874b72', 'ctime': '1581543999', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1028096', 'children': [], 'pool': '', 'capacity': '1028096', 'uuid': 'e8a4e709-0c98-4f1d-a42f-c5f0d499dca0', 'truesize': '1032192', 'type': 'PREALLOCATED'}}
Thread-482::DEBUG::2020-03-18 09:39:48,625::task::595::Storage.TaskManager.Task::(_updateState) Task=`a684e482-3427-417b-8b21-54d1d5565f5e`::moving from state preparing -> state finished
Thread-482::DEBUG::2020-03-18 09:39:48,625::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-482::DEBUG::2020-03-18 09:39:48,625::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-482::DEBUG::2020-03-18 09:39:48,625::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-482::DEBUG::2020-03-18 09:39:48,625::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-482::DEBUG::2020-03-18 09:39:48,625::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-482::DEBUG::2020-03-18 09:39:48,625::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-482::DEBUG::2020-03-18 09:39:48,625::task::993::Storage.TaskManager.Task::(_decref) Task=`a684e482-3427-417b-8b21-54d1d5565f5e`::ref 0 aborting False
Thread-482::INFO::2020-03-18 09:39:48,626::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42698 stopped
Reactor thread::INFO::2020-03-18 09:39:48,626::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42700
Reactor thread::DEBUG::2020-03-18 09:39:48,630::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,630::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42700
Reactor thread::DEBUG::2020-03-18 09:39:48,630::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42700)
BindingXMLRPC::INFO::2020-03-18 09:39:48,630::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42700
Thread-483::INFO::2020-03-18 09:39:48,630::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42700 started
Thread-483::DEBUG::2020-03-18 09:39:48,631::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-483::DEBUG::2020-03-18 09:39:48,631::task::595::Storage.TaskManager.Task::(_updateState) Task=`badbadac-bf40-4a17-9309-90bcafab3508`::moving from state init -> state preparing
Thread-483::INFO::2020-03-18 09:39:48,631::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumesList(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='f470821b-d9a9-4835-8ed9-2ab358e06b41', options=None)
Thread-483::DEBUG::2020-03-18 09:39:48,631::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`7fdf501c-e758-4e3c-8a63-b801f2f28777`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3291' at 'getVolumesList'
Thread-483::DEBUG::2020-03-18 09:39:48,631::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-483::DEBUG::2020-03-18 09:39:48,631::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-483::DEBUG::2020-03-18 09:39:48,631::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`7fdf501c-e758-4e3c-8a63-b801f2f28777`::Granted request
Thread-483::DEBUG::2020-03-18 09:39:48,631::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`badbadac-bf40-4a17-9309-90bcafab3508`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-483::DEBUG::2020-03-18 09:39:48,631::task::993::Storage.TaskManager.Task::(_decref) Task=`badbadac-bf40-4a17-9309-90bcafab3508`::ref 1 aborting False
Thread-483::INFO::2020-03-18 09:39:48,633::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumesList, Return response: {'uuidlist': [u'40f29b86-0eaa-4e64-a670-69ed7bc1011d']}
Thread-483::DEBUG::2020-03-18 09:39:48,633::task::1191::Storage.TaskManager.Task::(prepare) Task=`badbadac-bf40-4a17-9309-90bcafab3508`::finished: {'uuidlist': [u'40f29b86-0eaa-4e64-a670-69ed7bc1011d']}
Thread-483::DEBUG::2020-03-18 09:39:48,633::task::595::Storage.TaskManager.Task::(_updateState) Task=`badbadac-bf40-4a17-9309-90bcafab3508`::moving from state preparing -> state finished
Thread-483::DEBUG::2020-03-18 09:39:48,633::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-483::DEBUG::2020-03-18 09:39:48,633::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-483::DEBUG::2020-03-18 09:39:48,633::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-483::DEBUG::2020-03-18 09:39:48,633::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-483::DEBUG::2020-03-18 09:39:48,633::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-483::DEBUG::2020-03-18 09:39:48,634::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-483::DEBUG::2020-03-18 09:39:48,634::task::993::Storage.TaskManager.Task::(_decref) Task=`badbadac-bf40-4a17-9309-90bcafab3508`::ref 0 aborting False
Thread-483::INFO::2020-03-18 09:39:48,634::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42700 stopped
Reactor thread::INFO::2020-03-18 09:39:48,634::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42702
Reactor thread::DEBUG::2020-03-18 09:39:48,638::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,638::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42702
Reactor thread::DEBUG::2020-03-18 09:39:48,638::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42702)
BindingXMLRPC::INFO::2020-03-18 09:39:48,638::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42702
Thread-484::INFO::2020-03-18 09:39:48,638::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42702 started
Thread-484::DEBUG::2020-03-18 09:39:48,638::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-484::DEBUG::2020-03-18 09:39:48,639::task::595::Storage.TaskManager.Task::(_updateState) Task=`3212d32a-f088-415d-853c-d4a6c203b2fb`::moving from state init -> state preparing
Thread-484::INFO::2020-03-18 09:39:48,639::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumeInfo(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='f470821b-d9a9-4835-8ed9-2ab358e06b41', volUUID='40f29b86-0eaa-4e64-a670-69ed7bc1011d', options=None)
Thread-484::DEBUG::2020-03-18 09:39:48,639::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`0b02bb3e-2a26-4d99-ad15-a329eeae426d`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3159' at 'getVolumeInfo'
Thread-484::DEBUG::2020-03-18 09:39:48,639::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-484::DEBUG::2020-03-18 09:39:48,639::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-484::DEBUG::2020-03-18 09:39:48,639::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`0b02bb3e-2a26-4d99-ad15-a329eeae426d`::Granted request
Thread-484::DEBUG::2020-03-18 09:39:48,639::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`3212d32a-f088-415d-853c-d4a6c203b2fb`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-484::DEBUG::2020-03-18 09:39:48,639::task::993::Storage.TaskManager.Task::(_decref) Task=`3212d32a-f088-415d-853c-d4a6c203b2fb`::ref 1 aborting False
Thread-484::DEBUG::2020-03-18 09:39:48,640::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 40f29b86-0eaa-4e64-a670-69ed7bc1011d
Thread-484::INFO::2020-03-18 09:39:48,640::volume::915::Storage.Volume::(getInfo) Info request: sdUUID=331e6287-61df-48dd-9733-a8ad236750b1 imgUUID=f470821b-d9a9-4835-8ed9-2ab358e06b41 volUUID = 40f29b86-0eaa-4e64-a670-69ed7bc1011d
Thread-484::INFO::2020-03-18 09:39:48,642::volume::943::Storage.Volume::(getInfo) 331e6287-61df-48dd-9733-a8ad236750b1/f470821b-d9a9-4835-8ed9-2ab358e06b41/40f29b86-0eaa-4e64-a670-69ed7bc1011d info is {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': 'Hosted Engine Image', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': 'f470821b-d9a9-4835-8ed9-2ab358e06b41', 'ctime': '1581544001', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '10737418240', 'children': [], 'pool': '', 'capacity': '10737418240', 'uuid': '40f29b86-0eaa-4e64-a670-69ed7bc1011d', 'truesize': '4488503296', 'type': 'SPARSE'}
Thread-484::INFO::2020-03-18 09:39:48,642::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumeInfo, Return response: {'info': {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': 'Hosted Engine Image', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': 'f470821b-d9a9-4835-8ed9-2ab358e06b41', 'ctime': '1581544001', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '10737418240', 'children': [], 'pool': '', 'capacity': '10737418240', 'uuid': '40f29b86-0eaa-4e64-a670-69ed7bc1011d', 'truesize': '4488503296', 'type': 'SPARSE'}}
Thread-484::DEBUG::2020-03-18 09:39:48,643::task::1191::Storage.TaskManager.Task::(prepare) Task=`3212d32a-f088-415d-853c-d4a6c203b2fb`::finished: {'info': {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': 'Hosted Engine Image', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': 'f470821b-d9a9-4835-8ed9-2ab358e06b41', 'ctime': '1581544001', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '10737418240', 'children': [], 'pool': '', 'capacity': '10737418240', 'uuid': '40f29b86-0eaa-4e64-a670-69ed7bc1011d', 'truesize': '4488503296', 'type': 'SPARSE'}}
Thread-484::DEBUG::2020-03-18 09:39:48,643::task::595::Storage.TaskManager.Task::(_updateState) Task=`3212d32a-f088-415d-853c-d4a6c203b2fb`::moving from state preparing -> state finished
Thread-484::DEBUG::2020-03-18 09:39:48,643::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-484::DEBUG::2020-03-18 09:39:48,643::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-484::DEBUG::2020-03-18 09:39:48,643::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-484::DEBUG::2020-03-18 09:39:48,643::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-484::DEBUG::2020-03-18 09:39:48,643::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-484::DEBUG::2020-03-18 09:39:48,643::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-484::DEBUG::2020-03-18 09:39:48,643::task::993::Storage.TaskManager.Task::(_decref) Task=`3212d32a-f088-415d-853c-d4a6c203b2fb`::ref 0 aborting False
Thread-484::INFO::2020-03-18 09:39:48,644::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42702 stopped
Reactor thread::INFO::2020-03-18 09:39:48,644::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42704
Reactor thread::DEBUG::2020-03-18 09:39:48,647::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,648::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42704
Reactor thread::DEBUG::2020-03-18 09:39:48,648::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42704)
BindingXMLRPC::INFO::2020-03-18 09:39:48,648::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42704
Thread-485::INFO::2020-03-18 09:39:48,648::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42704 started
Thread-485::DEBUG::2020-03-18 09:39:48,648::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-485::DEBUG::2020-03-18 09:39:48,648::task::595::Storage.TaskManager.Task::(_updateState) Task=`bf8ef3e1-bcee-430c-b9c0-a8a80768dbfc`::moving from state init -> state preparing
Thread-485::INFO::2020-03-18 09:39:48,648::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumesList(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='63491779-c7cf-434c-84c9-7878694a8946', options=None)
Thread-485::DEBUG::2020-03-18 09:39:48,649::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`ad813fca-5f8d-4197-874b-7feb262fe810`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3291' at 'getVolumesList'
Thread-485::DEBUG::2020-03-18 09:39:48,649::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-485::DEBUG::2020-03-18 09:39:48,649::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-485::DEBUG::2020-03-18 09:39:48,649::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`ad813fca-5f8d-4197-874b-7feb262fe810`::Granted request
Thread-485::DEBUG::2020-03-18 09:39:48,649::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`bf8ef3e1-bcee-430c-b9c0-a8a80768dbfc`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-485::DEBUG::2020-03-18 09:39:48,649::task::993::Storage.TaskManager.Task::(_decref) Task=`bf8ef3e1-bcee-430c-b9c0-a8a80768dbfc`::ref 1 aborting False
Thread-485::INFO::2020-03-18 09:39:48,651::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumesList, Return response: {'uuidlist': [u'2f086a70-e97d-4161-a232-1268bb3145de']}
Thread-485::DEBUG::2020-03-18 09:39:48,651::task::1191::Storage.TaskManager.Task::(prepare) Task=`bf8ef3e1-bcee-430c-b9c0-a8a80768dbfc`::finished: {'uuidlist': [u'2f086a70-e97d-4161-a232-1268bb3145de']}
Thread-485::DEBUG::2020-03-18 09:39:48,651::task::595::Storage.TaskManager.Task::(_updateState) Task=`bf8ef3e1-bcee-430c-b9c0-a8a80768dbfc`::moving from state preparing -> state finished
Thread-485::DEBUG::2020-03-18 09:39:48,651::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-485::DEBUG::2020-03-18 09:39:48,651::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-485::DEBUG::2020-03-18 09:39:48,651::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-485::DEBUG::2020-03-18 09:39:48,651::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-485::DEBUG::2020-03-18 09:39:48,651::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-485::DEBUG::2020-03-18 09:39:48,651::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-485::DEBUG::2020-03-18 09:39:48,651::task::993::Storage.TaskManager.Task::(_decref) Task=`bf8ef3e1-bcee-430c-b9c0-a8a80768dbfc`::ref 0 aborting False
Thread-485::INFO::2020-03-18 09:39:48,652::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42704 stopped
Reactor thread::INFO::2020-03-18 09:39:48,652::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42706
Reactor thread::DEBUG::2020-03-18 09:39:48,656::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,656::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42706
Reactor thread::DEBUG::2020-03-18 09:39:48,656::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42706)
BindingXMLRPC::INFO::2020-03-18 09:39:48,656::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42706
Thread-486::INFO::2020-03-18 09:39:48,656::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42706 started
Thread-486::DEBUG::2020-03-18 09:39:48,656::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-486::DEBUG::2020-03-18 09:39:48,656::task::595::Storage.TaskManager.Task::(_updateState) Task=`d72d37c8-3a4d-4afc-bc50-b64a8359362e`::moving from state init -> state preparing
Thread-486::INFO::2020-03-18 09:39:48,657::logUtils::48::dispatcher::(wrapper) Run and protect: getVolumeInfo(sdUUID='331e6287-61df-48dd-9733-a8ad236750b1', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='63491779-c7cf-434c-84c9-7878694a8946', volUUID='2f086a70-e97d-4161-a232-1268bb3145de', options=None)
Thread-486::DEBUG::2020-03-18 09:39:48,657::resourceManager::199::Storage.ResourceManager.Request::(__init__) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`bab61764-0f3d-491c-9009-d8d47024878a`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3159' at 'getVolumeInfo'
Thread-486::DEBUG::2020-03-18 09:39:48,657::resourceManager::545::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' for lock type 'shared'
Thread-486::DEBUG::2020-03-18 09:39:48,657::resourceManager::604::Storage.ResourceManager::(registerResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free. Now locking as 'shared' (1 active user)
Thread-486::DEBUG::2020-03-18 09:39:48,657::resourceManager::239::Storage.ResourceManager.Request::(grant) ResName=`Storage.331e6287-61df-48dd-9733-a8ad236750b1`ReqID=`bab61764-0f3d-491c-9009-d8d47024878a`::Granted request
Thread-486::DEBUG::2020-03-18 09:39:48,657::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`d72d37c8-3a4d-4afc-bc50-b64a8359362e`::_resourcesAcquired: Storage.331e6287-61df-48dd-9733-a8ad236750b1 (shared)
Thread-486::DEBUG::2020-03-18 09:39:48,657::task::993::Storage.TaskManager.Task::(_decref) Task=`d72d37c8-3a4d-4afc-bc50-b64a8359362e`::ref 1 aborting False
Thread-486::DEBUG::2020-03-18 09:39:48,658::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 2f086a70-e97d-4161-a232-1268bb3145de
Thread-486::INFO::2020-03-18 09:39:48,658::volume::915::Storage.Volume::(getInfo) Info request: sdUUID=331e6287-61df-48dd-9733-a8ad236750b1 imgUUID=63491779-c7cf-434c-84c9-7878694a8946 volUUID = 2f086a70-e97d-4161-a232-1268bb3145de
Thread-486::INFO::2020-03-18 09:39:48,660::volume::943::Storage.Volume::(getInfo) 331e6287-61df-48dd-9733-a8ad236750b1/63491779-c7cf-434c-84c9-7878694a8946/2f086a70-e97d-4161-a232-1268bb3145de info is {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': '{"Updated":true,"Size":10240,"Last Updated":"Tue Mar 03 10:33:30 PST 2020","Storage Domains":[{"uuid":"331e6287-61df-48dd-9733-a8ad236750b1"}],"Disk Description":"OVF_STORE"}', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': '63491779-c7cf-434c-84c9-7878694a8946', 'ctime': '1583260407', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '10240', 'children': [], 'pool': '', 'capacity': '134217728', 'uuid': '2f086a70-e97d-4161-a232-1268bb3145de', 'truesize': '12288', 'type': 'PREALLOCATED'}
Thread-486::INFO::2020-03-18 09:39:48,660::logUtils::51::dispatcher::(wrapper) Run and protect: getVolumeInfo, Return response: {'info': {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': '{"Updated":true,"Size":10240,"Last Updated":"Tue Mar 03 10:33:30 PST 2020","Storage Domains":[{"uuid":"331e6287-61df-48dd-9733-a8ad236750b1"}],"Disk Description":"OVF_STORE"}', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': '63491779-c7cf-434c-84c9-7878694a8946', 'ctime': '1583260407', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '10240', 'children': [], 'pool': '', 'capacity': '134217728', 'uuid': '2f086a70-e97d-4161-a232-1268bb3145de', 'truesize': '12288', 'type': 'PREALLOCATED'}}
Thread-486::DEBUG::2020-03-18 09:39:48,660::task::1191::Storage.TaskManager.Task::(prepare) Task=`d72d37c8-3a4d-4afc-bc50-b64a8359362e`::finished: {'info': {'status': 'OK', 'domain': '331e6287-61df-48dd-9733-a8ad236750b1', 'voltype': 'LEAF', 'description': '{"Updated":true,"Size":10240,"Last Updated":"Tue Mar 03 10:33:30 PST 2020","Storage Domains":[{"uuid":"331e6287-61df-48dd-9733-a8ad236750b1"}],"Disk Description":"OVF_STORE"}', 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW', 'image': '63491779-c7cf-434c-84c9-7878694a8946', 'ctime': '1583260407', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '10240', 'children': [], 'pool': '', 'capacity': '134217728', 'uuid': '2f086a70-e97d-4161-a232-1268bb3145de', 'truesize': '12288', 'type': 'PREALLOCATED'}}
Thread-486::DEBUG::2020-03-18 09:39:48,660::task::595::Storage.TaskManager.Task::(_updateState) Task=`d72d37c8-3a4d-4afc-bc50-b64a8359362e`::moving from state preparing -> state finished
Thread-486::DEBUG::2020-03-18 09:39:48,661::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.331e6287-61df-48dd-9733-a8ad236750b1': < ResourceRef 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', isValid: 'True' obj: 'None'>}
Thread-486::DEBUG::2020-03-18 09:39:48,661::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-486::DEBUG::2020-03-18 09:39:48,661::resourceManager::619::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1'
Thread-486::DEBUG::2020-03-18 09:39:48,661::resourceManager::638::Storage.ResourceManager::(releaseResource) Released resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' (0 active users)
Thread-486::DEBUG::2020-03-18 09:39:48,661::resourceManager::644::Storage.ResourceManager::(releaseResource) Resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1' is free, finding out if anyone is waiting for it.
Thread-486::DEBUG::2020-03-18 09:39:48,661::resourceManager::652::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.331e6287-61df-48dd-9733-a8ad236750b1', Clearing records.
Thread-486::DEBUG::2020-03-18 09:39:48,661::task::993::Storage.TaskManager.Task::(_decref) Task=`d72d37c8-3a4d-4afc-bc50-b64a8359362e`::ref 0 aborting False
Thread-486::INFO::2020-03-18 09:39:48,662::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42706 stopped
Reactor thread::INFO::2020-03-18 09:39:48,686::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:42708
Reactor thread::DEBUG::2020-03-18 09:39:48,690::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2020-03-18 09:39:48,690::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:42708
Reactor thread::DEBUG::2020-03-18 09:39:48,690::bindingxmlrpc::1302::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 42708)
BindingXMLRPC::INFO::2020-03-18 09:39:48,690::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:42708
Thread-487::INFO::2020-03-18 09:39:48,690::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42708 started
Thread-487::DEBUG::2020-03-18 09:39:48,690::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-487::DEBUG::2020-03-18 09:39:48,691::task::595::Storage.TaskManager.Task::(_updateState) Task=`b3a81320-5e4b-49c1-bd67-546012d57a9a`::moving from state init -> state preparing
Thread-487::INFO::2020-03-18 09:39:48,691::logUtils::48::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-487::INFO::2020-03-18 09:39:48,691::logUtils::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'331e6287-61df-48dd-9733-a8ad236750b1': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000279927', 'lastCheck': '4.6', 'valid': True}}
Thread-487::DEBUG::2020-03-18 09:39:48,691::task::1191::Storage.TaskManager.Task::(prepare) Task=`b3a81320-5e4b-49c1-bd67-546012d57a9a`::finished: {'331e6287-61df-48dd-9733-a8ad236750b1': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000279927', 'lastCheck': '4.6', 'valid': True}}
Thread-487::DEBUG::2020-03-18 09:39:48,691::task::595::Storage.TaskManager.Task::(_updateState) Task=`b3a81320-5e4b-49c1-bd67-546012d57a9a`::moving from state preparing -> state finished
Thread-487::DEBUG::2020-03-18 09:39:48,691::resourceManager::943::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-487::DEBUG::2020-03-18 09:39:48,691::resourceManager::980::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-487::DEBUG::2020-03-18 09:39:48,691::task::993::Storage.TaskManager.Task::(_decref) Task=`b3a81320-5e4b-49c1-bd67-546012d57a9a`::ref 0 aborting False
Thread-487::INFO::2020-03-18 09:39:48,692::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:42708 stopped
LDAP
by Nicholas Emmerling
Could you please provide any documentation you have on configuring oVirt to work with LDAP? Ideally it would cover the guest VMs as well as the hosts/nodes themselves. Thank you.
nicholas.emmerling(a)me.com
Sent from my iPhone
Error when adding storage domain using VLAN tagging
by briandumont@gmail.com
Hello,
First post!
I'm trying to add an NFS storage domain with VLAN tagging enabled on the logical network. It errors out with "Error while executing action Add Storage Connection: Problem while trying to mount target".
A couple of notes: I am able to attach to this NFS export from RHV without VLAN tagging, and I have also verified that I can attach to this NFS export with VLAN tagging using a non-oVirt RHEL host.
Here's my config:
- Logical network created at Data Center
- Enable VLAN Tagging - ID 15
- Not a VM Network
- Cluster status
- Network - Up
- Assign all - checked
- Require - checked
- No other boxes checked
- Host status
- Host 1 and Host 2 each show
- Status - up
- Boot protocol - None (same issue with static IPs)
- Link Layer info shows VLAN ID 15 on switch port
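For anyone who wants to try the same operation outside of the UI, the equivalent call through the Python SDK (ovirtsdk4) looks roughly like the sketch below; the engine URL, credentials, host name, NFS address and export path are all placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine API (URL, credentials and CA file are placeholders).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

sds_service = connection.system_service().storage_domains_service()
sds_service.add(
    types.StorageDomain(
        name='nfs-data',
        type=types.StorageDomainType.DATA,
        host=types.Host(name='host1'),        # host that should perform the mount
        storage=types.HostStorage(
            type=types.StorageType.NFS,
            address='10.0.15.10',             # NFS server reachable over the tagged VLAN
            path='/exports/data',
        ),
    )
)

connection.close()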
Appreciate any help!
Brian
storage use after storage live migration
by Rik Theys
Hi,
We are in the process of migrating our VMs from one storage domain to another. Both domains are FC storage.
There are VMs with thin-provisioned disks of 16G that currently only occupy 3G according to the interface. When we live-migrate the disks (with the VM running), I can see that a snapshot is being taken and removed afterwards.
After the storage migration, the occupied disk space on the new storage domain is 6G, even for a VM that hardly has any writes. How can I reclaim this space? I've powered down the VM and run a sparsify on the disk, but this doesn't seem to have any effect.
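For reference, sparsify can also be triggered through the Python SDK (ovirtsdk4), assuming your SDK version exposes the sparsify action on the disk service; a minimal sketch, with placeholder connection details and disk id:

import ovirtsdk4 as sdk

# Connection details and the disk id below are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

disks_service = connection.system_service().disks_service()
disk_service = disks_service.disk_service('123e4567-e89b-12d3-a456-426614174000')

# The VM using the disk was shut down first, as described above.
disk_service.sparsify()

connection.close()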
When I do a storage migration of a VM with a thin-provisioned disk that is down during the migration, the used disk space does not increase. VMs with fully allocated disks also don't seem to exhibit this behavior.
My storage domain now also contains VMs whose occupied storage space is larger than the size of the disk. There are no snapshots listed for those disks. Is there a way to clean up this situation?
Regards,
Rik
--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>
Ovirt API and CLI
by Eugène Ngontang
Hi,
I'm trying to find out whether there is some sort of API or oVirt CLI/SDK that would let me interact with my oVirt VMs and associated resources.
In my architecture, I have an oVirt virtualization host, with a self-hosted engine VM to manage the VMs.
From the host I have the virsh command to list VM status, but this doesn't really let me get into VM management actions like create, delete, get, reboot, or retrieving VM-wide information (IPs, name, disks, ...).
So each time I have to log in to the hosted-engine web admin page to explore the VMs, but I'd really like to work with my oVirt resources from my command line or programmatically.
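Something along these lines with the Python SDK (ovirtsdk4) is roughly what I am after; a minimal sketch, where the engine URL, credentials and CA file are placeholders:

import ovirtsdk4 as sdk

# Connect to the engine API (URL, credentials and CA file are placeholders).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

vms_service = connection.system_service().vms_service()
for vm in vms_service.list():
    # Reported IP addresses are exposed as reported devices of each VM.
    devices = vms_service.vm_service(vm.id).reported_devices_service().list()
    ips = [ip.address for dev in devices for ip in (dev.ips or [])]
    print(vm.name, vm.status, ips)

connection.close()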
The oVirt API documentation I've found is really poor; I don't know whether someone here has already had the same need and found a good solution.
Thanks for your help.
Regards,
Eugène NG
--
LesCDN <http://lescdn.com>
engontang(a)lescdn.com
------------------------------------------------------------
Men need a chief, and the chief needs men! The habit does not make the monk, but when people see you, they judge you!
Orphaned ISO Storage Domain
by Bob Franzke
Greetings all,
Full disclosure: complete oVirt novice here. I inherited an oVirt system and had a complete ovirt-engine failure back in December-January. Because of time constraints and my inexperience with oVirt, I had to resort to hiring consultants to rebuild my oVirt engine from backups. That's a situation I never want to repeat.
Anyway, we were able to piece it together and at least get most functionality back. The previous setup had an ISO storage domain called 'ISO-COLO' that seems to have been hosted on the engine server itself. The engine hostname is 'mydesktop'. We restored the engine from backups I had taken of the SQL DB and various support files using the built-in oVirt backup tool.
So now, when looking into the oVirt console, I see the storage domain listed. It shows a status of 'inactive' in the list of storage domains we have set up. We tried to 'activate' it and the activation fails. The path listed for the domain is mydesktop:/gluster/colo-iso. On the host, however, there is no mountpoint that corresponds to that path:
[root@mydesktop ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 47G 0 47G 0% /dev
tmpfs 47G 12K 47G 1% /dev/shm
tmpfs 47G 131M 47G 1% /run
tmpfs 47G 0 47G 0% /sys/fs/cgroup
/dev/mapper/centos-root 50G 5.4G 45G 11% /
/dev/sda2 1014M 185M 830M 19% /boot
/dev/sda1 200M 12M 189M 6% /boot/efi
/dev/mapper/centos-home 224G 15G 210G 7% /home
tmpfs 9.3G 0 9.3G 0% /run/user/0
The original layout looked like this on the broken engine:
[root@mydesktop ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos_mydesktop-root 50G 27G 20G 58% /
devtmpfs 24G 0 24G 0% /dev
tmpfs 24G 28K 24G 1% /dev/shm
tmpfs 24G 42M 24G 1% /run
tmpfs 24G 0 24G 0% /sys/fs/cgroup
/dev/mapper/centos_mydesktop-home 25G 45M 24G 1% /home
/dev/sdc1 1014M 307M 708M 31% /boot
/dev/mapper/centos_mydesktop-gluster 177G 127G 42G 76% /gluster
tmpfs 4.7G 0 4.7G 0% /run/user/0
So it seems the orphaned storage domain is just pointing to a path that does not exist on the new engine host.
I also noticed some of the hosts are trying to access this storage domain and getting errors:
The error message for connection mydesktop:/gluster/colo-iso returned by
VDSM was: Problem while trying to mount target
3/17/20 10:47:05 AM
Failed to connect Host vm-host-colo-2 to the Storage Domains ISO-Colo.
3/17/20 10:47:05 AM
So it seems the hosts are trying to connect to this storage domain but cannot, because it's not there. None of the files from the original path are available, so I am not even sure what, if anything, we are missing.
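In case it helps, a quick way to dump what the engine has on record for each storage domain is the Python SDK (ovirtsdk4); a minimal sketch, with placeholder connection details:

import ovirtsdk4 as sdk

# Connection details are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

sds_service = connection.system_service().storage_domains_service()
for sd in sds_service.list():
    # For file-based domains the engine records an address and a path.
    storage = sd.storage
    location = '%s:%s' % (storage.address, storage.path) if storage else 'n/a'
    print(sd.name, sd.type, location)

connection.close()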
So what are my options here? Destroy the current ISO domain and recreate it, or somehow provide the correct path on the engine server? Currently the storage space I can use is mounted under /home, which is a different path than the original one. I am not sure whether anything can be done with the disk layout on the engine server at this point to get the gluster path back. Right now we cannot attach CDs to VMs for booting: no choices show up when doing a 'Run Once' on an existing VM, so I would like to get this working so I can fix a broken VM that I need to boot off ISO media.
Thanks in advance for any help you can provide.
Gluster Settings
by Christian Reiss
Hey folks,
Quick question: for running Gluster with oVirt, I found several sources that recommend different things: the oVirt docs (some of which are outdated), the Gluster mailing lists, the oVirt mailing lists, etc.
Here is what I found out/configured:
features.barrier: disable
features.show-snapshot-directory: on
features.uss: enable
cluster.data-self-heal-algorithm: full
cluster.entry-self-heal: on
cluster.data-self-heal: on
cluster.metadata-self-heal: on
cluster.readdir-optimize: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
network.remote-dio: off
performance.strict-o-direct: on
client.event-threads: 16
cluster.choose-local: true
snap-activate-on-create: enable
auto-delete: enable
Would you agree, or would you change anything (for a usual VM workload)?
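For completeness, this is roughly how the options above could be applied in one go by wrapping the gluster CLI; an untested sketch, where the volume name is a placeholder and the two snapshot settings are applied through 'gluster snapshot config' rather than 'volume set':

import subprocess

VOLUME = "vmstore"  # placeholder volume name; run on one of the gluster nodes

# Volume options as listed above. Some of these (transport.address-family,
# storage.fips-mode-rchecksum) are normally fixed at volume-create time.
volume_options = {
    "features.barrier": "disable",
    "features.show-snapshot-directory": "on",
    "features.uss": "enable",
    "cluster.data-self-heal-algorithm": "full",
    "cluster.entry-self-heal": "on",
    "cluster.data-self-heal": "on",
    "cluster.metadata-self-heal": "on",
    "cluster.readdir-optimize": "on",
    "transport.address-family": "inet",
    "storage.fips-mode-rchecksum": "on",
    "nfs.disable": "on",
    "performance.client-io-threads": "off",
    "network.remote-dio": "off",
    "performance.strict-o-direct": "on",
    "client.event-threads": "16",
    "cluster.choose-local": "true",
}

for option, value in volume_options.items():
    subprocess.run(["gluster", "volume", "set", VOLUME, option, value], check=True)

# snap-activate-on-create and auto-delete are snapshot-wide settings,
# configured with 'gluster snapshot config' instead of 'volume set'.
subprocess.run(["gluster", "snapshot", "config", "activate-on-create", "enable"], check=True)
subprocess.run(["gluster", "snapshot", "config", "auto-delete", "enable"], check=True)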
Thanks! o/
And keep healthy.
--
with kind regards,
mit freundlichen Gruessen,
Christian Reiss
VNC Encryption
by Tommaso - Shellrent
Hi all.
We need to set VNC Encryption to disabled at cluster creation, but we cannot find any reference in the API or SDK on how to do this automatically.
Does anyone have any hints on how to do this?
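What we are trying looks roughly like the sketch below with the Python SDK (ovirtsdk4), assuming a recent API version exposes a vnc_encryption flag on types.Cluster; we could not confirm that attribute name, so treat it as an assumption:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Engine URL and credentials are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

clusters_service = connection.system_service().clusters_service()
clusters_service.add(
    types.Cluster(
        name='mycluster',                              # placeholder cluster name
        data_center=types.DataCenter(name='Default'),  # placeholder data center
        cpu=types.Cpu(
            architecture=types.Architecture.X86_64,
            type='Intel Nehalem Family',               # placeholder CPU type
        ),
        vnc_encryption=False,  # ASSUMED attribute name; not verified against our API version
    )
)

connection.close()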
--
Shellrent - The first Italian "Security First" hosting provider
Tommaso De Marchi
COO - Chief Operating Officer
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155 | Fax 04441492177