[SOLVED] Re: Trying to make ovirt-hosted-engine-setup create a customized Engine-vm on 3.6 HC HE
by Giuseppe Ragusa
On Mon, Oct 26, 2015, at 09:48, Simone Tiraboschi wrote:
>
>
> On Mon, Oct 26, 2015 at 12:14 AM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
>> Hi all,
>> I'm experiencing some difficulties using oVirt 3.6 latest snapshot.
>>
>> I'm trying to trick the self-hosted-engine setup to create a custom engine vm with 3 nics (with fixed MACs/UUIDs).
>>
>> The GlusterFS volume (3.7.5 hyperconverged, replica 3, for the engine vm) and the network bridges (ovirtmgmt and two other bridges, called nfs and lan, for the engine vm) have been preconfigured on the initial fully-patched CentOS 7.1 host (plus two other identical hosts which are waiting to be added).
>>
>> I'm stuck at a point with the engine vm successfully starting but with only one nic present (connected to the ovirtmgmt bridge).
>>
>> I'm trying to obtain the modified engine vm by means of a trick which used to work in a previous oVirt 3.5 test setup (about a year ago, maybe more; abandoned for lack of GlusterFS-by-libgfapi support): I'm substituting the standard /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in with the following:
>>
>> vmId=@VM_UUID@
>> memSize=@MEM_SIZE@
>> display=@CONSOLE_TYPE@
>> devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
>> devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
>> devices={device:scsi,model:virtio-scsi,type:controller}
>> devices={index:4,nicModel:pv,macAddr:02:50:56:3f:c4:b0,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
>> devices={index:8,nicModel:pv,macAddr:02:50:56:3f:c4:a0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:6c467650-1837-47ea-89bc-1113f4bfefee,address:{bus:0x00, slot:0x09, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
>> devices={index:16,nicModel:pv,macAddr:02:50:56:3f:c4:c0,linkActive:true,network:nfs,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:4d8e0705-8cb4-45b7-b960-7f98bb59858d,address:{bus:0x00, slot:0x0c, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
>> devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
>> vmName=@NAME@
>> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
>> smp=@VCPUS@
>> cpuType=@CPU_TYPE@
>> emulatedMachine=@EMULATED_MACHINE@
>>
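The @TOKEN@ placeholders in the template above are filled in by simple string substitution at setup time. A minimal sketch of that mechanism (the function and dict names are illustrative, not the actual ovirt-hosted-engine-setup code):

```python
# Minimal sketch of @TOKEN@ substitution for a vm.conf.in-style template.
# The render_template name and the subst dict are assumptions for illustration.
def render_template(template, subst):
    """Replace every @KEY@ token in template with its value from subst."""
    out = template
    for key, value in subst.items():
        out = out.replace('@%s@' % key, str(value))
    return out

conf = render_template('vmId=@VM_UUID@\nmemSize=@MEM_SIZE@',
                       {'VM_UUID': 'f49da721-8aa6-4422-8b91-e91a0e38aa4a',
                        'MEM_SIZE': 4096})
print(conf)
```

Any token without a matching key is simply left in place, which is why a hand-edited template with literal values (like the fixed MACs above) used to pass through unchanged.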
>> but unfortunately the vm gets created like this (output from "ps"; note that I'm attaching a CentOS 7.1 Netinstall ISO with an embedded kickstart: the installation should proceed over HTTP on the lan network but obviously fails):
>>
>> /usr/libexec/qemu-kvm -name HostedEngine -S -machine pc-i440fx-rhel7.1.0,accel=kvm,usb=off -cpu Westmere -m 4096 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid f49da721-8aa6-4422-8b91-e91a0e38aa4a -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-1.1503.el7.centos.2.8,serial=2a1855a9-18fb-4d7a-b8b8-6fc898a8e827,uuid=f49da721-8aa6-4422-8b91-e91a0e38aa4a -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/HostedEngine.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2015-10-25T11:22:22,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/var/tmp/engine.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=/var/run/vdsm/storage/be4434bf-a5fd-44d7-8011-d5e4ac9cf523/b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc/8d075a8d-730a-4925-8779-e0ca2b3dbcf4,if=none,id=drive-virtio-disk0,format=raw,serial=b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:50:56:3f:c4:b0,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.org.ovirt.hosted-engine-setup.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0 -chardev socket,id=charconsole0,path=/var/run/ovirt-vmconsole-console/f49da721-8aa6-4422-8b91-e91a0e38aa4a.sock,server,nowait -device virtconsole,chardev=charconsole0,id=console0 -vnc 0:0,password -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -msg timestamp=on
>>
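One quick way to confirm how many NICs the VM actually got is to pull the virtio-net-pci entries out of the qemu command line shown by ps. A throwaway sketch (the regex is an assumption about the option format quoted above):

```python
import re

def list_nic_macs(qemu_cmdline):
    """Return the MAC addresses of all virtio-net-pci devices found on a
    qemu command line, in order of appearance."""
    return re.findall(r'virtio-net-pci,[^ ]*mac=([0-9a-f:]{17})', qemu_cmdline)

# Only the ovirtmgmt NIC shows up in the command line above.
cmdline = ('-device virtio-net-pci,netdev=hostnet0,id=net0,'
           'mac=02:50:56:3f:c4:b0,bus=pci.0,addr=0x3')
print(list_nic_macs(cmdline))  # -> ['02:50:56:3f:c4:b0']
```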
>> There seem to be no errors in the logs.
>>
>> I've tried reading some (limited) Python setup code but I've not found any obvious reason why the trick should not work anymore.
>>
>> I know that 3.6 has different network configuration/management, and this could be the crux.
>>
>> Does anyone have any further suggestion or clue (code/logs to read)?
>
> The VM creation path is now a bit different because we now use the vdscli library instead of vdsClient.
> Please take a look at mixins.py
Many thanks for your very valuable hint:
I've restored the original /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in and I've managed to obtain the 3-nics-customized vm by modifying /usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/mixins.py like this ("diff -Naur" output):
************************************************************************************
--- mixins.py.orig 2015-10-20 16:57:40.000000000 +0200
+++ mixins.py 2015-10-26 22:22:58.351223922 +0100
@@ -25,6 +25,7 @@
import random
import string
import time
+import uuid
from ovirt_hosted_engine_setup import constants as ohostedcons
@@ -247,6 +248,44 @@
]['@BOOT_PXE@'] == ',bootOrder:1':
nic['bootOrder'] = '1'
conf['devices'].append(nic)
+ nic2 = {
+ 'nicModel': 'pv',
+ 'macAddr': '02:50:56:3f:c4:a0',
+ 'linkActive': 'true',
+ 'network': 'lan',
+ 'filter': 'vdsm-no-mac-spoofing',
+ 'specParams': {},
+ 'deviceId': str(uuid.uuid4()),
+ 'address': {
+ 'bus': '0x00',
+ 'slot': '0x09',
+ 'domain': '0x0000',
+ 'type': 'pci',
+ 'function': '0x0'
+ },
+ 'device': 'bridge',
+ 'type': 'interface',
+ }
+ conf['devices'].append(nic2)
+ nic3 = {
+ 'nicModel': 'pv',
+ 'macAddr': '02:50:56:3f:c4:c0',
+ 'linkActive': 'true',
+ 'network': 'nfs',
+ 'filter': 'vdsm-no-mac-spoofing',
+ 'specParams': {},
+ 'deviceId': str(uuid.uuid4()),
+ 'address': {
+ 'bus': '0x00',
+ 'slot': '0x0c',
+ 'domain': '0x0000',
+ 'type': 'pci',
+ 'function': '0x0'
+ },
+ 'device': 'bridge',
+ 'type': 'interface',
+ }
+ conf['devices'].append(nic3)
cli = self.environment[ohostedcons.VDSMEnv.VDS_CLI]
status = cli.create(conf)
************************************************************************************
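For what it's worth, the two copy-pasted dicts in the diff could be collapsed into a small helper driven by a table of (MAC, network, PCI slot) triples. This is only a cleanup sketch of the hack above (the helper name is made up, not real setup code):

```python
import uuid

def make_bridge_nic(mac, network, slot):
    """Build a vdsm 'bridge' NIC device dict shaped like the ones the
    mixins.py hack appends to conf['devices']."""
    return {
        'nicModel': 'pv',
        'macAddr': mac,
        'linkActive': 'true',
        'network': network,
        'filter': 'vdsm-no-mac-spoofing',
        'specParams': {},
        'deviceId': str(uuid.uuid4()),  # fresh UUID per device
        'address': {
            'bus': '0x00',
            'slot': slot,
            'domain': '0x0000',
            'type': 'pci',
            'function': '0x0',
        },
        'device': 'bridge',
        'type': 'interface',
    }

# The two extra NICs from the diff, expressed as data.
extra_nics = [('02:50:56:3f:c4:a0', 'lan', '0x09'),
              ('02:50:56:3f:c4:c0', 'nfs', '0x0c')]
devices = [make_bridge_nic(*spec) for spec in extra_nics]
```

In the real code path, each generated dict would then be appended to `conf['devices']` before `cli.create(conf)` is called, exactly as in the diff.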
Obviously this is a horrible ad-hoc hack that I'm not able to generalize/clean up now: doing so would require (apart from a deeper understanding of the whole setup code/workflow) some well-thought-out design decisions, and given that the easy-to-modify vm.conf.in template has effectively been deprecated in favor of hardwired Python program logic, it seems that such functionality is not very high on the development priority list at the moment ;)
Many thanks again!
Kind regards,
Giuseppe
>> Many thanks in advance.
>>
>> Kind regards,
>> Giuseppe
>>
>> PS: please also keep my address when replying because I'm experiencing some problems between Hotmail and the oVirt mailing list
>>
>> _______________________________________________
>>
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
Problem with Adding Pre-Configured Domain
by Roger Meier
Hi All,
I don't know if this is a bug or an error on my side.
At the moment, I have an oVirt 3.6 installation with two nodes and two
storage servers, which are themselves configured as master/slave (Solaris
ZFS snapshot copy from master to slave every 2 hours).
Now I'm trying to run some tests for failure use cases, like the master
storage not being available anymore, or one of the virtual machines
having to be restored from the snapshot.
Because the data on the slave is a snapshot copy, all data on the Data
Domain NFS storage is also on the slave NFS storage.
I tried to add it over the WebUI via the option "Import Domain" (Import
Pre-Configured Domain) with both domain functions (Data and Export), but
nothing happens, except for some errors in the vdsm.log logfile.
Something like this
Thread-253746::ERROR::2015-11-24
11:44:41,758::hsm::2549::Storage.HSM::(disconnectStorageServer) Could
not disconnect from storageServer
Traceback (most recent call last):
File "/usr/share/vdsm/storage/hsm.py", line 2545, in
disconnectStorageServer
conObj.disconnect()
File "/usr/share/vdsm/storage/storageServer.py", line 425, in disconnect
return self._mountCon.disconnect()
File "/usr/share/vdsm/storage/storageServer.py", line 254, in disconnect
self._mount.umount(True, True)
File "/usr/share/vdsm/storage/mount.py", line 256, in umount
return self._runcmd(cmd, timeout)
File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
raise MountError(rc, ";".join((out, err)))
MountError: (32, ';umount:
/rhev/data-center/mnt/192.168.1.13:_oi-srv2-sasData1_oi-srv1-sasData1_nfsshare1:
mountpoint not found\n')
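The mountpoint name in that error is derived from the NFS spec by flattening the path separators. Roughly how vdsm maps a remote spec to its directory under /rhev/data-center/mnt/ (an illustrative reimplementation, not vdsm's actual code):

```python
def local_mount_name(remote_path):
    # Approximation of how vdsm flattens 'server:/export/sub' into a
    # single directory name: every '/' becomes '_'.
    return remote_path.replace('/', '_')

spec = '192.168.1.13:/oi-srv2-sasData1/oi-srv1-sasData1/nfsshare1'
print('/rhev/data-center/mnt/' + local_mount_name(spec))
# -> /rhev/data-center/mnt/192.168.1.13:_oi-srv2-sasData1_oi-srv1-sasData1_nfsshare1
```

The "mountpoint not found" in the traceback therefore means that this flattened directory was never created or already unmounted when the disconnect was attempted.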
I checked with nfs-check.py whether all permissions are OK; the tool says this:
[root@lin-ovirt1 contrib]# python ./nfs-check.py
192.168.1.13:/oi-srv2-sasData1/oi-srv1-sasData1/nfsshare1
Current hostname: lin-ovirt1 - IP addr 192.168.1.14
Trying to /bin/mount -t nfs
192.168.1.13:/oi-srv2-sasData1/oi-srv1-sasData1/nfsshare1...
Executing NFS tests..
Removing vdsmTest file..
Status of tests [OK]
Disconnecting from NFS Server..
Done!
Greetings
Roger Meier
Move cluster between datacenters
by Dael Maselli
Hello,
I have an environment with two data centers with some clusters in each
one. Each cluster has one or more dedicated FC storage domains.
We have some management problems because adding storage to one cluster
means adding it to all nodes of all clusters in the same data center; I
find this a little overkill and think it should be managed like networks.
Anyway, to resolve our problems we would like to create a new data center
and move a cluster from the old data center to the new one, obviously with
its own storage domain.
Is there a way to do this without export/import all vms?
Thank you.
Regards,
Dael Maselli.
--
___________________________________________________________________
Dael Maselli --- INFN-LNF Computing Service -- +39.06.9403.2214
___________________________________________________________________
* http://www.lnf.infn.it/~dmaselli/ *
___________________________________________________________________
Democracy is two wolves and a lamb voting on what to have for lunch
___________________________________________________________________
oVirt 4.0 wishlist: oVirt Node
by Giuseppe Ragusa
Hi all,
I'm trying to organize my wishes/hopes for oVirt 4.0.
These items derive from both solitary mumblings and community talks at the first Italian oVirt Meetup.
I offer to help in coding (work/family schedules permitting) but keep in mind that I'm a sysadmin with mainly C and bash-scripting skills (but hoping to improve my less-than-newbie Python too...)
Since I have related interests/wishes also for Engine and VDSM, I'll send a separate message for each one.
Let's start from the oVirt Node:
*) oVirt Node complete convergence with Atomic Host: start from Project Atomic tools and define an ISO-installable Atomic Host variant [1] to include gluster, qemu, libvirt, vdsm and all the packages/configurations that an oVirt Node would need (remove unneeded parts)
*) add Samba, CTDB and Ganesha to oVirt Node to allow it to be used as a full storage appliance (specifically, I'm thinking of the GlusterFS integration); there are related wishlist items on configuring/managing Samba/CTDB/Ganesha on the Engine and on VDSM
*) add oVirt Node ability to host containers (independent of the above mentioned convergence with Atomic); note that Atomic Host has Docker/Kubernetes, but libvirt already has a LXC driver [2] and the Engine could benefit from some added smartness in managing groups of guests etc. in the vm case too; there are related wishlist items on configuring/managing containers on the Engine and on VDSM
*) add Open vSwitch direct support (not Neutron-mediated); there are related wishlist items on configuring/managing Open vSwitch on the Engine and on VDSM
*) add DRBD9 as a supported Storage Domain type, maybe for HC and HE too; there are related wishlist items on configuring/managing DRBD9 on the Engine and on VDSM
*) add oVirt Node ability to fully perform as a stand-alone hypervisor: I hear that Cockpit is coming, so why not Kimchi too? ;)
Regards,
Giuseppe
[1] product.json, I suppose, but I'm starting to learn Atomic now...
[2] barring a pending deprecation in RHEL7, but I suppose that a community/Centos-Virt-SIG libvirt build could restore it and maybe RedHat too could support it on a special libvirt build for RHEV (just to remove those support costs from the base RHEL OS offering)
[OT] Gmail is marking the list as spam.
by Juan Pablo Lorier
Hi,
Could someone take a look at Gmail's spam policy to see what is not being
done? It's been months that I've had to fish the list mails out of the
spam folder, no matter how many times I mark them as legitimate.
Regards
Could not associate brick
by Bello Florent
Hi,
I activated the Gluster service on the cluster; then my engine.log kept
repeating: "Could not add brick xxx to volume xxxx: server uuid xxx not
found in cluster".
I found on the mailing list that I have to put all my hosts in
maintenance mode and then activate them again.
Now engine.log repeats:
2015-11-09
11:15:53,563 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-64) [] START,
GlusterVolumesListVDSCommand(HostName = ovirt02,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='0d1284e1-fa18-4309-b196-df9a6a337c44'}), log id:
6ddd5b9d
2015-11-09 11:15:53,711 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt01.mafia.kru:/gfs1/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,714 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt02.mafia.kru:/gfs1/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,716 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt03.mafia.kru:/gfs1/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,719 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt01.mafia.kru:/gfs2/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,722 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt02.mafia.kru:/gfs2/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,725 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt03.mafia.kru:/gfs2/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,732 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-64) [] FINISH,
GlusterVolumesListVDSCommand, return:
{e9a24161-3e72-47ea-b593-57f3302e7c4e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7eafe244,
e5df896f-b818-4d70-ac86-ad9270f9d5f2=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@cb7d0349},
log id: 6ddd5b9d
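To see at a glance which bricks and volumes are affected, the repeated warnings can be extracted mechanically. A quick parsing sketch over engine.log lines like the ones above (the regex is an assumption based on the quoted message format):

```python
import re

# Matches the brick and volume quoted in the engine.log warning text.
WARN_RE = re.compile(r"Could not associate brick '([^']+)' of volume '([^']+)'")

log = """
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt01.mafia.kru:/gfs1/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
"""

# The warning wraps across lines in the archive; collapse whitespace first.
matches = WARN_RE.findall(' '.join(log.split()))
for brick, volume in matches:
    print(brick, volume)
```

Run over the full excerpt above, this would list all six bricks (gfs1 and gfs2 on ovirt01/02/03) against the same volume UUID, confirming the warning is about the cluster-wide missing gluster network rather than any single brick.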
Here is my vdsm.log on host 1:
Thread-4247::DEBUG::2015-11-09
11:17:47,621::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'Host.getVMFullList' in bridge with [{u'status': 'Up',
u'nicModel': u'rtl8139,pv', u'kvmEnable': u'true', u'smp': u'1',
u'emulatedMachine': u'pc', u'afterMigrationStatus': u'', 'pid': '4450',
u'vmId': u'3930e6e3-5b41-45c3-bb7c-2af8563cefab', u'devices':
[{u'alias': u'console0', u'specParams': {}, 'deviceType': u'console',
u'deviceId': u'ab824f92-f636-4c0f-96ad-b4f3d1c352be', u'device':
u'console', u'type': u'console'}, {u'target': 1572864, u'alias':
u'balloon0', u'specParams': {u'model': u'none'}, 'deviceType':
u'balloon', u'device': u'memballoon', u'type': u'balloon'}, {u'device':
u'unix', u'alias': u'channel0', 'deviceType': u'channel', u'type':
u'channel', u'address': {u'bus': u'0', u'controller': u'0', u'type':
u'virtio-serial', u'port': u'1'}}, {u'device': u'unix', u'alias':
u'channel1', 'deviceType': u'channel', u'type': u'channel', u'address':
{u'bus': u'0', u'controller': u'0', u'type': u'virtio-serial', u'port':
u'2'}}, {u'device': u'unix', u'alias': u'channel2', 'deviceType':
u'channel', u'type': u'channel', u'address': {u'bus': u'0',
u'controller': u'0', u'type': u'virtio-serial', u'port': u'3'}},
{u'alias': u'scsi0', 'deviceType': u'controller', u'address': {u'slot':
u'0x04', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci',
u'function': u'0x0'}, u'device': u'scsi', u'model': u'virtio-scsi',
u'type': u'controller'}, {u'device': u'usb', u'alias': u'usb0',
'deviceType': u'controller', u'type': u'controller', u'address':
{u'slot': u'0x01', u'bus': u'0x00', u'domain': u'0x0000', u'type':
u'pci', u'function': u'0x2'}}, {u'device': u'ide', u'alias': u'ide0',
'deviceType': u'controller', u'type': u'controller', u'address':
{u'slot': u'0x01', u'bus': u'0x00', u'domain': u'0x0000', u'type':
u'pci', u'function': u'0x1'}}, {u'device': u'virtio-serial', u'alias':
u'virtio-serial0', 'deviceType': u'controller', u'type': u'controller',
u'address': {u'slot': u'0x05', u'bus': u'0x00', u'domain': u'0x0000',
u'type': u'pci', u'function': u'0x0'}}, {u'device': u'', u'alias':
u'video0', 'deviceType': u'video', u'type': u'video', u'address':
{u'slot': u'0x02', u'bus': u'0x00', u'domain': u'0x0000', u'type':
u'pci', u'function': u'0x0'}}, {u'device': u'vnc', u'specParams':
{u'spiceSecureChannels':
u'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
u'displayIp': '0'}, 'deviceType': u'graphics', u'type': u'graphics',
u'port': u'5900'}, {u'nicModel': u'pv', u'macAddr':
u'00:16:3e:43:96:7b', u'linkActive': True, u'network': u'ovirtmgmt',
u'specParams': {}, u'filter': u'vdsm-no-mac-spoofing', u'alias':
u'net0', 'deviceType': u'interface', u'deviceId':
u'c2913ff3-fea3-4b17-a4b3-83398d920cd3', u'address': {u'slot': u'0x03',
u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function':
u'0x0'}, u'device': u'bridge', u'type': u'interface', u'name':
u'vnet0'}, {u'index': u'2', u'iface': u'ide', u'name': u'hdc', u'alias':
u'ide0-1-0', u'specParams': {}, u'readonly': 'True', 'deviceType':
u'disk', u'deviceId': u'13f4e285-c161-46f5-9ec3-ba1f92f374d9',
u'address': {u'bus': u'1', u'controller': u'0', u'type': u'drive',
u'target': u'0', u'unit': u'0'}, u'device': u'cdrom', u'shared':
u'false', u'path': '', u'type': u'disk'}, {u'poolID':
u'00000000-0000-0000-0000-000000000000', u'volumeInfo': {'domainID':
u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': u'56461302-0710-4df0-964d-5e7b1ff07828', 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3'},
u'index': u'0', u'iface': u'virtio', u'apparentsize': '26843545600',
u'specParams': {}, u'imageID': u'56461302-0710-4df0-964d-5e7b1ff07828',
u'readonly': 'False', 'deviceType': u'disk', u'shared': u'exclusive',
u'truesize': '3515854848', u'type': u'disk', u'domainID':
u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', u'reqsize': u'0', u'format':
u'raw', u'deviceId': u'56461302-0710-4df0-964d-5e7b1ff07828',
u'address': {u'slot': u'0x06', u'bus': u'0x00', u'domain': u'0x0000',
u'type': u'pci', u'function': u'0x0'}, u'device': u'disk', u'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
u'propagateErrors': u'off', u'optional': u'false', u'name': u'vda',
u'bootOrder': u'1', u'volumeID':
u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', u'alias': u'virtio-disk0',
u'volumeChain': [{'domainID': u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'volumeID':
u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': u'56461302-0710-4df0-964d-5e7b1ff07828', 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3'}]}],
u'guestDiskMapping': {}, u'spiceSecureChannels':
u'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
u'vmType': u'kvm', u'displayIp': '0', u'displaySecurePort': '-1',
u'memSize': 1536, u'displayPort': u'5900', u'cpuType': u'Conroe',
'clientIp': u'', u'statusTime': '4299704920', u'vmName':
u'HostedEngine', u'display': 'vnc'}]
Reactor thread::INFO::2015-11-09
11:17:48,004::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57851
Reactor
thread::DEBUG::2015-11-09
11:17:48,012::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:48,013::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57851
Reactor
thread::DEBUG::2015-11-09
11:17:48,013::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57851)
BindingXMLRPC::INFO::2015-11-09
11:17:48,013::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57851
Thread-4248::INFO::2015-11-09
11:17:48,015::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57851 started
Thread-4248::INFO::2015-11-09
11:17:48,022::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57851 stopped
Thread-303::DEBUG::2015-11-09
11:17:48,143::fileSD::173::Storage.Misc.excCmd::(getReadDelay)
/usr/bin/dd
if=/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_data/0af99439-f140-4636-90f7-f43904735da0/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1 (cwd
None)
Thread-303::DEBUG::2015-11-09
11:17:48,154::fileSD::173::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n461 bytes (461 B) copied,
0.000382969 s, 1.2 MB/s\n'; <rc> = 0
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:48,767::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
dd
if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd
None)
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:48,783::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
copied, 0.00507258 s, 202 MB/s\n'; <rc> = 0
Reactor
thread::INFO::2015-11-09
11:17:49,939::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57852
Reactor
thread::DEBUG::2015-11-09
11:17:49,947::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:49,947::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57852
Reactor
thread::DEBUG::2015-11-09
11:17:49,947::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57852)
BindingXMLRPC::INFO::2015-11-09
11:17:49,948::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57852
Thread-4249::INFO::2015-11-09
11:17:49,949::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57852 started
Thread-4249::DEBUG::2015-11-09
11:17:49,950::bindingxmlrpc::1257::vds::(wrapper) client
[127.0.0.1]::call getCapabilities with ()
{}
Thread-4249::DEBUG::2015-11-09
11:17:49,962::netinfo::454::root::(_dhcp_used) DHCPv6 configuration not
specified for ovirtmgmt.
Thread-4249::DEBUG::2015-11-09
11:17:49,963::netinfo::686::root::(_get_gateway) The gateway
10.10.10.254 is duplicated for the device
ovirtmgmt
Thread-4249::DEBUG::2015-11-09
11:17:49,965::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on enp2s0.
Thread-4249::DEBUG::2015-11-09
11:17:49,965::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on enp2s0.
Thread-4249::DEBUG::2015-11-09
11:17:49,968::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on bond0.
Thread-4249::DEBUG::2015-11-09
11:17:49,968::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on bond0.
Thread-4249::DEBUG::2015-11-09
11:17:49,970::netinfo::686::root::(_get_gateway) The gateway
10.10.10.254 is duplicated for the device
ovirtmgmt
Thread-4249::DEBUG::2015-11-09
11:17:49,971::utils::676::root::(execCmd) /usr/sbin/tc qdisc show (cwd
None)
Thread-4249::DEBUG::2015-11-09
11:17:49,979::utils::694::root::(execCmd) SUCCESS: <err> = ''; <rc> =
0
Thread-4249::DEBUG::2015-11-09
11:17:49,980::utils::676::root::(execCmd) /usr/sbin/tc class show dev
enp2s0 classid 0:1388 (cwd None)
Thread-4249::DEBUG::2015-11-09
11:17:49,989::utils::694::root::(execCmd) SUCCESS: <err> = ''; <rc> =
0
Thread-4249::DEBUG::2015-11-09
11:17:49,993::caps::807::root::(_getKeyPackages) rpm package
('glusterfs-rdma',) not found
Thread-4249::DEBUG::2015-11-09
11:17:49,997::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-object',) not found
Thread-4249::DEBUG::2015-11-09
11:17:49,997::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-proxy',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,001::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-plugin',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,002::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,002::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-container',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,003::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-account',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,003::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-doc',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,005::bindingxmlrpc::1264::vds::(wrapper) return getCapabilities
with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory':
{'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:1954deeb7a38'}],
'FC': []}, 'packages2': {'kernel': {'release': '229.20.1.el7.x86_64',
'buildtime': 1446588607.0, 'version': '3.10.0'}, 'glusterfs-fuse':
{'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'},
'spice-server': {'release': '9.el7_1.3', 'buildtime': 1444691699L,
'version': '0.12.4'}, 'librbd1': {'release': '2.el7', 'buildtime':
1425594433L, 'version': '0.80.7'}, 'vdsm': {'release': '0.el7.centos',
'buildtime': 1446474396L, 'version': '4.17.10.1'}, 'qemu-kvm':
{'release': '29.1.el7', 'buildtime': 1444310806L, 'version': '2.3.0'},
'glusterfs': {'release': '1.el7', 'buildtime': 1444235292L, 'version':
'3.7.5'}, 'libvirt': {'release': '16.el7_1.5', 'buildtime': 1446559281L,
'version': '1.2.8'}, 'qemu-img': {'release': '29.1.el7', 'buildtime':
1444310806L, 'version': '2.3.0'}, 'mom': {'release': '2.el7',
'buildtime': 1442501481L, 'version': '0.5.1'},
'glusterfs-geo-replication': {'release': '1.el7', 'buildtime':
1444235292L, 'version': '3.7.5'}, 'glusterfs-server': {'release':
'1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-cli':
{'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}},
'numaNodeDistance': {'0': [10]}, 'cpuModel': 'Intel(R) Core(TM)2 Quad
CPU Q8400 @ 2.66GHz', 'liveMerge': 'true', 'hooks': {'before_vm_start':
{'50_hostedengine': {'md5': '2a6d96c26a3599812be6cf1a13d9f485'}}},
'vmTypes': ['kvm'], 'selinux': {'mode': '-1'}, 'liveSnapshot': 'true',
'kdumpStatus': 0, 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt',
'addr': '10.10.10.211', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'yes',
'IPADDR': '10.10.10.211', 'HOTPLUG': 'no', 'GATEWAY': '10.10.10.254',
'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0',
'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE':
'Bridge', 'ONBOOT': 'yes'}, 'bridged': True, 'ipv6addrs':
['fe80::6e62:6dff:feb3:3b72/64'], 'gateway': '10.10.10.254', 'dhcpv4':
False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off',
'ipv4addrs': ['10.10.10.211/24'], 'mtu': '1500', 'ipv6gateway': '::',
'ports': ['vnet0', 'enp2s0']}}, 'bridges': {'ovirtmgmt': {'addr':
'10.10.10.211', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'yes', 'IPADDR':
'10.10.10.211', 'HOTPLUG': 'no', 'GATEWAY': '10.10.10.254', 'DELAY':
'0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO':
'none', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT':
'yes'}, 'ipv6addrs': ['fe80::6e62:6dff:feb3:3b72/64'], 'gateway':
'10.10.10.254', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6':
False, 'stp': 'off', 'ipv4addrs': ['10.10.10.211/24'], 'mtu': '1500',
'ipv6gateway': '::', 'ports': ['vnet0', 'enp2s0'], 'opts':
{'multicast_last_member_count': '2', 'hash_elasticity': '4',
'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0',
'multicast_snooping': '1', 'multicast_startup_query_interval': '3125',
'hello_timer': '172', 'multicast_querier_interval': '25500', 'max_age':
'2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected':
'0', 'priority': '32768', 'multicast_membership_interval': '26000',
'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0',
'multicast_startup_query_count': '2', 'nf_call_iptables': '0',
'topology_change': '0', 'hello_time': '200', 'root_id':
'8000.6c626db33b72', 'bridge_id': '8000.6c626db33b72',
'topology_change_timer': '0', 'ageing_time': '30000',
'nf_call_ip6tables': '0', 'gc_timer': '25099', 'nf_call_arptables': '0',
'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100',
'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer':
'0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay':
'0'}}}, 'uuid': 'c2cac9d6-9ed7-44f0-8bbc-eff4c71db7ca', 'onlineCpus':
'0,1,2,3', 'nics': {'enp2s0': {'addr': '', 'ipv6gateway': '::',
'ipv6addrs': ['fe80::6e62:6dff:feb3:3b72/64'], 'mtu': '1500', 'dhcpv4':
False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg':
{'BRIDGE': 'ovirtmgmt', 'IPV6INIT': 'no', 'NM_CONTROLLED': 'no',
'HWADDR': '6c:62:6d:b3:3b:72', 'BOOTPROTO': 'none', 'DEVICE': 'enp2s0',
'ONBOOT': 'yes'}, 'hwaddr': '6c:62:6d:b3:3b:72', 'speed': 1000,
'gateway': ''}}, 'software_revision': '0', 'hostdevPassthrough':
'false', 'clusterLevels': ['3.4', '3.5', '3.6'], 'cpuFlags':
'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,xsave,lahf_lm,dtherm,tpr_shadow,vnmi,flexpriority,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:1954deeb7a38',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.4', '3.5', '3.6'],
'autoNumaBalancing': 0, 'additionalFeatures': ['GLUSTER_SNAPSHOT',
'GLUSTER_GEO_REPLICATION', 'GLUSTER_BRICK_MANAGEMENT'], 'reservedMem':
'321', 'bondings': {'bond0': {'ipv4addrs': [], 'addr': '', 'cfg':
{'BOOTPROTO': 'none'}, 'ipv6addrs': [], 'active_slave': '', 'mtu':
'1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'slaves': [],
'hwaddr': 'ba:5f:22:a3:17:07', 'ipv6gateway': '::', 'gateway': '',
'opts': {}}}, 'software_version': '4.17', 'memSize': '3782', 'cpuSpeed':
'2670.000', 'numaNodes': {'0': {'totalMemory': '3782', 'cpus': [0, 1, 2,
3]}}, 'cpuSockets': '1', 'vlans': {}, 'lastClientIface': 'lo',
'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65',
'version_name': 'Snow Man', 'cpuThreads': '4', 'emulatedMachines':
['pc-i440fx-rhel7.1.0', 'rhel6.3.0', 'pc-q35-rhel7.2.0',
'pc-i440fx-rhel7.0.0', 'rhel6.1.0', 'rhel6.6.0', 'rhel6.2.0', 'pc',
'pc-q35-rhel7.0.0', 'pc-q35-rhel7.1.0', 'q35', 'pc-i440fx-rhel7.2.0',
'rhel6.4.0', 'rhel6.0.0', 'rhel6.5.0'], 'rngSources': ['random'],
'operatingSystem': {'release': '1.1503.el7.centos.2.8', 'version': '7',
'name': 'RHEL'}, 'lastClient':
'127.0.0.1'}}
Thread-4249::INFO::2015-11-09
11:17:50,020::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57852 stopped
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:50,797::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
dd
if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd
None)
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:50,815::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
copied, 0.00511026 s, 200 MB/s\n'; <rc> = 0
Reactor thread::INFO::2015-11-09
11:17:52,098::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57853
Reactor thread::DEBUG::2015-11-09
11:17:52,106::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,107::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57853
Reactor thread::DEBUG::2015-11-09
11:17:52,107::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57853)
BindingXMLRPC::INFO::2015-11-09
11:17:52,108::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57853
Thread-4250::INFO::2015-11-09
11:17:52,110::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57853 started
Thread-4250::DEBUG::2015-11-09
11:17:52,111::bindingxmlrpc::1257::vds::(wrapper) client
[127.0.0.1]::call getHardwareInfo with ()
{}
Thread-4250::DEBUG::2015-11-09
11:17:52,112::bindingxmlrpc::1264::vds::(wrapper) return getHardwareInfo
with {'status': {'message': 'Done', 'code': 0}, 'info':
{'systemProductName': 'MS-7529', 'systemSerialNumber': 'To Be Filled By
O.E.M.', 'systemFamily': 'To Be Filled By O.E.M.', 'systemVersion':
'1.0', 'systemUUID': '00000000-0000-0000-0000-6C626DB33B72',
'systemManufacturer': 'MICRO-STAR INTERNATIONAL
CO.,LTD'}}
Thread-4250::INFO::2015-11-09
11:17:52,114::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57853 stopped
Reactor thread::INFO::2015-11-09
11:17:52,116::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57854
Reactor thread::DEBUG::2015-11-09
11:17:52,124::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,124::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57854
BindingXMLRPC::INFO::2015-11-09
11:17:52,125::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57854
Reactor thread::DEBUG::2015-11-09
11:17:52,125::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57854)
Thread-4251::INFO::2015-11-09
11:17:52,128::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57854 started
Thread-4251::DEBUG::2015-11-09
11:17:52,129::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4251::DEBUG::2015-11-09
11:17:52,130::task::595::Storage.TaskManager.Task::(_updateState)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::moving from state init ->
state preparing
Thread-4251::INFO::2015-11-09
11:17:52,130::logUtils::48::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id':
'2c69bdcf-793b-4fda-b326-b8aa6c33ade0', 'vfs_type': 'glusterfs',
'connection': 'ovirt02.mafia.kru:/engine', 'user': 'kvm'}],
options=None)
Thread-4251::DEBUG::2015-11-09
11:17:52,132::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*
Thread-4251::DEBUG::2015-11-09
11:17:52,146::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD
uuids: (u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
u'0af99439-f140-4636-90f7-f43904735da0')
Thread-4251::DEBUG::2015-11-09
11:17:52,147::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
{b4c488af-9d2f-4b7b-a6f6-74a0bac06c41: storage.glusterSD.findDomain,
0af99439-f140-4636-90f7-f43904735da0:
storage.glusterSD.findDomain}
Thread-4251::INFO::2015-11-09
11:17:52,147::logUtils::51::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 0,
'id':
'2c69bdcf-793b-4fda-b326-b8aa6c33ade0'}]}
Thread-4251::DEBUG::2015-11-09
11:17:52,147::task::1191::Storage.TaskManager.Task::(prepare)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::finished: {'statuslist':
[{'status': 0, 'id':
'2c69bdcf-793b-4fda-b326-b8aa6c33ade0'}]}
Thread-4251::DEBUG::2015-11-09
11:17:52,147::task::595::Storage.TaskManager.Task::(_updateState)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::moving from state preparing
-> state finished
Thread-4251::DEBUG::2015-11-09
11:17:52,147::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{}
Thread-4251::DEBUG::2015-11-09
11:17:52,148::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4251::DEBUG::2015-11-09
11:17:52,148::task::993::Storage.TaskManager.Task::(_decref)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::ref 0 aborting
False
Thread-4251::INFO::2015-11-09
11:17:52,149::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57854 stopped
Reactor thread::INFO::2015-11-09
11:17:52,150::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57855
Reactor thread::DEBUG::2015-11-09
11:17:52,158::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,158::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57855
BindingXMLRPC::INFO::2015-11-09
11:17:52,159::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57855
Reactor thread::DEBUG::2015-11-09
11:17:52,159::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57855)
Thread-4255::INFO::2015-11-09
11:17:52,162::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57855 started
Thread-4255::DEBUG::2015-11-09
11:17:52,162::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4255::DEBUG::2015-11-09
11:17:52,163::task::595::Storage.TaskManager.Task::(_updateState)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::moving from state init ->
state preparing
Thread-4255::INFO::2015-11-09
11:17:52,163::logUtils::48::dispatcher::(wrapper) Run and protect:
getStorageDomainStats(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
options=None)
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`41531754-8ba9-4fc4-8788-d4d67fa33e5c`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '2848' at
'getStorageDomainStats'
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`41531754-8ba9-4fc4-8788-d4d67fa33e5c`::Granted
request
Thread-4255::DEBUG::2015-11-09
11:17:52,165::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4255::DEBUG::2015-11-09
11:17:52,165::task::993::Storage.TaskManager.Task::(_decref)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::ref 1 aborting
False
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::750::Storage.SamplingMethod::(__call__) Trying to
enter sampling method
(storage.sdc.refreshStorage)
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::753::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::750::Storage.SamplingMethod::(__call__) Trying to
enter sampling method
(storage.iscsi.rescan)
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::753::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-4255::DEBUG::2015-11-09
11:17:52,166::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan,
this will take up to 30 seconds
Thread-4255::DEBUG::2015-11-09
11:17:52,166::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo
-n /sbin/iscsiadm -m session -R (cwd
None)
Thread-4255::DEBUG::2015-11-09
11:17:52,183::misc::760::Storage.SamplingMethod::(__call__) Returning
last result
Thread-4255::DEBUG::2015-11-09
11:17:52,183::misc::750::Storage.SamplingMethod::(__call__) Trying to
enter sampling method
(storage.hba.rescan)
Thread-4255::DEBUG::2015-11-09
11:17:52,184::misc::753::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-4255::DEBUG::2015-11-09
11:17:52,184::hba::56::Storage.HBA::(rescan) Starting
scan
Thread-4255::DEBUG::2015-11-09
11:17:52,295::hba::62::Storage.HBA::(rescan) Scan
finished
Thread-4255::DEBUG::2015-11-09
11:17:52,296::misc::760::Storage.SamplingMethod::(__call__) Returning
last result
Thread-4255::DEBUG::2015-11-09
11:17:52,296::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo
-n /usr/sbin/multipath (cwd None)
Thread-4255::DEBUG::2015-11-09
11:17:52,362::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS:
<err> = ''; <rc> = 0
Thread-4255::DEBUG::2015-11-09
11:17:52,362::utils::676::root::(execCmd) /sbin/udevadm settle
--timeout=5 (cwd None)
Thread-4255::DEBUG::2015-11-09
11:17:52,371::utils::694::root::(execCmd) SUCCESS: <err> = ''; <rc> =
0
Thread-4255::DEBUG::2015-11-09
11:17:52,372::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,372::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,374::misc::760::Storage.SamplingMethod::(__call__) Returning
last result
Thread-4255::DEBUG::2015-11-09
11:17:52,386::fileSD::157::Storage.StorageDomainManifest::(__init__)
Reading domain in path
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
Thread-4255::DEBUG::2015-11-09
11:17:52,387::persistentDict::192::Storage.PersistentDict::(__init__)
Created a persistent dict with FileMetadataRW
backend
Thread-4255::DEBUG::2015-11-09
11:17:52,395::persistentDict::234::Storage.PersistentDict::(refresh)
read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage',
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=',
'REMOTE_PATH=ovirt02.mafia.kru:/engine', 'ROLE=Regular',
'SDUUID=b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'TYPE=GLUSTERFS',
'VERSION=3',
'_SHA_CKSUM=cb09606ada74ed4155ad158923dd930264780fc8']
Thread-4255::DEBUG::2015-11-09
11:17:52,398::fileSD::647::Storage.StorageDomain::(imageGarbageCollector)
Removing remnants of deleted images []
Thread-4255::INFO::2015-11-09
11:17:52,399::sd::442::Storage.StorageDomain::(_registerResourceNamespaces)
Resource namespace b4c488af-9d2f-4b7b-a6f6-74a0bac06c41_imageNS already
registered
Thread-4255::INFO::2015-11-09
11:17:52,399::sd::450::Storage.StorageDomain::(_registerResourceNamespaces)
Resource namespace b4c488af-9d2f-4b7b-a6f6-74a0bac06c41_volumeNS
already registered
Thread-4255::INFO::2015-11-09
11:17:52,400::logUtils::51::dispatcher::(wrapper) Run and protect:
getStorageDomainStats, Return response: {'stats': {'mdasize': 0,
'mdathreshold': True, 'mdavalid': True, 'diskfree': '210878988288',
'disktotal': '214643507200', 'mdafree':
0}}
Thread-4255::DEBUG::2015-11-09
11:17:52,401::task::1191::Storage.TaskManager.Task::(prepare)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::finished: {'stats':
{'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree':
'210878988288', 'disktotal': '214643507200', 'mdafree':
0}}
Thread-4255::DEBUG::2015-11-09
11:17:52,401::task::595::Storage.TaskManager.Task::(_updateState)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::moving from state preparing
-> state finished
Thread-4255::DEBUG::2015-11-09
11:17:52,401::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4255::DEBUG::2015-11-09
11:17:52,401::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4255::DEBUG::2015-11-09
11:17:52,402::task::993::Storage.TaskManager.Task::(_decref)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::ref 0 aborting
False
Thread-4255::INFO::2015-11-09
11:17:52,404::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57855 stopped
Reactor thread::INFO::2015-11-09
11:17:52,405::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57856
Reactor thread::DEBUG::2015-11-09
11:17:52,413::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,414::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57856
BindingXMLRPC::INFO::2015-11-09
11:17:52,414::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57856
Reactor thread::DEBUG::2015-11-09
11:17:52,414::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57856)
Thread-4259::INFO::2015-11-09
11:17:52,417::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57856 started
Thread-4259::DEBUG::2015-11-09
11:17:52,418::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4259::DEBUG::2015-11-09
11:17:52,418::task::595::Storage.TaskManager.Task::(_updateState)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::moving from state init ->
state preparing
Thread-4259::INFO::2015-11-09
11:17:52,419::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='56461302-0710-4df0-964d-5e7b1ff07828',
leafUUID='8f8ee034-de86-4438-b6eb-9109faa8b3d3')
Thread-4259::DEBUG::2015-11-09
11:17:52,419::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`bec7c8c3-42b9-4acb-88cf-841d9dc28fb0`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4259::DEBUG::2015-11-09
11:17:52,419::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4259::DEBUG::2015-11-09
11:17:52,420::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4259::DEBUG::2015-11-09
11:17:52,420::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`bec7c8c3-42b9-4acb-88cf-841d9dc28fb0`::Granted
request
Thread-4259::DEBUG::2015-11-09
11:17:52,420::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4259::DEBUG::2015-11-09
11:17:52,420::task::993::Storage.TaskManager.Task::(_decref)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::ref 1 aborting
False
Thread-4259::DEBUG::2015-11-09
11:17:52,445::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3
Thread-4259::DEBUG::2015-11-09
11:17:52,446::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4259::WARNING::2015-11-09
11:17:52,446::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4259::DEBUG::2015-11-09
11:17:52,446::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828
Thread-4259::DEBUG::2015-11-09
11:17:52,447::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828
Thread-4259::DEBUG::2015-11-09
11:17:52,448::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
8f8ee034-de86-4438-b6eb-9109faa8b3d3
Thread-4259::INFO::2015-11-09
11:17:52,450::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': '56461302-0710-4df0-964d-5e7b1ff07828'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID':
'56461302-0710-4df0-964d-5e7b1ff07828'}]}
Thread-4259::DEBUG::2015-11-09
11:17:52,450::task::1191::Storage.TaskManager.Task::(prepare)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': '56461302-0710-4df0-964d-5e7b1ff07828'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID':
'56461302-0710-4df0-964d-5e7b1ff07828'}]}
Thread-4259::DEBUG::2015-11-09
11:17:52,450::task::595::Storage.TaskManager.Task::(_updateState)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::moving from state preparing
-> state finished
Thread-4259::DEBUG::2015-11-09
11:17:52,450::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4259::DEBUG::2015-11-09
11:17:52,450::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4259::DEBUG::2015-11-09
11:17:52,451::task::993::Storage.TaskManager.Task::(_decref)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::ref 0 aborting
False
Thread-4259::INFO::2015-11-09
11:17:52,454::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57856 stopped
Reactor thread::INFO::2015-11-09
11:17:52,454::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57857
Reactor
thread::DEBUG::2015-11-09
11:17:52,463::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,463::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57857
Reactor
thread::DEBUG::2015-11-09
11:17:52,464::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57857)
BindingXMLRPC::INFO::2015-11-09
11:17:52,464::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57857
Thread-4260::INFO::2015-11-09
11:17:52,466::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57857 started
Thread-4260::DEBUG::2015-11-09
11:17:52,467::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4260::DEBUG::2015-11-09
11:17:52,467::task::595::Storage.TaskManager.Task::(_updateState)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::moving from state init ->
state preparing
Thread-4260::INFO::2015-11-09
11:17:52,467::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='fd81353f-b654-4493-bcaf-2f417849b830',
leafUUID='8bb29fcb-c109-4f0a-a227-3819b6ecfdd9')
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`974119cd-1351-46e9-8062-ffb1298c4ac9`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`974119cd-1351-46e9-8062-ffb1298c4ac9`::Granted
request
Thread-4260::DEBUG::2015-11-09
11:17:52,469::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4260::DEBUG::2015-11-09
11:17:52,469::task::993::Storage.TaskManager.Task::(_decref)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::ref 1 aborting
False
Thread-4260::DEBUG::2015-11-09
11:17:52,485::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9
Thread-4260::DEBUG::2015-11-09
11:17:52,486::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4260::WARNING::2015-11-09
11:17:52,487::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4260::DEBUG::2015-11-09
11:17:52,487::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830
Thread-4260::DEBUG::2015-11-09
11:17:52,487::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830
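The createImageLinks step logged above only has to guarantee that a run-directory symlink (under /var/run/vdsm/storage/&lt;sdUUID&gt;/) points at the image directory on the Gluster mount, tolerating the link already existing. A minimal sketch of that logic (an illustrative helper, not VDSM's actual code):

```python
import os

def create_image_link(image_dir, run_dir_link):
    """Symlink run_dir_link -> image_dir, tolerating an existing link
    (mirrors the 'img run dir already exists' DEBUG message above)."""
    if os.path.lexists(run_dir_link):
        return "img run dir already exists: %s" % run_dir_link
    os.symlink(image_dir, run_dir_link)
    return "created symlink %s -> %s" % (run_dir_link, image_dir)
```

Calling it a second time for the same image is a no-op, which is why repeated prepareImage calls in the log only produce a DEBUG line instead of an error.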
Thread-4260::DEBUG::2015-11-09
11:17:52,488::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
8bb29fcb-c109-4f0a-a227-3819b6ecfdd9
Thread-4260::INFO::2015-11-09
11:17:52,490::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID':
'fd81353f-b654-4493-bcaf-2f417849b830'}]}
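For readers digging through these responses: the prepareImage return value printed above is a plain dict, and the fields one usually needs are the run-dir path plus the volume and lease paths inside 'info'. A small extraction helper (hypothetical, not part of VDSM), shown against a shortened response:

```python
def volume_paths(prepare_image_response):
    """Extract the commonly needed paths from a VDSM
    prepareImage-style response dict."""
    info = prepare_image_response['info']
    return {
        'path': prepare_image_response['path'],  # /var/run/vdsm/... symlinked path
        'volume': info['path'],                  # real path on the storage domain
        'lease': info['leasePath'],
        'volumeID': info['volumeID'],
    }
```

Note that 'path' at the top level is the run-dir symlink handed to QEMU, while info['path'] is the backing file on the mounted Gluster volume.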
Thread-4260::DEBUG::2015-11-09
11:17:52,490::task::1191::Storage.TaskManager.Task::(prepare)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID':
'fd81353f-b654-4493-bcaf-2f417849b830'}]}
Thread-4260::DEBUG::2015-11-09
11:17:52,490::task::595::Storage.TaskManager.Task::(_updateState)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::moving from state preparing
-> state finished
Thread-4260::DEBUG::2015-11-09
11:17:52,490::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4260::DEBUG::2015-11-09
11:17:52,492::task::993::Storage.TaskManager.Task::(_decref)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::ref 0 aborting
False
Thread-4260::INFO::2015-11-09
11:17:52,494::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57857 stopped
Reactor thread::INFO::2015-11-09
11:17:52,494::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57858
Reactor
thread::DEBUG::2015-11-09
11:17:52,503::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,504::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57858
BindingXMLRPC::INFO::2015-11-09
11:17:52,504::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57858
Reactor thread::DEBUG::2015-11-09
11:17:52,504::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57858)
Thread-4261::INFO::2015-11-09
11:17:52,507::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57858 started
Thread-4261::DEBUG::2015-11-09
11:17:52,508::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4261::DEBUG::2015-11-09
11:17:52,508::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d39463fd-486f-4280-903a-51b72862b648`::moving from state init ->
state preparing
Thread-4261::INFO::2015-11-09
11:17:52,509::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='0e1c20d1-94aa-4003-8e12-0dbbf06a6af8',
leafUUID='3fc3362d-ab6d-4e06-bd72-82d5750c7095')
Thread-4261::DEBUG::2015-11-09
11:17:52,509::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`240c2aba-6c2e-44da-890d-c3d605e1933f`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4261::DEBUG::2015-11-09
11:17:52,509::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4261::DEBUG::2015-11-09
11:17:52,509::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4261::DEBUG::2015-11-09
11:17:52,510::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`240c2aba-6c2e-44da-890d-c3d605e1933f`::Granted
request
Thread-4261::DEBUG::2015-11-09
11:17:52,510::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`d39463fd-486f-4280-903a-51b72862b648`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4261::DEBUG::2015-11-09
11:17:52,510::task::993::Storage.TaskManager.Task::(_decref)
Task=`d39463fd-486f-4280-903a-51b72862b648`::ref 1 aborting
False
Thread-4261::DEBUG::2015-11-09
11:17:52,526::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095
Thread-4261::DEBUG::2015-11-09
11:17:52,528::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4261::WARNING::2015-11-09
11:17:52,528::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4261::DEBUG::2015-11-09
11:17:52,528::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8
Thread-4261::DEBUG::2015-11-09
11:17:52,528::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8
Thread-4261::DEBUG::2015-11-09
11:17:52,530::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
3fc3362d-ab6d-4e06-bd72-82d5750c7095
Thread-4261::INFO::2015-11-09
11:17:52,531::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID': '0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID':
'0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}]}
Thread-4261::DEBUG::2015-11-09
11:17:52,531::task::1191::Storage.TaskManager.Task::(prepare)
Task=`d39463fd-486f-4280-903a-51b72862b648`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID': '0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID':
'0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}]}
Thread-4261::DEBUG::2015-11-09
11:17:52,532::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d39463fd-486f-4280-903a-51b72862b648`::moving from state preparing
-> state finished
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4261::DEBUG::2015-11-09
11:17:52,533::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4261::DEBUG::2015-11-09
11:17:52,533::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4261::DEBUG::2015-11-09
11:17:52,533::task::993::Storage.TaskManager.Task::(_decref)
Task=`d39463fd-486f-4280-903a-51b72862b648`::ref 0 aborting
False
Thread-4261::INFO::2015-11-09
11:17:52,535::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57858 stopped
Reactor thread::INFO::2015-11-09
11:17:52,536::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57859
Reactor
thread::DEBUG::2015-11-09
11:17:52,544::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,545::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57859
Reactor
thread::DEBUG::2015-11-09
11:17:52,545::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57859)
BindingXMLRPC::INFO::2015-11-09
11:17:52,545::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57859
Thread-4262::INFO::2015-11-09
11:17:52,548::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57859 started
Thread-4262::DEBUG::2015-11-09
11:17:52,548::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4262::DEBUG::2015-11-09
11:17:52,549::task::595::Storage.TaskManager.Task::(_updateState)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::moving from state init ->
state preparing
Thread-4262::INFO::2015-11-09
11:17:52,549::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='350fb787-049a-4174-8914-f371aabfa72c',
leafUUID='02c5d59d-638c-4672-814d-d734e334e24a')
Thread-4262::DEBUG::2015-11-09
11:17:52,549::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`fd9ea6d0-3a31-4ec6-a74c-8b84b2e51746`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4262::DEBUG::2015-11-09
11:17:52,550::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4262::DEBUG::2015-11-09
11:17:52,550::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4262::DEBUG::2015-11-09
11:17:52,550::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`fd9ea6d0-3a31-4ec6-a74c-8b84b2e51746`::Granted
request
Thread-4262::DEBUG::2015-11-09
11:17:52,550::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4262::DEBUG::2015-11-09
11:17:52,551::task::993::Storage.TaskManager.Task::(_decref)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::ref 1 aborting
False
Thread-4262::DEBUG::2015-11-09
11:17:52,566::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a
Thread-4262::DEBUG::2015-11-09
11:17:52,568::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4262::WARNING::2015-11-09
11:17:52,568::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4262::DEBUG::2015-11-09
11:17:52,568::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c
Thread-4262::DEBUG::2015-11-09
11:17:52,568::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c
Thread-4262::DEBUG::2015-11-09
11:17:52,570::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
02c5d59d-638c-4672-814d-d734e334e24a
Thread-4262::INFO::2015-11-09
11:17:52,572::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID':
'350fb787-049a-4174-8914-f371aabfa72c'}]}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::task::1191::Storage.TaskManager.Task::(prepare)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID':
'350fb787-049a-4174-8914-f371aabfa72c'}]}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::task::595::Storage.TaskManager.Task::(_updateState)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::moving from state preparing
-> state finished
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4262::DEBUG::2015-11-09
11:17:52,574::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4262::DEBUG::2015-11-09
11:17:52,574::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4262::DEBUG::2015-11-09
11:17:52,574::task::993::Storage.TaskManager.Task::(_decref)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::ref 0 aborting
False
Thread-4262::INFO::2015-11-09
11:17:52,576::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57859 stopped
Reactor thread::INFO::2015-11-09
11:17:52,610::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57860
Reactor
thread::DEBUG::2015-11-09
11:17:52,619::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,619::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57860
BindingXMLRPC::INFO::2015-11-09
11:17:52,620::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57860
Reactor thread::DEBUG::2015-11-09
11:17:52,620::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57860)
Thread-4263::INFO::2015-11-09
11:17:52,623::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57860 started
Thread-4263::DEBUG::2015-11-09
11:17:52,623::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4263::DEBUG::2015-11-09
11:17:52,624::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::moving from state init ->
state preparing
Thread-4263::INFO::2015-11-09
11:17:52,624::logUtils::48::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-4263::INFO::2015-11-09
11:17:52,624::logUtils::51::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41':
{'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay':
'0.000392118', 'lastCheck': '6.0', 'valid': True},
u'0af99439-f140-4636-90f7-f43904735da0': {'code': 0, 'actual': True,
'version': 3, 'acquired': True, 'delay': '0.000382969', 'lastCheck':
'4.5', 'valid': True}}
Thread-4263::DEBUG::2015-11-09
11:17:52,624::task::1191::Storage.TaskManager.Task::(prepare)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::finished:
{'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': {'code': 0, 'actual': True,
'version': 3, 'acquired': True, 'delay': '0.000392118', 'lastCheck':
'6.0', 'valid': True}, u'0af99439-f140-4636-90f7-f43904735da0': {'code':
0, 'actual': True, 'version': 3, 'acquired': True, 'delay':
'0.000382969', 'lastCheck': '4.5', 'valid':
True}}
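The repoStats response above is the kind of per-domain health data monitoring keys off: each storage domain reports 'code', 'valid', 'delay' and 'lastCheck'. A sketch of how one might flag unhealthy domains from such a dict (the thresholds here are assumptions for illustration, not oVirt's own values):

```python
def unhealthy_domains(repo_stats, max_delay=1.0, max_last_check=30.0):
    """Return UUIDs of domains whose stats look bad: non-zero code,
    valid=False, or slow/stale checks (delay and lastCheck are strings
    in the VDSM response, so they are converted to float)."""
    bad = []
    for sd_uuid, st in repo_stats.items():
        if (st['code'] != 0 or not st['valid']
                or float(st['delay']) > max_delay
                or float(st['lastCheck']) > max_last_check):
            bad.append(sd_uuid)
    return bad
```

Applied to the response logged above, both domains pass, which matches the 'valid': True entries.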
Thread-4263::DEBUG::2015-11-09
11:17:52,625::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::moving from state preparing
-> state finished
Thread-4263::DEBUG::2015-11-09
11:17:52,625::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{}
Thread-4263::DEBUG::2015-11-09
11:17:52,625::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4263::DEBUG::2015-11-09
11:17:52,625::task::993::Storage.TaskManager.Task::(_decref)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::ref 0 aborting
False
Thread-4263::INFO::2015-11-09
11:17:52,627::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57860
stopped
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:52,829::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
dd
if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd
None)
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:52,845::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
copied, 0.00494757 s, 207 MB/s\n'; <rc> = 0
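As a quick sanity check, the dd rate reported for the SPM mailbox read is internally consistent: 1,024,000 bytes in 0.00494757 s works out to about 207 MB/s (dd counts a MB as 10^6 bytes):

```python
# dd copied 1,024,000 bytes in 0.00494757 s; dd's MB is 10**6 bytes
rate_mb_s = 1024000 / 0.00494757 / 1e6
print("%.0f MB/s" % rate_mb_s)  # matches the 207 MB/s dd reported
```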
--
Florent BELLO
Service
Informatique
informatique(a)ville-kourou.fr
0594 22 31 22
Mairie de Kourou
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN">
<html><body>
<p>Hi,</p>
<p>I activate gluster service in Cluster, then my engine.log chain : Could =
not add brick xxx to volume xxxx server uuid xxx not found in cluster=
=2E<br />I found in mailing list i have to put all my hosts in maintenance =
mode and put on.</p>
<p>Then now engine.log chain :</p>
2015-11-09 11:15:53,563 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-64) [] START, GlusterVolumesListVDSCommand(HostName = ovirt02, GlusterVolumesListVDSParameters:{runAsync='true', hostId='0d1284e1-fa18-4309-b196-df9a6a337c44'}), log id: 6ddd5b9d
2015-11-09 11:15:53,711 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-64) [] Could not associate brick 'ovirt01.mafia.kru:/gfs1/engine/brick' of volume 'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,714 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-64) [] Could not associate brick 'ovirt02.mafia.kru:/gfs1/engine/brick' of volume 'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,716 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-64) [] Could not associate brick 'ovirt03.mafia.kru:/gfs1/engine/brick' of volume 'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,719 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-64) [] Could not associate brick 'ovirt01.mafia.kru:/gfs2/engine/brick' of volume 'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,722 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-64) [] Could not associate brick 'ovirt02.mafia.kru:/gfs2/engine/brick' of volume 'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,725 WARN  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-64) [] Could not associate brick 'ovirt03.mafia.kru:/gfs2/engine/brick' of volume 'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,732 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-64) [] FINISH, GlusterVolumesListVDSCommand, return: {e9a24161-3e72-47ea-b593-57f3302e7c4e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7eafe244, e5df896f-b818-4d70-ac86-ad9270f9d5f2=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@cb7d0349}, log id: 6ddd5b9d
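Note that these WARN lines only say the engine could not map the bricks to a network, because no logical network in the cluster carries the gluster role; the volume list itself is returned fine. If it helps to enumerate exactly which bricks are affected, here is a minimal sketch that parses WARN lines in the format shown above (`unassociated_bricks` is just an illustrative helper, not part of oVirt):

```python
import re

# Matches the engine.log WARN text quoted above, capturing the brick,
# volume UUID and cluster UUID.
PATTERN = re.compile(
    r"Could not associate brick '(?P<brick>[^']+)' of volume "
    r"'(?P<volume>[^']+)' with correct network as no gluster network "
    r"found in cluster '(?P<cluster>[^']+)'"
)

def unassociated_bricks(log_text):
    """Return the brick paths the engine could not associate with a network."""
    return [m.group("brick") for m in PATTERN.finditer(log_text)]

# One of the WARN lines from the excerpt above, as a sample input.
sample = (
    "2015-11-09 11:15:53,711 WARN Could not associate brick "
    "'ovirt01.mafia.kru:/gfs1/engine/brick' of volume "
    "'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no "
    "gluster network found in cluster '00000002-0002-0002-0002-00000000022d'"
)
print(unassociated_bricks(sample))  # ['ovirt01.mafia.kru:/gfs1/engine/brick']
```

Running this over the whole engine.log would list each of the six bricks once per polling cycle, which makes it easy to confirm that all bricks (not just some) are hitting the same missing-gluster-network condition.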
Here is my vdsm.log from host 1:
Thread-4247::DEBUG::2015-11-09 11:17:47,621::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Host.getVMFullList' in bridge with [{u'status': 'Up', u'nicModel': u'rtl8139,pv', u'kvmEnable': u'true', u'smp': u'1', u'emulatedMachine': u'pc', u'afterMigrationStatus': u'', 'pid': '4450', u'vmId': u'3930e6e3-5b41-45c3-bb7c-2af8563cefab', u'devices': [{u'alias': u'console0', u'specParams': {}, 'deviceType': u'console', u'deviceId': u'ab824f92-f636-4c0f-96ad-b4f3d1c352be', u'device': u'console', u'type': u'console'}, {u'target': 1572864, u'alias': u'balloon0', u'specParams': {u'model': u'none'}, 'deviceType': u'balloon', u'device': u'memballoon', u'type': u'balloon'}, {u'device': u'unix', u'alias': u'channel0', 'deviceType': u'channel', u'type': u'channel', u'address': {u'bus': u'0', u'controller': u'0', u'type': u'virtio-serial', u'port': u'1'}}, {u'device': u'unix', u'alias': u'channel1', 'deviceType': u'channel', u'type': u'channel', u'address': {u'bus': u'0', u'controller': u'0', u'type': u'virtio-serial', u'port': u'2'}}, {u'device': u'unix', u'alias': u'channel2', 'deviceType': u'channel', u'type': u'channel', u'address': {u'bus': u'0', u'controller': u'0', u'type': u'virtio-serial', u'port': u'3'}}, {u'alias': u'scsi0', 'deviceType': u'controller', u'address': {u'slot': u'0x04', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}, u'device': u'scsi', u'model': u'virtio-scsi', u'type': u'controller'}, {u'device': u'usb', u'alias': u'usb0', 'deviceType': u'controller', u'type': u'controller', u'address': {u'slot': u'0x01', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x2'}}, {u'device': u'ide', u'alias': u'ide0', 'deviceType': u'controller', u'type': u'controller', u'address': {u'slot': u'0x01', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x1'}}, {u'device': u'virtio-serial', u'alias': u'virtio-serial0', 'deviceType': u'controller', u'type': u'controller', u'address': {u'slot': u'0x05', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}}, {u'device': u'', u'alias': u'video0', 'deviceType': u'video', u'type': u'video', u'address': {u'slot': u'0x02', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}}, {u'device': u'vnc', u'specParams': {u'spiceSecureChannels': u'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir', u'displayIp': '0'}, 'deviceType': u'graphics', u'type': u'graphics', u'port': u'5900'}, {u'nicModel': u'pv', u'macAddr': u'00:16:3e:43:96:7b', u'linkActive': True, u'network': u'ovirtmgmt', u'specParams': {}, u'filter': u'vdsm-no-mac-spoofing', u'alias': u'net0', 'deviceType': u'interface', u'deviceId': u'c2913ff3-fea3-4b17-a4b3-83398d920cd3', u'address': {u'slot': u'0x03', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}, u'device': u'bridge', u'type': u'interface', u'name': u'vnet0'}, {u'index': u'2', u'iface': u'ide', u'name': u'hdc', u'alias': u'ide0-1-0', u'specParams': {}, u'readonly': 'True', 'deviceType': u'disk', u'deviceId': u'13f4e285-c161-46f5-9ec3-ba1f92f374d9', u'address': {u'bus': u'1', u'controller': u'0', u'type': u'drive', u'target': u'0', u'unit': u'0'}, u'device': u'cdrom', u'shared': u'false', u'path': '', u'type': u'disk'}, {u'poolID': u'00000000-0000-0000-0000-000000000000', u'volumeInfo': {'domainID': u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease', 'imageID': u'56461302-0710-4df0-964d-5e7b1ff07828', 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3'}, u'index': u'0', u'iface': u'virtio', u'apparentsize': '26843545600', u'specParams': {}, u'imageID': u'56461302-0710-4df0-964d-5e7b1ff07828', u'readonly': 'False', 'deviceType': u'disk', u'shared': u'exclusive', u'truesize': '3515854848', u'type': u'disk', u'domainID': u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', u'reqsize': u'0', u'format': u'raw', u'deviceId': u'56461302-0710-4df0-964d-5e7b1ff07828', u'address': {u'slot': u'0x06', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function': u'0x0'}, u'device': u'disk', u'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3', u'propagateErrors': u'off', u'optional': u'false', u'name': u'vda', u'bootOrder': u'1', u'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', u'alias': u'virtio-disk0', u'volumeChain': [{'domainID': u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease', 'imageID': u'56461302-0710-4df0-964d-5e7b1ff07828', 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3'}]}], u'guestDiskMapping': {}, u'spiceSecureChannels': u'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir', u'vmType': u'kvm', u'displayIp': '0', u'displaySecurePort': '-1', u'memSize': 1536, u'displayPort': u'5900', u'cpuType': u'Conroe', 'clientIp': u'', u'statusTime': '4299704920', u'vmName': u'HostedEngine', u'display': 'vnc'}]
Reactor thread::INFO::2015-11-09 11:17:48,004::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:57851
Reactor thread::DEBUG::2015-11-09 11:17:48,012::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-11-09 11:17:48,013::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:57851
Reactor thread::DEBUG::2015-11-09 11:17:48,013::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 57851)
BindingXMLRPC::INFO::2015-11-09 11:17:48,013::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:57851
Thread-4248::INFO::2015-11-09 11:17:48,015::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57851 started
Thread-4248::INFO::2015-11-09 11:17:48,022::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57851 stopped
Thread-303::DEBUG::2015-11-09 11:17:48,143::fileSD::173::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd if=/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_data/0af99439-f140-4636-90f7-f43904735da0/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-303::DEBUG::2015-11-09 11:17:48,154::fileSD::173::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n461 bytes (461 B) copied, 0.000382969 s, 1.2 MB/s\n'; <rc> = 0
mailbox.SPMMonitor::DEBUG::2015-11-09 11:17:48,767::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) dd if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000 (cwd None)
mailbox.SPMMonitor::DEBUG::2015-11-09 11:17:48,783::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.00507258 s, 202 MB/s\n'; <rc> = 0
Reactor thread::INFO::2015-11-09 11:17:49,939::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:57852
Reactor thread::DEBUG::2015-11-09 11:17:49,947::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-11-09 11:17:49,947::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:57852
Reactor thread::DEBUG::2015-11-09 11:17:49,947::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 57852)
BindingXMLRPC::INFO::2015-11-09 11:17:49,948::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:57852
Thread-4249::INFO::2015-11-09 11:17:49,949::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57852 started
Thread-4249::DEBUG::2015-11-09 11:17:49,950::bindingxmlrpc::1257::vds::(wrapper) client [127.0.0.1]::call getCapabilities with () {}
Thread-4249::DEBUG::2015-11-09 11:17:49,962::netinfo::454::root::(_dhcp_used) DHCPv6 configuration not specified for ovirtmgmt.
Thread-4249::DEBUG::2015-11-09 11:17:49,963::netinfo::686::root::(_get_gateway) The gateway 10.10.10.254 is duplicated for the device ovirtmgmt
Thread-4249::DEBUG::2015-11-09 11:17:49,965::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp2s0.
Thread-4249::DEBUG::2015-11-09 11:17:49,965::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp2s0.
Thread-4249::DEBUG::2015-11-09 11:17:49,968::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond0.
Thread-4249::DEBUG::2015-11-09 11:17:49,968::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond0.
Thread-4249::DEBUG::2015-11-09 11:17:49,970::netinfo::686::root::(_get_gateway) The gateway 10.10.10.254 is duplicated for the device ovirtmgmt
Thread-4249::DEBUG::2015-11-09 11:17:49,971::utils::676::root::(execCmd) /usr/sbin/tc qdisc show (cwd None)
Thread-4249::DEBUG::2015-11-09 11:17:49,979::utils::694::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-4249::DEBUG::2015-11-09 11:17:49,980::utils::676::root::(execCmd) /usr/sbin/tc class show dev enp2s0 classid 0:1388 (cwd None)
Thread-4249::DEBUG::2015-11-09 11:17:49,989::utils::694::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-4249::DEBUG::2015-11-09 11:17:49,993::caps::807::root::(_getKeyPackages) rpm package ('glusterfs-rdma',) not found
Thread-4249::DEBUG::2015-11-09 11:17:49,997::caps::807::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found
Thread-4249::DEBUG::2015-11-09 11:17:49,997::caps::807::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found
Thread-4249::DEBUG::2015-11-09 11:17:50,001::caps::807::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found
Thread-4249::DEBUG::2015-11-09 11:17:50,002::caps::807::root::(_getKeyPackages) rpm package ('gluster-swift',) not found
Thread-4249::DEBUG::2015-11-09 11:17:50,002::caps::807::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found
Thread-4249::DEBUG::2015-11-09 11:17:50,003::caps::807::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found
Thread-4249::DEBUG::2015-11-09 11:17:50,003::caps::807::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found
Thread-4249::DEBUG::2015-11-09 11:17:50,005::bindingxmlrpc::1264::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:1954deeb7a38'}], 'FC': []}, 'packages2': {'kernel': {'release': '229.20.1.el7.x86_64', 'buildtime': 1446588607.0, 'version': '3.10.0'}, 'glusterfs-fuse': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'spice-server': {'release': '9.el7_1.3', 'buildtime': 1444691699L, 'version': '0.12.4'}, 'librbd1': {'release': '2.el7', 'buildtime': 1425594433L, 'version': '0.80.7'}, 'vdsm': {'release': '0.el7.centos', 'buildtime': 1446474396L, 'version': '4.17.10.1'}, 'qemu-kvm': {'release': '29.1.el7', 'buildtime': 1444310806L, 'version': '2.3.0'}, 'glusterfs': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'libvirt': {'release': '16.el7_1.5', 'buildtime': 1446559281L, 'version': '1.2.8'}, 'qemu-img': {'release': '29.1.el7', 'buildtime': 1444310806L, 'version': '2.3.0'}, 'mom': {'release': '2.el7', 'buildtime': 1442501481L, 'version': '0.5.1'}, 'glusterfs-geo-replication': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-server': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-cli': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}}, 'numaNodeDistance': {'0': [10]}, 'cpuModel': 'Intel(R) Core(TM)2 Quad CPU Q8400 @ 2.66GHz', 'liveMerge': 'true', 'hooks': {'before_vm_start': {'50_hostedengine': {'md5': '2a6d96c26a3599812be6cf1a13d9f485'}}}, 'vmTypes': ['kvm'], 'selinux': {'mode': '-1'}, 'liveSnapshot': 'true', 'kdumpStatus': 0, 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '10.10.10.211', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'yes', 'IPADDR': '10.10.10.211', 'HOTPLUG': 'no', 'GATEWAY': '10.10.10.254', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'bridged': True, 'ipv6addrs': ['fe80::6e62:6dff:feb3:3b72/64'], 'gateway': '10.10.10.254', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['10.10.10.211/24'], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['vnet0', 'enp2s0']}}, 'bridges': {'ovirtmgmt': {'addr': '10.10.10.211', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'yes', 'IPADDR': '10.10.10.211', 'HOTPLUG': 'no', 'GATEWAY': '10.10.10.254', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::6e62:6dff:feb3:3b72/64'], 'gateway': '10.10.10.254', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['10.10.10.211/24'], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['vnet0', 'enp2s0'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '172', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.6c626db33b72', 'bridge_id': '8000.6c626db33b72', 'topology_change_timer': '0', 'ageing_time': '30000', 'nf_call_ip6tables': '0', 'gc_timer': '25099', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}}, 'uuid': 'c2cac9d6-9ed7-44f0-8bbc-eff4c71db7ca', 'onlineCpus': '0,1,2,3', 'nics': {'enp2s0': {'addr': '', 'ipv6gateway': '::', 'ipv6addrs': ['fe80::6e62:6dff:feb3:3b72/64'], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'BRIDGE': 'ovirtmgmt', 'IPV6INIT': 'no', 'NM_CONTROLLED': 'no', 'HWADDR': '6c:62:6d:b3:3b:72', 'BOOTPROTO': 'none', 'DEVICE': 'enp2s0', 'ONBOOT': 'yes'}, 'hwaddr': '6c:62:6d:b3:3b:72', 'speed': 1000, 'gateway': ''}}, 'software_revision': '0', 'hostdevPassthrough': 'false', 'clusterLevels': ['3.4', '3.5', '3.6'], 'cpuFlags': 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,xsave,lahf_lm,dtherm,tpr_shadow,vnmi,flexpriority,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:1954deeb7a38', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.4', '3.5', '3.6'], 'autoNumaBalancing': 0, 'additionalFeatures': ['GLUSTER_SNAPSHOT', 'GLUSTER_GEO_REPLICATION', 'GLUSTER_BRICK_MANAGEMENT'], 'reservedMem': '321', 'bondings': {'bond0': {'ipv4addrs': [], 'addr': '', 'cfg': {'BOOTPROTO': 'none'}, 'ipv6addrs': [], 'active_slave': '', 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'slaves': [], 'hwaddr': 'ba:5f:22:a3:17:07', 'ipv6gateway': '::', 'gateway': '', 'opts': {}}}, 'software_version': '4.17', 'memSize': '3782', 'cpuSpeed': '2670.000', 'numaNodes': {'0': {'totalMemory': '3782', 'cpus': [0, 1, 2, 3]}}, 'cpuSockets': '1', 'vlans': {}, 'lastClientIface': 'lo', 'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65', 'version_name': 'Snow Man', 'cpuThreads': '4', 'emulatedMachines': ['pc-i440fx-rhel7.1.0', 'rhel6.3.0', 'pc-q35-rhel7.2.0', 'pc-i440fx-rhel7.0.0', 'rhel6.1.0', 'rhel6.6.0', 'rhel6.2.0', 'pc', 'pc-q35-rhel7.0.0', 'pc-q35-rhel7.1.0', 'q35', 'pc-i440fx-rhel7.2.0', 'rhel6.4.0', 'rhel6.0.0', 'rhel6.5.0'], 'rngSources': ['random'], 'operatingSystem': {'release': '1.1503.el7.centos.2.8', 'version': '7', 'name': 'RHEL'}, 'lastClient': '127.0.0.1'}}
Thread-4249::INFO::2015-11-09 11:17:50,020::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57852 stopped
mailbox.SPMMonitor::DEBUG::2015-11-09 11:17:50,797::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) dd if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000 (cwd None)
mailbox.SPMMonitor::DEBUG::2015-11-09 11:17:50,815::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.00511026 s, 200 MB/s\n'; <rc> = 0
Reactor thread::INFO::2015-11-09 11:17:52,098::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:57853
Reactor thread::DEBUG::2015-11-09 11:17:52,106::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-11-09 11:17:52,107::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:57853
Reactor thread::DEBUG::2015-11-09 11:17:52,107::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 57853)
BindingXMLRPC::INFO::2015-11-09 11:17:52,108::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:57853
Thread-4250::INFO::2015-11-09 11:17:52,110::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57853 started
Thread-4250::DEBUG::2015-11-09 11:17:52,111::bindingxmlrpc::1257::vds::(wrapper) client [127.0.0.1]::call getHardwareInfo with () {}
Thread-4250::DEBUG::2015-11-09 11:17:52,112::bindingxmlrpc::1264::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'MS-7529', 'systemSerialNumber': 'To Be Filled By O.E.M.', 'systemFamily': 'To Be Filled By O.E.M.', 'systemVersion': '1.0', 'systemUUID': '00000000-0000-0000-0000-6C626DB33B72', 'systemManufacturer': 'MICRO-STAR INTERNATIONAL CO.,LTD'}}
Thread-4250::INFO::2015-11-09 11:17:52,114::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57853 stopped
Reactor thread::INFO::2015-11-09 11:17:52,116::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:57854
Reactor thread::DEBUG::2015-11-09 11:17:52,124::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-11-09 11:17:52,124::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:57854
BindingXMLRPC::INFO::2015-11-09 11:17:52,125::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:57854
Reactor thread::DEBUG::2015-11-09 11:17:52,125::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 57854)
Thread-4251::INFO::2015-11-09 11:17:52,128::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57854 started
Thread-4251::DEBUG::2015-11-09 11:17:52,129::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-4251::DEBUG::2015-11-09 11:17:52,130::task::595::Storage.TaskManager.Task::(_updateState) Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::moving from state init -> state preparing
Thread-4251::INFO::2015-11-09 11:17:52,130::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id': '2c69bdcf-793b-4fda-b326-b8aa6c33ade0', 'vfs_type': 'glusterfs', 'connection': 'ovirt02.mafia.kru:/engine', 'user': 'kvm'}], options=None)
Thread-4251::DEBUG::2015-11-09 11:17:52,132::hsm::2417::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/*
Thread-4251::DEBUG::2015-11-09 11:17:52,146::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD uuids: (u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', u'0af99439-f140-4636-90f7-f43904735da0')
Thread-4251::DEBUG::2015-11-09 11:17:52,147::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {b4c488af-9d2f-4b7b-a6f6-74a0bac06c41: storage.glusterSD.findDomain, 0af99439-f140-4636-90f7-f43904735da0: storage.glusterSD.findDomain}
Thread-4251::INFO::2015-11-09 11:17:52,147::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '2c69bdcf-793b-4fda-b326-b8aa6c33ade0'}]}
Thread-4251::DEBUG::2015-11-09 11:17:52,147::task::1191::Storage.TaskManager.Task::(prepare) Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::finished: {'statuslist': [{'status': 0, 'id': '2c69bdcf-793b-4fda-b326-b8aa6c33ade0'}]}
Thread-4251::DEBUG::2015-11-09 11:17:52,147::task::595::Storage.TaskManager.Task::(_updateState) Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::moving from state preparing -> state finished
Thread-4251::DEBUG::2015-11-09 11:17:52,147::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-4251::DEBUG::2015-11-09 11:17:52,148::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-4251::DEBUG::2015-11-09 11:17:52,148::task::993::Storage.TaskManager.Task::(_decref) Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::ref 0 aborting False
Thread-4251::INFO::2015-11-09 11:17:52,149::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57854 stopped
Reactor thread::INFO::2015-11-09 11:17:52,150::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:57855
Reactor thread::DEBUG::2015-11-09 11:17:52,158::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-11-09 11:17:52,158::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:57855
BindingXMLRPC::INFO::2015-11-09 11:17:52,159::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:57855
Reactor thread::DEBUG::2015-11-09 11:17:52,159::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 57855)
Thread-4255::INFO::2015-11-09 11:17:52,162::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57855 started
Thread-4255::DEBUG::2015-11-09 11:17:52,162::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-4255::DEBUG::2015-11-09 11:17:52,163::task::595::Storage.TaskManager.Task::(_updateState) Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::moving from state init -> state preparing
Thread-4255::INFO::2015-11-09 11:17:52,163::logUtils::48::dispatcher::(wrapper) Run and protect: getStorageDomainStats(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', options=None)
Thread-4255::DEBUG::2015-11-09 11:17:52,164::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`41531754-8ba9-4fc4-8788-d4d67fa33e5c`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2848' at 'getStorageDomainStats'
Thread-4255::DEBUG::2015-11-09 11:17:52,164::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type 'shared'
Thread-4255::DEBUG::2015-11-09 11:17:52,164::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now locking as 'shared' (1 active user)
Thread-4255::DEBUG::2015-11-09 11:17:52,164::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`41531754-8ba9-4fc4-8788-d4d67fa33e5c`::Granted request
Thread-4255::DEBUG::2015-11-09 11:17:52,165::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::_resourcesAcquired: Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 (shared)
Thread-4255::DEBUG::2015-11-09 11:17:52,165::task::993::Storage.TaskManager.Task::(_decref) Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::ref 1 aborting False
Thread-4255::DEBUG::2015-11-09 11:17:52,165::misc::750::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-4255::DEBUG::2015-11-09 11:17:52,165::misc::753::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-4255::DEBUG::2015-11-09 11:17:52,165::misc::750::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-4255::DEBUG::2015-11-09 11:17:52,165::misc::753::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-4255::DEBUG::2015-11-09 11:17:52,166::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
Thread-4255::DEBUG::2015-11-09 11:17:52,166::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-4255::DEBUG::2015-11-09 11:17:52,183::misc::760::Storage.SamplingMethod::(__call__) Returning last result
Thread-4255::DEBUG::2015-11-09 11:17:52,183::misc::750::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan)
Thread-4255::DEBUG::2015-11-09 11:17:52,184::misc::753::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-4255::DEBUG::2015-11-09 11:17:52,184::hba::56::Storage.HBA::(rescan) Starting scan
Thread-4255::DEBUG::2015-11-09 11:17:52,295::hba::62::Storage.HBA::(rescan) Scan finished
Thread-4255::DEBUG::2015-11-09 11:17:52,296::misc::760::Storage.SamplingMethod::(__call__) Returning last result
Thread-4255::DEBUG::2015-11-09 11:17:52,296::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
Thread-4255::DEBUG::2015-11-09 11:17:52,362::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-4255::DEBUG::2015-11-09 11:17:52,362::utils::676::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None)
Thread-4255::DEBUG::2015-11-09 11:17:52,371::utils::694::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-4255::DEBUG::2015-11-09 11:17:52,372::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-4255::DEBUG::2015-11-09 11:17:52,372::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-4255::DEBUG::2015-11-09 11:17:52,373::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-4255::DEBUG::2015-11-09 11:17:52,373::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-4255::DEBUG::2015-11-09 11:17:52,373::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-4255::DEBUG::2015-11-09 11:17:52,373::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-4255::DEBUG::2015-11-09 11:17:52,374::misc::760::Storage.SamplingMethod::(__call__) Returning last result
Thread-4255::DEBUG::2015-11-09 11:17:52,386::fileSD::157::Storage.StorageDomainManifest::(__init__) Reading domain in path /rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
Thread-4255::DEBUG::2015-11-09 11:17:52,387::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-4255::DEBUG::2015-11-09 11:17:52,395::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=ovirt02.mafia.kru:/engine', 'ROLE=Regular', 'SDUUID=b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=cb09606ada74ed4155ad158923dd930264780fc8']
Thread-4255::DEBUG::2015-11-09 11:17:52,398::fileSD::647::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []
Thread-4255::INFO::2015-11-09 11:17:52,399::sd::442::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace b4c488af-9d2f-4b7b-a6f6-74a0bac06c41_imageNS already registered
Thread-4255::INFO::2015-11-09 11:17:52,399::sd::450::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace b4c488af-9d2f-4b7b-a6f6-74a0bac06c41_volumeNS already registered
Thread-4255::INFO::2015-11-09 11:17:52,400::logUtils::51::dispatcher::(wrapper) Run and protect: getStorageDomainStats, Return response: {'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '210878988288', 'disktotal': '214643507200', 'mdafree': 0}}
Thread-4255::DEBUG::2015-11-09 11:17:52,401::task::1191::Storage.TaskManager.Task::(prepare) Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::finished: {'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '210878988288', 'disktotal': '214643507200', 'mdafree': 0}}
Thread-4255::DEBUG::2015-11-09 11:17:52,401::task::595::Storage.TaskManager.Task::(_updateState) Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::moving from state preparing -> state finished
Thread-4255::DEBUG::2015-11-09 11:17:52,401::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj: 'None'>}
Thread-4255::DEBUG::2015-11-09 11:17:52,401::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancel
All requests {}<br />Thread-4255::DEBUG::2015-11-09 11:17:52,402::resourceM=
anager::616::Storage.ResourceManager::(releaseResource) Trying to release r=
esource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'<br />Thread-4255::DE=
BUG::2015-11-09 11:17:52,402::resourceManager::635::Storage.ResourceManager=
::(releaseResource) Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0=
bac06c41' (0 active users)<br />Thread-4255::DEBUG::2015-11-09 11:17:52,402=
::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource=
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free, finding out if any=
one is waiting for it.<br />Thread-4255::DEBUG::2015-11-09 11:17:52,402::re=
sourceManager::649::Storage.ResourceManager::(releaseResource) No one is wa=
iting for resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing=
records.<br />Thread-4255::DEBUG::2015-11-09 11:17:52,402::task::993::Stor=
age.TaskManager.Task::(_decref) Task=3D`04d7087f-f948-4221-8c0f-3e07e7d9bed=
8`::ref 0 aborting False<br />Thread-4255::INFO::2015-11-09 11:17:52,404::x=
mlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0=
=2E0.1:57855 stopped<br />Reactor thread::INFO::2015-11-09 11:17:52,405::pr=
otocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Acceptin=
g connection from 127.0.0.1:57856<br />Reactor thread::DEBUG::2015-11-09 11=
:17:52,413::protocoldetector::82::ProtocolDetector.Detector::(__init__) Usi=
ng required_size=3D11<br />Reactor thread::INFO::2015-11-09 11:17:52,414::p=
rotocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected pro=
tocol xml from 127.0.0.1:57856<br />BindingXMLRPC::INFO::2015-11-09 11:17:5=
2,414::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request hand=
ler for 127.0.0.1:57856<br />Reactor thread::DEBUG::2015-11-09 11:17:52,414=
::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected =
from ('127.0.0.1', 57856)<br />Thread-4259::INFO::2015-11-09 11:17:52,417::=
xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127=
=2E0.0.1:57856 started<br />Thread-4259::DEBUG::2015-11-09 11:17:52,418::bi=
ndingxmlrpc::325::vds::(wrapper) client [127.0.0.1]<br />Thread-4259::DEBUG=
::2015-11-09 11:17:52,418::task::595::Storage.TaskManager.Task::(_updateSta=
te) Task=3D`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::moving from state init -=
> state preparing<br />Thread-4259::INFO::2015-11-09 11:17:52,419::logUt=
ils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID=3D'b4c4=
88af-9d2f-4b7b-a6f6-74a0bac06c41', spUUID=3D'00000000-0000-0000-0000-000000=
000000', imgUUID=3D'56461302-0710-4df0-964d-5e7b1ff07828', leafUUID=3D'8f8e=
e034-de86-4438-b6eb-9109faa8b3d3')<br />Thread-4259::DEBUG::2015-11-09 11:1=
7:52,419::resourceManager::198::Storage.ResourceManager.Request::(__init__)=
ResName=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D`bec7c8c3-=
42b9-4acb-88cf-841d9dc28fb0`::Request was made in '/usr/share/vdsm/storage/=
hsm.py' line '3205' at 'prepareImage'<br />Thread-4259::DEBUG::2015-11-09 1=
1:17:52,419::resourceManager::542::Storage.ResourceManager::(registerResour=
ce) Trying to register resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c=
41' for lock type 'shared'<br />Thread-4259::DEBUG::2015-11-09 11:17:52,420=
::resourceManager::601::Storage.ResourceManager::(registerResource) Resourc=
e 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now locking as 's=
hared' (1 active user)<br />Thread-4259::DEBUG::2015-11-09 11:17:52,420::re=
sourceManager::238::Storage.ResourceManager.Request::(grant) ResName=3D`Sto=
rage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D`bec7c8c3-42b9-4acb-88cf-=
841d9dc28fb0`::Granted request<br />Thread-4259::DEBUG::2015-11-09 11:17:52=
,420::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=3D`9dbe0=
1b2-e3e0-466b-90e1-b9803dfce88b`::_resourcesAcquired: Storage.b4c488af-9d2f=
-4b7b-a6f6-74a0bac06c41 (shared)<br />Thread-4259::DEBUG::2015-11-09 11:17:=
52,420::task::993::Storage.TaskManager.Task::(_decref) Task=3D`9dbe01b2-e3e=
0-466b-90e1-b9803dfce88b`::ref 1 aborting False<br />Thread-4259::DEBUG::20=
15-11-09 11:17:52,445::fileSD::536::Storage.StorageDomain::(activateVolumes=
) Fixing permissions on /rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_=
engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-=
5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3<br />Thread-4259::DEBUG::=
2015-11-09 11:17:52,446::fileUtils::143::Storage.fileUtils::(createdir) Cre=
ating directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41=
mode: None<br />Thread-4259::WARNING::2015-11-09 11:17:52,446::fileUtils::=
152::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage/b4c488af-9d2f=
-4b7b-a6f6-74a0bac06c41 already exists<br />Thread-4259::DEBUG::2015-11-09 =
11:17:52,446::fileSD::511::Storage.StorageDomain::(createImageLinks) Creati=
ng symlink from /rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b=
4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff0=
7828 to /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302=
-0710-4df0-964d-5e7b1ff07828<br />Thread-4259::DEBUG::2015-11-09 11:17:52,4=
47::fileSD::516::Storage.StorageDomain::(createImageLinks) img run dir alre=
ady exists: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/5646=
1302-0710-4df0-964d-5e7b1ff07828<br />Thread-4259::DEBUG::2015-11-09 11:17:=
52,448::fileVolume::535::Storage.Volume::(validateVolumePath) validate path=
for 8f8ee034-de86-4438-b6eb-9109faa8b3d3<br />Thread-4259::INFO::2015-11-0=
9 11:17:52,450::logUtils::51::dispatcher::(wrapper) Run and protect: prepar=
eImage, Return response: {'info': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a=
0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-cente=
r/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06=
c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-910=
9faa8b3d3', 'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath=
': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2=
f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee0=
34-de86-4438-b6eb-9109faa8b3d3.lease', 'imageID': '56461302-0710-4df0-964d-=
5e7b1ff07828'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a=
0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109=
faa8b3d3', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac=
06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mn=
t/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/=
images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa=
8b3d3', 'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath': u=
'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b=
7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-d=
e86-4438-b6eb-9109faa8b3d3.lease', 'imageID': '56461302-0710-4df0-964d-5e7b=
1ff07828'}]}<br />Thread-4259::DEBUG::2015-11-09 11:17:52,450::task::1191::=
Storage.TaskManager.Task::(prepare) Task=3D`9dbe01b2-e3e0-466b-90e1-b9803df=
ce88b`::finished: {'info': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c=
41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/g=
lusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/ima=
ges/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3=
d3', 'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath': u'/r=
hev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-=
a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86=
-4438-b6eb-9109faa8b3d3.lease', 'imageID': '56461302-0710-4df0-964d-5e7b1ff=
07828'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c=
41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d=
3', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',=
'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glust=
erSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/=
56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',=
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath': u'/rhev/=
data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6=
-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-443=
8-b6eb-9109faa8b3d3.lease', 'imageID': '56461302-0710-4df0-964d-5e7b1ff0782=
8'}]}<br />Thread-4259::DEBUG::2015-11-09 11:17:52,450::task::595::Storage=
=2ETaskManager.Task::(_updateState) Task=3D`9dbe01b2-e3e0-466b-90e1-b9803df=
ce88b`::moving from state preparing -> state finished<br />Thread-4259::=
DEBUG::2015-11-09 11:17:52,450::resourceManager::940::Storage.ResourceManag=
er.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.b4c=
488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef 'Storage.b4c488af-9d2f=
-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj: 'None'>}<br />Thread-4259=
::DEBUG::2015-11-09 11:17:52,450::resourceManager::977::Storage.ResourceMan=
ager.Owner::(cancelAll) Owner.cancelAll requests {}<br />Thread-4259::DEBUG=
::2015-11-09 11:17:52,451::resourceManager::616::Storage.ResourceManager::(=
releaseResource) Trying to release resource 'Storage.b4c488af-9d2f-4b7b-a6f=
6-74a0bac06c41'<br />Thread-4259::DEBUG::2015-11-09 11:17:52,451::resourceM=
anager::635::Storage.ResourceManager::(releaseResource) Released resource '=
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0 active users)<br />Thread-=
4259::DEBUG::2015-11-09 11:17:52,451::resourceManager::641::Storage.Resourc=
eManager::(releaseResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0b=
ac06c41' is free, finding out if anyone is waiting for it.<br />Thread-4259=
::DEBUG::2015-11-09 11:17:52,451::resourceManager::649::Storage.ResourceMan=
ager::(releaseResource) No one is waiting for resource 'Storage.b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41', Clearing records.<br />Thread-4259::DEBUG::2015=
-11-09 11:17:52,451::task::993::Storage.TaskManager.Task::(_decref) Task=3D=
`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::ref 0 aborting False<br />Thread-42=
59::INFO::2015-11-09 11:17:52,454::xmlrpc::92::vds.XMLRPCServer::(_process_=
requests) Request handler for 127.0.0.1:57856 stopped<br />Reactor thread::=
INFO::2015-11-09 11:17:52,454::protocoldetector::72::ProtocolDetector.Accep=
torImpl::(handle_accept) Accepting connection from 127.0.0.1:57857<br />Rea=
ctor thread::DEBUG::2015-11-09 11:17:52,463::protocoldetector::82::Protocol=
Detector.Detector::(__init__) Using required_size=3D11<br />Reactor thread:=
:INFO::2015-11-09 11:17:52,463::protocoldetector::118::ProtocolDetector.Det=
ector::(handle_read) Detected protocol xml from 127.0.0.1:57857<br />Reacto=
r thread::DEBUG::2015-11-09 11:17:52,464::bindingxmlrpc::1297::XmlDetector:=
:(handle_socket) xml over http detected from ('127.0.0.1', 57857)<br />Bind=
ingXMLRPC::INFO::2015-11-09 11:17:52,464::xmlrpc::73::vds.XMLRPCServer::(ha=
ndle_request) Starting request handler for 127.0.0.1:57857<br />Thread-4260=
::INFO::2015-11-09 11:17:52,466::xmlrpc::84::vds.XMLRPCServer::(_process_re=
quests) Request handler for 127.0.0.1:57857 started<br />Thread-4260::DEBUG=
::2015-11-09 11:17:52,467::bindingxmlrpc::325::vds::(wrapper) client [127=
=2E0.0.1]<br />Thread-4260::DEBUG::2015-11-09 11:17:52,467::task::595::Stor=
age.TaskManager.Task::(_updateState) Task=3D`aed16a50-ede9-4ff5-92ef-356692=
fd56ae`::moving from state init -> state preparing<br />Thread-4260::INF=
O::2015-11-09 11:17:52,467::logUtils::48::dispatcher::(wrapper) Run and pro=
tect: prepareImage(sdUUID=3D'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', spUUID=
=3D'00000000-0000-0000-0000-000000000000', imgUUID=3D'fd81353f-b654-4493-bc=
af-2f417849b830', leafUUID=3D'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9')<br />T=
hread-4260::DEBUG::2015-11-09 11:17:52,468::resourceManager::198::Storage=
=2EResourceManager.Request::(__init__) ResName=3D`Storage.b4c488af-9d2f-4b7=
b-a6f6-74a0bac06c41`ReqID=3D`974119cd-1351-46e9-8062-ffb1298c4ac9`::Request=
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at 'prepareImage'=
<br />Thread-4260::DEBUG::2015-11-09 11:17:52,468::resourceManager::542::St=
orage.ResourceManager::(registerResource) Trying to register resource 'Stor=
age.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type 'shared'<br />Threa=
d-4260::DEBUG::2015-11-09 11:17:52,468::resourceManager::601::Storage.Resou=
rceManager::(registerResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74=
a0bac06c41' is free. Now locking as 'shared' (1 active user)<br />Thread-42=
60::DEBUG::2015-11-09 11:17:52,468::resourceManager::238::Storage.ResourceM=
anager.Request::(grant) ResName=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac0=
6c41`ReqID=3D`974119cd-1351-46e9-8062-ffb1298c4ac9`::Granted request<br />T=
hread-4260::DEBUG::2015-11-09 11:17:52,469::task::827::Storage.TaskManager=
=2ETask::(resourceAcquired) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::=
_resourcesAcquired: Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 (shared)<b=
r />Thread-4260::DEBUG::2015-11-09 11:17:52,469::task::993::Storage.TaskMan=
ager.Task::(_decref) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::ref 1 a=
borting False<br />Thread-4260::DEBUG::2015-11-09 11:17:52,485::fileSD::536=
::Storage.StorageDomain::(activateVolumes) Fixing permissions on /rhev/data=
-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a=
0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a2=
27-3819b6ecfdd9<br />Thread-4260::DEBUG::2015-11-09 11:17:52,486::fileUtils=
::143::Storage.fileUtils::(createdir) Creating directory: /var/run/vdsm/sto=
rage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 mode: None<br />Thread-4260::WARN=
ING::2015-11-09 11:17:52,487::fileUtils::152::Storage.fileUtils::(createdir=
) Dir /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already ex=
ists<br />Thread-4260::DEBUG::2015-11-09 11:17:52,487::fileSD::511::Storage=
=2EStorageDomain::(createImageLinks) Creating symlink from /rhev/data-cente=
r/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06=
c41/images/fd81353f-b654-4493-bcaf-2f417849b830 to /var/run/vdsm/storage/b4=
c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830<br =
/>Thread-4260::DEBUG::2015-11-09 11:17:52,487::fileSD::516::Storage.Storage=
Domain::(createImageLinks) img run dir already exists: /var/run/vdsm/storag=
e/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830=
<br />Thread-4260::DEBUG::2015-11-09 11:17:52,488::fileVolume::535::Storage=
=2EVolume::(validateVolumePath) validate path for 8bb29fcb-c109-4f0a-a227-3=
819b6ecfdd9<br />Thread-4260::INFO::2015-11-09 11:17:52,490::logUtils::51::=
dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'inf=
o': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',=
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path': u'/=
var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-449=
3-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'imgVolumesInfo'=
: [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', =
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}]}<br />Threa=
d-4260::DEBUG::2015-11-09 11:17:52,490::task::1191::Storage.TaskManager.Tas=
k::(prepare) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::finished: {'inf=
o': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',=
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path': u'/=
var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-449=
3-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'imgVolumesInfo'=
: [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', =
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}]}<br />Threa=
d-4260::DEBUG::2015-11-09 11:17:52,490::task::595::Storage.TaskManager.Task=
::(_updateState) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::moving from=
state preparing -> state finished<br />Thread-4260::DEBUG::2015-11-09 1=
1:17:52,490::resourceManager::940::Storage.ResourceManager.Owner::(releaseA=
ll) Owner.releaseAll requests {} resources {'Storage.b4c488af-9d2f-4b7b-a6f=
6-74a0bac06c41': < ResourceRef 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac0=
6c41', isValid: 'True' obj: 'None'>}<br />Thread-4260::DEBUG::2015-11-09=
11:17:52,491::resourceManager::977::Storage.ResourceManager.Owner::(cancel=
All) Owner.cancelAll requests {}<br />Thread-4260::DEBUG::2015-11-09 11:17:=
52,491::resourceManager::616::Storage.ResourceManager::(releaseResource) Tr=
ying to release resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'<br =
/>Thread-4260::DEBUG::2015-11-09 11:17:52,491::resourceManager::635::Storag=
e.ResourceManager::(releaseResource) Released resource 'Storage.b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41' (0 active users)<br />Thread-4260::DEBUG::2015-1=
1-09 11:17:52,491::resourceManager::641::Storage.ResourceManager::(releaseR=
esource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free, f=
inding out if anyone is waiting for it.<br />Thread-4260::DEBUG::2015-11-09=
11:17:52,491::resourceManager::649::Storage.ResourceManager::(releaseResou=
rce) No one is waiting for resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0ba=
c06c41', Clearing records.<br />Thread-4260::DEBUG::2015-11-09 11:17:52,492=
::task::993::Storage.TaskManager.Task::(_decref) Task=3D`aed16a50-ede9-4ff5=
-92ef-356692fd56ae`::ref 0 aborting False<br />Thread-4260::INFO::2015-11-0=
9 11:17:52,494::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request h=
andler for 127.0.0.1:57857 stopped<br />Reactor thread::INFO::2015-11-09 11=
:17:52,494::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_ac=
cept) Accepting connection from 127.0.0.1:57858<br />Reactor thread::DEBUG:=
:2015-11-09 11:17:52,503::protocoldetector::82::ProtocolDetector.Detector::=
(__init__) Using required_size=3D11<br />Reactor thread::INFO::2015-11-09 1=
1:17:52,504::protocoldetector::118::ProtocolDetector.Detector::(handle_read=
) Detected protocol xml from 127.0.0.1:57858<br />BindingXMLRPC::INFO::2015=
-11-09 11:17:52,504::xmlrpc::73::vds.XMLRPCServer::(handle_request) Startin=
g request handler for 127.0.0.1:57858<br />Reactor thread::DEBUG::2015-11-0=
9 11:17:52,504::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over =
http detected from ('127.0.0.1', 57858)<br />Thread-4261::INFO::2015-11-09 =
11:17:52,507::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request han=
dler for 127.0.0.1:57858 started<br />Thread-4261::DEBUG::2015-11-09 11:17:=
52,508::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]<br />Thread-4=
261::DEBUG::2015-11-09 11:17:52,508::task::595::Storage.TaskManager.Task::(=
_updateState) Task=3D`d39463fd-486f-4280-903a-51b72862b648`::moving from st=
ate init -> state preparing<br />Thread-4261::INFO::2015-11-09 11:17:52,=
509::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUU=
ID=3D'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', spUUID=3D'00000000-0000-0000-0=
000-000000000000', imgUUID=3D'0e1c20d1-94aa-4003-8e12-0dbbf06a6af8', leafUU=
ID=3D'3fc3362d-ab6d-4e06-bd72-82d5750c7095')<br />Thread-4261::DEBUG::2015-=
11-09 11:17:52,509::resourceManager::198::Storage.ResourceManager.Request::=
(__init__) ResName=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D=
`240c2aba-6c2e-44da-890d-c3d605e1933f`::Request was made in '/usr/share/vds=
m/storage/hsm.py' line '3205' at 'prepareImage'<br />Thread-4261::DEBUG::20=
15-11-09 11:17:52,509::resourceManager::542::Storage.ResourceManager::(regi=
sterResource) Trying to register resource 'Storage.b4c488af-9d2f-4b7b-a6f6-=
74a0bac06c41' for lock type 'shared'<br />Thread-4261::DEBUG::2015-11-09 11=
:17:52,509::resourceManager::601::Storage.ResourceManager::(registerResourc=
e) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now loc=
king as 'shared' (1 active user)<br />Thread-4261::DEBUG::2015-11-09 11:17:=
52,510::resourceManager::238::Storage.ResourceManager.Request::(grant) ResN=
ame=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D`240c2aba-6c2e-=
44da-890d-c3d605e1933f`::Granted request<br />Thread-4261::DEBUG::2015-11-0=
9 11:17:52,510::task::827::Storage.TaskManager.Task::(resourceAcquired) Tas=
k=3D`d39463fd-486f-4280-903a-51b72862b648`::_resourcesAcquired: Storage.b4c=
488af-9d2f-4b7b-a6f6-74a0bac06c41 (shared)<br />Thread-4261::DEBUG::2015-11=
-09 11:17:52,510::task::993::Storage.TaskManager.Task::(_decref) Task=3D`d3=
9463fd-486f-4280-903a-51b72862b648`::ref 1 aborting False<br />Thread-4261:=
:DEBUG::2015-11-09 11:17:52,526::fileSD::536::Storage.StorageDomain::(activ=
ateVolumes) Fixing permissions on /rhev/data-center/mnt/glusterSD/ovirt02=
=2Emafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-9=
4aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095<br />Thread=
-4261::DEBUG::2015-11-09 11:17:52,528::fileUtils::143::Storage.fileUtils::(=
createdir) Creating directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f=
6-74a0bac06c41 mode: None<br />Thread-4261::WARNING::2015-11-09 11:17:52,52=
8::fileUtils::152::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage=
/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already exists<br />Thread-4261::DEBU=
G::2015-11-09 11:17:52,528::fileSD::511::Storage.StorageDomain::(createImag=
eLinks) Creating symlink from /rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-40=
03-8e12-0dbbf06a6af8 to /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0b=
ac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8<br />Thread-4261::DEBUG::2015-=
11-09 11:17:52,528::fileSD::516::Storage.StorageDomain::(createImageLinks) =
img run dir already exists: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-7=
4a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8<br />Thread-4261::DEBUG::2=
015-11-09 11:17:52,530::fileVolume::535::Storage.Volume::(validateVolumePat=
h) validate path for 3fc3362d-ab6d-4e06-bd72-82d5750c7095<br />Thread-4261:=
:INFO::2015-11-09 11:17:52,531::logUtils::51::dispatcher::(wrapper) Run and=
protect: prepareImage, Return response: {'info': {'domainID': 'b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'=
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7=
b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab=
6d-4e06-bd72-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7=
095', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_eng=
ine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0db=
bf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1=
-94aa-4003-8e12-0dbbf06a6af8'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6=
d-4e06-bd72-82d5750c7095', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4=
b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhe=
v/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6=
f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4=
e06-bd72-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095'=
, 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/=
b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06=
a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1-94a=
a-4003-8e12-0dbbf06a6af8'}]}<br />Thread-4261::DEBUG::2015-11-09 11:17:52,5=
31::task::1191::Storage.TaskManager.Task::(prepare) Task=3D`d39463fd-486f-4=
280-903a-51b72862b648`::finished: {'info': {'domainID': 'b4c488af-9d2f-4b7b=
-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/d=
ata-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-=
74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06=
-bd72-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', '=
leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c=
488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6a=
f8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1-94aa-4=
003-8e12-0dbbf06a6af8'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b=
-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-=
bd72-82d5750c7095', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f=
6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-=
center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0=
bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd7=
2-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leas=
ePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488a=
f-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3=
fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1-94aa-4003-=
8e12-0dbbf06a6af8'}]}<br />Thread-4261::DEBUG::2015-11-09 11:17:52,532::tas=
k::595::Storage.TaskManager.Task::(_updateState) Task=3D`d39463fd-486f-4280=
-903a-51b72862b648`::moving from state preparing -> state finished<br />=
Thread-4261::DEBUG::2015-11-09 11:17:52,532::resourceManager::940::Storage=
=2EResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resourc=
es {'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef 'Stora=
ge.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj: 'None'>}<=
br />Thread-4261::DEBUG::2015-11-09 11:17:52,532::resourceManager::977::Sto=
rage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}<br />Th=
read-4261::DEBUG::2015-11-09 11:17:52,532::resourceManager::616::Storage.Re=
sourceManager::(releaseResource) Trying to release resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4261::DEBUG::2015-11-09 11:17:52,532::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0 active users)
Thread-4261::DEBUG::2015-11-09 11:17:52,533::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free, finding out if anyone is waiting for it.
Thread-4261::DEBUG::2015-11-09 11:17:52,533::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing records.
Thread-4261::DEBUG::2015-11-09 11:17:52,533::task::993::Storage.TaskManager.Task::(_decref) Task=`d39463fd-486f-4280-903a-51b72862b648`::ref 0 aborting False
Thread-4261::INFO::2015-11-09 11:17:52,535::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57858 stopped
Reactor thread::INFO::2015-11-09 11:17:52,536::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:57859
Reactor thread::DEBUG::2015-11-09 11:17:52,544::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-11-09 11:17:52,545::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:57859
Reactor thread::DEBUG::2015-11-09 11:17:52,545::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 57859)
BindingXMLRPC::INFO::2015-11-09 11:17:52,545::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:57859
Thread-4262::INFO::2015-11-09 11:17:52,548::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57859 started
Thread-4262::DEBUG::2015-11-09 11:17:52,548::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-4262::DEBUG::2015-11-09 11:17:52,549::task::595::Storage.TaskManager.Task::(_updateState) Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::moving from state init -> state preparing
Thread-4262::INFO::2015-11-09 11:17:52,549::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', spUUID='00000000-0000-0000-0000-000000000000', imgUUID='350fb787-049a-4174-8914-f371aabfa72c', leafUUID='02c5d59d-638c-4672-814d-d734e334e24a')
Thread-4262::DEBUG::2015-11-09 11:17:52,549::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`fd9ea6d0-3a31-4ec6-a74c-8b84b2e51746`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at 'prepareImage'
Thread-4262::DEBUG::2015-11-09 11:17:52,550::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type 'shared'
Thread-4262::DEBUG::2015-11-09 11:17:52,550::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now locking as 'shared' (1 active user)
Thread-4262::DEBUG::2015-11-09 11:17:52,550::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`fd9ea6d0-3a31-4ec6-a74c-8b84b2e51746`::Granted request
Thread-4262::DEBUG::2015-11-09 11:17:52,550::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::_resourcesAcquired: Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 (shared)
Thread-4262::DEBUG::2015-11-09 11:17:52,551::task::993::Storage.TaskManager.Task::(_decref) Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::ref 1 aborting False
Thread-4262::DEBUG::2015-11-09 11:17:52,566::fileSD::536::Storage.StorageDomain::(activateVolumes) Fixing permissions on /rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a
Thread-4262::DEBUG::2015-11-09 11:17:52,568::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 mode: None
Thread-4262::WARNING::2015-11-09 11:17:52,568::fileUtils::152::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already exists
Thread-4262::DEBUG::2015-11-09 11:17:52,568::fileSD::511::Storage.StorageDomain::(createImageLinks) Creating symlink from /rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c to /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c
Thread-4262::DEBUG::2015-11-09 11:17:52,568::fileSD::516::Storage.StorageDomain::(createImageLinks) img run dir already exists: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c
Thread-4262::DEBUG::2015-11-09 11:17:52,570::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 02c5d59d-638c-4672-814d-d734e334e24a
Thread-4262::INFO::2015-11-09 11:17:52,572::logUtils::51::dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'info': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}]}
Thread-4262::DEBUG::2015-11-09 11:17:52,573::task::1191::Storage.TaskManager.Task::(prepare) Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::finished: {'info': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}]}
Thread-4262::DEBUG::2015-11-09 11:17:52,573::task::595::Storage.TaskManager.Task::(_updateState) Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::moving from state preparing -> state finished
Thread-4262::DEBUG::2015-11-09 11:17:52,573::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': <ResourceRef 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj: 'None'>}
Thread-4262::DEBUG::2015-11-09 11:17:52,573::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-4262::DEBUG::2015-11-09 11:17:52,573::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4262::DEBUG::2015-11-09 11:17:52,573::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0 active users)
Thread-4262::DEBUG::2015-11-09 11:17:52,574::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free, finding out if anyone is waiting for it.
Thread-4262::DEBUG::2015-11-09 11:17:52,574::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing records.
Thread-4262::DEBUG::2015-11-09 11:17:52,574::task::993::Storage.TaskManager.Task::(_decref) Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::ref 0 aborting False
Thread-4262::INFO::2015-11-09 11:17:52,576::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57859 stopped
Reactor thread::INFO::2015-11-09 11:17:52,610::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:57860
Reactor thread::DEBUG::2015-11-09 11:17:52,619::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-11-09 11:17:52,619::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:57860
BindingXMLRPC::INFO::2015-11-09 11:17:52,620::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:57860
Reactor thread::DEBUG::2015-11-09 11:17:52,620::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 57860)
Thread-4263::INFO::2015-11-09 11:17:52,623::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57860 started
Thread-4263::DEBUG::2015-11-09 11:17:52,623::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]
Thread-4263::DEBUG::2015-11-09 11:17:52,624::task::595::Storage.TaskManager.Task::(_updateState) Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::moving from state init -> state preparing
Thread-4263::INFO::2015-11-09 11:17:52,624::logUtils::48::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-4263::INFO::2015-11-09 11:17:52,624::logUtils::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000392118', 'lastCheck': '6.0', 'valid': True}, u'0af99439-f140-4636-90f7-f43904735da0': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000382969', 'lastCheck': '4.5', 'valid': True}}
Thread-4263::DEBUG::2015-11-09 11:17:52,624::task::1191::Storage.TaskManager.Task::(prepare) Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::finished: {'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000392118', 'lastCheck': '6.0', 'valid': True}, u'0af99439-f140-4636-90f7-f43904735da0': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000382969', 'lastCheck': '4.5', 'valid': True}}
Thread-4263::DEBUG::2015-11-09 11:17:52,625::task::595::Storage.TaskManager.Task::(_updateState) Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::moving from state preparing -> state finished
Thread-4263::DEBUG::2015-11-09 11:17:52,625::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-4263::DEBUG::2015-11-09 11:17:52,625::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-4263::DEBUG::2015-11-09 11:17:52,625::task::993::Storage.TaskManager.Task::(_decref) Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::ref 0 aborting False
Thread-4263::INFO::2015-11-09 11:17:52,627::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57860 stopped
mailbox.SPMMonitor::DEBUG::2015-11-09 11:17:52,829::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) dd if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000 (cwd None)
mailbox.SPMMonitor::DEBUG::2015-11-09 11:17:52,845::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied, 0.00494757 s, 207 MB/s\n'; <rc> = 0
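The prepareImage return value quoted in the log above is an ordinary Python literal, so it can be parsed and inspected with the standard library when tracing a deployment like this one. A minimal sketch (the dict is trimmed to a few of the fields shown in the log; `ast.literal_eval` is used because it parses literals without executing code):

```python
import ast

# Trimmed copy of the prepareImage response from the VDSM log above;
# only a few of its fields are kept for brevity.
response_text = (
    "{'info': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', "
    "'volType': 'path', 'leaseOffset': 0, "
    "'volumeID': '02c5d59d-638c-4672-814d-d734e334e24a', "
    "'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, "
    "'path': '/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/"
    "350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a'}"
)

# Safely evaluate the literal into a dict (no code execution, unlike eval).
response = ast.literal_eval(response_text)

# The runtime symlink path VDSM prepares for the VM:
print(response['path'])
# The storage-domain UUID the volume belongs to:
print(response['info']['domainID'])
```

This is only a reading aid for the log, not part of any VDSM API.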
--
Florent BELLO
Service Informatique
informatique@ville-kourou.fr
0594 22 31 22
Mairie de Kourou
8 years, 10 months
ovirt-hosted-engine-setup and single machine install
by Johan Vermeulen
Hello All,
after configuring a first setup based on the quick start guide, I'm now
looking at the hosted-engine setup.
My question is: after I run ovirt-hosted-engine-setup, how do I set up VMs
on the same machine that hosts the now-virtualized engine?
Greetings, J.
Re: [ovirt-users] Ovirt 3.6 | After upgrade host can not connect to storage domains | returned by VDSM was: 480
by Punit Dambiwal
Hi Sahina,
Even after making the changes in vdsm.conf, I'm still not able to connect
to the replica=2 storage.
Thanks,
Punit
On Mon, Nov 23, 2015 at 4:15 PM, Punit Dambiwal <hypunit(a)gmail.com> wrote:
> Hi Sahina,
>
> Thanks for the update... would you mind letting me know the correct
> syntax for adding the line in vdsm.conf?
>
> Thanks,
> Punit
>
> On Mon, Nov 23, 2015 at 3:48 PM, Sahina Bose <sabose(a)redhat.com> wrote:
>
>> You can change the allowed_replica_count to 2 in vdsm.conf - though this
>> is not recommended in production. Supported replica count is 3.
>>
>> thanks
>> sahina
>>
>>
>> On 11/23/2015 07:58 AM, Punit Dambiwal wrote:
>>
>> Hi Sahina,
>>
>> Is there any workaround to solve this issue ?
>>
>> Thanks,
>> Punit
>>
>> On Wed, Nov 11, 2015 at 9:36 AM, Sahina Bose <sabose(a)redhat.com> wrote:
>>
>>> Hi,
>>>
>>> Thanks for your email. I will be back on 16th Nov and will get back to
>>> you then.
>>>
>>> thanks
>>> sahina
>>>
>>
>>
>>
>
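For anyone hitting the same replica-2 refusal, the override Sahina describes would look roughly like the fragment below. The section and exact option spelling are assumptions pieced together from the mail above and vdsm's Gluster options (the mail says "allowed_replica_count"; some vdsm versions ship it as a plural comma-separated list), so verify the names against /etc/vdsm/vdsm.conf.sample on your host. Replica 3 remains the only supported layout:

```ini
# /etc/vdsm/vdsm.conf -- sketch only; check option names against
# /etc/vdsm/vdsm.conf.sample before relying on this.
[gluster]
# Accept replica-2 volumes in addition to the supported replica-3
# (not recommended in production).
allowed_replica_counts = 1,2,3
```

Restart vdsmd afterwards (`systemctl restart vdsmd`) so the new setting is picked up.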