oVirt Users mailing list
Adding direct lun from API doesn't populate attributes like size, vendor, etc
by Groten, Ryan 26 Nov '15
Using this python I am able to create a direct FC LUN properly (and it works
if the lun_id is valid). But in the GUI, after the disk is added, none of the
fields are populated except LUN ID (Size is <1GB; Serial, Vendor, Product ID
are all blank).
I see this Bugzilla [1] is very similar (for iSCSI), which says the issue was
fixed in 3.5.0, but it seems to still be present in 3.5.1, at least for Fibre
Channel direct LUNs.
Here's the python I used to test:
lun_id = '3600a098038303053453f463045727654'
lu = params.LogicalUnit()
lu.set_id(lun_id)
lus = list()
lus.append(lu)
storage_params = params.Storage()
storage_params.set_id(lun_id)
storage_params.set_logical_unit(lus)
storage_params.set_type('fcp')
disk_params = params.Disk()
disk_params.set_format('raw')
disk_params.set_interface('virtio')
disk_params.set_alias(disk_name)
disk_params.set_active(True)
disk_params.set_lun_storage(storage_params)
disk = api.disks.add(disk_params)
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1096217
Thanks,
Ryan
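For completeness, the snippet above assumes an already-established SDK connection and a disk_name variable; a minimal setup along these lines (a sketch assuming the oVirt Python SDK 3.x, ovirtsdk, with a placeholder engine URL and credentials) would be:

    from ovirtsdk.api import API
    from ovirtsdk.xml import params

    # Placeholder engine URL and credentials -- not taken from the original post.
    api = API(url='https://engine.example.com/api',
              username='admin@internal',
              password='secret',
              insecure=True)

    disk_name = 'direct_lun_test'  # alias later passed to disk_params.set_alias()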
Hello oVirt gurus out there:
I notice our oVirt engine postgres DB size is growing quite fast over the past
couple of months. I checked the database size and found that our
ovirt_engine_history is 73GB in size.
engine=# \connect ovirt_engine_history
You are now connected to database "ovirt_engine_history" as user "postgres".
ovirt_engine_history=# SELECT pg_size_pretty( pg_database_size( current_database() ) ) As human_size
, pg_database_size( current_database() ) As raw_size;
 human_size |  raw_size
------------+-------------
 73 GB      | 78444213368
(1 row)
A brief check of the records shows there are entries dating back to 2014.
I want to see if there is a safe way to archive and remove some of the older
records?
Thanks,
Eric
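A natural first step here (a plain PostgreSQL sketch using standard catalog views, not an oVirt-specific procedure) is to see which history tables account for most of the space before archiving or deleting anything:

    -- Run while connected to ovirt_engine_history; lists the 20 largest tables.
    SELECT relname,
           pg_size_pretty(pg_total_relation_size(relid)) AS total_size
      FROM pg_catalog.pg_statio_user_tables
     ORDER BY pg_total_relation_size(relid) DESC
     LIMIT 20;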
I'm generating network activity on a RHEV 3.5 VM using iperf. The VM acts as
an iperf client. On the client, iperf reports that data has been successfully
sent to the iperf server. The iperf server also shows that it's successfully
receiving data from the iperf client. But network usage shows 0% in the RHEV-M
UI. The client and server are on different private networks.
On the same VM, when I generate network activity by fetching a file from a
public network using wget, the network column correctly shows activity in
the RHEV-M UI for the VM.
Could someone help me understand why I am unable to see network activity in
the RHEV-M UI when iperf is used?
Appreciate your help. TIA.
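For reference, the kind of test being described (a sketch with a placeholder server address; the exact commands were not given in the post) looks like:

    # On the receiving machine (iperf server):
    iperf -s

    # On the RHEV VM (iperf client), e.g. a 60-second run against a placeholder address:
    iperf -c 192.0.2.10 -t 60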
Hello,
I've been trying to resolve a problem with my second node, which does not want
to connect: one of my two nodes is not connecting, and the problem is that the
node can't talk NFS to the other node. The node used to function fine; after
performing a hardware upgrade and bringing the system back up, it came up with
this issue. We also performed a yum update, which updated around 447 packages.
Can someone please help me with this?
Thanks
Sol
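As a generic starting point for narrowing this down (a sketch with a placeholder storage address, not taken from this thread), one can verify the export list and try a manual mount from the affected node:

    # Show what the NFS server exports (192.0.2.20 is a placeholder address):
    showmount -e 192.0.2.20

    # Try a plain manual NFS mount from the node:
    mkdir -p /mnt/nfstest && mount -t nfs 192.0.2.20:/export/data /mnt/nfstest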
Hi,
I'm trying to add virtual NICs for a new VM. However, there is no vNIC
profile in the drop down list for 'nic1' even though I already have a
vNIC profile named 'ovirtmgmt' within 'ovirtmgmt' network. I attached
'ovirt-cant-add-nic-for-vms.mp4' as a visual explanation.
oVirt 3.6 @ CentOS 7
Regards,
Wee
---
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus
Re: [ovirt-users] Ovirt 3.6 | After upgrade host can not connect to storage domains | returned by VDSM was: 480
by Sahina Bose 26 Nov '15
The error 480 - Gluster volume replica count is not supported, seems to
indicate that vdsm has not read the updated conf.
Have you made the change in all the nodes (hypervisor)?
[+users]
On 11/26/2015 07:13 AM, Punit Dambiwal wrote:
> Hi Sahina,
>
> Still the same error "480"
>
> Engine Logs :- http://fpaste.org/294595/
>
> -----------
> 2015-11-06 15:01:51,042 ERROR
> [org.ovirt.engine.core.bll.storage.BaseFsStorageHelper]
> (DefaultQuartzScheduler_Worker-78) [40bde6e1] The connection with
> details 'gluster1.3linux.com:/ssd' failed because of error code '480'
> and error message is: 480
> 2015-11-06 15:01:51,103 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (DefaultQuartzScheduler_Worker-78) [40bde6e1] Correlation ID: null,
> Call Stack: null, Custom Event ID: -1, Message: The error message for
> connection gluster.3linux.com:/sata returned by VDSM was: 480
> 2015-11-06 15:01:51,104 ERROR
> [org.ovirt.engine.core.bll.storage.BaseFsStorageHelper]
> (DefaultQuartzScheduler_Worker-78) [40bde6e1] The connection with
> details 'gluster.3linux.com:/sata' failed because of error code '480'
> and error message is: 480
> 2015-11-06 15:01:51,104 INFO
> [org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand]
> (DefaultQuartzScheduler_Worker-78) [40bde6e1] Host 'compute6' storage
> connection was failed
> ----------
>
> Thanks,
> Punit
>
> On Wed, Nov 25, 2015 at 9:30 PM, Sahina Bose <sabose(a)redhat.com
> <mailto:sabose@redhat.com>> wrote:
>
> Hi Punit,
>
> Based on comment 2 in
> https://bugzilla.redhat.com/show_bug.cgi?id=1238093, you should be
> able to edit /etc/vdsm/vdsm.conf
>
> allowed_replica_counts=2,3
>
> and restart vdsm.
>
> Could you attach error that you continue to face (post above step?)
>
>
>
> On 11/25/2015 08:02 AM, Punit Dambiwal wrote:
>> Hi Sahina,
>>
>> Yes...i have restarted the vdsm and even rebooted the whole
>> machine but not work..
>>
>> On Tue, Nov 24, 2015 at 10:19 PM, Sahina Bose <sabose(a)redhat.com
>> <mailto:sabose@redhat.com>> wrote:
>>
>> Have you restarted vdsm?
>>
>> On 11/24/2015 07:38 AM, Punit Dambiwal wrote:
>>> Hi Sahina,
>>>
>>> Either after make the changes in the vdsm.conf,still not
>>> able to connect to the replica=2 storage..
>>>
>>> Thanks,
>>> Punit
>>>
>>> On Mon, Nov 23, 2015 at 4:15 PM, Punit Dambiwal
>>> <hypunit(a)gmail.com <mailto:hypunit@gmail.com>> wrote:
>>>
>>> Hi Sahina,
>>>
>>> Thanks for the update...would you mind to let me know
>>> the correct syntax to add the line in the vdsm.conf ??
>>>
>>> Thanks,
>>> Punit
>>>
>>> On Mon, Nov 23, 2015 at 3:48 PM, Sahina Bose
>>> <sabose(a)redhat.com <mailto:sabose@redhat.com>> wrote:
>>>
>>> You can change the allowed_replica_count to 2 in
>>> vdsm.conf - though this is not recommended in
>>> production. Supported replica count is 3.
>>>
>>> thanks
>>> sahina
>>>
>>>
>>> On 11/23/2015 07:58 AM, Punit Dambiwal wrote:
>>>> Hi Sahina,
>>>>
>>>> Is there any workaround to solve this issue ?
>>>>
>>>> Thanks,
>>>> Punit
>>>>
>>>> On Wed, Nov 11, 2015 at 9:36 AM, Sahina Bose
>>>> <sabose(a)redhat.com <mailto:sabose@redhat.com>> wrote:
>>>>
>>>> Hi,
>>>>
>>>> Thanks for your email. I will be back on 16th
>>>> Nov and will get back to you then.
>>>>
>>>> thanks
>>>> sahina
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>
>
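For reference, the change discussed above (a sketch only; the exact vdsm.conf section name is an assumption and may differ between VDSM versions) would be applied on every hypervisor and followed by a vdsm restart:

    # /etc/vdsm/vdsm.conf  (the [gluster] section name is assumed here)
    [gluster]
    allowed_replica_counts = 2,3

    # then restart vdsm on the node:
    systemctl restart vdsmd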
25 Nov '15
On Wed, Nov 25, 2015, at 12:13, Simone Tiraboschi wrote:
>
>
> On Mon, Nov 23, 2015 at 10:10 PM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
>> Hi all,
>> I go on with my wishlist, derived from both solitary mumblings and community talks at the first Italian oVirt Meetup.
>>
>> I offer to help in coding (work/family schedules permitting) but keep in mind that I'm a sysadmin with mainly C and bash-scripting skills (but hoping to improve my less-than-newbie Python too...)
>>
>> I've sent separate wishlist messages for oVirt Node, oVirt Engine and VDSM.
>>
>> oVirt Self Hosted Engine Setup:
>>
>> *) allow virtual hardware customizations for locally-created Engine vm, specifically: allow to add an arbitrary number of NICs (asking for MAC address and local bridge to connect to) and maybe also an arbitrary number of disks (asking for size) as these seem to be the only/most_useful items missing; maybe the prebuilt appliance image too may be inspected by setup to detect a customized one and connect any further NICs to custom local bridges (which the user should be asked for)
>
> For 3.6.1 (it should be in 3.6.0 but it's bugged) you will be able to edit some parameter of the engine VM from the engine (than of course you need to reboot to make them effective).
> I'm not sure if it's worth to make the setup more complex or if it's better to keep it simple (single nic, single disk) and then let you edit the VM only from the engine as for other VMs.
Thanks Simone for your reply!
You are right: I was bothering you with this setup wishlist item *mainly* because further Engine VM modification was impossible/awkward/difficult before 3.6.1.
Nonetheless, I have seen many cases in which at least a second NIC would be absolutely needed to complete the Engine installation: it is a well-known best practice to keep the management network (maybe conflated with the IPMI network in smaller setups) completely isolated from other services and to allow only limited access to/from it, and that network would be the ovirtmgmt-bridge-connected network (the only one available to the Engine, as of now). Now think of a kickstart-based Engine OS installation/update from a local repository/mirror that is reachable on a different network only (further access to the User/Administration web portal could have similar needs, but those could more easily be covered by later Engine VM modifications).
The "additional disks" part was added by me somewhat artificially, out of my own imagination, but I know of at least one enterprise customer that by policy mandates separate disks for OS and data (mainly on FC LUNs, to be honest, but FC is supported by hosted Engine now, isn't it?).
I absolutely don't know how the setup code is structured (and the recent logical "duplication" between mixins.py and vm.conf.in scares me a bit, actually ;), but I naively hope that changing the two single hardcoded NIC/HDD questions into two loops of at least one iteration each (with a corresponding generalization of the related otopi parameters) should not increase the complexity too much (and could be an excuse to rationalize/unify it further).
Obviously I could stand instantly corrected by anyone who really knows the code, but in exchange I would gain for free some interesting pointers/insights into the setup code/structure ;)
>> Regards,
>> Giuseppe
Updated Invitation: [Deep dive] Host Network QoS - oVirt 3.6 @ Wed Dec 2, 2015 5pm - 6pm (ibarkan@redhat.com)
by ibarkan@redhat.com 25 Nov '15
This event has been changed.
Title: [Deep dive] Host Network QoS - oVirt 3.6
Hangouts on air: https://plus.google.com/events/c3la9vdse911atq991qflogtq0g
YouTube link: https://plus.google.com/events/c3la9vdse911atq991qflogtq0g
When: Wed Dec 2, 2015 5pm - 6pm Jerusalem (changed)
Calendar: ibarkan(a)redhat.com
Who:
* ibarkan(a)redhat.com - organizer
* users(a)ovirt.org
Event details:
https://www.google.com/calendar/event?action=VIEW&eid=NGZxZXU0dDRka2ZrMmd1c…
Hi,
How can I import a virtual appliance (OVA)? What should I write in the
field "path"?
I tried with 10.10.10.73:/export/nfsshare but I get this error:
VDSM ovirtsrv102 command failed: ['Permission denied, please try again.',
'Permission denied, please try again.', 'Permission denied
(publickey,gssapi-keyex,gssapi-with-mic,password).', '/usr/bin/tar:
10.10.10.73\\:/export/nfsshare: Cannot open: Input/output error',
'/usr/bin/tar: Error is not recoverable: exiting now']
I also tried to add memory to a running VM and got this error:
Failed to hot set memory to VM CoverterV2V-REL6. Underlying error message:
unsupported configuration: unknown device type 'memory'
VDSM ovirtsrv104 command failed: unsupported configuration: unknown device
type 'memory'
Regards
Massimo
[ANN] oVirt 3.6.1 First Release Candidate is now available for testing
by Sandro Bonazzola 25 Nov '15
The oVirt Project is pleased to announce the availability
of the First Release Candidate of oVirt 3.6.1 for testing, as of November
25th, 2015.
This release is available now for Fedora 22,
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux >= 7.1, CentOS Linux >= 7.1 (or similar).
This release supports Hypervisor Hosts running
Red Hat Enterprise Linux >= 7.1, CentOS Linux >= 7.1 (or similar) and
Fedora 22.
Highly experimental support for Debian 8.1 Jessie has been added too.
This release of oVirt 3.6.1 includes numerous bug fixes.
See the release notes [1] for an initial list of the new features and bugs
fixed.
Please refer to release notes [1] for Installation / Upgrade instructions.
A new oVirt Live ISO is also available[2].
Please note that mirrors[3] usually need about one day to synchronize.
Please refer to the release notes for known issues in this release.
[1] http://www.ovirt.org/OVirt_3.6_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.6-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Hey,
in the last few months the Node team spent a lot of efforts in
stabilizing Node by closing tons of bugs, rebasing Node onto CentOS
7.2, and in addition adding features like Hosted-Engine - to get Node
in shape for the recently released oVirt 3.6.0.
But we were also seeing how Node is showing its age. It becomes more
and more challenging to solve bugs and implement features in Node's
current architecture.
To address these problems, and let Node adjust better to new changes,
a few months ago we started to look at how we can change Node, to make
it easier to develop, test, and use.
This comparison [1] shows a brief summary of our investigations. We
especially looked at Atomic and containers [2].
At the bottom line both of them provided us an architecture which
would help us to achieve something like 70% of what we need. But
during early trials we quickly ran into issues which we experience in
similar ways with today's Node.
Finally we settled on an approach - the idea has been around right from
the beginning - which aligns very well with existing technologies
which we already use in the oVirt and Fedora/CentOS scope.
The new Node will be using anaconda for installation, LVM for upgrades
and rollbacks, and Cockpit [3] for administration. The future design
is taking care that packages don't need to take special care to work
on Node - which was a major obstacle in the past. Node will rather
behave (mostly) like a regular host - but with the advantages of an
easy & ready to use image, image based delivery and a robust rollback.
The current design principles and links to some additional resources
are in the wiki [4].
Stay tuned, we are just getting started.
On behalf of the Node Team
fabian
--
[1] http://www.ovirt.org/Node/Specs#Comparison:_Possible_Implementations
[2] http://www.projectatomic.io/ ~ http://docker.com/
[3] http://cockpit-project.org/
[4] http://www.ovirt.org/Node/4.0
On Tue, Nov 17, 2015 at 6:29 PM, Simone Tiraboschi wrote:
>
>
> On Tue, Nov 17, 2015 at 5:04 PM, Gianluca Cecchi wrote:
>
>>
>>
>> Yes, I know. No problem for me.
>> I'm using master because I want to test Italian layout so that I can
>> change current translation where necessary.
>> But do I have to start from scratch or is there any command to clean the
>> current half-completed setup?
>>
>
> No, unfortunately is still not available for hosted-engine-setup
>
>
>>
>> Gianluca
>>
>>
>
Hello,
retrying from master, as the problem solution should have been merged.
But it seems the package for the appliance is not available now...
[root@ovc71 ~]# yum install ovirt-engine-appliance
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mi.mirror.garr.it
* extras: mi.mirror.garr.it
* ovirt-master-epel: fr2.rpmfind.net
* ovirt-master-snapshot: ftp.plusline.net
* ovirt-master-snapshot-static: ftp.plusline.net
* updates: mi.mirror.garr.it
No package ovirt-engine-appliance available.
Error: Nothing to do
Is there any alternative, such as manually downloading the OVA file, or another strategy?
Gianluca
[SOLVED] Re: Trying to make ovirt-hosted-engine-setup create a customized Engine-vm on 3.6 HC HE
by Giuseppe Ragusa 25 Nov '15
On Mon, Oct 26, 2015, at 09:48, Simone Tiraboschi wrote:
>
>
> On Mon, Oct 26, 2015 at 12:14 AM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
>> Hi all,
>> I'm experiencing some difficulties using oVirt 3.6 latest snapshot.
>>
>> I'm trying to trick the self-hosted-engine setup to create a custom engine vm with 3 nics (with fixed MACs/UUIDs).
>>
>> The GlusterFS volume (3.7.5 hyperconverged, replica 3, for the engine vm) and the network bridges (ovirtmgmt and other two bridges, called nfs and lan, for the engine vm) have been preconfigured on the initial fully-patched CentOS 7.1 host (plus other two identical hosts which are awaiting to be added).
>>
>> I'm stuck at a point with the engine vm successfully starting but with only one nic present (connected to the ovirtmgmt bridge).
>>
>> I'm trying to obtain the modified engine vm by means of a trick which used to work in a previous (aborted because of lacking GlusterFS-by-libgfapi support) oVirt 3.5 test setup (about a year ago, maybe more): I'm substituting the standard /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in with the following:
>>
>> vmId=@VM_UUID@
>> memSize=@MEM_SIZE@
>> display=@CONSOLE_TYPE@
>> devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:@CDROM_UUID@,path:@CDROM@,device:cdrom,shared:false,type:disk@BOOT_CDROM@}
>> devices={index:0,iface:virtio,format:raw,poolID:@SP_UUID@,volumeID:@VOL_UUID@,imageID:@IMG_UUID@,specParams:{},readonly:false,domainID:@SD_UUID@,optional:false,deviceId:@IMG_UUID@,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk@BOOT_DISK@}
>> devices={device:scsi,model:virtio-scsi,type:controller}
>> devices={index:4,nicModel:pv,macAddr:02:50:56:3f:c4:b0,linkActive:true,network:@BRIDGE@,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:@NIC_UUID@,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
>> devices={index:8,nicModel:pv,macAddr:02:50:56:3f:c4:a0,linkActive:true,network:lan,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:6c467650-1837-47ea-89bc-1113f4bfefee,address:{bus:0x00, slot:0x09, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
>> devices={index:16,nicModel:pv,macAddr:02:50:56:3f:c4:c0,linkActive:true,network:nfs,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:4d8e0705-8cb4-45b7-b960-7f98bb59858d,address:{bus:0x00, slot:0x0c, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface@BOOT_PXE@}
>> devices={device:console,specParams:{},type:console,deviceId:@CONSOLE_UUID@,alias:console0}
>> vmName=@NAME@
>> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
>> smp=@VCPUS@
>> cpuType=@CPU_TYPE@
>> emulatedMachine=@EMULATED_MACHINE@
>>
>> but unfortunately the vm gets created like this (output from "ps"; note that I'm attaching a CentOS7.1 Netinstall ISO with an embedded kickstart: the installation should proceed by HTTP on the lan network but obviously fails):
>>
>> /usr/libexec/qemu-kvm -name HostedEngine -S -machine pc-i440fx-rhel7.1.0,accel=kvm,usb=off
>> -cpu Westmere -m 4096 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1
>> -uuid f49da721-8aa6-4422-8b91-e91a0e38aa4a
>> -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-1.1503.el7.centos.2.8,serial=2a1855a9-18fb-4d7a-b8b8-6fc898a8e827,uuid=f49da721-8aa6-4422-8b91-e91a0e38aa4a
>> -no-user-config -nodefaults
>> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/HostedEngine.monitor,server,nowait
>> -mon chardev=charmonitor,id=monitor,mode=control
>> -rtc base=2015-10-25T11:22:22,driftfix=slew -global kvm-pit.lost_tick_policy=discard
>> -no-hpet -no-reboot -boot strict=on
>> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
>> -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4
>> -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5
>> -drive file=
>> /var/tmp/engine.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=/var/run/vdsm/storage/be4434bf-a5fd-44d7-8011-d5e4ac9cf523/b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc/8d075a8d-730a-4925-8779-e0ca2b3dbcf4,if=none,id=drive-virtio-disk0,format=raw,serial=b3abc1cb-8a78-4b56-a9b0-e5f41fea0fdc,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:50:56:3f:c4:b0,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/f49da721-8aa6-4422-8b91-e91a0e38aa4a.org.ovirt.hosted-engine-setup.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0 -chardev socket,id=charconsole0,path=/var/run/ovirt-vmconsole-console/f49da721-8aa6-4422-8b91-e91a0e38aa4a.sock,server,nowait -device virtconsole,chardev=charconsole0,id=console0 -vnc 0:0,password -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -msg timestamp=on
>>
>> There seem to be no errors in the logs.
>>
>> I've tried reading some (limited) Python setup code but I've not found any obvious reason why the trick should not work anymore.
>>
>> I know that 3.6 has different network configuration/management and this could be the hot point.
>>
>> Does anyone have any further suggestion or clue (code/logs to read)?
>
> The VM creation path is now a bit different cause we use just vdscli library instead of vdsClient.
> Please take a a look at mixins.py
Many thanks for your very valuable hint:
I've restored the original /usr/share/ovirt-hosted-engine-setup/templates/vm.conf.in and I've managed to obtain the 3-nics-customized vm by modifying /usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/mixins.py like this ("diff -Naur" output):
************************************************************************************
--- mixins.py.orig 2015-10-20 16:57:40.000000000 +0200
+++ mixins.py 2015-10-26 22:22:58.351223922 +0100
@@ -25,6 +25,7 @@
import random
import string
import time
+import uuid
from ovirt_hosted_engine_setup import constants as ohostedcons
@@ -247,6 +248,44 @@
]['@BOOT_PXE@'] == ',bootOrder:1':
nic['bootOrder'] = '1'
conf['devices'].append(nic)
+ nic2 = {
+ 'nicModel': 'pv',
+ 'macAddr': '02:50:56:3f:c4:a0',
+ 'linkActive': 'true',
+ 'network': 'lan',
+ 'filter': 'vdsm-no-mac-spoofing',
+ 'specParams': {},
+ 'deviceId': str(uuid.uuid4()),
+ 'address': {
+ 'bus': '0x00',
+ 'slot': '0x09',
+ 'domain': '0x0000',
+ 'type': 'pci',
+ 'function': '0x0'
+ },
+ 'device': 'bridge',
+ 'type': 'interface',
+ }
+ conf['devices'].append(nic2)
+ nic3 = {
+ 'nicModel': 'pv',
+ 'macAddr': '02:50:56:3f:c4:c0',
+ 'linkActive': 'true',
+ 'network': 'nfs',
+ 'filter': 'vdsm-no-mac-spoofing',
+ 'specParams': {},
+ 'deviceId': str(uuid.uuid4()),
+ 'address': {
+ 'bus': '0x00',
+ 'slot': '0x0c',
+ 'domain': '0x0000',
+ 'type': 'pci',
+ 'function': '0x0'
+ },
+ 'device': 'bridge',
+ 'type': 'interface',
+ }
+ conf['devices'].append(nic3)
cli = self.environment[ohostedcons.VDSMEnv.VDS_CLI]
status = cli.create(conf)
************************************************************************************
Obviously this is a horrible ad-hoc hack that I'm not able to generalize/clean-up now: doing so would involve (apart from a deeper understanding of the whole setup code/workflow) some well-thought-out design decisions and, given the effective deprecation of the aforementioned easy-to-modify vm.conf.in template substituted by hardwired Python program logic, it seems that such a functionality is not very high on the development priority list atm ;)
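As an illustration of the kind of generalization mentioned above (a sketch only, not the actual hosted-engine-setup code; the bridge names, MAC addresses and PCI slots are simply the ones used in this thread), the extra NIC dictionaries could be produced by a small table and a loop:

    import uuid

    # In mixins.py the 'conf' dict already exists; it is redefined here only to keep the sketch self-contained.
    conf = {'devices': []}

    # Bridge names, MACs and PCI slots taken from the hack above.
    extra_nics = [
        {'network': 'lan', 'macAddr': '02:50:56:3f:c4:a0', 'slot': '0x09'},
        {'network': 'nfs', 'macAddr': '02:50:56:3f:c4:c0', 'slot': '0x0c'},
    ]

    for entry in extra_nics:
        conf['devices'].append({
            'nicModel': 'pv',
            'macAddr': entry['macAddr'],
            'linkActive': 'true',
            'network': entry['network'],
            'filter': 'vdsm-no-mac-spoofing',
            'specParams': {},
            'deviceId': str(uuid.uuid4()),
            'address': {
                'bus': '0x00',
                'slot': entry['slot'],
                'domain': '0x0000',
                'type': 'pci',
                'function': '0x0',
            },
            'device': 'bridge',
            'type': 'interface',
        })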
Many thanks again!
Kind regards,
Giuseppe
>> Many thanks in advance.
>>
>> Kind regards,
>> Giuseppe
>>
>> PS: please keep also my address in replying because I'm experiencing some problems between Hotmail and oVirt-mailing-list
>>
Hi All,
I don't know if this is a bug or an error on my side.
At the moment I have an oVirt 3.6 installation with two nodes and two storage
servers, which are configured as master/slave (a Solaris ZFS snapshot copy
from master to slave every 2 hours).
Now I'm trying to do some tests for failure use cases, e.g. the master
storage isn't available anymore, or one of the virtual machines must be
restored from the snapshot.
Because the data on the slave is a snapshot copy, all data which are on
the Data Domain NFS storage are also on the slave NFS storage.
I tried to add it via the WebUI option "Import Domain" (Import
Pre-Configured Domain) with both domain functions (Data and Export), but
nothing happens except some errors in the vdsm.log logfile.
Something like this:
Thread-253746::ERROR::2015-11-24
11:44:41,758::hsm::2549::Storage.HSM::(disconnectStorageServer) Could
not disconnect from storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2545, in disconnectStorageServer
    conObj.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 425, in disconnect
    return self._mountCon.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 254, in disconnect
    self._mount.umount(True, True)
  File "/usr/share/vdsm/storage/mount.py", line 256, in umount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 241, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (32, ';umount:
/rhev/data-center/mnt/192.168.1.13:_oi-srv2-sasData1_oi-srv1-sasData1_nfsshare1:
mountpoint not found\n')
I checked with nfs-check.py if all permissions are ok, and the tool says this:
[root@lin-ovirt1 contrib]# python ./nfs-check.py
192.168.1.13:/oi-srv2-sasData1/oi-srv1-sasData1/nfsshare1
Current hostname: lin-ovirt1 - IP addr 192.168.1.14
Trying to /bin/mount -t nfs
192.168.1.13:/oi-srv2-sasData1/oi-srv1-sasData1/nfsshare1...
Executing NFS tests..
Removing vdsmTest file..
Status of tests [OK]
Disconnecting from NFS Server..
Done!
Greetings
Roger Meier
Hello,
I have an environment with two data centers with some clusters in each
one. Each cluster has one or more dedicated FC storage domains.
We have some management problems, because adding storage to one cluster
means adding it to all nodes of all clusters in the same data center; by
the way, I find this a little overkill and I think it should be managed
like networks.
Anyway, to resolve our problems we would like to create a new data center
and move a cluster from the old data center to the new one, obviously with
its own storage domain.
Is there a way to do this without export/import all vms?
Thank you.
Regards,
Dael Maselli.
--
___________________________________________________________________
Dael Maselli --- INFN-LNF Computing Service -- +39.06.9403.2214
___________________________________________________________________
* http://www.lnf.infn.it/~dmaselli/ *
___________________________________________________________________
Democracy is two wolves and a lamb voting on what to have for lunch
___________________________________________________________________
Hi all,
I'm trying to organize my wishes/hopes for oVirt 4.0
These items derive from both solitary mumblings and community talks at the first Italian oVirt Meetup.
I offer to help in coding (work/family schedules permitting) but keep in mind that I'm a sysadmin with mainly C and bash-scripting skills (but hoping to improve my less-than-newbie Python too...)
Since I have related interests/wishes also for Engine and VDSM, I'll send a separate message for each one.
Let's start from the oVirt Node:
*) oVirt Node complete convergence with Atomic Host: start from Project Atomic tools and define an ISO-installable Atomic Host variant [1] to include gluster, qemu, libvirt, vdsm and all the packages/configurations that an oVirt Node would need (remove unneeded parts)
*) add Samba, CTDB and Ganesha to oVirt Node to allow it to be used as a full storage appliance (specifically, I'm thinking of the GlusterFS integration); there are related wishlist items on configuring/managing Samba/CTDB/Ganesha on the Engine and on VDSM
*) add oVirt Node ability to host containers (independent of the above mentioned convergence with Atomic); note that Atomic Host has Docker/Kubernetes, but libvirt already has a LXC driver [2] and the Engine could benefit from some added smartness in managing groups of guests etc. in the vm case too; there are related wishlist items on configuring/managing containers on the Engine and on VDSM
*) add Open vSwitch direct support (not Neutron-mediated); there are related wishlist items on configuring/managing Open vSwitch on the Engine and on VDSM
*) add DRBD9 as a supported Storage Domain type, maybe for HC and HE too; there are related wishlist items on configuring/managing DRBD9 on the Engine and on VDSM
*) add oVirt Node ability to fully perform as a stand-alone hypervisor: I hear that Cockpit is coming, so why not Kimchi too? ;)
Regards,
Giuseppe
[1] product.json, I suppose, but I'm starting to learn Atomic now...
[2] barring a pending deprecation in RHEL7, but I suppose that a community/Centos-Virt-SIG libvirt build could restore it and maybe RedHat too could support it on a special libvirt build for RHEV (just to remove those support costs from the base RHEL OS offering)
Hi,
Someone may want to take a look at Gmail's spam policy to see what is not
being done, as it has been months that I have had to fish the mails out of
the spam folder, no matter how much I try to mark them as legitimate.
Regards
On 22/10/15 at 12:24 p.m., users-request(a)ovirt.org wrote:
> Send Users mailing list submissions to
> users(a)ovirt.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.ovirt.org/mailman/listinfo/users
> or, via email, send a message with subject or body 'help' to
> users-request(a)ovirt.org
>
> You can reach the person managing the list at
> users-owner(a)ovirt.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Users digest..."
>
>
> Today's Topics:
>
> 1. [ANN] oVirt 3.6.0 Third Release Candidate is now available
> for testing (Sandro Bonazzola)
> 2. Re: Testing self hosted engine in 3.6: hostname not resolved
> error (Gianluca Cecchi)
> 3. Re: 3.6 upgrade issue (Yaniv Dary)
> 4. Re: How to change the hosted engine VM RAM size after
> deploying (Simone Tiraboschi)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 22 Oct 2015 16:08:25 +0200
> From: Sandro Bonazzola <sbonazzo(a)redhat.com>
> To: announce(a)ovirt.org, users <users(a)ovirt.org>, devel
> <devel(a)ovirt.org>
> Subject: [ovirt-users] [ANN] oVirt 3.6.0 Third Release Candidate is
> now available for testing
> Message-ID:
> <CAPQRNTm4GyWo0zo-L=92ScLJvWQFEPKaF5UfmdJ4SroKCCC3pQ(a)mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> The oVirt Project is pleased to announce the availability
> of the Third Release Candidate of oVirt 3.6 for testing, as of October
> 22nd, 2015.
>
> This release is available now for Fedora 22,
> Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
> Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
>
> This release supports Hypervisor Hosts running
> Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar),
> Fedora 21 and Fedora 22.
> Highly experimental support for Debian 8.1 Jessie has been added too.
>
> This release of oVirt 3.6.0 includes numerous bug fixes.
> See the release notes [1] for an initial list of the new features and bugs
> fixed.
>
> Please refer to release notes [1] for Installation / Upgrade instructions.
> New oVirt Node ISO and oVirt Live ISO will be available soon as well[2].
>
> Please note that mirrors[3] may need usually one day before being
> synchronized.
>
> Please refer to the release notes for known issues in this release.
>
> [1] http://www.ovirt.org/OVirt_3.6_Release_Notes
> [2] http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/iso/
> [3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
>
>
Hi,
I activated the gluster service in the cluster, and then my engine.log kept
repeating: Could not add brick xxx to volume xxxx server uuid xxx not found
in cluster.
I found in the mailing list that I have to put all my hosts into maintenance
mode and then activate them again.
Now my engine.log keeps repeating:
2015-11-09
11:15:53,563 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-64) [] START,
GlusterVolumesListVDSCommand(HostName = ovirt02,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='0d1284e1-fa18-4309-b196-df9a6a337c44'}), log id:
6ddd5b9d
2015-11-09 11:15:53,711 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt01.mafia.kru:/gfs1/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,714 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt02.mafia.kru:/gfs1/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,716 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt03.mafia.kru:/gfs1/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,719 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt01.mafia.kru:/gfs2/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,722 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt02.mafia.kru:/gfs2/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,725 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-64) [] Could not associate brick
'ovirt03.mafia.kru:/gfs2/engine/brick' of volume
'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no
gluster network found in cluster
'00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,732 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-64) [] FINISH,
GlusterVolumesListVDSCommand, return:
{e9a24161-3e72-47ea-b593-57f3302e7c4e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7eafe244,
e5df896f-b818-4d70-ac86-ad9270f9d5f2=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@cb7d0349},
log id: 6ddd5b9d
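Those "no gluster network found in cluster" warnings seem to say that no
logical network in this cluster carries the gluster role. A minimal sketch of
checking that with the same SDK is below; again the URL and credentials are
placeholders, the cluster UUID is the one from the warnings above, and the
exact accessor names are an assumption on my part:

# Hedged sketch using the oVirt Python SDK v3; URL/credentials are placeholders.
from ovirtsdk.api import API

api = API(url='https://engine.example.com/api',
          username='admin@internal', password='password', insecure=True)

# Cluster UUID as reported in the engine.log warnings above.
cluster = api.clusters.get(id='00000002-0002-0002-0002-00000000022d')
for net in cluster.networks.list():
    usages = net.get_usages().get_usage() if net.get_usages() else []
    print(net.get_name(), usages)
# If no network lists 'gluster' here, the engine has nothing to associate
# the bricks with, which would explain the warnings.

api.disconnect()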
Here my vdsm.log on host 1:
Thread-4247::DEBUG::2015-11-09
11:17:47,621::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'Host.getVMFullList' in bridge with [{u'status': 'Up',
u'nicModel': u'rtl8139,pv', u'kvmEnable': u'true', u'smp': u'1',
u'emulatedMachine': u'pc', u'afterMigrationStatus': u'', 'pid': '4450',
u'vmId': u'3930e6e3-5b41-45c3-bb7c-2af8563cefab', u'devices':
[{u'alias': u'console0', u'specParams': {}, 'deviceType': u'console',
u'deviceId': u'ab824f92-f636-4c0f-96ad-b4f3d1c352be', u'device':
u'console', u'type': u'console'}, {u'target': 1572864, u'alias':
u'balloon0', u'specParams': {u'model': u'none'}, 'deviceType':
u'balloon', u'device': u'memballoon', u'type': u'balloon'}, {u'device':
u'unix', u'alias': u'channel0', 'deviceType': u'channel', u'type':
u'channel', u'address': {u'bus': u'0', u'controller': u'0', u'type':
u'virtio-serial', u'port': u'1'}}, {u'device': u'unix', u'alias':
u'channel1', 'deviceType': u'channel', u'type': u'channel', u'address':
{u'bus': u'0', u'controller': u'0', u'type': u'virtio-serial', u'port':
u'2'}}, {u'device': u'unix', u'alias': u'channel2', 'deviceType':
u'channel', u'type': u'channel', u'address': {u'bus': u'0',
u'controller': u'0', u'type': u'virtio-serial', u'port': u'3'}},
{u'alias': u'scsi0', 'deviceType': u'controller', u'address': {u'slot':
u'0x04', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci',
u'function': u'0x0'}, u'device': u'scsi', u'model': u'virtio-scsi',
u'type': u'controller'}, {u'device': u'usb', u'alias': u'usb0',
'deviceType': u'controller', u'type': u'controller', u'address':
{u'slot': u'0x01', u'bus': u'0x00', u'domain': u'0x0000', u'type':
u'pci', u'function': u'0x2'}}, {u'device': u'ide', u'alias': u'ide0',
'deviceType': u'controller', u'type': u'controller', u'address':
{u'slot': u'0x01', u'bus': u'0x00', u'domain': u'0x0000', u'type':
u'pci', u'function': u'0x1'}}, {u'device': u'virtio-serial', u'alias':
u'virtio-serial0', 'deviceType': u'controller', u'type': u'controller',
u'address': {u'slot': u'0x05', u'bus': u'0x00', u'domain': u'0x0000',
u'type': u'pci', u'function': u'0x0'}}, {u'device': u'', u'alias':
u'video0', 'deviceType': u'video', u'type': u'video', u'address':
{u'slot': u'0x02', u'bus': u'0x00', u'domain': u'0x0000', u'type':
u'pci', u'function': u'0x0'}}, {u'device': u'vnc', u'specParams':
{u'spiceSecureChannels':
u'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
u'displayIp': '0'}, 'deviceType': u'graphics', u'type': u'graphics',
u'port': u'5900'}, {u'nicModel': u'pv', u'macAddr':
u'00:16:3e:43:96:7b', u'linkActive': True, u'network': u'ovirtmgmt',
u'specParams': {}, u'filter': u'vdsm-no-mac-spoofing', u'alias':
u'net0', 'deviceType': u'interface', u'deviceId':
u'c2913ff3-fea3-4b17-a4b3-83398d920cd3', u'address': {u'slot': u'0x03',
u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function':
u'0x0'}, u'device': u'bridge', u'type': u'interface', u'name':
u'vnet0'}, {u'index': u'2', u'iface': u'ide', u'name': u'hdc', u'alias':
u'ide0-1-0', u'specParams': {}, u'readonly': 'True', 'deviceType':
u'disk', u'deviceId': u'13f4e285-c161-46f5-9ec3-ba1f92f374d9',
u'address': {u'bus': u'1', u'controller': u'0', u'type': u'drive',
u'target': u'0', u'unit': u'0'}, u'device': u'cdrom', u'shared':
u'false', u'path': '', u'type': u'disk'}, {u'poolID':
u'00000000-0000-0000-0000-000000000000', u'volumeInfo': {'domainID':
u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': u'56461302-0710-4df0-964d-5e7b1ff07828', 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3'},
u'index': u'0', u'iface': u'virtio', u'apparentsize': '26843545600',
u'specParams': {}, u'imageID': u'56461302-0710-4df0-964d-5e7b1ff07828',
u'readonly': 'False', 'deviceType': u'disk', u'shared': u'exclusive',
u'truesize': '3515854848', u'type': u'disk', u'domainID':
u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', u'reqsize': u'0', u'format':
u'raw', u'deviceId': u'56461302-0710-4df0-964d-5e7b1ff07828',
u'address': {u'slot': u'0x06', u'bus': u'0x00', u'domain': u'0x0000',
u'type': u'pci', u'function': u'0x0'}, u'device': u'disk', u'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
u'propagateErrors': u'off', u'optional': u'false', u'name': u'vda',
u'bootOrder': u'1', u'volumeID':
u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', u'alias': u'virtio-disk0',
u'volumeChain': [{'domainID': u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'volumeID':
u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': u'56461302-0710-4df0-964d-5e7b1ff07828', 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3'}]}],
u'guestDiskMapping': {}, u'spiceSecureChannels':
u'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
u'vmType': u'kvm', u'displayIp': '0', u'displaySecurePort': '-1',
u'memSize': 1536, u'displayPort': u'5900', u'cpuType': u'Conroe',
'clientIp': u'', u'statusTime': '4299704920', u'vmName':
u'HostedEngine', u'display': 'vnc'}]
Reactor thread::INFO::2015-11-09
11:17:48,004::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57851
Reactor
thread::DEBUG::2015-11-09
11:17:48,012::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:48,013::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57851
Reactor
thread::DEBUG::2015-11-09
11:17:48,013::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57851)
BindingXMLRPC::INFO::2015-11-09
11:17:48,013::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57851
Thread-4248::INFO::2015-11-09
11:17:48,015::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57851 started
Thread-4248::INFO::2015-11-09
11:17:48,022::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57851 stopped
Thread-303::DEBUG::2015-11-09
11:17:48,143::fileSD::173::Storage.Misc.excCmd::(getReadDelay)
/usr/bin/dd
if=/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_data/0af99439-f140-4636-90f7-f43904735da0/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1 (cwd
None)
Thread-303::DEBUG::2015-11-09
11:17:48,154::fileSD::173::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n461 bytes (461 B) copied,
0.000382969 s, 1.2 MB/s\n'; <rc> = 0
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:48,767::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
dd
if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd
None)
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:48,783::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
copied, 0.00507258 s, 202 MB/s\n'; <rc> = 0
Reactor
thread::INFO::2015-11-09
11:17:49,939::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57852
Reactor
thread::DEBUG::2015-11-09
11:17:49,947::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:49,947::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57852
Reactor
thread::DEBUG::2015-11-09
11:17:49,947::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57852)
BindingXMLRPC::INFO::2015-11-09
11:17:49,948::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57852
Thread-4249::INFO::2015-11-09
11:17:49,949::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57852 started
Thread-4249::DEBUG::2015-11-09
11:17:49,950::bindingxmlrpc::1257::vds::(wrapper) client
[127.0.0.1]::call getCapabilities with ()
{}
Thread-4249::DEBUG::2015-11-09
11:17:49,962::netinfo::454::root::(_dhcp_used) DHCPv6 configuration not
specified for ovirtmgmt.
Thread-4249::DEBUG::2015-11-09
11:17:49,963::netinfo::686::root::(_get_gateway) The gateway
10.10.10.254 is duplicated for the device
ovirtmgmt
Thread-4249::DEBUG::2015-11-09
11:17:49,965::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on enp2s0.
Thread-4249::DEBUG::2015-11-09
11:17:49,965::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on enp2s0.
Thread-4249::DEBUG::2015-11-09
11:17:49,968::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on bond0.
Thread-4249::DEBUG::2015-11-09
11:17:49,968::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on bond0.
Thread-4249::DEBUG::2015-11-09
11:17:49,970::netinfo::686::root::(_get_gateway) The gateway
10.10.10.254 is duplicated for the device
ovirtmgmt
Thread-4249::DEBUG::2015-11-09
11:17:49,971::utils::676::root::(execCmd) /usr/sbin/tc qdisc show (cwd
None)
Thread-4249::DEBUG::2015-11-09
11:17:49,979::utils::694::root::(execCmd) SUCCESS: <err> = ''; <rc> =
0
Thread-4249::DEBUG::2015-11-09
11:17:49,980::utils::676::root::(execCmd) /usr/sbin/tc class show dev
enp2s0 classid 0:1388 (cwd None)
Thread-4249::DEBUG::2015-11-09
11:17:49,989::utils::694::root::(execCmd) SUCCESS: <err> = ''; <rc> =
0
Thread-4249::DEBUG::2015-11-09
11:17:49,993::caps::807::root::(_getKeyPackages) rpm package
('glusterfs-rdma',) not found
Thread-4249::DEBUG::2015-11-09
11:17:49,997::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-object',) not found
Thread-4249::DEBUG::2015-11-09
11:17:49,997::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-proxy',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,001::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-plugin',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,002::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,002::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-container',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,003::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-account',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,003::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-doc',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,005::bindingxmlrpc::1264::vds::(wrapper) return getCapabilities
with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory':
{'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:1954deeb7a38'}],
'FC': []}, 'packages2': {'kernel': {'release': '229.20.1.el7.x86_64',
'buildtime': 1446588607.0, 'version': '3.10.0'}, 'glusterfs-fuse':
{'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'},
'spice-server': {'release': '9.el7_1.3', 'buildtime': 1444691699L,
'version': '0.12.4'}, 'librbd1': {'release': '2.el7', 'buildtime':
1425594433L, 'version': '0.80.7'}, 'vdsm': {'release': '0.el7.centos',
'buildtime': 1446474396L, 'version': '4.17.10.1'}, 'qemu-kvm':
{'release': '29.1.el7', 'buildtime': 1444310806L, 'version': '2.3.0'},
'glusterfs': {'release': '1.el7', 'buildtime': 1444235292L, 'version':
'3.7.5'}, 'libvirt': {'release': '16.el7_1.5', 'buildtime': 1446559281L,
'version': '1.2.8'}, 'qemu-img': {'release': '29.1.el7', 'buildtime':
1444310806L, 'version': '2.3.0'}, 'mom': {'release': '2.el7',
'buildtime': 1442501481L, 'version': '0.5.1'},
'glusterfs-geo-replication': {'release': '1.el7', 'buildtime':
1444235292L, 'version': '3.7.5'}, 'glusterfs-server': {'release':
'1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-cli':
{'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}},
'numaNodeDistance': {'0': [10]}, 'cpuModel': 'Intel(R) Core(TM)2 Quad
CPU Q8400 @ 2.66GHz', 'liveMerge': 'true', 'hooks': {'before_vm_start':
{'50_hostedengine': {'md5': '2a6d96c26a3599812be6cf1a13d9f485'}}},
'vmTypes': ['kvm'], 'selinux': {'mode': '-1'}, 'liveSnapshot': 'true',
'kdumpStatus': 0, 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt',
'addr': '10.10.10.211', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'yes',
'IPADDR': '10.10.10.211', 'HOTPLUG': 'no', 'GATEWAY': '10.10.10.254',
'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0',
'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE':
'Bridge', 'ONBOOT': 'yes'}, 'bridged': True, 'ipv6addrs':
['fe80::6e62:6dff:feb3:3b72/64'], 'gateway': '10.10.10.254', 'dhcpv4':
False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off',
'ipv4addrs': ['10.10.10.211/24'], 'mtu': '1500', 'ipv6gateway': '::',
'ports': ['vnet0', 'enp2s0']}}, 'bridges': {'ovirtmgmt': {'addr':
'10.10.10.211', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'yes', 'IPADDR':
'10.10.10.211', 'HOTPLUG': 'no', 'GATEWAY': '10.10.10.254', 'DELAY':
'0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO':
'none', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT':
'yes'}, 'ipv6addrs': ['fe80::6e62:6dff:feb3:3b72/64'], 'gateway':
'10.10.10.254', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6':
False, 'stp': 'off', 'ipv4addrs': ['10.10.10.211/24'], 'mtu': '1500',
'ipv6gateway': '::', 'ports': ['vnet0', 'enp2s0'], 'opts':
{'multicast_last_member_count': '2', 'hash_elasticity': '4',
'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0',
'multicast_snooping': '1', 'multicast_startup_query_interval': '3125',
'hello_timer': '172', 'multicast_querier_interval': '25500', 'max_age':
'2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected':
'0', 'priority': '32768', 'multicast_membership_interval': '26000',
'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0',
'multicast_startup_query_count': '2', 'nf_call_iptables': '0',
'topology_change': '0', 'hello_time': '200', 'root_id':
'8000.6c626db33b72', 'bridge_id': '8000.6c626db33b72',
'topology_change_timer': '0', 'ageing_time': '30000',
'nf_call_ip6tables': '0', 'gc_timer': '25099', 'nf_call_arptables': '0',
'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100',
'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer':
'0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay':
'0'}}}, 'uuid': 'c2cac9d6-9ed7-44f0-8bbc-eff4c71db7ca', 'onlineCpus':
'0,1,2,3', 'nics': {'enp2s0': {'addr': '', 'ipv6gateway': '::',
'ipv6addrs': ['fe80::6e62:6dff:feb3:3b72/64'], 'mtu': '1500', 'dhcpv4':
False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg':
{'BRIDGE': 'ovirtmgmt', 'IPV6INIT': 'no', 'NM_CONTROLLED': 'no',
'HWADDR': '6c:62:6d:b3:3b:72', 'BOOTPROTO': 'none', 'DEVICE': 'enp2s0',
'ONBOOT': 'yes'}, 'hwaddr': '6c:62:6d:b3:3b:72', 'speed': 1000,
'gateway': ''}}, 'software_revision': '0', 'hostdevPassthrough':
'false', 'clusterLevels': ['3.4', '3.5', '3.6'], 'cpuFlags':
'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,xsave,lahf_lm,dtherm,tpr_shadow,vnmi,flexpriority,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:1954deeb7a38',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.4', '3.5', '3.6'],
'autoNumaBalancing': 0, 'additionalFeatures': ['GLUSTER_SNAPSHOT',
'GLUSTER_GEO_REPLICATION', 'GLUSTER_BRICK_MANAGEMENT'], 'reservedMem':
'321', 'bondings': {'bond0': {'ipv4addrs': [], 'addr': '', 'cfg':
{'BOOTPROTO': 'none'}, 'ipv6addrs': [], 'active_slave': '', 'mtu':
'1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'slaves': [],
'hwaddr': 'ba:5f:22:a3:17:07', 'ipv6gateway': '::', 'gateway': '',
'opts': {}}}, 'software_version': '4.17', 'memSize': '3782', 'cpuSpeed':
'2670.000', 'numaNodes': {'0': {'totalMemory': '3782', 'cpus': [0, 1, 2,
3]}}, 'cpuSockets': '1', 'vlans': {}, 'lastClientIface': 'lo',
'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65',
'version_name': 'Snow Man', 'cpuThreads': '4', 'emulatedMachines':
['pc-i440fx-rhel7.1.0', 'rhel6.3.0', 'pc-q35-rhel7.2.0',
'pc-i440fx-rhel7.0.0', 'rhel6.1.0', 'rhel6.6.0', 'rhel6.2.0', 'pc',
'pc-q35-rhel7.0.0', 'pc-q35-rhel7.1.0', 'q35', 'pc-i440fx-rhel7.2.0',
'rhel6.4.0', 'rhel6.0.0', 'rhel6.5.0'], 'rngSources': ['random'],
'operatingSystem': {'release': '1.1503.el7.centos.2.8', 'version': '7',
'name': 'RHEL'}, 'lastClient':
'127.0.0.1'}}
Thread-4249::INFO::2015-11-09
11:17:50,020::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57852
stopped
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:50,797::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
dd
if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd
None)
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:50,815::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
copied, 0.00511026 s, 200 MB/s\n'; <rc> = 0
Reactor
thread::INFO::2015-11-09
11:17:52,098::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57853
Reactor
thread::DEBUG::2015-11-09
11:17:52,106::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,107::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57853
Reactor
thread::DEBUG::2015-11-09
11:17:52,107::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57853)
BindingXMLRPC::INFO::2015-11-09
11:17:52,108::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57853
Thread-4250::INFO::2015-11-09
11:17:52,110::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57853 started
Thread-4250::DEBUG::2015-11-09
11:17:52,111::bindingxmlrpc::1257::vds::(wrapper) client
[127.0.0.1]::call getHardwareInfo with ()
{}
Thread-4250::DEBUG::2015-11-09
11:17:52,112::bindingxmlrpc::1264::vds::(wrapper) return getHardwareInfo
with {'status': {'message': 'Done', 'code': 0}, 'info':
{'systemProductName': 'MS-7529', 'systemSerialNumber': 'To Be Filled By
O.E.M.', 'systemFamily': 'To Be Filled By O.E.M.', 'systemVersion':
'1.0', 'systemUUID': '00000000-0000-0000-0000-6C626DB33B72',
'systemManufacturer': 'MICRO-STAR INTERNATIONAL
CO.,LTD'}}
Thread-4250::INFO::2015-11-09
11:17:52,114::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57853 stopped
Reactor thread::INFO::2015-11-09
11:17:52,116::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57854
Reactor
thread::DEBUG::2015-11-09
11:17:52,124::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,124::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57854
BindingXMLRPC::INFO::2015-11-09
11:17:52,125::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57854
Reactor thread::DEBUG::2015-11-09
11:17:52,125::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57854)
Thread-4251::INFO::2015-11-09
11:17:52,128::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57854 started
Thread-4251::DEBUG::2015-11-09
11:17:52,129::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4251::DEBUG::2015-11-09
11:17:52,130::task::595::Storage.TaskManager.Task::(_updateState)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::moving from state init ->
state preparing
Thread-4251::INFO::2015-11-09
11:17:52,130::logUtils::48::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id':
'2c69bdcf-793b-4fda-b326-b8aa6c33ade0', 'vfs_type': 'glusterfs',
'connection': 'ovirt02.mafia.kru:/engine', 'user': 'kvm'}],
options=None)
Thread-4251::DEBUG::2015-11-09
11:17:52,132::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*
Thread-4251::DEBUG::2015-11-09
11:17:52,146::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD
uuids: (u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
u'0af99439-f140-4636-90f7-f43904735da0')
Thread-4251::DEBUG::2015-11-09
11:17:52,147::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
{b4c488af-9d2f-4b7b-a6f6-74a0bac06c41: storage.glusterSD.findDomain,
0af99439-f140-4636-90f7-f43904735da0:
storage.glusterSD.findDomain}
Thread-4251::INFO::2015-11-09
11:17:52,147::logUtils::51::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 0,
'id':
'2c69bdcf-793b-4fda-b326-b8aa6c33ade0'}]}
Thread-4251::DEBUG::2015-11-09
11:17:52,147::task::1191::Storage.TaskManager.Task::(prepare)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::finished: {'statuslist':
[{'status': 0, 'id':
'2c69bdcf-793b-4fda-b326-b8aa6c33ade0'}]}
Thread-4251::DEBUG::2015-11-09
11:17:52,147::task::595::Storage.TaskManager.Task::(_updateState)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::moving from state preparing
-> state finished
Thread-4251::DEBUG::2015-11-09
11:17:52,147::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{}
Thread-4251::DEBUG::2015-11-09
11:17:52,148::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4251::DEBUG::2015-11-09
11:17:52,148::task::993::Storage.TaskManager.Task::(_decref)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::ref 0 aborting
False
Thread-4251::INFO::2015-11-09
11:17:52,149::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57854 stopped
Reactor thread::INFO::2015-11-09
11:17:52,150::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57855
Reactor
thread::DEBUG::2015-11-09
11:17:52,158::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,158::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57855
BindingXMLRPC::INFO::2015-11-09
11:17:52,159::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57855
Reactor thread::DEBUG::2015-11-09
11:17:52,159::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57855)
Thread-4255::INFO::2015-11-09
11:17:52,162::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57855 started
Thread-4255::DEBUG::2015-11-09
11:17:52,162::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4255::DEBUG::2015-11-09
11:17:52,163::task::595::Storage.TaskManager.Task::(_updateState)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::moving from state init ->
state preparing
Thread-4255::INFO::2015-11-09
11:17:52,163::logUtils::48::dispatcher::(wrapper) Run and protect:
getStorageDomainStats(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
options=None)
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`41531754-8ba9-4fc4-8788-d4d67fa33e5c`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '2848' at
'getStorageDomainStats'
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`41531754-8ba9-4fc4-8788-d4d67fa33e5c`::Granted
request
Thread-4255::DEBUG::2015-11-09
11:17:52,165::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4255::DEBUG::2015-11-09
11:17:52,165::task::993::Storage.TaskManager.Task::(_decref)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::ref 1 aborting
False
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::750::Storage.SamplingMethod::(__call__) Trying to
enter sampling method
(storage.sdc.refreshStorage)
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::753::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::750::Storage.SamplingMethod::(__call__) Trying to
enter sampling method
(storage.iscsi.rescan)
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::753::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-4255::DEBUG::2015-11-09
11:17:52,166::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan,
this will take up to 30 seconds
Thread-4255::DEBUG::2015-11-09
11:17:52,166::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo
-n /sbin/iscsiadm -m session -R (cwd
None)
Thread-4255::DEBUG::2015-11-09
11:17:52,183::misc::760::Storage.SamplingMethod::(__call__) Returning
last result
Thread-4255::DEBUG::2015-11-09
11:17:52,183::misc::750::Storage.SamplingMethod::(__call__) Trying to
enter sampling method
(storage.hba.rescan)
Thread-4255::DEBUG::2015-11-09
11:17:52,184::misc::753::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-4255::DEBUG::2015-11-09
11:17:52,184::hba::56::Storage.HBA::(rescan) Starting
scan
Thread-4255::DEBUG::2015-11-09
11:17:52,295::hba::62::Storage.HBA::(rescan) Scan
finished
Thread-4255::DEBUG::2015-11-09
11:17:52,296::misc::760::Storage.SamplingMethod::(__call__) Returning
last result
Thread-4255::DEBUG::2015-11-09
11:17:52,296::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo
-n /usr/sbin/multipath (cwd None)
Thread-4255::DEBUG::2015-11-09
11:17:52,362::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS:
<err> = ''; <rc> = 0
Thread-4255::DEBUG::2015-11-09
11:17:52,362::utils::676::root::(execCmd) /sbin/udevadm settle
--timeout=5 (cwd None)
Thread-4255::DEBUG::2015-11-09
11:17:52,371::utils::694::root::(execCmd) SUCCESS: <err> = ''; <rc> =
0
Thread-4255::DEBUG::2015-11-09
11:17:52,372::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,372::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,374::misc::760::Storage.SamplingMethod::(__call__) Returning
last result
Thread-4255::DEBUG::2015-11-09
11:17:52,386::fileSD::157::Storage.StorageDomainManifest::(__init__)
Reading domain in path
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
Thread-4255::DEBUG::2015-11-09
11:17:52,387::persistentDict::192::Storage.PersistentDict::(__init__)
Created a persistent dict with FileMetadataRW
backend
Thread-4255::DEBUG::2015-11-09
11:17:52,395::persistentDict::234::Storage.PersistentDict::(refresh)
read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage',
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=',
'REMOTE_PATH=ovirt02.mafia.kru:/engine', 'ROLE=Regular',
'SDUUID=b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'TYPE=GLUSTERFS',
'VERSION=3',
'_SHA_CKSUM=cb09606ada74ed4155ad158923dd930264780fc8']
Thread-4255::DEBUG::2015-11-09
11:17:52,398::fileSD::647::Storage.StorageDomain::(imageGarbageCollector)
Removing remnants of deleted images []
Thread-4255::INFO::2015-11-09
11:17:52,399::sd::442::Storage.StorageDomain::(_registerResourceNamespaces)
Resource namespace b4c488af-9d2f-4b7b-a6f6-74a0bac06c41_imageNS already
registered
Thread-4255::INFO::2015-11-09
11:17:52,399::sd::450::Storage.StorageDomain::(_registerResourceNamespaces)
Resource namespace b4c488af-9d2f-4b7b-a6f6-74a0bac06c41_volumeNS
already registered
Thread-4255::INFO::2015-11-09
11:17:52,400::logUtils::51::dispatcher::(wrapper) Run and protect:
getStorageDomainStats, Return response: {'stats': {'mdasize': 0,
'mdathreshold': True, 'mdavalid': True, 'diskfree': '210878988288',
'disktotal': '214643507200', 'mdafree':
0}}
Thread-4255::DEBUG::2015-11-09
11:17:52,401::task::1191::Storage.TaskManager.Task::(prepare)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::finished: {'stats':
{'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree':
'210878988288', 'disktotal': '214643507200', 'mdafree':
0}}
Thread-4255::DEBUG::2015-11-09
11:17:52,401::task::595::Storage.TaskManager.Task::(_updateState)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::moving from state preparing
-> state finished
Thread-4255::DEBUG::2015-11-09
11:17:52,401::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4255::DEBUG::2015-11-09
11:17:52,401::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4255::DEBUG::2015-11-09
11:17:52,402::task::993::Storage.TaskManager.Task::(_decref)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::ref 0 aborting
False
Thread-4255::INFO::2015-11-09
11:17:52,404::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57855 stopped
Reactor thread::INFO::2015-11-09
11:17:52,405::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57856
Reactor
thread::DEBUG::2015-11-09
11:17:52,413::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,414::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57856
BindingXMLRPC::INFO::2015-11-09
11:17:52,414::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57856
Reactor thread::DEBUG::2015-11-09
11:17:52,414::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57856)
Thread-4259::INFO::2015-11-09
11:17:52,417::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57856 started
Thread-4259::DEBUG::2015-11-09
11:17:52,418::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4259::DEBUG::2015-11-09
11:17:52,418::task::595::Storage.TaskManager.Task::(_updateState)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::moving from state init ->
state preparing
Thread-4259::INFO::2015-11-09
11:17:52,419::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='56461302-0710-4df0-964d-5e7b1ff07828',
leafUUID='8f8ee034-de86-4438-b6eb-9109faa8b3d3')
Thread-4259::DEBUG::2015-11-09
11:17:52,419::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`bec7c8c3-42b9-4acb-88cf-841d9dc28fb0`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4259::DEBUG::2015-11-09
11:17:52,419::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4259::DEBUG::2015-11-09
11:17:52,420::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4259::DEBUG::2015-11-09
11:17:52,420::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`bec7c8c3-42b9-4acb-88cf-841d9dc28fb0`::Granted
request
Thread-4259::DEBUG::2015-11-09
11:17:52,420::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4259::DEBUG::2015-11-09
11:17:52,420::task::993::Storage.TaskManager.Task::(_decref)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::ref 1 aborting
False
Thread-4259::DEBUG::2015-11-09
11:17:52,445::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3
Thread-4259::DEBUG::2015-11-09
11:17:52,446::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4259::WARNING::2015-11-09
11:17:52,446::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4259::DEBUG::2015-11-09
11:17:52,446::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828
Thread-4259::DEBUG::2015-11-09
11:17:52,447::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828
Thread-4259::DEBUG::2015-11-09
11:17:52,448::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
8f8ee034-de86-4438-b6eb-9109faa8b3d3
Thread-4259::INFO::2015-11-09
11:17:52,450::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': '56461302-0710-4df0-964d-5e7b1ff07828'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID':
'56461302-0710-4df0-964d-5e7b1ff07828'}]}
Thread-4259::DEBUG::2015-11-09
11:17:52,450::task::1191::Storage.TaskManager.Task::(prepare)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': '56461302-0710-4df0-964d-5e7b1ff07828'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID':
'56461302-0710-4df0-964d-5e7b1ff07828'}]}
Thread-4259::DEBUG::2015-11-09
11:17:52,450::task::595::Storage.TaskManager.Task::(_updateState)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::moving from state preparing
-> state finished
Thread-4259::DEBUG::2015-11-09
11:17:52,450::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4259::DEBUG::2015-11-09
11:17:52,450::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4259::DEBUG::2015-11-09
11:17:52,451::task::993::Storage.TaskManager.Task::(_decref)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::ref 0 aborting
False
Thread-4259::INFO::2015-11-09
11:17:52,454::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57856 stopped
Reactor thread::INFO::2015-11-09
11:17:52,454::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57857
Reactor
thread::DEBUG::2015-11-09
11:17:52,463::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,463::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57857
Reactor
thread::DEBUG::2015-11-09
11:17:52,464::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57857)
BindingXMLRPC::INFO::2015-11-09
11:17:52,464::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57857
Thread-4260::INFO::2015-11-09
11:17:52,466::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57857 started
Thread-4260::DEBUG::2015-11-09
11:17:52,467::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4260::DEBUG::2015-11-09
11:17:52,467::task::595::Storage.TaskManager.Task::(_updateState)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::moving from state init ->
state preparing
Thread-4260::INFO::2015-11-09
11:17:52,467::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='fd81353f-b654-4493-bcaf-2f417849b830',
leafUUID='8bb29fcb-c109-4f0a-a227-3819b6ecfdd9')
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`974119cd-1351-46e9-8062-ffb1298c4ac9`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`974119cd-1351-46e9-8062-ffb1298c4ac9`::Granted
request
Thread-4260::DEBUG::2015-11-09
11:17:52,469::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4260::DEBUG::2015-11-09
11:17:52,469::task::993::Storage.TaskManager.Task::(_decref)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::ref 1 aborting
False
Thread-4260::DEBUG::2015-11-09
11:17:52,485::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9
Thread-4260::DEBUG::2015-11-09
11:17:52,486::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4260::WARNING::2015-11-09
11:17:52,487::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4260::DEBUG::2015-11-09
11:17:52,487::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830
Thread-4260::DEBUG::2015-11-09
11:17:52,487::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830
Thread-4260::DEBUG::2015-11-09
11:17:52,488::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
8bb29fcb-c109-4f0a-a227-3819b6ecfdd9
Thread-4260::INFO::2015-11-09
11:17:52,490::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID':
'fd81353f-b654-4493-bcaf-2f417849b830'}]}
Thread-4260::DEBUG::2015-11-09
11:17:52,490::task::1191::Storage.TaskManager.Task::(prepare)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID':
'fd81353f-b654-4493-bcaf-2f417849b830'}]}
Thread-4260::DEBUG::2015-11-09
11:17:52,490::task::595::Storage.TaskManager.Task::(_updateState)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::moving from state preparing
-> state finished
Thread-4260::DEBUG::2015-11-09
11:17:52,490::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4260::DEBUG::2015-11-09
11:17:52,492::task::993::Storage.TaskManager.Task::(_decref)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::ref 0 aborting
False
Thread-4260::INFO::2015-11-09
11:17:52,494::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57857 stopped
Reactor thread::INFO::2015-11-09
11:17:52,494::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57858
Reactor
thread::DEBUG::2015-11-09
11:17:52,503::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,504::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57858
BindingXMLRPC::INFO::2015-11-09
11:17:52,504::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57858
Reactor thread::DEBUG::2015-11-09
11:17:52,504::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57858)
Thread-4261::INFO::2015-11-09
11:17:52,507::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57858 started
Thread-4261::DEBUG::2015-11-09
11:17:52,508::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4261::DEBUG::2015-11-09
11:17:52,508::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d39463fd-486f-4280-903a-51b72862b648`::moving from state init ->
state preparing
Thread-4261::INFO::2015-11-09
11:17:52,509::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='0e1c20d1-94aa-4003-8e12-0dbbf06a6af8',
leafUUID='3fc3362d-ab6d-4e06-bd72-82d5750c7095')
Thread-4261::DEBUG::2015-11-09
11:17:52,509::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`240c2aba-6c2e-44da-890d-c3d605e1933f`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4261::DEBUG::2015-11-09
11:17:52,509::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4261::DEBUG::2015-11-09
11:17:52,509::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4261::DEBUG::2015-11-09
11:17:52,510::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`240c2aba-6c2e-44da-890d-c3d605e1933f`::Granted
request
Thread-4261::DEBUG::2015-11-09
11:17:52,510::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`d39463fd-486f-4280-903a-51b72862b648`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4261::DEBUG::2015-11-09
11:17:52,510::task::993::Storage.TaskManager.Task::(_decref)
Task=`d39463fd-486f-4280-903a-51b72862b648`::ref 1 aborting
False
Thread-4261::DEBUG::2015-11-09
11:17:52,526::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095
Thread-4261::DEBUG::2015-11-09
11:17:52,528::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4261::WARNING::2015-11-09
11:17:52,528::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4261::DEBUG::2015-11-09
11:17:52,528::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8
Thread-4261::DEBUG::2015-11-09
11:17:52,528::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8
Thread-4261::DEBUG::2015-11-09
11:17:52,530::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
3fc3362d-ab6d-4e06-bd72-82d5750c7095
Thread-4261::INFO::2015-11-09
11:17:52,531::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID': '0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID':
'0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}]}
Thread-4261::DEBUG::2015-11-09
11:17:52,531::task::1191::Storage.TaskManager.Task::(prepare)
Task=`d39463fd-486f-4280-903a-51b72862b648`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID': '0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID':
'0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}]}
Thread-4261::DEBUG::2015-11-09
11:17:52,532::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d39463fd-486f-4280-903a-51b72862b648`::moving from state preparing
-> state finished
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4261::DEBUG::2015-11-09
11:17:52,533::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4261::DEBUG::2015-11-09
11:17:52,533::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4261::DEBUG::2015-11-09
11:17:52,533::task::993::Storage.TaskManager.Task::(_decref)
Task=`d39463fd-486f-4280-903a-51b72862b648`::ref 0 aborting
False
Thread-4261::INFO::2015-11-09
11:17:52,535::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57858 stopped
Reactor thread::INFO::2015-11-09
11:17:52,536::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57859
Reactor
thread::DEBUG::2015-11-09
11:17:52,544::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,545::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57859
Reactor
thread::DEBUG::2015-11-09
11:17:52,545::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57859)
BindingXMLRPC::INFO::2015-11-09
11:17:52,545::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57859
Thread-4262::INFO::2015-11-09
11:17:52,548::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57859 started
Thread-4262::DEBUG::2015-11-09
11:17:52,548::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4262::DEBUG::2015-11-09
11:17:52,549::task::595::Storage.TaskManager.Task::(_updateState)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::moving from state init ->
state preparing
Thread-4262::INFO::2015-11-09
11:17:52,549::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='350fb787-049a-4174-8914-f371aabfa72c',
leafUUID='02c5d59d-638c-4672-814d-d734e334e24a')
Thread-4262::DEBUG::2015-11-09
11:17:52,549::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`fd9ea6d0-3a31-4ec6-a74c-8b84b2e51746`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4262::DEBUG::2015-11-09
11:17:52,550::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4262::DEBUG::2015-11-09
11:17:52,550::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4262::DEBUG::2015-11-09
11:17:52,550::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`fd9ea6d0-3a31-4ec6-a74c-8b84b2e51746`::Granted
request
Thread-4262::DEBUG::2015-11-09
11:17:52,550::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4262::DEBUG::2015-11-09
11:17:52,551::task::993::Storage.TaskManager.Task::(_decref)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::ref 1 aborting
False
Thread-4262::DEBUG::2015-11-09
11:17:52,566::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a
Thread-4262::DEBUG::2015-11-09
11:17:52,568::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4262::WARNING::2015-11-09
11:17:52,568::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4262::DEBUG::2015-11-09
11:17:52,568::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c
Thread-4262::DEBUG::2015-11-09
11:17:52,568::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c
Thread-4262::DEBUG::2015-11-09
11:17:52,570::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
02c5d59d-638c-4672-814d-d734e334e24a
Thread-4262::INFO::2015-11-09
11:17:52,572::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID':
'350fb787-049a-4174-8914-f371aabfa72c'}]}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::task::1191::Storage.TaskManager.Task::(prepare)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID':
'350fb787-049a-4174-8914-f371aabfa72c'}]}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::task::595::Storage.TaskManager.Task::(_updateState)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::moving from state preparing
-> state finished
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4262::DEBUG::2015-11-09
11:17:52,574::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4262::DEBUG::2015-11-09
11:17:52,574::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4262::DEBUG::2015-11-09
11:17:52,574::task::993::Storage.TaskManager.Task::(_decref)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::ref 0 aborting
False
Thread-4262::INFO::2015-11-09
11:17:52,576::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57859 stopped
Reactor thread::INFO::2015-11-09
11:17:52,610::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57860
Reactor
thread::DEBUG::2015-11-09
11:17:52,619::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,619::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57860
BindingXMLRPC::INFO::2015-11-09
11:17:52,620::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57860
Reactor thread::DEBUG::2015-11-09
11:17:52,620::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57860)
Thread-4263::INFO::2015-11-09
11:17:52,623::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57860 started
Thread-4263::DEBUG::2015-11-09
11:17:52,623::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4263::DEBUG::2015-11-09
11:17:52,624::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::moving from state init ->
state preparing
Thread-4263::INFO::2015-11-09
11:17:52,624::logUtils::48::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-4263::INFO::2015-11-09
11:17:52,624::logUtils::51::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41':
{'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay':
'0.000392118', 'lastCheck': '6.0', 'valid': True},
u'0af99439-f140-4636-90f7-f43904735da0': {'code': 0, 'actual': True,
'version': 3, 'acquired': True, 'delay': '0.000382969', 'lastCheck':
'4.5', 'valid': True}}
Thread-4263::DEBUG::2015-11-09
11:17:52,624::task::1191::Storage.TaskManager.Task::(prepare)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::finished:
{'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': {'code': 0, 'actual': True,
'version': 3, 'acquired': True, 'delay': '0.000392118', 'lastCheck':
'6.0', 'valid': True}, u'0af99439-f140-4636-90f7-f43904735da0': {'code':
0, 'actual': True, 'version': 3, 'acquired': True, 'delay':
'0.000382969', 'lastCheck': '4.5', 'valid':
True}}
Thread-4263::DEBUG::2015-11-09
11:17:52,625::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::moving from state preparing
-> state finished
Thread-4263::DEBUG::2015-11-09
11:17:52,625::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{}
Thread-4263::DEBUG::2015-11-09
11:17:52,625::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4263::DEBUG::2015-11-09
11:17:52,625::task::993::Storage.TaskManager.Task::(_decref)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::ref 0 aborting
False
Thread-4263::INFO::2015-11-09
11:17:52,627::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57860
stopped
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:52,829::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
dd
if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd
None)
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:52,845::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
copied, 0.00494757 s, 207 MB/s\n'; <rc> = 0
--
Florent BELLO
Service Informatique
informatique(a)ville-kourou.fr
0594 22 31 22
Mairie de Kourou
anager::635::Storage.ResourceManager::(releaseResource) Released resource '=
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0 active users)<br />Thread-=
4259::DEBUG::2015-11-09 11:17:52,451::resourceManager::641::Storage.Resourc=
eManager::(releaseResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0b=
ac06c41' is free, finding out if anyone is waiting for it.<br />Thread-4259=
::DEBUG::2015-11-09 11:17:52,451::resourceManager::649::Storage.ResourceMan=
ager::(releaseResource) No one is waiting for resource 'Storage.b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41', Clearing records.<br />Thread-4259::DEBUG::2015=
-11-09 11:17:52,451::task::993::Storage.TaskManager.Task::(_decref) Task=3D=
`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::ref 0 aborting False<br />Thread-42=
59::INFO::2015-11-09 11:17:52,454::xmlrpc::92::vds.XMLRPCServer::(_process_=
requests) Request handler for 127.0.0.1:57856 stopped<br />Reactor thread::=
INFO::2015-11-09 11:17:52,454::protocoldetector::72::ProtocolDetector.Accep=
torImpl::(handle_accept) Accepting connection from 127.0.0.1:57857<br />Rea=
ctor thread::DEBUG::2015-11-09 11:17:52,463::protocoldetector::82::Protocol=
Detector.Detector::(__init__) Using required_size=3D11<br />Reactor thread:=
:INFO::2015-11-09 11:17:52,463::protocoldetector::118::ProtocolDetector.Det=
ector::(handle_read) Detected protocol xml from 127.0.0.1:57857<br />Reacto=
r thread::DEBUG::2015-11-09 11:17:52,464::bindingxmlrpc::1297::XmlDetector:=
:(handle_socket) xml over http detected from ('127.0.0.1', 57857)<br />Bind=
ingXMLRPC::INFO::2015-11-09 11:17:52,464::xmlrpc::73::vds.XMLRPCServer::(ha=
ndle_request) Starting request handler for 127.0.0.1:57857<br />Thread-4260=
::INFO::2015-11-09 11:17:52,466::xmlrpc::84::vds.XMLRPCServer::(_process_re=
quests) Request handler for 127.0.0.1:57857 started<br />Thread-4260::DEBUG=
::2015-11-09 11:17:52,467::bindingxmlrpc::325::vds::(wrapper) client [127=
=2E0.0.1]<br />Thread-4260::DEBUG::2015-11-09 11:17:52,467::task::595::Stor=
age.TaskManager.Task::(_updateState) Task=3D`aed16a50-ede9-4ff5-92ef-356692=
fd56ae`::moving from state init -> state preparing<br />Thread-4260::INF=
O::2015-11-09 11:17:52,467::logUtils::48::dispatcher::(wrapper) Run and pro=
tect: prepareImage(sdUUID=3D'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', spUUID=
=3D'00000000-0000-0000-0000-000000000000', imgUUID=3D'fd81353f-b654-4493-bc=
af-2f417849b830', leafUUID=3D'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9')<br />T=
hread-4260::DEBUG::2015-11-09 11:17:52,468::resourceManager::198::Storage=
=2EResourceManager.Request::(__init__) ResName=3D`Storage.b4c488af-9d2f-4b7=
b-a6f6-74a0bac06c41`ReqID=3D`974119cd-1351-46e9-8062-ffb1298c4ac9`::Request=
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at 'prepareImage'=
<br />Thread-4260::DEBUG::2015-11-09 11:17:52,468::resourceManager::542::St=
orage.ResourceManager::(registerResource) Trying to register resource 'Stor=
age.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type 'shared'<br />Threa=
d-4260::DEBUG::2015-11-09 11:17:52,468::resourceManager::601::Storage.Resou=
rceManager::(registerResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74=
a0bac06c41' is free. Now locking as 'shared' (1 active user)<br />Thread-42=
60::DEBUG::2015-11-09 11:17:52,468::resourceManager::238::Storage.ResourceM=
anager.Request::(grant) ResName=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac0=
6c41`ReqID=3D`974119cd-1351-46e9-8062-ffb1298c4ac9`::Granted request<br />T=
hread-4260::DEBUG::2015-11-09 11:17:52,469::task::827::Storage.TaskManager=
=2ETask::(resourceAcquired) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::=
_resourcesAcquired: Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 (shared)<b=
r />Thread-4260::DEBUG::2015-11-09 11:17:52,469::task::993::Storage.TaskMan=
ager.Task::(_decref) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::ref 1 a=
borting False<br />Thread-4260::DEBUG::2015-11-09 11:17:52,485::fileSD::536=
::Storage.StorageDomain::(activateVolumes) Fixing permissions on /rhev/data=
-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a=
0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a2=
27-3819b6ecfdd9<br />Thread-4260::DEBUG::2015-11-09 11:17:52,486::fileUtils=
::143::Storage.fileUtils::(createdir) Creating directory: /var/run/vdsm/sto=
rage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 mode: None<br />Thread-4260::WARN=
ING::2015-11-09 11:17:52,487::fileUtils::152::Storage.fileUtils::(createdir=
) Dir /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already ex=
ists<br />Thread-4260::DEBUG::2015-11-09 11:17:52,487::fileSD::511::Storage=
=2EStorageDomain::(createImageLinks) Creating symlink from /rhev/data-cente=
r/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06=
c41/images/fd81353f-b654-4493-bcaf-2f417849b830 to /var/run/vdsm/storage/b4=
c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830<br =
/>Thread-4260::DEBUG::2015-11-09 11:17:52,487::fileSD::516::Storage.Storage=
Domain::(createImageLinks) img run dir already exists: /var/run/vdsm/storag=
e/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830=
<br />Thread-4260::DEBUG::2015-11-09 11:17:52,488::fileVolume::535::Storage=
=2EVolume::(validateVolumePath) validate path for 8bb29fcb-c109-4f0a-a227-3=
819b6ecfdd9<br />Thread-4260::INFO::2015-11-09 11:17:52,490::logUtils::51::=
dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'inf=
o': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',=
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path': u'/=
var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-449=
3-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'imgVolumesInfo'=
: [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', =
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}]}<br />Threa=
d-4260::DEBUG::2015-11-09 11:17:52,490::task::1191::Storage.TaskManager.Tas=
k::(prepare) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::finished: {'inf=
o': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',=
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path': u'/=
var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-449=
3-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'imgVolumesInfo'=
: [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', =
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}]}<br />Threa=
d-4260::DEBUG::2015-11-09 11:17:52,490::task::595::Storage.TaskManager.Task=
::(_updateState) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::moving from=
state preparing -> state finished<br />Thread-4260::DEBUG::2015-11-09 1=
1:17:52,490::resourceManager::940::Storage.ResourceManager.Owner::(releaseA=
ll) Owner.releaseAll requests {} resources {'Storage.b4c488af-9d2f-4b7b-a6f=
6-74a0bac06c41': < ResourceRef 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac0=
6c41', isValid: 'True' obj: 'None'>}<br />Thread-4260::DEBUG::2015-11-09=
11:17:52,491::resourceManager::977::Storage.ResourceManager.Owner::(cancel=
All) Owner.cancelAll requests {}<br />Thread-4260::DEBUG::2015-11-09 11:17:=
52,491::resourceManager::616::Storage.ResourceManager::(releaseResource) Tr=
ying to release resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'<br =
/>Thread-4260::DEBUG::2015-11-09 11:17:52,491::resourceManager::635::Storag=
e.ResourceManager::(releaseResource) Released resource 'Storage.b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41' (0 active users)<br />Thread-4260::DEBUG::2015-1=
1-09 11:17:52,491::resourceManager::641::Storage.ResourceManager::(releaseR=
esource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free, f=
inding out if anyone is waiting for it.<br />Thread-4260::DEBUG::2015-11-09=
11:17:52,491::resourceManager::649::Storage.ResourceManager::(releaseResou=
rce) No one is waiting for resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0ba=
c06c41', Clearing records.<br />Thread-4260::DEBUG::2015-11-09 11:17:52,492=
::task::993::Storage.TaskManager.Task::(_decref) Task=3D`aed16a50-ede9-4ff5=
-92ef-356692fd56ae`::ref 0 aborting False<br />Thread-4260::INFO::2015-11-0=
9 11:17:52,494::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request h=
andler for 127.0.0.1:57857 stopped<br />Reactor thread::INFO::2015-11-09 11=
:17:52,494::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_ac=
cept) Accepting connection from 127.0.0.1:57858<br />Reactor thread::DEBUG:=
:2015-11-09 11:17:52,503::protocoldetector::82::ProtocolDetector.Detector::=
(__init__) Using required_size=3D11<br />Reactor thread::INFO::2015-11-09 1=
1:17:52,504::protocoldetector::118::ProtocolDetector.Detector::(handle_read=
) Detected protocol xml from 127.0.0.1:57858<br />BindingXMLRPC::INFO::2015=
-11-09 11:17:52,504::xmlrpc::73::vds.XMLRPCServer::(handle_request) Startin=
g request handler for 127.0.0.1:57858<br />Reactor thread::DEBUG::2015-11-0=
9 11:17:52,504::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over =
http detected from ('127.0.0.1', 57858)<br />Thread-4261::INFO::2015-11-09 =
11:17:52,507::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request han=
dler for 127.0.0.1:57858 started<br />Thread-4261::DEBUG::2015-11-09 11:17:=
52,508::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]<br />Thread-4=
261::DEBUG::2015-11-09 11:17:52,508::task::595::Storage.TaskManager.Task::(=
_updateState) Task=3D`d39463fd-486f-4280-903a-51b72862b648`::moving from st=
ate init -> state preparing<br />Thread-4261::INFO::2015-11-09 11:17:52,=
509::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUU=
ID=3D'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', spUUID=3D'00000000-0000-0000-0=
000-000000000000', imgUUID=3D'0e1c20d1-94aa-4003-8e12-0dbbf06a6af8', leafUU=
ID=3D'3fc3362d-ab6d-4e06-bd72-82d5750c7095')<br />Thread-4261::DEBUG::2015-=
11-09 11:17:52,509::resourceManager::198::Storage.ResourceManager.Request::=
(__init__) ResName=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D=
`240c2aba-6c2e-44da-890d-c3d605e1933f`::Request was made in '/usr/share/vds=
m/storage/hsm.py' line '3205' at 'prepareImage'<br />Thread-4261::DEBUG::20=
15-11-09 11:17:52,509::resourceManager::542::Storage.ResourceManager::(regi=
sterResource) Trying to register resource 'Storage.b4c488af-9d2f-4b7b-a6f6-=
74a0bac06c41' for lock type 'shared'<br />Thread-4261::DEBUG::2015-11-09 11=
:17:52,509::resourceManager::601::Storage.ResourceManager::(registerResourc=
e) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now loc=
king as 'shared' (1 active user)<br />Thread-4261::DEBUG::2015-11-09 11:17:=
52,510::resourceManager::238::Storage.ResourceManager.Request::(grant) ResN=
ame=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D`240c2aba-6c2e-=
44da-890d-c3d605e1933f`::Granted request<br />Thread-4261::DEBUG::2015-11-0=
9 11:17:52,510::task::827::Storage.TaskManager.Task::(resourceAcquired) Tas=
k=3D`d39463fd-486f-4280-903a-51b72862b648`::_resourcesAcquired: Storage.b4c=
488af-9d2f-4b7b-a6f6-74a0bac06c41 (shared)<br />Thread-4261::DEBUG::2015-11=
-09 11:17:52,510::task::993::Storage.TaskManager.Task::(_decref) Task=3D`d3=
9463fd-486f-4280-903a-51b72862b648`::ref 1 aborting False<br />Thread-4261:=
:DEBUG::2015-11-09 11:17:52,526::fileSD::536::Storage.StorageDomain::(activ=
ateVolumes) Fixing permissions on /rhev/data-center/mnt/glusterSD/ovirt02=
=2Emafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-9=
4aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095<br />Thread=
-4261::DEBUG::2015-11-09 11:17:52,528::fileUtils::143::Storage.fileUtils::(=
createdir) Creating directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f=
6-74a0bac06c41 mode: None<br />Thread-4261::WARNING::2015-11-09 11:17:52,52=
8::fileUtils::152::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage=
/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already exists<br />Thread-4261::DEBU=
G::2015-11-09 11:17:52,528::fileSD::511::Storage.StorageDomain::(createImag=
eLinks) Creating symlink from /rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-40=
03-8e12-0dbbf06a6af8 to /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0b=
ac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8<br />Thread-4261::DEBUG::2015-=
11-09 11:17:52,528::fileSD::516::Storage.StorageDomain::(createImageLinks) =
img run dir already exists: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-7=
4a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8<br />Thread-4261::DEBUG::2=
015-11-09 11:17:52,530::fileVolume::535::Storage.Volume::(validateVolumePat=
h) validate path for 3fc3362d-ab6d-4e06-bd72-82d5750c7095<br />Thread-4261:=
:INFO::2015-11-09 11:17:52,531::logUtils::51::dispatcher::(wrapper) Run and=
protect: prepareImage, Return response: {'info': {'domainID': 'b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'=
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7=
b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab=
6d-4e06-bd72-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7=
095', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_eng=
ine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0db=
bf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1=
-94aa-4003-8e12-0dbbf06a6af8'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6=
d-4e06-bd72-82d5750c7095', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4=
b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhe=
v/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6=
f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4=
e06-bd72-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095'=
, 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/=
b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06=
a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1-94a=
a-4003-8e12-0dbbf06a6af8'}]}<br />Thread-4261::DEBUG::2015-11-09 11:17:52,5=
31::task::1191::Storage.TaskManager.Task::(prepare) Task=3D`d39463fd-486f-4=
280-903a-51b72862b648`::finished: {'info': {'domainID': 'b4c488af-9d2f-4b7b=
-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/d=
ata-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-=
74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06=
-bd72-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', '=
leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c=
488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6a=
f8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1-94aa-4=
003-8e12-0dbbf06a6af8'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b=
-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-=
bd72-82d5750c7095', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f=
6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-=
center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0=
bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd7=
2-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leas=
ePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488a=
f-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3=
fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1-94aa-4003-=
8e12-0dbbf06a6af8'}]}<br />Thread-4261::DEBUG::2015-11-09 11:17:52,532::tas=
k::595::Storage.TaskManager.Task::(_updateState) Task=3D`d39463fd-486f-4280=
-903a-51b72862b648`::moving from state preparing -> state finished<br />=
Thread-4261::DEBUG::2015-11-09 11:17:52,532::resourceManager::940::Storage=
=2EResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resourc=
es {'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef 'Stora=
ge.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj: 'None'>}<=
br />Thread-4261::DEBUG::2015-11-09 11:17:52,532::resourceManager::977::Sto=
rage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}<br />Th=
read-4261::DEBUG::2015-11-09 11:17:52,532::resourceManager::616::Storage.Re=
sourceManager::(releaseResource) Trying to release resource 'Storage.b4c488=
af-9d2f-4b7b-a6f6-74a0bac06c41'<br />Thread-4261::DEBUG::2015-11-09 11:17:5=
2,532::resourceManager::635::Storage.ResourceManager::(releaseResource) Rel=
eased resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0 active use=
rs)<br />Thread-4261::DEBUG::2015-11-09 11:17:52,533::resourceManager::641:=
:Storage.ResourceManager::(releaseResource) Resource 'Storage.b4c488af-9d2f=
-4b7b-a6f6-74a0bac06c41' is free, finding out if anyone is waiting for it=
=2E<br />Thread-4261::DEBUG::2015-11-09 11:17:52,533::resourceManager::649:=
:Storage.ResourceManager::(releaseResource) No one is waiting for resource =
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing records.<br />Thre=
ad-4261::DEBUG::2015-11-09 11:17:52,533::task::993::Storage.TaskManager.Tas=
k::(_decref) Task=3D`d39463fd-486f-4280-903a-51b72862b648`::ref 0 aborting =
False<br />Thread-4261::INFO::2015-11-09 11:17:52,535::xmlrpc::92::vds.XMLR=
PCServer::(_process_requests) Request handler for 127.0.0.1:57858 stopped<b=
r />Reactor thread::INFO::2015-11-09 11:17:52,536::protocoldetector::72::Pr=
otocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127=
=2E0.0.1:57859<br />Reactor thread::DEBUG::2015-11-09 11:17:52,544::protoco=
ldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=3D=
11<br />Reactor thread::INFO::2015-11-09 11:17:52,545::protocoldetector::11=
8::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127=
=2E0.0.1:57859<br />Reactor thread::DEBUG::2015-11-09 11:17:52,545::binding=
xmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('12=
7.0.0.1', 57859)<br />BindingXMLRPC::INFO::2015-11-09 11:17:52,545::xmlrpc:=
:73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0=
=2E0.1:57859<br />Thread-4262::INFO::2015-11-09 11:17:52,548::xmlrpc::84::v=
ds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57859 st=
arted<br />Thread-4262::DEBUG::2015-11-09 11:17:52,548::bindingxmlrpc::325:=
:vds::(wrapper) client [127.0.0.1]<br />Thread-4262::DEBUG::2015-11-09 11:1=
7:52,549::task::595::Storage.TaskManager.Task::(_updateState) Task=3D`0eebd=
b1c-6c4d-4b86-a2fa-00ad35a19f24`::moving from state init -> state prepar=
ing<br />Thread-4262::INFO::2015-11-09 11:17:52,549::logUtils::48::dispatch=
er::(wrapper) Run and protect: prepareImage(sdUUID=3D'b4c488af-9d2f-4b7b-a6=
f6-74a0bac06c41', spUUID=3D'00000000-0000-0000-0000-000000000000', imgUUID=
=3D'350fb787-049a-4174-8914-f371aabfa72c', leafUUID=3D'02c5d59d-638c-4672-8=
14d-d734e334e24a')<br />Thread-4262::DEBUG::2015-11-09 11:17:52,549::resour=
ceManager::198::Storage.ResourceManager.Request::(__init__) ResName=3D`Stor=
age.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D`fd9ea6d0-3a31-4ec6-a74c-8=
b84b2e51746`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '32=
05' at 'prepareImage'<br />Thread-4262::DEBUG::2015-11-09 11:17:52,550::res=
ourceManager::542::Storage.ResourceManager::(registerResource) Trying to re=
gister resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock typ=
e 'shared'<br />Thread-4262::DEBUG::2015-11-09 11:17:52,550::resourceManage=
r::601::Storage.ResourceManager::(registerResource) Resource 'Storage.b4c48=
8af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now locking as 'shared' (1 active=
user)<br />Thread-4262::DEBUG::2015-11-09 11:17:52,550::resourceManager::2=
38::Storage.ResourceManager.Request::(grant) ResName=3D`Storage.b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41`ReqID=3D`fd9ea6d0-3a31-4ec6-a74c-8b84b2e51746`::G=
ranted request<br />Thread-4262::DEBUG::2015-11-09 11:17:52,550::task::827:=
:Storage.TaskManager.Task::(resourceAcquired) Task=3D`0eebdb1c-6c4d-4b86-a2=
fa-00ad35a19f24`::_resourcesAcquired: Storage.b4c488af-9d2f-4b7b-a6f6-74a0b=
ac06c41 (shared)<br />Thread-4262::DEBUG::2015-11-09 11:17:52,551::task::99=
3::Storage.TaskManager.Task::(_decref) Task=3D`0eebdb1c-6c4d-4b86-a2fa-00ad=
35a19f24`::ref 1 aborting False<br />Thread-4262::DEBUG::2015-11-09 11:17:5=
2,566::fileSD::536::Storage.StorageDomain::(activateVolumes) Fixing permiss=
ions on /rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-=
9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c=
5d59d-638c-4672-814d-d734e334e24a<br />Thread-4262::DEBUG::2015-11-09 11:17=
:52,568::fileUtils::143::Storage.fileUtils::(createdir) Creating directory:=
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 mode: None<br /=
>Thread-4262::WARNING::2015-11-09 11:17:52,568::fileUtils::152::Storage.fil=
eUtils::(createdir) Dir /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0b=
ac06c41 already exists<br />Thread-4262::DEBUG::2015-11-09 11:17:52,568::fi=
leSD::511::Storage.StorageDomain::(createImageLinks) Creating symlink from =
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7=
b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c to /var/run=
/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-=
f371aabfa72c<br />Thread-4262::DEBUG::2015-11-09 11:17:52,568::fileSD::516:=
:Storage.StorageDomain::(createImageLinks) img run dir already exists: /var=
/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8=
914-f371aabfa72c<br />Thread-4262::DEBUG::2015-11-09 11:17:52,570::fileVolu=
me::535::Storage.Volume::(validateVolumePath) validate path for 02c5d59d-63=
8c-4672-814d-d734e334e24a<br />Thread-4262::INFO::2015-11-09 11:17:52,572::=
logUtils::51::dispatcher::(wrapper) Run and protect: prepareImage, Return r=
esponse: {'info': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'vol=
Type': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/=
ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb=
787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'vol=
umeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-=
center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0=
bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814=
d-d734e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, =
'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb7=
87-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'imgV=
olumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType=
': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovir=
t02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-=
049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'volumeI=
D': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-cent=
er/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac0=
6c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d7=
34e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}]}<br =
/>Thread-4262::DEBUG::2015-11-09 11:17:52,573::task::1191::Storage.TaskMana=
ger.Task::(prepare) Task=3D`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::finished=
: {'info': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': =
'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02=
=2Emafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-0=
49a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'volumeID=
': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-cente=
r/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06=
c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d73=
4e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, 'path=
': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-04=
9a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'imgVolume=
sInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'p=
ath', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02=
=2Emafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-0=
49a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'volumeID=
': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-cente=
r/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06=
c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d73=
4e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}]}<br /=
>Thread-4262::DEBUG::2015-11-09 11:17:52,573::task::595::Storage.TaskManage=
r.Task::(_updateState) Task=3D`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::movin=
g from state preparing -> state finished<br />Thread-4262::DEBUG::2015-1=
1-09 11:17:52,573::resourceManager::940::Storage.ResourceManager.Owner::(re=
leaseAll) Owner.releaseAll requests {} resources {'Storage.b4c488af-9d2f-4b=
7b-a6f6-74a0bac06c41': < ResourceRef 'Storage.b4c488af-9d2f-4b7b-a6f6-74=
a0bac06c41', isValid: 'True' obj: 'None'>}<br />Thread-4262::DEBUG::2015=
-11-09 11:17:52,573::resourceManager::977::Storage.ResourceManager.Owner::(=
cancelAll) Owner.cancelAll requests {}<br />Thread-4262::DEBUG::2015-11-09 =
11:17:52,573::resourceManager::616::Storage.ResourceManager::(releaseResour=
ce) Trying to release resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c4=
1'<br />Thread-4262::DEBUG::2015-11-09 11:17:52,573::resourceManager::635::=
Storage.ResourceManager::(releaseResource) Released resource 'Storage.b4c48=
8af-9d2f-4b7b-a6f6-74a0bac06c41' (0 active users)<br />Thread-4262::DEBUG::=
2015-11-09 11:17:52,574::resourceManager::641::Storage.ResourceManager::(re=
leaseResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is f=
ree, finding out if anyone is waiting for it.<br />Thread-4262::DEBUG::2015=
-11-09 11:17:52,574::resourceManager::649::Storage.ResourceManager::(releas=
eResource) No one is waiting for resource 'Storage.b4c488af-9d2f-4b7b-a6f6-=
74a0bac06c41', Clearing records.<br />Thread-4262::DEBUG::2015-11-09 11:17:=
52,574::task::993::Storage.TaskManager.Task::(_decref) Task=3D`0eebdb1c-6c4=
d-4b86-a2fa-00ad35a19f24`::ref 0 aborting False<br />Thread-4262::INFO::201=
5-11-09 11:17:52,576::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Req=
uest handler for 127.0.0.1:57859 stopped<br />Reactor thread::INFO::2015-11=
-09 11:17:52,610::protocoldetector::72::ProtocolDetector.AcceptorImpl::(han=
dle_accept) Accepting connection from 127.0.0.1:57860<br />Reactor thread::=
DEBUG::2015-11-09 11:17:52,619::protocoldetector::82::ProtocolDetector.Dete=
ctor::(__init__) Using required_size=3D11<br />Reactor thread::INFO::2015-1=
1-09 11:17:52,619::protocoldetector::118::ProtocolDetector.Detector::(handl=
e_read) Detected protocol xml from 127.0.0.1:57860<br />BindingXMLRPC::INFO=
::2015-11-09 11:17:52,620::xmlrpc::73::vds.XMLRPCServer::(handle_request) S=
tarting request handler for 127.0.0.1:57860<br />Reactor thread::DEBUG::201=
5-11-09 11:17:52,620::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml=
over http detected from ('127.0.0.1', 57860)<br />Thread-4263::INFO::2015-=
11-09 11:17:52,623::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Reque=
st handler for 127.0.0.1:57860 started<br />Thread-4263::DEBUG::2015-11-09 =
11:17:52,623::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]<br />Th=
read-4263::DEBUG::2015-11-09 11:17:52,624::task::595::Storage.TaskManager=
=2ETask::(_updateState) Task=3D`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::movi=
ng from state init -> state preparing<br />Thread-4263::INFO::2015-11-09=
11:17:52,624::logUtils::48::dispatcher::(wrapper) Run and protect: repoSta=
ts(options=3DNone)<br />Thread-4263::INFO::2015-11-09 11:17:52,624::logUtil=
s::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {=
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': {'code': 0, 'actual': True, 'versio=
n': 3, 'acquired': True, 'delay': '0.000392118', 'lastCheck': '6.0', 'valid=
': True}, u'0af99439-f140-4636-90f7-f43904735da0': {'code': 0, 'actual': Tr=
ue, 'version': 3, 'acquired': True, 'delay': '0.000382969', 'lastCheck': '4=
=2E5', 'valid': True}}<br />Thread-4263::DEBUG::2015-11-09 11:17:52,624::ta=
sk::1191::Storage.TaskManager.Task::(prepare) Task=3D`6fd1d011-d931-4eca-b9=
3b-c0fc3a1b4107`::finished: {'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': {'code=
': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000392118=
', 'lastCheck': '6.0', 'valid': True}, u'0af99439-f140-4636-90f7-f43904735d=
a0': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '=
0.000382969', 'lastCheck': '4.5', 'valid': True}}<br />Thread-4263::DEBUG::=
2015-11-09 11:17:52,625::task::595::Storage.TaskManager.Task::(_updateState=
) Task=3D`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::moving from state preparin=
g -> state finished<br />Thread-4263::DEBUG::2015-11-09 11:17:52,625::re=
sourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.relea=
seAll requests {} resources {}<br />Thread-4263::DEBUG::2015-11-09 11:17:52=
,625::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owne=
r.cancelAll requests {}<br />Thread-4263::DEBUG::2015-11-09 11:17:52,625::t=
ask::993::Storage.TaskManager.Task::(_decref) Task=3D`6fd1d011-d931-4eca-b9=
3b-c0fc3a1b4107`::ref 0 aborting False<br />Thread-4263::INFO::2015-11-09 1=
1:17:52,627::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request hand=
ler for 127.0.0.1:57860 stopped<br />mailbox.SPMMonitor::DEBUG::2015-11-09 =
11:17:52,829::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) dd=
if=3D/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_m=
d/inbox iflag=3Ddirect,fullblock count=3D1 bs=3D1024000 (cwd None)<br />mai=
lbox.SPMMonitor::DEBUG::2015-11-09 11:17:52,845::storage_mailbox::735::Stor=
age.Misc.excCmd::(_checkForMail) SUCCESS: <err> =3D '1+0 records in\n=
1+0 records out\n1024000 bytes (1.0 MB) copied, 0.00494757 s, 207 MB/s\n'; =
<rc> =3D 0<br /><br /></p>
--
Florent BELLO
Service Informatique
informatique@ville-kourou.fr
0594 22 31 22
Mairie de Kourou
Hello All,
after configuring a first setup based on the quick start guide, I'm now
looking at the hosted-engine setup.
My question is: after I run hosted-engine-setup, how do I set up VMs on the
same machine that hosts the now-virtualized engine?
Greetings, J.
Invitation: [Deep dive] Host Network QoS - oVirt 3.6 @ Tue Nov 24 5pm - Thu Nov 26, 2015 6pm (ibarkan@redhat.com)
by ibarkan@redhat.com 24 Nov '15
You have been invited to the following event.
Title: [Deep dive] Host Network QoS - oVirt 3.6
Hangouts on air: https://plus.google.com/events/c3la9vdse911atq991qflogtq0g
YouTube link: https://plus.google.com/events/c3la9vdse911atq991qflogtq0g
When: Tue Nov 24 5pm - Thu Nov 26, 2015 6pm Jerusalem
Calendar: ibarkan(a)redhat.com
Who:
* ibarkan(a)redhat.com - organizer
* users(a)ovirt.org
Event details:
https://www.google.com/calendar/event?action=VIEW&eid=NGZxZXU0dDRka2ZrMmd1c…
Invitation from Google Calendar: https://www.google.com/calendar/
You are receiving this courtesy email at the account users(a)ovirt.org
because you are an attendee of this event.
To stop receiving future updates for this event, decline this event.
Alternatively you can sign up for a Google account at
https://www.google.com/calendar/ and control your notification settings for
your entire calendar.
Forwarding this invitation could allow any recipient to modify your RSVP
response. Learn more at
https://support.google.com/calendar/answer/37135#forwarding
[ANN] oVirt 3.5.6 Third Release Candidate is now available for testing
by Sandro Bonazzola 24 Nov '15
The oVirt Project is pleased to announce the availability
of the Third oVirt 3.5.6 Release Candidate for testing, as of November
24th, 2015.
This release is available now for
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
This release supports Hypervisor Hosts running
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
This release includes updated packages for:
- VDSM
See the release notes [1] for a list of fixed bugs.
Please refer to release notes [1] for Installation / Upgrade instructions.
A new oVirt Live ISO is already available [2].
Please note that mirrors [3] usually need about one day before being
synchronized.
Please refer to the release notes for known issues in this release.
Please add yourself to the test page[4] if you're testing this release.
[1] http://www.ovirt.org/OVirt_3.5.6_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.5-pre/iso/ovirt-live/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
[4] http://www.ovirt.org/Testing/oVirt_3.5.6_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Re: [ovirt-users] Ovirt 3.6 | After upgrade host can not connect to storage domains | returned by VDSM was: 480
by Punit Dambiwal 24 Nov '15
Hi Sahina,
Even after making the changes in vdsm.conf, I am still not able to connect
to the replica=2 storage.
Thanks,
Punit
On Mon, Nov 23, 2015 at 4:15 PM, Punit Dambiwal <hypunit(a)gmail.com> wrote:
> Hi Sahina,
>
> Thanks for the update... Would you mind letting me know the correct syntax
> to add the line in vdsm.conf?
>
> Thanks,
> Punit
>
> On Mon, Nov 23, 2015 at 3:48 PM, Sahina Bose <sabose(a)redhat.com> wrote:
>
>> You can change the allowed_replica_count to 2 in vdsm.conf - though this
>> is not recommended in production. Supported replica count is 3.
>>
>> thanks
>> sahina
>>
>>
>> On 11/23/2015 07:58 AM, Punit Dambiwal wrote:
>>
>> Hi Sahina,
>>
>> Is there any workaround to solve this issue ?
>>
>> Thanks,
>> Punit
>>
>> On Wed, Nov 11, 2015 at 9:36 AM, Sahina Bose <sabose(a)redhat.com> wrote:
>>
>>> Hi,
>>>
>>> Thanks for your email. I will be back on 16th Nov and will get back to
>>> you then.
>>>
>>> thanks
>>> sahina
>>>
>>
>>
>>
>
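For reference, a minimal sketch of the kind of vdsm.conf stanza being discussed. The option name comes from the reply quoted above; the [gluster] section placement and the comma-separated value format are assumptions that should be checked against the vdsm.conf shipped with your VDSM release, and replica 2 remains unsupported for production.

# /etc/vdsm/vdsm.conf (hypothetical sketch; verify the exact section and
# option name for your VDSM version before relying on it)
[gluster]
# allow replica 2 gluster volumes in addition to the supported replica 3
allowed_replica_counts = 1,2,3

VDSM only reads its configuration at startup, so vdsmd would need to be restarted on each host after such a change.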
Hi,
When creating a new data center in oVirt 3.5, there is the option to choose “local” or “shared” storage types.
Is there any resource out there that explains the difference between the two? The official doc does not really help there.
My current understanding is as follows:
In shared mode, I can create data domains that are shared between hosts in a same data center, eg NFS, iSCSI etc.
In local mode, I can only create data domains locally, but I can “import” an existing iSCSI or Export domain to move VMs (with downtime) between data centers.
1. Is this correct or am I missing something here?
2. What would be the reason to go for a “local” storage type cluster?
Thank you very much for helping out a newcomer :)
Kind regards,
—
Christophe
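For reference, a rough sketch in the ovirt-engine-sdk-python 3.x style used elsewhere on this list, showing where that local/shared choice is made when a data center is created through the API. The connection details are placeholders, and the use of the local flag (rather than the older storage_type attribute) is an assumption to verify against your SDK version.

from ovirtsdk.api import API
from ovirtsdk.xml import params

# Placeholder connection details for an engine at engine.example.com
api = API(url='https://engine.example.com/api',
          username='admin@internal', password='password', insecure=True)

# A "local" data center: storage domains are private to a single host
api.datacenters.add(params.DataCenter(
    name='dc-local',
    local=True,
    version=params.Version(major=3, minor=5)))

# A "shared" data center: NFS/iSCSI/FC/GlusterFS domains shared by all hosts
api.datacenters.add(params.DataCenter(
    name='dc-shared',
    local=False,
    version=params.Version(major=3, minor=5)))

api.disconnect()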
Hi everyone,
how can I import an OVF file from a server into my oVirt setup? Thanks in advance.
Hello,
I'm getting the following error when I try to create a snapshot of one VM. Snapshots of all other VMs work as expected. I'm using oVirt 3.5 on CentOS 7.
>Failed to create live snapshot 'fsbu3' for VM 'Odoo'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
I think this is the relevant part of vdsm.log; what strikes me as odd is the line:
>Thread-1192052::ERROR::2015-11-23 17:18:20,532::vm::4355::vm.Vm::(snapshot) vmId=`581cebb3-7729-4c29-b98c-f9e04aa2fdd0`::The base volume doesn't exist: {'device': 'disk', 'domainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'volumeID': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d'}
The part "The base volume doesn't exist" seems interesting.
Also interesting is that it does create a snapshot, though I don't know if that snapshot is missing data.
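For reference, a rough sketch (ovirt-engine-sdk-python 3.x style, with placeholder connection details) of how the VM's snapshots and disks could be listed through the API so their IDs can be compared with the imageID/volumeID values in the vdsm.log excerpt below:

from ovirtsdk.api import API

# Placeholder connection details
api = API(url='https://engine.example.com/api',
          username='admin@internal', password='password', insecure=True)

vm = api.vms.get(name='Odoo')

# Snapshot IDs as the engine sees them
for snap in vm.snapshots.list():
    print snap.get_id(), snap.get_description()

# Disks attached to the VM (engine-side counterpart of the imageID above)
for disk in vm.disks.list():
    print disk.get_id(), disk.get_alias()

api.disconnect()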
Thread-1192048::DEBUG::2015-11-23 17:18:20,421::taskManager::103::Storage.TaskManager::(getTaskStatus) Entry. taskID: 21a1c403-f306-40b1-bad8-377d0265ebca
Thread-1192048::DEBUG::2015-11-23 17:18:20,421::taskManager::106::Storage.TaskManager::(getTaskStatus) Return. Response: {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::taskManager::123::Storage.TaskManager::(getAllTasksStatuses) Return: {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}
Thread-1192048::INFO::2015-11-23 17:18:20,422::logUtils::47::dispatcher::(wrapper) Run and protect: getAllTasksStatuses, Return response: {'allTasksStatus': {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::task::1191::Storage.TaskManager.Task::(prepare) Task=3D`ce3d857c-45d3-4acc-95a5-79484e457fc6`::finished: {'allTasksStatus': {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::task::595::Storage.TaskManager.Task::(_updateState) Task=3D`ce3d857c-45d3-4acc-95a5-79484e457fc6`::moving from state preparing -> state finished
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::task::993::Storage.TaskManager.Task::(_decref) Task=3D`ce3d857c-45d3-4acc-95a5-79484e457fc6`::ref 0 aborting False
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::__init__::500::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Host.getAllTasksStatuses' in bridge with {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}
Thread-1192048::DEBUG::2015-11-23 17:18:20,423::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,423::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command=3D'SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,424::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192049::DEBUG::2015-11-23 17:18:20,426::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,438::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command=3D'SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,439::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192050::DEBUG::2015-11-23 17:18:20,441::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,442::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command=3D'SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,443::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192051::DEBUG::2015-11-23 17:18:20,445::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,529::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command=3D'SEND'>
Thread-1192052::DEBUG::2015-11-23 17:18:20,530::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'VM.snapshot' in bridge with {'vmID': '581cebb3-7729-4c29-b98c-f9e04aa2fdd0', 'snapDrives': [{'baseVolumeID': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'domainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'volumeID': '16f92498-c142-4330-bc0f-c96f210c379d', 'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d'}]}
JsonRpcServer::DEBUG::2015-11-23 17:18:20,530::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192052::ERROR::2015-11-23 17:18:20,532::vm::4355::vm.Vm::(snapshot) vmId=3D`581cebb3-7729-4c29-b98c-f9e04aa2fdd0`::The base volume doesn't exist: {'device': 'disk', 'domainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'volumeID': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d'}
Thread-1192052::DEBUG::2015-11-23 17:18:20,532::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,588::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command=3D'SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,590::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192053::DEBUG::2015-11-23 17:18:20,590::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Volume.getInfo' in bridge with {'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'storagepoolID': '00000002-0002-0002-0002-000000000354', 'volumeID': '16f92498-c142-4330-bc0f-c96f210c379d', 'storagedomainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc'}
Thread-1192053::DEBUG::2015-11-23 17:18:20,592::task::595::Storage.TaskManager.Task::(_updateState) Task=3D`8f9455be-2f57-4b95-9d24-8db1522cbaed`::moving from state init -> state preparing
Thread-1192053::INFO::2015-11-23 17:18:20,592::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeInfo(sdUUID=3D'b4e7425a-53c7-40d4-befc-ea36ed7891fc', spUUID=3D'00000002-0002-0002-0002-000000000354', imgUUID=3D'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', volUUID=3D'16f92498-c142-4330-bc0f-c96f210c379d', options=3DNone)
Thread-1192053::DEBUG::2015-11-23 17:18:20,593::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=3D`Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc`ReqID=3D`e6aed3a3-c95a-4106-9a16-ad21e5db3ae7`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3124' at 'getVolumeInfo'
Thread-1192053::DEBUG::2015-11-23 17:18:20,593::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' for lock type 'shared'
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' is free. Now locking as 'shared' (1 active user)
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=3D`Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc`ReqID=3D`e6aed3a3-c95a-4106-9a16-ad21e5db3ae7`::Granted request
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=3D`8f9455be-2f57-4b95-9d24-8db1522cbaed`::_resourcesAcquired: Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc (shared)
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::task::993::Storage.TaskManager.Task::(_decref) Task=3D`8f9455be-2f57-4b95-9d24-8db1522cbaed`::ref 1 aborting False
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::lvm::419::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
Thread-1192053::DEBUG::2015-11-23 17:18:20,595::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvs --config ' devices { preferred_names =3D ["^/dev/mapper/"] ignore_suspended_devices=3D1 write_cache_state=3D0 disable_after_error_count=3D3 obtain_device_list_from_udev=3D0 filter =3D [ '\''a|/dev/mapper/1p_storage_store1|'\'', '\''r|.*|'\'' ] } global { locking_type=3D1 prioritise_write_locks=3D1 wait_for_locks=3D1 use_lvmetad=3D0 } backup { retain_min =3D 50 retain_days =3D 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags b4e7425a-53c7-40d4-befc-ea36ed7891fc (cwd None)
Thread-1192053::DEBUG::2015-11-23 17:18:20,731::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> =3D ''; <rc> =3D 0
Thread-1192053::DEBUG::2015-11-23 17:18:20,734::lvm::454::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-1192053::DEBUG::2015-11-23 17:18:20,734::lvm::454::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
Thread-1192053::INFO::2015-11-23 17:18:20,734::volume::847::Storage.Volume::(getInfo) Info request: sdUUID=3Db4e7425a-53c7-40d4-befc-ea36ed7891fc imgUUID=3Ddfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d volUUID =3D 16f92498-c142-4330-bc0f-c96f210c379d=20
Thread-1192053::DEBUG::2015-11-23 17:18:20,734::blockVolume::594::Storage.Misc.excCmd::(getMetadata) /bin/dd iflag=3Ddirect skip=3D40 bs=3D512 if=3D/dev/b4e7425a-53c7-40d4-befc-ea36ed7891fc/metadata count=3D1 (cwd None)
Thread-1192053::DEBUG::2015-11-23 17:18:20,745::blockVolume::594::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> =3D '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000196717 s, 2.6 MB/s\n'; <rc> =3D 0
Thread-1192053::DEBUG::2015-11-23 17:18:20,745::misc::262::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000196717 s, 2.6 MB/s'], size: 512
Thread-1192053::INFO::2015-11-23 17:18:20,746::volume::875::Storage.Volume::(getInfo) b4e7425a-53c7-40d4-befc-ea36ed7891fc/dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d/16f92498-c142-4330-bc0f-c96f210c379d info is {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}
Thread-1192053::INFO::2015-11-23 17:18:20,746::logUtils::47::dispatcher::(wrapper) Run and protect: getVolumeInfo, Return response: {'info': {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}}
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::task::1191::Storage.TaskManager.Task::(prepare) Task=3D`8f9455be-2f57-4b95-9d24-8db1522cbaed`::finished: {'info': {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}}
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::task::595::Storage.TaskManager.Task::(_updateState) Task=3D`8f9455be-2f57-4b95-9d24-8db1522cbaed`::moving from state preparing -> state finished
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc': < ResourceRef 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc', isValid: 'True' obj: 'None'>}
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc'
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' (0 active users)
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' is free, finding out if anyone is waiting for it.
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc', Clearing records.
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::task::993::Storage.TaskManager.Task::(_decref) Task=3D`8f9455be-2f57-4b95-9d24-8db1522cbaed`::ref 0 aborting False
Thread-1192053::DEBUG::2015-11-23 17:18:20,748::__init__::500::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'allocType': 'SPARSE', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}
Thread-1192053::DEBUG::2015-11-23 17:18:20,748::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,863::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command=3D'SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,864::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192054::DEBUG::2015-11-23 17:18:20,864::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Task.clear' in bridge with {'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}