Hi,
Could someone take a look at Gmail's spam policy and see what is not being
done? For months I have had to pull these mails out of the spam folder, no
matter how often I mark them as legitimate.
Regards
On 22/10/15 at 12:24 p.m., users-request(a)ovirt.org wrote:
> Send Users mailing list submissions to
> users(a)ovirt.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.ovirt.org/mailman/listinfo/users
> or, via email, send a message with subject or body 'help' to
> users-request(a)ovirt.org
>
> You can reach the person managing the list at
> users-owner(a)ovirt.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Users digest..."
>
>
> Today's Topics:
>
> 1. [ANN] oVirt 3.6.0 Third Release Candidate is now available
> for testing (Sandro Bonazzola)
> 2. Re: Testing self hosted engine in 3.6: hostname not resolved
> error (Gianluca Cecchi)
> 3. Re: 3.6 upgrade issue (Yaniv Dary)
> 4. Re: How to change the hosted engine VM RAM size after
> deploying (Simone Tiraboschi)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 22 Oct 2015 16:08:25 +0200
> From: Sandro Bonazzola <sbonazzo(a)redhat.com>
> To: announce(a)ovirt.org, users <users(a)ovirt.org>, devel
> <devel(a)ovirt.org>
> Subject: [ovirt-users] [ANN] oVirt 3.6.0 Third Release Candidate is
> now available for testing
> Message-ID:
> <CAPQRNTm4GyWo0zo-L=92ScLJvWQFEPKaF5UfmdJ4SroKCCC3pQ(a)mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> The oVirt Project is pleased to announce the availability
> of the Third Release Candidate of oVirt 3.6 for testing, as of October
> 22nd, 2015.
>
> This release is available now for Fedora 22,
> Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
> Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
>
> This release supports Hypervisor Hosts running
> Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar),
> Fedora 21 and Fedora 22.
> Highly experimental support for Debian 8.1 Jessie has been added too.
>
> This release of oVirt 3.6.0 includes numerous bug fixes.
> See the release notes [1] for an initial list of the new features and bugs
> fixed.
>
> Please refer to release notes [1] for Installation / Upgrade instructions.
> New oVirt Node ISO and oVirt Live ISO will be available soon as well[2].
>
> Please note that mirrors[3] may usually need one day before being
> synchronized.
>
> Please refer to the release notes for known issues in this release.
>
> [1] http://www.ovirt.org/OVirt_3.6_Release_Notes
> [2] http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/iso/
> [3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
>
>
Hi,
I activated the gluster service on the Cluster, and my engine.log then
repeated: Could not add brick xxx to volume xxxx server uuid xxx not found
in cluster.
I found on the mailing list that I had to put all my hosts into maintenance
mode and then activate them again; a sketch of how the server UUIDs can be
checked directly follows below.
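For anyone hitting the same "server uuid ... not found in cluster" message,
here is a minimal sketch (my own illustration, not part of the original
report) of how one might compare the UUID each host's glusterd reports with
the peers the other hosts know about. It assumes passwordless SSH and the
host names that appear in the logs below:

import subprocess

HOSTS = ["ovirt01.mafia.kru", "ovirt02.mafia.kru", "ovirt03.mafia.kru"]

def server_uuid(host):
    # glusterd stores its own server UUID as a "UUID=..." line
    out = subprocess.check_output(
        ["ssh", host, "cat", "/var/lib/glusterd/glusterd.info"], text=True)
    for line in out.splitlines():
        if line.startswith("UUID="):
            return line.split("=", 1)[1].strip()
    return None

for host in HOSTS:
    print(host, server_uuid(host))
    # "gluster peer status" prints the UUIDs this host knows about; a UUID
    # missing from a peer's output matches the "server uuid not found" error.
    print(subprocess.check_output(
        ["ssh", host, "gluster", "peer", "status"], text=True))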
Now engine.log repeats:
2015-11-09 11:15:53,563 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-64) [] START, GlusterVolumesListVDSCommand(HostName = ovirt02, GlusterVolumesListVDSParameters:{runAsync='true', hostId='0d1284e1-fa18-4309-b196-df9a6a337c44'}), log id: 6ddd5b9d
2015-11-09 11:15:53,711 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-64) [] Could not associate brick 'ovirt01.mafia.kru:/gfs1/engine/brick' of volume 'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,714 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-64) [] Could not associate brick 'ovirt02.mafia.kru:/gfs1/engine/brick' of volume 'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,716 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-64) [] Could not associate brick 'ovirt03.mafia.kru:/gfs1/engine/brick' of volume 'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,719 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-64) [] Could not associate brick 'ovirt01.mafia.kru:/gfs2/engine/brick' of volume 'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,722 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-64) [] Could not associate brick 'ovirt02.mafia.kru:/gfs2/engine/brick' of volume 'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,725 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-64) [] Could not associate brick 'ovirt03.mafia.kru:/gfs2/engine/brick' of volume 'e9a24161-3e72-47ea-b593-57f3302e7c4e' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000022d'
2015-11-09 11:15:53,732 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-64) [] FINISH, GlusterVolumesListVDSCommand, return: {e9a24161-3e72-47ea-b593-57f3302e7c4e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@7eafe244, e5df896f-b818-4d70-ac86-ad9270f9d5f2=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@cb7d0349}, log id: 6ddd5b9d
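Since the same WARN line repeats once per brick, a short parsing sketch
(mine, not from the thread) can boil an engine.log like the one above down
to one line per volume and brick; the log file name is an assumption:

import re
from collections import defaultdict

PATTERN = re.compile(
    r"Could not associate brick '(?P<brick>[^']+)' of volume '(?P<vol>[^']+)'")

def summarize(text):
    # group the affected bricks by volume UUID, dropping duplicates
    volumes = defaultdict(set)
    for match in PATTERN.finditer(text):
        volumes[match.group("vol")].add(match.group("brick"))
    return volumes

with open("engine.log") as f:  # assumed path to the pasted log
    for vol, bricks in sorted(summarize(f.read()).items()):
        print(vol)
        for brick in sorted(bricks):
            print("  " + brick)

Run over the excerpt above, this would print volume
e9a24161-3e72-47ea-b593-57f3302e7c4e with its six gfs1/gfs2 bricks, making
it easier to see that every brick, not just one, is missing a gluster
network.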
Here is my vdsm.log on host 1:
Thread-4247::DEBUG::2015-11-09
11:17:47,621::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'Host.getVMFullList' in bridge with [{u'status': 'Up',
u'nicModel': u'rtl8139,pv', u'kvmEnable': u'true', u'smp': u'1',
u'emulatedMachine': u'pc', u'afterMigrationStatus': u'', 'pid': '4450',
u'vmId': u'3930e6e3-5b41-45c3-bb7c-2af8563cefab', u'devices':
[{u'alias': u'console0', u'specParams': {}, 'deviceType': u'console',
u'deviceId': u'ab824f92-f636-4c0f-96ad-b4f3d1c352be', u'device':
u'console', u'type': u'console'}, {u'target': 1572864, u'alias':
u'balloon0', u'specParams': {u'model': u'none'}, 'deviceType':
u'balloon', u'device': u'memballoon', u'type': u'balloon'}, {u'device':
u'unix', u'alias': u'channel0', 'deviceType': u'channel', u'type':
u'channel', u'address': {u'bus': u'0', u'controller': u'0', u'type':
u'virtio-serial', u'port': u'1'}}, {u'device': u'unix', u'alias':
u'channel1', 'deviceType': u'channel', u'type': u'channel', u'address':
{u'bus': u'0', u'controller': u'0', u'type': u'virtio-serial', u'port':
u'2'}}, {u'device': u'unix', u'alias': u'channel2', 'deviceType':
u'channel', u'type': u'channel', u'address': {u'bus': u'0',
u'controller': u'0', u'type': u'virtio-serial', u'port': u'3'}},
{u'alias': u'scsi0', 'deviceType': u'controller', u'address': {u'slot':
u'0x04', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci',
u'function': u'0x0'}, u'device': u'scsi', u'model': u'virtio-scsi',
u'type': u'controller'}, {u'device': u'usb', u'alias': u'usb0',
'deviceType': u'controller', u'type': u'controller', u'address':
{u'slot': u'0x01', u'bus': u'0x00', u'domain': u'0x0000', u'type':
u'pci', u'function': u'0x2'}}, {u'device': u'ide', u'alias': u'ide0',
'deviceType': u'controller', u'type': u'controller', u'address':
{u'slot': u'0x01', u'bus': u'0x00', u'domain': u'0x0000', u'type':
u'pci', u'function': u'0x1'}}, {u'device': u'virtio-serial', u'alias':
u'virtio-serial0', 'deviceType': u'controller', u'type': u'controller',
u'address': {u'slot': u'0x05', u'bus': u'0x00', u'domain': u'0x0000',
u'type': u'pci', u'function': u'0x0'}}, {u'device': u'', u'alias':
u'video0', 'deviceType': u'video', u'type': u'video', u'address':
{u'slot': u'0x02', u'bus': u'0x00', u'domain': u'0x0000', u'type':
u'pci', u'function': u'0x0'}}, {u'device': u'vnc', u'specParams':
{u'spiceSecureChannels':
u'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
u'displayIp': '0'}, 'deviceType': u'graphics', u'type': u'graphics',
u'port': u'5900'}, {u'nicModel': u'pv', u'macAddr':
u'00:16:3e:43:96:7b', u'linkActive': True, u'network': u'ovirtmgmt',
u'specParams': {}, u'filter': u'vdsm-no-mac-spoofing', u'alias':
u'net0', 'deviceType': u'interface', u'deviceId':
u'c2913ff3-fea3-4b17-a4b3-83398d920cd3', u'address': {u'slot': u'0x03',
u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'function':
u'0x0'}, u'device': u'bridge', u'type': u'interface', u'name':
u'vnet0'}, {u'index': u'2', u'iface': u'ide', u'name': u'hdc', u'alias':
u'ide0-1-0', u'specParams': {}, u'readonly': 'True', 'deviceType':
u'disk', u'deviceId': u'13f4e285-c161-46f5-9ec3-ba1f92f374d9',
u'address': {u'bus': u'1', u'controller': u'0', u'type': u'drive',
u'target': u'0', u'unit': u'0'}, u'device': u'cdrom', u'shared':
u'false', u'path': '', u'type': u'disk'}, {u'poolID':
u'00000000-0000-0000-0000-000000000000', u'volumeInfo': {'domainID':
u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': u'56461302-0710-4df0-964d-5e7b1ff07828', 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3'},
u'index': u'0', u'iface': u'virtio', u'apparentsize': '26843545600',
u'specParams': {}, u'imageID': u'56461302-0710-4df0-964d-5e7b1ff07828',
u'readonly': 'False', 'deviceType': u'disk', u'shared': u'exclusive',
u'truesize': '3515854848', u'type': u'disk', u'domainID':
u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', u'reqsize': u'0', u'format':
u'raw', u'deviceId': u'56461302-0710-4df0-964d-5e7b1ff07828',
u'address': {u'slot': u'0x06', u'bus': u'0x00', u'domain': u'0x0000',
u'type': u'pci', u'function': u'0x0'}, u'device': u'disk', u'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
u'propagateErrors': u'off', u'optional': u'false', u'name': u'vda',
u'bootOrder': u'1', u'volumeID':
u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', u'alias': u'virtio-disk0',
u'volumeChain': [{'domainID': u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'volumeID':
u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': u'56461302-0710-4df0-964d-5e7b1ff07828', 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3'}]}],
u'guestDiskMapping': {}, u'spiceSecureChannels':
u'smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir',
u'vmType': u'kvm', u'displayIp': '0', u'displaySecurePort': '-1',
u'memSize': 1536, u'displayPort': u'5900', u'cpuType': u'Conroe',
'clientIp': u'', u'statusTime': '4299704920', u'vmName':
u'HostedEngine', u'display': 'vnc'}]
Reactor thread::INFO::2015-11-09
11:17:48,004::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57851
Reactor thread::DEBUG::2015-11-09
11:17:48,012::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:48,013::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57851
Reactor thread::DEBUG::2015-11-09
11:17:48,013::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57851)
BindingXMLRPC::INFO::2015-11-09
11:17:48,013::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57851
Thread-4248::INFO::2015-11-09
11:17:48,015::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57851 started
Thread-4248::INFO::2015-11-09
11:17:48,022::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57851 stopped
Thread-303::DEBUG::2015-11-09
11:17:48,143::fileSD::173::Storage.Misc.excCmd::(getReadDelay)
/usr/bin/dd
if=/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_data/0af99439-f140-4636-90f7-f43904735da0/dom_md/metadata
iflag=direct of=/dev/null bs=4096 count=1 (cwd
None)
Thread-303::DEBUG::2015-11-09
11:17:48,154::fileSD::173::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
<err> = '0+1 records in\n0+1 records out\n461 bytes (461 B) copied,
0.000382969 s, 1.2 MB/s\n'; <rc> =
0
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:48,767::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
dd
if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd
None)
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:48,783::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
copied, 0.00507258 s, 202 MB/s\n'; <rc> = 0
Reactor thread::INFO::2015-11-09
11:17:49,939::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57852
Reactor thread::DEBUG::2015-11-09
11:17:49,947::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:49,947::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57852
Reactor thread::DEBUG::2015-11-09
11:17:49,947::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57852)
BindingXMLRPC::INFO::2015-11-09
11:17:49,948::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57852
Thread-4249::INFO::2015-11-09
11:17:49,949::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57852 started
Thread-4249::DEBUG::2015-11-09
11:17:49,950::bindingxmlrpc::1257::vds::(wrapper) client
[127.0.0.1]::call getCapabilities with ()
{}
Thread-4249::DEBUG::2015-11-09
11:17:49,962::netinfo::454::root::(_dhcp_used) DHCPv6 configuration not
specified for ovirtmgmt.
Thread-4249::DEBUG::2015-11-09
11:17:49,963::netinfo::686::root::(_get_gateway) The gateway
10.10.10.254 is duplicated for the device
ovirtmgmt
Thread-4249::DEBUG::2015-11-09
11:17:49,965::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on enp2s0.
Thread-4249::DEBUG::2015-11-09
11:17:49,965::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on enp2s0.
Thread-4249::DEBUG::2015-11-09
11:17:49,968::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on bond0.
Thread-4249::DEBUG::2015-11-09
11:17:49,968::netinfo::440::root::(_dhcp_used) There is no VDSM network
configured on bond0.
Thread-4249::DEBUG::2015-11-09
11:17:49,970::netinfo::686::root::(_get_gateway) The gateway
10.10.10.254 is duplicated for the device
ovirtmgmt
Thread-4249::DEBUG::2015-11-09
11:17:49,971::utils::676::root::(execCmd) /usr/sbin/tc qdisc show (cwd
None)
Thread-4249::DEBUG::2015-11-09
11:17:49,979::utils::694::root::(execCmd) SUCCESS: <err> = ''; <rc> =
0
Thread-4249::DEBUG::2015-11-09
11:17:49,980::utils::676::root::(execCmd) /usr/sbin/tc class show dev
enp2s0 classid 0:1388 (cwd None)
Thread-4249::DEBUG::2015-11-09
11:17:49,989::utils::694::root::(execCmd) SUCCESS: <err> = ''; <rc> =
0
Thread-4249::DEBUG::2015-11-09
11:17:49,993::caps::807::root::(_getKeyPackages) rpm package
('glusterfs-rdma',) not found
Thread-4249::DEBUG::2015-11-09
11:17:49,997::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-object',) not found
Thread-4249::DEBUG::2015-11-09
11:17:49,997::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-proxy',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,001::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-plugin',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,002::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,002::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-container',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,003::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-account',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,003::caps::807::root::(_getKeyPackages) rpm package
('gluster-swift-doc',) not found
Thread-4249::DEBUG::2015-11-09
11:17:50,005::bindingxmlrpc::1264::vds::(wrapper) return getCapabilities
with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory':
{'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:1954deeb7a38'}],
'FC': []}, 'packages2': {'kernel': {'release': '229.20.1.el7.x86_64',
'buildtime': 1446588607.0, 'version': '3.10.0'}, 'glusterfs-fuse':
{'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'},
'spice-server': {'release': '9.el7_1.3', 'buildtime': 1444691699L,
'version': '0.12.4'}, 'librbd1': {'release': '2.el7', 'buildtime':
1425594433L, 'version': '0.80.7'}, 'vdsm': {'release': '0.el7.centos',
'buildtime': 1446474396L, 'version': '4.17.10.1'}, 'qemu-kvm':
{'release': '29.1.el7', 'buildtime': 1444310806L, 'version': '2.3.0'},
'glusterfs': {'release': '1.el7', 'buildtime': 1444235292L, 'version':
'3.7.5'}, 'libvirt': {'release': '16.el7_1.5', 'buildtime': 1446559281L,
'version': '1.2.8'}, 'qemu-img': {'release': '29.1.el7', 'buildtime':
1444310806L, 'version': '2.3.0'}, 'mom': {'release': '2.el7',
'buildtime': 1442501481L, 'version': '0.5.1'},
'glusterfs-geo-replication': {'release': '1.el7', 'buildtime':
1444235292L, 'version': '3.7.5'}, 'glusterfs-server': {'release':
'1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-cli':
{'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}},
'numaNodeDistance': {'0': [10]}, 'cpuModel': 'Intel(R) Core(TM)2 Quad
CPU Q8400 @ 2.66GHz', 'liveMerge': 'true', 'hooks': {'before_vm_start':
{'50_hostedengine': {'md5': '2a6d96c26a3599812be6cf1a13d9f485'}}},
'vmTypes': ['kvm'], 'selinux': {'mode': '-1'}, 'liveSnapshot': 'true',
'kdumpStatus': 0, 'networks': {'ovirtmgmt': {'iface': 'ovirtmgmt',
'addr': '10.10.10.211', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'yes',
'IPADDR': '10.10.10.211', 'HOTPLUG': 'no', 'GATEWAY': '10.10.10.254',
'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0',
'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE':
'Bridge', 'ONBOOT': 'yes'}, 'bridged': True, 'ipv6addrs':
['fe80::6e62:6dff:feb3:3b72/64'], 'gateway': '10.10.10.254', 'dhcpv4':
False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off',
'ipv4addrs': ['10.10.10.211/24'], 'mtu': '1500', 'ipv6gateway': '::',
'ports': ['vnet0', 'enp2s0']}}, 'bridges': {'ovirtmgmt': {'addr':
'10.10.10.211', 'cfg': {'IPV6INIT': 'no', 'DEFROUTE': 'yes', 'IPADDR':
'10.10.10.211', 'HOTPLUG': 'no', 'GATEWAY': '10.10.10.254', 'DELAY':
'0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO':
'none', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT':
'yes'}, 'ipv6addrs': ['fe80::6e62:6dff:feb3:3b72/64'], 'gateway':
'10.10.10.254', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6':
False, 'stp': 'off', 'ipv4addrs': ['10.10.10.211/24'], 'mtu': '1500',
'ipv6gateway': '::', 'ports': ['vnet0', 'enp2s0'], 'opts':
{'multicast_last_member_count': '2', 'hash_elasticity': '4',
'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0',
'multicast_snooping': '1', 'multicast_startup_query_interval': '3125',
'hello_timer': '172', 'multicast_querier_interval': '25500', 'max_age':
'2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected':
'0', 'priority': '32768', 'multicast_membership_interval': '26000',
'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0',
'multicast_startup_query_count': '2', 'nf_call_iptables': '0',
'topology_change': '0', 'hello_time': '200', 'root_id':
'8000.6c626db33b72', 'bridge_id': '8000.6c626db33b72',
'topology_change_timer': '0', 'ageing_time': '30000',
'nf_call_ip6tables': '0', 'gc_timer': '25099', 'nf_call_arptables': '0',
'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100',
'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer':
'0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay':
'0'}}}, 'uuid': 'c2cac9d6-9ed7-44f0-8bbc-eff4c71db7ca', 'onlineCpus':
'0,1,2,3', 'nics': {'enp2s0': {'addr': '', 'ipv6gateway': '::',
'ipv6addrs': ['fe80::6e62:6dff:feb3:3b72/64'], 'mtu': '1500', 'dhcpv4':
False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg':
{'BRIDGE': 'ovirtmgmt', 'IPV6INIT': 'no', 'NM_CONTROLLED': 'no',
'HWADDR': '6c:62:6d:b3:3b:72', 'BOOTPROTO': 'none', 'DEVICE': 'enp2s0',
'ONBOOT': 'yes'}, 'hwaddr': '6c:62:6d:b3:3b:72', 'speed': 1000,
'gateway': ''}}, 'software_revision': '0', 'hostdevPassthrough':
'false', 'clusterLevels': ['3.4', '3.5', '3.6'], 'cpuFlags':
'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,xsave,lahf_lm,dtherm,tpr_shadow,vnmi,flexpriority,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:1954deeb7a38',
'netConfigDirty': 'False', 'supportedENGINEs': ['3.4', '3.5', '3.6'],
'autoNumaBalancing': 0, 'additionalFeatures': ['GLUSTER_SNAPSHOT',
'GLUSTER_GEO_REPLICATION', 'GLUSTER_BRICK_MANAGEMENT'], 'reservedMem':
'321', 'bondings': {'bond0': {'ipv4addrs': [], 'addr': '', 'cfg':
{'BOOTPROTO': 'none'}, 'ipv6addrs': [], 'active_slave': '', 'mtu':
'1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'slaves': [],
'hwaddr': 'ba:5f:22:a3:17:07', 'ipv6gateway': '::', 'gateway': '',
'opts': {}}}, 'software_version': '4.17', 'memSize': '3782', 'cpuSpeed':
'2670.000', 'numaNodes': {'0': {'totalMemory': '3782', 'cpus': [0, 1, 2,
3]}}, 'cpuSockets': '1', 'vlans': {}, 'lastClientIface': 'lo',
'cpuCores': '4', 'kvmEnabled': 'true', 'guestOverhead': '65',
'version_name': 'Snow Man', 'cpuThreads': '4', 'emulatedMachines':
['pc-i440fx-rhel7.1.0', 'rhel6.3.0', 'pc-q35-rhel7.2.0',
'pc-i440fx-rhel7.0.0', 'rhel6.1.0', 'rhel6.6.0', 'rhel6.2.0', 'pc',
'pc-q35-rhel7.0.0', 'pc-q35-rhel7.1.0', 'q35', 'pc-i440fx-rhel7.2.0',
'rhel6.4.0', 'rhel6.0.0', 'rhel6.5.0'], 'rngSources': ['random'],
'operatingSystem': {'release': '1.1503.el7.centos.2.8', 'version': '7',
'name': 'RHEL'}, 'lastClient':
'127.0.0.1'}}
Thread-4249::INFO::2015-11-09
11:17:50,020::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57852
stopped
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:50,797::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
dd
if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd
None)
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:50,815::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
copied, 0.00511026 s, 200 MB/s\n'; <rc> = 0
Reactor thread::INFO::2015-11-09
11:17:52,098::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57853
Reactor thread::DEBUG::2015-11-09
11:17:52,106::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,107::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57853
Reactor thread::DEBUG::2015-11-09
11:17:52,107::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57853)
BindingXMLRPC::INFO::2015-11-09
11:17:52,108::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57853
Thread-4250::INFO::2015-11-09
11:17:52,110::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57853 started
Thread-4250::DEBUG::2015-11-09
11:17:52,111::bindingxmlrpc::1257::vds::(wrapper) client
[127.0.0.1]::call getHardwareInfo with ()
{}
Thread-4250::DEBUG::2015-11-09
11:17:52,112::bindingxmlrpc::1264::vds::(wrapper) return getHardwareInfo
with {'status': {'message': 'Done', 'code': 0}, 'info':
{'systemProductName': 'MS-7529', 'systemSerialNumber': 'To Be Filled By
O.E.M.', 'systemFamily': 'To Be Filled By O.E.M.', 'systemVersion':
'1.0', 'systemUUID': '00000000-0000-0000-0000-6C626DB33B72',
'systemManufacturer': 'MICRO-STAR INTERNATIONAL
CO.,LTD'}}
Thread-4250::INFO::2015-11-09
11:17:52,114::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57853 stopped
Reactor thread::INFO::2015-11-09
11:17:52,116::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57854
Reactor thread::DEBUG::2015-11-09
11:17:52,124::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,124::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57854
BindingXMLRPC::INFO::2015-11-09
11:17:52,125::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57854
Reactor thread::DEBUG::2015-11-09
11:17:52,125::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57854)
Thread-4251::INFO::2015-11-09
11:17:52,128::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57854 started
Thread-4251::DEBUG::2015-11-09
11:17:52,129::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4251::DEBUG::2015-11-09
11:17:52,130::task::595::Storage.TaskManager.Task::(_updateState)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::moving from state init ->
state preparing
Thread-4251::INFO::2015-11-09
11:17:52,130::logUtils::48::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id':
'2c69bdcf-793b-4fda-b326-b8aa6c33ade0', 'vfs_type': 'glusterfs',
'connection': 'ovirt02.mafia.kru:/engine', 'user': 'kvm'}],
options=None)
Thread-4251::DEBUG::2015-11-09
11:17:52,132::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*
Thread-4251::DEBUG::2015-11-09
11:17:52,146::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD
uuids: (u'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
u'0af99439-f140-4636-90f7-f43904735da0')
Thread-4251::DEBUG::2015-11-09
11:17:52,147::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
{b4c488af-9d2f-4b7b-a6f6-74a0bac06c41: storage.glusterSD.findDomain,
0af99439-f140-4636-90f7-f43904735da0:
storage.glusterSD.findDomain}
Thread-4251::INFO::2015-11-09
11:17:52,147::logUtils::51::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 0,
'id':
'2c69bdcf-793b-4fda-b326-b8aa6c33ade0'}]}
Thread-4251::DEBUG::2015-11-09
11:17:52,147::task::1191::Storage.TaskManager.Task::(prepare)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::finished: {'statuslist':
[{'status': 0, 'id':
'2c69bdcf-793b-4fda-b326-b8aa6c33ade0'}]}
Thread-4251::DEBUG::2015-11-09
11:17:52,147::task::595::Storage.TaskManager.Task::(_updateState)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::moving from state preparing
-> state finished
Thread-4251::DEBUG::2015-11-09
11:17:52,147::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{}
Thread-4251::DEBUG::2015-11-09
11:17:52,148::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4251::DEBUG::2015-11-09
11:17:52,148::task::993::Storage.TaskManager.Task::(_decref)
Task=`8535d95e-dce6-4474-bd8d-7824f68cf68a`::ref 0 aborting
False
Thread-4251::INFO::2015-11-09
11:17:52,149::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57854 stopped
Reactor thread::INFO::2015-11-09
11:17:52,150::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57855
Reactor thread::DEBUG::2015-11-09
11:17:52,158::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,158::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57855
BindingXMLRPC::INFO::2015-11-09
11:17:52,159::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57855
Reactor thread::DEBUG::2015-11-09
11:17:52,159::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57855)
Thread-4255::INFO::2015-11-09
11:17:52,162::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57855 started
Thread-4255::DEBUG::2015-11-09
11:17:52,162::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4255::DEBUG::2015-11-09
11:17:52,163::task::595::Storage.TaskManager.Task::(_updateState)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::moving from state init ->
state preparing
Thread-4255::INFO::2015-11-09
11:17:52,163::logUtils::48::dispatcher::(wrapper) Run and protect:
getStorageDomainStats(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
options=None)
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`41531754-8ba9-4fc4-8788-d4d67fa33e5c`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '2848' at
'getStorageDomainStats'
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4255::DEBUG::2015-11-09
11:17:52,164::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`41531754-8ba9-4fc4-8788-d4d67fa33e5c`::Granted
request
Thread-4255::DEBUG::2015-11-09
11:17:52,165::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4255::DEBUG::2015-11-09
11:17:52,165::task::993::Storage.TaskManager.Task::(_decref)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::ref 1 aborting
False
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::750::Storage.SamplingMethod::(__call__) Trying to
enter sampling method
(storage.sdc.refreshStorage)
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::753::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::750::Storage.SamplingMethod::(__call__) Trying to
enter sampling method
(storage.iscsi.rescan)
Thread-4255::DEBUG::2015-11-09
11:17:52,165::misc::753::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-4255::DEBUG::2015-11-09
11:17:52,166::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan,
this will take up to 30 seconds
Thread-4255::DEBUG::2015-11-09
11:17:52,166::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo
-n /sbin/iscsiadm -m session -R (cwd
None)
Thread-4255::DEBUG::2015-11-09
11:17:52,183::misc::760::Storage.SamplingMethod::(__call__) Returning
last result
Thread-4255::DEBUG::2015-11-09
11:17:52,183::misc::750::Storage.SamplingMethod::(__call__) Trying to
enter sampling method
(storage.hba.rescan)
Thread-4255::DEBUG::2015-11-09
11:17:52,184::misc::753::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-4255::DEBUG::2015-11-09
11:17:52,184::hba::56::Storage.HBA::(rescan) Starting
scan
Thread-4255::DEBUG::2015-11-09
11:17:52,295::hba::62::Storage.HBA::(rescan) Scan
finished
Thread-4255::DEBUG::2015-11-09
11:17:52,296::misc::760::Storage.SamplingMethod::(__call__) Returning
last result
Thread-4255::DEBUG::2015-11-09
11:17:52,296::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo
-n /usr/sbin/multipath (cwd None)
Thread-4255::DEBUG::2015-11-09
11:17:52,362::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS:
<err> = ''; <rc> = 0
Thread-4255::DEBUG::2015-11-09
11:17:52,362::utils::676::root::(execCmd) /sbin/udevadm settle
--timeout=5 (cwd None)
Thread-4255::DEBUG::2015-11-09
11:17:52,371::utils::694::root::(execCmd) SUCCESS: <err> = ''; <rc> =
0
Thread-4255::DEBUG::2015-11-09
11:17:52,372::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,372::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,373::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation
mutex
Thread-4255::DEBUG::2015-11-09
11:17:52,374::misc::760::Storage.SamplingMethod::(__call__) Returning
last result
Thread-4255::DEBUG::2015-11-09
11:17:52,386::fileSD::157::Storage.StorageDomainManifest::(__init__)
Reading domain in path
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
Thread-4255::DEBUG::2015-11-09
11:17:52,387::persistentDict::192::Storage.PersistentDict::(__init__)
Created a persistent dict with FileMetadataRW
backend
Thread-4255::DEBUG::2015-11-09
11:17:52,395::persistentDict::234::Storage.PersistentDict::(refresh)
read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=hosted_storage',
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=',
'REMOTE_PATH=ovirt02.mafia.kru:/engine', 'ROLE=Regular',
'SDUUID=b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'TYPE=GLUSTERFS',
'VERSION=3',
'_SHA_CKSUM=cb09606ada74ed4155ad158923dd930264780fc8']
Thread-4255::DEBUG::2015-11-09
11:17:52,398::fileSD::647::Storage.StorageDomain::(imageGarbageCollector)
Removing remnants of deleted images []
Thread-4255::INFO::2015-11-09
11:17:52,399::sd::442::Storage.StorageDomain::(_registerResourceNamespaces)
Resource namespace b4c488af-9d2f-4b7b-a6f6-74a0bac06c41_imageNS already
registered
Thread-4255::INFO::2015-11-09
11:17:52,399::sd::450::Storage.StorageDomain::(_registerResourceNamespaces)
Resource namespace b4c488af-9d2f-4b7b-a6f6-74a0bac06c41_volumeNS
already registered
Thread-4255::INFO::2015-11-09
11:17:52,400::logUtils::51::dispatcher::(wrapper) Run and protect:
getStorageDomainStats, Return response: {'stats': {'mdasize': 0,
'mdathreshold': True, 'mdavalid': True, 'diskfree': '210878988288',
'disktotal': '214643507200', 'mdafree':
0}}
Thread-4255::DEBUG::2015-11-09
11:17:52,401::task::1191::Storage.TaskManager.Task::(prepare)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::finished: {'stats':
{'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree':
'210878988288', 'disktotal': '214643507200', 'mdafree':
0}}
Thread-4255::DEBUG::2015-11-09
11:17:52,401::task::595::Storage.TaskManager.Task::(_updateState)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::moving from state preparing
-> state finished
Thread-4255::DEBUG::2015-11-09
11:17:52,401::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4255::DEBUG::2015-11-09
11:17:52,401::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4255::DEBUG::2015-11-09
11:17:52,402::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4255::DEBUG::2015-11-09
11:17:52,402::task::993::Storage.TaskManager.Task::(_decref)
Task=`04d7087f-f948-4221-8c0f-3e07e7d9bed8`::ref 0 aborting
False
Thread-4255::INFO::2015-11-09
11:17:52,404::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57855 stopped
Reactor thread::INFO::2015-11-09
11:17:52,405::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57856
Reactor thread::DEBUG::2015-11-09
11:17:52,413::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,414::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57856
BindingXMLRPC::INFO::2015-11-09
11:17:52,414::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57856
Reactor thread::DEBUG::2015-11-09
11:17:52,414::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57856)
Thread-4259::INFO::2015-11-09
11:17:52,417::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57856 started
Thread-4259::DEBUG::2015-11-09
11:17:52,418::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4259::DEBUG::2015-11-09
11:17:52,418::task::595::Storage.TaskManager.Task::(_updateState)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::moving from state init ->
state preparing
Thread-4259::INFO::2015-11-09
11:17:52,419::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='56461302-0710-4df0-964d-5e7b1ff07828',
leafUUID='8f8ee034-de86-4438-b6eb-9109faa8b3d3')
Thread-4259::DEBUG::2015-11-09
11:17:52,419::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`bec7c8c3-42b9-4acb-88cf-841d9dc28fb0`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4259::DEBUG::2015-11-09
11:17:52,419::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4259::DEBUG::2015-11-09
11:17:52,420::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4259::DEBUG::2015-11-09
11:17:52,420::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`bec7c8c3-42b9-4acb-88cf-841d9dc28fb0`::Granted
request
Thread-4259::DEBUG::2015-11-09
11:17:52,420::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4259::DEBUG::2015-11-09
11:17:52,420::task::993::Storage.TaskManager.Task::(_decref)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::ref 1 aborting
False
Thread-4259::DEBUG::2015-11-09
11:17:52,445::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3
Thread-4259::DEBUG::2015-11-09
11:17:52,446::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4259::WARNING::2015-11-09
11:17:52,446::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4259::DEBUG::2015-11-09
11:17:52,446::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828
Thread-4259::DEBUG::2015-11-09
11:17:52,447::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828
Thread-4259::DEBUG::2015-11-09
11:17:52,448::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
8f8ee034-de86-4438-b6eb-9109faa8b3d3
Thread-4259::INFO::2015-11-09
11:17:52,450::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': '56461302-0710-4df0-964d-5e7b1ff07828'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID':
'56461302-0710-4df0-964d-5e7b1ff07828'}]}
Thread-4259::DEBUG::2015-11-09
11:17:52,450::task::1191::Storage.TaskManager.Task::(prepare)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID': '56461302-0710-4df0-964d-5e7b1ff07828'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3.lease',
'imageID':
'56461302-0710-4df0-964d-5e7b1ff07828'}]}
Thread-4259::DEBUG::2015-11-09
11:17:52,450::task::595::Storage.TaskManager.Task::(_updateState)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::moving from state preparing
-> state finished
Thread-4259::DEBUG::2015-11-09
11:17:52,450::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4259::DEBUG::2015-11-09
11:17:52,450::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4259::DEBUG::2015-11-09
11:17:52,451::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4259::DEBUG::2015-11-09
11:17:52,451::task::993::Storage.TaskManager.Task::(_decref)
Task=`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::ref 0 aborting
False
Thread-4259::INFO::2015-11-09
11:17:52,454::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57856 stopped
Reactor thread::INFO::2015-11-09
11:17:52,454::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57857
Reactor thread::DEBUG::2015-11-09
11:17:52,463::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,463::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57857
Reactor thread::DEBUG::2015-11-09
11:17:52,464::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57857)
BindingXMLRPC::INFO::2015-11-09
11:17:52,464::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57857
Thread-4260::INFO::2015-11-09
11:17:52,466::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57857 started
Thread-4260::DEBUG::2015-11-09
11:17:52,467::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4260::DEBUG::2015-11-09
11:17:52,467::task::595::Storage.TaskManager.Task::(_updateState)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::moving from state init ->
state preparing
Thread-4260::INFO::2015-11-09
11:17:52,467::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='fd81353f-b654-4493-bcaf-2f417849b830',
leafUUID='8bb29fcb-c109-4f0a-a227-3819b6ecfdd9')
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`974119cd-1351-46e9-8062-ffb1298c4ac9`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4260::DEBUG::2015-11-09
11:17:52,468::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`974119cd-1351-46e9-8062-ffb1298c4ac9`::Granted
request
Thread-4260::DEBUG::2015-11-09
11:17:52,469::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4260::DEBUG::2015-11-09
11:17:52,469::task::993::Storage.TaskManager.Task::(_decref)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::ref 1 aborting
False
Thread-4260::DEBUG::2015-11-09
11:17:52,485::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9
Thread-4260::DEBUG::2015-11-09
11:17:52,486::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4260::WARNING::2015-11-09
11:17:52,487::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4260::DEBUG::2015-11-09
11:17:52,487::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830
Thread-4260::DEBUG::2015-11-09
11:17:52,487::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830
Thread-4260::DEBUG::2015-11-09
11:17:52,488::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
8bb29fcb-c109-4f0a-a227-3819b6ecfdd9
Thread-4260::INFO::2015-11-09
11:17:52,490::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID':
'fd81353f-b654-4493-bcaf-2f417849b830'}]}
Thread-4260::DEBUG::2015-11-09
11:17:52,490::task::1191::Storage.TaskManager.Task::(prepare)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9',
'volumeID': u'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9.lease',
'imageID':
'fd81353f-b654-4493-bcaf-2f417849b830'}]}
Thread-4260::DEBUG::2015-11-09
11:17:52,490::task::595::Storage.TaskManager.Task::(_updateState)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::moving from state preparing
-> state finished
Thread-4260::DEBUG::2015-11-09
11:17:52,490::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4260::DEBUG::2015-11-09
11:17:52,491::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4260::DEBUG::2015-11-09
11:17:52,492::task::993::Storage.TaskManager.Task::(_decref)
Task=`aed16a50-ede9-4ff5-92ef-356692fd56ae`::ref 0 aborting
False
Thread-4260::INFO::2015-11-09
11:17:52,494::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57857 stopped
Reactor thread::INFO::2015-11-09
11:17:52,494::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57858
Reactor thread::DEBUG::2015-11-09
11:17:52,503::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,504::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57858
BindingXMLRPC::INFO::2015-11-09
11:17:52,504::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57858
Reactor thread::DEBUG::2015-11-09
11:17:52,504::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57858)
Thread-4261::INFO::2015-11-09
11:17:52,507::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57858 started
Thread-4261::DEBUG::2015-11-09
11:17:52,508::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4261::DEBUG::2015-11-09
11:17:52,508::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d39463fd-486f-4280-903a-51b72862b648`::moving from state init ->
state preparing
Thread-4261::INFO::2015-11-09
11:17:52,509::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='0e1c20d1-94aa-4003-8e12-0dbbf06a6af8',
leafUUID='3fc3362d-ab6d-4e06-bd72-82d5750c7095')
Thread-4261::DEBUG::2015-11-09
11:17:52,509::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`240c2aba-6c2e-44da-890d-c3d605e1933f`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4261::DEBUG::2015-11-09
11:17:52,509::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4261::DEBUG::2015-11-09
11:17:52,509::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4261::DEBUG::2015-11-09
11:17:52,510::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`240c2aba-6c2e-44da-890d-c3d605e1933f`::Granted
request
Thread-4261::DEBUG::2015-11-09
11:17:52,510::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`d39463fd-486f-4280-903a-51b72862b648`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4261::DEBUG::2015-11-09
11:17:52,510::task::993::Storage.TaskManager.Task::(_decref)
Task=`d39463fd-486f-4280-903a-51b72862b648`::ref 1 aborting
False
Thread-4261::DEBUG::2015-11-09
11:17:52,526::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095
Thread-4261::DEBUG::2015-11-09
11:17:52,528::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4261::WARNING::2015-11-09
11:17:52,528::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4261::DEBUG::2015-11-09
11:17:52,528::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8
Thread-4261::DEBUG::2015-11-09
11:17:52,528::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8
Thread-4261::DEBUG::2015-11-09
11:17:52,530::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
3fc3362d-ab6d-4e06-bd72-82d5750c7095
Thread-4261::INFO::2015-11-09
11:17:52,531::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID': '0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID':
'0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}]}
Thread-4261::DEBUG::2015-11-09
11:17:52,531::task::1191::Storage.TaskManager.Task::(prepare)
Task=`d39463fd-486f-4280-903a-51b72862b648`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID': '0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095',
'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease',
'imageID':
'0e1c20d1-94aa-4003-8e12-0dbbf06a6af8'}]}
Thread-4261::DEBUG::2015-11-09
11:17:52,532::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d39463fd-486f-4280-903a-51b72862b648`::moving from state preparing
-> state finished
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4261::DEBUG::2015-11-09
11:17:52,532::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4261::DEBUG::2015-11-09
11:17:52,533::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4261::DEBUG::2015-11-09
11:17:52,533::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4261::DEBUG::2015-11-09
11:17:52,533::task::993::Storage.TaskManager.Task::(_decref)
Task=`d39463fd-486f-4280-903a-51b72862b648`::ref 0 aborting
False
Thread-4261::INFO::2015-11-09
11:17:52,535::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57858 stopped
Reactor thread::INFO::2015-11-09
11:17:52,536::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57859
Reactor thread::DEBUG::2015-11-09
11:17:52,544::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,545::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:57859
Reactor thread::DEBUG::2015-11-09
11:17:52,545::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57859)
BindingXMLRPC::INFO::2015-11-09
11:17:52,545::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57859
Thread-4262::INFO::2015-11-09
11:17:52,548::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57859 started
Thread-4262::DEBUG::2015-11-09
11:17:52,548::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4262::DEBUG::2015-11-09
11:17:52,549::task::595::Storage.TaskManager.Task::(_updateState)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::moving from state init ->
state preparing
Thread-4262::INFO::2015-11-09
11:17:52,549::logUtils::48::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
spUUID='00000000-0000-0000-0000-000000000000',
imgUUID='350fb787-049a-4174-8914-f371aabfa72c',
leafUUID='02c5d59d-638c-4672-814d-d734e334e24a')
Thread-4262::DEBUG::2015-11-09
11:17:52,549::resourceManager::198::Storage.ResourceManager.Request::(__init__)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`fd9ea6d0-3a31-4ec6-a74c-8b84b2e51746`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at
'prepareImage'
Thread-4262::DEBUG::2015-11-09
11:17:52,550::resourceManager::542::Storage.ResourceManager::(registerResource)
Trying to register resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type
'shared'
Thread-4262::DEBUG::2015-11-09
11:17:52,550::resourceManager::601::Storage.ResourceManager::(registerResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now
locking as 'shared' (1 active user)
Thread-4262::DEBUG::2015-11-09
11:17:52,550::resourceManager::238::Storage.ResourceManager.Request::(grant)
ResName=`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=`fd9ea6d0-3a31-4ec6-a74c-8b84b2e51746`::Granted
request
Thread-4262::DEBUG::2015-11-09
11:17:52,550::task::827::Storage.TaskManager.Task::(resourceAcquired)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::_resourcesAcquired:
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
(shared)
Thread-4262::DEBUG::2015-11-09
11:17:52,551::task::993::Storage.TaskManager.Task::(_decref)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::ref 1 aborting
False
Thread-4262::DEBUG::2015-11-09
11:17:52,566::fileSD::536::Storage.StorageDomain::(activateVolumes)
Fixing permissions on
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a
Thread-4262::DEBUG::2015-11-09
11:17:52,568::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41
mode: None
Thread-4262::WARNING::2015-11-09
11:17:52,568::fileUtils::152::Storage.fileUtils::(createdir) Dir
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already
exists
Thread-4262::DEBUG::2015-11-09
11:17:52,568::fileSD::511::Storage.StorageDomain::(createImageLinks)
Creating symlink from
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c
to
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c
Thread-4262::DEBUG::2015-11-09
11:17:52,568::fileSD::516::Storage.StorageDomain::(createImageLinks)
img run dir already exists:
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c
Thread-4262::DEBUG::2015-11-09
11:17:52,570::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for
02c5d59d-638c-4672-814d-d734e334e24a
Thread-4262::INFO::2015-11-09
11:17:52,572::logUtils::51::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'info': {'domainID':
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID':
'350fb787-049a-4174-8914-f371aabfa72c'}]}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::task::1191::Storage.TaskManager.Task::(prepare)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::finished: {'info':
{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, 'path':
u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',
'volType': 'path', 'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a',
'volumeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a.lease',
'imageID':
'350fb787-049a-4174-8914-f371aabfa72c'}]}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::task::595::Storage.TaskManager.Task::(_updateState)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::moving from state preparing
-> state finished
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj:
'None'>}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::616::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'
Thread-4262::DEBUG::2015-11-09
11:17:52,573::resourceManager::635::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0
active users)
Thread-4262::DEBUG::2015-11-09
11:17:52,574::resourceManager::641::Storage.ResourceManager::(releaseResource)
Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free,
finding out if anyone is waiting for it.
Thread-4262::DEBUG::2015-11-09
11:17:52,574::resourceManager::649::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing
records.
Thread-4262::DEBUG::2015-11-09
11:17:52,574::task::993::Storage.TaskManager.Task::(_decref)
Task=`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::ref 0 aborting
False
Thread-4262::INFO::2015-11-09
11:17:52,576::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57859 stopped
Reactor thread::INFO::2015-11-09
11:17:52,610::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:57860
Reactor thread::DEBUG::2015-11-09
11:17:52,619::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-11-09
11:17:52,619::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from
127.0.0.1:57860
BindingXMLRPC::INFO::2015-11-09
11:17:52,620::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:57860
Reactor thread::DEBUG::2015-11-09
11:17:52,620::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over
http detected from ('127.0.0.1', 57860)
Thread-4263::INFO::2015-11-09
11:17:52,623::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57860 started
Thread-4263::DEBUG::2015-11-09
11:17:52,623::bindingxmlrpc::325::vds::(wrapper) client
[127.0.0.1]
Thread-4263::DEBUG::2015-11-09
11:17:52,624::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::moving from state init ->
state preparing
Thread-4263::INFO::2015-11-09
11:17:52,624::logUtils::48::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-4263::INFO::2015-11-09
11:17:52,624::logUtils::51::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41':
{'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay':
'0.000392118', 'lastCheck': '6.0', 'valid': True},
u'0af99439-f140-4636-90f7-f43904735da0': {'code': 0, 'actual': True,
'version': 3, 'acquired': True, 'delay': '0.000382969', 'lastCheck':
'4.5', 'valid': True}}
Thread-4263::DEBUG::2015-11-09
11:17:52,624::task::1191::Storage.TaskManager.Task::(prepare)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::finished:
{'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': {'code': 0, 'actual': True,
'version': 3, 'acquired': True, 'delay': '0.000392118', 'lastCheck':
'6.0', 'valid': True}, u'0af99439-f140-4636-90f7-f43904735da0': {'code':
0, 'actual': True, 'version': 3, 'acquired': True, 'delay':
'0.000382969', 'lastCheck': '4.5', 'valid':
True}}
Thread-4263::DEBUG::2015-11-09
11:17:52,625::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::moving from state preparing
-> state finished
Thread-4263::DEBUG::2015-11-09
11:17:52,625::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{}
Thread-4263::DEBUG::2015-11-09
11:17:52,625::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-4263::DEBUG::2015-11-09
11:17:52,625::task::993::Storage.TaskManager.Task::(_decref)
Task=`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::ref 0 aborting
False
Thread-4263::INFO::2015-11-09
11:17:52,627::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request
handler for 127.0.0.1:57860 stopped
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:52,829::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
dd
if=/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd
None)
mailbox.SPMMonitor::DEBUG::2015-11-09
11:17:52,845::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail)
SUCCESS: <err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB)
copied, 0.00494757 s, 207 MB/s\n'; <rc> = 0
--
Florent BELLO
Service Informatique
informatique(a)ville-kourou.fr
0594 22 31 22
Mairie de Kourou
7:52,419::resourceManager::198::Storage.ResourceManager.Request::(__init__)=
ResName=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D`bec7c8c3-=
42b9-4acb-88cf-841d9dc28fb0`::Request was made in '/usr/share/vdsm/storage/=
hsm.py' line '3205' at 'prepareImage'<br />Thread-4259::DEBUG::2015-11-09 1=
1:17:52,419::resourceManager::542::Storage.ResourceManager::(registerResour=
ce) Trying to register resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c=
41' for lock type 'shared'<br />Thread-4259::DEBUG::2015-11-09 11:17:52,420=
::resourceManager::601::Storage.ResourceManager::(registerResource) Resourc=
e 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now locking as 's=
hared' (1 active user)<br />Thread-4259::DEBUG::2015-11-09 11:17:52,420::re=
sourceManager::238::Storage.ResourceManager.Request::(grant) ResName=3D`Sto=
rage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D`bec7c8c3-42b9-4acb-88cf-=
841d9dc28fb0`::Granted request<br />Thread-4259::DEBUG::2015-11-09 11:17:52=
,420::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=3D`9dbe0=
1b2-e3e0-466b-90e1-b9803dfce88b`::_resourcesAcquired: Storage.b4c488af-9d2f=
-4b7b-a6f6-74a0bac06c41 (shared)<br />Thread-4259::DEBUG::2015-11-09 11:17:=
52,420::task::993::Storage.TaskManager.Task::(_decref) Task=3D`9dbe01b2-e3e=
0-466b-90e1-b9803dfce88b`::ref 1 aborting False<br />Thread-4259::DEBUG::20=
15-11-09 11:17:52,445::fileSD::536::Storage.StorageDomain::(activateVolumes=
) Fixing permissions on /rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_=
engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-=
5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3<br />Thread-4259::DEBUG::=
2015-11-09 11:17:52,446::fileUtils::143::Storage.fileUtils::(createdir) Cre=
ating directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41=
mode: None<br />Thread-4259::WARNING::2015-11-09 11:17:52,446::fileUtils::=
152::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage/b4c488af-9d2f=
-4b7b-a6f6-74a0bac06c41 already exists<br />Thread-4259::DEBUG::2015-11-09 =
11:17:52,446::fileSD::511::Storage.StorageDomain::(createImageLinks) Creati=
ng symlink from /rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b=
4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff0=
7828 to /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/56461302=
-0710-4df0-964d-5e7b1ff07828<br />Thread-4259::DEBUG::2015-11-09 11:17:52,4=
47::fileSD::516::Storage.StorageDomain::(createImageLinks) img run dir alre=
ady exists: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/5646=
1302-0710-4df0-964d-5e7b1ff07828<br />Thread-4259::DEBUG::2015-11-09 11:17:=
52,448::fileVolume::535::Storage.Volume::(validateVolumePath) validate path=
for 8f8ee034-de86-4438-b6eb-9109faa8b3d3<br />Thread-4259::INFO::2015-11-0=
9 11:17:52,450::logUtils::51::dispatcher::(wrapper) Run and protect: prepar=
eImage, Return response: {'info': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a=
0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-cente=
r/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06=
c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-910=
9faa8b3d3', 'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath=
': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2=
f-4b7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee0=
34-de86-4438-b6eb-9109faa8b3d3.lease', 'imageID': '56461302-0710-4df0-964d-=
5e7b1ff07828'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a=
0bac06c41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109=
faa8b3d3', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac=
06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mn=
t/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/=
images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa=
8b3d3', 'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath': u=
'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b=
7b-a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-d=
e86-4438-b6eb-9109faa8b3d3.lease', 'imageID': '56461302-0710-4df0-964d-5e7b=
1ff07828'}]}<br />Thread-4259::DEBUG::2015-11-09 11:17:52,450::task::1191::=
Storage.TaskManager.Task::(prepare) Task=3D`9dbe01b2-e3e0-466b-90e1-b9803df=
ce88b`::finished: {'info': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c=
41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/g=
lusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/ima=
ges/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3=
d3', 'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath': u'/r=
hev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-=
a6f6-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86=
-4438-b6eb-9109faa8b3d3.lease', 'imageID': '56461302-0710-4df0-964d-5e7b1ff=
07828'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c=
41/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d=
3', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41',=
'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glust=
erSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/=
56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-4438-b6eb-9109faa8b3d3',=
'volumeID': u'8f8ee034-de86-4438-b6eb-9109faa8b3d3', 'leasePath': u'/rhev/=
data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6=
-74a0bac06c41/images/56461302-0710-4df0-964d-5e7b1ff07828/8f8ee034-de86-443=
8-b6eb-9109faa8b3d3.lease', 'imageID': '56461302-0710-4df0-964d-5e7b1ff0782=
8'}]}<br />Thread-4259::DEBUG::2015-11-09 11:17:52,450::task::595::Storage=
=2ETaskManager.Task::(_updateState) Task=3D`9dbe01b2-e3e0-466b-90e1-b9803df=
ce88b`::moving from state preparing -> state finished<br />Thread-4259::=
DEBUG::2015-11-09 11:17:52,450::resourceManager::940::Storage.ResourceManag=
er.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.b4c=
488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef 'Storage.b4c488af-9d2f=
-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj: 'None'>}<br />Thread-4259=
::DEBUG::2015-11-09 11:17:52,450::resourceManager::977::Storage.ResourceMan=
ager.Owner::(cancelAll) Owner.cancelAll requests {}<br />Thread-4259::DEBUG=
::2015-11-09 11:17:52,451::resourceManager::616::Storage.ResourceManager::(=
releaseResource) Trying to release resource 'Storage.b4c488af-9d2f-4b7b-a6f=
6-74a0bac06c41'<br />Thread-4259::DEBUG::2015-11-09 11:17:52,451::resourceM=
anager::635::Storage.ResourceManager::(releaseResource) Released resource '=
Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0 active users)<br />Thread-=
4259::DEBUG::2015-11-09 11:17:52,451::resourceManager::641::Storage.Resourc=
eManager::(releaseResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0b=
ac06c41' is free, finding out if anyone is waiting for it.<br />Thread-4259=
::DEBUG::2015-11-09 11:17:52,451::resourceManager::649::Storage.ResourceMan=
ager::(releaseResource) No one is waiting for resource 'Storage.b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41', Clearing records.<br />Thread-4259::DEBUG::2015=
-11-09 11:17:52,451::task::993::Storage.TaskManager.Task::(_decref) Task=3D=
`9dbe01b2-e3e0-466b-90e1-b9803dfce88b`::ref 0 aborting False<br />Thread-42=
59::INFO::2015-11-09 11:17:52,454::xmlrpc::92::vds.XMLRPCServer::(_process_=
requests) Request handler for 127.0.0.1:57856 stopped<br />Reactor thread::=
INFO::2015-11-09 11:17:52,454::protocoldetector::72::ProtocolDetector.Accep=
torImpl::(handle_accept) Accepting connection from 127.0.0.1:57857<br />Rea=
ctor thread::DEBUG::2015-11-09 11:17:52,463::protocoldetector::82::Protocol=
Detector.Detector::(__init__) Using required_size=3D11<br />Reactor thread:=
:INFO::2015-11-09 11:17:52,463::protocoldetector::118::ProtocolDetector.Det=
ector::(handle_read) Detected protocol xml from 127.0.0.1:57857<br />Reacto=
r thread::DEBUG::2015-11-09 11:17:52,464::bindingxmlrpc::1297::XmlDetector:=
:(handle_socket) xml over http detected from ('127.0.0.1', 57857)<br />Bind=
ingXMLRPC::INFO::2015-11-09 11:17:52,464::xmlrpc::73::vds.XMLRPCServer::(ha=
ndle_request) Starting request handler for 127.0.0.1:57857<br />Thread-4260=
::INFO::2015-11-09 11:17:52,466::xmlrpc::84::vds.XMLRPCServer::(_process_re=
quests) Request handler for 127.0.0.1:57857 started<br />Thread-4260::DEBUG=
::2015-11-09 11:17:52,467::bindingxmlrpc::325::vds::(wrapper) client [127=
=2E0.0.1]<br />Thread-4260::DEBUG::2015-11-09 11:17:52,467::task::595::Stor=
age.TaskManager.Task::(_updateState) Task=3D`aed16a50-ede9-4ff5-92ef-356692=
fd56ae`::moving from state init -> state preparing<br />Thread-4260::INF=
O::2015-11-09 11:17:52,467::logUtils::48::dispatcher::(wrapper) Run and pro=
tect: prepareImage(sdUUID=3D'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', spUUID=
=3D'00000000-0000-0000-0000-000000000000', imgUUID=3D'fd81353f-b654-4493-bc=
af-2f417849b830', leafUUID=3D'8bb29fcb-c109-4f0a-a227-3819b6ecfdd9')<br />T=
hread-4260::DEBUG::2015-11-09 11:17:52,468::resourceManager::198::Storage=
=2EResourceManager.Request::(__init__) ResName=3D`Storage.b4c488af-9d2f-4b7=
b-a6f6-74a0bac06c41`ReqID=3D`974119cd-1351-46e9-8062-ffb1298c4ac9`::Request=
was made in '/usr/share/vdsm/storage/hsm.py' line '3205' at 'prepareImage'=
<br />Thread-4260::DEBUG::2015-11-09 11:17:52,468::resourceManager::542::St=
orage.ResourceManager::(registerResource) Trying to register resource 'Stor=
age.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock type 'shared'<br />Threa=
d-4260::DEBUG::2015-11-09 11:17:52,468::resourceManager::601::Storage.Resou=
rceManager::(registerResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74=
a0bac06c41' is free. Now locking as 'shared' (1 active user)<br />Thread-42=
60::DEBUG::2015-11-09 11:17:52,468::resourceManager::238::Storage.ResourceM=
anager.Request::(grant) ResName=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac0=
6c41`ReqID=3D`974119cd-1351-46e9-8062-ffb1298c4ac9`::Granted request<br />T=
hread-4260::DEBUG::2015-11-09 11:17:52,469::task::827::Storage.TaskManager=
=2ETask::(resourceAcquired) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::=
_resourcesAcquired: Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 (shared)<b=
r />Thread-4260::DEBUG::2015-11-09 11:17:52,469::task::993::Storage.TaskMan=
ager.Task::(_decref) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::ref 1 a=
borting False<br />Thread-4260::DEBUG::2015-11-09 11:17:52,485::fileSD::536=
::Storage.StorageDomain::(activateVolumes) Fixing permissions on /rhev/data=
-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a=
0bac06c41/images/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a2=
27-3819b6ecfdd9<br />Thread-4260::DEBUG::2015-11-09 11:17:52,486::fileUtils=
::143::Storage.fileUtils::(createdir) Creating directory: /var/run/vdsm/sto=
rage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 mode: None<br />Thread-4260::WARN=
ING::2015-11-09 11:17:52,487::fileUtils::152::Storage.fileUtils::(createdir=
) Dir /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already ex=
ists<br />Thread-4260::DEBUG::2015-11-09 11:17:52,487::fileSD::511::Storage=
=2EStorageDomain::(createImageLinks) Creating symlink from /rhev/data-cente=
r/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06=
c41/images/fd81353f-b654-4493-bcaf-2f417849b830 to /var/run/vdsm/storage/b4=
c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830<br =
/>Thread-4260::DEBUG::2015-11-09 11:17:52,487::fileSD::516::Storage.Storage=
Domain::(createImageLinks) img run dir already exists: /var/run/vdsm/storag=
e/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-4493-bcaf-2f417849b830=
<br />Thread-4260::DEBUG::2015-11-09 11:17:52,488::fileVolume::535::Storage=
=2EVolume::(validateVolumePath) validate path for 8bb29fcb-c109-4f0a-a227-3=
819b6ecfdd9<br />Thread-4260::INFO::2015-11-09 11:17:52,490::logUtils::51::=
dispatcher::(wrapper) Run and protect: prepareImage, Return response: {'inf=
o': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',=
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path': u'/=
var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-449=
3-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'imgVolumesInfo'=
: [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', =
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}]}<br />Threa=
d-4260::DEBUG::2015-11-09 11:17:52,490::task::1191::Storage.TaskManager.Tas=
k::(prepare) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::finished: {'inf=
o': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path',=
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}, 'path': u'/=
var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/fd81353f-b654-449=
3-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'imgVolumesInfo'=
: [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', =
'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/fd81353f-b654-44=
93-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'volumeID': u'8=
bb29fcb-c109-4f0a-a227-3819b6ecfdd9', 'leasePath': u'/rhev/data-center/mnt/=
glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/im=
ages/fd81353f-b654-4493-bcaf-2f417849b830/8bb29fcb-c109-4f0a-a227-3819b6ecf=
dd9.lease', 'imageID': 'fd81353f-b654-4493-bcaf-2f417849b830'}]}<br />Threa=
d-4260::DEBUG::2015-11-09 11:17:52,490::task::595::Storage.TaskManager.Task=
::(_updateState) Task=3D`aed16a50-ede9-4ff5-92ef-356692fd56ae`::moving from=
state preparing -> state finished<br />Thread-4260::DEBUG::2015-11-09 1=
1:17:52,490::resourceManager::940::Storage.ResourceManager.Owner::(releaseA=
ll) Owner.releaseAll requests {} resources {'Storage.b4c488af-9d2f-4b7b-a6f=
6-74a0bac06c41': < ResourceRef 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac0=
6c41', isValid: 'True' obj: 'None'>}<br />Thread-4260::DEBUG::2015-11-09=
11:17:52,491::resourceManager::977::Storage.ResourceManager.Owner::(cancel=
All) Owner.cancelAll requests {}<br />Thread-4260::DEBUG::2015-11-09 11:17:=
52,491::resourceManager::616::Storage.ResourceManager::(releaseResource) Tr=
ying to release resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41'<br =
/>Thread-4260::DEBUG::2015-11-09 11:17:52,491::resourceManager::635::Storag=
e.ResourceManager::(releaseResource) Released resource 'Storage.b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41' (0 active users)<br />Thread-4260::DEBUG::2015-1=
1-09 11:17:52,491::resourceManager::641::Storage.ResourceManager::(releaseR=
esource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free, f=
inding out if anyone is waiting for it.<br />Thread-4260::DEBUG::2015-11-09=
11:17:52,491::resourceManager::649::Storage.ResourceManager::(releaseResou=
rce) No one is waiting for resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0ba=
c06c41', Clearing records.<br />Thread-4260::DEBUG::2015-11-09 11:17:52,492=
::task::993::Storage.TaskManager.Task::(_decref) Task=3D`aed16a50-ede9-4ff5=
-92ef-356692fd56ae`::ref 0 aborting False<br />Thread-4260::INFO::2015-11-0=
9 11:17:52,494::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request h=
andler for 127.0.0.1:57857 stopped<br />Reactor thread::INFO::2015-11-09 11=
:17:52,494::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_ac=
cept) Accepting connection from 127.0.0.1:57858<br />Reactor thread::DEBUG:=
:2015-11-09 11:17:52,503::protocoldetector::82::ProtocolDetector.Detector::=
(__init__) Using required_size=3D11<br />Reactor thread::INFO::2015-11-09 1=
1:17:52,504::protocoldetector::118::ProtocolDetector.Detector::(handle_read=
) Detected protocol xml from 127.0.0.1:57858<br />BindingXMLRPC::INFO::2015=
-11-09 11:17:52,504::xmlrpc::73::vds.XMLRPCServer::(handle_request) Startin=
g request handler for 127.0.0.1:57858<br />Reactor thread::DEBUG::2015-11-0=
9 11:17:52,504::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over =
http detected from ('127.0.0.1', 57858)<br />Thread-4261::INFO::2015-11-09 =
11:17:52,507::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request han=
dler for 127.0.0.1:57858 started<br />Thread-4261::DEBUG::2015-11-09 11:17:=
52,508::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]<br />Thread-4=
261::DEBUG::2015-11-09 11:17:52,508::task::595::Storage.TaskManager.Task::(=
_updateState) Task=3D`d39463fd-486f-4280-903a-51b72862b648`::moving from st=
ate init -> state preparing<br />Thread-4261::INFO::2015-11-09 11:17:52,=
509::logUtils::48::dispatcher::(wrapper) Run and protect: prepareImage(sdUU=
ID=3D'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', spUUID=3D'00000000-0000-0000-0=
000-000000000000', imgUUID=3D'0e1c20d1-94aa-4003-8e12-0dbbf06a6af8', leafUU=
ID=3D'3fc3362d-ab6d-4e06-bd72-82d5750c7095')<br />Thread-4261::DEBUG::2015-=
11-09 11:17:52,509::resourceManager::198::Storage.ResourceManager.Request::=
(__init__) ResName=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D=
`240c2aba-6c2e-44da-890d-c3d605e1933f`::Request was made in '/usr/share/vds=
m/storage/hsm.py' line '3205' at 'prepareImage'<br />Thread-4261::DEBUG::20=
15-11-09 11:17:52,509::resourceManager::542::Storage.ResourceManager::(regi=
sterResource) Trying to register resource 'Storage.b4c488af-9d2f-4b7b-a6f6-=
74a0bac06c41' for lock type 'shared'<br />Thread-4261::DEBUG::2015-11-09 11=
:17:52,509::resourceManager::601::Storage.ResourceManager::(registerResourc=
e) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now loc=
king as 'shared' (1 active user)<br />Thread-4261::DEBUG::2015-11-09 11:17:=
52,510::resourceManager::238::Storage.ResourceManager.Request::(grant) ResN=
ame=3D`Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D`240c2aba-6c2e-=
44da-890d-c3d605e1933f`::Granted request<br />Thread-4261::DEBUG::2015-11-0=
9 11:17:52,510::task::827::Storage.TaskManager.Task::(resourceAcquired) Tas=
k=3D`d39463fd-486f-4280-903a-51b72862b648`::_resourcesAcquired: Storage.b4c=
488af-9d2f-4b7b-a6f6-74a0bac06c41 (shared)<br />Thread-4261::DEBUG::2015-11=
-09 11:17:52,510::task::993::Storage.TaskManager.Task::(_decref) Task=3D`d3=
9463fd-486f-4280-903a-51b72862b648`::ref 1 aborting False<br />Thread-4261:=
:DEBUG::2015-11-09 11:17:52,526::fileSD::536::Storage.StorageDomain::(activ=
ateVolumes) Fixing permissions on /rhev/data-center/mnt/glusterSD/ovirt02=
=2Emafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-9=
4aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095<br />Thread=
-4261::DEBUG::2015-11-09 11:17:52,528::fileUtils::143::Storage.fileUtils::(=
createdir) Creating directory: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f=
6-74a0bac06c41 mode: None<br />Thread-4261::WARNING::2015-11-09 11:17:52,52=
8::fileUtils::152::Storage.fileUtils::(createdir) Dir /var/run/vdsm/storage=
/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 already exists<br />Thread-4261::DEBU=
G::2015-11-09 11:17:52,528::fileSD::511::Storage.StorageDomain::(createImag=
eLinks) Creating symlink from /rhev/data-center/mnt/glusterSD/ovirt02.mafia=
=2Ekru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-40=
03-8e12-0dbbf06a6af8 to /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0b=
ac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8<br />Thread-4261::DEBUG::2015-=
11-09 11:17:52,528::fileSD::516::Storage.StorageDomain::(createImageLinks) =
img run dir already exists: /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-7=
4a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8<br />Thread-4261::DEBUG::2=
015-11-09 11:17:52,530::fileVolume::535::Storage.Volume::(validateVolumePat=
h) validate path for 3fc3362d-ab6d-4e06-bd72-82d5750c7095<br />Thread-4261:=
:INFO::2015-11-09 11:17:52,531::logUtils::51::dispatcher::(wrapper) Run and=
protect: prepareImage, Return response: {'info': {'domainID': 'b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'=
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7=
b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab=
6d-4e06-bd72-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7=
095', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_eng=
ine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0db=
bf06a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1=
-94aa-4003-8e12-0dbbf06a6af8'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6=
d-4e06-bd72-82d5750c7095', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4=
b7b-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhe=
v/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6=
f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4=
e06-bd72-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095'=
, 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/=
b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06=
a6af8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1-94a=
a-4003-8e12-0dbbf06a6af8'}]}<br />Thread-4261::DEBUG::2015-11-09 11:17:52,5=
31::task::1191::Storage.TaskManager.Task::(prepare) Task=3D`d39463fd-486f-4=
280-903a-51b72862b648`::finished: {'info': {'domainID': 'b4c488af-9d2f-4b7b=
-a6f6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/d=
ata-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-=
74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06=
-bd72-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', '=
leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c=
488af-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6a=
f8/3fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1-94aa-4=
003-8e12-0dbbf06a6af8'}, 'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b=
-a6f6-74a0bac06c41/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-=
bd72-82d5750c7095', 'imgVolumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f=
6-74a0bac06c41', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-=
center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0=
bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3fc3362d-ab6d-4e06-bd7=
2-82d5750c7095', 'volumeID': u'3fc3362d-ab6d-4e06-bd72-82d5750c7095', 'leas=
ePath': u'/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488a=
f-9d2f-4b7b-a6f6-74a0bac06c41/images/0e1c20d1-94aa-4003-8e12-0dbbf06a6af8/3=
fc3362d-ab6d-4e06-bd72-82d5750c7095.lease', 'imageID': '0e1c20d1-94aa-4003-=
8e12-0dbbf06a6af8'}]}<br />Thread-4261::DEBUG::2015-11-09 11:17:52,532::tas=
k::595::Storage.TaskManager.Task::(_updateState) Task=3D`d39463fd-486f-4280=
-903a-51b72862b648`::moving from state preparing -> state finished<br />=
Thread-4261::DEBUG::2015-11-09 11:17:52,532::resourceManager::940::Storage=
=2EResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resourc=
es {'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': < ResourceRef 'Stora=
ge.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', isValid: 'True' obj: 'None'>}<=
br />Thread-4261::DEBUG::2015-11-09 11:17:52,532::resourceManager::977::Sto=
rage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}<br />Th=
read-4261::DEBUG::2015-11-09 11:17:52,532::resourceManager::616::Storage.Re=
sourceManager::(releaseResource) Trying to release resource 'Storage.b4c488=
af-9d2f-4b7b-a6f6-74a0bac06c41'<br />Thread-4261::DEBUG::2015-11-09 11:17:5=
2,532::resourceManager::635::Storage.ResourceManager::(releaseResource) Rel=
eased resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' (0 active use=
rs)<br />Thread-4261::DEBUG::2015-11-09 11:17:52,533::resourceManager::641:=
:Storage.ResourceManager::(releaseResource) Resource 'Storage.b4c488af-9d2f=
-4b7b-a6f6-74a0bac06c41' is free, finding out if anyone is waiting for it=
=2E<br />Thread-4261::DEBUG::2015-11-09 11:17:52,533::resourceManager::649:=
:Storage.ResourceManager::(releaseResource) No one is waiting for resource =
'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', Clearing records.<br />Thre=
ad-4261::DEBUG::2015-11-09 11:17:52,533::task::993::Storage.TaskManager.Tas=
k::(_decref) Task=3D`d39463fd-486f-4280-903a-51b72862b648`::ref 0 aborting =
False<br />Thread-4261::INFO::2015-11-09 11:17:52,535::xmlrpc::92::vds.XMLR=
PCServer::(_process_requests) Request handler for 127.0.0.1:57858 stopped<b=
r />Reactor thread::INFO::2015-11-09 11:17:52,536::protocoldetector::72::Pr=
otocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127=
=2E0.0.1:57859<br />Reactor thread::DEBUG::2015-11-09 11:17:52,544::protoco=
ldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=3D=
11<br />Reactor thread::INFO::2015-11-09 11:17:52,545::protocoldetector::11=
8::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127=
=2E0.0.1:57859<br />Reactor thread::DEBUG::2015-11-09 11:17:52,545::binding=
xmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('12=
7.0.0.1', 57859)<br />BindingXMLRPC::INFO::2015-11-09 11:17:52,545::xmlrpc:=
:73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0=
=2E0.1:57859<br />Thread-4262::INFO::2015-11-09 11:17:52,548::xmlrpc::84::v=
ds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:57859 st=
arted<br />Thread-4262::DEBUG::2015-11-09 11:17:52,548::bindingxmlrpc::325:=
:vds::(wrapper) client [127.0.0.1]<br />Thread-4262::DEBUG::2015-11-09 11:1=
7:52,549::task::595::Storage.TaskManager.Task::(_updateState) Task=3D`0eebd=
b1c-6c4d-4b86-a2fa-00ad35a19f24`::moving from state init -> state prepar=
ing<br />Thread-4262::INFO::2015-11-09 11:17:52,549::logUtils::48::dispatch=
er::(wrapper) Run and protect: prepareImage(sdUUID=3D'b4c488af-9d2f-4b7b-a6=
f6-74a0bac06c41', spUUID=3D'00000000-0000-0000-0000-000000000000', imgUUID=
=3D'350fb787-049a-4174-8914-f371aabfa72c', leafUUID=3D'02c5d59d-638c-4672-8=
14d-d734e334e24a')<br />Thread-4262::DEBUG::2015-11-09 11:17:52,549::resour=
ceManager::198::Storage.ResourceManager.Request::(__init__) ResName=3D`Stor=
age.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41`ReqID=3D`fd9ea6d0-3a31-4ec6-a74c-8=
b84b2e51746`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '32=
05' at 'prepareImage'<br />Thread-4262::DEBUG::2015-11-09 11:17:52,550::res=
ourceManager::542::Storage.ResourceManager::(registerResource) Trying to re=
gister resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' for lock typ=
e 'shared'<br />Thread-4262::DEBUG::2015-11-09 11:17:52,550::resourceManage=
r::601::Storage.ResourceManager::(registerResource) Resource 'Storage.b4c48=
8af-9d2f-4b7b-a6f6-74a0bac06c41' is free. Now locking as 'shared' (1 active=
user)<br />Thread-4262::DEBUG::2015-11-09 11:17:52,550::resourceManager::2=
38::Storage.ResourceManager.Request::(grant) ResName=3D`Storage.b4c488af-9d=
2f-4b7b-a6f6-74a0bac06c41`ReqID=3D`fd9ea6d0-3a31-4ec6-a74c-8b84b2e51746`::G=
ranted request<br />Thread-4262::DEBUG::2015-11-09 11:17:52,550::task::827:=
:Storage.TaskManager.Task::(resourceAcquired) Task=3D`0eebdb1c-6c4d-4b86-a2=
fa-00ad35a19f24`::_resourcesAcquired: Storage.b4c488af-9d2f-4b7b-a6f6-74a0b=
ac06c41 (shared)<br />Thread-4262::DEBUG::2015-11-09 11:17:52,551::task::99=
3::Storage.TaskManager.Task::(_decref) Task=3D`0eebdb1c-6c4d-4b86-a2fa-00ad=
35a19f24`::ref 1 aborting False<br />Thread-4262::DEBUG::2015-11-09 11:17:5=
2,566::fileSD::536::Storage.StorageDomain::(activateVolumes) Fixing permiss=
ions on /rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-=
9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c=
5d59d-638c-4672-814d-d734e334e24a<br />Thread-4262::DEBUG::2015-11-09 11:17=
:52,568::fileUtils::143::Storage.fileUtils::(createdir) Creating directory:=
/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41 mode: None<br /=
>Thread-4262::WARNING::2015-11-09 11:17:52,568::fileUtils::152::Storage.fil=
eUtils::(createdir) Dir /var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0b=
ac06c41 already exists<br />Thread-4262::DEBUG::2015-11-09 11:17:52,568::fi=
leSD::511::Storage.StorageDomain::(createImageLinks) Creating symlink from =
/rhev/data-center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7=
b-a6f6-74a0bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c to /var/run=
/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8914-=
f371aabfa72c<br />Thread-4262::DEBUG::2015-11-09 11:17:52,568::fileSD::516:=
:Storage.StorageDomain::(createImageLinks) img run dir already exists: /var=
/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-049a-4174-8=
914-f371aabfa72c<br />Thread-4262::DEBUG::2015-11-09 11:17:52,570::fileVolu=
me::535::Storage.Volume::(validateVolumePath) validate path for 02c5d59d-63=
8c-4672-814d-d734e334e24a<br />Thread-4262::INFO::2015-11-09 11:17:52,572::=
logUtils::51::dispatcher::(wrapper) Run and protect: prepareImage, Return r=
esponse: {'info': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'vol=
Type': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/=
ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb=
787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'vol=
umeID': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-=
center/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0=
bac06c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814=
d-d734e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, =
'path': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb7=
87-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'imgV=
olumesInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType=
': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovir=
t02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-=
049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'volumeI=
D': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-cent=
er/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac0=
6c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d7=
34e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}]}<br =
/>Thread-4262::DEBUG::2015-11-09 11:17:52,573::task::1191::Storage.TaskMana=
ger.Task::(prepare) Task=3D`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::finished=
: {'info': {'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': =
'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02=
=2Emafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-0=
49a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'volumeID=
': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-cente=
r/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06=
c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d73=
4e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}, 'path=
': u'/var/run/vdsm/storage/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/350fb787-04=
9a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'imgVolume=
sInfo': [{'domainID': 'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41', 'volType': 'p=
ath', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt02=
=2Emafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06c41/images/350fb787-0=
49a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d734e334e24a', 'volumeID=
': u'02c5d59d-638c-4672-814d-d734e334e24a', 'leasePath': u'/rhev/data-cente=
r/mnt/glusterSD/ovirt02.mafia.kru:_engine/b4c488af-9d2f-4b7b-a6f6-74a0bac06=
c41/images/350fb787-049a-4174-8914-f371aabfa72c/02c5d59d-638c-4672-814d-d73=
4e334e24a.lease', 'imageID': '350fb787-049a-4174-8914-f371aabfa72c'}]}<br /=
>Thread-4262::DEBUG::2015-11-09 11:17:52,573::task::595::Storage.TaskManage=
r.Task::(_updateState) Task=3D`0eebdb1c-6c4d-4b86-a2fa-00ad35a19f24`::movin=
g from state preparing -> state finished<br />Thread-4262::DEBUG::2015-1=
1-09 11:17:52,573::resourceManager::940::Storage.ResourceManager.Owner::(re=
leaseAll) Owner.releaseAll requests {} resources {'Storage.b4c488af-9d2f-4b=
7b-a6f6-74a0bac06c41': < ResourceRef 'Storage.b4c488af-9d2f-4b7b-a6f6-74=
a0bac06c41', isValid: 'True' obj: 'None'>}<br />Thread-4262::DEBUG::2015=
-11-09 11:17:52,573::resourceManager::977::Storage.ResourceManager.Owner::(=
cancelAll) Owner.cancelAll requests {}<br />Thread-4262::DEBUG::2015-11-09 =
11:17:52,573::resourceManager::616::Storage.ResourceManager::(releaseResour=
ce) Trying to release resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c4=
1'<br />Thread-4262::DEBUG::2015-11-09 11:17:52,573::resourceManager::635::=
Storage.ResourceManager::(releaseResource) Released resource 'Storage.b4c48=
8af-9d2f-4b7b-a6f6-74a0bac06c41' (0 active users)<br />Thread-4262::DEBUG::=
2015-11-09 11:17:52,574::resourceManager::641::Storage.ResourceManager::(re=
leaseResource) Resource 'Storage.b4c488af-9d2f-4b7b-a6f6-74a0bac06c41' is f=
ree, finding out if anyone is waiting for it.<br />Thread-4262::DEBUG::2015=
-11-09 11:17:52,574::resourceManager::649::Storage.ResourceManager::(releas=
eResource) No one is waiting for resource 'Storage.b4c488af-9d2f-4b7b-a6f6-=
74a0bac06c41', Clearing records.<br />Thread-4262::DEBUG::2015-11-09 11:17:=
52,574::task::993::Storage.TaskManager.Task::(_decref) Task=3D`0eebdb1c-6c4=
d-4b86-a2fa-00ad35a19f24`::ref 0 aborting False<br />Thread-4262::INFO::201=
5-11-09 11:17:52,576::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Req=
uest handler for 127.0.0.1:57859 stopped<br />Reactor thread::INFO::2015-11=
-09 11:17:52,610::protocoldetector::72::ProtocolDetector.AcceptorImpl::(han=
dle_accept) Accepting connection from 127.0.0.1:57860<br />Reactor thread::=
DEBUG::2015-11-09 11:17:52,619::protocoldetector::82::ProtocolDetector.Dete=
ctor::(__init__) Using required_size=3D11<br />Reactor thread::INFO::2015-1=
1-09 11:17:52,619::protocoldetector::118::ProtocolDetector.Detector::(handl=
e_read) Detected protocol xml from 127.0.0.1:57860<br />BindingXMLRPC::INFO=
::2015-11-09 11:17:52,620::xmlrpc::73::vds.XMLRPCServer::(handle_request) S=
tarting request handler for 127.0.0.1:57860<br />Reactor thread::DEBUG::201=
5-11-09 11:17:52,620::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml=
over http detected from ('127.0.0.1', 57860)<br />Thread-4263::INFO::2015-=
11-09 11:17:52,623::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Reque=
st handler for 127.0.0.1:57860 started<br />Thread-4263::DEBUG::2015-11-09 =
11:17:52,623::bindingxmlrpc::325::vds::(wrapper) client [127.0.0.1]<br />Th=
read-4263::DEBUG::2015-11-09 11:17:52,624::task::595::Storage.TaskManager=
=2ETask::(_updateState) Task=3D`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::movi=
ng from state init -> state preparing<br />Thread-4263::INFO::2015-11-09=
11:17:52,624::logUtils::48::dispatcher::(wrapper) Run and protect: repoSta=
ts(options=3DNone)<br />Thread-4263::INFO::2015-11-09 11:17:52,624::logUtil=
s::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {=
'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': {'code': 0, 'actual': True, 'versio=
n': 3, 'acquired': True, 'delay': '0.000392118', 'lastCheck': '6.0', 'valid=
': True}, u'0af99439-f140-4636-90f7-f43904735da0': {'code': 0, 'actual': Tr=
ue, 'version': 3, 'acquired': True, 'delay': '0.000382969', 'lastCheck': '4=
=2E5', 'valid': True}}<br />Thread-4263::DEBUG::2015-11-09 11:17:52,624::ta=
sk::1191::Storage.TaskManager.Task::(prepare) Task=3D`6fd1d011-d931-4eca-b9=
3b-c0fc3a1b4107`::finished: {'b4c488af-9d2f-4b7b-a6f6-74a0bac06c41': {'code=
': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '0.000392118=
', 'lastCheck': '6.0', 'valid': True}, u'0af99439-f140-4636-90f7-f43904735d=
a0': {'code': 0, 'actual': True, 'version': 3, 'acquired': True, 'delay': '=
0.000382969', 'lastCheck': '4.5', 'valid': True}}<br />Thread-4263::DEBUG::=
2015-11-09 11:17:52,625::task::595::Storage.TaskManager.Task::(_updateState=
) Task=3D`6fd1d011-d931-4eca-b93b-c0fc3a1b4107`::moving from state preparin=
g -> state finished<br />Thread-4263::DEBUG::2015-11-09 11:17:52,625::re=
sourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.relea=
seAll requests {} resources {}<br />Thread-4263::DEBUG::2015-11-09 11:17:52=
,625::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owne=
r.cancelAll requests {}<br />Thread-4263::DEBUG::2015-11-09 11:17:52,625::t=
ask::993::Storage.TaskManager.Task::(_decref) Task=3D`6fd1d011-d931-4eca-b9=
3b-c0fc3a1b4107`::ref 0 aborting False<br />Thread-4263::INFO::2015-11-09 1=
1:17:52,627::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request hand=
ler for 127.0.0.1:57860 stopped<br />mailbox.SPMMonitor::DEBUG::2015-11-09 =
11:17:52,829::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) dd=
if=3D/rhev/data-center/00000001-0001-0001-0001-000000000230/mastersd/dom_m=
d/inbox iflag=3Ddirect,fullblock count=3D1 bs=3D1024000 (cwd None)<br />mai=
lbox.SPMMonitor::DEBUG::2015-11-09 11:17:52,845::storage_mailbox::735::Stor=
age.Misc.excCmd::(_checkForMail) SUCCESS: <err> =3D '1+0 records in\n=
1+0 records out\n1024000 bytes (1.0 MB) copied, 0.00494757 s, 207 MB/s\n'; =
<rc> =3D 0<br /><br /></p>
--
Florent BELLO
Service Informatique
informatique@ville-kourou.fr
0594 22 31 22
Mairie de Kourou
Hello All,
after configuring a first setup, based on the quick start guide, I'm now
looking at the hosted-engine setup.
My question is: after I run the hosted-engine setup, how do I set up VMs
on the same machine that hosts the now-virtualized engine?
Greetings, J.
Invitation: [Deep dive] Host Network QoS - oVirt 3.6 @ Tue Nov 24 5pm - Thu Nov 26, 2015 6pm (ibarkan@redhat.com)
by ibarkan@redhat.com 24 Nov '15
You have been invited to the following event.
Title: [Deep dive] Host Network QoS - oVirt 3.6
Hangouts on air: https://plus.google.com/events/c3la9vdse911atq991qflogtq0g
YouTube link: https://plus.google.com/events/c3la9vdse911atq991qflogtq0g
When: Tue Nov 24 5pm - Thu Nov 26, 2015 6pm Jerusalem
Calendar: ibarkan(a)redhat.com
Who:
* ibarkan(a)redhat.com - organizer
* users(a)ovirt.org
Event details:
https://www.google.com/calendar/event?action=VIEW&eid=NGZxZXU0dDRka2ZrMmd1c…
Invitation from Google Calendar: https://www.google.com/calendar/
You are receiving this courtesy email at the account users(a)ovirt.org
because you are an attendee of this event.
To stop receiving future updates for this event, decline this event.
Alternatively you can sign up for a Google account at
https://www.google.com/calendar/ and control your notification settings for
your entire calendar.
Forwarding this invitation could allow any recipient to modify your RSVP
response. Learn more at
https://support.google.com/calendar/answer/37135#forwarding
[ANN] oVirt 3.5.6 Third Release Candidate is now available for testing
by Sandro Bonazzola 24 Nov '15
The oVirt Project is pleased to announce the availability
of the Third oVirt 3.5.6 Release Candidate for testing, as of November
24th, 2015.
This release is available now for
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
This release supports Hypervisor Hosts running
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
This release includes updated packages for:
- VDSM
See the release notes [1] for a list of fixed bugs.
Please refer to release notes [1] for Installation / Upgrade instructions.
A new oVirt Live ISO is already available[2].
Please note that mirrors[3] usually need about one day to
synchronize.
Please refer to the release notes for known issues in this release.
Please add yourself to the test page[4] if you're testing this release.
[1] http://www.ovirt.org/OVirt_3.5.6_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.5-pre/iso/ovirt-live/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
[4] http://www.ovirt.org/Testing/oVirt_3.5.6_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Re: [ovirt-users] Ovirt 3.6 | After upgrade host can not connect to storage domains | returned by VDSM was: 480
by Punit Dambiwal 24 Nov '15
Hi Sahina,
Even after making the changes in vdsm.conf, I am still not able to
connect to the replica=2 storage.
Thanks,
Punit
On Mon, Nov 23, 2015 at 4:15 PM, Punit Dambiwal <hypunit(a)gmail.com> wrote:
> Hi Sahina,
>
> Thanks for the update... would you mind letting me know the correct syntax
> to add the line in vdsm.conf?
>
> Thanks,
> Punit
>
> On Mon, Nov 23, 2015 at 3:48 PM, Sahina Bose <sabose(a)redhat.com> wrote:
>
>> You can change the allowed_replica_count to 2 in vdsm.conf - though this
>> is not recommended in production. Supported replica count is 3.
>>
>> thanks
>> sahina
>>
>>
>> On 11/23/2015 07:58 AM, Punit Dambiwal wrote:
>>
>> Hi Sahina,
>>
>> Is there any workaround to solve this issue ?
>>
>> Thanks,
>> Punit
>>
>> On Wed, Nov 11, 2015 at 9:36 AM, Sahina Bose <sabose(a)redhat.com> wrote:
>>
>>> Hi,
>>>
>>> Thanks for your email. I will be back on 16th Nov and will get back to
>>> you then.
>>>
>>> thanks
>>> sahina
>>>
>>
>>
>>
>
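In other words, the change under discussion is a one-line edit to /etc/vdsm/vdsm.conf on each host, followed by a restart of vdsmd. The snippet below is only a hedged sketch: the section and key name (allowed_replica_counts under [gluster]) are an assumption based on vdsm releases of that era, so verify them against the vdsm.conf template shipped with your version, and keep Sahina's caveat in mind that replica 2 is not supported for production.

[gluster]
# Assumed key name -- check your installed vdsm version.
# Permits replica-2 gluster volumes alongside the supported replica 3.
allowed_replica_counts = 1,2,3

Then restart the daemon (on EL7: systemctl restart vdsmd) before retrying the storage domain activation.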
Hi,
When creating a new data center in oVirt 3.5, there are two options for the storage type: “local” or “shared”.
Is there any resource out there that explains the difference between the two? The official doc does not really help there.
My current understanding is as follows:
In shared mode, I can create data domains that are shared between hosts in a same data center, eg NFS, iSCSI etc.
In local mode, I can only create data domains locally, but I can “import” an existing iSCSI or export domain to move VMs (with downtime) between data centers.
1. Is this correct or am I missing something here?
2. What would be the reason to go for a “local” storage type cluster?
Thank you very much for helping out a newcomer :)
Kind regards,
—
Christophe
Hi everyone,
How can I import an OVF file from a server into my oVirt setup? Thanks in advance.
Hello,
I'm getting the following error when I try to create a snapshot of one VM. Snapshots of all other VMs work as expected. I'm using oVirt 3.5 on CentOS 7.
>Failed to create live snapshot 'fsbu3' for VM 'Odoo'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
I think this is the relevant part of vdsm.log; what strikes me as odd is this line:
>Thread-1192052::ERROR::2015-11-23 17:18:20,532::vm::4355::vm.Vm::(snapshot) vmId=`581cebb3-7729-4c29-b98c-f9e04aa2fdd0`::The base volume doesn't exist: {'device': 'disk', 'domainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'volumeID': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d'}
The part "The base volume doesn't exist" seems interesting.
Also interesting is that it does create a snapshot, though I don't know if that snapshot is missing data.
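One way to judge whether that snapshot is trustworthy, independent of the engine, is to inspect the qcow2 backing chain of the new leaf volume with something like "qemu-img info --backing-chain <path-to-leaf-volume>" (the path is left hypothetical here; on block storage the LV has to be activated first). Notably, in the log below the Volume.getInfo reply for the new volume 16f92498-c142-4330-bc0f-c96f210c379d does report 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', the very volume the snapshot call said did not exist.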
Thread-1192048::DEBUG::2015-11-23 17:18:20,421::taskManager::103::Storage.TaskManager::(getTaskStatus) Entry. taskID: 21a1c403-f306-40b1-bad8-377d0265ebca
Thread-1192048::DEBUG::2015-11-23 17:18:20,421::taskManager::106::Storage.TaskManager::(getTaskStatus) Return. Response: {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::taskManager::123::Storage.TaskManager::(getAllTasksStatuses) Return: {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}
Thread-1192048::INFO::2015-11-23 17:18:20,422::logUtils::47::dispatcher::(wrapper) Run and protect: getAllTasksStatuses, Return response: {'allTasksStatus': {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::task::1191::Storage.TaskManager.Task::(prepare) Task=`ce3d857c-45d3-4acc-95a5-79484e457fc6`::finished: {'allTasksStatus': {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::task::595::Storage.TaskManager.Task::(_updateState) Task=`ce3d857c-45d3-4acc-95a5-79484e457fc6`::moving from state preparing -> state finished
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::task::993::Storage.TaskManager.Task::(_decref) Task=`ce3d857c-45d3-4acc-95a5-79484e457fc6`::ref 0 aborting False
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::__init__::500::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Host.getAllTasksStatuses' in bridge with {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}
Thread-1192048::DEBUG::2015-11-23 17:18:20,423::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,423::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,424::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192049::DEBUG::2015-11-23 17:18:20,426::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,438::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,439::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192050::DEBUG::2015-11-23 17:18:20,441::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,442::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,443::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192051::DEBUG::2015-11-23 17:18:20,445::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,529::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
Thread-1192052::DEBUG::2015-11-23 17:18:20,530::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'VM.snapshot' in bridge with {'vmID': '581cebb3-7729-4c29-b98c-f9e04aa2fdd0', 'snapDrives': [{'baseVolumeID': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'domainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'volumeID': '16f92498-c142-4330-bc0f-c96f210c379d', 'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d'}]}
JsonRpcServer::DEBUG::2015-11-23 17:18:20,530::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192052::ERROR::2015-11-23 17:18:20,532::vm::4355::vm.Vm::(snapshot) vmId=`581cebb3-7729-4c29-b98c-f9e04aa2fdd0`::The base volume doesn't exist: {'device': 'disk', 'domainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'volumeID': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d'}
Thread-1192052::DEBUG::2015-11-23 17:18:20,532::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,588::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,590::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192053::DEBUG::2015-11-23 17:18:20,590::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Volume.getInfo' in bridge with {'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'storagepoolID': '00000002-0002-0002-0002-000000000354', 'volumeID': '16f92498-c142-4330-bc0f-c96f210c379d', 'storagedomainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc'}
Thread-1192053::DEBUG::2015-11-23 17:18:20,592::task::595::Storage.TaskManager.Task::(_updateState) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::moving from state init -> state preparing
Thread-1192053::INFO::2015-11-23 17:18:20,592::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeInfo(sdUUID='b4e7425a-53c7-40d4-befc-ea36ed7891fc', spUUID='00000002-0002-0002-0002-000000000354', imgUUID='dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', volUUID='16f92498-c142-4330-bc0f-c96f210c379d', options=None)
Thread-1192053::DEBUG::2015-11-23 17:18:20,593::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc`ReqID=`e6aed3a3-c95a-4106-9a16-ad21e5db3ae7`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3124' at 'getVolumeInfo'
Thread-1192053::DEBUG::2015-11-23 17:18:20,593::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' for lock type 'shared'
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' is free. Now locking as 'shared' (1 active user)
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc`ReqID=`e6aed3a3-c95a-4106-9a16-ad21e5db3ae7`::Granted request
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::_resourcesAcquired: Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc (shared)
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::task::993::Storage.TaskManager.Task::(_decref) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::ref 1 aborting False
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::lvm::419::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
Thread-1192053::DEBUG::2015-11-23 17:18:20,595::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1p_storage_store1|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags b4e7425a-53c7-40d4-befc-ea36ed7891fc (cwd None)
Thread-1192053::DEBUG::2015-11-23 17:18:20,731::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
Thread-1192053::DEBUG::2015-11-23 17:18:20,734::lvm::454::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-1192053::DEBUG::2015-11-23 17:18:20,734::lvm::454::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
Thread-1192053::INFO::2015-11-23 17:18:20,734::volume::847::Storage.Volume::(getInfo) Info request: sdUUID=b4e7425a-53c7-40d4-befc-ea36ed7891fc imgUUID=dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d volUUID = 16f92498-c142-4330-bc0f-c96f210c379d
Thread-1192053::DEBUG::2015-11-23 17:18:20,734::blockVolume::594::Storage.Misc.excCmd::(getMetadata) /bin/dd iflag=direct skip=40 bs=512 if=/dev/b4e7425a-53c7-40d4-befc-ea36ed7891fc/metadata count=1 (cwd None)
Thread-1192053::DEBUG::2015-11-23 17:18:20,745::blockVolume::594::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000196717 s, 2.6 MB/s\n'; <rc> = 0
Thread-1192053::DEBUG::2015-11-23 17:18:20,745::misc::262::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000196717 s, 2.6 MB/s'], size: 512
Thread-1192053::INFO::2015-11-23 17:18:20,746::volume::875::Storage.Volume::(getInfo) b4e7425a-53c7-40d4-befc-ea36ed7891fc/dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d/16f92498-c142-4330-bc0f-c96f210c379d info is {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}
Thread-1192053::INFO::2015-11-23 17:18:20,746::logUtils::47::dispatcher::(wrapper) Run and protect: getVolumeInfo, Return response: {'info': {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}}
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::task::1191::Storage.TaskManager.Task::(prepare) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::finished: {'info': {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}}
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::task::595::Storage.TaskManager.Task::(_updateState) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::moving from state preparing -> state finished
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc': < ResourceRef 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc', isValid: 'True' obj: 'None'>}
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc'
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' (0 active users)
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' is free, finding out if anyone is waiting for it.
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc', Clearing records.
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::task::993::Storage.TaskManager.Task::(_decref) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::ref 0 aborting False
Thread-1192053::DEBUG::2015-11-23 17:18:20,748::__init__::500::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'allocType': 'SPARSE', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}
Thread-1192053::DEBUG::2015-11-23 17:18:20,748::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,863::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,864::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192054::DEBUG::2015-11-23 17:18:20,864::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Task.clear' in bridge with {'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}
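(For anyone triaging a similar "base volume doesn't exist" error on a block storage domain: a minimal sketch of how one might check whether the base volume is really gone — the vdsClient argument order and the VDSM LV-tag convention below are from memory, not from this thread:

# Ask VDSM directly about the volume the snapshot claims is missing
# (UUIDs taken from the log above)
vdsClient -s 0 getVolumeInfo b4e7425a-53c7-40d4-befc-ea36ed7891fc \
    00000002-0002-0002-0002-000000000354 \
    dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d \
    9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7

# On a block domain each volume is an LV; list LV names and tags
# (VDSM tags volumes with their image/parent UUIDs) to inspect the chain
lvs -o lv_name,lv_tags b4e7425a-53c7-40d4-befc-ea36ed7891fc

If the LV for 9a7fc7e0-... is absent from the volume group, the chain is genuinely broken rather than merely mis-reported.)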
Hi guys,
I am a college student who will graduate next year, majoring in CS, and I
want to write my graduate thesis on something related to oVirt.
For now I need a project which I can work on for the next 6 months, so I need
an idea list which I can reference to write a proposal.
I have contributed to #dri-devel during my Google Summer of Code project, so I
know how open-source groups work, though workflows differ somewhat.
I would appreciate it if anyone could help, and I hope someone could be my
mentor.
BR,
Zhao Junwang
--
Best regards
Junwang Zhao
Department of Computer Science & Technology
Peking University
Beijing, 100871, PRC
Re: [ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE
by Giuseppe Ragusa 23 Nov '15
On Mon, Nov 9, 2015, at 08:16, Sandro Bonazzola wrote:
> On Sun, Nov 8, 2015 at 9:57 PM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
>> On Tue, Nov 3, 2015, at 23:17, Giuseppe Ragusa wrote:
> On Tue, Nov 3, 2015, at 15:27, Simone Tiraboschi wrote:
> > On Mon, Nov 2, 2015 at 11:55 PM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
> >> On Mon, Nov 2, 2015, at 09:52, Simone Tiraboschi wrote:
> >>> On Mon, Nov 2, 2015 at 1:48 AM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
> >>>> Hi all,
> >>>> I'm stuck with the following error during the final phase of ovirt-hosted-engine-setup:
> >>>>
> >>>> The host hosted_engine_1 is in non-operational state.
> >>>> Please try to activate it via the engine webadmin UI.
> >>>>
> >>>> If I login on the engine administration web UI I find the corresponding message (inside NonOperational first host hosted_engine_1 Events tab):
> >>>>
> >>>> Host hosted_engine_1 does not comply with the cluster Default networks, the following networks are missing on host: 'ovirtmgmt'
> >>>>
> >>>> I'm installing with an oVirt snapshot from October the 27th on a fully-patched CentOS 7.1 host with a GlusterFS volume (3.7.5 hyperconverged, replica 3, for the engine-vm) pre-created and network interfaces/bridges (ovirtmgmt and other two bridges, called nfs and lan, on underlying 802.3ad bonds or plain interfaces) manually pre-configured in /etc/sysconfig/network-interfaces/ifcfg-* (using "classic" network service; NetworkManager disabled).
> >>>>
> >>>
> >>> If you manually created the network bridges, the match between them and the logical networks should happen on a name basis.
> >>
> >> Hi Simone,
> >> many thanks for your help (again) :)
> >>
> >> As you may note from the above comment, the name should actually match (it's exactly ovirtmgmt) but it doesn't get recognized.
> >>
> >>
> >>> If it doesn't for any reasons (please report if you find any evidence), you can manually bind logical network and network interfaces editing the host properties from the web-ui. At that point the host should become active in a few seconds.
> >>
> >>
> >> Well, the most immediate evidence are the error messages already reported (given that the bridge is actually present, with the right name and actually working).
> >> Apart from that, I find the following past logs (I don't know whether they are relevant or not):
> >>
> >> From /var/log/vdsm/connectivity.log:
> >
> >
> > Can you please add also host-deploy logs?
>
> Please find a gzipped tar archive of the whole directory /var/log/ovirt-engine/host-deploy/ at:
>
> https://onedrive.live.com/redir?resid=74BDE216CAA3E26F!110&authkey=!AIQUc6i…
>>
>> Since I suppose that there's nothing relevant on those logs, I'm planning to specify "net_persistence = ifcfg" in /etc/vdsm/vdsm.conf and restart VDSM on the host, then making the (still blocked) setup re-check.
>>
>>
Is there anything I should pay attention to before proceeding? (in particular while restarting VDSM)
>
>
> ^^ Dan?
I went ahead, and unfortunately setting "net_persistence = ifcfg" in /etc/vdsm/vdsm.conf and restarting VDSM on the host did not solve it (same error as before).
While trying (always without success) all the other steps suggested by Simone (binding the logical network and synchronizing networks from the host), I found an interesting-looking libvirt network definition (autostarted too) for vdsm-ovirtmgmt, and this recalled some memories of past mailing list messages (which I still cannot find...) ;)
Long story short: aborting setup, cleaning everything up and creating a libvirt network for each pre-provisioned bridge worked! ("net_persistence = ifcfg" has been kept for other, client-specific, reasons, so I don't know whether it's needed too)
Here it is, in BASH form:
# Define and autostart a libvirt network of forward mode "bridge" for each
# pre-provisioned Linux bridge, so that oVirt/VDSM recognizes them:
for my_bridge in ovirtmgmt bridge1 bridge2; do
  # Write a temporary libvirt network definition for this bridge
  cat <<- EOM > /root/my-${my_bridge}.xml
<network>
  <name>vdsm-${my_bridge}</name>
  <forward mode='bridge'/>
  <bridge name='${my_bridge}'/>
</network>
EOM
  # Register the network with libvirt and mark it to start on boot
  virsh -c qemu:///system net-define /root/my-${my_bridge}.xml
  virsh -c qemu:///system net-autostart vdsm-${my_bridge}
  rm -f /root/my-${my_bridge}.xml
done
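(If it helps anyone reproducing this, the result of the loop above can be verified afterwards with stock virsh commands:

# Confirm the per-bridge networks are defined and set to autostart
virsh -c qemu:///system net-list --all
# Inspect one of them
virsh -c qemu:///system net-dumpxml vdsm-ovirtmgmt
)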
I was able to connect to libvirtd with the "virsh" commands above (libvirtd must be running for them to work) by removing the VDSM-added config fragment, allowing TCP connections and disabling TLS-only connections in /etc/libvirt/libvirtd.conf, and finally by removing /etc/sasl2/libvirt.conf. All these modifications must be reverted after configuring the networks (stopping libvirtd first) and before relaunching setup.
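(As a rough sketch of the libvirtd.conf changes described above — the exact values are an assumption on my part, since the file isn't quoted here:

# /etc/libvirt/libvirtd.conf (temporary; revert before relaunching setup)
listen_tls = 0          # stop requiring TLS-only connections
listen_tcp = 1          # allow plain TCP connections for virsh
auth_tcp = "none"       # assumption: with /etc/sasl2/libvirt.conf removed, skip SASL
)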
Many thanks again for suggestions, hints etc.
Regards,
Giuseppe
>> I will report back here on the results.
>>
>> Regards,
>> Giuseppe
>>
>> Many thanks again for your kind assistance.
>>
>> Regards,
>> Giuseppe
> >> 2015-11-01 21:37:21,029:DEBUG:recent_client:True
> >> 2015-11-01 21:37:51,088:DEBUG:recent_client:False
> >> 2015-11-01 21:38:21,146:DEBUG:dropped vnet0:(operstate:up speed:0 duplex:full) dropped vnet2:(operstate:up speed:0 duplex:full) dropped vnet1:(operstate:up speed:0 duplex:full)
> >> 2015-11-01 21:38:36,174:DEBUG:recent_client:True
> >> 2015-11-01 21:39:06,233:DEBUG:recent_client:False
> >> 2015-11-01 21:48:22,383:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
> >> 2015-11-01 21:48:52,450:DEBUG:recent_client:False
> >> 2015-11-01 22:55:21,668:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
> >> 2015-11-01 22:56:00,952:DEBUG:recent_client:False, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
> >> 2015-11-01 22:58:16,215:DEBUG:new vnet0:(operstate:up speed:0 duplex:full) new vnet2:(operstate:up speed:0 duplex:full) new vnet1:(operstate:up speed:0 duplex:full)
> >> 2015-11-02 00:04:54,019:DEBUG:dropped vnet0:(operstate:up speed:0 duplex:full) dropped vnet2:(operstate:up speed:0 duplex:full) dropped vnet1:(operstate:up speed:0 duplex:full)
> >> 2015-11-02 00:05:39,102:DEBUG:new vnet0:(operstate:up speed:0 duplex:full) new vnet2:(operstate:up speed:0 duplex:full) new vnet1:(operstate:up speed:0 duplex:full)
> >> 2015-11-02 01:16:47,194:DEBUG:recent_client:True
> >> 2015-11-02 01:17:32,693:DEBUG:recent_client:True, vnet0:(operstate:up speed:0 duplex:full), lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), vnet2:(operstate:up speed:0 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), vnet1:(operstate:up speed:0 duplex:full), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
> >> 2015-11-02 01:18:02,749:DEBUG:recent_client:False
> >> 2015-11-02 01:20:18,001:DEBUG:recent_client:True
> >>
> >> From /var/log/vdsm/vdsm.log:
> >>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,991::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond0.
> >> Thread-98::DEBUG::2015-11-01 22:55:16,992::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond0.
> >> Thread-98::DEBUG::2015-11-01 22:55:16,994::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f0.
> >> Thread-98::DEBUG::2015-11-01 22:55:16,995::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f0.
> >> Thread-98::DEBUG::2015-11-01 22:55:16,997::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f1.
> >> Thread-98::DEBUG::2015-11-01 22:55:16,997::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f1.
> >> Thread-98::DEBUG::2015-11-01 22:55:16,999::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f0.
> >> Thread-98::DEBUG::2015-11-01 22:55:16,999::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f0.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,001::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f0.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,001::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f0.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,003::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f1.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,003::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f1.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,005::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f1.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,006::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f1.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,007::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f2.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,008::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f2.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,009::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f3.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,010::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f3.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,014::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on ovirtmgmt.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,015::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on ovirtmgmt.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,019::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond1.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,019::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond1.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,024::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on nfs.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,024::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on nfs.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,028::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond2.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,028::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond2.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,033::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on lan.
> >> Thread-98::DEBUG::2015-11-01 22:55:17,033::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on lan.
> >>
> >> And further down, still in /var/log/vdsm/vdsm.log:
> >>
> >> Thread-17::DEBUG::2015-11-02 01:17:18,747::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Host.getCapabilities' in bridge with {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:5ed1a874ff5'}], 'FC': []}, 'packages2': {'kernel': {'release': '229.14.1.el7.x86_64', 'buildtime': 1442322351.0, 'version': '3.10.0'}, 'glusterfs-rdma': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-fuse': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'spice-server': {'release': '9.el7_1.3', 'buildtime': 1444691699L, 'version': '0.12.4'}, 'librbd1': {'release': '2.el7', 'buildtime': 1425594433L, 'version': '0.80.7'}, 'vdsm': {'release': '2.gitdbbc5a4.el7', 'buildtime': 1445459370L, 'version': '4.17.10'}, 'qemu-kvm': {'release': '23.el7_1.9.1', 'buildtime': 1443185645L, 'version': '2.1.2'}, 'glusterfs': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'libvirt': {'release': '16.el7_1.4', 'buildtime': 1442325910L, 'version': '1.2.8'}, 'qemu-img': {'release': '23.el7_1.9.1', 'buildtime': 1443185645L, 'version': '2.1.2'}, 'mom': {'release': '2.el7', 'buildtime': 1442501481L, 'version': '0.5.1'}, 'glusterfs-geo-replication': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-server': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-cli': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}}, 'numaNodeDistance': {'0': [10]}, 'cpuModel': 'Intel(R) Atom(TM) CPU C2750 @ 2.40GHz', 'liveMerge': 'true', 'hooks': {'before_vm_start': {'50_hostedengine': {'md5': '2a6d96c26a3599812be6cf1a13d9f485'}}}, 'vmTypes': ['kvm'], 'selinux': {'mode': '0'}, 'liveSnapshot': 'true', 'kdumpStatus': 0, 'networks': {}, 'bridges': {'ovirtmgmt': {'addr': '172.25.10.21', 'cfg': {'AGEING': '0', 'DEFROUTE': 'no', 'IPADDR': '172.25.10.21', 'IPV4_FAILURE_FATAL': 'yes', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb37/64'], 'gateway': '', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['172.25.10.21/24'], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond0', 'vnet0'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '83', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.002590f1cb37', 'bridge_id': '8000.002590f1cb37', 'topology_change_timer': '0', 'ageing_time': '0', 'nf_call_ip6tables': '0', 'gc_timer': '83', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}, 'lan': {'addr': '192.168.164.218', 'cfg': {'AGEING': '0', 'IPADDR': '192.168.164.218', 'GATEWAY': '192.168.164.254', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'STP': 'off', 'DEVICE': 'lan', 'IPV4_FAILURE_FATAL': 'yes', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::a236:9fff:fe38:88cd/64'], 'gateway': '192.168.164.254', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['192.168.164.218/24', '192.168.164.216/24'], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['vnet1', 'enp6s0f0'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '82', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.a0369f3888cd', 'bridge_id': '8000.a0369f3888cd', 'topology_change_timer': '0', 'ageing_time': '0', 'nf_call_ip6tables': '0', 'gc_timer': '82', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}, 'nfs': {'addr': '172.25.15.21', 'cfg': {'AGEING': '0', 'DEFROUTE': 'no', 'IPADDR': '172.25.15.21', 'IPV4_FAILURE_FATAL': 'yes', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'STP': 'off', 'DEVICE': 'nfs', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb35/64'], 'gateway': '', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['172.25.15.21/24', '172.25.15.203/24'], 'mtu': '9000', 'ipv6gateway': '::', 'ports': ['bond1', 'vnet2'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '183', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.002590f1cb35', 'bridge_id': '8000.002590f1cb35', 'topology_change_timer': '0', 'ageing_time': '0', 'nf_call_ip6tables': '0', 'gc_timer': '83', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}}, 'uuid': '2a1855a9-18fb-4d7a-b8b8-6fc898a8e827', 'onlineCpus': '0,1,2,3,4,5,6,7', 'nics': {'enp0s20f1': {'permhwaddr': '00:25:90:f1:cb:35', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'ETHTOOL_OPTS': '-K ${DEVICE} tso off ufo off gso off gro off lro off', 'DEVICE': 'enp0s20f1', 'BOOTPROTO': 'none', 'MASTER': 'bond1', 'HWADDR': '00:25:90:F1:CB:35', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:35', 'speed': 1000, 'gateway': ''}, 'enp7s0f0': {'permhwaddr': 'a0:36:9f:38:88:cf', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'ETHTOOL_OPTS': '-K ${DEVICE} tso off ufo off gso off gro off lro off', 'DEVICE': 'enp7s0f0', 'BOOTPROTO': 'none', 'MASTER': 'bond1', 'HWADDR': 'A0:36:9F:38:88:CF', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:35', 'speed': 1000, 'gateway': ''}, 'enp6s0f0': {'addr': '', 'ipv6gateway': '::', 'ipv6addrs': ['fe80::a236:9fff:fe38:88cd/64'], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'BRIDGE': 'lan', 'NM_CONTROLLED': 'no', 'ETHTOOL_OPTS': '-K ${DEVICE} tso off ufo off gso off gro off lro off', 'DEVICE': 'enp6s0f0', 'BOOTPROTO': 'none', 'HWADDR': 'A0:36:9F:38:88:CD', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': 'a0:36:9f:38:88:cd', 'speed': 100, 'gateway': ''}, 'enp6s0f1': {'permhwaddr': 'a0:36:9f:38:88:cc', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp6s0f1', 'BOOTPROTO': 'none', 'MASTER': 'bond0', 'HWADDR': 'A0:36:9F:38:88:CC', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:37', 'speed': 1000, 'gateway': ''}, 'enp7s0f1': {'permhwaddr': 'a0:36:9f:38:88:ce', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp7s0f1', 'BOOTPROTO': 'none', 'MASTER': 'bond2', 'HWADDR': 'A0:36:9F:38:88:CE', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:34', 'speed': 1000, 'gateway': ''}, 'enp0s20f0': {'permhwaddr': '00:25:90:f1:cb:34', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp0s20f0', 'BOOTPROTO': 'none', 'MASTER': 'bond2', 'HWADDR': '00:25:90:F1:CB:34', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:34', 'speed': 1000, 'gateway': ''}, 'enp0s20f3': {'permhwaddr': '00:25:90:f1:cb:37', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp0s20f3', 'BOOTPROTO': 'none', 'MASTER': 'bond0', 'HWADDR': '00:25:90:F1:CB:37', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:37', 'speed': 1000, 'gateway': ''}, 'enp0s20f2': {'permhwaddr': '00:25:90:f1:cb:36', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp0s20f2', 'BOOTPROTO': 'none', 'MASTER': 'bond2', 'HWADDR': '00:25:90:F1:CB:36', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:34', 'speed': 1000, 'gateway': ''}}, 'software_revision': '2', 'hostdevPassthrough': 'false', 'clusterLevels': ['3.4', '3.5', '3.6'], 'cpuFlags': 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,sse4_2,movbe,popcnt,tsc_deadline_timer,aes,rdrand,lahf_lm,3dnowprefetch,ida,arat,epb,dtherm,tpr_shadow,vnmi,flexpriority,ept,vpid,tsc_adjust,smep,erms,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:5ed1a874ff5', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.4', '3.5', '3.6'], 'autoNumaBalancing': 0, 'additionalFeatures': ['GLUSTER_SNAPSHOT', 'GLUSTER_GEO_REPLICATION', 'GLUSTER_BRICK_MANAGEMENT'], 'reservedMem': '321', 'bondings': {'bond0': {'ipv4addrs': [], 'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=balance-rr miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb37/64'], 'active_slave': '', 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'slaves': ['enp0s20f3', 'enp6s0f1'], 'hwaddr': '00:25:90:f1:cb:37', 'ipv6gateway': '::', 'gateway': '', 'opts': {'miimon': '100'}}, 'bond1': {'ipv4addrs': [], 'addr': '', 'cfg': {'BRIDGE': 'nfs', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=802.3ad xmit_hash_policy=layer2+3 miimon=100', 'DEVICE': 'bond1', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb35/64'], 'active_slave': '', 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'slaves': ['enp0s20f1', 'enp7s0f0'], 'hwaddr': '00:25:90:f1:cb:35', 'ipv6gateway': '::', 'gateway': '', 'opts': {'miimon': '100', 'mode': '4', 'xmit_hash_policy': '2'}}, 'bond2': {'ipv4addrs': ['172.25.5.21/24'], 'addr': '172.25.5.21', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '172.25.5.21', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'BONDING_OPTS': 'mode=802.3ad xmit_hash_policy=layer2+3 miimon=100', 'DEVICE': 'bond2', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb34/64'], 'active_slave': '', 'mtu': '9000', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'slaves': ['enp0s20f0', 'enp0s20f2', 'enp7s0f1'], 'hwaddr': '00:25:90:f1:cb:34', 'ipv6gateway': '::', 'gateway': '', 'opts': {'miimon': '100', 'mode': '4', 'xmit_hash_policy': '2'}}}, 'software_version': '4.17', 'memSize': '16021', 'cpuSpeed': '2401.000', 'numaNodes': {'0': {'totalMemory': '16021', 'cpus': [0, 1, 2, 3, 4, 5, 6, 7]}}, 'cpuSockets': '1', 'vlans': {}, 'lastClientIface': 'ovirtmgmt', 'cpuCores': '8', 'kvmEnabled': 'true', 'guestOverhead': '65', 'version_name': 'Snow Man', 'cpuThreads': '8', 'emulatedMachines': ['pc-i440fx-rhel7.1.0', 'rhel6.3.0', 'pc-q35-rhel7.0.0', 'rhel6.1.0', 'rhel6.6.0', 'rhel6.2.0', 'pc', 'pc-q35-rhel7.1.0', 'q35', 'rhel6.4.0', 'rhel6.0.0', 'rhel6.5.0', 'pc-i440fx-rhel7.0.0'], 'rngSources': ['random'], 'operatingSystem': {'release': '1.1503.el7.centos.2.8', 'version': '7', 'name': 'RHEL'}}
> >>
> >> Navigating the Admin web UI offered by the engine, editing (with the "Edit" button or via the corresponding context menu entry) the hosted_engine_1 host, I do not find any way to associate the logical oVirt ovirtmgmt network with the already present ovirtmgmt Linux bridge.
> >>
> >> Furthermore, the "Network Interfaces" tab of the aforementioned host shows only plain interfaces and bonds (all marked with a down-pointing red arrow, even though they are actually up and running), but not the already defined Linux bridges. Inside this tab I find two buttons: "Setup Host Networks", which would allow me to drag-and-drop-associate the ovirtmgmt logical network to an already present bond (like the right one: bond0), but I avoid it, since I fear it would try to create a bridge from scratch, while one is actually present now and already has the host address assigned on top, allowing engine-host communication at the moment; and "Sync All Networks", which actively scares me with a threatening "Are you sure you want to synchronize all host's networks?", which I deny, since its view is already wrong and it's absolutely not clear in which direction the synchronization would go.
> >>
> >> So, it seems to me that either I need to perform further pre-configuration steps on the host for the ovirtmgmt bridge (beyond the ifcfg-* setup) or there is a bug in the setup/adminportal (a UI/usability bug, maybe) :)
> >>
> >> Many thanks again for your help.
> >>
> >> Kind regards,
> >> Giuseppe
> >>
> >>> When the host becomes active you can continue with hosted-engine-setup.
> >>>
> >>>> I seem to recall that a preconfigured network setup on oVirt 3.6 would need something predefined on the libvirt side too (apart from the usual ifcfg-* files), but I cannot find the relevant mailing list message anymore nor any other specific documentation.
> >>>>
> >>>> Does anyone have any further suggestion or clue (code/docs to read)?
> >>>>
> >>>> Many thanks in advance.
> >>>>
> >>>> Kind regards,
> >>>> Giuseppe
> >>>>
> >>>> PS: please also keep my address in replies because I'm experiencing some problems between Hotmail and the oVirt mailing list
> >>>>
> >>>> _______________________________________________
> >>>> Users mailing list
> >>>> Users(a)ovirt.org
> >>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
On Sat, Nov 21, 2015, at 13:59, Dan Kenigsberg wrote:
> On Fri, Nov 20, 2015 at 01:54:35PM +0100, Giuseppe Ragusa wrote:
> > Hi all,
> > I go on with my wishlist, derived from both solitary mumblings and community talks at the first Italian oVirt Meetup.
> >
> > I offer to help in coding (work/family schedules permitting) but keep in mind that I'm a sysadmin with mainly C and bash-scripting skills (but hoping to improve my less-than-newbie Python too...)
> >
> > I've sent separate wishlist messages for oVirt Node and Engine.
> >
> > VDSM:
> >
> > *) allow VDSM to configure/manage Samba, CTDB and Ganesha (specifically, I'm thinking of the GlusterFS integration); there are related wishlist items on configuring/managing Samba/CTDB/Ganesha on the Engine and on oVirt Node
>
> I'd appreciate a more detailed feature definition. Vdsm (and oVirt) try
> to configure only the things that are needed for their own usage. What do you
> want to control? When? You're welcome to draft a feature page prior to
> coding the fix ;-)
I was thinking of adding CIFS/NFSv4 functionality to a hyperconverged cluster (GlusterFS/oVirt) which would have separate volumes for virtual machine storage (one volume for the Engine and one for the other VMs, with no CIFS/NFSv4 capabilities offered) and for data shares (directly accessible by clients on the LAN and obviously from local VMs too).
Think of it as a 3-node HA NetApp+VMware killer ;-)
The UI idea (but that would be the Engine part, I understand) was along the lines of single-check enabling CIFS and/or NFSv4 sharing for a GlusterFS data volume, then optionally adding any further specific options (hosts allowed, users/groups for read/write access, network recycle_bin etc.); global Samba (domain/workgroup membership etc.) and CTDB (IPs/interfaces) configuration parameters would be needed too (see the sketch below).
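(To make the idea concrete, here is a hand-written sketch of what such a single-check CIFS option might generate, using Samba's vfs_glusterfs module; the share name, volume name and group are made up for illustration:

# /etc/samba/smb.conf fragment (hypothetical share on a GlusterFS data volume)
[data]
    path = /                          # share the root of the Gluster volume
    vfs objects = glusterfs
    glusterfs:volume = data           # the GlusterFS data volume to export
    glusterfs:volfile_server = localhost
    read only = no
    valid users = @datausers          # made-up group for read/write access
)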
I have no experience with a clustered/HA NFS-Ganesha configuration on GlusterFS, but (from superficially skimming the docs) it seems that it was not possible at all before 2.2 and that it now needs a full Pacemaker/Corosync setup too (contrary to the IBM-GPFS-backed case), so that could be a problem.
This VDSM wishlist item was driven by the idea that all actions performed by the Engine through the hosts/nodes (and so future GlusterFS/Samba/CTDB actions too) are somehow "mediated" by VDSM and its API, but if this is not the case, then I withdraw my suggestion here and will try to pursue it only on the Engine/Node side ;)
Many thanks for your attention.
Regards,
Giuseppe
> > *) add Open vSwitch direct support (not Neutron-mediated); there are related wishlist items on configuring/managing Open vSwitch on oVirt Node and on the Engine
>
> That's on our immediate roadmap. Soon, vdsm-hook-ovs will be ready for
> testing.
>
> >
> > *) add DRBD9 as a supported Storage Domain type; there are related wishlist items on configuring/managing DRBD9 on the Engine and on oVirt Node
> >
> > *) allow VDSM to configure/manage containers (maybe extend it by use of the LXC libvirt driver, similarly to the experimental work that has been put up to allow Xen vm management); there are related wishlist items on configuring/managing containers on the Engine and on oVirt Node
> >
> > *) add a VDSM_remote mode (for lack of a better name, but mainly inspired by pacemaker_remote) to be used inside a guest by the above mentioned container support (giving to the Engine the required visibility on the managed containers, but excluding the "virtual node" from power management and other unsuitable actions)
> >
> > Regards,
> > Giuseppe
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
23 Nov '15
The CFP for the Virtualization & IaaS devroom at FOSDEM 2016 is in full
swing, and we'd like to share a few updates with you:
-------------------------
Speaker Mentoring Program
-------------------------
As a part of the rising efforts to grow our communities and encourage a
diverse and inclusive conference ecosystem, we're happy to announce that
we'll be offering mentoring for newcomer speakers. Our mentors can help
you with tasks such as reviewing your abstract, reviewing your
presentation outline or slides, or practicing your talk with you.
You may apply to the mentoring program as a newcomer speaker if you:
* Never presented before or
* Presented only lightning talks or
* Presented full-length talks at small meetups (<50 ppl)
Submission guidelines:
* Mentored presentations will have 25-minute slots, where 20 minutes
will include the presentation and 5 minutes will be reserved for questions.
* The number of newcomer session slots is limited, so we will probably
not be able to accept all applications.
* You must submit your talk and abstract to apply for the mentoring
program, our mentors are volunteering their time and will happily
provide feedback but won't write your presentation for you! If you are
experiencing problems with Pentabarf, the proposal submission interface,
or have other questions, you can email iaas-virt-devroom at
lists.fosdem.org and we will try to help you.
How to apply:
* Follow the same procedure to submit an abstract in Pentabarf as
standard sessions. Instructions can be found in our original CFP
announcement:
http://community.redhat.com/blog/2015/10/call-for-proposals-fosdem16-virtua…
* In addition to agreeing to video recording and confirming that you can
attend FOSDEM in case your session is accepted, please write "speaker
mentoring program application" in the "Submission notes" field, and list
any prior speaking experience or other relevant information for your
application.
Call for mentors!
Interested in mentoring newcomer speakers? We'd love to have your help!
Please email iaas-virt-devroom at lists.fosdem.org with a short speaker
bio and any specific fields of expertise (for example, KVM, OpenStack,
storage, etc) so that we can match you with a newcomer speaker from a
similar field. The estimated time investment can be as low as 5-10 hours
in total, usually distributed weekly or bi-weekly.
Never mentored a newcomer speaker but interested to try? Our mentoring
program coordinator will be happy to answer your questions and give you
tips on how to optimize the mentoring process. Email us and we'll be
happy to answer your questions!
-------------------------
CFP Deadline Extension
-------------------------
To help accommodate the newcomer speaker proposals, we have decided to
extend the deadline for submitting proposals by one week.
The new deadline is **TUESDAY, DECEMBER 8 @ midnight CET**.
-------------------------
Code of Conduct
-------------------------
Following the release of the updated code of conduct for FOSDEM[1], we'd
like to remind all speakers and attendees that all of the presentations
and discussions in our devroom are held under the guidelines set in the
CoC and we expect attendees, speakers, and volunteers to follow the CoC
at all times.
If you submit a proposal and it is accepted, you will be required to
confirm that you accept the FOSDEM CoC. If you have any questions about
the CoC or wish to have one of the devroom organizers review your
presentation slides or any other content for CoC compliance, please
email iaas-virt-devroom at lists.fosdem.org and we will do our best to
help you out.
[1] https://www.fosdem.org/2016/practical/conduct/
--
Mikey Ariel
Community Lead, oVirt
www.ovirt.org
"To be is to do" (Socrates)
"To do is to be" (Jean-Paul Sartre)
"Do be do be do" (Frank Sinatra)
Mobile: +420-702-131-141
IRC: mariel / thatdocslady
Twitter: @ThatDocsLady
Cannot setup Networks. The address of the network 'NFS' cannot be modified
by Ihor Piddubnyak 23 Nov '15
Trying to change the IP for a VLAN interface attached to the hypervisor
vhi2, I'm getting:
Cannot setup Networks. The address of the network 'NFS' cannot be
modified without reinstalling the host, since this address was used to
create the host's certification.
Host reinstall does not help. Any clue how to fix this?
--
Ihor Piddubnyak <ip(a)surftown.com>
surftown a/s
[3.6] User can't create a VM. No permission for EDIT_ADMIN_VM_PROPERTIES
by Maksim Naumov 23 Nov '15
Hello
I am facing a problem: a user can't create a VM. The user has
PowerUserRole on the Cluster. He tried to create a VM from a base template and
had no success.
Here are some lines from the log. I have no idea why it requires the
EDIT_ADMIN_VM_PROPERTIES permission for the user:
2015-11-20 16:42:10,888 DEBUG [org.ovirt.engine.core.bll.AddVmCommand]
(default task-160) [2f0eb905] Checking whether user
'acc9ced5-a764-4d60-84d7-db4b4a498a18' or one of the groups he is member
of, have the following permissions: ID:
a303bbca-af20-4de5-9eff-01c52d3bf615 Type: VdsGroupsAction group CREATE_VM
with role type USER, ID: 00000000-0000-0000-0000-000000000000 Type:
VmTemplateAction group CREATE_VM with role type USER, ID:
a303bbca-af20-4de5-9eff-01c52d3bf615 Type: VdsGroupsAction group
EDIT_ADMIN_VM_PROPERTIES with role type ADMIN
2015-11-20 16:42:10,890 DEBUG [org.ovirt.engine.core.bll.AddVmCommand]
(default task-160) [2f0eb905] Found permission
'129c57bb-df56-4529-93d9-52db0265263f' for user when running 'AddVm', on
'Cluster' with id 'a303bbca-af20-4de5-9eff-01c52d3bf615'
2015-11-20 16:42:10,893 DEBUG [org.ovirt.engine.core.bll.AddVmCommand]
(default task-160) [2f0eb905] Found permission
'00000004-0004-0004-0004-000000000355' for user when running 'AddVm', on
'Template' with id '00000000-0000-0000-0000-000000000000'
2015-11-20 16:42:10,894 DEBUG [org.ovirt.engine.core.bll.AddVmCommand]
(default task-160) [2f0eb905] No permission found for user when running
action 'AddVm', on object 'Cluster' for action group
'EDIT_ADMIN_VM_PROPERTIES' with id 'a303bbca-af20-4de5-9eff-01c52d3bf615'.
2015-11-20 16:42:10,894 WARN [org.ovirt.engine.core.bll.AddVmCommand]
(default task-160) [2f0eb905] CanDoAction of action 'AddVm' failed for user
vincent.engel@hitmeister.de@hitmeister.de. Reasons:
VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_PERFORM_ACTION
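(Not necessarily the root cause here, but for comparison, this is roughly how a role can be granted explicitly through the REST API in oVirt 3.x — the engine URL and VM ID are placeholders, and the exact XML shape is from memory rather than from this thread:

# Hypothetical: grant UserVmManager on a specific VM to the user from the log
curl -k -u 'admin@internal:password' -H 'Content-Type: application/xml' \
  -d '<permission>
        <role><name>UserVmManager</name></role>
        <user id="acc9ced5-a764-4d60-84d7-db4b4a498a18"/>
      </permission>' \
  https://engine.example.com/api/vms/VM_ID/permissions
)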
--
Maksim Naumov
Hitmeister GmbH
Softwareentwickler
Habsburgerring 2
50674 Köln
E: maksim.naumov(a)hitmeister.de
www.hitmeister.de
HRB 59046, Amtsgericht Köln
Geschäftsführer: Dr. Gerald Schönbucher
Any hints to this, please?

-------- Original message --------
From: Nicolás <nicolas(a)devels.es>
Date: 20/11/2015 18:39 (GMT+00:00)
To: users(a)ovirt.org
Subject: [ovirt-users] Allowing a user to manage all machines in a pool

Hi,

We're running oVirt 3.5.3.1-1, and we're currently deploying some Pools
for students and teachers, so each has access to one machine in the
pool. Thus, each of them is granted the UserRole in the pool. Now the
teacher is asking us to allow him access to all students' VMs via the
Web GUI to evaluate their work.

Is there a permission to accomplish that? In the worst case I will
detach the VMs from the pool and grant the teacher the UserRole on each
of them, but I'd like to know if there's a "cleaner" way.

Thanks.

Regards,

Nicolás
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Hi all,
I'm using an NFS storage domain backed by a ZFS cluster. I need to deploy
a new storage domain; what would the recommended record size for it be?
--
Met vriendelijke groeten / With kind regards,
Johan Kooijman
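(For context on the knob itself — not a recommendation for a specific value, which depends on the workload: recordsize is a per-dataset ZFS property, so it can be set on a dataset dedicated to the new domain, e.g.:

# recordsize is set per dataset; 128K is the ZFS default, shown only as an example
zfs create -o recordsize=128K tank/ovirt-domain
zfs get recordsize tank/ovirt-domain
)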
Hi,
My POC environment has 2 hosts - host A and host B, both CentOS 7. I
installed an oVirt 3.6 self-hosted engine, manually created a 2-brick
GlusterFS volume using both hosts and added it to my datacenter.
I tried shutting down host A. The hosted engine restarted on host B
within 3 minutes, which is very cool. However, the GlusterFS data
domain, for which I set both 'Use Host' and 'Path' to host A, went down
along with it.
Here come my questions:
1. How can I enable failover for the GlusterFS data domain?
2. How can I revert to the state before adding the data domain?
The data domain is super persistent - I can't edit or delete it. I put
it into maintenance mode but am still unable to detach or destroy it because
it requires me to remove the datacenter first. I tried but can't remove
the datacenter either.
3. Why can't I add another GlusterFS data domain? When I choose
'GlusterFS' as my 'Storage Type', every text field becomes grayed out.
4. When host A came back up, I noticed that 'Use Host' was changed
from host A to host B instead. Is this expected behavior?
Regards,
Wee Sritippho
P.S: Please excuse my poor English.
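(Regarding question 1, a commonly cited approach — stated here as an assumption about what applies to this setup, not something confirmed in this thread — is to pass a backup volfile server to the GlusterFS native client, so mounting survives the loss of the host named in the path:

# GlusterFS native-client option for volfile-server failover; in oVirt this
# would go in the storage domain's "Mount Options" field
mount -t glusterfs -o backup-volfile-servers=hostB hostA:/data /mnt/test
)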
Hi,
Does oVirt support Mac OS? If so, which option do I need to select under
"Operating System"?
Thanks,
Nagaraju
Hi,
At some point I had the oVirt reporting configured on my engine and it
worked. I had a "reports" option in the menu and could generate reports
for various resources.
At some point I noticed that the "reports" option was no longer there
but did not have time to investigate. I believe it happened when I
migrated the engine host from CentOS 6 to 7 using engine-backup and restore.
How can I debug this?
In the ovirt-engine-dwh log I used to see the following error:
Exception in component tJDBCRollback_4
org.postgresql.util.PSQLException: FATAL: terminating connection due to
administrator command
at
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at
org.postgresql.jdbc2.AbstractJdbc2Connection.executeTransactionCommand(AbstractJdbc2Connection.java:793)
at
org.postgresql.jdbc2.AbstractJdbc2Connection.rollback(AbstractJdbc2Connection.java:846)
at
ovirt_engine_dwh.historyetl_3_5.HistoryETL.tJDBCRollback_4Process(HistoryETL.java:2079)
at
ovirt_engine_dwh.historyetl_3_5.HistoryETL.tJDBCRollback_3Process(HistoryETL.java:1997)
at
ovirt_engine_dwh.historyetl_3_5.HistoryETL.tJDBCRollback_2Process(HistoryETL.java:1882)
at
ovirt_engine_dwh.historyetl_3_5.HistoryETL.tJDBCRollback_1Process(HistoryETL.java:1767)
at
ovirt_engine_dwh.historyetl_3_5.HistoryETL.tPostjob_1Process(HistoryETL.java:1647)
at
ovirt_engine_dwh.historyetl_3_5.HistoryETL.runJobInTOS(HistoryETL.java:10785)
at
ovirt_engine_dwh.historyetl_3_5.HistoryETL.main(HistoryETL.java:10277)
2015-11-19
15:42:02|rza8ri|rza8ri|rza8ri|OVIRT_ENGINE_DWH|HistoryETL|Default|6|Java
Exception|tJDBCRollback_4|org.postgresql.util.PSQLException:FATAL: term
But after rebooting the engine host it now only lists 'Service Started'.
The ovirt-engine-reportsd service is also running.
Which of these two processes (reportsd vs dwhd) generates the
reports (and shows them in the engine admin interface)?
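(A quick way to see which of the two is actually alive — assuming the stock systemd unit names, which I'm quoting from memory: dwhd runs the ETL into the history database, while reportsd serves the Jasper-based reports UI:

systemctl status ovirt-engine-dwhd ovirt-engine-reportsd
)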
In /var/log/ovirt-engine-reports, the reports.log file is empty, the
server.log reports Deployed ovirt-engine-reports.war as the last line
(without any obvious errors). Only jasperserver.log shows:
2015-11-19 15:41:53,304 ERROR DiskStorageFactory,MSC service thread
1-2:948 - Could not flush disk cache. Initial cause was
/tmp/dataSnapshots/snapshot%0043ontents.index (No such file or directory)
java.io.FileNotFoundException:
/tmp/dataSnapshots/snapshot%0043ontents.index (No such file or directory)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
at
net.sf.ehcache.store.disk.DiskStorageFactory$IndexWriteTask.call(DiskStorageFactory.java:1120)
at
net.sf.ehcache.store.disk.DiskStorageFactory.unbind(DiskStorageFactory.java:946)
at net.sf.ehcache.store.disk.DiskStore.dispose(DiskStore.java:616)
at
net.sf.ehcache.store.FrontEndCacheTier.dispose(FrontEndCacheTier.java:521)
at net.sf.ehcache.Cache.dispose(Cache.java:2473)
at net.sf.ehcache.CacheManager.shutdown(CacheManager.java:1446)
at
org.springframework.cache.ehcache.EhCacheManagerFactoryBean.destroy(EhCacheManagerFactoryBean.java:134)
at
org.springframework.beans.factory.support.DisposableBeanAdapter.destroy(DisposableBeanAdapter.java:211)
at
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroyBean(DefaultSingletonBeanRegistry.java:498)
at
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingleton(DefaultSingletonBeanRegistry.java:474)
at
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingletons(DefaultSingletonBeanRegistry.java:442)
at
org.springframework.context.support.AbstractApplicationContext.destroyBeans(AbstractApplicationContext.java:1066)
at
org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:1040)
at
org.springframework.context.support.AbstractApplicationContext.close(AbstractApplicationContext.java:988)
at
org.springframework.web.context.ContextLoader.closeWebApplicationContext(ContextLoader.java:541)
at
org.springframework.web.context.ContextLoaderListener.contextDestroyed(ContextLoaderListener.java:142)
at
org.apache.catalina.core.StandardContext.listenerStop(StandardContext.java:3489)
at
org.apache.catalina.core.StandardContext.stop(StandardContext.java:3999)
at
org.jboss.as.web.deployment.WebDeploymentService.stop(WebDeploymentService.java:108)
at
org.jboss.msc.service.ServiceControllerImpl$StopTask.stopService(ServiceControllerImpl.java:1911)
at
org.jboss.msc.service.ServiceControllerImpl$StopTask.run(ServiceControllerImpl.java:1874)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
I have no idea how to proceed with debugging this. How is the reporting
connected to the engine?
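A minimal first check, sketched on the assumption that the migrated host
runs systemd and that the dwh service and its log keep their usual 3.x
names (ovirt-engine-dwhd and /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log
are assumptions here; ovirt-engine-reportsd is the name mentioned above).
Roughly, dwhd only collects history data into the dwh database, while the
reports service renders it, so both need to be up:

# confirm both services survived the CentOS 7 migration
systemctl status ovirt-engine-dwhd ovirt-engine-reportsd
# look at the most recent ETL activity (log path is an assumption)
tail -n 50 /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log
# and at the reports side, in the directory already mentioned
tail -n 50 /var/log/ovirt-engine-reports/jasperserver.log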
Rik
--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>
3
4
Hello
A user in the User Portal is unable to create a virtual machine. Every time
I see the error "User is not authorized to perform this action."
I tried different roles: VmCreator + DiskCreator, PowerUserRole,
UserVmManager... even a new role with all the permissions - no way!
Can you give a 100% working example of how to set up a role for a user to
be able to create a VM?
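One pattern that is known to work in 3.x: the permission has to sit on the
right objects, not only on System - VmCreator on the cluster (or data
center) the VM goes into, plus DiskCreator on the storage domain for its
disks. A hedged REST sketch; the engine URL, password, and all ids below
are placeholders, not values from this thread:

# find the ids of the roles and the user
curl -k -u 'admin@internal:secret' https://engine.example.com/api/roles
curl -k -u 'admin@internal:secret' https://engine.example.com/api/users
# grant VmCreator on the target cluster
curl -k -u 'admin@internal:secret' -H 'Content-Type: application/xml' -X POST \
  https://engine.example.com/api/clusters/<cluster_id>/permissions \
  -d '<permission><role id="<vmcreator_id>"/><user id="<user_id>"/></permission>'
# grant DiskCreator on the storage domain the disks will land on
curl -k -u 'admin@internal:secret' -H 'Content-Type: application/xml' -X POST \
  https://engine.example.com/api/storagedomains/<domain_id>/permissions \
  -d '<permission><role id="<diskcreator_id>"/><user id="<user_id>"/></permission>'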
Thank you
--
Maksim Naumov
Hitmeister GmbH
Software Developer
Habsburgerring 2
50674 Köln
E: maksim.naumov(a)hitmeister.de
www.hitmeister.de
HRB 59046, Amtsgericht Köln
Managing Director: Dr. Gerald Schönbucher
Hi,
Can I get the documentation for configuring a cluster for the Engine?
Thanks,
Nagaraju
Hi,
I am getting the error below while adding a host:
2015-11-20 18:40:36,397 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) FINISH, FenceVdsVDSCommand, return: Test Succeeded, on, log id: 6495791f
2015-11-20 18:40:40,588 WARN [org.ovirt.engine.core.bll.AddVdsCommand] (ajp--127.0.0.1-8702-6) [2933038a] CanDoAction of action AddVds failed for user admin@internal. Reasons: VAR__ACTION__ADD,VAR__TYPE__HOST,$server pbuovirt2.bnglab.psecure.net,ACTION_TYPE_FAILED_VDS_WITH_SAME_UUID_EXIST
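ACTION_TYPE_FAILED_VDS_WITH_SAME_UUID_EXIST usually means the new host
reports the same host UUID as one already registered in the engine, which
is common when hosts were cloned from the same image. A hedged check,
assuming the usual 3.x location of the VDSM host id file:

# compare the id on the new host with the one on the existing host
cat /etc/vdsm/vdsm.id
dmidecode -s system-uuid
# if two hosts share the same id (cloned image), regenerate it on one
uuidgen > /etc/vdsm/vdsm.id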
Thanks,
Nagaraju
Hi all,
I go on with my wishlist, derived from both solitary mumblings and community talks at the first Italian oVirt Meetup.
I offer to help in coding (work/family schedules permitting) but keep in mind that I'm a sysadmin with mainly C and bash-scripting skills (but hoping to improve my less-than-newbie Python too...)
I've sent separate wishlist messages for oVirt Node and Engine.
VDSM:
*) allow VDSM to configure/manage Samba, CTDB and Ganesha (specifically, I'm thinking of the GlusterFS integration); there are related wishlist items on configuring/managing Samba/CTDB/Ganesha on the Engine and on oVirt Node
*) add Open vSwitch direct support (not Neutron-mediated); there are related wishlist items on configuring/managing Open vSwitch on oVirt Node and on the Engine
*) add DRBD9 as a supported Storage Domain type; there are related wishlist items on configuring/managing DRBD9 on the Engine and on oVirt Node
*) allow VDSM to configure/manage containers (maybe by extending it with the LXC libvirt driver, similarly to the experimental work that has been done to allow Xen VM management; see the sketch after this list); there are related wishlist items on configuring/managing containers on the Engine and on oVirt Node
*) add a VDSM_remote mode (for lack of a better name, but mainly inspired by pacemaker_remote) to be used inside a guest by the above mentioned container support (giving to the Engine the required visibility on the managed containers, but excluding the "virtual node" from power management and other unsuitable actions)
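For the container item above, a minimal sketch of what driving the libvirt
LXC driver directly looks like; the container name and rootfs path are
made up for illustration:

# define and start a container via libvirt's LXC driver
# (assumes the libvirt lxc driver is installed and a root filesystem
# has been unpacked under the hypothetical path below)
cat > container1.xml << 'EOF'
<domain type='lxc'>
  <name>container1</name>
  <memory unit='KiB'>524288</memory>
  <os><type>exe</type><init>/sbin/init</init></os>
  <devices>
    <filesystem type='mount'>
      <source dir='/var/lib/libvirt/lxc/container1'/>
      <target dir='/'/>
    </filesystem>
    <console type='pty'/>
  </devices>
</domain>
EOF
virsh -c lxc:/// define container1.xml
virsh -c lxc:/// start container1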
Regards,
Giuseppe
Hi all,
I found that for some reason it is hard to find a working example for
integrating oVirt 3.5 with FreeIPA using the generic aaa ldap
extension.
Here's what I did to get it to work:
The oVirt OS is CentOS 6 x86_64, with all the latest patches applied.
The oVirt machine can be a member of the FreeIPA domain, but this is
not required for the oVirt-FreeIPA authentication to work.
Personally, I think it's nice to have the oVirt machine under FreeIPA
supervision as well.
The FreeIPA OS is CentOS 7 x86_64, with all the latest patches applied.
The oVirt environment is configured, up and running.
There are two ways of single sign-on for oVirt; see
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualiza…
This howto is for the first option.
You require a search account in the FreeIPA domain: add a user account
to the FreeIPA domain, log in with that account so it asks you to set a
new password for it, then reset the expiration date for that password to
somewhere in the far future with the procedure below.
#
# Add the search account for ovirt to the freeipa domain.
#
# executed these commands on the freeipa server as root.
#
# first set the variables
export SUFFIX='dc=example,dc=com'
export OVIRT_SERVER=ovirt.example.com
export FREEIPA_DOMAIN=EXAMPLE.COM
export USERNAME=ovirt
export YOUR_PASSWORD='top_secret_random_very_long_password'
# create an ldif file
cat > resetexpiration.ldif << EOF
dn: uid=$USERNAME,cn=users,cn=accounts,$SUFFIX
changetype: modify
replace: krbpasswordexpiration
krbpasswordexpiration: 20380119031407Z
EOF
# apply the ldif file
# the password requested is the directory manager password; this is NOT
# the same account as the freeipa admin
ldapmodify -x -D "cn=directory manager" -W -vv -f resetexpiration.ldif
# for the second option you also need to
# add the http service to freeipa
kinit admin
ipa service-add HTTP/$OVIRT_SERVER@$FREEIPA_DOMAIN
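#
# optional sanity check for the expiration reset above (uses the admin
# ticket from the kinit; the flags and attribute name are assumptions
# about 'ipa user-show' output)
#
ipa user-show $USERNAME --all --raw | grep -i krbpasswordexpiration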
#
# The following commands are executed as root on the ovirt-engine machine.
#
#
# first install the required package :
#
yum install -y ovirt-engine-extension-aaa-ldap
#
# ovirt configuration files
# examples can be found here :
# /usr/share/ovirt-engine-extension-aaa-ldap/examples/simple/.
#
mkdir /etc/ovirt-engine/aaa
mkdir /etc/ovirt-engine/extensions.d
#
# set the vars again (exports do not carry over between machines)
#
export SUFFIX='dc=example,dc=com'
export YOUR_PASSWORD='top_secret_random_very_long_password'
export FREEIPA_SERVER=freeipa.example.com
export PROFILE_NAME=profile1
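#
# optional: verify the search account can actually bind before wiring it
# into the engine (assumes the openldap-clients package is installed)
#
ldapsearch -x -H ldap://$FREEIPA_SERVER \
  -D "uid=ovirt,cn=users,cn=accounts,$SUFFIX" -w "$YOUR_PASSWORD" \
  -b "cn=users,cn=accounts,$SUFFIX" '(uid=ovirt)' dn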
#
# create the config files
#
cat > /etc/ovirt-engine/aaa/$PROFILE_NAME.properties << EOF
include = <ipa.properties>
vars.server = $FREEIPA_SERVER
vars.user = uid=ovirt,cn=users,cn=accounts,$SUFFIX
vars.password = $YOUR_PASSWORD
pool.default.serverset.single.server = \${global:vars.server}
pool.default.auth.simple.bindDN = \${global:vars.user}
pool.default.auth.simple.password = \${global:vars.password}
EOF
cat > /etc/ovirt-engine/extensions.d/$PROFILE_NAME-authz.properties << EOF
ovirt.engine.extension.name = $PROFILE_NAME-authz
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module =
org.ovirt.engine-extensions.aaa.ldap
ovirt.engine.extension.binding.jbossmodule.class =
org.ovirt.engineextensions.aaa.ldap.AuthzExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authz
config.profile.file.1 = ../aaa/$PROFILE_NAME.properties
EOF
cat > /etc/ovirt-engine/extensions.d/$PROFILE_NAME-authn.properties << EOF
ovirt.engine.extension.name = $PROFILE_NAME-authn
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module =
org.ovirt.engine-extensions.aaa.ldap
ovirt.engine.extension.binding.jbossmodule.class =
org.ovirt.engineextensions.aaa.ldap.AuthnExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn
ovirt.engine.aaa.authn.profile.name = $PROFILE_NAME
ovirt.engine.aaa.authn.authz.plugin = $PROFILE_NAME-authz
config.profile.file.1 = ../aaa/$PROFILE_NAME.properties
EOF
#
# change owner and permissions of the files containing credentials
# (the aaa profile file holds the bind password, so lock it down too)
#
chown ovirt:ovirt /etc/ovirt-engine/aaa/$PROFILE_NAME.properties
chmod 400 /etc/ovirt-engine/aaa/$PROFILE_NAME.properties
chown ovirt:ovirt /etc/ovirt-engine/extensions.d/$PROFILE_NAME-authn.properties
chmod 400 /etc/ovirt-engine/extensions.d/$PROFILE_NAME-authn.properties
#
# restart the ovirt engine
#
service ovirt-engine restart
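#
# optional: confirm the new profile was picked up after the restart
# (the engine log path is the usual default and an assumption here)
#
grep -i "$PROFILE_NAME" /var/log/ovirt-engine/engine.log | tail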
#
# done. you can now add the freeipa users in the users menu of the engine
# admin portal; after the users have been added you can assign permissions
# for them on the vm's
#
Cheers
Rob Verduijn