[ovirt-users] Users Digest, Vol 39, Issue 38

Nikolai Sednev nsednev at redhat.com
Mon Dec 8 12:59:05 UTC 2014


Hi all, 
I was thinking of "booting from iSCSI SAN", which means you would use a LUN on the storage array to boot your host over the network. 
In that case you could configure your host's hardware to boot from iSCSI, and then you wouldn't need any local disk in the host. 
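For reference, on a stock CentOS/RHEL host this usually comes down to pointing the initramfs at the iSCSI target. A minimal sketch of the dracut kernel arguments — the portal address, IQNs, and root label below are placeholders, not values from this thread:

```shell
# Hypothetical example: dracut kernel command line for an iSCSI root.
# Replace the portal IP, initiator/target IQNs, and root device with your own.
ip=eth0:dhcp \
rd.iscsi.initiator=iqn.2014-12.com.example:host01 \
netroot=iscsi:192.0.2.10::3260::iqn.2014-12.com.example:ovirt-boot-lun \
root=/dev/disk/by-label/rootfs
```

With HW-initiator boot (iBFT) the NIC firmware logs in instead, and `rd.iscsi.firmware=1` tells dracut to reuse those settings.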
+adding more people to add their comments. 


Thanks in advance. 

Best regards, 
Nikolai 
____________________ 
Nikolai Sednev 
Senior Quality Engineer at Compute team 
Red Hat Israel 
34 Jerusalem Road, 
Ra'anana, Israel 43501 

Tel: +972 9 7692043 
Mobile: +972 52 7342734 
Email: nsednev at redhat.com 
IRC: nsednev 

----- Original Message -----

From: users-request at ovirt.org 
To: users at ovirt.org 
Sent: Monday, December 8, 2014 11:22:27 AM 
Subject: Users Digest, Vol 39, Issue 38 

Send Users mailing list submissions to 
users at ovirt.org 

To subscribe or unsubscribe via the World Wide Web, visit 
http://lists.ovirt.org/mailman/listinfo/users 
or, via email, send a message with subject or body 'help' to 
users-request at ovirt.org 

You can reach the person managing the list at 
users-owner at ovirt.org 

When replying, please edit your Subject line so it is more specific 
than "Re: Contents of Users digest..." 


Today's Topics: 

1. Re: is it possible to run ovirt node on Diskless HW? 
(Doron Fediuck) 
2. Re: Storage Domain Issue (Koen Vanoppen) 


---------------------------------------------------------------------- 

Message: 1 
Date: Mon, 8 Dec 2014 02:01:50 -0500 (EST) 
From: Doron Fediuck <dfediuck at redhat.com> 
To: Arman Khalatyan <arm2arm at gmail.com> 
Cc: Ryan Barry <rbarry at redhat.com>, Fabian Deutsch 
<fdeutsch at redhat.com>, users <users at ovirt.org> 
Subject: Re: [ovirt-users] is it possible to run ovirt node on 
Diskless HW? 
Message-ID: 
<1172482552.12144827.1418022110582.JavaMail.zimbra at redhat.com> 
Content-Type: text/plain; charset=utf-8 

For standard CentOS you may see other issues. 

For example, let's assume you have a single NIC (eth0). 
If you boot your host and then try to add it to the engine, 
the host-deploy procedure will try to create a management bridge 
for the VMs on top of eth0. At this point your host will freeze, since your 
root FS will be disconnected while the bridge is being created. 

I did this ~6 years ago, and it required opening the initrd to handle 
the above issue, as well as adding the NIC driver and creating the bridge 
at that point. So it's not a trivial task, but doable with some hacking. 
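On current dracut the bridge part of that hack is built in: the initramfs can create the bridge itself and bring the network root up on top of it, so host-deploy finds a bridge already in place instead of re-plumbing eth0 under the root FS. A hedged sketch, with an illustrative NFS root path and the oVirt 3.x default bridge name:

```shell
# Hypothetical dracut kernel arguments: build the management bridge inside
# the initramfs and mount the network root over it.  Server IP and export
# path are placeholders.
bridge=ovirtmgmt:eth0 \
ip=ovirtmgmt:dhcp \
root=nfs:192.0.2.20:/exports/host-root
```

Whether vdsm's host-deploy then accepts the pre-existing bridge as-is depends on the oVirt version, so this is a starting point rather than a turnkey recipe.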

Doron 

----- Original Message ----- 
> From: "Arman Khalatyan" <arm2arm at gmail.com> 
> To: "Doron Fediuck" <dfediuck at redhat.com> 
> Cc: "users" <users at ovirt.org>, "Fabian Deutsch" <fdeutsch at redhat.com>, "Ryan Barry" <rbarry at redhat.com>, "Tolik 
> Litovsky" <tlitovsk at redhat.com>, "Douglas Landgraf" <dougsland at redhat.com> 
> Sent: Sunday, December 7, 2014 7:38:19 PM 
> Subject: Re: [ovirt-users] is it possible to run ovirt node on Diskless HW? 
> 
> It is a standard CentOS 6.6. 
> a. 
> 
> *********************************************************** 
> 
> Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für 
> Astrophysik Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany 
> 
> *********************************************************** 
> 
> 
> On Sun, Dec 7, 2014 at 6:04 PM, Doron Fediuck <dfediuck at redhat.com> wrote: 
> 
> > 
> > 
> > ----- Original Message ----- 
> > > From: "Arman Khalatyan" <arm2arm at gmail.com> 
> > > To: "users" <users at ovirt.org> 
> > > Sent: Wednesday, December 3, 2014 6:50:09 PM 
> > > Subject: [ovirt-users] is it possible to run ovirt node on Diskless HW? 
> > > 
> > > Hello, 
> > > 
> > > Doing steps in: 
> > > 
> > https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/diskless-nfs-config.html 
> > > 
> > > I would like to know whether someone has succeeded in running the host on a diskless 
> > > machine. 
> > > I am using a CentOS 6.6 node with oVirt 3.5. 
> > > Thanks, 
> > > Arman. 
> > > 
> > > 
> > > 
> > > 
> > > *********************************************************** 
> > > Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für 
> > Astrophysik 
> > > Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany 
> > > *********************************************************** 
> > > 
> > 
> > Hi Arman, 
> > Are you working with ovirt node or standard CentOS? 
> > 
> > Note that ovirt node is different, as it works like a live CD: 
> > it runs from memory. The local disk is used to save some 
> > configurations (such as networking). 
> > 
> 


------------------------------ 

Message: 2 
Date: Mon, 8 Dec 2014 10:22:18 +0100 
From: Koen Vanoppen <vanoppen.koen at gmail.com> 
To: "users at ovirt.org" <users at ovirt.org> 
Subject: Re: [ovirt-users] Storage Domain Issue 
Message-ID: 
<CACfY+MaPY9opHykNc7hmM4Wc0_HBuu6_fyi7wPMWP4RSCe6xYQ at mail.gmail.com> 
Content-Type: text/plain; charset="utf-8" 

some more errors: 

Thread-19::DEBUG::2014-12-08 
10:20:02,700::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n 
/sbin/lvm vgck --config ' devices { preferred_names = ["^/dev/mapper/"] 
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 
obtain_device_list_from_udev=0 filter = [ 
'\''a|/dev/mapper/36005076802810d489000000000000062|'\'', '\''r|.*|'\'' ] 
} global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 
use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' 
f130d166-546e-4905-8b8f-55a1c1dd2e4f (cwd None) 
Thread-20::DEBUG::2014-12-08 
10:20:02,817::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n 
/sbin/lvm vgck --config ' devices { preferred_names = ["^/dev/mapper/"] 
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 
obtain_device_list_from_udev=0 filter = [ 
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', 
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 
0 } ' eb912657-8a8c-4173-9d24-92d2b09a773c (cwd None) 
Thread-20::DEBUG::2014-12-08 
10:20:03,388::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n 
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] 
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 
obtain_device_list_from_udev=0 filter = [ 
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', 
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 
0 } ' --noheadings --units b --nosuffix --separator '|' 
--ignoreskippedcluster -o 
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 
eb912657-8a8c-4173-9d24-92d2b09a773c (cwd None) 
Thread-17::ERROR::2014-12-08 
10:20:03,469::sdc::137::Storage.StorageDomainCache::(_findDomain) looking 
for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece 
Thread-17::ERROR::2014-12-08 
10:20:03,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) 
looking for domain 78d84adf-7274-4efe-a711-fbec31196ece 
Thread-17::DEBUG::2014-12-08 
10:20:03,482::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n 
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] 
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 
obtain_device_list_from_udev=0 filter = [ 
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', 
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 
0 } ' --noheadings --units b --nosuffix --separator '|' 
--ignoreskippedcluster -o 
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 
78d84adf-7274-4efe-a711-fbec31196ece (cwd None) 
Thread-17::DEBUG::2014-12-08 
10:20:03,572::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n 
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] 
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 
obtain_device_list_from_udev=0 filter = [ 
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', 
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 
0 } ' --noheadings --units b --nosuffix --separator '|' 
--ignoreskippedcluster -o 
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 
78d84adf-7274-4efe-a711-fbec31196ece (cwd None) 
Thread-17::DEBUG::2014-12-08 
10:20:03,631::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n 
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] 
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 
obtain_device_list_from_udev=0 filter = [ 
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', 
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 
0 } ' --noheadings --units b --nosuffix --separator '|' 
--ignoreskippedcluster -o 
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 
f130d166-546e-4905-8b8f-55a1c1dd2e4f eb912657-8a8c-4173-9d24-92d2b09a773c 
(cwd None) 
Thread-14::ERROR::2014-12-08 
10:20:05,785::task::866::Storage.TaskManager.Task::(_setError) 
Task=`ffaf5100-e833-4d29-ac5d-f6f7f8ce2b5d`::Unexpected error 
raise SecureError("Secured object is not in safe state") 
SecureError: Secured object is not in safe state 
Thread-14::ERROR::2014-12-08 
10:20:05,797::dispatcher::79::Storage.Dispatcher::(wrapper) Secured object 
is not in safe state 
raise self.error 
SecureError: Secured object is not in safe state 
Thread-34::ERROR::2014-12-08 
10:21:46,544::task::866::Storage.TaskManager.Task::(_setError) 
Task=`82940da7-10c1-42f6-afca-3c0ac00c1487`::Unexpected error 
raise SecureError("Secured object is not in safe state") 
SecureError: Secured object is not in safe state 
Thread-34::ERROR::2014-12-08 
10:21:46,549::dispatcher::79::Storage.Dispatcher::(wrapper) Secured object 
is not in safe state 
raise self.error 
SecureError: Secured object is not in safe state 

2014-12-08 7:30 GMT+01:00 Koen Vanoppen <vanoppen.koen at gmail.com>: 

> Dear all, 
> 
> We have updated our hypervisors with yum. This included an update of vdsm 
> as well. We are now on these versions: 
> vdsm-4.16.7-1.gitdb83943.el6.x86_64 
> vdsm-python-4.16.7-1.gitdb83943.el6.noarch 
> vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch 
> vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch 
> vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch 
> vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch 
> vdsm-cli-4.16.7-1.gitdb83943.el6.noarch 
> 
> And ever since these updates we have experienced big problems with our fibre 
> channel connections. I've already updated the Brocade cards to the latest version. 
> That seemed to help: the hosts came back up and saw the storage domains 
> (before the Brocade update, they didn't even see their storage domains). 
> But after a day or so, one of the hypervisors began to freak out again, 
> coming up and going back down... Below you can find the errors: 
> 
> 
> Thread-821::ERROR::2014-12-08 
> 07:10:33,190::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`27cb9779-a8e9-4080-988d-9772c922710b`::Unexpected error 
> raise se.SpmStatusError() 
> SpmStatusError: Not SPM: () 
> Thread-821::ERROR::2014-12-08 
> 07:10:33,194::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
> {'message': 'Not SPM: ()', 'code': 654}} 
> Thread-822::ERROR::2014-12-08 
> 07:11:03,878::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`30177931-68c0-420f-950f-da5b770fe35c`::Unexpected error 
> Thread-822::ERROR::2014-12-08 
> 07:11:03,882::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
> {'message': "Unknown pool id, pool not connected: 
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}} 
> Thread-813::ERROR::2014-12-08 
> 07:11:07,634::sdc::137::Storage.StorageDomainCache::(_findDomain) looking 
> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece 
> Thread-813::ERROR::2014-12-08 
> 07:11:07,634::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) 
> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece 
> Thread-813::DEBUG::2014-12-08 
> 07:11:07,638::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n 
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] 
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 
> obtain_device_list_from_udev=0 filter = [ 
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', 
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 
> 0 } ' --noheadings --units b --nosuffix --separator '|' 
> --ignoreskippedcluster -o 
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) 
> Thread-813::DEBUG::2014-12-08 
> 07:11:07,835::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n 
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] 
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 
> obtain_device_list_from_udev=0 filter = [ 
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', 
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 
> 0 } ' --noheadings --units b --nosuffix --separator '|' 
> --ignoreskippedcluster -o 
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) 
> Thread-813::ERROR::2014-12-08 
> 07:11:07,896::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion) 
> Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have 
> expected version 42 it is version 17 
> Thread-813::ERROR::2014-12-08 
> 07:11:07,903::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`c434f325-5193-4236-a04d-2fee9ac095bc`::Unexpected error 
> Thread-813::ERROR::2014-12-08 
> 07:11:07,946::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
> {'message': "Wrong Master domain or its version: 
> 'SD=78d84adf-7274-4efe-a711-fbec31196ece, 
> pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}} 
> Thread-823::ERROR::2014-12-08 
> 07:11:43,993::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`9abbccd9-88a7-4632-b350-f9af1f65bebd`::Unexpected error 
> Thread-823::ERROR::2014-12-08 
> 07:11:43,998::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
> {'message': "Unknown pool id, pool not connected: 
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}} 
> Thread-823::ERROR::2014-12-08 
> 07:11:44,003::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`7ef1ac39-e7c2-4538-b30b-ab2fcefac01d`::Unexpected error 
> raise se.SpmStatusError() 
> SpmStatusError: Not SPM: () 
> Thread-823::ERROR::2014-12-08 
> 07:11:44,007::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
> {'message': 'Not SPM: ()', 'code': 654}} 
> Thread-823::ERROR::2014-12-08 
> 07:11:44,133::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`cc1ae82c-f3c4-4efa-9cd2-c62a27801e76`::Unexpected error 
> Thread-823::ERROR::2014-12-08 
> 07:11:44,137::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
> {'message': "Unknown pool id, pool not connected: 
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}} 
> Thread-823::ERROR::2014-12-08 
> 07:12:24,580::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`9bcbb87d-3093-4894-879b-3fe2b09ef351`::Unexpected error 
> Thread-823::ERROR::2014-12-08 
> 07:12:24,585::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
> {'message': "Unknown pool id, pool not connected: 
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}} 
> Thread-823::ERROR::2014-12-08 
> 07:13:04,926::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`8bdd0c1f-e681-4a8e-ad55-296c021389ed`::Unexpected error 
> raise se.SpmStatusError() 
> SpmStatusError: Not SPM: () 
> Thread-823::ERROR::2014-12-08 
> 07:13:04,931::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
> {'message': 'Not SPM: ()', 'code': 654}} 
> Thread-823::ERROR::2014-12-08 
> 07:13:45,342::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`160ea2a7-b6cb-4102-9df4-71ba87fd863e`::Unexpected error 
> raise se.SpmStatusError() 
> SpmStatusError: Not SPM: () 
> Thread-823::ERROR::2014-12-08 
> 07:13:45,346::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
> {'message': 'Not SPM: ()', 'code': 654}} 
> Thread-823::ERROR::2014-12-08 
> 07:14:25,879::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`985628db-8f48-44b5-8f61-631a922f7f71`::Unexpected error 
> raise se.SpmStatusError() 
> SpmStatusError: Not SPM: () 
> Thread-823::ERROR::2014-12-08 
> 07:14:25,883::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
> {'message': 'Not SPM: ()', 'code': 654}} 
> Thread-823::ERROR::2014-12-08 
> 07:15:06,175::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`ddca1c88-0565-41e8-bf0c-22eadcc75918`::Unexpected error 
> raise se.SpmStatusError() 
> SpmStatusError: Not SPM: () 
> Thread-823::ERROR::2014-12-08 
> 07:15:06,179::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
> {'message': 'Not SPM: ()', 'code': 654}} 
> Thread-823::ERROR::2014-12-08 
> 07:15:46,585::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`12bbded5-59ce-46d8-9e67-f48862a03606`::Unexpected error 
> raise se.SpmStatusError() 
> SpmStatusError: Not SPM: () 
> Thread-823::ERROR::2014-12-08 
> 07:15:46,589::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
> {'message': 'Not SPM: ()', 'code': 654}} 
> Thread-814::ERROR::2014-12-08 
> 07:16:08,619::sdc::137::Storage.StorageDomainCache::(_findDomain) looking 
> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece 
> Thread-814::ERROR::2014-12-08 
> 07:16:08,619::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) 
> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece 
> Thread-814::DEBUG::2014-12-08 
> 07:16:08,624::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n 
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] 
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 
> obtain_device_list_from_udev=0 filter = [ 
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', 
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 
> 0 } ' --noheadings --units b --nosuffix --separator '|' 
> --ignoreskippedcluster -o 
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) 
> Thread-814::DEBUG::2014-12-08 
> 07:16:08,740::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n 
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] 
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 
> obtain_device_list_from_udev=0 filter = [ 
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', 
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 
> 0 } ' --noheadings --units b --nosuffix --separator '|' 
> --ignoreskippedcluster -o 
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) 
> Thread-814::ERROR::2014-12-08 
> 07:16:08,812::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion) 
> Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have 
> expected version 42 it is version 17 
> Thread-814::ERROR::2014-12-08 
> 07:16:08,820::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`5cdce5cd-6e6d-421e-bc2a-f999d8cbb056`::Unexpected error 
> Thread-814::ERROR::2014-12-08 
> 07:16:08,865::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
> {'message': "Wrong Master domain or its version: 
> 'SD=78d84adf-7274-4efe-a711-fbec31196ece, 
> pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}} 
> Thread-815::ERROR::2014-12-08 
> 07:16:09,471::sdc::137::Storage.StorageDomainCache::(_findDomain) looking 
> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece 
> Thread-815::ERROR::2014-12-08 
> 07:16:09,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) 
> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece 
> Thread-815::DEBUG::2014-12-08 
> 07:16:09,476::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n 
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] 
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 
> obtain_device_list_from_udev=0 filter = [ 
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', 
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 
> 0 } ' --noheadings --units b --nosuffix --separator '|' 
> --ignoreskippedcluster -o 
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) 
> Thread-815::DEBUG::2014-12-08 
> 07:16:09,564::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n 
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] 
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 
> obtain_device_list_from_udev=0 filter = [ 
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', 
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 
> 0 } ' --noheadings --units b --nosuffix --separator '|' 
> --ignoreskippedcluster -o 
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) 
> Thread-815::ERROR::2014-12-08 
> 07:16:09,627::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion) 
> Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have 
> expected version 42 it is version 17 
> Thread-815::ERROR::2014-12-08 
> 07:16:09,635::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`abfa0fd0-04b3-4c65-b3d0-be18b085a65d`::Unexpected error 
> Thread-815::ERROR::2014-12-08 
> 07:16:09,681::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
> {'message': "Wrong Master domain or its version: 
> 'SD=78d84adf-7274-4efe-a711-fbec31196ece, 
> pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}} 
> Thread-816::ERROR::2014-12-08 
> 07:16:10,182::sdc::137::Storage.StorageDomainCache::(_findDomain) looking 
> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece 
> Thread-816::ERROR::2014-12-08 
> 07:16:10,183::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) 
> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece 
> Thread-816::DEBUG::2014-12-08 
> 07:16:10,187::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n 
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] 
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 
> obtain_device_list_from_udev=0 filter = [ 
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', 
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 
> 0 } ' --noheadings --units b --nosuffix --separator '|' 
> --ignoreskippedcluster -o 
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) 
> Thread-823::ERROR::2014-12-08 
> 07:16:27,163::task::866::Storage.TaskManager.Task::(_setError) 
> Task=`9b0fd676-7941-40a7-a71e-0f1dee48a107`::Unexpected error 
> raise se.SpmStatusError() 
> SpmStatusError: Not SPM: () 
> Thread-823::ERROR::2014-12-08 
> 07:16:27,168::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
> {'message': 'Not SPM: ()', 'code': 654}} 
> 
> 
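The recurring "Wrong Master domain or its version" error above comes from vdsm comparing the pool's expected master version (42) against what the domain itself reports (17). For a block storage domain, vdsm keeps that metadata as LVM tags on the domain's VG, so one way to see what the domain claims is to read its `MDT_*` tags. A sketch — the real command is in the comment, and the sample tag string below is hypothetical, invented for illustration:

```shell
# On a real host you would read the tags straight off the VG, e.g.:
#   vgs --noheadings -o vg_tags 78d84adf-7274-4efe-a711-fbec31196ece
# Here we parse a hypothetical sample of that output instead.
tags="MDT_CLASS=Data,MDT_MASTER_VERSION=17,MDT_POOL_UUID=1d03dc05-008b-4d14-97ce-b17bd714183d"
printf '%s\n' "$tags" | tr ',' '\n' | grep '^MDT_MASTER_VERSION'
# prints: MDT_MASTER_VERSION=17
```

If the domain really does carry an older master version than the pool metadata expects, that points at stale or rolled-back domain metadata rather than a connectivity problem, which would explain why the host bounces even after the HBA firmware update.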
-------------- next part -------------- 
An HTML attachment was scrubbed... 
URL: <http://lists.ovirt.org/pipermail/users/attachments/20141208/2f754047/attachment.html> 

------------------------------ 

_______________________________________________ 
Users mailing list 
Users at ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 


End of Users Digest, Vol 39, Issue 38 
************************************* 


