<html><body><div style="font-family: georgia,serif; font-size: 12pt; color: #000000"><div>Hi all,</div><div>I was thinking of "booting from iSCSI SAN", which means using a LUN on your storage array in order to boot your host over the network.<br></div><div>In this case you might configure your host's HW to boot from iSCSI, and then you won't need any HD in your host.</div><div>+Adding more people to add their comments.</div><div><br></div><div><span name="x"></span><br>Thanks in advance.<br><div><br></div>Best regards,<br>Nikolai<br>____________________<br>Nikolai Sednev<br>Senior Quality Engineer at Compute team<br>Red Hat Israel<br>34 Jerusalem Road,<br>Ra'anana, Israel 43501<br><div><br></div>Tel: +972 9 7692043<br>Mobile: +972 52 7342734<br>Email: nsednev@redhat.com<br>IRC: nsednev<span name="x"></span><br></div><div><br></div><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>users-request@ovirt.org<br><b>To: </b>users@ovirt.org<br><b>Sent: </b>Monday, December 8, 2014 11:22:27 AM<br><b>Subject: </b>Users Digest, Vol 39, Issue 38<br><div><br></div>Send Users mailing list submissions to<br> users@ovirt.org<br><div><br></div>To subscribe or unsubscribe via the World Wide Web, visit<br> http://lists.ovirt.org/mailman/listinfo/users<br>or, via email, send a message with subject or body 'help' to<br> users-request@ovirt.org<br><div><br></div>You can reach the person managing the list at<br> users-owner@ovirt.org<br><div><br></div>When replying, please edit your Subject line so it is more specific<br>than "Re: Contents of Users digest..."<br><div><br></div><br>Today's Topics:<br><div><br></div> 1. Re: is it possible to run ovirt node on Diskless HW?<br> (Doron Fediuck)<br> 2. 
Re: Storage Domain Issue (Koen Vanoppen)<br><div><br></div><br>----------------------------------------------------------------------<br><div><br></div>Message: 1<br>Date: Mon, 8 Dec 2014 02:01:50 -0500 (EST)<br>From: Doron Fediuck <dfediuck@redhat.com><br>To: Arman Khalatyan <arm2arm@gmail.com><br>Cc: Ryan Barry <rbarry@redhat.com>, Fabian Deutsch<br> <fdeutsch@redhat.com>, users <users@ovirt.org><br>Subject: Re: [ovirt-users] is it possible to run ovirt node on<br> Diskless HW?<br>Message-ID:<br> <1172482552.12144827.1418022110582.JavaMail.zimbra@redhat.com><br>Content-Type: text/plain; charset=utf-8<br><div><br></div>For standard CentOS you may see other issues.<br><div><br></div>For example, let's assume you have a single NIC (eth0).<br>If you boot your host and then try to add it to the engine,<br>the host deploy procedure will try to create a management bridge <br>for the VMs using eth0. At this point your host will freeze, since your<br>root FS will be disconnected while the bridge is being created.<br><div><br></div>I did this ~6 years ago, and it required opening the initrd to handle<br>the above issue, as well as adding the NIC driver and creating the bridge<br>at that point. So it's not a trivial task, but it's doable with some hacking.<br><div><br></div>Doron<br><div><br></div>----- Original Message -----<br>> From: "Arman Khalatyan" <arm2arm@gmail.com><br>> To: "Doron Fediuck" <dfediuck@redhat.com><br>> Cc: "users" <users@ovirt.org>, "Fabian Deutsch" <fdeutsch@redhat.com>, "Ryan Barry" <rbarry@redhat.com>, "Tolik<br>> Litovsky" <tlitovsk@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com><br>> Sent: Sunday, December 7, 2014 7:38:19 PM<br>> Subject: Re: [ovirt-users] is it possible to run ovirt node on Diskless HW?<br>> <br>> It is centos 6.6 standard one.<br>> a.<br>> <br>> ***********************************************************<br>> <br>> Dr. 
Arman Khalatyan eScience -SuperComputing Leibniz-Institut für<br>> Astrophysik Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany<br>> <br>> ***********************************************************<br>> <br>> <br>> On Sun, Dec 7, 2014 at 6:04 PM, Doron Fediuck <dfediuck@redhat.com> wrote:<br>> <br>> ><br>> ><br>> > ----- Original Message -----<br>> > > From: "Arman Khalatyan" <arm2arm@gmail.com><br>> > > To: "users" <users@ovirt.org><br>> > > Sent: Wednesday, December 3, 2014 6:50:09 PM<br>> > > Subject: [ovirt-users] is it possible to run ovirt node on Diskless HW?<br>> > ><br>> > > Hello,<br>> > ><br>> > > Doing the steps in:<br>> > ><br>> > https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/diskless-nfs-config.html<br>> > ><br>> > > I would like to know if someone has succeeded in running the host on a diskless<br>> > > machine?<br>> > > I am using a CentOS 6.6 node with ovirt 3.5.<br>> > > Thanks,<br>> > > Arman.<br>> > ><br>> > ><br>> > ><br>> > ><br>> > > ***********************************************************<br>> > > Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für<br>> > Astrophysik<br>> > > Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany<br>> > > ***********************************************************<br>> > ><br>> ><br>> > Hi Arman,<br>> > Are you working with ovirt node or standard CentOS?<br>> ><br>> > Note that ovirt node is different, as it works like a live CD:<br>> > it runs from memory. 
In order to save some configurations (such<br>> > as networking) the local disk is used.<br>> ><br>> <br><div><br></div><br>------------------------------<br><div><br></div>Message: 2<br>Date: Mon, 8 Dec 2014 10:22:18 +0100<br>From: Koen Vanoppen <vanoppen.koen@gmail.com><br>To: "users@ovirt.org" <users@ovirt.org><br>Subject: Re: [ovirt-users] Storage Domain Issue<br>Message-ID:<br> <CACfY+MaPY9opHykNc7hmM4Wc0_HBuu6_fyi7wPMWP4RSCe6xYQ@mail.gmail.com><br>Content-Type: text/plain; charset="utf-8"<br><div><br></div>some more errors:<br><div><br></div>Thread-19::DEBUG::2014-12-08<br>10:20:02,700::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>/sbin/lvm vgck --config ' devices { preferred_names = ["^/dev/mapper/"]<br>ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>obtain_device_list_from_udev=0 filter = [<br>'\''a|/dev/mapper/36005076802810d489000000000000062|'\'', '\''r|.*|'\'' ]<br>} global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1<br>use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } '<br>f130d166-546e-4905-8b8f-55a1c1dd2e4f (cwd None)<br>Thread-20::DEBUG::2014-12-08<br>10:20:02,817::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>/sbin/lvm vgck --config ' devices { preferred_names = ["^/dev/mapper/"]<br>ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>obtain_device_list_from_udev=0 filter = [<br>'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1<br>wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =<br>0 } ' eb912657-8a8c-4173-9d24-92d2b09a773c (cwd None)<br>Thread-20::DEBUG::2014-12-08<br>10:20:03,388::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>ignore_suspended_devices=1 write_cache_state=0 
disable_after_error_count=3<br>obtain_device_list_from_udev=0 filter = [<br>'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1<br>wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =<br>0 } ' --noheadings --units b --nosuffix --separator '|'<br>--ignoreskippedcluster -o<br>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>eb912657-8a8c-4173-9d24-92d2b09a773c (cwd None)<br>Thread-17::ERROR::2014-12-08<br>10:20:03,469::sdc::137::Storage.StorageDomainCache::(_findDomain) looking<br>for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece<br>Thread-17::ERROR::2014-12-08<br>10:20:03,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)<br>looking for domain 78d84adf-7274-4efe-a711-fbec31196ece<br>Thread-17::DEBUG::2014-12-08<br>10:20:03,482::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>obtain_device_list_from_udev=0 filter = [<br>'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1<br>wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =<br>0 } ' --noheadings --units b --nosuffix --separator '|'<br>--ignoreskippedcluster -o<br>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>Thread-17::DEBUG::2014-12-08<br>10:20:03,572::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>/sbin/lvm vgs --config ' devices { preferred_names = 
["^/dev/mapper/"]<br>ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>obtain_device_list_from_udev=0 filter = [<br>'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1<br>wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =<br>0 } ' --noheadings --units b --nosuffix --separator '|'<br>--ignoreskippedcluster -o<br>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>Thread-17::DEBUG::2014-12-08<br>10:20:03,631::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>obtain_device_list_from_udev=0 filter = [<br>'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1<br>wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =<br>0 } ' --noheadings --units b --nosuffix --separator '|'<br>--ignoreskippedcluster -o<br>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>f130d166-546e-4905-8b8f-55a1c1dd2e4f eb912657-8a8c-4173-9d24-92d2b09a773c<br>(cwd None)<br>Thread-14::ERROR::2014-12-08<br>10:20:05,785::task::866::Storage.TaskManager.Task::(_setError)<br>Task=`ffaf5100-e833-4d29-ac5d-f6f7f8ce2b5d`::Unexpected error<br> raise SecureError("Secured object is not in safe state")<br>SecureError: Secured object is not in safe state<br>Thread-14::ERROR::2014-12-08<br>10:20:05,797::dispatcher::79::Storage.Dispatcher::(wrapper) Secured object<br>is not in safe 
state<br> raise self.error<br>SecureError: Secured object is not in safe state<br>Thread-34::ERROR::2014-12-08<br>10:21:46,544::task::866::Storage.TaskManager.Task::(_setError)<br>Task=`82940da7-10c1-42f6-afca-3c0ac00c1487`::Unexpected error<br> raise SecureError("Secured object is not in safe state")<br>SecureError: Secured object is not in safe state<br>Thread-34::ERROR::2014-12-08<br>10:21:46,549::dispatcher::79::Storage.Dispatcher::(wrapper) Secured object<br>is not in safe state<br> raise self.error<br>SecureError: Secured object is not in safe stat<br><div><br></div>2014-12-08 7:30 GMT+01:00 Koen Vanoppen <vanoppen.koen@gmail.com>:<br><div><br></div>> Dear all,<br>><br>> We have updated our hypervisors with yum. This included an update of vdsm<br>> as well. We are now on these versions:<br>> vdsm-4.16.7-1.gitdb83943.el6.x86_64<br>> vdsm-python-4.16.7-1.gitdb83943.el6.noarch<br>> vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch<br>> vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch<br>> vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch<br>> vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch<br>> vdsm-cli-4.16.7-1.gitdb83943.el6.noarch<br>><br>> And ever since these updates we have experienced BIG trouble with our fibre<br>> connections. I've already updated the Brocade cards to the latest version.<br>> This seemed to help; the hosts came back up and saw the storage domains<br>> (before the Brocade update, they didn't even see their storage domains).<br>> But after a day or so, one of the hypervisors began to freak out again.<br>> Coming up and going back down... 
Below you can find the errors:<br>><br>><br>> Thread-821::ERROR::2014-12-08<br>> 07:10:33,190::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=`27cb9779-a8e9-4080-988d-9772c922710b`::Unexpected error<br>> raise se.SpmStatusError()<br>> SpmStatusError: Not SPM: ()<br>> Thread-821::ERROR::2014-12-08<br>> 07:10:33,194::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'message': 'Not SPM: ()', 'code': 654}}<br>> Thread-822::ERROR::2014-12-08<br>> 07:11:03,878::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=`30177931-68c0-420f-950f-da5b770fe35c`::Unexpected error<br>> Thread-822::ERROR::2014-12-08<br>> 07:11:03,882::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'message': "Unknown pool id, pool not connected:<br>> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}<br>> Thread-813::ERROR::2014-12-08<br>> 07:11:07,634::sdc::137::Storage.StorageDomainCache::(_findDomain) looking<br>> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece<br>> Thread-813::ERROR::2014-12-08<br>> 07:11:07,634::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)<br>> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece<br>> Thread-813::DEBUG::2014-12-08<br>> 07:11:07,638::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>> obtain_device_list_from_udev=0 filter = [<br>> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1<br>> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =<br>> 0 } ' --noheadings --units b --nosuffix --separator '|'<br>> --ignoreskippedcluster -o<br>> 
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>> Thread-813::DEBUG::2014-12-08<br>> 07:11:07,835::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>> obtain_device_list_from_udev=0 filter = [<br>> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1<br>> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =<br>> 0 } ' --noheadings --units b --nosuffix --separator '|'<br>> --ignoreskippedcluster -o<br>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>> Thread-813::ERROR::2014-12-08<br>> 07:11:07,896::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)<br>> Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have<br>> expected version 42 it is version 17<br>> Thread-813::ERROR::2014-12-08<br>> 07:11:07,903::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=`c434f325-5193-4236-a04d-2fee9ac095bc`::Unexpected error<br>> Thread-813::ERROR::2014-12-08<br>> 07:11:07,946::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'message': "Wrong Master domain or its version:<br>> 'SD=78d84adf-7274-4efe-a711-fbec31196ece,<br>> pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}<br>> Thread-823::ERROR::2014-12-08<br>> 07:11:43,993::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=`9abbccd9-88a7-4632-b350-f9af1f65bebd`::Unexpected error<br>> Thread-823::ERROR::2014-12-08<br>> 
07:11:43,998::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'message': "Unknown pool id, pool not connected:<br>> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}<br>> Thread-823::ERROR::2014-12-08<br>> 07:11:44,003::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=`7ef1ac39-e7c2-4538-b30b-ab2fcefac01d`::Unexpected error<br>> raise se.SpmStatusError()<br>> SpmStatusError: Not SPM: ()<br>> Thread-823::ERROR::2014-12-08<br>> 07:11:44,007::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'message': 'Not SPM: ()', 'code': 654}}<br>> Thread-823::ERROR::2014-12-08<br>> 07:11:44,133::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=`cc1ae82c-f3c4-4efa-9cd2-c62a27801e76`::Unexpected error<br>> Thread-823::ERROR::2014-12-08<br>> 07:11:44,137::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'message': "Unknown pool id, pool not connected:<br>> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}<br>> Thread-823::ERROR::2014-12-08<br>> 07:12:24,580::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=`9bcbb87d-3093-4894-879b-3fe2b09ef351`::Unexpected error<br>> Thread-823::ERROR::2014-12-08<br>> 07:12:24,585::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'message': "Unknown pool id, pool not connected:<br>> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}<br>> Thread-823::ERROR::2014-12-08<br>> 07:13:04,926::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=`8bdd0c1f-e681-4a8e-ad55-296c021389ed`::Unexpected error<br>> raise se.SpmStatusError()<br>> SpmStatusError: Not SPM: ()<br>> Thread-823::ERROR::2014-12-08<br>> 07:13:04,931::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'message': 'Not SPM: ()', 'code': 654}}<br>> Thread-823::ERROR::2014-12-08<br>> 07:13:45,342::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=`160ea2a7-b6cb-4102-9df4-71ba87fd863e`::Unexpected error<br>> raise se.SpmStatusError()<br>> SpmStatusError: Not SPM: 
()<br>> Thread-823::ERROR::2014-12-08<br>> 07:13:45,346::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'message': 'Not SPM: ()', 'code': 654}}<br>> Thread-823::ERROR::2014-12-08<br>> 07:14:25,879::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=`985628db-8f48-44b5-8f61-631a922f7f71`::Unexpected error<br>> raise se.SpmStatusError()<br>> SpmStatusError: Not SPM: ()<br>> Thread-823::ERROR::2014-12-08<br>> 07:14:25,883::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'message': 'Not SPM: ()', 'code': 654}}<br>> Thread-823::ERROR::2014-12-08<br>> 07:15:06,175::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=`ddca1c88-0565-41e8-bf0c-22eadcc75918`::Unexpected error<br>> raise se.SpmStatusError()<br>> SpmStatusError: Not SPM: ()<br>> Thread-823::ERROR::2014-12-08<br>> 07:15:06,179::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'message': 'Not SPM: ()', 'code': 654}}<br>> Thread-823::ERROR::2014-12-08<br>> 07:15:46,585::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=`12bbded5-59ce-46d8-9e67-f48862a03606`::Unexpected error<br>> raise se.SpmStatusError()<br>> SpmStatusError: Not SPM: ()<br>> Thread-823::ERROR::2014-12-08<br>> 07:15:46,589::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'message': 'Not SPM: ()', 'code': 654}}<br>> Thread-814::ERROR::2014-12-08<br>> 07:16:08,619::sdc::137::Storage.StorageDomainCache::(_findDomain) looking<br>> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece<br>> Thread-814::ERROR::2014-12-08<br>> 07:16:08,619::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)<br>> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece<br>> Thread-814::DEBUG::2014-12-08<br>> 07:16:08,624::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>> obtain_device_list_from_udev=0 filter = 
[<br>> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1<br>> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =<br>> 0 } ' --noheadings --units b --nosuffix --separator '|'<br>> --ignoreskippedcluster -o<br>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>> Thread-814::DEBUG::2014-12-08<br>> 07:16:08,740::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>> obtain_device_list_from_udev=0 filter = [<br>> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1<br>> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =<br>> 0 } ' --noheadings --units b --nosuffix --separator '|'<br>> --ignoreskippedcluster -o<br>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>> Thread-814::ERROR::2014-12-08<br>> 07:16:08,812::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)<br>> Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have<br>> expected version 42 it is version 17<br>> Thread-814::ERROR::2014-12-08<br>> 07:16:08,820::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=`5cdce5cd-6e6d-421e-bc2a-f999d8cbb056`::Unexpected error<br>> Thread-814::ERROR::2014-12-08<br>> 07:16:08,865::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> 
{'message': "Wrong Master domain or its version:<br>> 'SD=78d84adf-7274-4efe-a711-fbec31196ece,<br>> pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}<br>> Thread-815::ERROR::2014-12-08<br>> 07:16:09,471::sdc::137::Storage.StorageDomainCache::(_findDomain) looking<br>> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece<br>> Thread-815::ERROR::2014-12-08<br>> 07:16:09,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)<br>> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece<br>> Thread-815::DEBUG::2014-12-08<br>> 07:16:09,476::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>> obtain_device_list_from_udev=0 filter = [<br>> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1<br>> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =<br>> 0 } ' --noheadings --units b --nosuffix --separator '|'<br>> --ignoreskippedcluster -o<br>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>> Thread-815::DEBUG::2014-12-08<br>> 07:16:09,564::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>> obtain_device_list_from_udev=0 filter = [<br>> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1<br>> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 
retain_days =<br>> 0 } ' --noheadings --units b --nosuffix --separator '|'<br>> --ignoreskippedcluster -o<br>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>> Thread-815::ERROR::2014-12-08<br>> 07:16:09,627::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)<br>> Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have<br>> expected version 42 it is version 17<br>> Thread-815::ERROR::2014-12-08<br>> 07:16:09,635::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=`abfa0fd0-04b3-4c65-b3d0-be18b085a65d`::Unexpected error<br>> Thread-815::ERROR::2014-12-08<br>> 07:16:09,681::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'message': "Wrong Master domain or its version:<br>> 'SD=78d84adf-7274-4efe-a711-fbec31196ece,<br>> pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}<br>> Thread-816::ERROR::2014-12-08<br>> 07:16:10,182::sdc::137::Storage.StorageDomainCache::(_findDomain) looking<br>> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece<br>> Thread-816::ERROR::2014-12-08<br>> 07:16:10,183::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)<br>> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece<br>> Thread-816::DEBUG::2014-12-08<br>> 07:16:10,187::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>> obtain_device_list_from_udev=0 filter = [<br>> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1<br>> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =<br>> 0 } ' --noheadings --units b --nosuffix 
--separator '|'<br>> --ignoreskippedcluster -o<br>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>> Thread-823::ERROR::2014-12-08<br>> 07:16:27,163::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=`9b0fd676-7941-40a7-a71e-0f1dee48a107`::Unexpected error<br>> raise se.SpmStatusError()<br>> SpmStatusError: Not SPM: ()<br>> Thread-823::ERROR::2014-12-08<br>> 07:16:27,168::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'message': 'Not SPM: ()', 'code': 654}}<br>><br>><br>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: <http://lists.ovirt.org/pipermail/users/attachments/20141208/2f754047/attachment.html><br><div><br></div>------------------------------<br><div><br></div>_______________________________________________<br>Users mailing list<br>Users@ovirt.org<br>http://lists.ovirt.org/mailman/listinfo/users<br><div><br></div><br>End of Users Digest, Vol 39, Issue 38<br>*************************************<br></div><div><br></div></div></body></html>