<html><body><div style="font-family: georgia,serif; font-size: 12pt; color: #000000"><div>Hi all,</div><div>I was thinking of "booting from iSCSI SAN", which means you'll be using a LUN placed on storage in order to boot your host over the network.<br></div><div>In this case you might configure your host's HW to boot from iSCSI, and then you won't need any HD in your host.</div><div>+adding more people to add their comments.</div><div><br></div><div><span name="x"></span><br>Thanks in advance.<br><div><br></div>Best regards,<br>Nikolai<br>____________________<br>Nikolai Sednev<br>Senior Quality Engineer at Compute team<br>Red Hat Israel<br>34 Jerusalem Road,<br>Ra'anana, Israel 43501<br><div><br></div>Tel: &nbsp; &nbsp; &nbsp; +972 &nbsp; 9 7692043<br>Mobile: +972 52 7342734<br>Email: nsednev@redhat.com<br>IRC: nsednev<span name="x"></span><br></div><div><br></div><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>users-request@ovirt.org<br><b>To: </b>users@ovirt.org<br><b>Sent: </b>Monday, December 8, 2014 11:22:27 AM<br><b>Subject: </b>Users Digest, Vol 39, Issue 38<br><div><br></div>Send Users mailing list submissions to<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;users@ovirt.org<br><div><br></div>To subscribe or unsubscribe via the World Wide Web, visit<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;http://lists.ovirt.org/mailman/listinfo/users<br>or, via email, send a message with subject or body 'help' to<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;users-request@ovirt.org<br><div><br></div>You can reach the person managing the list at<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;users-owner@ovirt.org<br><div><br></div>When replying, please edit your Subject line so it is more specific<br>than "Re: Contents of Users digest..."<br><div><br></div><br>Today's Topics:<br><div><br></div>&nbsp;&nbsp; 1. 
Re: &nbsp;is it possible to run ovirt node on Diskless HW?<br>&nbsp;&nbsp; &nbsp; &nbsp;(Doron Fediuck)<br>&nbsp;&nbsp; 2. Re: &nbsp;Storage Domain Issue (Koen Vanoppen)<br><div><br></div><br>----------------------------------------------------------------------<br><div><br></div>Message: 1<br>Date: Mon, 8 Dec 2014 02:01:50 -0500 (EST)<br>From: Doron Fediuck &lt;dfediuck@redhat.com&gt;<br>To: Arman Khalatyan &lt;arm2arm@gmail.com&gt;<br>Cc: Ryan Barry &lt;rbarry@redhat.com&gt;, Fabian Deutsch<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;fdeutsch@redhat.com&gt;,&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;users &lt;users@ovirt.org&gt;<br>Subject: Re: [ovirt-users] is it possible to run ovirt node on<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Diskless HW?<br>Message-ID:<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;1172482552.12144827.1418022110582.JavaMail.zimbra@redhat.com&gt;<br>Content-Type: text/plain; charset=utf-8<br><div><br></div>For standard CentOS you may see other issues.<br><div><br></div>For example, let's assume you have a single NIC (eth0).<br>If you boot your host and then try to add it to the engine,<br>the host deploy procedure will try to create a management bridge <br>for the VMs using eth0. At this point your host will freeze since your<br>root FS will be disconnected while creating the bridge.<br><div><br></div>I did this ~6 years ago, and it required opening the initrd to handle<br>the above issue, as well as adding the NIC driver and creating the bridge<br>at this point. 
So it's not a trivial task, but doable with some hacking.<br><div><br></div>Doron<br><div><br></div>----- Original Message -----<br>&gt; From: "Arman Khalatyan" &lt;arm2arm@gmail.com&gt;<br>&gt; To: "Doron Fediuck" &lt;dfediuck@redhat.com&gt;<br>&gt; Cc: "users" &lt;users@ovirt.org&gt;, "Fabian Deutsch" &lt;fdeutsch@redhat.com&gt;, "Ryan Barry" &lt;rbarry@redhat.com&gt;, "Tolik<br>&gt; Litovsky" &lt;tlitovsk@redhat.com&gt;, "Douglas Landgraf" &lt;dougsland@redhat.com&gt;<br>&gt; Sent: Sunday, December 7, 2014 7:38:19 PM<br>&gt; Subject: Re: [ovirt-users] is it possible to run ovirt node on Diskless HW?<br>&gt; <br>&gt; It is the standard CentOS 6.6 one.<br>&gt; a.<br>&gt; <br>&gt; ***********************************************************<br>&gt; <br>&gt; Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für<br>&gt; Astrophysik Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany<br>&gt; <br>&gt; ***********************************************************<br>&gt; <br>&gt; <br>&gt; On Sun, Dec 7, 2014 at 6:04 PM, Doron Fediuck &lt;dfediuck@redhat.com&gt; wrote:<br>&gt; <br>&gt; &gt;<br>&gt; &gt;<br>&gt; &gt; ----- Original Message -----<br>&gt; &gt; &gt; From: "Arman Khalatyan" &lt;arm2arm@gmail.com&gt;<br>&gt; &gt; &gt; To: "users" &lt;users@ovirt.org&gt;<br>&gt; &gt; &gt; Sent: Wednesday, December 3, 2014 6:50:09 PM<br>&gt; &gt; &gt; Subject: [ovirt-users] is it possible to run ovirt node on Diskless HW?<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; Hello,<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; Following the steps in:<br>&gt; &gt; &gt;<br>&gt; &gt; https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/diskless-nfs-config.html<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; I would like to know if someone has succeeded in running the host on a<br>&gt; &gt; &gt; diskless machine?<br>&gt; &gt; &gt; I am using a CentOS 6.6 node with oVirt 3.5.<br>&gt; &gt; &gt; Thanks,<br>&gt; &gt; &gt; Arman.<br>&gt; &gt; &gt;<br>&gt; &gt; &gt;<br>&gt; &gt; 
&gt;<br>&gt; &gt; &gt;<br>&gt; &gt; &gt; ***********************************************************<br>&gt; &gt; &gt; Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für<br>&gt; &gt; Astrophysik<br>&gt; &gt; &gt; Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany<br>&gt; &gt; &gt; ***********************************************************<br>&gt; &gt; &gt;<br>&gt; &gt;<br>&gt; &gt; Hi Arman,<br>&gt; &gt; Are you working with ovirt node or standard CentOS?<br>&gt; &gt;<br>&gt; &gt; Note that ovirt node is different as it works like a live CD:<br>&gt; &gt; it runs from memory. In order to save some configurations (such<br>&gt; &gt; as networking) the local disk is used.<br>&gt; &gt;<br>&gt; <br><div><br></div><br>------------------------------<br><div><br></div>Message: 2<br>Date: Mon, 8 Dec 2014 10:22:18 +0100<br>From: Koen Vanoppen &lt;vanoppen.koen@gmail.com&gt;<br>To: "users@ovirt.org" &lt;users@ovirt.org&gt;<br>Subject: Re: [ovirt-users] Storage Domain Issue<br>Message-ID:<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&lt;CACfY+MaPY9opHykNc7hmM4Wc0_HBuu6_fyi7wPMWP4RSCe6xYQ@mail.gmail.com&gt;<br>Content-Type: text/plain; charset="utf-8"<br><div><br></div>some more errors:<br><div><br></div>Thread-19::DEBUG::2014-12-08<br>10:20:02,700::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>/sbin/lvm vgck --config ' devices { preferred_names = ["^/dev/mapper/"]<br>ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>obtain_device_list_from_udev=0 filter = [<br>'\''a|/dev/mapper/36005076802810d489000000000000062|'\'', '\''r|.*|'\'' ]<br>} &nbsp;global { &nbsp;locking_type=1 &nbsp;prioritise_write_locks=1 &nbsp;wait_for_locks=1<br>use_lvmetad=0 } &nbsp;backup { &nbsp;retain_min = 50 &nbsp;retain_days = 0 } '<br>f130d166-546e-4905-8b8f-55a1c1dd2e4f (cwd None)<br>Thread-20::DEBUG::2014-12-08<br>10:20:02,817::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>/sbin/lvm vgck --config ' devices { 
preferred_names = ["^/dev/mapper/"]<br>ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>obtain_device_list_from_udev=0 filter = [<br>'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>'\''r|.*|'\'' ] } &nbsp;global { &nbsp;locking_type=1 &nbsp;prioritise_write_locks=1<br>wait_for_locks=1 &nbsp;use_lvmetad=0 } &nbsp;backup { &nbsp;retain_min = 50 &nbsp;retain_days =<br>0 } ' eb912657-8a8c-4173-9d24-92d2b09a773c (cwd None)<br>Thread-20::DEBUG::2014-12-08<br>10:20:03,388::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>obtain_device_list_from_udev=0 filter = [<br>'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>'\''r|.*|'\'' ] } &nbsp;global { &nbsp;locking_type=1 &nbsp;prioritise_write_locks=1<br>wait_for_locks=1 &nbsp;use_lvmetad=0 } &nbsp;backup { &nbsp;retain_min = 50 &nbsp;retain_days =<br>0 } ' --noheadings --units b --nosuffix --separator '|'<br>--ignoreskippedcluster -o<br>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>eb912657-8a8c-4173-9d24-92d2b09a773c (cwd None)<br>Thread-17::ERROR::2014-12-08<br>10:20:03,469::sdc::137::Storage.StorageDomainCache::(_findDomain) looking<br>for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece<br>Thread-17::ERROR::2014-12-08<br>10:20:03,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)<br>looking for domain 78d84adf-7274-4efe-a711-fbec31196ece<br>Thread-17::DEBUG::2014-12-08<br>10:20:03,482::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>ignore_suspended_devices=1 
write_cache_state=0 disable_after_error_count=3<br>obtain_device_list_from_udev=0 filter = [<br>'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>'\''r|.*|'\'' ] } &nbsp;global { &nbsp;locking_type=1 &nbsp;prioritise_write_locks=1<br>wait_for_locks=1 &nbsp;use_lvmetad=0 } &nbsp;backup { &nbsp;retain_min = 50 &nbsp;retain_days =<br>0 } ' --noheadings --units b --nosuffix --separator '|'<br>--ignoreskippedcluster -o<br>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>Thread-17::DEBUG::2014-12-08<br>10:20:03,572::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>obtain_device_list_from_udev=0 filter = [<br>'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>'\''r|.*|'\'' ] } &nbsp;global { &nbsp;locking_type=1 &nbsp;prioritise_write_locks=1<br>wait_for_locks=1 &nbsp;use_lvmetad=0 } &nbsp;backup { &nbsp;retain_min = 50 &nbsp;retain_days =<br>0 } ' --noheadings --units b --nosuffix --separator '|'<br>--ignoreskippedcluster -o<br>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>Thread-17::DEBUG::2014-12-08<br>10:20:03,631::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>obtain_device_list_from_udev=0 filter = 
[<br>'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>'\''r|.*|'\'' ] } &nbsp;global { &nbsp;locking_type=1 &nbsp;prioritise_write_locks=1<br>wait_for_locks=1 &nbsp;use_lvmetad=0 } &nbsp;backup { &nbsp;retain_min = 50 &nbsp;retain_days =<br>0 } ' --noheadings --units b --nosuffix --separator '|'<br>--ignoreskippedcluster -o<br>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>f130d166-546e-4905-8b8f-55a1c1dd2e4f eb912657-8a8c-4173-9d24-92d2b09a773c<br>(cwd None)<br>Thread-14::ERROR::2014-12-08<br>10:20:05,785::task::866::Storage.TaskManager.Task::(_setError)<br>Task=`ffaf5100-e833-4d29-ac5d-f6f7f8ce2b5d`::Unexpected error<br>&nbsp;&nbsp; &nbsp;raise SecureError("Secured object is not in safe state")<br>SecureError: Secured object is not in safe state<br>Thread-14::ERROR::2014-12-08<br>10:20:05,797::dispatcher::79::Storage.Dispatcher::(wrapper) Secured object<br>is not in safe state<br>&nbsp;&nbsp; &nbsp;raise self.error<br>SecureError: Secured object is not in safe state<br>Thread-34::ERROR::2014-12-08<br>10:21:46,544::task::866::Storage.TaskManager.Task::(_setError)<br>Task=`82940da7-10c1-42f6-afca-3c0ac00c1487`::Unexpected error<br>&nbsp;&nbsp; &nbsp;raise SecureError("Secured object is not in safe state")<br>SecureError: Secured object is not in safe state<br>Thread-34::ERROR::2014-12-08<br>10:21:46,549::dispatcher::79::Storage.Dispatcher::(wrapper) Secured object<br>is not in safe state<br>&nbsp;&nbsp; &nbsp;raise self.error<br>SecureError: Secured object is not in safe state<br><div><br></div>2014-12-08 7:30 GMT+01:00 Koen Vanoppen &lt;vanoppen.koen@gmail.com&gt;:<br><div><br></div>&gt; Dear all,<br>&gt;<br>&gt; We have updated our hypervisors with yum. This included an update of vdsm<br>&gt; also. 
We are now on these versions:<br>&gt; vdsm-4.16.7-1.gitdb83943.el6.x86_64<br>&gt; vdsm-python-4.16.7-1.gitdb83943.el6.noarch<br>&gt; vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch<br>&gt; vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch<br>&gt; vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch<br>&gt; vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch<br>&gt; vdsm-cli-4.16.7-1.gitdb83943.el6.noarch<br>&gt;<br>&gt; And ever since these updates we have been experiencing BIG troubles with our fibre<br>&gt; connections. I've already updated the Brocade cards to the latest version.<br>&gt; This seemed to help; they already came back up and saw the storage domains<br>&gt; (before the Brocade update, they didn't even see their storage domains).<br>&gt; But after a day or so, one of the hypervisors began to freak out again,<br>&gt; coming up and going back down... Below you can find the errors:<br>&gt;<br>&gt;<br>&gt; Thread-821::ERROR::2014-12-08<br>&gt; 07:10:33,190::task::866::Storage.TaskManager.Task::(_setError)<br>&gt; Task=`27cb9779-a8e9-4080-988d-9772c922710b`::Unexpected error<br>&gt; &nbsp; &nbsp; raise se.SpmStatusError()<br>&gt; SpmStatusError: Not SPM: ()<br>&gt; Thread-821::ERROR::2014-12-08<br>&gt; 07:10:33,194::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>&gt; {'message': 'Not SPM: ()', 'code': 654}}<br>&gt; Thread-822::ERROR::2014-12-08<br>&gt; 07:11:03,878::task::866::Storage.TaskManager.Task::(_setError)<br>&gt; Task=`30177931-68c0-420f-950f-da5b770fe35c`::Unexpected error<br>&gt; Thread-822::ERROR::2014-12-08<br>&gt; 07:11:03,882::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>&gt; {'message': "Unknown pool id, pool not connected:<br>&gt; ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}<br>&gt; Thread-813::ERROR::2014-12-08<br>&gt; 07:11:07,634::sdc::137::Storage.StorageDomainCache::(_findDomain) looking<br>&gt; for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece<br>&gt; Thread-813::ERROR::2014-12-08<br>&gt; 
07:11:07,634::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)<br>&gt; looking for domain 78d84adf-7274-4efe-a711-fbec31196ece<br>&gt; Thread-813::DEBUG::2014-12-08<br>&gt; 07:11:07,638::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>&gt; /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>&gt; ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>&gt; obtain_device_list_from_udev=0 filter = [<br>&gt; '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>&gt; '\''r|.*|'\'' ] } &nbsp;global { &nbsp;locking_type=1 &nbsp;prioritise_write_locks=1<br>&gt; wait_for_locks=1 &nbsp;use_lvmetad=0 } &nbsp;backup { &nbsp;retain_min = 50 &nbsp;retain_days =<br>&gt; 0 } ' --noheadings --units b --nosuffix --separator '|'<br>&gt; --ignoreskippedcluster -o<br>&gt; uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>&gt; 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>&gt; Thread-813::DEBUG::2014-12-08<br>&gt; 07:11:07,835::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>&gt; /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>&gt; ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>&gt; obtain_device_list_from_udev=0 filter = [<br>&gt; '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>&gt; '\''r|.*|'\'' ] } &nbsp;global { &nbsp;locking_type=1 &nbsp;prioritise_write_locks=1<br>&gt; wait_for_locks=1 &nbsp;use_lvmetad=0 } &nbsp;backup { &nbsp;retain_min = 50 &nbsp;retain_days =<br>&gt; 0 } ' --noheadings --units b --nosuffix --separator '|'<br>&gt; --ignoreskippedcluster -o<br>&gt; 
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>&gt; 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>&gt; Thread-813::ERROR::2014-12-08<br>&gt; 07:11:07,896::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)<br>&gt; Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have<br>&gt; expected version 42 it is version 17<br>&gt; Thread-813::ERROR::2014-12-08<br>&gt; 07:11:07,903::task::866::Storage.TaskManager.Task::(_setError)<br>&gt; Task=`c434f325-5193-4236-a04d-2fee9ac095bc`::Unexpected error<br>&gt; Thread-813::ERROR::2014-12-08<br>&gt; 07:11:07,946::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>&gt; {'message': "Wrong Master domain or its version:<br>&gt; 'SD=78d84adf-7274-4efe-a711-fbec31196ece,<br>&gt; pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:11:43,993::task::866::Storage.TaskManager.Task::(_setError)<br>&gt; Task=`9abbccd9-88a7-4632-b350-f9af1f65bebd`::Unexpected error<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:11:43,998::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>&gt; {'message': "Unknown pool id, pool not connected:<br>&gt; ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:11:44,003::task::866::Storage.TaskManager.Task::(_setError)<br>&gt; Task=`7ef1ac39-e7c2-4538-b30b-ab2fcefac01d`::Unexpected error<br>&gt; &nbsp; &nbsp; raise se.SpmStatusError()<br>&gt; SpmStatusError: Not SPM: ()<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:11:44,007::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>&gt; {'message': 'Not SPM: ()', 'code': 654}}<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:11:44,133::task::866::Storage.TaskManager.Task::(_setError)<br>&gt; Task=`cc1ae82c-f3c4-4efa-9cd2-c62a27801e76`::Unexpected error<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 
07:11:44,137::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>&gt; {'message': "Unknown pool id, pool not connected:<br>&gt; ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:12:24,580::task::866::Storage.TaskManager.Task::(_setError)<br>&gt; Task=`9bcbb87d-3093-4894-879b-3fe2b09ef351`::Unexpected error<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:12:24,585::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>&gt; {'message': "Unknown pool id, pool not connected:<br>&gt; ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:13:04,926::task::866::Storage.TaskManager.Task::(_setError)<br>&gt; Task=`8bdd0c1f-e681-4a8e-ad55-296c021389ed`::Unexpected error<br>&gt; &nbsp; &nbsp; raise se.SpmStatusError()<br>&gt; SpmStatusError: Not SPM: ()<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:13:04,931::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>&gt; {'message': 'Not SPM: ()', 'code': 654}}<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:13:45,342::task::866::Storage.TaskManager.Task::(_setError)<br>&gt; Task=`160ea2a7-b6cb-4102-9df4-71ba87fd863e`::Unexpected error<br>&gt; &nbsp; &nbsp; raise se.SpmStatusError()<br>&gt; SpmStatusError: Not SPM: ()<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:13:45,346::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>&gt; {'message': 'Not SPM: ()', 'code': 654}}<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:14:25,879::task::866::Storage.TaskManager.Task::(_setError)<br>&gt; Task=`985628db-8f48-44b5-8f61-631a922f7f71`::Unexpected error<br>&gt; &nbsp; &nbsp; raise se.SpmStatusError()<br>&gt; SpmStatusError: Not SPM: ()<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:14:25,883::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>&gt; {'message': 'Not SPM: ()', 'code': 654}}<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 
07:15:06,175::task::866::Storage.TaskManager.Task::(_setError)<br>&gt; Task=`ddca1c88-0565-41e8-bf0c-22eadcc75918`::Unexpected error<br>&gt; &nbsp; &nbsp; raise se.SpmStatusError()<br>&gt; SpmStatusError: Not SPM: ()<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:15:06,179::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>&gt; {'message': 'Not SPM: ()', 'code': 654}}<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:15:46,585::task::866::Storage.TaskManager.Task::(_setError)<br>&gt; Task=`12bbded5-59ce-46d8-9e67-f48862a03606`::Unexpected error<br>&gt; &nbsp; &nbsp; raise se.SpmStatusError()<br>&gt; SpmStatusError: Not SPM: ()<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:15:46,589::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>&gt; {'message': 'Not SPM: ()', 'code': 654}}<br>&gt; Thread-814::ERROR::2014-12-08<br>&gt; 07:16:08,619::sdc::137::Storage.StorageDomainCache::(_findDomain) looking<br>&gt; for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece<br>&gt; Thread-814::ERROR::2014-12-08<br>&gt; 07:16:08,619::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)<br>&gt; looking for domain 78d84adf-7274-4efe-a711-fbec31196ece<br>&gt; Thread-814::DEBUG::2014-12-08<br>&gt; 07:16:08,624::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>&gt; /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>&gt; ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>&gt; obtain_device_list_from_udev=0 filter = [<br>&gt; '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>&gt; '\''r|.*|'\'' ] } &nbsp;global { &nbsp;locking_type=1 &nbsp;prioritise_write_locks=1<br>&gt; wait_for_locks=1 &nbsp;use_lvmetad=0 } &nbsp;backup { &nbsp;retain_min = 50 &nbsp;retain_days =<br>&gt; 0 } ' --noheadings --units b --nosuffix --separator '|'<br>&gt; --ignoreskippedcluster -o<br>&gt; 
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>&gt; 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>&gt; Thread-814::DEBUG::2014-12-08<br>&gt; 07:16:08,740::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>&gt; /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>&gt; ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>&gt; obtain_device_list_from_udev=0 filter = [<br>&gt; '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>&gt; '\''r|.*|'\'' ] } &nbsp;global { &nbsp;locking_type=1 &nbsp;prioritise_write_locks=1<br>&gt; wait_for_locks=1 &nbsp;use_lvmetad=0 } &nbsp;backup { &nbsp;retain_min = 50 &nbsp;retain_days =<br>&gt; 0 } ' --noheadings --units b --nosuffix --separator '|'<br>&gt; --ignoreskippedcluster -o<br>&gt; uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>&gt; 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>&gt; Thread-814::ERROR::2014-12-08<br>&gt; 07:16:08,812::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)<br>&gt; Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have<br>&gt; expected version 42 it is version 17<br>&gt; Thread-814::ERROR::2014-12-08<br>&gt; 07:16:08,820::task::866::Storage.TaskManager.Task::(_setError)<br>&gt; Task=`5cdce5cd-6e6d-421e-bc2a-f999d8cbb056`::Unexpected error<br>&gt; Thread-814::ERROR::2014-12-08<br>&gt; 07:16:08,865::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>&gt; {'message': "Wrong Master domain or its version:<br>&gt; 'SD=78d84adf-7274-4efe-a711-fbec31196ece,<br>&gt; pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}<br>&gt; Thread-815::ERROR::2014-12-08<br>&gt; 07:16:09,471::sdc::137::Storage.StorageDomainCache::(_findDomain) looking<br>&gt; for 
unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece<br>&gt; Thread-815::ERROR::2014-12-08<br>&gt; 07:16:09,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)<br>&gt; looking for domain 78d84adf-7274-4efe-a711-fbec31196ece<br>&gt; Thread-815::DEBUG::2014-12-08<br>&gt; 07:16:09,476::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>&gt; /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>&gt; ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>&gt; obtain_device_list_from_udev=0 filter = [<br>&gt; '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>&gt; '\''r|.*|'\'' ] } &nbsp;global { &nbsp;locking_type=1 &nbsp;prioritise_write_locks=1<br>&gt; wait_for_locks=1 &nbsp;use_lvmetad=0 } &nbsp;backup { &nbsp;retain_min = 50 &nbsp;retain_days =<br>&gt; 0 } ' --noheadings --units b --nosuffix --separator '|'<br>&gt; --ignoreskippedcluster -o<br>&gt; uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>&gt; 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>&gt; Thread-815::DEBUG::2014-12-08<br>&gt; 07:16:09,564::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>&gt; /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>&gt; ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>&gt; obtain_device_list_from_udev=0 filter = [<br>&gt; '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>&gt; '\''r|.*|'\'' ] } &nbsp;global { &nbsp;locking_type=1 &nbsp;prioritise_write_locks=1<br>&gt; wait_for_locks=1 &nbsp;use_lvmetad=0 } &nbsp;backup { &nbsp;retain_min = 50 &nbsp;retain_days =<br>&gt; 0 } ' --noheadings --units b --nosuffix --separator '|'<br>&gt; --ignoreskippedcluster -o<br>&gt; 
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>&gt; 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>&gt; Thread-815::ERROR::2014-12-08<br>&gt; 07:16:09,627::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)<br>&gt; Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have<br>&gt; expected version 42 it is version 17<br>&gt; Thread-815::ERROR::2014-12-08<br>&gt; 07:16:09,635::task::866::Storage.TaskManager.Task::(_setError)<br>&gt; Task=`abfa0fd0-04b3-4c65-b3d0-be18b085a65d`::Unexpected error<br>&gt; Thread-815::ERROR::2014-12-08<br>&gt; 07:16:09,681::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>&gt; {'message': "Wrong Master domain or its version:<br>&gt; 'SD=78d84adf-7274-4efe-a711-fbec31196ece,<br>&gt; pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}<br>&gt; Thread-816::ERROR::2014-12-08<br>&gt; 07:16:10,182::sdc::137::Storage.StorageDomainCache::(_findDomain) looking<br>&gt; for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece<br>&gt; Thread-816::ERROR::2014-12-08<br>&gt; 07:16:10,183::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)<br>&gt; looking for domain 78d84adf-7274-4efe-a711-fbec31196ece<br>&gt; Thread-816::DEBUG::2014-12-08<br>&gt; 07:16:10,187::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>&gt; /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]<br>&gt; ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>&gt; obtain_device_list_from_udev=0 filter = [<br>&gt; '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',<br>&gt; '\''r|.*|'\'' ] } &nbsp;global { &nbsp;locking_type=1 &nbsp;prioritise_write_locks=1<br>&gt; wait_for_locks=1 &nbsp;use_lvmetad=0 } &nbsp;backup { &nbsp;retain_min = 50 &nbsp;retain_days =<br>&gt; 0 } ' --noheadings --units b 
--nosuffix --separator '|'<br>&gt; --ignoreskippedcluster -o<br>&gt; uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>&gt; 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:16:27,163::task::866::Storage.TaskManager.Task::(_setError)<br>&gt; Task=`9b0fd676-7941-40a7-a71e-0f1dee48a107`::Unexpected error<br>&gt; &nbsp; &nbsp; raise se.SpmStatusError()<br>&gt; SpmStatusError: Not SPM: ()<br>&gt; Thread-823::ERROR::2014-12-08<br>&gt; 07:16:27,168::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>&gt; {'message': 'Not SPM: ()', 'code': 654}}<br>&gt;<br>&gt;<br>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: &lt;http://lists.ovirt.org/pipermail/users/attachments/20141208/2f754047/attachment.html&gt;<br><div><br></div>------------------------------<br><div><br></div>_______________________________________________<br>Users mailing list<br>Users@ovirt.org<br>http://lists.ovirt.org/mailman/listinfo/users<br><div><br></div><br>End of Users Digest, Vol 39, Issue 38<br>*************************************<br></div><div><br></div></div></body></html>