Re: [ovirt-users] Users Digest, Vol 39, Issue 38

Hi all,

I was thinking of "booting from iSCSI SAN", which means you'd be using a LUN placed on the storage array in order to boot your host over the network. In this case you might configure your host's HW to boot from iSCSI, and then you won't need any HD in your HW.

+adding more people to add their comments.

Thanks in advance.

Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road, Ra'anana, Israel 43501

Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev@redhat.com
IRC: nsednev

----- Original Message -----
From: users-request@ovirt.org
To: users@ovirt.org
Sent: Monday, December 8, 2014 11:22:27 AM
Subject: Users Digest, Vol 39, Issue 38

Send Users mailing list submissions to users@ovirt.org

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.ovirt.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to users-request@ovirt.org

You can reach the person managing the list at users-owner@ovirt.org

When replying, please edit your Subject line so it is more specific than "Re: Contents of Users digest..."

Today's Topics:

   1. Re: is it possible to run ovirt node on Diskless HW? (Doron Fediuck)
   2. Re: Storage Domain Issue (Koen Vanoppen)

----------------------------------------------------------------------

Message: 1
Date: Mon, 8 Dec 2014 02:01:50 -0500 (EST)
From: Doron Fediuck <dfediuck@redhat.com>
To: Arman Khalatyan <arm2arm@gmail.com>
Cc: Ryan Barry <rbarry@redhat.com>, Fabian Deutsch <fdeutsch@redhat.com>, users <users@ovirt.org>
Subject: Re: [ovirt-users] is it possible to run ovirt node on Diskless HW?
Message-ID: <1172482552.12144827.1418022110582.JavaMail.zimbra@redhat.com>
Content-Type: text/plain; charset=utf-8

For standard CentOS you may see other issues. For example, let's assume you have a single NIC (eth0).
If you boot your host and then try to add it to the engine, the host deploy procedure will try to create a management bridge for the VMs using eth0. At this point your host will freeze, since your root FS will be disconnected while the bridge is being created.

I did this ~6 years ago, and it required opening the initrd to handle the above issue, as well as adding the NIC driver and creating the bridge at that point. So it's not a trivial task, but doable with some hacking.

Doron

----- Original Message -----
From: "Arman Khalatyan" <arm2arm@gmail.com>
To: "Doron Fediuck" <dfediuck@redhat.com>
Cc: "users" <users@ovirt.org>, "Fabian Deutsch" <fdeutsch@redhat.com>, "Ryan Barry" <rbarry@redhat.com>, "Tolik Litovsky" <tlitovsk@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>
Sent: Sunday, December 7, 2014 7:38:19 PM
Subject: Re: [ovirt-users] is it possible to run ovirt node on Diskless HW?
It is the standard CentOS 6.6 one.
a.
***********************************************************
Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für Astrophysik Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany
***********************************************************
On Sun, Dec 7, 2014 at 6:04 PM, Doron Fediuck <dfediuck@redhat.com> wrote:
----- Original Message -----
From: "Arman Khalatyan" <arm2arm@gmail.com>
To: "users" <users@ovirt.org>
Sent: Wednesday, December 3, 2014 6:50:09 PM
Subject: [ovirt-users] is it possible to run ovirt node on Diskless HW?
Hello,
Doing steps in:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/htm...
I would like to know whether someone has succeeded in running the host on a diskless machine. I am using a CentOS 6.6 node with oVirt 3.5. Thanks, Arman.
***********************************************************
Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für Astrophysik Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany
***********************************************************
Hi Arman,
Are you working with ovirt node or standard CentOS?
Note that ovirt node is different, as it works like a live CD: it runs from memory. In order to save some configurations (such as networking), the local disk is used.
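For the boot-from-SAN approach discussed in this thread, a minimal sketch of what the host firmware (or a PXE-chained loader) would do is an iPXE script along these lines. All names, IQNs, and addresses here are made-up placeholders, not a tested oVirt setup:

```
#!ipxe
# Bring up the NIC via DHCP, then boot the host directly from an iSCSI LUN.
# The initiator/target IQNs and the 192.0.2.10 portal are illustrative only.
dhcp
set initiator-iqn iqn.2014-12.com.example:diskless-host01
sanboot iscsi:192.0.2.10::::iqn.2014-12.com.example:storage.target0
```

Note that when the OS root lives on that LUN over eth0, the management-bridge problem described above applies: re-plumbing eth0 into a bridge during host deploy can drop the session carrying the root FS unless the initrd/network setup accounts for it.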
------------------------------

Message: 2
Date: Mon, 8 Dec 2014 10:22:18 +0100
From: Koen Vanoppen <vanoppen.koen@gmail.com>
To: "users@ovirt.org" <users@ovirt.org>
Subject: Re: [ovirt-users] Storage Domain Issue
Message-ID: <CACfY+MaPY9opHykNc7hmM4Wc0_HBuu6_fyi7wPMWP4RSCe6xYQ@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

some more errors:

Thread-19::DEBUG::2014-12-08 10:20:02,700::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgck --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/36005076802810d489000000000000062|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' f130d166-546e-4905-8b8f-55a1c1dd2e4f (cwd None)
Thread-20::DEBUG::2014-12-08 10:20:02,817::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgck --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' eb912657-8a8c-4173-9d24-92d2b09a773c (cwd None)
Thread-20::DEBUG::2014-12-08 10:20:03,388::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', '\''r|.*|'\'' ] } global { locking_type=1
prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name eb912657-8a8c-4173-9d24-92d2b09a773c (cwd None) Thread-17::ERROR::2014-12-08 10:20:03,469::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece Thread-17::ERROR::2014-12-08 10:20:03,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 78d84adf-7274-4efe-a711-fbec31196ece Thread-17::DEBUG::2014-12-08 10:20:03,482::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) Thread-17::DEBUG::2014-12-08 10:20:03,572::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) Thread-17::DEBUG::2014-12-08 10:20:03,631::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name f130d166-546e-4905-8b8f-55a1c1dd2e4f eb912657-8a8c-4173-9d24-92d2b09a773c (cwd None) Thread-14::ERROR::2014-12-08 10:20:05,785::task::866::Storage.TaskManager.Task::(_setError) Task=`ffaf5100-e833-4d29-ac5d-f6f7f8ce2b5d`::Unexpected error raise SecureError("Secured object is not in safe state") SecureError: Secured object is not in safe state Thread-14::ERROR::2014-12-08 10:20:05,797::dispatcher::79::Storage.Dispatcher::(wrapper) Secured object is not in safe state raise self.error SecureError: Secured object is not in safe state Thread-34::ERROR::2014-12-08 10:21:46,544::task::866::Storage.TaskManager.Task::(_setError) Task=`82940da7-10c1-42f6-afca-3c0ac00c1487`::Unexpected error raise SecureError("Secured object is not in safe state") SecureError: Secured object is not in safe state Thread-34::ERROR::2014-12-08 10:21:46,549::dispatcher::79::Storage.Dispatcher::(wrapper) Secured object is not in 
safe state raise self.error SecureError: Secured object is not in safe state

2014-12-08 7:30 GMT+01:00 Koen Vanoppen <vanoppen.koen@gmail.com>:
Dear all,
We have updated our hypervisors with yum. This included an update of vdsm as well. We are now on these versions:

vdsm-4.16.7-1.gitdb83943.el6.x86_64
vdsm-python-4.16.7-1.gitdb83943.el6.noarch
vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch
vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-cli-4.16.7-1.gitdb83943.el6.noarch
And ever since these updates we have experienced BIG troubles with our fibre connections. I've already updated the Brocade cards to the latest version. This seemed to help: the hosts came back up and saw the storage domains (before the Brocade update, they didn't even see their storage domains). But after a day or so, one of the hypervisors began to freak out again, coming up and going back down... Below you can find the errors:
Thread-821::ERROR::2014-12-08 07:10:33,190::task::866::Storage.TaskManager.Task::(_setError) Task=`27cb9779-a8e9-4080-988d-9772c922710b`::Unexpected error raise se.SpmStatusError() SpmStatusError: Not SPM: () Thread-821::ERROR::2014-12-08 07:10:33,194::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': 'Not SPM: ()', 'code': 654}} Thread-822::ERROR::2014-12-08 07:11:03,878::task::866::Storage.TaskManager.Task::(_setError) Task=`30177931-68c0-420f-950f-da5b770fe35c`::Unexpected error Thread-822::ERROR::2014-12-08 07:11:03,882::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': "Unknown pool id, pool not connected: ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}} Thread-813::ERROR::2014-12-08 07:11:07,634::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece Thread-813::ERROR::2014-12-08 07:11:07,634::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 78d84adf-7274-4efe-a711-fbec31196ece Thread-813::DEBUG::2014-12-08 07:11:07,638::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) Thread-813::DEBUG::2014-12-08 07:11:07,835::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { 
preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) Thread-813::ERROR::2014-12-08 07:11:07,896::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion) Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have expected version 42 it is version 17 Thread-813::ERROR::2014-12-08 07:11:07,903::task::866::Storage.TaskManager.Task::(_setError) Task=`c434f325-5193-4236-a04d-2fee9ac095bc`::Unexpected error Thread-813::ERROR::2014-12-08 07:11:07,946::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': "Wrong Master domain or its version: 'SD=78d84adf-7274-4efe-a711-fbec31196ece, pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}} Thread-823::ERROR::2014-12-08 07:11:43,993::task::866::Storage.TaskManager.Task::(_setError) Task=`9abbccd9-88a7-4632-b350-f9af1f65bebd`::Unexpected error Thread-823::ERROR::2014-12-08 07:11:43,998::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': "Unknown pool id, pool not connected: ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}} Thread-823::ERROR::2014-12-08 07:11:44,003::task::866::Storage.TaskManager.Task::(_setError) Task=`7ef1ac39-e7c2-4538-b30b-ab2fcefac01d`::Unexpected error raise se.SpmStatusError() SpmStatusError: Not SPM: () Thread-823::ERROR::2014-12-08 07:11:44,007::dispatcher::76::Storage.Dispatcher::(wrapper) 
{'status': {'message': 'Not SPM: ()', 'code': 654}} Thread-823::ERROR::2014-12-08 07:11:44,133::task::866::Storage.TaskManager.Task::(_setError) Task=`cc1ae82c-f3c4-4efa-9cd2-c62a27801e76`::Unexpected error Thread-823::ERROR::2014-12-08 07:11:44,137::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': "Unknown pool id, pool not connected: ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}} Thread-823::ERROR::2014-12-08 07:12:24,580::task::866::Storage.TaskManager.Task::(_setError) Task=`9bcbb87d-3093-4894-879b-3fe2b09ef351`::Unexpected error Thread-823::ERROR::2014-12-08 07:12:24,585::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': "Unknown pool id, pool not connected: ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}} Thread-823::ERROR::2014-12-08 07:13:04,926::task::866::Storage.TaskManager.Task::(_setError) Task=`8bdd0c1f-e681-4a8e-ad55-296c021389ed`::Unexpected error raise se.SpmStatusError() SpmStatusError: Not SPM: () Thread-823::ERROR::2014-12-08 07:13:04,931::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': 'Not SPM: ()', 'code': 654}} Thread-823::ERROR::2014-12-08 07:13:45,342::task::866::Storage.TaskManager.Task::(_setError) Task=`160ea2a7-b6cb-4102-9df4-71ba87fd863e`::Unexpected error raise se.SpmStatusError() SpmStatusError: Not SPM: () Thread-823::ERROR::2014-12-08 07:13:45,346::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': 'Not SPM: ()', 'code': 654}} Thread-823::ERROR::2014-12-08 07:14:25,879::task::866::Storage.TaskManager.Task::(_setError) Task=`985628db-8f48-44b5-8f61-631a922f7f71`::Unexpected error raise se.SpmStatusError() SpmStatusError: Not SPM: () Thread-823::ERROR::2014-12-08 07:14:25,883::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': 'Not SPM: ()', 'code': 654}} Thread-823::ERROR::2014-12-08 07:15:06,175::task::866::Storage.TaskManager.Task::(_setError) Task=`ddca1c88-0565-41e8-bf0c-22eadcc75918`::Unexpected error raise 
se.SpmStatusError() SpmStatusError: Not SPM: () Thread-823::ERROR::2014-12-08 07:15:06,179::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': 'Not SPM: ()', 'code': 654}} Thread-823::ERROR::2014-12-08 07:15:46,585::task::866::Storage.TaskManager.Task::(_setError) Task=`12bbded5-59ce-46d8-9e67-f48862a03606`::Unexpected error raise se.SpmStatusError() SpmStatusError: Not SPM: () Thread-823::ERROR::2014-12-08 07:15:46,589::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': 'Not SPM: ()', 'code': 654}} Thread-814::ERROR::2014-12-08 07:16:08,619::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece Thread-814::ERROR::2014-12-08 07:16:08,619::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 78d84adf-7274-4efe-a711-fbec31196ece Thread-814::DEBUG::2014-12-08 07:16:08,624::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) Thread-814::DEBUG::2014-12-08 07:16:08,740::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ 
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) Thread-814::ERROR::2014-12-08 07:16:08,812::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion) Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have expected version 42 it is version 17 Thread-814::ERROR::2014-12-08 07:16:08,820::task::866::Storage.TaskManager.Task::(_setError) Task=`5cdce5cd-6e6d-421e-bc2a-f999d8cbb056`::Unexpected error Thread-814::ERROR::2014-12-08 07:16:08,865::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': "Wrong Master domain or its version: 'SD=78d84adf-7274-4efe-a711-fbec31196ece, pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}} Thread-815::ERROR::2014-12-08 07:16:09,471::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece Thread-815::ERROR::2014-12-08 07:16:09,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 78d84adf-7274-4efe-a711-fbec31196ece Thread-815::DEBUG::2014-12-08 07:16:09,476::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', '\''r|.*|'\'' ] } global { locking_type=1 
prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) Thread-815::DEBUG::2014-12-08 07:16:09,564::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) Thread-815::ERROR::2014-12-08 07:16:09,627::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion) Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have expected version 42 it is version 17 Thread-815::ERROR::2014-12-08 07:16:09,635::task::866::Storage.TaskManager.Task::(_setError) Task=`abfa0fd0-04b3-4c65-b3d0-be18b085a65d`::Unexpected error Thread-815::ERROR::2014-12-08 07:16:09,681::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': "Wrong Master domain or its version: 'SD=78d84adf-7274-4efe-a711-fbec31196ece, pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}} Thread-816::ERROR::2014-12-08 07:16:10,182::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece Thread-816::ERROR::2014-12-08 
07:16:10,183::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 78d84adf-7274-4efe-a711-fbec31196ece Thread-816::DEBUG::2014-12-08 07:16:10,187::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 78d84adf-7274-4efe-a711-fbec31196ece (cwd None) Thread-823::ERROR::2014-12-08 07:16:27,163::task::866::Storage.TaskManager.Task::(_setError) Task=`9b0fd676-7941-40a7-a71e-0f1dee48a107`::Unexpected error raise se.SpmStatusError() SpmStatusError: Not SPM: () Thread-823::ERROR::2014-12-08 07:16:27,168::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': 'Not SPM: ()', 'code': 654}}
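As a quick triage aid for log excerpts like the ones above, one could count how often each dispatcher error code occurs; this is just a sketch, and `vdsm.log` is a placeholder for wherever the log was saved (on a host, typically under /var/log/vdsm/):

```shell
# Count occurrences of each vdsm dispatcher error code in a saved log file.
# 'vdsm.log' is an assumed local filename, not a fixed vdsm path.
grep -o "'code': [0-9]*" vdsm.log | sort | uniq -c | sort -rn
```

For the excerpts above, the top entries would be codes 654 ('Not SPM'), 309 (pool not connected), and 324 (wrong master domain or version), which suggests one SPM/master-domain problem rather than many unrelated failures.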
Best regards,<br>Nikolai<br>____________________<br>Nikolai Sednev<br>Seni= or Quality Engineer at Compute team<br>Red Hat Israel<br>34 Jerusalem Road,= <br>Ra'anana, Israel 43501<br><div><br></div>Tel: +972= 9 7692043<br>Mobile: +972 52 7342734<br>Email: nsednev@redhat.com<b= r>IRC: nsednev<span name=3D"x"></span><br></div><div><br></div><hr id=3D"zw= chr"><div style=3D"color:#000;font-weight:normal;font-style:normal;text-dec= oration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>Fro= m: </b>users-request@ovirt.org<br><b>To: </b>users@ovirt.org<br><b>Sent: </= b>Monday, December 8, 2014 11:22:27 AM<br><b>Subject: </b>Users Digest, Vol= 39, Issue 38<br><div><br></div>Send Users mailing list submissions to<br>&= nbsp; users@ovirt.org<br><div><br>= </div>To subscribe or unsubscribe via the World Wide Web, visit<br> &n= bsp; http://lists.ovirt.org/mailman/list= info/users<br>or, via email, send a message with subject or body 'help' to<= br> users-request@ovirt.org<= br><div><br></div>You can reach the person managing the list at<br> &n= bsp; users-owner@ovirt.org<br><div><br><= /div>When replying, please edit your Subject line so it is more specific<br= than "Re: Contents of Users digest..."<br><div><br></div><br>Today's Topic= s:<br><div><br></div> 1. Re: is it possible to run ovirt = node on Diskless HW?<br> (Doron Fediuck)<br> = 2. 
Re: Storage Domain Issue (Koen Vanoppen)<br><div><br></div>= <br>----------------------------------------------------------------------<= br><div><br></div>Message: 1<br>Date: Mon, 8 Dec 2014 02:01:50 -0500 (EST)<= br>From: Doron Fediuck <dfediuck@redhat.com><br>To: Arman Khalatyan &= lt;arm2arm@gmail.com><br>Cc: Ryan Barry <rbarry@redhat.com>, Fabia= n Deutsch<br> <fdeutsch@r= edhat.com>, users <use= rs@ovirt.org><br>Subject: Re: [ovirt-users] is it possible to run ovirt = node on<br> Diskless HW?<br>= Message-ID:<br> <11724825= 52.12144827.1418022110582.JavaMail.zimbra@redhat.com><br>Content-Type: t= ext/plain; charset=3Dutf-8<br><div><br></div>For standard centos you may se= e other issues.<br><div><br></div>For example, let's assume you have a sing= le NIC (eth0).<br>If you boot your host and then try to add it to the engin= e,<br>the host deploy procedure will create try to create a management brid= ge <br>for the VMs using eth0. At this point your host will freeze since yo= ur<br>root FS will be disconnected while creating the bridge.<br><div><br><= /div>I've done this ~6 years ago, and it required opening the initrd to han=
To: "users@ovirt.org" <users@ovirt.org><br>Subject: Re: [ovirt-users= ] Storage Domain Issue<br>Message-ID:<br> &nbs=
Thread-17::DEBUG::2014-12-08<br>10:20:03,631::lvm::288::Storage.Misc.excCm= d::(cmd) /usr/bin/sudo -n<br>/sbin/lvm vgs --config ' devices { preferred_n= ames =3D ["^/dev/mapper/"]<br>ignore_suspended_devices=3D1 write_cache_stat= e=3D0 disable_after_error_count=3D3<br>obtain_device_list_from_udev=3D0 fil= ter =3D [<br>'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mappe= r/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e000000000= 0000de|'\'',<br>'\''r|.*|'\'' ] } global { locking_type=3D1 &nb= sp;prioritise_write_locks=3D1<br>wait_for_locks=3D1 use_lvmetad=3D0 }= backup { retain_min =3D 50 retain_days =3D<br>0 } ' --no=
> Thread-823::ERROR::2014-12-08<br>> 07:15:06,175::task::866::Storag= e.TaskManager.Task::(_setError)<br>> Task=3D`ddca1c88-0565-41e8-bf0c-22e= adcc75918`::Unexpected error<br>> raise se.SpmStatusError(= )<br>> SpmStatusError: Not SPM: ()<br>> Thread-823::ERROR::2014-12-08= <br>> 07:15:06,179::dispatcher::76::Storage.Dispatcher::(wrapper) {'stat= us':<br>> {'message': 'Not SPM: ()', 'code': 654}}<br>> Thread-823::E= RROR::2014-12-08<br>> 07:15:46,585::task::866::Storage.TaskManager.Task:= :(_setError)<br>> Task=3D`12bbded5-59ce-46d8-9e67-f48862a03606`::Unexpec= ted error<br>> raise se.SpmStatusError()<br>> SpmStatus= Error: Not SPM: ()<br>> Thread-823::ERROR::2014-12-08<br>> 07:15:46,5= 89::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'mess= age': 'Not SPM: ()', 'code': 654}}<br>> Thread-814::ERROR::2014-12-08<br= > 07:16:08,619::sdc::137::Storage.StorageDomainCache::(_findDomain) loo= king<br>> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece<br>&= gt; Thread-814::ERROR::2014-12-08<br>> 07:16:08,619::sdc::154::Storage.S= torageDomainCache::(_findUnfetchedDomain)<br>> looking for domain 78d84a= df-7274-4efe-a711-fbec31196ece<br>> Thread-814::DEBUG::2014-12-08<br>>= ; 07:16:08,624::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n<br>&g= t; /sbin/lvm vgs --config ' devices { preferred_names =3D ["^/dev/mapper/"]= <br>> ignore_suspended_devices=3D1 write_cache_state=3D0 disable_after_e= rror_count=3D3<br>> obtain_device_list_from_udev=3D0 filter =3D [<br>>= ; '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/360050768= 02810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\''= ,<br>> '\''r|.*|'\'' ] } global { locking_type=3D1 pri= oritise_write_locks=3D1<br>> wait_for_locks=3D1 use_lvmetad=3D0 } = backup { retain_min =3D 50 retain_days =3D<br>> 0 } ' = --noheadings --units b --nosuffix --separator '|'<br>> --ignoreskippedcl= uster -o<br>> uuid,name,attr,size,free,extent_size,extent_count,free_cou= 
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-814::DEBUG::2014-12-08
> 07:16:08,740::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-814::ERROR::2014-12-08
> 07:16:08,812::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)
> Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have
> expected version 42 it is version 17
> Thread-814::ERROR::2014-12-08
> 07:16:08,820::task::866::Storage.TaskManager.Task::(_setError)
> Task=`5cdce5cd-6e6d-421e-bc2a-f999d8cbb056`::Unexpected error
> Thread-814::ERROR::2014-12-08
> 07:16:08,865::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Wrong Master domain or its version:
> 'SD=78d84adf-7274-4efe-a711-fbec31196ece,
> pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
> Thread-815::ERROR::2014-12-08
> 07:16:09,471::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-815::ERROR::2014-12-08
> 07:16:09,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-815::DEBUG::2014-12-08
> 07:16:09,476::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config [same options as above]
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-815::ERROR::2014-12-08
> 07:16:09,627::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)
> Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have
> expected version 42 it is version 17
> Thread-815::ERROR::2014-12-08
> 07:16:09,635::task::866::Storage.TaskManager.Task::(_setError)
> Task=`abfa0fd0-04b3-4c65-b3d0-be18b085a65d`::Unexpected error
> Thread-815::ERROR::2014-12-08
> 07:16:09,681::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Wrong Master domain or its version:
> 'SD=78d84adf-7274-4efe-a711-fbec31196ece,
> pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
> Thread-816::ERROR::2014-12-08
> 07:16:10,182::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-816::ERROR::2014-12-08
> 07:16:10,183::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-816::DEBUG::2014-12-08
> 07:16:10,187::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config [same options as above]
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-823::ERROR::2014-12-08
> 07:16:27,163::task::866::Storage.TaskManager.Task::(_setError)
> Task=`9b0fd676-7941-40a7-a71e-0f1dee48a107`::Unexpected error
>     raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:16:27,168::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
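The dispatcher lines in the excerpt above carry a numeric status code next to the message (324 for the master-domain version mismatch, 654 for "Not SPM", 309 for "Unknown pool id"). A quick way to tally those codes when skimming a long vdsm log is a small script like this; it is an illustrative snippet, not part of vdsm:

```python
import re
from collections import Counter

# Dispatcher status lines from the vdsm excerpt above, with the email's
# line wrapping undone so each status dict sits on one line.
log = """\
{'status': {'message': "Wrong Master domain or its version: 'SD=78d84adf-7274-4efe-a711-fbec31196ece, pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
{'status': {'message': 'Not SPM: ()', 'code': 654}}
{'status': {'message': 'Not SPM: ()', 'code': 654}}
"""

# Every status code reported by Storage.Dispatcher in the excerpt.
codes = [int(m) for m in re.findall(r"'code': (\d+)", log)]
print(Counter(codes))  # Counter({654: 2, 324: 1})
```

Run against a full vdsm.log, the counter makes it easy to see which failure dominates before digging into individual tasks.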
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.ovirt.org/pipermail/users/attachments/20141208/2f754047/attachment.html>

[...] handle the above issue, as well as adding the NIC driver and creating the bridge
at this point. So it's not a trivial task but doable with some hacking.

Doron

----- Original Message -----
> From: "Arman Khalatyan" <arm2arm@gmail.com>
> To: "Doron Fediuck" <dfediuck@redhat.com>
> Cc: "users" <users@ovirt.org>, "Fabian Deutsch" <fdeutsch@redhat.com>, "Ryan Barry" <rbarry@redhat.com>, "Tolik Litovsky" <tlitovsk@redhat.com>, "Douglas Landgraf" <dougsland@redhat.com>
> Sent: Sunday, December 7, 2014 7:38:19 PM
> Subject: Re: [ovirt-users] is it possible to run ovirt node on Diskless HW?
>
> It is centos 6.6 standard one.
> a.
>
> ***********************************************************
>
> Dr.
Arman Khalatyan eScience - SuperComputing Leibniz-Institut für
Astrophysik Potsdam (AIP), An der Sternwarte 16, 14482 Potsdam, Germany

***********************************************************


On Sun, Dec 7, 2014 at 6:04 PM, Doron Fediuck <dfediuck@redhat.com> wrote:

>
>
> ----- Original Message -----
> > From: "Arman Khalatyan" <arm2arm@gmail.com>
> > To: "users" <users@ovirt.org>
> > Sent: Wednesday, December 3, 2014 6:50:09 PM
> > Subject: [ovirt-users] is it possible to run ovirt node on Diskless HW?
> >
> > Hello,
> >
> > Doing the steps in:
> > https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/diskless-nfs-config.html
> >
> > I would like to know: has someone succeeded in running the host on a diskless
> > machine?
> > I am using a CentOS 6.6 node with ovirt 3.5.
> > Thanks,
> > Arman.
> >
> > ***********************************************************
> > Dr. Arman Khalatyan eScience - SuperComputing Leibniz-Institut für Astrophysik
> > Potsdam (AIP), An der Sternwarte 16, 14482 Potsdam, Germany
> > ***********************************************************
> >
>
> Hi Arman,
> Are you working with ovirt node or standard CentOS?
>
> Note that ovirt node is different, as it works like a live CD:
> it runs from memory.
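For reference, the diskless-NFS guide Arman linked boots the host from a PXE entry whose kernel arguments mount the root filesystem from an NFS export instead of a local disk. A minimal sketch of assembling that boot line follows; the server address, export path, and initrd name are placeholders, not values from this thread:

```python
# Hypothetical values: substitute your own tftp/NFS server and export.
nfs_server = "192.0.2.10"        # placeholder NFS server address
root_export = "/exported/root"   # placeholder exported root directory

# The pxelinux.cfg 'append' line points the kernel's root at the NFS
# export; the host then needs no local disk to boot.
append_line = (
    "append initrd=initramfs-diskless.img "
    f"root=nfs:{nfs_server}:{root_export} rw"
)
print(append_line)
```

The exact initramfs name and extra options (e.g. network configuration) come from the guide's dracut steps, so treat this only as the shape of the final boot entry.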
> In order to save some configurations (such
> as networking) the local disk is used.
>


------------------------------

Message: 2
Date: Mon, 8 Dec 2014 10:22:18 +0100
From: Koen Vanoppen <vanoppen.koen@gmail.com>
Subject: Re: [ovirt-users] Storage Domain Issue
Message-ID:
        <CACfY+MaPY9opHykNc7hmM4Wc0_HBuu6_fyi7wPMWP4RSCe6xYQ@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

some more errors:

Thread-19::DEBUG::2014-12-08
10:20:02,700::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgck --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|'\'', '\''r|.*|'\'' ]
} global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1
use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } '
f130d166-546e-4905-8b8f-55a1c1dd2e4f (cwd None)
Thread-20::DEBUG::2014-12-08
10:20:02,817::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgck --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' eb912657-8a8c-4173-9d24-92d2b09a773c (cwd None)
Thread-20::DEBUG::2014-12-08
10:20:03,388::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config [same options as above] --noheadings --units b
--nosuffix --separator '|' --ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
eb912657-8a8c-4173-9d24-92d2b09a773c (cwd None)
Thread-17::ERROR::2014-12-08
10:20:03,469::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-17::ERROR::2014-12-08
10:20:03,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-17::DEBUG::2014-12-08
10:20:03,482::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config [same options as above]
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-17::DEBUG::2014-12-08
10:20:03,572::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config [same options as above]
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
[...]
/sbin/lvm vgs --config [same options as above]
f130d166-546e-4905-8b8f-55a1c1dd2e4f eb912657-8a8c-4173-9d24-92d2b09a773c
(cwd None)
Thread-14::ERROR::2014-12-08
10:20:05,785::task::866::Storage.TaskManager.Task::(_setError)
Task=`ffaf5100-e833-4d29-ac5d-f6f7f8ce2b5d`::Unexpected error
    raise SecureError("Secured object is not in safe state")
SecureError: Secured object is not in safe state
Thread-14::ERROR::2014-12-08
10:20:05,797::dispatcher::79::Storage.Dispatcher::(wrapper) Secured object
is not in safe state
    raise self.error
SecureError: Secured object is not in safe state
Thread-34::ERROR::2014-12-08
10:21:46,544::task::866::Storage.TaskManager.Task::(_setError)
Task=`82940da7-10c1-42f6-afca-3c0ac00c1487`::Unexpected error
    raise SecureError("Secured object is not in safe state")
SecureError: Secured object is not in safe state
Thread-34::ERROR::2014-12-08
10:21:46,549::dispatcher::79::Storage.Dispatcher::(wrapper) Secured object
is not in safe state
    raise self.error
SecureError: Secured object is not in safe state

2014-12-08 7:30 GMT+01:00 Koen Vanoppen <vanoppen.koen@gmail.com>:

> Dear all,
>
> We have updated our hypervisors with yum. This included an update of vdsm
> as well. We are now on these versions:
> vdsm-4.16.7-1.gitdb83943.el6.x86_64
> vdsm-python-4.16.7-1.gitdb83943.el6.noarch
> vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch
> vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch
> vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch
> vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch
> vdsm-cli-4.16.7-1.gitdb83943.el6.noarch
>
> And ever since these updates we have experienced big trouble with our fibre
> connections. I've already updated the brocade cards to the latest version.
> This seemed to help; they already came back up and saw the storage domains
> (before the brocade update, they didn't even see their storage domains).
> But after a day or so, one of the hypervisors began to freak out again,
> coming up and going back down...
> Below you can find the errors:
>
>
> Thread-821::ERROR::2014-12-08
> 07:10:33,190::task::866::Storage.TaskManager.Task::(_setError)
> Task=`27cb9779-a8e9-4080-988d-9772c922710b`::Unexpected error
>     raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-821::ERROR::2014-12-08
> 07:10:33,194::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-822::ERROR::2014-12-08
> 07:11:03,878::task::866::Storage.TaskManager.Task::(_setError)
> Task=`30177931-68c0-420f-950f-da5b770fe35c`::Unexpected error
> Thread-822::ERROR::2014-12-08
> 07:11:03,882::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Unknown pool id, pool not connected:
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
> Thread-813::ERROR::2014-12-08
> 07:11:07,634::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-813::ERROR::2014-12-08
> 07:11:07,634::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-813::DEBUG::2014-12-08
> 07:11:07,638::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config [same options as above]
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-813::DEBUG::2014-12-08
> 07:11:07,835::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config [same options as above]
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-813::ERROR::2014-12-08
> 07:11:07,896::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)
> Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have
> expected version 42 it is version 17
> Thread-813::ERROR::2014-12-08
> 07:11:07,903::task::866::Storage.TaskManager.Task::(_setError)
> Task=`c434f325-5193-4236-a04d-2fee9ac095bc`::Unexpected error
> Thread-813::ERROR::2014-12-08
> 07:11:07,946::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Wrong Master domain or its version:
> 'SD=78d84adf-7274-4efe-a711-fbec31196ece,
> pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
> Thread-823::ERROR::2014-12-08
> 07:11:43,993::task::866::Storage.TaskManager.Task::(_setError)
> Task=`9abbccd9-88a7-4632-b350-f9af1f65bebd`::Unexpected error
> Thread-823::ERROR::2014-12-08
> 07:11:43,998::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Unknown pool id, pool not connected:
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
> Thread-823::ERROR::2014-12-08
> 07:11:44,003::task::866::Storage.TaskManager.Task::(_setError)
> Task=`7ef1ac39-e7c2-4538-b30b-ab2fcefac01d`::Unexpected error
>     raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:11:44,007::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-823::ERROR::2014-12-08
> 07:11:44,133::task::866::Storage.TaskManager.Task::(_setError)
> Task=`cc1ae82c-f3c4-4efa-9cd2-c62a27801e76`::Unexpected error
> Thread-823::ERROR::2014-12-08
> 07:11:44,137::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Unknown pool id, pool not connected:
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
> Thread-823::ERROR::2014-12-08
> 07:12:24,580::task::866::Storage.TaskManager.Task::(_setError)
> Task=`9bcbb87d-3093-4894-879b-3fe2b09ef351`::Unexpected error
> Thread-823::ERROR::2014-12-08
> 07:12:24,585::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Unknown pool id, pool not connected:
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
> Thread-823::ERROR::2014-12-08
> 07:13:04,926::task::866::Storage.TaskManager.Task::(_setError)
> Task=`8bdd0c1f-e681-4a8e-ad55-296c021389ed`::Unexpected error
>     raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:13:04,931::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-823::ERROR::2014-12-08
> 07:13:45,342::task::866::Storage.TaskManager.Task::(_setError)
> Task=`160ea2a7-b6cb-4102-9df4-71ba87fd863e`::Unexpected error
>     raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:13:45,346::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-823::ERROR::2014-12-08
> 07:14:25,879::task::866::Storage.TaskManager.Task::(_setError)
> Task=`985628db-8f48-44b5-8f61-631a922f7f71`::Unexpected error
>     raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:14:25,883::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> [...]

------------------------------

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


End of Users Digest, Vol 39, Issue 38
*************************************
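An aside for anyone debugging similar reports: the `filter` option in the lvm invocations quoted throughout this thread whitelists the multipath devices vdsm may scan, so checking those WWIDs against `multipath -ll` output is a natural first step when fibre paths disappear. Pulling them out of a log line can be done like this (illustrative snippet, not a vdsm API):

```python
import re

# The accept pattern of the lvm 'filter' option exactly as it appears in
# the log excerpts above: only these multipath devices may be scanned.
filter_accept = (
    "a|/dev/mapper/36005076802810d489000000000000062|"
    "/dev/mapper/36005076802810d48e0000000000000ae|"
    "/dev/mapper/36005076802810d48e0000000000000de|"
)

# Each accepted device is a multipath WWID under /dev/mapper; list them.
wwids = re.findall(r"/dev/mapper/([0-9a-f]+)", filter_accept)
print(wwids)
```

If a WWID listed in the filter is missing from the host's multipath view, the corresponding storage domain cannot be activated, which matches the symptoms described above.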