Hi,
We were supposed to start composing the oVirt 3.5.1 RC today *2014-12-09 08:00 UTC* from the 3.5 branch.
We still have blockers for the oVirt 3.5.1 RC release, so we need to postpone it until they are fixed.
Being so close to the winter holidays, we need to discuss a new tentative date for the RC at tomorrow's sync meeting.
The bug tracker [1] shows 1 open blocker:
Bug ID Whiteboard Status Summary
1160846 sla NEW Can't add disk to VM without specifying disk profile when the storage domain has more than one disk profile
In order to stabilize the release, a new branch, ovirt-engine-3.5.1, will be created from the same git hash used for composing the RC.
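For illustration, the branching boils down to something like this (a sketch only; <rc-compose-hash> is a placeholder for whatever commit the RC is composed from):

    git fetch origin
    git checkout -b ovirt-engine-3.5.1 <rc-compose-hash>   # branch off the RC compose commit
    git push origin ovirt-engine-3.5.1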
Maintainers:
- Please be sure that the 3.5 snapshot allows creating VMs
- Please be sure that no pending patches are going to block the release
- If any patch must block the RC release, please raise the issue as soon as possible.
There are still 65 bugs [2] targeted to 3.5.1.
Excluding node and documentation bugs we still have 44 bugs [3] targeted to 3.5.1.
Maintainers / Assignee:
- Please add bugs to the tracker if you think that 3.5.1 should not be released without them fixed.
- Please update the target to 3.5.2 or later for bugs that won't make 3.5.1:
it will ease gathering the blocking bugs for the next releases.
- Please fill in the release notes; the page has been created here [4]
Community:
- If you're testing the oVirt 3.5 nightly snapshot, please add yourself to the test page [5]
[1] http://bugzilla.redhat.com/1155170
[2] http://goo.gl/7G0PDV
[3] http://goo.gl/6gUbVr
[4] http://www.ovirt.org/OVirt_3.5.1_Release_Notes
[5] http://www.ovirt.org/Testing/oVirt_3.5.1_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Hi.
Is it somehow possible to manually recover the engine from the following error, possibly caused by https://bugzilla.redhat.com/show_bug.cgi?id=1155084?
oVirt 3.5
2014-12-08 23:24:41,922 ERROR [org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl] (DefaultQuartzScheduler_Worker-40) Failed to invoke scheduled method invokeCallbackMethods: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor100.invoke(Unknown Source) [:1.7.0_65]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_65]
at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_65]
at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60) [scheduler.jar:]
at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz.jar:]
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) [quartz.jar:]
Caused by: org.apache.commons.lang.SerializationException: org.codehaus.jackson.map.JsonMappingException: Unexpected token (END_ARRAY), expected VALUE_STRING: need JSON String that contains type id (for subtype of java.util.Collection)
at [Source: java.io.StringReader@6a4d8b78; line: 22, column: 22] (through reference chain: org.ovirt.engine.core.common.action.AddVmFromSnapshotParameters["parametersCurrentUser"]->org.ovirt.engine.core.common.businessentities.aaa.DbUser["groupNames"])
at org.ovirt.engine.core.utils.serialization.json.JsonObjectDeserializer.readJsonString(JsonObjectDeserializer.java:91) [utils.jar:]
at org.ovirt.engine.core.utils.serialization.json.JsonObjectDeserializer.deserialize(JsonObjectDeserializer.java:60) [utils.jar:]
at org.ovirt.engine.core.dao.CommandEntityDaoDbFacadeImpl.deserializeParameters(CommandEntityDaoDbFacadeImpl.java:97) [dal.jar:]
at org.ovirt.engine.core.dao.CommandEntityDaoDbFacadeImpl.access$000(CommandEntityDaoDbFacadeImpl.java:21) [dal.jar:]
at org.ovirt.engine.core.dao.CommandEntityDaoDbFacadeImpl$1.mapRow(CommandEntityDaoDbFacadeImpl.java:34) [dal.jar:]
at org.ovirt.engine.core.dao.CommandEntityDaoDbFacadeImpl$1.mapRow(CommandEntityDaoDbFacadeImpl.java:23) [dal.jar:]
at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:92) [spring-jdbc.jar:3.1.1.RELEASE]
at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:1) [spring-jdbc.jar:3.1.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate$1.doInPreparedStatement(JdbcTemplate.java:649) [spring-jdbc.jar:3.1.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:587) [spring-jdbc.jar:3.1.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:637) [spring-jdbc.jar:3.1.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:666) [spring-jdbc.jar:3.1.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:706) [spring-jdbc.jar:3.1.1.RELEASE]
at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:154) [dal.jar:]
at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.doExecute(PostgresDbEngineDialect.java:120) [dal.jar:]
at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:181) [spring-jdbc.jar:3.1.1.RELEASE]
at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:141) [dal.jar:]
at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeReadList(SimpleJdbcCallsHandler.java:103) [dal.jar:]
at org.ovirt.engine.core.dao.DefaultReadDaoDbFacade.getAll(DefaultReadDaoDbFacade.java:77) [dal.jar:]
at org.ovirt.engine.core.bll.tasks.CommandsCacheImpl.initializeCache(CommandsCacheImpl.java:30) [bll.jar:]
at org.ovirt.engine.core.bll.tasks.CommandsCacheImpl.keySet(CommandsCacheImpl.java:41) [bll.jar:]
at org.ovirt.engine.core.bll.tasks.CommandCoordinatorImpl.getCommandsWithCallBackEnabled(CommandCoordinatorImpl.java:130) [bll.jar:]
at org.ovirt.engine.core.bll.tasks.CommandExecutor.initCommandExecutor(CommandExecutor.java:119) [bll.jar:]
at org.ovirt.engine.core.bll.tasks.CommandExecutor.invokeCallbackMethods(CommandExecutor.java:57) [bll.jar:]
... 6 more
Thank you.
---
Raul
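For anyone hitting the same failure, here is a hedged sketch of one possible manual recovery, assuming the stuck parameters live in the command_entities table that CommandEntityDaoDbFacadeImpl reads (table and column names are inferred from the stack trace, not verified). Stop the engine and back up the database before touching anything:

    # back up first, then inspect the persisted commands the Quartz job fails to deserialize
    su - postgres -c "pg_dump engine > /tmp/engine-backup.sql"
    su - postgres -c "psql engine -c \"SELECT command_id, command_type FROM command_entities;\""
    # if a row carries the unparsable AddVmFromSnapshotParameters JSON, deleting it
    # should unblock invokeCallbackMethods (last resort only; <bad-id> is a placeholder):
    su - postgres -c "psql engine -c \"DELETE FROM command_entities WHERE command_id = '<bad-id>';\""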
Hi all,
I was thinking of "booting from iSCSI SAN", which means you'd be using a LUN placed on the storage array in order to boot your host over the network.
In this case you might configure your host's HW to boot from iSCSI, and then you won't need any HD on your HW.
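For what it's worth, a minimal sketch of such a setup using dracut's iSCSI root support (the grub.conf kernel line below is hypothetical; the kernel version, root device, server address and IQNs are placeholders to adapt to your SAN):

    kernel /vmlinuz-2.6.32-504.el6.x86_64 ro root=/dev/sda1 ip=eth0:dhcp \
        netroot=iscsi:@192.168.50.1::3260::iqn.2014-12.example.com:ovirt-host-root \
        iscsi_initiator=iqn.2014-12.example.com:host01
    # the iscsi_initiator parameter name is assumed for RHEL6-era dracut;
    # verify against the dracut man page of your release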
+adding more people to add their comments.
Thanks in advance.
Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501
Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev
----- Original Message -----
From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Monday, December 8, 2014 11:22:27 AM
Subject: Users Digest, Vol 39, Issue 38
Send Users mailing list submissions to
users(a)ovirt.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.ovirt.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
users-request(a)ovirt.org
You can reach the person managing the list at
users-owner(a)ovirt.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."
Today's Topics:
1. Re: is it possible to run ovirt node on Diskless HW?
(Doron Fediuck)
2. Re: Storage Domain Issue (Koen Vanoppen)
----------------------------------------------------------------------
Message: 1
Date: Mon, 8 Dec 2014 02:01:50 -0500 (EST)
From: Doron Fediuck <dfediuck(a)redhat.com>
To: Arman Khalatyan <arm2arm(a)gmail.com>
Cc: Ryan Barry <rbarry(a)redhat.com>, Fabian Deutsch
<fdeutsch(a)redhat.com>, users <users(a)ovirt.org>
Subject: Re: [ovirt-users] is it possible to run ovirt node on
Diskless HW?
Message-ID:
<1172482552.12144827.1418022110582.JavaMail.zimbra(a)redhat.com>
Content-Type: text/plain; charset=utf-8
For standard CentOS you may see other issues.
For example, let's assume you have a single NIC (eth0).
If you boot your host and then try to add it to the engine,
the host deploy procedure will try to create a management bridge
for the VMs using eth0. At this point your host will freeze, since your
root FS will be disconnected while the bridge is being created.
I did this ~6 years ago, and it required opening the initrd to handle
the above issue, as well as adding the NIC driver and creating the bridge
at that point. So it's not a trivial task, but doable with some hacking.
Doron
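As an aside, a rough illustration of why the root FS drops (assumed commands approximating what host-deploy does, not the actual vdsm code; the address is a placeholder):

    brctl addbr ovirtmgmt                  # create the management bridge
    brctl addif ovirtmgmt eth0             # enslave eth0 into the bridge
    ifconfig eth0 0.0.0.0 up               # eth0 loses its IP here -- an iSCSI/NFS
                                           # root behind eth0 stalls at this point
    ifconfig ovirtmgmt 192.168.0.5 netmask 255.255.255.0 up   # IP moves to the bridge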
----- Original Message -----
> From: "Arman Khalatyan" <arm2arm(a)gmail.com>
> To: "Doron Fediuck" <dfediuck(a)redhat.com>
> Cc: "users" <users(a)ovirt.org>, "Fabian Deutsch" <fdeutsch(a)redhat.com>, "Ryan Barry" <rbarry(a)redhat.com>, "Tolik
> Litovsky" <tlitovsk(a)redhat.com>, "Douglas Landgraf" <dougsland(a)redhat.com>
> Sent: Sunday, December 7, 2014 7:38:19 PM
> Subject: Re: [ovirt-users] is it possible to run ovirt node on Diskless HW?
>
> It is the standard CentOS 6.6 one.
> a.
>
> ***********************************************************
>
> Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für
> Astrophysik Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany
>
> ***********************************************************
>
>
> On Sun, Dec 7, 2014 at 6:04 PM, Doron Fediuck <dfediuck(a)redhat.com> wrote:
>
> >
> >
> > ----- Original Message -----
> > > From: "Arman Khalatyan" <arm2arm(a)gmail.com>
> > > To: "users" <users(a)ovirt.org>
> > > Sent: Wednesday, December 3, 2014 6:50:09 PM
> > > Subject: [ovirt-users] is it possible to run ovirt node on Diskless HW?
> > >
> > > Hello,
> > >
> > > Doing steps in:
> > >
> > https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/ht…
> > >
> > > I would like to know whether someone has succeeded in running the host on a
> > > diskless machine?
> > > I am using a CentOS 6.6 node with oVirt 3.5.
> > > Thanks,
> > > Arman.
> > >
> > >
> > >
> > >
> > > ***********************************************************
> > > Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für
> > Astrophysik
> > > Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany
> > > ***********************************************************
> > >
> >
> > Hi Arman,
> > Are you working with ovirt node or standard CentOS?
> >
> > Note that ovirt node is different, as it works like a live CD:
> > it runs from memory. In order to save some configurations (such
> > as networking) the local disk is used.
> >
>
------------------------------
Message: 2
Date: Mon, 8 Dec 2014 10:22:18 +0100
From: Koen Vanoppen <vanoppen.koen(a)gmail.com>
To: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] Storage Domain Issue
Message-ID:
<CACfY+MaPY9opHykNc7hmM4Wc0_HBuu6_fyi7wPMWP4RSCe6xYQ(a)mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
some more errors:
Thread-19::DEBUG::2014-12-08
10:20:02,700::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgck --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|'\'', '\''r|.*|'\'' ]
} global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1
use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } '
f130d166-546e-4905-8b8f-55a1c1dd2e4f (cwd None)
Thread-20::DEBUG::2014-12-08
10:20:02,817::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgck --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' eb912657-8a8c-4173-9d24-92d2b09a773c (cwd None)
Thread-20::DEBUG::2014-12-08
10:20:03,388::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
eb912657-8a8c-4173-9d24-92d2b09a773c (cwd None)
Thread-17::ERROR::2014-12-08
10:20:03,469::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-17::ERROR::2014-12-08
10:20:03,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-17::DEBUG::2014-12-08
10:20:03,482::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-17::DEBUG::2014-12-08
10:20:03,572::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-17::DEBUG::2014-12-08
10:20:03,631::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
f130d166-546e-4905-8b8f-55a1c1dd2e4f eb912657-8a8c-4173-9d24-92d2b09a773c
(cwd None)
Thread-14::ERROR::2014-12-08
10:20:05,785::task::866::Storage.TaskManager.Task::(_setError)
Task=`ffaf5100-e833-4d29-ac5d-f6f7f8ce2b5d`::Unexpected error
raise SecureError("Secured object is not in safe state")
SecureError: Secured object is not in safe state
Thread-14::ERROR::2014-12-08
10:20:05,797::dispatcher::79::Storage.Dispatcher::(wrapper) Secured object
is not in safe state
raise self.error
SecureError: Secured object is not in safe state
Thread-34::ERROR::2014-12-08
10:21:46,544::task::866::Storage.TaskManager.Task::(_setError)
Task=`82940da7-10c1-42f6-afca-3c0ac00c1487`::Unexpected error
raise SecureError("Secured object is not in safe state")
SecureError: Secured object is not in safe state
Thread-34::ERROR::2014-12-08
10:21:46,549::dispatcher::79::Storage.Dispatcher::(wrapper) Secured object
is not in safe state
raise self.error
SecureError: Secured object is not in safe state
2014-12-08 7:30 GMT+01:00 Koen Vanoppen <vanoppen.koen(a)gmail.com>:
> Dear all,
>
> We have updated our hypervisors with yum. This included an update of vdsm
> also. We are now on these versions:
> vdsm-4.16.7-1.gitdb83943.el6.x86_64
> vdsm-python-4.16.7-1.gitdb83943.el6.noarch
> vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch
> vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch
> vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch
> vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch
> vdsm-cli-4.16.7-1.gitdb83943.el6.noarch
>
> And ever since these updates we have experienced BIG trouble with our fibre
> connections. I've already updated the Brocade cards to the latest version.
> This seemed to help: the hosts came back up and saw the storage domains
> (before the Brocade update, they didn't even see their storage domains).
> But after a day or so, one of the hypervisors began to freak out again,
> coming up and going back down... Below you can find the errors:
>
>
> Thread-821::ERROR::2014-12-08
> 07:10:33,190::task::866::Storage.TaskManager.Task::(_setError)
> Task=`27cb9779-a8e9-4080-988d-9772c922710b`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-821::ERROR::2014-12-08
> 07:10:33,194::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-822::ERROR::2014-12-08
> 07:11:03,878::task::866::Storage.TaskManager.Task::(_setError)
> Task=`30177931-68c0-420f-950f-da5b770fe35c`::Unexpected error
> Thread-822::ERROR::2014-12-08
> 07:11:03,882::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Unknown pool id, pool not connected:
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
> Thread-813::ERROR::2014-12-08
> 07:11:07,634::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-813::ERROR::2014-12-08
> 07:11:07,634::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-813::DEBUG::2014-12-08
> 07:11:07,638::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-813::DEBUG::2014-12-08
> 07:11:07,835::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-813::ERROR::2014-12-08
> 07:11:07,896::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)
> Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have
> expected version 42 it is version 17
> Thread-813::ERROR::2014-12-08
> 07:11:07,903::task::866::Storage.TaskManager.Task::(_setError)
> Task=`c434f325-5193-4236-a04d-2fee9ac095bc`::Unexpected error
> Thread-813::ERROR::2014-12-08
> 07:11:07,946::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Wrong Master domain or its version:
> 'SD=78d84adf-7274-4efe-a711-fbec31196ece,
> pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
> Thread-823::ERROR::2014-12-08
> 07:11:43,993::task::866::Storage.TaskManager.Task::(_setError)
> Task=`9abbccd9-88a7-4632-b350-f9af1f65bebd`::Unexpected error
> Thread-823::ERROR::2014-12-08
> 07:11:43,998::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Unknown pool id, pool not connected:
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
> Thread-823::ERROR::2014-12-08
> 07:11:44,003::task::866::Storage.TaskManager.Task::(_setError)
> Task=`7ef1ac39-e7c2-4538-b30b-ab2fcefac01d`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:11:44,007::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-823::ERROR::2014-12-08
> 07:11:44,133::task::866::Storage.TaskManager.Task::(_setError)
> Task=`cc1ae82c-f3c4-4efa-9cd2-c62a27801e76`::Unexpected error
> Thread-823::ERROR::2014-12-08
> 07:11:44,137::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Unknown pool id, pool not connected:
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
> Thread-823::ERROR::2014-12-08
> 07:12:24,580::task::866::Storage.TaskManager.Task::(_setError)
> Task=`9bcbb87d-3093-4894-879b-3fe2b09ef351`::Unexpected error
> Thread-823::ERROR::2014-12-08
> 07:12:24,585::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Unknown pool id, pool not connected:
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
> Thread-823::ERROR::2014-12-08
> 07:13:04,926::task::866::Storage.TaskManager.Task::(_setError)
> Task=`8bdd0c1f-e681-4a8e-ad55-296c021389ed`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:13:04,931::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-823::ERROR::2014-12-08
> 07:13:45,342::task::866::Storage.TaskManager.Task::(_setError)
> Task=`160ea2a7-b6cb-4102-9df4-71ba87fd863e`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:13:45,346::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-823::ERROR::2014-12-08
> 07:14:25,879::task::866::Storage.TaskManager.Task::(_setError)
> Task=`985628db-8f48-44b5-8f61-631a922f7f71`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:14:25,883::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-823::ERROR::2014-12-08
> 07:15:06,175::task::866::Storage.TaskManager.Task::(_setError)
> Task=`ddca1c88-0565-41e8-bf0c-22eadcc75918`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:15:06,179::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-823::ERROR::2014-12-08
> 07:15:46,585::task::866::Storage.TaskManager.Task::(_setError)
> Task=`12bbded5-59ce-46d8-9e67-f48862a03606`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:15:46,589::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-814::ERROR::2014-12-08
> 07:16:08,619::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-814::ERROR::2014-12-08
> 07:16:08,619::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-814::DEBUG::2014-12-08
> 07:16:08,624::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-814::DEBUG::2014-12-08
> 07:16:08,740::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-814::ERROR::2014-12-08
> 07:16:08,812::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)
> Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have
> expected version 42 it is version 17
> Thread-814::ERROR::2014-12-08
> 07:16:08,820::task::866::Storage.TaskManager.Task::(_setError)
> Task=`5cdce5cd-6e6d-421e-bc2a-f999d8cbb056`::Unexpected error
> Thread-814::ERROR::2014-12-08
> 07:16:08,865::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Wrong Master domain or its version:
> 'SD=78d84adf-7274-4efe-a711-fbec31196ece,
> pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
> Thread-815::ERROR::2014-12-08
> 07:16:09,471::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-815::ERROR::2014-12-08
> 07:16:09,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-815::DEBUG::2014-12-08
> 07:16:09,476::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-815::DEBUG::2014-12-08
> 07:16:09,564::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-815::ERROR::2014-12-08
> 07:16:09,627::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)
> Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have
> expected version 42 it is version 17
> Thread-815::ERROR::2014-12-08
> 07:16:09,635::task::866::Storage.TaskManager.Task::(_setError)
> Task=`abfa0fd0-04b3-4c65-b3d0-be18b085a65d`::Unexpected error
> Thread-815::ERROR::2014-12-08
> 07:16:09,681::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Wrong Master domain or its version:
> 'SD=78d84adf-7274-4efe-a711-fbec31196ece,
> pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
> Thread-816::ERROR::2014-12-08
> 07:16:10,182::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-816::ERROR::2014-12-08
> 07:16:10,183::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-816::DEBUG::2014-12-08
> 07:16:10,187::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-823::ERROR::2014-12-08
> 07:16:27,163::task::866::Storage.TaskManager.Task::(_setError)
> Task=`9b0fd676-7941-40a7-a71e-0f1dee48a107`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:16:27,168::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.ovirt.org/pipermail/users/attachments/20141208/2f754047/attach…>
------------------------------
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
End of Users Digest, Vol 39, Issue 38
*************************************
<br>> 'SD=3D78d84adf-7274-4efe-a711-fbec31196ece,<br>> pool=3D1d03dc0=
5-008b-4d14-97ce-b17bd714183d'", 'code': 324}}<br>> Thread-816::ERROR::2=
014-12-08<br>> 07:16:10,182::sdc::137::Storage.StorageDomainCache::(_fin=
dDomain) looking<br>> for unfetched domain 78d84adf-7274-4efe-a711-fbec3=
1196ece<br>> Thread-816::ERROR::2014-12-08<br>> 07:16:10,183::sdc::15=
4::Storage.StorageDomainCache::(_findUnfetchedDomain)<br>> looking for d=
omain 78d84adf-7274-4efe-a711-fbec31196ece<br>> Thread-816::DEBUG::2014-=
12-08<br>> 07:16:10,187::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/s=
udo -n<br>> /sbin/lvm vgs --config ' devices { preferred_names =3D ["^/d=
ev/mapper/"]<br>> ignore_suspended_devices=3D1 write_cache_state=3D0 dis=
able_after_error_count=3D3<br>> obtain_device_list_from_udev=3D0 filter =
=3D [<br>> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapp=
er/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e00000000=
00000de|'\'',<br>> '\''r|.*|'\'' ] } global { locking_type=
=3D1 prioritise_write_locks=3D1<br>> wait_for_locks=3D1 use_=
lvmetad=3D0 } backup { retain_min =3D 50 retain_days =3D<=
br>> 0 } ' --noheadings --units b --nosuffix --separator '|'<br>> --i=
gnoreskippedcluster -o<br>> uuid,name,attr,size,free,extent_size,extent_=
count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>=
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>> Thread-823::ER=
ROR::2014-12-08<br>> 07:16:27,163::task::866::Storage.TaskManager.Task::=
(_setError)<br>> Task=3D`9b0fd676-7941-40a7-a71e-0f1dee48a107`::Unexpect=
ed error<br>> raise se.SpmStatusError()<br>> SpmStatusE=
rror: Not SPM: ()<br>> Thread-823::ERROR::2014-12-08<br>> 07:16:27,16=
8::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'messa=
ge': 'Not SPM: ()', 'code': 654}}<br>><br>><br>-------------- next pa=
rt --------------<br>An HTML attachment was scrubbed...<br>URL: <http://=
lists.ovirt.org/pipermail/users/attachments/20141208/2f754047/attachment.ht=
ml><br><div><br></div>------------------------------<br><div><br></div>_=
______________________________________________<br>Users mailing list<br>Use=
rs(a)ovirt.org<br>http://lists.ovirt.org/mailman/listinfo/users<br><div><br><=
/div><br>End of Users Digest, Vol 39, Issue 38<br>*************************=
************<br></div><div><br></div></div></body></html>
------=_Part_7746146_197691492.1418043545536--
1
0
Hello,
Error when I try to create a new network using the neutron provider:
"Error while executing action Add Subnet to Provider: Failed to communicate
with the external provider"
==> /var/log/neutron/server.log <==
2014-12-07 22:35:14.825 1061 INFO neutron.wsgi [-] (1061) accepted
('xxx.xxx.xxx.xxx', 42975)
2014-12-07 22:35:14.828 1061 INFO urllib3.connectionpool [-] Starting new
HTTP connection (1): 127.0.0.1
2014-12-07 22:35:14.920 1061 INFO neutron.plugins.ml2.db
[req-ba2a18ec-6e02-4526-99a8-27b35152781f None] Added segment
e0ad11df-9c5a-4167-82ea-313dcc626661
of type flat for network 213c62ce-e167-4bb0-bd2d-720dd06bc970
2014-12-07 22:35:14.930 1061 INFO neutron.wsgi
[req-ba2a18ec-6e02-4526-99a8-27b35152781f None] - - [07/Dec/2014 22:35:14]
"POST /v2.0/networ ks HTTP/1.1" 201
527 0.103579
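For debugging, the failing call can be replayed against the Neutron API
directly. A minimal sketch, assuming the default API port 9696, a valid
keystone token in $TOKEN and an example CIDR; the network id is the one
from the log above:

curl -s -X POST http://127.0.0.1:9696/v2.0/subnets \
  -H "Content-Type: application/json" \
  -H "X-Auth-Token: $TOKEN" \
  -d '{"subnet": {"network_id": "213c62ce-e167-4bb0-bd2d-720dd06bc970",
                  "ip_version": 4, "cidr": "10.0.0.0/24"}}'

If this succeeds, the problem is more likely in the engine-to-provider
communication (e.g. authentication) than in Neutron itself.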
Hi everyone!
Has anyone tried to add an SSD cache to a node using bcache or flashcache?
It seems we would have to change the procedure for adding a storage
domain.
Maybe it can be done in several days, but syncing the cache between nodes
seems a little tricky.
Do you have any idea?
Thanks
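For context, a minimal bcache setup sketch, assuming hypothetical devices
/dev/sdb (backing disk) and /dev/sdc (SSD); this is not an oVirt-integrated
procedure:

make-bcache -C /dev/sdc                       # format the SSD as a cache set
make-bcache -B /dev/sdb                       # format the backing device
bcache-super-show /dev/sdc | grep cset.uuid   # read the cache set UUID
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
# the storage domain would then be built on top of /dev/bcache0

As the poster notes, the hard part is shared storage: a per-node writeback
cache is unsafe on a domain visible to several hosts, because the other
nodes never see the cached writes; writethrough mode (or host-local
storage) avoids that.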
Dear all,
We have updated our hypervisors with yum. This included an update of vdsm
as well. We are now on these versions:
vdsm-4.16.7-1.gitdb83943.el6.x86_64
vdsm-python-4.16.7-1.gitdb83943.el6.noarch
vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch
vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-cli-4.16.7-1.gitdb83943.el6.noarch
Ever since these updates we have experienced big trouble with our fibre
connections. I've already updated the Brocade cards to the latest version.
This seemed to help: the hosts came back up and saw the storage domains
(before the Brocade update, they didn't even see their storage domains).
But after a day or so, one of the hypervisors began to freak out again,
coming up and going back down. Below you can find the errors:
Thread-821::ERROR::2014-12-08
07:10:33,190::task::866::Storage.TaskManager.Task::(_setError)
Task=`27cb9779-a8e9-4080-988d-9772c922710b`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-821::ERROR::2014-12-08
07:10:33,194::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
Thread-822::ERROR::2014-12-08
07:11:03,878::task::866::Storage.TaskManager.Task::(_setError)
Task=`30177931-68c0-420f-950f-da5b770fe35c`::Unexpected error
Thread-822::ERROR::2014-12-08
07:11:03,882::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': "Unknown pool id, pool not connected:
('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
Thread-813::ERROR::2014-12-08
07:11:07,634::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-813::ERROR::2014-12-08
07:11:07,634::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-813::DEBUG::2014-12-08
07:11:07,638::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-813::DEBUG::2014-12-08
07:11:07,835::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-813::ERROR::2014-12-08
07:11:07,896::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)
Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have
expected version 42 it is version 17
Thread-813::ERROR::2014-12-08
07:11:07,903::task::866::Storage.TaskManager.Task::(_setError)
Task=`c434f325-5193-4236-a04d-2fee9ac095bc`::Unexpected error
Thread-813::ERROR::2014-12-08
07:11:07,946::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': "Wrong Master domain or its version:
'SD=78d84adf-7274-4efe-a711-fbec31196ece,
pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
Thread-823::ERROR::2014-12-08
07:11:43,993::task::866::Storage.TaskManager.Task::(_setError)
Task=`9abbccd9-88a7-4632-b350-f9af1f65bebd`::Unexpected error
Thread-823::ERROR::2014-12-08
07:11:43,998::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': "Unknown pool id, pool not connected:
('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
Thread-823::ERROR::2014-12-08
07:11:44,003::task::866::Storage.TaskManager.Task::(_setError)
Task=`7ef1ac39-e7c2-4538-b30b-ab2fcefac01d`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-823::ERROR::2014-12-08
07:11:44,007::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
Thread-823::ERROR::2014-12-08
07:11:44,133::task::866::Storage.TaskManager.Task::(_setError)
Task=`cc1ae82c-f3c4-4efa-9cd2-c62a27801e76`::Unexpected error
Thread-823::ERROR::2014-12-08
07:11:44,137::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': "Unknown pool id, pool not connected:
('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
Thread-823::ERROR::2014-12-08
07:12:24,580::task::866::Storage.TaskManager.Task::(_setError)
Task=`9bcbb87d-3093-4894-879b-3fe2b09ef351`::Unexpected error
Thread-823::ERROR::2014-12-08
07:12:24,585::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': "Unknown pool id, pool not connected:
('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
Thread-823::ERROR::2014-12-08
07:13:04,926::task::866::Storage.TaskManager.Task::(_setError)
Task=`8bdd0c1f-e681-4a8e-ad55-296c021389ed`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-823::ERROR::2014-12-08
07:13:04,931::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
Thread-823::ERROR::2014-12-08
07:13:45,342::task::866::Storage.TaskManager.Task::(_setError)
Task=`160ea2a7-b6cb-4102-9df4-71ba87fd863e`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-823::ERROR::2014-12-08
07:13:45,346::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
Thread-823::ERROR::2014-12-08
07:14:25,879::task::866::Storage.TaskManager.Task::(_setError)
Task=`985628db-8f48-44b5-8f61-631a922f7f71`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-823::ERROR::2014-12-08
07:14:25,883::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
Thread-823::ERROR::2014-12-08
07:15:06,175::task::866::Storage.TaskManager.Task::(_setError)
Task=`ddca1c88-0565-41e8-bf0c-22eadcc75918`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-823::ERROR::2014-12-08
07:15:06,179::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
Thread-823::ERROR::2014-12-08
07:15:46,585::task::866::Storage.TaskManager.Task::(_setError)
Task=`12bbded5-59ce-46d8-9e67-f48862a03606`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-823::ERROR::2014-12-08
07:15:46,589::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
Thread-814::ERROR::2014-12-08
07:16:08,619::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-814::ERROR::2014-12-08
07:16:08,619::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-814::DEBUG::2014-12-08
07:16:08,624::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-814::DEBUG::2014-12-08
07:16:08,740::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-814::ERROR::2014-12-08
07:16:08,812::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)
Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have
expected version 42 it is version 17
Thread-814::ERROR::2014-12-08
07:16:08,820::task::866::Storage.TaskManager.Task::(_setError)
Task=`5cdce5cd-6e6d-421e-bc2a-f999d8cbb056`::Unexpected error
Thread-814::ERROR::2014-12-08
07:16:08,865::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': "Wrong Master domain or its version:
'SD=78d84adf-7274-4efe-a711-fbec31196ece,
pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
Thread-815::ERROR::2014-12-08
07:16:09,471::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-815::ERROR::2014-12-08
07:16:09,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-815::DEBUG::2014-12-08
07:16:09,476::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-815::DEBUG::2014-12-08
07:16:09,564::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-815::ERROR::2014-12-08
07:16:09,627::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)
Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have
expected version 42 it is version 17
Thread-815::ERROR::2014-12-08
07:16:09,635::task::866::Storage.TaskManager.Task::(_setError)
Task=`abfa0fd0-04b3-4c65-b3d0-be18b085a65d`::Unexpected error
Thread-815::ERROR::2014-12-08
07:16:09,681::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': "Wrong Master domain or its version:
'SD=78d84adf-7274-4efe-a711-fbec31196ece,
pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
Thread-816::ERROR::2014-12-08
07:16:10,182::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-816::ERROR::2014-12-08
07:16:10,183::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-816::DEBUG::2014-12-08
07:16:10,187::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-823::ERROR::2014-12-08
07:16:27,163::task::866::Storage.TaskManager.Task::(_setError)
Task=`9b0fd676-7941-40a7-a71e-0f1dee48a107`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-823::ERROR::2014-12-08
07:16:27,168::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
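For what it's worth, the host's own view of the pool and the SPM role can be
checked with vdsm-cli (a sketch, using the pool id that appears in the log):

vdsClient -s 0 getConnectedStoragePoolsList
vdsClient -s 0 getSpmStatus 1d03dc05-008b-4d14-97ce-b17bd714183d

The "Wrong Master domain or its version: ... expected version 42 it is
version 17" lines suggest the engine and the host disagree about the master
domain metadata, which points at the engine side rather than at yet another
host restart.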
Hello,
Following the steps in:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/ht…
I would like to know whether someone has succeeded in running the host on a
diskless machine.
I am using a CentOS 6.6 node with oVirt 3.5.
Thanks,
Arman.
***********************************************************
Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für
Astrophysik Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany
***********************************************************
Hello,
Can you help?
I tried to add the neutron provider on my oVirt node, and during
installation the following error occurred:
2014-12-07 15:15:31 INFO otopi.plugins.otopi.packagers.yumpackager
yumpackager.info:92 Yum install: 52/53:
openstack-neutron-openvswitch-2014.1.3-4.el6.noarch
2014-12-07 15:15:31 ERROR otopi.plugins.otopi.packagers.yumpackager
yumpackager.error:97 Yum Non-fatal POSTIN scriptlet failure in rpm package
openstack-neutron-openvswitch-2014.1.3-4.el6.noarch
2014-12-07 15:15:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum Script sink: error reading information on
service openstack-openvswitch-agent: No such file or directory
warning: %post(openstack-neutron-openvswitch-2014.1.3-4.el6.noarch)
scriptlet failed, exit status 1
2014-12-07 15:15:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum Done:
openstack-neutron-openvswitch-2014.1.3-4.el6.noarch
2014-12-07 15:15:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum Done:
openstack-neutron-openvswitch-2014.1.3-4.el6.noarch
I have already added the openstack repo on the node.
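To see what the failing %post scriptlet actually runs, it can be dumped from
the package; a sketch:

rpm -q --scripts openstack-neutron-openvswitch | less
chkconfig --list | grep -i openvswitch

The message ("error reading information on service
openstack-openvswitch-agent: No such file or directory") suggests the
scriptlet references an init service name that does not exist on the node;
since yum reports the failure as non-fatal, the package itself should still
be installed.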
Hi,
We're trying to set up an oVirt configuration with an oVirt-controller (CentOS 6), iSCSI-storage (Dell MD3200i) and 3 vm-hosts (CentOS 7) powered by 2 APC PDUs. Testing the Power Management settings in the web GUI, we get the following message: "Test Succeeded, unknown." The oVirt engine log outputs the following:
2014-12-05 11:23:00,872 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Host vm-02 from data center XXXX was chosen as a proxy to execute Status command on Host vm-03.
2014-12-05 11:23:00,879 INFO [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Using Host vm-02 from data center XXXX as proxy to execute Status command on Host
2014-12-05 11:23:00,904 INFO [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Executing <Status> Power Management command, Proxy Host:vm-02, Agent:apc, Target Host:, Management IP:***.***.***.***, User:apc, Options:, Fencing policy:null
2014-12-05 11:23:00,930 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) START, FenceVdsVDSCommand(HostName = vm-02, HostId = 071554fc-eed2-4e8f-b6bc-041248d0eaa5, targetVdsId = 67c642ed-0a7a-4e3b-8dd6-32a36df4aea9, action = Status, ip = ***.***.***.***, port = , type = apc, user = apc, password = ******, options = '', policy = 'null'), log id: 2803522
2014-12-05 11:23:01,137 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Power Management test failed for Host vm-03.Done
2014-12-05 11:23:01,138 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) FINISH, FenceVdsVDSCommand, return: Test Succeeded, unknown, log id: 2803522
2014-12-05 11:23:01,139 WARN [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Fencing operation failed with proxy host 071554fc-eed2-4e8f-b6bc-041248d0eaa5, trying another proxy...
2014-12-05 11:23:01,241 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Host vm-01 from data center XXXX was chosen as a proxy to execute Status command on Host vm-03.
2014-12-05 11:23:01,244 INFO [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Using Host vm-01 from data center XXXX as proxy to execute Status command on Host
2014-12-05 11:23:01,246 INFO [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Executing <Status> Power Management command, Proxy Host:vm-01, Agent:apc, Target Host:, Management IP:***.***.***.***, User:apc, Options:, Fencing policy:null
2014-12-05 11:23:01,273 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) START, FenceVdsVDSCommand(HostName = vm-01, HostId = c50eb9bf-5294-4d46-813d-7adfcb41d71d, targetVdsId = 67c642ed-0a7a-4e3b-8dd6-32a36df4aea9, action = Status, ip = ***.***.***.***, port = , type = apc, user = apc, password = ******, options = '', policy = 'null'), log id: 2b00de15
2014-12-05 11:23:01,449 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Power Management test failed for Host vm-03.Done
2014-12-05 11:23:01,451 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) FINISH, FenceVdsVDSCommand, return: Test Succeeded, unknown, log id: 2b00de15
This is the vdsm.log output:
JsonRpc (StompReactor)::DEBUG::2014-12-05 11:34:05,065::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2014-12-05 11:34:05,067::__init__::504::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-24996::DEBUG::2014-12-05 11:34:05,069::API::1188::vds::(fenceNode) fenceNode(addr=***.***.***.***,port=,agent=apc,user=apc,passwd=XXXX,action=status,secure=False,options=,policy=None)
Thread-24996::DEBUG::2014-12-05 11:34:05,069::utils::738::root::(execCmd) /usr/sbin/fence_apc (cwd None)
Thread-24996::DEBUG::2014-12-05 11:34:05,131::utils::758::root::(execCmd) FAILED: <err> = "Failed: You have to enter plug number or machine identification\nPlease use '-h' for usage\n"; <rc> = 1
Thread-24996::DEBUG::2014-12-05 11:34:05,131::API::1143::vds::(fence) rc 1 inp agent=fence_apc
ipaddr=***.***.***.***
login=apc
action=status
passwd=XXXX
out [] err ['Failed: You have to enter plug number or machine identification', "Please use '-h' for usage"]
The 'port' and 'options' fields show up as empty, even if we enter '22' or 'port=22'. We did enter the slot number as well.
Entering the fence_apc command manually, we get:
fence_apc -a ***.***.***.*** -l apc -p ****** -o status -n 1 -x
Status: ON
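vdsm drives the fence agent through a key=value list on stdin (visible as
"inp" in the vdsm.log above, where no port= line appears). A sketch of that
interface with hypothetical values, to compare against the GUI-driven call
(secure=1 is assumed here as the stdin equivalent of -x):

printf 'ipaddr=***.***.***.***\nlogin=apc\npasswd=******\naction=status\nport=1\nsecure=1\n' | /usr/sbin/fence_apc

If this form also works, the slot/port value is apparently being dropped
somewhere between the engine and vdsm rather than by the agent.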
Anyone have an idea what could be the problem?
Thanks for your time and kind regards,
Wout
Re: [ovirt-users] ksmd high cpu usage from almost a week with just one vm running
by Markus Stockhausen 07 Dec '14
Memory usage > 80%: ksm kicks in. There it will run at full speed until
usage is below 80%. There is an open BZ from me. Bad behaviour is controlled
by mom.

Markus

On 06.12.2014 15:58, mad Engineer <themadengin33r(a)gmail.com> wrote:
Hello All,
I am using CentOS 6.5 x64 on a server with 48 G RAM and 8 cores, managed by
oVirt. There is only one running VM, with 34 G RAM and 6 VCPUs (pinned to
the proper NUMA nodes).

From top:

top - 06:42:48 up 67 days, 20:05,  1 user,  load average: 0.26, 0.20, 0.17
Tasks: 285 total,   2 running, 282 sleeping,   0 stopped,   1 zombie
Cpu(s):  1.0%us,  1.4%sy,  0.0%ni, 97.5%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  49356468k total, 33977684k used, 15378784k free,   142812k buffers
Swap: 12337144k total,        0k used, 12337144k free,   343052k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  101 root      25   5     0    0    0 R 27.4  0.0   5650:04 [ksmd]
26004 vdsm       0 -20 3371m  64m 9400 S  9.8  0.1   1653:27
/usr/bin/python /usr/share/vdsm/vdsm --pidfile /var/run/vdsm/vdsmd.pid
20963 qemu      20   0 38.5g  33g 6792 S  3.9 71.6   5225:43
/usr/libexec/qemu-kvm -name Cinder -S -M rhel6.5.0 -cpu Nehalem
-enable-kvm -m 34096 -realtime mlock=off -smp
6,maxcpus=160,sockets=80,c

From /sys/kernel/mm/ksm:

pages_unshared  7602322
pages_shared     207023
pages_to_scan        64
pages_volatile    31678

Any idea why ksmd is not coming back to normal CPU usage? On a different
server ksmd was disabled; for testing, when I enabled it, CPU usage was
initially high but later settled down to 3%. On that host I have 4 VMs
running.

Before turning off ksmd, can anyone help me find out why it is behaving
like this? The host initially had 2 virtual machines; because of the high
CPU utilization of this guest, the other was migrated to another host.

Thanks
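For testing, ksmd can be stopped through the kernel's sysfs interface (a
sketch; note that mom may turn it back on while vdsm is running):

echo 0 > /sys/kernel/mm/ksm/run   # stop ksmd
echo 2 > /sys/kernel/mm/ksm/run   # stop and unmerge already-shared pages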