Re: [ovirt-users] ssd cache
by Ernest Beinrohr
On 09.12.2014 at 03:57, yao xu wrote:
> do you use iscsi ?
>
Yes, the storage presents itself as iSCSI to oVirt. The tgtd iSCSI
target, which runs on RHEL 7, does NOT know about the cache; it simply
shares an LVM block device. I created the cache using this tutorial:
http://blog.kylemanna.com/linux/2013/06/30/ssd-caching-using-dmcache-tuto...
(or this, but you need a subscription:
https://access.redhat.com/solutions/912953 )
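For the archives: the cache setup from that tutorial boils down to roughly
the following (a sketch; the SSD device name /dev/sdb and the metadata LV
size are assumptions, not my exact layout):
  # add the SSD to the existing VG
  pvcreate /dev/sdb
  vgextend rhel /dev/sdb
  # create the cache data and metadata LVs on the SSD
  lvcreate -L 100G -n lv_cache rhel /dev/sdb
  lvcreate -L 1G -n lv_cache_meta rhel /dev/sdb
  # combine them into a cache pool and attach it to the big LV
  lvconvert --type cache-pool --poolmetadata rhel/lv_cache_meta rhel/lv_cache
  lvconvert --type cache --cachepool rhel/lv_cache rhel/BigCachedPool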
tgtd.conf:
<target iqn.2014-05.sk.axonpro.sk:BigCachedPool>
scsi_id STORAGE_BigCachedPool
vendor_id AXONPRO
product_id scsi-target-utils
scsi_sn 42fbb7c1-99d4-4247-a55b-222e5abe13aa
backing-store /dev/rhel/BigCachedPool
incominguser ovirt xxxx
</target>
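After editing the config, the target definition can be reloaded without
restarting tgtd, for example (from memory, so treat this as a sketch):
  tgt-admin --update ALL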
lvs:
LV            VG   Attr       LSize   Pool     Origin                Data% Move Log Cpy%Sync Convert
BigCachedPool rhel Cwi-aoC--- 21,00t  lv_cache [BigCachedPool_corig]
lv_cache      rhel Cwi-a-C--- 100,00g
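To check whether the cache actually helps, the hit/miss counters can be
read from the device-mapper status line (a sketch; the dm name follows the
usual VG-LV naming convention):
  dmsetup status rhel-BigCachedPool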
--
Ernest Beinrohr, AXON PRO
Ing <http://www.beinrohr.sk/ing.php>, RHCE
<http://www.beinrohr.sk/rhce.php>, RHCVA
<http://www.beinrohr.sk/rhce.php>, LPIC
<http://www.beinrohr.sk/lpic.php>, VCA <http://www.beinrohr.sk/vca.php>,
+421-2-62410360 +421-903-482603
[QE][ACTION REQUIRED] oVirt 3.5.1 RC status - postponed
by Sandro Bonazzola
Hi,
We were supposed to start composing the oVirt 3.5.1 RC today, *2014-12-09 08:00 UTC*, from the 3.5 branch.
We still have blockers for the oVirt 3.5.1 RC release, so we need to postpone it until they are fixed.
Being so near to the winter holidays, we need to discuss a new tentative date for the RC in tomorrow's sync meeting.
The bug tracker [1] shows 1 open blocker:
Bug ID   Whiteboard  Status  Summary
1160846  sla         NEW     Can't add disk to VM without specifying disk profile when the storage domain has more than one disk profile
In order to stabilize the release, a new branch, ovirt-engine-3.5.1, will be created from the same git hash used for composing the RC.
Maintainers:
- Please be sure that the 3.5 snapshot allows creating VMs
- Please be sure that no pending patches are going to block the release
- If any patch must block the RC release, please raise the issue as soon as possible.
There are still 65 bugs [2] targeted to 3.5.1.
Excluding node and documentation bugs we still have 44 bugs [3] targeted to 3.5.1.
Maintainers / Assignee:
- Please add the bugs to the tracker if you think that 3.5.1 should not be released without them being fixed.
- Please update the target to 3.5.2 or later for bugs that won't be in 3.5.1:
it will ease gathering the blocking bugs for the next releases.
- Please fill in the release notes; the page has been created here [4]
Community:
- If you're testing oVirt 3.5 nightly snapshot, please add yourself to the test page [5]
[1] http://bugzilla.redhat.com/1155170
[2] http://goo.gl/7G0PDV
[3] http://goo.gl/6gUbVr
[4] http://www.ovirt.org/OVirt_3.5.1_Release_Notes
[5] http://www.ovirt.org/Testing/oVirt_3.5.1_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
JSON mapping exception
by Raul Laansoo
Hi.
Is it somehow possible to manually recover the engine from the following error, possibly caused by https://bugzilla.redhat.com/show_bug.cgi?id=1155084?
oVirt 3.5
2014-12-08 23:24:41,922 ERROR [org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl] (DefaultQuartzScheduler_Worker-40) Failed to invoke scheduled method invokeCallbackMethods: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor100.invoke(Unknown Source) [:1.7.0_65]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_65]
at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_65]
at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60) [scheduler.jar:]
at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz.jar:]
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) [quartz.jar:]
Caused by: org.apache.commons.lang.SerializationException: org.codehaus.jackson.map.JsonMappingException: Unexpected token (END_ARRAY), expected VALUE_STRING: need JSON String that contains type id (for subtype of java.util.Collection)
at [Source: java.io.StringReader@6a4d8b78; line: 22, column: 22] (through reference chain: org.ovirt.engine.core.common.action.AddVmFromSnapshotParameters["parametersCurrentUser"]->org.ovirt.engine.core.common.businessentities.aaa.DbUser["groupNames"])
at org.ovirt.engine.core.utils.serialization.json.JsonObjectDeserializer.readJsonString(JsonObjectDeserializer.java:91) [utils.jar:]
at org.ovirt.engine.core.utils.serialization.json.JsonObjectDeserializer.deserialize(JsonObjectDeserializer.java:60) [utils.jar:]
at org.ovirt.engine.core.dao.CommandEntityDaoDbFacadeImpl.deserializeParameters(CommandEntityDaoDbFacadeImpl.java:97) [dal.jar:]
at org.ovirt.engine.core.dao.CommandEntityDaoDbFacadeImpl.access$000(CommandEntityDaoDbFacadeImpl.java:21) [dal.jar:]
at org.ovirt.engine.core.dao.CommandEntityDaoDbFacadeImpl$1.mapRow(CommandEntityDaoDbFacadeImpl.java:34) [dal.jar:]
at org.ovirt.engine.core.dao.CommandEntityDaoDbFacadeImpl$1.mapRow(CommandEntityDaoDbFacadeImpl.java:23) [dal.jar:]
at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:92) [spring-jdbc.jar:3.1.1.RELEASE]
at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:1) [spring-jdbc.jar:3.1.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate$1.doInPreparedStatement(JdbcTemplate.java:649) [spring-jdbc.jar:3.1.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:587) [spring-jdbc.jar:3.1.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:637) [spring-jdbc.jar:3.1.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:666) [spring-jdbc.jar:3.1.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:706) [spring-jdbc.jar:3.1.1.RELEASE]
at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:154) [dal.jar:]
at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.doExecute(PostgresDbEngineDialect.java:120) [dal.jar:]
at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:181) [spring-jdbc.jar:3.1.1.RELEASE]
at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:141) [dal.jar:]
at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeReadList(SimpleJdbcCallsHandler.java:103) [dal.jar:]
at org.ovirt.engine.core.dao.DefaultReadDaoDbFacade.getAll(DefaultReadDaoDbFacade.java:77) [dal.jar:]
at org.ovirt.engine.core.bll.tasks.CommandsCacheImpl.initializeCache(CommandsCacheImpl.java:30) [bll.jar:]
at org.ovirt.engine.core.bll.tasks.CommandsCacheImpl.keySet(CommandsCacheImpl.java:41) [bll.jar:]
at org.ovirt.engine.core.bll.tasks.CommandCoordinatorImpl.getCommandsWithCallBackEnabled(CommandCoordinatorImpl.java:130) [bll.jar:]
at org.ovirt.engine.core.bll.tasks.CommandExecutor.initCommandExecutor(CommandExecutor.java:119) [bll.jar:]
at org.ovirt.engine.core.bll.tasks.CommandExecutor.invokeCallbackMethods(CommandExecutor.java:57) [bll.jar:]
... 6 more
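My guess so far is that the engine chokes at startup on stale serialized
command parameters, so I was considering something along these lines (a
sketch only; I am assuming the table is command_entities based on the DAO
in the trace, and I would back up the DB first):
  service ovirt-engine stop
  su - postgres -c "pg_dump engine > /tmp/engine-backup.sql"
  # inspect the persisted commands
  su - postgres -c "psql engine -c 'SELECT command_id, command_type FROM command_entities;'"
  # if only stale entries remain, clear them
  su - postgres -c "psql engine -c 'DELETE FROM command_entities;'"
  service ovirt-engine start
Is that a sane approach, or is there a supported way?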
Thank you.
---
Raul
Re: [ovirt-users] Users Digest, Vol 39, Issue 38
by Nikolai Sednev
Hi all,
I was thinking of "booting from iSCSI SAN", which means using a LUN placed on the storage in order to boot your host over the network.
In this case you might configure your host's HW to boot from iSCSI, and then you won't need any HD in the host.
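For example, with iPXE the firmware can hand the LUN over as the boot disk
with something like this (a sketch; the portal address and IQN are made up):
  #!ipxe
  dhcp
  sanboot iscsi:192.0.2.10::::iqn.2014-12.org.example:boot-lun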
+adding more people to add their comments.
Thanks in advance.
Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501
Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev
----- Original Message -----
From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Monday, December 8, 2014 11:22:27 AM
Subject: Users Digest, Vol 39, Issue 38
Send Users mailing list submissions to
users(a)ovirt.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.ovirt.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
users-request(a)ovirt.org
You can reach the person managing the list at
users-owner(a)ovirt.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."
Today's Topics:
1. Re: is it possible to run ovirt node on Diskless HW?
(Doron Fediuck)
2. Re: Storage Domain Issue (Koen Vanoppen)
----------------------------------------------------------------------
Message: 1
Date: Mon, 8 Dec 2014 02:01:50 -0500 (EST)
From: Doron Fediuck <dfediuck(a)redhat.com>
To: Arman Khalatyan <arm2arm(a)gmail.com>
Cc: Ryan Barry <rbarry(a)redhat.com>, Fabian Deutsch
<fdeutsch(a)redhat.com>, users <users(a)ovirt.org>
Subject: Re: [ovirt-users] is it possible to run ovirt node on
Diskless HW?
Message-ID:
<1172482552.12144827.1418022110582.JavaMail.zimbra(a)redhat.com>
Content-Type: text/plain; charset=utf-8
For standard centos you may see other issues.
For example, let's assume you have a single NIC (eth0).
If you boot your host and then try to add it to the engine,
the host deploy procedure will try to create a management bridge
for the VMs using eth0. At this point your host will freeze since your
root FS will be disconnected while creating the bridge.
I did this ~6 years ago, and it required opening the initrd to handle
the above issue, as well as adding the NIC driver and creating the bridge
at this point. So it's not a trivial task but doable with some hacking.
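Roughly, the initrd has to bring the management bridge up itself before the
root FS mounts, something like (a sketch using legacy bridge-utils syntax;
interface names assumed):
  brctl addbr ovirtmgmt
  brctl addif ovirtmgmt eth0
  ip link set eth0 up
  ip link set ovirtmgmt up
  dhclient ovirtmgmt
That way host-deploy finds the bridge already in place instead of tearing
down eth0 under the root FS.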
Doron
----- Original Message -----
> From: "Arman Khalatyan" <arm2arm(a)gmail.com>
> To: "Doron Fediuck" <dfediuck(a)redhat.com>
> Cc: "users" <users(a)ovirt.org>, "Fabian Deutsch" <fdeutsch(a)redhat.com>, "Ryan Barry" <rbarry(a)redhat.com>, "Tolik
> Litovsky" <tlitovsk(a)redhat.com>, "Douglas Landgraf" <dougsland(a)redhat.com>
> Sent: Sunday, December 7, 2014 7:38:19 PM
> Subject: Re: [ovirt-users] is it possible to run ovirt node on Diskless HW?
>
> It is the standard CentOS 6.6 one.
> a.
>
> ***********************************************************
>
> Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für
> Astrophysik Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany
>
> ***********************************************************
>
>
> On Sun, Dec 7, 2014 at 6:04 PM, Doron Fediuck <dfediuck(a)redhat.com> wrote:
>
> >
> >
> > ----- Original Message -----
> > > From: "Arman Khalatyan" <arm2arm(a)gmail.com>
> > > To: "users" <users(a)ovirt.org>
> > > Sent: Wednesday, December 3, 2014 6:50:09 PM
> > > Subject: [ovirt-users] is it possible to run ovirt node on Diskless HW?
> > >
> > > Hello,
> > >
> > > Doing steps in:
> > >
> > https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/...
> > >
> > > I would like to know whether someone has succeeded in running the host on a diskless
> > > machine?
> > > I am using a CentOS 6.6 node with oVirt 3.5.
> > > Thanks,
> > > Arman.
> > >
> > >
> > >
> > >
> > > ***********************************************************
> > > Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für
> > Astrophysik
> > > Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany
> > > ***********************************************************
> > >
> >
> > Hi Arman,
> > Are you working with ovirt node or standard CentOS?
> >
> > Note that ovirt node is different as it works like a live CD -
> > it runs from memory. In order to save some configurations (such
> > as networking) the local disk is used.
> >
>
------------------------------
Message: 2
Date: Mon, 8 Dec 2014 10:22:18 +0100
From: Koen Vanoppen <vanoppen.koen(a)gmail.com>
To: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] Storage Domain Issue
Message-ID:
<CACfY+MaPY9opHykNc7hmM4Wc0_HBuu6_fyi7wPMWP4RSCe6xYQ(a)mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
some more errors:
Thread-19::DEBUG::2014-12-08
10:20:02,700::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgck --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|'\'', '\''r|.*|'\'' ]
} global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1
use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } '
f130d166-546e-4905-8b8f-55a1c1dd2e4f (cwd None)
Thread-20::DEBUG::2014-12-08
10:20:02,817::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgck --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' eb912657-8a8c-4173-9d24-92d2b09a773c (cwd None)
Thread-20::DEBUG::2014-12-08
10:20:03,388::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
eb912657-8a8c-4173-9d24-92d2b09a773c (cwd None)
Thread-17::ERROR::2014-12-08
10:20:03,469::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-17::ERROR::2014-12-08
10:20:03,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-17::DEBUG::2014-12-08
10:20:03,482::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-17::DEBUG::2014-12-08
10:20:03,572::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-17::DEBUG::2014-12-08
10:20:03,631::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
f130d166-546e-4905-8b8f-55a1c1dd2e4f eb912657-8a8c-4173-9d24-92d2b09a773c
(cwd None)
Thread-14::ERROR::2014-12-08
10:20:05,785::task::866::Storage.TaskManager.Task::(_setError)
Task=`ffaf5100-e833-4d29-ac5d-f6f7f8ce2b5d`::Unexpected error
raise SecureError("Secured object is not in safe state")
SecureError: Secured object is not in safe state
Thread-14::ERROR::2014-12-08
10:20:05,797::dispatcher::79::Storage.Dispatcher::(wrapper) Secured object
is not in safe state
raise self.error
SecureError: Secured object is not in safe state
Thread-34::ERROR::2014-12-08
10:21:46,544::task::866::Storage.TaskManager.Task::(_setError)
Task=`82940da7-10c1-42f6-afca-3c0ac00c1487`::Unexpected error
raise SecureError("Secured object is not in safe state")
SecureError: Secured object is not in safe state
Thread-34::ERROR::2014-12-08
10:21:46,549::dispatcher::79::Storage.Dispatcher::(wrapper) Secured object
is not in safe state
raise self.error
SecureError: Secured object is not in safe stat
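While debugging, it may help to check which version the domain itself
records; for a block domain the metadata is kept as VG tags, so something
like this should show it (a sketch; I am assuming vdsm's MDT_* tag
convention for block domains):
  vgs --noheadings -o vg_name,vg_tags 78d84adf-7274-4efe-a711-fbec31196ece \
    | tr ',' '\n' | grep MDT_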
2014-12-08 7:30 GMT+01:00 Koen Vanoppen <vanoppen.koen(a)gmail.com>:
> Dear all,
>
> We have updated our hypervisors with yum. This included an update of vdsm
> as well. We are now on these versions:
> vdsm-4.16.7-1.gitdb83943.el6.x86_64
> vdsm-python-4.16.7-1.gitdb83943.el6.noarch
> vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch
> vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch
> vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch
> vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch
> vdsm-cli-4.16.7-1.gitdb83943.el6.noarch
>
> And ever since these updates we have experienced BIG troubles with our fibre
> connections. I've already updated the Brocade cards to the latest version.
> This seemed to help; the hosts came back up and saw the storage domains
> (before the Brocade update, they didn't even see their storage domains).
> But after a day or so, one of the hypervisors began to freak out again,
> coming up and going back down... Below you can find the errors:
>
>
> Thread-821::ERROR::2014-12-08
> 07:10:33,190::task::866::Storage.TaskManager.Task::(_setError)
> Task=`27cb9779-a8e9-4080-988d-9772c922710b`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-821::ERROR::2014-12-08
> 07:10:33,194::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-822::ERROR::2014-12-08
> 07:11:03,878::task::866::Storage.TaskManager.Task::(_setError)
> Task=`30177931-68c0-420f-950f-da5b770fe35c`::Unexpected error
> Thread-822::ERROR::2014-12-08
> 07:11:03,882::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Unknown pool id, pool not connected:
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
> Thread-813::ERROR::2014-12-08
> 07:11:07,634::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-813::ERROR::2014-12-08
> 07:11:07,634::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-813::DEBUG::2014-12-08
> 07:11:07,638::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-813::DEBUG::2014-12-08
> 07:11:07,835::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-813::ERROR::2014-12-08
> 07:11:07,896::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)
> Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have
> expected version 42 it is version 17
> Thread-813::ERROR::2014-12-08
> 07:11:07,903::task::866::Storage.TaskManager.Task::(_setError)
> Task=`c434f325-5193-4236-a04d-2fee9ac095bc`::Unexpected error
> Thread-813::ERROR::2014-12-08
> 07:11:07,946::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Wrong Master domain or its version:
> 'SD=78d84adf-7274-4efe-a711-fbec31196ece,
> pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
> Thread-823::ERROR::2014-12-08
> 07:11:43,993::task::866::Storage.TaskManager.Task::(_setError)
> Task=`9abbccd9-88a7-4632-b350-f9af1f65bebd`::Unexpected error
> Thread-823::ERROR::2014-12-08
> 07:11:43,998::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Unknown pool id, pool not connected:
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
> Thread-823::ERROR::2014-12-08
> 07:11:44,003::task::866::Storage.TaskManager.Task::(_setError)
> Task=`7ef1ac39-e7c2-4538-b30b-ab2fcefac01d`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:11:44,007::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-823::ERROR::2014-12-08
> 07:11:44,133::task::866::Storage.TaskManager.Task::(_setError)
> Task=`cc1ae82c-f3c4-4efa-9cd2-c62a27801e76`::Unexpected error
> Thread-823::ERROR::2014-12-08
> 07:11:44,137::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Unknown pool id, pool not connected:
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
> Thread-823::ERROR::2014-12-08
> 07:12:24,580::task::866::Storage.TaskManager.Task::(_setError)
> Task=`9bcbb87d-3093-4894-879b-3fe2b09ef351`::Unexpected error
> Thread-823::ERROR::2014-12-08
> 07:12:24,585::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Unknown pool id, pool not connected:
> ('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
> Thread-823::ERROR::2014-12-08
> 07:13:04,926::task::866::Storage.TaskManager.Task::(_setError)
> Task=`8bdd0c1f-e681-4a8e-ad55-296c021389ed`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:13:04,931::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-823::ERROR::2014-12-08
> 07:13:45,342::task::866::Storage.TaskManager.Task::(_setError)
> Task=`160ea2a7-b6cb-4102-9df4-71ba87fd863e`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:13:45,346::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-823::ERROR::2014-12-08
> 07:14:25,879::task::866::Storage.TaskManager.Task::(_setError)
> Task=`985628db-8f48-44b5-8f61-631a922f7f71`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:14:25,883::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-823::ERROR::2014-12-08
> 07:15:06,175::task::866::Storage.TaskManager.Task::(_setError)
> Task=`ddca1c88-0565-41e8-bf0c-22eadcc75918`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:15:06,179::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-823::ERROR::2014-12-08
> 07:15:46,585::task::866::Storage.TaskManager.Task::(_setError)
> Task=`12bbded5-59ce-46d8-9e67-f48862a03606`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:15:46,589::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
> Thread-814::ERROR::2014-12-08
> 07:16:08,619::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-814::ERROR::2014-12-08
> 07:16:08,619::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-814::DEBUG::2014-12-08
> 07:16:08,624::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-814::DEBUG::2014-12-08
> 07:16:08,740::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-814::ERROR::2014-12-08
> 07:16:08,812::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)
> Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have
> expected version 42 it is version 17
> Thread-814::ERROR::2014-12-08
> 07:16:08,820::task::866::Storage.TaskManager.Task::(_setError)
> Task=`5cdce5cd-6e6d-421e-bc2a-f999d8cbb056`::Unexpected error
> Thread-814::ERROR::2014-12-08
> 07:16:08,865::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Wrong Master domain or its version:
> 'SD=78d84adf-7274-4efe-a711-fbec31196ece,
> pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
> Thread-815::ERROR::2014-12-08
> 07:16:09,471::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-815::ERROR::2014-12-08
> 07:16:09,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-815::DEBUG::2014-12-08
> 07:16:09,476::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-815::DEBUG::2014-12-08
> 07:16:09,564::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-815::ERROR::2014-12-08
> 07:16:09,627::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)
> Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have
> expected version 42 it is version 17
> Thread-815::ERROR::2014-12-08
> 07:16:09,635::task::866::Storage.TaskManager.Task::(_setError)
> Task=`abfa0fd0-04b3-4c65-b3d0-be18b085a65d`::Unexpected error
> Thread-815::ERROR::2014-12-08
> 07:16:09,681::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Wrong Master domain or its version:
> 'SD=78d84adf-7274-4efe-a711-fbec31196ece,
> pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
> Thread-816::ERROR::2014-12-08
> 07:16:10,182::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
> for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-816::ERROR::2014-12-08
> 07:16:10,183::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
> Thread-816::DEBUG::2014-12-08
> 07:16:10,187::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
> obtain_device_list_from_udev=0 filter = [
> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
> 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
> Thread-823::ERROR::2014-12-08
> 07:16:27,163::task::866::Storage.TaskManager.Task::(_setError)
> Task=`9b0fd676-7941-40a7-a71e-0f1dee48a107`::Unexpected error
> raise se.SpmStatusError()
> SpmStatusError: Not SPM: ()
> Thread-823::ERROR::2014-12-08
> 07:16:27,168::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Not SPM: ()', 'code': 654}}
>
>
------------------------------
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
End of Users Digest, Vol 39, Issue 38
*************************************
ter_error_count=3D3<br>> obtain_device_list_from_udev=3D0 filter =3D [<b=
r>> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/3600=
5076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de=
|'\'',<br>> '\''r|.*|'\'' ] } global { locking_type=3D1 &nbs=
p;prioritise_write_locks=3D1<br>> wait_for_locks=3D1 use_lvmetad=
=3D0 } backup { retain_min =3D 50 retain_days =3D<br>>=
0 } ' --noheadings --units b --nosuffix --separator '|'<br>> --ignoresk=
ippedcluster -o<br>> uuid,name,attr,size,free,extent_size,extent_count,f=
ree_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>> 78=
d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>> Thread-815::DEBUG::20=
14-12-08<br>> 07:16:09,564::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bi=
n/sudo -n<br>> /sbin/lvm vgs --config ' devices { preferred_names =3D ["=
^/dev/mapper/"]<br>> ignore_suspended_devices=3D1 write_cache_state=3D0 =
disable_after_error_count=3D3<br>> obtain_device_list_from_udev=3D0 filt=
er =3D [<br>> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/m=
apper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e00000=
00000000de|'\'',<br>> '\''r|.*|'\'' ] } global { locking_typ=
e=3D1 prioritise_write_locks=3D1<br>> wait_for_locks=3D1 use=
_lvmetad=3D0 } backup { retain_min =3D 50 retain_days =3D=
<br>> 0 } ' --noheadings --units b --nosuffix --separator '|'<br>> --=
ignoreskippedcluster -o<br>> uuid,name,attr,size,free,extent_size,extent=
_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br=
>> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>> Thread-815::E=
RROR::2014-12-08<br>> 07:16:09,627::spbackends::271::Storage.StoragePool=
DiskBackend::(validateMasterDomainVersion)<br>> Requested master domain =
78d84adf-7274-4efe-a711-fbec31196ece does not have<br>> expected version=
42 it is version 17<br>> Thread-815::ERROR::2014-12-08<br>> 07:16:09=
,635::task::866::Storage.TaskManager.Task::(_setError)<br>> Task=3D`abfa=
0fd0-04b3-4c65-b3d0-be18b085a65d`::Unexpected error<br>> Thread-815::ERR=
OR::2014-12-08<br>> 07:16:09,681::dispatcher::76::Storage.Dispatcher::(w=
rapper) {'status':<br>> {'message': "Wrong Master domain or its version:=
<br>> 'SD=3D78d84adf-7274-4efe-a711-fbec31196ece,<br>> pool=3D1d03dc0=
5-008b-4d14-97ce-b17bd714183d'", 'code': 324}}<br>> Thread-816::ERROR::2=
014-12-08<br>> 07:16:10,182::sdc::137::Storage.StorageDomainCache::(_fin=
dDomain) looking<br>> for unfetched domain 78d84adf-7274-4efe-a711-fbec3=
1196ece<br>> Thread-816::ERROR::2014-12-08<br>> 07:16:10,183::sdc::15=
4::Storage.StorageDomainCache::(_findUnfetchedDomain)<br>> looking for d=
omain 78d84adf-7274-4efe-a711-fbec31196ece<br>> Thread-816::DEBUG::2014-=
12-08<br>> 07:16:10,187::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/s=
udo -n<br>> /sbin/lvm vgs --config ' devices { preferred_names =3D ["^/d=
ev/mapper/"]<br>> ignore_suspended_devices=3D1 write_cache_state=3D0 dis=
able_after_error_count=3D3<br>> obtain_device_list_from_udev=3D0 filter =
=3D [<br>> '\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapp=
er/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e00000000=
00000de|'\'',<br>> '\''r|.*|'\'' ] } global { locking_type=
=3D1 prioritise_write_locks=3D1<br>> wait_for_locks=3D1 use_=
lvmetad=3D0 } backup { retain_min =3D 50 retain_days =3D<=
br>> 0 } ' --noheadings --units b --nosuffix --separator '|'<br>> --i=
gnoreskippedcluster -o<br>> uuid,name,attr,size,free,extent_size,extent_=
count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>=
> 78d84adf-7274-4efe-a711-fbec31196ece (cwd None)<br>> Thread-823::ER=
ROR::2014-12-08<br>> 07:16:27,163::task::866::Storage.TaskManager.Task::=
(_setError)<br>> Task=3D`9b0fd676-7941-40a7-a71e-0f1dee48a107`::Unexpect=
ed error<br>> raise se.SpmStatusError()<br>> SpmStatusE=
rror: Not SPM: ()<br>> Thread-823::ERROR::2014-12-08<br>> 07:16:27,16=
8::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':<br>> {'messa=
ge': 'Not SPM: ()', 'code': 654}}<br>><br>><br>-------------- next pa=
rt --------------<br>An HTML attachment was scrubbed...<br>URL: <http://=
lists.ovirt.org/pipermail/users/attachments/20141208/2f754047/attachment.ht=
ml><br><div><br></div>------------------------------<br><div><br></div>_=
______________________________________________<br>Users mailing list<br>Use=
rs(a)ovirt.org<br>http://lists.ovirt.org/mailman/listinfo/users<br><div><br><=
/div><br>End of Users Digest, Vol 39, Issue 38<br>*************************=
************<br></div><div><br></div></div></body></html>
------=_Part_7746146_197691492.1418043545536--
10 years
Error when trying to add a new network using the neutron provider
by Eduardo Terzella
Hello,
I get the following error when I try to create a new network using the neutron provider:
"Error while executing action Add Subnet to Provider: Failed to communicate
with the external provider"
==> /var/log/neutron/server.log <==
2014-12-07 22:35:14.825 1061 INFO neutron.wsgi [-] (1061) accepted
('xxx.xxx.xxx.xxx', 42975)
2014-12-07 22:35:14.828 1061 INFO urllib3.connectionpool [-] Starting new
HTTP connection (1): 127.0.0.1
2014-12-07 22:35:14.920 1061 INFO neutron.plugins.ml2.db
[req-ba2a18ec-6e02-4526-99a8-27b35152781f None] Added segment
e0ad11df-9c5a-4167-82ea-313dcc626661
of type flat for network 213c62ce-e167-4bb0-bd2d-720dd06bc970
2014-12-07 22:35:14.930 1061 INFO neutron.wsgi
[req-ba2a18ec-6e02-4526-99a8-27b35152781f None] - - [07/Dec/2014 22:35:14]
"POST /v2.0/networ ks HTTP/1.1" 201
527 0.103579
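The 201 above shows neutron created the network itself, so the failure is likely on the subsequent subnet call from the engine. A minimal connectivity check against the neutron API from the engine host, assuming the default API port 9696 and a valid Keystone token in $TOKEN (both assumptions, not taken from this post), could look like:
# hypothetical check that the engine host can reach the neutron endpoint
curl -s -H "X-Auth-Token: $TOKEN" http://<neutron-host>:9696/v2.0/networks
A JSON list of networks back means basic connectivity and auth are fine; a refused or timed-out connection points at firewall or endpoint settings instead.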
ssd cache
by yao xu
Hi everyone!
Has anyone tried adding an SSD cache to a node using bcache or flashcache?
It seems we would have to change the procedure when adding a storage
domain.
It can probably be done in several days, but syncing the cache between nodes
seems a little tricky.
Do you have any idea?
Thanks
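For what it's worth, a host-side dm-cache through LVM would leave the storage-domain procedure untouched, since the cached LV is still presented as a single block device. A minimal sketch, with a hypothetical VG vg0 holding the slow LV "data" and an SSD at /dev/sdb (all names assumed):
# lvmcache sketch -- VG/LV names and /dev/sdb are hypothetical
pvcreate /dev/sdb
vgextend vg0 /dev/sdb
lvcreate -L 100G -n lv_cache vg0 /dev/sdb       # cache data LV on the SSD
lvcreate -L 1G -n lv_cache_meta vg0 /dev/sdb    # cache metadata LV
lvconvert --type cache-pool --poolmetadata vg0/lv_cache_meta vg0/lv_cache
lvconvert --type cache --cachepool vg0/lv_cache vg0/data
The cache-consistency question stands, though: with a shared LUN, every host would have to go through the same caching layer, which is exactly the tricky part mentioned above.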
Storage Domain Issue
by Koen Vanoppen
Dear all,
We have updated our hypervisors with yum. This included an update of vdsm
as well. We are now on these versions:
vdsm-4.16.7-1.gitdb83943.el6.x86_64
vdsm-python-4.16.7-1.gitdb83943.el6.noarch
vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch
vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-cli-4.16.7-1.gitdb83943.el6.noarch
And ever since these updates we have experienced BIG troubles with our fibre
connections. I've already updated the Brocade cards to the latest version.
This seemed to help: the hosts came back up and saw the storage domains
(before the Brocade update, they didn't even see their storage domains).
But after a day or so, one of the hypervisors began to freak out again,
coming up and going back down... Below you can find the errors:
Thread-821::ERROR::2014-12-08
07:10:33,190::task::866::Storage.TaskManager.Task::(_setError)
Task=`27cb9779-a8e9-4080-988d-9772c922710b`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-821::ERROR::2014-12-08
07:10:33,194::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
Thread-822::ERROR::2014-12-08
07:11:03,878::task::866::Storage.TaskManager.Task::(_setError)
Task=`30177931-68c0-420f-950f-da5b770fe35c`::Unexpected error
Thread-822::ERROR::2014-12-08
07:11:03,882::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': "Unknown pool id, pool not connected:
('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
Thread-813::ERROR::2014-12-08
07:11:07,634::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-813::ERROR::2014-12-08
07:11:07,634::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-813::DEBUG::2014-12-08
07:11:07,638::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-813::DEBUG::2014-12-08
07:11:07,835::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-813::ERROR::2014-12-08
07:11:07,896::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)
Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have
expected version 42 it is version 17
Thread-813::ERROR::2014-12-08
07:11:07,903::task::866::Storage.TaskManager.Task::(_setError)
Task=`c434f325-5193-4236-a04d-2fee9ac095bc`::Unexpected error
Thread-813::ERROR::2014-12-08
07:11:07,946::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': "Wrong Master domain or its version:
'SD=78d84adf-7274-4efe-a711-fbec31196ece,
pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
Thread-823::ERROR::2014-12-08
07:11:43,993::task::866::Storage.TaskManager.Task::(_setError)
Task=`9abbccd9-88a7-4632-b350-f9af1f65bebd`::Unexpected error
Thread-823::ERROR::2014-12-08
07:11:43,998::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': "Unknown pool id, pool not connected:
('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
Thread-823::ERROR::2014-12-08
07:11:44,003::task::866::Storage.TaskManager.Task::(_setError)
Task=`7ef1ac39-e7c2-4538-b30b-ab2fcefac01d`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-823::ERROR::2014-12-08
07:11:44,007::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
Thread-823::ERROR::2014-12-08
07:11:44,133::task::866::Storage.TaskManager.Task::(_setError)
Task=`cc1ae82c-f3c4-4efa-9cd2-c62a27801e76`::Unexpected error
Thread-823::ERROR::2014-12-08
07:11:44,137::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': "Unknown pool id, pool not connected:
('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
Thread-823::ERROR::2014-12-08
07:12:24,580::task::866::Storage.TaskManager.Task::(_setError)
Task=`9bcbb87d-3093-4894-879b-3fe2b09ef351`::Unexpected error
Thread-823::ERROR::2014-12-08
07:12:24,585::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': "Unknown pool id, pool not connected:
('1d03dc05-008b-4d14-97ce-b17bd714183d',)", 'code': 309}}
Thread-823::ERROR::2014-12-08
07:13:04,926::task::866::Storage.TaskManager.Task::(_setError)
Task=`8bdd0c1f-e681-4a8e-ad55-296c021389ed`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-823::ERROR::2014-12-08
07:13:04,931::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
Thread-823::ERROR::2014-12-08
07:13:45,342::task::866::Storage.TaskManager.Task::(_setError)
Task=`160ea2a7-b6cb-4102-9df4-71ba87fd863e`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-823::ERROR::2014-12-08
07:13:45,346::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
Thread-823::ERROR::2014-12-08
07:14:25,879::task::866::Storage.TaskManager.Task::(_setError)
Task=`985628db-8f48-44b5-8f61-631a922f7f71`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-823::ERROR::2014-12-08
07:14:25,883::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
Thread-823::ERROR::2014-12-08
07:15:06,175::task::866::Storage.TaskManager.Task::(_setError)
Task=`ddca1c88-0565-41e8-bf0c-22eadcc75918`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-823::ERROR::2014-12-08
07:15:06,179::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
Thread-823::ERROR::2014-12-08
07:15:46,585::task::866::Storage.TaskManager.Task::(_setError)
Task=`12bbded5-59ce-46d8-9e67-f48862a03606`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-823::ERROR::2014-12-08
07:15:46,589::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
Thread-814::ERROR::2014-12-08
07:16:08,619::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-814::ERROR::2014-12-08
07:16:08,619::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-814::DEBUG::2014-12-08
07:16:08,624::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-814::DEBUG::2014-12-08
07:16:08,740::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-814::ERROR::2014-12-08
07:16:08,812::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)
Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have
expected version 42 it is version 17
Thread-814::ERROR::2014-12-08
07:16:08,820::task::866::Storage.TaskManager.Task::(_setError)
Task=`5cdce5cd-6e6d-421e-bc2a-f999d8cbb056`::Unexpected error
Thread-814::ERROR::2014-12-08
07:16:08,865::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': "Wrong Master domain or its version:
'SD=78d84adf-7274-4efe-a711-fbec31196ece,
pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
Thread-815::ERROR::2014-12-08
07:16:09,471::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-815::ERROR::2014-12-08
07:16:09,472::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-815::DEBUG::2014-12-08
07:16:09,476::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-815::DEBUG::2014-12-08
07:16:09,564::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-815::ERROR::2014-12-08
07:16:09,627::spbackends::271::Storage.StoragePoolDiskBackend::(validateMasterDomainVersion)
Requested master domain 78d84adf-7274-4efe-a711-fbec31196ece does not have
expected version 42 it is version 17
Thread-815::ERROR::2014-12-08
07:16:09,635::task::866::Storage.TaskManager.Task::(_setError)
Task=`abfa0fd0-04b3-4c65-b3d0-be18b085a65d`::Unexpected error
Thread-815::ERROR::2014-12-08
07:16:09,681::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': "Wrong Master domain or its version:
'SD=78d84adf-7274-4efe-a711-fbec31196ece,
pool=1d03dc05-008b-4d14-97ce-b17bd714183d'", 'code': 324}}
Thread-816::ERROR::2014-12-08
07:16:10,182::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
for unfetched domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-816::ERROR::2014-12-08
07:16:10,183::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 78d84adf-7274-4efe-a711-fbec31196ece
Thread-816::DEBUG::2014-12-08
07:16:10,187::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/36005076802810d489000000000000062|/dev/mapper/36005076802810d48e0000000000000ae|/dev/mapper/36005076802810d48e0000000000000de|'\'',
'\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
78d84adf-7274-4efe-a711-fbec31196ece (cwd None)
Thread-823::ERROR::2014-12-08
07:16:27,163::task::866::Storage.TaskManager.Task::(_setError)
Task=`9b0fd676-7941-40a7-a71e-0f1dee48a107`::Unexpected error
raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-823::ERROR::2014-12-08
07:16:27,168::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Not SPM: ()', 'code': 654}}
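The telling lines in all of that noise are the validateMasterDomainVersion errors: the engine expects master version 42 but the domain reports 17. For a block domain, vdsm of this vintage keeps domain metadata in the VG tags (keys prefixed MDT_, MDT_MASTER_VERSION among them), so one way to see what the domain itself carries, run on a host that sees the LUNs (domain UUID taken from the log above):
# show the VG tags where vdsm stores block-domain metadata (MDT_* keys)
vgs -o vg_name,vg_tags 78d84adf-7274-4efe-a711-fbec31196ece
If the tag really says 17, the mismatch sits between the engine's view and the domain itself rather than in the fibre connectivity.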
Add new host with Neutron Provider
by Eduardo Terzella
Hello,
Can you help?
I tried to add the neutron provider on my oVirt node, and while it was
installing the following error occurred:
2014-12-07 15:15:31 INFO otopi.plugins.otopi.packagers.yumpackager
yumpackager.info:92 Yum install: 52/53:
openstack-neutron-openvswitch-2014.1.3-4.el6.noarch
2014-12-07 15:15:31 ERROR otopi.plugins.otopi.packagers.yumpackager
yumpackager.error:97 Yum Non-fatal POSTIN scriptlet failure in rpm package
openstack-neutron-openvswitch-2014.1.3-4.el6.noarch
2014-12-07 15:15:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum Script sink: error reading information on
service openstack-openvswitch-agent: No such file or directory
warning: %post(openstack-neutron-openvswitch-2014.1.3-4.el6.noarch)
scriptlet failed, exit status 1
2014-12-07 15:15:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum Done:
openstack-neutron-openvswitch-2014.1.3-4.el6.noarch
2014-12-07 15:15:31 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:88 Yum Done:
openstack-neutron-openvswitch-2014.1.3-4.el6.noarch
I have already added the OpenStack repo on the node.
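The scriptlet output suggests the package's %post is probing a service name that does not exist on the node ("openstack-openvswitch-agent"). Two quick checks to compare what the RPM actually ships against what the scriptlet looks for:
# list the init scripts the package installed
rpm -ql openstack-neutron-openvswitch | grep init.d
# list openvswitch-related services known to chkconfig
chkconfig --list | grep -i openvswitch
Since yum reports the failure as non-fatal, the install itself completes; the open question is only whether the agent service (more likely named neutron-openvswitch-agent on this release, though that is an assumption) ends up registered and started.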
oVirt power management issue
by Wout Peeters
Hi,
We're trying to set up an oVirt configuration with an oVirt-controller (CentOS 6), iSCSI-storage (Dell MD3200i) and 3 vm-hosts (CentOS 7) powered by 2 APC PDUs. Testing the Power Management settings in the web GUI, we get the following message: "Test Succeeded, unknown." The oVirt engine log outputs the following:
2014-12-05 11:23:00,872 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Host vm-02 from data center XXXX was chosen as a proxy to execute Status command on Host vm-03.
2014-12-05 11:23:00,879 INFO [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Using Host vm-02 from data center XXXX as proxy to execute Status command on Host
2014-12-05 11:23:00,904 INFO [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Executing <Status> Power Management command, Proxy Host:vm-02, Agent:apc, Target Host:, Management IP:***.***.***.***, User:apc, Options:, Fencing policy:null
2014-12-05 11:23:00,930 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) START, FenceVdsVDSCommand(HostName = vm-02, HostId = 071554fc-eed2-4e8f-b6bc-041248d0eaa5, targetVdsId = 67c642ed-0a7a-4e3b-8dd6-32a36df4aea9, action = Status, ip = ***.***.***.***, port = , type = apc, user = apc, password = ******, options = '', policy = 'null'), log id: 2803522
2014-12-05 11:23:01,137 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Power Management test failed for Host vm-03.Done
2014-12-05 11:23:01,138 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) FINISH, FenceVdsVDSCommand, return: Test Succeeded, unknown, log id: 2803522
2014-12-05 11:23:01,139 WARN [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Fencing operation failed with proxy host 071554fc-eed2-4e8f-b6bc-041248d0eaa5, trying another proxy...
2014-12-05 11:23:01,241 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Host vm-01 from data center XXXX was chosen as a proxy to execute Status command on Host vm-03.
2014-12-05 11:23:01,244 INFO [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Using Host vm-01 from data center XXXX as proxy to execute Status command on Host
2014-12-05 11:23:01,246 INFO [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-7) Executing <Status> Power Management command, Proxy Host:vm-01, Agent:apc, Target Host:, Management IP:***.***.***.***, User:apc, Options:, Fencing policy:null
2014-12-05 11:23:01,273 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) START, FenceVdsVDSCommand(HostName = vm-01, HostId = c50eb9bf-5294-4d46-813d-7adfcb41d71d, targetVdsId = 67c642ed-0a7a-4e3b-8dd6-32a36df4aea9, action = Status, ip = ***.***.***.***, port = , type = apc, user = apc, password = ******, options = '', policy = 'null'), log id: 2b00de15
2014-12-05 11:23:01,449 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Power Management test failed for Host vm-03.Done
2014-12-05 11:23:01,451 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (ajp--127.0.0.1-8702-7) FINISH, FenceVdsVDSCommand, return: Test Succeeded, unknown, log id: 2b00de15
This is the vdsm.log output:
JsonRpc (StompReactor)::DEBUG::2014-12-05 11:34:05,065::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2014-12-05 11:34:05,067::__init__::504::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-24996::DEBUG::2014-12-05 11:34:05,069::API::1188::vds::(fenceNode) fenceNode(addr=***.***.***.***,port=,agent=apc,user=apc,passwd=XXXX,action=status,secure=False,options=,policy=None)
Thread-24996::DEBUG::2014-12-05 11:34:05,069::utils::738::root::(execCmd) /usr/sbin/fence_apc (cwd None)
Thread-24996::DEBUG::2014-12-05 11:34:05,131::utils::758::root::(execCmd) FAILED: <err> = "Failed: You have to enter plug number or machine identification\nPlease use '-h' for usage\n"; <rc> = 1
Thread-24996::DEBUG::2014-12-05 11:34:05,131::API::1143::vds::(fence) rc 1 inp agent=fence_apc
ipaddr=***.***.***.***
login=apc
action=status
passwd=XXXX
out [] err ['Failed: You have to enter plug number or machine identification', "Please use '-h' for usage"]
The 'port' and 'options' fields show up as empty, even if we enter '22' or 'port=22'. We did enter the slot number as well.
Entering the fence_apc command manually, we get:
fence_apc -a ***.***.***.*** -l apc -p ****** -o status -n 1 -x
Status: ON
Anyone have an idea what could be the problem?
Thanks for your time and kind regards,
Wout
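For comparison, the vdsm trace above shows the agent being fed only agent/ipaddr/login/action/passwd, while the working manual command additionally passes -n 1 (the plug) and -x (ssh). Judging from that, the stdin fence_apc receives would presumably need two more key=value lines, something like (values hypothetical):
agent=fence_apc
ipaddr=***.***.***.***
login=apc
passwd=XXXX
action=status
port=1
secure=true
Here port= carries the plug number (what -n supplies on the command line) and secure= the ssh flag (what -x supplies). So it may be worth putting the plug number in the Slot field and secure=true in the Options field of the GUI, assuming Options is passed through to the agent as comma-separated key=value pairs.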