Which network guest driver for kernel 2.6.17?
by ml ml
Hello List,
I would like to use kernel 2.6.17 for a guest.
Which network driver do I need for this?
I have only seen the oVirt network driver stuff in recent kernels.
Thanks,
Mario
Sealing a CentOS-7 Guest as an oVirt Template
by Christian Rebel
Hello all,
What is the recommended way to make an oVirt Template from a CentOS-7 Guest?
Previously on CentOS-6 I used "sys-unconfig" or "touch /.unconfigured", but
this is no longer possible on CentOS-7.
On CentOS-7 I can run "nmtui" after the OS has loaded, but maybe someone
knows a way to run it during the boot sequence.
Thanks,
Christian
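For reference, a minimal manual sealing sketch for CentOS-7 (a hedged outline
only, not an official procedure; the interface name eth0 is an assumption, and
virt-sysprep from libguestfs automates equivalent cleanup):

# Strip host-specific state before shutting the guest down for templating.
rm -f /etc/ssh/ssh_host_*    # host keys are regenerated on first boot
sed -i '/^HWADDR=/d;/^UUID=/d' /etc/sysconfig/network-scripts/ifcfg-eth0
truncate -s 0 /etc/machine-id    # systemd writes a fresh machine ID at boot
rm -f /etc/udev/rules.d/70-persistent-net.rules    # drop persisted NIC naming
poweroff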
OVIRT-3.5-TEST-DAY-3: CPU SLA for capping
by Martin Perina
Hi,
I tested the support for limiting CPU utilization of hosts. Setting it up
in the GUI worked correctly, but for a running VM the settings are ignored.
I used the XMLRPC protocol to avoid [1]. To simulate VM CPU utilization
I used the stress tool [2].
Testing steps:
1. Create a VM with the same number of CPUs as the Host
2. Create a CPU type QoS in the DC and set the limit to the desired value
3. In the Cluster, create a new CPU profile with the above QoS and remove the default CPU profile
4. Start the VM and execute (N is the number of CPUs in the VM):
stress --cpu N
5. Connect to the Host and execute:
sar -u 1 10
No matter what the limit was set to (1, 50, or 100), the host CPUs were always utilized at 100%.
So I created a bug to cover this [3].
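One way to check on the host whether the cap ever reached libvirt (a
diagnostic sketch using the stock read-only virsh client; <vm-name> is a
placeholder for whatever domain name "virsh -r list" reports):

virsh -r list    # confirm the VM's libvirt domain name
virsh -r schedinfo <vm-name>    # with a working cap, vcpu_quota/vcpu_period should reflect the QoS limit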
Martin Perina
[1] http://bugzilla.redhat.com/1142851
[2] http://people.seas.harvard.edu/~apw/stress/
[3] http://bugzilla.redhat.com/1144280
[Speaking Opportunity] Central PA Open Source Conference 2014
by Brian Proffitt
If anyone in the eastern Pennsylvania/New Jersey area is interested, the oVirt community has been personally invited to have a speaker attend CPOSC on Nov. 1 in Lancaster, PA. Unfortunately I cannot attend, as I will be traveling that day. But this is a great opportunity to speak to a green-field audience about oVirt and what it can do.
If you would like to give a talk about your favorite datacenter management platform, visit the CFP page at http://www.cposc.org/ and post a proposal. (Ignore the Sept. 10 deadline; oVirt is assured a slot.) If you do want to present, we can send you presentation materials to use for an overview talk, so let me know!
Peace,
Brian
--
Brian Proffitt
Community Liaison
oVirt
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
Main data storage won't come up
by Tamás Millián
I have a single-node oVirt environment (I know it's dumb, but I run it anyway).
Today, for some reason, it went down, and after coming back up GlusterFS
wouldn't start. I managed to fix that by reinitialising the volumes in
Gluster. However, now I am struggling with the issue that the data storage
domain for oVirt refuses to come up. Gluster brings the volumes online, and
mounting gv0 (the data volume) works as well. I have no idea how to go
about this. Let me know if any more information is required to resolve it.
Any help would be highly appreciated.
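For completeness, a quick way to re-check the Gluster side from the host
(standard Gluster CLI and mount commands; server1:gv0 is the data-domain
connection that appears in the engine log below, and /mnt/test is just a
scratch mount point):

gluster volume status gv0    # bricks and daemons should show as online
gluster volume info gv0      # options, e.g. storage.owner-uid/gid 36 for vdsm
mkdir -p /mnt/test && mount -t glusterfs server1:/gv0 /mnt/test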
engine log
2014-09-18 23:39:31,380 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Loaded file
"/usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.conf".
2014-09-18 23:39:31,381 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) The file "/etc/ovirt-engine/engine.conf" doesn't
exist or isn't readable. Will return an empty set of properties.
2014-09-18 23:39:31,381 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Loaded file
"/etc/ovirt-engine/engine.conf.d/10-setup-database.conf".
2014-09-18 23:39:31,381 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Loaded file
"/etc/ovirt-engine/engine.conf.d/10-setup-jboss.conf".
2014-09-18 23:39:31,382 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Loaded file
"/etc/ovirt-engine/engine.conf.d/10-setup-pki.conf".
2014-09-18 23:39:31,382 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Loaded file
"/etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf".
2014-09-18 23:39:31,383 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_AJP_ENABLED" is "true".
2014-09-18 23:39:31,383 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_AJP_PORT" is "8702".
2014-09-18 23:39:31,383 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_APPS" is "engine.ear".
2014-09-18 23:39:31,383 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_CACHE" is
"/var/cache/ovirt-engine".
2014-09-18 23:39:31,384 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_DB_CHECK_INTERVAL" is
"1000".
2014-09-18 23:39:31,384 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_DB_CONNECTION_TIMEOUT"
is "300000".
2014-09-18 23:39:31,384 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_DB_DATABASE" is
"engine".
2014-09-18 23:39:31,384 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_DB_DRIVER" is
"org.postgresql.Driver".
2014-09-18 23:39:31,384 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_DB_HOST" is "localhost".
2014-09-18 23:39:31,385 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_DB_MAX_CONNECTIONS" is
"100".
2014-09-18 23:39:31,385 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_DB_MIN_CONNECTIONS" is
"1".
2014-09-18 23:39:31,385 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_DB_PASSWORD" is "***".
2014-09-18 23:39:31,385 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_DB_PORT" is "5432".
2014-09-18 23:39:31,386 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_DB_SECURED" is "False".
2014-09-18 23:39:31,386 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_DB_SECURED_VALIDATION"
is "False".
2014-09-18 23:39:31,386 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_DB_URL" is
"jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory".
2014-09-18 23:39:31,387 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_DB_USER" is "engine".
2014-09-18 23:39:31,387 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_DEBUG_ADDRESS" is "".
2014-09-18 23:39:31,387 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_DOC" is
"/usr/share/doc/ovirt-engine".
2014-09-18 23:39:31,387 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_ETC" is
"/etc/ovirt-engine".
2014-09-18 23:39:31,388 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_FQDN" is "server1".
2014-09-18 23:39:31,388 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_GROUP" is "ovirt".
2014-09-18 23:39:31,388 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_HEAP_MAX" is "1g".
2014-09-18 23:39:31,389 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_HEAP_MIN" is "1g".
2014-09-18 23:39:31,389 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_HTTPS_ENABLED" is
"false".
2014-09-18 23:39:31,389 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_HTTPS_PORT" is "None".
2014-09-18 23:39:31,389 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_HTTPS_PROTOCOLS" is
"SSLv3,TLSv1,TLSv1.1,TLSv1.2".
2014-09-18 23:39:31,390 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_HTTP_ENABLED" is
"false".
2014-09-18 23:39:31,390 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_HTTP_PORT" is "None".
2014-09-18 23:39:31,390 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_JAVA_MODULEPATH" is
"/usr/share/ovirt-engine/modules".
2014-09-18 23:39:31,391 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_JVM_ARGS" is "
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath="/var/log/ovirt-engine/dump"".
2014-09-18 23:39:31,391 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_LOG" is
"/var/log/ovirt-engine".
2014-09-18 23:39:31,391 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_LOG_TO_CONSOLE" is
"false".
2014-09-18 23:39:31,391 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_MANUAL" is
"/usr/share/ovirt-engine/manual".
2014-09-18 23:39:31,392 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_PERM_MAX" is "256m".
2014-09-18 23:39:31,392 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_PERM_MIN" is "256m".
2014-09-18 23:39:31,392 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_PKI" is
"/etc/pki/ovirt-engine".
2014-09-18 23:39:31,392 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_PKI_CA" is
"/etc/pki/ovirt-engine/ca.pem".
2014-09-18 23:39:31,392 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_PKI_ENGINE_CERT" is
"/etc/pki/ovirt-engine/certs/engine.cer".
2014-09-18 23:39:31,393 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_PKI_ENGINE_STORE" is
"/etc/pki/ovirt-engine/keys/engine.p12".
2014-09-18 23:39:31,393 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_PKI_ENGINE_STORE_ALIAS"
is "1".
2014-09-18 23:39:31,393 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property
"ENGINE_PKI_ENGINE_STORE_PASSWORD" is "***".
2014-09-18 23:39:31,393 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_PKI_TRUST_STORE" is
"/etc/pki/ovirt-engine/.truststore".
2014-09-18 23:39:31,394 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property
"ENGINE_PKI_TRUST_STORE_PASSWORD" is "***".
2014-09-18 23:39:31,394 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_PROPERTIES" is "
jsse.enableSNIExtension=false".
2014-09-18 23:39:31,394 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_PROXY_ENABLED" is
"true".
2014-09-18 23:39:31,395 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_PROXY_HTTPS_PORT" is
"443".
2014-09-18 23:39:31,395 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_PROXY_HTTP_PORT" is
"80".
2014-09-18 23:39:31,395 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_REPORTS_UI" is
"/var/lib/ovirt-engine/reports.xml".
2014-09-18 23:39:31,395 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_STOP_INTERVAL" is "1".
2014-09-18 23:39:31,396 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_STOP_TIME" is "10".
2014-09-18 23:39:31,396 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_TMP" is
"/var/tmp/ovirt-engine".
2014-09-18 23:39:31,396 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_UP_MARK" is
"/var/lib/ovirt-engine/engine.up".
2014-09-18 23:39:31,396 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_URI" is "/ovirt-engine".
2014-09-18 23:39:31,397 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_USER" is "ovirt".
2014-09-18 23:39:31,397 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_USR" is
"/usr/share/ovirt-engine".
2014-09-18 23:39:31,397 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_VAR" is
"/var/lib/ovirt-engine".
2014-09-18 23:39:31,398 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "ENGINE_VERBOSE_GC" is "false".
2014-09-18 23:39:31,398 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "JBOSS_HOME" is
"/usr/share/jboss-as".
2014-09-18 23:39:31,398 INFO [org.ovirt.engine.core.utils.LocalConfig]
(MSC service thread 1-10) Value of property "SENSITIVE_KEYS" is
",ENGINE_DB_PASSWORD,ENGINE_PKI_TRUST_STORE_PASSWORD,ENGINE_PKI_ENGINE_STORE_PASSWORD".
2014-09-18 23:39:31,582 INFO [org.ovirt.engine.core.bll.Backend] (MSC
service thread 1-1) Start initializing Backend
2014-09-18 23:39:31,776 INFO [org.ovirt.engine.core.bll.Backend] (MSC
service thread 1-1) Running ovirt-engine 3.4.0-1.el6
2014-09-18 23:39:31,777 INFO
[org.ovirt.engine.core.bll.CpuFlagsManagerHandler] (MSC service thread
1-1) Start initializing dictionaries
2014-09-18 23:39:31,779 INFO
[org.ovirt.engine.core.bll.CpuFlagsManagerHandler] (MSC service thread
1-1) Finished initializing dictionaries
2014-09-18 23:39:31,779 INFO
[org.ovirt.engine.core.bll.AuditLogCleanupManager] (MSC service thread
1-1) Start initializing AuditLogCleanupManager
2014-09-18 23:39:31,780 INFO
[org.ovirt.engine.core.bll.AuditLogCleanupManager] (MSC service thread
1-1) Setting audit cleanup manager to run at: 35 35 3 * * ?
2014-09-18 23:39:31,789 INFO
[org.ovirt.engine.core.bll.AuditLogCleanupManager] (MSC service thread
1-1) Finished initializing AuditLogCleanupManager
2014-09-18 23:39:31,790 INFO [org.ovirt.engine.core.bll.TagsDirector] (MSC
service thread 1-1) Start initializing TagsDirector
2014-09-18 23:39:31,795 INFO [org.ovirt.engine.core.bll.TagsDirector] (MSC
service thread 1-1) Tag root added to tree
2014-09-18 23:39:31,802 INFO [org.ovirt.engine.core.bll.TagsDirector] (MSC
service thread 1-1) Finished initializing TagsDirector
2014-09-18 23:39:31,804 INFO
[org.ovirt.engine.core.bll.IsoDomainListSyncronizer] (MSC service thread
1-1) Start initializing IsoDomainListSyncronizer
2014-09-18 23:39:31,807 INFO
[org.ovirt.engine.core.bll.IsoDomainListSyncronizer] (MSC service thread
1-1) Finished initializing IsoDomainListSyncronizer
2014-09-18 23:39:31,812 INFO
[org.ovirt.engine.core.utils.osinfo.OsInfoPreferencesLoader] (MSC service
thread 1-1) Loaded file
/etc/ovirt-engine/osinfo.conf.d/00-defaults.properties
2014-09-18 23:39:32,097 INFO [org.ovirt.engine.core.bll.Backend] (MSC
service thread 1-1) Completed initializing handlers
2014-09-18 23:39:32,098 INFO
[org.ovirt.engine.core.utils.ErrorTranslatorImpl] (MSC service thread 1-1)
Start initializing ErrorTranslatorImpl
2014-09-18 23:39:32,103 WARN
[org.ovirt.engine.core.utils.ErrorTranslatorImpl] (MSC service thread 1-1)
Code MAC_ADDRESS_IS_IN_USE appears more than once in string table.
2014-09-18 23:39:32,104 INFO
[org.ovirt.engine.core.utils.ErrorTranslatorImpl] (MSC service thread 1-1)
Finished initializing ErrorTranslatorImpl
2014-09-18 23:39:32,104 INFO
[org.ovirt.engine.core.utils.ErrorTranslatorImpl] (MSC service thread 1-1)
Start initializing ErrorTranslatorImpl
2014-09-18 23:39:32,105 INFO
[org.ovirt.engine.core.utils.ErrorTranslatorImpl] (MSC service thread 1-1)
Finished initializing ErrorTranslatorImpl
2014-09-18 23:39:32,105 INFO [org.ovirt.engine.core.bll.Backend] (MSC
service thread 1-1) Mark incomplete jobs as UNKNOWN
2014-09-18 23:39:32,116 INFO
[org.ovirt.engine.core.bll.job.JobRepositoryCleanupManager] (MSC service
thread 1-1) Start initializing JobRepositoryCleanupManager
2014-09-18 23:39:32,117 INFO
[org.ovirt.engine.core.bll.job.JobRepositoryCleanupManager] (MSC service
thread 1-1) Finished initializing JobRepositoryCleanupManager
2014-09-18 23:39:32,118 INFO
[org.ovirt.engine.core.bll.AutoRecoveryManager] (MSC service thread 1-1)
Start initializing AutoRecoveryManager
2014-09-18 23:39:32,118 INFO
[org.ovirt.engine.core.bll.AutoRecoveryManager] (MSC service thread 1-1)
Finished initializing AutoRecoveryManager
2014-09-18 23:39:32,119 INFO
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector] (MSC service
thread 1-1) Start initializing ExecutionMessageDirector
2014-09-18 23:39:32,120 INFO
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector] (MSC service
thread 1-1) Finished initializing ExecutionMessageDirector
2014-09-18 23:39:32,146 INFO
[org.ovirt.engine.core.bll.adbroker.UsersDomainsCacheManagerService] (MSC
service thread 1-10) Start initializing UsersDomainsCacheManagerService
2014-09-18 23:39:32,150 INFO
[org.ovirt.engine.core.bll.DbUserCacheManager] (MSC service thread 1-10)
Start initializing DbUserCacheManager
2014-09-18 23:39:32,151 INFO
[org.ovirt.engine.core.bll.DbUserCacheManager] (MSC service thread 1-10)
Finished initializing DbUserCacheManager
2014-09-18 23:39:32,151 INFO
[org.ovirt.engine.core.bll.adbroker.UsersDomainsCacheManagerService] (MSC
service thread 1-10) Finished initializing UsersDomainsCacheManagerService
2014-09-18 23:39:32,156 INFO [org.ovirt.engine.core.bll.AsyncTaskManager]
(MSC service thread 1-5) Initialization of AsyncTaskManager completed
successfully.
2014-09-18 23:39:32,157 INFO
[org.ovirt.engine.core.vdsbroker.ResourceManager] (MSC service thread 1-5)
Start initializing ResourceManager
2014-09-18 23:39:32,224 INFO [org.ovirt.engine.core.vdsbroker.VdsManager]
(MSC service thread 1-5) Entered VdsManager constructor
2014-09-18 23:39:32,240 INFO [org.ovirt.engine.core.vdsbroker.VdsManager]
(MSC service thread 1-5) Initialize vdsBroker (server1,54,321)
2014-09-18 23:39:32,295 INFO
[org.ovirt.engine.core.vdsbroker.ResourceManager] (MSC service thread 1-5)
VDS 69f9e1c1-8daa-437a-b4e0-c4c215541db0 was added to the Resource Manager
2014-09-18 23:39:32,306 INFO
[org.ovirt.engine.core.vdsbroker.ResourceManager] (MSC service thread 1-5)
Finished initializing ResourceManager
2014-09-18 23:39:32,310 INFO [org.ovirt.engine.core.bll.OvfDataUpdater]
(MSC service thread 1-5) Initialization of OvfDataUpdater completed
successfully.
2014-09-18 23:39:32,312 INFO
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (MSC service
thread 1-5) Start scheduling to enable vds load balancer
2014-09-18 23:39:32,312 INFO
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (MSC service
thread 1-5) Finished scheduling to enable vds load balancer
2014-09-18 23:39:32,312 INFO
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (MSC service
thread 1-5) Start HA Reservation check
2014-09-18 23:39:32,313 INFO
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (MSC service
thread 1-5) Finished HA Reservation check
2014-09-18 23:39:32,317 INFO
[org.ovirt.engine.core.bll.network.MacPoolManager]
(org.ovirt.thread.pool-6-thread-1) MacPoolManager(295a48f6): Start
initializing
2014-09-18 23:39:32,323 INFO
[org.ovirt.engine.core.bll.InitBackendServicesOnStartupBean] (MSC service
thread 1-5) Init VM custom properties utilities
2014-09-18 23:39:32,324 INFO
[org.ovirt.engine.core.bll.InitBackendServicesOnStartupBean] (MSC service
thread 1-5) Init device custom properties utilities
2014-09-18 23:39:32,327 INFO
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (MSC service
thread 1-5) Initializing Scheduling manager
2014-09-18 23:39:32,329 INFO
[org.ovirt.engine.core.bll.network.MacPoolManager]
(org.ovirt.thread.pool-6-thread-1) MacPoolManager(295a48f6): Finished
initializing. Available MACs in pool: 243
2014-09-18 23:39:32,343 INFO
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (MSC service
thread 1-5) External scheduler disabled, discovery skipped
2014-09-18 23:39:32,343 INFO
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (MSC service
thread 1-5) Initialized Scheduling manager
2014-09-18 23:39:32,344 INFO [org.ovirt.engine.core.bll.dwh.DwhHeartBeat]
(MSC service thread 1-5) Initializing DWH Heart Beat
2014-09-18 23:39:32,344 INFO [org.ovirt.engine.core.bll.dwh.DwhHeartBeat]
(MSC service thread 1-5) DWH Heart Beat initialized
2014-09-18 23:39:35,403 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsVDSCommand]
(DefaultQuartzScheduler_Worker-6) Command GetStatsVDSCommand(HostName =
server1, HostId = 69f9e1c1-8daa-437a-b4e0-c4c215541db0, vds=Host[server1])
execution failed. Exception: VDSNetworkException:
java.net.ConnectException: Connection refused
2014-09-18 23:39:35,409 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-6) vds::refreshVdsStats Failed getVdsStats,
vds = 69f9e1c1-8daa-437a-b4e0-c4c215541db0 : server1, error =
VDSNetworkException: java.net.ConnectException: Connection refused
2014-09-18 23:39:35,584 WARN [org.ovirt.engine.core.vdsbroker.VdsManager]
(DefaultQuartzScheduler_Worker-6) Failed to refresh VDS , vds =
69f9e1c1-8daa-437a-b4e0-c4c215541db0 : server1, VDS Network Error,
continuing.
java.net.ConnectException: Connection refused
2014-09-18 23:39:38,788 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler_Worker-12) START,
GetHardwareInfoVDSCommand(HostName = server1, HostId =
69f9e1c1-8daa-437a-b4e0-c4c215541db0, vds=Host[server1]), log id: 6344733e
2014-09-18 23:39:38,843 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler_Worker-12) FINISH, GetHardwareInfoVDSCommand, log
id: 6344733e
2014-09-18 23:39:39,053 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler_Worker-12) START,
GlusterServersListVDSCommand(HostName = server1, HostId =
69f9e1c1-8daa-437a-b4e0-c4c215541db0), log id: 1a63320f
2014-09-18 23:39:39,082 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler_Worker-12) FINISH, GlusterServersListVDSCommand,
return: [148.251.247.45:CONNECTED], log id: 1a63320f
2014-09-18 23:39:39,149 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler_Worker-12) START,
GetHardwareInfoVDSCommand(HostName = server1, HostId =
69f9e1c1-8daa-437a-b4e0-c4c215541db0, vds=Host[server1]), log id: 5d639aed
2014-09-18 23:39:39,194 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler_Worker-12) FINISH, GetHardwareInfoVDSCommand, log
id: 5d639aed
2014-09-18 23:39:39,294 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler_Worker-12) START,
GlusterServersListVDSCommand(HostName = server1, HostId =
69f9e1c1-8daa-437a-b4e0-c4c215541db0), log id: 6c325227
2014-09-18 23:39:39,310 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler_Worker-12) FINISH, GlusterServersListVDSCommand,
return: [148.251.247.45:CONNECTED], log id: 6c325227
2014-09-18 23:39:39,334 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType: UNASSIGNED not exist in
string table
2014-09-18 23:39:39,334 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType: USER_FAILED_REMOVE_VM not
exist in string table
2014-09-18 23:39:39,334 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
USER_RUN_UNLOCK_ENTITY_SCRIPT not exist in string table
2014-09-18 23:39:39,335 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
VDS_NETWORK_MTU_DIFFER_FROM_LOGICAL_NETWORK not exist in string table
2014-09-18 23:39:39,335 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType: STORAGE_ACTIVATE_ASYNC not
exist in string table
2014-09-18 23:39:39,335 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType: DWH_STOPPED not exist in
string table
2014-09-18 23:39:39,336 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType: DWH_STARTED not exist in
string table
2014-09-18 23:39:39,336 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType: DWH_ERROR not exist in
string table
2014-09-18 23:39:39,341 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType: USER_FAILED_REMOVE_VM not
have severity. Assumed Normal
2014-09-18 23:39:39,341 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType: USER_ATTACH_DISK_TO_VM not
have severity. Assumed Normal
2014-09-18 23:39:39,342 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType: USER_DETACH_DISK_FROM_VM
not have severity. Assumed Normal
2014-09-18 23:39:39,342 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
USER_FAILED_DETACH_DISK_FROM_VM not have severity. Assumed Normal
2014-09-18 23:39:39,342 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
USER_RUN_UNLOCK_ENTITY_SCRIPT not have severity. Assumed Normal
2014-09-18 23:39:39,342 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
GLUSTER_VOLUME_OPTION_CHANGED_FROM_CLI not have severity. Assumed Normal
2014-09-18 23:39:39,343 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
GLUSTER_SERVICES_LIST_NOT_FETCHED not have severity. Assumed Normal
2014-09-18 23:39:39,343 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType: GLUSTER_VOLUME_BRICK_ADDED
not have severity. Assumed Normal
2014-09-18 23:39:39,343 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
USER_EXTEND_DISK_SIZE_UPDATE_VM_FAILURE not have severity. Assumed Normal
2014-09-18 23:39:39,343 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
VM_MIGRATION_START_SYSTEM_INITIATED not have severity. Assumed Normal
2014-09-18 23:39:39,344 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
VDS_NETWORK_MTU_DIFFER_FROM_LOGICAL_NETWORK not have severity. Assumed
Normal
2014-09-18 23:39:39,344 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
NETWORK_UPDATE_VM_INTERFACE_LINK_UP not have severity. Assumed Normal
2014-09-18 23:39:39,344 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
NETWORK_UPDATE_VM_INTERFACE_LINK_DOWN not have severity. Assumed Normal
2014-09-18 23:39:39,344 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType: USER_ADDED_AFFINITY_GROUP
not have severity. Assumed Normal
2014-09-18 23:39:39,345 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
USER_FAILED_TO_ADD_AFFINITY_GROUP not have severity. Assumed Normal
2014-09-18 23:39:39,345 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
USER_UPDATED_AFFINITY_GROUP not have severity. Assumed Normal
2014-09-18 23:39:39,345 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
USER_FAILED_TO_UPDATE_AFFINITY_GROUP not have severity. Assumed Normal
2014-09-18 23:39:39,345 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
USER_REMOVED_AFFINITY_GROUP not have severity. Assumed Normal
2014-09-18 23:39:39,346 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) AuditLogType:
USER_FAILED_TO_REMOVE_AFFINITY_GROUP not have severity. Assumed Normal
2014-09-18 23:39:39,445 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-12) Correlation ID: null, Call Stack: null,
Custom Event ID: -1, Message: State was set to Up for host server1.
2014-09-18 23:39:39,494 INFO
[org.ovirt.engine.core.bll.InitVdsOnUpCommand]
(DefaultQuartzScheduler_Worker-12) [452f28] Running command:
InitVdsOnUpCommand internal: true. Entities affected : ID:
48e61b65-232d-4283-838b-7090fa36bcc6 Type: StoragePool
2014-09-18 23:39:39,617 INFO
[org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand]
(DefaultQuartzScheduler_Worker-12) [2aae2a41] Running command:
ConnectHostToStoragePoolServersCommand internal: true. Entities affected :
ID: 48e61b65-232d-4283-838b-7090fa36bcc6 Type: StoragePool
2014-09-18 23:39:39,748 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(DefaultQuartzScheduler_Worker-12) [2aae2a41] START,
ConnectStorageServerVDSCommand(HostName = server1, HostId =
69f9e1c1-8daa-437a-b4e0-c4c215541db0, storagePoolId =
48e61b65-232d-4283-838b-7090fa36bcc6, storageType = GLUSTERFS,
connectionList = [{ id: 2fdef3d1-a3ec-4525-a879-fa8aa8422330, connection:
server1:gv0, iqn: null, vfsType: glusterfs, mountOptions: null, nfsVersion:
null, nfsRetrans: null, nfsTimeo: null };]), log id: 150a5f73
2014-09-18 23:39:39,846 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(DefaultQuartzScheduler_Worker-12) [2aae2a41] FINISH,
ConnectStorageServerVDSCommand, return:
{2fdef3d1-a3ec-4525-a879-fa8aa8422330=0}, log id: 150a5f73
2014-09-18 23:39:39,849 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(DefaultQuartzScheduler_Worker-12) [2aae2a41] START,
ConnectStorageServerVDSCommand(HostName = server1, HostId =
69f9e1c1-8daa-437a-b4e0-c4c215541db0, storagePoolId =
48e61b65-232d-4283-838b-7090fa36bcc6, storageType = NFS, connectionList =
[{ id: 1c11029c-4d2f-448c-8de7-1e31654c677b, connection: server1:/export,
iqn: null, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans:
null, nfsTimeo: null };{ id: 8014590d-150d-4c40-9f9e-1d2c43c36170,
connection: server1:/iso, iqn: null, vfsType: null, mountOptions: null,
nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 383e927d
2014-09-18 23:39:39,970 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(DefaultQuartzScheduler_Worker-12) [2aae2a41] FINISH,
ConnectStorageServerVDSCommand, return:
{8014590d-150d-4c40-9f9e-1d2c43c36170=0,
1c11029c-4d2f-448c-8de7-1e31654c677b=0}, log id: 383e927d
2014-09-18 23:39:39,970 INFO
[org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand]
(DefaultQuartzScheduler_Worker-12) [2aae2a41] Host server1 storage
connection was succeeded
2014-09-18 23:39:39,979 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
(org.ovirt.thread.pool-6-thread-13) START,
ConnectStoragePoolVDSCommand(HostName = server1, HostId =
69f9e1c1-8daa-437a-b4e0-c4c215541db0, storagePoolId =
48e61b65-232d-4283-838b-7090fa36bcc6, vds_spm_id = 1, masterDomainId =
eef6c9fd-a464-4cf1-bd22-c01a6344dd14, masterVersion = 111), log id: 7833108f
----> end of engine log
vdsmd log
MainProcess::DEBUG::2014-09-18
23:39:36,547::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) call
readMultipathConf with () {}
MainProcess::DEBUG::2014-09-18
23:39:36,548::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper)
return readMultipathConf with ['# RHEV REVISION 1.0', '', 'defaults {', '
polling_interval 5', ' getuid_callout "/sbin/scsi_id
--whitelisted --replace-whitespace --device=/dev/%n"', ' no_path_retry
fail', ' user_friendly_names no', ' flush_on_last_del
yes', ' fast_io_fail_tmo 5', ' dev_loss_tmo 30',
' max_fds 4096', '}', '', 'devices {', 'device {', '
vendor "HITACHI"', ' product "DF.*"',
' getuid_callout "/sbin/scsi_id --whitelisted
--replace-whitespace --device=/dev/%n"', '}', 'device {', ' vendor
"COMPELNT"', ' product "Compellent Vol"', '
no_path_retry fail', '}', '}']
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:38,800::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) call
getHardwareInfo with () {}
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:38,801::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper)
return getHardwareInfo with {'systemProductName': 'S1200RP',
'systemSerialNumber': '............', 'systemFamily': 'To be filled by
O.E.M.', 'systemVersion': '....................', 'systemUUID':
'CC8CD1C9-B77D-E311-BB91-000E0C68D362', 'systemManufacturer': 'Intel
Corporation'}
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:39,056::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) call
wrapper with () {}
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:39,056::utils::642::root::(execCmd) '/usr/sbin/gluster --mode=script
peer status --xml' (cwd None)
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:39,066::utils::662::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:39,067::utils::642::root::(execCmd) '/usr/sbin/gluster system:: uuid
get' (cwd None)
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:39,078::utils::662::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:39,078::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper)
return wrapper with [{'status': 'CONNECTED', 'hostname': '148.251.247.45',
'uuid': '2b098a03-cd87-4650-a55b-7e6727db8ca4'}]
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:39,152::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) call
getHardwareInfo with () {}
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:39,152::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper)
return getHardwareInfo with {'systemProductName': 'S1200RP',
'systemSerialNumber': '............', 'systemFamily': 'To be filled by
O.E.M.', 'systemVersion': '....................', 'systemUUID':
'CC8CD1C9-B77D-E311-BB91-000E0C68D362', 'systemManufacturer': 'Intel
Corporation'}
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:39,296::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) call
wrapper with () {}
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:39,297::utils::642::root::(execCmd) '/usr/sbin/gluster --mode=script
peer status --xml' (cwd None)
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:39,302::utils::662::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:39,303::utils::642::root::(execCmd) '/usr/sbin/gluster system:: uuid
get' (cwd None)
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:39,308::utils::662::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:39,308::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper)
return wrapper with [{'status': 'CONNECTED', 'hostname': '148.251.247.45',
'uuid': '2b098a03-cd87-4650-a55b-7e6727db8ca4'}]
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:40,586::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) call
wrapper with () {}
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:40,586::utils::642::root::(execCmd) '/usr/sbin/gluster system:: uuid
get' (cwd None)
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:40,598::utils::662::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:40,598::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper)
return wrapper with 2b098a03-cd87-4650-a55b-7e6727db8ca4
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:42,377::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) call
wrapper with (None,) {}
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:42,377::utils::642::root::(execCmd) '/usr/sbin/gluster --mode=script
volume info --xml' (cwd None)
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:42,387::utils::662::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-13::DEBUG::2014-09-18
23:39:42,388::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper)
return wrapper with {'iso': {'transportType': ['TCP'], 'uuid':
'c1d293f7-3b07-4425-ac44-89a95864f7d2', 'bricks':
['server1:/data/brick1/iso', 'server1:/data/brick2/iso'], 'volumeName':
'iso', 'volumeType': 'DISTRIBUTE', 'replicaCount': '1', 'brickCount': '2',
'distCount': '1', 'volumeStatus': 'ONLINE', 'stripeCount': '1',
'bricksInfo': [{'name': 'server1:/data/brick1/iso', 'hostUuid':
'2b098a03-cd87-4650-a55b-7e6727db8ca4'}, {'name':
'server1:/data/brick2/iso', 'hostUuid':
'2b098a03-cd87-4650-a55b-7e6727db8ca4'}], 'options': {'storage.owner-gid':
'36', 'storage.owner-uid': '36'}}, 'gv0': {'transportType': ['TCP'],
'uuid': '320f4ca5-d881-485e-97d3-c9d5dec1b5f0', 'bricks':
['server1:/data/brick1/gv0', 'server1:/data/brick2/gv0'], 'volumeName':
'gv0', 'volumeType': 'DISTRIBUTE', 'replicaCount': '1', 'brickCount': '2',
'distCount': '1', 'volumeStatus': 'ONLINE', 'stripeCount': '1',
'bricksInfo': [{'name': 'server1:/data/brick1/gv0', 'hostUuid':
'2b098a03-cd87-4650-a55b-7e6727db8ca4'}, {'name':
'server1:/data/brick2/gv0', 'hostUuid':
'2b098a03-cd87-4650-a55b-7e6727db8ca4'}], 'options': {'auth.allow':
'*.*.*.*', 'storage.owner-gid': '36', 'cluster.quorum-type': 'auto',
'storage.owner-uid': '36'}}, 'storage': {'transportType': ['TCP'], 'uuid':
'91d0d0e6-9c13-435f-84fb-8a8c6d753bd2', 'bricks':
['10.0.0.1:/data/brick1/storage',
'10.0.0.1:/data/brick2/storage'], 'volumeName': 'storage', 'volumeType':
'DISTRIBUTE', 'replicaCount': '1', 'brickCount': '2', 'distCount': '1',
'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [{'name':
'10.0.0.1:/data/brick1/storage', 'hostUuid':
'2b098a03-cd87-4650-a55b-7e6727db8ca4'}, {'name':
'10.0.0.1:/data/brick2/storage',
'hostUuid': '2b098a03-cd87-4650-a55b-7e6727db8ca4'}], 'options':
{'auth.allow': '10.0.*.*', 'storage.owner-gid': '10000',
'storage.owner-uid': '10000'}}, 'export': {'transportType': ['TCP'],
'uuid': 'bd7bbc65-854d-4def-9c31-f94900310c4b', 'bricks':
['server1:/data/brick1/export', 'server1:/data/brick2/export'],
'volumeName': 'export', 'volumeType': 'DISTRIBUTE', 'replicaCount': '1',
'brickCount': '2', 'distCount': '1', 'volumeStatus': 'ONLINE',
'stripeCount': '1', 'bricksInfo': [{'name': 'server1:/data/brick1/export',
'hostUuid': '2b098a03-cd87-4650-a55b-7e6727db8ca4'}, {'name':
'server1:/data/brick2/export', 'hostUuid':
'2b098a03-cd87-4650-a55b-7e6727db8ca4'}], 'options': {'auth.allow':
'server1', 'storage.owner-gid': '36', 'storage.owner-uid': '36'}},
'backup': {'transportType': ['TCP'], 'uuid':
'c2d9c82e-57bb-4753-be66-94908011ab68', 'bricks':
['10.0.0.1:/data/brick1/backup',
'10.0.0.1:/data/brick2/backup'], 'volumeName': 'backup', 'volumeType':
'DISTRIBUTE', 'replicaCount': '1', 'brickCount': '2', 'distCount': '1',
'volumeStatus': 'OFFLINE', 'stripeCount': '1', 'bricksInfo': [{'name':
'10.0.0.1:/data/brick1/backup', 'hostUuid':
'2b098a03-cd87-4650-a55b-7e6727db8ca4'}, {'name':
'10.0.0.1:/data/brick2/backup',
'hostUuid': '2b098a03-cd87-4650-a55b-7e6727db8ca4'}], 'options':
{'auth.allow': '10.0.*.*'}}}
MainProcess|PolicyEngine::DEBUG::2014-09-18
23:39:46,982::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) call
ksmTune with ({'run': 0},) {}
MainProcess|PolicyEngine::DEBUG::2014-09-18
23:39:46,983::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper)
return ksmTune with None
[the same '/usr/sbin/gluster --mode=script volume info --xml' call and its
identical volume-info return repeat every ~5 seconds through 23:40:07]
['server1:/data/brick1/export', 'server1:/data/brick2/export'],
'volumeName': 'export', 'volumeType': 'DISTRIBUTE', 'replicaCount': '1',
'brickCount': '2', 'distCount': '1', 'volumeStatus': 'ONLINE',
'stripeCount': '1', 'bricksInfo': [{'name': 'server1:/data/brick1/export',
'hostUuid': '2b098a03-cd87-4650-a55b-7e6727db8ca4'}, {'name':
'server1:/data/brick2/export', 'hostUuid':
'2b098a03-cd87-4650-a55b-7e6727db8ca4'}], 'options': {'auth.allow':
'server1', 'storage.owner-gid': '36', 'storage.owner-uid': '36'}},
'backup': {'transportType': ['TCP'], 'uuid':
'c2d9c82e-57bb-4753-be66-94908011ab68', 'bricks':
['10.0.0.1:/data/brick1/backup',
'10.0.0.1:/data/brick2/backup'], 'volumeName': 'backup', 'volumeType':
'DISTRIBUTE', 'replicaCount': '1', 'brickCount': '2', 'distCount': '1',
'volumeStatus': 'OFFLINE', 'stripeCount': '1', 'bricksInfo': [{'name':
'10.0.0.1:/data/brick1/backup', 'hostUuid':
'2b098a03-cd87-4650-a55b-7e6727db8ca4'}, {'name':
'10.0.0.1:/data/brick2/backup',
'hostUuid': '2b098a03-cd87-4650-a55b-7e6727db8ca4'}], 'options':
{'auth.allow': '10.0.*.*'}}}
-->end of vdsmd log
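One detail that stands out in the dump above: the 'backup' volume reports
volumeStatus OFFLINE while all the other volumes are ONLINE. A minimal sketch
of how to inspect it and, if the stop was unintended, restart it by hand
(assuming shell access to the gluster node):

    /usr/sbin/gluster --mode=script volume info backup   # same query supervdsm issues above
    gluster volume start backup                          # only if the volume being stopped is unintended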
10 years, 3 months
NeutronVirtualAppliance and 3.4?
by Robert Story
Can the Neutron appliance for 3.5 be used with 3.4? Or does it depend on
features only available in 3.5?
Robert
--
Senior Software Engineer @ Parsons
10 years, 3 months
[oVirt3.5] Let me know how to integrate oVirt 3.5 with latest version of OpenLDAP.
by Fumihide Tani
Hi, everyone,
I'm trying to integrate oVirt 3.5 RC2 with the latest OpenLDAP 2.4.
Both are running on the same server, CentOS 6.5 (Final), the oVirt Engine server.
But the integration does not succeed.
First, I set up OpenLDAP according to the following url:
http://www.ovirt.org/LDAP_Quick_Start.
Many errors occurred during setup
(like: ldap_modify: Other (e.g., implementation specific) error (80)).
Next, I installed ovirt-engine-extension-aaa-ldap.noarch
0.0.0-0.0.master.20140904095149.gitc7bd415.el6 by yum.
Then I set up ovirt-engine-extension-aaa-ldap according to the following
url:
https://www.mail-archive.com/devel@ovirt.org/msg01449.html
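For reference, the setup in that mail boils down to dropping properties files
under /etc/ovirt-engine/. A minimal sketch of the LDAP profile, with
hypothetical names (company, ldap.example.com, the search user and the
password are all placeholders, not values from this setup):

    # /etc/ovirt-engine/aaa/company.properties
    include = <openldap.properties>
    vars.server = ldap.example.com
    vars.user = uid=search,cn=users,dc=example,dc=com
    vars.password = secret
    pool.default.serverset.single.server = ${global:vars.server}
    pool.default.auth.simple.bindDN = ${global:vars.user}
    pool.default.auth.simple.password = ${global:vars.password}

This is accompanied by matching authn/authz extension files under
/etc/ovirt-engine/extensions.d/ that point at the profile.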
After restarting ovirt-engine, the engine.log output:
engine.log: 2014-09-18 16:35:09,691 INFO
[org.ovirt.engineextensions.aaa.ldap.Framework] (MSC service thread
1-6) Creating LDAP pool 'authz' for 'authn-company'
No error is detected here.
Access to the OpenLDAP server succeeded, and the user authentication
succeeded too.
I think the cause of the failing OpenLDAP integration is on the OpenLDAP
side, and that the document http://www.ovirt.org/LDAP_Quick_Start is old
and does not fit the latest version of OpenLDAP.
If anyone knows a newer document for the OpenLDAP integration
or has any help for resolving this problem, please let me know.
Many thanks.
10 years, 3 months
error to add domain in rhevm
by linisha.m@cms.com
Sir
I can't add a domain using the command rhevm-manage-domains. The command
that I executed is rhevm-manage-domains action=add domain=example.com
user=rhevadmin provider=IPA interactive.
The error is: Failed to find example.com domain, Client not found in
Kerberos database.
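That Kerberos error normally means the principal the tool tries to use does
not exist in (or cannot be resolved against) the IPA KDC. Two hedged sanity
checks from the RHEV-M machine, assuming the realm is EXAMPLE.COM:

    dig -t SRV _kerberos._tcp.example.com   # the Kerberos SRV records must resolve for the domain
    kinit rhevadmin@EXAMPLE.COM             # confirms the rhevadmin principal exists in the KDC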
Can you please tell me the solution for this problem, if at all possible?
Thanks
Linisha M
10 years, 3 months
Testday 3 results
by Doron Fediuck
Hi All,
I was glad to see the amount of participation yesterday. Kudos to everyone who took
part in making oVirt 3.5.0 better!
Here are some stats from yesterday's logs:
Top 10 active IRC users[1]:
206 Zordrak
177 winfr34k
44 sbonazzo
35 fromani
24 msivak_
19 Sven
19 jhernand
18 tiraboschi
16 danken
16 capri
Bugzilla:
1. Bugs by team: http://goo.gl/RXjm8k
2. Bugs by reporter: http://goo.gl/c89Sdy
3. Results:
- 96 new bugs
- Top 5 reporters:
sbonazzo(a)redhat.com 15
redhat.bugzilla(a)jdmp.org.uk 6
tnisan(a)redhat.com 5
glazarov(a)redhat.com 4
info(a)netbulae.com 4
Please alert us if there are any 3.5.0 blockers.
Thanks,
Doron
[1] Excluding the weekly meeting part as it wasn't relevant.
10 years, 3 months
[OVIRT-3.5-TEST-DAY-3] : fence kdump integration
by Eli Mesika
I was testing fence kdump integration.
Due to the following issues I was not able to test over JSON RPC, so all tests were done with XML RPC:
1) Manually applied: http://gerrit.ovirt.org/#/c/32843 (this was not yet included
in 3.5 and causes an exception in VDSM when fenceNode is called)
2) JSON: does not return status correctly: Test Succeeded, null
Therefore, cannot stop/start/restart the host.
Related to engine patch: http://gerrit.ovirt.org/#/c/32855/
(not yet back-ported to 3.5; a blocker IMO)
--------------------------------------------------------------
ALL tests below should be redone after resolving the JSON issues!
--------------------------------------------------------------
-------------------
General comment
-------------------
All fence kdump flows relate to a scenario in which a host becomes non-responsive.
In the case of a manual PM action (stop/restart), fence kdump does not take place.
I think this should be fully documented, in order to prevent misunderstanding when the kdump flag
is checked and the host is rebooted/stopped manually while kdumping, which will result in losing the dump file.
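For context, the host side of this flow lives in /etc/kdump.conf, which is
updated during host deployment when kdump detection is enabled. A sketch of
what the relevant directives typically look like (engine.example.com and the
timing values are illustrative, not taken from this setup):

    # /etc/kdump.conf (excerpt)
    fence_kdump_args -p 7410 -f auto -c 0 -i 10   # default fence_kdump port and send interval
    fence_kdump_nodes engine.example.com          # destination for the fence_kdump messages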
*********************************
Tests using XML RPC
*********************************
------------------------
Installation tests :
------------------------
1) Adding a host with Detect kdump flow set to on and without the crashkernel command line parameter
Result: host installation is OK, but a warning message is displayed in the Events tab and Audit log
TEST PASSED
2) Adding a host with Detect kdump flow set to on, with the crashkernel command line parameter, but without the
required version of the kexec-tools package
Result: host installation is OK, but a warning message is displayed in the Events tab and Audit log
TEST PASSED
3) Adding a host with Detect kdump flow set to on, with the crashkernel command line parameter and with the required
version of the kexec-tools package
Result: host installation is OK; in the General tab of the host detail view you should see Kdump Status: Enabled
TEST PASSED
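The three installation cases above can be verified on the host with two quick
checks (a sketch; the exact required kexec-tools version depends on the
distribution):

    grep -o 'crashkernel=[^ ]*' /proc/cmdline   # present only when the crashkernel parameter is set
    rpm -q kexec-tools                          # must meet the minimum version fence_kdump needs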
------------------------
Kdump detection tests:
------------------------
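All the crashdumping scenarios below presumably rely on forcing a kernel
panic; the usual (intentionally destructive) sysrq trigger looks like this:

    echo 1 > /proc/sys/kernel/sysrq   # enable the sysrq interface
    echo c > /proc/sysrq-trigger      # immediate kernel crash; kdump takes over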
1) Crashdumping a host with kdump detection disabled
Prerequisites: host was successfully deployed with Detect kdump flow set to off, fence_kdump listener
is running
Result: Host changes its status Up -> Connecting -> Non Responsive -> Reboot -> Non Responsive -> Up,
hard fencing is executed
TEST PASSED
2) Crashdumping a host with kdump detection enabled
Prerequisites: host was successfully deployed with Detect kdump flow set to on, fence_kdump listener
is running
Result: Host changes its status Up -> Connecting -> Non Responsive -> Kdumping -> Non Responsive -> Up,
hard fencing is not executed, and there are messages in the Events tab: Kdump flow detected on host and Kdump
flow finished on host
TEST PASSED
3) Crashdumping a host with kdump detection enabled but fence_kdump listener down
Prerequisites: host was successfully deployed with Detect kdump flow set to on, fence_kdump listener
is not running
Result: Host changes its status Up -> Connecting -> Non Responsive -> Reboot -> Non Responsive -> Up,
hard fencing is executed, and there's a message in the Events tab: Kdump detection for host had started,
but fence_kdump listener is not running
TEST PASSED
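A quick way to confirm the engine-side listener state for tests like this one
(assuming the ovirt-fence-kdump-listener service name shipped with 3.5):

    service ovirt-fence-kdump-listener status   # must be running for kdump detection to work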
4) Host with kdump detection enabled, fence_kdump listener is running, but network between engine
and host is down
Prerequisites: host was successfully deployed with Detect kdump flow set to on, fence_kdump listener
is running; alter the firewall rules on the engine to drop everything coming from the host's IP address
Result: Host changes its status Up -> Connecting -> Non Responsive -> Reboot -> Non Responsive -> Up,
hard fencing is executed, and there's a message in the Events tab: Kdump flow not detected on host
TEST PASSED
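The "network down" condition in this test can be simulated on the engine with
a single iptables rule (HOST_IP is a placeholder):

    iptables -I INPUT -s HOST_IP -j DROP   # drop everything coming from the host
    iptables -D INPUT -s HOST_IP -j DROP   # remove the rule after the test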
5) Crashdumping a host with kdump detection enabled, fence_kdump listener is running, stop fence_kdump
listener during kdump
Prerequisites: host was successfully deployed with Detect kdump flow set to on, fence_kdump listener
is running
Actions: When host status is changed to Kdumping, stop fence_kdump listener
Result: Host changes its status Up -> Connecting -> Non Responsive -> Kdumping -> Reboot -> Non Responsive
-> Up, hard fencing is executed, and there are messages in the Events tab: Kdump flow detected on host and Kdump
detection for host had started, but fence_kdump listener is not running
TEST PASSED. I got this message in the event log:
Unable to determine if Kdump is in progress on host 'pluto-vdsf', because fence_kdump listener is not running.
Is this OK?
6) Crashdumping a host with kdump detection enabled, fence_kdump listener is running,
restart engine during kdump
Prerequisites: host was successfully deployed with Detect kdump flow set to on,
fence_kdump listener is running
Actions: When host status is changed to Kdumping, restart engine
Result: Host changes its status Up -> Connecting -> Non Responsive -> Kdumping, hard fencing is not
executed, and there are messages in the Events tab: Kdump flow detected on host; after the engine restart the host
stays in Kdumping status for the period of DisableFenceAtStartupInSec seconds, after which there
are messages in the Events tab: Kdump flow detected on host and Kdump flow finished on host, and the host
changes status Kdumping -> Non Responsive -> Up
TEST PASSED. I got only this message in the event log:
Kdump flow is in progress on host 'pluto-vdsf'.
Is this OK?
Thanks
Eli Mesika
10 years, 3 months