REST API - Problem when trying to retrieve vms list
by jaumotte, styve
Hi everybody,
For a few days now I can't get the VMs list from the REST API. I always get the same response when I try https://myengine.mydomain/api/vms :
<fault>
<reason>Operation Failed</reason>
</fault>
I suspect that some property of a new VM is malformed, but I can't identify which one.
If I ask https://myengine.mydomain/api/vms?search=dev , VMs with dev in their name are returned.
If I ask https://myengine.mydomain/api/vms?search=xtypo , Operation Failed is returned!
I looked at the log on the engine, but I didn't find any answers.
If anyone has any ideas... thank you!!
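One way to narrow down which VM breaks the mapping, since filtered searches still work, is to walk the name space with wildcard searches and note which prefixes fail. A rough sketch (assuming Python's requests package and placeholder admin credentials; the search syntax is the same one the admin portal uses):

import requests

API = "https://myengine.mydomain/api/vms"
AUTH = ("admin@internal", "password")  # placeholder credentials

for c in "abcdefghijklmnopqrstuvwxyz0123456789":
    # Ask only for the VMs whose name starts with this character.
    r = requests.get(API, params={"search": "name=%s*" % c},
                     auth=AUTH, verify=False)
    if "Operation Failed" in r.text:
        print("a VM whose name starts with %r triggers the failure" % c)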
Here is the log:
2016-02-18 16:43:03,591 ERROR [org.ovirt.engine.api.restapi.resource.validation.MappingExceptionMapper] (default task-23) [] Mapping exception while processing "GET" request for path "/vms"
2016-02-18 16:43:03,591 ERROR [org.ovirt.engine.api.restapi.resource.validation.MappingExceptionMapper] (default task-23) [] Exception: org.ovirt.engine.api.restapi.utils.MappingException: java.lang.reflect.InvocationTargetException
at org.ovirt.engine.api.restapi.types.MappingLocator$MethodInvokerMapper.map(MappingLocator.java:155) [restapi-types.jar:]
at org.ovirt.engine.api.restapi.resource.AbstractBackendResource.map(AbstractBackendResource.java:65) [restapi-jaxrs.jar:]
at org.ovirt.engine.api.restapi.resource.AbstractBackendResource.map(AbstractBackendResource.java:61) [restapi-jaxrs.jar:]
at org.ovirt.engine.api.restapi.resource.BackendVmsResource.mapCollection(BackendVmsResource.java:570) [restapi-jaxrs.jar:]
at org.ovirt.engine.api.restapi.resource.BackendVmsResource.list(BackendVmsResource.java:94) [restapi-jaxrs.jar:]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.8.0_71]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [rt.jar:1.8.0_71]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_71]
at java.lang.reflect.Method.invoke(Method.java:497) [rt.jar:1.8.0_71]
at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:137) [resteasy-jaxrs-3.0.10.Final.jar:]
at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:296) [resteasy-jaxrs-3.0.10.Final.jar:]
at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:250) [resteasy-jaxrs-3.0.10.Final.jar:]
at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:237) [resteasy-jaxrs-3.0.10.Final.jar:]
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:356) [resteasy-jaxrs-3.0.10.Final.jar:]
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:179) [resteasy-jaxrs-3.0.10.Final.jar:]
at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:220) [resteasy-jaxrs-3.0.10.Final.jar:]
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56) [resteasy-jaxrs-3.0.10.Final.jar:]
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51) [resteasy-jaxrs-3.0.10.Final.jar:]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) [jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final]
at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:86) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:130) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at org.ovirt.engine.api.common.invocation.CurrentFilter.doFilter(CurrentFilter.java:66) [interface-common-jaxrs.jar:]
at org.ovirt.engine.api.common.invocation.CurrentFilter.doFilter(CurrentFilter.java:48) [interface-common-jaxrs.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at org.ovirt.engine.core.aaa.filters.RestApiSessionMgmtFilter.doFilter(RestApiSessionMgmtFilter.java:81) [aaa.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at org.ovirt.engine.core.aaa.filters.EnforceAuthFilter.doFilter(EnforceAuthFilter.java:39) [aaa.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at org.ovirt.engine.core.aaa.filters.LoginFilter.doFilter(LoginFilter.java:75) [aaa.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at org.ovirt.engine.core.aaa.filters.NegotiationFilter.doFilter(NegotiationFilter.java:113) [aaa.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at org.ovirt.engine.core.aaa.filters.BasicAuthenticationFilter.doFilter(BasicAuthenticationFilter.java:90) [aaa.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at org.ovirt.engine.core.aaa.filters.SessionValidationFilter.doFilter(SessionValidationFilter.java:77) [aaa.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at org.ovirt.engine.core.aaa.filters.EngineSessionTokenAuthenticationFilter.doFilter(EngineSessionTokenAuthenticationFilter.java:31) [aaa.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at org.ovirt.engine.core.aaa.filters.RestApiSessionValidationFilter.doFilter(RestApiSessionValidationFilter.java:35) [aaa.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at org.ovirt.engine.api.common.security.CSRFProtectionFilter.doFilter(CSRFProtectionFilter.java:111) [interface-common-jaxrs.jar:]
at org.ovirt.engine.api.common.security.CSRFProtectionFilter.doFilter(CSRFProtectionFilter.java:102) [interface-common-jaxrs.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at org.ovirt.engine.api.common.security.CORSSupportFilter.doFilter(CORSSupportFilter.java:183) [interface-common-jaxrs.jar:]
at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:60) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:132) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:85) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:51) [undertow-core-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46) [undertow-core-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:56) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:58) [undertow-core-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:70) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76) [undertow-core-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.8.Final.jar:1.1.8.Final]
at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) [undertow-core-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:261) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:248) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:77) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:167) [undertow-servlet-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:199) [undertow-core-1.1.8.Final.jar:1.1.8.Final]
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:761) [undertow-core-1.1.8.Final.jar:1.1.8.Final]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_71]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_71]
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor299.invoke(Unknown Source) [:1.8.0_71]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_71]
at java.lang.reflect.Method.invoke(Method.java:497) [rt.jar:1.8.0_71]
at org.ovirt.engine.api.restapi.types.MappingLocator$MethodInvokerMapper.map(MappingLocator.java:150) [restapi-types.jar:]
... 83 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
at org.ovirt.engine.api.restapi.types.VersionMapper.fromKernelVersionString(VersionMapper.java:52) [restapi-types.jar:]
at org.ovirt.engine.api.restapi.types.VmMapper.map(VmMapper.java:446) [restapi-types.jar:]
at org.ovirt.engine.api.restapi.types.VmMapper.map(VmMapper.java:330) [restapi-types.jar:]
... 87 more
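The last "Caused by" points at the real failure: ArrayIndexOutOfBoundsException: 1 in VersionMapper.fromKernelVersionString means the mapper indexed element 1 of an array with fewer than two elements, split from some VM's reported kernel version string. A minimal Python analogue of that kind of parsing (hypothetical, not the actual oVirt source):

def from_kernel_version_string(version):
    # e.g. "3.10.0-327.el7.x86_64" -> (3, 10)
    parts = version.split(".")
    major = int(parts[0])
    # IndexError here (Java: ArrayIndexOutOfBoundsException) when the
    # guest reports a version with no second dot-separated field.
    minor = int(parts[1])
    return major, minor

from_kernel_version_string("3.10.0-327.el7.x86_64")  # fine
from_kernel_version_string("4")  # raises IndexError: list index out of range

So a VM whose guest agent reports an unusually short kernel version (e.g. a bare "4" with no dots) would trip exactly this, which fits the suspicion that one of the VMs matched by "xtypo" carries a malformed property.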
[ANN] oVirt 3.6.3 Fourth Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability
of the Fourth Release Candidate of oVirt 3.6.3 for testing, as of February
24th, 2016
This release is available now for Fedora 22,
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux >= 7.2, CentOS Linux >= 7.2 (or similar).
This release supports Hypervisor Hosts running
Red Hat Enterprise Linux >= 7.2, CentOS Linux >= 7.2 (or similar) and
Fedora 22.
Highly experimental support for Debian 8.3 Jessie has been added too.
This release candidate includes updated packages for:
- ovirt-engine
- vdsm
- ovirt-hosted-engine-ha
- ovirt-hosted-engine-setup
This release of oVirt 3.6.3 includes numerous bug fixes.
See the release notes [1] for an initial list of the new features and bugs
fixed.
Please refer to release notes [1] for Installation / Upgrade instructions.
A new oVirt Live ISO will be available soon [2].
Please note that mirrors [3] usually need one day before being
synchronized.
* Read more about the oVirt 3.6.3 release highlights:
http://www.ovirt.org/release/3.6.3/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Read more about oVirt Project community events:
http://www.ovirt.org/Upcoming_events
[1] http://www.ovirt.org/release/3.6.3/
[2] http://resources.ovirt.org/pub/ovirt-3.6-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
RHEV-H 7.2 Beta Install
by Christopher Young
I know this is the ovirt-list, but I've had great success with help
here as I have been running a couple of ovirt instances for some time
(as part of a lab/testing).
In any case, I'm trying to install the RHEV-H beta (7.2 latest ISO),
and for whatever reason, my install gets to 22% (right after
partitioning up the local drive, I believe) and fails.
I have no idea how to troubleshoot this. I've tried changing install
options (I've been installing via PXE) to no avail, and I notice that
there don't appear to be any virtual terminals to switch to in order to
look at things and/or see where things failed.
The errors that I get say something along the lines of:
unexpected EOF while looking for matching ''
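For what it's worth, that message is bash's own complaint about an unterminated single quote, so the failure is likely a quoting error in a script the installer runs at that stage (a kickstart %pre/%post snippet or an appended boot parameter would be a guess, rather than the partitioning itself). A tiny reproduction of the exact wording (Python used only to drive bash):

import subprocess

# An unterminated quote makes bash emit the installer's error message:
subprocess.run(["bash", "-c", "echo 'unterminated"])
# prints: bash: -c: line 1: unexpected EOF while looking for matching `''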
I'll try and get more details, but I've attached a screenshot from the console.
Any help on HOW to troubleshoot this, pull logs, etc.. would be most
appreciated as I'd like to get this moving forward.
Thanks for all of your hard work and the help of the community!
-- Chris
adding network via ovirt-shell
by Bill James
I'm trying to add a network using ovirt-shell.
It adds fine to the GUI but I can't add the network to a host because it
isn't listed in the "Setup Host Networks" dialog.
A network added using the GUI, however, shows up fine in the dialog window.
What am I missing?
[oVirt shell (connected)]# add network --data_center-name Default --name
Vlan7 --description '10.176.7' --vlan-id 7
I'm also not sure how to add a "Network Label" via the CLI. (Adding it
via the GUI doesn't make the interface usable in the dialog window; see
the sketch after the network listing below.)
ovirt-engine-3.6.2.6-1.el7.centos.noarch
The problem networks are Vlan5 & Vlan7.
[oVirt shell (connected)]# list networks
id : 80b5bffb-afb7-4c14-b228-e505b6a93152
name : Gluster-KS
description: kickstart
id : 00000000-0000-0000-0000-000000000009
name : ovirtmgmt
description: Management Network
id : 7d9d55d9-1158-4a64-a2b9-07b763ae2b6d
name : vlan1
description: 10.176.1
id : e81540f9-99f2-4826-b483-b9c08031cdaa
name : Vlan5
description: 10.176.5
id : 6d1bf691-d438-48e2-a230-21360c609889
name : Vlan6
description: 10.176.6
id : 2c7db3f8-151f-4c5a-a88d-c840f7f2ce57
name : Vlan7
description: 10.176.7
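A likely cause (a guess, since the listing above doesn't show cluster assignments): "add network --data_center-name ..." creates the network in the data center but does not assign it to any cluster, and "Setup Host Networks" only offers networks assigned to the host's cluster. A sketch using the Python SDK v3 (ovirt-engine-sdk-python; URL, credentials and the label id are placeholders, and the label call in particular is from memory, so double-check it against the SDK docs):

from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url="https://myengine.mydomain/api",
          username="admin@internal", password="secret", insecure=True)

cluster = api.clusters.get(name="Default")
network = api.networks.get(name="Vlan7")

# Assign the data-center network to the cluster so it shows up in
# "Setup Host Networks" for the cluster's hosts.
cluster.networks.add(network)

# Attach a network label (assumed 3.6 label support; untested).
network.labels.add(params.Label(id="lbl7"))

api.disconnect()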
Importing data domain
by SATHEESARAN
Hi All,
I am using oVirt 3.6.3 with 2 data domains: domain1 to store OS disks
and the other to store additional disks for the VMs.
I am more concerned about the additional domain, and I have rsynced the
second data domain to another file-based storage.
After some time, I tried to attach that file-based storage to another
oVirt DC.
When trying importing the data domain, I see a warning like "Storage
Domain(s) are already attached to a Data Center. Approving this
operation might cause data corruption if both Data Centers are active."
Any clue why I see this warning?
Also, is there any better way to import an existing data domain?
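Regarding the warning itself: the copied domain's metadata still records the pool it was attached to, and rsync carries that record along, which is presumably what the new DC is reacting to. A hedged way to inspect the copy (key names as commonly seen in dom_md/metadata; the path is a placeholder):

# Print the pool/domain identity recorded in the rsynced copy's metadata.
md_path = "/path/to/copy/<sd-uuid>/dom_md/metadata"  # <sd-uuid>: placeholder
with open(md_path) as f:
    for line in f:
        if line.startswith(("POOL_UUID", "SDUUID")):
            print(line.strip())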
-- Satheesaran
ovirt - can't attach master domain II
by paf1@email.cz
Hi,
I found the main (maybe) problem: an IO error (-5) when accessing the
"ids" file.
This file is not accessible via NFS, but locally it is.
How can I fix it??
regs.
Pavel
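A quick way to confirm where the -5 (EIO) comes from is to read the ids file directly through the mount (path taken from the log below; run on the host as root):

# Read the first sector of the ids file through the gluster mount;
# a broken mount raises OSError: [Errno 5] Input/output error here.
path = ("/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/"
        "88adbd49-62d6-45b1-9992-b04464a04112/dom_md/ids")
with open(path, "rb") as f:
    data = f.read(512)
print("read %d bytes through the mount" % len(data))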
# sanlock client log_dump
....
0 flags 1 timeout 0
2016-02-24 02:01:10+0100 3828 [12111]: s1316 lockspace
88adbd49-62d6-45b1-9992-b04464a04112:1:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md/ids:0
2016-02-24 02:01:10+0100 3828 [12111]: cmd_add_lockspace 4,15 async done 0
2016-02-24 02:01:10+0100 3828 [19556]: s1316 delta_acquire begin
88adbd49-62d6-45b1-9992-b04464a04112:1
2016-02-24 02:01:10+0100 3828 [19556]: 88adbd49 aio collect 0
0x7fe4580008c0:0x7fe4580008d0:0x7fe458101000 result -5:0 match res
2016-02-24 02:01:10+0100 3828 [19556]: read_sectors delta_leader offset
0 rv -5
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md/ids
2016-02-24 02:01:10+0100 3828 [19556]: s1316 delta_acquire leader_read1
error -5
2016-02-24 02:01:11+0100 3829 [12111]: s1316 add_lockspace fail result -5
2016-02-24 02:01:12+0100 3831 [12116]: cmd_add_lockspace 4,15
7f52b697-c199-4f58-89aa-102d44327124:1:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids:0
flags 1 timeout 0
2016-02-24 02:01:12+0100 3831 [12116]: s1317 lockspace
7f52b697-c199-4f58-89aa-102d44327124:1:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids:0
2016-02-24 02:01:12+0100 3831 [12116]: cmd_add_lockspace 4,15 async done 0
2016-02-24 02:01:12+0100 3831 [19562]: s1317 delta_acquire begin
7f52b697-c199-4f58-89aa-102d44327124:1
2016-02-24 02:01:12+0100 3831 [19562]: 7f52b697 aio collect 0
0x7fe4580008c0:0x7fe4580008d0:0x7fe458101000 result -5:0 match res
2016-02-24 02:01:12+0100 3831 [19562]: read_sectors delta_leader offset
0 rv -5
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids
2016-02-24 02:01:12+0100 3831 [19562]: s1317 delta_acquire leader_read1
error -5
2016-02-24 02:01:13+0100 3831 [1321]: cmd_add_lockspace 4,15
0fcad888-d573-47be-bef3-0bc0b7a99fb7:1:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-BCK/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md/ids:0
flags 1 timeout 0
2016-02-24 02:01:13+0100 3831 [1321]: s1318 lockspace
0fcad888-d573-47be-bef3-0bc0b7a99fb7:1:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-BCK/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md/ids:0
2016-02-24 02:01:13+0100 3831 [1321]: cmd_add_lockspace 4,15 async done 0
2016-02-24 02:01:13+0100 3831 [19564]: s1318 delta_acquire begin
0fcad888-d573-47be-bef3-0bc0b7a99fb7:1
2016-02-24 02:01:13+0100 3831 [19564]: 0fcad888 aio collect 0
0x7fe4580008c0:0x7fe4580008d0:0x7fe458201000 result -5:0 match res
2016-02-24 02:01:13+0100 3831 [19564]: read_sectors delta_leader offset
0 rv -5
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-BCK/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md/ids
2016-02-24 02:01:13+0100 3831 [19564]: s1318 delta_acquire leader_read1
error -5
2016-02-24 02:01:13+0100 3832 [12116]: s1317 add_lockspace fail result -5
2016-02-24 02:01:14+0100 3832 [1321]: s1318 add_lockspace fail result -5
2016-02-24 02:01:19+0100 3838 [12106]: cmd_add_lockspace 4,15
3da46e07-d1ea-4f10-9250-6cbbb7b94d80:1:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P5/3da46e07-d1ea-4f10-9250-6cbbb7b94d80/dom_md/ids:0
flags 1 timeout 0
2016-02-24 02:01:19+0100 3838 [12106]: s1319 lockspace
3da46e07-d1ea-4f10-9250-6cbbb7b94d80:1:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P5/3da46e07-d1ea-4f10-9250-6cbbb7b94d80/dom_md/ids:0
2016-02-24 02:01:19+0100 3838 [12106]: cmd_add_lockspace 4,15 async done 0
2016-02-24 02:01:19+0100 3838 [19638]: s1319 delta_acquire begin
3da46e07-d1ea-4f10-9250-6cbbb7b94d80:1
2016-02-24 02:01:19+0100 3838 [19638]: 3da46e07 aio collect 0
0x7fe4580008c0:0x7fe4580008d0:0x7fe458101000 result -5:0 match res
2016-02-24 02:01:19+0100 3838 [19638]: read_sectors delta_leader offset
0 rv -5
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P5/3da46e07-d1ea-4f10-9250-6cbbb7b94d80/dom_md/ids
2016-02-24 02:01:19+0100 3838 [19638]: s1319 delta_acquire leader_read1
error -5
2016-02-24 02:01:20+0100 3839 [12106]: s1319 add_lockspace fail result -5
2016-02-24 02:01:20+0100 3839 [1320]: cmd_add_lockspace 4,15
88adbd49-62d6-45b1-9992-b04464a04112:1:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md/ids:0
flags 1 timeout 0
2016-02-24 02:01:20+0100 3839 [1320]: s1320 lockspace
88adbd49-62d6-45b1-9992-b04464a04112:1:/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md/ids:0
2016-02-24 02:01:20+0100 3839 [1320]: cmd_add_lockspace 4,15 async done 0
2016-02-24 02:01:20+0100 3839 [19658]: s1320 delta_acquire begin
88adbd49-62d6-45b1-9992-b04464a04112:1
2016-02-24 02:01:20+0100 3839 [19658]: 88adbd49 aio collect 0
0x7fe4580008c0:0x7fe4580008d0:0x7fe458101000 result -5:0 match res
2016-02-24 02:01:20+0100 3839 [19658]: read_sectors delta_leader offset
0 rv -5
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md/ids
2016-02-24 02:01:20+0100 3839 [19658]: s1320 delta_acquire leader_read1
error -5
2016-02-24 02:01:21+0100 3840 [1320]: s1320 add_lockspace fail result -5
Fwd: Re: ovirt - can't attach master domain III
by paf1@email.cz
Hi,
after a lot of tests I get:
2016-02-24 11:38:05+0100 7406 [25824]: cmd_add_lockspace 3,10
ff71b47b-0f72-4528-9bfe-c3da888e47f0:4:/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P2/ff71b47b-0f72-4528-9bfe-c3da888e47f0/dom_md/ids:0
flags 1 timeout 0
2016-02-24 11:38:05+0100 7406 [25824]: s2256 lockspace
ff71b47b-0f72-4528-9bfe-c3da888e47f0:4:/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P2/ff71b47b-0f72-4528-9bfe-c3da888e47f0/dom_md/ids:0
2016-02-24 11:38:05+0100 7406 [25824]: cmd_add_lockspace 3,10 async done 0
2016-02-24 11:38:05+0100 7406 [26186]: open error -2
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P2/ff71b47b-0f72-4528-9bfe-c3da888e47f0/dom_md/ids
2016-02-24 11:38:05+0100 7406 [26186]: s2256 open_disk
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P2/ff71b47b-0f72-4528-9bfe-c3da888e47f0/dom_md/ids
error -2
What's wrong??
thx.
Pa.
On 24.2.2016 08:14, Nir Soffer wrote:
> On Wed, Feb 24, 2016 at 8:53 AM, paf1@email.cz wrote:
>
>     Hi,
>     it seems that the sanlock daemon has a problem reading the empty
>     "ids" file.
>     How can I regenerate this "ids" file to get 2k rows of data??
>     It's the root problem preventing the "master domain" and then the
>     "datacenter" from coming up.
>
>
> You should understand why the ids file is empty and fix the root cause.
>
> To recover your ids files, you can follow the instructions here:
> http://lists.ovirt.org/pipermail/users/2016-February/038046.html
>
> Nir
>
>
> regs.
> Pa.
>
> _______________________________________________
> Users mailing list
>     Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
Fwd: Re: ovirt - can't attach master domain III
by paf1@email.cz
Hi Nir,
it isn't working, or I did something wrong.
1) No traffic on any storage in gluster.
I tried two ways:
A: - stopped the master domain (2KVM12-P2) from the GUI (maintenance was
not allowed)
- tried to mount it locally on one node
# mount -t glusterfs localhost:/2KVM12-P2 /mnt ==> error -19
B: go to
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P2/f71b47b-0f72-4528-9bfe-c3da888e47f0/dom_md/
- removed the empty ids file
- sanlock direct init -s <f7......>:0:ids:0 - from the manual
- restarted sanlockd
2) Neither way was successful.
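For the record, the "open error -2" from the earlier message is ENOENT: once the ids file is removed, sanlock has nothing left to open, so the file has to be recreated (owned by vdsm:kvm) before direct init. A sketch of the recovery from the linked thread, with the UUID and mount path taken from the logs; run as root and verify against your setup:

import os
import subprocess

sd_uuid = "ff71b47b-0f72-4528-9bfe-c3da888e47f0"
ids = ("/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P2/"
       "%s/dom_md/ids" % sd_uuid)

open(ids, "a").close()  # recreate the (empty) ids file if it was removed
os.chown(ids, 36, 36)   # vdsm:kvm
subprocess.check_call(["sanlock", "direct", "init",
                       "-s", "%s:0:%s:0" % (sd_uuid, ids)])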
regs.Pa.
On 24.2.2016 08:14, Nir Soffer wrote:
> On Wed, Feb 24, 2016 at 8:53 AM, paf1@email.cz wrote:
>
>     Hi,
>     it seems that the sanlock daemon has a problem reading the empty
>     "ids" file.
>     How can I regenerate this "ids" file to get 2k rows of data??
>     It's the root problem preventing the "master domain" and then the
>     "datacenter" from coming up.
>
>
> You should understand why the ids file is empty and fix the root cause.
>
> To recover your ids files, you can follow the instructions here:
> http://lists.ovirt.org/pipermail/users/2016-February/038046.html
>
> Nir
>
>
> regs.
> Pa.
>
> _______________________________________________
> Users mailing list
>     Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
[hosted-engine] Error creating a glusterfs storage domain
by Wee Sritippho
Hi,
I'm trying to deploy an oVirt Hosted Engine environment using this
glusterfs volume:
# gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: 37bba03b-7276-421a-8960-81e28196ebde
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: host01.ovirt.forest.go.th:/data/brick1/gv0
Brick2: host03.ovirt.forest.go.th:/data/brick1/gv0
Brick3: host02.ovirt.forest.go.th:/data/brick1/gv0
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
performance.readdir-ahead: on
But the deployment failed with this error message:
[ ERROR ] Failed to execute stage 'Misc configuration': Error creating a
storage domain: ('storageType=7,
sdUUID=be5f66d8-57ef-43c8-90a5-e9132e0c95b4, domainName=hosted_storage,
domClass=1, typeSpecificArg=host01.ovirt.forest.go.th:/gv0 domVersion=3',)
I tried to figure out what is happening via the log files:
Line ~7243 of vdsm.log
Line ~2930 of ovirt-hosted-engine-setup-20160223204857-585hqv.log
But I couldn't make sense of them at all.
Please guide me on how to solve this problem.
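Before rerunning the deploy, one check that often narrows this down is to mount the volume by hand and verify that uid/gid 36 (vdsm:kvm, matching the storage.owner-* options above) can write to it. A sketch (run as root on the deploying host; host and volume names taken from this post):

import os
import subprocess
import tempfile

mnt = tempfile.mkdtemp()
subprocess.check_call(["mount", "-t", "glusterfs",
                       "host01.ovirt.forest.go.th:/gv0", mnt])
try:
    os.setegid(36)            # kvm
    os.seteuid(36)            # vdsm
    probe = os.path.join(mnt, "perm_test")
    open(probe, "w").close()  # fails with EACCES if ownership is wrong
    os.remove(probe)
    print("volume mounts and is writable by uid/gid 36")
finally:
    os.seteuid(0)
    os.setegid(0)
    subprocess.check_call(["umount", mnt])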
Here is my environment:
CentOS Linux release 7.2.1511 (Core)
ovirt-hosted-engine-setup-1.3.2.3-1.el7.centos.noarch
vdsm-4.17.18-1.el7.noarch
glusterfs-3.7.8-1.el7.x86_64
Thank you,
Wee