recover dom_md
by Alastair Neil
I have an oVirt 4 cluster with two Gluster storage domains. The old
domain is on a 1G network and the new one is on 10G. While migrating disks
I accidentally removed the dom_md directory in the new storage domain.
Is there a process to recreate this directory? I have moved about 28 disk
images; the domain is marked as down, but the VMs with disks in there are
still up.
I have seen examples of rebuilding the dom_md/ids file using sanlock direct
init, but I do not know if it is possible to rebuild the entire directory
this way, and before I commit myself to shutting down all the hosts I'd
like to make sure there is a good chance of success.
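For reference, the commonly cited ids rebuild is a sanlock-level operation
only; a minimal sketch, assuming a file-based (Gluster/NFS) domain whose
mount path and storage-domain UUID are placeholders here:

# recreate an empty ids file in dom_md (example path, adjust to your mount)
touch /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd-uuid>/dom_md/ids
chown vdsm:kvm /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd-uuid>/dom_md/ids
# initialize the sanlock lockspace on it (lockspace name = storage domain UUID)
sanlock direct init -s <sd-uuid>:0:/rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd-uuid>/dom_md/ids:0

Note this only recreates the ids lease file; dom_md also held leases,
metadata, inbox and outbox, which this does not bring back.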
If this is not possible, is there a way of importing the disk images? I
tried copying them to the export domain but they do not show up in the
imports.
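Whether copied disks show up depends on the layout the export domain
expects; the import list is built from the OVF files, not from loose
images. A sketch of the layout an NFS/Gluster export domain uses (UUIDs
are placeholders):

<export-sd-uuid>/images/<disk-group-uuid>/<volume-uuid>         # the image payload
<export-sd-uuid>/images/<disk-group-uuid>/<volume-uuid>.meta    # volume metadata
<export-sd-uuid>/master/vms/<vm-uuid>/<vm-uuid>.ovf             # VM definition the import dialog reads

So bare image files dropped under images/ typically stay invisible until a
matching .meta file and an OVF referencing them exist.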
-Alastair
fakevdsm vs ovirt engine 4.0.4
by joost@familiealbers.nl
Hi All,
I am trying to run load tests against my newly installed oVirt engine,
version 4.0.4.
I can run fakevdsm as follows (minor changes to pom.xml, mainly the jetty
plugin version).
I also changed vdsm-jsonrpc-java-client to match that of the server:
<groupId>org.ovirt.vdsm-jsonrpc-java</groupId>
<artifactId>vdsm-jsonrpc-java-client</artifactId>
<version>1.2.5</version>
I can run the app using:
mvn jetty:run -Dfake.host=0.0.0.0 -DjsonListenPort=54321
-DvdsmPort=54322
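Since the engine errors further down complain about an "Unrecognized
message received", it may be worth double-checking that the client library
version really matches what the engine ships; a hedged way to check, on
the engine host:

rpm -q vdsm-jsonrpc-java
# then keep <version> in fakevdsm's pom.xml aligned with that release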
fakevdsm runs but keeps looping over the following operations. I have not
added VMs yet.
I sincerely hope someone can help me; I really need to stress test this
install urgently, and fakevdsm looks like the perfect tool for this.
thanks, joost
2016-10-17 22:24:59,505 CONNECT
accept-version:1.2
heart-beat:0,21234
host:null
2016-10-17 22:24:59,506 CONNECT
accept-version:1.2
heart-beat:0,21234
host:null
2016-10-17 22:24:59,507 CONNECTED
heart-beat:21234,0
session:196ed7d6-5125-4608-8299-bcaef308db87
2016-10-17 22:24:59,507 Message sent: CONNECTED
heart-beat:21234,0
session:196ed7d6-5125-4608-8299-bcaef308db87
2016-10-17 22:24:59,636 SUBSCRIBE
destination:jms.topic.vdsm_responses
ack:auto
id:9c734bb3-2dc4-4181-a049-2b77133b521f
SEND
destination:jms.topic.vdsm_requests
reply-to:jms.topic.vdsm_responses
content-length:105
{"jsonrpc":"2.0","method":"Host.getCapabilities","params":{},"id":"9f78265b-21c3-4e77-9b11-7c126c2d84ed"}
2016-10-17 22:24:59,636 SUBSCRIBE
destination:jms.topic.vdsm_responses
ack:auto
id:9c734bb3-2dc4-4181-a049-2b77133b521f
2016-10-17 22:24:59,636 ACK
id:9c734bb3-2dc4-4181-a049-2b77133b521f
2016-10-17 22:24:59,636 Message sent: ACK
id:9c734bb3-2dc4-4181-a049-2b77133b521f
2016-10-17 22:24:59,636 SEND
destination:jms.topic.vdsm_requests
reply-to:jms.topic.vdsm_responses
content-length:105
{"jsonrpc":"2.0","method":"Host.getCapabilities","params":{},"id":"9f78265b-21c3-4e77-9b11-7c126c2d84ed"}
2016-10-17 22:24:59,637 client policy identifier null
2016-10-17 22:24:59,714 Request is Host.getCapabilities got response
{"jsonrpc":"2.0","result":{"version_name":"Snow
Man","operatingSystem":{"name":"Fedora","release":"1","version":"17"},"cpuSpeed":"1200.000","clusterLevels":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"],"hooks":{},"ISCSIInitiatorName":"iqn.1994-05.com.example:ef52ec17bb0","cpuSockets":"1","kvmEnabled":"true","reservedMem":"321","lastClientIface":"ovirtmgmt","numaNodes":{"1":{"cpus":[0,2,4,6,8,10,12,14],"totalMemory":3988},"0":{"cpus":[1,3,5,7,9,11,13,15],"totalMemory":3988}},"cpuFlags":"fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ss,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,rep_good,nopl,eagerfpu,pni,pclmulqdq,vmx,ssse3,fma,cx16,pcid,sse4_1,sse4_2,x2apic,movbe,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,hypervisor,lahf_lm,abm,tpr_shadow,vnmi,flexpriority,ept,fsgsbase,bmi1,avx2,smep,bmi2,erms,invpcid,xsaveopt,model_Haswell-noTSX,model_Nehalem,model_Conroe,model_Penryn,model_Westmere,model_SandyBridge","HBAInventory":{"iSCSI":[{"InitiatorName":"iqn.1994-05.com.ex
ample:ef52ec17bb0"}],"FC":[]},"lastClient":"10.36.6.76","selinux":{"mode":"1"},"vlans":{},"software_version":"4.10","kdumpStatus":"1","emulatedMachines":["pc-0.10","pc-0.11","pc-0.12","pc-0.13","pc-0.14","pc-0.15","pc-1.0","pc-1.0","pc-i440fx-2.1","pseries-rhel7.2.0","pc-i440fx-rhel7.2.0","rhel6.4.0","rhel6.5.0","rhel6.6.0","rhel6.7.0","rhel6.8.0","rhel6.9.0","rhel7.0.0","rhel7.2.0","rhel7.5.0","pc","isapc"],"vmTypes":["kvm"],"software_revision":"0.141","bridges":{"ovirtmgmt":{"mtu":"1500","ports":["em1"],"gateway":"252.197.29.20","addr":"186.190.35.84","cfg":{"DELAY":"0","DEVICE":"ovirtmgmt","ONBOOT":"yes","BOOTPROTO":"dhcp","TYPE":"Ethernet"},"stp":"off","netmask":"255.255.252.0"}},"netConfigDirty":"False","autoNumaBalancing":"1","guestOverhead":"65","networks":{"ovirtmgmt":{"mtu":"1500","ports":["em1"],"iface":"ovirtmgmt","gateway":"10.34.63.254","bridged":true,"switch":"legacy","addr":"186.190.35.84","stp":"off","cfg":{"DELAY":"0","DEVICE":"ovirtmgmt","ONBOOT":"yes","BOOTPROTO":"
dhcp","TYPE":"Ethernet"},"netmask":"255.255.252.0"}},"memSize":"7976","rngSources":["RANDOM"],"management_ip":"","supportedENGINEs":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"],"cpuModel":"Intel(R)
Xeon(R) CPU E5606 @
2.13GHz","cpuCores":"4","supportedProtocols":["2.2","2.3"],"packages2":{"libvirt":{"buildtime":"1349642820","release":"2.fc17","version":"1.0.1"},"spice-server":{"buildtime":"1336983054","release":"5.fc17","version":"0.10.1"},"vdsm":{"buildtime":"1359653302","release":"0.141.gita11e8f2.fc17","version":"4.10.3"},"qemu-kvm":{"buildtime":"1349642820","release":"2.fc17","version":"1.0.1"},"qemu-img":{"buildtime":"1349642820","release":"2.fc17","version":"1.0.1"},"kernel":{"buildtime":"1357699251.0","release":"5.fc17.x86_64","version":"3.6.11"},"mom":{"buildtime":"1354824066","release":"1.fc17","version":"0.3.0"}},"uuid":"7e6d8c6a-ca0c-46d2-ae6e-3af293fa6c4a_80:0D:F4:97:64:9B:3A","nics":{"em1":{"mtu":"1500","speed":1000,"addr":"","hwaddr":"55:2B:B0:FD:88:55","cfg":{"NM_CONTROLLED":"yes","NETBOOT":"yes","DEVICE":"em1","NAME":"Boot
Disk","HWADDR":"55:2B:B0:FD:88:55","BRIDGE":"ovirtmgmt","UUID":"f1962c67-78ab-4b2b-8095-07354d2eba73","ONBOOT":"yes","BOOTPROTO":"dhcp","TYPE":"Ethernet"},"netmask":""},"em2":{"mtu":"1500","speed":1000,"addr":"","hwaddr":"F3:1F:F7:F0:29:76","cfg":{"NM_CONTROLLED":"yes","NETBOOT":"yes","DEVICE":"em2","HWADDR":"F3:1F:F7:F0:29:76","BRIDGE":"ovirtmgmt","UUID":"6f563109-b336-4091-96d8-18c2c323189e","ONBOOT":"no","BOOTPROTO":"dhcp","TYPE":"Ethernet"},"netmask":""}},"numaNodeDistance":{"1":["20","10"],"0":["10","20"]},"onlineCpus":[1,3,5,7,9,11,13,15,0,2,4,6,8,10,12,14],"cpuThreads":"4","bondings":{"bond4":{"mtu":"150","slaves":[],"addr":"","hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":""},"bond3":{"mtu":"150","slaves":[],"addr":"","hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":""},"bond0":{"mtu":"150","slaves":[],"addr":"","hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":""},"bond1":{"mtu":"150","slaves":[],"addr":"","hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":""},"bond2":{"mtu":"150
","slaves":[],"addr":"","hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":""}}},"id":"9f78265b-21c3-4e77-9b11-7c126c2d84ed"}
2016-10-17 22:24:59,726 MESSAGE
destination:jms.queue.reponses
content-length:4230
{"jsonrpc":"2.0","result":{"version_name":"Snow
Man","operatingSystem":{"name":"Fedora","release":"1","version":"17"},"cpuSpeed":"1200.000","clusterLevels":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"],"hooks":{},"ISCSIInitiatorName":"iqn.1994-05.com.example:ef52ec17bb0","cpuSockets":"1","kvmEnabled":"true","reservedMem":"321","lastClientIface":"ovirtmgmt","numaNodes":{"1":{"cpus":[0,2,4,6,8,10,12,14],"totalMemory":3988},"0":{"cpus":[1,3,5,7,9,11,13,15],"totalMemory":3988}},"cpuFlags":"fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ss,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,rep_good,nopl,eagerfpu,pni,pclmulqdq,vmx,ssse3,fma,cx16,pcid,sse4_1,sse4_2,x2apic,movbe,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,hypervisor,lahf_lm,abm,tpr_shadow,vnmi,flexpriority,ept,fsgsbase,bmi1,avx2,smep,bmi2,erms,invpcid,xsaveopt,model_Haswell-noTSX,model_Nehalem,model_Conroe,model_Penryn,model_Westmere,model_SandyBridge","HBAInventory":{"iSCSI":[{"InitiatorName":"iqn.1994-05.com.ex
ample:ef52ec17bb0"}],"FC":[]},"lastClient":"10.36.6.76","selinux":{"mode":"1"},"vlans":{},"software_version":"4.10","kdumpStatus":"1","emulatedMachines":["pc-0.10","pc-0.11","pc-0.12","pc-0.13","pc-0.14","pc-0.15","pc-1.0","pc-1.0","pc-i440fx-2.1","pseries-rhel7.2.0","pc-i440fx-rhel7.2.0","rhel6.4.0","rhel6.5.0","rhel6.6.0","rhel6.7.0","rhel6.8.0","rhel6.9.0","rhel7.0.0","rhel7.2.0","rhel7.5.0","pc","isapc"],"vmTypes":["kvm"],"software_revision":"0.141","bridges":{"ovirtmgmt":{"mtu":"1500","ports":["em1"],"gateway":"252.197.29.20","addr":"186.190.35.84","cfg":{"DELAY":"0","DEVICE":"ovirtmgmt","ONBOOT":"yes","BOOTPROTO":"dhcp","TYPE":"Ethernet"},"stp":"off","netmask":"255.255.252.0"}},"netConfigDirty":"False","autoNumaBalancing":"1","guestOverhead":"65","networks":{"ovirtmgmt":{"mtu":"1500","ports":["em1"],"iface":"ovirtmgmt","gateway":"10.34.63.254","bridged":true,"switch":"legacy","addr":"186.190.35.84","stp":"off","cfg":{"DELAY":"0","DEVICE":"ovirtmgmt","ONBOOT":"yes","BOOTPROTO":"
dhcp","TYPE":"Ethernet"},"netmask":"255.255.252.0"}},"memSize":"7976","rngSources":["RANDOM"],"management_ip":"","supportedENGINEs":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"],"cpuModel":"Intel(R)
Xeon(R) CPU E5606 @
2.13GHz","cpuCores":"4","supportedProtocols":["2.2","2.3"],"packages2":{"libvirt":{"buildtime":"1349642820","release":"2.fc17","version":"1.0.1"},"spice-server":{"buildtime":"1336983054","release":"5.fc17","version":"0.10.1"},"vdsm":{"buildtime":"1359653302","release":"0.141.gita11e8f2.fc17","version":"4.10.3"},"qemu-kvm":{"buildtime":"1349642820","release":"2.fc17","version":"1.0.1"},"qemu-img":{"buildtime":"1349642820","release":"2.fc17","version":"1.0.1"},"kernel":{"buildtime":"1357699251.0","release":"5.fc17.x86_64","version":"3.6.11"},"mom":{"buildtime":"1354824066","release":"1.fc17","version":"0.3.0"}},"uuid":"7e6d8c6a-ca0c-46d2-ae6e-3af293fa6c4a_80:0D:F4:97:64:9B:3A","nics":{"em1":{"mtu":"1500","speed":1000,"addr":"","hwaddr":"55:2B:B0:FD:88:55","cfg":{"NM_CONTROLLED":"yes","NETBOOT":"yes","DEVICE":"em1","NAME":"Boot
Disk","HWADDR":"55:2B:B0:FD:88:55","BRIDGE":"ovirtmgmt","UUID":"f1962c67-78ab-4b2b-8095-07354d2eba73","ONBOOT":"yes","BOOTPROTO":"dhcp","TYPE":"Ethernet"},"netmask":""},"em2":{"mtu":"1500","speed":1000,"addr":"","hwaddr":"F3:1F:F7:F0:29:76","cfg":{"NM_CONTROLLED":"yes","NETBOOT":"yes","DEVICE":"em2","HWADDR":"F3:1F:F7:F0:29:76","BRIDGE":"ovirtmgmt","UUID":"6f563109-b336-4091-96d8-18c2c323189e","ONBOOT":"no","BOOTPROTO":"dhcp","TYPE":"Ethernet"},"netmask":""}},"numaNodeDistance":{"1":["20","10"],"0":["10","20"]},"onlineCpus":[1,3,5,7,9,11,13,15,0,2,4,6,8,10,12,14],"cpuThreads":"4","bondings":{"bond4":{"mtu":"150","slaves":[],"addr":"","hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":""},"bond3":{"mtu":"150","slaves":[],"addr":"","hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":""},"bond0":{"mtu":"150","slaves":[],"addr":"","hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":""},"bond1":{"mtu":"150","slaves":[],"addr":"","hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":""},"bond2":{"mtu":"150
","slaves":[],"addr":"","hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":""}}},"id":"9f78265b-21c3-4e77-9b11-7c126c2d84ed"}
2016-10-17 22:24:59,785 Message sent: MESSAGE
content-length:4230
destination:jms.queue.reponses
<JsonRpcResponse id: "9f78265b-21c3-4e77-9b11-7c126c2d84ed" result:
{version_name=Snow Man, operatingSystem={name=Fedora, release=1,
version=17}, cpuSpeed=1200.000, clusterLevels=[3.0, 3.1, 3.2, 3.3, 3.4,
3.5, 3.6, 4.0, 4.1], hooks={},
ISCSIInitiatorName=iqn.1994-05.com.example:ef52ec17bb0, cpuSockets=1,
kvmEnabled=true, reservedMem=321, lastClientIface=ovirtmgmt,
numaNodes={1={cpus=[0, 2, 4, 6, 8, 10, 12, 14], totalMemory=3988},
0={cpus=[1, 3, 5, 7, 9, 11, 13, 15], totalMemory=3988}},
cpuFlags=fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ss,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,rep_good,nopl,eagerfpu,pni,pclmulqdq,vmx,ssse3,fma,cx16,pcid,sse4_1,sse4_2,x2apic,movbe,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,hypervisor,lahf_lm,abm,tpr_shadow,vnmi,flexpriority,ept,fsgsbase,bmi1,avx2,smep,bmi2,erms,invpcid,xsaveopt,model_Haswell-noTSX,model_Nehalem,model_Conroe,model_Penryn,model_Westmere,model_SandyBridge,
HBAInventory={iSCSI=[{InitiatorName=iqn.1994-05.com.example:ef52ec17bb0}],
FC=[]}, lastClient=10.36.6.76, selinux={mode=1}, vlans={},
software_version=4.10, kdumpStatus=1, emulatedMachines=[pc-0.10,
pc-0.11, pc-0.12, pc-0.13, pc-0.14, pc-0.15, pc-1.0, pc-1.0,
pc-i440fx-2.1, pseries-rhel7.2.0, pc-i440fx-rhel7.2.0, rhel6.4.0,
rhel6.5.0, rhel6.6.0, rhel6.7.0, rhel6.8.0, rhel6.9.0, rhel7.0.0,
rhel7.2.0, rhel7.5.0, pc, isapc], vmTypes=[kvm],
software_revision=0.141, bridges={ovirtmgmt={mtu=1500, ports=[em1],
gateway=252.197.29.20, addr=186.190.35.84, cfg={DELAY=0,
DEVICE=ovirtmgmt, ONBOOT=yes, BOOTPROTO=dhcp, TYPE=Ethernet}, stp=off,
netmask=255.255.252.0}}, netConfigDirty=False, autoNumaBalancing=1,
guestOverhead=65, networks={ovirtmgmt={mtu=1500, ports=[em1],
iface=ovirtmgmt, gateway=10.34.63.254, bridged=true, switch=legacy,
addr=186.190.35.84, stp=off, cfg={DELAY=0, DEVICE=ovirtmgmt, ONBOOT=yes,
BOOTPROTO=dhcp, TYPE=Ethernet}, netmask=255.255.252.0}}, memSize=7976,
rngSources=[RANDOM], management_ip=, supportedENGINEs=[3.0, 3.1, 3.2,
3.3, 3.4, 3.5, 3.6, 4.0, 4.1], cpuModel=Intel(R) Xeon(R) CPU E5606 @
2.13GHz, cpuCores=4, supportedProtocols=[2.2, 2.3],
packages2={libvirt={buildtime=1349642820, release=2.fc17,
version=1.0.1}, spice-server={buildtime=1336983054, release=5.fc17,
version=0.10.1}, vdsm={buildtime=1359653302,
release=0.141.gita11e8f2.fc17, version=4.10.3},
qemu-kvm={buildtime=1349642820, release=2.fc17, version=1.0.1},
qemu-img={buildtime=1349642820, release=2.fc17, version=1.0.1},
kernel={buildtime=1357699251.0, release=5.fc17.x86_64, version=3.6.11},
mom={buildtime=1354824066, release=1.fc17, version=0.3.0}},
uuid=7e6d8c6a-ca0c-46d2-ae6e-3af293fa6c4a_80:0D:F4:97:64:9B:3A,
nics={em1={mtu=1500, speed=1000, addr=, hwaddr=55:2B:B0:FD:88:55,
cfg={NM_CONTROLLED=yes, NETBOOT=yes, DEVICE=em1, NAME=Boot Disk,
HWADDR=55:2B:B0:FD:88:55, BRIDGE=ovirtmgmt,
UUID=f1962c67-78ab-4b2b-8095-07354d2eba73, ONBOOT=yes, BOOTPROTO=dhcp,
TYPE=Ethernet}, netmask=}, em2={mtu=1500, speed=1000, addr=,
hwaddr=F3:1F:F7:F0:29:76, cfg={NM_CONTROLLED=yes, NETBOOT=yes,
DEVICE=em2, HWADDR=F3:1F:F7:F0:29:76, BRIDGE=ovirtmgmt,
UUID=6f563109-b336-4091-96d8-18c2c323189e, ONBOOT=no, BOOTPROTO=dhcp,
TYPE=Ethernet}, netmask=}}, numaNodeDistance={1=[20, 10], 0=[10, 20]},
onlineCpus=[1, 3, 5, 7, 9, 11, 13, 15, 0, 2, 4, 6, 8, 10, 12, 14],
cpuThreads=4, bondings={bond4={mtu=150, slaves=[], addr=,
hwaddr=00:00:00:00:00:00, cfg={}, netmask=}, bond3={mtu=150, slaves=[],
addr=, hwaddr=00:00:00:00:00:00, cfg={}, netmask=}, bond0={mtu=150,
slaves=[], addr=, hwaddr=00:00:00:00:00:00, cfg={}, netmask=},
bond1={mtu=150, slaves=[], addr=, hwaddr=00:00:00:00:00:00, cfg={},
netmask=}, bond2={mtu=150, slaves=[], addr=, hwaddr=00:00:00:00:00:00,
cfg={}, netmask=}}}>
2016-10-17 22:25:16,722
2016-10-17 22:25:16,722 Message sent: null
2016-10-17 22:25:16,722 Unable to process messages: Broken pipe
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:492)
at
org.ovirt.vdsm.jsonrpc.client.reactors.PlainClient.write(PlainClient.java:55)
at
org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient.processOutgoing(ReactorClient.java:261)
at
org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient.process(ReactorClient.java:224)
at
org.ovirt.vdsm.jsonrpc.client.reactors.Reactor.processChannels(Reactor.java:89)
at org.ovirt.vdsm.jsonrpc.client.reactors.Reactor.run(Reactor.java:65)
2016-10-17 22:25:16,723
2016-10-17 22:25:16,723 Message sent: null
2016-10-17 22:25:16,723 Failure in processing request
java.lang.IllegalArgumentException: 'method' field missing in node
at
org.ovirt.vdsm.jsonrpc.client.JsonRpcRequest.fromJsonNode(JsonRpcRequest.java:79)
at
org.ovirt.vdsm.jsonrpc.client.JsonRpcRequest.fromByteArray(JsonRpcRequest.java:103)
at
org.ovirt.vdsmfake.rpc.json.JsonRpcServer$MessageHandler.run(JsonRpcServer.java:122)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2016-10-17 22:25:19,647 CONNECT
accept-version:1.2
heart-beat:0,21234
host:null
2016-10-17 22:25:19,648 CONNECT
accept-version:1.2
heart-beat:0,21234
host:null
2016-10-17 22:25:19,648 CONNECTED
heart-beat:21234,0
session:ef2d8f70-3a0d-4ebd-b424-343ad130cb06
2016-10-17 22:25:19,648 Message sent: CONNECTED
heart-beat:21234,0
session:ef2d8f70-3a0d-4ebd-b424-343ad130cb06
2016-10-17 22:25:19,768 SUBSCRIBE
destination:jms.topic.vdsm_responses
ack:auto
id:7843c1e0-b514-4b7e-8687-66dae07b29d1
SEND
destination:jms.topic.vdsm_requests
reply-to:jms.topic.vdsm_responses
content-length:103
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"b9e1ef26-3046-4aaa-82d9-d00a0c5f8d55"}
2016-10-17 22:25:19,773 SUBSCRIBE
destination:jms.topic.vdsm_responses
ack:auto
id:7843c1e0-b514-4b7e-8687-66dae07b29d1
2016-10-17 22:25:19,773 ACK
id:7843c1e0-b514-4b7e-8687-66dae07b29d1
2016-10-17 22:25:19,773 Message sent: ACK
id:7843c1e0-b514-4b7e-8687-66dae07b29d1
2016-10-17 22:25:19,773 SEND
destination:jms.topic.vdsm_requests
reply-to:jms.topic.vdsm_responses
content-length:103
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"b9e1ef26-3046-4aaa-82d9-d00a0c5f8d55"}
2016-10-17 22:25:19,777 client policy identifier null
2016-10-17 22:25:19,786 Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"b9e1ef26-3046-4aaa-82d9-d00a0c5f8d55"}
2016-10-17 22:25:19,789 MESSAGE
destination:jms.queue.reponses
content-length:73
{"jsonrpc":"2.0","result":[],"id":"b9e1ef26-3046-4aaa-82d9-d00a0c5f8d55"}
2016-10-17 22:25:19,789 Message sent: MESSAGE
content-length:73
destination:jms.queue.reponses
<JsonRpcResponse id: "b9e1ef26-3046-4aaa-82d9-d00a0c5f8d55" result: []>
2016-10-17 22:25:36,779
2016-10-17 22:25:36,779 Message sent: null
2016-10-17 22:25:36,779 Unable to process messages: Broken pipe
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:492)
at
org.ovirt.vdsm.jsonrpc.client.reactors.PlainClient.write(PlainClient.java:55)
at
org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient.processOutgoing(ReactorClient.java:261)
at
org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient.process(ReactorClient.java:224)
at
org.ovirt.vdsm.jsonrpc.client.reactors.Reactor.processChannels(Reactor.java:89)
at org.ovirt.vdsm.jsonrpc.client.reactors.Reactor.run(Reactor.java:65)
2016-10-17 22:25:36,780
2016-10-17 22:25:36,780 Message sent: null
2016-10-17 22:25:36,780 Failure in processing request
java.lang.IllegalArgumentException: 'method' field missing in node
at
org.ovirt.vdsm.jsonrpc.client.JsonRpcRequest.fromJsonNode(JsonRpcRequest.java:79)
at
org.ovirt.vdsm.jsonrpc.client.JsonRpcRequest.fromByteArray(JsonRpcRequest.java:103)
at
org.ovirt.vdsmfake.rpc.json.JsonRpcServer$MessageHandler.run(JsonRpcServer.java:122)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2016-10-17 22:25:39,763 CONNECT
accept-version:1.2
heart-beat:0,21234
host:null
On the ovirt-engine side I see the following:
2016-10-17 22:25:19,435 INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (Stomp Reactor)
[] Connecting to hyp1/52.25.68.41
2016-10-17 22:25:19,710 ERROR
[org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (Stomp Reactor) []
Unable to process messages: Unrecognized message received
2016-10-17 22:25:19,711 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(DefaultQuartzScheduler2) [] Command 'GetAllVmStatsVDSCommand(HostName =
hyp1Host1, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='7b0a7f76-b877-4fc4-aa30-2e94b533b7b3',
vds='Host[hyp1Host1,7b0a7f76-b877-4fc4-aa30-2e94b533b7b3]'})' execution
failed: VDSGenericException: VDSNetworkException: Unrecognized message
received
2016-10-17 22:25:19,711 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher]
(DefaultQuartzScheduler2) [] Failed to fetch vms info for host
'hyp1Host1' - skipping VMs monitoring.
2016-10-17 22:25:39,567 INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (Stomp Reactor)
[] Connecting to hyp1/52.25.68.41
2016-10-17 22:25:39,827 ERROR
[org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (Stomp Reactor) []
Unable to process messages: Unrecognized message received
2016-10-17 22:25:39,831 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler10) [] Correlation ID: null, Call Stack: null,
Custom Event ID: -1, Message: VDSM hyp1Host1 command failed:
Unrecognized message received
2016-10-17 22:25:39,831 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler10) [] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand'
return value
'org.ovirt.engine.core.vdsbroker.vdsbroker.VDSInfoReturnForXmlRpc@2e358a37'
2016-10-17 22:25:39,831 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler10) [] HostName = hyp1Host1
2016-10-17 22:25:39,832 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler10) [] Command
'GetCapabilitiesVDSCommand(HostName = hyp1Host1,
VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='7b0a7f76-b877-4fc4-aa30-2e94b533b7b3',
vds='Host[hyp1Host1,7b0a7f76-b877-4fc4-aa30-2e94b533b7b3]'})' execution
failed: VDSGenericException: VDSNetworkException: Unrecognized message
received
2016-10-17 22:25:39,832 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(DefaultQuartzScheduler10) [] Failure to refresh Vds runtime info:
VDSGenericException: VDSNetworkException: Unrecognized message received
2016-10-17 22:25:39,832 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(DefaultQuartzScheduler10) [] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
VDSGenericException: VDSNetworkException: Unrecognized message received
at
org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:188)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand.executeVdsBrokerCommand(GetCapabilitiesVDSCommand.java:16)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:110)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:73)
[vdsbroker.jar:]
at
org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
[dal.jar:]
at
org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:451)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VdsManager.refreshCapabilities(VdsManager.java:653)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring.refreshVdsRunTimeInfo(HostMonitoring.java:121)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring.refresh(HostMonitoring.java:85)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VdsManager.onTimer(VdsManager.java:238)
[vdsbroker.jar:]
at sun.reflect.GeneratedMethodAccessor70.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_102]
at
org.ovirt.engine.core.utils.timer.JobWrapper.invokeMethod(JobWrapper.java:77)
[scheduler.jar:]
at
org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:51)
[scheduler.jar:]
at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz.jar:]
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[rt.jar:1.8.0_102]
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[rt.jar:1.8.0_102]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[rt.jar:1.8.0_102]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[rt.jar:1.8.0_102]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_102]
2016-10-17 22:25:59,697 INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (Stomp Reactor)
[] Connecting to hyp1/52.25.68.41
2016-10-17 22:25:59,974 ERROR
[org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (Stomp Reactor) []
Unable to process messages: Unrecognized message received
2016-10-17 22:25:59,974 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(DefaultQuartzScheduler1) [] Command 'GetAllVmStatsVDSCommand(HostName =
hyp1Host1, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='7b0a7f76-b877-4fc4-aa30-2e94b533b7b3',
vds='Host[hyp1Host1,7b0a7f76-b877-4fc4-aa30-2e94b533b7b3]'})' execution
failed: VDSGenericException: VDSNetworkException: Unrecognized message
received
2016-10-17 22:25:59,975 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher]
(DefaultQuartzScheduler1) [] Failed to fetch vms info for host
'hyp1Host1' - skipping VMs monitoring.
2016-10-17 22:26:19,830 INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (Stomp Reactor)
[] Connecting to hyp1/52.25.68.41
2016-10-17 22:26:20,119 ERROR
[org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (Stomp Reactor) []
Unable to process messages: Unrecognized message received
2016-10-17 22:26:20,125 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler3) [] Correlation ID: null, Call Stack: null,
Custom Event ID: -1, Message: VDSM hyp1Host1 command failed:
Unrecognized message received
2016-10-17 22:26:20,125 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler3) [] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand'
return value
'org.ovirt.engine.core.vdsbroker.vdsbroker.VDSInfoReturnForXmlRpc@33b8f1a0'
2016-10-17 22:26:20,125 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler3) [] HostName = hyp1Host1
2016-10-17 22:26:20,125 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(DefaultQuartzScheduler3) [] Command 'GetCapabilitiesVDSCommand(HostName
= hyp1Host1, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='7b0a7f76-b877-4fc4-aa30-2e94b533b7b3',
vds='Host[hyp1Host1,7b0a7f76-b877-4fc4-aa30-2e94b533b7b3]'})' execution
failed: VDSGenericException: VDSNetworkException: Unrecognized message
received
2016-10-17 22:26:20,125 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(DefaultQuartzScheduler3) [] Failure to refresh Vds runtime info:
VDSGenericException: VDSNetworkException: Unrecognized message received
2016-10-17 22:26:20,125 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(DefaultQuartzScheduler3) [] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
VDSGenericException: VDSNetworkException: Unrecognized message received
at
org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:188)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand.executeVdsBrokerCommand(GetCapabilitiesVDSCommand.java:16)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:110)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:73)
[vdsbroker.jar:]
at
org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33)
[dal.jar:]
at
org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:451)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VdsManager.refreshCapabilities(VdsManager.java:653)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring.refreshVdsRunTimeInfo(HostMonitoring.java:121)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring.refresh(HostMonitoring.java:85)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VdsManager.onTimer(VdsManager.java:238)
[vdsbroker.jar:]
at sun.reflect.GeneratedMethodAccessor70.invoke(Unknown Source)
[:1.8.0_102]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_102]
at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_102]
at
org.ovirt.engine.core.utils.timer.JobWrapper.invokeMethod(JobWrapper.java:77)
[scheduler.jar:]
at
org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:51)
[scheduler.jar:]
at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz.jar:]
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[rt.jar:1.8.0_102]
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[rt.jar:1.8.0_102]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[rt.jar:1.8.0_102]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[rt.jar:1.8.0_102]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_102]
2016-10-17 22:26:39,965 INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (Stomp Reactor)
[] Connecting to hyp1/52.25.68.41
2016-10-17 22:26:40,229 ERROR
[org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (Stomp Reactor) []
Unable to process messages: Unrecognized message received
2016-10-17 22:26:40,229 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(DefaultQuartzScheduler1) [] Command 'GetAllVmStatsVDSCommand(HostName =
hyp1Host1, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='7b0a7f76-b877-4fc4-aa30-2e94b533b7b3',
vds='Host[hyp1Host1,7b0a7f76-b877-4fc4-aa30-2e94b533b7b3]'})' execution
failed: VDSGenericException: VDSNetworkException: Unrecognized message
received
2016-10-17 22:26:40,229 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher]
(DefaultQuartzScheduler1) [] Failed to fetch vms info for host
'hyp1Host1' - skipping VMs monitoring.
[ovirt 3.6] Logical network not working
by Luca 'remix_tj' Lorenzetto
Hello,
I'm new to oVirt and some months ago I set up oVirt 3.6 to play with. My
setup is composed of two physical hosts with 6 NICs each and another
machine hosting the engine. All hosts are running RHEL 7.2.
Setup went well, no problems. I've been able to convert the KVM image
provided by Red Hat and have it running on oVirt.
Then I decided to configure a new network in addition to ovirtmgmt. I
went to Networks, created a logical network called Development, set the
flag "Enable VLAN Tagging" and entered the VLAN tag.
Once the logical network was created, I went to each host, did Setup
Networks and assigned the logical network to the interface where the VLAN
is connected. The interface is configured with bootproto=none, so no IP
has been assigned to the eno5.828 that appeared after assigning the
logical network.
I then started a VM, connected it to the vNIC "Development/Development"
and assigned an IP. But networking is not working: no ping, no traffic
visible with tcpdump.
I tested the individual interfaces on the hosts with tcpdump, and on the
interfaces where the logical network is connected (both eno5 and eno5.828)
I see tons of broadcast traffic.
With brctl show I see that both eno5.828 and vnic0 are assigned to the
bridge Development.
Any way to understand what's happening and why traffic is not passing?
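One hedged way to narrow down where the tagged frames stop, hop by hop
(interface names taken from the post; the tap device name is a guess, and
the switch port must of course trunk VLAN 828):

brctl show Development                # is the bridge wired to eno5.828 and the vnic?
tcpdump -nn -e -i eno5 vlan 828       # tagged frames arriving on the physical NIC
tcpdump -nn -e -i eno5.828            # the same traffic after untagging
tcpdump -nn -e -i vnet0               # the VM's tap device (vnic0 in the brctl output)

If traffic dies between eno5 and eno5.828, the tag never reaches the host;
if it dies between eno5.828 and the tap, the bridge is the suspect.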
Thank you
Luca
--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
Can't restart hosted-engine installation: vdsm did not start
by gregor
Hi,
after a failed hosted-engine installation I removed all the ovirt and
vdsm packages, rebooted the system and started from scratch.
But when I run "hosted-engine --deploy" it gets stuck while starting vdsm.
The logfile /var/log/vdsm/supervdsm.log gives me the following:
...
MainThread::DEBUG::2016-10-16
20:54:42,065::storage_log::69::blivet::(log_exception_info) IGNORED:
Caught exception, continuing.
MainThread::DEBUG::2016-10-16
20:54:42,065::storage_log::72::blivet::(log_exception_info) IGNORED:
Problem description: failed to get initiator name from iscsi firmware
MainThread::DEBUG::2016-10-16
20:54:42,065::storage_log::73::blivet::(log_exception_info) IGNORED:
Begin exception details.
MainThread::DEBUG::2016-10-16
20:54:42,066::storage_log::76::blivet::(log_exception_info) IGNORED:
Traceback (most recent call last):
MainThread::DEBUG::2016-10-16
20:54:42,066::storage_log::76::blivet::(log_exception_info) IGNORED:
File "/usr/lib/python2.7/site-packages/blivet/iscsi.py", line
142, in __init__
MainThread::DEBUG::2016-10-16
20:54:42,066::storage_log::76::blivet::(log_exception_info) IGNORED:
initiatorname = libiscsi.get_firmware_initiator_name()
MainThread::DEBUG::2016-10-16
20:54:42,066::storage_log::76::blivet::(log_exception_info) IGNORED:
IOError: Unknown error
MainThread::DEBUG::2016-10-16
20:54:42,066::storage_log::77::blivet::(log_exception_info) IGNORED:
End exception details.
...
Can anybody help? I found an old bug report that was closed, like so many
other bug reports, with "CLOSED CURRENTRELEASE":
https://bugzilla.redhat.com/show_bug.cgi?id=1238239
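When vdsm refuses to start after a wipe-and-reinstall it is often simply
left unconfigured, and note the blivet lines above are all marked IGNORED,
i.e. non-fatal. A hedged first check before re-running the deploy:

vdsm-tool configure --force     # (re)writes vdsm's libvirt/sanlock configuration
systemctl status vdsmd
journalctl -u vdsmd -n 50       # the actual startup failure usually lands here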
cheers
gregor
ovirt list's smtp on blacklist?
by Jiří Sléžka
Hello,
it looks like the mailserver serving users(a)ovirt.org is on a blacklist.
A line from our smtp server log:
Oct 17 15:31:25 hermes postfix/smtpd[14082]: NOQUEUE: reject: RCPT from
lists.ovirt.org[2600:3c01::f03c:91ff:fe93:4b0d]: 554 5.7.1 Service
unavailable; Client host [2600:3c01::f03c:91ff:fe93:4b0d] blocked using
sbl.spamhaus.org; https://www.spamhaus.org/sbl/query/SBLCSS;
from=<users-bounces(a)ovirt.org> to=<jiri.slezka(a)slu.cz> proto=ESMTP
helo=<lists.ovirt.org>
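To watch for delisting, the DNSBL can be queried directly; for IPv6 the
address is expanded and nibble-reversed, shown here for the listed address
from the log above:

dig +short d.0.b.4.3.9.e.f.f.f.1.9.c.3.0.f.0.0.0.0.0.0.0.0.1.0.c.3.0.0.6.2.sbl.spamhaus.org
# a 127.0.0.x answer means still listed; NXDOMAIN means delisted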
Cheers,
Jiri
oVirt AD integration problems
by cmc
Hi,
I'm trying to use the directory services provided by
ovirt-engine-extension-aaa-ldap, and I can get it to log in successfully
when I run the tests in the setup script, but when I log in via the GUI, it
gives me:
'unexpected error was encountered during validation processing:
javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated'
and fails the login. It looks a bit like it expects the engine to already
be joined to the domain, so I tried doing that manually via realmd and
sssd. It involved installing a lot of packages, such as Kerberos and
Samba, which I am nervous about on an engine host. Anyway, once I was
joined, it still gave me the same 'peer not authenticated' message. Does
it need to be separately bound to the domain, i.e., do you need all the
other stuff installed and running for it to work, or is the
ovirt-engine-extension-aaa-ldap package all that is needed?
Anyway, I ran the ovirt-engine-extensions-tool --log-level=FINEST
--log-file=/tmp/aaa.log aaa search --extension-name=domain-authz command
suggested in an earlier post, and it only gave me one exception, which was:
2016-09-28 16:08:15 SEVERE Extension domain-authz could not be found
2016-09-28 16:08:15 FINE Exception:
org.ovirt.engine.core.extensions.mgr.ConfigurationException: Extension
domain-authz could not be found
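That "could not be found" usually just means --extension-name does not
match any configured extension name; a hedged way to see which names
actually exist ("domain-authz" must equal an ovirt.engine.extension.name
value, not a file name):

ls /etc/ovirt-engine/extensions.d/
grep -h ovirt.engine.extension.name /etc/ovirt-engine/extensions.d/*.properties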
Thanks for any help,
Cam
8 years, 1 month
Re: [ovirt-users] oVirt AD integration problems
by Karli Sjöberg
On 14 Oct 2016 at 4:30 PM, cmc <iucounu(a)gmail.com> wrote:
>
> Hi Ondra,
>
> It manages to authenticate, but appends the domain again once I'm
> logged in. For instance, if I log in as user 'cam', it will log me in
> and display the login name in the top right corner as
> 'cam@domain.com@domain.com' (this shows up in the log as well: it
> shows me logging in as cam@domain.com, but then returns an error as
> user cam@domain.com@domain.com is not authorized). My thought was
> that something done earlier when I was playing around with sssd,
> kerberos and AD is doing this, though I have removed these packages
> and run authconfig to remove sssd. Any ideas?

Can't say why, but it's the same for us. It's unsightly, kindly put.

/K

>
> Cheers,
>
> Cam
>
> On Thu, Oct 13, 2016 at 2:04 PM, cmc <iucounu(a)gmail.com> wrote:
>>
>> Hi Ondra,
>>
>> That is good to know that we don't need Kerberos - it complicates
>> things a lot.
>>
>> I think the errors might be the options I'd selected during the
>> setup. I was thrown a bit that it passed all the internal tests
>> provided by the setup script, but failed on the web GUI. When I've
>> seen 'unspecified GSS failure' and 'peer not authenticated' it's
>> usually been due to Kerberos (though admittedly these are just
>> generic errors). So I tried the Redhat guide for SSO at:
>>
>> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Administration_Guide/Configuring_LDAP_and_Kerberos_for_Single_Sign-on.html
>>
>> which uses Kerberos (in ovirt-sso.conf). I had to remove the symlink
>> to the Apache config it says to create, as it results in internal
>> server errors in Apache. It uses an SPN for Apache in the keytab.
>>
>> Now that you've confirmed that it can actually work without any need
>> for the Kerberos stuff, I will start afresh from a clean setup and
>> apply what I've learnt during this process.
>>
>> I'll try it out and let you know either way.
>>
>> Many thanks for all the help!
>>
>> Kind regards,
>>
>> Cam
>>
>>> Yes, you really do not need anything kerberos related to securely bind
>>> to AD via LDAP simple bind over TLS/SSL. This is really strange to me
>>> what errors you are getting, but you probably configured apache (or
>>> something else?) to require keytab, but you don't have to, and you can
>>> remove that configuration.
>>>
>>>> Thanks,
>>>>
>>>> Cam
Could not associate brick
by Davide Ferrari
Hello
I'm seeing several of these warnings:
2016-10-14 10:49:23,721 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler7) [] Could not associate brick
'vm04.storage.billy:/gluster/ssd/data/brick' of volume
'23f8f1ae-a3ac-47bf-8223-5b5f7c29e508' with correct network as no gluster
network found in cluster '00000002-0002-0002-0002-000000000345'
I was reading this May thread:
http://lists.ovirt.org/pipermail/users/2016-May/040069.html
but there is no final answer about how to solve this warning. Like the
other poster, I created the gluster network before installing oVirt (it's
a hosted-engine installation) and I used DNS names to point to the bricks.
Name resolution is working correctly (every host has everything in
/etc/hosts).
I also created a non-VM network and assigned it to every host's bond1 (the
gluster network).
How can I fix this?
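The warning text itself says no network in cluster 00000002-...-0345
carries the gluster role, so a non-VM network alone is presumably not
enough: the "Gluster Network" role also has to be ticked for it under
Cluster -> Logical Networks -> Manage Networks, and the brick hostnames
must resolve to addresses on that network. A hedged check from any host
(the volume name "data" is only inferred from the brick path):

gluster volume info data | grep -i brick   # hostnames the bricks were registered with
getent hosts vm04.storage.billy            # should return the storage-network address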
TIA
--
Davide Ferrari
Senior Systems Engineer
Problem with backing-file, how to fix the backing-chain ?
by Claudio Soprano
Hi all,
We run an oVirt environment, previously with engine v3.6.5 (if I remember
correctly) and now with v4.0.4 (we upgraded because we read that the bug
with the backing file was resolved in v4).
We also upgraded some of the host machines (but not all yet) to v4.0.4, to
see if this would fix the problem, but nothing changed.
The problem is that we have several VMs with snapshots. We take daily,
weekly and monthly snapshots, keep some of them (usually the fresh ones)
and remove the old ones (which, in the case of weekly snapshots, sit in
the middle of a series of snapshots); over time this has produced the
famous "Backing file too long" bug.
So we upgraded the engine from 3.6.5 to 4.0.4 (latest available).
We discovered this bug when we tried to upgrade a host to v4.0.4: a VM on
that host didn't migrate, so we shut it down and tried to run it on
another host, but it never succeeded because of the bug.
We don't know if we have more VMs in this situation because we upgraded
only 2 hosts out of 10.
Investigating the problem, we discovered that the backing file recorded
in each of the LVM snapshots reports a very long path,
/dev/storage-domain-id/../image-group-id/ with ../image-group-id/
repeated many times and /parentid at the end.
To understand which path it should contain, we cloned a VM on v4.0.4 and
then took 4 snapshots; now the backing file path is
/dev/storage-domain-id/parentid
Is there a way to modify the path in the backing file, or a way to
recover the VM from this state?
Where does the information about the backing-file path reside?
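The path lives in the qcow2 header of each snapshot volume (the LV
itself); the engine database only records the chain, via the parentid
column shown in the listing below. A commonly suggested, at-your-own-risk
repair is an unsafe rebase that rewrites just that header field, sketched
here for the first snapshot using the UUIDs from the listing (VM down,
backups taken first):

lvchange -ay 384f9059-ef2f-4d43-a54f-de71c5d589c8/68d764ec-bc2e-4e1d-b8f2-b44afd9fcb2e
# -u rewrites only the backing-file pointer, no data is copied;
# -F raw because the parent here is the raw base image (use -F qcow2 deeper in the chain)
qemu-img rebase -u -F raw \
  -b /dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/60ba7acf-58cb-475b-b9ee-15b1be99fee6 \
  /dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/68d764ec-bc2e-4e1d-b8f2-b44afd9fcb2e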
I attach here all the commands we ran.
On the oVirt manager (the host running only the engine) we ran:
ovirt-shell
[oVirt shell (connected)]# list disks --parent-vm-name vm1
id : 2df25a13-6958-40a8-832f-9a26ce65de0f
name : vm1_Disk2
id : 8cda0aa6-9e25-4b50-ba00-b877232a1983
name : vm1_Disk1
[oVirt shell (connected)]# show disk 8cda0aa6-9e25-4b50-ba00-b877232a1983
id : 8cda0aa6-9e25-4b50-ba00-b877232a1983
name : vm1_Disk1
actual_size : 1073741824
alias : vm1_Disk1
disk_profile-id : 1731f79a-5034-4270-9a87-94d93025deac
format : cow
image_id : 7b354e2a-2099-4f2a-80b7-fba7d1fd13ee
propagate_errors : False
provisioned_size : 17179869184
shareable : False
size : 17179869184
sparse : True
status-state : ok
storage_domains-storage_domain-id: 384f9059-ef2f-4d43-a54f-de71c5d589c8
storage_type : image
wipe_after_delete : False
[root@ovc1mgr ~]# su - postgres
Last login: Fri Oct 14 01:02:14 CEST 2016
-bash-4.2$ psql -d engine -U postgres
psql (9.2.15)
Type "help" for help.
engine=# \x on
engine=# select * from images where image_group_id =
'8cda0aa6-9e25-4b50-ba00-b877232a1983' order by creation_date;
-[ RECORD 1 ]---------+-------------------------------------
image_guid | 60ba7acf-58cb-475b-b9ee-15b1be99fee6
creation_date | 2016-03-29 15:12:34+02
size | 17179869184
it_guid | 00000000-0000-0000-0000-000000000000
parentid | 00000000-0000-0000-0000-000000000000
imagestatus | 4
lastmodified | 2016-04-21 11:25:59.972+02
vm_snapshot_id | 27c187cd-989f-4f7a-ac05-49c4410de6c2
volume_type | 1
volume_format | 5
image_group_id | 8cda0aa6-9e25-4b50-ba00-b877232a1983
_create_date | 2016-03-29 15:12:31.994065+02
_update_date | 2016-09-04 01:10:08.773649+02
active | f
volume_classification | 1
-[ RECORD 2 ]---------+-------------------------------------
image_guid | 68d764ec-bc2e-4e1d-b8f2-b44afd9fcb2e
creation_date | 2016-07-03 01:01:30+02
size | 17179869184
it_guid | 00000000-0000-0000-0000-000000000000
parentid | 60ba7acf-58cb-475b-b9ee-15b1be99fee6
imagestatus | 1
lastmodified | 2016-07-04 01:03:33.732+02
vm_snapshot_id | 175c2071-a06b-4b0e-a069-5cc4bb236a34
volume_type | 2
volume_format | 4
image_group_id | 8cda0aa6-9e25-4b50-ba00-b877232a1983
_create_date | 2016-07-03 01:01:15.069585+02
_update_date | 2016-09-11 02:06:04.420965+02
active | f
volume_classification | 1
-[ RECORD 3 ]---------+-------------------------------------
image_guid | 37ca6494-e990-44e5-8597-28845a0a19b5
creation_date | 2016-08-07 01:06:15+02
size | 17179869184
it_guid | 00000000-0000-0000-0000-000000000000
parentid | 68d764ec-bc2e-4e1d-b8f2-b44afd9fcb2e
imagestatus | 1
lastmodified | 2016-08-08 01:00:03.778+02
vm_snapshot_id | 4c0e5ac0-2ef3-4996-b3e9-7fd566d97b1a
volume_type | 2
volume_format | 4
image_group_id | 8cda0aa6-9e25-4b50-ba00-b877232a1983
_create_date | 2016-08-07 01:06:01.777156+02
_update_date | 2016-09-25 01:55:54.090026+02
active | f
volume_classification | 1
I removed the other 10 snapshot records from the list for readability.
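To print the whole chain parent-to-child without eyeballing every
record, a recursive query over the same images table can help (a sketch
we put together; the column names are the ones visible in the records
above):
engine=# WITH RECURSIVE chain AS (
  SELECT image_guid, parentid, creation_date, 0 AS depth
  FROM images
  WHERE image_group_id = '8cda0aa6-9e25-4b50-ba00-b877232a1983'
    AND parentid = '00000000-0000-0000-0000-000000000000'
  UNION ALL
  SELECT i.image_guid, i.parentid, i.creation_date, c.depth + 1
  FROM images i
  JOIN chain c ON i.parentid = c.image_guid
  WHERE i.image_group_id = '8cda0aa6-9e25-4b50-ba00-b877232a1983'
) SELECT * FROM chain ORDER BY depth;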
On a host running v4.0.4 we ran:
[root@ovc2n06 ~]# lvchange -aey -pr
384f9059-ef2f-4d43-a54f-de71c5d589c8/60ba7acf-58cb-475b-b9ee-15b1be99fee6
Logical volume "60ba7acf-58cb-475b-b9ee-15b1be99fee6" changed.
This is the base image; in fact, it doesn't contain a backing file:
[root@ovc2n06 ~]# qemu-img info --backing-chain
/dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/60ba7acf-58cb-475b-b9ee-15b1be99fee6
image:
/dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/60ba7acf-58cb-475b-b9ee-15b1be99fee6
file format: raw
virtual size: 16G (17179869184 bytes)
disk size: 0
[root@ovc2n06 ~]# lvchange -aey -pr
384f9059-ef2f-4d43-a54f-de71c5d589c8/68d764ec-bc2e-4e1d-b8f2-b44afd9fcb2e
Logical volume "68d764ec-bc2e-4e1d-b8f2-b44afd9fcb2e" changed.
[root@ovc2n06 ~]# qemu-img info --backing-chain
/dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/68d764ec-bc2e-4e1d-b8f2-b44afd9fcb2e
qemu-img: Could not open
'/dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/60ba7acf-58cb-475b-b9ee-15b1be99fee6':
Could not open
'/dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/60ba7acf-58cb-475b-b9ee-15b1be99fee6':
No such file or directory
To fix this problem we made a symlink:
[root@ovc2n06 ~]# ln -s /dev/384f9059-ef2f-4d43-a54f-de71c5d589c8
/dev/8cda0aa6-9e25-4b50-ba00-b877232a1983
[root@ovc2n06 ~]# qemu-img info --backing-chain
/dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/68d764ec-bc2e-4e1d-b8f2-b44afd9fcb2e
image:
/dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/68d764ec-bc2e-4e1d-b8f2-b44afd9fcb2e
file format: qcow2
virtual size: 16G (17179869184 bytes)
disk size: 0
cluster_size: 65536
backing file:
../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/60ba7acf-58cb-475b-b9ee-15b1be99fee6
(actual path:
/dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/60ba7acf-58cb-475b-b9ee-15b1be99fee6)
backing file format: raw
Format specific information:
compat: 0.10
refcount bits: 16
image:
/dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/../8cda0aa6-9e25-4b50-ba00-b877232a1983/60ba7acf-58cb-475b-b9ee-15b1be99fee6
file format: raw
virtual size: 16G (17179869184 bytes)
disk size: 0
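The symlink works because every ../8cda0aa6-9e25-4b50-ba00-b877232a1983/
hop now resolves back into the storage-domain directory, so the repeated
segments all collapse onto the same device node, as readlink confirms:
readlink -f /dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/../8cda0aa6-9e25-4b50-ba00-b877232a1983/60ba7acf-58cb-475b-b9ee-15b1be99fee6
For volumes that qemu-img can still open, its manual describes an unsafe
rebase mode (-u) that rewrites only the backing-file pointer without
copying any data. A sketch of what we are considering (untested; the LV
must be activated writable and the VM must be down, and we copied the
short path format from our healthy cloned VM, so please check it against
a working volume first):
lvchange -aey -prw 384f9059-ef2f-4d43-a54f-de71c5d589c8/68d764ec-bc2e-4e1d-b8f2-b44afd9fcb2e
qemu-img rebase -u -F raw \
  -b /dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/60ba7acf-58cb-475b-b9ee-15b1be99fee6 \
  /dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/68d764ec-bc2e-4e1d-b8f2-b44afd9fcb2e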
We moved on to the next snapshot:
[root@ovc2n06 ~]# lvchange -aey -pr
384f9059-ef2f-4d43-a54f-de71c5d589c8/37ca6494-e990-44e5-8597-28845a0a19b5
Logical volume "37ca6494-e990-44e5-8597-28845a0a19b5" changed.
And here is the error from the bug:
qemu-img info --backing-chain
/dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/37ca6494-e990-44e5-8597-28845a0a19b5
qemu-img: Could not open
'/dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/37ca6494-e990-44e5-8597-28845a0a19b5':
Backing file name too long
Every snapshot from this one onward is unusable; the VM was running fine
until we shut it down, and it has never started again.
As you can see, the backing-file path is wrong and already too long in
the earlier snapshots. Is there a way to fix it or to edit it manually?
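For the snapshots qemu-img refuses to open at all (the name is past
qemu's limit), the only route we can think of is patching the qcow2
header in place, using the header layout mentioned earlier (name offset
at bytes 8-15, 4-byte length at bytes 16-19, both big-endian). This is
entirely our own idea, NOT a documented oVirt procedure, so we would run
it only against a dd copy or LVM snapshot of the LV first, with the VM
down:
# activate the LV writable (mirrors the lvchange style used above)
lvchange -aey -prw 384f9059-ef2f-4d43-a54f-de71c5d589c8/37ca6494-e990-44e5-8597-28845a0a19b5
LV=/dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/37ca6494-e990-44e5-8597-28845a0a19b5
# what we believe the pointer should be: this volume's parentid from the DB
NEW=/dev/384f9059-ef2f-4d43-a54f-de71c5d589c8/68d764ec-bc2e-4e1d-b8f2-b44afd9fcb2e
# read the 8-byte big-endian offset of the backing-file name (header bytes 8-15)
OFF=$(( 16#$(dd if=$LV bs=1 skip=8 count=8 2>/dev/null | xxd -p) ))
# overwrite the name in place; the new name is shorter than the old one, so it
# fits, and only the first ${#NEW} bytes are read back once the length is fixed
printf '%s' "$NEW" | dd of=$LV bs=1 seek=$OFF conv=notrunc 2>/dev/null
# patch the 4-byte big-endian length field (header bytes 16-19)
printf '%08x' ${#NEW} | xxd -r -p | dd of=$LV bs=1 seek=16 conv=notrunc 2>/dev/null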
Obviously we tried to clone, export and create a qcow2 image from every
snapshot later than 7 August, but none of those operations completed; we
can recover only from the 7 August snapshot, which is missing 2 months
of new data.
If you have a workaround or a solution, please write out the commands we
need to run, with examples. We searched a lot about backing files, but
all we found was the qemu-img manual, with no examples of how to recover
or change one.
Thanks again
Claudio Soprano
--
Claudio Soprano phone: (+39)-06-9403.2349/2355
Computing Service fax: (+39)-06-9403.2649
LNF-INFN e-mail: Claudio.Soprano(a)lnf.infn.it
Via Enrico Fermi, 40 www: http://www.lnf.infn.it/
I-00044 Frascati, Italy
8 years, 1 month
Replication between storage domains
by Anantha Raghava
Hello...
I am trying to set up a DC & DR configuration with oVirt, using two sets
of rack servers and two storage boxes. In the DC I have FC storage and
in the DR site I have iSCSI storage. The question is: can I replicate
the VM disks and data between these dissimilar storage domains at, say,
a 15-minute interval?
--
Thanks & Regards,
Anantha Raghava
eXza Technology Consulting & Services
8 years, 1 month