[Users] unable to start vm in 3.3 and f19 with gluster


On 09/25/2013 04:40 AM, Gianluca Cecchi wrote:
Hello, I'm testing GlusterFS on 3.3 with fedora 19 systems. One engine (ovirt) + 2 nodes (ovnode01 and ovnode02)
Successfully created gluster volume composed by two bricks (one for each vdsm node) distributed replicated
Suggestion: if the page http://www.ovirt.org/Features/GlusterFS_Storage_Domain is the reference, it would be better to state explicitly that one has to start the newly created volume before adding a storage domain based on it. Someone who doesn't know Gluster could assume that starting the volume is the responsibility of the storage domain creation itself...
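To make that step explicit, the volume has to be both created and started before the storage domain is added; a minimal sketch using the volume name and brick paths reported later in this thread:

```shell
# Create a 2-brick replicated volume (names/paths as used in this thread)
gluster volume create gv01 replica 2 \
    192.168.33.41:/export/brick1/sdb1 \
    192.168.33.42:/export/brick1/sdb1

# Required before adding the storage domain in the engine:
gluster volume start gv01

# Verify that Status shows "Started"
gluster volume info gv01
```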
All seems OK from a configuration point of view. Uploaded a CentOS 6.4 ISO image into my ISO_DOMAIN (NFS exported from the engine... but that will be another thread...). Created a server VM with a 10 GB thin-allocated disk.
I get an error when starting the VM
on engine.log 2013-09-25 00:43:16,027 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-44) Rerun vm 409c5dbe-5e70-40de-bf73-46ef484ea2d7. Called from vds ovnode02 2013-09-25 00:43:16,031 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-6-thread-48) Correlation ID: 5ea15175, Job ID: 48128550-3633-4da4-8d9c-ab704be02f02, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM C6 on Host ovnode02. 2013-09-25 00:43:16,057 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-48) Lock Acquired to object EngineLock [exclusiveLocks= key: 409c5dbe-5e70-40de-bf73-46ef484ea2d7 value: VM , sharedLocks= ] 2013-09-25 00:43:16,070 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-6-thread-48) START, IsVmDuringInitiatingVDSCommand( vmId = 409c5dbe-5e70-40de-bf73-46ef484ea2d7), log id: 7979c53b 2013-09-25 00:43:16,071 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (pool-6-thread-48) FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 7979c53b 2013-09-25 00:43:16,086 INFO [org.ovirt.engine.core.bll.RunVmCommand] (pool-6-thread-48) Running command: RunVmCommand internal: false. Entities affected : ID: 409c5dbe-5e70-40de-bf73-46ef484ea2d7 Type: VM 2013-09-25 00:43:16,110 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-6-thread-48) START, IsoPrefixVDSCommand( storagePoolId = 6b3175e6-6fa2-473f-ba21-38917c413ba9, ignoreFailoverLimit = false), log id: 7fd62f0f 2013-09-25 00:43:16,111 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand] (pool-6-thread ...
On node vdsm.log Thread-2915::ERROR::2013-09-25 00:43:20,108::vm::2062::vm.Vm::(_startUnderlyingVm) vmId=`409c5dbe-5e70-40de-bf73-46ef484ea2d7`::The vm start process failed Traceback (most recent call last): File "/usr/share/vdsm/vm.py", line 2022, in _startUnderlyingVm self._run() File "/usr/share/vdsm/vm.py", line 2906, in _run self._connection.createXML(domxml, flags), File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 76, in wrapper ret = f(*args, **kwargs) File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2805, in createXML if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self) libvirtError: Unable to read from monitor: Connection reset by peer Thread-2915::DEBUG::2013-09-25 00:43:20,176::vm::2448::vm.Vm::(setDownStatus) vmId=`409c5dbe-5e70-40de-bf73-46ef484ea2d7`::Changed state to Down: Unable to read from monitor: Connection reset by peer libvirtEventLoop::WARNING::2013-09-25 00:43:20,114::clientIF::337::vds::(teardownVolumePath) Drive is not a vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50 _blockDev:False _checkIoTuneCategories:<bound method Drive._checkIoTuneCategories of <vm.Drive object at 0x2b12950>> _customize:<bound method Drive._customize of <vm.Drive object at 0x2b12950>> _deviceXML:<disk device="cdrom" snapshot="no" type="file"><source file="/var/run/vdsm/payload/409c5dbe-5e70-40de-bf73-46ef484ea2d7.393db1d8c9e756483db001b30a239296.img" startupPolicy="optional"/><target bus="ide" dev="hdd"/><readonly/><serial></serial></disk> _makeName:<bound method Drive._makeName of <vm.Drive object at 0x2b12950>> _validateIoTuneParams:<bound method Drive._validateIoTuneParams of <vm.Drive object at 0x2b12950>> apparentsize:0 blockDev:False cache:none conf:{'status': 'Down', 'acpiEnable': 'true', 'emulatedMachine': 'pc-1.0', 'vmId': '409c5dbe-5e70-40de-bf73-46ef484ea2d7', 'pid': '0', 'memGuaranteedSize': 1365, 'timeOffset': '0', 'keyboardLayout': 'en-us', 'displayPort': '-1', 
'displaySecurePort': '-1', 'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 'Nehalem', 'custom': {}, 'clientIp': '', 'exitCode': 1, 'nicModel': 'rtl8139,pv', 'smartcardEnable': 'false', 'kvmEnable': 'true', 'pitReinjection': 'false', 'transparentHugePages': 'true', 'devices': [{'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'device': 'qxl', 'specParams': {'vram': '65536'}, 'type': 'video', 'deviceId': '70eadea2-6b53-
Let me know if you need full logs
The disk image itself seems ok:
[root@ovnode02 ~]# ll /rhev/data-center/mnt/glusterSD/ovnode01\:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/
total 1025
-rw-rw----. 1 vdsm kvm 10737418240 Sep 25 00:42 dff09892-bc60-4de5-85c0-2a1fa215a161
-rw-rw----. 1 vdsm kvm     1048576 Sep 25 00:42 dff09892-bc60-4de5-85c0-2a1fa215a161.lease
-rw-r--r--. 1 vdsm kvm         268 Sep 25 00:42 dff09892-bc60-4de5-85c0-2a1fa215a161.meta
[root@ovnode02 ~]# qemu-img info /rhev/data-center/mnt/glusterSD/ovnode01\:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
image: /rhev/data-center/mnt/glusterSD/ovnode01:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 0
Also on the other node:
[root@ovnode01 vdsm]# ll /rhev/data-center/mnt/glusterSD/ovnode01\:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/
total 1025
-rw-rw----. 1 vdsm kvm 10737418240 Sep 25 00:42 dff09892-bc60-4de5-85c0-2a1fa215a161
-rw-rw----. 1 vdsm kvm     1048576 Sep 25 00:42 dff09892-bc60-4de5-85c0-2a1fa215a161.lease
-rw-r--r--. 1 vdsm kvm         268 Sep 25 00:42 dff09892-bc60-4de5-85c0-2a1fa215a161.meta
[root@ovnode01 vdsm]# qemu-img info /rhev/data-center/mnt/glusterSD/ovnode01\:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
image: /rhev/data-center/mnt/glusterSD/ovnode01:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 0
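One extra check worth doing (not performed in the thread above): qemu opens the disk through libgfapi via a gluster:// URL rather than through the FUSE mount, so querying the image the same way can separate an image problem from a gfapi access problem:

```shell
# Reads the image over libgfapi, the same path qemu-kvm takes.
# If this fails while the FUSE-mounted path above works, the problem is
# in the gfapi/insecure-port configuration, not in the image itself.
# (Requires qemu-img built with GlusterFS support, as on these F19 hosts.)
qemu-img info gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
```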
[root@ovnode02 ~]# gluster volume list gv01
[root@ovnode02 ~]# gluster volume info
Volume Name: gv01
Type: Replicate
Volume ID: 7cf18f87-eef8-47cb-b469-8e5f92bfcd98
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.33.41:/export/brick1/sdb1
Brick2: 192.168.33.42:/export/brick1/sdb1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
auth.allow: *
user.cifs: on
nfs.disable: off
During volume creation I noticed this message, which I don't know whether it has any impact:
Volume Option group=virt could not be set on gv01
This should not be an issue. This is about performance tuning, not about the functionality.
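For reference, the "virt" option group is a tuning profile shipped with GlusterFS; applying it by hand would look roughly like this (the exact option list varies by GlusterFS version, so treat these as typical values rather than the authoritative set):

```shell
# Typical contents of the "virt" profile for VM image workloads
# (hedged: option list may differ on other GlusterFS releases):
gluster volume set gv01 quick-read off
gluster volume set gv01 read-ahead off
gluster volume set gv01 io-cache off
gluster volume set gv01 stat-prefetch off
gluster volume set gv01 eager-lock enable
gluster volume set gv01 remote-dio enable
```

As noted above, these are performance-oriented settings, so failing to apply the group does not stop the volume from working.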
See also this image for the events generated in the engine GUI: https://docs.google.com/file/d/0BwoPbcrMv8mvZEp6UmhPV0ttaVU/edit?usp=sharing
Possibly the OpenStack-related ones could be misunderstood; honestly, I haven't understood their meaning...
Gianluca

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

oVirt hosts are VMs inside an ESX 5.1 infra. I think all is ok in terms of nested virtualization though CPU of ESX host is E7-4870 and cluster defined as "Intel Nehalem Family" selinux is in permissive mode [root@ovnode01 libvirt]# vdsClient -s localhost getVdsCapabilities HBAInventory = {'FC': [], 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:6344c23973df'}]} ISCSIInitiatorName = 'iqn.1994-05.com.redhat:6344c23973df' bondings = {'bond0': {'addr': '', 'cfg': {}, 'hwaddr': '8e:a1:3b:0c:83:47', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}} bridges = {'ovirtmgmt': {'addr': '192.168.33.41', 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'GATEWAY': '192.168.33.15', 'IPADDR': '192.168.33.41', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '192.168.33.15', 'ipv6addrs': ['fe80::250:56ff:fe9f:686b/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['eth0'], 'stp': 'off'}} clusterLevels = ['3.0', '3.1', '3.2', '3.3'] cpuCores = '4' cpuFlags = 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,mmx,fxsr,sse,sse2,ss,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,nopl,xtopology,tsc_reliable,nonstop_tsc,aperfmperf,pni,monitor,vmx,ssse3,cx16,sse4_1,sse4_2,x2apic,popcnt,lahf_lm,ida,arat,epb,dtherm,tpr_shadow,vnmi,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270' cpuModel = 'Intel(R) Xeon(R) CPU E7- 4870 @ 2.40GHz' cpuSockets = '4' cpuSpeed = '2394.000' cpuThreads = '4' emulatedMachines = ['pc', 'q35', 'isapc', 'pc-0.10', 'pc-0.11', 'pc-0.12', 'pc-0.13', 'pc-0.14', 'pc-0.15', 'pc-1.0', 'pc-1.1', 'pc-1.2', 'pc-1.3', 'none'] guestOverhead = '65' hooks = {} kvmEnabled = 'true' lastClient = '192.168.33.40' lastClientIface = 'ovirtmgmt' management_ip = '0.0.0.0' memSize = '16050' netConfigDirty = 'False' networks = {'ovirtmgmt': {'addr': '192.168.33.41', 
'bridged': True, 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'GATEWAY': '192.168.33.15', 'IPADDR': '192.168.33.41', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '192.168.33.15', 'iface': 'ovirtmgmt', 'ipv6addrs': ['fe80::250:56ff:fe9f:686b/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['eth0'], 'stp': 'off'}} nics = {'ens224': {'addr': '192.168.230.31', 'cfg': {'BOOTPROTO': 'static', 'DEVICE': 'ens224', 'HWADDR': '00:50:56:9F:3C:B0', 'IPADDR': '192.168.230.31', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'TYPE': 'Ethernet'}, 'hwaddr': '00:50:56:9f:3c:b0', 'ipv6addrs': ['fe80::250:56ff:fe9f:3cb0/64'], 'mtu': '1500', 'netmask': '255.255.255.0', 'speed': 10000}, 'eth0': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'DEVICE': 'eth0', 'HWADDR': '00:50:56:9f:68:6b', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no'}, 'hwaddr': '00:50:56:9f:68:6b', 'ipv6addrs': ['fe80::250:56ff:fe9f:686b/64'], 'mtu': '1500', 'netmask': '', 'speed': 10000}, 'p256p1': {'addr': '', 'cfg': {}, 'hwaddr': '00:50:56:9f:3b:86', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}} operatingSystem = {'name': 'Fedora', 'release': '3', 'version': '19'} packages2 = {'glusterfs': {'buildtime': 1375786850, 'release': '8.fc19', 'version': '3.4.0'}, 'glusterfs-fuse': {'buildtime': 1375786850, 'release': '8.fc19', 'version': '3.4.0'}, 'glusterfs-rdma': {'buildtime': 1375786850, 'release': '8.fc19', 'version': '3.4.0'}, 'glusterfs-server': {'buildtime': 1375786850, 'release': '8.fc19', 'version': '3.4.0'}, 'kernel': {'buildtime': 1379163891.0, 'release': '200.fc19.x86_64', 'version': '3.11.1'}, 'libvirt': {'buildtime': 1375400611, 'release': '1.fc19', 'version': '1.0.5.5'}, 'mom': {'buildtime': 1375215820, 'release': '3.fc19', 'version': '0.3.2'}, 'qemu-img': {'buildtime': 1378374475, 'release': '9.fc19', 'version': 
'1.4.2'}, 'qemu-kvm': {'buildtime': 1378374475, 'release': '9.fc19', 'version': '1.4.2'}, 'spice-server': {'buildtime': 1375454091, 'release': '1.fc19', 'version': '0.12.4'}, 'vdsm': {'buildtime': 1378818342, 'release': '2.fc19', 'version': '4.12.1'}} reservedMem = '321' software_revision = '2' software_version = '4.12' supportedENGINEs = ['3.0', '3.1', '3.2', '3.3'] supportedProtocols = ['2.2', '2.3'] uuid = '421F7170-C703-34E3-9628-4588D841F8B1' version_name = 'Snow Man' vlans = {} vmTypes = ['kvm'] qemu log for the VM: 2013-09-25 05:42:29.130+0000: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name C6 -S -machine pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid 409c5dbe-5e70-40de-bf73-46ef484ea2d7 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-3,serial=421F7170-C703-34E3-9628-4588D841F8B1,uuid=409c5dbe-5e70-40de-bf73-46ef484ea2d7 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/C6.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-09-25T05:42:28,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:10,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 qemu-system-x86_64: -drive file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads: Gluster connection failed for server=ovnode01 port=0 volume=gv01 image=20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161 transport=tcp qemu-system-x86_64: -drive file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads: could not open disk image 
gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161: No data available 2013-09-25 05:42:32.291+0000: shutting down

So it seems the problem is:

file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads: Gluster connection failed for server=ovnode01 port=0 volume=gv01 image=20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161 transport=tcp
qemu-system-x86_64: -drive file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads: could not open disk image

It is the same in the qemu log of both hosts. On the other one I have:

2013-09-25 05:42:35.454+0000: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name C6 -S -machine pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid 409c5dbe-5e70-40de-bf73-46ef484ea2d7 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-3,serial=421FAF48-83D1-08DC-F2ED-F2894F8BC56D,uuid=409c5dbe-5e70-40de-bf73-46ef484ea2d7 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/C6.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-09-25T05:42:35,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:10,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 qemu-system-x86_64: -drive file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads: Gluster connection failed for server=ovnode01 port=0 volume=gv01 
image=20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161 transport=tcp
qemu-system-x86_64: -drive file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads: could not open disk image gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161: No data available
2013-09-25 05:42:38.620+0000: shutting down

Currently iptables as set up by the install is this (I chose to set up iptables from the GUI when I added the host). Do I have to add anything for gluster?

[root@ovnode01 qemu]# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:54321
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:161
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:16514
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 5634:6166
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 multiport dports 49152:49216
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:24007
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:111
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:38465
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:38466
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:111
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:38467
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:2049
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:39543
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:55863
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:38468
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:963
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:965
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:4379
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:139
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:445
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpts:24009:24108
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpts:49152:49251
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- 0.0.0.0/0 0.0.0.0/0 PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
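For comparison, the Gluster-related ports in the listing above, written out as explicit iptables commands (a sketch based on GlusterFS 3.4-era defaults; port ranges may differ on other versions):

```shell
# GlusterFS ports that must be open between the nodes
# (3.4-era defaults; assumption, not taken verbatim from the thread):
iptables -A INPUT -p tcp --dport 24007 -j ACCEPT        # glusterd management
iptables -A INPUT -p tcp --dport 24009:24108 -j ACCEPT  # brick ports (GlusterFS <= 3.3)
iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT  # brick ports (GlusterFS >= 3.4)
# Gluster NFS: portmapper plus the NFS-related daemons
iptables -A INPUT -p tcp -m multiport --dports 111,2049,38465:38467 -j ACCEPT
```

All of these already appear in the listing above, which suggests the firewall is not the cause here.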

On 09/25/2013 11:36 AM, Gianluca Cecchi wrote:
qemu-system-x86_64: -drive file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads: Gluster connection failed for server=ovnode01 port=0 volume=gv01 image=20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161 transport=tcp qemu-system-x86_64: -drive file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads: could not open disk image gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161: No data available 2013-09-25 05:42:32.291+0000: shutting down
Have the following configuration changes been done?
1) gluster volume set <volname> server.allow-insecure on
2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this line: option rpc-auth-allow-insecure on
Post 2), restarting glusterd would be necessary.
Regards, Vijay

On Wed, Sep 25, 2013 at 8:11 AM, Vijay Bellur wrote:
Have the following configuration changes been done?
1) gluster volume set <volname> server.allow-insecure on
2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this line: option rpc-auth-allow-insecure on
Post 2), restarting glusterd would be necessary.
Regards, Vijay
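Spelled out as commands, the two changes above would look like this (gv01 assumed as the volume name; the glusterd.vol edit and glusterd restart must happen on every gluster node):

```shell
# 1) Per-volume: allow clients (libgfapi in qemu runs unprivileged)
#    to connect from ports above 1024
gluster volume set gv01 server.allow-insecure on

# 2) On every gluster node, add this line to /etc/glusterfs/glusterd.vol
#    inside the "volume management" block:
#      option rpc-auth-allow-insecure on
vi /etc/glusterfs/glusterd.vol

# Then restart glusterd on each node (Fedora 19 uses systemd)
systemctl restart glusterd
```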
No, because I didn't find this kind of info anywhere... ;-)
Done on both hosts (step 1 only once), and I see that the GUI detects the change in the volume settings.
Now the VM can start (I see the qemu process on ovnode02), but it seems to remain in the hourglass state icon... After 5 minutes it is still in the "executing" phase in tasks.
Eventually I'm going to restart the nodes completely...

On 09/25/2013 11:51 AM, Gianluca Cecchi wrote:
On Wed, Sep 25, 2013 at 8:11 AM, Vijay Bellur wrote:
Have the following configuration changes been done?
1) gluster volume set <volname> server.allow-insecure on
2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this line: option rpc-auth-allow-insecure on
Post 2), restarting glusterd would be necessary.
Regards, Vijay
No, because I didn't find this kind of info anywhere... ;-)
The feature page wiki does provide this information but it gets missed in the details. Should we highlight it more?
Done on both hosts (step 1 only one time) and I see that the GUI detects the change in the volume settings. Now the VM can start (I see the qemu process on ovnode02) but it seems to remain in the hourglass state icon... After 5 minutes it still remains in the "executing" phase in tasks
Let us know how this goes. -Vijay

On 25.09.2013 09:11, Vijay Bellur wrote:
On 09/25/2013 11:51 AM, Gianluca Cecchi wrote:
On Wed, Sep 25, 2013 at 8:11 AM, Vijay Bellur wrote:
Have the following configuration changes been done?
1) gluster volume set <volname> server.allow-insecure on
2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this line: option rpc-auth-allow-insecure on
Post 2), restarting glusterd would be necessary.
Regards, Vijay
No, because I didn't find this kind of info anywhere... ;-)
The feature page wiki does provide this information but it gets missed in the details. Should we highlight it more?
Done on both hosts (step 1 only one time) and I see that the GUI detects the change in the volume settings. Now the VM can start (I see the qemu process on ovnode02) but it seems to remain in the hourglass state icon... After 5 minutes it still remains in the "executing" phase in tasks
Let us know how this goes.
-Vijay _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
I found a bug report on your issue. https://bugzilla.redhat.com/show_bug.cgi?id=988299
Scroll to the end. ( https://bugzilla.redhat.com/show_bug.cgi?id=988299#c46 )
There is a modified glusterVolume.py. I have the same issue as well, I'm trying to fix it right now.
Regards David

On Thu, Sep 26, 2013 at 11:07 AM, David Riedl wrote:
I found a bug report on your issue.
https://bugzilla.redhat.com/show_bug.cgi?id=988299
Scroll to the end. ( https://bugzilla.redhat.com/show_bug.cgi?id=988299#c46 ) There is a modified glusterVolume.py. I have the same issue as well, I'm trying to fix it right now.
Regards David
Unfortunately I have the ESX infra down for a scheduled maintenance right now. Later today, when it is up again, I'm going to verify and test as suggested to me in the previous message... But it doesn't appear to be the same thing to me: my volume is gv01, without any _ or / in it. Also the message is a bit different.
In vdsm.log on the host:
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2805, in createXML if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self) libvirtError: Unable to read from monitor: Connection reset by peer
In qemu.log of the VM:
Gluster connection failed for server=ovnode01 port=0 volume=gv01 image=20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161 transport=tcp
Gianluca
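One way to narrow a failure like this down (a suggestion, not something from the thread): qemu-img can open the same gluster:// URL directly, which exercises the libgfapi path with no libvirt or oVirt in the loop, assuming the host's qemu-img was built with gluster support (as the qemu shipped for this setup appears to be).

```shell
# Probe the exact image URL that qemu fails on, from the host itself.
# On success this prints the image format and size; on failure it surfaces
# the underlying gluster error without starting any VM.
qemu-img info gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
```

If this fails with a similar "Gluster connection failed" message, the problem is in the gluster/libgfapi configuration rather than in vdsm or libvirt.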

I was able to restart the engine and the two hosts. All restarted again. Now when I run the VM it remains in paused state:
- start VM (about 21:54 today); it starts and goes into paused mode (arrow icon near the VM)
From image https://docs.google.com/file/d/0BwoPbcrMv8mvRXlaa19sdFpmQ0E/edit?usp=sharing
you see that apparently the execute action terminates at 21:54 but the VM maintains paused state.
- if I try other actions on the same VM there is no message preventing me from that... and it stays in paused mode (see the several attempted actions to solve the situation)
- at 21:58 the host becomes unresponsive from the GUI, no network ping from the engine, and if I go into its console I see the login prompt but I am not able to connect...
- power off ovnode01: the icon near the VM now becomes a question mark (?)
- power on ovnode01: the VM goes into stop mode (red square) and ovnode01 joins the cluster again
vdsm log in gzip format, starting today before the start of the VM: https://docs.google.com/file/d/0BwoPbcrMv8mvXzY2eEcwR0VXazQ/edit?usp=sharing
engine.log in gzip format: https://docs.google.com/file/d/0BwoPbcrMv8mvU1RuLVRVYVZ0SXM/edit?usp=sharing
PS: at the moment no fencing action is set up. Could I set any agent for a host virtualized inside VMware?
From a gluster point of view, on ovnode01 under /var/log/glusterfs:
[root@ovnode01 glusterfs]# ls -lrt
total 2008
drwxr-xr-x. 2 root root    4096 Sep 25 00:05 bricks
-rw-------. 1 root root   59038 Sep 26 22:09 nfs.log
-rw-------. 1 root root   51992 Sep 26 22:09 glustershd.log
-rw-------. 1 root root   40230 Sep 26 22:09 rhev-data-center-mnt-glusterSD-ovnode01:gv01.log
-rw-------. 1 root root  422757 Sep 26 22:47 etc-glusterfs-glusterd.vol.log
-rw-------. 1 root root 1449411 Sep 26 22:47 cli.log
In etc-glusterfs-glusterd.vol.log I see several lines like this:
[2013-09-26 20:19:53.450793] I [glusterd-handler.c:1007:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
qemu process:
qemu 4565 1 0 22:21 ?
00:00:09 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name C6 -S -machine pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid 409c5dbe-5e70-40de-bf73-46ef484ea2d7 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-3,serial=421F7170-C703-34E3-9628-4588D841F8B1,uuid=409c5dbe-5e70-40de-bf73-46ef484ea2d7 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/C6.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-09-26T20:21:00,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/mnt/ovirt.localdomain.local:_var_lib_exports_iso/6e80607d-5437-4fc5-b73c-66794f6381e0/images/11111111-1111-1111-1111-111111111111/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:10,bus=pci.0,addr=0x3,bootindex=3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.org.qemu.guest_agent.0,server,nowait 
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
The VM shows as started in the GUI events but its icon remains paused. See the image: https://docs.google.com/file/d/0BwoPbcrMv8mvZ1RnUkg4aVhlckk/edit?usp=sharing
Gianluca

Today I was able to work again on this matter and it seems related to spice. Every time I start the VM (that is defined with spice) it goes into paused state, and after a few minutes the node becomes unreachable: down from the GUI, no response inside its console either, and I am forced to power it off.
So I tried to create a VM derived from the oVirt-generated XML, using gluster, to see if it is the problem in any way... and to start it via virsh and libvirt on the node. See the attachments: C6.xml is the oVirt-generated XML file from /var/run/libvirt/qemu/C6.xml; myvm.xml is the derived VM that, once defined and started, is able to go into and remain in the running state.
[root@ovnode02 ~]# virsh define myvm.xml
Please enter your authentication name: vuser
Please enter your password:
Domain myvm defined from myvm.xml
[root@ovnode02 ~]# virsh list --all
Please enter your authentication name: vuser
Please enter your password:
 Id    Name                           State
----------------------------------------------------
 -     myvm                           shut off
[root@ovnode02 ~]# virsh start myvm
Please enter your authentication name: vuser
Please enter your password:
Domain myvm started
[root@ovnode02 ~]# virsh list --all
Please enter your authentication name: vuser
Please enter your password:
 Id    Name                           State
----------------------------------------------------
 2     myvm                           running
In this case:
[root@ovnode02 ~]# ps -ef|grep qemu
qemu      1617   574  0 18:24 ?        00:00:00 [python] <defunct>
qemu      6083     1  7 18:40 ?
00:00:14 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name myvm -S -machine pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid dfadc661-6288-4f21-8faa-012daf29478f -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/myvm.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-10-01T16:40:55,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3
root      7137  7083  0 18:44 pts/2    00:00:00 grep --color=auto qemu
If I undefine and redefine myvm adding the spice parts to it, I get the same bad behaviour as in oVirt:
[root@ovnode02 ~]# virsh list --all
Please enter your authentication name: vuser
Please enter your password:
 Id    Name                           State
----------------------------------------------------
 3     myvm                           paused
I get a confirmation of spice involvement if I run C6 once from oVirt using vnc instead of spice. Now the VM goes into running state and I can access its console from vnc, and the qemu command line is:
qemu 10786 1 9 21:05 ?
00:00:14 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name C6 -S -machine pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid 409c5dbe-5e70-40de-bf73-46ef484ea2d7 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-3,serial=421F7170-C703-34E3-9628-4588D841F8B1,uuid=409c5dbe-5e70-40de-bf73-46ef484ea2d7 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/C6.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-10-01T19:05:47,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/mnt/ovirt.localdomain.local:_var_lib_exports_iso/6e80607d-5437-4fc5-b73c-66794f6381e0/images/11111111-1111-1111-1111-111111111111/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:10,bus=pci.0,addr=0x3,bootindex=3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.org.qemu.guest_agent.0,server,nowait 
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0 -vnc 0:0,password -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
Any info about how to debug? Did anyone get the former logs and see any information? Bear in mind that the environment is nested virtualization inside ESXi 5.1, so it could play a part in the problem too. Is anyone using spice over glusterfs on f19 without any problem?
On the nodes:
[root@ovnode01 ~]# rpm -qa|grep vdsm
vdsm-python-4.12.1-2.fc19.x86_64
vdsm-python-cpopen-4.12.1-2.fc19.x86_64
vdsm-gluster-4.12.1-2.fc19.noarch
vdsm-cli-4.12.1-2.fc19.noarch
vdsm-4.12.1-2.fc19.x86_64
vdsm-xmlrpc-4.12.1-2.fc19.noarch
On the engine:
[root@ovirt ~]# rpm -qa|grep ovirt
ovirt-engine-restapi-3.3.0-4.fc19.noarch
ovirt-engine-sdk-python-3.3.0.6-1.fc19.noarch
ovirt-log-collector-3.3.0-1.fc19.noarch
ovirt-engine-lib-3.3.0-4.fc19.noarch
ovirt-engine-3.3.0-4.fc19.noarch
ovirt-release-fedora-8-1.noarch
ovirt-iso-uploader-3.3.0-1.fc19.noarch
ovirt-engine-cli-3.3.0.4-1.fc19.noarch
ovirt-engine-setup-3.3.0-4.fc19.noarch
ovirt-engine-dbscripts-3.3.0-4.fc19.noarch
ovirt-host-deploy-java-1.1.1-1.fc19.noarch
ovirt-image-uploader-3.3.0-1.fc19.noarch
ovirt-host-deploy-1.1.1-1.fc19.noarch
ovirt-engine-webadmin-portal-3.3.0-4.fc19.noarch
ovirt-engine-backend-3.3.0-4.fc19.noarch
ovirt-engine-userportal-3.3.0-4.fc19.noarch
ovirt-engine-tools-3.3.0-4.fc19.noarch
Thanks in advance, Gianluca

On 10/02/2013 12:57 AM, Gianluca Cecchi wrote:
Today I was able to work again on this matter and it seems related to spice. Every time I start the VM (that is defined with spice) it goes into
and this doesn't happen if the VM is defined with vnc?
paused state, and after a few minutes the node becomes unreachable: down from the GUI, no response inside its console either, and I am forced to power it off.
So I tried to create a VM derived from the oVirt-generated XML, using gluster, to see if it is the problem in any way... and to start it via virsh and libvirt on the node.
See the attachments: C6.xml is the oVirt-generated XML file from /var/run/libvirt/qemu/C6.xml; myvm.xml is the derived VM that, once defined and started, is able to go into and remain in the running state.
[root@ovnode02 ~]# virsh define myvm.xml
Please enter your authentication name: vuser
Please enter your password:
Domain myvm defined from myvm.xml
[root@ovnode02 ~]# virsh list --all
Please enter your authentication name: vuser
Please enter your password:
 Id    Name                           State
----------------------------------------------------
 -     myvm                           shut off
[root@ovnode02 ~]# virsh start myvm
Please enter your authentication name: vuser
Please enter your password:
Domain myvm started
[root@ovnode02 ~]# virsh list --all
Please enter your authentication name: vuser
Please enter your password:
 Id    Name                           State
----------------------------------------------------
 2     myvm                           running
In this case
[root@ovnode02 ~]# ps -ef|grep qemu qemu 1617 574 0 18:24 ? 00:00:00 [python] <defunct> qemu 6083 1 7 18:40 ? 00:00:14 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name myvm -S -machine pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid dfadc661-6288-4f21-8faa-012daf29478f -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/myvm.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-10-01T16:40:55,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 root 7137 7083 0 18:44 pts/2 00:00:00 grep --color=auto qemu
If I undefine and redefine myvm adding the spice parts in it, I get the same bad behaviour as in oVirt
[root@ovnode02 ~]# virsh list --all
Please enter your authentication name: vuser
Please enter your password:
 Id    Name                           State
----------------------------------------------------
 3     myvm                           paused
I get a confirmation of spice involvement if I run C6 once from oVirt using vnc instead of spice. Now the VM goes into running state and I can access its console from vnc, and the qemu command line is:
qemu 10786 1 9 21:05 ? 00:00:14 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name C6 -S -machine pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid 409c5dbe-5e70-40de-bf73-46ef484ea2d7 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-3,serial=421F7170-C703-34E3-9628-4588D841F8B1,uuid=409c5dbe-5e70-40de-bf73-46ef484ea2d7 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/C6.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-10-01T19:05:47,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/mnt/ovirt.localdomain.local:_var_lib_exports_iso/6e80607d-5437-4fc5-b73c-66794f6381e0/images/11111111-1111-1111-1111-111111111111/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:10,bus=pci.0,addr=0x3,bootindex=3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0 -vnc 0:0,password -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
Any info about how to debug? Did anyone get the former logs and see any information?
Bear in mind that the environment is nested virtualization inside ESXi 5.1, so it could play a part in the problem too. Is anyone using spice over glusterfs on f19 without any problem?
On the nodes:
[root@ovnode01 ~]# rpm -qa|grep vdsm
vdsm-python-4.12.1-2.fc19.x86_64
vdsm-python-cpopen-4.12.1-2.fc19.x86_64
vdsm-gluster-4.12.1-2.fc19.noarch
vdsm-cli-4.12.1-2.fc19.noarch
vdsm-4.12.1-2.fc19.x86_64
vdsm-xmlrpc-4.12.1-2.fc19.noarch
On the engine:
[root@ovirt ~]# rpm -qa|grep ovirt
ovirt-engine-restapi-3.3.0-4.fc19.noarch
ovirt-engine-sdk-python-3.3.0.6-1.fc19.noarch
ovirt-log-collector-3.3.0-1.fc19.noarch
ovirt-engine-lib-3.3.0-4.fc19.noarch
ovirt-engine-3.3.0-4.fc19.noarch
ovirt-release-fedora-8-1.noarch
ovirt-iso-uploader-3.3.0-1.fc19.noarch
ovirt-engine-cli-3.3.0.4-1.fc19.noarch
ovirt-engine-setup-3.3.0-4.fc19.noarch
ovirt-engine-dbscripts-3.3.0-4.fc19.noarch
ovirt-host-deploy-java-1.1.1-1.fc19.noarch
ovirt-image-uploader-3.3.0-1.fc19.noarch
ovirt-host-deploy-1.1.1-1.fc19.noarch
ovirt-engine-webadmin-portal-3.3.0-4.fc19.noarch
ovirt-engine-backend-3.3.0-4.fc19.noarch
ovirt-engine-userportal-3.3.0-4.fc19.noarch
ovirt-engine-tools-3.3.0-4.fc19.noarch
Thanks in advance, Gianluca

On Wed, Oct 2, 2013 at 9:16 PM, Itamar Heim wrote:
On 10/02/2013 12:57 AM, Gianluca Cecchi wrote:
Today I was able to work again on this matter and it seems related to spice. Every time I start the VM (that is defined with spice) it goes in
and this doesn't happen if the VM is defined with vnc?
No, reproduced both from oVirt and through virsh. With spice defined in the boot options or in the XML (for virsh) the VM remains in paused state and after a few minutes the node seems to hang... With vnc the VM goes into running state. I'm going to put the same config on 2 physical nodes with only local storage, see what happens, and report... Gianluca

On Thu, Oct 3, 2013 at 12:21 AM, Gianluca Cecchi wrote:
On Wed, Oct 2, 2013 at 9:16 PM, Itamar Heim wrote:
On 10/02/2013 12:57 AM, Gianluca Cecchi wrote:
Today I was able to work again on this matter and it seems related to spice. Every time I start the VM (that is defined with spice) it goes in
and this doesn't happen if the VM is defined with vnc?
No, reproduced both from oVirt and through virsh. With spice defined in the boot options or in the XML (for virsh) the VM remains in paused state and after a few minutes the node seems to hang... With vnc the VM goes into running state. I'm going to put the same config on 2 physical nodes with only local storage, see what happens, and report...
Gianluca
So I was able to configure 2x HP BL685c G1 blades (Opteron G2 in oVirt) with only one internal 72Gb disk each. It seems GlusterFS works quite well here from the initial tests done. I used the same config as the problematic nested one. Created a CentOS 6.4 VM, configured as server in oVirt but installed as default Desktop in anaconda. The 2 servers are on a Gigabit network. The install phase of the 1089 packages took an elapsed time of about 17 minutes to complete. During the install I see (with the command "bwm-ng -t 1000") throughput going at 55MB/s, so it seems quite ok. Also, at the moment the distributed-replicated volume is on ovirtmgmt itself and I don't see any disconnection of hosts in webadmin... I'm going to test more on this infra, and check/compare with the nested environment that was configured quite the same... unfortunately not much disk space to use and stress ;-( Gianluca
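A rough sanity check on that 55MB/s figure (a back-of-the-envelope sketch, assuming replica 2 with client-side replication and an assumed ~6% protocol overhead, neither of which is stated in the thread):

```python
# Back-of-the-envelope check: is ~55 MB/s plausible for installs onto a
# replica-2 gluster volume over a single gigabit link?
line_rate_MBps = 1000 / 8          # 1 Gbit/s is 125 MB/s raw
protocol_efficiency = 0.94         # assumed ~6% TCP/IP + gluster framing overhead

usable_MBps = line_rate_MBps * protocol_efficiency

# With replica 2 the gluster client writes each block to both bricks over the
# same link, so achievable application-level write throughput roughly halves.
replica_write_MBps = usable_MBps / 2

print(f"{replica_write_MBps:.1f} MB/s")  # roughly 59 MB/s, same ballpark as the observed 55
```

So the observed 55MB/s is close to what a single saturated GbE link can deliver for replicated writes, which suggests the network, not gluster, is the ceiling here.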

On 10/03/2013 01:21 AM, Gianluca Cecchi wrote:
On Wed, Oct 2, 2013 at 9:16 PM, Itamar Heim wrote:
On 10/02/2013 12:57 AM, Gianluca Cecchi wrote:
Today I was able to work again on this matter and it seems related to spice. Every time I start the VM (that is defined with spice) it goes in
and this doesn't happen if the VM is defined with vnc?
No, reproduced both from oVirt and through virsh. With spice defined in the boot options or in the XML (for virsh) the VM remains in paused state and after a few minutes the node seems to hang... With vnc the VM goes into running state. I'm going to put the same config on 2 physical nodes with only local storage, see what happens, and report...
Gianluca
adding spice-devel mailing list as the VM only hangs if started with spice and not with vnc, from virsh as well.

On 09/25/2013 02:10 AM, Gianluca Cecchi wrote:
Hello, I'm testing GlusterFS on 3.3 with fedora 19 systems. One engine (ovirt) + 2 nodes (ovnode01 and ovnode02)
Successfully created a gluster volume composed of two bricks (one on each vdsm node), distributed replicated
Suggestion: if the page http://www.ovirt.org/Features/GlusterFS_Storage_Domain is the reference, perhaps it would be better to explicitly specify that one has to start the created volume before going on to add a storage domain based on it. Not knowing Gluster could lead one to think that the start phase is the responsibility of the storage domain creation itself...
it's a wiki - please edit/fix it ;)

On Wed, Sep 25, 2013 at 8:02 AM, Itamar Heim wrote:
Suggestion: if the page http://www.ovirt.org/Features/GlusterFS_Storage_Domain is the reference, perhaps it would be better to explicitly specify that one has to start the created volume before going on to add a storage domain based on it. Not knowing Gluster could lead one to think that the start phase is the responsibility of the storage domain creation itself...
it's a wiki - please edit/fix it ;)
I was in doubt because it is not explicitly presented as a wiki but more as infra... I'll do that
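For completeness, the missing start step discussed above is a single gluster CLI command (a sketch; gv01 is the volume name used elsewhere in this thread):

```shell
# Start the volume before creating a storage domain on top of it...
gluster volume start gv01
# ...and verify that "Status: Started" is reported.
gluster volume info gv01
```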
participants (5)
-
David Riedl
-
Gianluca Cecchi
-
Itamar Heim
-
Kanagaraj
-
Vijay Bellur