getVdsCapabilites unexpected exception [was: Re: AIO 3.4 on fedora 19 initial errors before coming up]
by Gianluca Cecchi
Actually, it seems that today the datacenter comes up, but a Windows XP
VM that I start remains in "Waiting for Launch".
Yesterday I ran an update that involved gluster, but I don't think it is
relevant in my environment...
yum.log contains:
May 10 21:24:20 Updated: openstack-java-client-3.0.4-1.fc19.noarch
May 10 21:24:20 Updated: audit-libs-2.3.6-1.fc19.x86_64
May 10 21:24:20 Updated: libjpeg-turbo-1.3.1-2.fc19.x86_64
May 10 21:24:20 Installed: rsyslog-mmjsonparse-7.2.6-1.fc19.x86_64
May 10 21:24:21 Updated: glusterfs-libs-3.5.0-3.fc19.x86_64
May 10 21:24:21 Updated: glusterfs-3.5.0-3.fc19.x86_64
May 10 21:24:21 Updated: openstack-java-quantum-model-3.0.4-1.fc19.noarch
May 10 21:24:21 Updated: openstack-java-glance-model-3.0.4-1.fc19.noarch
May 10 21:24:21 Updated: openstack-java-keystone-model-3.0.4-1.fc19.noarch
May 10 21:24:21 Updated: openstack-java-keystone-client-3.0.4-1.fc19.noarch
May 10 21:24:21 Updated: openstack-java-glance-client-3.0.4-1.fc19.noarch
May 10 21:24:21 Updated: openstack-java-quantum-client-3.0.4-1.fc19.noarch
May 10 21:24:22 Updated: glusterfs-fuse-3.5.0-3.fc19.x86_64
May 10 21:24:22 Updated: glusterfs-api-3.5.0-3.fc19.x86_64
May 10 21:24:22 Updated: glusterfs-rdma-3.5.0-3.fc19.x86_64
May 10 21:24:22 Updated: glusterfs-cli-3.5.0-3.fc19.x86_64
May 10 21:24:22 Updated: libjpeg-turbo-devel-1.3.1-2.fc19.x86_64
May 10 21:24:22 Updated: audit-2.3.6-1.fc19.x86_64
May 10 21:24:22 Updated: audit-libs-python-2.3.6-1.fc19.x86_64
May 10 21:24:23 Updated: openstack-java-resteasy-connector-3.0.4-1.fc19.noarch
May 10 21:24:23 Updated: 2:microcode_ctl-2.0-7.fc19.x86_64
May 10 21:24:23 Updated: python-ethtool-0.9-2.fc19.x86_64
May 10 21:24:23 Updated: htop-1.0.3-1.fc19.x86_64
May 10 21:24:23 Updated: python-lxml-3.3.5-1.fc19.x86_64
May 10 21:24:28 Updated: kmod-nvidia-304xx-3.13.11-100.fc19.x86_64-304.119-2.fc19.11.x86_64
May 10 21:24:28 Updated: audit-libs-2.3.6-1.fc19.i686
In my engine.log I see an error about getting capabilities...
2014-05-11 10:15:26,759 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler_Worker-67) Failed in GetCapabilitiesVDS method
2014-05-11 10:15:26,759 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler_Worker-67) Command org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand return value org.ovirt.engine.core.vdsbroker.vdsbroker.VDSInfoReturnForXmlRpc@792fc59f
2014-05-11 10:15:26,760 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler_Worker-67) HostName = local_host
2014-05-11 10:15:26,760 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler_Worker-67) Command GetCapabilitiesVDSCommand(HostName = local_host, HostId = aab9571f-da17-4c3c-9e6b-d0224b84c31e, vds=Host[local_host]) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetCapabilitiesVDS, error = Unexpected exception, code = 16
2014-05-11 10:15:26,771 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (DefaultQuartzScheduler_Worker-67) Failed to refresh VDS , vds = aab9571f-da17-4c3c-9e6b-d0224b84c31e : local_host, error = org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetCapabilitiesVDS, error = Unexpected exception, code = 16, continuing.: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetCapabilitiesVDS, error = Unexpected exception, code = 16
    at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createDefaultConcreteException(VdsBrokerCommand.java:61) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.createException(BrokerCommandBase.java:199) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:186) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand.executeVdsBrokerCommand(GetCapabilitiesVDSCommand.java:16) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:96) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:56) [vdsbroker.jar:]
    at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31) [dal.jar:]
    at org.ovirt.engine.core.vdsbroker.VdsManager.refreshCapabilities(VdsManager.java:537) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.beforeFirstRefreshTreatment(VdsUpdateRunTimeInfo.java:888) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.refreshVdsRunTimeInfo(VdsUpdateRunTimeInfo.java:499) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.refresh(VdsUpdateRunTimeInfo.java:337) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.VdsManager.onTimer(VdsManager.java:236) [vdsbroker.jar:]
    at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source) [:1.7.0_55]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_55]
    at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_55]
    at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60) [scheduler.jar:]
    at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz.jar:]
    at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) [quartz.jar:]
[root@tekkaman ovirt-engine]# vdsClient -s 0 getVdsCapabilities
Unexpected exception
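For reference, "code = 16" is VDSM's generic "unexpected exception" code; the engine log only carries the code, while the real Python traceback ends up in /var/log/vdsm/vdsm.log on the host. A small sketch for digging it out (the helper function name and grep pattern are mine, not from the thread):

```shell
#!/bin/sh
# Pull the numeric VDSM error code out of an engine.log line such as
# "... Failed to GetCapabilitiesVDS, error = Unexpected exception, code = 16"
extract_vds_error_code() {
    printf '%s\n' "$1" | sed -n 's/.*code = \([0-9][0-9]*\).*/\1/p'
}

# On the host itself, the traceback behind "Unexpected exception" is usually
# visible around the failing verb in the VDSM log:
#   grep -B2 -A20 'getVdsCapabilities' /var/log/vdsm/vdsm.log | tail -40
```

With the engine.log line from above, `extract_vds_error_code` yields 16; the vdsm.log grep is where the actual cause will show up.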
The qemu command line is:
qemu 8470 1 5 09:58 ? 00:00:57 /usr/bin/qemu-system-x86_64
-machine accel=kvm -name winxp -S -machine pc-1.0,accel=kvm,usb=off -cpu
Opteron_G3 -m 2048 -realtime mlock=off -smp
1,maxcpus=160,sockets=160,cores=1,threads=1 -uuid
2981b979-a363-4ab9-a251-439b5774b04d -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=19-8,serial=E0E1001E-8C00-002A-6F9A-90E6BAC9F1E1,uuid=2981b979-a363-4ab9-a251-439b5774b04d
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/winxp.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-05-11T09:58:34,driftfix=slew -no-shutdown -device
ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 -device
ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 -device
ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive
if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/mnt/_DATA/0a8035e6-e41d-40ff-a154-e0a374f264b2/images/75c54716-5222-4ad6-91f2-8b312eacc4b4/d4fa7785-8a89-4d13-9082-52556ab0b326,if=none,id=drive-virtio-disk0,format=raw,serial=75c54716-5222-4ad6-91f2-8b312eacc4b4,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:a8:01:52,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/2981b979-a363-4ab9-a251-439b5774b04d.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/2981b979-a363-4ab9-a251-439b5774b04d.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -device
qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,bus=pci.0,addr=0x2
-device AC97,id=sound0,bus=pci.0,addr=0x4 -chardev
spicevmc,id=charredir0,name=usbredir -device
usb-redir,chardev=charredir0,id=redir0 -chardev
spicevmc,id=charredir1,name=usbredir -device
usb-redir,chardev=charredir1,id=redir1 -chardev
spicevmc,id=charredir2,name=usbredir -device
usb-redir,chardev=charredir2,id=redir2 -chardev
spicevmc,id=charredir3,name=usbredir -device
usb-redir,chardev=charredir3,id=redir3 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8
[root@tekkaman ovirt-engine]# cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 16
model : 5
model name : AMD Athlon(tm) II X4 630 Processor
stepping : 2
microcode : 0x10000db
cpu MHz : 2800.000
cache size : 512 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt
pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl nonstop_tsc
extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic
cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt
hw_pstate npt lbrv svm_lock nrip_save
bogomips : 5600.37
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate
...
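Since the VM is started with `-cpu Opteron_G3` under KVM, it is worth confirming the host CPU actually exposes AMD-V, i.e. the `svm` flag in the listing above. A minimal check (the function name is mine):

```shell
#!/bin/sh
# Report whether a /proc/cpuinfo "flags" line advertises AMD-V (svm).
# grep -w treats svm_lock as a different word, so only the bare flag matches.
has_svm() {
    printf '%s\n' "$1" | grep -qw svm && echo yes || echo no
}

# On a live host: has_svm "$(grep -m1 '^flags' /proc/cpuinfo)"
```

The flags line shown above does contain `svm`, so hardware virtualization itself is not the problem here.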
10 years, 6 months
getting 404 after fresh install of oVirt 3.4 on CentOS 6.5 (+ solution)
by Steven Van Acker
Hello all,
I was asked to post the following problem on this mailing list.
After installing a clean CentOS 6.5 with a clean oVirt 3.4, following the
instructions from
http://www.ovirt.org/Download#Red_Hat_Enterprise_Linux_6.2FCentOS_Install...
I went to http://localhost.localdomain:80/ovirt-engine as indicated and was
redirected to a 404 page. The same happened when using https.
During the installation, engine-setup printed the following on my VM
with 512 MB of RAM:
[WARNING] Warning: Not enough memory is available on the host. Minimum requirement is 4096MB, and 16384MB is recommended.
SSH fingerprint: 4B:DE:48:26:99:AA:C0:72:E3:C8:B5:64:5F:6E:6D:00
Internal CA
FB:82:FE:14:35:3A:BE:1A:B1:E6:99:C2:DC:CD:6D:E0:44:64:0F:47
Web access is enabled at:
http://localhost.localdomain:80/ovirt-engine
https://localhost.localdomain:443/ovirt-engine
Please use the user "admin" and password specified in order to
login into oVirt Engine
The consequence of not having enough RAM is that oVirt silently fails to
start up, without any apparent errors or warnings.
In my view, the warning above should be rephrased as "Not enough memory is
available on the host; oVirt will refuse to start" and colored red.
The solution to this problem is to have at least 4 GB of RAM, after which
oVirt seems to start up fine (with only 1.2 GB of RAM actually in use).
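Given that engine-setup only warns and then lets the engine die silently, a pre-flight check against the 4096 MB minimum from the warning can save some head-scratching. A minimal sketch (threshold taken from the warning text; the function name is mine):

```shell
#!/bin/sh
# Compare MemTotal (in kB, as /proc/meminfo reports it) against the
# 4096 MB minimum that engine-setup warns about.
engine_mem_ok() {
    mem_kb=$1
    if [ "$mem_kb" -ge $((4096 * 1024)) ]; then
        echo "ok"
    else
        echo "insufficient: oVirt engine may silently fail to start"
    fi
}

# On a live host:
#   engine_mem_ok "$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)"
```

With the 512 MB VM from this report the check fails, matching the behaviour described above.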
kr,
-- Steven
10 years, 6 months
Re: [ovirt-users] gluster performance oVirt 3.4
by Humble Devassy Chirammal
|
| Quoting Vijay Bellur <vbellur(a)redhat.com>:
| > On 05/11/2014 02:04 AM, Vadims Korsaks wrote:
| > > HI!
| > >
| > > Created 2 node setup with oVirt 3.4 and CentOS 6.5, for storage created 2 node replicated gluster (3.5) fs on same hosts with oVirt.
| > > mount looks like this:
| > > 127.0.0.1:/gluster01 on /rhev/data-center/mnt/glusterSD/127.0.0.1:_gluster01 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
| > >
| > > when i making gluster test with dd, something like
| > > dd if=/dev/zero bs=1M count=20000 of=/rhev/data-center/mnt/glusterSD/127.0.0.1\:_gluster01/kaka
| > > i'm gettting speed ~ 110 MB/s, so this is 1Gbps speed of ethernet adapter
| > >
| > > but with in VM created in oVirt speed is lower than 20 MB/s
| > >
| > > why there is so huge difference?
| > > how can improve VMs disks speed?
| > >
| >
| > What are your gluster volume settings? Have you applied the following performance tunables in gluster's virt profile:
| >
| > eager-lock=enable
| > remote-dio=enable
| >
| > Regards,
| > Vijay
| >
| settings were:
| [root@centos155 ~]# gluster volume info gluster01
|
| Volume Name: gluster01
| Type: Replicate
| Volume ID: 436edaa3-ac8b-421f-aa35-68b5bd7064b6
| Status: Started
| Number of Bricks: 1 x 2 = 2
| Transport-type: tcp
| Bricks:
| Brick1: 10.2.75.152:/mnt/gluster01/brick
| Brick2: 10.2.75.155:/mnt/gluster01/brick
| Options Reconfigured:
| storage.owner-gid: 36
| storage.owner-uid: 36
|
|
| added your settings; now it looks like this:
|
| [root@centos155 ~]# gluster volume info gluster01
|
| Volume Name: gluster01
| Type: Replicate
| Volume ID: 436edaa3-ac8b-421f-aa35-68b5bd7064b6
| Status: Started
| Number of Bricks: 1 x 2 = 2
| Transport-type: tcp
| Bricks:
| Brick1: 10.2.75.152:/mnt/gluster01/brick
| Brick2: 10.2.75.155:/mnt/gluster01/brick
| Options Reconfigured:
| network.remote-dio: enable
| cluster.eager-lock: enable
| storage.owner-gid: 36
| storage.owner-uid: 36
|
|
| but this didn't affect performance in any big way
| should the hosts be restarted?
|
The GlusterFS storage domain configuration GUI has an "Optimize for Virt
Store" option, which has to be enabled when configuring a volume for virt
store.
Ref# http://www.ovirt.org/Features/GlusterFS_Storage_Domain
If the configuration is manual, you need to set the volume's group to 'virt'
as shown below:
# gluster volume set VOLNAME group virt
This will enable the following options on the gluster volume:
quick-read=off
read-ahead=off
io-cache=off
stat-prefetch=off
eager-lock=enable
remote-dio=on
Can you please make sure the group has been set properly?
Also, invoke "dd" with the oflag=direct option and check whether it helps.
--Humble
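To double-check the result, the relevant tunables should be visible under "Options Reconfigured" in `gluster volume info`. A quick sketch, checking the two option names quoted earlier in this thread (the helper function is mine):

```shell
#!/bin/sh
# Given the text of `gluster volume info VOLNAME`, report whether the two
# key virt-profile tunables from this thread are enabled.
virt_profile_applied() {
    info=$1
    if printf '%s\n' "$info" | grep -q 'network.remote-dio: enable' &&
       printf '%s\n' "$info" | grep -q 'cluster.eager-lock: enable'; then
        echo "applied"
    else
        echo "missing"
    fi
}

# On a live host:
#   gluster volume set gluster01 group virt
#   virt_profile_applied "$(gluster volume info gluster01)"
# For the I/O comparison, bypass the page cache so the number is comparable
# to what a VM with cache=none sees:
#   dd if=/dev/zero of=/rhev/.../testfile bs=1M count=1000 oflag=direct
```

Note that the raw dd figure quoted in the thread was measured without oflag=direct, so it benefits from the host page cache in a way the VM's cache=none disk does not.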
10 years, 6 months
cannot connect to spice console with portal<oVirt 3.4.1>
by Zhong Qiang
Since I upgraded from oVirt 3.4.0, I can't connect to the SPICE console from
the user portal/admin portal. I received this error:
Unable to connect to the graphic server
c:\DOCUME~1\it\LOCALS~1\Temp\console.vv
Any help is greatly appreciated. Thank you
10 years, 6 months
Re: [ovirt-users] How to install spice-xpi-2.8 on ubuntu12.04?
by Zhong Qiang
Thanks for your help, it worked.
2014-05-08 15:16 GMT+08:00 René Koch <rkoch(a)linuxland.at>:
> If spice-xpi 2.7 is sufficient, you can use these packages:
> https://launchpad.net/~jasonbrooks/+archive/ppa/+packages
>
>
> Regards,
> René
>
>
> On 05/08/2014 07:22 AM, Zhong Qiang wrote:
>
>> *When I compile spice-xpi-2.8 on Ubuntu 12.04 with Firefox 29, I receive
>> this error:*
>>
>> ################################################################################################
>> root@user:~/src/spice-xpi-2.8# make
>> make all-recursive
>> make[1]: Entering directory `/root/src/spice-xpi-2.8'
>> Making all in SpiceXPI
>> make[2]: Entering directory `/root/src/spice-xpi-2.8/SpiceXPI'
>> Making all in src
>> make[3]: Entering directory `/root/src/spice-xpi-2.8/SpiceXPI/src'
>> Making all in plugin
>> make[4]: Entering directory `/root/src/spice-xpi-2.8/SpiceXPI/src/plugin'
>> GEN nsISpicec.xpt
>> make all-am
>> make[5]: Entering directory `/root/src/spice-xpi-2.8/SpiceXPI/src/plugin'
>> CXX libnsISpicec_la-controller.lo
>> controller.cpp: In destructor 'SpiceController::~SpiceController()':
>> controller.cpp:73:5: warning: format not a string literal and no format
>> arguments [-Wformat-security]
>> CXX libnsISpicec_la-np_entry.lo
>> CXX libnsISpicec_la-npn_gate.lo
>> CXX libnsISpicec_la-npp_gate.lo
>> CXX libnsISpicec_la-nsScriptablePeer.lo
>> In file included from nsScriptablePeer.cpp:54:0:
>> /root/src/xulrunner-sdk/include/nsError.h:186:14: error: expected
>> constructor, destructor, or type conversion before '(' token
>> /root/src/xulrunner-sdk/include/nsError.h:188:14: error: expected
>> constructor, destructor, or type conversion before '(' token
>> make[5]: *** [libnsISpicec_la-nsScriptablePeer.lo] Error 1
>> make[5]: Leaving directory `/root/src/spice-xpi-2.8/SpiceXPI/src/plugin'
>> make[4]: *** [all] Error 2
>> make[4]: Leaving directory `/root/src/spice-xpi-2.8/SpiceXPI/src/plugin'
>> make[3]: *** [all-recursive] Error 1
>> make[3]: Leaving directory `/root/src/spice-xpi-2.8/SpiceXPI/src'
>> make[2]: *** [all-recursive] Error 1
>> make[2]: Leaving directory `/root/src/spice-xpi-2.8/SpiceXPI'
>> make[1]: *** [all-recursive] Error 1
>> make[1]: Leaving directory `/root/src/spice-xpi-2.8'
>> make: *** [all] Error 2
>> #############################################################################################
>> *Any help is greatly appreciated. Thank you*
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
10 years, 6 months
Re: [ovirt-users] gluster performance oVirt 3.4
by Vadims Korsaks
The underlying FS is XFS.
GlusterFS: glusterfs-3.5.0-2.el6.
I'm using CentOS; if this is the problem, could RHEL packages be used? Why
are the CentOS packages compiled without native glusterfs support?
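One way to check the point about QEMU builds without native gluster support: `qemu-img --help` ends with a "Supported formats:" listing, and gluster only appears there when libgfapi support was compiled in. A rough probe (the function name is mine; without native support oVirt still works through the FUSE mount, just slower):

```shell
#!/bin/sh
# Given `qemu-img --help` output, report whether this QEMU build lists
# gluster among its supported formats (i.e. was built with libgfapi).
qemu_has_gluster() {
    printf '%s\n' "$1" | grep -q 'gluster' && echo yes || echo no
}

# On a live host: qemu_has_gluster "$(qemu-img --help 2>&1)"
```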
Quoting Paul Robert Marino <prmarino1(a)gmail.com>:
> What's the underlying filesystem for gluster, is it XFS?
> What version of gluster are you using?
> What distro are you using, and if it's not RHEL or Fedora, are you using a version of QEMU with gluster support compiled in? Keep in mind the versions with CentOS and Scientific Linux do not include Gluster native support compiled in.
>
>
>
> -- Sent from my HP Pre3
>
> On May 11, 2014 5:40, Vadims Korsaks <truckk(a)inbox.lv> wrote:
>
> Quoting Vijay Bellur <vbellur(a)redhat.com>:
> > On 05/11/2014 02:04 AM, Vadims Korsaks wrote:
> > > HI!
> > >
> > > Created 2 node setup with oVirt 3.4 and CentOS 6.5, for storage created 2 node replicated gluster (3.5) fs on same hosts with oVirt.
> > > mount looks like this:
> > > 127.0.0.1:/gluster01 on /rhev/data-center/mnt/glusterSD/127.0.0.1:_gluster01 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
> > >
> > > when i making gluster test with dd, something like
> > > dd if=/dev/zero bs=1M count=20000 of=/rhev/data-center/mnt/glusterSD/127.0.0.1\:_gluster01/kaka
> > > i'm gettting speed ~ 110 MB/s, so this is 1Gbps speed of ethernet adapter
> > >
> > > but with in VM created in oVirt speed is lower than 20 MB/s
> > >
> > > why there is so huge difference?
> > > how can improve VMs disks speed?
> > >
> >
> > What are your gluster volume settings? Have you applied the following performance tunables in gluster's virt profile:
> >
> > eager-lock=enable
> > remote-dio=enable
> >
> > Regards,
> > Vijay
> >
> settings were:
> [root@centos155 ~]# gluster volume info gluster01
>
> Volume Name: gluster01
> Type: Replicate
> Volume ID: 436edaa3-ac8b-421f-aa35-68b5bd7064b6
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.2.75.152:/mnt/gluster01/brick
> Brick2: 10.2.75.155:/mnt/gluster01/brick
> Options Reconfigured:
> storage.owner-gid: 36
> storage.owner-uid: 36
>
>
> added your settings; now it looks like this:
>
> [root@centos155 ~]# gluster volume info gluster01
>
> Volume Name: gluster01
> Type: Replicate
> Volume ID: 436edaa3-ac8b-421f-aa35-68b5bd7064b6
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.2.75.152:/mnt/gluster01/brick
> Brick2: 10.2.75.155:/mnt/gluster01/brick
> Options Reconfigured:
> network.remote-dio: enable
> cluster.eager-lock: enable
> storage.owner-gid: 36
> storage.owner-uid: 36
>
>
> but this didn't affect performance in any big way
> should the hosts be restarted?
>
10 years, 6 months
[Users] Open source events in Austria
by René Koch
Hi,
In the next two months, two open source events will take place in Austria,
and oVirt will be present in the form of workshops, presentations, and stands.
The first event is the Grazer Linuxtage, from April 4th till April 5th.
These are the highlights from an oVirt perspective:
- Workshop: Installation and first steps with oVirt
- Talk: New features of oVirt (3.3/3.4)
- Stand: oVirt project stand with live demo system
For more information visit http://linuxtage.at/ (German website only)
The second event is the Linuxwochen Vienna, from May 8th till May 10th.
These are the highlights from an oVirt perspective:
- Workshop: Installation and first steps with oVirt
- Talk: New features of oVirt (3.3/3.4)
- Stand: oVirt project stand with live demo system
For more information visit http://linuxwochen.at/ (German website only)
Last but not least I want to thank Brian Proffitt and Dave Neary for
their help.
--
Best Regards
René Koch
Senior Solution Architect
============================================
LIS-Linuxland GmbH
Brünner Straße 163, A-1210 Vienna
Phone: +43 1 236 91 60
Mobile: +43 660 / 512 21 31
E-Mail: rkoch(a)linuxland.at
============================================
10 years, 6 months