[Users] hosted engine setup (NFS)
by René Koch
Hi,
I installed the hosted engine and faced an issue with NFS during installation.
First of all, I'm using GlusterFS on my storage, and the oVirt engine doesn't
support GlusterFS yet, only NFS.
But for NFS I can't use mountproto=tcp, as hosted-engine --setup doesn't
ask for NFS options.
So I manually edited the following file:
/usr/share/ovirt-hosted-engine-setup/plugins/ovirt-hosted-engine-setup/storage/storage.py
and changed opts.append('vers=3') to:

    if domain_type == 'nfs3':
        fstype = 'nfs'
        opts.append('vers=3,mountproto=tcp')
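A quick way to sanity-check these options outside of hosted-engine --setup is to mount the export by hand first. A minimal sketch of that check (the export path and mount point are placeholders, run as root):

    # Verify the Gluster NFS export mounts with vers=3 over TCP.
    import subprocess

    ret = subprocess.call(['mount', '-t', 'nfs',
                           '-o', 'vers=3,mountproto=tcp',
                           'gluster1.example.com:/engine',   # placeholder export
                           '/mnt/engine-test'])              # existing empty directory
    print('mount exit status: %d' % ret)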
My question is now: is it possible to ask for NFS options during setup, or do
you think this could lead to problems? NFS via TCP worked fine for me for one
week until I rebooted the host today (I did reboot tests in previous weeks,
too, which went fine) and now I can't start the hosted engine anymore (see the
other mail thread), but I think that other issue is not related to the NFS
mountproto.
--
Best Regards
René Koch
Senior Solution Architect
============================================
LIS-Linuxland GmbH
Brünner Straße 163, A-1210 Vienna
Phone: +43 1 236 91 60
Mobile: +43 660 / 512 21 31
E-Mail: rkoch(a)linuxland.at
============================================
10 years, 7 months
Error creating Disks
by Maurice James
oVirt Engine Version: 3.4.1-0.0.master.20140412010845.git43746c6.el6
While attempting to create a disk on an NFS storage domain, the operation fails with the following errors in engine.log:
2014-04-14 09:58:12,127 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-72) Failed in HSMGetAllTasksStatusesVDS method
2014-04-14 09:58:12,139 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-72) BaseAsyncTask::LogEndTaskFailure: Task ee6ce682-bd76-467a-82d2-d227229cb9de (Parent Command AddDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with failure:
2014-04-14 09:58:12,159 ERROR [org.ovirt.engine.core.bll.AddDiskCommand] (org.ovirt.thread.pool-6-thread-9) [483e53d6] Ending command with failure: org.ovirt.engine.core.bll.AddDiskCommand
2014-04-14 09:58:12,212 ERROR [org.ovirt.engine.core.bll.AddImageFromScratchCommand] (org.ovirt.thread.pool-6-thread-9) [ab1e0be] Ending command with failure: org.ovirt.engine.core.bll.AddImageFromScratchCommand
10 years, 7 months
Disable auth basic in API
by Jose Manuel Marquez Alhambra
Hi,
I'm testing a connection broker that uses oVirt's API. At the moment, the connection broker doesn't work because it doesn't send basic authentication to oVirt's API. I contacted the developers and they're investigating the error. While they work on a fix, I would like to continue testing the broker. Is there any way to disable basic auth in oVirt's API?
I'm using it in a testing environment (oVirt 3.4 on CentOS 6), so I'm not worried about security risks.
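For reference, the API itself can still be exercised with basic auth from a small script while the broker is being fixed. A minimal sketch, assuming a default oVirt 3.4 engine at /api and placeholder credentials:

    # Exercise the REST API with HTTP Basic auth (test environment only).
    import requests

    resp = requests.get('https://engine.example.com/api',      # placeholder engine FQDN
                        auth=('admin@internal', 'password'),    # Basic auth credentials
                        verify=False)                           # skip CA check in the lab
    print(resp.status_code)
    print(resp.text[:200])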
Thank you.
Regards,
Jose
10 years, 7 months
minimum DNS requirements for /etc/hosts
by Jeremiah Jahn
So like the nut job I am, I host my DNS server on a virtual machine that is
now hosted by oVirt, and I also have a few gateway machines in there for
routing traffic between internal networks. With virt-manager, DNS rarely
seemed to be an issue: I'd connect to a host IP and bring up the VM, no
problem. oVirt, on the other hand, takes forever if DNS is down. So I've
passed around an /etc/hosts file to resolve any IP address that oVirt might
be looking for while DNS is out. I'd like to verify, at an abstract level,
what needs to be in said file to keep everything from timing out or just
taking forever:
name and FQDN of each host
name and FQDN of engine host
name and FQDN of each storage domain.
Is this enough, or are there other places/things the oVirt node and
hosts might be trying to resolve?
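For what it's worth, here is a minimal sketch (hostnames are placeholders) of how one could confirm that each of those names resolves forward and reverse from the local hosts file while DNS is down:

    # Check forward and reverse resolution for every oVirt-related name.
    import socket

    names = ['engine.example.com', 'host1.example.com', 'storage1.example.com']

    for name in names:
        try:
            ip = socket.gethostbyname(name)        # forward lookup
            rev = socket.gethostbyaddr(ip)[0]      # reverse lookup (sshd waits on this)
            print('%s -> %s -> %s' % (name, ip, rev))
        except socket.error as err:
            print('%s: lookup failed: %s' % (name, err))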
thanks,
-jj-
10 years, 7 months
Re: [ovirt-users] minimum DNS requirements for /etc/hosts
by Jeremiah Jahn
I only use names for storage, but didn't have entries for the engine yet, so I
think ssh was slow because it couldn't do reverse DNS lookups against the
engine, where my consoles are coming from. Thanks for the input; I wanted to
verify before testing since this is a production environment. It seems like
even the XMLRPC code must do a reverse DNS lookup against the engine, not just
ssh. So it seems the engine host needs said hosts file as well.
On Mon, Apr 14, 2014 at 8:23 AM, René Koch <rkoch(a)linuxland.at> wrote:
> On 04/14/2014 03:08 PM, Jeremiah Jahn wrote:
>>
>> So like the nut job I am, I host my dns server on a virtual machine
>> that is now hosted by ovirt, I also have a few gateway machine in
>> there as well for routing traffic between internal networks. With
>> virt-manager, DNS rarely seemed to be an issue. I'd connect to a host
>> IP, and bring up the vm, no problem. Ovirt on the other hand takes
>> forever if dns is down. So i've passed around a /etc/hosts file to
>> resolve any ip address that ovirt might be looking for while dns is
>> out. I'd like to verify at an abstract level, what needs to be in
>> said file, to keep everything from timing out, or just taking forever.
>>
>> name and FQDN of each host
>> name and FQDN of engine host
>> name and FQDN of each storage domain.
>>
>> Is this enough, or are there other places/things the ovirt node and
>> hosts might be trying to resolve?
>
>
> Afaik this should be enough.
> Just remember that e.g. an ssh login will still be slow if the DNS server isn't
> reachable, as you have to wait for the lookup to time out.
>
> My oVirt setup doesn't have any DNS entries - just the hosts and engine in
> /etc/hosts. But I don't use DNS names at all, only IP addresses...
>
>
> Regards,
> René
>
>>
>> thanks,
>> -jj-
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
10 years, 7 months
Re: [ovirt-users] [Users] hosted engine setup (NFS)
by Sandro Bonazzola
On 14/04/2014 14:23, René Koch wrote:
>
> On 04/14/2014 12:39 PM, Sandro Bonazzola wrote:
>> On 03/03/2014 12:39, René Koch wrote:
>>> Hi,
>>>
>>> I installed hosted engine and faced an issue with NFS during installation.
>>>
>>> First of all, I'm using GlusterFS on my storage and ovirt engine doesn't support GlusterFS yet, only NFS.
>>>
>>> But for NFS I can't use mountproto=tcp as hosted-engine --setup doesn't ask for NFS options.
>>>
>>> So I manually edited the following file:
>>> /usr/share/ovirt-hosted-engine-setup/plugins/ovirt-hosted-engine-setup/storage/storage.py
>>>
>>> and changed opts.append('vers=3'):
>>>
>>> if domain_type == 'nfs3':
>>> fstype = 'nfs'
>>> opts.append('vers=3,mountproto=tcp')
>>>
>>> My question is now: is it possible to ask for NFS options during setup or do you think this can lead into problems? NFS via TCP worked fine for me for
>>> one week until I rebooted the host today (did reboot tests last weeks, too which was fine) and can't start hosted engine anymore (see other mail
>>> thread), but I think the other issue is not NFS mountproto related.
>>>
>>>
>>
>> Well, in hosted-engine setup we don't ask for additional options because we don't store them.
>> We just ask for nfs3 or nfs4 because we pass that value as protocol version to VDSM connectStorageServer verb.
>> The above change affects only the temporary mount done for validating the domain.
>
> Thanks a lot for the information.
>
> Btw, I can mount my GlusterFS 3.4.2 NFS share now without specifying -o
> mountproto=tcp. Is UDP now possible, or is the protocol determined
> automatically now? I didn't test whether hosted-engine-setup is able to mount
> GlusterFS NFS shares without hacks now, too - I only discovered this new
> behavior on my hosts.
I don't know.
>
> Slightly off-topic question:
> The storage options are stored in /etc/ovirt-hosted-engine/hosted-engine.conf, right? If I want to change the IP address of my storage, do I simply put the
> engine into global maintenance mode, change the IP in hosted-engine.conf and re-enable the hosted-engine VM again? Or are there more steps required?
Yes, the storage info is stored in /etc/ovirt-hosted-engine/hosted-engine.conf.
By the way, if you want to propagate the info to new hypervisors when you run the deploy on them, you should also update the answer file in
/etc/ovirt-hosted-engine/answers.conf so it will be configured automatically on newly installed hypervisors.
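(Not an official procedure, just a crude sketch of the edit itself: swap the old storage address for the new one in both files while in global maintenance. The addresses are placeholders, and the blunt string replacement assumes the old address only appears where intended.)

    # Replace the storage address in both hosted-engine config files.
    old_addr = '192.168.1.10'   # placeholder: current storage address
    new_addr = '192.168.1.20'   # placeholder: new storage address

    for path in ('/etc/ovirt-hosted-engine/hosted-engine.conf',
                 '/etc/ovirt-hosted-engine/answers.conf'):
        with open(path) as f:
            text = f.read()
        with open(path, 'w') as f:
            f.write(text.replace(old_addr, new_addr))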
>
>
> Regards,
> René
>
>>
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
10 years, 7 months
Re: [Users] 2 virtual monitors for Fedora guest
by Gianluca Cecchi
Some preliminary tests on my side.
oVirt 3.4 on Fedora 19 AIO.
Datacenter and cluster configured at the 3.4 compatibility level.
Some relevant packages on it:
libvirt-1.1.3.2-1.fc19.x86_64
qemu-kvm-1.6.1-2.fc19.x86_64
vdsm-4.14.6-0.fc19.x86_64
spice-server-0.12.4-3.fc19.x86_64
The guest is an updated Fedora 19 system configured from the Blank template
with OS=Linux, and has:
xorg-x11-drv-qxl-0.1.1-3.fc19.x86_64
spice-vdagent-0.14.0-5.fc19.x86_64
The client is an updated Fedora 20 box with virt-viewer-0.6.0-1.fc20.x86_64.
If I select the "Single PCI" checkbox in the guest's console options and
connect from the Fedora 20 client, I don't see any option in remote-viewer to
open a second display, and no new display is detected in the guest.
And lspci on the guest indeed shows only one video controller.
BTW: what is this option for, apart from its literal meaning?
If I deselect the "Single PCI" checkbox I get the "Display 2" option in
remote-viewer, but it is greyed out.
No new monitor shows up in "detect displays" in the guest.
In this last situation I have the following qemu-kvm command line on the host:
qemu 16664 1 48 21:04 ? 00:02:42
/usr/bin/qemu-system-x86_64 -machine accel=kvm -name f19 -S -machine
pc-1.0,accel=kvm,usb=off -cpu Opteron_G3 -m 2048 -realtime mlock=off
-smp 1,maxcpus=160,sockets=160,cores=1,threads=1 -uuid
55d8b95b-f420-4208-a2fb-5f370d05f5d8 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=19-8,serial=E0E1001E-8C00-002A-6F9A-90E6BAC9F1E1,uuid=55d8b95b-f420-4208-a2fb-5f370d05f5d8
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/f19.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-04-08T19:04:45,driftfix=slew -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -device
usb-ccid,id=ccid0 -drive
if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/mnt/_data_DATA2/b24b94c7-5935-4940-9152-36ecd370ba7c/images/5e99a818-9fd1-47bb-99dc-50bd25374c2f/a2baa1e5-569f-4081-97a7-10ec2a20daab,if=none,id=drive-virtio-disk0,format=raw,serial=5e99a818-9fd1-47bb-99dc-50bd25374c2f,cache=none,werror=stop,rerror=stop,aio=threads
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:a8:01:55,bus=pci.0,addr=0x3
-chardev spicevmc,id=charsmartcard0,name=smartcard -device
ccid-card-passthru,chardev=charsmartcard0,id=smartcard0,bus=ccid0.0
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/55d8b95b-f420-4208-a2fb-5f370d05f5d8.com.redhat.rhevm.vdsm,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/55d8b95b-f420-4208-a2fb-5f370d05f5d8.org.qemu.guest_agent.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice tls-port=5901,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -device
qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x2
-device qxl,id=video1,ram_size=67108864,vram_size=33554432,bus=pci.0,addr=0x8
-device AC97,id=sound0,bus=pci.0,addr=0x4 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
On guest:
[root@localhost ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB
[Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Red Hat, Inc. QXL paravirtual
graphic card (rev 03)
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
00:04.0 Multimedia audio controller: Intel Corporation 82801AA AC'97
Audio Controller (rev 01)
00:05.0 Communication controller: Red Hat, Inc Virtio console
00:06.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:07.0 RAM memory: Red Hat, Inc Virtio memory balloon
00:08.0 Display controller: Red Hat, Inc. QXL paravirtual graphic card (rev 03)
See the Xorg.0.log generated on the guest here:
https://drive.google.com/file/d/0BwoPbcrMv8mvTm9VbE53ZmVKcVk/edit?usp=sha...
In particular I see many lines like these in it:
[ 64.234] (II) qxl(0): qxl_xf86crtc_resize: Placeholder resize 1024x768
[ 87.280] qxl_surface_create: Bad bpp: 1 (1)
[ 87.280] qxl_surface_create: Bad bpp: 1 (1)
[ 87.949] qxl_surface_create: Bad bpp: 1 (1)
[ 110.469] qxl_surface_create: Bad bpp: 1 (1)
[ 110.478] qxl_surface_create: Bad bpp: 1 (1)
[ 146.096] - 0th attempt
[ 146.096] - OOM at 962 511 24 (= 1474746 bytes)
[ 146.096] Cache contents: null null null null null null null null
null null null null null null null null null null null null null null
null null null null null null null null null null null null null null
null null null null null null null null null null null null null null
1008 997 1007 1005 1018 1003 1009 1011 1001 1012 1019 1016 1006 1013
total: 14
[ 146.107] - 1th attempt
[ 146.107] - OOM at 962 511 24 (= 1474746 bytes)
Gianluca
10 years, 7 months
List disk images residing on specific lun
by Amedeo Salvati
Hello,
does anyone know if it's possible, using ovirt-shell or the API, to list
every disk image residing on a specific FC LUN?
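A minimal sketch of one possible approach via the REST API, assuming you first identify the storage domain built on that FC LUN; the endpoint and XML element names are recalled from memory and should be checked against your 3.x engine:

    # List the disks of the storage domain that sits on the FC LUN.
    import requests
    import xml.etree.ElementTree as ET

    engine = 'https://engine.example.com'       # placeholder engine FQDN
    sd_id = '<storage-domain-uuid>'              # placeholder: domain backed by the LUN

    resp = requests.get('%s/api/storagedomains/%s/disks' % (engine, sd_id),
                        auth=('admin@internal', 'password'), verify=False)
    root = ET.fromstring(resp.content)
    for disk in root.findall('disk'):
        print(disk.get('id'), disk.findtext('alias') or disk.findtext('name'))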
best regards
Amedeo Salvati
10 years, 7 months
dependency error when upgrade ovirt 3.3 -> 3.4 (CentOS)
by H. Haven Liu
Hello,
I'm trying to upgrade an oVirt 3.3 installation to 3.4, but I'm getting a dependency error. I tried to update the ovirt-engine-setup package to v3.4.0-1.el6, which requires ovirt-engine-setup-plugin-ovirt-engine = 3.4.0-1.el6, which in turn requires ovirt-engine >= 3.3.2. However, it appears that ovirt-engine 3.3 is in the versionlock list and cannot be updated. Should I manually disable the versionlock plugin, or am I missing something?
Thanks,
Haven
10 years, 7 months