[Users] Addition of nodes to oVirt-engine

Itamar Heim iheim at redhat.com
Thu Jun 28 19:06:48 UTC 2012


On 06/28/2012 07:55 AM, jose garcia wrote:
> On 06/28/2012 11:33 AM, Itamar Heim wrote:
>> On 06/28/2012 04:55 AM, jose garcia wrote:
>>> On 06/28/2012 02:43 AM, Itamar Heim wrote:
>>>> On 06/27/2012 09:15 AM, jose garcia wrote:
>>>>> Good evening to all,
>>>>>
>>>>> I have added two Fedora 17 nodes with vdsm 4.10 installed to an oVirt
>>>>> 3.1 engine. I was having problems with SSL, so I have disabled it.
>>>>> When I added the nodes there was no installation attempt, as there was
>>>>> with oVirt 3.0; the nodes just get activated, provided that the
>>>>> ovirtmgmt bridge is present.
>>>>>
>>>>> I have been told that there is a configuration in the database that
>>>>> makes this happen. I just recreated the database via the script
>>>>> /dbscripts/create_db_devel.sh and ran engine-setup, after removing all
>>>>> the oVirt 3.0 and JBoss packages and installing the basic oVirt 3.1
>>>>> packages.
>>>>>
>>>>> My question is: what would be the 'standard' procedure to get oVirt
>>>>> 3.1 running?
>>>>
>>>> The standard procedure would be to install the rpm, run engine-setup,
>>>> and add the hosts from the UI so they are configured with certificates
>>>> and SSL.
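>>>>
>>>> Roughly, on the engine side (package name assumed for the Fedora 17
>>>> builds):
>>>>
>>>>   yum install ovirt-engine
>>>>   engine-setup
>>>>
>>>> Then add the host from the Hosts tab in the webadmin UI; the engine
>>>> should install vdsm on it and push the certificates over ssh, with no
>>>> manual edits needed on the host.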
>>>
>>> Good morning,
>>>
>>> I wanted to know whether there will be an installation process from
>>> scratch, with a reboot of the hosts, as there was in 3.0.
>>
>> yes.
>>
>>>
>>> I have an issue that may also be related to authentication. I am unable
>>> to start my newly created VM. The engine seems to connect to libvirt in
>>> read-only mode, and booting the virtual machine fails. There is
>>> something odd with virsh (Fedora 17 on the hosts):
>>>
>>> When trying to connect without specifying the URI (qemu:///system), it
>>> gives a segmentation fault. If the URI is given, it asks for a user and
>>> password.
>>>
>>> I tried disabling SASL in libvirtd.conf (roughly as shown below), and
>>> now I can connect with virsh to the default URI without providing a
>>> password, but booting the VM keeps failing.
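>>>
>>> For reference, the change I mean is along these lines in
>>> /etc/libvirt/libvirtd.conf (I am quoting the option value from memory):
>>>
>>>   auth_unix_rw = "none"
>>>
>>> followed by a restart of libvirtd.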
>>>
>>> Is it required to use SASL, or to add a SASL user, on the engine, on
>>> the hosts, or on both?
>>
>> Everything should work out of the box without tweaking config files.
>> However, I'm not sure we will work correctly if they were tweaked
>> manually.
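>>
>> If you want to check what the host bootstrap configured, something like
>> this should show it (file paths from memory):
>>
>>   grep '^auth_unix_rw' /etc/libvirt/libvirtd.conf
>>   sasldblistusers2 -f /etc/libvirt/passwd.db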
>>
>> Please try a clean install of your node, then add it from the UI
>> (assuming you did a clean rpm install of the engine and didn't tweak
>> any of its configuration parameters affecting the installation process).
>> Thanks.
>>
>
> I apologize for mixing things up, but had the clean install of the node
> worked, I would never have tweaked anything. SSL was not working for me,
> so I had to disable it. If the first connection to the node fails with an
> SSL error, it is not likely that the engine can configure certificates or
> anything else. Meanwhile, I have found the cause of the VM not booting,
> which is a bug in the sanlock libvirt feature, reported in
> https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=828633. To
> attach an ISO there is a workaround: updating libvirt to the version in
> updates-testing and, maybe, tweaking /etc/libvirt/qemu-sanlock.conf (see
> below). This does not sound like an out-of-the-box kind of thing to me.
> Anyway, the tweak worked, but the installed machine won't boot for the
> first time, and I am at the moment wondering why the log is showing:
> "managed non plugable device was removed unexpetedly from libvirt" (sic).
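>
> For reference, the tweak was roughly this (I am quoting the
> qemu-sanlock.conf option name from memory):
>
>   yum --enablerepo=updates-testing update libvirt
>   # then, in /etc/libvirt/qemu-sanlock.conf:
>   require_lease_for_disks = 0
>   # and restart libvirtd:
>   systemctl restart libvirtd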
>
> I can imagine how difficult the development process is. What I know is
> that testing software is not easy either, and you can't just sit and
> wait for a solution to come out of some box. All I am trying to do is
> test the basic features of the new version, nothing more. I am not
> trying to tune it in any way.

I didn't mean to imply you did something wrong, and I appreciate that you
are trying to make it work.
However, it will be easier to identify your issue if you can run the
installation on a cleanly installed Fedora 17 host.

thanks,
    Itamar


>
> Regards
>
>>>
>>> vdsm.log reads:
>>>
>>> Thread-72::DEBUG::2012-06-27
>>> 18:57:25,511::task::978::TaskManager.Task::(_decref)
>>> Task=`e92b65bc-b9fe-492f-b77c-397321dbb105`::ref 0 aborting False
>>> Thread-70::DEBUG::2012-06-27
>>> 18:57:25,531::vm::580::vm.Vm::(_startUnderlyingVm)
>>> vmId=`32339151-23ed-4cc3-ada4-0f540ab81a97`::_ongoingCreations released
>>> Thread-70::ERROR::2012-06-27
>>> 18:57:25,532::vm::604::vm.Vm::(_startUnderlyingVm)
>>> vmId=`32339151-23ed-4cc3-ada4-0f540ab81a97`::The vm start process failed
>>> Traceback (most recent call last):
>>> File "/usr/share/vdsm/vm.py", line 570, in _startUnderlyingVm
>>> self._run()
>>> File "/usr/share/vdsm/libvirtvm.py", line 1364, in _run
>>> self._connection.createXML(domxml, flags),
>>> File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line
>>> 82, in wrapper
>>> ret = f(*args, **kwargs)
>>> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2420, in
>>> createXML
>>> if ret is None:raise libvirtError('virDomainCreateXML() failed',
>>> conn=self)
>>> *libvirtError: internal error unsupported configuration: Readonly leases
>>> are not supported*
>>> Thread-70::DEBUG::2012-06-27
>>> 18:57:25,542::vm::920::vm.Vm::(setDownStatus)
>>> vmId=`32339151-23ed-4cc3-ada4-0f540ab81a97`::Changed state to Down:
>>> internal error unsupported configuration: Readonly leases are not
>>> supported
>>> Thread-75::DEBUG::2012-06-27
>>> 18:57:27,588::BindingXMLRPC::859::vds::(wrapper) client
>>> [10.10.30.101]::call vmGetStats with
>>> ('32339151-23ed-4cc3-ada4-0f540ab81a97',) {}
>>> Thread-75::DEBUG::2012-06-27
>>> 18:57:27,588::BindingXMLRPC::865::vds::(wrapper) return vmGetStats with
>>> {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status':
>>> 'Down', 'hash': '0', 'exitMessage': 'internal error unsupported
>>> configuration: Readonly leases are not supported', 'vmId':
>>> '32339151-23ed-4cc3-ada4-0f540ab81a97', 'timeOffset': '0',
>>> 'exitCode': 1}]}
>>> Thread-76::DEBUG::2012-06-27
>>> 18:57:27,594::BindingXMLRPC::859::vds::(wrapper) client
>>> [10.10.30.101]::call vmDestroy with
>>> ('32339151-23ed-4cc3-ada4-0f540ab81a97',) {}
>>> Thread-76::INFO::2012-06-27 18:57:27,595::API::317::vds::(destroy)
>>> vmContainerLock acquired by vm 32339151-23ed-4cc3-ada4-0f540ab81a97
>>> Thread-76::DEBUG::2012-06-27
>>> 18:57:27,595::libvirtvm::2085::vm.Vm::(destroy)
>>> vmId=`32339151-23ed-4cc3-ada4-0f540ab81a97`::destroy Called
>>> Thread-76::INFO::2012-06-27
>>> 18:57:27,595::libvirtvm::2040::vm.Vm::(releaseVm)
>>> vmId=`32339151-23ed-4cc3-ada4-0f540ab81a97`::Release VM resources
>>> Thread-76::WARNING::2012-06-27
>>> 18:57:27,596::vm::328::vm.Vm::(_set_lastStatus)
>>> vmId=`32339151-23ed-4cc3-ada4-0f540ab81a97`::trying to set state to
>>> Powering down when already Down
>>> Thread-76::DEBUG::2012-06-27
>>> 18:57:27,596::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo
>>> -n /usr/sbin/service ksmtuned retune'
>>>
>>> And there is a log in /var/log/libvirt/qemu for the VM that says:
>>>
>>> starting up
>>> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
>>> QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -S -M pc-0.14 -cpu
>>> kvm64,+lahf_lm,+ssse3,-cx16 -enable-kvm -m 1024 -smp
>>> 1,sockets=1,cores=1,threads=1 -name fedora-test -uuid
>>> 32339151-23ed-4cc3-ada4-0f540ab81a97 -smbios type=1,manufacturer=Red
>>> Hat,product=RHEV
>>> Hypervisor,version=17-1,serial=03000200-0400-0500-0006-000700080009_00:30:18:a8:a8:42,uuid=32339151-23ed-4cc3-ada4-0f540ab81a97
>>>
>>> -nodefconfig -nodefaults -chardev
>>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/fedora-test.monitor,server,nowait
>>>
>>> -mon chardev=charmonitor,id=monitor,mode=control -rtc
>>> base=2012-06-27T17:57:28,driftfix=slew -no-shutdown -device
>>> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
>>> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
>>> file=/rhev/data-center/ee37f596-bf78-11e1-94ba-7309589a8ec2/891fa5d3-ceff-4711-8538-9bccd018969c/images/11111111-1111-1111-1111-111111111111/Fedora-16-x86_64-Live-KDE.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=
>>>
>>> -device
>>> ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
>>> -drive
>>> file=/rhev/data-center/ee37f596-bf78-11e1-94ba-7309589a8ec2/cdd28fda-e2fa-4ce4-9805-566ad7d69df2/images/81d0b868-9e5c-4ed8-9ac0-db6982da5de1/80a356f1-0932-4d54-be81-e112720c60b0,if=none,id=drive-virtio-disk0,format=raw,serial=81d0b868-9e5c-4ed8-9ac0-db6982da5de1,cache=none,werror=stop,rerror=stop,aio=threads
>>>
>>> -device
>>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
>>>
>>> -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26 -device
>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:51,bus=pci.0,addr=0x3,bootindex=3
>>>
>>> -chardev
>>> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/fedora-test.com.redhat.rhevm.vdsm,server,nowait
>>>
>>> -device
>>> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>>>
>>> -chardev spicevmc,id=charchannel1,name=vdagent -device
>>> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0
>>>
>>> -chardev pty,id=charconsole0 -device
>>> virtconsole,chardev=charconsole0,id=console0 -spice port=5900,addr=0 -k
>>> en-us -vga qxl -global qxl-vga.vram_size=67108864 -device
>>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
>>> *libvir: Locking error : unsupported configuration: Readonly leases are
>>> not supported*
>>> 2012-06-27 17:57:28.241+0000: shutting down
>>>
>>>
>>>
>>
>
>



