[lago-devel] Lago - Installation help needed
Nicolas Ecarnot
nicolas at ecarnot.net
Fri Sep 23 07:06:05 UTC 2016
On 21/09/2016 at 17:43, Yaniv Kaul wrote:
>
>
> On Wed, Sep 21, 2016 at 5:58 PM, Nicolas Ecarnot
> <nicolas at ecarnot.net> wrote:
>
> On 21/09/2016 at 16:28, Yaniv Kaul wrote:
>>
>>
>> On Wed, Sep 21, 2016 at 5:19 PM, Nicolas Ecarnot
>> <nicolas at ecarnot.net> wrote:
>>
>> On 21/09/2016 at 16:11, Yaniv Kaul wrote:
>>>
>>>
>>> On Wed, Sep 21, 2016 at 5:07 PM, Nicolas Ecarnot
>>> <nicolas at ecarnot.net> wrote:
>>>
>>> On 21/09/2016 at 15:49, Yaniv Kaul wrote:
>>>> Adding the Lago devel mailing list.
>>>>
>>>> The download is the reposync phase - which seems to be
>>>> OK, but then the connection failure means that for some reason
>>>> Lago is not serving those RPMs (8585 is the port it
>>>> should be listening to).
>>>> Can you share some logs?
>>>
>>> http://pastebin.com/nsDFZhuE
>>>
>>>
>>>
>>> Perhaps something with the Firewall?
>>
>> I had no idea whether to keep it or not.
>> I already disabled SELinux after realizing it led to a
>> read-only root file system.
>>
>> About the issue above, not being able to reach some random
>> port could indeed be caused by the firewall, so I'll give it
>> a try.
>>
>>
>> During RPM installation it should add the relevant rule to
>> firewalld, btw:
>> if which firewall-cmd &>/dev/null; then
>>     firewall-cmd --reload
>>     firewall-cmd --permanent --zone=public --add-service=ovirtlago
>>     firewall-cmd --reload
>> fi
>
> I gave it many tries, and only after manually running your
> recommended firewall-cmd commands was I able to go one step
> further in the run.
>
> The next issue is here:
>
> @ Start Prefix:
> # Start nets:
> * Create network lago_basic_suite_3_6_lago:
> * Create network lago_basic_suite_3_6_lago: Success (in 0:00:06)
> # Start nets: Success (in 0:00:06)
> # Start vms:
> * Starting VM lago_basic_suite_3_6_engine:
> libvirt: QEMU Driver error : internal error: process exited while
> connecting to monitor: 2016-09-21T14:50:12.757362Z
> qemu-system-x86_64: cannot set up guest memory 'pc.ram': Cannot
> allocate memory
> * Starting VM lago_basic_suite_3_6_engine: ERROR (in 0:00:02)
> # Start vms: ERROR (in 0:00:02)
> # Destroy network lago_basic_suite_3_6_lago:
> # Destroy network lago_basic_suite_3_6_lago: ERROR (in 0:00:00)
> @ Start Prefix: ERROR (in 0:00:09)
> Error occured, aborting
> Traceback (most recent call last):
> File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 691,
> in main
> cli_plugins[args.verb].do_run(args)
> File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py",
> line 180, in do_run
> self._do_run(**vars(args))
> File "/usr/lib/python2.7/site-packages/lago/utils.py", line 488,
> in wrapper
> return func(*args, **kwargs)
> File "/usr/lib/python2.7/site-packages/lago/utils.py", line 499,
> in wrapper
> return func(*args, prefix=prefix, **kwargs)
> File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 255,
> in do_start
> prefix.start(vm_names=vm_names)
> File "/usr/lib/python2.7/site-packages/lago/prefix.py", line
> 958, in start
> self.virt_env.start(vm_names=vm_names)
> File "/usr/lib/python2.7/site-packages/lago/virt.py", line 175,
> in start
> vm.start()
> File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
> 247, in start
> return self.provider.start(*args, **kwargs)
> File "/usr/lib/python2.7/site-packages/lago/vm.py", line 93, in
> start
> self.libvirt_con.createXML(self._libvirt_xml())
> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3727,
> in createXML
> if ret is None:raise libvirtError('virDomainCreateXML()
> failed', conn=self)
> libvirtError: internal error: process exited while connecting to
> monitor: 2016-09-21T14:50:12.757362Z qemu-system-x86_64: cannot
> set up guest memory 'pc.ram': Cannot allocate memory
>
> So this is where I have to admit I'm trying to run all this on
> a *very* humble machine with only 4 GB of RAM. That may sound
> ridiculous, but I am prepared to wait for days between each command
> return and mouse click, as long as everything is doing its job
> (slowly).
>
> Not being able to allocate memory is blocking me from even testing
> Lago.
>
> Wouldn't there be somewhere I could tweak some limits?
>
>
> Of course, but I don't think such a low value would suffice. You can change
> the template - basic_suite_3.6/LagoInitFile.in
>
Thank you Nadav for your answer about networking.
Thank you Yaniv, helpful as always, because your hint worked: I reduced
the memory settings in the Lago init file and ran everything from
scratch. At first, it got stuck for hours with no disk usage progress
(as advised in the FAQ).
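For reference, what I lowered is the per-domain memory in that init
file. From memory it looks roughly like this (only a sketch -- the
exact key and domain names in basic_suite_3.6/LagoInitFile.in may
differ, and the values are just what I picked to fit in 4 GB):

    domains:
      engine:
        memory: 2048   # MiB, was larger in the shipped template
      host0:
        memory: 768
      host1:
        memory: 768
      storage:
        memory: 512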
So I stopped everything, ran a cleanup, deleted
/var/lib/lago/somewhere/rpm/cache/blahblah, and also rm -fr'd the
deployment dir.
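Concretely, the removal part was nothing more than this (the cache
path below is only a placeholder for the real one under
/var/lib/lago):

    # wipe the reposync cache so it is rebuilt on the next run
    rm -rf /var/lib/lago/.../rpm/cache/...
    # wipe the previous deployment
    rm -rf /data/lago/ovirt-system-tests/deployment-basic_suite_3.6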
Then I ran it again, and everything up to the tests seems to have gone
OK. In my process list, I can now see qemu processes running the
engine, host0, host1 and the storage VM.
virsh # list
 Id    Name                                     State
----------------------------------------------------
 1     f8b91b68-lago_basic_suite_3_6_engine    running
 2     f8b91b68-lago_basic_suite_3_6_host1     running
 3     f8b91b68-lago_basic_suite_3_6_host0     running
 4     f8b91b68-lago_basic_suite_3_6_storage   running
But the first test (engine initialization) is failing with an issue
related to paramiko (see below).
Apart from what googling told me, I had no idea what paramiko was.
Anyway, as it leads to test 001 failing and then eventually a complete
stop, I need to know how to correct this.
I already checked that I have the paramiko RPM installed.
+ env_run_test
/data/lago/ovirt-system-tests/basic_suite_3.6/test-scenarios/001_initialize_engine.py
+ echo '#########################'
#########################
+ local res=0
+ cd /data/lago/ovirt-system-tests/deployment-basic_suite_3.6
+ lago ovirt runtest
/data/lago/ovirt-system-tests/basic_suite_3.6/test-scenarios/001_initialize_engine.py
current session does not belong to lago group.
@ Run test: 001_initialize_engine.py:
nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
# 001_initialize_engine.test_initialize_engine:
* Copy
/data/lago/ovirt-system-tests/basic_suite_3.6/engine-answer-file.conf to
lago_basic_suite_3_6_engine:/tmp/answer-file:
* Copy
/data/lago/ovirt-system-tests/basic_suite_3.6/engine-answer-file.conf to
lago_basic_suite_3_6_engine:/tmp/answer-file: Success (in 0:00:01)
* Collect artifacts:
No handlers could be found for logger "paramiko.transport"
- [Thread-8] lago_basic_suite_3_6_host0: ERROR (in 0:01:04)
* Collect artifacts: ERROR (in 0:01:04)
# 001_initialize_engine.test_initialize_engine: ERROR (in 0:01:19)
# Results located at
/data/lago/ovirt-system-tests/deployment-basic_suite_3.6/default/nosetests-001_initialize_engine.py.xml
@ Run test: 001_initialize_engine.py: ERROR (in 0:01:21)
Error occured, aborting
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 258,
in do_run
self.cli_plugins[args.ovirtverb].do_run(args)
File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line
180, in do_run
self._do_run(**vars(args))
File "/usr/lib/python2.7/site-packages/lago/utils.py", line 488, in
wrapper
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/lago/utils.py", line 499, in
wrapper
return func(*args, prefix=prefix, **kwargs)
File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 102,
in do_ovirt_runtest
raise RuntimeError('Some tests failed')
RuntimeError: Some tests failed
+ res=1
+ cd -
/data/lago/ovirt-system-tests
+ return 1
+ failed=true
+ env_collect
/data/lago/ovirt-system-tests/test_logs/basic_suite_3.6/post-001_initialize_engine.py
+ local
tests_out_dir=/data/lago/ovirt-system-tests/test_logs/basic_suite_3.6/post-001_initialize_engine.py
+ echo '#########################'
#########################
+ [[ -e /data/lago/ovirt-system-tests/test_logs/basic_suite_3.6 ]]
+ mkdir -p /data/lago/ovirt-system-tests/test_logs/basic_suite_3.6
+ cd /data/lago/ovirt-system-tests/deployment-basic_suite_3.6/current
+ lago ovirt collect --output
/data/lago/ovirt-system-tests/test_logs/basic_suite_3.6/post-001_initialize_engine.py
current session does not belong to lago group.
@ Collect artifacts:
# [Thread-1] lago_basic_suite_3_6_engine:
# [Thread-2] lago_basic_suite_3_6_host1:
# [Thread-3] lago_basic_suite_3_6_host0:
# [Thread-4] lago_basic_suite_3_6_storage:
# [Thread-1] lago_basic_suite_3_6_engine: Success (in 0:00:22)
# [Thread-4] lago_basic_suite_3_6_storage: Success (in 0:00:23)
# [Thread-2] lago_basic_suite_3_6_host1: Success (in 0:00:23)
# [Thread-3] lago_basic_suite_3_6_host0: Success (in 0:00:23)
@ Collect artifacts: Success (in 0:00:24)
+ cp -a logs
/data/lago/ovirt-system-tests/test_logs/basic_suite_3.6/post-001_initialize_engine.py/lago_logs
+ cd -
/data/lago/ovirt-system-tests
+ true
+ echo '@@@@ ERROR: Failed running
/data/lago/ovirt-system-tests/basic_suite_3.6/test-scenarios/001_initialize_engine.py'
@@@@ ERROR: Failed running
/data/lago/ovirt-system-tests/basic_suite_3.6/test-scenarios/001_initialize_engine.py
+ return 1
Many Stack Overflow answers seem to say that the fix is easy, but I
have no clue where to put these settings:
http://stackoverflow.com/questions/15437700/no-handlers-could-be-found-for-logger-paramiko-transport
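If I understand those answers, the fix is simply to attach a handler
to the paramiko logger before paramiko is used, something like this
(a sketch only -- I don't know where Lago / ovirt-system-tests sets
up its logging, which is exactly the part I'm unsure about):

    import logging

    # either make paramiko's records visible via the root logger...
    logging.basicConfig(level=logging.WARNING)

    # ...or just silence the "No handlers could be found" message
    logging.getLogger("paramiko.transport").addHandler(
        logging.NullHandler())

The first form shows paramiko's messages, the second one merely
suppresses the warning.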
--
Nicolas ECARNOT