[VDSM] test_echo(1024, False) (stomp_test.StompTests) fails
by Nir Soffer
This test used to fail in the past, but since we fixed it (or the code around it) it never failed again.
Maybe the slave was overloaded? The captured log below shows the server logging 'RPC call echo succeeded' at 14:17:54, yet the client never saw the response before giving up roughly 15 seconds later (teardown starts at 14:18:09), which would fit a badly overloaded slave.
*14:19:02* ======================================================================
*14:19:02* ERROR:
*14:19:02* ----------------------------------------------------------------------
*14:19:02* Traceback (most recent call last):
*14:19:02*   File "/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/tests/testlib.py", line 143, in wrapper
*14:19:02*     return f(self, *args)
*14:19:02*   File "/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/tests/stomp_test.py", line 95, in test_echo
*14:19:02*     str(uuid4())),
*14:19:02*   File "/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/lib/yajsonrpc/jsonrpcclient.py", line 77, in callMethod
*14:19:02*     raise exception.JsonRpcNoResponseError(method=methodName)
*14:19:02* JsonRpcNoResponseError: No response for JSON-RPC request: {'method': 'echo'}
*14:19:02* -------------------- >> begin captured logging << --------------------
*14:19:02* 2018-06-27 14:17:54,524 DEBUG (MainThread) [vds.MultiProtocolAcceptor] Creating socket (host='::1', port=0, family=10, socketype=1, proto=6) (protocoldetector:225)
*14:19:02* 2018-06-27 14:17:54,526 INFO (MainThread) [vds.MultiProtocolAcceptor] Listening at ::1:36713 (protocoldetector:183)
*14:19:02* 2018-06-27 14:17:54,535 DEBUG (MainThread) [Scheduler] Starting scheduler test.Scheduler (schedule:98)
*14:19:02* 2018-06-27 14:17:54,537 DEBUG (test.Scheduler) [Scheduler] START thread <Thread(test.Scheduler, started daemon 140562082535168)> (func=<bound method Scheduler._run of <vdsm.schedule.Scheduler object at 0x7fd74ca00390>>, args=(), kwargs={}) (concurrent:193)
*14:19:02* 2018-06-27 14:17:54,538 DEBUG (test.Scheduler) [Scheduler] started (schedule:140)
*14:19:02* 2018-06-27 14:17:54,546 DEBUG (JsonRpc (StompReactor)) [root] START thread <Thread(JsonRpc (StompReactor), started daemon 140562629256960)> (func=<bound method StompReactor.process_requests of <yajsonrpc.stompserver.StompReactor object at 0x7fd74ca006d0>>, args=(), kwargs={}) (concurrent:193)
*14:19:02* 2018-06-27 14:17:54,547 DEBUG (MainThread) [Executor] Starting executor (executor:128)
*14:19:02* 2018-06-27 14:17:54,549 DEBUG (MainThread) [Executor] Starting worker jsonrpc/0 (executor:286)
*14:19:02* 2018-06-27 14:17:54,553 DEBUG (jsonrpc/0) [Executor] START thread <Thread(jsonrpc/0, started daemon 140562612471552)> (func=<bound method _Worker._run of <Worker name=jsonrpc/0 waiting task#=0 at 0x7fd74ca00650>>, args=(), kwargs={}) (concurrent:193)
*14:19:02* 2018-06-27 14:17:54,554 DEBUG (jsonrpc/0) [Executor] Worker started (executor:298)
*14:19:02* 2018-06-27 14:17:54,557 DEBUG (MainThread) [Executor] Starting worker jsonrpc/1 (executor:286)
*14:19:02* 2018-06-27 14:17:54,558 DEBUG (MainThread) [Executor] Starting worker jsonrpc/2 (executor:286)
*14:19:02* 2018-06-27 14:17:54,559 DEBUG (jsonrpc/2) [Executor] START thread <Thread(jsonrpc/2, started daemon 140562620864256)> (func=<bound method _Worker._run of <Worker name=jsonrpc/2 waiting task#=0 at 0x7fd74c9fb350>>, args=(), kwargs={}) (concurrent:193)
*14:19:02* 2018-06-27 14:17:54,560 DEBUG (jsonrpc/2) [Executor] Worker started (executor:298)
*14:19:02* 2018-06-27 14:17:54,561 DEBUG (MainThread) [Executor] Starting worker jsonrpc/3 (executor:286)
*14:19:02* 2018-06-27 14:17:54,562 DEBUG (jsonrpc/3) [Executor] START thread <Thread(jsonrpc/3, started daemon 140562124498688)> (func=<bound method _Worker._run of <Worker name=jsonrpc/3 waiting task#=0 at 0x7fd74bc80290>>, args=(), kwargs={}) (concurrent:193)
*14:19:02* 2018-06-27 14:17:54,563 DEBUG (jsonrpc/3) [Executor] Worker started (executor:298)
*14:19:02* 2018-06-27 14:17:54,564 DEBUG (MainThread) [Executor] Starting worker jsonrpc/4 (executor:286)
*14:19:02* 2018-06-27 14:17:54,565 DEBUG (jsonrpc/4) [Executor] START thread <Thread(jsonrpc/4, started daemon 140562116105984)> (func=<bound method _Worker._run of <Worker name=jsonrpc/4 waiting task#=0 at 0x7fd74bc80790>>, args=(), kwargs={}) (concurrent:193)
*14:19:02* 2018-06-27 14:17:54,566 DEBUG (jsonrpc/4) [Executor] Worker started (executor:298)
*14:19:02* 2018-06-27 14:17:54,568 DEBUG (jsonrpc/1) [Executor] START thread <Thread(jsonrpc/1, started daemon 140562132891392)> (func=<bound method _Worker._run of <Worker name=jsonrpc/1 waiting task#=0 at 0x7fd74c9fb690>>, args=(), kwargs={}) (concurrent:193)
*14:19:02* 2018-06-27 14:17:54,569 DEBUG (jsonrpc/1) [Executor] Worker started (executor:298)
*14:19:02* 2018-06-27 14:17:54,569 DEBUG (MainThread) [Executor] Starting worker jsonrpc/5 (executor:286)
*14:19:02* 2018-06-27 14:17:54,570 DEBUG (jsonrpc/5) [Executor] START thread <Thread(jsonrpc/5, started daemon 140562107713280)> (func=<bound method _Worker._run of <Worker name=jsonrpc/5 waiting task#=0 at 0x7fd74bc80090>>, args=(), kwargs={}) (concurrent:193)
*14:19:02* 2018-06-27 14:17:54,571 DEBUG (jsonrpc/5) [Executor] Worker started (executor:298)
*14:19:02* 2018-06-27 14:17:54,572 DEBUG (MainThread) [Executor] Starting worker jsonrpc/6 (executor:286)
*14:19:02* 2018-06-27 14:17:54,580 DEBUG (MainThread) [Executor] Starting worker jsonrpc/7 (executor:286)
*14:19:02* 2018-06-27 14:17:54,603 DEBUG (jsonrpc/6) [Executor] START thread <Thread(jsonrpc/6, started daemon 140562099320576)> (func=<bound method _Worker._run of <Worker name=jsonrpc/6 waiting task#=0 at 0x7fd74d0dacd0>>, args=(), kwargs={}) (concurrent:193)
*14:19:02* 2018-06-27 14:17:54,604 DEBUG (jsonrpc/6) [Executor] Worker started (executor:298)
*14:19:02* 2018-06-27 14:17:54,615 DEBUG (jsonrpc/7) [Executor] START thread <Thread(jsonrpc/7, started daemon 140562090927872)> (func=<bound method _Worker._run of <Worker name=jsonrpc/7 waiting task#=0 at 0x7fd74bfe0310>>, args=(), kwargs={}) (concurrent:193)
*14:19:02* 2018-06-27 14:17:54,616 DEBUG (JsonRpcServer) [root] START thread <Thread(JsonRpcServer, started daemon 140561730238208)> (func=<bound method JsonRpcServer.serve_requests of <yajsonrpc.JsonRpcServer object at 0x7fd74ca00690>>, args=(), kwargs={}) (concurrent:193)
*14:19:02* 2018-06-27 14:17:54,618 DEBUG (MainThread) [vds.MultiProtocolAcceptor] Adding detector <yajsonrpc.stompserver.StompDetector instance at 0x7fd74c28cc68> (protocoldetector:210)
*14:19:02* 2018-06-27 14:17:54,625 INFO (Detector thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:52432 (protocoldetector:61)
*14:19:02* 2018-06-27 14:17:54,626 DEBUG (Detector thread) [ProtocolDetector.Detector] Using required_size=11 (protocoldetector:89)
*14:19:02* 2018-06-27 14:17:54,627 DEBUG (jsonrpc/7) [Executor] Worker started (executor:298)
*14:19:02* 2018-06-27 14:17:54,634 DEBUG (MainThread) [jsonrpc.AsyncoreClient] Sending response (stompclient:293)
*14:19:02* 2018-06-27 14:17:54,634 DEBUG (Client ::1:36713) [root] START thread <Thread(Client ::1:36713, started daemon 140561713452800)> (func=<bound method Reactor.process_requests of <yajsonrpc.betterAsyncore.Reactor object at 0x7fd74bdb1d90>>, args=(), kwargs={}) (concurrent:193)
*14:19:02* 2018-06-27 14:17:54,641 INFO (Detector thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:52432 (protocoldetector:125)
*14:19:02* 2018-06-27 14:17:54,645 INFO (Detector thread) [Broker.StompAdapter] Processing CONNECT request (stompserver:94)
*14:19:02* 2018-06-27 14:17:54,648 DEBUG (Client ::1:36713) [yajsonrpc.protocols.stomp.AsyncClient] Stomp connection established (stompclient:137)
*14:19:02* 2018-06-27 14:17:54,651 DEBUG (jsonrpc/0) [jsonrpc.JsonRpcServer] Calling 'echo' in bridge with [u'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut en'] (__init__:328)
*14:19:02* 2018-06-27 14:17:54,652 DEBUG (jsonrpc/0) [jsonrpc.JsonRpcServer] Return 'echo' in bridge with Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut en (__init__:355)
*14:19:02* 2018-06-27 14:17:54,653 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call echo succeeded in 0.01 seconds (__init__:311)
*14:19:02* 2018-06-27 14:17:54,661 INFO (Detector thread) [Broker.StompAdapter] Subscribe command received (stompserver:123)
*14:19:02* 2018-06-27 14:17:54,662 DEBUG (Detector thread) [protocoldetector.StompDetector] Stomp detected from ('::1', 52432) (stompserver:412)
*14:19:02* 2018-06-27 14:18:09,649 DEBUG (MainThread) [vds.MultiProtocolAcceptor] Stopping Acceptor (protocoldetector:214)
*14:19:02* 2018-06-27 14:18:09,650 INFO (MainThread) [jsonrpc.JsonRpcServer] Stopping JsonRPC Server (__init__:441)
*14:19:02* 2018-06-27 14:18:09,651 DEBUG (MainThread) [Executor] Stopping executor (executor:137)
*14:19:02* 2018-06-27 14:18:09,652 DEBUG (Client ::1:36713) [root] FINISH thread <Thread(Client ::1:36713, started daemon 140561713452800)> (concurrent:196)
*14:19:02* 2018-06-27 14:18:09,652 DEBUG (JsonRpcServer) [root] FINISH thread <Thread(JsonRpcServer, started daemon 140561730238208)> (concurrent:196)
*14:19:02* 2018-06-27 14:18:09,654 DEBUG (JsonRpc (StompReactor)) [root] FINISH thread <Thread(JsonRpc (StompReactor), started daemon 140562629256960)> (concurrent:196)
*14:19:02* 2018-06-27 14:18:09,655 DEBUG (jsonrpc/2) [Executor] Worker stopped (executor:303)
*14:19:02* 2018-06-27 14:18:09,658 DEBUG (jsonrpc/3) [Executor] Worker stopped (executor:303)
*14:19:02* 2018-06-27 14:18:09,659 DEBUG (jsonrpc/4) [Executor] Worker stopped (executor:303)
*14:19:02* 2018-06-27 14:18:09,660 DEBUG (jsonrpc/1) [Executor] Worker stopped (executor:303)
*14:19:02* 2018-06-27 14:18:09,660 DEBUG (jsonrpc/5) [Executor] Worker stopped (executor:303)
*14:19:02* 2018-06-27 14:18:09,660 DEBUG (jsonrpc/6) [Executor] Worker stopped (executor:303)
*14:19:02* 2018-06-27 14:18:09,665 DEBUG (jsonrpc/6) [Executor] FINISH thread <Thread(jsonrpc/6, started daemon 140562099320576)> (concurrent:196)
*14:19:02* 2018-06-27 14:18:09,661 DEBUG (jsonrpc/7) [Executor] Worker stopped (executor:303)
*14:19:02* 2018-06-27 14:18:09,662 DEBUG (jsonrpc/2) [Executor] FINISH thread <Thread(jsonrpc/2, started daemon 140562620864256)> (concurrent:196)
*14:19:02* 2018-06-27 14:18:09,663 DEBUG (jsonrpc/3) [Executor] FINISH thread <Thread(jsonrpc/3, started daemon 140562124498688)> (concurrent:196)
*14:19:02* 2018-06-27 14:18:09,663 DEBUG (jsonrpc/1) [Executor] FINISH thread <Thread(jsonrpc/1, started daemon 140562132891392)> (concurrent:196)
*14:19:02* 2018-06-27 14:18:09,664 DEBUG (jsonrpc/4) [Executor] FINISH thread <Thread(jsonrpc/4, started daemon 140562116105984)> (concurrent:196)
*14:19:02* 2018-06-27 14:18:09,664 DEBUG (jsonrpc/0) [Executor] Worker stopped (executor:303)
*14:19:02* 2018-06-27 14:18:09,665 DEBUG (jsonrpc/5) [Executor] FINISH thread <Thread(jsonrpc/5, started daemon 140562107713280)> (concurrent:196)
*14:19:02* 2018-06-27 14:18:09,667 DEBUG (jsonrpc/7) [Executor] FINISH thread <Thread(jsonrpc/7, started daemon 140562090927872)> (concurrent:196)
*14:19:02* 2018-06-27 14:18:09,667 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/0 (executor:290)
*14:19:02* 2018-06-27 14:18:09,671 DEBUG (jsonrpc/0) [Executor] FINISH thread <Thread(jsonrpc/0, started daemon 140562612471552)> (concurrent:196)
*14:19:02* 2018-06-27 14:18:09,673 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/1 (executor:290)
*14:19:02* 2018-06-27 14:18:09,674 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/6 (executor:290)
*14:19:02* 2018-Coverage.py warning: Module /home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/vdsm was never imported. (module-not-imported)
*14:19:05* pylint installdeps: pylint==1.8
*14:19:22* pylint installed: The directory '/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.,astroid==1.6.5,backports.functools-lru-cache==1.5,backports.ssl-match-hostname==3.5.0.1,blivet==0.61.15.69,chardet==2.2.1,configparser==3.5.0,coverage==4.4.1,decorator==3.4.0,enum34==1.1.6,funcsigs==1.0.2,futures==3.2.0,iniparse==0.4,ioprocess==1.1.0,ipaddress==1.0.16,IPy==0.75,isort==4.3.4,kitchen==1.1.1,lazy-object-proxy==1.3.1,libvirt-python==3.9.0,Magic-file-extensions==0.2,mccabe==0.6.1,mock==2.0.0,netaddr==0.7.18,ovirt-imageio-common==1.4.1,pbr==3.1.1,pluggy==0.6.0,policycoreutils-default-encoding==0.1,pthreading==0.1.3,py==1.5.4,pycurl==7.19.0,pygpgme==0.3,pyinotify==0.9.4,pykickstart==1.99.66.18,pyliblzma==0.5.3,pylint==1.8.0,pyparted==3.9,python-augeas==0.5.0,python-dateutil==2.4.2,pyudev==0.15,pyxattr==0.5.1,PyYAML==3.10,requests==2.6.0,sanlock-python==3.6.0,seobject==0.1,sepolicy==1.1,singledispatch==3.4.0.3,six==1.11.0,subprocess32==3.2.6,tox==2.9.1,urlgrabber==3.10,urllib3==1.10.2,virtualenv==16.0.0,WebOb==1.2.3,wrapt==1.10.11,yum-metadata-parser==1.1.4
*14:19:22* pylint runtests: PYTHONHASHSEED='528910716'
*14:19:22* pylint runtests: commands[0] | pylint --errors-only static/usr/share/vdsm/sitecustomize.py lib/vdsm lib/vdsmclient lib/yajsonrpc
*14:19:23* Problem importing module base.pyc: cannot import name get_node_last_lineno
*14:19:24* Problem importing module base.py: cannot import name get_node_last_lineno
*14:19:34* 06-27 14:18:09,674 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/7 (executor:290)
*14:19:34* 2018-06-27 14:18:09,675 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/5 (executor:290)
*14:19:34* 2018-06-27 14:18:09,675 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/2 (executor:290)
*14:19:34* 2018-06-27 14:18:09,676 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/3 (executor:290)
*14:19:34* 2018-06-27 14:18:09,676 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/4 (executor:290)
*14:19:34* 2018-06-27 14:18:09,677 DEBUG (MainThread) [Scheduler] Stopping scheduler test.Scheduler (schedule:110)
*14:19:34* 2018-06-27 14:18:09,678 DEBUG (test.Scheduler) [Scheduler] stopped (schedule:143)
*14:19:34* 2018-06-27 14:18:09,679 DEBUG (test.Scheduler) [Scheduler] FINISH thread <Thread(test.Scheduler, started daemon 140562082535168)> (concurrent:196)
*14:19:34* --------------------- >> end captured logging << ---------------------
oVirt master - Fedora 28
by Sandro Bonazzola
Hi,
you can now "dnf install ovirt-engine" on Fedora 28 using
https://resources.ovirt.org/repos/ovirt/tested/master/rpm/fc28/
ovirt-master-snapshot is still syncing and will probably be aligned in 1
hour or so.
Mirrors should be aligned by tomorrow.
Please note that no test has been done on the build.
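For anyone who prefers a repo file over pointing dnf at the URL directly, a minimal sketch (the repo id, name, and gpgcheck=0 are my guesses, not an official snippet):

cat > /etc/yum.repos.d/ovirt-master-tested.repo << 'EOF'
[ovirt-master-tested]
name=oVirt master nightly (tested) - Fedora 28
baseurl=https://resources.ovirt.org/repos/ovirt/tested/master/rpm/fc28/
enabled=1
gpgcheck=0
EOF
dnf install ovirt-engine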
Host side we are currently blocked on a broken dependency on
Bug 1588471 <https://bugzilla.redhat.com/show_bug.cgi?id=1588471> - nothing
provides python-rhsm needed by pulp-rpm-handlers-2.15.2-1.fc28.noarch
And we are still missing a vdsm build, pending review and merge of
https://gerrit.ovirt.org/91944 for supporting fc28.
A few notes on the engine build:
- Dropped fluentd and related packages from the dependency tree: there's no
commitment to supporting fluentd for metrics in 4.3, so we didn't invest
time in packaging it for Fedora.
- Dropped the limitation on novnc < 0.6. Fedora already has 0.6.1 in its stable
repository, and we have Bug 1502652
<https://bugzilla.redhat.com/show_bug.cgi?id=1502652> - [RFE] upgrade novnc,
which should be scoped for 4.3.0; otherwise we should consider dropping novnc
support.
This should allow us to start developing the oVirt Engine appliance based on
Fedora 28, and then the OST suites.
Thanks,
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Re: [ovirt-users] PollVDSCommand Error: java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils
by Greg Sheremeta
Is your engine functioning at all? I'm not sure if you're saying your
entire engine doesn't work, or just host deploy doesn't work.
What version of ovirt engine is this?
How did you install -- fresh installation, upgrade? From source or rpm?
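One quick sanity check worth doing first: see whether commons-lang actually landed on disk where the engine can load it. A rough sketch (the paths are my assumptions for an RPM install, adjust as needed):

find /usr/share/java /usr/share/ovirt-engine -name 'commons-lang*.jar'
# and confirm the class is really inside whatever jar turns up:
unzip -l /usr/share/java/commons-lang.jar | grep StringUtils

If the jar is missing, that would point at a broken installation rather than an engine bug.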
Best wishes,
Greg
On Wed, Jun 20, 2018 at 12:35 AM Deekshith <deekshith(a)binaryindia.com>
wrote:
> Dear team
>
> Please find the attached server and host logs.. ovirt host
> unresponsive(down), unable to install node packages.
>
>
>
> Regards
>
> Deekshith
>
>
>
> *From:* Roy Golan [mailto:rgolan@redhat.com]
> *Sent:* 19 June 2018 06:57
> *To:* Greg Sheremeta
> *Cc:* devel; users; deekshith(a)binaryindia.com; allen_john(a)mrpl.co.in
> *Subject:* Re: [ovirt-users] PollVDSCommand Error:
> java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils
>
>
>
>
>
> On Tue, 19 Jun 2018 at 16:04 Greg Sheremeta <gshereme(a)redhat.com> wrote:
>
> Sending to devel list.
>
>
>
> Anyone ever seen this before? It sounds like a bad installation if Java
> classes are missing / classloader issues are present.
>
>
>
> 2018-06-18 11:23:11,287+05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (VdsDeploy) [d6a2578b-c58f-43be-b6ad-30a3a0a57a74] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509), Correlation ID: d6a2578b-c58f-43be-b6ad-30a3a0a57a74, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Installing Host Node. Retrieving installation logs to: '/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20180618112311-mrpl.kvm2-d6a2578b-c58f-43be-b6ad-30a3a0a57a74.log'.
>
> 2018-06-18 11:23:12,131+05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (VdsDeploy) [d6a2578b-c58f-43be-b6ad-30a3a0a57a74] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509), Correlation ID: d6a2578b-c58f-43be-b6ad-30a3a0a57a74, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Installing Host Node. Stage: Termination.
>
> 2018-06-18 11:23:12,193+05 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to mrpl.kvm2/172.31.1.32
>
> 2018-06-18 11:23:12,206+05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (org.ovirt.thread.pool-7-thread-47) [d6a2578b-c58f-43be-b6ad-30a3a0a57a74] Error: java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils
>
> 2018-06-18 11:23:12,206+05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (org.ovirt.thread.pool-7-thread-47) [d6a2578b-c58f-43be-b6ad-30a3a0a57a74] Command 'PollVDSCommand(HostName = Node, VdsIdVDSCommandParametersBase:{runAsync='true', hostId='d862d5c8-c75e-42ee-867f-ca16a228c2ad'})' execution failed: java.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils
>
> 2018-06-18 11:23:12,707+05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (org.ovirt.thread.pool-7-thread-47) [d6a2578b-c58f-43be-b6ad-30a3a0a57a74] Error: java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils
>
>
>
>
>
> Is that a regular installation and what version?
>
> jdk version?
>
> server.log engine.log
>
>
>
>
>
> On Tue, Jun 19, 2018 at 8:50 AM Deekshith <deekshith(a)binaryindia.com>
> wrote:
>
> Dear team
>
> Please find attached ovirt-engine log .
>
>
>
> Regards
>
> Deekshith
>
>
>
> *From:* Greg Sheremeta [mailto:gshereme@redhat.com]
> *Sent:* 19 June 2018 05:53
> *To:* deekshith(a)binaryindia.com
> *Cc:* users
> *Subject:* Re: [ovirt-users] Unable to start Ovirt engine
>
>
>
> What does it say in the
> /var/log/ovirt-engine/setup/ovirt-engine-setup-xxxxxx.log?
>
>
>
> On Sun, Jun 17, 2018 at 6:01 AM Deekshith <deekshith(a)binaryindia.com>
> wrote:
>
> Hi Team
>
> Please anyone help me to solve this issue.
>
>
>
>
>
>
>
>
>
>
>
>
> --
>
> *GREG SHEREMETA*
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat NA
>
> <https://www.redhat.com/>
>
> gshereme(a)redhat.com IRC: gshereme
>
>
--
GREG SHEREMETA
SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
Red Hat NA
<https://www.redhat.com/>
gshereme(a)redhat.com IRC: gshereme
[ OST Failure Report ] [ oVirt Master (ovirt-engine) ] [ 28-06-2018 ] [ 098_ovirt_provider_ovn.use_ovn_provider.]
by Dafna Ron
Hi,
We had a failure in test 098_ovirt_provider_ovn.use_ovn_provider.
CQ is pointing to this change: https://gerrit.ovirt.org/#/c/92567/ -
packaging: Add python-netaddr requirement - but judging from the error
I actually think it's caused by the changes made to multiqueues:
https://gerrit.ovirt.org/#/c/92009/ - engine: Update libvirtVmXml to consider vmBase.multiQueuesEnabled attribute
https://gerrit.ovirt.org/#/c/92008/ - engine: Introduce algorithm for calculating how many queues asign per vnic
https://gerrit.ovirt.org/#/c/92007/ - engine: Add multiQueuesEnabled to VmBase
https://gerrit.ovirt.org/#/c/92318/ - restapi: Add 'Multi Queues Enabled' to the relevant mappers
https://gerrit.ovirt.org/#/c/92149/ - webadmin: Add 'Multi Queues Enabled' to vm dialog
Alona, can you please take a look?
Link to job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8375/

Link to all logs:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8375/artif...

(Relevant) error snippet from the log:

<error>

engine:
2018-06-27 13:59:25,976-04 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-80) [] Command 'GetAllVmStatsVDSCommand(HostName = lago-basic-suite-master-host-1, VdsIdVDSCommandParametersBase:{hostId='d9094c95-3275-4616-b4c2-815e753bcfed'})' execution failed: VDSGenericException: VDSNetworkException: Broken pipe
2018-06-27 13:59:25,977-04 DEBUG [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (EE-ManagedThreadFactory-engine-Thread-442) [] Executing task: EE-ManagedThreadFactory-engine-Thread-442
2018-06-27 13:59:25,977-04 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (EE-ManagedThreadFactory-engine-Thread-442) [] method: getVdsManager, params: [d9094c95-3275-4616-b4c2-815e753bcfed], timeElapsed: 0ms
2018-06-27 13:59:25,977-04 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedThreadFactory-engine-Thread-442) [] Host 'lago-basic-suite-master-host-1' is not responding.
2018-06-27 13:59:25,979-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-63) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM lago-basic-suite-master-host-1 command GetStatsAsyncVDS failed: Broken pipe
2018-06-27 13:59:25,976-04 DEBUG [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-80) [] Exception: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: VDSGenericException: VDSNetworkException: Broken pipe
    at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:189) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand.executeVdsBrokerCommand(GetAllVmStatsVDSCommand.java:23) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVdsCommandWithNetworkEvent(VdsBrokerCommand.java:123) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:111) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65) [vdsbroker.jar:]
    at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:31) [dal.jar:]
    at org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:399) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown Source) [vdsbroker.jar:]
    at sun.reflect.GeneratedMethodAccessor270.invoke(Unknown Source) [:1.8.0_171]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_171]
    at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_171]
    at org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:49) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
    at org.jboss.weld.interceptor.proxy.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:77) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
    at org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12) [common.jar:]
    at sun.reflect.GeneratedMethodAccessor68.invoke(Unknown Source) [:1.8.0_171]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_171]
    at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_171]
    at org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
    at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeAroundInvoke(InterceptorMethodHandler.java:84) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
    at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeInterception(InterceptorMethodHandler.java:72) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
    at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.invoke(InterceptorMethodHandler.java:56) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
    at org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:79) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
    at org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:68) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
    at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand(Unknown Source) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher.poll(VmsStatisticsFetcher.java:29) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.monitoring.VmsListFetcher.fetch(VmsListFetcher.java:49) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher.poll(PollVmStatsRefresher.java:44) [vdsbroker.jar:]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_171]
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [rt.jar:1.8.0_171]
    at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383) [javax.enterprise.concurrent-1.0.jar:]
    at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534) [javax.enterprise.concurrent-1.0.jar:]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_171]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_171]
    at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_171]
    at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent-1.0.jar:]
    at org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ElytronManagedThread.run(ElytronManagedThreadFactory.java:78)
2018-06-27 13:59:25,984-04 DEBUG [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-80) [] FINISH, GetAllVmStatsVDSCommand, return: , log id: 56d99e77
2018-06-27 13:59:25,984-04 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (EE-ManagedThreadFactory-engineScheduled-Thread-80) [] method: runVdsCommand, params: [GetAllVmStats, VdsIdVDSCommandParametersBase:{hostId='d9094c95-3275-4616-b4c2-815e753bcfed'}], timeElapsed: 1497ms
2018-06-27 13:59:25,984-04 INFO [org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher] (EE-ManagedThreadFactory-engineScheduled-Thread-80) [] Failed to fetch vms info for host 'lago-basic-suite-master-host-1' - skipping VMs monitoring.
vdsm:

2018-06-27 14:10:17,314-0400 INFO (jsonrpc/7) [virt.vm] (vmId='b8a11304-07e3-4e64-af35-7421be780d5b') Hotunplug NIC xml: <?xml version='1.0' encoding='utf-8'?><interface type="bridge"> <address bus="0x00" domain="0x0000" function="0x0" slot="0x0b" type="pci" /> <mac address="00:1a:4a:16:01:0e" /> <model type="virtio" /> <source bridge="network_1" /> <link state="up" /> <driver name="vhost" queues="" /> <alias name="ua-3c77476f-f194-476a-8412-d76a9e58d1f9" /></interface> (vm:3321)
2018-06-27 14:10:17,328-0400 ERROR (jsonrpc/7) [virt.vm] (vmId='b8a11304-07e3-4e64-af35-7421be780d5b') Hotunplug failed (vm:3353)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 3343, in hotunplugNic
    self._dom.detachDevice(nicXml)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 99, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 93, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1177, in detachDevice
    if ret == -1: raise libvirtError ('virDomainDetachDevice() failed', dom=self)
libvirtError: 'queues' attribute must be positive number:
2018-06-27 14:10:17,345-0400 DEBUG (jsonrpc/7) [api] FINISH hotunplugNic response={'status': {'message': "'queues' attribute must be positive number: ", 'code': 50}} (api:136)
2018-06-27 14:10:17,346-0400 INFO (jsonrpc/7) [api.virt] FINISH hotunplugNic return={'status': {'message': "'queues' attribute must be positive number: ", 'code': 50}} from=::ffff:192.168.201.4,32976, flow_id=ecb6652, vmId=b8a11304-07e3-4e64-af35-7421be780d5b (api:53)
2018-06-27 14:10:17,346-0400 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call VM.hotunplugNic failed (error 50) in 0.07 seconds (__init__:311)
2018-06-27 14:10:19,244-0400 DEBUG (qgapoller/2) [vds] Not sending QEMU-GA command 'guest-get-users' to vm_id='b8a11304-07e3-4e64-af35-7421be780d5b', command is not supported (qemuguestagent:192)
2018-06-27 14:10:20,038-0400 DEBUG (jsonrpc/1) [jsonrpc.JsonRpcServer] Calling 'Host.getAllVmStats' in bridge with {} (__init__:328)
2018-06-27 14:10:20,038-0400 INFO (jsonrpc/1) [api.host] START getAllVmStats() from=::1,48032 (api:47)
2018-06-27 14:10:20,041-0400 INFO (jsonrpc/1) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,48032 (api:53)
2018-06-27 14:10:20,043-0400 DEBUG (jsonrpc/1) [jsonrpc.JsonRpcServer] Return 'Host.getAllVmStats' in bridge with (suppressed) (__init__:355)
2018-06-27 14:10:20,043-0400 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:311)
2018-06-27 14:10:20,057-0400 DEBUG (jsonrpc/6) [jsonrpc.JsonRpcServer] Calling 'Host.getAllVmIoTunePolicies' in bridge with {} (__init__:328)
2018-06-27 14:10:20,058-0400 INFO (jsonrpc/6) [api.host] START getAllVmIoTunePolicies() from=::1,48032 (api:47)
2018-06-27 14:10:20,058-0400 INFO (jsonrpc/6) [api.host] FINISH getAllVmIoTunePolicies return={'status': {'message': 'Done', 'code': 0}, 'io_tune_policies_dict': {'b8a11304-07e3-4e64-af35-7421be780d5b': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/blockSD/cf23ceeb-81a3-4714-85a0-c6ddd1e024da/images/650fe4ae-47a1-4f2d-9cba-1617a8c868c3/03e75c3c-24e7-4e68-a6f1-21728aaaa73e', 'name': 'vda'}]}}} from=::1,48032 (api:53)
2018-06-27 14:10:20,059-0400 DEBUG (jsonrpc/6) [jsonrpc.JsonRpcServer] Return 'Host.getAllVmIoTunePolicies' in bridge with {'b8a11304-07e3-4e64-af35-7421be780d5b': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/blockSD/cf23ceeb-81a3-4714-85a0-c6ddd1e024da/images/650fe4ae-47a1-4f2d-9cba-1617a8c868c3/03e75c3c-24e7-4e68-a6f1-21728aaaa73e', 'name': 'vda'}]}} (__init__:355)

</error>
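The giveaway is in the hotunplug XML above: the engine generated <driver name="vhost" queues="" /> with an empty queues attribute, which libvirt rejects with "'queues' attribute must be positive number". Apparently the multiqueue patches emit queues="" when no queue count applies, instead of omitting the driver element. For anyone who wants to reproduce the rejection outside OST, a rough sketch against a running domain (the domain name is a placeholder; the MAC and bridge come from the log above, and I have not actually run this):

cat > /tmp/bad-nic.xml << 'EOF'
<interface type="bridge">
  <mac address="00:1a:4a:16:01:0e"/>
  <source bridge="network_1"/>
  <model type="virtio"/>
  <driver name="vhost" queues=""/>
</interface>
EOF
virsh detach-device <domain> /tmp/bad-nic.xml   # should fail the same way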
oVirt HCI point-to-point interconnection
by Stefano Zappa
Good morning,
I would like to kindly ask you a question about the feasibility of defining a point-to-point interconnection between three oVirt nodes.
The initial idea is to optimize direct communication between the nodes, and especially the gluster traffic, which seems easy enough. Beyond that, I am evaluating a more complex configuration: creating an overlay L2 network on top of the three L3 point-to-point links, using techniques like Geneve, of which at the moment I have no mastery (a rough sketch of what I mean is below).
If direct routing on the three nodes, to interconnect the public network with the private overlay network, turns out not to be easily doable, we could leave the private overlay network isolated from the outside world and connect the hosted engine VM directly to the two networks with two adapters.
Could this layout, with direct interconnection of the nodes without switches and a shared L2 overlay network between the nodes, be contemplated in future releases of your HCI solution?
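To make the overlay idea concrete, this is the kind of thing I have in mind on each node (interface names, VNIs and peer addresses are invented for illustration; I have not validated this):

# on node A, one Geneve tunnel per peer, bridged into a shared L2 segment
ip link add gnv-b type geneve id 100 remote 192.0.2.2
ip link add gnv-c type geneve id 101 remote 192.0.2.3
ip link add br-overlay type bridge
ip link set gnv-b master br-overlay
ip link set gnv-c master br-overlay
ip link set gnv-b up && ip link set gnv-c up && ip link set br-overlay up

I understand a full mesh bridged like this would also need STP or some split-horizon arrangement to avoid forwarding loops.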
Thank you for your attention, have a nice day!
Stefano Zappa.
Stefano Zappa
IT Specialist CAE - TT-3
Industrie Saleri Italo S.p.A.
Phone: +39 0308250480
Fax: +39 0308250466
oVirt on Fedora 28 for developers
by Nir Soffer
I want to share the current state of oVirt on Fedora 28; hopefully it will
save time for other developers.
1. Why should I run oVirt on Fedora 28?
- Fedora is the base for future CentOS. The bugs we find *today* on Fedora 28
are the bugs that we will not have next year when we try oVirt on CentOS.
- I want to contribute to projects that require python 3. For example, virt-v2v
requires python 3.6 upstream. If you want to contribute, you need to test
it on Fedora 28.
- I need to develop with the latest libvirt and qemu. One example is incremental
backup; you will need Fedora 28 to work on this.
- CentOS is old and boring, I want to play with the newest bugs :-)
2. How to install oVirt with Fedora 28 hosts
Warning: ugly hacks below!
- Install ovirt-release-master.rpm on all hosts
dnf install http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
- Install or build engine on CentOS 7.5 (1804)[1]
- When provisioning a host, make sure you have lvm2-2.02.177-5.fc28;
without it, no block storage for you[2]
- When adding a host, disable "Configure host firewall" - it does not work now.
I hope that Sandro's team will fix this issue soon.
- Adding a host will fail, because without the firewall setup, port 54321 is
not reachable. To fix this, configure the firewall manually, or disable it:
    iptables -F
This is pretty lame because you have to apply it again after restart, but it
was good enough for now.
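If you would rather not flush every rule, opening just the vdsm port persistently should also do (assuming firewalld is what's running on the host; I did not verify this on F28):

    firewall-cmd --permanent --add-port=54321/tcp
    firewall-cmd --reload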
- Because the host was not reachable, we don't configure the host network,
so the host will be non-operational.
To fix this, open host > network > setup networks and assign ovirtmgmt
to the host management nic. The host will become UP after that.
I hope that Dan's team will fix it soon.
- Adding storage will fail because of a sanlock selinux issue[3]
To fix, set selinux to permissive mode:
    setenforce 0
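Note that setenforce 0 does not survive a reboot; to keep the host permissive until the sanlock policy is fixed, you can also flip the config file (standard selinux knob, nothing oVirt specific):

    sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config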
At this point you should have a working setup.
3. Building and installing vdsm from source on Fedora 28
- Install build dependencies using
dnf install `cat automation/check-patch.packages.fc28`
- Clean the source (needed if you rerun ./autogen.sh with different options)
git clean -dxf
- Configure vdsm with hooks - for some reason 2 hooks are required
on Fedora but not on EL.
./autogen.sh --system --enable-hooks
- Build
make
make rpm
- Install the packages at /home/user/rpmbuild/RPMS/{noarch,x86_64}
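Something like this should do for the install itself (assuming the default rpmbuild output path; the globs will also match the hook subpackages, prune to taste):

    dnf install /home/user/rpmbuild/RPMS/noarch/vdsm-*.noarch.rpm \
        /home/user/rpmbuild/RPMS/x86_64/vdsm-*.x86_64.rpm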
- If vdsm was never installed, you need to configure and enable it
vdsm-tool configure --force
systemctl enable vdsmd
systemctl start vdsmd
At this point you can test the new bugs you added to vdsm :-)
[1] I did not try to install engine on Fedora 28. I guess engine folks can
share if there are issues with this.
[2] https://bugzilla.redhat.com/1575762
The LVM team fixed the issue a couple of hours after I asked about it in #lvm.
We had a build for testing a couple of days later.
[3] https://bugzilla.redhat.com/1593853
We are still waiting for the selinux folks' response.
Happy hacking!
Nir
CVE-2018-3639 qemu-kvm-ev-2.10.0-21.el7_5.4.1 is now available for testing
by Sandro Bonazzola
Hi, qemu-kvm-ev-2.10.0-21.el7_5.4.1 has been tagged for testing.
If nothing shows up, I'll tag it for release on Monday July 2nd.
Here's the changelog:
* Thu Jun 28 2018 Sandro Bonazzola <sbonazzo(a)redhat.com> -
ev-2.10.0-21.el7_5.4.1
- Removing RH branding from package name
* Sat Jun 09 2018 Miroslav Rezanina <mrezanin(a)redhat.com> -
rhev-2.10.0-21.el7_5.4
- kvm-scsi-disk-allow-customizing-the-SCSI-version.patch [bz#1571370]
- kvm-hw-scsi-support-SCSI-2-passthrough-without-PI.patch [bz#1571370]
- kvm-i386-Define-the-Virt-SSBD-MSR-and-handling-of-it-CVE.patch
[bz#1584370]
- kvm-i386-define-the-AMD-virt-ssbd-CPUID-feature-bit-CVE-.patch
[bz#1584370]
- kvm-cpus-Fix-event-order-on-resume-of-stopped-guest.patch [bz#1582122]
- kvm-spec-Enable-Native-Ceph-support-on-all-architectures.patch
[bz#1588001]
- Resolves: bz#1571370
(Pegas1.1 Alpha: SCSI pass-thru of aacraid RAID1 is inaccessible
(qemu-kvm-rhev) [rhel-7.5.z])
- Resolves: bz#1582122
(IOERROR pause code lost after resuming a VM while I/O error is still
present [rhel-7.5.z])
- Resolves: bz#1584370
(CVE-2018-3639 qemu-kvm-rhev: hw: cpu: AMD: speculative store bypass
[rhel-7.5.z])
- Resolves: bz#1588001
(Enable Native Ceph support on non x86_64 CPUs [rhel-7.5.z])
For testing:
$ sudo yum install centos-release-qemu-ev
$ sudo yum-config-manager --enable centos-qemu-ev-test
$ sudo yum update "qemu-kvm-ev*"
and use it as usual.
Thanks,
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
[VDSM] Why hooks are required for installing vdsm?
by Nir Soffer
Trying to update vdsm master, my update script fails with:
installing...
error: Failed dependencies:
vdsm = 4.30.0-429.git05bfb8731.fc28 is needed by (installed)
vdsm-hook-fcoe-4.30.0-429.git05bfb8731.fc28.noarch
vdsm = 4.30.0-429.git05bfb8731.fc28 is needed by (installed)
vdsm-hook-ethtool-options-4.30.0-429.git05bfb8731.fc28.noarch
I tried to remove these hooks, since I'm not interested in any of them, but
removing them tries to remove 136 packages, including libvirt.
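In the meantime, two possible workarounds that avoid the 136-package cascade (neither fully verified): upgrade the base package together with the rebuilt hook rpms in a single transaction, so the versioned requires stay satisfied, or drop the hooks without letting the resolver follow reverse dependencies:

    rpm -Uvh ~/rpmbuild/RPMS/noarch/vdsm-*.noarch.rpm \
        ~/rpmbuild/RPMS/x86_64/vdsm-*.x86_64.rpm
    # or:
    rpm -e --nodeps vdsm-hook-fcoe vdsm-hook-ethtool-options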
Why do we have these dependencies? Hooks should be optional.
Who owns these hooks?
Nir
Ovirt master build failing in my local (fedora27)
by Gobinda Das
Hi,
I am trying to build oVirt master in my local environment, but the build is
failing with the following error:
ERROR: /home/godas/workspace/ovirt-engine/packaging/setup/ovirt_engine_setup/engine_common/postgres.py Imports are incorrectly sorted.
ERROR: /home/godas/workspace/ovirt-engine/packaging/setup/ovirt_engine_setup/engine_common/database.py Imports are incorrectly sorted.
ERROR: /home/godas/workspace/ovirt-engine/packaging/setup/ovirt_engine_setup/remote_engine.py Imports are incorrectly sorted.
ERROR: /home/godas/workspace/ovirt-engine/packaging/setup/plugins/ovirt-engine-common/base/core/postinstall.py Imports are incorrectly sorted.
ERROR: /home/godas/workspace/ovirt-engine/packaging/setup/plugins/ovirt-engine-setup/websocket_proxy/pki.py Imports are incorrectly sorted.
ERROR: /home/godas/workspace/ovirt-engine/packaging/setup/plugins/ovirt-engine-setup/websocket_proxy/config.py Imports are incorrectly sorted.
ERROR: /home/godas/workspace/ovirt-engine/packaging/setup/plugins/ovirt-engine-setup/ovirt-engine/upgrade/asynctasks.py Imports are incorrectly sorted.
ERROR: /home/godas/workspace/ovirt-engine/packaging/setup/plugins/ovirt-engine-setup/vmconsole_proxy_helper/config.py Imports are incorrectly sorted.
+ echo ERROR: The following check failed:
I have cleared my DB as well and also did yum update.
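"Imports are incorrectly sorted." is isort's standard failure message, so re-sorting the offending files locally should clear the check (assuming the repo's isort configuration is picked up automatically; an untested guess):

    cd ~/workspace/ovirt-engine
    isort packaging/setup/ovirt_engine_setup/engine_common/postgres.py \
        packaging/setup/ovirt_engine_setup/engine_common/database.py
    # ...and so on for the other files listed above

If your locally installed isort version differs from the one the build uses, the two can disagree about the "correct" order, which could also explain a build that fails only on one machine.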
--
Thanks,
Gobinda
Adding Fedora 28 host to engine 4.2 (NotImplementedError: Packager install not implemented)
by Nir Soffer
I'm trying to add a host running Fedora 28 to engine 4.2, and installation
fails with:
2018-06-20 01:14:26,137+0300 DEBUG otopi.context context._executeMethod:143 method exception
Traceback (most recent call last):
  File "/tmp/ovirt-Z5BGYej3Qa/pythonlib/otopi/context.py", line 133, in _executeMethod
    method['method']()
  File "/tmp/ovirt-Z5BGYej3Qa/otopi-plugins/ovirt-host-deploy/vdsm/vdsmid.py", line 84, in _packages
    self.packager.install(('dmidecode',))
  File "/tmp/ovirt-Z5BGYej3Qa/pythonlib/otopi/packager.py", line 102, in install
    raise NotImplementedError(_('Packager install not implemented'))
NotImplementedError: Packager install not implemented
2018-06-20 01:14:26,138+0300 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Environment packages setup': Packager install not implemented
Do we have a way to work around this?
I must have a host where I can build and test virt-v2v upstream from source,
and virt-v2v requires python 3. So I need some workaround to get the host
connected to engine.
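One thing I have not tried yet: judging from the traceback, the failing stage only asks the packager to install dmidecode, so pre-installing it on the host before re-running the deploy might let that stage pass (pure guesswork from the stack trace):

    dnf install dmidecode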
Nir