Error: Java SDK issue?
by Geschwentner, Patrick
Dear Ladies and Gentlemen!
I am currently working with the java-sdk and I encountered a problem.
When I try to retrieve the disk details, I get the following error:
Disk currDisk = ovirtConnection.followLink(diskAttachment.disk());
The error occurs in this line:
[inline screenshot]
The getResponse looks quite OK (I inspected it: [inline screenshot]; it looks fine).
Error:
wrong number of arguments
The code is quite similar to what you published on GitHub (https://github.com/oVirt/ovirt-engine-sdk-java/blob/master/sdk/src/test/j... ).
Can you confirm the defect?
Best regards
Patrick
3 years, 7 months
query on running OST
by Pooja Pandey
Hello Team,
Could you please help me with a query about an OST run? I am trying to
run it on a virtual machine, and it is failing with an error: libvirtError:
unsupported configuration: Domain requires KVM, but it is not available.
Is it necessary to run oVirt System Tests on a hypervisor only? Please
reply so that I can proceed further. Thanks.
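A quick sanity check, as a minimal sketch (assuming a standard Linux system with the libvirt tools installed): OST starts its test VMs with KVM (via Lago/libvirt), so KVM must be usable on the machine running it; on a virtual machine that means nested virtualization has to be enabled on the host. For example:

    # The CPU must expose hardware virtualization flags (vmx = Intel, svm = AMD);
    # on a VM, this requires nested virtualization enabled on the host:
    grep -cE '(vmx|svm)' /proc/cpuinfo

    # The KVM device node must exist:
    ls -l /dev/kvm

    # Let libvirt validate the host setup itself:
    virt-host-validate qemu

If these checks fail, running OST on that machine will keep failing with the libvirtError above.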
Regards,
Pooja
5 years, 10 months
[VDSM] all tests passed, build failed with "tar: journal/b087148aba6d49b9bbef488e52a48752/system.journal: file changed as we read it"
by Nir Soffer
We have this failure that pops up randomly:
1. All tests pass
00:13:13.284 ___________________________________ summary ____________________________________
00:13:13.285 tests: commands succeeded
00:13:13.286 storage-py27: commands succeeded
00:13:13.286 storage-py36: commands succeeded
00:13:13.286 lib-py27: commands succeeded
00:13:13.287 lib-py36: commands succeeded
00:13:13.288 network-py27: commands succeeded
00:13:13.290 network-py36: commands succeeded
00:13:13.291 virt-py27: commands succeeded
00:13:13.292 virt-py36: commands succeeded
00:13:13.293 congratulations :)
2. But we fail to collect logs at the end
00:14:35.992 ##########################################################
00:14:35.995 ## Wed Nov 28 17:39:50 UTC 2018 Finished env: fc28:fedora-28-x86_64
00:14:35.996 ## took 764 seconds
00:14:35.997 ## rc = 1
00:14:35.997 ##########################################################
00:14:36.009 ##! ERROR vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
00:14:36.010 ##! Last 20 log entries: /tmp/mock_logs.Lcop4ZOq/script/stdout_stderr.log
00:14:36.011 ##!
00:14:36.012 journal/b087148aba6d49b9bbef488e52a48752/system.journal
00:14:36.013 tar: journal/b087148aba6d49b9bbef488e52a48752/system.journal: file changed as we read it
00:14:36.014 journal/b087148aba6d49b9bbef488e52a48752/user-1000.journal
00:14:36.015 lastlog
00:14:36.015 libvirt/
00:14:36.015 libvirt/lxc/
00:14:36.015 libvirt/libxl/
00:14:36.016 libvirt/qemu/
00:14:36.016 libvirt/qemu/LiveOS-f920001d-be4e-47ea-ac26-72480fd5be87.log
00:14:36.017 libvirt/uml/
00:14:36.017 ovirt-guest-agent/
00:14:36.017 ovirt-guest-agent/ovirt-guest-agent.log
00:14:36.017 README
00:14:36.018 samba/
00:14:36.018 samba/old/
00:14:36.018 sssd/
00:14:36.018 tallylog
00:14:36.018 wtmp
00:14:36.018 Took 678 seconds
00:14:36.018 ===================================
00:14:36.019 ##!
00:14:36.019 ##! ERROR ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00:14:36.019 ##!########################################################
This looks like an issue with vdsm check-patch.sh:
function collect_logs {
    res=$?
    [ "$res" -ne 0 ] && echo "*** err: $res"
    cd /var/log
    tar -cvzf "$EXPORT_DIR/mock_varlogs.tar.gz" *
    cd /var/host_log
    tar -cvzf "$EXPORT_DIR/host_varlogs.tar.gz" *
}
trap collect_logs EXIT
It seems that tar fails when a file is modified while it is being copied,
which makes sense.
We can ignore this error, since log collection should not fail the build
(see the sketch below), but I think a better solution is to avoid collecting
any logs at all: vdsm writes its own logs during the tests, so all the info
must be in the vdsm log.
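For the first option, a minimal sketch of collect_logs (untested; assumes GNU tar, which exits with status 1 when a file changed while it was being read, as opposed to 2 or higher for real errors):

function collect_logs {
    res=$?
    [ "$res" -ne 0 ] && echo "*** err: $res"
    cd /var/log
    # --warning=no-file-changed silences the "file changed as we read it"
    # message; GNU tar still exits with status 1 in that case, so treat
    # status 1 as success and fail only on real errors (status >= 2):
    tar -cvzf "$EXPORT_DIR/mock_varlogs.tar.gz" --warning=no-file-changed * || [ $? -eq 1 ]
    cd /var/host_log
    tar -cvzf "$EXPORT_DIR/host_varlogs.tar.gz" --warning=no-file-changed * || [ $? -eq 1 ]
}
trap collect_logs EXIT

The "|| [ $? -eq 1 ]" guard is what actually keeps a benign change from failing the build; the --warning flag only suppresses the message.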
Here is the list of collected logs:
00:13:47.280 + tar -cvzf /home/jenkins/workspace/vdsm_master_check-patch-fc28-x86_64/vdsm/exported-artifacts/mock_varlogs.tar.gz btmp dnf.librepo.log dnf.log dnf.rpm.log faillog glusterfs hawkey.log journal lastlog libvirt openvswitch README tallylog vdsm_tests.log wtmp yum.log
00:13:47.285 btmp
00:13:47.285 dnf.librepo.log
00:13:47.299 dnf.log
00:13:47.309 dnf.rpm.log
00:13:47.310 faillog
00:13:47.311 glusterfs/
00:13:47.312 hawkey.log
00:13:47.313 journal/
00:13:47.313 lastlog
00:13:47.315 libvirt/
00:13:47.315 libvirt/qemu/
00:13:47.316 openvswitch/
00:13:47.317 openvswitch/ovs-vswitchd.log
00:13:47.318 openvswitch/ovsdb-server.log
00:13:47.319 README
00:13:47.320 tallylog
00:13:47.321 vdsm_tests.log
00:13:47.342 wtmp
00:13:47.343 yum.log
00:13:47.349 + cd /var/host_log
00:13:47.350 + tar -cvzf /home/jenkins/workspace/vdsm_master_check-patch-fc28-x86_64/vdsm/exported-artifacts/host_varlogs.tar.gz anaconda audit boot.log btmp chrony cloud-init.log cloud-init-output.log cron dnf.librepo.log dnf.log dnf.rpm.log firewalld glusterfs hawkey.log journal lastlog libvirt ovirt-guest-agent README samba sssd tallylog wtmp
00:13:47.356 anaconda/
00:13:47.356 anaconda/ifcfg.log
00:13:47.357 anaconda/ks-script-l5qnynnj.log
00:13:47.358 anaconda/storage.log
00:13:47.359 anaconda/program.log
00:13:47.395 anaconda/ks-script-b5_08tmo.log
00:13:47.396 anaconda/ks-script-6uks8bp3.log
00:13:47.397 anaconda/hawkey.log
00:13:47.398 anaconda/syslog
00:13:47.406 anaconda/journal.log
00:13:47.449 anaconda/dnf.librepo.log
00:13:47.458 anaconda/packaging.log
00:13:47.465 anaconda/dbus.log
00:13:47.466 anaconda/anaconda.log
00:13:47.467 anaconda/ks-script-slrcz39_.log
00:13:47.503 audit/
00:13:47.504 audit/audit.log.3
00:13:47.657 audit/audit.log.2
00:13:47.814 audit/audit.log.1
00:13:47.981 audit/audit.log
00:13:48.008 audit/audit.log.4
00:13:48.155 boot.log
00:13:48.156 btmp
00:13:48.157 chrony/
00:13:48.159 cloud-init.log
00:13:48.159 cloud-init-output.log
00:13:48.161 cron
00:13:48.162 dnf.librepo.log
00:13:49.930 dnf.log
00:13:51.335 dnf.rpm.log
00:13:51.421 firewalld
00:13:51.423 glusterfs/
00:13:51.424 hawkey.log
00:13:51.704 journal/
00:13:51.708 journal/b087148aba6d49b9bbef488e52a48752/
00:13:51.709 journal/b087148aba6d49b9bbef488e52a48752/system.journal
00:13:55.817 tar: journal/b087148aba6d49b9bbef488e52a48752/system.journal: file changed as we read it
00:13:55.819 journal/b087148aba6d49b9bbef488e52a48752/user-1000.journal
00:13:55.915 lastlog
00:13:55.923 libvirt/
00:13:55.924 libvirt/lxc/
00:13:55.926 libvirt/libxl/
00:13:55.927 libvirt/qemu/
00:13:55.928 libvirt/qemu/LiveOS-f920001d-be4e-47ea-ac26-72480fd5be87.log
00:13:55.929 libvirt/uml/
00:13:55.930 ovirt-guest-agent/
00:13:55.930 ovirt-guest-agent/ovirt-guest-agent.log
00:13:55.932 README
00:13:55.933 samba/
00:13:55.933 samba/old/
00:13:55.935 sssd/
00:13:55.935 tallylog
00:13:55.935 wtmp
Most if not all of these are not relevant to the vdsm tests, and should not be collected.
This was added in:
commit 9c9c17297433e5a5a49aa19cde10b206e7db61e9
Author: Edward Haas <edwardh(a)redhat.com>
Date: Tue Apr 17 10:53:11 2018 +0300
automation: Collect logs even when check-patch fails
Change-Id: Idfe07ce6fc55473b1db1d7f16754f559cc5c345a
Signed-off-by: Edward Haas <edwardh(a)redhat.com>
Reviewed in:
https://gerrit.ovirt.org/c/90370
Edward, can you explain why we need to collect logs during check-patch,
and why we need to collect all the logs in the system?
Nir
5 years, 11 months
Re: Any way to make VM Hidden/Locked for access
by Pavan Chavva
+Ovirt Devel and Adelino.
FYI.
Best,
Pavan.
On Wed, Nov 28, 2018 at 8:21 AM Mahesh Falmari <Mahesh.Falmari(a)veritas.com>
wrote:
> Hi Nir,
>
> Just wanted to check if there is any way to create a VM through the API
> and make this VM hidden or locked, so that it is not shown/accessible to
> the users until we explicitly enable it again.
>
> The use case here is VM recovery, where we would like to create the VM
> first and check whether there is any issue in VM creation, so that we can
> fail in the first phase, before the actual data transfer takes place.
> But the issue we may see here is that once the VM is created, it becomes
> accessible to users, who may play with it and impact the subsequent
> recovery operations.
>
>
>
> Thanks & Regards,
> Mahesh Falmari
>
>
>
--
PAVAN KUMAR CHAVVA
ENGINEERING PARTNER MANAGER
Red Hat
pchavva(a)redhat.com M: 4793219099 IM: pchavva
5 years, 11 months
[Ovirt] [CQ weekly status] [30-11-2018]
by Dafna Ron
Hi,
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to below colour map for further information on the meaning of
the colours.
*CQ-4.2*: RED (#1)
I checked the last date on which ovirt-engine and vdsm passed and moved
packages to tested (they are the bigger projects); it was on 27-11-2018.
We have been having sporadic failures for most of the projects on the test
check_snapshot_with_memory.
We have deduced that this is caused by a code regression in storage, based
on the following:
1. Evgheni and Gal helped debug this issue to rule out lago and infra issues
as the cause of failure, and both determined that the issue is a code
regression - most likely in storage.
2. The failure only happens on the 4.2 branch.
3. The failure itself is that a VM cannot be run due to low disk space in
the storage domain, and we cannot see any failures which would leave
leftovers in the storage domain.
Dan and Ryan are actively involved in trying to find the regression, but the
consensus is that this is a storage-related regression, and *we are having a
problem getting the storage team to join us in debugging the issue.*
I prepared a patch to skip the test in case we cannot get cooperation from
the storage team and resolve this regression in the next few days:
https://gerrit.ovirt.org/#/c/95889/
*CQ-Master:* YELLOW (#1)
We have failures which CQ is still bisecting; until it is done, we cannot
point to any specific failing projects.
Happy week!
Dafna
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test
coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** intermittent failures are expected in a healthy project, as we see a
number of failures during the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active Failures. The colour will change based on the amount of time the
project/s has been broken. Only active regressions would be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
5 years, 11 months
otopi defaults again to python3
by Yedidyah Bar David
Hi all,
As some of you might recall, some time ago we made otopi default to
python3, and quickly reverted that, realizing this causes too much
breakage.
Now things should hopefully be more stable, and I now merged a patch
to default to python3 again.
Current status:
engine-setup works with python3 on fedora.
host-deploy works with python3 on fedora, with the engine on both el7 and
on fedora. I didn't try it on el7; it might work as well.
hosted-engine --deploy is most likely broken on fedora, but I think it
was already broken. We are working on that, but it will require some
more time - notably, having stuff from vdsm on python3 (if not fully
porting vdsm to python3, which I understand will take even more time).
If you want to use python2, you can do that with:
OTOPI_PYTHON=/bin/python hosted-engine --deploy
On el7, if you manually added python3 (which is not available in the
base repos), things will break - use the above workaround if needed.
Best regards,
--
Didi
5 years, 11 months
[VDSM] test_echo(1024, False) (stomp_test.StompTests) fails
by Nir Soffer
This test used to fail in the past, but since we fixed it (or the code
under test) it has not failed. Maybe the slave was overloaded?
14:19:02 ======================================================================
14:19:02 ERROR:
14:19:02 ----------------------------------------------------------------------
14:19:02 Traceback (most recent call last):
14:19:02   File "/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/tests/testlib.py", line 143, in wrapper
14:19:02     return f(self, *args)
14:19:02   File "/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/tests/stomp_test.py", line 95, in test_echo
14:19:02     str(uuid4())),
14:19:02   File "/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/lib/yajsonrpc/jsonrpcclient.py", line 77, in callMethod
14:19:02     raise exception.JsonRpcNoResponseError(method=methodName)
14:19:02 JsonRpcNoResponseError: No response for JSON-RPC request: {'method': 'echo'}
14:19:02 -------------------- >> begin captured logging << --------------------
14:19:02 2018-06-27 14:17:54,524 DEBUG (MainThread) [vds.MultiProtocolAcceptor] Creating socket (host='::1', port=0, family=10, socketype=1, proto=6) (protocoldetector:225)
14:19:02 2018-06-27 14:17:54,526 INFO (MainThread) [vds.MultiProtocolAcceptor] Listening at ::1:36713 (protocoldetector:183)
14:19:02 2018-06-27 14:17:54,535 DEBUG (MainThread) [Scheduler] Starting scheduler test.Scheduler (schedule:98)
14:19:02 2018-06-27 14:17:54,537 DEBUG (test.Scheduler) [Scheduler] START thread <Thread(test.Scheduler, started daemon 140562082535168)> (func=<bound method Scheduler._run of <vdsm.schedule.Scheduler object at 0x7fd74ca00390>>, args=(), kwargs={}) (concurrent:193)
14:19:02 2018-06-27 14:17:54,538 DEBUG (test.Scheduler) [Scheduler] started (schedule:140)
14:19:02 2018-06-27 14:17:54,546 DEBUG (JsonRpc (StompReactor)) [root] START thread <Thread(JsonRpc (StompReactor), started daemon 140562629256960)> (func=<bound method StompReactor.process_requests of <yajsonrpc.stompserver.StompReactor object at 0x7fd74ca006d0>>, args=(), kwargs={}) (concurrent:193)
14:19:02 2018-06-27 14:17:54,547 DEBUG (MainThread) [Executor] Starting executor (executor:128)
14:19:02 2018-06-27 14:17:54,549 DEBUG (MainThread) [Executor] Starting worker jsonrpc/0 (executor:286)
14:19:02 2018-06-27 14:17:54,553 DEBUG (jsonrpc/0) [Executor] START thread <Thread(jsonrpc/0, started daemon 140562612471552)> (func=<bound method _Worker._run of <Worker name=jsonrpc/0 waiting task#=0 at 0x7fd74ca00650>>, args=(), kwargs={}) (concurrent:193)
14:19:02 2018-06-27 14:17:54,554 DEBUG (jsonrpc/0) [Executor] Worker started (executor:298)
14:19:02 2018-06-27 14:17:54,557 DEBUG (MainThread) [Executor] Starting worker jsonrpc/1 (executor:286)
14:19:02 2018-06-27 14:17:54,558 DEBUG (MainThread) [Executor] Starting worker jsonrpc/2 (executor:286)
14:19:02 2018-06-27 14:17:54,559 DEBUG (jsonrpc/2) [Executor] START thread <Thread(jsonrpc/2, started daemon 140562620864256)> (func=<bound method _Worker._run of <Worker name=jsonrpc/2 waiting task#=0 at 0x7fd74c9fb350>>, args=(), kwargs={}) (concurrent:193)
14:19:02 2018-06-27 14:17:54,560 DEBUG (jsonrpc/2) [Executor] Worker started (executor:298)
14:19:02 2018-06-27 14:17:54,561 DEBUG (MainThread) [Executor] Starting worker jsonrpc/3 (executor:286)
14:19:02 2018-06-27 14:17:54,562 DEBUG (jsonrpc/3) [Executor] START thread <Thread(jsonrpc/3, started daemon 140562124498688)> (func=<bound method _Worker._run of <Worker name=jsonrpc/3 waiting task#=0 at 0x7fd74bc80290>>, args=(), kwargs={}) (concurrent:193)
14:19:02 2018-06-27 14:17:54,563 DEBUG (jsonrpc/3) [Executor] Worker started (executor:298)
14:19:02 2018-06-27 14:17:54,564 DEBUG (MainThread) [Executor] Starting worker jsonrpc/4 (executor:286)
14:19:02 2018-06-27 14:17:54,565 DEBUG (jsonrpc/4) [Executor] START thread <Thread(jsonrpc/4, started daemon 140562116105984)> (func=<bound method _Worker._run of <Worker name=jsonrpc/4 waiting task#=0 at 0x7fd74bc80790>>, args=(), kwargs={}) (concurrent:193)
14:19:02 2018-06-27 14:17:54,566 DEBUG (jsonrpc/4) [Executor] Worker started (executor:298)
14:19:02 2018-06-27 14:17:54,568 DEBUG (jsonrpc/1) [Executor] START thread <Thread(jsonrpc/1, started daemon 140562132891392)> (func=<bound method _Worker._run of <Worker name=jsonrpc/1 waiting task#=0 at 0x7fd74c9fb690>>, args=(), kwargs={}) (concurrent:193)
14:19:02 2018-06-27 14:17:54,569 DEBUG (jsonrpc/1) [Executor] Worker started (executor:298)
14:19:02 2018-06-27 14:17:54,569 DEBUG (MainThread) [Executor] Starting worker jsonrpc/5 (executor:286)
14:19:02 2018-06-27 14:17:54,570 DEBUG (jsonrpc/5) [Executor] START thread <Thread(jsonrpc/5, started daemon 140562107713280)> (func=<bound method _Worker._run of <Worker name=jsonrpc/5 waiting task#=0 at 0x7fd74bc80090>>, args=(), kwargs={}) (concurrent:193)
14:19:02 2018-06-27 14:17:54,571 DEBUG (jsonrpc/5) [Executor] Worker started (executor:298)
14:19:02 2018-06-27 14:17:54,572 DEBUG (MainThread) [Executor] Starting worker jsonrpc/6 (executor:286)
14:19:02 2018-06-27 14:17:54,580 DEBUG (MainThread) [Executor] Starting worker jsonrpc/7 (executor:286)
14:19:02 2018-06-27 14:17:54,603 DEBUG (jsonrpc/6) [Executor] START thread <Thread(jsonrpc/6, started daemon 140562099320576)> (func=<bound method _Worker._run of <Worker name=jsonrpc/6 waiting task#=0 at 0x7fd74d0dacd0>>, args=(), kwargs={}) (concurrent:193)
14:19:02 2018-06-27 14:17:54,604 DEBUG (jsonrpc/6) [Executor] Worker started (executor:298)
14:19:02 2018-06-27 14:17:54,615 DEBUG (jsonrpc/7) [Executor] START thread <Thread(jsonrpc/7, started daemon 140562090927872)> (func=<bound method _Worker._run of <Worker name=jsonrpc/7 waiting task#=0 at 0x7fd74bfe0310>>, args=(), kwargs={}) (concurrent:193)
14:19:02 2018-06-27 14:17:54,616 DEBUG (JsonRpcServer) [root] START thread <Thread(JsonRpcServer, started daemon 140561730238208)> (func=<bound method JsonRpcServer.serve_requests of <yajsonrpc.JsonRpcServer object at 0x7fd74ca00690>>, args=(), kwargs={}) (concurrent:193)
14:19:02 2018-06-27 14:17:54,618 DEBUG (MainThread) [vds.MultiProtocolAcceptor] Adding detector <yajsonrpc.stompserver.StompDetector instance at 0x7fd74c28cc68> (protocoldetector:210)
14:19:02 2018-06-27 14:17:54,625 INFO (Detector thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:52432 (protocoldetector:61)
14:19:02 2018-06-27 14:17:54,626 DEBUG (Detector thread) [ProtocolDetector.Detector] Using required_size=11 (protocoldetector:89)
14:19:02 2018-06-27 14:17:54,627 DEBUG (jsonrpc/7) [Executor] Worker started (executor:298)
14:19:02 2018-06-27 14:17:54,634 DEBUG (MainThread) [jsonrpc.AsyncoreClient] Sending response (stompclient:293)
14:19:02 2018-06-27 14:17:54,634 DEBUG (Client ::1:36713) [root] START thread <Thread(Client ::1:36713, started daemon 140561713452800)> (func=<bound method Reactor.process_requests of <yajsonrpc.betterAsyncore.Reactor object at 0x7fd74bdb1d90>>, args=(), kwargs={}) (concurrent:193)
14:19:02 2018-06-27 14:17:54,641 INFO (Detector thread) [ProtocolDetector.Detector] Detected protocol stomp from ::1:52432 (protocoldetector:125)
14:19:02 2018-06-27 14:17:54,645 INFO (Detector thread) [Broker.StompAdapter] Processing CONNECT request (stompserver:94)
14:19:02 2018-06-27 14:17:54,648 DEBUG (Client ::1:36713) [yajsonrpc.protocols.stomp.AsyncClient] Stomp connection established (stompclient:137)
14:19:02 2018-06-27 14:17:54,651 DEBUG (jsonrpc/0) [jsonrpc.JsonRpcServer] Calling 'echo' in bridge with [u'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut en'] (__init__:328)
14:19:02 2018-06-27 14:17:54,652 DEBUG (jsonrpc/0) [jsonrpc.JsonRpcServer] Return 'echo' in bridge with Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut en (__init__:355)
14:19:02 2018-06-27 14:17:54,653 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call echo succeeded in 0.01 seconds (__init__:311)
14:19:02 2018-06-27 14:17:54,661 INFO (Detector thread) [Broker.StompAdapter] Subscribe command received (stompserver:123)
14:19:02 2018-06-27 14:17:54,662 DEBUG (Detector thread) [protocoldetector.StompDetector] Stomp detected from ('::1', 52432) (stompserver:412)
14:19:02 2018-06-27 14:18:09,649 DEBUG (MainThread) [vds.MultiProtocolAcceptor] Stopping Acceptor (protocoldetector:214)
14:19:02 2018-06-27 14:18:09,650 INFO (MainThread) [jsonrpc.JsonRpcServer] Stopping JsonRPC Server (__init__:441)
14:19:02 2018-06-27 14:18:09,651 DEBUG (MainThread) [Executor] Stopping executor (executor:137)
14:19:02 2018-06-27 14:18:09,652 DEBUG (Client ::1:36713) [root] FINISH thread <Thread(Client ::1:36713, started daemon 140561713452800)> (concurrent:196)
14:19:02 2018-06-27 14:18:09,652 DEBUG (JsonRpcServer) [root] FINISH thread <Thread(JsonRpcServer, started daemon 140561730238208)> (concurrent:196)
14:19:02 2018-06-27 14:18:09,654 DEBUG (JsonRpc (StompReactor)) [root] FINISH thread <Thread(JsonRpc (StompReactor), started daemon 140562629256960)> (concurrent:196)
14:19:02 2018-06-27 14:18:09,655 DEBUG (jsonrpc/2) [Executor] Worker stopped (executor:303)
14:19:02 2018-06-27 14:18:09,658 DEBUG (jsonrpc/3) [Executor] Worker stopped (executor:303)
14:19:02 2018-06-27 14:18:09,659 DEBUG (jsonrpc/4) [Executor] Worker stopped (executor:303)
14:19:02 2018-06-27 14:18:09,660 DEBUG (jsonrpc/1) [Executor] Worker stopped (executor:303)
14:19:02 2018-06-27 14:18:09,660 DEBUG (jsonrpc/5) [Executor] Worker stopped (executor:303)
14:19:02 2018-06-27 14:18:09,660 DEBUG (jsonrpc/6) [Executor] Worker stopped (executor:303)
14:19:02 2018-06-27 14:18:09,665 DEBUG (jsonrpc/6) [Executor] FINISH thread <Thread(jsonrpc/6, started daemon 140562099320576)> (concurrent:196)
14:19:02 2018-06-27 14:18:09,661 DEBUG (jsonrpc/7) [Executor] Worker stopped (executor:303)
14:19:02 2018-06-27 14:18:09,662 DEBUG (jsonrpc/2) [Executor] FINISH thread <Thread(jsonrpc/2, started daemon 140562620864256)> (concurrent:196)
14:19:02 2018-06-27 14:18:09,663 DEBUG (jsonrpc/3) [Executor] FINISH thread <Thread(jsonrpc/3, started daemon 140562124498688)> (concurrent:196)
14:19:02 2018-06-27 14:18:09,663 DEBUG (jsonrpc/1) [Executor] FINISH thread <Thread(jsonrpc/1, started daemon 140562132891392)> (concurrent:196)
14:19:02 2018-06-27 14:18:09,664 DEBUG (jsonrpc/4) [Executor] FINISH thread <Thread(jsonrpc/4, started daemon 140562116105984)> (concurrent:196)
14:19:02 2018-06-27 14:18:09,664 DEBUG (jsonrpc/0) [Executor] Worker stopped (executor:303)
14:19:02 2018-06-27 14:18:09,665 DEBUG (jsonrpc/5) [Executor] FINISH thread <Thread(jsonrpc/5, started daemon 140562107713280)> (concurrent:196)
14:19:02 2018-06-27 14:18:09,667 DEBUG (jsonrpc/7) [Executor] FINISH thread <Thread(jsonrpc/7, started daemon 140562090927872)> (concurrent:196)
14:19:02 2018-06-27 14:18:09,667 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/0 (executor:290)
14:19:02 2018-06-27 14:18:09,671 DEBUG (jsonrpc/0) [Executor] FINISH thread <Thread(jsonrpc/0, started daemon 140562612471552)> (concurrent:196)
14:19:02 2018-06-27 14:18:09,673 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/1 (executor:290)
14:19:02 2018-06-27 14:18:09,674 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/6 (executor:290)
14:19:02 2018-Coverage.py warning: Module /home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/vdsm was never imported. (module-not-imported)
14:19:05 pylint installdeps: pylint==1.8
14:19:22 pylint installed: The directory '/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.,astroid==1.6.5,backports.functools-lru-cache==1.5,backports.ssl-match-hostname==3.5.0.1,blivet==0.61.15.69,chardet==2.2.1,configparser==3.5.0,coverage==4.4.1,decorator==3.4.0,enum34==1.1.6,funcsigs==1.0.2,futures==3.2.0,iniparse==0.4,ioprocess==1.1.0,ipaddress==1.0.16,IPy==0.75,isort==4.3.4,kitchen==1.1.1,lazy-object-proxy==1.3.1,libvirt-python==3.9.0,Magic-file-extensions==0.2,mccabe==0.6.1,mock==2.0.0,netaddr==0.7.18,ovirt-imageio-common==1.4.1,pbr==3.1.1,pluggy==0.6.0,policycoreutils-default-encoding==0.1,pthreading==0.1.3,py==1.5.4,pycurl==7.19.0,pygpgme==0.3,pyinotify==0.9.4,pykickstart==1.99.66.18,pyliblzma==0.5.3,pylint==1.8.0,pyparted==3.9,python-augeas==0.5.0,python-dateutil==2.4.2,pyudev==0.15,pyxattr==0.5.1,PyYAML==3.10,requests==2.6.0,sanlock-python==3.6.0,seobject==0.1,sepolicy==1.1,singledispatch==3.4.0.3,six==1.11.0,subprocess32==3.2.6,tox==2.9.1,urlgrabber==3.10,urllib3==1.10.2,virtualenv==16.0.0,WebOb==1.2.3,wrapt==1.10.11,yum-metadata-parser==1.1.4
14:19:22 pylint runtests: PYTHONHASHSEED='528910716'
14:19:22 pylint runtests: commands[0] | pylint --errors-only static/usr/share/vdsm/sitecustomize.py lib/vdsm lib/vdsmclient lib/yajsonrpc
14:19:23 Problem importing module base.pyc: cannot import name get_node_last_lineno
14:19:24 Problem importing module base.py: cannot import name get_node_last_lineno
14:19:34 06-27 14:18:09,674 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/7 (executor:290)
14:19:34 2018-06-27 14:18:09,675 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/5 (executor:290)
14:19:34 2018-06-27 14:18:09,675 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/2 (executor:290)
14:19:34 2018-06-27 14:18:09,676 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/3 (executor:290)
14:19:34 2018-06-27 14:18:09,676 DEBUG (MainThread) [Executor] Waiting for worker jsonrpc/4 (executor:290)
14:19:34 2018-06-27 14:18:09,677 DEBUG (MainThread) [Scheduler] Stopping scheduler test.Scheduler (schedule:110)
14:19:34 2018-06-27 14:18:09,678 DEBUG (test.Scheduler) [Scheduler] stopped (schedule:143)
14:19:34 2018-06-27 14:18:09,679 DEBUG (test.Scheduler) [Scheduler] FINISH thread <Thread(test.Scheduler, started daemon 140562082535168)> (concurrent:196)
14:19:34 --------------------- >> end captured logging << ---------------------
5 years, 11 months