[JIRA] (OVIRT-1324) Re: Jenkins check-merged failure on Vdsm 4.1
by Nadav Goldin (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-1324?page=com.atlassian.jir... ]
Nadav Goldin commented on OVIRT-1324:
-------------------------------------
Adding more information from the email thread (probably missed here):
{quote}Hi Milan, sorry for missing this.
In short, it looks like a libvirt/qemu error; I guess it lies
somewhere in the nested environment the Jenkins slave runs in. I was
able to extract the libvirt log from this specific run, but there is
nothing useful there, except that there was no proper termination.
From reading here[1] it might be related to load on the hypervisor
and the timeout configured for libvirt to wait for qemu. Unfortunately,
looking at this[2] thread, it seems that a patch to configure the
timeout never got into libvirt, which leaves us with a default of 30
seconds, and that might not be enough in our nested environment. I
presume that if the hypervisor which the Jenkins slave runs on is highly
loaded, then when we try to start the vdsm_functional_tests_lago VM,
it might take more than 30 seconds for qemu to respond.
Another indication of this "hypothesis" is that I have never seen this
error on OST - which uses bare-metal slaves.
Evgheni, do we have load monitoring on the hypervisor that runs
vm0065.workers-phx.ovirt.org? Not sure if we added that eventually.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=987088
[2] https://www.redhat.com/archives/libvir-list/2014-January/msg00410.html
{quote}
[~ederevea]
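To illustrate the timeout hypothesis, here is a minimal sketch of retrying the domain creation when qemu is slow to expose its monitor socket on a loaded host. It assumes the libvirt-python bindings and a hypothetical domain XML; it is not what lago actually does, only an example of working around the transient error quoted above.

    import time
    import libvirt  # libvirt-python bindings, assumed installed

    def create_domain_with_retry(conn, domain_xml, attempts=3, delay=30):
        """Retry virDomainCreateXML when qemu is slow to come up under load."""
        for attempt in range(1, attempts + 1):
            try:
                return conn.createXML(domain_xml, 0)
            except libvirt.libvirtError as err:
                # "monitor socket did not show up" is the transient failure seen above
                if 'monitor socket' not in str(err) or attempt == attempts:
                    raise
                time.sleep(delay)

    # hypothetical usage:
    # conn = libvirt.open('qemu:///system')
    # dom = create_domain_with_retry(conn, open('vm.xml').read())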
> Re: Jenkins check-merged failure on Vdsm 4.1
> --------------------------------------------
>
> Key: OVIRT-1324
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1324
> Project: oVirt - virtualization made easy
> Issue Type: By-EMAIL
> Reporter: Gil Shinar
> Assignee: infra
>
> Adding infra-support so a ticket will be opened
> Milan, is it still relevant?
> Thanks
> Gil
> On Mon, Apr 10, 2017 at 10:56 AM, Milan Zamazal <mzamazal(a)redhat.com> wrote:
> > Hi,
> >
> > after my Vdsm patch https://gerrit.ovirt.org/75329 in ovirt-4.1 branch
> > had been merged, Jenkins check-merged job
> > http://jenkins.ovirt.org/job/vdsm_4.1_check-merged-el7-x86_64/173/
> > failed with the following error:
> >
> > 07:01:21 @ Start specified VMs:
> > 07:01:21 # Start nets:
> > 07:01:21 * Create network vdsm_functional_tests_lago:
> > 07:01:27 * Create network vdsm_functional_tests_lago: Success (in
> > 0:00:05)
> > 07:01:27 # Start nets: Success (in 0:00:05)
> > 07:01:27 # Start vms:
> > 07:01:27 * Starting VM vdsm_functional_tests_host-el7:
> > 07:02:07 libvirt: QEMU Driver error : monitor socket did not show up: No
> > such file or directory
> > 07:02:07 * Starting VM vdsm_functional_tests_host-el7: ERROR (in
> > 0:00:40)
> > 07:02:07 # Start vms: ERROR (in 0:00:40)
> > 07:02:07 # Destroy network vdsm_functional_tests_lago:
> > 07:02:07 # Destroy network vdsm_functional_tests_lago: ERROR (in
> > 0:00:00)
> > 07:02:07 @ Start specified VMs: ERROR (in 0:00:46)
> > 07:02:07 Error occured, aborting
> > 07:02:07 Traceback (most recent call last):
> > 07:02:07 File "/usr/lib/python2.7/site-packages/lago/cmd.py", line
> > 936, in main
> > 07:02:07 cli_plugins[args.verb].do_run(args)
> > 07:02:07 File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py",
> > line 184, in do_run
> > 07:02:07 self._do_run(**vars(args))
> > 07:02:07 File "/usr/lib/python2.7/site-packages/lago/utils.py", line
> > 495, in wrapper
> > 07:02:07 return func(*args, **kwargs)
> > 07:02:07 File "/usr/lib/python2.7/site-packages/lago/utils.py", line
> > 506, in wrapper
> > 07:02:07 return func(*args, prefix=prefix, **kwargs)
> > 07:02:07 File "/usr/lib/python2.7/site-packages/lago/cmd.py", line
> > 264, in do_start
> > 07:02:07 prefix.start(vm_names=vm_names)
> > 07:02:07 File "/usr/lib/python2.7/site-packages/lago/prefix.py", line
> > 1033, in start
> > 07:02:07 self.virt_env.start(vm_names=vm_names)
> > 07:02:07 File "/usr/lib/python2.7/site-packages/lago/virt.py", line
> > 331, in start
> > 07:02:07 vm.start()
> > 07:02:07 File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py",
> > line 299, in start
> > 07:02:07 return self.provider.start(*args, **kwargs)
> > 07:02:07 File "/usr/lib/python2.7/site-packages/lago/vm.py", line
> > 106, in start
> > 07:02:07 dom = self.libvirt_con.createXML(self._libvirt_xml())
> > 07:02:07 File "/usr/lib64/python2.7/site-packages/libvirt.py", line
> > 3782, in createXML
> > 07:02:07 if ret is None:raise libvirtError('virDomainCreateXML()
> > failed', conn=self)
> > 07:02:07 libvirtError: monitor socket did not show up: No such file or
> > directory
> > 07:02:07 Took 210 seconds
> >
> > The error is apparently unrelated to my patch since: 1. my patch should
> > have nothing to do with VM start; 2. Jenkins has run successfully on the
> > following patch (https://gerrit.ovirt.org/75321). FWIW, the preceding
> > patch (https://gerrit.ovirt.org/75038) has run successfully too.
> >
> > Do you know what's wrong?
> >
> > Thanks,
> > Milan
> > _______________________________________________
> > Infra mailing list
> > Infra(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
--
This message was sent by Atlassian JIRA
(v1000.910.0#100040)
[JIRA] (OVIRT-1324) Re: Jenkins check-merged failure on Vdsm 4.1
by Gil Shinar (oVirt JIRA)
Gil Shinar created OVIRT-1324:
---------------------------------
Summary: Re: Jenkins check-merged failure on Vdsm 4.1
Key: OVIRT-1324
URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1324
Project: oVirt - virtualization made easy
Issue Type: By-EMAIL
Reporter: Gil Shinar
Assignee: infra
Adding infra-support so a ticket will be opened
Milan, is it still relevant?
Thanks
Gil
On Mon, Apr 10, 2017 at 10:56 AM, Milan Zamazal <mzamazal(a)redhat.com> wrote:
> Hi,
>
> after my Vdsm patch https://gerrit.ovirt.org/75329 in ovirt-4.1 branch
> had been merged, Jenkins check-merged job
> http://jenkins.ovirt.org/job/vdsm_4.1_check-merged-el7-x86_64/173/
> failed with the following error:
>
> 07:01:21 @ Start specified VMs:
> 07:01:21 # Start nets:
> 07:01:21 * Create network vdsm_functional_tests_lago:
> 07:01:27 * Create network vdsm_functional_tests_lago: Success (in
> 0:00:05)
> 07:01:27 # Start nets: Success (in 0:00:05)
> 07:01:27 # Start vms:
> 07:01:27 * Starting VM vdsm_functional_tests_host-el7:
> 07:02:07 libvirt: QEMU Driver error : monitor socket did not show up: No
> such file or directory
> 07:02:07 * Starting VM vdsm_functional_tests_host-el7: ERROR (in
> 0:00:40)
> 07:02:07 # Start vms: ERROR (in 0:00:40)
> 07:02:07 # Destroy network vdsm_functional_tests_lago:
> 07:02:07 # Destroy network vdsm_functional_tests_lago: ERROR (in
> 0:00:00)
> 07:02:07 @ Start specified VMs: ERROR (in 0:00:46)
> 07:02:07 Error occured, aborting
> 07:02:07 Traceback (most recent call last):
> 07:02:07 File "/usr/lib/python2.7/site-packages/lago/cmd.py", line
> 936, in main
> 07:02:07 cli_plugins[args.verb].do_run(args)
> 07:02:07 File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py",
> line 184, in do_run
> 07:02:07 self._do_run(**vars(args))
> 07:02:07 File "/usr/lib/python2.7/site-packages/lago/utils.py", line
> 495, in wrapper
> 07:02:07 return func(*args, **kwargs)
> 07:02:07 File "/usr/lib/python2.7/site-packages/lago/utils.py", line
> 506, in wrapper
> 07:02:07 return func(*args, prefix=prefix, **kwargs)
> 07:02:07 File "/usr/lib/python2.7/site-packages/lago/cmd.py", line
> 264, in do_start
> 07:02:07 prefix.start(vm_names=vm_names)
> 07:02:07 File "/usr/lib/python2.7/site-packages/lago/prefix.py", line
> 1033, in start
> 07:02:07 self.virt_env.start(vm_names=vm_names)
> 07:02:07 File "/usr/lib/python2.7/site-packages/lago/virt.py", line
> 331, in start
> 07:02:07 vm.start()
> 07:02:07 File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py",
> line 299, in start
> 07:02:07 return self.provider.start(*args, **kwargs)
> 07:02:07 File "/usr/lib/python2.7/site-packages/lago/vm.py", line
> 106, in start
> 07:02:07 dom = self.libvirt_con.createXML(self._libvirt_xml())
> 07:02:07 File "/usr/lib64/python2.7/site-packages/libvirt.py", line
> 3782, in createXML
> 07:02:07 if ret is None:raise libvirtError('virDomainCreateXML()
> failed', conn=self)
> 07:02:07 libvirtError: monitor socket did not show up: No such file or
> directory
> 07:02:07 Took 210 seconds
>
> The error is apparently unrelated to my patch since: 1. my patch should
> have nothing to do with VM start; 2. Jenkins has run successfully on the
> following patch (https://gerrit.ovirt.org/75321). FWIW, the preceding
> patch (https://gerrit.ovirt.org/75038) has run successfully too.
>
> Do you know what's wrong?
>
> Thanks,
> Milan
> _______________________________________________
> Infra mailing list
> Infra(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
>
--
This message was sent by Atlassian JIRA
(v1000.910.0#100040)
[JIRA] (OVIRT-1323) Re: [ovirt-users] I’m having trouble deleting a test gluster volume
by sbonazzo (oVirt JIRA)
sbonazzo created OVIRT-1323:
-------------------------------
Summary: Re: [ovirt-users] I’m having trouble deleting a test gluster volume
Key: OVIRT-1323
URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1323
Project: oVirt - virtualization made easy
Issue Type: By-EMAIL
Reporter: sbonazzo
Assignee: infra
On Wed, Apr 12, 2017 at 11:05 PM, Precht, Andrew <
Andrew.Precht(a)sjlibrary.org> wrote:
> Hi all,
> In the end, I ran this on each host node and is what worked:
> systemctl stop glusterd && rm -rf /var/lib/glusterd/vols/* && rm -rf
> /var/lib/glusterd/peers/*
>
> Thanks so much for your help.
>
> P.S. I work as a sys admin for the San Jose library. Part of my job
> satisfaction comes from knowing that the work I do here goes directly back
> into this community. We’re fortunate that you, your coworkers, and Red Hat
> do so much to give back. I have to imagine you too feel this sense of
> satisfaction. Thanks again…
>
> P.P.S. I never did hear back from the users(a)ovirt.org mailing list. I did
> fill out the fields on this page: https://lists.ovirt.org/
> mailman/listinfo/users. Yet, every time I send them an email I get: Your
> message to Users awaits moderator approval. Is there a secret handshake
> I’m not aware of?
>
>
Opening a ticket on infra to check your account on the users mailing list.
> Regards,
> Andrew
>
> ------------------------------
> *From:* knarra <knarra(a)redhat.com>
> *Sent:* Wednesday, April 12, 2017 10:01:33 AM
>
> *To:* Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon
> Mureinik; Nir Soffer
> *Cc:* users
> *Subject:* Re: [ovirt-users] I’m having trouble deleting a test gluster
> volume
>
> On 04/12/2017 08:45 PM, Precht, Andrew wrote:
>
> Hi all,
>
> You asked: Any errors in ovirt-engine.log file ?
>
> Yes, in the engine.log this error is repeated about every 3 minutes:
>
> 2017-04-12 07:16:12,554-07 ERROR [org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob]
> (DefaultQuartzScheduler3) [ccc8ed0d-8b91-4397-b6b9-ab0f77c5f7b8] Error
> updating tasks from CLI: org.ovirt.engine.core.common.errors.EngineException:
> EngineException: Command execution failed error: Error : Request timed out return
> code: 1 (Failed with error GlusterVolumeStatusAllFailedException and code
> 4161) error: Error : Request timed out
>
> I am not sure why this says "Request timed out".
>
> 1) gluster volume list -> Still shows the deleted volume (test1)
>
> 2) gluster peer status -> Shows one of the peers twice with different
> UUIDs:
>
> Hostname: 192.168.10.109
> Uuid: 42fbb7de-8e6f-4159-a601-3f858fa65f6c
> State: Peer in Cluster (Connected)
> Hostname: 192.168.10.109
> Uuid: e058babe-7f9d-49fe-a3ea-ccdc98d7e5b5
> State: Peer in Cluster (Connected)
>
> How did this happen? Are the hostnames the same for two hosts?
>
> I tried a gluster volume stop test1, with this result: volume stop:
> test1: failed: Another transaction is in progress for test1. Please try
> again after sometime.
>
> Can you restart glusterd and try to stop and delete the volume?
>
> The etc-glusterfs-glusterd.vol.log shows no activity triggered by trying
> to remove the test1 volume from the UI.
>
> The ovirt-engine.log shows this repeating many times, when trying to
> remove the test1 volume from the UI:
>
> 2017-04-12 07:57:38,049-07 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> (DefaultQuartzScheduler9) [ccc8ed0d-8b91-4397-b6b9-ab0f77c5f7b8] Failed
> to acquire lock and wait lock 'EngineLock:{exclusiveLocks='[
> b0e1b909-9a6a-49dc-8e20-3a027218f7e1=<GLUSTER, ACTION_TYPE_FAILED_GLUSTER_OPERATION_INPROGRESS>]',
> sharedLocks='null'}'
>
> Can you restart the ovirt-engine service, because I see that "failed to acquire
> lock" message. Once ovirt-engine is restarted, whoever is holding the lock
> should release it and things should work fine.
>
> Last but not least, if none of the above works:
>
> Login to all your nodes in the cluster.
> rm -rf /var/lib/glusterd/vols/*
> rm -rf /var/lib/glusterd/peers/*
> systemctl restart glusterd on all the nodes.
>
> Login to UI and see if any volumes / hosts are present. If yes, remove
> them.
>
> This should clear things up for you and you can start from a clean state.
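For illustration only, here is a sketch that scripts the cleanup listed above across every node over plain ssh. The node list and passwordless root ssh access are assumptions; the commands are the ones given in the steps above, and, as noted, this must never be run on a production cluster.

    import subprocess

    # hypothetical node list; the commands are the ones listed in the steps above
    NODES = ['node1.example.com', 'node2.example.com']
    CLEANUP = [
        'systemctl stop glusterd',
        'rm -rf /var/lib/glusterd/vols/*',
        'rm -rf /var/lib/glusterd/peers/*',
        'systemctl restart glusterd',
    ]

    for node in NODES:
        for cmd in CLEANUP:
            # assumes passwordless root ssh to each node
            subprocess.check_call(['ssh', 'root@' + node, cmd])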
>
>
> Thanks much,
>
> Andrew
> ------------------------------
> *From:* knarra <knarra(a)redhat.com> <knarra(a)redhat.com>
> *Sent:* Tuesday, April 11, 2017 11:10:04 PM
> *To:* Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon
> Mureinik; Nir Soffer
> *Cc:* users
> *Subject:* Re: [ovirt-users] I’m having trouble deleting a test gluster
> volume
>
> On 04/12/2017 03:35 AM, Precht, Andrew wrote:
>
> I just noticed this in the Alerts tab: Detected deletion of volume test1
> on cluster 8000-1, and deleted it from engine DB.
>
> Yet, it still shows in the web UI?
>
> Any errors in the ovirt-engine.log file? If the volume is deleted from the DB,
> ideally it should be deleted from the UI too. Can you go to the gluster nodes and
> check the following:
>
> 1) gluster volume list -> should not return anything since you have
> deleted the volumes.
>
> 2) gluster peer status -> on all the nodes should show that all the peers
> are in connected state.
>
> Can you tail -f /var/log/ovirt-engine/ovirt-engine.log and the gluster log
> and capture the error messages when you try deleting the volume from the UI?
>
> The log you pasted in the previous mail only gives info messages, and I could
> not get any details from it on why the volume delete is failing.
>
> ------------------------------
> *From:* Precht, Andrew
> *Sent:* Tuesday, April 11, 2017 2:39:31 PM
> *To:* knarra; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon Mureinik;
> Nir Soffer
> *Cc:* users
> *Subject:* Re: [ovirt-users] I’m having trouble deleting a test gluster
> volume
>
> The plot thickens…
> I put all hosts in the cluster into maintenance mode, with the Stop
> Gluster service checkbox checked. I then deleted the
> /var/lib/glusterd/vols/test1 directory on all hosts. I then took the host
> that the test1 volume was on out of maintenance mode. Then I tried to
> remove the test1 volume from within the web UI. With no luck, I got the
> message: Could not delete Gluster Volume test1 on cluster 8000-1.
>
> I went back and checked all hosts for the test1 directory; it is not on any
> host. Yet I still can’t remove it…
>
> Any suggestions?
>
> ------------------------------
> *From:* Precht, Andrew
> *Sent:* Tuesday, April 11, 2017 1:15:22 PM
> *To:* knarra; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon Mureinik;
> Nir Soffer
> *Cc:* users
> *Subject:* Re: [ovirt-users] I’m having trouble deleting a test gluster
> volume
>
> Here is an update…
>
> I checked the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log on the
> node that had the trouble volume (test1). I didn’t see any errors. So, I
> ran a tail -f on the log as I tried to remove the volume using the web UI.
> here is what was appended:
>
> [2017-04-11 19:48:40.756360] I [MSGID: 106487] [glusterd-handler.c:1474:__
> glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
> [2017-04-11 19:48:42.238840] I [MSGID: 106488] [glusterd-handler.c:1537:__
> glusterd_handle_cli_get_volume] 0-management: Received get vol req
> The message "I [MSGID: 106487] [glusterd-handler.c:1474:__
> glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req"
> repeated 6 times between [2017-04-11 19:48:40.756360] and [2017-04-11
> 19:49:32.596536]
> The message "I [MSGID: 106488] [glusterd-handler.c:1537:__
> glusterd_handle_cli_get_volume] 0-management: Received get vol req"
> repeated 20 times between [2017-04-11 19:48:42.238840] and [2017-04-11
> 19:49:34.082179]
> [2017-04-11 19:51:41.556077] I [MSGID: 106487] [glusterd-handler.c:1474:__
> glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req
>
> I’m seeing that the timestamps on these log entries do not match the time
> on the node.
>
> The next steps
> I stopped the glusterd service on the node with volume test1
> I deleted it with: rm -rf /var/lib/glusterd/vols/test1
> I started the glusterd service.
>
> After starting the gluster service back up, the directory
> /var/lib/glusterd/vols/test1 reappears.
> I’m guessing syncing with the other nodes?
> Is this because I have the Volume Option: auth allow *?
> Do I need to remove the directory /var/lib/glusterd/vols/test1 on all
> nodes in the cluster individually?
>
> thanks
>
> ------------------------------
> *From:* knarra <knarra(a)redhat.com> <knarra(a)redhat.com>
> *Sent:* Tuesday, April 11, 2017 11:51:18 AM
> *To:* Precht, Andrew; Sandro Bonazzola; Sahina Bose; Tal Nisan; Allon
> Mureinik; Nir Soffer
> *Cc:* users
> *Subject:* Re: [ovirt-users] I’m having trouble deleting a test gluster
> volume
>
> On 04/11/2017 11:28 PM, Precht, Andrew wrote:
>
> Hi all,
> The node is oVirt Node 4.1.1 with glusterfs-3.8.10-1.el7.
> On the node I cannot find /var/log/glusterfs/glusterd.log. However, there
> is a /var/log/glusterfs/glustershd.log.
>
> Can you check if /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
> exists? If yes, can you check if there is any error present in that file?
>
>
> What happens if I follow the four steps outlined here to remove the volume
> from the node, *BUT* I do have another volume present in the cluster? It
> too is a test volume. Neither one has any data on it, so data loss is
> not an issue.
>
> Running those four steps will remove the volume from your cluster. If the
> volumes you have are test volumes, you could just follow the steps
> outlined to delete them (since you are not able to delete them from the UI) and
> bring the cluster back to a normal state.
>
>
> ------------------------------
> *From:* knarra <knarra(a)redhat.com> <knarra(a)redhat.com>
> *Sent:* Tuesday, April 11, 2017 10:32:27 AM
> *To:* Sandro Bonazzola; Precht, Andrew; Sahina Bose; Tal Nisan; Allon
> Mureinik; Nir Soffer
> *Cc:* users
> *Subject:* Re: [ovirt-users] I’m having trouble deleting a test gluster
> volume
>
> On 04/11/2017 10:44 PM, Sandro Bonazzola wrote:
>
> Adding some people
>
> On 11/Apr/2017 19:06, "Precht, Andrew" <Andrew.Precht(a)sjlibrary.org> wrote:
>
>> Hi Ovirt users,
>> I’m a newbie to oVirt and I’m having trouble deleting a test gluster
>> volume. The nodes are 4.1.1 and the engine is 4.1.0
>>
>> When I try to remove the test volume, I click Remove, the dialog box
>> prompting to confirm the deletion pops up and after I click OK, the dialog
>> box changes to show a little spinning wheel and then it disappears. In the
>> end the volume is still there.
>>
> With the latest version of glusterfs & oVirt we do not see any issue with
> deleting a volume. Can you please check the /var/log/glusterfs/glusterd.log
> file for any errors?
>
>
> The test volume was distributed with two host members. One of the hosts I
> was able to remove from the volume by removing the host from the cluster.
>> When I try to remove the remaining host in the volume, even with the “Force
>> Remove” box ticked, I get this response: Cannot remove Host. Server having
>> Gluster volume.
>>
>> What to try next?
>>
> Since you have already removed the volume from one host in the cluster and
> you still see it on another host, you can do the following to remove the
> volume from that host.
>
> 1) Login to the host where the volume is present.
> 2) cd to /var/lib/glusterd/vols
> 3) rm -rf <vol_name>
> 4) Restart glusterd on that host.
>
> And before doing the above make sure that you do not have any other volume
> present in the cluster.
>
> The above steps should not be run on a production system, as you might lose
> the volume and data.
>
> Now removing the host from the UI should succeed.
>
>
>> P.S. I’ve tried to join this user group several times in the past, with
>> no response.
>> Is it possible for me to join this group?
>>
>> Regards,
>> Andrew
>>
>>
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
>
>
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
--
This message was sent by Atlassian JIRA
(v1000.910.0#100040)
[oVirt Jenkins] ovirt_master_hc-system-tests - Build # 73 - Failure!
by jenkins@jenkins.phx.ovirt.org
Project: http://jenkins.ovirt.org/job/ovirt_master_hc-system-tests/
Build: http://jenkins.ovirt.org/job/ovirt_master_hc-system-tests/73/
Build Number: 73
Build Status: Failure
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #73
[Yaniv Kaul] Fixed NTP configuration on Engine.
[Sandro Bonazzola] publisher: drop 3.6 publisher
[Sandro Bonazzola] publisher: drop 4.0 publisher
-----------------
Failed Tests:
-----------------
1 tests failed.
FAILED: 002_bootstrap.add_hosts
Error Message:
status: 404
reason: Not Found
detail:
<html><head><title>Error</title></head><body>404 - Not Found</body></html>
-------------------- >> begin captured logging << --------------------
ovirtlago.testlib: ERROR: * Unhandled exception in <function _host_is_up at 0x3b26ed8>
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 217, in assert_equals_within
res = func()
File "/home/jenkins/workspace/ovirt_master_hc-system-tests/ovirt-system-tests/hc-basic-suite-master/test-scenarios/002_bootstrap.py", line 145, in _host_is_up
cur_state = api.hosts.get(host.name()).status.state
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line 18338, in get
headers={"All-Content":all_content}
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 46, in get
return self.request(method='GET', url=url, headers=headers, cls=cls)
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 122, in request
persistent_auth=self.__persistent_auth
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 79, in do_request
persistent_auth)
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 162, in __do_request
raise errors.RequestError(response_code, response_reason, response_body)
RequestError:
status: 404
reason: Not Found
detail:
<html><head><title>Error</title></head><body>404 - Not Found</body></html>
--------------------- >> end captured logging << ---------------------
Stack Trace:
File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
testMethod()
File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129, in wrapped_test
test()
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in wrapper
return func(get_test_prefix(), *args, **kwargs)
File "/home/jenkins/workspace/ovirt_master_hc-system-tests/ovirt-system-tests/hc-basic-suite-master/test-scenarios/002_bootstrap.py", line 164, in add_hosts
testlib.assert_true_within(_host_is_up, timeout=15 * 60)
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 256, in assert_true_within
assert_equals_within(func, True, timeout, allowed_exceptions)
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 217, in assert_equals_within
res = func()
File "/home/jenkins/workspace/ovirt_master_hc-system-tests/ovirt-system-tests/hc-basic-suite-master/test-scenarios/002_bootstrap.py", line 145, in _host_is_up
cur_state = api.hosts.get(host.name()).status.state
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line 18338, in get
headers={"All-Content":all_content}
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 46, in get
return self.request(method='GET', url=url, headers=headers, cls=cls)
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 122, in request
persistent_auth=self.__persistent_auth
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 79, in do_request
persistent_auth)
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 162, in __do_request
raise errors.RequestError(response_code, response_reason, response_body)
status: 404
reason: Not Found
detail:
<html><head><title>Error</title></head><body>404 - Not Found</body></html>
-------------------- >> begin captured logging << --------------------
ovirtlago.testlib: ERROR: * Unhandled exception in <function _host_is_up at 0x3b26ed8>
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 217, in assert_equals_within
res = func()
File "/home/jenkins/workspace/ovirt_master_hc-system-tests/ovirt-system-tests/hc-basic-suite-master/test-scenarios/002_bootstrap.py", line 145, in _host_is_up
cur_state = api.hosts.get(host.name()).status.state
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/brokers.py", line 18338, in get
headers={"All-Content":all_content}
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 46, in get
return self.request(method='GET', url=url, headers=headers, cls=cls)
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line 122, in request
persistent_auth=self.__persistent_auth
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 79, in do_request
persistent_auth)
File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 162, in __do_request
raise errors.RequestError(response_code, response_reason, response_body)
RequestError:
status: 404
reason: Not Found
detail:
<html><head><title>Error</title></head><body>404 - Not Found</body></html>
--------------------- >> end captured logging << ---------------------
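For context, the failing assertion above is a simple poll-until-true helper with a timeout. Here is a self-contained sketch of that pattern; the names are hypothetical and this is not the actual ovirtlago implementation.

    import time

    def assert_true_within(condition, timeout, interval=3):
        """Poll `condition` until it returns True or `timeout` seconds elapse."""
        deadline = time.time() + timeout
        last_error = None
        while time.time() < deadline:
            try:
                if condition():
                    return
            except Exception as err:  # e.g. a transient 404 from the REST API
                last_error = err
            time.sleep(interval)
        raise AssertionError('condition not met within %ss (last error: %r)'
                             % (timeout, last_error))

    # hypothetical usage mirroring the test above:
    # assert_true_within(_host_is_up, timeout=15 * 60)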
oVirt infra daily report - unstable production jobs - 298
by jenkins@jenkins.phx.ovirt.org
Good morning!
Attached is the HTML page with the jenkins status report. You can see it also here:
- http://jenkins.ovirt.org/job/system_jenkins-report/298//artifact/exported...
Cheers,
Jenkins
[Attachment: upstream_report.html]
RHEVM CI Jenkins Daily Report - 20/04/2017
00 Unstable Critical (http://jenkins.ovirt.org/)
Unstable jobs listed in the report:
- ovirt_3.6_he-system-tests: http://jenkins.ovirt.org/job/ovirt_3.6_he-system-tests/
- ovirt_4.1_image-ng-system-tests: http://jenkins.ovirt.org/job/ovirt_4.1_image-ng-system-tests/
- ovirt_master-ansible-system-tests: http://jenkins.ovirt.org/job/ovirt_master-ansible-system-tests/
- ovirt_master_hc-system-tests: http://jenkins.ovirt.org/job/ovirt_master_hc-system-tests/
- test-repo_ovirt_experimental_master-dry_run: http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master-dry_run/
Each job entry notes that it is automatically updated by Jenkins Job Builder and that permanent changes should be made in the jenkins repo.
Fwd: Announcing Bugzilla 5 Public Beta!
by Sandro Bonazzola
---------- Forwarded message ----------
From: Christine Freitas <cfreitas(a)redhat.com>
Date: Thu, Apr 20, 2017 at 12:45 AM
Subject: Announcing Bugzilla 5 Public Beta!
Hello All,
We are pleased to announce Red Hat's Bugzilla 5 beta [1]! We’re inviting
all of you to participate.
We encourage you to test your current scripts against this new version and
take part in the beta discussions on the Fedora development list [2].
Partners and customers may also use their existing communications channels
to share feedback or questions. We ask that you provide feedback or
questions by Wednesday, May 17th.
Here is a short list of some of the changes in Bugzilla 5:
- Major improvements in the WebServices interface, including a new
  REST-like endpoint, allowing clients to access data using standard HTTP
  calls for easy development.
- The UI has been significantly overhauled for a modern browsing
  experience.
- Performance improvements, including caching improvements to allow faster
  access to certain types of data.
- Red Hat Associates, Customers and Fedora Account System users can now
  log in using SAML.
- The addition of some of the Bayoteers extensions allowing features such
  as inline editing of bugs in search results, team management and scrum
  tools, etc.
- Ye Olde diff viewer has been replaced with the modern diff2html diff
  viewer.
- Improved, updated documentation, including a rewrite using the
  reStructuredText format, which allows documentation to be more easily
  converted into different formats such as HTML and PDF.
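As a minimal sketch of the new REST-like endpoint: the /rest/bug/<id> route, the include_fields parameter, and the response shape below are assumptions based on the upstream Bugzilla 5 API, so check the API documentation linked below for the authoritative details. The requests library is assumed to be available.

    import requests  # assumed available; any HTTP client works

    # hypothetical bug id against the beta instance
    BASE = 'https://beta-bugzilla.redhat.com/rest'
    resp = requests.get(BASE + '/bug/1000000',
                        params={'include_fields': 'id,summary,status'})
    resp.raise_for_status()
    for bug in resp.json().get('bugs', []):
        print(bug['id'], bug['status'], bug['summary'])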
The official release date for Bugzilla 5 will be determined based on the
beta feedback. We will keep you updated as the beta progresses.
For more information refer to:
https://beta-bugzilla.redhat.com/page.cgi?id=whats-new.html
https://beta-bugzilla.redhat.com/page.cgi?id=release-notes.html
https://beta-bugzilla.redhat.com/page.cgi?id=faq.html
https://beta-bugzilla.redhat.com/docs/en/html/using/index.html
https://beta-bugzilla.redhat.com/docs/en/html/api/index.html
Cheers, the Red Hat Bugzilla team.
1: https://beta-bugzilla.redhat.com/
2: https://lists.fedoraproject.org/archives/list/devel%40lists.fedoraproject.org/
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Build failed in Jenkins: deploy-to-ovirt_experimental_4.1 #3832
by jenkins@jenkins.phx.ovirt.org
See <http://jenkins.ovirt.org/job/deploy-to-ovirt_experimental_4.1/3832/displa...>
------------------------------------------
[...truncated 6.03 KB...]
+ (( i < 180 ))
+ sleep 5
+ (( i++ ))
[...the same check/sleep/increment trace repeats until the retry counter reaches 180...]
+ (( i < 180 ))
+ echo 'Timed out waiting for lock'
Timed out waiting for lock
+ exit 1
Build step 'Execute shell' marked build as failure
[ssh-agent] Stopped.
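The console trace above is just a poll loop: up to 180 iterations of sleeping 5 seconds (roughly 15 minutes) waiting for a deploy lock before giving up. An equivalent sketch in Python, where the lock-checking predicate and lock path are hypothetical:

    import time

    def wait_for_lock(lock_is_free, retries=180, interval=5):
        """Mirror the shell loop above: poll up to retries*interval seconds."""
        for _ in range(retries):
            if lock_is_free():
                return True
            time.sleep(interval)
        return False

    # hypothetical usage; in the job the check is done by the deploy script itself:
    # if not wait_for_lock(lambda: not os.path.exists('/var/lock/deploy.lock')):
    #     print('Timed out waiting for lock')
    #     raise SystemExit(1)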
[JIRA] (OVIRT-1322) remove duplicate emails from Gerrit DB
by Evgheni Dereveanchin (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-1322?page=com.atlassian.jir... ]
Evgheni Dereveanchin reassigned OVIRT-1322:
-------------------------------------------
Assignee: Evgheni Dereveanchin (was: infra)
> remove duplicate emails from Gerrit DB
> --------------------------------------
>
> Key: OVIRT-1322
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-1322
> Project: oVirt - virtualization made easy
> Issue Type: Improvement
> Components: Gerrit/git
> Reporter: Evgheni Dereveanchin
> Assignee: Evgheni Dereveanchin
>
> Since the reviewers plugin was installed, there have been several examples of failure to push because the committer's email is present multiple times in our DB (see OVIRT-1306).
> Checking the DB, there are still dozens of emails listed twice or more, which can potentially cause problems with the reviewers plugin.
> Let's clear emails on all inactive accounts, as these are not used anyway and are mostly causing the problem.
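As an illustration of the cleanup, here is a sketch that flags e-mail addresses appearing on more than one account, given rows exported from the Gerrit database; the row layout is an assumption for the example, not the actual schema.

    from collections import defaultdict

    # hypothetical export: (account_id, email, active) tuples dumped from the DB
    rows = [
        (1001, 'dev@example.org', True),
        (1002, 'dev@example.org', False),
        (1003, 'other@example.org', True),
    ]

    by_email = defaultdict(list)
    for account_id, email, active in rows:
        if email:
            by_email[email].append((account_id, active))

    for email, accounts in by_email.items():
        if len(accounts) > 1:
            # inactive duplicates are the candidates for clearing, per the ticket
            inactive = [aid for aid, is_active in accounts if not is_active]
            print('%s used by accounts %s; inactive duplicates: %s'
                  % (email, [aid for aid, _ in accounts], inactive))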
--
This message was sent by Atlassian JIRA
(v1000.892.2#100040)