[OST] HE basic suite failing on reposync
by Ales Musil
Hello,
I am trying to run he-basic-ansible-suite-master locally, but it keeps
failing on reposync:
reposync command failed for repoid: ovirt-appliance-master-el7
Any idea what could be wrong?
Thank you.
Regards,
Ales
--
ALES MUSIL
Associate Software Engineer - rhv network
Red Hat EMEA <https://www.redhat.com/>
amusil(a)redhat.com IM: amusil
<https://red.ht/sig>
5 years, 11 months
jenkins dead or barely alive?
by Michal Skrivanek
Hey,
It didn’t work well yesterday evening (taking 10+ minutes to trigger CI), and this morning it’s even worse. Logging in to trigger it manually takes ~5 minutes, and every action there seems to take ages...
Thanks,
michal
5 years, 11 months
[IMPORTANT] Upgrading to PG 10.x in version 4.3 on f28/CentOS 7.5, instructions for developers, please read before you re-base on master
by Eli Mesika
Hi
We will merge a patch[1] today that will require the upcoming 4.3 version
to use Postgres 10 on f28 and Postgres SCL 10 (Software Collections) on
CentOS 7.5
* If you are using f28 and creating a new DB schema for development (having
no 9.x PG data), then you are done and this change should be transparent to you.
* If you are using f27, have old 9.x PG data, and upgrade to f28, then
you will have to upgrade your old PG data after moving to f28, according to
the instructions in [2]
* If you are using CentOS then you are running PG SCL 9.5 and should
upgrade to PG SCL 10:
1. Upgrade to CentOS 7.5 (ensure that you have the SCL repo [3] after the
upgrade)
2. #yum update -y # ensure that your OS has the latest patches
3. #service rh-postgresql95-postgresql stop
4. #yum install rh-postgresql10 rh-postgresql10-postgresql-contrib -y
5. #/opt/rh/rh-postgresql10/root/bin/postgresql-setup --upgrade
--upgrade-from="rh-postgresql95-postgresql" # If you have junk schemas,
drop them first; the upgrade of each one takes time.
6. #service rh-postgresql10-postgresql start # check status to see that it
is active
7. #systemctl enable rh-postgresql10-postgresql # enable the service on boot
8. Ensure that your patches are rebased on the latest master
9. Remove all <target> binaries created with "#make clean install-dev
PREFIX=<target>....."
10. Compile (make install-dev..) and set up (engine-setup), then run your
patches again (the git branches you are working on)
11. Ensure that basic operations (adding a new DC/cluster, for example) are
working
12. Track engine.log / server.log and PG logs for database errors
13. # yum remove rh-postgresql95 rh-postgresql95-postgresql-contrib -y
(only if all of the above succeeded)
[1] https://gerrit.ovirt.org/#/c/95100/
[2]
https://docs.bmc.com/docs/btco113/migrating-the-data-from-postgresql-9-x-...
[3] https://wiki.centos.org/AdditionalResources/Repositories/SCL
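For convenience, here is the CentOS path above (steps 2-7) collected into one shell
sketch; the package, service and binary paths are exactly the ones listed in the steps,
but verify each command on your own host (ideally on a snapshot) before running:

    # Run after upgrading to CentOS 7.5 with the SCL repo [3] enabled (step 1)
    yum update -y                                           # latest OS patches
    service rh-postgresql95-postgresql stop
    yum install -y rh-postgresql10 rh-postgresql10-postgresql-contrib
    /opt/rh/rh-postgresql10/root/bin/postgresql-setup --upgrade \
        --upgrade-from="rh-postgresql95-postgresql"         # drop junk schemas first, each one takes time
    service rh-postgresql10-postgresql start
    service rh-postgresql10-postgresql status               # confirm it is active
    systemctl enable rh-postgresql10-postgresql             # start on boot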
If you are developing from a VM then it is always recommended to create a
"before upgrade" snapshot of the VM.
(I know snapshots are not supported from the user portal, but you can ask your
admin for the access rights needed to do this from the admin portal.)
If you have any questions/problems, please contact me.
Eli Mesika (emesika on IRC)
Best Regards
Eli
5 years, 11 months
oVirt PPC support
by Alexandre Bencz
Hi
I have an IBM Power7 machine. I saw that oVirt support is for PPC8, but I would like to know if it is possible to use oVirt on PowerPC 7, or if there is any internal limitation in the implementation of oVirt that prevents it from working on PPC7.
5 years, 11 months
Re: Any way to make VM Hidden/Locked for access
by Pavan Chavva
+Ovirt Devel and Adelino.
FYI.
Best,
Pavan.
On Wed, Nov 28, 2018 at 8:21 AM Mahesh Falmari <Mahesh.Falmari(a)veritas.com>
wrote:
> Hi Nir,
>
> Just wanted to check if there is any way to create a VM through the API and
> make this VM hidden or locked, so that it is not shown/accessible to the
> users until we explicitly enable it again.
>
> The use case here is VM recovery, where we would like to create the VM
> first and check whether there is any issue in VM creation, so that we can
> fail in the first phase itself, before the actual data transfer takes place.
> The issue we may see here is that once the VM gets created, it becomes
> accessible to users who may play with it, and that would impact the subsequent
> recovery operations.
>
>
>
> Thanks & Regards,
> Mahesh Falmari
>
>
>
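A minimal sketch of that first phase - creating the VM up front so a failure
surfaces before any data transfer - against the oVirt REST API. The engine URL,
credentials, cluster and template names below are placeholders, and note that
this call has no documented option to hide or lock the resulting VM:

    # Placeholders throughout: engine URL, credentials, cluster and template names.
    curl -s -k -u 'admin@internal:password' \
         -H 'Content-Type: application/xml' \
         -H 'Accept: application/xml' \
         -X POST 'https://engine.example.com/ovirt-engine/api/vms' \
         -d '<vm>
               <name>recovered-vm</name>
               <cluster><name>Default</name></cluster>
               <template><name>Blank</name></template>
             </vm>'
    # If this first call fails, the recovery can stop here, before any data transfer.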
--
PAVAN KUMAR CHAVVA
ENGINEERING PARTNER MANAGER
Red Hat
pchavva(a)redhat.com M: 4793219099 IM: pchavva
5 years, 11 months
[Ovirt] [CQ weekly status] [30-11-2018]
by Dafna Ron
Hi,
This mail is to provide the current status of CQ and allow people to review
the status before and after the weekend.
Please refer to the colour map below for further information on the meaning of
the colours.
*CQ-4.2*: RED (#1)
I checked the last date on which ovirt-engine and vdsm passed and moved packages to
tested (as they are the bigger projects), and it was 27-11-2018.
We have been having sporadic failures for most of the projects on the test
check_snapshot_with_memory.
We have deduced that this is caused by a code regression in storage, based
on the following:
1. Evgheni and Gal helped debug this issue to rule out lago and infra issues
as the cause of failure, and both determined the issue is a code regression
- most likely in storage.
2. The failure only happens on the 4.2 branch.
3. The failure itself is that a VM cannot run due to low disk space in the storage
domain, and we cannot see any failures which would leave leftovers in
the storage domain.
Dan and Ryan are actively involved in trying to find the regression, but the
consensus is that this is a storage-related regression, and *we are having a
problem getting the storage team to join us in debugging the issue.*
I prepared a patch to skip the test in case we cannot get cooperation from the
storage team and resolve this regression in the next few days:
https://gerrit.ovirt.org/#/c/95889/
*CQ-Master:* YELLOW (#1)
We have failures which CQ is still bisecting, and until that is done we cannot
point to any specific failing projects.
Happy week!
Dafna
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test
coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** intermittent failures are consistent with a healthy project, as we expect a
number of failures during the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active Failures. The colour will change based on the amount of time the
project/s has been broken. Only active regressions would be reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
5 years, 11 months
otopi defaults again to python3
by Yedidyah Bar David
Hi all,
As some of you might recall, some time ago we made otopi default to
python3, and quickly reverted that, realizing it caused too much
breakage.
Now things should hopefully be more stable, and I have now merged a patch
to default to python3 again.
Current status:
engine-setup works with python3 on fedora.
host-deploy works with python3 on fedora, with the engine on either el7 or
fedora. I didn't try host-deploy on el7; it might work as well.
hosted-engine --deploy is most likely broken on fedora, but I think it
was already broken. We are working on that, but it will require some
more time - notably, having stuff from vdsm on python3 (if not fully
porting vdsm to python3, which I understand will take even more time).
If you want to use python2, you can do that with:
OTOPI_PYTHON=/bin/python hosted-engine --deploy
On el7, if you manually added python3 (which is not available in the
base repos), things will break - use the above workaround if needed.
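A short sketch of that workaround; OTOPI_PYTHON and the hosted-engine line are
from the message above, while applying the same override to other otopi-based
tools (e.g. engine-setup) is my assumption and is unverified:

    # Force an otopi-based tool to run under python2:
    OTOPI_PYTHON=/bin/python hosted-engine --deploy
    # Assumption (unverified): the same override for other otopi consumers, e.g.:
    OTOPI_PYTHON=/bin/python engine-setup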
Best regards,
--
Didi
5 years, 11 months