oVirt s390x support - some questions for maintainers
by Barak Korren
Hi to all project maintainers,
As you may know, over the last few months the oVirt project has received
some code contributions geared towards enabling the use of s390x
(Mainframe) machines as hypervisor hosts.
As you may also know, if you've followed the relevant thread, some
work has been done in collaboration with the Fedora community to
enable s390x support in oVirt's CI system.
We're now at the point where we can take the final step and enable
automated builds of node components for s390x/fc27. Looking at what we
currently build for ppc64le [1], I already took the time and submitted
patches to enable build jobs for vdsm [2], ovirt-host [3], and
ioprocess [4]. The relevant maintainers should have these patches in
their inboxes already.
A few questions remain, however:
1. When would be the best time to merge the patches mentioned above?
Given that some of the projects do not support fc27 yet, that the new
build jobs may raise issues, and that the 4.2 release is fast
approaching, the right timing should be considered carefully.
2. Which additional projects need to be built? I can see we build some SDK
components for ppc64le as well; are those dependencies of vdsm? Will we
need to build them for s390x?
[1]: http://jenkins.ovirt.org/search/?q=master_build-artifacts-el7-ppc64le
[2]: https://gerrit.ovirt.org/c/85487
[3]: https://gerrit.ovirt.org/c/85486
[4]: https://gerrit.ovirt.org/c/85485
--
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
[ovirt-users] cloud-init not working on master/4.2?
by Roy Golan
I had problems bootstrapping my glance images: cloud-init seemed to ignore
my password changes, and then I realized the cdrom was missing from the domxml.
To solve it I disabled the DomainXML feature (the engine preparing the domxml,
not vdsm) and it started working.
Before I open a bug can anyone try to reproduce on 4.2?
The workaround with engine-config:
echo DomainXML | engine-config -s DomainXML=false --cver=4.2 -p /dev/stdin
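For anyone trying to reproduce: a quick way to check for the symptom is to
look at the running VM's domxml on the host and see whether the cloud-init
payload cdrom is there at all. A sketch (the VM name is a placeholder, and
using virsh read-only mode avoids needing vdsm's sasl credentials):

```shell
# Dump the domxml of the running VM (read-only connection) and look
# for a cdrom device; with the bug, no cloud-init cdrom appears here.
virsh -r dumpxml my-vm | grep -A3 "device='cdrom'"
```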
WFLYCTL0348: Timeout after [300] seconds waiting for service container stability
by Yedidyah Bar David
Hi all,
Can someone please help debug this failure?
It happens to me when deploying hosted-engine --ansible using
ovirt-system-tests. So far it has happened once when I tried manually and
once in jenkins [1] (/var/log/ovirt-engine) [2] (server.log):
2018-01-01 08:19:14,375-05 INFO [org.jboss.as.clustering.infinispan]
(ServerService Thread Pool -- 57) WFLYCLINF0002: Started timeout-base
cache from ovirt-engine container
2018-01-01 08:23:49,407-05 ERROR
[org.jboss.as.controller.management-operation] (Controller Boot
Thread) WFLYCTL0348: Timeout after [300] seconds waiting for service
container stability. Operation will roll back. Step that first updated
the service container was 'add' at address '[
("core-service" => "management"),
("management-interface" => "native-interface")
]'
On my own system, the HE HA agent gave up, stopped the machine, started it
again, and then the engine did start well. In jenkins, we do not give it
enough time for this. I have now pushed a patch [3] to increase the timeout
to 20 minutes (from 10), but this is just to see if it works then. For now,
at least.
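Independently of the OST-side timeout, the WildFly timeout that produces
WFLYCTL0348 (300 seconds by default) is controlled by the
jboss.as.management.blocking.timeout system property. If we wanted the
server itself to wait longer for container stability, something like the
following could be tried (a sketch; exactly where ovirt-engine picks up
extra JVM options is an assumption, for a plain WildFly this would go in
standalone.conf):

```shell
# Raise WildFly's service-container stability timeout from 300s to 600s.
# Config fragment only -- not a complete startup script.
JAVA_OPTS="$JAVA_OPTS -Djboss.as.management.blocking.timeout=600"
```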
Thanks,
[1]
http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x8...
[2]
http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x8...
[3] https://gerrit.ovirt.org/85856
--
Didi