Fwd: [CentOS-announce] Announcing the release of Gluster 4.1 on CentOS Linux 7 x86_64
by Sandro Bonazzola
Hi,
as you can read below, the Gluster 4.1 LTM release is available in CentOS.
We are currently using 3.12 in oVirt 4.2, but that release goes EOL in 6
months.
I would suggest moving 4.3 / Master to Gluster 4.1 now. Any objections?
If there are no objections, I will start pushing patches for it on Monday, July 16th.
---------- Forwarded message ----------
From: Niels de Vos <ndevos(a)redhat.com>
Date: 2018-06-27 16:42 GMT+02:00
Subject: [CentOS-announce] Announcing the release of Gluster 4.1 on CentOS
Linux 7 x86_64
To: centos-announce(a)centos.org
I am happy to announce the General Availability of Gluster 4.1 for
CentOS 7 on x86_64. These packages follow the upstream Gluster
Community releases and will receive monthly bugfix updates.
Gluster 4.1 is a Long-Term-Maintenance release, and will receive
updates for approximately 18 months. The difference between
Long-Term-Maintenance and Short-Term-Maintenance releases is explained
on the Gluster release schedule page:
https://www.gluster.org/community/release-schedule/
Users of CentOS 7 can now install Gluster 4.1 with just these two
commands:
# yum install centos-release-gluster
# yum install glusterfs-server
The centos-release-gluster package is delivered via the CentOS Extras
repository. It contains all the metadata and dependency information needed
to install Gluster 4.1. The actual package that will get installed is
centos-release-gluster41. Users of the now End-Of-Life
Short-Term-Maintenance Gluster 4.0 will automatically get the update to
Gluster 4.1, whereas users of Gluster 3.12 can stay on that
Long-Term-Maintenance release for another six months.
Users of Gluster 3.10 will need to manually upgrade by uninstalling the
centos-release-gluster310 package, and replacing it with either the
Gluster 4.1 or 3.12 version. Additional details about the upgrade
process are linked in the announcement from the Gluster Community:
https://lists.gluster.org/pipermail/announce/2018-June/000102.html
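In practice, the manual step for 3.10 users boils down to something like the
following (centos-release-gluster41 is named in this announcement; the
centos-release-gluster312 name is only inferred from the same naming
pattern, so check the linked upgrade notes before running this on a real
system):
  yum remove centos-release-gluster310
  yum install centos-release-gluster41    # or centos-release-gluster312
  yum update glusterfs-server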
A quickstart guide built specifically around these packages is also
available; it makes for a good introduction to Gluster and will help get
you started in just a few simple steps. The quickstart is available at
https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart
More details about the packages that the Gluster project provides in the
Storage SIG are available in the documentation:
https://wiki.centos.org/SpecialInterestGroup/Storage/Gluster
The centos-release-gluster* repositories offer additional packages that
enhance the usability of Gluster itself. Utilities and tools that worked
with previous versions of Gluster are expected to keep working. If there
are any problems, or requests for additional tools and applications to be
provided, just send us an email with your suggestions. The current list of
packages that are (or are planned to become) available can be found here:
https://wiki.centos.org/SpecialInterestGroup/Storage/Gluster/Ecosystem-pkgs
We welcome all feedback, comments and contributions. You can get in
touch with the CentOS Storage SIG on the centos-devel mailing list
(https://lists.centos.org ) and with the Gluster developer and user
communities at https://www.gluster.org/mailman/listinfo . We are also
available on IRC in #gluster on irc.freenode.net, and on Twitter at
@gluster .
Cheers,
Niels de Vos
Storage SIG member & Gluster maintainer
_______________________________________________
CentOS-announce mailing list
CentOS-announce(a)centos.org
https://lists.centos.org/mailman/listinfo/centos-announce
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
ovirt-system-tests_he-basic-iscsi-suite-master fails due to timeout waiting for StoragePool to go up after vdsmd restart
by Sandro Bonazzola
Failing job is:
https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-iscsi-suite-mas...
My findings:
2018-07-06 03:28:05,461::008_restart_he_vm.py::_shutdown_he_vm::123::root::INFO::
* VM is down.
2018-07-06 03:28:05,461::008_restart_he_vm.py::_restart_services::127::root::INFO::
* Restarting services...
2018-07-06 03:28:05,729::log_utils.py::__exit__::611::lago.ssh::DEBUG::end
task:f7db7960-b541-4d18-9a73-45b3d0677f03:Get ssh client for
lago-he-basic-iscsi-suite-master-host-0:
2018-07-06 03:28:06,031::ssh.py::ssh::58::lago.ssh::DEBUG::Running
98d0de9e on lago-he-basic-iscsi-suite-master-host-0: systemctl restart
vdsmd ovirt-ha-broker ovirt-ha-agent
2018-07-06 03:28:22,887::ssh.py::ssh::81::lago.ssh::DEBUG::Command
98d0de9e on lago-he-basic-iscsi-suite-master-host-0 returned with 0
It then waits for the engine to be up again, until it gives up 10 minutes later:
2018-07-06 03:38:20,979::log_utils.py::__enter__::600::lago.ssh::DEBUG::start
task:f44bb656-dfe3-4f57-a6c8-e7e208863054:Get ssh client for
lago-he-basic-iscsi-suite-master-host-0:
2018-07-06 03:38:21,290::log_utils.py::__exit__::611::lago.ssh::DEBUG::end
task:f44bb656-dfe3-4f57-a6c8-e7e208863054:Get ssh client for
lago-he-basic-iscsi-suite-master-host-0:
2018-07-06 03:38:21,591::ssh.py::ssh::58::lago.ssh::DEBUG::Running
07b7f08a on lago-he-basic-iscsi-suite-master-host-0: hosted-engine
--vm-status
2018-07-06 03:38:21,637::ssh.py::ssh::81::lago.ssh::DEBUG::Command
07b7f08a on lago-he-basic-iscsi-suite-master-host-0 returned with 1
2018-07-06 03:38:21,637::ssh.py::ssh::89::lago.ssh::DEBUG::Command
07b7f08a on lago-he-basic-iscsi-suite-master-host-0 output:
The hosted engine configuration has not been retrieved from shared
storage. Please ensure that ovirt-ha-agent is running and the storage
server is reachable.
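In shell terms, the test is essentially doing the equivalent of the
following on host-0 (a simplified sketch of what the logs above show, not
the actual suite code):
  systemctl restart vdsmd ovirt-ha-broker ovirt-ha-agent
  # poll for up to ~10 minutes until hosted-engine answers again
  timeout 600 bash -c 'until hosted-engine --vm-status; do sleep 10; done'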
On host 0, vdsm.log shows it's restarting:
2018-07-05 23:28:19,646-0400 INFO (MainThread) [vds] Exiting (vdsmd:171)
2018-07-05 23:28:23,514-0400 INFO (MainThread) [vds] (PID: 19942) I
am the actual vdsm 4.30.0-465.git1ad18aa.el7
lago-he-basic-iscsi-suite-master-host-0 (3.10.0-862.2.3.el7.x86_64)
(vdsmd:149)
vdsm then spends the above-mentioned 10 minutes waiting for the storage
pool to go up:
2018-07-05 23:38:20,739-0400 INFO (vmrecovery) [vds] recovery:
waiting for storage pool to go up (clientIF:704)
while the HA agent tries to get the hosted engine stats:
2018-07-05 23:38:30,363-0400 WARN (vdsm.Scheduler) [Executor] Worker
blocked: <Worker name=jsonrpc/7 running <Task <JsonRpcTask {'params':
{}, 'jsonrpc': '2.0', 'method': u'Host.getStats', 'id':
u'cddde340-37a8-4f72-a471-b5bc40c06a16'} at 0x7f262814ae90>
timeout=60, duration=600.00 at 0x7f262815c050> task#=0 at
0x7f262834b810>, traceback:
File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap
self.__bootstrap_inner()
File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
self.run()
File: "/usr/lib64/python2.7/threading.py", line 765, in run
self.__target(*self.__args, **self.__kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py",
line 195, in run
ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run
self._execute_task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315,
in _execute_task
task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__
self._callable()
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
261, in __call__
self._handler(self._ctx, self._req)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
304, in _serveRequest
response = self._handle_request(req, ctx)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
344, in _handle_request
res = method(**params)
File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 202,
in _dynamicMethod
result = fn(*methodArgs)
File: "<string>", line 2, in getStats
File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 49, in method
ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1409, in getStats
multipath=True)}
File: "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 78, in get_stats
ret['haStats'] = _getHaInfo()
File: "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 176,
in _getHaInfo
stats = instance.get_all_stats()
File: "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 94, in get_all_stats
stats = broker.get_stats_from_storage()
File: "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 135, in get_stats_from_storage
result = self._proxy.get_stats()
File: "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
return self.__send(self.__name, args)
File: "/usr/lib64/python2.7/xmlrpclib.py", line 1591, in __request
verbose=self.__verbose
File: "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
return self.single_request(host, handler, request_body, verbose)
File: "/usr/lib64/python2.7/xmlrpclib.py", line 1303, in single_request
response = h.getresponse(buffering=True)
File: "/usr/lib64/python2.7/httplib.py", line 1113, in getresponse
response.begin()
File: "/usr/lib64/python2.7/httplib.py", line 444, in begin
version, status, reason = self._read_status()
File: "/usr/lib64/python2.7/httplib.py", line 400, in _read_status
line = self.fp.readline(_MAXLINE + 1)
File: "/usr/lib64/python2.7/socket.py", line 476, in readline
data = self._sock.recv(self._rbufsize) (executor:363)
Andrej, I see the test was added by you in
commit e8d32f7375f2033b73544f47c1e1ca67abe8d35a.
I'm not sure about the purpose of this test, but I don't understand why we
are restarting the services on the host.
Nir, Tal, any idea why the storage pool is not coming up?
I see vdsm is in recovery mode; I'm not sure whether this is what the test
was supposed to do, or whether the intention was to cleanly shut down vdsm
and cleanly restart it.
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
Re: [Spice-devel] Ongoing migration: gitlab.com/spice -> gitlab.freedesktop.org/spice
by Victor Toso
Hi,
JFYI, CC'ing:
* qemu-devel
* virt-tools-list
* devel(a)ovirt.org
If nobody complains, I'll finish whatever is pending next Monday
morning (GMT+2).
Cheers,
toso
On Thu, Jul 12, 2018 at 02:12:43PM +0200, Victor Toso wrote:
> Hi,
>
> The official repository for Spice components should stay in
> gitlab.freedesktop.org/spice. As suggested in the previous
> thread [0], instead of closing gitlab.com/spice we can keep it
> as a read-only mirror of Freedesktop's gitlab instance.
>
> [0] https://lists.freedesktop.org/archives/spice-devel/2018-July/044664.html
>
> Please, read the email below to see what's going on :)
>
> At this moment, I've only changed components that were already a
> mirror of gitlab.freedesktop.org; the first part of this email is
> about the projects that are not yet mirrors.
>
> This email's intent is to inform, discuss and give contributors
> some time to learn about the change. The people CC'ed are the
> ones affected.
>
> [*] Components to be created in gitlab.freedesktop.org/spice and
> mirrored from there.
>
> * https://gitlab.com/spice/x11spice
> * https://gitlab.com/spice/spice-space
> * https://gitlab.com/spice/spice-space-pages
> * https://gitlab.com/spice/spice-streaming-agent
>
> * https://gitlab.com/spice/virtio-gpu-wddm/virtio-gpu-wddm-dod
> - Taking the opportunity, should we move this under win32
> subgroup or create the virtio-gpu-wddm subgroup once more?
> - Migrate issue #1
>
> * https://gitlab.com/spice/qxl-wddm-dod
> - Update and mirror from https://gitlab.freedesktop.org/spice/win32/qxl-wddm-dod
> - The mirrored in gitlab will also be moved into win32 subgroup
>
> * https://gitlab.com/spice/spice-nsis
> - Update and mirror from https://gitlab.freedesktop.org/spice/spice-nsis
> - Should we also include under win32 subgroup or stay outside?
>
> [*] Components I'm leaving out of this mirroring due to old,
> untouched or unrelated (qemu?) code
>
> * win32/usbclerk
> * win32/vdi_port
> * qemu
> * slirp
> * spicec
> * vdesktop
>
>
> [*] Components that are already mirrored, some updates around
> them below:
>
> * The mirrored components will always be overwritten when
> branches diverge, so please don't push to gitlab.com - about the
> read-only option:
> - https://gitlab.com/gitlab-org/gitlab-ce/issues/40891
>
> * Added common description: "Read only repository, mirror from
> Freedesktop's instance of Gitlab"
>
> * General permissions
> - Visibility: Public
> - Issues, Merge request, Wiki, etc: Disabled
>
> No problems seen:
> * spice [ https://gitlab.com/spice/spice ]
> * spice-protocol [ https://gitlab.com/spice/spice-protocol ]
> * spice-common [ https://gitlab.com/spice/spice-common ]
> * usbredir [ https://gitlab.com/spice/usbredir ]
> * spice-jhbuild [ https://gitlab.com/spice/spice-jhbuild ]
> * spice-html5 [ https://gitlab.com/spice/spice-html5 ]
> * libcacard [ https://gitlab.com/spice/libcacard ]
>
> Small problem but fixed:
> * Spice-gtk [ https://gitlab.com/spice/spice-gtk ]
> - I had to delete and mirror again due to some ongoing failure
> to pull from freedesktop.org
>
> * linux/vd_agent [ https://gitlab.com/spice/linux/vd_agent ]
> - I had to create the linux subgroup and clone a new instance
> - removed spice/vdagent one (previous)
>
> * win32/vd_agent [ https://gitlab.com/spice/win32/vd_agent ]
> - I had to create the win32 subgroup and clone a new instance
> - removed spice/vdagent-win (previous)
>
> * win32/usbdk [ https://gitlab.com/spice/win32/usbdk ]
> - Moved to win32 subgroup (delete + clone again)
> - removed spice/usbdk (previous)
>
> * win32/qxl [ https://gitlab.com/spice/win32/qxl ]
> - Moved to win32 subgroup (delete + clone again)
> - removed spice/qxl-win (previous)
>
> I hope all is good to have everything migrated early next week :)
> PS: Let me know if I forgot something.
>
> Cheers,
> toso
> _______________________________________________
> Spice-devel mailing list
> Spice-devel(a)lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/spice-devel
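For anyone with an existing checkout of one of the gitlab.com mirrors,
repointing a local clone at the new canonical location is a one-liner (the
exact repository URL below is an assumption based on the group path
mentioned above; adjust per project):
  git remote set-url origin https://gitlab.freedesktop.org/spice/spice.git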
Build fail stopping docker containers
by Nir Soffer
Seen this failure today:
16:16:05 Stopping and removing all running containers
16:16:05 Stopping and removing name=unnamed, id=b1d61f20b7fd4746ed83f308ab292b1d53a02de2271965268890b35c1344da04
16:16:40 Stopping and removing name=unnamed, id=22a8ace28d96e469bfc8177ee6afb52d6ca7b4578c27475c92862465253caa2b
16:17:16 Stopping and removing name=unnamed, id=5bab561632b6c179446df56dd3aadf10402989047564202f86c6b3989551820c
16:17:28 Stopping and removing name=unnamed, id=4c04d87ec8b55a774c9e6558a814b944d665b4c0984d6677972285209c5eea7c
16:17:41 Stopping and removing name=unnamed, id=5b40e3237c00cb56897b75fbac500d22b81e53894683ff1a872e4f9e4a48fc1d
16:17:41 Traceback (most recent call last):
16:17:41   File "/home/jenkins/workspace/ovirt-imageio_4.2_check-patch-el7-x86_64/jenkins/scripts/docker_cleanup.py", line 290, in <module>
16:17:41     main()
16:17:41   File "/home/jenkins/workspace/ovirt-imageio_4.2_check-patch-el7-x86_64/jenkins/scripts/docker_cleanup.py", line 34, in main
16:17:41     stop_all_running_containers(client)
16:17:41   File "/home/jenkins/workspace/ovirt-imageio_4.2_check-patch-el7-x86_64/jenkins/scripts/docker_cleanup.py", line 125, in stop_all_running_containers
16:17:41     _remove_container(client, container)
16:17:41   File "/home/jenkins/workspace/ovirt-imageio_4.2_check-patch-el7-x86_64/jenkins/scripts/docker_cleanup.py", line 152, in _remove_container
16:17:41     client.stop(container)
16:17:41   File "/usr/lib/python2.7/site-packages/docker/utils/decorators.py", line 21, in wrapped
16:17:41     return f(self, resource_id, *args, **kwargs)
16:17:41   File "/usr/lib/python2.7/site-packages/docker/api/container.py", line 403, in stop
16:17:42     self._raise_for_status(res)
16:17:42   File "/usr/lib/python2.7/site-packages/docker/client.py", line 173, in _raise_for_status
16:17:42     raise errors.NotFound(e, response, explanation=explanation)
16:17:42 docker.errors.NotFound: 404 Client Error: Not Found ("{"message":"No such container: 5b40e3237c00cb56897b75fbac500d22b81e53894683ff1a872e4f9e4a48fc1d"}")
Looks like we need to handle docker.errors.NotFound - it is an expected error (see the sketch after the job link below).
https://jenkins.ovirt.org/job/ovirt-imageio_4.2_check-patch-el7-x86_64/55...
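A minimal sketch of that handling (a hypothetical helper, not the actual
docker_cleanup.py code; only client.stop() and docker.errors.NotFound come
from the traceback above):
  from docker import errors

  def stop_ignoring_missing(client, container):
      """Stop a container, ignoring ones that already exited and were
      removed between listing and stopping."""
      try:
          client.stop(container)
      except errors.NotFound:
          # Expected: the container is already gone, nothing to stop.
          pass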
Nir
blog on asking questions
by Greg Sheremeta
wanted to share :)
"Software folk are very good at putting their headphones on and living in
their own little bubbles. This reinforces the idea that software
development is some sort of mythical or sacred practice that should never
be interrupted, and it makes people second guess whether they should ask
for help. This isn’t what you want in a team. You want people to be
comfortable asking for help – that can be the difference between a
successful & productive team who ask for help as soon as they’re stuck, and
an unproductive team who struggle with their work in silence."
from https://wildlyinaccurate.com/becoming-a-team-lead-a-survival-guide/
I love helping everyone, and I know most people love to help, but my usual
first reaction to needing help is to resist asking. After 5 years on the
team I'm starting to get more comfortable lol :)
--
GREG SHEREMETA
SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
Red Hat NA
<https://www.redhat.com/>
gshereme(a)redhat.com IRC: gshereme
<https://red.ht/sig>
Error adding host to ovirt-engine in local development environment
by Kaustav Majumder
Hi,
I am trying to set up an ovirt-engine dev environment on my local Fedora 28
machine. I have followed this guide ->
https://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=blob_plain;f=README....
When I try to add a new host (CentOS 7), it fails with the
following error.
[35eb76d9] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509), Installing Host
10.70.43.157. Connected to host 10.70.43.157 with SSH key fingerprint:
SHA256:rZfUGylVh3PLqfH2Siey0+CA9RUctK2ITQ2UGtV5ggA.
2018-07-10 15:47:50,447+05 INFO
[org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
(EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] Installation of
10.70.43.157. Executing command via SSH umask 0077;
MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XXXXXXXXXX)"; trap
"chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" >
/dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x &&
"${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine
DIALOG/customization=bool:True <
/home/kaustavmajumder/work/ovirt-engine-builds/07-07/var/cache/ovirt-engine/ovirt-host-deploy.tar
2018-07-10 15:47:50,447+05 INFO
[org.ovirt.engine.core.utils.archivers.tar.CachedTar]
(EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] Tarball
'/home/kaustavmajumder/work/ovirt-engine-builds/07-07/var/cache/ovirt-engine/ovirt-host-deploy.tar'
refresh
2018-07-10 15:47:50,471+05 INFO
[org.ovirt.engine.core.uutils.ssh.SSHDialog]
(EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] SSH execute
'root(a)10.70.43.157' 'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}"
mktemp -d -t ovirt-XXXXXXXXXX)"; trap "chmod -R u+rwX \"${MYTMP}\" >
/dev/null 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar
--warning=no-timestamp -C "${MYTMP}" -x && "${MYTMP}"/ovirt-host-deploy
DIALOG/dialect=str:machine DIALOG/customization=bool:True'
2018-07-10 15:47:50,676+05 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [35eb76d9] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An
error has occurred during installation of Host 10.70.43.157: Python is
required but missing.
2018-07-10 15:47:50,685+05 ERROR
[org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy)
[35eb76d9] Error during deploy dialog
2018-07-10 15:47:50,686+05 ERROR
[org.ovirt.engine.core.uutils.ssh.SSHDialog]
(EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] SSH error
running command root(a)10.70.43.157:'umask 0077;
MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XXXXXXXXXX)"; trap
"chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" >
/dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x &&
"${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine
DIALOG/customization=bool:True': IOException: Command returned failure
code 1 during SSH session 'root(a)10.70.43.157'
2018-07-10 15:47:50,690+05 ERROR
[org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
(EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] Error during
host 10.70.43.157 install
2018-07-10 15:47:50,697+05 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] EVENT_ID:
VDS_INSTALL_IN_PROGRESS_ERROR(511), An error has occurred during
installation of Host 10.70.43.157: Command returned failure code 1
during SSH session 'root(a)10.70.43.157'.
2018-07-10 15:47:50,698+05 ERROR
[org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
(EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] Error during
host 10.70.43.157 install, preferring first exception: Unexpected
connection termination
2018-07-10 15:47:50,698+05 ERROR
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] Host
installation failed for host '3a52f700-a1d3-47b0-9518-f7f94231a874',
'10.70.43.157': Unexpected connection termination
2018-07-10 15:47:50,700+05 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] START,
SetVdsStatusVDSCommand(HostName = 10.70.43.157,
SetVdsStatusVDSCommandParameters:{hostId='3a52f700-a1d3-47b0-9518-f7f94231a874',
status='InstallFailed', nonOperationalReason='NONE',
stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 48d6c653
2018-07-10 15:47:50,704+05 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] FINISH,
SetVdsStatusVDSCommand, return: , log id: 48d6c653
2018-07-10 15:47:50,710+05 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-4188) [35eb76d9] EVENT_ID:
VDS_INSTALL_FAILED(505), Host 10.70.43.157 installation failed.
Unexpected connection termination.
I have tried installing it several times and it has failed.
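For reference, the deploy aborts at ovirt-host-deploy's "Python is required
but missing" check, so verifying or installing Python on the CentOS 7 host
before retrying would look something like this (hypothetical commands, not
taken from the log above):
  ssh root@10.70.43.157 python --version
  # and, if that fails:
  ssh root@10.70.43.157 yum install -y python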
Thanks,
Kaustav
Adding a supported distro
by bob.bownes@oracle.com
dumb q.
I have a distro I'd like to add to the supported list (a private-labeled
version of RHEL 7.5).
I tried grabbing an unclaimed ID, putting that plus the output of
osinfo-query --fields=name os short-id='my_shortid' and
derivedFrom.value= well_known_distro into osinfo-defaults.properties and
restarting ovirt-engine, but the install still fails because '$MYNAME' is
not supported. Am I missing something obvious?
The entry looks like this:
os.my_7x86.id.value = 34
os.my_7x86.name.value = My Linux 7.5
os.my_7x86.derivedFrom.value = rhel_6x64
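(One thing worth double-checking - this is an assumption, not verified
against this setup: custom OS entries are usually added as a separate
override file under the engine's etc/ovirt-engine/osinfo.conf.d/ directory
rather than edited into osinfo-defaults.properties, and for a RHEL 7.5
rebrand deriving from the RHEL 7 entry may behave better than rhel_6x64.
Roughly:)
  # e.g. $PREFIX/etc/ovirt-engine/osinfo.conf.d/90-my-linux.properties  (hypothetical path)
  os.my_7x86.id.value = 34
  os.my_7x86.name.value = My Linux 7.5
  os.my_7x86.derivedFrom.value = rhel_7x64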
ideas?
Thanks!