Roles, Permissions, and Inheritance
by Brian Wilson
Is there a way to make roles assigned to groups on objects apply only where they are set?
Basically, I'm looking for a way to do what we did in VMware, which involved using the "do not propagate" permission setting.
It seems to me that right now there is no way to set this, so if I grant access to something at the top level of a data center, that access will override any role and permission I then explicitly set on an object underneath it.
Let's take the ovirtmgmt network as a concrete example. I do not want ordinary users in the engine to be able to place VMs on it (but I still want SuperUsers to be able to). How can I accomplish this with the way roles and permissions work in oVirt?
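For reference, a hedged sketch of one way to inspect and adjust this through the REST API (the engine FQDN, credentials, and IDs are placeholders, and the exact permission to remove should be verified first; in a default setup the ovirtmgmt vNIC profile usually carries an Everyone/VnicProfileUser permission, which is what lets ordinary users attach VMs to it):

    # List vNIC profiles to find the one belonging to ovirtmgmt:
    curl -sk -u admin@internal:PASSWORD \
        https://engine.example.com/ovirt-engine/api/vnicprofiles
    # Show the permissions set on that profile (replace PROFILE_ID):
    curl -sk -u admin@internal:PASSWORD \
        https://engine.example.com/ovirt-engine/api/vnicprofiles/PROFILE_ID/permissions
    # Removing the Everyone/VnicProfileUser permission (replace PERMISSION_ID)
    # should hide the profile from ordinary users while SuperUsers keep access:
    curl -sk -u admin@internal:PASSWORD -X DELETE \
        https://engine.example.com/ovirt-engine/api/vnicprofiles/PROFILE_ID/permissions/PERMISSION_ID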
thanks!
Brian
5 years, 9 months
oVirt configuration
by matteofedeli97@gmail.com
Hi all,
I'm installing and configuring an oVirt system in hyperconverged mode and I have some
doubts about storage.
server 1: 1TB, 1TB
server 2: 1TB, 1TB
server 3: 1TB, 1TB
With this HDD configuration I want to install oVirt Node plus the engine and vmstore volumes on the first drive, and put another volume for data on the second drive.
The first would be replica 3 and the second distributed... is that possible?
During installation of oVirt Node, should I select both hard disks in the automatic partitioning?
In the Cockpit wizard at this step, https://www.ovirt.org/images/uarwo42-gdeploy-3.png?1519899774, do I use pvdisplay -m to find the device names?
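For reference, a hedged sketch of the volume layout described above expressed as plain gluster commands (host names and brick paths are placeholders; the Cockpit wizard normally generates the equivalent configuration for you):

    # Replica 3 volumes for the engine and vmstore on the first data disk:
    gluster volume create engine replica 3 \
        srv1:/gluster_bricks/engine/engine \
        srv2:/gluster_bricks/engine/engine \
        srv3:/gluster_bricks/engine/engine
    gluster volume create vmstore replica 3 \
        srv1:/gluster_bricks/vmstore/vmstore \
        srv2:/gluster_bricks/vmstore/vmstore \
        srv3:/gluster_bricks/vmstore/vmstore
    # Purely distributed volume for data on the second disk (note: no
    # redundancy, losing any single disk loses part of this volume):
    gluster volume create data \
        srv1:/gluster_bricks/data/data \
        srv2:/gluster_bricks/data/data \
        srv3:/gluster_bricks/data/data
    # lsblk (or pvdisplay -m) shows the device names to enter in the wizard:
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT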
5 years, 9 months
A vSphere admin is trying to set up an oVirt cluster...
by ralph@gottschalkson.eu
and is failing, of course.
I have access to three HP DL380 G7 servers with a lot of RAM, a lot of HDDs, and 10 GbE NICs.
Which VLANs do I have to create?
Should the HDDs stay out of a RAID and remain single disks for GlusterFS, so that GlusterFS
handles the bricks (= single HDDs?)?
Or should I use a shared SAN device, which is also available, to simplify the (first) setup?
What do I do with the downloaded oVirt ISO image?
In vSphere I would install ESXi on the nodes and connect the NICs to the right VLANs,
then install the VCSA as a VM, add the nodes, and so on.
Is there a good guide out there that translates VMware terms and concepts into oVirt terms,
so that I can follow the step-by-step instructions? Or is the setup done by a (magic) Ansible script?
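For what it's worth, a hedged outline of the rough oVirt analogue of "install ESXi, then deploy the VCSA" (the ISO file name is a placeholder; this assumes the oVirt Node image and the Cockpit-based hyperconverged wizard):

    # 1. Write the oVirt Node installer ISO to a USB stick and install it on
    #    each of the three hosts:
    dd if=ovirt-node-ng-installer-4.2.x.iso of=/dev/sdX bs=4M status=progress
    # 2. Log in to Cockpit on the first host (https://<host>:9090) and run the
    #    hyperconverged/Hosted Engine wizard from there; it sets up the
    #    GlusterFS bricks across the hosts and deploys the engine VM (the
    #    rough equivalent of the VCSA).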
5 years, 9 months
4.3.0 RC2 cannot mount GlusterFS volumes on oVirt Node NG
by Jorick Astrego
Hi,
We're having problems mounting the pre-existing GlusterFS 3.12 storage
domains in oVirt Node NG 4.3.0 RC2.
We're getting the errors shown in the logs below.
There are no iptables blocks on the storage network, and the IPs are
pingable both ways. I can telnet to the GlusterFS ports and I see no
messages in the logs of the GlusterFS servers.
When I try the mount command manually, it hangs forever:
/usr/bin/mount -t glusterfs -o
backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8 /mnt/temp
I haven't submitted a bug yet....
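For reference, a hedged way to narrow this down from the node itself (the gluster command is the same one supervdsm runs, as shown in the log below; the log paths assume the default GlusterFS client locations):

    # Query the volume info directly, as supervdsm does (replace the masked IP):
    /usr/sbin/gluster --mode=script volume info --remote-host=<server-ip> sdd8 --xml
    # Check the FUSE client log written for the hanging mount attempt:
    ls -ltr /var/log/glusterfs/
    tail -n 100 /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log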
from supervdsm.log
MainProcess|jsonrpc/2::DEBUG::2019-01-25
13:42:45,282::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
call volumeInfo with (u'sdd8', u'*.*.*.*') {}
MainProcess|jsonrpc/2::DEBUG::2019-01-25
13:42:45,282::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.*
sdd8 --xml (cwd None)
MainProcess|jsonrpc/2::DEBUG::2019-01-25
13:44:45,399::commands::219::root::(execCmd) FAILED: <err> = ''; <rc> = 1
MainProcess|jsonrpc/2::DEBUG::2019-01-25
13:44:45,399::logutils::319::root::(_report_stats) ThreadedHandler is ok
in the last 120 seconds (max pending: 2)
MainProcess|jsonrpc/2::ERROR::2019-01-25
13:44:45,399::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper)
Error in volumeInfo
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
102, in wrapper
res = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 529,
in volumeInfo
xmltree = _execGlusterXml(command)
File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 131,
in _execGlusterXml
return _getTree(rc, out, err)
File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 112,
in _getTree
raise ge.GlusterCmdExecFailedException(rc, out, err)
GlusterCmdExecFailedException: Command execution failed
error: Error: Request timed out
return code: 1
MainProcess|jsonrpc/2::DEBUG::2019-01-25
13:44:45,400::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
call mount with (<vdsm.supervdsm_server._SuperVdsm object at
0x7f6eb8d0a2d0>, u'*.*.*.*:/sdd8',
u'/rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8') {'vfstype':
u'glusterfs', 'mntOpts': u'backup-volfile-servers=*.*.*.*:*.*.*.*',
'cgroup': 'vdsm-glusterfs'}
MainProcess|jsonrpc/2::DEBUG::2019-01-25
13:44:45,400::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
0-63 /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
-t glusterfs -o backup-volfile-servers=*.*.*.*:*.*.*.* *.*.*.*:/sdd8
/rhev/data-center/mnt/glusterSD/*.*.*.*:_sdd8 (cwd None)
MainProcess|jsonrpc/0::DEBUG::2019-01-25
13:45:02,884::commands::219::root::(execCmd) FAILED: <err> = 'Running
scope as unit run-38676.scope.\nMount failed. Please check the log file
for more details.\n'; <rc> = 1
MainProcess|jsonrpc/0::ERROR::2019-01-25
13:45:02,884::supervdsm_server::104::SuperVdsm.ServerCallback::(wrapper)
Error in mount
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
102, in wrapper
res = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line
144, in mount
cgroup=cgroup)
File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
277, in _mount
_runcmd(cmd)
File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
305, in _runcmd
raise MountError(rc, b";".join((out, err)))
MountError: (1, ';Running scope as unit run-38676.scope.\nMount failed.
Please check the log file for more details.\n')
MainProcess|jsonrpc/0::DEBUG::2019-01-25
13:45:02,894::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper)
call volumeInfo with (u'ssd9', u'*.*.*.*') {}
MainProcess|jsonrpc/0::DEBUG::2019-01-25
13:45:02,894::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list
0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.*
ssd9 --xml (cwd None)
from vdsm.log
2019-01-25 13:46:03,519+0100 WARN (vdsm.Scheduler) [Executor] Worker
blocked: <Worker name=jsonrpc/2 running <Task <JsonRpcTask {'params':
{u'connectionParams': [{u'mnt_options':
u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection': u'*.*.*.*:ssd4',
u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false',
u'vfs_type': u'glusterfs', u'password': '********', u'port': u''},
{u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
u'b036005a-d44d-4689-a8c3-13e1bbf55af7', u'connection': u'*.*.*.*:ssd5',
u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false',
u'vfs_type': u'glusterfs', u'password': '********', u'port': u''},
{u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
u'40d191b0-b7f8-48f9-bf6f-327275f51fef', u'connection': u'*.*.*.*:ssd6',
u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false',
u'vfs_type': u'glusterfs', u'password': '********', u'port': u''},
{u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
u'26fbd2d6-6f25-4520-ab7f-15e9001f07b9', u'connection':
u'*.*.*.*:/hdd2', u'iqn': u'', u'user': u'', u'tpgt': u'1',
u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}, {u'mnt_options':
u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
u'f11fed97-513a-4a10-b85c-2afe68f42608', u'connection':
u'*.*.*.*:/ssd3', u'iqn': u'', u'user': u'', u'tpgt': u'1',
u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}, {u'mnt_options':
u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
u'f20b8691-528e-4e38-89ad-1e27684dee8b', u'connection':
u'*.*.*.*:/sdd8', u'iqn': u'', u'user': u'', u'tpgt': u'1',
u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}, {u'mnt_options':
u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
u'188e71dc-3d81-43d3-b930-238a4c6711e6', u'connection':
u'*.*.*.*:/ssd9', u'iqn': u'', u'user': u'', u'tpgt': u'1',
u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}], u'storagepoolID':
u'00000001-0001-0001-0001-000000000043', u'domainType': 7}, 'jsonrpc':
'2.0', 'method': u'StoragePool.connectStorageServer', 'id':
u'581e2ad3-0682-4d44-95b4-bdc088b45f66'} at 0x7f9be815c850> timeout=60,
duration=1260.00 at 0x7f9be815ca10> task#=98 at 0x7f9be83bb750>, traceback:
File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap
self.__bootstrap_inner()
File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
self.run()
File: "/usr/lib64/python2.7/threading.py", line 765, in run
self.__target(*self.__args, **self.__kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line
195, in run
ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run
self._execute_task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in
_execute_task
task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in
__call__
self._callable()
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
262, in __call__
self._handler(self._ctx, self._req)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
305, in _serveRequest
response = self._handle_request(req, ctx)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
345, in _handle_request
res = method(**params)
File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194,
in _dynamicMethod
result = fn(*methodArgs)
File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1103, in
connectStorageServer
connectionParams)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py",
line 72, in wrapper
result = ctask.prepare(func, *args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108,
in wrapper
return m(self, *a, **kw)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line
1179, in prepare
result = self._run(func, *args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
in _run
return fn(*args, **kargs)
File: "<string>", line 2, in connectStorageServer
File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
method
ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2410,
in connectStorageServer
conObj.connect()
from messages:
Jan 25 13:49:07 node9 vdsm[31968]: WARN Worker blocked: <Worker
name=jsonrpc/3 running <Task <JsonRpcTask {'params':
{u'connectionParams': [{u'mnt_options':
u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
u'6b6b7899-c82b-4417-b453-0b3b0ac11deb', u'connection':
u'192.168.99.15:ssd4', u'iqn': u'', u'user': u'', u'tpgt': u'1',
u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}, {u'mnt_options':
u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
u'b036005a-d44d-4689-a8c3-13e1bbf55af7', u'connection': u'*.*.*.*:ssd5',
u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false',
u'vfs_type': u'glusterfs', u'password': '********', u'port': u''},
{u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
u'40d191b0-b7f8-48f9-bf6f-327275f51fef', u'connection': u'*.*.*.*:ssd6',
u'iqn': u'', u'user': u'', u'tpgt': u'1', u'ipv6_enabled': u'false',
u'vfs_type': u'glusterfs', u'password': '********', u'port': u''},
{u'mnt_options': u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
u'26fbd2d6-6f25-4520-ab7f-15e9001f07b9', u'connection':
u'*.*.*.*:/hdd2', u'iqn': u'', u'user': u'', u'tpgt': u'1',
u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}, {u'mnt_options':
u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
u'f11fed97-513a-4a10-b85c-2afe68f42608', u'connection':
u'*.*.*.*:/ssd3', u'iqn': u'', u'user': u'', u'tpgt': u'1',
u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}, {u'mnt_options':
u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
u'f20b8691-528e-4e38-89ad-1e27684dee8b', u'connection':
u'*.*.*.*:/sdd8', u'iqn': u'', u'user': u'', u'tpgt': u'1',
u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}, {u'mnt_options':
u'backup-volfile-servers=*.*.*.*:*.*.*.*', u'id':
u'188e71dc-3d81-43d3-b930-238a4c6711e6', u'connection':
u'*.*.*.*:/ssd9', u'iqn': u'', u'user': u'', u'tpgt': u'1',
u'ipv6_enabled': u'false', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}], u'storagepoolID':
u'00000001-0001-0001-0001-000000000043', u'domainType': 7}, 'jsonrpc':
'2.0', 'method': u'StoragePool.connectStorageServer', 'id':
u'918f4d06-ca89-4ec0-a396-3407f1bdb8f9'} at 0x7f9be82ff250> timeout=60,
duration=840.00 at 0x7f9be82ffa50> task#=93 at 0x7f9be83bba10>,
traceback:
File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap
  self.__bootstrap_inner()
File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
  self.run()
File: "/usr/lib64/python2.7/threading.py", line 765, in run
  self.__target(*self.__args, **self.__kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in run
  ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run
  self._execute_task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task
  task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__
  self._callable()
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 262, in __call__
  self._handler(self._ctx, self._req)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 305, in _serveRequest
  response = self._handle_request(req, ctx)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request
  res = method(**params)
File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 194, in _dynamicMethod
  result = fn(*methodArgs)
File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1103, in connectStorageServer
  connectionParams)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 72, in wrapper
  result = ctask.prepare(func, *args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper
  return m(self, *a, **kw)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare
  result = self._run(func, *args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
  return fn(*args, **kargs)
File: "<string>", line 2, in connectStorageServer
File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
  ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2410, in connectStorageServer
  conObj.connect()
File: "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 172, in connect
  self._mount.mount(self.options, self._vfsType, cgroup=self.CGROUP)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 207, in mount
  cgroup=cgroup)
File: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
  return callMethod()
File: "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
  **kwargs)
File: "<string>", line 2, in mount
File: "/usr/lib64/python2.7/multiprocessing/managers.py", line 759, in _callmethod
  kind, result = conn.recv()
Met vriendelijke groet, With kind regards,
Jorick Astrego
Netbulae Virtualization Experts
5 years, 9 months
Host Only Network
by Brian Wilson
What is the proper way to provide networks with no egress from the host? In VMware I would create a port group and simply not associate it with a physical uplink. In oVirt that doesn't seem possible, as the network doesn't seem to function unless I attach it to a physical NIC under the "Setup Host Networks" options.
The goal here is to provide a network for VMs to use on a single host only; no traffic would need to leave the host, but it would need to have VLAN IDs and segmentation on the host.
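For reference, one workaround that gets mentioned for this (a hedged sketch, not verified here: "hostonly0" is an arbitrary name, and vdsm may need to be told to accept such an interface) is to back the logical network with a dummy interface so it can be attached under "Setup Host Networks" without consuming a real uplink:

    # Create a persistent dummy interface via NetworkManager:
    nmcli connection add type dummy ifname hostonly0 con-name hostonly0
    # (Non-persistent alternative: ip link add hostonly0 type dummy)
    # Then attach the host-only logical network (with its VLAN IDs) to
    # hostonly0 in "Setup Host Networks" instead of a physical NIC.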
5 years, 9 months
Single Node Deployment - Self-Hosted-Engine
by Brian Wilson
I have a question regarding installing a single node and then running the self-hosted engine on it.
I would like to keep this as simple as possible and am running into an issue with where to tell the installer to place the engine. It asks me to choose a shared storage technology (GlusterFS, NFS, iSCSI), and I don't want to rely on anything outside of this single box to provide storage. I would like to add another LVM volume as a local storage domain to be used for the engine as well as for all other VMs to be run on this all-in-one box.
Is there some secret sauce to get this installer to choose a local location rather than a shared one?
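For reference, a frequently mentioned workaround (a hedged sketch, untested here and not an officially supported layout): export a local directory over NFS from the host itself and point the hosted-engine installer at it, then add the LVM-backed local path as a separate storage domain afterwards.

    # Prepare a local directory owned by vdsm:kvm (uid/gid 36) and export it:
    mkdir -p /srv/hosted-engine
    chown 36:36 /srv/hosted-engine
    echo '/srv/hosted-engine *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
    systemctl enable --now nfs-server
    exportfs -ra
    # During hosted-engine deployment, give the installer NFS storage at:
    #   <this-host-fqdn>:/srv/hosted-engine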
5 years, 10 months
[ANN] oVirt 4.3.0 Fourth Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the Fourth
Release Candidate of oVirt 4.3.0, as of January 29th, 2019
This is pre-release software. This pre-release should not be used in
production.
Please take a look at our community page[1] to learn how to ask questions
and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This update is the fourth release candidate of the 4.3.0 version.
This release brings more than 130 enhancements and more than 450 bug fixes
on top of oVirt 4.2 series.
What's new in oVirt 4.3.0?
* Q35 chipset, with support for booting via UEFI and Secure Boot
* Skylake-server and AMD EPYC support
* New SMBus driver in Windows guest tools
* Improved support for v2v
* OVA export / import of Templates
* Full support for live migration of High Performance VMs
* Microsoft Failover clustering support (SCSI Persistent Reservation) for
Direct LUN disks
* Hundreds of bug fixes on top of oVirt 4.2 series
* New VM portal details page (see a preview here:
https://imgur.com/a/ExINpci)
* New Cluster upgrade UI
* OVN security groups
* IPv6 (static host addresses)
* Support of Neutron from RDO OpenStack 13 as external network provider
* Support of using Skydive from RDO OpenStack 14 as Tech Preview
* Support for 3.6 and 4.0 data centers, clusters, and hosts has been removed
* Now using PostgreSQL 10
* New metrics support using rsyslog instead of fluentd
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS 7
- oVirt Node NG is already available for both CentOS 7 and Fedora 28 (tech
preview).
- oVirt Appliance for Fedora 28 (tech preview) is being delayed due to
build issues with the build system.
Additional Resources:
* Read more about the oVirt 4.3.0 release highlights:
http://www.ovirt.org/release/4.3.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.3.0/
[4] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
5 years, 10 months
Planned infra outage - Saturday February 2
by Evgheni Dereveanchin
Hi everyone,
There's a maintenance window planned for this Saturday, February 2, 2019, to
upgrade network equipment in the PHX datacenter. The change will be
implemented during afternoon hours and several oVirt infra services may be
unreachable for short periods of time, specifically:
* package repositories
* Jenkins CI
* glance image repo
If you have problems accessing project resources during this period or see
CI failures, please retry in a couple of hours.
The project website and Gerrit code review are not affected by this outage
and will continue to function normally.
--
Regards,
Evgheni Dereveanchin
5 years, 10 months
Clearing asynchronous task Unknown
by nicolas@devels.es
Hi,
We're running oVirt 4.1.9 (I know there's a newer version; we can't
upgrade until [1] is implemented). The thing is that for some days now
we've been seeing an event that floods our event list:
Clearing asynchronous task Unknown that started at Tue Jan 22 14:33:17
WET 2019
The event shows up every minute. We tried restarting ovirt-engine,
but after some time the flooding starts again. There are no pending tasks
in the task list.
How can I check what is happening and how can I solve it?
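For reference, a hedged way to look for leftovers on the engine side (this assumes the default "engine" database name; the async_tasks table and the dbutils taskcleaner script are the usual places to look, but verify against your 4.1.9 install before changing anything):

    # List whatever the engine still considers an async task:
    su - postgres -c "psql engine -c 'SELECT * FROM async_tasks;'"
    # ovirt-engine ships a cleanup helper for stale tasks; review its usage first:
    ls /usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh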
Thanks.
[1]: https://github.com/oVirt/ovirt-web-ui/issues/490
5 years, 10 months