CentOS Stream support
by Michal Skrivanek
Hi all,
we would like to ask about the community’s interest in oVirt moving to CentOS Stream.
There have been some requests before, but it’s hard to tell how many people would really like to see that.
With CentOS releases lagging behind RHEL for months, it’s worth considering a move to CentOS Stream: it is much more up to date and allows us to fix bugs faster, with fewer workarounds and less overhead for maintaining old code. E.g. our current integration tests do not really pass on CentOS 8.1, and we can’t do much about that other than wait for more up-to-date packages. It would also bring us closer to making oVirt run smoothly on RHEL, as RHEL is much closer to Stream than it is to the outdated CentOS.
So... would you like us to support CentOS Stream?
We don’t really have the capacity to support three different platforms; would you still want oVirt to support CentOS Stream if it means “less support” for regular CentOS?
There are some concerns about Stream being a bit less stable; do you share those concerns?
Thank you for your comments,
michal
1 month
planned Jenkins restart
by Evgheni Dereveanchin
Hi everyone,
I'll be performing a planned Jenkins restart within the next hour.
No new CI jobs will be scheduled during this maintenance period.
I will inform you once it is back online.
--
Regards,
Evgheni Dereveanchin
3 months
Re: Need help in handling catch block in vdsm
by Michal Skrivanek
[adding devel list]
> On 29 Sep 2020, at 09:51, Ritesh Chikatwar <rchikatw(a)redhat.com> wrote:
>
> Hello,
>
> I am new to the VDSM codebase.
>
> There is one minor bug in Gluster and they have no near-term plan to fix it, so I need to fix this in vdsm.
>
> The bug is that when I run the command
> [root@dhcp35-237 ~]# gluster v geo-replication status
> No active geo-replication sessions
> geo-replication command failed
> [root@dhcp35-237 ~]#
>
> the engine is piling up with errors. I need to handle this in vdsm; the code is at this place:
>
> https://github.com/oVirt/vdsm/blob/master/lib/vdsm/gluster/cli.py#L1231
>
> If I run the above command with --xml then I get
>
> [root@dhcp35-237 ~]# gluster v geo-replication status --xml
> geo-replication command failed
> [root@dhcp35-237 ~]#
>
> So I have to run one more command before executing this, and check in the exception that it contains the string "No active geo-replication sessions". For that I did the following:
maybe it goes to stderr?
other than that, no idea, sorry
Thanks,
michal
>
> try:
>     xmltree = _execGluster(command)
> except ge.GlusterCmdFailedException as e:
>     if str(e).find("No active geo-replication sessions") != -1:
>         return []
> It seems like this is not working. Can you help me here?
>
>
> Any help will be appreciated
> Ritesh
>
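Following Michal's stderr hint, a minimal sketch of a handler that also checks the command's stderr and re-raises anything unrelated. The "err" attribute is an assumption; verify it against the actual ge.GlusterCmdFailedException definition in vdsm:

try:
    xmltree = _execGluster(command)
except ge.GlusterCmdFailedException as e:
    # The notice may be printed to stderr, which str(e) does not
    # necessarily include; "err" is assumed to hold the captured stderr.
    err = getattr(e, "err", "") or ""
    err_text = "\n".join(err) if isinstance(err, list) else str(err)
    if ("No active geo-replication sessions" in str(e)
            or "No active geo-replication sessions" in err_text):
        return []
    raise  # let unrelated failures propagate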
5 months
Disk sizes not updated on unmap/discard
by Tomáš Golembiovský
Hi,
currently, when we run virt-sparsify on a VM, or a user runs a VM with
discard enabled, and the disk is on block storage in qcow format, the
results are not reflected in oVirt. The blocks get discarded, the storage
can reuse them and reports correct allocation statistics, but oVirt does
not. In oVirt one can still see the original allocation for the disk and
the storage domain as it was before the blocks were discarded. This is
super confusing to users, because when they check after running
virt-sparsify and see the same values, they think sparsification is not
working, which is not true.
It all seems to be because of the LVM layout that we have on the storage
domain. The feature page for discard [1] suggests it could be solved by
running lvreduce, but this does not seem to be true. When blocks are
discarded, the qcow image does not necessarily change its apparent size;
the discarded blocks do not have to be at the end of the disk. So running
lvreduce is likely to remove valuable data.
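To make that concrete: the only space that can be safely cut from the LV is whatever lies past the end of the qcow image, and qemu-img can report that boundary. A minimal sketch (the helper name is made up; "image-end-offset" is the field qemu-img check reports for qcow2):

import json
import subprocess

def qcow2_end_offset(path):
    # "image-end-offset" is the highest offset in the underlying device
    # that the qcow2 image actually uses; the LV could only be reduced
    # down to this point. Discarding clusters inside the image does not
    # lower it, which is why lvreduce cannot reflect sparsification.
    res = subprocess.run(
        ["qemu-img", "check", "--output=json", path],
        stdout=subprocess.PIPE, check=False)
    return json.loads(res.stdout)["image-end-offset"]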
At the moment I don't see how we could achieve the correct values. If
anyone has any idea, feel free to share it. The only option seems to
be to switch to LVM thin pools. Do we have any plans for doing that?
Tomas
[1] https://www.ovirt.org/develop/release-management/features/storage/pass-di...
--
Tomáš Golembiovský <tgolembi(a)redhat.com>
5 months
Incremental Backup | oVirt
by luwen.zhang@vinchin.com
Dear team,
My name is Luwen Zhang from Vinchin Backup & Recovery.
I'm sending you this email because our developers encountered some problems while developing an incremental backup feature, following the oVirt documentation.
https://www.ovirt.org/documentation/incremental-backup-guide/incremental-...
We downloaded oVirt 4.4.1 from the oVirt website and are now developing based on this version. As per the documentation, when we perform the first full backup we should obtain a "checkpoint ID"; when we then perform an incremental backup using this ID, only what has changed on the VM should be transferred. But now the problem is that we cannot get the checkpoint ID.
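For reference, a minimal sketch of the flow as we understand it from the guide, using the ovirt-engine-sdk4 Python bindings; the URL, credentials, and UUIDs are placeholders:

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
vm_service = connection.system_service().vms_service() \
    .vm_service('VM_UUID')  # placeholder VM id
backups_service = vm_service.backups_service()

# Full backup: no from_checkpoint_id is passed.
backup = backups_service.add(
    types.Backup(disks=[types.Disk(id='DISK_UUID')]))  # placeholder disk id

# The checkpoint id is not filled in on the object returned by add();
# poll the backup until it reaches READY and re-read it.
backup_service = backups_service.backup_service(backup.id)
while backup.phase != types.BackupPhase.READY:
    time.sleep(1)
    backup = backup_service.get()

# This is the id to pass as from_checkpoint_id on the next, incremental backup.
print(backup.to_checkpoint_id)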
The components and versions of oVirt are as follows:
Could you please kindly check on this issue and point out what might be wrong and what we should do?
Looking forward to your reply.
Thanks & regards!
Luwen Zhang | Product Manager
Tel: +86-28-8553-0156
Mob: +86-138-8042-4687
Skype: luwen.zhang_cn
E-mail: luwen.zhang(a)vinchin.com
F5, Block 8, National Information Security Industry Park, No.333 YunHua Road, Hi-Tech Zone, Chengdu, China | P.C.610015
5 months, 1 week
Propose Eitan Raviv as an ovirt-system-tests network-suite maintainer
by Dominik Holler
Hi all,
Eitan Raviv has been working on the oVirt project for more than 3 years.
He contributed more than 40 patches to ovirt-system-tests, including
more than 35 patches to the network-suite.
He is already recognized as a relevant reviewer for the network-suite,
so it is time to give him the official role for the work he is already
doing.
I would like to propose Eitan as an ovirt-system-tests network-suite
maintainer.
Thanks
Dominik
5 months, 1 week
OST fails during 002_bootstrap_pytest
by Vojtech Juranek
Hi,
can anybody have a look at OST? It fails constantly with the error below.
See e.g. [1], [2] for full logs.
Thanks
Vojta
[1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7381/
[2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7382/
13:07:16 ../basic-suite-master/test-scenarios/
002_bootstrap_pytest.py::test_verify_engine_backup [WARNING]: Invalid
characters were found in group names but not replaced, use
13:07:22 -vvvv to see details
13:07:22 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4) doesn't match
a supported version!
13:07:22 RequestsDependencyWarning)
13:07:22 lago-basic-suite-master-engine | CHANGED => {
13:07:22 "changed": true,
13:07:22 "gid": 0,
13:07:22 "group": "root",
13:07:22 "mode": "0755",
13:07:22 "owner": "root",
13:07:22 "path": "/var/log/ost-engine-backup",
13:07:22 "secontext": "unconfined_u:object_r:var_log_t:s0",
13:07:22 "size": 6,
13:07:22 "state": "directory",
13:07:22 "uid": 0
13:07:22 }
13:07:44 [WARNING]: Invalid characters were found in group names but not
replaced, use
13:07:44 -vvvv to see details
13:07:44 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4) doesn't match
a supported version!
13:07:44 RequestsDependencyWarning)
13:07:44 lago-basic-suite-master-engine | FAILED | rc=1 >>
13:07:44 Start of engine-backup with mode 'backup'
13:07:44 scope: all
13:07:44 archive file: /var/log/ost-engine-backup/backup.tgz
13:07:44 log file: /var/log/ost-engine-backup/log.txt
13:07:44 Backing up:
13:07:44 Notifying engine
13:07:44 - Files
13:07:44 - Engine database 'engine'
13:07:44 - DWH database 'ovirt_engine_history'
13:07:44 - Grafana database '/var/lib/grafana/grafana.db'
13:07:44 Notifying engine
FATAL: failed to backup /var/lib/grafana/grafana.db with sqlite3
non-zero return code
13:17:47 FAILED
5 months, 2 weeks
Engine build fails: found duplicate key "when" with value "collectd_default_files|d(true)"
by Nir Soffer
I'm trying to build the engine on a new VM, installed based on the README,
and it fails in ansible-lint on:
/usr/share/ansible/roles/oVirt.metrics/roles/oVirt.logging/tasks/main.yml
Is this a known issue? Any workaround?
Nir
---
$ make clean install-dev PREFIX="$HOME/ovirt-engine"
...
+ /usr/bin/ansible-lint -c build/ansible-lint.conf
packaging/playbooks/install-skydive.yml
packaging/playbooks/ovirt-provider-ovn-driver.yml
packaging/ansible-runner-service-project/project/create-brick.yml
packaging/ansible-runner-service-project/project/ovirt-fetch-he-config.yml
packaging/ansible-runner-service-project/project/ovirt-host-check-upgrade.yml
packaging/ansible-runner-service-project/project/ovirt-host-deploy.yml
packaging/ansible-runner-service-project/project/ovirt-host-enroll-certificate.yml
packaging/ansible-runner-service-project/project/ovirt-host-remove.yml
packaging/ansible-runner-service-project/project/ovirt-host-upgrade.yml
packaging/ansible-runner-service-project/project/ovirt-image-measure.yml
packaging/ansible-runner-service-project/project/ovirt-ova-export.yml
packaging/ansible-runner-service-project/project/ovirt-ova-import.yml
packaging/ansible-runner-service-project/project/ovirt-ova-query.yml
packaging/ansible-runner-service-project/project/ovirt-vnc-sasl.yml
packaging/ansible-runner-service-project/project/ovirt_host_upgrade_vars.yml
packaging/ansible-runner-service-project/project/replace-gluster.yml
packaging/ansible-runner-service-project/project/roles
packaging/ansible-runner-service-project/project/roles/gluster-brick-create
packaging/ansible-runner-service-project/project/roles/gluster-replace-peers
packaging/ansible-runner-service-project/project/roles/hc-gluster-cgroups
packaging/ansible-runner-service-project/project/roles/ovirt-host-check-upgrade
packaging/ansible-runner-service-project/project/roles/ovirt-host-deploy-facts
packaging/ansible-runner-service-project/project/roles/ovirt-host-deploy-firewalld
packaging/ansible-runner-service-project/project/roles/ovirt-host-deploy-hosted-engine
packaging/ansible-runner-service-project/project/roles/ovirt-host-deploy-iptables
packaging/ansible-runner-service-project/project/roles/ovirt-host-deploy-kdump
packaging/ansible-runner-service-project/project/roles/ovirt-host-deploy-kernel
packaging/ansible-runner-service-project/project/roles/ovirt-host-deploy-libvirt-guests
packaging/ansible-runner-service-project/project/roles/ovirt-host-deploy-misc
packaging/ansible-runner-service-project/project/roles/ovirt-host-deploy-spice-encryption
packaging/ansible-runner-service-project/project/roles/ovirt-host-deploy-vdsm-certificates
packaging/ansible-runner-service-project/project/roles/ovirt-host-deploy-vdsm
packaging/ansible-runner-service-project/project/roles/ovirt-host-deploy-vm-console-certificates
packaging/ansible-runner-service-project/project/roles/ovirt-host-deploy-vm-console
packaging/ansible-runner-service-project/project/roles/ovirt-host-deploy-vnc-certificates
packaging/ansible-runner-service-project/project/roles/ovirt-host-deploy
packaging/ansible-runner-service-project/project/roles/ovirt-host-enroll-certificate
packaging/ansible-runner-service-project/project/roles/ovirt-host-setup-vnc-sasl
packaging/ansible-runner-service-project/project/roles/ovirt-host-upgrade
packaging/ansible-runner-service-project/project/roles/ovirt-image-measure
packaging/ansible-runner-service-project/project/roles/ovirt-ova-export-post-pack
packaging/ansible-runner-service-project/project/roles/ovirt-ova-export-pre-pack
packaging/ansible-runner-service-project/project/roles/ovirt-ova-extract
packaging/ansible-runner-service-project/project/roles/ovirt-ova-pack
packaging/ansible-runner-service-project/project/roles/ovirt-ova-query
packaging/ansible-runner-service-project/project/roles/ovirt-provider-ovn-driver
packaging/ansible-runner-service-project/project/roles/ovirt-to-vdsm-network
packaging/ansible-runner-service-project/project/roles/python-ver-detect
[WARNING]: While constructing a mapping from
/usr/share/ansible/roles/oVirt.metrics/roles/oVirt.logging/tasks/main.yml,
line 118, column 7, found a duplicate dict key
(when). Using last defined value only.
[WARNING]: While constructing a mapping from <unicode string>, line
118, column 7, found a duplicate dict key (when). Using last defined
value only.
Traceback (most recent call last):
File "/usr/bin/ansible-lint", line 11, in <module>
load_entry_point('ansible-lint==4.1.0', 'console_scripts', 'ansible-lint')()
File "/usr/lib/python3.6/site-packages/ansiblelint/__main__.py",
line 187, in main
matches.extend(runner.run())
File "/usr/lib/python3.6/site-packages/ansiblelint/__init__.py",
line 282, in run
skip_list=self.skip_list))
File "/usr/lib/python3.6/site-packages/ansiblelint/__init__.py",
line 174, in run
matches.extend(rule.matchtasks(playbookfile, text))
File "/usr/lib/python3.6/site-packages/ansiblelint/__init__.py",
line 84, in matchtasks
yaml = ansiblelint.utils.append_skipped_rules(yaml, text, file['type'])
File "/usr/lib/python3.6/site-packages/ansiblelint/utils.py", line
604, in append_skipped_rules
ruamel_data = yaml.load(file_text)
File "/usr/lib64/python3.6/site-packages/ruamel/yaml/main.py", line
266, in load
return constructor.get_single_data()
File "/usr/lib64/python3.6/site-packages/ruamel/yaml/constructor.py",
line 105, in get_single_data
return self.construct_document(node)
File "/usr/lib64/python3.6/site-packages/ruamel/yaml/constructor.py",
line 115, in construct_document
for dummy in generator:
File "/usr/lib64/python3.6/site-packages/ruamel/yaml/constructor.py",
line 1357, in construct_yaml_map
self.construct_mapping(node, data)
File "/usr/lib64/python3.6/site-packages/ruamel/yaml/constructor.py",
line 1266, in construct_mapping
self.check_mapping_key(node, key_node, maptyp, key, value)
File "/usr/lib64/python3.6/site-packages/ruamel/yaml/constructor.py",
line 265, in check_mapping_key
raise DuplicateKeyError(*args)
ruamel.yaml.constructor.DuplicateKeyError: while constructing a mapping
  in "<unicode string>", line 118, column 7:
      - name: Populate logging_outputs d ...
      ^ (line: 118)
found duplicate key "when" with value "collectd_default_files|d(true)"
(original value: "collect_ovirt_engine_log or collect_ovirt_vdsm_log")
  in "<unicode string>", line 135, column 7:
          when: collectd_default_files|d(true)
          ^ (line: 135)
To suppress this check see:
http://yaml.readthedocs.io/en/latest/api.html#duplicate-keys
Duplicate keys will become an error in future releases, and are errors
by default when using the new API.
make[1]: *** [Makefile:366: validations] Error 1
make[1]: Leaving directory '/home/nsoffer/src/ovirt-engine'
make: *** [Makefile:545: all-dev] Error 2
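For context, the failure mode is a task mapping in that main.yml carrying two "when" keys: Ansible's own loader merely warns and keeps the last one (the two [WARNING] lines above), while ruamel.yaml's new API, used by ansible-lint, raises DuplicateKeyError. A minimal reproduction sketch; the task body here is invented for illustration:

from ruamel.yaml import YAML

DOC = """\
- name: Populate logging_outputs
  debug: {}
  when: collect_ovirt_engine_log or collect_ovirt_vdsm_log
  when: collectd_default_files|d(true)
"""

yaml = YAML()  # the new API: duplicate keys are an error, not a warning
yaml.load(DOC)  # raises ruamel.yaml.constructor.DuplicateKeyError

The usual fix would be to merge both conditions into a single "when:" (a list of conditions is ANDed) in the oVirt.metrics role, rather than anything on the engine side.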
5 months, 2 weeks