I want to remove inactive contributors from vdsm-master-maintainers.
I suggest a simple rule: anyone with no activity in the last 2 years, based on
git log, gets removed.
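For illustration only, here is a rough sketch of the kind of check I have in mind (the class name and output handling are made up, and a plain git one-liner would do just as well):

// Hypothetical sketch: collect author emails with at least one commit in
// the last two years; maintainers missing from this set would be removal
// candidates under the proposed rule.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.TreeSet;

public class ActiveAuthors {
    public static void main(String[] args) throws Exception {
        Process git = new ProcessBuilder("git", "log", "--since=2 years ago", "--format=%ae")
                .redirectErrorStream(true)
                .start();
        TreeSet<String> authors = new TreeSet<>();
        try (BufferedReader out = new BufferedReader(new InputStreamReader(git.getInputStream()))) {
            String line;
            while ((line = out.readLine()) != null) {
                authors.add(line.trim());
            }
        }
        git.waitFor();
        // Print the set of authors considered "active" under the rule.
        authors.forEach(System.out::println);
    }
}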
See the list below for current status:
Dear Ladies and Gentlemen!
I am currently working with the java-sdk and I encountered a problem.
When I try to retrieve the disk details, I get the following error:
wrong number of arguments
The error occurs in this line:
Disk currDisk = ovirtConnection.followLink(diskAttachment.disk());
The GetResponse looks quite OK (I inspected it and it looks fine).
The code is quite similar to what you published on GitHub (https://github.com/oVirt/ovirt-engine-sdk-java/blob/master/sdk/src/test/j... ).
Can you confirm the defect?
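In case it helps narrow this down, here is a sketch of a possible workaround that avoids followLink() and fetches the disk through its service directly. It assumes `connection` is an open Connection and that diskAttachment.disk().id() is populated:

import org.ovirt.engine.sdk4.Connection;
import org.ovirt.engine.sdk4.services.DiskService;
import org.ovirt.engine.sdk4.types.Disk;
import org.ovirt.engine.sdk4.types.DiskAttachment;

public class DiskLookup {
    // Fetch the disk referenced by an attachment via the top-level disks
    // service instead of followLink().
    static Disk lookup(Connection connection, DiskAttachment diskAttachment) {
        DiskService diskService = connection.systemService()
                .disksService()
                .diskService(diskAttachment.disk().id());
        return diskService.get().send().disk();
    }
}

If that call succeeds while followLink() still fails, it would point at the link-following code rather than the response itself.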
A VM snapshot can be made using:
SnapshotBuilder builder = new SnapshotBuilder().vm(VM).name("Snap1").description("Test");
It can be made with no memory by using a builder with persistMemorystate(false).
How can one be made with no disks? .diskAttachments(emptyList) isn't working.
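For reference, this is the shape of the call I would expect (just a sketch; `connection` and `vmId` are placeholders, and whether the engine honours an empty diskAttachments list is exactly the open question):

import java.util.Collections;

import org.ovirt.engine.sdk4.Connection;
import org.ovirt.engine.sdk4.builders.SnapshotBuilder;
import org.ovirt.engine.sdk4.services.SnapshotsService;
import org.ovirt.engine.sdk4.types.DiskAttachment;

public class DisklessSnapshot {
    // Ask for a snapshot with no memory and, ideally, no disks.
    static void create(Connection connection, String vmId) {
        SnapshotsService snapshotsService = connection.systemService()
                .vmsService()
                .vmService(vmId)
                .snapshotsService();
        snapshotsService.add()
                .snapshot(new SnapshotBuilder()
                        .description("Test")
                        .persistMemorystate(false)
                        .diskAttachments(Collections.<DiskAttachment>emptyList()))
                .send();
    }
}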
I've seen vdsmd leak memory (RSS increasing) for a while (brought it up
on the lists and opened a BZ ticket), and never gotten anywhere with
diagnosing or resolving it. I reinstalled my dev setup Friday with
up-to-date CentOS 7 (minimal install) and oVirt 4.3, with a hosted
engine on iSCSI (multipath if it matters).
In just 3 days, vdsmd on the host with the engine has gone up to an RSS
of 481 MB. It just continues to steadily increase. Watching with a
script, I see (this is VmRSS from /proc/$(pidof -x vdsmd)/status):
12:26:32.892 482076 +20
12:26:35.300 482096 +20
12:26:38.927 482112 +16
12:26:40.034 482128 +16
12:26:47.534 482132 +4
12:26:48.887 482144 +12
12:26:49.133 482156 +12
12:26:50.955 482172 +16
12:26:53.062 482176 +4
12:26:53.092 482204 +28
12:26:59.065 482212 +8
12:26:59.075 482228 +16
12:26:59.361 482244 +16
12:27:03.131 482252 +8
12:27:07.370 482256 +4
12:27:10.091 482272 +16
12:27:13.205 482296 +24
12:27:18.770 482308 +12
12:27:20.437 482332 +24
12:27:23.313 482340 +8
12:27:23.324 482364 +24
12:27:26.667 482372 +8
12:27:26.687 482376 +4
12:27:28.873 482388 +12
12:27:28.883 482392 +4
12:27:28.976 482396 +4
12:27:29.190 482408 +12
That's an increase of 352 kB in a minute.
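For reference, a minimal sketch of that kind of monitoring loop (illustrative only, not the actual script): it takes the PID as its only argument, polls VmRSS from /proc/<pid>/status once a second and prints the increase.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.time.LocalTime;

public class WatchRss {
    public static void main(String[] args) throws Exception {
        String status = "/proc/" + args[0] + "/status";
        long previous = -1;
        while (true) {
            long rss = -1;
            // VmRSS is reported in kB, e.g. "VmRSS:    482076 kB".
            for (String line : Files.readAllLines(Paths.get(status))) {
                if (line.startsWith("VmRSS:")) {
                    rss = Long.parseLong(line.replaceAll("\\D", ""));
                    break;
                }
            }
            if (previous >= 0 && rss > previous) {
                System.out.printf("%s %d +%d%n", LocalTime.now(), rss, rss - previous);
            }
            previous = rss;
            Thread.sleep(1000);
        }
    }
}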
There's got to be some way to diagnose this, but I don't know Python.
Chris Adams <cma(a)cmadams.net>
I am failing to add a new host to my env with the following message:
2019-12-25 10:57:17,587+02 ERROR
[41ec72c1-88e2-402b-8bb9-f38c678d0bf0] EVENT_ID: VDS_INSTALL_FAILED(505),
Host 10.35.0.158 installation failed. Failed to execute Ansible
null. Please check logs for more details:
The host is Fedora-30
Ansible version is - 2.9.1
Ansible runner version - 1.3.4
There are no logs at all at the specified location in the error message.
Did someone encounter that issue?
We are going to merge a series of patches to the master branch which
integrate ansible-runner with oVirt engine. Once the patches are merged
you will need to install a new package called ansible-runner-service-dev
and follow the instructions below so your dev-env keeps working
smoothly (all relevant info will also be in README.adoc):
1) sudo dnf update ovirt-release-master
2) sudo dnf install -y ansible-runner-service-dev
3) Edit the `/etc/ansible-runner-service/config.yaml` file as follows:
Where `$PREFIX` is the prefix of your development environment, which
you specified when compiling the engine.
4) Restart and enable ansible-runner-service:
# systemctl restart ansible-runner-service
# systemctl enable ansible-runner-service
That's it, your dev-env should start using the ansible-runner-service
for host-deployment etc.
Please note that only Fedora 30/31 and CentOS 7 were packaged, and are supported.
Hello Dev List,
Is there a Bugzilla entry for tracking the progress of oVirt integration with Gluster v7?
I have run into some issues that I would be happy to share.
Currently I'm running oVirt 4.3 with Gluster 7.0.
If the VMs are starting properly and they don't use cloud-init, then the issue is not oVirt specific but guest specific (Linux/Windows depending on guest OS).
So you should check:
1. Does your host have any networks out of sync (host's Network tab)?
If yes - put the server into maintenance and fix the issue (host's network tab)
2. Check each VM's configuration if it is defined to use CloudInit -> if yes, verify that cloudinit's service is running on the guest
3. Verify each problematic guest's network settings. If needed, set a static IP and try to ping another IP from the same subnet/Vlan .
Strahil Nikolov
On Dec 25, 2019 11:41, lifuqiong(a)sunyainfo.com wrote:
>>> Dear All:
>>> My oVirt engine manages two vdsm hosts with NFS storage on another NFS server; it worked fine for about three months.
>>> About 16 VMs had been created on one of the hosts (host_1.3; ip: 172.18.1.3), but host_1.3 was shut down unexpectedly around 2019-12-23 16:11. When the host and VMs were restarted, half of the VMs had lost or changed some of their configuration, such as their IPs (the VM name is 'zzh_Chain49_ACG_M' in the vdsm.log).
>>> The VM zzh_Chain49_ACG_M was created from a VM template through the REST API. The template's IP is 126.96.36.199; the VM zzh_Chain49_ACG_M was created by the oVirt REST API and its IP was changed to 188.8.131.52 via the REST API. But the IP reverted to the template's IP when the accident happened.
>>> The VM's OS is CentOS.
>>> Hope to get help from you soon. Thank you.