Error: Adding new Host to ovirt-engine
by Ahmad Khiet
Hi,
Can't add a new host to the ovirt engine because of the following error:
2019-06-12 12:23:09,664 p=4134 u=engine | TASK [ovirt-host-deploy-facts : Set facts] *************************************
2019-06-12 12:23:09,684 p=4134 u=engine | ok: [10.35.1.17] => {
    "ansible_facts": {
        "ansible_python_interpreter": "/usr/bin/python2",
        "host_deploy_vdsm_version": "4.40.0"
    },
    "changed": false
}
2019-06-12 12:23:09,697 p=4134 u=engine | TASK [ovirt-provider-ovn-driver : Install ovs] *********************************
2019-06-12 12:23:09,726 p=4134 u=engine | fatal: [10.35.1.17]: FAILED! => {}
MSG:
The conditional check 'cluster_switch == "ovs" or (ovn_central is defined and ovn_central | ipaddr and ovn_engine_cluster_version is version_compare('4.2', '>='))' failed. The error was: The ipaddr filter requires python's netaddr be installed on the ansible controller
The error appears to be in '/home/engine/apps/engine/share/ovirt-engine/playbooks/roles/ovirt-provider-ovn-driver/tasks/configure.yml': line 3, column 5, but may be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- block:
    - name: Install ovs
      ^ here
2019-06-12 12:23:09,728 p=4134 u=engine | PLAY RECAP *********************************************************************
2019-06-12 12:23:09,728 p=4134 u=engine | 10.35.1.17 : ok=3 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
What's missing!?
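The error message itself names the missing piece: Ansible's `ipaddr` filter needs the `netaddr` Python package on the ansible controller (e.g. `pip install netaddr`). As a rough illustration of what the conditional's truth test is doing, here is a stdlib-only sketch using `ipaddress` in place of `netaddr` (the function name is mine, not Ansible's):

```python
import ipaddress

def looks_like_ip(value):
    """Rough stand-in for the truthy use of Ansible's ipaddr filter:
    return the value when it parses as an IP address, else False."""
    try:
        ipaddress.ip_address(value)
        return value
    except ValueError:
        return False

# The conditional in configure.yml only proceeds when ovn_central
# passes a check like this one.
print(looks_like_ip("10.35.1.17"))   # the host from the failing task
print(looks_like_ip("not-an-ip"))
```

Without netaddr installed, the filter cannot even run this check, so the whole conditional fails before evaluating anything.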
Thanks
--
Ahmad Khiet
Red Hat <https://www.redhat.com/>
akhiet(a)redhat.com
M: +972-54-6225629
<https://red.ht/sig>
1 year, 3 months
git backup
by Yedidyah Bar David
Hi all,
With the decision and ongoing process to migrate from gerrit to
github, we no longer have a backup - github used to be a backup
for gerrit, automatically synced.
Do we want a backup for github? Some options:
1. Do nothing. github as-is might be good enough, and it also has an
archive program [1]. AFAICT, right now none of the partners in this
program allow 'git clone'. There are plans to allow that in the
future.
2. Do something custom like we did so far with gerrit->github.
3. Find some service. Searching for 'github backup' finds lots of
options. I didn't check any.
Thoughts?
[1] https://archiveprogram.github.com/
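For option 2, a custom github->elsewhere sync can be as small as a periodic `git clone --mirror` / `git remote update` per repository. A minimal sketch (the repository list and backup path are made up for illustration):

```python
import subprocess
from pathlib import Path

def mirror_cmd(url, dest):
    """Build the git command that creates or refreshes a bare mirror."""
    dest = Path(dest)
    if (dest / "HEAD").exists():  # mirror already cloned once
        return ["git", "-C", str(dest), "remote", "update", "--prune"]
    return ["git", "clone", "--mirror", url, str(dest)]

def backup(repos, base="/backup/github"):
    # repos: {name: clone URL}; run from cron or a systemd timer
    for name, url in repos.items():
        subprocess.run(mirror_cmd(url, f"{base}/{name}.git"), check=True)

# backup({"ovirt-engine": "https://github.com/oVirt/ovirt-engine.git"})
```

A bare mirror keeps all refs (branches, tags, notes), which is what made the old gerrit->github sync a usable backup.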
Best regards,
--
Didi
2 years, 10 months
create a host using ssh
by Grace Chen
I am writing a script to create a host using a public ssh key.
The function I am using is add_using_ssh.
My ssh_key is already an Ssh type:
ssh_key = types.Ssh(public_key=my_ssh_key_string)
host = hosts_service.add_using_ssh(
types.Host(
name=name,
description='',
address=address,
root_password=None,
port=22,
ssh=ssh_key,
cluster=types.Cluster(
name=cluster,
),
),
)
When I do this from the GUI, it doesn't ask me to put in a root password, so I set
it to None (not sure if this is correct).
I got error:
ovirtsdk4.Error: Fault reason is "Request syntactically incorrect.". Fault
detail is "For correct usage, see:
ovirt-engine/apidoc#services/hosts/methods/add". HTTP response code is 400.
It looks like I didn't get all the parameters assigned?
Can anybody help with this?
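Not a definitive answer, but one thing to compare against: if I'm reading the apidoc page the error points to correctly, adding a host with public-key authentication is expressed by setting the ssh element's authentication_method to "publickey" (with root_password omitted entirely, and the engine's public key already present in the host's authorized_keys), rather than by sending the key itself. The equivalent REST request would look roughly like this (all values are placeholders):

```xml
POST /ovirt-engine/api/hosts

<host>
  <name>myhost</name>
  <address>myhost.example.com</address>
  <ssh>
    <authentication_method>publickey</authentication_method>
  </ssh>
  <cluster>
    <name>Default</name>
  </cluster>
</host>
```

If that is right, the combination of root_password=None plus an Ssh object carrying the key string may be what trips the "Request syntactically incorrect" validation.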
2 years, 10 months
Re: [oVirt/ovirt-release] Failed repoclosure job (Issue #55)
by Yedidyah Bar David
On Mon, Dec 27, 2021 at 2:38 PM github-actions[bot]
<notifications(a)github.com> wrote:
>
> ❌ The repoclosure CI job is still failing. Please investigate.
I think this started failing recently (on el9) due to a combination of:
- The engine is still not built on el9 in copr. I tried building
it myself there and so far got stuck (it failed at %add_maven_depmap) -
it seems like the fix is an existing pending patch [1].
- ovirt-engine-keycloak is now built also for el9, but requires an
engine package, so we fail on repoclosure.
Unless we have concrete plans to fix this very soon, I suggest
stopping the noise for now - either not building (or publishing?)
keycloak for el9, or not running repoclosure on el9, or perhaps
filtering the output so it doesn't fail if only these packages are
affected, etc.
[1] https://gerrit.ovirt.org/c/ovirt-engine/+/116811
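The output-filtering option could be a small wrapper around the repoclosure report. A sketch under assumptions: the allow-list content is hypothetical, and it assumes repoclosure's "package: <nevra> from <repo>" report lines, which should be checked against the real job output:

```python
# Hypothetical allow-list of packages we already know are broken on el9.
KNOWN_BROKEN = {"ovirt-engine-keycloak"}

def unexpected_failures(report_lines, known=KNOWN_BROKEN):
    """Scan repoclosure output and return offending package names that
    are NOT in the known-broken allow-list; the CI job would then fail
    only when this list is non-empty."""
    bad = set()
    for line in report_lines:
        if not line.startswith("package: "):
            continue
        nevra = line[len("package: "):].split(" from ")[0]
        name = nevra.rsplit("-", 2)[0]  # name-version-release.arch -> name
        if name not in known:
            bad.add(name)
    return sorted(bad)
```

That keeps repoclosure running on el9 while silencing the one failure we already understand.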
--
Didi
2 years, 10 months
OST fails with "Error loading module from /usr/share/ovirt-engine/modules/common/org/springframework/main/module.xml"
by Marcin Sobczyk
Hi All,
OST currently fails all the time during engine setup.
Here's a piece of ansible log that's seen repeatedly and I think
describes the problem:
11:07:54 E "engine-config",
11:07:54 E "-s",
11:07:54 E "OvfUpdateIntervalInMinutes=10"
11:07:54 E ],
11:07:54 E "delta": "0:00:01.142926",
11:07:54 E "end": "2021-12-24 11:06:37.894810",
11:07:54 E "invocation": {
11:07:54 E "module_args": {
11:07:54 E "_raw_params": "engine-config -s OvfUpdateIntervalInMinutes='10' ",
11:07:54 E "_uses_shell": false,
11:07:54 E "argv": null,
11:07:54 E "chdir": null,
11:07:54 E "creates": null,
11:07:54 E "executable": null,
11:07:54 E "removes": null,
11:07:54 E "stdin": null,
11:07:54 E "stdin_add_newline": true,
11:07:54 E "strip_empty_ends": true,
11:07:54 E "warn": false
11:07:54 E }
11:07:54 E },
11:07:54 E "item": {
11:07:54 E "key": "OvfUpdateIntervalInMinutes",
11:07:54 E "value": "10"
11:07:54 E },
11:07:54 E "msg": "non-zero return code",
11:07:54 E "rc": 1,
11:07:54 E "start": "2021-12-24 11:06:36.751884",
11:07:54 E "stderr": "Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false",
11:07:54 E "stderr_lines": [
11:07:54 E "Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false"
11:07:54 E ],
11:07:54 E "stdout": "Error loading module from /usr/share/ovirt-engine/modules/common/org/springframework/main/module.xml",
11:07:54 E "stdout_lines": [
11:07:54 E "Error loading module from /usr/share/ovirt-engine/modules/common/org/springframework/main/module.xml"
We do set some config values for the engine in OST when running
engine-setup. I tried commenting these out, but then the engine failed
the health check anyway:
"Status code was 503 and not [200]: HTTP Error 503: Service Unavailable"
The last working set of OST images was the one from Dec 23, 2021 2:05:08
AM. The first broken one is from Dec 24, 2021 2:05:09 AM. The shipped
ovirt-engine RPMs don't seem to contain any important changes between
these two sets, but AFAICS the newer ovirt-dependencies RPM did take in
a couple of patches that look suspicious [1][2][3]. The patches were
merged on November 16th, but it seems they were first used in that
broken set from Dec 24 (the one from Dec 23 seems to contain an
ovirt-dependencies RPM based on this [4] commit).
I wanted to try out an older version of ovirt-dependencies, but I think
they were wiped out from resources.ovirt.org.
I will disable the cyclic el8stream OST runs for now, because all of them
fail. If anyone is working on this and able to make a build with those
patches reverted and test it out, please ping me and I'll re-enable them.
Regards, Marcin
[1] https://gerrit.ovirt.org/c/ovirt-dependencies/+/114699
[2] https://gerrit.ovirt.org/c/ovirt-dependencies/+/113877
[3] https://gerrit.ovirt.org/c/ovirt-dependencies/+/114654
[4] https://gerrit.ovirt.org/c/ovirt-dependencies/+/117459
2 years, 11 months
Re: [ovirt-users] Re: using stop_reason as a vdsm hook trigger into the UI
by Nathanaël Blanchet
On 22/12/2021 at 14:56, John Taylor wrote:
> Maybe use the events api and search for the shutdown and reason in there?
>
> api/events;from={event_id}?search={query}" rel="events/search"/>
>
> -John
Exactly! That's precisely the workaround I already tested this morning as
well :)
And it works, because the event contains the stop_reason in the
description field before the VM is in the stopped status!
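The workaround then reduces to searching the events API for the shutdown event and pulling the reason out of its description. A minimal sketch of the parsing half; the exact wording of the engine's event description is an assumption here and should be checked against a real event:

```python
import re

# Hypothetical event description; verify the real format in your engine.
EXAMPLE = "VM vm01 powered off by admin@internal (Host: host1). Reason: clean"

def stop_reason_from_description(description):
    """Pull the stop reason out of a shutdown event's description,
    assuming the engine appends it as a trailing 'Reason: ...'."""
    m = re.search(r"Reason:\s*(.+)\s*$", description)
    return m.group(1).strip() if m else None
```

A hook (or any external script) could fetch recent events for the VM via /api/events with a search query and feed each description through a function like this.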
>
> On Tue, Dec 21, 2021 at 9:00 AM Nathanaël Blanchet <blanchet(a)abes.fr
> <mailto:blanchet@abes.fr>> wrote:
>
> Thanks for responding,
>
> On 20/12/2021 at 21:42, Nir Soffer wrote:
>> On Mon, Dec 20, 2021 at 9:59 PM Nathanaël Blanchet<blanchet(a)abes.fr> <mailto:blanchet@abes.fr> wrote:
>>
>> Adding the devel list since question is more about extending oVirt
>> ...
>>> The idea is to use the stop_reason element in the vm xml definition. But after hours, I realized that this element is written to the vm definition file only after the VM has been destroyed.
>> So you want to run the clean hook only if stop reason == "clean"?
>>
>> I think the way to integrate hooks is to define a custom property
>> in the vm, and check if the property was defined in the hook.
>>
>> For example how the localdisk hook is triggered:
>>
>> def main():
>> backend = os.environ.get('localdisk')
>> if backend is None:
>> return
>> if backend not in [BACKEND_LVM, BACKEND_LVM_THIN]:
>> hooking.log("localdisk-hook: unsupported backend: %r" % backend)
>> return
>> ...
>>
>> The hook runs only if the environment variable "localdisk" is defined
>> and configured properly.
>>
>> vdsm defines the custom properties as environment variables.
>>
>> On the engine side, you need to add a user defined property:
>>
>> engine-config -s UserDefinedVMProperties='localdisk=^(lvm|lvmthin)$'
>>
>> And configure a custom property with one of the allowed values, like:
>>
>> localdisk=lvm
>>
>> See vdsm_hooks/localdisk/README for more info.
>>
>> If you want to control the cleanup, by adding a "clean" stop reason only when
>> needed, this will not help, and vdsm hook is probably not the right way
>> to integrate this.
> Sure
>> If your intent is to clean a vm in some special events, but you want
>> to integrate
>> this in engine, maybe you should write an engine ui plugin?
>>
>> The plugin can show the running vms, and provide a clean button that will
>> shut down the vm and run your custom code.
> too complex for doing what I want
>> But maybe you don't need to integrate this in engine, and having a simple
>> script using ovirt engine API/SDK to shutdown the vm and run the cleanup
>> code.
> My playbook/scripts work already fine, but this is not my goal.
>> Nir
>>
> I will sum up my initial question: *Is there any way to get the
> value of "stop_reason" (value of the field in the UI) so as to
> reuse this variable into a vdsm hook?*
>
> Thank you
>
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> SIRE
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax 33 (0)4 67 54 84 14
> blanchet(a)abes.fr <mailto:blanchet@abes.fr>
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org <mailto:users@ovirt.org>
> To unsubscribe send an email to users-leave(a)ovirt.org
> <mailto:users-leave@ovirt.org>
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> <https://www.ovirt.org/privacy-policy.html>
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> <https://www.ovirt.org/community/about/community-guidelines/>
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OSM572SLKKA...
> <https://lists.ovirt.org/archives/list/users@ovirt.org/message/OSM572SLKKA...>
>
--
Nathanaël Blanchet
Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
2 years, 11 months
Re: [ovirt-users] using stop_reason as a vdsm hook trigger into the UI
by Nir Soffer
On Mon, Dec 20, 2021 at 9:59 PM Nathanaël Blanchet <blanchet(a)abes.fr> wrote:
Adding the devel list, since the question is more about extending oVirt
...
> The idea is to use the stop_reason element in the vm xml definition. But after hours, I realized that this element is written to the vm definition file only after the VM has been destroyed.
So you want to run the clean hook only if stop reason == "clean"?
I think the way to integrate hooks is to define a custom property
in the vm, and check if the property was defined in the hook.
For example how the localdisk hook is triggered:
def main():
backend = os.environ.get('localdisk')
if backend is None:
return
if backend not in [BACKEND_LVM, BACKEND_LVM_THIN]:
hooking.log("localdisk-hook: unsupported backend: %r" % backend)
return
...
The hook runs only if the environment variable "localdisk" is defined
and configured properly.
vdsm defines the custom properties as environment variables.
On the engine side, you need to add a user defined property:
engine-config -s UserDefinedVMProperties='localdisk=^(lvm|lvmthin)$'
And configure a custom property with one of the allowed values, like:
localdisk=lvm
See vdsm_hooks/localdisk/README for more info.
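The pattern above can be sketched end to end. Everything here is illustrative (the property name "clean_on_stop" is not an existing vdsm hook or engine property); it only shows the gating idiom the localdisk hook uses:

```python
import os

# Hypothetical custom property; it would first be registered with
# engine-config -s UserDefinedVMProperties='clean_on_stop=^(true|false)$'
PROPERTY = "clean_on_stop"

def should_run(env=None):
    """Return True only when the engine passed the custom property
    through to the hook as an environment variable set to 'true'."""
    env = os.environ if env is None else env
    return env.get(PROPERTY) == "true"

def main():
    if not should_run():
        return  # property absent or disabled: do nothing, like localdisk
    # ... cleanup logic would go here ...

if __name__ == "__main__":
    main()
```

VMs without the property simply skip the hook body, so it is safe to install cluster-wide.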
If you want to control the cleanup, by adding a "clean" stop reason only when
needed, this will not help, and vdsm hook is probably not the right way
to integrate this.
If your intent is to clean a vm in some special events, but you want
to integrate
this in engine, maybe you should write an engine ui plugin?
The plugin can show the running vms, and provide a clean button that will
shut down the vm and run your custom code.
But maybe you don't need to integrate this in engine, and having a simple
script using ovirt engine API/SDK to shutdown the vm and run the cleanup
code.
Nir
2 years, 11 months
selinux for Ganesha absent in master?
by lejeczek
Hi guys.
I get:
-> $ dnf install glusterfs-ganesha.x86_64
Mine and private                                2.9 MB/s | 3.0 kB     00:00
Error:
 Problem: cannot install the best candidate for the job
  - nothing provides nfs-ganesha-gluster >= 2.7.6 needed by glusterfs-ganesha-10.0-1.el9s.x86_64
that is with:
ovirt-release-master-4.5.0-0.0.master.20211206152702.gitebb0229.el9.noarch
many thanks, L.
2 years, 11 months