How to pass parameters between VDSM Hooks domxml in single run
by Vrgotic, Marko
Dear oVirt,
A while ago we discussed ways to change/update the content of domxml parameters during a certain action.
As I mentioned before, we have added the VDSM hook 60_nsupdate, which removes the DNS record entries when a VM is destroyed:
…
domxml = hooking.read_domxml()
name = domxml.getElementsByTagName('name')[0]
name = " ".join(name.nodeValue for name in name.childNodes if name.nodeType == name.TEXT_NODE)
nsupdate_commands = """server {server_ip}
update delete {vm_name}.example.com a
update delete {vm_name}.example.com aaaa
update delete {vm_name}.example.com txt
send
""".format(server_ip="172.16.1.10", vm_name=name)
…
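For context, here is a minimal sketch of how such a command block could be fed to the nsupdate binary, which reads its commands from stdin; the run_nsupdate helper below is hypothetical and only illustrates the idea, it is not the actual code of the 60_nsupdate hook:

import subprocess

def run_nsupdate(commands):
    # Pipe a block of nsupdate commands (like nsupdate_commands above) into nsupdate's stdin.
    proc = subprocess.Popen(["nsupdate"],
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate(commands.encode())
    if proc.returncode != 0:
        raise RuntimeError("nsupdate failed: %s" % err.decode())

# run_nsupdate(nsupdate_commands)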
The goal:
However, we did not want to remove the DNS records when a VM is only being migrated. Since migration is also considered a "destroy" action, we took the following approach:
* In the "before_vm_migrate_source" state, add a hook which writes the flag "is_migration" to the domxml
* Once a VM is scheduled for migration, this hook adds the flag "is_migration" to the domxml
* Once 60_nsupdate is triggered, it checks for the flag; if it is there, it skips the DNS record removal and only removes the "is_migration" flag from the VM's domxml (a rough sketch of this check is included after the snippet below)
…
domxml = hooking.read_domxml()
migration = domxml.createElement("is_migration")
domxml.getElementsByTagName("domain")[0].appendChild(migration)
logging.info("domxml_updated {}".format(domxml.toprettyxml()))
hooking.write_domxml(domxml)
…
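A rough sketch of the matching check in 60_nsupdate, as described in the list above; this is an assumption based on the snippets in this mail, not the hook's actual code:

import hooking

domxml = hooking.read_domxml()
flags = domxml.getElementsByTagName("is_migration")
if flags:
    # Migration: skip the DNS record removal and only drop the flag again.
    flags[0].parentNode.removeChild(flags[0])
    hooking.write_domxml(domxml)
else:
    # Real destroy: build and send the nsupdate commands as shown earlier.
    pass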
When executing this for the first time, we observed that the flag "is_migration" is added to the domxml:
<name>hookiesvm</name>
<uuid>fcfa66cb-b251-43a3-8e2b-f33b3024a749</uuid>
<metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ns1="http://ovirt.org/vm/1.0">
<ns0:qos/>
<ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion>
<ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>
<ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
<ovirt-vm:memGuaranteedSize type="int">1024</ovirt-vm:memGuaranteedSize>
<ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb>
...skipping...
<address bus="0x00" domain="0x0000" function="0x0" slot="0x09" type="pci"/>
</rng>
</devices>
<seclabel model="selinux" relabel="yes" type="dynamic">
<label>system_u:system_r:svirt_t:s0:c169,c575</label>
<imagelabel>system_u:object_r:svirt_image_t:s0:c169,c575</imagelabel>
</seclabel>
<seclabel model="dac" relabel="yes" type="dynamic">
<label>+107:+107</label>
<imagelabel>+107:+107</imagelabel>
</seclabel>
<is_migration/>
</domain>
However, the flag was not present once the 60_nsupdate hook was executed.
The question: how do we make sure that, when the domxml is updated, the update is visible/usable by the following hook within a single run? How do we pass these changes between hooks?
Kindly awaiting your reply.
— — —
Met vriendelijke groet / Kind regards,
Marko Vrgotic
Ansible setup host network fails on comparing sorted dictionaries
by Vrgotic, Marko
Dear oVirt,
While deploying ovirt.infra, the role ovirt.networks fails on Setup Host Networks with the following error:
TypeError: '<' not supported between instances of 'dict' and 'dict'
Full output:
TASK [ovirt.infra/roles/ovirt.networks : Setup host networks] ******************************************************************************************************************
The full traceback is:
Traceback (most recent call last):
File "/var/folders/40/w2c8fp151854mddz_4n3czwm0000gn/T/ansible_ovirt_host_network_payload_s0fx52mx/__main__.py", line 396, in main
(nic is None or host_networks_module.has_update(nics_service.service(nic.id)))
File "/var/folders/40/w2c8fp151854mddz_4n3czwm0000gn/T/ansible_ovirt_host_network_payload_s0fx52mx/__main__.py", line 289, in has_update
update = self.__compare_options(get_bond_options(bond.get('mode'), bond.get('options')), getattr(nic.bonding, 'options', []))
File "/var/folders/40/w2c8fp151854mddz_4n3czwm0000gn/T/ansible_ovirt_host_network_payload_s0fx52mx/__main__.py", line 247, in __compare_options
return sorted(get_dict_of_struct(opt) for opt in new_options) != sorted(get_dict_of_struct(opt) for opt in old_options)
TypeError: '<' not supported between instances of 'dict' and 'dict'
failed: [localhost] (item={'name': 'ovirt-staging-hv-02.avinity.tv', 'check': True, 'save': True, 'bond': {'name': 'bond28', 'mode': 4, 'interfaces': ['p2p1', 'p2p2']}, 'networks': [{'name': 'backbone', 'boot_protocol': 'static', 'address': '172.17.28.212', 'netmask': '255.255.255.0', 'version': 'v4'}]}) => {
"ansible_loop_var": "item",
"changed": false,
"invocation": {
"module_args": {
"bond": {
"interfaces": [
"p2p1",
"p2p2"
],
"mode": 4,
"name": "bond28"
},
"check": true,
"fetch_nested": false,
"interface": null,
"labels": null,
"name": "ovirt-staging-hv-02.avinity.tv",
"nested_attributes": [],
"networks": [
{
"address": "172.17.28.212",
"boot_protocol": "static",
"name": "backbone",
"netmask": "255.255.255.0",
"version": "v4"
}
],
"poll_interval": 3,
"save": true,
"state": "present",
"sync_networks": false,
"timeout": 180,
"wait": true
}
},
"item": {
"bond": {
"interfaces": [
"p2p1",
"p2p2"
],
"mode": 4,
"name": "bond28"
},
"check": true,
"name": "ovirt-staging-hv-02.avinity.tv",
"networks": [
{
"address": "172.17.28.212",
"boot_protocol": "static",
"name": "backbone",
"netmask": "255.255.255.0",
"version": "v4"
}
],
"save": true
},
"msg": "'<' not supported between instances of 'dict' and 'dict'"
}
Read vars_file 'vars/engine_vars.yml'
Read vars_file 'vars/secrets.yml'
Read vars_file 'vars/ovirt_infra_vars.yml'
Looking further into ovirt_host_network.py, I found that the issue is reported after the following line is executed:
return sorted(get_dict_of_struct(opt) for opt in new_options) != sorted(get_dict_of_struct(opt) for opt in old_options)
It seemed to be failing because there is no key by which to sort the dicts, so as a test I added sorting based on the 'name' key, which worked in a single test run:
return sorted((get_dict_of_struct(opt) for opt in new_options), key=lambda x: x["name"]) != sorted((get_dict_of_struct(opt) for opt in old_options), key=lambda x: x["name"])
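To illustrate in isolation why the original comparison breaks under Python 3 and why the key function avoids it (the option dicts below are made-up examples, not data taken from the module):

new_options = [{"name": "xmit_hash_policy", "value": "2"}, {"name": "mode", "value": "4"}]
old_options = [{"name": "mode", "value": "4"}, {"name": "xmit_hash_policy", "value": "2"}]

try:
    sorted(new_options) != sorted(old_options)
except TypeError as err:
    # Python 3 refuses to order dicts: '<' not supported between instances of 'dict' and 'dict'
    print(err)

# Sorting on an explicit, orderable key works:
changed = (sorted(new_options, key=lambda o: o["name"])
           != sorted(old_options, key=lambda o: o["name"]))
print(changed)  # False -> the bond options are considered equal, so no update is needed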
After rerunning the play with the changes above:
TASK [ovirt.infra/roles/ovirt.networks : Setup host networks] ******************************************************************************************************************
task path: /Users/mvrgotic/Git/ovirt-engineering/roles/ovirt.infra/roles/ovirt.networks/tasks/main.yml:25
Using module file /Users/mvrgotic/.local/share/virtualenvs/ovirt-engineering-JaxzXThh/lib/python3.7/site-packages/ansible/modules/cloud/ovirt/ovirt_host_network.py
Pipelining is enabled.
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: mvrgotic
<127.0.0.1> EXEC /bin/sh -c '/Users/mvrgotic/.local/share/virtualenvs/ovirt-engineering-JaxzXThh/bin/python3 && sleep 0'
changed: [localhost] => (item={'name': 'ovirt-staging-hv-02.avinity.tv', 'check': True, 'save': True, 'bond': {'name': 'bond28', 'mode': 4, 'interfaces': ['p2p1', 'p2p2']}, 'networks': [{'name': 'backbone', 'boot_protocol': 'static', 'address': '172.17.28.212', 'netmask': '255.255.255.0', 'version': 'v4'}]}) => {
"ansible_loop_var": "item",
"changed": true,
"host_nic": {
"ad_aggregator_id": 6,
"bonding": {
"ad_partner_mac": {
"address": "44:31:92:7c:b3:11"
},
"options": [
{
"name": "mode",
"type": "Dynamic link aggregation (802.3ad)",
"value": "4"
},
{
"name": "xmit_hash_policy",
"value": "2"
}
],
"slaves": [
{
"href": "/ovirt-engine/api/hosts/aa1f7a65-a867-436a-9a7f-264068f4bdea/nics/01703220-570c-44f5-9729-6717ceead304",
"id": "01703220-570c-44f5-9729-6717ceead304"
},
{
"href": "/ovirt-engine/api/hosts/aa1f7a65-a867-436a-9a7f-264068f4bdea/nics/e879c1c6-065d-4742-b282-fcdb477f95a7",
"id": "e879c1c6-065d-4742-b282-fcdb477f95a7"
}
]
},
"boot_protocol": "static",
"bridged": false,
"custom_configuration": false,
"host": {
"href": "/ovirt-engine/api/hosts/aa1f7a65-a867-436a-9a7f-264068f4bdea",
"id": "aa1f7a65-a867-436a-9a7f-264068f4bdea"
},
"href": "/ovirt-engine/api/hosts/aa1f7a65-a867-436a-9a7f-264068f4bdea/nics/abce07fa-cb7f-46f2-b967-69d1feaa4075",
"id": "abce07fa-cb7f-46f2-b967-69d1feaa4075",
"ip": {
"address": "172.17.28.212",
"netmask": "255.255.255.0",
"version": "v4"
},
"ipv6": {
"gateway": "::",
"version": "v6"
},
"ipv6_boot_protocol": "none",
"mac": {
"address": "b4:96:91:3f:47:1c"
},
"mtu": 9000,
"name": "bond28",
"network": {
"href": "/ovirt-engine/api/networks/f3ef80cf-bf3a-4fa5-aed9-7d9e7455f804",
"id": "f3ef80cf-bf3a-4fa5-aed9-7d9e7455f804"
},
"network_labels": [],
"properties": [],
"speed": 10000000000,
"statistics": [],
"status": "up"
},
"id": "abce07fa-cb7f-46f2-b967-69d1feaa4075",
"invocation": {
"module_args": {
"bond": {
"interfaces": [
"p2p1",
"p2p2"
],
"mode": 4,
"name": "bond28"
},
"check": true,
"fetch_nested": false,
"interface": null,
"labels": null,
"name": "ovirt-staging-hv-02.avinity.tv",
"nested_attributes": [],
"networks": [
{
"address": "172.17.28.212",
"boot_protocol": "static",
"id": "3e40ff7d-5384-45f1-b036-13e6f91aff56",
"name": "backbone",
"netmask": "255.255.255.0",
"version": "v4"
}
],
"poll_interval": 3,
"save": true,
"state": "present",
"sync_networks": false,
"timeout": 180,
"wait": true
}
},
"item": {
"bond": {
"interfaces": [
"p2p1",
"p2p2"
],
"mode": 4,
"name": "bond28"
},
"check": true,
"name": "ovirt-staging-hv-02.avinity.tv",
"networks": [
{
"address": "172.17.28.212",
"boot_protocol": "static",
"name": "backbone",
"netmask": "255.255.255.0",
"version": "v4"
}
],
"save": true
}
}
Read vars_file 'vars/engine_vars.yml'
Read vars_file 'vars/secrets.yml'
Read vars_file 'vars/ovirt_infra_vars.yml'
The changes resulted in the configuration being applied exactly as intended.
I am not sure whether this was the original intention, but please let me know if the change I made is in line with how the sorted comparison was initially meant to work.
My pipenv setup:
Python 3.7
ansible==2.8.6
asn1crypto==1.1.0
bcrypt==3.1.7
cffi==1.13.1
cryptography==2.8
dnspython==1.16.0
ipaddress==1.0.23
Jinja2==2.10.3
jmespath==0.9.4
lxml==4.4.1
MarkupSafe==1.1.1
netaddr==0.7.19
ovirt-engine-sdk-python==4.3.3
paramiko==2.6.0
passlib==1.7.1
pyasn1==0.4.5
pycparser==2.19
pycurl==7.43.0.3
PyNaCl==1.3.0
PyYAML==5.1.2
six==1.12.0
Ansible vars and play:
=================================
host_networks:
- name: ovirt-staging-hv-02.avinity.tv
check: true
save: true
bond:
name: bond28
mode: 4
interfaces:
- p2p1
- p2p2
networks:
- name: backbone
boot_protocol: static
address: 172.17.28.212
netmask: 255.255.255.0
version: v4
=================================
- name: Setup host networks
ovirt_host_network:
auth: "{{ ovirt_auth }}"
name: "{{ item.name }}"
state: "{{ item.state | default(omit) }}"
check: "{{ item.check | default(omit) }}"
save: "{{ item.save | default(omit) }}"
bond: "{{ item.bond | default(omit) }}"
networks: "{{ item.networks | default(omit) }}"
labels: "{{ item.labels | default(omit) }}"
interface: "{{ item.interface | default(omit) }}"
with_items:
- "{{ host_networks | default([]) }}"
tags:
- host_networks
- networks
====================================
[ANN] oVirt 4.3.7 First Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.7 First Release Candidate for testing, as of October 18th, 2019.
This update is a release candidate of the seventh in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
* oVirt Node 4.3 (available for x86_64 only) has been built consuming
CentOS 7.7 Release
See the release notes [1] for known issues, new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available
Additional Resources:
* Read more about the oVirt 4.3.7 release highlights:
http://www.ovirt.org/release/4.3.7/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.7/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>*Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.*
Re: Disk encryption in oVirt
by Strahil
Hi,
Maybe Gluster encryption would also cover your needs in a hyperconverged setup?
Best Regards,
Strahil Nikolov
On Oct 22, 2019 14:43, Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>
>
>
> On Tue, Oct 22, 2019 at 12:32, MIMMIK _ <dmarini(a)it.iliad.com> wrote:
>>
>> Is there a way to get full disk encryption on virtual disks used by VMs in oVirt?
>
>
> You can use an encrypted file system managed from within the VM itself if the OS supports it (dm-crypt with LUKS on GNU/Linux, BitLocker on Windows), which is the most secure choice on this topic.
> Encrypting the storage hosting the VM disks won't help: once you access the storage to boot the VM, the disks will be accessible without encryption.
>
>
>
>
>>
>>
>> Regards
>> _______________________________________________
>> Users mailing list -- users(a)ovirt.org
>> To unsubscribe send an email to users-leave(a)ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3FMGDSRBHRW...
>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
> sbonazzo(a)redhat.com
>
> Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.
Re: oVirt/RHV 4.3.6: How to boot a Q35/UEFI VM from CD?
by Strahil
I have successfully managed to boot and install RHEL & openSUSE with SecureBoot, but UEFI variables are not saved, which requires some workarounds (there is a bug open for that).
What exactly is your problem?
Best Regards,
Strahil Nikolov
On Oct 29, 2019 10:32, mathieu.simon(a)gymneufeld.ch wrote:
>
> Hi
>
> I've been experimenting with UEFI on RHV 4.3.6 using Windows Server 2019 but I guess posting on the oVirt list might reach a wider audience.
> I've used UEFI guests on other KVM-based virtualization platforms, but in RHV 4.3.6 I have been unable to detect a UEFI-bootable CD using Q35 VMs, with or without SecureBoot, since it doesn't even propose to load the UEFI loader from within OVMF. It seems to show the CD-ROM drive, but I cannot boot the disc from a file within OVMF:
>
> Are there any hints? (I've tried finding something more detailed in the Docs though.) I'd like to investigate UEFI and SecureBoot on oVirt/RHV even if it's still considered a tech preview. (I've been using KVM-based UEFI VMs for years already on other platforms and distributions with no particular issues, rather advantages overall.)
>
> Regards
> Mathieu
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/AVHRXTQ4TSZ...
Re: Trying to add a LUN to a host and use that, instead of Gluster, as the datastore for my VM's.
by Strahil
I'm not sure, but you can try with a replica 1 Gluster volume.
Later, that Gluster volume can be changed to a replica 2 arbiter 1 volume or, even better, a replica 3 volume.
Best Regards,
Strahil Nikolov
On Oct 10, 2019 13:04, Tom <tk(a)mdevsys.com> wrote:
>
> I take it then theres no way to do so without NFS?
>
> Sent from my iPhone
>
> > On Oct 10, 2019, at 2:53 AM, Strahil <hunter86_bg(a)yahoo.com> wrote:
> >
> > You can create an NFS on that host and export to itself that local storage...
> >
> > Best Regards,
> > Strahil Nikolov
> > On Oct 9, 2019 07:43, TomK <tomkcpr(a)mdevsys.com> wrote:
> >>
> >> Managed to reverse it through the options. Had to deactivate the custom
> >> local cluster.
> >>
> >> Now I have my GlusterFS back and want to try and add some local storage
> >> to each host under the same DC, Cluster and Storage Domain. So
> >> effectively I will have two storage locations. One on my GlusterFS and
> >> the other on the local storage I've just defined.
> >>
> >> Appears I can't make them coexist with oVirt. Either the host is placed
> >> in a separate cluster with local storage, or I move it back to the
> >> GlusterFS.
> >>
> >> Appears I can't have two usable storage types under one host. Is this
> >> correct?
> >>
> >> What I'm looking for is in the image and shared earlier.
> >>
> >> Cheers,
> >> TK
> >>
> >>> On 10/8/2019 11:03 PM, TomK wrote:
> >>> I'm working to reverse this scenario.
> >>>
> >>> My storage domain still exists. However I can't activate it. The
> >>> Manage button is greyed out without clear indication as to why.
> >>> Difficult to tell what my next move should be.
> >>>
> >>> Anyway to find out?
> >>>
> >>> Cheers,
> >>> TK
> >>>
> >>>> On 10/7/2019 8:25 AM, TomK wrote:
> >>>> Allright.
> >>>>
> >>>> So I followed this and configured local storage. It had some default
> >>>> names that I changed. It would have been either the default name, or
> >>>> one I choose. In either case it would be a different name then the
> >>>> Gluster storage domain the host was a part of anyway.
> >>>>
> >>>> Once the process completed for both hosts, I noticed the Gluster
> >>>> volume storage domain was offline and the gluster volume was gone. I
> >>>> can't seem to add that back in anymore. I get:
> >>>>
> >>>> "Error while executing action DisconnectStorageServerConnection: Error
> >>>> storage server disconnection"
> >>>>
> >>>> Guessing I can't have local storage in addition to Gluster on the same
> >>>> hosts and available for VM's?
> >>>>
> >>>> Reason why I need that is that Gluster is slow but provides redundancy
> >>>> and live migration. But I also wanted direct storage for VM's
> >>>> requiring faster IO. So here's my scenario:
> >>>>
> >>>> host01
> >>>> /dev/sda OS
> >>>> /dev/sdb 4TB (For Gluster)
> >>>> /dev/sdc 4TB (For Local Storage)
> >>>>
> >>>> host02
> >>>> /dev/sda OS
> >>>> /dev/sdb 4TB (For Gluster)
> >>>> /dev/sdc 4TB (For Local Storage)
> >>>>
> >>>> I would like to have:
> >>>>
> >>>> 1) GlusterFS volume available to oVirt via the two /dev/sdb drives in
> >>>> both physical hosts.
> >>>> 2) Two locally attached LUN's, each separate and local to that
> >>>> physical host.
> >>>>
> >>>> Is this possible?
> >>>>
> >>>> Cheers,
> >>>> TK
> >>>>
> >>>>> On 10/6/2019 12:38 AM, Strahil wrote:
> >>>>> Hi Tom,
> >>>>>
> >>>>> Have you checked
> >>>>> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/...
> >>>>> ?
> >>>>>
> >>>>> Best Regards,
> >>>>> Strahil Nikolov
> >>>>> On Oct 6, 2019 06:26, TomK <tomkcpr(a)mdevsys.com> wrote:
> >>>>>>
> >>>>>> Hey All,
> >>>>>>
> >>>>>> I've added a 4TB LUN to my physical storage and now I want to use that,
> >>>>>> instead of GlusterFS as my datastore for VM's on that physical. How do
> >>>>>> I do this?
> >>>>>>
> >>>>>> I've tried a number of options, including adding LUN's but appears I
> >>>>>> can
> >>>>>> only add them to VM's directly but not use them as my datastores
> >>
Re: oVirt 4.3.5/6 HC: Reinstall fails from WEB UI
by Strahil
SSH to the host and check the status of:
sanlock.service
supervdsmd.service
vdsmd.service
ovirt-ha-broker.service
ovirt-ha-agent.service
For example, if sanlock is working but supervdsmd is not, try to restart it.
If it fails, run:
systemctl cat supervdsmd.service
And execute the commands in sections:
ExecStartPre
ExecStart
And report any issues, then follow the next service in the chain.
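A minimal sketch of that check-and-restart loop, assuming plain systemctl calls; the script below is only an illustration, not something shipped with oVirt:

import subprocess

SERVICES = ["sanlock.service", "supervdsmd.service", "vdsmd.service",
            "ovirt-ha-broker.service", "ovirt-ha-agent.service"]

for svc in SERVICES:
    if subprocess.call(["systemctl", "is-active", "--quiet", svc]) != 0:
        print("%s is not active, trying to restart it" % svc)
        if subprocess.call(["systemctl", "restart", svc]) != 0:
            # If the restart fails, inspect the unit with `systemctl cat <svc>` and run
            # its ExecStartPre/ExecStart commands by hand, as described above.
            print("restart of %s failed, inspect it with: systemctl cat %s" % (svc, svc))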
Best Regards,
Strahil Nikolov
On Oct 16, 2019 23:52, adrianquintero(a)gmail.com wrote:
>
> Hi,
> I am trying to re-install a host from the web UI in oVirt 4.3.5, but it always fails and goes to "Setting Host state to Non-Operational"
>
> From the engine.log I see the following WARN/ERROR:
> 2019-10-16 16:32:57,263-04 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-43) [491c8bd9] EVENT_ID: VDS_SET_NONOPERATIONAL_DOMAIN(522), Host host1.example.com cannot access the Storage Domain(s) attached to the Data Center Default-DC1. Setting Host state to Non-Operational.
> 2019-10-16 16:32:57,271-04 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-43) [491c8bd9] EVENT_ID: VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host host1.example.com.There is no other host in the data center that can be used to test the power management settings.
> 2019-10-16 16:32:57,276-04 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-43) [491c8bd9] EVENT_ID: CONNECT_STORAGE_POOL_FAILED(995), Failed to connect Host host1.example.com to Storage Pool Default-DC1
> 2019-10-16 16:35:06,151-04 ERROR [org.ovirt.engine.core.bll.InitVdsOnUpCommand] (EE-ManagedThreadFactory-engine-Thread-137245) [] Could not connect host 'host1.example.com' to pool 'Default-DC1': Error storage pool connection: (u"spUUID=7d3fb14c-ebf0-11e9-9ee5-00163e05e135, msdUUID=4b87a5de-c976-4982-8b62-7cffef4a22d8, masterVersion=1, hostID=1, domainsMap={u'8c2df9c6-b505-4499-abb9-0d15db80f33e': u'active', u'4b87a5de-c976-4982-8b62-7cffef4a22d8': u'active', u'5d9f7d05-1fcc-4f99-9470-4e57cd15f128': u'active', u'fe24d88e-6acf-42d7-a857-eaf1f8deb24a': u'active'}",)
> 2019-10-16 16:35:06,248-04 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-91) [690baf86] EVENT_ID: VDS_SET_NONOPERATIONAL_DOMAIN(522), Host host1.example.com cannot access the Storage Domain(s) attached to the Data Center Default-DC1. Setting Host state to Non-Operational.
> 2019-10-16 16:35:06,256-04 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-91) [690baf86] EVENT_ID: VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host host1.example.com.There is no other host in the data center that can be used to test the power management settings.
> 2019-10-16 16:35:06,261-04 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-91) [690baf86] EVENT_ID: CONNECT_STORAGE_POOL_FAILED(995), Failed to connect Host host1.example.com to Storage Pool Default-DC1
> 2019-10-16 16:37:46,011-04 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connection timeout for host 'host1.example.com', last response arrived 1501 ms ago.
> 2019-10-16 16:41:57,095-04 ERROR [org.ovirt.engine.core.bll.InitVdsOnUpCommand] (EE-ManagedThreadFactory-engine-Thread-137527) [17f3aadd] Could not connect host 'host1.example.com' to pool 'Default-DC1': Error storage pool connection: (u"spUUID=7d3fb14c-ebf0-11e9-9ee5-00163e05e135, msdUUID=4b87a5de-c976-4982-8b62-7cffef4a22d8, masterVersion=1, hostID=1, domainsMap={u'8c2df9c6-b505-4499-abb9-0d15db80f33e': u'active', u'4b87a5de-c976-4982-8b62-7cffef4a22d8': u'active', u'5d9f7d05-1fcc-4f99-9470-4e57cd15f128': u'active', u'fe24d88e-6acf-42d7-a857-eaf1f8deb24a': u'active'}",)
> 2019-10-16 16:41:57,199-04 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-22) [508ddb44] EVENT_ID: VDS_SET_NONOPERATIONAL_DOMAIN(522), Host host1.example.com cannot access the Storage Domain(s) attached to the Data Center Default-DC1. Setting Host state to Non-Operational.
> 2019-10-16 16:41:57,211-04 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-22) [508ddb44] EVENT_ID: VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host host1.example.com.There is no other host in the data center that can be used to test the power management settings.
> 2019-10-16 16:41:57,216-04 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-22) [508ddb44] EVENT_ID: CONNECT_STORAGE_POOL_FAILED(995), Failed to connect Host host1.example.com to Storage Pool Default-DC1
>
> Any ideas why this might be happening?
> I have researched, however I have not been able to find a solution.
>
> thanks,
>
> Adrian
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3XAKJ23DSQV...
Drag and Drop - Setup Host Networks
by ccesario@blueit.com.br
It seems that the drag-and-drop function does not work as expected in the Setup Host Networks screen.
It is not possible to remove a NIC interface from a bond config.
When I try to move an interface outside the bond config, the whole block is moved instead of only the specific interface.
Screenshot about this:
https://pasteboard.co/IEx5Cun.png
In this case, I'm trying to remove the eno5 interface from the bond0 config, but it is not possible.
Regards
[ANN] oVirt 4.3.7 Second Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.7 Second Release Candidate for testing, as of October 31st, 2019.
This update is a release candidate of the seventh in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
* oVirt Node 4.3 (available for x86_64 only) has been built consuming
CentOS 7.7 Release
See the release notes [1] for known issues, new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available
Additional Resources:
* Read more about the oVirt 4.3.7 release highlights:
http://www.ovirt.org/release/4.3.7/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.7/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>*Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.
<https://mojo.redhat.com/docs/DOC-1199578>*