Cinder and Managed Block Storage
by Jorge Visentini
Is it still possible to configure cinder to use Managed Block Storage? If
yes, is there a particular setting or can I just stick to Cinder's default
setting?
Thanks all.
--
Att,
Jorge Visentini
+55 55 98432-9868
1 year, 9 months
Had issue with storage and now storage domain won't mount VMs are in unknown status
by jsmith1299@live.com
Hi All,
We have an odd setup in out environment but each storage data center has one host and one storage domain.
We had an issue with the storage domain attached to a host. After the reboot I am seeing in the vdsm logs over and over again vmrecovery
2023-06-09 21:01:30,419+0000 INFO (periodic/2) [vdsm.api] START repoStats(domains=()) from=internal, task_id=40f5b198-cb82-4ba2-8c20-b8cee34a7f47 (api:48)
2023-06-09 21:01:30,420+0000 INFO (periodic/2) [vdsm.api] FINISH repoStats return={} from=internal, task_id=40f5b198-cb82-4ba2-8c20-b8cee34a7f47 (api:54)
2023-06-09 21:01:30,810+0000 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList(options=None) from=internal, task_id=74b1a1cf-fab1-4918-b0da-b3fd152d9d1a (api:48)
2023-06-09 21:01:30,811+0000 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=74b1a1cf-fab1-4918-b0da-b3fd152d9d1a (api:54)
2023-06-09 21:01:30,811+0000 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:723)
I've also checked the firewall and it is still disabled.
systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/libvirtd.service.d
└─unlimited-core.conf
Active: active (running) since Fri 2023-06-09 20:51:11 UTC; 16min ago
Docs: man:libvirtd(8)
https://libvirt.org
Main PID: 4984 (libvirtd)
Tasks: 17 (limit: 32768)
Memory: 39.7M
CGroup: /system.slice/libvirtd.service
└─4984 /usr/sbin/libvirtd --listen
Jun 09 20:51:11 hlkvm01 systemd[1]: Starting Virtualization daemon...
Jun 09 20:51:11 hlkvm01 systemd[1]: Started Virtualization daemon.
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2023-06-09 20:53:11 UTC; 14min ago
Process: 10496 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 10587 (vdsmd)
Tasks: 39
Memory: 79.5M
CGroup: /system.slice/vdsmd.service
└─10587 /usr/bin/python2 /usr/share/vdsm/vdsmd
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|596001c3-33e7-44a4-bdf9-0b53ab1dd810' args={'596001c3-33e7-44a4-bdf9-0b53ab1dd810': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-2283890943663580625', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '596001c3-33e7-44a4-bdf9-0b53ab1dd810', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893978', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|87155499-1e10-4228-aa69-7c487007746e' args={'87155499-1e10-4228-aa69-7c487007746e': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-5453960159391982695', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '87155499-1e10-4228-aa69-7c487007746e', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893973', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|0ec7a66d-fac2-4a4a-a939-e05fc7b097b7' args={'0ec7a66d-fac2-4a4a-a939-e05fc7b097b7': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-1793949836195780752', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '0ec7a66d-fac2-4a4a-a939-e05fc7b097b7', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893976', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|9c8802c3-c7c9-473c-bbfb-abb0bd0f8fdb' args={'9c8802c3-c7c9-473c-bbfb-abb0bd0f8fdb': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-1144924804541449415', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '9c8802c3-c7c9-473c-bbfb-abb0bd0f8fdb', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893971', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|f799c326-9969-4892-8d67-3b1229baf0ef' args={'f799c326-9969-4892-8d67-3b1229baf0ef': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '5564598485369155833', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': 'f799c326-9969-4892-8d67-3b1229baf0ef', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893980', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|e9311d9f-d770-458b-b5ad-cdc2eb35f1bd' args={'e9311d9f-d770-458b-b5ad-cdc2eb35f1bd': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-5622951617346770490', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': 'e9311d9f-d770-458b-b5ad-cdc2eb35f1bd', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893972', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|1fc4ddad-203f-4cdf-9cb3-c3d66fb97c87' args={'1fc4ddad-203f-4cdf-9cb3-c3d66fb97c87': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-1397731328049024241', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '1fc4ddad-203f-4cdf-9cb3-c3d66fb97c87', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893981', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|321183ed-b0a6-42c7-bbee-2ad46a5f37ae' args={'321183ed-b0a6-42c7-bbee-2ad46a5f37ae': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '4398712824561987912', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '321183ed-b0a6-42c7-bbee-2ad46a5f37ae', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893970', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|731a11e8-62ba-4639-bdee-8c44b5790d82' args={'731a11e8-62ba-4639-bdee-8c44b5790d82': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-1278467655696539707', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '731a11e8-62ba-4639-bdee-8c44b5790d82', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893977', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
Jun 09 20:53:14 hlkvm01 vdsm[10587]: WARN Not ready yet, ignoring event '|virt|VM_status|411a97e6-41c7-473e-819b-04aa10bc2bf0' args={'411a97e6-41c7-473e-819b-04aa10bc2bf0': {'status': 'Down', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port': '-1'}], 'hash': '-11964682092647781', 'exitMessage': 'VM terminated with error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId': '411a97e6-41c7-473e-819b-04aa10bc2bf0', 'exitReason': 1, 'cpuUsage': '0.00', 'elapsedTime': '893975', 'cpuSys': '0.00', 'timeOffset': '0', 'clientIp': '', 'exitCode': 1}}
This has been going on for hours. On the management VM I am seeing the following over and over again
2023-06-09 13:59:25,129-07 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM hlkvm01 command Get Host Capabilities failed: Message timeout which can be caused by communication issues
2023-06-09 13:59:25,129-07 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [] Unable to RefreshCapabilities: VDSNetworkE
xception: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues
I am able to restart the Host from the management VM (Web Console). When I try to put the host in maintenance mode I get "Error while executing action. Cannot switch Host to Maintenance mode. Hose still has running VMs on it and is in Non Responsive state"
If I try to set the "Confirm host has been rebooted" I get an error saying that another power management action is already in progress. Can someone please help me out here? Is there a way to set the manage for all of the VMs to down? Anything I can do to get the storage domain back up?
Thanks
1 year, 9 months
how to expand a arbitrated-distributed-replicated volume
by Hans Kuhl
Hello all,
how can I expand a arbitrated-distributed-replicated volume without adding additional arbiter bricks?
my current gluster config is:
Volume Name: data
Type: Distributed-Replicate
Volume ID: dcf75cf7-1937-4ce9-a7d0-46a56a3be4b4
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gl1.lan:/gluster_bricks/sdc/sdc
Brick2: gl2.lan:/gluster_bricks/sdc/sdc
Brick3: gl3.lan:/gluster_bricks/sdx/sdx (arbiter)
Brick4: gl2.lan:/gluster_bricks/sdd/sdd
Brick5: gl3.lan:/gluster_bricks/sdc/sdc
Brick6: gl1.lan:/gluster_bricks/sdx/sdx (arbiter)
Brick7: gl3.lan:/gluster_bricks/sdd/sdd
Brick8: gl1.lan:/gluster_bricks/sdd/sdd
Brick9: gl2.lan:/gluster_bricks/sdx/sdx (arbiter)
1 year, 9 months
Non responsive host (4.3.10)
by Maria Souvalioti
Hello everyone!
Due to a recent major power outage in my area I now have an unresponsive self hosted host in an environment of 3 self hosted hosts. There's one vm stuck on there as well as some metadata I guess from when hosted engine was running there (before the power went down).
I'm running 4.3.10 ovirt node with 3 nodes and GlusterFS, no arbiter, and I'm using it to provide services to our clients i.e. DNS, web sites, wikis, ticketing etc. and I cannot shut them down.
The ovirt engine is up and running and I can manage all the other VMs that run on the other hosts through the web gui.
The unresponsive host replies only to ICMP requests; in every other sense it's dead, no ssh, no gluster bricks, no console, nothing.
I tried to place the faulty host in maintenance, using the option to stop glusterd, but wasn't able to as the engine won't let the host go into maintenance mode because it thinks the host has running VMs on it. The host won't go into maintenance even if I chose the "Ignore gluster quorum and self-heal validations" option.
I spent last week creating a backup environment were I copied the VMs, to have somewhere to run them in case something goes terribly wrong with the systems or the gluster in the production system.
I'm thinking of using the global maintenance mode and then shutting down the engine itself with *hosted-engine --vm-shutdown* and rebooting the affected host.
Should I remove the host from the cluster and then re-add it or should I do something else?
Thanks for any of your help!
1 year, 9 months
Renew the engine certificate - egine-setup
by Fabrice Soler
Hello,
We have an standalone engine version 4.3.6 and we try to renew the
engine certificate.
We have followed the official documentation but the engine-setup never
ask us the question "Renew certificate ?"
Do you have an idea ?
Sincerely,
--
Carte de visite
DSI-DIT
*Fabrice SOLER*
Responsable du Département des Infrastructures Techniques (DIT)
/Rectorat de la Guadeloupe - BP 480/
97183 Les Abymes cedex
*Téléphone : *0590 47 8321
+590 690 335 564
*Courriel*:
www.ac-guadeloupe.fr
1 year, 9 months
Rename Virtual Machine
by lavi.buchnik@exlibrisgroup.com
Hi,
We have this scenario:
- VM running Centos-7
- VM running Oracle-8
Then, we shutdown and delete Centos-7 VM (verified by API and UI)
Then, we rename Oracle-8 VM (via API) to have the name of the old Centos-7 VM and then stop/start Oracle-8 VM.
Rename is working and VM is coming up (status is "up" from API and also in the UI).
The problem is that Oracle-8 VM is not really up. it is hanging. not replying to PING nor to SSH.
Seems like the root cause is this:
- Oracle-8 VM name is indeed Centos-7 VM. but its FQDN (seen in UI) is still its Original name.
- So when getting both Centos-7 VM name and Oracle-8 VM name status, I get an answer (via API) about both of them:
Both reply with "up"
I'm using a simple API to rename the VM as followed (vm_name is Oracle-8 VM name, and new_name is Centos-7 VM name):
vm_id_obj = self.get_vm_id(vm_name)
updated_vm = vm_id_obj.update(
vm=types.Vm(
name=new_name
)
)
Thanks,
Lavi
1 year, 9 months
VM disks Block device
by hemak88@gmail.com
LVM is used underneath the storage domain for disk management in kvm cluster on ovirt. VG which is the storage domain is visbile and active in all hosts in the same datacenter on ovirt. But, LV under this vg (that is the virtual disks of vms) is not active and the corresponding block devices of these LVs is not available on all hosts in cluster.
I find only the running VMs LVs are active on the corresponding host. Is this a feature?
Is it possible to get a corresponding block device and LV on all hosts in cluster?
1 year, 9 months
Resending with Log: Hosted Engine Deploy Fails at "Waiting for Host to be up"
by Angel, Christopher
I've installed 3 Ovirt Nodes and am trying to set up the hosted engine. Every time I run it however, it fails at the 'Waiting for host to be up' stage. I've attached the relevant log.
--
Christopher Angel, B.Eng, B.Sc
Laboratory Systems Analyst, Computer Science Department
University of Saskatchewan
Christopher.angel(a)usask.ca<mailto:Christopher.angel@usask.ca>
3069661434
1 year, 10 months
Ovirt Error after creating a VM image
by lavi.buchnik@exlibrisgroup.com
Hi,
I wrote an automation that provisioning a VM using ovirtsdk4 (we are using Ovirt 4.3).
In the last week, we notice that sometimes the VM provisioning is failing with this error:
"The response content type 'text/html; charset=iso-8859-1' isn't the expected XML"
This error is happening when the automation is creating the VM image and waiting for its status to get change from "image-locked", this to be able to set affinity on it and later to start the new VM.
So the error is happening while the image is in "image-locked" state.
Can you please let me know why it is happening (note that it is occurring like 1 out of 4 attempts) and how can it get fixed?
Thanks,
Lavi
1 year, 10 months