[ovirt 4.2] vdsm host was shut down unexpectedly, configuration of some VMs was changed or lost when the host was powered back on and the VMs were started
by lifuqiong@sunyainfo.com
Dear All:
My oVirt engine manages two VDSM hosts with NFS storage on a separate NFS server, and it worked fine for about three months.
One of the hosts (host_1.3; IP 172.18.1.3) ran about 16 VMs, but host_1.3 was shut down unexpectedly at about 2019-12-23 16:11. When the host and the VMs were restarted, half of the VMs had lost or changed part of their configuration, such as their IP addresses (the VM name is 'zzh_Chain49_ACG_M' in the vdsm.log).
The VM zzh_Chain49_ACG_M was created from a template through the oVirt REST API; the template's IP is 20.1.1.161, and the VM's IP was then changed to 20.1.1.219, also via the REST API. But the IP had reverted to the template's IP when the accident happened.
The VM's OS is CentOS.
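For reference, the creation step is roughly equivalent to the following oVirt Python SDK sketch (the actual calls were made through the REST API; the engine URL, credentials, cluster, template and NIC details below are placeholders, not the real values):
~~~
# Hypothetical sketch with ovirt-engine-sdk4; URL, credentials, cluster,
# template and NIC details are placeholders for illustration only.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

vms_service = connection.system_service().vms_service()

# Create the VM from the template and override the NIC configuration
# (applied inside the guest by cloud-init/sysprep when the VM is started
# with "use cloud-init").
vms_service.add(
    types.Vm(
        name='zzh_Chain49_ACG_M',
        cluster=types.Cluster(name='Default'),          # placeholder cluster
        template=types.Template(name='my_template'),    # placeholder template
        initialization=types.Initialization(
            nic_configurations=[
                types.NicConfiguration(
                    name='eth0',                        # assumed NIC device name
                    on_boot=True,
                    boot_protocol=types.BootProtocol.STATIC,
                    ip=types.Ip(
                        address='20.1.1.219',
                        netmask='255.255.255.0',        # assumed netmask
                        gateway='20.1.1.1',             # assumed gateway
                    ),
                ),
            ],
        ),
    ),
)

connection.close()
~~~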
I hope to get help from you soon. Thank you.
Mark
Sincerely.
4 years, 11 months
VM Import Fails
by Vijay Sachdeva
Hi All,
I am trying to import a VM from an export domain, but the import fails.
Setup:
Source DC has NFS shared storage with two hosts.
Destination DC has local storage configured using LVM.
Note: a common export domain was used to export the VM.
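For context, an export-domain import corresponds roughly to the following oVirt Python SDK sketch (illustration only; the engine URL, credentials, domain, cluster and VM selection are placeholders):
~~~
# Hypothetical sketch with ovirt-engine-sdk4; all names are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

sds_service = connection.system_service().storage_domains_service()

# Find the export domain and the VM that was exported onto it.
export_sd = sds_service.list(search='name=export_domain')[0]
export_vms_service = sds_service.storage_domain_service(export_sd.id).vms_service()
exported_vm = export_vms_service.list()[0]

# Import it into the destination cluster and its local data domain.
export_vms_service.vm_service(exported_vm.id).import_(
    cluster=types.Cluster(name='LocalCluster'),             # placeholder cluster
    storage_domain=types.StorageDomain(name='local_data'),  # placeholder data domain
    clone=True,
)

connection.close()
~~~
The actual failure reason usually shows up in engine.log on the engine and in vdsm.log on the destination host.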
Could anyone please help me with this case and explain why it is failing?
Thanks
Vijay Sachdeva
4 years, 11 months
iSCSI connections and storage put into maintenance
by Gianluca Cecchi
Hello,
I have a cluster composed of two hosts connected to some iSCSI storage domains.
I have put one host into maintenance and I see that all the iSCSI sessions
are closed. So far so good.
Then I put one storage domain into maintenance, and I would expect the (2) iSCSI sessions towards this storage domain to be closed on the active host.
Instead I continue to see them up, and the "multipath -l" command still shows the 2 paths for the LUN that is part of the storage domain.
Is this expected, so that the iSCSI connections are only closed when I detach the SD?
The detach phase would unregister the VMs, which I would like to avoid, but at the same time I have a planned maintenance window for this storage domain and I would like to clean up the connections on the host side, to keep multipath happy while the paths are down for some hours...
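For reference, the kind of manual host-side cleanup I have in mind would be roughly the following (a sketch only, to be tested carefully since it bypasses VDSM; the target IQN and portal addresses are placeholders for the storage domain's real ones):
~~~
# Hypothetical manual cleanup on the host; target IQN and portals are placeholders.
import subprocess

TARGET = 'iqn.2001-05.com.example:storage.target1'  # assumed target of the SD in maintenance
PORTALS = ['10.10.10.1:3260', '10.10.10.2:3260']    # assumed portals (the 2 paths)

# Show the current sessions first.
subprocess.run(['iscsiadm', '-m', 'session'], check=False)

# Log out of both sessions towards this storage domain.
for portal in PORTALS:
    subprocess.run(
        ['iscsiadm', '-m', 'node', '-T', TARGET, '-p', portal, '--logout'],
        check=True,
    )

# Flush unused multipath maps so "multipath -l" stops showing the dead paths.
subprocess.run(['multipath', '-F'], check=False)
~~~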
Thanks in advance for any insight.
Gianluca
4 years, 11 months
Ovirt 4.4 node installation failed for yum-utils
by usurse@redhat.com
Has anyone else hit this issue?
Host deploy failed on the yum-utils requirement:
~~~
2019-12-19 20:16:54 IST - TASK [ovirt-host-deploy-facts : Install yum-utils] *****************************
2019-12-19 20:17:27 IST - fatal: [Hostname]: FAILED! => {"changed": false, "failures": ["No package yum-utils available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
2019-12-19 20:17:27 IST - {
"status" : "OK",
"msg" : "",
"data" : {
"event" : "runner_on_failed",
"uuid" : "9793bb18-562b-407e-9149-782b3e7f2d65",
"stdout" : "fatal: [Hostname]: FAILED! => {\"changed\": false, \"failures\": [\"No package yum-utils available.\"], \"msg\": \"Failed to install some of the specified packages\", \"rc\": 1, \"results\": []}",
~~~
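On the failing host, a quick way to check whether any enabled repository actually provides yum-utils is roughly this (a sketch only, using plain dnf commands):
~~~
# Sketch: check on the host whether yum-utils is resolvable from the enabled repos.
import subprocess

# List the repositories the host deploy can actually see.
subprocess.run(['dnf', 'repolist'], check=False)

# Ask dnf which package (and repo), if any, provides yum-utils.
subprocess.run(['dnf', 'provides', 'yum-utils'], check=False)
~~~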
4 years, 11 months
[NEED YOUR INPUT] | Server Virtualization Trends 2019 - Statistical Survey | oVirt users
by Paweł Mączka
Hello everybody,
Have you ever wondered about the worldwide virtualization market, who the undisputed leaders are and who is chasing the market from the back seat? Together with my team (Storware), I have decided to work on data research on Server Virtualization Trends 2019. I'm contacting you because I believe that you are using a VM environment (for sure oVirt 😊) and that you could bring value to our community. Your vote is crucial.
My goal is to reach 1000 specialists on the market (we have 250 votes so far) and convince them to vote 😊. I would be pleased if you agreed to take part in this short survey (link below). Answering the questions will take you no more than 1 minute. Thank you in advance for your time; respondents' data will not be made public. I will inform you when the report is ready.
http://bit.ly/VMTrends2019
Thanks in advance!
Pozdrawiam/Best Regards
Paweł Mączka
Chief Technology Officer, VP
+48 730 602 659
e-mail: p.maczka(a)storware.eu
Storware Sp. z o.o.
ul. Leszno 8/44
01-192 Warszawa
www.storware.eu
Want to talk? Book a one-to-one session with me on https://calendly.com/p-maczka/30min/
4 years, 11 months
vdsmd 4.4.0 throws an exception in asyncore.py while updating OVF data
by alexandermurashkin@msn.com
vdsmd 4.4.0 throws an exception in asyncore.py while updating OVF data. The first exception occurred on December 14th and it has happened every hour since then.
An hour before the first exception, we updated the engine's and the host's RPM packages. We did not run engine-backup, so we cannot restore the oVirt database if it got corrupted.
Currently, both the engine (RHEL 7.7) and the host (RHEL 8.1) have the most recent packages (and were rebooted). Neither the updates nor the reboots resolved the issue.
In general, oVirt is working. We can run virtual machines with disks in the storage domain. We can create new virtual machines. But some functionality is not available; for example, we cannot move disks between domains.
The storage domain in question is on NFS. After the issue appeared, we successfully created a GlusterFS domain, but its OVF update also failed. Interestingly, it was possible to move disks to it before the OVF failure.
Here is the vdsmd traceback for your convenience:
2019-12-17 16:36:58,393-0600 ERROR (Reactor thread) [vds.dispatcher] uncaptured python exception, closing channel
<yajsonrpc.betterAsyncore.Dispatcher connected ('::ffff:172.20.1.142', 38002, 0, 0) at 0x7fbda865ed30> (
<class 'TypeError'>:object of type 'NoneType' has no len()
[/usr/lib64/python3.6/asyncore.py|readwrite|108]
[/usr/lib64/python3.6/asyncore.py|handle_read_event|423]
[/usr/lib/python3.6/site-packages/yajsonrpc/betterAsyncore.py|handle_read|71]
[/usr/lib/python3.6/site-packages/yajsonrpc/betterAsyncore.py|_delegate_call|168]
[/usr/lib/python3.6/site-packages/vdsm/protocoldetector.py|handle_read|115]
)
(betterAsyncore:179)
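The 'NoneType' error in protocoldetector's handle_read looks like the read helper returned None instead of bytes. As an illustration only (this is not vdsm code), the failure pattern and the resulting "uncaptured python exception, closing channel" log can be reproduced with plain asyncore like this:
~~~
# Standalone illustration (not vdsm code): asyncore calls handle_read(), the
# handler gets None back from its read helper and calls len() on it, and
# asyncore logs "uncaptured python exception, closing channel".
import asyncore
import socket


class DetectorSketch(asyncore.dispatcher):
    def _read_some(self):
        # Stand-in for the wrapped recv; assumed to return None ("no data")
        # instead of b'' in the failing case.
        return None

    def handle_read(self):
        data = self._read_some()
        # TypeError: object of type 'NoneType' has no len()
        if len(data) < 4:
            return


if __name__ == '__main__':
    a, b = socket.socketpair()
    DetectorSketch(a)
    b.send(b'x')            # make the socket readable so handle_read() runs
    asyncore.loop(count=1)  # asyncore catches the TypeError and logs it
~~~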
I appreciate your help,
Alexander Murashkin
------ Current Versions (that have the exception) ------
Engine
ovirt-engine-4.4.0-0.0.master.20191204120550.git04d5d05.el7.noarch
Host
vdsm-4.40.0-1363.gitf6a1ba0a0.el8.x86_64
vdsm-python-4.40.0-1363.gitf6a1ba0a0.el8.noarch
vdsm-yajsonrpc-4.40.0-1363.gitf6a1ba0a0.el8.noarch
python3-libs-3.6.8-15.1.el8.x86_64
------ December 13th Versions (that did not have the exception) ------
Engine
ovirt-engine-4.4.0-0.0.master.20191204120550.git04d5d05.el7.noarch --- not sure, but probably the same as now
Host
vdsm-4.40.0-1360.git821afbbc2.el8.x86_64
vdsm-python-4.40.0-1360.git821afbbc2.el8.noarch
vdsm-yajsonrpc-4.40.0-1360.git821afbbc2.el8.noarch
python3-libs-3.6.8-15.1.el8.x86_64
------ vdsm.log ------
2019-12-17 16:36:58,393-0600 ERROR (Reactor thread) [vds.dispatcher] uncaptured python exception, closing channel <yajsonrpc.betterAsyncore.Dispatcher connected ('::ffff:172.20.1.142', 38002, 0, 0) at 0x7fbda865ed30> (<class 'TypeError'>:object of type 'NoneType' has no len() [/usr/lib64/python3.6/asyncore.py|readwrite|108] [/usr/lib64/python3.6/asyncore.py|handle_read_event|423] [/usr/lib/python3.6/site-packages/yajsonrpc/betterAsyncore.py|handle_read|71] [/usr/lib/python3.6/site-packages/yajsonrpc/betterAsyncore.py|_delegate_call|168] [/usr/lib/python3.6/site-packages/vdsm/protocoldetector.py|handle_read|115]) (betterAsyncore:179)
----- engine.log ------
2019-12-17 16:36:58,395-06 ERROR [org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-95) [6c787bf3] Command 'org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.net.SocketException: Connection reset (Failed with error VDS_NETWORK_ERROR and code 5022)
------ Web Interface Events ------
Storage Domain Events
Dec 17, 2019, 4:36:58 PM
Failed to update VMs/Templates OVF data for Storage Domain storedom3 in Data Center Default.
36b41d9c
oVirt
Dec 17, 2019, 4:36:58 PM
Failed to update OVF disks fcc661df-b2e3-4625-be40-52b65033c6d7, OVF data isn't updated on those OVF stores (Data Center Default, Storage Domain storedom3).
36b41d9c
oVirt
Host Events
Dec 17, 2019, 4:38:20 PM
Status of host poplar was set to Up.
399a2181
oVirt
Dec 17, 2019, 4:36:58 PM
Host poplar is not responding. Host cannot be fenced automatically because power management for the host is disabled.
oVirt
4 years, 11 months
Re: Self hosted engine to hci
by Strahil
I have never done this, so you may want to simulate it on VMs before doing it:
1. Add the new host and put it in maintenance
2. Back up your Hosted Engine VM and any critical VMs (just in case)
3. Create a new Gluster volume (either replica 3 or replica 2 arbiter 1)
4. Add the new volume via the UI as a new storage domain
5. Do a storage migration from the distributed volume to the replica 3 volume.
6. Get rid of the old distributed volume and use it for something useful.
7. Take the new host out of maintenance and test live migration
8. Set global maintenance (via the hosted-engine command) and power off the engine VM (see the sketch below).
9. Manually power it up on a new host.
10. Remove global maintenance and shut down a test VM.
11. Pin the same VM to power up only on a specific host and power it up.
12. Remove that pinning from the test VM (from steps 10 & 11)
And you are ready to go.
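For steps 8-10, the hosted-engine part of the sequence looks roughly like this (sketch only; in practice the commands run on different hosts, as noted in the comments):
~~~
# Sketch of the hosted-engine CLI calls behind steps 8-10; in reality these
# run on different hosts (noted below), not as one script.
import subprocess

def he(*args):
    subprocess.run(['hosted-engine', *args], check=True)

he('--set-maintenance', '--mode=global')  # step 8, on any HE host
he('--vm-shutdown')                       # step 8, power off the engine VM
# ... then on the new host:
he('--vm-start')                          # step 9, power it up there
he('--vm-status')                         # wait until the engine is up again
he('--set-maintenance', '--mode=none')    # step 10, remove global maintenance
~~~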
Best Regards,
Strahil Nikolov
On Dec 18, 2019 16:39, Ernest Clyde Chua <ernestclydeachua(a)gmail.com> wrote:
>
> Hello, currently I have a server running GlusterFS with a distributed volume and a self-hosted engine.
> I am planning to add two servers for HCI.
>
> Do I add the new hosts and change the Gluster volume type to replicated, or back up all the VMs and start from scratch?
>
> Is there any recommendation on this?
>
>
4 years, 11 months
Self hosted engine to hci
by Ernest Clyde Chua
Hello, currently I have a server running GlusterFS with a distributed volume and a self-hosted engine.
I am planning to add two servers for HCI.
Do I add the new hosts and change the Gluster volume type to replicated, or back up all the VMs and start from scratch?
Is there any recommendation on this?
4 years, 11 months