Re: [ovirt-devel] [FYI] Blog entry about oVirt on the mainframe
by Michal Skrivanek
> On 17 May 2018, at 16:41, Viktor VM Mihajlovski <mihajlov(a)linux.vnet.ibm.com> wrote:
>
> Hi,
>
> as some of you know the support for s390x KVM hosts has been in oVirt
> for a while now - without raising too much attention. The following
> article highlights the availability and describes how to add a mainframe
> to an oVirt cluster:
> https://kvmonz.blogspot.co.uk/2018/05/knowledge-series-managing-kvm-on-ib...
thanks! looks great
cross-posting to users list too
>
> --
> Regards,
> Viktor Mihajlovski
> _______________________________________________
> Devel mailing list -- devel(a)ovirt.org
> To unsubscribe send an email to devel-leave(a)ovirt.org
6 years, 6 months
Re: VM's disk stuck in migrating state
by nicolas@devels.es
Thanks.
I've been able to see the line in the log; however, the format differs
slightly from yours.
2018-05-17 12:24:44,132+0100 DEBUG (jsonrpc/6) [jsonrpc.JsonRpcServer]
Calling 'Volume.getInfo' in bridge with {u'storagepoolID':
u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'imageID':
u'b4013aba-a936-4a54-bb14-670d3a8b7c38', u'volumeID':
u'c2cfbb02-9981-4fb7-baea-7257a824145c', u'storagedomainID':
u'1876ab86-216f-4a37-a36b-2b5d99fcaad0'} (__init__:556)
2018-05-17 12:24:44,689+0100 DEBUG (jsonrpc/6) [jsonrpc.JsonRpcServer]
Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain':
'1876ab86-216f-4a37-a36b-2b5d99fcaad0', 'voltype': 'INTERNAL',
'description': 'None', 'parent': 'ea9a0182-329f-4b8f-abe3-e894de95dac0',
'format': 'COW', 'generation': 1, 'image':
'b4013aba-a936-4a54-bb14-670d3a8b7c38', 'ctime': '1526470759',
'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize':
'1073741824', 'children': [], 'pool': '', 'capacity': '21474836480',
'uuid': u'c2cfbb02-9981-4fb7-baea-7257a824145c', 'truesize':
'1073741824', 'type': 'SPARSE', 'lease': {'owners': [8], 'version': 1L}}
(__init__:582)
As you can see, there's no path field there.
How should I proceed?
El 2018-05-17 12:01, Benny Zlotnik escribió:
> vdsm-client replaces vdsClient, take a look
> here: https://lists.ovirt.org/pipermail/devel/2016-July/013535.html
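For reference, the vdsm-client equivalent of the Volume.getInfo call shown in the debug log above would look roughly like this (a sketch using the UUIDs from this thread; on a 4.1 host the vdsm-client package may not be available yet):

vdsm-client Volume getInfo \
    storagepoolID=75bf8f48-970f-42bc-8596-f8ab6efb2b63 \
    storagedomainID=1876ab86-216f-4a37-a36b-2b5d99fcaad0 \
    imageID=b4013aba-a936-4a54-bb14-670d3a8b7c38 \
    volumeID=c2cfbb02-9981-4fb7-baea-7257a824145c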
>
> On Thu, May 17, 2018 at 1:57 PM, <nicolas(a)devels.es> wrote:
>
>> The issue is present in the logs:
>>
>> 2018-05-17 11:50:44,822+01 INFO
>> [org.ovirt.engine.core.bll.storage.disk.image.VdsmImagePoller]
>> (DefaultQuartzScheduler1) [39755bb7-9082-40d6-ae5e-64b5b2b5f98e]
>> Command CopyData id: '84a49b25-0e37-4338-834e-08bd67c42860': the
>> volume lease is not FREE - the job is running
>>
>> I tried setting the log level to debug, but it seems I don't have a
>> vdsm-client command. All I have is a vdsm-tool command. Is it
>> equivalent?
>>
>> Thanks
>>
>> El 2018-05-17 11:49, Benny Zlotnik escribió:
>> By the way, please verify it's the same issue; you should see "the
>> volume lease is not FREE - the job is running" in the engine log
>>
>> On Thu, May 17, 2018 at 1:21 PM, Benny Zlotnik
>> <bzlotnik(a)redhat.com>
>> wrote:
>>
>> I see it because I am on debug level; you need to enable it in order
>> to see it:
>>
>> https://www.ovirt.org/develop/developer-guide/vdsm/log-files/
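For a 4.1 host without the vdsm-client package, one common way to get DEBUG output is to raise the log level in vdsm's own configuration and restart the daemon. This is only a sketch and assumes the stock /etc/vdsm/logger.conf layout:

# set "level = DEBUG" under [logger_root] (and [logger_vds], if present)
# in /etc/vdsm/logger.conf, then restart vdsm so it re-reads the config
systemctl restart vdsmd

As far as I know, vdsm-tool is a maintenance utility rather than an API client, so it is not an equivalent here.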
>>
>> On Thu, 17 May 2018, 13:10 , <nicolas(a)devels.es> wrote:
>>
>> Hi,
>>
>> Thanks. I've checked the vdsm logs on all my hosts, but the only entry
>> I can find grepping for Volume.getInfo is like this:
>>
>> 2018-05-17 10:14:54,892+0100 INFO (jsonrpc/0)
>> [jsonrpc.JsonRpcServer]
>> RPC call Volume.getInfo succeeded in 0.30 seconds (__init__:539)
>>
>> I cannot find a line like yours... is there any other way to obtain
>> those parameters? This is iSCSI-based storage, FWIW (both source and
>> destination of the movement).
>>
>> Thanks.
>>
>> El 2018-05-17 10:01, Benny Zlotnik escribió:
>> In the vdsm log you will find the volumeInfo log, which looks like
>> this:
>>
>> 2018-05-17 11:55:03,257+0300 DEBUG (jsonrpc/6) [jsonrpc.JsonRpcServer]
>> Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain':
>> '5c4d2216-2eb3-4e24-b254-d5f83fde4dbe', 'voltype': 'INTERNAL',
>> 'description': '{"DiskAlias":"vm_Disk1","DiskDescription":""}',
>> 'parent': '00000000-0000-0000-0000-000000000000', 'format': 'RAW',
>> 'generation': 3, 'image': 'b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc',
>> 'ctime': '1526543244', 'disktype': 'DATA', 'legality': 'LEGAL',
>> 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '',
>> 'capacity': '1073741824', 'uuid': u'7190913d-320c-4fc9-a5b3-c55b26aa30f4',
>> 'truesize': '0', 'type': 'SPARSE', 'lease': {'path':
>> u'/rhev/data-center/mnt/10.35.0.233:_root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease',
>> 'owners': [1], 'version': 8L, 'offset': 0}} (__init__:355)
>>
>> The lease path in my case is:
>> /rhev/data-center/mnt/10.35.0.233:_root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease
>
>> Then you can look in /var/log/sanlock.log
>>
>> 2018-05-17 11:35:18 243132 [14847]: s2:r9 resource
>> 5c4d2216-2eb3-4e24-b254-d5f83fde4dbe:7190913d-320c-4fc9-a5b3-c55b26aa30f4:/rhev/data-center/mnt/10.35.0.233:_root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease:0
>> for 2,9,5049
>>
>> Then you can use this command to unlock, the pid in this case
>> is 5049
>>
>> sanlock client release -r RESOURCE -p pid
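Putting the pieces from the log above together, the concrete command would look roughly like this (the resource string and the pid 5049 are taken from the sanlock.log line quoted above; double-check them on your own host before running it):

sanlock client release \
    -r 5c4d2216-2eb3-4e24-b254-d5f83fde4dbe:7190913d-320c-4fc9-a5b3-c55b26aa30f4:/rhev/data-center/mnt/10.35.0.233:_root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease:0 \
    -p 5049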
>>
>> On Thu, May 17, 2018 at 11:52 AM, Benny Zlotnik
>> <bzlotnik(a)redhat.com>
>> wrote:
>>
>> I believe you've hit this
>> bug: https://bugzilla.redhat.com/show_bug.cgi?id=1565040
>>
>> You can try to release the lease manually using the sanlock client
>> command (there's an example in the comments on the bug); once the
>> lease is free the job will fail and the disk can be unlocked.
>>
>> On Thu, May 17, 2018 at 11:05 AM, <nicolas(a)devels.es> wrote:
>>
>> Hi,
>>
>> We're running oVirt 4.1.9 (I know it's not the recommended
>> version, but we can't upgrade yet) and recently we had an issue
>> with a Storage Domain while a VM was moving a disk. The Storage
>> Domain went down for a few minutes, then it got back.
>>
>> However, the disk's state has been stuck in a 'Migrating: 10%' state
>> (see ss-2.png).
>>
>> I ran the 'unlock_entity.sh' script to try to unlock the disk,
>> with these parameters:
>>
>> # PGPASSWORD=...
>> /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk -u
>> engine -v b4013aba-a936-4a54-bb14-670d3a8b7c38
>>
>> The disk's state changed to 'OK', but it still shows as migrating
>> (see ss-1.png).
>>
>> Calling the script with -t all doesn't make a difference either.
>>
>> Currently, the disk is unmanageable: it cannot be deactivated, moved
>> or copied, as it says there's a copying operation running already.
>
>> Could someone provide a way to unlock this disk? I don't mind
>> modifying a value directly in the database; I just need the
>> copying process cancelled.
>>
>> Thanks.
>> _______________________________________________
>> Users mailing list -- users(a)ovirt.org
>> To unsubscribe send an email to users-leave(a)ovirt.org
>
6 years, 6 months
VM interface bonding (LACP)
by Doug Ingham
Hi All,
My hosts have all of their interfaces bonded via LACP to maximise
throughput; however, the VMs are still limited to Gbit virtual interfaces.
Is there a way to configure my VMs to take full advantage of the bonded
physical interfaces?
One way might be adding several VIFs to each VM & using ALB bonding
(sketched below); however, I'd rather use LACP if possible...
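For what it's worth, the guest-side ALB variant would look roughly like this inside a VM (a hedged sketch assuming two virtio NICs named eth0/eth1, NetworkManager in the guest, and illustrative connection names):

# create the bond in adaptive load balancing mode
nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=balance-alb,miimon=100"
# enslave both virtual NICs
nmcli con add type ethernet con-name bond0-port1 ifname eth0 master bond0
nmcli con add type ethernet con-name bond0-port2 ifname eth1 master bond0
# bring the bond up
nmcli con up bond0

Note that this only spreads traffic across the VIFs inside the guest; it does not turn them into an LACP (802.3ad) aggregate.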
Cheers,
--
Doug
6 years, 6 months
oVirt setup help
by Lakhwinder Rai
Hello guys, I am planning to set up a home lab with oVirt. I am confused about hardware selection. Does anybody have a similar setup? Any suggestions would be appreciated.
Thanks
Lakhwinder Rai
6 years, 6 months
oVirt 4.1 upgrade hosted-engine
by dlotarev@yahoo.com
Hi! I have two nodes (A and B), and the HostedEngine VM is running on host A.
After upgrading host B to the latest CentOS release (7.5.1804), I am trying to upgrade host A, but I cannot migrate the HostedEngine VM to host B in order to proceed with its CentOS upgrade. Other VMs migrated to host B without problems.
In the logs on host A I have the following lines:
May 17 12:58:16 ovirt-h1 libvirtd: 2018-05-17 07:58:16.728+0000: 3637: error : virNetClientProgramDispatchError:177 : internal error: cannot execute command QEMU «cont»: Failed to lock byte 100
May 17 12:58:38 ovirt-h1 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Migration failed: {u'status': {'message': 'Done', 'code': 0}, u'progress': 100}
Could you please explain what my mistake is?
P.S. I found that host B has a mismatch in package versions:
# rpm -aq|grep qemu|sort -n
centos-release-qemu-ev-1.0-2.el7.noarch
ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
libvirt-daemon-driver-qemu-3.9.0-14.el7_5.2.x86_64
qemu-img-ev-2.10.0-21.el7_5.2.1.x86_64
qemu-kvm-common-ev-2.10.0-21.el7_5.2.1.x86_64
qemu-kvm-ev-2.10.0-21.el7_5.2.1.x86_64
qemu-kvm-tools-ev-2.9.0-16.el7_4.14.1.x86_64
The qemu-kvm-tools-ev package is at version 2.9.0, while the other qemu packages are at version 2.10.
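For illustration, one way to check for and pull in the lagging package would be something like the following (a sketch; it assumes the same qemu-ev repository that provided the other packages is still enabled):

# see whether an update for the tools package is available
yum list updates 'qemu*ev*'
# bring qemu-kvm-tools-ev in line with the rest of the 2.10 packages
yum update qemu-kvm-tools-ev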
Thank you so much for any response.
6 years, 6 months
hosted-engine deploy not use FQDN
by dhy336@sina.com
Hi
I am deploying the hosted engine with '# hosted-engine --deploy', but I do not want to use an FQDN; I want to reach the engine directly by IP.
Besides, I have no DNS server, and I want to skip setting up the Engine VM FQDN, because my project cannot set up /etc/hosts.
Please provide the FQDN you would like to use for the engine appliance. Note: This will be the FQDN of the engine VM you are now going to launch, it should not point to the base host or to any other existing machine. Engine VM FQDN: (leave it empty to skip): []:
6 years, 6 months
Self-Hosted engine deploy fails but hosted-engine --vm-status shows engine in good health
by 03ce007@gmail.com
I am setting up a self-hosted engine (4.2) on CentOS 7.4.
The setup runs all the way through and creates the local VM, but fails on the task "Wait for the local bootstrap VM to be down at engine eyes".
But if I then run hosted-engine --vm-start, it comes back up and shows it is in good health; however, when I make an API call with curl, it shows me that the host is in a 'non responsive' state! I cannot deploy any VM on the engine VM, and the SSH connection fails.
What could have caused this, and where can I find more detailed information related to this error and the status of the engine VM?
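For reference, these are the places I would check first for details (the paths below are the usual defaults, so treat them as assumptions):

# deployment logs on the host
ls -lt /var/log/ovirt-hosted-engine-setup/
# HA agent and broker logs, which track the engine VM's health checks
tail -n 200 /var/log/ovirt-hosted-engine-ha/agent.log /var/log/ovirt-hosted-engine-ha/broker.log
# the host's own view of the hosted-engine state
hosted-engine --vm-status
# engine-side logs, from inside the engine VM
tail -n 200 /var/log/ovirt-engine/engine.log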
Thanks
6 years, 6 months
ovirt 4.2 failed deploy
by Alex K
Hi all,
I am trying to set up oVirt 4.2 self-hosted with 3 nodes.
I have done several 4.1 installations without issues. Now at 4.2 I get:
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unable
to resolve address\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing
ansible-playbook
I am running:
hosted-engine --deploy --config-append=/root/ovirt/storage.conf
Checking the log doesn't give an easy pointer to the issue. It seems to be
related to DNS, but I can confirm that the host can resolve the engine
FQDN from /etc/hosts or from the DNS server.
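For illustration, these are the quick checks I would run on the host before digging further into the ansible log (engine.example.com and 192.0.2.10 are placeholders for the real engine FQDN and the host's IP):

# forward lookup via DNS only (bypasses /etc/hosts)
dig +short engine.example.com
# reverse lookup of the host's own address, which the deploy can depend on
dig +short -x 192.0.2.10
# what the host resolves through nsswitch (/etc/hosts included)
getent hosts engine.example.com
# the host's own idea of its FQDN
hostname -f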
Any ideas?
Thanx,
Alex
6 years, 6 months
ovirt engine frequently rebooting/changing host
by Bernhard Dick
Hi,
currently I'm evaluating oVirt and I have three hosts installed within
nested KVM. They're sharing a gluster environment which has been
configured using the oVirt Node Wizards.
It seems to work quite well, but after some hours I get many status
update mails from the oVirt engine which are either going to EngineStop
or EngineForceStop. Sometimes the host where the engine runs is
switched. After some of those reboots there is silence for some hours
before it starts over. Can you tell me where I should look to
fix that problem?
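In case it helps, the first things I would look at on the hosts are the HA agent's reasoning and the health of the gluster volume backing the engine storage domain (this assumes the wizard's default volume name of "engine"; adjust to your setup):

# why the HA agent decided to stop or restart the engine VM
grep -E 'EngineStop|EngineForceStop|score' /var/log/ovirt-hosted-engine-ha/agent.log
# health of the gluster volume backing the engine storage domain
gluster volume status engine
gluster volume heal engine info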
Regards
Bernhard Dick
6 years, 6 months