cpu hotplug cannot be applied for windows 10
by zodaoko@gmail.com
Hi there,
I am running ovirt engine 4.2.8.
I performed CPU hotplug for a Windows 10 guest OS in oVirt (from 1*2*1 to 2*2*1) successfully, and the output of virsh vcpuinfo also shows that there are now 4 vCPUs in total:
# virsh --readonly vcpuinfo 13
VCPU: 0
CPU: 51
State: running
CPU time: 3911.8s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
VCPU: 1
CPU: 15
State: running
CPU time: 907.7s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
VCPU: 2
CPU: 26
State: running
CPU time: 1086.2s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
VCPU: 3
CPU: 2
State: running
CPU time: 1085.7s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
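For reference, the hotplug itself was done from the engine UI; the libvirt-side equivalent would be roughly the following (13 being the domain ID shown above):
# virsh setvcpus 13 4 --live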
Issue:
But when I log into Windows 10, the Task Manager lists only two vCPUs. I then opened Device Manager and found that there are 4 processors listed. Referring to https://bugzilla.redhat.com/show_bug.cgi?id=1377155, I have updated the OS to the latest version, 1809.
According to the oVirt documentation, CPU hot plug is explicitly supported for Windows 10 (both x86 and x86_64), but in my testing the hotplugged CPUs were not applied. So is this a Windows 10 bug, or is there a workaround? Thank you!
Thanks,
-Zhen
HE Install cannot login to VM
by Callum Smith
Dear all,
Having given up on running the Ansible playbook remotely, we're hitting earlier issues when running hosted-engine --deploy:
2019-03-28 15:42:33,975+0000 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Wait for the local VM]
2019-03-28 15:45:40,739+0000 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:103 {u'msg': u"timed out waiting for ping module test success: Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host.", u'elapsed': 186, u'changed': False, u'_ansible_no_log': False, u'_ansible_delegated_vars': {u'ansible_delegated_host': u'he.virt.in.bmrc.ox.ac.uk', u'ansible_host': u'he.virt.in.bmrc.ox.ac.uk'}}
2019-03-28 15:45:40,840+0000 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:107 fatal: [localhost -> he.virt.in.bmrc.ox.ac.uk]: FAILED! => {"changed": false, "elapsed": 186, "msg": "timed out waiting for ping module test success: Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this. Please add this host's fingerprint to your known_hosts file to manage this host."}
I'm at a bit of a loss, especially since the old non-Ansible install is not available anymore.
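The error itself suggests pre-seeding the engine VM's host key, so presumably something like this on the deploy host would satisfy the check (untested):
# ssh-keyscan -H he.virt.in.bmrc.ox.ac.uk >> ~/.ssh/known_hosts
But should that really be needed in the middle of a hosted-engine deploy?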
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum(a)well.ox.ac.uk
CLUSTER_CANNOT_DISABLE_GLUSTER_WHEN_CLUSTER_CONTAINS_VOLUMES
by Arsène Gschwind
Hi,
I updated our oVirt cluster to 4.3.2, and when I try to update the cluster version to 4.3 I get the following error:
CLUSTER_CANNOT_DISABLE_GLUSTER_WHEN_CLUSTER_CONTAINS_VOLUMES
As far as I remember, I could update the cluster version to 4.2 without having to stop everything.
I've searched around about this error but couldn't find anything.
The engine log says:
2019-03-20 14:33:28,125+01 INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-59) [c93c7f4f-b9a3-4e10-82bf-f8bbda46cc87] Lock Acquired to object 'EngineLock:{exclusiveLocks='[25682477-0bd2-4303-a5fd-0ae9adfd276c=TEMPLATE]', sharedLocks='[119fad69-b4a2-441f-9056-354cd1b8a7aa=VM, 1f696eca-c0a0-48ff-8aa9-b977345b5618=VM, 95ee485d-7992-45b8-b1be-256223b5a89f=VM, 432ed647-c150-425e-aac7-3cb5410f4bc8=VM, 7c2646b1-8a4c-4618-b3c9-dfe563f29e00=VM, 629e40c0-e83b-47e0-82bc-df42ec310ca4=VM, 2e0741bd-33e2-4a8e-9624-a9b7bb70b664=VM, 136d0ca0-478c-48d9-9af4-530f98ac30fd=VM, dafaa8d2-c60c-44c8-b9a3-2b5f80f5aee3=VM, 2112902c-ed42-4fbb-a187-167a5f5a446c=VM, 5330b948-f0cd-4b2f-b722-28918f59c5ca=VM, 3f06be8c-8af9-45e2-91bc-9946315192bf=VM, 8cf338bf-8c94-4db4-b271-a85dbc5d6996=VM, c0be6ae6-3d25-4a99-b93d-81b4ecd7c9d7=VM, 8f4acfc6-3bb1-4863-bf97-d7924641b394=VM]'}'
2019-03-20 14:33:28,214+01 WARN [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-59) [c93c7f4f-b9a3-4e10-82bf-f8bbda46cc87] Validation of action 'UpdateCluster' failed for user xxxx@yyy. Reasons: VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,CLUSTER_CANNOT_DISABLE_GLUSTER_WHEN_CLUSTER_CONTAINS_VOLUMES
2019-03-20 14:33:28,215+01 INFO [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-59) [c93c7f4f-b9a3-4e10-82bf-f8bbda46cc87] Lock freed to object 'EngineLock:{exclusiveLocks='[25682477-0bd2-4303-a5fd-0ae9adfd276c=TEMPLATE]', sharedLocks='[119fad69-b4a2-441f-9056-354cd1b8a7aa=VM, 1f696eca-c0a0-48ff-8aa9-b977345b5618=VM, 95ee485d-7992-45b8-b1be-256223b5a89f=VM, 432ed647-c150-425e-aac7-3cb5410f4bc8=VM, 7c2646b1-8a4c-4618-b3c9-dfe563f29e00=VM, 629e40c0-e83b-47e0-82bc-df42ec310ca4=VM, 2e0741bd-33e2-4a8e-9624-a9b7bb70b664=VM, 136d0ca0-478c-48d9-9af4-530f98ac30fd=VM, dafaa8d2-c60c-44c8-b9a3-2b5f80f5aee3=VM, 2112902c-ed42-4fbb-a187-167a5f5a446c=VM, 5330b948-f0cd-4b2f-b722-28918f59c5ca=VM, 3f06be8c-8af9-45e2-91bc-9946315192bf=VM, 8cf338bf-8c94-4db4-b271-a85dbc5d6996=VM, c0be6ae6-3d25-4a99-b93d-81b4ecd7c9d7=VM, 8f4acfc6-3bb1-4863-bf97-d7924641b394=VM]'}'
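The cluster does have the Gluster service enabled (it contains volumes, as the error says); that flag can be inspected via the REST API, roughly like this (engine URL and credentials are placeholders):
# curl -k -u 'admin@internal:password' 'https://engine.example.com/ovirt-engine/api/clusters' | grep -i gluster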
Any idea what the reason for this error could be?
Thanks
Migrate HE between hosts failed.
by kiv@intercom.pro
Hi all.
I have oVirt 4.3.1 and 3 node hosts.
All VMs migrate between all hosts successfully.
The VM with the HE, however, does not migrate.
vdsm.log:
libvirtError: operation failed: guest CPU doesn't match specification: missing features: xsave,avx,xsaveopt
Nodes:
1. Intel Westmere IBRS SSBD Family
2. Intel Westmere IBRS SSBD Family
3. Intel SandyBridge IBRS SSBD Family <----- HE now here
Cluster CPU Type: Intel Westmere Family
Info from VM with HE:
Guest CPU Type: Intel Westmere Family
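For reference, the CPU definition actually presented to the HE VM can be checked on the host it currently runs on, e.g. (the HE domain is normally named HostedEngine):
# virsh --readonly dumpxml HostedEngine | grep -A 10 '<cpu'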
Does anyone know what needs to be done to make migration work?
virsh edit vm not saving after reboot
by Dev Ops
I am trying to get Cisco ISE installed in oVirt. I was told I needed to do a virsh edit on the VM and change a line in the SMBIOS settings. That is simple enough, but after a reboot or shutdown/restart the settings revert to what they were before; a dump of the XML in fact shows the original settings that I had changed. Since I can't edit the file while the VM is off, I am not sure how to make this config change persist. My issue is similar to this:
https://access.redhat.com/discussions/2720821
I have been reading and digging around with no luck. I just can't seem to make this change static.
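From what I've read, VDSM regenerates the libvirt domain XML every time oVirt starts a VM, which would explain why virsh edit doesn't stick; a VDSM hook seems to be the intended way to inject such changes. A rough, untested sketch of what I'm considering (the sed expression is a placeholder for the actual SMBIOS line ISE needs):
#!/bin/bash
# Untested sketch: /usr/libexec/vdsm/hooks/before_vm_start/50_smbios
# VDSM puts the path of the generated domain XML in $_hook_domxml;
# editing that file changes the definition the VM is started with.
sed -i 's|<smbios mode="sysinfo"/>|<smbios mode="host"/>|' "$_hook_domxml"
Would that be the right approach?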
Thanks!
[ANN] oVirt 4.3.3 First Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.3 First Release Candidate, as of March 28th, 2019.
This update is a release candidate of the third in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]
Additional Resources:
* Read more about the oVirt 4.3.3 release highlights:
http://www.ovirt.org/release/4.3.3/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.3/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Re: Gluster VM image Resync Time
by Strahil
Hi Krutika,
I have noticed some performance penalties (10%-15%) when using sharding in v3.12.
What is the situation now with 5.5?
Best Regards,
Strahil Nikolov
On Mar 28, 2019 08:56, Krutika Dhananjay <kdhananj(a)redhat.com> wrote:
>
> Right. So Gluster stores what are called "indices" for each modified file (or shard)
> under a special hidden directory of the "good" bricks at $BRICK_PATH/.glusterfs/indices/xattrop.
> When the offline brick comes back up, the file corresponding to each index is healed, and then the index deleted
> to mark the fact that the file has been healed.
>
> You can try this and see it for yourself. Just create a 1x3 plain replicate volume, and enable shard on it.
> Create a big file (big enough to have multiple shards). Check that the shards are created under $BRICK_PATH/.shard.
> Now kill a brick. Modify a small portion of the file. Hit `ls` on $BRICK_PATH/.glusterfs/indices/xattrop of the online bricks.
> You'll notice there will be entries named after the gfid (unique identifier in gluster for each file) of the shards.
> And only for those shards that the write modified, and not ALL shards of this really big file.
> And then when you bring the brick back up using `gluster volume start $VOL force`, the
> shards get healed and the directory eventually becomes empty.
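>
> A concrete run of those steps might look like this (volume name, hostnames and brick paths are placeholders):
>
> # gluster volume create testvol replica 3 host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1
> # gluster volume set testvol features.shard on
> # gluster volume start testvol
> (create a big file on the mount, kill one brick process, then modify part of the file)
> # ls /bricks/b1/.glusterfs/indices/xattrop
> # gluster volume start testvol force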
>
> -Krutika
>
>
> On Thu, Mar 28, 2019 at 12:14 PM Indivar Nair <indivar.nair(a)techterra.in> wrote:
>>
>> Hi Krutika,
>>
>> So how does the Gluster node know which shards were modified after it went down?
>> Do the other Gluster nodes keep track of it?
>>
>> Regards,
>>
>>
>> Indivar Nair
>>
>>
>> On Thu, Mar 28, 2019 at 9:45 AM Krutika Dhananjay <kdhananj(a)redhat.com> wrote:
>>>
>>> Each shard is a separate file of size equal to value of "features.shard-block-size".
>>> So when a brick/node was down, only those shards belonging to the VM that were modified will be sync'd later when the brick's back up.
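>>> For reference, the configured value can be checked with the standard volume-option query:
>>> # gluster volume get $VOL features.shard-block-size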
>>> Does that answer your question?
>>>
>>> -Krutika
>>>
>>> On Wed, Mar 27, 2019 at 7:48 PM Sahina Bose <sabose(a)redhat.com> wrote:
>>>>
>>>> On Wed, Mar 27, 2019 at 7:40 PM Indivar Nair <indivar.nair(a)techterra.in> wrote:
>>>> >
>>>> > Hi Strahil,
>>>> >
>>>> > Ok. Looks like sharding should make the resyncs faster.
>>>> >
>>>> > I searched for more info on it, but couldn't find much.
>>>> > I believe it will still have to compare each shard to determine whether there are any changes that need to be replicated.
>>>> > Am I right?
>>>>
>>>> +Krutika Dhananjay
>>>> >
>>>> > Regards,
>>>> >
>>>> > Indivar Nair
>>>> >
>>>> >
>>>> >
>>>> > On Wed, Mar 27, 2019 at 4:34 PM Strahil <hunter86_bg(a)yahoo.com> wrote:
>>>> >>
>>>> >> By default ovirt uses 'sharding' which splits the files into logical chunks. This greatly reduces healing time, as VM's disk is not always completely overwritten and only the shards that are different will be healed.
>>>> >>
>>>> >> Maybe you should change the default shard size.
>>>> >>
>>>> >> Best Regards,
>>>> >> Strahil Nikolov
>>>> >>
>>>> >> On Mar 27, 2019 08:24, Indivar Nair <indivar.nair(a)techterra.in> wrote:
>>>> >>
>>>> >> Hi All,
>>>> >>
>>>> >> We are planning a 2 + 1 arbitrated mirrored Gluster setup.
>>>> >> We would have around 50 - 60 VMs, with an average 500GB disk size.
>>>> >>
>>>> >> Now in case one of the Gluster Nodes goes completely out of sync, roughly, how long would it take to resync? (as per your experience)
>>>> >> Will it impact the working of VMs in any way?
>>>> >> Is there anything to be taken care of, in advance, to prepare for such a situation?
>>>> >>
>>>> >> Regards,
>>>> >>
>>>> >>
>>>> >> Indivar Nair
>>>> >>
Re: ovirt data domain with multi luns, any best practice
by Strahil
In my opinion it's best to have 2 data domains.
Best Regards,
Strahil Nikolov
On Mar 28, 2019 05:32, adam_xu(a)adagene.com.cn wrote:
>
> Hello, everyone. I have a question on the best way to create a data domain.
> For example, I have an FCP device with a lot of HDDs and SSDs in it. I create 2 LUNs, one consisting of HDDs and the other consisting of SSDs. I want to have some normal VMs run on the HDD LUN and some VMs (such as DBs) run on the SSD LUN.
> So, should I create one data domain with the 2 LUNs, or should I create 2 different data domains, each with a single LUN in it?
> thanks.
> ________________________________
> yours Adam
ovirt data domain with multi luns, any best practice
by adam_xu@adagene.com.cn
Hello, everyone. I have a question on the best way to create a data domain.
For example, I have an FCP device with a lot of HDDs and SSDs in it. I create 2 LUNs, one consisting of HDDs and the other consisting of SSDs. I want to have some normal VMs run on the HDD LUN and some VMs (such as DBs) run on the SSD LUN.
So, should I create one data domain with the 2 LUNs, or should I create 2 different data domains, each with a single LUN in it?
thanks.
yours Adam
User permissions needed to clone template disk
by Wood Peter
Hi,
Users have PowerUserRole permissions on the cluster and the storage objects, and also the TemplateCreator role on the Datacenter.
When users create VMs from templates, there is no option to clone the disk and create an independent VM disk. The Resource Allocation section is not visible at all.
What permissions should I give users so they can clone the disk when
creating a VM from a template?
Using oVirt 4.2.8.2-1.el7
Thank you,
-- Peter