On 10/22/2014 02:43 PM, Michal Skrivanek wrote:
On Oct 21, 2014, at 18:13, Darrell Budic <budic(a)onholyground.com> wrote:
> Was poking at this a little to see if there was any tuning that could affect it, and I
> spotted some oddness with the processor counts on my VMs under oVirt 3.4. They seem to
> think they only have the proper number I set in oVirt (as shown in /proc/cpuinfo), but if I
> look at dmidecode, there are 159 bogus processors listed. I'd expect maybe 16 from the -smp
> 1,maxcpus=16,sockets=16,cores=1,threads=1 argument to qemu-kvm, but there are 0xa0 of
> them. Maybe this is a SeaBIOS or qemu-kvm issue causing all those extras? Anyway, the # of
> rcu* processes matches pretty well, so that's likely where it's coming from.
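> For reference, the mismatch is easy to see inside a guest with something like the following
> (illustrative commands only; the exact rcu* thread names vary by kernel):
>   dmidecode -t processor | grep -c 'Processor Information'   # processor entries SeaBIOS publishes
>   grep -c ^processor /proc/cpuinfo                           # CPUs the kernel actually brought online
>   ps -e -o comm= | grep -c '^rcu'                            # rcu kernel threads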
>
> At least they shouldn’t be causing a performance issue, given their purpose as
> non-blocking work threads, but it is odd to see.
>
> Punit, did you open a BZ I can add these details to?
>
> BTW, this appears to be corrected by some component of oVirt 3.5 (probably
> qemu-rhev?). On VMs started after I upgraded my engine (even on hosts still running
> 3.4 vdsm), I'm only seeing 16 "processors" in the BIOS, and thus only 16 of the various
> rcu* processes. It could have been a general CentOS 6.5 update too, since I did those as
> well, so I can't get any finer resolution on that issue (both engine and host nodes were
> updated; lots of CentOS 7 VMs).
>
> If there is an RFE for this, perhaps a configurable max # of CPU sockets for hot add
> could be added, or it could be limited to the max physical CPU count of the biggest host
> in the cluster?
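> Just as an illustration of where that number would come from, the physical topology of a
> host can be read with standard tools, e.g.:
>   lscpu | egrep '^(CPU\(s\)|Socket)'
>   virsh nodeinfo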
The max is 16. Roy, what can we change/not change after your latest changes?
I suppose it's related to the maximum values we send because of hotplug support.
I guess it's related. What is surprising is that probably most of the
processors are offline, and therefore not in use, but the kernel
still allocated RCU threads to them. Maybe a bug, maybe an optimization for
onlining speed; I don't know, really.
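If it helps confirm that, a quick check inside one of the affected guests would be something
like the following (assuming the rcuo*/N no-callback thread naming used by recent kernels):
  cat /sys/devices/system/cpu/online     # vCPUs actually online
  cat /sys/devices/system/cpu/possible   # vCPUs the kernel allocated per-CPU state for
  ps -e -o comm= | grep '^rcuo' | sort -t/ -k2 -n | tail -3   # highest-numbered rcu threads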
Two config values would help:
engine-config -g MaxNumOfVmSockets
engine-config -g MaxNumOfVmCpus
You can still have 160 vCPUs (that limit comes from QEMU): 16 sockets and 10 CPUs
per socket.
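In case it is useful: those values can also be lowered per cluster compatibility level with
engine-config -s (illustrative values only; ovirt-engine needs a restart afterwards):
  engine-config -s MaxNumOfVmSockets=4 --cver=3.4
  engine-config -s MaxNumOfVmCpus=64 --cver=3.4
  service ovirt-engine restart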
Roy
Thanks,
michal
> -Darrell
>
>> On Oct 20, 2014, at 7:34 AM, Doron Fediuck <dfediuck(a)redhat.com> wrote:
>>
>>
>>
>> ----- Original Message -----
>>> From: "Punit Dambiwal" <hypunit(a)gmail.com>
>>> To: users(a)ovirt.org, "Dan Kenigsberg" <danken(a)redhat.com>, "Itamar Heim" <iheim(a)redhat.com>, ahadas(a)redhat.com
>>> Sent: Monday, October 20, 2014 5:58:20 AM
>>> Subject: Re: [ovirt-users] Guest VM Running 160 RCU Processes
>>>
>>> Hi,
>>>
>>> Can anybody suggest a good way to handle this?
>>>
>>> On Fri, Oct 17, 2014 at 3:15 PM, Punit Dambiwal <hypunit(a)gmail.com> wrote:
>>>
>>>
>>>
>>> Hi,
>>>
>>> I have one oVirt cluster, and under this cluster all the guest machines (such
>>> as CentOS, Ubuntu, Debian, etc.) have almost 160 RCU processes running.
>>>
>>> I searched on Google about RCU (it's the kernel's "read-copy-update"
>>> mechanism):
>>>
>>>
>>> http://lwn.net/Articles/518953/
>>>
>>> I want to know how I can reduce these 160 processes to 10-20, or how I can
>>> disable them. Is there any bad impact if I disable them?
>>>
>>> Thanks,
>>> Punit
>>>
>>>
>> Hi Punit,
>> we need to do a bit of research on this one.
>> In order to make sure we keep track of it, do you mind opening an RFE (BZ)
>> with all the relevant details, including the hardware you're using, the
>> guest config, how busy the guests/host are, which hypervisor is used, and
>> the versions?
>>
>> Thanks,
>> Doron