<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Dec 29, 2015 at 5:54 AM, pc <span dir="ltr">&lt;<a href="mailto:pc@pcswo.com" target="_blank">pc@pcswo.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">[sorry, this is my first time using the mailing list; reposting with the content converted from HTML to plain text]<br>
<br>
### Description ###<br>
1. problem<br>
1) Migrating VM {name: xyz001, mem(min, max) = (2G,4G)} from oVirt host n33 to n34 fails.<br>
2) After shutting down VM {name: test001, mem(min, max) = (1G,1G)} on n34 and updating test001&#39;s config (Host-&gt;Start Running On: Specific(n34)), starting test001 brings it up on n33 instead of n34.<br>
<br>
2. err message<br>
Error while executing action: migrate<br>
[engine gui]<br>
xyz001:<br>
Cannot migrate VM. There is no host that satisfies current scheduling constraints. See below for details:<br>
The host n33.ovirt did not satisfy internal filter Memory because has availabe 1863 MB memory. Insufficient free memory to run the VM.<br>
The host n34.ovirt did not satisfy internal filter Memory because its swap value was illegal.<br>
<br>
<br>
[engine.log]<br>
INFO  [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-23) [5916aa3b] Lock Acquired to object &#39;EngineLock:{exclusiveLocks=&#39;[73351885-9a92-4317-baaf-e4f2bed1171a=&lt;VM, ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName test11&gt;]&#39;, sharedLocks=&#39;null&#39;}&#39;<br>
INFO  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-23) [5916aa3b] Candidate host &#39;n34&#39; (&#39;2ae3a219-ae9a-4347-b1e2-0e100360231e&#39;) was filtered out by &#39;VAR__FILTERTYPE__INTERNAL&#39; filter &#39;Memory&#39; (correlation id: null)<br>
INFO  [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-23) [5916aa3b] Candidate host &#39;n33&#39; (&#39;688aec34-5630-478e-ae5e-9d57990804e5&#39;) was filtered out by &#39;VAR__FILTERTYPE__INTERNAL&#39; filter &#39;Memory&#39; (correlation id: null)<br>
WARN  [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-23) [5916aa3b] CanDoAction of action &#39;MigrateVm&#39; failed for user admin@internal. Reasons: VAR__ACTION__MIGRATE,VAR__TYPE__VM,SCHEDULING_ALL_HOSTS_FILTERED_OUT,VAR__FILTERTYPE__INTERNAL,$hostName n33,$filterName Memory,$availableMem 1863,VAR__DETAIL__NOT_ENOUGH_MEMORY,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL,VAR__FILTERTYPE__INTERNAL,$hostName n34,$filterName Memory,VAR__DETAIL__SWAP_VALUE_ILLEGAL,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL<br>
INFO  [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-23) [5916aa3b] Lock freed to object &#39;EngineLock:{exclusiveLocks=&#39;[73351885-9a92-4317-baaf-e4f2bed1171a=&lt;VM, ACTION_TYPE_FAILED_VM_IS_BEING_MIGRATED$VmName test11&gt;]&#39;, sharedLocks=&#39;null&#39;}&#39;<br>
<br>
<br>
3. DC<br>
Compatibility Version: 3.5<br>
<br>
4. Cluster<br>
Memory Optimization: For Server Load - Allow scheduling of 150% of physical memory<br>
Memory Balloon: Enable Memory Balloon Optimization<br>
Enable KSM: Share memory pages across all available memory (best KSM effectivness)<br>
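<br>
(With 150% overcommit and 32057 MB of physical memory per host, the scheduler can therefore commit up to about 32057 * 1.5 = 48085.5 MB per host.)<br>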
<br>
5. HOST<br>
name: n33, n34<br>
mem: 32G<br>
<br>
6. VM<br>
[n33] 11 vms<br>
(min, max) = (2G,4G) = 8<br>
(min, max) = (2G,8G) = 1<br>
(min, max) = (2G,2G) = 2<br>
total: 22G/44G<br>
<br>
[n34] 7 vms<br>
(min, max) = (0.5G,1G) = 1<br>
(min, max) = (1G,2G) = 1<br>
(min, max) = (2G,2G) = 1<br>
(min, max) = (2G,4G) = 3<br>
(min, max) = (8G,8G) = 1<br>
total: 17.5G/25G<br>
--------------------------------------------<br>
(min, max) = (2G,4G) stands for:<br>
Memory Size: 4G<br>
Physical Memory Guaranteed: 2G<br>
Memory Balloon Device Enabled: checked<br>
--------------------------------------------<br>
<br>
7. rpm version<br>
[root@n33 ~]# rpm -qa |grep vdsm<br>
vdsm-yajsonrpc-4.16.27-0.el6.noarch<br>
vdsm-jsonrpc-4.16.27-0.el6.noarch<br>
vdsm-cli-4.16.27-0.el6.noarch<br>
vdsm-python-zombiereaper-4.16.27-0.el6.noarch<br>
vdsm-xmlrpc-4.16.27-0.el6.noarch<br>
vdsm-python-4.16.27-0.el6.noarch<br>
vdsm-4.16.27-0.el6.x86_64<br>
<br>
[root@engine ~]# rpm -qa |grep ovirt<br>
ovirt-release36-001-2.noarch<br>
ovirt-engine-setup-base-3.6.0.3-1.el6.noarch<br>
ovirt-engine-setup-3.6.0.3-1.el6.noarch<br>
ovirt-image-uploader-3.6.0-1.el6.noarch<br>
ovirt-engine-wildfly-8.2.0-1.el6.x86_64<br>
ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.0.3-1.el6.noarch<br>
ovirt-host-deploy-1.4.0-1.el6.noarch<br>
ovirt-engine-backend-3.6.0.3-1.el6.noarch<br>
ovirt-engine-webadmin-portal-3.6.0.3-1.el6.noarch<br>
ovirt-engine-jboss-as-7.1.1-1.el6.x86_64<br>
ovirt-engine-lib-3.6.0.3-1.el6.noarch<br>
ovirt-engine-setup-plugin-ovirt-engine-common-3.6.0.3-1.el6.noarch<br>
ovirt-engine-setup-plugin-ovirt-engine-3.6.0.3-1.el6.noarch<br>
ovirt-engine-setup-plugin-websocket-proxy-3.6.0.3-1.el6.noarch<br>
ovirt-engine-sdk-python-3.6.0.3-1.el6.noarch<br>
ovirt-iso-uploader-3.6.0-1.el6.noarch<br>
ovirt-vmconsole-proxy-1.0.0-1.el6.noarch<br>
ovirt-engine-extensions-api-impl-3.6.0.3-1.el6.noarch<br>
ovirt-engine-websocket-proxy-3.6.0.3-1.el6.noarch<br>
ovirt-engine-vmconsole-proxy-helper-3.6.0.3-1.el6.noarch<br>
ebay-cors-filter-1.0.1-0.1.ovirt.el6.noarch<br>
ovirt-host-deploy-java-1.4.0-1.el6.noarch<br>
ovirt-engine-tools-3.6.0.3-1.el6.noarch<br>
ovirt-engine-restapi-3.6.0.3-1.el6.noarch<br>
ovirt-engine-3.6.0.3-1.el6.noarch<br>
ovirt-engine-extension-aaa-jdbc-1.0.1-1.el6.noarch<br>
ovirt-engine-cli-3.6.0.1-1.el6.noarch<br>
ovirt-vmconsole-1.0.0-1.el6.noarch<br>
ovirt-engine-wildfly-overlay-001-2.el6.noarch<br>
ovirt-engine-dbscripts-3.6.0.3-1.el6.noarch<br>
ovirt-engine-userportal-3.6.0.3-1.el6.noarch<br>
ovirt-guest-tools-iso-3.6.0-0.2_master.fc22.noarch<br>
<br>
<br>
### DB ###<br>
[root@engine ~]# su postgres<br>
bash-4.1$ cd ~<br>
bash-4.1$ psql engine<br>
engine=# select vds_id, physical_mem_mb, mem_commited, vm_active, vm_count, reserved_mem, guest_overhead, transparent_hugepages_state, pending_vmem_size from vds_dynamic;<br>
                vds_id                | physical_mem_mb | mem_commited | vm_active | vm_count | reserved_mem | guest_overhead | transparent_hugepages_state | pending_vmem_size<br>
--------------------------------------+-----------------+--------------+-----------+----------+--------------+----------------+-----------------------------+-------------------<br>
 688aec34-5630-478e-ae5e-9d57990804e5 |           32057 |        45836 |        11 |       11 |          321 |             65 |                           2 |                 0<br>
 2ae3a219-ae9a-4347-b1e2-0e100360231e |           32057 |        26120 |         7 |        7 |          321 |             65 |                           2 |                 0<br>
(2 rows)<br>
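<br>
Checking these values against the VM list above (a rough reconstruction, assuming mem_commited counts each VM&#39;s full Memory Size plus guest_overhead rather than the guaranteed value):<br>
n33: 44G = 45056 MB; 45056 + (11 VMs + 1) * 65 = 45836 = mem_commited<br>
n34: 25G = 25600 MB; 25600 + (7 VMs + 1) * 65 = 26120 = mem_commited<br>
So even with ballooning enabled, the scheduler seems to count the full 4G of a (2G,4G) VM, which is why n33&#39;s mem_commited (45836 MB) already exceeds its physical memory (32057 MB).<br>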
<br>
<br>
<br>
### memory ###<br>
[n33]<br>
# free -m<br>
             total       used       free     shared    buffers     cached<br>
Mem:         32057      31770        287          0         41       6347<br>
-/+ buffers/cache:      25381       6676<br>
Swap:        29999      10025      19974<br>
<br>
Physical Memory:                            32057 MB total, 25646 MB used, 6411 MB free<br>
Swap Size:                                  29999 MB total, 10025 MB used, 19974 MB free<br>
Max free Memory for scheduling new VMs:     1928.5 MB<br>
<br>
<br>
[n34]<br>
# free -m<br>
             total       used       free     shared    buffers     cached<br>
Mem:         32057      31713        344          0         78      13074<br>
-/+ buffers/cache:      18560      13497<br>
Swap:        29999       5098      24901<br>
<br>
Physical Memory:                            32057 MB total, 18593 MB used, 13464 MB free<br>
Swap Size:                                  29999 MB total, 5098 MB used, 24901 MB free<br>
Max free Memory for scheduling new VMs:     21644.5 MB<br>
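<br>
Working backwards from these values (a best-effort reconstruction based on the FreeMemoryCalculation page referenced at the end), &quot;Max free Memory for scheduling new VMs&quot; appears to be physical_mem_mb * 1.5 - mem_commited - reserved_mem:<br>
n33: 32057 * 1.5 - 45836 - 321 = 1928.5 MB<br>
n34: 32057 * 1.5 - 26120 - 321 = 21644.5 MB<br>
which matches the GUI, and the 1863 MB in the engine error looks like 1928.5 minus one more guest_overhead (65 MB). So the &quot;insufficient memory&quot; result on n33 comes from overcommit accounting, independent of what free -m reports.<br>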
<br>
<br>
<br>
### code ###<br>
##from: <a href="https://github.com/oVirt/ovirt-engine" rel="noreferrer" target="_blank">https://github.com/oVirt/ovirt-engine</a><br>
v3.6.0<br>
<br>
##from: D:\code\java\ovirt-engine\backend\manager\modules\dal\src\main\resources\bundles\AppErrors.properties<br>
VAR__DETAIL__SWAP_VALUE_ILLEGAL=$detailMessage its swap value was illegal<br>
<br>
##from: D:\code\java\ovirt-engine\backend\manager\modules\bll\src\main\java\org\ovirt\engine\core\bll\scheduling\policyunits\MemoryPolicyUnit.java<br>
#-----------code--------------1#<br>
    private boolean isVMSwapValueLegal(VDS host) {<br>
        if (!Config.&lt;Boolean&gt; getValue(ConfigValues.EnableSwapCheck)) {<br>
            return true;<br>
        }<br>
    (omitted..)<br>
        return ((swap_total - swap_free - mem_available) * 100 / physical_mem_mb) &lt;= Config.&lt;Integer&gt; getValue(ConfigValues.BlockMigrationOnSwapUsagePercentage)<br>
    (omitted..)<br>
    }<br>
#-----------code--------------1#<br>
If EnableSwapCheck = false, the method simply returns true, so can we just disable this option? Any suggestions?<br>
<br>
[root@engine ~]# engine-config --get BlockMigrationOnSwapUsagePercentage<br>
BlockMigrationOnSwapUsagePercentage: 0 version: general<br>
<br>
so,<br>
Config.&lt;Integer&gt; getValue(ConfigValues.BlockMigrationOnSwapUsagePercentage) = 0<br>
so,<br>
(swap_total - swap_free - mem_available) * 100 / physical_mem_mb &lt;= 0<br>
so,<br>
swap_total - swap_free - mem_available &lt;= 0<br>
right?<br>
so, if (swap_total - swap_free) &lt;= mem_available then return true, else return false<br>
<br>
<br>
#-----------code--------------2#<br>
       for (VDS vds : hosts) {<br>
            if (!isVMSwapValueLegal(vds)) {<br>
                log.debug(&quot;Host &#39;{}&#39; swap value is illegal&quot;, vds.getName());<br>
                messages.addMessage(vds.getId(), EngineMessage.VAR__DETAIL__SWAP_VALUE_ILLEGAL.toString());<br>
                continue;<br>
            }<br>
            if (!memoryChecker.evaluate(vds, vm)) {<br>
                int hostAavailableMem = SlaValidator.getInstance().getHostAvailableMemoryLimit(vds);<br>
                log.debug(&quot;Host &#39;{}&#39; has {} MB available. Insufficient memory to run the VM&quot;,<br>
                        vds.getName(),<br>
                        hostAavailableMem);<br>
                messages.addMessage(vds.getId(), String.format(&quot;$availableMem %1$d&quot;, hostAavailableMem));<br>
                messages.addMessage(vds.getId(), EngineMessage.VAR__DETAIL__NOT_ENOUGH_MEMORY.toString());<br>
                continue;<br>
            }<br>
            (omitted..)<br>
        }<br>
<br>
#-----------code--------------2#<br>
If !isVMSwapValueLegal(vds), the host is filtered out (continue to the next host), right?<br>
so, when we try to migrate the vm from n33 to n34, the swap status on n34 actually is:<br>
(swap_total - swap_free) &gt; mem_available<br>
<br>
i.e. swap_used &gt; mem_available? confused...<br>
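<br>
Plugging in n34&#39;s numbers from above: swap_used = 29999 - 24901 = 5098 MB, and BlockMigrationOnSwapUsagePercentage is 0, so the filter rejects n34 whenever the mem_available value the engine holds for it is below 5098 MB, regardless of how much memory free -m shows as buffers/cache.<br>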
<br>
so, the logic is:<br>
1) check n33: swap [passed], then memory [failed], continue to the next host<br>
2) check n34: swap [failed], continue to the next host<br>
<br>
If I have misunderstood anything, please let me know.<br>
<br>
<br>
<br>
### conclusion ###<br>
1) n33 does not have enough memory. [Yes, I know that.]<br>
2) n34&#39;s swap value is illegal. [Why, and how can I solve it?]<br>
3) What I tried:<br>
--change config: BlockMigrationOnSwapUsagePercentage<br>
[root@engine ~]# engine-config --set BlockMigrationOnSwapUsagePercentage=75 -cver general<br>
[root@engine ~]# engine-config --get BlockMigrationOnSwapUsagePercentage<br>
BlockMigrationOnSwapUsagePercentage: 75 version: general<br>
<br>
Result: failed.<br>
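<br>
(Sanity check on the numbers: with the threshold at 75, the check tolerates up to 75% of 32057 = about 24043 MB of &quot;swap used minus mem_available&quot;, while n34 only has 5098 MB of swap in use, so it should now pass even if mem_available were 0. If it still fails, one possible explanation is that the engine was not restarted after engine-config --set, so the old value 0 is still in effect.)<br>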
<br>
--disable EnableSwapCheck<br>
How? The option is not listed in &#39;engine-config --list&#39;; should I update the table field directly in the DB?<br>
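<br>
A rough sketch of the direct-DB route (assuming EnableSwapCheck is read from the vdc_options table under that exact option_name; the row may not exist at all while the built-in default is in use, and ovirt-engine has to be restarted afterwards for the change to be picked up):<br>
engine=# select option_name, option_value, version from vdc_options where option_name = &#39;EnableSwapCheck&#39;;<br>
-- no row returned: the compiled-in default (true) applies; inserting a row would override it<br>
engine=# insert into vdc_options (option_name, option_value, version) values (&#39;EnableSwapCheck&#39;, &#39;false&#39;, &#39;general&#39;);<br>
-- row already present: update it instead<br>
engine=# update vdc_options set option_value = &#39;false&#39; where option_name = &#39;EnableSwapCheck&#39;;<br>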
<br>
<br>
--disk swap partition on host<br>
Should I do this operation?<br>
<br>
--update ovirt-engine?<br>
No useful information found in the latest release notes; should I do this?<br>
<br>
<br>
### help ###<br>
any help would be appreciated.<br>
<br>
ZYXW. Reference<br>
<a href="http://www.ovirt.org/Sla/FreeMemoryCalculation" rel="noreferrer" target="_blank">http://www.ovirt.org/Sla/FreeMemoryCalculation</a><br>
<a href="http://lists.ovirt.org/pipermail/users/2012-November/010858.html" rel="noreferrer" target="_blank">http://lists.ovirt.org/pipermail/users/2012-November/010858.html</a><br>
<a href="http://lists.ovirt.org/pipermail/users/2013-March/013201.html" rel="noreferrer" target="_blank">http://lists.ovirt.org/pipermail/users/2013-March/013201.html</a><br>
<a href="http://comments.gmane.org/gmane.comp.emulators.ovirt.user/19288" rel="noreferrer" target="_blank">http://comments.gmane.org/gmane.comp.emulators.ovirt.user/19288</a><br>
<a href="http://jim.rippon.me.uk/2013/07/ovirt-testing-english-instructions-for.html" rel="noreferrer" target="_blank">http://jim.rippon.me.uk/2013/07/ovirt-testing-english-instructions-for.html</a><br>
<br></blockquote><div><br></div><div>Hi, </div><div>Let me simplify things.</div><div><br></div><div>We do not allow swapping in general. The reason is that it kills the performance</div><div>of all hosts.</div><div><br></div><div>As you were able to see in our code (0 is the default config value we have)</div><div>we expect the following expression to hold:</div><div><br></div><div>(swap_total - swap_free - mem_available) * 100 / physical_mem_mb &lt;= 0<br><br></div></div>And in your case we see the value is &gt; 0.</div><div class="gmail_extra">This means that swap_total &gt; (swap_free + mem_available), or in general</div><div class="gmail_extra">your host is swapping.</div><div class="gmail_extra"><br></div><div class="gmail_extra">Since the host is swapping, we do not allow running a VM on it.</div><div class="gmail_extra"><br></div><div class="gmail_extra">Let me know if you have any further questions.</div><div class="gmail_extra">Doron</div></div>