Failed to migrate VM
by Bill James
I have a hardware node where, for whatever reason, most of the VM statuses
showed as "?", even though the VMs were up and running fine.
The node was running oVirt 3.6, so I decided to upgrade it to 4.1.0.4-1 to
match most of the rest of the cluster, including the engine.
After the upgrade, some VMs start up fine on the upgraded host, while
others fail to start there and migrate themselves to another host.
If I try to manually migrate them to the recently upgraded host, the
migration fails, and I have yet to find where the logs say why.
I am able to "Run Once" the VM, tell it to start on this host, and it
starts fine.
Why is migration failing?
ovirt-engine-4.1.0.4-1.el7.centos.noarch
vdsm-4.19.4-1.el7.centos.x86_64
2017-03-04 10:16:51,716-08 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-48) [519095c3-83ed-4478-8dd4-f432db6db140] EVENT_ID: VM_MIGRATION_TO_SERVER_FAILED(120), Correlation ID: 26d1ecba-8711-408d-b9d5-6461a4aab4e5, Job ID: 546e0e11-9b4b-48d4-bd50-2f525d049ae2, Call Stack: null, Custom Event ID: -1, Message: Migration failed (VM: j2es2.test.j2noc.com, Source: ovirt6.test.j2noc.com, Destination: ovirt4.test.j2noc.com)
(engine.log attached)
Thanks.
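The engine event alone rarely gives the reason; a hedged sketch of where to dig next (standard oVirt log locations; the per-VM libvirt log filename is an assumption based on the VM name above):

```shell
# On the destination host (ovirt4), look for the real failure in vdsm's log:
grep -iE 'migration.*(error|fail)' /var/log/vdsm/vdsm.log | tail

# On the engine, follow the whole flow by correlation ID (from the event above):
grep '26d1ecba-8711-408d-b9d5-6461a4aab4e5' /var/log/ovirt-engine/engine.log

# libvirt's per-VM log often names the root cause (CPU flags, network, etc.);
# the filename pattern below is an assumption based on the VM name:
tail -n 50 /var/log/libvirt/qemu/j2es2.test.j2noc.com.log
```

The "Run Once works, migration fails" pattern often points at a destination-side compatibility check that only migration performs, so the destination host's logs are usually the more informative side.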
8 years, 1 month
lvscan on RHV host
by Marcin Kruk
Could somebody point me to docs, or explain the behaviour of LVs in an
RHV 4 cluster? At one moment they are active, at another they are inactive.
Also, how do I correlate the LV names with the disks of virtual machines,
or with other disks?
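A hedged sketch, assuming the usual oVirt/RHV block-storage layout, where each storage domain is an LVM volume group named by the storage-domain UUID and each virtual disk is an LV named by its image UUID:

```shell
# List LVs with their vdsm tags; on oVirt block domains the IU_<uuid> tag
# on a data LV is the disk's image ID, which you can match against the
# disk ID shown in the Admin Portal (Disks tab).
lvs -o vg_name,lv_name,lv_size,lv_tags

# vdsm activates an LV only while a VM using that disk runs on this host,
# and deactivates it afterwards -- which is why lvscan shows the same LVs
# flipping between ACTIVE and inactive over time.
lvscan | grep -i active
```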
8 years, 1 month
Starting cluster: Hosted Engine stuck - failed liveliness check
by Maton, Brett
What is the correct way to shut down a cluster ?
I shut down my 4.1 cluster following this guide:
https://github.com/rharmonson/richtech/wiki/OSVDC-Series:-oVirt-3.6-Cluster-Shutdown-and-Startup
as the NAS hosting the NFS mounts needed rebooting after updates.
(I realise the guide is for 3.6, but surely it's not that different?)
However, the hosted engine isn't starting now; it fails with:
failed liveliness check
I've tried connecting from the host that starts the HE VM, to see what's
going on, with:
virsh console HostedEngine
I don't know what authentication it requires; I tried the engine admin
details and the root OS credentials...
hosted-engine --add-console-password
Enter password:
no graphics devices configured
Unsurprisingly, remote-viewer vnc://localhost:5900
fails with "Unable to connect to the graphic server vnc://localhost:5900".
What can I do next?
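For reference, the hosted-engine CLI itself has the relevant knobs for a controlled stop/start cycle; a minimal sketch (hedged — verify the exact flags against your 4.1 documentation), run on one HE host:

```shell
# Check agent/engine health and host scores first:
hosted-engine --vm-status

# Put the whole HA cluster in global maintenance before touching the VM:
hosted-engine --set-maintenance --mode=global

# Stop and start the engine VM explicitly:
hosted-engine --vm-shutdown     # or --vm-poweroff if shutdown hangs
hosted-engine --vm-start

# Watch the liveliness checks while it comes up:
tail -f /var/log/ovirt-hosted-engine-ha/agent.log

# Leave maintenance once the engine answers again:
hosted-engine --set-maintenance --mode=none
```

A "failed liveliness check" means the agent could not reach the engine's health page over HTTP, so the VM may well be running while the engine service inside it is not.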
8 years, 1 month
best way to remove SAN lun
by Gianluca Cecchi
Hello,
currently I have a cluster of 3 hosts, each with FC SAN connectivity to 4
LUNs: 3 are already configured as storage domains (1TB, 2TB, 4TB), and one
is free, not yet allocated.
See here for screenshot:
https://drive.google.com/file/d/0BwoPbcrMv8mvRVZZMTlNcTQ5MGs/view?usp=sha...
At the moment the command "multipath -l" run on hosts shows all the 4 LUNs.
Now I want to do 2 things at storage array level:
- remove the 2TB storage domain LUN
- remove the 20Gb LUN not yet allocated
What is the correct workflow, supposing I have already emptied the 2TB
storage domain of VM disks and such?
Select the 2TB SD, then the Datacenter sub-tab, then "Maintenance",
"Detach", and at the end "Remove"?
I think I would still see 4 LUNs at this point, correct?
Then do I proceed with removal of the LUNs at the storage array level?
Should I select an SD line and then "Scan Disks" to refresh the SAN, so
that at the end multipath shows only 2 of them?
Or is any manual command needed at the host level before removal from the
array?
Thanks in advance
Gianluca
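Once the storage domain is removed in the UI and the LUNs are unmapped on the array, the host-side cleanup can be sketched like this (the WWID and sdX device names below are placeholders, not from the original post):

```shell
# Note the WWID of each removed LUN and its underlying sdX path devices:
multipath -ll

# Flush the now-dead multipath map (example WWID -- substitute yours):
multipath -f 360000000000000000000000000000001

# Delete each underlying SCSI path device so it disappears cleanly:
echo 1 > /sys/block/sdc/device/delete
echo 1 > /sys/block/sdd/device/delete

# Alternatively, rescan and remove stale devices in one pass (sg3_utils):
rescan-scsi-bus.sh -r
```

Doing this on every host before the array-side unmap avoids stale paths and multipath errors in the logs.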
8 years, 1 month
How to force glusterfs to use RDMA?
by Arman Khalatyan
Hi,
Is there a way to force the connections over RDMA only?
If I check the host mounts, I cannot see an rdma mount option:
mount -l| grep gluster
10.10.10.44:/GluReplica on
/rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica
type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
I have glusterized 3 nodes:
GluReplica
Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
Volume Type: Replicate (Arbiter)
Replica Count: 2 + 1
Number of Bricks: 3
Transport Types: TCP, RDMA
Maximum no of snapshots: 256
Capacity: 3.51 TiB total, 190.56 GiB used, 3.33 TiB free
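With a TCP,RDMA volume, the fuse client defaults to TCP unless told otherwise; the transport has to be selected at mount time. A hedged sketch (the /mnt/test mountpoint is an example):

```shell
# Request RDMA explicitly via the mount option:
mount -t glusterfs -o transport=rdma 10.10.10.44:/GluReplica /mnt/test

# Equivalent: mount the .rdma sub-volume name:
mount -t glusterfs 10.10.10.44:/GluReplica.rdma /mnt/test

# Verify which transport the client actually negotiated:
mount -l | grep gluster
```

In oVirt, the same `transport=rdma` string can be tried in the storage domain's "Mount Options" field, but this is worth verifying in your setup before relying on it.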
8 years, 1 month
Unsupported Cluster with oVirt question
by m m
Hi,
I have two questions about an oVirt setup. I have 3 nodes connected to a
SAN storage via FC, and all VMs are kept on this SAN. From the SAN I have
mounted two partitions on each of these nodes, and these partitions are in
an OCFS2 cluster setup. I am planning to export these partitions and create
NFS domain storage in oVirt. Is there any better way, considering this
setup? My second question: in the event that one of the nodes fails, is
there anything I can do to keep the VMs running on that node from going
down?
Regards
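If the NFS route is chosen, oVirt expects exports owned by vdsm:kvm (UID/GID 36); a hedged sketch of an export (the path and client spec are illustrative, not from the post):

```shell
# Prepare a directory on the OCFS2 mount with vdsm-friendly ownership:
mkdir -p /ocfs2/ovirt-export
chown 36:36 /ocfs2/ovirt-export

# /etc/exports entry mapping anonymous access to vdsm:kvm:
echo '/ocfs2/ovirt-export *(rw,sync,no_subtree_check,anonuid=36,anongid=36)' >> /etc/exports
exportfs -ra
```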
8 years, 1 month
vm display effect
by qinglong.dong@horebdata.cn
Hi, all
I have used a Windows VM created by oVirt for a long time. I found that
the VM display effect was not very good in some cases. For example, the
code came out word by word when using some code-browsing tools, and
sometimes online video playing was not very fluent.
Could anyone give some advice on how to improve this? Thanks!
8 years, 1 month
Updating the notification emails after the hosted-engine is deployed
by Joseph Kelly
Hello all - does anyone know how to update the destination-emails address
after the hosted-engine is deployed? And can this be done from the
webadmin GUI?

# cat /var/lib/ovirt-hosted-engine-ha/broker.conf
[email]
smtp-server = mailhost.ts.com
smtp-port = 25
source-email = root@localhost
destination-emails = emai-addressl@ts.com <-- want to change this

[notify]
state_transition = maintenance|start|stop|migrate|up|down

I've done the following on both nodes, but the file seems to keep getting
over-written:

1) Shut down the hosted-engine.

2) Stopped these services on all nodes:
service ovirt-ha-agent stop
service ovirt-ha-broker stop

3) Updated broker.conf on each node:
Edited /var/lib/ovirt-hosted-engine-ha/broker.conf on each node
Edited /etc/ovirt-hosted-engine/answers.conf <-- just for good measure

4) Re-started these services on all nodes:
service ovirt-ha-agent start
service ovirt-ha-broker start

5) Restarted the hosted-engine.

Thanks,
Joe.
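One likely reason the local edits keep getting overwritten: in recent hosted-engine releases the broker configuration is kept on the shared storage, and the agent re-syncs it to each node. If your build has the shared-config commands, they persist the change; a hedged sketch (the new address is a placeholder, and availability should be checked with `hosted-engine --help`):

```shell
# Read the current value from the shared configuration:
hosted-engine --get-shared-config destination-emails --type=broker

# Write the new value back to the shared configuration:
hosted-engine --set-shared-config destination-emails new-address@ts.com --type=broker

# Restart the broker so it picks up the change:
systemctl restart ovirt-ha-broker
```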
8 years, 1 month
About qemu's CPU up to 400-500%
by 张 余歌
hello, friends!

It is weird: top shows that qemu's CPU usage is not normal.

# top

   PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
 52313 qemu      20   0 9075m 4.3g  23m S 433.1  3.5 666:30.59 qemu-kvm
 75583 qemu      20   0 9220m 4.2g 6636 R 216.1  3.3 151:28.02 qemu-kvm
 85492 root      20   0  217m  24m 5100 R  99.7  0.0   0:36.90 python
 56704 qemu      20   0 6976m 2.2g 7400 S  60.3  1.8 306:07.15 qemu-kvm
 75590 root      20   0     0    0    0 R  45.2  0.0   7:49.30 vhost-75583
  8164 qemu      20   0 8905m 4.2g 6592 S  11.5  3.3 107:22.30 qemu-kvm
  4743 ovirt     20   0 49.3g 9.8g  21m S  10.9  7.8  62:37.25 java
  5475 root      20   0  911m  13m 6136 S  10.9  0.0  28:55.46 libvirtd
  6367 vdsm       0 -20 4258m 131m 9296 S  10.9  0.1 102:29.89 vdsm
 82409 postgres  20   0  260m  62m  21m S  10.9  0.0   0:35.59 postmaster
 24706 root      20   0 15704 2000  924 S   4.2  0.0  17:38.40 top
  6819 root      20   0 15804 2012  932 S   3.8  0.0  21:30.74 top
 77317 root      20   0 15692 1992  924 S   3.8  0.0   3:11.78 top
 85203 root      20   0 15692 1996  924 S   3.8  0.0   0:09.55 top
   178 root      20   0     0    0    0 S   3.2  0.0  14:42.95 events/15
 85555 root      20   0 15692 1992  924 R   3.2  0.0   0:00.59 top
  4945 apache    20   0  194m 5532 2332 S   1.3  0.0   0:27.57 httpd

Because of this I cannot use the VMs normally; especially when playing
video, they perform badly. I suddenly ran into this condition and don't
know how to deal with it. Help me, please!

I found the CPU usage of qemu was really not normal, but I had no idea why.
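To pin down which VM each qemu-kvm process belongs to and whether vCPU or worker threads are spinning, a hedged sketch (PIDs taken from the top output above):

```shell
# The qemu-kvm command line includes the VM name (-name ...):
ps -fp 52313

# Per-thread view of that process: vCPU threads vs. I/O worker threads:
top -H -p 52313

# Read-only libvirt access works on oVirt hosts without stopping vdsm:
virsh -r list
virsh -r domstats --cpu-total
```

If the busy threads are vCPUs, the guest itself is doing the work (check inside the VM); if they are worker/vhost threads, the load is I/O or emulation overhead on the host side.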
8 years, 1 month
hosted-engine status timestamp
by martin twostep
RHV 4.0 with a self-hosted engine.
Please explain: when I run hosted-engine --vm-status, why does the
timestamp field in the "Extra metadata (valid at timestamp)" section show
a stale time rather than the present time?
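As I understand it (an assumption worth verifying against your release), the metadata timestamp counts seconds since the reporting host booted rather than wall-clock time, which is why it looks stale. A sketch converting it (TS is a placeholder value copied from the `timestamp=` line of the --vm-status output):

```shell
# Placeholder: the raw value from "timestamp=..." in --vm-status output.
TS=4679955

# Epoch second at which this host booted: now minus uptime.
BOOT_EPOCH=$(( $(date +%s) - $(cut -d. -f1 /proc/uptime) ))

# Wall-clock time at which the metadata was written.
date -d "@$(( BOOT_EPOCH + TS ))"
```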
8 years, 1 month