Hosted-engine without mac-spoofing
by Matteo Capuano
Hi folks,
I'm trying to install oVirt hyperconverged on Google Compute Engine (GCE) to
create a lab environment.
Nested virtualization on GCE is in public beta and seems to work fine, but
there's a catch: I cannot enable MAC spoofing on the GCE instances' NICs.
So here's my question: is there a way to install hosted-engine on a network
where you can't enable MAC spoofing?
Thank you
Cheers
Matteo
6 years, 10 months
Re: [ovirt-users] oVirt 4.1 change ovirtmgmt to enable VLAN tagging
by Staniforth, Paul
Thanks Luca,
            I'll talk to our network team.
Paul S.

On 22 Jan 2018 19:10, Luca 'remix_tj' Lorenzetto <lorenzetto.luca(a)gmail.com> wrote:

On 22 Jan 2018 7:49 PM, "Staniforth, Paul" <P.Staniforth(a)leedsbeckett.ac.uk> wrote:

Hello,
        We have an oVirt 4.1 cluster of 3 ovirt-nodes that currently don't have
VLAN tagging enabled on the management network. We would like to enable VLAN
tagging and would be grateful if anyone has any information or experience
doing this.

The management engine is running on a separate host and the 3 hosts are
running oVirt Node 4.1.6.

Hi Paul, I did the same with the same release.

The trick was to also tag the default VLAN on the switch during the change
from untagged to tagged.

For example, if you have to move your ovirtmgmt to VLAN 10, you can set your
switch port to serve the network both native (untagged) and tagged, so you
will not lose connectivity while changing the ovirtmgmt setup.

Luca

To view the terms under which this email is distributed, please go to:
http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html
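One low-risk way to sanity-check Luca's suggestion before touching ovirtmgmt is to
bring up a temporary tagged sub-interface on a spare host NIC and test reachability.
This is only a sketch using iproute2; the NIC name, VLAN ID and addresses below are
placeholders to adapt to your environment:

    # temporary tagged interface on the candidate VLAN (assumed NIC em1, VLAN 10)
    ip link add link em1 name em1.10 type vlan id 10
    ip link set em1.10 up
    # scratch test address (documentation range) and a ping to the gateway over the tagged path
    ip addr add 192.0.2.10/24 dev em1.10
    ping -c 3 -I em1.10 192.0.2.1
    # clean up before reconfiguring ovirtmgmt through the engine
    ip link del em1.10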
6 years, 10 months
Re: [ovirt-users] oVirt 4.2 - Removed disk remains in VM OS with hooks?
by Yaniv Kaul
On Jan 22, 2018 5:32 PM, "Zip" <plord(a)intricatenetworks.com> wrote:
I am having an issue where, when I use the REST API to connect a snapshot
> from another VM to a Backup-Appliance-VM, after the clone, when I remove the
> disk and delete the snapshot, the disk remains in the Backup-Appliance-VM
> as /dev/sdb or /dev/vdb.
>
> If I reboot the Backup-Appliance-VM the disk disappears.
>
> If I manually remove the disk by "echo 1 > /sys/block/sdb/device/delete"
> the disk will disappear, but if I rescan the scsi bus, it is found and
> shows up again in the VM OS, but the oVirt WebGUI does NOT show it as
> connected.
>
-- The first part is expected; the second isn't.
What are you referring to as the first and second part?
I understand why the disk is seen after it was detached (stale device), not
why it comes back after rescan, which seems to suggest it is not detached
properly.
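For completeness, the guest-side commands under discussion look roughly like this
(a sketch, assuming the stale disk is /dev/sdb; a cleanly detached disk should not
come back after the rescan):

    # drop the stale block device from the guest's view (assumed device: sdb)
    echo 1 > /sys/block/sdb/device/delete
    # rescan every SCSI host adapter; a properly detached disk should stay gone
    for scan in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$scan"; done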
>
> I am also not able to attach any other disks, as it complains of:
>
> HotPlugDiskVDS failed: internal error: unable to execute QEMU command
> '__com.redhat_drive_add': Duplicate ID 'drive-scsi0-0-0-2' for drive
>
> I did see that others in the past have gotten around this issue by
> rebooting the Backup-Appliance-VM and then continuing on with the next VM
> backup and looping through backup-reboot-backup-reboot-etc.
>
> Anyone have an idea on how to solve this issue and remove the hooks from
> the guest OS?
>
> Steps to reproduce this issue:
>
>
> 1. Create a backup appliance VM to be used for the backup script
> execution
> 2. Currently I have the VMs set to virtio with threaded I/O enabled.
> Also tried virtio_scsi with the same result.
> 3. Using REST API – make snapshot of target VM
> 4. Using REST API – fetch vm metadata
> 5. Using REST API – attach the snapshot/disk to the Backup-Appliance-VM
> 6. dd the drive to backup folder
> 7. Using REST API – remove the disk from the Backup-Appliance-VM
> 8. Using REST API – delete the snapshot
> 9. ** Check the guest OS of the Backup-Appliance-VM and the mounted
> drive from the backup above still appears and behaves as mentioned in
> comments above.
>
>
>
— There are many details missing, including versions of everything used,
but logs would be most helpful here.
All oVirt versions are the most recent; this is a fresh install of the
Hosted Engine. I will just script the backup to cycle through sdb, sdc,
sdd, …. It just seems odd that once a disk is detached and a snapshot is
deleted, the Backup-Appliance-VM can still access the drive/snapshot?
Unrelated note - do NOT use /dev/sdX to enumerate them. Especially on the SCSI
bus, probing is done in parallel and the devices may get different names next
time. Use /dev/disk/by-id paths.
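A hedged illustration of that advice for the dd step; the by-id link below is a
placeholder, so list the directory first and pick the link that points at the
attached snapshot disk:

    # stable names survive reboots and parallel SCSI probing
    ls -l /dev/disk/by-id/
    # placeholder id shown; substitute the real link for the attached snapshot disk
    dd if=/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_example-serial \
       of=/backup/target-vm-disk.img bs=1M conv=sparse status=progress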
A second issue is that the above won't work when I have the VMs running on
> MPIO iSCSI storage, so for testing I have moved to NFS4. Anyone have ideas
> about either issue? I'd love to hear ;)
>
— Same - logs would be helpful here.
I will continue to dig through this issue and will post logs if stuck. I
just wanted to know if there was anything obvious that I should be doing
differently with iSCSI vs NFS with mounting disks/snapshots.
Nope.
Y.
Zip
— Y.
>
> Thanks
>
> Irc.oftc.net #ovirt
> zipur
>
>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
6 years, 10 months
oVirt home lab hardware
by Abdurrahman A. Ibrahim
Hello,
I am planning to buy home lab hardware to be used by oVirt.
Any recommendations for used hardware I can buy, from eBay for example?
Also, have you tried oVirt on Intel NUC or any other SMB servers before?
Thanks,
Ab
6 years, 10 months
oVirt 4.2 - Removed disk remains in VM OS with hooks?
by Zip
I am having an issue where, when I use the REST API to connect a snapshot
from another VM to a Backup-Appliance-VM, after the clone, when I remove the
disk and delete the snapshot, the disk remains in the Backup-Appliance-VM as
/dev/sdb or /dev/vdb.

If I reboot the Backup-Appliance-VM the disk disappears.

If I manually remove the disk by "echo 1 > /sys/block/sdb/device/delete" the
disk will disappear, but if I rescan the SCSI bus, it is found and shows up
again in the VM OS, but the oVirt WebGUI does NOT show it as connected.

I am also not able to attach any other disks, as it complains of:

HotPlugDiskVDS failed: internal error: unable to execute QEMU command
'__com.redhat_drive_add': Duplicate ID 'drive-scsi0-0-0-2' for drive

I did see that others in the past have gotten around this issue by rebooting
the Backup-Appliance-VM and then continuing on with the next VM backup,
looping through backup-reboot-backup-reboot-etc.

Anyone have an idea on how to solve this issue and remove the hooks from the
guest OS?

Steps to reproduce this issue:

1. Create a backup appliance VM to be used for the backup script execution
2. Currently I have the VMs set to virtio with threaded I/O enabled. Also
   tried virtio_scsi with the same result.
3. Using REST API – make snapshot of target VM
4. Using REST API – fetch VM metadata
5. Using REST API – attach the snapshot/disk to the Backup-Appliance-VM
6. dd the drive to a backup folder
7. Using REST API – remove the disk from the Backup-Appliance-VM
8. Using REST API – delete the snapshot
9. ** Check the guest OS of the Backup-Appliance-VM: the mounted drive from
   the backup above still appears and behaves as mentioned in the comments
   above.

A second issue is that the above won't work when I have the VMs running on
MPIO iSCSI storage, so for testing I have moved to NFS4. Anyone have ideas
about either issue? I'd love to hear ;)

Thanks

Irc.oftc.net #ovirt
zipur
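For readers trying to reproduce steps 3 and 8, a minimal sketch of the snapshot
create and delete calls using curl; the engine URL, credentials, VM_ID and
SNAPSHOT_ID are placeholders, and the attach call for step 5 is omitted here since
its payload varies with the API/SDK version in use:

    # Step 3: create a snapshot of the target VM
    # (placeholder engine URL and credentials; certificate checking disabled for brevity)
    curl -s -k -u 'admin@internal:password' \
         -H 'Content-Type: application/xml' \
         -d '<snapshot><description>backup</description></snapshot>' \
         'https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots'

    # Step 8: delete the snapshot once the dd copy has finished
    curl -s -k -u 'admin@internal:password' -X DELETE \
         'https://engine.example.com/ovirt-engine/api/vms/VM_ID/snapshots/SNAPSHOT_ID'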
6 years, 10 months
VirtIO-SCSI and viodiskcache custom property
by Matthias Leopold
Hi,
Is there a reason why the viodiskcache custom property isn't honored
when using VirtIO-SCSI?
On a Cinder (Ceph) disk "viodiskcache=writeback" is ignored with
VirtIO-SCSI and honored when using VirtIO.
On an iSCSI disk "viodiskcache=writeback" is ignored with VirtIO-SCSI
and the VM can't be started when using VirtIO with "unsupported
configuration: native I/O needs either no disk cache or directsync cache
mode, QEMU will fallback to aio=threads"
We actually want to use "viodiskcache=writeback" with Cinder (Ceph) disks.
oVirt version: 4.1.8
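For context, this property reaches VDSM through the engine's user-defined VM
properties. A hedged sketch of how that is usually enabled on the engine host;
the exact pattern and the need for a --cver argument are assumptions to verify
against your version:

    # show what is currently defined before overwriting it (-s replaces the whole value)
    engine-config -g UserDefinedVMProperties
    # allow a viodiskcache property with the usual cache-mode values (assumed pattern)
    engine-config -s 'UserDefinedVMProperties=viodiskcache=^(none|writeback|writethrough)$' --cver=4.1
    systemctl restart ovirt-engine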
Thanks
Matthias
6 years, 10 months
Configuration of FCoE in oVirt 4.2 on HP BladeSystem c7000
by Gunder Johansen
We are running oVirt 4.2 on a HP BladeSystem c7000 with six BL460 gen 9 blade servers, all running oVirt Node 4.2 distro. We want to configure FCoE for the storage domains, but I find no places in the GUI where I can define the logical network. I guess the documentation on oVirt site (https://www.ovirt.org/documentation/admin-guide/chap-Logical_Networks/) is outdated with respect to FCoE. Can anyone tell us how to configure FCoE in oVirt 4.2?
Thanks in advance for your help.
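The admin GUI alone will indeed not show anything FCoE-specific; as far as I know
this is normally wired up through a VDSM hook plus a network custom property. A
hedged sketch assuming the vdsm-hook-fcoe approach; the package name, property
pattern and --cver value should be checked against the current documentation:

    # on each host that will carry FCoE traffic (assumed package name)
    yum install -y vdsm-hook-fcoe

    # on the engine: expose an 'fcoe' custom property on logical networks
    # (-s overwrites UserDefinedNetworkCustomProperties, so merge with any existing value)
    engine-config -s UserDefinedNetworkCustomProperties='fcoe=^((enable|dcb|auto_vlan)=(yes|no),?)*$' --cver=4.2
    systemctl restart ovirt-engine

    # then, in the GUI, set fcoe=enable=yes,dcb=no on the logical network's custom
    # properties and attach that network to the FCoE-capable NICs via Setup Host Networks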
6 years, 10 months
Re: [ovirt-users] hosted-engine unknow stale-data
by Artem Tambovskiy
I explored the logs on both hosts.
broker.log shows no errors.
agent.log is not looking good.
On host1 (which is running the hosted engine):
MainThread::ERROR::2018-01-12
21:51:03,883::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Traceback (most recent call last):
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
line 191, in _run_agent
return action(he)
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
line 64, in action_proper
return he.start_monitoring()
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 411, in start_monitoring
self._initialize_sanlock()
File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 749, in _initialize_sanlock
"Failed to initialize sanlock, the number of errors has"
SanlockInitializationError: Failed to initialize sanlock, the number of
errors has exceeded the limit
MainThread::ERROR::2018-01-12
21:51:03,884::agent::206::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Trying to restart agent
MainThread::WARNING::2018-01-12
21:51:08,889::agent::209::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Restarting agent, attempt '1'
MainThread::INFO::2018-01-12
21:51:08,919::hosted_engine::242::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname)
Found certificate common name: ovirt1.telia.ru
MainThread::INFO::2018-01-12
21:51:08,921::hosted_engine::604::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Initializing VDSM
MainThread::INFO::2018-01-12
21:51:11,398::hosted_engine::630::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Connecting the storage
MainThread::INFO::2018-01-12
21:51:11,399::storage_server::220::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(validate_storage_server)
Validating storage server
MainThread::INFO::2018-01-12
21:51:13,725::storage_server::239::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Connecting storage server
MainThread::INFO::2018-01-12
21:51:18,390::storage_server::246::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Connecting storage server
MainThread::INFO::2018-01-12
21:51:18,423::storage_server::253::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Refreshing the storage domain
MainThread::INFO::2018-01-12
21:51:18,689::hosted_engine::663::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Preparing images
MainThread::INFO::2018-01-12
21:51:18,690::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
Preparing images
MainThread::INFO::2018-01-12
21:51:21,895::hosted_engine::666::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Refreshing vm.conf
MainThread::INFO::2018-01-12
21:51:21,895::config::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_vm_conf)
Reloading vm.conf from the shared storage domain
MainThread::INFO::2018-01-12
21:51:21,896::config::416::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2018-01-12
21:51:21,896::ovf_store::132::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2018-01-12
21:51:21,897::ovf_store::134::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
OVF_STORE volume path:
/var/run/vdsm/storage/4a7f8717-9bb0-4d80-8016-498fa4b88162/5cabd8e1-5f4b-469e-becc-227469e03f5c/8048cbd7-77e2-4805-9af4-d109fa36dfcf
MainThread::INFO::2018-01-12
21:51:21,915::config::435::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Found an OVF for HE VM, trying to convert
MainThread::INFO::2018-01-12
21:51:21,918::config::440::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Got vm.conf from OVF_STORE
MainThread::INFO::2018-01-12
21:51:21,919::hosted_engine::509::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
Initializing ha-broker connection
MainThread::INFO::2018-01-12
21:51:21,919::brokerlink::130::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor ping, options {'addr': '80.239.162.97'}
MainThread::INFO::2018-01-12
21:51:21,922::brokerlink::141::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id 140547104457680
MainThread::INFO::2018-01-12
21:51:21,922::brokerlink::130::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor mgmt-bridge, options {'use_ssl': 'true', 'bridge_name':
'ovirtmgmt', 'address': '0'}
MainThread::INFO::2018-01-12
21:51:21,936::brokerlink::141::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id 140547104458064
MainThread::INFO::2018-01-12
21:51:21,936::brokerlink::130::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor mem-free, options {'use_ssl': 'true', 'address': '0'}
MainThread::INFO::2018-01-12
21:51:21,938::brokerlink::141::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id 140547104458448
MainThread::INFO::2018-01-12
21:51:21,939::brokerlink::130::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor cpu-load-no-engine, options {'use_ssl': 'true', 'vm_uuid':
'b366e466-b0ea-4a09-866b-d0248d7523a6', 'address': '0'}
MainThread::INFO::2018-01-12
21:51:21,940::brokerlink::141::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id 140547104457552
MainThread::INFO::2018-01-12
21:51:21,941::brokerlink::130::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Starting monitor engine-health, options {'use_ssl': 'true', 'vm_uuid':
'b366e466-b0ea-4a09-866b-d0248d7523a6', 'address': '0'}
MainThread::INFO::2018-01-12
21:51:21,942::brokerlink::141::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id 140547104459792
MainThread::INFO::2018-01-12
21:51:26,951::brokerlink::179::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain)
Success, id 140546772847056
MainThread::INFO::2018-01-12
21:51:26,952::hosted_engine::601::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
Broker initialized, all submonitors started
MainThread::INFO::2018-01-12
21:51:27,049::hosted_engine::704::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file:
/var/run/vdsm/storage/4a7f8717-9bb0-4d80-8016-498fa4b88162/093faa75-5e33-4559-84fa-1f1f8d48153b/911c7637-b49d-463e-b186-23b404e50769)
MainThread::INFO::2018-01-12
21:53:48,067::hosted_engine::745::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
Failed to acquire the lock. Waiting '5's before the next attempt
MainThread::INFO::2018-01-12
21:56:14,088::hosted_engine::745::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
Failed to acquire the lock. Waiting '5's before the next attempt
MainThread::INFO::2018-01-12
21:58:40,111::hosted_engine::745::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
Failed to acquire the lock. Waiting '5's before the next attempt
MainThread::INFO::2018-01-12
22:01:06,133::hosted_engine::745::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
Failed to acquire the lock. Waiting '5's before the next attempt
agent.log from the second host:
MainThread::INFO::2018-01-12
22:01:37,241::hosted_engine::630::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Connecting the storage
MainThread::INFO::2018-01-12
22:01:37,242::storage_server::220::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(validate_storage_server)
Validating storage server
MainThread::INFO::2018-01-12
22:01:39,540::hosted_engine::639::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Storage domain reported as valid and reconnect is not forced.
MainThread::INFO::2018-01-12
22:01:41,939::hosted_engine::453::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2018-01-12
22:01:52,150::config::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_vm_conf)
Reloading vm.conf from the shared storage domain
MainThread::INFO::2018-01-12
22:01:52,150::config::416::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2018-01-12
22:01:52,151::ovf_store::132::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2018-01-12
22:01:52,153::ovf_store::134::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
OVF_STORE volume path:
/var/run/vdsm/storage/4a7f8717-9bb0-4d80-8016-498fa4b88162/5cabd8e1-5f4b-469e-becc-227469e03f5c/8048cbd7-77e2-4805-9af4-d109fa36dfcf
MainThread::INFO::2018-01-12
22:01:52,174::config::435::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Found an OVF for HE VM, trying to convert
MainThread::INFO::2018-01-12
22:01:52,179::config::440::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Got vm.conf from OVF_STORE
MainThread::INFO::2018-01-12
22:01:52,189::hosted_engine::604::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Initializing VDSM
MainThread::INFO::2018-01-12
22:01:54,586::hosted_engine::630::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Connecting the storage
MainThread::INFO::2018-01-12
22:01:54,587::storage_server::220::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(validate_storage_server)
Validating storage server
MainThread::INFO::2018-01-12
22:01:56,903::hosted_engine::639::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Storage domain reported as valid and reconnect is not forced.
MainThread::INFO::2018-01-12
22:01:59,299::states::682::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Score is 0 due to unexpected vm shutdown at Fri Jan 12 21:57:48 2018
MainThread::INFO::2018-01-12
22:01:59,299::hosted_engine::453::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2018-01-12
22:02:09,659::config::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_vm_conf)
Reloading vm.conf from the shared storage domain
MainThread::INFO::2018-01-12
22:02:09,659::config::416::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2018-01-12
22:02:09,660::ovf_store::132::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2018-01-12
22:02:09,663::ovf_store::134::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
OVF_STORE volume path:
/var/run/vdsm/storage/4a7f8717-9bb0-4d80-8016-498fa4b88162/5cabd8e1-5f4b-469e-becc-227469e03f5c/8048cbd7-77e2-4805-9af4-d109fa36dfcf
MainThread::INFO::2018-01-12
22:02:09,683::config::435::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Found an OVF for HE VM, trying to convert
MainThread::INFO::2018-01-12
22:02:09,688::config::440::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Got vm.conf from OVF_STORE
MainThread::INFO::2018-01-12
22:02:09,698::hosted_engine::604::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Initializing VDSM
MainThread::INFO::2018-01-12
22:02:12,112::hosted_engine::630::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Connecting the storage
MainThread::INFO::2018-01-12
22:02:12,113::storage_server::220::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(validate_storage_server)
Validating storage server
MainThread::INFO::2018-01-12
22:02:14,444::hosted_engine::639::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Storage domain reported as valid and reconnect is not forced.
MainThread::INFO::2018-01-12
22:02:16,859::states::682::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Score is 0 due to unexpected vm shutdown at Fri Jan 12 21:57:47 2018
MainThread::INFO::2018-01-12
22:02:16,859::hosted_engine::453::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUnexpectedlyDown (score: 0)
MainThread::INFO::2018-01-12
22:02:27,100::config::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_vm_conf)
Reloading vm.conf from the shared storage domain
MainThread::INFO::2018-01-12
22:02:27,100::config::416::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2018-01-12
22:02:27,101::ovf_store::132::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2018-01-12
22:02:27,103::ovf_store::134::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
OVF_STORE volume path:
/var/run/vdsm/storage/4a7f8717-9bb0-4d80-8016-498fa4b88162/5cabd8e1-5f4b-469e-becc-227469e03f5c/8048cbd7-77e2-4805-9af4-d109fa36dfcf
MainThread::INFO::2018-01-12
22:02:27,125::config::435::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Found an OVF for HE VM, trying to convert
MainThread::INFO::2018-01-12
22:02:27,129::config::440::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Got vm.conf from OVF_STORE
MainThread::INFO::2018-01-12
22:02:27,130::states::667::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine down, local host does not have best score
MainThread::INFO::2018-01-12
22:02:27,139::hosted_engine::604::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Initializing VDSM
MainThread::INFO::2018-01-12
22:02:29,584::hosted_engine::630::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Connecting the storage
MainThread::INFO::2018-01-12
22:02:29,586::storage_server::220::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(validate_storage_server)
Validating storage server
Any suggestions on how to resolve this?
regards,
Artem
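Not a definitive fix, but a recovery sequence commonly tried for this kind of
sanlock/stale-data state, sketched here with the standard tooling; make sure the
Gluster engine volume is healthy before attempting it:

    # put the HA cluster into global maintenance so the agents stop restarting the VM
    hosted-engine --set-maintenance --mode=global
    # on each hosted-engine host, restart the HA services
    systemctl restart ovirt-ha-broker ovirt-ha-agent
    # check that the lockspace gets acquired and the scores recover from 0
    sanlock client status
    hosted-engine --vm-status
    # once both hosts report fresh (non-stale) data, leave maintenance
    hosted-engine --set-maintenance --mode=none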
On Fri, Jan 12, 2018 at 7:08 PM, Artem Tambovskiy <
artem.tambovskiy(a)gmail.com> wrote:
> Trying to fix one thing I broke another :(
>
> I fixed mnt_options for the hosted-engine storage domain and installed the
> latest security patches on my hosts and hosted engine. All VMs are up and
> running, but hosted-engine --vm-status reports issues:
>
> [root@ovirt1 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date : False
> Hostname : ovirt2
> Host ID : 1
> Engine status : unknown stale-data
> Score : 0
> stopped : False
> Local maintenance : False
> crc32 : 193164b8
> local_conf_timestamp : 8350
> Host timestamp : 8350
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=8350 (Fri Jan 12 19:03:54 2018)
> host-id=1
> score=0
> vm_conf_refresh_time=8350 (Fri Jan 12 19:03:54 2018)
> conf_on_shared_storage=True
> maintenance=False
> state=EngineUnexpectedlyDown
> stopped=False
> timeout=Thu Jan 1 05:24:43 1970
>
>
> --== Host 2 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date : False
> Hostname : ovirt1.telia.ru
> Host ID : 2
> Engine status : unknown stale-data
> Score : 0
> stopped : True
> Local maintenance : False
> crc32 : c7037c03
> local_conf_timestamp : 7530
> Host timestamp : 7530
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=7530 (Fri Jan 12 16:10:12 2018)
> host-id=2
> score=0
> vm_conf_refresh_time=7530 (Fri Jan 12 16:10:12 2018)
> conf_on_shared_storage=True
> maintenance=False
> state=AgentStopped
> stopped=True
> [root@ovirt1 ~]#
>
>
>
> From the second host the situation looks a bit different:
>
>
> [root@ovirt2 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date : True
> Hostname : ovirt2
> Host ID : 1
> Engine status : {"reason": "vm not running on this
> host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 0
> stopped : False
> Local maintenance : False
> crc32 : 78eabdb6
> local_conf_timestamp : 8403
> Host timestamp : 8402
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=8402 (Fri Jan 12 19:04:47 2018)
> host-id=1
> score=0
> vm_conf_refresh_time=8403 (Fri Jan 12 19:04:47 2018)
> conf_on_shared_storage=True
> maintenance=False
> state=EngineUnexpectedlyDown
> stopped=False
> timeout=Thu Jan 1 05:24:43 1970
>
>
> --== Host 2 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date : False
> Hostname : ovirt1.telia.ru
> Host ID : 2
> Engine status : unknown stale-data
> Score : 0
> stopped : True
> Local maintenance : False
> crc32 : c7037c03
> local_conf_timestamp : 7530
> Host timestamp : 7530
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=7530 (Fri Jan 12 16:10:12 2018)
> host-id=2
> score=0
> vm_conf_refresh_time=7530 (Fri Jan 12 16:10:12 2018)
> conf_on_shared_storage=True
> maintenance=False
> state=AgentStopped
> stopped=True
>
>
> The WebGUI shows that the engine is running on host ovirt1.
> Gluster looks fine:
> [root@ovirt1 ~]# gluster volume status engine
> Status of volume: engine
> Gluster process TCP Port RDMA Port Online
> Pid
> ------------------------------------------------------------
> ------------------
> Brick ovirt1.telia.ru:/oVirt/engine 49169 0 Y
> 3244
> Brick ovirt2.telia.ru:/oVirt/engine 49179 0 Y
> 20372
> Brick ovirt3.telia.ru:/oVirt/engine 49206 0 Y
> 16609
> Self-heal Daemon on localhost N/A N/A Y
> 117868
> Self-heal Daemon on ovirt2.telia.ru N/A N/A Y
> 20521
> Self-heal Daemon on ovirt3 N/A N/A Y
> 25093
>
> Task Status of Volume engine
> ------------------------------------------------------------
> ------------------
> There are no active volume tasks
>
> How to resolve this issue?
>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
6 years, 10 months