upgrade hosted-engine os ( not hosts )
by Paul Groeneweg | Pazion
I am looking for a way to get my hosted-engine running on el7 so I can
upgrade to oVirt 4.0. Currently my hosts already run el7, but my
hosted-engine is still el6.
I read
https://www.ovirt.org/documentation/how-to/hosted-engine-host-OS-upgrade/ but
this is only about the hosts.
I read https://www.ovirt.org/documentation/how-to/hosted-engine/, but it
only mentions upgrade of the hosted-engine software, not the OS.
I understood I can do a fresh hosted-engine install, and then import my
storage domain to the new hosted engine, but:
- Do I need to restore my hosted engine database? ( like described here:
http://www.ovirt.org/develop/developer-guide/engine/migrate-to-hosted-eng...
)
- Can I directly install hosted-engine 4.0 and then import the storage
domain? Or should I first install the same hosted-engine version?
- Do I first need another master storage domain or can I directly import my
old master storage domain?
- When importing the storage domain what is the risk it fails ( I have
backups, but it would cost a day to restore all )
- How long would the import take? A few minutes or hours? (I want to keep
downtime as low as possible.)
Another option would be upgrading the OS in place (with redhat-upgrade-tool), or is
this a path to disaster?
I hope someone can tell me how I can smoothly upgrade my hosted-engine
to el7 and run oVirt 4.
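On the database question: one commonly described el6-to-el7 path is a full engine-backup on the old engine, then a restore into a freshly installed engine of the same version, upgrading to 4.0 only afterwards. A sketch, not a verified procedure for your exact versions (flags vary between releases; check `engine-backup --help`, and put the setup in global maintenance first):

```shell
# On the old (el6) engine: take a full backup of the engine DB and config.
engine-backup --mode=backup --scope=all \
    --file=/root/engine-backup.tar.gz --log=/root/engine-backup.log

# On the new el7 machine, after installing the SAME ovirt-engine version
# (upgrade the engine to 4.0 only after the restore succeeds):
engine-backup --mode=restore --scope=all \
    --file=/root/engine-backup.tar.gz --log=/root/engine-restore.log \
    --provision-db
```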
7 years, 10 months
Re: [ovirt-users] Hung task finalizing live migration
by Maton, Brett
Sorry just hit reply....
I'm seeing these errors in the logs which look related to the problem:
2016-09-07 06:46:35,123 ERROR
[org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
(DefaultQuartzScheduler6) [19c58c0d] Failed invoking callback end method
'onFailed' for command '07608003-ca05-4e2e-b917-85ce525c011b' with
exception 'null', the callback is marked for end method retries
2016-09-07 06:46:45,184 ERROR [org.ovirt.engine.core.bll.CommandsFactory]
(DefaultQuartzScheduler7) [19c58c0d] Error in invocating CTOR of command
'LiveMigrateDisk': null
2016-09-07 06:46:45,185 ERROR
[org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
(DefaultQuartzScheduler7) [19c58c0d] Failed invoking callback end method
'onFailed' for command '07608003-ca05-4e2e-b917-85ce525c011b' with
exception 'null', the callback is marked for end method retries
On 5 September 2016 at 06:46, Nir Soffer <nsoffer(a)redhat.com> wrote:
> Hi Maton,
>
> Please reply to the list, not to me directly.
>
> Ala, can you look at this? is this a known issue?
>
> Thanks,
> Nir
>
> On Mon, Sep 5, 2016 at 8:43 AM, Maton, Brett <matonb(a)ltresources.co.uk>
> wrote:
> > Log files as requested
> >
> > https://ufile.io/4fc35 vdsm log
> > https://ufile.io/e9836 engine 03-Sep
> > https://ufile.io/15f37 engine 04-Sep
> >
> > vdsm log stops on the 01-Sep...
> >
> > Couple of entries from the event log:
> >
> > Sep 3, 2016 7:31:07 PM Snapshot 'Auto-generated for Live Storage
> > Migration' deletion for VM 'lv01' has been completed.
> > Sep 3, 2016 6:46:46 PM Snapshot 'Auto-generated for Live Storage
> > Migration' deletion for VM 'lv01' was initiated by SYSTEM
> >
> > And the related tasks
> >
> > Removing Snapshot Auto-generated for Live Storage Migration of VM lv01
> > Sep 3, 2016 6:46:44 PM N/A 29f45ca9
> > Validating Sep 3, 2016 6:46:44 PM until Sep 3, 2016 6:46:44 PM
> > Executing Sep 3, 2016 6:46:44 PM until Sep 3, 2016 7:31:06 PM
> >
> > Finalizing Sep 3, 2016 7:31:06 PM N/A
> >
> >
> >
> > On 4 September 2016 at 14:27, Nir Soffer <nsoffer(a)redhat.com> wrote:
> >>
> >> On Sun, Sep 4, 2016 at 12:40 PM, Maton, Brett <matonb(a)ltresources.co.uk
> >
> >> wrote:
> >>>
> >>> How do I fix / kill a hung vdsm task?
> >>>
> >>> It seems to have completed the task but is stuck finalising.
> >>>
> >>> Removing Snapshot Auto-generated for Live Storage Migration
> >>> Validating
> >>> Executing
> >>> (hour glass) Finalizing
> >>>
> >>> Task has been 'stuck' finalising for over 13 hours
> >>
> >>
> >> Can you share engine and vdsm logs since the time the merge was started?
> >>
> >> Nir
> >
> >
>
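If the task never leaves Finalizing, the engine ships database utilities for clearing stuck async tasks and leftover entity locks. A sketch only: paths are from a default ovirt-engine RPM install, flags differ between versions (run each tool with -h first), and back up the engine database before touching anything:

```shell
# List zombie/stuck async tasks known to the engine database.
/usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -v -z

# Remove one stuck task by id (the command id from the engine log above
# is used here purely as an example).
/usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -v \
    -t 07608003-ca05-4e2e-b917-85ce525c011b

# Query, then release, leftover locks on disks/snapshots if they stay locked.
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk -q
```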
7 years, 11 months
can't import vm from KVM host
by Nelson Lameiras
Hello,
I'm trying to import virtual machines from a KVM host (CentOS 7.2) to an oVirt 4.0.2 cluster using the "import" feature in the GUI.
If the original VM uses RAW/QCOW2 files as storage, everything works fine.
But if the original VM uses a block special device as storage (like an LVM or SAN volume), it's simply not recognized.
The VM does appear in the import list of the KVM host, but its disk count is 0!
Is this a known technical obstacle, or am I doing something wrong?
below is the storage part of the xml describing the original VM :
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source dev='/dev/mapper/vg_01-lv_sys'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source dev='/dev/sdc'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>
We have hundreds of virtual machines in production with this type of configuration... How can we migrate them safely to oVirt?
thanks
Nelson
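Until block-device-backed disks are recognized by the import, one possible workaround (a sketch using standard qemu-img, with the source VM shut down; paths are illustrative) is to copy each volume into a file-backed image that the import can enumerate:

```shell
# With the guest powered off, dump the LV/SAN volume into a qcow2 file
# in a directory libvirt can see (device path taken from the XML above).
qemu-img convert -f raw -O qcow2 -p \
    /dev/mapper/vg_01-lv_sys /var/lib/libvirt/images/vm01-sys.qcow2

# Then point the domain XML at the new file before importing:
#   <disk type='file' device='disk'>
#     <driver name='qemu' type='qcow2' cache='none'/>
#     <source file='/var/lib/libvirt/images/vm01-sys.qcow2'/>
#     ...
#   </disk>
```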
7 years, 12 months
Storage VLAN Issue
by Kendal Montgomery
Hi all,

I just recently started testing out oVirt in our lab. I'm using CentOS 7 on my hosts and the hosted-engine model, with the oVirt 3.6 repository. I have NFS storage. I ran across what I think is a bug of some sort, and I'm curious if anyone else has tried this or knows what's going on.

I wanted to be able to expose the NFS server (not necessarily the share used for oVirt storage domains, but other shares on the NFS server) to VMs running on my host (currently my setup only involves a single host). I have two 10GBASE-T interfaces bonded together on the host, with two VLAN networks on it currently: one for the management network, one for storage. When the hosted-engine deployment was set up, I ended up with an ovirtmgmt interface that was bridged to my infrastructure VLAN interface (vlan 1080). So, I added another network in my oVirt cluster named VM-Storage with vlan 1092 (my storage network). Here is approximately how I expected this to end up:

bond0 - (bonded interface)
  - bond0.1092 (STORAGE - vlan interface)
     - VM-Storage (bridged interface)
  - bond0.1080 (INFR - vlan interface)
    - ovirtmgmt (bridged interface)

However, when I did network setup on the host, dragged the VM-Storage network over to the network interface, and hit OK, the UI just froze. For a few seconds I checked on the host via an ssh session and the VM-Storage bridge was set up; then the server just rebooted. After it rebooted, my VLAN interface was no longer there, and it seems like both the hosted engine VM and the host ended up being rebooted. Thinking about it, I may have caused at least a temporary outage of my NFS storage when the new bridged interface was set up, which (maybe) caused the HA agent to think the hosted engine VM went away, and that caused the reboots. Not entirely sure, but this was certainly unexpected. I have tried several times, with the same result each time.

I did check that any other VM network with a different VLAN ID provisions just fine on the host, so I assume there's something that happens when this storage network is provisioned that is catching oVirt off-guard somehow.

Anyone else have this issue before? Can I solve this by adding another host, then moving the hosted-engine to a different host while I add the storage network to each host?

Thanks.

Kendal Montgomery
Lab Manager
O: 614.407.5584 | M: 614.571.0172
kmontgomery(a)cbuscollaboratory.com
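For reference, the intended layout in that post would roughly correspond to initscripts configuration along these lines. This is a hand-written sketch for comparison only: VDSM generates and manages its own ifcfg files, and the bridge name here simply mirrors the oVirt network name from the post:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0.1092  (storage VLAN on the bond)
DEVICE=bond0.1092
VLAN=yes
ONBOOT=yes
BRIDGE=VM-Storage
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-VM-Storage  (the VM network bridge)
DEVICE=VM-Storage
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
```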
8 years
quick way to see total RAM and CPU count on VM listing
by Nelson Lameiras
Hello oVirt community,

When listing virtual machines, is there a quick way to see how much RAM/CPU each machine has?

In oVirt 4.0.4, I can see RAM/CPU usage in a very nice way, but I cannot easily access the total RAM and number of CPUs.
The only way to find this information in the GUI is to edit a VM and look at the System tab (and that's only when the "edit" context menu is available, which is not always).

Am I missing something?

This information is very useful and sometimes critical (when migrating a VM to hosts which are already low on free RAM, especially when ballooning is involved).

I would like to see this information always on screen, like an (un)checkable column... does that seem doable?
Maybe it would be possible to show it while hovering over the memory/CPU information with the mouse?

Please forgive me if this is not the right place to post this question/request.

Cordialement, regards,

Nelson LAMEIRAS
Lyra Network
Service Projets et Processus
Tel : +33 (0) 5 32 09 09 70
109 rue de l'innovation
31670 Labège - France
www.lyra-network.com
8 years
[ovirt 3.6] Logical network not working
by Luca 'remix_tj' Lorenzetto
Hello,
I'm new to oVirt, and a few months ago I did a setup of oVirt 3.6 for
testing. My setup is composed of two physical hosts with 6 NICs each
and another machine hosting the engine. All hosts are running RHEL 7.2.
Setup went well, no problems. I've been able to convert the KVM image
provided by Red Hat and have it running on oVirt.
Then I decided to configure a new network in addition to
ovirtmgmt. I went to Networks, created a logical network called
Development, set the flag "Enable VLAN Tagging", and entered the
VLAN tag.
Once the logical network was created, I went to each host, did Setup
Networks, and assigned the logical network to the interface where the
VLAN is connected. The interface is configured with bootproto=none, so
no IP has been assigned to the eno5.828 that appeared after assigning
the logical network.
I then started a VM, connected it to the vNIC "Development/Development",
and assigned an IP. But networking is not working: no ping, no traffic
visible with tcpdump.
I tested the single interfaces on the hosts, and on the interface where
the logical network is connected, with tcpdump (both eno5 and eno5.828)
I see tons of broadcast traffic.
With brctl show I see that both eno5.828 and vnic0 are assigned to the
bridge Development.
Any way to understand what's happening and why traffic is not passing?
Thank you
Luca
--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
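A few standard checks can narrow down where the tagged frames stop (interface, bridge, and VLAN names taken from the post above; this is a diagnostic sketch, not a fix):

```shell
# Confirm the VLAN subinterface exists and is enslaved to the bridge.
ip -d link show eno5.828
brctl show Development

# Watch tagged traffic on the physical NIC: if VLAN-828 frames appear
# here but nothing shows on eno5.828, the tag is stripped or blocked
# upstream (switch port not trunking that VLAN, for example).
tcpdump -nn -e -i eno5 vlan 828

# Then watch the untagged side: ARP requests from the VM should appear
# here and get answers if the far end is reachable.
tcpdump -nn -e -i eno5.828 arp
```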
8 years, 1 month
oVirt AD integration problems
by cmc
Hi,
I'm trying to use the directory services provided by the
ovirt-engine-extension-aaa-ldap, and I can get it to successfully login
when I run the tests in the setup script, but when I login via the GUI, it
gives me:
unexpected error was encountered during validation processing:
javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated'
and fails login. It looks a bit like it is expecting to already be joined
to the domain, so I tried doing that manually via realmd and sssd. It
involved installing a lot of packages, such as kerberos and samba, which I
am nervous about on an engine host. Anyway, once I was joined, it still
gives me the same 'peer not authenticated' message. Does it need to be
separately bound to the domain, i.e., do you need all the other stuff
installed and running for it to work, or is the
ovirt-engine-extension-aaa-ldap package all that is needed?
Anyway, I ran the ovirt-engine-extensions-tool --log-level=FINEST
--log-file=/tmp/aaa.log aaa search --extension-name=domain-authz command
suggested in an earlier post, and it only gave me one exception, which was:
2016-09-28 16:08:15 SEVERE Extension domain-authz could not be found
2016-09-28 16:08:15 FINE Exception:
org.ovirt.engine.core.extensions.mgr.ConfigurationException: Extension
domain-authz could not be found
Thanks for any help,
Cam
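For what it's worth, "peer not authenticated" usually means the engine cannot validate the LDAP server's TLS certificate; the aaa-ldap extension does its own binds and does not require realmd/sssd or a domain join. A sketch for inspecting and trusting the certificate (hostname, file names, and keystore password are placeholders):

```shell
# Show the certificate chain the AD server actually presents on LDAPS.
openssl s_client -connect ad.example.com:636 -showcerts </dev/null

# Save the issuing CA cert as ad-ca.pem, import it into a Java keystore...
keytool -importcert -noprompt -trustcacerts -alias ad-ca \
    -file ad-ca.pem -keystore /etc/ovirt-engine/aaa/ad-ca.jks \
    -storepass changeit

# ...and reference it from the aaa-ldap profile properties, e.g.:
#   pool.default.ssl.truststore.file = /etc/ovirt-engine/aaa/ad-ca.jks
#   pool.default.ssl.truststore.password = changeit
```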
8 years, 1 month
VM pauses/hangs after migration
by Davide Ferrari
Hello
trying to migrate a VM from one host to another, a big VM with 96 GB of RAM,
I found that when the migration completes, the VM goes into a paused state
and cannot be resumed. The libvirt/qemu log it gives is this:
2016-09-28T12:18:15.679176Z qemu-kvm: error while loading state section id
2(ram)
2016-09-28T12:18:15.680010Z qemu-kvm: load of migration failed:
Input/output error
2016-09-28 12:18:15.872+0000: shutting down
2016-09-28 12:22:21.467+0000: starting up libvirt version: 1.2.17, package:
13.el7_2.5 (CentOS BuildSystem <http://bugs.centos.org>,
2016-06-23-14:23:27, worker1.bsys.centos.org), qemu version: 2.3.0
(qemu-kvm-ev-2.3.0-31.el7.16.1)
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name front04.billydomain.com -S
-machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m
size=100663296k,slots=16,maxmem=4294967296k -realtime mlock=off -smp
32,sockets=16,cores=1,threads=2 -numa node,nodeid=0,cpus=0-31,mem=98304
-uuid 4511d1c0-6607-418f-ae75-34f605b2ad68 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-2.1511.el7.centos.2.10,serial=4C4C4544-004A-3310-8054-B2C04F474432,uuid=4511d1c0-6607-418f-ae75-34f605b2ad68
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/
domain-front04.billydomain.com/monitor.sock,server,nowait -mon
chardev=charmonitor,id=monitor,mode=control -rtc
base=2016-09-28T14:22:21,driftfix=slew -global
kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x7 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive
if=none,id=drive-ide0-1-0,readonly=on,format=raw -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/00000001-0001-0001-0001-0000000003e3/ba2bd397-9222-424d-aecc-eb652c0169d9/images/b5b49d5c-2378-4639-9469-362e37ae7473/24fd0d3c-309b-458d-9818-4321023afacf,if=none,id=drive-virtio-disk0,format=qcow2,serial=b5b49d5c-2378-4639-9469-362e37ae7473,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive
file=/rhev/data-center/00000001-0001-0001-0001-0000000003e3/ba2bd397-9222-424d-aecc-eb652c0169d9/images/f02ac1ce-52cd-4b81-8b29-f8006d0469e0/ff4e49c6-3084-4234-80a1-18a67615c527,if=none,id=drive-virtio-disk1,format=raw,serial=f02ac1ce-52cd-4b81-8b29-f8006d0469e0,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1
-netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:56,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4511d1c0-6607-418f-ae75-34f605b2ad68.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4511d1c0-6607-418f-ae75-34f605b2ad68.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-vnc 192.168.10.225:1,password -k es -spice
tls-port=5902,addr=192.168.10.225,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k es -device
qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vgamem_mb=16,bus=pci.0,addr=0x2
-incoming tcp:0.0.0.0:49156 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on
Domain id=5 is tainted: hook-script
red_dispatcher_loadvm_commands:
KVM: entry failed, hardware error 0x8
RAX=00000000ffffffed RBX=ffff8817ba00c000 RCX=0100000000000000
RDX=0000000000000000
RSI=0000000000000000 RDI=0000000000000046 RBP=ffff8817ba00fe98
RSP=ffff8817ba00fe98
R8 =0000000000000000 R9 =0000000000000000 R10=0000000000000000
R11=0000000000000000
R12=0000000000000006 R13=ffff8817ba00c000 R14=ffff8817ba00c000
R15=0000000000000000
RIP=ffffffff81058e96 RFL=00010286 [--S--P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0000 0000000000000000 ffffffff 00000000
CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA]
SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA]
DS =0000 0000000000000000 ffffffff 00000000
FS =0000 0000000000000000 ffffffff 00000000
GS =0000 ffff8817def80000 ffffffff 00000000
LDT=0000 0000000000000000 ffffffff 00000000
TR =0040 ffff8817def93b80 00002087 00008b00 DPL=0 TSS64-busy
GDT= ffff8817def89000 0000007f
IDT= ffffffffff529000 00000fff
CR0=80050033 CR2=00000000ffffffff CR3=00000017b725b000 CR4=001406e0
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000
DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000d01
Code=89 e5 fb 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb f4 <5d> c3 0f
1f 84 00 00 00 00 00 55 48 89 e5 f4 5d c3 66 0f 1f 84 00 00 00 00 00 55 49
89 ca
KVM: entry failed, hardware error 0x8
RAX=00000000ffffffed RBX=ffff8817ba008000 RCX=0100000000000000
RDX=0000000000000000
RSI=0000000000000000 RDI=0000000000000046 RBP=ffff8817ba00be98
RSP=ffff8817ba00be98
R8 =0000000000000000 R9 =0000000000000000 R10=0000000000000000
R11=0000000000000000
R12=0000000000000005 R13=ffff8817ba008000 R14=ffff8817ba008000
R15=0000000000000000
RIP=ffffffff81058e96 RFL=00010286 [--S--P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0000 0000000000000000 ffffffff 00000000
CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA]
SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS [-WA]
DS =0000 0000000000000000 ffffffff 00000000
FS =0000 0000000000000000 ffffffff 00000000
GS =0000 ffff8817def40000 ffffffff 00000000
LDT=0000 0000000000000000 ffffffff 00000000
TR =0040 ffff8817def53b80 00002087 00008b00 DPL=0 TSS64-busy
GDT= ffff8817def49000 0000007f
IDT= ffffffffff529000 00000fff
CR0=80050033 CR2=00000000ffffffff CR3=00000017b3c9a000 CR4=001406e0
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000
DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000d01
Code=89 e5 fb 5d c3 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 fb f4 <5d> c3 0f
1f 84 00 00 00 00 00 55 48 89 e5 f4 5d c3 66 0f 1f 84 00 00 00 00 00 55 49
89 ca
KVM: entry failed, hardware error 0x80000021
If you're running a guest on an Intel machine without unrestricted mode
support, the failure can be most likely due to the guest entering an invalid
state for Intel VT. For example, the guest maybe running in big real mode
which is not supported on less recent Intel processors.
EAX=ffffffed EBX=ba020000 ECX=00000000 EDX=00000000
ESI=00000000 EDI=00000046 EBP=ba023e98 ESP=ba023e98
EIP=81058e96 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0000 00000000 0000ffff 00009300 DPL=0 DS [-WA]
CS =f000 ffff0000 0000ffff 00009b00 DPL=0 CS16 [-RA]
SS =0000 00000000 0000ffff 00009300 DPL=0 DS [-WA]
DS =0000 00000000 0000ffff 00009300 DPL=0 DS [-WA]
FS =0000 00000000 0000ffff 00009300 DPL=0 DS [-WA]
GS =0000 00000000 0000ffff 00009300 DPL=0 DS [-WA]
LDT=0000 00000000 0000ffff 00008200 DPL=0 LDT
TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS64-busy
GDT= 0000000000000000 0000ffff
IDT= 0000000000000000 0000ffff
CR0=80050033 CR2=00007fd826ac20a0 CR3=000000003516c000 CR4=00140060
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000
DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000d01
Code=?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? <??> ?? ??
?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ??
?? ??
Searching for errors like this I found some bug reports about kernel issues,
but I don't think that's the case here: other VMs spawned from the same image
migrate without any issue. I have to say that the original host running
the VM has a RAM problem (ECC multi-bit fault in one DIMM). Maybe that's
the problem?
How can I properly read this error log?
Thanks
--
Davide Ferrari
Senior Systems Engineer
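Given the ECC multi-bit fault mentioned above, it may be worth ruling out memory on the source host before digging into the qemu state dump. A sketch using standard kernel facilities (the EDAC sysfs paths assume the EDAC driver for your memory controller is loaded):

```shell
# Per-DIMM corrected (ce) and uncorrected (ue) error counters exposed by
# the kernel; any nonzero ue_count is a strong suspect.
grep -H . /sys/devices/system/edac/mc/mc*/csrow*/[cu]e_count

# Machine-check and hardware-error events logged by the kernel since boot.
journalctl -k | grep -i -e mce -e "hardware error"
```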
8 years, 1 month
vdsm ssl errors
by C. Handel
I have an oVirt 4.0.1 installation on two nodes, hosted engine, SAN storage
backend.
For some reason vdsmd on the nodes is logging an error every few
seconds:
vdsm vds.dispatcher ERROR SSL error during reading data: unexpected eof
Running tcpdump shows it is a connection from the node to itself. I can't
figure out what is wrong. Can someone give me a hint?
Greetings
Christoph
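That message typically just means something opened VDSM's port and closed it without completing a TLS handshake, often a local health check or monitoring probe. A sketch for identifying the connecting process (standard tools; 54321 is VDSM's default port):

```shell
# Show live connections to the VDSM port with the owning process names.
ss -tnp state established '( dport = :54321 or sport = :54321 )'

# Or capture a few connection attempts on the loopback and note the
# source ports, then match them to processes with ss/lsof.
tcpdump -nn -i lo -c 20 port 54321
```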
8 years, 1 month