Few Questions on New Install
by Talk Jesus
Greetings,
Just installed oVirt:
Software version: 4.2.0.2-1.el7.centos
How do I:
- add an IPv4 subnet to assign to VMs
- download (or import) basic Linux templates such as CentOS 7 or Ubuntu 16, even if only from a minimal ISO
- import from SolusVM-based KVM nodes
Does oVirt support bulk IPv4 assignment to VMs? If I wish to assign, say, a
full /26 IPv4 subnet to VM #1, is this a one-click option?
Thank you. I read the docs, but everything is a bit confusing for me.
Network Topologies
by aeR7Re
Hello,
I'm looking for some advice on, or even just some examples of, how other oVirt users have configured networking inside their clusters.
Currently we're running a cluster with hosts spread across multiple racks in our DC, with layer 2 spanned between them for VM networks. While this is functional, it's 100% not ideal, as there are multiple single points of failure and at some point someone is going to accidentally loop it :)
What we're after is a method of providing a VM network across multiple racks where there are no single points of failure. We've got layer 3 switches in the racks capable of running an IGP/EGP.
Current ideas:
- Run a routing daemon on each VM and have it advertise a /32 to the distribution switch
- OVN for layer 2 between hosts, plus potentially VRRP or similar on the distribution switch
So, as per my original paragraph, any advice on the most appropriate network topology for an oVirt cluster? Or how have you set up your networks?
Thank you
Sent with ProtonMail (https://protonmail.com) Secure Email.
Re: [ovirt-users] Live migration of VM(0 downtime) while Hypervisor goes down in ovirt
by Luca 'remix_tj' Lorenzetto
What you're looking for is called fault tolerance in other hypervisors.
As far as I know, oVirt doesn't implement such a solution.
If your application can't live with the failure recovery provided by the high-availability
options, you should consider revising your application architecture if you want
to keep running on oVirt.
Luca
On 10 Feb 2018 at 8:31 AM, "Ranjith P" <ranjithspr13(a)yahoo.com> wrote:
Hi,
>>Who's shutting down the hypervisor? (Or perhaps it is shutdown
externally, due to overheating or otherwise?)
We need continuous availability of VMs in our production setup. If a
hypervisor goes down due to a hardware failure or workload, the VMs on that
hypervisor reboot and are started on the available hypervisors. This happens
as expected, but it disrupts the VMs. Can you suggest a solution for
this case? Can we achieve this using GlusterFS?
Thanks & Regards
Ranjith
Sent from Yahoo Mail on Android
On Sat, Feb 10, 2018 at 2:07 AM, Yaniv Kaul <ykaul(a)redhat.com> wrote:
On Fri, Feb 9, 2018 at 9:25 PM, ranjithspr13(a)yahoo.com <ranjithspr13(a)yahoo.com> wrote:
Hi,
Can anyone suggest how to set up VM live migration (without restarting the VM)
when a hypervisor goes down in oVirt?
I think there are two parts to achieving this:
1. Have a script that migrates VMs off a specific host. This should be easy
to write using the Python/Ruby/Java SDK, Ansible, or REST directly (see the sketch below).
2. Have this script run as a service when a host shuts down, in the right
order - well before libvirt and VDSM shut down - and fast enough not to be
terminated by systemd.
This is a bit more challenging.
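A minimal sketch of step 1 with the oVirt Python SDK (ovirtsdk4); the engine URL, credentials and host name are placeholders, not values from this thread:

import ovirtsdk4 as sdk

# Placeholder connection details; adjust for your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
try:
    vms_service = connection.system_service().vms_service()
    # Find all VMs currently running on the host we want to drain.
    for vm in vms_service.list(search='host=myhost'):
        # With no destination given, the engine scheduler picks a target host.
        vms_service.vm_service(vm.id).migrate()
finally:
    connection.close()

Step 2, wiring something like this into the shutdown sequence, is the part that still needs the systemd ordering described above.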
Who's shutting down the hypervisor? (Or perhaps it is shut down externally,
due to overheating or otherwise?)
Y.
Is it possible using GlusterFS? If so, how?
Thanks & Regards
Ranjith
Sent from Yahoo Mail on Android
VM backups - Bacchus
by Niyazi Elvan
Dear Friends,
It has been a while since I have had time to work on Bacchus. This weekend
I created an Ansible playbook to replace the installation procedure.
You simply download the installer.yml and settings.yml files from the git repo and
run the installer with "ansible-playbook installer.yml". Please check it out at
https://github.com/openbacchus/bacchus . I recommend running the
installer on a fresh VM that has no MySQL DB or previous installation.
Hope this helps more people, and please let me know your ideas.
PS: Regarding oVirt 4.2, I had a chance to look at it and tried the new
domain type "Backup Domain". This is a really cool feature and I am planning
to implement support for it in Bacchus. Hopefully CBT will show up soon and
we will have a better world :)
Kind Regards,
--
Niyazi Elvan
Maximum time node can be offline.
by Thomas Letherby
Hello all,
Is there a maximum length of time that an oVirt Node 4.2-based host can be
offline in a cluster before it would have issues when powered back on?
The reason I ask is that in my lab I currently have a three-node cluster that
works really well; however, a lot of the time I only actually need the
resources of one host, so to save power I'd like to keep the other two
offline until needed.
I can always script them to boot once a week or so if I need to.
Thanks,
Thomas
Live migration of VM(0 downtime) while Hypervisor goes down in ovirt
by ranjithspr13@yahoo.com
Hi,
Can anyone suggest how to set up VM live migration (without restarting the VM) when a hypervisor goes down in oVirt? Is it possible using GlusterFS? If so, how?
Thanks & Regards
Ranjith
Sent from Yahoo Mail on Android
Importing Libvirt Kvm Vms to oVirt Status: Released in oVirt 4.2 using ssh - Failed to communicate with the external provider
by maoz zadok
Hello there,
I'm following the
https://www.ovirt.org/develop/release-management/features/virt/KvmToOvirt/
guide in order to import VMs from libvirt to oVirt using SSH, with the URL
"qemu+ssh://host1.example.org/system", and I get the following error:
Failed to communicate with the external provider, see log for additional
details.
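For reference, here is a rough sketch of the same import driven through the oVirt Python SDK (ovirtsdk4); the engine URL, credentials, cluster, storage domain and VM name below are placeholders, not values from this post:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine connection; adjust for your environment.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
try:
    imports_service = connection.system_service().external_vm_imports_service()
    # Ask the engine to import one VM from the libvirt host over qemu+ssh.
    imports_service.add(
        types.ExternalVmImport(
            name='source-vm',  # hypothetical VM name on the libvirt host
            provider=types.ExternalVmProviderType.KVM,
            url='qemu+ssh://host1.example.org/system',
            username='root',
            password='secret',
            cluster=types.Cluster(name='Default'),
            storage_domain=types.StorageDomain(name='data'),
            sparse=True,
        ),
    )
finally:
    connection.close()

This goes through the same engine/VDSM path as the Admin Portal import, so presumably it would hit the same host key verification error until the proxy host trusts the source host's key.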
oVirt agent log:
- Failed to retrieve VMs information from external server qemu+ssh://XXX.XXX.XXX.XXX/system
- VDSM XXX command GetVmsNamesFromExternalProviderVDS failed: Cannot recv data: Host key verification failed.: Connection reset by peer
remote host sshd DEBUG log (first connection attempt shown; the same exchange repeats for the follow-up connections on ports 48150, 48152 and 48154):
Feb 7 16:38:29 XXX sshd[110005]: Connection from XXX.XXX.XXX.147 port 48148 on XXX.XXX.XXX.123 port 22
Feb 7 16:38:29 XXX sshd[110005]: debug1: Client protocol version 2.0; client software version OpenSSH_7.4
Feb 7 16:38:29 XXX sshd[110005]: debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000
Feb 7 16:38:29 XXX sshd[110005]: debug1: Local version string SSH-2.0-OpenSSH_7.4
Feb 7 16:38:29 XXX sshd[110005]: debug1: Enabling compatibility mode for protocol 2.0
Feb 7 16:38:29 XXX sshd[110005]: debug1: SELinux support disabled [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: permanently_set_uid: 74/74 [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: list_hostkey_types: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT sent [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT received [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: algorithm: curve25519-sha256 [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: host key algorithm: ecdsa-sha2-nistp256 [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: client->server cipher: chacha20-poly1305(a)openssh.com MAC: <implicit> compression: none [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: server->client cipher: chacha20-poly1305(a)openssh.com MAC: <implicit> compression: none [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 need=64 dh_need=64 [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 need=64 dh_need=64 [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: expecting SSH2_MSG_KEX_ECDH_INIT [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: rekey after 134217728 blocks [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_NEWKEYS sent [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: expecting SSH2_MSG_NEWKEYS [preauth]
Feb 7 16:38:29 XXX sshd[110005]: Connection closed by XXX.XXX.XXX.147 port 48148 [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup [preauth]
Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup
Feb 7 16:38:29 XXX sshd[110005]: Killing privsep child 110006
[The parent sshd then forks three more children (PIDs 110007, 110009, 110011) for connections from ports 48150, 48152 and 48154; each shows the identical key-exchange sequence and is closed by the client immediately after SSH2_MSG_NEWKEYS.]
Thank you!
Virt-viewer not working over VPN
by Vincent Royer
Hi, I asked this on the virt-viewer list, but it appears to be dead, so my
apologies if this isn't the right place for this question.
When I access my VMs locally using virt-viewer on Windows clients,
everything works fine, with SPICE or VNC.
When I access the same VMs remotely over a site-to-site VPN (set up between
the two firewalls), it fails with the error "unable to connect to libvirt
with uri: [none]". Similarly, I cannot connect in a browser-based VNC
session (cannot connect to host).
I can resolve the DNS of the server from my remote client (a domain override
in the firewall pointing to the local DNS server), and everything else I
do seems completely unaware of the VPN link (SSH, RDP, etc.). For example,
connecting to https://ovirt-enginr.mydomain.com works as expected. The
only function not working remotely is virt-viewer.
Any clues would be appreciated!
Re: [ovirt-users] Ovirt backups lead to unresponsive VM
by Alex K
Ok. I will reproduce and collect logs.
Thanx,
Alex
On Jan 29, 2018 20:21, "Mahdi Adnan" <mahdi.adnan(a)outlook.com> wrote:
I have Windows VMs, both client and server.
If you provide the engine.log file, we might have a look at it.
--
Respectfully
*Mahdi A. Mahdi*
------------------------------
*From:* Alex K <rightkicktech(a)gmail.com>
*Sent:* Monday, January 29, 2018 5:40 PM
*To:* Mahdi Adnan
*Cc:* users
*Subject:* Re: [ovirt-users] Ovirt backups lead to unresponsive VM
Hi,
I have observed this logged at the host when the issue occurs:
VDSM command GetStoragePoolInfoVDS failed: Connection reset by peer
or
VDSM host.domain command GetStatsVDS failed: Connection reset by peer
In the engine logs I have not been able to correlate anything.
Are you hosting Windows 2016 Server and Windows 10 VMs?
The weird thing is that I have the same setup on other clusters with no issues.
Thanx,
Alex
On Sun, Jan 28, 2018 at 9:21 PM, Mahdi Adnan <mahdi.adnan(a)outlook.com>
wrote:
Hi,
We have a cluster of 17 nodes, backed by GlusterFS storage, and we use this
same script for backup.
We have had no issues with it so far.
Have you checked the engine log file?
--
Respectfully
*Mahdi A. Mahdi*
------------------------------
*From:* users-bounces(a)ovirt.org <users-bounces(a)ovirt.org> on behalf of Alex
K <rightkicktech(a)gmail.com>
*Sent:* Wednesday, January 24, 2018 4:18 PM
*To:* users
*Subject:* [ovirt-users] Ovirt backups lead to unresponsive VM
Hi all,
I have a cluster with 3 nodes, using oVirt 4.1 in a self-hosted setup on
top of GlusterFS.
Guest agents are installed on the VMs. On some VMs (especially one Windows
Server 2016 64-bit VM with a 500 GB disk) I almost always observe that during
the backup the VM is rendered unresponsive (the dashboard shows a question
mark for the VM status and the VM does not respond to ping or anything else).
For scheduled backups I use:
https://github.com/wefixit-AT/oVirtBackup
The script does the following (a rough SDK sketch of the first two steps follows the list):
1. Snapshot the VM (this is done OK without any failure)
2. Clone the snapshot (this step renders the VM unresponsive)
3. Export the clone
4. Delete the clone
5. Delete the snapshot
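For context, here is a rough ovirtsdk4 sketch of steps 1-2, the snapshot-then-clone pattern the script automates; this is not the oVirtBackup code itself, and the VM, cluster and connection details are placeholders:

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine connection.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
try:
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]
    snaps_service = vms_service.vm_service(vm.id).snapshots_service()

    # Step 1: snapshot the VM without saving memory state.
    snap = snaps_service.add(
        types.Snapshot(description='backup', persist_memorystate=False),
    )
    # Wait for the snapshot to leave the LOCKED state before cloning.
    snap_service = snaps_service.snapshot_service(snap.id)
    while snap_service.get().snapshot_status == types.SnapshotStatus.LOCKED:
        time.sleep(10)

    # Step 2: clone the snapshot into a new VM; this is the long-running,
    # storage-heavy part of the flow.
    vms_service.add(
        types.Vm(
            name=vm.name + '-backup',
            cluster=types.Cluster(name='Default'),
            snapshots=[types.Snapshot(id=snap.id)],
        ),
    )
finally:
    connection.close()

Step 2 is where the full disk copy happens, which matches the step where you see the VM become unresponsive.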
Do you have any similar experience? Any suggestions to address this?
I have never seen such an issue with hosted Linux VMs.
The cluster has enough storage to accommodate the clone.
Thanx,
Alex
Cannot Remove Disk
by Donny Davis
oVirt 4.2 has been humming away quite nicely for me over the last few months,
but now I am hitting an issue with any API call that touches a specific disk.
This disk resides on a hyperconverged DC, and none of the other disks seem to
be affected. Here is the error thrown:
2018-02-08 10:13:20,005-05 ERROR [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (default task-22) [7b48d1ec-53a7-497a-af8e-938f30a321cf] Error during ValidateFailure.: org.ovirt.engine.core.bll.quota.InvalidQuotaParametersException: Quota 6156b8dd-50c9-4e8f-b1f3-4a6449b02c7b does not match storage pool 5a497956-0380-021e-0025-00000000035e
Any ideas what can be done to fix this?