Reg: Ovirt mouse not responding
by syedquadeer@ctel.in
Dear Team,
I am using oVirt 3.x on a three-node CentOS cluster, with Ubuntu 14.04
64-bit VMs installed on it. The end users of these VMs are facing some
issues daily, listed below:
1. The keyboard intermittently stops responding; checking the log file
inside the VM shows a psmouse sync issue.
2. If a VM is restarted, it shows a black screen; the VM then needs to be
powered off and started again.
Please provide a solution for the above issues. Thanks in advance...
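For issue 1, the psmouse messages can usually be confirmed from inside the guest before digging further. A minimal sketch, assuming an Ubuntu guest; the dmesg command is wrapped in a variable only so the filter can be exercised with canned input:

```shell
# Search the guest kernel log for psmouse sync-loss messages (issue 1).
# DMESG_CMD is an override hook for testing; normally plain `dmesg` runs.
check_psmouse() {
  "${DMESG_CMD:-dmesg}" | grep -i 'psmouse.*sync'
}
```

If the messages appear, common mitigations are reloading the psmouse module with a different proto= option, or giving the VM a USB tablet pointer device; both are suggestions, not verified fixes for this setup.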
--
Thanks & Regards,
Syed Abdul Qadeer.
7660022818.
7 years
Re: [ovirt-users] iSCSI domain on 4kn drives
by Martijn Grendelman
On 7-8-2016 at 8:19, Yaniv Kaul wrote:
>
> On Fri, Aug 5, 2016 at 4:42 PM, Martijn Grendelman
> <martijn.grendelman(a)isaac.nl <mailto:martijn.grendelman@isaac.nl>> wrote:
>
> On 4-8-2016 at 18:36, Yaniv Kaul wrote:
>> On Thu, Aug 4, 2016 at 11:49 AM, Martijn Grendelman
>> <martijn.grendelman(a)isaac.nl
>> <mailto:martijn.grendelman@isaac.nl>> wrote:
>>
>> Hi,
>>
>> Does oVirt support iSCSI storage domains on target LUNs using
>> a block
>> size of 4k?
>>
>>
>> No, we do not - not if it exposes 4K blocks.
>> Y.
>
> Is this on the roadmap?
>
>
> Not in the short term roadmap.
> Of course, patches are welcome. It's mainly in VDSM.
> I wonder if it'll work in NFS.
> Y.
I don't think I ever replied to this, but I can confirm that in RHEV 3.6
it works with NFS.
Best regards,
Martijn.
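For anyone wanting to check what block size a LUN actually exposes to the host, sysfs reports it. A minimal sketch; the sysfs root is parameterized only so the helper can be tested against a fake tree, and the device name is a placeholder:

```shell
# Print the logical and physical block sizes a block device exposes.
# SYS_BLOCK defaults to the real sysfs path; override only for testing.
check_block_size() {
  dev=$1                         # e.g. "sda" or the dm-* device of the LUN
  base=${SYS_BLOCK:-/sys/block}
  cat "$base/$dev/queue/logical_block_size" \
      "$base/$dev/queue/physical_block_size"
}
```

A 512e drive reports 512/4096 and is the supported case per the thread above; a 4Kn drive reports 4096 for both, which is the case oVirt does not support for iSCSI domains.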
7 years, 1 month
Engine crash, storage won't activate, hosts won't shutdown, template locked, gpu passthrough failed
by M R
Hello!
I have been using oVirt for the last four weeks, testing and trying to get
things working.
I have collected here the problems I have found. This might be a bit long,
but help with any of these, or maybe all of them, from several people
would be wonderful.
My version is oVirt Node 4.1.5 and 4.1.6, downloaded from the website as
the latest stable release at the time. I have also tested with CentOS
Minimal plus the oVirt repo; in that case, issue 3 is solved, but the
other problems persist.
1. Power off host
On the first day after installing oVirt Node, hosts were able to reboot and
shut down cleanly, with no problems at all. After a few days of using
oVirt, I noticed that hosts were unable to shut down. I have tested this in
several ways and come to the following conclusion: if the engine has not
been started since boot, all hosts can shut down cleanly. But once the
engine has been started even once, none of the hosts can shut down anymore.
The only way to power off is to unplug the machine or hold the power button
for a hard reset. I have failed to find a way to shut down a host while the
engine is running. This affects all hosts in the cluster.
2. Glusterfs failed
Every time I boot the hosts, GlusterFS fails. For some reason it ends up in
an inactive state even though I ran systemctl enable glusterd; before that
command it was simply inactive, and after it the status says "failed
(inactive)". There is still a way to get GlusterFS working: if I run
systemctl start glusterd manually, everything starts working. Why do I have
to issue manual commands to start GlusterFS? I have used GlusterFS on
CentOS before and never had this problem. Is the Node installer that
different from core CentOS?
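Until the root cause is known, the manual steps above can at least be scripted. A minimal sketch, assuming a systemd host; the systemctl name is parameterized only so the sequence can be exercised with a stub:

```shell
# Enable glusterd at boot, start it now, and report its current state.
ensure_glusterd() {
  "${SYSTEMCTL:-systemctl}" enable glusterd
  "${SYSTEMCTL:-systemctl}" start glusterd
  "${SYSTEMCTL:-systemctl}" is-active glusterd
}
```

If it still shows failed after the next reboot, `journalctl -u glusterd -b` should show the boot-time error (often a mount or network dependency that was not yet up).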
3. Epel
As I said, I have used CentOS before, and I would like to be able to
install some packages from a repo. But even after I install epel-release,
it won't find packages such as nano or htop. I have read how to add EPEL to
oVirt Node here: https://www.ovirt.org/release/4.1.1/#epel
I have even edited the repo list manually, but it still fails to find
normal EPEL packages. I have added the exclude=collectd* line as advised in
the link above, but it makes no difference. That said, I am able to install
packages manually if I download them on another CentOS machine and transfer
them to the oVirt node with scp. Still, this again needs a lot of manual
input and is just a workaround for the bug.
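For reference, the repo stanza the linked page describes would look roughly like this. A sketch for EL7 only: the mirrorlist URL is the standard Fedora one, the gpgkey location varies by install, and the collectd exclusion is the one the oVirt page recommends:

```ini
# /etc/yum.repos.d/epel.repo (sketch)
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
enabled=1
gpgcheck=1
gpgkey=https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7
exclude=collectd*
```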
4. Engine startup
When I try to start the engine while GlusterFS is up, it says the VM
doesn't exist and is starting up, yet it won't actually come up
automatically. I have to run hosted-engine --vm-start several times,
waiting about 5 minutes between attempts. This usually takes about 30
minutes, and then, completely at random, after one of the attempts the
engine shoots up and is running within a minute. This has happened on every
boot, and the attempt on which the engine finally starts varies: at best it
has been the 3rd, at worst the 7th. That works out to anywhere from 15 to
35 minutes to get the engine up. Nevertheless, it does eventually come up
every time. If there is a way to get it up on the first try, or even
better, automatically, that would be great.
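Until the root cause is found, the retry dance described above can be automated. A minimal sketch: the command name and wait interval are parameterized only so the loop can be tested with a stub, and the 300-second default mirrors the 5-minute interval in the report:

```shell
# Retry `hosted-engine --vm-start` until `--vm-status` reports the engine up.
# HE_CMD / HE_WAIT are override hooks for testing.
start_engine() {
  attempts=${1:-10}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "${HE_CMD:-hosted-engine}" --vm-status 2>/dev/null | grep -qi '"vm": "up"'; then
      echo "engine up"
      return 0
    fi
    "${HE_CMD:-hosted-engine}" --vm-start >/dev/null 2>&1 || true
    i=$((i + 1))
    sleep "${HE_WAIT:-300}"
  done
  return 1
}
```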
5. Activate storage
Once the engine is up, there is a problem with storage. When I go to the
storage tab, all sources show red. Even if I wait 15-20 minutes, the
storage does not turn green by itself. I have to press the Activate button
on the main data storage; it then comes up in 2-3 minutes. Sometimes it
fails once, but it always gets the main data storage up on the second try,
and then, magically, all the other storage domains instantly go green at
the same time. The main storage is GlusterFS, and I have 3 NFS storage
domains as well. This is only a problem at startup; once the domains are
green, they stay green. Still, it is annoying that it cannot do this by
itself.
6. Template locked
I tried to create a template from an existing VM, which resulted in the
original VM going into a locked state and the template being locked as
well. I have read that other people have had a similar problem and were
advised to restart the engine to see if that solves it. For me it has now
been a week, with several restarts of the engine and the hosts, but one VM
and the template are still locked. This is not a big problem, but it is
still a problem: everything is greyed out, and I cannot delete the stuck VM
or template.
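Beyond engine restarts, oVirt ships a database utility on the engine host that is often suggested for exactly this situation. A minimal sketch; the path and flags are from memory of oVirt 4.x, so verify them with the script's own help output, and back up the engine database first:

```shell
# List locked templates via oVirt's unlock_entity.sh dbutils script.
# UNLOCK_TOOL is an override hook so the wrapper can be tested with a stub.
UNLOCK_TOOL=${UNLOCK_TOOL:-/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh}

list_locked_templates() {
  "$UNLOCK_TOOL" -t template -q    # -q queries; pass an entity ID to unlock
}
```

After confirming the UUID, the same tool is run with the entity ID instead of -q to release the lock; treat this as a suggestion, not a verified fix.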
7. Unable to use GPU
I have been trying to do GPU passthrough to a VM. At first there was a
problem with the qemu command line, but once I figured out a way to pass
the options, it may be working(?). The log looks fine, but it still doesn't
give the functionality I'm looking for. As I mentioned in another email, I
found this: https://www.mail-archive.com/users@ovirt.org/msg40422.html
It produces the right syntax in the log, but still won't fix error 43 with
the NVIDIA drivers. If anybody has this working, or has ideas on how to do
it properly, I would really like to know. I have also tested with AMD
graphics cards such as Vega, but as soon as the drivers are installed I get
a black screen, even if I restart the VM, the hosts, or both; I only see a
black screen and cannot use the VM at all. I might be able to live with the
other six things listed above, but this one is a real problem for me: my
VMs will eventually need graphical performance, so I will have to get this
working or find an alternative to oVirt. I have found several things that I
really like in oVirt and would prefer to use it.
Best regards
Mikko
7 years, 2 months
VM remote noVNC console
by Alex K
Hi all,
I am trying to get the console of a VM through an SSH SOCKS proxy.
This is a scenario I will face frequently, as the oVirt cluster will be
reachable only through a remote SSH tunnel.
I have tried several console options without success.
With SPICE or VNC I get an error from virt-viewer saying "Unable to connect
to libvirt with URI [none]".
With noVNC I get a separate browser tab that is stuck showing "loading".
Has anyone had success with this kind of remote console access?
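For what it's worth, a sketch of how such a tunnel could be built: a SOCKS proxy for the Administration Portal plus a direct local forward for the engine's websocket proxy. Port 6100 is the usual ovirt-websocket-proxy default but should be verified on the engine; the hostname is a placeholder, and the command is wrapped in a function only so it can be tested with a stub:

```shell
# Open a SOCKS proxy (point the browser at it for the portal) and forward
# the noVNC websocket-proxy port to localhost.
open_console_tunnel() {
  engine_host=$1
  "${SSH_CMD:-ssh}" -D "${SOCKS_PORT:-1080}" \
    -L "${WS_PORT:-6100}:localhost:${WS_PORT:-6100}" "$engine_host"
}
```

A common cause of the stuck "loading" tab is an untrusted websocket-proxy certificate, so importing the engine CA certificate into the browser is also worth trying.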
Thanx,
Alex
7 years, 2 months
LVM structure
by Nicolas Ecarnot
Hello,
I'm still coping with my qemu image corruption, and I'm following some
Red Hat guidelines that explain the way to go:
- Start the VM
- Identify the host
- On this host, run the ps command to identify the disk image location :
# ps ax|grep qemu-kvm|grep vm_name
- Look for "-drive
file=/rhev/data-center/00000001-0001-0001-0001-00000000033e/b72773dc-c99c-472a-9548-503c122baa0b/images/91bfb2b4-5194-4ab3-90c8-3c172959f712/e7174214-3c2b-4353-98fd-2e504de72c75"
(YMMV)
- Resolve this symbolic link
# ls -la
/rhev/data-center/00000001-0001-0001-0001-00000000033e/b72773dc-c99c-472a-9548-503c122baa0b/images/91bfb2b4-5194-4ab3-90c8-3c172959f712/e7174214-3c2b-4353-98fd-2e504de72c75
lrwxrwxrwx 1 vdsm kvm 78 3 oct. 2016
/rhev/data-center/00000001-0001-0001-0001-00000000033e/b72773dc-c99c-472a-9548-503c122baa0b/images/91bfb2b4-5194-4ab3-90c8-3c172959f712/e7174214-3c2b-4353-98fd-2e504de72c75
->
/dev/b72773dc-c99c-472a-9548-503c122baa0b/e7174214-3c2b-4353-98fd-2e504de72c75
- Shutdown the VM
- On the SPM, activate the logical volume :
# lvchange -ay
/dev/b72773dc-c99c-472a-9548-503c122baa0b/e7174214-3c2b-4353-98fd-2e504de72c75
- Verify the state of the qemu image :
# qemu-img check
/dev/b72773dc-c99c-472a-9548-503c122baa0b/e7174214-3c2b-4353-98fd-2e504de72c75
- If needed, attempt a repair :
# qemu-img check -r all /dev/...
- In any case, deactivate the LV :
# lvchange -an /dev/...
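The check steps above can be consolidated into one helper. A minimal sketch: the tool names are parameterized only so the flow can be exercised without real LVM, and the repair flag is left as a comment because `-r all` modifies the image:

```shell
# Activate the LV, run qemu-img check, and always deactivate again,
# mirroring the Red Hat procedure described above (run on the SPM).
check_image_lv() {
  lv_path=$1   # e.g. /dev/<storage_domain_uuid>/<volume_uuid>
  "${LVCHANGE:-lvchange}" -ay "$lv_path" || return 1
  "${QEMU_IMG:-qemu-img}" check "$lv_path"   # add '-r all' only to repair
  rc=$?
  "${LVCHANGE:-lvchange}" -an "$lv_path"     # deactivate in any case
  return $rc
}
```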
I have followed these steps dozens of times, and finding the LV and
activating it was obvious and successful.
Since yesterday, I have been finding some VMs on which these steps do not
work: I can identify the symbolic link, but neither the SPM nor the host
can find the LV device, and thus cannot activate it:
# lvchange -ay
/dev/de2fdaa0-6e09-4dd2-beeb-1812318eb893/ce13d349-151e-4631-b600-c42b82106a8d
Failed to find logical volume
"de2fdaa0-6e09-4dd2-beeb-1812318eb893/ce13d349-151e-4631-b600-c42b82106a8d"
Either I need two more coffees, or I am missing a step or a check.
Looking at the SPM's /dev/disk/* structure, it looks very sound (I can
see the dm-name-* series of links for my three storage domains).
As the VM can be started and stopped without issue, does the host activate
something more before the VM is launched?
--
Nicolas ECARNOT
7 years, 2 months
Re: [ovirt-users] How to import a qcow2 disk into ovirt
by Martín Follonier
Hi,
I've followed all the recommendations in this thread, and I'm still getting
the "Paused by System" message just after the transfer starts.
Honestly, I don't know where else to look, because I can't find any log
entry or packet capture that gives me a hint about what is happening.
I'll appreciate any help! Thank you in advance!
Regards
Martin
On Thu, Sep 1, 2016 at 5:01 PM, Amit Aviram <aavi...(a)redhat.com> wrote:
> You can do both.
> Through the database, the table is "vdc_options"; change "option_value"
> where "option_name" = 'ImageProxyAddress'.
>
> On Thu, Sep 1, 2016 at 4:56 PM, Gianluca Cecchi <gianluca.cec...(a)gmail.com
> > wrote:
>
>> On Thu, Sep 1, 2016 at 3:53 PM, Amit Aviram <aavi...(a)redhat.com> wrote:
>>
>>> You can just replace this value in the DB and change it to the right
>>> FQDN, it is a config value named "ImageProxyAddress". replace "localhost"
>>> with the right address (notice that the port is there too).
>>>
>>> If this will keep happen after users will have the latest version, we
>>> will have to open a bug and fix whatever causes the URL to be "localhost".
>>>
>>>
>> Do you mean through "engine-config" or directly into database?
>> In this second case which is the table involved?
>>
>> Gianluca
>>
>
>
[root@ractorshe bin]# systemctl stop ovirt-imageio-proxy
engine=# select * from vdc_options where option_name='ImageProxyAddress';
option_id | option_name | option_value | version
-----------+-------------------+-----------------+---------
950 | ImageProxyAddress | localhost:54323 | general
(1 row)
engine=# update vdc_options set option_value='ractorshe.mydomain:54323'
where option_name='ImageProxyAddress';
UPDATE 1
engine=# select * from vdc_options where option_name='ImageProxyAddress';
option_id | option_name | option_value |
version
-----------+-------------------+--------------------------------------+---------
950 | ImageProxyAddress | ractorshe.mydomain:54323 | general
(1 row)
engine=#
engine=# select * from vdc_options where option_name='ImageProxyAddress';
option_id | option_name | option_value |
version
-----------+-------------------+--------------------------------------+---------
950 | ImageProxyAddress | ractorshe.mydomain:54323 | general
(1 row)
systemctl stop ovirt-engine
(otherwise it remained localhost)
systemctl start ovirt-engine
systemctl start ovirt-imageio-proxy
Now transfer is ok.
I tried a qcow2 disk configured as 40 GB but containing about 1.6 GB of
data. I'm going to connect it to a VM and check that everything is fine
from a contents point of view as well.
Gianluca
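For the record, the same option can apparently also be set with engine-config instead of direct SQL, which avoids hand-editing vdc_options. A minimal sketch; verify the option name with `engine-config -l` on your version, and restart the services as shown above afterwards:

```shell
# Set and read back ImageProxyAddress via engine-config (sketch).
# ENGINE_CONFIG is an override hook so the wrapper can be tested with a stub.
set_image_proxy() {
  addr=$1   # e.g. engine.example.com:54323
  "${ENGINE_CONFIG:-engine-config}" -s "ImageProxyAddress=$addr"
  "${ENGINE_CONFIG:-engine-config}" -g ImageProxyAddress
}
```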
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
7 years, 2 months
Hosted engine setup question
by Demeter Tibor
Hi,
I just installed a hosted-engine-based four-node cluster on Gluster storage.
It seems to be working fine, but I have some questions about it.
- I would like to create my own cluster and datacenter. Is it possible to remove a host and re-add it to another cluster while it is running the hosted engine?
- Is it possible to remove the Default datacenter without any problems?
- I have a production oVirt cluster based on the 3.5 series, using shared NFS storage. Is it possible to migrate VMs from 3.5 to 4.1 by detaching the shared storage from the old cluster and attaching it to the new one?
- If yes, what will happen to the VM properties (for example MAC addresses, limits, etc.)? Will they be migrated or not?
Thanks in advance,
Regards,
Tibor
7 years, 2 months
Help with Power Management network
by ~Stack~
Greetings,
I hit up the IRC earlier, but only crickets. Guess no one wants to stick
around late on a Friday night. :-D
I'm an oVirt newbie. I've been going through the docs setting up 4.1
on Scientific Linux 7.4. For the most part everything is going well once
I learn how to do it. I'm, however, stuck on power management.
I have multiple networks:
192.168.1.x is my BMC/ilo network. The security team wants as few entry
points into this as possible and wants as much segregation as possible.
192.168.2.x is my "management" access network. For my other machines on
this network this means admin-SSH/rsyslog/SaltStack configuration
management/ect.
192.168.3.x is my high speed network where my NFS storage sits and
applications that need the bandwidth do their thing.
10.10.86.x is my "public" access
All networks are configured in the Host network settings. I'm mostly
confident I got it right... at least each network/IP matches the right
interface. ;-)
Right now I only have the engine server and one hypervisor. On either
host I can SSH in and run "fence_ipmilan -a 192.168.1.x -l USER -p PASS
-o status -v -P", and it works; all is good.
However, when I try to add it in the ovirt interface I get an error. :-/
Edit Host -> Power Management:
Address: 192.168.1.14
User Name: root
Password: SorryCantTellYou
Type: ipmilan
Options: <blank>
Test
Test failed: Failed to run fence status-check on host '192.168.2.14'. No
other host was available to serve as proxy for the operation.
Yes, same host because I only have one right now. :-)
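That is the key detail: the failure is about the fencing proxy rather than the agent. oVirt executes the fence status check from another host, so with a single host there is nothing to proxy through, even though fence_ipmilan itself works. The manual check can still be scripted while waiting for a second host; a minimal sketch (address and credentials are placeholders; the agent name is parameterized only for testing):

```shell
# Run the same IPMI status query oVirt's fencing would perform, directly.
fence_status() {
  bmc_addr=$1 user=$2 pass=$3
  "${FENCE_CMD:-fence_ipmilan}" -a "$bmc_addr" -l "$user" -p "$pass" -o status
}
```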
Any help or guidance would be much appreciated. In the meantime I'm
going back to the docs to poke at a few other things I need to figure
out. :-)
Thanks!
~Stack~
7 years, 2 months
libvirt: XML-RPC error : authentication failed: Failed to start SASL
by Ozan Uzun
Hello,
Today I updated my oVirt Engine v3.5 and all my hosts in one datacenter
(the CentOS 7.4 ones), and suddenly my vdsm and vdsm-network services
stopped working.
(Btw: my other DC is CentOS 6 based, managed from the same oVirt Engine,
and everything works just fine there.)
vdsm fails as a dependency of the vdsm-network service, with lots of RPC
errors. I tried vdsm-tool configure --force, and deleted and reinstalled
everything (vdsm, libvirt), but could not make it work.
My logs are filled with the following:
Sep 18 23:06:01 node6 python[5340]: GSSAPI Error: Unspecified GSS failure.
Minor code may provide more information (No Kerberos credentials available
(default cache: KEYRING:persistent:0))
Sep 18 23:06:01 node6 vdsm-tool[5340]: libvirt: XML-RPC error :
authentication failed: Failed to start SASL negotiation: -1 (SASL(-1):
generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may
provide more information (No Kerberos credent
Sep 18 23:06:01 node6 libvirtd[4312]: 2017-09-18 20:06:01.954+0000: 4312:
error : virNetSocketReadWire:1808 : End of file while reading data:
Input/output error
-------
journalctl -xe output for vdsm-network
Sep 18 23:06:02 node6 vdsm-tool[5340]: libvirt: XML-RPC error :
authentication failed: Failed to start SASL negotiation: -1 (SASL(-1):
generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may
provide more information (No Kerberos credent
Sep 18 23:06:02 node6 vdsm-tool[5340]: Traceback (most recent call last):
Sep 18 23:06:02 node6 vdsm-tool[5340]: File "/usr/bin/vdsm-tool", line 219,
in main
Sep 18 23:06:02 node6 libvirtd[4312]: 2017-09-18 20:06:02.558+0000: 4312:
error : virNetSocketReadWire:1808 : End of file while reading data:
Input/output error
Sep 18 23:06:02 node6 vdsm-tool[5340]: return
tool_command[cmd]["command"](*args)
Sep 18 23:06:02 node6 vdsm-tool[5340]: File
"/usr/lib/python2.7/site-packages/vdsm/tool/upgrade_300_networks.py", line
83, in upgrade_networks
Sep 18 23:06:02 node6 vdsm-tool[5340]: networks = netinfo.networks()
Sep 18 23:06:02 node6 vdsm-tool[5340]: File
"/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 112, in networks
Sep 18 23:06:02 node6 vdsm-tool[5340]: conn = libvirtconnection.get()
Sep 18 23:06:02 node6 vdsm-tool[5340]: File
"/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 159, in
get
Sep 18 23:06:02 node6 vdsm-tool[5340]: conn = _open_qemu_connection()
Sep 18 23:06:02 node6 vdsm-tool[5340]: File
"/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 95, in
_open_qemu_connection
Sep 18 23:06:02 node6 vdsm-tool[5340]: return utils.retry(libvirtOpen,
timeout=10, sleep=0.2)
Sep 18 23:06:02 node6 vdsm-tool[5340]: File
"/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1108, in retry
Sep 18 23:06:02 node6 vdsm-tool[5340]: return func()
Sep 18 23:06:02 node6 vdsm-tool[5340]: File
"/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in openAuth
Sep 18 23:06:02 node6 vdsm-tool[5340]: if ret is None:raise
libvirtError('virConnectOpenAuth() failed')
Sep 18 23:06:02 node6 vdsm-tool[5340]: libvirtError: authentication failed:
Failed to start SASL negotiation: -1 (SASL(-1): generic failure: GSSAPI
Error: Unspecified GSS failure. Minor code may provide more information
(No Kerberos credentials availa
Sep 18 23:06:02 node6 systemd[1]: vdsm-network.service: control process
exited, code=exited status=1
Sep 18 23:06:02 node6 systemd[1]: Failed to start Virtual Desktop Server
Manager network restoration.
-----
libvirt is running but throws some errors.
[root@node6 ~]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled;
vendor preset: enabled)
Drop-In: /etc/systemd/system/libvirtd.service.d
└─unlimited-core.conf
Active: active (running) since Mon 2017-09-18 23:15:47 +03; 19min ago
Docs: man:libvirtd(8)
http://libvirt.org
Main PID: 6125 (libvirtd)
CGroup: /system.slice/libvirtd.service
└─6125 /usr/sbin/libvirtd --listen
Sep 18 23:15:56 node6 libvirtd[6125]: 2017-09-18 20:15:56.195+0000: 6125:
error : virNetSocketReadWire:1808 : End of file while reading data:
Input/output error
Sep 18 23:15:56 node6 libvirtd[6125]: 2017-09-18 20:15:56.396+0000: 6125:
error : virNetSocketReadWire:1808 : End of file while reading data:
Input/output error
Sep 18 23:15:56 node6 libvirtd[6125]: 2017-09-18 20:15:56.597+0000: 6125:
error : virNetSocketReadWire:1808 : End of file while reading data:
Input/output error
----------------
[root@node6 ~]# virsh
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # list
error: failed to connect to the hypervisor
error: authentication failed: Failed to start SASL negotiation: -1
(SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor
code may provide more information (No Kerberos credentials available
(default cache: KEYRING:persistent:0)))
=================
I do not want to lose all my virtual servers; is there any way to recover
them? Currently everything is down. I am OK with installing a new oVirt
Engine if I can somehow restore my virtual servers. I can also split the
CentOS 6 and CentOS 7 hosts across separate oVirt Engines.
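The GSSAPI/Kerberos wording in these errors suggests libvirt's SASL mechanism list no longer matches the digest-md5 credentials that vdsm configures; the CentOS 7.4 libvirt update is known to default to gssapi. A diagnostic sketch under those assumptions: the path is the usual EL7 one and may differ, and the check is wrapped as a function only so it can be tested against a sample file:

```shell
# Check whether libvirt's SASL config still allows digest-md5, which is
# what vdsm-tool configure sets up (sketch; path may vary).
check_sasl_mech() {
  conf=${1:-/etc/sasl2/libvirt.conf}
  if grep -q 'digest-md5' "$conf"; then
    echo "mech_list includes digest-md5: vdsm auth should work"
  else
    echo "mech_list lacks digest-md5: vdsm SASL auth will fail"
  fi
}
```

If digest-md5 is missing, restoring the mech_list, rerunning `vdsm-tool configure --force --module libvirt`, and confirming that `sasldblistusers2 -f /etc/libvirt/passwd.db` lists vdsm@ovirt would be reasonable next steps; treat these as suggestions, not a verified fix.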
7 years, 2 months
iSCSI VLAN host connections - bond or multipath & IPv6
by Ben Bradley
Hi All
I'm looking to add a new host to my oVirt lab installation.
I'm going to share out some LVs from a separate box over iSCSI and will
hook the new host up to that.
I have 2 NICs on the storage host and 2 NICs on the new oVirt host to
dedicate to the iSCSI traffic.
I also have 2 separate switches, so I'm looking for redundancy here: both
the iSCSI host and the oVirt host are plugged into both switches.
If this were non-iSCSI traffic and without oVirt, I would create bonded
interfaces in active-backup mode and layer the VLANs on top of that.
But for iSCSI traffic without oVirt involved, I wouldn't bother with a
bond and would just use multipath.
From scanning the oVirt docs, it looks like there is an option to have
oVirt configure iSCSI multipathing.
So what's the best/most-supported option for oVirt?
Manually create active-backup bonds, so oVirt just sees a single storage
link between host and storage?
Or leave them as separate interfaces on each side and use oVirt's
multipathing?
I also quite like the idea of using IPv6 for the iSCSI VLAN, purely
because I could use link-local addressing and not have to worry about
setting up static IPv4 addresses or DHCP. Is IPv6 iSCSI supported by
oVirt?
Thanks, Ben
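Should the bonding route be chosen, the host-side config would look roughly like this on EL7. A sketch only: device names are placeholders, and oVirt can normally create such a bond itself from the Host network dialog, so hand-editing is only needed for interfaces outside its management:

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch, active-backup)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
ONBOOT=yes
BOOTPROTO=none
```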
7 years, 2 months