SSH into guest VM
by 03ce007@gmail.com
I have a self-hosted engine (4.2) deployed successfully on a CentOS (7.4) server.
The physical server has 'ovirt' as its hostname, and the self-hosted engine VM deployed
and running on it has 'engine.ovirt' as its FQDN.
I can successfully create a new VM using the oVirt.vm-infra example playbook. But what other config/vars do I need to add so that I can SSH into this new VM? Some examples, tips, or hints would be helpful.
Thank you.
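SSH access usually comes down to injecting a public key and network settings via cloud-init. A minimal sketch using the `cloud_init` parameter of the `ovirt_vm` module that oVirt.vm-infra wraps (the VM name, cluster, template, and key path below are placeholders, not from the original playbook):

```yaml
# Sketch: cloud-init settings in the vms list consumed by oVirt.vm-infra.
# All names/addresses are placeholders; adapt to your environment.
vms:
  - name: test-vm-1
    cluster: Default
    template: centos7-template
    state: running
    cloud_init:
      host_name: test-vm-1.ovirt
      user_name: root
      # Inject your public key so "ssh root@<vm-ip>" works after first boot
      authorized_ssh_keys: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
      nic_name: eth0
      nic_boot_protocol: dhcp
      nic_on_boot: true
```

Note that the guest template still needs cloud-init installed and sshd enabled, otherwise these settings are never applied inside the VM.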
6 years, 6 months
Bad volume specification
by Bryan Sockel
Hi,
I am having to rebuild my oVirt instance for a couple of reasons. I have
set up a temporary oVirt portal to migrate my setup to while I rebuild my
production portal.
I have run into a couple of VMs that will no longer start.
Everything I am seeing is identical to this article:
https://access.redhat.com/solutions/2423391
VM devel-build is down with error. Exit message: Bad volume specification
{'serial': 'ec7a3258-7a99-4813-aa7d-dceb727a1975', 'index': 0, 'iface':
'virtio', 'apparentsize': '1835008', 'specParams': {}, 'cache': 'none',
'imageID': 'ec7a3258-7a99-4813-aa7d-dceb727a1975', 'truesize': '1777664',
'type': 'disk', 'domainID': '2b79768f-a329-4eab-81e0-120a81ac8906',
'reqsize': '0', 'format': 'cow', 'poolID':
'9e7d643c-592d-11e8-82eb-005056b41d15', 'device': 'disk', 'path':
'/rhev/data-center/9e7d643c-592d-11e8-82eb-005056b41d15/2b79768f-a329-4eab-81e0-120a81ac8906/images/ec7a3258-7a99-4813-aa7d-dceb727a1975/8f4ddee4-68b3-48e9-be27-0231557f5218',
'propagateErrors': 'off', 'name': 'vda', 'bootOrder': '1', 'volumeID':
'8f4ddee4-68b3-48e9-be27-0231557f5218', 'diskType': 'file', 'alias':
'ua-ec7a3258-7a99-4813-aa7d-dceb727a1975', 'discard': False}.
Currently running version 4.2.3.5-1.
Is there any way to recover the image, or even just to mount the
disk on another VM to extract the data?
Thank You
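For extracting the data: since the disk is a file-based qcow2 volume ('format': 'cow', 'diskType': 'file'), you may be able to inspect and copy it directly from the storage domain. A sketch, with the path taken from the error message above (run on a host that has the storage domain mounted, with the VM down; the output filename is a placeholder):

```shell
# Volume path as reported in the "Bad volume specification" error
VOL=/rhev/data-center/9e7d643c-592d-11e8-82eb-005056b41d15/2b79768f-a329-4eab-81e0-120a81ac8906/images/ec7a3258-7a99-4813-aa7d-dceb727a1975/8f4ddee4-68b3-48e9-be27-0231557f5218

# Read-only consistency check of the qcow2 volume
qemu-img check "$VOL"

# If the image data itself is readable, copy it out to a standalone raw
# image that can be attached to another VM or loop-mounted for recovery
qemu-img convert -O raw "$VOL" /tmp/devel-build-recovered.img
```

This does not address the root cause of the bad volume metadata, but it can get the data off the broken volume.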
6 years, 6 months
Second deploy of hosted engine: engine VM is shut down
by dhy336@sina.com
Hi, my first hosted-engine deployment failed, so I am deploying the hosted engine again. On this second attempt, after Ansible runs the task "name: Add host ovirt_hosts:" (bootstrap_local_vm.yml), the engine VM is shut down. Why is the engine VM shut down?
6 years, 6 months
ovirt guest agent epel 6 and rhevm in the name
by Gianluca Cecchi
Hello,
I have updated one test environment to 4.2 (it was on 4.1.9).
I have a guest running Oracle Linux 6 with the EPEL 6 repo enabled.
After powering down and starting the VM, I saw the exclamation mark saying
the latest ovirt-guest-agent version was needed.
On the VM the process was dead, so I took the time to update it
from ovirt-guest-agent-1.0.12-4.el6.noarch
to ovirt-guest-agent-1.0.13-2.el6.noarch
and start the agent.
All seems OK; there is no exclamation mark now, and I see the IP addresses of the
guest in the 4.2 web admin GUI.
But in the ovirt-guest-agent log I see:
MainThread::ERROR::2018-05-28
14:34:55,711::ovirt-guest-agent::140::root::Unhandled exception in oVirt
guest agent!
Traceback (most recent call last):
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 134, in
<module>
agent.run(daemon, pidfile)
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 65, in run
self.agent = LinuxVdsAgent(config)
File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 466, in
__init__
AgentLogicBase.__init__(self, config)
File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 188, in
__init__
self.vio = VirtIoChannel(config.get("virtio", "device"))
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 151, in
__init__
self._stream = VirtIoStream(vport_name)
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 132, in
__init__
self._vport = os.open(vport_name, os.O_RDWR)
OSError: [Errno 2] No such file or directory:
'/dev/virtio-ports/com.redhat.rhevm.vdsm'
MainThread::INFO::2018-05-28
14:42:45,535::ovirt-guest-agent::59::root::Starting oVirt guest agent
The status of files under the indicated directory is:
# ll /dev/virtio-ports/
total 0
lrwxrwxrwx 1 root root 11 May 28 14:34 com.redhat.spice.0 -> ../vport0p3
lrwxrwxrwx 1 root root 11 May 28 14:34 org.qemu.guest_agent.0 -> ../vport0p2
lrwxrwxrwx 1 root root 11 May 28 14:42 ovirt-guest-agent.0 -> ../vport0p1
#
Is there anything to be done on my side at the guest/host configuration level?
Thanks,
Gianluca
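In case it helps others hitting the same traceback: the OSError shows the agent trying to open the old RHEV channel name com.redhat.rhevm.vdsm, while the directory listing shows this VM only exposes ovirt-guest-agent.0. Assuming the EPEL package reads the channel name from /etc/ovirt-guest-agent.conf (the traceback's config.get("virtio", "device") suggests it does), a possible check/workaround is:

```
# /etc/ovirt-guest-agent.conf (sketch; verify the key your package version uses)
[virtio]
# Point the agent at the channel actually present under /dev/virtio-ports/
device = /dev/virtio-ports/ovirt-guest-agent.0
```

followed by a service ovirt-guest-agent restart. The later "Starting oVirt guest agent" line at 14:42 with no traceback suggests the 1.0.13 package may already ship the correct channel name, and the error belongs to the old version's last start.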
6 years, 6 months
CentOS7 Install Failure to Shutdown
by Matthew Wimpelberg
I have oVirt 4.2.3.5-1.el7.centos installed and am unable to cleanly reboot
the host machine. The only way I can reboot is to press and hold the power
button, which causes an unclean shutdown of the engine VM. I can upload
logs if necessary; I'm just not sure where to upload them to.
6 years, 6 months
Fwd: External ceph storage
by Leo David
---------- Forwarded message ---------
From: Leo David <leoalex(a)gmail.com>
Date: Sun, May 27, 2018, 20:01
Subject: Re: [ovirt-users] External ceph storage
To: Luca 'remix_tj' Lorenzetto <lorenzetto.luca(a)gmail.com>
Indeed, you're right. I will set up the gateways while waiting for a possible future
native Ceph connection, which in my opinion would bring a lot of power to
oVirt.
On Sun, May 27, 2018, 19:48 Luca 'remix_tj' Lorenzetto <
lorenzetto.luca(a)gmail.com> wrote:
> I think that is the best option. Setting up a small OpenStack deployment only
> for this is not worth the hassle.
>
> Luca
>
> Il dom 27 mag 2018, 18:44 Leo David <leoalex(a)gmail.com> ha scritto:
>
>> Ok, so since I don't have OpenStack installed (which would put Cinder
>> between Ceph and oVirt), the only option remaining is the iSCSI gateway...
>> Thank you very much !
>>
>> On Sun, May 27, 2018, 18:14 Luca 'remix_tj' Lorenzetto <
>> lorenzetto.luca(a)gmail.com> wrote:
>>
>>> No,
>>>
>>> The only way you have is to configure Cinder to manage the Ceph pool, or
>>> alternatively to deploy an iSCSI gateway; no other ways are
>>> available at the moment.
>>>
>>> So you can't use RBD directly.
>>>
>>> Luca
>>>
>>> Il dom 27 mag 2018, 16:54 Leo David <leoalex(a)gmail.com> ha scritto:
>>>
>>>> Thank you Luca,
>>>> At the moment I would try the Cinder storage provider, since we already
>>>> have a Proxmox cluster directly connecting to Ceph. The problem is that I
>>>> just could not find a straightforward way to do this,
>>>> i.e. to specify the Ceph monitors and Ceph pool to connect to. Can oVirt
>>>> directly connect to Ceph monitors? If so, how should the configuration
>>>> be done?
>>>> Thank you very much !
>>>>
>>>>
>>>> On Sun, May 27, 2018, 17:20 Luca 'remix_tj' Lorenzetto <
>>>> lorenzetto.luca(a)gmail.com> wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> Yes, using Cinder or through the iSCSI gateway.
>>>>>
>>>>> For a simpler setup I suggest the second option.
>>>>>
>>>>> Luca
>>>>>
>>>>> Il dom 27 mag 2018, 16:08 Leo David <leoalex(a)gmail.com> ha scritto:
>>>>>
>>>>>> Hello everyone,
>>>>>> I am new to oVirt and very impressed by its features. I would like to
>>>>>> leverage our existing Ceph cluster to provide RBD images for VM disks;
>>>>>> is this possible to achieve?
>>>>>> Thank you very much !
>>>>>> Regards,
>>>>>> Leo
>>>>>> _______________________________________________
>>>>>> Users mailing list -- users(a)ovirt.org
>>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>>>>
>>>>>
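For reference on the Cinder route discussed above: pointing Cinder at a Ceph pool is done with the RBD backend in cinder.conf. A sketch (the backend name, pool, user, and secret UUID are placeholders; the monitors are not listed in cinder.conf itself but come from the ceph.conf it references):

```
# /etc/cinder/cinder.conf (fragment; all values are placeholders)
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```

oVirt then consumes this through an external provider pointing at the Cinder API; it never talks to the Ceph monitors directly, which is why a standalone oVirt setup without Cinder falls back to the iSCSI gateway.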
6 years, 6 months