I need to change the gateway ping address, the one used by hosted-engine setup.
Is it OK to edit /etc/ovirt-hosted-engine/hosted-engine.conf on each node,
update the gateway parameter with the new IP address, and restart
the agent and broker on each node?
A blind test seems to work, but I need to confirm this is the right procedure.
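If that does turn out to be the supported path, the edit is easy to script per node. A minimal sketch, assuming the usual service names and using 192.0.2.1 as a stand-in for the new gateway:

```shell
# run as root on each node; 192.0.2.1 is a placeholder for the new gateway
sed -i 's/^gateway=.*/gateway=192.0.2.1/' /etc/ovirt-hosted-engine/hosted-engine.conf
systemctl restart ovirt-ha-broker ovirt-ha-agent
```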
I'm confused because, even though I use ovirt-shell to script many actions
every day, and even after a large amount of reading and testing, I cannot
find the correct syntax to move (offline/available) disks between storage domains.
Can you help me, please?
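For the archives: in ovirt-shell, disk moves go through the `action` verb. I'm not certain of the exact flag names, so treat this as a sketch to verify against `help action` in your shell version; the disk, VM, and domain names are placeholders:

```shell
# hypothetical ovirt-shell invocation; flag names are assumptions to check
action disk mydisk move --storagedomain-name TargetSD --parent-vm-name myvm
```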
I'm working on some load-balancing solutions and they appear to require MAC
spoofing. I did some searching and reading and as I understand it, you can
disable the MAC spoofing protection through a few methods.
I was wondering about the best manner to enable this for the VMs that
require it and not across the board (if that is even possible). I'd like
to just allow my load-balancer VMs to do what they need to, but keep the
others untouched as a security mechanism.
If anyone has advice on the best method to handle this scenario, I
would greatly appreciate it. It seems that this might turn into some type
of feature request, though I'm not sure whether this has to be
done at the Linux bridge level, the port level, or the VM level. Any
explanation of that would also help my education.
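One existing per-VM mechanism worth checking is the vdsm macspoof hook, which keys off a VM custom property rather than a cluster-wide switch. A sketch, assuming the oVirt 3.x package name and the property format documented with the hook:

```shell
# on each host: install the hook that drops the anti-spoofing filter per VM
yum install -y vdsm-hook-macspoof
# on the engine: allow a "macspoof" custom property, then restart the engine
engine-config -s "UserDefinedVMProperties=macspoof=^(true|false)$"
service ovirt-engine restart
# then set macspoof=true in the custom properties of the load-balancer VMs only
```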
I'm trying to boot a VM with a non-persistent floppy using the Python oVirt
SDK (the "RunOnce" way in the administration portal), but the guest OS can't see
the floppy drive. The ultimate goal is to deliver a floppy with a sysprep
unattend.xml file for Windows 7 pools of VMs.
Here is a snippet of code I use:
myvm = api.vms.get(name="vmname")
content = "This is file content!"
# attach the file as a floppy payload in a RunOnce action
# (payload/file param names per the 3.x SDK; verify against your version)
action = params.Action(vm=params.VM(payloads=params.Payloads(payload=[
    params.Payload(type_='floppy', files=params.Files(file=[
        params.File(name='unattend.xml', content=content)]))])))
xml = ParseHelper.toXml(action)
As you can see, for debugging purposes I print my XML action, and I get:
<content>This is file content</content>
In the admin portal I can see my VM in the "RunOnce" state, but no floppy is attached.
In fact, in the VM process command line
(ps -ef | grep qemu-kvm | grep vmname) I can't see a -drive option
referring to a floppy (I only see two "-drive" options, referring to the VM
system disk and to a correctly mounted CD-ROM ISO).
What am I doing wrong?
(The engine is RHEV-M version 3.4.1-0.31.el6ev)
Thanks in advance,
On 03/09/2015 07:12 AM, Simone Tiraboschi wrote:
> ----- Original Message -----
>> From: "Bob Doolittle" <bob(a)doolittle.us.com>
>> To: "Simone Tiraboschi" <stirabos(a)redhat.com>
>> Sent: Monday, March 9, 2015 12:02:49 PM
>> Subject: Re: [ovirt-users] Error during hosted-engine-setup for 3.5.1 on F20 (Cannot add the host to cluster ... SSH
>> has failed)
>> On Mar 9, 2015 5:23 AM, "Simone Tiraboschi" <stirabos(a)redhat.com> wrote:
>>> ----- Original Message -----
>>>> From: "Bob Doolittle" <bob(a)doolittle.us.com>
>>>> To: "users-ovirt" <users(a)ovirt.org>
>>>> Sent: Friday, March 6, 2015 9:21:20 PM
>>>> Subject: [ovirt-users] Error during hosted-engine-setup for 3.5.1 on
>> F20 (Cannot add the host to cluster ... SSH has
>>>> I'm following the instructions here:
>>>> My self-hosted install failed near the end:
>>>> To continue make a selection from the options below:
>>>> (1) Continue setup - engine installation is complete
>>>> (2) Power off and restart the VM
>>>> (3) Abort setup
>>>> (4) Destroy VM and abort setup
>>>> (1, 2, 3, 4): 1
>>>> [ INFO ] Engine replied: DB Up!Welcome to Health Status!
>>>> Enter the name of the cluster to which you want to add the
>>>> (Default) [Default]:
>>>> [ ERROR ] Cannot automatically add the host to cluster Default: Cannot
>>>> Host. Connecting to host via SSH has failed, verify that the host is
>>>> reachable (IP address, routable address etc.) You may refer to the
>>>> engine.log file for further details.
>>>> [ ERROR ] Failed to execute stage 'Closing up': Cannot add the host to
>>>> cluster Default
>>>> [ INFO ] Stage: Clean up
>>>> [ INFO ] Generating answer file
>>>> [ INFO ] Stage: Pre-termination
>>>> [ INFO ] Stage: Termination
>>>> I can ssh into the engine VM both locally and remotely. There is no
>>>> /root/.ssh directory, however. Did I need to set that up somehow?
>>> It's the engine that needs to open an SSH connection to the host calling
>> it by its hostname.
>>> So please be sure that you can SSH to the host from the engine using its
>> hostname and not its IP address.
>> I'm assuming this should be a password-less login (key-based)?
> Yes, it is.
>> As what user?
OK, I see a couple of problems.
First off, I didn't have my deploying-host hostname in the hosts map for my engine.
After adding it to /etc/hosts (both hostname and FQDN), when I try to ssh from root@engine to root@host it is prompting me for a password.
On my engine, ~root/.ssh does not contain any keys.
On my host, ~root/.ssh has authorized_keys, and in it there is a key with the comment "ovirt-engine".
It's possible that I inadvertently removed ~root/.ssh on engine while I was preparing the engine (I started to set up my own no-password logins and then thought better and cleaned up, not realizing that some prior setup affecting that directory had occurred). That would explain the second issue.
How/when does the key for root@engine get populated into the host's ~root/.ssh/authorized_keys during setup?
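One quick check, assuming the engine stores its SSH identity under /etc/pki/ovirt-engine/keys (the layout on 3.x engines): try the key the engine actually uses, rather than anything in ~root/.ssh. The hostname below is a placeholder for your deploying host:

```shell
# run on the engine VM; should log in without a password if the host's
# authorized_keys still holds the "ovirt-engine" key
ssh -i /etc/pki/ovirt-engine/keys/engine_id_rsa root@host.example.com hostname
```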
>>> Until now, hosted-engine hosts were simply identified by their IP address, but
>> then we had some bug reports about side effects of that.
>>> So now we generate and sign certs using host hostnames, so the engine
>> should be able to correctly resolve them.
>>>> When I log into the Administration portal, the engine VM does not appear
>>>> under the Virtual machine view (it's empty).
>>> That's because the setup didn't complete.
>>>> I've attached what I think are the relevant logs.
>>>> Also, when my host reboots, the ovirt-ha-broker and ovirt-ha-agent
>>>> do not come up automatically. I have to use systemctl to start them
>>> That's because the setup didn't complete.
>>>> This is a fresh Fedora 20 machine installing a fresh copy of oVirt
>>>> What's the cleanest approach to restore/complete sanity of my setup?
>>> The first step is to clarify what went wrong in order to avoid it in the future.
>>> Then, if you want a really sane environment for production use, I'd
>> suggest redeploying.
>>> hosted-engine --vm-poweroff
>>> empty the storage domain share and deploy again
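Spelled out as commands (the deploy flag is the standard hosted-engine one; emptying the storage domain share is a manual step on the storage server):

```shell
hosted-engine --vm-poweroff
# remove the contents of the hosted-engine storage domain share, then:
hosted-engine --deploy
```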
>>>> I've linked 3 files to this email:
>>>> server.log (12.4 MB) Dropbox https://db.tt/g5p09AaD
>>>> vdsm.log (3.2 MB) Dropbox https://db.tt/P4572SUm
>>>> ovirt-hosted-engine-setup-20150306123622-tad1fy.log (413 KB) Dropbox
>>>> Users mailing list
On 12/09/14 09:22, Itamar Heim wrote:
> With oVirt 3.5 nearing GA, time to ask for "what do you want to see in
> oVirt 3.6"?
> Users mailing list
> Users at ovirt.org
A performance metric similar to what VMware calls "CPU Ready" would be very useful if it was available in the VM details in the admin portal.
It would provide great visibility on VM performance in an environment with CPU overallocation.
+1 to ISO upload from the GUI
+1 to Ceph support (if the way is via Cinder, then integrate Cinder in
oVirt as you did with Neutron to get around the lack of features in
I've been asking for several things (with their respective RFEs) and as
versions go by without success, I'm asking again:
- 1049994 [RFE] Allow choosing network interface for gluster domain
- 1049476 [RFE] Mix untagged and tagged Logical Networks on the same NIC
- 1029489 [RFE] Export not exporting direct lun disk
- 1051002 [RFE] ISO domain should be a simple NFS share containing ISOs