Frustration defines the deployment of Hosted Engine
by Vinícius Ferrão
Hello oVirt folks.
I'm a traitor to the Xen movement and have been looking for good alternatives to XenServer hypervisors. I've been aware of KVM for a long time, but I was missing a more professional, appliance-like feel, and oVirt appears to deliver exactly what I'm looking for.
Don't get me wrong, I'm not saying that Xen is not good; I'm looking for equal or better alternatives. But I'm starting to get frustrated with oVirt.
First I tried to install oVirt Node in a VM under VMware Fusion on my notebook; it was a no-go. For reasons I don't understand, vdsmd.service and libvirtd failed to start. I made sure EPT support was enabled so that nested virtualization would work, but as I said: it was a no-go.
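For what it's worth, this is roughly how I checked that nested virtualization was actually available inside the VM (assuming an Intel host; these are just my own checks, not from any oVirt guide):

  # Inside the nested oVirt Node VM: the vmx flag must be exposed by Fusion
  grep -c vmx /proc/cpuinfo

  # libvirt ships a validation tool that reports missing virtualization pieces
  virt-host-validate

  # The failing services themselves usually say why they died
  journalctl -u vdsmd -u libvirtd --no-pager | tail -n 50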
So I decommissioned a XenServer machine that was in production just to try oVirt. The hardware is not new, but it's very capable: dual Xeon E5506 with 48GB of RAM. Still, I can't get the hosted engine to work; it always insults my hardware: "Hosted Engine deployment failed: this system is not reliable, please check the issue,fix and redeploy".
It's definitely a problem in the storage subsystem, but the exact error seems random. On this attempt I got:
[ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No response for JSON-RPC StorageDomain.detach request.
But on other tries it came up with something like this:
No response for JSON-RPC Volume.getSize request.
I was thinking that the problem was with the NFSv3 server on our FreeNAS box, so I switched to an iSCSI backend, but the problem continues. It happens at the very end of the ovirt-hosted-engine-setup run, which leads me to believe it's an oVirt issue. The OVA had already been copied and deployed to the storage:
[ INFO ] Starting vdsmd
[ INFO ] Creating Volume Group
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ INFO ] Creating Image for 'hosted-engine.lockspace' ...
[ INFO ] Image for 'hosted-engine.lockspace' created successfully
[ INFO ] Creating Image for 'hosted-engine.metadata' ...
[ INFO ] Image for 'hosted-engine.metadata' created successfully
[ INFO ] Creating VM Image
[ INFO ] Extracting disk image from OVF archive (could take a few minutes depending on archive size)
[ INFO ] Validating pre-allocated volume size
[ INFO ] Uploading volume to data domain (could take a few minutes depending on archive size)
[ INFO ] Image successfully imported from OVF
[ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No response for JSON-RPC StorageDomain.detach request.
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20170623032541.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue,fix and redeploy
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log
At this point I really don't know what else I should try. And the log file is too verborragic (hoping this word exists; I mean extremely verbose) to dig through for errors.
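If it helps anyone point me in the right direction, this is roughly how I have been trying to narrow the log down (assuming the default log locations; the grep patterns are just my guesses):

  # The setup log is huge; the real failure is usually near the ERROR lines
  grep -n ERROR /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log

  # vdsm keeps its own log, which is where the JSON-RPC storage calls end up
  grep -n -E -i 'StorageDomain.detach|Traceback' /var/log/vdsm/vdsm.log

  # sanlock and supervdsm sometimes hold the underlying storage error
  tail -n 100 /var/log/sanlock.log /var/log/vdsm/supervdsm.log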
Any guidance?
Thanks,
V.
Dell firmware update problem
by Davide Ferrari
Hello
I'm trying to update the BIOS firmware on a Dell PowerEdge with CentOS
(just upgraded to 7.3) running oVirt 4.0, and it fails with this error:
mount: /dev/sdb is already mounted or /tmp/SECUPD busy
The problem is acknowledged by Red Hat and there is a solution for it:
https://access.redhat.com/solutions/2671581
but, well, I have no RH subscription :( Has anyone here encountered the
same problem with CentOS and oVirt? Any hints?
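In the meantime, these are the generic checks I have been using to see what is actually holding /dev/sdb busy (I can't read the Red Hat solution itself, so this is only guesswork, not their fix):

  # See what the device is and whether something has auto-mounted it
  lsblk /dev/sdb
  findmnt /dev/sdb

  # Check whether multipathd has claimed the disk
  multipath -ll

  # And whether any process is sitting on the temporary mount point
  fuser -vm /tmp/SECUPD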
Thanks!
--
Davide
SPICE keymap ?
by Matthias Leopold
hi,
I'm looking for a way to change the SPICE keymap for a VM, but I couldn't
find it. I couldn't find a way to change it in the client either (the Linux
remote-viewer application). This is probably easy, thanks anyway...
matthias
is arbiter configuration needed?
by Erekle Magradze
Hello,
I am using GlusterFS as the storage backend for the VM images; the volumes
for oVirt consist of three bricks. Is it still necessary to configure an
arbiter to be on the safe side, or, since the number of bricks is odd,
is that handled out of the box?
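For reference, this is how I would check whether a volume already has an arbiter brick, and how an arbiter volume is created in the first place (the volume name 'vmstore' and the host names gfs1-3 are just placeholders for my setup):

  # Show the volume layout; an arbiter volume reports its bricks as
  # "Number of Bricks: 1 x (2 + 1) = 3", a plain replica 3 as "1 x 3 = 3"
  gluster volume info vmstore

  # Creating a replica 3 volume with a dedicated arbiter brick; the third
  # brick only stores metadata, so it can be much smaller than the others
  gluster volume create vmstore replica 3 arbiter 1 \
      gfs1:/gluster/vmstore/brick \
      gfs2:/gluster/vmstore/brick \
      gfs3:/gluster/vmstore/brick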
Thanks in advance
Cheers
Erekle
problems migrating to hosted engine from bare metal
by cmc
Hi,
I created a new host to deploy a hosted engine on, then took a backup
from the bare metal engine and restored it, as per the procedure in:
http://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_M...
Everything worked fine up until step 15 ('Continue setup'), where the
script said the engine was not responding. I tried the reboot option
(option 3), but it still would not connect, so I could not do the final
step involving the internal CA: adding the host to an existing cluster
(which already contained two other hosts). I was able to connect to the
engine fine via VNC and SSH, and from there I could see that the
ovirt-engine service was up. I did have to install the aaa-ldap
extension package separately to enable LDAP auth, but once that was
done I was able to log in, and it showed the old cluster as it was on
the bare metal engine. I added the host that I had created the hosted
engine on; it installed various packages, I configured the network, and
everything looked fine, apart from the fact that I could not see a VM
named 'HostedEngine' in the list of VMs. I don't think this was a
properly working setup, however: the NFS storage I used to set up the
hosted engine became unavailable, and I think this killed the hosted
engine, which caused the host it was on to reboot. The hosted engine
has not come back since then, so I'm guessing it either isn't properly
set up for HA, or it still needs the NFS storage, or something else was
not done properly by me during the setup. I've restarted the bare metal
engine because I need it running for now.
My questions are:
1. My understanding is that the NFS storage is only used initially to
create the hosted engine disk image, that it is temporary, and that the
hosted engine later gets migrated to the storage used by the rest of
the cluster (which in my case is attached directly to the hosts via
Fibre Channel). I suspect that this did not happen. Also, the bare
metal engine had some local ISO storage (on a hard disk local to it),
which will not be replicated to the hosted engine VM - will this cause
a problem for the deployment? If necessary, I can create some new ISO
storage later.
2. What is the recommended way to recover from this situation? Should
I just run 'hosted-engine --deploy' again and try to find out what is
going wrong at step 15?
I can probably get the disk image that was on NFS and mount it to find
out what went wrong on the initial deployment, or I can run the
deployment again and then get the log when it fails at step 15.
The oVirt version was 4.1.2.2.
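In case it helps with the diagnosis, this is roughly how I plan to check the HA state on the new host (assuming the default hosted-engine log locations):

  # Report what the HA agent thinks the engine VM state is
  hosted-engine --vm-status

  # The HA agent and broker logs usually show why the engine VM is not started
  less /var/log/ovirt-hosted-engine-ha/agent.log
  less /var/log/ovirt-hosted-engine-ha/broker.log

  # Confirm the HA services themselves are running
  systemctl status ovirt-ha-agent ovirt-ha-broker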
Thanks for any help,
Cam
Failed to activate ISO Ovirt 4.1
by rajatjpatel
Hello Team,
I am using oVirt 4.1 in production and we are facing issues with the
ISO domain: we cannot activate it. Using 'mount -t nfs
ovirt.fog.com:/var/lib/exports/iso /mnt' we are able to mount the
export on all hypervisors, but activating it through the oVirt UI gives
an error. Please find the vdsm and engine logs attached.
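For what it's worth, the checks I came across for oVirt NFS domains concern the ownership and export options (using the export path from above; I'm not sure this is the actual cause here):

  # oVirt expects the NFS export to be owned by vdsm:kvm (UID 36, GID 36)
  chown -R 36:36 /var/lib/exports/iso
  chmod 0755 /var/lib/exports/iso

  # A typical /etc/exports entry for an oVirt ISO domain
  /var/lib/exports/iso *(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)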
Regards
Rajat
default login info for repository images
by Dan Sullivan
I've created a VM using a template imported from a Fedora 24 image in
the repository. Now that it is created, what credentials do I use to
log into it from the console?
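From what I could find, the repository images are cloud images with no password set, so I assume something like the following cloud-init snippet (passed through the VM's Initial Run / Run Once settings, with 'fedora' as the assumed default user and a placeholder password) is needed - but I'm not sure, hence the question:

  #cloud-config
  # Assumed defaults for a Fedora cloud image; replace the password
  user: fedora
  password: changeme
  chpasswd: { expire: False }
  ssh_pwauth: True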
New installation
by david caughey
Hi Folks,
I just got the go ahead to install oVirt to use as a lab.
The servers are:
1x DL360, 4x 300GB 10k in RAID 10 for the OS (Manager) - should this be
clustered with another server for resiliency?
5x DL380, 2x 300GB 10k for the OS, 6x 1TB 7.2k in RAID 5 on each for a
data store.
It will all be behind a proxy and be connected with 1Gb links (2x4 bonds).
My plan is to have one manager, as this is a lab scenario and not
production (yet) - or is it better to have a cluster?
Is it OK to have a data store on each host (they won't let me have
dedicated storage yet)?
Is it wise to share these data stores between hosts, or should one store
be dedicated to each host individually?
There is no need for performance, basically as long as it runs it will work.
The final intention is to have templates for OpenStack and CEPH plus lots
of Linux examples set up with SDN etc.
We have a 90% Windows shop, but it is all rapidly changing to OpenStack
et al, and I want to try to give people the opportunity to use Linux in
the lab before they are let loose in production.
Plus it gives me the chance to show management exactly what oVirt can do.
Any help, advice or links would be greatly appreciated,
BR/David