Ovirt Single Node Hyperconverged
by Keith Winn
Hi,
I am looking at using oVirt on a single machine. I used to use the all-in-one setup a while ago. I have only two VMs, and setting up two servers plus separate storage is out of my budget right now. The all-in-one solution sounds good, but I have some questions about the gdeploy config file. I am new to gluster and am unsure what I need. The server is an IBM System x3550 M3 with two 1 TB drives set up as a mirror.
For the following, what do I put if the disks are already set up as RAID through the RAID controller? JBOD?
[disktype]
@RAIDTYPE@ #Possible values raid6, raid10, raid5, jbod
[diskcount]
@NUMBER_OF_DATA_DISKS@ #Ignored in case of jbod
[stripesize]
@STRIPE_SIZE@ #256 in case of jbod
My second question is where I get the device name in the following:
[script1]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d @DEVICE@
Is this /dev/sda on the disk or some other name?
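For what it's worth, here is a hedged sketch of how these sections are often filled in when a hardware RAID controller already presents the disks as one logical device; gluster then just sees a single disk, so jbod is the usual choice. The device name sdb is an assumption for illustration only — gdeploy expects the bare kernel name (as shown by lsblk) of the empty device being handed to gluster, which may or may not be sda on your box:
[disktype]
jbod
[diskcount]
1 #Ignored in case of jbod
[stripesize]
256 #256 in case of jbod
[script1]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb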
I was planning on using oVirt Node as a starting point, as that seems easier.
Thanks in advance for the help.
LDAP authentication does not work after engine upgrade to ovirt 4.6
by Michael Watters
I've just upgraded our oVirt engine server to oVirt 4.6, and it appears
that LDAP logins no longer work. When I attempt to log in using an AD
account, the following errors are shown in the engine log.
2018-09-11 10:03:44,610-04 ERROR
[org.ovirt.engine.core.sso.servlets.InteractiveAuthServlet] (default
task-10) [] Internal Server Error: Cannot locate principal
'username(a)example.com'
2018-09-11 10:03:44,610-04 ERROR
[org.ovirt.engine.core.sso.utils.SsoUtils] (default task-10) [] Cannot
locate principal 'username(a)example.com'
2018-09-11 10:03:44,645-04 ERROR
[org.ovirt.engine.core.aaa.servlet.SsoPostLoginServlet] (default
task-10) [] server_error: Cannot locate principal 'username(a)example.com'
I have not changed any LDAP settings and ldapsearch is able to find this
object without any issues. Does anybody have any idea what would cause
this?
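One way to take the web UI out of the picture is to test the aaa extension directly on the engine host. A hedged sketch, assuming the profile is named example.com — substitute whatever profile name your ovirt-engine-extension-aaa-ldap configuration defines, and note that the exact flags vary slightly between versions (see the tool's --help):

# Run on the engine host; this walks the same extension login path the
# web UI uses and logs each step of the principal lookup
ovirt-engine-extensions-tool aaa login-user --profile=example.com --user-name=username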
Managing multiple oVirt installs?
by femi adegoke
Let's say you have multiple oVirt installs.
How can they all be "managed" through a single engine web UI (so I don't have to log in 5 different times)?
Re: Engine Setup Error
by Sakhi Hadebe
Hi,
I re-installed the cluster and got the hosted engine deployed successfully.
Thank you for your assistance.
On Thu, Sep 6, 2018 at 1:54 PM, Sakhi Hadebe <sakhi(a)sanren.ac.za> wrote:
> Hi Simone,
>
> Yes the ownership of /var/log/ovirt-imageio-daemon is vdsm:kvm, 755.
>
> Below are the versions of the oVirt packages currently installed on my oVirt
> Nodes:
>
> ovirt-vmconsole-1.0.5-4.el7.centos.noarch
> ovirt-provider-ovn-driver-1.2.14-1.el7.noarch
> ovirt-release-host-node-4.2.6-1.el7.noarch
> ovirt-hosted-engine-setup-2.2.26-1.el7.noarch
> ovirt-node-ng-nodectl-4.2.0-0.20180903.0.el7.noarch
> ovirt-release42-4.2.6-1.el7.noarch
> ovirt-imageio-common-1.4.4-0.el7.x86_64
> python-ovirt-engine-sdk4-4.2.8-2.el7.x86_64
> *ovirt-imageio-daemon-1.4.4-0.el7.noarch*
> *ovirt-hosted-engine-ha-2.2.16-1.el7.noarch*
> ovirt-host-4.2.3-1.el7.x86_64
> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
> ovirt-host-deploy-1.7.4-1.el7.noarch
> cockpit-machines-ovirt-172-2.el7.centos.noarch
> ovirt-host-dependencies-4.2.3-1.el7.x86_64
> ovirt-engine-appliance-4.2-20180903.1.el7.noarch
> ovirt-setup-lib-1.1.5-1.el7.noarch
> cockpit-ovirt-dashboard-0.11.33-1.el7.noarch
> ovirt-vmconsole-host-1.0.5-4.el7.centos.noarch
> ovirt-node-ng-image-update-placeholder-4.2.6-1.el7.noarch
>
> I updated the oVirt packages, and ovirt-imageio-daemon is now running.
>
> ovirt-ha-agent is failing with *Failed to start monitor agents*. The engine
> was successfully deployed, but I can't access it because ovirt-ha-agent is not
> running, with the error below and log files attached:
>
> [root@goku ovirt-hosted-engine-ha]# journalctl -xe -u ovirt-ha-agent
> Sep 06 13:50:36 goku systemd[1]: ovirt-ha-agent.service holdoff time over,
> scheduling restart.
> Sep 06 13:50:36 goku systemd[1]: Started oVirt Hosted Engine High
> Availability Monitoring Agent.
> -- Subject: Unit ovirt-ha-agent.service has finished start-up
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit ovirt-ha-agent.service has finished starting up.
> --
> -- The start-up result is done.
> Sep 06 13:50:36 goku systemd[1]: Starting oVirt Hosted Engine High
> Availability Monitoring Agent...
> -- Subject: Unit ovirt-ha-agent.service has begun start-up
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit ovirt-ha-agent.service has begun starting up.
> Sep 06 13:50:37 goku ovirt-ha-agent[50395]: ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngi
> Sep 06 13:50:37 goku ovirt-ha-agent[50395]: ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceb
> File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agen
> return action(he)
> File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agen
> return
> he.start_monitoring()
> File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agen
> self._initialize_broker()
> File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agen
> m.get('options', {}))
> File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/
> .format(type, options, e))
> RequestError: Failed to start
> monitor ping, options {'addr': '192.16
> Sep 06 13:50:37 goku ovirt-ha-agent[50395]: ovirt-ha-agent
> ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying
> Sep 06 13:50:37 goku systemd[1]: ovirt-ha-agent.service: main process
> exited, code=exited, status=157/n/a
> Sep 06 13:50:37 goku systemd[1]: Unit ovirt-ha-agent.service entered
> failed state.
> Sep 06 13:50:37 goku systemd[1]: ovirt-ha-agent.service failed.
>
>
>
> On Wed, Sep 5, 2018 at 1:53 PM, Simone Tiraboschi <stirabos(a)redhat.com>
> wrote:
>
>> Can you please check if on your host you have
>> /var/log/ovirt-imageio-daemon and its ownership and permissions (it should
>> be vdsm:kvm,700)?
>> Can you please report which version of ovirt-imageio-daemon you are
>> using?
>> We had a bug there, but it was fixed a long time ago.
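>>
>> As a hedged sketch, the checks would look something like this on the host
>> (path and ownership as above; adjust if your setup differs):
>>
>> ls -ld /var/log/ovirt-imageio-daemon
>> rpm -q ovirt-imageio-daemon
>> # only if the directory exists with the wrong owner/mode:
>> chown vdsm:kvm /var/log/ovirt-imageio-daemon
>> chmod 700 /var/log/ovirt-imageio-daemon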
>>
>>
>> On Wed, Sep 5, 2018 at 12:04 PM Sakhi Hadebe <sakhi(a)sanren.ac.za> wrote:
>>
>>> Sorry, I mistakenly sent the previous email before finishing it:
>>>
>>> Below is the output of:
>>> [root@glustermount ~]# systemctl status ovirt-imageio-daemon.service -l
>>> ● ovirt-imageio-daemon.service - oVirt ImageIO Daemon
>>> Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service;
>>> disabled; vendor preset: disabled)
>>> Active: failed (Result: start-limit) since Tue 2018-09-04 16:55:16
>>> SAST; 19h ago
>>> Condition: start condition failed at Wed 2018-09-05 11:56:46 SAST; 2min
>>> 9s ago
>>> ConditionPathExists=/etc/pki/vdsm/certs/vdsmcert.pem was not
>>> met
>>> Process: 11345 ExecStart=/usr/bin/ovirt-imageio-daemon (code=exited,
>>> status=1/FAILURE)
>>> Main PID: 11345 (code=exited, status=1/FAILURE)
>>>
>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt
>>> ImageIO Daemon.
>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Unit
>>> ovirt-imageio-daemon.service entered failed state.
>>> Sep 04 16:55:16 glustermount.goku systemd[1]:
>>> ovirt-imageio-daemon.service failed.
>>> Sep 04 16:55:16 glustermount.goku systemd[1]:
>>> ovirt-imageio-daemon.service holdoff time over, scheduling restart.
>>> Sep 04 16:55:16 glustermount.goku systemd[1]: start request repeated too
>>> quickly for ovirt-imageio-daemon.service
>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt
>>> ImageIO Daemon.
>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Unit
>>> ovirt-imageio-daemon.service entered failed state.
>>> Sep 04 16:55:16 glustermount.goku systemd[1]:
>>> ovirt-imageio-daemon.service failed.
>>> [root@glustermount ~]# journalctl -xe -u ovirt-imageio-daemon.service
>>> Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File
>>> "/usr/lib64/python2.7/logging/handlers.py",
>>> Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]:
>>> BaseRotatingHandler.__init__(self, filename, mode
>>> Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File
>>> "/usr/lib64/python2.7/logging/handlers.py",
>>> Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]:
>>> logging.FileHandler.__init__(self, filename, mode
>>> Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File
>>> "/usr/lib64/python2.7/logging/__init__.py",
>>> Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]:
>>> StreamHandler.__init__(self, self._open())
>>> Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File
>>> "/usr/lib64/python2.7/logging/__init__.py",
>>> Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: stream =
>>> open(self.baseFilename, self.mode)
>>> Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: IOError:
>>> [Errno 2] No such file or directory: '/v
>>> Sep 04 16:55:16 glustermount.goku systemd[1]:
>>> ovirt-imageio-daemon.service: main process exited, code=exited, st
>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt
>>> ImageIO Daemon.
>>> -- Subject: Unit ovirt-imageio-daemon.service has failed
>>> -- Defined-By: systemd
>>> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
>>> --
>>> -- Unit ovirt-imageio-daemon.service has failed.
>>> --
>>> -- The result is failed.
>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Unit
>>> ovirt-imageio-daemon.service entered failed state.
>>> Sep 04 16:55:16 glustermount.goku systemd[1]:
>>> ovirt-imageio-daemon.service failed.
>>> Sep 04 16:55:16 glustermount.goku systemd[1]:
>>> ovirt-imageio-daemon.service holdoff time over, scheduling restart
>>> Sep 04 16:55:16 glustermount.goku systemd[1]: start request repeated too
>>> quickly for ovirt-imageio-daemon.servic
>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt
>>> ImageIO Daemon.
>>> -- Subject: Unit ovirt-imageio-daemon.service has failed
>>> -- Defined-By: systemd
>>> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
>>> --
>>> -- Unit ovirt-imageio-daemon.service has failed.
>>> --
>>> -- The result is failed.
>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Unit
>>> ovirt-imageio-daemon.service entered failed state.
>>> Sep 04 16:55:16 glustermount.goku systemd[1]:
>>> ovirt-imageio-daemon.service failed.
>>>
>>>
>>> On Wed, Sep 5, 2018 at 12:01 PM, Sakhi Hadebe <sakhi(a)sanren.ac.za>
>>> wrote:
>>>
>>>> # systemctl status ovirt-imageio-daemon.service
>>>> ● ovirt-imageio-daemon.service - oVirt ImageIO Daemon
>>>> Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service;
>>>> disabled; vendor preset: disabled)
>>>> Active: failed (Result: start-limit) since Tue 2018-09-04 16:55:16
>>>> SAST; 19h ago
>>>> Condition: start condition failed at Wed 2018-09-05 11:56:46 SAST; 1min
>>>> 58s ago
>>>> ConditionPathExists=/etc/pki/vdsm/certs/vdsmcert.pem was
>>>> not met
>>>> Process: 11345 ExecStart=/usr/bin/ovirt-imageio-daemon (code=exited,
>>>> status=1/FAILURE)
>>>> Main PID: 11345 (code=exited, status=1/FAILURE)
>>>>
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt
>>>> ImageIO Daemon.
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Unit
>>>> ovirt-imageio-daemon.service entered failed state.
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]:
>>>> ovirt-imageio-daemon.service failed.
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]:
>>>> ovirt-imageio-daemon.service holdoff time over, scheduling ...art.
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]: start request repeated
>>>> too quickly for ovirt-imageio-daemon...vice
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt
>>>> ImageIO Daemon.
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Unit
>>>> ovirt-imageio-daemon.service entered failed state.
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]:
>>>> ovirt-imageio-daemon.service failed.
>>>> Hint: Some lines were ellipsized, use -l to show in full.
>>>> [root@glustermount ~]# systemctl status ovirt-imageio-daemon.service -l
>>>> ● ovirt-imageio-daemon.service - oVirt ImageIO Daemon
>>>> Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service;
>>>> disabled; vendor preset: disabled)
>>>> Active: failed (Result: start-limit) since Tue 2018-09-04 16:55:16
>>>> SAST; 19h ago
>>>> Condition: start condition failed at Wed 2018-09-05 11:56:46 SAST; 2min
>>>> 9s ago
>>>> ConditionPathExists=/etc/pki/vdsm/certs/vdsmcert.pem was
>>>> not met
>>>> Process: 11345 ExecStart=/usr/bin/ovirt-imageio-daemon (code=exited,
>>>> status=1/FAILURE)
>>>> Main PID: 11345 (code=exited, status=1/FAILURE)
>>>>
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt
>>>> ImageIO Daemon.
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Unit
>>>> ovirt-imageio-daemon.service entered failed state.
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]:
>>>> ovirt-imageio-daemon.service failed.
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]:
>>>> ovirt-imageio-daemon.service holdoff time over, scheduling restart.
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]: start request repeated
>>>> too quickly for ovirt-imageio-daemon.service
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt
>>>> ImageIO Daemon.
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]: Unit
>>>> ovirt-imageio-daemon.service entered failed state.
>>>> Sep 04 16:55:16 glustermount.goku systemd[1]:
>>>> ovirt-imageio-daemon.service failed.
>>>>
>>>> Output of:
>>>>
>>>> On Wed, Sep 5, 2018 at 11:35 AM, Simone Tiraboschi <stirabos(a)redhat.com
>>>> > wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Sep 5, 2018 at 11:10 AM Sakhi Hadebe <sakhi(a)sanren.ac.za>
>>>>> wrote:
>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>> The host-deploy logs show the errors below:
>>>>>>
>>>>>> [root@garlic engine-logs-2018-09-05T08:48:22Z]# cat
>>>>>> /var/log/ovirt-hosted-engine-setup/engine-logs-2018-09-05T08
>>>>>> \:34\:55Z/ovirt-engine/host-deploy/ovirt-host-deploy-20180
>>>>>> 905103605-garlic.sanren.ac.za-543b536b.log | grep -i error
>>>>>> 2018-09-05 10:35:46,909+0200 DEBUG otopi.context
>>>>>> context.dumpEnvironment:869 ENV BASE/error=bool:'False'
>>>>>> 2018-09-05 10:35:47,116 [ERROR] __main__.py:8011:MainThread
>>>>>> @identity.py:145 - Reload of consumer identity cert
>>>>>> /etc/pki/consumer/cert.pem raised an exception with msg: [Errno 2] No such
>>>>>> file or directory: '/etc/pki/consumer/key.pem'
>>>>>> 2018-09-05 10:35:47,383+0200 DEBUG otopi.context
>>>>>> context.dumpEnvironment:869 ENV BASE/error=bool:'False'
>>>>>> 2018-09-05 10:35:47,593 [ERROR] __main__.py:8011:MainThread
>>>>>> @identity.py:145 - Reload of consumer identity cert
>>>>>> /etc/pki/consumer/cert.pem raised an exception with msg: [Errno 2] No such
>>>>>> file or directory: '/etc/pki/consumer/key.pem'
>>>>>> 2018-09-05 10:35:48,245+0200 DEBUG otopi.context
>>>>>> context.dumpEnvironment:869 ENV BASE/error=bool:'False'
>>>>>> Job for ovirt-imageio-daemon.service failed because the control
>>>>>> process exited with error code. See "systemctl status
>>>>>> ovirt-imageio-daemon.service" and "journalctl -xe" for details.
>>>>>> RuntimeError: Failed to start service 'ovirt-imageio-daemon'
>>>>>> 2018-09-05 10:36:05,098+0200 ERROR otopi.context
>>>>>> context._executeMethod:152 Failed to execute stage 'Closing up': Failed to
>>>>>> start service 'ovirt-imageio-daemon'
>>>>>> 2018-09-05 10:36:05,099+0200 DEBUG otopi.context
>>>>>> context.dumpEnvironment:869 ENV BASE/error=bool:'True'
>>>>>> 2018-09-05 10:36:05,099+0200 DEBUG otopi.context
>>>>>> context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(<type
>>>>>> 'exceptions.RuntimeError'>, RuntimeError("Failed to start service
>>>>>> 'ovirt-imageio-daemon'",), <traceback object at 0x7f4de25ff320>)]'
>>>>>> 2018-09-05 10:36:05,106+0200 DEBUG otopi.context
>>>>>> context.dumpEnvironment:869 ENV BASE/error=bool:'True'
>>>>>> 2018-09-05 10:36:05,106+0200 DEBUG otopi.context
>>>>>> context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(<type
>>>>>> 'exceptions.RuntimeError'>, RuntimeError("Failed to start service
>>>>>> 'ovirt-imageio-daemon'",), <traceback object at 0x7f4de25ff320>)]'
>>>>>>
>>>>>> I couldn't find anything helpful on the internet.
>>>>>>
>>>>>
>>>>> Is there something relevant in the output of
>>>>> systemctl status ovirt-imageio-daemon.service
>>>>> and
>>>>> journalctl -xe -u ovirt-imageio-daemon.service
>>>>> ?
>>>>>
>>>>>
>>>>>
>>>>>>
>>>>>> On Tue, Sep 4, 2018 at 6:46 PM, Simone Tiraboschi <
>>>>>> stirabos(a)redhat.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Sep 4, 2018 at 6:07 PM Sakhi Hadebe <sakhi(a)sanren.ac.za>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Sahina,
>>>>>>>>
>>>>>>>> I am sorry, I can't reproduce the error nor access the logs, since I
>>>>>>>> did a fresh install on the nodes. However, now I can't even get that far,
>>>>>>>> because the engine deployment fails to bring the host up:
>>>>>>>>
>>>>>>>>
>>>>>>>> [ INFO ] TASK [Wait for the host to be up]
>>>>>>>> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts":
>>>>>>>> {"ovirt_hosts": [{"address": "goku.sanren.ac.za",
>>>>>>>> "affinity_labels": [], "auto_numa_status": "unknown", "certificate":
>>>>>>>> {"organization": "sanren.ac.za", "subject": "O=sanren.ac.za,CN=
>>>>>>>> goku.sanren.ac.za"}, "cluster": {"href":
>>>>>>>> "/ovirt-engine/api/clusters/1ca368cc-b052-11e8-b7de-00163e008187",
>>>>>>>> "id": "1ca368cc-b052-11e8-b7de-00163e008187"}, "comment": "",
>>>>>>>> "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled":
>>>>>>>> false}, "devices": [], "external_network_provider_configurations":
>>>>>>>> [], "external_status": "ok", "hardware_information":
>>>>>>>> {"supported_rng_sources": []}, "hooks": [], "href":
>>>>>>>> "/ovirt-engine/api/hosts/1c575995-70b1-43f7-b348-4a9788e070cd",
>>>>>>>> "id": "1c575995-70b1-43f7-b348-4a9788e070cd", "katello_errata":
>>>>>>>> [], "kdump_status": "unknown", "ksm": {"enabled": false},
>>>>>>>> "max_scheduling_memory": 0, "memory": 0, "name": "goku.sanren.ac.za",
>>>>>>>> "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported":
>>>>>>>> false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port":
>>>>>>>> 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false,
>>>>>>>> "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp",
>>>>>>>> "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh":
>>>>>>>> {"fingerprint": "SHA256:B3/PDH551EFid93fm6PoRryi6/cXuVE8yNgiiiROh84",
>>>>>>>> "port": 22}, "statistics": [], "status": "install_failed",
>>>>>>>> "storage_connection_extensions": [], "summary": {"total": 0},
>>>>>>>> "tags": [], "transparent_huge_pages": {"enabled": false}, "type":
>>>>>>>> "ovirt_node", "unmanaged_networks": [], "update_available": false}]},
>>>>>>>> "attempts": 120, "changed": false}
>>>>>>>>
>>>>>>>> "status": "install_failed"
>>>>>>>
>>>>>>> You have to check the host-deploy logs to get a detailed error message.
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> Please help.
>>>>>>>>
>>>>>>>> On Mon, Sep 3, 2018 at 1:34 PM, Sahina Bose <sabose(a)redhat.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Wed, Aug 29, 2018 at 8:39 PM, Sakhi Hadebe <sakhi(a)sanren.ac.za>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> I am sorry to bother you again.
>>>>>>>>>>
>>>>>>>>>> I am trying to deploy an oVirt engine for oVirtNode-4.2.5.1. I
>>>>>>>>>> get the same error I encountered before:
>>>>>>>>>>
>>>>>>>>>> [ INFO ] TASK [Add glusterfs storage domain]
>>>>>>>>>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail
>>>>>>>>>> is "[Problem while trying to mount target]". HTTP response code
>>>>>>>>>> is 400.
>>>>>>>>>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false,
>>>>>>>>>> "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem
>>>>>>>>>> while trying to mount target]\". HTTP response code is 400."}
>>>>>>>>>> Please specify the storage you would like to use
>>>>>>>>>> (glusterfs, iscsi, fc, nfs)[nfs]:
>>>>>>>>>>
>>>>>>>>>> The glusterd daemon is running.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> mounting 172.16.4.18:/engine at /rhev/data-center/mnt/glusterS
>>>>>>>>> D/172.16.4.18:_engine (mount:204)
>>>>>>>>> 2018-08-29 16:47:28,846+0200 ERROR (jsonrpc/3) [storage.HSM] Could
>>>>>>>>> not connect to storageServer (hsm:2398)
>>>>>>>>>
>>>>>>>>> Can you try to see if you are able to mount 172.16.4.18:/engine
>>>>>>>>> on the server you're deploying Hosted Engine using "mount -t glusterfs
>>>>>>>>> 172.16.4.18:/engine /mnt/test"
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> During the deployment of the engine, it sets the engine entry in
>>>>>>>>>> the /etc/hosts file with an IP address of 192.168.124.*, which it gets from
>>>>>>>>>> the virbr0 bridge interface. I stopped the bridge and deleted it, but it
>>>>>>>>>> still gives the same error. I'm not sure what causes it to use that interface.
>>>>>>>>>> Please help!
>>>>>>>>>>
>>>>>>>>>> But I gave the engine an IP of 192.168.1.10, on the same subnet as my
>>>>>>>>>> gateway and my ovirtmgmt bridge. Attached are the ifconfig output of my
>>>>>>>>>> node, engine.log, and vdsm.log.
>>>>>>>>>>
>>>>>>>>>> Your assistance is always appreciated
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Wed, Jul 11, 2018 at 11:47 AM, Sahina Bose <sabose(a)redhat.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> Is glusterd running on the server: goku.sanren.**
>>>>>>>>>>> There's an error
>>>>>>>>>>> Failed to get volume info: Command execution failed
>>>>>>>>>>> error: Connection failed. Please check if gluster daemon is
>>>>>>>>>>> operational
>>>>>>>>>>>
>>>>>>>>>>> Please check the volume status using "gluster volume status
>>>>>>>>>>> engine"
>>>>>>>>>>>
>>>>>>>>>>> and if all looks ok, attach the mount logs from
>>>>>>>>>>> /var/log/glusterfs
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Jul 11, 2018 at 1:57 PM, Sakhi Hadebe <
>>>>>>>>>>> sakhi(a)sanren.ac.za> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>> I have managed to fix the error by enabling DMA
>>>>>>>>>>>> virtualisation in the BIOS. I am now hit with a new error: it fails to add
>>>>>>>>>>>> a glusterfs storage domain:
>>>>>>>>>>>>
>>>>>>>>>>>> [ INFO ] TASK [Add glusterfs storage domain]
>>>>>>>>>>>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault
>>>>>>>>>>>> detail is "[Problem while trying to mount target]". HTTP response code is
>>>>>>>>>>>> 400.
>>>>>>>>>>>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false,
>>>>>>>>>>>> "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem
>>>>>>>>>>>> while trying to mount target]\". HTTP response code is 400."}
>>>>>>>>>>>> Please specify the storage you would like to use
>>>>>>>>>>>> (glusterfs, iscsi, fc, nfs)[nfs]:
>>>>>>>>>>>>
>>>>>>>>>>>> Attached are vdsm and engine log files.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Jul 11, 2018 at 9:57 AM, Sakhi Hadebe <
>>>>>>>>>>>> sakhi(a)sanren.ac.za> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Jul 11, 2018 at 9:33 AM, Sakhi Hadebe <
>>>>>>>>>>>>> sakhi(a)sanren.ac.za> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Below are the versions of packages installed. Please find the
>>>>>>>>>>>>>> logs attached.
>>>>>>>>>>>>>> Qemu:
>>>>>>>>>>>>>> ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
>>>>>>>>>>>>>> libvirt-daemon-driver-qemu-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> qemu-img-ev-2.10.0-21.el7_5.4.1.x86_64
>>>>>>>>>>>>>> qemu-kvm-ev-2.10.0-21.el7_5.4.1.x86_64
>>>>>>>>>>>>>> qemu-kvm-common-ev-2.10.0-21.el7_5.4.1.x86_64
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Libvirt installed packages:
>>>>>>>>>>>>>> libvirt-daemon-driver-storage-disk-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-config-nwfilter-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-driver-storage-iscsi-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-driver-network-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-libs-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-driver-secret-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-driver-storage-core-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-driver-storage-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-driver-qemu-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-python-3.9.0-1.el7.x86_64
>>>>>>>>>>>>>> libvirt-daemon-driver-nodedev-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-driver-storage-rbd-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-driver-storage-scsi-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-config-network-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-client-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-kvm-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-driver-storage-logical-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-driver-interface-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-lock-sanlock-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-driver-storage-mpath-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-driver-lxc-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>> libvirt-daemon-driver-nwfilter-3.9.0-14.el7_5.6.x86_64
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Virt-manager:
>>>>>>>>>>>>>> virt-manager-common-1.4.3-3.el7.noarch
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> oVirt:
>>>>>>>>>>>>>> [root@localhost network-scripts]# rpm -qa | grep ovirt
>>>>>>>>>>>>>> ovirt-setup-lib-1.1.4-1.el7.centos.noarch
>>>>>>>>>>>>>> cockpit-ovirt-dashboard-0.11.28-1.el7.noarch
>>>>>>>>>>>>>> ovirt-imageio-common-1.3.1.2-0.el7.centos.noarch
>>>>>>>>>>>>>> ovirt-vmconsole-host-1.0.5-4.el7.centos.noarch
>>>>>>>>>>>>>> ovirt-host-dependencies-4.2.3-1.el7.x86_64
>>>>>>>>>>>>>> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
>>>>>>>>>>>>>> ovirt-imageio-daemon-1.3.1.2-0.el7.centos.noarch
>>>>>>>>>>>>>> ovirt-host-4.2.3-1.el7.x86_64
>>>>>>>>>>>>>> python-ovirt-engine-sdk4-4.2.7-2.el7.x86_64
>>>>>>>>>>>>>> ovirt-host-deploy-1.7.4-1.el7.noarch
>>>>>>>>>>>>>> cockpit-machines-ovirt-169-1.el7.noarch
>>>>>>>>>>>>>> ovirt-hosted-engine-ha-2.2.14-1.el7.noarch
>>>>>>>>>>>>>> ovirt-vmconsole-1.0.5-4.el7.centos.noarch
>>>>>>>>>>>>>> ovirt-provider-ovn-driver-1.2.11-1.el7.noarch
>>>>>>>>>>>>>> ovirt-engine-appliance-4.2-20180626.1.el7.noarch
>>>>>>>>>>>>>> ovirt-release42-4.2.4-1.el7.noarch
>>>>>>>>>>>>>> ovirt-hosted-engine-setup-2.2.22.1-1.el7.noarch
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Wed, Jul 11, 2018 at 6:48 AM, Yedidyah Bar David <
>>>>>>>>>>>>>> didi(a)redhat.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Tue, Jul 10, 2018 at 11:32 PM, Sakhi Hadebe <
>>>>>>>>>>>>>>> sakhi(a)sanren.ac.za> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I did not select any CPU architecture. It doesn't give me
>>>>>>>>>>>>>>>> the option to select one. It only states the number of virtual CPUs and the
>>>>>>>>>>>>>>>> memory for the engine VM.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Looking at the documentation for installing
>>>>>>>>>>>>>>>> ovirt-release36.rpm, it does allow you to select the CPU, but not when
>>>>>>>>>>>>>>>> installing ovirt-release42.rpm.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Tuesday, July 10, 2018, Alastair Neil <
>>>>>>>>>>>>>>>> ajneil.tech(a)gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> What did you select as your CPU architecture when you
>>>>>>>>>>>>>>>>> created the cluster? It looks like the VM is trying to use a CPU type of
>>>>>>>>>>>>>>>>> "Custom". How many nodes are in your cluster? I suggest you specify the lowest
>>>>>>>>>>>>>>>>> common denominator of CPU architecture (e.g. Sandybridge) of the nodes as
>>>>>>>>>>>>>>>>> the CPU architecture of the cluster.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Tue, 10 Jul 2018 at 12:01, Sakhi Hadebe <
>>>>>>>>>>>>>>>>> sakhi(a)sanren.ac.za> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I have just re-installed CentOS 7 on 3 servers and have
>>>>>>>>>>>>>>>>>> configured gluster volumes following this documentation:
>>>>>>>>>>>>>>>>>> https://www.ovirt.org/blog/201
>>>>>>>>>>>>>>>>>> 6/03/up-and-running-with-ovirt-3-6/, But I have
>>>>>>>>>>>>>>>>>> installed
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> package.
>>>>>>>>>>>>>>>>>> Hosted-engine --deploy is failing with this error:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> "rhel7", "--virt-type", "kvm", "--memory", "16384",
>>>>>>>>>>>>>>>>>> "--vcpus", "4", "--network", "network=default,mac=00:16:3e:09:5e:5d,model=virtio",
>>>>>>>>>>>>>>>>>> "--disk", "/var/tmp/localvm0nnJH9/images
>>>>>>>>>>>>>>>>>> /eacac30d-0304-4c77-8753-6965e4b8c2e7/d494577e-027a-4209-895b-6132e6fc6b9a",
>>>>>>>>>>>>>>>>>> "--import", "--disk", "path=/var/tmp/localvm0nnJH9/seed.iso,device=cdrom",
>>>>>>>>>>>>>>>>>> "--noautoconsole", "--rng", "/dev/random", "--graphics", "vnc", "--video",
>>>>>>>>>>>>>>>>>> "vga", "--sound", "none", "--controller", "usb,model=none", "--memballoon",
>>>>>>>>>>>>>>>>>> "none", "--boot", "hd,menu=off", "--clock", "kvmclock_present=yes"],
>>>>>>>>>>>>>>>>>> "delta": "0:00:00.979003", "end": "2018-07-10 17:55:11.308555", "msg":
>>>>>>>>>>>>>>>>>> "non-zero return code", "rc": 1, "start": "2018-07-10 17:55:10.329552",
>>>>>>>>>>>>>>>>>> "stderr": "ERROR unsupported configuration: CPU mode 'custom' for x86_64
>>>>>>>>>>>>>>>>>> kvm domain on x86_64 host is not supported by hypervisor\nDomain
>>>>>>>>>>>>>>>>>> installation does not appear to have been successful.\nIf it was, you can
>>>>>>>>>>>>>>>>>> restart your domain by running:\n virsh --connect qemu:///system start
>>>>>>>>>>>>>>>>>> HostedEngineLocal\notherwise, please restart your installation.",
>>>>>>>>>>>>>>>>>> "stderr_lines": ["ERROR unsupported configuration: CPU mode 'custom' for
>>>>>>>>>>>>>>>>>> x86_64 kvm domain on x86_64 host is not supported by hypervisor", "Domain
>>>>>>>>>>>>>>>>>> installation does not appear to have been successful.", "If it was, you can
>>>>>>>>>>>>>>>>>> restart your domain by running:", " virsh --connect qemu:///system start
>>>>>>>>>>>>>>>>>> HostedEngineLocal", "otherwise, please restart your installation."],
>>>>>>>>>>>>>>>>>> "stdout": "\nStarting install...", "stdout_lines": ["", "Starting
>>>>>>>>>>>>>>>>>> install..."]}
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> This seems to be the phase where we create a local VM for
>>>>>>>>>>>>>>> the engine. We do this with plain virt-install, nothing fancy. Searching
>>>>>>>>>>>>>>> the net for "unsupported configuration: CPU mode 'custom'" finds other
>>>>>>>>>>>>>>> relevant reports; you might want to check them. You can see the command in
>>>>>>>>>>>>>>> bootstrap_local_vm.yml.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Please check/share versions of relevant packages (libvirt*,
>>>>>>>>>>>>>>> qemu*, etc) and relevant logs (libvirt).
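>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> A hedged one-liner for collecting those package versions:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> rpm -qa 'libvirt*' 'qemu*' | sort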
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Also updating the subject line and adding Simone.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Best regards,
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> Didi
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>
>
>
>
--
Regards,
Sakhi Hadebe
Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR
Tel: +27 12 841 2308 <+27128414213>
Fax: +27 12 841 4223 <+27128414223>
Cell: +27 71 331 9622 <+27823034657>
Email: sakhi(a)sanren.ac.za <shadebe(a)csir.co.za>
add gluster cluster to deployed cluster
by p.staniforth@leedsbeckett.ac.uk
Hello People,
I have a 3-node cluster that has just the virt service deployed; how would I add the gluster service to it (a rough sketch of the gluster side follows below)? We have installed 2 extra disks in each node, and the nodes are running oVirt Node 4.2.5.1.
After we have the gluster service running, how easy would it be to migrate an existing engine to a self-hosted engine?
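A hedged sketch of the gluster side, assuming the new disks have been formatted and mounted as bricks under /gluster_bricks on each node — the node names and brick paths are placeholders, not taken from this thread:

# Run from one node to form the trusted pool
gluster peer probe node2
gluster peer probe node3
# Replica-3 volume, one brick per node
gluster volume create data replica 3 \
    node1:/gluster_bricks/data/data \
    node2:/gluster_bricks/data/data \
    node3:/gluster_bricks/data/data
gluster volume start data

Once a volume exists, the cluster's gluster service can be enabled from the engine UI (edit the cluster and tick "Enable Gluster Service"), though mixing virt and gluster on an existing cluster is worth testing carefully first.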
Thanks,
Paul S.
Clean stuck VM from ovirt dashboard
by Alex K
Hi all,
I have a failed clone of a VM, which led to the clone being listed as a VM
in the "Image locked" state, as if the clone were still in progress. No tasks
are shown, and the clone is definitely not continuing. Also, there are no
disk images listed for this VM in the Disks tab.
How can I remove this phantom VM from the GUI? There might be some steps I can
do in the engine DB to make the engine forget about this VM; see the sketch below.
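A hedged sketch of the DB-side route: rather than hand-written SQL, oVirt ships a helper script for exactly this kind of stuck lock. Back up the engine database first, and substitute the real VM ID; the flags below are from memory of the 4.1-era dbutils and worth double-checking with --help:

# List entities the engine currently considers locked
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -q
# Unlock the stuck VM (ID from the UI or the query above)
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t vm <vm-id>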
I am running oVirt 4.1.9. (I know it's not the latest; this will be the case for
some time, until we prepare to move to 4.2.x.)
Thanx,
Alex
hosted-engine --check-liveliness returns "Hosted Engine is not up!" for hosts in another cluster
by omprakashp@cdac.in
Hi ,
I am just getting started with oVirt. I have set up 3 oVirt Nodes (all on physical hosts) and deployed a self-hosted engine with iSCSI storage on one of the hosts.
In the engine UI, I have created two clusters (C1, C2), and the engine is running in C1.
When executing "hosted-engine --check-liveliness" on hosts in the C2 cluster, it says "Hosted Engine is not up!". Is this an issue? Or does this mean I have to deploy the hosted engine in the C2 cluster too?
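A hedged reading of this behaviour: --check-liveliness relies on the local hosted-engine configuration, which only exists on hosts that were deployed as hosted-engine hosts, so on plain C2 hosts the check cannot report the engine as up regardless of its real state. The usual way to verify is from a host that is part of the hosted-engine deployment:

# On the C1 host where the hosted engine was deployed:
hosted-engine --vm-status
hosted-engine --check-liveliness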
Moving all disks to new storage domain
by Simon Vincent
I need to decommission an old SAN, so I have connected a new one to oVirt
as a new storage domain. I would like to use a Python script to move all
disks/snapshots/everything over to the new storage domain, but I am having
some problems.
I am trying to search for all VMs using a particular storage domain, but it
is not working how I would like. The code I am using is:
vms = vms_service.list(search='Storage=team-data')
However, this returns VMs that are using either of the storage domains 'team-data'
and 'team-data2'. Is this a bug, or is there another way to return VMs from
just 'team-data'?
My script so far is:
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# ovaddress, username and password are defined earlier in the full script
connection = sdk.Connection(
    url=ovaddress,
    username=username,
    password=password,
    insecure=True
)

system_service = connection.system_service()
vms_service = system_service.vms_service()
disks_service = system_service.disks_service()

# Find all VMs on the source storage domain
vms = vms_service.list(search='Storage=team-data')

# Look up the destination storage domain
sds_service = system_service.storage_domains_service()
dest_sd = sds_service.list(search='name=team-data2')
if len(dest_sd) != 1:
    exit()
dest_sd = dest_sd[0]

for vm in vms:
    print(vm.name)
    vm_service = vms_service.vm_service(vm.id)
    disk_attachments = vm_service.disk_attachments_service().list()
    for disk_attachment in disk_attachments:
        disk = connection.follow_link(disk_attachment.disk)
        print("  disk: " + disk.name)
        disk_service = disks_service.disk_service(disk.id)
        # Move the disk, then poll until it returns to the OK state
        disk_service.move(storage_domain=dest_sd)
        while True:
            print("Waiting for movement to complete ...")
            time.sleep(10)
            disk = disk_service.get()
            if disk.status == types.DiskStatus.OK:
                break

connection.close()
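One hedged workaround worth trying, assuming the engine's search backend follows the admin portal convention where a double-quoted value is matched exactly while an unquoted value can match as a substring or prefix:

vms = vms_service.list(search='Storage="team-data"')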
supermicro hardware infrastructure
by l.savarese@storelink.it
Hello everybody,
I would like to install oVirt 4.2.6 on the hardware infrastructure below:
2 Supermicro TwinServer systems (2029TP-HC0R), with one 2-port 10G SFP+ adapter (AOC-MTGN-I2SM-O) installed on each node
1 QSAN XS1200 Series storage array, connected to the servers via 10G iSCSI
Are you aware of any hardware incompatibilities?
Do you have any advice?
Best regards
Luigi