Cannot log in to Engine Manager after creating a clone
by guillaume.pavese@interact-iv.com
Hello,
Fresh 4.3 Cluster :
I imported a 4.2 export domain and copied the VM onto the newly provisioned Gluster volume with VDO.
I then tried to create a clone of a 50GB thin-provisioned VM and got disconnected.
This morning I cannot log in to the Engine Manager, even after rebooting hosted-engine: I get this message in the UI and logs when I click "Administration Portal":
2019-02-08 09:40:51,706+01 ERROR [org.ovirt.engine.core.aaa.servlet.SsoPostLoginServlet] (default task-1) [] org.apache.commons.lang.SerializationException: org.codehaus.jackson.map.JsonMappingException: Can not deserialize instance of java.util.LinkedHashMap out of START_ARRAY token
at [Source: java.io.StringReader@f832283; line: 724, column: 25] (through reference chain: org.ovirt.engine.core.common.action.CloneVmParameters["watchdog"]->org.ovirt.engine.core.common.businessentities.VmWatchdog["specParams"])
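For anyone hitting the same trace and wanting more context before filing a
bug, the full stack usually lands in the engine logs as well (stock paths
assumed here; this is only a sketch, adjust if your logs are relocated):
  # on the engine VM
  grep -l SerializationException /var/log/ovirt-engine/*.log
  grep -B2 -A20 SerializationException /var/log/ovirt-engine/engine.log | less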
Open_vSwitch no key error after upgrading to 4.2.8
by Jayme
I upgraded oVirt to 4.2.8 and now I am spammed with the following message
in every host's syslog. How can I stop or fix this error?
ovs-vsctl: ovs|00001|db_ctl_base|ERR|no key "odl_os_hostconfig_hostid" in
Open_vSwitch record "." column external_ids
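(In case it helps with triage: the message is ovs-vsctl looking up a key that
was never set in the local Open_vSwitch record. A quick way to inspect it, and
to set it only if you decide the key is actually wanted, is sketched below;
the value is just a placeholder, not something oVirt itself requires.)
  # list what is currently stored in external_ids on the local record
  ovs-vsctl get Open_vSwitch . external_ids
  # set the key only if you conclude it should exist (placeholder value)
  ovs-vsctl set Open_vSwitch . external_ids:odl_os_hostconfig_hostid="$(hostname -f)"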
[ANN] oVirt 4.3.0 is now generally available
by Sandro Bonazzola
The oVirt Project is excited to announce the general availability of oVirt
4.3.0, as of February 4th, 2019.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses over four hundred individual
changes and a wide range of enhancements across the engine, storage,
network, user interface, and analytics on top of the oVirt 4.2 series.
What's new in oVirt 4.3.0?
* Q35 chipset, with support for booting via UEFI and Secure Boot
* Skylake-server and AMD EPYC support
* New SMBus driver in the Windows guest tools
* Improved support for v2v
* OVA export / import of Templates
* Full support for live migration of High Performance VMs
* Microsoft Failover clustering support (SCSI Persistent Reservation) for
Direct LUN disks
* Hundreds of bug fixes on top of oVirt 4.2 series
* New VM portal details page (see a preview here:
https://imgur.com/a/ExINpci)
* New Cluster upgrade UI
* OVN security groups
* IPv6 (static host addresses)
* Support of Neutron from RDO OpenStack 13 as external network provider
* Support of using Skydive from RDO OpenStack 14 as Tech Preview
* Support for 3.6 and 4.0 data centers, clusters and hosts has been removed
* Now using PostgreSQL 10
* New metrics support using rsyslog instead of fluentd
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.
If you're managing more than one oVirt instance, OpenShift Origin or RDO, we
also recommend trying ManageIQ <http://manageiq.org/>.
In that case, please be sure to use the qc2 image and not the ova image.
Notes:
- oVirt Appliance is already available for CentOS 7 and Fedora 28
- oVirt Node NG is already available for CentOS 7 and Fedora 28 [2]
- oVirt Windows Guest Tools iso is already available [2]
Additional Resources:
* Read more about the oVirt 4.3.0 release highlights:
http://www.ovirt.org/release/4.3.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.0/
[2] http://resources.ovirt.org/pub/ovirt-4.3/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com
<https://red.ht/sig>
Retrieval of iSCSI targets failed.
by cbop.mail@gmail.com
Hello!
I was just trying to set up my first oVirt server via Cockpit, but it failed when I tried to connect it to my iSCSI storage.
These are the last lines of my vdsm.log:
2019-02-08 01:34:07,080+0100 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
2019-02-08 01:34:07,096+0100 INFO (jsonrpc/5) [api.host] START getAllVmIoTunePolicies() from=::1,36420 (api:48)
2019-02-08 01:34:07,096+0100 INFO (jsonrpc/5) [api.host] FINISH getAllVmIoTunePolicies return={'status': {'message': 'Done', 'code': 0}, 'io_tune_policies_dict': {'be244416-dc4b-4bee-b451-37c0517f19f7': {'policy': [], 'current_values': []}}} from=:
:1,36420 (api:54)
2019-02-08 01:34:07,097+0100 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmIoTunePolicies succeeded in 0.00 seconds (__init__:312)
2019-02-08 01:34:08,149+0100 INFO (jsonrpc/7) [vdsm.api] START discoverSendTargets(con={'ipv6_enabled': u'false', 'connection': u'20.20.20.20', 'password': '', 'port': u'3260', 'user': ''}, options=None) from=::ffff:192.168.122.144,35682, flow_id=6
a17a7b3-1dfa-424c-8978-652d73314daf, task_id=f8be43e7-9085-42bd-b526-c2adb9962cb4 (api:48)
2019-02-08 01:34:08,330+0100 INFO (jsonrpc/7) [vdsm.api] FINISH discoverSendTargets return={'fullTargets': ['20.20.20.20:3260,1 iqn.2019-02.eu.teamxenon:data', '20.20.20.20:3260,1 iqn.2019-02.eu.teamxenon:ovirt'], 'targets': ['iqn.2019-02.eu.teamxe
non:data', 'iqn.2019-02.eu.teamxenon:ovirt']} from=::ffff:192.168.122.144,35682, flow_id=6a17a7b3-1dfa-424c-8978-652d73314daf, task_id=f8be43e7-9085-42bd-b526-c2adb9962cb4 (api:54)
2019-02-08 01:34:08,331+0100 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call ISCSIConnection.discoverSendTargets succeeded in 0.18 seconds (__init__:312)
2019-02-08 01:34:10,739+0100 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList(options=None) from=internal, task_id=0507616a-5b31-4d88-897f-0d2411674ce3 (api:48)
2019-02-08 01:34:10,739+0100 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=0507616a-5b31-4d88-897f-0d2411674ce3 (api:54)
2019-02-08 01:34:10,740+0100 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:709)
As you can see, it finds the targets, but for some reason it still fails. The initiator name should be configured correctly.
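(A quick host-side cross-check, assuming iscsi-initiator-utils is installed
and using the portal and IQN from the log above - discovery succeeding while
the login fails usually points at the initiator name versus the target ACLs:)
  # the initiator name iscsid/vdsm will present; it must match an ACL on the target
  cat /etc/iscsi/initiatorname.iscsi
  # discovery already works according to vdsm.log
  iscsiadm -m discovery -t sendtargets -p 20.20.20.20:3260
  # a manual login shows the actual error the target returns
  iscsiadm -m node -T iqn.2019-02.eu.teamxenon:ovirt -p 20.20.20.20:3260 --login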
My iSCSI is configured as the following:
o- / ......................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- block .................................................................................................. [Storage Objects: 0]
| o- fileio ................................................................................................. [Storage Objects: 4]
| | o- code ......................................................... [/mnt/pool0/vdisks/code.img (200.0GiB) write-back activated]
| | | o- alua ................................................................................................... [ALUA Groups: 1]
| | | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
| | o- games ....................................................... [/mnt/pool1/vdisks/games.img (800.0GiB) write-back activated]
| | | o- alua ................................................................................................... [ALUA Groups: 1]
| | | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
| | o- music ....................................................... [/mnt/pool0/vdisks/music.img (400.0GiB) write-back activated]
| | | o- alua ................................................................................................... [ALUA Groups: 1]
| | | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
| | o- virtualization ................................................. [/mnt/pool0/vdisks/vm.img (100.0GiB) write-back activated]
| | o- alua ................................................................................................... [ALUA Groups: 1]
| | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
| o- pscsi .................................................................................................. [Storage Objects: 0]
| o- ramdisk ................................................................................................ [Storage Objects: 0]
o- iscsi ............................................................................................................ [Targets: 2]
| o- iqn.2019-02.eu.teamxenon:data ..................................................................................... [TPGs: 1]
| | o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
| | o- acls .......................................................................................................... [ACLs: 2]
| | | o- iqn.2019-02.eu.teamxenon:laptop ...................................................................... [Mapped LUNs: 3]
| | | | o- mapped_lun0 ................................................................................. [lun0 fileio/code (rw)]
| | | | o- mapped_lun1 ................................................................................ [lun1 fileio/music (rw)]
| | | | o- mapped_lun2 ................................................................................ [lun2 fileio/games (rw)]
| | | o- iqn.2019-02.eu.teamxenon:win10pro .................................................................... [Mapped LUNs: 3]
| | | o- mapped_lun0 ................................................................................. [lun0 fileio/code (rw)]
| | | o- mapped_lun1 ................................................................................ [lun1 fileio/music (rw)]
| | | o- mapped_lun2 ................................................................................ [lun2 fileio/games (rw)]
| | o- luns .......................................................................................................... [LUNs: 3]
| | | o- lun0 .................................................... [fileio/code (/mnt/pool0/vdisks/code.img) (default_tg_pt_gp)]
| | | o- lun1 .................................................. [fileio/music (/mnt/pool0/vdisks/music.img) (default_tg_pt_gp)]
| | | o- lun2 .................................................. [fileio/games (/mnt/pool1/vdisks/games.img) (default_tg_pt_gp)]
| | o- portals .................................................................................................... [Portals: 1]
| | o- 0.0.0.0:3260 ..................................................................................................... [OK]
| o- iqn.2019-02.eu.teamxenon:ovirt .................................................................................... [TPGs: 1]
| o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
| o- acls .......................................................................................................... [ACLs: 1]
| | o- iqn.2019-02.eu.teamxenon:ovirt ....................................................................... [Mapped LUNs: 1]
| | o- mapped_lun0 ....................................................................... [lun0 fileio/virtualization (rw)]
| o- luns .......................................................................................................... [LUNs: 1]
| | o- lun0 ............................................ [fileio/virtualization (/mnt/pool0/vdisks/vm.img) (default_tg_pt_gp)]
| o- portals .................................................................................................... [Portals: 1]
| o- 0.0.0.0:3260 ..................................................................................................... [OK]
o- loopback ......................................................................................................... [Targets: 0]
o- srpt ............................................................................................................. [Targets: 0]
I really don't know what the problem is and I haven't found an answer anywhere on the internet.
I hope you can help me out.
Roman
Network Performance Issues
by Bryan Sockel
Hi,
I currently have a 4-node oVirt cluster running. Each node is configured with an active-passive network setup, with each link being 10 Gb. Looking over the performance metrics collected via Observium, I am noticing that network traffic rarely exceeds 100 MB. I am seeing this across all four of my servers and my 2 storage arrays, which are also connected via the same 10 Gb links.
What is the best way to troubleshoot this problem?
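(One way to start narrowing it down: separate raw network capacity from
storage behaviour with a plain host-to-host throughput test. A sketch,
assuming iperf3 is installed on two hosts and port 5201 is reachable; the
client-side address is a placeholder:)
  # on host A
  iperf3 -s
  # on host B, pointed at host A's storage/management IP
  iperf3 -c 10.0.0.1 -t 30 -P 4
If that saturates the 10 Gb link, the bottleneck is more likely on the storage
or VM I/O side; if it tops out well below 10 Gb, look at the bond, NIC and
switch configuration first.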
Thanks
need network design advice for iSCSI
by John Florian
I have an existing 4.2 setup with 2 hosts, both with a quad-gbit NIC and
a QNAP TS-569 Pro NAS with twin gbit NIC and five 7k2 drives. At
present, I have 5 VLANs, each with its own subnet:
1. my "main" net (VLAN 1, 172.16.7.0/24)
2. ovirtmgmt (VLAN 100, 192.168.100.0/24)
3. four storage nets (VLANs 101-104, 192.168.101.0/24 - 192.168.104.0/24)
On the NAS, I enslaved both NICs into an 802.3ad LAG and then bound an IP
address for each of the four storage nets giving me:
* bond0.101@bond0: 192.168.101.101
* bond0.102@bond0: 192.168.102.102
* bond0.103@bond0: 192.168.103.103
* bond0.104@bond0: 192.168.104.104
The hosts are similar, but with all four NICs enslaved into an 802.3ad LAG:
Host 1:
* bond0.101@bond0: 192.168.101.203
* bond0.102@bond0: 192.168.102.203
* bond0.103@bond0: 192.168.103.203
* bond0.104@bond0: 192.168.104.203
Host 2:
* bond0.101@bond0: 192.168.101.204
* bond0.102@bond0: 192.168.102.204
* bond0.103@bond0: 192.168.103.204
* bond0.104@bond0: 192.168.104.204
I believe my performance could be better though. While running bonnie++
on a VM, the NAS reports top disk throughput around 70MB/s and the
network (both NICs) topping out around 90MB/s. I suspect I'm being hurt
by the load balancing across the NICs. I've played with various load
balancing options for the LAGs (src-dst-ip and src-dst-mac) but with
little difference in effect. Watching the resource monitor on the NAS,
I can see that one NIC almost exclusively transmits while the other
almost exclusively receives. Here's the bonnie report (my apologies
to those reading plain-text here):
Bonnie++ Benchmark results
Version 1.97       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
unamed           4G   267  97 75284  21 22775   8   718  97 43559   7 189.5   8
Latency             69048us   754ms     898ms     61246us   311ms     1126ms
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  6789  60 +++++ +++ 24948  75 14792  86 +++++ +++ 18163  51
Latency             33937us    1132us    1299us     528us      22us     458us
I keep seeing MPIO mentioned for iSCSI deployments, and now I'm trying to
get my head around how best to set it up, or even whether it would be
helpful at all. I only have one switch (a Catalyst 3750G) in this small
setup, so fault tolerance at that level isn't a goal.
So... what would the recommendation be? I've never done MPIO before but
know where it's at in the web UI at least.
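For what it's worth on the MPIO side: 802.3ad hashes each flow onto a single
physical link, so one iSCSI session never exceeds one NIC's worth of
bandwidth, while MPIO opens one session per storage subnet and lets
multipathd spread I/O across them. A rough host-level sketch follows; the
iface names are made up, the IQN is a placeholder, and I believe the
oVirt-native way to configure this is the iSCSI Multipathing sub-tab on the
data center rather than hand-run commands:
  # one iscsi iface per storage VLAN, pinned to that VLAN interface
  iscsiadm -m iface -I storage101 --op=new
  iscsiadm -m iface -I storage101 --op=update -n iface.net_ifacename -v bond0.101
  # repeat for bond0.102 .. bond0.104, then discover and log in through each iface
  iscsiadm -m discovery -t sendtargets -p 192.168.101.101:3260 -I storage101
  iscsiadm -m node -T <target-iqn> -p 192.168.101.101:3260 -I storage101 --login
  # each iface should now show up as a separate path
  multipath -ll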
--
John Florian
ovirt-node-4.3, deployment fails when moving hosted engine vm to gluster storage.
by feral
I have no idea what's wrong at this point. This is a very vanilla install of
3 nodes. The Hyperconverged wizard completes fine. The engine deployment
runs for hours and eventually fails with:
[ INFO ] TASK [oVirt.hosted-engine-setup : Check engine VM health]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed": true,
"cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
"0:00:00.340985", "end": "2019-02-06 11:44:48.836431", "rc": 0, "start":
"2019-02-06 11:44:48.495446", "stderr": "", "stderr_lines": [], "stdout":
"{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=12994
(Wed Feb 6 11:44:44
2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=12995 (Wed Feb 6
11:44:44
2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStop\\nstopped=False\\n\",
\"hostname\": \"ovirt-431.localdomain\", \"host-id\": 1, \"engine-status\":
{\"reason\": \"failed liveliness check\", \"health\": \"bad\", \"vm\":
\"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": false,
\"maintenance\": false, \"crc32\": \"5474927a\", \"local_conf_timestamp\":
12995, \"host-ts\": 12994}, \"global_maintenance\": false}",
"stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
true, \"extra\":
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=12994
(Wed Feb 6 11:44:44
2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=12995 (Wed Feb 6
11:44:44
2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStop\\nstopped=False\\n\",
\"hostname\": \"ovirt-431.localdomain\", \"host-id\": 1, \"engine-status\":
{\"reason\": \"failed liveliness check\", \"health\": \"bad\", \"vm\":
\"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": false,
\"maintenance\": false, \"crc32\": \"5474927a\", \"local_conf_timestamp\":
12995, \"host-ts\": 12994}, \"global_maintenance\": false}"]}
[ INFO ] TASK [oVirt.hosted-engine-setup : Check VM status at virt level]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : debug]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Fail if engine VM is not running]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Get target engine VM IP address]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Get VDSM's target engine VM
stats]
[ INFO ] changed: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Convert stats to JSON format]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Get target engine VM IP address
from VDSM stats]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : debug]
[ INFO ] ok: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Fail if Engine IP is different
from engine's he_fqdn resolved IP]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [oVirt.hosted-engine-setup : Fail is for any other reason the
engine didn't started]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
engine failed to start inside the engine VM; please check engine.log."}
---------------------------------------------------
I can't check the engine.log as I can't connect to the VM once this failure
occurs. I can ssh in prior to the VM being moved to gluster storage, but as
soon as it starts doing so, the VM never comes back online.
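(Two host-side commands that should still work at that point, in case it
helps anyone debugging the same failure - both are standard hosted-engine
tools, nothing exotic:)
  # current HA view of the engine VM
  hosted-engine --vm-status
  # serial console into the engine VM even when its network/engine is down,
  # to read /var/log/ovirt-engine/engine.log from inside
  hosted-engine --console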
--
_____
Fact:
1. Ninjas are mammals.
2. Ninjas fight ALL the time.
3. The purpose of the ninja is to flip out and kill people.
ovn bad gateway after update from 4.2.7 to 4.2.8
by Gianluca Cecchi
Hello,
At the moment (about two days ago) I have updated only the engine (external,
not self-hosted) from 4.2.7.5 to 4.2.8.2.
As soon as I start a VM with an OVN-based NIC for the first time, I get what
is shown below in ovirt-provider-ovn.log.
In the Admin GUI, if I try for example to start it via "Run Once", I get:
"
Error while executing action Run VM once: Failed to communicate with the
external provider, see log for additional details.
"
Any clue?
Thanks,
Gianluca
2019-01-29 17:23:20,554 root Starting server
2019-01-29 17:23:20,554 root Version: 1.2.18-1
2019-01-29 17:23:20,555 root Build date: 20190114151850
2019-01-29 17:23:20,555 root Githash: dae4c1d
2019-01-29 18:04:15,575 root Starting server
2019-01-29 18:04:15,576 root Version: 1.2.18-1
2019-01-29 18:04:15,576 root Build date: 20190114151850
2019-01-29 18:04:15,576 root Githash: dae4c1d
2019-02-01 14:26:58,316 root From: ::ffff:127.0.0.1:49582 Request: GET
/v2.0/ports
2019-02-01 14:26:58,317 root HTTPSConnectionPool(host='engine-host',
port=443): Max retries exceeded with url:
/ovirt-engine/sso/oauth/token-info (Caused by
NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at
0x7fe806166b90>: Failed to establish a new connection: [Errno -2] Name or
service not known',))
Traceback (most recent call last):
File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line 134,
in _handle_request
method, path_parts, content
File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", line
175, in handle_request
return self.call_response_handler(handler, content, parameters)
File "/usr/share/ovirt-provider-ovn/handlers/neutron.py", line 33, in
call_response_handler
TOKEN_HTTP_HEADER_FIELD_NAME, '')):
File "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 31, in
validate_token
return auth.core.plugin.validate_token(token)
File
"/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/authorization_by_username.py",
line 36, in validate_token
return self._is_user_name(token, _admin_user_name())
File
"/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/authorization_by_username.py",
line 47, in _is_user_name
timeout=AuthorizationByUserName._timeout())
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 131,
in get_token_info
timeout=timeout
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 54,
in wrapper
response = func(*args, **kwargs)
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 47,
in wrapper
raise BadGateway(e)
BadGateway: HTTPSConnectionPool(host='engine-host', port=443): Max retries
exceeded with url: /ovirt-engine/sso/oauth/token-info (Caused by
NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at
0x7fe806166b90>: Failed to establish a new connection: [Errno -2] Name or
service not known',))
2019-02-01 14:27:26,968 root From: ::ffff:127.0.0.1:49590 Request: GET
/v2.0/ports
2019-02-01 14:27:26,969 root HTTPSConnectionPool(host='engine-host',
port=443): Max retries exceeded with url:
/ovirt-engine/sso/oauth/token-info (Caused by
NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at
0x7fe80618df50>: Failed to establish a new connection: [Errno -2] Name or
service not known',))
Traceback (most recent call last):
File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line 134,
in _handle_request
method, path_parts, content
File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", line
175, in handle_request
return self.call_response_handler(handler, content, parameters)
File "/usr/share/ovirt-provider-ovn/handlers/neutron.py", line 33, in
call_response_handler
TOKEN_HTTP_HEADER_FIELD_NAME, '')):
File "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 31, in
validate_token
return auth.core.plugin.validate_token(token)
File
"/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/authorization_by_username.py",
line 36, in validate_token
return self._is_user_name(token, _admin_user_name())
File
"/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/authorization_by_username.py",
line 47, in _is_user_name
timeout=AuthorizationByUserName._timeout())
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 131,
in get_token_info
timeout=timeout
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 54,
in wrapper
response = func(*args, **kwargs)
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 47,
in wrapper
raise BadGateway(e)
BadGateway: HTTPSConnectionPool(host='engine-host', port=443): Max retries
exceeded with url: /ovirt-engine/sso/oauth/token-info (Caused by
NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at
0x7fe80618df50>: Failed to establish a new connection: [Errno -2] Name or
service not known',))
2019-02-01 14:29:17,412 root From: ::ffff:127.0.0.1:49616 Request: GET
/v2.0/ports
2019-02-01 14:29:17,412 root HTTPSConnectionPool(host='engine-host',
port=443): Max retries exceeded with url:
/ovirt-engine/sso/oauth/token-info (Caused by
NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at
0x7fe80618de50>: Failed to establish a new connection: [Errno -2] Name or
service not known',))
Traceback (most recent call last):
File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line 134,
in _handle_request
method, path_parts, content
File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", line
175, in handle_request
return self.call_response_handler(handler, content, parameters)
File "/usr/share/ovirt-provider-ovn/handlers/neutron.py", line 33, in
call_response_handler
TOKEN_HTTP_HEADER_FIELD_NAME, '')):
File "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 31, in
validate_token
return auth.core.plugin.validate_token(token)
File
"/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/authorization_by_username.py",
line 36, in validate_token
return self._is_user_name(token, _admin_user_name())
File
"/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/authorization_by_username.py",
line 47, in _is_user_name
timeout=AuthorizationByUserName._timeout())
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 131,
in get_token_info
timeout=timeout
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 54,
in wrapper
response = func(*args, **kwargs)
File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 47,
in wrapper
raise BadGateway(e)
BadGateway: HTTPSConnectionPool(host='engine-host', port=443): Max retries
exceeded with url: /ovirt-engine/sso/oauth/token-info (Caused by
NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at
0x7fe80618de50>: Failed to establish a new connection: [Errno -2] Name or
service not known',))
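(Reading the trace: the provider never reaches SSO because the engine name it
is configured with ('engine-host' above, possibly redacted) does not resolve
from where ovirt-provider-ovn runs. A couple of quick checks - the conf.d
path is the one engine-setup normally writes, adjust if yours differs:)
  # does the configured engine name actually resolve on this machine?
  getent hosts engine-host
  # review which engine URL/FQDN the provider ended up with after the update
  cat /etc/ovirt-provider-ovn/conf.d/*.conf
  # after fixing resolution or the configured name, restart the provider
  systemctl restart ovirt-provider-ovn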
vm autostart and priorities
by Hetz Ben Hamo
Hi,
I'm migrating my VMs to oVirt from ESXi and I've only got one left on the
ESXi side - a VM which provides my DNS, mail and DHCP services. Without
this VM, I can't access the internet.
The only issue that prevents me from migrating this VM to oVirt and "killing"
the ESXi machine is that I need this VM to start automatically when the oVirt
node powers up (auto start), so that if the power goes down and comes back,
my infrastructure is up and running again within minutes.
Is there a way to auto-start specific VMs or, better, to start VMs in a
specific order when the nodes boot? I think it's a very important feature.
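(One pattern that comes up for this, offered only as a sketch and not an
official oVirt feature: mark the VM as Highly Available in the engine so it
is restarted after host failures, and for the cold power-on case have a small
boot-time script on a box that survives the outage call the REST API once the
engine answers. The engine FQDN, VM id and password below are placeholders.)
  # wait until the engine API answers, then start the VM
  until curl -ks -u 'admin@internal:PASSWORD' \
        https://engine.example.com/ovirt-engine/api >/dev/null; do sleep 30; done
  curl -ks -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
       -d '<action/>' https://engine.example.com/ovirt-engine/api/vms/VM_ID/start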
Thanks