ISO/NFS
by gmail.com
Hello everyone.
I have oVirt with 2 datacenters. In the first datacenter I've created an
ISO/NFS domain on an NFS server external to oVirt.
When I try to create an ISO/NFS domain in the second datacenter using the same
NFS server/export path, I get an error like this:
"Failed to retrieve existing storage domain information."
/var/log/vdsm/vdsm.log:
Thread-863::INFO::2015-10-02 16:25:44,670::logUtils::44::dispatcher::(wrapper) Run and protect: disconnectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': '172.17.10.19:/exports/iso', 'iqn': '', 'user': '', 'tpgt': '1', 'protocol_version': '4', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None)
Thread-863::ERROR::2015-10-02 16:25:44,681::hsm::2547::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2543, in disconnectStorageServer
    conObj.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 336, in disconnect
    return self._mountCon.disconnect()
  File "/usr/share/vdsm/storage/storageServer.py", line 237, in disconnect
    self._mount.umount(True, True)
  File "/usr/share/vdsm/storage/mount.py", line 254, in umount
    return self._runcmd(cmd, timeout)
  File "/usr/share/vdsm/storage/mount.py", line 239, in _runcmd
    raise MountError(rc, ";".join((out, err)))
MountError: (1, ';umount: /rhev/data-center/mnt/172.17.10.19:_exports_iso: not found\n')
Thread-863::INFO::2015-10-02 16:25:44,750::logUtils::47::dispatcher::(wrapper) Run and protect: disconnectStorageServer, Return response: {'statuslist': [{'status': 477, 'id': '00000000-0000-0000-0000-000000000000'}]}
At the same time, I can mount the NFS volume by hand without any errors.
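For example, a manual mount like this succeeds (the mount options are my guess at what vdsm uses, based on the protocol_version '4' in the log above):
mkdir -p /mnt/isotest
mount -t nfs -o vers=4,soft,timeo=600,retrans=6 172.17.10.19:/exports/iso /mnt/isotest
ls /mnt/isotest
umount /mnt/isotest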
Can I use the same NFS server for an ISO domain in a different datacenter?
Is this a bug or a technical limitation?
Thanks for your answers.
[QE][ACTION REQUIRED] oVirt 3.6.0 status
by Sandro Bonazzola
Hi,
We plan to start composing the next milestone of oVirt 3.6.0 on *2015-10-12
08:00 UTC* from the 3.6 branches.
There are still 234 bugs [1] targeted to 3.6.0:
Whiteboard     RC   GA   Total
docs           15    0      15
dwh             1    0       1
external        0    1       1
gluster        40    0      40
i18n            2    0       2
infra          23    6      29
integration     0    1       1
network        27    1      28
node            4    0       4
reports         1    0       1
sla            36    2      38
storage        49    9      58
ux             10    0      10
virt            5    1       6
Total         213   21     234
Maintainers must scrub them and push non-critical bugs to z-stream releases.
There is 1 acknowledged blocker for 3.6.0:
Bug 1196640 - [Monitoring] Network utilisation is not shown for the VM
And there are 21 bugs suggested as blockers for 3.6.0:
Bug ID   Status    Whiteboard   Summary
1259441  NEW       gluster      Can't create new Gluster storage domain - Permission denied
1261822  NEW       integration  BUILD WGT 3.6
1259468  NEW       network      Setupnetworks fails from time to time with error 'Failed to bring interface up'
1262026  NEW       network      Host booted up after upgrading it 3.5.4->3.6 with rhevm bridge with DEFROUTE=no.
1262051  NEW       network      Host move from 'up' to 'connecting' and back to 'up' from time to time
1243811  POST      sla          vm with dedicate host fails to run on other host, if dedicated host is in maintenance
1262293  POST      sla          Creating a VM with Foreman fails if cluster has more than one CPU profile
1239297  ASSIGNED  storage      Logical disk name is not showing up
1118349  NEW       storage      [vdsm] Creating DataCenter 3.5 using master domain V1 fails with InquireNotSupportedError
1250540  NEW       storage      Re-attaching fresh export domain fails - hsm::2543::Storage.HSM::(disconnectStorageServer) Could not disconnect from storageServer
1253790  NEW       storage      [BLOCKED] consume fix for "iscsi_session recovery_tmo revert back to default when a path becomes active"
1253975  NEW       storage      [vdsm] extendVolumeSize task is not cleared in case of a live merge failure for a volume that was extended
1151838  NEW       storage      [vdsm] Scan alignment fails with a VirtAlignError
1238239  NEW       storage      Deployment of hosted engine failed on ISCSI storage with 'Error block device action'
1257240  NEW       storage      Template's disk format is wrong
1261953  NEW       storage      [vdsm] Host fails to connect to storage pool 'MiscBlockReadException: Internal block device read failure'
1261980  NEW       storage      failed LSM due to connection lost with qemu process
1138144  NEW       storage      [BLOCKED] Failed to autorecover storage domain after unblocking connection with host
1253756  POST      storage      [BLOCKED] libvirt reports physical=0 for COW2 volumes on block storage
1251008  POST      storage      [BLOCKED] libvirt reports physical=0 for COW2 volumes on block storage
1258901  POST      storage      [UX] Toggling wad property while vm is up shouldn't be greyed out
There are 17 bugs marked as regressions to be solved in 3.6.0 [2].
Given the current bug status, the release criteria are not met [3], so we need
to review bug status when planning the RC.
Action items:
- Developers: check the suggested blockers, fix acknowledged blockers and
regressions, and re-target the remaining bugs.
- Please check Jenkins status for the 3.6 jobs and sync with the relevant
maintainers if there are issues.
- Please fill in the release notes; the page has been created here [4].
- Please test the oVirt 3.6 nightly snapshot and check for regressions from 3.5.
[1]
https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_milestone%3Aov...
status%3Anew%2Cassigned%2Cpost
[2]
https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_milestone%3Aov...
[3]
http://www.ovirt.org/OVirt_3.6_Release_Management#Candidate_Release_Criteria
[4] http://www.ovirt.org/OVirt_3.6_Release_Notes
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Ovirtmgmt not on team device
by Johan Kooijman
Hi all,
I'm adding my first CentOS 7 host to my cluster today, but I'm running into an
issue. When setting up the network for the new host I don't have the ability
to attach ovirtmgmt to the team I created; see screenshot:
http://imgur.com/k8GWwcK
The team itself, however, works perfectly fine:
[root@hv15]# teamdctl team0 state view
setup:
  runner: lacp
ports:
  ens2f0
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
    runner:
      aggregator ID: 4, Selected
      selected: yes
      state: current
  ens2f1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
    runner:
      aggregator ID: 4, Selected
      selected: yes
      state: current
runner:
  active: yes
  fast rate: no
With CentOS 6 and bonding I did not have this issue. Am I missing something
here?
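For what it's worth, vdsClient can show which NICs and bondings VDSM itself detected on the host; a quick check looks like this (the grep pattern is just a rough filter of mine):
[root@hv15]# vdsClient -s 0 getVdsCaps | grep -E 'nics|bondings|team'
If the team device doesn't show up there at all, that would explain why the engine never offers it for ovirtmgmt.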
--
Met vriendelijke groeten / With kind regards,
Johan Kooijman
Troubles after upgrading oVirt from 3.4 to 3.5
by Andy Michielsen
Hello,
I had an issue after installing some updates on my CentOS 6.7 hosts
(engine, node1 and node2).
It seems that the new Java version has higher security requirements.
So I decided to take the plunge and upgrade everything to the new 3.5 version.
At first everything seemed to go just fine and I was able to start some of my
virtual machines. But 3 days later, after the weekend, my users noticed
that it had stopped working.
I'm now unable to get the hosts up in the oVirt engine, and my storage
domains are unavailable as well.
Can someone tell me what I can do, check, or change to get everything up again?
I can send the sos logs via Google Drive.
Kind regards.
Problems upgrading Ovirt 3.5.4 to 3.6 RC
by Adrian Garay
I followed the instructions here
<http://www.ovirt.org/OVirt_3.6_Release_Notes> on upgrading to oVirt
3.6 RC from 3.5.4 and have encountered a few problems.
My oVirt 3.5.4 test environment consisted of:
1 CentOS 7.1 host running the hosted engine, stored on a separate NFS server
1 CentOS 7.1 oVirt engine VM
With some research I was able to solve two of the three issues I've
experienced. I'll list them here for academia's sake - and perhaps they point
to a misstep on my behalf that is causing the third.
1. Upon a "successful" upgrade, the admin@local account was expired.
The problem is documented here
<https://bugzilla.redhat.com/show_bug.cgi?id=1261382> and is currently
caused by following the upgrade instructions as seen here
<http://www.ovirt.org/OVirt_3.6_Release_Notes#Install_.2F_Upgrade_from_pre...>.
The solution was to run the following from the ovirt-engine VM (they may not
all have been necessary, it was late!):
a. ovirt-aaa-jdbc-tool
--db-config=/etc/ovirt-engine/aaa/internal.properties user
password-reset admin --force
b. ovirt-aaa-jdbc-tool
--db-config=/etc/ovirt-engine/aaa/internal.properties user
password-reset admin --password-valid-to="2019-01-01 12:00:00Z"
c. ovirt-aaa-jdbc-tool
--db-config=/etc/ovirt-engine/aaa/internal.properties user edit admin
--account-valid-from="2014-01-01 12:00:00Z"
--account-valid-to="2019-01-01 12:00:00Z"
d. ovirt-aaa-jdbc-tool
--db-config=/etc/ovirt-engine/aaa/internal.properties user unlock admin
2. Rebooting the CentOS 7.1 host caused a loss of the default gateway. The
engine does not allow you to modify the host because it is in use, and
modifying /etc/sysconfig/network-scripts is undone by VDSM upon the next
reboot. I assume this worked in the past because I had a GATEWAY=xxxx line
in /etc/sysconfig/network as a pre-oVirt relic. The solution here was to add
gateway and defaultRoute fields using the vdsClient command line utility
(setSafeNetworkConfig makes the new configuration persistent across reboots):
a. vdsClient -s 0 setupNetworks
networks='{ovirtmgmt:{ipaddr:10.1.0.21,netmask:255.255.254.0,bonding:bond0,bridged:true,gateway:10.1.1.254,defaultRoute:True}}'
b. vdsClient -s 0 setSafeNetworkConfig
Now for the issue I can't solve. When I reboot the CentOS 7.1 host I
get the following:
[root@ovirt-one /]# hosted-engine --vm-status
You must run deploy first
I then notice that the NFS share for the hosted engine is not mounted and
that ovirt-ha-agent.service failed to start itself at boot:
[root@ovirt-one /]# systemctl status ovirt-ha-agent.service
ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled)
   Active: failed (Result: exit-code) since Tue 2015-09-29 12:17:55 CDT; 9min ago
  Process: 1424 ExecStop=/usr/lib/systemd/systemd-ovirt-ha-agent stop (code=exited, status=0/SUCCESS)
  Process: 1210 ExecStart=/usr/lib/systemd/systemd-ovirt-ha-agent start (code=exited, status=0/SUCCESS)
 Main PID: 1377 (code=exited, status=254)
   CGroup: /system.slice/ovirt-ha-agent.service
Sep 29 12:17:55 ovirt-one.thaultanklines.com systemd-ovirt-ha-agent[1210]: Starting ovirt-ha-agent: [ OK ]
Sep 29 12:17:55 ovirt-one.thaultanklines.com systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
Sep 29 12:17:55 ovirt-one.thaultanklines.com ovirt-ha-agent[1377]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Service vdsmd is not running and the admin is responsible for starting it. Shutting down.
Sep 29 12:17:55 ovirt-one.thaultanklines.com systemd[1]: ovirt-ha-agent.service: main process exited, code=exited, status=254/n/a
Sep 29 12:17:55 ovirt-one.thaultanklines.com systemd[1]: Unit ovirt-ha-agent.service entered failed state.
Manually starting ovirt-ha-agent.service works: it then correctly mounts the
hosted engine NFS share, everything runs, and I can eventually start the
hosted engine. Why would ovirt-ha-agent.service attempt to start before VDSM
was ready?
Snippet from /usr/lib/systemd/system/ovirt-ha-agent.service
[Unit]
Description=oVirt Hosted Engine High Availability Monitoring Agent
Wants=ovirt-ha-broker.service
Wants=vdsmd.service
Wants=sanlock.service
After=ovirt-ha-broker.service
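If it's simply a missing ordering dependency - the unit has Wants=vdsmd.service but only After=ovirt-ha-broker.service - I'm tempted to test a systemd drop-in like this (untested sketch; the drop-in file name is my own):
mkdir -p /etc/systemd/system/ovirt-ha-agent.service.d
cat > /etc/systemd/system/ovirt-ha-agent.service.d/10-after-vdsmd.conf <<'EOF'
[Unit]
After=vdsmd.service
EOF
systemctl daemon-reload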
Any help would be appreciated!
Admin@internal login problems with clean install 3.6 RC
by Joop
I just installed 3.6 RC and got "Cannot Login. User Account has expired,
Please contact your system administrator." in the web UI. In engine.log I
see the following:
2015-09-30 09:59:52,150 INFO [org.ovirt.engine.core.bll.aaa.LoginBaseCommand] (default task-30) [] Can't login user 'admin' with authentication profile 'internal' because the authentication failed.
2015-09-30 09:59:52,162 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-30) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: The account for admin got expired. Please contact the system administrator.
2015-09-30 09:59:52,171 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-30) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: User admin@internal failed to log in.
2015-09-30 09:59:52,171 WARN [org.ovirt.engine.core.bll.aaa.LoginAdminUserCommand] (default task-30) [] CanDoAction of action 'LoginAdminUser' failed for user admin@internal. Reasons: USER_ACCOUNT_EXPIRED
Using ovirt-aaa-jdbc-tool user password-reset admin
--password-valid-to="2025-08-15 10:30:00Z" to set a new password doesn't
help, and restarting ovirt-engine doesn't work either.
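Since the log complains about the account being expired rather than the password, maybe the account validity needs extending as well; something along these lines (an untested guess, mirroring the ovirt-aaa-jdbc-tool commands from the recent 3.6 RC upgrade thread on this list):
ovirt-aaa-jdbc-tool --db-config=/etc/ovirt-engine/aaa/internal.properties user edit admin --account-valid-to="2025-08-15 10:30:00Z"
ovirt-aaa-jdbc-tool --db-config=/etc/ovirt-engine/aaa/internal.properties user unlock admin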
List of installed ovirt packages:
ebay-cors-filter-1.0.1-0.1.ovirt.el7.noarch
ovirt-engine-3.6.0-1.el7.centos.noarch
ovirt-engine-backend-3.6.0-1.el7.centos.noarch
ovirt-engine-cli-3.6.0.1-1.el7.centos.noarch
ovirt-engine-dbscripts-3.6.0-1.el7.centos.noarch
ovirt-engine-extension-aaa-jdbc-1.0.0-0.0.master.20150923074852.git46a67c9.el7.noarch
ovirt-engine-extensions-api-impl-3.6.0-1.el7.centos.noarch
ovirt-engine-lib-3.6.0-1.el7.centos.noarch
ovirt-engine-restapi-3.6.0-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.0.2-1.el7.centos.noarch
ovirt-engine-setup-3.6.0-1.el7.centos.noarch
ovirt-engine-setup-base-3.6.0-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-3.6.0-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-3.6.0-1.el7.centos.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-3.6.0-1.el7.centos.noarch
ovirt-engine-setup-plugin-websocket-proxy-3.6.0-1.el7.centos.noarch
ovirt-engine-tools-3.6.0-1.el7.centos.noarch
ovirt-engine-userportal-3.6.0-1.el7.centos.noarch
ovirt-engine-vmconsole-proxy-helper-3.6.0-1.el7.centos.noarch
ovirt-engine-webadmin-portal-3.6.0-1.el7.centos.noarch
ovirt-engine-websocket-proxy-3.6.0-1.el7.centos.noarch
ovirt-engine-wildfly-8.2.0-1.el7.x86_64
ovirt-engine-wildfly-overlay-001-2.el7.noarch
ovirt-host-deploy-1.4.0-0.0.master.20150806005708.git670e9c8.el7.noarch
ovirt-host-deploy-java-1.4.0-0.0.master.20150806005708.git670e9c8.el7.noarch
ovirt-image-uploader-3.6.0-1.el7.centos.noarch
ovirt-iso-uploader-3.6.0-1.el7.centos.noarch
ovirt-release36-001-0.5.beta.noarch
ovirt-vmconsole-1.0.0-0.0.master.20150821105434.gite14b2f0.el7.noarch
ovirt-vmconsole-proxy-1.0.0-0.0.master.20150821105434.gite14b2f0.el7.noarch
There is a BZ about this, but I would have expected the fix to be in this RC
release since the bug is almost 3 weeks old.
Is there anything else I can check?
Thanks,
Joop
New backgrounds for oVirt Live 4.0?
by Sandro Bonazzola
Hi,
is anybody interested in proposing a new background for oVirt Live 4.0?
Since it will be a major release, I think it would be nice to refresh its
look.
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com