spice folder sharing
by Nathanaël Blanchet
Hi all,
Is there a native way to enable SPICE folder sharing as described here:
https://www.spice-space.org/docs/manual/manual.chunked/ar01s10.html
I guess such a hook is very simple to write.
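A minimal sketch of what such a before_vm_start vdsm hook might look like, written in shell and untested; the hook directory and the $_hook_domxml variable follow the usual vdsm hook conventions, so please verify them against your vdsm version:

#!/bin/bash
# Sketch: /usr/libexec/vdsm/hooks/before_vm_start/50_spice_webdav (path assumed)
# vdsm passes the libvirt domain XML to hooks via the file named in $_hook_domxml
domxml="${_hook_domxml:?must be run as a vdsm hook}"
# do nothing if the folder-sharing channel is already defined
grep -q 'org.spice-space.webdav.0' "$domxml" && exit 0
# inject the spiceport channel used by SPICE folder sharing just before </devices>
sed -i 's|</devices>|<channel type="spiceport"><source channel="org.spice-space.webdav.0"/><target type="virtio" name="org.spice-space.webdav.0"/></channel></devices>|' "$domxml"

The guest would still need the spice-webdavd service running, and the client has to support folder sharing (remote-viewer does), as the page above explains.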
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
Hosted engine setup shooting dirty pool
by Jamie Lawrence
Or at least, refusing to mount a dirty pool.
I have 4.1 set up, configured and functional, currently wired up with two VM hosts and three Gluster hosts. It is configured with a (temporary) NFS data storage domain, with the end-goal being two data domains on Gluster; one for the hosted engine, one for other VMs.
The issue is that `hosted-engine` sees any gluster volumes offered as dirty. (I have been creating them via the command line right before attempting the hosted-engine migration; there is nothing in them at that stage.) I *think* what is happening is that ovirt-engine notices a newly created volume and has its way with the volume (visible in the GUI; the volume appears in the list), and the hosted-engine installer becomes upset about that. What I don’t know is what to do about it. Relevant log lines below. The installer almost sounds like it is asking me to remove the UUID-directory and whatnot, but I’m pretty sure that’s just going to leave me with two problems instead of fixing the first one. I’ve considered attempting to wire this together in the DB, which also seems like a great way to break things. I’ve even thought of using a Gluster installation that Ovirt knows nothing about, mainly as an experiment to see if it would even work, but decided it doesn’t matter, because I can’t deploy in that state anyway and it doesn’t actually get me any closer to getting this working.
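For reference, the volumes are created along these lines from the CLI (names, brick paths and options below are illustrative rather than my exact commands):

gluster volume create hosted_engine replica 3 gl1:/gluster/engine/brick gl2:/gluster/engine/brick gl3:/gluster/engine/brick
gluster volume set hosted_engine group virt    # applies the oVirt/virt option group, if the group file is installed
gluster volume start hosted_engine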
I noticed several bugs in the tracker seemingly related, but the bulk of those were for past versions and I saw nothing that seemed actionable from my end in the others.
So, can anyone spare a clue as to what is going wrong, and what to do about that?
-j
- - - - ovirt-hosted-engine-setup.log - - - -
2017-04-11 16:14:39 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._storageServerConnection:408 connectStorageServer
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._storageServerConnection:475 {'status': {'message': 'Done', 'code': 0}, 'items': [{u'status': 0, u'id': u'890e82cf-5570-4507-a9bc-c610584dea6e'}]}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._storageServerConnection:502 {'status': {'message': 'Done', 'code': 0}, 'items': [{u'status': 0, u'id': u'cd1a1bb6-e607-4e35-b815-1fd88b84fe14'}]}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._check_existing_pools:794 _check_existing_pools
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._check_existing_pools:795 getConnectedStoragePoolsList
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._check_existing_pools:797 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage storage._misc:956 Creating Storage Domain
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStorageDomain:513 createStorageDomain
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStorageDomain:547 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStorageDomain:549 {'status': {'message': 'Done', 'code': 0}, u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree': u'321929216000', u'disktotal': u'321965260800', u'mdafree': 0}
2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage storage._misc:959 Creating Storage Pool
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createFakeStorageDomain:553 createFakeStorageDomain
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createFakeStorageDomain:570 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createFakeStorageDomain:572 {'status': {'message': 'Done', 'code': 0}, u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree': u'1933930496', u'disktotal': u'2046640128', u'mdafree': 0}
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStoragePool:587 createStoragePool
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStoragePool:627 createStoragePool(args=[storagepoolID=9e399f0c-7c4b-4131-be79-922dda038383, name=hosted_datacenter, masterSdUUID=9a5c302b-2a18-4c7e-b75d-29088299988c, masterVersion=1, domainList=['9a5c302b-2a18-4c7e-b75d-29088299988c', 'f26efe61-a2e1-4a85-a212-269d0a047e07'], lockRenewalIntervalSec=None, leaseTimeSec=None, ioOpTimeoutSec=None, leaseRetries=None])
2017-04-11 16:15:29 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStoragePool:640 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:15:29 INFO otopi.plugins.gr_he_setup.storage.storage storage._misc:962 Connecting Storage Pool
2017-04-11 16:15:29 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._storagePoolConnection:717 connectStoragePool
2017-04-11 16:15:29 DEBUG otopi.context context._executeMethod:142 method exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in _executeMethod
method['method']()
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/storage.py", line 963, in _misc
self._storagePoolConnection()
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/storage.py", line 725, in _storagePoolConnection
message=status['status']['message'],
RuntimeError: Dirty Storage Domain: Cannot connect pool, already connected to another pool: ('ef5e4496-3095-40a8-89da-6847db67a4b9',)
Please clean the storage device and try again
2017-04-11 16:15:29 ERROR otopi.context context._executeMethod:151 Failed to execute stage 'Misc configuration': Dirty Storage Domain: Cannot connect pool, already connected to another pool: ('ef5e4496-3095-40a8-89da-6847db67a4b9',)
Please clean the storage device and try again
2017-04-11 16:15:29 DEBUG otopi.transaction transaction.abort:119 aborting 'Yum Transaction'
2017-04-11 16:15:29 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum Performing yum transaction rollback
Loaded plugins: fastestmirror
2017-04-11 16:15:29 DEBUG otopi.transaction transaction.abort:119 aborting 'File transaction for '/etc/ovirt-hosted-engine/firewalld/hosted-console.xml''
2017-04-11 16:15:29 DEBUG otopi.transaction transaction.abort:119 aborting 'File transaction for '/etc/ovirt-hosted-engine/firewalld/hosted-cockpit.xml''
2017-04-11 16:15:29 DEBUG otopi.transaction transaction.abort:119 aborting 'File transaction for '/etc/ovirt-hosted-engine/iptables.example''
2017-04-11 16:15:29 DEBUG otopi.context context.dumpEnvironment:760 ENVIRONMENT DUMP - BEGIN
I’m having trouble deleting a test gluster volume
by Precht, Andrew
Hi Ovirt users,
I'm a newbie to oVirt and I'm having trouble deleting a test gluster volume. The nodes are 4.1.1 and the engine is 4.1.0.
When I try to remove the test volume, I click Remove, the dialog box prompting to confirm the deletion pops up and after I click OK, the dialog box changes to show a little spinning wheel and then it disappears. In the end the volume is still there.
The test volume was distributed with two host members. One of the hosts I was able to remove from the volume by removing the host from the cluster. When I try to remove the remaining host in the volume, even with the "Force Remove" box ticked, I get this response: Cannot remove Host. Server having Gluster volume.
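For reference, the equivalent removal from the gluster CLI on the node would presumably look something like this (the volume name "test" is assumed, not the real name):

gluster volume info test
gluster volume stop test       # a volume has to be stopped before it can be deleted
gluster volume delete test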
What to try next?
P.S. I've tried to join this user group several times in the past, with no response.
Is it possible for me to join this group?
Regards,
Andrew
Re: [ovirt-users] Adding existing kvm hosts
by Konstantin Raskoshnyi
+Users
We're using Scientific Linux 6.7, and the latest Python available in updates is 2.6.6.
So I'm going to fix this
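A quick check on the host confirms what the deploy script is seeing (the error above is the engine's Python version check):

python -V                  # reports 2.6.x on this host; the deploy wants at least 2.7
cat /etc/redhat-release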
Thanks
On Wed, Apr 12, 2017 at 9:42 AM, Yaniv Kaul <ykaul(a)redhat.com> wrote:
> Right. How did you end up with such an ancient version?
>
> Also, please email the users mailing list, not just me (so, for example,
> others will know what the issue is).
> Thanks,
> Y.
>
>
> On Apr 12, 2017 6:52 PM, "Konstantin Raskoshnyi" <konrasko(a)gmail.com>
> wrote:
>
>> I just found this error on oVirt engine: Python version 2.6 is too old,
>> expecting at least 2.7.
>>
>> So going to upgrade python first
>>
>> On Wed, Apr 12, 2017 at 4:41 AM, Yaniv Kaul <ykaul(a)redhat.com> wrote:
>>
>>> Can you share the vdsm log? The host deploy log (from the engine) ?
>>> Y.
>>>
>>>
>>> On Wed, Apr 12, 2017 at 8:13 AM, Konstantin Raskoshnyi <
>>> konrasko(a)gmail.com> wrote:
>>>
>>>> Hi guys, We're never had mgmt for our kvm machines
>>>>
>>>> I installed oVirt 4.1 on CentOS73 and trying to add existing kvm hosts
>>>> but oVirt fails with this error
>>>>
>>>> 2017-04-12 05:08:46,430Z ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog]
>>>> (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Exception:
>>>> java.io.IOException: Command returned failure code 1 during SSH session
>>>> 'root@tank3'
>>>>
>>>> I don't experience any problems connecting to virtank3 under root.
>>>>
>>>> 2017-04-12 05:08:46,445Z ERROR [org.ovirt.engine.core.dal.dbb
>>>> roker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-21)
>>>> [4a1d5f35] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), Correlation
>>>> ID: 4a1d5f35, Call Stack: null, Custom Event ID: -1, Message: Failed to
>>>> install Host tank3. Command returned failure code 1 during SSH session
>>>> 'root@tank3'.
>>>> 2017-04-12 05:08:46,445Z ERROR [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
>>>> (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Error during host tank3
>>>> install, prefering first exception: Unexpected connection termination
>>>> 2017-04-12 05:08:46,445Z ERROR [org.ovirt.engine.core.bll.hos
>>>> tdeploy.InstallVdsInternalCommand] (org.ovirt.thread.pool-7-thread-21)
>>>> [4a1d5f35] Host installation failed for host 'cec720ed-460a-48aa-a9fc-2262b6da5a83',
>>>> 'tank3': Unexpected connection termination
>>>> 2017-04-12 05:08:46,446Z INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
>>>> (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] START,
>>>> SetVdsStatusVDSCommand(HostName = tank3, SetVdsStatusVDSCommandParameters:{runAsync='true',
>>>> hostId='cec720ed-460a-48aa-a9fc-2262b6da5a83', status='InstallFailed',
>>>> nonOperationalReason='NONE', stopSpmFailureLogged='false',
>>>> maintenanceReason='null'}), log id: 4bbc52f9
>>>> 2017-04-12 05:08:46,449Z INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
>>>> (org.ovirt.thread.pool-7-thread-21) [4a1d5f35] FINISH,
>>>> SetVdsStatusVDSCommand, log id: 4bbc52f9
>>>> 2017-04-12 05:08:46,457Z ERROR [org.ovirt.engine.core.dal.dbb
>>>> roker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-21)
>>>> [4a1d5f35] EVENT_ID: VDS_INSTALL_FAILED(505), Correlation ID: 4a1d5f35, Job
>>>> ID: 8af22af5-72a5-4ec4-b216-4e26ceaa48d6, Call Stack: null, Custom
>>>> Event ID: -1, Message: Host tank3 installation failed. Unexpected
>>>> connection termination.
>>>> 2017-04-12 05:08:46,496Z INFO [org.ovirt.engine.core.bll.ho
>>>> stdeploy.InstallVdsInternalCommand] (org.ovirt.thread.pool-7-thread-21)
>>>> [4a1d5f35] Lock freed to object 'EngineLock:{exclusiveLocks='[
>>>> cec720ed-460a-48aa-a9fc-2262b6da5a83=<VDS,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>>>> 2017-04-12 05:09:02,742Z INFO [org.ovirt.engine.core.bll.RemoveVdsCommand]
>>>> (default task-48) [13050988-bf00-4391-9862-a8ed8ade34dd] Lock Acquired
>>>> to object 'EngineLock:{exclusiveLocks='[cec720ed-460a-48aa-a9fc-2262b6da5a83=<VDS,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>>>> 2017-04-12 05:09:02,750Z INFO [org.ovirt.engine.core.bll.RemoveVdsCommand]
>>>> (org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
>>>> Running command: RemoveVdsCommand internal: false. Entities affected : ID:
>>>> cec720ed-460a-48aa-a9fc-2262b6da5a83 Type: VDSAction group DELETE_HOST
>>>> with role type ADMIN
>>>> 2017-04-12 05:09:02,822Z INFO [org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand]
>>>> (org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
>>>> START, RemoveVdsVDSCommand( RemoveVdsVDSCommandParameters:{runAsync='true',
>>>> hostId='cec720ed-460a-48aa-a9fc-2262b6da5a83'}), log id: 26e68c12
>>>> 2017-04-12 05:09:02,822Z INFO [org.ovirt.engine.core.vdsbroker.VdsManager]
>>>> (org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
>>>> vdsManager::disposing
>>>> 2017-04-12 05:09:02,822Z INFO [org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand]
>>>> (org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
>>>> FINISH, RemoveVdsVDSCommand, log id: 26e68c12
>>>> 2017-04-12 05:09:02,824Z WARN [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
>>>> (ResponseWorker) [] Exception thrown during message processing
>>>> 2017-04-12 05:09:02,848Z INFO [org.ovirt.engine.core.dal.db
>>>> broker.auditloghandling.AuditLogDirector]
>>>> (org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
>>>> EVENT_ID: USER_REMOVE_VDS(44), Correlation ID:
>>>> 13050988-bf00-4391-9862-a8ed8ade34dd, Call Stack: null, Custom Event
>>>> ID: -1, Message: Host tank3 was removed by admin@internal-authz.
>>>> 2017-04-12 05:09:02,848Z INFO [org.ovirt.engine.core.bll.RemoveVdsCommand]
>>>> (org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
>>>> Lock freed to object 'EngineLock:{exclusiveLocks='[
>>>> cec720ed-460a-48aa-a9fc-2262b6da5a83=<VDS,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>>>> 2017-04-12 05:10:56,139Z INFO [org.ovirt.engine.core.bll.storage.ovfstore.OvfDataUpdater]
>>>> (DefaultQuartzScheduler8) [] Attempting to update VMs/Templates Ovf.
>>>>
>>>>
>>>> Package vdsm-tool installed.
>>>>
>>>> Any thoughts?
>>>>
>>>> Thanks
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users(a)ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>
>>
Adding existing kvm hosts
by Konstantin Raskoshnyi
Hi guys, we've never had management for our KVM machines.
I installed oVirt 4.1 on CentOS 7.3 and I'm trying to add existing KVM hosts, but
oVirt fails with this error:
2017-04-12 05:08:46,430Z ERROR [org.ovirt.engine.core.uutils.ssh.SSHDialog]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Exception:
java.io.IOException: Command returned failure code 1 during SSH session
'root@tank3'
I don't experience any problems connecting to virtank3 under root.
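The lines below are from engine.log; if more detail is needed, the host-deploy and vdsm logs should be in the standard locations:

less /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-*.log    # on the engine
less /var/log/vdsm/vdsm.log                                       # on the host being added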
2017-04-12 05:08:46,445Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] EVENT_ID:
VDS_INSTALL_IN_PROGRESS_ERROR(511), Correlation ID: 4a1d5f35, Call Stack:
null, Custom Event ID: -1, Message: Failed to install Host tank3. Command
returned failure code 1 during SSH session 'root@tank3'.
2017-04-12 05:08:46,445Z ERROR
[org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Error during host tank3
install, prefering first exception: Unexpected connection termination
2017-04-12 05:08:46,445Z ERROR
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Host installation failed for
host 'cec720ed-460a-48aa-a9fc-2262b6da5a83', 'tank3': Unexpected connection
termination
2017-04-12 05:08:46,446Z INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] START,
SetVdsStatusVDSCommand(HostName = tank3,
SetVdsStatusVDSCommandParameters:{runAsync='true',
hostId='cec720ed-460a-48aa-a9fc-2262b6da5a83', status='InstallFailed',
nonOperationalReason='NONE', stopSpmFailureLogged='false',
maintenanceReason='null'}), log id: 4bbc52f9
2017-04-12 05:08:46,449Z INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] FINISH,
SetVdsStatusVDSCommand, log id: 4bbc52f9
2017-04-12 05:08:46,457Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] EVENT_ID:
VDS_INSTALL_FAILED(505), Correlation ID: 4a1d5f35, Job ID:
8af22af5-72a5-4ec4-b216-4e26ceaa48d6, Call Stack: null, Custom Event ID:
-1, Message: Host tank3 installation failed. Unexpected connection
termination.
2017-04-12 05:08:46,496Z INFO
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(org.ovirt.thread.pool-7-thread-21) [4a1d5f35] Lock freed to object
'EngineLock:{exclusiveLocks='[cec720ed-460a-48aa-a9fc-2262b6da5a83=<VDS,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-04-12 05:09:02,742Z INFO [org.ovirt.engine.core.bll.RemoveVdsCommand]
(default task-48) [13050988-bf00-4391-9862-a8ed8ade34dd] Lock Acquired to
object
'EngineLock:{exclusiveLocks='[cec720ed-460a-48aa-a9fc-2262b6da5a83=<VDS,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-04-12 05:09:02,750Z INFO [org.ovirt.engine.core.bll.RemoveVdsCommand]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
Running command: RemoveVdsCommand internal: false. Entities affected : ID:
cec720ed-460a-48aa-a9fc-2262b6da5a83 Type: VDSAction group DELETE_HOST with
role type ADMIN
2017-04-12 05:09:02,822Z INFO
[org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
START, RemoveVdsVDSCommand( RemoveVdsVDSCommandParameters:{runAsync='true',
hostId='cec720ed-460a-48aa-a9fc-2262b6da5a83'}), log id: 26e68c12
2017-04-12 05:09:02,822Z INFO [org.ovirt.engine.core.vdsbroker.VdsManager]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
vdsManager::disposing
2017-04-12 05:09:02,822Z INFO
[org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
FINISH, RemoveVdsVDSCommand, log id: 26e68c12
2017-04-12 05:09:02,824Z WARN
[org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker)
[] Exception thrown during message processing
2017-04-12 05:09:02,848Z INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
EVENT_ID: USER_REMOVE_VDS(44), Correlation ID:
13050988-bf00-4391-9862-a8ed8ade34dd, Call Stack: null, Custom Event ID:
-1, Message: Host tank3 was removed by admin@internal-authz.
2017-04-12 05:09:02,848Z INFO [org.ovirt.engine.core.bll.RemoveVdsCommand]
(org.ovirt.thread.pool-7-thread-22) [13050988-bf00-4391-9862-a8ed8ade34dd]
Lock freed to object
'EngineLock:{exclusiveLocks='[cec720ed-460a-48aa-a9fc-2262b6da5a83=<VDS,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-04-12 05:10:56,139Z INFO
[org.ovirt.engine.core.bll.storage.ovfstore.OvfDataUpdater]
(DefaultQuartzScheduler8) [] Attempting to update VMs/Templates Ovf.
Package vdsm-tool installed.
Any thoughts?
Thanks
Re: [ovirt-users] More 4.1 Networking Questions
by Charles Tassell
And bingo! I totally spaced on the fact that the MAC pool would be the
same on the two clusters. I changed the pool range (Configure->MAC
Address Pools for anyone interested) on the 4.1 cluster and things are
looking good again. Thank you, this was driving me nuts!
On 2017-04-11 05:22 AM, users-request(a)ovirt.org wrote:
> Message: 1
> Date: Mon, 10 Apr 2017 21:07:44 -0400
> From: Jeff Bailey <bailey(a)cs.kent.edu>
> To: users(a)ovirt.org
> Subject: Re: [ovirt-users] More 4.1 Networking Questions
> Message-ID: <31d1821c-9172-5a62-4f7d-b43fa4608025(a)cs.kent.edu>
> Content-Type: text/plain; charset=windows-1252; format=flowed
>
> On 4/10/2017 6:59 AM, Charles Tassell wrote:
>
>> Ah, spoke to soon. 30 seconds later the network went down with IPv6
>> disabled. So it does appear to be a host forwarding problem, not a VM
>> problem. I have an oVirt 4.0 cluster on the same network that doesn't
>> have these issues, so it must be a configuration issue somewhere.
>> Here is a dump of my ip config on the host:
>>
> The same L2 network? Using the same range of MAC addresses?
[snip]
Hosted engine setup shooting dirty pool
by Jamie Lawrence
Or at least, refusing to mount a dirty pool. I’m having trouble getting the hosted engine installed.
I have 4.1 set up, configured and functional, currently wired up with two VM hosts and three Gluster hosts. It is configured with a (temporary) NFS data storage domain, with the end-goal being two data domains on Gluster; one for the hosted engine, one for other VMs.
The issue is that `hosted-engine` sees any gluster volumes offered as dirty. (I have been creating them via the command line right before attempting the hosted-engine migration; there is nothing in them at that stage.) I *think* what is happening is that ovirt-engine notices a newly created volume and has its way with the volume (visible in the GUI; the volume appears in the list), and the hosted-engine installer becomes upset about that. What I don’t know is what to do about that. Relevant log lines below. The installer almost sounds like it is asking me to remove the UUID-directory and whatnot, but I’m pretty sure that’s just going to leave me with two problems instead of fixing the first one. I’ve considered attempting to wire this together in the DB, which also seems like a great way to break things. I’ve even thought of using a Gluster cluster that Ovirt knows nothing about, mainly as an experiment to see if it would even work, but decided it doesn’t especially matter, as architecturally that would not work for production in our environment and I just need to get this up.
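If it helps, this is roughly how I check what the host already has attached before running the deploy (the mount point is the vdsm default, so adjust if yours differs):

mount | grep /rhev/data-center          # storage domains vdsm currently has mounted
ls /rhev/data-center/mnt/               # the NFS data domain and any gluster mounts show up here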
So, can anyone spare a clue as to what is going wrong, and what to do about that?
-j
- - - - ovirt-hosted-engine-setup.log - - - -
2017-04-11 16:14:39 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._storageServerConnection:408 connectStorageServer
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._storageServerConnection:475 {'status': {'message': 'Done', 'code': 0}, 'items': [{u'status': 0, u'id': u'890e82cf-5570-4507-a9bc-c610584dea6e'}]}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._storageServerConnection:502 {'status': {'message': 'Done', 'code': 0}, 'items': [{u'status': 0, u'id': u'cd1a1bb6-e607-4e35-b815-1fd88b84fe14'}]}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._check_existing_pools:794 _check_existing_pools
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._check_existing_pools:795 getConnectedStoragePoolsList
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._check_existing_pools:797 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage storage._misc:956 Creating Storage Domain
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStorageDomain:513 createStorageDomain
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStorageDomain:547 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStorageDomain:549 {'status': {'message': 'Done', 'code': 0}, u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree': u'321929216000', u'disktotal': u'321965260800', u'mdafree': 0}
2017-04-11 16:14:40 INFO otopi.plugins.gr_he_setup.storage.storage storage._misc:959 Creating Storage Pool
2017-04-11 16:14:40 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createFakeStorageDomain:553 createFakeStorageDomain
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createFakeStorageDomain:570 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createFakeStorageDomain:572 {'status': {'message': 'Done', 'code': 0}, u'mdasize': 0, u'mdathreshold': True, u'mdavalid': True, u'diskfree': u'1933930496', u'disktotal': u'2046640128', u'mdafree': 0}
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStoragePool:587 createStoragePool
2017-04-11 16:14:41 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStoragePool:627 createStoragePool(args=[storagepoolID=9e399f0c-7c4b-4131-be79-922dda038383, name=hosted_datacenter, masterSdUUID=9a5c302b-2a18-4c7e-b75d-29088299988c, masterVersion=1, domainList=['9a5c302b-2a18-4c7e-b75d-29088299988c', 'f26efe61-a2e1-4a85-a212-269d0a047e07'], lockRenewalIntervalSec=None, leaseTimeSec=None, ioOpTimeoutSec=None, leaseRetries=None])
2017-04-11 16:15:29 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._createStoragePool:640 {'status': {'message': 'Done', 'code': 0}}
2017-04-11 16:15:29 INFO otopi.plugins.gr_he_setup.storage.storage storage._misc:962 Connecting Storage Pool
2017-04-11 16:15:29 DEBUG otopi.plugins.gr_he_setup.storage.storage storage._storagePoolConnection:717 connectStoragePool
2017-04-11 16:15:29 DEBUG otopi.context context._executeMethod:142 method exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in _executeMethod
method['method']()
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/storage.py", line 963, in _misc
self._storagePoolConnection()
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/storage/storage.py", line 725, in _storagePoolConnection
message=status['status']['message'],
RuntimeError: Dirty Storage Domain: Cannot connect pool, already connected to another pool: ('ef5e4496-3095-40a8-89da-6847db67a4b9',)
Please clean the storage device and try again
2017-04-11 16:15:29 ERROR otopi.context context._executeMethod:151 Failed to execute stage 'Misc configuration': Dirty Storage Domain: Cannot connect pool, already connected to another pool: ('ef5e4496-3095-40a8-89da-6847db67a4b9',)
Please clean the storage device and try again
2017-04-11 16:15:29 DEBUG otopi.transaction transaction.abort:119 aborting 'Yum Transaction'
2017-04-11 16:15:29 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum Performing yum transaction rollback
Loaded plugins: fastestmirror
2017-04-11 16:15:29 DEBUG otopi.transaction transaction.abort:119 aborting 'File transaction for '/etc/ovirt-hosted-engine/firewalld/hosted-console.xml''
2017-04-11 16:15:29 DEBUG otopi.transaction transaction.abort:119 aborting 'File transaction for '/etc/ovirt-hosted-engine/firewalld/hosted-cockpit.xml''
2017-04-11 16:15:29 DEBUG otopi.transaction transaction.abort:119 aborting 'File transaction for '/etc/ovirt-hosted-engine/iptables.example''
2017-04-11 16:15:29 DEBUG otopi.context context.dumpEnvironment:760 ENVIRONMENT DUMP - BEGIN
Resize iSCSI LUN of storage domain
by Gianluca Cecchi
Hello,
my iSCSI storage domain in 4.1.1 is composed of one LUN of 1TB, and there
are two hosts accessing it.
I have to extend this LUN to 4TB.
What needs to be done after the resize operation has been completed at the
storage array level?
I read here the workflow:
http://www.ovirt.org/develop/release-management/features/storage/lun-resize/
Does this imply that I have to do nothing on the host side?
Normally, on a physical server connected to iSCSI volumes, I run the
command "iscsiadm .... --rescan"; the workflow below is what I do on
CentOS 5.9 (it is quite similar on CentOS 6.x, except for the multipath refresh).
Is it oVirt itself that takes care of the iscsiadm --rescan command?
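In case a manual refresh is still needed on the hypervisors, I would expect it to boil down to something like this on EL7 (my guess, not verified against what vdsm already does):

iscsiadm -m session --rescan                          # re-read LUN sizes on every iSCSI session
multipathd -k"resize map <multipath_device_name>"     # then resize the corresponding multipath map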
BTW: the "edit domain" screenshot should become a "manage domain"
screenshot now
Thanks,
Gianluca
I want to extend my filesystem from 250GB to 600GB
- current layout of FS, PV, multipath device
[g.cecchi@dbatest ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
...
/dev/mapper/VG_ORASAVE-LV_ORASAVE
247G 66G 178G 28% /orasave
[root@dbatest ~]# pvs /dev/mpath/mpsave
PV VG Fmt Attr PSize PFree
/dev/mpath/mpsave VG_ORASAVE lvm2 a-- 250.00G 0
[root@dbatest ~]# multipath -l mpsave
mpsave (36090a0585094695aed0f95af0601f066) dm-4 EQLOGIC,100E-00
[size=250G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
\_ 12:0:0:0 sdc 8:32 [active][undef]
\_ 11:0:0:0 sdd 8:48 [active][undef]
- currently configured iSCSI ifaces (ieth0 using eth0 and ieth1 using eth1)
[root@dbatest ~]# iscsiadm -m node -P 1
...
Target:
iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save
Portal: 10.10.100.20:3260,1
Iface Name: ieth0
Iface Name: ieth1
- rescan of iSCSI
[root@dbatest ~]# for i in 0 1
> do
> iscsiadm -m node
--targetname=iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save
-I ieth$i --rescan
> done
Rescanning session [sid: 3, target:
iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save,
portal: 10.10.100.20,3260]
Rescanning session [sid: 4, target:
iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save,
portal: 10.10.100.20,3260]
In messages I get:
Apr 24 16:07:17 dbatest kernel: SCSI device sdd: 1258291200 512-byte hdwr
sectors (644245 MB)
Apr 24 16:07:17 dbatest kernel: sdd: Write Protect is off
Apr 24 16:07:17 dbatest kernel: SCSI device sdd: drive cache: write through
Apr 24 16:07:17 dbatest kernel: sdd: detected capacity change from
268440698880 to 644245094400
Apr 24 16:07:17 dbatest kernel: SCSI device sdc: 1258291200 512-byte hdwr
sectors (644245 MB)
Apr 24 16:07:17 dbatest kernel: sdc: Write Protect is off
Apr 24 16:07:17 dbatest kernel: SCSI device sdc: drive cache: write through
Apr 24 16:07:17 dbatest kernel: sdc: detected capacity change from
268440698880 to 644245094400
- dry run of the multipath refresh
[root@dbatest ~]# multipath -v2 -d
: mpsave (36090a0585094695aed0f95af0601f066) EQLOGIC,100E-00
[size=600G][features=1 queue_if_no_path][hwhandler=0][n/a]
\_ round-robin 0 [prio=1][undef]
\_ 12:0:0:0 sdc 8:32 [active][ready]
\_ 11:0:0:0 sdd 8:48 [active][ready]
- execute the refresh of multipath
[root@dbatest ~]# multipath -v2
: mpsave (36090a0585094695aed0f95af0601f066) EQLOGIC,100E-00
[size=600G][features=1 queue_if_no_path][hwhandler=0][n/a]
\_ round-robin 0 [prio=1][undef]
\_ 12:0:0:0 sdc 8:32 [active][ready]
\_ 11:0:0:0 sdd 8:48 [active][ready]
- verify new size:
[root@dbatest ~]# multipath -l mpsave
mpsave (36090a0585094695aed0f95af0601f066) dm-4 EQLOGIC,100E-00
[size=600G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
\_ 12:0:0:0 sdc 8:32 [active][undef]
\_ 11:0:0:0 sdd 8:48 [active][undef]
- pvresize
[root@dbatest ~]# pvresize /dev/mapper/mpsave
Physical volume "/dev/mpath/mpsave" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
- verify the newly added 350GB in the PV and in the VG
[root@dbatest ~]# pvs /dev/mpath/mpsave
PV VG Fmt Attr PSize PFree
/dev/mpath/mpsave VG_ORASAVE lvm2 a-- 600.00G 349.99G
[root@dbatest ~]# vgs VG_ORASAVE
VG #PV #LV #SN Attr VSize VFree
VG_ORASAVE 1 1 0 wz--n- 600.00G 349.99G
- lvextend of the existing LV
[root@dbatest ~]# lvextend -l+100%FREE /dev/VG_ORASAVE/LV_ORASAVE
Extending logical volume LV_ORASAVE to 600.00 GB
Logical volume LV_ORASAVE successfully resized
[root@dbatest ~]# lvs VG_ORASAVE/LV_ORASAVE
LV VG Attr LSize Origin Snap% Move Log Copy%
Convert
LV_ORASAVE VG_ORASAVE -wi-ao 600.00G
- resize FS
[root@dbatest ~]# resize2fs /dev/VG_ORASAVE/LV_ORASAVE
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/VG_ORASAVE/LV_ORASAVE is mounted on /orasave; on-line
resizing required
Performing an on-line resize of /dev/VG_ORASAVE/LV_ORASAVE to 157285376
(4k) blocks.
The filesystem on /dev/VG_ORASAVE/LV_ORASAVE is now 157285376 blocks long.
[root@dbatest ~]# df -h /orasave
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VG_ORASAVE-LV_ORASAVE
591G 66G 519G 12% /orasave
ovirt node 4.1.1.1
by Julius Bekontaktis
Hey guys,
Yesterday I upgraded my oVirt Node from 4.1.0 to 4.1.1.1. Everything
looks fine except that the network configuration part is missing. If I enter
the URL manually (/network) like in the older version, I get a Not Found
error. I rolled back to the previous version and the network configuration part
is OK; reverting back to the new one, there is no networking configuration. How should I
configure networks? In the terminal, the old way?
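A couple of things I can check, in case they narrow it down (package and unit names are the usual Cockpit ones, so worth verifying locally):

rpm -qa | grep -i cockpit      # compare against the 4.1.0 image where /network still works
journalctl -u cockpit          # any errors logged when opening the page?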
Thanks for any comments. Maybe I am missing something.
Regards
Julius
EMC Unity 300
by Colin Coe
Hi all
We are contemplating moving away from our current iSCSI SAN to Dell/EMC
Unity 300.
I've seen one thread where the 300F is problematic and no resolution is
posted.
Is anyone else using this with RHEV/oVirt? Any war stories?
Thanks
CC