Leeroy Jenkins cloud plugin for oVirt
by Vrgotic, Marko
Hey oVirt,
How are you all doing?
Has anyone used, or does anyone have experience with, Leeroy Jenkins on oVirt?
We need to move our Jenkins CI master and its slaves to oVirt, but cannot find a suitable plugin for it. For the builds we run, Jenkins needs to spawn slaves on demand, which requires a cloud plugin.
Please help if possible.
Kindly awaiting your reply.
— — —
Met vriendelijke groet / Kind regards,
Marko Vrgotic
3 years, 10 months
Trouble initializing new VM in 4.2.8.2-1.el7
by David Johnson
Good evening,
Thanks in advance,
I'm trying to set up a new VM in my cluster, and things appear to have hung
up initializing the 80GB boot partition for a new Windows VM. The system
has been sitting like this for 6 hours.
What am I looking for to resolve this issue? Can I kill this process safely
and start over?
[image: image.png]
[image: image.png]
Regards,
David Johnson
Director of Development, Maxis Technology
844.696.2947 ext 702 (o) | 479.531.3590 (c)
djohnson(a)maxistechnology.com
[image: Maxis Technology] <http://www.maxistechnology.com>
www.maxistechnology.com
*stay connected <http://www.linkedin.com/in/pojoguy>*
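For anyone picking this up: one way to check whether the storage task behind the disk initialization is still making progress is to ask vdsm on the SPM host. This is only a sketch, assuming vdsm-client is installed there (verb names can differ slightly between vdsm versions):
  vdsm-client Host getAllTasksInfo        # which tasks exist and what operation they belong to
  vdsm-client Host getAllTasksStatuses    # per-task state: running / finished / aborted
Whether killing the operation and starting over is safe depends on what those statuses show; a task still reported as running is generally better cleared through the engine than killed on the host.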
3 years, 10 months
Moving data to new storage appliance
by David Johnson
Hi everyone,
I'm sorry to bother y'all with another noob question.
We are in the process of retiring the old storage appliance that backed our
oVirt cluster in favor of a new appliance. I have migrated all of the
active VM storage to the new appliance, but can't see how to migrate the
"base versions".
My understanding is that if I just drop the storage with the base versions,
then the derived VMs will cease to be functional.
Please advise.
3 years, 10 months
bond for vm interfaces
by Edoardo Mazza
Hello everyone,
I need to create a bond for the VM interfaces, but I don't know what the best solution is. Can you help me?
Thanks
Edoardo
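For what it's worth, the bond itself is created from the host's Setup Host Networks dialog, and the custom bond options entered there are plain Linux bonding options. A hedged example of the two most common choices for VM networks (which one is "best" mostly depends on whether your switch can do LACP):
  mode=802.3ad miimon=100       # mode 4 / LACP, needs a matching port-channel on the switch
  mode=active-backup miimon=100 # mode 1, no switch configuration, failover only
Note that oVirt accepts only modes 1, 2, 3 and 4 for networks that carry VM (bridged) traffic; modes 0, 5 and 6 are limited to non-VM networks.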
3 years, 10 months
Re: Active-Passive DR: mutual for different storage domains possible?
by Gianluca Cecchi
On Thu, Jul 25, 2019 at 2:21 PM Eyal Shenitzky <eshenitz(a)redhat.com> wrote:
> On Thu, Jul 25, 2019 at 3:02 PM Gianluca Cecchi <gianluca.cecchi(a)gmail.com>
> wrote:
>
>> On Thu, Jul 25, 2019 at 1:54 PM Eyal Shenitzky <eshenitz(a)redhat.com>
>> wrote:
>>
>>>
>>> Please notice that automation Python scripts were created in order to
>>> facilitate the DR process.
>>> You can find them under - path/to/your/dr/folder/files.
>>>
>>> You can use those scripts to generate the mapping, test the generated
>>> mapping and start the failover/failback.
>>>
>>> I strongly recommend using them.
>>>
>>>
>> Yes, I have used it to create the disaster_recovery_vars.yml mapping file
>> and then populated it with the secondary site information, thanks.
>> My doubt was about any difference in playbook actions between "failover"
>> (3.3) and "discreet failover test" (B.1), as the executed playbook and
>> tags are the same.
>>
>
> No, the only difference is that you disable the storage replication by
> yourself; this way you can test the failover while the other "primary" site
> is still active.
>
>
First "discreet failover test" was a success!!! Great.
Storage domain attached, templates imported and the only VM defined at
source correctly started (at source I configured link down for the VM,
inherited at target, so no collisions).
Elapsed between beginning of ovirt connection, until first template import
has been about 6 minutes.
...
Template TOL76 has been successfully imported from the given configuration.
7/25/19 3:26:58 PM
Storage Domain ovsd3910 was attached to Data Center SVIZ3-DR by
admin@internal-authz 7/25/19 3:26:46 PM
Storage Domains were attached to Data Center SVIZ3-DR by
admin@internal-authz 7/25/19 3:26:46 PM
Storage Domain ovsd3910 (Data Center SVIZ3-DR) was activated by
admin@internal-authz 7/25/19 3:26:46 PM
...
Storage Pool Manager runs on Host ovh201. (Address: ovh201.), Data Center
SVIZ3-DR. 7/25/19 3:26:36 PM
Data Center is being initialized, please wait for initialization to
complete. 7/25/19 3:23:53 PM
Storage Domain ovsd3910 was added by admin@internal-authz 7/25/19 3:20:43 PM
Disk Profile ovsd3910 was successfully added (User: admin@internal-authz).
7/25/19 3:20:42 PM
User admin@internal-authz connecting from '10.4.192.43' using session 'xxx'
logged in. 7/25/19 3:20:35 PM
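For reference, the discreet test and a real failover are indeed driven by the same playbook, only the storage replication handling differs and is done by hand in the test case. A rough sketch of the invocations, assuming a wrapper playbook around the oVirt.disaster-recovery role like the one in the DR guide (tag names are the ones documented there; adjust paths to your layout):
  ansible-playbook dr_play.yml -t generate_mapping   # build/refresh the mapping vars file
  ansible-playbook dr_play.yml -t fail_over          # used for both the discreet test and a real failover
  ansible-playbook dr_play.yml -t fail_back          # once the primary site is healthy again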
Some notes:
1) iSCSI multipath
My storage domains are iSCSI based and my hosts have two network cards to
reach the storage.
I'm using EqualLogic (EQL), which doesn't support bonding and has one portal that all
initiators use.
So in my primary env I configured the "iSCSI Multipathing" tab in the Compute -->
Datacenter --> Datacenter_Name window.
But this tab appears only when you activate the storage.
So during the Ansible playbook run the iSCSI connection was activated
through the "default" iSCSI interface.
I can then:
- configure "iSCSI Multipathing"
- shutdown VM
- put host into maintenance
- remove the default iSCSI session that has not been removed on host
iscsiadm -m session -r 6 -u
- activate host
now I have:
[root@ov201 ~]# iscsiadm -m session
tcp: [10] 10.10.100.8:3260,1
iqn.2001-05.com.equallogic:4-771816-99d82fc59-5bdd77031e05beac-ovsd3910
(non-flash)
tcp: [9] 10.10.100.8:3260,1
iqn.2001-05.com.equallogic:4-771816-99d82fc59-5bdd77031e05beac-ovsd3910
(non-flash)
[root@ov201 ~]#
with
# multipath -l
364817197c52fd899acbe051e0377dd5b dm-29 EQLOGIC ,100E-00
size=1.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 23:0:0:0 sdb 8:16 active undef running
`- 24:0:0:0 sdc 8:32 active undef running
- start vm
Then I do a cleanup:
1. Detach the storage domains from the secondary site.
2. Enable storage replication between the primary and secondary storage
domains.
The storage domain remains "unattached" in the DR environment.
Then I executed the test again, and during the connection I got this error about
40 seconds after starting the playbook:
TASK [oVirt.disaster-recovery : Import iSCSI storage domain]
***************************************
An exception occurred during task execution. To see the full traceback, use
-vvv. The error was: Error: Fault reason is "Operation Failed". Fault
detail is "[]". HTTP response code is 400.
failed: [localhost]
(item=iqn.2001-05.com.equallogic:4-771816-99d82fc59-5bdd77031e05beac-ovsd3910)
=> {"ansible_loop_var": "dr_target", "changed": false, "dr_target":
"iqn.2001-05.com.equallogic:4-771816-99d82fc59-5bdd77031e05beac-ovsd3910",
"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP
response code is 400."}
In the webadmin GUI of the DR env I see:
VDSM ov201 command CleanStorageDomainMetaDataVDS failed: Cannot obtain
lock: "id=56eadc97-5731-40cf-8409-aff58d8ffd11, rc=-243, out=Cannot acquire
Lease(name='SDM', path='/dev/56eadc97-5731-40cf-8409-aff58d8ffd11/leases',
offset=1048576), err=(-243, 'Sanlock resource not acquired', 'Lease is held
by another host')" 7/25/19 4:50:43 PM
What could be the cause of this?
In vdsm.log:
2019-07-25 16:50:43,196+0200 INFO (jsonrpc/1) [vdsm.api] FINISH
forcedDetachStorageDomain error=Cannot obtain lock:
"id=56eadc97-5731-40cf-8409-aff58d8ffd11, rc=-243, out=Cannot acquire
Lease(name='SDM', path='/dev/56eadc97-5731-40cf-8409-aff58d8ffd11/leases',
offset=1048576), err=(-243, 'Sanlock resource not acquired', 'Lease is held
by another host')" from=::ffff:10.4.192.79,49038, flow_id=4bd330d1,
task_id=c0dfac81-5c58-427d-a7d0-e8c695448d27 (api:52)
2019-07-25 16:50:43,196+0200 ERROR (jsonrpc/1) [storage.TaskManager.Task]
(Task='c0dfac81-5c58-427d-a7d0-e8c695448d27') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882,
in _run
return fn(*args, **kargs)
File "<string>", line 2, in forcedDetachStorageDomain
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in
method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 856, in
forcedDetachStorageDomain
self._detachStorageDomainFromOldPools(sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 834, in
_detachStorageDomainFromOldPools
dom.acquireClusterLock(host_id)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 910, in
acquireClusterLock
self._manifest.acquireDomainLock(hostID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 527, in
acquireDomainLock
self._domainLock.acquire(hostID, self.getDomainLease())
File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line
419, in acquire
"Cannot acquire %s" % (lease,), str(e))
AcquireLockFailure: Cannot obtain lock:
"id=56eadc97-5731-40cf-8409-aff58d8ffd11, rc=-243, out=Cannot acquire
Lease(name='SDM', path='/dev/56eadc97-5731-40cf-8409-aff58d8ffd11/leases',
offset=1048576), err=(-243, 'Sanlock resource not acquired', 'Lease is held
by another host')"
2019-07-25 16:50:43,196+0200 INFO (jsonrpc/1) [storage.TaskManager.Task]
(Task='c0dfac81-5c58-427d-a7d0-e8c695448d27') aborting: Task is aborted:
'Cannot obtain lock: "id=56eadc97-5731-40cf-8409-aff58d8ffd11, rc=-243,
out=Cannot acquire Lease(name=\'SDM\',
path=\'/dev/56eadc97-5731-40cf-8409-aff58d8ffd11/leases\', offset=1048576),
err=(-243, \'Sanlock resource not acquired\', \'Lease is held by another
host\')"' - code 651 (task:1181)
2019-07-25 16:50:43,197+0200 ERROR (jsonrpc/1) [storage.Dispatcher] FINISH
forcedDetachStorageDomain error=Cannot obtain lock:
"id=56eadc97-5731-40cf-8409-aff58d8ffd11, rc=-243, out=Cannot acquire
Lease(name='SDM', path='/dev/56eadc97-5731-40cf-8409-aff58d8ffd11/leases',
offset=1048576), err=(-243, 'Sanlock resource not acquired', 'Lease is held
by another host')" (dispatcher:83)
2019-07-25 16:50:43,197+0200 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call StorageDomain.detach failed (error 651) in 24.12 seconds (__init__:312)
2019-07-25 16:50:44,180+0200 INFO (jsonrpc/6) [api.host] START getStats()
from=::ffff:10.4.192.79,49038 (api:48)
2019-07-25 16:50:44,222+0200 INFO (jsonrpc/6) [vdsm.api] START
repoStats(domains=()) from=::ffff:10.4.192.79,49038,
task_id=8a7a0302-4ee3-49a8-a3f7-f9636a123765 (api:48)
2019-07-25 16:50:44,222+0200 INFO (jsonrpc/6) [vdsm.api] FINISH repoStats
return={} from=::ffff:10.4.192.79,49038,
task_id=8a7a0302-4ee3-49a8-a3f7-f9636a123765 (api:54)
2019-07-25 16:50:44,223+0200 INFO (jsonrpc/6) [vdsm.api] START
multipath_health() from=::ffff:10.4.192.79,49038,
task_id=fb09923c-0888-4c3f-9b8a-a7750592da22 (api:48)
2019-07-25 16:50:44,223+0200 INFO (jsonrpc/6) [vdsm.api] FINISH
multipath_health return={} from=::ffff:10.4.192.79,49038,
task_id=fb09923c-0888-4c3f-9b8a-a7750592da22 (api:54)
After putting the host into maintenance, rebooting it, and re-running the
playbook, all went well again.
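If it happens again, one way to see which host still holds the SDM lease before forcing a detach is to ask sanlock directly on a host that sees the LUN. This is only a diagnostic sketch; the device path and offset are the ones reported in the error above:
  sanlock client status   # leases currently held by this host
  sanlock direct dump /dev/56eadc97-5731-40cf-8409-aff58d8ffd11/leases:1048576   # inspect the on-disk lease records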
2) MAC address pools
I notice that the imported VM has preserved the link state (down in my
case) but not the MAC address.
The MAC address is the one defined in my target engine, which is different
from the source engine's to avoid overlapping MAC addresses.
Is this an option I can customize?
In general a VM could have problems when its MAC address changes...
3) Cleaning up the destination DC after the "discreet failover test"
The guide says:
1. Detach the storage domains from the secondary site.
2. Enable storage replication between the primary and secondary storage
domains.
Is it better to also restart the DR hosts?
4) VM consistency
Can we say that all the imported VMs will be "crash consistent"?
Thanks,
Gianluca
3 years, 10 months
Re: engine-setup failure on 4.1 -> 4.2
by Strahil
Try to remove the maintenance, wait 10 min and then put it back:
hosted-engine --set-maintenance --mode=none; sleep 600; hosted-engine --set-maintenance --mode=global ; sleep 300
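To verify what the HA agents actually see before and after the toggle, something like this should be enough (just a verification sketch):
  hosted-engine --vm-status | grep -i maintenance   # global maintenance banner / per-host maintenance flag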
Best Regards,
Strahil Nikolov
On Jul 26, 2019 20:49, Alex K <rightkicktech(a)gmail.com> wrote:
>
> I repeated the same upgrade steps on another cluster and, although I was receiving the same warnings about the DB, the upgrade completed successfully.
>
> Is there a way I can manually inform the engine DB about the maintenance status? I was thinking that this way the engine would proceed with the remaining steps.
>
> On Thu, Jul 25, 2019 at 3:55 PM Alex K <rightkicktech(a)gmail.com> wrote:
>>
>> Hi all,
>>
>> I have a self hosted engine setup, with 3 servers.
>> I have successfully upgraded several other installations, from 4.1 to 4.2.
>> On one of them I am encountering an issue with engine-setup.
>>
>> I get the following warning:
>>
>> Found the following problems in PostgreSQL configuration for the Engine database:
>> It is required to be at least '8192'
>>
>> Please note the following required changes in postgresql.conf on 'localhost':
>> 'work_mem' is currently '1024'. It is required to be at least '8192'.
>> postgresql.conf is usually in /var/lib/pgsql/data, /var/opt/rh/rh-postgresql95/lib/pgsql/data, or somewhere under /etc/postgresql* . You have to restart PostgreSQL after making these changes.
>> The database requires these configurations values to be changed. Setup can fix them for you or abort. Fix automatically? (Yes, No) [Yes]:
>>
>> Then, if I select Yes to proceed, I get:
>>
>> [WARNING] This release requires PostgreSQL server 9.5.14 but the engine database is currently hosted on PostgreSQL server 9.2.24.
>>
>> Then finally:
>> [ ERROR ] It seems that you are running your engine inside of the hosted-engine VM and are not in "Global Maintenance" mode.
>> In that case you should put the system into the "Global Maintenance" mode before running engine-setup, or the hosted-engine HA agent might kill the machine, which might corrupt your data.
>>
>> [ ERROR ] Failed to execute stage 'Setup validation': Hosted Engine setup detected, but Global Maintenance is not set.
>> [ INFO ] Stage: Clean up
>> Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20190725124653-hvekp2.log
>> [ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20190725125154-setup.conf'
>> [ INFO ] Stage: Pre-termination
>> [ INFO ] Stage: Termination
>> [ ERROR ] Execution of setup failed
>>
>> I have put the cluster on global maintenance, though the engine thinks it is not.
>>
>> Are there any steps that I may follow to avoid the above?
>> I am also attaching the last full setup log.
>> Thank you!
>>
>> Alex
>>
>>
>>
3 years, 10 months
engine-setup failure on 4.1 -> 4.2
by Alex K
Hi all,
I have a self hosted engine setup, with 3 servers.
I have successfully upgraded several other installations, from 4.1 to 4.2.
On one of them I am encountering an issue with engine-setup.
I get the following warning:
Found the following problems in PostgreSQL configuration for the Engine
database:
It is required to be at least '8192'
Please note the following required changes in postgresql.conf on
'localhost':
'work_mem' is currently '1024'. It is required to be at least
'8192'.
postgresql.conf is usually in /var/lib/pgsql/data,
/var/opt/rh/rh-postgresql95/lib/pgsql/data, or somewhere under
/etc/postgresql* . You have to restart PostgreSQL after making these
changes.
The database requires these configurations values to be changed.
Setup can fix them for you or abort. Fix automatically? (Yes, No) [Yes]:
Then, if I select Yes to proceed, I get:
[WARNING] This release requires PostgreSQL server 9.5.14 but the engine
database is currently hosted on PostgreSQL server 9.2.24.
Then finally:
[ ERROR ] It seems that you are running your engine inside of the
hosted-engine VM and are not in "Global Maintenance" mode.
In that case you should put the system into the "Global
Maintenance" mode before running engine-setup, or the hosted-engine HA
agent might kill the machine, which might corrupt your data.
[ ERROR ] Failed to execute stage 'Setup validation': Hosted Engine setup
detected, but Global Maintenance is not set.
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20190725124653-hvekp2.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20190725125154-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
I have put the cluster on global maintenance, though the engine thinks it
is not.
Are there any steps that I may follow to avoid the above?
I am also attaching the last full setup log.
Thank you!
Alex
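For reference, both complaints in that output can also be dealt with by hand before re-running engine-setup. A rough sketch, assuming the engine database runs locally in /var/lib/pgsql/data (the path and PostgreSQL service name depend on which PostgreSQL instance the engine actually uses):
  # on one of the HA hosts: make sure global maintenance is really set and visible
  hosted-engine --set-maintenance --mode=global
  hosted-engine --vm-status | grep -i maintenance
  # on the engine VM: raise work_mem to the value engine-setup asks for (the value is in kB),
  # then restart PostgreSQL before running engine-setup again
  sed -i 's/^#\?work_mem.*/work_mem = 8192/' /var/lib/pgsql/data/postgresql.conf
  systemctl restart postgresql
The 9.2 vs 9.5 warning itself is expected on the 4.1 to 4.2 path; engine-setup is supposed to migrate the database to the rh-postgresql95 collection as part of the upgrade.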
3 years, 10 months
SPM and Task error ...
by Enrico
Hi all,
my oVirt cluster has 3 hypervisors running CentOS 7.5.1804, vdsm is
4.20.39.1-1.el7,
the oVirt engine is 4.2.4.5-1.el7, and the storage systems are HP MSA P2000 and
2050 (Fibre Channel).
I need to stop one of the hypervisors for maintenance, but this system is
the Storage Pool Manager.
For this reason I decided to manually select the SPM on one of the other
nodes, but this operation is not
successful.
In the oVirt engine (engine.log) the error is this:
2019-07-25 12:39:16,744+02 INFO
[org.ovirt.engine.core.bll.storage.pool.ForceSelectSPMCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] Running command:
ForceSelectSPMCommand internal: false. Entities affected : ID:
81c9bd3c-ae0a-467f-bf7f-63ab30cd8d9e Type: VDSAction group
MANIPULATE_HOST with role type ADMIN
2019-07-25 12:39:16,745+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopOnIrsVDSCommand(
SpmStopOnIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false'}), log id: 37bf4639
2019-07-25 12:39:16,747+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
ResetIrsVDSCommand(
ResetIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
vdsId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
ignoreStopFailed='false'}), log id: 2522686f
2019-07-25 12:39:16,749+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopVDSCommand(HostName = infn-vm05.management,
SpmStopVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7'}), log id: 1810fd8b
2019-07-25 12:39:16,758+02 *ERROR*
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] SpmStopVDSCommand::Not
stopping SPM on vds 'infn-vm05.management', pool id
'18d57688-6ed4-43b8-bd7c-0665b55950b7' as there are uncleared tasks
'Task 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e', status 'running''
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopVDSCommand, log id: 1810fd8b
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
ResetIrsVDSCommand, log id: 2522686f
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopOnIrsVDSCommand, log id: 37bf4639
2019-07-25 12:39:16,760+02 *ERROR*
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] EVENT_ID:
USER_FORCE_SELECTED_SPM_STOP_FAILED(4,096), Failed to force select
infn-vm07.management as the SPM due to a failure to stop the current SPM.
The engine.log then goes on with the engine repeatedly trying, and failing, to clear the stale task:
2019-07-25 12:39:18,660+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,660+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,750+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,750+02 *ERROR*
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
BaseAsyncTask::logEndTaskFailure: Task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' (Parent Command 'Unknown',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
with failure:
2019-07-25 12:39:18,750+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,751+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 34ae2b2f
2019-07-25 12:39:18,752+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: d3a78ad
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: d3a78ad
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: 34ae2b2f
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
BaseAsyncTask::onTaskEndSuccess: Task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' (Parent Command 'Unknown',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
successfully.
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 42de0c2b
2019-07-25 12:39:18,759+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 4895c79c
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: 4895c79c
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: 42de0c2b
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Cleaning zombie
tasks: Clearing async task 'Unknown' that started at 'Fri May 03
14:48:50 CEST 2019'
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,765+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: da77af2
2019-07-25 12:39:18,766+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 530694fb
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: 530694fb
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: da77af2
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
There seems to be some relation between this error and a task that has remained
hanging. From the SPM server:
# vdsm-client Task getInfo taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
{
"verb": "prepareMerge",
"id": "fdcf4d1b-82fe-49a6-b233-323ebe568f8e"
}
# vdsm-client Task getStatus taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
{
"message": "running job 1 of 1",
"code": 0,
"taskID": "fdcf4d1b-82fe-49a6-b233-323ebe568f8e",
"taskResult": "",
"taskState": "running"
}
How can I solve this problem?
Thanks a lot for your help!!
Best Regards
Enrico
--
_______________________________________________________________________
Enrico Becchetti Servizio di Calcolo e Reti
Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY)
Phone:+39 075 5852777 Mail: Enrico.Becchetti<at>pg.infn.it
_______________________________________________________________________
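A possible way out of this state, offered only as a sketch since clearing a task whose prepareMerge job is still genuinely running can cause damage, is to stop the stuck task on the SPM host and then clear it once it is no longer running (this assumes the Task stop/clear verbs exposed by vdsm-client in this vdsm version; the task ID is the one from the output above):
  vdsm-client Task stop taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e        # ask vdsm to abort the task
  vdsm-client Task getStatus taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e   # wait until taskState is no longer "running"
  vdsm-client Task clear taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e       # drop it so the SPM role can be moved
Once no uncleared tasks remain, forcing SPM selection on another host should no longer be refused.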
3 years, 10 months
Clone function seem broken - regarding disk virtual size
by Vrgotic, Marko
Dear oVirt,
All our CentOS templates are “baked” with a 40GB sparse QCOW disk.
When creating a new VM from the template using Thin storage allocation:
* the VM is created with a 40GB disk (reading from the oVirt GUI), and
* inside the OS, fdisk and/or parted report a 40GB disk size;
* the / partition also matches the expected size.
When creating a VM from the template using the default Clone storage allocation:
* the VM is created with a 40GB disk (reading from the oVirt GUI);
* however:
* inside the OS, fdisk and/or parted report only the actually used space, which in our case is 8GB;
* the / partition matches only the actual disk size, not the virtual one.
Why is Clone not taking the virtual size of the template disk into account?
Is it intended behavior that when an image is cloned from a template, only the actual disk size is copied, not the virtual one?
Please assist.
Kindly awaiting your reply.
oVirt 4.3.4 with SHE, with NFS-based and local storage.
— — —
Met vriendelijke groet / Kind regards,
Marko Vrgotic
Sr. System Engineer @ System Administration
m.vrgotic(a)activevideo.com<mailto:m.vrgotic@activevideo.com>
tel. +31 (0)35 677 4131
ActiveVideo BV
Mediacentrum 3741
Joop van den Endeplein 1
1217 WJ Hilversum
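One way to see what the clone actually produced is to compare the image's virtual size and on-disk size directly on the storage domain with qemu-img. A sketch only; the mount point and UUIDs below are placeholders, the real ones are shown in the disk's Storage sub-tab:
  # on a host that mounts the NFS storage domain
  qemu-img info /rhev/data-center/mnt/<nfs-server>:_<export>/<sd-uuid>/images/<disk-uuid>/<vol-uuid>
  # "virtual size" should still read 40G even when "disk size" is only about 8G;
  # if the virtual size itself reads 8G, the clone really was created smaller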
3 years, 10 months
Re: major network changes
by Strahil
The CA can be downloaded via the web, or you can tell curl to just ignore the engine's cert via the '-k' flag.
It will show you if the health page is working.
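For example (the hostname here is the one from your environment and is only illustrative):
  curl -k https://ovengine/ovirt-engine/services/health; echo
  # a healthy engine answers: DB Up!Welcome to Health Status!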
Best Regards,
Strahil Nikolov
On Jul 24, 2019 19:39, carl langlois <crl.langlois(a)gmail.com> wrote:
>
> Strahil, not sure what to put for the --cacert.
>
> Yes Derek, you are right: at one point port 8702 stops listening.
>
> tcp6 0 0 127.0.0.1:8702 :::* LISTEN 1607/ovirt-engine
>
> After some time the line above disappears. I am trying to figure out why this port is being closed after some time when the engine is running on a host on the 248.x network. On the 236.x network this port is kept alive all the time.
> If you have any hint on why this port is closing, do not hesitate, because I am starting to run out of ideas. :-)
>
>
> Thanks & Regards
>
> Carl
>
>
>
>
>
>
> On Wed, Jul 24, 2019 at 11:11 AM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>>
>> A healthy engine should report:
>> [root@ovirt1 ~]# curl --cacert CA https://engine.localdomain/ovirt-engine/services/health;echo
>> DB Up!Welcome to Health Status!
>>
>> Of course you can use the '-k' switch to verify the situation.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Wednesday, July 24, 2019 at 17:43:59 GMT+3, Derek Atkins <derek(a)ihtfp.com> wrote:
>>
>>
>> Hi,
>>
>> carl langlois <crl.langlois(a)gmail.com> writes:
>>
>> > If I try to access http://ovengine/ovirt-engine/services/health
>> > I always get "Service Unavailable" in the browser, and each time I reload it in
>> > the browser I get this in the error_log:
>> >
>> > [proxy_ajp:error] [pid 1868] [client 10.8.1.76:63512] AH00896: failed to make
>> > connection to backend: 127.0.0.1
>> > [Tue Jul 23 14:04:10.074023 2019] [proxy:error] [pid 1416] (111)Connection
>> > refused: AH00957: AJP: attempt to connect to 127.0.0.1:8702 (127.0.0.1) failed
>>
>> Sounds like a service isn't running on port 8702.
>>
>> > Thanks & Regards
>> >
>> > Carl
>>
>> -derek
>>
>> --
>> Derek Atkins 617-623-3745
>>
>> derek(a)ihtfp.com
>> www.ihtfp.com
>> Computer and Internet Security Consultant
3 years, 10 months