Hello again, Shani, and thank you very much for your help.
Here I attach the engine.log lines related to moving the disk with an
illegal snapshot to another storage domain, and to the deletion of the
previously illegal snapshot. I hope you find them useful.
Thank you!!
2019-02-25 11:58:46,386+01 INFO
[org.ovirt.engine.core.bll.storage.disk.MoveDiskCommand] (default task-82)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Lock Acquired to object
'EngineLock:{exclusiveLocks='[0d9b635d-22ef-4456-9b10-20be027baa7a=DISK]',
sharedLocks=''}'
2019-02-25 11:58:46,390+01 INFO
[org.ovirt.engine.core.bll.storage.disk.MoveDiskCommand] (default task-82)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Running command: MoveDiskCommand
internal: false. Entities affected : ID:
0d9b635d-22ef-4456-9b10-20be027baa7a Type: DiskAction group
CONFIGURE_DISK_STORAGE with role type USER
2019-02-25 11:58:46,965+01 INFO
[org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand] (default
task-82) [f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Running command:
LiveMigrateDiskCommand internal: true.
Entities affected : ID: 0d9b635d-22ef-4456-9b10-20be027baa7a Type:
DiskAction group DISK_LIVE_STORAGE_MIGRATION with role type USER
2019-02-25 11:58:47,133+01 INFO
[org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand]
(default task-82) [f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Running command:
CreateAllSnapshotsFromVmCommand
internal: true. Entities affected : ID:
0e7e53c6-ad15-484a-b084-4647b50a2567 Type: VMAction group
MANIPULATE_VM_SNAPSHOTS with role type USER
2019-02-25 11:58:47,177+01 INFO
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand] (default
task-82) [f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Running command:
CreateSnapshotCommand internal: true. Entities affected : ID:
00000000-0000-0000-0000-000000000000 Type: Storage
2019-02-25 11:58:47,224+01 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default
task-82) [f24bee33-3e0d-4726-a9f0-9ee1c4caf063] START,
CreateVolumeVDSCommand(
CreateVolumeVDSCommandParameters:{storagePoolId='fa64792e-73b3-4da2-9d0b-f334422aaccf',
ignoreFailoverLimit='false',
storageDomainId='e655abce-c5e8-44f3-8d50-9fd76edf05cb',
imageGroupId='0d9b635d-22ef-4456-9b10-20be027baa7a',
imageSizeInBytes='10737418240', volumeFormat='COW',
newImageId='0e5c1398-461c-4ca3-ba6e-d3ca3bf01a2a', imageType='Sparse',
newImageDescription='', imageInitialSizeInBytes='0',
imageId='f65be7d6-d1bf-4db3-aa04-fd59265d5ebc',
sourceImageGroupId='0d9b635d-22ef-4456-9b10-20be027baa7a'}), log id:
20ddcc22
2019-02-25 11:58:47,284+01 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default
task-82) [f24bee33-3e0d-4726-a9f0-9ee1c4caf063] FINISH,
CreateVolumeVDSCommand, return: 0e5c1398-461c-4ca3-ba6e-d3ca3bf01a2a,
log id: 20ddcc22
2019-02-25 11:58:47,292+01 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (default task-82)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] CommandAsyncTask::Adding
CommandMultiAsyncTasks object for command
'3fbee088-e491-4041-9d44-8c5528dbe33f'
2019-02-25 11:58:47,292+01 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (default task-82)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] CommandMultiAsyncTasks::attachTask:
Attaching task '32124952-a4ac-4ef2-832d-7e2417996658' to command
'3fbee088-e491-4041-9d44-8c5528dbe33f'.
2019-02-25 11:58:47,326+01 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (default task-82)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Adding task
'32124952-a4ac-4ef2-832d-7e2417996658' (Parent Command 'CreateSnapshot',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling
hasn't started yet..
2019-02-25 11:58:47,404+01 WARN
[org.ovirt.engine.core.utils.ovf.OvfVmWriter] (default task-82)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] There are no users with permissions
on VM 0e7e53c6-ad15-484a-b084-4647b50a2567 to write
2019-02-25 11:58:47,458+01 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-82) [f24bee33-3e0d-4726-a9f0-9ee1c4caf063] EVENT_ID:
USER_CREATE_SNAPSHOT(45), Snapshot 'vm.example.com_Disk1 Auto-generated
for Live Storage Migration' creation for VM 'vm.example.com' was initiated
by admin@internal-authz.
2019-02-25 11:58:47,459+01 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (default task-82)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] BaseAsyncTask::startPollingTask:
Starting to poll task '32124952-a4ac-4ef2-832d-7e2417996658'.
2019-02-25 11:58:47,507+01 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-82) [f24bee33-3e0d-4726-a9f0-9ee1c4caf063] EVENT_ID:
USER_MOVED_DISK(2,008), User admin@internal-authz moving disk
vm.example.com_Disk1 to domain PreproductionStorage01.
2019-02-25 11:58:47,527+01 INFO
[org.ovirt.engine.core.bll.storage.disk.MoveDiskCommand] (default task-82)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Lock freed to object
'EngineLock:{exclusiveLocks='[0d9b635d-22ef-4456-9b10-20be027baa7a=DISK]',
sharedLocks=''}'
2019-02-25 11:58:49,336+01 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-79)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Command 'CreateAllSnapshotsFromVm'
(id: 'a828e3ff-b755-4161-931c-c1b43d93f7b0') waiting on child command
id: '3fbee088-e491-4041-9d44-8c5528dbe33f' type:'CreateSnapshot' to
complete
2019-02-25 11:58:49,411+01 INFO
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-79)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Command 'LiveMigrateDisk' (id:
'6c61fe22-ba06-420d-b253-33ded0c2c685') waiting on child command
id: 'a828e3ff-b755-4161-931c-c1b43d93f7b0'
type:'CreateAllSnapshotsFromVm' to complete
2019-02-25 11:58:50,213+01 INFO
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand]
(EE-ManagedThreadFactory-engine-Thread-67)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Command
[id=3fbee088-e491-4041-9d44-8c5528dbe33f]: Updating status to 'FAILED',
The command end method logic will be executed by one of its parent commands.
2019-02-25 11:58:50,213+01 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(EE-ManagedThreadFactory-engine-Thread-67)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063]
CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
action type 'CreateSnapshot' completed, handling the result.
2019-02-25 11:58:50,213+01 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(EE-ManagedThreadFactory-engine-Thread-67)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063]
CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
action type 'CreateSnapshot' succeeded, clearing tasks.
2019-02-25 11:58:50,213+01 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engine-Thread-67)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] SPMAsyncTask::ClearAsyncTask:
Attempting to clear task '32124952-a4ac-4ef2-832d-7e2417996658'
2019-02-25 11:58:50,214+01 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-67)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] START, SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='fa64792e-73b3-4da2-9d0b-f334422aaccf',
ignoreFailoverLimit='false',
taskId='32124952-a4ac-4ef2-832d-7e2417996658'}), log id: 1317d3
2019-02-25 11:58:50,215+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-67)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] START,
HSMClearTaskVDSCommand(HostName = hood17.pic.es,
HSMTaskGuidBaseVDSCommandParameters:{hostId='734244c1-fbf4-4fd4-ba56-40c21d8b0e4d',
taskId='32124952-a4ac-4ef2-832d-7e2417996658'}), log id: 73f68faf
2019-02-25 11:58:50,221+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-67)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] FINISH, HSMClearTaskVDSCommand, log
id: 73f68faf
2019-02-25 11:58:50,221+01 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-67)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] FINISH, SPMClearTaskVDSCommand, log
id: 1317d3
2019-02-25 11:58:50,228+01 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engine-Thread-67)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] BaseAsyncTask::removeTaskFromDB:
Removed task '32124952-a4ac-4ef2-832d-7e2417996658' from DataBase
2019-02-25 11:58:50,228+01 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(EE-ManagedThreadFactory-engine-Thread-67)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063]
CommandAsyncTask::HandleEndActionResult [within thread]: Removing
CommandMultiAsyncTasks object for entity
'3fbee088-e491-4041-9d44-8c5528dbe33f'
2019-02-25 11:58:53,440+01 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-33)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Command 'CreateAllSnapshotsFromVm'
id: 'a828e3ff-b755-4161-931c-c1b43d93f7b0' child commands
'[3fbee088-e491-4041-9d44-8c5528dbe33f]' executions were completed, status
'FAILED'
2019-02-25 11:58:53,537+01 INFO
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-33)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Command 'LiveMigrateDisk' (id:
'6c61fe22-ba06-420d-b253-33ded0c2c685') waiting on child command id:
'a828e3ff-b755-4161-931c-c1b43d93f7b0' type:'CreateAllSnapshotsFromVm' to
complete
2019-02-25 11:58:54,602+01 ERROR
[org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-96)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Ending command
'org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand' with
failure.
2019-02-25 11:58:54,627+01 ERROR
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-96)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Ending command
'org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand' with failure.
2019-02-25 11:58:54,785+01 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-96) [] EVENT_ID:
USER_CREATE_SNAPSHOT_FINISHED_FAILURE(69), Failed to complete snapshot
'vm.example.com_Disk1 Auto-generated for Live Storage Migration' creation
for VM 'vm.example.com'.
2019-02-25 11:58:54,981+01 INFO
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-96)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Command 'LiveMigrateDisk' id:
'6c61fe22-ba06-420d-b253-33ded0c2c685' child commands
'[a828e3ff-b755-4161-931c-c1b43d93f7b0]' executions were completed, status
'FAILED'
2019-02-25 11:58:56,121+01 ERROR
[org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-9)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Ending command
'org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand' with failure.
2019-02-25 11:58:56,121+01 ERROR
[org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-9)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Failed during live storage migration
of disk '0d9b635d-22ef-4456-9b10-20be027baa7a' of vm
'0e7e53c6-ad15-484a-b084-4647b50a2567', attempting to end replication
before deleting the target disk
2019-02-25 11:58:56,122+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-9)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] START,
VmReplicateDiskFinishVDSCommand(HostName = hood14.pic.es,
VmReplicateDiskParameters:{hostId='9b6fc918-ef12-4180-bd0b-c81c3cda685e',
vmId='0e7e53c6-ad15-484a-b084-4647b50a2567',
storagePoolId='fa64792e-73b3-4da2-9d0b-f334422aaccf',
srcStorageDomainId='e655abce-c5e8-44f3-8d50-9fd76edf05cb',
targetStorageDomainId='e655abce-c5e8-44f3-8d50-9fd76edf05cb',
imageGroupId='0d9b635d-22ef-4456-9b10-20be027baa7a',
imageId='f65be7d6-d1bf-4db3-aa04-fd59265d5ebc'}), log id: 36f598e9
2019-02-25 11:58:56,125+01 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-9)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Failed in
'VmReplicateDiskFinishVDS' method
2019-02-25 11:58:56,131+01 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-9)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM hood14.pic.es command
VmReplicateDiskFinishVDS failed: Drive image file could not be found
2019-02-25 11:58:56,131+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-9)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand'
return value 'StatusOnlyReturn [status=Status [code=13, message=Drive image
file could not be found]]'
2019-02-25 11:58:56,131+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-9)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] HostName = hood14.pic.es
2019-02-25 11:58:56,131+01 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-9)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Command
'VmReplicateDiskFinishVDSCommand(HostName = hood14.pic.es,
VmReplicateDiskParameters:{hostId='9b6fc918-ef12-4180-bd0b-c81c3cda685e',
vmId='0e7e53c6-ad15-484a-b084-4647b50a2567',
storagePoolId='fa64792e-73b3-4da2-9d0b-f334422aaccf',
srcStorageDomainId='e655abce-c5e8-44f3-8d50-9fd76edf05cb',
targetStorageDomainId='e655abce-c5e8-44f3-8d50-9fd76edf05cb',
imageGroupId='0d9b635d-22ef-4456-9b10-20be027baa7a',
imageId='f65be7d6-d1bf-4db3-aa04-fd59265d5ebc'})' execution failed:
VDSGenericException: VDSErrorException: Failed to VmReplicateDiskFinishVDS,
error = Drive image file could not be found, code = 13
2019-02-25 11:58:56,131+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-9)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] FINISH,
VmReplicateDiskFinishVDSCommand, log id: 36f598e9
2019-02-25 11:58:56,131+01 ERROR
[org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-9)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Replication end of disk
'0d9b635d-22ef-4456-9b10-20be027baa7a' in vm
'0e7e53c6-ad15-484a-b084-4647b50a2567' back to the source failed, skipping
deletion of the target disk
2019-02-25 11:58:56,142+01 WARN
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(EE-ManagedThreadFactory-engineScheduled-Thread-9)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Trying to release exclusive lock
which does not exist, lock key:
'0e7e53c6-ad15-484a-b084-4647b50a2567LIVE_STORAGE_MIGRATION'
2019-02-25 11:58:56,142+01 INFO
[org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-9)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] Lock freed to object
'EngineLock:{exclusiveLocks='[0e7e53c6-ad15-484a-b084-4647b50a2567=LIVE_STORAGE_MIGRATION]',
sharedLocks=''}'
2019-02-25 11:58:56,176+01 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-9)
[f24bee33-3e0d-4726-a9f0-9ee1c4caf063] EVENT_ID:
USER_MOVED_DISK_FINISHED_FAILURE(2,011), User admin@internal-authz have
failed to move disk vm.example.com_Disk1 to domain PreproductionStorage01.
2019-02-25 11:59:33,685+01 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-23)
[b257b938-4a81-4c21-a53e-e346510d9430] EVENT_ID:
USER_REMOVE_DISK_SNAPSHOT(373), Disk 'vm.example.com_Disk1' from
Snapshot(s) 'backup_snapshot_20190217-010004' of VM 'vm.example.com'
deletion was initiated by admin@internal-authz.
2019-02-25 12:00:07,063+01 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-20)
[b257b938-4a81-4c21-a53e-e346510d9430] EVENT_ID:
USER_REMOVE_DISK_SNAPSHOT_FINISHED_SUCCESS(375), Disk
'vm.example.com_Disk1' from Snapshot(s) 'backup_snapshot_20190217-010004'
of VM 'vm.example.com' deletion has been completed (User:
admin@internal-authz).
On Mon, Feb 25, 2019 at 11:52 AM Bruno Rodriguez <bruno(a)pic.es> wrote:
Thank you very much, Shani
I've got what you asked for, and some good news as well. I'll start with
the good news: the snapshots' status changes to OK if I live-migrate the
disk to another storage domain in the same cabin (storage array). Then I
can delete them. I just tried it with two illegal snapshots and it worked;
I'll send you the engine logs as soon as I can check them.
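For anyone who hits this later, a minimal sketch of that workaround with
the Python SDK (ovirtsdk4) might look like the following. The engine URL,
credentials, CA file, target domain name, and snapshot ID are placeholders
(the disk and VM IDs are the ones from the logs above), and you should
verify the move actually completes before removing anything:

import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholders: adjust URL, credentials, and CA file for your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
try:
    disk_service = connection.system_service().disks_service() \
        .disk_service('0d9b635d-22ef-4456-9b10-20be027baa7a')

    # Live-migrate the disk to another storage domain in the same array.
    disk_service.move(storage_domain=types.StorageDomain(name='OtherDomain'))

    # Wait for the move to finish before touching the snapshots.
    while disk_service.get().status != types.DiskStatus.OK:
        time.sleep(10)

    # Once the snapshot shows as OK again, remove it through the VM's
    # snapshots service (take the snapshot ID from the UI or the API).
    connection.system_service().vms_service() \
        .vm_service('0e7e53c6-ad15-484a-b084-4647b50a2567') \
        .snapshots_service().snapshot_service('<snapshot-id>').remove()
finally:
    connection.close()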
If I run the following query against the database, I get 0 rows:
# select * from images where vm_snapshot_id IN
('f649d9c1-563e-49d4-9fad-6bc94abc279b',
'5734df23-de67-41a8-88a1-423cecfe7260',
'f649d9c1-563e-49d4-9fad-6bc94abc279b',
'2929df28-eae8-4f27-afee-a984fe0b07e7',
'4bd4360e-e0f4-4629-ab38-2f0d80d3ae0f',
'fbaff53b-30ce-4b20-8f10-80e70becb48c',
'c628386a-da6c-4a0d-ae7d-3e6ecda27d6d',
'e9ddaa5c-007d-49e6-8384-efefebb00aa6',
'5b6db52a-bfe3-45f8-b7bb-d878c4e63cb4',
'7efe2e7e-ca24-4b27-b512-b42795c79ea4');
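As another cross-check (a sketch, assuming the standard engine schema,
where images.image_guid is the volume UUID shown by dump-volume-chains and
imagestatus = 4 means ILLEGAL), the illegal leaf volumes can also be
looked up directly, for example with two of the volume IDs from the chain
dump below:

# select image_guid, image_group_id, vm_snapshot_id, imagestatus
    from images
   where image_guid in ('f65be7d6-d1bf-4db3-aa04-fd59265d5ebc',
                        'fa154782-0dbb-45b5-ba62-d6937259f097');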
BTW, the dump-volume-chains output is here. I removed from it the disks
with no snapshots in illegal status
Images volume chains (base volume first)
image: 4dab82be-7068-4234-8478-45ec51a29bc0
- 04f7bafd-aa56-4940-9c4e-e4055972284e
status: OK, voltype: INTERNAL, format: COW, legality:
LEGAL, type: SPARSE
- 0fe7e3bf-24a9-45b3-9f33-9d93349b2ac1
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: 7428f6fd-ae2e-4691-b718-1422b1150798
- a81b454e-17b1-4c6a-a7bc-803a0e3b2b03
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: c111cb15-5465-4056-b0bb-bd94565d6060
- 2443c7f2-9ee3-4a92-a80e-c3ee1a481311
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: 2900bc6b-3345-42a4-9533-667fb4009deb
- e9ddaa5c-007d-49e6-8384-efefebb00aa6
status: OK, voltype: INTERNAL, format: COW, legality:
LEGAL, type: SPARSE
- 8be46861-3d44-4dd3-9216-a2817f9d0db9
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: 79663e1f-c6a7-4193-91fa-d5cbe02f81e3
- f649d9c1-563e-49d4-9fad-6bc94abc279b
status: OK, voltype: INTERNAL, format: COW, legality:
LEGAL, type: SPARSE
- f0a381bb-4110-43de-999c-99ae9cb03a5a
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: e1f3dafb-64be-46a9-a3dc-1f3f7fcd1960
- 4bd4360e-e0f4-4629-ab38-2f0d80d3ae0f
status: OK, voltype: INTERNAL, format: COW, legality:
LEGAL, type: SPARSE
- 0fd199eb-ab26-46de-b0b4-b1be58d635c2
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: fcdeeff4-1f1a-4ced-b6e9-0746c54a7fe9
- 5b6db52a-bfe3-45f8-b7bb-d878c4e63cb4
status: OK, voltype: INTERNAL, format: COW, legality:
LEGAL, type: SPARSE
- c133af2a-6642-4809-b2cc-267d4cbadec9
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: 06216874-7c66-4e8c-ab3b-4dd4ae8281c1
- 32459132-88f9-460d-8fb5-17ac41413cff
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: 4b87c897-062c-4e71-874b-ce48376cc463
- 16ac155d-cf61-4eb2-8227-6503ee6d3414
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: 24acdba6-f600-464d-a662-22e42bbcdf1c
- b04cc993-164e-49f1-9d46-e013bf3e66f3
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: 562bf24c-d69c-4ec5-a670-e5eba5064fab
- c628386a-da6c-4a0d-ae7d-3e6ecda27d6d
status: OK, voltype: INTERNAL, format: COW, legality:
LEGAL, type: SPARSE
- ecab3f58-0ae3-4d2a-b0df-dd712b3d5a70
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: 0cc6d13d-5878-4fd2-bf54-5aef95017bc5
- 05aa8ec3-674b-4a8e-be39-2bddc18369e4
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: 30a88bec-4262-47ce-a329-59ed5350e10a
- a96de756-275e-4e64-a9c9-c8cc221f2bac
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: 818a3dbe-9cfa-4355-b4b6-c2dda51beef5
- ba422faf-25df-45d0-80aa-c4cad166fcbd
status: OK, voltype: INTERNAL, format: COW, legality:
LEGAL, type: SPARSE
- df419136-ebcc-4087-8b5c-2c5a29f0dcfd
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: c5cc464e-eb71-4edf-a780-60180c592a6f
- 5734df23-de67-41a8-88a1-423cecfe7260
status: OK, voltype: INTERNAL, format: COW, legality:
LEGAL, type: SPARSE
- fa154782-0dbb-45b5-ba62-d6937259f097
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: ded89dac-ce57-4c09-8ca5-dd131e0776df
- fbaff53b-30ce-4b20-8f10-80e70becb48c
status: OK, voltype: INTERNAL, format: COW, legality:
LEGAL, type: SPARSE
- 2feae690-0301-4ec3-9277-57ad3168141b
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: f2e35007-ff0f-4f74-90ec-1a28d342d564
- 096faec2-e0e0-47a5-9a24-9332de963bfb
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: 0d9b635d-22ef-4456-9b10-20be027baa7a
- 2929df28-eae8-4f27-afee-a984fe0b07e7
status: OK, voltype: INTERNAL, format: RAW, legality:
LEGAL, type: PREALLOCATED
- f65be7d6-d1bf-4db3-aa04-fd59265d5ebc
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: 1fd83784-dc53-4f40-ac13-ac2f046aa81f
- b4139999-1671-416c-bebc-3415ee7e61b2
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: 145e5438-b7e8-45fd-81ea-3eb10a07acb6
- baa84312-8c4e-4ac1-aa50-27836b67e5f6
status: OK, voltype: INTERNAL, format: COW, legality:
LEGAL, type: SPARSE
- 2a95e1fb-7e42-4589-836b-76b45f03f854
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
image: dc856555-bfef-4bfa-8351-59b5a4f04331
- 7efe2e7e-ca24-4b27-b512-b42795c79ea4
status: OK, voltype: INTERNAL, format: COW, legality:
LEGAL, type: SPARSE
- a3f715c2-efc4-4421-aa4b-efce5a853e9e
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE
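(In case it helps to match this output against the database: the illegal
leaves can be pulled out programmatically. Below is a quick parsing
sketch, assuming the vdsm-tool output format shown above and the storage
domain ID from the logs.)

import re
import subprocess

# Sketch: print (image, volume) pairs whose legality is ILLEGAL, so they
# can be compared with the engine DB. Assumes the format shown above.
out = subprocess.run(
    ['vdsm-tool', 'dump-volume-chains',
     'e655abce-c5e8-44f3-8d50-9fd76edf05cb'],
    capture_output=True, text=True, check=True,
).stdout

image = volume = None
for line in out.splitlines():
    m = re.match(r'\s*image:\s+(\S+)', line)
    if m:
        image = m.group(1)
        continue
    m = re.match(r'\s*-\s+(\S+)', line)
    if m:
        volume = m.group(1)
        continue
    if 'legality: ILLEGAL' in line and image and volume:
        print(image, volume)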
Thank you very much!!!
Bruno
On Mon, Feb 25, 2019 at 11:32 AM Shani Leviim <sleviim(a)redhat.com> wrote:
> Hi Bruno,
> Can you please share the output of:
> vdsm-tool dump-volume-chains <the-relevant-storage-domain-id>
>
> Also, can you see those images on the 'images' table?
>
>
> *Regards,*
>
> *Shani Leviim*
>
>
> On Mon, Feb 25, 2019 at 9:32 AM Bruno Rodriguez <bruno(a)pic.es> wrote:
>
>> Good morning, Shani
>>
>> I'm not trying to deactivate any disk, because the VM using it is
>> working. I can't turn it off: I'm pretty sure that if I do, I won't be
>> able to turn it on again. In fact, the web interface is telling me that
>> if I turn it off it possibly won't restart :(
>>
>> From what I can check, I have no information in the database about any
>> of the snapshots I provided
>>
>> engine=# select * from snapshots where snapshot_id IN
>> ('f649d9c1-563e-49d4-9fad-6bc94abc279b',
>> '5734df23-de67-41a8-88a1-423cecfe7260',
>> 'f649d9c1-563e-49d4-9fad-6bc94abc279b',
>> '2929df28-eae8-4f27-afee-a984fe0b07e7',
>> '4bd4360e-e0f4-4629-ab38-2f0d80d3ae0f',
>> 'fbaff53b-30ce-4b20-8f10-80e70becb48c',
>> 'c628386a-da6c-4a0d-ae7d-3e6ecda27d6d',
>> 'e9ddaa5c-007d-49e6-8384-efefebb00aa6',
>> '5b6db52a-bfe3-45f8-b7bb-d878c4e63cb4',
>> '7efe2e7e-ca24-4b27-b512-b42795c79ea4');
>> snapshot_id | vm_id | snapshot_type | status | description |
>> creation_date | app_list | vm_configuration | _create_date |
>> _update_date | memory_volume | memory_metadata_disk_id |
>> memory_dump_disk_id | vm_configuration_broken
>> -------------+-------+---------------+--------+-------------+---------------+----------+------------------+--------------+--------------+---------------+-------------------------+---------------------+-------------------------
>> (0 rows)
>>
>>
>> Thank you
>>
>>
>> On Sun, Feb 24, 2019 at 12:16 PM Shani Leviim <sleviim(a)redhat.com>
>> wrote:
>>
>>> Hi Bruno,
>>>
>>> It seems that the disk you're trying to deactivate is in use ( Logical
>>> volume
>>> e655abce-c5e8-44f3-8d50-9fd76edf05cb/fa154782-0dbb-45b5-ba62-d6937259f097
>>> in use).
>>> Is there any task that uses that disk?
>>>
>>> Also, did you try to verify the snapshot's creation date with the DB?
>>> ( select * from snapshots; )
>>>
>>>
>>> *Regards*
>>>
>>> *Shani Leviim*
>>>
>>>
>>> On Fri, Feb 22, 2019 at 6:08 PM Bruno Rodriguez <bruno(a)pic.es> wrote:
>>>
>>>> Hello,
>>>>
>>>> We are experiencing some problems with some snapshots in illegal
>>>> status generated with the Python API. I think I'm not the only one,
>>>> which is not a relief, but I hope someone can help with it.
>>>>
>>>> I'm a bit scared because, from what I see, the creation date in the
>>>> engine for every snapshot is way different from the date when it was
>>>> really created. The name of each snapshot is in the format
>>>> backup_snapshot_YYYYMMDD-HHMMSS, but as you can see in the following
>>>> examples, the stored date is totally random...
>>>>
>>>> Size    Creation Date              Snapshot Description             Status   Disk Snapshot ID
>>>> 33 GiB  Mar 2, 2018, 5:03:57 PM    backup_snapshot_20190217-011645  Illegal  5734df23-de67-41a8-88a1-423cecfe7260
>>>> 33 GiB  May 8, 2018, 10:02:56 AM   backup_snapshot_20190216-013047  Illegal  f649d9c1-563e-49d4-9fad-6bc94abc279b
>>>> 10 GiB  Feb 21, 2018, 11:10:17 AM  backup_snapshot_20190217-010004  Illegal  2929df28-eae8-4f27-afee-a984fe0b07e7
>>>> 43 GiB  Feb 2, 2018, 12:55:51 PM   backup_snapshot_20190216-015544  Illegal  4bd4360e-e0f4-4629-ab38-2f0d80d3ae0f
>>>> 11 GiB  Feb 13, 2018, 12:51:08 PM  backup_snapshot_20190217-010541  Illegal  fbaff53b-30ce-4b20-8f10-80e70becb48c
>>>> 11 GiB  Feb 13, 2018, 4:05:39 PM   backup_snapshot_20190217-011207  Illegal  c628386a-da6c-4a0d-ae7d-3e6ecda27d6d
>>>> 11 GiB  Feb 13, 2018, 4:38:25 PM   backup_snapshot_20190216-012058  Illegal  e9ddaa5c-007d-49e6-8384-efefebb00aa6
>>>> 11 GiB  Feb 13, 2018, 10:52:09 AM  backup_snapshot_20190216-012550  Illegal  5b6db52a-bfe3-45f8-b7bb-d878c4e63cb4
>>>> 55 GiB  Jan 22, 2018, 5:02:29 PM   backup_snapshot_20190217-012659  Illegal  7efe2e7e-ca24-4b27-b512-b42795c79ea4
>>>>
>>>>
>>>> When I pull the logs for the first one, to check what happened to it,
>>>> I get the following:
>>>>
>>>> 2019-02-17 01:16:45,839+01 INFO
>>>> [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand]
>>>> (default task-100) [96944daa-c90a-4ad7-a556-c98e66550f87] START,
>>>> CreateVolumeVDSCommand(
>>>> CreateVolumeVDSCommandParameters:{storagePoolId='fa64792e-73b3-4da2-9d0b-f334422aaccf',
>>>> ignoreFailoverLimit='false',
>>>> storageDomainId='e655abce-c5e8-44f3-8d50-9fd76edf05cb',
>>>> imageGroupId='c5cc464e-eb71-4edf-a780-60180c592a6f',
>>>> imageSizeInBytes='32212254720', volumeFormat='COW',
>>>> newImageId='fa154782-0dbb-45b5-ba62-d6937259f097', imageType='Sparse',
>>>> newImageDescription='', imageInitialSizeInBytes='0',
>>>> imageId='5734df23-de67-41a8-88a1-423cecfe7260',
>>>> sourceImageGroupId='c5cc464e-eb71-4edf-a780-60180c592a6f'}), log id:
>>>> 497c168a
>>>> 2019-02-17 01:18:26,506+01 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
>>>> (default task-212) [19f00d3e-5159-48aa-b3a0-615a085b62d9] START,
>>>> GetVolumeInfoVDSCommand(HostName = hood13.pic.es,
>>>> GetVolumeInfoVDSCommandParameters:{hostId='0a774472-5737-4ea2-b49a-6f0ea4572199',
>>>> storagePoolId='fa64792e-73b3-4da2-9d0b-f334422aaccf',
>>>> storageDomainId='e655abce-c5e8-44f3-8d50-9fd76edf05cb',
>>>> imageGroupId='c5cc464e-eb71-4edf-a780-60180c592a6f',
>>>> imageId='5734df23-de67-41a8-88a1-423cecfe7260'}), log id: 111a34cf
>>>> 2019-02-17 01:18:26,764+01 INFO
>>>> [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
>>>> (default task-212) [19f00d3e-5159-48aa-b3a0-615a085b62d9] Successfully
>>>> added Download disk 'vm.example.com_Disk1' (id
>>>> '5734df23-de67-41a8-88a1-423cecfe7260') for image transfer command
>>>> '11104d8c-2a9b-4924-96ce-42ef66725616'
>>>> 2019-02-17 01:18:27,310+01 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.AddImageTicketVDSCommand]
>>>> (default task-212) [19f00d3e-5159-48aa-b3a0-615a085b62d9] START,
>>>> AddImageTicketVDSCommand(HostName = hood11.pic.es,
>>>> AddImageTicketVDSCommandParameters:{hostId='79cbda85-35f4-44df-b309-01b57bc2477e',
>>>> ticketId='0d389c3e-5ea5-4886-8ea7-60a1560e3b2d', timeout='300',
>>>> operations='[read]', size='35433480192',
>>>> url='file:///rhev/data-center/mnt/blockSD/e655abce-c5e8-44f3-8d50-9fd76edf05cb/images/c5cc464e-eb71-4edf-a780-60180c592a6f/5734df23-de67-41a8-88a1-423cecfe7260',
>>>> filename='null'}), log id: f5de141
>>>> 2019-02-17 01:22:28,898+01 INFO
>>>> [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-100)
>>>> [19f00d3e-5159-48aa-b3a0-615a085b62d9] Renewing transfer ticket for
>>>> Download disk 'vm.example.com_Disk1' (id
>>>> '5734df23-de67-41a8-88a1-423cecfe7260')
>>>> 2019-02-17 01:26:33,588+01 INFO
>>>> [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-100)
>>>> [19f00d3e-5159-48aa-b3a0-615a085b62d9] Renewing transfer ticket for
>>>> Download disk 'vm.example.com_Disk1' (id
>>>> '5734df23-de67-41a8-88a1-423cecfe7260')
>>>> 2019-02-17 01:26:49,319+01 INFO
>>>> [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-6)
>>>> [19f00d3e-5159-48aa-b3a0-615a085b62d9] Finalizing successful transfer
>>>> for Download disk 'vm.example.com_Disk1' (id
>>>> '5734df23-de67-41a8-88a1-423cecfe7260')
>>>> 2019-02-17 01:27:17,771+01 ERROR
>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-6)
>>>> [19f00d3e-5159-48aa-b3a0-615a085b62d9] EVENT_ID:
>>>> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM hood11.pic.es command
>>>> TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General
>>>> Storage Exception: ("5 [] [\' Logical volume
>>>> e655abce-c5e8-44f3-8d50-9fd76edf05cb/5734df23-de67-41a8-88a1-423cecfe7260
>>>> in use.\', \' Logical volume
>>>> e655abce-c5e8-44f3-8d50-9fd76edf05cb/fa154782-0dbb-45b5-ba62-d6937259f097
>>>> in use.\']\\ne655abce-c5e8-44f3-8d50-9fd76edf05cb/[\'5734df23-de67-41a8-88a1-423cecfe7260\',
>>>> \'fa154782-0dbb-45b5-ba62-d6937259f097\']",)',)
>>>> 2019-02-17 01:27:17,772+01 ERROR
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand]
>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-6)
>>>> [19f00d3e-5159-48aa-b3a0-615a085b62d9] Command
>>>> 'TeardownImageVDSCommand(HostName = hood11.pic.es,
>>>> ImageActionsVDSCommandParameters:{hostId='79cbda85-35f4-44df-b309-01b57bc2477e'})'
>>>> execution failed: VDSGenericException: VDSErrorException: Failed in
>>>> vdscommand to TeardownImageVDS, error = Cannot deactivate Logical
>>>> Volume: ('General Storage Exception: ("5 [] [\' Logical volume
>>>> e655abce-c5e8-44f3-8d50-9fd76edf05cb/5734df23-de67-41a8-88a1-423cecfe7260
>>>> in use.\', \' Logical volume
>>>> e655abce-c5e8-44f3-8d50-9fd76edf05cb/fa154782-0dbb-45b5-ba62-d6937259f097
>>>> in use.\']\\ne655abce-c5e8-44f3-8d50-9fd76edf05cb/[\'5734df23-de67-41a8-88a1-423cecfe7260\',
>>>> \'fa154782-0dbb-45b5-ba62-d6937259f097\']",)',)
>>>> 2019-02-17 01:27:19,204+01 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
>>>> (EE-ManagedThreadFactory-commandCoordinator-Thread-6)
>>>> [e73bfbec-d6bd-40e5-8009-97e6005ad1d4] START, MergeVDSCommand(HostName =
>>>> hood11.pic.es,
>>>> MergeVDSCommandParameters:{hostId='79cbda85-35f4-44df-b309-01b57bc2477e',
>>>> vmId='addd5eba-9078-41aa-9244-fe485aded951',
>>>> storagePoolId='fa64792e-73b3-4da2-9d0b-f334422aaccf',
>>>> storageDomainId='e655abce-c5e8-44f3-8d50-9fd76edf05cb',
>>>> imageGroupId='c5cc464e-eb71-4edf-a780-60180c592a6f',
>>>> imageId='fa154782-0dbb-45b5-ba62-d6937259f097',
>>>> baseImageId='5734df23-de67-41a8-88a1-423cecfe7260',
>>>> topImageId='fa154782-0dbb-45b5-ba62-d6937259f097', bandwidth='0'}),
>>>> log id: 1d956d48
>>>> 2019-02-17 01:27:29,923+01 INFO
>>>> [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-49)
>>>> [19f00d3e-5159-48aa-b3a0-615a085b62d9] Transfer was
>>>> successful. Download disk 'vm.example.com_Disk1' (id
>>>> '5734df23-de67-41a8-88a1-423cecfe7260')
>>>> 2019-02-17 01:27:30,997+01 INFO
>>>> [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
>>>> (EE-ManagedThreadFactory-engineScheduled-Thread-46)
>>>> [19f00d3e-5159-48aa-b3a0-615a085b62d9] Successfully
>>>> transferred disk '5734df23-de67-41a8-88a1-423cecfe7260' (command id
>>>> '11104d8c-2a9b-4924-96ce-42ef66725616')
>>>>
>>>>
>>>> From what I understand, there was a "Cannot deactivate Logical Volume"
>>>> problem; I would be very grateful if someone could help me figure out
>>>> how to solve it.
>>>>
>>>> Thanks again
>>>>
>>>> --
>>>> Bruno Rodríguez Rodríguez
>>>>
>>>> "Si algo me ha enseñado el tetris, es que los errores se acumulan y
>>>> los triunfos desaparecen"
>>>>
>>>
>>
>> --
>> Bruno Rodríguez Rodríguez
>>
>> *Port d'Informació Científica (PIC)*
>> Campus UAB - Edificio D,
>> C / Albareda, s / n
>> 08193-Bellaterra (Barcelona), España
>> Telf. +34 93 170 27 30
>> GPS coordenadas: 41.500850 2.110628
>>
>> "Si algo me ha enseñado el tetris, es que los errores se acumulan y los
>> triunfos desaparecen"
>>
>
--
Bruno Rodríguez Rodríguez
*Port d'Informació Científica (PIC)*
Campus UAB - Edificio D,
C / Albareda, s / n
08193-Bellaterra (Barcelona), España
Telf. +34 93 170 27 30
GPS coordenadas: 41.500850 2.110628
"Si algo me ha enseñado el tetris, es que los errores se acumulan y los
triunfos desaparecen"