Thanks!
+Fred Rolland <frolland(a)redhat.com>, this seems like the same issue as the one
reported in https://bugzilla.redhat.com/show_bug.cgi?id=1555116:
2019-01-24 10:12:08,240+02 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand] (default task-544) [416c625f-e57b-46b8-bf74-5b774191fada] Error during ValidateFailure.: java.lang.NullPointerException
    at org.ovirt.engine.core.bll.validator.storage.StorageDomainValidator.getTotalSizeForMerge(StorageDomainValidator.java:205) [bll.jar:]
    at org.ovirt.engine.core.bll.validator.storage.StorageDomainValidator.hasSpaceForMerge(StorageDomainValidator.java:241) [bll.jar:]
    at org.ovirt.engine.core.bll.validator.storage.MultipleStorageDomainsValidator.lambda$allDomainsHaveSpaceForMerge$6(MultipleStorageDomainsValidator.java:122) [bll.jar:]
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) [rt.jar:1.8.0_191]
On Thu, Jan 24, 2019 at 10:25 AM Alex K <rightkicktech(a)gmail.com> wrote:
When I get the error, engine.log records what is shown in the attached
engine-partial.log.
In vdsm.log on the SPM host I don't see any error generated.
Full logs are also attached.
Thanx,
Alex
On Wed, Jan 23, 2019 at 5:53 PM Elad Ben Aharon <ebenahar(a)redhat.com>
wrote:
> Hi,
>
> Can you please provide engine.log and vdsm.log?
>
> On Wed, Jan 23, 2019 at 5:41 PM Alex K <rightkicktech(a)gmail.com> wrote:
>
>> Hi all,
>>
>> I have oVirt 4.2.7, self-hosted on top of gluster, with two servers.
>> I have a specific VM that has encountered some snapshot issues.
>> The engine lists 4 snapshots, and when I try to delete one of them I get
>> "General command validation failure".
>>
>> The VM was being backed up periodically by a Python script that performed
>> create snapshot -> clone -> export -> delete clone -> delete snapshot.
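
For reference, a minimal sketch of such a snapshot -> clone -> export ->
cleanup flow with the oVirt Python SDK v4 (ovirtsdk4) could look like the
following. This is not the actual script used; the engine URL, credentials,
VM name and export domain name are placeholders, and the waiting between
steps is simplified:

    import time

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Placeholder connection details.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]
    vm_service = vms_service.vm_service(vm.id)
    snaps_service = vm_service.snapshots_service()

    # 1. Create the snapshot and wait until it is no longer locked.
    snap = snaps_service.add(
        types.Snapshot(description='backup', persist_memorystate=False))
    snap_service = snaps_service.snapshot_service(snap.id)
    while snap_service.get().snapshot_status == types.SnapshotStatus.LOCKED:
        time.sleep(10)

    # 2. Clone a temporary VM from that snapshot and wait for it to settle.
    #    (A real script would also poll the disk/export job status between
    #    the remaining steps before moving on.)
    clone = vms_service.add(
        types.Vm(name='myvm-backup', snapshots=[types.Snapshot(id=snap.id)]))
    clone_service = vms_service.vm_service(clone.id)
    while clone_service.get().status != types.VmStatus.DOWN:
        time.sleep(10)

    # 3. Export the clone to an export storage domain (assumed name 'export').
    clone_service.export(
        exclusive=True,
        discard_snapshots=True,
        storage_domain=types.StorageDomain(name='export'),
    )

    # 4. Remove the temporary clone, then the backup snapshot.
    clone_service.remove()
    snap_service.remove()

    connection.close()
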
>> There were times when the VM complained about illegal snapshots after such
>> backup runs, and I had to delete the references to those illegal snapshots
>> from the engine DB (following some steps found online); otherwise I would
>> not be able to start the VM after a shutdown. It seems, though, that this
>> is not a clean process: it leaves the underlying image of the VM in a state
>> that is inconsistent with its snapshots, because checking the backing chain
>> of the image file gives:
>>
>> *b46d8efe-885b-4a68-94ca-e8f437566bee* (active VM) ->
>> *b7673dca-6e10-4a0f-9885-1c91b86616af* ->
>> *4f636d91-a66c-4d68-8720-d2736a3765df* ->
>> 6826cb76-6930-4b53-a9f5-fdeb0e8012ac ->
>> 61eea475-1135-42f4-b8d1-da6112946bac ->
>> *604d84c3-8d5f-4bb6-a2b5-0aea79104e43* ->
>> 1e75898c-9790-4163-ad41-847cfe84db40 ->
>> *cf8707f2-bf1f-4827-8dc2-d7e6ffcc3d43* ->
>> 3f54c98e-07ca-4810-82d8-cbf3964c7ce5 (raw image)
>>
>> The bold ones are the ones shown in the engine GUI. The VM runs normally,
>> without issues.
>> I was wondering whether I could use qemu-img commit to consolidate and
>> remove the snapshots that are no longer referenced by the engine. Any ideas
>> from your side?
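
For illustration only, a rough sketch of what such a manual consolidation
step could look like, shelling out to qemu-img from Python. The image path
is a placeholder, the chosen layer is one of the non-bold (engine-unknown)
volumes from the chain above, the VM would have to be shut down first, and
this by itself does not update the engine DB or vdsm volume metadata, so it
is not a complete or recommended procedure on its own:

    import subprocess

    # Placeholder path to the disk's image directory on the gluster domain.
    IMAGE_DIR = '/rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd-id>/images/<disk-id>'
    # One of the layers not shown in the engine GUI, per the chain above.
    OVERLAY = IMAGE_DIR + '/1e75898c-9790-4163-ad41-847cfe84db40'

    # Inspect the full backing chain as QEMU sees it.
    subprocess.run(['qemu-img', 'info', '--backing-chain', OVERLAY], check=True)

    # Commit the overlay's data down into its immediate backing file.
    # The overlay's child volume would then still point at it, so the child
    # would need a 'qemu-img rebase' (and the oVirt volume metadata and engine
    # DB would need matching fixes), which is why this is risky on its own.
    subprocess.run(['qemu-img', 'commit', OVERLAY], check=True)
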
>>
>> Thanx,
>> Alex
>
> --
> Elad Ben Aharon
> ASSOCIATE MANAGER, RHV storage QE
--
Elad Ben Aharon
ASSOCIATE MANAGER, RHV storage QE