isort errors while make install
by Prajith Kesava Prasad
Hi,
Every time I clone and run make install I get this error. As a workaround I
remove them and re-run, which works fine, but I wanted to know the root
cause. I thought it was because I pulled a wrong head ref, or is it a
common issue?
```
packaging/setup/plugins/ovirt-engine-setup/ovirt-engine/config/aaajdbc.py:249:21
'...'.format(...) has unused named argument(s): profile
packaging/setup/plugins/ovirt-engine-setup/ovirt-engine/upgrade/auth_url_validation.py:75:13
'...'.format(...) has unused named argument(s): ownConnection
+ echo ERROR: The following check failed:
ERROR: The following check failed:
```
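For reference, this check flags str.format() calls that pass keyword arguments the format string never uses; a minimal illustration of the pattern (not the actual oVirt code) is:
```python
# Minimal illustration of the pattern the check flags (not the actual oVirt code).

# Flagged: 'profile' is passed to format() but never referenced in the string.
msg = 'configuring aaa jdbc extension'.format(profile='internal')

# Clean: every named argument has a matching placeholder.
msg = 'configuring aaa jdbc extension for profile {profile}'.format(profile='internal')
```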
Regards,
Prajith
4 years, 2 months
gerrit pull/clone timeout
by Prajith Kesava Prasad
Is anyone else sometimes facing an issue where cloning/pulling ovirt-engine
from gerrit times out (hangs forever)?
I followed the required procedures like adding SSH keys, etc.
Did anything change, or am I missing something?
I have pasted the verbose (-vvv) output below.
```
$ git clone git://gerrit.ovirt.org/ovirt-engine -vvv
Cloning into 'ovirt-engine'...
Looking up gerrit.ovirt.org ... done.
Connecting to gerrit.ovirt.org (port 9418) ...
107.22.212.69 done.
```
and then hangs
```
git clone ssh://pkesavap@gerrit.ovirt.org:29418/ovirt-engine.git
Cloning into 'ovirt-engine'...
```
Regards,
Prajith
4 years, 2 months
Re: [ovirt-users] [ovirt 4.2] The snapshot is deleted after restoring the snapshot
by Strahil Nikolov
If you have snapshots like A -> B -> C and you restore A, it is normal to lose B and C. After all, when you restore A, B and C never happened. Otherwise oVirt would have to clone the snapshots into separate images, and that is not what was requested, right?
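A rough conceptual sketch of that behaviour (illustration only, not oVirt code):
```python
# Conceptual sketch only -- not oVirt code. Snapshots form an ordered chain;
# restoring one rolls the disk back to that point in time, so later snapshots
# no longer apply.
chain = ["A", "B", "C"]  # A oldest, C newest

def restore(chain, snapshot):
    # Keep everything up to and including the restored snapshot.
    return chain[: chain.index(snapshot) + 1]

print(restore(chain, "A"))  # ['A']            -> B and C are gone
print(restore(chain, "C"))  # ['A', 'B', 'C']  -> restoring the newest keeps them all
```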
Best Regards,
Strahil Nikolov
On Thursday, 3 September 2020 at 10:03:30 GMT+3, liuweijie(a)sunyainfo.com <liuweijie(a)sunyainfo.com> wrote:
Dear All:
I used the oVirt API (POST /ovirt-engine/api/vms/{vm:id}/snapshots) to create several snapshots. When I called the snapshot restore API (POST /ovirt-engine/api/vms/{vm:id}/snapshots/{snapshot:id}/restore) to restore the first snapshot, I found that the subsequent snapshots were deleted, while restoring the last snapshot worked normally. Why does restoring an earlier snapshot delete the later snapshots? The VM's OS is CentOS.
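For reference, the same restore call through the Python SDK (ovirtsdk4) might look roughly like the sketch below; the URL, credentials and IDs are placeholders, not values from this report.
```python
# Rough sketch using the Python SDK (ovirtsdk4); all values are placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,  # prefer ca_file='ca.pem' in real deployments
)

snapshots_service = (
    connection.system_service()
    .vms_service()
    .vm_service('VM_ID')
    .snapshots_service()
)

# Equivalent of POST /ovirt-engine/api/vms/{vm:id}/snapshots/{snapshot:id}/restore
snapshots_service.snapshot_service('SNAPSHOT_ID').restore()

connection.close()
```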
I hope to get help from you soon. Thank you.
Yours sincerely,
Evelyn
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/LJL6NUYBPGJ...
4 years, 2 months
oVirt Terraform Provider
by Jake Reynolds
Hi,
A number of PRs (bug fixes & feature additions) are outstanding on the oVirt Terraform Provider https://github.com/oVirt/terraform-provider-ovirt. There seems to have been no activity on the master branch of the repo for months.
1. How can I promote/request that PRs are reviewed in a timely manner?
2. I am using this to deploy a global infrastructure over the next year and expect to be doing further extensions/improvements over that time (I've submitted 5 PRs in the past few weeks alone). How do I go about becoming part of the community and getting write access to this repo if there is no active maintainer?
Thanks,
Jake
4 years, 2 months
Host installation is broken across OST suites
by Marcin Sobczyk
Hi,
OST suites seem to be broken; example runs: [1][2].
In 'engine.log' [3] there is a problem reported:
```
2020-08-31 10:54:50,875+02 ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-2) [568e87b9] Host installation failed for host '0524ee6a-815b-4f7c-8ac1-a085b9870325', 'lago-basic-suite-master-host-1': null
2020-08-31 10:54:50,875+02 DEBUG [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-2) [568e87b9] Exception: java.lang.NullPointerException
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand.executeCommand(InstallVdsInternalCommand.java:190)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1169)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1327)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2003)
at org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:140)
at org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:79)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1387)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:419)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.Backend.runAction(Backend.java:442)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:424)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.Backend.runInternalAction(Backend.java:630)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.jboss.as.ee@19.1.0.Final//org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
at org.jboss.as.weld.common@19.1.0.Final//org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:79)
at org.jboss.as.weld.common@19.1.0.Final//org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:89)
at org.jboss.as.weld.common@19.1.0.Final//org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:102)
at org.jboss.as.ee@19.1.0.Final//org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@19.1.0.Final//org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ee@19.1.0.Final//org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:40)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
at org.jboss.as.ee@19.1.0.Final//org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@19.1.0.Final//org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@19.1.0.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:228)
at org.jboss.as.ejb3@19.1.0.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:430)
at org.jboss.as.ejb3@19.1.0.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:160)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
at org.jboss.weld.core@3.1.3.Final//org.jboss.weld.module.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:81)
at org.jboss.as.weld.common@19.1.0.Final//org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@19.1.0.Final//org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@19.1.0.Final//org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@19.1.0.Final//org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@19.1.0.Final//org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@19.1.0.Final//org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@19.1.0.Final//org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ee@19.1.0.Final//org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:438)
at org.wildfly.security.elytron-private@1.11.4.Final//org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:627)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:57)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.5.2.Final//org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
at org.jboss.as.ee@19.1.0.Final//org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:198)
at org.jboss.as.ee@19.1.0.Final//org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185)
at org.jboss.as.ee@19.1.0.Final//org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:81)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.interfaces.BackendInternal$$$view2.runInternalAction(Unknown Source)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.jboss.weld.core@3.1.3.Final//org.jboss.weld.util.reflection.Reflections.invokeAndUnwrap(Reflections.java:410)
at org.jboss.weld.core@3.1.3.Final//org.jboss.weld.module.ejb.EnterpriseBeanProxyMethodHandler.invoke(EnterpriseBeanProxyMethodHandler.java:134)
at org.jboss.weld.core@3.1.3.Final//org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke(EnterpriseTargetBeanInstance.java:56)
at org.jboss.weld.core@3.1.3.Final//org.jboss.weld.module.ejb.InjectionPointPropagatingEnterpriseTargetBeanInstance.invoke(InjectionPointPropagatingEnterpriseTargetBeanInstance.java:68)
at org.jboss.weld.core@3.1.3.Final//org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:106)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.BackendCommandObjectsHandler$BackendInternal$BackendLocal$2049259618$Proxy$_$$_Weld$EnterpriseProxy$.runInternalAction(Unknown Source)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runInternalAction(CommandBase.java:2381)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand.lambda$executeCommand$3(AddVdsCommand.java:219)
at org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:96)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:227)
```
Looking at recent patches it might've been introduced by [4].
[1] https://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-master/1470/
[2] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/11211/
[3] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/112...
[4] https://gerrit.ovirt.org/#/c/109995/
4 years, 2 months
Re: [storage] how to find if a snapshot is in preview based on storage domain metadata only?
by Germano Veit Michel
On Wed, Sep 2, 2020 at 2:36 PM Germano Veit Michel <germano(a)redhat.com>
wrote:
>
>
> On Tue, Sep 1, 2020 at 5:39 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
>
>> On Tue, Sep 1, 2020 at 2:27 AM Germano Veit Michel <germano(a)redhat.com>
>> wrote:
>> >
>> >
>> >
>> > On Mon, Aug 31, 2020 at 5:00 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
>> >>
>> >> On Mon, Aug 31, 2020 at 4:48 AM Germano Veit Michel <
>> germano(a)redhat.com> wrote:
>> >> >
>> >> >
>> >> >
>> >> > On Sun, Aug 30, 2020 at 8:46 PM Nir Soffer <nsoffer(a)redhat.com>
>> wrote:
>> >> >>
>> >> >>
>> >> >>
>> >> >> On Fri, Aug 28, 2020, 08:36 Germano Veit Michel <germano(a)redhat.com>
>> wrote:
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>> On Fri, Aug 28, 2020 at 9:29 AM Nir Soffer <nsoffer(a)redhat.com>
>> wrote:
>> >> >>>>
>> >> >>>>
>> >> >>>>
>> >> >>>> On Thu, Aug 27, 2020, 16:38 Tal Nisan <tnisan(a)redhat.com> wrote:
>> >> >>>>>
>> >> >>>>>
>> >> >>>>>
>> >> >>>>> On Fri, Aug 21, 2020 at 4:34 AM Germano Veit Michel <
>> germano(a)redhat.com> wrote:
>> >> >>>>>>
>> >> >>>>>> Hi,
>> >> >>>>>>
>> >> >>>>>> Is there a reliable way to figure out if a snapshot is in
>> preview only using information obtained from the storage domain metadata?
>> >> >>>>>> I'm trying to find a way to distinguish a problematic snapshot
>> chain (double parent) from a snapshot in preview in order to improve
>> dump-volume-chains.
>> >> >>>>>>
>> >> >>>>>> Currently dump-volume-chains throws an error
>> (DuplicateParentError) if a snapshot is in preview for the image, as there
>> is a 'Y' shape split in the chain
>> >> >>>>>> with 2 volumes (previous chain + preview) pointing to a common
>> parent:
>> >> >>>>>>
>> >> >>>>>> image: dff0a0c0-b731-4e5b-9f32-d97310ca40de
>> >> >>>>>>
>> >> >>>>>> Error: more than one volume pointing to the same
>> parent volume e.g: (_BLANK_UUID<-a), (a<-b), (a<-c)
>> >> >>>>>>
>> >> >>>>>> Unordered volumes and children:
>> >> >>>>>>
>> >> >>>>>> - e6c7bec0-53c6-4729-a4a0-a9b3ef2b8c38 <-
>> 5eb2b29d-82d6-4337-8511-3c86705d566e
>> >> >>>>>> status: OK, voltype: LEAF, format: COW,
>> legality: LEGAL, type: SPARSE, capacity: 1073741824, truesize: 1073741824
>> >> >>>>>>
>> >> >>>>>> - e0475853-4514-4464-99e7-b185cce9b67d <-
>> deceff83-9d88-4f87-8304-d5bf74d119b1
>> >> >>>>>> status: OK, voltype: LEAF, format: COW,
>> legality: LEGAL, type: SPARSE, capacity: 1073741824, truesize: 1073741824
>> >> >>>>>>
>> >> >>>>>> - e6c7bec0-53c6-4729-a4a0-a9b3ef2b8c38 <-
>> e0475853-4514-4464-99e7-b185cce9b67d
>> >> >>>>>> status: OK, voltype: INTERNAL, format: COW,
>> legality: LEGAL, type: SPARSE, capacity: 1073741824, truesize: 1073741824
>> >> >>>>>>
>> >> >>>>>> - 00000000-0000-0000-0000-000000000000 <-
>> e6c7bec0-53c6-4729-a4a0-a9b3ef2b8c38
>> >> >>>>>> status: OK, voltype: INTERNAL, format: RAW,
>> legality: LEGAL, type: PREALLOCATED, capacity: 1073741824, truesize:
>> 1073741824
>> >> >>>>>>
>> >> >>>>>> From the engine side it's easy, but I'd need to solve this
>> problem using only metadata from the storage.
>> >> >>>>>>
>> >> >>>>>> The only thing I could think of is that one of the volumes
>> pointing to the common parent has voltype LEAF. Any better ideas?
>> >> >>>>>
>> >> >>>>> don't think that there is any, Engine is the orchestrator and
>> due to that the info is only in the database
>> >> >>>>
>> >> >>>>
>> >> >>>> There is no good way, but you can look at the length of the
>> chain, and the "ctime" value.
>> >> >>>>
>> >> >>>> For example if this was the original chain:
>> >> >>>>
>> >> >>>> a <- b <- c
>> >> >>>>
>> >> >>>> if we preview a:
>> >> >>>>
>> >> >>>> a <- b <- c
>> >> >>>> a <- d
>> >> >>>>
>> >> >>>> You know that d is a preview volume.
>> >> >>>>
>> >> >>>> If we preview b, we will have two chains of same length:
>> >> >>>>
>> >> >>>> a <- b <- c
>> >> >>>> a <- b <- d
>> >> >>>>
>> >> >>>> But the ctime value of d will be larger, since preview is created
>> after
>> >> >>>> the leaf was created.
>> >> >>>>
>> >> >>>> ctime is using time.time() so it is not affected by time zone
>> changes
>> >> >>>> but it may be wrong due to host time changes, so it is not
>> reliable.
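A rough sketch of that heuristic (hypothetical code, based only on the description above; it assumes each volume is available as a dict with 'uuid', 'parent' and 'ctime' keys read from the storage metadata):
```python
# Hypothetical sketch of the chain-length/ctime heuristic described above.
BLANK_UUID = "00000000-0000-0000-0000-000000000000"

def chain_from_leaf(leaf, by_uuid):
    # Walk parent pointers from a leaf back to the base volume.
    chain, vol = [], leaf
    while vol is not None:
        chain.append(vol)
        parent = vol["parent"]
        vol = by_uuid.get(parent) if parent != BLANK_UUID else None
    return chain

def guess_preview_leaf(volumes):
    by_uuid = {v["uuid"]: v for v in volumes}
    parents = {v["parent"] for v in volumes}
    leaves = [v for v in volumes if v["uuid"] not in parents]
    if len(leaves) != 2:
        return None  # not the two-chain shape a preview produces
    chains = sorted((chain_from_leaf(leaf, by_uuid) for leaf in leaves), key=len)
    if len(chains[0]) != len(chains[1]):
        return chains[0][0]  # the shorter chain ends in the preview leaf
    # Same length: fall back to ctime -- the preview leaf was created later,
    # but host clock changes make this unreliable, as noted above.
    return max(leaves, key=lambda v: v["ctime"])
```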
>> >> >>>>
>> >> >>>> Can you open a bug for this?
>> >> >>>
>> >> >>> https://bugzilla.redhat.com/show_bug.cgi?id=1873382
>> >> >>>
>> >> >>> I have a prototype working with some code I pasted in the
>> bugzilla, but I don't think it's reliable and an overcomplication of what
>> should be simple.
>> >> >>
>> >> >>
>> >> >> I don't think the code in the bug is the way to handle this.
>> >> >>
>> >> >> It will be simpler and more useful to:
>> >> >> 1. Find leaves
>> >> >> 2. Follow the chain from each leaf, until the base (volume with no
>> parent).
>> >> >> 3. Display a tree instead of list, like lsblk.
>> >> >>
>> >> >> For example:
>> >> >>
>> >> >> dbf1e90c-41d5-4c2d-a8d2-15f2f04d3561
>> >> >> ├─ea6af566-922c-4ca2-af17-67f7cd08826c
>> >> >> └─aa5643ef-8c74-4b28-91e0-8d45d6ee426b
>> >> >> └─30c4f6d1-7f1d-470b-96ae-7594cf367dfa
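A minimal sketch of how such an lsblk-style tree could be built and printed from parent pointers (hypothetical, not vdsm code; it assumes a volume-to-parent mapping read from storage metadata):
```python
# Hypothetical sketch of the lsblk-style output suggested above.
BLANK_UUID = "00000000-0000-0000-0000-000000000000"

def print_volume_tree(parents):
    # Invert the volume -> parent mapping into parent -> children lists.
    children = {}
    for vol, parent in parents.items():
        children.setdefault(parent, []).append(vol)

    def walk(vol, prefix=""):
        kids = sorted(children.get(vol, []))
        for i, kid in enumerate(kids):
            last = i == len(kids) - 1
            print(prefix + ("└─" if last else "├─") + kid)
            walk(kid, prefix + ("   " if last else "│  "))

    # Roots are volumes whose parent is the blank UUID (base volumes).
    for root in sorted(children.get(BLANK_UUID, [])):
        print(root)
        walk(root)

# Example shaped like the chain discussed earlier in this thread (names shortened):
print_volume_tree({
    "base": BLANK_UUID,
    "internal": "base",
    "leaf-original": "internal",
    "leaf-preview": "base",
})
```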
>> >> >
>> >> > I like the idea of this visual representation, but it does not fix
>> the problem.
>> >> >
>> >> > The problem is dump-volume-chains throwing incorrect errors in case
>> there is a snapshot in preview.
>> >> >
>> >> > Error: more than one volume pointing to the same parent volume e.g:
>> (_BLANK_UUID<-a), (a<-b), (a<-c)
>> >>
>> >> This error is wrong, you should remove it, and instead show the tree.
>> >>
>> >> > There is still a double parent on the representation above. So if
>> the analysis is done (text output), there will
>> >> > be an error detected no matter how we print it. If there is no way
>> to distinguish a preview from a double parent
>> >> > problem without leaving any doubt based only on storage metadata,
>> then we can improve the
>> >> > representation, but ultimately the problem remains there.
>> >> >
>> >> > Ideally I'd like to keep DoubleParentError logic and detect Previews
>> to eliminate the false errors.
>> >>
>> >> This is not possible now.
>> >>
>> >> > The analysis should be done in the image discrepancy tool on the
>> engine, which has dump-volume-chains
>> >> > output (json - no analysis) and the engine db. And we are already
>> doing some basic checks there. Maybe
>> >> > we should even move the entire analysis logic there and make
>> dump-volume-chains just print and dump
>> >> > data without doing analysis if the analysis cannot be done based on
>> partial data.
>> >> >
>> >> > The main idea here was to simply stop false errors for those who
>> look for them in dump-volume-chains
>> >> > text output.
>> >> >
>> >> >>
>> >> >>
>> >> >> Users of the tool will have to check engine db to understand how to
>> fix the disk.
>> >> >>
>> >> >> Even if it was easy to detect a volume in preview, how do you know
>> which chain
>> >> >> should be kept? Did it fail just after the user asked to commit the
>> preview?
>> >> >
>> >> >
>> >> > This tool is not used to diagnose and correct issues on its own. It
>> is used for 2 things, but mainly the first:
>> >> > a) Nice readable way to see volumes and their metadata, plus chain
>> >> > b) Any obvious errors
>> >> >
>> >> > The duplicate parent is printing false problems during preview,
>> breaking the tool for B.
>> >> >
>> >> > The main use is still (a): dump-volume-chains lets us stop
>> collecting the /dev/VG/metadata LV or *.meta files
>> >> > and still have this info for the volumes in the sosreport.
>> >> >
>> >> > I'm not aware of anyone using just the output of the tool to perform
>> chain changes; every failure
>> >> > also requires checking the DB and, most importantly, the logs
>> (unless rotated).
>> >> >
>> >> >>
>> >> >> Storage format does not have a way to store info about the state of
>> the disk, or make atomic
>> >> >> changes like remove one chain when committing after a preview. This
>> is also the reason we
>> >> >> have trouble with removing snapshots.
>> >> >
>> >> >
>> >> > Which means we cannot know for sure what is happening in the chain,
>> right?
>> >> > With this in mind, any suggestion to stop the false errors?
>> >>
>> >> Change the code to handle a tree instead of a list of volumes, and the error
>> is gone.
>> >
>> > But then part of the validation is gone too. We open the possibility of
>> validating trees, which are all invalid
>> > except for the very specific case of a preview, which we have no data
>> to determine for sure anyway.
>>
>> But the validation is incorrect. Trees are actually supported using
>> preview, so
>> failing and not showing the tree in dump-volume-chains is a bug.
>>
> Well, the tree is valid only if it is a snapshot preview and we have no
> reliable way to determine if it's a preview,
> which means we cannot validate the tree reliably either.
>
> We could do the change, but then we swap one bug for another, from saying
> all trees are errors to saying
> non-preview trees are fine. False positive to false negative, not sure
> which is preferable.
>
> If we can have a way to determine reliably if the chain is in preview,
> then I totally agree it's worth
> the effort of changing the logic. Otherwise I think it's not worth
> changing right now.
>
>
>>
>> > There is not much that can be done as there is no reliable way to
>> determine if the chain has a snapshot
>> > in preview without several changes on engine and vdsm. And it's not
>> worth implementing this, I'll close
>> > the bug too.
>>
>> I think it is worth the time, so better leave this open.
>>
> Sure, no problem. I'll re-open it.
>
>>
>> > Thanks for your help!
>> >
>> >>
>> >> >
>> >> > Since we cannot be sure of this based just on SD metadata, maybe the
>> simplest is to remove the
>> >> > duplicate parent error string and/or add some warning that it could
>> be a snapshot in preview and just
>> >> > print the unordered volumes.
>> >> >
>> >> > The improved visual representation could be handled separately from
>> this. I've thought of something
>> >> > similar in the past but found it hard to print the volume metadata in a
>> nice way (and we need to handle
>> >> > big chains of several dozen snapshots).
>> >> >
>> >> > Thanks,
>> >> > Germano
>> >> >
>> >> >>
>> >> >>
>> >> >> Nir
4 years, 2 months