Re: After importing the ovirt-engine 4.3.5 backup file into ovirt-engine 4.3.6, will the information of the ovirt-engine 4.3.6 management interface be the same as ovirt-engine 4.3.5?
by Yedidyah Bar David
On Mon, Nov 11, 2019 at 12:26 PM Staniforth, Paul
<P.Staniforth(a)leedsbeckett.ac.uk> wrote:
>
>
> Thanks,
> I should have been clearer: what I meant was, could you install 4.3.5 and restore to that, then run engine-setup --offline?
Of course, this should work.
The only catch is that once 4.3.6 is released, it's not that easy to
install 4.3.5, because yum generally tries to install the latest
versions of everything - and in theory, there might be issues with
installing packages from the 4.3.6 repo alongside a 4.3.5 engine. So if
you want an exact replica of a 4.3.5 engine, you have to check all
relevant packages, not just the engine itself.
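Roughly, something along these lines should let you see and pick an older build (the exact package names and versions here are only an illustration - check what your repos actually provide):
# yum list available ovirt-engine --showduplicates
# yum install ovirt-engine-4.3.5.5
# yum update --exclude='ovirt-engine*'
The first lists every build still present in the enabled repos, the second asks for a specific one (you may have to pin other oVirt packages the same way by hand), and the last keeps the rest of the system updated without dragging the engine forward.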
>
> Regards,
> Paul S.
> ________________________________
> From: Yedidyah Bar David <didi(a)redhat.com>
> Sent: 11 November 2019 10:19
> To: Staniforth, Paul <P.Staniforth(a)leedsbeckett.ac.uk>
> Cc: wangyu13476969128(a)126.com <wangyu13476969128(a)126.com>; users <users(a)ovirt.org>
> Subject: Re: [ovirt-users] Re: After importing the ovirt-engine 4.3.5 backup file into ovirt-engine 4.3.6, will the information of the ovirt-engine 4.3.6 management interface be the same as ovirt-engine 4.3.5?
>
> On Mon, Nov 11, 2019 at 12:12 PM Staniforth, Paul
> <P.Staniforth(a)leedsbeckett.ac.uk> wrote:
> >
> > I haven't tried it, but doesn't engine-setup --offline stop it upgrading minor releases?
>
> It prevents only the packager component.
>
> If you already have 4.3.6 installed, which was the original question,
> then engine-setup will still update the database schema. And if it
> didn't, the 4.3.6 engine might fail to work correctly with a 4.3.5.5
> database. In particular:
>
> The documentation for engine-backup always stated that you must run
> engine-setup immediately after restore. But at some points in time,
> and under certain conditions, this was actually optional. You could
> have done a restore, started the engine immediately, and it would
> have worked. But this isn't the case if your restored db is older than
> the installed engine. Then, you must run engine-setup before starting
> the engine.
>
> Best regards,
>
> >
> > Regards,
> > Paul S.
> > ________________________________
> > From: Yedidyah Bar David <didi(a)redhat.com>
> > Sent: 11 November 2019 07:57
> > To: wangyu13476969128(a)126.com <wangyu13476969128(a)126.com>
> > Cc: users <users(a)ovirt.org>
> > Subject: [ovirt-users] Re: After importing the ovirt-engine 4.3.5 backup file into ovirt-engine 4.3.6, will the information of the ovirt-engine 4.3.6 management interface be the same as ovirt-engine 4.3.5?
> >
> > On Tue, Oct 29, 2019 at 11:55 AM <wangyu13476969128(a)126.com> wrote:
> > >
> > > The current version of ovirt-engine in production environment is 4.3.5.5.
> > > To guard against the ovirt-engine machine going down and the management interface becoming unusable, I followed the relevant link:
> > > https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fovirt.o...
> > >
> > > I backed it up using the command:
> > > engine-backup --scope=all --mode=backup --file=file_name --log=log_file_name
> > >
> > > I have prepared another machine as a spare machine. Once the ovirt-engine is down in the production environment, the standby machine can be UP and the standby machine can manage the ovirt-nodes.
> > >
> > > According to the relevant links:
> > > https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fovirt.o...
> > >
> > > The ovirt-engine is installed on the standby machine.
> > >
> > > After I have executed these three commands:
> > > 1. yum install https://eur02.safelinks.protection.outlook.com/?url=http%3A%2F%2Fresource...
> > >
> > > 2. yum update
> > >
> > > 3. yum install ovirt-engine
> > >
> > > I found that the ovirt-engine version of the standby machine is 4.3.6
> > >
> > > So my question is:
> > >
> > > 1. The ovirt-engine version of the standby machine is 4.3.6, and the production environment ovirt-engine version is 4.3.5.5, which command is used on the production environment ovirt-engine machine:
> > > engine-backup --scope=all --mode=backup --file=file_name --log=log_file_name
> > >
> > > If the files obtained by running that backup command on the production ovirt-engine are restored on the standby machine, will the information shown in the standby machine's ovirt-engine management interface be consistent with the production ovirt-engine (e.g. data centers, hosts, virtual machines, etc.)?
> >
> > In principle it should be ok. After you restore with the above command,
> > you should run 'engine-setup', and this will update your restored
> > database schema to 4.3.6.
> > This is not routinely tested explicitly, but I expect that
> > people often do this, perhaps even unknowingly, when they restore from
> > a backup taken with a somewhat-old version.
> >
> > >
> > > 2. Can version 4.3.5.5 of ovirt-engine be installed on the standby machine? How do I need to modify the following three commands to get the standby machine to install version 4.3.5.5?
> > > 1. yum install https://eur02.safelinks.protection.outlook.com/?url=http%3A%2F%2Fresource...
> > > 2. yum update
> > > 3. yum install ovirt-engine
> >
> > In theory you can change the last one to 'yum install
> > ovirt-engine-4.3.5.5'. I didn't try that, and you are quite likely to
> > run into complex dependencies you'll have to satisfy by yourself.
> >
> > I think the best solution for you, if you care about the active and
> > standby being identical, is to upgrade your active one as well, and
> > keep them at the same version going forward.
> >
> > I'd also like to note that maintaining a standby engine and using it
> > the way you intend to, is not in the scope of oVirt. People do similar
> > things, but have to manually work around this and be careful. One very
> > important issue is that if for any reason you allow both of them to be
> > up at the same time, and manage the same hosts, they will not know
> > about each other, and you'll see lots of confusion and errors.
> >
> > You might also want to check:
> >
> > 1. Hosted-engine setup with more than one host as hosted-engine host.
> > There, you only have a single engine vm, but it can run on any of your
> > hosts. You still have a single VM disk/image, and if that's corrupted,
> > you have no HA and have to somehow reinstall/restore/etc.
> >
> > 2. https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.ovi...
> >
> > Best regards,
> > --
> > Didi
>
>
>
> --
> Didi
>
--
Didi
Re: After importing the ovirt-engine 4.3.5 backup file into ovirt-engine 4.3.6, will the information of the ovirt-engine 4.3.6 management interface be the same as ovirt-engine 4.3.5?
by Yedidyah Bar David
On Mon, Nov 11, 2019 at 12:12 PM Staniforth, Paul
<P.Staniforth(a)leedsbeckett.ac.uk> wrote:
>
> I haven't tried it, but doesn't engine-setup --offline stop it upgrading minor releases?
It prevents only the packager component.
If you already have 4.3.6 installed, which was the original question,
then engine-setup will still update the database schema. And if it
didn't, the 4.3.6 engine might fail to work correctly with a 4.3.5.5
database. In particular:
The documentation for engine-backup always stated that you must run
engine-setup immediately after restore. But at some points in time,
and under certain conditions, this was actually optional. You could
have done a restore, started the engine immediately, and it would
have worked. But this isn't the case if your restored db is older than
the installed engine. Then, you must run engine-setup before starting
the engine.
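For completeness, the restore-then-setup sequence I have in mind is roughly the following (the flags are shown as an illustration - check 'engine-backup --help' for your version):
# engine-backup --mode=restore --file=file_name --log=restore_log --provision-db --restore-permissions
# engine-setup
# systemctl start ovirt-engine
Here engine-setup is what upgrades the restored 4.3.5 schema to match the installed 4.3.6 engine, so it has to run before the engine is started.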
Best regards,
>
> Regards,
> Paul S.
> ________________________________
> From: Yedidyah Bar David <didi(a)redhat.com>
> Sent: 11 November 2019 07:57
> To: wangyu13476969128(a)126.com <wangyu13476969128(a)126.com>
> Cc: users <users(a)ovirt.org>
> Subject: [ovirt-users] Re: After importing the ovirt-engine 4.3.5 backup file into ovirt-engine 4.3.6, will the information of the ovirt-engine 4.3.6 management interface be the same as ovirt-engine 4.3.5?
>
> On Tue, Oct 29, 2019 at 11:55 AM <wangyu13476969128(a)126.com> wrote:
> >
> > The current version of ovirt-engine in production environment is 4.3.5.5.
> > To guard against the ovirt-engine machine going down and the management interface becoming unusable, I followed the relevant link:
> > https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fovirt.o...
> >
> > I backed it up using the command:
> > engine-backup --scope=all --mode=backup --file=file_name --log=log_file_name
> >
> > I have prepared another machine as a spare machine. Once the ovirt-engine is down in the production environment, the standby machine can be UP and the standby machine can manage the ovirt-nodes.
> >
> > According to the relevant links:
> > https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fovirt.o...
> >
> > The ovirt-engine is installed on the standby machine.
> >
> > After I have executed these three commands:
> > 1. yum install https://eur02.safelinks.protection.outlook.com/?url=http%3A%2F%2Fresource...
> >
> > 2. yum update
> >
> > 3. yum install ovirt-engine
> >
> > I found that the ovirt-engine version of the standby machine is 4.3.6
> >
> > So my question is:
> >
> > 1. The ovirt-engine version of the standby machine is 4.3.6, and the production environment ovirt-engine version is 4.3.5.5, which command is used on the production environment ovirt-engine machine:
> > engine-backup --scope=all --mode=backup --file=file_name --log=log_file_name
> >
> > If the files obtained by running that backup command on the production ovirt-engine are restored on the standby machine, will the information shown in the standby machine's ovirt-engine management interface be consistent with the production ovirt-engine (e.g. data centers, hosts, virtual machines, etc.)?
>
> In principle it should be ok. After you restore with the above command,
> you should run 'engine-setup', and this will update your restored
> database schema to 4.3.6.
> This is not routinely tested explicitly, but I expect that
> people often do this, perhaps even unknowingly, when they restore from
> a backup taken with a somewhat-old version.
>
> >
> > 2. Can version 4.3.5.5 of ovirt-engine be installed on the standby machine? How do I need to modify the following three commands to get the standby machine to install version 4.3.5.5?
> > 1. yum install https://eur02.safelinks.protection.outlook.com/?url=http%3A%2F%2Fresource...
> > 2. yum update
> > 3. yum install ovirt-engine
>
> In theory you can change the last one to 'yum install
> ovirt-engine-4.3.5.5'. I didn't try that, and you are quite likely to
> run into complex dependencies you'll have to satisfy by yourself.
>
> I think the best solution for you, if you care about the active and
> standby being identical, is to upgrade your active one as well, and
> keep them at the same version going forward.
>
> I'd also like to note that maintaining a standby engine and using it
> the way you intend to, is not in the scope of oVirt. People do similar
> things, but have to manually work around this and be careful. One very
> important issue is that if for any reason you allow both of them to be
> up at the same time, and manage the same hosts, they will not know
> about each other, and you'll see lots of confusion and errors.
>
> You might also want to check:
>
> 1. Hosted-engine setup with more than one host as hosted-engine host.
> There, you only have a single engine vm, but it can run on any of your
> hosts. You still have a single VM disk/image, and if that's corrupted,
> you have no HA and have to somehow reinstall/restore/etc.
>
> 2. https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.ovi...
>
> Best regards,
> --
> Didi
--
Didi
Supporting larger than 8TiB Lun's iSCSI RHV/oVirt
by Green, Jacob Allen /C
I have a question regarding the information in the following screenshot. According to https://access.redhat.com/articles/906543 there is an 8TiB limit - I assume this is per LUN? Also, if I want to go past 8TiB per LUN, is there some configuration I need to edit somewhere, and is there a problem with going past 8TiB per LUN? I think it is more manageable to simply increase LUN sizes if you have the storage, instead of adding additional LUNs to your storage domain; however, I may not fully understand the problems with increasing LUN sizes.
Thank you.
[inline screenshot]
Jacob.
Long post about gluster issue after 6.5 to 6.6 upgrade and recovery steps
by Strahil Nikolov
Hello Colleagues,
I want to share my experience, and especially how I recovered, after a situation where gluster refused to heal several files in my oVirt lab.
Most probably the situation was caused by the fact that I didn't check whether all files were healed before starting the upgrade on the next node, which in a 'replica 2 arbiter 1' setup left multiple files missing/conflicting and the heals failing.
Background:
1. I powered off my HostedEngine VM and made a snapshot of the gluster volume (rough snapshot commands are sketched right after this list)
2. Started the update, but without screen - I messed it up and decided to revert from the snapshot
3. Powered off the volume, restored from the snapshot and then started it again (and snapshotted the volume once more)
4. The upgrade of the HostedEngine was successful
5. Upgraded the arbiter (ovirt3)
6. I forgot to check the heal status on the arbiter and upgraded ovirt1/gluster1 (which maybe was the reason for the issue)
7. After gluster1 was healed I saw that some files were still left for healing, but expected it would finish before ovirt2/gluster2 was patched
8. Sadly my assumption was not right, and after the reboot of ovirt2/gluster2 I noticed that some files never heal
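In case it helps someone, the snapshot/revert part above is just the standard gluster snapshot workflow - roughly, and with an illustrative snapshot/volume name:
# gluster snapshot create before-upgrade engine no-timestamp
# gluster volume stop engine
# gluster snapshot restore before-upgrade
# gluster volume start engine
Note that the volume has to be stopped before 'snapshot restore', which is why the HostedEngine VM was powered off first.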
Symptoms:
2 files per volume never heal (even with 'full' heal mode), even after I 'stat'-ed every file/dir in the volume. The oVirt Dashboard reported multiple errors (50+) saying it cannot update the OVF metadata for the volume/VM. (A quick way to list the pending heals across all volumes is sketched below.)
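An illustrative way to list the pending heal counts for every volume at once (volume names are taken from 'gluster volume list', so nothing here is hard-coded):
# for v in $(gluster volume list); do echo "== $v =="; gluster volume heal $v info summary; done
'gluster volume heal <VOL> info summary' should be available on gluster 6.x and prints per-brick counts without the full file list.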
Here are my notes from while I was recovering from the situation. As this is my lab, I shut down all VMs (including the HostedEngine), as downtime was not an issue:
Heals never complete:
# gluster volume heal data_fast4 info
Brick gluster1:/gluster_bricks/data_fast4/data_fast4
Status: Connected
Number of entries: 0
Brick gluster2:/gluster_bricks/data_fast4/data_fast4
<gfid:d21a6512-eaf6-4859-90cf-eeef2cc0cab8>
<gfid:95bc2cd2-8a1e-464e-a384-6b128780d370>
Status: Connected
Number of entries: 2
Brick ovirt3:/gluster_bricks/data_fast4/data_fast4
<gfid:d21a6512-eaf6-4859-90cf-eeef2cc0cab8>
<gfid:95bc2cd2-8a1e-464e-a384-6b128780d370>
Status: Connected
Number of entries: 2
Mount to get the gfid to path relationship:
mount -t glusterfs -o aux-gfid-mount gluster1:/data_fast4 /mnt
# getfattr -n trusted.glusterfs.pathinfo -e text /mnt/.gfid/d21a6512-eaf6-4859-90cf-eeef2cc0cab8
getfattr: Removing leading '/' from absolute path names
# file: mnt/.gfid/d21a6512-eaf6-4859-90cf-eeef2cc0cab8
trusted.glusterfs.pathinfo="(<REPLICATE:data_fast4-replicate-0> <POSIX(/gluster_bricks/data_fast4/data_fast4):ovirt2.localdomain:/gluster_bricks/data_fast4/data_fast4/.glusterfs/d2/1a/d21a6512-eaf6-4859-90cf-eeef2cc0cab8> <POSIX(/gluster_bricks/data_fast4/data_fast4):ovirt3.localdomain:/gluster_bricks/data_fast4/data_fast4/.glusterfs/d2/1a/d21a6512-eaf6-4859-90cf-eeef2cc0cab8>)"
The local brick (that is supposed to be healed) is missing some data:
# ls -l /gluster_bricks/data_fast4/data_fast4/.glusterfs/d2/1a/d21a6512-eaf6-4859-90cf-eeef2cc0cab8
ls: cannot access /gluster_bricks/data_fast4/data_fast4/.glusterfs/d2/1a/d21a6512-eaf6-4859-90cf-eeef2cc0cab8: No such file or directory
Remote is OK:
# ssh gluster2 'ls -l /gluster_bricks/data_fast4/data_fast4/.glusterfs/d2/1a/d21a6512-eaf6-4859-90cf-eeef2cc0cab8'
-rw-r--r--. 2 vdsm kvm 436 Nov  9 19:34 /gluster_bricks/data_fast4/data_fast4/.glusterfs/d2/1a/d21a6512-eaf6-4859-90cf-eeef2cc0cab8
Arbiter is also OK:
# ssh ovirt3 'ls -l /gluster_bricks/data_fast4/data_fast4/.glusterfs/d2/1a/d21a6512-eaf6-4859-90cf-eeef2cc0cab8'
-rw-r--r--. 2 vdsm kvm 0 Nov  9 19:34 /gluster_bricks/data_fast4/data_fast4/.glusterfs/d2/1a/d21a6512-eaf6-4859-90cf-eeef2cc0cab8
Rsync the file/directory from a good brick to broken one:
# rsync -avP gluster2:/gluster_bricks/data_fast4/data_fast4/.glusterfs/d2/1a/ /gluster_bricks/data_fast4/data_fast4/.glusterfs/d2/1a/
# ls -l /gluster_bricks/data_fast4/data_fast4/.glusterfs/d2/1a/d21a6512-eaf6-4859-90cf-eeef2cc0cab8
-rw-r--r--. 1 vdsm kvm 436 Nov  9 19:34 /gluster_bricks/data_fast4/data_fast4/.glusterfs/d2/1a/d21a6512-eaf6-4859-90cf-eeef2cc0cab8
After a full heal we see our problematic file:
# gluster volume heal data_fast4 full
Launching heal operation to perform full self heal on volume data_fast4 has been successful
Use heal info commands to check status.
# gluster volume heal data_fast4 info
Brick gluster1:/gluster_bricks/data_fast4/data_fast4
Status: Connected
Number of entries: 0
Brick gluster2:/gluster_bricks/data_fast4/data_fast4
/578bca3d-6540-41cd-8e0e-9e3047026484/images/58e197a6-12df-4432-a643-298d40e44130/535ec7f7-f4d1-4d1e-a988-c1e95b4a38ca.meta
Status: Connected
Number of entries: 1
Brick ovirt3:/gluster_bricks/data_fast4/data_fast4
/578bca3d-6540-41cd-8e0e-9e3047026484/images/58e197a6-12df-4432-a643-298d40e44130/535ec7f7-f4d1-4d1e-a988-c1e95b4a38ca.meta
Status: Connected
Number of entries: 1
This time the file is out of date on gluster1 (older version) and missing on the arbiter:
# cat /gluster_bricks/data_fast4/data_fast4/578bca3d-6540-41cd-8e0e-9e3047026484/images/58e197a6-12df-4432-a643-298d40e44130/535ec7f7-f4d1-4d1e-a988-c1e95b4a38ca.meta
CTIME=1558265783
DESCRIPTION={"Updated":true,"Size":256000,"Last Updated":"Sat Nov 09 19:24:08 EET 2019","Storage Domains":[{"uuid":"578bca3d-6540-41cd-8e0e-9e3047026484"}],"Disk Description":"OVF_STORE"}
DISKTYPE=OVFS
DOMAIN=578bca3d-6540-41cd-8e0e-9e3047026484
FORMAT=RAW
GEN=0
IMAGE=58e197a6-12df-4432-a643-298d40e44130
LEGALITY=LEGAL
MTIME=0
PUUID=00000000-0000-0000-0000-000000000000
SIZE=262144
TYPE=PREALLOCATED
VOLTYPE=LEAF
EOF
# ssh gluster2 cat /gluster_bricks/data_fast4/data_fast4/578bca3d-6540-41cd-8e0e-9e3047026484/images/58e197a6-12df-4432-a643-298d40e44130/535ec7f7-f4d1-4d1e-a988-c1e95b4a38ca.meta
CTIME=1558265783
DESCRIPTION={"Updated":true,"Size":256000,"Last Updated":"Sat Nov 09 19:34:33 EET 2019","Storage Domains":[{"uuid":"578bca3d-6540-41cd-8e0e-9e3047026484"}],"Disk Description":"OVF_STORE"}
DISKTYPE=OVFS
DOMAIN=578bca3d-6540-41cd-8e0e-9e3047026484
FORMAT=RAW
GEN=0
IMAGE=58e197a6-12df-4432-a643-298d40e44130
LEGALITY=LEGAL
MTIME=0
PUUID=00000000-0000-0000-0000-000000000000
SIZE=262144
TYPE=PREALLOCATED
VOLTYPE=LEAF
EOF
# ssh ovirt3 cat /578bca3d-6540-41cd-8e0e-9e3047026484/images/58e197a6-12df-4432-a643-298d40e44130/535ec7f7-f4d1-4d1e-a988-c1e95b4a38ca.meta
cat: /578bca3d-6540-41cd-8e0e-9e3047026484/images/58e197a6-12df-4432-a643-298d40e44130/535ec7f7-f4d1-4d1e-a988-c1e95b4a38ca.meta: No such file or directory
As the same file is out of sync on a data brick and missing on the arbiter, we take another approach:
First, on gluster1, move the file aside to another name:
# mv /gluster_bricks/data_fast4/data_fast4/578bca3d-6540-41cd-8e0e-9e3047026484/images/58e197a6-12df-4432-a643-298d40e44130/535ec7f7-f4d1-4d1e-a988-c1e95b4a38ca.meta /gluster_bricks/data_fast4/data_fast4/578bca3d-6540-41cd-8e0e-9e3047026484/images/58e197a6-12df-4432-a643-298d40e44130/535ec7f7-f4d1-4d1e-a988-c1e95b4a38ca.meta_old
Then we copy the file from gluster2 (the good brick):
# rsync -avP gluster2:/gluster_bricks/data_fast4/data_fast4/578bca3d-6540-41cd-8e0e-9e3047026484/images/58e197a6-12df-4432-a643-298d40e44130/535ec7f7-f4d1-4d1e-a988-c1e95b4a38ca.meta /gluster_bricks/data_fast4/data_fast4/578bca3d-6540-41cd-8e0e-9e3047026484/images/58e197a6-12df-4432-a643-298d40e44130/535ec7f7-f4d1-4d1e-a988-c1e95b4a38ca.meta
receiving incremental file list
535ec7f7-f4d1-4d1e-a988-c1e95b4a38ca.meta
436 100% 425.78kB/s 0:00:00 (xfr#1, to-chk=0/1)
sent 43 bytes received 571 bytes 1,228.00 bytes/sec
total size is 436 speedup is 0.71
Run full heal:
# gluster volume heal data_fast4 full
Launching heal operation to perform full self heal on volume data_fast4 has been successful
Use heal info commands to check status
# gluster volume heal data_fast4 info
Brick gluster1:/gluster_bricks/data_fast4/data_fast4
Status: Connected
Number of entries: 0
Brick gluster2:/gluster_bricks/data_fast4/data_fast4
Status: Connected
Number of entries: 0
Brick ovirt3:/gluster_bricks/data_fast4/data_fast4
Status: Connected
Number of entries: 0
And of course, umount /mnt.
I did the above for all oVirt storage domains and rebooted all nodes (simultaneously) after stopping the whole stack. This should not be necessary, but I wanted to be sure that after a power outage the cluster would be operational again:
systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd sanlock glusterd
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
Verification:
After the reboot, I tried to set each oVirt storage domain to 'Maintenance', which confirms that the engine can update the OVF metadata, and then set it back to Active. Without downtime, this would not be possible.
I hope this long post will help someone.
PS: I have collected some data for some of the files, which I have omitted as this e-mail is very long.
Best Regards,
Strahil Nikolov
oVirt orb python3/fedora 31 support
by Baptiste Agasse
Hi all,
I was a happy user of oVirt orb on my laptop for doing some testing (new features, Foreman integration tests, Ansible modules/playbooks/roles tests...) and it worked really well, thanks guys! Last week I upgraded my laptop to Fedora 31 and had to uninstall the ovirt-orb/lago related packages to be able to upgrade to this Fedora version (their python2 dependency packages are missing on Fedora 31). Is there any planned python3/Fedora 31 support for ovirt-orb/lago-ovirt/lago? If not, is there any simple replacement solution to spin up a light oVirt env for this kind of testing?
Have a nice day.
Baptiste
Re: oVirt and Netapp question
by Vrgotic, Marko
Second attempt 😊
From: "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>
Date: Monday, 4 November 2019 at 14:01
To: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: oVirt and Netapp question
Dear oVirt,
A few months ago our production oVirt environment, with its main shared storage on Netapp NFS, went live.
We have been deploying VMs from a CentOS 7 template with a 40GB thin-provisioned disk.
The Netapp NFSv4 storage volume is 7TB in size, also thin provisioned.
The first attached screenshot shows the space allocation of the production volume from the Netapp side.
Is there anyone in the oVirt community who can tell me the meaning of the 5.31 TB of Over Provisioned Space?
[screenshot: Netapp-side space allocation of the production volume]
The second attachment shows the info for the production volume from the oVirt side:
[screenshot: oVirt-side details of the production storage domain]
What I want to understand is how oVirt and Netapp each read the volume usage, and where the difference comes from.
Is the Over Allocated Space something that is just logically used/reserved and will be intelligently re-allocated/re-used as the actual Data Space Used grows, or am I looking at oVirt actually hitting the Critical Space Action Blocker and having to resize the volume?
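My own working assumption (please correct me if this is wrong) is that the Netapp number is simply committed-minus-capacity arithmetic for the thin-provisioned disks, roughly:
  total provisioned (virtual) disk size ~= 7 TB + 5.31 TB = 12.31 TB
  over-provisioned space = 12.31 TB - 7 TB = 5.31 TB
i.e. a logical reservation that only becomes a real problem if the actual Data Space Used grows towards the 7 TB capacity.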
If there is anyone from Netapp, or with good Netapp experience, who is able to help me understand the data above better, thank you in advance.
Kindly awaiting your reply.
-----
kind regards/met vriendelijke groet
Marko Vrgotic
ActiveVideo