Adding VLANs to a single-host, self-hosted-engine oVirt deployment?
by Derek Atkins
Hi,
I've got a single oVirt host running a self-contained hosted-engine
deployment. When I set it up I did not use VLANs in my network. I am in
the process of moving my equipment, and in part of this move I would like
to introduce VLANs into my network infrastructure. The documentation
seems to imply that to add virtual networks and/or VLANs to a host that I
need to put it into maintenance mode, configure it in the engine, and then
resync the network. However, I don't think I can do that with a
single-host environment. If I put the host into local maintenance mode, it will
try to offload all my VMs, including the engine, which obviously it cannot
do because there is no other host to migrate them to.
So what's the approach to add VLANs in this situation?
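For what it's worth, the only single-host control I know of is the hosted-engine
global maintenance mode rather than host-level maintenance; I assume the sequence
would be something along these lines (standard hosted-engine CLI -- whether this
is the sanctioned flow is exactly what I'm asking):
# hosted-engine --set-maintenance --mode=global
(shut down the remaining guests, change/sync the networks in the engine, then)
# hosted-engine --set-maintenance --mode=none
# hosted-engine --vm-status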
I should add that this system started at 4.0, and I'm still only running
4.1 (although I do plan to upgrade to 4.2 as part of this move). I'm
hesitant to upgrade further because of the impending removal of SDK-3 -- I
am depending on a script that uses ovirt-shell which I keep being told is
going away. If ovirt-shell is still in 4.3 then I might consider
upgrading to that as well. :)
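(If ovirt-shell really does go away, I assume the fallback is talking to the
REST API directly; a rough sketch, with a placeholder engine FQDN and the
admin@internal user:)
# curl -s -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' https://engine.example.com/ovirt-engine/api/vms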
Thanks.
-derek
--
Derek Atkins 617-623-3745
derek(a)ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant
Can't add a pre-existing (storage-only) gluster member as a compute-also node using self-hosted hyper-convergent engine
by thomas@hoberg.net
Intro:
Trying to add an oVirt self-hosted engine to a pre-existing set of systems already using Gluster storage seems to fail, because the 'hosted engine' wizard seems to create its own peer pool for managed hosts, instead of joining the one given for storage.
Context:
I am testing on a set of four Atom Goldmont+ boxes, which are silent/low-power, cheap, fast-enough for edge workloads and even take 32GB of RAM these days.
But for various, sometimes even good reasons, they are not a mainline platform and I attributed many problems I faced to the niche hardware--sometimes correctly.
Because the three-node hyperconverged setup has very exacting requirements, hard to meet in pre-existing machines (and in my case many initial failures with too little insight), I created the storage gluster separately first and then used the “hosted engine” wizard to set up the hosted engine on one of the Atom nodes.
I used CentOS 7 (fresh install and latest updates) on the primary nodes, not the oVirt Node image, because some of my targets are big HPC machines that are supposed to run oVirt for support services in a small niche, while Nvidia-Docker/SLURM workloads dominate.
I assumed that if I were to split the storage and the orchestration setup into distinct steps, it would give me both better insight and more flexibility to expand/transform the storage and the compute without losing any of the self-hosted hyperconvergence comfort.
Problem:
I had all kinds of problems getting the hosted engine to run all the way through on the Atoms; it typically stopped just shy of the final launch as a VM on the Gluster storage.
I eventually stumbled across this message from Yedidyah Bar David:
https://lists.ovirt.org/pipermail/users/2018-March/087923.html
I then had a look at the engine database and found that indeed the compute nodes were all in a separate Gluster peer pool newly created by the hosted-engine setup and evidently used for cluster synchronization, instead of joining the pool already used for the bulk of the storage.
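(The two pools are also visible straight from the shell; roughly like this, with placeholder hostnames -- the original storage nodes list each other, while the wizard-deployed node shows a freshly created pool of its own:)
On one of the original storage nodes:
# gluster pool list
# gluster peer status
On the node where the hosted-engine wizard ran:
# gluster pool list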
I don’t know if this would be considered an error that needs fixing, an issue that can be avoided using a manual configuration, or something else. I believe it could use some highlighting in the documentation.
Hyperconverged on a self-made Gluster: Considered normal or 'way-off'?
by thomas@hoberg.net
I use three very low power target nodes (J5005 Intel Atoms, 24GB RAM, 250GB SSD) to experiment with hyperconverged oVirt, and some bigger machines to act as pure compute nodes, preferably CentOS based, potentially oVirt Node image based.
I've used both CentOS 7 and oVirt Node image bases (all latest versions and patches). On CentOS I typically use a single partition with an xfs root file system to avoid carving up the constrained space; with the oVirt Node images I use the recommended partition layout, which still consumes the single primary SSD /dev/sda.
In both cases I don't have another device or partition left for the three-node hyperconverged wizard's ansible scripts to play with.
And since I want to understand better what is going on under the hood anyway, I tried to separate the Gluster setup and the hosted-engine setup.
Question: Is the impact of that decision perhaps far bigger than I imagined?
So I went with the alternative, the plain Hosted-Engine wizard on already provisioned storage: I had set up 2+1 (two data bricks plus arbiter) Gluster storage using bricks right on the xfs, assuming I would achieve something very similar to what the three-node wizard automates, just without the LVM/VDO layer it sets up with Ansible.
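(For concreteness, that manual storage setup was roughly the following, with placeholder hostnames and brick paths; 'force' is needed because the bricks sit on the root filesystem:)
# gluster peer probe node2.example.com
# gluster peer probe node3.example.com
# gluster volume create engine replica 3 arbiter 1 node1.example.com:/gluster/engine/brick node2.example.com:/gluster/engine/brick node3.example.com:/gluster/engine/brick force
# gluster volume set engine group virt
# gluster volume start engine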
But I am running into all kinds of problems, and what continues to irritate me is the note in the manuals that clusters need to be either 'compute' or 'storage' but cannot be both--yet that's the very point of hyperconvergence...
So basically I am wondering whether at this point there is a strict bifurcation of oVirt between the hyperconverged variant and the 'Hosted-Engine' on existing storage, without any middle ground. Or is it OK to do a two-phased approach, where you separate out the Gluster aspect and the Hosted-Engine *but with physical servers that are doing both storage and compute*?
Also, I am wondering about the 'Enable Virt Service' and 'Enable Gluster Service' tick boxes in the Cluster property pages: for the cluster generated by the 'hosted engine' wizard, 'Enable Gluster Service' was unticked, even though it does run on Gluster storage. And while additionally ticking it had no visible effect, unticking either of the two is now impossible. I could not find any documentation on the significance or effect of these boxes.
While the prepared three-node replicated Gluster storage for the /engine volume was running on CentOS nodes, the Hosted-Engine VM was then created on additional oVirt Node hosts using that Gluster storage (trying to do the same on CentOS nodes always fails; separate topic), with the assumption that I could then push the hosted-engine VM back to the Gluster storage nodes and get rid of the temporary compute-only host used for the hosted-engine setup.
I then tried to add the Gluster nodes as hosts (and hosted-engine backup nodes); that also failed. Whether it failed because they are CentOS based or because they are already used for storage (and there is this hidden exclusivity) is something I would like some clarity on.
So to repeat once again my main question:
Is the (one or three) node hyperconverged oVirt setup a completely different thing from the hosted-engine on Gluster?
Are you supposed to be able to add extra compute hosts, storage etc. to be managed by a three-node HC based hosted-engine?
Are you supposed to be able to expand the three-node HC also in terms of storage, if perhaps currently only in quorum-maintaining multiples?
Is oVirt currently actually strictly mandating 100% segregation between compute and storage unless you use the wholly contained single/triple HC setups?
Disk Allocation policy changes after a snapshot
by Kevin Doyle
Hi, I have a Linux VM with 2 disks: one for the OS, which is sparse, and one for a database, which is preallocated. When I take a snapshot of the VM, both disks change to the sparse policy, but the disks in the snapshot are 1 sparse and 1 preallocated. Before the snapshot the VM was running fine; now it crashes when data is written to the database. When I delete the snapshot, the disks go back to 1 sparse and 1 preallocated. Has anyone else seen this happen? oVirt is 4.3.2.1-1.el7 and it is running on a hosted engine.
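(In case it helps to narrow this down: I assume the real format of the underlying volume can be checked on a file-based storage domain with qemu-img; the path components below are placeholders.)
# qemu-img info /rhev/data-center/mnt/<server>:_<export>/<domain-id>/images/<disk-id>/<volume-id>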
Many thanks
Kevin
failed import ova vm
by David David
Hi,
I can't import a VMware OVA.
Engine: 4.3.5.5-1.el7
OS Version: RHEL - 7 - 6.1810.2.el7.centos
Kernel Version: 3.10.0 - 957.27.2.el7.x86_64
KVM Version: 2.12.0 - 18.el7_6.7.1
LIBVIRT Version: libvirt-4.5.0-10.el7_6.12
VDSM Version: vdsm-4.30.24-1.el7
WebUI -> Virtual Machines -> Import (select VMware OVA, select OVA
path, Load) - OK (details) - OK
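(If it is useful for reproduction: the OVA is a plain tar archive, so its contents can be listed locally; the filename is a placeholder.)
# tar -tvf myvm.ova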
vdsm.log and engine.log are attached.
Inconsistent metadata on VM's disks
by Nardus Geldenhuys
Hi There
Hope you are all well. We got this weird issue at the moment. Let me
explain.
We are on oVirt 4.3.5. We use CentOS 7.6, and when we do an update on the
VMs, the VMs cannot boot into multi-user mode. It complains that the root
mount can't be found. When you boot using an ISO in rescue mode you can see
the disks. When you chroot into the VM and run pvscan twice, the VM can be
rebooted and it is fixed. We don't know if this is CentOS based or maybe
something in oVirt. Screenshots below. Any help would be appreciated.
[image: Screenshot from 2019-08-11 18-15-14.png]
[image: Screenshot from 2019-08-12 15-10-53.png]
[image: Screenshot from 2019-08-12 14-18-33.png]
[image: Screenshot from 2019-08-09 21-54-44.png]
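For reference, the workaround we apply from the rescue environment is roughly the
following (assuming the CentOS installer rescue shell has mounted the guest root
under /mnt/sysimage, which is its default):
# chroot /mnt/sysimage
# pvscan
# pvscan
# exit
# reboot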
Re: VM --- is not responding.
by Strahil
Would you check the health status of the controllers:
hpssacli ctrl all show status
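If smartmontools is installed, the individual drives behind the Smart Array can be checked as well, for example (the cciss index is only an example and depends on your layout):
# smartctl -H -d cciss,0 /dev/sda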
Best Regards,
Strahil Nikolov

On Aug 11, 2019 09:55, Edoardo Mazza <edo7411(a)gmail.com> wrote:
>
> The hosts are 3 ProLiant DL380 Gen10: 2 hosts with an HPE Smart Array P816i-a SR Gen10 controller and the other host with
> an HPE Smart Array P408i-a SR Gen10. The storage for the oVirt environment is Gluster and the last host is the arbiter in the Gluster environment.
> The S.M.A.R.T. health status is OK on all hosts.
> Edoardo
>
>
>
>
>
> Il giorno gio 8 ago 2019 alle ore 16:19 Sandro Bonazzola <sbonazzo(a)redhat.com> ha scritto:
>>
>>
>>
>> Il giorno gio 8 ago 2019 alle ore 11:19 Edoardo Mazza <edo7411(a)gmail.com> ha scritto:
>>>
>>> Hi all,
>>> For several days now I have been getting this error for the same VM, but I don't understand why.
>>> The virtual machine's traffic is not excessive, nor are its CPU and RAM usage, but for a few minutes the VM is not responding, and in the messages log file of the VM I get the error below. Can you help me?
>>> thanks
>>
>>
>> can you check the S.M.A.R.T. health status of the disks?
>>
>>
>>>
>>> Edoardo
>>> kernel: NMI watchdog: BUG: soft lockup - CPU#2 stuck for 25s! [kworker/2:0:26227]
>>> Aug 8 02:51:11 vmmysql kernel: Modules linked in: binfmt_misc ip6t_rpfilter ipt_REJECT nf_reject_ipv4 ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter snd_hda_codec_generic iosf_mbi crc32_pclmul ppdev ghash_clmulni_intel snd_hda_intel snd_hda_codec aesni_intel snd_hda_core lrw gf128mul glue_helper ablk_helper snd_hwdep cryptd snd_seq snd_seq_device snd_pcm snd_timer snd soundcore virtio_rng sg virtio_balloon i2c_piix4 parport_pc parport joydev pcspkr ip_tables xfs libcrc32c sd_mod
>>> Aug 8 02:51:14 vmmysql kernel: crc_t10dif crct10dif_generic sr_mod cdrom virtio_net virtio_console virtio_scsi ata_generic pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw qxl floppy drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm ata_piix libata virtio_pci drm_panel_orientation_quirks virtio_ring virtio dm_mirror dm_region_hash dm_log dm_mod
>>> Aug 8 02:51:14 vmmysql kernel: CPU: 2 PID: 26227 Comm: kworker/2:0 Kdump: loaded Tainted: G L ------------ 3.10.0-957.12.1.el7.x86_64 #1
>>> Aug 8 02:51:14 vmmysql kernel: Hardware name: oVirt oVirt Node, BIOS 1.11.0-2.el7 04/01/2014
>>> Aug 8 02:51:14 vmmysql kernel: Workqueue: events_freezable disk_events_workfn
>>> Aug 8 02:51:14 vmmysql kernel: task: ffff9e25b6609040 ti: ffff9e27b1610000 task.ti: ffff9e27b1610000
>>> Aug 8 02:51:14 vmmysql kernel: RIP: 0010:[<ffffffffb8b6b355>] [<ffffffffb8b6b355>] _raw_spin_unlock_irqrestore+0x15/0x20
>>> Aug 8 02:51:14 vmmysql kernel: RSP: 0000:ffff9e27b1613a68 EFLAGS: 00000286
>>> Aug 8 02:51:14 vmmysql kernel: RAX: 0000000000000001 RBX: ffff9e27b1613a10 RCX: ffff9e27b72a3d05
>>> Aug 8 02:51:14 vmmysql kernel: RDX: ffff9e27b729a420 RSI: 0000000000000286 RDI: 0000000000000286
>>> Aug 8 02:51:14 vmmysql kernel: RBP: ffff9e27b1613a68 R08: 0000000000000001 R09: ffff9e25b67fc198
>>> Aug 8 02:51:14 vmmysql kernel: R10: ffff9e27b45bd8d8 R11: 0000000000000000 R12: ffff9e25b67fde80
>>> Aug 8 02:51:14 vmmysql kernel: R13: ffff9e25b67fc000 R14: ffff9e25b67fc158 R15: ffffffffc032f8e0
>>> Aug 8 02:51:14 vmmysql kernel: FS: 0000000000000000(0000) GS:ffff9e27b7280000(0000) knlGS:0000000000000000
>>> Aug 8 02:51:14 vmmysql kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> Aug 8 02:51:14 vmmysql kernel: CR2: 00007f0c9e9b6008 CR3: 00
Either allow 2 CD-ROMs or selectable *.vfd floppy images from storage via Run Once other than from the deprecated ISO storage
by Ralf Schenk
Hello,
When installing Windows VMs on oVirt we need 2 CD-ROMs attached
as ISO files (the installer ISO and the virtio-win ISO) to be able to install to
virtio (SCSI) disks.
In oVirt 4.3.4 it is not possible to attach 2 CD-ROMs to a VM, so we
have to attach floppy images (virtio-win-*.vfd) to install the drivers
within the installer.
We need to use "Run Once" to attach floppy disks, and only *.vfd files
located on an ISO storage domain are selectable, a domain type that is
being deprecated.
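(Today we stage the .vfd on the ISO domain with the old uploader tool, roughly like this if I recall the syntax correctly; the domain name is ours:)
# engine-iso-uploader --iso-domain=ISO_DOMAIN upload virtio-win_amd64.vfd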
-> We won't be able to install Windows VMs from unmodified installer
ISOs without an ISO storage domain, unless *.vfd files become
selectable via "Run Once" from other storage.
When will that be available... ?
Bye
--
*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *rs(a)databay.de* <mailto:rs@databay.de>
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>
Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen
------------------------------------------------------------------------
Re: oVirt 4.3.5 potential issue with NFS storage
by Shani Leviim
Basically, I meant to verify the access by ssh, but I want to verify
something following your detailed reply:
According to [1], in order to set up a NetApp NFS server, the required steps
should look like this:
# mount NetApp_NFS:/path/to/export /mnt
# chown -R 36.36 /mnt
# chmod -R 755 /mnt
# umount /mnt
Which is quite similar to the steps you've mentioned, except the last step
of unmounting:
Unmount the 10.214.13.64:/ovirt_production
I think that you had to unmount /mnt/rhevstore instead.
Can you please verify?
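Also, to check that the vdsm user can actually read the domain metadata once the
export is mounted, something along these lines should succeed (using the path from
my previous mail):
# sudo -u vdsm dd if=/rhev/data-center/mnt/10.210.13.64:_ovirt__production/6effda5e-1a0d-4312-bf93-d97fa9eb5aee/dom_md/metadata of=/dev/null bs=4096 count=1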
[1] https://access.redhat.com/solutions/660143
*Regards,*
*Shani Leviim*
On Sun, Aug 11, 2019 at 10:57 PM Vrgotic, Marko <M.Vrgotic(a)activevideo.com>
wrote:
> Hi Shani,
>
> Thank you for your reply, but
> How do I do that?
> The reason why I am asking is the following:
> Hosts 2, 3, and 4 do not have that issue. Hosts 1 and 5 do.
> What I learned previously is that when using NetApp-based NFS, which we
> are, it's required, before provisioning the SHE and/or just adding a host to
> a pool, to execute the following steps:
>
> Create random dir on a host:
> - mkdir /mnt/rhevstore
> Mount netapp volume to the dir
> - mount -o sec=sys -t nfs 10.214.13.64:/ovirt_production /mnt/rhevstore
> Set ownership to vdsm:kvm (36:36):
> - chown -R vdsm:kvm /mnt/rhevstore/*
> Unmount the 10.214.13.64:/ovirt_production
>
> I do not expect the above ownership actions need to be done initially on
> each host, before starting the deployment, otherwise it would be
> practically impossible to expand the Host pool.
>
> All 5 hosts are provisioned in same way. How? I am using foreman to
> provision these servers, so they are built of same kickstart hostgroup
> template.
>
> I even installed ovirt-hosted-engine-setup package to make sure all
> required packages, users and groups are in place before adding host to
> oVirt via UI or Ansible.
>
> Is it possible that, if I am already using (or heavily using) the
> mentioned volume via hosts already added to the oVirt pool, the ownership
> actions executed on a host about to be added to the pool will fail to
> complete setting ownership on all required files on the volume?
>
> To repeat the question above: how do I make sure the host can read the metadata
> file of the storage volume?
>
> Kindly awaiting your reply.
>
>
> All best,
> Marko Vrgotic
> Sent from my iPhone
>
> On 11 Aug 2019, at 01:19, Shani Leviim <sleviim(a)redhat.com> wrote:
>
> Hi Marko,
> It seems that there's a connectivity problem with host 10.210.13.64.
> Can you please make sure the metadata under
> /rhev/data-center/mnt/10.210.13.64:_ovirt__production/6effda5e-1a0d-4312-bf93-d97fa9eb5aee/dom_md/metadata
> is accessible?
>
>
> *Regards, *
>
> *Shani Leviim *
>
>
> On Sat, Aug 10, 2019 at 2:57 AM Vrgotic, Marko <M.Vrgotic(a)activevideo.com>
> wrote:
>
>> Log files from ovirt engine and ovirt-sj-05 vdsm attached.
>>
>>
>>
>> It's related to the host named: ovirt-sj-05.ictv.com
>>
>>
>>
>> Kindly awaiting your reply.
>>
>>
>>
>>
>>
>> — — —
>> Met vriendelijke groet / Kind regards,
>>
>> *Marko Vrgotic*
>>
>>
>>
>>
>>
>>
>>
>> *From: *"Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>
>> *Date: *Thursday, 8 August 2019 at 17:02
>> *To: *Shani Leviim <sleviim(a)redhat.com>
>> *Cc: *"users(a)ovirt.org" <users(a)ovirt.org>
>> *Subject: *Re: [ovirt-users] Re: oVirt 4.3.5 potential issue with NFS
>> storage
>>
>>
>>
>> Hey Shanii,
>>
>>
>>
>> Thank you for the reply.
>>
>> Sure, I will attach the full logs asap.
>>
>> What do you mean by “flow you are doing”?
>>
>>
>>
>> Kindly awaiting your reply.
>>
>>
>>
>> Marko Vrgotic
>>
>>
>>
>> *From: *Shani Leviim <sleviim(a)redhat.com>
>> *Date: *Thursday, 8 August 2019 at 00:01
>> *To: *"Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>
>> *Cc: *"users(a)ovirt.org" <users(a)ovirt.org>
>> *Subject: *Re: [ovirt-users] Re: oVirt 4.3.5 potential issue with NFS
>> storage
>>
>>
>>
>> Hi,
>>
>> Can you please clarify the flow you're doing?
>>
>> Also, can you please attach full vdsm and engine logs?
>>
>>
>> *Regards,*
>>
>> *Shani Leviim*
>>
>>
>>
>>
>>
>> On Thu, Aug 8, 2019 at 6:25 AM Vrgotic, Marko <M.Vrgotic(a)activevideo.com>
>> wrote:
>>
>> Log line form VDSM:
>>
>>
>>
>> “[root@ovirt-sj-05 ~]# tail -f /var/log/vdsm/vdsm.log | grep WARN
>>
>> 2019-08-07 09:40:03,556-0700 WARN (check/loop) [storage.check] Checker
>> u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata'
>> is blocked for 20.00 seconds (check:282)
>>
>> 2019-08-07 09:40:47,132-0700 WARN (monitor/bda9727) [storage.Monitor]
>> Host id for domain bda97276-a399-448f-9113-017972f6b55a was released (id:
>> 5) (monitor:445)
>>
>> 2019-08-07 09:44:53,564-0700 WARN (check/loop) [storage.check] Checker
>> u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata'
>> is blocked for 20.00 seconds (check:282)
>>
>> 2019-08-07 09:46:38,604-0700 WARN (monitor/bda9727) [storage.Monitor]
>> Host id for domain bda97276-a399-448f-9113-017972f6b55a was released (id:
>> 5) (monitor:445)”
>>
>>
>>
>>
>>
>>
>>
>> *From: *"Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>
>> *Date: *Wednesday, 7 August 2019 at 09:09
>> *To: *"users(a)ovirt.org" <users(a)ovirt.org>
>> *Subject: *oVirt 4.3.5 potential issue with NFS storage
>>
>>
>>
>> Dear oVIrt,
>>
>>
>>
>> This is my third oVirt platform in the company, but it is the first time I am
>> seeing the following logs:
>>
>>
>>
>> “2019-08-07 16:00:16,099Z INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-51) [1b85e637] Lock freed
>> to object
>> 'EngineLock:{exclusiveLocks='[2350ee82-94ed-4f90-9366-451e0104d1d6=PROVIDER]',
>> sharedLocks=''}'
>>
>> 2019-08-07 16:00:25,618Z WARN
>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
>> (EE-ManagedThreadFactory-engine-Thread-37723) [] domain
>> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' in problem
>> 'PROBLEMATIC'. vds: 'ovirt-sj-05.ictv.com'
>>
>> 2019-08-07 16:00:40,630Z INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
>> (EE-ManagedThreadFactory-engine-Thread-37735) [] Domain
>> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from
>> problem. vds: 'ovirt-sj-05.ictv.com'
>>
>> 2019-08-07 16:00:40,652Z INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
>> (EE-ManagedThreadFactory-engine-Thread-37737) [] Domain
>> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from
>> problem. vds: 'ovirt-sj-01.ictv.com'
>>
>> 2019-08-07 16:00:40,652Z INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
>> (EE-ManagedThreadFactory-engine-Thread-37737) [] Domain
>> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' has recovered from
>> problem. No active host in the DC is reporting it as problematic, so
>> clearing the domain recovery timer.”
>>
>>
>>
>> Can you help me understand why this is being reported?
>>
>>
>>
>> This setup is:
>>
>>
>>
>> 5 hosts, 3 in HA
>>
>> SelfHostedEngine
>>
>> Version 4.3.5
>>
>> NFS based Netapp storage, version 4.1
>>
>> “10.210.13.64:/ovirt_hosted_engine on /rhev/data-center/mnt/10.210.13.64:_ovirt__hosted__engine
>> type nfs4
>> (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)
>>
>>
>>
>> 10.210.13.64:/ovirt_production on /rhev/data-center/mnt/10.210.13.64:_ovirt__production
>> type nfs4
>> (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)
>>
>> tmpfs on /run/user/0 type tmpfs
>> (rw,nosuid,nodev,relatime,seclabel,size=9878396k,mode=700)”
>>
>>
>>
>> First mount is SHE dedicated storage.
>>
>> Second mount "ovirt_production" is for the other VM guests.
>>
>>
>>
>> Kindly awaiting your reply.
>>
>>
>>
>> Marko Vrgotic
>>
>> _______________________________________________
>> Users mailing list -- users(a)ovirt.org
>> To unsubscribe send an email to users-leave(a)ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ICRKHD3GXTP...
>>
>>