Data recovery from (now unused, but still mounted) Gluster Volume for a single VM
by David White
My hyperconverged cluster was running out of space.
The reason for that is a good problem to have - I've grown more in the last 4 months than in the past 4-5 years combined.
But the downside was that I had to upgrade my storage, and it became urgent to do so.
I began that process last week.
I have 3 volumes:
[root@cha2-storage dwhite]# gluster volume list
data
engine
vmstore
I did the following on all 3 of my volumes:
1) Converted the cluster from replica 3 to replica 2 + arbiter 1
- I did run into an issue where some VMs were paused, but I was able to power off and power on those VMs again without issue
2) Ran the following on Host 2:
# gluster volume remove-brick data replica 1 cha1-storage.mgt.my-domain.com:/gluster_bricks/data/data force
# gluster volume remove-brick vmstore replica 1 cha1-storage.mgt.my-domain.com:/gluster_bricks/vmstore/vmstore force
# gluster volume remove-brick engine replica 1 cha1-storage.mgt.my-domain.com:/gluster_bricks/engine/engine force
(rebuilt the array and rebooted the server -- when I rebooted, I commented out the original UUIDs for the Gluster storage in /etc/fstab)
# lvcreate -L 2157G --zero n -T gluster_vg_sdb/gluster_thinpool_gluster_vg_sdb
# lvcreate -L 75G -n gluster_lv_engine gluster_vg_sdb
# lvcreate -V 600G --thin -n gluster_lv_data gluster_vg_sdb/gluster_thinpool_gluster_vg_sdb
# lvcreate -V 1536G --thin -n gluster_lv_vmstore gluster_vg_sdb/gluster_thinpool_gluster_vg_sdb
# mkfs.xfs /dev/gluster_vg_sdb/gluster_lv_engine
# mkfs.xfs /dev/gluster_vg_sdb/gluster_lv_data
# mkfs.xfs /dev/gluster_vg_sdb/gluster_lv_vmstore
At this point, I ran lsblk --fs to get the new UUIDs, and put them into /etc/fstab
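(The new fstab entries look roughly like this; the UUID placeholders stand for whatever lsblk --fs reports, and the mount options here are just generic ones:)
UUID=<uuid-of-gluster_lv_engine>  /gluster_bricks/engine   xfs  defaults  0 0
UUID=<uuid-of-gluster_lv_data>    /gluster_bricks/data     xfs  defaults  0 0
UUID=<uuid-of-gluster_lv_vmstore> /gluster_bricks/vmstore  xfs  defaults  0 0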
# mount -a
# gluster volume add-brick engine replica 2 cha2-storage.mgt.my-domain.com:/gluster_bricks/engine/engine force
# gluster volume add-brick vmstore replica 2 cha2-storage.mgt.my-domain.com:/gluster_bricks/vmstore/vmstore force
# gluster volume add-brick data replica 2 cha2-storage.mgt.my-domain.com:/gluster_bricks/data/data force
So far so good.
3) I was running critically low on disk space on the replica, so I:
- Let the gluster volume heal
- Then removed the Host 1 bricks from the volumes
When I removed the Host 1 bricks, I made sure that there were no additional healing tasks, then proceeded to do so:
# gluster volume remove-brick data replica 1 cha1-storage.mgt.my-domain.com:/gluster_bricks/data/data force
# gluster volume remove-brick vmstore replica 1 cha1-storage.mgt.my-domain.com:/gluster_bricks/vmstore/vmstore force
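(Before each remove-brick, I made sure nothing was left to heal; a check along these lines, for example:)
# gluster volume heal data info
# gluster volume heal vmstore info
# gluster volume heal engine info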
At this point, I ran into problems.
All of my VMs went into a Paused state, and I had to reboot all of them.
All VMs came back online, but two were corrupted, and I wound up rebuilding them from scratch.
Unfortunately, we lost a lot of data on one of the VMs, as I didn't realize backups were broken on that particular machine.
Is there a way for me to go into those disks (that are still mounted to Host 1), examine the Gluster content, and somehow mount / recover the data from the VM that we lost?
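To make the question concrete, what I'm picturing is something along these lines (the storage domain, image, and volume UUIDs are just placeholders for whatever is actually sitting on the old brick):
# ls /gluster_bricks/data/data/<storage-domain-uuid>/images/<image-uuid>/
# qemu-img info /gluster_bricks/data/data/<storage-domain-uuid>/images/<image-uuid>/<volume-uuid>
# guestmount -a /gluster_bricks/data/data/<storage-domain-uuid>/images/<image-uuid>/<volume-uuid> --ro -i /mnt/recovery
I have no idea whether that is safe or even sensible on a brick that was force-removed from the volume, hence the question.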
non-critical request - Disk volume label - Web-ui
by Jorge Visentini
Hi everyone!
First of all, congratulations on the evolution of oVirt 4.4.7.
A *non-critical* request for a future version... if possible, add a label
to disk volumes in the Web UI.
Thank you all!
[image: image.png]
--
Regards,
Jorge Visentini
+55 55 98432-9868
Wondering a way to create a universal USB Live OS to connect to VM Portal
by eta.entropy@gmail.com
Hi All,
once the oVirt infrastructure has been set up, VMs created and assigned to users, and the users manage them through the VM Portal,
I'm wondering if there is an easy way to provide end users with a USB key that will just:
- plug into any computer, or boot from it
- connect to the given VM Portal to manage assigned VMs
- connect to the given SPICE console to enter assigned VMs
Just like a Linux live USB with a preconfigured service that starts and connects to the VM Portal or a SPICE console.
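Something along these lines is what I imagine shipping on the key (purely a sketch; the engine URL is made up, and it assumes the live image has a desktop session and Firefox):
# cat > /etc/xdg/autostart/vmportal.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=oVirt VM Portal
Exec=firefox --kiosk https://engine.example.com/ovirt-engine/web-ui/
EOF
For the SPICE side, the image would also need virt-viewer installed, so remote-viewer can open the console.vv files downloaded from the portal.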
Is there something already available to start from?
Is this something doable, or am I dreaming?
Thanks for any input
Question about PCI storage passthrough for a single guest VM
by Tony Pearce
I have configured a host with PCI passthrough for GPU passthrough. Using
this knowledge, I went ahead and configured NVMe SSD PCI passthrough. On
the guest, I partitioned and mounted the SSD without any issues.
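(For reference, on the host I check the device and its driver binding with something like this before assigning it; the grep pattern is just an example for an NVMe controller:)
# lspci -nnk | grep -A3 'Non-Volatile memory controller'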
Searching Google for this exact setup, I only see results about "local
storage", where local storage = using a disk image on the host's storage. So
I have come here to try to find out whether there are any concerns, gripes, or
issues with using NVMe PCI passthrough compared to local storage.
Some more detail about the setup:
I have 2 identical hosts (NVIDIA GPU and also NVMe PCI SSD). A few weeks
ago, when I started researching converting one of these systems over (from
native Ubuntu) to oVirt using GPU PCI passthrough, I found the information
about local storage. I have 1 host (Host 1) set up with local storage mode,
and the guest VM is using a disk image on this local storage.
Host 2 has an identical hardware setup, but I did not configure local
storage for this host. Instead, I have the oVirt host OS installed on a
SATA HDD, and the NVMe SSD is passed through via PCI to a different guest
instance.
What I notice is that Host 2's disk performance is approximately 30% better
than Host 1's when running simple dd tests to write data to the disk (roughly
as sketched below). So at first glance the NVMe PCI passthrough appears to
give better performance, and this is desired, but I have not seen any oVirt
documentation that says this is supported, or any guidelines on configuring
such a setup.
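(The dd test was nothing scientific, roughly along these lines, with the target path simply being wherever the disk is mounted in the guest:)
# dd if=/dev/zero of=/mnt/nvme-test/testfile bs=1M count=4096 oflag=direct status=progress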
Aside from the usual caveats when running PCI passthrough, are there any
other gotchas when running this type of setup (PCI NVMe SSD passthrough)?
I am trying to discover any unknowns before I use this for real data. I have
no previous experience with it, and that is my main reason for emailing the
group.
Any insight appreciated.
Kind regards,
Tony Pearce
glance.ovirt.org planned outage: 10.08.2021 at 01:00 UTC
by Evgheni Dereveanchin
Hi everyone,
There's an outage scheduled in order to move glance.ovirt.org to new
hardware. This will happen after midnight on the upcoming Tuesday, between
1 AM and 3 AM UTC. It will not be possible to pull images from our Glance
image registry during this period. Other services will not be affected.
If you see any CI jobs failing on Glance tests, please re-run them in the
morning after the planned outage window is over. If issues persist, please
report them via JIRA or reach out to me personally.
--
Regards,
Evgheni Dereveanchin
[ANN] oVirt 4.4.8 Fourth Release Candidate is now available for testing
by Sandro Bonazzola
oVirt 4.4.8 Fourth Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.8
Fourth Release Candidate for testing, as of August 6th, 2021.
This update is the eighth in a series of stabilization updates to the 4.4
series.
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions
on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
<https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
<https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.4 or similar
* CentOS Stream 8
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.4 or similar
* CentOS Stream 8
* oVirt Node 4.4 based on CentOS Stream 8 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available based on CentOS Stream 8
- oVirt Node NG is already available based on CentOS Stream 8
Additional Resources:
* Read more about the oVirt 4.4.8 release highlights:
http://www.ovirt.org/release/4.4.8/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.8/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com
*Red Hat respects your work-life balance. Therefore there is no need to
answer this email outside of your office hours.*
Is there a way to support Mellanox OFED with oVirt/RHV?
by Vinícius Ferrão
Hello,
Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each other?
The real issue is regarding GlusterFS. It seems to be a Mellanox issue, but I would like to know if there's something we can do to make both play nice on the same machine:
[root@rhvepyc2 ~]# dnf update --nobest
Updating Subscription Management repositories.
Last metadata expiration check: 0:14:25 ago on Wed 04 Aug 2021 02:01:11 AM -03.
Dependencies resolved.
Problem: both package mlnx-ofed-all-user-only-5.4-1.0.3.0.rhel8.4.noarch and mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsolete glusterfs-rdma
- cannot install the best update candidate for package glusterfs-rdma-6.0-49.1.el8.x86_64
- package ovirt-host-4.4.7-1.el8ev.x86_64 requires glusterfs-rdma, but none of the providers can be installed
- package mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsoletes glusterfs-rdma provided by glusterfs-rdma-6.0-49.1.el8.x86_64
- package glusterfs-rdma-3.12.2-40.2.el8.x86_64 requires glusterfs(x86-64) = 3.12.2-40.2.el8, but none of the providers can be installed
- package glusterfs-rdma-6.0-15.el8.x86_64 requires glusterfs(x86-64) = 6.0-15.el8, but none of the providers can be installed
- package glusterfs-rdma-6.0-20.el8.x86_64 requires glusterfs(x86-64) = 6.0-20.el8, but none of the providers can be installed
- package glusterfs-rdma-6.0-37.el8.x86_64 requires glusterfs(x86-64) = 6.0-37.el8, but none of the providers can be installed
- package glusterfs-rdma-6.0-37.2.el8.x86_64 requires glusterfs(x86-64) = 6.0-37.2.el8, but none of the providers can be installed
- cannot install both glusterfs-3.12.2-40.2.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
- cannot install both glusterfs-6.0-15.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
- cannot install both glusterfs-6.0-20.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
- cannot install both glusterfs-6.0-37.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
- cannot install both glusterfs-6.0-37.2.el8.x86_64 and glusterfs-6.0-49.1.el8.x86_64
- cannot install the best update candidate for package ovirt-host-4.4.7-1.el8ev.x86_64
- cannot install the best update candidate for package glusterfs-6.0-49.1.el8.x86_64
=============================================================================================================================================================
Package Architecture Version Repository Size
=============================================================================================================================================================
Installing dependencies:
openvswitch x86_64 2.14.1-1.54103 mlnx_ofed_5.4-1.0.3.0_base 17 M
ovirt-openvswitch noarch 2.11-1.el8ev rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms 8.7 k
replacing rhv-openvswitch.noarch 1:2.11-7.el8ev
unbound x86_64 1.7.3-15.el8 rhel-8-for-x86_64-appstream-rpms 895 k
Skipping packages with conflicts:
(add '--best --allowerasing' to command line to force their upgrade):
glusterfs x86_64 3.12.2-40.2.el8 rhel-8-for-x86_64-baseos-rpms 558 k
glusterfs x86_64 6.0-15.el8 rhel-8-for-x86_64-baseos-rpms 658 k
glusterfs x86_64 6.0-20.el8 rhel-8-for-x86_64-baseos-rpms 659 k
glusterfs x86_64 6.0-37.el8 rhel-8-for-x86_64-baseos-rpms 663 k
glusterfs x86_64 6.0-37.2.el8 rhel-8-for-x86_64-baseos-rpms 662 k
Skipping packages with broken dependencies:
glusterfs-rdma x86_64 3.12.2-40.2.el8 rhel-8-for-x86_64-baseos-rpms 49 k
glusterfs-rdma x86_64 6.0-15.el8 rhel-8-for-x86_64-baseos-rpms 46 k
glusterfs-rdma x86_64 6.0-20.el8 rhel-8-for-x86_64-baseos-rpms 46 k
glusterfs-rdma x86_64 6.0-37.2.el8 rhel-8-for-x86_64-baseos-rpms 48 k
glusterfs-rdma x86_64 6.0-37.el8 rhel-8-for-x86_64-baseos-rpms 48 k
Transaction Summary
=============================================================================================================================================================
Install 3 Packages
Skip 10 Packages
Total size: 18 M
Is this ok [y/N]:
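For anyone who wants to retrace the conflict, these are the kinds of queries that show which packages require and which obsolete glusterfs-rdma (commands only, output omitted):
# dnf repoquery --whatrequires glusterfs-rdma
# dnf repoquery --whatobsoletes glusterfs-rdma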
I really don't care about GlusterFS on this cluster, but Mellanox OFED is much more relevant to me.
Thank you all,
Vinícius.
import from VMware provider always fails
by edp@maddalena.it
Hi.
I have created a new VMware provider to connect to my VMware ESXi node,
but I have this problem: if I choose to import a VM from that provider,
the process always fails, and the error message is generic:
"failed to import vm xyz to Data Center Default, Cluster Default"
I have tried to import both Linux and Windows VMs without success.
I can see that the import phase proceeds through importing the virtual disk, and then, when the import process is almost at the end, I get the generic error stated above.
Is there a place where I can see more detailed logs to troubleshoot this problem?
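I guess the usual suspects would be the engine and host logs, roughly as below, but I'm not sure which file covers the v2v conversion phase (paths are from a default install):
On the engine:
# tail -f /var/log/ovirt-engine/engine.log
On the host running the import:
# tail -f /var/log/vdsm/vdsm.log
# ls /var/log/vdsm/import/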
Thank you