Re: Node 4.4.1 gluster bricks
by Strahil Nikolov
Since oVirt 4.4, the stage that deploys the oVirt node/host adds an LVM filter in /etc/lvm/lvm.conf, which is the reason behind that.
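For reference, here is a minimal sketch of how to inspect and regenerate that filter (the device ID in the comment is only a placeholder, not taken from this thread):
grep -nE '^[[:space:]]*filter' /etc/lvm/lvm.conf
# a generated entry typically allows only known PVs by ID and rejects everything else, e.g.:
#   filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-PLACEHOLDER$|", "r|.*|"]
# once the intended brick disks have been prepared or cleared, let VDSM recompute a suitable filter:
vdsm-tool config-lvm-filter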
Best Regards,
Strahil Nikolov
On Friday, 25 September 2020, 20:52:13 GMT+3, Staniforth, Paul <p.staniforth(a)leedsbeckett.ac.uk> wrote:
Thanks,
the gluster volume is just a test and the main reason was to test the upgrade of a node with gluster bricks.
I don't know why LVM doesn't work, which is what oVirt is using.
Regards,
Paul S.
________________________________
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: 25 September 2020 18:28
To: Users <users(a)ovirt.org>; Staniforth, Paul <P.Staniforth(a)leedsbeckett.ac.uk>
Subject: Re: [ovirt-users] Node 4.4.1 gluster bricks
Caution External Mail: Do not click any links or open any attachments unless you trust the sender and know that the content is safe.
>1 node I wiped it clean and the other I left the 3 gluster brick drives untouched.
If the last node from the original is untouched you can:
1. Go to the old host and use 'gluster volume remove-brick <VOL> replica 1 wiped_host:/path/to-brick untouched_bricks_host:/path/to-brick force'
2. Remove the 2 nodes that you have kicked away:
gluster peer detach node2
gluster peer detach node3
3. Reinstall the wiped node and install gluster there
4. Create the filesystem on the brick:
mkfs.xfs -i size=512 /dev/mapper/brick_block_device
5. Mount the Gluster (you can copy the fstab entry from the working node and adapt it)
Here is an example:
/dev/data/data1 /gluster_bricks/data1 xfs inode64,noatime,nodiratime,nouuid,context="system_u:object_r:glusterd_brick_t:s0" 0 0
6. Create the selinux label via 'semanage fcontext -a -t glusterd_brick_t "/gluster_bricks/data1(/.*)?"' (remove only the single quotes) and run 'restorecon -RFvv /gluster_bricks/data1'
7. Mount the FS and create a dir inside the mount point (see the sketch after this list)
8. Extend the gluster volume:
'gluster volume add-brick <VOL> replica 2 new_host:/gluster_bricks/<dir>/<subdir>'
9. Run a full heal
gluster volume heal <VOL> full
10. Repeat again and remember to never wipe 2 nodes at a time :)
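A quick sketch of steps 7-9, using the example brick path from above (the inner directory name is just an example, adjust to your layout):
mount /gluster_bricks/data1                      # uses the fstab entry from step 5
mkdir /gluster_bricks/data1/data1                # the brick directory inside the mount point
gluster volume add-brick <VOL> replica 2 new_host:/gluster_bricks/data1/data1
gluster volume heal <VOL> full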
Good luck and take a look at Quick Start Guide - Gluster Docs
Best Regards,
Strahil Nikolov
To view the terms under which this email is distributed, please go to:-
http://leedsbeckett.ac.uk/disclaimer/email/
oVirt host "unregistered"
by Jeremey Wise
Trying to get the 3-node cluster back fully working and clearing out all the errors.
I noted that the HCI wizard should, I think, have deployed a hosted engine on all the nodes, but this is not the case. Only thor, the first node in the cluster, has the hosted engine.
I tried to redeploy this via the Cockpit wizard to add the engine to the host, but I think this may not have been the correct repair path.
Now the node in the cluster shows all bricks green (so it detects after the reboot that the host is back up and working), but the Hosts list shows it with a red triangle and the error "unregistered".
I also just tried, on the third node, "Edit" -> Hosted Engine -> and chose "Deploy" from the drop-down box. The only entry in the event log is "Host medusa configuration was updated by admin@internal-authz. 9/24/20 8:49:24 PM", but nothing changes.
I then ran ovirt-hosted-engine-cleanup on odin (the node with the error), but there was no change.
################
[root@odin ~]# ovirt-hosted-engine-cleanup
This will de-configure the host to run ovirt-hosted-engine-setup from
scratch.
Caution, this operation should be used with care.
Are you sure you want to proceed? [y/n]
y
-=== Destroy hosted-engine VM ===-
You must run deploy first
error: failed to get domain 'HostedEngine'
-=== Stop HA services ===-
-=== Shutdown sanlock ===-
shutdown force 1 wait 0
shutdown done 0
-=== Disconnecting the hosted-engine storage domain ===-
You must run deploy first
-=== De-configure VDSM networks ===-
ovirtmgmt
ovirtmgmt
A previously configured management bridge has been found on the system,
this will try to de-configure it. Under certain circumstances you can loose
network connection.
Caution, this operation should be used with care.
Are you sure you want to proceed? [y/n]
y
-=== Stop other services ===-
Warning: Stopping libvirtd.service, but it can still be activated by:
libvirtd-ro.socket
libvirtd.socket
libvirtd-admin.socket
-=== De-configure external daemons ===-
Removing database file /var/lib/vdsm/storage/managedvolume.db
-=== Removing configuration files ===-
? /etc/init/libvirtd.conf already missing
- removing /etc/libvirt/nwfilter/vdsm-no-mac-spoofing.xml
? /etc/ovirt-hosted-engine/answers.conf already missing
? /etc/ovirt-hosted-engine/hosted-engine.conf already missing
- removing /etc/vdsm/vdsm.conf
- removing /etc/pki/vdsm/certs/cacert.pem
- removing /etc/pki/vdsm/certs/vdsmcert.pem
- removing /etc/pki/vdsm/keys/vdsmkey.pem
- removing /etc/pki/vdsm/libvirt-migrate/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-key.pem
- removing /etc/pki/vdsm/libvirt-spice/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-key.pem
- removing /etc/pki/vdsm/libvirt-vnc/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-key.pem
- removing /etc/pki/CA/cacert.pem
- removing /etc/pki/libvirt/clientcert.pem
- removing /etc/pki/libvirt/private/clientkey.pem
? /etc/pki/ovirt-vmconsole/*.pem already missing
- removing /var/cache/libvirt/qemu
? /var/run/ovirt-hosted-engine-ha/* already missing
- removing /var/tmp/localvm69i1jxnd
- removing /var/tmp/localvmfyg59713
- removing /var/tmp/localvmmg5y6g52
-=== Removing IP Rules ===-
[root@odin ~]#
[root@odin ~]#
################
Ideas on how to repair engine install issues on nodes?
--
penguinpages
Re: Node 4.4.1 gluster bricks
by Jayme
Assuming you don't care about data on the drive, you may just need to use wipefs on the device, i.e. wipefs -a /dev/sdb
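As a small aside, wipefs run with no options only reports what it finds, so a cautious sequence (using the same example device) would be:
wipefs /dev/sdb        # list existing filesystem/RAID/LVM signatures; changes nothing
wipefs -a /dev/sdb     # erase all detected signatures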
On Fri, Sep 25, 2020 at 12:53 PM Staniforth, Paul <
P.Staniforth(a)leedsbeckett.ac.uk> wrote:
> Hello,
> how do you manage a gluster host when upgrading a node?
>
> I upgraded/replaced 2 nodes with the new install and can't recreate any
> gluster bricks.
> 1 node I wiped it clean and the other I left the 3 gluster brick drives
> untouched.
>
> If I try to create bricks using the UI on the nodes, I get an internal
> server error. When I try to create a PV from the clean disk, I get device
> excluded by filter.
>
> e.g.
>
> pvcreate /dev/sdb
>
> Device /dev/sdb excluded by a filter.
>
> pvcreate /dev/mapper/SSDSC2KB240G7R_BTYS83100E0S240AGN
>
> Device /dev/mapper/SSDSC2KB240G7R_BTYS83100E0S240AGN excluded by a
> filter.
>
>
>
>
> Thanks,
>
>
> Paul S.
>
> To view the terms under which this email is distributed, please go to:-
> http://leedsbeckett.ac.uk/disclaimer/email/
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/27IUR3H54G2...
>
Node 4.4.1 gluster bricks
by Staniforth, Paul
Hello,
how do you manage a gluster host when upgrading a node?
I upgraded/replaced 2 nodes with the new install and can't recreate any gluster bricks.
1 node I wiped it clean and the other I left the 3 gluster brick drives untouched.
If I try to create bricks using the UI on the nodes, I get an internal server error. When I try to create a PV from the clean disk, I get device excluded by filter.
e.g.
pvcreate /dev/sdb
Device /dev/sdb excluded by a filter.
pvcreate /dev/mapper/SSDSC2KB240G7R_BTYS83100E0S240AGN
Device /dev/mapper/SSDSC2KB240G7R_BTYS83100E0S240AGN excluded by a filter.
Thanks,
Paul S.
To view the terms under which this email is distributed, please go to:-
http://leedsbeckett.ac.uk/disclaimer/email/
oVirt 4.4.2 is now generally available
by Lev Veyde
The oVirt project is excited to announce the general availability of oVirt 4.4.2, as of September 17th, 2020.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics, as compared to oVirt 4.3.
Important notes before you install / upgrade
Please note that oVirt 4.4 only supports clusters and data centers with
compatibility version 4.2 and above. If clusters or data centers are
running with an older compatibility version, you need to upgrade them to at
least 4.2 (4.3 is recommended).
Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7
are no longer supported.
For example, the megaraid_sas driver is removed. If you use Enterprise
Linux 8 hosts you can try to provide the necessary drivers for the
deprecated hardware using the DUD method (See the users’ mailing list
thread on this at
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXE...
)
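For anyone unfamiliar with the DUD (Driver Update Disk) approach, a minimal illustration (the URL is a placeholder, and the image has to be built or obtained for the specific driver and EL8 kernel) is to point the installer at a driver update image from the kernel command line at boot:
inst.dd=http://example.com/drivers/megaraid_sas-el8-dud.iso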
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
What’s new in oVirt 4.4.2 Release?
This update is the second in a series of stabilization updates to the 4.4
series.
This release is available now on x86_64 architecture for:
- Red Hat Enterprise Linux 8.2
- CentOS Linux (or similar) 8.2
- CentOS Stream (tech preview)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
- Red Hat Enterprise Linux 8.2
- CentOS Linux (or similar) 8.2
- CentOS Stream (tech preview)
oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only) will
be released separately due to a blocker issue (Bug 1837864
<https://bugzilla.redhat.com/show_bug.cgi?id=1837864>).
oVirt Node and Appliance have been updated, including:
- oVirt 4.4.2: http://www.ovirt.org/release/4.4.2/
- Ansible 2.9.13: https://github.com/ansible/ansible/blob/stable-2.9/changelogs/CHANGELOG-v...
- Glusterfs 7.7: https://docs.gluster.org/en/latest/release-notes/7.7/
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG will be available soon for CentOS Linux 8
Additional resources:
- Read more about the oVirt 4.4.2 release highlights: http://www.ovirt.org/release/4.4.2/
- Get more oVirt project updates on Twitter: https://twitter.com/ovirt
- Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.2/
[2] http://resources.ovirt.org/pub/ovirt-4.4/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
[ANN] oVirt 4.4.3 Second Release Candidate is now available for testing
by Lev Veyde
oVirt 4.4.3 Second Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.3
Second Release Candidate for testing, as of September 25th, 2020.
This update is the third in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA should not require re-doing these steps, if
already performed while upgrading from 4.4.1 to 4.4.2 GA. These are only
required to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> ("Host enter emergency mode after upgrading to latest build"), if you have your root file system on a multipath device on your hosts, you should be aware that after upgrading from 4.4.1 to 4.4.3 your host may enter emergency mode.
In order to prevent this, be sure to upgrade oVirt Engine first, then on your hosts (a consolidated command sketch follows the list):
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode (if rebooted).
2. Reboot.
3. Upgrade to 4.4.3 (redeploy in case of already being on 4.4.3).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild the initramfs with the correct filter configuration.
6. Reboot.
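Purely as an illustration of the host-side steps above, not an official procedure (the filter line and the way you upgrade the host may differ), the sequence on an EL8 host that is not oVirt Node could look like:
# 1. while still on 4.4.1 (or from emergency mode), comment out the old lvm filter:
sed -i '/^[[:space:]]*filter = \[/ s/^/# /' /etc/lvm/lvm.conf
reboot
# 3. upgrade the host to 4.4.3 (e.g. from the engine UI), then confirm the new filter is in place:
vdsm-tool config-lvm-filter
# 5. only if not using oVirt Node, rebuild the initramfs with multipath support:
dracut --force --add multipath
reboot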
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
* oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.3 release highlights:
http://www.ovirt.org/release/4.4.3/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.3/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
export to ova
by Tommaso - Shellrent
Hi to all.
I'll try to ask the same question one more time:
In our tests, oVirt seems to be able to make only one export to OVA at a time, even across different hosts and datacenters.
Can someone explain to us why? This is a big issue for us, because we use it in a backup script for more than 50 VMs and counting.
We also already opened a bug without any useful response:
https://bugzilla.redhat.com/show_bug.cgi?id=1855782
Regards,
--
--
Shellrent - Il primo hosting italiano Security First
*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155 <tel:+390444321155> | Fax 04441492177
Restart oVirt-Engine
by Jeremey Wise
How, without rebooting the hosting system, do I restart the oVirt engine?
# I tried below, but it does not seem to affect the virtual machine
[root@thor iso]# systemctl restart ov
ovirt-ha-agent.service ovirt-imageio.service
ovn-controller.service ovs-delete-transient-ports.service
ovirt-ha-broker.service ovirt-vmconsole-host-sshd.service
ovsdb-server.service ovs-vswitchd.service
[root@thor iso]#
# You cannot restart the VM "HostedEngine" as it responds:
Error while executing action:
HostedEngine:
- Cannot restart VM. This VM is not managed by the engine.
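For context, the engine itself runs inside the HostedEngine VM, so the usual approach (a sketch, assuming a standard hosted-engine setup; the engine VM address is a placeholder) is either to cycle the VM through the hosted-engine tool rather than the web UI, or to restart only the ovirt-engine service inside that VM:
# from any hosted-engine host:
hosted-engine --vm-status                      # see where the engine VM is running
hosted-engine --set-maintenance --mode=global  # keep the HA agents from restarting it mid-operation
hosted-engine --vm-shutdown
hosted-engine --vm-start
hosted-engine --set-maintenance --mode=none
# or, to restart just the engine service, from inside the engine VM:
ssh root@<engine-vm-fqdn> systemctl restart ovirt-engine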
The reason is that I had to do some work on a node and reboot it. It is back up, the network is all fine, Cockpit is working fine, and Gluster is fine, but the oVirt engine refuses to accept that the node is up.
--
penguinpages <jeremey.wise(a)gmail.com>
oVirt 4.3 HCI ovirtmgmt vlan problem
by wodel youchi
Hi,
I deployed an HCI cluster of three nodes using oVirt 4.3 on a flat network at the beginning; now we need to use VLANs on the management network.
I have ovirtmgmt over bond2, and this bond will have three VLANs: VLAN 10 for management, VLAN 20 for DMZ, and VLAN 30 for DMZ2.
On the switch, I configured the relevant ports to carry the native VLAN (untagged), VLAN 10 (tagged), etc.
Then I activated the tag on the ovirtmgmt network in the web UI. I lost connection to the hypervisors and things got weird. I then put my machine on VLAN 10 and saw that two of my hypervisors had their network configuration modified to use VLAN 10, but not the hypervisor where the engine VM was running.
I created the VLAN manually on that hypervisor, then started the engine VM, and all hosts were recognized.
Then I stopped and started the platform again. Still the same problem: two hosts are correct, with their VLAN interface bond2.10 created, but the third has no VLAN.
Doing the configuration manually works, but it does not survive a reboot. Is there a way to force VDSM to accept the new configuration on that faulty host?
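One thing that might be worth checking, purely as a sketch (the attribute names below are recalled from the VDSM setupNetworks schema and should be verified against your vdsm-client version; DHCP is only an example, static setups use different keys), is whether the change was ever applied and persisted through VDSM itself, since VDSM restores its own persisted network config at boot:
# apply the tagged management network through VDSM instead of by hand:
vdsm-client Host setupNetworks networks='{"ovirtmgmt": {"bonding": "bond2", "vlan": 10, "bridged": true, "bootproto": "dhcp"}}' bondings='{}' options='{"connectivityCheck": false}'
# persist the running config so it survives a reboot:
vdsm-client Host setSafeNetworkConfig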
Regards.