[ANN] oVirt 4.3.8 Third Release Candidate is now available for testing
by Lev Veyde
The oVirt Project is pleased to announce the availability of the oVirt
4.3.8 Third Release Candidate for testing, as of January 8th, 2020.
This update is a release candidate of the eighth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
* oVirt Node 4.3 (available for x86_64 only) has been built using the
CentOS 7.7 release
See the release notes [1] for known issues, new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available
Additional Resources:
* Read more about the oVirt 4.3.8 release highlights:
http://www.ovirt.org/release/4.3.8/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.8/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
Re: gluster shards not healing
by Strahil
If you observe similar problems in the future, you can just rsync the files from one brick to the other and run a heal (usually a full heal resolves it all).
Most probably the hypervisor accessed the file that the shards belong to, and the FUSE client noticed that the first host needed healing and uploaded the files.
I have used the following to also trigger a heal (not good for large volumes):
find /fuse-mountpoint -iname '*' -exec stat {} \;
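For illustration only, a rough sketch of the rsync-plus-heal approach described above; the host name, brick paths and volume name are placeholders rather than values from this thread, and the copy would be run on the host whose brick is missing the files:

rsync -avH good-host:/gluster_bricks/VOLNAME/VOLNAME/.shard/ /gluster_bricks/VOLNAME/VOLNAME/.shard/
gluster volume heal VOLNAME full
gluster volume heal VOLNAME info

The last command is only there to watch the pending entries drain.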
Best Regards,
Strahil Nikolov
On Jan 14, 2020 02:33, Jayme <jaymef(a)gmail.com> wrote:
>
> I'm not exactly sure how but it looks like the problem worked itself out after a few hours
>
> On Mon, Jan 13, 2020 at 5:02 PM Jayme <jaymef(a)gmail.com> wrote:
>>
>> I have a 3-way replica HCI setup. I recently placed one host in maintenance to perform work on it. When I re-activated it I've noticed that many of my gluster volumes are not completing the heal process.
>>
>> heal info shows shard files in heal pending. I looked up the files and it appears that they exist on the other two hosts (the ones that remained active) but do not exist on the host that was in maintenance.
>>
>> I tried to run a manual heal on one of the volumes and then a full heal as well but there are still unhealed shards. The shard files also still do not exist on the maintenance host. Here is an example from one of my volumes:
>>
>> # gluster volume heal prod_a info
>> Brick gluster0:/gluster_bricks/prod_a/prod_a
>> Status: Connected
>> Number of entries: 0
>>
>> Brick gluster1:/gluster_bricks/prod_a/prod_a
>> /.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
>> /.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.178
>> Status: Connected
>> Number of entries: 2
>>
>> Brick gluster2:/gluster_bricks/prod_a/prod_a
>> /.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
>> /.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.178
>> Status: Connected
>> Number of entries: 2
>>
>>
>> host0:
>>
>> # ls -al /gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
>> ls: cannot access /gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177: No such file or directory
>>
>> host1:
>>
>> # ls -al /gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
>> -rw-rw----. 2 root root 67108864 Jan 13 16:57 /gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
>>
>> host2:
>>
>> # ls -al /gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
>> -rw-rw----. 2 root root 67108864 Jan 13 16:57 /gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
>>
>>
>> How can I heal these volumes?
>>
>> Thanks!
gluster shards not healing
by Jayme
I have a 3-way replica HCI setup. I recently placed one host in
maintenance to perform work on it. When I re-activated it I've noticed
that many of my gluster volumes are not completing the heal process.
heal info shows shard files in heal pending. I looked up the files and it
appears that they exist on the other two hosts (the ones that remained
active) but do not exist on the host that was in maintenance.
I tried to run a manual heal on one of the volumes and then a full heal as
well but there are still unhealed shards. The shard files also still do
not exist on the maintenance host. Here is an example from one of my
volumes:
# gluster volume heal prod_a info
Brick gluster0:/gluster_bricks/prod_a/prod_a
Status: Connected
Number of entries: 0
Brick gluster1:/gluster_bricks/prod_a/prod_a
/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.178
Status: Connected
Number of entries: 2
Brick gluster2:/gluster_bricks/prod_a/prod_a
/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.178
Status: Connected
Number of entries: 2
host0:
# ls -al
/gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
ls: cannot access
/gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177:
No such file or directory
host1:
# ls -al
/gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
-rw-rw----. 2 root root 67108864 Jan 13 16:57
/gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
host2:
# ls -al
/gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
-rw-rw----. 2 root root 67108864 Jan 13 16:57
/gluster_bricks/prod_a/prod_a/.shard/a746f8d2-5044-4d20-b525-24456e6f6f16.177
How can I heal these volumes?
Thanks!
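For reference, a minimal sketch of what the manual and full heal attempts mentioned above typically look like with the gluster CLI (volume name taken from the output above; this is illustrative, not output from the affected cluster):

# gluster volume heal prod_a
# gluster volume heal prod_a full
# gluster volume heal prod_a info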
How can I link the cloned disks from a snapshot to the original disks?
by FMGarcia
Hello,
Sorry, I posted this in Bugzilla, and you advised me that my question isn't
a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1789534
I will attach the same files and the same text.
Description of problem:
This problem is bizarre/unusual/convoluted, such that I think it won't affect anybody. But the system allows it.
In a backup from a snapshot with several disks, we can identify the original disk for each cloned disk through their names and sizes. But what if several disks have the same name and the same size? How could we do it without inspecting the content of each disk?
With the Java SDK, the order of the disks changes between the original machine and the cloned machine, so this approach is not possible either.
In the attached files, I show the disks (with different names, in the order reported by the Java SDK) of the original VM [vm1] and the cloned VM [vm1Clone], and another pair [vm2] and [vm2Clone] created with the same process, but where all disks have the same name. The disks have two sizes: 1 GB and 2 GB.
Steps to Reproduce:
1. Create a vm, and attach several disks with same name and same size.
2. Create a backup from snapshot.
3. Now try to match each cloned disk to its original disk without looking at their contents.
Actual results:
I think it is impossible without looking at their contents.
Expected results:
Knowing which disk of the cloned machine links to which disk of the original machine.
Additional info:
If you need more info, please ask me for it.
Vm.log -> Vm original with different disk name
VmClone.log -> Vm clone with different disk name
Vm2.log -> Vm original with same disk name
Vm2Clone.log -> Cloned vm with same disk name
Thanks for your time,
Fran
Re: Please Help, Ovirt Node Hosted Engine Deployment Problems 4.3.2
by William Smyth
I'm running into the exact same issue as described in this thread. The
HostedEngine VM (HE) loses connectivity during the deployment while on the
NAT'd network and the deployment fails. My HE VM is never migrated to
shared storage because that part of the process hasn't been reached, nor
have I been asked for the shared storage details, which are simple NFS
shares. I've tried the deployment from the CLI and through Cockpit, both
times using the Ansible deployment, and both times reaching the same
conclusion.
I did manage to connect to the Centos 7 host that is running the HE guest
with virt-manager and moved the NIC to the 'ovirtmgmt' vswitch using
macvtap, which allowed me to access HE from the outside world. One of the
big issues though is that the HE isn't on shared storage and I'm not sure
how to move it there. I'm running 2 Centos 7 hosts with different CPU
makes, AMD and Intel. So I have two separate single-host clusters, but I'm
trying to at least have the HA setup for the HE, which I think is still
possible. I know that I won't be able to do a live migration, but that's
not a big deal. I'm afraid my failed deployment of HE is preventing this
and I'm at an impasse. I don't want to give up on oVirt but it's becoming
difficult to get my lab environment operational and I'm wasting a lot of
time troubleshooting.
Any help or suggestions would be greatly appreciated.
William Smyth
Can we please migrate to 2020 and get a user friendly issues/support tool?
by m.skrzetuski@gmail.com
Hello everyone,
I can speak only for myself, but I find mailing lists so 1990s. Could we migrate this list to something user friendly, like Jira?
I have to admit it's rather difficult to search and read the archive for old issues in the e-mail format.
Kind regards
Skrzetuski
Setting up cockpit?
by m.skrzetuski@gmail.com
Hello everyone,
I'd like to get cockpit to work because currently when I click "Host Console" on a host I just get "connection refused". I checked, and after the engine installation the cockpit service was not running. When I start it, it runs and answers on port 9090; however, the SSL certificate is broken.
- How do I auto enable cockpit on installation?
- How do I supply my own SSL certificate to cockpit?
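A minimal sketch of one common approach on an EL7 host, assuming the stock cockpit packages; the certificate file names are placeholders, and the concatenated cert-plus-key layout is an assumption about the cockpit version shipped with CentOS 7:

systemctl enable --now cockpit.socket
cat myhost.crt myhost.key > /etc/cockpit/ws-certs.d/10-custom.cert
systemctl restart cockpit

Cockpit uses the alphabetically last *.cert file in /etc/cockpit/ws-certs.d/, so a name sorting after the default self-signed certificate should take precedence.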
Kind regards
Skrzetuski
Re: Unstable VM network connectivity after upgrade 4.2.8 to 4.3.7
by Strahil
Can you check for any differences between the newly created VM and the old one?
Set an alias as follows:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
Then:
virsh dumpxml oldVM
virsh dumpxml newVM
Maybe something will give a clue about what is going on.
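A quick way to compare the two dumps side by side, assuming a bash shell on the host (the VM names are placeholders):

virsh dumpxml oldVM > /tmp/oldVM.xml
virsh dumpxml newVM > /tmp/newVM.xml
diff -u /tmp/oldVM.xml /tmp/newVM.xml

Differences in the <interface> elements (model type, driver settings) are the usual suspects when NIC behaviour changes between VMs.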
Best Regards,
Strahil Nikolov
On Jan 12, 2020 20:03, Latchezar Filtchev <Latcho(a)aubg.bg> wrote:
>
> Dear Strahil,
>
>
>
> Last strange discovery. Newly installed VM – same OS, same NICs – connected to the same VLANs. Both machines are running on the same virtualization host.
>
> Newly installed VM – no network issues at all. It works flawlessly.
>
>
>
> It is really very strange.
>
>
>
> Best,
>
> Latcho
>
>
Re: Unstable VM network connectivity after upgrade 4.2.8 to 4.3.7
by Strahil
Maybe it's related to the CentOS 5.2 e1000 kernel module.
Can you install another CentOS 5.2 and verify that the issue exists?
Any chance to upgrade the VM to 5.11, or at least update the kernel to 2.6.25 or later, so you can try the virtio NIC?
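To confirm which driver the guest NIC is actually bound to before and after any change, something like the following inside the VM should be enough (the interface name is a guess):

ethtool -i eth0
modinfo e1000 | head -n 5

If the driver line still reports e1000, the guest is still on the emulated NIC.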
Best Regards,
Strahil Nikolov
On Jan 12, 2020 20:03, Latchezar Filtchev <Latcho(a)aubg.bg> wrote:
>
> Dear Strahil,
>
>
>
> Last strange discovery. Newly installed VM – same OS, same NICs – connected to the same VLANs. Both machines are running on the same virtualization host.
>
> Newly installed VM – no network issues at all. It works flawlessly.
>
>
>
> It is really very strange.
>
>
>
> Best,
>
> Latcho
>
>