problem with qemu / dynamic_ownership
by Christoph Köhler
Hey,
is the problem with qemu/dynamic_ownership back?
We have one DC with two clusters:
- cluster one: vdsm-4.30.38 / libvirt-4.5.0-23
- cluster two: vdsm-4.30.40 / libvirt-4.5.0-23
Both have libgfapi enabled, Gluster server 6.5.1, Gluster client 6.7, opVersion 60000.
It went well for a long time, but since updating vdsm to the versions above we are seeing the old problems again:
- disk files are set to root:root when a VM goes down
- snapshots cannot be created, etc.
Do we have to set dynamic_ownership=0 in qemu.conf again? We have tried various things, but nothing else worked.
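For reference, the workaround we are asking about looks like this (just a sketch; as far as I know vdsm manages libvirt's configuration on the hosts, so a manual edit may be overwritten on redeploy):

# /etc/libvirt/qemu.conf
# 0 = libvirt leaves image file ownership alone instead of
#     chowning files around VM start/stop
dynamic_ownership = 0

# restart libvirtd so the change takes effect
systemctl restart libvirtd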
Does anyone have an idea?
Christoph
Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing
by Strahil
I upgraded to RC3 and now I cannot power on any VM.
I constantly get I/O errors, but checking at the gluster level I can dd from each disk or even create a new one.
Removing the HighAvailability flag doesn't help.
I guess I should restore the engine from the gluster snapshot and roll back via 'yum history undo last'.
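Roughly what I have in mind, in case it helps (the snapshot name is a placeholder, and a gluster snapshot restore requires the volume to be stopped first):

# restore the volume backing the engine from the snapshot
gluster volume stop engine
gluster snapshot restore <engine_snap>
gluster volume start engine

# then roll back the package transaction on each host
yum history undo last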
Does anyone else see these issues?
Best Regards,
Strahil Nikolov

On Nov 13, 2019 15:31, Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>
> On Wed, Nov 13, 2019 at 2:25 PM Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>>
>> On Wed, Nov 13, 2019 at 1:56 PM Florian Schmid <fschmid(a)ubimet.com> wrote:
>>>
>>> Hello,
>>>
>>> I have a question about bugs that are flagged as [downstream clone - 4.3.7] but are not yet released.
>>>
>>> I'm talking about this bug:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1749202
>>>
>>> I can't see it in the 4.3.7 release notes. Will it be included in a later release candidate? I think this fix is very important, and I can't upgrade yet because of this bug.
>>
>> Looking at the bug, the fix is contained in the following tags:
>>
>> $ git tag --contains 12bd5cb1fe7c95e29b4065fca968913722fe9eaa
>> ovirt-engine-4.3.6.6
>> ovirt-engine-4.3.6.7
>> ovirt-engine-4.3.7.0
>> ovirt-engine-4.3.7.1
>>
>> So the fix is already included in the oVirt 4.3.6 release.
>
>
> Sent a fix to 4.3.6 release notes: https://github.com/oVirt/ovirt-site/pull/2143. @Ryan Barry can you please review?
>
>>>
>>>
>>> BR Florian Schmid
>>>
>>> ________________________________
>>> From: "Sandro Bonazzola" <sbonazzo(a)redhat.com>
>>> To: "users" <users(a)ovirt.org>
>>> Sent: Wednesday, November 13, 2019 13:34:59
>>> Subject: [ovirt-users] [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing
>>>
>>> The oVirt Project is pleased to announce the availability of the oVirt 4.3.7 Third Release Candidate for testing, as of November 13th, 2019.
>>>
>>> This update is a release candidate of the seventh in a series of stabilization updates to the 4.3 series.
>>> This is pre-release software. This pre-release should not be used in production.
>>>
>>> This release is available now on x86_64 architecture for:
>>> * Red Hat Enterprise Linux 7.7 or later (but <8)
>>> * CentOS Linux (or similar) 7.7 or later (but <8)
>>>
>>> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
>>> * Red Hat Enterprise Linux 7.7 or later (but <8)
>>> * CentOS Linux (or similar) 7.7 or later (but <8)
>>> * oVirt Node 4.3 (available for x86_64 only) has been built consuming CentOS 7.7 Release
>>>
>>> See the release notes [1] for known issues, new features and bugs fixed.
>>>
>>> While testing this release candidate please note that oVirt node now includes:
>>> - ansible 2.9.0
>>> - GlusterFS 6.6
>>>
>>> Notes:
>>> - oVirt Appliance is already available
>>> - oVirt Node is already available
>>>
>>> Additional Resources:
>>> * Read more about the oVirt 4.3.7 release highlights: http://www.ovirt.org/release/4.3.7/
>>> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
>>> * Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/
>>>
>>> [1] http://www.ovirt.org/release/4.3.7/
>>> [2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
>>>
>>> --
>>>
>>> Sandro Bonazzola
>>>
>>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>>
>>> Red Hat EMEA
>>>
>>> sbonazzo(a)redhat.com
>>>
>>> Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.
>>>
>>> _______________________________________________
>>> Users mailing list -- users(a)ovirt.org
>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/24QUREJPZHT...
Async release for ovirt-engine-metrics is now available for oVirt 4.3.8
by Sandro Bonazzola
The oVirt team has just released a new version of the ovirt-engine-metrics package that fixes the deployment of the metrics solution when using Foreman/Satellite.
The deployment was previously failing on a missing rhsub_orgid in vars.yaml.template.
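For reference, the fix amounts to the template now providing that variable; the sketch below is illustrative only, not the exact file contents:

# vars.yaml.template (excerpt, illustrative)
# organization ID used when registering via Foreman/Satellite;
# this key was previously missing and broke the deployment
rhsub_orgid: ''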
We recommend updating ovirt-engine-metrics if you're planning to use it with Foreman/Satellite.
Thanks,
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.
Virtual disk attached to VM showing in the Webui but not identified by the system
by Eugène Ngontang
Hi,
I'm facing a virtual disk behavior I don't understand.
Currently my VMs are spun up with a 25 GB boot disk and an additional disk of 215/45/65 GB, depending on the VM.
When logged in to the web UI I see the two disks, but when I ssh into the VM only the primary boot disk is visible; the other one has no UUID and therefore is not mounted.
I also noticed in the web UI that the second disk doesn't have a logical name, as you can see in the screenshot.
Can someone please explain this behavior?
Here is the output of my disk management commands:
[root@fp-gpu-node3 centos]# fdisk -l
>
> Disk /dev/vda: 26.8 GB, 26843545600 bytes, 52428800 sectors
> Units = sectors of 1 × 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk label type: dos
> Disk identifier: 0x000b6061
>
> Device     Boot Start     End       Blocks    Id System
> /dev/vda1  *    2048      52428766  26213359+ 83 Linux
>
> Disk /dev/vdb: 48.3 GB, 48318382080 bytes, 94371840 sectors
> Units = sectors of 1 × 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
>
> [root@fp-gpu-node3 centos]# blkid
> /dev/vda1: UUID="3ef2b806-efd7-4eef-aaa2-2584909365ff" TYPE="xfs"
>
> [root@fp-gpu-node3 centos]# lsblk -f
> NAME   FSTYPE LABEL UUID                                 MOUNTPOINT
> sr0
> vda
> └─vda1 xfs          3ef2b806-efd7-4eef-aaa2-2584909365ff /
> vdb
>
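For context, as far as I understand a disk only shows a UUID once it carries a filesystem, so the usual way to bring the second disk up would be something like this (destructive, and assuming /dev/vdb really is the unused disk):

# create a GPT label and one partition spanning the disk
parted --script /dev/vdb mklabel gpt mkpart primary xfs 0% 100%

# create the filesystem; this assigns the UUID that blkid reports
mkfs.xfs /dev/vdb1

# mount it somewhere
mkdir -p /data && mount /dev/vdb1 /data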
Thanks for your help.
--
LesCDN <http://lescdn.com>
engontang(a)lescdn.com
------------------------------------------------------------
*Men need a leader, and the leader needs men! Clothes don't make the man, but when people see you, they judge you!*
Details about bios and custom emulated machine values
by Gianluca Cecchi
Hello,
in oVirt 4.3.7, under "Edit VM" -> System -> Advanced parameters, I can choose
Bios Type:
Default
Q35 Chipset with Legacy BIOS
Q35 Chipset with UEFI BIOS
Q35 Chipset with SecureBoot
and
Custom Emulated Machine:
Use cluster default (pc-i440fx-rhel7.6.0)
...
q35
pc-q35-rhel7.3.0
pc-q35-rhel7.4.0
pc-q35-rhel7.5.0
pc-q35-rhel7.6.0
and I can apparently mix any combination of these values.
Is there any deeper information about the implications?
What is the "Default" value for Bios Type?
Did anything change in 4.3.8 (e.g. official support for q35)?
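For what it's worth, the combination a VM actually ends up with can be checked on the host with standard libvirt tooling (the VM name below is a placeholder):

# show the machine type the running VM was started with
virsh dumpxml <vm-name> | grep '<type'
# e.g.: <type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type>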
I also searched the official docs for RHV 4.3 (Virtual Machine Management Guide, appendix A), but I don't find any reference to the "Bios Type" parameter, and only a vague reference to the "Custom Emulated Machine". Is there a more detailed link?
Thanks,
Gianluca
Problems in new oVirt install
by Steve Watkins
Got everything installed and was uploading an ISO when the system just stopped responding. Went to the host and saw the HostedEngine VM was paused. Looked online and found the following instructions for unpausing it:
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine
and got the following error:
Failed to acquire lock: No space left on device
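From what I can tell this message usually comes from sanlock's lockspace rather than from a full filesystem; a sketch of where I'd look next (all standard hosted-engine/sanlock commands):

# check actual free space on the hosted-engine storage
df -h

# inspect sanlock's view of the lockspaces
sanlock client status

# hosted-engine can rebuild a broken lockspace
hosted-engine --reinitialize-lockspace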
Any ideas?