This request bounced two times as well.
Details below. Can you please advise?
On 8 Jul 2019, at 09:58, Vrgotic, Marko <M.Vrgotic@activevideo.com> wrote:
Happy to report that I have deployed my first Python vdsm_hook, which removes the A, AAAA, and TXT records of a VM each time the after_vm_destroy event is triggered, and it works fine.
This was required to be able to quickly redeploy/rebuild VMs using the same hostname.
The issue I am observing/just discovered is:
Since live migration of a VM guest is considered a destruction from the source host's perspective, the DNS records get deleted and the VM lands on the new host without being resolvable 😊.
I would like to introduce another Python hook script for the migration events to get these entries back into DNS, but for that I need their IPs.
* Where can I find/locate/read the VMs' IPs and hostnames (is it the dom_xml?) so I can incorporate them into the DNS update script?
Or even better:
* Is there a way to not trigger the DNS update when a VM is being migrated? Can I use the dom_xml or some other location/file to check whether the VM is being migrated, which would let me control whether dnsupdate should be triggered? (A possible approach is sketched below.)
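One possible approach, as an untested sketch: VDSM also exposes a before_vm_migrate_source hook point on the source host, so that hook could drop a per-VM marker file which the after_vm_destroy hook checks before touching DNS. In the sketch below, the marker directory and remove_dns_records() are made-up placeholder names; hooking.read_domxml() and the vmId environment variable come from the standard VDSM hook interface:

#!/usr/bin/python
# after_vm_destroy hook: skip the DNS cleanup when the destroy is part
# of a live migration. Assumes a companion before_vm_migrate_source
# hook already touched a marker file for this VM, e.g.:
#   open('/var/run/vdsm/dns-hook/' + os.environ['vmId'], 'w').close()
import os
import hooking  # vdsm's helper module for hook scripts

MARKER_DIR = '/var/run/vdsm/dns-hook'  # hypothetical marker location


def remove_dns_records(fqdn):
    # placeholder for the existing nsupdate-based cleanup logic
    pass


vm_id = os.environ.get('vmId')  # vdsm exports the VM UUID to hooks
marker = os.path.join(MARKER_DIR, vm_id) if vm_id else None

if marker and os.path.exists(marker):
    os.unlink(marker)  # migration, not a real destroy: keep the records
else:
    domxml = hooking.read_domxml()  # the VM's libvirt domain XML
    vm_name = domxml.getElementsByTagName('name')[0].firstChild.data
    remove_dns_records(vm_name)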
If it's relevant to the question: we are currently running oVirt 4.3.3.
Kindly awaiting your reply.
Don't forget to run a ping with a size of 8500 and "do not fragment" flag.
This will really prove that you are no longer using the small MTU anywhere on the path.
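For example, on a Linux host (iputils ping; the target is a placeholder):

ping -M do -s 8500 <remote-host>

With "do not fragment" set, the ping fails with "message too long" wherever the path MTU is still 1500, instead of fragmenting silently.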
Strahil Nikolov

On Jul 9, 2019 17:27, Mark Steele <msteele@telvue.com> wrote:
> Thank you Strahil,
> The MTU appears to be set to 1500 according to 'ifconfig'.
> Mark Steele
> CIO / VP Technical Operations | TelVue Corporation
> TelVue - We Share Your Vision
> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
> 800.885.8886 x128 | msteele@telvue.com | http://www.telvue.com
> twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
> On Tue, Jul 9, 2019 at 10:14 AM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
>> The first thing that comes to my mind is to check the network's MTU.
>> By default it is 1500, and I suppose you can go with MTU 9000.
>> Also, check whether the OS is using MTU 9000.
>> Best Regards,
>> Strahil Nikolov
>> On Tuesday, July 9, 2019, 9:58:47 AM GMT-4, Mark Steele <msteele@telvue.com> wrote:
>> It seems that our oVirt instance never exceeds 550 Mbps of aggregate traffic (rx/tx) according to vnstat. I want to make sure that the negotiated speed, handshaking, etc. are all what I expect them to be from the OS level.
>> Mark Steele
oVirt Engine Version: 188.8.131.52-1.el6 (yes....I know it's old)
The VM is running Ubuntu 18.04 with the oVirt guest agent installed.
I'm attempting to get NIC information by running 'ethtool -i ens3', and I get:
root@myserver:~# ethtool -i ens3
and when I run 'ethtool ens3' I get the following:
root@myserver:~# ethtool ens3
Settings for ens3:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Duplex: Unknown! (255)
Link detected: yes
This is not what I expected. Is there another tool I should be using to
display the information for ens3 on this server?
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | msteele@telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
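As an aside: the output above is what a virtio NIC (the usual choice for oVirt guests) typically reports, since there is no physical link to negotiate. A couple of quick checks from inside the guest, assuming a Linux VM; the interface name is an example, and virtio may simply not expose a speed:

cat /sys/class/net/ens3/speed   # link speed in Mb/s, if the driver reports one
cat /sys/class/net/ens3/mtu     # current MTU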
Today marks a new day in the 26-year history of Red Hat. IBM has finalized its acquisition of Red Hat, which will operate as a distinct unit within IBM moving forward.
What does this mean for Red Hat’s contributions to the oVirt project?
In short, nothing.
Red Hat always has and will continue to be a champion for open source and projects like oVirt that have played such a large role in driving innovation in the virtualization space. IBM is committed to Red Hat’s independence and role in open source software communities so that we can continue this work without interruption or changes.
Our mission, governance, and objectives remain the same. We will continue to execute the existing project roadmap as we’ve been doing since 2011 for the oVirt project. Red Hat associates will continue to contribute to the upstream in the same way they have been. And, as always, we will continue to help upstream projects be successful and contribute to welcoming new members and maintaining the project.
We will do this together, with the community, as we always have.
If you have questions or would like to learn more about today's news, I encourage you to review the materials below. Red Hat CTO Chris Wright will host an online Q&A session in the coming days where you can ask questions you may have about what the acquisition means for Red Hat and our involvement in open source communities. Details will be announced on the Red Hat blog.
* Press release 
* Blog from Chris Wright 
* FAQ 
Sandro Bonazzola, Manager, Software Engineering
I'm testing out the managed storage to connect to ceph and I have a few questions:
* Would I be correct in assuming that the hosted engine VM needs connectivity to the storage, and not just the underlying hosts themselves? It seems like the cinderlib client runs from the engine?
* Do the ceph config and keyring need to be replicated onto each hypervisor/host?
* I have managed to do one block operation so far (I've created a volume, which is visible on the ceph side), but multiple other operations have failed and are 'running' in the engine task list. Is there any way I can increase debugging to see what's happening?
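For context, a managed block storage domain for ceph is typically described with the upstream Cinder RBD driver options, along these lines (a sketch with example values, not a confirmed oVirt recipe):

volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph.conf                       # example path
rbd_pool = volumes                                        # example pool
rbd_user = cinder                                         # example cephx user
rbd_keyring_conf = /etc/ceph/ceph.client.cinder.keyring   # example path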
The oVirt Project is pleased to announce the availability of the oVirt
4.3.5 Fourth Release Candidate for testing, as of July 3rd, 2019.
While testing this release candidate please consider deeper testing of the Gluster upgrade path, since with this release we are switching from Gluster 5 to Gluster 6.
This update is a release candidate of the fifth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
See the release notes for installation / upgrade instructions and a
list of new features and bugs fixed.
- oVirt Appliance is already available
- oVirt Node is already available
* Read more about the oVirt 4.3.5 release highlights:
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
Part of updating a DC from 4.2 to 4.3 consists of updating the format of data
storage domains from V4 to V5.
- Where can I find a list of the differences between V4 and V5? And in general
between previous versions?
- Can I force a particular storage domain to remain in V4 while the other
ones are upgraded to V5, for any reason? Or the reverse: update only one of
the defined domains?
E.g. on the oVirt web site I only found this for V4, with a mention of qcow2
v3 (link broken; one of the upstream pages for it is
https://wiki.qemu.org/Features/Qcow3), which should give near-raw
performance with the advantages of the qcow2 format...
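For reference, qcow2 v3 corresponds to 'compat' level 1.1 in qemu-img terms, which can be inspected or set directly (file name and size here are examples):

qemu-img create -f qcow2 -o compat=1.1 example.qcow2 10G
qemu-img info example.qcow2    # "compat: 1.1" indicates qcow2 v3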
I would like to ask any of you to do a simple test.
Please install a Linux on a VM with Secure Boot and, after the install wizard is over, power it off and then on.
Here is what I have observed:
1. UEFI forgets the certificates after a poweroff and asks for the openSUSE certificate every time. Rebooting from inside the VM doesn't cause any problems until the next boot.
2. The RHEL 8 grub menu consists only of 'System setup'.
When you select it, you are sent to the UEFI menu. If you select continue, everything is fine.
Poweroff + poweron makes everything repeat.
Do you observe the same issues? If I'm not the only one hitting this, I will open a bug; otherwise I will try to debug this further.
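For anyone reproducing this: the enrolled certificates live in the per-VM NVRAM file that libvirt pairs with the OVMF loader, so that file is where state would be lost across poweroffs. A typical libvirt fragment for a Secure Boot guest looks roughly like this (firmware paths vary by distro; shown for orientation only):

<os>
  <loader readonly='yes' secure='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
  <nvram>/var/lib/libvirt/qemu/nvram/guest_VARS.fd</nvram>
</os>
<features>
  <smm state='on'/>  <!-- required by libvirt when secure='yes' -->
</features>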
I have a problem. Last week I lost connectivity to a VM running Windows
Server 2008. I accessed it to check: in the oVirt Manager the VM was powered
on and I could log in to it, but there was no ping to the other VMs, so I
restarted the VM and connectivity recovered. Today the same scenario occurred
with the same VM.
I checked the logs from the oVirt Manager and from the oVirt node running the
VM, but I didn't find any error.
Where can I find out more?
I have an oVirt Manager 4 and an oVirt Node 4.1.
Julio Cesar Bustamante.
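For reference, the standard places to dig further are the engine and VDSM logs, at their default locations:

/var/log/ovirt-engine/engine.log   # on the oVirt Manager machine
/var/log/vdsm/vdsm.log             # on the host running the VM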
But I thought that cockpit should prevent creation of the gluster volume 'engine' when it is too small.
Do we have such a control?
On Jul 8, 2019 09:19, Parth Dhanjal <dparth@redhat.com> wrote:
> The other 2 bricks were of 50G each.
> I forgot to check that.
> Sorry for the confusion.
>> On Mon, Jul 8, 2019 at 11:42 AM Sahina Bose <sabose@redhat.com> wrote:
>>> On Mon, Jul 8, 2019 at 11:40 AM Parth Dhanjal <dparth@redhat.com> wrote:
>>> I used cockpit to deploy gluster.
>>> And the problem seems to be with
>>> 10.xx.xx.xx9:/engine 50G 3.6G 47G 8% /rhev/data-center/mnt/glusterSD/10.70.41.139:_engine
>>> Engine volume has 500G available
>>> Filesystem Size Used Avail Use% Mounted on
>>> /dev/mapper/gluster_vg_sdb1-gluster_lv_engine 500G 37M 500G 1% /gluster_bricks/engine
>> Check that you have 500G on all the bricks of the volume if you're using replica 3. It takes the minimum of the 3 bricks.
>>> On Fri, Jul 5, 2019 at 8:05 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
>>>> Did you use cockpit to deploy the gluster volume?
>>>> It's interesting why cockpit allowed the engine's volume to be so small...
>>>> Best Regards,
>>>> Strahil Nikolov
>>>> On Friday, July 5, 2019, 10:17:11 AM GMT-4, Simone Tiraboschi <stirabos@redhat.com> wrote:
>>>> On Fri, Jul 5, 2019 at 4:12 PM Parth Dhanjal <dparth@redhat.com> wrote:
>>>>> I'm trying to deploy a 3 node cluster with gluster storage.
>>>>> After the gluster deployment completes successfully, the creation of the storage domain fails during HE deployment, giving the error: