Veeam - Move data over a network other than ovirtmgmt

Hi all

I was wondering if anyone has managed to get Veeam working with a dedicated backup VLAN instead of sending traffic over the ovirtmgmt interface. We have an OLVM setup (version 4.5.5-1.42.el8) with 3 hosts in 1 cluster for now.

Each host has a 1 Gbps NIC pair in a separate management LAN that is used only for ovirtmgmt. Each host also has a 10 Gbps NIC bond where all the VLANs are trunked, including a dedicated backup VLAN. The hosts do not have an IP configured in this backup VLAN (is this needed?). Each host also has 2 NICs dedicated to the storage backend (iSCSI multipathing).

Our Veeam appliance and workers are set up in the backup VLAN, and they can communicate with the backup repository, which is also in this VLAN.

When we take a backup, however, we see that the ovirtmgmt network is maxed out on traffic, which probably shouldn't happen. We come from a VMware environment where the backup proxies handle all the traffic through the backup VLAN without relying on the management network, and I was expecting it to work similarly.

Has anyone managed to get this working as I envision it? Or is the solution to simply link ovirtmgmt to the 10 Gbps bond and skip the dedicated management networking?

I've also opened a ticket with Veeam for this, but for now I'm just getting questions from them and no real answers. If I get a useful solution from them I will also post it here.
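Before changing anything, it helps to confirm on the host itself which interface the backup traffic is really hitting. A minimal sketch (Linux only, stdlib only) that snapshots the kernel's per-interface byte counters; run it twice during a backup job and diff the numbers. Interface names like ovirtmgmt or a bond name will depend on your host:

```python
def read_interface_bytes(path="/proc/net/dev"):
    """Return {interface: (rx_bytes, tx_bytes)} from the kernel counters."""
    counters = {}
    with open(path) as f:
        for line in f.readlines()[2:]:          # skip the two header lines
            name, data = line.split(":", 1)
            fields = data.split()
            # field 0 = RX bytes, field 8 = TX bytes (see man 5 proc)
            counters[name.strip()] = (int(fields[0]), int(fields[8]))
    return counters

if __name__ == "__main__":
    for iface, (rx, tx) in sorted(read_interface_bytes().items()):
        print(f"{iface:12s} rx={rx:>15,} tx={tx:>15,}")
```

Tools like iftop or nload show the same thing live; this is just a dependency-free way to capture before/after numbers.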

If the host does not have an IP on the 10G network then it cannot use that interface. -derek
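Derek's point in a nutshell: a Linux host picks an egress interface by matching the destination against the subnets it has an address in, and anything that matches nothing falls back to the default route, which here sits on ovirtmgmt. A toy sketch of that selection logic (all interface names and subnets below are made up, not from the thread):

```python
import ipaddress

# Hypothetical host configuration: an address on ovirtmgmt only.
# Uncommenting the "backup" entry models giving the host an IP in the
# backup VLAN, after which repo traffic would egress there instead.
HOST_NETWORKS = {
    "ovirtmgmt": ipaddress.ip_network("192.168.10.0/24"),
    # "backup":  ipaddress.ip_network("10.10.50.0/24"),
}

def egress_interface(dest, networks=HOST_NETWORKS, default="ovirtmgmt"):
    """Return the interface whose subnet contains dest, else the default route's."""
    addr = ipaddress.ip_address(dest)
    for iface, net in networks.items():
        if addr in net:
            return iface
    return default

# Backup repository in the (unaddressed) backup VLAN:
print(egress_interface("10.10.50.21"))   # -> ovirtmgmt
```

Real routing also involves longest-prefix match and source-address selection, but the conclusion is the same: without an address in the backup VLAN, the host cannot source traffic there.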
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/CSBYRUEGDM3W3T...

Hi and thanks for the reply. I have now solved the issue. If anyone is wondering how: Veeam was using DNS to connect to everything, meaning it would always connect to the management IP behind the FQDN of the server. What I did was:

- Give every host an IP in the backup VLAN
- Set up DNS policies so that in my backup VLAN the FQDN always resolves to the host's IP in the backup VLAN

Veeam is now happily using the 10 Gbps bond and speeds have increased significantly. It adds a little management overhead having to maintain a separate DNS scope, but it shouldn't really matter.

Kind regards,

Domien Van Rompaey
Security Engineer, Cloud & ICT-infrastructure | Security
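For anyone wanting to reproduce the DNS-policy part, this is classic split-horizon DNS. A hedged sketch of how it might look with BIND views; zone names, subnets, and file names below are invented examples, not Domien's actual configuration:

```
// Clients in the backup VLAN get a zone file where each host FQDN
// resolves to its backup-VLAN address; everyone else keeps the
// management addresses. Views are evaluated top to bottom.
view "backup-vlan" {
    match-clients { 10.10.50.0/24; };     // Veeam appliance, workers, repo
    zone "olvm.example.lan" {
        type master;
        file "db.olvm.backup";            // e.g. kvmhost1 A 10.10.50.11
    };
};

view "default" {
    match-clients { any; };
    zone "olvm.example.lan" {
        type master;
        file "db.olvm.mgmt";              // e.g. kvmhost1 A 192.168.10.11
    };
};
```

Other resolvers (Windows DNS policies, Unbound views) can express the same idea; the only requirement is that the hosts' FQDNs resolve to their backup-VLAN addresses when queried from the backup VLAN.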

Hi Domien,

As far as I understand (I'm not a Veeam expert), if you are taking backups by connecting Veeam to the Engine API, then after receiving the backup request the Engine application will ask the KVM host to provide the items to back up. That communication between the Engine and the KVM hosts, and back, goes through the ovirtmgmt network. If you are backing up VMs using an agent installed on the VMs, your VMs must have access to the backup VLAN you mentioned. I recommend talking to your Veeam backup support to confirm this.

Marcos
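One detail worth adding about the API-based path: with oVirt's incremental backup API, the actual disk data is served by the ovirt-imageio daemon on the KVM host (port 54322 by default), and the client is handed a transfer URL that names the host by FQDN. Whatever that FQDN resolves to from the backup worker is where the image data flows, which is why the DNS fix above redirects the bulk traffic. A tiny sketch with an invented transfer URL:

```python
from urllib.parse import urlparse

# Hypothetical imageio transfer URL as the Engine might return it;
# the path/UUID is made up for illustration.
transfer_url = "https://kvmhost1.example.lan:54322/images/ticket-1234"

host = urlparse(transfer_url).hostname
print(host)   # -> kvmhost1.example.lan
# Resolving this name from the backup VLAN decides which network
# carries the disk data - the control traffic stays on ovirtmgmt.
```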
participants (4)
- Derek Atkins
- Domien Van Rompaey
- domien.vanrompaey@dynamate.be
- Marcos Sungaila