
Hello,

We have begun migrating VM boot disks from one storage domain to another. Both storage domains are NFS and both are V3. The oVirt instance is running oVirt Engine version 3.5.0.1-1.el6.

I am shutting down the VMs prior to migration (not a live migration). When the disks complete their move, I start the VMs back up on the original host. The hosts are all running either Ubuntu 14.04 LTS or 16.04 LTS.

I have experienced the following two issues:

- When the VMs come back up, they are not mounting an external NFS mount that is defined in /etc/fstab. The mount can be reinstated manually and survives subsequent reboots, but every VM that has an external NFS mount has this issue after migration.
- VMs that have a Docker instance running no longer respond to web requests on the web server ports that the Docker instance exposes.

Any thoughts on why this would happen would be greatly appreciated.

Best regards,

Mark Steele
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | msteele@telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
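For the first issue above, one quick way to see which /etc/fstab NFS entries actually came up after a reboot is to compare the fstab configuration against /proc/mounts. The following is a minimal Python sketch of that check; it is a hypothetical helper, not something posted in the thread (the file paths are standard, but the script itself is an assumption):

    #!/usr/bin/env python
    # Hypothetical diagnostic (not from this thread): list the NFS entries in
    # /etc/fstab and report which of them are not currently mounted, by
    # comparing the configured mount points against /proc/mounts.

    def nfs_mount_points(path):
        """Return the set of mount points for NFS entries in an fstab-style file."""
        points = set()
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith('#'):
                    continue
                fields = line.split()
                # second field is the mount point, third is the filesystem type
                if len(fields) >= 3 and fields[2].startswith('nfs'):
                    points.add(fields[1])
        return points

    expected = nfs_mount_points('/etc/fstab')   # NFS mounts we expect at boot
    active = nfs_mount_points('/proc/mounts')   # NFS mounts actually present

    for mp in sorted(expected - active):
        print('NFS mount defined in fstab but not mounted: ' + mp)
    if expected <= active:
        print('All fstab NFS mounts are currently mounted.')

If the mounts are only missing on the first boot after the storage move, it may also be worth confirming that the guest's network is up before network filesystems are mounted (for example, that the NFS entries carry the _netdev option); that is a general Ubuntu boot-ordering consideration rather than anything specific to this migration.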

Hi Mark,

Can you please send the engine log and the VDSM log? This will help us make a deeper investigation into the scenario that you described.

Thanks.
--
Regards,
Eyal Shenitzky

Thank you, Eyal.

For reference, one of the VMs in question is named connect-job-15, and its disk is called connect-job-15-root.

The disks were migrated from phl-datastore (Data Master / NFS / V3) to phl-tevestore-01 (Data / NFS / V3).

Best regards,

Mark Steele
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | msteele@telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue

Hi Mark,

From what I see, it doesn't seem related to the live storage migration process. But please send the content of /etc/fstab and /var/log/messages.
--
Regards,
Eyal Shenitzky
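For the Docker symptom from the first message (containers no longer answering on their published web ports), a basic TCP reachability check against those ports can help separate a dead listener from a port-forwarding or network problem. This is a hypothetical sketch; the host name and port numbers are placeholders, not values taken from the thread:

    #!/usr/bin/env python
    # Hypothetical check: attempt a plain TCP connection to each port a Docker
    # container is expected to publish. Host names and ports below are
    # placeholders, not values taken from this thread.
    import socket

    CHECKS = [
        ('connect-job-15.example.com', 80),    # placeholder host and port
        ('connect-job-15.example.com', 443),   # placeholder host and port
    ]

    for host, port in CHECKS:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(3)
        try:
            sock.connect((host, port))
            print('%s:%d is reachable' % (host, port))
        except (socket.error, socket.timeout) as exc:
            print('%s:%d is NOT reachable (%s)' % (host, port, exc))
        finally:
            sock.close()

If the ports are unreachable from outside the VM but the application still answers inside the container, the usual next step would be to compare the container's published port mappings and the host's firewall rules before and after the VM restart.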