Behaviour when attaching shared iSCSI storage with existing data


On Thu, Jan 4, 2018 at 4:03 AM, Sam McLeod <mailinglists@smcleod.net> wrote:
If one was to attach a shared iSCSI LUN as 'storage' to an oVirt data centre that contains existing data - how does oVirt behave?
For example the LUN might be partitioned as LVM, then contain existing filesystems etc...
- Would oVirt see that there is existing data on the LUN and simply attach it as any other Linux initiator (client) would, or would it try to wipe the LUN clean and reinitialise it?
Neither - we will not be importing these as existing data domains, nor wipe them, as they have contents.
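Before handing such a LUN to anything, it can be worth confirming from any Linux host what is already on it, e.g. with `pvs` or `blkid` against the device. A small sketch that parses `pvs --noheadings -o pv_name,vg_name` output to list existing LVM physical volumes and their volume groups - the sample output and device/VG names below are illustrative, not from a real host:

```python
# Sketch: detect existing LVM metadata on a LUN before attaching it
# anywhere, by parsing `pvs --noheadings -o pv_name,vg_name` output.
# In practice the text would come from something like
#   subprocess.run(["pvs", "--noheadings", "-o", "pv_name,vg_name"], ...)
# The sample below is made up for illustration.

def parse_pvs(output):
    """Return a list of (pv_device, vg_name) tuples from pvs output."""
    pvs = []
    for line in output.splitlines():
        fields = line.split()
        if not fields:
            continue
        pv = fields[0]
        vg = fields[1] if len(fields) > 1 else ""  # a PV may belong to no VG
        pvs.append((pv, vg))
    return pvs

sample = """
  /dev/mapper/xen-lun1  VG_XenStorage-1234
  /dev/mapper/xen-lun2  VG_XenStorage-5678
"""
print(parse_pvs(sample))
```

Any LUN that shows up here still carries LVM metadata (XenServer's, in this thread's scenario), so it is not a blank device from oVirt's point of view.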
Context: Investigating migration from XenServer to oVirt (4.2.0)
A very interesting subject - would love to see the outcome!
All our iSCSI storage is currently attached to XenServer hosts, XenServer formats those raw LUNs with LVM and VMs are stored within them.
I suspect we need to copy the data. We might be able to do some tricks, but at the end of the day I think copying the data, LV to LV, makes the most sense. However, I wonder what else is needed - do we need a conversion of the drivers, different kernel, etc.?

What are the export options Xen provides? Perhaps OVF? Is there an API to stream the disks from Xen?
Y.
*If the answer to this is already out there and I should have found it by searching, I apologise, please point me to the link and I'll RTFM.*
-- Sam McLeod https://smcleod.net https://twitter.com/s_mcleod
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users


On Fri, Jan 5, 2018 at 12:19 AM, Sam McLeod <mailinglists@smcleod.net> wrote:
Thanks for your response Yaniv,
Context: Investigating migration from XenServer to oVirt (4.2.0)
A very interesting subject - would love to see the outcome!
I'll certainly be writing one, if not many, blog posts on the process and outcome :)
We've been wanting to switch to something more 'modern' for a while, but XenServer has had a very low TCO for us - sure, it doesn't perform as well as a Xen/KVM setup on top of CentOS/RHEL with updated kernels, tuning etc., but it just kept working. Meanwhile we lost some people in my team, so it hasn't been the right time to look at moving... until now...
Citrix / XenServer recently screwed over the community (I don't use that term lightly) by kneecapping the free / unlicensed version of XenServer: https://xenserver.org/blog/entry/xenserver-7-3-changes-to-the-free-edition.html
There's a large number of people very unhappy about this, as many of the people that contribute heavily to bug reporting, testing and rapid / modern deployment lifecycles were / are using the unlicensed version (like us over @infoxchange), so for us - this was the straw that broke the camel's back.
I've been looking into various options such as oVirt, Proxmox, OpenStack and a roll-your-own libvirt-style platform based on our CentOS (7 at present) SOE; so far, oVirt is looking promising.
All our iSCSI storage is currently attached to XenServer hosts, XenServer formats those raw LUNs with LVM and VMs are stored within them.
I suspect we need to copy the data. We might be able to do some tricks, but at the end of the day I think copying the data, LV to LV, makes the most sense. However, I wonder what else is needed - do we need a conversion of the drivers, different kernel, etc.?
All our Xen VMs are PVHVM, so there's no reason we couldn't export them as files, then import them to oVirt if we do go down the oVirt path after the POC. We run kernel-ml across our fleet (almost always running a near-latest kernel release) and automate all configuration with Puppet.
The issue I have with this is that it will be slow - XenServer's storage performance is *terrible* and there'd be lots of manual work involved.
If this was to be the simplest option, I think we'd opt for rebuilding VMs from scratch, letting Puppet set up their config etc., then restoring data from backups / rsync etc. - that way we'd still be performing the manual work, but we'd end up with nice clean VMs.
Indeed, greenfield deployment has its advantages.
The down side to that is juggling iSCSI LUNs, I'll have to migrate VMs on XenServer off one LUN at a time, remove that LUN from XenServer and add it to oVirt as new storage, and continue - but if it's what has to be done, we'll do it.
The migration of VMs has three parts:
- VM configuration data (from name to number of CPUs, memory, etc.)
- Data - the disks themselves.
- Adjusting VM internal data (paths, boot kernel, grub?, etc.)

The first item could be automated. Unfortunately, it was a bit of a challenge to find a common automation platform. For example, we have great Ansible support, which I could not find for XenServer (but[1], which may be a bit limited). Perhaps if there aren't too many VMs, this could be done manually. If you use Foreman, btw, then it could probably be used to provision both?

The 2nd - data movement - could be done in at least two-three ways:
1. Copy using 'dd' from LUN/LV/raw/? to a raw volume in oVirt.
2. (My preferred option) copy using 'dd' from LUN/LV/raw and upload using the oVirt upload API (example in Python[2]). I think that's an easy-to-implement option and provides the flexibility to copy from pretty much any source to oVirt.
3. There are ways to convert XVA to qcow2 - I saw some references on the Internet, never tried any.

As for the last item, I'm really not sure what changes are needed, if at all. I don't know the disk convention, for example (/dev/sd* for SCSI disk -> virtio-scsi, but are there other device types?)

I'd be happy to help with any adjustment needed for the Python script below.
Y.

[1] http://docs.ansible.com/ansible/latest/xenserver_facts_module.html
[2] https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_di...
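The 'dd' + upload-API option above is, at its core, a chunked read of the source LV pushed to a writer. A minimal sketch of just the copy loop, under the assumption that the writer in a real migration would be an HTTPS PUT to the oVirt image-transfer URL (as in the SDK's upload example) - the `copy_blocks` name is illustrative, not from the SDK:

```python
# Sketch of the dd-style copy loop behind option 2: read the source
# LV/LUN in fixed-size chunks and hand each chunk to an uploader
# callback. In the real migration the callback would PUT the chunk to
# the oVirt image-transfer endpoint; here it is any callable, so the
# loop itself can be exercised against a plain file.

def copy_blocks(src_path, write_chunk, chunk_size=8 * 1024 * 1024):
    """Read src_path in chunk_size pieces, passing each to write_chunk.

    Returns the total number of bytes copied. Memory use stays bounded
    at chunk_size regardless of how large the source LV is.
    """
    total = 0
    with open(src_path, "rb") as src:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            write_chunk(chunk)
            total += len(chunk)
    return total
```

With a block device path as `src_path` and an HTTP-PUT-style callable as `write_chunk`, this mirrors what `dd` piped into the upload API would do from any host that can see the LUN.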
What are the export options Xen provides? Perhaps OVF? Is there an API to stream the disks from Xen? Y.
Yes, Xen does have an API, but TBH - it's pretty awful to work with, think XML and lots of UUIDs...
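For a flavour of the "XML and lots of UUIDs" point: XenServer's management API (XenAPI) is XML-RPC, so every call is an XML envelope whose arguments are opaque session and object references. A small illustration using only the standard library's XML-RPC serializer - the method name matches XenAPI's convention, but the two ref values are invented for this example:

```python
import xmlrpc.client

# XenAPI calls are plain XML-RPC: an XML <methodCall> envelope carrying
# opaque refs/UUIDs as arguments. These ref values are made up for
# illustration - real ones are returned by the API at login time.
session_ref = "OpaqueRef:d3adbeef-0000-4000-8000-000000000001"
vm_ref = "OpaqueRef:c0ffee00-0000-4000-8000-000000000002"

# Serialize the request body that would be POSTed to the XenServer host.
payload = xmlrpc.client.dumps((session_ref, vm_ref),
                              methodname="VM.get_record")
print(payload)
```

Every operation follows this shape - look up a ref, pass it with the session ref, get more refs back - which is what makes scripting against it feel tedious compared to, say, a REST API with stable names.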
-- Sam McLeod https://smcleod.net https://twitter.com/s_mcleod


On Mon, Jan 8, 2018 at 11:52 PM, Sam McLeod <mailinglists@smcleod.net> wrote:
Hi Yaniv,
Thanks for your detailed reply, it's very much appreciated.
On 5 Jan 2018, at 8:34 pm, Yaniv Kaul <ykaul@redhat.com> wrote:
Indeed, greenfield deployment has its advantages.
The down side to that is juggling iSCSI LUNs: I'll have to migrate the VMs on XenServer off one LUN at a time, remove that LUN from XenServer, add it to oVirt as new storage, and continue - but if that's what has to be done, we'll do it.
The migration of VMs has three parts:
- VM configuration data (from name to number of CPUs, memory, etc.)
That's not too much of an issue for us; we have a pretty standard set of configurations for performance / sizing.
- Data - the disks themselves.
This is the big one. For most hosts, at least, the data is on a dedicated logical volume - for example, if it's PostgreSQL, it would be LUKS on top of a logical volume for /var/lib/pgsql, etc.
- Adjusting VM internal data (paths, boot kernel, grub?, etc.)
Everything is currently PVHVM, which uses standard grub2; you could literally dd any one of our VMs to a physical disk and boot it on any x86/64 machine.
The first item could be automated. Unfortunately, it was a bit of a challenge to find a common automation platform. For example, we have great Ansible support, which I could not find for XenServer (except [1], which may be a bit limited). Perhaps if there aren't too many VMs, this could be done manually. If you use Foreman, by the way, it could probably be used to provision for both?
The second item - data movement - could be done in at least two or three ways:
1. Copy using 'dd' from LUN/LV/raw to a raw volume in oVirt.
2. (My preferred option) Copy using 'dd' from LUN/LV/raw and upload using the oVirt upload API (example in Python [2]). I think that's an easy option to implement, and it provides the flexibility to copy from pretty much any source to oVirt.
A key thing here would be how quickly the oVirt API can ingest the data. Our storage LUNs are 100% SSD; each LUN can easily provide at least 1,000 MB/s, around 2M 4k write IOPS, and 2-4M 4k read IOPS, so we always find the hypervisor's disk virtualisation mechanisms to be the bottleneck. Adding an API to the mix - especially one that is single threaded (if that is what does the data stream processing) - could be a big performance problem.
Well, it's just for the data copy. We can do ~300 MB/s or so in a single upload API call, but you can copy multiple disks via multiple hypervisors in parallel. In addition, if you are using 'dd' you might even be able to use sg_xcopy (if it's the same storage) - worth looking into. In any case, we have concrete plans to improve the performance of the upload API.
3. There are ways to convert XVA to qcow2 - I saw some references on the Internet, never tried any.
This is something I was thinking of potentially doing. I can actually export each VM as an OVF/OVA package - since that's very standard, I'm assuming oVirt can likely import them and convert to qcow2 or raw/LVM?
Well, in theory, OVF/OVA is a standard. In practice, it's far from it - it defines how the XML should look and what it contains, but a VMware para-virtual NIC is not a Xen para-virtual NIC, which is not an oVirt para-virtual NIC, so the fact that it describes a NIC means nothing when it comes to cross-platform compatibility.
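The point above is easy to see in practice: an OVA is just a tar archive containing an .ovf XML descriptor, so the envelope is trivially readable, while the device semantics inside it remain platform-specific. A minimal sketch of pulling the descriptor out of an OVA:

```python
import io
import tarfile
import xml.etree.ElementTree as ET


def read_ovf_descriptor(ova_bytes):
    """Extract the .ovf XML descriptor from an OVA (a tar archive)
    and return the parsed root element. Parsing the XML is the easy
    part; interpreting its hardware sections is where cross-platform
    compatibility breaks down."""
    with tarfile.open(fileobj=io.BytesIO(ova_bytes)) as tar:
        for member in tar.getmembers():
            if member.name.endswith(".ovf"):
                return ET.fromstring(tar.extractfile(member).read())
    raise ValueError("no .ovf descriptor found in OVA")
```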
As for the last item, I'm really not sure what changes are needed, if any. I don't know the disk naming convention, for example (/dev/sd* for a SCSI disk -> virtio-scsi, but are there other device types?)
Xen's virtual disks are all /dev/xvd[a-z]. Thankfully, we partition everything as LVM, and partitions (other than boot, I think) are mounted as such.
And there's nothing that needs to address paths such as /dev/xvd*? Y.
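If any /dev/xvd* references do survive inside a guest (in /etc/fstab, GRUB config, etc.), a minimal sketch of rewriting them, assuming the virtio-blk targets appear as /dev/vd* with the same letter ordering - mounting by LVM path or UUID, as described above, sidesteps the problem entirely:

```python
import re

# Matches Xen block devices such as /dev/xvda, /dev/xvda1, /dev/xvdab2
XVD_RE = re.compile(r"/dev/xvd([a-z]+[0-9]*)")


def remap_xen_paths(text):
    """Rewrite Xen /dev/xvd* device paths to virtio /dev/vd* paths.
    Entries using LVM paths (/dev/mapper/...) or UUID= are untouched,
    which is why LVM/UUID mounts need no adjustment after migration."""
    return XVD_RE.sub(r"/dev/vd\1", text)
```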
I'd be happy to help with any adjustment needed for the Python script below.
Very much appreciated. When I get to the point where I'm happy with the basic architectural design and a POC deployment of oVirt, that's when I'll be testing importing VMs / data in various ways - I have made note of these scripts.
Y.
[1] http://docs.ansible.com/ansible/latest/xenserver_facts_module.html
[2] https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py

On 9 January 2018 at 13:54, Yaniv Kaul <ykaul@redhat.com> wrote:
3. There are ways to convert XVA to qcow2 - I saw some references on the Internet, never tried any.
This is something I was thinking of potentially doing, I can actually export each VM as an OVF/OVA package - since that's very standard I'm assuming oVirt can likely import them and convert to qcow2 or raw/LVM?
Well, in theory, OVF/OVA is a standard. In practice, it's far from it - it defines how the XML should look and what it contains, but a VMware para-virtual NIC is not a para-virtual Xen NIC is not an oVirt para-virtual NIC, so the fact it describes a NIC means nothing when it comes to cross-platform compatibility.
While exporting, please ensure you include snapshots. You can learn more about snapshot tree export support in Xen here: https://xenserver.org/partners/18-sdk-development/114-import-export-vdi.html
participants (3)
- Doron Fediuck
- Sam McLeod
- Yaniv Kaul