[Users] Ovirt 3.3 removing disk failure
by Saša Friedrich
When I try to remove a virtual disk from the oVirt engine I get the error "User
admin@internal finished removing disk test_vm with storage failure in
domain DATA_DOMAIN."
The VM itself was running fine with no errors.
DATA_DOMAIN is a GlusterFS replicated volume (on the oVirt host).
oVirt engine machine (fc19):
ovirt-engine.noarch 3.3.0.1-1.fc19
oVirt host (fc19):
vdsm.x86_64 4.12.1-4.fc19
vdsm-gluster.noarch 4.12.1-4.fc19
glusterfs-server.x86_64 3.4.1-1.fc19
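In case it helps narrow this down, these are the places I can pull logs from (a sketch only; the log paths are the Fedora defaults and DATA_DOMAIN is the name I gave the Gluster volume):
# on the oVirt host, look for the failing delete task in the vdsm log
grep -iE 'deleteImage|Traceback' /var/log/vdsm/vdsm.log | tail -n 50
# on the engine, the matching failure should be in the engine log
grep -i 'removing disk' /var/log/ovirt-engine/engine.log | tail -n 20
# and check that the replicated volume itself is healthy
gluster volume status DATA_DOMAIN
gluster volume heal DATA_DOMAIN info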
Thanks for the help
11 years
[Users] Info on snapshot removal operations and final disk format
by Gianluca Cecchi
Hello,
I'm on oVirt 3.2.3-1 on a Fedora 18 all-in-one test server.
I have a Windows XP VM that has a disk in qcow2 format and has a snapshot on it.
When I delete the snapshot I see, from the commands I intercepted, that
the final effect is to have a raw (aka preallocated) disk... is this
correct and always true?
Does this mean that even if I create a VM with thin provisioned disks,
as soon as I take at least one snapshot and then delete it, I only
have raw disks?
Or am I missing something?
This is what I observed:
As soon as I launch the delete snapshot operation, the new disk is created
in raw format:
vdsm 30805 1732 6 13:24 ? 00:00:00 /usr/bin/dd
if=/dev/zero of=/rhev/data-center/65c9777e-23f1-4f04-8cea-e7c8871dc88b/0a8035e6-e41d-40ff-a154-e0a374f264b2/images/75c54716-5222-4ad6-91f2-8b312eacc4b4/d4fa7785-8a89-4d13-9082-52556ab0b326_MERGE
bs=1048576 seek=0 skip=0 conv=notrunc count=11264 oflag=direct
After about 5 minutes, a convert from qcow2 format to raw format:
vdsm 31287 1732 7 13:29 ? 00:00:08 /usr/bin/qemu-img
convert -t none -f qcow2
/rhev/data-center/65c9777e-23f1-4f04-8cea-e7c8871dc88b/0a8035e6-e41d-40ff-a154-e0a374f264b2/images/75c54716-5222-4ad6-91f2-8b312eacc4b4/d4fa7785-8a89-4d13-9082-52556ab0b326
-O raw /rhev/data-center/65c9777e-23f1-4f04-8cea-e7c8871dc88b/0a8035e6-e41d-40ff-a154-e0a374f264b2/images/75c54716-5222-4ad6-91f2-8b312eacc4b4/d4fa7785-8a89-4d13-9082-52556ab0b326_MERGE
At the end there is probably a rename of the disk file, and:
qemu-img info /rhev/data-center/65c9777e-23f1-4f04-8cea-e7c8871dc88b/0a8035e6-e41d-40ff-a154-e0a374f264b2/images/75c54716-5222-4ad6-91f2-8b312eacc4b4/d4fa7785-8a89-4d13-9082-52556ab0b326
image: /rhev/data-center/65c9777e-23f1-4f04-8cea-e7c8871dc88b/0a8035e6-e41d-40ff-a154-e0a374f264b2/images/75c54716-5222-4ad6-91f2-8b312eacc4b4/d4fa7785-8a89-4d13-9082-52556ab0b326
file format: raw
virtual size: 11G (11811160064 bytes)
disk size: 9.5G
# ll /rhev/data-center/65c9777e-23f1-4f04-8cea-e7c8871dc88b/0a8035e6-e41d-40ff-a154-e0a374f264b2/images/75c54716-5222-4ad6-91f2-8b312eacc4b4/
total 9995476
-rw-rw----. 1 vdsm kvm     1048576 Nov 16 12:09 6ac73ee2-6419-43a4-91e7-7d4ef2026943_MERGE.lease
-rw-rw----. 1 vdsm kvm 11811160064 Nov 16 13:32 d4fa7785-8a89-4d13-9082-52556ab0b326
-rw-rw----. 1 vdsm kvm     1048576 Mar 23  2013 d4fa7785-8a89-4d13-9082-52556ab0b326.lease
-rw-rw----. 1 vdsm kvm     1048576 Nov 16 13:29 d4fa7785-8a89-4d13-9082-52556ab0b326_MERGE.lease
-rw-r--r--. 1 vdsm kvm         274 Nov 16 13:29 d4fa7785-8a89-4d13-9082-52556ab0b326.meta
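To double check whether the resulting raw file at least stays sparse, one can compare allocated vs apparent size (a sketch, using the same image path; du reports the blocks actually allocated, while --apparent-size and qemu-img report the virtual size):
cd /rhev/data-center/65c9777e-23f1-4f04-8cea-e7c8871dc88b/0a8035e6-e41d-40ff-a154-e0a374f264b2/images/75c54716-5222-4ad6-91f2-8b312eacc4b4/
du -h d4fa7785-8a89-4d13-9082-52556ab0b326                  # space actually allocated
du -h --apparent-size d4fa7785-8a89-4d13-9082-52556ab0b326  # virtual size of the file
qemu-img info d4fa7785-8a89-4d13-9082-52556ab0b326          # 'disk size' vs 'virtual size' shows the same comparison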
Thanks
Gianluca
11 years
Re: [Users] Users Digest, Vol 26, Issue 72
by Ryan Barry
Without knowing how the disks are split among the controllers, I don't want
to make any assumptions about how shared it actually is, since it may be
half and half with no multipathing.
While a multi-controller DAS array *may* be shared storage, it may not be.
Moreover, I have no idea whether VDSM looks at by-path, by-bus, dm-*, or
otherwise, and there are no guarantees that a SAS disk will present like an
FC LUN (by-path/pci...-fc-$wwn...), whereas OCFS on a POSIXFS domain is assured to
work, albeit with a more complex setup and another intermediary layer.
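For what it's worth, a quick way to see how the array actually presents on a host (a rough sketch; device names will vary):
ls -l /dev/disk/by-path/ | grep -i sas   # SAS LUNs typically show up as ...-sas-... rather than ...-fc-...
ls -l /dev/disk/by-id/ | grep scsi-      # stable WWN-based names
multipath -ll                            # shows whether both controller paths are seen and grouped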
On Nov 17, 2013 10:00 AM, <users-request(a)ovirt.org> wrote:
> Message: 1
> Date: Sat, 16 Nov 2013 21:39:35 -0500
> From: Jeff Bailey <bailey(a)cs.kent.edu>
> To: users(a)ovirt.org
> Subject: Re: [Users] oVirt and SAS shared storage??
>
>
> On 11/16/2013 9:22 AM, Ryan Barry wrote:
> >
> > Unfortunately, I didn't get a reply to my question. So... let's try
> > again.
> >
> > Does oVirt support SAS shared storage (e.g. MSA2000sa) as a
> > storage domain?
> > If yes, what kind of storage domain do I have to choose at setup time?
> >
> > SAS is a bus which implements the SCSI protocol in a point-to-point
> > fashion. The array you have is the effective equivalent of attaching
> > additional hard drives directly to your computer.
> >
> > It is not necessarily faster than iSCSI or Fiber Channel; almost any
> > nearline storage these days will be SAS, almost all the SANs in
> > production, and most of the tiered storage as well (because SAS
> > supports SATA drives). I'm not even sure if NetApp uses FC-AL drives
> > in their arrays anymore. I think they're all SAS, but don't quote me
> > on that.
> >
> > What differentiates a SAN (iSCSI or Fiber Channel) from a NAS is that
> > a SAN presents raw devices over a fabric or switched medium rather
> > than point-to-point (point-to-point Fiber Channel still happens, but
> > it's easier to assume that it doesn't for the sake of argument). A NAS
> > presents network file systems (CIFS, GlusterFS, Lustre, NFS, Ceph,
> > whatever), though this also gets complicated when you start talking
> > about distributed clustered network file systems.
> >
> > Anyway, what you have is neither of these. It's directly-attached
> > storage. It may work, but it's an unsupported configuration, and is
> > only shared storage in the sense that it has multiple controllers. If
> > I were going to configure it for oVirt, I would:
> >
>
> It's shared storage in every sense of the word. I would simply use an
> FC domain and choose the LUNs as usual.
>
> > Attach it to a 3rd server and export iSCSI LUNs from it
> > Attach it to a 3rd server and export NFS from it
> > Attach it to multiple CentOS/Fedora servers, configure clustering (so
> > you get fencing, a DLM, and the other requisites of a clustered
> > filesystem), and use raw cLVM block devices or GFS2/OCFS filesystems
> > as POSIXFS storage for oVirt.
> >
>
> These would be terrible choices for both performance and reliability.
> It's exactly the same as fronting an FC LUN would be with all of that
> crud when you could simply access the LUN directly. If the array port
> count is a problem then just toss an SAS switch in between and you have
> an all SAS equivalent of a Fibre Channel SAN. This is exactly what we
> do in production vSphere environments and there are no technical reasons
> it shouldn't work fine with oVirt.
>
> > Thank you for your help
> >
> > Hans-Joachim
> >
> >
> > Hans
> >
> > --
> > while (!asleep) { sheep++; }
> >
11 years
Re: [Users] oVirt and SAS shared storage??
by Ryan Barry
>
> Unfortunately, I didn't get a reply to my question. So... let's try again.
>
> Does oVirt support SAS shared storage (e.g. MSA2000sa) as a storage
> domain?
> If yes, what kind of storage domain do I have to choose at setup time?
>
SAS is a bus which implements the SCSI protocol in a point-to-point
fashion. The array you have is the effective equivalent of attaching
additional hard drives directly to your computer.
It is not necessarily faster than iSCSI or Fiber Channel; almost any
nearline storage these days will be SAS, almost all the SANs in production,
and most of the tiered storage as well (because SAS supports SATA drives).
I'm not even sure if NetApp uses FC-AL drives in their arrays anymore. I
think they're all SAS, but don't quote me on that.
What differentiates a SAN (iSCSI or Fiber Channel) from a NAS is that a SAN
presents raw devices over a fabric or switched medium rather than
point-to-point (point-to-point Fiber Channel still happens, but it's easier
to assume that it doesn't for the sake of argument). A NAS presents network
file systems (CIFS, GlusterFS, Lustre, NFS, Ceph, whatever), though this
also gets complicated when you start talking about distributed clustered
network file systems.
Anyway, what you have is neither of these. It's directly-attached storage.
It may work, but it's an unsupported configuration, and is only shared
storage in the sense that it has multiple controllers. If I were going to
configure it for oVirt, I would:
Attach it to a 3rd server and export iSCSI LUNs from it
Attach it to a 3rd server and export NFS from it (see the sketch after this list)
Attach it to multiple CentOS/Fedora servers, configure clustering (so you
get fencing, a DLM, and the other requisites of a clustered filesystem),
and use raw cLVM block devices or GFS2/OCFS filesystems as POSIXFS storage
for oVirt.
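As a concrete illustration of the NFS option (a rough sketch only; the export path is
made up, and oVirt expects the export to be owned by uid/gid 36:36, i.e. vdsm:kvm):
# on the third server, with the array LUN mounted at /exports/ovirt-data
mkdir -p /exports/ovirt-data
chown 36:36 /exports/ovirt-data
echo '/exports/ovirt-data *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
exportfs -ra
# then add it in the engine as an NFS data domain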
> Thank you for your help
>
> Hans-Joachim
>
Hans
--
while (!asleep) { sheep++; }
11 years
[Users] oVirt and SAS shared storage??
by Hans-Joachim
Hello,
Unfortunately, I didn't get a reply to my question. So... let's try again.
Does oVirt support SAS shared storage (e.g. MSA2000sa) as a storage domain?
If yes, what kind of storage domain do I have to choose at setup time?
Thank you for your help
Hans-Joachim
11 years
[Users] Low quality of el6 vdsm rpms
by Patrick Hurrelmann
Hi all,
sorry for this rant, but...
I have now tried several times to test the beta 3.3.1 rpms, but most of the
time they cannot even be installed. One time they required a future
selinux-policy version, although the needed selinux fix was delivered in a
much lower version. Now the rpms have a broken requirement: they require
"hostname" instead of "/bin/hostname". This broken requirement is not
in the vdsm 3.3 branch, so I wonder where it comes from?
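For reference, the broken dependency is easy to confirm against the published rpm (a sketch; use whatever beta package was downloaded):
rpm -qp --requires vdsm-*.rpm | grep -i hostname   # shows the bare 'hostname' requirement
repoquery --whatprovides hostname       # on el6 this apparently resolves to nothing
repoquery --whatprovides /bin/hostname  # the file dependency resolves fine (net-tools)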
Anyway, I proceeded and tried to build vdsm myself once again.
Currently the build fails with the following (it worked fine some days ago):
/usr/bin/pep8 --exclude="config.py,constants.py" --filename '*.py,*.py.in' \
client lib/cpopen/*.py lib/vdsm/*.py lib/vdsm/*.py.in tests
vds_bootstrap vdsm-tool vdsm/*.py vdsm/*.py.in vdsm/netconf
vdsm/sos/vdsm.py.in vdsm/storage vdsm/vdsm vdsm_api vdsm_hooks vdsm_reg
vdsm/storage/imageRepository/formatConverter.py:280:29: E128
continuation line under-indented for visual indent
- How can the quality of the vdsm builds be increased? It is frustrating
to spend time on testing when the hosts cannot even be installed because of
broken vdsm rpms.
- How are the builds prepared? Is there a Jenkins job that prepares
"stable" rpms in addition to the nightly job? Or is this totally
handcrafted?
- How can it be that the rpm spec differs between the 3.3 branch and the
released rpms? What is the source/branch for the el6 vdsm rpms? Maybe I'm
just tracking the wrong source tree...
Thanks and regards
Patrick
--
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg
HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
11 years
[Users] EL5 support for VirtIO SCSI?
by Paul Jansen
I have just set up an oVirt 3.3.0 install and have done a test install of
CentOS 6.4 in a VM. The VM was configured with an IDE drive and a
virtio-scsi drive. The CentOS 6.4 install sees both drives OK.
I'm wanting to do some testing on a product that is based on EL5, but I'm
finding that it cannot see the virtio-scsi drive. It does show up in the
output of 'lspci', but I don't see a corresponding 'sd' device.
I've just tried installing CentOS 5.10 and the support is not there.
Does anyone know of any tricks to allow EL5 to see the virtio-scsi device?
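For what it's worth, these are the checks that can be run inside the EL5 guest (a sketch):
lspci | grep -i virtio                          # the virtio-scsi controller is visible on the PCI bus
lsmod | grep virtio                             # which virtio modules are actually loaded
find /lib/modules/$(uname -r) -name 'virtio*'   # is a virtio_scsi.ko shipped with the kernel at all?
modprobe virtio_scsi                            # fails if the driver simply is not there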
11 years
Re: [Users] from irc
by Bob Doolittle
Hi,
I had a question on IRC regarding sysprep/sealing of Windows VMs and use
in Pools. Basically, if you follow the Quick Start Guide, it says to
seal the VM and shut it down before making the Template.
My problem with this is that when you start a VM from the Pool, it takes
forever to unseal - i.e. to repersonalize itself. That's a bad
experience from a VDI perspective - you want the user to get a desktop
they can start using ASAP.
Itamar responded to me directly via e-mail:
> bobdrad: on your question of windows VMs from pool - you can start
> them once with an admin for the sysprep to happen, then shut them down.
> admin launch of VMs doesn't create a stateless snapshot and
> manipulates the VM itself.
This raises some questions. I'd love to understand this better.
He has asked me to move this conversation onto the Users list now.
1. My understanding is that a Pool clones VMs on demand from a template.
So how does the admin "launch" the template? I thought the only way to
exercise a pool is from the User Portal. Is it sufficient to do that as
Admin? I thought the persistence only came when launching a VM from the
Admin Portal.
2. My understanding of "sealing" a system is that this depersonalizes it
- e.g. removes hostname, prepares network for reinitialization, etc. And
that the next time the system boots up it re-personalizes. So if one
were to restart it, even as admin, this would reverse the sealing
process, which would seem to make sealing in the first place pointless.
What am I missing? At the moment I don't see the point of sealing a VM
before putting it into the Pool (assuming you're using DHCP, anyway).
What happens if you don't?
Thanks,
Bob
P.S. I note the behavior of Fedora vs RHEL 6 is quite different in this
regard. If you follow the "sealing" process on the Quick Start page for
Fedora it seems to have no visible effect, but on RHEL 6 it puts you
through a re-personalization dialog which is rather extensive (and
again, not really suitable for VDI use).
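For context, the sealing I'm talking about boils down to roughly the following on an EL6 guest (my paraphrase of the Quick Start page, not a literal quote, so treat it as a sketch):
rm -f /etc/ssh/ssh_host_*                                        # host keys get regenerated per clone
rm -f /etc/udev/rules.d/70-persistent-net.rules                  # forget the MAC-to-eth0 binding
sed -i '/^HWADDR=/d' /etc/sysconfig/network-scripts/ifcfg-eth0   # same idea for the ifcfg file
touch /.unconfigured                                             # this triggers the re-personalization dialog on next boot
poweroff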
11 years
[Users] Change data domain
by Juan Pablo Lorier
Hi,
I'm changing the master domain to a newly added one. I placed the old
one in maintenance and oVirt moved the master role to the new one. The
thing is that now that both are active, I get this error:
Sync Error on Master Domain between Host Dell2 and oVirt Engine. Domain:
IBM3300 is marked as Master in oVirt Engine database but not on the
Storage side. Please consult with Support on how to fix this issue.
How can I fix this?
Another thing: is there a way of having two iSCSI data domains mirrored,
to get a sort of high availability for the data domains?
Regards,
11 years