Unable to backend oVirt with Cinder
by Logan Kuhn
I've got Cinder configured and pointed at Ceph for its backend storage. I can run ceph commands on the cinder machine, cinder is configured for noauth, and I've also tried it with Keystone for auth. I can run various cinder commands and they return as expected.
When I configure it in oVirt, it adds the external provider fine, but when I go to create a disk it doesn't populate the volume type field; it's just empty. The corresponding cinder commands, cinder type-list and cinder type-show <name>, return fine, and the type is public.
oVirt and Cinder are on the same host, so it isn't a firewall issue.
Cinder config:
[DEFAULT]
rpc_backend = rabbit
#auth_strategy = keystone
auth_strategy = noauth
enabled_backends = ceph
#glance_api_servers = http://10.128.7.252:9292
#glance_api_version = 2
#[keystone_authtoken]
#auth_uri = http://10.128.7.252:5000/v3
#auth_url = http://10.128.7.252:35357/v3
#auth_type = password
#memcached_servers = localhost:11211
#project_domain_name = default
#user_domain_name = default
#project_name = services
#username = user
#password = pass
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = ovirt-images
rbd_user = cinder
rbd_secret_uuid = <secret>
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
#glance_api_version = 2
[database]
connection = postgresql://user:pass@10.128.2.33/cinder
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_rabbit]
rabbit_host = localhost
rabbit_port = 5672
rabbit_userid = user
rabbit_password = pass
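For completeness, the type checks on the cinder host look roughly like this (the type name "ceph" below is only an example, not necessarily what I used):
cinder type-list
cinder type-show ceph
# a type is normally tied to the [ceph] backend through an extra spec, e.g.:
# cinder type-key ceph set volume_backend_name=ceph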
Regards,
Logan
Mailing-Lists upgrade
by Marc Dequènes (Duck)
Quack,
On behalf of the oVirt infra team, I'd like to announce that the current
mailing-list system is going to be upgraded to a brand new Mailman 3
installation on Monday, during the 11:00-12:00 JST slot.
It should not take a full hour to migrate, as we have already done incremental
synchronization with the current system, but it is better to keep some margin.
The new system will then take over delivery of the mails, but it might be a bit
slow at first, as it needs to reindex all the archived mails (which might take
a few hours).
You can manage your subscriptions and delivery settings easily on the much
nicer web interface (https://lists.ovirt.org). There is a notion of an account,
so you don't need to log in separately for each mailing list.
You can sign in using Fedora, GitHub or Google, or create a local account
if you prefer. Please keep in mind that signing in with a different method
creates a separate account (accounts cannot be merged at the moment).
But you can easily link your account to other authentication methods in
your settings (click on your name in the top-right corner -> Account ->
Account Connections).
As for the original mail archives: because the previous system did not
have stable URLs, we cannot create mappings to the new pages. We decided
to keep the old archives around at the same URL (/pipermail), so existing
links on the Internet will still work fine.
We hope you will be happy with the new system.
\_o<
vdsClient is removed and replaced by vdsm-client
by Irit Goihman
Hi All,
vdsClient will be removed from the master branch today.
It uses the XML-RPC protocol, which has been deprecated and replaced by
JSON-RPC.
A new client for vdsm was introduced in 4.1: vdsm-client.
This is a simple client that uses the JSON-RPC protocol introduced in
oVirt 3.5.
The client is not aware of the available methods and parameters, so you
should consult the schema [1] in order to construct the desired command.
A future version should parse the schema and provide online help.
If you're using vdsClient, we will be happy to assist you in migrating to
the new vdsm client.
*vdsm-client usage:*
vdsm-client [-h] [-a ADDRESS] [-p PORT] [--unsecure] [--timeout TIMEOUT]
            [-f FILE] namespace method [name=value [name=value] ...]
Invoking simple methods:
# vdsm-client Host getVMList
['b3f6fa00-b315-4ad4-8108-f73da817b5c5']
For invoking methods with many or complex parameters, you can read
the parameters from a JSON format file:
# vdsm-client Lease info -f lease.json
where lease.json file content is:
{
    "lease": {
        "sd_id": "75ab40e3-06b1-4a54-a825-2df7a40b93b2",
        "lease_id": "b3f6fa00-b315-4ad4-8108-f73da817b5c5"
    }
}
It is also possible to read parameters from standard input, creating
complex parameters interactively:
# cat <<EOF | vdsm-client Lease info -f -
{
    "lease": {
        "sd_id": "75ab40e3-06b1-4a54-a825-2df7a40b93b2",
        "lease_id": "b3f6fa00-b315-4ad4-8108-f73da817b5c5"
    }
}
EOF
*Constructing a command from vdsm schema:*
Let's take VM.getStats as an example.
This is the entry in the schema:
VM.getStats:
    added: '3.1'
    description: Get statistics about a running virtual machine.
    params:
    -   description: The UUID of the VM
        name: vmID
        type: *UUID
    return:
        description: An array containing a single VmStats record
        type:
        - *VmStats
From this entry: the namespace is VM, the method name is getStats, and the parameter is vmID.
The vdsm-client command is:
# vdsm-client VM getStats vmID=b3f6fa00-b315-4ad4-8108-f73da817b5c5
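The parameters can presumably also be read from a file or standard input, as with the Lease example above (untested sketch):
# cat <<EOF | vdsm-client VM getStats -f -
{
    "vmID": "b3f6fa00-b315-4ad4-8108-f73da817b5c5"
}
EOF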
*Invoking getVdsCaps command:*
# vdsm-client Host getCapabilities
Please consult vdsm-client help and man page for further details and
options.
[1] https://github.com/oVirt/vdsm/blob/master/lib/api/vdsm-api.yml
--
Irit Goihman
Software Engineer
Red Hat Israel Ltd.
Passing VLAN trunk to VM
by Simon Vincent
Is it possible to pass multiple VLANs to a VM (pfSense) using a single
virtual NIC? All my existing oVirt networks are set up as a single tagged
VLAN. I know this didn't use to be supported, but I wondered if this has
changed. My other option is to pass each VLAN as a separate NIC to the VM;
however, if I needed to add a new VLAN I would have to add a new interface
and reboot the VM, as hot-add of NICs is not supported by pfSense.
HostedEngine with HA
by Carlos Rodrigues
Hello,
I have one cluster with two hosts with power management correctly
configured, and one virtual machine with HostedEngine over shared
storage with Fibre Channel.
When I shut down the network of the host running the HostedEngine VM, should
the HostedEngine VM migrate automatically to another host?
What is the expected behaviour in this HA scenario?
Regards,
--
Carlos Rodrigues
Senior Software Engineer
Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110
Very Slow Console Performance - Windows 10
by FERNANDO FREDIANI
Hello
Has anyone installed a Windows 10 virtual machine?
I am having serious console performance issues, even after installing the
Red Hat QXL driver from the virtio-win ISO.
Someone in a forum reported similar issues and resolved them by
increasing the graphics card memory to 65536 by editing the XML (example
below), but how can that be made permanent in oVirt?
<video>
  <model type='qxl' ram='131072' vram='131072' vgamem='65536' heads='1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
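Would a vdsm hook along these lines be the right way to make it permanent? This is only an untested sketch; as far as I understand, vdsm exposes the path of the libvirt domain XML to hooks via the _hook_domxml environment variable, and the script name below is just an example.
#!/bin/bash
# /usr/libexec/vdsm/hooks/before_vm_start/50_qxl_vgamem  (example name, untested)
# Bump the QXL vgamem attribute in the libvirt domain XML before the VM starts.
sed -i "s/vgamem='[0-9]*'/vgamem='65536'/" "$_hook_domxml"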
Thanks
Fernando
oVIRT 4.1 / iSCSI Multipathing
by Devin Acosta
I am using the latest release of oVirt 4.1.3, and I am connecting a Dell
Compellent SAN that has 2 fault domains, each on a separate VLAN that I have
attached to oVirt. From what I understand, I am supposed to go into the "iSCSI
Multipathing" option and add a bond of the iSCSI interfaces. I have done
this, selecting the 2 logical networks together for iSCSI. I notice that
there is an option below to select storage targets, but if I select the
storage targets together with the logical networks, the cluster goes crazy
and appears to be mad. Storage, nodes, and everything go offline, even
though I also have NFS attached to the cluster.
How should this best be configured? What we notice is that when the
server reboots it seems to log into the SAN correctly, but according to the
Dell SAN it is only logged into one controller, so it only pulls both fault
domains from a single controller.
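For reference, the host-side state can be double-checked with the plain iSCSI/multipath tools (nothing oVirt-specific):
# iscsiadm -m session -P 1    # list iSCSI sessions and the portals they are logged into
# multipath -ll               # show the paths behind each multipath device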
Please Advise.
Devin
when creating VMs, I don't want hosted_storage to be an option
by Mike Farnam
Hi All - Is there a way to mark hosted_storage somehow so that it's not available to add new VMs to? Right now it's the default storage domain when adding a VM. At the least, I'd like to make another storage domain the default.
Is there a way to do this?
Thanks
qemu-kvm images corruption
by Nicolas Ecarnot
TL;DR:
How to avoid image corruption?
Hello,
On two of our old 3.6 DCs, a recent series of VM migrations led to some
issues:
- I'm putting a host into maintenance mode
- most of the VMs are migrating nicely
- one remaining VM never migrates, and the logs are showing:
  * engine.log: "...VM has been paused due to I/O error..."
  * vdsm.log: "...Improbable extension request for volume..."
After digging amongst the RH BZ tickets, I saved the day with the following steps (sketched as concrete commands just after this list):
- stopping the VM
- lvchange -ay the adequate /dev/...
- qemu-img check [-r all] /rhev/blahblah
- lvchange -an...
- boot the VM
- enjoy!
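In concrete terms, the sequence looked roughly like this (all paths and names below are placeholders, not the real ones):
lvchange -ay /dev/<vg>/<lv_of_the_image>
qemu-img check -r all /rhev/<path_to_the_image>
lvchange -an /dev/<vg>/<lv_of_the_image>
# then boot the VM again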
Yesterday this worked for a VM where only one error occurred on the qemu
image, and the repair was easily done by qemu-img.
Today, facing the same issue on another VM, it failed because the errors
were very numerous, and also because of this message:
[...]
Rebuilding refcount structure
ERROR writing refblock: No space left on device
qemu-img: Check failed: No space left on device
[...]
The PV/VG/LV are far from full, so I don't know where to look.
I tried many ways to solve it, but I'm not comfortable at all with qemu
images, corruption and repair, so I ended up exporting this VM (to an
NFS export domain) and importing it into another DC: this had the side
effect of running qemu-img convert from qcow2 to qcow2, and (maybe?) of
fixing some errors.
I also copied it into another qcow2 file with the same qemu-img convert
approach, and it also produced a clean qcow2 image without errors.
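In other words, the manual equivalent of what the export/import did is something like this (placeholder paths again):
qemu-img convert -p -O qcow2 /rhev/<path_to_original_image> /export/<copy>.qcow2
qemu-img check /export/<copy>.qcow2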
I saw that some bugs about VM migrations are fixed in 4.x, but that is
not the point here.
I checked my SANs, my network layers, my blades, and the OS (CentOS 7.2) of
my hosts, but I see nothing special.
The real reason behind my message is not to learn how to repair anything,
but rather to understand what could have led to this situation.
Where should I keep a keen eye?
--
Nicolas ECARNOT
Official Hyperconverged Gluster oVirt upgrade procedure?
by Hanson
Hi Guys,
Just wondering if we have an updated manual, or what the current
procedure is for upgrading the nodes in a hyperconverged oVirt/Gluster pool.
I.e. nodes run oVirt 4.0 as well as GlusterFS, with the hosted engine running
in a Gluster storage domain.
Put the node in maintenance mode, disable glusterfs from the oVirt GUI, and run
yum update?
Thanks!