Thanks Darryl!
For others having the same problem, here is a bit more detail on how
to fix it temporarily:
On the Engine VM you can find the credentials to access the engine
database in /etc/ovirt-engine/engine.conf.d/10-setup-database.conf.
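The entries in that file look roughly like this (a sketch from memory,
so the exact key names may differ slightly; your values will certainly
differ):

  ENGINE_DB_HOST="localhost"
  ENGINE_DB_PORT="5432"
  ENGINE_DB_USER="engine"
  ENGINE_DB_PASSWORD="..."
  ENGINE_DB_DATABASE="engine"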
Then connect to the engine database and update the vfs_type field of
the hosted-engine storage volume's entry in the
storage_server_connections table:

psql -U engine -W -h localhost
select * from storage_server_connections;
update storage_server_connections set vfs_type = 'glusterfs' where
id = 'THE_ID_YOU_FOUND_IN_THE_OUTPUT_ABOVE_FOR_THE_ENGINE_VOLUME';
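The same check and update can also be run non-interactively (just a
sketch of the same two statements; replace the id with the one from
your own output):

psql -U engine -W -h localhost -c "select id, connection, vfs_type from storage_server_connections;"
psql -U engine -W -h localhost -c "update storage_server_connections set vfs_type = 'glusterfs' where id = 'THE_ID_YOU_FOUND_IN_THE_OUTPUT_ABOVE_FOR_THE_ENGINE_VOLUME';"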
After that, adding new hosts works as expected.
Cheers
Richard
On 04/08/2016 12:43 AM, Bond, Darryl wrote:
The workaround for this bug is here:
https://bugzilla.redhat.com/show_bug.cgi?id=1317699
________________________________________
From: users-bounces(a)ovirt.org <users-bounces(a)ovirt.org> on behalf of Simone Tiraboschi <stirabos(a)redhat.com>
Sent: Friday, 8 April 2016 1:30 AM
To: Richard Neuboeck; Roy Golan
Cc: users
Subject: Re: [ovirt-users] Can not access storage domain hosted_storage
On Thu, Apr 7, 2016 at 4:17 PM, Richard Neuboeck <hawk(a)tbi.univie.ac.at> wrote:
> Hi oVirt Users/Developers,
>
> I'm having trouble adding another host to a working hosted-engine
> setup. When I try to add another host through the WebUI, the package
> installation and configuration processes seemingly run without
> problems, but when the second host tries to mount the engine storage
> volume it halts and the WebUI shows the following message:
>
> 'Failed to connect Host cube-two to the Storage Domain hosted_engine'
>
> The mount fails, which results in the host status 'non operational'.
>
> Checking the vdsm.log on the newly added host shows that the mount
> attempt of the engine volume doesn't use -t glusterfs. On the other
> hand the VM storage volume (also a glusterfs volume) is mounted the
> right way.
>
> It seems the Engine configuration that is given to the second host
> lacks the vfs_type property, so without glusterfs given as the
> filesystem type the system assumes an NFS mount and obviously fails.
It seems that the auto-import procedure in the engine didn't recognize
that the hosted-engine storage domain was on gluster and took it for
NFS.
Adding Roy here to take a look.
> Here are the relevant log lines showing the JSON reply to the
> configuration request, the working mount of the VM storage (called
> plexus) and the failing mount of the engine storage.
>
> ...
> jsonrpc.Executor/4::INFO::2016-04-07
> 15:45:53,043::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID=u'00000001-0001-0001-0001-0000000003ce', conList=[{u'id':
> u'981cd3aa-052b-498a-914e-5e8f314357a8', u'connection':
> u'borg-sphere-one:/plexus', u'iqn': u'', u'user': u'', u'tpgt':
> u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port':
> u''}, {u'id': u'cceaa988-9607-4bef-8854-0e7a585720aa',
> u'connection': u'borg-sphere-one:/engine', u'iqn': u'', u'user':
> u'', u'tpgt': u'1', u'password': '********', u'port': u''}],
> options=None)
> ...
> jsonrpc.Executor/4::DEBUG::2016-04-07
> 15:45:53,062::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -t glusterfs -o
> backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/plexus
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_plexus (cwd None)
> ...
> jsonrpc.Executor/4::DEBUG::2016-04-07
> 15:45:53,380::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -o backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/engine
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine (cwd None)
> ...
>
> The problem seems to have been introduced since March 22nd. On that
> install I had added two additional hosts without problems. Three
> days ago I tried to reinstall the whole system for testing and
> documentation purposes, but now I am not able to add other hosts.
>
> All the installs follow the same documented procedure. I've verified
> several times that the problem exists with the components in the
> current 3.6 release repo as well as in the 3.6 snapshot repo.
>
> If I check the storage configuration of the hosted_engine domain in
> the WebUI it shows glusterfs as the VFS type.
>
> The initial mount during the hosted engine setup on the first host
> shows the correct parameters (vfs_type) in vdsm.log:
>
> Thread-42::INFO::2016-04-07
> 14:56:29,464::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'id':
> 'b13ae31f-d66a-43a7-8aba-eaf4e62a6fb0', 'tpgt': '1', 'vfs_type':
> 'glusterfs', 'connection': 'borg-sphere-one:/engine', 'user':
> 'kvm'}], options=None)
> Thread-42::DEBUG::2016-04-07
> 14:56:29,591::fileUtils::143::Storage.fileUtils::(createdir)
> Creating directory:
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine mode: None
> Thread-42::DEBUG::2016-04-07
> 14:56:29,592::storageServer::364::Storage.StorageServer.MountConnection::(_get_backup_servers_option)
> Using bricks: ['borg-sphere-one', 'borg-sphere-two',
> 'borg-sphere-three']
> Thread-42::DEBUG::2016-04-07
> 14:56:29,592::mount::229::Storage.Misc.excCmd::(_runcmd)
> /usr/bin/taskset --cpu-list 0-39 /usr/bin/sudo -n
> /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -t glusterfs -o
> backup-volfile-servers=borg-sphere-two:borg-sphere-three
> borg-sphere-one:/engine
> /rhev/data-center/mnt/glusterSD/borg-sphere-one:_engine (cwd None)
>
>
> I've already created a bug report, but since I didn't know where to
> put it I filed it as a VDSM bug, which it doesn't seem to be:
> https://bugzilla.redhat.com/show_bug.cgi?id=1324075
>
>
> I would really like to help resolve this problem. If there is
> anything I can test, please let me know. I appreciate any help in
> this matter.
>
> Currently I'm running an oVirt 3.6 snapshot installation on CentOS
> 7.2. The two storage volumes are both replica 3 on separate gluster
> storage nodes.
>
> Thanks in advance!
> Richard
>
> --
> /dev/null
>
>
--
/dev/null