Not enough disk space on the engine LUN to create the configuration volume?
by gflwqs gflwqs
Hi list,
When I try to upgrade hosts from 3.5.6 to 3.6.7, I get these errors in
agent.log:
MainThread::ERROR::2017-01-24
15:17:58,981::upgrade::207::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(_is_conf_volume_there)
Unable to find HE conf volume
MainThread::INFO::2017-01-24
15:17:58,981::upgrade::262::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(_create_shared_conf_volume)
Creating hosted-engine configuration volume on the shared storage domain
MainThread::DEBUG::2017-01-24
15:17:59,079::heconflib::358::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(create_and_prepare_image)
{'status': {'message': 'OK', 'code': 0}, 'uuid':
'df95e3fd-3715-4525-8955-abcc8cb24865'}
MainThread::DEBUG::2017-01-24
15:17:59,080::heconflib::372::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(create_and_prepare_image)
Created configuration volume OK, request was:
- image: e4d09ccf-5d9b-4f90-9eda-1ebd4f55ecbc
- volume: 9636163f-d634-4c28-b2bc-f84b5a90a17e
MainThread::DEBUG::2017-01-24
15:17:59,080::heconflib::283::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(task_wait)
Waiting for existing tasks to complete
MainThread::DEBUG::2017-01-24
15:18:00,137::heconflib::283::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(task_wait)
Waiting for existing tasks to complete
MainThread::DEBUG::2017-01-24
15:18:00,278::heconflib::379::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(create_and_prepare_image)
configuration volume: prepareImage
MainThread::DEBUG::2017-01-24
15:18:00,350::heconflib::387::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(create_and_prepare_image)
{'status': {'message': "Volume does not exist:
('9636163f-d634-4c28-b2bc-f84b5a90a17e',)", 'code': 201}
From vdsm.log:
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,187::task::752::Storage.TaskManager.Task::(_save)
Task=`df95e3fd-3715-4525-8955-abcc8cb24865`::_save: orig
/rhev/data-center/c202d58c-afc7-4848-8501-228d9bccb15d/mastersd/master/tasks/df95e3fd-3715-4525-8955-abcc8cb24865 temp
/rhev/data-center/c202d58c-afc7-4848-8501-228d9bccb15d/mastersd/master/tasks/df95e3fd-3715-4525-8955-abcc8cb24865.temp
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,201::task::752::Storage.TaskManager.Task::(_save)
Task=`df95e3fd-3715-4525-8955-abcc8cb24865`::_save: orig
/rhev/data-center/c202d58c-afc7-4848-8501-228d9bccb15d/mastersd/master/tasks/df95e3fd-3715-4525-8955-abcc8cb24865 temp
/rhev/data-center/c202d58c-afc7-4848-8501-228d9bccb15d/mastersd/master/tasks/df95e3fd-3715-4525-8955-abcc8cb24865.temp
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,217::lvm::290::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
--cpu-list 0-79 /usr/bin/sudo -n /usr/sbin/lvm lvcreate --config ' devices
{ preferred_names = ["^*/dev/mapper/*"] ignore_suspended_devices=1
write_cache_state=0 disable_after_error_count=3 filter = [
'\''a|/dev/mapper/2347b42194dacd3656c9ce900aea3dee2|'\'', '\''r|.*|'\'' ]
} global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1
use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } '
--autobackup n --contiguous n --size 1024m --addtag OVIRT_VOL_INITIALIZING --name
9636163f-d634-4c28-b2bc-f84b5a90a17e 8fa5a242-fcb4-454d-aaa1-23f8dd92373b
(cwd None)
Thread-28556::DEBUG::2017-01-24 15:17:59,289::utils::671::root::(execCmd)
/usr/bin/taskset --cpu-list 0-79 /usr/sbin/tc qdisc show (cwd None)
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,301::lvm::290::Storage.Misc.excCmd::(cmd) FAILED: <err> = '
WARNING: Not using lvmetad because config setting use_lvmetad=0.\n
WARNING: To avoid corruption, rescan devices to make changes visible
(pvscan --cache).\n Volume group "8fa5a242-fcb4-454d-aaa1-23f8dd92373b" has
insufficient free space (4 extents): 8 required.\n'; <rc> = 5
df95e3fd-3715-4525-8955-abcc8cb24865::ERROR::2017-01-24
15:17:59,303::volume::485::Storage.Volume::(create) Failed to create volume
/rhev/data-center/c202d58c-afc7-4848-8501-228d9bccb15d/8fa5a242-fcb4-454d-aaa1-23f8dd92373b/images/e4d09ccf-5d9b-4f90-9eda-1ebd4f55ecbc/9636163f-d634-4c28-b2bc-f84b5a90a17e: Cannot create Logical Volume:
('8fa5a242-fcb4-454d-aaa1-23f8dd92373b',
'9636163f-d634-4c28-b2bc-f84b5a90a17e')
df95e3fd-3715-4525-8955-abcc8cb24865::ERROR::2017-01-24
15:17:59,304::volume::521::Storage.Volume::(create) Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/volume.py", line 482, in create
initialSize=initialSize)
File "/usr/share/vdsm/storage/blockVolume.py", line 133, in _create
initialTag=TAG_VOL_UNINIT)
File "/usr/share/vdsm/storage/lvm.py", line 1096, in createLV
raise se.CannotCreateLogicalVolume(vgName, lvName)
CannotCreateLogicalVolume: Cannot create Logical Volume:
('8fa5a242-fcb4-454d-aaa1-23f8dd92373b',
'9636163f-d634-4c28-b2bc-f84b5a90a17e')
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,304::resourceManager::619::Storage.ResourceManager::(releaseResource)
Trying to release resource
'8fa5a242-fcb4-454d-aaa1-23f8dd92373b_imageNS.e4d09ccf-5d9b-4f90-9eda-1ebd4f55ecbc'
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,304::resourceManager::638::Storage.ResourceManager::(releaseResource)
Released resource
'8fa5a242-fcb4-454d-aaa1-23f8dd92373b_imageNS.e4d09ccf-5d9b-4f90-9eda-1ebd4f55ecbc'
(0 active users)
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,305::resourceManager::644::Storage.ResourceManager::(releaseResource)
Resource
'8fa5a242-fcb4-454d-aaa1-23f8dd92373b_imageNS.e4d09ccf-5d9b-4f90-9eda-1ebd4f55ecbc'
is free, finding out if anyone is waiting for it.
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,305::resourceManager::652::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'8fa5a242-fcb4-454d-aaa1-23f8dd92373b_imageNS.e4d09ccf-5d9b-4f90-9eda-1ebd4f55ecbc',
Clearing records.
df95e3fd-3715-4525-8955-abcc8cb24865::ERROR::2017-01-24
15:17:59,305::task::866::Storage.TaskManager.Task::(_setError)
Task=`df95e3fd-3715-4525-8955-abcc8cb24865`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/storage/task.py", line 332, in run
return self.cmd(*self.argslist, **self.argsdict)
File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
return method(self, *args, **kwargs)
File "/usr/share/vdsm/storage/sp.py", line 1934, in createVolume
initialSize=initialSize)
File "/usr/share/vdsm/storage/sd.py", line 488, in createVolume
initialSize=initialSize)
File "/usr/share/vdsm/storage/volume.py", line 482, in create
initialSize=initialSize)
File "/usr/share/vdsm/storage/blockVolume.py", line 133, in _create
initialTag=TAG_VOL_UNINIT)
File "/usr/share/vdsm/storage/lvm.py", line 1096, in createLV
raise se.CannotCreateLogicalVolume(vgName, lvName)
CannotCreateLogicalVolume: Cannot create Logical Volume:
('8fa5a242-fcb4-454d-aaa1-23f8dd92373b',
'9636163f-d634-4c28-b2bc-f84b5a90a17e')
It seems that there is not enough disk space on the engine LUN to create
the configuration volume:
# vgs
  VG                                   #PV #LV #SN Attr   VSize   VFree
  8fa5a242-fcb4-454d-aaa1-23f8dd92373b   1   9   0 wz--n- 104.62g 512.00m
What can we do to fix this?
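The lvcreate failure above asks for 8 extents (1024m, i.e. 1 GiB at the 128 MiB
extent size implied by "4 extents" = 512 MB of VFree) but only 4 are free. A
minimal shell sketch of how one might confirm the shortfall and regain space,
assuming the LUN can be grown on the storage array; the multipath WWID is taken
from the lvm filter in vdsm.log, and the rescan step depends on the transport
(iSCSI/FC), so treat this as a direction rather than an exact procedure:

# Confirm the shortfall reported by lvcreate: 8 extents required, 4 free.
VG=8fa5a242-fcb4-454d-aaa1-23f8dd92373b
vgs -o vg_name,vg_extent_size,vg_free_count,vg_free "$VG"
lvs --units m -o lv_name,lv_size,lv_tags "$VG"

# Grow the hosted-engine LUN on the array by at least 1 GiB, rescan the
# device so the host sees the new size, then extend the PV into it.
pvresize /dev/mapper/2347b42194dacd3656c9ce900aea3dee2
vgs "$VG"    # VFree should now be >= 1.00g before retrying the upgrade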
Manual VDSM install cannot start
by Matt .
Hi,
When I try to install VDSM manually and run the following required commands:
systemctl stop supervdsmd.service
systemctl stop libvirtd.service
vdsm-tool configure --module libvirt --force
systemctl start vdsmd.service
vdsmd is not able to start:
Jan 29 20:09:34 host-04 vdsm-tool: libvirt: XML-RPC error :
authentication failed: authentication failed
Jan 29 20:09:34 host-04 journal: authentication failed: authentication failed
Jan 29 20:09:34 host-04 journal: End of file while reading data:
Input/output error
Jan 29 20:09:34 host-04 journal: authentication failed: Failed to
start SASL negotiation: -20 (SASL(-13): user not found: unable to
canonify user and get auxprops)
What can cause this issue? I have seen earlier reports about a missing
passwd.py file, but I'm on 4.0.6, which should not have that issue
anymore.
I hope someone has some clue.
Thanks,
Matt
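The SASL error ("user not found: unable to canonify user") usually means the
SASL user vdsm authenticates with is missing from libvirt's SASL database,
which `vdsm-tool configure` is expected to (re)create. A rough sketch of how
one might check and redo the configuration, assuming the default EL7 sasldb
path and unit names (adjust if /etc/sasl2/libvirt.conf points elsewhere):

# Is the SASL user vdsm authenticates with present in libvirt's database?
sasldblistusers2 -f /etc/libvirt/passwd.db

# Re-run the full vdsm configuration (all modules, not only libvirt) so the
# SASL credentials and certificates are regenerated, then restart in order.
systemctl stop vdsmd supervdsmd libvirtd
vdsm-tool configure --force
systemctl start libvirtd supervdsmd vdsmd
journalctl -u libvirtd -u vdsmd --since "10 minutes ago"   # SASL error gone?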
Change the timezone on the dashboard
by Logan Kuhn
Is there a way to change the timezone for the dashboard plugin?
I've searched the list and the plugin files and can't seem to find where
it's set.
Logan
Can't export VMs
by Peter Calum
Hi,
I use oVirt Engine Version: 4.0.5.5-1.el7.centos
I have an export domain that I use for backing up the VMs with the
snapshot-clone-export method.
This has worked fine for a long time, but now I'm not able to export the
VMs.
I have tried to detach the export domain and attach it again with no luck.
I have attached a log where I first detach/attach the export domain, then
clone a snapshot, which goes OK, and then export the cloned VM, which
fails.
If I click on the Storage tab / my export domain and then on the VM Import
tab, the SPM fails shortly afterwards;
see attached event1.log
Hope you can help.
--
Venlig hilsen / Kind regards
Peter Calum
engine-backup oVirt 3.6
by Ricky Schneberger
Hi,
It's time to move to oVirt 4.x.
But when I do the backup I get an error telling me "FATAL: Failed
notifying engine".
The command I use is "engine-backup --mode=backup
--file=engine_2.6.7.tar.gz --log=engine_2.6.7_backup.log" and what I
get is:
"
Backing up:
Notifying engine
Notifying engine
FATAL: Failed notifying engine
"
The engine_2.6.7_backup.log says:
"
2017-01-27 12:55:20 17029: Start of engine-backup mode backup scope all
file engine_2.6.7.tar.gz
2017-01-27 12:55:20 17029: OUTPUT: Backing up:
2017-01-27 12:55:20 17029: Generating pgpass
2017-01-27 12:55:20 17029: OUTPUT: Notifying engine
2017-01-27 12:55:20 17029: pg_cmd running: psql -w -U engine -h
localhost -p 5432 engine -t -c SELECT LogEngineBackupEvent('files',
now(), 0, 'Started', 'ovirt.actnet.local', '/root/engine_2.6.7_backup.log');
psql: FATAL: password authentication failed for user "engine"
2017-01-27 12:55:20 17029: FATAL: Failed notifying engine
2017-01-27 12:55:20 17029: OUTPUT: Notifying engine
2017-01-27 12:55:20 17029: pg_cmd running: psql -w -U engine -h
localhost -p 5432 engine -t -c SELECT LogEngineBackupEvent('files',
now(), -1, 'Failed notifying engine', 'ovirt.actnet.local',
'/root/engine_2.6.7_backup.log');
psql: FATAL: password authentication failed for user "engine"
2017-01-27 12:55:20 17029: FATAL: Failed notifying engine
"
I can connect to the database with user "engine" and the password found
in the file "10-setup-database.conf".
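One way to narrow this down is to reproduce the exact non-interactive call
engine-backup makes, with the same credentials it reads from the setup answer
file. A rough sketch, assuming the default path and the usual ENGINE_DB_*
variable names written by engine-setup (verify both against the file before
relying on this):

# Use exactly the credentials engine-backup would use, non-interactively (-w).
source /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
PGPASSWORD="${ENGINE_DB_PASSWORD}" psql -w -U "${ENGINE_DB_USER}" \
    -h "${ENGINE_DB_HOST}" -p "${ENGINE_DB_PORT}" "${ENGINE_DB_DATABASE}" \
    -t -c 'SELECT 1;'

# If this fails while an interactive psql works, compare the auth method
# configured for host connections (md5 vs ident/peer) in pg_hba.conf:
grep -vE '^\s*(#|$)' /var/lib/pgsql/data/pg_hba.conf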
Regards
--
Ricky Schneberger
------------------------------------
Use PGP to protect your own privacy!
Key fingerprint = 59E1 2B00 C28B 6E0D C8D1 D85B 39AA 7CD5 B88C 0B63
Key-ID: 0xB88C0B63
Unable to activate Storage Pool
by Phil D
Hello,
My oVirt node has crashed and I am unable to reach the engine manager. I
have the NFS mounts working okay and am able to
execute getStorageDomainInfo against them and retrieve the pool ID. The
major issue I am having is that, to be able to use connectStoragePool, I
seem to need the master domain UUID, as I am getting:
vdsClient -s localhost connectStoragePool
9b40933b-8198-4b3d-bfc2-6d1e49a97d70 0 0
Cannot find master domain: 'spUUID=9b40933b-8198-4b3d-bfc2-6d1e49a97d70,
msdUUID=00000000-0000-0000-0000-000000000000'
How can I find the correct msdUUID to use?
Thanks, Phil
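If the NFS domains are mounted, the master domain can usually be identified
from each domain's own metadata. A rough sketch, assuming the standard
/rhev/data-center/mnt layout; the final call is only a commented placeholder,
and its argument order should be checked against vdsClient's built-in help for
the installed vdsm version:

# Each storage domain records its role and master version in dom_md/metadata.
for md in /rhev/data-center/mnt/*/*/dom_md/metadata; do
    echo "== $md"
    grep -E '^(SDUUID|ROLE|MASTER_VERSION|POOL_UUID)=' "$md"
done

# The domain with ROLE=Master is the msdUUID; note its MASTER_VERSION too.
# Hypothetical retry with placeholders filled in from the output above:
# vdsClient -s localhost connectStoragePool \
#     9b40933b-8198-4b3d-bfc2-6d1e49a97d70 <hostID> <scsiKey> <msdUUID> <masterVersion>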
Actual downtime during migration?
by Gianluca Cecchi
Hello,
I was testing putting a host into maintenance on 4.0.6, with one VM running.
It correctly completes the live migration of the VM, and I see this event in
the event pane:
Migration completed (VM: ol65, Source: ovmsrv06, Destination: ovmsrv05,
Duration: 39 seconds, Total: 39 seconds, Actual downtime: 133ms)
What is considered "Actual downtime"?
Thanks,
Gianluca
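As far as the libvirt/QEMU migration model goes, the "Actual downtime" in that
event is the length of the final pause in which the last dirty memory pages and
device state are copied before the VM resumes on the destination; for the rest
of the 39 seconds the guest kept running on the source. The ceiling for that
pause is a vdsm tunable; a small sketch of where one might look for it (the
option name, section and default below are from memory of 4.0-era vdsm and
should be verified against the installed configuration):

# Effective override, if any, in the local vdsm configuration (milliseconds):
grep -rn migration_downtime /etc/vdsm/vdsm.conf /etc/vdsm/vdsm.conf.d/ 2>/dev/null

# If nothing is set, vdsm falls back to its built-in default (500 ms in this
# era), so the reported 133 ms stayed well under the allowed ceiling.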
[ANN] oVirt 4.1.0 First Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the First
Release Candidate of oVirt 4.1.0 for testing, as of January 23rd, 2017.
This is pre-release software. Please take a look at our community page[1]
to know how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.
This update is the first release candidate of the 4.1 release series.
4.1.0 brings more than 250 enhancements and more than 700 bugfixes,
including more than 300 high or urgent
severity fixes, on top of the oVirt 4.0 series.
See the release notes [3] for installation / upgrade instructions and a
list of new features and bugs fixed.
This release is available now for:
* Fedora 24 (tech preview)
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* Fedora 24 (tech preview)
* oVirt Node 4.1
Notes:
- oVirt Live iso is already available[5]
- oVirt Node NG iso is already available[5]
- Hosted Engine appliance is already available
A release management page including planned schedule is also available[4]
Additional Resources:
* Read more about the oVirt 4.1.0 release highlights:
http://www.ovirt.org/release/4.1.0/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.1.0/
[4]
http://www.ovirt.org/develop/release-management/releases/4.1/release-mana...
[5] http://resources.ovirt.org/pub/ovirt-4.1-pre/iso/
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com