oVirt / GlusterFS / Data (HA)
by Devin Acosta
I have created an oVirt 4.0.6 cluster with 2 compute nodes and 3
dedicated Gluster nodes. The Gluster nodes are configured correctly, with
the volume replica count set to 3. I'm trying to figure out the best way
to configure the Data (Master) domain when I attach it to the oVirt
manager. I initially set the mount point to gluster01-int:/data and set
the mount options to
"backup-volfile-servers=gluster02-int:/data,gluster03-int:/data". I
understand that will choose another host if the first one is down, but if
I were to reboot the first Gluster node, would that provide HA for my
Data domain?
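For reference, a minimal sketch of the equivalent manual mount. Note that
the GlusterFS backup-volfile-servers option takes colon-separated
hostnames only, without a volume path; in the storage domain dialog, only
the text after "-o" goes into the mount options field (the gluster0*-int
names are the ones above, and /mnt/data is an illustrative mount point):

    # manual FUSE mount with backup volfile servers (hostnames only):
    mount -t glusterfs \
        -o backup-volfile-servers=gluster02-int:gluster03-int \
        gluster01-int:/data /mnt/data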
I also configured CTDB with a floating IP address that moves between all 3
Gluster nodes, and I am wondering if I should point the mount at that
VIP instead. What is the best solution for dealing with Gluster and
keeping your mount HA?
--
Devin Acosta
Red Hat Certified Architect, LinuxStack
602-354-1220 || devin(a)linuxguru.co
oVirt installs 3.6 repos for 4.0 compat cluster
by James
Hi guys,
I'm trying to move servers to a new oVirt 4.0 cluster. I've upgraded the
engine (standalone, not hosted) to version 4.0.6.3 and run engine-setup,
and everything seemed to go fine.
Reinstalling oVirt on a host, after moving it from the 3.6 compat
cluster into a 4.0 compat cluster, causes issues when setting up the
management network for some reason. I haven't been able to track this
issue down fully yet.
What's weird is that even when a fresh new host is installed and added to
the grid, oVirt for some reason installs the oVirt 3.6 repos on the
host. I'm sure this is done by oVirt itself, since the repos aren't there
before I install the host and we don't manage them outside of that. The
host also just stalls at "Activating" and I can't do anything with it
after that.
Shouldn't it be installing version 4.0 for this cluster? I've
triple-checked that the cluster is set to 4.0 compatibility and that the
data center it's in is also set to this level.
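As a data point for anyone debugging the same thing, one way to check
which release package laid down the repos on a host (a sketch assuming an
EL7-based host; ovirt-release36 vs. ovirt-release40 would be the usual
suspects):

    # which oVirt release package(s) ended up on the host?
    rpm -qa 'ovirt-release*'
    # and which repo files they dropped:
    ls /etc/yum.repos.d/ | grep -i ovirt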
Unsure what logs I can provide, but any help would be greatly
appreciated!
Cheers,
James
Re: [ovirt-users] [ovirt-devel] Invitation: Next-generation Node package persistence for oVirt 4.1 @ Tue Jan 24, 2017 8am - 9am (MST) (devel@ovirt.org)
by Ryan Barry
Sorry everyone -- accidentally created this with the wrong account, and I
can't start the stream (thanks, YouTube).
I'll reschedule for Thursday; a new invite will follow.
On Thu, Jan 19, 2017 at 6:21 AM, <rbarry(a)redhat.com> wrote:
> Next-generation Node package persistence for oVirt 4.1
> oVirt Node is a composed hypervisor image which can be used to provision
> virtualization hosts for use with oVirt out of the box, with no additional
> package installation necessary.
>
> oVirt Node is upgraded via yum in an A/B fashion, but the installation of
> a completely new image means that packages which had been installed on a
> previous version of the hypervisor were lost on upgrade, or required
> reinstallation.
>
> With oVirt 4.1, oVirt Node will cache and reinstall packages installed
> with yum or dnf onto the new image, ensuring that customizations made by
> users or administrators are kept.
>
> The advantages of keeping packages across upgrades:
> - ability to persistently modify oVirt Node with additions for tooling or
> support
> - removes the need to build a brand-new image with a modified kickstart to
> modify oVirt Node
> - simplified management
>
> Session outline:
> - Next-generation oVirt Node overview
> - yum API overview
> - oVirt Node integration with yum/dnf to persist RPMs across upgrades
>
> Session link:
> https://www.youtube.com/watch?v=VAznsxvZpuk
>
> Feature Page:
> https://www.ovirt.org/develop/release-management/features/node/node-next-persistence
>
> *When*
> Tue Jan 24, 2017 8am – 9am Mountain Time - Arizona
>
> *Where*
> https://www.youtube.com/watch?v=VAznsxvZpuk
>
> *Calendar*
> devel(a)ovirt.org
>
> *Who*
> • rbarry(a)redhat.com - organizer
> • users(a)ovirt.org
> • devel(a)ovirt.org
>
Problems upgrading ovirt from 3.5.6 to 3.6.7
by gflwqs gflwqs
Hi list!
We have a problem upgrading hosted-engine nodes (hypervisors) from 3.5.6
to 3.6.7; we get these errors:
from agent.log:
MainThread::ERROR::2017-01-24
15:17:58,981::upgrade::207::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(_is_conf_volume_there)
Unable to find HE conf volume
MainThread::INFO::2017-01-24
15:17:58,981::upgrade::262::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(_create_shared_conf_volume)
Creating hosted-engine configuration volume on the shared storage domain
MainThread::DEBUG::2017-01-24
15:17:59,079::heconflib::358::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(create_and_prepare_image)
{'status': {'message': 'OK', 'code': 0}, 'uuid':
'df95e3fd-3715-4525-8955-abcc8cb24865'}
MainThread::DEBUG::2017-01-24
15:17:59,080::heconflib::372::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(create_and_prepare_image)
Created configuration volume OK, request was:
- image: e4d09ccf-5d9b-4f90-9eda-1ebd4f55ecbc
- volume: 9636163f-d634-4c28-b2bc-f84b5a90a17e
MainThread::DEBUG::2017-01-24
15:17:59,080::heconflib::283::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(task_wait)
Waiting for existing tasks to complete
MainThread::DEBUG::2017-01-24
15:18:00,137::heconflib::283::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(task_wait)
Waiting for existing tasks to complete
MainThread::DEBUG::2017-01-24
15:18:00,278::heconflib::379::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(create_and_prepare_image)
configuration volume: prepareImage
MainThread::DEBUG::2017-01-24
15:18:00,350::heconflib::387::ovirt_hosted_engine_ha.lib.upgrade.StorageServer::(create_and_prepare_image)
{'status': {'message': "Volume does not exist:
('9636163f-d634-4c28-b2bc-f84b5a90a17e',)", 'code': 201}
from vdsm.log:
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,187::task::752::Storage.TaskManager.Task::(_save)
Task=`df95e3fd-3715-4525-8955-abcc8cb24865`::_save: orig
/rhev/data-center/c202d58c-afc7-4848-8501-228d9bccb15d/mastersd/master/tasks/df95e3fd-3715-4525-8955-abcc8cb24865 temp
/rhev/data-center/c202d58c-afc7-4848-8501-228d9bccb15d/mastersd/master/tasks/df95e3fd-3715-4525-8955-abcc8cb24865.temp
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,201::task::752::Storage.TaskManager.Task::(_save)
Task=`df95e3fd-3715-4525-8955-abcc8cb24865`::_save: orig
/rhev/data-center/c202d58c-afc7-4848-8501-228d9bccb15d/mastersd/master/tasks/df95e3fd-3715-4525-8955-abcc8cb24865 temp
/rhev/data-center/c202d58c-afc7-4848-8501-228d9bccb15d/mastersd/master/tasks/df95e3fd-3715-4525-8955-abcc8cb24865.temp
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,217::lvm::290::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
--cpu-list 0-79 /usr/bin/sudo -n /usr/sbin/lvm lvcreate --config ' devices
{ preferred_names = ["^*/dev/mapper/*"] ignore_suspended_devices=1
write_cache_state=0 disable_after_error_count=3 filter = [
'\''a|/dev/mapper/2347b42194dacd3656c9ce900aea3dee2|'\'', '\''r|.*|'\'' ]
} global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1
use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } '
--autobackup n --contiguous n --size 1024m --addtag OVIRT_VOL_INITIALIZING --name
9636163f-d634-4c28-b2bc-f84b5a90a17e 8fa5a242-fcb4-454d-aaa1-23f8dd92373b
(cwd None)
Thread-28556::DEBUG::2017-01-24 15:17:59,289::utils::671::root::(execCmd)
/usr/bin/taskset --cpu-list 0-79 /usr/sbin/tc qdisc show (cwd None)
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,301::lvm::290::Storage.Misc.excCmd::(cmd) FAILED: <err> = '
WARNING: Not using lvmetad because config setting use_lvmetad=0.\n
WARNING: To avoid corruption, rescan devices to make changes visible
(pvscan --cache).\n Volume group "8fa5a242-fcb4-454d-aaa1-23f8dd92373b" has
insufficient free space (4 extents): 8 required.\n'; <rc> = 5
df95e3fd-3715-4525-8955-abcc8cb24865::ERROR::2017-01-24
15:17:59,303::volume::485::Storage.Volume::(create) Failed to create volume
/rhev/data-center/c202d58c-afc7-4848-8501-228d9bccb15d/8fa5a242-fcb4-454d-aaa1-23f8dd92373b/images/e4d09ccf-5d9b-4f90-9eda-1ebd4f55ecbc/9636163f-d634-4c28-b2bc-f84b5a90a17e: Cannot create Logical Volume:
('8fa5a242-fcb4-454d-aaa1-23f8dd92373b',
'9636163f-d634-4c28-b2bc-f84b5a90a17e')
df95e3fd-3715-4525-8955-abcc8cb24865::ERROR::2017-01-24
15:17:59,304::volume::521::Storage.Volume::(create) Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/volume.py", line 482, in create
initialSize=initialSize)
File "/usr/share/vdsm/storage/blockVolume.py", line 133, in _create
initialTag=TAG_VOL_UNINIT)
File "/usr/share/vdsm/storage/lvm.py", line 1096, in createLV
raise se.CannotCreateLogicalVolume(vgName, lvName)
CannotCreateLogicalVolume: Cannot create Logical Volume:
('8fa5a242-fcb4-454d-aaa1-23f8dd92373b',
'9636163f-d634-4c28-b2bc-f84b5a90a17e')
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,304::resourceManager::619::Storage.ResourceManager::(releaseResource)
Trying to release resource
'8fa5a242-fcb4-454d-aaa1-23f8dd92373b_imageNS.e4d09ccf-5d9b-4f90-9eda-1ebd4f55ecbc'
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,304::resourceManager::638::Storage.ResourceManager::(releaseResource)
Released resource
'8fa5a242-fcb4-454d-aaa1-23f8dd92373b_imageNS.e4d09ccf-5d9b-4f90-9eda-1ebd4f55ecbc'
(0 active users)
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,305::resourceManager::644::Storage.ResourceManager::(releaseResource)
Resource
'8fa5a242-fcb4-454d-aaa1-23f8dd92373b_imageNS.e4d09ccf-5d9b-4f90-9eda-1ebd4f55ecbc'
is free, finding out if anyone is waiting for
it.
df95e3fd-3715-4525-8955-abcc8cb24865::DEBUG::2017-01-24
15:17:59,305::resourceManager::652::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'8fa5a242-fcb4-454d-aaa1-23f8dd92373b_imageNS.e4d09ccf-5d9b-4f90-9eda-1ebd4f55ecbc',
Clearing records.
df95e3fd-3715-4525-8955-abcc8cb24865::ERROR::2017-01-24
15:17:59,305::task::866::Storage.TaskManager.Task::(_setError)
Task=`df95e3fd-3715-4525-8955-abcc8cb24865`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/storage/task.py", line 332, in run
return self.cmd(*self.argslist, **self.argsdict)
File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
return method(self, *args, **kwargs)
File "/usr/share/vdsm/storage/sp.py", line 1934, in createVolume
initialSize=initialSize)
File "/usr/share/vdsm/storage/sd.py", line 488, in createVolume
initialSize=initialSize)
File "/usr/share/vdsm/storage/volume.py", line 482, in create
initialSize=initialSize)
File "/usr/share/vdsm/storage/blockVolume.py", line 133, in _create
initialTag=TAG_VOL_UNINIT)
File "/usr/share/vdsm/storage/lvm.py", line 1096, in createLV
raise se.CannotCreateLogicalVolume(vgName, lvName)
CannotCreateLogicalVolume: Cannot create Logical Volume:
('8fa5a242-fcb4-454d-aaa1-23f8dd92373b',
'9636163f-d634-4c28-b2bc-f84b5a90a17e')
It seems that there is not enough disk space on the engine LUN to create
the configuration volume:
# vgs
  VG                                   #PV #LV #SN Attr   VSize   VFree
  8fa5a242-fcb4-454d-aaa1-23f8dd92373b   1   9   0 wz--n- 104.62g 512.00m
What can we do to fix this?
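One possible approach, assuming the LUN backing this VG can be grown on
the storage side; the device-mapper path below is taken from the LVM
filter in the log above:

    # after growing the LUN on the array, rescan and grow the PV:
    iscsiadm -m session --rescan   # only if the LUN is iSCSI
    pvresize /dev/mapper/2347b42194dacd3656c9ce900aea3dee2
    vgs 8fa5a242-fcb4-454d-aaa1-23f8dd92373b   # VFree should now exceed 1g

Alternatively, freeing space in the VG (for example by removing an unused
LV) would also release the 8 extents that the lvcreate needs.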
Regards
Christian
[ANN] oVirt 4.0.6 is the last 4.0 release
by Sandro Bonazzola
Hi,
with the oVirt 4.1.0 GA release scheduled for February 1st,
oVirt 4.0.6 is the last officially released version in the 4.0 cycle.
If any critical issue that requires a fix in 4.0.6 is reported while
upgrading to 4.1, a fix will be released.
Please note that any bug reported in Bugzilla as fixed against 4.0.7 has
also been fixed in 4.1.0.
on behalf of the oVirt team,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[PySDK v3] Choose storage domain
by Nicolas Ecarnot
Hello,
When trying to create a VM by cloning a template, I found out I couldn't
choose the target storage domain:
[...]
vm_storage_domain = api.storagedomains.get(name=storage_domain)
vm_params = params.VM(name=vm_name,
                      memory=vm_memory,
                      cluster=vm_cluster,
                      template=vm_template,
                      os=vm_os,
                      storage_domain=vm_storage_domain)
try:
    api.vms.add(vm=vm_params)
[...]
I'm getting:
Failed to create VM from Template:
status: 400
reason: Bad Request
detail: Cannot add VM. The selected Storage Domain does not contain the
VM Template.
... which I know, but I thought that, as with the GUI, I could specify
the target storage domain.
I did my homework and found a nice answer from Juan:
http://lists.ovirt.org/pipermail/users/2016-January/037321.html
but it relates to snapshots, not to template usage, so I'm still
stuck.
May I ask for advice?
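For reference, the v3 API can clone the template disks to a chosen target
domain at creation time, by passing per-disk storage_domains together
with clone=True. An untested sketch along those lines, reusing the
variables above; it assumes vm_template is the broker returned by
api.templates.get() (so its disks sub-collection is reachable) and that a
storage domain may be referenced by name here rather than by id:

    # untested sketch (v3 SDK): clone template disks to a target domain
    disk_params = []
    for tdisk in vm_template.disks.list():
        disk_params.append(params.Disk(
            id=tdisk.get_id(),
            storage_domains=params.StorageDomains(
                storage_domain=[params.StorageDomain(name=storage_domain)])))
    vm_params = params.VM(name=vm_name,
                          cluster=vm_cluster,
                          template=vm_template,
                          disks=params.Disks(clone=True, disk=disk_params))
    api.vms.add(vm=vm_params)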
--
Nicolas ECARNOT
packages that can be updated without putting hosts in maintenance
by Nathanaël Blanchet
Hi
The update notifier in the webadmin was originally designed to alert on
new vdsm* packages. Now I've noticed that available updates to virt
packages and more are notified as well. I know that hot-updating the
qemu-kvm package breaks VMs that are running on the affected hosts, but
what about other packages like libvirt-client? I know it is recommended
to put hosts in maintenance while updating, but can we update some minor
packages without waiting for migration?
--
Nathanaël Blanchet
Network supervision
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tel. +33 (0)4 67 54 84 55
Fax +33 (0)4 67 54 84 14
blanchet(a)abes.fr