Recommended oVirt Implementation on Active-Active Datacenters (Site 1 and Site 2) - Same Cluster
by Rogério Ceni Coelho
Hi oVirt Jedis!!!
First of all, congrats on the product!!! I love oVirt!!!
I am using oVirt 4.0.4 with 10 hosts and 58 virtual machines on two
active-active datacenters, using two EMC VPLEX systems, two EMC VNX5500
arrays, and Dell blades (eight PowerEdge M610 and two M620 servers).
Half of the servers are on Site 1 and half on Site 2; the same goes for the VMs.
The two sites work as one and have redundant network, storage, power, and so on.
I want to know the best way to specify that VM number 1 runs on Site 1
and VM number 2 runs on Site 2.
On VMware 5.1 we use DRS Group Manager, and on Hyper-V we use custom
properties on hosts and VMs. What can we use on oVirt without segregating
the hosts into two different datacenters or two different clusters?
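For illustration, this sketch with the Python SDK v4 is the kind of thing I
have in mind: restrict each VM's placement policy to its site's hosts while
leaving it migratable among them. The engine URL, credentials, and VM/host
names below are placeholders, and this is only my guess at the right mechanism:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# All names and credentials below are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Make only Site 1's hosts eligible for vm1; the scheduler may still
# migrate the VM between those hosts.
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=vm1')[0]
vms_service.vm_service(vm.id).update(
    types.Vm(
        placement_policy=types.VmPlacementPolicy(
            hosts=[types.Host(name='site1-host1'), types.Host(name='site1-host2')],
            affinity=types.VmAffinity.USER_MIGRATABLE,
        )
    )
)
connection.close()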
Thanks in advance.
expired cert for aaa
by cmc
Hi,
I upgraded my engine host from 4.0.2.7 to 4.0.4, and when I attempt to log in
via an aaa provider I get:
java.security.cert.CertificateExpiredException: NotAfter: Fri Nov 04
00:19:18 GMT 2016,
What certificate is this referring to? The certificate from the aaa
provider expires in 2063.
It was fine until the upgrade.
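In case it helps to narrow it down, here is a quick sketch I would use to
survey the expiry dates of the engine's own certificates; it assumes the
default /etc/pki/ovirt-engine layout, PEM-encoded .cer files, and the python
cryptography package:

import glob
from cryptography import x509
from cryptography.hazmat.backends import default_backend

# Print NotAfter for every certificate under the engine's PKI directory.
for path in sorted(glob.glob('/etc/pki/ovirt-engine/certs/*.cer')):
    with open(path, 'rb') as f:
        cert = x509.load_pem_x509_certificate(f.read(), default_backend())
    print('%-60s %s' % (path, cert.not_valid_after))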
Thanks for any help,
Cam
[ANN] oVirt 4.0.6 Second Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the second
release candidate of oVirt 4.0.6 for testing, as of November 24th, 2016.
This release is available now for:
* Fedora 23 (tech preview)
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 23 (tech preview)
* oVirt Next Generation Node 4.0
This is pre-release software. Please take a look at our community page[1]
to learn how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.
This update is the second release candidate of the sixth in a series of
stabilization updates to the 4.0 series.
4.0.6 brings 2 enhancements and 43 bugfixes, including 16 high or urgent
severity fixes, on top of the oVirt 4.0 series.
See the release notes [3] for installation / upgrade instructions and a
list of new features and bugs fixed.
Notes:
* A new oVirt Live ISO is available. [4]
* A new oVirt Next Generation Node will be available soon [4]
* A new oVirt Engine Appliance is available for Red Hat Enterprise Linux
and CentOS Linux (or similar)
* Mirrors[5] might need up to one day to synchronize.
Additional Resources:
* Read more about the oVirt 4.0.6 release highlights:
http://www.ovirt.org/release/4.0.6/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.0.6/
[4] http://resources.ovirt.org/pub/ovirt-4.0-pre/iso/
[5] http://www.ovirt.org/Repository_mirrors#Current_mirrors
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
GFS2 and OCFS2 for Shared Storage
by Fernando Frediani
Has anyone managed to use GFS2 or OCFS2 for shared block storage between
hosts? How scalable was it, and which of the two works better?
Using traditional CLVM is far from a good starting point because of the lack
of thin provisioning, so I'm willing to consider either of the two filesystems.
Thanks
Fernando
OVN Provider setup issues
by Andrea Fagiani
Hi all,
I've been messing around with the ovirt-ovn-provider [1] and I've run
into some issues during the initial setup.
I have a 5-node cluster (running the hosted-engine VA), LEGACY virtual
switch; this test was done on a single host. Following the instructions
from the aforementioned blog post, I have downloaded the
ovirt-provider-ovn and ovirt-provider-ovn-driver rpms, and built the rpm
packages for:
- openvswitch (2.6.90)
- openvswitch-ovn-common
- openvswitch-ovn-host
- openvswitch-ovn-central
- python-openvswitch
I set up a dedicated VM for the OVN controller, installed ovs and
ovn-central, and started the ovn-northd and ovirt-provider-ovn services. So
far so good. I then moved on to the oVirt host and installed the above
packages (minus ovn-central) as well as the ovirt-provider-ovn-driver,
started the ovn-controller service, and ran:
# vdsm-tool ovn-config <ovn controller IP> <host management IP>
Executing the suggested checks, I noticed that something didn't quite go
as planned. Below is /var/log/openvswitch/ovn-controller.log from
the host machine. There are no firewalls involved (not even on the
servers), and I also tried disabling SELinux, but to no avail.
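One thing I still want to rule out (my assumption, not a confirmed diagnosis)
is a mismatch between the manually built packages, since the OFPBMC_BAD_FIELD
error in the log below can apparently occur when ovs-vswitchd and
ovn-controller come from different builds. A quick sketch to print both
versions side by side:

import subprocess

# Compare the builds of the switch daemon and the OVN controller;
# both binaries support --version.
for cmd in (['ovs-vswitchd', '--version'], ['ovn-controller', '--version']):
    print(subprocess.check_output(cmd).strip())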
Any ideas?
Thanks,
Andrea
[1] http://www.ovirt.org/blog/2016/11/ovirt-provider-ovn/
2016-11-07T14:22:09.552Z|00001|vlog|INFO|opened log file /var/log/openvswitch/ovn-controller.log
2016-11-07T14:22:09.553Z|00002|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2016-11-07T14:22:09.553Z|00003|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2016-11-07T14:22:09.555Z|00004|reconnect|INFO|tcp:10.100.248.11:6642: connecting...
2016-11-07T14:22:09.555Z|00005|reconnect|INFO|tcp:10.100.248.11:6642: connected
2016-11-07T14:22:09.556Z|00006|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
2016-11-07T14:22:09.556Z|00007|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
2016-11-07T14:22:09.557Z|00008|pinctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
2016-11-07T14:22:09.557Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
2016-11-07T14:22:09.557Z|00010|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
2016-11-07T14:22:09.557Z|00011|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
2016-11-07T14:22:09.558Z|00012|ofctrl|INFO|OpenFlow error: OFPT_ERROR (OF1.3) (xid=0x9): OFPBMC_BAD_FIELD
OFPT_FLOW_MOD (OF1.3) (xid=0x9):
(***truncated to 64 bytes from 240***)
00000000 04 0e 00 f0 00 00 00 09-00 00 00 00 00 00 00 00 |................|
00000010 00 00 00 00 00 00 00 00-22 00 00 00 00 00 00 00 |........".......|
00000020 ff ff ff ff ff ff ff ff-ff ff ff ff 00 00 00 00 |................|
00000030 00 01 00 04 00 00 00 00-00 04 00 b8 00 00 00 00 |................|
Cannot remove disk error for ovirt-image-repository VMs
by Gianluca Cecchi
Hello,
I am on 4.0.5 with 3 hosts, Gluster, and a self-hosted engine.
If I create a VM from an ISO and install the OS, I can then delete the VM
and its related disks without errors.
If I do the same creating a template (or directly a VM) from the CentOS 7
Atomic Host Image in ovirt-image-repository, I get these events in sequence
when I delete the VM:
10:47:29 VM atomic was successfully removed.
10:47:54 VDSM hosted_engine_1 command failed: Could not remove all image's volumes
10:50:05 Refresh image list succeeded for domain(s): ovirt-image-repository (All file type)
I tried many times with the same behavior.
The VM has been removed from the web admin GUI.
All disks in the "Disks" tab are marked as "OK".
Any commands to check actual integrity... in the db and filesystems...?
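For example, would it be reasonable to look directly for leftover image rows,
along these lines? This is only a sketch, assuming the default 'engine'
database with local access, and that imagestatus 1 = OK, 2 = LOCKED, 4 = ILLEGAL:

import psycopg2

# List image volumes that are not in the OK state in the engine DB.
conn = psycopg2.connect(dbname='engine', user='engine',
                        host='localhost', password='...')
cur = conn.cursor()
cur.execute('SELECT image_guid, imagestatus FROM images WHERE imagestatus <> 1')
for image_guid, imagestatus in cur.fetchall():
    print('%s %s' % (image_guid, imagestatus))
conn.close()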
Basic messages in engine.log:
2016-11-23 09:47:54,860 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler10) [53f3be13] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM hosted_engine_1 command failed: Could not remove all image's volumes
2016-11-23 09:47:54,860 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler10) [53f3be13] SPMAsyncTask::PollTask: Polling task '555e7dd0-dc32-4cf7-be10-d469fc8b2f8d' (Parent Command 'RemoveVm', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'cleanSuccess'.
2016-11-23 09:47:54,880 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler10) [53f3be13] BaseAsyncTask::logEndTaskFailure: Task '555e7dd0-dc32-4cf7-be10-d469fc8b2f8d' (Parent Command 'RemoveVm', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended with failure:
-- Result: 'cleanSuccess'
-- Message: 'VDSGenericException: VDSErrorException: Failed in vdscommand to HSMGetAllTasksStatusesVDS, error = Could not remove all image's volumes',
-- Exception: 'VDSGenericException: VDSErrorException: Failed in vdscommand to HSMGetAllTasksStatusesVDS, error = Could not remove all image's volumes'
full files:
engine.log in gzip format:
https://drive.google.com/file/d/0BwoPbcrMv8mvQlVwVDlGTVEtR00/view?usp=sha...
vdsm.log of related host in gzip format:
https://drive.google.com/file/d/0BwoPbcrMv8mvdDFFOEhTQ3o1ZXM/view?usp=sha...
supervdsm.log in gzip format:
https://drive.google.com/file/d/0BwoPbcrMv8mvbE5ZdXMyc0w1S1U/view?usp=sha...
Gianluca
Fedora 25 images uploaded to oVirt's image repository
by Barak Korren
The Fedora 25 images have been uploaded to oVirt's public Glance image
repository.
Both the Atomic image and the generic cloud image are available.
To access the images, select "ovirt-image-repository" under the "Storage"
tab in the Web Administration console while "System" is selected in the
left-hand tree.
If you don't have "ovirt-image-repository" configured as an external
provider in your oVirt instance, you can add it by selecting "External
Providers" in the left-hand tree, clicking "Add", and filling in the
following details (a scripted alternative follows the list):
Name: ovirt-image-repository
Description: Public Glance repository for oVirt
Type: OpenStack Image
Provider URL: http://glance.ovirt.org:9292
"Requires Authentication" should be checked off.
--
Barak Korren
bkorren(a)redhat.com
oVirt Infra Team
Hook to add firewall rules
by Claude Durocher
I've successfully implemented a hook to edit the configuration of some of
the NICs on my oVirt hosts.
Is there a way to add firewall rules (iptables) with VDSM hooks?
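To make the question concrete, this is the kind of hook I'm imagining; the
hook point, file name, and the rule itself are illustrative only, and I
haven't verified that this is a supported pattern:

#!/usr/bin/python
# Illustrative sketch: /usr/libexec/vdsm/hooks/before_vm_start/50_iptables
import subprocess

import hooking  # helper module shipped with vdsm for hook scripts

# Pass the domain XML through unchanged; this hook only touches iptables.
domxml = hooking.read_domxml()

# Example rule only; a real hook would derive it from the VM config.
subprocess.check_call(['iptables', '-I', 'INPUT',
                       '-p', 'tcp', '--dport', '12345',
                       '-j', 'ACCEPT'])

hooking.write_domxml(domxml)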
Cloning a VM with Ceph/Cinder based disk leaves disk in locked state
by Thomas Klute
Dear oVirt Users,
we're using Cinder (based on the Kolla setup) to provide storage for
oVirt. Everything works fine except the clone process of a VM.
Cloning a VM with NFS-based storage works as expected, so I think it's
the Cinder integration that causes the problem here.
When cloning a VM with Cinder/Ceph-based storage we see that the VM
clone is created and the attached image is cloned as well, but the
disk/image remains in the locked state. We then need to issue
"update images set imagestatus=1 where imagestatus=2;"
on the engine database to make the VM clone work.
Is this a bug in the Cinder integration?
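For completeness, this is how we confirm the clone's disk is stuck (a sketch
with the Python SDK v4; engine URL, credentials, and the disk name are
placeholders):

import ovirtsdk4 as sdk

# Engine URL, credentials, and the disk name are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Print the status of the cloned disk; it stays LOCKED instead of OK.
disks_service = connection.system_service().disks_service()
for disk in disks_service.list(search='name=vm-vertr-kp-klon_Disk1'):
    print('%s %s' % (disk.name, disk.status))
connection.close()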
Thanks and best regards,
Thomas
engine.log:
2016-11-21 10:00:20,216 INFO [org.ovirt.engine.core.bll.CloneVmCommand] (default task-19) [2dd83801] Lock Acquired to object 'EngineLock:{exclusiveLocks='[vm-vertr-kp-klon=<VM_NAME, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='[897d96c8-0ea9-4d06-b815-66a42b63c49b=<VM, ACTION_TYPE_FAILED_VM_IS_BEING_CLONED>, e5e06033-e099-4efe-a5cd-9de2ebc0238b=<DISK, ACTION_TYPE_FAILED_DISK_IS_USED_FOR_CREATE_VM$VmName vm-vertr-kp-klon>]'}'
2016-11-21 10:00:21,290 INFO [org.ovirt.engine.core.bll.CloneVmCommand] (default task-19) [] Running command: CloneVmCommand internal: false. Entities affected : ID: 897d96c8-0ea9-4d06-b815-66a42b63c49b Type: VMAction group CREATE_VM with role type USER
2016-11-21 10:00:21,846 INFO [org.ovirt.engine.core.bll.storage.disk.cinder.CloneSingleCinderDiskCommand] (default task-19) [4c8e58b8] Running command: CloneSingleCinderDiskCommand internal: true. Entities affected : ID: 1f342ea3-49f8-4f65-bf15-ce48514e9bd3 Type: StorageAction group CONFIGURE_VM_STORAGE with role type USER
2016-11-21 10:00:23,143 INFO [org.ovirt.engine.core.bll.AddGraphicsDeviceCommand] (default task-19) [420ca5bf] Running command: AddGraphicsDeviceCommand internal: true. Entities affected : ID: b9f78fd2-9a55-42f0-9ae9-7bca4ae93d9a Type: VMAction group EDIT_VM_PROPERTIES with role type USER
2016-11-21 10:00:23,151 INFO [org.ovirt.engine.core.bll.AddGraphicsDeviceCommand] (default task-19) [7f72e8e4] Running command: AddGraphicsDeviceCommand internal: true. Entities affected : ID: b9f78fd2-9a55-42f0-9ae9-7bca4ae93d9a Type: VMAction group EDIT_VM_PROPERTIES with role type USER
2016-11-21 10:00:23,222 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-19) [7f72e8e4] Correlation ID: 2dd83801, Job ID: a02e5069-a9ef-4b1b-8ec1-b1922d2e3135, Call Stack: null, Custom Event ID: -1, Message: VM vm-vertr-kp-klon was created by admin@internal-authz.
2016-11-21 10:00:23,240 INFO [org.ovirt.engine.core.bll.CloneVmCommand] (default task-19) [7f72e8e4] Lock freed to object 'EngineLock:{exclusiveLocks='[vm-vertr-kp-klon=<VM_NAME, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='[897d96c8-0ea9-4d06-b815-66a42b63c49b=<VM, ACTION_TYPE_FAILED_VM_IS_BEING_CLONED>, e5e06033-e099-4efe-a5cd-9de2ebc0238b=<DISK, ACTION_TYPE_FAILED_DISK_IS_USED_FOR_CREATE_VM$VmName vm-vertr-kp-klon>]'}'
2016-11-21 10:00:29,283 INFO [org.ovirt.engine.core.bll.storage.disk.cinder.CloneSingleCinderDiskCommandCallback] (DefaultQuartzScheduler9) [4c8e58b8] Command 'CloneSingleCinderDisk' id: 'a76696d7-f698-4591-9a26-888f47462888' child commands '[]' executions were completed, status 'SUCCEEDED'
2016-11-21 10:00:29,284 INFO [org.ovirt.engine.core.bll.storage.disk.cinder.CloneSingleCinderDiskCommandCallback] (DefaultQuartzScheduler9) [4c8e58b8] Command 'CloneSingleCinderDisk' id: 'a76696d7-f698-4591-9a26-888f47462888' Updating status to 'SUCCEEDED', The command end method logic will be executed by one of its parent commands.
Packages:
ovirt-engine-jboss-as-7.1.1-1.el7.centos.x86_64
ovirt-vmconsole-proxy-1.0.4-1.el7.centos.noarch
ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch
ovirt-engine-setup-base-4.0.5.5-1.el7.centos.noarch
ovirt-guest-agent-common-1.0.12-3.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.0.5.5-1.el7.centos.noarch
ovirt-host-deploy-1.5.3-1.el7.centos.noarch
ovirt-engine-websocket-proxy-4.0.5.5-1.el7.centos.noarch
ovirt-engine-extensions-api-impl-4.0.5.5-1.el7.centos.noarch
ovirt-engine-wildfly-10.1.0-1.el7.x86_64
ovirt-engine-dbscripts-4.0.5.5-1.el7.centos.noarch
ovirt-engine-restapi-4.0.5.5-1.el7.centos.noarch
ovirt-vmconsole-1.0.4-1.el7.centos.noarch
ovirt-release36-3.6.6-1.noarch
ovirt-engine-lib-4.0.5.5-1.el7.centos.noarch
ovirt-setup-lib-1.0.2-1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.0.5.5-1.el7.centos.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.0.5.5-1.el7.centos.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.0.5.5-1.el7.centos.noarch
ovirt-host-deploy-java-1.5.3-1.el7.centos.noarch
ovirt-iso-uploader-4.0.2-1.el7.centos.noarch
ovirt-engine-setup-4.0.5.5-1.el7.centos.noarch
ovirt-engine-extension-aaa-jdbc-1.1.1-1.el7.noarch
ovirt-imageio-common-0.4.0-1.el7.noarch
ovirt-imageio-proxy-setup-0.4.0-0.201608310602.gita9b573b.el7.centos.noarch
ovirt-image-uploader-4.0.1-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
ovirt-release40-4.0.5-2.noarch
ovirt-engine-tools-4.0.5.5-1.el7.centos.noarch
ovirt-engine-dashboard-1.0.5-1.el7.centos.noarch
ovirt-engine-backend-4.0.5.5-1.el7.centos.noarch
ovirt-engine-4.0.5.5-1.el7.centos.noarch
ovirt-engine-dwh-4.0.5-1.el7.centos.noarch
ovirt-engine-cli-3.6.8.1-1.el7.centos.noarch
ovirt-imageio-proxy-0.4.0-0.201608310602.gita9b573b.el7.centos.noarch
ovirt-guest-tools-iso-4.0-1.fc23.noarch
ovirt-engine-dwh-setup-4.0.5-1.el7.centos.noarch
python-ovirt-engine-sdk4-4.0.2-1.el7.centos.x86_64
ovirt-engine-vmconsole-proxy-helper-4.0.5.5-1.el7.centos.noarch
ovirt-engine-tools-backup-4.0.5.5-1.el7.centos.noarch
ovirt-engine-webadmin-portal-4.0.5.5-1.el7.centos.noarch
ovirt-engine-userportal-4.0.5.5-1.el7.centos.noarch
Should you have any further questions, we are of course happy to help at
any time.
With greetings from Dortmund,
Thomas Klute
--
________________________________________________________________________
Dipl.-Inform. Thomas Klute klute(a)ingenit.com
Geschäftsführer / CEO
----------------------------------------------------------------------
ingenit GmbH & Co. KG Tel. +49 (0)231 58 698-120
Emil-Figge-Strasse 76-80 Fax. +49 (0)231 58 698-121
D-44227 Dortmund www.ingenit.com
Court of registration: Amtsgericht Dortmund, HRA 13 914
Partners: Thomas Klute, Marc-Christian Schröer
________________________________________________________________________