VDI broker and oVirt
by oquerejazu@gmail.com
Hello,
What is the name of the broker that we can install with oVirt? Is there
any documentation?
Thanks!!
Controller recommendation - LSI2008/9265
by Leo David
Hi Everyone,
For a hyperconverged setup starting with 3 nodes and growing over time to
up to 12 nodes, I have to choose between LSI2008 (JBOD) and LSI9265 (RAID).
A PERC H710 (RAID) might be an option too, but on a different chassis.
There will not be many disks installed in each node, so the layout will
be replica 3 distributed-replicated volumes across the nodes as:
node1/disk1 node2/disk1 node3/disk1
node1/disk2 node2/disk2 node3/disk2
and so on...
As I add nodes to the cluster, I intend to expand the volumes using the
same rule.
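For illustration, the initial volume and a later expansion following that
rule could look roughly like this (a minimal sketch; the volume name, host
names and brick paths below are placeholders, not my actual setup):

  # create a replica 3 distributed-replicated volume from two brick sets
  # (disk1 and disk2 on each of the three nodes)
  gluster volume create data replica 3 \
    node1:/gluster_bricks/disk1/data node2:/gluster_bricks/disk1/data node3:/gluster_bricks/disk1/data \
    node1:/gluster_bricks/disk2/data node2:/gluster_bricks/disk2/data node3:/gluster_bricks/disk2/data
  gluster volume start data

  # later, when three more nodes join, extend the volume with a new replica set
  gluster volume add-brick data replica 3 \
    node4:/gluster_bricks/disk1/data node5:/gluster_bricks/disk1/data node6:/gluster_bricks/disk1/data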
Which would be the better way: to use JBOD cards (no cache), or a RAID
card with a RAID0 array created for each disk, and therefore a bit of RAID
cache (512 MB)?
Is RAID caching a benefit to have underneath oVirt/Gluster as long as I am
going for a "JBOD"-style installation anyway?
Thank you very much !
--
Best regards, Leo David
Broker ovirt
by Oscar Querejazu
Hello,
I'm looking for which broker I can use with oVirt, so that I can deploy
desktops based on templates.
status of uploaded images complete and not OK
by Gianluca Cecchi
Hello,
I'm using oVirt 4.3.2 and I have an iSCSI based storage domain.
I see that if I upload an ISO image or a qcow2 file, at the end it remains
in "Complete" status and not "OK".
See here:
https://drive.google.com/file/d/1rTuVB1_MGxudVCx-ok7mE2BirWF0rx7x/view?us...
I can use them, but it seems a bit strange and in fact I had a problem in
putting a host into maintenance due to this. See here:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4UJ75YMYSPW...
In fact, from the DB point of view I can see this (including 2 more uploads
done in March as described above), where the status remains in phase 9:
engine=# select last_updated, message, bytes_sent, bytes_total, active, phase
         from image_transfers;
        last_updated        |        message        | bytes_sent  | bytes_total | active | phase
----------------------------+-----------------------+-------------+-------------+--------+-------
 2019-03-14 16:22:11.728+01 | Finalizing success... |   524288000 |   565182464 | t      |     9
 2019-03-14 16:26:00.03+01  | Finalizing success... |  6585057280 |  6963593216 | t      |     9
 2019-04-04 17:16:29.889+02 | Finalizing success... | 21478375424 | 21478375424 | f      |     9
 2019-04-04 19:35:28.688+02 | Finalizing success... |  4542431232 |  4588568576 | t      |     9
(4 rows)
engine=#
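For completeness, the same state should be visible through the REST API as
well, without querying the DB directly; a rough sketch (the engine FQDN and
credentials below are placeholders):

  # list image transfers and their phase via the engine API
  curl -s -k -u admin@internal:password \
    -H 'Accept: application/xml' \
    https://engine.example.com/ovirt-engine/api/imagetransfers \
    | grep -E '<id>|<phase>'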
Any hint? Can anyone verify an upload on block storage on 4.3.2?
Thanks,
Gianluca
Eve-ng KVM acceleration not working in oVirt
by robertodg@prismatelecomtesting.com
Hello everyone!
I'm currently encountering a big problem that I'll explain in steps, in order to be clear. First of all, I deployed eve-ng Community Edition 2.0.3-92 on a KVM host with these specs:
Server DELL PowerEdge R420
CPU: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (24 Core total)
RAM: 24GB RAM
HDD: 4x 2TB
Everything was working fine: KVM virtualization was working well and I was able to deploy QEMU images like Palo Alto (PAN-OS 8.0.5) with the QEMU option accel=kvm without any problem.
After an internal company upgrade, I decided to migrate this tool to oVirt 4.2, on a node with the following specs:
SuperMicro Server
CPU: Intel (R) Xeon (R) Platinum 8160T @ 2.10GHz (96 Core Total)
RAM: 256 GB (16x16) DDR4 2666Mhz Samsung M393A2K40BB2-CTD
HDD: x 2 Seagate ST500LM021-1KJ152 SATA 3.0 500GB
SSD: x 2 SSD Samsung SSD 850 PRO 1TB
NVMe: x 4 KXG50ZNV512G TOSHIBA 512GB (RAID0)
Before the migration eve-ng had 12 cores, 12 GB RAM and a 90 GB HDD. After the migration it now has 16 cores, 32 GB RAM and 100 GB on RAID0.
On oVirt, Pass-Through Host CPU is enabled on the VM and CPU nesting is enabled too; after the new release (2.0.5-95) these are the eve-info command logs:
eve-ng-info-ovirt: https://paste.fedoraproject.org/paste/Mom-~CXmnlU3cWCUdxPXFg
These are the previous eve-info command logs (on KVM):
eve-ng-info-kvm: https://paste.fedoraproject.org/paste/AhqiapiyG5lJcHQBlrbJgw
Also, here you can find the cpuinfo output of both nodes (KVM and oVirt):
cpuinfo-kvm: https://paste.fedoraproject.org/paste/6TQDHxCmiEMosF1qtcdSDQ
cpuinfo-ovirt: https://paste.fedoraproject.org/paste/LmieowOvc7WwkEWW9JMduQ
On both nodes, the output of cat /sys/module/kvm_intel/parameters/nested is Y.
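For reference, a rough sketch of the checks that seem relevant here (host
side vs. guest side), in case it helps:

  # on the oVirt host: nested support must be enabled in the kvm_intel module
  cat /sys/module/kvm_intel/parameters/nested   # expected: Y (or 1)

  # inside the eve-ng VM: the vmx flag must be exposed to the guest CPU,
  # otherwise accel=kvm cannot work for the nested QEMU instances
  grep -c -w vmx /proc/cpuinfo                  # expected: greater than 0
  ls -l /dev/kvm                                # /dev/kvm should exist in the guest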
Lastly, this is the output of virsh dumpxml on both nodes:
eve-ng-ovirt.xml: https://paste.fedoraproject.org/paste/s5SbdWhBHTWJs9O2Be~y8w
eve-ng-kvm.xml: https://paste.fedoraproject.org/paste/Yz1E9taAkNT-JyTj8cBR-g
However, after the migration I'm no longer able to deploy QEMU images with accel=kvm enabled. With the same example, PAN-OS 8.0.5, I'm able to run this QEMU image only with that setting removed.
Suspecting a migration problem, I deployed a new eve-ng VM with the same specs, but the problem is still present.
Do you have any idea what the problem might be?
Thanks for your support.
network used for the upload API
by Nathanaël Blanchet
Hi all,
I noticed a poor transfer rate when uploading a floating disk through the
upload API.
I set up two networks: one for management with a dedicated 1 Gb link, and
another 10 Gb network for the other VLANs.
It seems that the upload goes through the management network; can anyone
confirm it?
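For reference, a rough way to see where the data actually flows (the engine
FQDN and credentials below are placeholders): the running transfer entity in
the API should expose the endpoints used for the upload.

  # proxy_url points at the engine's imageio proxy, transfer_url at the
  # imageio daemon on the chosen host; the network used depends on which
  # interface those addresses resolve to
  curl -s -k -u admin@internal:password \
    https://engine.example.com/ovirt-engine/api/imagetransfers \
    | grep -E '<proxy_url>|<transfer_url>'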
--
Nathanaël Blanchet
Network monitoring
IT Infrastructure Department
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
UI bug viewing/editing host
by Callum Smith
2019-04-04 10:43:35,383Z ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-15) [] Permutation name: 0D2DB7A91B469CC36C64386E5632FAC5
2019-04-04 10:43:35,383Z ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-15) [] Uncaught exception: com.google.gwt.core.client.JavaScriptException: (TypeError) : oab(...) is null
at org.ovirt.engine.ui.webadmin.section.main.view.popup.host.HostPopupView.$lambda$0(HostPopupView.java:693)
at org.ovirt.engine.ui.webadmin.section.main.view.popup.host.HostPopupView$lambda$0$Type.eventRaised(HostPopupView.java:693)
at org.ovirt.engine.ui.uicompat.Event.$raise(Event.java:99)
at org.ovirt.engine.ui.uicommonweb.models.ListModel.$setSelectedItem(ListModel.java:82)
at org.ovirt.engine.ui.uicommonweb.models.ListModel.setSelectedItem(ListModel.java:78)
at org.ovirt.engine.ui.uicommonweb.models.ListModel.itemsChanged(ListModel.java:236)
at org.ovirt.engine.ui.uicommonweb.models.ListModel.$itemsChanged(ListModel.java:224)
at org.ovirt.engine.ui.uicommonweb.models.ListModel.$setItems(ListModel.java:102)
at org.ovirt.engine.ui.uicommonweb.models.hosts.HostModel.$updateClusterList(HostModel.java:1037)
at org.ovirt.engine.ui.uicommonweb.models.hosts.HostModel.$lambda$13(HostModel.java:1017)
at org.ovirt.engine.ui.uicommonweb.models.hosts.HostModel$lambda$13$Type.onSuccess(HostModel.java:1017)
at org.ovirt.engine.ui.frontend.Frontend$1.$onSuccess(Frontend.java:227) [frontend.jar:]
at org.ovirt.engine.ui.frontend.Frontend$1.onSuccess(Frontend.java:227) [frontend.jar:]
at org.ovirt.engine.ui.frontend.communication.OperationProcessor$1.$onSuccess(OperationProcessor.java:133) [frontend.jar:]
at org.ovirt.engine.ui.frontend.communication.OperationProcessor$1.onSuccess(OperationProcessor.java:133) [frontend.jar:]
at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:270) [frontend.jar:]
at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:270) [frontend.jar:]
at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.onResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:]
at com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:233) [gwt-servlet.jar:]
at com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409) [gwt-servlet.jar:]
at Unknown.onreadystatechange<(https://he.virt.in.bmrc.ox.ac.uk/ovirt-engine/webadmin/?locale=en_US#host...)
at com.google.gwt.core.client.impl.Impl.apply(Impl.java:236) [gwt-servlet.jar:]
at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:275) [gwt-servlet.jar:]
at Unknown.Su/<(https://he.virt.in.bmrc.ox.ac.uk/ovirt-engine/webadmin/?locale=en_US#host...)
at Unknown.anonymous(Unknown)
2019-04-04 10:43:40,636Z ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-15) [] Permutation name: 0D2DB7A91B469CC36C64386E5632FAC5
2019-04-04 10:43:40,636Z ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-15) [] Uncaught exception: com.google.gwt.event.shared.UmbrellaException: Exception caught: (TypeError) : oab(...) is null
at java.lang.Throwable.Throwable(Throwable.java:70) [rt.jar:1.8.0_201]
at java.lang.RuntimeException.RuntimeException(RuntimeException.java:32) [rt.jar:1.8.0_201]
at com.google.web.bindery.event.shared.UmbrellaException.UmbrellaException(UmbrellaException.java:64) [gwt-servlet.jar:]
at com.google.gwt.event.shared.UmbrellaException.UmbrellaException(UmbrellaException.java:25) [gwt-servlet.jar:]
at com.google.gwt.event.shared.HandlerManager.$fireEvent(HandlerManager.java:117) [gwt-servlet.jar:]
at com.google.gwt.user.client.ui.Widget.$fireEvent(Widget.java:127) [gwt-servlet.jar:]
at com.google.gwt.user.client.ui.Widget.fireEvent(Widget.java:127) [gwt-servlet.jar:]
at com.google.gwt.event.dom.client.DomEvent.fireNativeEvent(DomEvent.java:110) [gwt-servlet.jar:]
at com.google.gwt.user.client.ui.Widget.$onBrowserEvent(Widget.java:163) [gwt-servlet.jar:]
at com.google.gwt.user.client.ui.Widget.onBrowserEvent(Widget.java:163) [gwt-servlet.jar:]
at com.google.gwt.user.client.DOM.dispatchEvent(DOM.java:1415) [gwt-servlet.jar:]
at com.google.gwt.user.client.impl.DOMImplStandard.dispatchEvent(DOMImplStandard.java:312) [gwt-servlet.jar:]
at com.google.gwt.core.client.impl.Impl.apply(Impl.java:236) [gwt-servlet.jar:]
at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:275) [gwt-servlet.jar:]
at Unknown.Su/<(https://he.virt.in.bmrc.ox.ac.uk/ovirt-engine/webadmin/?locale=en_US#host...)
at Unknown.anonymous(Unknown)
Caused by: com.google.gwt.core.client.JavaScriptException: (TypeError) : oab(...) is null
at org.ovirt.engine.ui.uicommonweb.models.hosts.HostListModel.$onSave(HostListModel.java:816)
at org.ovirt.engine.ui.uicommonweb.models.hosts.HostListModel.executeCommand(HostListModel.java:1969)
at org.ovirt.engine.ui.uicommonweb.UICommand.$execute(UICommand.java:163)
at org.ovirt.engine.ui.common.presenter.AbstractModelBoundPopupPresenterWidget.$lambda$4(AbstractModelBoundPopupPresenterWidget.java:306)
at org.ovirt.engine.ui.common.presenter.AbstractModelBoundPopupPresenterWidget$lambda$4$Type.onClick(AbstractModelBoundPopupPresenterWidget.java:306)
at com.google.gwt.event.dom.client.ClickEvent.dispatch(ClickEvent.java:55) [gwt-servlet.jar:]
at com.google.gwt.event.shared.GwtEvent.dispatch(GwtEvent.java:76) [gwt-servlet.jar:]
at com.google.web.bindery.event.shared.SimpleEventBus.$doFire(SimpleEventBus.java:173) [gwt-servlet.jar:]
... 12 more
Clean install ovirt-node-ng-4.3.0-0.20190204.0+1
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum(a)well.ox.ac.uk
[ANN] oVirt 4.3.3 Second Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.3 Second Release Candidate, as of April 4th, 2019.
This update is a release candidate of the third in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.
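For a quick test on an existing engine, the usual upgrade flow is roughly
the following (a minimal sketch, assuming the 4.3 pre-release repository is
already enabled on the machine; the release notes remain the authoritative
procedure):

  # update the setup packages, run the upgrade, then update the rest
  yum update "ovirt-*-setup*"
  engine-setup
  yum update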
Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]
Additional Resources:
* Read more about the oVirt 4.3.3 release highlights:
http://www.ovirt.org/release/4.3.3/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.3/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
Re: [Gluster-users] Gluster 5.5 slower than 3.12.15
by Strahil
Hi Amar,
I would like to test Gluster v6, but as I'm quite new to oVirt I'm not sure if oVirt <-> Gluster will communicate properly.
Did anyone test a rollback from v6 to v5.5? If a rollback is possible, I would be happy to give it a try.
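One thing I would check before and after such a test (a rough sketch, not a
full rollback procedure) is the cluster op-version, since a downgrade is
generally only safe as long as the op-version has not been bumped to the new
release's level:

  # current operating version of the cluster and the maximum supported one
  gluster volume get all cluster.op-version
  gluster volume get all cluster.max-op-version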
Best Regards,
Strahil Nikolov

On Apr 3, 2019 11:35, Amar Tumballi Suryanarayan <atumball(a)redhat.com> wrote:
>
> Strahil,
>
> With some basic testing, we are noticing the similar behavior too.
>
> One of the issues we identified was increased network usage in the 5.x series (being addressed by https://review.gluster.org/#/c/glusterfs/+/22404/), and there are a few other features which write extended attributes, which caused some delay.
>
> We are in the process of publishing some numbers comparing release-3.12.x, release-5 and release-6 soon. From the numbers we already have, release-6 is currently giving really good performance in many configurations, especially for the 1x3 replicate volume type.
>
> While we continue to identify and fix issues in the 5.x series, one request is to validate release-6.x (6.0, or 6.1 which is due on April 10th), so you can see the difference in your workload.
>
> Regards,
> Amar
>
>
>
> On Wed, Apr 3, 2019 at 5:57 AM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>>
>> Hi Community,
>>
>> I have the feeling that with Gluster v5.5 I get poorer performance than I used to get on 3.12.15. Did you observe something like that?
>>
>> I have a 3-node hyperconverged cluster (oVirt + GlusterFS with replica 3 arbiter 1 volumes) with NFS-Ganesha, and since I upgraded to v5 the issues came up.
>> First it was the notorious 5.3 experience, and now with 5.5 my sanlock is having problems and higher latency than it used to. I have switched from NFS-Ganesha to pure FUSE, but the latency problems do not go away.
>>
>> Of course, this is partially due to the consumer hardware, but as the hardware has not changed I was hoping that the performance would remain as it was.
>>
>> So, do you expect 5.5 to perform worse than 3.12?
>>
>> Some info:
>> Volume Name: engine
>> Type: Replicate
>> Volume ID: 30ca1cc2-f2f7-4749-9e2e-cee9d7099ded
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt1:/gluster_bricks/engine/engine
>> Brick2: ovirt2:/gluster_bricks/engine/engine
>> Brick3: ovirt3:/gluster_bricks/engine/engine (arbiter)
>> Options Reconfigured:
>> performance.client-io-threads: off
>> nfs.disable: on
>> transport.address-family: inet
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.low-prio-threads: 32
>> network.remote-dio: off
>> cluster.eager-lock: enable
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> cluster.data-self-heal-algorithm: full
>> cluster.locking-scheme: granular
>> cluster.shd-max-threads: 8
>> cluster.shd-wait-qlength: 10000
>> features.shard: on
>> user.cifs: off
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> network.ping-timeout: 30
>> performance.strict-o-direct: on
>> cluster.granular-entry-heal: enable
>> cluster.enable-shared-storage: enable
>>
>> Network: 1 gbit/s
>>
>> Filesystem: XFS
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users(a)gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> Amar Tumballi (amarts)