Engine cannot support CORS with OpenID Connect
by du_hongyu@yeah.net
Hi,
I wrote an OpenID Connect client to authenticate my web app against ovirt-engine 4.2.0.
This is my engine config:
[root@engine ~]# engine-config -a |grep CORS
CORSSupport: true version: general
CORSAllowedOrigins: https://10.110.128.129 version: general
CORSAllowDefaultOrigins: true version: general
CORSDefaultOriginSuffixes: :9090 version: general
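(For reference, values like these are set with engine-config followed by an engine restart; a quick sketch, with the origin above as the example:
[root@engine ~]# engine-config -s CORSAllowedOrigins=https://10.110.128.129
[root@engine ~]# systemctl restart ovirt-engine
)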
but I am facing this error:
Access to fetch at 'https://10.110.128.120/ovirt-engine/sso/openid/authorize?response_type=co...' (redirected from 'https://10.110.128.129/console/auth/login') from origin 'https://10.110.128.129' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: Redirect is not allowed for a preflight request.
The engine log (I added log statements) shows:
2019-04-04 10:16:52,491+08 INFO [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-4) [] engine=======strippedScope [openid, ovirt-app-portal, ovirt-app-admin, ovirt-app-api, ovirt-ext=token:password-access, ovirt-ext=auth:sequence-priority, ovirt-ext=token:login-on-behalf, ovirt-ext=token-info:authz-search, ovirt-ext=token-info:public-authz-search, ovirt-ext=token-info:validate, ovirt-ext=revoke:revoke-all]
2019-04-04 10:16:52,492+08 INFO [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-4) [] engine=======requestedScope [openid, ovirt-app-admin, ovirt-app-api, ovirt-app-portal, ovirt-ext=auth:sequence-priority, ovirt-ext=revoke:revoke-all, ovirt-ext=token-info:authz-search, ovirt-ext=token-info:public-authz-search, ovirt-ext=token-info:validate, ovirt-ext=token:login-on-behalf, ovirt-ext=token:password-access]
Also, I realized that this Java source is not being executed; I cannot find any of its log output in engine.log:
backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/servlet/CORSSupportFilter.java
Regards
Hongyu Du
Re: VM status not updating after update
by Strahil
Sadly, I couldn't find the e-mail with the fix.
As far as I remember, there should be a bug opened for that.
Best Regards,
Strahil Nikolov
On Apr 3, 2019 13:53, Marcelo Leandro <marceloltmm(a)gmail.com> wrote:
>
> Hi,
>
> Strahil, can you help me?
>
> Many thanks,
>
> Marcelo Leandro
>
> On Tue, Apr 2, 2019 at 10:02, Marcelo Leandro <marceloltmm(a)gmail.com> wrote:
>>
>> Sorry, I can't find this.
>>
>> On Tue, Apr 2, 2019 at 09:49, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>>>
>>> I think I have already seen a solution on the mailing lists. Can you check and apply the fix mentioned there?
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Tuesday, April 2, 2019 at 14:39:10 GMT+3, Marcelo Leandro <marceloltmm(a)gmail.com> wrote:
>>>
>>>
>>> Hi. After updating my hosts to oVirt Node 4.3.2 with vdsm version vdsm-4.30.11-1.el7, my VMs' status does not update. If I do anything with a VM, like shutdown or migrate, the status does not change; only restarting vdsm on the host where the VM is running helps.
>>>
>>> vdsmd status:
>>>
>>> ERROR Internal server error
>>> Traceback (most recent call last):
>>> File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request..
>>>
>>> Thanks,
vdsClient in oVirt 4.3
by nicolas@devels.es
Hi,
In oVirt 4.1 we used this command to set a volume as LEGAL:
vdsClient -s <host> setVolumeLegality sdUUID spUUID imgUUID leafUUID LEGAL
What would be the equivalent to this command using vdsm-client in oVirt 4.3?
Thanks.
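A hedged sketch of a likely equivalent, assuming the jsonrpc verb Volume.setLegality exists and takes the usual identifiers (verify with "vdsm-client Volume -h" on a host first; the UUID placeholders map to the old positional arguments):
vdsm-client Volume setLegality storagepoolID=<spUUID> storagedomainID=<sdUUID> imageID=<imgUUID> volumeID=<leafUUID> legality=LEGAL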
VM status not updating after update
by Marcelo Leandro
Hi. After updating my hosts to oVirt Node 4.3.2 with vdsm version vdsm-4.30.11-1.el7, my VMs' status does not update. If I do anything with a VM, like shutdown or migrate, the status does not change; only restarting vdsm on the host where the VM is running helps.
vdsmd status:
ERROR Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request..
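(For clarity, restarting vdsm on the affected host means something like:
# systemctl restart vdsmd
)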
Thanks,
Actual size bigger than virtual size
by suporte@logicworks.pt
Hi,
I have an all-in-one oVirt 4.2.2 with gluster storage and a couple of Windows 2012 VMs.
One W2012 VM shows an actual size of 209 GiB and a virtual size of 150 GiB on a thin-provisioned disk. The VM itself shows 30.9 GB of used space.
This VM is slower than the others; in particular, rebooting it takes around 2 hours.
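One way to compare the allocation against the virtual size is qemu-img on the host; a sketch, where the path under /rhev/data-center and the UUID placeholders come from the disk's properties in the UI:
# qemu-img info /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd_uuid>/images/<disk_uuid>/<vol_uuid>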
Any idea?
Thanks
--
Jose Ferradeira
http://www.logicworks.pt
Why/how does oVirt fsync over NFS
by Karli Sjöberg
Hello!
This question arose after the (in my opinion) hasty response from a user
who thinks oVirt is bad because it cares about the integrity of your data:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QPVXUFW2U3W...
Fine, that's their choice. They can have silent data corruption all they
want in case of a power failure or something; it's up to them if they
think it's worth it.
However, I started wondering why oVirt syncs everything over NFS. I
know it does, because in ZFS you can turn it off with "zfs set
sync=disabled <dataset>" and throughput goes through the roof. The
right way is of course to add a SLOG, or better yet a mirrored pair,
but just for a quick test or whatever...
But there doesn't seem to be any reason why oVirt should do that; that's
why I'm asking. Look, this is from the CentOS 'mount' man page:
"defaults
Use default options: rw, suid, dev, exec, auto, nouser,
and async."
So therefore 'sync' should be explicitly set on the mount, right? Wrong!
# mount | grep -c sync
0
So why then does it sync? The NFS man page says:
"The sync mount option
The NFS client treats the sync mount option differently than some
other file systems (refer to mount(8) for a description of the generic
sync and async mount options). If neither sync nor async is specified
(or if the async option is specified), the NFS client delays sending
application writes to the server until any of these events occur:
Memory pressure forces reclamation of system memory resources.
An application flushes file data explicitly with sync(2),
msync(2), or fsync(3).
An application closes a file with close(2).
The file is locked/unlocked via fcntl(2)."
So which of these things is happening that makes the host send sync
calls to the NFS server?
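One way to check empirically is to trace the qemu process backing a VM on the host; a sketch, where the process-matching pattern is an assumption to adjust for your setup:
# strace -f -e trace=fsync,fdatasync,sync_file_range -p $(pgrep -f qemu-kvm | head -n1)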
Just curious...
/K
4.2 / 4.3 : Moving the hosted-engine to another storage
by andreas.elvers+ovirtforum@solutions.work
Hi friends of oVirt,
Roughly 3 years ago a user asked about the options he had to move the hosted engine to some other storage.
The answer by Simone Tiraboschi was that it would largely not be possible, because of references in the database to the node the engine was hosted on. This information would prevent a successful move of the engine even with backup/restore.
The situation seems to have improved, but I'm not sure, so I'm asking.
We have to move our engine away from our older cluster with NFS storage backends (engine, volumes, iso-images).
The engine should be restored on our new cluster, which has a gluster volume available for the engine. Additionally, this 3-node cluster is running guests from a Cinder/Ceph storage domain.
I want to restore the engine on a different cluster to a different storage domain.
Reading the documentation at https://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Resto... I am wondering whether oVirt Nodes (formerly Node-NG) are capable of restoring an engine at all. Do I need EL-based Nodes? We are currently running on oVirt Nodes.
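For reference, my understanding of the flow from that document is roughly this (a sketch; file names are placeholders, and whether it works from oVirt Node is exactly what I'm asking):
# engine-backup --mode=backup --file=engine-backup.tar.bz2 --log=backup.log
# hosted-engine --deploy --restore-from-file=engine-backup.tar.bz2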
- Andreas
Re: oVirt Performance (Horrific)
by Strahil
Hi Drew,
What is the host RAM size, and what are the settings for vm.dirty_ratio and vm.dirty_background_ratio on those hosts?
What about your iSCSI target?
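(To check the dirty ratios, something like this on each host:
# sysctl vm.dirty_ratio vm.dirty_background_ratio
)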
Best Regards,
Strahil Nikolov
On Mar 11, 2019 23:51, Drew Rash <drew.rash(a)gmail.com> wrote:
>
> Added the disable:false setting, removed the gluster domain, re-added it using NFS. Performance is still in the low 10s of MBps, plus or minus 5.
> Ran showmount -e "" and it displayed the mount.
>
> Trying right now to re-mount using gluster with a negative-timeout=1 option.
>
> We converted one of our 4 boxes to FreeNAS, took 4 6TB drives, made a RAID iSCSI target, and connected it to oVirt. Booted Windows (times 2: two boxes with a 7GB file on each) and copied from one to the other; it copied at 600 MBps average. But then it has weird pauses... I think it's doing some kind of caching: it'll go about 2GB and choke to zero Bps, then speed up and choke, speed up and choke, averaging or getting up to 10 MBps. Then at 99% it waits 15 seconds with 0 bytes left...
> Small files are basically instant. No complaints there.
> So... WAY faster, but it suffers from the same thing... it just requires writing some more to get there: a few gigs and then it crawls.
>
> It seems to be related to whether I JUST finished running a test. If I wait a while, I can get it to copy almost 4GB or so before choking.
> I made a 3rd Windows 10 VM and copied the same file from the 1st to the 2nd (via a Windows share, from the 3rd box), and it didn't choke or do any funny business... oddly. Maybe a fluke; I only did that once.
>
> So... switching to FreeNAS appears to have increased the window size before it runs horribly, but it will still run horrifically if the disk is busy.
>
> And since we're planning on doing actual work on this... relying on idle disks catching up via some hidden caching behavior of oVirt isn't gonna work. We won't be writing gigs of data all over the place... but knowing that this chokes a VM to near death... is scary.
>
> It looks like a Windows 10 install expects at least 15 MB/s with less than 1s latency to operate correctly. Otherwise services don't start, weird stuff happens, and it runs slower than my dog while pooping out that extra little stringy bit near the end. So we gotta avoid that.
>
>
>
>
> On Sat, Mar 9, 2019 at 12:44 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
>>
>> Hi Drew,
>>
>> For the test, change the gluster parameter nfs.disable to false.
>> Something like: gluster volume set volname nfs.disable false
>>
>> Then use showmount -e gluster-node-fqdn
>> Note: NFS might not be allowed in the firewall.
>>
>> Then add this NFS domain (don't forget to remove the gluster storage domain before that) and do your tests.
>>
>> If it works well, you will have to reset nfs.disable (turning the built-in gNFS back off) and deploy NFS-Ganesha:
>>
>> gluster volume reset volname nfs.disable
>>
>> Best Regards,
>> Strahil Nikolov