On 27 October 2016 at 10:42, <joost@familiealbers.nl> wrote:
Hi all, I have two oVirt / fakevdsm questions.

I am using fakevdsm to test my oVirt 4.0.5 engine.
I have a somewhat awkward setup with a relatively large number of DCs.
I cannot get the NFS master domain to work from the API.
The API code is solid and is already used against an ovirt-engine 3.5 production cluster.

 
When creating the NFS storage domain I have no issues (it is created with id 00000-00000, more zeros here......), but when I try to attach it to a DC it fails.
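For reference, the attach step boils down to a call like this (a sketch only; the engine URL, credentials and both UUIDs below are placeholders, not my real values):

```bash
# Sketch: attach an existing storage domain to a data center via the REST API.
# Creating the domain works; it is this POST against the DC that fails.
curl -k -u admin@internal:password \
     -H "Content-Type: application/xml" \
     -d '<storage_domain id="SD_UUID"/>' \
     https://engine.example.com/ovirt-engine/api/datacenters/DC_UUID/storagedomains
```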

Please share the logs.
 
I am not sure whether this is an oVirt or a fakevdsm issue. When I manually add the NFS storage domain it does get created, which is somewhat worrying.

Manually?
 
My real aim would be to run 500 DCs with 2 hosts and around 5 VMs.


You'd have to have a host per DC, because every DC has an SPM. Did you mean clusters, or am I missing something?
 
Not being able to use the NFS master storage domain is one of the issues.

To work around this I decided to try to create 1000 DCs, each with a single host using local storage.
Although this works, it looks like the hosts and VMs within the DCs do not actually have their own storage pool.
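The DC creation itself is along these lines (again a sketch with placeholder engine URL, credentials and naming, not my actual script):

```bash
# Sketch: create N data centers backed by local storage via the REST API.
for i in $(seq 1 1000); do
  curl -k -u admin@internal:password \
       -H "Content-Type: application/xml" \
       -d "<data_center><name>dc${i}</name><local>true</local></data_center>" \
       https://engine.example.com/ovirt-engine/api/datacenters
done
```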

In our prod environment we run, at a much lower scale, several DCs with shared Gluster storage (shared between the two hosts in the cluster).
There, each DC is considered its own storage pool.

In my tests, things fail when reaching DC/cluster/host number 249; this is likely because of the value of 'MaxNumberOfHostsInStoragePool'. I would expect this limit to be bound to the DCs and not to the engine as a whole.

In short, I would expect each DC to be able to run 249 hosts, as each DC is its own storage pool?

I'm not sure I understand the question. However, the limitation is soft and can be overridden.
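If you need a higher limit, a direct vdc_options update along the lines of the one further down should do it (a sketch; double-check the exact option name in your vdc_options table first):

```bash
# Sketch: raise the per-storage-pool host limit.
# Verify the exact option name first; "engine" is the default engine DB name.
sudo -i -u postgres
psql engine -c "UPDATE vdc_options SET option_value = '500' WHERE option_name = 'MaxNumberOfHostsInStoragePool';"
# Restart ovirt-engine afterwards so the change takes effect
```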
 
Similarly, when I look at the oVirt UI, some hosts actually show that they are running the total number of running VMs.

Yes, when running with fake vdsm you must set this config value as well:

```bash
# Run as the postgres user against the engine database
sudo -i -u postgres
export ENGINE_DB=dbname   # your engine database name, typically "engine"
psql $ENGINE_DB -c "UPDATE vdc_options SET option_value = 'true' WHERE option_name = 'UseHostNameIdentifier';"
# Restart ovirt-engine afterwards so the new value is picked up
```

I've sent a patch to Gerrit for that: https://gerrit.ovirt.org/65804


Again, I am not sure if I am correct in assuming each DC is its own storage pool.

Correct. DC = Storage Pool 

Finally.
As this setup is somewhat awkward, I would like to tweak the various flags in ovirt-engine to be more relaxed about obtaining data from the hosts and storage domains.

I believe ovirt-engine currently checks each host every 3 seconds, but I am guessing I can live with, for example, every 60 seconds; the same would apply to the storage part. What I am not sure about, though, is whether there is a golden ratio between these config items.
E.g. there is VdsRefreshRate; if I set this to 60 instead of 3, do I also need to change the vdsTimeout value, which is set to 180 by default?

Change VdsRefreshRate to 20s, but bear in mind that we have a limitation when starting up a VM: its status will only show as Up after the next 20s interval.
Please report back with your findings on the above tweak.
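For example (a sketch; I'm assuming both keys are exposed through engine-config, otherwise the vdc_options route shown above works as well):

```bash
# Sketch: relax the host polling interval and review the VDSM call timeout.
# Both changes require an ovirt-engine restart to take effect.
engine-config -s VdsRefreshRate=20
engine-config -s vdsTimeout=180   # 180 is the default; only raise it if calls start timing out
systemctl restart ovirt-engine
```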
 
Sorry for the length of this message.
The most critical items are:
Should I worry about not being able to create an NFS master domain, or is this a fakevdsm thing?
Why are my DCs for some reason related to the main storage pool, or am I right in thinking that a DC + cluster + host with local storage is its own storage pool?

Thank you so much.






_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users