On 11/5/20 1:22 PM, Vojtech Juranek wrote:
>>> IMO OST should be made easy to interact with from your main
>>> development machine.
>> TBH I didn't see much interest in running OST on developers' machines.
> as it's a little bit complex to set up?
Definitely, that's why the playbook and 'setup_for_ost.sh' were created.
I hope it will be the long-term solution for this problem.
> Making it easier would maybe increase the number of people
> contributing to OST ...
But that's a chicken-and-egg problem - who else is going to contribute
to OST if not us? If we want the setup to be easier, then let's work on it.
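
To make that concrete, this is roughly what the setup amounts to today - a
minimal sketch, assuming a Fedora/EL box with Ansible available; the exact
arguments may differ, so please check the README in the repo:

    # rough sketch - prepares a machine for running OST locally
    git clone https://github.com/oVirt/ovirt-system-tests.git
    cd ovirt-system-tests
    ./setup_for_ost.sh    # the setup script / playbook mentioned above; run it on the machine that will host OST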
>> People are mostly using manual OST runs to verify things and that is what
>> most of the efforts focus on. It's not that I wouldn't like OST to be more
>> developer-friendly, I definitely would, but we need more manpower
>> and interest for that to happen.
>>
>>>> I noticed that many of you run OST in a VM, ending up with three layers
>>>> of VMs. I know it works, but I got multiple reports of assertions' timeouts
>>>> and TBH I just don't see this as a viable solution to work with OST - you
>>>> need bare metal for that.
>>> Why?
>>>
>>> After all, we also work on a virtualization product/project. If it's
>>> not good enough for ourselves, how do we expect others to use it? :-)
>> I'm really cool with the engine and the hosts being VMs, but deploying
>> the engine and the hosts as VMs nested in another VM is what I think is
>> unreasonable.
> I tried this approach in the past two days and it works fine for me (except
> the fact that it's slow)
>> Maybe I'm wrong here, but I don't think our customers run whole oVirt
>> clusters inside VMs. There's just too much overhead with all those layers
>> of nesting and the performance sucks.
>>
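For completeness, if someone does want to keep the triple-nested setup, the
VM in the middle needs nested KVM exposed to it - a rough sketch, module
names differ between Intel and AMD and your distro's defaults may vary:

    # on the physical host: check whether nested KVM is enabled (use kvm_amd on AMD)
    cat /sys/module/kvm_intel/parameters/nested    # 'Y' or '1' means enabled
    # enable it persistently and reload the module (or reboot)
    echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
    # the intermediate VM also needs vmx/svm visible to its guests,
    # e.g. libvirt's CPU mode 'host-passthrough'
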
>>> Also, using bare-metal isn't always that easy/comfortable either, even
>>> if you have the hardware.
>> I'm very happy with my servers. What makes working with bm
>> hard/uncomfortable?
> IMHO the main issue is lack of HW. I really don't want to run it directly on
> my dev laptop (without running it inside a VM). Whether I ssh/mount FS/tunnel
> ports to/from a VM or some bare metal server really doesn't matter (assuming
> a reasonable connection speed to the bare metal server).
I agree. I have the privilege of having separate servers to run OST.
Even though that would work, I can't imagine working with OST
on a daily basis on my laptop.
That also kinda proves my point that people are not interested
in running OST on their machines - they don't have machines they could use.
I see three solutions to this:
- people start pushing managers to have their own servers
- we will have a machine-renting solution based on beaker
  (with nice, automatic provisioning for OST etc.), so we can
  work on bare metal
- we focus on the CI and live with the "launch and pray" philosophy :)
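
And to be fair, the ssh/mount/tunnel workflow Vojtech describes above is quite
workable. A rough sketch of what I mean - hostnames, ports and paths below are
just examples:

    # forward the engine's web UI/API from the OST box to the laptop
    ssh -L 8443:engine:443 user@ost-server
    # then open https://localhost:8443/ovirt-engine/ locally
    # optionally mount the working copy so you can edit it with local tools
    mkdir -p ~/ost && sshfs user@ost-server:ovirt-system-tests ~/ost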
>> I can think of reprovisioning, but that is not needed for OST usage.
>>
>>> CI also uses VMs for this, IIUC. Or did we move there to containers?
>>> Perhaps we should invest in making this work well inside a container.
>> CI doesn't use VMs - it uses a mix of containers and bare metals.
>> The solution for containers can't handle el8 and that's why we're
>> stuck with running OST on el7 mostly (apart from the aforementioned
>> bare metals, which use el8).
>>
>> There is a 'run-ost-container.sh' script in the project. I think some people
>> had luck using it, but I never even tried. Again, my personal opinion, as
>> much as I find containers useful and convenient in different situations,
>> this is not one of them - you should be using bare metal.
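To give a sense of why containers are awkward here: the container needs
/dev/kvm and libvirt and friends running inside it, so conceptually it boils
down to something like the lines below. This is only an illustration of the
idea, not the actual contents of 'run-ost-container.sh', and the image name
and entry point are made up:

    # illustrative only - a privileged container with KVM access running the suite
    podman run --privileged --device /dev/kvm \
        -v "$PWD":/workspace:Z \
        ost-runner-image /workspace/run_suite.sh basic-suite-master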
>>
>> The "backend for OST" is a subject for a whole, new discussion.
>> My opinion here is that we should be using oVirt as the backend for OST
>> (as in running the oVirt cluster as VMs in oVirt). I'm a big fan of the
>> dogfooding concept. This of course creates a set of new problems like
>> "how can developers work with this", "where do we get the hosting oVirt
>> cluster from" etc. Whooole, new discussion :)
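Just to illustrate what "oVirt as the backend" could look like in practice:
provisioning the nested OST "hosts" would become plain VM creation against the
hosting engine, e.g. through its REST API. The engine URL, cluster and template
names below are made-up placeholders:

    # rough sketch - create one nested OST host VM in the hosting oVirt
    curl -k -u 'admin@internal:password' \
        -H 'Content-Type: application/xml' \
        -d '<vm>
              <name>ost-host-0</name>
              <cluster><name>ost</name></cluster>
              <template><name>el8-ost-host</name></template>
            </vm>' \
        https://hosting-engine.example.com/ovirt-engine/api/vms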
>>
>> Regards, Marcin
>>
>>>> On my bare metal server OST basic run takes 30 mins to complete. This is
>>>> something one can work with, but we can do even better.
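(For reference, by "basic run" I mean the basic suite; on that machine the
timing is roughly what you get from something like the command below - the
exact entry point may differ depending on your checkout, so treat it as a
sketch:)

    cd ovirt-system-tests
    time ./run_suite.sh basic-suite-master    # ~30 minutes on a decent bare metal box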
>>>>
>>>> Thank you for your input and I hope that we can have more people
>>>> involved in OST on a regular basis and not once-per-year hackathons.
>>>> This is a complex project, but it's really useful.
>>> +1!
>>>
>>> Thanks and best regards,
>>>
>>>>> Nice.
>>>>>
>>>>> Thanks and best regards,
>>>> [1]
>>>> https://github.com/lago-project/lago/blob/7bf288ad53da3f1b86c08b3283ee9c5118e7605e/lago/providers/libvirt/network.py#L162
>>>> [2]
>>>> https://github.com/oVirt/ovirt-system-tests/blob/6d5c2a0f9fb3c05afc85471260065786b5fdc729/ost_utils/ost_utils/pytest/fixtures/engine.py#L105