Re: Subscribe to mailing list and install oVirt with freeIPA
by Yedidyah Bar David
On Wed, Dec 25, 2019 at 10:45 AM Kien Pham <ptkcc(a)outlook.com> wrote:
>
> Dear oVirt team,
>
> I posted a thread on the forum, but I am having problems with the mailing list. How can I subscribe to it?
>
> This is the question.
>
> Dear all,
>
> I am installing oVirt, but engine-setup failed with the following output:
>
> [ INFO ] Restarting httpd
> [ ERROR ] Failed to execute stage 'Closing up': Failed to start service 'httpd'
> [ INFO ] Stage: Clean up
> Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20191224111455-ppsefr.log
> [ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20191224111745-setup.conf'
> [ INFO ] Stage: Pre-termination
> [ INFO ] Stage: Termination
> [ ERROR ] Execution of setup failed
>
> I think the problem is FreeIPA. I installed FreeIPA to manage user authentication, but IPA is currently using port 80, so the httpd service cannot start.
Right. This isn't supported, for exactly this reason.
The ovirt-engine rpm conflicts with ipa-server. How did you install it?
I now see that there is also a python2-ipaserver package, and on Fedora and EL8
also python3-ipaserver. I have now pushed a patch to make ovirt-engine conflict
with them as well:
https://gerrit.ovirt.org/105935
Anyway, you'll have to set them up on two different machines.
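By the way, if you want to confirm what is holding port 80 before re-running
engine-setup, here is a minimal sketch (this helper is just an illustration,
not part of oVirt):

    import socket

    def port_in_use(port, host="127.0.0.1"):
        """Return True if something is already listening on host:port."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1)
            return s.connect_ex((host, port)) == 0

    # On a host where FreeIPA's Apache holds port 80 this prints True,
    # which is exactly why engine-setup cannot start its own httpd.
    print(port_in_use(80))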
Thanks for the report, and sorry for the failure...
Good luck and best regards,
--
Didi
Subscribe to mailing list and install oVirt with freeIPA
by Kien Pham
Dear oVirt team,
I posted a thread on the forum, but I am having problems with the mailing list. How can I subscribe to it?
This is the question.
Dear all,
I am installing oVirt, but engine-setup failed with the following output:
[ INFO ] Restarting httpd
[ ERROR ] Failed to execute stage 'Closing up': Failed to start service 'httpd'
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20191224111455-ppsefr.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20191224111745-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
I think the problem is FreeIPA. I installed FreeIPA to manage user authentication, but IPA is currently using port 80, so the httpd service cannot start.
Do you have any suggestions?
Thanks for all replies.
Best regards.
Kimloai.
Re: ovirt 4.3.7 + Gluster in hyperconverged (production design)
by Strahil
On Dec 24, 2019 02:18, Jayme <jaymef(a)gmail.com> wrote:
>
> If you can afford it I would definitely do RAID. Being able to monitor and replace disks at the RAID level is much easier than at the brick level. With RAID I'd do a Gluster arbiter setup so you aren't losing too much space.
>
> Keep an eye on libgfapi. It's not the default setting due to a few bugs, but I've been testing it in my SSD HCI cluster and have been seeing up to 5x I/O improvement. Also enable jumbo frames on those 10Gbps switches.
>
> Someone else will probably chime in re the other questions. I believe the GUI can only deploy a three-server cluster; then you have to add the remaining hosts afterward.
>
> On Mon, Dec 23, 2019 at 5:56 PM <adrianquintero(a)gmail.com> wrote:
>>
>> Hi,
>> After playing a bit with oVirt and Gluster in our pre-production environment for the last year, we have decided to move forward with our production design using oVirt 4.3.7 + Gluster in a hyperconverged setup.
>>
>> For this we are looking to get answers to a few questions that will help with our design and eventually lead to our production deployment phase:
>>
>> Current HW specs (total servers = 18):
>> 1.- Server type: DL380 GEN 9
>> 2.- Memory: 256GB
>> 3.-Disk QTY per hypervisor:
>> - 2x600GB SAS (RAID 0) for the OS
I would pick RAID 1 and split it between the OS and a brick for the engine's Gluster volume.
>> - 9x1.2TB SSD (RAID[0, 6..]..? ) for GLUSTER.
If you had bigger NICs, I would prefer RAID 0. Have you considered a JBOD approach (each disk presented as a standalone LUN from the RAID controller)?
RAID 6 is a waste for your SSDs when you already have replica 3 volumes. Ask HPE to provide SSDs from different manufacturing batches. I think that with SSDs the best compromise is RAID 5 with replica 3, which also gives faster reads.
As the SSDs in a replica set are overwritten with the same amount of data, all 3 of them could end up in predictive failure at the same time (which is not nice) - that's why RAID 5 is a good option. A quick capacity comparison follows below.
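To make the space trade-off concrete, here is a back-of-the-envelope calculation
for your 9x1.2TB layout (plain arithmetic; assumes replica 3 across three hosts
and ignores filesystem overhead):

    disks, size_tb = 9, 1.2

    layouts = {
        "JBOD":   disks * size_tb,         # no parity overhead
        "RAID 5": (disks - 1) * size_tb,   # one disk's worth of parity
        "RAID 6": (disks - 2) * size_tb,   # two disks' worth of parity
    }

    for name, per_host in layouts.items():
        # With replica 3, three hosts together hold one usable copy, so
        # usable capacity per 3-host replica set equals one host's bricks.
        print(f"{name}: {per_host:.1f} TB usable per replica set")

RAID 6 leaves you 8.4 TB per host, RAID 5 leaves 9.6 TB, and JBOD the full 10.8 TB.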
>> 4.-Network:
>> - Bond1: 2x10Gbps
>> - Bond2: 2x10Gbps (for migration and gluster)
Do not use an active-backup bond, as you will lose bandwidth. LACP (with hashing on layer 3+4) is a good option, but it depends on whether your machines are in the same LAN segment (communicating over layer 2, with no gateway in between). A simplified sketch of how that hashing spreads flows follows below.
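To illustrate why LACP needs multiple flows, here is a simplified version of the
layer3+4 transmit hash from the classic kernel bonding documentation (illustrative
only; recent kernels may compute the hash differently):

    def layer3_4_slave(src_ip, dst_ip, src_port, dst_port, slave_count):
        """Pick the bond slave for a flow; a single TCP stream always maps
        to one slave, so it can never exceed one link's bandwidth."""
        return ((src_port ^ dst_port) ^ ((src_ip ^ dst_ip) & 0xffff)) % slave_count

    # Two Gluster connections between the same hosts (10.0.0.1 -> 10.0.0.2)
    # can land on different slaves because their source ports differ.
    print(layer3_4_slave(0x0A000001, 0x0A000002, 49152, 24007, 2))  # -> 0
    print(layer3_4_slave(0x0A000001, 0x0A000002, 49153, 24007, 2))  # -> 1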
>> Our plan is to build 2x9-node clusters; however, the following questions come up:
>>
>> 1.-Should we build 2 separate environments, each with its own engine? Or should we have 1 engine manage both clusters?
If you decide to split, you have less fault tolerance - for example, losing 3 out of 18 nodes is better than losing 3 out of 9 :)
>> 2.-What would be the best gluster volume layout for #1 above with regards to RAID configuration:
>> - JBOD or RAID6 or…?.
RAID 6 wastes a lot of space.
>> - what is the benefit or downside of using JBOD vs RAID 6 for this particular scenario?
>> 3.-Would you recommend an Ansible-based deployment (if supported)? If yes, where would I find the documentation for it? Or should we just deploy using the UI?
There is a topic on the mailing list about issues with Ansible. It's up to you.
>> - I have reviewed the following, and in Chapter 5 it only mentions the web UI: https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infr...
>> - I also looked at https://github.com/gluster/gluster-ansible/tree/master/playbooks/hc-ansib... but could not get it to work properly.
>>
>> 4.-What is the recommended maximum server quantity in a hyperconverged setup with Gluster: 12, 15, 18...?
I don't remember exactly, but I think it was 9 nodes for a hyperconverged setup in RHV.
>> Thanks,
>>
>> Adrian
Best Regards,
Strahil Nikolov
Re: Problems after 4.3.8 update
by Mahdi Adnan
We had an issue after upgrading our Gluster nodes to 6.6: a memory leak in the
Gluster self-heal daemon caused it to heal endlessly and eat all of the nodes'
RAM (128GB). We had to downgrade to 6.5 to solve the problem.
I think it is related to the changes made to the SHD code in version 6.6
(1760706 <https://bugzilla.redhat.com/1760706>).
I might find the time to file a bug report for the issue.
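For anyone who wants to catch this earlier next time, a minimal sketch that
watches the self-heal daemon's resident memory (assumes a Linux host where the
daemon's command line matches "glustershd"; the threshold is arbitrary):

    import subprocess, time

    def rss_kb(pid):
        """Read VmRSS (resident memory, in kB) for a pid from /proc."""
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
        return 0

    # pgrep -f matches the self-heal daemon by its command line.
    pid = int(subprocess.check_output(["pgrep", "-f", "glustershd"]).split()[0])
    while True:
        mb = rss_kb(pid) / 1024
        print(f"glustershd RSS: {mb:.0f} MiB")
        if mb > 8192:  # arbitrary 8 GiB alert threshold
            print("WARNING: possible SHD memory leak")
        time.sleep(60)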
Anyway, glad you solved your issue.
Problems with ovirt-web-ui-1.6.0-1.el7.noarch
by Matthias Leopold
Hi,
after upgrading oVirt from 4.3.5 to 4.3.7 some things in the VM Portal
stopped working for users with the "User" role:
* running guest agent is not recognized ("It looks like no guest agent
is configured on the VM.")
* CD/ISO list is "[Empty]"
For the CD/ISO issue I found "- limit CD/ISO list by data center"
in the RPM changelog, but this doesn't help me in any way.
Assigning other roles (UserVmRunTimeManager, PowerUserRole) doesn't
change anything.
/var/log/ovirt-engine/ui.log is quiet.
Downgrading to ovirt-web-ui-1.5.3-1.el7.noarch resolves both issues.
Am I the only one who is experiencing this?
Thanks for any advice
Matthias
Strategy to Mange Ovirt VMs Created from Templates
by jeremy_tourville@hotmail.com
I want to be able to manage VMs using Ansible. As part of the template creation process, the documentation says to seal the VM. Can someone tell me what sealing does to a Linux VM? I understand it removes some of the things that make the VM unique, but I haven't found any real specifics.
So, if I want to manage a VM created from a template, would this general process work?
Seal the VM
Install CloudInit and keys, accounts, etc
Shut off VM and create template from it.
Create new VM using Ansible & CloudInit
CloudInit would have just enough info so that you could manage the VM with Ansible.
Would that work?
I am just starting to explore what CloudInit is and what it can do; I am brand new to it. I didn't find enough info on template sealing to help me devise a full-cycle management strategy. Perhaps there are other/easier methods? Thanks for your advice and input.
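For what it's worth, sealing generally removes machine-specific state (SSH host
keys, udev network rules, machine-id, logs) so each clone regenerates its own;
cloud-init then layers the per-VM identity on top. For the last two steps, a
minimal sketch using the oVirt Python SDK (ovirtsdk4) might look like this -
the names, credentials and SSH key below are placeholders, not a tested recipe:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Placeholder connection details.
    connection = sdk.Connection(
        url="https://engine.example.com/ovirt-engine/api",
        username="admin@internal",
        password="secret",
        insecure=True,
    )

    vms_service = connection.system_service().vms_service()

    # Create the VM from the sealed template.
    vm = vms_service.add(types.Vm(
        name="managed-vm-1",
        cluster=types.Cluster(name="Default"),
        template=types.Template(name="my-sealed-template"),
    ))

    # In practice, wait here until the new VM reaches the DOWN state.

    # First boot with cloud-init: set a hostname and an SSH key so
    # Ansible can reach the VM afterwards.
    vm_service = vms_service.vm_service(vm.id)
    vm_service.start(
        use_cloud_init=True,
        vm=types.Vm(initialization=types.Initialization(
            host_name="managed-vm-1.example.com",
            user_name="root",
            authorized_ssh_keys="ssh-rsa AAAA... ansible@control",
        )),
    )

    connection.close()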
[ovirt 4.2] vdsm host was shut down unexpectedly; configuration of some VMs was changed or lost when the host was powered back on and the VMs were started
by lifuqiong@sunyainfo.com
Dear All:
My oVirt engine manages two vdsm hosts with NFS storage on a separate NFS server; this worked fine for about three months.
One of the hosts (host_1.3; IP 172.18.1.3) ran about 16 VMs, but host_1.3 was shut down unexpectedly around 2019-12-23 16:11. When the host and VMs were restarted, half of the VMs had lost or changed some of their configuration, such as their IPs (the VM name is 'zzh_Chain49_ACG_M' in the vdsm.log).
The VM zzh_Chain49_ACG_M was created from a template via the oVirt REST API; the template's IP is 20.1.1.161, and the VM's IP was then changed to 20.1.1.219 via the REST API. But the IP had reverted to the template's IP when the accident happened.
The VM's OS is CentOS.
I hope to get help from you soon. Thank you.
Mark
Sincerely.