Info about AD integration and Root CA
by Gianluca Cecchi
Hello,
in the docs for RHV 4.2 (I think they apply to oVirt 4.2 too), for attaching
to AD there is this statement:
"
To set up secure connection between the LDAP server and the Manager, ensure
a PEM-encoded CA certificate has been prepared. See Section D.2, “Setting Up
Encrypted Communication between the Manager and an LDAP Server” for more
information.
"
and in the Appendix:
"
To set up encrypted communication between the Red Hat Virtualization
Manager and an LDAP server, obtain the root CA certificate of the LDAP
server...
and in the README file referenced in the Appendix
(/usr/share/doc/ovirt-engine-extension-aaa-ldap-1.3.8/README) there are these
commands:
"
Active Directory
Windows: > certutil -ca.cert myrootca.der
Linux: $ openssl -in myrootca.der -inform DER -out myrootca.pem
"
In my case, on the Windows DC (a Windows Server 2012 R2 machine with "Domain
functional level: Windows Server 2003"), I get this error:
C:\Users\Administrator.MYDOMAIN>certutil -ca.cert mydomain.der
CertUtil: The system cannot find the file specified.
C:\Users\Administrator.MYDOMAIN>
What does it mean exactly?
Thanks in advance,
Gianluca
HostedEngine Unreachable
by Sakhi Hadebe
Hi,
Our cluster was running fine until we moved it to the new network.
Looking at the agent.log file, it still pings the old gateway. Not sure if
this is the reason it's failing the liveliness check.
Please help.
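If it helps, here is a hedged way to check what the HA agent is actually
monitoring (the paths and health URL below are the usual hosted-engine
defaults, so treat them as assumptions):

# on each hosted-engine host: the gateway the agent pings is stored here
grep ^gateway /etc/ovirt-hosted-engine/hosted-engine.conf
# if it still shows the old gateway, fix it on every host and restart the HA
# services; the shared copy can reportedly be updated with something like
#   hosted-engine --set-shared-config gateway <new-gw> --type=he_shared
# (verify that option on your version first)
systemctl restart ovirt-ha-agent ovirt-ha-broker
# the liveliness check is essentially an HTTP probe of the engine:
curl -k https://<engine-fqdn>/ovirt-engine/services/health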
On Thu, Feb 21, 2019 at 4:39 PM Sakhi Hadebe <sakhi(a)sanren.ac.za> wrote:
> Hi,
>
> I need some help. We had a working ovirt cluster in the testing
> environment. We have just moved it to the production environment with the
> same network settings. The only thing we changed is the public VLAN. In
> production we're using a different subnet.
>
> The problem is we can't get the HostedEngine up. It does come up, but it
> fails the LIVELINESS CHECK and its health status is bad. We can't even
> ping it. It is on the same subnet as the host machines: 192.168.x.x/24:
>
> *HostedEngine VM status:*
>
> [root@garlic qemu]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date : True
> Hostname : goku.sanren.ac.za
> Host ID : 1
> Engine status : {"reason": "vm not running on this
> host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 3400
> stopped : False
> Local maintenance : False
> crc32 : 57b2ece9
> local_conf_timestamp : 8463
> Host timestamp : 8463
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=8463 (Thu Feb 21 16:32:29 2019)
> host-id=1
> score=3400
> vm_conf_refresh_time=8463 (Thu Feb 21 16:32:29 2019)
> conf_on_shared_storage=True
> maintenance=False
> state=EngineDown
> stopped=False
>
>
> --== Host 2 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date : True
> Hostname : garlic.sanren.ac.za
> Host ID : 2
> Engine status : {"reason": "failed liveliness check",
> "health": "bad", "vm": "up", "detail": "Powering down"}
> Score : 3400
> stopped : False
> Local maintenance : False
> crc32 : 71dc3daf
> local_conf_timestamp : 8540
> Host timestamp : 8540
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=8540 (Thu Feb 21 16:32:31 2019)
> host-id=2
> score=3400
> vm_conf_refresh_time=8540 (Thu Feb 21 16:32:31 2019)
> conf_on_shared_storage=True
> maintenance=False
> state=EngineStop
> stopped=False
> timeout=Thu Jan 1 04:24:29 1970
>
>
> --== Host 3 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date : True
> Hostname : gohan.sanren.ac.za
> Host ID : 3
> Engine status : {"reason": "vm not running on this
> host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 3400
> stopped : False
> Local maintenance : False
> crc32 : 49645620
> local_conf_timestamp : 5480
> Host timestamp : 5480
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=5480 (Thu Feb 21 16:32:22 2019)
> host-id=3
> score=3400
> vm_conf_refresh_time=5480 (Thu Feb 21 16:32:22 2019)
> conf_on_shared_storage=True
> maintenance=False
> state=EngineDown
> stopped=False
>
> The services are running, but with errors:
> *vdsmd.service:*
> [root@garlic qemu]# systemctl status vdsmd
> ● vdsmd.service - Virtual Desktop Server Manager
> Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor
> preset: enabled)
> Active: active (running) since Thu 2019-02-21 16:12:12 SAST; 3min 31s
> ago
> Process: 40117 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh
> --post-stop (code=exited, status=0/SUCCESS)
> Process: 40121 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh
> --pre-start (code=exited, status=0/SUCCESS)
> Main PID: 40224 (vdsmd)
> Tasks: 47
> CGroup: /system.slice/vdsmd.service
> ├─40224 /usr/bin/python2 /usr/share/vdsm/vdsmd
> └─40346 /usr/libexec/ioprocess --read-pipe-fd 65
> --write-pipe-fd 64 --max-threads 10 --max-queued-requests 10
>
> Feb 21 16:12:11 garlic.sanren.ac.za vdsmd_init_common.sh[40121]: vdsm:
> Running nwfilter
> Feb 21 16:12:11 garlic.sanren.ac.za vdsmd_init_common.sh[40121]: libvirt:
> Network Filter Driver error : Requested operation is not valid: nwfilter is
> in use
> Feb 21 16:12:11 garlic.sanren.ac.za vdsmd_init_common.sh[40121]: vdsm:
> Running dummybr
> Feb 21 16:12:12 garlic.sanren.ac.za vdsmd_init_common.sh[40121]: vdsm:
> Running tune_system
> Feb 21 16:12:12 garlic.sanren.ac.za vdsmd_init_common.sh[40121]: vdsm:
> Running test_space
> Feb 21 16:12:12 garlic.sanren.ac.za vdsmd_init_common.sh[40121]: vdsm:
> Running test_lo
> Feb 21 16:12:12 garlic.sanren.ac.za systemd[1]: Started Virtual Desktop
> Server Manager.
> Feb 21 16:12:13 garlic.sanren.ac.za vdsm[40224]: WARN MOM not available.
> Feb 21 16:12:13 garlic.sanren.ac.za vdsm[40224]: WARN MOM not available,
> KSM stats will be missing.
> Feb 21 16:12:13 garlic.sanren.ac.za vdsm[40224]: WARN Not ready yet,
> ignoring event '|virt|VM_status|e2608f14-39fe-4ab6-b6be-9c60679e8c76'
> args={'e2608f14-39fe-4ab6-b6be-9c606..., 'type': '
> Hint: Some lines were ellipsized, use -l to show in full.
>
> *libvirtd.service:*
> [root@garlic qemu]# systemctl status libvirtd
> ● libvirtd.service - Virtualization daemon
> Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled;
> vendor preset: enabled)
> Drop-In: /etc/systemd/system/libvirtd.service.d
> └─unlimited-core.conf
> Active: active (running) since Thu 2019-02-21 16:06:50 SAST; 9min ago
> Docs: man:libvirtd(8)
> https://libvirt.org
> Main PID: 38485 (libvirtd)
> Tasks: 17 (limit: 32768)
> CGroup: /system.slice/libvirtd.service
> └─38485 /usr/sbin/libvirtd --listen
>
> Feb 21 16:06:50 garlic.sanren.ac.za systemd[1]: Starting Virtualization
> daemon...
> Feb 21 16:06:50 garlic.sanren.ac.za systemd[1]: Started Virtualization
> daemon.
> Feb 21 16:07:43 garlic.sanren.ac.za libvirtd[38485]: 2019-02-21
> 14:07:43.033+0000: 38485: info : libvirt version: 3.9.0, package:
> 14.el7_5.8 (CentOS BuildSystem <http://bugs.c...centos.org)
> Feb 21 16:07:43 garlic.sanren.ac.za libvirtd[38485]: 2019-02-21
> 14:07:43.033+0000: 38485: info : hostname: garlic.sanren.ac.za
> Feb 21 16:07:43 garlic.sanren.ac.za libvirtd[38485]: 2019-02-21
> 14:07:43.033+0000: 38485: error : virNetSocketReadWire:1808 : End of file
> while reading data: Input/output error
> Feb 21 16:12:08 garlic.sanren.ac.za libvirtd[38485]: 2019-02-21
> 14:12:08.791+0000: 38485: error : virNetSocketReadWire:1808 : End of file
> while reading data: Input/output error
> Hint: Some lines were ellipsized, use -l to show in full.
>
> *ovirt-ha-broker & ovirt-ha-agent services:*
> [root@garlic qemu]# systemctl restart ovirt-ha-broker
> [root@garlic qemu]# systemctl status ovirt-ha-broker
> ● ovirt-ha-broker.service - oVirt Hosted Engine High Availability
> Communications Broker
> Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-broker.service;
> enabled; vendor preset: disabled)
> Active: active (running) since Thu 2019-02-21 16:18:43 SAST; 30s ago
> Main PID: 41493 (ovirt-ha-broker)
> Tasks: 13
> CGroup: /system.slice/ovirt-ha-broker.service
> ├─41493 /usr/bin/python
> /usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker
> ├─41688 /bin/sh /usr/sbin/hosted-engine --check-liveliness
> └─41689 python -m ovirt_hosted_engine_setup.check_liveliness
>
> Feb 21 16:18:43 garlic.sanren.ac.za systemd[1]: Started oVirt Hosted
> Engine High Availability Communications Broker.
> Feb 21 16:18:43 garlic.sanren.ac.za systemd[1]: Starting oVirt Hosted
> Engine High Availability Communications Broker...
> [root@garlic qemu]# systemctl status ovirt-ha-agent
> ● ovirt-ha-agent.service - oVirt Hosted Engine High Availability
> Monitoring Agent
> Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service;
> enabled; vendor preset: disabled)
> Active: active (running) since Thu 2019-02-21 16:18:53 SAST; 25s ago
> Main PID: 41581 (ovirt-ha-agent)
> Tasks: 2
> CGroup: /system.slice/ovirt-ha-agent.service
> └─41581 /usr/bin/python
> /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent
>
> Feb 21 16:18:53 garlic.sanren.ac.za systemd[1]: Started oVirt Hosted
> Engine High Availability Monitoring Agent.
> Feb 21 16:18:53 garlic.sanren.ac.za systemd[1]: Starting oVirt Hosted
> Engine High Availability Monitoring Agent...
>
> Attached are log files that might contain some useful information for
> troubleshooting.
>
>
> Your assistance will be highly appreciated
>
--
Regards,
Sakhi Hadebe
Engineer: South African National Research Network (SANReN) Competency
Area, Meraka, CSIR
Tel: +27 12 841 2308
Fax: +27 12 841 4223
Cell: +27 71 331 9622
Email: sakhi(a)sanren.ac.za
Move infrastructure ,how to change FQDN
by Fabrice SOLER
Hello,
I need to move the whole physical oVirt infrastructure to another site
(for student education).
The nodes' FQDNs and the hosted engine's FQDN must change.
The version is oVirt 4.2.8 for the hosted engine and the nodes.
Is there someone who knows how to do this?
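For the engine FQDN part there is a rename tool shipped with the engine; a
hedged sketch (I have not verified it against a full site move, and changing
the hosts' FQDNs usually means reinstalling or re-adding the hosts):

# on the engine VM, after taking a full backup
engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log
/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename
# the tool prompts for the new FQDN and regenerates the affected
# configuration and certificates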
Fabrice SOLER
Ovirt 4.2.8.. Possible bug?
by matteo fedeli
Hi, considering that the deploy with 4.2.7.8 failed, I tried to reinstall oVirt at version 4.2.8, and two strange things happened.
During the volume step, if I choose JBOD mode, the deploy conf still keeps the RAID6 type... Why? To work around it I only tried editing the file manually at the line about the volume type, and then the deploy got stuck on creating the physical volume...
This is my conf file (I used 3 HDDs of 500 GB each: node, engine + vmstore and data); a verification sketch follows after it:
#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
kansas.planet.bn
germany.planet.bn
singapore.planet.bn
[script1:kansas.planet.bn]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h kansas.planet.bn, germany.planet.bn, singapore.planet.bn
[script1:germany.planet.bn]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h kansas.planet.bn, germany.planet.bn, singapore.planet.bn
[script1:singapore.planet.bn]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h kansas.planet.bn, germany.planet.bn, singapore.planet.bn
[disktype]
jbod
[diskcount]
12
[stripesize]
256
[service1]
action=enable
service=chronyd
[service2]
action=restart
service=chronyd
[shell2]
action=execute
command=vdsm-tool configure --force
[script3]
action=execute
file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
ignore_script_errors=no
[pv1:kansas.planet.bn]
action=create
devices=sdb
ignore_pv_errors=no
[pv1:germany.planet.bn]
action=create
devices=sdb
ignore_pv_errors=no
[pv1:singapore.planet.bn]
action=create
devices=sdb
ignore_pv_errors=no
[vg1:kansas.planet.bn]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no
[vg1:germany.planet.bn]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no
[vg1:singapore.planet.bn]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no
[lv1:kansas.planet.bn]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=1005GB
poolmetadatasize=5GB
[lv2:germany.planet.bn]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=1005GB
poolmetadatasize=5GB
[lv3:singapore.planet.bn]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=1005GB
poolmetadatasize=5GB
[lv4:kansas.planet.bn]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=100GB
lvtype=thick
[lv5:kansas.planet.bn]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB
[lv6:kansas.planet.bn]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB
[lv7:germany.planet.bn]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=100GB
lvtype=thick
[lv8:germany.planet.bn]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB
[lv9:germany.planet.bn]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB
[lv10:singapore.planet.bn]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=100GB
lvtype=thick
[lv11:singapore.planet.bn]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB
[lv12:singapore.planet.bn]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB
[selinux]
yes
[service3]
action=restart
service=glusterd
slice_setup=yes
[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
services=glusterfs
[script2]
action=execute
file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh
[shell3]
action=execute
command=usermod -a -G gluster qemu
[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=kansas.planet.bn:/gluster_bricks/engine/engine,germany.planet.bn:/gluster_bricks/engine/engine,singapore.planet.bn:/gluster_bricks/engine/engine
ignore_volume_errors=no
[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=kansas.planet.bn:/gluster_bricks/data/data,germany.planet.bn:/gluster_bricks/data/data,singapore.planet.bn:/gluster_bricks/data/data
ignore_volume_errors=no
[volume3]
action=create
volname=vmstore
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=kansas.planet.bn:/gluster_bricks/vmstore/vmstore,germany.planet.bn:/gluster_bricks/vmstore/vmstore,singapore.planet.bn:/gluster_bricks/vmstore/vmstore
ignore_volume_errors=no
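A hedged way to check what the deploy actually created, once it gets that far
(standard LVM and Gluster commands, with the VG and volume names taken from
the config above):

# on each host: thin pool vs. thick LVs, as requested in the config
lvs -o vg_name,lv_name,lv_size,pool_lv gluster_vg_sdb
# replica layout and bricks of the created volumes
gluster volume info engine
gluster volume info data
gluster volume info vmstore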
Re: Ovirt Glusterfs
by Strahil
I have done some testing, and it seems that storhaug + ctdb + nfs-ganesha shows decent performance in a 3-node hyperconverged setup.
FUSE mounts are hitting some kind of limit when mounting gluster 3.12.15 volumes.
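For anyone comparing, a hedged example of the two mount paths being discussed
(server, VIP and volume names are placeholders):

# FUSE (native gluster client)
mount -t glusterfs node1:/data /mnt/data-fuse
# the same volume through an NFS-Ganesha export, over NFSv4.1
mount -t nfs -o vers=4.1 ganesha-vip:/data /mnt/data-nfs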
Best Regards,
Strahil Nikolov
Creating new VM
by Yujin Boby
I installed oVirt. When I try to create a new VM, it says:
This host is managed by a virtualization manager, so creation of new VM from this host is not possible.
https://imgur.com/a/hpqV6ML
Any idea why I am getting this error? Do I need more than one server to create virtual machines, e.g. by adding a node?
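That message appears to come from Cockpit on a host that is already managed
by an engine, so VMs are normally created through the engine's portal or API
instead; a hedged REST example (engine FQDN, credentials, cluster and VM
names are placeholders):

# create a blank VM in the Default cluster via the engine REST API
curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
  -d '<vm><name>testvm</name><cluster><name>Default</name></cluster><template><name>Blank</name></template></vm>' \
  https://ENGINE-FQDN/ovirt-engine/api/vms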
update to 4.2.8 fails
by Vincent Royer
trying to update from 4.2.6 to 4.2.8
yum update fails with:
--> Finished Dependency Resolution
Error: Package: vdsm-4.20.46-1.el7.x86_64 (ovirt-4.2)
Requires: libvirt-daemon-kvm >= 4.5.0-10.el7_6.3
Installed: libvirt-daemon-kvm-3.9.0-14.el7_5.8.x86_64
(installed)
libvirt-daemon-kvm = 3.9.0-14.el7_5.8
Error: Package: 10:qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
(ovirt-4.2-centos-qemu-ev)
Requires: libepoxy.so.0()(64bit)
Error: Package: 10:qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
(ovirt-4.2-centos-qemu-ev)
Requires: libibumad.so.3()(64bit)
Error: Package: 10:qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
(ovirt-4.2-centos-qemu-ev)
Requires: libgbm.so.1()(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Uploading Enabled Repositories Report
2019-02-13 16:42:14,190 [INFO] yum:17779:Dummy-18 @connection.py:868 -
Connection built: host=subscription.rhsm.redhat.com port=443
handler=/subscription auth=identity_cert ca_dir=/etc/rhsm/ca/
insecure=False
Loaded plugins: fastestmirror, product-id, subscription-manager
2019-02-13 16:42:14,199 [WARNING] yum:17779:Dummy-18 @logutil.py:141 -
logging already initialized
2019-02-13 16:42:14,200 [ERROR] yum:17779:Dummy-18 @identity.py:145 -
Reload of consumer identity cert /etc/pki/consumer/cert.pem raised an
exception with msg: [Errno 2] No such file or directory:
'/etc/pki/consumer/key.pem'
2019-02-13 16:42:14,200 [INFO] yum:17779:Dummy-18 @connection.py:868 -
Connection built: host=subscription.rhsm.redhat.com port=443
handler=/subscription auth=identity_cert ca_dir=/etc/rhsm/ca/
insecure=False
2019-02-13 16:42:14,201 [INFO] yum:17779:Dummy-18 @repolib.py:471 - repos
updated: Repo updates
Total repo updates: 0
Updated
<NONE>
Added (new)
<NONE>
Deleted
<NONE>
This system is not registered with an entitlement server. You can use
subscription-manager to register.
Cannot upload enabled repos report, is this client registered?
[root@brian yum.repos.d]# yum repolist
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
package_upload, product-id, search-disabled-repos, subscription-manager,
vdsmupgrade
2019-02-13 16:42:36,489 [ERROR] yum:17809:MainThread @identity.py:145 -
Reload of consumer identity cert /etc/pki/consumer/cert.pem raised an
exception with msg: [Errno 2] No such file or directory:
'/etc/pki/consumer/key.pem'
2019-02-13 16:42:36,490 [INFO] yum:17809:MainThread @connection.py:868 -
Connection built: host=subscription.rhsm.redhat.com port=443
handler=/subscription auth=identity_cert ca_dir=/etc/rhsm/ca/
insecure=False
2019-02-13 16:42:36,491 [INFO] yum:17809:MainThread @repolib.py:471 - repos
updated: Repo updates
Total repo updates: 0
Updated
<NONE>
Added (new)
<NONE>
Deleted
<NONE>
This system is not registered with an entitlement server. You can use
subscription-manager to register.
Loading mirror speeds from cached hostfile
* ovirt-4.2: mirrors.rit.edu
* ovirt-4.2-epel: mirror.sjc02.svwh.net
repo id                              repo name                                        status
centos-sclo-rh-release/x86_64        CentOS-7 - SCLo rh                                8,113
ovirt-4.2/7                          Latest oVirt 4.2 Release                          2,558
ovirt-4.2-centos-gluster312/x86_64   CentOS-7 - Gluster 3.12                             262
ovirt-4.2-centos-opstools/x86_64     CentOS-7 - OpsTools - release                       853
ovirt-4.2-centos-ovirt42/x86_64      CentOS-7 - oVirt 4.2                                631
ovirt-4.2-centos-qemu-ev/x86_64      CentOS-7 - QEMU EV                                   71
ovirt-4.2-epel/x86_64                Extra Packages for Enterprise Linux 7 - x86_64
*Help!*
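A hedged observation from the repolist above: the dependencies yum cannot
resolve (libvirt-daemon-kvm >= 4.5, libepoxy, libibumad, libgbm) normally come
from the CentOS 7.6 base/updates repos, which do not appear in the list. Two
things worth checking, depending on the host type (assumptions, not a
verified fix):

# plain CentOS 7 host: make sure base/updates are enabled, then retry
yum repolist all | grep -Ei 'base|updates'
yum-config-manager --enable base updates
yum clean all && yum update

# oVirt Node (the imgbased-persist plugin in the output suggests it might be):
# the node image is updated as a whole instead of package by package
yum update ovirt-node-ng-image-update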
How to recreate Ovirt CA from scratch
by Giorgio Biacchi
Hi list,
during our datacenter's lifetime many things have changed. We moved the
engine twice, onto different hosts with, of course, different FQDNs, among
many other changes. Now we are stuck with an error when we try to upload an
image to a data domain. The error is somehow tied to a failure to validate
the ovirt-imageio-proxy certificate, and since the current root CA
certificate is still signed with sha1WithRSAEncryption, we'd like to
regenerate the whole CA.
These are the steps we've taken, without success:
1) Make a tar.gz of /etc/pki/ovirt-engine as a backup
2) Create a new CA cert using the same private key:
openssl req -key /etc/pki/ovirt-engine/private/ca.pem -new -x509 -days 3650 -sha256 -extensions v3_ca -out newca.cert.pem
3) Put the new CA cert in place:
mv ca.pem ca.pem.orig.20190219; mv newca.cert.pem ca.pem
4) Re-sign all the other certs:
names="engine apache websocket-proxy jboss imageio-proxy"
for name in $names; do
    subject="$(
        openssl x509 \
            -in /etc/pki/ovirt-engine/certs/"${name}".cer \
            -noout \
            -subject \
        | sed 's;subject= \(.*\);\1;'
    )"
    /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh \
        --name="${name}" \
        --password=mypass \
        --subject="${subject}" \
        --keep-key
done
5) Restart all the services
systemctl restart httpd
systemctl restart ovirt-engine
systemctl restart ovirt-websocket-proxy
systemctl restart ovirt-imageio-proxy
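At this point a hedged sanity check of what is actually in place (standard
openssl commands; 54323 is assumed to be the default ovirt-imageio-proxy
port):

# is the on-disk CA really the re-signed one?
openssl x509 -in /etc/pki/ovirt-engine/ca.pem -noout -text | grep -m1 'Signature Algorithm'
# does the certificate served by the imageio proxy validate against that CA?
openssl s_client -connect <engine-fqdn>:54323 -CAfile /etc/pki/ovirt-engine/ca.pem </dev/null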
The next step was to restore the backup from 1) and fall back to the
initial state, because nothing worked as expected.
Is there any documented procedure on how to recreate the oVirt CA from
scratch?
Thanks in advance
--
gb
PGP Key: http://pgp.mit.edu/
Primary key fingerprint: C510 0765 943E EBED A4F2 69D3 16CC DC90 B9CB 0F34
Installing oVirt on Physical machine (Solved)
by emmanualvnebu1@gmail.com
Issue: Unable to install oVirt on a physical machine from USB.
Once the option to install oVirt 4.3 is selected, it shows a "dracut-initqueue timeout" screen, then drops into emergency mode and gives a dracut command line.
Fix: I think I fixed it.
The issue was with the bootable USB.
I tried multiple tools to create it, and one worked:
https://www.balena.io/etcher/
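For reference, a hedged alternative on Linux (the ISO name is a placeholder
and /dev/sdX must be double-checked, since dd overwrites the target device):

dd if=ovirt-node-ng-installer-4.3.iso of=/dev/sdX bs=4M conv=fsync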
Thanks for everyone’s help. Appreciate it.