VM CPU Pinning
by lavi.buchnik@exlibrisgroup.com
Hi,
When trying to pin a VM's CPUs to physical CPUs, I get the following error whenever the number of physical CPUs is greater than 35 (e.g. 40):
Error attempting to pin CPUs.
Full error: Fault reason is "Operation Failed". Fault detail is "[size must be between 0 and 4000, Attribute: vmStatic.cpuPinning]". HTTP response code is "400". HTTP response message is "Bad Request".
It works for me with up to 35 physical CPUs, but above that I get this error.
Can you please tell me what this error means and how to overcome it?
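For what it's worth, the error text itself ("size must be between 0 and 4000, Attribute: vmStatic.cpuPinning") suggests the engine stores the whole pinning map as a single string and rejects it once it exceeds 4000 characters. A rough sketch (not oVirt code; the string format follows the documented "vcpu#pcpuset_vcpu#pcpuset" pinning syntax) of why a 40-CPU pinning easily blows past that limit:

```python
def pinning_string(vcpu_to_pcpus):
    """Build an oVirt-style CPU pinning string from {vcpu: pcpu-set} pairs,
    e.g. {0: "0", 1: "1"} -> "0#0_1#1"."""
    return "_".join(f"{vcpu}#{pcpus}" for vcpu, pcpus in sorted(vcpu_to_pcpus.items()))

# Pinning each of 40 vCPUs with an explicit exclusion set such as
# "0-39,^1,^2,..." (one common way tools express 1:1 pinning) makes
# every entry ~150 characters, so 40 entries exceed the 4000 limit.
mapping = {v: "0-39," + ",".join(f"^{c}" for c in range(40) if c != v)
           for v in range(40)}
s = pinning_string(mapping)
print(len(s), len(s) > 4000)
```

If that is the cause, a more compact pinning expression per vCPU (e.g. a single CPU number instead of a range with exclusions) should stay under the limit for the same number of CPUs.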
Version we are using is: 4.3.6.6-1.0.9.el7
Thanks,
Lavi
3 years, 1 month
Update Package Conflict
by penguin pages
Fresh install of minimal CentOS8
Then deploy:
- EPEL
- Add ovirt repo https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
Install all nodes:
- cockpit-ovirt-dashboard
- gluster-ansible-roles
- vdsm-gluster
- ovirt-host
- ovirt-ansible-roles
- ovirt-ansible-infra
Install on "first node of cluster"
- ovirt-engine-appliance
Now each node is stuck with the same package conflict error (which also blocks upgrades from the GUI):
[root@medusa ~]# yum update
Last metadata expiration check: 0:55:35 ago on Wed 10 Mar 2021 08:14:22 AM EST.
Error:
Problem 1: package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package cockpit-bridge-238.1-1.el8.x86_64 conflicts with cockpit-dashboard < 233 provided by cockpit-dashboard-217-1.el8.noarch
- cannot install the best update candidate for package ovirt-host-4.4.1-4.el8.x86_64
- cannot install the best update candidate for package cockpit-bridge-217-1.el8.x86_64
Problem 2: problem with installed package ovirt-host-4.4.1-4.el8.x86_64
- package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package cockpit-system-238.1-1.el8.noarch obsoletes cockpit-dashboard provided by cockpit-dashboard-217-1.el8.noarch
- cannot install the best update candidate for package cockpit-dashboard-217-1.el8.noarch
Problem 3: package ovirt-hosted-engine-setup-2.4.9-1.el8.noarch requires ovirt-host >= 4.4.0, but none of the providers can be installed
- package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package ovirt-host-4.4.1-1.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package ovirt-host-4.4.1-2.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package ovirt-host-4.4.1-3.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package cockpit-system-238.1-1.el8.noarch obsoletes cockpit-dashboard provided by cockpit-dashboard-217-1.el8.noarch
- cannot install the best update candidate for package ovirt-hosted-engine-setup-2.4.9-1.el8.noarch
- cannot install the best update candidate for package cockpit-system-217-1.el8.noarch
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
[root@medusa ~]# yum update --allowerasing
Last metadata expiration check: 0:55:56 ago on Wed 10 Mar 2021 08:14:22 AM EST.
Dependencies resolved.
=========================================================================================================================================================================================================================================
Package Architecture Version Repository Size
=========================================================================================================================================================================================================================================
Upgrading:
cockpit-bridge x86_64 238.1-1.el8 baseos 535 k
cockpit-system noarch 238.1-1.el8 baseos 3.4 M
replacing cockpit-dashboard.noarch 217-1.el8
Removing dependent packages:
cockpit-ovirt-dashboard noarch 0.14.17-1.el8 @ovirt-4.4 16 M
ovirt-host x86_64 4.4.1-4.el8 @ovirt-4.4 11 k
ovirt-hosted-engine-setup noarch 2.4.9-1.el8 @ovirt-4.4 1.3 M
Transaction Summary
=========================================================================================================================================================================================================================================
Upgrade 2 Packages
Remove 3 Packages
##
Initially I assumed I had taken a non-standard path, but now I think this is some oVirt vs. CentOS package repo issue. Is there a workaround, or a root-cause fix for this repo conflict?
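One possible stopgap (an assumption on my part, not a confirmed fix) is to hold back the newer cockpit packages that obsolete cockpit-dashboard, which ovirt-host 4.4.1 still requires, until an ovirt-host build without that requirement lands in the repo:

```ini
# /etc/dnf/dnf.conf -- hold back the cockpit packages that conflict with
# cockpit-dashboard (assumption: a later ovirt-host drops the
# cockpit-dashboard requirement, after which this exclude can be removed)
[main]
exclude=cockpit-bridge cockpit-system
```

Alternatively, `dnf update --nobest` (as the error output itself suggests) keeps the currently installed packages rather than removing ovirt-host, at the cost of not pulling the newest cockpit.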
3 years, 1 month
How to replace a failed oVirt Hyperconverged Host
by Ramon Sierra
Hi,
We have a three-host hyperconverged oVirt setup. A few weeks ago one of
the hosts failed and we lost a RAID5 array on it. We removed the host
from the cluster and repaired it. We are now trying to set it up and add
it back to the cluster, but we are not clear on how to proceed. The
cluster runs a Gluster replica 2 + arbiter 1 setup, and I have no idea
how to recreate the LVM partitions and gluster bricks, and then add the
host back to the cluster in order to start the healing process.
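For reference, the rough shape of the recovery (a sketch under assumptions: device, volume group, volume and host names are placeholders, and the exact LVM/brick layout should be copied from the surviving hosts, which the HCI wizard originally created):

```
# On the rebuilt host: recreate the LVM stack and brick filesystem
# (names and sizes are placeholders -- mirror the healthy hosts).
pvcreate /dev/sdb
vgcreate gluster_vg /dev/sdb
lvcreate -L 500G -n gluster_lv_vmstore gluster_vg
mkfs.xfs /dev/gluster_vg/gluster_lv_vmstore
mkdir -p /gluster_bricks/vmstore
mount /dev/gluster_vg/gluster_lv_vmstore /gluster_bricks/vmstore
mkdir /gluster_bricks/vmstore/vmstore

# From a healthy peer: point the volume back at the rebuilt brick
# (same path on the same host) and trigger a full heal.
gluster volume reset-brick vmstore \
    host3:/gluster_bricks/vmstore/vmstore \
    host3:/gluster_bricks/vmstore/vmstore commit force
gluster volume heal vmstore full
```

Again, this is only the outline; someone who has done this on an oVirt HCI cluster should confirm the details (thinpool/VDO layers, mount options, and re-adding the host through the engine) before you run anything.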
Any help on how to proceed with this scenario will be very welcome.
Ramon
3 years, 1 month
oVirt 4.3.6 and Security Measures
by scroodj@gmail.com
Hello team,
Due to a security policy at our customer's company, we need to implement some changes on the machines in their oVirt cluster (standalone Engine + 2 KVM hosts).
1. The home directories of the users sanlock (/var/run/sanlock) and gluster (/run/gluster) have permissions of 775. We would like to have them at least at 755, if not stricter. Is that possible?
2. The NFS storage mount has 'nodev' and 'nosuid' disabled. Is it safe to use those options for an NFS storage domain?
3. Usually bridged routing is not allowed on managed servers. Security scan asks us to set the following four parameters to 0
Network Parameter "net.ipv4.conf.all.send_redirects" = 1 (expected: 0)
Network Parameter "net.ipv4.conf.all.secure_redirects" = 1 (expected: 0)
Network Parameter "net.ipv6.conf.all.accept_redirects" = 1 (expected: 0)
Network Parameter "net.ipv4.conf.all.accept_redirects" = 1 (expected: 0)
Would changing them interfere with ovirtmgmt network?
Those are valid for all three machines in the cluster.
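If the redirect parameters turn out to be safe to change (my assumption is that they do not affect the bridged ovirtmgmt network, since ICMP redirects concern routed traffic, but please verify), the usual way to persist them would be a sysctl drop-in:

```ini
# /etc/sysctl.d/99-hardening.conf -- disable ICMP redirects as the scan
# requests (file name and placement are common convention, not oVirt-specific)
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
```

Apply with `sysctl --system` and watch the ovirtmgmt network for regressions.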
On the engine though there is httpd installed now and we have some findings there too:
1. There are modules installed that are on a blacklist. Can they be removed? The modules are:
mod_dav_lock
mod_userdir
mod_include
mod_dav_fs
mod_autoindex
mod_dav
mod_info
2. HTTP TRACE requests should be blocked, so we would set "TraceEnable" to off in the virtual host config. If TRACE is needed, we would have to limit the verbs that are allowed.
3. Apache version information should be turned off so as not to tell potential attackers which web server is running. Is that a problem for oVirt?
4. TLSv1.0 and TLSv1.1 are enabled but should be turned off.
5. HSTS should be turned on but is not yet.
6. Can we use the X-Frame-Options header to append X-Frame-Options DENY (or SAMEORIGIN, or at least ALLOW-FROM)?
7. Can we implement the X-Content-Type-Options HTTP header with “nosniff”?
8. Can we implement the X-XSS-Protection header with “1; mode=block”?
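Items 2-8 correspond to standard Apache directives; a sketch of what the hardening fragment might look like (the directive names are stock Apache, but whether oVirt tolerates each change is exactly the open question here):

```apache
# httpd hardening sketch -- verify against oVirt's shipped ssl.conf and
# ovirt-engine vhost before applying any of it
TraceEnable off                                          # item 2: block TRACE
ServerTokens Prod                                        # item 3: hide version
ServerSignature Off
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1                   # item 4: TLS 1.2+ only
Header always set Strict-Transport-Security "max-age=31536000"  # item 5: HSTS
Header always set X-Frame-Options "SAMEORIGIN"           # item 6
Header always set X-Content-Type-Options "nosniff"       # item 7
Header always set X-XSS-Protection "1; mode=block"       # item 8
```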
I know, this is quite a bit. But maybe you know the answers.
BR
Aleksandr
3 years, 1 month
engine - gluster volume import
by penguin pages
I keep going through cycles to get a HCI cluster to deploy.
Gluster is working fine. Standard from HCI wizard:
Brick thorst.penguinpages.local:/gluster_bricks/vmstore/vmstore     49154  0    Y  40968
Brick odinst.penguinpages.local:/gluster_bricks/vmstore/vmstore     49157  0    Y  3771
Brick medusast.penguinpages.local:/gluster_bricks/vmstore/vmstore   49154  0    Y  5138
Self-heal Daemon on localhost                                       N/A    N/A  Y  41172
Self-heal Daemon on medusast.penguinpages.local                     N/A    N/A  Y  5150
Self-heal Daemon on odinst.penguinpages.local                       N/A    N/A  Y  3142
What I think happens when the engine is installed "on existing disk" is that it does not install the gluster components needed to present those volumes to the engine for deployment.
What service, package, or procedure gets the engine to "add" the gluster volumes?
Ex: When I click Storage -> Volumes, no volumes are listed, and in the "New Volume" dialog all drop-downs are blank and cannot be populated.
3 years, 1 month
Commvault
by Colin Coe
Hi all
My workplace is considering replacing Arcserve (which has been terrible)
with Commvault.
Our virtualisation is a mix of about 95% RHV and 5% HyperV. We have a
handful of physical servers, a 50/50 mix of Windows and RHEL.
We run dual hot data centers, and want to backup locally (disk to disk)
then replicate to the other data center. Cloud is off the table.
Anyone have any experience with Commvault? War stories?
Happy to be pointed at other alternatives.
Thanks
3 years, 1 month
Setup Hosts Network Disabled
by Andrei Verovski
Hi !
I ran into a problem that looks like a software bug.
Network -> Networks -> My_Net_Name -> Hosts
The "Setup Hosts Network" button is disabled (greyed out). I deleted this network, created it again, and restarted the hosted engine - no change.
Is it possible to fix this, for example from the command line?
Thanks in advance.
3 years, 1 month
ERROR: Installing oVirt Node & Hosted-Engine on one physical server
by ivanpashchuk@ipoft.com
oVirt node 4.4.4
Question 1: In general, can I install oVirt Node and the hosted engine on one physical server?
Question 2: What domain names should I specify in the DNS server, and which IPs should I reserve?
Question 3: "Failed to connect to the host via ssh: ssh: connect to host ovirt-engine-01.local port 22: No route to host" - what is trying to establish a connection via SSH, and to where?
Can someone reply to questions 1-3 above? Any help will be much appreciated.
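On question 2, a sketch of the name/address pairs typically involved (hostnames and addresses below are placeholders; the engine VM needs its own FQDN and reserved IP, distinct from the host's):

```
# /etc/hosts-style sketch (placeholder addresses); the same pairs should
# also exist in DNS, with forward and reverse records
192.168.1.10   node-01.local            # the physical oVirt Node host
192.168.1.11   ovirt-engine-01.local    # the hosted-engine VM (reserved IP)
```

This also bears on question 3: judging from the log, it is the deployment playbook on the host that tries to SSH to the engine VM at its FQDN, so that name must resolve to the address the engine VM actually gets (the second error shows the engine VM IP not matching what ovirt-engine-01.local resolves to).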
Thanks for the help,
Ivan.
Logs:
ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 fatal: [localhost]: FAILED! => {"attempts": 180, "changed": true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.565239", "end": "2021-03-03 13:36:54.775239", "rc": 0, "start": "2021-03-03 13:36:54.210000", "stderr": "", "stderr_lines": [], "stdout": "{\"1\": {\"host-id\": 1, \"host-ts\": 16377, \"score\": 3400, \"engine-status\": {\"vm\": \"up\", \"health\": \"bad\", \"detail\": \"Up\", \"reason\": \"failed liveliness check\"}, \"hostname\": \"mng-ovirt-engine-01.local\", \"maintenance\": false, \"stopped\": false, \"crc32\": \"2b3ee5d1\", \"conf_on_shared_storage\": true, \"local_conf_timestamp\": 16377, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=16377 (Wed Mar 3 13:36:48 2021)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=16377 (Wed Mar 3 13:36:48 2021)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=
False\\n\", \"live-data\": true}, \"global_maintenance\": false}", "stdout_lines": ["{\"1\": {\"host-id\": 1, \"host-ts\": 16377, \"score\": 3400, \"engine-status\": {\"vm\": \"up\", \"health\": \"bad\", \"detail\": \"Up\", \"reason\": \"failed liveliness check\"}, \"hostname\": \"mng-ovirt-engine-01.local\", \"maintenance\": false, \"stopped\": false, \"crc32\": \"2b3ee5d1\", \"conf_on_shared_storage\": true, \"local_conf_timestamp\": 16377, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=16377 (Wed Mar 3 13:36:48 2021)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=16377 (Wed Mar 3 13:36:48 2021)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", \"live-data\": true}, \"global_maintenance\": false}"]}
ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM IP address is while the engine's he_fqdn ovirt-engine-01.local resolves to 10.0.2.250. If you are using DHCP, check your DHCP reservation configuration"}
ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host ovirt-engine-01.local port 22: No route to host", "skip_reason": "Host localhost is unreachable", "unreachable": true}
3 years, 1 month
Strange Glitches with VM and Network (oVirt 4.4.4.7-1.el8)
by Andrei Verovski
Hi,
I’m running oVirt 4.4.4.7-1.el8 and need to connect one of the VMs straight to the ISP link via an Ethernet cable.
oVirt already has 2 networks (ovirt mgmt local + DMZ).
I created a new network and assigned it to an available physical interface of the HP ProLiant, connected via cable to the ISP switch with a public IP.
Options of this network: (VM Network = on, Port Isolation = off, NIC Type = VirtIO, the rest are defaults).
VM is Debian 10.
The link works, but with strange artefacts: if the VM is left idle for a while, it can't be reached or pinged from outside until I initiate pings from the VM itself.
I have only 2 IPs from this ISP so I’m sure there are no IP address conflicts.
Another port and public IP go to our VyOS router handling internal and DMZ zone.
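This pattern (reachable again only after outbound traffic) often points to the upstream switch or ISP gateway aging out the VM's ARP/MAC entry; one hedged workaround, assuming that is the cause, is to send periodic gratuitous ARP from inside the VM (the interface name and public IP below are placeholders):

```
# /etc/cron.d/garp -- inside the Debian 10 VM, refresh the upstream ARP
# entry every minute (requires the iputils-arping package; assumption:
# the problem is ARP/MAC aging on the ISP side)
* * * * * root /usr/sbin/arping -U -c 1 -I eth0 203.0.113.10
```

If the workaround helps, that would confirm the aging theory; the cleaner fix would then be on the ISP/switch side.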
How can I fix this problem?
Thanks in advance.
Andrei
3 years, 1 month