Error while making hosts
by krisibo01@gmail.com
Hello,
I am a student and as a school project I have to build a virtual environment using oVirt. I just installed the oVirt Engine with the recommended configuration from the documentation, and when I try to add a new host I get this error:
Error while executing action: Cannot add Host. Connecting to host via SSH has failed, verify that the host is reachable (IP address, routable address etc.) You may refer to the engine.log file for further details.
When I checked the log file, the only relevant entry was:
2020-02-06 03:59:24,594-05 ERROR [org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-30) [d54ee4df-32c8-474c-b315-51ad1131091a] Failed to establish session with host 'test': SSH connection timed out connecting to 'root(a)10.10.0.20'
I was trying to add the host following these steps:
Adding a Host to the oVirt Engine
1. Click Compute → Hosts.
2. Click New.
3. Use the drop-down list to select the Data Center and Host Cluster for the new host.
4. Enter the Name and the Hostname of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field.
5. Select an authentication method to use for the Engine to access the host.
Enter the root user’s password to use password authentication.
Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.
6. Click the Advanced Parameters button to expand the advanced host settings.
i. Optionally disable automatic firewall configuration.
ii. Optionally add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
7. Optionally configure Power Management, SPM, Console, Network Provider, and Kernel. See Explanation of Settings and Controls in the New Host and Edit Host Windows for more information. Hosted Engine is used when deploying or undeploying a host for a self-hosted engine deployment.
8. Click OK.
The new host displays in the list of hosts with a status of Installing, and you can view the progress of the installation in the details pane. After a brief delay the host status changes to Up.
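For a timeout like the one above, a quick reachability check run from the Engine machine usually narrows things down. This is only a sketch (10.10.0.20 and port 22 are taken from the log and the New Host dialog; adjust as needed):

# run on the oVirt Engine machine, not on the host
ping -c 3 10.10.0.20                                  # basic IP reachability / routing
nc -zv 10.10.0.20 22                                  # is the SSH port open, or is a firewall in the way?
ssh -o ConnectTimeout=10 root@10.10.0.20 hostname     # can the Engine actually log in as root?

If any of these hang or fail, the problem is plain networking or firewalling between the Engine and the host rather than anything oVirt-specific.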
Re: Power Management - drac5
by eevans@digitaldatatechs.com
I have the same issue on Dell R710s. Power management is optional and I don't use it; it doesn't affect operation either way. I am connected to an APC Smart-UPS 3000 and it will not find the APC either.
Any other opinions would be welcome. He's right: there is no documentation about this issue.
Eric Evans
Digital Data Services LLC.
304.660.9080
-----Original Message-----
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Monday, February 03, 2020 12:47 PM
To: users <users(a)ovirt.org>
Subject: [ovirt-users] Power Management - drac5
I have 3 Dell R410s with iDRAC6 Enterprise capability. I am trying to get power management set up, but the test will not pass and I am not finding the docs very helpful.
I have put in the IP, username, password, and drac5 as the type. I have tested both with and without "secure" checked and always get "Test failed: Internal JSON-RPC error".
idrac log shows:
2020 Feb 3 17:41:22 os[19772] root closing session from 192.168.1.12
2020 Feb 3 17:41:17 os[19746] root login from 192.168.1.12
Can someone please guide me in the right direction?
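One thing worth trying is the fence agent on the command line from one of the oVirt hosts, outside the engine. This is only a sketch (the iDRAC IP, user and password below are placeholders, and it assumes the fence-agents packages are installed); for iDRAC6, the ipmilan agent with lanplus enabled is often the type that actually works, rather than drac5:

# run on an oVirt host
fence_ipmilan -a <idrac-ip> -l root -p <password> -P -o status   # -P enables lanplus, usually needed for iDRAC6
fence_drac5 -a <idrac-ip> -l root -p <password> -x -o status     # -x connects over SSH

Add -v to either command for verbose output if the status call fails.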
Re: issue connecting 4.3.8 node to nfs domain
by Jorick Astrego
On 2/6/20 1:44 PM, Amit Bawer wrote:
>
>
> On Thu, Feb 6, 2020 at 1:07 PM Jorick Astrego <jorick(a)netbulae.eu> wrote:
>
> Here you go, this is from the activation I just did a couple of
> minutes ago.
>
> I was hoping to see how it was first connected to the host, but it doesn't
> go that far back. Anyway, the storage domain type is set from the engine
> and vdsm never tries to guess it, as far as I saw.
I put the host in maintenance and activated it again, this should give
you some more info. See attached log.
> Could you query the engine db about the misbehaving domain and paste
> the results?
>
> # su - postgres
> Last login: Thu Feb 6 07:17:52 EST 2020 on pts/0
> -bash-4.2$ LD_LIBRARY_PATH=/opt/rh/rh-postgresql10/root/lib64/
> /opt/rh/rh-postgresql10/root/usr/bin/psql engine
> psql (10.6)
> Type "help" for help.
> engine=# select * from storage_domain_static where id =
> 'f5d2f7c6-093f-46d6-a844-224d92db5ef9' ;
engine=# select * from storage_domain_static where id =
'f5d2f7c6-093f-46d6-a844-224d92db5ef9' ;
id                                     | f5d2f7c6-093f-46d6-a844-224d92db5ef9
storage                                | b8b456f0-27c3-49b9-b5e9-9fa81fb3cdaa
storage_name                           | backupnfs
storage_domain_type                    | 1
storage_type                           | 1
storage_domain_format_type             | 4
_create_date                           | 2018-01-19 13:31:25.899738+01
_update_date                           | 2019-02-14 14:36:22.3171+01
recoverable                            | t
last_time_used_as_master               | 1530772724454
storage_description                    |
storage_comment                        |
wipe_after_delete                      | f
warning_low_space_indicator            | 10
critical_space_action_blocker          | 5
first_metadata_device                  |
vg_metadata_device                     |
discard_after_delete                   | f
backup                                 | f
warning_low_confirmed_space_indicator  | 0
block_size                             | 512
(1 row)
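As a hedged cross-check (not something from this thread, and assuming vdsm-client is available on the 4.3.8 node), comparing what vdsm itself reports for the domain with the engine row above can show where the type mismatch creeps in:

# on the misbehaving 4.3.8 node
vdsm-client StorageDomain getInfo storagedomainID=f5d2f7c6-093f-46d6-a844-224d92db5ef9
# the NFS domain also carries its own plain-text metadata on the mount:
cat /rhev/data-center/mnt/<server>:_data_ovirt/*/dom_md/metadata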
Regards,
Jorick
>
> On 2/6/20 11:23 AM, Amit Bawer wrote:
>>
>>
>> On Thu, Feb 6, 2020 at 11:07 AM Jorick Astrego <jorick(a)netbulae.eu> wrote:
>>
>> Hi,
>>
>> Something weird is going on with our oVirt node 4.3.8 install
>> mounting an NFS share.
>>
>> We have an NFS domain for a couple of backup disks and we have
>> a couple of 4.2 nodes connected to it.
>>
>> Now I'm adding a fresh cluster of 4.3.8 nodes and the
>> backupnfs mount doesn't work.
>>
>> (annoyingly, you cannot copy the text from the events view)
>>
>> The domain is up and working
>>
>> ID:f5d2f7c6-093f-46d6-a844-224d92db5ef9
>> Size:10238 GiB
>> Available:2491 GiB
>> Used:7747 GiB
>> Allocated:3302 GiB
>> Over Allocation Ratio:37%
>> Images:7
>> Path:*.*.*.*:/data/ovirt
>> NFS Version:AUTO
>> Warning Low Space Indicator:10% (1023 GiB)
>> Critical Space Action Blocker:5 GiB
>>
>> But somehow the node appears to think it's an LVM
>> volume? It tries to find the volume group but fails,
>> which is not so strange as it is an NFS volume:
>>
>>
>> Could you provide full vdsm.log file with this flow?
>>
>>
>> 2020-02-05 14:17:54,190+0000 WARN (monitor/f5d2f7c)
>> [storage.LVM] Reloading VGs failed
>> (vgs=[u'f5d2f7c6-093f-46d6-a844-224d92db5ef9'] rc=5
>> out=[] err=[' Volume group
>> "f5d2f7c6-093f-46d6-a844-224d92db5ef9" not found', '
>> Cannot process volume group
>> f5d2f7c6-093f-46d6-a844-224d92db5ef9']) (lvm:470)
>> 2020-02-05 14:17:54,201+0000 ERROR (monitor/f5d2f7c)
>> [storage.Monitor] Setting up monitor for
>> f5d2f7c6-093f-46d6-a844-224d92db5ef9 failed (monitor:330)
>> Traceback (most recent call last):
>> File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
>> line 327, in _setupLoop
>> self._setupMonitor()
>> File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
>> line 349, in _setupMonitor
>> self._produceDomain()
>> File "/usr/lib/python2.7/site-packages/vdsm/utils.py",
>> line 159, in wrapper
>> value = meth(self, *a, **kw)
>> File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py",
>> line 367, in _produceDomain
>> self.domain = sdCache.produce(self.sdUUID)
>> File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py",
>> line 110, in produce
>> domain.getRealDomain()
>> File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py",
>> line 51, in getRealDomain
>> return self._cache._realProduce(self._sdUUID)
>> File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py",
>> line 134, in _realProduce
>> domain = self._findDomain(sdUUID)
>> File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py",
>> line 151, in _findDomain
>> return findMethod(sdUUID)
>> File
>> "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py",
>> line 176, in _findUnfetchedDomain
>> raise se.StorageDomainDoesNotExist(sdUUID)
>> StorageDomainDoesNotExist: Storage domain does not exist:
>> (u'f5d2f7c6-093f-46d6-a844-224d92db5ef9',)
>>
>> The volume is actually mounted fine on the node:
>>
>> On NFS server
>>
>> Feb 5 15:47:09 back1en rpc.mountd[4899]: authenticated
>> mount request from *.*.*.*:673 for /data/ovirt (/data/ovirt)
>>
>> On the host
>>
>> mount|grep nfs
>>
>> *.*.*.*:/data/ovirt on
>> /rhev/data-center/mnt/*.*.*.*:_data_ovirt type nfs
>> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=*.*.*.*,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=*.*.*.*)
>>
>> And I can see the files:
>>
>> ls -alrt /rhev/data-center/mnt/*.*.*.*:_data_ovirt
>> total 4
>> drwxr-xr-x. 5 vdsm kvm 61 Oct 26 2016
>> 1ed0a635-67ee-4255-aad9-b70822350706
>> -rwxr-xr-x. 1 vdsm kvm 0 Feb 5 14:37 __DIRECT_IO_TEST__
>> drwxrwxrwx. 3 root root 86 Feb 5 14:37 .
>> drwxr-xr-x. 5 vdsm kvm 4096 Feb 5 14:37 ..
Met vriendelijke groet, With kind regards,
Jorick Astrego
Netbulae Virtualization Experts
----------------
Tel: 053 20 30 270 info(a)netbulae.eu Staalsteden 4-3A KvK 08198180
Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
----------------
Installing hosts w/o node installer
by Christian Reiss
Hey folks,
Quick question: if I would like to use specific versions (installed directly
on the host) and craft myself a specific environment for the hosts, am I
correct to assume that installing only the oVirt repo and a Gluster version
of my choice (cough, 7.0) would suffice? Then adding the host via an engine
(and taking the usual steps as you would for an oVirt node: SSH keys et al.)?
To my understanding the rollout is done via Ansible, so packages will be
installed if not present? The installation documentation is pretty vague on
this point (https://www.ovirt.org/download/):
1. Enable the Base, Optional, and Extra repositories
2. Install Cockpit and the cockpit-ovirt-dashboard plugin:
3. Enable Cockpit
4. Open the firewall
5. - 9. Deploy using a hosted engine.
So, by using a remote engine (i.e. adding it like an additional host):
1. Enable the Base, Optional, and Extra repositories
...and that's the end of it?
I would use CentOS 7 as a base, which I am very familiar with. I would
also set up Gluster by hand, so "only" the virtualization part needs to be
done.
This would, among other things, grant me perfect host recovery options
for a 1-host disaster case. We are looking at a base CentOS 7 install
without any deep modifications (except for some packages etc.).
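For what it's worth, a minimal sketch of that flow on a plain CentOS 7 host added from a separate engine (assuming the 4.3 release package is what you want; host-deploy then pulls in vdsm and its dependencies via Ansible when you add the host in the UI):

# on the future host, as root
yum install -y https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
yum update -y
# optional: only if you want the Cockpit dashboard on the host itself
yum install -y cockpit-ovirt-dashboard
systemctl enable --now cockpit.socket
firewall-cmd --permanent --add-service=cockpit && firewall-cmd --reload
# Gluster 7 from the repo of your choice, configured by hand as you describe,
# then add the host from the engine: Compute -> Hosts -> New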
Enjoy your weekend, all! :-)
-Christian.
--
Christian Reiss - email(a)christian-reiss.de / christian(a)reiss.nrw
XMPP chris(a)alpha-labs.net
WEB christian-reiss.de, reiss.nrw
ASCII Ribbon Campaign against HTML in emails
GPG Retrieval http://gpg.christian-reiss.de
GPG ID ABCD43C5, 0x44E29126ABCD43C5
GPG fingerprint = 9549 F537 2596 86BA 733C A4ED 44E2 9126 ABCD 43C5
"It's better to reign in hell than to serve in heaven.",
John Milton, Paradise lost.
Re: Win Server 2k19 and BlueIris
by Robert Webb
FYI:
This also fixed the previous issues I was having with 2k19 servers whose disk images I was migrating from Proxmox, which were throwing the KMODE EXCEPTION BSOD.
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Saturday, February 8, 2020 11:30 AM
To: users <users(a)ovirt.org>
Subject: [ovirt-users] Re: Win Server 2k19 and BlueIris
Update:
So after some digging, it seems this has been reported as an issue many times.
As a workaround, I added an ignore_msrs=1 entry to kvm.conf in the modprobe.d directory, and that allowed Blue Iris to boot.
It would be nice to have an actual fix for this instead of a workaround. I would really like to know why KVM thinks that ia32_debugctl is not a valid CPU option.
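For anyone searching later, the workaround described above boils down to something like this (a sketch; the file name is arbitrary as long as it sits in /etc/modprobe.d):

# /etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1

The option only takes effect once the kvm module is reloaded or the host is rebooted; on most kernels it can also be flipped at runtime with echo 1 > /sys/module/kvm/parameters/ignore_msrs.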
From: Robert Webb
Sent: Saturday, February 8, 2020 10:33 AM
To: users <users(a)ovirt.org<mailto:users@ovirt.org>>
Subject: Win Server 2k19 and BlueIris
So I have been running Windows Server 2019 on Proxmox VE with Blue Iris NVR. Tried to migrate the qcow disk over to oVirt, and like other Win2k19 servers, they all give a BSOD with a KMODE error.
Last night I installed 2k19 from scratch, installed the latest drivers I could find, oVirt-toolsSetup-4.3-3.el7, and proceeded to load up Blue Iris. As soon as I go to start Blue Iris, the VM hangs, then gives a BSOD with SYSTEM SERVICE EXCEPTION.
Looking at the debug log on the oVirt node the VM is running on, there is one entry that gets created on each crash: "kvm [4723]: vcpu1 kvm_set_msr_common: MSR_IA32_DEBUGCTLMSR 0x1, nop".
Running on oVirt node 4.3.8, cluster cpu is Intel Westmere IBRS SSBD Family, pc-i440fx-rhel7.6.0, and the physical cpu is 24x Intel(R) Xeon(R) CPU X5670 @ 2.93GHz.
Any ideas?
[ANN] oVirt 4.3.9 First Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.9 First Release Candidate for testing, as of January 30th, 2020.
This update is a release candidate of the ninth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
* oVirt Node 4.3 (available for x86_64 only) has been built consuming
CentOS 7.7 Release
See the release notes [1] for known issues, new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node will be available soon
Additional Resources:
* Read more about the oVirt 4.3.9 release highlights:
http://www.ovirt.org/release/4.3.9/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.9/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
VM not getting up
by Crazy Ayansh
[image: image.png]
The disk is already attached. All VMs on the same server are giving the same error. Help,
please.
3 Node Hyperconverged Install fails with - Error: the target storage domain contains only 46.0GiB of available space while a minimum of 61.0Gi
by Dirk Rydvan
Hello ovirts,
I am trying to create a 3-node hyperconverged cluster with the Cockpit GUI, but it fails at the last step ("5") of the Hosted Engine setup:
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error: the target storage domain contains only 46.0GiB of available space while a minimum of 61.0GiB is required If you wish to use the current target storage domain by extending it, make sure it contains nothing before adding it."}
I checked the volumes on all 3 nodes; only node1 shows a mount, node2 and node3 do not:
[root@node1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
<snip>
node43.infra.qivicon.de:/engine 52G 3.0G 47G 7% /rhev/data-center/mnt/glusterSD/node43.infra.qivicon.de:_engine
/dev/mapper/gluster_vg_nvme0n1-gluster_lv_data 500G 34M 500G 1% /gluster_bricks/data
/dev/mapper/gluster_vg_nvme0n1-gluster_lv_engine 100G 33M 100G 1% /gluster_bricks/engine
/dev/mapper/gluster_vg_nvme0n1-gluster_lv_vmstore 500G 33M 500G 1% /gluster_bricks/vmstore
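A hedged cross-check (volume and path names taken from the df output above) to see what the engine volume really exposes and whether node2 and node3 carry healthy bricks:

gluster volume info engine
gluster volume status engine
gluster volume heal engine info      # replica volumes only; shows bricks with pending heals
df -h /rhev/data-center/mnt/glusterSD/node43.infra.qivicon.de:_engine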
I created the Gluster setup beforehand with:
Hosts:
node1
node2
node3
Default Gluster values plus cache
/gluster_bricks/engine 100GB /dev/nvme0n1 (Arbiter unchecked)
/gluster_bricks/data 500GB /dev/nvme0n1 (Arbiter unchecked)
/gluster_bricks/vmstore 500GB /dev/nvme0n1 (Arbiter unchecked)
cache /dev/nvme1n1 10GB writeback
The already existing thread does not help:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/W2CM74EU6KPP...
Any idea where the problem is?
Many thanks in advance.