ovirt hosted-engine on iSCSI offering one target
by wodel youchi
Hi,
We have an oVirt platform running version 4.1.
When the platform was installed, it was made up of:
- Two HP Proliant DL380 G9 as hypervisors
- One HP MSA1040 for iSCSI
- One Synology for NFS
- Two switches, one for network/vm traffic, the second for storage traffic.
The problem: the hosted-engine storage domain was created using iSCSI on the HP
MSA, and this disk array does not offer the possibility to create different
targets; it presents just one target.
At the time we created both the hosted-engine domain and the first data domain
using that same target, and we didn't pay attention to the warning saying "if
you are using iSCSI storage, do not use the same iSCSI target for the shared
storage domain and data storage domain".
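To illustrate the layout, this is roughly how the single-target situation can be
inspected from one of the hypervisors (a sketch only; the portal IP is a
placeholder):

# Discover what the MSA presents (portal IP is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.168.10.10
# Show the logged-in session with every LUN/device behind it; with a single
# target, the hosted-engine LUN and the data-domain LUN both appear under
# the same IQN
iscsiadm -m session -P 3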
Questions:
- What problems can this (mis-)configuration cause?
- Is correcting the configuration a must?
Regards.
5 years
hyperconverged single node with SSD cache fails gluster creation
by thomas@hoberg.net
I am seeing more success than failures at creating single and triple node hyperconverged setups after some weeks of experimentation, so I am branching out to additional features: in this case, the ability to use SSDs as cache media for hard disks.
I first tried with a single node that combined caching and compression, and that fails during the creation of the LVs.
I tried again without VDO compression, but the results were identical, whereas VDO compression without the LV cache worked fine.
I tried various combinations, using less space etc., but the results are always the same and unfortunately rather cryptic (I have substituted the physical disk label with {disklabel}):
TASK [gluster.infra/roles/backend_setup : Extend volume group] *****************
failed: [{hostname}] (item={u'vgname': u'gluster_vg_{disklabel}p1', u'cachethinpoolname': u'gluster_thinpool_gluster_vg_{disklabel}p1', u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_{disklabel}p1', u'cachedisk': u'/dev/sda4', u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_{disklabel}p1', u'cachemode': u'writeback', u'cachemetalvsize': u'70G', u'cachelvsize': u'630G'}) => {"ansible_loop_var": "item", "changed": false, "err": " Physical volume \"/dev/mapper/vdo_{disklabel}p1\" still in use\n", "item": {"cachedisk": "/dev/sda4", "cachelvname": "cachelv_gluster_thinpool_gluster_vg_{disklabel}p1", "cachelvsize": "630G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_{disklabel}p1", "cachemetalvsize": "70G", "cachemode": "writeback", "cachethinpoolname": "gluster_thinpool_gluster_vg_{disklabel}p1", "vgname": "gluster_vg_{disklabel}p1"}, "msg": "Unable to reduce gluster_vg_{disklabel}p1 by /dev/dm-15.", "rc": 5}
Somewhere within that I see something that points to a race condition ("still in use").
Unfortunately I have not been able to pinpoint the raw logs used at that stage, so I wasn't able to obtain more info.
At this point quite a bit of the storage setup is already done, so rolling back for a clean new attempt can be a bit complicated, with reboots to reconcile the kernel with the data on disk.
I don't actually believe it's related to single-node setups, and I'd be quite happy to move the creation of the SSD cache to a later stage, but on top of VDO that looks slightly complex to someone without intimate knowledge of LVM-with-cache-and-perhaps-thin/VDO/Gluster all thrown into one.
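For what it's worth, adding the cache by hand after deployment would presumably look something like the following (a sketch only, reusing the names and sizes from the failed task above; untested, especially on top of VDO):

# Add the SSD partition to the existing volume group
vgextend gluster_vg_{disklabel}p1 /dev/sda4
# Create a cache pool on the SSD (data + metadata, sizes from the failed task)
lvcreate --type cache-pool -L 630G --poolmetadatasize 70G \
    -n cachelv_gluster_thinpool_gluster_vg_{disklabel}p1 \
    gluster_vg_{disklabel}p1 /dev/sda4
# Attach it to the existing thin pool in writeback mode
lvconvert --type cache --cachemode writeback \
    --cachepool gluster_vg_{disklabel}p1/cachelv_gluster_thinpool_gluster_vg_{disklabel}p1 \
    gluster_vg_{disklabel}p1/gluster_thinpool_gluster_vg_{disklabel}p1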
Needless to say, the feature set (SSD caching & compression/dedup) sounds terribly attractive, but when things don't just work it's more terrifying.
5 years
Beginning oVirt / Hyperconverged
by email@christian-reiss.de
Hey folks!
Quick question, really: I have 4 servers of identical hardware. The documentation says "you need 3", not "you need 3 or more"; is it possible to run hyperconverged with 4 servers (in the same rack; let's neglect the possibility of two servers failing (split brain))?
Also: I have a nice separate server (low power footprint, big on CPU (8 cores) and 32 GB RAM); I would love to use this as the engine server only. Can I initiate a hyperconverged system with 4 working hosts starting from a stand-alone engine? Or is a hosted (HA) engine on the cluster a must for "hyperconverged"?
One last one: installing oVirt nodes, obviously. This will by default create an LVM layout across the entire RAID volume; can I simply create folders (with the correct permissions) and use those as Gluster bricks? Or do I need to partition the RAID in a special way?
Thanks for your pointers,
Chris.
5 years
oVirt change IP's & add new ISO share
by Jonathan Mathews
Good Day
I have to change the IP addresses of the oVirt Engine, hosts and storage to
a new IP range. Please, can you advise the best way to do this and if there
is anything I would need to change in the database?
I have also run into an issue where someone has removed the ISO share/data
on the storage, so I am unable to remove, activate, detach or even add a
new ISO share.
Please, can you advise the best way to resolve this?
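If it helps with the triage, the domain UUID from the errors below can presumably be cross-checked against what the engine database still records (a sketch only, assuming the default 'engine' database name and the standard storage_domains view; run on the engine host):

# On the engine host; 'engine' is the default database name (an assumption)
su - postgres -c "psql engine -c 'SELECT id, storage_name FROM storage_domains;'"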
Please see the below engine logs:
2019-10-30 11:39:13,918 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Failed in
'DetachStorageDomainVDS' method
2019-10-30 11:39:13,942 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: VDSM command failed: Storage
domain does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',)
2019-10-30 11:39:13,943 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8]
IrsBroker::Failed::DetachStorageDomainVDS: IRSGenericException:
IRSErrorException: Failed to DetachStorageDomainVDS, error = Storage domain
does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',), code = 358
2019-10-30 11:39:13,951 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] FINISH,
DetachStorageDomainVDSCommand, log id: 5547e2df
2019-10-30 11:39:13,951 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-38) [58f6cfb8] -- executeIrsBrokerCommand:
Attempting on storage pool '5849b030-626e-47cb-ad90-3ce782d831b3'
2019-10-30 11:39:13,951 ERROR
[org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Command
'org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand'
failed: EngineException:
org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException:
IRSGenericException: IRSErrorException: Failed to DetachStorageDomainVDS,
error = Storage domain does not exist:
(u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',), code = 358 (Failed with error
StorageDomainDoesNotExist and code 358)
2019-10-30 11:39:13,952 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-38) [58f6cfb8] START,
HSMGetAllTasksInfoVDSCommand(HostName = host01,
VdsIdVDSCommandParametersBase:{runAsync='true',
hostId='291a3a19-7467-4783-a6f7-2b2dd0de9ad3'}), log id: 6cc238fb
2019-10-30 11:39:13,952 INFO
[org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Command
[id=cec030b7-4a62-43a2-9ae8-de56a5d71ef8]: Compensating CHANGED_STATUS_ONLY
of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap;
snapshot:
EntityStatusSnapshot:{id='StoragePoolIsoMapId:{storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3',
storageId='42b7d819-ce3a-4a18-a683-f4817c4bdb06'}', status='Inactive'}.
2019-10-30 11:39:13,975 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-27) [58f6cfb8] Correlation ID: 28ac658, Job
ID: b31e0f44-2d82-47bf-90d9-f69e399d994f, Call Stack: null, Custom Event
ID: -1, Message: Failed to detach Storage Domain iso to Data Center
Default. (User: admin@internal)
2019-10-30 11:42:46,711 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [31e89bba] START,
SPMGetAllTasksInfoVDSCommand( IrsBaseVDSCommandParameters:{runAsync='true',
storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3',
ignoreFailoverLimit='false'}), log id: 59192768
2019-10-30 11:42:48,825 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-34) [31e89bba] Failed in
'ActivateStorageDomainVDS' method
2019-10-30 11:42:48,855 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-34) [31e89bba] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: VDSM command failed: Storage
domain does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',)
2019-10-30 11:42:48,856 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(org.ovirt.thread.pool-8-thread-34) [31e89bba]
IrsBroker::Failed::ActivateStorageDomainVDS: IRSGenericException:
IRSErrorException: Failed to ActivateStorageDomainVDS, error = Storage
domain does not exist: (u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',), code =
358
2019-10-30 11:42:48,864 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-34) [31e89bba] FINISH,
ActivateStorageDomainVDSCommand, log id: 518fdcf
2019-10-30 11:42:48,865 ERROR
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-34) [31e89bba] Command
'org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand' failed:
EngineException:
org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException:
IRSGenericException: IRSErrorException: Failed to ActivateStorageDomainVDS,
error = Storage domain does not exist:
(u'42b7d819-ce3a-4a18-a683-f4817c4bdb06',), code = 358 (Failed with error
StorageDomainDoesNotExist and code 358)
2019-10-30 11:42:48,865 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [31e89bba] -- executeIrsBrokerCommand:
Attempting on storage pool '5849b030-626e-47cb-ad90-3ce782d831b3'
2019-10-30 11:42:48,865 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [31e89bba] START,
HSMGetAllTasksInfoVDSCommand(HostName = host02,
VdsIdVDSCommandParametersBase:{runAsync='true',
hostId='3673a0e1-721d-40ba-a179-b1f13a9aec43'}), log id: 47ef923b
2019-10-30 11:42:48,866 INFO
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-34) [31e89bba] Command
[id=68bf0e1e-6a0b-41cb-9cad-9eb2bf87c5ee]: Compensating CHANGED_STATUS_ONLY
of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap;
snapshot:
EntityStatusSnapshot:{id='StoragePoolIsoMapId:{storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3',
storageId='42b7d819-ce3a-4a18-a683-f4817c4bdb06'}', status='Inactive'}.
2019-10-30 11:42:48,888 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-34) [31e89bba] Correlation ID: 5b208434,
Job ID: dff5f615-9dc4-4d79-a37e-5c6e99a2cc6b, Call Stack: null, Custom
Event ID: -1, Message: Failed to activate Storage Domain iso (Data Center
Default) by admin@internal
Thanks
Jonathan
5 years, 1 month
DHCP Client in Guest VM does not work on ovirtmgmt
by ccesario@blueit.com.br
Hello,
Is there any special configuration needed to use a DHCP client in a guest VM with the ovirtmgmt/ovirtmgmt vNIC profile?
Currently I have a VM using the ovirtmgmt/ovirtmgmt vNIC profile with its interface configured as a DHCP client, and DHCP does not work on that profile. But if I assign a static IP address from the same range as the DHCP server, communication works.
And if I use another vNIC profile on another VLAN with another DHCP server, it works.
It seems the ovirtmgmt/ovirtmgmt profile filters the DHCP protocol.
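If a filter is indeed the culprit, one way to see which network filter (if any) is attached to the VM's vNIC on the host might be (a sketch; "myvm" is a placeholder name):

# On the hypervisor currently running the VM; "myvm" is a placeholder
virsh -r list --all
virsh -r dumpxml myvm | grep -B2 -A4 filterref    # any <filterref> on the vNIC
virsh -r nwfilter-list                            # network filters defined on the host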
Does someone have an idea how to allow DHCP to work on the ovirtmgmt/ovirtmgmt vNIC profile?
Best regards
Carlos
5 years, 1 month
External networks issue after upgrading to 4.3.6
by ada per
After upgrading to the latest stable version, the external networks lost all
functionality. Under Providers --> ovn-network-provider, the test runs
successfully.
But when I'm creating an external provider network, attaching it to a router
as a LAN and setting up a DHCP lease, it is not reachable from other VMs on the
same network. The hosts and the hosted engine can't seem to ping it either.
I tried disabling the firewalls on the hosted engines, the VMs and the host, and
still nothing.
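In case it helps with debugging, the OVN side can be inspected roughly like this (a sketch; it assumes a default setup where ovirt-provider-ovn runs the OVN central databases on the engine host):

# On the engine host (OVN central in a default deployment -- an assumption)
ovn-nbctl show        # logical switches/routers as created by the provider
ovn-sbctl show        # chassis (hosts) registered in the southbound DB
# On a hypervisor
ovs-vsctl show        # br-int and the Geneve tunnels towards the other hosts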
When configuring logical networks or VLANs they work perfectly; the one
problem is the external networks.
I have another environment running on a previous version of oVirt and it
works perfectly there. I think it's a bug.
Thanks for your help
5 years, 1 month
Can ovirt-node support multiple network cards ?
by wangyu13476969128@126.com
Can ovirt-node support multiple network cards?
I downloaded ovirt-node-ng-installer-4.3.5-2019073010.el7.iso from the oVirt site and installed it on a Dell R730 server.
The version of ovirt-engine is 4.3.5.5.
I found that the logical network ovirtmgmt can only point to one network card interface.
Can ovirt-node support multiple network cards?
If yes, please tell me the method.
5 years, 1 month