Storage advice
by Brain Recursion
I have a small oVirt cluster running, but I have been having problems with
the storage and would like to rebuild the storage infrastructure from
scratch. Currently, storage is provided by a single server running
Windows Storage Server, serving oVirt via iSCSI. I also have a smaller
storage server which is currently unused. The oVirt cluster is not
running a production environment, but ideally I do not want to have to power
everything off to patch the storage servers and the oVirt cluster.
- 1x storage server: RAID 10, 24TB usable, 2x 10Gb Ethernet
- 1x storage server: RAID 10, 4TB usable, 4x 1Gb Ethernet
- 8x oVirt hosts: 1x 10Gb Ethernet each
- 1x 24-port 10Gb switch
What would be the best way to utilise the storage servers?
I was thinking about installing CentOS on both servers and running Gluster,
with a 4TB replicated volume across both servers for the hosted engine and
other critical VMs, and a 20TB non-replicated Gluster volume running on
just the larger storage server for non-critical VMs. I have another spare
server which I could potentially use as an arbiter node. Would this work, or
would I have huge problems because the hardware performance of the two
storage servers is so different?
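For reference, the layout described above might be created roughly as follows. This is a sketch only, assuming CentOS with glusterfs-server installed on all three boxes; the hostnames (big-store, small-store, arbiter) and brick paths are placeholders, not from the original setup:

```shell
# From one node, probe the other peers:
gluster peer probe small-store
gluster peer probe arbiter

# ~4TB replicated volume for the hosted engine and critical VMs,
# using the spare server as an arbiter (replica 3 arbiter 1 means
# two data bricks plus one metadata-only arbiter brick):
gluster volume create engine replica 3 arbiter 1 \
    big-store:/bricks/engine/brick \
    small-store:/bricks/engine/brick \
    arbiter:/bricks/engine/brick
gluster volume start engine

# ~20TB single-brick (non-replicated) volume on the larger server only:
gluster volume create vmstore big-store:/bricks/vmstore/brick
gluster volume start vmstore
```

One caveat worth flagging for the hardware-mismatch question: in a replicated volume, a write completes only once both data bricks have acknowledged it, so the replicated volume will run at roughly the speed of the slower server and its NICs. The mismatch matters much less for the arbiter, which stores only metadata.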
Any advice appreciated.
6 years, 1 month
Home ovirt setup (new to ovirt)
by ryancoeit@gmail.com
Hello
My name's Ryan and I'm studying for my RHCSA and RHCE; I'm also a Linux enthusiast. I'm setting up my first oVirt server and have my oVirt engine set up, as well as my data center and host. I've come to the point where I need to configure storage. oVirt is installed on a 256GB NVMe SSD, with my ISO and master data domains pointed at that drive.
I have two WD 2TB hard drives available that haven't been allocated yet. I'd like them combined with LVM, with a single share exported via NFS that all VMs (3 in total) can access for data storage.
Is this possible?
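For what it's worth, this is possible; one way to do it is sketched below. The device names (/dev/sdb, /dev/sdc) and the export path are assumptions:

```shell
# Combine the two 2TB drives into one LVM volume group and carve
# a single logical volume out of all the free space:
pvcreate /dev/sdb /dev/sdc
vgcreate vg_data /dev/sdb /dev/sdc
lvcreate -l 100%FREE -n lv_data vg_data
mkfs.xfs /dev/vg_data/lv_data

# Mount it and export it via NFS with the ownership oVirt expects:
mkdir -p /exports/data
echo '/dev/vg_data/lv_data /exports/data xfs defaults 0 0' >> /etc/fstab
mount /exports/data
chown 36:36 /exports/data   # vdsm:kvm, required for oVirt storage domains
echo '/exports/data *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -ra
```

Note that this is a plain linear LVM volume with no redundancy: losing either drive loses the whole share, so it is only suitable for data you can afford to rebuild.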
Kind Regards
Ryan
removing FC storage
by David David
How do I remove an FC storage domain correctly?
I removed the FC storage domain from the cluster (maintenance -> detach ->
remove), but the volume group still remained on the hosts, and I would have
to remove it manually and clear the multipath links too, before I
turn off the LUN on the FC storage.
How will the cluster behave if there are differences in the volume groups
on the hosts? I can't delete the partitions and multipath links on all
hosts simultaneously; this process will happen sequentially on each host.
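The per-host cleanup usually looks something like the sketch below; the VG name and WWID are placeholders to be read from vgs and multipath -ll output:

```shell
# Identify the leftover volume group and its multipath device:
vgs
multipath -ll

# Deactivate and remove the old domain's VG. Since LVM metadata
# lives on the LUN itself, vgremove only needs to run once, from
# any one host:
vgchange -an <vg-name>
vgremove <vg-name>

# On each host in turn: flush the stale multipath map and delete
# the underlying SCSI path devices before unmapping the LUN:
multipath -f <wwid>
echo 1 > /sys/block/sdX/device/delete   # repeat for each path device
```

Because the VG removal propagates via the shared LUN, doing the multipath flush sequentially host by host should not leave the cluster with conflicting volume groups, only with stale device maps on hosts not yet cleaned.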
Cannot delete a stuck task
by Markus Schaufler
Hi,
The task "Adding disk" is stuck at "creating volume".
I tried to delete the task with "taskcleaner.sh" and at the CLI by listing/removing jobs, and finally I deleted the VM and restarted the engine VM.
But it's still in the task list.
Any idea how to remove this job?
Gluster JSON-RPC errors
by Maton, Brett
I'm seeing the following errors in the event log every 10 minutes
for each participating host in the Gluster cluster:
GetGlusterVolumeHealInfoVDS failed: Internal JSON-RPC error: {'reason':
"'bool' object has no attribute 'getiterator'"}
Gluster brick health is good.
Any ideas?
oVirt 4.2.7.2-1.el7
CentOS 7
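The message itself is just Python reporting that something in vdsm's heal-info XML handling received a boolean where it expected an ElementTree node (getiterator is an ElementTree method). A minimal reproduction of that error class, purely illustrative and not vdsm code:

```python
# Hypothetical: a parse/status step returned a flag instead of a tree,
# and the caller then treats it as an ElementTree object.
tree = True

try:
    tree.getiterator()
except AttributeError as exc:
    print(exc)  # prints: 'bool' object has no attribute 'getiterator'
```

That pattern (a status boolean leaking into XML-parsing code) points at a vdsm-side bug rather than an actual brick health problem, which matches the healthy bricks you are seeing.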
New oVirt deployment suggestions
by Stefano Danzi
Hello!
I'm almost ready to start with a new oVirt deployment. I will use CentOS
7, a self-hosted engine, and Gluster storage.
I have 3 physical hosts. Each host has four NICs. My first idea is:
- configure a bond between the NICs
- configure a VLAN interface for the management network (and local LAN)
- configure a VLAN interface for the Gluster network
- configure Gluster for the hosted engine
- start the "hosted-engine --deploy" process
Is this enough? Do I need a dedicated physical NIC for the management network?
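As a sketch, the bond-plus-VLANs steps above could look like the ifcfg files below. The bond mode, VLAN IDs (100 for management, 200 for Gluster), and addresses are assumptions for illustration; each physical NIC would additionally need MASTER=bond0 and SLAVE=yes in its own ifcfg file:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_OPTS="mode=802.3ad miimon=100"
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0.100   (management / ovirtmgmt)
DEVICE=bond0.100
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.100.11
PREFIX=24

# /etc/sysconfig/network-scripts/ifcfg-bond0.200   (gluster)
DEVICE=bond0.200
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.200.11
PREFIX=24
```

A dedicated physical NIC for management is not required; the management network can sit on a VLAN over the bond, as long as the hosted-engine deployment can reach it during setup.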
Bye
VM Portal noVNC Console invocation
by briwils2@cisco.com
When I use the VM Portal and invoke a console, I'm not sure how I can leverage the HTML5 noVNC version. I can only get the .vv file, and would like to use a web-based console, or rather provide one to users of an engine.
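One engine-side knob worth checking is the default VNC client invocation. A sketch, to be run on the engine host; verify the exact option name and values with engine-config -l before setting it:

```shell
# Make noVNC the default VNC console invocation engine-wide,
# then restart the engine for the change to take effect:
engine-config -s ClientModeVncDefault=NoVnc
systemctl restart ovirt-engine
```

With that in place (and the VM's graphics protocol set to VNC), invoking the console should open the browser-based noVNC client instead of downloading a .vv file.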
TIA
Brian
cockpit-networkmanager
by Jonathan Baecker
Hello Everybody,
I just wanted to ask whether the oVirt hosts need the
cockpit-networkmanager package.
I ask because I cannot update my CentOS hosts; I always get the message:
Transaction check error:
file /usr/share/cockpit/networkmanager/manifest.json from install
of cockpit-system-176-2.el7.centos.noarch conflicts with file from
package cockpit-networkmanager-172-1.el7.noarch
file /usr/share/cockpit/networkmanager/po.ca.js.gz from install
of cockpit-system-176-2.el7.centos.noarch conflicts with file from
package cockpit-networkmanager-172-1.el7.noarch
file /usr/share/cockpit/networkmanager/po.cs.js.gz from install
of cockpit-system-176-2.el7.centos.noarch conflicts with file from
package cockpit-networkmanager-172-1.el7.noarch
...
When I remove cockpit-networkmanager the error is gone, but after
running a yum update I'm not able to reinstall cockpit-networkmanager
because it still wants to use the old version.
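The conflict listing above suggests the networkmanager files moved into the cockpit-system package itself at version 176, so the two packages can no longer coexist. Assuming nothing else on the host requires the separate subpackage, the usual way out is:

```shell
# Remove the obsolete subpackage, then update; cockpit-system 176
# ships /usr/share/cockpit/networkmanager on its own:
yum remove cockpit-networkmanager
yum update cockpit-system
```

After that, the Cockpit networking page should still work, since its code now comes from cockpit-system rather than the removed package.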
Jonathan
master domain wont activate
by Vincent Royer
I was attempting to migrate from nfs to iscsi storage domains. I have
reached a state where I can no longer activate the old master storage
domain, and thus no others will activate either.
I'm ready to give up on the installation and just move to an HCI deployment
instead. Wipe all the hosts clean and start again.
My plan was to create and use an export domain, then wipe the nodes and set
them up as HCI, where I could re-import the VMs. But without being able to
activate a master domain, I can't create the export domain.
I'm not sure why it can't find the master anymore, as nothing has happened
to the NFS storage, but the error in vdsm says it just can't find it:
StoragePoolMasterNotFound: Cannot find master domain:
u'spUUID=5a77bed1-0238-030c-0122-0000000003b3,
msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68'
2018-10-03 22:40:33,751-0700 INFO (jsonrpc/3) [storage.TaskManager.Task]
(Task='83f33db5-90f3-4064-87df-0512ab9b6378') aborting: Task is aborted:
"Cannot find master domain: u'spUUID=5a77bed1-0238-030c-0122-0000000003b3,
msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68'" - code 304 (task:1181)
2018-10-03 22:40:33,751-0700 ERROR (jsonrpc/3) [storage.Dispatcher] FINISH
connectStoragePool error=Cannot find master domain:
u'spUUID=5a77bed1-0238-030c-0122-0000000003b3,
msdUUID=d3165759-07c2-46ae-b7b8-b6226a929d68' (dispatcher:82)
2018-10-03 22:40:33,751-0700 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
call StoragePool.connect failed (error 304) in 0.17 seconds (__init__:573)
2018-10-03 22:40:34,200-0700 INFO (jsonrpc/1) [api.host] START getStats()
from=::ffff:172.16.100.13,39028 (api:46)
When I look at Cockpit on the hosts, the storage domain is mounted and
seems fine.