Interestingly enough, I literally just went through this same thing with a slight variation.
I had 10Gb cards for my storage network but no 10Gb switch, so I direct-connected the nodes with some fun routing and /etc/hosts settings. I also didn't want my storage traffic on a routed network (we have firewalls sitting between our VLANs), and I wanted it separate from ovirtmgmt - and, as I said, I had no 10Gb switches. Here is what you need at a bare minimum; adapt it as needed.
1 dedicated NIC on each node for ovirtmgmt. Ex: eth0 on every node
1 dedicated NIC to direct connect node 1 and node 2. Ex: eth1 on node 1
1 dedicated NIC to direct connect node 1 and node 3. Ex: eth2 on node 1
1 dedicated NIC to direct connect node 2 and node 1. Ex: eth1 on node 2
1 dedicated NIC to direct connect node 2 and node 3. Ex: eth2 on node 2
1 dedicated NIC to direct connect node 3 and node 1. Ex: eth1 on node 3
1 dedicated NIC to direct connect node 3 and node 2. Ex: eth2 on node 3
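To make that concrete, here is one way to address it, with a tiny subnet per link so each direct connect stays isolated (all IPs and NIC names from here on are just examples - substitute your own):

    node1 eth1 10.50.12.1/30  <->  node2 eth1 10.50.12.2/30
    node1 eth2 10.50.13.1/30  <->  node3 eth1 10.50.13.2/30
    node2 eth2 10.50.23.1/30  <->  node3 eth2 10.50.23.2/30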
You'll need custom routes too:
Route to node 3 from node 1 via eth2
Route to node 3 from node 2 via eth2
Route to node 2 from node 3 via eth2
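What these look like depends on your addressing. Sticking with the example scheme above and picking one address per node as its canonical storage IP (say node2 = 10.50.12.2 and node3 = 10.50.23.2), the routes would be something like:

    # on node 1: node3's storage IP lives on the node2<->node3 link, reach it out eth2
    ip route add 10.50.23.2/32 dev eth2
    # on node 2: node3 is directly connected on eth2, the host route just makes it explicit
    ip route add 10.50.23.2/32 dev eth2
    # on node 3: node2's storage IP lives on the node1<->node2 link, reach it out eth2
    ip route add 10.50.12.2/32 dev eth2

Depending on which addresses you pick you may need one or two more (e.g. node 3 reaching node 1's storage IP via eth1), and you'll want to make them persistent (nmcli or route-ethX files) so they survive a reboot.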
Finally, add entries to your /etc/hosts that match the routes above.
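Continuing the example, the same file can then go on all three nodes (names and IPs are illustrative):

    # storage network - direct connect mesh
    10.50.12.1   node1-10g
    10.50.12.2   node2-10g
    10.50.23.2   node3-10g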
Then, advisably, a dedicated NIC per box for the VM network, but you can leverage ovirtmgmt if you are just proofing this out.
At this point, if you can reach all of your nodes via these direct-connect IPs, set up Gluster as you normally would, referencing your /etc/hosts names when you call "gluster volume create".
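As a rough sketch (volume name and brick paths are examples - use whatever fits your layout):

    # from node 1, peer with the others over the storage names
    gluster peer probe node2-10g
    gluster peer probe node3-10g

    # replica-3 volume, one brick per node
    gluster volume create gv0 replica 3 \
        node1-10g:/gluster/bricks/gv0 \
        node2-10g:/gluster/bricks/gv0 \
        node3-10g:/gluster/bricks/gv0
    gluster volume start gv0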
In my setup, as I said, I had two 2-port PCIe 10Gb cards per server, so I set up LACP as well, as you can see below.
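Roughly what one of those bonds looks like with nmcli (bond and port names here are examples; repeat per link, ideally with one port from each card so a card failure doesn't kill the link). 802.3ad negotiates fine host-to-host on a direct connect, no switch required:

    nmcli con add type bond ifname bond1 con-name bond1 \
        bond.options "mode=802.3ad,miimon=100,xmit_hash_policy=layer3+4"
    nmcli con add type bond-slave ifname ens1f0 master bond1
    nmcli con add type bond-slave ifname ens2f0 master bond1
    nmcli con up bond1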
You can optionally set this network up in oVirt as well (and add the NICs to each host), but don't configure it as a VM network. Then, with some other minor tweaks (assigning the network the migration role on the cluster), you can also use these direct connects as your migration network rather than ovirtmgmt or the VM network.