[Users] IMS shared disks and oVirt storage

René Koch (ovido) r.koch at ovido.at
Tue Apr 9 12:14:41 UTC 2013


On Tue, 2013-04-09 at 13:59 +0200, Gianluca Cecchi wrote:
> On Tue, Apr 9, 2013 at 1:56 PM, René Koch (ovido)  wrote:
> > Hi Gianluca,
> >
> >
> > On Tue, 2013-04-09 at 12:50 +0200, Gianluca Cecchi wrote:
> >> Hello,
> >> I have the chance to work on Intel blades where I have the option to
> >> configure shared disks between them at the enclosure level.
> >> Their controller is SCSI.
> >> How could I configure this as shared storage in oVirt?
> >> I presume neither FC nor iSCSI is possible... so what can I do, in
> >> your opinion?
> >
> >
> > Is this an Intel Modular Server, btw?
> > If yes - create a Fibre Channel data center and add the storage as FC
> > LUNs. I can also send you a working multipath.conf for the Intel
> > Modular Server if required...
> >
> >
> > Regards,
> > René
> >
> 
> Yes, it is an Intel Modular Server (I don't know this platform well,
> but I think IMS is its acronym)


You can contact me if you have questions regarding this server - I used
this system at the company I worked for previously, as well as at
customer sites.


> The working multipath configuration would be more than appreciated ;-)
> But... does this mean there are configurations with dual SCSI, or is
> multipath needed in every case? (As I see it, the oVirt config implies
> a multipath configuration anyway.)

Attached is a working multipath.conf for RHEV-H 3.0.
You need a properly configured multipath.conf file if you have
redundant storage controllers.

Find out the WWIDs of your devices and add them to your
blacklist_exceptions section - they are shown in parentheses in the
multipath -ll output below:

# multipath -ll
mpathb (2222b00015521f5d4) dm-0 Intel,Multi-Flex
size=96G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 0:0:0:1 sdb 8:16 active ready running
mpatha (222140001553601d1) dm-1 Intel,Multi-Flex
size=20G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 0:0:0:0 sda 8:0  active ready running
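
For example, the same scsi_id call that the config uses as
getuid_callout prints a device's WWID directly (assuming /dev/sda is
one of the shared LUNs, as in the output above):

# /lib/udev/scsi_id --whitelisted --device=/dev/sda
222140001553601d1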

Don't forget to persist /etc/multipath.conf if you're using ovirt-node!
With this config you don't need the Intel multipath drivers, btw.
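
For example, to keep the edited file across reboots and have the
running daemon pick up the change (the reload step assumes multipathd
is already running; it's not from the original mail):

# persist /etc/multipath.conf
# service multipathd reload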

Please let me know if you have further questions.


> 
> Gianluca
-------------- next part --------------
# RHEV REVISION 0.7
# RHEV PRIVATE

defaults {
	udev_dir		/dev
	polling_interval 	10
	path_selector		"round-robin 0"
	path_grouping_policy	multibus
	getuid_callout		"/lib/udev/scsi_id --whitelisted --device=/dev/%n"
	prio			alua
	path_checker		readsector0
	rr_min_io		100
	max_fds			8192
	rr_weight		priorities
	failback		immediate
	no_path_retry		fail
	user_friendly_names	yes
}
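# Exclude local and non-multipath devices from multipath handling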
blacklist {
	wwid 26353900f02796769
	devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
	devnode "^hd[a-z]"
}
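# Whitelist the shared LUNs - replace these WWIDs with the ones from your devices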
blacklist_exceptions {
	wwid "2222b00015521f5d4"
	wwid "222140001553601d1"
}
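# Controller-specific settings for the Intel Multi-Flex shared storage (ALUA)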
devices {
	device {
		vendor			"Intel"
		product			"Multi-Flex"
		path_grouping_policy	group_by_prio
		getuid_callout		"/lib/udev/scsi_id --whitelisted --device=/dev/%n"
		prio			"tpg_pref"
		path_checker		tur
		path_selector		"round-robin 0"
		hardware_handler	"1 alua"
		failback		immediate
		rr_weight		uniform
		rr_min_io		100
		no_path_retry		queue
		features		"1 queue_if_no_path"
		product_blacklist	"VTrak V-LUN"
	}
}
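# Fixed aliases, owner, and permissions for the two shared LUNs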
multipaths {
	multipath {
		uid 0
		alias mpatha
		gid 0
		wwid "222140001553601d1"
		mode 0600
	}
	multipath {
		uid 0
		alias mpathb
		gid 0
		wwid "2222b00015521f5d4"
		mode 0600
	}
}

