You gave some different details in your other post, but here you mention using GPU passthrough.
Any passthrough will cost you the ability to live-migrate, and with GPUs that's unfortunately just how it is these days: in theory a VM could be migrated between identical GPUs (their state is bounded by VRAM size), but the support code (and kernel interfaces?) simply doesn't exist today.
In that scenario, passing through a storage device as well won't lose you anything you haven't already given up.
But remember that PCI passthrough only works at the granularity of a whole PCI device. That's fine for an entire NVMe drive, because those combine "disk" and "controller" in one device; it's not so fine for individual disks hanging off a SATA or SCSI controller. And you certainly can't pass through partitions!
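A quick way to see what's actually passable on your host (standard Linux tools only; the PCI address below is just a placeholder, and anything sharing an IOMMU group has to be passed through together):

  # List storage controllers with their PCI IDs; each of these is the
  # smallest unit you can hand to a VM:
  lspci -nn | grep -Ei 'nvme|sata|scsi'

  # Check the IOMMU group of a candidate device (replace 0000:03:00.0 with yours):
  ls /sys/bus/pci/devices/0000:03:00.0/iommu_group/devices/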
It gets to be really fun with cascaded USB, and I haven't really tried Thunderbolt either (mostly because I have given up on CentOS 8/oVirt 4.4).
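For USB the same whole-controller rule applies at the PCI level, so it helps to see which controller a device actually sits behind before you plan around it (again, plain stock tools, nothing oVirt-specific):

  # Show the USB topology, i.e. which root hub/controller each device hangs off:
  lsusb -t

  # And which PCI device that xHCI controller corresponds to:
  lspci -nn | grep -i usb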
But generally the VirtIO-SCSI interface imposes so little overhead that it only becomes noticeable when you run massive amounts of tiny I/O on NVMe. Play with the block sizes and the sync flag on your dd tests to see the differences; I've had lots of fun (and some disillusionment) with that, though mostly with Gluster storage over TCP/IP on Ethernet.
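To illustrate what I mean by playing with block size and sync (the target path is just an example, and writing to a raw device will destroy its contents):

  # Big sequential blocks, no explicit flushing -- this mostly measures caching
  # and raw throughput:
  dd if=/dev/zero of=/path/to/testfile bs=1M count=4096 status=progress

  # The same 4 GiB in 4k blocks with a flush on every write -- this is where the
  # per-I/O overhead of VirtIO-SCSI (and even more so of network storage) shows up:
  dd if=/dev/zero of=/path/to/testfile bs=4k count=1048576 oflag=dsync status=progress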
If that's really where your bottlenecks are coming from, you may want to look at the overall architecture rather than at passthrough.