On Wed, 2014-01-29 at 12:35 -0500, Steve Dainard wrote:
On Wed, Jan 29, 2014 at 5:11 AM, Vadim Rozenfeld <vrozenfe(a)redhat.com> wrote:
On Wed, 2014-01-29 at 11:30 +0200, Ronen Hod wrote:
> Adding the virtio-scsi developers.
> Anyhow, virtio-scsi is newer and less established than viostor (the
> block device), so you might want to try it out.
[VR]
Was it "SCSI Controller" or "SCSI pass-through controller"?
If it's "SCSI Controller" then it will be the viostor (virtio-blk)
device driver.
"SCSI Controller" is listed in device manager.
Hardware ID's:
PCI\VEN_1AF4&DEV_1004&SUBSYS_00081AF4&REV_00
PCI\VEN_1AF4&DEV_1004&SUBSYS_00081AF4
There is something strange here. Subsystem ID 0008
means it is a virtio scsi pass-through controller.
And you shouldn't be able to install "SCSI Controller"
device driver (viostor.sys) on top of "SCSI pass-through
Controller".
vioscsi.sys should be installed on top of
VEN_1AF4&DEV_1004&SUBSYS_00081AF4&REV_00
viostor.sys should be installed on top of
VEN_1AF4&DEV_1001&SUBSYS_00021AF4&REV_00
PCI\VEN_1AF4&DEV_1004&CC_010000
PCI\VEN_1AF4&DEV_1004&CC_0100
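For reference, this is roughly how the two controller types are exposed
on the QEMU side (the disk paths and IDs below are placeholders, not
taken from your setup):

    # virtio-blk -> guest sees PCI DEV_1001, driven by viostor.sys
    qemu-kvm ... \
        -drive file=/path/to/disk.img,if=none,id=drive0 \
        -device virtio-blk-pci,drive=drive0

    # virtio-scsi -> guest sees PCI DEV_1004, driven by vioscsi.sys
    qemu-kvm ... \
        -device virtio-scsi-pci,id=scsi0 \
        -drive file=/path/to/disk.img,if=none,id=drive1 \
        -device scsi-hd,drive=drive1,bus=scsi0.0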
> A disclaimer: There are time and patch gaps between RHEL and other
> versions.
>
> Ronen.
>
> On 01/28/2014 10:39 PM, Steve Dainard wrote:
>
> > I've had a bit of luck here.
> >
> > Overall IO performance is very poor during Windows updates, but a
> > contributing factor seems to be the "SCSI Controller" device in the
> > guest. This last install I didn't install a driver for that device,
[VR]
Does it mean that your system disk is IDE and the data disk (virtio-blk)
is not accessible?
In oVirt 3.3.2-1.el6 I do not have an option to add a virtio-blk device.
Screenshot here:
https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%202014-01-29%2010%3A04%3A57.png
My guess is that VirtIO means virtio-blk, and you should use viostor.sys
for it. VirtIO-SCSI is for virtio-scsi, and you need to install
vioscsi.sys to make it work in Windows.
VM disk drive is "Red Hat VirtIO SCSI Disk Device", storage controller
is listed as "Red Hat VirtIO SCSI Controller", as shown in device
manager.
Screenshot here:
https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%202014-01-29%2009%3A57%3A24.png
In the oVirt manager the disk interface is listed as "VirtIO".
Screenshot here:
https://dl.dropboxusercontent.com/u/21916057/Screenshot%20from%202014-01-29%2009%3A58%3A35.png
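One way to double-check what actually got attached is to dump the domain
XML on the host (read-only virsh should be enough; the VM name below is
a placeholder):

    virsh -r dumpxml MyWin2008VM | grep -E "<target dev=|virtio-scsi"
    #   <target dev='vda' bus='virtio'/>  -> virtio-blk, needs viostor.sys
    #   <target dev='sda' bus='scsi'/> together with
    #   <controller type='scsi' model='virtio-scsi'/>  -> virtio-scsi, needs vioscsi.sys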
> > and my performance is much better. Updates still chug along quite
> > slowly, but I seem to have more than the < 100KB/s write speeds I
> > was seeing previously.
> >
> > Does anyone know what this device is for? I have the "Red Hat VirtIO
> > SCSI Controller" listed under storage controllers.
[VR]
It's a virtio-blk device. The OS cannot see this volume unless you have
the viostor.sys driver installed.
Interesting that my VMs can see the controller, but I can't add a disk
for that controller in oVirt. Is there a package I missed during
install?
rpm -qa | grep ovirt
ovirt-host-deploy-java-1.1.3-1.el6.noarch
ovirt-engine-backend-3.3.2-1.el6.noarch
ovirt-engine-lib-3.3.2-1.el6.noarch
ovirt-engine-restapi-3.3.2-1.el6.noarch
ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch
ovirt-log-collector-3.3.2-2.el6.noarch
ovirt-engine-dbscripts-3.3.2-1.el6.noarch
ovirt-engine-webadmin-portal-3.3.2-1.el6.noarch
ovirt-host-deploy-1.1.3-1.el6.noarch
ovirt-image-uploader-3.3.2-1.el6.noarch
ovirt-engine-websocket-proxy-3.3.2-1.el6.noarch
ovirt-engine-userportal-3.3.2-1.el6.noarch
ovirt-engine-setup-3.3.2-1.el6.noarch
ovirt-iso-uploader-3.3.2-1.el6.noarch
ovirt-engine-cli-3.3.0.6-1.el6.noarch
ovirt-engine-3.3.2-1.el6.noarch
ovirt-engine-tools-3.3.2-1.el6.noarch
> >
> > I've set up an NFS storage domain on my desktop's SSD.
> > I've re-installed Win 2008 R2 and initially it was running smoother.
> >
> > Disk performance peaks at 100MB/s.
> >
> > If I copy a 250MB file from a share into the Windows VM, it writes out
[VR]
Do you copy it with Explorer or any other copy program?
Windows Explorer only.
Do you have HPET enabled?
I can't find it in the guest's 'System devices'. On the hosts, the
current clocksource is 'tsc', although 'hpet' is an available option.
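For what it's worth, the host clocksource can be checked (and
temporarily switched, just as a test) via sysfs; guest-side HPET is a
separate libvirt/QEMU setting (<timer name='hpet' .../> in the domain
XML):

    cat /sys/devices/system/clocksource/clocksource0/current_clocksource
    cat /sys/devices/system/clocksource/clocksource0/available_clocksource
    # as root, switch temporarily to see whether guest I/O behaviour changes
    echo hpet > /sys/devices/system/clocksource/clocksource0/current_clocksource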
How does it work if you copy from/to local (non-NFS) storage?
Not sure, this is a royal pain to set up. Can I use my ISO domain in
two different data centers at the same time? I don't have an option to
create an ISO / NFS domain in the local storage DC.
I mean just for the sake of performance troubleshooting, it would be
interesting to see whether local storage performs as badly as NFS.
When I use the import option with the default DC's ISO domain, I get
the error "There is no storage domain under the specified path. Check
event log for more details." VDSM logs show "Resource namespace
0e90e574-b003-4a62-867d-cf274b17e6b1_imageNS already registered", so
I'm guessing the answer is no.
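(If setting up a local storage domain is too much hassle, a rough
host-side comparison of the NFS mount against local disk can be done
with dd; the paths below are placeholders and oflag=direct bypasses the
page cache:)

    # against the NFS-backed storage domain (adjust to the actual mount
    # under /rhev/data-center/mnt/)
    dd if=/dev/zero of=/rhev/data-center/mnt/NFS_DOMAIN_MOUNT/ddtest bs=1M count=1024 oflag=direct

    # against a local filesystem on the host
    dd if=/dev/zero of=/var/tmp/ddtest bs=1M count=1024 oflag=direct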
I tried to deploy with WDS, but the 64-bit drivers apparently aren't
signed, and on x86 I get an error about the NIC not being supported
even with the drivers added to WDS.
That's also strange. All drivers available from the fedoraproject site
should be signed with a Red Hat signature, which is cross-certified.
You shouldn't have any problem installing 64-bit drivers, except for
annoying pop-ups warning that the drivers are not MS-signed.
What is your virtio-win driver package origin and version?
virtio-win-0.1-74.iso
-> http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/

Good, it is our most recent build.
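(A quick sanity check is to loop-mount the ISO on a host and list the
driver directories it ships; the mount point below is arbitrary:)

    sudo mount -o loop /path/to/virtio-win-0.1-74.iso /mnt
    ls /mnt    # driver directories such as viostor, vioscsi, NetKVM (exact layout varies by build)
    sudo umount /mnt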
Best regards,
Vadim.
Thanks,
Vadim.
Appreciate it,
Steve