On VM start:
ovirt001# tail -f /var/log/glusterfs/*
==> cli.log <==
[2013-07-17 20:23:09.187585] W [rpc-transport.c:175:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
[2013-07-17 20:23:09.189638] I [socket.c:3480:socket_init] 0-glusterfs: SSL support is NOT enabled
[2013-07-17 20:23:09.189660] I [socket.c:3495:socket_init] 0-glusterfs: using system polling thread
==> etc-glusterfs-glusterd.vol.log <==
[2013-07-17 20:23:09.252219] I [glusterd-handler.c:1007:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
==> cli.log <==
[2013-07-17 20:23:09.252506] I [cli-rpc-ops.c:538:gf_cli_get_volume_cbk] 0-cli: Received resp to get vol: 0
[2013-07-17 20:23:09.252961] E [cli-xml-output.c:2572:cli_xml_output_vol_info_end] 0-cli: Returning 0
[2013-07-17 20:23:09.252998] I [cli-rpc-ops.c:771:gf_cli_get_volume_cbk] 0-cli: Returning: 0
[2013-07-17 20:23:09.253107] I [input.c:36:cli_batch] 0-: Exiting with: 0
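
The cli.log entries above only cover the 'gluster volume info' call itself; an open failure is more likely to show up in the glusterd and brick logs. A minimal sketch of watching those during a second start attempt, assuming the default GlusterFS log layout where brick logs live under /var/log/glusterfs/bricks/:

ovirt001# tail -f /var/log/glusterfs/etc-glusterfs-glusterd.vol.log \
    /var/log/glusterfs/bricks/*.log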
ovirt001# tail -f /var/log/vdsm/*
(see attachment)
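
Since the vdsm output went to an attachment, a quick sketch of pulling out just the lines for this disk, assuming the standard /var/log/vdsm/vdsm.log location (the UUID is the image id from the qemu error quoted below):

ovirt001# grep -i 238cc6cf-070c-4483-b686-c0de7ddf0dfa /var/log/vdsm/vdsm.log | tail -n 50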
Steve Dainard
Infrastructure Manager
Miovision <http://miovision.com/> | Rethink Traffic
519-513-2407 ex.250
877-646-8476 (toll-free)
On Wed, Jul 17, 2013 at 2:40 PM, Vijay Bellur <vbellur(a)redhat.com> wrote:
On 07/17/2013 10:20 PM, Steve Dainard wrote:
> Completed changes:
>
> gluster> volume info vol1
>
> Volume Name: vol1
> Type: Replicate
> Volume ID: 97c3b2a7-0391-4fae-b541-cf04ce6bde0f
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: ovirt001.miovision.corp:/mnt/storage1/vol1
> Brick2: ovirt002.miovision.corp:/mnt/storage1/vol1
> Options Reconfigured:
> network.remote-dio: on
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> storage.owner-gid: 36
> storage.owner-uid: 36
> auth.allow: *
> user.cifs: on
> nfs.disable: off
> server.allow-insecure: on
>
>
> Same error on VM run:
>
> VM VM1 is down. Exit message: internal error process exited while
> connecting to monitor: qemu-system-x86_64: -drive
> file=gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a,if=none,id=drive-virtio-disk0,format=raw,serial=238cc6cf-070c-4483-b686-c0de7ddf0dfa,cache=none,werror=stop,rerror=stop,aio=threads:
> could not open disk image
> gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a:
> No such file or directory.
> VM VM1 was started by admin@internal (Host: ovirt001).
>
Do you see any errors in the glusterd log while trying to run the VM? The log
file can be found under /var/log/glusterfs/ on ovirt001.
Thanks,
Vijay
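
Since qemu reports "No such file or directory" on the gluster:// URL, two quick checks that might narrow it down, using the paths from the error quoted above (a sketch only; the qemu-img check assumes qemu was built with gluster/gfapi support):

# does the image exist on each replica brick?
ovirt001# ls -l /mnt/storage1/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/
ovirt002# ls -l /mnt/storage1/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/

# can qemu open the same gluster:// URL that libvirt passed?
ovirt001# qemu-img info gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2bca2d-4ed1-46c6-93c8-22a39bb1626a

If the file is present on both bricks but qemu-img fails the same way, the problem is more likely on the gfapi/volume-access side than a genuinely missing image.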