[Users] Storage server
by Koen Vanoppen
Dear All,
At work, the network guys have discovered that they made a mistake in their
network configuration... The storage server that we use as an iSCSI target in
oVirt is in the wrong VLAN, and therefore has the wrong IP address...
What is the best way to make sure that all of our VMs (40) that are using
this iSCSI storage come up again after the storage server has changed
its IP?
What are the best steps to take...?
We are using oVirt Engine Version: 3.3.1-2.el6
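Something along these lines is what I had in mind for checking the target
from one of the hosts once the move happens (just a sketch; NEW_IP is a
placeholder for the address the storage server will get):
iscsiadm -m session -P 1                              # what the hosts are logged in to today
iscsiadm -m discovery -t sendtargets -p NEW_IP:3260   # is the target visible on the new address?
# I assume the iSCSI connection the engine has on record must also be pointed
# at the new address before the storage domain (and the 40 VMs) can be
# activated again.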
Kind Regards,
Koen
[Users] Limit bandwidths for VM
by Hans Emmanuel
Hi all,
I was wondering whether it is possible to limit the bandwidth used by a
particular VM, so that we can make sure it is not eating up bandwidth needed
by the other VMs.
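At the libvirt level I guess something like the following would cap a vNIC
(just a sketch -- the VM name "myvm" and the "vnet0" interface are
placeholders, values are KiB/s as average,peak,burst, and I assume oVirt may
overwrite it the next time the VM starts), but is there a proper oVirt way?
virsh domiftune myvm vnet0 --live --inbound 1024,2048,1024 --outbound 1024,2048,1024
virsh domiftune myvm vnet0    # read back the current limits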
--
*Hans Emmanuel*
[Users] Problems with migration on a VM and not detected by gui
by Gianluca Cecchi
Hello,
upgrading from 3.3.3 RC to 3.3.3 final on a Fedora 19 based infrastructure.
Two hosts and one engine.
Gluster DC.
I have 3 VMs: CentOS 5.10, 6.5, Fedora 20
Main steps:
1) update engine with usual procedure
2) all VMs are on one node; I put the other one into maintenance, update it
and reboot it
3) activate the updated node and migrate all VMs to it.
From the webadmin GUI point of view it all seems OK.
The only "strange" thing is that the CentOS 6.5 VM has no IP shown, while it
usually does because of the ovirt-guest-agent installed on it.
So I try to connect to its console (configured as VNC),
but I get an error (the other two are OK, and they are SPICE).
Also, I cannot ping or ssh into the VM, so there is indeed some problem.
I hadn't connected to it since 30th January, so I don't know if any problem
arose before today.
From the original host's /var/log/libvirt/qemu/c6s.log I see:
2014-01-30 11:21:37.561+0000: shutting down
2014-01-30 11:22:14.595+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -name c6s -S -machine
pc-1.0,accel=kvm,usb=off -cpu Opteron_G2 -m 1024 -smp
1,sockets=1,cores=1,threads=1 -uuid
4147e0d3-19a7-447b-9d88-2ff19365bec0 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=19-5,serial=34353439-3036-435A-4A38-303330393338,uuid=4147e0d3-19a7-447b-9d88-2ff19365bec0
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/c6s.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-01-23T11:42:26,driftfix=slew -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive
if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/mnt/glusterSD/f18ovn01.ceda.polimi.it:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/a5e4f67b-50b5-4740-9990-39deb8812445/53408cb0-bcd4-40de-bc69-89d59b7b5bc2,if=none,id=drive-virtio-disk0,format=raw,serial=a5e4f67b-50b5-4740-9990-39deb8812445,cache=none,werror=stop,rerror=stop,aio=threads
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive file=/rhev/data-center/mnt/glusterSD/f18ovn01.ceda.polimi.it:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/c1477133-6b06-480d-a233-1dae08daf8b3/c2a82c64-9dee-42bb-acf2-65b8081f2edf,if=none,id=drive-scsi0-0-0-0,format=raw,serial=c1477133-6b06-480d-a233-1dae08daf8b3,cache=none,werror=stop,rerror=stop,aio=threads
-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0
-netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:8f:04:f8,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4147e0d3-19a7-447b-9d88-2ff19365bec0.com.redhat.rhevm.vdsm,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4147e0d3-19a7-447b-9d88-2ff19365bec0.org.qemu.guest_agent.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev pty,id=charconsole0 -device
virtconsole,chardev=charconsole0,id=console0 -device
usb-tablet,id=input0 -vnc 0:0,password -k en-us -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
char device redirected to /dev/pts/0 (label charconsole0)
2014-02-04 12:48:01.855+0000: shutting down
qemu: terminating on signal 15 from pid 1021
From the updated host, to which it was apparently migrated, I see:
2014-02-04 12:47:54.674+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -name c6s -S -machine
pc-1.0,accel=kvm,usb=off -cpu Opteron_G2 -m 1024 -smp
1,sockets=1,cores=1,threads=1 -uuid
4147e0d3-19a7-447b-9d88-2ff19365bec0 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=19-5,serial=34353439-3036-435A-4A38-303330393338,uuid=4147e0d3-19a7-447b-9d88-2ff19365bec0
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/c6s.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-01-28T13:08:06,driftfix=slew -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive
if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/mnt/glusterSD/f18ovn01.ceda.polimi.it:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/a5e4f67b-50b5-4740-9990-39deb8812445/53408cb0-bcd4-40de-bc69-89d59b7b5bc2,if=none,id=drive-virtio-disk0,format=raw,serial=a5e4f67b-50b5-4740-9990-39deb8812445,cache=none,werror=stop,rerror=stop,aio=threads
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive file=/rhev/data-center/mnt/glusterSD/f18ovn01.ceda.polimi.it:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/c1477133-6b06-480d-a233-1dae08daf8b3/c2a82c64-9dee-42bb-acf2-65b8081f2edf,if=none,id=drive-scsi0-0-0-0,format=raw,serial=c1477133-6b06-480d-a233-1dae08daf8b3,cache=none,werror=stop,rerror=stop,aio=threads
-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0
-netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:8f:04:f8,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4147e0d3-19a7-447b-9d88-2ff19365bec0.com.redhat.rhevm.vdsm,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4147e0d3-19a7-447b-9d88-2ff19365bec0.org.qemu.guest_agent.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev pty,id=charconsole0 -device
virtconsole,chardev=charconsole0,id=console0 -device
usb-tablet,id=input0 -vnc 0:0,password -k en-us -vga cirrus -incoming
tcp:[::]:51152 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
char device redirected to /dev/pts/1 (label charconsole0)
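To double-check where the qemu process for c6s actually lives, these are
the kinds of checks I can run on both hosts (just a sketch of what I have
in mind):
virsh -r list --all        # read-only: is a qemu process for c6s running here?
vdsClient -s 0 list table  # what vdsm itself reports for the VMs on this host
ss -tlnp | grep qemu       # which ports (VNC/SPICE, migration) qemu is listening on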
engine log
https://drive.google.com/file/d/0BwoPbcrMv8mvZWpqOHNqc0dnenc/edit?usp=sha...
source vdsm log:
https://drive.google.com/file/d/0BwoPbcrMv8mvYlluMDh1Y19jdEU/edit?usp=sha...
dest vdsm log
https://drive.google.com/file/d/0BwoPbcrMv8mvT1JxcmdKWlloOFU/edit?usp=sha...
The first error I see in the source host log:
Thread-728830::ERROR::2014-02-04
13:42:59,735::BindingXMLRPC::984::vds::(wrapper) unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/BindingXMLRPC.py", line 970, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/gluster/api.py", line 53, in wrapper
rv = func(*args, **kwargs)
File "/usr/share/vdsm/gluster/api.py", line 206, in volumeStatus
statusOption)
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in glusterVolumeStatus
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
raise convert_to_error(kind, result)
KeyError: 'path'
Thread-728831::ERROR::2014-02-04
13:42:59,805::BindingXMLRPC::984::vds::(wrapper) unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/BindingXMLRPC.py", line 970, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/gluster/api.py", line 53, in wrapper
rv = func(*args, **kwargs)
File "/usr/share/vdsm/gluster/api.py", line 206, in volumeStatus
statusOption)
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in glusterVolumeStatus
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
raise convert_to_error(kind, result)
KeyError: 'path'
Thread-323::INFO::2014-02-04
13:43:05,765::logUtils::44::dispatcher::(wrapper) Run and protect:
getVolumeSize(sdUUID='d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291',
spUUID='eb679feb-4da2-4fd0-a185-abbe459ffa70',
imgUUID='a3d332c0-c302-4f28-9ed3-e2e83566343f',
volUUID='701eca86-df87-4b16-ac6d-e9f51e7ac171', options=None)
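To see whether the gluster CLI itself omits the 'path' field that vdsm trips
over in the traceback above, I can run something like this on the host
(illustrative; gvdata is the volume name taken from the mount path above):
gluster volume status gvdata
gluster volume status gvdata detail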
Apart from the problem itself, another issue in my opinion is that the engine
doesn't know about it at all...
For this VM, in its event tab I can see only:
2014-Feb-04, 13:57
user admin@internal initiated console session for VM c6s
1b78630c
oVirt
2014-Feb-04, 13:51
user admin@internal initiated console session for VM c6s
1d77f16a
oVirt
2014-Feb-04, 13:48
Migration completed (VM: c6s, Source: f18ovn03, Destination: f18ovn01,
Duration: 8 sec).
17c547cc
oVirt
2014-Feb-04, 13:47
Migration started (VM: c6s, Source: f18ovn03, Destination: f18ovn01,
User: admin@internal).
17c547cc
oVirt
2014-Jan-30, 12:30
user admin@internal initiated console session for VM c6s
5536edb8
oVirt
2014-Jan-30, 12:23
VM c6s started on Host f18ovn03
45209312
oVirt
2014-Jan-30, 12:22
user admin@internal initiated console session for VM c6s
19c766c8
oVirt
2014-Jan-30, 12:22
user admin@internal initiated console session for VM c6s
79815897
oVirt
2014-Jan-30, 12:22
VM c6s was started by admin@internal (Host: f18ovn03).
45209312
oVirt
2014-Jan-30, 12:22
VM c6s configuration was updated by admin@internal.
76cbc53
oVirt
2014-Jan-30, 12:21
VM c6s is down. Exit message: User shut down
oVirt
2014-Jan-30, 12:20
VM shutdown initiated by admin@internal on VM c6s (Host: f18ovn03).
213c3a55
oVirt
Gianluca
[Users] VM install failures on a stateless node
by David Li
Hi,
I have been trying to install my first VM on a stateless node. So far I have failed twice, with the node ending up in the "Non-responsive" state. I had to reboot to recover, and it took a while to reconfigure everything since the node is stateless.
I can still get into the node via the console, so it's not dead, but the ovirtmgmt interface seems to be down. The other, iSCSI, interface is running OK.
Can anyone recommend ways to debug this problem?
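So far the only host-side checks I can think of are along these lines (just
a sketch; <engine-fqdn> is a placeholder for our engine's address):
ip addr show ovirtmgmt              # does the management bridge still have its address?
brctl show                          # is the NIC still enslaved to the ovirtmgmt bridge?
ping -c3 <engine-fqdn>              # can the node still reach the engine at all?
tail -n 100 /var/log/vdsm/vdsm.log  # what vdsm was doing when the node went non-responsive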
Thanks.
David
[Users] Ovirt 3.3.2 Cannot attach POSIX (gluster) storage domain
by Steve Dainard
I can successfully create a POSIX storage domain backed by gluster, but at
the end of creation I get the error message "failed to acquire host id".
Note that I have successfully created/activated an NFS DC/SD on the same
oVirt engine/hosts.
Here are some logs from when I tried to attach the domain to the DC after the failure:
*engine.log*
2014-02-04 09:54:04,324 INFO
[org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
(ajp--127.0.0.1-8702-3) [1dd40406] Lock Acquired to object EngineLock [ex
clusiveLocks= key: 8c4e8898-c91a-4d49-98e8-b6467791a9cc value: POOL
, sharedLocks= ]
2014-02-04 09:54:04,473 INFO
[org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
(pool-6-thread-42) [1dd40406] Running command: AddStoragePoolWithStorages
Command internal: false. Entities affected : ID:
8c4e8898-c91a-4d49-98e8-b6467791a9cc Type: StoragePool
2014-02-04 09:54:04,673 INFO
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand]
(pool-6-thread-42) [3f86c31b] Running command: ConnectStorageToVdsCommand
intern
al: true. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa
Type: System
2014-02-04 09:54:04,682 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(pool-6-thread-42) [3f86c31b] START, ConnectStorageServerVDSCommand(
HostName = ovirt001, HostId = 48f13d47-8346-4ff6-81ca-4f4324069db3,
storagePoolId = 00000000-0000-0000-0000-000000000000, storageType =
POSIXFS, connectionList = [{ id: 87f9
ff74-93c4-4fe5-9a56-ed5338290af9, connection: 10.0.10.3:/rep2, iqn: null,
vfsType: glusterfs, mountOptions: null, nfsVersion: null, nfsRetrans: null,
nfsTimeo: null };]), lo
g id: 332ff091
2014-02-04 09:54:05,089 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(pool-6-thread-42) [3f86c31b] FINISH, ConnectStorageServerVDSCommand
, return: {87f9ff74-93c4-4fe5-9a56-ed5338290af9=0}, log id: 332ff091
2014-02-04 09:54:05,093 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
(pool-6-thread-42) [3f86c31b] START, CreateStoragePoolVDSCommand(HostNa
me = ovirt001, HostId = 48f13d47-8346-4ff6-81ca-4f4324069db3,
storagePoolId=8c4e8898-c91a-4d49-98e8-b6467791a9cc, storageType=POSIXFS,
storagePoolName=IT, masterDomainId=471
487ed-2946-4dfc-8ec3-96546006be12,
domainsIdList=[471487ed-2946-4dfc-8ec3-96546006be12], masterVersion=3), log
id: 1be84579
2014-02-04 09:54:08,833 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorag
ePoolVDSCommand] (pool-6-thread-42) [3f86c31b] Failed in
CreateStoragePoolVDS method
2014-02-04 09:54:08,834 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
(pool-6-thread-42) [3f86c31b] Error code AcquireHostIdFailure and error
message VDSGenericException: VDSErrorException: Failed to
CreateStoragePoolVDS, error = Cannot acquire host id:
('471487ed-2946-4dfc-8ec3-96546006be12', SanlockException(22, 'Sanlock
lockspace add failure', 'Invalid argument'))
2014-02-04 09:54:08,835 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
(pool-6-thread-42) [3f86c31b] Command
org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand
return value
StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=661,
mMessage=Cannot acquire host id: ('471487ed-2946-4dfc-8ec3-96546006be12',
SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))]]
2014-02-04 09:54:08,836 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
(pool-6-thread-42) [3f86c31b] HostName = ovirt001
2014-02-04 09:54:08,840 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
(pool-6-thread-42) [3f86c31b] Command CreateStoragePoolVDS execution
failed. Exception: VDSErrorException: VDSGenericException:
VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire
host id: ('471487ed-2946-4dfc-8ec3-96546006be12', SanlockException(22,
'Sanlock lockspace add failure', 'Invalid argument'))
2014-02-04 09:54:08,840 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
(pool-6-thread-42) [3f86c31b] FINISH, CreateStoragePoolVDSCommand, log id:
1be84579
2014-02-04 09:54:08,841 ERROR
[org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
(pool-6-thread-42) [3f86c31b] Command
org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand throw
Vdc Bll exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS,
error = Cannot acquire host id: ('471487ed-2946-4dfc-8ec3-96546006be12',
SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
(Failed with error AcquireHostIdFailure and code 661)
2014-02-04 09:54:08,867 INFO
[org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
(pool-6-thread-42) [3f86c31b] Command
[id=373987cb-b54d-4174-b4a9-195be631f0d7]: Compensating CHANGED_ENTITY of
org.ovirt.engine.core.common.businessentities.StoragePool; snapshot:
id=8c4e8898-c91a-4d49-98e8-b6467791a9cc.
2014-02-04 09:54:08,871 INFO
[org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
(pool-6-thread-42) [3f86c31b] Command
[id=373987cb-b54d-4174-b4a9-195be631f0d7]: Compensating NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot:
storagePoolId = 8c4e8898-c91a-4d49-98e8-b6467791a9cc, storageId =
471487ed-2946-4dfc-8ec3-96546006be12.
2014-02-04 09:54:08,879 INFO
[org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
(pool-6-thread-42) [3f86c31b] Command
[id=373987cb-b54d-4174-b4a9-195be631f0d7]: Compensating CHANGED_ENTITY of
org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
snapshot: id=471487ed-2946-4dfc-8ec3-96546006be12.
2014-02-04 09:54:08,951 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(pool-6-thread-42) [3f86c31b] Correlation ID: 1dd40406, Job ID:
07003dff-9e0e-42ae-8f88-6b055b45f797, Call Stack: null, Custom Event ID:
-1, Message: Failed to attach Storage Domains to Data Center IT. (User:
admin@internal)
2014-02-04 09:54:08,975 INFO
[org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
(pool-6-thread-42) [3f86c31b] Lock freed to object EngineLock
[exclusiveLocks= key: 8c4e8898-c91a-4d49-98e8-b6467791a9cc value: POOL
, sharedLocks= ]
*vdsm.log*
Thread-30::DEBUG::2014-02-04
09:54:04,692::BindingXMLRPC::167::vds::(wrapper) client [10.0.10.2] flowID
[3f86c31b]
Thread-30::DEBUG::2014-02-04
09:54:04,692::task::579::TaskManager.Task::(_updateState)
Task=`218dcde9-bbc7-4d5a-ad53-0bab556c6261`::moving from state init ->
state preparing
Thread-30::INFO::2014-02-04
09:54:04,693::logUtils::44::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=6,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
'connection': '10.0.10.3:/rep2', 'iqn': '', 'portal': '', 'user': '',
'vfs_type': 'glusterfs', 'password': '******', 'id':
'87f9ff74-93c4-4fe5-9a56-ed5338290af9'}], options=None)
Thread-30::DEBUG::2014-02-04
09:54:04,698::mount::226::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n
/bin/mount -t glusterfs 10.0.10.3:/rep2 /rhev/data-center/mnt/10.0.10.3:_rep2'
(cwd None)
Thread-30::DEBUG::2014-02-04
09:54:05,067::hsm::2315::Storage.HSM::(__prefetchDomains) posix local path:
/rhev/data-center/mnt/10.0.10.3:_rep2
Thread-30::DEBUG::2014-02-04
09:54:05,078::hsm::2333::Storage.HSM::(__prefetchDomains) Found SD uuids:
('471487ed-2946-4dfc-8ec3-96546006be12',)
Thread-30::DEBUG::2014-02-04
09:54:05,078::hsm::2389::Storage.HSM::(connectStorageServer) knownSDs:
{471487ed-2946-4dfc-8ec3-96546006be12: storage.nfsSD.findDomain}
Thread-30::INFO::2014-02-04
09:54:05,078::logUtils::47::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id':
'87f9ff74-93c4-4fe5-9a56-ed5338290af9'}]}
Thread-30::DEBUG::2014-02-04
09:54:05,079::task::1168::TaskManager.Task::(prepare)
Task=`218dcde9-bbc7-4d5a-ad53-0bab556c6261`::finished: {'statuslist':
[{'status': 0, 'id': '87f9ff74-93c4-4fe5-9a56-ed5338290af9'}]}
Thread-30::DEBUG::2014-02-04
09:54:05,079::task::579::TaskManager.Task::(_updateState)
Task=`218dcde9-bbc7-4d5a-ad53-0bab556c6261`::moving from state preparing ->
state finished
Thread-30::DEBUG::2014-02-04
09:54:05,079::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-30::DEBUG::2014-02-04
09:54:05,079::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-30::DEBUG::2014-02-04
09:54:05,079::task::974::TaskManager.Task::(_decref)
Task=`218dcde9-bbc7-4d5a-ad53-0bab556c6261`::ref 0 aborting False
Thread-31::DEBUG::2014-02-04
09:54:05,098::BindingXMLRPC::167::vds::(wrapper) client [10.0.10.2] flowID
[3f86c31b]
Thread-31::DEBUG::2014-02-04
09:54:05,099::task::579::TaskManager.Task::(_updateState)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::moving from state init ->
state preparing
Thread-31::INFO::2014-02-04
09:54:05,099::logUtils::44::dispatcher::(wrapper) Run and protect:
createStoragePool(poolType=None,
spUUID='8c4e8898-c91a-4d49-98e8-b6467791a9cc', poolName='IT',
masterDom='471487ed-2946-4dfc-8ec3-96546006be12',
domList=['471487ed-2946-4dfc-8ec3-96546006be12'], masterVersion=3,
lockPolicy=None, lockRenewalIntervalSec=5, leaseTimeSec=60,
ioOpTimeoutSec=10, leaseRetries=3, options=None)
Thread-31::DEBUG::2014-02-04
09:54:05,099::misc::809::SamplingMethod::(__call__) Trying to enter
sampling method (storage.sdc.refreshStorage)
Thread-31::DEBUG::2014-02-04
09:54:05,100::misc::811::SamplingMethod::(__call__) Got in to sampling
method
Thread-31::DEBUG::2014-02-04
09:54:05,100::misc::809::SamplingMethod::(__call__) Trying to enter
sampling method (storage.iscsi.rescan)
Thread-31::DEBUG::2014-02-04
09:54:05,100::misc::811::SamplingMethod::(__call__) Got in to sampling
method
Thread-31::DEBUG::2014-02-04
09:54:05,100::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo
-n /sbin/iscsiadm -m session -R' (cwd None)
Thread-31::DEBUG::2014-02-04
09:54:05,114::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) FAILED: <err> =
'iscsiadm: No session found.\n'; <rc> = 21
Thread-31::DEBUG::2014-02-04
09:54:05,115::misc::819::SamplingMethod::(__call__) Returning last result
Thread-31::DEBUG::2014-02-04
09:54:07,144::multipath::112::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo
-n /sbin/multipath -r' (cwd None)
Thread-31::DEBUG::2014-02-04
09:54:07,331::multipath::112::Storage.Misc.excCmd::(rescan) SUCCESS: <err>
= ''; <rc> = 0
Thread-31::DEBUG::2014-02-04
09:54:07,332::lvm::510::OperationMutex::(_invalidateAllPvs) Operation 'lvm
invalidate operation' got the operation mutex
Thread-31::DEBUG::2014-02-04
09:54:07,333::lvm::512::OperationMutex::(_invalidateAllPvs) Operation 'lvm
invalidate operation' released the operation mutex
Thread-31::DEBUG::2014-02-04
09:54:07,333::lvm::521::OperationMutex::(_invalidateAllVgs) Operation 'lvm
invalidate operation' got the operation mutex
Thread-31::DEBUG::2014-02-04
09:54:07,333::lvm::523::OperationMutex::(_invalidateAllVgs) Operation 'lvm
invalidate operation' released the operation mutex
Thread-31::DEBUG::2014-02-04
09:54:07,333::lvm::541::OperationMutex::(_invalidateAllLvs) Operation 'lvm
invalidate operation' got the operation mutex
Thread-31::DEBUG::2014-02-04
09:54:07,334::lvm::543::OperationMutex::(_invalidateAllLvs) Operation 'lvm
invalidate operation' released the operation mutex
Thread-31::DEBUG::2014-02-04
09:54:07,334::misc::819::SamplingMethod::(__call__) Returning last result
Thread-31::DEBUG::2014-02-04
09:54:07,499::fileSD::137::Storage.StorageDomain::(__init__) Reading domain
in path /rhev/data-center/mnt/10.0.10.3:
_rep2/471487ed-2946-4dfc-8ec3-96546006be12
Thread-31::DEBUG::2014-02-04
09:54:07,605::persistentDict::192::Storage.PersistentDict::(__init__)
Created a persistent dict with FileMetadataRW backend
Thread-31::DEBUG::2014-02-04
09:54:07,647::persistentDict::234::Storage.PersistentDict::(refresh) read
lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=gluster-store-rep2',
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.0.10.3:/rep2',
'ROLE=Regular', 'SDUUID=471487ed-2946-4dfc-8ec3-96546006be12',
'TYPE=POSIXFS', 'VERSION=3',
'_SHA_CKSUM=469191aac3fb8ef504b6a4d301b6d8be6fffece1']
Thread-31::DEBUG::2014-02-04
09:54:07,683::fileSD::558::Storage.StorageDomain::(imageGarbageCollector)
Removing remnants of deleted images []
Thread-31::DEBUG::2014-02-04
09:54:07,684::resourceManager::420::ResourceManager::(registerNamespace)
Registering namespace '471487ed-2946-4dfc-8ec3-96546006be12_imageNS'
Thread-31::DEBUG::2014-02-04
09:54:07,684::resourceManager::420::ResourceManager::(registerNamespace)
Registering namespace '471487ed-2946-4dfc-8ec3-96546006be12_volumeNS'
Thread-31::INFO::2014-02-04
09:54:07,684::fileSD::299::Storage.StorageDomain::(validate)
sdUUID=471487ed-2946-4dfc-8ec3-96546006be12
Thread-31::DEBUG::2014-02-04
09:54:07,692::persistentDict::234::Storage.PersistentDict::(refresh) read
lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=gluster-store-rep2',
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.0.10.3:/rep2',
'ROLE=Regular', 'SDUUID=471487ed-2946-4dfc-8ec3-96546006be12',
'TYPE=POSIXFS', 'VERSION=3',
'_SHA_CKSUM=469191aac3fb8ef504b6a4d301b6d8be6fffece1']
Thread-31::DEBUG::2014-02-04
09:54:07,693::resourceManager::197::ResourceManager.Request::(__init__)
ResName=`Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc`ReqID=`e0a3d477-b953-49d9-ab78-67695a6bc6d5`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '971' at
'createStoragePool'
Thread-31::DEBUG::2014-02-04
09:54:07,693::resourceManager::541::ResourceManager::(registerResource)
Trying to register resource 'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc'
for lock type 'exclusive'
Thread-31::DEBUG::2014-02-04
09:54:07,693::resourceManager::600::ResourceManager::(registerResource)
Resource 'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc' is free. Now
locking as 'exclusive' (1 active user)
Thread-31::DEBUG::2014-02-04
09:54:07,693::resourceManager::237::ResourceManager.Request::(grant)
ResName=`Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc`ReqID=`e0a3d477-b953-49d9-ab78-67695a6bc6d5`::Granted
request
Thread-31::DEBUG::2014-02-04
09:54:07,694::task::811::TaskManager.Task::(resourceAcquired)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::_resourcesAcquired:
Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc (exclusive)
Thread-31::DEBUG::2014-02-04
09:54:07,694::task::974::TaskManager.Task::(_decref)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::ref 1 aborting False
Thread-31::DEBUG::2014-02-04
09:54:07,694::resourceManager::197::ResourceManager.Request::(__init__)
ResName=`Storage.471487ed-2946-4dfc-8ec3-96546006be12`ReqID=`bc20dd7e-d351-47c5-8ed3-78b1b11d703a`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '973' at
'createStoragePool'
Thread-31::DEBUG::2014-02-04
09:54:07,694::resourceManager::541::ResourceManager::(registerResource)
Trying to register resource 'Storage.471487ed-2946-4dfc-8ec3-96546006be12'
for lock type 'exclusive'
Thread-31::DEBUG::2014-02-04
09:54:07,695::resourceManager::600::ResourceManager::(registerResource)
Resource 'Storage.471487ed-2946-4dfc-8ec3-96546006be12' is free. Now
locking as 'exclusive' (1 active user)
Thread-31::DEBUG::2014-02-04
09:54:07,695::resourceManager::237::ResourceManager.Request::(grant)
ResName=`Storage.471487ed-2946-4dfc-8ec3-96546006be12`ReqID=`bc20dd7e-d351-47c5-8ed3-78b1b11d703a`::Granted
request
Thread-31::DEBUG::2014-02-04
09:54:07,695::task::811::TaskManager.Task::(resourceAcquired)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::_resourcesAcquired:
Storage.471487ed-2946-4dfc-8ec3-96546006be12 (exclusive)
Thread-31::DEBUG::2014-02-04
09:54:07,695::task::974::TaskManager.Task::(_decref)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::ref 1 aborting False
Thread-31::INFO::2014-02-04
09:54:07,696::sp::593::Storage.StoragePool::(create)
spUUID=8c4e8898-c91a-4d49-98e8-b6467791a9cc poolName=IT
master_sd=471487ed-2946-4dfc-8ec3-96546006be12
domList=['471487ed-2946-4dfc-8ec3-96546006be12'] masterVersion=3
{'LEASETIMESEC': 60, 'IOOPTIMEOUTSEC': 10, 'LEASERETRIES': 3,
'LOCKRENEWALINTERVALSEC': 5}
Thread-31::INFO::2014-02-04
09:54:07,696::fileSD::299::Storage.StorageDomain::(validate)
sdUUID=471487ed-2946-4dfc-8ec3-96546006be12
Thread-31::DEBUG::2014-02-04
09:54:07,703::persistentDict::234::Storage.PersistentDict::(refresh) read
lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=gluster-store-rep2',
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.0.10.3:/rep2',
'ROLE=Regular', 'SDUUID=471487ed-2946-4dfc-8ec3-96546006be12',
'TYPE=POSIXFS', 'VERSION=3',
'_SHA_CKSUM=469191aac3fb8ef504b6a4d301b6d8be6fffece1']
Thread-31::DEBUG::2014-02-04
09:54:07,710::persistentDict::234::Storage.PersistentDict::(refresh) read
lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=gluster-store-rep2',
'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=',
'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=10.0.10.3:/rep2',
'ROLE=Regular', 'SDUUID=471487ed-2946-4dfc-8ec3-96546006be12',
'TYPE=POSIXFS', 'VERSION=3',
'_SHA_CKSUM=469191aac3fb8ef504b6a4d301b6d8be6fffece1']
Thread-31::DEBUG::2014-02-04
09:54:07,711::persistentDict::167::Storage.PersistentDict::(transaction)
Starting transaction
Thread-31::DEBUG::2014-02-04
09:54:07,711::persistentDict::175::Storage.PersistentDict::(transaction)
Finished transaction
Thread-31::INFO::2014-02-04
09:54:07,711::clusterlock::174::SANLock::(acquireHostId) Acquiring host id
for domain 471487ed-2946-4dfc-8ec3-96546006be12 (id: 250)
Thread-31::ERROR::2014-02-04
09:54:08,722::task::850::TaskManager.Task::(_setError)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 857, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 977, in createStoragePool
masterVersion, leaseParams)
File "/usr/share/vdsm/storage/sp.py", line 618, in create
self._acquireTemporaryClusterLock(msdUUID, leaseParams)
File "/usr/share/vdsm/storage/sp.py", line 560, in
_acquireTemporaryClusterLock
msd.acquireHostId(self.id)
File "/usr/share/vdsm/storage/sd.py", line 458, in acquireHostId
self._clusterLock.acquireHostId(hostId, async)
File "/usr/share/vdsm/storage/clusterlock.py", line 189, in acquireHostId
raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id:
('471487ed-2946-4dfc-8ec3-96546006be12', SanlockException(22, 'Sanlock
lockspace add failure', 'Invalid argument'))
Thread-31::DEBUG::2014-02-04
09:54:08,826::task::869::TaskManager.Task::(_run)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::Task._run:
66924dbf-5a1c-473e-a158-d038aae38dc3 (None,
'8c4e8898-c91a-4d49-98e8-b6467791a9cc', 'IT',
'471487ed-2946-4dfc-8ec3-96546006be12',
['471487ed-2946-4dfc-8ec3-96546006be12'], 3, None, 5, 60, 10, 3) {} failed
- stopping task
Thread-31::DEBUG::2014-02-04
09:54:08,826::task::1194::TaskManager.Task::(stop)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::stopping in state preparing
(force False)
Thread-31::DEBUG::2014-02-04
09:54:08,826::task::974::TaskManager.Task::(_decref)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::ref 1 aborting True
Thread-31::INFO::2014-02-04
09:54:08,826::task::1151::TaskManager.Task::(prepare)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::aborting: Task is aborted:
'Cannot acquire host id' - code 661
Thread-31::DEBUG::2014-02-04
09:54:08,826::task::1156::TaskManager.Task::(prepare)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::Prepare: aborted: Cannot
acquire host id
Thread-31::DEBUG::2014-02-04
09:54:08,827::task::974::TaskManager.Task::(_decref)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::ref 0 aborting True
Thread-31::DEBUG::2014-02-04
09:54:08,827::task::909::TaskManager.Task::(_doAbort)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::Task._doAbort: force False
Thread-31::DEBUG::2014-02-04
09:54:08,827::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-31::DEBUG::2014-02-04
09:54:08,827::task::579::TaskManager.Task::(_updateState)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::moving from state preparing ->
state aborting
Thread-31::DEBUG::2014-02-04
09:54:08,827::task::534::TaskManager.Task::(__state_aborting)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::_aborting: recover policy none
Thread-31::DEBUG::2014-02-04
09:54:08,827::task::579::TaskManager.Task::(_updateState)
Task=`66924dbf-5a1c-473e-a158-d038aae38dc3`::moving from state aborting ->
state failed
Thread-31::DEBUG::2014-02-04
09:54:08,827::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.471487ed-2946-4dfc-8ec3-96546006be12': < ResourceRef
'Storage.471487ed-2946-4dfc-8ec3-96546006be12', isValid: 'True' obj:
'None'>, 'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc': < ResourceRef
'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc', isValid: 'True' obj:
'None'>}
Thread-31::DEBUG::2014-02-04
09:54:08,828::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-31::DEBUG::2014-02-04
09:54:08,828::resourceManager::615::ResourceManager::(releaseResource)
Trying to release resource 'Storage.471487ed-2946-4dfc-8ec3-96546006be12'
Thread-31::DEBUG::2014-02-04
09:54:08,828::resourceManager::634::ResourceManager::(releaseResource)
Released resource 'Storage.471487ed-2946-4dfc-8ec3-96546006be12' (0 active
users)
Thread-31::DEBUG::2014-02-04
09:54:08,828::resourceManager::640::ResourceManager::(releaseResource)
Resource 'Storage.471487ed-2946-4dfc-8ec3-96546006be12' is free, finding
out if anyone is waiting for it.
Thread-31::DEBUG::2014-02-04
09:54:08,828::resourceManager::648::ResourceManager::(releaseResource) No
one is waiting for resource 'Storage.471487ed-2946-4dfc-8ec3-96546006be12',
Clearing records.
Thread-31::DEBUG::2014-02-04
09:54:08,828::resourceManager::615::ResourceManager::(releaseResource)
Trying to release resource 'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc'
Thread-31::DEBUG::2014-02-04
09:54:08,829::resourceManager::634::ResourceManager::(releaseResource)
Released resource 'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc' (0 active
users)
Thread-31::DEBUG::2014-02-04
09:54:08,829::resourceManager::640::ResourceManager::(releaseResource)
Resource 'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc' is free, finding
out if anyone is waiting for it.
Thread-31::DEBUG::2014-02-04
09:54:08,829::resourceManager::648::ResourceManager::(releaseResource) No
one is waiting for resource 'Storage.8c4e8898-c91a-4d49-98e8-b6467791a9cc',
Clearing records.
Thread-31::ERROR::2014-02-04
09:54:08,829::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status':
{'message': "Cannot acquire host id:
('471487ed-2946-4dfc-8ec3-96546006be12', SanlockException(22, 'Sanlock
lockspace add failure', 'Invalid argument'))", 'code': 661}}
*Storage domain metadata file:*
CLASS=Data
DESCRIPTION=gluster-store-rep2
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
POOL_UUID=
REMOTE_PATH=10.0.10.3:/rep2
ROLE=Regular
SDUUID=471487ed-2946-4dfc-8ec3-96546006be12
TYPE=POSIXFS
VERSION=3
_SHA_CKSUM=469191aac3fb8ef504b6a4d301b6d8be6fffece1
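Since the failure is sanlock returning EINVAL while adding the lockspace,
this is roughly what I plan to check on the host next (illustrative only;
the paths come from the vdsm log above, and I understand direct I/O problems
on gluster mounts are a common cause of this error):
sanlock client status                  # lockspaces sanlock currently holds
sanlock client log_dump | tail -n 50   # sanlock's own view of the failed add_lockspace
ls -l /rhev/data-center/mnt/10.0.10.3:_rep2/471487ed-2946-4dfc-8ec3-96546006be12/dom_md/
# sanlock uses direct I/O on the ids file by default; quick O_DIRECT write test on the mount:
dd if=/dev/zero of=/rhev/data-center/mnt/10.0.10.3:_rep2/odirect_test bs=4096 count=1 oflag=direct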
*Steve Dainard *
IT Infrastructure Manager
Miovision <http://miovision.com/> | *Rethink Traffic*
519-513-2407 ex.250
877-646-8476 (toll-free)
Re: [Users] I can't remove VM
by Dafna Ron
Any time.
If you manage to reproduce please let us know.
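If it does come back, something like this on the SPM host might show which
LV still references the parent image from the KeyError (illustrative; the VG
name and UUID are taken from the traceback quoted below, and vdsm records the
image/parent UUIDs of each volume as IU_/PU_ LV tags):
lvs -o lv_name,lv_tags c332da29-ba9f-4c94-8fa9-346bb8e04e2a | grep 63650a24-7e83-4c0a-851d-0ce9869a294d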
Dafna
On 02/04/2014 05:00 PM, Eduardo Ramos wrote:
> Hi Dafna! Thanks for responding.
>
> In order to collect full logs, I migrated my 61 machines from 16 to 2
> hosts. When I tried to remove, it worked without any problem. I did
> not understand why. I'm investigating.
>
> Thanks again for your attention.
>
> On 02/03/2014 11:43 AM, Dafna Ron wrote:
>> please attach full vdsm and engine logs.
>>
>> Thanks,
>>
>> Dafna
>>
>>
>> On 02/03/2014 12:11 PM, Eduardo Ramos wrote:
>>> Hi all!
>>>
>>> I'm having trouble removing virtual machines. My environment runs
>>> on an iSCSI storage domain. When I try to remove one, the SPM logs:
>>>
>>> # Start vdsm SPM log #
>>> Thread-6019517::INFO::2014-02-03
>>> 09:58:09,293::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> deleteImage(sdUUID='c332da29-ba9f-4c94-8fa9-346bb8e04e2a',
>>> spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a',
>>> imgUUID='57ba1906-2035-4503-acbc-5f6f077f75cc', postZero='false',
>>> force='false')
>>> Thread-6019517::INFO::2014-02-03
>>> 09:58:09,293::blockSD::816::Storage.StorageDomain::(validate)
>>> sdUUID=c332da29-ba9f-4c94-8fa9-346bb8e04e2a
>>> Thread-6019517::ERROR::2014-02-03
>>> 09:58:10,061::task::833::TaskManager.Task::(_setError)
>>> Task=`8cbf9978-ed51-488a-af52-a3db030e44ff`::Unexpected error
>>> Traceback (most recent call last):
>>> File "/usr/share/vdsm/storage/task.py", line 840, in _run
>>> return fn(*args, **kargs)
>>> File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
>>> res = f(*args, **kwargs)
>>> File "/usr/share/vdsm/storage/hsm.py", line 1429, in deleteImage
>>> allVols = dom.getAllVolumes()
>>> File "/usr/share/vdsm/storage/blockSD.py", line 972, in getAllVolumes
>>> return getAllVolumes(self.sdUUID)
>>> File "/usr/share/vdsm/storage/blockSD.py", line 172, in getAllVolumes
>>> vImg not in res[vPar]['imgs']):
>>> KeyError: '63650a24-7e83-4c0a-851d-0ce9869a294d'
>>> Thread-6019517::INFO::2014-02-03
>>> 09:58:10,063::task::1134::TaskManager.Task::(prepare)
>>> Task=`8cbf9978-ed51-488a-af52-a3db030e44ff`::aborting: Task is
>>> aborted: u"'63650a24-7e83-4c0a-851d-0ce9869a294d'" - code 100
>>> Thread-6019517::ERROR::2014-02-03
>>> 09:58:10,066::dispatcher::70::Storage.Dispatcher.Protect::(run)
>>> '63650a24-7e83-4c0a-851d-0ce9869a294d'
>>> Traceback (most recent call last):
>>> File "/usr/share/vdsm/storage/dispatcher.py", line 62, in run
>>> result = ctask.prepare(self.func, *args, **kwargs)
>>> File "/usr/share/vdsm/storage/task.py", line 1142, in prepare
>>> raise self.error
>>> KeyError: '63650a24-7e83-4c0a-851d-0ce9869a294d'
>>> Thread-6019518::INFO::2014-02-03
>>> 09:58:10,087::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> getSpmStatus(spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a',
>>> options=None)
>>> Thread-6019518::INFO::2014-02-03
>>> 09:58:10,088::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> getSpmStatus, Return response: {'spm_st': {'spmId': 14, 'spmStatus':
>>> 'SPM', 'spmLver': 64}}
>>> Thread-6019519::INFO::2014-02-03
>>> 09:58:10,100::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> getAllTasksStatuses(spUUID=None, options=None)
>>> Thread-6019519::INFO::2014-02-03
>>> 09:58:10,101::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> getAllTasksStatuses, Return response: {'allTasksStatus': {}}
>>> Thread-6019520::INFO::2014-02-03
>>> 09:58:10,109::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> spmStop(spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a', options=None)
>>> Thread-6019520::INFO::2014-02-03
>>> 09:58:10,681::clusterlock::121::SafeLease::(release) Releasing
>>> cluster lock for domain c332da29-ba9f-4c94-8fa9-346bb8e04e2a
>>> Thread-6019521::INFO::2014-02-03
>>> 09:58:11,054::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> repoStats(options=None)
>>> Thread-6019521::INFO::2014-02-03
>>> 09:58:11,054::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> repoStats, Return response:
>>> {u'51eb6183-157d-4015-ae0f-1c7ffb1731c0': {'delay':
>>> '0.00799298286438', 'lastCheck': '5.3', 'code': 0, 'valid': True},
>>> u'c332da29-ba9f-4c94-8fa9-346bb8e04e2a': {'delay':
>>> '0.0197920799255', 'lastCheck': '4.9', 'code': 0, 'valid': True},
>>> u'0e0be898-6e04-4469-bb32-91f3cf8146d1': {'delay':
>>> '0.00803208351135', 'lastCheck': '5.3', 'code': 0, 'valid': True}}
>>> Thread-6019520::INFO::2014-02-03
>>> 09:58:11,732::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> spmStop, Return response: None
>>> Thread-6019523::INFO::2014-02-03
>>> 09:58:11,835::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> getAllTasksStatuses(spUUID=None, options=None)
>>> Thread-6019523::INFO::2014-02-03
>>> 09:58:11,835::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> getAllTasksStatuses, Return response: {'allTasksStatus': {}}
>>> Thread-6019524::INFO::2014-02-03
>>> 09:58:11,844::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> spmStop(spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a', options=None)
>>> Thread-6019524::ERROR::2014-02-03
>>> 09:58:11,846::task::833::TaskManager.Task::(_setError)
>>> Task=`00df5ff7-bbf4-4a0e-b60b-1b06dcaa7683`::Unexpected error
>>> Traceback (most recent call last):
>>> File "/usr/share/vdsm/storage/task.py", line 840, in _run
>>> return fn(*args, **kargs)
>>> File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
>>> res = f(*args, **kwargs)
>>> File "/usr/share/vdsm/storage/hsm.py", line 601, in spmStop
>>> pool.stopSpm()
>>> File "/usr/share/vdsm/storage/securable.py", line 66, in wrapper
>>> raise SecureError()
>>> SecureError
>>> Thread-6019524::INFO::2014-02-03
>>> 09:58:11,855::task::1134::TaskManager.Task::(prepare)
>>> Task=`00df5ff7-bbf4-4a0e-b60b-1b06dcaa7683`::aborting: Task is
>>> aborted: u'' - code 100
>>> Thread-6019524::ERROR::2014-02-03
>>> 09:58:11,857::dispatcher::70::Storage.Dispatcher.Protect::(run)
>>> Traceback (most recent call last):
>>> File "/usr/share/vdsm/storage/dispatcher.py", line 62, in run
>>> result = ctask.prepare(self.func, *args, **kwargs)
>>> File "/usr/share/vdsm/storage/task.py", line 1142, in prepare
>>> raise self.error
>>> SecureError
>>> Dummy-6018624::INFO::2014-02-03
>>> 09:58:14,220::storage_mailbox::674::Storage.MailBox.SpmMailMonitor::(run)
>>> SPM_MailMonitor - Incoming mail monitoring thread stopped
>>> Thread-34627::INFO::2014-02-03
>>> 09:58:17,696::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> getVolumeSize(sdUUID='c332da29-ba9f-4c94-8fa9-346bb8e04e2a',
>>> spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a',
>>> imgUUID='974d9602-8fbe-485b-a12d-59b6c34826b7',
>>> volUUID='c1bcfe5c-20ab-4f50-a88b-e2e0e1184bf8', options=None)
>>> Thread-34757::INFO::2014-02-03
>>> 09:58:17,696::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> getVolumeSize(sdUUID='c332da29-ba9f-4c94-8fa9-346bb8e04e2a',
>>> spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a',
>>> imgUUID='49f65bfd-8592-42b9-9a31-91268402903f',
>>> volUUID='511e6584-4f19-426d-9379-b223d0c2d9c6', options=None)
>>> Thread-34627::INFO::2014-02-03
>>> 09:58:17,697::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> getVolumeSize, Return response: {'truesize': '10737418240',
>>> 'apparentsize': '10737418240'}
>>> Thread-34757::INFO::2014-02-03
>>> 09:58:17,698::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> getVolumeSize, Return response: {'truesize': '32212254720',
>>> 'apparentsize': '32212254720'}
>>> Thread-6019529::INFO::2014-02-03
>>> 09:58:21,672::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> repoStats(options=None)
>>> Thread-6019529::INFO::2014-02-03
>>> 09:58:21,673::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> repoStats, Return response:
>>> {u'51eb6183-157d-4015-ae0f-1c7ffb1731c0': {'delay':
>>> '0.00730204582214', 'lastCheck': '5.9', 'code': 0, 'valid': True},
>>> u'c332da29-ba9f-4c94-8fa9-346bb8e04e2a': {'delay':
>>> '0.0207469463348', 'lastCheck': '5.3', 'code': 0, 'valid': True},
>>> u'0e0be898-6e04-4469-bb32-91f3cf8146d1': {'delay':
>>> '0.00734615325928', 'lastCheck': '5.9', 'code': 0, 'valid': True}}
>>> Thread-243::INFO::2014-02-03
>>> 09:58:27,800::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> getVolumeSize(sdUUID='c332da29-ba9f-4c94-8fa9-346bb8e04e2a',
>>> spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a',
>>> imgUUID='50f2c3e9-aa94-4ad1-9c3f-91b452292374',
>>> volUUID='d7cddb76-a5b7-49ed-9efe-44d92ec18d93', options=None)
>>> Thread-34590::INFO::2014-02-03
>>> 09:58:27,801::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> getVolumeSize(sdUUID='c332da29-ba9f-4c94-8fa9-346bb8e04e2a',
>>> spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a',
>>> imgUUID='c36bb1da-babd-47bd-a406-58f0cb529c00',
>>> volUUID='6ea83f9e-c614-4e11-ab57-314ed4efeeaa', options=None)
>>> Thread-243::INFO::2014-02-03
>>> 09:58:27,802::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> getVolumeSize, Return response: {'truesize': '10737418240',
>>> 'apparentsize': '10737418240'}
>>> Thread-34590::INFO::2014-02-03
>>> 09:58:27,803::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> getVolumeSize, Return response: {'truesize': '107374182400',
>>> 'apparentsize': '107374182400'}
>>> Thread-6019535::INFO::2014-02-03
>>> 09:58:32,337::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> repoStats(options=None)
>>> Thread-6019535::INFO::2014-02-03
>>> 09:58:32,337::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> repoStats, Return response:
>>> {u'51eb6183-157d-4015-ae0f-1c7ffb1731c0': {'delay':
>>> '0.0119340419769', 'lastCheck': '6.6', 'code': 0, 'valid': True},
>>> u'c332da29-ba9f-4c94-8fa9-346bb8e04e2a': {'delay':
>>> '0.0190720558167', 'lastCheck': '6.0', 'code': 0, 'valid': True},
>>> u'0e0be898-6e04-4469-bb32-91f3cf8146d1': {'delay':
>>> '0.00720596313477', 'lastCheck': '6.6', 'code': 0, 'valid': True}}
>>> Thread-2017487::INFO::2014-02-03
>>> 09:58:37,692::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> getVolumeSize(sdUUID='c332da29-ba9f-4c94-8fa9-346bb8e04e2a',
>>> spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a',
>>> imgUUID='827f2d81-dc8c-414e-90d2-75e76b3250a0',
>>> volUUID='f86ec330-0815-4361-8ce7-abf3318a8939', options=None)
>>> Thread-2017487::INFO::2014-02-03
>>> 09:58:37,693::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> getVolumeSize, Return response: {'truesize': '10737418240',
>>> 'apparentsize': '10737418240'}
>>> Thread-6019540::INFO::2014-02-03
>>> 09:58:39,118::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> getAllTasksStatuses(spUUID=None, options=None)
>>> Thread-6019540::INFO::2014-02-03
>>> 09:58:39,118::logUtils::44::dispatcher::(wrapper) Run and protect:
>>> getAllTasksStatuses, Return response: {'allTasksStatus': {}}
>>> Thread-6019541::INFO::2014-02-03
>>> 09:58:39,126::logUtils::41::dispatcher::(wrapper) Run and protect:
>>> spmStop(spUUID='9dbc7bb1-c460-4202-8f10-862d2ed3ed9a', options=None)
>>> Thread-6019541::ERROR::2014-02-03
>>> 09:58:39,127::task::833::TaskManager.Task::(_setError)
>>> Task=`1f478485-401b-4b9b-b58b-1e7973cf64a2`::Unexpected error
>>> Traceback (most recent call last):
>>> File "/usr/share/vdsm/storage/task.py", line 840, in _run
>>> return fn(*args, **kargs)
>>> File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
>>> res = f(*args, **kwargs)
>>> File "/usr/share/vdsm/storage/hsm.py", line 601, in spmStop
>>> pool.stopSpm()
>>> File "/usr/share/vdsm/storage/securable.py", line 66, in wrapper
>>> raise SecureError()
>>> SecureError
>>> Thread-6019541::INFO::2014-02-03
>>> 09:58:39,128::task::1134::TaskManager.Task::(prepare)
>>> Task=`1f478485-401b-4b9b-b58b-1e7973cf64a2`::aborting: Task is
>>> aborted: u'' - code 100
>>> Thread-6019541::ERROR::2014-02-03
>>> 09:58:39,130::dispatcher::70::Storage.Dispatcher.Protect::(run)
>>> Traceback (most recent call last):
>>> File "/usr/share/vdsm/storage/dispatcher.py", line 62, in run
>>> result = ctask.prepare(self.func, *args, **kwargs)
>>> File "/usr/share/vdsm/storage/task.py", line 1142, in prepare
>>> raise self.error
>>> SecureError
>>> # End vdsm SPM log #
>>>
>>> And afterwards, the cluster elects another SPM.
>>>
>>> The webgui shows on 'events' tab:
>>>
>>> # Start webgui events #
>>> Data Center is being initialized, please wait for initialization to
>>> complete.
>>> Failed to remove VM _12.147_postgresql_default.sir.inpe.br_apagar
>>> (User: eduardo.ramos).
>>> # End webgui events #
>>>
>>> Engine logs nothing but normal change of SPM.
>>>
>>> I would like to know how I can identify what is stuck, and whether I can
>>> delete it by hand, removing the entry from the DB and using lvremove.
>>>
>>> Thanks!
>>>
>>>
>>
>>
>
>
--
Dafna Ron
[Users] oVirt 3.3.3 RC EL6 Live Snapshot
by Karli Sjöberg
Hi!
I've gone through upgrading from 3.3.2 to 3.3.3 RC on CentOS 6.5 in our
test environment; it went off without a hitch, so "good job" guys! However,
something I'd very much like to see fixed is live snapshots for CentOS,
especially since it seems to be fixed already for Fedora. The issue has
already been discussed:
http://lists.ovirt.org/pipermail/users/2013-December/019090.html
Is this something that can be targeted for 3.3.3 GA?
--
Kind regards
-------------------------------------------------------------------------------
Karli Sjöberg
Swedish University of Agricultural Sciences Box 7079 (Visiting Address
Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjoberg(a)slu.se
[Users] Install on CentOS6.5 fails
by ml ml
Hello list,
I am trying to install oVirt on CentOS 6.5.
The howto from http://www.ovirt.org/Download fails with:
[root@ovirt ovirt-engine]# yum localinstall
http://ovirt.org/releases/ovirt-release-el.noarch.rpm
Loaded plugins: fastestmirror, versionlock
Setting up Local Package Process
ovirt-release-el.noarch.rpm
| 8.1 kB 00:00
Examining /var/tmp/yum-root-200V5v/ovirt-release-el.noarch.rpm:
ovirt-release-el6-10.0.1-3.noarch
Marking /var/tmp/yum-root-200V5v/ovirt-release-el.noarch.rpm to be installed
Loading mirror speeds from cached hostfile
* base: ftp.plusline.de
* extras: mirror.skylink-datacenter.de
* updates: ftp-stud.fht-esslingen.de
Resolving Dependencies
--> Running transaction check
---> Package ovirt-release-el6.noarch 0:10.0.1-3 will be installed
--> Processing Dependency: epel-release for package:
ovirt-release-el6-10.0.1-3.noarch
--> Finished Dependency Resolution
Error: Package: ovirt-release-el6-10.0.1-3.noarch (/ovirt-release-el.noarch)
Requires: epel-release
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
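I guess the intended order is something like the following (the EPEL release
RPM URL/version below is only my guess for EL6), but the download page does
not mention it:
yum localinstall http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum localinstall http://ovirt.org/releases/ovirt-release-el.noarch.rpm
yum install -y ovirt-engine
engine-setup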
So I used http://wiki.centos.org/HowTos/oVirt instead.
The install runs without any errors. However, I am now getting a blank page
when I try to access the webadmin page.
My server.log:
2014-02-04 12:19:49,332 ERROR
[org.jboss.as.controller.management-operation] (ServerService Thread Pool
-- 20) JBAS014612: Operation ("add") failed - address: ([("subsystem" =>
"jaxrs")]): org.jboss.modules.ModuleLoadError: Error loading module from
/usr/share/ovirt-engine/modules/org/apache/httpcomponents/main/module.xml
at
org.jboss.modules.ModuleLoadException.toError(ModuleLoadException.java:78)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.Module.getPathsUnchecked(Module.java:1166)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.Module.loadModuleClass(Module.java:512)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:182)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:468)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.ConcurrentClassLoader.performLoadClassChecked(ConcurrentClassLoader.java:456)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.ConcurrentClassLoader.performLoadClassChecked(ConcurrentClassLoader.java:423)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:398)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:120)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.resteasy.plugins.server.servlet.ResteasyBootstrapClasses.<clinit>(ResteasyBootstrapClasses.java:11)
at
org.jboss.as.jaxrs.deployment.JaxrsScanningProcessor.<clinit>(JaxrsScanningProcessor.java:121)
at
org.jboss.as.jaxrs.JaxrsSubsystemAdd$1.execute(JaxrsSubsystemAdd.java:61)
at
org.jboss.as.server.AbstractDeploymentChainStep.execute(AbstractDeploymentChainStep.java:45)
at
org.jboss.as.controller.AbstractOperationContext.executeStep(AbstractOperationContext.java:385)
[jboss-as-controller-7.1.1.Final.jar:7.1.1.Final] at
org.jboss.as.controller.AbstractOperationContext.doCompleteStep(AbstractOperationContext.java:272)
[jboss-as-controller-7.1.1.Final.jar:7.1.1.Final] at
org.jboss.as.controller.AbstractOperationContext.completeStep(AbstractOperationContext.java:200)
[jboss-as-controller-7.1.1.Final.jar:7.1.1.Final] at
org.jboss.as.controller.ParallelBootOperationStepHandler$ParallelBootTask.run(ParallelBootOperationStepHandler.java:311)
[jboss-as-controller-7.1.1.Final.jar:7.1.1.Final] at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
[rt.jar:1.6.0_30] at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
[rt.jar:1.6.0_30] at java.lang.Thread.run(Thread.java:701)
[rt.jar:1.6.0_30] at
org.jboss.threads.JBossThread.run(JBossThread.java:122)
[jboss-threads-2.0.0.GA.jar:2.0.0.GA] Caused by:
javax.xml.stream.XMLStreamException: ParseError at [row,col]:[31,47]
Message: Failed to add resource root 'httpclient.jar' at path
'httpclient.jar' at
org.jboss.modules.ModuleXmlParser.parseResourceRoot(ModuleXmlParser.java:898)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.ModuleXmlParser.parseResources(ModuleXmlParser.java:854)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.ModuleXmlParser.parseModuleContents(ModuleXmlParser.java:676)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.ModuleXmlParser.parseDocument(ModuleXmlParser.java:548)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.ModuleXmlParser.parseModuleXml(ModuleXmlParser.java:287)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.ModuleXmlParser.parseModuleXml(ModuleXmlParser.java:242)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.LocalModuleLoader.parseModuleInfoFile(LocalModuleLoader.java:138)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.LocalModuleLoader.findModule(LocalModuleLoader.java:122)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.ModuleLoader.loadModuleLocal(ModuleLoader.java:275)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.ModuleLoader.preloadModule(ModuleLoader.java:222)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.LocalModuleLoader.preloadModule(LocalModuleLoader.java:94)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.Module.addPaths(Module.java:841) [jboss-modules.jar:
1.1.1.GA] at org.jboss.modules.Module.link(Module.java:1181)
[jboss-modules.jar:1.1.1.GA] at
org.jboss.modules.Module.getPaths(Module.java:1153) [jboss-modules.jar:
1.1.1.GA] at
org.jboss.modules.Module.getPathsUnchecked(Module.java:1164)
[jboss-modules.jar:1.1.1.GA] ... 19 more
My processes:
[root@ovirt ovirt-engine]# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 19232 1488 ? Ss 12:01 0:01 /sbin/init
root 2 0.0 0.0 0 0 ? S 12:01 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S 12:01 0:00
[migration/0]
root 4 0.0 0.0 0 0 ? S 12:01 0:00
[ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S 12:01 0:00
[migration/0]
root 6 0.0 0.0 0 0 ? S 12:01 0:00
[watchdog/0]
root 7 0.0 0.0 0 0 ? S 12:01 0:00 [events/0]
root 8 0.0 0.0 0 0 ? S 12:01 0:00 [cgroup]
root 9 0.0 0.0 0 0 ? S 12:01 0:00 [khelper]
root 10 0.0 0.0 0 0 ? S 12:01 0:00 [netns]
root 11 0.0 0.0 0 0 ? S 12:01 0:00 [async/mgr]
root 12 0.0 0.0 0 0 ? S 12:01 0:00 [pm]
root 13 0.0 0.0 0 0 ? S 12:01 0:00
[sync_supers]
root 14 0.0 0.0 0 0 ? S 12:01 0:00
[bdi-default]
root 15 0.0 0.0 0 0 ? S 12:01 0:00
[kintegrityd/0]
root 16 0.0 0.0 0 0 ? S 12:01 0:00 [kblockd/0]
root 17 0.0 0.0 0 0 ? S 12:01 0:00 [kacpid]
root 18 0.0 0.0 0 0 ? S 12:01 0:00
[kacpi_notify]
root 19 0.0 0.0 0 0 ? S 12:01 0:00
[kacpi_hotplug]
root 20 0.0 0.0 0 0 ? S 12:01 0:00 [ata_aux]
root 21 0.0 0.0 0 0 ? S 12:01 0:00 [ata_sff/0]
root 22 0.0 0.0 0 0 ? S 12:01 0:00
[ksuspend_usbd]
root 23 0.0 0.0 0 0 ? S 12:01 0:00 [khubd]
root 24 0.0 0.0 0 0 ? S 12:01 0:00 [kseriod]
root 25 0.0 0.0 0 0 ? S 12:01 0:00 [md/0]
root 26 0.0 0.0 0 0 ? S 12:01 0:00 [md_misc/0]
root 27 0.0 0.0 0 0 ? S 12:01 0:00 [linkwatch]
root 28 0.0 0.0 0 0 ? S 12:01 0:00
[khungtaskd]
root 29 0.0 0.0 0 0 ? S 12:01 0:00 [kswapd0]
root 30 0.0 0.0 0 0 ? SN 12:01 0:00 [ksmd]
root 31 0.0 0.0 0 0 ? SN 12:01 0:00
[khugepaged]
root 32 0.0 0.0 0 0 ? S 12:01 0:00 [aio/0]
root 33 0.0 0.0 0 0 ? S 12:01 0:00 [crypto/0]
root 38 0.0 0.0 0 0 ? S 12:01 0:00
[kthrotld/0]
root 39 0.0 0.0 0 0 ? S 12:01 0:00 [pciehpd]
root 41 0.0 0.0 0 0 ? S 12:02 0:00 [kpsmoused]
root 42 0.0 0.0 0 0 ? S 12:02 0:00
[usbhid_resumer]
root 72 0.0 0.0 0 0 ? S 12:02 0:00 [kstriped]
root 133 0.0 0.0 0 0 ? S 12:02 0:00 [scsi_eh_0]
root 134 0.0 0.0 0 0 ? S 12:02 0:00 [scsi_eh_1]
root 195 0.0 0.0 0 0 ? S 12:02 0:00 [scsi_eh_2]
root 196 0.0 0.0 0 0 ? S 12:02 0:00
[vmw_pvscsi_wq_2]
root 272 0.0 0.0 0 0 ? S 12:02 0:00 [kdmflush]
root 274 0.0 0.0 0 0 ? S 12:02 0:00 [kdmflush]
root 291 0.0 0.0 0 0 ? S 12:02 0:00
[jbd2/dm-0-8]
root 292 0.0 0.0 0 0 ? S 12:02 0:00
[ext4-dio-unwrit]
root 369 0.0 0.0 11076 1164 ? S<s 12:02 0:00
/sbin/udevd -d
root 533 0.0 0.0 0 0 ? S 12:02 0:00 [vmmemctl]
root 656 0.0 0.0 0 0 ? S 12:02 0:00
[jbd2/sda1-8]
root 657 0.0 0.0 0 0 ? S 12:02 0:00
[ext4-dio-unwrit]
root 695 0.0 0.0 0 0 ? S 12:02 0:00 [kauditd]
root 765 0.0 0.0 0 0 ? S 12:02 0:00
[flush-253:0]
root 903 0.0 0.0 27640 792 ? S<sl 12:02 0:00 auditd
root 919 0.0 0.0 249088 1580 ? Sl 12:02 0:00
/sbin/rsyslogd -i /var/run/syslogd.pid -c 5
root 956 0.0 0.0 0 0 ? S 12:02 0:00 [rpciod/0]
rpcuser 962 0.0 0.0 23348 1328 ? Ss 12:02 0:00 rpc.statd
-p 662 -o 2020
root 1081 0.0 0.0 66608 1228 ? Ss 12:02 0:00
/usr/sbin/sshd
root 1203 0.0 0.0 81280 3400 ? Ss 12:02 0:00
/usr/libexec/postfix/master
postfix 1211 0.0 0.0 81360 3372 ? S 12:02 0:00 pickup -l
-t fifo -u
postfix 1212 0.0 0.0 81532 3416 ? S 12:02 0:00 qmgr -l -t
fifo -u
root 1228 0.0 0.0 117300 1380 ? Ss 12:02 0:00 crond
root 1232 0.0 0.0 100360 4060 ? Ss 12:02 0:00 sshd:
root@pts/0
root 1245 0.0 0.0 4064 572 tty1 Ss+ 12:02 0:00
/sbin/mingetty /dev/tty1
root 1247 0.0 0.0 4064 568 tty2 Ss+ 12:02 0:00
/sbin/mingetty /dev/tty2
root 1249 0.0 0.0 4064 568 tty3 Ss+ 12:02 0:00
/sbin/mingetty /dev/tty3
root 1252 0.0 0.0 4064 572 tty4 Ss+ 12:02 0:00
/sbin/mingetty /dev/tty4
root 1254 0.0 0.0 12280 2580 ? S< 12:02 0:00
/sbin/udevd -d
root 1255 0.0 0.0 12280 2576 ? S< 12:02 0:00
/sbin/udevd -d
root 1256 0.0 0.0 4064 568 tty5 Ss+ 12:02 0:00
/sbin/mingetty /dev/tty5
root 1258 0.0 0.0 4064 568 tty6 Ss+ 12:02 0:00
/sbin/mingetty /dev/tty6
root 1263 0.0 0.0 108304 1852 pts/0 Ss+ 12:02 0:00 -bash
root 1327 0.0 0.0 100360 4104 ? Ss 12:05 0:00 sshd:
root@pts/1
root 1333 0.0 0.0 108304 1916 pts/1 Ss 12:05 0:00 -bash
root 1574 0.0 0.0 100360 4088 ? Ss 12:08 0:00 sshd:
root@pts/2
root 1578 0.0 0.0 108304 1896 pts/2 Ss+ 12:09 0:00 -bash
rpc 6827 0.0 0.0 18976 872 ? Ss 12:11 0:00 rpcbind
root 6923 0.0 0.0 21656 964 ? Ss 12:11 0:00 rpc.mountd
-p 892
root 6928 0.0 0.0 0 0 ? S 12:11 0:00 [lockd]
root 6929 0.0 0.0 0 0 ? S 12:11 0:00 [nfsd4]
root 6930 0.0 0.0 0 0 ? S 12:11 0:00
[nfsd4_callbacks]
root 6931 0.0 0.0 0 0 ? S 12:11 0:00 [nfsd]
root 6932 0.0 0.0 0 0 ? S 12:11 0:00 [nfsd]
root 6933 0.0 0.0 0 0 ? S 12:11 0:00 [nfsd]
root 6934 0.0 0.0 0 0 ? S 12:11 0:00 [nfsd]
root 6935 0.0 0.0 0 0 ? S 12:11 0:00 [nfsd]
root 6936 0.0 0.0 0 0 ? S 12:11 0:00 [nfsd]
root 6937 0.0 0.0 0 0 ? S 12:11 0:00 [nfsd]
root 6938 0.0 0.0 0 0 ? S 12:11 0:00 [nfsd]
root 6961 0.0 0.0 25164 576 ? Ss 12:11 0:00 rpc.idmapd
postgres 11279 0.0 0.1 217620 7044 ? S 12:19 0:00
/usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
postgres 11281 0.0 0.0 179264 1480 ? Ss 12:19 0:00 postgres:
logger process
postgres 11283 0.0 0.0 217736 2860 ? Ss 12:19 0:00 postgres:
writer process
postgres 11284 0.0 0.0 217620 1676 ? Ss 12:19 0:00 postgres:
wal writer process
postgres 11285 0.0 0.0 217884 2060 ? Ss 12:19 0:00 postgres:
autovacuum launcher process
postgres 11286 0.0 0.0 179548 1792 ? Ss 12:19 0:00 postgres:
stats collector process
ovirt 12406 0.9 6.0 2164440 300836 ? Ssl 12:19 0:08
engine-service -server -XX:+TieredCompilation -Xms1g -Xmx1g
-XX:PermSize=256m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true
-Dsun.rmi.dgc.client.gcInterval=3600000 -D
postgres 12464 0.0 0.0 218848 3872 ? Ss 12:19 0:00 postgres:
engine engine 127.0.0.1(42069) idle
root 12494 0.0 0.1 201080 5532 ? Ss 12:20 0:00
/usr/sbin/httpd
apache 12496 0.0 0.0 201220 3868 ? S 12:20 0:00
/usr/sbin/httpd
apache 12497 0.0 0.0 201228 4044 ? S 12:20 0:00
/usr/sbin/httpd
apache 12498 0.0 0.0 201228 4044 ? S 12:20 0:00
/usr/sbin/httpd
apache 12499 0.0 0.0 201228 4004 ? S 12:20 0:00
/usr/sbin/httpd
apache 12500 0.0 0.0 201228 4408 ? S 12:20 0:00
/usr/sbin/httpd
apache 12501 0.0 0.0 201228 4536 ? S 12:20 0:00
/usr/sbin/httpd
apache 12502 0.0 0.0 201228 4516 ? S 12:20 0:00
/usr/sbin/httpd
apache 12503 0.0 0.0 201080 3124 ? S 12:20 0:00
/usr/sbin/httpd
root 12557 0.0 0.0 110232 1168 pts/1 R+ 12:34 0:00 ps aux
My packages:
[root@ovirt ovirt-engine]# rpm -qa | grep ovirt
ovirt-engine-sdk-3.2.0.3-1.el6.centos.alt.noarch
ovirt-image-uploader-3.1.0-26.el6.centos.alt.noarch
ovirt-engine-userportal-3.1.0-3.26.3.el6.centos.alt.noarch
ovirt-engine-restapi-3.1.0-3.26.3.el6.centos.alt.noarch
ovirt-engine-config-3.1.0-3.26.3.el6.centos.alt.noarch
ovirt-engine-tools-common-3.1.0-3.26.3.el6.centos.alt.noarch
ovirt-engine-webadmin-portal-3.1.0-3.26.3.el6.centos.alt.noarch
ovirt-engine-backend-3.1.0-3.26.3.el6.centos.alt.noarch
ovirt-log-collector-3.1.0-26.el6.centos.alt.noarch
ovirt-iso-uploader-3.1.0-26.el6.centos.alt.noarch
ovirt-engine-cli-3.2.0.6-1.el6.centos.alt.noarch
ovirt-engine-jbossas711-1-3.el6.alt.x86_64
ovirt-engine-dbscripts-3.1.0-3.26.3.el6.centos.alt.noarch
ovirt-engine-setup-3.1.0-3.26.3.el6.centos.alt.noarch
ovirt-engine-notification-service-3.1.0-3.26.3.el6.centos.alt.noarch
ovirt-engine-genericapi-3.1.0-3.26.3.el6.centos.alt.noarch
ovirt-engine-3.1.0-3.26.3.el6.centos.alt.noarch
What am I doing wrong?
Thank you!
Cheers,
Mario
[Users] ovirt-report Forbidden access error
by Alessandro Bianchi
Hi all
I installed
ovirt-engine-reports-3.3.2-1.fc19.noarch using yum
Now I have reports listed when right-clicking on VMs, but on any report I
see this error:
Forbidden
You don't have permission to access /ovirt-engine-reports/flow.html on
this server.
This seems to be related to the Apache redirection, but how do I fix it?
I have three files in conf.d:
ovirt-engine-root-redirect.conf
z-ovirt-engine-proxy.conf
z-ovirt-engine-reports-proxy.conf
but I can't figure out how to fix them.
I applied no changes to these files.
Any hint?
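For what it's worth, this is how I was planning to dig further (just a
sketch of the checks I know of):
httpd -t                                          # syntax-check the loaded Apache configuration
grep -r ovirt-engine-reports /etc/httpd/conf.d/   # is a proxy stanza for the reports app present?
tail -n 50 /var/log/httpd/error_log               # what Apache itself logs for the 403
systemctl restart httpd.service                   # reload in case the reports conf was added after httpd start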
Thank you