3.6 upgrade issue
by Jon Archer
Hi all,
Wonder if anyone can shed any light on an error I'm seeing while running
engine-setup.
I've just upgraded the packages to the latest 3.6 ones today (from 3.5),
ran engine-setup, answered the questions and confirmed the install, then get
presented with:
[ INFO ] Cleaning async tasks and compensations
[ INFO ] Unlocking existing entities
[ INFO ] Checking the Engine database consistency
[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping ovirt-fence-kdump-listener service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ ERROR ] Failed to execute stage 'Misc configuration': function
getdwhhistorytimekeepingbyvarname(unknown) does not exist LINE 2:
select * from GetDwhHistoryTimekeepingByVarName(
^ HINT: No function matches the given name and argument
types. You might need to add explicit type casts.
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20150929144137-7u5rhg.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20150929144215-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
Any ideas where to look to fix things?
Thanks
Jon
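A quick way to check whether the function the upgrade is calling actually exists in the engine database (a sketch, assuming the default database name 'engine' and peer authentication for the postgres user):
# on the engine host, list functions matching the name the setup script calls
sudo -u postgres psql engine -c '\df getdwhhistorytimekeepingbyvarname'
# no rows returned means the DWH timekeeping function was never created,
# i.e. the DWH-related schema changes were not applied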
heavy webadmin
by Nathanaël Blanchet
Hi all,
Since I upgraded the engine to 3.6, I have noticed that the webadmin takes a lot
of resources, whichever browser I use. It can become very slow even for
small actions, like changing tabs or editing a VM. The browser activity
becomes intensive (100% CPU) and the processor gets very hot, with
increased fan activity. I suspect the JavaScript is responsible for this
behaviour. Is there a way to reduce the resources consumed by the webadmin?
(This is not a weakness of my laptop, which has an i7 CPU and 16GB of RAM.)
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
Multi-node cluster with local storage
by Pavel Gashev
Hello,
I'd like to ask the community what the best way is to use oVirt with the
following hardware configuration:
Three servers connected by a 1Gb network. Each server has 32 threads, 256GB
RAM and a 4TB RAID.
Please note that local storage and a 1Gb network is a typical
hardware configuration for almost any dedicated hosting provider.
Unfortunately, oVirt doesn't support multi-node clusters with local storage,
and Gluster/Ceph don't work well over a 1Gb network. It looks like the
only way to use oVirt in a three-node cluster is to share the local
storage over NFS. At least that makes it possible to migrate VMs and move
disks between hardware nodes.
Does anybody have such a setup?
Thanks
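A minimal sketch of what sharing each node's local storage over NFS might look like (the file name, path and subnet are made up; the 36:36 squash mapping is the vdsm:kvm pair oVirt expects):
# /etc/exports.d/ovirt-local.exports on each node
/data/ovirt  192.0.2.0/24(rw,anonuid=36,anongid=36,all_squash)
# reload the export table and verify, then add e.g. node1:/data/ovirt
# as an NFS storage domain in the webadmin
exportfs -ra
showmount -e localhost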
ovirt glusterfs performance
by Bill James
I'm setting up an oVirt cluster using GlusterFS and noticing less-than-stellar
performance.
Maybe my setup could use some adjustments?
3 hardware nodes running CentOS 7.2, GlusterFS 3.7.6.1, oVirt 3.6.2.6-1.
Each node has 8 spindles configured in one array, which is split using LVM
with one logical volume for the system and one for Gluster.
They each have 4 NICs:
NIC1 = ovirtmgmt
NIC2 = gluster
NIC3 = VM traffic
I tried with default glusterfs settings and also with:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB
[root@ovirt3 test scripts]# gluster volume info gv1
Volume Name: gv1
Type: Replicate
Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
Options Reconfigured:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB
Using a simple dd test on a VM in oVirt:
dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s
Another VM, not in oVirt, using NFS:
dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s
Is that expected or is there a better way to set it up to get better
performance?
Thanks.
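One tuning step often suggested for VM-hosting volumes is the predefined 'virt' option group shipped with GlusterFS (a sketch; the exact options it sets live in /var/lib/glusterd/groups/virt and vary between releases):
# apply the virt option group to the volume used for VM images
gluster volume set gv1 group virt
# confirm which options were changed
gluster volume info gv1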
ovirt with glusterfs - big test - unwanted results
by paf1@email.cz
Hello,
we tried the following test - with unwanted results.
Input:
5-node Gluster:
A = replica 3 with arbiter 1 ( node1 + node2 + arbiter on node 5 )
B = replica 3 with arbiter 1 ( node3 + node4 + arbiter on node 5 )
C = distributed replica 3 with arbiter 1 ( node1+node2, node3+node4, each
arbiter on node 5 )
node 5 holds only the arbiter bricks ( 4x )
TEST:
1) directly reboot one node - OK ( it does not matter which, data node
or arbiter node )
2) directly reboot two nodes - OK ( as long as the nodes are not from the same
replica )
3) directly reboot three nodes - yes, this is the main problem and
question ....
   - rebooted all three nodes of replica "B" ( not very likely, but
who knows ... )
   - all VMs with data on this replica were paused ( no data access ) - OK
   - all VMs running on the replica "B" nodes were lost ( started manually
later; their data is on other replicas ) - acceptable
BUT
   - !!! all oVirt domains went down !!! - the master domain is on replica
"A", which lost only one member out of three !!!
   We did not expect all domains to go down, especially not the master
with 2 live members.
Results:
   - the whole cluster was unreachable until all domains were up, which depends
on all nodes being up !!!
   - all paused VMs started back - OK
   - the rest of the VMs were rebooted and are running - OK
Questions:
   1) why did all domains go down if the master domain ( on replica "A" ) still
had two running members ( 2 of 3 ) ??
   2) how can we recover from that collapse without waiting for all nodes to
come up ? ( in the worst case a node has a HW error, for example ) ??
   3) which oVirt cluster policy can prevent that situation ?? ( if
any )
regards,
Pavel
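When domains on a surviving replica drop as well, the Gluster quorum settings are usually the first thing to look at; a sketch (VOLNAME is a placeholder, and 'gluster volume get' needs GlusterFS 3.7 or newer):
# show the effective quorum-related options on the data volume
gluster volume get VOLNAME all | grep -i quorum
# cluster.quorum-type controls client-side write quorum,
# cluster.server-quorum-type can take bricks down when peers are lost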
Error: Storage format V3 is not supported
by Alex R
I am trying to import a domain that I used as an export domain on a previous
install. The previous install was no older than v3.5 and was built with
the all-in-one plugin. Before destroying that system I took a portable
drive and made an export domain to export my VMs and templates.
The new system is up to date and was built as a hosted engine. When I try
to import the domain I get the following error:
"Error while executing action: Cannot add Storage. Storage format V3 is not
supported on the selected host version."
I just need to recover the VMs.
I connect the USB hard drive to the host and make an export directory just
like I did on the old host.
# ls -ld /mnt/export_ovirt
drwxr-xr-x. 5 vdsm kvm 4096 Mar 6 11:27 /mnt/export_ovirt
Among other things, I have tried an NFS mount:
# cat /etc/exports.d/ovirt.exports
/home/engineha 127.0.0.1/32(rw,anonuid=36,anongid=36,all_squash)
/mnt/backup-vm/ 10.3.1.0/24(rw,anonuid=36,anongid=36,all_squash)
127.0.0.1/32(rw,anonuid=36,anongid=36,all_squash)
# cat
/mnt/export_ovirt/images/4be3f6ac-7946-4e7b-9ca2-11731c8ba236/dom_md/metadata
CLASS=Backup
DESCRIPTION=eport_storage
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
POOL_UUID=053926e4-e63d-450e-8aa7-6f1235b944c6
REMOTE_PATH=/mnt/export_ovirt/images
ROLE=Regular
SDUUID=4be3f6ac-7946-4e7b-9ca2-11731c8ba236
TYPE=LOCALFS
VERSION=3
_SHA_CKSUM=2e6e203168bd84f3dc97c953b520ea8f78119bf0
# ls -l
/mnt/export_ovirt/images/4be3f6ac-7946-4e7b-9ca2-11731c8ba236/master/vms/4873de49-9090-40b1-a21d-665633109aa2/4873de49-9090-40b1-a21d-665633109aa2.ovf
-rw-r--r--. 1 vdsm kvm 9021 Mar 6 11:50
/mnt/export_ovirt/images/4be3f6ac-7946-4e7b-9ca2-11731c8ba236/master/vms/4873de49-9090-40b1-a21d-665633109aa2/4873de49-9090-40b1-a21d-665633109aa2.ovf
Thanks,
Alex
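A quick way to double-check that the export is actually being served over NFS before retrying the import (a sketch; the export path comes from the exports file shown above, the mount point is hypothetical):
# re-export and confirm what the host is serving
exportfs -rav
showmount -e 127.0.0.1
# try mounting it by hand the way the engine would
mkdir -p /mnt/nfstest
mount -t nfs 127.0.0.1:/mnt/backup-vm /mnt/nfstest && ls -ln /mnt/nfstest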
bug in disks QOS
by Fabrice Bacchella
I attached an image disk to a VM but set it with the wrong disk profile. I powered off the VM and then tried to change the profile in the GUI.
The operation in the GUI succeeds,
but nothing actually changes.
And in the log I get:
2016-03-25 10:12:10,467 INFO [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (default task-26) [2f3b7d9] Lock Acquired to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[a32e1043-a5a5-4e4c-8436-f7b7a4ff644c=<VM, ACTION_TYPE_FAILED_VM_IS_LOCKED>]'}'
2016-03-25 10:12:10,608 INFO [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (default task-26) [2f3b7d9] Running command: UpdateVmDiskCommand internal: false. Entities affected : ID: 55d2be6b-7a78-4712-82be-b725b7812db8 Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
2016-03-25 10:12:10,794 INFO [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (default task-26) [2f3b7d9] Lock freed to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[a32e1043-a5a5-4e4c-8436-f7b7a4ff644c=<VM, ACTION_TYPE_FAILED_VM_IS_LOCKED>]'}'
2016-03-25 10:12:10,808 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-26) [2f3b7d9] Correlation ID: 2f3b7d9, Call Stack: null, Custom Event ID: -1, Message: VM test test_Disk3 disk was updated by FA4@apachesso.
It says "with role type USER" but I'm logged as a super admin
The set up is totally new, on dedicated centos 7.2, running 3.6.3.4-1.el7.centos.
Can't start VMs (Unable to get volume size for domain)
by Justin Foreman
I'm running 3.6.2 rc1 with a hosted engine on an FCP storage domain.
As of yesterday, I can't run some VMs. I've experienced corruption on others (I now have a Windows VM that blue-screens on boot).
Here's the log from my engine.
2016-01-04 16:55:39,446 INFO [org.ovirt.engine.core.bll.RunVmCommand] (default task-16) [1f1deb62] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:39,479 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-16) [1f1deb62] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{runAsync='true', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}), log id: 299a5052
2016-01-04 16:55:39,479 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-16) [1f1deb62] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 299a5052
2016-01-04 16:55:39,517 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] Running command: RunVmCommand internal: false. Entities affected : ID: 3a17534b-e86d-4563-8ca2-2a27c34b4a87 Type: VMAction group RUN_VM with role type USER
2016-01-04 16:55:39,579 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{runAsync='true', hostId='null', vmId='00000000-0000-0000-0000-000000000000', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@dadddaa9'}), log id: 6574710a
2016-01-04 16:55:39,582 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] FINISH, UpdateVmDynamicDataVDSCommand, log id: 6574710a
2016-01-04 16:55:39,585 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] START, CreateVmVDSCommand( CreateVmVDSCommandParameters:{runAsync='true', hostId='2fe6c27b-9346-4678-8cd3-c9d367ec447f', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', vm='VM [adm1]'}), log id: 55e0849d
2016-01-04 16:55:39,586 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] START, CreateVDSCommand(HostName = ov-101, CreateVmVDSCommandParameters:{runAsync='true', hostId='2fe6c27b-9346-4678-8cd3-c9d367ec447f', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', vm='VM [adm1]'}), log id: 1d5c1c04
2016-01-04 16:55:39,589 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmInfoBuilderBase] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] Bootable disk '9e43c66a-5bf1-44d6-94f4-52178d15c1e6' set to index '0'
2016-01-04 16:55:39,600 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand pitReinjection=false,memGuaranteedSize=4054,smpThreadsPerCore=1,cpuType=SandyBridge,vmId=3a17534b-e86d-4563-8ca2-2a27c34b4a87,acpiEnable=true,numaTune={nodeset=0,1, mode=interleave},tabletEnable=true,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,vmType=kvm,keyboardLayout=en-us,smp=1,smpCoresPerSocket=1,emulatedMachine=pc-i440fx-rhel7.2.0,smartcardEnable=false,guestNumaNodes=[{memory=4054, cpus=0, nodeIndex=0}],transparentHugePages=true,vmName=adm1,maxVCpus=16,kvmEnable=true,devices=[{address={bus=0x00, domain=0x0000, function=0x0, slot=0x02, type=pci}, type=video, specParams={heads=1, vram=32768}, device=cirrus, deviceId=645e99e3-a9fa-4894-baf5-97b539236782}, {type=graphics, specParams={}, device=vnc, deviceId=12845c03-16a3-4bf0-a015-a15201a77673}, {iface=ide, shared=false, path=, address={bus=1, controller=0, unit=0, type=drive, target=0}, readonly=true, index=2, type=disk, specParams={path=}, device=cdrom, deviceId=ab048396-5dd8-4594-aa8a-9fe835a04cd1}, {shared=false, address={bus=0, controller=0, unit=0, type=drive, target=0}, imageID=9e43c66a-5bf1-44d6-94f4-52178d15c1e6, format=raw, index=0, optional=false, type=disk, deviceId=9e43c66a-5bf1-44d6-94f4-52178d15c1e6, domainID=1fb79d91-b245-4447-91e0-e57671152a8c, propagateErrors=off, iface=ide, readonly=false, bootOrder=1, poolID=00000001-0001-0001-0001-000000000154, volumeID=c736baca-de76-4593-b3dc-28bb8807e7a3, specParams={}, device=disk}, {shared=false, address={bus=0, controller=0, unit=1, type=drive, target=0}, imageID=a016b350-87ef-4c3b-b150-024907fed9c0, format=raw, optional=false, type=disk, deviceId=a016b350-87ef-4c3b-b150-024907fed9c0, domainID=1fb79d91-b245-4447-91e0-e57671152a8c, propagateErrors=off, iface=ide, readonly=false, poolID=00000001-0001-0001-0001-000000000154, volumeID=20fc4399-0b02-4da1-8aee-68df1629ca94, specParams={}, device=disk}, {filter=vdsm-no-mac-spoofing, nicModel=rtl8139, address={bus=0x00, domain=0x0000, function=0x0, slot=0x03, type=pci}, type=interface, specParams={inbound={}, outbound={}}, device=bridge, linkActive=true, deviceId=8e00d4cc-6a60-4598-82ee-645d742708de, macAddr=FA:0D:49:9E:A2:E6, network=server-vlan10}, {address={bus=0x00, domain=0x0000, function=0x0, slot=0x04, type=pci}, type=controller, specParams={}, device=virtio-serial, deviceId=8ac5777e-375f-4ec6-a6fd-856c7cd7363b}],custom={device_8617fb20-b870-45ea-8232-a70dd8b4551c=VmDevice:{id='VmDeviceId:{deviceId='8617fb20-b870-45ea-8232-a70dd8b4551c', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}, device_8617fb20-b870-45ea-8232-a70dd8b4551cdevice_f691fc09-31c8-43bf-bd82-c5acac8a1a76device_30bd748e-6ea8-434f-8587-d8ff8db5555e=VmDevice:{id='VmDeviceId:{deviceId='30bd748e-6ea8-434f-8587-d8ff8db5555e', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', 
usingScsiReservation='false'}, device_8617fb20-b870-45ea-8232-a70dd8b4551cdevice_f691fc09-31c8-43bf-bd82-c5acac8a1a76=VmDevice:{id='VmDeviceId:{deviceId='f691fc09-31c8-43bf-bd82-c5acac8a1a76', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}},display=vnc,timeOffset=0,spiceSslCipherSuite=DEFAULT,nice=0,maxMemSize=4194304,maxMemSlots=16,bootMenuEnable=false,memSize=4054
2016-01-04 16:55:39,627 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] FINISH, CreateVDSCommand, log id: 1d5c1c04
2016-01-04 16:55:39,631 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 55e0849d
2016-01-04 16:55:39,631 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] Lock freed to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:39,634 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-40) [1f1deb62] Correlation ID: 1f1deb62, Job ID: 5212e078-cf01-4312-ae00-7e0a136217a5, Call Stack: null, Custom Event ID: -1, Message: VM adm1 was started by jforeman@us.dignitastech.com(a)Dignitas AD (Host: ov-101).
2016-01-04 16:55:40,724 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-10) [] START, DestroyVDSCommand(HostName = ov-101, DestroyVmVDSCommandParameters:{runAsync='true', hostId='2fe6c27b-9346-4678-8cd3-c9d367ec447f', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', force='false', secondsToWait='0', gracefully='false', reason=''}), log id: 7935781d
2016-01-04 16:55:41,730 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-10) [] FINISH, DestroyVDSCommand, log id: 7935781d
2016-01-04 16:55:41,747 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-10) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM adm1 is down with error. Exit message: Unable to get volume size for domain 1fb79d91-b245-4447-91e0-e57671152a8c volume c736baca-de76-4593-b3dc-28bb8807e7a3.
2016-01-04 16:55:41,747 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-10) [] Running on vds during rerun failed vm: '2fe6c27b-9346-4678-8cd3-c9d367ec447f'
2016-01-04 16:55:41,747 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-10) [] VM '3a17534b-e86d-4563-8ca2-2a27c34b4a87(adm1) is running in db and not running in VDS 'ov-101'
2016-01-04 16:55:41,747 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-10) [] add VM 'adm1' to HA rerun treatment
2016-01-04 16:55:41,752 ERROR [org.ovirt.engine.core.vdsbroker.VmsMonitoring] (ForkJoinPool-1-worker-10) [] Rerun VM '3a17534b-e86d-4563-8ca2-2a27c34b4a87'. Called from VDS 'ov-101'
2016-01-04 16:55:41,756 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-30) [] Correlation ID: 1f1deb62, Job ID: 5212e078-cf01-4312-ae00-7e0a136217a5, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM adm1 on Host ov-101.
2016-01-04 16:55:41,760 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-30) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:41,770 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{runAsync='true', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}), log id: 2577cd3a
2016-01-04 16:55:41,770 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 2577cd3a
2016-01-04 16:55:41,798 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-30) [] Running command: RunVmCommand internal: false. Entities affected : ID: 3a17534b-e86d-4563-8ca2-2a27c34b4a87 Type: VMAction group RUN_VM with role type USER
2016-01-04 16:55:41,850 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{runAsync='true', hostId='null', vmId='00000000-0000-0000-0000-000000000000', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@dbe0ef0a'}), log id: 351fb749
2016-01-04 16:55:41,852 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] FINISH, UpdateVmDynamicDataVDSCommand, log id: 351fb749
2016-01-04 16:55:41,854 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] START, CreateVmVDSCommand( CreateVmVDSCommandParameters:{runAsync='true', hostId='65555052-9601-4e4f-88f5-a0f14dcc29eb', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', vm='VM [adm1]'}), log id: 3163c7c3
2016-01-04 16:55:41,857 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] START, CreateVDSCommand(HostName = ov-102, CreateVmVDSCommandParameters:{runAsync='true', hostId='65555052-9601-4e4f-88f5-a0f14dcc29eb', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', vm='VM [adm1]'}), log id: 569ec368
2016-01-04 16:55:41,860 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmInfoBuilderBase] (org.ovirt.thread.pool-8-thread-30) [] Bootable disk '9e43c66a-5bf1-44d6-94f4-52178d15c1e6' set to index '0'
2016-01-04 16:55:41,869 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand pitReinjection=false,memGuaranteedSize=4054,smpThreadsPerCore=1,cpuType=SandyBridge,vmId=3a17534b-e86d-4563-8ca2-2a27c34b4a87,acpiEnable=true,numaTune={nodeset=0,1, mode=interleave},tabletEnable=true,spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,vmType=kvm,keyboardLayout=en-us,smp=1,smpCoresPerSocket=1,emulatedMachine=pc-i440fx-rhel7.2.0,smartcardEnable=false,guestNumaNodes=[{memory=4054, cpus=0, nodeIndex=0}],transparentHugePages=true,vmName=adm1,maxVCpus=16,kvmEnable=true,devices=[{address={bus=0x00, domain=0x0000, function=0x0, slot=0x02, type=pci}, type=video, specParams={heads=1, vram=32768}, device=cirrus, deviceId=645e99e3-a9fa-4894-baf5-97b539236782}, {type=graphics, specParams={}, device=vnc, deviceId=12845c03-16a3-4bf0-a015-a15201a77673}, {iface=ide, shared=false, path=, address={bus=1, controller=0, unit=0, type=drive, target=0}, readonly=true, index=2, type=disk, specParams={path=}, device=cdrom, deviceId=ab048396-5dd8-4594-aa8a-9fe835a04cd1}, {shared=false, address={bus=0, controller=0, unit=0, type=drive, target=0}, imageID=9e43c66a-5bf1-44d6-94f4-52178d15c1e6, format=raw, index=0, optional=false, type=disk, deviceId=9e43c66a-5bf1-44d6-94f4-52178d15c1e6, domainID=1fb79d91-b245-4447-91e0-e57671152a8c, propagateErrors=off, iface=ide, readonly=false, bootOrder=1, poolID=00000001-0001-0001-0001-000000000154, volumeID=c736baca-de76-4593-b3dc-28bb8807e7a3, specParams={}, device=disk}, {shared=false, address={bus=0, controller=0, unit=1, type=drive, target=0}, imageID=a016b350-87ef-4c3b-b150-024907fed9c0, format=raw, optional=false, type=disk, deviceId=a016b350-87ef-4c3b-b150-024907fed9c0, domainID=1fb79d91-b245-4447-91e0-e57671152a8c, propagateErrors=off, iface=ide, readonly=false, poolID=00000001-0001-0001-0001-000000000154, volumeID=20fc4399-0b02-4da1-8aee-68df1629ca94, specParams={}, device=disk}, {filter=vdsm-no-mac-spoofing, nicModel=rtl8139, address={bus=0x00, domain=0x0000, function=0x0, slot=0x03, type=pci}, type=interface, specParams={inbound={}, outbound={}}, device=bridge, linkActive=true, deviceId=8e00d4cc-6a60-4598-82ee-645d742708de, macAddr=FA:0D:49:9E:A2:E6, network=server-vlan10}, {address={bus=0x00, domain=0x0000, function=0x0, slot=0x04, type=pci}, type=controller, specParams={}, device=virtio-serial, deviceId=8ac5777e-375f-4ec6-a6fd-856c7cd7363b}],custom={device_8617fb20-b870-45ea-8232-a70dd8b4551c=VmDevice:{id='VmDeviceId:{deviceId='8617fb20-b870-45ea-8232-a70dd8b4551c', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}, device_8617fb20-b870-45ea-8232-a70dd8b4551cdevice_f691fc09-31c8-43bf-bd82-c5acac8a1a76device_30bd748e-6ea8-434f-8587-d8ff8db5555e=VmDevice:{id='VmDeviceId:{deviceId='30bd748e-6ea8-434f-8587-d8ff8db5555e', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', 
usingScsiReservation='false'}, device_8617fb20-b870-45ea-8232-a70dd8b4551cdevice_f691fc09-31c8-43bf-bd82-c5acac8a1a76=VmDevice:{id='VmDeviceId:{deviceId='f691fc09-31c8-43bf-bd82-c5acac8a1a76', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}},display=vnc,timeOffset=0,spiceSslCipherSuite=DEFAULT,nice=0,maxMemSize=4194304,maxMemSlots=16,bootMenuEnable=false,memSize=4054
2016-01-04 16:55:41,987 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] FINISH, CreateVDSCommand, log id: 569ec368
2016-01-04 16:55:41,991 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-8-thread-30) [] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 3163c7c3
2016-01-04 16:55:41,992 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-30) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:41,994 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-30) [] Correlation ID: 1f1deb62, Job ID: 5212e078-cf01-4312-ae00-7e0a136217a5, Call Stack: null, Custom Event ID: -1, Message: VM adm1 was started by jforeman@us.dignitastech.com(a)Dignitas AD (Host: ov-102).
2016-01-04 16:55:43,069 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-3) [] START, DestroyVDSCommand(HostName = ov-102, DestroyVmVDSCommandParameters:{runAsync='true', hostId='65555052-9601-4e4f-88f5-a0f14dcc29eb', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87', force='false', secondsToWait='0', gracefully='false', reason=''}), log id: 43dd93c5
2016-01-04 16:55:44,075 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-3) [] FINISH, DestroyVDSCommand, log id: 43dd93c5
2016-01-04 16:55:44,091 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-3) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM adm1 is down with error. Exit message: Unable to get volume size for domain 1fb79d91-b245-4447-91e0-e57671152a8c volume c736baca-de76-4593-b3dc-28bb8807e7a3.
2016-01-04 16:55:44,091 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-3) [] Running on vds during rerun failed vm: '65555052-9601-4e4f-88f5-a0f14dcc29eb'
2016-01-04 16:55:44,092 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-3) [] VM '3a17534b-e86d-4563-8ca2-2a27c34b4a87(adm1) is running in db and not running in VDS 'ov-102'
2016-01-04 16:55:44,092 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-3) [] add VM 'adm1' to HA rerun treatment
2016-01-04 16:55:44,096 ERROR [org.ovirt.engine.core.vdsbroker.VmsMonitoring] (ForkJoinPool-1-worker-3) [] Rerun VM '3a17534b-e86d-4563-8ca2-2a27c34b4a87'. Called from VDS 'ov-102'
2016-01-04 16:55:44,128 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-35) [] Correlation ID: 1f1deb62, Job ID: 5212e078-cf01-4312-ae00-7e0a136217a5, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM adm1 on Host ov-102.
2016-01-04 16:55:44,132 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-35) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:44,141 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-35) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{runAsync='true', vmId='3a17534b-e86d-4563-8ca2-2a27c34b4a87'}), log id: 545236ca
2016-01-04 16:55:44,141 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-35) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 545236ca
2016-01-04 16:55:44,162 WARN [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-35) [] CanDoAction of action 'RunVm' failed for user jforeman@us.dignitastech.com(a)Dignitas AD. Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS
2016-01-04 16:55:44,162 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-35) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3a17534b-e86d-4563-8ca2-2a27c34b4a87=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-04 16:55:44,170 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-35) [] Correlation ID: 1f1deb62, Job ID: 5212e078-cf01-4312-ae00-7e0a136217a5, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM adm1 (User: jforeman@us.dignitastech.com(a)Dignitas AD).
2016-01-04 16:55:44,173 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand] (org.ovirt.thread.pool-8-thread-46) [48c1f0bd] Running command: ProcessDownVmCommand internal: true.
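One way to reproduce the failure outside the engine (a sketch; vdsClient comes from the vdsm-cli package, check 'vdsClient -h' for the exact argument order on your version, and the UUIDs are the ones from the log above) is to ask vdsm on the host for the volume size directly:
# arguments: sdUUID spUUID imgUUID volUUID
vdsClient -s 0 getVolumeSize \
    1fb79d91-b245-4447-91e0-e57671152a8c \
    00000001-0001-0001-0001-000000000154 \
    9e43c66a-5bf1-44d6-94f4-52178d15c1e6 \
    c736baca-de76-4593-b3dc-28bb8807e7a3
# on an FCP domain the volume is an LV inside a VG named after the domain UUID
lvs 1fb79d91-b245-4447-91e0-e57671152a8c | grep c736baca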
How to connect a VM with a port = -1
by zhangjian
Hi guys,
I created a VM in oVirt and found that it has port = -1. How can I connect
to it using something like remote-viewer?
--------------
console.vv
[virt-viewer]
type=spice
host=XXX.XXX.XXX.XXX
port=-1
password=J4xu1swd59A5
# Password is valid for 120 seconds.
delete-this-file=1
fullscreen=0
title=test:%d
toggle-fullscreen=shift+f11
...
...
...
--------------
I usually use the following command to connect to my VM when it has a
positive port value:
remote-viewer spice://XXX.XXX.XXX.XXX:590X
Regards,
Kenn
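A port of -1 usually means the unencrypted SPICE port is disabled and only the secure channel is offered (the console.vv would then carry a tls-port= entry in the part elided above); in that case one option is to pass the downloaded file to remote-viewer directly instead of building a spice:// URI by hand:
# remote-viewer reads host, tls-port, password and certificate info from the .vv file
remote-viewer console.vv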