Re: How to add vnc console for ovirt-engine VM
by Dafna Ron
Hi Xu,
Please direct questions to the oVirt users list rather than directly to a specific
person, as others may be able to assist you.
Adding the users list and Galit from lago to help.
Thanks,
Dafna
On Mon, Dec 24, 2018 at 8:49 AM Tian Xu <tian.xux(a)hotmail.com> wrote:
> Hi Dron,
>
> I'm trying to run the ovirt system tests on my CentOS 7.5 host, but my test
> fails because the ovirt engine VM lost its network connection. I want to look
> at what happens in my ovirt engine VM, but the VM has no VNC or SPICE console
> when it is created. Is there any way I can add a VNC console when creating the
> engine VM with lago?
>
> Thanks,
> Xu
>
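For reference, a generic workaround rather than a lago feature: the VMs lago creates are ordinary libvirt domains, so a VNC console can usually be added at the libvirt level after the environment is up. The domain name below is only a placeholder:
# list the lago-created domains (names include the lago prefix)
virsh -c qemu:///system list --all
# edit the engine domain and add a VNC graphics device inside <devices>:
#   <graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1'/>
virsh -c qemu:///system edit <lago-prefix>-engine
# after the domain is restarted, find the assigned display with
virsh -c qemu:///system vncdisplay <lago-prefix>-engine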
how to take a snapshot for hosted-engine?
by jeremiah52@naver.com
Hello everyone.
I want to take a snapshot of the hosted-engine. I intend to do work that might cause critical damage to the hosted-engine, such as separating the database from the engine. Before I do this, it would be nice to take a snapshot of the engine and then restore it if severe problems occur.
However, I tried to take a snapshot of the hosted-engine in the RHV-M portal and the snapshot failed.
Is there any way to take a snapshot of the hosted-engine?
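For reference, snapshots of the hosted-engine VM are generally blocked in the portal; the commonly suggested alternative before risky changes is engine-backup, run inside the engine VM. A minimal sketch (file names are just examples):
# inside the engine VM, before the risky change
engine-backup --mode=backup --scope=all --file=engine-$(date +%F).bck --log=engine-backup.log
# if things go badly, restore onto a freshly deployed engine
engine-backup --mode=restore --scope=all --file=engine-<date>.bck --log=engine-restore.log --provision-db --restore-permissions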
Hosted Engine VM and Storage not showing up
by ferrao@versatushpc.com.br
Hello,
I have a new oVirt installation using oVirt Node 4.2.7.1, and after deploying the hosted engine it does not show up in the interface, even after adding the first storage domain.
The Datacenter is up, but the engine VM and the engine storage do not appear.
I have the following message repeated constantly in /var/log/messages:
Jan 4 20:17:30 ovirt1 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. Please ensure you already added your first data domain for regular VMs
What’s wrong? Am I doing something different?
Additional info:
[root@ovirt1 ~]# vdsm-tool list-nets
ovirtmgmt (default route)
storage
[root@ovirt1 ~]# ip a | grep "inet "
inet 127.0.0.1/8 scope host lo
inet 10.20.0.101/24 brd 10.20.0.255 scope global dynamic ovirtmgmt
inet 192.168.10.1/29 brd 192.168.10.7 scope global storage
[root@ovirt1 ~]# mount | grep -i nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
10.20.0.200:/mnt/pool0/ovirt/he on /rhev/data-center/mnt/10.20.0.200:_mnt_pool0_ovirt_he type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.20.0.101,local_lock=none,addr=10.20.0.200)
[root@ovirt1 ~]# hosted-engine --check-deployed
Returns nothing!
[root@ovirt1 ~]# hosted-engine --check-liveliness
Hosted Engine is up!
[root@ovirt1 ~]# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt1.local.versatushpc.com.br
Host ID : 1
Engine status : {"health": "good", "vm": "up", "detail": "Up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 1736a87d
local_conf_timestamp : 7836
Host timestamp : 7836
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=7836 (Fri Jan 4 20:18:10 2019)
host-id=1
score=3400
vm_conf_refresh_time=7836 (Fri Jan 4 20:18:10 2019)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False
Thanks in advance,
PS: Log files are available here: http://www.if.ufrj.br/~ferrao/ovirt/issues/he-not-showing/
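In case it helps others hitting the same message: the OVF_STORE volumes are only created a few minutes after the first regular data domain becomes active, and the engine VM and its storage domain are then imported automatically. A rough troubleshooting sketch, assuming the data domain really is attached and active:
# on the host
systemctl restart ovirt-ha-broker ovirt-ha-agent
# then watch the agent log until the OVF_STORE warning stops repeating
tail -f /var/log/ovirt-hosted-engine-ha/agent.log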
oVirt QueryType
by Anastasiya Ruzhanskaya
Hello everyone!
I am investigating the POST requests oVirt makes during startup when
entering the webadmin.
The first is the XSRF token;
the second is a QueryType request with type 231.
I am a little bit confused: if I counted correctly, 231 is the Search type.
What does it search for exactly? It is not specified in the message.
The answer to this query is aaa.ProfileEntry.
Am I missing something? Then in the third message I see queries such as
GetAllVmIcons and GetBookmark, which seems rather strange (because no VMs are
shown on the dashboard).
Or am I counting the enum values incorrectly?
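One way to double-check the numbering is to list the enum constants in declaration order from the engine source of the exact version you are running. The source path and the constant format below are assumptions and may differ between versions:
# print ordinal -> name for the query type enum, counting from 0
grep -oP '^\s+\K\w+(?=\()' \
    backend/manager/modules/common/src/main/java/org/ovirt/engine/core/common/queries/VdcQueryType.java \
    | nl -v 0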
IoT Big Data design on AWS
by Soujanya Bargavi
I'm trying to design a large IoT solution for millions of devices, starting from zero. That's why I need a highly scalable platform like AWS.
My devices are going to report data using AWS IoT, and that's the only thing I've really decided. I need to store a lot of data, such as a temperature measurement every 15 minutes from each device, so I've planned to insert those measurements directly into DynamoDB using IoT Rules. On the other side, I need a relational structure to store companies, temperature sensors, etc., so I thought I could store that in MySQL on RDS.
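For what it's worth, the IoT Rules part of that is fairly direct; a minimal sketch using the AWS CLI and the dynamoDBv2 action, which writes each message field as an item attribute (the rule name, topic, table name and role ARN here are all made-up examples):
aws iot create-topic-rule --rule-name StoreTemperature \
  --topic-rule-payload '{
    "sql": "SELECT temperature, timestamp() AS ts, topic(2) AS device_id FROM '\''devices/+/telemetry'\''",
    "actions": [{
      "dynamoDBv2": {
        "roleArn": "arn:aws:iam::123456789012:role/iot-dynamodb-role",
        "putItem": { "tableName": "DeviceTemperatures" }
      }
    }]
  }'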
After that, I need to configure a proper analysis tool, so I was thinking of Kinesis and loading the data into Redshift after ETL using Data Pipeline, since AWS Glue doesn't support DynamoDB.
I'm new to some of these services, so I don't know exactly what I'm doing, and I don't know whether this approach is the best one. What do you think?
Ovirt Engine UI bug - cannot attach networks to hosts
by Leo David
Hi Everyone,
Using self-hosted engine 4.2.7.5-1.el7 running on ovirt-node 4.2.7, I am
trying to attach hosts to a newly created network for gluster/migration
traffic.
Once I click on Networks -> gluster -> Hosts -> Unattached, I get the following
error:
Uncaught exception occurred. Please try reloading the page. Details:
(TypeError) : M9(...) is null
Please have your administrator check the UI logs
So to make the cluster installation possible, I had to use ovirt 4.2.2 and
manually install ovirt-engine-appliance-4.2-20180626.1.el7 on the nodes
before deploying the self-hosted engine (a version that has other problems,
like not being able to get dashboards in cloudforms/manageiq -
ovirt-engine-dwh is missing).
Is it possible to:
1. upgrade the current hosted-engine version to the latest one that has this
module properly installed? (a rough sketch of the usual upgrade steps is below)
2. fix the latest (upgraded) version so I can attach hosts to networks?
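Regarding 1., the usual minor-version upgrade of the engine itself is done inside the engine VM while the cluster is in global maintenance; roughly (exact package globs may vary):
# on a host
hosted-engine --set-maintenance --mode=global
# inside the engine VM
yum update ovirt\*setup\*
engine-setup
yum update
# back on the host, once the engine is reachable again
hosted-engine --set-maintenance --mode=none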
Thank you very much!
Leo
--
Best regards, Leo David
oVirt Storage questions
by michael@wanderingmad.com
Following the previous thread about gluster issues, things seem to be running much better than before, and it has raised a few questions that I can't seem to find answers to:
Setup:
3 hyperconverged nodes
each node has 1x 1 TB SSD and 1x 500 GB NVMe drive
each node is connected via Ethernet and also by a 40 Gb InfiniBand connection for gluster replication.
Questions:
1. I created a 3 TB VDO device on the SSD and a 1.3 TB VDO device on the NVMe drive, with a 1 GB cache on each server, and I enabled the RDMA transport.
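As a sanity check on 1., the VDO space savings and whether gluster actually negotiated RDMA can be verified on each node (the volume name below is a placeholder):
vdostats --human-readable
gluster volume info <volname> | grep -i transport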
migration failed
by 董青龙
Hi all,
I have an oVirt 4.2 environment with 3 hosts. Currently, none of the VMs in this environment can be migrated, but all of the VMs can be started on all 3 hosts. Can anyone help? Thanks a lot!
engine logs:
2019-01-02 09:41:26,868+08 INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (default task-9) [3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] Lock Acquired to object 'EngineLock:{exclusiveLocks='[eff7f697-8a07-46e5-a631-a1011a0eb836=VM]', sharedLocks=''}'
2019-01-02 09:41:26,978+08 INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (EE-ManagedThreadFactory-engine-Thread-168938) [3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] Running command: MigrateVmCommand internal: false. Entities affected : ID: eff7f697-8a07-46e5-a631-a1011a0eb836 Type: VMAction group MIGRATE_VM with role type USER
2019-01-02 09:41:27,019+08 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-168938) [3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] START, MigrateVDSCommand( MigrateVDSCommandParameters:{hostId='0aff0075-4b41-4f37-98de-7433a17cd47e', vmId='eff7f697-8a07-46e5-a631-a1011a0eb836', srcHost='horeb66', dstVdsId='5bb18f6e-9c7e-4afd-92de-f6482bf752e5', dstHost='horeb65:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='192.168.128.78'}), log id: 1bd72db2
2019-01-02 09:41:27,019+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-168938) [3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] START, MigrateBrokerVDSCommand(HostName = horeb66, MigrateVDSCommandParameters:{hostId='0aff0075-4b41-4f37-98de-7433a17cd47e', vmId='eff7f697-8a07-46e5-a631-a1011a0eb836', srcHost='horeb66', dstVdsId='5bb18f6e-9c7e-4afd-92de-f6482bf752e5', dstHost='horeb65:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='192.168.128.78'}), log id: 380b8d38
2019-01-02 09:41:27,025+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-168938) [3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] FINISH, MigrateBrokerVDSCommand, log id: 380b8d38
2019-01-02 09:41:27,029+08 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-168938) [3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 1bd72db2
2019-01-02 09:41:27,036+08 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-168938) [3eed5f0e-aaf5-4dce-bf30-2c49e09ab30d] EVENT_ID: VM_MIGRATION_START(62), Migration started (VM: win7, Source: horeb66, Destination: horeb65, User: admin@internal-authz).
2019-01-02 09:41:41,557+08 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-30) [] VM 'eff7f697-8a07-46e5-a631-a1011a0eb836'(win7) moved from 'MigratingFrom' --> 'Up'
2019-01-02 09:41:41,557+08 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-30) [] Adding VM 'eff7f697-8a07-46e5-a631-a1011a0eb836'(win7) to re-run list
2019-01-02 09:41:41,567+08 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-30) [] Rerun VM 'eff7f697-8a07-46e5-a631-a1011a0eb836'. Called from VDS 'horeb66'
2019-01-02 09:41:41,570+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-168945) [] START, MigrateStatusVDSCommand(HostName = horeb66, MigrateStatusVDSCommandParameters:{hostId='0aff0075-4b41-4f37-98de-7433a17cd47e', vmId='eff7f697-8a07-46e5-a631-a1011a0eb836'}), log id: 4ed2923c
2019-01-02 09:41:41,573+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-168945) [] FINISH, MigrateStatusVDSCommand, log id: 4ed2923c
2019-01-02 09:41:41,583+08 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-168945) [] EVENT_ID: VM_MIGRATION_TRYING_RERUN(128), Failed to migrate VM win7 to Host horeb65 . Trying to migrate to another Host.
2019-01-02 09:41:41,642+08 INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (EE-ManagedThreadFactory-engine-Thread-168945) [] Running command: MigrateVmCommand internal: false. Entities affected : ID: eff7f697-8a07-46e5-a631-a1011a0eb836 Type: VMAction group MIGRATE_VM with role type USER
2019-01-02 09:41:41,671+08 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-168945) [] START, MigrateVDSCommand( MigrateVDSCommandParameters:{hostId='0aff0075-4b41-4f37-98de-7433a17cd47e', vmId='eff7f697-8a07-46e5-a631-a1011a0eb836', srcHost='horeb66', dstVdsId='20786f47-87fe-4ef6-be82-b580e5d0a350', dstHost='horeb67:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='192.168.128.76'}), log id: 4611863
2019-01-02 09:41:41,672+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-168945) [] START, MigrateBrokerVDSCommand(HostName = horeb66, MigrateVDSCommandParameters:{hostId='0aff0075-4b41-4f37-98de-7433a17cd47e', vmId='eff7f697-8a07-46e5-a631-a1011a0eb836', srcHost='horeb66', dstVdsId='20786f47-87fe-4ef6-be82-b580e5d0a350', dstHost='horeb67:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='192.168.128.76'}), log id: 7d20b357
2019-01-02 09:41:41,677+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-168945) [] FINISH, MigrateBrokerVDSCommand, log id: 7d20b357
2019-01-02 09:41:41,682+08 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-168945) [] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 4611863
2019-01-02 09:41:41,686+08 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-168945) [] EVENT_ID: VM_MIGRATION_START(62), Migration started (VM: win7, Source: horeb66, Destination: horeb67, User: admin@internal-authz).
2019-01-02 09:41:56,575+08 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-45) [] VM 'eff7f697-8a07-46e5-a631-a1011a0eb836'(win7) moved from 'MigratingFrom' --> 'Up'
2019-01-02 09:41:56,575+08 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-45) [] Adding VM 'eff7f697-8a07-46e5-a631-a1011a0eb836'(win7) to re-run list
2019-01-02 09:41:56,584+08 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-45) [] Rerun VM 'eff7f697-8a07-46e5-a631-a1011a0eb836'. Called from VDS 'horeb66'
2019-01-02 09:41:56,625+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-168952) [] START, MigrateStatusVDSCommand(HostName = horeb66, MigrateStatusVDSCommandParameters:{hostId='0aff0075-4b41-4f37-98de-7433a17cd47e', vmId='eff7f697-8a07-46e5-a631-a1011a0eb836'}), log id: 4ded4ce9
2019-01-02 09:41:56,628+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-168952) [] FINISH, MigrateStatusVDSCommand, log id: 4ded4ce9
2019-01-02 09:41:56,638+08 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-168952) [] EVENT_ID: VM_MIGRATION_TRYING_RERUN(128), Failed to migrate VM win7 to Host horeb67 . Trying to migrate to another Host.
2019-01-02 09:41:56,695+08 WARN [org.ovirt.engine.core.bll.MigrateVmCommand] (EE-ManagedThreadFactory-engine-Thread-168952) [] Validation of action 'MigrateVm' failed for user admin@internal-authz. Reasons: VAR__ACTION__MIGRATE,VAR__TYPE__VM,VAR__ACTION__MIGRATE,VAR__TYPE__VM,VAR__ACTION__MIGRATE,VAR__TYPE__VM,VAR__ACTION__MIGRATE,VAR__TYPE__VM,SCHEDULING_NO_HOSTS
2019-01-02 09:41:56,696+08 INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (EE-ManagedThreadFactory-engine-Thread-168952) [] Lock freed to object 'EngineLock:{exclusiveLocks='[eff7f697-8a07-46e5-a631-a1011a0eb836=VM]', sharedLocks=''}'
2019-01-02 09:41:56,706+08 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-168952) [] EVENT_ID: VM_MIGRATION_NO_VDS_TO_MIGRATE_TO(166), No available host was found to migrate VM win7 to.
2019-01-02 09:41:56,709+08 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-168952) [] EVENT_ID: VM_MIGRATION_FAILED(65), Migration failed (VM: win7, Source: horeb66).
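In case it helps: the engine log above only shows the rerun and the final abort; the reason the destination refused the migration usually appears in vdsm.log on the source and destination hosts around the same timestamps, and in the qemu log for the VM. For example:
grep -iE 'migrat|error' /var/log/vdsm/vdsm.log | less
less /var/log/libvirt/qemu/win7.log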
Which option is best for storage?
by geoff@pdclouds.com.au
Hi,
Which is the preferred option for connecting an oVirt VM farm to a SAN/NAS?
NFS (10G), iSCSI (10G) or FC (8G)?
We are confused: some people say iSCSI is preferred, others say NFS performs better than iSCSI, while FC is the most expensive but performs the best overall...
We would value an expert opinion.
Cheers
Geoff
Unable to increase memory or CPU on hosted Engine version 4.2.5.2
by Florian Schmid
Hi,
I'm using version 4.2.5.2, and in every article I found I read that with oVirt 4.2 it should be possible to update the CPU and memory of the hosted engine in the UI.
Now, when I try to do this, even while in global maintenance, I always get the same error:
Error while executing action:
HostedEngine:
There was an attempt to change Hosted Engine VM values that are locked.
My problem is that I need to increase the memory, because several times in the last 2 or 3 months we have had an out-of-memory kill of the Java process.
What is the best way to increase the memory here? Should I do it via the config file? But how would the OVF be updated then?
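For reference, a manual workaround that is often posted for this (treat it as a sketch, not a supported procedure; in particular it does not answer how the OVF_STORE copy gets updated) is to start the engine VM once from an edited copy of the runtime vm.conf:
# on the host running the engine VM
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
cp /var/run/ovirt-hosted-engine-ha/vm.conf /root/vm.conf
# edit /root/vm.conf and raise memSize (value is in MiB)
hosted-engine --vm-start --vm-conf=/root/vm.conf
hosted-engine --set-maintenance --mode=none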
BR Florian