We are graphing certain parameters we collect from the engine database.
We use the following query to get the count of running VMs per cluster:
select count(*), cluster.name as cluster from vms, cluster where
vms.cluster_id = cluster.cluster_id and vms.status = 1 group by cluster;
It would help us a great deal to know what the numeric VM status codes mean.
For example, I know that "status = 1" means the VM is running. The ones
we can cause through the web interface are easy enough to figure out.
But there are error states that cannot be produced through normal use.
For instance, when the event log says "VM foobar has been paused due to
storage I/O problem.", the following values get set in the DB
(query used: select * from vm_dynamic where vm_guid = (select vm_guid
from vm_static where vm_name = 'foobar');):
status | 4
exit_status | 0
pause_status | 2
guest_agent_status | 0
What do they mean?
If you could point me to a table somewhere with the state descriptions
it would be extremely helpful.
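For reference, as far as I can tell the numeric codes are not kept in a lookup table in the database at all but are defined in the engine source (the VMStatus enum in ovirt-engine), so from the examples above the only mappings I can confirm myself are 1 = running (Up) and 4 = paused. In the meantime, a small variation of the query above at least shows which codes actually occur per cluster, so each code can be correlated with what the Administration Portal displays for the same VMs:

  select cluster.name as cluster, vms.status, count(*)
  from vms, cluster
  where vms.cluster_id = cluster.cluster_id
  group by cluster.name, vms.status
  order by cluster.name, vms.status;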
Would it be possible to use the SSH public key from the user's configuration
in such a way that it is offered, or loaded automatically, into the cloud-init "SSH Authorized Keys" field when one exists?
This would mean one less action required before a VM can be created.
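In the meantime, the key can be injected explicitly when the VM is started with cloud-init. A minimal sketch with the Python SDK (ovirt-engine-sdk-python); the engine URL, credentials, VM id and key below are placeholders:

  import ovirtsdk4 as sdk
  import ovirtsdk4.types as types

  # Connect to the engine API (placeholder URL and credentials).
  connection = sdk.Connection(
      url='https://engine.example.com/ovirt-engine/api',
      username='admin@internal',
      password='password',
      insecure=True,
  )

  vm_service = connection.system_service().vms_service().vm_service('<vm-id>')

  # One-shot start with cloud-init, passing the user's public key into
  # the "SSH Authorized Keys" field of the initialization.
  vm_service.start(
      use_cloud_init=True,
      vm=types.Vm(
          initialization=types.Initialization(
              user_name='root',
              authorized_ssh_keys='ssh-rsa AAAA... user@host',
          ),
      ),
  )

  connection.close()

Having the portal pre-fill this field from the user's stored public key would indeed remove that manual step.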
Kindly awaiting your reply.
— — —
Met vriendelijke groet / Kind regards,
oVirt (and RHV, respectively) can be installed as:
1. VM in an oVirt Cluster
2. Separate machine(s)
With option 2, the virtualization admin has to take care of the high availability of the engine themselves.
So you can install CentOS 7 and then deploy the engine software on top of it.
Extracting the OVA will not give you a fully working engine, as the deployment scripts on the hosts will not inject the necessary data on a remote system (only on a local one).
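If it helps, the standalone route on a separate machine is roughly the following (a sketch for oVirt 4.3 on CentOS 7; adjust the release rpm for your version):

  yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
  yum install ovirt-engine
  engine-setup

engine-setup then walks you through the configuration interactively.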
Strahil Nikolov
On Jul 15, 2019 04:08, Jingjie Jiang <jingjie.jiang(a)oracle.com> wrote:
> On 7/13/2019 6:28 AM, Strahil Nikolov wrote:
> I still don't get why you need the OVA.
> Are you trying to extract it and put it into another virtualization platform? If that is your intention, it is better to install it as a standalone engine.
> Yes, I am trying to build an OVA with a different OS.
> What do you mean by installing it as a standalone engine?
> Install the VM first and then install the oVirt engine on it?
>> Best Regards,
>> Strahil Nikolov.
>> On Friday, July 12, 2019 at 21:48:35 GMT+3, Jingjie Jiang <jingjie.jiang(a)oracle.com> wrote:
>> Hi Strahil,
>> Thanks for your reply.
>> Can you share the procedure for creating the ovirt-engine-appliance OVA?
>> On 7/12/19 12:52 AM, Strahil wrote:
>> Based on my experience, the OVA contains the XML and the actual disk of the hosted engine.
>> The deployment then starts the VM locally and populates the necessary data in it.
>> Once that's over, the deployment shuts down that local VM, copies its disk, undefines it, and then oVirt's HA agents are configured so they can mount the shared storage and power up the VM (a special tar /OVMF/ file on the shared storage holds the agent configuration file).
>> So, in the OVA there should be a template VM plus the XML config (CPUs, RAM, devices, etc.).
>> I would be surprised if there is anything else in it.
>> Best Regards,
>> Strahil Nikolov
>> On Jul 11, 2019 23:39, Jingjie Jiang <jingjie.jiang(a)oracle.com> wrote:
We are relying on Templates more and more, which is great.
Images/templates built using the “official” cloud images usually have 1 CPU and 1 GB of RAM.
Using instance types via the Administration Portal works very well, since we can attach an instance type to a template to provide sufficient CPU and RAM, regardless of whether the VM is created via the UI or with Ansible.
Would it be possible to consider adding an Instance Type option to the "Create New VM" dialog in the VM Portal?
I haven't tested it yet, but is the instance type supported via the ovirt_vm Ansible module when the user only has end-user privileges?
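For reference, the REST API itself does accept an instance type when creating a VM from a template, so something along these lines should work via the Python SDK regardless of which portal is used (names and credentials below are placeholders; whether it also works with plain end-user permissions I have not verified):

  import ovirtsdk4 as sdk
  import ovirtsdk4.types as types

  connection = sdk.Connection(
      url='https://engine.example.com/ovirt-engine/api',
      username='user@internal',
      password='password',
      insecure=True,
  )

  system_service = connection.system_service()

  # Look up an existing instance type (e.g. the built-in "Medium") by name.
  instance_type = next(
      it for it in system_service.instance_types_service().list()
      if it.name == 'Medium'
  )

  # Create the VM from the template with that instance type attached.
  system_service.vms_service().add(
      types.Vm(
          name='myvm',
          cluster=types.Cluster(name='Default'),
          template=types.Template(name='my-cloud-template'),
          instance_type=types.InstanceType(id=instance_type.id),
      ),
  )

  connection.close()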
Kindly awaiting your reply.
— — —
Met vriendelijke groet / Kind regards,
Sr. System Engineer @ System Administration
tel. +31 (0)35 677 4131
Joop van den Endeplein 1
1217 WJ Hilversum
I'm working on migrating an existing ovirt setup to a new hosted-engine
setup and I've been seeing messages about iptables support being deprecated
and slated to be removed.
Can I continue using iptables to manage the firewalls on my ovirt hosts if
I don't care about allowing ovirt to configure the firewalls?
We manage all of our machines with puppet and iptables is deeply integrated
into this. It would be non-trivial to migrate to firewalld support.
As it stands I already manage the firewall rules for our ovirt hosts with
puppet and iptables and have always ignored the "Automatically Configure
Firewall" option when adding new hosts. Will this continue to work?
Also, with the hosted engine I had to cowboy-enable firewalld to get the engine
installed, but now that I've got a cluster up and running with the hosted
engine enabled on several hosts, can I just switch back from firewalld to
iptables, assuming I've got all the correct ports open?
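For what it's worth, the plain CentOS 7 way of switching a host back, assuming the required ports are already opened by Puppet, is roughly:

  systemctl disable --now firewalld
  yum install iptables-services
  systemctl enable --now iptables

Whether VDSM then leaves the rules alone when "Automatically Configure Firewall" is unchecked is exactly the question above.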
Based on my experience, the OVA contains the XML and the actual disk of the hosted engine.
The deployment then starts the VM locally and populates the necessary data in it.
Once that's over, the deployment shuts down that local VM, copies its disk, undefines it, and then oVirt's HA agents are configured so they can mount the shared storage and power up the VM (a special tar /OVMF/ file on the shared storage holds the agent configuration file).
So, in the OVA there should be a template VM plus the XML config (CPUs, RAM, devices, etc.).
I would be surprised if there is anything else in it.
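An OVA is just a tar archive, so its contents can be listed directly; for the appliance shipped in the rpm that would be something like:

  tar tvf /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.3-20190610.1.el7.ova

which should show the OVF/XML descriptor plus the disk image of the template VM.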
Strahil Nikolov
On Jul 11, 2019 23:39, Jingjie Jiang <jingjie.jiang(a)oracle.com> wrote:
> Hi Strahil,
> Yes, you are right.
> After installing the ovirt-engine-appliance rpm, the OVA file is saved at /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.3-20190610.1.el7.ova
> I was trying to understand what the OVA file includes.
> I thought it only contained CentOS 7.6.
> I observed that ovirt-engine was installed during "hosted-engine --deploy".
> Is ovirt-engine-appliance-4.3-20190610.1.el7.ova only used for deploying the hosted engine?
> Is there a document describing how to generate it?
> On 7/11/19 4:20 PM, Strahil Nikolov wrote:
> If I'm not mistaken, this rpm is downloaded to one of the hosts during the self-hosted engine deployment.
> Why would you try to import a second self-hosted engine?
> Best Regards,
> Strahil Nikolov
> On Thursday, July 11, 2019 at 22:37:56 GMT+3, <jingjie.jiang(a)oracle.com> wrote:
> Can someone tell me how to generate the ovirt-engine-appliance OVA file that ships in ovirt-engine-appliance-4.3-20190610.1.el7.x86_64.rpm?
> I tried to import the ovirt-engine-appliance OVA (ovirt-engine-appliance-4.3-20190610.1.el7.ova) from ovirt-engine, but I got the following error:
> Failed to load VM configuration from OVA file: /var/tmp/ovirt-engine-appliance-4.2-20190121.1.el7.ova
> I guess ovirt-engine-appliance-4.2-20190121.1.el7.ova contains more than just CentOS 7.6.
For several days I have been trying to install the hosted engine directly onto an iSCSI multipath device, without success.
Some information on the environment:
- oVirt version 4.3.3
- two 10 GbE interfaces as a single LACP bond for the ovirtmgmt interface
- two 10 GbE storage interfaces on each hypervisor for iSCSI storage
-- each storage interface is configured without any LACP bonding or 802.1Q tagging (the VLAN is configured on the switches; port-based VLAN)
-- each storage interface lives in a separate VLAN where the iSCSI target is also available; the iSCSI target has 4x 10 GbE interfaces
-- so each storage interface is connected to an iSCSI target through a different VLAN
The documentation is unclear to me here:
>Note: To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.
This suggests to me that it should be possible to install the HE directly on the /dev/mapper/mpath device that is available once I have prepared the host accordingly before installing the HE (log in to multiple iSCSI targets, create a proper multipath.conf, etc.) - right?
I log in to the two iSCSI targets and get 8 paths in total, 4 from each interface and iSCSI target.
I then have the mpath device available on the hypervisor, and I can mount it and put (test) data on it.
In the cockpit interface the mount can also be activated and is recognized correctly.
multipath -ll and lsblk look good. Everything seems to be fine.
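For reference, the manual preparation was done roughly along these lines (the portal IPs and target IQN below are placeholders):

  iscsiadm -m discovery -t sendtargets -p 10.0.10.1
  iscsiadm -m discovery -t sendtargets -p 10.0.20.1
  iscsiadm -m node -T iqn.2019-07.example:target1 -l
  multipath -ll    # one mpath device, 8 paths in total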
But when I run "hosted-engine --deploy", the last step of the assistant is to enter the iSCSI data.
So basically I just want to specify my mpath device - but when I enter the data for one of my iSCSI targets, I can see the 4 paths of that single target,
and when I choose the path where the "lun" is finally available, it fails. I think this option is in general not what I want
here for using the multipath device.
I'm lost - what is the usual way to install the HE on a multipath device?
Do I have to change the configuration of the storage interfaces or the iSCSI network design?
- Are bonded interfaces mandatory for these iSCSI connections?
Did I miss something obvious?
Can I put my multipath data into the answer file somehow, to get rid of the last step of the assistant?
Or is that not possible in general?
Sorry in advance for the long mail! ^^
I'd recommend avoiding DNS & DHCP unless your oVirt infra consists of hundreds of servers.
It is far more reliable to use static IPs + /etc/hosts.
As you can 'ssh' to the engine, check the logs - there should be a clue as to why it failed.
Most probably it's related to the DNS/IP used.
I think the devs can tell their opinion on Monday.
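For reference, the places I would check first (standard paths):

  # inside the engine VM
  less /var/log/ovirt-engine/engine.log
  # on the hosts
  less /var/log/ovirt-hosted-engine-ha/agent.log
  less /var/log/ovirt-hosted-engine-ha/broker.log
  hosted-engine --vm-status

As far as I know, the "liveliness check" is an HTTP probe of the engine's health page made from the host, so the engine FQDN has to resolve correctly from every host.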
Strahil Nikolov
On Jul 13, 2019 15:08, carl langlois <crl.langlois(a)gmail.com> wrote:
> Thanks for the info. There has been some progress with the situation. To keep the story as short as possible: we are in the process of changing our IP address range from 10.8.X.X to 10.16.X.X for all of the oVirt infra. This implies a new DHCP server, new switches, etc. For now we went back to our old IP address range because we were not able to stabilize the system.
> The last status using our new address range was that Gluster was all fine and the hosted engine domain was mounting okay. I suspect the DNS table was not properly updated, but I am not 100% sure. When we tried to use the new address range, everything seemed to be fine except that the hosted engine always fails the "liveliness check" after coming up. I was not able to solve this situation, so I went back to our previous DHCP server.
> So I am not sure what is missing for the hosted engine to use the new DHCP server. Is there any hardcoded config in the hosted engine that needs to be updated when changing DHCP servers (i.e. a new address with the same hostname, a new gateway, ...)?
> More info on the test I did with the new DHCP server: all nodes have name resolution working, and I am able to ssh to the hosted engine.
> Any suggestions would be appreciated, as I am out of ideas for now. Do I need to redo some sort of setup in the engine to take the new address range/new gateway into account? There is also LDAP server access configured in the engine for username mapping.
> On Sat, Jul 13, 2019 at 6:31 AM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>> Can you mount the volume manually at another location?
>> Also, have you made any changes to Gluster?
>> Please provide "gluster volume info engine" . I have noticed the following in your logs: option 'parallel-readdir' is not recognized
>> Best Regards,
>> Strahil Nikolov
>> On Friday, July 12, 2019 at 22:30:41 GMT+3, carl langlois <crl.langlois(a)gmail.com> wrote:
>> Hi ,
>> I am in a state where my system does not recover from a major failure. I have pinpointed the problem: the hosted engine storage domain is not able to mount.
>> I have a GlusterFS volume containing the storage domain, but when it attempts to mount it to /rhev/data-center/mnt/glusterSD/ovhost1:_engine I get
>> [2019-07-12 19:19:44.063608] I [rpc-clnt.c:1986:rpc_clnt_reconfig] 0-engine-client-2: changing port to 49153 (from 0)
>> [2019-07-12 19:19:55.033725] I [fuse-bridge.c:4205:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22
>> [2019-07-12 19:19:55.033748] I [fuse-bridge.c:4835:fuse_graph_sync] 0-fuse: switched to graph 0
>> [2019-07-12 19:19:55.033895] I [MSGID: 108006] [afr-common.c:537
I have read the archives, and the most recent discussion was 5 years ago, so I had better ask again.
My NAS runs CentOS with NFS 4.2 (and I am testing Ganesha on another server).
Does oVirt 4.3.4 have support for NFS 4/4.1/4.2 or pNFS?
Especially version 4.2, due to:
Server-Side Copy: NFSv4.2 supports the copy_file_range() system call, which allows the NFS client to efficiently copy data without wasting network resources.
But this will only happen if oVirt (which I know is CentOS based) supports NFS 4.2.
I am not sure whether updating the NFS toolset on the oVirt install would break something, or worse.
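For what it's worth, one way to see what actually gets negotiated is to check the mount options on a host once the storage domain is attached, for example:

  nfsstat -m
  # or
  mount | grep -i nfs

The "vers=" field in the output shows whether the export was mounted with NFS 3, 4.0, 4.1 or 4.2.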