Add Instance_type option for VM Portal
by Vrgotic, Marko
Dear oVirt,
We are relying on Templates more and more, which is great.
Images/templates built using “official” cloud images usually have 1 CPU and 1 GB of RAM.
Using this feature via the Administration Portal is great, since we can attach an Instance_Type to a template to provide sufficient CPU and RAM, whether we work through the UI or through Ansible.
Would it be possible to consider adding an Instance_Type option to the Create New VM dialog in the VM Portal?
I haven't tested it yet, but is Instance_Type supported via the ovirt_vm Ansible module when running with EndUser privileges?
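For reference, this is the kind of task we would like an EndUser account to be able to run - just a sketch on my side, assuming the module's instance_type parameter behaves the way I hope; the engine URL, credentials, template and instance type names below are only placeholders:

  - hosts: localhost
    connection: local
    gather_facts: false
    tasks:
      - name: Obtain an SSO token (placeholder engine URL and user)
        ovirt_auth:
          url: https://engine.example.com/ovirt-engine/api
          username: someuser@internal
          password: "{{ engine_password }}"
          insecure: true   # or point ca_file at the engine CA

      - name: Create a VM from a template, sized via an instance type
        ovirt_vm:
          auth: "{{ ovirt_auth }}"
          name: test-vm-01
          cluster: Default
          template: centos-cloud-template
          instance_type: Medium   # <- the option we would like to use as EndUser
          state: present

      - name: Revoke the SSO token
        ovirt_auth:
          state: absent
          ovirt_auth: "{{ ovirt_auth }}"

If something like that already works with a plain UserRole account, it would at least give us a workaround until the VM Portal exposes the option.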
Kindly awaiting your reply.
— — —
Met vriendelijke groet / Kind regards,
Marko Vrgotic
Sr. System Engineer @ System Administration
m.vrgotic(a)activevideo.com
tel. +31 (0)35 677 4131
ActiveVideo BV
Mediacentrum 3741
Joop van den Endeplein 1
1217 WJ Hilversum
iptables with 4.3+?
by Jordan Conway
Hello,
I'm working on migrating an existing ovirt setup to a new hosted-engine
setup and I've been seeing messages about iptables support being deprecated
and slated to be removed.
Can I continue using iptables to manage the firewalls on my ovirt hosts if
I don't care about allowing ovirt to configure the firewalls?
We manage all of our machines with puppet and iptables is deeply integrated
into this. It would be non-trivial to migrate to firewalld support.
As it stands I already manage the firewall rules for our ovirt hosts with
puppet and iptables and have always ignored the "Automatically Configure
Firewall" option when adding new hosts. Will this continue to work?
Also with hosted engine, I had to cowboy enable firewalld to get the engine
installed, but now that I've got a cluster up and running with hosted
engine enabled on several hosts, can I just switch back from firewalld to
iptables assuming I've got all the correct ports open?
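In case it helps frame the question, this is the kind of switch I have in mind - purely a sketch, assuming CentOS 7 hosts with the iptables-services package available and that puppet already opens every port oVirt needs:

  yum install -y iptables-services     # provides the iptables systemd units
  systemctl stop firewalld
  systemctl disable firewalld
  systemctl enable --now iptables
  iptables -L -n                       # sanity-check the puppet-managed ruleset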
Thank you,
Jordan Conway
Re: ovirt-engine-appliance ova
by Strahil
Based on my experience - the OVA contains the xml and the actual disk of the hosted engine.
Then the deployment starts the VM locally and populates the necessary data in it.
Once that is over, the deployment shuts the local VM down, copies its disk, undefines it, and then oVirt's HA agents are configured - so they can mount the shared storage and power up the VM (a special tar /OVMF/ file on the shared storage holds the agent configuration).
So, in the OVA there should be a template VM + the xml config (cpus, ram, devices, etc).
I would be surprised if there is something else in it.
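You can check that yourself - an OVA is normally just a tar archive, so something like this (using the path from the appliance rpm mentioned below) should list the OVF/XML descriptor and the disk image:

  tar -tvf /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.3-20190610.1.el7.ova
  # use 'x' instead of 't' to extract it somewhere with enough free space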
Best Regards,
Strahil Nikolov

On Jul 11, 2019 23:39, Jingjie Jiang <jingjie.jiang(a)oracle.com> wrote:
>
> Hi Strahil,
>
> Yes, you are right.
>
> After installing the ovirt-engine-appliance rpm, the OVA file will be saved at /usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.3-20190610.1.el7.ova
>
> I was trying to understand what the OVA file includes.
>
> I thought it only contained CentOS 7.6.
>
> I observed that ovirt-engine was installed during "hosted-engine --deploy".
>
> Is ovirt-engine-appliance-4.3-20190610.1.el7.ova only used for deploying the hosted engine?
>
> Is there a document about how to generate it?
>
>
> Thanks,
>
> Jingjie
>
>
> On 7/11/19 4:20 PM, Strahil Nikolov wrote:
>
> If I'm not wrong, this rpm is being downloaded to one of the hosts during self-hosted engine's deployment.
> Why would you try to import a second self-hosted engine?
>
> Best Regards,
> Strahil Nikolov
>
>
> On Thursday, July 11, 2019 at 22:37:56 GMT+3, <jingjie.jiang(a)oracle.com> wrote:
>
>
> Hi,
> Can someone tell me how to generate the ovirt-engine-appliance OVA file found in ovirt-engine-appliance-4.3-20190610.1.el7.x86_64.rpm?
> I tried to import the ovirt-engine-appliance OVA (ovirt-engine-appliance-4.3-20190610.1.el7.ova) from ovirt-engine, but I got an error like the following:
> Failed to load VM configuration from OVA file: /var/tmp/ovirt-engine-appliance-4.2-20190121.1.el7.ova
>
> I guess ovirt-engine-appliance-4.2-20190121.1.el7.ova contains more than CentOS 7.6.
>
> Thanks,
> Jingjie
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/EP2BMVXRXUM...
hosted engine installation on iscsi multipath/mpath device fails
by Michael Frank
Dear all,
for several days I have been trying to install the hosted engine onto an iSCSI multipath device, without success.
Some information on the environment:
- oVirt version 4.3.3
- using two 10 GbE interfaces as a single LACP bond for the ovirtmgmt interface
- using two 10 GbE storage interfaces on each hypervisor for iSCSI storage
-- each storage interface is configured without any LACP bonding or 802.1Q tagging, etc. (the VLAN is configured on the switches; port-based VLAN)
-- each storage interface lives in a separate VLAN where the iSCSI target is also available; the iSCSI target has 4x 10 GbE interfaces
-- so each storage interface is connected to an iSCSI target through a different VLAN
The documentation here is unclear to me:
https://ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_En...
> Note: To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.
This indicates to me that it should be possible to install the HE directly on the /dev/mapper/mpath device, which is available when I have prepared the host accordingly before installing the HE (log in to multiple iSCSI targets, create a proper multipath.conf, etc.) - right?
I log in to the two iSCSI targets and get 8 paths in total, 4 from each interface and iSCSI target.
The mpath device is then available on the hypervisor, and I can mount it and put (test) data on it.
In the cockpit interface the mount can also be activated and is recognized correctly.
multipath -ll and lsblk look good. Everything seems to be fine.
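For completeness, this is roughly how the host was prepared before running the deploy - a sketch of the commands, with the portal IPs as placeholders for my two storage VLANs:

  # discover and log in to the target from both storage interfaces/VLANs
  iscsiadm -m discovery -t sendtargets -p 10.10.10.100
  iscsiadm -m discovery -t sendtargets -p 10.10.20.100
  iscsiadm -m node -L all

  # verify that all 8 paths end up under one mpath device
  multipath -ll
  lsblk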
But when I run "hosted-engine --deploy", the last option in the assistant is to enter the iSCSI data.
So basically I just want to define my mpath device - but when entering the data for one of my iSCSI targets I can only see the 4 paths of that single target,
and when I choose the path where the "lun" is finally available, it fails. I think this option is in general not what I want to use
here for the multipath device.
I'm lost - what is the usual way to install the HE on a multipath device?
Do I have to change the configuration of the storage interfaces or the iSCSI network design?
- Are bonded interfaces mandatory for these iSCSI connections?
Did I miss something obvious?
Can I put my multipath data into the answer file somehow, to get rid of the last step of the assistant?
Or is it not possible in general? :
https://bugzilla.redhat.com/show_bug.cgi?id=1193961
Sorry in advance for the long mail ... ^^
br,
michael
Re: System unable to recover after a crash
by Strahil
Hi Carl,
I'd recommend avoiding DNS & DHCP unless your oVirt infra consists of hundreds of servers.
It is far more reliable to use static IPs + /etc/hosts.
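Something as small as this on every host (and inside the engine VM) is usually enough - the names and addresses below are only an example:

  # /etc/hosts - keep it identical on all hosts and on the engine VM
  10.8.0.10   ovengine.example.local   ovengine
  10.8.0.11   ovhost1.example.local    ovhost1
  10.8.0.12   ovhost2.example.local    ovhost2
  10.8.0.13   ovhost3.example.local    ovhost3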
As you could 'ssh' to the engine, check the logs - there should be a clue why it failed.
Most probably it's related to the DNS/IP used.
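A few places worth looking at (these are the standard log locations - adjust if yours differ):

  # inside the engine VM
  less /var/log/ovirt-engine/engine.log
  journalctl -u ovirt-engine --since "1 hour ago"

  # on the hosts, the hosted-engine HA agent/broker logs
  less /var/log/ovirt-hosted-engine-ha/agent.log
  less /var/log/ovirt-hosted-engine-ha/broker.log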
I think the devs can tell their opinion on Monday.
Best Regards,
Strahil Nikolov

On Jul 13, 2019 15:08, carl langlois <crl.langlois(a)gmail.com> wrote:
>
> Hi
> Thanks for the info. There has been some progress with the situation. To make the story as short as possible, we are in the process of changing our IP address range from 10.8.X.X to 10.16.X.X for all of the oVirt infra. This implies a new DHCP server, new switches, etc. For now we went back to our old IP address ranges because we were not able to stabilize the system.
>
> So the last status using our new range of addresses was that gluster was all fine and the hosted engine domain was mounting okay. I suspect the DNS table was not properly updated, but I am not 100% sure. When we try to use the new range of addresses, everything seems fine except that the hosted engine always fails the "liveliness check" after coming up. I was not able to solve this situation, so I went back to our previous DHCP server.
>
> So I am not sure what is missing for the hosted engine to use the new DHCP server. Is there any hardcoded config in the hosted engine that needs to be updated when changing the DHCP server (i.e. new address with the same hostname, new gateway, ...)?
>
> More info on the tests I did with the new DHCP server: all nodes have name resolution working, and I am able to ssh to the hosted engine.
>
> Any suggestions will be appreciated, as I am out of ideas for now. Do I need to redo some sort of setup in the engine to take the new address range/new gateway into account? There is also LDAP server access configured in the engine for username mapping.
> Carl
>
>
>
>
> On Sat, Jul 13, 2019 at 6:31 AM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>>
>> Can you mount the volume manually at another location?
>> Also, have you done any changes to Gluster?
>>
>> Please provide "gluster volume info engine". I have noticed the following in your logs: option 'parallel-readdir' is not recognized
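>> Something along these lines would do (the mount point is just an example):
>>
>>   mkdir -p /mnt/engine-test
>>   mount -t glusterfs ovhost1:/engine /mnt/engine-test
>>   ls /mnt/engine-test
>>
>>   gluster volume info engine
>>   gluster volume status engine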
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Friday, July 12, 2019 at 22:30:41 GMT+3, carl langlois <crl.langlois(a)gmail.com> wrote:
>>
>>
>> Hi ,
>>
>> I am in a state where my system does not recover from a major failure. I have pinpointed the problem: the hosted engine storage domain is not able to mount.
>>
>> I have a glusterfs volume containing the storage domain, but when it attempts to mount the glusterfs volume to /rhev/data-center/mnt/glusterSD/ovhost1:_engine I get
>>
>> +------------------------------------------------------------------------------+
>> [2019-07-12 19:19:44.063608] I [rpc-clnt.c:1986:rpc_clnt_reconfig] 0-engine-client-2: changing port to 49153 (from 0)
>> [2019-07-12 19:19:55.033725] I [fuse-bridge.c:4205:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22
>> [2019-07-12 19:19:55.033748] I [fuse-bridge.c:4835:fuse_graph_sync] 0-fuse: switched to graph 0
>> [2019-07-12 19:19:55.033895] I [MSGID: 108006] [afr-common.c:537
Does Ovirt 4.3.4 have support for NFS 4/4.1/4.2 or pNFS
by Erick Perez
I have read the archives and the most recent discussion was 5 years ago. So I better ask again.
My NAS runs CentOS with NFS 4.2 (and I am testing Ganesha on another server).
Does oVirt 4.3.4 have support for NFS 4/4.1/4.2 or pNFS?
Especially version 4.2, due to:
Server-Side Copy: NFSv4.2 supports the copy_file_range() system call, which allows the NFS client to efficiently copy data without wasting network resources.
But this will only happen if oVirt (which I know is CentOS based) supports NFS 4.2.
I am not sure whether updating the NFS toolset on the oVirt install will break something, or worse.
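If it matters for the answer: the way I would verify what actually gets negotiated is to mount the export by hand and check the client side - just a sketch, with the server name and export path as placeholders:

  mkdir -p /mnt/nfstest
  mount -t nfs -o vers=4.2 nas.example.local:/export/ovirt /mnt/nfstest
  nfsstat -m     # the vers= field shows the NFS version the client negotiated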
System unable to recover after a crash
by carl langlois
Hi ,
I am in a state where my system does not recover from a major failure. I have
pinpointed the problem: the hosted engine storage domain is not
able to mount.
I have a glusterfs volume containing the storage domain, but when it attempts to
mount the glusterfs volume to /rhev/data-center/mnt/glusterSD/ovhost1:_engine I get
+------------------------------------------------------------------------------+
[2019-07-12 19:19:44.063608] I [rpc-clnt.c:1986:rpc_clnt_reconfig]
0-engine-client-2: changing port to 49153 (from 0)
[2019-07-12 19:19:55.033725] I [fuse-bridge.c:4205:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel
7.22
[2019-07-12 19:19:55.033748] I [fuse-bridge.c:4835:fuse_graph_sync] 0-fuse:
switched to graph 0
[2019-07-12 19:19:55.033895] I [MSGID: 108006]
[afr-common.c:5372:afr_local_init] 0-engine-replicate-0: no subvolumes up
[2019-07-12 19:19:55.033938] E [fuse-bridge.c:4271:fuse_first_lookup]
0-fuse: first lookup on root failed (Transport endpoint is not connected)
[2019-07-12 19:19:55.034041] W [fuse-resolve.c:132:fuse_resolve_gfid_cbk]
0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport
endpoint is not connected)
[2019-07-12 19:19:55.034060] E [fuse-bridge.c:900:fuse_getattr_resume]
0-glusterfs-fuse: 2: GETATTR 1 (00000000-0000-0000-0000-000000000001)
resolution failed
[2019-07-12 19:19:55.034095] W [fuse-resolve.c:132:fuse_resolve_gfid_cbk]
0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport
endpoint is not connected)
[2019-07-12 19:19:55.034102] E [fuse-bridge.c:900:fuse_getattr_resume]
0-glusterfs-fuse: 3: GETATTR 1 (00000000-0000-0000-0000-000000000001)
resolution failed
[2019-07-12 19:19:55.035596] W [fuse-resolve.c:132:fuse_resolve_gfid_cbk]
0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport
endpoint is not connected)
[2019-07-12 19:19:55.035611] E [fuse-bridge.c:900:fuse_getattr_resume]
0-glusterfs-fuse: 4: GETATTR 1 (00000000-0000-0000-0000-000000000001)
resolution failed
[2019-07-12 19:19:55.047957] I [fuse-bridge.c:5093:fuse_thread_proc]
0-fuse: initating unmount of /rhev/data-center/mnt/glusterSD/ovhost1:_engine
The message "I [MSGID: 108006] [afr-common.c:5372:afr_local_init]
0-engine-replicate-0: no subvolumes up" repeated 3 times between
[2019-07-12 19:19:55.033895] and [2019-07-12 19:19:55.035588]
[2019-07-12 19:19:55.048138] W [glusterfsd.c:1375:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7e25) [0x7f51cecb3e25]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x5632143bd4b5]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5632143bd32b] ) 0-:
received signum (15), shutting down
[2019-07-12 19:19:55.048150] I [fuse-bridge.c:5852:fini] 0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/ovhost1:_engine'.
[2019-07-12 19:19:55.048155] I [fuse-bridge.c:5857:fini] 0-fuse: Closing
fuse connection to '/rhev/data-center/mnt/glusterSD/ovhost1:_engine'.
[2019-07-12 19:19:56.029923] I [MSGID: 100030] [glusterfsd.c:2511:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.12.11
(args: /usr/sbin/glusterfs --volfile-server=ovhost1
--volfile-server=ovhost2 --volfile-server=ovhost3 --volfile-id=/engine
/rhev/data-center/mnt/glusterSD/ovhost1:_engine)
[2019-07-12 19:19:56.032209] W [MSGID: 101002]
[options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is
deprecated, preferred is 'transport.address-family', continuing with
correction
[2019-07-12 19:19:56.037510] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2019-07-12 19:19:56.039618] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 2
[2019-07-12 19:19:56.039691] W [MSGID: 101174]
[graph.c:363:_log_if_unknown_option] 0-engine-readdir-ahead: option
'parallel-readdir' is not recognized
[2019-07-12 19:19:56.039739] I [MSGID: 114020] [client.c:2360:notify]
0-engine-client-0: parent translators are ready, attempting connect on
transport
[2019-07-12 19:19:56.043324] I [MSGID: 114020] [client.c:2360:notify]
0-engine-client-1: parent translators are ready, attempting connect on
transport
[2019-07-12 19:19:56.043481] I [rpc-clnt.c:1986:rpc_clnt_reconfig]
0-engine-client-0: changing port to 49153 (from 0)
[2019-07-12 19:19:56.048539] I [MSGID: 114020] [client.c:2360:notify]
0-engine-client-2: parent translators are ready, attempting connect on
transport
[2019-07-12 19:19:56.048952] I [rpc-clnt.c:1986:rpc_clnt_reconfig]
0-engine-client-1: changing port to 49153 (from 0)
Final graph:
Without this mount point the ha-agent is not starting.
The volumes seem to be okay:
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovhost1:/gluster_bricks/data/data        49152     0          Y       7505
Brick ovhost2:/gluster_bricks/data/data        49152     0          Y       3640
Brick ovhost3:/gluster_bricks/data/data        49152     0          Y       6329
Self-heal Daemon on localhost                  N/A       N/A        Y       7712
Self-heal Daemon on ovhost2                    N/A       N/A        Y       4925
Self-heal Daemon on ovhost3                    N/A       N/A        Y       6501

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: engine
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovhost1:/gluster_bricks/engine/engine    49153     0          Y       7514
Brick ovhost2:/gluster_bricks/engine/engine    49153     0          Y       3662
Brick ovhost3:/gluster_bricks/engine/engine    49153     0          Y       6339
Self-heal Daemon on localhost                  N/A       N/A        Y       7712
Self-heal Daemon on ovhost2                    N/A       N/A        Y       4925
Self-heal Daemon on ovhost3                    N/A       N/A        Y       6501

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: iso
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovhost1:/gluster_bricks/iso/iso          49154     0          Y       7523
Brick ovhost2:/gluster_bricks/iso/iso          49154     0          Y       3715
Brick ovhost3:/gluster_bricks/iso/iso          49154     0          Y       6349
Self-heal Daemon on localhost                  N/A       N/A        Y       7712
Self-heal Daemon on ovhost2                    N/A       N/A        Y       4925
Self-heal Daemon on ovhost3                    N/A       N/A        Y       6501

Task Status of Volume iso
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vmstore
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovhost1:/gluster_bricks/vmstore/vmstore  49155     0          Y       7532
Brick ovhost2:/gluster_bricks/vmstore/vmstore  49155     0          Y       3739
Brick ovhost3:/gluster_bricks/vmstore/vmstore  49155     0          Y       6359
Self-heal Daemon on localhost                  N/A       N/A        Y       7712
Self-heal Daemon on ovhost2                    N/A       N/A        Y       4925
Self-heal Daemon on ovhost3                    N/A       N/A        Y       6501

Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks
I am not sure what to look for at this point
Any help would be really appreciated.
Thanks
Carl
"Actual timezone in guest doesn't match configuration" for Windows VMs since guest agent 4.3
by Matthias Leopold
Hi,
the oVirt guest agent seems to report the DST configuration for the timezone
since version 4.3 (of the guest agent). This results in "Actual timezone
in guest doesn't match configuration" messages in the UI for Windows VMs,
because the timezone field can't be matched with the oVirt configuration
anymore (no DST flag). To me this looks like a bug. Shall I report it?
I know there was a similar thread at the beginning of May, but there was
no solution mentioned.
Matthias
An error has occurred during installation of Host
by lliujd@gmail.com
Hello everyone,
I have installed an 'oVirt Engine' by 'yum install ovirt-engine' and the engine runs fine after 'engine-setup'.
But when I try to add a node host, some errors occur:
1st:
An error has occurred during installation of Host node02: Yum Cannot queue package dmidecode: 'ascii' codec can't encode characters in position 143-147: ordinal not in range(128).
2nd:
An error has occurred during installation of Host node02: Failed to execute stage 'Environment packages setup': 'ascii' codec can't encode characters in position 143-147: ordinal not in range(128).
3rd:
Host node02 installation failed. Command returned failure code 1 during SSH session 'root@node02'.
On host node02, package dmidecode has been installed:
rpm -qa | grep dmidecode
python-dmidecode-3.12.2-3.el7.x86_64
dmidecode-3.1-2.el7.x86_64
Can anyone help? Thanks!
Update 4.2.8 --> 4.3.5
by Christoph Köhler
Hello!
We have a 4.2.8 environment with some managed gluster-volumes as storage
domains and we want to go up to 4.3.5.
What is the procedure, especially with the gluster nodes in oVirt that
are running 3.12.15? My fear is about the jump to Gluster 6. Does the cluster
still work if the first node (of three) is upgraded? And what about the
sequence - the hypervisors first, or the gluster nodes first?
Is there anyone who has done this?
Greetings!
Christoph Köhler