Ovirt Node 4.3 Gluster Install adding bricks
by pollard@tx.net
Sorry if this seems simple, but trial and error is how I learn. So the basics: I installed Node 4.3 on 3 hosts and was following the setup for a self-hosted engine. The setup fails when detecting peers, indicating that they are already part of a cluster. So I restarted the install and chose to install the engine on a single node with Gluster. I now have all three nodes connected to the oVirt engine, but my trouble is that I don't understand how to add the disks of the other two nodes to GlusterFS. I also have some extra disks on my primary node that I want to add. I believe the procedure is as follows, but I don't want to mess this up.
Under Compute >> Hosts >> $HOSTNAME
Select Storage Devices
Select the disk (in my case sde)
Then create a brick?
If this is the case, do I add it to an LV? If so, which one: engine, data, or vmstore?
Do I repeat this for each host?
I can duplicate the engine, data, and vmstore on each one, but that will still leave me with two disks on each node without assignment.
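From what I have read, the CLI equivalent would be roughly the following (an untested sketch; the device, host, volume, and mount names are only my guesses):

# Prepare the extra disk as an XFS brick (run on the host that owns it)
pvcreate /dev/sde
vgcreate gluster_vg_sde /dev/sde
lvcreate -l 100%FREE -n gluster_lv_sde gluster_vg_sde
mkfs.xfs /dev/gluster_vg_sde/gluster_lv_sde
mkdir -p /gluster_bricks/extra
mount /dev/gluster_vg_sde/gluster_lv_sde /gluster_bricks/extra
mkdir /gluster_bricks/extra/vmstore
# Grow an existing replica-3 volume: bricks must be added in sets of 3, one per host
gluster volume add-brick vmstore replica 3 \
    ovirt01:/gluster_bricks/extra/vmstore \
    ovirt02:/gluster_bricks/extra/vmstore \
    ovirt03:/gluster_bricks/extra/vmstore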
If anyone can help, that would be great.
Thank you.
Pollard
Re: "gluster-ansible-roles is not installed on Host" error on Cockpit
by Strahil
Check if you have a repo called sac-gluster-ansible.
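Something like this should list it (from memory, so the exact repo id may differ):

yum repolist all | grep -i sac-gluster-ansible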
Best Regards,
Strahil Nikolov

On Mar 10, 2019 08:21, Hesham Ahmed <hsahmed(a)gmail.com> wrote:
>
> On a new 4.3.1 oVirt Node installation, when trying to deploy HCI
> (also when trying to add a new gluster volume to existing clusters)
> using Cockpit, an error is displayed "gluster-ansible-roles is not
> installed on Host. To continue deployment, please install
> gluster-ansible-roles on Host and try again". There is no package
> named gluster-ansible-roles in the repositories:
>
> [root@localhost ~]# yum install gluster-ansible-roles
> Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
> package_upload, product-id, search-disabled-repos,
> subscription-manager, vdsmupgrade
> This system is not registered with an entitlement server. You can use
> subscription-manager to register.
> Loading mirror speeds from cached hostfile
> * ovirt-4.3-epel: mirror.horizon.vn
> No package gluster-ansible-roles available.
> Error: Nothing to do
> Uploading Enabled Repositories Report
> Cannot upload enabled repos report, is this client registered?
>
> This is due to a check introduced here:
> https://gerrit.ovirt.org/#/c/98023/1/dashboard/src/helpers/AnsibleUtil.js
>
> Changing the line from:
> [ "rpm", "-qa", "gluster-ansible-roles" ], { "superuser":"require" }
> to
> [ "rpm", "-qa", "gluster-ansible" ], { "superuser":"require" }
> resolves the issue. The above code snippet is installed at
> /usr/share/cockpit/ovirt-dashboard/app.js on oVirt node and can be
> patched by running "sed -i 's/gluster-ansible-roles/gluster-ansible/g'
> /usr/share/cockpit/ovirt-dashboard/app.js && systemctl restart
> cockpit"
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/243QJOXO2KT...
gdeployConfig.conf errors (Hyperconverged setup using GUI)
by adrianquintero@gmail.com
Hello, I am trying to run a hyperconverged setup ("Configure gluster storage and oVirt hosted engine"), however I get the following error:
__________________________________________________________________________________________________
PLAY [gluster_servers] *********************************************************
TASK [Create LVs with specified size for the VGs] ******************************
failed: [ovirt01.grupokino.com] (item={u'lv': u'gluster_thinpool_sdb', u'size': u'45GB', u'extent': u'100%FREE', u'vg': u'gluster_vg_sdb'}) => {"changed": false, "item": {"extent": "100%FREE", "lv": "gluster_thinpool_sdb", "size": "45GB", "vg": "gluster_vg_sdb"}, "msg": "lvcreate: metadata/pv_map.c:198: consume_pv_area: Assertion `to_go <= pva->count' failed.\n", "rc": -6}
to retry, use: --limit @/tmp/tmpwo4SNB/lvcreate.retry
PLAY RECAP *********************************************************************
ovirt01.grupokino.com : ok=0 changed=0 unreachable=0 failed=1
__________________________________________________________________________________________________
I know that the oVirt Hosted Engine Setup GUI's gluster wizard (gluster deployment) does not populate the gdeployConfig.conf file properly (generated gdeploy configuration: /var/lib/ovirt-hosted-engine-setup/gdeploy/gdeployConfig.conf), so I have tried to modify it to fit our needs, but I keep getting the above error every time.
Any ideas or comments are welcome. Thanks!
My servers are set up with 4x50GB disks: 1 for the OS and the other 3 for the Gluster hyperconverged setup (a manual sizing check is sketched after the config below).
__________________________________________________________________________________________________
my gdeployConfig.conf file:
__________________________________________________________________________________________________
#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
ovirt01.mydomain.com
ovirt02.mydomain.com
ovirt03.mydomain.com
[script1:ovirt01.mydomain.com]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb,sdc,sdd -h ovirt01.mydomain.com, ovirt02.mydomain.com, ovirt03.mydomain.com
[script1:ovirt02.mydomain.com]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb,sdc,sdd -h ovirt01.mydomain.com, ovirt02.mydomain.com, ovirt03.mydomain.com
[script1:ovirt03.mydomain.com]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb,sdc,sdd -h ovirt01.mydomain.com, ovirt02.mydomain.com, ovirt03.mydomain.com
[disktype]
jbod
[diskcount]
3
[stripesize]
256
[service1]
action=enable
service=chronyd
[service2]
action=restart
service=chronyd
[shell2]
action=execute
command=vdsm-tool configure --force
[script3]
action=execute
file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
ignore_script_errors=no
[pv1:ovirt01.mydomain.com]
action=create
devices=sdb
ignore_pv_errors=no
[pv1:ovirt02.mydomain.com]
action=create
devices=sdb
ignore_pv_errors=no
[pv1:ovirt03.mydomain.com]
action=create
devices=sdb
ignore_pv_errors=no
[pv2:ovirt01.mydomain.com]
action=create
devices=sdc
ignore_pv_errors=no
[pv2:ovirt02.mydomain.com]
action=create
devices=sdc
ignore_pv_errors=no
[pv2:ovirt03.mydomain.com]
action=create
devices=sdc
ignore_pv_errors=no
[pv3:ovirt01.mydomain.com]
action=create
devices=sdd
ignore_pv_errors=no
[pv3:ovirt02.mydomain.com]
action=create
devices=sdd
ignore_pv_errors=no
[pv3:ovirt03.mydomain.com]
action=create
devices=sdd
ignore_pv_errors=no
[vg1:ovirt01.mydomain.com]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no
[vg1:ovirt02.mydomain.com]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no
[vg1:ovirt03.mydomain.com]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no
[vg2:ovirt01.mydomain.com]
action=create
vgname=gluster_vg_sdc
pvname=sdc
ignore_vg_errors=no
[vg2:ovirt02.mydomain.com]
action=create
vgname=gluster_vg_sdc
pvname=sdc
ignore_vg_errors=no
[vg2:ovirt03.mydomain.com]
action=create
vgname=gluster_vg_sdc
pvname=sdc
ignore_vg_errors=no
[vg3:ovirt01.mydomain.com]
action=create
vgname=gluster_vg_sdd
pvname=sdd
ignore_vg_errors=no
[vg3:ovirt02.mydomain.com]
action=create
vgname=gluster_vg_sdd
pvname=sdd
ignore_vg_errors=no
[vg3:ovirt03.mydomain.com]
action=create
vgname=gluster_vg_sdd
pvname=sdd
ignore_vg_errors=no
[lv1:ovirt01.mydomain.com]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=45GB
poolmetadatasize=3GB
[lv2:ovirt02.mydomain.com]
action=create
poolname=gluster_thinpool_sdc
ignore_lv_errors=no
vgname=gluster_vg_sdc
lvtype=thinpool
size=45GB
poolmetadatasize=3GB
[lv3:ovirt03.mydomain.com]
action=create
poolname=gluster_thinpool_sdd
ignore_lv_errors=no
vgname=gluster_vg_sdd
lvtype=thinpool
size=45GB
poolmetadatasize=3GB
[lv4:ovirt01.mydomain.com]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=42GB
lvtype=thick
[lv5:ovirt01.mydomain.com]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdc
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdc
virtualsize=42GB
[lv6:ovirt01.mydomain.com]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdd
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdd
virtualsize=42GB
[lv7:ovirt02.mydomain.com]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=42GB
lvtype=thick
[lv8:ovirt02.mydomain.com]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdc
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdc
virtualsize=42GB
[lv9:ovirt02.mydomain.com]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdd
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdd
virtualsize=42GB
[lv10:ovirt03.mydomain.com]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=42GB
lvtype=thick
[lv11:ovirt03.mydomain.com]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdc
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdc
virtualsize=42GB
[lv12:ovirt03.mydomain.com]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdd
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdd
virtualsize=42GB
[selinux]
yes
[service3]
action=restart
service=glusterd
slice_setup=yes
[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
services=glusterfs
[script2]
action=execute
file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh
[shell3]
action=execute
command=usermod -a -G gluster qemu
[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ovirt01.mydomain.com:/gluster_bricks/engine/engine,ovirt02.mydomain.com:/gluster_bricks/engine/engine,ovirt03.mydomain.com:/gluster_bricks/engine/engine
ignore_volume_errors=no
[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ovirt01.mydomain.com:/gluster_bricks/data/data,ovirt02.mydomain.com:/gluster_bricks/data/data,ovirt03.mydomain.com:/gluster_bricks/data/data
ignore_volume_errors=no
[volume3]
action=create
volname=vmstore
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=ovirt01.mydomain.com:/gluster_bricks/vmstore/vmstore,ovirt02.mydomain.com:/gluster_bricks/vmstore/vmstore,ovirt03.mydomain.com:/gluster_bricks/vmstore/vmstore
ignore_volume_errors=no
---------------------------------------------------------------------------------------------------
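To rule out a plain sizing problem, the free space in each VG can be checked by hand before the thinpool step (a sketch; device and section names as in the config above):

# Report PV/VG/LV sizes in GiB on each host
vgs --units g
lvs --units g
# Note what the config asks for on gluster_vg_sdb of ovirt01 alone:
# 45GB thinpool + 3GB pool metadata (lv1) + 42GB thick engine LV (lv4)
# = 90GB requested from a single 50GB disk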
alertMessage, [Warning! Low confirmed free space on gluster volume M2Stick1]
by Robert O'Kane
Hello,
With my first Gluster storage domain made in oVirt, I am getting some annoying warnings.
On the Hypervisor(s) engine.log :
2019-03-05 13:07:45,281+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler5) [59957167] START,
GetGlusterVolumeAdvancedDetailsVDSCommand(HostName = Hausesel3, GlusterVolumeAdvancedDetailsVDSParameters:{hostId='d7db584e-03e3-4a37-abc7-73012a9f5ba8',
volumeName='M2Stick1'}), log id: 74482de6
2019-03-05 13:07:46,814+01 INFO [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (DefaultQuartzScheduler10) [6d40c5d0] Failed to acquire lock and wait lock
'EngineLock:{exclusiveLocks='[27f8ed93-c857-41ae-af16-e1af9f0b62d4=GLUSTER]', sharedLocks=''}'
2019-03-05 13:07:46,823+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand] (DefaultQuartzScheduler5) [59957167]
FINISH, GetGlusterVolumeAdvancedDetailsVDSCommand, return: org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@868edb00, log id:
74482de6
I find no other correlated messages in the Gluster logs. Where else should I look?
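So far the only manual check I know is the brick headroom, roughly like this (the brick mount path is just an example):

# Per-brick total/free space as Gluster sees it
gluster volume status M2Stick1 detail | grep -iE 'free|brick'
# and on each peer, the brick filesystem itself
df -h /gluster_bricks/M2Stick1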
It seems to work very well; it is just these warnings that worry me, because of the "Failed to acquire lock" messages.
This is one of 3 Gluster storage domains. The other 2 were hand made, have existed since oVirt 3.5, and show no messages.
1x standalone engine
6x Hypervisors in 2 clusters.
One other special condition:
I am in the process of moving my VMs to a second cluster (same data center) with a differently defined Gluster network (new 10Gb cards).
All hypervisors see all networks, but since there is only one SPM, the SPM is never a "Gluster peer" of all domains due to
the "only one Gluster network per cluster" definition. Is this the problem/situation?
There is another "Hand Made" Domain in the new Cluster but it does not have any problems. The only difference between the two is that the
new Domain was created over the Ovirt Web interface.
Cheers,
Robert O'Kane
engine:
libgovirt-0.3.4-1.el7.x86_64
libvirt-bash-completion-4.5.0-10.el7_6.4.x86_64
libvirt-client-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-interface-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-network-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-nodedev-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-nwfilter-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-qemu-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-secret-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-core-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-disk-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-iscsi-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-logical-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-mpath-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-rbd-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-scsi-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-kvm-4.5.0-10.el7_6.4.x86_64
libvirt-glib-1.0.0-1.el7.x86_64
libvirt-libs-4.5.0-10.el7_6.4.x86_64
libvirt-python-4.5.0-1.el7.x86_64
ovirt-ansible-cluster-upgrade-1.1.10-1.el7.noarch
ovirt-ansible-disaster-recovery-1.1.4-1.el7.noarch
ovirt-ansible-engine-setup-1.1.6-1.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.2-1.el7.noarch
ovirt-ansible-image-template-1.1.9-1.el7.noarch
ovirt-ansible-infra-1.1.10-1.el7.noarch
ovirt-ansible-manageiq-1.1.13-1.el7.noarch
ovirt-ansible-repositories-1.1.3-1.el7.noarch
ovirt-ansible-roles-1.1.6-1.el7.noarch
ovirt-ansible-shutdown-env-1.0.0-1.el7.noarch
ovirt-ansible-v2v-conversion-host-1.9.0-1.el7.noarch
ovirt-ansible-vm-infra-1.1.12-1.el7.noarch
ovirt-cockpit-sso-0.0.4-1.el7.noarch
ovirt-engine-4.2.8.2-1.el7.noarch
ovirt-engine-api-explorer-0.0.2-1.el7.centos.noarch
ovirt-engine-backend-4.2.8.2-1.el7.noarch
ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch
ovirt-engine-dashboard-1.2.4-1.el7.noarch
ovirt-engine-dbscripts-4.2.8.2-1.el7.noarch
ovirt-engine-dwh-4.2.4.3-1.el7.noarch
ovirt-engine-dwh-setup-4.2.4.3-1.el7.noarch
ovirt-engine-extension-aaa-jdbc-1.1.7-1.el7.centos.noarch
ovirt-engine-extension-aaa-ldap-1.3.8-1.el7.noarch
ovirt-engine-extension-aaa-ldap-setup-1.3.8-1.el7.noarch
ovirt-engine-extensions-api-impl-4.2.8.2-1.el7.noarch
ovirt-engine-lib-4.2.8.2-1.el7.noarch
ovirt-engine-metrics-1.1.8.1-1.el7.noarch
ovirt-engine-restapi-4.2.8.2-1.el7.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
ovirt-engine-setup-4.2.8.2-1.el7.noarch
ovirt-engine-setup-base-4.2.8.2-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.2.8.2-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.2.8.2-1.el7.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.2.8.2-1.el7.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.2.8.2-1.el7.noarch
ovirt-engine-tools-4.2.8.2-1.el7.noarch
ovirt-engine-tools-backup-4.2.8.2-1.el7.noarch
ovirt-engine-vmconsole-proxy-helper-4.2.8.2-1.el7.noarch
ovirt-engine-webadmin-portal-4.2.8.2-1.el7.noarch
ovirt-engine-websocket-proxy-4.2.8.2-1.el7.noarch
ovirt-engine-wildfly-14.0.1-3.el7.x86_64
ovirt-engine-wildfly-overlay-14.0.1-3.el7.noarch
ovirt-guest-agent-windows-1.0.14-1.el7.centos.noarch
ovirt-host-deploy-1.7.4-1.el7.noarch
ovirt-host-deploy-java-1.7.4-1.el7.noarch
ovirt-imageio-common-1.4.6-1.el7.x86_64
ovirt-imageio-proxy-1.4.6-1.el7.noarch
ovirt-imageio-proxy-setup-1.4.6-1.el7.noarch
ovirt-image-uploader-4.0.1-1.el7.centos.noarch
ovirt-iso-uploader-4.2.0-1.el7.centos.noarch
ovirt-js-dependencies-1.2.0-3.1.el7.centos.noarch
ovirt-log-collector-4.2.7-1.el7.noarch
ovirt-log-collector-analyzer-4.2.7-1.el7.noarch
ovirt-provider-ovn-1.2.18-1.el7.noarch
ovirt-release40-4.0.6.1-1.noarch
ovirt-release41-4.1.8-1.el7.centos.noarch
ovirt-release41-4.1.9-1.el7.centos.noarch
ovirt-release42-4.2.8-1.el7.noarch
ovirt-setup-lib-1.1.5-1.el7.noarch
ovirt-vmconsole-1.0.6-2.el7.noarch
ovirt-vmconsole-proxy-1.0.6-2.el7.noarch
ovirt-web-ui-1.4.5-1.el7.noarch
python-ovirt-engine-sdk4-4.2.9-2.el7.x86_64
virtio-win-0.1.164-1.noarch
virt-manager-1.5.0-1.el7.noarch
virt-manager-common-1.5.0-1.el7.noarch
virt-viewer-5.0-11.el7.x86_64
virt-what-1.18-4.el7.x86_64
Hypervisor:
cockpit-machines-ovirt-176-4.el7.centos.noarch
cockpit-ovirt-dashboard-0.11.38-1.el7.noarch
collectd-virt-5.8.1-3.el7.x86_64
fence-virt-0.3.2-13.el7.x86_64
libvirt-4.5.0-10.el7_6.4.x86_64
libvirt-bash-completion-4.5.0-10.el7_6.4.x86_64
libvirt-client-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-config-network-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-config-nwfilter-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-interface-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-lxc-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-network-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-nodedev-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-nwfilter-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-qemu-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-secret-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-core-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-disk-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-iscsi-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-logical-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-mpath-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-rbd-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-driver-storage-scsi-4.5.0-10.el7_6.4.x86_64
libvirt-daemon-kvm-4.5.0-10.el7_6.4.x86_64
libvirt-libs-4.5.0-10.el7_6.4.x86_64
libvirt-lock-sanlock-4.5.0-10.el7_6.4.x86_64
libvirt-python-4.5.0-1.el7.x86_64
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-host-4.2.3-1.el7.x86_64
ovirt-host-dependencies-4.2.3-1.el7.x86_64
ovirt-host-deploy-1.7.4-1.el7.noarch
ovirt-hosted-engine-ha-2.2.19-1.el7.noarch
ovirt-hosted-engine-setup-2.2.33-1.el7.noarch
ovirt-imageio-common-1.4.6-1.el7.x86_64
ovirt-imageio-daemon-1.4.6-1.el7.noarch
ovirt-provider-ovn-driver-1.2.18-1.el7.noarch
ovirt-release42-4.2.8-1.el7.noarch
ovirt-setup-lib-1.1.5-1.el7.noarch
ovirt-vmconsole-1.0.6-2.el7.noarch
ovirt-vmconsole-host-1.0.6-2.el7.noarch
python-ovirt-engine-sdk4-4.2.9-2.el7.x86_64
virt-install-1.5.0-1.el7.noarch
virt-manager-common-1.5.0-1.el7.noarch
virt-v2v-1.38.2-12.el7_6.1.x86_64
virt-what-1.18-4.el7.x86_64
--
Robert O'Kane
Systems Administrator
Kunsthochschule für Medien Köln
Peter-Welter-Platz 2
50676 Köln
fon: +49(221)20189-223
fax: +49(221)20189-49223
Unable to make Single Sign on working on Windows 7 Guest
by Felipe Herrera Martinez
In case I am able to create an installer, what is the application name that needs to be there in order for oVirt to detect that the oVirt Guest Agent is installed?
I have created an installer that adds the OvirtGuestService files and the product name to be shown, apart from the post-install command lines.
I have tried both "ovirt-guest-agent" and "Ovirt guest agent" as names for the application installed on the Windows 7 guest, and even though both are presented on the oVirt VM Applications tab,
in any case the LogonVDSCommand appears.
Is there any other option to make it work now?
Thanks in advance,
Felipe
Re: [ovirt-users] Need VM run once api
by Chandrahasa S
Can anyone help on this?
Thanks & Regards
Chandrahasa S
From: Chandrahasa S/MUM/TCS
To: users(a)ovirt.org
Date: 28-07-2015 15:20
Subject: Need VM run once api
Hi Experts,
We are integrating oVirt with our internal cloud.
Here we installed cloud-init in a VM and then converted the VM to a template. We
deploy the template with the initial run parameters Hostname, IP Address, Gateway,
and DNS.
But when we power the VM on, these initial run parameters are not getting pushed
inside the VM. It does work when we power on the VM using the Run Once option
on the oVirt portal.
I believe we need to power on the VM using the Run Once API, but we have not been
able to find this API.
Can someone help on this?
I got a reply on this query last time, but unfortunately the mail got deleted.
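I suspect it is something like the following (an untested sketch; the engine URL, credentials, VM id, and exact element names are my assumptions and may differ between API versions):

# "Run Once" = the VM start action with one-off overrides in the request body
curl -k -u 'admin@internal:PASSWORD' \
     -H 'Content-Type: application/xml' \
     -X POST \
     -d '<action>
           <use_cloud_init>true</use_cloud_init>
           <vm>
             <initialization>
               <host_name>myvm.example.com</host_name>
             </initialization>
           </vm>
         </action>' \
     'https://ENGINE_FQDN/ovirt-engine/api/vms/VM_UUID/start'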
Thanks & Regards
Chandrahasa S
Re: [ovirt-users] Problem Windows guests start in pause
by Dafna Ron
Hi Lucas,
Please send mails to the list next time.
Can you please run rpm -qa | grep qemu?
Also, can you try a different Windows image?
Thanks,
Dafna
On 07/14/2014 02:03 PM, lucas castro wrote:
> On the host where I've tried to run the VM, I use CentOS 6.5,
> and I checked: no updates for qemu, libvirt, or related packages.
--
Dafna Ron
Failed to deploy ovirt engine with "hosted-engine --deploy"
by Bong Shau Fui
Hi:
I'm new to oVirt. I followed the guide at https://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hoste... and started to run "hosted-engine --deploy". It got stuck at the stage TASK [Get local VM IP] for about 5 minutes and then failed. I did a search but came up with nothing. The log files show these 2 lines:
2018-07-25 19:59:11,269+0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 {u'_ansible_parsed': True, u'stderr_lines': [], u'cmd': u"virsh -r net-dhcp-leases default | grep -i 00:16:3e:5e:43:32 | awk '{ print $5 }' | cut -f1 -d'/'", u'end': u'2018-07-25 19:59:11.081420', u'_ansible_no_log': False, u'stdout': u'', u'changed': True, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'_raw_params': u"virsh -r net-dhcp-leases default | grep -i 00:16:3e:5e:43:32 | awk '{ print $5 }' | cut -f1 -d'/'", u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'start': u'2018-07-25 19:59:10.949464', u'attempts': 50, u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.131956', u'stdout_lines': []}
2018-07-25 19:59:11,370+0800 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:5e:43:32 | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.131956", "end": "2018-07-25 19:59:11.081420", "rc": 0, "start": "2018-07-25 19:59:10.949464", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
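If I read the log correctly, the task just keeps polling libvirt for a DHCP lease matching the local VM's MAC (50 attempts) and then gives up; the equivalent manual check, using the MAC from my log, would be:

# List the DHCP leases handed out on libvirt's default network
virsh -r net-dhcp-leases default
# The task filters for the VM's MAC and extracts the IP
virsh -r net-dhcp-leases default | grep -i 00:16:3e:5e:43:32 | awk '{ print $5 }' | cut -f1 -d'/'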
Can someone please enlighten me on what has happened? Many thanks in advance.
regards,
Bong SF