strange issue: vm lost info on disk
by Juan Pablo
Hi! I'm struggling with an ongoing problem:
after migrating a VM's disk from an iSCSI domain to an NFS domain, and with
oVirt reporting that the migration was successful, I see there's no data 'inside'
the VM's disk. We never had this issue with oVirt before, so I'm puzzled about
the root cause and whether there's a chance of recovering the information.
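For context, a minimal way to check whether the image on the target NFS domain
actually carries data would be something like the sketch below; the mount point
and UUIDs are placeholders, and it assumes the domain is mounted under the
usual /rhev/data-center/mnt path:

# Placeholders: replace with the real NFS mount, image group and volume UUIDs
cd /rhev/data-center/mnt/nfs-server:_export_path/<sd-uuid>/images/<image-uuid>
ls -lsh                       # apparent size vs. blocks actually allocated
qemu-img info <volume-uuid>   # format, virtual size, backing chain
qemu-img check <volume-uuid>  # qcow2 volumes only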
Can you please help me troubleshoot this one? I would really
appreciate it =)
Running oVirt 4.2.1 here!
thanks in advance,
JP
hosted engine with openvswitch
by Sverker Abrahamsson
Hi
I have a problem running the hosted engine with openvswitch. I have one
cluster where the oVirt engine runs on the host; there it works, and when
starting a VM the interface definition looks like this:
<interface type="bridge">
    <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" />
    <mac address="00:1a:4a:16:01:51" />
    <model type="virtio" />
    <source bridge="vdsmbr_2XMhqdgD" />
    <virtualport type="openvswitch" />
    <filterref filter="vdsm-no-mac-spoofing" />
</interface>
The XML for that VM as fetched from vdsm does not contain the virtualport
tag, nor does it use the correct bridge; it looks like this:
<interface type="bridge">
    <model type="virtio"/>
    <link state="up"/>
    <source bridge="ovirtmgmt"/>
    <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci"/>
    <mac address="00:1a:4a:16:01:51"/>
    <filterref filter="vdsm-no-mac-spoofing"/>
    <bandwidth/>
</interface>
I.e. somewhere along the way the definition is modified to contain the
correct data to work with openvswitch.
On the other cluster, where I try to run the hosted engine, I don't get the
above behaviour. When the engine VM starts, the interface settings are
not modified to use the bridge in openvswitch, with the result that the
VM fails to start:
<interface type="bridge">
    <model type="virtio"/>
    <link state="up"/>
    <source bridge="ovirtmgmt"/>
    <alias name="ua-430d692e-6ef0-4529-8af0-b37a53a11564"/>
    <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci"/>
    <mac address="00:16:3e:0e:39:42"/>
    <filterref filter="vdsm-no-mac-spoofing"/>
    <bandwidth/>
</interface>
[root@h2 ~]# ovs-vsctl show
dfcf7463-ce51-4115-9a3a-ecab9efa8146
    Bridge "vdsmbr_H91hH5sG"
        Port "vdsmbr_H91hH5sG"
            Interface "vdsmbr_H91hH5sG"
                type: internal
        Port ovirtmgmt
            Interface ovirtmgmt
                type: internal
        Port "dummy0"
            Interface "dummy0"
    ovs_version: "2.9.0"
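Incidentally, the dynamic OVS bridge that a given port hangs off can be read
back directly from OVS, which is how I would expect any hook to look up the
name (a small sketch, run on the host above):

# Which OVS bridge carries the ovirtmgmt port? (should print vdsmbr_H91hH5sG here)
ovs-vsctl port-to-br ovirtmgmt

# List all ports attached to that bridge for good measure
ovs-vsctl list-ports vdsmbr_H91hH5sG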
I first assumed there is a hook that makes the needed change, but the
only hooks I can find that mention openvswitch are
ovirt_provider_ovn_hook and 50_openstacknet, and both of those would set the
source bridge to br-int rather than look up the dynamic name of the bridge
as created by vdsm.
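If no shipped hook does this, my rough idea of what a custom before_vm_start
hook would have to do is sketched below. This is only an assumption on my
part, based on the standard hook contract where vdsm passes the path of the
domain XML in the _hook_domxml environment variable; it is not an existing
hook.

#!/bin/bash
# Hypothetical before_vm_start hook sketch, not an existing vdsm hook.
# Assumes the domain XML path is in $_hook_domxml and that the ovirtmgmt
# network sits on an OVS bridge with a dynamic vdsmbr_* name.

ovs_bridge="$(ovs-vsctl port-to-br ovirtmgmt 2>/dev/null)" || exit 0

# Point the interface at the dynamic OVS bridge and add the virtualport tag.
# sed is only for the sketch; a real hook should edit the XML with a proper parser.
sed -i \
  -e "s|<source bridge=\"ovirtmgmt\"/>|<source bridge=\"$ovs_bridge\"/><virtualport type=\"openvswitch\"/>|" \
  "$_hook_domxml"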
One special thing about the host where I try to run the hosted engine is
that there is a dummy port, since otherwise I couldn't get vdsm to
create the bridge, but that shouldn't affect changing the interface
definition for the VM.
Where should I look next?
not signed
by Fernando Fuentes
I am getting this when trying to upgrade to 4.2 from 4.1:
[ ERROR ] Yum Package gdeploy-2.0.6-1.el7.noarch.rpm is not signed
[ ERROR ] Failed to execute stage 'Package installation': Package gdeploy-2.0.6-1.el7.noarch.rpm is not signed
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Rolling back to the previous PostgreSQL instance (postgresql).
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20180510144813-tvu4i2.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20180510144937-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
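For what it's worth, a minimal sketch of checking the signature by hand; the
cached package location is a guess, not taken from the log:

# Find where yum cached the package that setup tried to install
find /var/cache/yum -name 'gdeploy-2.0.6-1.el7.noarch.rpm'

# Show whether the RPM is signed and by which key
rpm -Kv /path/to/gdeploy-2.0.6-1.el7.noarch.rpm

# List the GPG public keys currently imported into the RPM database
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'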
Ideas?
--
Fernando Fuentes
ffuentes(a)txweather.org
http://www.txweather.org
Scheduling a Snapshot of a Gluster volume not working within Ovirt
by Mark Betham
Hi Ovirt community,
I am hoping you will be able to help with a problem I am experiencing when
trying to schedule a snapshot of my Gluster volumes using the Ovirt portal.
Below is an overview of the environment;
I have an Ovirt instance running which is managing our Gluster storage. We
are running Ovirt version "4.2.2.6-1.el7.centos", Gluster version
"glusterfs-3.13.2-2.el7" on a base OS of "CentOS Linux release 7.4.1708
(Core)", Kernel "3.10.0-693.21.1.el7.x86_64", VDSM version
"vdsm-4.20.23-1.el7.centos". All of the versions of software are the
latest release and have been fully patched where necessary.
Ovirt has been installed and configured in "Gluster" mode only, no
virtualisation. The Ovirt platform runs from one of the Gluster storage
nodes.
Gluster runs with 2 clusters, each located at a different physical site (UK
and DE). Each of the storage clusters contains 3 storage nodes. Each
storage cluster contains a single Gluster volume. The Gluster volume is 3 *
replicated. The Gluster volume runs on top of an LVM thin volume which has
been provisioned with an XFS filesystem. The system is running a Geo-rep
between the 2 geo-diverse clusters.
The host servers running at the primary site are of specification 1 *
Intel(R) Xeon(R) CPU E3-1270 v5 @ 3.60GHz (8 core with HT), 64GB Ram, LSI
MegaRAID SAS 9271 with bbu and cache, 8 * SAS 10K 2.5" 1.8TB enterprise
drives configured in a RAID 10 array to give 6.52TB of useable space. The
host servers running at the secondary site are of specification 1 *
Intel(R) Xeon(R) CPU E3-1271 v3 @ 3.60GHz (8 core with HT), 32GB Ram, LSI
MegaRAID SAS 9260 with bbu and cache, 8 * SAS 10K 2.5" 1.8TB enterprise
drives configured in a RAID 10 array to give 6.52TB of useable space. The
secondary site is for DR use only.
When I first started experiencing the issue and was unable to resolve it,
I carried out a full rebuild from scratch across the two storage clusters.
I had spent some time troubleshooting the issue but felt it worthwhile to
ensure I had a clean platform, void of any potential issues which may be
there due to some of the previous work carried out. The platform was
rebuilt and data re-ingested. It is probably worth mentioning that this
environment will become our new production platform, we will be migrating
data and services to this new platform from our existing Gluster storage
cluster. The date for the migration activity is getting closer, so the
available time has become an issue and will not permit another full rebuild
of the platform without impacting the delivery date.
After the rebuild, with both storage clusters online, available and managed
within the Ovirt platform, I conducted some basic commissioning checks and
found no issues. The next step I took at this point was to set up the
Geo-replication. This was brought online with no issues and data was seen
to be synchronised without any problems. At this point the data
re-ingestion was started and the new data was synchronised by the
Geo-replication.
The first step in bringing the snapshot schedule online was to validate
that snapshots could be taken outside of the scheduler. Taking a manual
snapshot via the Ovirt portal worked without issue. Several were taken on
both primary and secondary clusters. At this point a schedule was created
on the primary site cluster via the Ovirt portal to create a snapshot of
the storage at hourly intervals. The schedule was created successfully;
however, no snapshots were ever created. Examining the logs did not show
anything which I believed was a direct result of the faulty schedule, but it
is quite possible I missed something.
I reviewed many online articles, bug reports and application manuals in
relation to snapshotting. There were several loosely related support
articles around snapshotting but none of the recommendations seemed to
work. I did the same with the manuals and again found nothing that seemed to work.
What I did find were several references to running snapshots alongside
geo-replication, which suggested that geo-replication should be paused while
a snapshot is being created. So I removed all existing references to any
snapshot schedule, paused the Geo-rep and recreated the snapshot schedule.
The schedule was never actioned and no snapshots were created. I then removed
Geo-rep entirely, removed all schedules and carried out a reboot of the
entire platform. When the system was fully back online with no pending heal
operations, the schedule was re-added for the primary site only. There was no
difference in the results and no snapshots were created from the schedule.
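For reference, the manual CLI steps that correspond to what the schedule
should be doing (with geo-replication in place) are roughly the following;
this is only a sketch, and the volume name and geo-rep slave are placeholders
for my real values:

# Pause geo-replication before taking the snapshot, as the references suggest
gluster volume geo-replication myvol slavehost::slavevol pause

# Take the snapshot, then resume geo-replication
gluster snapshot create myvol-hourly myvol
gluster volume geo-replication myvol slavehost::slavevol resume

# Confirm the snapshot was created
gluster snapshot list myvol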
I have now reached the point where I feel I require assistance and hence
this email request.
If you require any further data then please let me know and I will do my
best to get it for you.
Any help you can give would be greatly appreciated.
Many thanks,
Mark Betham
delete snapshot error
by 董青龙
Hi all,
I am using oVirt 4.1. I failed to delete a snapshot of a VM. The state of the snapshot stayed locked and I could not start the VM. Can anyone help? Thanks!
PS: some logs:
VDSM command ReconcileVolumeChainVDS failed: Could not acquire resource. Probably resource factory threw an exception.: ()
VDSM host command GetVGInfoVDS failed: Volume Group does not exist: (u'vg_uuid: nL3wgg-uctH-1lGd-Vyl1-f1P2-fk95-tH5tlj',)
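One thing that may be worth looking at is the unlock_entity.sh utility
shipped with the engine, which is often pointed to for entities stuck in a
locked state. A rough sketch, assuming the default dbutils location; the
exact options can vary by version, and the snapshot ID is a placeholder:

# List entities the engine currently considers locked (run on the engine host)
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t all

# Unlock a specific snapshot by its ID (placeholder UUID)
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t snapshot <snapshot-uuid>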
Mikrotik CHR guest agent
by Magnus Isaksson
Hello all
I have installed MikroTik Cloud Hosted Router on my oVirt 4.2 setup.
It starts fine and so on, but oVirt complains about the guest agent.
At MikroTik this information is available regarding the guest agent:
https://wiki.mikrotik.com/wiki/Manual:CHR#KVM
But I don't understand what I should do with that information to get it working.
Can somebody help me?
Regards
Magnus
self hosted engine mac range and possible conflict
by Gianluca Cecchi
Hello,
Suppose I have an environment with a self-hosted engine in 4.1: how is the
HW address of the virtual NIC of the engine VM generated when I deploy using
Cockpit?
Suppose I already have a self-hosted engine environment on a LAN and I'm
going to create another one on the same LAN; is there any risk of HW
address collision for the engine?
I know that from inside the web admin console I can then set the HW address
ranges for the VMs, but what about the MAC of the engine VM itself?
At the moment I have only LAN access, so I crosschecked an RHV 4.1
environment and I see that the engine has HW addr 00:16:3e:7c:65:34 (a
lookup seems to associate it with "Xensource, Inc."), while the VMs have
their HW addr in the range 00:1a:4a:16:01:51 - 00:1a:4a:16:01:e6 (e.g.
00:1a:4a:16:01:56) (a lookup seems to associate it with "Qumranet Inc.").
Does this mean that the engine has a sort of dedicated range? Is there any
risk of collision, or is a query done on the LAN before assigning it?
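For the crosscheck itself, a minimal way to look for a live conflict by hand,
assuming the hosts and the new engine share the same L2 segment (the subnet
below is a placeholder):

# Populate the neighbour/ARP cache with a ping sweep of the segment
nmap -sn 192.168.1.0/24 > /dev/null

# Then look for the candidate MAC (or the whole 00:16:3e prefix) in the cache
ip neigh show | grep -i '00:16:3e'
arp -an | grep -i '00:16:3e:7c:65:34'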
BTW: probably the range for the engine VM HW addr should be in the Qumranet
range too...?
Thanks in advance,
Gianluca