performance issues with Windows 10
by Hetz Ben Hamo
Hi,
I'm doing some very simple disk benchmarks on Windows 10, both with ESXi
6.7 and oVirt 4.3.0.
Both Windows 10 Pro guests have all the drivers installed.
The storage (a datastore in VMware, a storage domain in oVirt) comes from
the same ZFS machine, in both cases mounted over NFS *without* any extra
NFS mount parameters.
The ESXi host runs on an HP DL360 G7 with an E5620 CPU, while the oVirt
node runs on an IBM x3550 M3 with dual Xeon E5620s. There are no memory
issues; both machines have plenty of free memory and free CPU resources.
Screenshots:
- Windows 10 in vSphere 6.7 - https://imgur.com/V75ep2n
- Windows 10 in oVirt 4.3.0 - https://imgur.com/3JDrWLx
As you can see, oVirt lags a bit in 4K reads, but the write performance is
really bad.
How can this be improved?
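One common suspect with ZFS-over-NFS write performance is synchronous write handling (ESXi and KVM can differ in how NFS writes are flushed). A hedged sketch of checks on the ZFS server; the dataset name is a placeholder:

```shell
# On the ZFS server; "tank/vmstore" is a placeholder for your dataset.
zfs get sync tank/vmstore   # "standard"/"always" force NFS writes to stable storage
zpool status                # is there a fast SLOG device to absorb sync writes?
# For testing only -- trades crash safety for speed (data loss on power cut):
# zfs set sync=disabled tank/vmstore
```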
Thanks
5 years, 9 months
Re: ovirt 4.2.7 -> 4.3.0 upgrade process
by Douglas Duckworth
Thanks everyone!
We will try the upgrade to 4.2.8 then try 4.3 later.
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug@med.cornell.edu
O: 212-746-6305
F: 212-746-8690
On Wed, Feb 6, 2019 at 11:56 AM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
On Wed, Feb 6, 2019 at 5:28 PM Douglas Duckworth <dod2014@med.cornell.edu> wrote:
Hello
Can anyone confirm that the steps for upgrading to 4.3 from 4.2 are the same as listed for previous versions?
I reviewed the 4.1-to-4.2 guide, which looks straightforward:
https://www.ovirt.org/documentation/upgrade-guide/chap-Upgrading_from_4.1...
I am sure nothing will go wrong though we are doing daily backups of the hosted engine VM database.
If you are on 4.2.7, I would recommend upgrading the engine to 4.2.8 before upgrading to 4.3.
Since in 4.3 we switched from fluentd to rsyslog for the metrics part, with 4.2.7 you may hit issues if you upgrade the hosts to 4.3 before the engine is upgraded.
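For reference, the usual engine-side upgrade flow is roughly the following sketch (package globs may differ between releases; check the release notes first):

```shell
# Rough sketch of the standard engine upgrade flow, run on the engine VM.
yum update ovirt\*setup\*   # pull in the new setup packages first
engine-setup                # run the actual upgrade
yum update                  # then update the remaining packages
```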
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/VC2TVZIHTOT...
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com
sun.security.validator
by suporte@logicworks.pt
Hi,
I'm running version 4.2.3.8-1.el7, and after rebooting the engine machine I can no longer log in to the administration portal; it fails with this error:
sun.security.validator.ValidatorException: PKIX path validation failed
java.security.cert.CertPathValidatorException: validity check failed
I'm using a self signed cert.
Any idea?
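A "validity check failed" usually points to an expired certificate. A quick check, assuming the default engine PKI paths (they may differ on your install):

```shell
# Default engine PKI paths assumed; adjust if your install differs.
openssl x509 -in /etc/pki/ovirt-engine/certs/apache.cer -noout -enddate
openssl x509 -in /etc/pki/ovirt-engine/ca.pem -noout -enddate
# If a cert has expired, re-running engine-setup offers to renew the PKI.
```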
Thanks
--
Jose Ferradeira
http://www.logicworks.pt
Strange behavior when changing memory and virtual cpu for VM
by matthias.barmeier@sourcepark.de
Hi,
after creating a VM from a template, everything works fine. After a while I wanted to increase memory and CPUs, so I changed the memory and virtual CPU values in the "System" settings and restarted the VM.
From that moment on, the VM hangs on boot with this message filling the console:
url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' Connection to 169.254.169.254 timed out.
This goes on forever. What can I do to make the VM work again? I tried restoring the settings to their previous values, but that does not help.
The VM runs Debian 9 (stretch) with cloud-init. oVirt version: 4.2.7.5-1.el7
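That 169.254.169.254 URL is cloud-init probing the EC2 metadata service; after a hardware change it can re-run its datasource detection and block boot. A hedged sketch of a workaround inside the guest (via a rescue shell if needed; the drop-in file name is just a convention):

```shell
# Inside the guest (use a rescue shell or single-user mode if it won't boot):
touch /etc/cloud/cloud-init.disabled   # disable cloud-init entirely, or...
# ...pin it to datasources that don't probe the EC2 metadata URL:
echo 'datasource_list: [ NoCloud, None ]' \
  > /etc/cloud/cloud.cfg.d/99-disable-network-datasources.cfg
```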
Ciao
Matze
Bug in the web interface?
by Hetz Ben Hamo
Hi,
I upgraded to 4.3, and in the hosted engine web interface I now see the
"Dashboard" button twice, one below the other.
How can I fix this, or is it a known bug?
Thanks
Hetz
Re: Changing CPU type from Opteron G3 to Epyc with hosted engine in 4.3
by Simone Tiraboschi
On Wed, Feb 6, 2019 at 11:02 AM Juhani Rautiainen <
juhani.rautiainen(a)gmail.com> wrote:
> On Wed, Feb 6, 2019 at 11:27 AM Juhani Rautiainen
> <juhani.rautiainen(a)gmail.com> wrote:
> >
> > On Wed, Feb 6, 2019 at 10:31 AM Simone Tiraboschi <stirabos(a)redhat.com>
> wrote:
> > >
> > > The only case that could prevent that is when you are in hosted-engine
> mode and so cannot set the last host into maintenance mode without
> losing the engine itself.
> > >
> > > If this is your case,
> > > what you can do is:
> > > - set HE global maintenance mode
> > > - set one of the hosted-engine hosts into maintenance mode
> > > - move it to a different cluster
> > > - shutdown the engine VM
> > > - manually restart the engine VM on the host on the custom cluster
> directly executing on that host: hosted-engine --vm-start
> > > - connect again to the engine
> > > - set all the hosts of the initial cluster into maintenance mode
> > > - upgrade the cluster
> > > - shut down again the engine VM
> > > - manually restart the engine VM on one of the hosts of the initial
> cluster
> > > - move back the host that got into a temporary cluster to its initial
> cluster
> >
> > I might try this one.
>
> I tried this one and it worked. There was a problem with a couple of VMs
> (they were made from OVAs exported from VirtualBox). The old cluster could
> not be upgraded until they were moved to the temporary cluster too (it
> complained about a wrong cluster level). Might that have been the real
> reason the normal upgrade failed? Maybe the error message was wrong?
>
Thanks for the report; honestly, I have to double-check it because it looks
a bit suspicious.
I also think that extending
https://github.com/oVirt/ovirt-ansible-cluster-upgrade
to also handle upgrading the hosted-engine cluster, making it smoother,
could make a lot of sense.
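The hosted-engine side of the procedure above can be sketched with the CLI (cluster moves still happen in the UI):

```shell
# Run on the relevant hosts; cluster membership changes are done in the UI.
hosted-engine --set-maintenance --mode=global   # HE global maintenance
hosted-engine --vm-shutdown                     # stop the engine VM
hosted-engine --vm-start   # run on the host moved to the temporary cluster
# ...upgrade the original cluster, shut the engine down again, start it on a
# host in the original cluster, then leave maintenance:
hosted-engine --set-maintenance --mode=none
```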
>
> Thanks a lot,
> Juhani
>
Changing CPU type from Opteron G3 to Epyc with hosted engine in 4.3
by Juhani Rautiainen
Hi!
Now that I have the engine and nodes on the 4.3 level, I'm stuck upgrading
the CPU type. I have a 2-node cluster with Epyc processors. It was
originally installed with 4.2, so it chose Opteron G3 as the CPU type (no
Epyc support back then). In Engine 4.3, Epyc is available as a CPU type
when I choose Compatibility Version 4.3. The big problem is that it
doesn't allow upgrading the CPU type because not all hosts are in maintenance:
"Error while executing action: Cannot change Cluster CPU type unless
all Hosts attached to this Cluster are in Maintenance". Putting all
hosts into maintenance is impossible because the engine is hosted in the
cluster. I tried global HA maintenance, but that didn't help.
What is the correct way to solve this problem?
Thanks,
-Juhani
Issue when adding new hypervisor to cluster
by Jonas Lindholm
Background:
We had an issue where an operator decided to install Windows on a running oVirt hypervisor with VMs on it.
The Windows installation corrupted some of the storage metadata for one storage domain (the domain could not be activated; activation failed after ~5 minutes each time I tried).
I was able to move all disks on that storage domain except two to a new storage domain, so all important data was saved.
After the data was moved, I removed the corrupted storage domain from the cluster/datacenter with no issues.
I then destroyed the storage domain.
The existing hypervisors in the cluster can reboot without any issues with the existing storage domains.
So far so good.
Issue:
However, when I try to add an additional hypervisor to the cluster, I get an error referencing the corrupted storage domain (the UUID is that of the bad domain that no longer exists).
My question is where the reference can still live: in the DB, or in some active data structure the hypervisors are using?
If it is in the DB, I guess some clever SQL statement might be able to remove it, but in that case I would need help with those SQL statements.
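A read-only look at the engine DB might confirm where the reference lives. This is a hedged sketch; the table/view names are from the engine schema and may vary between versions:

```shell
# Read-only first, on the engine machine; schema names may differ by version.
sudo -u postgres psql engine -c \
  "SELECT id, storage_name FROM storage_domain_static;"
# Then check whether the bad UUID still appears ('<bad-uuid>' is a placeholder):
# sudo -u postgres psql engine -c \
#   "SELECT * FROM storage_domains WHERE id = '<bad-uuid>';"
```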
Thanks
/Jonas
rebooting an ovirt cluster
by feral
How is an oVirt hyperconverged cluster supposed to come back to life after
a power outage to all 3 nodes?
Running ovirt-node (ovirt-node-ng-installer-4.2.0-2019013006.el7.iso) to
get things going, but I've run into multiple issues.
1. During the gluster setup, the volume sizes I specify are not reflected in
the deployment configuration; the auto-populated values are used every time,
so I manually edited the config to get the volume sizes right. I also noticed
that if I create the deployment config with "sdb" by accident but click back
and change it to "vdb", that change is not reflected in the config either.
My deployment config does seem to work: all volumes are created (though the
XFS options used don't make sense, as you end up with stripe sizes that
aren't a multiple of the block size).
Once gluster is deployed, I deploy the hosted engine, and everything works.
2. Reboot all nodes. I was testing the response to a power outage. All nodes
come up, but glusterd is not running (it seems to have failed for some
reason). I can manually restart glusterd on all nodes, and it comes up and
starts communicating normally. However, the engine does not come online. So
I figured out where it last lived and tried to start it manually through the
web interface. This fails because vdsm-ovirtmgmt is not up. I figured out
that the correct way to start the engine is through the CLI via
hosted-engine --vm-start. This does work, but it takes a very long time, and
the engine usually starts on a node other than the one I told it to start on.
So I guess two (or three) questions. What is the expected behavior after a
full cluster reboot (i.e. after a power failure)? Why doesn't the engine
start automatically? And what might be causing glusterd to fail, when it can
be restarted manually and works fine?
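A few first checks after such a reboot (a hedged sketch, assuming systemd-managed glusterd on oVirt Node):

```shell
# On each node:
systemctl is-enabled glusterd     # should print "enabled"
systemctl enable --now glusterd   # enable at boot and start it
journalctl -u glusterd -b         # why did it fail during this boot?
# The HA agent only auto-starts the engine when global maintenance is off:
hosted-engine --vm-status
hosted-engine --set-maintenance --mode=none
```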
--
_____
Fact:
1. Ninjas are mammals.
2. Ninjas fight ALL the time.
3. The purpose of the ninja is to flip out and kill people.