[Users] Lifecycle / upgradepath
by Sven Kieske
Hi Community,
Currently there is no single document describing supported
(that is: working) upgrade scenarios.
I think the project has matured enough to have such a supported
upgrade path, which should be considered in the development of new
releases.
As far as I know, it is currently supported to upgrade
from x.y.z to x.y.z+1 and from x.y.z to x.y+1.z,
but not from x.y-1.z to x.y+1.z directly.
Maybe this should at least be put together in a wiki page.
It would also be good to know how long a single "release"
will be supported.
In this context I would define a release as a version
bump from x.y.z to x.y+1.z or to x+1.y.z;
a bump in z would be a bugfix release.
The question is: how long will we get bugfix releases
for a given version?
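The rule described above (patch bumps and single minor bumps are supported, skipping a minor version is not) could be sketched like this; the function name and tuple representation are purely illustrative, not anything the project defines:

```python
def is_supported_upgrade(current, target):
    """Return True if upgrading current -> target is a supported path.

    Versions are (major, minor, patch) tuples. Supported paths:
      - a later patch release within the same x.y (bugfix release)
      - the next minor release (x.y -> x.y+1)
    Skipping a minor version (x.y-1 -> x.y+1) is not supported directly.
    """
    cur_major, cur_minor, cur_patch = current
    tgt_major, tgt_minor, tgt_patch = target
    if tgt_major != cur_major:
        return False  # major-version jumps are not covered by the rule above
    if tgt_minor == cur_minor:
        return tgt_patch > cur_patch   # bugfix release within the same minor
    return tgt_minor == cur_minor + 1  # exactly the next minor release

print(is_supported_upgrade((3, 1, 0), (3, 1, 2)))  # True: bugfix
print(is_supported_upgrade((3, 1, 0), (3, 2, 0)))  # True: next minor
print(is_supported_upgrade((3, 0, 0), (3, 2, 0)))  # False: skips 3.1
```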
What are your thoughts?
--
Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
5 years, 7 months
Planned restart of production services
by Evgheni Dereveanchin
Hi everyone,
I will be restarting several production systems within the next hour
to apply updates.
The following services may be unreachable for some time:
- resources.ovirt.org - package repositories
- jenkins.ovirt.org - CI master
Package repositories will be unreachable for a short period of time,
and no new CI jobs will be started during this window.
I will send an announcement once maintenance is complete.
--
Regards,
Evgheni Dereveanchin
[Users] Nested virtualization with Opteron 2nd generation and oVirt 3.1 possible?
by Gianluca Cecchi
Hello,
I have 2 physical servers with 2nd-generation Opteron CPUs.
They run CentOS 6.3 and already have some VMs configured.
Their /proc/cpuinfo contains
...
model name : Dual-Core AMD Opteron(tm) Processor 8222
...
The kvm_amd kernel module is loaded with its nested option enabled (the default):
# systool -m kvm_amd -v
Module = "kvm_amd"
Attributes:
initstate = "live"
refcnt = "15"
srcversion = "43D8067144E7D8B0D53D46E"
Parameters:
nested = "1"
npt = "1"
...
I have already configured a Fedora 17 VM as the oVirt 3.1 engine.
I'm trying to configure another VM as an oVirt 3.1 node with
ovirt-node-iso-2.5.5-0.1.fc17.iso.
It seems I can't configure it in a way that keeps the oVirt install from complaining.
After some attempts, I tried this in my vm.xml for the CPU:
<cpu mode='custom' match='exact'>
<model fallback='allow'>athlon</model>
<vendor>AMD</vendor>
<feature policy='require' name='pni'/>
<feature policy='require' name='rdtscp'/>
<feature policy='force' name='svm'/>
<feature policy='require' name='clflush'/>
<feature policy='require' name='syscall'/>
<feature policy='require' name='lm'/>
<feature policy='require' name='cr8legacy'/>
<feature policy='require' name='ht'/>
<feature policy='require' name='lahf_lm'/>
<feature policy='require' name='fxsr_opt'/>
<feature policy='require' name='cx16'/>
<feature policy='require' name='extapic'/>
<feature policy='require' name='mca'/>
<feature policy='require' name='cmp_legacy'/>
</cpu>
Inside the node, /proc/cpuinfo becomes:
processor : 3
vendor_id : AuthenticAMD
cpu family : 6
model : 2
model name : QEMU Virtual CPU version 0.12.1
stepping : 3
microcode : 0x1000065
cpu MHz : 3013.706
cache size : 512 KB
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat
pse36 clflush mmx fxsr sse sse2 syscall mmxext fxsr_opt lm nopl pni
cx16 hypervisor lahf_lm cmp_legacy cr8_legacy
bogomips : 6027.41
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
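One way to see why oVirt complains: despite the policy='force' entry for svm in the XML, the svm flag never shows up in the guest's flags line above. A quick check in plain Python, with the flags string copied from the output above:

```python
def has_flag(cpuinfo_flags, flag):
    """Return True if a /proc/cpuinfo 'flags' line contains the given flag."""
    return flag in cpuinfo_flags.split()

# The flags line reported inside the guest, copied from the output above.
guest_flags = ("fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat "
               "pse36 clflush mmx fxsr sse sse2 syscall mmxext fxsr_opt lm nopl pni "
               "cx16 hypervisor lahf_lm cmp_legacy cr8_legacy")

print(has_flag(guest_flags, 'svm'))  # False: the bit oVirt's check needs is absent
print(has_flag(guest_flags, 'lm'))   # True: other required features did pass through
```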
Two questions:
1) Is there any combination in the XML file I can give my VM so that oVirt
doesn't complain about missing hardware virtualization with this
processor?
2) Suppose 1) is not possible in my case, and I still want to test the
interface and try some configuration operations to see, for example, the
differences from RHEV 3.0. How can I do that?
At the moment this complaint about hardware virtualization prevents me from
activating the node.
I get
Installing Host f17ovn01. Step: RHEV_INSTALL.
Host f17ovn01 was successfully approved.
Host f17ovn01 running without virtualization hardware acceleration
Detected new Host f17ovn01. Host state was set to Non Operational.
Host f17ovn01 moved to Non-Operational state.
Host f17ovn01 moved to Non-Operational state as host does not meet the
cluster's minimum CPU level. Missing CPU features : CpuFlags
Can I lower the requirements to be able to operate without hardware
virtualization in 3.1?
Thanks in advance,
Gianluca
Re: oVirt and NetApp NFS storage
by Strahil
I know of 2 approaches.
1. Use the NFS "hard" mount option - it never returns an error to sanlock and waits until the NFS server recovers (I have never tried this one, but in theory it might work).
2. Change the default sanlock timeout (last time I tried that, it didn't work). You might need help from Sandro or Sahina for that option.
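For a rough sense of why a 120-second takeover is fatal by default: as far as I recall, sanlock's default io_timeout is 10 seconds and a lease is treated as expired after roughly 8 such intervals (~80 s). Both numbers are assumptions from memory, so check the sanlock documentation before relying on them. A sketch of the arithmetic:

```python
# Assumed sanlock defaults (verify against your sanlock version's docs):
IO_TIMEOUT = 10       # seconds per renewal interval
EXPIRY_INTERVALS = 8  # lease considered expired after this many intervals

def vms_survive(failover_seconds, io_timeout=IO_TIMEOUT):
    """True if storage returns before sanlock leases expire and VMs get killed."""
    return failover_seconds < EXPIRY_INTERVALS * io_timeout

print(vms_survive(120))      # False: NetApp's 120 s worst case exceeds ~80 s
print(vms_survive(120, 20))  # True with a doubled io_timeout, if that is tunable
```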
Best Regards,
Strahil Nikolov

On Apr 18, 2019 11:45, klaasdemter(a)gmail.com wrote:
>
> Hi,
>
> I got a question regarding oVirt and its support for NetApp NFS storage.
> We have a MetroCluster for our virtual machine disks, but an HA failover
> of that (the active IP gets assigned to another node) seems to produce
> outages too long for sanlock to handle - that affects all VMs that have
> storage leases. NetApp says the "worst case" takeover time is 120 seconds.
> That would mean sanlock has already killed all VMs. Is anyone familiar
> with how we could set up oVirt to tolerate such storage outages? Or do I need
> to use another type of storage for my oVirt VMs because that NFS
> implementation is unsuitable for oVirt?
>
>
> Greetings
>
> Klaas
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSJJKK5UG57...
Ovirt nodeNG RDMA support?
by michael@wanderingmad.com
When I was able to load CentOS as the host OS, I was able to use RDMA, but it seems the 4.3.x branch of Node NG is missing RDMA support. I enabled RDMA and started the service, but Gluster refuses to recognize that RDMA is available: it always reports the RDMA port as 0, and when I try to create a new volume with the tcp,rdma transport options, it always fails.
Need VM run once api
by Chandrahasa S
Hi Experts,
We are integrating oVirt with our internal cloud.
We installed cloud-init in a VM and then converted the VM to a template. We
deploy the template with the initial run parameters Hostname, IP Address,
Gateway and DNS.
But when we power the VM on, the initial run parameters are not pushed
into the VM. It does work when we power on the VM using the Run Once option
in the oVirt portal.
I believe we need to power on the VM using the Run Once API, but we have
not been able to find this API.
Can someone help with this?
I got a reply to this query last time, but unfortunately the mail got deleted.
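For reference, in the REST API "run once" is the start action on a VM, with the cloud-init payload carried in an initialization element inside the request body. Below is a minimal sketch of building that body with plain Python; the element names (action, use_cloud_init, vm, initialization, host_name, dns_servers) are assumed from the oVirt v4 REST API, so verify them against the API documentation for your engine version:

```python
import xml.etree.ElementTree as ET

def run_once_body(host_name, dns_servers):
    """Build an XML body for POST /ovirt-engine/api/vms/{vm_id}/start
    (element names assumed from the oVirt v4 REST API)."""
    action = ET.Element('action')
    ET.SubElement(action, 'use_cloud_init').text = 'true'
    vm = ET.SubElement(action, 'vm')
    init = ET.SubElement(vm, 'initialization')
    ET.SubElement(init, 'host_name').text = host_name
    ET.SubElement(init, 'dns_servers').text = dns_servers
    return ET.tostring(action, encoding='unicode')

print(run_once_body('myvm.example.com', '192.0.2.53'))
```

If you use the Python SDK (ovirtsdk4), the equivalent call is roughly vm_service.start(use_cloud_init=True, vm=types.Vm(initialization=types.Initialization(...))), again subject to checking the SDK docs for your version.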
Thanks & Regards
Chandrahasa S
=====-----=====-----=====
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
Please Help, Ovirt Node Hosted Engine Deployment Problems 4.3.2
by Todd Barton
I'm having to rebuild an environment that started back in the early 3.x days. A lot has changed, and I'm attempting to use the oVirt Node based setup to build a new environment, but I can't get through the hosted engine deployment process via the cockpit (I've tried the command line as well). I've tried static DHCP reservations and static IPs, and confirmed that I have resolvable host names. This is a test environment, so I can work through any issues in deployment.
When the cockpit is displaying the "waiting for host to come up" task, the cockpit gets disconnected. It appears to happen when the bridge network is set up. At that point the deployment is messed up and I can't return to the cockpit. I've tried this with one or two NICs/interfaces and every permutation of static and dynamic IP addresses. I've spent a week trying different setups, and I must be doing something stupid.
Attached is a screen capture of the resulting IP info after my latest failed attempt. I used two NICs: one for the Gluster and bridge network, and the other for oVirt cockpit access. I can't access the cockpit on either IP address after the failure.
I've attempted this setup as both a single-host hyper-converged setup and a three-host hyper-converged environment; same issue in both.
Can someone please help me or give me some thoughts on what is wrong?
Thanks!
Todd Barton
Re: All hosts non-operational after upgrading from 4.2 to 4.3
by Strahil
Are you able to access your iSCSI storage via the /rhev/data-center/mnt... mount point?
Best Regards,
Strahil Nikolov

On Apr 5, 2019 19:04, John Florian <jflorian(a)doubledog.org> wrote:
>
> I am in a severe pinch here. A while back I upgraded from 4.2.8 to 4.3.3 and only had one step remaining and that was to set the cluster compat level to 4.3 (from 4.2). When I tried this it gave the usual warning that each VM would have to be rebooted to complete, but then I got my first unusual piece when it then told me next that this could not be completed until each host was in maintenance mode. Quirky I thought, but I stopped all VMs and put both hosts into maintenance mode. I then set the cluster to 4.3. Things didn't want to become active again and I eventually noticed that I was being told the DC needed to be 4.3 as well. Don't remember that from before, but oh well that was easy.
>
> However, the DC and SD remain down. The hosts are non-operational. I've powered everything off and started fresh but still wind up in the same state. Hosts will look like they're active for a bit (green triangle) but then go non-operational after about a minute. It appears that my iSCSI sessions are active/logged in. The one glaring thing I see in the logs is this in vdsm.log:
>
> 2019-04-05 12:03:30,225-0400 ERROR (monitor/07bb1bf) [storage.Monitor] Setting up monitor for 07bb1bf8-3b3e-4dc0-bc43-375b09e06683 failed (monitor:329)
> Traceback (most recent call last):
> File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 326, in _setupLoop
> self._setupMonitor()
> File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 348, in _setupMonitor
> self._produceDomain()
> File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 158, in wrapper
> value = meth(self, *a, **kw)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 366, in _produceDomain
> self.domain = sdCache.produce(self.sdUUID)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110, in produce
> domain.getRealDomain()
> File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in getRealDomain
> return self._cache._realProduce(self._sdUUID)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134, in _realProduce
> domain = self._findDomain(sdUUID)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151, in _findDomain
> return findMethod(sdUUID)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176, in _findUnfetchedDomain
> raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist: (u'07bb1bf8-3b3e-4dc0-bc43-375b09e06683',)
>
> How do I proceed to get back operational?
Re: Fwd: [Gluster-users] Announcing Gluster release 5.5
by Strahil
Hi Darrell,
Will it fix the Gluster brick sudden-death issue?
Best Regards,
Strahil Nikolov

On Mar 21, 2019 21:56, Darrell Budic <budic(a)onholyground.com> wrote:
>
> This release of Gluster 5.5 appears to fix the gluster 3.12->5.3 migration problems many ovirt users have encountered.
>
> I’ll try and test it out this weekend and report back. If anyone else gets a chance to check it out, let us know how it goes!
>
> -Darrell
>
>> Begin forwarded message:
>>
>> From: Shyam Ranganathan <srangana(a)redhat.com>
>> Subject: [Gluster-users] Announcing Gluster release 5.5
>> Date: March 21, 2019 at 6:06:33 AM CDT
>> To: announce(a)gluster.org, gluster-users Discussion List <gluster-users(a)gluster.org>
>> Cc: GlusterFS Maintainers <maintainers(a)gluster.org>
>>
>> The Gluster community is pleased to announce the release of Gluster
>> 5.5 (packages available at [1]).
>>
>> Release notes for the release can be found at [2].
>>
>> Major changes, features and limitations addressed in this release:
>>
>> - Release 5.4 introduced an incompatible change that prevented rolling
>> upgrades, and hence was never announced to the lists. As a result we are
>> jumping a release version and going to 5.5 from 5.3, that does not have
>> the problem.
>>
>> Thanks,
>> Gluster community
>>
>> [1] Packages for 5.5:
>> https://download.gluster.org/pub/gluster/glusterfs/5/5.5/
>>
>> [2] Release notes for 5.5:
>> https://docs.gluster.org/en/latest/release-notes/5.5/
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users(a)gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>