[URGENT][ACTION REQUIRED] vdsm versioning system needs to be fixed
by Sandro Bonazzola
Hi,
I think I already raised the issue several times, but let me raise this
again.
VDSM building / automated versioning is badly broken.
Currently, the 3.6 snapshot is building:
vdsm-4.17.19-32.git171584b.el7.centos.src.rpm
<http://jenkins.ovirt.org/job/vdsm_3.6_build-artifacts-el7-x86_64/178/arti...>
because the last tag on the 3.6 branch was 4.17.19, while newer tags
have been created on different branches.
This makes it impossible to upgrade from the stable release (4.17.23)
to the latest snapshot. It also breaks dependencies in other projects,
such as hosted engine, that require the latest released version.
Please fix the automated versioning system vdsm is currently using or get
rid of it.
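To illustrate why the upgrade path is broken: RPM compares Version fields
segment by segment, so the snapshot's 4.17.19 sorts below the stable
4.17.23 regardless of the git suffix in the Release field. A rough Python
sketch of that segment comparison (a deliberate simplification of
rpmvercmp, not the real implementation — it only handles dotted numeric
versions):

```python
import re

def vercmp(a, b):
    """Rough sketch of RPM version comparison: split each version into
    numeric segments and compare them pairwise, left to right. The real
    rpmvercmp also handles alphabetic segments and tildes."""
    sa = [int(s) for s in re.findall(r'\d+', a)]
    sb = [int(s) for s in re.findall(r'\d+', b)]
    return (sa > sb) - (sa < sb)

# The snapshot's version is lower than the stable release's,
# so yum sees it as a downgrade:
vercmp("4.17.19", "4.17.23")   # -1: snapshot sorts BELOW stable
```
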
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
master ovirt-engine requires ovirt-engine-sdk-python >= 4.0.0.0
by Yedidyah Bar David
Hi all,
I am trying, on a RHEL 7 VM:
yum install -y http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
yum install ovirt-engine
Fails with:
--> Finished Dependency Resolution
Error: Package:
ovirt-iso-uploader-4.0.0-0.0.master.20160208072812.gita433ce3.el7.centos.noarch
(ovirt-master-snapshot)
Requires: ovirt-engine-sdk-python >= 4.0.0.0
Available:
ovirt-engine-sdk-python-3.6.0.3-1.el7.centos.noarch (centos-ovirt36)
ovirt-engine-sdk-python = 3.6.0.3-1.el7.centos
Available: ovirt-engine-sdk-python-3.6.2.1-1.el7.noarch
(centos-ovirt36)
ovirt-engine-sdk-python = 3.6.2.1-1.el7
Installing: ovirt-engine-sdk-python-3.6.3.0-1.el7.noarch
(centos-ovirt36)
ovirt-engine-sdk-python = 3.6.3.0-1.el7
Error: Package:
ovirt-image-uploader-4.0.0-0.0.master.20160211135948.git43cc942.el7.centos.noarch
(ovirt-master-snapshot)
Requires: ovirt-engine-sdk-python >= 4.0.0.0
Available:
ovirt-engine-sdk-python-3.6.0.3-1.el7.centos.noarch (centos-ovirt36)
ovirt-engine-sdk-python = 3.6.0.3-1.el7.centos
Available: ovirt-engine-sdk-python-3.6.2.1-1.el7.noarch
(centos-ovirt36)
ovirt-engine-sdk-python = 3.6.2.1-1.el7
Installing: ovirt-engine-sdk-python-3.6.3.0-1.el7.noarch
(centos-ovirt36)
ovirt-engine-sdk-python = 3.6.3.0-1.el7
I also tried:
yum install http://jenkins.ovirt.org/job/ovirt-engine-sdk_master_build-artifacts-el7-...
which works, but does not satisfy the above dependency. Should it? Is
there any other place to find the 4.0 Python SDK?
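For clarity, the resolver is doing roughly this: checking every available
version against the ">= 4.0.0.0" requirement and finding none that match,
since centos-ovirt36 only carries 3.6.x. A minimal sketch of that check
(it assumes purely numeric dotted versions; real RPM dependency matching
also considers epoch and release):

```python
def matching(available, minimum):
    """Return the available versions that satisfy a '>= minimum'
    requirement. Simplified: assumes numeric dotted version strings."""
    need = tuple(int(p) for p in minimum.split('.'))
    return [v for v in available
            if tuple(int(p) for p in v.split('.')) >= need]

# Versions offered by centos-ovirt36 in the yum output above:
available = ["3.6.0.3", "3.6.2.1", "3.6.3.0"]
matching(available, "4.0.0.0")   # [] -> dependency resolution fails
```
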
Thanks,
--
Didi
nested hosts \ vdsm hooks on rhel 7.2
by emarcian
Hi,
Has anyone ever used vdsm hooks in RHEL 7.2 on top of 3.6?
I have hit some issues with that, related to emulated machines:
[root@localhost ~]# tail -f /var/log/vdsm/vdsm.log |grep ':ERROR:'
Thread-18::ERROR::2016-03-06
07:43:57,019::caps::195::root::(_get_emulated_machines_from_domain)
Error while looking for kvm domain (x86_64) libvirt capabilities
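For context, vdsm's caps module derives the emulated-machine list by
walking the libvirt capabilities XML, and the ERROR above fires when no
<guest>/<arch> entry matching the architecture is found. A hedged sketch
of that lookup (the helper name and the trimmed XML sample are mine for
illustration, not vdsm's actual code):

```python
import xml.etree.ElementTree as ET

def emulated_machines(caps_xml, arch="x86_64"):
    """Collect <machine> names for one arch from libvirt capabilities
    XML. Raises ValueError when no matching <guest>/<arch> exists,
    which is roughly the condition behind the caps.py ERROR above."""
    root = ET.fromstring(caps_xml)
    machines = []
    for guest in root.iter("guest"):
        arch_el = guest.find("arch")
        if arch_el is not None and arch_el.get("name") == arch:
            machines.extend(m.text for m in arch_el.findall("machine"))
    if not machines:
        raise ValueError("no %s guest domain in capabilities" % arch)
    return machines

# Trimmed-down example of what `virsh capabilities` returns:
caps = """<capabilities>
  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <machine>pc-i440fx-rhel7.2.0</machine>
      <machine canonical='pc-i440fx-rhel7.2.0'>pc</machine>
    </arch>
  </guest>
</capabilities>"""
emulated_machines(caps)   # ['pc-i440fx-rhel7.2.0', 'pc']
```

Comparing `virsh capabilities` output on the nested host against what the
sketch expects should show whether the x86_64 guest section is missing.
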
-Eldad
logging VDSM-generated libvirt XML properly
by Martin Polednik
Hello developers!
I've been chasing a bug that led me to an idea for improving our XML
logging. Currently, to see a VM's generated libvirt XML, we have to rely
on vdsm.log. The issue is that the log rotates, so it is easy to miss
the relevant XML when dealing with a busy hypervisor.
Since we're built on libvirt, I was thinking of doing something similar
to what libvirt does with the qemu command line. Each running domain (VM)
has its command line logged in /var/log/libvirt/qemu/${vmname}. This is
great for debugging, as you can mostly just take the command line and
restart the VM.
There is an issue with using the command line directly: networking.
Libvirt uses an additional script to create and bring up a bridge. It is
therefore easier to take the XML and shape it to one's needs.
I propose that we log the generated XML properly, in a fashion similar
to how libvirt logs the command line. This means we would have a path
like /var/log/vdsm/libvirt/${vmname} where the generated XML would be
stored. To minimize the logging requirements, only the last definition
of a VM with that name would be kept. Additionally, exception-level
errors related to that VM could also be stored there.
What do you think, can we afford the space and additional writes per
VM to help the debugging process?
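A minimal sketch of what the writer could look like, assuming the
/var/log/vdsm/libvirt path from above (the function name and the
overwrite policy implementing "keep only the last definition" are my
assumptions, not existing vdsm code):

```python
import os

def log_domain_xml(vmname, domxml, logdir="/var/log/vdsm/libvirt"):
    """Persist the most recently generated libvirt XML for a VM.
    Opening with 'w' overwrites any previous definition, so only the
    last XML per VM name is kept, capping the space used."""
    os.makedirs(logdir, exist_ok=True)  # hypothetical log directory
    path = os.path.join(logdir, vmname)
    with open(path, "w") as f:
        f.write(domxml)
    return path
```

The cost per VM is then one small file and one write per (re)definition,
which seems negligible next to vdsm.log itself.
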
Regards,
mpolednik
Allow access to Cockpit by default after adding a host? Or make it configurable in Engine?
by Fabian Deutsch
Hey,
Node Next will ship Cockpit by default.
When the host is installed, Cockpit can be reached by default on its
port, 9090/tcp.
But after the host is added to Engine, Engine/vdsm sets up its own
iptables rules, which then prevent further access to Cockpit.
How do we want users to control access to Cockpit? That is, where should
users be able to open or close the Cockpit firewall port?
Initially I thought we could open the Cockpit port by default, but this
might be a security issue (brute-force attacks to crack user passwords
through the web interface).
- fabian