TransactionSupport during command validation
by Tomer Saban
Hi,
I wanted to draw your attention to something I found out while fixing a bug involving the use of transactions in commands.
When calling a command using getBackend().runAction() while another transaction is open, the command will by default become part of the open transaction, because CommandBase specifies TransactionScopeOption.Required as the default 'scope'.
Unfortunately, the validation method (formerly CanDoAction()) will not be aware of the open transaction. This can be problematic in cases where validate() checks things that were changed in the open transaction but not yet committed.
For example, in my case I'm using AddVdsGroupCommand, which creates a new cluster (in an open transaction) and then calls AddCpuProfileCommand, which creates a default CPU profile. The CPU profile's validate() then checks whether the cluster exists, and that check fails because it is not part of the open transaction.
The way to overcome this is to run the validate() method inside the open transaction by adding the @ValidateSupportsTransaction annotation to the command.
The relevant part of the code in CommandBase (line 817):
"""
if (!isValidateSupportsTransaction()) {
    transaction = TransactionSupport.suspend();
}
"""
isValidateSupportsTransaction() checks whether the annotation is present; if it is not, the transaction is suspended via TransactionSupport.suspend().
FYI,
Tomer
8 years, 10 months
Non Responding Treatment not working?
by Roy Golan
I have seen 2 different installations that did not move a host to Non Responsive.
One host was failing to install and stayed in "Installing", and on my setup
a host just stays in "Connecting" and the treatment doesn't kick in.
Anyone seen this?
8 years, 10 months
Request permissions for ovirt-host-deploy maintainership
by Sandro Bonazzola
Hi,
The ovirt-host-deploy access lists have Alon Bar-Lev, who recently left
the project, as their only member.
Since the Python part is otopi based and I have already worked on it in the
past, I'm stepping in and proposing myself as maintainer for that project.
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
8 years, 10 months
oVirt 3.6.2 RC2 build starting
by Sandro Bonazzola
FYI oVirt product maintainers,
An oVirt build for an official release candidate is going to start right
now.
If you're a maintainer for any of the projects included in the oVirt
distribution and you have changes in your package ready to be released,
please:
- bump version and release to be GA ready
- tag your release within git (this implies a GitHub Release will be
created automatically)
- build your packages within jenkins / koji / copr / whatever
- verify that all bugs on MODIFIED have target release and target milestone set.
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
8 years, 10 months
Vdsm sync 5/1 summary after short delay
by Yaniv Bronheim
(fromani, nsoffer, ybronhei, alitke)
- Removing xmlrpc for good - who should accept it? Where do we stand with a
full jsonrpc client? (We didn't reach any conclusions and agreed to
re-raise this topic next week with Piotr.)
- Moving from nose to pytest - generally a good approach to pursue. It
requires some changes in the current testlib.py code and must be an item
for the next major version. (Nir already managed to run most of the tests
with it and noted a few gaps.)
- Exception patches - still in progress, please review (
https://gerrit.ovirt.org/48868)
- python3 effort to cover all asyncProc usage, and to allow importing utils
without having python3-cpopen - https://gerrit.ovirt.org/51421
https://gerrit.ovirt.org/49441 . Still under review.
We didn't take notes during the talk, so I apologize if I forgot to
mention something. Feel free to reply and raise it.
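To make the nose-to-pytest item above concrete, here is a minimal sketch of what a pytest-style test could look like; the function under test and the test names are invented for illustration, not taken from testlib.py:

```python
# Hypothetical example of the pytest style: plain functions, bare
# asserts (pytest rewrites them into helpful failure messages), and
# parametrize instead of hand-rolled test loops.
import pytest


def normalize_path(path):
    # Stand-in for a small utility under test (not real vdsm code).
    return path.rstrip("/") or "/"


def test_trailing_slash_removed():
    assert normalize_path("/rhev/data-center/") == "/rhev/data-center"


@pytest.mark.parametrize("path,expected", [
    ("/", "/"),
    ("///", "/"),
    ("/var/run/vdsm", "/var/run/vdsm"),
])
def test_normalize(path, expected):
    assert normalize_path(path) == expected
```

No TestCase base class or assertEqual helpers are needed, which is a large part of the testlib.py changes the move would require.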
Greetings,
--
*Yaniv Bronhaim.*
8 years, 10 months
Installing the oVirt Node Next Early-Preview
by Fabian Deutsch
Hey,
finally we've got continuous builds of [oVirt Node
Next](http://www.ovirt.org/Node/4.0) in [oVirt's jenkins
instance](http://jenkins.ovirt.org/user/fabiand/my-views/view/Node.next/j...
which are consumable by Anaconda.
**Note:** This is an early preview; the basic oVirt bits (vdsm) are
there and you should be able to add such a host to Engine. But updates
are not yet working, and if they were, then they'd be broken.
You want to try it? Read on.
## Starting the anaconda installer
Regardless of whether you continue with a VM or a bare metal server, the
host you will use:
- needs at least 3GB of free space
- must be connected to the internet
### Using a VM
The safe way to try oVirt Node Next is to install it into a VM;
this is quite easy:
    qemu-img create -f qcow2 dst.img 20G
    qemu-system-x86_64 \
        -enable-kvm \
        -m 2048 \
        -cdrom http://jenkins.ovirt.org/job/fabiand_boo_build_testing/lastSuccessfulBuil... \
        -hda dst.img \
        -serial stdio \
        -net nic -net user,hostfwd=tcp::19090-:9090
Now select _"Interactive installation"_ and wait for the anaconda
installer to come up; this can take a while, as data is downloaded
from the internet.
### Using bare metal (easy - iPXE ISO)
Trying oVirt Node Next on bare metal is not much more difficult.
1. Download our pre-built PXE ISO: <http:>
2. Write the `ovirt-ipxe.iso` onto a CD-ROM, DVD, USB, virtual media
3. Boot your bare metal server using the previously created media
Now select _"Interactive installation"_ and wait for the anaconda
installer to come up; this can take a while, as data is downloaded
from the internet.
### Using bare metal (medium - CentOS 7 ISO + kernel arguments)
Trying oVirt Node Next on bare metal is not much more difficult.
1. Grab a CentOS 7 (Node Next is based on CentOS 7) netinstall
`boot.iso` from a [mirror close to
you](http://mirror.centos.org/centos-7/7/os/x86_64/images/boot.iso).
2. Boot the `boot.iso` on your bare metal server and wait for the ISOLINUX
bootloader
3. Select the default entry and append the following arguments:
`inst.ks=https://gerrit.ovirt.org/gitweb?p=ovirt-node-ng.git;a=blob_plain;f=docs/kickstart/minimal-kickstart.ks;hb=HEAD
inst.updates=http://jenkins.ovirt.org/job/fabiand_boo_build_testing/lastS...
inst.stage2=http://mirror.centos.org/centos-7/7/os/x86_64/`
Now wait for the anaconda installer to come up; this can take a while,
as data is downloaded from the internet.
# Performing the installation
Once anaconda is up, you need to answer a few questions:
1. Select your preferred language and continue
2. On the next page, select your keyboard layout and timezone
3. Select the disk to be used - enter the spoke (the screen which opens
when you click on the disk icon) and select the disk to use.
Sometimes you need to deselect and select the disk icon again.
**Note: Leave the partitioning to automatic. If you are brave you
can do your own partitioning layout, but ensure to use LVM
thin-provisioning.**
4. Now you should have answered all necessary questions, and can
proceed by hitting `Continue`
5. The installation now starts; use the time to at least set a _root_
user password. **Note: The installation can take a while, because the
installation image is now pulled from our jenkins instance (~500MB).**
6. Once the installation is done and you've set a password, you can
reboot the host and it should be ready to use.
The host can now be added to Engine.
**Note:** In case you encounter issues, enable permissive mode on the
host by calling `setenforce 0`.
## Trying Cockpit
This early preview ships with cockpit installed, but currently cockpit
is not enabled and the firewall prevents access. Yes, this will be
fixed. Until then, perform the following steps to take a look at
cockpit:
1. Use the VM installation above
2. After installation, log into the VM
3. Start cockpit: `systemctl start cockpit`
4. Stop firewalld: `systemctl stop firewalld`
5. Browse to <http:> on the host (not inside the VM); the request
will be forwarded to port 9090 inside the VM, where cockpit is
running.
## Feedback
Is this working for you or not? Could you install Node? Could you add
Node? Could you browse Cockpit? Or did your host explode?
Let us know what you think.
## Next steps
The next big step is to enable updates.
On behalf of the Node Team
- fabian
8 years, 10 months
oVirt 3.6.2 RC2 merge / tag / bugzilla reminder
by Sandro Bonazzola
All stable branch maintainers, please make sure to:
- merge all relevant open bugs by Tuesday morning, 11:00 AM TLV time
- check the open bug list:
https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_milestone%3A3....
For every package build (i.e. oVirt product), please make sure every bug in
MODIFIED has the right Target Release and Target Milestone.
The Target Release should state the version of the package you're building
and should match the version you used for the tag of this build
(e.g. for ovirt-engine, tag: ovirt-engine-3.6.2.5, TR: 3.6.2.5).
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
8 years, 10 months
[VDSM] exceptions cleanup
by Nir Soffer
Hi all,
I started the exceptions topic[1], trying to:
- gather all exceptions in vdsm.exception
- make all exceptions inherit from same base class
- add exceptions for virt errors, currently managed in vdsm.define (unfinished)
- add missing tests for the exception logic
With exceptions for all API errors, we can simplify error handling
in virt: you can raise an exception at any level and have one top-level
error handler that knows how to create an error response.
See Piotr's patch [2] adding such a handler.
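As a sketch of the idea (the class and field names below are illustrative, not the actual vdsm.exception API): every API error derives from one base class carrying a code and a message, and a single top-level handler turns any raised exception into an error response:

```python
# Illustrative sketch only -- names do not match the real vdsm.exception
# module; they just demonstrate the single-base-class approach.

class VdsmException(Exception):
    code = 100          # fallback, GeneralException-style code
    message = "General exception"

    def response(self):
        # One place that knows the wire format of an error.
        return {"status": {"code": self.code, "message": self.message}}


class MissingParameter(VdsmException):
    code = 1000
    message = "Missing required parameter"


def dispatch(verb, *args):
    """Top-level handler: any layer may raise, only this layer
    builds the error response."""
    try:
        result = verb(*args)
        return {"status": {"code": 0, "message": "OK"}, "result": result}
    except VdsmException as e:
        return e.response()


def get_stats(vm_id=None):
    # A hypothetical verb: deep code just raises, never builds responses.
    if vm_id is None:
        raise MissingParameter()
    return {"vmId": vm_id}
```

With this shape, code deep inside virt simply raises, and a Bridge-level handler like the one in [2] produces the wire response.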
One issue raised by Piotr was ensuring that we don't have duplicate error
codes. We have tests in tests/main.py checking that storage exceptions and
gluster exceptions use unique error codes.
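The uniqueness check itself is simple; a sketch along these lines (module and attribute names are assumed, the real check lives in tests/main.py and walks the actual exception modules):

```python
# Sketch of a unique-error-code test; names are illustrative.
import inspect


def find_error_codes(module):
    """Yield (code, class name) for every exception class in module
    that defines a 'code' attribute."""
    for name, obj in vars(module).items():
        if inspect.isclass(obj) and issubclass(obj, Exception):
            code = getattr(obj, "code", None)
            if code is not None:
                yield code, name


def assert_unique_codes(modules):
    """Fail if two different exception classes share an error code."""
    seen = {}
    for module in modules:
        for code, name in find_error_codes(module):
            if code in seen and seen[code] != name:
                raise AssertionError(
                    "duplicate error code %d: %s and %s"
                    % (code, seen[code], name))
            seen[code] = name
```

Passing both the storage and gluster exception modules through one call would extend the existing per-module checks into a single cross-module guarantee.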
Another issue raised by Piotr is having a single location for vdsm error
codes, instead of duplicating the information in both vdsm and engine. This
information is actually duplicated in hosted-engine setup and the agent as
well, and maybe in other places.
I think the best solution for this is to generate the Java module used by
engine from vdsm.exception. Python applications can depend on the vdsm-api
package and import vdsm.exception.
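A sketch of what such generation could look like (the emitted Java shape is invented; the real engine-side class would obviously differ):

```python
# Illustrative generator: emit a Java enum of error codes from Python
# exception classes. All names here are assumptions, not the actual
# vdsm.exception or engine code.
import inspect


def to_java_enum(module, enum_name="VdsmErrors"):
    """Render every exception class in module that has a 'code'
    attribute as a constant of a Java enum."""
    entries = []
    for name, obj in sorted(vars(module).items()):
        if inspect.isclass(obj) and issubclass(obj, Exception):
            code = getattr(obj, "code", None)
            if code is not None:
                entries.append("    %s(%d)" % (name, code))
    lines = ["public enum %s {" % enum_name]
    lines.append(",\n".join(entries) + ";")
    lines.append("    public final int code;")
    lines.append("    %s(int code) { this.code = code; }" % enum_name)
    lines.append("}")
    return "\n".join(lines)
```

Running something like this at build time would make vdsm.exception the single source of truth for the codes used by engine, hosted-engine setup and the agent.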
Future changes:
- Merge storage_exception into vdsm.exception
- Move vdsm.exception into the api package
- Cover all exceptions in the unique error code tests
- Replace the error response dict in all verbs with raising an exception
- Drop the error handler in vdsm.storage.dispatcher
The first 4 patches in [1] should be ready for merge.
Thoughts?
[1] https://gerrit.ovirt.org/#/q/status:open+project:vdsm+branch:master+topic...
[2] https://gerrit.ovirt.org/#/c/51549/2/vdsm/rpc/Bridge.py
Nir
8 years, 10 months