[VDSM] Old network flaky test
by Nir Soffer
Hi all,
This test has been failing randomly for a long time now.
I think it is time to mark it as broken_on_ci.
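Judging by the diff below, the two sides differ only by the 192.0.2.3/24
address added during setup, which looks like a race in address reporting.
A minimal sketch of marking the test broken (assuming the broken_on_ci
decorator in tests/testValidation.py takes a reason string - check its
actual signature before applying):

    from testValidation import broken_on_ci

    class TestNetinfo(TestCaseBase):  # as in network/netinfo_test.py

        @broken_on_ci("fails randomly on CI, see the traceback below")
        def test_ip_info(self):
            ...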
23:44:53 ======================================================================
23:44:53 FAIL: test_ip_info (network.netinfo_test.TestNetinfo)
23:44:53 ----------------------------------------------------------------------
23:44:53 Traceback (most recent call last):
23:44:53 File "/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/tests/testValidation.py", line 97, in wrapper
23:44:53 return f(*args, **kwargs)
23:44:53 File "/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/tests/network/netinfo_test.py", line 384, in test_ip_info
23:44:53 [IPV6_ADDR_CIDR]))
23:44:53 AssertionError: Tuples differ: ('192.0.2.2', '255.255.255.0',... != ('192.0.2.2', '255.255.255.0',...
23:44:53
23:44:53 First differing element 2:
23:44:53 ['192.0.2.2/24', '198.51.100.9/24', '198.51.100.11/32']
23:44:53 ['192.0.2.2/24', '198.51.100.9/24', '198.51.100.11/32', '192.0.2.3/24']
23:44:53
23:44:53 ('192.0.2.2',
23:44:53 '255.255.255.0',
23:44:53 - ['192.0.2.2/24', '198.51.100.9/24', '198.51.100.11/32'],
23:44:53 + ['192.0.2.2/24', '198.51.100.9/24', '198.51.100.11/32', '192.0.2.3/24'],
23:44:53 ?                                                        ++++++++++++++++
23:44:53
23:44:53 ['2607:f0d0:1002:51::4/64'])
23:44:53 -------------------- >> begin captured logging << --------------------
23:44:53 2016-11-27 23:44:29,240 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-1 /sbin/ip link add name dummy_zjpxS type dummy (cwd None) (commands:69)
23:44:53 2016-11-27 23:44:29,263 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
23:44:53 2016-11-27 23:44:29,265 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-1 /sbin/ip link set dev dummy_zjpxS up (cwd None) (commands:69)
23:44:53 2016-11-27 23:44:29,294 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
23:44:53 2016-11-27 23:44:29,299 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-1 /sbin/ip -4 addr add dev dummy_zjpxS 192.0.2.2/24 (cwd None) (commands:69)
23:44:53 2016-11-27 23:44:29,311 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
23:44:53 2016-11-27 23:44:29,315 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-1 /sbin/ip -4 addr add dev dummy_zjpxS 192.0.2.3/24 (cwd None) (commands:69)
23:44:53 2016-11-27 23:44:29,329 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
23:44:53 2016-11-27 23:44:29,331 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-1 /sbin/ip -4 addr add dev dummy_zjpxS 198.51.100.9/24 (cwd None) (commands:69)
23:44:53 2016-11-27 23:44:29,341 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
23:44:53 2016-11-27 23:44:29,345 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-1 /sbin/ip -6 addr add dev dummy_zjpxS 2607:f0d0:1002:51::4/64 (cwd None) (commands:69)
23:44:53 2016-11-27 23:44:29,356 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
23:44:53 2016-11-27 23:44:29,358 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-1 /sbin/ip -4 addr add dev dummy_zjpxS 198.51.100.11/32 (cwd None) (commands:69)
23:44:53 2016-11-27 23:44:29,370 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
23:44:53 2016-11-27 23:44:29,379 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-1 /sbin/ip link del dev dummy_zjpxS (cwd None) (commands:69)
23:44:53 2016-11-27 23:44:29,401 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
23:44:53 --------------------- >> end captured logging << ---------------------
[VDSM] Flaky storage test
by Nir Soffer
Hi all,
This is the second time I have seen this error - please report if you have
seen it in your tests too.
The suspicious thing is this line:
21:32:59 2016-11-27 21:30:34,890 DEBUG (MainThread) [root] SUCCESS: <err> = "can't open device /var/tmp/tmpA0v1m6/vol0.img: Image is not in qcow2 format\nno file open, try 'help open'\n"; <rc> = 0 (commands:93)
The qemu-io command succeeded - but it logs a very alarming error...
I think Kevin Wolf would like to see this.
Ala, can you investigate this?
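On a related note, since qemu-io can exit with rc 0 even when it fails to
open the image (as the log shows), the test helper could also treat stderr
output as a failure. A minimal sketch (the helper name and the use of
subprocess are illustrative, not the actual test code):

    import subprocess

    def verify_pattern(path, fmt, offset, length, pattern):
        # qemu-io may return rc 0 even when it could not open the image,
        # so treat any stderr output as an error as well.
        cmd = ["qemu-io", "-f", fmt, "-c",
               "read -P %d -s 0 -l %d %d %d" % (pattern, length, offset, length),
               path]
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = p.communicate()
        if p.returncode != 0 or err:
            raise RuntimeError("qemu-io failed: cmd=%s rc=%d err=%r"
                               % (cmd, p.returncode, err))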
21:32:59 ======================================================================
21:32:59 FAIL: test_commit('1.1', 0, 1, True) (qemuimg_test.TestCommit)
21:32:59 ----------------------------------------------------------------------
21:32:59 Traceback (most recent call last):
21:32:59 File "/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/tests/testlib.py", line 135, in wrapper
21:32:59 return f(self, *args)
21:32:59 File "/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/tests/qemuimg_test.py", line 425, in test_commit
21:32:59 self.assertEqual(os.stat(vol).st_blocks, blocks)
21:32:59 AssertionError: 648 != 776
21:32:59 -------------------- >> begin captured logging << --------------------
21:32:59 2016-11-27 21:30:33,960 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-15 /usr/bin/qemu-img create -f raw /var/tmp/tmpA0v1m6/vol0.img 1048576 (cwd None) (commands:69)
21:32:59 2016-11-27 21:30:33,989 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
21:32:59 2016-11-27 21:30:33,990 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-15 qemu-io -f raw -c 'write -P 240 0 1024' /var/tmp/tmpA0v1m6/vol0.img (cwd None) (commands:69)
21:32:59 2016-11-27 21:30:34,049 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
21:32:59 2016-11-27 21:30:34,050 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-15 /usr/bin/qemu-img create -f qcow2 -o compat=1.1 -b /var/tmp/tmpA0v1m6/vol0.img /var/tmp/tmpA0v1m6/vol1.img 1048576 (cwd None) (commands:69)
21:32:59 2016-11-27 21:30:34,117 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
21:32:59 2016-11-27 21:30:34,118 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-15 qemu-io -f qcow2 -c 'write -P 241 1024 1024' /var/tmp/tmpA0v1m6/vol1.img (cwd None) (commands:69)
21:32:59 2016-11-27 21:30:34,282 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
21:32:59 2016-11-27 21:30:34,282 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-15 /usr/bin/qemu-img create -f qcow2 -o compat=1.1 -b /var/tmp/tmpA0v1m6/vol1.img /var/tmp/tmpA0v1m6/vol2.img 1048576 (cwd None) (commands:69)
21:32:59 2016-11-27 21:30:34,334 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
21:32:59 2016-11-27 21:30:34,334 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-15 qemu-io -f qcow2 -c 'write -P 242 2048 1024' /var/tmp/tmpA0v1m6/vol2.img (cwd None) (commands:69)
21:32:59 2016-11-27 21:30:34,515 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
21:32:59 2016-11-27 21:30:34,515 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-15 /usr/bin/qemu-img create -f qcow2 -o compat=1.1 -b /var/tmp/tmpA0v1m6/vol2.img /var/tmp/tmpA0v1m6/vol3.img 1048576 (cwd None) (commands:69)
21:32:59 2016-11-27 21:30:34,577 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
21:32:59 2016-11-27 21:30:34,577 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-15 qemu-io -f qcow2 -c 'write -P 243 3072 1024' /var/tmp/tmpA0v1m6/vol3.img (cwd None) (commands:69)
21:32:59 2016-11-27 21:30:34,750 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
21:32:59 2016-11-27 21:30:34,751 DEBUG (MainThread) [QemuImg] /usr/bin/taskset --cpu-list 0-15 /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/qemu-img commit -p -t none -b /var/tmp/tmpA0v1m6/vol0.img -f qcow2 /var/tmp/tmpA0v1m6/vol1.img (cwd /var/tmp/tmpA0v1m6) (qemuimg:257)
21:32:59 2016-11-27 21:30:34,817 DEBUG (MainThread) [QemuImg] qemu-img operation progress: 100.0% (qemuimg:323)
21:32:59 2016-11-27 21:30:34,818 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-15 qemu-io -f raw -c 'read -P 240 -s 0 -l 1024 0 1024' /var/tmp/tmpA0v1m6/vol0.img (cwd None) (commands:69)
21:32:59 2016-11-27 21:30:34,856 DEBUG (MainThread) [root] SUCCESS: <err> = ''; <rc> = 0 (commands:93)
21:32:59 2016-11-27 21:30:34,857 DEBUG (MainThread) [root] /usr/bin/taskset --cpu-list 0-15 qemu-io -f qcow2 -c 'read -P 241 -s 0 -l 1024 1024 1024' /var/tmp/tmpA0v1m6/vol0.img (cwd None) (commands:69)
21:32:59 2016-11-27 21:30:34,890 DEBUG (MainThread) [root] SUCCESS: <err> = "can't open device /var/tmp/tmpA0v1m6/vol0.img: Image is not in qcow2 format\nno file open, try 'help open'\n"; <rc> = 0 (commands:93)
21:32:59 --------------------- >> end captured logging << ---------------------
Gerrit headers are not added to commits in vdsm repo
by Tomáš Golembiovský
Hi,
I've noticed that in the vdsm repo the merged commits no longer contain
the info headers added by Gerrit (Reviewed-by/Reviewed-on/etc.).
Is that intentional? If yes, what was the motivation behind this?
The change seems to have happened about 4 days ago, sometime between the
following two commits:
* 505f5da API: Introduce getQemuImageInfo API. [Maor Lipchuk]
* 1c4a39c protocoldetector: Avoid unneeded getpeername() [Nir Soffer]
Thanks,
Tomas
--
Tomáš Golembiovský <tgolembi(a)redhat.com>
[VDSM] test_import_modules randomly failing on travis
by Nir Soffer
Hi all,
Vdsm travis tests have been running fine for some weeks. We have several
random failures, probably bad tests that need to be fixed.
The most common failure is this one - failing only on Python 3.
======================================================================
ERROR: test_import_modules(('a.py', 'b.py', 'a.pyc', 'a.pyioas'), ('a', 'b')) (moduleloader_test.ImportModulesTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/vdsm/tests/testlib.py", line 135, in wrapper
return f(self, *args)
File "/vdsm/tests/moduleloader_test.py", line 51, in test_import_modules
with self._setup_test_modules(files) as module_name:
File "/usr/lib64/python3.5/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/vdsm/tests/moduleloader_test.py", line 42, in _setup_test_modules
yield importlib.import_module(os.path.basename(path))
File "/usr/lib64/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 956, in _find_and_load_unlocked
ImportError: No module named 'tmpozng7rht'
See complete run:
https://travis-ci.org/nirs/vdsm/jobs/178894634
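One possible explanation (an assumption, not verified): on Python 3 the
import finders cache per-directory listings, so a module file created after
the directory was first scanned may not be found until
importlib.invalidate_caches() is called. A minimal self-contained
illustration (not the actual test code):

    import importlib
    import os
    import sys
    import tempfile

    # Write a module file at runtime, as _setup_test_modules does.
    d = tempfile.mkdtemp()
    sys.path.insert(0, d)
    with open(os.path.join(d, "freshmod.py"), "w") as f:
        f.write("x = 1\n")

    # Without this, the import below can fail with ImportError on Python 3
    # because the finder's cached directory listing predates the new file.
    importlib.invalidate_caches()

    print(importlib.import_module("freshmod").x)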
Piotr, can you check why this fails and get this test working on travis?
Thanks,
Nir
Re: [ovirt-devel] [ovirt-users] Recommended Ovirt Implementation on Active-Active Datacenters (Site 1 and Site2) - Same Cluster
by Roy Golan
On Nov 24, 2016 5:01 PM, "Roy Golan" <rgolan(a)redhat.com> wrote:
Reposting to list
> Affinity labels [1] will allow you to label the hosts and vms as site1
> and site2, and that should be it.
>
> - create label per site
> - add the respective label to each vm and host
>
> Unfortunately there is no UI for that, but with the SDK or REST it's easy
> (sketch below).
>
> [1] https://www.ovirt.org/blog/2016/07/affinity-labels/
>
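> For example, a rough sketch with the Python SDK v4 (service and type
> names here are from memory and may differ slightly between SDK versions;
> the url, credentials and ids are placeholders):
>
>     import ovirtsdk4 as sdk
>     import ovirtsdk4.types as types
>
>     connection = sdk.Connection(
>         url='https://engine.example.com/ovirt-engine/api',
>         username='admin@internal',
>         password='password',
>     )
>
>     # Create a label for the site.
>     labels_service = connection.system_service().affinity_labels_service()
>     label = labels_service.add(types.AffinityLabel(name='site1'))
>
>     # Attach the label to a host and to a vm.
>     label_service = labels_service.label_service(label.id)
>     label_service.hosts_service().add(types.Host(id='host-uuid'))
>     label_service.vms_service().add(types.Vm(id='vm-uuid'))
>
>     connection.close()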
>
> On Nov 24, 2016 3:12 PM, "Rogério Ceni Coelho" <
rogeriocenicoelho(a)gmail.com> wrote:
>>
>> Hi Ovirt Jedi´s !!!
>>
>> First of all, congrats about the product !!! I love Ovirt !!!
>>
>> I am using Ovirt 4.0.4 with 10 hosts and 58 virtual machines on two
Active-Active Datacenters using two EMC Vplex + two EMC VNX5500 + eight
Dell Blades + 8 Dell PowerEdge M610 and two M620 Servers.
>>
>> Half servers are on Site 1 and Half servers on Site 2. The same with
VMs. All Sites work as one and have redundant network, storage, power, etc
etc etc ...
>>
>> I want to know what is the best way to set that VM number 1 runs on Site
1 and VM number 2 runs on Site 2 ?
>>
>> On Vmware 5.1 we use DRS Group Manager and on Hyper-V we use Custom
Properties on hosts and on VMs. What we use on oVirt without segregate on
two different Datacenters or two different clusters ?
>>
>> Thanks in advance.
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
master deps broken for el7
by Sandro Bonazzola
Hi,
FYI, jenkins detected broken dependency issues within the vdsm package.
It seems that some packages are missing, so I'm rebuilding the repository.
Some failures may happen in the next 45 minutes.
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
When you 'select *' you actually use ~10% of the data
by Michal Skrivanek
reposting
> On 23 Nov 2016, at 10:22, Arik Hadas <ahadas(a)redhat.com> wrote:
>
> +1
> And this should come as no surprise to anyone who read [1] - it shows that querying 6000 VMs that run on a particular host in the monitoring took 765.774 ms on 3.6 when querying the 'vms' view, while on master the same task (on the same database) took 2.703 ms by querying only the dynamic VM data (vm_dynamic).
> Following this result, we (virt) have already replaced calls to queries that are based on the heavy 'vms' view with calls to lighter queries based on 'vm_static' and 'vm_dynamic' in most of the core virt flows. We also introduced a lighter query named 'vms_monitoring_view' that is already used in several places (and can be used even further) instead of 'vms'. This principle is recommended in other places and for other entities (where appropriate) as well.
>
> [1] http://www.ovirt.org/blog/2016/08/monitoring-improvements-in-ovirt/
>
> ---- Original Message ----
> On Wed, Nov 23, 2016 at 10:34 AM, Roy Golan <rgolan(a)redhat.com> wrote:
>> It turns out that our busiest UI grids - 'Vms', 'Hosts', 'Disks' - are using a
>> fraction of the data they really need. Every time we load the tab, or
>> refresh it, we invoke a 'SearchQuery' which is effectively translated to
>> 'SELECT * from VIEW LIMIT 100'. Our views contain a HUGE amount of joins
>> just to feed the monster, while we don't need it at all.
>>
>> See this table to understand how far we got:
>>
>> Grid name | Grid column count | # of columns in view | # of joins in view
>> ----------|-------------------|----------------------|-------------------
>> Vms tab   | 14                | 161                  | 9
>> Hosts tab | 11                | 137                  | 8
>> Disks     | 9                 | 58                   | 9
>>
>>
>> The numbers are not precise because a few more fields are probably needed
>> for internal logic, but this is very close to the actual number. The number
>> of views involved may be even higher, because some of the views are
>> using... more views.
>>
>>
>> This is not UI specific. Tons of bll code uses the view entities just
>> because it's easy, while tailoring a query to your needs would perform way
>> better. We obviously need to STOP doing that and create code that
>> encourages us to get just the data we need. This is a debate about our DAL
>> layer and its relationship with stored procedures, AND about our coding
>> guidelines for db interaction.
>>
>> I didn't measure the effect of moving to specific queries and using the
>> result, but I'm pretty sure it's going to be dramatic.
>>
>> So please, as a start, stop using the views just because it's easy; prefer
>> the tables and create procedures to support your data needs. A sketch of
>> the idea follows below.
>>
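>> To illustrate the direction (the column names are made up for the
>> example, not the exact engine schema):
>>
>>     # Heavy: what the grids effectively do today - ~161 columns pulled
>>     # through ~9 joins, to render 14 fields.
>>     HEAVY = "SELECT * FROM vms LIMIT 100"
>>
>>     # Light: a hypothetical tailored query reading only what the grid
>>     # shows, straight from vm_dynamic.
>>     LIGHT = """
>>         SELECT vm_guid, status, usage_cpu_percent, usage_mem_percent
>>         FROM vm_dynamic
>>         WHERE run_on_vds = %s
>>     """
>>
>>     def fetch_monitoring_rows(cursor, host_id):
>>         # cursor is any DB-API 2.0 cursor (e.g. psycopg2) connected to
>>         # the engine database.
>>         cursor.execute(LIGHT, (host_id,))
>>         return cursor.fetchall()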
>>
>> I opened a bug to track this and will work on a POC to measure the
>> effect.
>>
>> *Bug 1397691* <https://bugzilla.redhat.com/show_bug.cgi?id=1397691> -
>> [scale] UI grids queries all fields while ~10% is actually needed
>>
>
>