[VDSM] WARNING: mailbox tests timeout
by Nir Soffer
Hi all,
I have seen the new storage mailbox test failing with a timeout twice.
If you see this in your builds, please complain here.
The cause for the timeout seems to be:
09:56:02 2017-01-29 09:53:59,830 ERROR (mailbox-spm)
[storage.MailBox.SpmMailMonitor] SPM_MailMonitor: mailbox 7 checksum
failed, not clearing mailbox, clearing newMail. (storage_mailbox:630)
We sometimes see this error in vdsm logs; it needs investigation.
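For context, the SPM mail monitor validates each mailbox against a checksum before acting on it, and discards new mail when the check fails. The sketch below shows the general idea, assuming a simple byte-sum checksum stored in the mailbox's trailing bytes; the names, sizes, and layout here are illustrative, not vdsm's exact on-disk format.

```python
# Hedged sketch of per-mailbox checksum validation (layout is assumed,
# not vdsm's actual format): the last 4 bytes hold a little-endian
# byte-sum checksum of the preceding payload.
import struct

MAILBOX_SIZE = 4096     # assumed mailbox size, matching the 4096-byte dd reads
CHECKSUM_BYTES = 4

def compute_checksum(data):
    """Sum of all payload bytes, truncated to 32 bits."""
    return sum(data) & 0xFFFFFFFF

def make_mailbox(payload):
    """Pad the payload and append its checksum."""
    payload = payload.ljust(MAILBOX_SIZE - CHECKSUM_BYTES, b"\0")
    return payload + struct.pack("<I", compute_checksum(payload))

def verify_mailbox(mailbox):
    """Recompute the checksum and compare it to the stored value."""
    payload = mailbox[:-CHECKSUM_BYTES]
    (stored,) = struct.unpack("<I", mailbox[-CHECKSUM_BYTES:])
    return compute_checksum(payload) == stored
```

A torn or partially written mailbox (e.g. a reader racing a direct-I/O writer) would fail this check the same way the log shows: the monitor logs the failure and skips the mailbox rather than processing corrupt mail.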
09:56:02 ======================================================================
09:56:02 FAIL: test_send_receive (storage_mailbox_test.TestMailbox)
09:56:02 ----------------------------------------------------------------------
09:56:02 Traceback (most recent call last):
09:56:02 File
"/home/jenkins/workspace/vdsm_master_check-patch-fc25-x86_64/vdsm/tests/storage_mailbox_test.py",
line 132, in test_send_receive
09:56:02 self.assertFalse(expired, 'message was not processed on time')
09:56:02 AssertionError: message was not processed on time
09:56:02 -------------------- >> begin captured logging << --------------------
09:56:02 2017-01-29 09:53:55,337 DEBUG (MainThread)
[storage.ThreadPool] Enter - name: mailbox-hsm, numThreads: 5,
waitTimeout: 0.15, maxTasks: 500 (threadPool:36)
09:56:02 2017-01-29 09:53:55,337 DEBUG (mailbox-hsm/0) [root] START
thread <Thread(mailbox-hsm/0, started daemon 140445079619328)>
(func=<bound method WorkerThread.run of
<vdsm.storage.threadPool.WorkerThread object at 0x7fbc0960a0d0>>,
args=(), kwargs={}) (concurrent:183)
09:56:02 2017-01-29 09:53:55,338 DEBUG (mailbox-hsm/1) [root] START
thread <Thread(mailbox-hsm/1, started daemon 140445631756032)>
(func=<bound method WorkerThread.run of
<vdsm.storage.threadPool.WorkerThread object at 0x7fbc096180d0>>,
args=(), kwargs={}) (concurrent:183)
09:56:02 2017-01-29 09:53:55,339 DEBUG (mailbox-hsm/2) [root] START
thread <Thread(mailbox-hsm/2, started daemon 140444826597120)>
(func=<bound method WorkerThread.run of
<vdsm.storage.threadPool.WorkerThread object at 0x7fbc09618b10>>,
args=(), kwargs={}) (concurrent:183)
09:56:02 2017-01-29 09:53:55,339 DEBUG (mailbox-hsm/3) [root] START
thread <Thread(mailbox-hsm/3, started daemon 140444784633600)>
(func=<bound method WorkerThread.run of
<vdsm.storage.threadPool.WorkerThread object at 0x7fbc09618c50>>,
args=(), kwargs={}) (concurrent:183)
09:56:02 2017-01-29 09:53:55,340 DEBUG (MainThread)
[storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-1 /usr/bin/dd
if=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/outbox
iflag=direct,fullblock bs=512 count=8 skip=56 (cwd None) (commands:69)
09:56:02 2017-01-29 09:53:55,340 DEBUG (mailbox-hsm/4) [root] START
thread <Thread(mailbox-hsm/4, started daemon 140445071226624)>
(func=<bound method WorkerThread.run of
<vdsm.storage.threadPool.WorkerThread object at 0x7fbc09618590>>,
args=(), kwargs={}) (concurrent:183)
09:56:02 2017-01-29 09:53:55,646 DEBUG (MainThread)
[storage.Misc.excCmd] SUCCESS: <err> = '8+0 records in\n8+0 records
out\n4096 bytes (4.1 kB, 4.0 KiB) copied, 0.296994 s, 13.8 kB/s\n';
<rc> = 0 (commands:93)
09:56:02 2017-01-29 09:53:55,646 INFO (MainThread)
[storage.MailBox.HsmMailMonitor] HSM_MailMonitor sending mail to SPM -
['/usr/bin/dd',
'of=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/inbox',
'iflag=fullblock', 'oflag=direct', 'conv=notrunc', 'bs=512',
'seek=56'] (storage_mailbox:394)
09:56:02 2017-01-29 09:53:55,647 DEBUG (MainThread)
[storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-1 /usr/bin/dd
of=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/inbox
iflag=fullblock oflag=direct conv=notrunc bs=512 seek=56 (cwd None)
(commands:69)
09:56:02 2017-01-29 09:53:58,798 DEBUG (MainThread)
[storage.Misc.excCmd] SUCCESS: <err> = '8+0 records in\n8+0 records
out\n4096 bytes (4.1 kB, 4.0 KiB) copied, 3.14004 s, 1.3 kB/s\n'; <rc>
= 0 (commands:93)
09:56:02 2017-01-29 09:53:58,807 DEBUG (mailbox-hsm)
[storage.MailBox.HsmMailMonitor] START thread <Thread(mailbox-hsm,
started daemon 140445049673472)> (func=<bound method
HSM_MailMonitor.run of <storage.storage_mailbox.HSM_MailMonitor object
at 0x7fbc0960a9d0>>, args=(), kwargs={}) (concurrent:183)
09:56:02 2017-01-29 09:53:58,808 DEBUG (MainThread)
[storage.Mailbox.HSM] HSM_MailboxMonitor created for pool
5d928855-b09b-47a7-b920-bd2d2eb5808c (storage_mailbox:209)
09:56:02 2017-01-29 09:53:58,809 DEBUG (MainThread)
[storage.ThreadPool] Enter - name: mailbox-spm, numThreads: 5,
waitTimeout: 0.15, maxTasks: 500 (threadPool:36)
09:56:02 2017-01-29 09:53:58,818 DEBUG (mailbox-spm/0) [root] START
thread <Thread(mailbox-spm/0, started daemon 140445041280768)>
(func=<bound method WorkerThread.run of
<vdsm.storage.threadPool.WorkerThread object at 0x7fbc09608890>>,
args=(), kwargs={}) (concurrent:183)
09:56:02 2017-01-29 09:53:58,820 DEBUG (mailbox-spm/1) [root] START
thread <Thread(mailbox-spm/1, started daemon 140444818204416)>
(func=<bound method WorkerThread.run of
<vdsm.storage.threadPool.WorkerThread object at 0x7fbc09608490>>,
args=(), kwargs={}) (concurrent:183)
09:56:02 2017-01-29 09:53:58,820 DEBUG (mailbox-spm/2) [root] START
thread <Thread(mailbox-spm/2, started daemon 140444809811712)>
(func=<bound method WorkerThread.run of
<vdsm.storage.threadPool.WorkerThread object at 0x7fbc09608110>>,
args=(), kwargs={}) (concurrent:183)
09:56:02 2017-01-29 09:53:58,821 DEBUG (mailbox-spm/3) [root] START
thread <Thread(mailbox-spm/3, started daemon 140444801419008)>
(func=<bound method WorkerThread.run of
<vdsm.storage.threadPool.WorkerThread object at 0x7fbc0961bb90>>,
args=(), kwargs={}) (concurrent:183)
09:56:02 2017-01-29 09:53:58,822 DEBUG (mailbox-spm/4) [root] START
thread <Thread(mailbox-spm/4, started daemon 140444793026304)>
(func=<bound method WorkerThread.run of
<vdsm.storage.threadPool.WorkerThread object at 0x7fbc0961bc10>>,
args=(), kwargs={}) (concurrent:183)
09:56:02 2017-01-29 09:53:58,822 DEBUG (MainThread)
[storage.MailBox.SpmMailMonitor] SPM_MailMonitor - clearing outgoing
mail, command is: ['dd',
'of=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/outbox',
'oflag=direct', 'iflag=fullblock', 'conv=notrunc', 'count=1']
(storage_mailbox:583)
09:56:02 2017-01-29 09:53:58,823 DEBUG (MainThread)
[storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-1 dd
of=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/outbox
oflag=direct iflag=fullblock conv=notrunc count=1 bs=40960 (cwd None)
(commands:69)
09:56:02 2017-01-29 09:53:59,561 DEBUG (MainThread)
[storage.Misc.excCmd] SUCCESS: <err> = '1+0 records in\n1+0 records
out\n40960 bytes (41 kB, 40 KiB) copied, 0.721626 s, 56.8 kB/s\n';
<rc> = 0 (commands:93)
09:56:02 2017-01-29 09:53:59,562 DEBUG (mailbox-spm)
[storage.MailBox.SpmMailMonitor] START thread <Thread(mailbox-spm,
started daemon 140444776240896)> (func=<bound method
SPM_MailMonitor.run of <storage.storage_mailbox.SPM_MailMonitor
instance at 0x7fbc0c610b90>>, args=(), kwargs={}) (concurrent:183)
09:56:02 2017-01-29 09:53:59,563 DEBUG (MainThread)
[storage.MailBox.SpmMailMonitor] SPM_MailMonitor created for pool
5d928855-b09b-47a7-b920-bd2d2eb5808c (storage_mailbox:593)
09:56:02 2017-01-29 09:53:59,563 DEBUG (MainThread)
[storage.SPM.Messages.Extend] new extend msg created: domain:
8adbc85e-e554-4ae0-b318-8a5465fe5fe1, volume:
d772f1c6-3ebb-43c3-a42e-73fcd8255a5f (storage_mailbox:125)
09:56:02 2017-01-29 09:53:59,563 DEBUG (mailbox-hsm)
[storage.MailBox.HsmMailMonitor] HSM_MailMonitor - start: 64, end:
128, len: 4096, message(1/63):
'1xtnd\xe1_\xfeeT\x8a\x18\xb3\xe0JT\xe5^\xc8\xdb\x8a_Z%\xd8\xfcs.\xa4\xc3C\xbb>\xc6\xf1r\xd7000000000000006400000000000'
(storage_mailbox:435)
09:56:02 2017-01-29 09:53:59,564 DEBUG (mailbox-hsm)
[storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-1 /usr/bin/dd
if=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/outbox
iflag=direct,fullblock bs=512 count=8 skip=56 (cwd None) (commands:69)
09:56:02 2017-01-29 09:53:59,572 DEBUG (mailbox-spm)
[storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-1 dd
if=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=40960 (cwd None) (commands:69)
09:56:02 2017-01-29 09:53:59,589 DEBUG (mailbox-spm)
[storage.Misc.excCmd] SUCCESS: <err> = '1+0 records in\n1+0 records
out\n40960 bytes (41 kB, 40 KiB) copied, 0.0080494 s, 5.1 MB/s\n';
<rc> = 0 (commands:93)
09:56:02 2017-01-29 09:53:59,600 DEBUG (mailbox-hsm)
[storage.Misc.excCmd] SUCCESS: <err> = '8+0 records in\n8+0 records
out\n4096 bytes (4.1 kB, 4.0 KiB) copied, 0.023761 s, 172 kB/s\n';
<rc> = 0 (commands:93)
09:56:02 2017-01-29 09:53:59,601 INFO (mailbox-hsm)
[storage.MailBox.HsmMailMonitor] HSM_MailMonitor sending mail to SPM -
['/usr/bin/dd',
'of=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/inbox',
'iflag=fullblock', 'oflag=direct', 'conv=notrunc', 'bs=512',
'seek=56'] (storage_mailbox:394)
09:56:02 2017-01-29 09:53:59,601 DEBUG (mailbox-hsm)
[storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-1 /usr/bin/dd
of=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/inbox
iflag=fullblock oflag=direct conv=notrunc bs=512 seek=56 (cwd None)
(commands:69)
09:56:02 2017-01-29 09:53:59,691 DEBUG (mailbox-spm)
[storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-1 dd
if=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=40960 (cwd None) (commands:69)
09:56:02 2017-01-29 09:53:59,828 DEBUG (mailbox-spm)
[storage.Misc.excCmd] SUCCESS: <err> = '1+0 records in\n1+0 records
out\n40960 bytes (41 kB, 40 KiB) copied, 0.128307 s, 319 kB/s\n'; <rc>
= 0 (commands:93)
09:56:02 2017-01-29 09:53:59,830 ERROR (mailbox-spm)
[storage.MailBox.SpmMailMonitor] SPM_MailMonitor: mailbox 7 checksum
failed, not clearing mailbox, clearing newMail. (storage_mailbox:630)
09:56:02 2017-01-29 09:53:59,931 DEBUG (mailbox-spm)
[storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-1 dd
if=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=40960 (cwd None) (commands:69)
09:56:02 2017-01-29 09:53:59,976 DEBUG (mailbox-spm)
[storage.Misc.excCmd] SUCCESS: <err> = '1+0 records in\n1+0 records
out\n40960 bytes (41 kB, 40 KiB) copied, 0.0354083 s, 1.2 MB/s\n';
<rc> = 0 (commands:93)
09:56:02 2017-01-29 09:53:59,977 ERROR (mailbox-spm)
[storage.MailBox.SpmMailMonitor] SPM_MailMonitor: mailbox 7 checksum
failed, not clearing mailbox, clearing newMail. (storage_mailbox:630)
09:56:02 2017-01-29 09:54:00,078 DEBUG (mailbox-spm)
[storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-1 dd
if=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=40960 (cwd None) (commands:69)
09:56:02 2017-01-29 09:54:00,220 DEBUG (mailbox-spm)
[storage.Misc.excCmd] SUCCESS: <err> = '1+0 records in\n1+0 records
out\n40960 bytes (41 kB, 40 KiB) copied, 0.133017 s, 308 kB/s\n'; <rc>
= 0 (commands:93)
09:56:02 2017-01-29 09:54:00,222 ERROR (mailbox-spm)
[storage.MailBox.SpmMailMonitor] SPM_MailMonitor: mailbox 7 checksum
failed, not clearing mailbox, clearing newMail. (storage_mailbox:630)
09:56:02 2017-01-29 09:54:00,323 DEBUG (mailbox-spm)
[storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-1 dd
if=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=40960 (cwd None) (commands:69)
09:56:02 2017-01-29 09:54:00,364 DEBUG (mailbox-spm)
[storage.Misc.excCmd] SUCCESS: <err> = '1+0 records in\n1+0 records
out\n40960 bytes (41 kB, 40 KiB) copied, 0.0325451 s, 1.3 MB/s\n';
<rc> = 0 (commands:93)
09:56:02 2017-01-29 09:54:00,366 ERROR (mailbox-spm)
[storage.MailBox.SpmMailMonitor] SPM_MailMonitor: mailbox 7 checksum
failed, not clearing mailbox, clearing newMail. (storage_mailbox:630)
09:56:02 2017-01-29 09:54:00,466 DEBUG (mailbox-spm)
[storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-1 dd
if=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=40960 (cwd None) (commands:69)
09:56:02 2017-01-29 09:54:00,610 DEBUG (mailbox-spm)
[storage.Misc.excCmd] SUCCESS: <err> = '1+0 records in\n1+0 records
out\n40960 bytes (41 kB, 40 KiB) copied, 0.129057 s, 317 kB/s\n'; <rc>
= 0 (commands:93)
09:56:02 2017-01-29 09:54:00,612 ERROR (mailbox-spm)
[storage.MailBox.SpmMailMonitor] SPM_MailMonitor: mailbox 7 checksum
failed, not clearing mailbox, clearing newMail. (storage_mailbox:630)
09:56:02 2017-01-29 09:54:00,778 DEBUG (mailbox-spm/1) [root] FINISH
thread <Thread(mailbox-spm/1, started daemon 140444818204416)>
(concurrent:186)
09:56:02 2017-01-29 09:54:00,778 DEBUG (mailbox-spm/2) [root] FINISH
thread <Thread(mailbox-spm/2, started daemon 140444809811712)>
(concurrent:186)
09:56:02 2017-01-29 09:54:00,777 DEBUG (mailbox-spm/0) [root] FINISH
thread <Thread(mailbox-spm/0, started daemon 140445041280768)>
(concurrent:186)
09:56:02 2017-01-29 09:54:00,783 DEBUG (mailbox-spm/4) [root] FINISH
thread <Thread(mailbox-spm/4, started daemon 140444793026304)>
(concurrent:186)
09:56:15 2017-01-29 09:54:00,783 DEBUG (mailbox-spm/3) [root] FINISH
thread <Thread(mailbox-spm/3, started daemon 140444801419008)>
(concurrent:186)
09:56:15 2017-01-29 09:54:00,785 INFO (mailbox-spm)
[storage.MailBox.SpmMailMonitor] SPM_MailMonitor - Incoming mail
monitoring thread stopped (storage_mailbox:805)
09:56:15 2017-01-29 09:54:00,785 DEBUG (mailbox-spm)
[storage.MailBox.SpmMailMonitor] FINISH thread <Thread(mailbox-spm,
started daemon 140444776240896)> (concurrent:186)
09:56:15 2017-01-29 09:54:00,901 DEBUG (mailbox-hsm)
[storage.Misc.excCmd] SUCCESS: <err> = '8+0 records in\n8+0 records
out\n4096 bytes (4.1 kB, 4.0 KiB) copied, 1.28855 s, 3.2 kB/s\n'; <rc>
= 0 (commands:93)
09:56:15 2017-01-29 09:54:00,901 INFO (mailbox-hsm)
[storage.MailBox.HsmMailMonitor] HSM_MailboxMonitor - Incoming mail
monitoring thread stopped, clearing outgoing mail
(storage_mailbox:518)
09:56:15 2017-01-29 09:54:00,901 INFO (mailbox-hsm)
[storage.MailBox.HsmMailMonitor] HSM_MailMonitor sending mail to SPM -
['/usr/bin/dd',
'of=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/inbox',
'iflag=fullblock', 'oflag=direct', 'conv=notrunc', 'bs=512',
'seek=56'] (storage_mailbox:394)
09:56:15 2017-01-29 09:54:00,902 DEBUG (mailbox-hsm)
[storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-1 /usr/bin/dd
of=/var/tmp/tmp71q4uJ/5d928855-b09b-47a7-b920-bd2d2eb5808c/mastersd/dom_md/inbox
iflag=fullblock oflag=direct conv=notrunc bs=512 seek=56 (cwd None)
(commands:69)
09:56:15 2017-01-29 09:54:00,912 DEBUG (mailbox-hsm/4) [root] FINISH
thread <Thread(mailbox-hsm/4, started daemon 140445071226624)>
(concurrent:186)
09:56:15 2017-01-29 09:54:00,911 DEBUG (mailbox-hsm/2) [root] FINISH
thread <Thread(mailbox-hsm/2, started daemon 140444826597120)>
(concurrent:186)
09:56:15 2017-01-29 09:54:00,910 DEBUG (mailbox-hsm/1) [root] FINISH
thread <Thread(mailbox-hsm/1, started daemon 140445631756032)>
(concurrent:186)
09:56:15 2017-01-29 09:54:00,910 DEBUG (mailbox-hsm/3) [root] FINISH
thread <Thread(mailbox-hsm/3, started daemon 140444784633600)>
(concurrent:186)
09:56:15 2017-01-29 09:54:00,910 DEBUG (mailbox-hsm/0) [root] FINISH
thread <Thread(mailbox-hsm/0, started daemon 140445079619328)>
(concurrent:186)
09:56:15 2017-01-29 09:54:02,394 DEBUG (mailbox-hsm)
[storage.Misc.excCmd] SUCCESS: <err> = '8+0 records in\n8+0 records
out\n4096 bytes (4.1 kB, 4.0 KiB) copied, 1.47883 s, 2.8 kB/s\n'; <rc>
= 0 (commands:93)
09:56:15 2017-01-29 09:54:02,394 DEBUG (mailbox-hsm)
[storage.MailBox.HsmMailMonitor] FINISH thread <Thread(mailbox-hsm,
started daemon 140445049673472)> (concurrent:186)
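The failing assertion in test_send_receive above waits for the SPM to process the message before a deadline and fails once the wait expires. A generic deadline-polling helper of that shape (a sketch with assumed names, not the test's actual code) looks like:

```python
import time

def wait_until(predicate, timeout=10.0, interval=0.1):
    """Poll predicate until it returns True or the deadline passes.

    Returns False (i.e. "expired") if the condition never became true --
    the case the mailbox test turns into an assertion failure.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

With this pattern, a timeout tuned for a fast local disk can flake on a loaded CI slave: note the dd writes above taking 3.14 s and 1.29 s for 4 KiB, which alone can eat most of a short deadline.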
Fwd: SSO and the engine
by Piotr Kliczewski
I downgraded the JDK and it did not help.
When I attempt to install the packages as in the link, dnf says:
No package nss-3.27.0-1.1.fc25.x86_64 available.
No package nss-softokn-3.27.0-1.0.fc25.x86_64 available.
No package nss-softokn-freebl-3.27.0-1.0.fc25.x86_64 available.
No package nss-sysinit-3.27.0-1.1.fc25.x86_64 available.
No package nss-tools-3.27.0-1.1.fc25.x86_64 available.
No package nss-util-3.27.0-1.0.fc25.x86_64 available.
I am not able to downgrade nss due to conflicts with other packages.
On Fri, Jan 27, 2017 at 2:23 PM, Benny Zlotnik <bzlotnik(a)redhat.com> wrote:
> You can also try downgrading the nss packages, see:
> https://bugzilla.redhat.com/show_bug.cgi?id=1415137#c15
>
> On Fri, Jan 27, 2017 at 3:18 PM, Piotr Kliczewski
> <piotr.kliczewski(a)gmail.com> wrote:
>>
>> I was too quick to send the update. I am able to log in now, but I see
>> a core dump during host add:
>>
>> 2017-01-27 14:14:01,906+01 ERROR
>> [org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-58)
>> [20086bed-e76d-42ef-9ab1-30c8e965374b] Failed to establish session
>> with host 'fedora': SSH session closed during connection
>> 'root(a)192.168.1.102'
>> 2017-01-27 14:14:01,907+01 WARN
>> [org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-58)
>> [20086bed-e76d-42ef-9ab1-30c8e965374b] Validation of action 'AddVds'
>> failed for user admin@internal-authz. Reasons:
>> VAR__ACTION__ADD,VAR__TYPE__HOST,$server
>> 192.168.1.102,VDS_CANNOT_CONNECT_TO_SERVER
>> #
>> # A fatal error has been detected by the Java Runtime Environment:
>> #
>> # SIGSEGV (0xb) at pc=0x00007f7c9d773734, pid=20890,
>> tid=0x00007f7c6c148700
>> #
>> # JRE version: OpenJDK Runtime Environment (8.0_111-b16) (build
>> 1.8.0_111-b16)
>> # Java VM: OpenJDK 64-Bit Server VM (25.111-b16 mixed mode linux-amd64
>> compressed oops)
>> # Problematic frame:
>> # C [libc.so.6+0x14a734] __memcpy_avx_unaligned+0x2c4
>> #
>> # Failed to write core dump. Core dumps have been disabled. To enable
>> core dumping, try "ulimit -c unlimited" before starting Java again
>> #
>> # An error report file with more information is saved as:
>> # /tmp/hs_err_pid20890.log
>> #
>> # If you would like to submit a bug report, please visit:
>> # http://bugreport.java.com/bugreport/crash.jsp
>> #
>> ovirt-engine[20848] ERROR run:554 Error: process terminated with status
>> code -6
>>
>> 2017-01-27 14:14:01,756+01 INFO
>> [org.apache.sshd.common.util.SecurityUtils] (default task-58)
>> BouncyCastle not registered, using the default JCE provider
>> 2017-01-27 14:14:01,870+01 INFO
>> [org.apache.sshd.client.session.ClientSessionImpl]
>> (sshd-SshClient[26c9f7da]-nio2-thread-1) Client session created
>> 2017-01-27 14:14:01,885+01 INFO
>> [org.apache.sshd.client.session.ClientSessionImpl]
>> (sshd-SshClient[26c9f7da]-nio2-thread-1) Server version string:
>> SSH-2.0-OpenSSH_7.2
>> 2017-01-27 14:14:01,886+01 INFO
>> [org.apache.sshd.client.session.ClientSessionImpl]
>> (sshd-SshClient[26c9f7da]-nio2-thread-1) Kex: server->client
>> aes128-ctr hmac-sha2-256 none
>> 2017-01-27 14:14:01,886+01 INFO
>> [org.apache.sshd.client.session.ClientSessionImpl]
>> (sshd-SshClient[26c9f7da]-nio2-thread-1) Kex: client->server
>> aes128-ctr hmac-sha2-256 none
>> 2017-01-27 14:14:01,896+01 WARN
>> [org.apache.sshd.client.session.ClientSessionImpl]
>> (sshd-SshClient[26c9f7da]-nio2-thread-1) Exception caught:
>> java.security.ProviderException: java.lang.NegativeArraySizeException
>> at
>> sun.security.ec.ECKeyPairGenerator.generateKeyPair(ECKeyPairGenerator.java:147)
>> at
>> java.security.KeyPairGenerator$Delegate.generateKeyPair(KeyPairGenerator.java:703)
>> [rt.jar:1.8.0_111]
>> at org.apache.sshd.common.kex.ECDH.getE(ECDH.java:59)
>> at
>> org.apache.sshd.client.kex.AbstractDHGClient.init(AbstractDHGClient.java:78)
>> at
>> org.apache.sshd.common.session.AbstractSession.doHandleMessage(AbstractSession.java:359)
>> at
>> org.apache.sshd.common.session.AbstractSession.handleMessage(AbstractSession.java:295)
>> at
>> org.apache.sshd.client.session.ClientSessionImpl.handleMessage(ClientSessionImpl.java:256)
>> at
>> org.apache.sshd.common.session.AbstractSession.decode(AbstractSession.java:731)
>> at
>> org.apache.sshd.common.session.AbstractSession.messageReceived(AbstractSession.java:277)
>> at
>> org.apache.sshd.common.AbstractSessionIoHandler.messageReceived(AbstractSessionIoHandler.java:54)
>> at
>> org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:187)
>> at
>> org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
>> at
>> org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
>> at java.security.AccessController.doPrivileged(Native Method)
>> [rt.jar:1.8.0_111]
>> at
>> org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)
>> at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126) [rt.jar:1.8.0_111]
>> at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157) [rt.jar:1.8.0_111]
>> at
>> sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)
>> [rt.jar:1.8.0_111]
>> at
>> sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:276)
>> [rt.jar:1.8.0_111]
>> at
>> sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:297)
>> [rt.jar:1.8.0_111]
>> at
>> java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:420)
>> [rt.jar:1.8.0_111]
>> at
>> org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)
>> at
>> org.apache.sshd.common.io.nio2.Nio2Connector$1.onCompleted(Nio2Connector.java:53)
>> at
>> org.apache.sshd.common.io.nio2.Nio2Connector$1.onCompleted(Nio2Connector.java:46)
>> at
>> org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
>> at java.security.AccessController.doPrivileged(Native Method)
>> [rt.jar:1.8.0_111]
>> at
>> org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)
>> at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126) [rt.jar:1.8.0_111]
>> at sun.nio.ch.Invoker$2.run(Invoker.java:218) [rt.jar:1.8.0_111]
>> at
>> sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
>> [rt.jar:1.8.0_111]
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> [rt.jar:1.8.0_111]
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> [rt.jar:1.8.0_111]
>> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_111]
>> Caused by: java.lang.NegativeArraySizeException
>> at sun.security.ec.ECKeyPairGenerator.generateECKeyPair(Native Method)
>> at
>> sun.security.ec.ECKeyPairGenerator.generateKeyPair(ECKeyPairGenerator.java:128)
>> ... 32 more
>>
>> On Fri, Jan 27, 2017 at 1:56 PM, Piotr Kliczewski
>> <piotr.kliczewski(a)gmail.com> wrote:
>> > Thank you Juan, it fixed my issue.
>> >
>> > I updated java.security and changed:
>> >
>> > from
>> >
>> > jdk.tls.disabledAlgorithms=SSLv3, RC4, MD5withRSA, DH keySize < 768
>> >
>> > to
>> >
>> > jdk.tls.disabledAlgorithms=SSLv3, DH keySize < 768, EC, ECDHE, ECDH
>> >
>> > Thanks,
>> > Piotr
>> >
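The java.security workaround quoted above can be applied mechanically. Below is a minimal sketch that rewrites the `jdk.tls.disabledAlgorithms` line in the file's text; the new value is copied verbatim from the mail. Note that disabling EC/ECDHE/ECDH removes elliptic-curve cipher suites entirely, so this is a temporary diagnostic step, not a recommended permanent configuration.

```python
# Sketch: rewrite the jdk.tls.disabledAlgorithms security property, as in
# the workaround quoted above. Operates on text, not the live file; the
# replacement value comes from the mail and trades EC support for
# avoiding the crashing native EC code path.
import re

NEW_VALUE = "SSLv3, DH keySize < 768, EC, ECDHE, ECDH"

def patch_disabled_algorithms(text):
    return re.sub(
        r"^jdk\.tls\.disabledAlgorithms=.*$",
        "jdk.tls.disabledAlgorithms=" + NEW_VALUE,
        text,
        flags=re.MULTILINE,
    )
```

On JDK 8 the file being edited would typically be `$JAVA_HOME/jre/lib/security/java.security` (path assumed from the JDK layout, not stated in the mail).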
>> > On Fri, Jan 27, 2017 at 1:42 PM, Juan Hernández <jhernand(a)redhat.com>
>> > wrote:
>> >> See this Piotr:
>> >>
>> >>
>> >>
>> >> http://post-office.corp.redhat.com/archives/rhev-devel/2017-January/msg00...
>> >>
>> >> Benny, may be worth publishing it to the upstream devel list.
>> >>
>> >> On 01/27/2017 01:35 PM, Piotr Kliczewski wrote:
>> >>> All,
>> >>>
>> >>> I pulled the latest source from master and rebuilt my engine. Every
>> >>> time I attempt to login I see:
>> >>>
>> >>> 2017-01-27 13:22:51,403+01 INFO
>> >>> [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default
>> >>> task-54) [] User admin@internal successfully logged in with scopes:
>> >>> ovirt-app-admin ovirt-app-api ovirt-app-portal
>> >>> ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all
>> >>> ovirt-ext=token-info:authz-search
>> >>> ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate
>> >>> ovirt-ext=token:password-access
>> >>> #
>> >>> # A fatal error has been detected by the Java Runtime Environment:
>> >>> #
>> >>> # SIGSEGV (0xb) at pc=0x00007f514eb45734, pid=2519,
>> >>> tid=0x00007f51119a6700
>> >>> #
>> >>> # JRE version: OpenJDK Runtime Environment (8.0_111-b16) (build
>> >>> 1.8.0_111-b16)
>> >>> # Java VM: OpenJDK 64-Bit Server VM (25.111-b16 mixed mode linux-amd64
>> >>> compressed oops)
>> >>> # Problematic frame:
>> >>> # C [libc.so.6+0x14a734] __memcpy_avx_unaligned+0x2c4
>> >>> #
>> >>> # Failed to write core dump. Core dumps have been disabled. To enable
>> >>> core dumping, try "ulimit -c unlimited" before starting Java again
>> >>> #
>> >>> # An error report file with more information is saved as:
>> >>> # /tmp/hs_err_pid2519.log
>> >>> #
>> >>> # If you would like to submit a bug report, please visit:
>> >>> # http://bugreport.java.com/bugreport/crash.jsp
>> >>> #
>> >>> ovirt-engine[2471] ERROR run:554 Error: process terminated with status
>> >>> code -6
>> >>>
>> >>> I enabled ssl debug to find:
>> >>>
>> >>> 2017-01-27 13:22:37,641+01 INFO [stdout] (default I/O-2) default
>> >>> I/O-2, fatal error: 80: problem unwrapping net record
>> >>> 2017-01-27 13:22:37,642+01 INFO [stdout] (default I/O-2)
>> >>> java.lang.RuntimeException: java.lang.NegativeArraySizeException
>> >>> 2017-01-27 13:22:37,642+01 INFO [stdout] (default I/O-2) %%
>> >>> Invalidated: [Session-1, SSL_NULL_WITH_NULL_NULL]
>> >>> 2017-01-27 13:22:37,643+01 INFO [stdout] (default I/O-2) default
>> >>> I/O-2, SEND TLSv1.2 ALERT: fatal, description = internal_error
>> >>> 2017-01-27 13:22:37,643+01 INFO [stdout] (default I/O-2) default
>> >>> I/O-2, WRITE: TLSv1.2 Alert, length = 2
>> >>> 2017-01-27 13:22:37,643+01 INFO [stdout] (default I/O-2) default
>> >>> I/O-2, called closeInbound()
>> >>> 2017-01-27 13:22:37,643+01 INFO [stdout] (default I/O-2) default
>> >>> I/O-2, fatal: engine already closed. Rethrowing
>> >>> javax.net.ssl.SSLException: Inbound closed before receiving peer's
>> >>> close_notify: possible truncation attack?
>> >>> 2017-01-27 13:22:37,643+01 INFO [stdout] (default I/O-2) default
>> >>> I/O-2, called closeOutbound()
>> >>> 2017-01-27 13:22:37,643+01 INFO [stdout] (default I/O-2) default
>> >>> I/O-2, closeOutboundInternal()
>> >>> 2017-01-27 13:22:37,644+01 INFO [stdout] (default task-1) default
>> >>> task-1, received EOFException: error
>> >>> 2017-01-27 13:22:37,644+01 INFO [stdout] (default task-1) default
>> >>> task-1, handling exception: javax.net.ssl.SSLHandshakeException:
>> >>> Remote host closed connection during handshake
>> >>> 2017-01-27 13:22:37,645+01 INFO [stdout] (default task-1) default
>> >>> task-1, SEND TLSv1.2 ALERT: fatal, description = handshake_failure
>> >>> 2017-01-27 13:22:37,645+01 INFO [stdout] (default task-1) default
>> >>> task-1, WRITE: TLSv1.2 Alert, length = 2
>> >>> 2017-01-27 13:22:37,645+01 INFO [stdout] (default task-1) [Raw
>> >>> write]: length = 7
>> >>> 2017-01-27 13:22:37,647+01 INFO [stdout] (default task-1) 0000: 15 03
>> >>> 03 00 02 02 28 ......(
>> >>> 2017-01-27 13:22:37,647+01 INFO [stdout] (default task-1) default
>> >>> task-1, called closeSocket()
>> >>> 2017-01-27 13:22:37,644+01 ERROR [org.xnio.nio] (default I/O-2)
>> >>> XNIO000011: Task io.undertow.protocols.ssl.SslConduit$5$1@6d665208
>> >>> failed with an exception: java.lang.RuntimeException:
>> >>> java.lang.NegativeArraySizeException
>> >>> at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1429)
>> >>> [jsse.jar:1.8.0_111]
>> >>> at
>> >>> sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:535)
>> >>> [jsse.jar:1.8.0_111]
>> >>> at
>> >>> sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:813)
>> >>> [jsse.jar:1.8.0_111]
>> >>> at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:781)
>> >>> [jsse.jar:1.8.0_111]
>> >>> at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
>> >>> [rt.jar:1.8.0_111]
>> >>> at io.undertow.protocols.ssl.SslConduit.doUnwrap(SslConduit.java:742)
>> >>> at
>> >>> io.undertow.protocols.ssl.SslConduit.doHandshake(SslConduit.java:639)
>> >>> at io.undertow.protocols.ssl.SslConduit.access$900(SslConduit.java:63)
>> >>> at io.undertow.protocols.ssl.SslConduit$5$1.run(SslConduit.java:1035)
>> >>> at org.xnio.nio.WorkerThread.safeRun(WorkerThread.java:588)
>> >>> [xnio-nio-3.4.0.Final.jar:3.4.0.Final]
>> >>> at org.xnio.nio.WorkerThread.run(WorkerThread.java:468)
>> >>> [xnio-nio-3.4.0.Final.jar:3.4.0.Final]
>> >>> Caused by: java.security.ProviderException:
>> >>> java.lang.NegativeArraySizeException
>> >>> at
>> >>> sun.security.ec.ECKeyPairGenerator.generateKeyPair(ECKeyPairGenerator.java:147)
>> >>> at
>> >>> java.security.KeyPairGenerator$Delegate.generateKeyPair(KeyPairGenerator.java:703)
>> >>> [rt.jar:1.8.0_111]
>> >>> at sun.security.ssl.ECDHCrypt.<init>(ECDHCrypt.java:64)
>> >>> [jsse.jar:1.8.0_111]
>> >>> at
>> >>> sun.security.ssl.ServerHandshaker.setupEphemeralECDHKeys(ServerHandshaker.java:1432)
>> >>> [jsse.jar:1.8.0_111]
>> >>> at
>> >>> sun.security.ssl.ServerHandshaker.trySetCipherSuite(ServerHandshaker.java:1219)
>> >>> [jsse.jar:1.8.0_111]
>> >>> at
>> >>> sun.security.ssl.ServerHandshaker.chooseCipherSuite(ServerHandshaker.java:1023)
>> >>> [jsse.jar:1.8.0_111]
>> >>> at
>> >>> sun.security.ssl.ServerHandshaker.clientHello(ServerHandshaker.java:738)
>> >>> [jsse.jar:1.8.0_111]
>> >>> at
>> >>> sun.security.ssl.ServerHandshaker.processMessage(ServerHandshaker.java:221)
>> >>> [jsse.jar:1.8.0_111]
>> >>> at sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
>> >>> [jsse.jar:1.8.0_111]
>> >>> at sun.security.ssl.Handshaker$1.run(Handshaker.java:919)
>> >>> [jsse.jar:1.8.0_111]
>> >>> at sun.security.ssl.Handshaker$1.run(Handshaker.java:916)
>> >>> [jsse.jar:1.8.0_111]
>> >>> at java.security.AccessController.doPrivileged(Native Method)
>> >>> [rt.jar:1.8.0_111]
>> >>> at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1369)
>> >>> [jsse.jar:1.8.0_111]
>> >>> at io.undertow.protocols.ssl.SslConduit$5.run(SslConduit.java:1023)
>> >>> at
>> >>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> >>> [rt.jar:1.8.0_111]
>> >>> at
>> >>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> >>> [rt.jar:1.8.0_111]
>> >>> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_111]
>> >>> Caused by: java.lang.NegativeArraySizeException
>> >>> at sun.security.ec.ECKeyPairGenerator.generateECKeyPair(Native Method)
>> >>> at
>> >>> sun.security.ec.ECKeyPairGenerator.generateKeyPair(ECKeyPairGenerator.java:128)
>> >>> ... 16 more
>> >>>
>> >>> Are we aware of the issue? Is there any workaround?
>> >>>
>> >>> I am using fedora 24 with all recent updates applied.
>> >>>
>> >>> Thanks,
>> >>> Piotr
>> >>>
>> >>>
>> >>>
>> >>> _______________________________________________
>> >>> Devel mailing list
>> >>> Devel(a)ovirt.org
>> >>> http://lists.ovirt.org/mailman/listinfo/devel
>> >>>
>> >>
>
>
write]: length = 7
2017-01-27 13:22:37,647+01 INFO [stdout] (default task-1) 0000: 15 03
03 00 02 02 28 ......(
2017-01-27 13:22:37,647+01 INFO [stdout] (default task-1) default
task-1, called closeSocket()
2017-01-27 13:22:37,644+01 ERROR [org.xnio.nio] (default I/O-2)
XNIO000011: Task io.undertow.protocols.ssl.SslConduit$5$1@6d665208
failed with an exception: java.lang.RuntimeException:
java.lang.NegativeArraySizeException
    at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1429) [jsse.jar:1.8.0_111]
    at sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:535) [jsse.jar:1.8.0_111]
    at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:813) [jsse.jar:1.8.0_111]
    at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:781) [jsse.jar:1.8.0_111]
    at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624) [rt.jar:1.8.0_111]
    at io.undertow.protocols.ssl.SslConduit.doUnwrap(SslConduit.java:742)
    at io.undertow.protocols.ssl.SslConduit.doHandshake(SslConduit.java:639)
    at io.undertow.protocols.ssl.SslConduit.access$900(SslConduit.java:63)
    at io.undertow.protocols.ssl.SslConduit$5$1.run(SslConduit.java:1035)
    at org.xnio.nio.WorkerThread.safeRun(WorkerThread.java:588) [xnio-nio-3.4.0.Final.jar:3.4.0.Final]
    at org.xnio.nio.WorkerThread.run(WorkerThread.java:468) [xnio-nio-3.4.0.Final.jar:3.4.0.Final]
Caused by: java.security.ProviderException: java.lang.NegativeArraySizeException
    at sun.security.ec.ECKeyPairGenerator.generateKeyPair(ECKeyPairGenerator.java:147)
    at java.security.KeyPairGenerator$Delegate.generateKeyPair(KeyPairGenerator.java:703) [rt.jar:1.8.0_111]
    at sun.security.ssl.ECDHCrypt.<init>(ECDHCrypt.java:64) [jsse.jar:1.8.0_111]
    at sun.security.ssl.ServerHandshaker.setupEphemeralECDHKeys(ServerHandshaker.java:1432) [jsse.jar:1.8.0_111]
    at sun.security.ssl.ServerHandshaker.trySetCipherSuite(ServerHandshaker.java:1219) [jsse.jar:1.8.0_111]
    at sun.security.ssl.ServerHandshaker.chooseCipherSuite(ServerHandshaker.java:1023) [jsse.jar:1.8.0_111]
    at sun.security.ssl.ServerHandshaker.clientHello(ServerHandshaker.java:738) [jsse.jar:1.8.0_111]
    at sun.security.ssl.ServerHandshaker.processMessage(ServerHandshaker.java:221) [jsse.jar:1.8.0_111]
    at sun.security.ssl.Handshaker.processLoop(Handshaker.java:979) [jsse.jar:1.8.0_111]
    at sun.security.ssl.Handshaker$1.run(Handshaker.java:919) [jsse.jar:1.8.0_111]
    at sun.security.ssl.Handshaker$1.run(Handshaker.java:916) [jsse.jar:1.8.0_111]
    at java.security.AccessController.doPrivileged(Native Method) [rt.jar:1.8.0_111]
    at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1369) [jsse.jar:1.8.0_111]
    at io.undertow.protocols.ssl.SslConduit$5.run(SslConduit.java:1023)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_111]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_111]
    at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_111]
Caused by: java.lang.NegativeArraySizeException
    at sun.security.ec.ECKeyPairGenerator.generateECKeyPair(Native Method)
    at sun.security.ec.ECKeyPairGenerator.generateKeyPair(ECKeyPairGenerator.java:128)
    ... 16 more
Are we aware of the issue? Is there any workaround?
I am using Fedora 24 with all recent updates applied.
Thanks,
Piotr
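As a side note, the `[Raw write]` hex dump in the log above is consistent with the fatal alert the engine reports sending: the seven bytes decode to a TLS 1.2 alert record of type handshake_failure. A short sketch (byte meanings per RFC 5246):

```python
# Decode the 7-byte TLS record from the "[Raw write]" dump:
# 15 03 03 00 02 02 28
record = bytes.fromhex("15030300020228")

content_type = record[0]                     # 0x15 -> Alert record
version = (record[1], record[2])             # (3, 3) -> TLS 1.2
length = int.from_bytes(record[3:5], "big")  # 2-byte alert payload
level = record[5]                            # 2  -> fatal
description = record[6]                      # 40 -> handshake_failure (RFC 5246)

print(length, level, description)  # 2 2 40
```

So the truncated handshake the client sees really is the server aborting after the EC key-pair generation blows up.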
Re: [ovirt-devel] [ovirt-users] Translate hint
by Yedidyah Bar David
On Wed, Jan 25, 2017 at 6:28 PM, Gianluca Cecchi
<gianluca.cecchi(a)gmail.com> wrote:
> Hello,
> I'm checking Italian Translation in 4.0.6 and completing it for 4.1 (now at
> 92%).
> Suppose I find an untranslated / badly translated word: what is the best and
> quickest way to go to Zanata, find the reference for it, and correct it?
> For example in 4.0.6 I'm in dashboard and at bottom I have the three labels
>
> Alert, Events, Tasks
>
> I click Alerts and then I want to dismiss an alert because it happened while
> I was configuring Power Management: I faked the password to verify that the
> test failed and I got the alert
>
> Power Management test failed for Host .....
>
> I right-click on the alert line and I see three options; the first and
> third ones are untranslated
>
> they are
>
> Dismiss Alert
> Display All
>
> (I'm going to send a separate e-mail to ask about alerts and events in
> general).
>
> If I go to Zanata, select oVirt 4.0 and Italian language line and use the
> search function I don't find anything....
> I see there is an option to
> Download All for Offline Translation
> and
> Export Italian Documents to TMX
> I have not tried them yet, but I would prefer to directly correct online
> while I find some things to correct.
Adding devel and Yuko.
--
Didi
[ANN] oVirt 4.1.0 Second Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the Second
Release Candidate of oVirt 4.1.0 for testing, as of January 26th, 2017.
This is pre-release software. Please take a look at our community page[1]
to know how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.
This update is the second release candidate of the 4.1 release series.
4.1.0 brings more than 260 enhancements and 790 bugfixes, including 340
high- or urgent-severity fixes, on top of the oVirt 4.0 series.
See the release notes [3] for installation / upgrade instructions and a
list of new features and bugs fixed.
This release is available now for:
* Fedora 24 (tech preview)
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* Fedora 24 (tech preview)
* oVirt Node 4.1
Notes:
- oVirt Live iso is already available[5]
- oVirt Node NG iso is already available[5]
- Hosted Engine appliance is already available.
- oVirt Windows Guest Tools iso is already available[5]
A release management page including planned schedule is also available[4]
Additional Resources:
* Read more about the oVirt 4.1.0 release highlights:
http://www.ovirt.org/release/4.1.0/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.1.0/
[4]
http://www.ovirt.org/develop/release-management/releases/4.1/release-mana...
[5] http://resources.ovirt.org/pub/ovirt-4.1-pre/iso/
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[URGENT][ACTION REQUIRED] Repository closure failure for oVirt 4.1.0 RC2
by Sandro Bonazzola
00:00:44.789 package: ovirt-engine-backend-4.1.0.3-1.fc24.noarch from check-custom-fc24
00:00:44.789 unresolved deps:
00:00:44.789     vdsm-jsonrpc-java >= 0:1.3.8
[sbonazzo@sbonazzo vdsm-jsonrpc-java] [vdsm-jsonrpc-java:ovirt-4.1]$ git
tag --list |grep 1.3
v1.3.3
v1.3.4
v1.3.5
v1.3.6
v1.3.7
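The tag listing above stops at v1.3.7, which is exactly why the repository closure fails. The kind of check that would catch this early can be sketched in a few lines (a toy helper, not the real rpm resolver; it assumes plain dotted versions with an optional `v` prefix):

```python
def tag_satisfies(tags, required):
    """Return True if any git tag is >= the required dotted version."""
    req = tuple(int(p) for p in required.split("."))
    return any(tuple(int(p) for p in t.lstrip("v").split(".")) >= req
               for t in tags)

# The tags listed above stop at v1.3.7, so a >= 1.3.8 requirement fails:
tags = ["v1.3.3", "v1.3.4", "v1.3.5", "v1.3.6", "v1.3.7"]
print(tag_satisfies(tags, "1.3.7"))  # True
print(tag_satisfies(tags, "1.3.8"))  # False -> repo closure breaks
```

A real check would of course use rpm's epoch-version-release comparison rather than tuple comparison, but the shape of the test is the same.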
We really need a test in check-patch.sh which builds the rpms and installs
them, so we can detect these errors earlier.
Piotr, please provide vdsm-jsonrpc-java v1.3.8, thanks.
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[ OST Failure Report ] [ oVirt master ] [ 25.1.17 ] [hotplug_disk]
by Gil Shinar
Hi,
The test in $subject has failed. Below please find the exception I found
in engine.log.
{"jsonrpc": "2.0", "id": "4b8eb810-c52d-4c92-a792-e554f87c9493",
"error": {"message": "Cannot deactivate Logical Volume: ('General
Storage Exception: (\"5 [] [\\' WARNING: Not using lvmetad because
config setting use_lvmetad=0.\\', \\' WARNING: To avoid corruption,
rescan devices to make changes visible (pvscan --cache).\\', \\'
Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5
in use.\\', \\' Logical volume
f9dce023-0282-4185-9ad9-fe71c3975106/ab4e8962-6196-485e-be2a-d5791a38eaeb
in use.\\']\\\\nf9dce023-0282-4185-9ad9-fe71c3975106/[\\'778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5\\',
\\'ab4e8962-6196-485e-be2a-d5791a38eaeb\\']\",)',)", "code": 552}}
2017-01-25 05:05:49,037-05 DEBUG
[org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
(ResponseWorker) [] Message received: {"jsonrpc": "2.0", "id":
"4b8eb810-c52d-4c92-a792-e554f87c9493", "error": {"message": "Cannot
deactivate Logical Volume: ('General Storage Exception: (\"5 [] [\\'
WARNING: Not using lvmetad because config setting use_lvmetad=0.\\',
\\' WARNING: To avoid corruption, rescan devices to make changes
visible (pvscan --cache).\\', \\' Logical volume
f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5
in use.\\', \\' Logical volume
f9dce023-0282-4185-9ad9-fe71c3975106/ab4e8962-6196-485e-be2a-d5791a38eaeb
in use.\\']\\\\nf9dce023-0282-4185-9ad9-fe71c3975106/[\\'778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5\\',
\\'ab4e8962-6196-485e-be2a-d5791a38eaeb\\']\",)',)", "code": 552}}
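The interesting bits of that error payload — the error code and which logical volumes are still in use — can be pulled out mechanically. A sketch over an abbreviated copy of the message (the `...` elisions stand in for the LVM warning noise):

```python
import re

# Abbreviated copy of the JSON-RPC error object from the log above.
error = {
    "code": 552,
    "message": (
        "Cannot deactivate Logical Volume: ('General Storage Exception: ... "
        "Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/"
        "778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5 in use. "
        "Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/"
        "ab4e8962-6196-485e-be2a-d5791a38eaeb in use. ...')"
    ),
}

# Each busy LV appears in the message as "Logical volume <vg>/<lv> in use".
busy = re.findall(r"Logical volume (\S+/\S+) in use", error["message"])
print(error["code"], len(busy))  # 552 2
```

Both busy LVs live in the same VG (`f9dce023-...`), i.e. the teardown fails because two volumes of that image are still open.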
2017-01-25 05:05:49,047-05 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler5) [59ab00f1] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VDSM lago-basic-suite-master-host0
command TeardownImageVDS failed: Cannot deactivate Logical Volume:
('General Storage Exception: ("5 [] [\' WARNING: Not using lvmetad
because config setting use_lvmetad=0.\', \' WARNING: To avoid
corruption, rescan devices to make changes visible (pvscan
--cache).\', \' Logical volume
f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5
in use.\', \' Logical volume
f9dce023-0282-4185-9ad9-fe71c3975106/ab4e8962-6196-485e-be2a-d5791a38eaeb
in use.\']\\nf9dce023-0282-4185-9ad9-fe71c3975106/[\'778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5\',
\'ab4e8962-6196-485e-be2a-d5791a38eaeb\']",)',)
2017-01-25 05:05:49,047-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand]
(DefaultQuartzScheduler5) [59ab00f1] Command
'TeardownImageVDSCommand(HostName = lago-basic-suite-master-host0,
ImageActionsVDSCommandParameters:{runAsync='true',
hostId='60e52527-b637-445c-b408-0275d347e76a'})' execution failed:
VDSGenericException: VDSErrorException: Failed in vdscommand to
TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General
Storage Exception: ("5 [] [\' WARNING: Not using lvmetad because
config setting use_lvmetad=0.\', \' WARNING: To avoid corruption,
rescan devices to make changes visible (pvscan --cache).\', \'
Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5
in use.\', \' Logical volume
f9dce023-0282-4185-9ad9-fe71c3975106/ab4e8962-6196-485e-be2a-d5791a38eaeb
in use.\']\\nf9dce023-0282-4185-9ad9-fe71c3975106/[\'778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5\',
\'ab4e8962-6196-485e-be2a-d5791a38eaeb\']",)',)
2017-01-25 05:05:49,047-05 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand]
(DefaultQuartzScheduler5) [59ab00f1] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed in vdscommand to
TeardownImageVDS, error = Cannot deactivate Logical Volume: ('General
Storage Exception: ("5 [] [\' WARNING: Not using lvmetad because
config setting use_lvmetad=0.\', \' WARNING: To avoid corruption,
rescan devices to make changes visible (pvscan --cache).\', \'
Logical volume f9dce023-0282-4185-9ad9-fe71c3975106/778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5
in use.\', \' Logical volume
f9dce023-0282-4185-9ad9-fe71c3975106/ab4e8962-6196-485e-be2a-d5791a38eaeb
in use.\']\\nf9dce023-0282-4185-9ad9-fe71c3975106/[\'778cbc5b-a9df-46d7-bc80-1a66f7d3e2b5\',
\'ab4e8962-6196-485e-be2a-d5791a38eaeb\']",)',)
    at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:182) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.vdsbroker.ImageActionsVDSCommandBase.executeVdsBrokerCommand(ImageActionsVDSCommandBase.java:20) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:111) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:73) [vdsbroker.jar:]
    at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) [dal.jar:]
    at org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:407) [vdsbroker.jar:]
    at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
    at org.ovirt.engine.core.bll.storage.disk.image.ImagesHandler.teardownImage(ImagesHandler.java:1007) [bll.jar:]
    at org.ovirt.engine.core.bll.storage.disk.image.ImagesHandler.getQemuImageInfoFromVdsm(ImagesHandler.java:856) [bll.jar:]
    at org.ovirt.engine.core.bll.storage.disk.image.BaseImagesCommand.endSuccessfully(BaseImagesCommand.java:367) [bll.jar:]
    at org.ovirt.engine.core.bll.CommandBase.internalEndSuccessfully(CommandBase.java:736) [bll.jar:]
    at org.ovirt.engine.core.bll.CommandBase.endActionInTransactionScope(CommandBase.java:694) [bll.jar:]
    at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2057) [bll.jar:]
    at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164) [utils.jar:]
    at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103) [utils.jar:]
    at org.ovirt.engine.core.bll.CommandBase.endAction(CommandBase.java:559) [bll.jar:]
    at org.ovirt.engine.core.bll.Backend.endAction(Backend.java:536) [bll.jar:]
Link to suspected patches: https://gerrit.ovirt.org/#/c/71132/
Link to Job:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4950/
Link to all logs:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4951/art...
Thanks
Gil
[ OST Failure Report ] [ oVirt master ] [25.01.17] [add_hosts]
by Gil Shinar
Hi,
vdsm-cli has been removed from the VDSM rpms, but it is still being
installed during add hosts. Here is the error message:
2017-01-25 10:59:44 DEBUG otopi.plugins.otopi.packagers.yumpackager
yumpackager.verbose:76 Yum Building transaction
2017-01-25 10:59:47 ERROR otopi.plugins.otopi.packagers.yumpackager
yumpackager.error:85 Yum
[u'vdsm-cli-4.20.0-261.gitabb73a5.el7.centos.noarch requires
vdsm-xmlrpc = 4.20.0-261.gitabb73a5.el7.centos',
u'vdsm-cli-4.20.0-261.gitabb73a5.el7.centos.noarch requires
vdsm-client = 4.20.0-261.gitabb73a5.el7.centos',
u'vdsm-cli-4.20.0-261.gitabb73a5.el7.centos.noarch requires
vdsm-python = 4.20.0-261.gitabb73a5.el7.centos']
2017-01-25 10:59:47 DEBUG otopi.context context._executeMethod:142
method exception
Traceback (most recent call last):
  File "/tmp/ovirt-9WzrERSugT/pythonlib/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/tmp/ovirt-9WzrERSugT/otopi-plugins/otopi/packagers/yumpackager.py", line 253, in _packages
    if self._miniyum.buildTransaction():
  File "/tmp/ovirt-9WzrERSugT/pythonlib/otopi/miniyum.py", line 919, in buildTransaction
    raise yum.Errors.YumBaseError(msg)
YumBaseError: [u'vdsm-cli-4.20.0-261.gitabb73a5.el7.centos.noarch
requires vdsm-xmlrpc = 4.20.0-261.gitabb73a5.el7.centos',
u'vdsm-cli-4.20.0-261.gitabb73a5.el7.centos.noarch requires
vdsm-client = 4.20.0-261.gitabb73a5.el7.centos',
u'vdsm-cli-4.20.0-261.gitabb73a5.el7.centos.noarch requires
vdsm-python = 4.20.0-261.gitabb73a5.el7.centos']
2017-01-25 10:59:47 ERROR otopi.context context._executeMethod:151
Failed to execute stage 'Package installation':
[u'vdsm-cli-4.20.0-261.gitabb73a5.el7.centos.noarch requires
vdsm-xmlrpc = 4.20.0-261.gitabb73a5.el7.centos',
u'vdsm-cli-4.20.0-261.gitabb73a5.el7.centos.noarch requires
vdsm-client = 4.20.0-261.gitabb73a5.el7.centos',
u'vdsm-cli-4.20.0-261.gitabb73a5.el7.centos.noarch requires
vdsm-python = 4.20.0-261.gitabb73a5.el7.centos']
2017-01-25 10:59:47 DEBUG otopi.transaction transaction.abort:119
aborting 'Yum Transaction'
2017-01-25 10:59:47 INFO otopi.plugins.otopi.packagers.yumpackager
yumpackager.info:80 Yum Performing yum transaction rollback
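Stripped of the yum machinery, the failure above is a plain requires/provides mismatch: the stale vdsm-cli package carries exact-versioned requires that nothing in the repo provides any more. A toy sketch (the "-262" provides are hypothetical stand-ins for whatever the newer build ships; only the requires come from the error above):

```python
# What a newer vdsm build might still provide (versions here are made up;
# vdsm-xmlrpc is gone entirely on master):
provides = {
    ("vdsm-python", "4.20.0-262.gitXXXX.el7.centos"),
    ("vdsm-client", "4.20.0-262.gitXXXX.el7.centos"),
}

# Exact-versioned requires of the leftover vdsm-cli (from the yum error):
requires = {
    ("vdsm-xmlrpc", "4.20.0-261.gitabb73a5.el7.centos"),
    ("vdsm-client", "4.20.0-261.gitabb73a5.el7.centos"),
    ("vdsm-python", "4.20.0-261.gitabb73a5.el7.centos"),
}

# Every require must be matched by an exact provide; none are.
unresolved = sorted(name for name, ver in requires if (name, ver) not in provides)
print(unresolved)  # ['vdsm-client', 'vdsm-python', 'vdsm-xmlrpc']
```

Which matches the three unresolved deps yum reports, and explains why dropping vdsm-cli from host deployment (rather than fixing versions) is the fix.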
Link to suspected patches: https://gerrit.ovirt.org/#/c/68721/
Link to Job:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4962/
Link to all logs:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4962/art...
Thanks
Gil