On Thu, 2018-07-26 at 17:13 +0200, Simone Tiraboschi wrote:
On Thu, Jul 26, 2018 at 5:09 PM Karli Sjöberg <karli(a)inparadise.se> wrote:
>
> On Jul 26, 2018 15:48, Karli Sjöberg <karli(a)inparadise.se> wrote:
> > On Thu, 2018-07-26 at 14:14 +0200, Karli Sjöberg wrote:
> > > On Thu, 2018-07-26 at 14:01 +0200, Simone Tiraboschi wrote:
> > > >
> > > >
> > > > On Thu, Jul 26, 2018 at 12:44 PM Karli Sjöberg <karli(a)inparadise.se> wrote:
> > > > > On Thu, 2018-07-26 at 12:38 +0200, Simone Tiraboschi wrote:
> > > > > >
> > > > > > On Thu, Jul 26, 2018 at 9:30 AM Karli Sjöberg <karli(a)inparadise.se> wrote:
> > > > > > > On Thu, 2018-07-26 at 09:27 +0200, Simone Tiraboschi wrote:
> > > > > > > >
> > > > > > > > On Wed, Jul 25, 2018 at 12:04 PM Karli Sjöberg <karli(a)inparadise.se> wrote:
> > > > > > > > > Hey all!
> > > > > > > > >
> > > > > > > > > I'm trying to deploy Hosted Engine through the Cockpit UI and it's
> > > > > > > > > going well until it's time to start the local VM and it kernel panics:
> > > > > > > > >
> > > > > > > > > [ 2.032053] Call Trace:
> > > > > > > > > [ 2.032053] [] load_elf_binary+0x33c/0xe50
> > > > > > > > > [ 2.032053] [] ? ima_bprm_check+0x49/0x50
> > > > > > > > > [ 2.032053] [] ? load_elf_library+0x220/0x220
> > > > > > > > > [ 2.032053] [] search_binary_handler+0xef/0x310
> > > > > > > > > [ 2.032053] [] do_execve_common.isra.24+0x5db/0x6e0
> > > > > > > > > [ 2.032053] [] do_execve+0x18/0x20
> > > > > > > > > [ 2.032053] [] ____call_usermodehelper+0xff/0x140
> > > > > > > > > [ 2.032053] [] ? call_usermodehelper+0x60/0x60
> > > > > > > > > [ 2.032053] [] ret_from_fork_nospec_begin+0x21/0x21
> > > > > > > > > [ 2.032053] [] ? call_usermodehelper+0x60/0x60
> > > > > > > > > [ 2.032053] Code: cf e9 ff 4c 89 f7 e8 7b 32 e7 ff e9 4d fa ff ff 65 8b 05 03 a0 7e 49 a8 01 0f 84 85 fc ff ff 31 d2 b8 01 00 00 00 b9 49 00 00 00 <0f> 30 0f 1f 44 00 00 48 c7 c0 10 00 00 00 e8 07 00 00 00 f3 90
> > > > > > > > > [ 2.032053] RIP [] flush_old_exec+0x725/0x980
> > > > > > > > > [ 2.032053] RSP
> > > > > > > > > [ 2.298131] ---[ end trace 354b4039b6fb0889 ]---
> > > > > > > > > [ 2.303914] Kernel panic - not syncing: Fatal exception
> > > > > > > > > [ 2.304835] Kernel Offset: 0x35600000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
> > > > > > > > >
> > > > > > > > > I've never had this problem, so I just want to know if it's a known
> > > > > > > > > issue right now or if I've done anything special to deserve this :)
> > > > > > > > >
> > > > > > > > > The "Hosts" I'm deploying this
on are VMs with
> > nested
> > > > > > > > > virt
> > > > > > > > > activated,
> > > > > > > > > and I've done this before but this time
around it's
> > > > >
> > > > > bombing, as
> > > > > > > > > earlier
> > > > > > > > > explained.
> > > > > > > > >
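(A sanity check worth doing on setups like this, as a minimal sketch rather than anything official: assuming the in-tree kvm_intel/kvm_amd modules and their standard sysfs "nested" parameter, the snippet below reports whether nested KVM is switched on and whether the vmx/svm CPU flag is visible. Run it on the L0 host and then again inside the L1 VM, which needs to see the flag before it can run its own guests.)

#!/usr/bin/env python3
"""Nested-virt sanity check: run on the L0 host, then again inside the L1 VM."""
import os
import re


def nested_kvm_param():
    # Standard sysfs knob exposed by the in-tree KVM modules.
    for mod in ("kvm_intel", "kvm_amd"):
        path = "/sys/module/{}/parameters/nested".format(mod)
        if os.path.exists(path):
            with open(path) as f:
                return mod, f.read().strip()
    return None, None


def cpu_has_virt_flag():
    # 'vmx' (Intel) or 'svm' (AMD) in /proc/cpuinfo means this level of the
    # stack sees hardware virtualization support.
    with open("/proc/cpuinfo") as f:
        return bool(re.search(r"\b(vmx|svm)\b", f.read()))


mod, nested = nested_kvm_param()
print("KVM module: {}  nested: {}".format(mod, nested))  # want 'Y' or '1'
print("vmx/svm flag present: {}".format(cpu_has_virt_flag()))
print("/dev/kvm exists: {}".format(os.path.exists("/dev/kvm")))
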
> > > > > > > >
> > > > > > > >
> > > > > > > > Thanks for the report,
> > > > > > > > which hypervisor are you using on L0?
> > > > > > >
> > > > > > > In my case it's Xubuntu 18.04 LTS. Is there anything I can do to help
> > > > > > > out with this?
> > > > > >
> > > > > >
> > > > > > And you are running your host VM on KVM, right?
> > > > >
> > > > > Correct, but wait, whoa whoa, what am I saying, it's not 18.04, it's
> > > > > 16.04! I was looking at the wrong computer :)
> > > > >
> > > > > Just to be as clear as I possibly can, the issue I am facing is with
> > > > > Xubuntu 16.04.4 LTS as the L0 hypervisor.
> > > >
> > > > We are successfully running our CI tests on a nested env on CentOS 7.5.
> > > > Maybe the issue is just due to an older KVM version on Xubuntu 16.04.4 LTS.
> > > > Honestly, I have never tried that combination, but I think it could be
> > > > worth trying with a different L0.
> > >
> > > OK, let's compare kernel versions:
> > >
> > > CentOS 7.5: 3.10.0
> > > Xubuntu 16.04: 4.4.0
> > >
> > > In what way could that be, as you say, older?
> > >
> > > /K
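(The 3.10.0 vs 4.4.0 figures above are kernel releases only; "KVM version" in practice also covers the QEMU userspace, which differs a lot between the two distros. A rough sketch for recording both on an L0 under test, assuming the QEMU binary is reachable as qemu-system-x86_64 or qemu-kvm on PATH, which is not guaranteed everywhere -- CentOS, for instance, keeps qemu-kvm under /usr/libexec:)

#!/usr/bin/env python3
"""Record the version bits that matter when comparing L0 hypervisors."""
import platform
import shutil
import subprocess

print("kernel:", platform.release())

# QEMU userspace version; the binary name and location vary per distro,
# so this lookup is best effort.
qemu = shutil.which("qemu-system-x86_64") or shutil.which("qemu-kvm")
if qemu:
    out = subprocess.check_output([qemu, "--version"], universal_newlines=True)
    print("qemu:", out.splitlines()[0])
else:
    print("qemu: binary not found on PATH")
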
> >
> > I have been able to at least prove it's not "your fault" :)
> >
> > To test that, I have installed another CentOS 7.5 VM as an L1 hypervisor,
> > just an ordinary CentOS from the LiveGNOME ISO, and in that VM installed
> > virt-manager and all, to see if it's something specific to either _how_
> > you are starting the local engine VM or _what_ you are starting, like
> > if something's up with the appliance image.
> >
> > Turns out that trying to boot the CentOS 7.5 LiveGNOME ISO inside the
> > L1 hypervisor yields the exact same result.
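(For what it's worth, the same reproduction can be scripted without virt-manager or oVirt in the picture at all: boot any Live ISO in a throwaway KVM guest inside the L1 VM and watch whether it panics. A minimal sketch -- the ISO filename is a placeholder, and the QEMU options are just the plain defaults plus KVM acceleration:)

#!/usr/bin/env python3
"""Boot a Live ISO in a throwaway KVM guest inside the L1 VM."""
import subprocess

ISO = "CentOS-7-x86_64-LiveGNOME.iso"  # placeholder: any Live ISO will do

subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",   # fails immediately if /dev/kvm is unusable at this level
    "-m", "2048",    # 2 GiB of RAM for the live environment
    "-smp", "2",
    "-cdrom", ISO,
    "-boot", "d",    # boot from the CD-ROM image
], check=True)
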
> >
> > I am going to test this out on a Xubuntu 18.04 host to see if that
> > changes anything.
> >
> > /K
>
> I have now _successfully_ gone through the above procedure in
> Xubuntu 18.04, yay! :)
>
> I will continue searching for problems running nested
> virtualization in Ubuntu 16.04 (or derivatives).
>
Thanks for the report; please keep us up to date on your progress!
TL;DR: I upgraded my workstation to 18.04 to get on with my life, but I
reported it before doing so, so I could give someone something to go
on.
Here, for anyone interested:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1783952
/K
> /K
>
> > > > > > > > > Thanks in advance!
> > > > > > > > > /K