Windows 10 guest with NVIDIA Tesla P40 vGPU bluescreens when I enable nested virtualization
by domw@live.ca
Hi all,
I have an issue when creating a VM: I want to run WSL2, but when I install it (or Windows Defender Application Guard) the computer bluescreens.
I am using a CentOS 8 Stream host connected to a VirtIO cluster; the hardware is two NVIDIA Tesla P40 graphics cards installed in a PowerEdge R480. My Windows 10 20H2 image would bluescreen after the NVIDIA drivers were installed. I did some testing and discovered that a clean Windows 10 20H2 ISO install worked fine even after installing the vGPU drivers, but the master image has WSL2 installed out of the box, so I installed WSL2 on my vanilla image and it bluescreened too. I imagine this is due to nested virtualization and the installation of some Hyper-V service; maybe it is trying to initialize some hardware acceleration. Has anyone had this issue? I really need WSL and would love to use version 2.
The Tesla driver versions I have tried are 450.12 and the latest, 460.32.
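Before blaming the driver alone, it may be worth confirming nested virtualization is actually enabled end to end on the CentOS host, since Hyper-V inside the guest needs the vmx CPU flag passed through. A minimal sketch (the paths are the standard KVM module parameters; pick the one matching your CPU vendor):

```shell
# Check whether the KVM modules on the host advertise nested support.
NESTED_INTEL=/sys/module/kvm_intel/parameters/nested
NESTED_AMD=/sys/module/kvm_amd/parameters/nested
for p in "$NESTED_INTEL" "$NESTED_AMD"; do
    # "Y" or "1" means nested virtualization is enabled for that module.
    [ -r "$p" ] && echo "$p -> $(cat "$p")"
done
```

The guest's libvirt CPU definition must also pass the flag through, e.g. `host-passthrough`, or a named model with `<feature policy='require' name='vmx'/>`.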
Kernel Bitmap Dump File: Kernel address space is available, User address space may not be available.
Symbol search path is: srv*
Executable search path is:
Windows 10 Kernel Version 19041 MP (4 procs) Free x64
Product: WinNt, suite: TerminalServer SingleUserTS
Built by: 19041.1.amd64fre.vb_release.191206-1406
Machine Name:
Kernel base = 0xfffff805`5e800000 PsLoadedModuleList = 0xfffff805`5f42a490
Debug session time: Sat Jun 26 04:58:37.807 2021 (UTC - 7:00)
System Uptime: 0 days 0:00:09.469
Loading Kernel Symbols
...............................................................
...........................Page 799b not present in the dump file. Type ".hh dbgerr004" for details
.....................................
.........................
Loading User Symbols
PEB is paged out (Peb.Ldr = 0000002e`f91fa018). Type ".hh dbgerr001" for details
Loading unloaded module list
.......
For analysis of this file, run !analyze -v
3: kd> !analyze -v
ERROR: FindPlugIns 8007007b
*******************************************************************************
* *
* Bugcheck Analysis *
* *
*******************************************************************************
SYSTEM_SERVICE_EXCEPTION (3b)
An exception happened while executing a system service routine.
Arguments:
Arg1: 00000000c0000005, Exception code that caused the bugcheck
Arg2: fffff80567d55b24, Address of the instruction which caused the bugcheck
Arg3: ffff8487974646a0, Address of the context record for the exception that caused the bugcheck
Arg4: 0000000000000000, zero.
Debugging Details:
------------------
Page fd6196 not present in the dump file. Type ".hh dbgerr004" for details
Page fd6196 not present in the dump file. Type ".hh dbgerr004" for details
KEY_VALUES_STRING: 1
Key : Analysis.CPU.Sec
Value: 3
Key : Analysis.DebugAnalysisProvider.CPP
Value: Create: 8007007e on BCCO050
Key : Analysis.DebugData
Value: CreateObject
Key : Analysis.DebugModel
Value: CreateObject
Key : Analysis.Elapsed.Sec
Value: 27
Key : Analysis.Memory.CommitPeak.Mb
Value: 81
Key : Analysis.System
Value: CreateObject
BUGCHECK_CODE: 3b
BUGCHECK_P1: c0000005
BUGCHECK_P2: fffff80567d55b24
BUGCHECK_P3: ffff8487974646a0
BUGCHECK_P4: 0
CONTEXT: ffff8487974646a0 -- (.cxr 0xffff8487974646a0)
rax=0000000000000000 rbx=ffffb406f7d85000 rcx=e17f3f1e8efe0000
rdx=0000000000000000 rsi=0000000000000000 rdi=ffffb406f1667270
rip=fffff80567d55b24 rsp=ffff8487974650a0 rbp=0000000000000002
r8=0000000000000000 r9=0000000000000000 r10=0000000000000000
r11=ffff848797465040 r12=fffff80568511c80 r13=ffffb406f1667270
r14=ffffb406f194c660 r15=0000000000000000
iopl=0 nv up ei pl nz na pe nc
cs=0010 ss=0018 ds=002b es=002b fs=0053 gs=002b efl=00050202
nvlddmkm+0x1d5b24:
fffff805`67d55b24 4c8b80c8220000 mov r8,qword ptr [rax+22C8h] ds:002b:00000000`000022c8=????????????????
Resetting default scope
PROCESS_NAME: csrss.exe
STACK_TEXT:
ffff8487`974650a0 fffff805`67cab38a : ffffb406`f7d85000 ffffb406`f194c660 00000000`ffffffff ffff8487`97465220 : nvlddmkm+0x1d5b24
ffff8487`974650e0 fffff805`67cdb663 : 00000000`00000000 ffffb406`f7d85000 ffff8487`97465220 ffffb406`f1989000 : nvlddmkm+0x12b38a
ffff8487`97465160 fffff805`67cb5df8 : ffffb406`f7d85000 ffffb406`f7d85000 ffffb406`f7d85000 ffff8487`97465220 : nvlddmkm+0x15b663
ffff8487`97465190 fffff805`67ca3ce5 : ffffb406`f1a3b000 00000000`00000001 ffffb406`00000001 ffffb406`f7d85000 : nvlddmkm+0x135df8
ffff8487`974651c0 fffff805`67ca4e8c : 00000000`00000800 ffffb406`f7d85000 ffffb406`f194c660 ffffb406`f1a3b000 : nvlddmkm+0x123ce5
ffff8487`974651f0 fffff805`67c8a41c : ffffb406`f1b8271c 00000002`00000001 ffffb406`00000007 00000000`00000000 : nvlddmkm+0x124e8c
ffff8487`97465260 fffff805`67c90f8b : 00000000`00000060 fffff805`684baeb8 00000000`00000060 ffffb406`f1b826b8 : nvlddmkm+0x10a41c
ffff8487`97466440 fffff805`67c284f3 : ffffb406`f1b81000 ffff8487`97466640 ffffb406`00000002 ffffa001`ce6c4d80 : nvlddmkm+0x110f8b
ffff8487`97466540 fffff805`67bffa83 : 00000000`00000016 00000000`00000000 00000000`00000001 ffffb406`f1b81000 : nvlddmkm+0xa84f3
ffff8487`974666a0 fffff805`68795711 : ffffb406`f1b81000 ffff8487`97466d91 ffff8487`97466f88 00000000`00000001 : nvlddmkm+0x7fa83
ffff8487`97466d10 fffff805`686aa75d : 00000000`00000000 ffffb406`f1943bfc ffff8487`97466f88 ffffb406`f1b18bf0 : nvlddmkm+0xc15711
ffff8487`97466df0 fffff805`6403fab3 : ffffb406`f1b81000 ffffb406`f190f3e0 ffffb406`f1b18bf0 ffffb406`f1943030 : nvlddmkm+0xb2a75d
ffff8487`97466e30 fffff805`6404188e : ffffb406`f1b18bf0 ffffffff`fffffffd ffffb406`f1943180 ffffb406`00000000 : dxgkrnl!DpiDxgkDdiStartDevice+0x6b
ffff8487`97466e90 fffff805`6403f5d8 : ffffffff`fffffffd ffffb406`f1943180 00000000`00000001 00000000`00000001 : dxgkrnl!DpiFdoStartAdapter+0x58e
ffff8487`97467010 fffff805`64058d40 : ffffb113`9a011040 00000000`00000000 ffffb406`f14b6260 00000000`00000000 : dxgkrnl!DpiFdoStartAdapterThreadImpl+0x308
ffff8487`974671c0 fffff805`6402a9ea : fffff805`63f7fa00 00000000`00000000 ffffffff`fffffffd ffffb406`f14b6260 : dxgkrnl!DpiFdoStartAdapterThread+0x30
ffff8487`974671f0 fffff805`6402a94f : 00000000`00000000 ffffffff`fffffffd ffffb406`f14b6260 00000000`00000000 : dxgkrnl!DpiSessionCreateCallback+0x52
ffff8487`97467230 fffff805`64aee722 : 00000000`00000000 00000000`0000014c 00000000`0000014c 00000000`00000148 : dxgkrnl!DxgkNotifySessionStateChange+0xbf
ffff8487`97467280 ffffb113`9a2c12d2 : ffffb406`ed7d4940 00000000`0020001e ffffb113`9b6073b0 00000000`00000148 : watchdog!SMgrNotifySessionChange+0x92
ffff8487`974672b0 ffffb113`99e0e5ea : 00000000`00000010 00000000`00050246 ffff8487`97467308 00000000`00000018 : win32k!SysEntrySMgrNotifySessionChange+0x12
ffff8487`974672e0 ffffb113`99e0dfdb : 00000000`00000000 00000000`00050246 ffff8487`97467338 00000000`00000018 : win32kbase!DrvNotifySessionStateChange+0x8a
ffff8487`97467310 ffffb113`99e15bc4 : 00000000`00000000 00000000`0000014c 00000000`00000148 ffffb406`f1871b60 : win32kbase!InitializeGreCSRSS+0x1b
ffff8487`97467340 ffffb113`9a2c1466 : ffffb406`f7d24080 ffff8487`97467440 00000000`00000001 00000000`00000001 : win32kbase!Win32kBaseUserInitialize+0x124
ffff8487`97467390 fffff805`5ec075b5 : ffffb406`f7d24000 00000000`00100000 ffff8487`97467440 ffffb406`00000000 : win32k!NtUserInitialize+0x16
ffff8487`974673c0 00007ff9`d13b9b44 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : nt!KiSystemServiceCopyEnd+0x25
0000002e`f8fbf5f8 00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : 0x00007ff9`d13b9b44
SYMBOL_NAME: nvlddmkm+1d5b24
MODULE_NAME: nvlddmkm
IMAGE_NAME: nvlddmkm.sys
STACK_COMMAND: .cxr 0xffff8487974646a0 ; kb
BUCKET_ID_FUNC_OFFSET: 1d5b24
FAILURE_BUCKET_ID: 0x3B_c0000005_nvlddmkm!unknown_function
OS_VERSION: 10.0.19041.1
BUILDLAB_STR: vb_release
OSPLATFORM_TYPE: x64
OSNAME: Windows 10
FAILURE_ID_HASH: {63c41bff-3ea4-15a0-a72c-26a548d17abe}
Followup: MachineOwner
---------
3: kd> lmvm nvlddmkm
Browse full module list
start end module name
fffff805`67b80000 fffff805`6938d000 nvlddmkm (no symbols)
Loaded symbol image file: nvlddmkm.sys
Image path: \SystemRoot\System32\DriverStore\FileRepository\nvgridsw.inf_amd64_bc3d1d075a43d1ac\nvlddmkm.sys
Image name: nvlddmkm.sys
Browse all global symbols functions data
Timestamp: Mon Apr 5 11:28:32 2021 (606B56D0)
CheckSum: 017A33BA
ImageSize: 0180D000
Translations: 0000.04b0 0000.04e4 0409.04b0 0409.04e4
Information from resource tables:
3: kd> .cxr 0xffff8487974646a0
rax=0000000000000000 rbx=ffffb406f7d85000 rcx=e17f3f1e8efe0000
rdx=0000000000000000 rsi=0000000000000000 rdi=ffffb406f1667270
rip=fffff80567d55b24 rsp=ffff8487974650a0 rbp=0000000000000002
r8=0000000000000000 r9=0000000000000000 r10=0000000000000000
r11=ffff848797465040 r12=fffff80568511c80 r13=ffffb406f1667270
r14=ffffb406f194c660 r15=0000000000000000
iopl=0 nv up ei pl nz na pe nc
cs=0010 ss=0018 ds=002b es=002b fs=0053 gs=002b efl=00050202
nvlddmkm+0x1d5b24:
fffff805`67d55b24 4c8b80c8220000 mov r8,qword ptr [rax+22C8h] ds:002b:00000000`000022c8=????????????????
3 years, 9 months
migrate hosted engine
by Harry O
Hi,
I get the following error when trying to migrate my HostedEngine VM to a new node in the cluster. I just did a node reinstall via the HostedEngine and a rebuild of the gluster array on that node, because it is a replacement for a crashed, dead old node.
ID: 120
Migration failed due to an Error: Failed to connect to remote libvirt URI qemu+tls://hej3.5ervers.lan/system: Cannot read CA certificate '/etc/pki/CA/cacert.pem': No such file or directory (VM: HostedEngine, Source: hej1.5ervers.lan, Destination: hej3.5ervers.lan)
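The missing /etc/pki/CA/cacert.pem suggests the rebuilt node never received the libvirt TLS certificates that vdsm normally deploys during host installation. A quick sketch to compare the rebuilt node against a working one (the file names below are the usual vdsm locations and may vary by version):

```shell
# Run on both the working source and the rebuilt destination and compare.
missing=0
for f in /etc/pki/CA/cacert.pem \
         /etc/pki/libvirt/clientcert.pem \
         /etc/pki/libvirt/private/clientkey.pem; do
    if [ -e "$f" ]; then
        echo "OK      $f"
    else
        echo "MISSING $f"
        missing=$((missing + 1))
    fi
done
echo "missing=$missing"
```

If files are missing on the destination, putting the host into maintenance and running "Reinstall" from the engine UI (which re-enrolls certificates) is usually simpler than copying certificates around by hand.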
3 years, 9 months
Re: Question about Template and Storage Domain
by Eyal Shenitzky
Hi,
You are probably creating the VM as dependent on the template (a thin VM),
which means the VM's disk must be created on the same storage domain as
the template's disk.
To create the VM with a disk on a different storage domain, you can
either create the VM as independent (a clone) or copy the template
disk to the storage domain you want the VM's disk on.
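For reference, a sketch of the REST call that creates the VM as an independent clone; the engine URL, credentials, and names below are placeholders, and `clone=true` is the query parameter that detaches the new disks from the template chain so they can be placed on any active storage domain:

```shell
ENGINE_URL="https://engine.example.com/ovirt-engine/api"   # placeholder
BODY='<vm>
  <name>myvm</name>
  <cluster><name>Default</name></cluster>
  <template><name>mytemplate</name></template>
</vm>'
# Printed rather than executed, since the engine URL is a placeholder;
# drop the leading echo to run it for real.
echo curl -k -u 'admin@internal:PASSWORD' \
     -H 'Content-Type: application/xml' \
     -d "$BODY" "$ENGINE_URL/vms?clone=true"
```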
On Sun, 27 Jun 2021 at 18:40, Nur Imam Febrianto <nur_imam(a)outlook.com>
wrote:
> Hi,
>
>
> Want to ask about template. For example I have a several template (with
> disk) stored in some Storage Domain. If I create a VM from the template,
> and when I change the parameter of cloned disk into another storage domain
> (different with where the template are stored). The VM always failed to be
> created. This only occurs in VM creation phase, if I create the VM at same
> storage domain where the templates are stored, it created successfully. Is
> this are “normal” behavior ?
>
>
>
> Thanks before.
>
>
>
> Regards,
>
> Nur Imam Febrianto
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/F6BCRDFJBCO...
>
--
Regards,
Eyal Shenitzky
3 years, 9 months
Libgfapi considerations
by Jayme
Are there currently any known issues with using libgfapi in the latest
stable version of oVirt in HCI deployments? I recently enabled it and
have noticed a significant (over 4x) increase in I/O performance on my VMs.
I'm concerned, however, since it does not seem to be an oVirt default
setting. Is libgfapi considered safe and stable to use in oVirt 4.3 HCI?
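Not an authoritative answer, but for context: libgfapi is gated behind an engine-config flag, which is one reason it is off by default. The usual toggle (verify against your version's documentation first) looks like:

```shell
# Run on the engine host, then restart ovirt-engine; echoed here so the
# sketch is side-effect free.
CMD='engine-config -s LibgfApiSupported=true --cver=4.3'
echo "$CMD"
echo 'systemctl restart ovirt-engine'
```

Historically the caveats were around snapshot and live storage migration paths on libgfapi disks, so it is worth testing those operations before relying on it in production.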
3 years, 10 months
Node 4.4.6 and Gluster deployment issues
by coradat@gmail.com
I've been beating my head against this for a while: I'm having issues deploying a new 4.4.6 Node hyperconverged cluster. It's a homelab/dev environment, so it's on pretty outdated hardware, SuperMicro X8DTT-based systems. There is no onboard RAID controller, so that should help at least. The storage drives are a software RAID 0 of two 1 TB SSDs at /dev/md/ovstore, and disk traffic is on a separate network from management traffic; ssh-copy-id was run for both interfaces. It looks like the deployment fails when trying to create the volume group on the VDO, but I cannot figure out why the device is excluded. /etc/lvm/lvm.conf is (or should be) default from the install, the RAID is formatted with no partitions, and I've done a wipefs on it. I was able to complete the same install on these machines with Node 4.3.9, but not 4.4.6. Logs are available at https://pastebin.com/yBzUpe3c
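One thing worth ruling out (a guess, since the linked logs are not reproduced here): on 4.4, vdsm manages an LVM device filter that can reject /dev/md* devices even after a wipefs, which would explain the volume group failing on the VDO device. A sketch to inspect and regenerate it:

```shell
# Show any explicit LVM filter currently configured:
LVM_CONF=/etc/lvm/lvm.conf
grep -n '^[[:space:]]*filter' "$LVM_CONF" 2>/dev/null || echo "no explicit filter"
# vdsm-tool can rebuild a filter that includes local devices in use
# (echoed here so the sketch is side-effect free):
echo vdsm-tool config-lvm-filter -y
# Also check whether multipath has claimed the SSDs backing the md raid:
echo multipath -ll
```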
3 years, 10 months
OVF_STORE update error
by whiteplant02@gmail.com
Hi.
I use automatic translation, as English is not my main language.
Please bear with me if some expressions are difficult to understand.
I have an NFS storage domain that I used with oVirt 4.4, and I imported it into Oracle Linux Virtualization Manager 4.3.
As a result, OVF_STORE fails to update:
Failed to update VMs/Templates OVF data for Storage Domain [Domain Name] in Data Center [DC Name].
Failed to update OVF disks [OVF_STORE1_diskID],[OVF_STORE2_diskID], OVF data isn't updated on those OVF stores (Data Center [DC Name], Storage Domain [Domain Name]).
In addition, there was the following log.
[org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [4859c0db] Validation of action 'UploadStream' failed for user SYSTEM.
Reasons: VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_DISK_VOLUME_TYPE_UNSUPPORTED,$volumeTypeSparse,$supportedVolumeTypes Preallocated
With oVirt 4.4, the OVF_STORE disks are created thin-provisioned, while OLVM 4.3 appears to expect them to be Preallocated; I think that is the problem.
Is there a way to fix it while the virtual machines are still running?
Thank you.
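I can't speak to a supported fix, but to confirm the diagnosis you can list the OVF_STORE disks and their allocation policy via the REST API (the URL and credentials below are placeholders):

```shell
ENGINE_URL="https://engine.example.com/ovirt-engine/api"   # placeholder
# Echoed rather than executed, since the URL is a placeholder:
echo curl -k -u 'admin@internal:PASSWORD' \
     "$ENGINE_URL/disks?search=alias%3DOVF_STORE"
```

If the disks really are sparse while the 4.3 engine insists on preallocated, one workaround that has been discussed on the list is removing the OVF_STORE disks so the engine recreates them with the expected allocation. That is disruptive, so take backups and verify it against your version before trying it.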
3 years, 10 months
Host reinstall from engine
by Harry O
Hi,
Shouldn't the engine deploy gluster when a host reinstall is run? How do I deploy my gluster setup on a replacement node that replaces a dead node?
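As far as I know, the engine's host "Reinstall" flow re-deploys vdsm and services but does not recreate gluster bricks; replacing a dead node's bricks is done with the gluster CLI (or the cockpit replace-host flow). A sketch with placeholder host and volume names:

```shell
VOL=myvol   # placeholder volume name
# Echoed so the sketch is side-effect free; drop the echos to run for real.
echo gluster peer probe new-node.example.com
# For each volume, swap the dead brick for the new node's brick:
echo gluster volume replace-brick "$VOL" \
     dead-node.example.com:/gluster_bricks/"$VOL"/brick \
     new-node.example.com:/gluster_bricks/"$VOL"/brick \
     commit force
# Then let self-heal repopulate the new brick:
echo gluster volume heal "$VOL" full
```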
3 years, 10 months
Re: oVirt and ARM
by Sandro Bonazzola
Il giorno ven 25 giu 2021 alle ore 14:20 Marko Vrgotic <
M.Vrgotic(a)activevideo.com> ha scritto:
> Hi Sandro,
>
>
>
> Thank you for the update. I am not equipped to help on development side,
> but I can most certainly do test deployments, once there is something
> available.
>
>
>
> We are a big oVirt shop moving to ARM64 with a new product; it would be
> great if oVirt started supporting it.
>
>
>
> If we are able to help somehow, let me know.
>
I guess a start could be adding some arm64 machines to the oVirt
infrastructure so developers can build for it.
You can have a look at
https://ovirt.org/community/get-involved/donate-hardware.html
Looping in +Evgheni Dereveanchin <ederevea(a)redhat.com> in case you can
share some resources.
>
>
> -----
>
> kind regards/met vriendelijke groeten
>
>
>
> Marko Vrgotic
> Sr. System Engineer @ System Administration
>
>
> ActiveVideo
>
> *e:* m.vrgotic(a)activevideo.com
> *w: *www.activevideo.com
>
>
>
> ActiveVideo Networks BV. Mediacentrum 3745 Joop van den Endeplein 1.1217
> WJ Hilversum, The Netherlands. The information contained in this message
> may be legally privileged and confidential. It is intended to be read only
> by the individual or entity to whom it is addressed or by their designee.
> If the reader of this message is not the intended recipient, you are on
> notice that any distribution of this message, in any form, is strictly
> prohibited. If you have received this message in error, please immediately
> notify the sender and/or ActiveVideo Networks, LLC by telephone at +1
> 408.931.9200 and delete or destroy any copy of this message.
>
>
>
>
>
>
>
> *From: *Sandro Bonazzola <sbonazzo(a)redhat.com>
> *Date: *Thursday, 24 June 2021 at 18:21
> *To: *Marko Vrgotic <M.Vrgotic(a)activevideo.com>, Zhenyu Zheng <
> zhengzhenyulixi(a)gmail.com>, Joey Ma <majunjiev(a)gmail.com>
> *Cc: *users(a)ovirt.org <users(a)ovirt.org>
> *Subject: *Re: [ovirt-users] oVirt and ARM
>
> ***CAUTION: This email originated from outside of the organization. Do not
> click links or open attachments unless you recognize the sender!!!***
>
>
>
>
>
> Il giorno gio 24 giu 2021 alle ore 16:34 Marko Vrgotic <
> M.Vrgotic(a)activevideo.com> ha scritto:
>
> Hi oVirt,
>
>
>
> Where can I find if there are any information about oVirt supporting arm64
> CPU architecture?
>
>
>
> Right now oVirt does not support arm64. There was an initiative about
> supporting it, started some time ago by the openEuler oVirt SIG.
>
> I didn't get any further updates on this topic; looping in those who I
> remember were looking into it.
>
> I think that if someone contributes arm64 support it would also be a
> feature worth a 4.5 release :-)
>
>
>
>
>
> -----
>
> kind regards/met vriendelijke groeten
>
>
>
> Marko Vrgotic
> Sr. System Engineer @ System Administration
>
>
> ActiveVideo
>
> *o: *+31 (35) 6774131
>
> *m: +*31 (65) 5734174
>
> *e:* m.vrgotic(a)activevideo.com
> *w: *www.activevideo.com
>
>
>
>
>
>
>
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MQ3XND2NKIL...
>
>
>
>
> --
>
> *Sandro Bonazzola*
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
> sbonazzo(a)redhat.com
>
>
>
> *Red Hat respects your work life balance. Therefore there is no need to
> answer this email out of your office hours.*
>
>
>
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
3 years, 10 months
VMs traffic cannot go outside host
by deyaa112006@hotmail.com
Hi
I'm trying to get hands-on with oVirt virtualization.
I built a test environment on VMware Workstation:
- 2 VMs running CentOS 7 as oVirt hosts
- 1 VM running CentOS 7 as the oVirt manager
on ovirt Hosts:
[root@ovirt-node1 ~]# rpm -q cockpit-ovirt-dashboard qemu-kvm-ev libvirt virt-install bridge-utils vdsm
cockpit-ovirt-dashboard-0.13.10-1.el7.noarch
qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64
libvirt-4.5.0-36.el7_9.5.x86_64
virt-install-1.5.0-7.el7.noarch
bridge-utils-1.5-9.el7.x86_64
vdsm-4.30.46-1.el7.x86_64
on ovirt-manager:
[oVirt shell (connected)]# info
backend version: 4.3.10
sdk version : 4.3.4
cli version : 3.6.9.2
python version : 2.7.5.final.0
entry point : https://ovirt-manager.home.lab/ovirt-engine/api
I created two oVirt VMs running Ubuntu 16 on the same VM network, "vmnet-02".
The VMs' traffic cannot leave the hosts (i.e. they can reach each other only if they run on the same host).
I need help fixing this, please!
[d@dv6 ~]$ ovirt-node2
Last login: Mon Jun 28 15:25:12 2021 from 10.0.1.1
[root@ovirt-node2 ~]# ifconfig
ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:0c:29:d2:71:10 txqueuelen 1000 (Ethernet)
RX packets 5388529 bytes 7970598031 (7.4 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 287643 bytes 124694822 (118.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:0c:29:d2:71:1a txqueuelen 1000 (Ethernet)
RX packets 6574 bytes 637208 (622.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3573 bytes 393479 (384.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
genev_sys_6081: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 65000
ether fe:66:e6:5e:51:f8 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 15152 bytes 3732985 (3.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 15152 bytes 3732985 (3.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ovirtmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.1.22 netmask 255.255.255.0 broadcast 10.0.1.255
ether 00:0c:29:d2:71:10 txqueuelen 1000 (Ethernet)
RX packets 308773 bytes 7630918499 (7.1 GiB)
RX errors 0 dropped 42 overruns 0 frame 0
TX packets 253627 bytes 92412626 (88.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vmnet-02: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.57.146 netmask 255.255.255.0 broadcast 192.168.57.255
ether 00:0c:29:d2:71:1a txqueuelen 1000 (Ethernet)
RX packets 9367 bytes 640524 (625.5 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1015 bytes 248111 (242.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether fe:6f:b4:ee:00:01 txqueuelen 1000 (Ethernet)
RX packets 6483 bytes 289530 (282.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5120 bytes 480414 (469.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@ovirt-node2 ~]#
[root@ovirt-node2 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.1.254 0.0.0.0 UG 0 0 0 ovirtmgmt
10.0.1.0 0.0.0.0 255.255.255.0 U 0 0 0 ovirtmgmt
169.254.0.0 0.0.0.0 255.255.0.0 U 1023 0 0 ovirtmgmt
192.168.57.0 0.0.0.0 255.255.255.0 U 0 0 0 vmnet-02
Here I can access the guest VM:
[root@ovirt-node2 ~]# ssh sysadmin@10.0.1.44
sysadmin@10.0.1.44's password:
sysadmin@ubuntu16-1:~$ ifconfig
ens4 Link encap:Ethernet HWaddr 56:6f:b4:ee:00:01
inet addr:10.0.1.44 Bcast:10.0.1.255 Mask:255.255.255.0
inet6 addr: fe80::546f:b4ff:feee:1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:312 errors:0 dropped:0 overruns:0 frame:0
TX packets:1834 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:75924 (75.9 KB) TX bytes:87332 (87.3 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:1136 errors:0 dropped:0 overruns:0 frame:0
TX packets:1136 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:105288 (105.2 KB) TX bytes:105288 (105.2 KB)
sysadmin@ubuntu16-1:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.1.254 0.0.0.0 UG 0 0 0 ens4
10.0.1.0 0.0.0.0 255.255.255.0 U 0 0 0 ens4
but from the other host, "node1", I cannot.
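Given the guest MAC visible above (56:6f:b4:ee:00:01), a likely culprit is the outer VMware Workstation layer: with nested virtualization, frames leaving the oVirt host carry the inner guest's own MAC, and Workstation drops frames from foreign MACs unless the vmnet allows promiscuous mode. A sketch of what to check (the interface name is taken from the ifconfig output above):

```shell
IFACE=ens34   # the NIC backing vmnet-02 in the output above
# On each oVirt host, confirm the VM network bridge contains the physical NIC:
bridge link 2>/dev/null || brctl show 2>/dev/null || echo "bridge tools not installed"
# Promiscuous mode on the NIC inside the oVirt host (echoed for safety):
echo "ip link set $IFACE promisc on"
# On a Linux Workstation host, vmnet promiscuous mode usually requires
# access to the vmnet devices (run as root on the Workstation host):
echo 'chmod a+rw /dev/vmnet*'
```

If the Workstation host runs Windows, the equivalent is enabling the "Allow promiscuous mode" / MAC-address-change options on the virtual network instead.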
3 years, 10 months