oVirt + TrueNAS: Unable to create iSCSI domain - I am missing something obvious
by David Johnson
Good morning folks, and thank you in advance.
I am working on migrating my oVirt backing store from NFS to iSCSI.
*oVirt Environment:*
oVirt Open Virtualization Manager
Software Version:4.4.4.7-1.el8
*TrueNAS environment:*
FreeBSD truenas.local 12.2-RELEASE-p11 75566f060d4(HEAD) TRUENAS amd64
The iSCSI share is on a TrueNAS server, exposed to the vdsm user and group 36.
oVirt sees the target, but is unable to make use of it.
The latest issue is "Error while executing action New SAN Storage Domain:
Volume Group block size error, please check your Volume Group
configuration, Supported block size is 512 bytes."
As near as I can tell, oVirt does not support any block size other than 512
bytes, while TrueNAS's smallest out-of-the-box block size is 4k.
I know that oVirt on TrueNAS is a common configuration, so I expect I am
missing something really obvious here, probably a TrueNAS configuration
setting needed to make TrueNAS present 512 byte blocks.
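For reference, this is how I have been checking what the LUN reports on the
oVirt host side (the target IQN and device name are placeholders):
iscsiadm -m discovery -t sendtargets -p truenas.local    # list targets exposed by TrueNAS
iscsiadm -m node -T <target-iqn> -p truenas.local --login
blockdev --getss /dev/sdX     # logical block size the host sees (oVirt expects 512)
blockdev --getpbsz /dev/sdX   # physical block size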
Any advice would be helpful.
*David Johnson*
2 years, 3 months
After Upgrade OVirt 4.4.9 > Version 4.4.10.7-1.el8 VM Kernel Crashes after migration
by Ralf Schenk
Dear List,
we upgraded our oVirt 4.4.9 infrastructure (engine and hosts) to the latest
available 4.4.10.x. The engine shows "4.4.10.7-1.el8" on the login page.
Hosts are based on ovirt-node-ng (nodectl info outputs
ovirt-node-ng-4.4.10.2-0.20220303.0).
When we migrate VMs, we see them dying shortly after migration with the
errors below, which are also printed to the console. We are only able to
shut these VMs down (power off) and start them up again. Mostly they are
started again on the host that was the target of the migration, and they
then run without issues.
All servers are EPYC-based, but with different CPU versions. All hosts are
in a single cluster, and the CPU type is "Secure AMD EPYC". With 4.4.8 and
4.4.9 we had no problems migrating VMs between hosts.
Cluster is:
2 Hosts: AMD EPYC 7401P 24-Core Processor
1 Host: AMD EPYC 7402P 24-Core Processor
7 Hosts: AMD EPYC 7502P 32-Core Processor
What can we do? As said, we didn't see such problems in a year of running
4.4.8.x and 4.4.9.x.
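One thing we plan to compare across the hosts is the TSC-related CPU flags,
since the traces below look clock-related (just a quick sketch, the VM name
is a placeholder):
grep -o 'constant_tsc\|nonstop_tsc\|tsc_scale' /proc/cpuinfo | sort | uniq -c   # run on every host and compare
virsh -r dumpxml <vm-name> | grep -A3 '<clock'   # clock/timer config the VM was started with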
Syslog showing the crash and a date/clock error or sudden offset:
Jan 19 19:23:22 myvmXX kernel: [163933.218848] rcu: INFO: rcu_sched
self-detected stall on CPU
Jan 19 19:23:22 myvmXX kernel: [163933.218876] rcu: 1-...!: (8 GPs
behind) idle=78a/0/0x1 softirq=4456350/4456351 fqs=0
Jan 19 19:23:22 myvmXX kernel: [163933.218901] (t=537752 jiffies
g=9505317 q=22)
Jan 19 19:23:22 myvmXX kernel: [163933.218903] rcu: rcu_sched kthread
starved for 537752 jiffies! g9505317 f0x0 RCU_GP_WAIT_FQS(5)
->state=0x402 ->cpu=0
Jan 19 19:23:22 myvmXX kernel: [163933.218932] rcu: RCU grace-period
kthread stack dump:
Jan 19 19:23:22 myvmXX kernel: [163933.218949] rcu_sched I 0
10 2 0x80004000
Jan 19 19:23:22 myvmXX kernel: [163933.218951] Call Trace:
Jan 19 19:23:22 myvmXX kernel: [163933.218957] __schedule+0x2e3/0x740
Jan 19 19:23:22 myvmXX kernel: [163933.218960] schedule+0x42/0xb0
Jan 19 19:23:22 myvmXX kernel: [163933.218961] schedule_timeout+0x8a/0x160
Jan 19 19:23:22 myvmXX kernel: [163933.218964] ?
rcu_accelerate_cbs+0x28/0x190
Jan 19 19:23:22 myvmXX kernel: [163933.218967] ?
__next_timer_interrupt+0xe0/0xe0
Jan 19 19:23:22 myvmXX kernel: [163933.218969] rcu_gp_kthread+0x48d/0x9a0
Jan 19 19:23:22 myvmXX kernel: [163933.218971] kthread+0x104/0x140
Jan 19 19:23:22 myvmXX kernel: [163933.218972] ? kfree_call_rcu+0x20/0x20
Jan 19 19:23:22 myvmXX kernel: [163933.218974] ? kthread_park+0x90/0x90
Jan 19 19:23:22 myvmXX kernel: [163933.218975] ret_from_fork+0x35/0x40
Jan 19 19:23:22 myvmXX kernel: [163933.218982] Sending NMI from CPU 1 to
CPUs 0:
Jan 19 19:23:22 myvmXX kernel: [163933.219978] NMI backtrace for cpu 0
Jan 19 19:23:22 myvmXX kernel: [163933.219978] CPU: 0 PID: 0 Comm:
swapper/0 Not tainted 5.4.0-137-generic #154-Ubuntu
Jan 19 19:23:22 myvmXX kernel: [163933.219979] Hardware name: oVirt
RHEL, BIOS 1.15.0-1.module_el8.6.0+1087+b42c8331 04/01/2014
Jan 19 19:23:22 myvmXX kernel: [163933.219979] RIP:
0010:timekeeping_advance+0x12f/0x5a0
Jan 19 19:23:22 myvmXX kernel: [163933.219980] Code: 00 48 8b 35 e3 a0
ec 01 bb 00 ca 9a 3b 49 29 c7 48 01 05 04 a0 ec 01 48 01 05 35 a0 ec 01
48 89 f2 48 d3 e2 8b 0d fd 9f ec 01 <48> 03 15 fa 9f ec 01 48 89 15 f3
9f ec 01 48 d3 e3 48 39 da 72 57
Jan 19 19:23:22 myvmXX kernel: [163933.219981] RSP:
0018:ffffb5a580003e40 EFLAGS: 00000016
Jan 19 19:23:22 myvmXX kernel: [163933.219981] RAX: 000000007a120000
RBX: 000000003b9aca00 RCX: 0000000000000017
Jan 19 19:23:22 myvmXX kernel: [163933.219982] RDX: 003d091a39de0000
RSI: 00001e848d1cef00 RDI: 00000000003d0900
Jan 19 19:23:22 myvmXX kernel: [163933.219982] RBP: ffffb5a580003e98
R08: 0000000000000000 R09: 666657f4d876bbc2
Jan 19 19:23:22 myvmXX kernel: [163933.219983] R10: ffff8d1ee8119618
R11: ffff8d1effa2ffb8 R12: 0000000000000000
Jan 19 19:23:22 myvmXX kernel: [163933.219983] R13: 0000000000000000
R14: 0000000000000009 R15: 6666232f185ebbc2
Jan 19 19:23:22 myvmXX kernel: [163933.219984] FS:
0000000000000000(0000) GS:ffff8d1effa00000(0000) knlGS:0000000000000000
Jan 19 19:23:22 myvmXX kernel: [163933.219984] CS: 0010 DS: 0000 ES:
0000 CR0: 0000000080050033
Jan 19 19:23:22 myvmXX kernel: [163933.219984] CR2: 000055a093634da0
CR3: 00000001280d6000 CR4: 00000000003406f0
Jan 19 19:23:22 myvmXX kernel: [163933.219985] Call Trace:
Jan 19 19:23:22 myvmXX kernel: [163933.219985] <IRQ>
Jan 19 19:23:22 myvmXX kernel: [163933.219985] ? ttwu_do_activate+0x5b/0x70
Jan 19 19:23:22 myvmXX kernel: [163933.219985] update_wall_time+0x10/0x20
Jan 19 19:23:22 myvmXX kernel: [163933.219986]
tick_do_update_jiffies64.part.0+0x88/0xd0
Jan 19 19:23:22 myvmXX kernel: [163933.219986] tick_sched_do_timer+0x58/0x60
Jan 19 19:23:22 myvmXX kernel: [163933.219986] tick_sched_timer+0x2d/0x80
Jan 19 19:23:22 myvmXX kernel: [163933.219987]
__hrtimer_run_queues+0xf7/0x270
Jan 19 19:23:22 myvmXX kernel: [163933.219987] ?
tick_sched_do_timer+0x60/0x60
Jan 19 19:23:22 myvmXX kernel: [163933.219987] hrtimer_interrupt+0x109/0x220
Jan 19 19:23:22 myvmXX kernel: [163933.219988]
smp_apic_timer_interrupt+0x71/0x140
Jan 19 19:23:22 myvmXX kernel: [163933.219988] apic_timer_interrupt+0xf/0x20
Jan 19 19:23:22 myvmXX kernel: [163933.219988] </IRQ>
Jan 19 19:23:22 myvmXX kernel: [163933.219988] RIP:
0010:native_safe_halt+0xe/0x10
Jan 19 19:23:22 myvmXX kernel: [163933.219989] Code: 7b ff ff ff eb bd
90 90 90 90 90 90 e9 07 00 00 00 0f 00 2d d6 39 51 00 f4 c3 66 90 e9 07
00 00 00 0f 00 2d c6 39 51 00 fb f4 <c3> 90 0f 1f 44 00 00 55 48 89 e5
41 55 41 54 53 e8 9d 5e 62 ff 65
Jan 19 19:23:22 myvmXX kernel: [163933.219990] RSP:
0018:ffffffff8be03e18 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
Jan 19 19:23:22 myvmXX kernel: [163933.219990] RAX: ffffffff8aef7a20
RBX: 0000000000000000 RCX: 0000000000000001
Jan 19 19:23:22 myvmXX kernel: [163933.219991] RDX: 000000000bc48666
RSI: ffffffff8be03dd8 RDI: 00009519477dc7a5
Jan 19 19:23:22 myvmXX kernel: [163933.219991] RBP: ffffffff8be03e38
R08: 0000000000000001 R09: 0000000000000002
Jan 19 19:23:22 myvmXX kernel: [163933.219992] R10: 0000000000000000
R11: 0000000000000001 R12: 0000000000000000
Jan 19 19:23:22 myvmXX kernel: [163933.219992] R13: ffffffff8be13780
R14: 0000000000000000 R15: 0000000000000000
Jan 19 19:23:22 myvmXX kernel: [163933.219992] ?
__cpuidle_text_start+0x8/0x8
Jan 19 19:23:22 myvmXX kernel: [163933.219993] ?
tick_nohz_idle_stop_tick+0x164/0x290
Jan 19 19:23:22 myvmXX kernel: [163933.219993] ? default_idle+0x20/0x140
Jan 19 19:23:22 myvmXX kernel: [163933.219993] arch_cpu_idle+0x15/0x20
Jan 19 19:23:22 myvmXX kernel: [163933.219994] default_idle_call+0x23/0x30
Jan 19 19:23:22 myvmXX kernel: [163933.219994] do_idle+0x1fb/0x270
Jan 19 19:23:22 myvmXX kernel: [163933.219994] cpu_startup_entry+0x20/0x30
Jan 19 19:23:22 myvmXX kernel: [163933.219994] rest_init+0xae/0xb0
Jan 19 19:23:22 myvmXX kernel: [163933.219995] arch_call_rest_init+0xe/0x1b
Jan 19 19:23:22 myvmXX kernel: [163933.219995] start_kernel+0x52f/0x550
Jan 19 19:23:22 myvmXX kernel: [163933.219995]
x86_64_start_reservations+0x24/0x26
Jan 19 19:23:22 myvmXX kernel: [163933.219996] x86_64_start_kernel+0x8f/0x93
Jan 19 19:23:22 myvmXX kernel: [163933.219996]
secondary_startup_64+0xa4/0xb0
Jan 19 19:23:22 myvmXX kernel: [163933.220000] NMI backtrace for cpu 1
Jan 19 19:23:22 myvmXX kernel: [163933.220002] CPU: 1 PID: 0 Comm:
swapper/1 Not tainted 5.4.0-137-generic #154-Ubuntu
Jan 19 19:23:22 myvmXX kernel: [163933.220003] Hardware name: oVirt
RHEL, BIOS 1.15.0-1.module_el8.6.0+1087+b42c8331 04/01/2014
Jan 19 19:23:22 myvmXX kernel: [163933.220003] Call Trace:
Jan 19 19:23:22 myvmXX kernel: [163933.220004] <IRQ>
Jan 19 19:23:22 myvmXX kernel: [163933.220006] dump_stack+0x6d/0x8b
Jan 19 19:23:22 myvmXX kernel: [163933.220008] ?
lapic_can_unplug_cpu+0x80/0x80
Jan 19 19:23:22 myvmXX kernel: [163933.220010]
nmi_cpu_backtrace.cold+0x14/0x53
Jan 19 19:23:22 myvmXX kernel: [163933.220013]
nmi_trigger_cpumask_backtrace+0xe8/0xf0
Jan 19 19:23:22 myvmXX kernel: [163933.220015]
arch_trigger_cpumask_backtrace+0x19/0x20
Jan 19 19:23:22 myvmXX kernel: [163933.220016] rcu_dump_cpu_stacks+0x99/0xcb
Jan 19 19:23:22 myvmXX kernel: [163933.220018]
rcu_sched_clock_irq.cold+0x1b0/0x39c
Jan 19 19:23:22 myvmXX kernel: [163933.220020]
update_process_times+0x2c/0x60
Jan 19 19:23:22 myvmXX kernel: [163933.220021] tick_sched_handle+0x29/0x60
Jan 19 19:23:22 myvmXX kernel: [163933.220022] tick_sched_timer+0x3d/0x80
Jan 19 19:23:22 myvmXX kernel: [163933.220024]
__hrtimer_run_queues+0xf7/0x270
Jan 19 19:23:22 myvmXX kernel: [163933.220026] ?
tick_sched_do_timer+0x60/0x60
Jan 19 19:23:22 myvmXX kernel: [163933.220027] hrtimer_interrupt+0x109/0x220
Jan 19 19:23:22 myvmXX kernel: [163933.220029]
smp_apic_timer_interrupt+0x71/0x140
Jan 19 19:23:22 myvmXX kernel: [163933.220030] apic_timer_interrupt+0xf/0x20
Jan 19 19:23:22 myvmXX kernel: [163933.220031] </IRQ>
Jan 19 19:23:22 myvmXX kernel: [163933.220033] RIP:
0010:native_safe_halt+0xe/0x10
Jan 19 19:23:22 myvmXX kernel: [163933.220035] Code: 7b ff ff ff eb bd
90 90 90 90 90 90 e9 07 00 00 00 0f 00 2d d6 39 51 00 f4 c3 66 90 e9 07
00 00 00 0f 00 2d c6 39 51 00 fb f4 <c3> 90 0f 1f 44 00 00 55 48 89 e5
41 55 41 54 53 e8 9d 5e 62 ff 65
Jan 19 19:23:22 myvmXX kernel: [163933.220036] RSP:
0018:ffffb5a58007be70 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
Jan 19 19:23:22 myvmXX kernel: [163933.220037] RAX: ffffffff8aef7a20
RBX: 0000000000000001 RCX: 0000000000000001
Jan 19 19:23:22 myvmXX kernel: [163933.220038] RDX: 000000000d1b7786
RSI: ffffb5a58007be30 RDI: 000095194740bea5
Jan 19 19:23:22 myvmXX kernel: [163933.220039] RBP: ffffb5a58007be90
R08: 0000000000000001 R09: 0000000000000000
Jan 19 19:23:22 myvmXX kernel: [163933.220040] R10: ffff8d1effa5c848
R11: 0000000000000000 R12: 0000000000000001
Jan 19 19:23:22 myvmXX kernel: [163933.220040] R13: ffff8d1eff25af00
R14: 0000000000000000 R15: 0000000000000000
Jan 19 19:23:22 myvmXX kernel: [163933.220042] ?
__cpuidle_text_start+0x8/0x8
Jan 19 19:23:22 myvmXX kernel: [163933.220044] ? default_idle+0x20/0x140
Jan 19 19:23:22 myvmXX kernel: [163933.220046] arch_cpu_idle+0x15/0x20
Jan 19 19:23:22 myvmXX kernel: [163933.220047] default_idle_call+0x23/0x30
Jan 19 19:23:22 myvmXX kernel: [163933.220049] do_idle+0x1fb/0x270
Jan 19 19:23:22 myvmXX kernel: [163933.220051] cpu_startup_entry+0x20/0x30
Jan 19 19:23:22 myvmXX kernel: [163933.220053] start_secondary+0x167/0x1c0
Jan 19 19:23:22 myvmXX kernel: [163933.220054]
secondary_startup_64+0xa4/0xb0
Jan 19 19:24:34 myvmXX systemd[1]: systemd-udevd.service: Watchdog
timeout (limit 3min)!
Jan 19 19:24:34 myvmXX systemd[1]: systemd-udevd.service: Killing
process 421 (systemd-udevd) with signal SIGABRT.
Nov 15 18:38:49 myvmXX kernel: [164024.594313] rcu: INFO: rcu_sched
self-detected stall on CPU
Nov 15 18:38:49 myvmXX kernel: [164024.594352] rcu: 0-...!: (2
ticks this GP) idle=66a/0/0x1 softirq=4017386/4017386 fqs=0
Nov 15 18:38:49 myvmXX kernel: [164024.594380] (t=1844682531753
jiffies g=9505317 q=675)
Nov 15 18:38:49 myvmXX kernel: [164024.594382] rcu: rcu_sched kthread
starved for 1844682531753 jiffies! g9505317 f0x0 RCU_GP_WAIT_FQS(5)
->state=0x200 ->cpu=0
Nov 15 18:38:49 myvmXX kernel: [164024.594413] rcu: RCU grace-period
kthread stack dump:
Nov 15 18:38:49 myvmXX kernel: [164024.594432] rcu_sched R 0
10 2 0x80004000
Nov 15 18:38:49 myvmXX kernel: [164024.594435] Call Trace:
Nov 15 18:38:49 myvmXX kernel: [164024.594445] __schedule+0x2e3/0x740
Nov 15 18:38:49 myvmXX kernel: [164024.594447] schedule+0x42/0xb0
Nov 15 18:38:49 myvmXX kernel: [164024.594449] schedule_timeout+0x8a/0x160
Nov 15 18:38:49 myvmXX kernel: [164024.594453] ?
rcu_accelerate_cbs+0x28/0x190
Nov 15 18:38:49 myvmXX kernel: [164024.594456] ?
__next_timer_interrupt+0xe0/0xe0
Nov 15 18:38:49 myvmXX kernel: [164024.594458] rcu_gp_kthread+0x48d/0x9a0
Nov 15 18:38:49 myvmXX kernel: [164024.594460] kthread+0x104/0x140
Nov 15 18:38:49 myvmXX kernel: [164024.594462] ? kfree_call_rcu+0x20/0x20
Nov 15 18:38:49 myvmXX kernel: [164024.594463] ? kthread_park+0x90/0x90
Nov 15 18:38:49 myvmXX kernel: [164024.594464] ret_from_fork+0x35/0x40
Nov 15 18:38:49 myvmXX kernel: [164024.594467] NMI backtrace for cpu 0
Nov 15 18:38:49 myvmXX kernel: [164024.594470] CPU: 0 PID: 0 Comm:
swapper/0 Not tainted 5.4.0-137-generic #154-Ubuntu
Nov 15 18:38:49 myvmXX kernel: [164024.594471] Hardware name: oVirt
RHEL, BIOS 1.15.0-1.module_el8.6.0+1087+b42c8331 04/01/2014
Nov 15 18:38:49 myvmXX kernel: [164024.594472] Call Trace:
Nov 15 18:38:49 myvmXX kernel: [164024.594473] <IRQ>
Nov 15 18:38:49 myvmXX kernel: [164024.594476] dump_stack+0x6d/0x8b
Nov 15 18:38:49 myvmXX kernel: [164024.594479] ?
lapic_can_unplug_cpu+0x80/0x80
Nov 15 18:38:49 myvmXX kernel: [164024.594480]
nmi_cpu_backtrace.cold+0x14/0x53
Nov 15 18:38:49 myvmXX kernel: [164024.594484]
nmi_trigger_cpumask_backtrace+0xe8/0xf0
Nov 15 18:38:49 myvmXX kernel: [164024.594485]
arch_trigger_cpumask_backtrace+0x19/0x20
Nov 15 18:38:49 myvmXX kernel: [164024.594488] rcu_dump_cpu_stacks+0x99/0xcb
Nov 15 18:38:49 myvmXX kernel: [164024.594489]
rcu_sched_clock_irq.cold+0x1b0/0x39c
Nov 15 18:38:49 myvmXX kernel: [164024.594491]
update_process_times+0x2c/0x60
Nov 15 18:38:49 myvmXX kernel: [164024.594494] tick_sched_handle+0x29/0x60
Nov 15 18:38:49 myvmXX kernel: [164024.594495] tick_sched_timer+0x3d/0x80
Nov 15 18:38:49 myvmXX kernel: [164024.594497]
__hrtimer_run_queues+0xf7/0x270
Nov 15 18:38:49 myvmXX kernel: [164024.594498] ?
tick_sched_do_timer+0x60/0x60
Nov 15 18:38:49 myvmXX kernel: [164024.594500] hrtimer_interrupt+0x109/0x220
Nov 15 18:38:49 myvmXX kernel: [164024.594503]
smp_apic_timer_interrupt+0x71/0x140
Nov 15 18:38:49 myvmXX kernel: [164024.594504] apic_timer_interrupt+0xf/0x20
Nov 15 18:38:49 myvmXX kernel: [164024.594505] </IRQ>
Nov 15 18:38:49 myvmXX kernel: [164024.594507] RIP:
0010:native_safe_halt+0xe/0x10
Nov 15 18:38:49 myvmXX kernel: [164024.594510] Code: 7b ff ff ff eb bd
90 90 90 90 90 90 e9 07 00 00 00 0f 00 2d d6 39 51 00 f4 c3 66 90 e9 07
00 00 00 0f 00 2d c6 39 51 00 fb f4 <c3> 90 0f 1f 44 00 00 55 48 89 e5
41 55 41 54 53 e8 9d 5e 62 ff 65
Nov 15 18:38:49 myvmXX kernel: [164024.594511] RSP:
0018:ffffffff8be03e18 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
Nov 15 18:38:49 myvmXX kernel: [164024.594513] RAX: ffffffff8aef7a20
RBX: 0000000000000000 RCX: 0000000000000001
Nov 15 18:38:49 myvmXX kernel: [164024.594513] RDX: 000000000bc48666
RSI: ffffffff8be03dd8 RDI: 00009519477dc7a5
Nov 15 18:38:49 myvmXX kernel: [164024.594514] RBP: ffffffff8be03e38
R08: 0000000000000001 R09: 0000000000000002
Nov 15 18:38:49 myvmXX kernel: [164024.594515] R10: 0000000000000000
R11: 0000000000000001 R12: 0000000000000000
Nov 15 18:38:49 myvmXX kernel: [164024.594515] R13: ffffffff8be13780
R14: 0000000000000000 R15: 0000000000000000
Nov 15 18:38:49 myvmXX kernel: [164024.594517] ?
__cpuidle_text_start+0x8/0x8
Nov 15 18:38:49 myvmXX kernel: [164024.594519] ?
tick_nohz_idle_stop_tick+0x164/0x290
Nov 15 18:38:49 myvmXX kernel: [164024.594521] ? default_idle+0x20/0x140
Nov 15 18:38:49 myvmXX kernel: [164024.594524] arch_cpu_idle+0x15/0x20
Nov 15 18:38:49 myvmXX kernel: [164024.594525] default_idle_call+0x23/0x30
Nov 15 18:38:49 myvmXX kernel: [164024.594528] do_idle+0x1fb/0x270
Nov 15 18:38:49 myvmXX kernel: [164024.594530] cpu_startup_entry+0x20/0x30
Nov 15 18:38:49 myvmXX kernel: [164024.594532] rest_init+0xae/0xb0
Nov 15 18:38:49 myvmXX kernel: [164024.594536] arch_call_rest_init+0xe/0x1b
Nov 15 18:38:49 myvmXX kernel: [164024.594537] start_kernel+0x52f/0x550
Nov 15 18:38:49 myvmXX kernel: [164024.594539]
x86_64_start_reservations+0x24/0x26
Nov 15 18:38:49 myvmXX kernel: [164024.594541] x86_64_start_kernel+0x8f/0x93
Nov 15 18:38:49 myvmXX kernel: [164024.594544]
secondary_startup_64+0xa4/0xb0
[... long run of NUL bytes (^@) in the log omitted ...]
Jan 19 19:28:51 myvmXX systemd-sysctl[413]: Not setting
net/ipv4/conf/all/promote_secondaries (explicit setting exists).
--
*Ralf Schenk*
fon: +49 2405 40837-0
mail: rs(a)databay.de
web: www.databay.de <https://www.databay.de>
Databay AG
Jens-Otto-Krag-Str. 11
52146 Würselen
Registered office / local court: Aachen • HRB: 8437 • VAT ID: DE 210844202
Management board: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari,
Dipl.-Kfm. Philipp Hermanns
Chairman of the supervisory board: Dr. Jan Scholzen
Privacy information for customers: available here
<https://www.databay.de/datenschutzhinweise-fuer-kunden>
2 years, 3 months
unsynced after remove brick
by Dominique D
Hello,
Yesterday I had to remove the brick of my first server (HCI with 3 servers)
for maintenance and to recover the hard disks.
3 servers with 4 disks per server in RAID 5, 1 brick per server.
I did:
gluster volume remove-brick data replica 2 ovnode1s.telecom.lan:/gluster_bricks/datassd/datassd force
After deleting the brick I had 8 unsynced entries, and this morning I have 6.
What should I do to resolve these unsynced entries?
[root@ovnode2 ~]# gluster volume status
Status of volume: datassd
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick ovnode2s.telecom.lan:/gluster_bricks/
datassd/datassd 49152 0 Y 2431
Brick ovnode3s.telecom.lan:/gluster_bricks/
datassd/datassd 49152 0 Y 2379
Self-heal Daemon on localhost N/A N/A Y 2442
Self-heal Daemon on ovnode3s.telecom.lan N/A N/A Y 2390
Task Status of Volume datassd
------------------------------------------------------------------------------
[root@ovnode2 ~]# gluster volume heal datassd info
Brick ovnode2s.telecom.lan:/gluster_bricks/datassd/datassd
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.7
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.150
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.241
/.shard/21907c8f-abe2-4501-b597-d1c2f9a0fa92.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.155
Status: Connected
Number of entries: 6
Brick ovnode3s.telecom.lan:/gluster_bricks/datassd/datassd
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.7
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.150
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.241
/.shard/21907c8f-abe2-4501-b597-d1c2f9a0fa92.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.155
Status: Connected
Number of entries: 6
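For these unsynced entries I was thinking of triggering a heal manually,
something like the following, but I am not sure it is the right approach:
gluster volume heal datassd         # trigger an index heal
gluster volume heal datassd full    # or a full heal if the index heal does not clear them
gluster volume heal datassd info    # re-check the remaining entries afterwards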
Thank you
2 years, 3 months
4.5.4 Hosted-Engine: change hosted-engine storage
by Diego Ercolani
ovirt-engine-appliance-4.5-20221206133948
Hello,
I have some trouble with my Gluster instance that hosts the hosted-engine storage. I would like to copy the data from that hosted-engine storage and move it to another hosted-engine storage domain (I will try NFS).
I think the main method is to put oVirt in global maintenance mode, stop the hosted-engine instance, unmount the hosted-engine storage:
systemctl stop vdsmd supervdsmd ovirt-ha-broker ovirt-ha-agent
hosted-engine --disconnect-storage
umount /rhev/data-center/mnt/glusterSD/localhost:_glen/
and then redeploy the hosted engine with the current backup:
method 1: [doesn't work; the Ansible script fails after the pause]
ovirt-hosted-engine-setup --4 --config-append=/var/lib/ovirt-hosted-engine-setup/answers/answers-20230116184834.conf --restore-from-file=230116-backup.tar.gz --ansible-extra-vars=he_pause_before_engine_setup=true
but it doesn't work
method 2: [I cannot find the glitch]
I saw that on every node there is a configuration file, /etc/ovirt-hosted-engine/hosted-engine.conf, with some interesting entries:
vm_disk_id=0a1a501c-fc45-430f-bfd3-076172cec406
vm_disk_vol_id=f65dab86-67f1-46fa-87c0-f9076f479741
storage=localhost:/glen
domainType=glusterfs
metadata_volume_UUID=4e64e155-ee11-4fdc-b9a0-a7cbee5e4181
metadata_image_UUID=20234090-ea40-4614-ae95-8f91b339ba3e
lockspace_volume_UUID=6a975f46-4126-4c2a-b444-6e5a34872cf6
lockspace_image_UUID=893a1fc1-9a1d-44fc-a02f-8fdac19afc18
conf_volume_UUID=206e505f-1bb8-4cc4-abd9-942654c47612
conf_image_UUID=9871a483-8e7b-4f52-bf71-a6a8adc2309b
so the conf_volume file under the conf_image directory is a tar archive:
/rhev/data-center/mnt/glusterSD/localhost:_glen/3577c21e-f757-4405-97d1-0f827c9b4e22/images/9871a483-8e7b-4f52-bf71-a6a8adc2309b/206e505f-1bb8-4cc4-abd9-942654c47612: POSIX tar archive (GNU)
That, I think, is the common (shared) configuration. I copied the structure (from the 3577c21e-f757-4405-97d1-0f827c9b4e22 directory) to the new storage (NFS) and changed the entries in the local configuration and the "shared configuration":
from storage=localhost:/glen to storage=<server>:/directory (writable by the vdsm user and kvm group)
from domainType=glusterfs to domainType=nfs
and issued:
1. hosted-engine --connect-storage <- works
2. hosted-engine --vm-start <- doesn't work, because it complains that ovirt-ha-agent is not starting
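If it helps, these are the logs I can collect for the failing step (paths
assumed from a standard node install):
journalctl -u ovirt-ha-agent -u ovirt-ha-broker --since "-1h"
tail -n 100 /var/log/ovirt-hosted-engine-ha/agent.log
tail -n 100 /var/log/ovirt-hosted-engine-ha/broker.log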
What can I do?
Is there any documentation somewhere?
Diego
2 years, 3 months
Bug in the engine-backup script when no attached TTY -- Easy fix
by eshwayri@gmail.com
The output() function. This line:
printf "%s\n" "${m}"
It will fail if there is no attached TTY, which sets the exit code to 1, which in turn triggers the cleanup() function, notifying the engine that the backup failed. Ironically, this happens exactly when it should be writing "Done." and exiting after a successful backup. The fix I used was to change it to:
printf "%s\n" "${m}" >> "${LOG}"
You can't assume an attached TTY, since a lot of people like me want to run this as part of a pre/post script for an automated backup program.
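Another option, if you still want console output when a terminal is present,
would be a guard like this (just a sketch against the same output() wrapper
and its ${m}/${LOG} variables, not what I actually deployed):
if [ -t 1 ]; then
    # stdout is a terminal, so it is safe to print to the console
    printf "%s\n" "${m}"
fi
printf "%s\n" "${m}" >> "${LOG}"    # always record the message in the log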
2 years, 3 months
Hosted Engine No Longer Has IP Address - Help!
by Matthew J Black
Hi Guys,
I've gone and shot myself in the foot - and I'm looking for some first-aid.
I've managed to remove the IP address of the oVirt self-hosted engine and so have lost contact with it (don't ask how - let's just say I f*cked up). I *think* it's still running, I've got it set to DHCP, and I've got access to the host it's running on, so my question(s) is:
- (The preferred method) How can I re-establish (console?) contact - I'm thinking via the host server and some kvm commands (something like the sketch below?), so I can issue a `dhclient` command
- (The most drastic) How can I get it to reboot, i.e. is there a command / command sequence to do this?
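The closest I've found so far is something along these lines (command names
taken from the hosted-engine docs; I haven't dared to run them yet, so
please treat this as a sketch):
hosted-engine --vm-status     # check what the HA agent thinks is going on
hosted-engine --console       # serial console into the engine VM, if the HA services are still up
hosted-engine --vm-poweroff   # the drastic option: power the engine VM off ...
hosted-engine --vm-start      # ... and start it again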
Any help would be appreciated.
Cheers
Dulux-Oz
2 years, 3 months
update path from 4.4.4 to 4.5.4
by marek
hi,
I have a standalone ovirt-engine 4.4.4 (CentOS 8 Stream) and a few hosts
(a mix of old CentOS 8.2, CentOS 8 Stream, and Rocky Linux).
Can you confirm whether this upgrade path is OK?
first round
- update engine to 4.4.10
- update OS on engine to the latest CentOS 8 Stream (can I upgrade to some
point in time recommended for 4.4.10?)
- update hosts to oVirt 4.4.10
- update OS on hosts to the latest CentOS 8 Stream and Rocky Linux 8.6
special case in the first round - CentOS 8.2
- migrate the OS on these hosts from CentOS 8.2 to Rocky Linux 8.6
- update hosts to oVirt 4.4.10
second round
- update engine to 4.5.4
- update hosts to oVirt 4.5.4
- update OS on hosts - CentOS 8 Stream is ALREADY on the latest update from
the first round
- update OS on hosts - Rocky Linux - from 8.6 to 8.7
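For the engine steps themselves I plan to follow the usual sequence in both
rounds, roughly like this (a sketch only, release package names assumed,
please correct me if wrong):
# after enabling the matching release repo (e.g. ovirt-release44 for 4.4.10, centos-release-ovirt45 for 4.5.4)
dnf update -y ovirt\*setup\*
engine-setup
dnf update -y    # remaining OS packages, reboot if a new kernel arrived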
Marek
2 years, 3 months
Preferred way to give customer access to VMs
by hanisirfan.work@gmail.com
Hi. Just for an introduction, I'm a junior staff member working on a way to deploy a KVM cluster to provision VMs for our customers. Before this, we were using VMware ESXi and connecting it to OpenNebula as the console that we give to customers.
We're moving to KVM due to VMware licensing costs. I've successfully deployed an oVirt cluster and am currently able to access it remotely via a VPN that I've set up on a virtualized pfSense VM inside the cluster.
My question is: what is the best way to give customers console access to the VMs that we provision for them? Surely we don't want to give them access to our VPN, for security reasons.
I can't seem to find a way to connect OpenNebula with oVirt, and I believe it isn't possible since both are basically virtualization managers, managing the KVM instances.
Thanks for the responses and ideas given ☺️
2 years, 3 months