LVM-Thin error during node installation
by Jorge Visentini
Hello.
I'm having trouble installing 4.4.2. The same problem happened with the 4.4.1
media.
During the post-installation step it reports an LVM Thin Provisioning error.
I tried switching from LVM Thin to plain LVM and the same error occurred.
In version 4.3 this error does not happen.
I was unable to copy the error text, so I am attaching images.
[image: ovirt_installer01.jpg]
[image: ovirt_installer02.jpg]
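In case the exact error text is needed, the installer's own logs may capture the failing LVM command. A hedged pointer only: these are the standard Anaconda log locations, and I have not verified them on this media.
# switch to a shell in the installer (Ctrl+Alt+F2) and look at:
less /tmp/storage.log   # storage / LVM operations
less /tmp/program.log   # external commands (lvcreate etc.) and their output
less /tmp/anaconda.log  # overall installer flow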
Thank you all!
--
Regards,
Jorge Visentini
+55 55 98432-9868
ISO Repo
by Jeremey Wise
I saw notes that oVirt 4.4 may no longer support ISO storage domains... but
there are times, like now, when I need to build from specific ISO images.
As a workaround I tried creating an 8 GB image file and then doing dd if=blah.iso
of=/<random disk ID image file name>
I created a new VM with this as the boot disk and it fails to boot... so back
to "create a volume for ISO images".
But when I do that I get an error.
New Domain -> Domain Function = iso, Storage Type = GlusterFS
Use managed Gluster volume -> select the already working Gluster volume:
thor.penguinpages.local:/iso
VFS Type: glusterfs
Mount options: backup-volfile-servers=odin.penguinpages.local:medusa.penguinpages.local
Error:
Error while executing action: Cannot add Storage Connection. Performance
o_direct option is not enabled for storage domain.
Questions:
1) Why did creating the image file and dd'ing the ISO onto it as a boot disk not work?
2) Any ideas on creating an ISO mount volume?
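Regarding the o_direct error above: it usually points at the Gluster volume options rather than the domain form itself. A hedged sketch of checking and adjusting them on the volume named above, run on one of the Gluster nodes (which exact combination your version expects may differ):
# inspect the two direct-I/O related options oVirt cares about
gluster volume get iso performance.strict-o-direct
gluster volume get iso network.remote-dio
# one common fix is to apply the virt profile, which sets the options oVirt expects
gluster volume set iso group virt
# or set them explicitly, as oVirt 4.4 HCI deployments typically do
gluster volume set iso performance.strict-o-direct on
gluster volume set iso network.remote-dio off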
--
penguinpages
Adding host fails with Ansible host-deploy role: Internal server error.
by Andrey Andrey
Hello!
I updated the hosted engine to version 4.4.2. When I try to add a new host I get the error: Host node2.kvm.test.s1.eirc installation failed. Failed to execute Ansible host-deploy role: Internal server error. Please check logs for more details: /var/log/ovirt-engine/ansible-runner-service.log.
I also updated the hosted engine to version 4.4.3.3-1.el8 and get the same error.
Only old entries appear in that log; no new ones show up:
[root@admin ovirt-engine]# tail /var/log/ovirt-engine/ansible-runner-service.log
2020-09-22 18:02:13,943 - runner_service.controllers.jobs - INFO - 127.0.0.1 - GET /api/v1/jobs/8e369e4c-fce4-11ea-b893-00163e4843d7/events
2020-09-22 18:02:13,943 - runner_service.controllers.jobs - INFO - 127.0.0.1 - GET /api/v1/jobs/8e3cba7a-fce4-11ea-b893-00163e4843d7/events
2020-09-22 18:02:13,944 - runner_service.services.jobs - DEBUG - Job events for play 8e369e4c-fce4-11ea-b893-00163e4843d7: 25
2020-09-22 18:02:13,944 - runner_service.services.jobs - DEBUG - Job events for play 8e3cba7a-fce4-11ea-b893-00163e4843d7: 25
2020-09-22 18:02:13,944 - runner_service.services.jobs - DEBUG - Active filter is :{}
2020-09-22 18:02:13,944 - runner_service.services.jobs - DEBUG - Active filter is :{}
2020-09-22 18:02:13,948 - runner_service.controllers.jobs - DEBUG - Request received, content-type :None
2020-09-22 18:02:13,949 - runner_service.controllers.jobs - INFO - 127.0.0.1 - GET /api/v1/jobs/8e369e4c-fce4-11ea-b893-00163e4843d7/events
2020-09-22 18:02:13,949 - runner_service.services.jobs - DEBUG - Job events for play 8e369e4c-fce4-11ea-b893-00163e4843d7: 25
2020-09-22 18:02:13,949 - runner_service.services.jobs - DEBUG - Active filter is :{}
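Since no new entries appear there, a hedged set of checks for whether the runner service itself is still alive (assuming the standard systemd unit name shipped with this 4.4 series):
systemctl status ansible-runner-service
journalctl -u ansible-runner-service --since "1 hour ago"
tail -f /var/log/ovirt-engine/ansible-runner-service.log   # watch while re-adding the host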
In /var/log/ovirt-engine/engine.log:
2020-09-24 10:36:36,645+03 INFO [org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-23) [de214fcc-31fc-464c-8ac1-6e111889cd20] Running command: AddVdsCommand internal: false. Entities affected : ID: 8e827912-a0ef-11ea-826c-00163e4843d7 Type: ClusterAction group CREATE_HOST with role type ADMIN
2020-09-24 10:36:36,666+03 INFO [org.ovirt.engine.core.bll.AddVdsSpmIdCommand] (default task-23) [7b77f4da] Before acquiring and wait lock 'EngineLock:{exclusiveLocks='[8e7c3f2a-a0ef-11ea-971b-00163e4843d7=REGISTER_VDS]', sharedLocks=''}'
2020-09-24 10:36:36,666+03 INFO [org.ovirt.engine.core.bll.AddVdsSpmIdCommand] (default task-23) [7b77f4da] Lock-wait acquired to object 'EngineLock:{exclusiveLocks='[8e7c3f2a-a0ef-11ea-971b-00163e4843d7=REGISTER_VDS]', sharedLocks=''}'
2020-09-24 10:36:36,667+03 INFO [org.ovirt.engine.core.bll.AddVdsSpmIdCommand] (default task-23) [7b77f4da] Running command: AddVdsSpmIdCommand internal: true. Entities affected : ID: 3a5cbd19-82b7-41ce-870b-5b65184934c9 Type: VDS
2020-09-24 10:36:36,674+03 INFO [org.ovirt.engine.core.bll.AddVdsSpmIdCommand] (default task-23) [7b77f4da] Lock freed to object 'EngineLock:{exclusiveLocks='[8e7c3f2a-a0ef-11ea-971b-00163e4843d7=REGISTER_VDS]', sharedLocks=''}'
2020-09-24 10:36:36,678+03 INFO [org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand] (default task-23) [7b77f4da] START, RemoveVdsVDSCommand(HostName = node2.kvm.test.s1.eirc, RemoveVdsVDSCommandParameters:{hostId='3a5cbd19-82b7-41ce-870b-5b65184934c9'}), log id: 7b990f6d
2020-09-24 10:36:36,678+03 INFO [org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand] (default task-23) [7b77f4da] FINISH, RemoveVdsVDSCommand, return: , log id: 7b990f6d
2020-09-24 10:36:36,680+03 INFO [org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (default task-23) [7b77f4da] START, AddVdsVDSCommand(HostName = node2.kvm.test.s1.eirc, AddVdsVDSCommandParameters:{hostId='3a5cbd19-82b7-41ce-870b-5b65184934c9'}), log id: 20f7b449
2020-09-24 10:36:36,682+03 INFO [org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (default task-23) [7b77f4da] AddVds - entered , starting logic to add VDS '3a5cbd19-82b7-41ce-870b-5b65184934c9'
2020-09-24 10:36:36,684+03 INFO [org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (default task-23) [7b77f4da] AddVds - VDS '3a5cbd19-82b7-41ce-870b-5b65184934c9' was added, will try to add it to the resource manager
2020-09-24 10:36:36,684+03 INFO [org.ovirt.engine.core.vdsbroker.VdsManager] (default task-23) [7b77f4da] Entered VdsManager constructor
2020-09-24 10:36:36,693+03 INFO [org.ovirt.engine.core.vdsbroker.VdsManager] (default task-23) [7b77f4da] Initialize vdsBroker 'node2.kvm.test.s1.eirc:54321'
2020-09-24 10:36:36,783+03 INFO [org.ovirt.engine.core.vdsbroker.ResourceManager] (default task-23) [7b77f4da] VDS '3a5cbd19-82b7-41ce-870b-5b65184934c9' was added to the Resource Manager
2020-09-24 10:36:36,783+03 INFO [org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (default task-23) [7b77f4da] FINISH, AddVdsVDSCommand, return: , log id: 20f7b449
2020-09-24 10:36:36,790+03 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-23) [7b77f4da] EVENT_ID: VDS_ALERT_FENCE_IS_NOT_CONFIGURED(9,000), Failed to verify Power Management configuration for Host node2.kvm.test.s1.eirc.
2020-09-24 10:36:36,805+03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-23) [7b77f4da] EVENT_ID: USER_ADD_VDS(42), Host node2.kvm.test.s1.eirc was added by admin@internal-authz.
2020-09-24 10:36:36,824+03 INFO [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1271) [5459582a] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3a5cbd19-82b7-41ce-870b-5b65184934c9=VDS]', sharedLocks=''}'
2020-09-24 10:36:36,830+03 INFO [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1271) [5459582a] Running command: InstallVdsInternalCommand internal: true. Entities affected : ID: 3a5cbd19-82b7-41ce-870b-5b65184934c9 Type: VDS
2020-09-24 10:36:36,830+03 INFO [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1271) [5459582a] Before Installation host 3a5cbd19-82b7-41ce-870b-5b65184934c9, node2.kvm.test.s1.eirc
2020-09-24 10:36:36,832+03 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1271) [5459582a] START, SetVdsStatusVDSCommand(HostName = node2.kvm.test.s1.eirc, SetVdsStatusVDSCommandParameters:{hostId='3a5cbd19-82b7-41ce-870b-5b65184934c9', status='Installing', nonOperationalReason='NONE', stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 2b846a38
2020-09-24 10:36:36,835+03 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1271) [5459582a] FINISH, SetVdsStatusVDSCommand, return: , log id: 2b846a38
2020-09-24 10:36:36,842+03 INFO [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1271) [5459582a] Opening ssh-copy-id session on host node2.kvm.test.s1.eirc
2020-09-24 10:36:37,058+03 INFO [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1271) [5459582a] Executing ssh-copy-id command on host node2.kvm.test.s1.eirc
2020-09-24 10:36:37,969+03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-1271) [5459582a] EVENT_ID: VDS_ANSIBLE_INSTALL_STARTED(560), Ansible host-deploy playbook execution has started on host node2.kvm.test.s1.eirc.
2020-09-24 10:36:37,986+03 ERROR [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (EE-ManagedThreadFactory-engine-Thread-1271) [5459582a] Exception: Internal server error
2020-09-24 10:36:37,987+03 ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1271) [5459582a] Host installation failed for host '3a5cbd19-82b7-41ce-870b-5b65184934c9', 'node2.kvm.test.s1.eirc': Failed to execute Ansible host-deploy role: Internal server error. Please check logs for more details: /var/log/ovirt-engine/ansible-runner-service.log
2020-09-24 10:36:37,989+03 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1271) [5459582a] START, SetVdsStatusVDSCommand(HostName = node2.kvm.test.s1.eirc, SetVdsStatusVDSCommandParameters:{hostId='3a5cbd19-82b7-41ce-870b-5b65184934c9', status='InstallFailed', nonOperationalReason='NONE', stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 3f5bca62
2020-09-24 10:36:37,994+03 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-1271) [5459582a] FINISH, SetVdsStatusVDSCommand, return: , log id: 3f5bca62
2020-09-24 10:36:37,997+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-1271) [5459582a] EVENT_ID: VDS_INSTALL_FAILED(505), Host node2.kvm.test.s1.eirc installation failed. Failed to execute Ansible host-deploy role: Internal server error. Please check logs for more details: /var/log/ovirt-engine/ansible-runner-service.log.
2020-09-24 10:36:38,001+03 INFO [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1271) [5459582a] Lock freed to object 'EngineLock:{exclusiveLocks='[3a5cbd19-82b7-41ce-870b-5b65184934c9=VDS]', sharedLocks=''}'
P.S. Sorry, this is Google Translate.
Installed oVirt 4.4.2 on CentOS 8
by info@worldhostess.com
This is a new basic oVirt installation on CentOS 8 on a physical server. The default
data center is in an uninitialized state.
This is my first working installation and I have no idea what to do next.
Does anyone know of a step-by-step guide for a fresh install?
Any advice will be appreciated.
oVirt - Gluster Node Offline but Bricks Active
by Jeremey Wise
The oVirt engine shows one of the Gluster servers having an issue. I did a
graceful shutdown of all three nodes over the weekend, as I had to move around
some power connections in prep for a UPS.
Everything came back up... but...
[image: image.png]
And this is reflected as only 2 bricks online (there should be three for each volume).
[image: image.png]
Command line shows gluster should be happy.
[root@thor engine]# gluster peer status
Number of Peers: 2
Hostname: odinst.penguinpages.local
Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
State: Peer in Cluster (Connected)
Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)
[root@thor engine]#
# All bricks showing online
[root@thor engine]# gluster volume status
Status of volume: data
Gluster process                                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick thorst.penguinpages.local:/gluster_bricks/data/data    49152     0          Y       11001
Brick odinst.penguinpages.local:/gluster_bricks/data/data    49152     0          Y       2970
Brick medusast.penguinpages.local:/gluster_bricks/data/data  49152     0          Y       2646
Self-heal Daemon on localhost                                N/A       N/A        Y       50560
Self-heal Daemon on odinst.penguinpages.local                N/A       N/A        Y       3004
Self-heal Daemon on medusast.penguinpages.local              N/A       N/A        Y       2475

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: engine
Gluster process                                                  TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick thorst.penguinpages.local:/gluster_bricks/engine/engine    49153     0          Y       11012
Brick odinst.penguinpages.local:/gluster_bricks/engine/engine    49153     0          Y       2982
Brick medusast.penguinpages.local:/gluster_bricks/engine/engine  49153     0          Y       2657
Self-heal Daemon on localhost                                    N/A       N/A        Y       50560
Self-heal Daemon on odinst.penguinpages.local                    N/A       N/A        Y       3004
Self-heal Daemon on medusast.penguinpages.local                  N/A       N/A        Y       2475

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: iso
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick thorst.penguinpages.local:/gluster_bricks/iso/iso    49156     49157      Y       151426
Brick odinst.penguinpages.local:/gluster_bricks/iso/iso    49156     49157      Y       69225
Brick medusast.penguinpages.local:/gluster_bricks/iso/iso  49156     49157      Y       45018
Self-heal Daemon on localhost                              N/A       N/A        Y       50560
Self-heal Daemon on odinst.penguinpages.local              N/A       N/A        Y       3004
Self-heal Daemon on medusast.penguinpages.local            N/A       N/A        Y       2475

Task Status of Volume iso
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vmstore
Gluster process                                                    TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick thorst.penguinpages.local:/gluster_bricks/vmstore/vmstore    49154     0          Y       11023
Brick odinst.penguinpages.local:/gluster_bricks/vmstore/vmstore    49154     0          Y       2993
Brick medusast.penguinpages.local:/gluster_bricks/vmstore/vmstore  49154     0          Y       2668
Self-heal Daemon on localhost                                      N/A       N/A        Y       50560
Self-heal Daemon on medusast.penguinpages.local                    N/A       N/A        Y       2475
Self-heal Daemon on odinst.penguinpages.local                      N/A       N/A        Y       3004

Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks
[root@thor engine]# gluster volume heal
data engine iso vmstore
[root@thor engine]# gluster volume heal data info
Brick thorst.penguinpages.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0
Brick odinst.penguinpages.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0
Brick medusast.penguinpages.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0
[root@thor engine]# gluster volume heal engine
Launching heal operation to perform index self heal on volume engine has
been successful
Use heal info commands to check status.
[root@thor engine]# gluster volume heal engine info
Brick thorst.penguinpages.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
Brick odinst.penguinpages.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
Brick medusast.penguinpages.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
[root@thor engine]# gluster volume heal vmwatore info
Volume vmwatore does not exist
Volume heal failed.
[root@thor engine]#
So I am not sure what to do in the oVirt engine to make it happy again.
--
penguinpages <jeremey.wise(a)gmail.com>
Gluster Name too long
by Jeremey Wise
Deployment on a three-node cluster using the oVirt HCI wizard.
I think this is a bug: the wizard needs either a pre-flight name-length
validation or a larger valid field length.
I avoid using /dev/sd# since those can change, and the wizard allows switching
to a more explicit device, e.g.:
/dev/mapper/Samsung_SSD_850_PRO_512GB_S250NXAGA15787L
Error:
TASK [gluster.infra/roles/backend_setup : Create a LV thinpool for similar
device types] ***
task path:
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_pool_create.yml:239
failed: [thorst.penguinpages.local] (item={'vgname':
'gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L', 'thinpoolname':
'gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L',
'poolmetadatasize': '3G'}) => {"ansible_loop_var": "item", "changed":
false, "err": " Full LV name
\"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tdata\"
is too long.\n Full LV name
\"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tdata\"
is too long.\n Full LV name
\"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tdata\"
is too long.\n Full LV name
\"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tdata\"
is too long.\n Full LV name
\"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tdata\"
is too long.\n Full LV name
\"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tmeta\"
is too long.\n Internal error: LV name
\"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tmeta\"
length 130 is not supported.\n Internal error: LV name
\"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tdata\"
length 130 is not supported.\n Internal error: LV name
\"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tmeta\"
length 130 is not supported.\n Internal error: LV name
\"gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L/gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L_tdata\"
length 130 is not supported.\n", "item": {"poolmetadatasize": "3G",
"thinpoolname":
"gluster_thinpool_gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L",
"vgname": "gluster_vg_Samsung_SSD_850_PRO_512GB_S250NXAGA15787L"}, "msg":
"Creating logical volume 'None' failed", "rc": 5}
failed: [medusast.penguinpages.local] (item={'vgname':
'gluster_vg_SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306', 'thinpoolname':
'gluster_thinpool_gluster_vg_SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306',
'poolmetadatasize': '3G'}) => {"ansible_loop_var": "item", "changed":
false, "err": " Internal error: LV name
\"gluster_vg_SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306/gluster_thinpool_gluster_vg_SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306\"
length 130 is not supported.\n", "item": {"poolmetadatasize": "3G",
"thinpoolname":
"gluster_thinpool_gluster_vg_SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306",
"vgname": "gluster_vg_SAMSUNG_SSD_PM851_mSATA_512GB_S1EWNYAF609306"},
"msg": "Creating logical volume 'None' failed", "rc": 5}
changed: [odinst.penguinpages.local] => (item={'vgname':
'gluster_vg_Micron_1100_MTFDDAV512TBN_17401F699137', 'thinpoolname':
'gluster_thinpool_gluster_vg_Micron_1100_MTFDDAV512TBN_17401F699137',
'poolmetadatasize': '3G'}) => {"ansible_loop_var": "item", "changed": true,
"item": {"poolmetadatasize": "3G", "thinpoolname":
"gluster_thinpool_gluster_vg_Micron_1100_MTFDDAV512TBN_17401F699137",
"vgname": "gluster_vg_Micron_1100_MTFDDAV512TBN_17401F699137"}, "msg": ""}
I will revert back to /dev/sd# for now... but this should be cleaned up.
Attached is the YAML file used for the cluster deployment.
--
penguinpages <jeremey.wise(a)gmail.com>
Host has to be reinstalled
by Jeremey Wise
Trying to repair / clean up HCI deployment so it is HA and ready for
"production".
Gluster now shows all three bricks green.
Now I just have an error on one node... and of course it is the node hosting the
ovirt-engine.
(As I cannot send images to this forum, I will describe the path instead:)
Compute -> Hosts -> "thor" (red exclamation)
"Host has to be reinstalled"
To fix Gluster I had to reinstall "vdsm-gluster".
But which package does this error need to be reviewed / fixed with?
--
penguinpages
info on iSCSI connection setup in oVirt 4.4
by Gianluca Cecchi
Hello,
suppose a node connects to an iSCSI storage domain in oVirt 4.4: is there any
particular requirement for the configuration of the network adapter (the
ifcfg-eno1 file) when I pre-configure the server OS?
E.g., does it need to be managed by NetworkManager in 4.4? Can I instead set
NM_CONTROLLED=no for this interface?
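For context, this is the kind of pre-configuration I mean; a minimal sketch with placeholder addressing, not a recommendation:
# /etc/sysconfig/network-scripts/ifcfg-eno1  (illustrative values only)
DEVICE=eno1
BOOTPROTO=none
IPADDR=10.10.10.11
PREFIX=24
ONBOOT=yes
NM_CONTROLLED=no   # <- this is the setting I am asking about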
Thanks in advance,
Gianluca
Question on "Memory" column/field in Virtual Machines list/table in ovirt GUI
by KISHOR K
Hi,
The Memory column for a few of the VMs in our oVirt (Compute -> Virtual Machines -> Memory column) shows more than 90%.
But when I checked the actual "used" memory on those VMs (with free and other commands), it is less than 60%. From free -h it looks like oVirt is counting "used" + "buff/cache" memory and reporting that in the GUI.
Shouldn't the "available" figure be used instead, since that is the memory actually available, and the cache is reclaimed whenever it is needed?
:~> free -h
              total        used        free      shared  buff/cache   available
Mem:           15Gi       6.8Gi       724Mi       310Mi       8.1Gi       9.2Gi
Swap:            0B          0B          0B
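To make the gap concrete, a quick back-of-the-envelope from the numbers above (approximate, since free rounds to Gi):
# "used + buff/cache", which seems to match what the GUI reports
awk 'BEGIN { printf "%.0f%%\n", (6.8 + 8.1) / 15 * 100 }'   # ~99%
# "total - available", which reflects memory actually in use
awk 'BEGIN { printf "%.0f%%\n", (15 - 9.2) / 15 * 100 }'    # ~39%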
Can someone help answer this? Thanks!
/Kishore