Sure, will do that.
Hi,
vdsm.log:
2018-12-18 11:41:28,025+0530 INFO (periodic/3) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=d47301b2-ef1a-43af-ac14-e8ef623ab783 (api:46)
2018-12-18 11:41:28,026+0530 INFO (periodic/3) [vdsm.api] FINISH repoStats
return={} from=internal, task_id=d47301b2-ef1a-43af-ac14-e8ef623ab783
(api:52)
2018-12-18 11:41:28,026+0530 INFO (periodic/3) [vdsm.api] START
multipath_health() from=internal,
task_id=d6d30c7d-4814-437c-9bfb-6b89d5a3a61a (api:46)
2018-12-18 11:41:28,026+0530 INFO (periodic/3) [vdsm.api] FINISH
multipath_health return={} from=internal,
task_id=d6d30c7d-4814-437c-9bfb-6b89d5a3a61a (api:52)
2018-12-18 11:41:34,651+0530 INFO (jsonrpc/7) [api.host] START
getAllVmStats() from=::1,51148 (api:46)
2018-12-18 11:41:34,651+0530 INFO (jsonrpc/7) [api.host] FINISH
getAllVmStats return={'status': {'message': 'Done',
'code': 0},
'statsList': (suppressed)} from=::1,51148 (api:52)
2018-12-18 11:41:34,651+0530 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573)
2018-12-18 11:41:43,041+0530 INFO (periodic/3) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=bd441a62-45ea-4821-b1d9-9440c6842a71 (api:46)
2018-12-18 11:41:43,041+0530 INFO (periodic/3) [vdsm.api] FINISH repoStats
return={} from=internal, task_id=bd441a62-45ea-4821-b1d9-9440c6842a71
(api:52)
2018-12-18 11:41:43,041+0530 INFO (periodic/3) [vdsm.api] START
multipath_health() from=internal,
task_id=9a284f8a-b64b-4499-8e32-1ac775252519 (api:46)
2018-12-18 11:41:43,041+0530 INFO (periodic/3) [vdsm.api] FINISH
multipath_health return={} from=internal,
task_id=9a284f8a-b64b-4499-8e32-1ac775252519 (api:52)
2018-12-18 11:41:49,670+0530 INFO (jsonrpc/0) [api.host] START
getAllVmStats() from=::1,51148 (api:46)
2018-12-18 11:41:49,670+0530 INFO (jsonrpc/0) [api.host] FINISH
getAllVmStats return={'status': {'message': 'Done',
'code': 0},
'statsList': (suppressed)} from=::1,51148 (api:52)
2018-12-18 11:41:49,670+0530 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573)
2018-12-18 11:41:58,065+0530 INFO (periodic/2) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=54372170-9413-41af-8c8f-b827b99939be (api:46)
2018-12-18 11:41:58,065+0530 INFO (periodic/2) [vdsm.api] FINISH repoStats
return={} from=internal, task_id=54372170-9413-41af-8c8f-b827b99939be
(api:52)
2018-12-18 11:41:58,066+0530 INFO (periodic/2) [vdsm.api] START
multipath_health() from=internal,
task_id=08f46b07-ed6c-40d0-83c4-00ac1151349b (api:46)
2018-12-18 11:41:58,066+0530 INFO (periodic/2) [vdsm.api] FINISH
multipath_health return={} from=internal,
task_id=08f46b07-ed6c-40d0-83c4-00ac1151349b (api:52)
supervdsm.log:
MainThread::DEBUG::2018-12-14
17:03:03,785::storage_log::76::blivet::(log_exception_info) IGNORED:
initiatorname = libiscsi.get_firmware_initiator_name()
MainThread::DEBUG::2018-12-14
17:03:03,785::storage_log::76::blivet::(log_exception_info) IGNORED:
IOError: Unknown error
MainThread::DEBUG::2018-12-14
17:03:03,785::storage_log::77::blivet::(log_exception_info) IGNORED:
End exception details.
MainThread::DEBUG::2018-12-14
17:03:03,846::supervdsm_server::287::SuperVdsm.Server::(main) Making sure
I'm root - SuperVdsm
MainThread::DEBUG::2018-12-14
17:03:03,846::supervdsm_server::296::SuperVdsm.Server::(main) Parsing cmd
args
MainThread::DEBUG::2018-12-14
17:03:03,846::supervdsm_server::299::SuperVdsm.Server::(main) Cleaning old
socket /var/run/vdsm/svdsm.sock
MainThread::DEBUG::2018-12-14
17:03:03,846::supervdsm_server::303::SuperVdsm.Server::(main) Setting up
keep alive thread
MainThread::DEBUG::2018-12-14
17:03:03,846::supervdsm_server::309::SuperVdsm.Server::(main) Creating
remote object manager
Thread-1::DEBUG::2018-12-14 17:03:03,847::concurrent::192::root::(run)
START thread <Thread(Thread-1, started daemon 140277057115904)>
(func=<bound method Server.serve_forever of
<multiprocessing.managers.Server object at 0x7f94cc2c0650>>, args=(),
kwargs={})
MainThread::INFO::2018-12-14
17:03:03,847::fileUtils::246::storage.fileUtils::(chown) Changing owner for
/var/run/vdsm/svdsm.sock, to (36:36)
MainThread::DEBUG::2018-12-14
17:03:03,847::supervdsm_server::319::SuperVdsm.Server::(main) Started
serving super vdsm object
MainThread::DEBUG::2018-12-14 17:03:03,850::cmdutils::150::root::(exec_cmd)
/usr/bin/systemctl status lldpad.service (cwd None)
MainThread::DEBUG::2018-12-14 17:03:03,886::cmdutils::158::root::(exec_cmd)
SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2018-12-14 17:03:03,886::cmdutils::150::root::(exec_cmd)
/usr/sbin/lldptool -ping (cwd None)
MainThread::DEBUG::2018-12-14 17:03:03,890::cmdutils::158::root::(exec_cmd)
SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2018-12-14 17:03:03,890::cmdutils::150::root::(exec_cmd)
lshw -json -disable usb -disable pcmcia -disable isapnp -disable ide
-disable scsi -disable dmi -disable memory -disable cpuinfo (cwd None)
restore-net::INFO::2018-12-14
17:03:04,060::restore_net_config::428::root::(restore) networks already
restored. doing nothing.
MainThread::DEBUG::2018-12-14 17:03:04,857::cmdutils::158::root::(exec_cmd)
SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2018-12-14 17:03:04,864::cmdutils::150::root::(exec_cmd)
/usr/sbin/lldptool get-lldp -i em1 adminStatus (cwd None)
MainThread::DEBUG::2018-12-14 17:03:04,871::cmdutils::158::root::(exec_cmd)
SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2018-12-14 17:03:04,872::cmdutils::150::root::(exec_cmd)
/usr/sbin/lldptool get-lldp -i em2 adminStatus (cwd None)
MainThread::DEBUG::2018-12-14 17:03:04,876::cmdutils::158::root::(exec_cmd)
SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2018-12-14 17:03:04,876::cmdutils::150::root::(exec_cmd)
/usr/sbin/lldptool get-lldp -i p1p1 adminStatus (cwd None)
MainThread::DEBUG::2018-12-14 17:03:04,880::cmdutils::158::root::(exec_cmd)
SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2018-12-14 17:03:04,880::cmdutils::150::root::(exec_cmd)
/usr/sbin/lldptool get-lldp -i p1p2 adminStatus (cwd None)
MainThread::DEBUG::2018-12-14 17:03:04,884::cmdutils::158::root::(exec_cmd)
SUCCESS: <err> = ''; <rc> = 0
MainProcess::DEBUG::2018-12-14
17:03:06,812::supervdsm_server::96::SuperVdsm.ServerCallback::(wrapper)
call multipath_status with () {}
MainProcess::DEBUG::2018-12-14
17:03:06,813::commands::65::storage.Misc.excCmd::(execCmd) /usr/bin/taskset
--cpu-list 0-31 /usr/sbin/dmsetup status --target multipath (cwd None)
MainProcess::DEBUG::2018-12-14
17:03:06,819::commands::86::storage.Misc.excCmd::(execCmd) SUCCESS: <err> =
''; <rc> = 0
MainProcess::DEBUG::2018-12-14
17:03:06,819::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper)
return multipath_status with {}
MainProcess|hsm/init::DEBUG::2018-12-14
17:03:06,886::supervdsm_server::96::SuperVdsm.ServerCallback::(wrapper)
call hbaRescan with () {}
MainProcess|hsm/init::DEBUG::2018-12-14
17:03:06,886::commands::65::storage.HBA::(execCmd) /usr/bin/taskset
--cpu-list 0-31 /usr/libexec/vdsm/fc-scan (cwd None)
MainProcess|hsm/init::DEBUG::2018-12-14
17:03:06,956::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper)
return hbaRescan with None
MainProcess|jsonrpc/3::DEBUG::2018-12-14
17:03:27,028::supervdsm_server::96::SuperVdsm.ServerCallback::(wrapper)
call ksmTune with () {}
MainProcess|jsonrpc/3::DEBUG::2018-12-14
17:03:27,029::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper)
return ksmTune with None
Please find the requested logs above.
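In case it helps narrow down the "Message timeout which can be caused by
communication issues" errors quoted below, here is a minimal sketch (assuming
the default VDSM management port 54321/tcp and a placeholder node hostname) to
verify from the engine machine that a TCP connection to the node can be opened:

#!/usr/bin/env python
# Minimal connectivity check, meant to be run on the engine machine.
# NODE_HOST is a placeholder; 54321/tcp is the default VDSM port.
import socket
import sys

NODE_HOST = "ovirt-node.example.com"  # placeholder, replace with the node's address
VDSM_PORT = 54321                     # default VDSM management port

try:
    conn = socket.create_connection((NODE_HOST, VDSM_PORT), timeout=5)
    conn.close()
    print("TCP connection to %s:%d succeeded" % (NODE_HOST, VDSM_PORT))
except (socket.timeout, socket.error) as exc:
    print("TCP connection to %s:%d failed: %s" % (NODE_HOST, VDSM_PORT, exc))
    sys.exit(1)

If the connection fails, the firewall configuration on the node (or anything
in between) would be the first thing I would check.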
On Mon, Dec 17, 2018 at 10:22 PM Gobinda Das <godas(a)redhat.com> wrote:
Hi Hemant,
Could you provide vdsm.log and supervdsm.log as well ?
On Fri, Dec 14, 2018 at 10:51 AM Hemant Gupta <hemantguptadevops(a)gmail.com>
wrote:
> Hi,
> While adding an oVirt node to the oVirt engine, I am getting the network
> errors below in ovirt-engine.log:
>
> 2018-12-14 10:38:58,900+05 INFO
> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-85) [5a45cbef] Lock freed
> to object
>
'EngineLock:{exclusiveLocks='[3b5faef7-e0f5-4995-934e-0d681c253d8f=PROVIDER]',
> sharedLocks=''}'
> 2018-12-14 10:42:05,962+05 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-66) [] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM OvirtNode command Get Host
> Capabilities failed: Message timeout which can be caused by communication
> issues
> 2018-12-14 10:42:05,962+05 ERROR
> [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
> (EE-ManagedThreadFactory-engineScheduled-Thread-66) [] Unable to
> RefreshCapabilities: VDSNetworkException: VDSGenericException:
> VDSNetworkException: Message timeout which can be caused by communication
> issues
> 2018-12-14 10:43:42,433+05 INFO
> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'default' is using 0 threads out of 1, 5 threads waiting for tasks.
> 2018-12-14 10:43:42,433+05 INFO
> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'engineScheduled' is using 0 threads out of 100, 100 threads waiting for
> tasks.
> 2018-12-14 10:43:42,433+05 INFO
> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads waiting for
> tasks.
> 2018-12-14 10:43:58,949+05 INFO
> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-30) [28e597cc] Lock
> Acquired to object
>
'EngineLock:{exclusiveLocks='[3b5faef7-e0f5-4995-934e-0d681c253d8f=PROVIDER]',
> sharedLocks=''}'
> 2018-12-14 10:43:58,959+05 INFO
> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-30) [28e597cc] Running
> command: SyncNetworkProviderCommand internal: true.
> 2018-12-14 10:43:59,248+05 INFO
> [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-47) []
> User admin@internal successfully logged in with scopes: ovirt-app-api
> ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search
> ovirt-ext=token-info:validate ovirt-ext=token:password-access
> 2018-12-14 10:43:59,438+05 INFO
> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-30) [28e597cc] Lock freed
> to object
>
'EngineLock:{exclusiveLocks='[3b5faef7-e0f5-4995-934e-0d681c253d8f=PROVIDER]',
> sharedLocks=''}'
> 2018-12-14 10:45:28,996+05 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-68) [] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM OvirtNode command Get Host
> Capabilities failed: Message timeout which can be caused by communication
> issues
> 2018-12-14 10:45:28,996+05 ERROR
> [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
> (EE-ManagedThreadFactory-engineScheduled-Thread-68) [] Unable to
> RefreshCapabilities: VDSNetworkException: VDSGenericException:
> VDSNetworkException: Message timeout which can be caused by communication
> issues
>
>
https://www.ovirt.org/documentation/install-guide/chap-System_Requirements/
>
> Will I need to open any ports between my oVirt engine and the oVirt node for
> the node connection to be established?
>
> Please help, I am stuck here...
--
Thanks,
Gobinda