Re: INVALID_SERVICE: ovirtlago
by Barak Korren
That is not the real issue, the real issue seems to be this:
+ sudo -n systemctl start docker
Job for docker.service failed because the control process exited with error
code. See "systemctl status docker.service" and "journalctl -xe" for
details.
+ sudo -n systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor
preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Mon
2019-02-25 04:03:52 UTC; 45ms ago
Docs: https://docs.docker.com
Process: 15496 ExecStart=/usr/bin/dockerd -H fd:// (code=exited,
status=1/FAILURE)
Main PID: 15496 (code=exited, status=1/FAILURE)
Feb 25 04:03:52 openshift-integ-tests-container-6bmr3 systemd[1]: Failed to
start Docker Application Container Engine.
Feb 25 04:03:52 openshift-integ-tests-container-6bmr3 systemd[1]: Unit
docker.service entered failed state.
Feb 25 04:03:52 openshift-integ-tests-container-6bmr3 systemd[1]:
docker.service failed.
+ :
+ log ERROR 'Failed to start docker service'
+ local level=ERROR
So docker is failing to start in the integ-test container. Here is the
podspec that was used:
---
apiVersion: v1
kind: Pod
metadata:
  generateName: jenkins-slave
  labels:
    integ-tests-container: ""
  namespace: jenkins-ovirt-org
spec:
  containers:
  - env:
    - name: JENKINS_AGENT_WORKDIR
      value: /home/jenkins
    - name: CI_RUNTIME_UNAME
      value: jenkins
    - name: STDCI_SLAVE_CONTAINER_NAME
      value: im_a_container
    - name: CONTAINER_SLOTS
      value: /var/lib/stdci
    image: docker.io/ovirtinfra/el7-runner-node:12c9f471a6e9eccd6d5052c6c4964fff3b6670c9
    command: ['/usr/sbin/init']
    livenessProbe:
      exec:
        command: ['systemctl', 'status', 'multi-user.target']
      initialDelaySeconds: 360
      periodSeconds: 7200
    name: jnlp
    resources:
      limits:
        memory: 32Gi
      requests:
        memory: 32Gi
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /var/lib/stdci
      name: slave-cache
    - mountPath: /dev/shm
      name: dshm
    workingDir: /home/jenkins
    tty: true
  nodeSelector:
    model: r620
  serviceAccount: jenkins-slave
  volumes:
  - hostPath:
      path: /var/lib/stdci
      type: DirectoryOrCreate
    name: slave-cache
  - emptyDir:
      medium: Memory
    name: dshm
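For docker to start at all inside that pod, a few parts of the spec are load-bearing: systemd as the container entrypoint, a privileged security context, and a memory-backed /dev/shm. A quick sanity check over those fields, sketched with a trimmed copy of the spec as a plain Python dict (hypothetical helper, not part of STDCI):

```python
# Minimal slice of the pod spec, copied into a Python dict for illustration.
pod = {
    "spec": {
        "containers": [{
            "name": "jnlp",
            "command": ["/usr/sbin/init"],
            "securityContext": {"privileged": True},
            "volumeMounts": [
                {"mountPath": "/var/lib/stdci", "name": "slave-cache"},
                {"mountPath": "/dev/shm", "name": "dshm"},
            ],
        }],
        "volumes": [
            {"hostPath": {"path": "/var/lib/stdci", "type": "DirectoryOrCreate"},
             "name": "slave-cache"},
            {"emptyDir": {"medium": "Memory"}, "name": "dshm"},
        ],
    },
}

def systemd_container_ok(pod):
    """Check the prerequisites for running systemd (and docker) in a pod."""
    c = pod["spec"]["containers"][0]
    mounts = {m["mountPath"] for m in c["volumeMounts"]}
    return (
        c["command"] == ["/usr/sbin/init"]          # systemd must be PID 1
        and bool(c["securityContext"].get("privileged"))  # docker needs privileged
        and "/dev/shm" in mounts                    # tmpfs-backed /dev/shm
    )

print(systemd_container_ok(pod))  # → True
```

The spec above passes all three checks, so the failure is more likely inside the container (e.g. docker's own storage or cgroup setup) than in the pod definition itself.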
Adding Gal and infra list.
On Mon, 25 Feb 2019 at 08:45, Eitan Raviv <eraviv(a)redhat.com> wrote:
> Hi,
> I have some OST patches failing on:
>
> *04:03:53* Error: INVALID_SERVICE: ovirtlago
>
> e.g. https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/344...
>
> I am fully rebased on ost master.
>
> Can you have a look?
>
> Thank you
>
>
> ---------- Forwarded message ---------
> From: Galit Rosenthal <grosenth(a)redhat.com>
> Date: Mon, Feb 25, 2019 at 8:35 AM
> Subject: Re: INVALID_SERVICE: ovirtlago
> To: Eitan Raviv <eraviv(a)redhat.com>
>
>
> I think you should consult Barak
>
> On Sun, Feb 24, 2019 at 8:26 PM Eitan Raviv <eraviv(a)redhat.com> wrote:
>
>> *13:58:57* ++ sudo -n firewall-cmd --query-service=ovirtlago*13:58:58* Error: INVALID_SERVICE: ovirtlago
>>
>> https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/343...
>>
>>
>
> --
>
> GALIT ROSENTHAL
>
> SOFTWARE ENGINEER
>
> Red Hat
>
> <https://www.redhat.com/>
>
> galit(a)gmail.com T: 972-9-7692230
> <https://red.ht/sig>
>
--
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
[ OST Failure Report ] [ oVirt Master (ovirt-engine-nodejs-modules) ]
[ 27-02-2019 ] [ 002_bootstrap.add_vm2_lease ]
by Dafna Ron
Hi,
We have a failure on the project in basic suite, master branch. The recent
failure was in patch:
https://gerrit.ovirt.org/#/c/98087/1 - Add pre-seed for ovirt-web-ui
CQ is pointing at the below as the root cause (which was merged a while
back):
https://gerrit.ovirt.org/#/c/97491/ - Add pre-seed for ovirt-web-ui
Can you please check the issue? It seems both patches are changing the
same thing, and the project seems to be broken since
https://gerrit.ovirt.org/#/c/97491/3
Latest failure:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13171/
Logs:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/13171/arti...
errors from logs:
Engine:
2019-02-27 13:37:28,479-05 ERROR [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [74283e25] Transaction rolled-back for command 'org.ovirt.engine.core.bll.UpdateVmCommand'.
2019-02-27 13:37:28,483-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-1) [74283e25] EVENT_ID: USER_FAILED_UPDATE_VM(58), Failed to update VM vm2 (User: admin@internal-authz).
2019-02-27 13:37:28,485-05 INFO [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-1) [74283e25] Lock freed to object 'EngineLock:{exclusiveLocks='[vm2=VM_NAME]', sharedLocks='[3500eb82-e5e2-4e24-b41c-ea02d9f6adee=VM]'}'
2019-02-27 13:37:28,485-05 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (default task-1) [74283e25] method: runAction, params: [UpdateVm, VmManagementParametersBase:{commandId='34059769-05b9-429e-8356-f6b9b9953f55', user='admin', commandType='UpdateVm', vmId='3500eb82-e5e2-4e24-b41c-ea02d9f6adee'}], timeElapsed: 6618ms
2019-02-27 13:37:28,486-05 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-1) [] Operation Failed: []
2019-02-27 13:37:28,579-05 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (EE-ManagedThreadFactory-engineScheduled-Thread-85) [] method: get, params: [e29c0ba1-464c-4eb4-a8f2-c6933d99969a], timeElapsed: 3ms
vdsm:
2019-02-27 13:37:21,987-0500 INFO (jsonrpc/1) [vdsm.api] FINISH lease_info error=No such lease 3500eb82-e5e2-4e24-b41c-ea02d9f6adee from=::ffff:192.168.201.4,43920, flow_id=117dec74-ad59-4b12-8148-b2c130337c10, task_id=9c297d41-0aa7-4c74-b268-b710e666bc6c (api:52)
2019-02-27 13:37:21,988-0500 ERROR (jsonrpc/1) [storage.TaskManager.Task] (Task='9c297d41-0aa7-4c74-b268-b710e666bc6c') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in lease_info
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3702, in lease_info
    info = dom.lease_info(lease.lease_id)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 674, in lease_info
    return vol.lookup(lease_id)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 553, in lookup
    raise NoSuchLease(lease_id)
NoSuchLease: No such lease 3500eb82-e5e2-4e24-b41c-ea02d9f6adee
2019-02-27 13:37:21,988-0500 INFO (jsonrpc/1) [storage.TaskManager.Task] (Task='9c297d41-0aa7-4c74-b268-b710e666bc6c') aborting: Task is aborted: u'No such lease 3500eb82-e5e2-4e24-b41c-ea02d9f6adee' - code 100 (task:1181)
2019-02-27 13:37:21,988-0500 ERROR (jsonrpc/1) [storage.Dispatcher] FINISH lease_info error=No such lease 3500eb82-e5e2-4e24-b41c-ea02d9f6adee (dispatcher:87)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 74, in wrapper
    result = ctask.prepare(func, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper
    return m(self, *a, **kw)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1189, in prepare
    raise self.error
NoSuchLease: No such lease 3500eb82-e5e2-4e24-b41c-ea02d9f6adee
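The failure path in the traceback is simple to model: sd.py delegates lease_info() to the leases volume, whose lookup() raises NoSuchLease when the lease ID is not in the index. A minimal sketch of that lookup logic (simplified and hypothetical internals; only the class and method names come from the traceback above):

```python
class NoSuchLease(KeyError):
    """Raised when a lease ID is not present in the leases index."""
    def __init__(self, lease_id):
        super().__init__(lease_id)
        self.lease_id = lease_id

    def __str__(self):
        return "No such lease %s" % self.lease_id


class LeasesVolume:
    """Toy stand-in for vdsm's xlease.LeasesVolume: maps lease IDs to records."""
    def __init__(self, records):
        self._records = dict(records)  # lease_id -> lease record

    def lookup(self, lease_id):
        try:
            return self._records[lease_id]
        except KeyError:
            raise NoSuchLease(lease_id)


vol = LeasesVolume({})  # no lease was ever written for this VM
try:
    vol.lookup("3500eb82-e5e2-4e24-b41c-ea02d9f6adee")
except NoSuchLease as e:
    print(e)  # → No such lease 3500eb82-e5e2-4e24-b41c-ea02d9f6adee
```

In other words, the error string in the vdsm log means the lookup itself worked but nothing ever created a lease for VM ID 3500eb82-e5e2-4e24-b41c-ea02d9f6adee on that storage domain.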
[JIRA] (OVIRT-2566) make network suite blocking
by danken (oVirt JIRA)
danken created OVIRT-2566:
-----------------------------
Summary: make network suite blocking
Key: OVIRT-2566
URL: https://ovirt-jira.atlassian.net/browse/OVIRT-2566
Project: oVirt - virtualization made easy
Issue Type: Improvement
Components: OST
Reporter: danken
Assignee: infra
Priority: High
The network suite has been executing nightly for almost a year. It has a caring team that tends to it, and it does not have false positives.
It has been failing for a week on the 4.2 branch, but that is due to a production code bug.
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-te...
I would like to see this suite escalated to the importance of the basic suite, making it a gating condition for marking a collection of packages as "tested".
[~gbenhaim(a)redhat.com], what should be done to make this happen?
--
This message was sent by Atlassian Jira
(v1001.0.0-SNAPSHOT#100095)
[oVirt Jenkins] ovirt-system-tests_compat-4.2-suite-master - Build
# 496 - Failure!
by jenkins@jenkins.phx.ovirt.org
Project: http://jenkins.ovirt.org/job/ovirt-system-tests_compat-4.2-suite-master/
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_compat-4.2-suite-master/496/
Build Number: 496
Build Status: Failure
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #496
[Sandro Bonazzola] repos: master: switch to gluster 5
-----------------
Failed Tests:
-----------------
1 tests failed.
FAILED: 002_bootstrap.resize_and_refresh_storage_domain_42
Error Message:
False != True after 600 seconds
-------------------- >> begin captured logging << --------------------
lago.ssh: DEBUG: start task:cad32885-9201-470b-8ec9-53ade3f2e147:Get ssh client for lago-compat-4-2-suite-master-engine:
lago.ssh: DEBUG: end task:cad32885-9201-470b-8ec9-53ade3f2e147:Get ssh client for lago-compat-4-2-suite-master-engine:
lago.ssh: DEBUG: Running 59628f54 on lago-compat-4-2-suite-master-engine: lvresize --size +3000M /dev/mapper/vg1_storage-lun0_bdev
lago.ssh: DEBUG: Command 59628f54 on lago-compat-4-2-suite-master-engine returned with 0
lago.ssh: DEBUG: Command 59628f54 on lago-compat-4-2-suite-master-engine output:
Size of logical volume vg1_storage/lun0_bdev changed from 20.00 GiB (5120 extents) to <22.93 GiB (5870 extents).
Logical volume vg1_storage/lun0_bdev successfully resized.
lago.ssh: DEBUG: start task:c433966f-7e18-4d1e-a68d-218bfda20420:Get ssh client for lago-compat-4-2-suite-master-engine:
lago.ssh: DEBUG: end task:c433966f-7e18-4d1e-a68d-218bfda20420:Get ssh client for lago-compat-4-2-suite-master-engine:
lago.ssh: DEBUG: Running 59c76e60 on lago-compat-4-2-suite-master-engine: cat /root/multipath.txt
lago.ssh: DEBUG: Command 59c76e60 on lago-compat-4-2-suite-master-engine returned with 0
lago.ssh: DEBUG: Command 59c76e60 on lago-compat-4-2-suite-master-engine output:
360014052b36c080242849af94e08d593
3600140577ca2fe2129b46cfadb969690
3600140582c361006901472d903755e3c
36001405e4f3abc5714e4d42929c9534b
36001405ec963d2b05d748c4bb4ba46f6
--------------------- >> end captured logging << ---------------------
Stack Trace:
File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
testMethod()
File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test
test()
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
return func(get_test_prefix(), *args, **kwargs)
File "/home/jenkins/workspace/ovirt-system-tests_compat-4.2-suite-master/ovirt-system-tests/compat-4.2-suite-master/test-scenarios/002_bootstrap.py", line 527, in resize_and_refresh_storage_domain
logical_units=luns
File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
self.gen.throw(type, value, traceback)
File "/home/jenkins/workspace/ovirt-system-tests_compat-4.2-suite-master/ovirt-system-tests/compat-4.2-suite-master/test_utils/__init__.py", line 264, in TestEvent
lambda:
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 286, in assert_true_within_long
assert_equals_within_long(func, True, allowed_exceptions)
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 273, in assert_equals_within_long
func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 252, in assert_equals_within
'%s != %s after %s seconds' % (res, value, timeout)
'False != True after 600 seconds\n-------------------- >> begin captured logging << --------------------\nlago.ssh: DEBUG: start task:cad32885-9201-470b-8ec9-53ade3f2e147:Get ssh client for lago-compat-4-2-suite-master-engine:\nlago.ssh: DEBUG: end task:cad32885-9201-470b-8ec9-53ade3f2e147:Get ssh client for lago-compat-4-2-suite-master-engine:\nlago.ssh: DEBUG: Running 59628f54 on lago-compat-4-2-suite-master-engine: lvresize --size +3000M /dev/mapper/vg1_storage-lun0_bdev\nlago.ssh: DEBUG: Command 59628f54 on lago-compat-4-2-suite-master-engine returned with 0\nlago.ssh: DEBUG: Command 59628f54 on lago-compat-4-2-suite-master-engine output:\n Size of logical volume vg1_storage/lun0_bdev changed from 20.00 GiB (5120 extents) to <22.93 GiB (5870 extents).\n Logical volume vg1_storage/lun0_bdev successfully resized.\n\nlago.ssh: DEBUG: start task:c433966f-7e18-4d1e-a68d-218bfda20420:Get ssh client for lago-compat-4-2-suite-master-engine:\nlago.ssh: DEBUG: end task:c433966f-7e18-4d1e-a68d-218bfda20420:Get ssh client for lago-compat-4-2-suite-master-engine:\nlago.ssh: DEBUG: Running 59c76e60 on lago-compat-4-2-suite-master-engine: cat /root/multipath.txt\nlago.ssh: DEBUG: Command 59c76e60 on lago-compat-4-2-suite-master-engine returned with 0\nlago.ssh: DEBUG: Command 59c76e60 on lago-compat-4-2-suite-master-engine output:\n 360014052b36c080242849af94e08d593\n3600140577ca2fe2129b46cfadb969690\n3600140582c361006901472d903755e3c\n36001405e4f3abc5714e4d42929c9534b\n36001405ec963d2b05d748c4bb4ba46f6\n\n--------------------- >> end captured logging << ---------------------'
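The "False != True after 600 seconds" message comes from ovirtlago's polling assertion: the test repeatedly evaluates a condition and only fails once LONG_TIMEOUT expires. A minimal sketch of that pattern (simplified; the real helper also tolerates a list of allowed exception types):

```python
import time

LONG_TIMEOUT = 600  # seconds, as in ovirtlago.testlib


def assert_equals_within(func, value, timeout, interval=0.1):
    """Poll func() until it returns value, or fail after timeout seconds."""
    deadline = time.monotonic() + timeout
    while True:
        res = func()
        if res == value:
            return
        if time.monotonic() >= deadline:
            raise AssertionError(
                '%s != %s after %s seconds' % (res, value, timeout))
        time.sleep(interval)


def assert_true_within_long(func):
    assert_equals_within(func, True, LONG_TIMEOUT)
```

Here the polled condition is "the engine reports the refreshed LUN size", so the timeout means the engine never observed the resize within ten minutes; the lvresize command itself succeeded (it returned 0 in the captured log).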
[oVirt Jenkins] ovirt-system-tests_compat-4.1-suite-master - Build
# 498 - Failure!
by jenkins@jenkins.phx.ovirt.org
Project: http://jenkins.ovirt.org/job/ovirt-system-tests_compat-4.1-suite-master/
Build: http://jenkins.ovirt.org/job/ovirt-system-tests_compat-4.1-suite-master/498/
Build Number: 498
Build Status: Failure
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #498
[Milan Zamazal] Add upgrade-from-release_suite-4.3
[Gal Ben Haim] global-setup: Collect more info about docker
[Barak Korren] slaves_deploy: Fix STDCI slave settings
-----------------
Failed Tests:
-----------------
1 tests failed.
FAILED: 002_bootstrap.get_host_devices_41
Error Message:
Could not find block_vda_1 device in host devices:
Stack Trace:
Traceback (most recent call last):
File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
testMethod()
File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test
test()
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
return func(get_test_prefix(), *args, **kwargs)
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79, in wrapper
prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
File "/home/jenkins/workspace/ovirt-system-tests_compat-4.1-suite-master/ovirt-system-tests/compat-4.1-suite-master/test-scenarios/002_bootstrap.py", line 1016, in get_host_devices
raise RuntimeError('Could not find block_vda_1 device in host devices: {}'.format(device_list))
RuntimeError: Could not find block_vda_1 device in host devices: