Where to configure the iSCSI initiator name?
by Dan Poltawski
When I added the first node a 'random' initiator name was generated of form:
# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:[RANDOM]
Having attempted to add another node, I found that this node has a different generated initiator name and can't access the storage. Is there a way to configure this initiator name to a static value that will be applied when new nodes are added to the cluster? Or is there some reason for this that I'm missing?
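For example, would it be enough to write a site-chosen IQN on each new node before it logs in to the target, something along these lines? (The IQN below is just a made-up example.)
# echo "InitiatorName=iqn.2019-08.uk.co.tnp:ovirt-node01" > /etc/iscsi/initiatorname.iscsi
# systemctl restart iscsid
Presumably the target's ACLs would then also need to allow each node's static IQN.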
thanks,
Dan
engine-setup is failing on Fedora 30
by Gobinda Das
Hi,
I am facing an issue running engine-setup on Fedora 30.
[godas@godas ovirt-engine]$ $HOME/ovirt-engine/bin/engine-setup
[ INFO ] Stage: Initializing
Setup was run under unprivileged user this will produce
development installation do you wish to proceed? (Yes, No) [No]: Yes
[ INFO ] Stage: Environment setup
Configuration files:
['/home/gobinda/ovirt-engine/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
Log file:
/home/gobinda/ovirt-engine/var/log/ovirt-engine/setup/ovirt-engine-setup-20190821121001-bkc6we.log
Version: otopi-1.8.1_master
(otopi-1.8.1-0.0.master.20190228094531.git1953bdc.fc30)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup (late)
[ INFO ] Stage: Environment customization
--== PRODUCT OPTIONS ==--
--== PACKAGES ==--
--== NETWORK CONFIGURATION ==--
[WARNING] Failed to resolve godas.lab.eng.blr.redhat.com using DNS, it can
be resolved only locally
--== DATABASE CONFIGURATION ==--
--== OVIRT ENGINE CONFIGURATION ==--
Perform full vacuum on the engine database engine@localhost?
This operation may take a while depending on this setup health
and the
configuration of the db vacuum process.
See https://www.postgresql.org/docs/10/sql-vacuum.html
(Yes, No) [No]:
--== STORAGE CONFIGURATION ==--
--== PKI CONFIGURATION ==--
--== APACHE CONFIGURATION ==--
--== SYSTEM CONFIGURATION ==--
--== MISC CONFIGURATION ==--
--== END OF CONFIGURATION ==--
[ INFO ] Stage: Setup validation
--== CONFIGURATION PREVIEW ==--
Default SAN wipe after delete : False
Host FQDN :
godas.lab.eng.blr.redhat.com
Set up Cinderlib integration : False
Engine database host : localhost
Engine database port : 5432
Engine database secured connection : False
Engine database host name validation : False
Engine database name : engine
Engine database user name : engine
Engine installation : True
PKI organization : lab.eng.blr.redhat.com
Set up ovirt-provider-ovn : False
Configure WebSocket Proxy : True
Configure VMConsole Proxy : True
Please confirm installation settings (OK, Cancel) [OK]: OK
[ INFO ] Checking the Engine database consistency
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration (early)
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Upgrading CA
[ INFO ] Configuring WebSocket Proxy
[ INFO ] Backing up database localhost:engine to
'/home/gobinda/ovirt-engine/var/lib/ovirt-engine/backups/engine-20190821121010.0iip56rw.dump'.
[ INFO ] Creating/refreshing Engine database schema
Unregistering existing client registration info.
[ INFO ] Generating post install configuration file
'/home/gobinda/ovirt-engine/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
[ ERROR ] Failed to execute stage 'Misc configuration': a bytes-like object
is required, not 'str'
[ INFO ] Rolling back database schema
[ INFO ] Clearing Engine database engine
[ INFO ] Restoring Engine database engine
[ INFO ] Restoring file
'/home/gobinda/ovirt-engine/var/lib/ovirt-engine/backups/engine-20190821121010.0iip56rw.dump'
to database localhost:engine.
[ ERROR ] Engine database rollback failed: cannot use a string pattern on a
bytes-like object
[ INFO ] Stage: Clean up
Log file is located at
/home/gobinda/ovirt-engine/var/log/ovirt-engine/setup/ovirt-engine-setup-20190821121001-bkc6we.log
[ INFO ] Generating answer file
'/home/gobinda/ovirt-engine/var/lib/ovirt-engine/setup/answers/20190821121039-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
It was working before, but the moment I pulled the latest master code the issue started.
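If it helps to pinpoint which module is mixing bytes and str on Python 3, the full traceback around the error can be pulled out of the setup log like this (same log path as above):
grep -C 30 "a bytes-like object is required" \
    /home/gobinda/ovirt-engine/var/log/ovirt-engine/setup/ovirt-engine-setup-20190821121001-bkc6we.log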
--
Thanks,
Gobinda
Hyperconverged setup with oVirt 4.3.x using Ansible?
by adrianquintero@gmail.com
Hi,
I am trying to do a hyperconverged setup using Ansible.
So far I have been able to run a playbook to set up Gluster, but I have not been able to work out how to install the Hosted Engine VM using Ansible and tie all the pieces together.
Can someone point me in the right direction to deploy a hyperconverged environment using Ansible?
I have successfully deployed oVirt 4.3.5 (hyperconverged) using the web UI.
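Is the intended flow something like the following, i.e. running the hosted-engine deployment role after the Gluster playbook? The package and playbook names below are only my guesses, not something I have verified:
yum install ovirt-ansible-hosted-engine-setup   # package that I believe ships the hosted-engine setup role (name assumed)
ansible-playbook -i inventory he-deploy.yml     # hypothetical playbook of mine that would include that role with my storage/network vars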
thanks.
oVirt Node install
by Staniforth, Paul
Hello,
On the latest version of the oVirt Node install, running nodectl info gives the error:
Traceback (most recent call last):
File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in <module>
CliApplication()
File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication
return cmdmap.command(args)
File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command
return self.commands[command](**kwargs)
File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 76, in info
Info(self.imgbased, self.machine).write()
File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 46, in __init__
self._fetch_information()
File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 49, in _fetch_information
self._get_bootloader_info()
File "/usr/lib/python2.7/site-packages/nodectl/info.py", line 62, in _get_bootloader_info
bootinfo["entries"][k] = v.__dict__
AttributeError: 'list' object has no attribute '__dict__'
Also, what is the correct way to update from oVirt Node 4.2 to 4.3?
I used:
yum install https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/noarch/ovirt-node-ng-im...
I then did yum erase ovirt-release42 and rm /etc/yum.repos.d/ovirt-4.2*
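After installing the 4.3 release rpm (the yum install above) and removing the 4.2 repos, is the rest of the sequence supposed to be the following? ovirt-node-ng-image-update is my understanding of the package that carries the new image, so please correct me if that is wrong:
yum update ovirt-node-ng-image-update   # pull in the 4.3 node image layer (package name assumed)
reboot                                  # boot into the new imgbased layer
nodectl info                            # verify the layers and bootloader entries afterwards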
Regards,
Paul S.
NUMA pinning for high performance
by Vincent Royer
Is there a good place to learn about CPU pinning and the various settings?
I've built a test VM and am trying to understand the behaviour:
- I've given it 2 CPUs, 4 cores each, 1 thread per core.
- The VM is pinned to a specific host with passthrough CPU enabled.
- NUMA pinning: the first 4 cores on physical socket 1 and the next 4 on physical socket 2.
- The host has two sockets, each with a 20-core CPU.
I would expect to see a load applied on 8 of the 40 cores on the host during testing. Instead, it seems like 32 of the 40 cores are working at 100%.
What are the effects of this on the performance of the VM?
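In case it's useful, this is how I've been checking the host topology and the pinning libvirt actually applied (the VM name is just mine; virsh is used read-only here):
lscpu | grep -E 'NUMA node[0-9]'   # which host cores belong to each NUMA node
virsh -r vcpupin test-vm           # the vCPU-to-physical-CPU pinning libvirt applied ("test-vm" is my VM)
virsh -r numatune test-vm          # the NUMA memory binding for the same VM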
[6 inline screenshots (image.png) not included]
Export as OVA fails
by anthonywest@alfatron.com.au
Hi.
We are running oVirt 4.3.4 with each of the hosts configured to use local storage.
When I attempt to export a large virtual machine as an OVA the process begins but fails about half an hour later.
On the oVirt Events page the following entries appear:
Starting to export Vm Jessica as a Virtual Appliance
Failed to export Vm Jessica as a Virtual Appliance to path /storage/Jessica.ova on Host Trueblood
When I log onto the host Trueblood I see that a file called Jessica.ova.tmp has been created.
The engine.log file contains the following errors:
2019-08-16 16:40:27,141+10 ERROR [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (EE-ManagedThreadFactory-engineScheduled-Thread-37) [80ae4c6] Ansible playbook execution failed: Timeout occurred while executing Ansible playbook.
2019-08-16 16:40:27,142+10 ERROR [org.ovirt.engine.core.bll.CreateOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-37) [80ae4c6] Failed to create OVA. Please check logs for more details: /var/log/ovirt-engine/ova/ovirt-export-ova-ansible-20190816161027-trueblood.alfatron.com.au-80ae4c6.log
2019-08-16 16:40:27,161+10 ERROR [org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-37) [80ae4c6] Failed to create OVA file
2019-08-16 16:40:27,162+10 ERROR [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-37) [80ae4c6] Command 'ExportVmToOva' id: '12e98544-67ea-45b0-adb8-c50c8c190ecf' with children [a6c04b63-fe81-4d13-a69f-87b8ac55ceaa] failed when attempting to perform the next operation, marking as 'ACTIVE'
2019-08-16 16:40:27,162+10 ERROR [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-37) [80ae4c6] EngineException: ENGINE (Failed with error ENGINE and code 5001): org.ovirt.engine.core.common.errors.EngineException: EngineException: ENGINE (Failed with error ENGINE and code 5001)
at org.ovirt.engine.core.bll.exportimport.ExportOvaCommand.createOva(ExportOvaCommand.java:117) [bll.jar:]
at org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand.executeNextOperation(ExportVmToOvaCommand.java:224) [bll.jar:]
at org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand.performNextOperation(ExportVmToOvaCommand.java:216) [bll.jar:]
at org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32) [bll.jar:]
at org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:77) [bll.jar:]
at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175) [bll.jar:]
at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109) [bll.jar:]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_212]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [rt.jar:1.8.0_212]
at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383) [javax.enterprise.concurrent-1.0.jar:]
at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534) [javax.enterprise.concurrent-1.0.jar:]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_212]
at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent-1.0.jar:]
at org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ElytronManagedThread.run(ElytronManagedThreadFactory.java:78)
2019-08-16 16:40:28,304+10 ERROR [org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [3f52073f-b7d2-4238-8ac1-4634678b11b4] Ending command 'org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand' with failure.
2019-08-16 16:40:28,745+10 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [41e553bf] EVENT_ID: IMPORTEXPORT_EXPORT_VM_TO_OVA_FAILED(1,225), Failed to export Vm Jessica as a Virtual Appliance to path /storage/Jessica.ova on Host Trueblood
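From the first error it looks like the Ansible playbook that builds the OVA is being killed by a timeout rather than failing on its own. Is raising the engine-side playbook timeout the right fix, and is this roughly how to find and change it? I haven't verified the exact key name, hence listing the keys first:
engine-config -l | grep -i -E 'ansible|timeout'   # list the engine config keys and look for the playbook execution timeout
engine-config -s <KeyName>=<minutes>              # hypothetical: set whichever timeout key the listing shows
systemctl restart ovirt-engine                    # restart the engine so the new value takes effect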
Any suggestions/help on solving this would be much appreciated.
Anthony
Issues with oVirt-Engine start - oVirt 4.3.4
by Vrgotic, Marko
Dear oVirt,
While working on a procedure to get an NFS v4 mount from NetApp working with oVirt, the following steps turned out to be the way to set it up for the oVirt SHE and guest VMs:
* mkdir /mnt/rhevstore
* mount -t nfs 10.20.30.40:/ovirt_hosted_engine /mnt/rhevstore
* chown -R 36.36 /mnt/rhevstore
* chmod -R 755 /mnt/rhevstore
* umount /mnt/rhevstore
This works fine, and it needs to be executed on each hypervisor before it is provisioned into oVirt.
However, just today I discovered that the command chmod -R 755 /mnt/rhevstore, if executed on a new, to-be-added hypervisor while oVirt is already running, brings the oVirt Engine into a broken state.
The moment I executed the above on the 3rd hypervisor, before provisioning it into oVirt, the following occurred:
* Engine threw following error:
* 2019-08-19 13:16:31,425Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-82) [] Failed in 'SpmStatusVDS' method
* Connection was lost:
* packet_write_wait: Connection to 10.210.11.10 port 22: Broken pipe
* And VDSM on the SHE-hosting hypervisor started logging errors like:
* 2019-08-19 15:00:52,340+0000 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:312)
* 2019-08-19 15:00:53,865+0000 WARN (vdsm.Scheduler) [Executor] Worker blocked: <Worker name=periodic/2 running <Task <Operation action=<vdsm.virt.sampling.HostMonitor object at 0x7f59442c3d50> at 0x7f59442c3b90> timeout=15, duration=225.00 at 0x7f592476df90> task#=578 at 0x7f59442ef910>, traceback:
* File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap
* self.__bootstrap_inner()
* File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
* self.run()
* File: "/usr/lib64/python2.7/threading.py", line 765, in run
* self.__target(*self.__args, **self.__kwargs)
* File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 195, in run
* ret = func(*args, **kwargs)
* File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run
* self._execute_task()
* File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task
* task()
* File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__
* self._callable()
* File: "/usr/lib/python2.7/site-packages/vdsm/virt/periodic.py", line 186, in __call__
* self._func()
* File: "/usr/lib/python2.7/site-packages/vdsm/virt/sampling.py", line 481, in __call__
* stats = hostapi.get_stats(self._cif, self._samples.stats())
* File: "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 79, in get_stats
* ret['haStats'] = _getHaInfo()
* File: "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 177, in _getHaInfo
* stats = instance.get_all_stats()
* File: "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 94, in get_all_stats
* stats = broker.get_stats_from_storage()
* File: "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 143, in get_stats_from_storage
* result = self._proxy.get_stats()
* File: "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
* return self.__send(self.__name, args)
* File: "/usr/lib64/python2.7/xmlrpclib.py", line 1591, in __request
* verbose=self.__verbose
* File: "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request
* return self.single_request(host, handler, request_body, verbose)
* File: "/usr/lib64/python2.7/xmlrpclib.py", line 1303, in single_request
* response = h.getresponse(buffering=True)
* File: "/usr/lib64/python2.7/httplib.py", line 1113, in getresponse
* response.begin()
* File: "/usr/lib64/python2.7/httplib.py", line 444, in begin
* version, status, reason = self._read_status()
* File: "/usr/lib64/python2.7/httplib.py", line 400, in _read_status
* line = self.fp.readline(_MAXLINE + 1)
* File: "/usr/lib64/python2.7/socket.py", line 476, in readline
* data = self._sock.recv(self._rbufsize) (executor:363)
* 2019-08-19 15:00:54,103+0000 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:312)
I am unable to boot the Engine VM; it ends up in status ForceStop.
hosted-engine --vm-status shows:
[root@ovirt-sj-02 ~]# hosted-engine --vm-status
The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
But the storage is mounted and reachable, and ovirt-ha-agent is running:
[root@ovirt-sj-02 ~]# systemctl status ovirt-ha-agent
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-08-19 14:57:07 UTC; 23s ago
Main PID: 43388 (ovirt-ha-agent)
Tasks: 2
CGroup: /system.slice/ovirt-ha-agent.service
└─43388 /usr/bin/python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent
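For completeness, these are the extra checks I can run on the host in case the output would help (paths are the standard oVirt ones as far as I know):
df -h | grep ovirt_hosted_engine    # confirm the hosted-engine storage domain is still mounted
systemctl status ovirt-ha-broker    # the agent talks to the broker over that xmlrpc socket, so check the broker too
ls -ln /rhev/data-center/mnt/       # ownership should still be 36:36 (vdsm:kvm) after the recursive chown/chmod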
Can somebody help me with what to do?
— — —
Met vriendelijke groet / Kind regards,
Marko Vrgotic
Hosted Engine on separate L2 network from nodes?
by Dan Poltawski
For some security requirements, I've been asked if it's possible to segregate the hosted engine from the physical nodes, with specific firewalling for access to do node/storage operations (I'm using managed block storage).
Is this an approach others use, or is it better practice to just ensure the nodes and engine all share the same network?
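To make it concrete, the kind of per-node rule set I have in mind is roughly the following, allowing only the engine's subnet through to the management ports. The subnet is an example and the port list is just my understanding of what the engine needs, so please correct it:
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.50.0/24" port port="54321" protocol="tcp" accept'   # vdsm management API
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.50.0/24" port port="22" protocol="tcp" accept'      # ssh for host deploy
firewall-cmd --reload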
Thanks,
dan