Re: [ovirt-users] Re: Is it possible to customize web-ui/User Portal/VM Portal?
by Greg Sheremeta
On Wed, Feb 6, 2019 at 6:27 PM Brian T <deliciouscardboard18(a)gmail.com>
wrote:
> Hi, thank you for the swift response!
>
> That's true, haha!
>
> Currently, we are looking to incorporate an RDP solution (Mechdyne TGX)
> into the VM portal for our users. The current "workaround" is pasting the
> VM's IP address in the description or comment section and having the
> end-user run TGX separately.
>
+devel list
Cool! We're actually in the process of overhauling consoles for VM Portal.
We currently have Windows RDP in the Admin Portal, and we are actively
working on it for VM Portal. Here are some screens from the design:
[image: image.png]
[image: Selection_375.png]
[Kudos to our awesome designer, Laura Wright]
We also have SPICE (thick client) fully implemented in both portals,
thick-client VNC in both portals, noVNC (web-based) in the Admin Portal, and
noVNC coming very soon to VM Portal (4.3.2 at the latest, but I hope 4.3.1) as
part of the overhaul.
Is Mechdyne TGX much different from Windows RDP, SPICE, and VNC? If so,
what is the use case for using it over Windows RDP / SPICE / VNC?
Also, would you mind reviewing the designs we have so far? I'm
interested in others' opinions, especially from someone who has already been
thinking about this.
Console design doc (start around page 41):
https://docs.google.com/document/d/1m-pM0VVgeZmVCJFs2lLuzC9KUVV-X1cDfLXPb...
At the very least, I would certainly love for you to help test it out as
we near completion. If you want to code, we would love to have you :)
Greg
>
> My proposed idea would be (correct me if I'm wrong):
> 1) Create a new button or option under "Console" labelled something like
> "Console with TGX"
> 2) Upon hitting this button, make a REST API call to obtain the VM's IPv4
> address.
> 3) Launch Mechdyne TGX, slotting in the IPv4 address obtained
>
> There's more than one way to achieve this, and I guess I'm looking for any
> suggestions or a place to start. I understand that this use case is not a
> universal improvement for most oVirt users (hence why I thought it would be
> a more personal modification!), but I can see a use for allowing other RDP
> solutions as well.
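For what it's worth, here is a minimal standalone sketch of steps 2) and 3)
from the quoted proposal, using the oVirt Python SDK (ovirtsdk4) rather than
VM Portal code. The engine URL, credentials, VM name and the 'tgxclient'
command are placeholders, and the guest-reported IP is only available while a
guest agent runs inside the VM:

    # Hypothetical sketch: look up a VM's IPv4 address via the REST API and
    # hand it to an external console client.
    import subprocess

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )
    try:
        vms_service = connection.system_service().vms_service()
        vm = vms_service.list(search='name=myvm')[0]  # placeholder VM name
        vm_service = vms_service.vm_service(vm.id)

        # Guest-reported devices carry the addresses seen inside the VM.
        ipv4 = None
        for device in vm_service.reported_devices_service().list():
            for ip in (device.ips or []):
                if ip.version == types.IpVersion.V4:
                    ipv4 = ip.address
                    break
            if ipv4:
                break

        if ipv4:
            # 'tgxclient' stands in for whatever launches Mechdyne TGX.
            subprocess.Popen(['tgxclient', '--host', ipv4])
    finally:
        connection.close()

In VM Portal itself the lookup would go through the portal's own REST calls
instead, but the API flow is the same.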
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XN6OEWENCSM...
>
--
GREG SHEREMETA
SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
Red Hat NA
<https://www.redhat.com/>
gshereme(a)redhat.com IRC: gshereme
<https://red.ht/sig>
Silent errors in storage code on Vdsm startup
by Milan Zamazal
Hi, I'm trying to run Vdsm with Python 3, which exposes various Python 3
incompatibilities in Vdsm code. I'm currently semi-stuck on Vdsm
crashing very shortly after startup with:
Panic: Error initializing IRS
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/vdsmd.py", line 95, in serve_clients
irs = Dispatcher(HSM())
File "/usr/lib/python3.6/site-packages/vdsm/storage/dispatcher.py", line 46, in __init__
self._exposeFunctions(obj)
File "/usr/lib/python3.6/site-packages/vdsm/storage/dispatcher.py", line 56, in _exposeFunctions
if hasattr(funcObj, _EXPORTED_ATTRIBUTE) and callable(funcObj):
File "/usr/lib/python3.6/site-packages/vdsm/storage/sp.py", line 85, in __getattr__
raise se.StoragePoolNotConnected
vdsm.storage.exception.StoragePoolNotConnected: Storage pool not connected: ()
That wouldn't be a problem by itself; errors are expected in Python 3
experiments. However, the problem is that this is often the *only* error
I receive.
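One plausible reason this particular failure escapes at all (a guess on my
side, not verified against the Vdsm sources): Python 2's hasattr() swallowed
any exception raised from __getattr__, while Python 3's hasattr() only
swallows AttributeError, so a custom exception such as StoragePoolNotConnected
propagates out of the hasattr() call in _exposeFunctions. A tiny illustration,
not Vdsm code:

    class StoragePoolNotConnected(Exception):
        pass

    class DisconnectedPool:
        def __getattr__(self, name):
            # Any attribute access on a disconnected pool raises.
            raise StoragePoolNotConnected()

    obj = DisconnectedPool()
    print(hasattr(obj, "exported"))
    # Python 2: prints False -- the exception is swallowed by hasattr()
    # Python 3: StoragePoolNotConnected propagates instead of returning False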
I've found out that the HSM() call spawns threads, which apparently don't
have a chance to report their errors before the panic above kills Vdsm.
For instance, there was a string/bytes mismatch when running the iscsiadm
scanning commands. Only after I added a time.sleep to the end of
HSM.__init__ did I get an error + traceback from that and could see what
was wrong. That worries me a bit, because it might mean that if some
external commands run during initialization are not fast enough, Vdsm may
not be able to start.
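To make the race concrete, here is a self-contained toy example (not Vdsm
code; os._exit stands in for the panic path) of how a daemon thread's error
report can be lost when the main thread bails out first:

    import logging
    import os
    import threading
    import time

    logging.basicConfig(level=logging.DEBUG)
    log = logging.getLogger("demo")

    def worker():
        try:
            time.sleep(0.5)  # e.g. a slow external scanning command
            raise TypeError("string/bytes mismatch")
        except Exception:
            log.exception("worker failed")  # never logged if we exit first

    threading.Thread(target=worker, daemon=True).start()

    time.sleep(0.1)
    # Lengthen this sleep (like the time.sleep added to HSM.__init__) and the
    # worker's traceback shows up before the process dies.
    os._exit(1)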
After fixing some problems, I get the panic again, even with the sleep
at the end of HSM.__init__, and again without any other error. It's
difficult to proceed in such a situation -- any tips on what to look at or
what to look for are welcome.
FWIW, my vdsm.log now ends with
INFO (MainThread) [storage.check] Starting check service (check:91)
DEBUG (check/loop) [root] START thread <Thread(check/loop, started daemon 140180434425600)> (func=<bound method EventLoop.run_forever of <EventLoop running=True closed=False at 0x140181096264704>>, args=(), kwargs={}) (concurrent:193)
INFO (check/loop) [storage.asyncevent] Starting <EventLoop running=True closed=False at 0x140181096264704> (asyncevent:125)
DEBUG (hsm/init) [storage.SamplingMethod] Returning last result (misc:386)
DEBUG (hsm/init) [storage.SamplingMethod] Trying to enter sampling method (vdsm.storage.hba.<function rescan at 0x7f7e7a597e18>) (misc:376)
DEBUG (hsm/init) [storage.SamplingMethod] Got in to sampling method (misc:379)
DEBUG (hsm/init) [storage.HBA] Starting scan (hba:58)
DEBUG (hsm/init) [storage.HBA] Scan finished (hba:64)
DEBUG (hsm/init) [storage.SamplingMethod] Returning last result (misc:386)
DEBUG (hsm/init) [root] /usr/bin/taskset --cpu-list 0-0 /sbin/udevadm settle --timeout=5 (cwd None) (commands:200)
DEBUG (hsm/init) [root] SUCCESS: <err> = b''; <rc> = 0 (commands:221)
DEBUG (hsm/init) [storage.SamplingMethod] Returning last result (misc:386)
DEBUG (hsm/init) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-0 /usr/bin/sudo -n /sbin/lvm pvs --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["r|.*|"] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size,mda_used_count (cwd None) (commands:200)
DEBUG (hsm/init) [storage.Misc.excCmd] SUCCESS: <err> = b''; <rc> = 0 (commands:221)
DEBUG (hsm/init) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-0 /usr/bin/sudo -n /sbin/lvm vgs --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["r|.*|"] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) (commands:200)
DEBUG (hsm/init) [storage.Misc.excCmd] SUCCESS: <err> = b''; <rc> = 0 (commands:221)
DEBUG (hsm/init) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-0 /usr/bin/sudo -n /sbin/lvm lvs --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["r|.*|"] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags (cwd None) (commands:200)
DEBUG (hsm/init) [storage.Misc.excCmd] SUCCESS: <err> = b''; <rc> = 0 (commands:221)
DEBUG (hsm/init) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-0 /usr/bin/sudo -n /sbin/lvm vgs --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["r|.*|"] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None) (commands:200)
DEBUG (hsm/init) [storage.Misc.excCmd] SUCCESS: <err> = b''; <rc> = 0 (commands:221)
INFO (hsm/init) [storage.HSM] FINISH HSM init succeeded in 0.43 seconds (hsm:410)
DEBUG (hsm/init) [storage.HSM] FINISH thread <Thread(hsm/init, stopped daemon 140180442818304)> (concurrent:196)
and is completely silent after that.
Thanks,
Milan
Plans for ovirt-guest-agent
by Sandro Bonazzola
Hi,
I understood that most of the ovirt-guest-agent logic moved to
qemu-guest-agent.
We are currently shipping ovirt-guest-agent for Fedora, EL7 and Windows,
all of them using Python 2.
Most of the bugs around ovirt-guest-agent have been closed as wontfix or
deferred with comments like "The guest agent is going away in 4.3 in favor
of the qemu guest agent, so this will not be resolved", but we still have
the ovirt guest agent around in oVirt 4.3.0.
What's the plan for 4.3 updates and 4.4? Is it safe to not install
ovirt-guest-agent and install just qemu-guest-agent? Which version of
qemu-guest-agent is required to completely replace ovirt-guest-agent?
Thanks,
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
Proposing Yedidyah Bar David (didi) as an integration maintainer for oVirt Engine
by Sandro Bonazzola
Hi,
Didi maintains OTOPI and wrote, together with me, most of the code in
engine-setup back in the oVirt 3.3 release. He has maintained that code
across the following releases, along with the backup and restore tool and
the rename tool. He's the most active contributor within the integration
team for ovirt-engine related code.
I would like to propose Didi as integration maintainer for oVirt Engine.
Thanks,
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
propose Ravi Nori as a frontend maintainer for ovirt-engine infra
by Greg Sheremeta
Hi all,
I'd like to propose Ravi Nori as a frontend maintainer for the ovirt-engine
infra team. While the infra team typically hasn't had a frontend
maintainer, Ravi has strong knowledge of all of the frontend infra pieces,
including everything related to SSO, the welcome page, and GWT in the Admin
Portal.
Best wishes,
Greg
--
GREG SHEREMETA
SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
Red Hat NA
<https://www.redhat.com/>
gshereme(a)redhat.com IRC: gshereme
<https://red.ht/sig>
[ANN] oVirt 4.3.0 is now generally available
by Sandro Bonazzola
The oVirt Project is excited to announce the general availability of oVirt
4.3.0, as of February 4th, 2019.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses over four hundred individual
changes and a wide range of enhancements across the engine, storage,
network, user interface, and analytics on top of the oVirt 4.2 series.
What's new in oVirt 4.3.0?
* Q35 chipset, with support for booting using UEFI and Secure Boot
* Skylake-server and AMD EPYC support
* New SMBus driver in Windows guest tools
* Improved support for v2v
* OVA export / import of Templates
* Full support for live migration of High Performance VMs
* Microsoft Failover clustering support (SCSI Persistent Reservation) for
Direct LUN disks
* Hundreds of bug fixes on top of oVirt 4.2 series
* New VM portal details page (see a preview here:
https://imgur.com/a/ExINpci)
* New Cluster upgrade UI
* OVN security groups
* IPv6 (static host addresses)
* Support for Neutron from RDO OpenStack 13 as an external network provider
* Support for using Skydive from RDO OpenStack 14 as a Tech Preview
* Support for 3.6 and 4.0 data centers, clusters and hosts has been removed
* Now using PostgreSQL 10
* New metrics support using rsyslog instead of fluentd
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.
If you’re managing more than one oVirt instance, OpenShift Origin or RDO, we
also recommend trying ManageIQ <http://manageiq.org/>.
In that case, please make sure to take the qc2 image and not the OVA image.
Notes:
- oVirt Appliance is already available for CentOS 7 and Fedora 28
- oVirt Node NG is already available for CentOS 7 and Fedora 28 [2]
- oVirt Windows Guest Tools iso is already available [2]
Additional Resources:
* Read more about the oVirt 4.3.0 release highlights:
http://www.ovirt.org/release/4.3.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.0/
[2] http://resources.ovirt.org/pub/ovirt-4.3/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
Re: [rhev-devel] u/s errors during update
by Roni Eliezer
Adding devel(a)ovirt.org
Thx
Roni
On Sun, Feb 3, 2019 at 2:31 PM Eyal Edri <eedri(a)redhat.com> wrote:
> You might want to ask on devel(a)ovirt.org and not rhev-devel, which is
> downstream and might be less monitored.
>
> On Sun, Feb 3, 2019 at 11:10 AM Roni Eliezer <reliezer(a)redhat.com> wrote:
>
>> Hi,
>> I got the following errors (see below) when I try to upgrade 4.3 u/s.
>> Any idea?
>>
>> --> Finished Dependency Resolution
>> Error: Package: 1:openvswitch-2.10.1-1.el7.x86_64
>> (@centos-ovirt43-testing)
>> Requires: librte_efd.so.1()(64bit)
>> Removing: dpdk-17.11-15.el7.x86_64 (@rhel-extras)
>> librte_efd.so.1()(64bit)
>> Updated By: dpdk-18.11-2.el7_6.x86_64 (rhel-extras)
>> Not found
>> Available: dpdk-17.11-7.el7.x86_64 (rhel-extras)
>> librte_efd.so.1()(64bit)
>> Available: dpdk-17.11-11.el7.x86_64 (rhel-extras)
>> librte_efd.so.1()(64bit)
>> Available: dpdk-17.11-13.el7.x86_64 (rhel-extras)
>> librte_efd.so.1()(64bit)
>> Available: dpdk-2.0.0-8.el7.x86_64 (rhel-extras)
>> Not found
>> Available: dpdk-2.2.0-2.el7.x86_64 (rhel-extras)
>> Not found
>> Available: dpdk-2.2.0-3.el7.x86_64 (rhel-extras)
>> Not found
>> Available: dpdk-16.11.2-4.el7.x86_64 (rhel-extras)
>> Not found
>> Error: Package: 1:openvswitch-ovn-common-2.10.1-1.el7.x86_64
>> (@centos-ovirt43-testing)
>> Requires: librte_pmd_softnic.so.1()(64bit)
>> Removing: dpdk-17.11-15.el7.x86_64 (@rhel-extras)
>> librte_pmd_softnic.so.1()(64bit)
>> Updated By: dpdk-18.11-2.el7_6.x86_64 (rhel-extras)
>> Not found
>> Available: dpdk-17.11-7.el7.x86_64 (rhel-extras)
>> librte_pmd_softnic.so.1()(64bit)
>> Available: dpdk-17.11-11.el7.x86_64 (rhel-extras)
>> librte_pmd_softnic.so.1()(64bit)
>> Available: dpdk-17.11-13.el7.x86_64 (rhel-extras)
>> librte_pmd_softnic.so.1()(64bit)
>> Available: dpdk-2.0.0-8.el7.x86_64 (rhel-extras)
>> Not found
>> Available: dpdk-2.2.0-2.el7.x86_64 (rhel-extras)
>> Not found
>> Available: dpdk-2.2.0-3.el7.x86_64 (rhel-extras)
>> Not found
>> Available: dpdk-16.11.2-4.el7.x86_64 (rhel-extras)
>> Not found
>>
>>
>
> --
>
> Eyal edri
>
>
> MANAGER
>
> RHV/CNV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA <https://www.redhat.com/>
> <https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>