neofetch and spice/vnc console
by Nathanaël Blanchet
Hello
When running neofetch in a spice/vnc console, the terminal becomes unreadable.
Does this come from spice/vnc, from the graphics emulation, or from the neofetch binary itself?
The screenfetch alternative works fine.
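One way to narrow it down is to check what terminal type and color depth the console advertises, and to run neofetch with its decorations turned off (a sketch; the flag names are from neofetch's documented options, please double-check against your version):

```shell
# Inside the spice/vnc console:
echo "TERM=$TERM"      # a limited TERM (e.g. 'linux') supports fewer escape sequences
tput colors            # how many colors the terminal claims to support

# Try neofetch with decorations off; if these stay readable, the
# escape-sequence-heavy default output is the likely culprit rather
# than spice/vnc itself.
neofetch --stdout      # plain text, no colors or ascii art
neofetch --backend off # keep colors but drop the ascii logo
```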
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
5 years, 2 months
Internal Memory filter not satisfied question
by Vrgotic, Marko
Dear oVirt,
I need help clarifying how the internal Memory filter is applied, i.e. which values it looks at.
The log message below is clear, but I fail to see where the memory values are calculated from: are they based on the Guaranteed, Allocated, or MaxAvailable memory a VM can get/have? Or something else?
We have a shared cluster, with hosts having different amounts of memory.
I tried to create 17 VMs, and the following errors were observed:
2019-09-20 11:46:10,496Z WARN [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-967) [2d316914-84dc-4410-856f-f8576e411574] Validation of action 'RunVm' failed for user mvrgotic@ictv.com(a)ictv.com-authz. Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_ALL_HOSTS_FILTERED_OUT,VAR__FILTERTYPE__INTERNAL,$hostName ovirt-hv-08.avinity.tv,$filterName Memory,$availableMem 3513,VAR__DETAIL__NOT_ENOUGH_MEMORY,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL,VAR__FILTERTYPE__INTERNAL,$hostName ovirt-hv-02.avinity.tv,$filterName Memory,$availableMem 1605,VAR__DETAIL__NOT_ENOUGH_MEMORY,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL,VAR__FILTERTYPE__INTERNAL,$hostName ovirt-hv-05.avinity.tv,$filterName Memory,$availableMem 1265,VAR__DETAIL__NOT_ENOUGH_MEMORY,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL,VAR__FILTERTYPE__INTERNAL,$hostName ovirt-hv-20.avinity.tv,$filterName Memory,$availableMem 195,VAR__DETAIL__NOT_ENOUGH_MEMORY,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL,VAR__FILTERTYPE__INTERNAL,$hostName ovirt-hv-01.avinity.tv,$filterName Memory,$availableMem 177,VAR__DETAIL__NOT_ENOUGH_MEMORY,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL,VAR__FILTERTYPE__INTERNAL,$hostName ovirt-hv-06.avinity.tv,$filterName Memory,$availableMem 2845,VAR__DETAIL__NOT_ENOUGH_MEMORY,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL,VAR__FILTERTYPE__INTERNAL,$hostName ovirt-hv-09.avinity.tv,$filterName Memory,$availableMem 480,VAR__DETAIL__NOT_ENOUGH_MEMORY,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL,VAR__FILTERTYPE__INTERNAL,$hostName ovirt-hv-03.avinity.tv,$filterName Memory,$availableMem 3116,VAR__DETAIL__NOT_ENOUGH_MEMORY,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL,VAR__FILTERTYPE__INTERNAL,$hostName ovirt-hv-21.avinity.tv,$filterName Memory,$availableMem 1342,VAR__DETAIL__NOT_ENOUGH_MEMORY,SCHEDULING_HOST_FILTERED_REASON_WITH_DETAIL
2019-09-20 11:46:10,577Z INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (EE-ManagedThreadFactory-engine-Thread-851) [2d316914-84dc-4410-856f-f8576e411574] Candidate host 'ovirt-hv-08.avinity.tv' ('adebaa4f-5402-47c7-8634-5779a4b3f10f') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'Memory' (correlation id: null)
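Not an authoritative answer, but the numbers in the log hint at what the filter compares: hosts reporting availableMem 3513 and 3116 were still filtered out, which fits a requirement based on the VM's defined memory (4096 MB) plus some per-VM overhead rather than the guaranteed 1024 MB. A toy sketch of that comparison (the overhead value and the formula are my assumptions for illustration, not the actual engine code):

```python
# Toy model of how a scheduler memory filter might reject hosts.
# per_vm_overhead_mb and the formula are assumptions, not oVirt engine code.

def memory_filter(hosts_avail_mb, vm_required_mb, per_vm_overhead_mb=65):
    """Return the hosts whose reported availableMem can fit the VM."""
    needed = vm_required_mb + per_vm_overhead_mb
    kept = []
    for name, avail in hosts_avail_mb:
        if avail >= needed:
            kept.append(name)
        else:
            print(f"{name} filtered out: availableMem {avail} < {needed}")
    return kept

# availableMem values taken from the log above
hosts = [("ovirt-hv-08", 3513), ("ovirt-hv-03", 3116), ("ovirt-hv-20", 195)]

# With the defined memory (4096 MB) every host is filtered, matching the log;
# with the guaranteed memory (1024 MB) most hosts would have passed.
print(memory_filter(hosts, vm_required_mb=4096))  # -> []
print(memory_filter(hosts, vm_required_mb=1024))  # -> ['ovirt-hv-08', 'ovirt-hv-03']
```

If this model is right, lowering the VMs' defined memory (or freeing host memory) is what changes the scheduling outcome, not the guaranteed value.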
To provide you with a bit more detail on our cluster setup:
oVirt SHE 4.3.4.3-1
Cluster Settings and Hosts actual memory usage
And here is the VM information:
Name: testvmmvr-th11stitcher
Defined Memory: 4096 MB
Origin: oVirt
Description:
Physical Memory Guaranteed: 1024 MB
Run On: Any Host in Cluster
Template: av-centos-75-baseimage (Thin/Dependent)
Guest OS Memory Free/Cached/Buffered: 3453 / 2 / 156 MB
Custom Properties: Not Configured
Operating System: Red Hat Enterprise Linux 7.x x64
Number of CPU Cores: 4 (1:4:1)
Cluster Compatibility Version: 4.3
BIOS Type: Default
Guest CPU Count: 4
VM ID: testvmmvr
Graphics protocol: VNC
Guest CPU Type: SandyBridge,+pcid,+spec-ctrl,+ssbd
Video Type: QXL
Highly Available: No
Priority: Low
Number of Monitors: 1
FQDN: testvmmvr-th11stitcher
Optimized for: Server
USB Policy: Disabled
Hardware Clock Time Offset: Etc/GMT
Created By: Deployment User
Kindly awaiting your reply.
If any data is missing, please let me know.
— — —
Met vriendelijke groet / Kind regards,
Marko Vrgotic
5 years, 2 months
Re: One of the bricks in the GFS is down. How can I solve it?
by Amit Bawer
On Tuesday, September 24, 2019, zhouhao(a)vip.friendtimes.net <
zhouhao(a)vip.friendtimes.net> wrote:
> Sorry, executing “gluster peer status” shows a brick is lost; I worry about
> losing data.
>
> ------------------------------
> zhouhao(a)vip.friendtimes.net
>
>
> *From:* Amit Bawer <abawer(a)redhat.com>
> *Date:* 2019-09-24 15:42
> *To:* zhouhao(a)vip.friendtimes.net
> *CC:* users <users(a)ovirt.org>
> *Subject:* Re: [ovirt-users] Re: One of the bricks in the GFS is down. How
> can I solve it?
> It is preferable to have all GFS nodes online and running first; you can
> follow the troubleshooting guide here:
>
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Troubleshooting.html
>
>
> On Tue, Sep 24, 2019 at 7:18 AM zhouhao(a)vip.friendtimes.net <
> zhouhao(a)vip.friendtimes.net> wrote:
>
>>
>>
>> ------------------------------
>> zhouhao(a)vip.friendtimes.net
>>
>>
>> *From:* zhouhao(a)vip.friendtimes.net
>> *Date:* 2019-09-24 11:41
>> *To:* zhouhao <zhouhao(a)vip.friendtimes.net>; users <users(a)ovirt.org>
>> *Subject:* Re: [ovirt-users] One of the bricks in the GFS is down. How
>> can I solve it?
>> Can I click the 'Start' button?
>>
>>
>> ------------------------------
>> zhouhao(a)vip.friendtimes.net
>>
>>
>> *From:* zhouhao(a)vip.friendtimes.net
>> *Date:* 2019-09-24 11:36
>> *To:* users <users(a)ovirt.org>
>> *Subject:* [ovirt-users] One of the bricks in the GFS is down. How can I
>> solve it?
>>
>> *How can I solve it safely? There are 50 VMs running on this GFS volume.*
>>
>>
>> *the mail messages below:*
>>
>> *Time:* 2019-09-24 04:47:20.511
>> *Message:* Detected change in status of brick 192.168.3.16:/vmdata/gfs
>> of volume bojoy-GFS of cluster bojoy-cluster from UP to DOWN via cli.
>> *Severity:* WARNING
>>
>> status below:
>>
>>
>> GFS service status
>>
>> _______________________________________________
>> Users mailing list -- users(a)ovirt.org
>> To unsubscribe send an email to users-leave(a)ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/TBCAJIHSV2AU24IFAMR43GIHRUECWW2V/
>>
>
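Before pressing Start in the UI, a typical first look at the volume state from any node would be along these lines (stock gluster CLI commands; adjust the volume name to yours):

```shell
# Overall peer and volume health
gluster peer status
gluster volume status bojoy-GFS

# If only the brick process died while glusterd itself is fine,
# restarting the volume in force mode respawns the missing brick:
gluster volume start bojoy-GFS force

# After the brick is back, let self-heal catch up before doing anything else
gluster volume heal bojoy-GFS info
```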
5 years, 2 months
Unexpected SSO error with first task
by Vrgotic, Marko
Dear oVirt,
In our software tests, we deploy oVirt VMs using Ansible, and teardown is also done with Ansible.
We noticed that each time we run the nightly tests, using the Jenkins user in oVirt, the first task of one of the jobs, creating a VM, fails with the following error:
TASK [Login to oVirt] **********************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AuthError: Error during SSO authentication invalid_grant : The provided authorization grant for the username and password has expired.
fatal: [PVyrZMy0-csm-2.avinity.tv -> localhost]: FAILED! => {"changed": false, "msg": "Error during SSO authentication invalid_grant : The provided authorization grant for the username and password has expired."}
Immediately after, the token seems to be renewed and all other VMs are created without issue.
The play looks like the following:
---
# file: provision-ovirt.yml
- name: Create VM in OVirt
  hosts: ovirt-vms
  gather_facts: false
  vars:
    - state: running
  pre_tasks:
    - name: Login to oVirt
      delegate_to: localhost
      ovirt_auth:
        url: "{{ ovirt_engine_url }}"
        username: "{{ ovirt_engine_user }}"
        password: "{{ ovirt_engine_password }}"
        ca_file: "{{ ovirt_engine_cafile | default(omit) }}"
        insecure: "{{ ovirt_engine_insecure | default(true) }}"
      run_once: true
  tasks:
    - name: "Get agent keys"
      delegate_to: localhost
      command: ssh-add -L
      register: ssh_agent_pubkeys
      run_once: true
      changed_when: False
    - name: Create new VMs from template
      delegate_to: localhost
      ovirt_vm:
        auth: "{{ ovirt_auth }}"
        name: "{{ inventory_hostname_short }}"
        nics:
          - name: nic1
            profile_name: tenant1
            interface: virtio
        cloud_init_nics:
          - nic_name: eth0
            nic_boot_protocol: dhcp
            nic_on_boot: true
        cloud_init:
          host_name: "{{ inventory_hostname_short }}"
          authorized_ssh_keys: "{{ ssh_agent_pubkeys.stdout }}"
        state: "{{ state }}"
        cluster: "{{ ovirt_cluster }}"
        template: "{{ images_to_template[os_image] }}"
        instance_type: "{{ os_flavor_to_ovirt_instance_type[os_flavor] }}"
    - name: "Wait until the ansible user can log into the host (cloud-init needs to have finished)"
      command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no {{ ansible_user }}@{{ inventory_hostname }} exit
      register: ssh_output
      delegate_to: localhost
      until: ssh_output.rc == 0
      retries: 30
      delay: 5
      changed_when: False
  post_tasks:
    - name: Logout from oVirt
      delegate_to: localhost
      run_once: true
      ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"
      tags:
        - always
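While waiting for a proper explanation, one low-risk workaround sketch (assuming the first failure is just a transiently stale SSO grant, which the second attempt proves recoverable) is to retry the login task a few times; the retries/delay values below are arbitrary:

```yaml
# Sketch only: retry the SSO login before giving up.
- name: Login to oVirt
  delegate_to: localhost
  ovirt_auth:
    url: "{{ ovirt_engine_url }}"
    username: "{{ ovirt_engine_user }}"
    password: "{{ ovirt_engine_password }}"
  register: login_result
  until: login_result is succeeded
  retries: 3
  delay: 5
  run_once: true
```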
How does the token relate to the user session in oVirt? We would expect that once we log out, the token is expired, and upon a new login we simply get a renewed/new token to proceed.
Is it a bug? Or is there something we are doing incorrectly in the play?
Kindly awaiting your reply.
— — —
Met vriendelijke groet / Kind regards,
Marko Vrgotic
5 years, 2 months
The virtual machine crashed and I can't shut down the VM successfully
by zhouhao@vip.friendtimes.net
This trouble has bothered me for a long time: sometimes a VM crashes and I can't shut it down, so I have to reboot the oVirt node to resolve it.
The VM's error is below:
The oVirt node's error is below:
The VM's threads on the oVirt node; the I/O ratio is 100%,
and the VM's process has changed to defunct.
I cannot kill it; every time I have to shut down the whole oVirt node.
On the engine website, the VM's status always stays on the way to shutdown, even if I wait for hours;
it either fails or keeps shutting down,
and "Power Off" can't shut down the VM either.
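A defunct/unkillable qemu process with 100% I/O usually means threads stuck in uninterruptible sleep (state D), which tends to point at the storage path rather than the VM itself. A quick way to spot such tasks and what they are blocked on (standard procps/util-linux commands):

```shell
# List processes stuck in uninterruptible sleep (D) or left as zombies (Z),
# plus the kernel function they are blocked in (wchan).
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^[DZ]/'

# Kernel-side view of hung tasks, if the hung-task detector fired
# (may need root; ignore a permission error here).
dmesg | grep -i "blocked for more than" || true
```

If the wchan column points into a filesystem or block layer function, checking the underlying storage (network mount, multipath, gluster brick) is usually more productive than trying to kill the process.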
----------------------------------------------------------------
Other information about my oVirt node:
Node version:
Node hardware:
zhouhao(a)vip.friendtimes.net
5 years, 2 months
Creating Bricks via the REST API
by Julian Schill
Under "Compute -> Hosts -> Host Name -> Storage Devices" one can create new Gluster bricks on free partitions. How can I do this via the REST API?
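I don't know offhand whether that exact "create brick from storage device" UI action is exposed over REST in every version, but expanding a Gluster volume with an already-prepared brick directory can be done through the volume's glusterbricks collection. A hedged sketch (the cluster/volume/host ids, credentials, and paths are placeholders; verify the element names against your engine's API reference):

```shell
ENGINE=https://engine.example.com/ovirt-engine/api
curl -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' \
  -X POST "$ENGINE/clusters/CLUSTER_ID/glustervolumes/VOLUME_ID/glusterbricks" \
  -d '<bricks>
        <brick>
          <server_id>HOST_ID</server_id>
          <brick_dir>/gluster_bricks/newbrick/brick</brick_dir>
        </brick>
      </bricks>'
```

The LVM/filesystem preparation of the device itself may still need to happen on the host (or via Ansible's gluster roles) before the brick directory exists.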
5 years, 2 months
ovirtmgmt STP disabled - any reason behind that
by Strahil Nikolov
Hi All,
I recently hit an issue and, due to lack of debug time, I'm still not sure whether it was a loop on the network. Can someone clarify why the ovirtmgmt bridge has STP disabled? Is there any reason behind that?
Also, what is the proper way to enable STP on those bridges?
For now, I have set the highest STP priority on my router and I don't expect the issue to reoccur (if it really was a loop), but I want to avoid future issues like that.
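For inspection and a non-persistent test, the stock iproute2 commands work directly on the host; note that vdsm owns the bridge definition, so a manual change may be reverted on the next network sync (persisting it through the engine would be safer, but I'm not certain of the exact mechanism, so this is only a sketch):

```shell
# Show the current STP state of the management bridge (stp_state 0 = disabled)
ip -d link show ovirtmgmt | grep -o 'stp_state [01]'

# Enable STP on the bridge until the next vdsm reconfiguration/reboot
ip link set ovirtmgmt type bridge stp_state 1
```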
Best Regards,
Strahil Nikolov
5 years, 2 months
VDI
by Fabio Marzocca
Is there anyone who uses oVirt as a full VDI environment? I would have a
bunch of questions...
5 years, 2 months