ovirt-ha-broker ERROR Failed to getVdsStats: No 'network' in result
by Strahil Nikolov
Hi All,
After a host reinstall + deploy (UI -> Hosts -> Management -> Reinstall) I see the following error from ovirt-ha-broker:
ovirt-ha-broker mgmt_bridge.MgmtBridge ERROR Failed to getVdsStats: No 'network' in result
Anyone got an idea what this is about? Should I worry about this message?
Best Regards,
Strahil Nikolov
Re: improve file operation speed in HCI
by Strahil
If you seek performance, then set the tuned-adm profile in the VM to 'throughput-performance' and the I/O scheduler to either 'noop' or 'none' (depending on whether multi-queue is enabled).
Usually, if you create the Gluster cluster via Cockpit and then install the hosted engine via Cockpit as well, all options on your Gluster volumes are already set to the optimal values.
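A minimal sketch, run inside the VM ('vda' is just a placeholder for your actual virtio disk):

tuned-adm profile throughput-performance
cat /sys/block/vda/queue/scheduler          # the active scheduler is shown in [brackets]
echo noop > /sys/block/vda/queue/scheduler  # use 'none' instead on multi-queue (blk-mq) kernels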
Best Regards,
Strahil Nikolov

On Oct 18, 2019 15:30, Jayme <jaymef(a)gmail.com> wrote:
>
> My VMs are using the virtual-guest tuned profile and the oVirt node hosts are using the virtual-host profile. Those seem to be good defaults from what I'm seeing. I will test I/O schedulers to see if that makes any difference, and also try out the high-performance VM profile (I was staying away from that profile due to the loss of high availability).
>
> On Fri, Oct 18, 2019 at 9:18 AM Jayme <jaymef(a)gmail.com> wrote:
>>
>> The VMs are basically as stock CentOS 7.x as you can get. There are so many layers to deal with in HCI that it's difficult to know where to begin with tuning. I was focusing mainly on gluster. Is it recommended to do tuning directly on the oVirt host nodes as well, such as the I/O scheduler and tuned-adm profiles?
>>
>> On Fri, Oct 18, 2019 at 6:55 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
>>>
>>> What are your I/O scheduler and tuned-adm profile in the VM?
>>> Red Hat-based VMs use 'deadline', which prioritizes reads over writes -> you can use 'noop' or 'none' instead.
>>>
>>> For profile, you can use high-performance.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Oct 18, 2019 06:45, Jayme <jaymef(a)gmail.com> wrote:
>>>>
>>>> I'm wondering if anyone has any tips to improve file/directory operations in an HCI replica 3 (no arbiter) configuration with SSDs and a 10GbE storage network.
>>>>
>>>> I am running the stock 'optimize for virt store' volume settings currently and am wondering what improvements, if any, I can make for VM write speed, and more specifically anything I can tune to increase the performance of small-file operations such as copying, untarring, npm installs, etc.
>>>>
>>>> For some context, I'm seeing ~50MB/s write speeds inside the VM with: dd if=/dev/zero of=./test bs=512k count=2048 oflag=direct -- I am not sure how this compares to other HCI setups; I feel like it should be higher with SSD-backed storage. The same command from the gluster mount is over 400MB/s.
>>>>
>>>> I've read some things about metadata caching, read-ahead and other options. There are so many that I'm not sure where to start, and I'm also not sure which could potentially have a negative impact on VM stability/reliability.
>>>>
>>>> Here are options for one of my volumes:
>>>>
>>>> Volume Name: prod_b
>>>> Type: Replicate
>>>> Volume ID: c3e7447e-8514-4e4a-9ff5-a648fe6aa537
>>>> Status: Started
>>>> Snapshot Count: 0
>>>> Number of Bricks: 1 x 3 = 3
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: gluster0.example.com:/gluster_bricks/prod_b/prod_b
>>>> Brick2: gluster1.example.com:/gluster_bricks/prod_b/prod_b
>>>> Brick3: gluster2.example.com:/gluster_bricks/prod_b/prod_b
>>>> Options Reconfigured:
>>>> server.event-threads: 4
>>>> client.event-threads: 4
>>>> performance.client-io-threads: on
>>>> nfs.disable: on
>>>> transport.address-family: inet
>>>> performance.quick-read: off
>>>> performance.read-ahead: off
>>>> performance.io-cache: off
>>>> performance.low-prio-threads: 32
>>>> network.remote-dio: off
>>>> cluster.eager-lock: enable
>>>> cluster.quorum-type: auto
>>>> cluster.server-quorum-type: server
>>>> cluster.data-self-heal-algorithm: full
>>>> cluster.locking-scheme: granular
>>>> cluster.shd-max-threads: 8
>>>> cluster.shd-wait-qlength: 10000
>>>> features.shard: on
>>>> user.cifs: off
>>>> storage.owner-uid: 36
>>>> storage.owner-gid: 36
>>>> network.ping-timeout: 30
>>>> performance.strict-o-direct: on
>>>> cluster.granular-entry-heal: enable
>>>> server.allow-insecure: on
>>>> cluster.choose-local: off
>>>>
>>>>
HE deployment failing - FAILED! => {"changed": false, "msg": "network default not found"}
by Parth Dhanjal
Hey!
I am trying a static IP deployment.
But the HE deployment fails during the VM preparation step and throws the
following error -
[ INFO ] TASK [ovirt.hosted_engine_setup : Parse libvirt default network configuration]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "network default not found"}
I tried restarting the network service, but the error persists.
Upon checking the logs, I found the following:
https://pastebin.com/MB6GrLKA
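As a sketch of the checks I plan to run next on the host (assuming the standard virsh tooling and the stock default.xml shipped with libvirt; the path may differ):

virsh net-list --all                                      # is the 'default' network defined and active?
virsh net-define /usr/share/libvirt/networks/default.xml  # re-define it if it is missing
virsh net-start default
virsh net-autostart default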
Has anyone else faced this issue?
Regards
Parth Dhanjal
Re: Error updating package in engine
by Strahil
You can also use 'package-cleanup --cleandupes' to remove duplicate packages.
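For example (just a sketch; package-cleanup comes from the yum-utils package):

yum install yum-utils         # provides package-cleanup
package-cleanup --dupes       # list the duplicate packages first
package-cleanup --cleandupes  # then remove the older duplicates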
Best Regards,
Strahil Nikolov

On Oct 18, 2019 10:15, Gianluca Cecchi <gianluca.cecchi(a)gmail.com> wrote:
>
> On Fri, Oct 18, 2019 at 8:28 AM karli <karli(a)inparadise.se> wrote:
>>
>> Hey guys!
>>
>> I got a weird issue today doing 'yum upgrade' that I'm hoping you can help
>> me with. The transaction failed because of systemd packages. I can't
>> update systemd packages because they're already the latest, but old
>> versions are still around because of ovirt related dependencies:
>>
>> # rpm -q systemd
>> systemd-219-62.el7_6.9.x86_64
>> systemd-219-67.el7_7.1.x86_64
>> # rpm -e systemd-219-62.el7_6.9.x86_64
>> error: Failed dependencies:
>> systemd = 219-62.el7_6.9 is needed by (installed) systemd-python-219-62.el7_6.9.x86_64
>> # rpm -e systemd-python-219-62.el7_6.9.x86_64
>> error: Failed dependencies:
>> systemd-python is needed by (installed) ovirt-imageio-proxy-1.5.1-0.el7.noarch
>>
>> Is this something you are aware of, or am I a special little
>> snowflake? :)
>>
>> /K
>>
>
> Possibly you had an incomplete transaction in the past that didn't clean up the old systemd?
> If you run
> yum-complete-transaction
> what do you get as output of candidate removals?
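> For example (a sketch; yum-complete-transaction is part of yum-utils, and --cleanup-only discards the saved transaction data instead of re-running it):
>
> yum-complete-transaction                 # retry/finish the unfinished transaction
> yum-complete-transaction --cleanup-only  # or just clear the leftover transaction journal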
> Gianluca
Error updating package in engine
by karli
Hey guys!
I got a weird issue today doing 'yum upgrade' that I'm hoping you can help
me with. The transaction failed because of systemd packages. I can't
update systemd packages because they're already the latest, but old
versions are still around because of ovirt related dependencies:
# rpm -q systemd
systemd-219-62.el7_6.9.x86_64
systemd-219-67.el7_7.1.x86_64
# rpm -e systemd-219-62.el7_6.9.x86_64
error: Failed dependencies:
systemd = 219-62.el7_6.9 is needed by (installed) systemd-python-219-62.el7_6.9.x86_64
# rpm -e systemd-python-219-62.el7_6.9.x86_64
error: Failed dependencies:
systemd-python is needed by (installed) ovirt-imageio-proxy-1.5.1-0.el7.noarch
Is this something you are aware of, or am I a special little
snowflake? :)
/K
Unable to control VM via mouse and keyboard
by Jess Zanne Uy
Hi,
I have added a new VM for the first time (Ubuntu 18 LTS).
I tried to run it through the SPICE remote viewer, but I can't click any buttons inside the VM.
I'm stuck at the boot menu.
Thanks,
Jess
Re: improve file operation speed in HCI
by Strahil
What are your I/O scheduler and tuned-adm profile in the VM?
Red Hat-based VMs use 'deadline', which prioritizes reads over writes -> you can use 'noop' or 'none' instead.
For profile, you can use high-performance.
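To make the scheduler change survive reboots, a udev rule is one option (a sketch only - the 'vd[a-z]' match assumes virtio disks):

# /etc/udev/rules.d/60-io-scheduler.rules
ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="none"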
Best Regards,
Strahil Nikolov

On Oct 18, 2019 06:45, Jayme <jaymef(a)gmail.com> wrote:
>
> I'm wondering if anyone has any tips to improve file/directory operations in an HCI replica 3 (no arbiter) configuration with SSDs and a 10GbE storage network.
>
> I am running the stock 'optimize for virt store' volume settings currently and am wondering what improvements, if any, I can make for VM write speed, and more specifically anything I can tune to increase the performance of small-file operations such as copying, untarring, npm installs, etc.
>
> For some context, I'm seeing ~50MB/s write speeds inside the VM with: dd if=/dev/zero of=./test bs=512k count=2048 oflag=direct -- I am not sure how this compares to other HCI setups; I feel like it should be higher with SSD-backed storage. The same command from the gluster mount is over 400MB/s.
>
> I've read some things about metadata caching, read-ahead and other options. There are so many that I'm not sure where to start, and I'm also not sure which could potentially have a negative impact on VM stability/reliability.
>
> Here are options for one of my volumes:
>
> Volume Name: prod_b
> Type: Replicate
> Volume ID: c3e7447e-8514-4e4a-9ff5-a648fe6aa537
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster0.example.com:/gluster_bricks/prod_b/prod_b
> Brick2: gluster1.example.com:/gluster_bricks/prod_b/prod_b
> Brick3: gluster2.example.com:/gluster_bricks/prod_b/prod_b
> Options Reconfigured:
> server.event-threads: 4
> client.event-threads: 4
> performance.client-io-threads: on
> nfs.disable: on
> transport.address-family: inet
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.low-prio-threads: 32
> network.remote-dio: off
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 10000
> features.shard: on
> user.cifs: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> network.ping-timeout: 30
> performance.strict-o-direct: on
> cluster.granular-entry-heal: enable
> server.allow-insecure: on
> cluster.choose-local: off
>
>
Re: Ceph cluster on oVirt Guests
by Strahil
Please provide some (or all) of the errors you got.
Best Regards,
Strahil Nikolov

On Oct 18, 2019 13:32, Anantha Raghava <raghav(a)exzatechconsulting.com> wrote:
>
> Hi,
>
> It appears it is. We tested it on physical hardware with the same CentOS 7 and the same release of Ceph, and it works like a charm. We also tried it with VMware, and it works perfectly. Of course, on VMware we had to install open-vm-tools in place of the VMware guest tools. But with ovirt-guest-tools we get an error, and open-vm-tools does not appear to be designed for oVirt.
>
>
> Thanks & regards,
>
> Anantha Raghava
>
> Do not print this e-mail unless required. Save Paper & trees.
> On 18/10/19 3:27 pm, Strahil wrote:
>
> Are you sure it's a VM-related issue?
>
> Best Regards,
> Strahil Nikolov
>
> On Oct 18, 2019 09:35, Anantha Raghava <raghav(a)exzatechconsulting.com> wrote:
>>
>> Hi,
>>
>> We are trying to install a Ceph cluster on CentOS 7 guests on oVirt. We are receiving many errors, and it is unable to create the master or the node. Has anyone tried to deploy a Ceph cluster on a CentOS 7 guest in oVirt?
>>
>> --
>>
>> Thanks & regards,
>>
>> Anantha Raghava
>> Do not print this e-mail unless required. Save Paper & trees.
Re: Ceph cluster on oVirt Guests
by Strahil
Are you sure it's a VM-related issue?
Best Regards,
Strahil Nikolov

On Oct 18, 2019 09:35, Anantha Raghava <raghav(a)exzatechconsulting.com> wrote:
>
> Hi,
>
> We are trying to install a Ceph cluster on CentOS 7 guests on oVirt. We are receiving many errors, and it is unable to create the master or the node. Has anyone tried to deploy a Ceph cluster on a CentOS 7 guest in oVirt?
>
> --
>
> Thanks & regards,
>
> Anantha Raghava
> Do not print this e-mail unless required. Save Paper & trees.