Re: Single instance scaleup.
by Strahil
Hi Leo,
As you do not have a distributed volume, you can easily switch to replica 2 arbiter 1 or replica 3 volumes.
You can use the following for adding the bricks:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Ad...
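As a rough sketch only (the new-host brick paths below are placeholders - adjust them to your layout, and trigger a full heal afterwards):

# engine -> full replica 3 (one data brick on each new host)
gluster volume add-brick engine replica 3 \
  <host2>:/gluster_bricks/engine/engine <host3>:/gluster_bricks/engine/engine
# ssd-samsung -> replica 3 arbitrated (data brick on host2, arbiter brick on host3)
gluster volume add-brick ssd-samsung replica 3 arbiter 1 \
  <host2>:/gluster_bricks/sdc/data <host3>:/gluster_bricks/arbiter/data
# then let the new bricks get populated
gluster volume heal engine full
gluster volume heal ssd-samsung full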
Best Regards,
Strahil Nikolov

On May 26, 2019 10:54, Leo David <leoalex(a)gmail.com> wrote:
>
> Hi Stahil,
> Thank you so much for your input !
>
> gluster volume info
>
>
> Volume Name: engine
> Type: Distribute
> Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.80.191:/gluster_bricks/engine/engine
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> performance.low-prio-threads: 32
> performance.strict-o-direct: off
> network.remote-dio: off
> network.ping-timeout: 30
> user.cifs: off
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> cluster.eager-lock: enable
>
> Volume Name: ssd-samsung
> Type: Distribute
> Volume ID: 76576cc6-220b-4651-952d-99846178a19e
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.80.191:/gluster_bricks/sdc/data
> Options Reconfigured:
> cluster.eager-lock: enable
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> user.cifs: off
> network.ping-timeout: 30
> network.remote-dio: off
> performance.strict-o-direct: on
> performance.low-prio-threads: 32
> features.shard: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> transport.address-family: inet
> nfs.disable: on
>
> The other two hosts will be 192.168.80.192/193 - this is a dedicated gluster network over a 10Gb SFP+ switch.
> - host 2 will have an identical hardware configuration to host 1 ( each disk is actually a raid0 array )
> - host 3 has:
> - 1 ssd for OS
> - 1 ssd - for adding to the engine volume in a full replica 3
> - 2 ssd's in a raid 1 array to be added as the arbiter for the data volume ( ssd-samsung )
> So the plan is to have "engine" scaled to a full replica 3, and "ssd-samsung" scaled to a replica 3 arbitrated.
>
>
>
>
> On Sun, May 26, 2019 at 10:34 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
>>
>> Hi Leo,
>>
>> Gluster is quite smart, but in order to provide any hints, can you provide the output of 'gluster volume info <glustervol>'?
>> If you have 2 more systems, keep in mind that it is best to mirror the storage on the second replica (2 disks on 1 machine -> 2 disks on the new machine), while for the arbiter this is not necessary.
>>
>> What are your network and NICs? Based on my experience, I can recommend at least 10 Gbit/s interface(s).
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On May 26, 2019 07:52, Leo David <leoalex(a)gmail.com> wrote:
>>>
>>> Hello Everyone,
>>> Can someone help me to clarify this ?
>>> I have a single-node 4.2.8 installation ( only two gluster storage domains - distributed single-drive volumes ). Now I just got two identical servers and I would like to go for a 3-node bundle.
>>> Is it possible ( after joining the new nodes to the cluster ) to expand the existing volumes across the new nodes and change them to replica 3 arbitrated ?
>>> If so, could you share with me what the procedure would be ?
>>> Thank you very much !
>>>
>>> Leo
>
>
>
> --
> Best regards, Leo David
Hang on "Wait for the host to be up"
by piotret@interia.pl
Hi,
I have a problem with the oVirt installation.
The installation hangs on the step "Wait for the host to be up".
I have a VLAN configuration and perhaps this is the problem.
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": [{"address": "hostxxxxx", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "xxxxx", "subject": "O=xxxx,CN=xxxxx"}, "cluster": {"href": "/ovirt-engine/api/clusters/1874b3a6-a631-11ea-98bc-00163e7d7d57", "id": "1874b3a6-a631-11ea-98bc-00163e7d7d57"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/3764c903-358a-4865-9cc8-2fa627800fef", "id": "3764c903-358a-4865-9cc8-2fa627800fef", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, "name": "hostxxxx", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:67c4FdB+T7KAxxFRCjaiYRReB+n6Bv9EqZFh3J/d/Es", "port": 22}, "statistics": [], "status": "installing", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "rhel", "unmanaged_networks": [], "update_available": false, "vgpu_placement": "consolidated"}]}, "attempts": 120, "changed": false}
The network connection works. I can log in to https://hostxxxx:6900/ovirt-engine/
I don't know how oVirt checks whether the host is up?
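In case it is useful, this is roughly how I am checking the host side while the engine waits (assuming a standard deployment where the engine talks to VDSM on port 54321, as shown in the error above):

systemctl status vdsmd           # VDSM has to be running for the host to come up
ss -tlnp | grep 54321            # is the VDSM management port listening?
journalctl -u vdsmd -n 50        # recent VDSM messages
tail -f /var/log/vdsm/vdsm.log   # VDSM log during the deployment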
Greetings
oVirt 4.4 node via PXE and custom kickstart
by Michael Thomas
I'm trying to customize a node install to include some local management and monitoring tools, starting with puppet, following the instructions here:
https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_...
The installation of the oVirt node works, and I can deploy the engine once it's running. However, while the extra steps in the %post section of my kickstart are working, any additional 'repo' and %packages settings seem to get ignored (even though they were copied from known working kickstart files).
What kickstart customizations are supported when deploying a node via PXE?
My PXE menu looks like this:
menuentry 'Ovirt node' {
linuxefi ovirt/vmlinuz ip=dhcp ks=http://10.13.5.13/kickstart/ovirt.cfg ksdevice=link initrd=ovirt/initrd.img inst.stage2=http://10.13.5.13/rhvh
initrdefi ovirt/initrd.img
}
My kickstart file is as follows:
liveimg --url=http://10.13.5.13/rhvh/ovirt-node-ng-image.squashfs.img
clearpart --all --initlabel
autopart --type=thinp
zerombr
rootpw --plaintext ovirt
timezone --utc America/Chicago
text
reboot
# This will need to be updated to point to the 'frozen' snapshot, when available
repo --install --name="EPEL" --baseurl=http://10.13.5.13/mirror/linux/epel/8/Everything/x86_64 --cost=99
repo --install --name="Puppet" --baseurl=http://10.13.5.13/mirror/linux/puppetlabs/puppet6/el/8/x86_64 --cost=98
%packages
puppet-agent
%end
%post --erroronfail
nodectl init
echo "This is a test" > /etc/test.txt
%end
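My working assumption (not confirmed) is that the liveimg-based node install skips the package transaction entirely, which would explain why 'repo --install' and %packages are ignored. A rough %post-only workaround I'm considering, reusing the same mirror URL (whether dnf installs persist across node image upgrades is another open question):

%post --erroronfail
nodectl init
# assumption: the chroot has network access to the local mirror at this point
cat > /etc/yum.repos.d/puppet.repo <<'EOF'
[puppet]
name=Puppet
baseurl=http://10.13.5.13/mirror/linux/puppetlabs/puppet6/el/8/x86_64
gpgcheck=0
EOF
dnf install -y puppet-agent
echo "This is a test" > /etc/test.txt
%end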
--Mike
oVirt 4.4 install fails
by Me
Hi All
Not sure where to start, but here goes.
I'm not totally new to oVirt; I used RHEV 3.x in production for several
years, and it was a breeze to set up.
Installing 4.4 on to a host with local SSD and FC for storage.
Issue 1: having selected for install the SSD which has a failed 4.4 beta
install on it (several times over), I reclaim the space, and after a few
minutes of not being able to enter a root password on the next install
screen, it fails because it can't delete the data on the SSD! Yes, I really
tried this several times. Choosing the recovery option to get a prompt,
running fdisk /dev/sda to delete the two partitions created by oVirt, I can
then install. This was the case with the beta I tried a few weeks ago too.
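For what it's worth, the manual fdisk dance can probably be replaced by a single wipe from the rescue prompt (assuming /dev/sda really is the install SSD - double-check before running it), followed by restarting the installer:

wipefs -a /dev/sda   # clear all partition-table and filesystem signatures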
Having reconfigured the switch attached to the host as a dumb 10GbE
port, as the enterprise OS installer still doesn't appear to support
anything more advanced like teaming and VLANs, I have the initial
install on the single SSD and a network connection.
Issue 2: I use FF 72.0.2 on Linux x64 to connect via
https://hostname:9090 to the web interface, but I can't enter login
details as the input boxes (everything) are disabled. There is no warning
like "we don't like your choice of browser", but the screen is a not
very accessible dark grey on darker grey (a poor choice in what I
thought were more enlightened times), so this may be the case. I have
disabled all security add-ons in FF; it makes no difference.
Any suggestions?
M
basic infra and glusterfs sizing question
by Jiří Sléžka
Hello,
I am just curious whether the basic Gluster HCI layout that is suggested in
Cockpit has some deeper meaning.
There are suggested 3 volumes
* engine - it is clear, it is the volume where the engine VM is running.
If this VM is 51 GB, how small could this volume be? I have 1 TB of SSD
storage and I would like to utilize it as much as possible. Could I create
this volume as small as the VM itself? Is it safe, for example, for future
upgrades? (a rough split of the SSD is sketched after this list)
* vmstore - it makes sense that this is the space for all other VMs running in
oVirt. Right?
* data - what is this volume for? Other data, for example
ISOs? Direct disks?
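To make the sizing question concrete, this is roughly the split I have in mind for the 1 TB SSD - the VG/LV names are just placeholders, and the 100G engine brick is my guess based on the ~51 GB engine VM plus headroom:

# thin pool over most of the SSD, bricks carved out of it
lvcreate -L 900G --thinpool gluster_thinpool vg_gluster
lvcreate -V 100G --thin -n engine_brick  vg_gluster/gluster_thinpool
lvcreate -V 800G --thin -n vmstore_brick vg_gluster/gluster_thinpool
mkfs.xfs -i size=512 /dev/vg_gluster/engine_brick
mkfs.xfs -i size=512 /dev/vg_gluster/vmstore_brick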
Another infra question... or maybe request for comment
I have a small number of public IPv4 addresses at my housing facility (but I have
my own switches there, so I can create VLANs and separate internal traffic).
I can access only these public IPv4 addresses directly. I would like to
conserve these addresses as much as possible, so what is the best
approach in your opinion?
* Install all hosts and HE with the management network on private addresses
* have a small router (a hw appliance with, for example, LEDE) which will
use one IPv4 address and will do NAT and VPN for accessing my
internal VLANs.
+ looks like a simple approach to me
- single point of failure in this router (not really - just in case
oVirt is badly broken and I need to access the internal VLANs to recover it)
* have this router as a virtual appliance inside oVirt (something like
pfSense, for example)
+ no need for a hw router
+ not sure, but I could probably configure VRRP redundancy
- still a single point of failure, like in the first case
* any other approach? Could OVN help here somehow?
* Install all hosts and HE with public addresses :-)
+ access to all hosts directly
- a 3-node HCI cluster uses 4 public IP addresses
Thanks for your opinions
Cheers,
Jiri
Ovirt 4.4 HC gluster issues on new CentOS 8 node (cluster still in 4.3 compatibility level)
by jillian.morgan@primordial.ca
I've successfully migrated to a new 4.4 engine, now managing the older 4.3 (CentOS 7) nodes. So far so good there.
I installed a new CentOS 8 node w/ 4.4, joined it to the Gluster peer group, and it can see all of the volumes, but the node won't go into Online state in the engine because of apparent gluster-related VDSM errors:
Status of host butter was set to NonOperational.
Gluster command [<UNKNOWN>] failed on server <UNKNOWN>.
VDSM butter command ManageGlusterServiceVDS failed: The method does not exist or is not available: {'method': 'GlusterService.action'}
I haven't been able to find anything in the VDSM or Engine logs that gives me any hint as to what's going on (besides just repeating that the "GlusterService.action" method doesn't exist). Anybody know what I'm missing, or have hints on where to dig to debug further?
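One thing I still plan to check (assuming the gluster verbs in VDSM come from a separate package, which would explain a missing method rather than a failing one):

rpm -q vdsm-gluster                   # is the VDSM gluster plugin installed on the new node?
dnf install -y vdsm-gluster           # if not, install it
systemctl restart vdsmd supervdsmd    # restart VDSM so the gluster verbs get registered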
ETL service aggregation error
by Ayansh Rocks
Hi,
I am using a 4.3.7 self-hosted engine. For the past few days I have been regularly getting the error messages below:
[screenshot of the ETL service aggregation error not included]
Logs are in /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log:
[screenshot of the log entries not included]
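In case the screenshots don't come through, this is roughly how I'm pulling the errors out on the engine VM (assuming the default service name and log location):

systemctl status ovirt-engine-dwhd
tail -n 100 /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log
grep -i aggregation /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log | tail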
What could be the reason for this?
Thanks
Shashank
VM login info
by fawzi@kdsplumbing.com
Hi guys,
After a lot of attempts, I was finally able to open a VM (centos8-cld) using virt-viewer on my Mac (Catalina), but I am faced with a login screen. So, where do I get this login information from?
Or should I be using a different method to log into a VM?
This goes for all of the VMs available by default.
Can someone please help me with this?
thanks