Re: Using ovirt imageio
by Nir Soffer
On Tue, Jul 7, 2020 at 5:05 PM Łukasz Kołaciński <l.kolacinski(a)storware.eu>
wrote:
> Dear ovirt community,
>
Hi Łukasz,
Adding devel(a)ovirt.org since this topic is more appropriate for the devel
list.
> I am trying to use the ovirt imageio API to receive changed blocks (dirty
> bitmap) on ovirt 4.4. Could anyone tell me how to get them step by step? In
> the documentation I saw the endpoint "GET /images/ticket-uuid/map". I don't
> know what the ticket-uuid is and how to generate it. I also need to know how
> to use this API because I can't reach it via /ovirt-engine/api/
>
> I am asking about this endpoint:
>
> https://www.ovirt.org/documentation/incremental-backup-guide/incremental-...
>
This guide is outdated and should not be used now.
The most up-to-date information is here:
https://www.ovirt.org/develop/release-management/features/storage/increme...
However, the extents API is also outdated on the feature page. We are
working on updating it.
So here is an example. First you must start a backup with the
from_checkpoint_id argument:
backup = backups_service.add(
    types.Backup(
        disks=disks,
        from_checkpoint_id="checkpoint-id",
    )
)
"checkpoint-id" is the checkpoint created in the last backup.
This starts a backup in in incremental mode. Dirty extents are available
only
in this mode.
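Before starting the transfer, wait until the backup is ready. Here is a
minimal sketch, assuming the standard SDK backup service locator, the
BackupPhase enum, and that the Backup type exposes to_checkpoint_id (the
complete flow, including finalizing the backup, is in the backup_vm.py
example linked below):

import time

backup_service = backups_service.backup_service(backup.id)

# The image transfer can be started only after the backup reaches READY.
while backup.phase != types.BackupPhase.READY:
    time.sleep(1)
    backup = backup_service.get()

# The checkpoint created by this backup; pass it as from_checkpoint_id
# in the next incremental backup.
print("backup {} ready, to_checkpoint_id={}".format(
    backup.id, backup.to_checkpoint_id))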
Then you start a transfer for download, using the backup id:
transfer = imagetransfer.create_transfer(
    connection,
    disk,
    types.ImageTransferDirection.DOWNLOAD,
    backup=types.Backup(id=backup_uuid))
The transfer.transfer_url is the URL to download from, for example:
https://host:54322/images/53787351-3f72-44a1-8a26-1323524fac4a
Connect to host:54322 and send this request:
GET /images/53787351-3f72-44a1-8a26-1323524fac4a/extents?context=dirty
and parse the returned JSON list, containing objects like:
[
{"start": 0, "length": 65536, "dirty": true},
{"start": 65536, "length": 1048576, "dirty": false},
...
]
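For illustration only, here is a minimal sketch of sending that request with
the Python standard library (the "ca.pem" path is made up; the host, port and
ticket UUID come from transfer.transfer_url; this is not the official client):

import json
import ssl
from http import client
from urllib.parse import urlparse

def get_dirty_extents(transfer_url, cafile):
    # transfer_url looks like https://host:54322/images/<ticket-uuid>
    url = urlparse(transfer_url)
    context = ssl.create_default_context(cafile=cafile)
    con = client.HTTPSConnection(url.netloc, context=context)
    try:
        con.request("GET", url.path + "/extents?context=dirty")
        res = con.getresponse()
        if res.status != 200:
            raise RuntimeError(
                "Error {}: {}".format(res.status, res.read()))
        return json.loads(res.read().decode("utf-8"))
    finally:
        con.close()

# Only the dirty ranges need to be downloaded in an incremental backup.
for extent in get_dirty_extents(transfer.transfer_url, "ca.pem"):
    if extent["dirty"]:
        print("dirty start={} length={}".format(
            extent["start"], extent["length"]))

In practice you would use the imageio http backend or the ImageioClient
below, which also handle downloading the actual data.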
For example code using the imageio API, see the imageio http backend:
https://github.com/oVirt/ovirt-imageio/blob/d5aa0e1fe659f1bf1247516f83c71...
https://github.com/oVirt/ovirt-imageio/blob/d5aa0e1fe659f1bf1247516f83c71...
We are adding an ImageioClient API that makes it easier to consume the
data without writing any HTTP code:
https://gerrit.ovirt.org/c/110068
With this you can use:
with ImageioClient(transfer.transfer_url, cafile=args.cafile) as client:
    for extent in client.extent("dirty"):
        if extent.dirty:
            print("##dirty start={} length={}".format(
                extent.start, extent.length))
            client.write_to(sys.stdout.buffer, extent.start, extent.length)
            print()
This will stream the dirty extents to stdout. Not very useful as is, but it
illustrates how you can consume the data.
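As a slightly more useful variant (again just a sketch, using the same
extent() and write_to() calls shown above; the "incremental.raw" file name
is made up), you can write every dirty extent into a local file at its
original offset, ending up with a sparse file holding only the changed data:

with ImageioClient(transfer.transfer_url, cafile=args.cafile) as client:
    with open("incremental.raw", "wb") as f:
        for extent in client.extent("dirty"):
            if extent.dirty:
                # Seek to the extent offset so the data lands at the same
                # position as on the disk; untouched ranges remain holes.
                f.seek(extent.start)
                client.write_to(f, extent.start, extent.length)

Real backup code should also save the extent list, since the sparse file
alone does not record which ranges were dirty.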
Here is an example writing extents to a sparse stream format:
https://gerrit.ovirt.org/c/110069
For complete backup example code see:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/backup...
Note the new imagetransfer helper module:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/helper...
Nir
4 years, 5 months
Lots of problems with deploying the hosted-engine (ovirt 4.4 | CentOS 8.2.2004)
by jonas
Hi!
I have banged my head against deploying the oVirt 4.4 self-hosted engine
on CentOS 8.2 for the last couple of days.
First I was astonished that resources.ovirt.org has no IPv6
connectivity, which made my initial plan for a mostly IPv6-only
deployment impossible.
CentOS was installed from scratch using the ks.cfg Kickstart file below,
which also adds the ovirt 4.4 repo and installs cockpit-ovirt-dashboard
& ovirt-engine-appliance.
When deploying the hosted-engine from cockpit while logged in as a
non-root (although privileged) user, the "(3) Prepare VM" step instantly
fails with a nondescript error message and without generating any logs.
By using the browser dev tools it was determined that this was because
the ansible vars file could not be created as the non-root user did not
have write permissions in '/var/lib/ovirt-hosted-engine-setup/cockpit/'
. Shouldn't cockpit be capable of using sudo when appropriate, or at
least give a more descriptive error message?
After logging into cockpit as root, or when using the command line
ovirt-hosted-engine-setup tool, the deployment fails with "Failed to
download metadata for repo 'AppStream'".
This seems to be because a) the dnsmasq running on the host does not
forward DNS queries, even though the host itself can resolve DNS queries
just fine, and b) there also does not seem to be any functioning routing
setup to reach anything outside the host.
Regarding a) it is strange that dnsmasq is running with a config file
'/var/lib/libvirt/dnsmasq/default.conf' containing the 'no-resolv'
option. Could the operation of systemd-resolved be interfering with
dnsmasq (see ss -tulpen output)? I tried to manually stop
systemd-resolved, but got the same behaviour as before.
I hope someone can give me a hint on how to get past this problem,
as so far my ovirt experience has been a little bit sub-par. :D
Also when running ovirt-hosted-engine-cleanup, the extracted engine VMs
in /var/tmp/localvm* are not removed, leading to a "disk-memory-leak"
with subsequent runs.
Best regards
Jonas
--- ss -tulpen output post deploy-run ---
[root@nxtvirt ~]# ss -tulpen | grep ':53 '
udp UNCONN 0 0 127.0.0.53%lo:53
0.0.0.0:* users:(("systemd-resolve",pid=1379,fd=18)) uid:193
ino:32910 sk:6 <->
udp UNCONN 0 0 [fd00:1234:5678:900::1]:53
[::]:* users:(("dnsmasq",pid=13525,fd=15)) uid:979 ino:113580
sk:d v6only:1 <->
udp UNCONN 0 0 [fe80::5054:ff:fe94:f314]%virbr0:53
[::]:* users:(("dnsmasq",pid=13525,fd=12)) uid:979 ino:113575
sk:e v6only:1 <->
tcp LISTEN 0 32 [fd00:1234:5678:900::1]:53
[::]:* users:(("dnsmasq",pid=13525,fd=16)) uid:979 ino:113581
sk:20 v6only:1 <->
tcp LISTEN 0 32 [fe80::5054:ff:fe94:f314]%virbr0:53
[::]:* users:(("dnsmasq",pid=13525,fd=13)) uid:979 ino:113576
sk:21 v6only:1 <->
--- running dnsmasq processes on host ('nxtvirt') post deploy-run ---
dnsmasq 13525 0.0 0.0 71888 2344 ? S 12:31 0:00
/usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf
--leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
root 13526 0.0 0.0 71860 436 ? S 12:31 0:00
/usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf
--leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
--- var/lib/libvirt/dnsmasq/default.conf ---
##WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
##OVERWRITTEN AND LOST. Changes to this configuration should be made using:
## virsh net-edit default
## or other application using the libvirt API.
##
## dnsmasq conf file created by libvirt
strict-order
pid-file=/run/libvirt/network/default.pid
except-interface=lo
bind-dynamic
interface=virbr0
dhcp-option=3
no-resolv
ra-param=*,0,0
dhcp-range=fd00:1234:5678:900::10,fd00:1234:5678:900::ff,64
dhcp-lease-max=240
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile
addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts
enable-ra
--- cockpit wizard overview before the 'Prepare VM' step ---
VM
Engine FQDN:engine.*REDACTED*
MAC Address:00:16:3e:20:13:b3
Network Configuration:Static
VM IP Address:*REDACTED*:1099:babe::3/64
Gateway Address:*REDACTED*:1099::1
DNS Servers:*REDACTED*:1052::11
Root User SSH Access:yes
Number of Virtual CPUs:4
Memory Size (MiB):4096
Root User SSH Public Key:(None)
Add Lines to /etc/hosts:yes
Bridge Name:ovirtmgmt
Apply OpenSCAP profile:no
Engine
SMTP Server Name:localhost
SMTP Server Port Number:25
Sender E-Mail Address:root@localhost
Recipient E-Mail Addresses:root@localhost
--- ks.cfg ---
#version=RHEL8
ignoredisk --only-use=vda
autopart --type=lvm
# Partition clearing information
clearpart --drives=vda --all --initlabel
# Use graphical install
#graphical
text
# Use CDROM installation media
cdrom
# Keyboard layouts
keyboard --vckeymap=de --xlayouts='de','us'
# System language
lang en_US.UTF-8
# Network information
network --bootproto=static --device=enp1s0 --ip=192.168.199.250
--netmask=255.255.255.0 --gateway=192.168.199.10
--ipv6=*REDACTED*:1090:babe::250/64 --ipv6gateway=*REDACTED*:1090::1
--hostname=nxtvirt.*REDACTED* --nameserver=*REDACTED*:1052::11
--activate
network --hostname=nxtvirt.*REDACTED*
# Root password
rootpw --iscrypted $6$*REDACTED*
firewall --enabled --service=cockpit --service=ssh
# Run the Setup Agent on first boot
firstboot --enable
# Do not configure the X Window System
skipx
# System services
services --enabled="chronyd"
# System timezone
timezone Etc/UTC --isUtc --ntpservers=ntp.*REDACTED*,ntp2.*REDACTED*
user --name=nonrootuser --groups=wheel --password=$6$*REDACTED*
--iscrypted
# KVM Users/Groups
group --name=kvm --gid=36
user --name=vdsm --uid=36 --gid=36
%packages
@^server-product-environment
#@graphical-admin-tools
@headless-management
kexec-tools
cockpit
%end
%addon com_redhat_kdump --enable --reserve-mb='auto'
%end
%anaconda
pwpolicy root --minlen=6 --minquality=1 --notstrict --nochanges
--notempty
pwpolicy user --minlen=6 --minquality=1 --notstrict --nochanges
--emptyok
pwpolicy luks --minlen=6 --minquality=1 --notstrict --nochanges
--notempty
%end
%post --erroronfail --log=/root/ks-post.log
#!/bin/sh
dnf update -y
# NFS storage
mkdir -p /opt/ovirt/nfs-storage
chown -R 36:36 /opt/ovirt/nfs-storage
chmod 0755 /opt/ovirt/nfs-storage
echo "/opt/ovirt/nfs-storage localhost" > /etc/exports
echo "/opt/ovirt/nfs-storage engine.*REDACTED*" >> /etc/exports
dnf install -y nfs-utils
systemctl enable nfs-server.service
# Install ovirt packages
dnf install -y
https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
dnf install -y cockpit-ovirt-dashboard ovirt-engine-appliance
# Enable cockpit
systemctl enable cockpit.socket
%end
#reboot --eject --kexec
reboot --eject
--- Host (nxtvirt) ip -a post deploy-run ---
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel
state UP group default qlen 1000
link/ether 52:54:00:ad:79:1b brd ff:ff:ff:ff:ff:ff
inet 192.168.199.250/24 brd 192.168.199.255 scope global
noprefixroute enp1s0
valid_lft forever preferred_lft forever
inet6 *REDACTED*:1099:babe::250/64 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fead:791b/64 scope link noprefixroute
valid_lft forever preferred_lft forever
5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP group default qlen 1000
link/ether 52:54:00:94:f3:14 brd ff:ff:ff:ff:ff:ff
inet6 fd00:1234:5678:900::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe94:f314/64 scope link
valid_lft forever preferred_lft forever
6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master
virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:94:f3:14 brd ff:ff:ff:ff:ff:ff
7: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel
master virbr0 state UNKNOWN group default qlen 1000
link/ether fe:16:3e:68:d3:8a brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe68:d38a/64 scope link
valid_lft forever preferred_lft forever
--- iptables-save post deploy-run ---
# Generated by iptables-save v1.8.4 on Sun Jun 28 13:20:53 2020
*filter
:INPUT ACCEPT [4007:8578553]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [3920:7633249]
:LIBVIRT_INP - [0:0]
:LIBVIRT_OUT - [0:0]
:LIBVIRT_FWO - [0:0]
:LIBVIRT_FWI - [0:0]
:LIBVIRT_FWX - [0:0]
-A INPUT -j LIBVIRT_INP
-A FORWARD -j LIBVIRT_FWX
-A FORWARD -j LIBVIRT_FWI
-A FORWARD -j LIBVIRT_FWO
-A OUTPUT -j LIBVIRT_OUT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 68 -j ACCEPT
-A LIBVIRT_FWO -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWI -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWX -i virbr0 -o virbr0 -j ACCEPT
COMMIT
# Completed on Sun Jun 28 13:20:53 2020
# Generated by iptables-save v1.8.4 on Sun Jun 28 13:20:53 2020
*security
:INPUT ACCEPT [3959:8576054]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [3920:7633249]
COMMIT
# Completed on Sun Jun 28 13:20:53 2020
# Generated by iptables-save v1.8.4 on Sun Jun 28 13:20:53 2020
*raw
:PREROUTING ACCEPT [4299:8608260]
:OUTPUT ACCEPT [3920:7633249]
COMMIT
# Completed on Sun Jun 28 13:20:53 2020
# Generated by iptables-save v1.8.4 on Sun Jun 28 13:20:53 2020
*mangle
:PREROUTING ACCEPT [4299:8608260]
:INPUT ACCEPT [4007:8578553]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [3920:7633249]
:POSTROUTING ACCEPT [3923:7633408]
:LIBVIRT_PRT - [0:0]
-A POSTROUTING -j LIBVIRT_PRT
COMMIT
# Completed on Sun Jun 28 13:20:53 2020
# Generated by iptables-save v1.8.4 on Sun Jun 28 13:20:53 2020
*nat
:PREROUTING ACCEPT [337:32047]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [159:9351]
:OUTPUT ACCEPT [159:9351]
:LIBVIRT_PRT - [0:0]
-A POSTROUTING -j LIBVIRT_PRT
COMMIT
# Completed on Sun Jun 28 13:20:53 2020
4 years, 5 months
Error: missing groups or modules: javapackages-tools in EL 8.2
by Patrick Lomakin
I get an error after executing "dnf module enable -y javapackages-tools pki-deps postgresql:12 389-ds".
Error message:
"Error: Problems in request:
missing groups or modules: javapackages-tools"
Installed system - Red Hat Enterprise Linux 8.2
4 years, 5 months
ovirt terraform - multiple nics creation problem
by marek
I have a problem with creating a VM with multiple NICs.
Part of the terraform config:
resource "ovirt_vm" "vm" {
....
initialization {
authorized_ssh_key = "${var.vm_authorized_ssh_key}"
host_name = "${var.vm_hostname}"
timezone = "${var.vm_timezone}"
user_name = "${var.vm_user_name}"
custom_script = "${var.vm_custom_script}"
dns_search = "${var.vm_dns_search}"
dns_servers = "${var.vm_dns_servers}"
nic_configuration {
label = "eth0"
boot_proto = "${var.vm_nic_boot_proto}"
address = "${var.vm_nic_ip_address}"
gateway = "${var.vm_nic_gateway}"
netmask = "${var.vm_nic_netmask}"
on_boot = "${var.vm_nic_on_boot}"
}
nic_configuration {
label = "eth1"
boot_proto = "${var.vm_nic2_boot_proto}"
address = "${var.vm_nic2_ip_address}"
gateway = "${var.vm_nic2_gateway}"
netmask = "${var.vm_nic2_netmask}"
on_boot = "${var.vm_nic2_on_boot}"
}
}
}
resource "ovirt_vnic" "eth0" {
name = "eth0"
vm_id = "${ovirt_vm.vm.id}"
vnic_profile_id = "${data.ovirt_vnic_profiles.nic1.vnic_profiles.0.id}"
}
resource "ovirt_vnic" "eth1" {
name = "eth1"
vm_id = "${ovirt_vm.vm.id}"
vnic_profile_id = "${data.ovirt_vnic_profiles.nic2.vnic_profiles.0.id}"
}
How does terraform know which nic_configuration {} from ovirt_vm belongs to
which "ovirt_vnic" resource?
My problem is that the VM ends up pairing nic_configuration (eth0) with
resource "ovirt_vnic" "eth1" and vice versa.
Any hints?
https://github.com/oVirt/terraform-provider-ovirt/blob/master/ovirt/resou...
My experience with Go is not enough to understand how the "pairing" is done.
Marek
4 years, 5 months
Revert to 4.3
by jb
Hello everybody,
at the moment I run the oVirt engine in a VM on a different server (no hosted
engine) and I have two hosts. One is only a backup and is not running.
I would like to install a new oVirt engine 4.4 VM and use the backup
host. If this runs fine for a few months, I would migrate the
second host to 4.4.
The only problem is that I have to upgrade the data center compatibility
mode to 4.4, and that makes me a bit nervous.
Is it possible to downgrade the compatibility mode if something
unexpected happens?
I would like to keep the original 4.3 engine VM untouched, so that I can go
back if necessary.
Regards
Jonathan
4 years, 5 months
Not able to asign IB network to IB Bond
by Andrey Rusakov
Hi all,
We have been using oVirt since 4.2.
The config is 1Gb or 10Gb for the VM external network and 40Gb IB for storage and VM migration.
We are testing 4.4 and 4.4.0 at the moment.
Two problems:
- The major problem is that I can't assign the IB network to the IB bond, I get: "Error while executing action HostSetupNetworks: Unexpected exception"
VDSM.log
2020-07-11 15:33:39,147+0300 INFO (jsonrpc/1) [api.network] START setupNetworks(networks={'IB': {'netmask': '255.255.255.0', 'bonding': 'bond1', 'ipv6autoconf': False, 'bridged': 'false', 'ipaddr': '172.17.21.101', 'dhcpv6': False, 'mtu': 65520, 'switch': 'legacy'}}, bondings={}, options={'connectivityTimeout': 120, 'commitOnSuccess': True, 'connectivityCheck': 'true'}) from=::ffff:172.16.21.201,33614, flow_id=fe589281-3171-41b4-b7aa-e28a4bbebe55 (api:48)
2020-07-11 15:33:40,088+0300 INFO (jsonrpc/1) [api.network] FINISH setupNetworks error=MAC address cannot be specified in bond interface along with specified bond options from=::ffff:172.16.21.201,33614, flow_id=fe589281-3171-41b4-b7aa-e28a4bbebe55 (api:52)
2020-07-11 15:33:40,088+0300 ERROR (jsonrpc/1) [jsonrpc.JsonRpcServer] Internal server error (__init__:350)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request
res = method(**params)
File "/usr/lib/python3.6/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
result = fn(*methodArgs)
File "<decorator-gen-480>", line 2, in setupNetworks
File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in method
ret = func(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 1548, in setupNetworks
supervdsm.getProxy().setupNetworks(networks, bondings, options)
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
return callMethod()
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
**kwargs)
File "<string>", line 2, in setupNetworks
File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod
raise convert_to_error(kind, result)
libnmstate.error.NmstateValueError: MAC address cannot be specified in bond interface along with specified bond options
2nd problem
- The IB bond is not working after reboot - there is no bond1 interface at all (my workaround is to restart the network).
This is not very important, as VDSM will change the config after the bond is added to oVirt.
P.S.
I have 100% the same config with the same cards and the same bond config on 4.3 - everything is working fine.
4 years, 5 months
Metrics in Kibana not working
by Guillaume Pavese
Trying to use the provided visualizations/Dashboards for ovirt metrics in
Kibana.
Using ovirt 4.3.7, with ovirt-engine-metrics 1.3.7-1 and the latest
rsyslog-8.24.0-52 and collectd-5.10.0-2.
I am receiving data in the indexes and some fields are present. A few graphs
work, but most don't, and I am getting a lot of errors like these:
Error in visualisation [esaggs] > "field" is a required parameter
- On my hosts I can see the following collectd errors:
systemd[1]: Starting Collectd statistics daemon...
collectd[21629]: plugin_load: plugin "disk" successfully loaded.
...
collectd[21629]: plugin_load: plugin "network" successfully loaded.
collectd[21629]: Systemd detected, trying to signal readiness.
systemd[1]: Started Collectd statistics daemon.
collectd[21629]: virt plugin: reader virt-0 initialized
collectd[21629]: Initialization complete, entering read-loop.
collectd[21629]: write_syslog plugin: send failed with status -1
(Connection reset by peer)
collectd[21629]: write_syslog plugin: error with ws_send_message
- And the following rsyslog ones :
systemd[1]: Starting System Logging Service...
rsyslogd[210059]: [origin software="rsyslogd"
swVersion="8.24.0-52.el7_8.2" x-pid="210059" x-info="http://www.rsyslog.com"]
start
systemd[1]: Started System Logging Service.
rsyslogd[210059]: command 'SystemLogSocketName' is currently not permitted
- did you already set it via a RainerScript command (v6+ config)?
[v8.24.0-52.el7_8.2 try http://www.rsyslog.com/e/2222 ]
I think this is this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1732918
However it's CLOSED INSUFFICIENT_DATA, with the last message being "Closing with
insufficient data. Please reopen if you can provide needed info."
Is my understanding correct that only Red Hat employees can reopen closed
bugs? I have encountered quite a lot of situations where I'm facing an
issue covered by such a closed bug with a "please reopen if you can provide
info" message, but frustratingly am not able to do so...
Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group
4 years, 5 months
Is there a way to access VM data that is stored using iscsi block storage outside of the ovirt platform ?
by Kevin Doyle
We have had problems trying to get our broken hosted engine to work. The data store for all the VMs was on an iSCSI storage domain that was deactivated before the platform went down. I am now looking for a way to access this data and start the VMs outside of a running hosted engine.
Can I build a new standalone oVirt engine on a new bare-metal remote server, connect to the hosts, and start up the domain?
I would be interested in your thoughts.
The problem with rebuilding the hosted engine is that it cannot reuse the LUN dedicated to the hosted engine. I have tried to rebuild using NFS, but it complains about not seeing the master domain.
thanks
Kevin
4 years, 5 months