oVirt: access ovirt-image-repository through Proxy
by tianxu
Hi expert,
The 'ovirt-image-repository' storage domain is created by default in my oVirt
4.2 environment, but unfortunately my ovirt-engine and ovirt-node
machines sit in a private network where they can only access external
services through an HTTP proxy.
Is there a way to tell ovirt-engine which HTTP proxy to use?
Thanks,
Xu
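(A minimal sketch of one possible approach, assuming the engine's HTTP client honors the standard JVM proxy properties; the file name, proxy host and port below are placeholders and this is not verified against 4.2. Files in /etc/ovirt-engine/engine.conf.d/ are read at service start, so a restart is needed afterwards.)

# /etc/ovirt-engine/engine.conf.d/99-proxy.conf   (hypothetical file name)
# Append standard JVM proxy properties to the engine's system properties.
ENGINE_PROPERTIES="${ENGINE_PROPERTIES} http.proxyHost=proxy.example.com http.proxyPort=3128 https.proxyHost=proxy.example.com https.proxyPort=3128"

# then restart the engine so the JVM picks up the new properties
systemctl restart ovirt-engine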
5 years, 11 months
ISO Lost Domain
by jc@mcsaipan.net
Hello Fellow users,
Platform: oVirt Engine 4.1
Problem: The ISO Domain server has crashed. It is a separate NFS server. I am unable to replace the ISO Domain. I have the old one in maintenance, but it won't detach. It says:
VDSM command DetachStorageDomainVDS failed: Storage domain does not exist: (u'0ad098e9-65e7-494a-90f7-e42949da3f85',)
Which is quite true; it does not exist.
Any suggestions?
Thank you
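(One approach that is sometimes used when the backing storage is gone for good is the Admin Portal's "Destroy" action on the storage domain, which removes it from the engine database without touching the storage. A rough REST sketch of the same idea, with placeholder URL and credentials; the 'destroy' flag is taken from the v4 API's storage-domain removal operation, so verify it against your engine's /api documentation before running anything:)

curl -k -u 'admin@internal:PASSWORD' -X DELETE \
  'https://engine.example.com/ovirt-engine/api/storagedomains/0ad098e9-65e7-494a-90f7-e42949da3f85?destroy=true'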
5 years, 11 months
ovirt engine VM (xfs on sda3) broken, how to fix image?
by Mike Lykov
I have a 4.2.7 hyperconverged setup with two deployed Engine VM images, and
there was a 20-30 second network outage. After trying to start the engine on
host 1, then host 2, then host 1 again, the Engine VM image got stuck at
"Probing EDD (edd=off to disable)... _"
as here: https://bugzilla.redhat.com/show_bug.cgi?id=1569827
I stopped ovirt-ha-agent and ovirt-ha-broker on both hosts (so they would not
try to start the Engine VM) and stopped the VM via
vdsm-client VM destroy vmID="4f169ca9-1854-4e3f-ad57-24445ec08c79"
on both hosts, but I still had a lock (lease file).
....
Oh, the lease disappeared while I was writing this message.... Now the
xfs_repair output is:
ERROR: The filesystem has valuable metadata changes in a log which needs
to be replayed.
guestmount:
commandrvf: udevadm --debug settle -E /dev/sda3
calling: settle
......
command: mount '-o' 'ro' '/dev/sda3' '/sysroot//'
[ 1.478858] SGI XFS with ACLs, security attributes, no debug enabled
[ 1.481701] XFS (sda3): Mounting V5 Filesystem
[ 1.514183] XFS (sda3): Starting recovery (logdev: internal)
[ 1.537299] XFS (sda3): Internal error XFS_WANT_CORRUPTED_GOTO at
line 1664 of file fs/xfs/libxfs/xfs_alloc.c. Caller
xfs_free_extent+0xaa/0x140 [xfs]
and "Structure needs cleaning" .....
Before that, the directory looked like this:
[root@ovirtnode6 aa6f3e9b-2eba-4fab-a8ee-a4a1aceddf5e]# ls -l
total 7480047
-rw-rw----. 1 vdsm kvm 83751862272 Dec 21 13:05 38ef3aac-6ecc-4940-9d2c-ffe4e2557482
-rw-rw----. 1 vdsm kvm 1048576 Dec 21 13:27 38ef3aac-6ecc-4940-9d2c-ffe4e2557482.lease
-rw-r--r--. 1 vdsm kvm 338 Nov 2 14:01 38ef3aac-6ecc-4940-9d2c-ffe4e2557482.meta
If I try to use guestfish:
LIBGUESTFS_BACKEND=direct guestfish --rw -a 38ef3aac-6ecc-4940-9d2c-ffe4e2557482
and 'run', it results in:
<fs> run
.....
qemu-kvm: -device scsi-hd,drive=hd0: Failed to get "write" lock
Is another process using the image?
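(That "Failed to get 'write' lock" usually just means another qemu process still has the image open, for example a leftover VM or guestmount instance. A quick way to check, sketched with the file name from above:)

ps aux | grep qemu | grep 38ef3aac-6ecc-4940-9d2c-ffe4e2557482
fuser -v 38ef3aac-6ecc-4940-9d2c-ffe4e2557482
lsof 2>/dev/null | grep 38ef3aac-6ecc-4940-9d2c-ffe4e2557482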
In 'vdsm-client Host getVMList' I do not see the Engine VM (I got its ID from
'vdsm-client Host getAllVmStats'); is that because it is stopped?
I want to remove the lease via vdsm-client, and for that I need a JSON file
with UUIDs, like:
usage: vdsm-client Lease info [-h] [arg=value [arg=value ...]]
positional arguments:
arg=value lease: The lease to query
JSON representation:
{
"lease": {
"sd_id": "UUID",
"lease_id": "UUID"
}
}
In all the docs I cannot find any explanation of sd_id and lease_id; where can
I get them?
See, for example:
https://www.ovirt.org/develop/developer-guide/vdsm/vdsm-client.html
Without them I get:
[root@ovirtnode6 aa6f3e9b-2eba-4fab-a8ee-a4a1aceddf5e]# vdsm-client
Lease info lease=38ef3aac-6ecc-4940-9d2c-ffe4e2557482
vdsm-client: Command Lease.info with args {'lease':
'38ef3aac-6ecc-4940-9d2c-ffe4e2557482'} failed:
(code=100, message='unicode' object has no attribute 'get')
[root@ovirtnode6 ~]# vdsm-client Lease status
lease=38ef3aac-6ecc-4940-9d2c-ffe4e2557482
vdsm-client: Command Lease.status with args {'lease':
'38ef3aac-6ecc-4940-9d2c-ffe4e2557482'} failed:
(code=100, message=)
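(For what it's worth, the 'lease' argument is a JSON object rather than a bare UUID: sd_id should be the UUID of the storage domain that holds the lease, and lease_id the UUID of the lease itself (for a VM lease this is typically the VM ID). A sketch of the syntax with placeholder UUIDs, not verified here:)

vdsm-client Lease info lease='{"sd_id": "SD-UUID", "lease_id": "LEASE-UUID"}'

# or put the parameters into a JSON file and pass it with -f
cat > lease.json << 'EOF'
{"lease": {"sd_id": "SD-UUID", "lease_id": "LEASE-UUID"}}
EOF
vdsm-client -f lease.json Lease info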
Please help me fix this Engine VM image.
--
Mike
5 years, 11 months
ISO Domain Problems
by JC Clark
Hello Fellow users,
Platform: oVirt Engine 4.1
Problem: The ISO Domain server has crashed. It is a separate NFS server. I am
unable to replace the ISO Domain. I have the old one in maintenance, but it
won't detach. It says:
VDSM command ActivateStorageDomainVDS failed: Storage domain does not
exist: (u'0ad098e9-65e7-494a-90f7-e42949da3f85',)
Which is quite true; it does not exist.
Any suggestions?
Thank you
5 years, 11 months
Upload via GUI to VMSTORE possible but not ISO Domain
by Ralf Schenk
Hello,
I can successfully upload disks to my Data Domain ("VMSTORE"), which is NFS. I
can also upload .iso files there. (No problems with SSL or imageio-proxy.)
Why is the ISO Domain not available for upload via the GUI?
Does a separate ISO Domain still make sense? The ISO Domain is up and running.
And is it possible to filter out the hosted_storage domain, where the engine
lives, from the upload targets?
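(For what it's worth, a classic ISO domain can still be fed from the command line. A rough sketch, assuming the ovirt-iso-uploader package is installed on the engine machine and the ISO domain is named ISO_DOMAIN; both names and the .iso path are placeholders:)

engine-iso-uploader list
engine-iso-uploader --iso-domain=ISO_DOMAIN upload /path/to/installer.iso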
--
Ralf Schenk
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail rs(a)databay.de
Databay AG
Jens-Otto-Krag-Straße 11
D-52146 Würselen
www.databay.de
Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen
------------------------------------------------------------------------
5 years, 11 months
Issue with yum update
by Alex Bartonek
It could be my repo setup, but I'm not sure. CentOS is the distro.
Running yum update, I get the following error and nothing updates:
Transaction check error:
file /usr/lib64/collectd/write_http.so conflicts between attempted installs of collectd-write_http-5.8.1-1.el7.x86_64 and collectd-5.8.1-1.el7.x86_64
file /usr/lib64/collectd/disk.so conflicts between attempted installs of collectd-disk-5.8.1-1.el7.x86_64 and collectd-5.8.1-1.el7.x86_64
Anything I can check out to resolve this?
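(A rough way to narrow it down, using the package names from the error above; the idea is to find out which repos ship the clashing collectd builds and exclude one side until the packaging conflict is resolved:)

yum clean all
yum --showduplicates list collectd collectd-write_http collectd-disk
# one-off workaround while the conflict exists:
yum update --exclude="collectd*"
# or add exclude=collectd* to the offending repo file in /etc/yum.repos.d/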
Thanks
Alex
Sent with [ProtonMail](https://protonmail.com) Secure Email.
5 years, 11 months
planned datacenter maintenance 21.12.2018 20:30 UTC
by Evgheni Dereveanchin
Hi everyone,
The network switch stack used by the oVirt PHX datacenter needs a reboot
which is scheduled for tomorrow. It is expected to be a fast task yet it
may cut all networking including shared storage access for all of our
hypervisors for a couple of minutes.
For this reason I'll shut down non-critical services beforehand and pause
CI to minimize I/O activity and protect against potential VM disk
corruption.
Maintenance window: 21.12.2018 20:30 UTC - 21:30 UTC
Services that may be unreachable for short periods of time during this
outage are:
* Package repositories
* Glance image repository
* Jenkins CI
Other services such as the website, gerrit and mailing lists are not
affected and will be operating as usual.
--
Regards,
Evgheni Dereveanchin
5 years, 11 months
Active Storage Domains as Problematic
by Stefan Wolf
Hello,
I've set up a test lab with 3 nodes installed with CentOS 7.
I configured GlusterFS manually. GlusterFS is up and running:
[root@kvm380 ~]# gluster peer status
Number of Peers: 2
Hostname: kvm320.durchhalten.intern
Uuid: dac066db-55f7-4770-900d-4830c740ffbf
State: Peer in Cluster (Connected)
Hostname: kvm360.durchhalten.intern
Uuid: 4291be40-f77f-4f41-98f6-dc48fd993842
State: Peer in Cluster (Connected)
[root@kvm380 ~]# gluster volume info
Volume Name: data
Type: Replicate
Volume ID: 3586de82-e504-4c62-972b-448abead13d3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: kvm380.durchhalten.intern:/gluster/data
Brick2: kvm360.durchhalten.intern:/gluster/data
Brick3: kvm320.durchhalten.intern:/gluster/data
Options Reconfigured:
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
performance.low-prio-threads: 32
performance.strict-o-direct: on
network.ping-timeout: 30
user.cifs: off
network.remote-dio: off
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Volume Name: engine
Type: Replicate
Volume ID: dcfbd322-5dd0-4bfe-a775-99ecc79e1416
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: kvm380.durchhalten.intern:/gluster/engine
Brick2: kvm360.durchhalten.intern:/gluster/engine
Brick3: kvm320.durchhalten.intern:/gluster/engine
Options Reconfigured:
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
performance.low-prio-threads: 32
performance.strict-o-direct: on
network.remote-dio: off
network.ping-timeout: 30
user.cifs: off
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
After that I deployed a self-hosted engine and added the two other hosts. At
the beginning it looked good, but without changing anything I got the
following errors on two hosts:
!  20.12.2018 11:35:05  Failed to connect Host kvm320.durchhalten.intern to Storage Pool Default
!  20.12.2018 11:35:05  Host kvm320.durchhalten.intern cannot access the Storage Domain(s) hosted_storage attached to the Data Center Default. Setting Host state to Non-Operational.
X  20.12.2018 11:35:05  Host kvm320.durchhalten.intern reports about one of the Active Storage Domains as Problematic.
!  20.12.2018 11:35:05  Kdump integration is enabled for host kvm320.durchhalten.intern, but kdump is not configured properly on host.
!  20.12.2018 11:35:04  Failed to connect Host kvm360.durchhalten.intern to Storage Pool Default
!  20.12.2018 11:35:04  Host kvm360.durchhalten.intern cannot access the Storage Domain(s) hosted_storage attached to the Data Center Default. Setting Host state to Non-Operational.
X  20.12.2018 11:35:04  Host kvm360.durchhalten.intern reports about one of the Active Storage Domains as Problematic.
Before GlusterFS I had a setup with NFS on a fourth server.
Where is the problem?
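(A few checks that might help localize this, sketched with the host and volume names from the output above; not a definitive fix:)

# on one of the non-operational hosts
gluster volume status engine
gluster volume heal engine info

# can the host mount the volume the same way vdsm would?
mkdir -p /mnt/gluster-test
mount -t glusterfs kvm380.durchhalten.intern:/engine /mnt/gluster-test
ls /mnt/gluster-test && umount /mnt/gluster-test

# vdsm's own view of the failed connection attempts
grep -iE 'connectStorageServer|Could not connect' /var/log/vdsm/vdsm.log | tail -n 20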
thx
5 years, 11 months