Remote DB: How do you set server_version?
by ~Stack~
Greetings,
I'm exploring hosting my engine and ovirt_engine_history DBs on my
dedicated PostgreSQL server.
It's a 9.5 install (9.5.12 to be exact) on a beefy box from the
postgresql.org yum repos that I'm using for other SQL needs too. I set up
the database just as the documentation says, and I'm doing a fresh run of
engine-setup.
During the install, right after I give it the details for the remote
database, I get this error:
[ ERROR ] Please set:
server_version = 9.5.9
in postgresql.conf on 'None'. Its location is usually
/var/lib/pgsql/data , or somewhere under /etc/postgresql* .
Huh?
Um. OK.
$ grep ^server_version postgresql.conf
server_version = 9.5.9
$ systemctl restart postgresql-9.5.service
LOG: syntax error in file "/var/lib/pgsql/9.5/data/postgresql.conf"
line 33, n...n ".9"
FATAL: configuration file "/var/lib/pgsql/9.5/data/postgresql.conf"
contains errors
Well that didn't work. Let's try something else.
$ grep ^server_version postgresql.conf
server_version = 9.5.9
$ systemctl restart postgresql-9.5.service
LOG: parameter "server_version" cannot be changed
FATAL: configuration file "/var/lib/pgsql/9.5/data/postgresql.conf"
contains errors
Whelp. That didn't work either. I can't seem to find anything in the
oVirt docs on setting this.
How am I supposed to do this?
Thanks!
~Stack~
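For reference, PostgreSQL treats server_version as a read-only preset
parameter, so it can never be overridden in postgresql.conf; that is exactly
what the 'parameter "server_version" cannot be changed' error is saying. A
quick way to confirm which version the remote server actually reports is a
plain SHOW query; the sketch below assumes psycopg2 and placeholder
connection details.

    # Minimal sketch: ask the remote PostgreSQL server which version it
    # reports. Host, database, user and password are placeholders.
    import psycopg2

    conn = psycopg2.connect(host="db.example.com", dbname="engine",
                            user="engine", password="secret")
    cur = conn.cursor()
    cur.execute("SHOW server_version;")
    print(cur.fetchone()[0])  # e.g. "9.5.12"
    cur.close()
    conn.close()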
Is the Mailing List DEAD?
by paul.lkw@gmail.com
It seems that since the upgrade of the mailing list I have not received any mail from the oVirt mailing list. I also tried to post and did not receive anything?
Re: Scheduling a Snapshot of a Gluster volume not working within Ovirt
by Mark Betham
Hi Sahina,
Many thanks for your response and apologies for my delay in getting back to you.
> How was the schedule created - is this using the Remote Data Sync Setup under Storage domain?
Ovirt is configured in ‘Gluster’ mode, no VM support. When snapshotting we are taking a snapshot of the full Gluster volume.
To configure the snapshot schedule I did the following;
Login to Ovirt WebUI
From left hand menu select ‘Storage’ and ‘Volumes'
I then selected the volume I wanted to snapshot by clicking on the link within the ‘Name’ column
From here I selected the ‘Snapshots’ tab
From the top menu options I selected the drop down ‘Snapshot’
From the drop down options I selected ‘New’
A new window appeared titled ‘Create/Schedule Snapshot’
I entered a snapshot prefix and description into the available fields and selected the ‘Schedule’ page
On the schedule page I selected ‘Minute’ from the ‘Recurrence’ drop down
Set ‘Interval’ to every ’30’ minutes
Changed timezone to ‘Europe/London=(GMT+00:00) London Standard Time’
Left value in ‘Start Schedule by’ at default value
Set schedule to ‘No End Date’
Click 'OK'
Interestingly I get the following message on the ‘Create/Schedule Snapshot’ page before clicking on OK;
Frequent creation of snapshots would overload the cluster
Gluster CLI based snapshot scheduling is enabled. It would be disabled once volume snapshots scheduled from UI.
What is interesting is that I have not enabled 'Gluster CLI based snapshot scheduling’.
After clicking OK I am returned to the Volume Snapshots tab.
From this point I get no snapshots created according to the schedule set.
At the time of clicking OK in the WebUI to enable the schedule I get the following in the engine log;
2018-05-14 09:24:11,068Z WARN [org.ovirt.engine.core.dal.job.ExecutionMessageDirector] (default task-128) [85d0b16f-2c0c-464f-bbf1-682c062a4871] The message key 'ScheduleGlusterVolumeSnapshot' is missing from 'bundles/ExecutionMessages'
2018-05-14 09:24:11,090Z INFO [org.ovirt.engine.core.bll.gluster.ScheduleGlusterVolumeSnapshotCommand] (default task-128) [85d0b16f-2c0c-464f-bbf1-682c062a4871] Before acquiring and wait lock 'EngineLock:{exclusiveLocks='[712da1df-4c11-405a-8fb6-f99aebc185c1=GLUSTER_SNAPSHOT]', sharedLocks=''}'
2018-05-14 09:24:11,090Z INFO [org.ovirt.engine.core.bll.gluster.ScheduleGlusterVolumeSnapshotCommand] (default task-128) [85d0b16f-2c0c-464f-bbf1-682c062a4871] Lock-wait acquired to object 'EngineLock:{exclusiveLocks='[712da1df-4c11-405a-8fb6-f99aebc185c1=GLUSTER_SNAPSHOT]', sharedLocks=''}'
2018-05-14 09:24:11,111Z INFO [org.ovirt.engine.core.bll.gluster.ScheduleGlusterVolumeSnapshotCommand] (default task-128) [85d0b16f-2c0c-464f-bbf1-682c062a4871] Running command: ScheduleGlusterVolumeSnapshotCommand internal: false. Entities affected : ID: 712da1df-4c11-405a-8fb6-f99aebc185c1 Type: GlusterVolumeAction group MANIPULATE_GLUSTER_VOLUME with role type ADMIN
2018-05-14 09:24:11,148Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-128) [85d0b16f-2c0c-464f-bbf1-682c062a4871] EVENT_ID: GLUSTER_VOLUME_SNAPSHOT_SCHEDULED(4,134), Snapshots scheduled on volume glustervol0 of cluster NOSS-LD5.
2018-05-14 09:24:11,156Z INFO [org.ovirt.engine.core.bll.gluster.ScheduleGlusterVolumeSnapshotCommand] (default task-128) [85d0b16f-2c0c-464f-bbf1-682c062a4871] Lock freed to object 'EngineLock:{exclusiveLocks='[712da1df-4c11-405a-8fb6-f99aebc185c1=GLUSTER_SNAPSHOT]', sharedLocks=''}'
> Could you please provide the engine.log from the time the schedule was setup and including the time the schedule was supposed to run?
The original log file is no longer present, so I removed the old schedule and created a new schedule, as per the instructions above, earlier today. I have therefore attached the engine log from today. The new schedule, which was set to run every 30 minutes, has not produced any snapshots after around 2 hours.
Please let me know if you require any further information.
Many thanks,
Mark Betham.
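One way to cross-check whether the schedule ever fires is to query the
engine's audit events through the Python SDK. The sketch below is only an
illustration: the engine URL, credentials and the 'snapshot' search string
are placeholders, and it assumes ovirt-engine-sdk-python is installed.

    # Sketch: list recent engine events that mention snapshots, to see
    # whether the scheduled Gluster volume snapshot jobs ever ran.
    # URL, credentials and the search string are placeholders.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        insecure=True,  # or ca_file='/path/to/ca.pem'
    )
    events_service = connection.system_service().events_service()
    for event in events_service.list(search='snapshot', max=50):
        print(event.time, event.code, event.description)
    connection.close()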
>
>
>
> On Thu, May 3, 2018 at 4:37 PM, Mark Betham <mark.betham(a)googlemail.com <mailto:mark.betham@googlemail.com>> wrote:
> Hi Ovirt community,
>
> I am hoping you will be able to help with a problem I am experiencing when trying to schedule a snapshot of my Gluster volumes using the Ovirt portal.
>
> Below is an overview of the environment;
>
> I have an Ovirt instance running which is managing our Gluster storage. We are running Ovirt version "4.2.2.6-1.el7.centos", Gluster version "glusterfs-3.13.2-2.el7" on a base OS of "CentOS Linux release 7.4.1708 (Core)", Kernel "3.10.0 - 693.21.1.el7.x86_64", VDSM version "vdsm-4.20.23-1.el7.centos". All of the versions of software are the latest release and have been fully patched where necessary.
>
> Ovirt has been installed and configured in "Gluster" mode only, no virtualisation. The Ovirt platform runs from one of the Gluster storage nodes.
>
> Gluster runs with 2 clusters, each located at a different physical site (UK and DE). Each of the storage clusters contain 3 storage nodes. Each storage cluster contains a single gluster volume. The Gluster volume is 3 * Replicated. The Gluster volume runs on top of a LVM thin vol which has been provisioned with a XFS filesystem. The system is running a Geo-rep between the 2 geo-diverse clusters.
>
> The host servers running at the primary site are of specification 1 * Intel(R) Xeon(R) CPU E3-1270 v5 @ 3.60GHz (8 core with HT), 64GB Ram, LSI MegaRAID SAS 9271 with bbu and cache, 8 * SAS 10K 2.5" 1.8TB enterprise drives configured in a RAID 10 array to give 6.52TB of useable space. The host servers running at the secondary site are of specification 1 * Intel(R) Xeon(R) CPU E3-1271 v3 @ 3.60GHz (8 core with HT), 32GB Ram, LSI MegaRAID SAS 9260 with bbu and cache, 8 * SAS 10K 2.5" 1.8TB enterprise drives configured in a RAID 10 array to give 6.52TB of useable space. The secondary site is for DR use only.
>
> When I first starting experiencing the issue and was unable to resolve it, I carried out a full rebuild from scratch across the two storage clusters. I had spent some time troubleshooting the issue but felt it worthwhile to ensure I had a clean platform, void of any potential issues which may be there due to some of the previous work carried out. The platform was rebuilt and data re-ingested. It is probably worth mentioning that this environment will become our new production platform, we will be migrating data and services to this new platform from our existing Gluster storage cluster. The date for the migration activity is getting closer so available time has become an issue and will not permit another full rebuild of the platform without impacting delivery date.
>
> After the rebuild with both storage clusters online, available and managed within the Ovirt platform I conducted some basic commissioning checks and I found no issues. The next step I took at this point was to setup the Geo-replication. This was brought online with no issues and data was seen to be synchronised without any problems. At this point the data re-ingestion was started and the new data was synchronised by the Geo-replication.
>
> The first step in bringing the snapshot schedule online was to validate that snapshots could be taken outside of the scheduler. Taking a manual snapshot via the OVirt portal worked without issue. Several were taken on both primary and secondary clusters. At this point a schedule was created on the primary site cluster via the Ovirt portal to create a snapshot of the storage at hourly intervals. The schedule was created successfully however no snapshots were ever created. Examining the logs did not show anything which I believed was a direct result of the faulty schedule but it is quite possible I missed something.
>
> How was the schedule created - is this using the Remote Data Sync Setup under Storage domain?
>
>
> I reviewed many online articles, bug reports and application manuals in relation to snapshotting. There were several loosely related support articles around snapshotting but none of the recommendations seemed to work. I did the same with manuals and again nothing that seemed to work. What I did find were several references to running snapshots along with geo-replication and that the geo-replication should be paused when creating. So I removed all existing references to any snapshot schedule, paused the Geo-repl and recreated the snapshot schedule. The schedule was never actioned and no snapshots were created. Removed Geo-repl entirely, remove all schedules and carried out a reboot of the entire platform. When the system was fully back online and no pending heal operations the schedule was re-added for the primary site only. No difference in the results and no snapshots were created from the schedule.
>
> I have now reached the point where I feel I require assistance and hence this email request.
>
> If you require any further data then please let me know and I will do my best to get it for you.
>
> Could you please provide the engine.log from the time the schedule was setup and including the time the schedule was supposed to run?
>
>
>
> Any help you can give would be greatly appreciated.
>
> Many thanks,
>
> Mark Betham
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org <mailto:Users@ovirt.org>
> http://lists.ovirt.org/mailman/listinfo/users <http://lists.ovirt.org/mailman/listinfo/users>
>
>
4.2.3 -- Snapshot in GUI Issue
by Zack Gould
Is there no way to restore a snapshot via the GUI on 4.2 anymore?
I can take a snapshot, but there's no restore option. Since the new GUI
design, it appears to be missing?
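Whether or not the button is exposed in the new UI, the restore action is
still available through the REST API and the Python SDK. A rough sketch
follows; it assumes ovirt-engine-sdk-python, that the VM is powered off
first, and that the VM name, snapshot description and credentials are
placeholders.

    # Rough sketch: restore a powered-off VM to a named snapshot.
    # VM name, snapshot description, URL and credentials are placeholders.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        insecure=True,
    )
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]
    snapshots_service = vms_service.vm_service(vm.id).snapshots_service()
    snapshot = next(s for s in snapshots_service.list()
                    if s.description == 'before-upgrade')
    snapshots_service.snapshot_service(snapshot.id).restore()
    connection.close()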
Re: strange issue: vm lost info on disk
by Benny Zlotnik
I see here a failed attempt:
2018-05-09 16:00:20,129-03 ERROR [org.ovirt.engine.core.dal.
dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-67)
[bd8eeb1d-f49a-4f91-a521-e0f31b4a7cbd] EVENT_ID:
USER_MOVED_DISK_FINISHED_FAILURE(2,011),
User admin@internal-authz have failed to move disk mail02-int_Disk1 to
domain 2penLA.
Then another:
2018-05-09 16:15:06,998-03 ERROR [org.ovirt.engine.core.dal.
dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-34)
[] EVENT_ID: USER_MOVED_DISK_FINISHED_FAILURE(2,011), User
admin@internal-authz have failed to move disk mail02-int_Disk1 to domain
2penLA.
Here I see a successful attempt:
2018-05-09 21:58:42,628-03 INFO [org.ovirt.engine.core.dal.
dbbroker.auditloghandling.AuditLogDirector] (default task-50)
[940b051c-8c63-4711-baf9-f3520bb2b825] EVENT_ID: USER_MOVED_DISK(2,008),
User admin@internal-authz moving disk mail02-int_Disk1 to domain 2penLA.
Then, in the last attempt, I see the move was successful but the live merge
failed:
2018-05-11 03:37:59,509-03 ERROR
[org.ovirt.engine.core.bll.MergeStatusCommand]
(EE-ManagedThreadFactory-commandCoordinator-Thread-2)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Failed to live merge, still in
volume chain: [5d9d2958-96bc-49fa-9100-2f33a3ba737f,
52532d05-970e-4643-9774-96c31796062c]
2018-05-11 03:38:01,495-03 INFO
[org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-51)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'LiveMigrateDisk' (id:
'115fc375-6018-4d59-b9f2-51ee05ca49f8') waiting on child command id:
'26bc52a4-4509-4577-b342-44a679bc628f' type:'RemoveSnapshot' to complete
2018-05-11 03:38:01,501-03 ERROR
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-51)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command id:
'4936d196-a891-4484-9cf5-fceaafbf3364 failed child command status for step
'MERGE_STATUS'
2018-05-11 03:38:01,501-03 INFO
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommandCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-51)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command
'RemoveSnapshotSingleDiskLive' id: '4936d196-a891-4484-9cf5-fceaafbf3364'
child commands '[8da5f261-7edd-4930-8d9d-d34f232d84b3,
1c320f4b-7296-43c4-a3e6-8a868e23fc35,
a0e9e70c-cd65-4dfb-bd00-076c4e99556a]' executions were completed, status
'FAILED'
2018-05-11 03:38:02,513-03 ERROR
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-2)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Merging of snapshot
'319e8bbb-9efe-4de4-a9a6-862e3deb891f' images
'52532d05-970e-4643-9774-96c31796062c'..'5d9d2958-96bc-49fa-9100-2f33a3ba737f'
failed. Images have been marked illegal and can no longer be previewed or
reverted to. Please retry Live Merge on the snapshot to complete the
operation.
2018-05-11 03:38:02,519-03 ERROR
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-2)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Ending command
'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotSingleDiskLiveCommand'
with failure.
2018-05-11 03:38:03,530-03 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-37)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Command 'RemoveSnapshot' id:
'26bc52a4-4509-4577-b342-44a679bc628f' child commands
'[4936d196-a891-4484-9cf5-fceaafbf3364]' executions were completed, status
'FAILED'
2018-05-11 03:38:04,548-03 ERROR
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-66)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Ending command
'org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand' with failure.
2018-05-11 03:38:04,557-03 INFO
[org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-66)
[d5b7fdf5-9c37-4c1f-8543-a7bc75c993a5] Lock freed to object
'EngineLock:{exclusiveLocks='[4808bb70-c9cc-4286-aa39-16b5798213ac=LIVE_STORAGE_MIGRATION]',
sharedLocks=''}'
I do not see the merge attempt in the vdsm.log, so please send vdsm logs
for node02.phy.eze.ampgn.com.ar from that time.
Also, did you use the auto-generated snapshot to start the vm?
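For reference, retrying the live merge amounts to deleting that snapshot
again once the VM and storage are healthy, and this can also be driven from
the Python SDK. The sketch below uses the snapshot ID from the log above,
while the VM name, URL and credentials are placeholders.

    # Sketch: re-issue the snapshot removal, which retries the live merge
    # while the VM is running. VM name, URL and credentials are
    # placeholders; the snapshot ID is the one from the engine.log above.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        insecure=True,
    )
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=mail02-int')[0]
    snapshots_service = vms_service.vm_service(vm.id).snapshots_service()
    snapshot_service = snapshots_service.snapshot_service(
        '319e8bbb-9efe-4de4-a9a6-862e3deb891f')
    snapshot_service.remove()
    connection.close()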
On Fri, May 11, 2018 at 6:11 PM, Juan Pablo <pablo.localhost(a)gmail.com>
wrote:
> after the xfs_repair, it says: sorry I could not find valid secondary
> superblock
>
> 2018-05-11 12:09 GMT-03:00 Juan Pablo <pablo.localhost(a)gmail.com>:
>
>> hi,
>> Alias:
>> mail02-int_Disk1
>> Description:
>> ID:
>> 65ec515e-0aae-4fe6-a561-387929c7fb4d
>> Alignment:
>> Unknown
>> Disk Profile:
>> Wipe After Delete:
>> No
>>
>> that one
>>
>> 2018-05-11 11:12 GMT-03:00 Benny Zlotnik <bzlotnik(a)redhat.com>:
>>
>>> I looked at the logs and I see some disks have moved successfully and
>>> some failed. Which disk is causing the problems?
>>>
>>> On Fri, May 11, 2018 at 5:02 PM, Juan Pablo <pablo.localhost(a)gmail.com>
>>> wrote:
>>>
>>>> Hi, just sent you via drive the files. attaching some extra info,
>>>> thanks thanks and thanks :
>>>>
>>>> from inside the migrated vm I had the following attached dmesg output
>>>> before rebooting
>>>>
>>>> regards and thanks again for the help,
>>>>
>>>> 2018-05-11 10:45 GMT-03:00 Benny Zlotnik <bzlotnik(a)redhat.com>:
>>>>
>>>>> Dropbox or google drive I guess. Also, can you attach engine.log?
>>>>>
>>>>> On Fri, May 11, 2018 at 4:43 PM, Juan Pablo <pablo.localhost(a)gmail.com
>>>>> > wrote:
>>>>>
>>>>>>
>>>>>> vdsm is too big for gmail ...any other way I can share it with you?
>>>>>>
>>>>>>
>>>>>> ---------- Forwarded message ----------
>>>>>> From: Juan Pablo <pablo.localhost(a)gmail.com>
>>>>>> Date: 2018-05-11 10:40 GMT-03:00
>>>>>> Subject: Re: [ovirt-users] strange issue: vm lost info on disk
>>>>>> To: Benny Zlotnik <bzlotnik(a)redhat.com>
>>>>>> Cc: users <Users(a)ovirt.org>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Benny, thanks for your reply! It was a live migration. Sorry, it was
>>>>>> from nfs to iscsi, not the other way around. I have rebooted the vm
>>>>>> for rescue and it does not detect any partitions with fdisk. I'm
>>>>>> running xfs_repair with -n and it found a corrupted primary
>>>>>> superblock; it's still running... (so... maybe there's still info on
>>>>>> the disk?)
>>>>>>
>>>>>> attaching logs, let me know if those are the ones.
>>>>>> thanks again!
>>>>>>
>>>>>> 2018-05-11 9:45 GMT-03:00 Benny Zlotnik <bzlotnik(a)redhat.com>:
>>>>>>
>>>>>>> Can you provide the logs? engine and vdsm.
>>>>>>> Did you perform a live migration (the VM is running) or cold?
>>>>>>>
>>>>>>> On Fri, May 11, 2018 at 2:49 PM, Juan Pablo <
>>>>>>> pablo.localhost(a)gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi! I'm struggling with an ongoing problem:
>>>>>>>> after migrating a vm's disk from an iscsi domain to an nfs one, with
>>>>>>>> ovirt reporting the migration was successful, I see there's no data
>>>>>>>> 'inside' the vm's disk. We never had these issues with ovirt, so I'm
>>>>>>>> puzzled about the root cause and whether there's a chance of
>>>>>>>> recovering the information.
>>>>>>>>
>>>>>>>> can you please help me out troubleshooting this one? I would really
>>>>>>>> appreciate it =)
>>>>>>>> running ovirt 4.2.1 here!
>>>>>>>>
>>>>>>>> thanks in advance,
>>>>>>>> JP
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Users mailing list -- users(a)ovirt.org
>>>>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
sdk api and follow question / bug
by Peter Hudec
Hi,
I'm using the API to get stats about the VMs using the FOLLOW syntax
described in
http://ovirt.github.io/ovirt-engine-api-model/4.2/#documents/003_common_concepts/follow
In my case I list all VMs with the following follow fields:
- 'statistics',
- 'nics.statistics',
- 'disk_attachments.disk.statistics'
On my env with about 50 VMs the query takes about 12s, but the parsing for
the monitoring system is easy.
The question is: is it better to split the code and use more queries, as
the note in the API docs recommends? I'm not sure it would be faster or
less CPU intensive on the engine side.
Also, disk_attachments.disk.statistics on VMs seems to be broken; I'm not
getting the data on a 4.2.2 installation. /using json syntax, not xml/
regards
Peter
--
*Peter Hudec*
Infraštruktúrny architekt
phudec(a)cnc.sk <mailto:phudec@cnc.sk>
*CNC, a.s.*
Borská 6, 841 04 Bratislava
Recepcia: +421 2 35 000 100
Mobil:+421 905 997 203
*www.cnc.sk* <http:///www.cnc.sk>
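For comparison, the same query through the Python SDK looks roughly like the
sketch below. It assumes a recent ovirt-engine-sdk-python build where list()
accepts a follow parameter; the engine URL and credentials are placeholders.

    # Sketch: fetch all VMs with their statistics, NIC statistics and disk
    # statistics in a single request via follow. Assumes the SDK exposes
    # follow= on list(); URL and credentials are placeholders.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        insecure=True,
    )
    vms_service = connection.system_service().vms_service()
    vms = vms_service.list(
        follow='statistics,nics.statistics,disk_attachments.disk.statistics')
    for vm in vms:
        for stat in vm.statistics:
            print(vm.name, stat.name, stat.values[0].datum)
    connection.close()

Splitting this into one request per VM mainly trades one large response for
many round trips, so it is not obviously cheaper on the engine side;
measuring both against the actual environment is probably the only way to
know.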
oVirt 4.2.3 on RHEL/CentOS 7.5
by Stefano Stagnaro
Hello,
since the oVirt 4.2.3 release notes still indicate availability for EL 7.4, I would like to know whether the release has been tested on EL 7.5 yet and whether it is safe to install VDSM on it (instead of oVirt Node).
Thank you.
--
Stefano Stagnaro
Prisma Telecom Testing S.r.l.
Via Petrocchi, 4
20127 Milano – Italy
Tel. 02 26113507 int 339
e-mail: stefanos(a)prismatelecomtesting.com
skype: stefano.stagnaro