Re: Recreating ISO storage domain
by Strahil Nikolov
Actually ISO domain is not necessary.
You can mount it via NFS/FUSE on a system and then use the Python upload script (it has been mentioned several times on this mailing list) or the API/UI to upload your ISOs to a data storage domain.
I think it is about time to get rid of the deprecated ISO domain.
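A rough sketch of that workflow follows. Everything in it is a placeholder or assumption: the mount point, the data domain name "my_data_domain", and the upload_disk.py invocation (that is the SDK example script the list has mentioned; its exact flags vary by SDK version, so check its --help before running anything).

```shell
# Sketch only: $ISO_MNT stands in for the old ISO domain mounted via NFS/FUSE.
# A scratch directory with a placeholder file keeps the example self-contained.
ISO_MNT="${ISO_MNT:-$(mktemp -d)}"
mkdir -p "$ISO_MNT/images"
touch "$ISO_MNT/images/CentOS-8.4.iso"    # placeholder ISO
# Collect the ISOs, then print one upload command per file. The upload_disk.py
# flags below are illustrative only, not a verified invocation.
find "$ISO_MNT" -type f -name '*.iso' > /tmp/iso_list.txt
while read -r iso; do
    echo "python3 upload_disk.py --sd-name my_data_domain \"$iso\""
done < /tmp/iso_list.txt | tee /tmp/upload_plan.txt
```

The dry-run print makes it easy to review the list before actually uploading anything.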
Best Regards,
Strahil Nikolov
On Saturday, September 26, 2020 at 21:44:28 GMT+3, Matthew.Stier(a)fujitsu.com <matthew.stier(a)fujitsu.com> wrote:
I have created a three-host oVirt cluster using 4.4.2.
I created an ISO storage domain to hold my collection of ISO images, and then decided to migrate it to a better location.
I placed the storage domain in maintenance mode, and then removed it.
When I went to recreate it at the new location, I found that the ‘ISO’ storage domain type was no longer an option.
What do I need to do to re-enable it, so I can re-create the storage domain?
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/4YVAUMRFWO7...
Re: Random hosts disconnects
by Artur Socha
On Fri, Sep 18, 2020 at 1:54 PM Anton Louw <Anton.Louw(a)voxtelecom.co.za>
wrote:
>
>
> Hi Artur,
>
>
>
> Thanks for the reply. I have attached the system logs. There was a
> disconnect at 10:54, but no error that is different to the rest. I do see a
> whole lot of QEMU Guest Agent and block_io errors in the system logs. Not
> entirely sure what this means.
>
After a very quick search on the internet, the first one does not seem to be
severe at all - the guest agent only provides the host with some information
from inside the guest.
Sep 18 10:50:41 node05.kvm.voxvm.co.za libvirtd[23603]: 2020-09-18
08:50:41.493+0000: 23729: error : qemuDomainAgentAvailable:9133 : Guest agent
is not responding: QEMU guest agent is not connected
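As a quick host-side check for that first error, one can poke the agent channel with virsh. This is a sketch: "myvm" is a placeholder domain name, and a failure here only means the agent is down or unreachable.

```shell
# Hedged sketch: probe the guest agent channel for domain "myvm" (placeholder).
# Safe to run; failure just means virsh is absent or the agent is not running.
if virsh qemu-agent-command myvm '{"execute":"guest-ping"}' >/dev/null 2>&1; then
    AGENT_STATE="responding"
else
    AGENT_STATE="not connected (install/start qemu-guest-agent inside the guest)"
fi
echo "guest agent: $AGENT_STATE"
```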
The second one is unfamiliar to me:
Sep 18 10:50:52 node05.kvm.voxvm.co.za libvirtd[23603]: 2020-09-18
08:50:52.802+0000: 23729: error : qemuMonitorJSONBlockIoThrottleInfo:5005 :
internal error: block_io_throttle inserted entry was not in expected format
Perhaps someone with more libvirt/qemu background will comment on that.
>
> Checking the vdsm logs at the time or the error, the only entry is the
> below:
>
>
>
> “2020-09-18 10:55:57,081+0000 WARN (qgapoller/2)
> [virt.periodic.VmDispatcher] could not run <function <lambda> at
> 0x7f2170395578> on ['d3838612-70bb-4731-a0d4-8f65d31b40a6',
> '59a2f394-48fe-4bd9-91d6-08115f2eec0a',
> 'f81e3ab8-c1a9-4674-b238-7e229fd43e7c',
> '42189fa1-4381-02c7-d830-20eac408da2c',
> '423f1c57-f98e-707f-c0f9-d4958d3f0fec',
> '64d1eabc-20ff-4288-98ff-dcfd120fe7d2',
> '4218baf0-e2a1-42c7-2efd-077407f47b4d',
> '42184650-5a60-5403-d758-840bdbf92dd8',
> '492ea3fe-0a27-4dde-abf9-7d146ee1b988',
> '4218df00-15cd-bdf9-efd9-c5ead49fd89c',
> '9c373379-718b-4906-abc1-960fb1820c2d',
> 'b9441c7a-0bfd-4d41-a8de-ee24e4259b36',
> 'd810325a-1a45-4054-a870-c8c052a22354',
> '42189d3f-4570-45ea-6e5a-94c85a5885a1'] (periodic:289)”
>
>
This WARN does not seem to be the cause ... it may be the result, because the
VM failed to be dispatched (perhaps due to a lack of suitable hosts, which got
disconnected at that moment).
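To see which VMs that WARN refers to, the UUID list can be pulled out of the log line and compared with the host's VM list. A sketch, using an abridged copy of the line above as sample input (the log path and follow-up command are suggestions, not part of the original report):

```shell
# Pull the VM UUIDs out of a vdsm qgapoller WARN line; /tmp/sample_vdsm.log is
# a stand-in for /var/log/vdsm/vdsm.log. The extracted IDs can then be compared
# against the host's VM list (e.g. "vdsm-client Host getVMList").
cat > /tmp/sample_vdsm.log <<'EOF'
2020-09-18 10:55:57,081+0000 WARN (qgapoller/2) [virt.periodic.VmDispatcher] could not run <function <lambda> at 0x7f2170395578> on ['d3838612-70bb-4731-a0d4-8f65d31b40a6', '59a2f394-48fe-4bd9-91d6-08115f2eec0a'] (periodic:289)
EOF
grep -oE '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}' \
    /tmp/sample_vdsm.log | sort -u > /tmp/vm_uuids.txt
cat /tmp/vm_uuids.txt
```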
>
> I am stumped. Do you think it is worth a shot increasing the vdsConnectionTimeout
> and vdsHeartbeatInSeconds to 40 for testing purposes?
>
I still don't think it will change anything unless your network between
those 2 DCs is a 'TCP over pigeons' kind of setup :)
Now, more seriously: even if increasing the timeouts fixed the connectivity,
I suspect the core issue would still remain ... in the best-case scenario
it would just be postponed a bit.
Am I correct in assuming those 2 DCs are located in 2 different physical
locations? If so, I would closely check the network itself first
(including hardware like routers/switches).
Artur
>
>
> Thanks
>
>
>
> *Anton Louw*
> *Cloud Engineer: Storage and Virtualization* at *Vox*
> ------------------------------
> *T:* 087 805 0000 | *D:* 087 805 1572
> *M:* N/A
> *E:* anton.louw(a)voxtelecom.co.za
> *A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> www.vox.co.za
>
>
> *From:* Artur Socha <asocha(a)redhat.com>
> *Sent:* 18 September 2020 13:27
> *To:* Anton Louw <Anton.Louw(a)voxtelecom.co.za>
> *Cc:* users(a)ovirt.org
> *Subject:* Re: [ovirt-users] Re: Random hosts disconnects
>
>
>
> Hi Anton,
>
> I am not sure if changing this value would fix the issue. Defaults are
> pretty high. For example vdsHeartbeatInSeconds=30seconds,
> vdsTimeout=180seconds, vdsConnectionTimeout=20seconds.
>
>
>
> Do you still have relevant logs from the affected hosts:
>
> /var/log/vdsm/vdsm.log
>
> /var/log/vdsm/supervdsm.log
>
> Please look for any jsonrpc errors, i.e. write/read errors or (connection)
> timeouts. Storage related warnings/errors might also be relevant.
>
>
>
> Plus system logs if possible:
>
> journalctl -f /usr/share/vdsm/vdsmd
>
> journalctl -f /usr/sbin/libvirtd
>
>
>
> In order to get system logs for a particular time period, combine the above
> with the -S and -U options, for example:
>
> journalctl -S "2020-01-12 07:00:00" -U "2020-01-12 07:15:00"
>
> I don't have a clue what to look for there besides any warnings/errors or
> anything else that seems ... unusual.
>
>
>
> Artur
>
>
>
>
>
> On Thu, Sep 17, 2020 at 8:09 AM Anton Louw via Users <users(a)ovirt.org>
> wrote:
>
>
>
> Hi Everybody,
>
>
>
> Did some digging around, and saw a few things regarding “vdsHeartbeatInSeconds”
>
> I had a look at the properties file located at /etc/ovirt-engine/engine-config/engine-config.properties, and do not see an entry for “vdsHeartbeatInSeconds.type=Integer”.
>
> Seeing as these data centers are geographically split, could the “vdsHeartbeatInSeconds” potentially be the issue? Is it safe to increase this value after I add “vdsHeartbeatInSeconds.type=Integer” into my engine-config.properties file?
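For reference, a sketch of that change, under the assumption that vdsHeartbeatInSeconds really is picked up once its .type entry exists. The engine-config calls are left commented out; they would be run on the engine host, followed by a restart of ovirt-engine. The $PROPS path here is a scratch stand-in, not the real file.

```shell
# Sketch: $PROPS stands in for /etc/ovirt-engine/engine-config/engine-config.properties.
PROPS="${PROPS:-/tmp/engine-config.properties}"
touch "$PROPS"
# Add the type declaration only if it is not already present.
grep -q '^vdsHeartbeatInSeconds\.type=' "$PROPS" \
    || echo 'vdsHeartbeatInSeconds.type=Integer' >> "$PROPS"
# engine-config -g vdsHeartbeatInSeconds     # show the current value
# engine-config -s vdsHeartbeatInSeconds=40  # set it, then: systemctl restart ovirt-engine
cat "$PROPS"
```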
>
>
>
> Thanks
>
>
>
>
>
> *Anton Louw*
>
> *Cloud Engineer: Storage and Virtualization* at *Vox*
> ------------------------------
>
> *T:* 087 805 0000 | *D:* 087 805 1572
> *M:* N/A
> *E:* anton.louw(a)voxtelecom.co.za
> *A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> www.vox.co.za
>
>
>
>
>
>
>
>
> *From:* Anton Louw via Users <users(a)ovirt.org>
> *Sent:* 16 September 2020 09:01
> *To:* users(a)ovirt.org
> *Subject:* [ovirt-users] Random hosts disconnects
>
>
>
>
>
> Hi All,
>
>
>
> I have a strange issue in my oVirt environment. I currently have a
> standalone manager which is running in VMware. In my oVirt environment, I
> have two Data Centers. The manager is currently sitting on the same subnet
> as DC1. Randomly, hosts in DC2 will say “Not Responding” and then 2 seconds
> later, the hosts will activate again.
>
>
>
> The strange thing is, when the manager was sitting on the same subnet as
> DC2, hosts in DC1 will randomly say “Not Responding”
>
>
>
> I have tried going through the logs, but I cannot see anything out of the
> ordinary regarding why the hosts would drop connection. I have attached the
> engine.log for anybody that would like to do a spot check.
>
>
>
> Thanks
>
>
>
> *Anton Louw*
>
> *Cloud Engineer: Storage and Virtualization* at *Vox*
> ------------------------------
>
> *T:* 087 805 0000 | *D:* 087 805 1572
> *M:* N/A
> *E:* anton.louw(a)voxtelecom.co.za
> *A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> www.vox.co.za
>
>
>
>
>
>
>
>
>
>
>
> --
>
> Artur Socha
> Senior Software Engineer, RHV
> Red Hat
>
>
--
Artur Socha
Senior Software Engineer, RHV
Red Hat
Re: Populating ISO storage domain
by Matthew.Stier@fujitsu.com
It is being mounted under /rhev/data-center/mnt/ on the hosts, but is not being mounted on the self-hosted engine.
From: Matthew.Stier(a)fujitsu.com <Matthew.Stier(a)fujitsu.com>
Sent: Saturday, September 26, 2020 4:17 PM
To: Edward Berger <edwberger(a)gmail.com>
Cc: users(a)ovirt.org
Subject: [ovirt-users] Re: Populating ISO storage domain
I set up the folder and the files as 36:36, with a mode of 775 on the directory and 644 on the files.
Hours later, still not being read.
From: Edward Berger <edwberger(a)gmail.com>
Sent: Saturday, September 26, 2020 2:01 PM
To: Stier, Matthew <Matthew.Stier(a)fujitsu.com>
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] Populating ISO storage domain
If its in an NFS folder, make sure the ownership is vdsm:kvm (36:36)
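A quick sketch of checking and fixing that on the export. The directory here is a scratch stand-in for the real NFS export path, and the chown to 36:36 needs root, so it is allowed to fail in the sketch:

```shell
# $EXPORT_DIR is a scratch stand-in for the NFS export backing the ISO domain.
EXPORT_DIR="${EXPORT_DIR:-$(mktemp -d)}"
touch "$EXPORT_DIR/example.iso"
# vdsm runs as uid 36 (vdsm) / gid 36 (kvm); chown requires root.
chown -R 36:36 "$EXPORT_DIR" 2>/dev/null || echo "need root to chown; skipping"
chmod 755 "$EXPORT_DIR"
find "$EXPORT_DIR" -type f -name '*.iso' -exec chmod 644 {} +
stat -c '%a %n' "$EXPORT_DIR" "$EXPORT_DIR/example.iso"
```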
On Sat, Sep 26, 2020 at 2:57 PM Matthew.Stier(a)fujitsu.com <Matthew.Stier(a)fujitsu.com> wrote:
I’ve created an ISO storage domain, and ISOs placed in the export path do not show up under Storage > Storage Domains > iso > Images, nor as available images when creating a new VM.
I haven’t located a method to get them noticed. There is a greyed-out ‘scan disk’ option.
What is the proper method to install ISO images?
Populating ISO storage domain
by Matthew.Stier@fujitsu.com
I've created an ISO storage domain, and ISOs placed in the export path do not show up under Storage > Storage Domains > iso > Images, nor as available images when creating a new VM.
I haven't located a method to get them noticed. There is a greyed-out 'scan disk' option.
What is the proper method to install ISO images?
Re: Recreating ISO storage domain
by Matthew.Stier@fujitsu.com
Got this fixed. Ignore.
From: Matthew.Stier(a)fujitsu.com <Matthew.Stier(a)fujitsu.com>
Sent: Saturday, September 26, 2020 1:39 PM
To: users(a)ovirt.org
Subject: [ovirt-users] Recreating ISO storage domain
I have created a three-host oVirt cluster using 4.4.2.
I created an ISO storage domain to hold my collection of ISO images, and then decided to migrate it to a better location.
I placed the storage domain in maintenance mode, and then removed it.
When I went to recreate it at the new location, I found that the 'ISO' storage domain type was no longer an option.
What do I need to do to re-enable it, so I can re-create the storage domain?
Recreating ISO storage domain
by Matthew.Stier@fujitsu.com
I have created a three-host oVirt cluster using 4.4.2.
I created an ISO storage domain to hold my collection of ISO images, and then decided to migrate it to a better location.
I placed the storage domain in maintenance mode, and then removed it.
When I went to recreate it at the new location, I found that the 'ISO' storage domain type was no longer an option.
What do I need to do to re-enable it, so I can re-create the storage domain?
Node upgrade to 4.4
by Vincent Royer
I have 3 nodes running node ng 4.3.9 with a gluster/hci cluster. How do I
upgrade to 4.4? Is there a guide?
oVirt - Engine - VM Reconstitute
by Jeremey Wise
As expected... this is a learning curve. On my three-node cluster, in an
attempt to learn how to do admin work on it and debug it, I have now
redeployed the engine and even added a second one on a node in the cluster.
But.....
I now realize that my "production vms" are gone.
In the past, on a manual build with KVM + Gluster .. when I repaired a
damaged cluster I would just then browse to the xml file and import.
I think with oVirt, those days are gone, as the Postgres-backed engine holds
the links to the disks / thin-provisioned volumes / networks / VM definition files.
######
Question:
1) Can someone point me to the manual on how to re-constitute a VM and
bring it back into oVirt when all oVirt engines were redeployed? It is
only three or four VMs I typically care about (HA cluster and OCP ignition /
Ansible Tower VMs).
2) How do I make sure these core VMs are able to be reconstituted? Can I
create a dedicated volume where the VMs are fully provisioned, and the path
structure is "human understandable"?
3) I know that you can back up the engine. If I had been a smart person,
how does one back up and recover from this kind of situation? Does anyone
have any guides or good articles on this?
Thanks,
--
penguinpages <jeremey.wise(a)gmail.com>
Re: Node 4.4.1 gluster bricks
by Strahil Nikolov
Since oVirt 4.4, the stage that deploys the oVirt node/host adds an LVM filter in /etc/lvm/lvm.conf, which is the reason behind that.
Best Regards,
Strahil Nikolov
On Friday, September 25, 2020 at 20:52:13 GMT+3, Staniforth, Paul <p.staniforth(a)leedsbeckett.ac.uk> wrote:
Thanks,
the gluster volume is just a test, and the main reason was to test the upgrade of a node with gluster bricks.
I don't know why LVM doesn't work, which is what oVirt is using.
Regards,
Paul S.
________________________________
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: 25 September 2020 18:28
To: Users <users(a)ovirt.org>; Staniforth, Paul <P.Staniforth(a)leedsbeckett.ac.uk>
Subject: Re: [ovirt-users] Node 4.4.1 gluster bricks
>1 node I wiped clean, and on the other I left the 3 gluster brick drives untouched.
If the last node from the original is untouched you can:
1. Go to the old host and use 'gluster volume remove-brick <VOL> replica 1 wiped_host:/path/to-brick untouched_bricks_host:/path/to-brick force'
2. Remove the 2 nodes that you have kicked away:
gluster peer detach node2
gluster peer detach node3
3. Reinstall the wiped node and install gluster there
4. Create the filesystem on the brick:
mkfs.xfs -i size=512 /dev/mapper/brick_block_device
5. Mount the Gluster (you can copy the fstab entry from the working node and adapt it)
Here is an example:
/dev/data/data1 /gluster_bricks/data1 xfs inode64,noatime,nodiratime,inode64,nouuid,context="system_u:object_r:glusterd_brick_t:s0" 0 0
6. Create the selinux label via 'semanage fcontext -a -t glusterd_brick_t "/gluster_bricks/data1(/.*)?"' (remove only the single quotes) and run 'restorecon -RFvv /gluster_bricks/data1'
7. Mount the FS and create a dir inside the mount point
8. Extend the gluster volume:
'gluster volume add-brick <VOL> replica 2 new_host:/gluster_bricks/<dir>/<subdir>'
9. Run a full heal
gluster volume heal <VOL> full
10. Repeat for the second node, and remember to never wipe 2 nodes at a time :)
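The sequence above, condensed into a dry-run sketch. Volume, host, and brick names are placeholders for your topology, and the script only PRINTS each gluster command until RUN=1 is set, precisely because a wrong brick operation hurts:

```shell
# Placeholders: WIPED/UNTOUCHED are the two redone nodes; the old node keeps
# its brick. Dry run by default; set RUN=1 to actually execute the commands.
VOL="myvol"; WIPED="node2"; UNTOUCHED="node3"; BRICK="/gluster_bricks/data1/brick"
PLAN="${PLAN:-/tmp/gluster_plan.txt}"; : > "$PLAN"
run() {
    echo "+ $*" | tee -a "$PLAN"
    if [ "${RUN:-0}" = "1" ]; then "$@"; fi
}
run gluster volume remove-brick "$VOL" replica 1 "$WIPED:$BRICK" "$UNTOUCHED:$BRICK" force
run gluster peer detach "$WIPED"
run gluster peer detach "$UNTOUCHED"
# ... reinstall the node, then mkfs/mount/label its brick as in steps 3-7 ...
run gluster volume add-brick "$VOL" replica 2 "$WIPED:$BRICK"
run gluster volume heal "$VOL" full
```

Reviewing the printed plan before flipping RUN=1 is the whole point of the wrapper.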
Good luck and take a look at Quick Start Guide - Gluster Docs
Best Regards,
Strahil Nikolov