<div dir="ltr">Thank you for your response! With the right magic word "geo-replication", I was able to find a howto that appears to be what I need to get started.<div><br></div><div>As to the documentation, some more use-case docs would be helpful. I also find myself struggling to understand it at a level where I feel comfortable administering it. For example, I followed a howto to get my current gluster stuff running, but just barely, and I still don't truly understand the components or how to make them work. My system is a 3-node oVirt cluster with gluster bricks located on the same nodes (and node 3 is the arbiter, apparently). I was trying to set up gluster to ride on its own network instead of sharing the main oVirt network. Unfortunately, I never could get that to work, and now that it is in production, I don't have the slightest idea where to look to cause gluster to use a different network interface already configured and up on the servers. I'm not even sure I know what tools to use at the command line to ensure gluster is healthy, and should something happen, I'd probably have to post panicked e-mails here....</div><div><br></div><div>I also am not sure I understand gluster well enough to architect a system under new assumptions. My current configuration was always intended to be a "phase 1" to get my cluster online and thus start and grow my business. However, my current storage is very limited. So, what should my target be for a "better" cluster? Single server, or dual?</div><div><br></div><div>If I grow to the point where I have multiple clusters at different offices (connected by fiber I own), how should I architect the storage so that VMs can be moved between clusters and my clusters "back each other up"? I could use geo-replication, but is that the best/proper way? 
If I build dedicated servers, do I need more than one gluster storage server per location?</div><div><br></div><div>This is a lot I just threw out there...These questions have all passed through my head, but I haven't found enough details to answer them myself. I'm slowly growing in a few areas; this geo-replication configuration will be my next growth. I would like to move my gluster to another network, and I think I found some of the files where the relevant configs are stored, but not enough detail to feel comfortable without breaking what I have. I've realized that for most systems I've set up, the docs are a bit less "recipe"-ish and have more explanation interspersed with the commands, and intermediate checks (with explanations) to verify your work as you go. These are extremely valuable to me.</div><div><br></div><div>For example, if my initial configuration instructions had a paragraph about the network architecture and the different settings (e.g., gluster-gluster node sync vs. the management interface that oVirt uses vs. data access for clients), and then walked through configuring each one, then showed the command-line instructions to check that it was working correctly before moving on to the next stage, that would help me understand what I've done and make me more likely to maintain it. 
It also makes other docs more understandable given a deeper knowledge of what I've already done.</div><div><br></div><div>It's possible that the instructions I used may have been poorer than typical for the project, but my googling didn't turn up something that allowed me to figure it out before I posted my original e-mail.</div><div><br></div><div>Thanks for your help!</div><div><br></div><div>--Jim</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Apr 25, 2017 at 10:02 AM, Sahina Bose <span dir="ltr"><<a href="mailto:sabose@redhat.com" target="_blank">sabose@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On Tue, Apr 25, 2017 at 9:18 PM, Jim Kusznir <span dir="ltr"><<a href="mailto:jim@palousetech.com" target="_blank">jim@palousetech.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>So with arbiter, I actually only have two copies of data...Does arbiter have at least a checksum or something to detect corruption of a copy (like the old RAID-4 disk configuration)?</div></div></blockquote><div><br></div></span><div>Yes, the arbiter brick stores metadata about the files, which is used to decide the good copy of the data stored on the replicas in case of conflict.<br> <br></div><span class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div>Ok...Related question: Is there a way to set up an offsite gluster storage server to mirror the contents of my main server? As "fire" insurance basically? 
(eventually, I'd like to have an "offsite" DR cluster, but I don't have the resources or scale yet for that).<div><br></div><div>What I'd like to do is place a basic storage server somewhere else and have it sync any gluster data changes on a regular basis, and be usable to repopulate storage should I lose all of my current cluster (e.g., a building fire or theft).</div></div></blockquote><div><br></div></span><div>Yes, the geo-replication feature can help with that. There's a remote data sync feature introduced for gluster storage domains that helps with this. You can set this up such that data from your storage domain is regularly synced to a remote gluster volume, while ensuring data consistency. The remote gluster volume does not have to be a replica 3 volume.<br> <br></div><span class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div><div>I find gluster has amazing power from what I hear, but I have a hard time finding documentation at "the right level" to be useful. I've found some very basic introductory guides, then some very advanced guides that require extensive knowledge of gluster already. Something in the middle to explain some of these questions (like arbiter and migration strategies, geo-replication, etc., and how to deploy them) is absent (or at least, I haven't found it yet). I still feel like I'm using something I don't understand, and the only avenue I have to learn more is to ask questions here, as the docs aren't at an accessible level.</div></div></blockquote><div><br></div></span><div>Thanks for the feedback. 
Are you looking at documentation on a use-case basis?<br> <br></div><div><div class="h5"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div><div>Thanks!</div><span class="m_4613553837863432275HOEnZb"><font color="#888888"><div>--Jim</div></font></span></div><div class="m_4613553837863432275HOEnZb"><div class="m_4613553837863432275h5"><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Apr 3, 2017 at 10:34 PM, Sahina Bose <span dir="ltr"><<a href="mailto:sabose@redhat.com" target="_blank">sabose@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="m_4613553837863432275m_-5339302049872229818h5">On Sat, Apr 1, 2017 at 10:32 PM, Jim Kusznir <span dir="ltr"><<a href="mailto:jim@palousetech.com" target="_blank">jim@palousetech.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Thank you!<div><br></div><div>Here's the output of gluster volume info:</div><div><div>[root@ovirt1 ~]# gluster volume info</div><div> </div><div>Volume Name: data</div><div>Type: Replicate</div><div>Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e<wbr>42cc59</div><div>Status: Started</div><div>Number of Bricks: 1 x (2 + 1) = 3</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: ovirt1.nwfiber.com:/gluster/br<wbr>ick2/data</div><div>Brick2: ovirt2.nwfiber.com:/gluster/br<wbr>ick2/data</div><div>Brick3: ovirt3.nwfiber.com:/gluster/br<wbr>ick2/data (arbiter)</div><div>Options Reconfigured:</div><div>performance.strict-o-direct: on</div><div>nfs.disable: on</div><div>user.cifs: off</div><div>network.ping-timeout: 30</div><div>cluster.shd-max-threads: 6</div><div>cluster.shd-wait-qlength: 10000</div><div>cluster.locking-scheme: 
granular</div><div>cluster.data-self-heal-algorit<wbr>hm: full</div><div>performance.low-prio-threads: 32</div><div>features.shard-block-size: 512MB</div><div>features.shard: on</div><div>storage.owner-gid: 36</div><div>storage.owner-uid: 36</div><div>cluster.server-quorum-type: server</div><div>cluster.quorum-type: auto</div><div>network.remote-dio: enable</div><div>cluster.eager-lock: enable</div><div>performance.stat-prefetch: off</div><div>performance.io-cache: off</div><div>performance.read-ahead: off</div><div>performance.quick-read: off</div><div>performance.readdir-ahead: on</div><div>server.allow-insecure: on</div><div> </div><div>Volume Name: engine</div><div>Type: Replicate</div><div>Volume ID: 87ad86b9-d88b-457e-ba21-5d3173<wbr>c612de</div><div>Status: Started</div><div>Number of Bricks: 1 x (2 + 1) = 3</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: ovirt1.nwfiber.com:/gluster/br<wbr>ick1/engine</div><div>Brick2: ovirt2.nwfiber.com:/gluster/br<wbr>ick1/engine</div><div>Brick3: ovirt3.nwfiber.com:/gluster/br<wbr>ick1/engine (arbiter)</div><div>Options Reconfigured:</div><div>performance.readdir-ahead: on</div><div>performance.quick-read: off</div><div>performance.read-ahead: off</div><div>performance.io-cache: off</div><div>performance.stat-prefetch: off</div><div>cluster.eager-lock: enable</div><div>network.remote-dio: off</div><div>cluster.quorum-type: auto</div><div>cluster.server-quorum-type: server</div><div>storage.owner-uid: 36</div><div>storage.owner-gid: 36</div><div>features.shard: on</div><div>features.shard-block-size: 512MB</div><div>performance.low-prio-threads: 32</div><div>cluster.data-self-heal-algorit<wbr>hm: full</div><div>cluster.locking-scheme: granular</div><div>cluster.shd-wait-qlength: 10000</div><div>cluster.shd-max-threads: 6</div><div>network.ping-timeout: 30</div><div>user.cifs: off</div><div>nfs.disable: on</div><div>performance.strict-o-direct: on</div><div> </div><div>Volume Name: export</div><div>Type: 
Replicate</div><div>Volume ID: 04ee58c7-2ba1-454f-be99-26ac75<wbr>a352b4</div><div>Status: Stopped</div><div>Number of Bricks: 1 x (2 + 1) = 3</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: ovirt1.nwfiber.com:/gluster/br<wbr>ick3/export</div><div>Brick2: ovirt2.nwfiber.com:/gluster/br<wbr>ick3/export</div><div>Brick3: ovirt3.nwfiber.com:/gluster/br<wbr>ick3/export (arbiter)</div><div>Options Reconfigured:</div><div>performance.readdir-ahead: on</div><div>performance.quick-read: off</div><div>performance.read-ahead: off</div><div>performance.io-cache: off</div><div>performance.stat-prefetch: off</div><div>cluster.eager-lock: enable</div><div>network.remote-dio: off</div><div>cluster.quorum-type: auto</div><div>cluster.server-quorum-type: server</div><div>storage.owner-uid: 36</div><div>storage.owner-gid: 36</div><div>features.shard: on</div><div>features.shard-block-size: 512MB</div><div>performance.low-prio-threads: 32</div><div>cluster.data-self-heal-algorit<wbr>hm: full</div><div>cluster.locking-scheme: granular</div><div>cluster.shd-wait-qlength: 10000</div><div>cluster.shd-max-threads: 6</div><div>network.ping-timeout: 30</div><div>user.cifs: off</div><div>nfs.disable: on</div><div>performance.strict-o-direct: on</div><div> </div><div>Volume Name: iso</div><div>Type: Replicate</div><div>Volume ID: b1ba15f5-0f0f-4411-89d0-595179<wbr>f02b92</div><div>Status: Started</div><div>Number of Bricks: 1 x (2 + 1) = 3</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: ovirt1.nwfiber.com:/gluster/br<wbr>ick4/iso</div><div>Brick2: ovirt2.nwfiber.com:/gluster/br<wbr>ick4/iso</div><div>Brick3: ovirt3.nwfiber.com:/gluster/br<wbr>ick4/iso (arbiter)</div><div>Options Reconfigured:</div><div>performance.readdir-ahead: on</div><div>performance.quick-read: off</div><div>performance.read-ahead: off</div><div>performance.io-cache: off</div><div>performance.stat-prefetch: off</div><div>cluster.eager-lock: enable</div><div>network.remote-dio: 
off</div><div>cluster.quorum-type: auto</div><div>cluster.server-quorum-type: server</div><div>storage.owner-uid: 36</div><div>storage.owner-gid: 36</div><div>features.shard: on</div><div>features.shard-block-size: 512MB</div><div>performance.low-prio-threads: 32</div><div>cluster.data-self-heal-algorit<wbr>hm: full</div><div>cluster.locking-scheme: granular</div><div>cluster.shd-wait-qlength: 10000</div><div>cluster.shd-max-threads: 6</div><div>network.ping-timeout: 30</div><div>user.cifs: off</div><div>nfs.disable: on</div><div>performance.strict-o-direct: on</div></div><div><br></div><div><br></div><div>The node marked as (arbiter) on all of the bricks is the node that is not using any of its disk space.</div></div></blockquote><div><br></div></div></div><div>This is by design - the arbiter brick only stores metadata and hence saves on storage.<br> <br></div><div><div class="m_4613553837863432275m_-5339302049872229818h5"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div><div>The engine domain is the volume dedicated for storing the hosted engine. 
Here's some LVM info:</div><div><br></div><div><div> --- Logical volume ---</div><div> LV Path /dev/gluster/engine</div><div> LV Name engine</div><div> VG Name gluster</div><div> LV UUID 4gZ1TF-a1PX-i1Qx-o4Ix-MjEf-0H<wbr>D8-esm3wg</div><div> LV Write Access read/write</div><div> LV Creation host, time <a href="http://ovirt1.nwfiber.com" target="_blank">ovirt1.nwfiber.com</a>, 2016-12-31 14:40:00 -0800</div><div> LV Status available</div><div> # open 1</div><div> LV Size 25.00 GiB</div><div> Current LE 6400</div><div> Segments 1</div><div> Allocation inherit</div><div> Read ahead sectors auto</div><div> - currently set to 256</div><div> Block device 253:2</div><div> </div><div> --- Logical volume ---</div><div> LV Name lvthinpool</div><div> VG Name gluster</div><div> LV UUID aaNtso-fN1T-ZAkY-kUF2-dlxf-0a<wbr>p2-JAwSid</div><div> LV Write Access read/write</div><div> LV Creation host, time <a href="http://ovirt1.nwfiber.com" target="_blank">ovirt1.nwfiber.com</a>, 2016-12-31 14:40:09 -0800</div><div> LV Pool metadata lvthinpool_tmeta</div><div> LV Pool data lvthinpool_tdata</div><div> LV Status available</div><div> # open 4</div><div> LV Size 150.00 GiB</div><div> Allocated pool data 65.02%</div><div> Allocated metadata 14.92%</div><div> Current LE 38400</div><div> Segments 1</div><div> Allocation inherit</div><div> Read ahead sectors auto</div><div> - currently set to 256</div><div> Block device 253:5</div><div> </div><div> --- Logical volume ---</div><div> LV Path /dev/gluster/data</div><div> LV Name data</div><div> VG Name gluster</div><div> LV UUID NBxLOJ-vp48-GM4I-D9ON-4OcB-hZ<wbr>rh-MrDacn</div><div> LV Write Access read/write</div><div> LV Creation host, time <a href="http://ovirt1.nwfiber.com" target="_blank">ovirt1.nwfiber.com</a>, 2016-12-31 14:40:11 -0800</div><div> LV Pool name lvthinpool</div><div> LV Status available</div><div> # open 1</div><div> LV Size 100.00 GiB</div><div> Mapped size 90.28%</div><div> Current LE 25600</div><div> Segments 
1</div><div> Allocation inherit</div><div> Read ahead sectors auto</div><div> - currently set to 256</div><div> Block device 253:7</div><div> </div><div> --- Logical volume ---</div><div> LV Path /dev/gluster/export</div><div> LV Name export</div><div> VG Name gluster</div><div> LV UUID bih4nU-1QfI-tE12-ZLp0-fSR5-dl<wbr>Kt-YHkhx8</div><div> LV Write Access read/write</div><div> LV Creation host, time <a href="http://ovirt1.nwfiber.com" target="_blank">ovirt1.nwfiber.com</a>, 2016-12-31 14:40:20 -0800</div><div> LV Pool name lvthinpool</div><div> LV Status available</div><div> # open 1</div><div> LV Size 25.00 GiB</div><div> Mapped size 0.12%</div><div> Current LE 6400</div><div> Segments 1</div><div> Allocation inherit</div><div> Read ahead sectors auto</div><div> - currently set to 256</div><div> Block device 253:8</div><div> </div><div> --- Logical volume ---</div><div> LV Path /dev/gluster/iso</div><div> LV Name iso</div><div> VG Name gluster</div><div> LV UUID l8l1JU-ViD3-IFiZ-TucN-tGPE-To<wbr>qc-Q3R6uX</div><div> LV Write Access read/write</div><div> LV Creation host, time <a href="http://ovirt1.nwfiber.com" target="_blank">ovirt1.nwfiber.com</a>, 2016-12-31 14:40:29 -0800</div><div> LV Pool name lvthinpool</div><div> LV Status available</div><div> # open 1</div><div> LV Size 25.00 GiB</div><div> Mapped size 28.86%</div><div> Current LE 6400</div><div> Segments 1</div><div> Allocation inherit</div><div> Read ahead sectors auto</div><div> - currently set to 256</div><div> Block device 253:9</div><div> </div><div> --- Logical volume ---</div><div> LV Path /dev/centos_ovirt/swap</div><div> LV Name swap</div><div> VG Name centos_ovirt</div><div> LV UUID PcVQ11-hQ9U-9KZT-QPuM-HwT6-8o<wbr>49-2hzNkQ</div><div> LV Write Access read/write</div><div> LV Creation host, time localhost, 2016-12-31 13:56:36 -0800</div><div> LV Status available</div><div> # open 2</div><div> LV Size 16.00 GiB</div><div> Current LE 4096</div><div> Segments 1</div><div> Allocation 
inherit</div><div> Read ahead sectors auto</div><div> - currently set to 256</div><div> Block device 253:1</div><div> </div><div> --- Logical volume ---</div><div> LV Path /dev/centos_ovirt/root</div><div> LV Name root</div><div> VG Name centos_ovirt</div><div> LV UUID g2h2fn-sF0r-Peos-hAE1-WEo9-WE<wbr>NO-MlO3ly</div><div> LV Write Access read/write</div><div> LV Creation host, time localhost, 2016-12-31 13:56:36 -0800</div><div> LV Status available</div><div> # open 1</div><div> LV Size 20.00 GiB</div><div> Current LE 5120</div><div> Segments 1</div><div> Allocation inherit</div><div> Read ahead sectors auto</div><div> - currently set to 256</div><div> Block device 253:0</div></div><div><br></div><div>------------</div><div><br></div><div>I don't use the export gluster volume, and I've never used lvthinpool-type allocations before, so I'm not sure if there's anything special there.</div><div><br></div><div>I followed the setup instructions from an ovirt contributed documentation that I can't find now that talked about how to install ovirt with gluster on a 3-node cluster.</div><div><br></div><div>Thank you for your assistance!</div><span class="m_4613553837863432275m_-5339302049872229818m_5350367153824699463HOEnZb"><font color="#888888"><div>--Jim</div></font></span></div><div class="m_4613553837863432275m_-5339302049872229818m_5350367153824699463HOEnZb"><div class="m_4613553837863432275m_-5339302049872229818m_5350367153824699463h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 30, 2017 at 1:27 AM, Sahina Bose <span dir="ltr"><<a href="mailto:sabose@redhat.com" target="_blank">sabose@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span>On Thu, Mar 30, 2017 at 1:23 PM, Liron Aravot <span dir="ltr"><<a href="mailto:laravot@redhat.com" target="_blank">laravot@redhat.com</a>></span> 
wrote:<br><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr">Hi Jim, please see inline<br><div class="gmail_extra"><br><div class="gmail_quote"><span class="m_4613553837863432275m_-5339302049872229818m_5350367153824699463m_9190369073002137491m_3465073948164697150gmail-">On Thu, Mar 30, 2017 at 4:08 AM, Jim Kusznir <span dir="ltr"><<a href="mailto:jim@palousetech.com" target="_blank">jim@palousetech.com</a>></span> wrote:<br><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr">Hello:<div><br></div><div>I've been running my oVirt 4.0.5.5-1.el7.centos cluster for a while now, and am now revisiting some aspects of it to ensure that I have good reliability.</div><div><br></div><div>My cluster is a 3-node cluster, with gluster nodes running on each node. After running my cluster a bit, I'm realizing I didn't do a very optimal job of allocating the space on my disk to the different gluster mount points. Fortunately, they were created with LVM, so I'm hoping that I can resize them without much trouble.</div><div><br></div><div>I have a domain for iso, a domain for export, and a domain for storage, all thin provisioned; then a domain for the engine, not thin provisioned. I'd like to expand the storage domain, and possibly shrink the engine domain and make that space also available to the main storage domain. Is it as simple as expanding the LVM partition, or are there more steps involved? Do I need to take the node offline?</div></div></blockquote><div><br></div></span><div>I didn't completely understand that part - what is the difference between the domain for storage and the domain for engine you mentioned? 
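The online expansion discussed here can be sketched as follows. This is a hypothetical sketch, not the exact procedure: it assumes the bricks sit on XFS (typical for gluster bricks, though not stated in this thread), uses the LV names from the lvdisplay output quoted elsewhere in the thread, and the +50G size is a placeholder.

```shell
# Hypothetical sketch: growing the thin-provisioned "data" brick LV online.
# Assumes XFS on the brick and free extents in the "gluster" VG; verify first.
vgs gluster                          # check free space in the volume group
lvextend -L +50G gluster/lvthinpool  # grow the thin pool before its thin LVs
lvextend -L +50G /dev/gluster/data   # grow the thin LV backing the brick
xfs_growfs /gluster/brick2/data      # grow XFS at the brick mount point, online
```

Once the brick filesystems on all replicas have grown, the gluster volume reflects the new size; `df -h` on a client mount confirms it.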
</div></div></div></div></blockquote><div><br></div></span><div>I think the domain for engine is the one storing Hosted Engine data.<br></div><div>You should be able to expand your underlying LVM partition without having to take the node offline.<br> <br></div><span><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="m_4613553837863432275m_-5339302049872229818m_5350367153824699463m_9190369073002137491m_3465073948164697150gmail-"><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><div><br></div><div>Second, I've noticed that the first two nodes seem to have a full copy of the data (the disks are in use), but the 3rd node appears not to be using any of its storage space...It is participating in the gluster cluster, though.</div></div></blockquote></span></div></div></div></blockquote><div><br></div></span><div>Is the volume created as replica 3? If so, a full copy of the data should be present on all 3 nodes. Please provide the output of "gluster volume info".<br><br></div><span><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="m_4613553837863432275m_-5339302049872229818m_5350367153824699463m_9190369073002137491m_3465073948164697150gmail-"><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><div><br></div><div>Third, currently gluster shares the same network as the VM networks. I'd like to put it on its own network. 
I'm not sure how to do this, as when I tried to do it at install time, I never got the cluster to come online; I had to make them share the same network to make that work.</div></div></blockquote></span></div></div></div></blockquote><div><br></div></span><div>While creating the bricks, the network intended for gluster should have been used to identify the brick in hostname:brick-directory. Changing this at a later point is a bit more involved. Please check online or ask on gluster-users about changing the IP address associated with a brick.<br> <br></div><span><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="m_4613553837863432275m_-5339302049872229818m_5350367153824699463m_9190369073002137491m_3465073948164697150gmail-"><div><br></div></span><div>I'm adding Sahina, who may shed some light on the gluster question; I'd try the gluster mailing list as well. </div><span class="m_4613553837863432275m_-5339302049872229818m_5350367153824699463m_9190369073002137491m_3465073948164697150gmail-"><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><div><br></div><div><br></div><div>oVirt questions:</div><div>I've noticed that recently, I don't appear to be getting software updates anymore. I used to get update-available notifications on my nodes every few days; I haven't seen one for a couple of weeks now. Is something wrong?</div><div><br></div><div>I have a Windows 10 x64 VM. I get a warning that my VM type does not match the installed OS. All works fine, but I've quadruple-checked that it does match. Is this a known bug?</div></div></blockquote><div><br></div></span><div>Arik, any info on that? 
</div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><span class="m_4613553837863432275m_-5339302049872229818m_5350367153824699463m_9190369073002137491m_3465073948164697150gmail-"><div dir="ltr"><div><br></div><div>I have a UPS that all three nodes and the networking are on. It is a USB UPS. How should I best integrate monitoring in? I could put a raspberry pi up and then run NUT or similar on it, but is there a "better" way with oVirt?</div><div><br></div><div>Thanks!</div><span class="m_4613553837863432275m_-5339302049872229818m_5350367153824699463m_9190369073002137491m_3465073948164697150gmail-m_2753504372628888366HOEnZb"><font color="#888888"><div>--Jim</div></font></span></div>
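The "what tools do I use at the command line to ensure gluster is healthy" question that recurs in this thread comes down to a few standard gluster CLI commands; a minimal sketch, using the "data" volume name from this thread (the same checks apply to the engine, export, and iso volumes):

```shell
# Basic gluster health checks, run on any node in the trusted pool.
gluster peer status                         # each peer should be "Peer in Cluster (Connected)"
gluster volume status data                  # bricks and self-heal daemons should show Online "Y"
gluster volume heal data info               # entries pending self-heal; should trend toward 0
gluster volume heal data info split-brain   # files in split-brain; ideally none
```

A steadily nonzero heal count, or a peer stuck in a state other than Connected, is the usual first sign of trouble.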
<br></span>______________________________<wbr>_________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a rel="noreferrer" href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><br>
<br></blockquote></div><br></div></div>
</blockquote></span></div><br></div></div>
</blockquote></div><br></div>
</div></div></blockquote></div></div></div><br></div></div>
</blockquote></div><br></div>
</div></div></blockquote></div></div></div><br></div></div>
</blockquote></div><br></div>
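For reference, the geo-replication setup discussed at the top of this thread is typically created along these lines. This is a sketch, not the full procedure: "data" is a volume name from this thread, while backupsite.example.com and backupvol are placeholder names for the offsite host and volume; key distribution and passwordless SSH setup are covered in the gluster geo-replication howto and are omitted here.

```shell
# Hypothetical sketch: one-way geo-replication from the "data" volume
# to a remote volume "backupvol" on backupsite.example.com (placeholders).
gluster system:: execute gsec_create   # generate the pem keys for the session
gluster volume geo-replication data backupsite.example.com::backupvol create push-pem
gluster volume geo-replication data backupsite.example.com::backupvol start
gluster volume geo-replication data backupsite.example.com::backupvol status
```

The status command should show the session progressing from Initializing through Hybrid/History Crawl to Changelog Crawl once the initial sync completes.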