Windows Safely Remove Hardware
by Стаценко Константин Юрьевич
Hello!
What is the best way to remove options like ejecting virtual machine disks/controllers or NICs from "Safely Remove Hardware" in a Windows guest OS?
Currently running oVirt 4.1.8.
Thanks.
Re: [ovirt-users] Q: Optimal settings for DB hosting
by Yaniv Kaul
On Jan 19, 2018 10:52 AM, "andreil1" <andreil1(a)starlett.lv> wrote:
Hi !
What are the optimal settings for oVirt KVM guests hosting a database on a Xeon
server (2 x 4-core Xeons)? In my case this is a Firebird-based
accounting/stock control system with several active clients.
First, of course, a preallocated disk image.
VirtIO-SCSI enabled.
- It's not clear that virtio-scsi is faster than virtio-blk in all cases.
Test.
- What's the backend storage?
Migration disabled.
Ballooning disabled.
CPU shares disabled.
Pass-through host CPU enabled.
What about NUMA and pinning?
What should the other CPU settings be?
For example, these Xeons have 2 threads per core; should I set 1 or 2 threads
per virtual CPU in oVirt?
IO Threads on or off?
On.
Any ideas on NUMA settings?
Indeed. + Huge pages, in both host and guest.
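A minimal host-side sketch of the NUMA check and huge pages reservation (the 4096-page count is just an illustration, and the oVirt-side "hugepages" custom property name/format is an assumption to verify against the docs):
# inspect NUMA topology before deciding on pinning
numactl --hardware
lscpu | grep -i numa
# reserve 4096 x 2 MiB huge pages (8 GiB) at runtime
echo 4096 > /proc/sys/vm/nr_hugepages
# make it persistent across reboots
echo "vm.nr_hugepages = 4096" > /etc/sysctl.d/10-hugepages.conf
sysctl -p /etc/sysctl.d/10-hugepages.conf
grep Huge /proc/meminfo    # verify the pages were actually reserved
# guest side (assumption): set the VM custom property hugepages=2048
# (page size in KiB) so the guest's memory is backed by huge pages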
The node is running 4 VMs total; CPU load is quite low, and RAM is enough to
preallocate for each VM plus 4 GB for the node itself.
In short, use a High Performance VM. See the ovirt.org feature page.
Y.
Thanks in advance !
Andrei
Questions about converged infrastructure setup and glusterFS sizing/performance
by Jayme
I am attempting to narrow down choices for storage in a new oVirt build
that will eventually be used for a mix of dev and production servers.
My current space usage, excluding backups, sits at only about 1 TB, so I figure
3-5 TB would be more than enough for VM storage plus some room to grow.
There will be around 24 Linux VMs total, but 80% of them are VERY low-usage,
low-spec servers.
I've been considering a 3-host hyperconverged oVirt setup (replica 3
arbiter 1) with a disaster recovery plan to replicate the gluster volume to a
separate server. I would of course do additional incremental backups to an
alternate server as well, probably with rsync or some other method.
Some questions:
1. Is it recommended to use SSDs for GlusterFS, or can regular server/SAS
drives deliver sufficient performance? If using SSDs, is it recommended to use
enterprise SSDs, or are consumer SSDs good enough given the redundancy of
GlusterFS? I would love to hear about any use cases regarding the hardware
specs you used in hyperconverged setups and what level of performance you are
seeing.
2. Is it recommended to RAID the drives that form the gluster bricks? If so,
what RAID level?
3. How do I calculate how much space will be usable in a replica 3 arbiter 1
configuration? Will it be 75% of total drive capacity, minus what I lose from
RAID (if I RAID the drives)? (See the sketch after this list.)
4. For replication of the gluster volume, is it possible to replicate the
entire volume to a single drive/RAID array in an alternate server, or does the
replicated volume need to match the configuration of the main GlusterFS volume
(i.e. the same number of drives, configuration, etc.)?
5. Has the Meltdown bug caused, or is it expected to cause, major issues with
oVirt hyperconverged setups due to performance loss from the patches? I've
been reading articles suggesting up to 30% performance loss on some
converged/storage setups because of how CPU-intensive converged setups are.
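A rough sketch for question 3 (the numbers are hypothetical, not from this thread): in replica 3 arbiter 1, each file is stored in full on two data bricks and only as metadata on the arbiter brick, so usable space is roughly the size of one data brick rather than 75% of the raw total.
# example: ovirt1 and ovirt2 each contribute a 4 TB data brick,
# ovirt3 contributes a small arbiter brick (metadata only)
#   raw data capacity   = 4 TB + 4 TB              = 8 TB
#   usable volume size  ~ size of one data brick   ~ 4 TB
# if each 4 TB brick is itself a RAID set (e.g. RAID 6 over 6 x 1 TB ~ 4 TB),
# apply the RAID overhead per node first, then take one data brick's size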
Thanks in advance!
OVS not running / logwatch error after upgrade from 4.0.6 to 4.1.8
by Derek Atkins
Hi,
I recently upgraded my 1-host ovirt deployment from 4.0.6 to 4.1.8.
Since then, the host has been reporting a cron.daily error:
/etc/cron.daily/logrotate:
logrotate_script: line 4: cd: /var/run/openvswitch: No such file or directory
This isn't surprising, since:
# systemctl status openvswitch
● openvswitch.service - Open vSwitch
Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; disabled; vendor preset: disabled)
Active: inactive (dead)
The host was just upgraded by "yum update".
Was there anything special that needed to happen after the update?
Do I *NEED* OVS running?
The VMs all seem to be behaving properly.
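A hedged sketch of the two obvious ways to handle it (not from this thread; whether OVS is actually needed depends on whether any OVS-based networks are defined on the cluster):
# check whether anything on the host is using Open vSwitch at all
ovs-vsctl show                      # only answers once the daemon is running
# option 1: start and enable the service the logrotate snippet expects
systemctl enable --now openvswitch
# option 2: leave it stopped; the daily error only means the logrotate
# snippet (presumably /etc/logrotate.d/openvswitch) cannot cd into
# /var/run/openvswitch because the daemon never created it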
Thanks,
-derek
--
Derek Atkins 617-623-3745
derek(a)ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant
oVirt storage access failure from host
by Alex K
Hi All,
I have a 3-server oVirt 4.1 self-hosted setup with gluster replica 3.
I see that suddenly one of the hosts was reported as unresponsive, and at the
same time /var/log/messages logged:
ovirt-ha-broker ovirt_hosted_engine_ha.broker.listener.ConnectionHandler ERROR Error handling request, data: 'set-storage-domain FilesystemBackend dom_type=glusterfs sd_uuid=ad7b9e2a-7ae3-46ad-9429-5f5ef452eac8'
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 166, in handle
    data)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/listener.py", line 299, in _dispatch
    .set_storage_domain(client, sd_type, **options)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 66, in set_storage_domain
    self._backends[client].connect()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 462, in connect
    self._dom_type)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 107, in get_domain_path
    " in {1}".format(sd_uuid, parent))
BackendFailureException: path to storage domain ad7b9e2a-7ae3-46ad-9429-5f5ef452eac8 not found in /rhev/data-center/mnt/glusterSD
Jan 15 11:04:56 v1 journal: vdsm root ERROR failed to retrieve Hosted Engine HA info
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in _getHaInfo
    stats = instance.get_all_stats()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 103, in get_all_stats
    self._configure_broker_conn(broker)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 180, in _configure_broker_conn
    dom_type=dom_type)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 177, in set_storage_domain
    .format(sd_type, options, e))
RequestError: Failed to set storage domain FilesystemBackend, options {'dom_type': 'glusterfs', 'sd_uuid': 'ad7b9e2a-7ae3-46ad-9429-5f5ef452eac8'}: Request failed: <class 'ovirt_hosted_engine_ha.lib.storage_backends.BackendFailureException'>
In the VDSM logs I see the following logged continuously:
[jsonrpc.JsonRpcServer] RPC call VM.getStats failed (error 1) in 0.00
seconds (__init__:539)
No errors were seen in gluster in the same time frame.
Any hints on what is causing this issue? It seems like a storage access
issue, but gluster was up and the volumes were OK. The VMs that I am running
on top are Windows 10 and Windows 2016, 64-bit.
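A few illustrative checks (not from this thread) that would show whether this particular host lost its storage mount even though gluster itself was healthy:
ls /rhev/data-center/mnt/glusterSD/      # is the hosted-engine domain mounted here?
mount | grep glusterfs                   # are the gluster fuse mounts present?
systemctl status ovirt-ha-agent ovirt-ha-broker
hosted-engine --vm-status                # what the HA agents report for storage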
Thanx,
Alex
correct settings for gluster based storage domain
by Artem Tambovskiy
I'm still troubleshooting my oVirt 4.1.8 cluster, and the idea came to mind
that I have an issue with the storage settings for the hosted_engine storage
domain.
But in general, if I have 2 oVirt nodes running gluster plus a 3rd host as
arbiter, what should the settings look like?
Let's say I have 3 nodes:
ovirt1.domain.com (gluster + ovirt)
ovirt2.domain.com (gluster + ovirt)
ovirt3.domain.com (gluster)
What should the correct storage domain config look like?
Option 1:
/etc/ovirt-hosted-engine/hosted-engine.conf
....
storage=ovirt1.domain.com:/engine
mnt_options=backup-volfile-servers=ovirt2.domain.com:ovirt3.domain.com
Option 2:
/etc/ovirt-hosted-engine/hosted-engine.conf
....
storage=localhost:/engine
mnt_options=backup-volfile-servers=ovirt1.domain.com:ovirt2.domain.com:ovirt3.domain.com
Option 3:
Set up a DNS record gluster.domain.com pointing to the IP addresses of the
gluster nodes.
/etc/ovirt-hosted-engine/hosted-engine.conf
....
storage=gluster.domain.com:/engine
mnt_options=
Of course, this applies not only to the hosted engine domain but to all
gluster-based storage domains.
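For a regular (non-hosted-engine) gluster storage domain the analogous settings go into the Admin Portal's new-domain dialog rather than a config file; a sketch, with field names as I recall them (treat them as assumptions):
Storage Type:  GlusterFS
Path:          ovirt1.domain.com:/data
Mount Options: backup-volfile-servers=ovirt2.domain.com:ovirt3.domain.com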
Thank you in advance!
Regards,
Artem
[ANN] oVirt 4.2.1 Second Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.2.1 Second Release Candidate, as of January 18th, 2018
This update is a release candidate of the second in a series of
stabilization updates to the 4.2
series.
This is pre-release software. This pre-release should not be used in
production.
[WARNING] Right after we finished composing the release candidate, we
discovered a regression in a disaster recovery flow causing wrong MAC
addresses to be assigned to re-imported VMs.
This release is available now for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.2
See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node will be available soon [2]
Additional Resources:
* Read more about the oVirt 4.2.1 release highlights:
http://www.ovirt.org/release/4.2.1/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.2.1/
[2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Re: [ovirt-users] Problems with some vms
by Endre Karlson
One brick was down at one point for replacement.
It has been replaced and all volumes are up:
Status of volume: data
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt0:/gluster/brick3/data            49152     0          Y       22467
Brick ovirt2:/gluster/brick3/data            49152     0          Y       20736
Brick ovirt3:/gluster/brick3/data            49152     0          Y       23148
Brick ovirt0:/gluster/brick4/data            49153     0          Y       22497
Brick ovirt2:/gluster/brick4/data            49153     0          Y       20742
Brick ovirt3:/gluster/brick4/data            49153     0          Y       23158
Brick ovirt0:/gluster/brick5/data            49154     0          Y       22473
Brick ovirt2:/gluster/brick5/data            49154     0          Y       20748
Brick ovirt3:/gluster/brick5/data            49154     0          Y       23156
Brick ovirt0:/gluster/brick6/data            49155     0          Y       22479
Brick ovirt2:/gluster/brick6_1/data          49161     0          Y       21203
Brick ovirt3:/gluster/brick6/data            49155     0          Y       23157
Brick ovirt0:/gluster/brick7/data            49156     0          Y       22485
Brick ovirt2:/gluster/brick7/data            49156     0          Y       20763
Brick ovirt3:/gluster/brick7/data            49156     0          Y       23155
Brick ovirt0:/gluster/brick8/data            49157     0          Y       22491
Brick ovirt2:/gluster/brick8/data            49157     0          Y       20771
Brick ovirt3:/gluster/brick8/data            49157     0          Y       23154
Self-heal Daemon on localhost                N/A       N/A        Y       23238
Bitrot Daemon on localhost                   N/A       N/A        Y       24870
Scrubber Daemon on localhost                 N/A       N/A        Y       24889
Self-heal Daemon on ovirt2                   N/A       N/A        Y       24271
Bitrot Daemon on ovirt2                      N/A       N/A        Y       24856
Scrubber Daemon on ovirt2                    N/A       N/A        Y       24866
Self-heal Daemon on ovirt0                   N/A       N/A        Y       29409
Bitrot Daemon on ovirt0                      N/A       N/A        Y       5457
Scrubber Daemon on ovirt0                    N/A       N/A        Y       5468

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: engine
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt0:/gluster/brick1/engine          49158     0          Y       22511
Brick ovirt2:/gluster/brick1/engine          49158     0          Y       20780
Brick ovirt3:/gluster/brick1/engine          49158     0          Y       23199
Self-heal Daemon on localhost                N/A       N/A        Y       23238
Self-heal Daemon on ovirt0                   N/A       N/A        Y       29409
Self-heal Daemon on ovirt2                   N/A       N/A        Y       24271

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: iso
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt0:/gluster/brick2/iso             49159     0          Y       22520
Brick ovirt2:/gluster/brick2/iso             49159     0          Y       20789
Brick ovirt3:/gluster/brick2/iso             49159     0          Y       23208
NFS Server on localhost                      N/A       N/A        N       N/A
Self-heal Daemon on localhost                N/A       N/A        Y       23238
NFS Server on ovirt2                         N/A       N/A        N       N/A
Self-heal Daemon on ovirt2                   N/A       N/A        Y       24271
NFS Server on ovirt0                         N/A       N/A        N       N/A
Self-heal Daemon on ovirt0                   N/A       N/A        Y       29409

Task Status of Volume iso
------------------------------------------------------------------------------
There are no active volume tasks
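As a side note (hedged, not part of the original reply): after a brick replacement it is also worth confirming that self-heal has finished before trusting the volume again, e.g.:
gluster volume heal data info     # entries still pending heal, per brick
gluster volume heal engine info
gluster volume heal iso info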
2018-01-17 8:13 GMT+01:00 Gobinda Das <godas(a)redhat.com>:
> Hi,
> I can see some error in log:
> [2018-01-14 11:19:49.886571] E [socket.c:2309:socket_connect_finish]
> 0-engine-client-0: connection to 10.2.0.120:24007 failed (Connection
> timed out)
> [2018-01-14 11:20:05.630669] E [socket.c:2309:socket_connect_finish]
> 0-engine-client-0: connection to 10.2.0.120:24007 failed (Connection
> timed out)
> [2018-01-14 12:01:09.089925] E [MSGID: 114058]
> [client-handshake.c:1527:client_query_portmap_cbk] 0-engine-client-0:
> failed to get the port number for remote subvolume. Please run 'gluster
> volume status' on server to see if brick process is running.
> [2018-01-14 12:01:09.090048] I [MSGID: 114018]
> [client.c:2280:client_rpc_notify] 0-engine-client-0: disconnected from
> engine-client-0. Client process will keep trying to connect to glusterd
> until brick's port is available
>
> Can you please check gluster volume status and see if all bricks are up?
>
> On Wed, Jan 17, 2018 at 12:24 PM, Endre Karlson <endre.karlson(a)gmail.com>
> wrote:
>
>> It's there now for each of the hosts. ovirt1 is not in service yet.
>>
>> 2018-01-17 5:52 GMT+01:00 Gobinda Das <godas(a)redhat.com>:
>>
>>> In the above url only data and iso mnt log present,But there is no
>>> engine and vmstore mount log.
>>>
>>> On Wed, Jan 17, 2018 at 1:26 AM, Endre Karlson <endre.karlson(a)gmail.com>
>>> wrote:
>>>
>>>> Hi, all logs are located here: https://www.dropbox.com/sh/3qzmwe76rkt09fk/AABzM9rJKbH5SBPWc31Npxhma?dl=0 for the mounts
>>>>
>>>> additionally we replaced a broken disk that is now resynced.
>>>>
>>>> 2018-01-15 11:17 GMT+01:00 Gobinda Das <godas(a)redhat.com>:
>>>>
>>>>> Hi Endre,
>>>>> Mount logs will be in below format inside /var/log/glusterfs :
>>>>>
>>>>> /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*\:_engine.log
>>>>> /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*\:_data.log
>>>>> /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*\:_vmstore.log
>>>>>
>>>>> On Mon, Jan 15, 2018 at 11:57 AM, Endre Karlson <
>>>>> endre.karlson(a)gmail.com> wrote:
>>>>>
>>>>>> Hi.
>>>>>>
>>>>>> What are the gluster mount logs ?
>>>>>>
>>>>>> I have these gluster logs.
>>>>>> cli.log etc-glusterfs-glusterd.vol.log
>>>>>> glfsheal-engine.log glusterd.log nfs.log
>>>>>> rhev-data-center-mnt-glusterSD-ovirt0:_engine.log
>>>>>> rhev-data-center-mnt-glusterSD-ovirt3:_iso.log
>>>>>> cmd_history.log glfsheal-data.log glfsheal-iso.log
>>>>>> glustershd.log rhev-data-center-mnt-glusterSD-ovirt0:_data.log
>>>>>> rhev-data-center-mnt-glusterSD-ovirt0:_iso.log statedump.log
>>>>>>
>>>>>>
>>>>>> I am running version
>>>>>> glusterfs-server-3.12.4-1.el7.x86_64
>>>>>> glusterfs-geo-replication-3.12.4-1.el7.x86_64
>>>>>> libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.7.x86_64
>>>>>> glusterfs-libs-3.12.4-1.el7.x86_64
>>>>>> glusterfs-api-3.12.4-1.el7.x86_64
>>>>>> python2-gluster-3.12.4-1.el7.x86_64
>>>>>> glusterfs-client-xlators-3.12.4-1.el7.x86_64
>>>>>> glusterfs-cli-3.12.4-1.el7.x86_64
>>>>>> glusterfs-events-3.12.4-1.el7.x86_64
>>>>>> glusterfs-rdma-3.12.4-1.el7.x86_64
>>>>>> vdsm-gluster-4.20.9.3-1.el7.centos.noarch
>>>>>> glusterfs-3.12.4-1.el7.x86_64
>>>>>> glusterfs-fuse-3.12.4-1.el7.x86_64
>>>>>>
>>>>>> // Endre
>>>>>>
>>>>>> 2018-01-15 6:11 GMT+01:00 Gobinda Das <godas(a)redhat.com>:
>>>>>>
>>>>>>> Hi Endre,
>>>>>>> Can you please provide glusterfs mount logs?
>>>>>>>
>>>>>>> On Mon, Jan 15, 2018 at 6:16 AM, Darrell Budic <
>>>>>>> budic(a)onholyground.com> wrote:
>>>>>>>
>>>>>>>> What version of gluster are you running? I’ve seen a few of these
>>>>>>>> since moving my storage cluster to 12.3, but still haven’t been able to
>>>>>>>> determine what’s causing it. Seems to be happening most often on VMs that
>>>>>>>> haven't been switched over to libgfapi mounts yet, but even one of those
>>>>>>>> has paused once so far. They generally restart fine from the GUI, and
>>>>>>>> nothing seems to need healing.
>>>>>>>>
>>>>>>>> ------------------------------
>>>>>>>> *From:* Endre Karlson <endre.karlson(a)gmail.com>
>>>>>>>> *Subject:* [ovirt-users] Problems with some vms
>>>>>>>> *Date:* January 14, 2018 at 12:55:45 PM CST
>>>>>>>> *To:* users
>>>>>>>>
>>>>>>>> Hi, we are getting some errors with some of our vms in a 3 node
>>>>>>>> server setup.
>>>>>>>>
>>>>>>>> 2018-01-14 15:01:44,015+0100 INFO (libvirt/events) [virt.vm]
>>>>>>>> (vmId='2c34f52d-140b-4dbe-a4bd-d2cb467b0b7c') abnormal vm stop
>>>>>>>> device virtio-disk0 error eother (vm:4880)
>>>>>>>>
>>>>>>>> We are running glusterfs for shared storage.
>>>>>>>>
>>>>>>>> I have tried setting global maintenance on the first server and
>>>>>>>> then issuing a 'hosted-engine --vm-start' but that leads to nowhere.
>>>>>>>> _______________________________________________
>>>>>>>> Users mailing list
>>>>>>>> Users(a)ovirt.org
>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Users mailing list
>>>>>>>> Users(a)ovirt.org
>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Thanks,
>>>>>>> Gobinda
>>>>>>> +91-9019047912
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Thanks,
>>>>> Gobinda
>>>>> +91-9019047912
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Gobinda
>>> +91-9019047912
>>>
>>
>>
>
>
> --
> Thanks,
> Gobinda
> +91-9019047912
>
Re: [ovirt-users] oVirt home lab hardware
by Abdurrahman A. Ibrahim
One more thing,
Do you have a power switch?
Best regards,
Ab
On 19 Jan 2018 10:44 am, "Abdurrahman A. Ibrahim" <a.rahman.attia(a)gmail.com>
wrote:
> Thank you Jayme for your reply,
>
> I have some concerns regarding space, noise and power consumption.
>
> Would you mind sharing your experience regarding those three parameters
> with us?
>
> Best regards,
> Ab
>
>
>
>
> On 18 Jan 2018 10:23 pm, "Jayme" <jaymef(a)gmail.com> wrote:
>
> For rackmount, Dell R710s are fairly popular for home labs; they have good
> specs and can be found at reasonable prices on eBay.
>
> On Thu, Jan 18, 2018 at 4:52 PM, Abdurrahman A. Ibrahim <
> a.rahman.attia(a)gmail.com> wrote:
>
>> Hello,
>>
>> I am planning to buy home lab hardware to be used by oVirt.
>>
>> Any recommendations for used hardware I can buy from eBay, for example?
>> Also, have you tried oVirt on Intel NUC or any other SMB servers before?
>>
>> Thanks,
>> Ab
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>