Changing ticket duration for VMs
by Alexis HAUSER
Hi,
I'm looking for a way to change the ticket duration for all VMs. How can I do this? I'd like to change it to 5 minutes instead of 2 minutes.
It seems possible to change these parameters using the REST API, with "action.grace_period.expiry" or "action.ticket.value"...
However, these parameters seem to be accessible only via POST, not GET. How can you retrieve their values then, using POST?
These parameters seem to be set per VM; is there a way to set them for all VMs in general, including VMs created in the future? Do they apply to all tickets created, or only to the single generated ticket whose value you define?
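For illustration, this is roughly what I imagine setting a per-VM expiry would look like through the v3 Python SDK wrapper around the same ticket action; it is only a hedged sketch, with placeholder URL, credentials and VM name, and the method/parameter names are assumptions based on the v3 SDK:
# Hedged sketch assuming ovirt-engine-sdk-python v3; placeholders throughout.
from ovirtsdk.api import API
from ovirtsdk.xml import params
api = API(url='https://engine.example.com/api',        # assumed engine URL
          username='admin@internal', password='secret',
          insecure=True)
vm = api.vms.get(name='myvm')                           # assumed VM name
# Request a console ticket that stays valid for 300 seconds (5 minutes)
result = vm.ticket(params.Action(ticket=params.Ticket(expiry=300)))
print(result.get_ticket().get_value())                  # the generated ticket value (console password)
api.disconnect()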
8 years, 6 months
video memory in virtual machine
by qinglong.dong@horebdata.cn
I have found that video playback is not smooth in the virtual machine, so I thought the video memory in the virtual machines may be too small. I found the following parameters while checking some information, and I wonder where to modify them and whether doing so would help.
-device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vgamem_mb=16
Can anyone help?
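Those qemu values seem to correspond to the ram/vram/vgamem attributes (in KiB) of the libvirt <video><model type='qxl' .../> element, so one way to experiment might be a vdsm before_vm_start hook that rewrites those attributes before the VM starts. The sketch below is only an assumption on my part (hook path and sizes are examples), not a documented oVirt tuning knob:
#!/usr/bin/python
# Hedged sketch of a vdsm before_vm_start hook, e.g. saved as
# /usr/libexec/vdsm/hooks/before_vm_start/50_qxl_videoram (path and sizes assumed).
import hooking
domxml = hooking.read_domxml()
for model in domxml.getElementsByTagName('model'):
    if model.getAttribute('type') == 'qxl':
        model.setAttribute('ram', '131072')     # 128 MiB; original ram_size=67108864 bytes = 64 MiB
        model.setAttribute('vram', '32768')     # 32 MiB;  original vram_size=8388608 bytes = 8 MiB
        model.setAttribute('vgamem', '32768')   # 32 MiB;  original vgamem_mb=16
hooking.write_domxml(domxml)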
8 years, 6 months
strange API parameters default in python SDK
by Fabrice Bacchella
When I look at the RHEV documentation for the API class in the Python SDK, I see:
persistent_auth
Specifies whether persistent authentication is enabled for this connection. Valid values are True and False. This parameter is optional and defaults to False.
filter
Specifies whether or not user permission based filter is on or off. Valid values are True and False. If the filter parameter is set to False - which is the default - then the authentication credentials provided must be those of an administrative user. If the filter parameter is set to True then any user can be used and the Manager will filter the actions available to the user based on their permissions.
I'm surprised, because to my mind the default values are the least useful versions of each option. Why not default them to the more useful values and let users switch them to the opposite if they run into problems?
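For reference, this is what explicitly setting both parameters looks like when constructing the SDK client; a minimal sketch with a placeholder endpoint and credentials:
from ovirtsdk.api import API
api = API(url='https://engine.example.com/api',   # placeholder
          username='admin@internal',
          password='secret',
          persistent_auth=True,   # keep one authenticated session instead of re-authenticating per request
          filter=False,           # admin view; True filters results by the user's own permissions
          insecure=True)          # skip CA verification, lab use only
print([vm.get_name() for vm in api.vms.list()])
api.disconnect()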
8 years, 6 months
qemu cgroup_controllers
by Дмитрий Глушенок
Hello!
Is it possible to tell libvirt to add specific devices to the qemu cgroup, for example by somehow enumerating the devices in the XML using a hook?
I'm passing SCSI-generic disks (/dev/sgX) to a VM using the qemucmdline hook, and it doesn't work until I remove "devices" from cgroup_controllers in qemu.conf.
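One alternative I am considering, instead of dropping the controller entirely, is whitelisting the SG nodes via cgroup_device_acl in /etc/libvirt/qemu.conf; a hedged sketch follows (the /dev/sgX entries are examples and the rest is meant to mirror the shipped default list):
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet",
    "/dev/sg0", "/dev/sg1"
]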
--
Dmitry Glushenok
Jet Infosystems
http://www.jet.msk.su
+7-495-411-7601 (ext. 1237)
8 years, 6 months
Getting SPICE connection parameters with Python-SDK
by nicolas@devels.es
Hi,
On the user portal, when users click the "Connect" link of a VM that is
configured to be run with "Native client" and SPICE protocol, a .vv file
is generated with the connection parameters so the SPICE client knows
where to connect: host, port, password, ...
Is there currently a way to generate those parameters with the Python SDK, especially the session password, so it's possible to connect to the VM directly with a SPICE client?
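For illustration, the kind of thing I am after would look roughly like the following v3 SDK sketch; everything here is a placeholder or an assumption on my part (in particular the display and ticket accessors):
from ovirtsdk.api import API
from ovirtsdk.xml import params
api = API(url='https://engine.example.com/api',   # placeholder
          username='someuser@internal', password='secret', insecure=True)
vm = api.vms.get(name='myvm')                     # placeholder VM name
display = vm.get_display()                        # SPICE display of the running VM
ticket = vm.ticket(params.Action(ticket=params.Ticket(expiry=120)))
print('host     = %s' % display.get_address())
print('port     = %s' % display.get_port())
print('tls_port = %s' % display.get_secure_port())
print('password = %s' % ticket.get_ticket().get_value())
api.disconnect()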
Thanks.
8 years, 6 months
Ovirt system reboot no symlink to storage
by David Gossage
This morning I updated my Gluster version to 3.7.11, and during this I shut down oVirt completely, along with all VMs. On bringing them back up, oVirt seems not to have recreated the symlinks to the Gluster storage.
The path the VMs expected to find the storage at, as created by oVirt, was
/rhev/data-center/00000001-0001-0001-0001-0000000000b8/7c73a8dd-a72e-4556-ac88-7f6813131e64,
which was a symlink to what was mounted at /rhev/data-center/mnt/glusterSD/ccgl1.gl.local:GLUSTER1 once the engine activated the hosts.
It never seemed to recreate that symlink, though; I ended up creating it manually, and after that I could bring up my VMs.
If this was caused by some error, would I likely find it in the engine logs on the engine VM, or in one of the vdsm logs on a host?
I'm running oVirt Engine Version: 3.6.1.3-1.el7.centos
*David Gossage*
*Carousel Checks Inc. | System Administrator*
*Office* 708.613.2284
8 years, 6 months
Sanlock add Lockspace Errors
by InterNetX - Juergen Gotteswinter
Hi,
For some time now we have been getting error messages from sanlock, and so far I have not been able to figure out what exactly they are trying to tell us and, more importantly, whether this is something that can be ignored or needs to be fixed (and how).
Here are the versions we are currently using:
Engine
ovirt-engine-3.5.6.2-1.el6.noarch
Nodes
vdsm-4.16.34-0.el7.centos.x86_64
sanlock-3.2.4-1.el7.x86_64
libvirt-lock-sanlock-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-1.2.17-13.el7_2.3.x86_64
libvirt-lock-sanlock-1.2.17-13.el7_2.3.x86_64
libvirt-1.2.17-13.el7_2.3.x86_64
-- snip --
May 30 09:55:27 vm2 sanlock[1094]: 2016-05-30 09:55:27+0200 294109
[60137]: verify_leader 2 wrong space name
4643f652-8014-4951-8a1a-02af41e67d08
f757b127-a951-4fa9-bf90-81180c0702e6
/dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids
May 30 09:55:27 vm2 sanlock[1094]: 2016-05-30 09:55:27+0200 294109
[60137]: leader1 delta_acquire_begin error -226 lockspace
f757b127-a951-4fa9-bf90-81180c0702e6 host_id 2
May 30 09:55:27 vm2 sanlock[1094]: 2016-05-30 09:55:27+0200 294109
[60137]: leader2 path /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids offset 0
May 30 09:55:27 vm2 sanlock[1094]: 2016-05-30 09:55:27+0200 294109
[60137]: leader3 m 12212010 v 30003 ss 512 nh 0 mh 1 oi 2 og 8 lv 0
May 30 09:55:27 vm2 sanlock[1094]: 2016-05-30 09:55:27+0200 294109
[60137]: leader4 sn 4643f652-8014-4951-8a1a-02af41e67d08 rn
1eed8aa9-8fb5-4d27-8d1c-03ebce2c36d4.vm2.intern ts 3786679 cs 1474f033
May 30 09:55:28 vm2 sanlock[1094]: 2016-05-30 09:55:28+0200 294110
[1099]: s9703 add_lockspace fail result -226
May 30 09:55:58 vm2 sanlock[1094]: 2016-05-30 09:55:58+0200 294140
[60331]: verify_leader 2 wrong space name
4643f652-8014-4951-8a1a-02af41e67d08
f757b127-a951-4fa9-bf90-81180c0702e6
/dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids
May 30 09:55:58 vm2 sanlock[1094]: 2016-05-30 09:55:58+0200 294140
[60331]: leader1 delta_acquire_begin error -226 lockspace
f757b127-a951-4fa9-bf90-81180c0702e6 host_id 2
May 30 09:55:58 vm2 sanlock[1094]: 2016-05-30 09:55:58+0200 294140
[60331]: leader2 path /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids offset 0
May 30 09:55:58 vm2 sanlock[1094]: 2016-05-30 09:55:58+0200 294140
[60331]: leader3 m 12212010 v 30003 ss 512 nh 0 mh 1 oi 2 og 8 lv 0
May 30 09:55:58 vm2 sanlock[1094]: 2016-05-30 09:55:58+0200 294140
[60331]: leader4 sn 4643f652-8014-4951-8a1a-02af41e67d08 rn
1eed8aa9-8fb5-4d27-8d1c-03ebce2c36d4.vm2.intern ts 3786679 cs 1474f033
May 30 09:55:59 vm2 sanlock[1094]: 2016-05-30 09:55:59+0200 294141
[1098]: s9704 add_lockspace fail result -226
May 30 09:56:05 vm2 sanlock[1094]: 2016-05-30 09:56:05+0200 294148
[1094]: s1527 check_other_lease invalid for host 0 0 ts 7566376 name in
4643f652-8014-4951-8a1a-02af41e67d08
May 30 09:56:05 vm2 sanlock[1094]: 2016-05-30 09:56:05+0200 294148
[1094]: s1527 check_other_lease leader 12212010 owner 1 11 ts 7566376 sn
f757b127-a951-4fa9-bf90-81180c0702e6 rn
f888524b-27aa-4724-8bae-051f9e950a21.vm1.intern
May 30 09:56:28 vm2 sanlock[1094]: 2016-05-30 09:56:28+0200 294170
[60496]: verify_leader 2 wrong space name
4643f652-8014-4951-8a1a-02af41e67d08
f757b127-a951-4fa9-bf90-81180c0702e6
/dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids
May 30 09:56:28 vm2 sanlock[1094]: 2016-05-30 09:56:28+0200 294170
[60496]: leader1 delta_acquire_begin error -226 lockspace
f757b127-a951-4fa9-bf90-81180c0702e6 host_id 2
May 30 09:56:28 vm2 sanlock[1094]: 2016-05-30 09:56:28+0200 294170
[60496]: leader2 path /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids offset 0
May 30 09:56:28 vm2 sanlock[1094]: 2016-05-30 09:56:28+0200 294170
[60496]: leader3 m 12212010 v 30003 ss 512 nh 0 mh 1 oi 2 og 8 lv 0
May 30 09:56:28 vm2 sanlock[1094]: 2016-05-30 09:56:28+0200 294170
[60496]: leader4 sn 4643f652-8014-4951-8a1a-02af41e67d08 rn
1eed8aa9-8fb5-4d27-8d1c-03ebce2c36d4.vm2.intern ts 3786679 cs 1474f033
May 30 09:56:29 vm2 sanlock[1094]: 2016-05-30 09:56:29+0200 294171
[6415]: s9705 add_lockspace fail result -226
May 30 09:56:58 vm2 sanlock[1094]: 2016-05-30 09:56:58+0200 294200
[60645]: verify_leader 2 wrong space name
4643f652-8014-4951-8a1a-02af41e67d08
f757b127-a951-4fa9-bf90-81180c0702e6
/dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids
May 30 09:56:58 vm2 sanlock[1094]: 2016-05-30 09:56:58+0200 294200
[60645]: leader1 delta_acquire_begin error -226 lockspace
f757b127-a951-4fa9-bf90-81180c0702e6 host_id 2
May 30 09:56:58 vm2 sanlock[1094]: 2016-05-30 09:56:58+0200 294200
[60645]: leader2 path /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids offset 0
May 30 09:56:58 vm2 sanlock[1094]: 2016-05-30 09:56:58+0200 294200
[60645]: leader3 m 12212010 v 30003 ss 512 nh 0 mh 1 oi 2 og 8 lv 0
May 30 09:56:58 vm2 sanlock[1094]: 2016-05-30 09:56:58+0200 294200
[60645]: leader4 sn 4643f652-8014-4951-8a1a-02af41e67d08 rn
1eed8aa9-8fb5-4d27-8d1c-03ebce2c36d4.vm2.intern ts 3786679 cs 1474f033
May 30 09:56:59 vm2 sanlock[1094]: 2016-05-30 09:56:59+0200 294201
[6373]: s9706 add_lockspace fail result -226
May 30 09:57:28 vm2 sanlock[1094]: 2016-05-30 09:57:28+0200 294230
[60806]: verify_leader 2 wrong space name
4643f652-8014-4951-8a1a-02af41e67d08
f757b127-a951-4fa9-bf90-81180c0702e6
/dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids
May 30 09:57:28 vm2 sanlock[1094]: 2016-05-30 09:57:28+0200 294230
[60806]: leader1 delta_acquire_begin error -226 lockspace
f757b127-a951-4fa9-bf90-81180c0702e6 host_id 2
May 30 09:57:28 vm2 sanlock[1094]: 2016-05-30 09:57:28+0200 294230
[60806]: leader2 path /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids offset 0
May 30 09:57:28 vm2 sanlock[1094]: 2016-05-30 09:57:28+0200 294230
[60806]: leader3 m 12212010 v 30003 ss 512 nh 0 mh 1 oi 2 og 8 lv 0
May 30 09:57:28 vm2 sanlock[1094]: 2016-05-30 09:57:28+0200 294230
[60806]: leader4 sn 4643f652-8014-4951-8a1a-02af41e67d08 rn
1eed8aa9-8fb5-4d27-8d1c-03ebce2c36d4.vm2.intern ts 3786679 cs 1474f033
May 30 09:57:29 vm2 sanlock[1094]: 2016-05-30 09:57:29+0200 294231
[6399]: s9707 add_lockspace fail result -226
May 30 09:57:58 vm2 sanlock[1094]: 2016-05-30 09:57:58+0200 294260
[60946]: verify_leader 2 wrong space name
4643f652-8014-4951-8a1a-02af41e67d08
f757b127-a951-4fa9-bf90-81180c0702e6
/dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids
May 30 09:57:58 vm2 sanlock[1094]: 2016-05-30 09:57:58+0200 294260
[60946]: leader1 delta_acquire_begin error -226 lockspace
f757b127-a951-4fa9-bf90-81180c0702e6 host_id 2
May 30 09:57:58 vm2 sanlock[1094]: 2016-05-30 09:57:58+0200 294260
[60946]: leader2 path /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids offset 0
May 30 09:57:58 vm2 sanlock[1094]: 2016-05-30 09:57:58+0200 294260
[60946]: leader3 m 12212010 v 30003 ss 512 nh 0 mh 1 oi 2 og 8 lv 0
May 30 09:57:58 vm2 sanlock[1094]: 2016-05-30 09:57:58+0200 294260
[60946]: leader4 sn 4643f652-8014-4951-8a1a-02af41e67d08 rn
1eed8aa9-8fb5-4d27-8d1c-03ebce2c36d4.vm2.intern ts 3786679 cs 1474f033
May 30 09:57:59 vm2 sanlock[1094]: 2016-05-30 09:57:59+0200 294261
[6352]: s9708 add_lockspace fail result -226
-- snip --
The sanlock log_dump also shows errors:
-- snip --
2016-05-30 09:53:23+0200 7526415 [1017]: s567 check_other_lease invalid
for host 0 0 ts 7566376 name in 4643f652-8014-4951-8a1a-02af41e67d08
2016-05-30 09:53:23+0200 7526415 [1017]: s567 check_other_lease leader
12212010 owner 1 11 ts 7566376 sn f757b127-a951-4fa9-bf90-81180c0702e6
rn f888524b-27aa-4724-8bae-051f9e950a21.vm1.intern
2016-05-30 09:53:33+0200 7526425 [1017]: s568 check_other_lease invalid
for host 0 0 ts 3786679 name in f757b127-a951-4fa9-bf90-81180c0702e6
2016-05-30 09:53:33+0200 7526425 [1017]: s568 check_other_lease leader
12212010 owner 2 8 ts 3786679 sn 4643f652-8014-4951-8a1a-02af41e67d08 rn
1eed8aa9-8fb5-4d27-8d1c-03ebce2c36d4.vm2.intern
2016-05-30 09:53:33+0200 7526425 [1017]: s568 check_other_lease invalid
for host 0 0 ts 6622415 name in f757b127-a951-4fa9-bf90-81180c0702e6
2016-05-30 09:53:33+0200 7526425 [1017]: s568 check_other_lease leader
12212010 owner 3 14 ts 6622415 sn 4643f652-8014-4951-8a1a-02af41e67d08
rn 51c8f8e2-f9d8-462c-866c-e4052213ea81.vm3.intern
2016-05-30 09:53:33+0200 7526425 [1017]: s568 check_other_lease invalid
for host 0 0 ts 6697413 name in f757b127-a951-4fa9-bf90-81180c0702e6
2016-05-30 09:53:33+0200 7526425 [1017]: s568 check_other_lease leader
12212010 owner 4 4 ts 6697413 sn 4643f652-8014-4951-8a1a-02af41e67d08 rn
8d4f32dc-f595-4254-bfcb-d96e5057e110.vm4.intern
2016-05-30 09:53:33+0200 7526425 [1017]: s568 check_other_lease invalid
for host 0 0 ts 7563413 name in f757b127-a951-4fa9-bf90-81180c0702e6
2016-05-30 09:53:33+0200 7526425 [1017]: s568 check_other_lease leader
12212010 owner 5 8 ts 7563413 sn 4643f652-8014-4951-8a1a-02af41e67d08 rn
9fe564e6-cf40-4403-9fc0-eb118e1ee2cf.vm5.intern
2016-05-30 09:53:33+0200 7526425 [1017]: s568 check_other_lease invalid
for host 0 0 ts 6129706 name in f757b127-a951-4fa9-bf90-81180c0702e6
2016-05-30 09:53:33+0200 7526425 [1017]: s568 check_other_lease leader
12212010 owner 6 169 ts 6129706 sn 4643f652-8014-4951-8a1a-02af41e67d08
rn b99ff588-f1ad-43f2-bd6c-6869f54d424d.vm1-tiny.i
2016-05-30 09:53:47+0200 7526439 [14576]: cmd_read_resource 23,58
/dev/c2212f15-35b7-4fa0-b13c-b50befda0af9/leases:1048576
-- snip --
Any hint would be highly appreciated.
Juergen
8 years, 6 months
Questions on oVirt
by Brett I. Holcomb
After using oVirt for about three months I have some questions that haven't really been answered in any of the documentation, posts, or searches. Or, more correctly, I've found some answers but am trying to put the pieces together.
My setup is one physical host that used to run VMware ESXi6 and it
handled running the VMs on an iSCSI LUN on a Synology 3615xs unit. I
have one physical Windows workstation and all the servers, DNS, DHCP,
file, etc. are VMs. The VMs are on an iSCSI LUN on the Synology.
* Hosted-engine deployment - running the Engine as a VM. This has the advantage of needing only one machine, which acts as the host and runs the Engine as a VM, but what are the cons?
* Can I run the Engine on the host that will run the VMs without running it in a VM? That is, I install the OS on my physical box, install the Engine, then set up storage (iSCSI LUN), networking, etc.
* How do I run more than one Engine? With just one there is no redundancy, so can I run another Engine that accesses the same data center, etc. as the first? Or does each Engine have to have its own data center, with redundancy achieved by migrating between the Engines' data centers as needed?
* Given that I have a hosted-engine setup, how can I "undo" it and get to running the Engine directly on the host? Do I have to undo everything, or can I just install another instance of the Engine on the host (not in a VM), move the VMs to it, and then remove the Engine VM?
* System shutdown - If I shut down the host, what is the proper procedure? Go into global maintenance mode and then shut down the host, or do I have to take some other steps to make sure the VMs don't get corrupted? On ESXi we'd put a host into maintenance mode after shutting down or moving the VMs, so I assume it's the same here: shut down the VMs (since there is nowhere to move them), go into global maintenance mode, and shut down. On startup the Engine will come up, then I start my VMs.
* Upgrading engine and host - Do I just run yum to install the new versions on the host and engine and then run engine-setup, or do I need to go into maintenance mode first? I assume the 4.0 production upgrade will be much more involved, but hopefully keeping updated will make it a little less painful.
Thanks.
8 years, 6 months
Re: [ovirt-users] Questions on oVirt
by Charles Tassell
Hi Brett,
I'm not an expert on oVirt, but from my experience I would say you probably want to run the engine as a VM rather than on bare metal. It has a lot of moving parts (PostgreSQL, JBoss, etc.) and they all fit well inside a VM. You can run it right on bare metal if you want, though, as that was the preferred approach for versions prior to 3.6. Also, you don't need to allocate the recommended 16GB of RAM to it if you are only running 5-10 VMs; you can probably get by with a 2-4GB VM, which makes it more palatable.
The thing to realize with oVirt is that the Engine is not the hypervisor; the Engine is just a management tool. If it crashes, all the VMs continue to run fine without it, so you can just start it back up and it will resume managing everything. If you only have one physical host you don't really need to worry too much about redundancy. I don't think you can assign a host to two Engines at the same time, but I might be wrong about that.
If you want to migrate between a hosted engine and bare metal (or vice versa) you can use the engine-backup command to back up and then restore (same command, different arguments) the configuration. I've never done it, but it should work fine.
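A hedged sketch of that flow (the exact restore options differ between versions, so check engine-backup --help on your release first):
engine-backup --mode=backup --scope=all --file=engine-backup.tar.bz2 --log=engine-backup.log
# reinstall ovirt-engine on the target machine, then:
engine-backup --mode=restore --file=engine-backup.tar.bz2 --log=engine-restore.log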
For a system shutdown, I would shut down all of the VMs (do the hosted engine last) and then just shut down the box. I'm not sure whether maintenance mode is actually required or not, so I'd defer to someone with more experience. I know I have done it this way and it doesn't seem to have caused any problems.
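If you do want to use global maintenance on a hosted-engine setup, it is toggled from one of the hosts with the hosted-engine tool, roughly like this:
hosted-engine --set-maintenance --mode=global   # before shutting everything down
hosted-engine --set-maintenance --mode=none     # once the hosts are back up again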
For upgrades, I'd say shut down all of the VMs (including the hosted engine), then apply your updates, reboot as necessary, and then start the VMs back up. Once everything is up, ssh into the hosted engine, update it (yum update), reboot as necessary, and you are good to go. If you have a multi-host system that's a bit different: in that case put a host into maintenance mode; migrate all the VMs to other hosts; update it and reboot it; set it as active; migrate the VMs back and move on to the next host, doing the same thing. The reason you want to shut down all the VMs is that upgrades to the KVM/qemu packages may crash running VMs. I've seen this happen on Ubuntu, so I assume it's the same on Red Hat/CentOS.
As for the 4.0 branch, I'd give it a month or two of being out before
you use it for a production system. I started with oVirt just as 3.6
came out and ran into some bugs that made it quite complicated. On the
positive side, I learned a lot about how it works from getting advice on
how to deal with those issues. :)
8 years, 6 months