Re: [ovirt-users] 4.2 downgrade
by Yaniv Kaul
On Sep 30, 2017 8:09 AM, "Ryan Mahoney" <ryan(a)beaconhillentertainment.com>
wrote:
Accidentally upgraded a 4.0 environment to 4.2 (didn't realize the "master"
repo was a development repo). What are my chances/best way, if possible, to
roll back to 4.0 (or 4.1 for that matter)?
There is no rollback of an oVirt installation.
That being said, I believe the Alpha quality is good. It is not feature
complete and we of course have more polishing to do, but it's very usable
and we will continue to ship updates to it. Let us know promptly what
issues you encounter.
Y.
DR with oVirt: no data on OVF_STORE
by Luca 'remix_tj' Lorenzetto
Hello,
I'm experimenting with a DR architecture that involves block storage
replication on the storage side (EMC VNX 8000).
Our idea is to import the replicated data storage domain into another
datacenter, managed by another engine, with the "Import Domain" option,
and then import all the VMs it contains.
The idea works, but we hit an issue we don't want to have again: we
imported an SD and *no* VMs were listed in the "VM Import" tab. The disks
were available, but there was no VM information.
What we did:
- on the storage side: split the replica between the main disk in Site A
and the secondary disk in Site B
- on the storage side: added the disk to the storage group of the
"recovery" cluster
- from the engine UI: imported the storage domain, confirming that I
wanted to activate it even if it seems to be attached to another DC
- from the engine UI: moved the storage domain out of maintenance and
clicked on the "VM Import" tab of the new SD.
What happened: *no* VMs were listed
To better identify what was happening, I found here some indications on
how LVM is used for block storage domains, and I identified the commands
to find and read the OVF_STORE.
Looking inside the OVF_STORE showed why no VMs were listed: it was empty
(no .ovf files listed with tar tvf).
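For anyone wanting to repeat the check, the inspection on a block storage
domain looks roughly like this (the UUIDs/LV names below are placeholders,
and the exact tags can differ between versions):

  lvs -o lv_name,lv_tags <sd_vg_uuid>        # locate the OVF_STORE LVs in the domain VG
  lvchange -ay <sd_vg_uuid>/<ovf_store_lv>   # activate the volume
  tar tvf /dev/<sd_vg_uuid>/<ovf_store_lv>   # list the .ovf files it should contain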
So, without the possibility to import VMs, I rolled back, detaching the
storage domain and re-establishing the replication between the main site
and the DR site.
Then, after a day of replication (the secondary volume is aligned every
30 minutes), I tried again and was also able to import the VMs (the
OVF_STORE was populated).
So my question is: how do I force the OVF_STORE to be updated at least as
frequently as the replication? I want the VM disks replicated to the
remote site along with the VM OVF information.
Is it possible to have the OVF_STORE information updated when a VM is
created/edited, or with a scheduled task? Would that be too I/O expensive?
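For reference, the OVF update interval is reportedly tunable on the engine
side; assuming the OvfUpdateIntervalInMinutes key exists in this engine
version, something like the following should lower it (please verify on
your own setup first):

  engine-config -g OvfUpdateIntervalInMinutes     # show the current interval
  engine-config -s OvfUpdateIntervalInMinutes=30  # match the 30-minute replication
  systemctl restart ovirt-engine                  # restart the engine to apply it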
Thank you,
Luca
--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
oVirt 4.2 hosted-engine command damaged
by Julián Tete
I updated my lab environment from oVirt 4.1.x to oVirt 4.2 Alpha.
The hosted-engine command is now broken.
An example:
hosted-engine --vm-status
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 213, in <module>
    if not status_checker.print_status():
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 110, in print_status
    all_host_stats = self._get_all_host_stats()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py", line 75, in _get_all_host_stats
    all_host_stats = ha_cli.get_all_host_stats()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 154, in get_all_host_stats
    return self.get_all_stats(self.StatModes.HOST)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 99, in get_all_stats
    stats = broker.get_stats_from_storage(service)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 147, in get_stats_from_storage
    for host_id, data in six.iteritems(result):
  File "/usr/lib/python2.7/site-packages/six.py", line 599, in iteritems
    return d.iteritems(**kw)
AttributeError: 'NoneType' object has no attribute 'iteritems'
hosted-engine --set-maintenance --mode=none
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", line 88, in <module>
    if not maintenance.set_mode(sys.argv[1]):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", line 76, in set_mode
    value=m_global,
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 240, in set_maintenance_mode
    str(value))
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 187, in set_global_md_flag
    all_stats = broker.get_stats_from_storage(service)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 147, in get_stats_from_storage
    for host_id, data in six.iteritems(result):
  File "/usr/lib/python2.7/site-packages/six.py", line 599, in iteritems
    return d.iteritems(**kw)
AttributeError: 'NoneType' object has no attribute 'iteritems'
hosted-engine --vm-start
VM exists and its status is Up
Hardware
Manufacturer: HP
Family: ProLiant
Product Name: ProLiant BL460c Gen8
CPU Model Name: Intel(R) Xeon(R) CPU E5-2667 v2 @ 3.30GHz
CPU Type: Intel SandyBridge Family
CPU Sockets: 2
CPU Cores per Socket: 8
CPU Threads per Core: 2 (SMT Enabled)
Software:
OS Version: RHEL - 7 - 4.1708.el7.centos
OS Description: CentOS Linux 7 (Core)
Kernel Version: 4.12.0 - 1.el7.elrepo.x86_64
KVM Version: 2.9.0 - 16.el7_4.5.1
LIBVIRT Version: libvirt-3.2.0-14.el7_4.3
VDSM Version: vdsm-4.20.3-95.git0813890.el7.centos
SPICE Version: 0.12.8 - 2.el7.1
GlusterFS Version: glusterfs-3.12.1-2.el7
CEPH Version: librbd1-0.94.5-2.el7
Re: [ovirt-users] Engine crash, storage won't activate, hosts won't shutdown, template locked, gpu passthrough failed
by Yaniv Kaul
On Sep 30, 2017 7:50 PM, "M R" <gr8nextmail(a)gmail.com> wrote:
Hello!
I have been using oVirt for the last four weeks, testing and trying to get
things working.
I have collected here the problems I have found. This might be a bit long,
but help with any of them, or ideally all of them, from several people
would be wonderful.
It's a bit difficult and inefficient to list all issues in a single post -
unless you feel they are related?
Also, it'd be challenging to understand them without logs.
Lastly, it's usually a good habit, when something doesn't work, to solve it
before continuing. I do suspect your issues are somehow related.
Y.
My version is oVirt Node 4.1.5 and 4.1.6, downloaded from the website as
the latest stable release at the time. I also tested with CentOS minimal +
the oVirt repo; in that case, issue 3 is solved, but the other problems
persist.
1. Power off host
The first day after installing oVirt Node, it was able to reboot and shut
down cleanly. No problems at all. After a few days of using oVirt, I
noticed that the hosts are unable to shut down. I have tested this in
several different ways and come to the following conclusion: IF the engine
has not been started after boot, all hosts are able to shut down cleanly.
But if the engine is started even once, none of the hosts are able to shut
down anymore. The only way to power off is to unplug the host or hold the
power button for a hard reset. I have failed to find a way to have the
engine running and then shut down a host. This affects all hosts in the
cluster.
2. GlusterFS failed
Every time I have booted the hosts, GlusterFS has failed. For some reason
it goes to an inactive state even though I ran systemctl enable glusterd.
Before that command it was just "inactive"; after it, it says "failed
(inactive)". There is still a way to get GlusterFS working: I have to run
systemctl start glusterd manually and then everything starts working.
Why do I have to run manual commands to start GlusterFS? I have used it on
CentOS before and never had this problem. Is the Node installer that
different from plain CentOS?
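For reference, the manual workaround I use is roughly this (plain systemd,
nothing oVirt-specific):

  systemctl enable glusterd      # should make it start at boot, but apparently doesn't
  systemctl start glusterd       # what I currently have to run by hand after every boot
  systemctl is-enabled glusterd  # verify the unit really is enabled
  systemctl status glusterd -l   # check why it shows "failed (inactive)"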
3. EPEL
As I said, I have used CentOS before, and I would like to be able to
install some packages from the repo. But even if I install epel-release,
it won't find packages such as nano or htop. I have read about how to add
EPEL to oVirt Node here: https://www.ovirt.org/release/4.1.1/#epel
I have even tried manually editing the repo list, but it still fails to
find normal EPEL packages. I have added the exclude=collectd* line as
guided in the link above. This doesn't make any difference. That said, I
am able to manually install packages that were downloaded on another
CentOS machine and transferred with scp to the oVirt Node. Still, this
needs a lot of manual input and is just a workaround for the bug.
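For completeness, this is roughly the setup I understood from that page
(the repo file name and section may differ on your system, so treat it as
a sketch):

  yum install epel-release
  # then add to the [epel] section of /etc/yum.repos.d/epel.repo:
  #   exclude=collectd*
  yum install htop nano   # this is the step that still fails for me on oVirt Node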
4. Engine startup
When I try to start the engine while GlusterFS is up, it says "VM doesn't
exist, starting up". Still, it won't start automatically. I have to run
hosted-engine --vm-start several times, waiting about 5 minutes between
attempts. This usually takes about 30 minutes, and then, completely
randomly, after one of the attempts the engine shoots up and is running
within a minute. This has happened on every boot, and the number of
attempts needed keeps changing: at best it has been the 3rd try, at worst
the 7th. That works out to somewhere between 15 and 35 minutes to get the
engine up. Nevertheless, it does eventually come up every time. If there
is a way to get it up on the first try, or even better automatically, that
would be great.
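A few standard things to look at while it is failing to start (the usual
hosted-engine HA pieces; commands may vary slightly by version):

  systemctl status ovirt-ha-agent ovirt-ha-broker  # the HA services that should start the engine VM
  hosted-engine --vm-status                        # what the agents think the engine VM state is
  journalctl -u ovirt-ha-agent -b                  # agent log for the current boot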
5. Activate storage
Once the engine is up, there is a problem with storage. When I go to the
storage tab, all storage domains show red. Even if I wait 15~20 minutes,
the storage won't turn green by itself. I have to press the Activate
button on the main data storage domain, and then the main storage comes up
in 2~3 minutes. Sometimes it fails once, but it definitely gets the main
data storage up on the second try. And then, magically, all the other
storage domains instantly go green at the same time. The main storage is
GlusterFS and I have 3 NFS storage domains as well. This is only a problem
at startup; once the storage domains are green they stay green. Still,
it's annoying that it cannot get this done by itself.
6. Template locked
I tried to create a template from an existing VM, and it resulted in the
original VM going into a locked state and the template being locked. I
have read that other people had a similar problem and were advised to
restart the engine to see if that solves it. For me it has now been a week
and several restarts of both engine and hosts, but there is still one VM
locked and the template locked as well. This is not a big problem, but
still a problem. Everything is greyed out and I cannot delete this bugged
VM or template.
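A database-side unlock tool ships with the engine and is often suggested
for stuck locks; the exact flags can differ per version, so check its -h
output first:

  /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -q              # list locked entities
  /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t template <temp_id>  # unlock one template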
7. Unable to use GPU
I have been trying to do GPU passthrough to my VM. First there was a
problem with the qemu command line, but once I figured out a way to pass
the commands, it may be working(?). The log looks fine, but it still
doesn't give the functionality I'm looking for. As I mentioned in the
other email, I have found this:
https://www.mail-archive.com/users@ovirt.org/msg40422.html
It gives the right syntax in the log, but still won't fix error 43 with
the nvidia drivers. If anybody got this working or has ideas how to do it
properly, I would really like to know. I have also tested with AMD
graphics cards such as Vega, but as soon as the drivers are installed I
get a black screen, even if I restart the VM, the hosts, or both. I only
see a black screen and cannot use the VM at all. I might be able to live
with the other six things listed above, but this one is a real problem for
me. My use of VMs will eventually need graphical performance, so I will
have to get this working or find an alternative to oVirt. I have found
several things that I really like in oVirt and would prefer to keep using
it.
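For context, the value that goes into the qemu_cmdline custom property is
a JSON list of extra qemu arguments; the variant discussed in that thread
looks roughly like this (the CPU model and vendor id string here are
placeholders, not tested values):

  ["-cpu", "host,kvm=off,hv_vendor_id=1234567890ab"]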
Best regards
Mikko
Passing through a display port to a GUEST vm
by Alexander Witte
Hi,

Our server has 2 display ports on an integrated graphics card. One port
displays the host OS (Centos7 with KVM installed) and we would like the
second display port to display one of the GUEST VMs (a Windows 10 server).
I was just curious if anyone had set this kind of thing up before or if
this is even possible as there is no external video card. This is all in
an oVirt environment.

If the passthrough on the display port is not possible I was thinking
maybe of using a usb to hdmi adapter and passing through the USB port to
the guest VM?

Here's the server we're using:

https://www.menmicro.com/products/box-pcs/bl70w/

If anyone has done this or has any thoughts it would be helpful!

Thanks,

Alex Witte
Re: [ovirt-users] Qemu prevents vm from starting up properly
by M R
Hello!
I may have found a way to do this.
I found this older mail archive post where a similar problem was described:
https://www.mail-archive.com/users@ovirt.org/msg40422.html
With this, the -cpu arguments show up correctly in the log.
But it still won't fix nvidia error 43, which is an annoying "bug"
implemented by nvidia.
I have several GTX graphics cards collecting dust and would like to use
them, but so far I have failed to do so...
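To double-check what actually reached qemu, the generated command line can
be inspected on the host (paths are the usual CentOS 7 ones):

  ps aux | grep [q]emu-kvm                  # the running process shows the final -cpu string
  less /var/log/libvirt/qemu/<vm_name>.log  # libvirt logs the full command line it launched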
best regards
Mikko
On Thu, Sep 28, 2017 at 8:46 AM, Yedidyah Bar David <didi(a)redhat.com> wrote:
> On Wed, Sep 27, 2017 at 8:32 PM, M R <gr8nextmail(a)gmail.com> wrote:
> > Hello!
> >
> > Thank you very much! I had misunderstood how it was supposed to be
> > written in qemu_cmdline. There was a typo in the syntax and the error
> > log revealed it. It is working IF I use ["-spice", "tls-ciphers=DES-CBC3-SHA"].
> > So I believe that installation is correctly done.
> >
> > Though, my problem still exists.
> > This is what I have been trying to use for qemu_cmdline:
> > ["-cpu", "kvm=off, hv_vendor_id=sometext"]
> > It does not work and most likely is incorrectly written.
>
> You should first come up with something that works when you try it
> manually, then try adapting that to the hook's syntax.
>
> >
> > I understood that qemu commands are often exported into xml files and the
> > command I'm trying to write is the following:
> >
> > <features>
> > <hyperv>
> > <vendor_id state='on' value='customvalue'/>
> > </hyperv>
> > <kvm>
> > <hidden state='on'/>
> > </kvm>
> > </features>
>
> I guess you refer above to libvirt xml. This isn't strictly
> related to qemu, although in practice most usage of libvirt
> is with qemu.
>
> >
> > How do I write this in custom properties for qemu_cmdline?
>
> If you have a working libvirt vm, with the options you need,
> simply check how it translated your xml to qemu's command line.
> You can see this either in its logs, or using ps.
>
> Best,
>
> >
> >
> > best regards
> >
> > Mikko
> >
> >
> >
> > On 27 Sep 2017 3:27 pm, "Yedidyah Bar David" <didi(a)redhat.com> wrote:
> >>
> >> On Wed, Sep 27, 2017 at 1:14 PM, M R <gr8nextmail(a)gmail.com> wrote:
> >> > Hello!
> >> >
> >> > I did check logs from hosts, but didnt notice anything that would help
> >> > me. I
> >> > can copy paste logs later.
> >> >
> >> > I was not trying to make qemu crash the VM.
> >> > I'm trying to add new functionality with qemu.
> >> >
> >> > I wasn't sure if my syntax was correct, so I copy-pasted the example
> >> > command for spice from that website. And it still behaves similarly.
> >> >
> >> > My conclusion is that qemu cmdline is set up wrong or it's not
> >> > working at all. But I don't know how to check that.
> >>
> >> Please check/share /var/log/libvirt/qemu/* and /var/log/vdsm/* . Thanks.
> >>
> >> >
> >> > On 27 Sep 2017 12:32, "Yedidyah Bar David" <didi(a)redhat.com> wrote:
> >> >>
> >> >> On Wed, Sep 27, 2017 at 11:32 AM, M R <gr8nextmail(a)gmail.com> wrote:
> >> >> > Hello!
> >> >> >
> >> >> > I have followed instructions in
> >> >> > https://www.ovirt.org/develop/developer-guide/vdsm/hook/qemucmdline/
> >> >> >
> >> >> > After adding any command for qemu cmdline, vm will try to start,
> >> >> > but will immediately shutdown.
> >> >> >
> >> >> > Is this a bug?
> >> >>
> >> >> If you intended, with the params you passed, to make qemu fail, for
> >> >> whatever reason (debugging qemu?), then it's not a bug :-) Otherwise,
> >> >> it is, but we can't know where.
> >> >>
> >> >> > or is the information in the link insufficient?
> >> >> > If it would be possible to confirm this and if there's a way to fix,
> >> >> > I would really like to have a step by step guide of how to get this
> >> >> > working.
> >> >>
> >> >> Did you check relevant logs? vdsm/libvirt?
> >> >>
> >> >> Best,
> >> >> --
> >> >> Didi
> >>
> >>
> >>
> >> --
> >> Didi
>
>
>
> --
> Didi
>
4.2 downgrade
by Ryan Mahoney
Accidentally upgraded a 4.0 environment to 4.2 (didn't realize the "master"
repo was a development repo). What are my chances/best way, if possible, to
roll back to 4.0 (or 4.1 for that matter)?
changing ip of host and its ovirtmgmt vlan
by Gianluca Cecchi
Hello,
a host not maintained by me was modified so that its management network
had become ovirtmgmntZ2Z3.
Originally the host had been added to the engine with its hostname and not
its IP, and this simplifies things.
So a DNS entry change was done while the host was in maintenance
(as far as I have understood...).
The guy changed the /etc/sysconfig/network-scripts/ files and apparently
it was activated OK, but when the host rebooted the config was reverted
due to the VDSM network persistence.
As he urgently needed this host to become operational again, in the
meantime I worked like this, and now have a working host:
- modified the /etc/sysconfig/network-scripts/ files with the new required
configuration
- modified the files under /var/lib/vdsm/persistence/netconf/nets/, e.g.
the file ovirtmgmntZ2Z3, with the correct IP and VLAN information
- ran sync
- then powered the host off and on
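In short, the manual part was only this (the persistence file format may
differ between VDSM versions, so take it as a sketch of what was touched):

  vi /etc/sysconfig/network-scripts/ifcfg-<mgmt_iface>      # new IP/VLAN settings
  vi /var/lib/vdsm/persistence/netconf/nets/ovirtmgmntZ2Z3  # keep the VDSM persisted config in sync
  sync
  # then power the host off and on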
The host comes up fine, and as it had previously been put into
maintenance, it could be activated and power on some VMs.
Can I consider this workflow OK, or is there any IP/network information
for the host stored in the engine DB or in other parts of the engine or
hosts?
I have then a question for the ovirtmgmt logical network itself, but I will
open a new thread for it...
Thanks in advance,
Gianluca
Real Noob question- setting a static IP on host
by Alexander Witte
I am incredibly sorry over this noob question but I am really bashing my
head trying to simply change an IP address on an Ovirt host. oVirt was
pushed to this host through the server web interface. It is running on top
of Centos 7.

From the docs it says to log into the host and edit the ifcfg-ovirtmgmt
file and I have done and here are the latest settings:

#Generated by VDSM version 4.19.28-1.e17.centos
DEVICE:ovirtmgmt
TYPE:Bridge
DELAY=0
STP=off
ONBOOT=yes
BOOTPROTO=none
MTU=1500
DEFROUTE=yes
NM_CONTROLLED=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPADDR=10.0.0.226
GATEWAY=10.0.0.1
PREFIX=13
DNS1=10.0.0.9
DNS2=8.8.8.8

The server can reach everything on the network fine. Although it cannot be
reached through the oVirt web interface and the host is in a "connecting"
status. In the oVirt web interface if I attempt to edit the NIC settings
from DHCP to Static to reflect the changes Ive made above I run into this
error:

  *   Cannot setup Networks. Another Setup Networks or Host Refresh
      process in progress on the host. Please try later.

What is the correct procedure to change a host management IP from DHCP to
STATIC? Should I make these changes manually on the host or through the
NIC settings in the oVirt web interface (when I tried this it just seemed
to hang..)

Any help is greatly appreciated.

Thanks!!

Alex
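A side note on the pasted file: ifcfg files use KEY=VALUE syntax, so if
the colons on the first two lines are not just a transcription slip, those
entries would need to look like this:

  DEVICE=ovirtmgmt
  TYPE=Bridge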