HE Fails to install on oVirt 4.3.4
by nico.kruger@darkmatter.ae
Hi Guys,
I have tried installing oVirt 4.3.4 using ovirt-node-ng-installer-4.3.4-2019061016.el7.iso and ovirt-engine-appliance-4.3-20190610.1.el7.x86_64.rpm
The Gluster install works fine, but HE deployment fails every time at the last "waiting for the host to be up" step. I have tried the deployment on multiple different hardware types, and tried both single-node and 3-node deployments; all fail at the same point.
I suspect that the IP in the HE VM is not being configured correctly: I see qemu running, but Ansible times out and cleans up the failed install.
Any ideas on why this is happening? I will try to add the logs.
I am going to try an older HE appliance RPM to see if that fixes the issue.
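In case it helps, this is roughly how I am checking whether the bootstrap engine VM ever gets an address while the deployment hangs (a sketch; the lease file location depends on the libvirt version, and the exact log file names may differ):

# On the deployment host, while "waiting for the host to be up" is running:
sudo virsh -r list --all        # the local bootstrap engine VM should show up here
# Follow the deployment log to see the last thing the setup reported:
sudo tail -f /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log
# If the VM got an address from the default libvirt network, a lease should appear in
# one of these (newer libvirt versions use the .status file instead of .leases):
sudo cat /var/lib/libvirt/dnsmasq/default.leases /var/lib/libvirt/dnsmasq/virbr0.status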
USB passthrough device problem
by Juan Pablo Lorier
Hi,
I've tried to use two different audio I/O interfaces connected to a VM via USB passthrough, and neither worked.
I've tried with Windows 10 and Windows Server 2016 VMs, with different errors, but neither worked.
I've connected the devices to a host, then attached the host device to a VM, and in the VM properties I've enabled USB support in the console properties.
The devices are installed and don't show any error in Device Manager within the VM, but they don't work.
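For reference, this is what I checked to confirm the device actually reaches the guest (a sketch; "MyVM" is a placeholder name):

# On the host, note the vendor:product ID of the audio interface:
lsusb
# Confirm the running VM really has a USB hostdev attached (read-only libvirt query):
virsh -r dumpxml MyVM | grep -A 6 "hostdev mode='subsystem' type='usb'"
# Inside a Linux guest, lsusb should show the same ID; in Windows, Device Manager
# should list it under the sound/audio device category.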
Should I do something else?
Regards
ovirt-vmconsole: Permission denied (publickey) when I select VM id
by Jonathan Gregoire
Hi all,
I'm getting "Permission denied (publickey)" when I select a VM id... Could you help?
ovirt engine: 4.3.4.3-1.el7
ovirt host: 4.3.3.1
Macbook:~ Jonathan$ ssh -t -p 2222 ovirt-vmconsole@ovirt-engine01.int.cloche.ca -i .ssh/serialconsolekey connect
Available Serial Consoles:
00 AUTO-LAB-RTR01[6a61b3e8-0ed1-4888-8f4b-e4e38cb26953]
01 AUTO-LAB-RTR02[c9f14c47-e416-4e1a-822a-0911ed4fc00e]
02 AUTO-LAB-RTR03[172ffa97-9564-4ce3-8c6f-de292b257a50]
03 AUTO-LAB-RTR04[d9f4ef8a-7fe8-42a3-8abc-f70bae20c817]
04 AUTO-LAB-RTR05[d3012c5e-53c5-4020-be79-143288ee757f]
05 AUTO-LAB-RTR06[25053bf7-d321-44a1-b72a-edf432fc4824]
06 AUTO-LAB-RTR07[f52edeca-7ec6-4c17-92c2-9b150c1f0f06]
07 AUTO-LAB-RTR08[8ad50c7c-740e-4d76-9f2e-e1915d2b17be]
08 AUTO-LAB-RTR09[9474d032-5576-4fb9-97ff-16a4b306b52f]
09 AUTO-LAB-RTR10[68625045-4065-4a53-be39-8e76a91cb6d3]
10 AUTO-LAB-RTR11[d3380c53-ab7b-4b65-88d0-2f05f43d9e88]
11 AUTO-LAB-RTR12[3ca24472-defa-48d1-91ee-7d264069243b]
12 AUTO-LAB-RTR13[62161a3c-6bf5-4c0e-83eb-3c65bcf3b3b4]
13 AUTO-LAB-RTR14[7843fe1b-65ff-42f4-8c0c-d677c9f266ee]
14 AUTO-LAB-RTR15[1e22b5dc-af3a-4c0f-8319-b982608ac85a]
15 AUTO-LAB-RTR16[ce2d9ce4-890a-46e2-a55f-76e5647f60bc]
16 AUTO-LAB-RTR17[f45f4451-5322-488d-99ad-75fb9a8871e5]
17 AUTO-LAB-RTR18[fa88f535-d11b-44de-b9e5-aad1da3e09cc]
18 AUTO-LAB-RTR19[3f5f08d1-300e-4529-8da2-b1f48d9746d0]
19 AUTO-LAB-RTR20[07adbdc7-16b6-4bb7-8dcf-14eab615b84b]
20 CDSNetScaler10-5[5bf444ad-a308-4a78-bdd6-bc1028e6ed62]
21 CDSNetScalerLB01[224198ca-ed29-44b2-ac8c-31363e96b734]
22 CDSNetScalerLB02[184eb272-2a0c-427e-9e66-94e00d4b0c2e]
23 CDSNetScalerLB03[f110b2c8-fc43-44fe-a7b1-b03c0b1562ce]
24 CDSNetScalerLB04[71c2c75c-5c5c-4605-81c4-17de1ca8ba75]
25 CDSNetScalerLB05-10[c1159ede-9627-4b25-b07d-82cd6ce6b0c0]
26 CDSNetScalerLB06-10[6c9b9b73-2de3-4038-97af-a4b46c4fcd6e]
27 jump_point-auto[51a36f5d-6fba-4ec6-b7cf-f1eb6bb8170b]
28 lab-ansible[cae59168-fb87-42b1-afe6-49712a5c0e1b]
29 lab-lxd01[8b04afc9-dd69-4da6-9f63-197817086de8]
30 lab-reverse-proxy01[2726e1f8-0875-481b-aebb-3443de690183]
31 pod2-branch01[36598184-1aab-426e-be57-d8ff653af532]
32 pod2-branch01-pc01[7af0a040-6a48-4f9b-bcf6-3f1eab6921df]
33 pod2-branch02[0d60b360-82c7-4694-9dbe-1925e8793d2b]
34 pod2-branch02-pc01[11abbc59-12e8-4cf0-96a9-f43547de9aca]
35 pod2-branch03[f8e5277e-898b-4211-b721-65b249e6ca16]
36 pod2-branch03-pc01[7cfa9afd-dd28-436e-8035-2161809e1631]
37 pod2-dc01[b917e2dc-bde6-4993-ade5-b08fc9903f51]
38 pod2-dc01-pc01[fa8f4918-095b-4d7a-a2c9-69e875a6d0ed]
39 pod2-dc02[21a80ce6-958c-47cb-812c-f0a49e397553]
40 pod2-router1[af175917-7624-4725-a4f4-90fadf9ba76a]
41 pod2-vbond01[3364c1f6-d062-489d-b8ef-28a21295b471]
42 pod2-vmanage01[c189c279-8601-46b1-a7fb-a038175f13b4]
43 pod2-vsmart01[ec0cec3c-31d6-4370-8b3b-f331d84cdc54]
44 pod2-vsmart02[68abe3b0-3ad0-49bb-9cd7-50bd41f515f9]
45 tacrad01[0bfa49bd-fe5d-4f99-8402-9f2720f50dcc]
46 vrsx1[3a210809-f6d4-4f7a-b134-50ce697fa27b]
47 vsrx2[7e2c5638-f97c-45c4-8487-153764db2fc7]
48 vsrx3[d3448254-b04a-4485-93cc-388d9ceeb54f]
49 vsrx4[b027e509-3b65-4cba-8aa6-be92c3d7bd25]
50 win2k16ad01[605409db-2469-4f61-a06f-cfd5e2f91af0]
51 win2k16ad02[31ca7536-7ed2-47e6-9e4d-87b34cc685ee]
Please, enter the id of the Serial Console you want to connect to.
To disconnect from a Serial Console, enter the sequence: <Enter><~><.>
SELECT> 49
Permission denied (publickey).
Connection to ovirt-engine01.int.cloche.ca closed.
Macbook:~ Jonathan$
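Since the console list is shown, the client key seems to be accepted by the proxy itself, so my guess is the denial happens on the proxy-to-host hop. This is how I am trying to narrow it down (a sketch; the engine-side service name is my assumption):

# Verbose SSH output shows which key is offered and which step rejects it:
ssh -vvv -t -p 2222 -i ~/.ssh/serialconsolekey ovirt-vmconsole@ovirt-engine01.int.cloche.ca connect
# On the engine, watch the console proxy while reproducing the failure:
sudo journalctl -u ovirt-vmconsole-proxy-sshd -f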
Snapshots and quotas
by Mitja Mihelič
Hi!
We are using oVirt 4.1.9 with quotas enabled. The usual quota is 40 GB per user, typically used for a web server created from a template with a 40 GB disk.
When such users try to create a snapshot of their VM, oVirt blocks the
operation with "Cannot create Snapshot. Quota has insufficient storage
resources."
I could increase the quota to twice the size, but then users would create additional disks, snapshot creation would fail again due to insufficient storage resources, and we'd be back to square one.
I could increase "Storage Grace" to 200%. It would take users longer to figure out that they can create additional disks even when the quota shows 100% full, but sooner or later we would be back to square one.
Is it possible to exclude snapshots from the quota system?
Kind regards,
Mitja
P.S.
"Storage Threshold" is currently set to 80% and "Storage Grace" to 120%.
Where can we change the default settings to something else?
E.g. new quotas would have "Storage Threshold" set to 90% and "Storage
Grace" to 160%.
ovirt-vmconsole: Permission denied (publickey) when I select VM id
by Jonathan Greg
Hi all,
I'm getting "Permission denied (publickey)" when I select a VM id... It looks like the oVirt engine cannot authenticate itself against the oVirt node. Any idea how I could fix it?
ovirt engine: 4.3.4.3-1.el7
ovirt host: 4.3.3.1
Macbook:~ Jonathan$ ssh -t -p 2222 ovirt-vmconsole@ovirt-engine01.int.cloche.ca -i .ssh/serialconsolekey connect
Available Serial Consoles:
00 AUTO-LAB-RTR01[6a61b3e8-0ed1-4888-8f4b-e4e38cb26953]
01 AUTO-LAB-RTR02[c9f14c47-e416-4e1a-822a-0911ed4fc00e]
02 AUTO-LAB-RTR03[172ffa97-9564-4ce3-8c6f-de292b257a50]
03 AUTO-LAB-RTR04[d9f4ef8a-7fe8-42a3-8abc-f70bae20c817]
04 AUTO-LAB-RTR05[d3012c5e-53c5-4020-be79-143288ee757f]
05 AUTO-LAB-RTR06[25053bf7-d321-44a1-b72a-edf432fc4824]
06 AUTO-LAB-RTR07[f52edeca-7ec6-4c17-92c2-9b150c1f0f06]
07 AUTO-LAB-RTR08[8ad50c7c-740e-4d76-9f2e-e1915d2b17be]
08 AUTO-LAB-RTR09[9474d032-5576-4fb9-97ff-16a4b306b52f]
09 AUTO-LAB-RTR10[68625045-4065-4a53-be39-8e76a91cb6d3]
10 AUTO-LAB-RTR11[d3380c53-ab7b-4b65-88d0-2f05f43d9e88]
11 AUTO-LAB-RTR12[3ca24472-defa-48d1-91ee-7d264069243b]
12 AUTO-LAB-RTR13[62161a3c-6bf5-4c0e-83eb-3c65bcf3b3b4]
13 AUTO-LAB-RTR14[7843fe1b-65ff-42f4-8c0c-d677c9f266ee]
14 AUTO-LAB-RTR15[1e22b5dc-af3a-4c0f-8319-b982608ac85a]
15 AUTO-LAB-RTR16[ce2d9ce4-890a-46e2-a55f-76e5647f60bc]
16 AUTO-LAB-RTR17[f45f4451-5322-488d-99ad-75fb9a8871e5]
17 AUTO-LAB-RTR18[fa88f535-d11b-44de-b9e5-aad1da3e09cc]
18 AUTO-LAB-RTR19[3f5f08d1-300e-4529-8da2-b1f48d9746d0]
19 AUTO-LAB-RTR20[07adbdc7-16b6-4bb7-8dcf-14eab615b84b]
20 CDSNetScaler10-5[5bf444ad-a308-4a78-bdd6-bc1028e6ed62]
21 CDSNetScalerLB01[224198ca-ed29-44b2-ac8c-31363e96b734]
22 CDSNetScalerLB02[184eb272-2a0c-427e-9e66-94e00d4b0c2e]
23 CDSNetScalerLB03[f110b2c8-fc43-44fe-a7b1-b03c0b1562ce]
24 CDSNetScalerLB04[71c2c75c-5c5c-4605-81c4-17de1ca8ba75]
25 CDSNetScalerLB05-10[c1159ede-9627-4b25-b07d-82cd6ce6b0c0]
26 CDSNetScalerLB06-10[6c9b9b73-2de3-4038-97af-a4b46c4fcd6e]
27 jump_point-auto[51a36f5d-6fba-4ec6-b7cf-f1eb6bb8170b]
28 lab-ansible[cae59168-fb87-42b1-afe6-49712a5c0e1b]
29 lab-lxd01[8b04afc9-dd69-4da6-9f63-197817086de8]
30 lab-reverse-proxy01[2726e1f8-0875-481b-aebb-3443de690183]
31 pod2-branch01[36598184-1aab-426e-be57-d8ff653af532]
32 pod2-branch01-pc01[7af0a040-6a48-4f9b-bcf6-3f1eab6921df]
33 pod2-branch02[0d60b360-82c7-4694-9dbe-1925e8793d2b]
34 pod2-branch02-pc01[11abbc59-12e8-4cf0-96a9-f43547de9aca]
35 pod2-branch03[f8e5277e-898b-4211-b721-65b249e6ca16]
36 pod2-branch03-pc01[7cfa9afd-dd28-436e-8035-2161809e1631]
37 pod2-dc01[b917e2dc-bde6-4993-ade5-b08fc9903f51]
38 pod2-dc01-pc01[fa8f4918-095b-4d7a-a2c9-69e875a6d0ed]
39 pod2-dc02[21a80ce6-958c-47cb-812c-f0a49e397553]
40 pod2-router1[af175917-7624-4725-a4f4-90fadf9ba76a]
41 pod2-vbond01[3364c1f6-d062-489d-b8ef-28a21295b471]
42 pod2-vmanage01[c189c279-8601-46b1-a7fb-a038175f13b4]
43 pod2-vsmart01[ec0cec3c-31d6-4370-8b3b-f331d84cdc54]
44 pod2-vsmart02[68abe3b0-3ad0-49bb-9cd7-50bd41f515f9]
45 tacrad01[0bfa49bd-fe5d-4f99-8402-9f2720f50dcc]
46 vrsx1[3a210809-f6d4-4f7a-b134-50ce697fa27b]
47 vsrx2[7e2c5638-f97c-45c4-8487-153764db2fc7]
48 vsrx3[d3448254-b04a-4485-93cc-388d9ceeb54f]
49 vsrx4[b027e509-3b65-4cba-8aa6-be92c3d7bd25]
50 win2k16ad01[605409db-2469-4f61-a06f-cfd5e2f91af0]
51 win2k16ad02[31ca7536-7ed2-47e6-9e4d-87b34cc685ee]
Please, enter the id of the Serial Console you want to connect to.
To disconnect from a Serial Console, enter the sequence: <Enter><~><.>
SELECT> 49
Permission denied (publickey).
Connection to ovirt-engine01.int.cloche.ca closed.
Macbook:~ Jonathan$
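Assuming it really is the engine-to-host hop that fails, these are the host-side checks I plan to run next (a sketch; the service name and port 2223 are my understanding of the defaults):

# On the host that runs the selected VM:
sudo systemctl status ovirt-vmconsole-host-sshd
sudo ss -tlnp | grep 2223
# Watch it while reproducing the failure from the client:
sudo journalctl -u ovirt-vmconsole-host-sshd -f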
Regards,
Jonathan
[Ovirt 4.3] Guest agent issue
by s.natoli@siinfo.eu
Hi to all!
I can't use the after_xxxx hooks in the guest agent (version 1.0.16-1.el7) installed in a CentOS 7 machine in an oVirt 4.3 cluster, but the before_xxxx hooks are executed without problems.
The same machine in an older oVirt cluster was fine with the same hooks.
This is the ovirt-guest-agent.log:
MainThread::INFO::2019-06-18 13:15:11,192::ovirt-guest-agent::59::root::Starting oVirt guest agent
Dummy-2::INFO::2019-06-18 13:15:11,326::OVirtAgentLogic::322::root::Received an external command: refresh...
Dummy-2::INFO::2019-06-18 13:15:20,990::OVirtAgentLogic::322::root::Received an external command: api-version...
Dummy-2::INFO::2019-06-18 13:15:20,990::OVirtAgentLogic::118::root::API Version updated from 0 to 3
Dummy-2::INFO::2019-06-18 13:17:05,391::OVirtAgentLogic::322::root::Received an external command: lifecycle-event...
Dummy-2::INFO::2019-06-18 13:17:05,440::hooks::64::Hooks::Hook(before_migration) "/etc/ovirt-guest-agent/hooks.d/before_migration/55_flush-caches" executed
Dummy-2::INFO::2019-06-18 13:17:24,329::OVirtAgentLogic::322::root::Received an external command: refresh...
Dummy-2::INFO::2019-06-18 13:18:24,605::OVirtAgentLogic::322::root::Received an external command: api-version...
As you can see, the agent never receives the lifecycle-event for after_migration.
I looked in the vdsm and supervdsm logs, also at debug level, but everything seems normal.
The XML of the machine contains the virtio-serial controller:
<controller index="0" ports="16" type="virtio-serial">
<alias name="ua-9785547d-0e2d-4139-8952-010fe4874800"/>
<address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci"/>
</controller>
...
<channel type="unix">
<source mode="bind" path="/var/lib/libvirt/qemu/channels/27f3d4d0-f3d2-41bf-a8da-e6e75aad51f6.ovirt-guest-agent.0"/>
<target name="ovirt-guest-agent.0" state="connected" type="virtio"/>
<alias name="channel0"/>
<address bus="0" controller="0" port="1" type="virtio-serial"/>
</channel>
<channel type="unix">
<source mode="bind" path="/var/lib/libvirt/qemu/channels/27f3d4d0-f3d2-41bf-a8da-e6e75aad51f6.org.qemu.guest_agent.0"/>
<target name="org.qemu.guest_agent.0" state="connected" type="virtio"/>
<alias name="channel1"/>
<address bus="0" controller="0" port="2" type="virtio-serial"/>
</channel>
I also tried with two more machines with a fresh install of CentOS 7 and the guest agent, but got the same result.
The guest agent does not show any error in:
systemctl status ovirt-guest-agent
This is my hook in /etc/ovirt-guest-agent/hooks.d/after_migration:
-rwxrwxrwx. 1 root root 34 Jun 14 12:48 01_example
and this is the content:
#!/bin/bash
touch /tmp/prova
This script runs without problems when placed in before_xxxx.
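To see whether the after_migration event ever reaches the agent at all, a slightly more verbose variant of the hook could log every invocation (a sketch; the log path is arbitrary):

#!/bin/bash
# Debug hook: append a timestamped line on every invocation.
# Put an executable copy in both before_migration and after_migration;
# $0 contains the full path, so the log shows which hook directory fired.
echo "$(date -Is) $0 $*" >> /tmp/ovirt-hook-debug.log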
Does someone have a suggestion, or a log I should look at, to solve this problem?
Thank you!
Re: Hosted engine setup: "Failed to configure management network on host Local due to setup networks failure"
by Strahil
I think that this can be resolved by a remote (r)syslog system and proper documentation.
I would be happy to write a short crash course, but I will definitely need assistance from a more experienced person.
So far (7 months later), I still struggle to find my errors and where to control the log level.
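The forwarding part itself is only a couple of rsyslog lines on each host and on the engine VM (a sketch; the collector address is a placeholder):

# /etc/rsyslog.d/90-forward.conf
# "@@" forwards over TCP; a single "@" would use UDP.
*.* @@logcollector.example.com:514
# then: systemctl restart rsyslog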
Best Regards,
Strahil Nikolov
On Jun 18, 2019 12:18, me@brendanh.com wrote:
>
> "trade-off between time [developing] and time spent debugging such cases when they do happen"
>
> Your call. All I know is, it took me over a month to install oVirt, including three weeks of one-to-one time with Simone Tiraboschi from Red Hat. He sent me eleven emails but eventually gave up, as baffled as me. It shouldn't be this hard. Others who hit this or similar problems will just abandon oVirt ("I'm just wondering if I should cut my losses with oVirt"):
> https://lists.ovirt.org/archives/list/users@ovirt.org/thread/PZJYNAKPYNQU...
>
> The biggest challenge is to find the relevant error. What would be useful is a log aggregator. If oVirt had a journalctl-type app running on the host that tails ALL the logs, including the engine logs from the hosted engine (via ssh), everything would be in one place and easy to spot. Currently, you need fairly detailed knowledge of the architecture and install process to (i) find the log files and (ii) whittle them down to the one displaying the problem. Yes, I know you guys have a log-packaging app that compresses them up so they can be sent to Red Hat for inspection (does this even include hosted-engine logs?). But with a journalling app, users would be able to spot the error themselves and, most of the time (if it's not a new bug), fix it on their own like I did. And yes, I know that once set up, users will have their own log aggregator in the form of Kibana, Splunk, etc., but these don't help during the initial install.
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-leave@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/4732OJ7DJFK...
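For what it's worth, the "tail all the logs in one place" idea from the quoted message can be roughly approximated with the standard log locations (a sketch; the engine hostname is a placeholder):

# On the host: follow the local oVirt logs, plus the engine log over ssh, in one terminal
tail -F /var/log/vdsm/vdsm.log /var/log/ovirt-hosted-engine-ha/agent.log &
ssh root@engine.example.com tail -F /var/log/ovirt-engine/engine.log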