From vincent at epicenergy.ca Thu Mar 1 02:26:37 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Wed, 28 Feb 2018 18:26:37 -0800 Subject: [ovirt-users] open source backup solution for ovirt/VMs In-Reply-To: <341128da-7338-67fd-2975-c8b50892253e@l1049h.com> References: <341128da-7338-67fd-2975-c8b50892253e@l1049h.com> Message-ID: Can you tell us more about this implementation? *Vincent Royer* *778-825-1057* *SUSTAINABLE MOBILE ENERGY SOLUTIONS* On Wed, Feb 28, 2018 at 2:10 PM, Brett Holcomb wrote: > I use Bareos for backup. It is open source. > > > > On 02/28/2018 01:14 PM, Junaid Jadoon wrote: > > Hi, > Can you please suggest an open source backup solution for oVirt virtual > machines? > > My backup media is an FC tape library which is directly attached to my ovirt > node. > > I really appreciate your help. > > thanks. > > > _______________________________________________ > Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > >

From recreationh at gmail.com Thu Mar 1 03:51:50 2018 From: recreationh at gmail.com (Terry hey) Date: Thu, 1 Mar 2018 11:51:50 +0800 Subject: [ovirt-users] Power management - oVirt 4.2 In-Reply-To: References: Message-ID: Dear Martin, Thank you so much! It works! I found that IPMI is disabled by default. When I enabled it and ran the fence_ilo4 command, it showed that the status is ON. [root at XXXXXX~]# fence_ilo4 -a XXXXX -l XXXX -p XXXXX -v -o status Executing: /usr/bin/ipmitool -I lanplus -H XXXXX -p 623 -U XXXXX -P XXXXX -L ADMINISTRATOR chassis power status 0 Chassis Power is on Status: ON [root at XXXXXX~]# Thank you so much again! Regards, Terry 2018-02-28 16:30 GMT+08:00 Martin Perina : > > > On Wed, Feb 28, 2018 at 9:13 AM, Terry hey wrote: > >> Dear Martin, >> Please see the following result. >> [root at XXXXX ~]# fence_ilo4 -a XXX.XXX.XXX.XXX -l XXXXX -p XXXXX -v -o >> status >> Executing: /usr/bin/ipmitool -I lanplus -H XXX.XXX.XXX.XXX -p 623 -U >> XXXXX -P XXXXX -L ADMINISTRATOR chassis power status >> >> Connection timed out >> >> >> [root at XXXXX~]# >> As you can see, it just said connection timed out. >> But I can actually access iLO5 (same account and password) through >> Internet Explorer. >> > > This is a completely different protocol (HTTP) using a browser; it's > independent of IPMI. > > Are you sure that some firewall doesn't block access to the IPMI > interface? Are you executing the command from a different host than the host > whose IPMI interface you want to access? > If the above is not an issue, then please log in to the iLO5 management using a > browser and check whether the IPMI interface is enabled, according to your iLO5 > documentation. > > >> >> I want to ask: do you know which port the manager uses when it compiles >> this command? >> > > 623 is the default IPMI port. > > >> >> Regards >> Terry >> >> >> 2018-02-26 17:38 GMT+08:00 Martin Perina : >> >>> >>> >>> On Fri, Feb 23, 2018 at 11:34 AM, Terry hey >>> wrote: >>> >>>> Dear Martin, >>>> I am very sorry that I reply to you so late. >>>> Do you mean that 4.2 can support iLO5 by selecting the option "ilo4" in >>>> power management? >>>> >>> >>> Yes >>>
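A closing note for the archives, since this cost me some time: on my iLO5 the switch that matters is under Security > Access Settings and is called something like "IPMI/DCMI over LAN"; the exact wording may differ between firmware versions. While it is off, every IPMI client simply times out. Before blaming oVirt or the fence agent, you can also check from the engine or host side whether anything answers on the IPMI port at all; a quick sketch, assuming nmap is installed (run as root, the address is a placeholder):

  nmap -sU -p 623 <ilo-address>
  # "closed" means IPMI over LAN is disabled or firewalled;
  # "open|filtered" means it is worth retrying:
  fence_ilo4 -a <ilo-address> -l <username> -p <password> -o status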
>>> >>> >>>> "from the error message below I'd say that you are either not using >>>> correct IP address of iLO5 interface or you haven't enabled remote access >>>> to your iLO5 interface" >>>> I just tried it and double-checked that I did not type a wrong IP. But >>>> the error message is the same. >>>> >>> >>> Unfortunately I don't have an iLO5 server available, so I cannot provide >>> more details. Anyway, could you please double check in your server >>> documentation that you have enabled access to the iLO5 IPMI interface >>> correctly? And could you please share the output of the following command? >>> >>> fence_ilo4 -a <ip-address> -l <username> -p <password> -v -o status >>> >>> Thanks >>> >>> Martin >>> >>> >>>> >>>> Regards >>>> Terry >>>> >>>> 2018-02-08 16:13 GMT+08:00 Martin Perina : >>>> >>>>> Hi Terry, >>>>> >>>>> from the error message below I'd say that you are either not using the >>>>> correct IP address of the iLO5 interface or you haven't enabled remote access >>>>> to your iLO5 interface. >>>>> According to [1] iLO5 should be fully IPMI compatible. So are you sure >>>>> that you enabled the remote access to your iLO5 address in iLO5 management? >>>>> Please consult [1] on how to enable everything, and use a user with at >>>>> least Operator privileges. >>>>> >>>>> Regards >>>>> >>>>> Martin >>>>> >>>>> [1] https://support.hpe.com/hpsc/doc/public/display?docId=a00018324en_us >>>>> >>>>> >>>>> On Thu, Feb 8, 2018 at 7:57 AM, Terry hey >>>>> wrote: >>>>> >>>>>> Dear Martin, >>>>>> >>>>>> Thank you for helping me. To answer your question: >>>>>> 1. Does the Test in the Edit fence agent dialog work? >>>>>> Ans: it shows "Test failed: Internal JSON-RPC error" >>>>>> >>>>>> Regardless of the failed result, I pressed "OK" to enable power management. >>>>>> Four event log entries appeared in "Events". >>>>>> ********************************The following are the log entries in >>>>>> "Events"******************************** >>>>>> Host host01 configuration was updated by admin at internal-authz. >>>>>> Kdump integration is enabled for host hostv01, but kdump is not >>>>>> configured properly on host. >>>>>> Health check on Host host01 indicates that future attempts to Stop >>>>>> this host using Power-Management are expected to fail. >>>>>> Health check on Host host01 indicates that future attempts to Start >>>>>> this host using Power-Management are expected to fail. >>>>>> >>>>>> 2. If not, could you please try to install the fence-agents-all package on a >>>>>> different host and execute? >>>>>> Ans: It just shows "Connection timed out". >>>>>> >>>>>> So, does it mean that iLO5 is not supported now, or did I configure it >>>>>> wrongly? >>>>>> >>>>>> Regards, >>>>>> Terry >>>>>> >>>>>> 2018-02-02 15:46 GMT+08:00 Martin Perina : >>>>>> >>>>>>> >>>>>>> >>>>>>> On Fri, Feb 2, 2018 at 5:40 AM, Terry hey >>>>>>> wrote: >>>>>>> >>>>>>>> Dear Martin, >>>>>>>> >>>>>>>> Um, since I am going to use an HPE ProLiant DL360 Gen10 Server to >>>>>>>> set up an oVirt Node (hypervisor), and HP G10 is using iLO5 rather than iLO4, >>>>>>>> I would like to ask whether oVirt power management supports iLO5 >>>>>>>> or not. >>>>>>>> >>>>>>> >>>>>>> We don't have any hardware with iLO5 available, but there is a good >>>>>>> chance that it will be compatible with iLO4. Have you tried to set up your >>>>>>> server with iLO4? Does the Test in the Edit fence agent dialog work? If not, >>>>>>> could you please try to install the fence-agents-all package on a different host >>>>>>> and execute the following: >>>>>>> >>>>>>> fence_ilo4 -a <ip-address> -l <username> -p <password> -v -o status >>>>>>> >>>>>>> and share the output? >>>>>>> >>>>>>> Thanks >>>>>>> >>>>>>> Martin >>>>>>> >>>>>>> >>>>>>>> If not, do you have any idea how to set up power management with HP G10? >>>>>>>> >>>>>>>> Regards, >>>>>>>> Terry >>>>>>>> >>>>>>>> 2018-02-01 16:21 GMT+08:00 Martin Perina : >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Jan 31, 2018 at 11:19 PM, Luca 'remix_tj' Lorenzetto < >>>>>>>>> lorenzetto.luca at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> From ilo3 and up, ilo fencing agents are an alias for fence_ipmi. >>>>>>>>>> Try using the standard ipmi. >>>>>>>>>> >>>>>>>>> >>>>>>>>> It's not just an alias; ilo3/ilo4 also have different defaults >>>>>>>>> than ipmilan. For example, if you use ilo4, then by default the following is >>>>>>>>> used: >>>>>>>>> >>>>>>>>> lanplus=1 >>>>>>>>> power_wait=4 >>>>>>>>> >>>>>>>>> So I recommend starting with ilo4 and adding any necessary custom >>>>>>>>> options into the Options field. If you need some custom >>>>>>>>> options, could you please share them with us? It would be very >>>>>>>>> helpful for us; if needed we could introduce ilo5 with >>>>>>>>> different defaults than ilo4. >>>>>>>>> >>>>>>>>> Thanks >>>>>>>>> >>>>>>>>> Martin >>>>>>>>> >>>>>>>>> >>>>>>>>>> Luca >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On 31 Jan 2018 at 11:14 PM, "Terry hey" wrote: >>>>>>>>>> >>>>>>>>>>> Dear all, >>>>>>>>>>> Does oVirt 4.2 power management support iLO5? I could not see an >>>>>>>>>>> iLO5 option in Power Management. >>>>>>>>>>> >>>>>>>>>>> Regards >>>>>>>>>>> Terry >>>>>>>>>>> >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> Users mailing list >>>>>>>>>>> Users at ovirt.org >>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> _______________________________________________ >>>>>>>>>> Users mailing list >>>>>>>>>> Users at ovirt.org >>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Martin Perina >>>>>>>>> Associate Manager, Software Engineering >>>>>>>>> Red Hat Czech s.r.o. >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Martin Perina >>>>>>> Associate Manager, Software Engineering >>>>>>> Red Hat Czech s.r.o. >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Martin Perina >>>>> Associate Manager, Software Engineering >>>>> Red Hat Czech s.r.o. >>>> >>>> >>> >>> >>> -- >>> Martin Perina >>> Associate Manager, Software Engineering >>> Red Hat Czech s.r.o. >>> >> >> > > > -- > Martin Perina > Associate Manager, Software Engineering > Red Hat Czech s.r.o. >

From sakhi at sanren.ac.za Thu Mar 1 06:59:19 2018 From: sakhi at sanren.ac.za (Sakhi Hadebe) Date: Thu, 1 Mar 2018 08:59:19 +0200 Subject: [ovirt-users] Failed to verify Power Management configuration for Host xxxxxxxxx Message-ID: Hi, I am installing an oVirt cluster of 3 server machines, with the hosted engine VM running on one of the servers. I have been struggling to enable power management. In the Administration portal it shows enabled and not editable. The error it shows is: Error while executing action: ovirt-host.example.co.za: - Cannot edit Host. Power Management is enabled for Host but no Agent type selected. Please assist.
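If it helps whoever answers: I can also dump what the engine believes is configured for this host over the REST API and paste the result here. A sketch of the call, assuming the v4 API's fence agents sub-collection (the host id and the credentials are placeholders):

  curl -s -k -u 'admin@internal:PASSWORD' 'https://engine.example.co.za/ovirt-engine/api/hosts/<host-id>/fenceagents'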
-- Regards, Sakhi Hadebe Engineer: South African National Research Network (SANReN)Competency Area, Meraka, CSIR Tel: +27 12 841 2308 <+27128414213> Fax: +27 12 841 4223 <+27128414223> Cell: +27 71 331 9622 <+27823034657> Email: sakhi at sanren.ac.za -------------- next part -------------- An HTML attachment was scrubbed... URL: From didi at redhat.com Thu Mar 1 07:28:13 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Thu, 1 Mar 2018 09:28:13 +0200 Subject: [ovirt-users] Backup & Restore In-Reply-To: <1229669542.20349654.1519829079176.JavaMail.zimbra@logicworks.pt> References: <1044341911.20329570.1519819859027.JavaMail.zimbra@logicworks.pt> <1513265050.20337674.1519821904171.JavaMail.zimbra@logicworks.pt> <1229669542.20349654.1519829079176.JavaMail.zimbra@logicworks.pt> Message-ID: On Wed, Feb 28, 2018 at 4:44 PM, wrote: > If I run > > # engine-backup --mode=restore --file=back_futur --log=log_futur > --provision-db --restore-permissions --provision-dwh-db --log=/root/rest-log > > to create a log, I found these errors: > > 2018-02-28 14:36:31 6339: pg_cmd running: psql -w -U ovirt_engine_history -h > localhost -p 5432 ovirt_engine_history -t -c show lc_messages > 2018-02-28 14:36:31 6339: pg_cmd running: pg_dump -w -U ovirt_engine_history > -h localhost -p 5432 ovirt_engine_history -s > 2018-02-28 14:36:31 6339: OUTPUT: - Engine database 'engine' > 2018-02-28 14:36:31 6339: Restoring engine database backup at > /tmp/engine-backup.VVkcNuYAkV/db/engine_backup.db > 2018-02-28 14:36:31 6339: restoreDB: backupfile > /tmp/engine-backup.VVkcNuYAkV/db/engine_backup.db user engine host localhost > port 5432 database engine orig_user compressor format custom jobsnum 2 > 2018-02-28 14:36:31 6339: pg_cmd running: pg_restore -w -U engine -h > localhost -p 5432 -d engine -j 2 > /tmp/engine-backup.VVkcNuYAkV/db/engine_backup.db > pg_restore: [archiver (db)] Error while PROCESSING TOC: > pg_restore: [archiver (db)] Error from TOC entry 7314; 0 0 COMMENT EXTENSION > plpgsql > pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of > extension plpgsql > Command was: COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language'; > > > > pg_restore: [archiver (db)] Error from TOC entry 693; 1255 211334 FUNCTION > uuid_generate_v1() engine > pg_restore: [archiver (db)] could not execute query: ERROR: function > "uuid_generate_v1" already exists with same argument types This is the error that fails you. I have a pending patch to make this more visible in the log [1], need to find time to verify it... Does this happen on a clean machine? Perhaps 'engine-cleanup' after such a failed restore is not enough. Please try reinstalling the OS and trying again. If it's not an important machine (test/dev/etc), this will probably be enough, as a faster replacement for a full OS reinstall: engine-cleanup systemctl stop postgresql systemctl stop rh-postgresql95-postgresql rm -rf /var/lib/pgsql/data /var/opt/rh/rh-postgresql95/lib/pgsql/data Then try restore again. [1] https://gerrit.ovirt.org/86395 > Command was: CREATE FUNCTION uuid_generate_v1() RETURNS uuid > LANGUAGE plpgsql STABLE > AS ' > DECLARE > v_val BIGINT; > v_4_1_par... > pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of > function uuid_generate_v1 > Command was: ALTER FUNCTION public.uuid_generate_v1() OWNER TO engine; Adding also Eli. Eli - perhaps we need to patch engine-backup to ignore also this error? 
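By the way, to see exactly which statements fail, you can replay the same pg_restore command that engine-backup runs (it is printed in your log above) and filter out the known-harmless ownership noise. A sketch, reusing the values from your log (the /tmp directory is recreated on each run, so the exact name will differ):

  pg_restore -w -U engine -h localhost -p 5432 -d engine -j 2 /tmp/engine-backup.VVkcNuYAkV/db/engine_backup.db 2>&1 | grep -v 'must be owner'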
I think the minimal flow to reproduce is: engine-setup engine-backup --mode=backup --file=f1 --log=l1 engine-cleanup engine-backup --mode=restore --file=f1 --provision-db --provision-dwh-db --log=l2 Didn't try this myself. > > > pg_restore: WARNING: column "user_role_title" has type "unknown" > DETAIL: Proceeding with relation creation anyway. > pg_restore: WARNING: no privileges could be revoked for "public" > pg_restore: WARNING: no privileges could be revoked for "public" > pg_restore: WARNING: no privileges were granted for "public" > pg_restore: WARNING: no privileges were granted for "public" > WARNING: errors ignored on restore: 3 > 2018-02-28 14:37:23 6339: FATAL: Errors while restoring database engine > > ________________________________ > De: suporte at logicworks.pt > Para: "Yedidyah Bar David" > Cc: "ovirt users" > Enviadas: Quarta-feira, 28 De Fevereiro de 2018 12:45:04 > > Assunto: Re: [ovirt-users] Backup & Restore > > Still no luck: > > # engine-backup --mode=restore --file=back_futur --log=log_futur > --provision-db --restore-permissions --provision-dwh-db > Preparing to restore: > - Unpacking file 'back_futur' > Restoring: > - Files > Provisioning PostgreSQL users/databases: > - user 'engine', database 'engine' > - user 'ovirt_engine_history', database 'ovirt_engine_history' > Restoring: > - Engine database 'engine' > FATAL: Errors while restoring database engine > > I did a engine-cleanup, try it again but still the same error. > > ________________________________ > De: "Yedidyah Bar David" > Para: suporte at logicworks.pt > Cc: "ovirt users" > Enviadas: Quarta-feira, 28 De Fevereiro de 2018 12:24:50 > Assunto: Re: [ovirt-users] Backup & Restore > > On Wed, Feb 28, 2018 at 2:10 PM, wrote: >> Hi, >> >> I'm testing backup & restore on Ovirt 4.2. >> I follow this doc >> >> https://www.ovirt.org/documentation/admin-guide/chap-Backups_and_Migration/ >> Try to restore to a fresh installation but always get this error message: >> >> restore-permissions >> Preparing to restore: >> - Unpacking file 'back_file' >> Restoring: >> - Files >> Provisioning PostgreSQL users/databases: >> - user 'engine', database 'engine' >> Restoring: >> FATAL: Can't connect to database 'ovirt_engine_history'. Please see >> '/usr/bin/engine-backup --help'. >> >> On the live engine I run # engine-backup --scope=all --mode=backup >> --file=file_name --log=log_file_name >> >> And try to restore on a fresh installation: >> # engine-backup --mode=restore --file=file_name --log=log_file_name >> --provision-db --restore-permissions >> >> Any Idea? > > Please try adding to restore command '--providion-dwh-db'. Thanks. > -- > Didi > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -- Didi From recreationh at gmail.com Thu Mar 1 10:24:33 2018 From: recreationh at gmail.com (Terry hey) Date: Thu, 1 Mar 2018 18:24:33 +0800 Subject: [ovirt-users] VM paused rather than migrate to another hosts In-Reply-To: <87r2p5w5kc.fsf@redhat.com> References: <87r2p5w5kc.fsf@redhat.com> Message-ID: Dear Milan, Today, i just found that oVirt 4.2 support iLO5 and power management was set on all hosts (hypervisor). I found that if i choose VM lease and shutdown iSCSI network, the VM was shutdown. Then the VM will migrate to another host if the iSCSI network was resumed. If i just choose enable HA on VM setting, the VM was successfully migrate to another hosts. But i want to ask another question, what if the management network is down? 
What VM and host behavior would you expect? Regards Terry Hung 2018-02-28 22:29 GMT+08:00 Milan Zamazal : > Terry hey writes: > > > I am testing iSCSI bonding failover on oVirt, but I observed that VMs > > were paused and did not migrate to another host. Please see the details as > > follows. > > > > I have two hypervisors. Since they are running iLO 5 and oVirt 4.2 cannot > > support iLO 5, I cannot set up power management. > > > > For the cluster setting, I set "Migrate Virtual Machines" under the > > Migration Policy. > > > > For each hypervisor, I bonded two iSCSI interfaces as bond 1. > > > > I created one virtual machine and enabled high availability on it. > > Also, I created one virtual machine and did not enable high availability on > > it. > > > > When I shut down one of the iSCSI interfaces, nothing happened. > > But when I shut down both iSCSI interfaces, VMs on that host were paused and > > did not migrate to another host. Is this behavior normal, or did I miss > > something? > > A paused VM can't be migrated, since there are no guarantees about the > storage state. As the VMs were paused under erroneous (rather than > controlled such as putting the host into maintenance) situation, > migration policy can't help here. > > But highly available VMs can be restarted on another host automatically. > Do you have VM lease enabled for the highly available VM in High > Availability settings? With a lease, Engine should be able to restart > the VM elsewhere after a while, without it Engine can't do that since > there is danger of resuming the VM on the original host, resulting in > multiple instances of the same VM running at the same time. > > VMs without high availability must be restarted manually (unless storage > domain becomes available again). > > HTH, > Milan

From junaid8756 at gmail.com Thu Mar 1 10:54:18 2018 From: junaid8756 at gmail.com (Junaid Jadoon) Date: Thu, 1 Mar 2018 15:54:18 +0500 Subject: [ovirt-users] open source backup solution for ovirt/VMs In-Reply-To: References: <341128da-7338-67fd-2975-c8b50892253e@l1049h.com> Message-ID: Is Bareos compatible with an FC tape library? On Thu, Mar 1, 2018 at 7:26 AM, Vincent Royer wrote: > Can you tell us more about this implementation? > > *Vincent Royer* > *778-825-1057* > > > > *SUSTAINABLE MOBILE ENERGY SOLUTIONS* > > > > > On Wed, Feb 28, 2018 at 2:10 PM, Brett Holcomb > wrote: > >> I use Bareos for backup. It is open source. >> >> >> >> On 02/28/2018 01:14 PM, Junaid Jadoon wrote: >> >> Hi, >> Can you please suggest an open source backup solution for oVirt virtual >> machines? >> >> My backup media is an FC tape library which is directly attached to my ovirt >> node. >> >> I really appreciate your help >> >> thanks. >> >> >> _______________________________________________ >> Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users >> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users >
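In case it matters for the answer: since the library is attached over FC directly to the oVirt node, I assume I would have to run the Bareos storage daemon on the node itself and point it at the tape drive. A sketch of the Device resource I have in mind for bareos-sd.conf (the names and device path are placeholders for whatever the node exposes):

  Device {
    Name = LTO-Drive          # arbitrary name for this drive
    Media Type = LTO
    Device Type = Tape
    Archive Device = /dev/nst0   # non-rewinding tape device on the node
    AutomaticMount = yes
    RemovableMedia = yes
    RandomAccess = no
  }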
From c.mammoli at apra.it Thu Mar 1 11:35:18 2018 From: c.mammoli at apra.it (Cristian Mammoli) Date: Thu, 1 Mar 2018 12:35:18 +0100 Subject: [ovirt-users] Troubleshooting VM SSO on Windows 10 (ovirt 4.2.1) Message-ID: Hi, I'm trying to set up SSO on Windows 10; the VM is domain-joined, has the agent installed and the credential provider registered. Of course I set up an AD domain and the VM has SSO enabled. Whenever I log in to the user portal and open a VM I'm presented with the login screen and nothing happens; it's like the engine doesn't send the command to autologin. In the agent logs there's nothing interesting, but the communication between the engine and the agent is OK: for example, the command to lock the screen on console close runs and works: Dummy-2::INFO::2018-03-01 09:01:39,124::ovirtagentlogic::322::root::Received an external command: lock-screen... This is an extract from the engine logs when I log in to the user portal and start a connection: 2018-03-01 11:30:01,558+01 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-30) [] User c.mammoli at apra.it successfully logged in with scopes: ovirt-app-admin ovirt-app-api ovirt-app-portal ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access 2018-03-01 11:30:01,606+01 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-31) [7bc265f] Running command: CreateUserSessionCommand internal: false. 2018-03-01 11:30:01,623+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-31) [7bc265f] EVENT_ID: USER_VDC_LOGIN(30), User c.mammoli at apra.it@apra.it connecting from '192.168.1.100' using session '5NMjCbUiehNLAGMeeWsr4L5TatL+uUGsNHOxQtCvSa9i0DaQ7uoGSi6zaZdXu08vrEk5gyQUJAsB2+COzLwtEw==' logged in. 2018-03-01 11:30:02,163+01 ERROR [org.ovirt.engine.core.bll.GetSystemStatisticsQuery] (default task-39) [14276418-5de7-44a6-bb64-c60965de0acf] Query execution failed due to insufficient permissions. 2018-03-01 11:30:02,664+01 INFO [org.ovirt.engine.core.bll.SetVmTicketCommand] (default task-54) [617f130b] Running command: SetVmTicketCommand internal: false. Entities affected : ID: c0250fe0-5d8b-44de-82bc-04610952f453 Type: VMAction group CONNECT_TO_VM with role type USER 2018-03-01 11:30:02,683+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default task-54) [617f130b] START, SetVmTicketVDSCommand(HostName = r630-01.apra.it, SetVmTicketVDSCommandParameters:{hostId='d99a8356-72e8-4130-a1cc-e148762eca57', vmId='c0250fe0-5d8b-44de-82bc-04610952f453', protocol='SPICE', ticket='u2b1nv+rH+pw', validTime='120', userName='c.mammoli at apra.it', userId='39f9d718-6e65-456a-8a6f-71976bcbbf2f', disconnectAction='LOCK_SCREEN'}), log id: 18fa2ef 2018-03-01 11:30:02,703+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default task-54) [617f130b] FINISH, SetVmTicketVDSCommand, log id: 18fa2ef 2018-03-01 11:30:02,713+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-54) [617f130b] EVENT_ID: VM_SET_TICKET(164), User c.mammoli at apra.it@apra.it initiated console session for VM testvdi02 2018-03-01 11:30:11,558+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-49) [] EVENT_ID: VM_CONSOLE_CONNECTED(167), User c.mammoli at apra.it is connected to VM testvdi02.
Any help would be appreciated From suporte at logicworks.pt Thu Mar 1 12:07:58 2018 From: suporte at logicworks.pt (suporte at logicworks.pt) Date: Thu, 1 Mar 2018 12:07:58 +0000 (WET) Subject: [ovirt-users] Backup & Restore In-Reply-To: References: <1044341911.20329570.1519819859027.JavaMail.zimbra@logicworks.pt> <1513265050.20337674.1519821904171.JavaMail.zimbra@logicworks.pt> <1229669542.20349654.1519829079176.JavaMail.zimbra@logicworks.pt> Message-ID: <1803853344.20566967.1519906078677.JavaMail.zimbra@logicworks.pt> Yes, it happens in a clean machine. I try it twice and restore always fails. From: "Yedidyah Bar David" To: suporte at logicworks.pt, "Eli Mesika" Cc: "ovirt users" Sent: Thursday, March 1, 2018 7:28:13 AM Subject: Re: [ovirt-users] Backup & Restore On Wed, Feb 28, 2018 at 4:44 PM, wrote: > If I run > > # engine-backup --mode=restore --file=back_futur --log=log_futur > --provision-db --restore-permissions --provision-dwh-db --log=/root/rest-log > > to create a log, I found these errors: > > 2018-02-28 14:36:31 6339: pg_cmd running: psql -w -U ovirt_engine_history -h > localhost -p 5432 ovirt_engine_history -t -c show lc_messages > 2018-02-28 14:36:31 6339: pg_cmd running: pg_dump -w -U ovirt_engine_history > -h localhost -p 5432 ovirt_engine_history -s > 2018-02-28 14:36:31 6339: OUTPUT: - Engine database 'engine' > 2018-02-28 14:36:31 6339: Restoring engine database backup at > /tmp/engine-backup.VVkcNuYAkV/db/engine_backup.db > 2018-02-28 14:36:31 6339: restoreDB: backupfile > /tmp/engine-backup.VVkcNuYAkV/db/engine_backup.db user engine host localhost > port 5432 database engine orig_user compressor format custom jobsnum 2 > 2018-02-28 14:36:31 6339: pg_cmd running: pg_restore -w -U engine -h > localhost -p 5432 -d engine -j 2 > /tmp/engine-backup.VVkcNuYAkV/db/engine_backup.db > pg_restore: [archiver (db)] Error while PROCESSING TOC: > pg_restore: [archiver (db)] Error from TOC entry 7314; 0 0 COMMENT EXTENSION > plpgsql > pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of > extension plpgsql > Command was: COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language'; > > > > pg_restore: [archiver (db)] Error from TOC entry 693; 1255 211334 FUNCTION > uuid_generate_v1() engine > pg_restore: [archiver (db)] could not execute query: ERROR: function > "uuid_generate_v1" already exists with same argument types This is the error that fails you. I have a pending patch to make this more visible in the log [1], need to find time to verify it... Does this happen on a clean machine? Perhaps 'engine-cleanup' after such a failed restore is not enough. Please try reinstalling the OS and trying again. If it's not an important machine (test/dev/etc), this will probably be enough, as a faster replacement for a full OS reinstall: engine-cleanup systemctl stop postgresql systemctl stop rh-postgresql95-postgresql rm -rf /var/lib/pgsql/data /var/opt/rh/rh-postgresql95/lib/pgsql/data Then try restore again. [1] https://gerrit.ovirt.org/86395 > Command was: CREATE FUNCTION uuid_generate_v1() RETURNS uuid > LANGUAGE plpgsql STABLE > AS ' > DECLARE > v_val BIGINT; > v_4_1_par... > pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of > function uuid_generate_v1 > Command was: ALTER FUNCTION public.uuid_generate_v1() OWNER TO engine; Adding also Eli. Eli - perhaps we need to patch engine-backup to ignore also this error? 
I think the minimal flow to reproduce is: engine-setup engine-backup --mode=backup --file=f1 --log=l1 engine-cleanup engine-backup --mode=restore --file=f1 --provision-db --provision-dwh-db --log=l2 Didn't try this myself. > > > pg_restore: WARNING: column "user_role_title" has type "unknown" > DETAIL: Proceeding with relation creation anyway. > pg_restore: WARNING: no privileges could be revoked for "public" > pg_restore: WARNING: no privileges could be revoked for "public" > pg_restore: WARNING: no privileges were granted for "public" > pg_restore: WARNING: no privileges were granted for "public" > WARNING: errors ignored on restore: 3 > 2018-02-28 14:37:23 6339: FATAL: Errors while restoring database engine > > ________________________________ > From: suporte at logicworks.pt > To: "Yedidyah Bar David" > Cc: "ovirt users" > Sent: Wednesday, 28 February 2018 12:45:04 > > Subject: Re: [ovirt-users] Backup & Restore > > Still no luck: > > # engine-backup --mode=restore --file=back_futur --log=log_futur > --provision-db --restore-permissions --provision-dwh-db > Preparing to restore: > - Unpacking file 'back_futur' > Restoring: > - Files > Provisioning PostgreSQL users/databases: > - user 'engine', database 'engine' > - user 'ovirt_engine_history', database 'ovirt_engine_history' > Restoring: > - Engine database 'engine' > FATAL: Errors while restoring database engine > > I did an engine-cleanup and tried it again, but still the same error. > > ________________________________ > From: "Yedidyah Bar David" > To: suporte at logicworks.pt > Cc: "ovirt users" > Sent: Wednesday, 28 February 2018 12:24:50 > Subject: Re: [ovirt-users] Backup & Restore > > On Wed, Feb 28, 2018 at 2:10 PM, wrote: >> Hi, >> >> I'm testing backup & restore on oVirt 4.2. >> I followed this doc >> >> https://www.ovirt.org/documentation/admin-guide/chap-Backups_and_Migration/ >> I try to restore to a fresh installation but always get this error message: >> >> restore-permissions >> Preparing to restore: >> - Unpacking file 'back_file' >> Restoring: >> - Files >> Provisioning PostgreSQL users/databases: >> - user 'engine', database 'engine' >> Restoring: >> FATAL: Can't connect to database 'ovirt_engine_history'. Please see >> '/usr/bin/engine-backup --help'. >> >> On the live engine I run # engine-backup --scope=all --mode=backup >> --file=file_name --log=log_file_name >> >> And try to restore on a fresh installation: >> # engine-backup --mode=restore --file=file_name --log=log_file_name >> --provision-db --restore-permissions >> >> Any idea? > > Please try adding '--provision-dwh-db' to the restore command. Thanks. > -- > Didi > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -- Didi

From nicolas at ecarnot.net Thu Mar 1 12:13:04 2018 From: nicolas at ecarnot.net (Nicolas Ecarnot) Date: Thu, 1 Mar 2018 13:13:04 +0100 Subject: [ovirt-users] oVirt 4.2.x and ManageIQ : Adding 'cfme' credentials Message-ID: Hello, As for my 4 previous oVirt DCs, I'm trying to add them to ManageIQ providers. I tried to follow this guide: https://access.redhat.com/documentation/en-us/red_hat_cloudforms/4.6/html-single/deployment_planning_guide/#data_collection_for_rhev_33_34 But when trying to run psql, the shell tells me the command is not found.
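(If it helps diagnose: I can run a brute-force search for the binary, e.g.

  find / -name psql -type f 2>/dev/null

and call it by its full path, if someone confirms that is a sane thing to do.)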
I made a very simple setup: when running engine-setup, I answered the default question about DWH, so the DB is local. When viewing (with pgAdmin) the roles of this new PostgreSQL DB, I see there is no 'cfme' user. Do I have to re-run the setup and answer different things to ensure other packages and setup are made? I saw https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html-single/data_warehouse_guide/#Overview_of_Configuring_Data_Warehouse telling me to re-run. But I see that: rpm -qa|grep -i dwh ovirt-engine-dwh-4.2.1.2-1.el7.centos.noarch ovirt-engine-dwh-setup-4.2.1.2-1.el7.centos.noarch so I thought it was already enough... -- Nicolas ECARNOT

From ykaul at redhat.com Thu Mar 1 14:00:39 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 1 Mar 2018 16:00:39 +0200 Subject: [ovirt-users] oVirt 4.2.x and ManageIQ : Adding 'cfme' credentials In-Reply-To: References: Message-ID: On Thu, Mar 1, 2018 at 2:13 PM, Nicolas Ecarnot wrote: > Hello, > > As for my 4 previous oVirt DCs, I'm trying to add them to ManageIQ > providers. > > I tried to follow this guide: > > https://access.redhat.com/documentation/en-us/red_hat_cloudforms/4.6/html-single/deployment_planning_guide/#data_collection_for_rhev_33_34 > > But when trying to run psql, the shell tells me the command is not found. > Because you are probably on PG 9.5 SCL, I assume? Something like 'scl enable rh-postgrsql95' should help. Y. > > I made a very simple setup: when running engine-setup, I answered the > default question about DWH, so the DB is local. > > When viewing (with pgAdmin) the roles of this new PostgreSQL DB, I see > there is no 'cfme' user. > Do I have to re-run the setup and answer different things to ensure other > packages and setup are made? > > I saw https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html-single/data_warehouse_guide/#Overview_of_Configuring_Data_Warehouse telling me to re-run. > > But I see that: > rpm -qa|grep -i dwh > ovirt-engine-dwh-4.2.1.2-1.el7.centos.noarch > ovirt-engine-dwh-setup-4.2.1.2-1.el7.centos.noarch > > so I thought it was already enough... > > -- > Nicolas ECARNOT > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users >

From nicolas at ecarnot.net Thu Mar 1 14:50:30 2018 From: nicolas at ecarnot.net (Nicolas Ecarnot) Date: Thu, 1 Mar 2018 15:50:30 +0100 Subject: [ovirt-users] oVirt 4.2.x and ManageIQ : Adding 'cfme' credentials In-Reply-To: References: Message-ID: <89281469-cdc0-9751-09e2-1689ba7ab04b@ecarnot.net> On 01/03/2018 at 15:00, Yaniv Kaul wrote: > > > On Thu, Mar 1, 2018 at 2:13 PM, Nicolas Ecarnot > wrote: > > Hello, > > As for my 4 previous oVirt DCs, I'm trying to add them to ManageIQ > providers. > > I tried to follow this guide: > > https://access.redhat.com/documentation/en-us/red_hat_cloudforms/4.6/html-single/deployment_planning_guide/#data_collection_for_rhev_33_34 > > But when trying to run psql, the shell tells me the command is not found. > > Hello Yaniv, Thank you for answering. > Because you are probably on PG 9.5 SCL, I assume? I've never heard about that before today. I installed a bare-metal CentOS 7.4 on which I installed oVirt 4.2. I saw no reference to SCL anywhere, neither during the setup nor in the oVirt install documentation. How is an average user supposed to behave in such a situation?
(In my case, as usual, I read and read again.) Couldn't the Red Hat documentation mentioned above be more accurate? > Something like 'scl enable rh-postgrsql95' should help. Not that much... root at serv-mvm-prds01:/etc/ovirt-engine-setup.conf.d# cd /tmp root at serv-mvm-prds01:/tmp# su - postgres Dernière connexion : jeudi 1 mars 2018 à 15:42:40 CET sur pts/2 -bash-4.2$ scl enable rh-postgrsql95 Need at least 3 arguments. Run scl --help to get help. -- Nicolas ECARNOT

From mzamazal at redhat.com Thu Mar 1 15:35:20 2018 From: mzamazal at redhat.com (Milan Zamazal) Date: Thu, 01 Mar 2018 16:35:20 +0100 Subject: [ovirt-users] VM paused rather than migrate to another hosts In-Reply-To: (Terry hey's message of "Thu, 1 Mar 2018 18:24:33 +0800") References: <87r2p5w5kc.fsf@redhat.com> Message-ID: <87606fstaf.fsf@redhat.com> Terry hey writes: > Dear Milan, > Today I just found that oVirt 4.2 supports iLO5, and power management was > set on all hosts (hypervisors). > I found that if I choose VM lease and shut down the iSCSI network, the VM is > shut down. > Then the VM will migrate to another host if the iSCSI network is resumed. If the VM had been shut down then it was probably restarted on rather than migrated to another host. > If I just choose to enable HA in the VM settings, the VM successfully migrates > to another host. There can be a special situation if the storage storing VM leases is unavailable. oVirt tries to do what it can in case of storage problems, but it all depends on the overall state of the storage: for how long it remains unavailable, if it is available at least on some hosts, and which parts of the storage are available; there are more possible scenarios here. Indeed, it's a good idea to experiment with failures and learn what happens before real problems come! > But I want to ask another question: what if the management network is down? > What VM and host behavior would you expect? The primary problem is that oVirt Engine can't communicate with the hosts in such a case. Unless there is another problem (especially assuming storage is still reachable from the hosts) the hosts and VMs will keep running, but the hosts will be displayed as unreachable and VMs as unknown in Engine. And you won't be able to manage your VMs from Engine of course. Once the management network is back, things should return to normal state sooner or later. Regards, Milan > Regards > Terry Hung > 2018-02-28 22:29 GMT+08:00 Milan Zamazal : >> Terry hey writes: >> >> > I am testing iSCSI bonding failover on oVirt, but I observed that VMs >> > were paused and did not migrate to another host. Please see the details as >> > follows. >> > >> > I have two hypervisors. Since they are running iLO 5 and oVirt 4.2 cannot >> > support iLO 5, I cannot set up power management. >> > >> > For the cluster setting, I set "Migrate Virtual Machines" under the >> > Migration Policy. >> > >> > For each hypervisor, I bonded two iSCSI interfaces as bond 1. >> > >> > I created one virtual machine and enabled high availability on it. >> > Also, I created one virtual machine and did not enable high availability on >> > it. >> > >> > When I shut down one of the iSCSI interfaces, nothing happened. >> > But when I shut down both iSCSI interfaces, VMs on that host were paused and >> > did not migrate to another host. Is this behavior normal, or did I miss >> > something? >> >> A paused VM can't be migrated, since there are no guarantees about the >> storage state.
As the VMs were paused under erroneous (rather than >> controlled, such as putting the host into maintenance) situation, >> migration policy can't help here. >> >> But highly available VMs can be restarted on another host automatically. >> Do you have VM lease enabled for the highly available VM in High >> Availability settings? With a lease, Engine should be able to restart >> the VM elsewhere after a while, without it Engine can't do that since >> there is danger of resuming the VM on the original host, resulting in >> multiple instances of the same VM running at the same time. >> >> VMs without high availability must be restarted manually (unless storage >> domain becomes available again). >> >> HTH, >> Milan >>

From nicolas at ecarnot.net Thu Mar 1 15:41:39 2018 From: nicolas at ecarnot.net (Nicolas Ecarnot) Date: Thu, 1 Mar 2018 16:41:39 +0100 Subject: [ovirt-users] oVirt 4.2.x and ManageIQ : Adding 'cfme' credentials In-Reply-To: <89281469-cdc0-9751-09e2-1689ba7ab04b@ecarnot.net> References: <89281469-cdc0-9751-09e2-1689ba7ab04b@ecarnot.net> Message-ID: On 01/03/2018 at 15:50, Nicolas Ecarnot wrote: > Couldn't the Red Hat documentation mentioned above be more accurate? > >> Something like 'scl enable rh-postgrsql95' should help. > > Not that much... > > root at serv-mvm-prds01:/etc/ovirt-engine-setup.conf.d# cd /tmp > root at serv-mvm-prds01:/tmp# su - postgres > Dernière connexion : jeudi 1 mars 2018 à 15:42:40 CET sur pts/2 > -bash-4.2$ scl enable rh-postgrsql95 > Need at least 3 arguments. > Run scl --help to get help. After reading and reading again: For the record, here are the steps allowing me to add this user: su - postgres scl enable rh-postgresql95 'psql ovirt_engine_history' CREATE ROLE cfme with LOGIN ENCRYPTED PASSWORD 'xxx'; SELECT 'GRANT SELECT ON ' || relname || ' TO cfme;' FROM pg_class JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace WHERE nspname = 'public' AND relkind IN ('r', 'v','S'); \q exit -- Nicolas ECARNOT

From nicola.gentile.to at gmail.com Thu Mar 1 16:01:10 2018 From: nicola.gentile.to at gmail.com (nicola gentile) Date: Thu, 1 Mar 2018 17:01:10 +0100 Subject: [ovirt-users] cannot remove vm. Message-ID: Hi, I have a problem. I tried to remove a pool and then I removed every VM, but one of them displays the message "Cannot remove VM. Related operation is currently in progress. Please try again later." I tried to unlock it with this command: PGPASSWORD=... /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -u engine but it is not working. In /var/log/ovirt-engine/engine.log: 2018-03-01 17:02:47,799+01 INFO [org.ovirt.engine.core.bll.RemoveVmCommand] (default task-33) [0217d710-afb6-4450-9897-02748d871aa1] Failed to Acquire Lock to object 'EngineLock:{exclusiveLocks='[d623ad44-a645-4fd0-9993-d21374e99eb5=VM]', sharedLocks=''}' 2018-03-01 17:02:47,799+01 WARN [org.ovirt.engine.core.bll.RemoveVmCommand] (default task-33) [0217d710-afb6-4450-9897-02748d871aa1] Validation of action 'RemoveVm' failed for user admin at internal-authz.
Reasons: VAR__ACTION__REMOVE,VAR__TYPE__VM,ACTION_TYPE_FAILED_OBJECT_LOCKED please help thanks Nick

From fsoyer at systea.fr Thu Mar 1 16:27:10 2018 From: fsoyer at systea.fr (fsoyer) Date: Thu, 01 Mar 2018 17:27:10 +0100 Subject: [ovirt-users] VMs with multiple vdisks don't migrate In-Reply-To: <878tbg9d1y.fsf@redhat.com> Message-ID: <4663-5a982a00-b1-50066700@129767920> Hi Milan, I tried to activate the debug mode, but the restart of libvirt crashed something on the host: it was no longer possible to start any VM on it, and migration to it just never started. So I decided to restart it, and to be sure, I restarted all the hosts. And... now the migration of all VMs, simple or multi-disk, works?!? So there was probably something hidden that was reset or repaired by the global restart! In French, we call that "tomber en marche" ;) So: solved. Thank you, and sorry for the time wasted! -- Cordialement, Frank Soyer Mob. 06 72 28 38 53 - Fix. 05 49 50 52 34 On Monday, February 26, 2018 12:59 CET, Milan Zamazal wrote: "fsoyer" writes: > I don't believe that this is related to a host; tests have been done from victor > source to ginger dest and ginger to victor. I don't see problems on storage > (gluster 3.12 native managed by oVirt), when VMs with a single disk from 20 to > 250G migrate without error in some seconds and with no downtime. The host itself may be fine, but libvirt/QEMU running there may expose problems, perhaps just for some VMs. According to your logs something is not behaving as expected on the source host during the faulty migration. > How can I enable this libvirt debug mode? Set the following options in /etc/libvirt/libvirtd.conf (look for examples in comments there) - log_level=1 - log_outputs="1:file:/var/log/libvirt/libvirtd.log" and restart libvirt. Then /var/log/libvirt/libvirtd.log should contain the log. It will be huge, so I suggest to enable it only for the time of reproducing the problem. > -- > > Cordialement, > > Frank Soyer > > > > On Friday, February 23, 2018 09:56 CET, Milan Zamazal wrote: > Maor Lipchuk writes: > >> I encountered a bug (see [1]) which contains the same error mentioned in >> your VDSM logs (see [2]), but I doubt it is related. > > Indeed, it's not related. > > The error in vdsm_victor.log just means that the info gathering call > tries to access libvirt domain before the incoming migration is > completed. It's ugly but harmless. > >> Milan, maybe you have any advice to troubleshoot the issue? Will the >> libvirt/qemu logs help? > > It seems there is something wrong on (at least) the source host. There > are no migration progress messages in the vdsm_ginger.log and there are > warnings about stale stat samples. That looks like problems with > calling libvirt: slow and/or stuck calls, maybe due to storage > problems. The possibly faulty second disk could cause that. > > libvirt debug logs could tell us whether that is indeed the problem and > whether it is caused by storage or something else. > >> I would suggest to open a bug on that issue so we can track it more >> properly.
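(For the record, since it may help someone searching the archives: the exact snippet I had put in /etc/libvirt/libvirtd.conf for the debug attempt, following the instructions quoted above, was

  log_level=1
  log_outputs="1:file:/var/log/libvirt/libvirtd.log"

followed by a "systemctl restart libvirtd". It was this restart that left my host in the bad state, and the log also grows very quickly, so remember to remove the lines and restart once more when you are done.)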
>> >> Regards, >> Maor >> >> >> [1] >> https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to >> VM running on 2 Hosts >> >> [2] >> 2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer] >> Internal server error (__init__:577) >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, >> in _handle_request >> res = method(**params) >> File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in >> _dynamicMethod >> result = fn(*methodArgs) >> File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies >> io_tune_policies_dict = self._cif.getAllVmIoTunePolicies() >> File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies >> 'current_values': v.getIoTune()} >> File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune >> result = self.getIoTuneResponse() >> File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse >> res = self._dom.blockIoTune( >> File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, >> in __getattr__ >> % self.vmid) >> NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not >> started yet or was shut down >> >> On Thu, Feb 22, 2018 at 4:22 PM, fsoyer wrote: >> >>> Hi, >>> Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger >>> (192.168.0.6) migrated (or failed to migrate...) to victor (192.168.0.5), >>> while the engine.log in the first mail on 2018-02-12 was for VMs standing >>> on victor, migrated (or failed to migrate...) to ginger. Symptoms were >>> exactly the same, in both directions, and the VMs work like a charm before, >>> and even after (migration "killed" by a poweroff of the VMs). >>> Am I the only one experiencing this problem? >>> >>> >>> Thanks >>> -- >>> >>> Cordialement, >>> >>> *Frank Soyer * >>> >>> >>> >>> On Thursday, February 22, 2018 00:45 CET, Maor Lipchuk >>> wrote: >>> >>> >>> Hi Frank, >>> >>> Sorry about the delayed response. >>> I've been going through the logs you attached, although I could not find >>> any specific indication why the migration failed because of the disk you >>> were mentioning. >>> Does this VM run with both disks on the target host without migration? >>> >>> Regards, >>> Maor >>> >>> >>> On Fri, Feb 16, 2018 at 11:03 AM, fsoyer wrote: >>>> >>>> Hi Maor, >>>> sorry for the double post, I've changed the email address of my account and >>>> supposed that I'd need to re-post it. >>>> And thank you for your time. Here are the logs. I added a vdisk to an >>>> existing VM: it no longer migrates, and it needs to be powered off after minutes. >>>> Then simply deleting the second disk makes it migrate in exactly 9s without >>>> problem! >>>> https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561 >>>> https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d >>>> >>>> -- >>>> >>>> Cordialement, >>>> >>>> *Frank Soyer * >>>> On Wednesday, February 14, 2018 11:04 CET, Maor Lipchuk < >>>> mlipchuk at redhat.com> wrote: >>>> >>>> >>>> Hi Frank, >>>> >>>> I already replied on your last email. >>>> Can you provide the VDSM logs from the time of the migration failure for >>>> both hosts: >>>> ginger.local.systea.fr and victor.local.systea.fr >>>> >>>> Thanks, >>>> Maor >>>> >>>> On Wed, Feb 14, 2018 at 11:23 AM, fsoyer wrote: >>>>> >>>>> Hi all, >>>>> I discovered yesterday a problem when migrating VMs with more than one >>>>> vdisk.
>>>>> On our test servers (oVirt4.1, shared storage with Gluster), I created 2 >>>>> VMs needed for a test, from a template with a 20G vdisk. On this VMs I >>>>> added a 100G vdisk (for this tests I didn't want to waste time to extend >>>>> the existing vdisks... But I lost time finally...). The VMs with the 2 >>>>> vdisks works well. >>>>> Now I saw some updates waiting on the host. I tried to put it in >>>>> maintenance... But it stopped on the two VM. They were marked "migrating", >>>>> but no more accessible. Other (small) VMs with only 1 vdisk was migrated >>>>> without problem at the same time. >>>>> I saw that a kvm process for the (big) VMs was launched on the source >>>>> AND destination host, but after tens of minutes, the migration and the VMs >>>>> was always freezed. I tried to cancel the migration for the VMs : failed. >>>>> The only way to stop it was to poweroff the VMs : the kvm process died on >>>>> the 2 hosts and the GUI alerted on a failed migration. >>>>> In doubt, I tried to delete the second vdisk on one of this VMs : it >>>>> migrates then without error ! And no access problem. >>>>> I tried to extend the first vdisk of the second VM, the delete the >>>>> second vdisk : it migrates now without problem ! >>>>> >>>>> So after another test with a VM with 2 vdisks, I can say that this >>>>> blocked the migration process :( >>>>> >>>>> In engine.log, for a VMs with 1 vdisk migrating well, we see : >>>>> >>>>> 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>>> (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired >>>>> to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', >>>>> sharedLocks=''}' >>>>> 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] >>>>> Running command: MigrateVmToServerCommand internal: false. 
Entities >>>>> affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction >>>>> group MIGRATE_VM with role type USER >>>>> 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] >>>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', >>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', >>>>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' >>>>> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>>> params=[]}}]]'}), log id: 14f61ee0 >>>>> 2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) >>>>> [2f712024-5982-46a8-82c8-fd8293da5725] START, >>>>> MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, >>>>> MigrateVDSCommandParameters:{runAsync='true', >>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', >>>>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' >>>>> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>>> params=[]}}]]'}), log id: 775cd381 >>>>> 2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) >>>>> [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, >>>>> log id: 775cd381 >>>>> 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] >>>>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0 >>>>> 2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.db >>>>> broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) >>>>> [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: >>>>> VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, >>>>> Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom >>>>> ID: null, Custom Event ID: -1, Message: Migration started (VM: >>>>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: >>>>> 
ginger.local.systea.fr, User: admin at internal-authz). >>>>> 2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] >>>>> START, FullListVDSCommand(HostName = victor.local.systea.fr, >>>>> FullListVDSCommandParameters:{runAsync='true', >>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>>> vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435 >>>>> 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] >>>>> FINISH, FullListVDSCommand, return: [{acpiEnable=true, >>>>> emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, >>>>> guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, >>>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, >>>>> timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, >>>>> guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, >>>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 >>>>> 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: >>>>> {deviceId='879c93ab-4df1-435c-af02-565039fcc254', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>>> controller=0, type=virtio-serial, port=1}', managed='false', >>>>> plugged='true', readOnly='false', deviceAlias='channel0', >>>>> customProperties='[]', snapshotId='null', logicalName='null', >>>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >>>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi >>>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- >>>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 >>>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >>>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >>>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >>>>> readOnly='false', deviceAlias='input0', customProperties='[]', >>>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm >>>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >>>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >>>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', >>>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >>>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 >>>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a >>>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>>> controller=0, type=virtio-serial, port=2}', managed='false', >>>>> plugged='true', readOnly='false', deviceAlias='channel1', >>>>> customProperties='[]', snapshotId='null', logicalName='null', >>>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >>>>> vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, >>>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>>> numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, >>>>> kvmEnable=true, 
pitReinjection=false, displayNetwork=ovirtmgmt, >>>>> devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, >>>>> clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435 >>>>> 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) >>>>> [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69' >>>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) >>>>> [54a65b66] Received a vnc Device without an address when processing VM >>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, >>>>> displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >>>>> port=5901} >>>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) >>>>> [54a65b66] Received a lease Device without an address when processing VM >>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >>>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >>>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >>>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >>>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>>> was unexpectedly detected as 'MigratingTo' on VDS >>>>> 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) >>>>> (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1') >>>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' >>>>> is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'( >>>>> ginger.local.systea.fr) ignoring it in the refresh until migration is >>>>> done >>>>> .... 
>>>>> 2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' >>>>> was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( >>>>> victor.local.systea.fr) >>>>> 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, >>>>> DestroyVDSCommand(HostName = victor.local.systea.fr, >>>>> DestroyVmVDSCommandParameters:{runAsync='true', >>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', >>>>> secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log >>>>> id: 560eca57 >>>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, >>>>> DestroyVDSCommand, log id: 560eca57 >>>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>>> moved from 'MigratingFrom' --> 'Down' >>>>> 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>>> to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. Setting VM to status >>>>> 'MigratingTo' >>>>> 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>>> moved from 'MigratingTo' --> 'Up' >>>>> 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] >>>>> START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, >>>>> MigrateStatusVDSCommandParameters:{runAsync='true', >>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 >>>>> 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] >>>>> FINISH, MigrateStatusVDSCommand, log id: 7a25c281 >>>>> 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.db >>>>> broker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] >>>>> EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: >>>>> 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: >>>>> 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: >>>>> null, Custom Event ID: -1, Message: Migration completed (VM: >>>>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: >>>>> ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual >>>>> downtime: (N/A)) >>>>> 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>>> (ForkJoinPool-1-worker-4) [] Lock freed to object >>>>> 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', >>>>> sharedLocks=''}' >>>>> 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, >>>>> FullListVDSCommand(HostName = ginger.local.systea.fr, >>>>> FullListVDSCommandParameters:{runAsync='true', >>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>>> vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 >>>>> 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbro 
>>>>> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH,
>>>>> FullListVDSCommand, return: [{acpiEnable=true,
>>>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=,
>>>>> tabletEnable=true, pid=18748, guestDiskMapping={},
>>>>> transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2,
>>>>> guestNumaNodes=[Ljava.lang.Object;@760085fd,
>>>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87
>>>>> 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:
>>>>> {deviceId='879c93ab-4df1-435c-af02-565039fcc254',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel0',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286
>>>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi
>>>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-
>>>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0
>>>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}',
>>>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]',
>>>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true',
>>>>> readOnly='false', deviceAlias='input0', customProperties='[]',
>>>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm
>>>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide',
>>>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01,
>>>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]',
>>>>> snapshotId='null', logicalName='null', hostDevice='null'},
>>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4
>>>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a
>>>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db',
>>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix',
>>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0,
>>>>> controller=0, type=virtio-serial, port=2}', managed='false',
>>>>> plugged='true', readOnly='false', deviceAlias='channel1',
>>>>> customProperties='[]', snapshotId='null', logicalName='null',
>>>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1,
>>>>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768,
>>>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false,
>>>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false,
>>>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3,
>>>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600,
>>>>> display=vnc}], log id: 7cc65298
>>>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) []
>>>>> Received a vnc Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr,
>>>>> displayIp=192.168.0.5}, type=graphics,
deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >>>>> port=5901} >>>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] >>>>> Received a lease Device without an address when processing VM >>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >>>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >>>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >>>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >>>>> 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] >>>>> FINISH, FullListVDSCommand, return: [{acpiEnable=true, >>>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, >>>>> tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_H >>>>> ARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, >>>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, >>>>> timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Obj >>>>> ect;@77951faf, custom={device_fbddd528-7d93-4 >>>>> 9c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fc >>>>> c254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>>> controller=0, type=virtio-serial, port=1}', managed='false', >>>>> plugged='true', readOnly='false', deviceAlias='channel0', >>>>> customProperties='[]', snapshotId='null', logicalName='null', >>>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >>>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi >>>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- >>>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 >>>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >>>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >>>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >>>>> readOnly='false', deviceAlias='input0', customProperties='[]', >>>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm >>>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >>>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >>>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', >>>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >>>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 >>>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a >>>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>>> controller=0, type=virtio-serial, port=2}', managed='false', >>>>> plugged='true', readOnly='false', deviceAlias='channel1', >>>>> customProperties='[]', snapshotId='null', logicalName='null', >>>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >>>>> vmName=Oracle_SECONDARY, nice=0, 
status=Up, maxMemSize=32768,
>>>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false,
>>>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false,
>>>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd,
>>>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620,
>>>>> display=vnc}], log id: 58cdef4c
>>>>> 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5)
>>>>> [7fcb200a] Received a vnc Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr,
>>>>> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d,
>>>>> port=5901}
>>>>> 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5)
>>>>> [7fcb200a] Received a lease Device without an address when processing VM
>>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device:
>>>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d,
>>>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16,
>>>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456,
>>>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:
>>>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> For the VM with 2 vdisks we see :
>>>>>
>>>>> 2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>>>> (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired
>>>>> to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]',
>>>>> sharedLocks=''}'
>>>>> 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e]
>>>>> Running command: MigrateVmToServerCommand internal: false.
Entities >>>>> affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction >>>>> group MIGRATE_VM with role type USER >>>>> 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >>>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', >>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', >>>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' >>>>> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>>> params=[]}}]]'}), log id: 3702a9e0 >>>>> 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) >>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, >>>>> MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, >>>>> MigrateVDSCommandParameters:{runAsync='true', >>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', >>>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' >>>>> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>>> params=[]}}]]'}), log id: 1840069c >>>>> 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) >>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, >>>>> log id: 1840069c >>>>> 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >>>>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 >>>>> 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.db >>>>> broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) >>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: >>>>> VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, >>>>> Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom >>>>> ID: null, Custom Event ID: -1, Message: Migration started (VM: >>>>> Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: >>>>> 
victor.local.systea.fr, User: admin at internal-authz). >>>>> ...
>>>>> 2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbro
>>>>> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4)
>>>>> [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
>>>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY)
>>>>> was unexpectedly detected as 'MigratingTo' on VDS
>>>>> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
>>>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'
>>>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(
>>>>> victor.local.systea.fr) ignoring it in the refresh until migration is
>>>>> done
>>>>> ...
>>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY)
>>>>> was unexpectedly detected as 'MigratingTo' on VDS
>>>>> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
>>>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
>>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'
>>>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(
>>>>> victor.local.systea.fr) ignoring it in the refresh until migration is
>>>>> done
>>>>>
>>>>>
>>>>>
>>>>> and so on, last lines repeated indefinitely for hours since we powered off
>>>>> the VM...
>>>>> Is this something known? Any idea about that?
>>>>>
>>>>> Thanks
>>>>>
>>>>> Ovirt 4.1.6, updated last at Feb 13. Gluster 3.12.1.
>>>>>
>>>>> --
>>>>>
>>>>> Cordialement,
>>>>>
>>>>> *Frank Soyer *
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users at ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
-------------- next part -------------- An HTML attachment was scrubbed... URL: From mzamazal at redhat.com Thu Mar 1 16:38:24 2018 From: mzamazal at redhat.com (Milan Zamazal) Date: Thu, 01 Mar 2018 17:38:24 +0100 Subject: [ovirt-users] VM Migrations In-Reply-To: (Bryan Sockel's message of "Mon, 26 Feb 2018 13:30:07 -0600") References: Message-ID: <87zi3rrbsv.fsf@redhat.com> "Bryan Sockel" writes: > I am having an issue migrating all vm's based on a specific template. The > template was created in a previous ovirt environment (4.1), and all VM's > deployed from this template experience the same issue. > > I would like to find a resolution to both the template and vm's that are > already deployed from this template. The VM in question is VDI-Bryan and > the migration starts around 12:25. I have attached the engine.log and the > vdsm.log file from the destination server. The VM died on the destination before it could be migrated and I can't see the exact reason in the log. However I can see there that you have been hit by some 4.1->4.2 migration issues and it's likely to be the problem as well as being a problem by itself in any case. That will be fixed in 4.2.2. If you don't want to wait until 4.2.2 is released, you may want to try current 4.2.2 snapshot, which already contains the fixes.
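For reference, switching a deployment over to the snapshot repository usually looks roughly like the sketch below; the release RPM name and URL here are assumptions based on the usual resources.ovirt.org layout, so verify them against resources.ovirt.org before running anything:

# Assumed package URL -- check resources.ovirt.org/pub/yum-repo/ first.
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42-snapshot.rpm
# On the engine machine: pull the updated setup packages, then re-run setup.
yum update "ovirt-*-setup*"
engine-setup
# On plain hosts, a simple "yum update" followed by reinstall/activation
# from the engine is the usual path.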
Regards, Milan From mzamazal at redhat.com Thu Mar 1 16:47:57 2018 From: mzamazal at redhat.com (Milan Zamazal) Date: Thu, 01 Mar 2018 17:47:57 +0100 Subject: [ovirt-users] VMs with multiple vdisks don't migrate In-Reply-To: <4663-5a982a00-b1-50066700@129767920> (fsoyer at systea.fr's message of "Thu, 01 Mar 2018 17:27:10 +0100") References: <4663-5a982a00-b1-50066700@129767920> Message-ID: <87r2p3rbcy.fsf@redhat.com> "fsoyer" writes: > I tried to activate the debug mode, but the restart of libvirt crashed > something on the host: it was no longer possible to start any vm on it, and > migration to it just never started. So I decided to restart it, and to be sure, > I've restarted all the hosts. > And... now the migration of all VMs, simple or multi-disks, works ?!? So, there > was probably something hidden that was reset or repaired by the global > restart !.... In French, we call that "tomber en marche" ;) I'm always amazed how many problems in computing are eventually resolved (and how many new ones introduced) by reboot :-). I'm glad that it works for you now. Regards, Milan From simone.bruckner at fabasoft.com Thu Mar 1 16:57:03 2018 From: simone.bruckner at fabasoft.com (Bruckner, Simone) Date: Thu, 1 Mar 2018 16:57:03 +0000 Subject: [ovirt-users] Cannot activate storage domain In-Reply-To: <2CB4E8C8E00E594EA06D4AC427E429920FE500D1@fabamailserver.fabagl.fabasoft.com> References: <2CB4E8C8E00E594EA06D4AC427E429920FE500D1@fabamailserver.fabagl.fabasoft.com> Message-ID: <2CB4E8C8E00E594EA06D4AC427E429920FE56290@fabamailserver.fabagl.fabasoft.com> Hi, we are still struggling to get a storage domain online again. We tried to put the storage domain into maintenance mode, which led to "Failed to update OVF disks 809cc8d7-7687-46cf-a342-3be48674a9b3, OVF data isn't updated on those OVF stores". Trying again with ignoring OVF update failures put the storage domain in "preparing for maintenance". We see the following message on all hosts: "Error releasing host id 26 for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 (monitor:578)". Querying the storage domain using vdsm-client on the SPM resulted in # vdsm-client StorageDomain getInfo "storagedomainID"="b83c159c-4ad6-4613-ba16-bab95ccd10c0" vdsm-client: Command StorageDomain.getInfo with args {'storagedomainID': 'b83c159c-4ad6-4613-ba16-bab95ccd10c0'} failed: (code=358, message=Storage domain does not exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)) Any ideas? Thank you and all the best, Simone From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On Behalf Of Bruckner, Simone Sent: Wednesday, 28 February 2018 15:52 To: users at ovirt.org Subject: [ovirt-users] Cannot activate storage domain Hi all, we run a small oVirt installation that we also use for automated testing (automatically creating, dropping vms). We got an inactive FC storage domain that we cannot activate any more. We see several events at that time, starting with: VM perftest-c17 is down with error. Exit message: Unable to get volume size for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 volume 686376c1-4be1-44c3-89a3-0a8addc8fdf2.
Trying to activate the storage domain results in the following alert event for each host: VDSM command GetVGInfoVDS failed: Volume Group does not exist: (u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',) And after those messages from all hosts we get: VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',) Failed to activate Storage Domain VMHOST_LUN_205 (Data Center Production) by Invalid status on Data Center Production. Setting status to Non Responsive. Storage Pool Manager runs on Host vmhost003.fabagl.fabasoft.com (Address: vmhost003.fabagl.fabasoft.com), Data Center Production. Checking the hosts with multipath -ll we see the LUN without errors. We run oVirt 4.2.1 on CentOS 7.4. Hosts are CentOS 7.4 hosts with oVirt installed using oVirt engine. Hosts are connected to about 30 FC LUNs (8 TB each) on two all-flash storage arrays. Thank you, Simone Bruckner -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bryan.Sockel at mdaemon.com Thu Mar 1 17:06:00 2018 From: Bryan.Sockel at mdaemon.com (Bryan Sockel) Date: Thu, 01 Mar 2018 11:06:00 -0600 Subject: [ovirt-users] VM Migrations In-Reply-To: <87zi3rrbsv.fsf@redhat.com> References: <87zi3rrbsv.fsf@redhat.com> Message-ID: Hi, Thanks for the info, I will be sure to update to 4.2.2 when it is ready. Without the ability to migrate vm's based on this image it makes it less convenient to patch my servers on a more consistent basis. I was also experiencing this issue prior to upgrading my environment to 4.2. Thank You, From: "Milan Zamazal (mzamazal at redhat.com)" To: "Bryan Sockel" Cc: "users\@ovirt.org" Date: Thu, 01 Mar 2018 17:38:24 +0100 Subject: Re: VM Migrations "Bryan Sockel" writes: > I am having an issue migrating all vm's based on a specific template. The > template was created in a previous ovirt environment (4.1), and all VM's > deployed from this template experience the same issue. > > I would like to find a resolution to both the template and vm's that are > already deployed from this template. The VM in question is VDI-Bryan and > the migration starts around 12:25. I have attached the engine.log and the > vdsm.log file from the destination server. The VM died on the destination before it could be migrated and I can't see the exact reason in the log. However I can see there that you have been hit by some 4.1->4.2 migration issues and it's likely to be the problem as well as being a problem by itself in any case. That will be fixed in 4.2.2. If you don't want to wait until 4.2.2 is released, you may want to try current 4.2.2 snapshot, which already contains the fixes. Regards, Milan -------------- next part -------------- An HTML attachment was scrubbed... URL: From suporte at logicworks.pt Thu Mar 1 17:11:27 2018 From: suporte at logicworks.pt (suporte at logicworks.pt) Date: Thu, 1 Mar 2018 17:11:27 +0000 (WET) Subject: [ovirt-users] windows vm Message-ID: <1742521797.20671799.1519924287288.JavaMail.zimbra@logicworks.pt> Hi, I'm trying to install a windows VM on Ovirt 4.2 but when doing a run once it freezes with the windows logo. Is there any special requirements/parameter to run windows machines in this version? Thanks -- Jose Ferradeira http://www.logicworks.pt -------------- next part -------------- An HTML attachment was scrubbed...
URL: From vincent at epicenergy.ca Thu Mar 1 17:50:23 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Thu, 1 Mar 2018 09:50:23 -0800 Subject: [ovirt-users] windows vm In-Reply-To: <1742521797.20671799.1519924287288.JavaMail.zimbra@logicworks.pt> References: <1742521797.20671799.1519924287288.JavaMail.zimbra@logicworks.pt> Message-ID: I have deployed a dozen or so on 4.2 without issues. I'd check your .iso *Vincent Royer* *778-825-1057* *SUSTAINABLE MOBILE ENERGY SOLUTIONS* On Thu, Mar 1, 2018 at 9:11 AM, wrote: > Hi, > > I'm trying to install a windows VM on Ovirt 4.2 but when doing a run once > it freezes with the windows logo. Is there any special > requirements/parameter to run windows machines in this version? > > Thanks > > -- > ------------------------------ > Jose Ferradeira > http://www.logicworks.pt > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From suporte at logicworks.pt Thu Mar 1 17:57:59 2018 From: suporte at logicworks.pt (suporte at logicworks.pt) Date: Thu, 1 Mar 2018 17:57:59 +0000 (WET) Subject: [ovirt-users] windows vm In-Reply-To: References: <1742521797.20671799.1519924287288.JavaMail.zimbra@logicworks.pt> Message-ID: <1148456803.20681936.1519927079589.JavaMail.zimbra@logicworks.pt> What CPU type do you have for your windows machines? From: "Vincent Royer" To: suporte at logicworks.pt Cc: "ovirt users" Sent: Thursday, 1 March 2018 17:50:23 Subject: Re: [ovirt-users] windows vm I have deployed a dozen or so on 4.2 without issues. I'd check your .iso Vincent Royer 778-825-1057 SUSTAINABLE MOBILE ENERGY SOLUTIONS On Thu, Mar 1, 2018 at 9:11 AM, < suporte at logicworks.pt > wrote: Hi, I'm trying to install a windows VM on Ovirt 4.2 but when doing a run once it freezes with the windows logo. Is there any special requirements/parameter to run windows machines in this version? Thanks -- Jose Ferradeira http://www.logicworks.pt _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincent at epicenergy.ca Thu Mar 1 18:04:49 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Thu, 1 Mar 2018 10:04:49 -0800 Subject: [ovirt-users] windows vm In-Reply-To: <1148456803.20681936.1519927079589.JavaMail.zimbra@logicworks.pt> References: <1742521797.20671799.1519924287288.JavaMail.zimbra@logicworks.pt> <1148456803.20681936.1519927079589.JavaMail.zimbra@logicworks.pt> Message-ID: Broadwell *Vincent Royer* *778-825-1057* *SUSTAINABLE MOBILE ENERGY SOLUTIONS* On Thu, Mar 1, 2018 at 9:57 AM, wrote: > What CPU type do you have for your windows machines? > > ------------------------------ > *From: *"Vincent Royer" > *To: *suporte at logicworks.pt > *Cc: *"ovirt users" > *Sent: *Thursday, 1 March 2018 17:50:23 > *Subject: *Re: [ovirt-users] windows vm > > I have deployed a dozen or so on 4.2 without issues. I'd check your .iso > > *Vincent Royer* > *778-825-1057* > > > > *SUSTAINABLE MOBILE ENERGY SOLUTIONS* > > > > > On Thu, Mar 1, 2018 at 9:11 AM, wrote: > >> Hi, >> >> I'm trying to install a windows VM on Ovirt 4.2 but when doing a run once >> it freezes with the windows logo. Is there any special >> requirements/parameter to run windows machines in this version?
>> >> Thanks >> >> -- >> ------------------------------ >> Jose Ferradeira >> http://www.logicworks.pt >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzetto.luca at gmail.com Thu Mar 1 18:19:39 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Thu, 1 Mar 2018 19:19:39 +0100 Subject: [ovirt-users] oVirt API (4.0 and 4.1) not reporting vms running on a given storage domain Message-ID: Hello, I need to extract the list of the vms running on a given storage domain. Copying some code from ansible's ovirt_storage_vms_facts simplified my work but I stopped with a strange behavior: no vm is listed. I thought it was an issue with my code, but looking more in detail at the APIs I tried opening: ovirt-engine/api/storagedomains/52b661fe-609e-48f9-beab-f90165b868c4/vms And what I get is empty, and this for all the storage domains available. Is there something wrong with the versions I'm running? Do I require some options in the query? I'm running RHV, so I can't upgrade to 4.2 yet Luca -- "It is absurd to employ men of excellent intelligence to do calculations that could be entrusted to anyone if machines were used" Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716) "The Internet is the world's largest library. The problem is that the books are all scattered on the floor" John Allen Paulos, Mathematician (1945-living) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From suporte at logicworks.pt Fri Mar 2 08:06:37 2018 From: suporte at logicworks.pt (suporte at logicworks.pt) Date: Fri, 2 Mar 2018 08:06:37 +0000 (WET) Subject: [ovirt-users] windows vm In-Reply-To: References: <1742521797.20671799.1519924287288.JavaMail.zimbra@logicworks.pt> <1148456803.20681936.1519927079589.JavaMail.zimbra@logicworks.pt> Message-ID: <1091451.20717998.1519977997247.JavaMail.zimbra@logicworks.pt> I changed the CPU type and the Windows VM is running. Thanks. From: "Vincent Royer" To: suporte at logicworks.pt Cc: "ovirt users" Sent: Thursday, 1 March 2018 18:04:49 Subject: Re: [ovirt-users] windows vm Broadwell Vincent Royer 778-825-1057 SUSTAINABLE MOBILE ENERGY SOLUTIONS On Thu, Mar 1, 2018 at 9:57 AM, < suporte at logicworks.pt > wrote: What CPU type do you have for your windows machines? From: "Vincent Royer" < vincent at epicenergy.ca > To: suporte at logicworks.pt Cc: "ovirt users" < users at ovirt.org > Sent: Thursday, 1 March 2018 17:50:23 Subject: Re: [ovirt-users] windows vm I have deployed a dozen or so on 4.2 without issues. I'd check your .iso Vincent Royer 778-825-1057 SUSTAINABLE MOBILE ENERGY SOLUTIONS On Thu, Mar 1, 2018 at 9:11 AM, < suporte at logicworks.pt > wrote: Hi, I'm trying to install a windows VM on Ovirt 4.2 but when doing a run once it freezes with the windows logo. Is there any special requirements/parameter to run windows machines in this version? Thanks -- Jose Ferradeira http://www.logicworks.pt _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Oliver.Riesener at hs-bremen.de Fri Mar 2 08:55:00 2018 From: Oliver.Riesener at hs-bremen.de (Oliver Riesener) Date: Fri, 2 Mar 2018 09:55:00 +0100 Subject: [ovirt-users] upgrading disks with snapshots from V3.6 domain failed on NFS and Export domain V4, QCOW V2 to V3 renaming Message-ID: Hi, after upgrading cluster compatibility from V3.6 to V4.2, I found all V3.6 disks with *snapshots* QCOW V2 are not working on NFS and Export Domains. No UI command solves this problem; they all get stuck, worsen the disk state to "Illegal", and produce "Async Tasks" that never end. It seems that the disk files on the storage domain have been renamed with the addition -NNNNNNNNNNNN, but the QCOW backing file locations in the data files are not updated. They point to the old disk names.
[root at ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# ls -la
insgesamt 4983972
drwxr-xr-x. 2 vdsm kvm 4096 28. Feb 12:57 .
drwxr-xr-x. 64 vdsm kvm 4096 1. Mär 12:02 ..
-rw-rw----. 1 vdsm kvm 53687091200 5. Sep 2016 239c0ffc-8249-4d08-967a-619abbbb897a
-rw-rw----. 1 vdsm kvm 1048576 5. Sep 2016 239c0ffc-8249-4d08-967a-619abbbb897a.lease
-rw-r--r--. 1 vdsm kvm 319 5. Sep 2016 239c0ffc-8249-4d08-967a-619abbbb897a.meta
-rw-rw----. 1 vdsm kvm 966393856 6. Sep 2016 2f773536-9b60-4f53-b179-dbf64d182a41
-rw-rw----. 1 vdsm kvm 1048576 5. Sep 2016 2f773536-9b60-4f53-b179-dbf64d182a41.lease
-rw-r--r--. 1 vdsm kvm 264 6. Sep 2016 2f773536-9b60-4f53-b179-dbf64d182a41.meta
-rw-rw----. 1 vdsm kvm 2155806720 14. Feb 11:53 67f96ffc-3a4f-4f3d-9c1b-46293e0be762
-rw-rw----. 1 vdsm kvm 1048576 6. Sep 2016 67f96ffc-3a4f-4f3d-9c1b-46293e0be762.lease
-rw-r--r--. 1 vdsm kvm 260 6. Sep 2016 67f96ffc-3a4f-4f3d-9c1b-46293e0be762.meta
[root at ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# file *
239c0ffc-8249-4d08-967a-619abbbb897a: x86 boot sector; partition 1: ID=0x83, starthead 32, startsector 2048, 104853504 sectors, code offset 0xb8
239c0ffc-8249-4d08-967a-619abbbb897a.lease: data
239c0ffc-8249-4d08-967a-619abbbb897a.meta: ASCII text
2f773536-9b60-4f53-b179-dbf64d182a41: QEMU QCOW Image (v2), has backing file (path ../706ff176-4f96-42fe-a5fa-56434347f16c/239c0ffc-8249-4d08-967a), 53687091200 bytes
2f773536-9b60-4f53-b179-dbf64d182a41.lease: data
2f773536-9b60-4f53-b179-dbf64d182a41.meta: ASCII text
67f96ffc-3a4f-4f3d-9c1b-46293e0be762: QEMU QCOW Image (v2), has backing file (path ../706ff176-4f96-42fe-a5fa-56434347f16c/2f773536-9b60-4f53-b179), 53687091200 bytes
67f96ffc-3a4f-4f3d-9c1b-46293e0be762.lease: data
67f96ffc-3a4f-4f3d-9c1b-46293e0be762.meta: ASCII text
My solution is to hard-link the disk files to the old names too. Then the disk can be handled by the UI again.
[root at ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# ln 239c0ffc-8249-4d08-967a-619abbbb897a 239c0ffc-8249-4d08-967a
[root at ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# ln 2f773536-9b60-4f53-b179-dbf64d182a41 2f773536-9b60-4f53-b179
To fix the illegal disk state, I manipulated the postgres database directly, thanks to the ovirt-users mailing list. Rescan of Disks in the UI could also work; I will test it in the evening, I have a lot of old exported disks with snapshots ... Is there a smarter way to do it?
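For anyone hitting the same "Illegal" state, the direct database route usually amounts to resetting the image status flag. A rough sketch follows; the table/column names and the status values are assumptions taken from other ovirt-users threads, not verified against this exact engine version, so back up the engine database before touching anything:

# On the engine machine; "engine" as the DB name is the default assumption.
su - postgres -c "psql engine"
-- list disks currently marked ILLEGAL (imagestatus 4 assumed to mean ILLEGAL)
engine=# SELECT image_guid, imagestatus FROM images WHERE imagestatus = 4;
-- reset a single image back to OK (imagestatus 1 assumed to mean OK)
engine=# UPDATE images SET imagestatus = 1 WHERE image_guid = 'your-disk-image-uuid';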
Cheers Olri From mzamazal at redhat.com Fri Mar 2 09:06:22 2018 From: mzamazal at redhat.com (Milan Zamazal) Date: Fri, 02 Mar 2018 10:06:22 +0100 Subject: [ovirt-users] VM Migrations In-Reply-To: (Bryan Sockel's message of "Thu, 01 Mar 2018 11:06:00 -0600") References: <87zi3rrbsv.fsf@redhat.com> Message-ID: <87po4mvoc1.fsf@redhat.com> "Bryan Sockel" writes: > Thanks for the info, I will be sure to update to 4.2.2 when it is ready. > Without the ability to migrate vm's based on this image it makes it less > convenient to patch my servers on a more consistent basis. I was also > experiencing this issue prior to upgrading my environment to 4.2. If you experienced it also in 4.1 then it must be another problem, which may or may not be fixed in 4.2, or it may be related to your template or setup. Let's see what happens once you upgrade to 4.2.2. > From: "Milan Zamazal (mzamazal at redhat.com)" > To: "Bryan Sockel" > Cc: "users\@ovirt.org" > Date: Thu, 01 Mar 2018 17:38:24 +0100 > Subject: Re: VM Migrations > > "Bryan Sockel" writes: > >> I am having an issue migrating all vm's based on a specific template. The >> template was created in a previous ovirt environment (4.1), and all VM's >> deployed from this template experience the same issue. >> >> I would like to find a resolution to both the template and vm's that are >> already deployed from this template. The VM in question is VDI-Bryan and >> the migration starts around 12:25. I have attached the engine.log and the >> vdsm.log file from the destination server. > > The VM died on the destination before it could be migrated and I can't > see the exact reason in the log. However I can see there that you have > been hit by some 4.1->4.2 migration issues and it's likely to be the > problem as well as being a problem by itself in any case. > > That will be fixed in 4.2.2. If you don't want to wait until 4.2.2 is > released, you may want to try current 4.2.2 snapshot, which already > contains the fixes. > > Regards, > Milan From Oliver.Riesener at hs-bremen.de Fri Mar 2 09:08:16 2018 From: Oliver.Riesener at hs-bremen.de (Oliver Riesener) Date: Fri, 2 Mar 2018 10:08:16 +0100 Subject: [ovirt-users] upgrade domain V3.6 to V4.2 with disks and snapshots failed on NFS, Export, QCOW V2/V3 renaming probl. Message-ID: Hi, after upgrading cluster compatibility from V3.6 to V4.2, I found all V3.6 disks with *snapshots* QCOW V2 are not working on NFS and Export Domains. No UI command solves this problem; they all get stuck, worsen the disk state to "Illegal", and produce "Async Tasks" that never end. It seems that the disk files on the storage domain have been renamed with the addition -NNNNNNNNNNNN, but the QCOW backing file locations in the data files are not updated. They point to the old disk names.
[root at ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# ls -la
insgesamt 4983972
drwxr-xr-x. 2 vdsm kvm 4096 28. Feb 12:57 .
drwxr-xr-x. 64 vdsm kvm 4096 1. Mär 12:02 ..
-rw-rw----. 1 vdsm kvm 53687091200 5. Sep 2016 239c0ffc-8249-4d08-967a-619abbbb897a
-rw-rw----. 1 vdsm kvm 1048576 5. Sep 2016 239c0ffc-8249-4d08-967a-619abbbb897a.lease
-rw-r--r--. 1 vdsm kvm 319 5. Sep 2016 239c0ffc-8249-4d08-967a-619abbbb897a.meta
-rw-rw----. 1 vdsm kvm 966393856 6. Sep 2016 2f773536-9b60-4f53-b179-dbf64d182a41
-rw-rw----. 1 vdsm kvm 1048576 5. Sep 2016 2f773536-9b60-4f53-b179-dbf64d182a41.lease
-rw-r--r--. 1 vdsm kvm 264 6. Sep 2016 2f773536-9b60-4f53-b179-dbf64d182a41.meta
-rw-rw----. 1 vdsm kvm 2155806720 14. Feb 11:53 67f96ffc-3a4f-4f3d-9c1b-46293e0be762
-rw-rw----. 1 vdsm kvm 1048576 6.
Sep 2016 67f96ffc-3a4f-4f3d-9c1b-46293e0be762.lease
-rw-r--r--. 1 vdsm kvm 260 6. Sep 2016 67f96ffc-3a4f-4f3d-9c1b-46293e0be762.meta
[root at ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# file *
239c0ffc-8249-4d08-967a-619abbbb897a: x86 boot sector; partition 1: ID=0x83, starthead 32, startsector 2048, 104853504 sectors, code offset 0xb8
239c0ffc-8249-4d08-967a-619abbbb897a.lease: data
239c0ffc-8249-4d08-967a-619abbbb897a.meta: ASCII text
2f773536-9b60-4f53-b179-dbf64d182a41: QEMU QCOW Image (v2), has backing file (path ../706ff176-4f96-42fe-a5fa-56434347f16c/239c0ffc-8249-4d08-967a), 53687091200 bytes
2f773536-9b60-4f53-b179-dbf64d182a41.lease: data
2f773536-9b60-4f53-b179-dbf64d182a41.meta: ASCII text
67f96ffc-3a4f-4f3d-9c1b-46293e0be762: QEMU QCOW Image (v2), has backing file (path ../706ff176-4f96-42fe-a5fa-56434347f16c/2f773536-9b60-4f53-b179), 53687091200 bytes
67f96ffc-3a4f-4f3d-9c1b-46293e0be762.lease: data
67f96ffc-3a4f-4f3d-9c1b-46293e0be762.meta: ASCII text
My solution is to hard-link the disk files to the old names too. Then the disk can be handled by the UI again.
[root at ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# ln 239c0ffc-8249-4d08-967a-619abbbb897a 239c0ffc-8249-4d08-967a
[root at ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# ln 2f773536-9b60-4f53-b179-dbf64d182a41 2f773536-9b60-4f53-b179
To fix the illegal disk state, I manipulated the postgres database directly, thanks to the ovirt-users mailing list. Rescan of Disks in the UI could also work; I will test it in the evening, I have a lot of old exported disks with snapshots ... Is there a smarter way to do it? Cheers! Olri From geomeid at mairie-saint-ouen.fr Fri Mar 2 10:25:51 2018 From: geomeid at mairie-saint-ouen.fr (geomeid at mairie-saint-ouen.fr) Date: Fri, 02 Mar 2018 11:25:51 +0100 Subject: [ovirt-users] hosted-engine --deploy : Failed executing ansible-playbook Message-ID: Hello, I am on CENTOS 7.4.1708 I follow the documentation: [root at srvvm42 ~]# yum install ovirt-hosted-engine-setup [root at srvvm42 ~]# yum info ovirt-hosted-engine-setup Loaded plugins: fastestmirror, package_upload, product-id, search-disabled-repos, subscription-manager This system is not registered with an entitlement server. You can use subscription-manager to register. Loading mirror speeds from cached hostfile * base: centos.quelquesmots.fr * epel: mirrors.ircam.fr * extras: centos.mirrors.ovh.net * ovirt-4.2: ftp.nluug.nl * ovirt-4.2-epel: mirrors.ircam.fr * updates: centos.mirrors.ovh.net Installed Packages Name : ovirt-hosted-engine-setup Arch : noarch Version : 2.2.9 Release : 1.el7.centos Size : 2.3 M Repo : installed From repo : ovirt-4.2 Summary : oVirt Hosted Engine setup tool URL : http://www.ovirt.org License : LGPLv2+ Description : Hosted Engine setup tool for oVirt project. [root at srvvm42 ~]# hosted-engine --deploy I encounter an issue when I try to install my hosted-engine. Here are the last lines of the installation: ....
[ INFO ] TASK [Clean /etc/hosts for the engine VM] [ INFO ] skipping: [localhost] [ INFO ] TASK [Copy /etc/hosts back to the engine VM] [ INFO ] skipping: [localhost] [ INFO ] TASK [Clean /etc/hosts on the host] [ INFO ] changed: [localhost] [ INFO ] TASK [Add an entry in /etc/hosts for the target VM] [ INFO ] changed: [localhost] [ INFO ] TASK [Start broker] [ INFO ] changed: [localhost] [ INFO ] TASK [Initialize lockspace volume] [ INFO ] changed: [localhost] [ INFO ] TASK [Start agent] [ INFO ] changed: [localhost] [ INFO ] TASK [Wait for the engine to come up on the target VM] [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check 'health_result.rc == 0 and health_result.stdout|from_json|json_query('*."engine-status"."health"')|first=="good"' failed. The error was: error while evaluating conditional (health_result.rc == 0 and health_result.stdout|from_json|json_query('*."engine-status"."health"')|first=="good"): No first item, sequence was empty."} [ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook [ INFO ] Stage: Clean up [ INFO ] Cleaning temporary resources [ INFO ] TASK [Gathering Facts] [ INFO ] ok: [localhost] [ INFO ] TASK [Remove local vm dir] [ INFO ] changed: [localhost] [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180302104441.conf' [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination [ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch. Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180302101734-fuzcop.log And here is a part of the log file: 2018-03-02 10:44:13,760+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug] 2018-03-02 10:44:13,861+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 changed: False 2018-03-02 10:44:13,962+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 result: {'stderr_lines': [], u'changed': True, u'end': u'2018-03-02 10:44:13.401854', u'stdou t': u'', u'cmd': [u'hosted-engine', u'--reinitialize-lockspace', u'--force'], 'failed': False, 'attempts': 2, u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.202734', 'stdout_lines': [], u'start': u'2018-03-02 10:44:13.199120'} 2018-03-02 10:44:14,063+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Start agent] 2018-03-02 10:44:14,565+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-03-02 10:44:14,667+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Wait for the engine to come up on the target VM] 2018-03-02 10:44:36,555+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 {u'msg': u'The conditional check 'health_result.rc == 0 and health_result.stdout|from_json|j son_query('*."engine-status"."health"')|first=="good"' failed. The error was: error while evaluating conditional (health_result.rc == 0 and health_result.stdout|from_json|json_query('*."engine-status"."h ealth"')|first=="good"): No first item, sequence was empty.'} 2018-03-02 10:44:36,657+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 fatal: [localhost]: FAILED! 
=> {"msg": "The conditional check 'health_result.rc == 0 and heal th_result.stdout|from_json|json_query('*."engine-status"."health"')|first=="good"' failed. The error was: error while evaluating conditional (health_result.rc == 0 and health_result.stdout|from_json|js on_query('*."engine-status"."health"')|first=="good"): No first item, sequence was empty."} 2018-03-02 10:44:36,759+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 2 2018-03-02 10:44:36,759+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 57 changed: 20 unreachable: 0 skipped: 3 failed: 1 2018-03-02 10:44:36,760+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [ovirtengine.stouen.local] : ok: 10 changed: 5 unreachable: 0 skipped: 0 failed: 0 2018-03-02 10:44:36,760+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 ansible-playbook stdout: 2018-03-02 10:44:36,760+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:189 to retry, use: --limit @/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.retry 2018-03-02 10:44:36,760+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stderr: 2018-03-02 10:44:36,761+0100 DEBUG otopi.context context._executeMethod:143 method exception Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod method['method']() File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/target_vm.py", line 193, in _closeup r = ah.run() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", line 194, in run raise RuntimeError(_('Failed executing ansible-playbook')) RuntimeError: Failed executing ansible-playbook 2018-03-02 10:44:36,763+0100 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Closing up': Failed executing ansible-playbook 2018-03-02 10:44:36,764+0100 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-03-02 10:44:36,765+0100 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'True' 2018-03-02 10:44:36,765+0100 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(, RuntimeError('Failed executing ansible-playbook',), )]' 2018-03-02 10:44:36,766+0100 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END 2018-03-02 10:44:36,767+0100 INFO otopi.context context.runSequence:741 Stage: Clean up 2018-03-02 10:44:36,767+0100 DEBUG otopi.context context.runSequence:745 STAGE cleanup 2018-03-02 10:44:36,768+0100 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_ansiblesetup.core.misc.Plugin._cleanup 2018-03-02 10:44:36,769+0100 INFO otopi.plugins.gr_he_ansiblesetup.core.misc misc._cleanup:236 Cleaning temporary resources 2018-03-02 10:44:36,769+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:153 ansible-playbook: cmd: ['/bin/ansible-playbook', '--module-path=/usr/share/ovirt-hosted-engine-setup/ans ible', '--inventory=localhost,', '--extra-vars=@/tmp/tmpCctJN4', '/usr/share/ovirt-hosted-engine-setup/ansible/final_clean.yml'] 2018-03-02 10:44:36,770+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:154 ansible-playbook: out_path: /tmp/tmpBm1bE0 2018-03-02 10:44:36,770+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils.run:155 ansible-playbook: vars_path: /tmp/tmpCctJN4 2018-03-02 10:44:36,770+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:156 ansible-playbook: env: {'LC_NUMERIC': 'fr_FR.UTF-8', 'HE_ANSIBLE_LOG_PATH': '/var/log/ovirt-hosted-engin e-setup/ovirt-hosted-engine-setup-ansible-final_clean-20180302104436-0yt7bk.log', 'LESSOPEN': '||/usr/bin/lesspipe.sh %s', 'SSH_CLIENT': '10.2.10.112 38120 22', 'SELINUX_USE_CURRENT_RANGE': '', 'LOGNAME': 'r oot', 'USER': 'root', 'HOME': '/root', 'LC_PAPER': 'fr_FR.UTF-8', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/dell/srvadmin/bin:/opt/dell/srvadmin/sbin:/root/bin', 'LANG': 'en_US.UTF-8', 'TERM': 'xterm-256color', 'SHELL': '/bin/bash', 'LC_MEASUREMENT': 'fr_FR.UTF-8', 'HISTSIZE': '1000', 'OTOPI_CALLBACK_OF': '/tmp/tmpBm1bE0', 'LC_MONETARY': 'fr_FR.UTF-8', 'XDG_RUNTIME_DIR': '/run/user/0', 'AN SIBLE_STDOUT_CALLBACK': '1_otopi_json', 'LC_ADDRESS': 'fr_FR.UTF-8', 'PYTHONPATH': '/usr/share/ovirt-hosted-engine-setup/scripts/..:', 'SELINUX_ROLE_REQUESTED': '', 'MAIL': '/var/spool/mail/root', 'ANSIBLE_C ALLBACK_WHITELIST': '1_otopi_json,2_ovirt_logger', 'XDG_SESSION_ID': '1', 'LC_IDENTIFICATION': 'fr_FR.UTF-8', 'LS_COLORS': 'rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5 ;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar =38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38 ;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5; 9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.t ga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13 :*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5; 13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=3 8;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac= 38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:', 'SSH_TTY': '/dev/pts/0', 'H OSTNAME': 'srvvm42.stouen.local', 'LC_TELEPHONE': 'fr_FR.UTF-8', 'SELINUX_LEVEL_REQUESTED': '', 'HISTCONTROL': 'ignoredups', 'SHLVL': '1', 'PWD': '/root', 'LC_NAME': 'fr_FR.UTF-8', 'OTOPI_LOGFILE': '/var/log /ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180302101734-fuzcop.log', 'LC_TIME': 'fr_FR.UTF-8', 'SSH_CONNECTION': '10.2.10.112 38120 10.2.200.130 22', 'OTOPI_EXECDIR': '/root'} 2018-03-02 10:44:37,885+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY [Clean temporary resources] 2018-03-02 10:44:37,987+0100 INFO 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Gathering Facts] 2018-03-02 10:44:40,098+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-03-02 10:44:40,300+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Remove local vm dir] 2018-03-02 10:44:41,105+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-03-02 10:44:41,206+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 2 changed: 1 unreachable: 0 skipped: 0 failed: 0 2018-03-02 10:44:41,307+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 0 2018-03-02 10:44:41,307+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 ansible-playbook stdout: 2018-03-02 10:44:41,307+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stderr: 2018-03-02 10:44:41,308+0100 DEBUG otopi.plugins.gr_he_ansiblesetup.core.misc misc._cleanup:238 {} 2018-03-02 10:44:41,311+0100 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_common.engine.ca.Plugin._cleanup 2018-03-02 10:44:41,312+0100 DEBUG otopi.context context._executeMethod:135 condition False 2018-03-02 10:44:41,315+0100 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._cleanup 2018-03-02 10:44:41,315+0100 DEBUG otopi.context context._executeMethod:135 condition False 2018-03-02 10:44:41,318+0100 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._cleanup 2018-03-02 10:44:41,319+0100 DEBUG otopi.context context._executeMethod:135 condition False 2018-03-02 10:44:41,320+0100 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.otopi.dialog.answer_file.Plugin._generate_answer_file 2018-03-02 10:44:41,320+0100 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN 2018-03-02 10:44:41,320+0100 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/answerFileContent=str:'# OTOPI answer file, generated by human dialog [environment:default] QUESTION/1/CI_VM_ETC_HOST=str:yes -- I don't understand the problem. I searched the web and found nothing. I've been using another oVirt in a test environment for a while, but here I'm really lost... Thanks for any help! Georges -------------- next part -------------- An HTML attachment was scrubbed... URL: From jvdwege at xs4all.nl Fri Mar 2 10:39:47 2018 From: jvdwege at xs4all.nl (Joop) Date: Fri, 2 Mar 2018 11:39:47 +0100 Subject: Re: [ovirt-users] hosted-engine --deploy : Failed executing ansible-playbook In-Reply-To: References: Message-ID: <5A9929F3.4010306@xs4all.nl> On 2-3-2018 11:25, geomeid at mairie-saint-ouen.fr wrote: > > Hello, > > > > I am on CENTOS 7.4.1708 > > > > I follow the documentation: > > [root at srvvm42 ~]# yum install ovirt-hosted-engine-setup > > [root at srvvm42 ~]# yum info ovirt-hosted-engine-setup > Loaded plugins: fastestmirror, package_upload, product-id, > search-disabled-repos, subscription-manager > This system is not registered with an entitlement server. You can use > subscription-manager to register.
> Loading mirror speeds from cached hostfile
> * base: centos.quelquesmots.fr
> * epel: mirrors.ircam.fr
> * extras: centos.mirrors.ovh.net
> * ovirt-4.2: ftp.nluug.nl
> * ovirt-4.2-epel: mirrors.ircam.fr
> * updates: centos.mirrors.ovh.net
> Installed Packages
> Name        : ovirt-hosted-engine-setup
> Arch        : noarch
> Version     : 2.2.9
> Release     : 1.el7.centos
> Size        : 2.3 M
> Repo        : installed
> From repo   : ovirt-4.2
> Summary     : oVirt Hosted Engine setup tool
> URL         : http://www.ovirt.org
> License     : LGPLv2+
> Description : Hosted Engine setup tool for oVirt project.
>
> [root at srvvm42 ~]# hosted-engine --deploy
>
> I encounter an issue when I try to install my hosted-engine. Here is
> the last line of the installation:
>
> ....
>
> [ INFO ] TASK [Clean /etc/hosts for the engine VM]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Copy /etc/hosts back to the engine VM]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Clean /etc/hosts on the host]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Add an entry in /etc/hosts for the target VM]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Start broker]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Initialize lockspace volume]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Start agent]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Wait for the engine to come up on the target VM]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check
> 'health_result.rc == 0 and
> health_result.stdout|from_json|json_query('*.\"engine-status\".\"health\"')|first==\"good\"'
> failed. The error was: error while evaluating conditional
> (health_result.rc == 0 and
> health_result.stdout|from_json|json_query('*.\"engine-status\".\"health\"')|first==\"good\"):
> No first item, sequence was empty."}
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
> ansible-playbook
>
My guess is that it is a firewall problem: check which ports are open
after this fails. You can also check it yourself: when the engine is up,
try to curl your engine and see whether it is reachable on 443; if it is
not, the health check will fail.

Regards,

Joop
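As a concrete form of the check suggested above: the engine exposes a
plain health servlet that the HA agent itself polls, so once the engine VM
is up you can verify reachability from the host like this (a minimal
sketch; the FQDN is a placeholder for your engine VM, and -k merely skips
certificate validation):

  # is the engine answering on 443 at all?
  curl -k https://engine.example.com/ovirt-engine/services/health
  # a healthy engine replies: DB Up!Welcome to Health Status!

If the call times out instead of failing outright, a firewall in between
is the first thing to rule out.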
From stirabos at redhat.com  Fri Mar 2 10:43:45 2018
From: stirabos at redhat.com (Simone Tiraboschi)
Date: Fri, 2 Mar 2018 11:43:45 +0100
Subject: [ovirt-users] hosted-engine --deploy : Failed executing ansible-playbook
In-Reply-To: 
References: 
Message-ID: 

On Fri, Mar 2, 2018 at 11:25 AM, wrote:

> Hello,
>
> I am on CENTOS 7.4.1708
>
> I follow the documentation:
>
> [root at srvvm42 ~]# yum install ovirt-hosted-engine-setup
>
> [root at srvvm42 ~]# yum info ovirt-hosted-engine-setup
> Loaded plugins: fastestmirror, package_upload, product-id,
> search-disabled-repos, subscription-manager
> This system is not registered with an entitlement server. You can use
> subscription-manager to register.
> Loading mirror speeds from cached hostfile
> * base: centos.quelquesmots.fr
> * epel: mirrors.ircam.fr
> * extras: centos.mirrors.ovh.net
> * ovirt-4.2: ftp.nluug.nl
> * ovirt-4.2-epel: mirrors.ircam.fr
> * updates: centos.mirrors.ovh.net
> Installed Packages
> Name        : ovirt-hosted-engine-setup
> Arch        : noarch
> Version     : 2.2.9
> Release     : 1.el7.centos
> Size        : 2.3 M
> Repo        : installed
> From repo   : ovirt-4.2
> Summary     : oVirt Hosted Engine setup tool
> URL         : http://www.ovirt.org
> License     : LGPLv2+
> Description : Hosted Engine setup tool for oVirt project.
>
> [root at srvvm42 ~]# hosted-engine --deploy
>
> I encounter an issue when I try to install my hosted-engine. Here is the
> last line of the installation:
>
> ....
>
> [ INFO ] TASK [Clean /etc/hosts for the engine VM]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Copy /etc/hosts back to the engine VM]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Clean /etc/hosts on the host]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Add an entry in /etc/hosts for the target VM]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Start broker]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Initialize lockspace volume]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Start agent]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Wait for the engine to come up on the target VM]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check
> 'health_result.rc == 0 and
> health_result.stdout|from_json|json_query('*.\"engine-status\".\"health\"')|first==\"good\"'
> failed. The error was: error while evaluating conditional
> (health_result.rc == 0 and
> health_result.stdout|from_json|json_query('*.\"engine-status\".\"health\"')|first==\"good\"):
> No first item, sequence was empty."}
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
> ansible-playbook
>

Hi,
it's a kind of race condition when consuming the output of
hosted-engine --vm-status --json, which, during service startup time,
could appear incomplete.
We tracked it here: https://bugzilla.redhat.com/1540926
oVirt 4.2.2 rc3 will contain a fix for that.

It failed really late, so in the end your environment should already be
OK. You may just be left with the bootstrap local VM as a leftover
external VM in the engine, but you can simply delete it manually. I don't
see other drawbacks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
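To confirm you are hitting the same race, the status that the playbook was
polling can be re-checked by hand once the services have settled. A small
sketch, assuming a standard setup (piping through python is just one way
to pretty-print the output):

  systemctl status ovirt-ha-broker ovirt-ha-agent
  hosted-engine --vm-status --json | python -m json.tool
  # the deploy condition quoted above passes once a host reports
  # "engine-status" containing "health": "good"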
From sbonazzo at redhat.com  Fri Mar 2 13:22:10 2018
From: sbonazzo at redhat.com (Sandro Bonazzola)
Date: Fri, 2 Mar 2018 14:22:10 +0100
Subject: [ovirt-users] [ANN] oVirt 4.2.2 Third Release Candidate is now available
Message-ID: 

The oVirt Project is pleased to announce the availability of the oVirt
4.2.2 Third Release Candidate, as of March 2nd, 2018.

This update is a release candidate of the second in a series of
stabilization updates to the 4.2 series. This is pre-release software.
This pre-release should not be used in production.

This release is available now for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.2

See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Node is already available [2]

Additional Resources:
* Read more about the oVirt 4.2.2 release highlights:
http://www.ovirt.org/release/4.2.2/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.2.2/
[2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/

--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA
TRIED. TESTED. TRUSTED.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vajdovski at gmail.com  Fri Mar 2 14:09:30 2018
From: vajdovski at gmail.com (Krzysztof Wajda)
Date: Fri, 2 Mar 2018 15:09:30 +0100
Subject: [ovirt-users] Issue with deploy HE on another host 4.1
Message-ID: 

Hi,

I have an issue with Hosted Engine when I try to deploy it via the GUI on
another host. There are no errors after the deploy, but in the GUI I see
only a "Not active" status for HE, and hosted-engine --status shows only
1 node (on both nodes the same output). In hosted-engine.conf I see that
host_id is the same as it is on the primary host with HE!? The issue
looks quite similar to the one in

http://lists.ovirt.org/pipermail/users/2018-February/086932.html

Here is the config file on the newly deployed node:

ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
gateway=192.168.8.1
iqn=
conf_image_UUID=f2813205-4b0c-45f3-a9cb-3748f61d2194
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
sdUUID=7e7a275c-6939-4f79-85f6-d695209951ea
connectionUUID=81a2f9a3-2efe-448f-b305-e22543068044
conf_volume_UUID=d6b7e25c-9912-47ff-b104-9d424b9f34b8
user=
host_id=1
bridge=ovirtmgmt
metadata_image_UUID=fe95f22e-b468-4adf-a754-21d419ae3e67
spUUID=00000000-0000-0000-0000-000000000000
mnt_options=
fqdn=dev-ovirtengine0.somedomain.it
portal=
vm_disk_id=febde231-92cc-4599-8f55-816f63132739
metadata_volume_UUID=7ebaf268-15ec-4c76-ba89-b5e2dc143830
vm_disk_vol_id=e3920b18-4467-44f8-b2d0-629b3b1d1a58
domainType=fc
port=
console=vnc
ca_subject="C=EN, L=Test, O=Test, CN=Test"
password=
vmid=3f7d9c1d-6c3e-4b96-b85d-d240f3bf9b76
lockspace_image_UUID=49e318ad-63a3-4efd-977c-33b8c4c93728
lockspace_volume_UUID=91bcb5cf-006c-42b4-b419-6ac9f841f50a
vdsm_use_ssl=true
storage=None
conf=/var/run/ovirt-hosted-engine-ha/vm.conf

This is the original one:

fqdn=dev-ovirtengine0.somedomain.it
vm_disk_id=febde231-92cc-4599-8f55-816f63132739
vm_disk_vol_id=e3920b18-4467-44f8-b2d0-629b3b1d1a58
vmid=3f7d9c1d-6c3e-4b96-b85d-d240f3bf9b76
storage=None
mnt_options=
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=1
console=vnc
domainType=fc
spUUID=00000000-0000-0000-0000-000000000000
sdUUID=7e7a275c-6939-4f79-85f6-d695209951ea
connectionUUID=81a2f9a3-2efe-448f-b305-e22543068044
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject="C=EN, L=Test, O=Test, CN=Test"
vdsm_use_ssl=true
gateway=192.168.8.1
bridge=ovirtmgmt
metadata_volume_UUID=7ebaf268-15ec-4c76-ba89-b5e2dc143830
metadata_image_UUID=fe95f22e-b468-4adf-a754-21d419ae3e67
lockspace_volume_UUID=91bcb5cf-006c-42b4-b419-6ac9f841f50a
lockspace_image_UUID=49e318ad-63a3-4efd-977c-33b8c4c93728
conf_volume_UUID=d6b7e25c-9912-47ff-b104-9d424b9f34b8
conf_image_UUID=f2813205-4b0c-45f3-a9cb-3748f61d2194

# The following are used only for iSCSI storage
iqn=
portal=
user=
password=
port=

Packages:

ovirt-imageio-daemon-1.0.0-1.el7.noarch
ovirt-host-deploy-1.6.7-1.el7.centos.noarch
ovirt-release41-4.1.9-1.el7.centos.noarch
ovirt-setup-lib-1.1.4-1.el7.centos.noarch
ovirt-hosted-engine-ha-2.1.8-1.el7.centos.noarch
ovirt-hosted-engine-setup-2.1.4-1.el7.centos.noarch
ovirt-vmconsole-1.0.4-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.4-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
ovirt-imageio-common-1.0.0-1.el7.noarch

Output from agent.log:

MainThread::INFO::2018-03-02 15:01:47,279::brokerlink::141::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Success, id 140493346760912
MainThread::INFO::2018-03-02 15:01:51,011::brokerlink::179::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain) Success, id 140493346759824
MainThread::INFO::2018-03-02
15:01:51,011::hosted_engine::601::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Broker initialized, all submonitors started MainThread::INFO::2018-03-02 15:01:51,045::hosted_engine::704::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock) Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file: /var/run/vdsm/storage/7e7a275c-6939-4f79-85f6-d695209951ea/49e318ad-63a3-4efd-977c-33b8c4c93728/91bcb5cf-006c-42b4-b419-6ac9f841f50a) MainThread::INFO::2018-03-02 15:04:12,058::hosted_engine::745::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock) Failed to acquire the lock. Waiting '5's before the next attempt Regards Krzysztof -------------- next part -------------- An HTML attachment was scrubbed... URL: From mzamazal at redhat.com Fri Mar 2 14:10:18 2018 From: mzamazal at redhat.com (Milan Zamazal) Date: Fri, 02 Mar 2018 15:10:18 +0100 Subject: [ovirt-users] VMs stuck in migrating state In-Reply-To: (nicolas@devels.es's message of "Mon, 26 Feb 2018 08:58:36 +0000") References: Message-ID: <87lgfava9h.fsf@redhat.com> nicolas at devels.es writes: > We're running 4.1.9 and during the weekend we had a storage issue that seemed > to leave some hosts in an strange state. One of the hosts has almost all VMs > migrating (although it seems to not actually being migrating them) and the > migration state cannot be cancelled. > > When clicking on one of those machines and selecting 'Cancel migration', in the > ovirt-engine log I see: > > 2018-02-26 08:52:07,588Z INFO > [org.ovirt.engine.core.vdsbroker.vdsbroker.CancelMigrateVDSCommand] > (org.ovirt.thread.pool-6-thread-36) [887dfbf9-dece-4f7b-90a8-dac02b849b7f] > HostName = host2.domain.com > 2018-02-26 08:52:07,588Z ERROR > [org.ovirt.engine.core.vdsbroker.vdsbroker.CancelMigrateVDSCommand] > (org.ovirt.thread.pool-6-thread-36) [887dfbf9-dece-4f7b-90a8-dac02b849b7f] > Command 'CancelMigrateVDSCommand(HostName = host2.domain.com, > CancelMigrationVDSParameters:{runAsync='true', > hostId='e63b9146-10c4-47ad-bd6c-f053a8c5b4eb', > vmId='26d37e43-32e2-4e55-9c1e-1438518d5021'})' execution failed: > VDSGenericException: VDSErrorException: Failed to CancelMigrateVDS, error = > Migration process cancelled, code = 82 > > On the vdsm side I see: > > 2018-02-26 08:56:19,396+0000 INFO (jsonrpc/0) [vdsm.api] START migrateCancel() > from=::ffff:10.X.X.X,54654, flow_id=874d36d7-63f5-4b71-8a4d-6d9f3ec65858 > (api:46) > 2018-02-26 08:56:19,398+0000 INFO (jsonrpc/0) [vdsm.api] FINISH migrateCancel > return={'status': {'message': 'Migration process cancelled', 'code': 82}, > 'progress': 0} from=::ffff:10.X.X.X,54654, > flow_id=874d36d7-63f5-4b71-8a4d-6d9f3ec65858 (api:52) > > So no error on the vdsm side log. Interesting. The messages above indicate that the VM was attempted to migrate, but the migration got temporarily rejected on the destination due to the number of already running incoming migrations (the limit is 2 incoming migrations by default). Later, Vdsm was asked to cancel the outgoing migration and it successfully set a migration canceling flag. However the action was reported as an error to Engine, due to hitting the incoming migration limit on the destination. Maybe it's a bug, I'm not sure, resulting in minor confusion. Normally it shouldn't matter, the migration should be canceled shortly after anyway and Engine should be informed about that. However the migration apparently wasn't canceled here. I can't say what happened without complete Vdsm log. 
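(A side note for anyone inspecting a similar state: what Vdsm itself
currently reports for a VM can be dumped directly on the host. A rough
sketch — vdsm-client is available on 4.1 hosts, and the VM ID below is
simply the one from the engine log quoted above:

  vdsm-client Host getVMList
  vdsm-client VM getStats vmID=26d37e43-32e2-4e55-9c1e-1438518d5021

A VM that is really migrating shows migration progress in its stats; one
that is only marked as migrating on the Engine side looks like an ordinary
running VM here.)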
One possible reason is that the migration has been waiting on completion
of another migration outgoing from the source (only one outgoing migration
at a time is allowed by default). In any case it seems the migration
either wasn't actually started at all, or it just started being set up and
that has never been completely finished.

> I already tried restarting ovirt-engine but it didn't work.

Here the problem is clearly on the Vdsm side.

> Could someone shed some light on how to cancel the migration status for
> these machines? All of them seem to be running on the same host.

Did the VMs get unblocked in the meantime? I can't know what's the actual
state of the given VMs without seeing the complete Vdsm log, so it's
difficult to give good advice. I think that a Vdsm restart on the given
host would help, BUT it's generally not a very good idea to restart Vdsm
if any real migration, outgoing or incoming, is running on the host. VMs
that aren't actually being migrated at all (despite being reported as
migrating) should simply return to Up state after the restart, but VMs
with any real migration action pending might return to Up state without
proper cleanup, resulting in a different kind of mess or maybe something
even worse (things should improve in oVirt 4.2, but it's still good to
avoid Vdsm restarts with migrations running).

Regards,
Milan

From omachace at redhat.com  Fri Mar 2 14:21:04 2018
From: omachace at redhat.com (Ondra Machacek)
Date: Fri, 2 Mar 2018 15:21:04 +0100
Subject: [ovirt-users] oVirt API (4.0 and 4.1) not reporting vms running on a given storage domain
In-Reply-To: 
References: 
Message-ID: <836ea137-aa06-fe45-30f8-f546e8cd184a@redhat.com>

Hi,

As per the documentation:

http://ovirt.github.io/ovirt-engine-api-model/4.1/#services/storage_domain_vms

that resource is used to list VMs on an export storage domain, not on a
data domain.

If you want to find VMs using a specific storage, you may use the
following query:

/ovirt-engine/api/vms?search=storage.name=nameofthestorage

On 03/01/2018 07:19 PM, Luca 'remix_tj' Lorenzetto wrote:
> Hello,
>
> I need to extract the list of the VMs running on a given storage domain.
> Copying some code from ansible's ovirt_storage_vms_facts simplified my
> work, but I stopped at a strange behavior: no VM is listed.
>
> I thought it was an issue with my code, but looking in more detail at
> the APIs I tried opening:
>
> ovirt-engine/api/storagedomains/52b661fe-609e-48f9-beab-f90165b868c4/vms
>
> And what I get is
>
>
>
> And this for all the storage domains available.
>
> Is there something wrong with the versions I'm running? Do I require
> some options in the query?
>
> I'm running RHV, so I can't upgrade to 4.2 yet
>
> Luca
>

From nicolas at devels.es  Fri Mar 2 14:25:17 2018
From: nicolas at devels.es (nicolas at devels.es)
Date: Fri, 02 Mar 2018 14:25:17 +0000
Subject: [ovirt-users] VMs stuck in migrating state
In-Reply-To: <87lgfava9h.fsf@redhat.com>
References: <87lgfava9h.fsf@redhat.com>
Message-ID: <78580de0bc71d3498bf07ffaa37ad935@devels.es>

Hi Milan,

On 2018-03-02 14:10, Milan Zamazal wrote:
> nicolas at devels.es writes:
>
>> >> When clicking on one of those machines and selecting 'Cancel >> migration', in the >> ovirt-engine log I see: >> >> 2018-02-26 08:52:07,588Z INFO >> [org.ovirt.engine.core.vdsbroker.vdsbroker.CancelMigrateVDSCommand] >> (org.ovirt.thread.pool-6-thread-36) >> [887dfbf9-dece-4f7b-90a8-dac02b849b7f] >> HostName = host2.domain.com >> 2018-02-26 08:52:07,588Z ERROR >> [org.ovirt.engine.core.vdsbroker.vdsbroker.CancelMigrateVDSCommand] >> (org.ovirt.thread.pool-6-thread-36) >> [887dfbf9-dece-4f7b-90a8-dac02b849b7f] >> Command 'CancelMigrateVDSCommand(HostName = host2.domain.com, >> CancelMigrationVDSParameters:{runAsync='true', >> hostId='e63b9146-10c4-47ad-bd6c-f053a8c5b4eb', >> vmId='26d37e43-32e2-4e55-9c1e-1438518d5021'})' execution failed: >> VDSGenericException: VDSErrorException: Failed to CancelMigrateVDS, >> error = >> Migration process cancelled, code = 82 >> >> On the vdsm side I see: >> >> 2018-02-26 08:56:19,396+0000 INFO (jsonrpc/0) [vdsm.api] START >> migrateCancel() >> from=::ffff:10.X.X.X,54654, >> flow_id=874d36d7-63f5-4b71-8a4d-6d9f3ec65858 >> (api:46) >> 2018-02-26 08:56:19,398+0000 INFO (jsonrpc/0) [vdsm.api] FINISH >> migrateCancel >> return={'status': {'message': 'Migration process cancelled', 'code': >> 82}, >> 'progress': 0} from=::ffff:10.X.X.X,54654, >> flow_id=874d36d7-63f5-4b71-8a4d-6d9f3ec65858 (api:52) >> >> So no error on the vdsm side log. > > Interesting. The messages above indicate that the VM was attempted to > migrate, but the migration got temporarily rejected on the destination > due to the number of already running incoming migrations (the limit is > 2 > incoming migrations by default). Later, Vdsm was asked to cancel the > outgoing migration and it successfully set a migration canceling flag. > However the action was reported as an error to Engine, due to hitting > the incoming migration limit on the destination. Maybe it's a bug, I'm > not sure, resulting in minor confusion. Normally it shouldn't matter, > the migration should be canceled shortly after anyway and Engine should > be informed about that. > > However the migration apparently wasn't canceled here. I can't say > what > happened without complete Vdsm log. One of possible reasons is that > the > migration has been waiting on completion of another migration outgoing > from the source (only one outgoing migration at the time is allowed by > default). In any case it seems the migration either wasn't actually > started at all or it just started being set up and that has never been > completely finished. > I'm attaching the log. Basically the storage backend was restarted by fencing and then this issue happened. This was on 26/02 at about 08:52 (log time). >> I already tried restarting ovirt-engine but it didn't work. > > Here the problem is clearly on the Vdsm side. > >> Could someone shed some light on how to cancel the migration status >> for these >> machines? All of them seem to be running on the same host. > > Did the VMs get unblocked in the meantime? I can't know what's the No, they didn't. They're still in a "Migrating" state. > actual state of the given VMs without seeing the complete Vdsm log, so > it's difficult to give a good advice. I think that Vdsm restart on the > given host would help BUT it's generally not a very good idea to > restart > Vdsm if any real migration, outgoing or incoming, is running on the > host. 
> VMs that aren't actually being migrated at all (despite being reported
> as migrating) should simply return to Up state after the restart,
> but VMs with any real migration action pending might return to Up
> state without proper cleanup, resulting in a different kind of mess or
> maybe something even worse (things should improve in oVirt 4.2, but it's
> still good to avoid Vdsm restarts with migrations running).
>

I assume this is not a real migration, as it has been in this state for
several days. Would you advise restarting vdsm in this case then?

Thank you.

> Regards,
> Milan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: vdsm.log.20.xz
Type: application/x-xz
Size: 963208 bytes
Desc: not available
URL: 

From artem.tambovskiy at gmail.com  Fri Mar 2 14:52:28 2018
From: artem.tambovskiy at gmail.com (Artem Tambovskiy)
Date: Fri, 02 Mar 2018 14:52:28 +0000
Subject: [ovirt-users] Issue with deploy HE on another host 4.1
In-Reply-To: 
References: 
Message-ID: 

Hello Krzysztof,

As I can see, both hosts have the same host_id=1, which is causing the
conflict.

You need to fix this manually on the newly deployed host and restart
ovirt-ha-agent. You may run the following command on the engine VM in
order to find the correct host_id values for your hosts:

sudo -u postgres psql -d engine -c 'select vds_name, vds_spm_id from vds'

Once you have fixed host_id and restarted the agents, I would advise
checking sanlock client status in order to see that there are no
conflicts and the hosts are using the correct host_id values.

Regards,
Artem

On Fri, 2 Mar 2018 at 17:10, Krzysztof Wajda wrote:

> Hi,
>
> I have an issue with Hosted Engine when I try to deploy it via the GUI
> on another host. There are no errors after the deploy, but in the GUI I
> see only a "Not active" status for HE, and hosted-engine --status shows
> only 1 node (on both nodes the same output). In hosted-engine.conf I see
> that host_id is the same as it is on the primary host with HE !?
Issue looks quite similar like in > > http://lists.ovirt.org/pipermail/users/2018-February/086932.html > > Here is config file on newly deployed node : > > ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem > gateway=192.168.8.1 > iqn= > conf_image_UUID=f2813205-4b0c-45f3-a9cb-3748f61d2194 > ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem > sdUUID=7e7a275c-6939-4f79-85f6-d695209951ea > connectionUUID=81a2f9a3-2efe-448f-b305-e22543068044 > conf_volume_UUID=d6b7e25c-9912-47ff-b104-9d424b9f34b8 > user= > host_id=1 > bridge=ovirtmgmt > metadata_image_UUID=fe95f22e-b468-4adf-a754-21d419ae3e67 > spUUID=00000000-0000-0000-0000-000000000000 > mnt_options= > fqdn=dev-ovirtengine0.somedomain.it > portal= > vm_disk_id=febde231-92cc-4599-8f55-816f63132739 > metadata_volume_UUID=7ebaf268-15ec-4c76-ba89-b5e2dc143830 > vm_disk_vol_id=e3920b18-4467-44f8-b2d0-629b3b1d1a58 > domainType=fc > port= > console=vnc > ca_subject="C=EN, L=Test, O=Test, CN=Test" > password= > vmid=3f7d9c1d-6c3e-4b96-b85d-d240f3bf9b76 > lockspace_image_UUID=49e318ad-63a3-4efd-977c-33b8c4c93728 > lockspace_volume_UUID=91bcb5cf-006c-42b4-b419-6ac9f841f50a > vdsm_use_ssl=true > storage=None > conf=/var/run/ovirt-hosted-engine-ha/vm.conf > > This is original one: > > fqdn=dev-ovirtengine0.somedomain.it > vm_disk_id=febde231-92cc-4599-8f55-816f63132739 > vm_disk_vol_id=e3920b18-4467-44f8-b2d0-629b3b1d1a58 > vmid=3f7d9c1d-6c3e-4b96-b85d-d240f3bf9b76 > storage=None > mnt_options= > conf=/var/run/ovirt-hosted-engine-ha/vm.conf > host_id=1 > console=vnc > domainType=fc > spUUID=00000000-0000-0000-0000-000000000000 > sdUUID=7e7a275c-6939-4f79-85f6-d695209951ea > connectionUUID=81a2f9a3-2efe-448f-b305-e22543068044 > ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem > ca_subject="C=EN, L=Test, O=Test, CN=Test" > vdsm_use_ssl=true > gateway=192.168.8.1 > bridge=ovirtmgmt > metadata_volume_UUID=7ebaf268-15ec-4c76-ba89-b5e2dc143830 > metadata_image_UUID=fe95f22e-b468-4adf-a754-21d419ae3e67 > lockspace_volume_UUID=91bcb5cf-006c-42b4-b419-6ac9f841f50a > lockspace_image_UUID=49e318ad-63a3-4efd-977c-33b8c4c93728 > conf_volume_UUID=d6b7e25c-9912-47ff-b104-9d424b9f34b8 > conf_image_UUID=f2813205-4b0c-45f3-a9cb-3748f61d2194 > > # The following are used only for iSCSI storage > iqn= > portal= > user= > password= > port= > > Packages: > > ovirt-imageio-daemon-1.0.0-1.el7.noarch > ovirt-host-deploy-1.6.7-1.el7.centos.noarch > ovirt-release41-4.1.9-1.el7.centos.noarch > ovirt-setup-lib-1.1.4-1.el7.centos.noarch > ovirt-hosted-engine-ha-2.1.8-1.el7.centos.noarch > ovirt-hosted-engine-setup-2.1.4-1.el7.centos.noarch > ovirt-vmconsole-1.0.4-1.el7.centos.noarch > ovirt-vmconsole-host-1.0.4-1.el7.centos.noarch > ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch > ovirt-imageio-common-1.0.0-1.el7.noarch > > Output from agent.log > > MainThread::INFO::2018-03-02 > 15:01:47,279::brokerlink::141::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) > Success, id 140493346760912 > MainThread::INFO::2018-03-02 > 15:01:51,011::brokerlink::179::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain) > Success, id 140493346759824 > MainThread::INFO::2018-03-02 > 15:01:51,011::hosted_engine::601::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) > Broker initialized, all submonitors started > MainThread::INFO::2018-03-02 > 15:01:51,045::hosted_engine::704::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock) > Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file: > 
/var/run/vdsm/storage/7e7a275c-6939-4f79-85f6-d695209951ea/49e318ad-63a3-4efd-977c-33b8c4c93728/91bcb5cf-006c-42b4-b419-6ac9f841f50a) > MainThread::INFO::2018-03-02 > 15:04:12,058::hosted_engine::745::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock) > Failed to acquire the lock. Waiting '5's before the next attempt > > Regards > > Krzysztof > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mzamazal at redhat.com Fri Mar 2 15:34:04 2018 From: mzamazal at redhat.com (Milan Zamazal) Date: Fri, 02 Mar 2018 16:34:04 +0100 Subject: [ovirt-users] VMs stuck in migrating state In-Reply-To: <78580de0bc71d3498bf07ffaa37ad935@devels.es> (nicolas@devels.es's message of "Fri, 02 Mar 2018 14:25:17 +0000") References: <87lgfava9h.fsf@redhat.com> <78580de0bc71d3498bf07ffaa37ad935@devels.es> Message-ID: <87h8pyv6dv.fsf@redhat.com> nicolas at devels.es writes: > El 2018-03-02 14:10, Milan Zamazal escribi?: >> nicolas at devels.es writes: >> >>> We're running 4.1.9 and during the weekend we had a storage issue that >>> seemed >>> to leave some hosts in an strange state. One of the hosts has almost all VMs >>> migrating (although it seems to not actually being migrating them) and the >>> migration state cannot be cancelled. >>> >>> When clicking on one of those machines and selecting 'Cancel migration', in >>> the >>> ovirt-engine log I see: >>> >>> 2018-02-26 08:52:07,588Z INFO >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CancelMigrateVDSCommand] >>> (org.ovirt.thread.pool-6-thread-36) [887dfbf9-dece-4f7b-90a8-dac02b849b7f] >>> HostName = host2.domain.com >>> 2018-02-26 08:52:07,588Z ERROR >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CancelMigrateVDSCommand] >>> (org.ovirt.thread.pool-6-thread-36) [887dfbf9-dece-4f7b-90a8-dac02b849b7f] >>> Command 'CancelMigrateVDSCommand(HostName = host2.domain.com, >>> CancelMigrationVDSParameters:{runAsync='true', >>> hostId='e63b9146-10c4-47ad-bd6c-f053a8c5b4eb', >>> vmId='26d37e43-32e2-4e55-9c1e-1438518d5021'})' execution failed: >>> VDSGenericException: VDSErrorException: Failed to CancelMigrateVDS, error = >>> Migration process cancelled, code = 82 >>> >>> On the vdsm side I see: >>> >>> 2018-02-26 08:56:19,396+0000 INFO (jsonrpc/0) [vdsm.api] START >>> migrateCancel() >>> from=::ffff:10.X.X.X,54654, flow_id=874d36d7-63f5-4b71-8a4d-6d9f3ec65858 >>> (api:46) >>> 2018-02-26 08:56:19,398+0000 INFO (jsonrpc/0) [vdsm.api] FINISH >>> migrateCancel >>> return={'status': {'message': 'Migration process cancelled', 'code': 82}, >>> 'progress': 0} from=::ffff:10.X.X.X,54654, >>> flow_id=874d36d7-63f5-4b71-8a4d-6d9f3ec65858 (api:52) >>> >>> So no error on the vdsm side log. >> >> Interesting. The messages above indicate that the VM was attempted to >> migrate, but the migration got temporarily rejected on the destination >> due to the number of already running incoming migrations (the limit is 2 >> incoming migrations by default). Later, Vdsm was asked to cancel the >> outgoing migration and it successfully set a migration canceling flag. >> However the action was reported as an error to Engine, due to hitting >> the incoming migration limit on the destination. Maybe it's a bug, I'm >> not sure, resulting in minor confusion. Normally it shouldn't matter, >> the migration should be canceled shortly after anyway and Engine should >> be informed about that. 
>> >> However the migration apparently wasn't canceled here. I can't say what >> happened without complete Vdsm log. One of possible reasons is that the >> migration has been waiting on completion of another migration outgoing >> from the source (only one outgoing migration at the time is allowed by >> default). In any case it seems the migration either wasn't actually >> started at all or it just started being set up and that has never been >> completely finished. >> > > I'm attaching the log. Basically the storage backend was restarted by fencing > and then this issue happened. This was on 26/02 at about 08:52 (log time). Thank you for the log, but VMs are already ?migrating? at its beginning, there had to be some problem already earlier. >>> I already tried restarting ovirt-engine but it didn't work. >> >> Here the problem is clearly on the Vdsm side. >> >>> Could someone shed some light on how to cancel the migration status for >>> these >>> machines? All of them seem to be running on the same host. >> >> Did the VMs get unblocked in the meantime? I can't know what's the > > No, they didn't. They're still in a "Migrating" state. > >> actual state of the given VMs without seeing the complete Vdsm log, so >> it's difficult to give a good advice. I think that Vdsm restart on the >> given host would help BUT it's generally not a very good idea to restart >> Vdsm if any real migration, outgoing or incoming, is running on the >> host. VMs that aren't actually being migrated (despite being reported >> as migrating) at all should simply return to Up state after the restart, >> but VMs with any real migration action pending might get return to Up >> state without proper cleanup, resulting in a different kind of mess or >> maybe something even worse (things should improve in oVirt 4.2, but it's >> still good to avoid Vdsm restarts with migrations running). >> > > I assume this is not a real migration as it has been in this state for several > days. Would you advice restarting vdsm in this case then? I'd say try it. Since nothing has changed for several days, restarting Vdsm looks like appropriate action at this point. Just don't make a habit of it :-). Regards, Milan From simone.bruckner at fabasoft.com Fri Mar 2 16:02:31 2018 From: simone.bruckner at fabasoft.com (Bruckner, Simone) Date: Fri, 2 Mar 2018 16:02:31 +0000 Subject: [ovirt-users] Cannot activate storage domain In-Reply-To: <2CB4E8C8E00E594EA06D4AC427E429920FE56290@fabamailserver.fabagl.fabasoft.com> References: <2CB4E8C8E00E594EA06D4AC427E429920FE500D1@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE56290@fabamailserver.fabagl.fabasoft.com> Message-ID: <2CB4E8C8E00E594EA06D4AC427E429920FE5A323@fabamailserver.fabagl.fabasoft.com> Hi all, I managed to get the inactive storage domain to maintenance by stopping all running VMs that were using it, but I am still not able to activate it. Trying to activate results in the following events: For each host: VDSM command GetVGInfoVDS failed: Volume Group does not exist: (u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',) And finally: VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',) Is there anything I can do to recover this storage domain? Thank you and all the best, Simone Von: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] Im Auftrag von Bruckner, Simone Gesendet: Donnerstag, 1. 
M?rz 2018 17:57 An: users at ovirt.org Betreff: Re: [ovirt-users] Cannot activate storage domain Hi, we are still struggling getting a storage domain online again. We tried to put the storage domain in maintenance mode, that led to "Failed to update OVF disks 809cc8d7-7687-46cf-a342-3be48674a9b3, OVF data isn't updated on those OVF stores". Trying again with ignoring OVF update failures put the storage domain in "preparing for maintenance". We see the following message on all hosts: "Error releasing host id 26 for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 (monitor:578)". Querying the storage domain using vdsm-client on the SPM resulted in # vdsm-client StorageDomain getInfo "storagedomainID"="b83c159c-4ad6-4613-ba16-bab95ccd10c0" vdsm-client: Command StorageDomain.getInfo with args {'storagedomainID': 'b83c159c-4ad6-4613-ba16-bab95ccd10c0'} failed: (code=358, message=Storage domain does not exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)) Any ideas? Thank you and all the best, Simone Von: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] Im Auftrag von Bruckner, Simone Gesendet: Mittwoch, 28. Februar 2018 15:52 An: users at ovirt.org Betreff: [ovirt-users] Cannot activate storage domain Hi all, we run a small oVirt installation that we also use for automated testing (automatically creating, dropping vms). We got an inactive FC storage domain that we cannot activate any more. We see several events at that time starting with: VM perftest-c17 is down with error. Exit message: Unable to get volume size for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 volume 686376c1-4be1-44c3-89a3-0a8addc8fdf2. Trying to activate the strorage domain results in the following alert event for each host: VDSM command GetVGInfoVDS failed: Volume Group does not exist: (u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',) And after those messages from all hosts we get: VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',) Failed to activate Storage Domain VMHOST_LUN_205 (Data Center Production) by Invalid status on Data Center Production. Setting status to Non Responsive. Storage Pool Manager runs on Host vmhost003.fabagl.fabasoft.com (Address: vmhost003.fabagl.fabasoft.com), Data Center Production. Checking the hosts with multipath -ll we see the LUN without errors. We run oVirt 4.2.1 on CentOS 7.4. Hosts are CentOS 7.4 hosts with oVirt installed using oVirt engine. Hosts are connected to about 30 FC LUNs (8 TB each) on two all-flash storage arrays. Thank you, Simone Bruckner -------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Fri Mar 2 16:14:24 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Fri, 2 Mar 2018 17:14:24 +0100 Subject: [ovirt-users] Issue with deploy HE on another host 4.1 In-Reply-To: References: Message-ID: Thanks Artem, all right. Krzysztof, can you please attach or send me your host-deploy logs for this additional host from your engine VM to let me try understanding how it got a wrong ID? On Fri, Mar 2, 2018 at 3:52 PM, Artem Tambovskiy wrote: > Hello Krzysztof, > > As I can see both hosts have the same host_id=1, which causing conflict. > > You need this this manually on the newly deployed host and restart > ovirt-ha-agent. > You may run following command on engine VM in order to find correct > host_id values for your hosts. 
> > sudo -u postgres psql -d engine -c 'select vds_name, vds_spm_id from vds' > > Once you fixed host_id and restarted agents, i would advise to check > sanlock client status in order to see that there are no conflicts and hosts > using correct host_id values. > > Regards, > Artem > > ??, 2 ???. 2018 ?., 17:10 Krzysztof Wajda : > >> Hi, >> >> I have an issue with Hosted Engine when I try to deploy via gui on >> another host. There is no errors after deploy but in GUI I see only "Not >> active" status HE, and hosted-engine --status shows only 1 node (on both >> nodes same output). In hosted-engine.conf I see that host_id is the same as >> it is on primary host with HE !? Issue looks quite similar like in >> >> http://lists.ovirt.org/pipermail/users/2018-February/086932.html >> >> Here is config file on newly deployed node : >> >> ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem >> gateway=192.168.8.1 >> iqn= >> conf_image_UUID=f2813205-4b0c-45f3-a9cb-3748f61d2194 >> ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem >> sdUUID=7e7a275c-6939-4f79-85f6-d695209951ea >> connectionUUID=81a2f9a3-2efe-448f-b305-e22543068044 >> conf_volume_UUID=d6b7e25c-9912-47ff-b104-9d424b9f34b8 >> user= >> host_id=1 >> bridge=ovirtmgmt >> metadata_image_UUID=fe95f22e-b468-4adf-a754-21d419ae3e67 >> spUUID=00000000-0000-0000-0000-000000000000 >> mnt_options= >> fqdn=dev-ovirtengine0.somedomain.it >> portal= >> vm_disk_id=febde231-92cc-4599-8f55-816f63132739 >> metadata_volume_UUID=7ebaf268-15ec-4c76-ba89-b5e2dc143830 >> vm_disk_vol_id=e3920b18-4467-44f8-b2d0-629b3b1d1a58 >> domainType=fc >> port= >> console=vnc >> ca_subject="C=EN, L=Test, O=Test, CN=Test" >> password= >> vmid=3f7d9c1d-6c3e-4b96-b85d-d240f3bf9b76 >> lockspace_image_UUID=49e318ad-63a3-4efd-977c-33b8c4c93728 >> lockspace_volume_UUID=91bcb5cf-006c-42b4-b419-6ac9f841f50a >> vdsm_use_ssl=true >> storage=None >> conf=/var/run/ovirt-hosted-engine-ha/vm.conf >> >> This is original one: >> >> fqdn=dev-ovirtengine0.somedomain.it >> vm_disk_id=febde231-92cc-4599-8f55-816f63132739 >> vm_disk_vol_id=e3920b18-4467-44f8-b2d0-629b3b1d1a58 >> vmid=3f7d9c1d-6c3e-4b96-b85d-d240f3bf9b76 >> storage=None >> mnt_options= >> conf=/var/run/ovirt-hosted-engine-ha/vm.conf >> host_id=1 >> console=vnc >> domainType=fc >> spUUID=00000000-0000-0000-0000-000000000000 >> sdUUID=7e7a275c-6939-4f79-85f6-d695209951ea >> connectionUUID=81a2f9a3-2efe-448f-b305-e22543068044 >> ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem >> ca_subject="C=EN, L=Test, O=Test, CN=Test" >> vdsm_use_ssl=true >> gateway=192.168.8.1 >> bridge=ovirtmgmt >> metadata_volume_UUID=7ebaf268-15ec-4c76-ba89-b5e2dc143830 >> metadata_image_UUID=fe95f22e-b468-4adf-a754-21d419ae3e67 >> lockspace_volume_UUID=91bcb5cf-006c-42b4-b419-6ac9f841f50a >> lockspace_image_UUID=49e318ad-63a3-4efd-977c-33b8c4c93728 >> conf_volume_UUID=d6b7e25c-9912-47ff-b104-9d424b9f34b8 >> conf_image_UUID=f2813205-4b0c-45f3-a9cb-3748f61d2194 >> >> # The following are used only for iSCSI storage >> iqn= >> portal= >> user= >> password= >> port= >> >> Packages: >> >> ovirt-imageio-daemon-1.0.0-1.el7.noarch >> ovirt-host-deploy-1.6.7-1.el7.centos.noarch >> ovirt-release41-4.1.9-1.el7.centos.noarch >> ovirt-setup-lib-1.1.4-1.el7.centos.noarch >> ovirt-hosted-engine-ha-2.1.8-1.el7.centos.noarch >> ovirt-hosted-engine-setup-2.1.4-1.el7.centos.noarch >> ovirt-vmconsole-1.0.4-1.el7.centos.noarch >> ovirt-vmconsole-host-1.0.4-1.el7.centos.noarch >> ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch >> ovirt-imageio-common-1.0.0-1.el7.noarch >> >> Output from 
agent.log >> >> MainThread::INFO::2018-03-02 15:01:47,279::brokerlink::141: >> :ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) >> Success, id 140493346760912 >> MainThread::INFO::2018-03-02 15:01:51,011::brokerlink::179: >> :ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain) >> Success, id 140493346759824 >> MainThread::INFO::2018-03-02 15:01:51,011::hosted_engine:: >> 601::ovirt_hosted_engine_ha.agent.hosted_engine. >> HostedEngine::(_initialize_broker) Broker initialized, all submonitors >> started >> MainThread::INFO::2018-03-02 15:01:51,045::hosted_engine:: >> 704::ovirt_hosted_engine_ha.agent.hosted_engine. >> HostedEngine::(_initialize_sanlock) Ensuring lease for lockspace >> hosted-engine, host id 1 is acquired (file: /var/run/vdsm/storage/ >> 7e7a275c-6939-4f79-85f6-d695209951ea/49e318ad-63a3- >> 4efd-977c-33b8c4c93728/91bcb5cf-006c-42b4-b419-6ac9f841f50a) >> MainThread::INFO::2018-03-02 15:04:12,058::hosted_engine:: >> 745::ovirt_hosted_engine_ha.agent.hosted_engine. >> HostedEngine::(_initialize_sanlock) Failed to acquire the lock. Waiting >> '5's before the next attempt >> >> Regards >> >> Krzysztof >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzetto.luca at gmail.com Fri Mar 2 16:24:17 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Fri, 2 Mar 2018 17:24:17 +0100 Subject: [ovirt-users] oVirt API (4.0 and 4.1) not reporting vms running on a given storage domain In-Reply-To: <836ea137-aa06-fe45-30f8-f546e8cd184a@redhat.com> References: <836ea137-aa06-fe45-30f8-f546e8cd184a@redhat.com> Message-ID: On Fri, Mar 2, 2018 at 3:21 PM, Ondra Machacek wrote: > Hi, > > As per documentation: > > http://ovirt.github.io/ovirt-engine-api-model/4.1/#services/storage_domain_vms > > That resource is used to list VMs on export storage domain, not on data > domain. > > If you want to find VMs using specific storage you may use following query: > > /ovirt-engine/api/vms?search=storage.name=nameofthestorage Hi Ondra, thanks. So what's the purpose of the ovirt_storage_vms_facts, only working with export domains (which has been deprecated?) Luca -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet ? la pi? grande biblioteca del mondo. Ma il problema ? 
che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From omachace at redhat.com Fri Mar 2 16:59:06 2018 From: omachace at redhat.com (Ondra Machacek) Date: Fri, 2 Mar 2018 17:59:06 +0100 Subject: [ovirt-users] oVirt API (4.0 and 4.1) not reporting vms running on a given storage domain In-Reply-To: References: <836ea137-aa06-fe45-30f8-f546e8cd184a@redhat.com> Message-ID: <80bd13d7-9f17-5ea6-1196-72fa4ae84bd2@redhat.com> On 03/02/2018 05:24 PM, Luca 'remix_tj' Lorenzetto wrote: > On Fri, Mar 2, 2018 at 3:21 PM, Ondra Machacek wrote: >> Hi, >> >> As per documentation: >> >> http://ovirt.github.io/ovirt-engine-api-model/4.1/#services/storage_domain_vms >> >> That resource is used to list VMs on export storage domain, not on data >> domain. >> >> If you want to find VMs using specific storage you may use following query: >> >> /ovirt-engine/api/vms?search=storage.name=nameofthestorage > > Hi Ondra, > > thanks. So what's the purpose of the ovirt_storage_vms_facts, only > working with export domains (which has been deprecated?) Mainly listing the unregistered VMs, so it works as export domain. > > Luca > > From lorenzetto.luca at gmail.com Fri Mar 2 17:58:55 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Fri, 2 Mar 2018 18:58:55 +0100 Subject: [ovirt-users] oVirt API (4.0 and 4.1) not reporting vms running on a given storage domain In-Reply-To: <80bd13d7-9f17-5ea6-1196-72fa4ae84bd2@redhat.com> References: <836ea137-aa06-fe45-30f8-f546e8cd184a@redhat.com> <80bd13d7-9f17-5ea6-1196-72fa4ae84bd2@redhat.com> Message-ID: Got it. I'm changing my code accordingly. Many thanks Luca Il 2 mar 2018 5:59 PM, "Ondra Machacek" ha scritto: > On 03/02/2018 05:24 PM, Luca 'remix_tj' Lorenzetto wrote: > >> On Fri, Mar 2, 2018 at 3:21 PM, Ondra Machacek >> wrote: >> >>> Hi, >>> >>> As per documentation: >>> >>> http://ovirt.github.io/ovirt-engine-api-model/4.1/#services/ >>> storage_domain_vms >>> >>> That resource is used to list VMs on export storage domain, not on data >>> domain. >>> >>> If you want to find VMs using specific storage you may use following >>> query: >>> >>> /ovirt-engine/api/vms?search=storage.name=nameofthestorage >>> >> >> Hi Ondra, >> >> thanks. So what's the purpose of the ovirt_storage_vms_facts, only >> working with export domains (which has been deprecated?) >> > > Mainly listing the unregistered VMs, so it works as export domain. > > >> Luca >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at bootc.boo.tc Sat Mar 3 14:08:59 2018 From: lists at bootc.boo.tc (Chris Boot) Date: Sat, 3 Mar 2018 14:08:59 +0000 Subject: [ovirt-users] Change management network In-Reply-To: References: <8e2a2323-8d72-c908-e23d-e5a49e1e0c41@bootc.boo.tc> Message-ID: <59ac16ed-2437-420f-5595-0bda38cf8b59@boo.tc> On 2018-02-28 11:58, Petr Horacek wrote: > What I seem to be stuck on is changing the cluster on the HostedEngine. > I actually have it running on a host in the new cluster, but it still > appears in the old cluster on the web interface with no way to > change this. > > Martin, is such thing possible in HostedEngine? In the end I enabled global maintenance mode, stopped hosted-engine on my HE VM, then changed the cluster and CPU profile (I think?) from within the database, and started it back up again. That then permitted me to remove the old cluster using the UI. 
I did several tests stopping and starting the engine and the VM comes up fine, so I believe that was enough to make the change, it's just rather nasty. HTH, Chris -- Chris Boot bootc at boo.tc From ykaul at redhat.com Sun Mar 4 08:16:05 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Sun, 4 Mar 2018 10:16:05 +0200 Subject: [ovirt-users] oVirt 4.2.x and ManageIQ : Adding 'cfme' credentials In-Reply-To: <89281469-cdc0-9751-09e2-1689ba7ab04b@ecarnot.net> References: <89281469-cdc0-9751-09e2-1689ba7ab04b@ecarnot.net> Message-ID: On Thu, Mar 1, 2018 at 4:50 PM, Nicolas Ecarnot wrote: > Le 01/03/2018 ? 15:00, Yaniv Kaul a ?crit : > >> >> >> On Thu, Mar 1, 2018 at 2:13 PM, Nicolas Ecarnot > > wrote: >> >> Hello, >> >> As for my 4 previous oVirt DCs, I'm trying to add them to ManageIQ >> providers. >> >> I tried to follow this guide : >> >> https://access.redhat.com/documentation/en-us/red_hat_cloudf >> orms/4.6/html-single/deployment_planning_guide/#data_ >> collection_for_rhev_33_34 >> > forms/4.6/html-single/deployment_planning_guide/#data_ >> collection_for_rhev_33_34> >> >> But when trying to run psql, the shell tells me the command is not >> found. >> >> >> > Hello Yanniv, > > Thank you for answering. > > Because you are probably on PG 9.5 SCL, I assume? >> > > I've never heard about that before today. > I installed a bare-metal CentOS 7.4 on which I installed oVirt 4.2. > I saw no reference to SCL nowhere, neither during the setup, neither in > the oVirt install documentation. > > How an average user is supposed to behave in such a situation? > (In my case, as usual, I read and read again) > An average user does not touch the database. But you are right, we should mention it somewhere. > > Couldn't the Redhat documentation mentioned above be more accurate? Red Hat did not release 4.2 yet. > > > Something like 'scl enable rh-postgrsql95' should help. >> > > Not that much... > > root at serv-mvm-prds01:/etc/ovirt-engine-setup.conf.d# cd /tmp > root at serv-mvm-prds01:/tmp# su - postgres > Derni?re connexion : jeudi 1 mars 2018 ? 15:42:40 CET sur pts/2 > -bash-4.2$ scl enable rh-postgrsql95 > Need at least 3 arguments. > Run scl --help to get help. > > https://www.softwarecollections.org/en/scls/rhscl/rh-postgresql95/ provide better information than I do... Y. > -- > Nicolas ECARNOT > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sleviim at redhat.com Sun Mar 4 08:48:16 2018 From: sleviim at redhat.com (Shani Leviim) Date: Sun, 4 Mar 2018 10:48:16 +0200 Subject: [ovirt-users] cannot remove vm. In-Reply-To: References: Message-ID: Hi Nick, You can try the taskcleaner script on dbutils: PGPASSWORD=... /usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh --help *Regards,* *Shani Leviim* On Thu, Mar 1, 2018 at 6:01 PM, nicola gentile wrote: > Hi, > I have a problem. I try to remove a pool and than I remove every vm > but one of this display the message "Cannot remove VM. Related > operation is currently in progress. Please try again later." > > I try to unlock with this command > > PGPASSWORD=... 
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh > -t all -u engine > > but not working > > in /var/log/ovirt-engine/engine.log > > 2018-03-01 17:02:47,799+01 INFO > [org.ovirt.engine.core.bll.RemoveVmCommand] (default task-33) > [0217d710-afb6-4450-9897-02748d871aa1] Failed to Acquire Lock to > object 'EngineLock:{exclusiveLocks='[d623ad44-a645-4fd0-9993- > d21374e99eb5=VM]', > sharedLocks=''}' > 2018-03-01 17:02:47,799+01 WARN > [org.ovirt.engine.core.bll.RemoveVmCommand] (default task-33) > [0217d710-afb6-4450-9897-02748d871aa1] Validation of action 'RemoveVm' > failed for user admin at internal-authz. Reasons: > VAR__ACTION__REMOVE,VAR__TYPE__VM,ACTION_TYPE_FAILED_OBJECT_LOCKED > > please help > > thanks > > Nick > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahadas at redhat.com Sun Mar 4 09:05:07 2018 From: ahadas at redhat.com (Arik Hadas) Date: Sun, 4 Mar 2018 11:05:07 +0200 Subject: [ovirt-users] cannot remove vm. In-Reply-To: References: Message-ID: On Thu, Mar 1, 2018 at 6:01 PM, nicola gentile wrote: > Hi, > I have a problem. I try to remove a pool and than I remove every vm > but one of this display the message "Cannot remove VM. Related > operation is currently in progress. Please try again later." > I try to unlock with this command > > PGPASSWORD=... /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh > -t all -u engine > > but not working > Yeah, that wouldn't work since the VM is locked in-memory (as opposed to being locked in the database). > > in /var/log/ovirt-engine/engine.log > > 2018-03-01 17:02:47,799+01 INFO > [org.ovirt.engine.core.bll.RemoveVmCommand] (default task-33) > [0217d710-afb6-4450-9897-02748d871aa1] Failed to Acquire Lock to > object 'EngineLock:{exclusiveLocks='[d623ad44-a645-4fd0-9993- > d21374e99eb5=VM]', > sharedLocks=''}' > 2018-03-01 17:02:47,799+01 WARN > [org.ovirt.engine.core.bll.RemoveVmCommand] (default task-33) > [0217d710-afb6-4450-9897-02748d871aa1] Validation of action 'RemoveVm' > failed for user admin at internal-authz. Reasons: > VAR__ACTION__REMOVE,VAR__TYPE__VM,ACTION_TYPE_FAILED_OBJECT_LOCKED > > please help > What's the status of the VMs that are attached to the pool? It may be that some of those VMs cannot be stopped and so they remain locked by the remove-pool operation, but hard to tell without a complete engine log. Anyway, you can restart the engine - it would probably release that in-memory lock that prevents you from removing the VM. > > thanks > > Nick > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreil1 at starlett.lv Sun Mar 4 11:27:50 2018 From: andreil1 at starlett.lv (Andrei V) Date: Sun, 4 Mar 2018 13:27:50 +0200 Subject: [ovirt-users] Can't move/copy VM disks between Data Centers In-Reply-To: References: <0D950DC4-A8E3-4A39-B557-5E122AA38DE6@starlett.lv> <4023F78E-1E84-439B-B89A-718C366B2C80@starlett.lv> Message-ID: <5eaafdac-1726-2f1b-769f-25e7613a30ca@starlett.lv> Hi, On 02/27/2018 04:29 PM, Fred Rolland wrote: > Hi, > > Just to make clear what you want to achieve: > - DC1 - local storage - host1 - VMs > - DC2 - local storage - host2 > > You want to move the VMs from DC1 to DC2. Yes, thanks, this is exactly what I want to accomplish. 
BTW, are export domains from different data centers visible to each other? If not, wouldn't be simpler to export VM#1 in DC #1 to Export domain #1, copy over ssh to DC #2 Export domain, and finally import it into DC #2. PS. I can't test right now myself, sitting home on sick leave.. > > What you can do: > - Add a shared storage domain to the DC#1 > - Move VM disk from local SD to shared storage domain > - Put shared storage domain to maintenance > - Detach shared storage from DC1 > - Attach shared storage to DC2 > - Activate shared storage > - You should be able to register the VM from the shared storage into > the DC2 > - If you want/need move disks from shared storage to local storage in DC2 > > Please test this flow with a dummy VM before doing on important VMs. > > Regards, > > Freddy > > On Mon, Feb 26, 2018 at 1:46 PM, Andrei Verovski > wrote: > > Hi, > > Thanks for clarification. I?m using 4.2. > Anyway, I have to define another data center with shared storage > domain (since data center with local storage domain can have only > 1 host), and the do what you have described. > > Is it possible to copy VM disks from 1 data center #1 local > storage domain to another data center #2 NFS storage domain, or > need to use export storage domain ? > > > >> On 26 Feb 2018, at 13:30, Fred Rolland > > wrote: >> >> Hi, >> Which version are you using? >> >> in 4.1 , the support of adding shared storage to local DC was >> added [1]. >> You can copy/move disks to the shared storage domain, then detach >> the SD and attach to another DC. >> >> In any case, you wont be able to live migrate VMs from the local >> DC, it is not supported. >> >> Regards, >> Fred >> >> [1] >> https://ovirt.org/develop/release-management/features/storage/sharedStorageDomainsAttachedToLocalDC/ >> >> >> On Fri, Feb 23, 2018 at 1:35 PM, Andrei V > > wrote: >> >> Hi, >> >> I have oVirt setup,?separate PC host engine + 2 nodes (#10 + >> #11) with local storage domains (internal RAIDs). >> 1st node #10 is currently active and can?t be turned off. >> >> Since oVirt doesn?t support more then 1 host in data center >> with local storage domain as described here: >> http://lists.ovirt.org/pipermail/users/2018-January/086118.html >> >> defined another data center with 1 node #11. >> >> Problem:? >> 1) can?t copy or move VM disks from node #10 (even of >> inactive VMs) to node #11, this node is NOT being shown as >> possible destination. >> 2) can?t migrate active VMs to node #11. >> 3) Added NFS shares to data center #1 -> node #10, but can?t >> change data center #1 -> storage type to Shared, because this >> operation requires detachment of local storage domains, which >> is not possible, several VMs are active and can?t be stopped. >> >> VM disks placed on local storage domains because of >> performance limitations of our 1Gbit network.? >> 2 VMs running our accounting/inventory control system, and >> are critical to NFS storage performance limits. >> >> How to solve this problem ? >> Thanks in advance. >> >> Andrei >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> >> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From Pax at acronis.com  Sun Mar 4 12:01:59 2018
From: Pax at acronis.com (Pavel Gashev)
Date: Sun, 4 Mar 2018 12:01:59 +0000
Subject: [ovirt-users] Q: VM disk speed penalty NFS vs Local data center storage
In-Reply-To: 
References: <2056C30C-2888-4394-A0F5-5E2DFA5716EA@starlett.lv>
Message-ID: 

Andrei,

You could try pNFS. It allows accessing data on local storage directly,
bypassing NFS and even the FS level. Unfortunately, it requires the SCSI
Persistent Reservation feature, so it would not work with all RAID
controllers.

From: on behalf of Yaniv Kaul
Date: Monday, 26 February 2018 at 14:49
To: Andrei Verovski
Cc: Ovirt Users
Subject: Re: [ovirt-users] Q: VM disk speed penalty NFS vs Local data center storage

On Mon, Feb 26, 2018 at 1:30 PM, Andrei Verovski wrote:

Hi,

Since oVirt doesn't support more than 1 host in a data center with a
local storage domain, as described here:
http://lists.ovirt.org/pipermail/users/2018-January/086118.html
I have to set up an NFS server on the node with the VMs (on the same
node), accessed via NFS. 10 GB shared storage is in the future plans, yet
right now we have only 2 nodes with local RAID on each.

Q: What is the VM disk speed penalty (approx %) of NFS vs local RAID in
oVirt data center storage?

Currently I have 2 VMs running our accounting/inventory control system,
which are critical to storage performance limits. 2 other VMs have very
low disk activity.

I don't know, but please remember there's both latency and throughput,
both of which are somewhat affected. Throughput will benefit from jumbo
frames, for example. Unfortunately it may affect latency a bit.
There was an interesting patch that bypassed NFS when the NFS storage was
local [1]. It was never completed and merged.
Lastly, chapter 6 of the hyper-converged guide (should be available in a
few hours) might be an interesting idea for you to consider - a single
Gluster that can later expand.
Y.

[1] https://gerrit.ovirt.org/#/c/68822/
[2] https://ovirt.org/documentation/gluster-hyperconverged/Gluster_Hyperconverged_Guide/

Thanks in advance
Andrei Verovski

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andreil1 at starlett.lv  Sun Mar 4 22:39:33 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Mon, 5 Mar 2018 00:39:33 +0200
Subject: [ovirt-users] Q: Can't connect to oVirt shell
Message-ID: <2144a8e3-2366-44a2-cd98-3f970d040bd4@starlett.lv>

Hi !

I'm trying to connect via Bash from the same machine where the oVirt
engine is installed:

ovirt-shell --url=http://node00.mydomain.com/api -u admin

After entering the password I got:

=== ERROR ===
[404] - Not Found

What is wrong here?
Thanks in advance.

Andrei
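A likely culprit, for anyone hitting the same 404: in oVirt 4.x the REST
API is served under /ovirt-engine/api over https, not under /api, and the
user name needs its profile suffix. A sketch of the adjusted call, keeping
the host name from the question above:

  ovirt-shell --url=https://node00.mydomain.com/ovirt-engine/api -u admin@internal

Note that ovirt-shell is deprecated in the 4.x series; the Python SDK and
the plain REST API are the supported routes going forward.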
From thomas.fecke at eset.de Mon Mar 5 06:24:47 2018
From: thomas.fecke at eset.de (Thomas Fecke)
Date: Mon, 5 Mar 2018 06:24:47 +0000
Subject: [ovirt-users] Any Kind of Storage IO Limitation?
Message-ID: <047d22559fb543f0ab04092ca22f4f05@DR1-XEXCH01-B.eset.corp>

Hey guys,

I got a kind of strange question:

We have some hypervisors connected to an x86 storage (NFS). The machines are connected to that storage via 10 Gbit. When I rsync some files we reach almost the maximum bandwidth.

But when I copy some VMs, templates, or do something storage related in oVirt, I can only reach 1000 Mbit. Is there any kind of "config limitation"?

My biggest problem: we work a lot with templates. When I deploy 10 VMs based on one template, the VMs get very slow and the storage seems to be the problem.

Thanks a lot

From jm3185951 at gmail.com Mon Mar 5 06:35:18 2018
From: jm3185951 at gmail.com (Jonathan Mathews)
Date: Mon, 5 Mar 2018 08:35:18 +0200
Subject: Re: [ovirt-users] Failure to upgrade Cluster Compatibility Version
In-Reply-To: References: Message-ID:

Good Day

I do apologize for the duplication, but is anyone able to advise on this?

On Wed, Feb 28, 2018 at 12:21 PM, Jonathan Mathews wrote:
> I have been upgrading my oVirt platform from 3.4 and I am trying to get to 4.2.
>
> I have managed to get the platform to 3.6, but need to upgrade the Cluster Compatibility Version.
>
> When I select 3.6 in the Cluster Compatibility Version and select OK, it highlights Compatibility Version in red (image attached).
>
> There are no errors displayed on screen, or in the /var/log/ovirt-engine/engine.log file.
>
> Please let me know if I am missing something and how I can resolve this?
>
> Thanks
> Jonathan

From tbaror at gmail.com Thu Mar 1 07:28:16 2018
From: tbaror at gmail.com (Tal Bar-Or)
Date: Thu, 1 Mar 2018 09:28:16 +0200
Subject: Re: [ovirt-users] Cannot activate host from maintenance mode
In-Reply-To: <2CB4E8C8E00E594EA06D4AC427E429920FE51639@fabamailserver.fabagl.fabasoft.com>
References: <2CB4E8C8E00E594EA06D4AC427E429920FE51639@fabamailserver.fabagl.fabasoft.com>
Message-ID:

Hi, sorry - by the time I saw this mail I had reinstalled the host from the oVirt host section, and that resolved the issue.
Thanks

On Wed, Feb 28, 2018 at 6:30 PM, Bruckner, Simone <simone.bruckner at fabasoft.com> wrote:
> Hi Martin,
>
> please find the logs attached. The storage domain became inactive at around 10:42am CET. One other thing to mention is that all VMs that were running on the inactive storage domain are still available.
>
> All the best,
> Simone
>
> From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On behalf of Martin Perina
> Sent: Wednesday, 28 February 2018 16:57
> To: Tal Bar-Or
> Cc: users
> Subject: Re: [ovirt-users] Cannot activate host from maintenance mode
>
> On 28 Feb 2018 10:14 am, "Tal Bar-Or" wrote:
>
> Hello,
>
> I have oVirt version 4.2.1.7-1.el7.centos. I did an upgrade according to the host indication, and since then I get the following error when trying to activate the host: "Cannot activate Host. Host has no unique id."
>
> Could you please share all engine logs with us so we can investigate?
> Thanks
>
> Martin
>
> Any idea how to fix this issue? Please advise.
> Thanks
>
> --
> Tal Bar-or

--
Tal Bar-or

From anastasiya.ruzhanskaya at frtk.ru Thu Mar 1 07:55:07 2018
From: anastasiya.ruzhanskaya at frtk.ru (Anastasiya Ruzhanskaya)
Date: Thu, 1 Mar 2018 08:55:07 +0100
Subject: [ovirt-users] Installation using virtual machines
Message-ID:

Hello!
I have read the documentation on the web site, but it is still not clear to me: is it possible to install the ovirt engine as well as guests on virtual machines, not on real hardware?

From sfm1977 at gmail.com Thu Mar 1 11:13:06 2018
From: sfm1977 at gmail.com (Sérgio Marques)
Date: Thu, 1 Mar 2018 11:13:06 +0000
Subject: [ovirt-users] Ovirt 4.2.1 Storage with Fibre Channel Protocol
Message-ID:

Hello,

I'm new to Ovirt, loving it, but I'm having some doubts when it comes to adding FCP storage. I have two clusters, one called Production and the other Development, each cluster with two nodes, and I want to configure a Data Domain. My doubts are:

1 - Must every node have a connection to the FCP? If only the SPM has it and it falls, who can connect to the storage?

2 - I have only one LUN in the EVA HSV300 and I want to add three storage domains (Data, ISO and Export). Is it possible? How? This is because when I add a data storage domain and choose the only available LUN, it warns me that all data is going to be lost: "This operation might be unrecoverable and destructive".

3 - Any advice about best practices for the implementation?

Thanks

--
Regards,
Sérgio Marques

From joao at magnetwork.com.br Fri Mar 2 14:52:32 2018
From: joao at magnetwork.com.br (João Floriano - Magnet)
Date: Fri, 2 Mar 2018 11:52:32 -0300
Subject: [ovirt-users] Problem win10 with numbers cores
Message-ID:

Hello guys

I am facing problems with Windows 10 virtual machines in ovirt 4.1 and 4.2: the guest does not recognize all the cores for processing that I define in the system settings. I would need 6 cores, however only two are recognized.

Can anybody help me?

--

From joao at magnetwork.com.br Fri Mar 2 15:00:57 2018
From: joao at magnetwork.com.br (João Floriano - Magnet)
Date: Fri, 2 Mar 2018 12:00:57 -0300
Subject: [ovirt-users] Problem Storage Iscsi
Message-ID:

Hello everyone!

I am facing the following problem with ovirt 4.2.1: out of nowhere all virtual machines are paused, and they only become operational again if I log in via ssh on the host and execute a "virsh destroy domain"; it is necessary to reactivate disk and network after this process. In the events I get the following message: VM X has been paused due to storage I/O problem.

Currently I have iscsi storage configured on a server with target, and based on some tests I did not identify communication problems from the hosts to the storage.

Can you help me?

--
From anastasiya.ruzhanskaya at frtk.ru Sat Mar 3 10:31:12 2018
From: anastasiya.ruzhanskaya at frtk.ru (Anastasiya Ruzhanskaya)
Date: Sat, 3 Mar 2018 11:31:12 +0100
Subject: [ovirt-users] oVirt network tries to reassign my bridge address to herself
Message-ID:

Hello!
I have two VMs - they are machines on which I test installation. I don't want any clusters or advanced features. My goal is to connect engine and host, shut everything down, then turn it on and have the right configuration. However, my VMs are connected via a bridge, and oVirt also uses another bridge to connect them. Because of this, on startup I have a problem: the two bridges try to assign the same address. What can be done in this case?

From nicolas at devels.es Mon Mar 5 08:43:32 2018
From: nicolas at devels.es (nicolas at devels.es)
Date: Mon, 05 Mar 2018 08:43:32 +0000
Subject: Re: [ovirt-users] VMs stuck in migrating state
In-Reply-To: <87h8pyv6dv.fsf@redhat.com>
References: <87lgfava9h.fsf@redhat.com> <78580de0bc71d3498bf07ffaa37ad935@devels.es> <87h8pyv6dv.fsf@redhat.com>
Message-ID: <511100a76bb787971f7605bbd72539fa@devels.es>

On 2018-03-02 15:34, Milan Zamazal wrote:
> nicolas at devels.es writes:
>
>> On 2018-03-02 14:10, Milan Zamazal wrote:
>>> nicolas at devels.es writes:
>>>
>>>> We're running 4.1.9 and during the weekend we had a storage issue that seemed to leave some hosts in a strange state. One of the hosts has almost all VMs migrating (although it seems to not actually be migrating them) and the migration state cannot be cancelled.
>>>>
>>>> When clicking on one of those machines and selecting 'Cancel migration', in the ovirt-engine log I see:
>>>>
>>>> 2018-02-26 08:52:07,588Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CancelMigrateVDSCommand] (org.ovirt.thread.pool-6-thread-36) [887dfbf9-dece-4f7b-90a8-dac02b849b7f] HostName = host2.domain.com
>>>> 2018-02-26 08:52:07,588Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CancelMigrateVDSCommand] (org.ovirt.thread.pool-6-thread-36) [887dfbf9-dece-4f7b-90a8-dac02b849b7f] Command 'CancelMigrateVDSCommand(HostName = host2.domain.com, CancelMigrationVDSParameters:{runAsync='true', hostId='e63b9146-10c4-47ad-bd6c-f053a8c5b4eb', vmId='26d37e43-32e2-4e55-9c1e-1438518d5021'})' execution failed: VDSGenericException: VDSErrorException: Failed to CancelMigrateVDS, error = Migration process cancelled, code = 82
>>>>
>>>> On the vdsm side I see:
>>>>
>>>> 2018-02-26 08:56:19,396+0000 INFO (jsonrpc/0) [vdsm.api] START migrateCancel() from=::ffff:10.X.X.X,54654, flow_id=874d36d7-63f5-4b71-8a4d-6d9f3ec65858 (api:46)
>>>> 2018-02-26 08:56:19,398+0000 INFO (jsonrpc/0) [vdsm.api] FINISH migrateCancel return={'status': {'message': 'Migration process cancelled', 'code': 82}, 'progress': 0} from=::ffff:10.X.X.X,54654, flow_id=874d36d7-63f5-4b71-8a4d-6d9f3ec65858 (api:52)
>>>>
>>>> So no error on the vdsm side log.
>>>
>>> Interesting. The messages above indicate that the VM was attempted to migrate, but the migration got temporarily rejected on the destination due to the number of already running incoming migrations (the limit is 2 incoming migrations by default). Later, Vdsm was asked to cancel the outgoing migration and it successfully set a migration canceling flag.
>>> However, the action was reported as an error to Engine, due to hitting the incoming migration limit on the destination. Maybe it's a bug, I'm not sure, resulting in minor confusion. Normally it shouldn't matter; the migration should be canceled shortly after anyway and Engine should be informed about that.
>>>
>>> However, the migration apparently wasn't canceled here. I can't say what happened without the complete Vdsm log. One of the possible reasons is that the migration has been waiting on completion of another migration outgoing from the source (only one outgoing migration at a time is allowed by default). In any case it seems the migration either wasn't actually started at all, or it just started being set up and that was never completely finished.
>>
>> I'm attaching the log. Basically the storage backend was restarted by fencing and then this issue happened. This was on 26/02 at about 08:52 (log time).
>
> Thank you for the log, but the VMs are already "migrating" at its beginning; there had to be some problem already earlier.
>
>>>> I already tried restarting ovirt-engine but it didn't work.
>>>
>>> Here the problem is clearly on the Vdsm side.
>>>
>>>> Could someone shed some light on how to cancel the migration status for these machines? All of them seem to be running on the same host.
>>>
>>> Did the VMs get unblocked in the meantime?
>>
>> No, they didn't. They're still in a "Migrating" state.
>>
>>> I can't know what the actual state of the given VMs is without seeing the complete Vdsm log, so it's difficult to give good advice. I think that a Vdsm restart on the given host would help, BUT it's generally not a very good idea to restart Vdsm if any real migration, outgoing or incoming, is running on the host. VMs that aren't actually being migrated (despite being reported as migrating) should simply return to Up state after the restart, but VMs with any real migration action pending might return to Up state without proper cleanup, resulting in a different kind of mess or maybe something even worse (things should improve in oVirt 4.2, but it's still good to avoid Vdsm restarts with migrations running).
>>
>> I assume this is not a real migration as it has been in this state for several days. Would you advise restarting vdsm in this case then?
>
> I'd say try it. Since nothing has changed for several days, restarting Vdsm looks like the appropriate action at this point. Just don't make a habit of it :-).

Thanks, that made it.

Regards.

> Regards,
> Milan
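As an aside, the cancel operation the thread exercises through the UI is also exposed programmatically. A minimal sketch with the oVirt Python SDK v4 (engine URL, credentials and the VM name 'myvm' are placeholders):

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
# Ask the engine to cancel the outgoing migration of this VM.
vms_service.vm_service(vm.id).cancel_migration()
connection.close()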
From recreationh at gmail.com Mon Mar 5 08:43:56 2018
From: recreationh at gmail.com (Terry hey)
Date: Mon, 5 Mar 2018 16:43:56 +0800
Subject: Re: [ovirt-users] VM paused rather than migrate to another hosts
In-Reply-To: <874llzsrz4.fsf@redhat.com>
References: <87r2p5w5kc.fsf@redhat.com> <874llzsrz4.fsf@redhat.com>
Message-ID:

Dear Milan,
Thank you for your explanation. Very clear!

Regards,
Terry

2018-03-02 0:03 GMT+08:00 Milan Zamazal:
> Terry hey writes:
>
>> Dear Milan,
>> Today I found that oVirt 4.2 supports iLO5, and power management was set on all hosts (hypervisors).
>> I found that if I choose a VM lease and shut down the iSCSI network, the VM is shut down. Then the VM migrates to another host once the iSCSI network is resumed.
>
> If the VM had been shut down, then it was probably restarted on, rather than migrated to, another host.
>
>> If I just enable HA in the VM settings, the VM successfully migrates to another host.
>
> There can be a special situation if the storage storing VM leases is unavailable.
>
> oVirt tries to do what it can in case of storage problems, but it all depends on the overall state of the storage - for how long it remains unavailable, whether it is available at least on some hosts, and which parts of the storage are available; there are more possible scenarios here. Indeed, it's a good idea to experiment with failures and learn what happens before real problems come!
>
>> But I want to ask another question: what if the management network is down?
>> What VM and host behavior would you expect?
>
> The primary problem is that oVirt Engine can't communicate with the hosts in such a case. Unless there is another problem (especially assuming storage is still reachable from the hosts), the hosts and VMs will keep running, but the hosts will be displayed as unreachable and the VMs as unknown in Engine. And you won't be able to manage your VMs from Engine, of course. Once the management network is back, things should return to the normal state sooner or later.
>
> Regards,
> Milan
>
>> Regards
>> Terry Hung
>>
>> 2018-02-28 22:29 GMT+08:00 Milan Zamazal:
>>> Terry hey writes:
>>>
>>>> I am testing iSCSI bonding failover on oVirt, but I observed that VMs were paused and did not migrate to another host. Please see the details as follows.
>>>>
>>>> I have two hypervisors. Since they are running iLO 5 and oVirt 4.2 cannot support iLO 5, I cannot set up power management.
>>>>
>>>> For the cluster setting, I set "Migrate Virtual Machines" under the Migration Policy.
>>>>
>>>> For each hypervisor, I bonded the two iSCSI interfaces as bond 1.
>>>>
>>>> I created one virtual machine with high availability enabled on it. Also, I created one virtual machine without high availability enabled.
>>>>
>>>> When I shut down one of the iSCSI interfaces, nothing happened. But when I shut down both iSCSI interfaces, the VMs on that host were paused and did not migrate to another host. Is this behavior normal, or did I miss something?
>>>
>>> A paused VM can't be migrated, since there are no guarantees about the storage state. As the VMs were paused under an erroneous situation (rather than a controlled one, such as putting the host into maintenance), the migration policy can't help here.
>>>
>>> But highly available VMs can be restarted on another host automatically. Do you have a VM lease enabled for the highly available VM in the High Availability settings? With a lease, Engine should be able to restart the VM elsewhere after a while; without it, Engine can't do that, since there is a danger of resuming the VM on the original host, resulting in multiple instances of the same VM running at the same time.
>>>
>>> VMs without high availability must be restarted manually (unless the storage domain becomes available again).
>>>
>>> HTH,
>>> Milan
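The lease setup Milan describes can be applied from the Python SDK as well; a sketch (VM and storage domain names are placeholders, assuming ovirt-engine-sdk-python v4 on oVirt 4.1 or later):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)
system_service = connection.system_service()
vms_service = system_service.vms_service()
vm = vms_service.list(search='name=myvm')[0]
sd = system_service.storage_domains_service().list(search='name=mydata')[0]

# Enable high availability and place the VM lease on a storage domain,
# so the engine can safely restart the VM elsewhere if its host is lost.
vms_service.vm_service(vm.id).update(
    types.Vm(
        high_availability=types.HighAvailability(enabled=True),
        lease=types.StorageDomainLease(
            storage_domain=types.StorageDomain(id=sd.id),
        ),
    )
)
connection.close()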
From c.mammoli at apra.it Mon Mar 5 08:49:24 2018
From: c.mammoli at apra.it (Cristian Mammoli)
Date: Mon, 5 Mar 2018 09:49:24 +0100
Subject: [ovirt-users] Troubleshooting VM SSO on Windows 10 (ovirt 4.2.1)
Message-ID:

Anyone?

Hi, I'm trying to set up SSO on Windows 10. The VM is domain joined, has the agent installed and the credential provider registered. Of course I set up an AD domain and the VM has SSO enabled.

Whenever I log in to the user portal and open a VM I'm presented with the login screen and nothing happens; it's like the engine doesn't send the command to autologin.

In the agent logs there's nothing interesting, but the communication between the engine and the agent is ok: for example, the command to lock the screen on console close runs and works:

Dummy-2::INFO::2018-03-01 09:01:39,124::ovirtagentlogic::322::root::Received an external command: lock-screen...

This is an extract from the engine logs when I log in to the user portal and start a connection:

2018-03-01 11:30:01,558+01 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-30) [] User c.mammoli at apra.it successfully logged in with scopes: ovirt-app-admin ovirt-app-api ovirt-app-portal ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access
2018-03-01 11:30:01,606+01 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-31) [7bc265f] Running command: CreateUserSessionCommand internal: false.
2018-03-01 11:30:01,623+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-31) [7bc265f] EVENT_ID: USER_VDC_LOGIN(30), User c.mammoli at apra.it @apra.it connecting from '192.168.1.100' using session '5NMjCbUiehNLAGMeeWsr4L5TatL+uUGsNHOxQtCvSa9i0DaQ7uoGSi6zaZdXu08vrEk5gyQUJAsB2+COzLwtEw==' logged in.
2018-03-01 11:30:02,163+01 ERROR [org.ovirt.engine.core.bll.GetSystemStatisticsQuery] (default task-39) [14276418-5de7-44a6-bb64-c60965de0acf] Query execution failed due to insufficient permissions.
2018-03-01 11:30:02,664+01 INFO [org.ovirt.engine.core.bll.SetVmTicketCommand] (default task-54) [617f130b] Running command: SetVmTicketCommand internal: false. Entities affected : ID: c0250fe0-5d8b-44de-82bc-04610952f453 Type: VM Action group CONNECT_TO_VM with role type USER
2018-03-01 11:30:02,683+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default task-54) [617f130b] START, SetVmTicketVDSCommand(HostName = r630-01.apra.it, SetVmTicketVDSCommandParameters:{hostId='d99a8356-72e8-4130-a1cc-e148762eca57', vmId='c0250fe0-5d8b-44de-82bc-04610952f453', protocol='SPICE', ticket='u2b1nv+rH+pw', validTime='120', userName='c.mammoli at apra.it', userId='39f9d718-6e65-456a-8a6f-71976bcbbf2f', disconnectAction='LOCK_SCREEN'}), log id: 18fa2ef
2018-03-01 11:30:02,703+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default task-54) [617f130b] FINISH, SetVmTicketVDSCommand, log id: 18fa2ef
2018-03-01 11:30:02,713+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-54) [617f130b] EVENT_ID: VM_SET_TICKET(164), User c.mammoli at apra.it @apra.it initiated console session for VM testvdi02
2018-03-01 11:30:11,558+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-49) [] EVENT_ID: VM_CONSOLE_CONNECTED(167), User c.mammoli at apra.it is connected to VM testvdi02.

Any help would be appreciated
From frolland at redhat.com Mon Mar 5 08:53:20 2018
From: frolland at redhat.com (Fred Rolland)
Date: Mon, 5 Mar 2018 10:53:20 +0200
Subject: Re: [ovirt-users] Problem Storage Iscsi
In-Reply-To: References: Message-ID:

Hi,

Can you provide Vdsm logs from the host with the issues?

Thanks,
Fred

On Fri, Mar 2, 2018 at 5:00 PM, João Floriano - Magnet <joao at magnetwork.com.br> wrote:
> Hello everyone!
>
> I am facing the following problem with ovirt 4.2.1: out of nowhere all virtual machines are paused, and they only become operational again if I log in via ssh on the host and execute a "virsh destroy domain". In the events I get the following message: VM X has been paused due to storage I/O problem.

From frolland at redhat.com Mon Mar 5 08:59:19 2018
From: frolland at redhat.com (Fred Rolland)
Date: Mon, 5 Mar 2018 10:59:19 +0200
Subject: Re: [ovirt-users] Ovirt 4.2.1 Storage with Fibre Channel Protocol
In-Reply-To: References: Message-ID:

Hi,

All hosts in the data center should have access to the storage. Think of the DC as a migration domain: if one of the hosts fails and you need to migrate the VMs to another host, the VM disks need to be available on the new host.

You cannot use the same LUN for different domains. Either ask the storage administrator to split the LUN into 3 smaller LUNs, or maybe use an NFS server for the ISO domain.

Regards,
Fred

On Thu, Mar 1, 2018 at 1:13 PM, Sérgio Marques wrote:
> Hello,
>
> I'm new to Ovirt, loving it, but I'm having some doubts when it comes to adding FCP storage.

From frolland at redhat.com Mon Mar 5 09:03:30 2018
From: frolland at redhat.com (Fred Rolland)
Date: Mon, 5 Mar 2018 11:03:30 +0200
Subject: Re: [ovirt-users] Installation using virtual machines
In-Reply-To: References: Message-ID:

Hi,

The ovirt engine can be run in a virtual machine; look for "hosted engine".

What do you mean by "guests"? If you mean the hosts running the VMs, it is possible but not recommended. Real hardware is the best way.

Regards,
Fred

On Thu, Mar 1, 2018 at 9:55 AM, Anastasiya Ruzhanskaya <anastasiya.ruzhanskaya at frtk.ru> wrote:
> Hello!
> I have read the documentation on the web site, but it is still not clear to me: is it possible to install the ovirt engine as well as guests on virtual machines, not on real hardware?

From frolland at redhat.com Mon Mar 5 09:11:30 2018
From: frolland at redhat.com (Fred Rolland)
Date: Mon, 5 Mar 2018 11:11:30 +0200
Subject: Re: [ovirt-users] Can't move/copy VM disks between Data Centers
In-Reply-To: <5eaafdac-1726-2f1b-769f-25e7613a30ca@starlett.lv>
References: <0D950DC4-A8E3-4A39-B557-5E122AA38DE6@starlett.lv> <4023F78E-1E84-439B-B89A-718C366B2C80@starlett.lv> <5eaafdac-1726-2f1b-769f-25e7613a30ca@starlett.lv>
Message-ID:

Using an export domain should work also, but I would go the way I described earlier.
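For the export-domain route, the export can likewise be driven from the Python SDK; a rough sketch, not from the original thread (the export domain name 'export1' and the VM name are placeholders, and the VM should be stopped first):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
# Export the (stopped) VM to an export storage domain attached to its DC.
vms_service.vm_service(vm.id).export(
    exclusive=True,
    discard_snapshots=True,
    storage_domain=types.StorageDomain(name='export1'),
)
connection.close()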
On Sun, Mar 4, 2018 at 1:27 PM, Andrei V wrote:
> Hi,
>
> On 02/27/2018 04:29 PM, Fred Rolland wrote:
>> Hi,
>> Just to make clear what you want to achieve:
>> - DC1 - local storage - host1 - VMs
>> - DC2 - local storage - host2
>> You want to move the VMs from DC1 to DC2.
>
> Yes, thanks, this is exactly what I want to accomplish.
> BTW, are export domains from different data centers visible to each other? If not, wouldn't it be simpler to export VM #1 in DC #1 to export domain #1, copy it over ssh to the DC #2 export domain, and finally import it into DC #2?

From Markus.Schaufler at ooe.gv.at Mon Mar 5 09:42:16 2018
From: Markus.Schaufler at ooe.gv.at (Markus.Schaufler at ooe.gv.at)
Date: Mon, 5 Mar 2018 09:42:16 +0000
Subject: [ovirt-users] Users/Groups Permissions
Message-ID: <9cd67cf34aaa42f180fe29c2fc2e9fd9@ooe.gv.at>

Hi!

Still new to oVirt and got another question:

I have many Windows and Linux VMs, and for each of the Windows and Linux machines I created two user groups (limited and admins). Now I want to grant the groups the according permissions on the according VMs. How can I do this without clicking through every VM manually (e.g. by marking several VMs in the UI and managing their permissions, or via CLI)? See the sketch after this message.

Many thanks in advance,

Markus Schaufler, MSc
Amt der Oö. Landesregierung
Direktion Präsidium
Abteilung Informationstechnologie
Referat ST3 Server
A-4021 Linz, Kärntnerstraße 16
Tel.: +43 (0)732 7720 - 13138
Fax: +43 (0)732 7720 - 213255
email: markus.schaufler at ooe.gv.at
Internet: www.land-oberoesterreich.gv.at
DVR: 0069264

The exchange of messages with the above sender via e-mail is for information purposes only. Legally binding declarations may be submitted via this medium only to the official mailbox it.post at ooe.gv.at.
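If the UI does not offer a multi-select for permissions, the assignment loops nicely over the API; a sketch with the Python SDK (group name, role name and the 'linux-*' VM name pattern are placeholders, assuming the directory group has already been added to oVirt):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)
system_service = connection.system_service()
group = system_service.groups_service().list(search='name=linux-limited')[0]

vms_service = system_service.vms_service()
# Grant the group 'UserRole' on every VM whose name starts with 'linux-'.
for vm in vms_service.list(search='name=linux-*'):
    permissions_service = vms_service.vm_service(vm.id).permissions_service()
    permissions_service.add(
        types.Permission(
            group=types.Group(id=group.id),
            role=types.Role(name='UserRole'),
        )
    )
connection.close()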
From ykaul at redhat.com Mon Mar 5 10:07:35 2018
From: ykaul at redhat.com (Yaniv Kaul)
Date: Mon, 5 Mar 2018 12:07:35 +0200
Subject: Re: [ovirt-users] Any Kind of Storage IO Limitation?
In-Reply-To: <047d22559fb543f0ab04092ca22f4f05@DR1-XEXCH01-B.eset.corp>
References: <047d22559fb543f0ab04092ca22f4f05@DR1-XEXCH01-B.eset.corp>
Message-ID:

On Mon, Mar 5, 2018 at 8:24 AM, Thomas Fecke wrote:
> Hey guys,
>
> I got a kind of strange question:
>
> We have some hypervisors connected to an x86 storage (NFS). The machines are connected to that storage via 10 Gbit. When I rsync some files we reach almost the maximum bandwidth.
>
> But when I copy some VMs, templates, or do something storage related in oVirt, I can only reach 1000 Mbit. Is there any kind of "config limitation"?
>
> My biggest problem: we work a lot with templates. When I deploy 10 VMs based on one template, the VMs get very slow and the storage seems to be the problem.

Please verify all your hosts, and especially the SPM, are connected with 10g (and have negotiated 10g and not 1g, etc.)
Y.

> Thanks a lot

From nicolas at ecarnot.net Mon Mar 5 10:29:29 2018
From: nicolas at ecarnot.net (Nicolas Ecarnot)
Date: Mon, 5 Mar 2018 11:29:29 +0100
Subject: Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing
In-Reply-To: <54FF808B-215E-441D-9864-38DBDD9F32E8@if.ufrj.br>
References: <91FCC8D5-CC5B-4062-B1ED-58FBEB179FB1@if.ufrj.br> <1eedfc36-4bb3-970e-8c11-0a38e80ed500@laverenz.de> <35F9BD06-00A8-4A6F-9E12-C519214F6D3C@if.ufrj.br> <54FF808B-215E-441D-9864-38DBDD9F32E8@if.ufrj.br>
Message-ID: <33a835c8-72c9-a3c7-293f-63eb4e65c3c9@ecarnot.net>

Hello,

[Unusual setup] Last week I eventually managed to make a 4.2.1.7 oVirt work with iSCSI multipathing on both hosts and guests, connected to a Dell EqualLogic SAN which provides one single virtual IP - my hosts have two dedicated NICs for iSCSI, but on the same VLAN. Torture tests showed good resilience.

[Classical setup] But this year we plan to create at least two additional DCs and to connect their hosts to a "classical" SAN, i.e. one which provides TWO IPs on segregated VLANs (not routed), and we'd like to use the same iSCSI multipathing feature.

The discussion below could lead one to think that oVirt needs the two iSCSI VLANs to be routed, allowing the hosts in one VLAN to access resources in the other. As Vinicius explained, this is not a best practice, to say the least.

Searching through the mailing list archive, I found no answer to Vinicius' question.

May a Redhat storage and/or network expert enlighten us on these points?

Regards,

--
Nicolas Ecarnot

On 21/07/2017 at 20:56, Vinícius Ferrão wrote:
>
>> On 21 Jul 2017, at 15:12, Yaniv Kaul wrote:
>>
>> On Wed, Jul 19, 2017 at 9:13 PM, Vinícius Ferrão wrote:
>>
>> Hello,
>>
>> I've skipped this message entirely yesterday. So is this per design? Because the best practices of iSCSI MPIO, as far as I know, recommend two completely separate paths. If this can't be achieved with oVirt, what's the point of running MPIO?
>>
>> With regular storage it is quite easy to achieve using 'iSCSI bonding'.
>> I think the Dell storage is a bit different and requires some more investigation - or experience with it.
>> Y.
>
> Yaniv, thank you for answering this. I'm really hoping that a solution will be found.
>
> Actually I'm not running anything from DELL. My storage system is FreeNAS, which is pretty standard and, as far as I know, iSCSI practice dictates segregated networks for proper working.
>
> All other major virtualization products support iSCSI this way: vSphere, XenServer and Hyper-V. So I was really surprised that oVirt (and even RHV, I requested a trial yesterday) does not implement iSCSI with the well-known best practices.
> There's a picture of the architecture that I took from Google when searching for "mpio best practices":
> https://image.slidesharecdn.com/2010-12-06-midwest-reg-vmug-101206110506-phpapp01/95/nextgeneration-best-practices-for-vmware-and-storage-15-728.jpg?cb=1296301640
>
> And as you can see, it's segregated networks on a machine reaching the same target.
>
> In my case, my datacenter has five hypervisor machines, with two NICs dedicated for iSCSI. Both NICs connect to different converged ethernet switches and the iStorage is connected the same way.
>
> So it really does not make sense that the first NIC can reach the second NIC's target. In case of a switch failure the cluster will go down anyway, so what's the point of running MPIO? Right?
>
> Thanks once again,
> V.

From andreil1 at starlett.lv Mon Mar 5 10:51:33 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Mon, 5 Mar 2018 12:51:33 +0200
Subject: Re: [ovirt-users] Q: Can't connect to oVirt shell / SSL cert issue
In-Reply-To: References: <2144a8e3-2366-44a2-cd98-3f970d040bd4@starlett.lv>
Message-ID: <938a3e8e-f886-e530-afaa-0859720618fc@starlett.lv>

Hi,

Thanks, the corrected URL is accepted.

However, I've run into an SSL certificate issue:

ovirt-shell -l https://node00.mydomain.com/ovirt-engine/api --cert-file /etc/pki/ovirt-engine/certs/engine.cer -u "admin at internal"

==== ERROR ====
server CA certificate file must be specified for SSL secured connection.

The certificate file exists, verified:
/etc/pki/ovirt-engine/certs/engine.cer

Without specifying an SSL cert file it's not possible to connect at all:

==== ERROR ====
No response returned from server. If you're using HTTP protocol against a SSL secured server, then try using HTTPS instead.

Or should I use another certificate from the same directory?

Thanks.

On 03/05/2018 06:46 AM, Karli Sjöberg wrote:
>
> On 4 March 2018 23:39, Andrei Verovski wrote:
>
>     Hi !
>
>     I'm trying to connect via Bash from the same machine where the oVirt engine is installed
>     ovirt-shell --url=http://node00.mydomain.com/api -u admin
>
> Hi!
>
> You've forgotten 'ovirt-engine' before 'api':
> http://node00.mydomain.com/ovirt-engine/api
>
> /K
>
>     After entering the password I've got:
>     === ERROR ===
>     [404] - Not Found
>
>     What is wrong here?
>     Thanks in advance.
>     Andrei
From omachace at redhat.com Mon Mar 5 10:58:08 2018
From: omachace at redhat.com (Ondra Machacek)
Date: Mon, 5 Mar 2018 11:58:08 +0100
Subject: Re: [ovirt-users] Q: Can't connect to oVirt shell / SSL cert issue
In-Reply-To: <938a3e8e-f886-e530-afaa-0859720618fc@starlett.lv>
References: <2144a8e3-2366-44a2-cd98-3f970d040bd4@starlett.lv> <938a3e8e-f886-e530-afaa-0859720618fc@starlett.lv>
Message-ID: <6ef5e5b3-60ac-c671-a2ce-ef1398430bd0@redhat.com>

You should use the CA certificate; if you use the default one it's:

 /etc/pki/ovirt-engine/ca.pem

You can find more information about oVirt PKI here:

 https://www.ovirt.org/develop/release-management/features/infra/pki/

On 03/05/2018 11:51 AM, Andrei Verovski wrote:
> Hi,
>
> Thanks, the corrected URL is accepted.
>
> However, I've run into an SSL certificate issue:
> ovirt-shell -l https://node00.mydomain.com/ovirt-engine/api --cert-file /etc/pki/ovirt-engine/certs/engine.cer -u "admin at internal"

From thomas.fecke at eset.de Mon Mar 5 11:25:05 2018
From: thomas.fecke at eset.de (Thomas Fecke)
Date: Mon, 5 Mar 2018 11:25:05 +0000
Subject: Re: [ovirt-users] Any Kind of Storage IO Limitation?
In-Reply-To: References: <047d22559fb543f0ab04092ca22f4f05@DR1-XEXCH01-B.eset.corp>
Message-ID: <6fd8ea9368994c1abee9960c3e73d655@DR1-XEXCH01-B.eset.corp>

Hey Yaniv, thanks in advance.

ethtool eno2 | grep Speed

always brings up 10000, so I thought there is maybe a config file in ovirt that changes it or something. Or will it use the OS information directly?

I'm not sure why the VMs are getting slow. But when I run a monitoring tool to check the bandwidth it never exceeds 1000. Maybe it just doesn't need more for the copy job. I have no explanation.

Fact is: when I deploy the same template about 10-15 times, the VMs get really slow. The RAM and CPU are more than 70% free.
I tried to stress the storage a bit: opened nload and created some VMs, created templates, created pools, and so on. And I was wrong:

Max: 8.09 GBit/s

I can live with that. The performance issues seem to be gone for now. Maybe it was just the 4.2 upgrade, I have no idea. But everything seems to work fine. Sorry for wasting your time.

From ykaul at redhat.com Mon Mar 5 11:34:37 2018
From: ykaul at redhat.com (Yaniv Kaul)
Date: Mon, 5 Mar 2018 13:34:37 +0200
Subject: Re: [ovirt-users] Any Kind of Storage IO Limitation?
In-Reply-To: <6fd8ea9368994c1abee9960c3e73d655@DR1-XEXCH01-B.eset.corp>
References: <047d22559fb543f0ab04092ca22f4f05@DR1-XEXCH01-B.eset.corp> <6fd8ea9368994c1abee9960c3e73d655@DR1-XEXCH01-B.eset.corp>
Message-ID:

On Mon, Mar 5, 2018 at 1:25 PM, Thomas Fecke wrote:
> Hey Yaniv,
>
> thanks in advance
>
> ethtool eno2 | grep Speed
>
> always brings up 10000, so I thought there is maybe a config file in ovirt that changes it or something.
>
> Or will it use the OS information directly?

Correct. I assume you have not put any capping on the performance.

> I'm not sure why the VMs are getting slow. But when I run a monitoring tool to check the bandwidth it never exceeds 1000. Maybe it just doesn't need more for the copy job.

We'll need more details on the VM configuration. I assume you are using raw (and not qcow2), using virtio or virtio-SCSI, enabled IO threads, etc. What are you using to measure the performance?
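Those disk settings can be read back over the API instead of checking each VM by hand; a quick sketch (Python SDK v4; the VM name is a placeholder):

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
# Print format (raw/cow) and interface (virtio/virtio_scsi) of each disk.
for attachment in vms_service.vm_service(vm.id).disk_attachments_service().list():
    disk = connection.follow_link(attachment.disk)
    print(disk.name, disk.format, attachment.interface)
connection.close()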
> I have no explanation.
>
> Fact is: when I deploy the same template about 10-15 times, the VMs get really slow.

We'll need to understand how you deploy them.

> The RAM and CPU are more than 70% free.
>
> I tried to stress the storage a bit: opened nload and created some VMs, created templates, created pools, and so on. And I was wrong:
>
> Max: 8.09 GBit/s

That's unlikely. What storage is providing you with 8GB/s? That needs to be a very, very high-end storage....
Y.

> I can live with that. The performance issues seem to be gone for now. Maybe it was just the 4.2 upgrade, I have no idea.
>
> But everything seems to work fine. Sorry for wasting your time.

From getallad at gmail.com Mon Mar 5 11:44:27 2018
From: getallad at gmail.com (Sergei Hanus)
Date: Mon, 5 Mar 2018 14:44:27 +0300
Subject: Re: [ovirt-users] oVIRT 4.1 / iSCSI Multipathing
In-Reply-To: <33a835c8-72c9-a3c7-293f-63eb4e65c3c9@ecarnot.net>
References: <91FCC8D5-CC5B-4062-B1ED-58FBEB179FB1@if.ufrj.br> <1eedfc36-4bb3-970e-8c11-0a38e80ed500@laverenz.de> <35F9BD06-00A8-4A6F-9E12-C519214F6D3C@if.ufrj.br> <54FF808B-215E-441D-9864-38DBDD9F32E8@if.ufrj.br> <33a835c8-72c9-a3c7-293f-63eb4e65c3c9@ecarnot.net>
Message-ID:

Hi, Nicolas.

As long as you are able to set up two separate iSCSI sessions, which are bound to two separate paths, the multipath driver will handle the rest.

As I understand, Yaniv is talking about iSCSI bonding, which is a somewhat broader sort of multipath (per the description: https://www.ovirt.org/documentation/admin-guide/chap-Storage/) - it creates all possible paths between all possible initiators and targets within the bond.

Personally, I don't think it's necessary - it's always better to control the connections the way you described: two VLANs, each VLAN containing one server NIC and one storage NIC, that's it.

Sergei.
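The iSCSI bond Sergei mentions can also be created through the API; a hedged sketch with the Python SDK (the data center name, network IDs and storage connection ID are placeholders):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)
dcs_service = connection.system_service().data_centers_service()
dc = dcs_service.list(search='name=mydc')[0]
bonds_service = dcs_service.data_center_service(dc.id).iscsi_bonds_service()
# Group the two iSCSI logical networks into one bond; the engine then
# logs in to the given storage connections over every network in the bond.
bonds_service.add(
    types.IscsiBond(
        name='iscsi-bond1',
        networks=[types.Network(id='net-uuid-1'), types.Network(id='net-uuid-2')],
        storage_connections=[types.StorageConnection(id='conn-uuid-1')],
    )
)
connection.close()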
From andreil1 at starlett.lv Mon Mar 5 12:28:52 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Mon, 5 Mar 2018 14:28:52 +0200
Subject: Re: [ovirt-users] Q: Can't connect to oVirt shell / SSL cert issue
In-Reply-To: <6ef5e5b3-60ac-c671-a2ce-ef1398430bd0@redhat.com>
References: <2144a8e3-2366-44a2-cd98-3f970d040bd4@starlett.lv> <938a3e8e-f886-e530-afaa-0859720618fc@starlett.lv> <6ef5e5b3-60ac-c671-a2ce-ef1398430bd0@redhat.com>
Message-ID:

On 03/05/2018 12:58 PM, Ondra Machacek wrote:
> You should use the CA certificate; if you use the default one it's:
>
>  /etc/pki/ovirt-engine/ca.pem

Executed as root:

ovirt-shell -l https://node00.mydomain.com/ovirt-engine/api --cert-file /etc/pki/ovirt-engine/ca.pem -u "admin at internal"

=== ERROR ===
server CA certificate file must be specified for SSL secured connection.

ca.pem exists in the specified location.

> You can find more information about oVirt PKI here:
>
>  https://www.ovirt.org/develop/release-management/features/infra/pki/
From omachace at redhat.com Mon Mar 5 13:38:49 2018
From: omachace at redhat.com (Ondra Machacek)
Date: Mon, 5 Mar 2018 14:38:49 +0100
Subject: Re: [ovirt-users] Q: Can't connect to oVirt shell / SSL cert issue
In-Reply-To: References: Message-ID: <873bb608-eb21-1658-2419-c950658a4b5d@redhat.com>

On 03/05/2018 01:28 PM, Andrei Verovski wrote:
> On 03/05/2018 12:58 PM, Ondra Machacek wrote:
>> You should use the CA certificate; if you use the default one it's:
>>
>>  /etc/pki/ovirt-engine/ca.pem
>
> Executed as root:
> ovirt-shell -l https://node00.mydomain.com/ovirt-engine/api --cert-file /etc/pki/ovirt-engine/ca.pem -u "admin at internal"

Right, there should be '--ca-file /etc/pki/ovirt-engine/ca.pem', not --cert-file /etc/pki/ovirt-engine/ca.pem

> === ERROR ===
> server CA certificate file must be specified for SSL secured connection.
>
> ca.pem exists in the specified location.
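Putting the thread together, the working invocation is ovirt-shell -l https://node00.mydomain.com/ovirt-engine/api --ca-file /etc/pki/ovirt-engine/ca.pem -u admin at internal. The same CA handling applies when connecting from the Python SDK; a small sketch (only the password is a placeholder):

import ovirtsdk4 as sdk

# ca_file plays the role of ovirt-shell's --ca-file option.
connection = sdk.Connection(
    url='https://node00.mydomain.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
# Raises an exception if the TLS handshake or authentication fails.
connection.test(raise_exception=True)
connection.close()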
From mskrivan at redhat.com Mon Mar 5 13:56:58 2018
From: mskrivan at redhat.com (Michal Skrivanek)
Date: Mon, 5 Mar 2018 14:56:58 +0100
Subject: Re: [ovirt-users] ovirt 4.1 - skylake - no avx512 support in virtual machines
In-Reply-To: References: <71aff973-8971-e274-3d8e-0b6fdbd03b2d@redhat.com>
Message-ID: <5638C950-AF1A-48CB-A4F1-1CFDCEA1F4F0@redhat.com>

> On 27 Feb 2018, at 10:53, Marco Lorenzo Crociani wrote:
>
>> Skylake-Client does _not_ have AVX512 (I tried now on a Kaby Lake Core i7 laptop). Only Skylake-Server has it and it will be in RHEL 7.5.
>> Thanks,
>> Paolo
>
> Ok, we'll stay with pass-through until RHEL 7.5.

Note that support for Skylake-Server is already in oVirt 4.2.2. Once you're running on a hypervisor which supports it (e.g. RHEL 7.5 beta) it should work; when you run on the existing 7.4 it just won't work (the option is there, though, and we separated the generic former "Skylake" into Skylake-Client and Skylake-Server).

Thanks,
michal

> Thanks,
>
> --
> Marco Crociani
> Prisma Telecom Testing S.r.l.
> via Petrocchi, 4 20127 MILANO ITALY
> Phone: +39 02 26113507
> Fax: +39 02 26113597
> e-mail: marcoc at prismatelecomtesting.com
> web: http://www.prismatelecomtesting.com

From NasrumMinallah9 at hotmail.com Mon Mar 5 04:57:00 2018
From: NasrumMinallah9 at hotmail.com (Nasrum Minallah Manzoor)
Date: Mon, 5 Mar 2018 04:57:00 +0000
Subject: [ovirt-users] Open source backup!
Message-ID:

Hi,
Can you please suggest an open source backup solution for oVirt virtual machines?
My backup media is an FC tape library which is directly attached to my ovirt node. I really appreciate your help.

Regards,

From jonbae77 at gmail.com Mon Mar 5 14:50:35 2018
From: jonbae77 at gmail.com (Jon bae)
Date: Mon, 5 Mar 2018 15:50:35 +0100
Subject: Re: [ovirt-users] Problem win10 with numbers cores
In-Reply-To: References: Message-ID:

Hello,
under *Advanced Parameters* you have to choose *1* *virtual socket* and in *Cores per Virtual Socket* you can add your 6 cores.

2018-03-02 15:52 GMT+01:00 João Floriano - Magnet:
> Hello guys
> I am facing problems with Windows 10 virtual machines in ovirt 4.1 and 4.2: the guest does not recognize all the cores that I define in the system settings.

From niyazielvan at gmail.com Mon Mar 5 15:25:43 2018
From: niyazielvan at gmail.com (Niyazi Elvan)
Date: Mon, 05 Mar 2018 15:25:43 +0000
Subject: Re: [ovirt-users] Open source backup!
In-Reply-To: References: Message-ID:

Hi,

If you are looking for VM image backup, you may have a look at Open Bacchus: https://github.com/openbacchus/bacchus

Bacchus backs up VMs using the oVirt Python API, and the final image will reside on the export domain (which is an NFS share or glusterfs) in your environment. It does not support moving the images to tapes at the moment; you need to use another tool to stage your backups to tape.

Hope this helps.

On 5 Mar 2018 Mon at 17:31 Nasrum Minallah Manzoor <NasrumMinallah9 at hotmail.com> wrote:
> Hi,
> Can you please suggest an open source backup solution for oVirt virtual machines?

--
Niyazi Elvan
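For scripting around tools like Bacchus, the usual snapshot-based flow looks roughly like this with the Python SDK (a simplified sketch; the VM name is a placeholder, and a real backup job would copy the snapshot disks before removing the snapshot):

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
snaps_service = vms_service.vm_service(vm.id).snapshots_service()

# 1. Create a snapshot without memory state.
snap = snaps_service.add(
    types.Snapshot(description='backup', persist_memorystate=False)
)
snap_service = snaps_service.snapshot_service(snap.id)
while snap_service.get().snapshot_status != types.SnapshotStatus.OK:
    time.sleep(5)

# 2. ...export or clone the snapshot disks here (tool-specific)...

# 3. Remove the snapshot once the copy is done.
snap_service.remove()
connection.close()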
URL: From fabrice.soler at ac-guadeloupe.fr Mon Mar 5 16:29:14 2018 From: fabrice.soler at ac-guadeloupe.fr (Fabrice SOLER) Date: Mon, 5 Mar 2018 12:29:14 -0400 Subject: [ovirt-users] Export VM to OVA failed Message-ID: <5064936d-798c-8c3a-853b-2bb0c19e8f3c@ac-guadeloupe.fr> Hello, I need to export my VM to OVA format from the administration portail Ovirt. It fails with this message : /Failed to export Vm CentOS as a Virtual Appliance to path /data/CentOS.ova on Host eple-rectorat-proto/ My storage is local (not NFS or iSCSI), is there some particulars permissions to put to the destination directory ? The path is the path to an export domain ? Sincerely, Fabrice SOLER -- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fabrice SOLER.PNG Type: image/png Size: 16525 bytes Desc: not available URL: From ahadas at redhat.com Mon Mar 5 16:48:45 2018 From: ahadas at redhat.com (Arik Hadas) Date: Mon, 5 Mar 2018 18:48:45 +0200 Subject: [ovirt-users] Export VM to OVA failed In-Reply-To: <5064936d-798c-8c3a-853b-2bb0c19e8f3c@ac-guadeloupe.fr> References: <5064936d-798c-8c3a-853b-2bb0c19e8f3c@ac-guadeloupe.fr> Message-ID: On Mon, Mar 5, 2018 at 6:29 PM, Fabrice SOLER < fabrice.soler at ac-guadeloupe.fr> wrote: > Hello, > > I need to export my VM to OVA format from the administration portail > Ovirt. It fails with this message : > > *Failed to export Vm CentOS as a Virtual Appliance to path > /data/CentOS.ova on Host eple-rectorat-proto* > > My storage is local (not NFS or iSCSI), is there some particulars > permissions to put to the destination directory ? > No, the script that packs the OVA is executed with root permissions. > The path is the path to an export domain ? > Not necessarily. > Sincerely, > > Fabrice SOLER > -- > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fabrice SOLER.PNG Type: image/png Size: 16525 bytes Desc: not available URL: From ahadas at redhat.com Mon Mar 5 16:50:35 2018 From: ahadas at redhat.com (Arik Hadas) Date: Mon, 5 Mar 2018 18:50:35 +0200 Subject: [ovirt-users] Export VM to OVA failed In-Reply-To: References: <5064936d-798c-8c3a-853b-2bb0c19e8f3c@ac-guadeloupe.fr> Message-ID: On Mon, Mar 5, 2018 at 6:48 PM, Arik Hadas wrote: > > > On Mon, Mar 5, 2018 at 6:29 PM, Fabrice SOLER < > fabrice.soler at ac-guadeloupe.fr> wrote: > >> Hello, >> >> I need to export my VM to OVA format from the administration portail >> Ovirt. It fails with this message : >> >> *Failed to export Vm CentOS as a Virtual Appliance to path >> /data/CentOS.ova on Host eple-rectorat-proto* >> >> My storage is local (not NFS or iSCSI), is there some particulars >> permissions to put to the destination directory ? >> > No, the script that packs the OVA is executed with root permissions. > > >> The path is the path to an export domain ? >> > Not necessarily. > Oh, and please share the (engine, ansible) logs if you want more eyes looking at that failure. > > >> Sincerely, >> >> Fabrice SOLER >> -- >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fabrice SOLER.PNG Type: image/png Size: 16525 bytes Desc: not available URL: From fabrice.soler at ac-guadeloupe.fr Mon Mar 5 17:13:55 2018 From: fabrice.soler at ac-guadeloupe.fr (Fabrice SOLER) Date: Mon, 5 Mar 2018 13:13:55 -0400 Subject: [ovirt-users] Export VM to OVA failed In-Reply-To: References: <5064936d-798c-8c3a-853b-2bb0c19e8f3c@ac-guadeloupe.fr> Message-ID: <744cc79b-4e81-f327-18e1-32533d7c7c21@ac-guadeloupe.fr> Hello, Thank for your answer, I have put all permissions for the directory and I always have errors. Here are the ERROR in logs on the engine : 2018-03-05 13:03:18,525-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage at 90e3c610, log id: 84f89c3 2018-03-05 13:03:18,529-04 ERROR [org.ovirt.engine.core.bll.CreateOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Command 'org.ovirt.engine.core.bll.CreateOvaCommand' failed: null 2018-03-05 13:03:18,529-04 ERROR [org.ovirt.engine.core.bll.CreateOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Exception: java.lang.NullPointerException ... 2018-03-05 13:03:18,533-04 ERROR [org.ovirt.engine.core.bll.exportimport.ExportOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Failed to create OVA file 2018-03-05 13:03:18,533-04 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Command 'ExportOva' id: 'c484392e-3540-4a11-97bf-3fecbc13e080' failed when attempting to perform the next operation, marking as FAILED '[d5d4381b-ec82-4927-91a4-74597cd2511d]' 2018-03-05 13:03:18,533-04 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Command 'ExportOva' id: 'c484392e-3540-4a11-97bf-3fecbc13e080' child commands '[d5d4381b-ec82-4927-91a4-74597cd2511d]' executions were completed, status 'FAILED' 2018-03-05 13:03:19,542-04 ERROR [org.ovirt.engine.core.bll.exportimport.ExportOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [c99e94b0-a9dd-486f-9274-9aa17c9590a0] Ending command 'org.ovirt.engine.core.bll.exportimport.ExportOvaCommand' with failure. 2018-03-05 13:03:19,543-04 INFO [org.ovirt.engine.core.bll.exportimport.ExportOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [c99e94b0-a9dd-486f-9274-9aa17c9590a0] Lock freed to object 'EngineLock:{exclusiveLocks='[3ae307cb-53d6-4d70-87b6-4e073c6f5eb6=VM]', sharedLocks=''}' 2018-03-05 13:03:19,550-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [c99e94b0-a9dd-486f-9274-9aa17c9590a0] EVENT_ID: IMPORTEXPORT_EXPORT_VM_TO_OVA_FAILED(1,225), Failed to export Vm pfSense as a Virtual Appliance to path /ova/pfSense.ova on Host eple-rectorat-proto ... Sincerely, Fabrice Le 05/03/2018 ? 12:50, Arik Hadas a ?crit?: > > > On Mon, Mar 5, 2018 at 6:48 PM, Arik Hadas > wrote: > > > > On Mon, Mar 5, 2018 at 6:29 PM, Fabrice SOLER > > wrote: > > Hello, > > I need to export my VM to OVA format from the administration > portail Ovirt. 
It fails with this message : > > /Failed to export Vm CentOS as a Virtual Appliance to path > /data/CentOS.ova on Host eple-rectorat-proto/ > > My storage is local (not NFS or iSCSI), is there some > particulars permissions to put to the destination directory ? > > No, the script that packs the OVA is executed with root permissions. > > The path is the path to an export domain ? > > Not necessarily. > > > Oh, and please share the (engine, ansible) logs if you want more eyes > looking at that failure. > > Sincerely, > > Fabrice SOLER > > -- > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fabrice SOLER.PNG Type: image/png Size: 16525 bytes Desc: not available URL: From ahadas at redhat.com Mon Mar 5 20:31:38 2018 From: ahadas at redhat.com (Arik Hadas) Date: Mon, 5 Mar 2018 22:31:38 +0200 Subject: [ovirt-users] Export VM to OVA failed In-Reply-To: <744cc79b-4e81-f327-18e1-32533d7c7c21@ac-guadeloupe.fr> References: <5064936d-798c-8c3a-853b-2bb0c19e8f3c@ac-guadeloupe.fr> <744cc79b-4e81-f327-18e1-32533d7c7c21@ac-guadeloupe.fr> Message-ID: On Mon, Mar 5, 2018 at 7:13 PM, Fabrice SOLER < fabrice.soler at ac-guadeloupe.fr> wrote: > Hello, > > Thank for your answer, I have put all permissions for the directory and I > always have errors. > > Here are the ERROR in logs on the engine : > > 2018-03-05 13:03:18,525-04 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) > [7dcef072] FINISH, GetVolumeInfoVDSCommand, return: > org.ovirt.engine.core.common.businessentities.storage.DiskImage at 90e3c610, > log id: 84f89c3 > 2018-03-05 13:03:18,529-04 ERROR [org.ovirt.engine.core.bll.CreateOvaCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Command > 'org.ovirt.engine.core.bll.CreateOvaCommand' failed: null > 2018-03-05 13:03:18,529-04 ERROR [org.ovirt.engine.core.bll.CreateOvaCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Exception: > java.lang.NullPointerException > ... > The skipped part above is the most interesting one that may enable us to investigate the error, can you share it? it would be best to file a bug [1] and add the full log as an attachment (that can be made private if needed, btw). [1] https://bugzilla.redhat.com/ > 2018-03-05 13:03:18,533-04 ERROR [org.ovirt.engine.core.bll.exportimport.ExportOvaCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Failed to > create OVA file > 2018-03-05 13:03:18,533-04 INFO [org.ovirt.engine.core.bll. > SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-23) > [7dcef072] Command 'ExportOva' id: 'c484392e-3540-4a11-97bf-3fecbc13e080' > failed when attempting to perform the next operation, marking as FAILED > '[d5d4381b-ec82-4927-91a4-74597cd2511d]' > 2018-03-05 13:03:18,533-04 INFO [org.ovirt.engine.core.bll. 
> SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-23) > [7dcef072] Command 'ExportOva' id: 'c484392e-3540-4a11-97bf-3fecbc13e080' > child commands '[d5d4381b-ec82-4927-91a4-74597cd2511d]' executions were > completed, status 'FAILED' > 2018-03-05 13:03:19,542-04 ERROR [org.ovirt.engine.core.bll.exportimport.ExportOvaCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [c99e94b0-a9dd-486f-9274-9aa17c9590a0] Ending command > 'org.ovirt.engine.core.bll.exportimport.ExportOvaCommand' with failure. > 2018-03-05 13:03:19,543-04 INFO [org.ovirt.engine.core.bll.exportimport.ExportOvaCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [c99e94b0-a9dd-486f-9274-9aa17c9590a0] Lock freed to object > 'EngineLock:{exclusiveLocks='[3ae307cb-53d6-4d70-87b6-4e073c6f5eb6=VM]', > sharedLocks=''}' > 2018-03-05 13:03:19,550-04 ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [c99e94b0-a9dd-486f-9274-9aa17c9590a0] EVENT_ID: > IMPORTEXPORT_EXPORT_VM_TO_OVA_FAILED(1,225), Failed to export Vm pfSense > as a Virtual Appliance to path /ova/pfSense.ova on Host eple-rectorat-proto > ... > > Sincerely, > Fabrice > > > Le 05/03/2018 ? 12:50, Arik Hadas a ?crit : > > > > On Mon, Mar 5, 2018 at 6:48 PM, Arik Hadas wrote: > >> >> >> On Mon, Mar 5, 2018 at 6:29 PM, Fabrice SOLER < >> fabrice.soler at ac-guadeloupe.fr> wrote: >> >>> Hello, >>> >>> I need to export my VM to OVA format from the administration portail >>> Ovirt. It fails with this message : >>> >>> *Failed to export Vm CentOS as a Virtual Appliance to path >>> /data/CentOS.ova on Host eple-rectorat-proto* >>> >>> My storage is local (not NFS or iSCSI), is there some particulars >>> permissions to put to the destination directory ? >>> >> No, the script that packs the OVA is executed with root permissions. >> >> >>> The path is the path to an export domain ? >>> >> Not necessarily. >> > > Oh, and please share the (engine, ansible) logs if you want more eyes > looking at that failure. > > >> >> >>> Sincerely, >>> >>> Fabrice SOLER >>> -- >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> > > -- > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fabrice SOLER.PNG Type: image/png Size: 16525 bytes Desc: not available URL: From fabrice.soler at ac-guadeloupe.fr Mon Mar 5 21:01:13 2018 From: fabrice.soler at ac-guadeloupe.fr (Fabrice SOLER) Date: Mon, 5 Mar 2018 17:01:13 -0400 Subject: [ovirt-users] After the export, the import OVA failed In-Reply-To: <744cc79b-4e81-f327-18e1-32533d7c7c21@ac-guadeloupe.fr> References: <5064936d-798c-8c3a-853b-2bb0c19e8f3c@ac-guadeloupe.fr> <744cc79b-4e81-f327-18e1-32533d7c7c21@ac-guadeloupe.fr> Message-ID: Hello, I found this KB : https://bugzilla.redhat.com/show_bug.cgi?id=1529607 and put a description to the VM disk and the OVA export works ! :-) Now, the import does not work :-( The error is : */Failed to load VM configuration from OVA file: /data/ova/amon /*I have tried two ways. In first, I let the file ova. Secondely I did? : tar xvf file.ova and specifiy the directory where the ovf file is. 
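A minimal sketch of that manual unpacking step, assuming a hypothetical /data/ova/amon.ova (the file and directory names here are only illustrative, not taken from the thread):

    # an OVA is a plain tar archive; unpack it into its own directory
    mkdir -p /data/ova/amon
    tar xvf /data/ova/amon.ova -C /data/ova/amon
    # the descriptor the import dialog needs is the .ovf file inside
    ls /data/ova/amon/*.ovf

The directory handed to the import dialog would then be /data/ova/amon, where the .ovf now sits.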
In the engine log, I have found this : 2018-03-05 16:15:58,319-04 INFO [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default task-6) [e2a953ca-1460-4899-a958-7dbe37e40a21] Ansible playbook command has exited with value: 2 2018-03-05 16:15:58,319-04 ERROR [org.ovirt.engine.core.bll.GetVmFromOvaQuery] (default task-6) [e2a953ca-1460-4899-a958-7dbe37e40a21] Failed to query OVA info 2018-03-05 16:15:58,319-04 ERROR [org.ovirt.engine.core.bll.GetVmFromOvaQuery] (default task-6) [e2a953ca-1460-4899-a958-7dbe37e40a21] Query 'GetVmFromOvaQuery' failed: EngineException: Failed to query OVA info (Failed with error GeneralException and code 100) I have found this KB : https://bugzilla.redhat.com/show_bug.cgi?id=1529965 The unique solution is a update ? Sincerely Fabrice Le 05/03/2018 ? 13:13, Fabrice SOLER a ?crit?: > Hello, > > Thank for your answer, I have put all permissions for the directory > and I always have errors. > > Here are the ERROR in logs on the engine : > > 2018-03-05 13:03:18,525-04 INFO > [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] FINISH, > GetVolumeInfoVDSCommand, return: > org.ovirt.engine.core.common.businessentities.storage.DiskImage at 90e3c610, > log id: 84f89c3 > 2018-03-05 13:03:18,529-04 ERROR > [org.ovirt.engine.core.bll.CreateOvaCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Command > 'org.ovirt.engine.core.bll.CreateOvaCommand' failed: null > 2018-03-05 13:03:18,529-04 ERROR > [org.ovirt.engine.core.bll.CreateOvaCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] > Exception: java.lang.NullPointerException > ... > 2018-03-05 13:03:18,533-04 ERROR > [org.ovirt.engine.core.bll.exportimport.ExportOvaCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Failed > to create OVA file > 2018-03-05 13:03:18,533-04 INFO > [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] > (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Command > 'ExportOva' id: 'c484392e-3540-4a11-97bf-3fecbc13e080' failed when > attempting to perform the next operation, marking as FAILED > '[d5d4381b-ec82-4927-91a4-74597cd2511d]' > 2018-03-05 13:03:18,533-04 INFO > [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] > (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Command > 'ExportOva' id: 'c484392e-3540-4a11-97bf-3fecbc13e080' child commands > '[d5d4381b-ec82-4927-91a4-74597cd2511d]' executions were completed, > status 'FAILED' > 2018-03-05 13:03:19,542-04 ERROR > [org.ovirt.engine.core.bll.exportimport.ExportOvaCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [c99e94b0-a9dd-486f-9274-9aa17c9590a0] Ending command > 'org.ovirt.engine.core.bll.exportimport.ExportOvaCommand' with failure. > 2018-03-05 13:03:19,543-04 INFO > [org.ovirt.engine.core.bll.exportimport.ExportOvaCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [c99e94b0-a9dd-486f-9274-9aa17c9590a0] Lock freed to object > 'EngineLock:{exclusiveLocks='[3ae307cb-53d6-4d70-87b6-4e073c6f5eb6=VM]', > sharedLocks=''}' > 2018-03-05 13:03:19,550-04 ERROR > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [c99e94b0-a9dd-486f-9274-9aa17c9590a0] EVENT_ID: > IMPORTEXPORT_EXPORT_VM_TO_OVA_FAILED(1,225), Failed to export Vm > pfSense as a Virtual Appliance to path /ova/pfSense.ova on Host > eple-rectorat-proto > ... 
> > Sincerely, > Fabrice > > Le 05/03/2018 ? 12:50, Arik Hadas a ?crit?: >> >> >> On Mon, Mar 5, 2018 at 6:48 PM, Arik Hadas > > wrote: >> >> >> >> On Mon, Mar 5, 2018 at 6:29 PM, Fabrice SOLER >> > > wrote: >> >> Hello, >> >> I need to export my VM to OVA format from the administration >> portail Ovirt. It fails with this message : >> >> /Failed to export Vm CentOS as a Virtual Appliance to path >> /data/CentOS.ova on Host eple-rectorat-proto/ >> >> My storage is local (not NFS or iSCSI), is there some >> particulars permissions to put to the destination directory ? >> >> No, the script that packs the OVA is executed with root permissions. >> >> The path is the path to an export domain ? >> >> Not necessarily. >> >> >> Oh, and please share the (engine, ansible) logs if you want more eyes >> looking at that failure. >> >> Sincerely, >> >> Fabrice SOLER >> >> -- >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> >> >> > > -- > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fabrice SOLER.PNG Type: image/png Size: 16525 bytes Desc: not available URL: From ahadas at redhat.com Mon Mar 5 21:49:43 2018 From: ahadas at redhat.com (Arik Hadas) Date: Mon, 5 Mar 2018 23:49:43 +0200 Subject: [ovirt-users] After the export, the import OVA failed In-Reply-To: References: <5064936d-798c-8c3a-853b-2bb0c19e8f3c@ac-guadeloupe.fr> <744cc79b-4e81-f327-18e1-32533d7c7c21@ac-guadeloupe.fr> Message-ID: On Mon, Mar 5, 2018 at 11:01 PM, Fabrice SOLER < fabrice.soler at ac-guadeloupe.fr> wrote: > Hello, > > I found this KB : https://bugzilla.redhat.com/show_bug.cgi?id=1529607 > and put a description to the VM disk and the OVA export works ! :-) > > Now, the import does not work :-( > > The error is : > > *Failed to load VM configuration from OVA file: /data/ova/amon *I have > tried two ways. > In first, I let the file ova. > Secondely I did : tar xvf file.ova and specifiy the directory where the > ovf file is. > > In the engine log, I have found this : > > 2018-03-05 16:15:58,319-04 INFO [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] > (default task-6) [e2a953ca-1460-4899-a958-7dbe37e40a21] Ansible playbook > command has exited with value: 2 > 2018-03-05 16:15:58,319-04 ERROR [org.ovirt.engine.core.bll.GetVmFromOvaQuery] > (default task-6) [e2a953ca-1460-4899-a958-7dbe37e40a21] Failed to query > OVA info > 2018-03-05 16:15:58,319-04 ERROR [org.ovirt.engine.core.bll.GetVmFromOvaQuery] > (default task-6) [e2a953ca-1460-4899-a958-7dbe37e40a21] Query > 'GetVmFromOvaQuery' failed: EngineException: Failed to query OVA info > (Failed with error GeneralException and code 100) > > I have found this KB : https://bugzilla.redhat.com/show_bug.cgi?id=1529965 > The unique solution is a update ? > I highly suggest updating to the latest version, various issues related to OVA export/import were recently fixed. > > Sincerely > Fabrice > > > Le 05/03/2018 ? 13:13, Fabrice SOLER a ?crit : > > Hello, > > Thank for your answer, I have put all permissions for the directory and I > always have errors. > > Here are the ERROR in logs on the engine : > > 2018-03-05 13:03:18,525-04 INFO [org.ovirt.engine.core. 
> vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) > [7dcef072] FINISH, GetVolumeInfoVDSCommand, return: > org.ovirt.engine.core.common.businessentities.storage.DiskImage at 90e3c610, > log id: 84f89c3 > 2018-03-05 13:03:18,529-04 ERROR [org.ovirt.engine.core.bll.CreateOvaCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Command > 'org.ovirt.engine.core.bll.CreateOvaCommand' failed: null > 2018-03-05 13:03:18,529-04 ERROR [org.ovirt.engine.core.bll.CreateOvaCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Exception: > java.lang.NullPointerException > ... > 2018-03-05 13:03:18,533-04 ERROR [org.ovirt.engine.core.bll.exportimport.ExportOvaCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Failed to > create OVA file > 2018-03-05 13:03:18,533-04 INFO [org.ovirt.engine.core.bll. > SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-23) > [7dcef072] Command 'ExportOva' id: 'c484392e-3540-4a11-97bf-3fecbc13e080' > failed when attempting to perform the next operation, marking as FAILED > '[d5d4381b-ec82-4927-91a4-74597cd2511d]' > 2018-03-05 13:03:18,533-04 INFO [org.ovirt.engine.core.bll. > SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-23) > [7dcef072] Command 'ExportOva' id: 'c484392e-3540-4a11-97bf-3fecbc13e080' > child commands '[d5d4381b-ec82-4927-91a4-74597cd2511d]' executions were > completed, status 'FAILED' > 2018-03-05 13:03:19,542-04 ERROR [org.ovirt.engine.core.bll.exportimport.ExportOvaCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [c99e94b0-a9dd-486f-9274-9aa17c9590a0] Ending command > 'org.ovirt.engine.core.bll.exportimport.ExportOvaCommand' with failure. > 2018-03-05 13:03:19,543-04 INFO [org.ovirt.engine.core.bll.exportimport.ExportOvaCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [c99e94b0-a9dd-486f-9274-9aa17c9590a0] Lock freed to object > 'EngineLock:{exclusiveLocks='[3ae307cb-53d6-4d70-87b6-4e073c6f5eb6=VM]', > sharedLocks=''}' > 2018-03-05 13:03:19,550-04 ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [c99e94b0-a9dd-486f-9274-9aa17c9590a0] EVENT_ID: > IMPORTEXPORT_EXPORT_VM_TO_OVA_FAILED(1,225), Failed to export Vm pfSense > as a Virtual Appliance to path /ova/pfSense.ova on Host eple-rectorat-proto > ... > > Sincerely, > Fabrice > > Le 05/03/2018 ? 12:50, Arik Hadas a ?crit : > > > > On Mon, Mar 5, 2018 at 6:48 PM, Arik Hadas wrote: > >> >> >> On Mon, Mar 5, 2018 at 6:29 PM, Fabrice SOLER < >> fabrice.soler at ac-guadeloupe.fr> wrote: >> >>> Hello, >>> >>> I need to export my VM to OVA format from the administration portail >>> Ovirt. It fails with this message : >>> >>> *Failed to export Vm CentOS as a Virtual Appliance to path >>> /data/CentOS.ova on Host eple-rectorat-proto* >>> >>> My storage is local (not NFS or iSCSI), is there some particulars >>> permissions to put to the destination directory ? >>> >> No, the script that packs the OVA is executed with root permissions. >> >> >>> The path is the path to an export domain ? >>> >> Not necessarily. >> > > Oh, and please share the (engine, ansible) logs if you want more eyes > looking at that failure. 
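A hedged sketch of collecting those logs on the engine machine, assuming default oVirt log locations (the ova/ directory for the ansible-driven OVA flows is an assumption based on a standard 4.2 setup, not something confirmed in the thread):

    # engine log, including the CreateOvaCommand NullPointerException trace
    grep -B 2 -A 20 'CreateOvaCommand' /var/log/ovirt-engine/engine.log
    # per-run ansible logs written by the OVA export/import flows, if present
    ls -lt /var/log/ovirt-engine/ova/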
> > >> >> >>> Sincerely, >>> >>> Fabrice SOLER >>> -- >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> > > -- > > > _______________________________________________ > Users mailing listUsers at ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users > > > -- > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fabrice SOLER.PNG Type: image/png Size: 16525 bytes Desc: not available URL: From nicolas.vaye at province-sud.nc Mon Mar 5 21:56:48 2018 From: nicolas.vaye at province-sud.nc (Nicolas Vaye) Date: Mon, 5 Mar 2018 21:56:48 +0000 Subject: [ovirt-users] ovirt Node 4.2.1 on 2 node : problem with ha broker and HE storage Message-ID: <1520287007.4516.58.camel@province-sud.nc> Hello, I'd installed oVirt Node 4.2.1 on two host, i'd deployed HE on the first, via a storage NFS. The administration portal seem to be OK but i can't see the HE VM, the portal indicate there is one VM, but impossible to get it into the list of the VM. I can't see the storage that host the HE-VM too. Where is the problem ? I'd added the second host into the cluster and i haven't the option to deploy the HE into it during the "add Host" operation. After added the second node, i can put into maintenance mode and then reinstall with the option DEPLOY for HE but the result is : "Cannot edit Host. You are using an unmanaged hosted engine VM. Please add the first storage domain in order to start the hosted engine import process." The logs for the first node indicate this error (very often, every ~ 30sec) below : ovirt-ha-broker ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update ERROR Failed to read state. Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 88, in run self._storage_broker.get_raw_stats() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 162, in get_raw_stats .format(str(e))) RequestError: failed to read metadata: [Errno 2] No such file or directory: '/var/run/vdsm/storage/e906ea1d-a5ee-4ea9-838c-bcb4a8b866ff/f1ccdaaf-4682-4d68-9c27-751d54f101a6/13fb32ec-0463-45d4-925a-b557c1dc803f' With this problem, i can't move the HE vm on the second host. so there is no HA. Thanks. Nicolas VAYE From ggkkrr55 at gmail.com Mon Mar 5 23:03:19 2018 From: ggkkrr55 at gmail.com (Jean Pickard) Date: Mon, 5 Mar 2018 15:03:19 -0800 Subject: [ovirt-users] How to setup users to see a subset of VMs in oVirt Message-ID: Hello, I need to create user accounts in oVirt that can only manage a specific set of VMs and I don't want them to see any other ones. example: User1 can only see VM1, VM2, VM3, VM4 User2 can only see VM5, VM6, VM7 Admin can see all of them. How do I accomplish this? Thank you, Payman Vazinkhoo -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincent at epicenergy.ca Tue Mar 6 02:11:07 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Mon, 5 Mar 2018 18:11:07 -0800 Subject: [ovirt-users] The timezone in the guest differs from configuration (Windows) Message-ID: My windows VMs have this error, but the time zones are setup correctly. 
In engine, the VM shows: "Hardware Clock Time Offset: Pacific Standard Time" In Windows, doesn't matter if I leave the time zone automatic or manually set it to Pacific Standard Time. How do I fix this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From simone.bruckner at fabasoft.com Tue Mar 6 05:34:19 2018 From: simone.bruckner at fabasoft.com (Bruckner, Simone) Date: Tue, 6 Mar 2018 05:34:19 +0000 Subject: [ovirt-users] Cannot activate storage domain In-Reply-To: <2CB4E8C8E00E594EA06D4AC427E429920FE5A323@fabamailserver.fabagl.fabasoft.com> References: <2CB4E8C8E00E594EA06D4AC427E429920FE500D1@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE56290@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE5A323@fabamailserver.fabagl.fabasoft.com> Message-ID: <2CB4E8C8E00E594EA06D4AC427E429920FE6670D@fabamailserver.fabagl.fabasoft.com> Hello, I apologize for bringing this one up again, but does anybody know if there is a change to recover a storage domain, that cannot be activated? Thank you, Simone Von: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] Im Auftrag von Bruckner, Simone Gesendet: Freitag, 2. M?rz 2018 17:03 An: users at ovirt.org Betreff: Re: [ovirt-users] Cannot activate storage domain Hi all, I managed to get the inactive storage domain to maintenance by stopping all running VMs that were using it, but I am still not able to activate it. Trying to activate results in the following events: For each host: VDSM command GetVGInfoVDS failed: Volume Group does not exist: (u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',) And finally: VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',) Is there anything I can do to recover this storage domain? Thank you and all the best, Simone Von: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] Im Auftrag von Bruckner, Simone Gesendet: Donnerstag, 1. M?rz 2018 17:57 An: users at ovirt.org Betreff: Re: [ovirt-users] Cannot activate storage domain Hi, we are still struggling getting a storage domain online again. We tried to put the storage domain in maintenance mode, that led to "Failed to update OVF disks 809cc8d7-7687-46cf-a342-3be48674a9b3, OVF data isn't updated on those OVF stores". Trying again with ignoring OVF update failures put the storage domain in "preparing for maintenance". We see the following message on all hosts: "Error releasing host id 26 for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 (monitor:578)". Querying the storage domain using vdsm-client on the SPM resulted in # vdsm-client StorageDomain getInfo "storagedomainID"="b83c159c-4ad6-4613-ba16-bab95ccd10c0" vdsm-client: Command StorageDomain.getInfo with args {'storagedomainID': 'b83c159c-4ad6-4613-ba16-bab95ccd10c0'} failed: (code=358, message=Storage domain does not exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)) Any ideas? Thank you and all the best, Simone Von: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] Im Auftrag von Bruckner, Simone Gesendet: Mittwoch, 28. Februar 2018 15:52 An: users at ovirt.org Betreff: [ovirt-users] Cannot activate storage domain Hi all, we run a small oVirt installation that we also use for automated testing (automatically creating, dropping vms). We got an inactive FC storage domain that we cannot activate any more. We see several events at that time starting with: VM perftest-c17 is down with error. 
Exit message: Unable to get volume size for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 volume 686376c1-4be1-44c3-89a3-0a8addc8fdf2. Trying to activate the strorage domain results in the following alert event for each host: VDSM command GetVGInfoVDS failed: Volume Group does not exist: (u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',) And after those messages from all hosts we get: VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',) Failed to activate Storage Domain VMHOST_LUN_205 (Data Center Production) by Invalid status on Data Center Production. Setting status to Non Responsive. Storage Pool Manager runs on Host vmhost003.fabagl.fabasoft.com (Address: vmhost003.fabagl.fabasoft.com), Data Center Production. Checking the hosts with multipath -ll we see the LUN without errors. We run oVirt 4.2.1 on CentOS 7.4. Hosts are CentOS 7.4 hosts with oVirt installed using oVirt engine. Hosts are connected to about 30 FC LUNs (8 TB each) on two all-flash storage arrays. Thank you, Simone Bruckner -------------- next part -------------- An HTML attachment was scrubbed... URL: From caignec at cines.fr Tue Mar 6 07:22:30 2018 From: caignec at cines.fr (Lionel Caignec) Date: Tue, 6 Mar 2018 08:22:30 +0100 (CET) Subject: [ovirt-users] Ghost Snapshot Disk In-Reply-To: References: <2109773819.1728939.1519370725788.JavaMail.zimbra@cines.fr> <1550150634.1827635.1519649308293.JavaMail.zimbra@cines.fr> <280580777.1830731.1519652234852.JavaMail.zimbra@cines.fr> <48154177.1832942.1519654691849.JavaMail.zimbra@cines.fr> Message-ID: <489433186.2545721.1520320950720.JavaMail.zimbra@cines.fr> Hi, ok thank you for information (sorry for late response). I will do that. ----- Mail original ----- De: "Shani Leviim" ?: "Lionel Caignec" Cc: "users" Envoy?: Mardi 27 F?vrier 2018 14:19:45 Objet: Re: [ovirt-users] Ghost Snapshot Disk Hi Lionel, Sorry for the delay in replying you. If it's possible from your side, syncing the data and destroying old disk sounds about right. In addition, it seems like you're having this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1509629 And it was fixed for version 4.1.9. and above. *Regards,* *Shani Leviim* On Mon, Feb 26, 2018 at 4:18 PM, Lionel Caignec wrote: > Ok so i reply myself, > > Version is 4.1.7.6-1 > > I just delete manually a snapshot previously created. But this is an io > intensive vm, whit big disk (2,5To, and 5To). > > For the log, i cannot paste all my log on public list security reason, i > will send you full in private. > Here is an extract relevant to my error > engine.log-20180210:2018-02-09 23:00:03,200+01 INFO > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (default task-312) [44402a8c-3196-43f0-ba33-307ea78e6f49] EVENT_ID: > USER_CREATE_SNAPSHOT(45), Correlation ID: 44402a8c-3196-43f0-ba33-307ea78e6f49, > Job ID: 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom > ID: null, Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002' > creation for VM 'zz_nil' was initiated by snap_user at internal. 
> engine.log-20180210:2018-02-09 23:01:06,578+01 INFO > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (DefaultQuartzScheduler5) [] EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_SUCCESS(68), > Correlation ID: 44402a8c-3196-43f0-ba33-307ea78e6f49, Job ID: > 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom ID: null, > Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002' > creation for VM 'zz_nil' has been completed. > engine.log-20180220:2018-02-19 17:01:23,800+01 INFO > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (default task-113) [] EVENT_ID: USER_REMOVE_SNAPSHOT(342), Correlation ID: > 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: c9a918a7-b00c-43cf-b6de-3659ac0765da, > Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Snapshot > 'AUTO_7D_zz_nil_20180209_220002' deletion for VM 'zz_nil' was initiated > by acaignec at ldap-cines-authz. > engine.log-20180221:2018-02-20 22:24:45,174+01 ERROR > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (DefaultQuartzScheduler6) [06a9efa4-1b80-4021-bf3e-41ecebe58a88] > EVENT_ID: USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Correlation ID: > 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: c9a918a7-b00c-43cf-b6de-3659ac0765da, > Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Failed to > delete snapshot 'AUTO_7D_zz_nil_20180209_220002' for VM 'zz_nil'. > 2018-02-20 22:24:46,266+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] > (DefaultQuartzScheduler3) [516079c3] SPMAsyncTask::PollTask: Polling task > '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command 'DestroyImage', > Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') > returned status 'finished', result 'success'. > 2018-02-20 22:24:46,267+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] > (DefaultQuartzScheduler3) [516079c3] BaseAsyncTask::onTaskEndSuccess: > Task '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command > 'DestroyImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') > ended successfully. > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endActionIfNecessary: > All tasks of command 'fe8c91f2-386b-4b3f-bbf3-aeda8e9244c6' has ended -> > executing 'endAction' > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endAction: Ending > action for '1' tasks (command ID: 'fe8c91f2-386b-4b3f-bbf3-aeda8e9244c6'): > calling endAction '. > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (org.ovirt.thread.pool-6-thread-20) [516079c3] CommandAsyncTask::endCommandAction > [within thread] context: Attempting to endAction 'DestroyImage', > 2018-02-20 22:24:46,269+01 ERROR [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (org.ovirt.thread.pool-6-thread-20) [516079c3] [within thread]: endAction > for action type DestroyImage threw an exception.: > java.lang.NullPointerException > at org.ovirt.engine.core.bll.tasks.CoCoAsyncTaskHelper. > endAction(CoCoAsyncTaskHelper.java:335) [bll.jar:] > at org.ovirt.engine.core.bll.tasks.CommandCoordinatorImpl. > endAction(CommandCoordinatorImpl.java:340) [bll.jar:] > at org.ovirt.engine.core.bll.tasks.CommandAsyncTask. 
> endCommandAction(CommandAsyncTask.java:154) [bll.jar:] > at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.lambda$ > endActionIfNecessary$0(CommandAsyncTask.java:106) [bll.jar:] > at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$ > InternalWrapperRunnable.run(ThreadPoolUtil.java:84) [utils.jar:] > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [rt.jar:1.8.0_161] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > [rt.jar:1.8.0_161] > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > [rt.jar:1.8.0_161] > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > [rt.jar:1.8.0_161] > at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161] > > ----- Mail original ----- > De: "Shani Leviim" > ?: "Lionel Caignec" > Envoy?: Lundi 26 F?vrier 2018 14:42:38 > Objet: Re: [ovirt-users] Ghost Snapshot Disk > > Yes, please. > Can you detail a bit more regarding the actions you've done? > > I'm assuming that since the snapshot had no description, trying to operate > it caused the nullPointerException you've got. > But I want to examine what was the cause for that. > > Also, can you please answer back to the list? > > > > *Regards,* > > *Shani Leviim* > > On Mon, Feb 26, 2018 at 3:37 PM, Lionel Caignec wrote: > > > Version is 4.1.7.6-1 > > > > Do you want the log from the day i delete snapshot? > > > > ----- Mail original ----- > > De: "Shani Leviim" > > ?: "Lionel Caignec" > > Cc: "users" > > Envoy?: Lundi 26 F?vrier 2018 14:29:16 > > Objet: Re: [ovirt-users] Ghost Snapshot Disk > > > > Hi, > > > > What is your engine version, please? > > I'm trying to reproduce your steps, for understanding better was is the > > cause for that error. Therefore, a full engine log is needed. > > Can you please attach it? > > > > Thanks, > > > > > > *Shani Leviim* > > > > On Mon, Feb 26, 2018 at 2:48 PM, Lionel Caignec > wrote: > > > > > Hi > > > > > > 1) this is error message from ui.log > > > > > > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend. > > > server.gwt.OvirtRemoteLoggingService] (default task-3) [] Permutation > > > name: 8C01181C3B121D0AAE1312275CC96415 > > > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend. > > server.gwt.OvirtRemoteLoggingService] > > > (default task-3) [] Uncaught exception: com.google.gwt.core.client. > > JavaScriptException: > > > (TypeError) > > > __gwt$exception: : Cannot read property 'F' of null > > > at org.ovirt.engine.ui.uicommonweb.models.storage. > > > DisksAllocationModel$3.$onSuccess(DisksAllocationModel.java:120) > > > at org.ovirt.engine.ui.uicommonweb.models.storage. > > > DisksAllocationModel$3.onSuccess(DisksAllocationModel.java:120) > > > at org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess( > > Frontend.java:233) > > > [frontend.jar:] > > > at org.ovirt.engine.ui.frontend.Frontend$2.onSuccess(Frontend. > > java:233) > > > [frontend.jar:] > > > at org.ovirt.engine.ui.frontend.communication. > > > OperationProcessor$2.$onSuccess(OperationProcessor.java:139) > > > [frontend.jar:] > > > at org.ovirt.engine.ui.frontend.communication. > > > OperationProcessor$2.onSuccess(OperationProcessor.java:139) > > > [frontend.jar:] > > > at org.ovirt.engine.ui.frontend.communication. > > > GWTRPCCommunicationProvider$5$1.$onSuccess( > GWTRPCCommunicationProvider. > > java:269) > > > [frontend.jar:] > > > at org.ovirt.engine.ui.frontend.communication. > > > GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider. 
> > java:269) > > > [frontend.jar:] > > > at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter. > > > onResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:] > > > at com.google.gwt.http.client.Request.$fireOnResponseReceived( > > Request.java:237) > > > [gwt-servlet.jar:] > > > at com.google.gwt.http.client.RequestBuilder$1. > > onReadyStateChange(RequestBuilder.java:409) > > > [gwt-servlet.jar:] > > > at Unknown.eval(webadmin-0.js at 65) > > > at com.google.gwt.core.client.impl.Impl.apply(Impl.java:296) > > > [gwt-servlet.jar:] > > > at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:335) > > > [gwt-servlet.jar:] > > > at Unknown.eval(webadmin-0.js at 54) > > > > > > > > > 2) This line seems to be about the bad disk : > > > > > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | > > > 2748779069440 | 00000000-0000-0000-0000-000000000000 | > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > > 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > > > > > > > > 3) Snapshot table is empty for the concerned vm_id. > > > > > > ----- Mail original ----- > > > De: "Shani Leviim" > > > ?: "Lionel Caignec" > > > Cc: "users" > > > Envoy?: Lundi 26 F?vrier 2018 13:31:23 > > > Objet: Re: [ovirt-users] Ghost Snapshot Disk > > > > > > Hi Lionel, > > > > > > The error message you've mentioned sounds like a UI error. > > > Can you please attach your ui log? > > > > > > Also, on the data from 'images' table you've uploaded, can you describe > > > which line is the relevant disk? > > > > > > Finally (for now), in case the snapshot was deleted, can you please > > > validate it by viewing the output of: > > > $ select * from snapshots; > > > > > > > > > > > > *Regards,* > > > > > > *Shani Leviim* > > > > > > On Mon, Feb 26, 2018 at 9:20 AM, Lionel Caignec > > wrote: > > > > > > > Hi Shani, > > > > thank you for helping me with your reply, > > > > i juste make a little mistake on explanation. In fact it's the > snapshot > > > > does not exist anymore. This is the disk(s) relative to her wich > still > > > > exist, and perhaps LVM volume. > > > > So can i delete manually this disk in database? what about the lvm > > > volume? > > > > Is it better to recreate disk sync data and destroy old one? > > > > > > > > > > > > > > > > ----- Mail original ----- > > > > De: "Shani Leviim" > > > > ?: "Lionel Caignec" > > > > Cc: "users" > > > > Envoy?: Dimanche 25 F?vrier 2018 14:26:41 > > > > Objet: Re: [ovirt-users] Ghost Snapshot Disk > > > > > > > > Hi Lionel, > > > > > > > > You can try to delete that snapshot directly from the database. > > > > > > > > In case of using psql [1], once you've logged in to your database, > you > > > can > > > > run this query: > > > > $ select * from snapshots where vm_id = ''; > > > > This one would list the snapshots associated with a VM by its id. > > > > > > > > In case you don't have you vm_id, you can locate it by querying: > > > > $ select * from vms where vm_name = 'nil'; > > > > This one would show you some details about a VM by its name > (including > > > the > > > > vm's id). > > > > > > > > Once you've found the relevant snapshot, you can delete it by > running: > > > > $ delete from snapshots where snapshot_id = ''; > > > > This one would delete the desired snapshot from the database. > > > > > > > > Since it's a delete operation, I would suggest confirming the ids > > before > > > > executing it. > > > > > > > > Hope you've found it useful! 
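Pulling Shani's steps together, a hedged sketch of the whole cleanup as it might be run against the engine database (the VM name 'nil' comes from the thread; the database name and user assume a default local engine setup, and the ids in angle brackets are placeholders to be confirmed first):

    # locate the VM and its snapshot rows
    psql -U engine -d engine -c "select * from vms where vm_name = 'nil';"
    psql -U engine -d engine -c "select * from snapshots where vm_id = '<vm_id>';"
    # destructive - run only once the snapshot_id has been double-checked
    psql -U engine -d engine -c "delete from snapshots where snapshot_id = '<snapshot_id>';"

Taking an engine backup beforehand (engine-backup) would be a sensible precaution.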
> > > > > > > > [1] > > > > https://www.ovirt.org/documentation/install-guide/ > > > appe-Preparing_a_Remote_ > > > > PostgreSQL_Database_for_Use_with_the_oVirt_Engine/ > > > > > > > > > > > > *Regards,* > > > > > > > > *Shani Leviim* > > > > > > > > On Fri, Feb 23, 2018 at 9:25 AM, Lionel Caignec > > > wrote: > > > > > > > > > Hi, > > > > > > > > > > i've a problem with snapshot. On one VM i've a "snapshot" ghost > > without > > > > > name or uuid, only information is size (see attachment). In the > > > snapshot > > > > > tab there is no trace about this disk. > > > > > > > > > > In database (table images) i found this : > > > > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | > > > > > 2748779069440 | 00000000-0000-0000-0000-000000000000 | > > > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > > > > 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > > > > | 2 | 4 | 17e26476-cecb-441d-a5f7- > > 46ab3ef387ee > > > | > > > > > 2018-01-17 22:01:29.663334+01 | 2018-01-19 08:40:14.345229+01 | f > > > | > > > > > 1 | 2 > > > > > 1c7650fa-542b-4ec2-83a1-d2c1c31be5fd | 2018-01-17 22:02:03+01 | > > > > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > > > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > > > > 22:01:20.84+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > > > > | 2 | 4 | bf834a91-c69f-4d2c-b639- > > 116ed58296d8 > > > | > > > > > 2018-01-17 22:01:29.836133+01 | 2018-01-19 08:40:19.083508+01 | f > > > | > > > > > 1 | 2 > > > > > 8614b21f-c0de-40f2-b4fb-e5cf193b0743 | 2018-02-09 23:00:44+01 | > > > > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > > > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-02-16 > > > > > 23:00:02.855+01 | 390175dc-baf4-4831-936a-5ea68fa4c969 > > > > > > > > > > > > > > > But i does not know which line is my disk. Is it possible to > delete > > > > > directly into database? > > > > > Or is it better to dump my disk to another new and delete the > > > "corrupted > > > > > one"? > > > > > > > > > > Another thing, when i try to move the disk to another storage > > domain i > > > > > always get "uncaght exeption occured ..." and no error in > engine.log. > > > > > > > > > > > > > > > Thank you for helping. > > > > > > > > > > -- > > > > > Lionel Caignec > > > > > > > > > > _______________________________________________ > > > > > Users mailing list > > > > > Users at ovirt.org > > > > > http://lists.ovirt.org/mailman/listinfo/users > > > > > > > > > > > > > > > > > > > > From omachace at redhat.com Tue Mar 6 08:27:43 2018 From: omachace at redhat.com (Ondra Machacek) Date: Tue, 6 Mar 2018 09:27:43 +0100 Subject: [ovirt-users] Users/Groups Permissions In-Reply-To: <9cd67cf34aaa42f180fe29c2fc2e9fd9@ooe.gv.at> References: <9cd67cf34aaa42f180fe29c2fc2e9fd9@ooe.gv.at> Message-ID: <1219ab15-e001-32d7-b0ae-7bcb7fc86730@redhat.com> On 03/05/2018 10:42 AM, Markus.Schaufler at ooe.gv.at wrote: > Hi! > > Still new to oVirt and got another question: > > I have many Windows and Linux VMs and created for each the Windows and > Linux machines two Usergroups (limited and admins). > > Now I want to grant the groups according permissions to according VMs. > How can I do this without clicking through every VM manually (e.g. by > mark several vms in the UI and manage their permissions or via CLI)? 
You can use our Python SDK; please see the example below:

https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/assign_permission_to_vms.py

Or you can use Ansible if you are familiar with it:

http://docs.ansible.com/ansible/latest/ovirt_permissions_module.html#examples

The playbook would look like:

---
- hosts: localhost
  connection: local
  vars:
    username: admin at internal
    password: thepassword
    insecure: True
    url: https://ovirt.example.com/ovirt-engine/api
  tasks:
    - name: Obtain SSO token
      ovirt_auth:
        url: "{{ url }}"
        username: "{{ username }}"
        password: "{{ password }}"
        insecure: "{{ insecure }}"
    - name: Add permissions to user
      ovirt_permissions:
        auth: "{{ ovirt_auth }}"
        user_name: user2
        authz_name: internal-authz
        object_type: vm
        object_name: "{{ item }}"
        role: UserVmManager
      with_items:
        - myvm1
        - myvm2
        - myvm3
    - name: Revoke SSO token
      ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"

> Many thanks in advance,
>
> *Markus Schaufler, MSc*
>
> Amt der OÖ. Landesregierung
> Direktion Präsidium
>
> Abteilung Informationstechnologie
>
> Referat ST3 Server
>
> A-4021 Linz, Kärntnerstraße 16
>
> *Tel.:* +43 (0)732 7720 - 13138
>
> *Fax:* +43 (0)732 7720 - 213255
>
> *email:* markus.schaufler at ooe.gv.at
>
> *Internet:* www.land-oberoesterreich.gv.at
>
> *DVR:* 0069264
>
> Messages exchanged with the above sender by e-mail are for information
> purposes only. Legally binding declarations may only be submitted via
> this medium to the official mailbox it.post at ooe.gv.at.
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

From omachace at redhat.com Tue Mar 6 08:29:05 2018 From: omachace at redhat.com (Ondra Machacek) Date: Tue, 6 Mar 2018 09:29:05 +0100 Subject: [ovirt-users] How to setup users to see a subset of VMs in oVirt In-Reply-To: References: Message-ID: On 03/06/2018 12:03 AM, Jean Pickard wrote:
> Hello,
> I need to create user accounts in oVirt that can only manage a specific
> set of VMs and I don't want them to see any other ones.
> example:
> User1 can only see VM1, VM2, VM3, VM4
> User2 can only see VM5, VM6, VM7
> Admin can see all of them.
> How do I accomplish this?

Maybe this can help you:

http://lists.ovirt.org/pipermail/users/2018-March/087432.html

>
> Thank you,
>
> Payman Vazinkhoo
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

From sleviim at redhat.com Tue Mar 6 08:47:39 2018 From: sleviim at redhat.com (Shani Leviim) Date: Tue, 6 Mar 2018 10:47:39 +0200 Subject: [ovirt-users] Cannot activate storage domain In-Reply-To: <2CB4E8C8E00E594EA06D4AC427E429920FE6670D@fabamailserver.fabagl.fabasoft.com> References: <2CB4E8C8E00E594EA06D4AC427E429920FE500D1@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE56290@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE5A323@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE6670D@fabamailserver.fabagl.fabasoft.com> Message-ID: Hi Simone, Can you please share your vdsm and engine logs? *Regards,* *Shani Leviim* On Tue, Mar 6, 2018 at 7:34 AM, Bruckner, Simone < simone.bruckner at fabasoft.com> wrote:
> Hello, I apologize for bringing this one up again, but does anybody know
> if there is a chance to recover a storage domain that cannot be activated?
> > > > Thank you, > > Simone > > > > *Von:* users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] *Im > Auftrag von *Bruckner, Simone > *Gesendet:* Freitag, 2. M?rz 2018 17:03 > > *An:* users at ovirt.org > *Betreff:* Re: [ovirt-users] Cannot activate storage domain > > > > Hi all, > > > > I managed to get the inactive storage domain to maintenance by stopping > all running VMs that were using it, but I am still not able to activate it. > > > > Trying to activate results in the following events: > > > > For each host: > > VDSM command GetVGInfoVDS failed: Volume Group does not exist: > (u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',) > > > > And finally: > > VDSM command ActivateStorageDomainVDS failed: Storage domain does not > exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',) > > > > Is there anything I can do to recover this storage domain? > > > > Thank you and all the best, > > Simone > > > > *Von:* users-bounces at ovirt.org [mailto:users-bounces at ovirt.org > ] *Im Auftrag von *Bruckner, Simone > *Gesendet:* Donnerstag, 1. M?rz 2018 17:57 > *An:* users at ovirt.org > *Betreff:* Re: [ovirt-users] Cannot activate storage domain > > > > Hi, > > > > we are still struggling getting a storage domain online again. We tried > to put the storage domain in maintenance mode, that led to ?Failed to > update OVF disks 809cc8d7-7687-46cf-a342-3be48674a9b3, OVF data isn't > updated on those OVF stores?. > > > > Trying again with ignoring OVF update failures put the storage domain in > ?preparing for maintenance?. We see the following message on all hosts: > ?Error releasing host id 26 for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 > (monitor:578)?. > > > > Querying the storage domain using vdsm-client on the SPM resulted in > > # vdsm-client StorageDomain getInfo "storagedomainID"="b83c159c- > 4ad6-4613-ba16-bab95ccd10c0" > > vdsm-client: Command StorageDomain.getInfo with args {'storagedomainID': > 'b83c159c-4ad6-4613-ba16-bab95ccd10c0'} failed: > > (code=358, message=Storage domain does not exist: > (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)) > > > > Any ideas? > > > > Thank you and all the best, > > Simone > > > > *Von:* users-bounces at ovirt.org [mailto:users-bounces at ovirt.org > ] *Im Auftrag von *Bruckner, Simone > *Gesendet:* Mittwoch, 28. Februar 2018 15:52 > *An:* users at ovirt.org > *Betreff:* [ovirt-users] Cannot activate storage domain > > > > Hi all, > > > > we run a small oVirt installation that we also use for automated testing > (automatically creating, dropping vms). > > > > We got an inactive FC storage domain that we cannot activate any more. We > see several events at that time starting with: > > > > VM perftest-c17 is down with error. Exit message: Unable to get volume > size for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 volume > 686376c1-4be1-44c3-89a3-0a8addc8fdf2. > > > > Trying to activate the strorage domain results in the following alert > event for each host: > > > > VDSM command GetVGInfoVDS failed: Volume Group does not exist: > (u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',) > > > > And after those messages from all hosts we get: > > > > VDSM command ActivateStorageDomainVDS failed: Storage domain does not > exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',) > > Failed to activate Storage Domain VMHOST_LUN_205 (Data Center Production) > by > > Invalid status on Data Center Production. Setting status to Non Responsive. 
> > Storage Pool Manager runs on Host vmhost003.fabagl.fabasoft.com (Address: > vmhost003.fabagl.fabasoft.com), Data Center Production. > > > > Checking the hosts with multipath ?ll we see the LUN without errors. > > > > We run oVirt 4.2.1 on CentOS 7.4. Hosts are CentOS 7.4 hosts with oVirt > installed using oVirt engine. > > Hosts are connected to about 30 FC LUNs (8 TB each) on two all-flash > storage arrays. > > > > Thank you, > > Simone Bruckner > > > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simone.bruckner at fabasoft.com Tue Mar 6 09:19:10 2018 From: simone.bruckner at fabasoft.com (Bruckner, Simone) Date: Tue, 6 Mar 2018 09:19:10 +0000 Subject: [ovirt-users] Cannot activate storage domain In-Reply-To: References: <2CB4E8C8E00E594EA06D4AC427E429920FE500D1@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE56290@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE5A323@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE6670D@fabamailserver.fabagl.fabasoft.com> Message-ID: <2CB4E8C8E00E594EA06D4AC427E429920FE67150@fabamailserver.fabagl.fabasoft.com> Hi Shani, please find the logs attached. Thank you, Simone Von: Shani Leviim [mailto:sleviim at redhat.com] Gesendet: Dienstag, 6. M?rz 2018 09:48 An: Bruckner, Simone Cc: users at ovirt.org Betreff: Re: [ovirt-users] Cannot activate storage domain Hi Simone, Can you please share your vdsm and engine logs? Regards, Shani Leviim On Tue, Mar 6, 2018 at 7:34 AM, Bruckner, Simone > wrote: Hello, I apologize for bringing this one up again, but does anybody know if there is a change to recover a storage domain, that cannot be activated? Thank you, Simone Von: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] Im Auftrag von Bruckner, Simone Gesendet: Freitag, 2. M?rz 2018 17:03 An: users at ovirt.org Betreff: Re: [ovirt-users] Cannot activate storage domain Hi all, I managed to get the inactive storage domain to maintenance by stopping all running VMs that were using it, but I am still not able to activate it. Trying to activate results in the following events: For each host: VDSM command GetVGInfoVDS failed: Volume Group does not exist: (u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',) And finally: VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',) Is there anything I can do to recover this storage domain? Thank you and all the best, Simone Von: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] Im Auftrag von Bruckner, Simone Gesendet: Donnerstag, 1. M?rz 2018 17:57 An: users at ovirt.org Betreff: Re: [ovirt-users] Cannot activate storage domain Hi, we are still struggling getting a storage domain online again. We tried to put the storage domain in maintenance mode, that led to ?Failed to update OVF disks 809cc8d7-7687-46cf-a342-3be48674a9b3, OVF data isn't updated on those OVF stores?. Trying again with ignoring OVF update failures put the storage domain in ?preparing for maintenance?. We see the following message on all hosts: ?Error releasing host id 26 for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 (monitor:578)?. 
From nicolas at ecarnot.net Tue Mar 6 10:41:02 2018
From: nicolas at ecarnot.net (Nicolas Ecarnot)
Date: Tue, 6 Mar 2018 11:41:02 +0100
Subject: [ovirt-users] Importing VM fails with "No space left on device"
Message-ID: <2c2c247f-a16e-5f05-aa49-7f1bae24024b@ecarnot.net>

Hello,

When importing a VM, I'm facing the known bug:
https://access.redhat.com/solutions/2770791

QImgError: ecode=1, stdout=[], stderr=['qemu-img: error while writing sector 93569024: No space left on device'

The difference between my case and what is described in the RH webpage is that I have no "Failed to flush the refcount block cache".
Here is what I see:

> ecfbd1a4-f9d2-463a-ade6-def5bd217b43::DEBUG::2018-03-06 09:57:36,460::utils::718::root::(watchCmd) FAILED: <err> = ['qemu-img: error while writing sector 205517952: No space left on device']; <rc> = 1
> ecfbd1a4-f9d2-463a-ade6-def5bd217b43::ERROR::2018-03-06 09:57:36,460::image::865::Storage.Image::(copyCollapsed) conversion failure for volume ac08bc8d-1eea-449a-a102-cf763c6726c8
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/image.py", line 860, in copyCollapsed
>     volume.fmt2str(dstVolFormat))
>   File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 207, in convert
>     raise QImgError(rc, out, err)
> QImgError: ecode=1, stdout=[], stderr=['qemu-img: error while writing sector 205517952: No space left on device'], message=None
> ecfbd1a4-f9d2-463a-ade6-def5bd217b43::ERROR::2018-03-06 09:57:36,461::image::878::Storage.Image::(copyCollapsed) Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/image.py", line 866, in copyCollapsed
>     raise se.CopyImageError(str(e))
> CopyImageError: low level Image copy failed: ("ecode=1, stdout=[], stderr=['qemu-img: error while writing sector 205517952: No space left on device'], message=None",)

I followed the advice in the RH webpage (check that the figures are consistent between the qemu-img sizes and the meta-data file), and they seem to be correct:

root at serv-hv-adm30:/etc# qemu-img info /rhev/data-center/mnt/serv-lin-adm1.sdis.isere.fr\:_home_vmexport3/be2878c9-2c46-476b-bfae-8b02a4679022/images/a5d68d88-3b54-488d-a61e-7995a1906994/ac08bc8d-1eea-449a-a102-cf763c6726c8
image: /rhev/data-center/mnt/serv-lin-adm1.sdis.isere.fr:_home_vmexport3/be2878c9-2c46-476b-bfae-8b02a4679022/images/a5d68d88-3b54-488d-a61e-7995a1906994/ac08bc8d-1eea-449a-a102-cf763c6726c8
file format: qcow2
virtual size: 98G (105226698752 bytes)
disk size: 97G
cluster_size: 65536
Format specific information:
    compat: 0.10
    refcount bits: 16

root at serv-hv-adm30:/etc# cat /rhev/data-center/mnt/serv-lin-adm1.sdis.isere.fr\:_home_vmexport3/be2878c9-2c46-476b-bfae-8b02a4679022/images/a5d68d88-3b54-488d-a61e-7995a1906994/ac08bc8d-1eea-449a-a102-cf763c6726c8.meta
DOMAIN=be2878c9-2c46-476b-bfae-8b02a4679022
CTIME=1520318755
FORMAT=COW
DISKTYPE=1
LEGALITY=LEGAL
SIZE=205520896
VOLTYPE=LEAF
DESCRIPTION=
IMAGE=a5d68d88-3b54-488d-a61e-7995a1906994
PUUID=00000000-0000-0000-0000-000000000000
MTIME=0
POOL_UUID=
TYPE=SPARSE
EOF

So I don't see what's wrong.

--
Nicolas ECARNOT

From stirabos at redhat.com Tue Mar 6 13:57:28 2018
From: stirabos at redhat.com (Simone Tiraboschi)
Date: Tue, 6 Mar 2018 14:57:28 +0100
Subject: [ovirt-users] ovirt Node 4.2.1 on 2 node : problem with ha broker and HE storage
In-Reply-To: <1520287007.4516.58.camel@province-sud.nc>
References: <1520287007.4516.58.camel@province-sud.nc>

On Mon, Mar 5, 2018 at 10:56 PM, Nicolas Vaye wrote:

> Hello,
>
> I'd installed oVirt Node 4.2.1 on two hosts and deployed HE on the first, via NFS storage.
> The administration portal seems to be OK, but I can't see the HE VM: the portal indicates there is one VM, but it is impossible to find it in the list of VMs.
> I can't see the storage that hosts the HE VM either.
> Where is the problem?

Did you deploy with the vintage --noansible flow? In that case you explicitly have to add another SD before the auto import process can be started.

> I'd added the second host to the cluster and I didn't have the option to deploy HE on it during the "add Host" operation.
> After adding the second node, I can put it into maintenance mode and then reinstall it with the DEPLOY option for HE, but the result is:
> "Cannot edit Host. You are using an unmanaged hosted engine VM. Please add the first storage domain in order to start the hosted engine import process."
>
> The logs for the first node indicate this error (very often, every ~30 sec):
>
> ovirt-ha-broker ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update ERROR Failed to read state.
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 88, in run
>     self._storage_broker.get_raw_stats()
>   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 162, in get_raw_stats
>     .format(str(e)))
> RequestError: failed to read metadata: [Errno 2] No such file or directory: '/var/run/vdsm/storage/e906ea1d-a5ee-4ea9-838c-bcb4a8b866ff/f1ccdaaf-4682-4d68-9c27-751d54f101a6/13fb32ec-0463-45d4-925a-b557c1dc803f'
>
> With this problem I can't move the HE VM to the second host, so there is no HA.
>
> Thanks.
>
> Nicolas VAYE

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

From jm3185951 at gmail.com Tue Mar 6 14:20:21 2018
From: jm3185951 at gmail.com (Jonathan Mathews)
Date: Tue, 6 Mar 2018 16:20:21 +0200
Subject: [ovirt-users] Failure to upgrade Cluster Compatibility Version

Any chance of getting feedback on this?

It is becoming urgent.

From fabrice.soler at ac-guadeloupe.fr Tue Mar 6 14:33:24 2018
From: fabrice.soler at ac-guadeloupe.fr (Fabrice SOLER)
Date: Tue, 6 Mar 2018 10:33:24 -0400
Subject: [ovirt-users] After the export, the import OVA failed
Message-ID: <34f50392-8681-e105-e699-b8ebe08cf6ea@ac-guadeloupe.fr>

Hello,

I have upgraded the engine and the node, so the version is 4.2.1.1.1-1.el7.
To import, I made a "tar xvf file.ova". Then, from the portal, I imported the VM. I saw that: (screenshot scrubbed)

After that, the amon VM was removed, as we can see in the events: (screenshot scrubbed)

It seems it does not work. Maybe the VM is hidden somewhere?

Sincerely,
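Before extracting an OVA by hand, it can help to look at its layout first: an OVA is a plain tar archive whose first entry should be the OVF descriptor. A quick sketch, with file.ova standing in for the actual export name used above:

    # list the archive contents without unpacking anything
    tar tvf file.ova

(As it turns out later in this thread, the import dialog expects the packed .ova file itself, not an extracted directory.)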
From michal.skrivanek at redhat.com Tue Mar 6 14:33:34 2018
From: michal.skrivanek at redhat.com (Michal Skrivanek)
Date: Tue, 6 Mar 2018 15:33:34 +0100
Subject: [ovirt-users] Failure to upgrade Cluster Compatibility Version
Message-ID: <845A2E27-2B20-4807-B86F-A4A05859314F@redhat.com>

> On 28 Feb 2018, at 11:21, Jonathan Mathews wrote:
>
> I have been upgrading my oVirt platform from 3.4 and I am trying to get to 4.2.
>
> I have managed to get the platform to 3.6, but need to upgrade the Cluster Compatibility Version.
>
> When I select 3.6 in the Cluster Compatibility Version and select OK, it highlights Compatibility Version in red (image attached).
>
> There are no errors displayed on screen, or in the /var/log/ovirt-engine/engine.log file.
>
> Please let me know if I am missing something and how I can resolve this?

Hi,
well, a screenshot is not really enough to understand what exactly you are trying to do.
Is that in a 3.6 engine? Latest 3.6? Do you have any VMs running? Can you share that engine.log?

Thanks,
michal

> Thanks
> Jonathan

From awels at redhat.com Tue Mar 6 14:34:09 2018
From: awels at redhat.com (Alexander Wels)
Date: Tue, 06 Mar 2018 09:34:09 -0500
Subject: [ovirt-users] Failure to upgrade Cluster Compatibility Version
Message-ID: <2066626.RJrgaHBUgA@awels>

On Tuesday, March 6, 2018 9:20:21 AM EST Jonathan Mathews wrote:
> Any chance of getting feedback on this?
>
> It is becoming urgent.

If you hover over the label it should display a pop-up with the reason why it's red.

From shuriku at shurik.kiev.ua Tue Mar 6 15:02:40 2018
From: shuriku at shurik.kiev.ua (Alexandr Krivulya)
Date: Tue, 6 Mar 2018 17:02:40 +0200
Subject: [ovirt-users] Power off VM from VM portal
Message-ID: <2826607c-14bc-695d-26d6-b20d12f9b755@shurik.kiev.ua>

Hi,

is there any way to power off a VM from the VM portal (4.2.1.7)? I can't find a "power off" button, just "shutdown".

From nicolas at ecarnot.net Tue Mar 6 15:39:22 2018
From: nicolas at ecarnot.net (Nicolas Ecarnot)
Date: Tue, 6 Mar 2018 16:39:22 +0100
Subject: [ovirt-users] Power off VM from VM portal
In-Reply-To: <2826607c-14bc-695d-26d6-b20d12f9b755@shurik.kiev.ua>
Message-ID: <7fb78f56-cc23-8369-0e54-586f42408a91@ecarnot.net>

On 06/03/2018 at 16:02, Alexandr Krivulya wrote:
> Hi,
>
> is there any way to power off VM from VM portal (4.2.1.7)? I can't find
> "power off" button, just "shutdown".

Hello Alexandr,

After having clicked on the VM link, you'll notice that on the right of the Shutdown button is an arrow allowing you to access the Power Off feature.
Regards,

--
Nicolas ECARNOT

From Oliver.Riesener at hs-bremen.de Tue Mar 6 16:11:31 2018
From: Oliver.Riesener at hs-bremen.de (Oliver Riesener)
Date: Tue, 6 Mar 2018 17:11:31 +0100
Subject: [ovirt-users] After the export, the import OVA failed
Message-ID: <3cbdb40f-f3c6-2106-1c58-a68d984e9407@hs-bremen.de>

Hi Fabrice,

try to rename the already existing old VM to another name, like amon-old. Then import the OVA machine again.

On 06.03.2018 15:33, Fabrice SOLER wrote:
> Hello,
> I have upgraded the engine and the node, so the version is 4.2.1.1.1-1.el7. [...]

From fabrice.soler at ac-guadeloupe.fr Tue Mar 6 16:21:32 2018
From: fabrice.soler at ac-guadeloupe.fr (Fabrice SOLER)
Date: Tue, 6 Mar 2018 12:21:32 -0400
Subject: [ovirt-users] After the export, the import OVA failed
Message-ID: <305ea5ef-5656-409f-eb00-eb105240af69@ac-guadeloupe.fr>

Hello,

I noticed that the ovf format is not the same when I make the export OVA with VMware and with oVirt.

Export OVA with VMware:

[root at eple-rectorat-proto AntiVirus]# file AntiVirus.ovf
AntiVirus.ovf: XML 1.0 document, ASCII text, with very long lines, with CRLF line terminators

Export OVA with oVirt:

[root at ovirt-eple amon]# file vm.ovf
vm.ovf: XML 1.0 document, ASCII text, with very long lines, with no line terminators

With oVirt there are no line terminators.

Is that normal? Is that why the OVA import does not work?

Sincerely,
Fabrice SOLER
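Line terminators are irrelevant to XML parsers, so the `file` output by itself does not show that the OVF is broken. A quick well-formedness check, assuming xmllint (from libxml2) is available on the host:

    # exits non-zero and points at the offending line if the XML is malformed
    xmllint --noout vm.ovf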
From gianluca.cecchi at gmail.com Tue Mar 6 17:01:28 2018
From: gianluca.cecchi at gmail.com (Gianluca Cecchi)
Date: Tue, 6 Mar 2018 18:01:28 +0100
Subject: [ovirt-users] Pre-snapshot scripts to run before live snapshot

Hello,
this thread last year (started by me... ;-) was very useful in the different aspects involved:
http://lists.ovirt.org/pipermail/users/2017-March/080322.html

We did cover saving memory or not, and the fsfreeze automatically done by the guest agent if installed inside the VM.
What about pre-snapshot scripts/operations to run inside the guest, to have application consistency?
E.g. if I have a database inside the VM and I have scripted my backup job involving a live snapshot (e.g. with the backup.py utility of the thread).
Can I leverage this kind of functionality with the oVirt guest agent?
Or is it mandatory to consider a remote connection to the VM (via ssh, or what for Windows?) and execute the script/command/bat file?
What are you currently doing in this respect?

Thanks,
Gianluca

From derek at ihtfp.com Tue Mar 6 17:22:44 2018
From: derek at ihtfp.com (Derek Atkins)
Date: Tue, 06 Mar 2018 12:22:44 -0500
Subject: [ovirt-users] Power off VM from VM portal
In-Reply-To: <2826607c-14bc-695d-26d6-b20d12f9b755@shurik.kiev.ua> (Alexandr Krivulya's message of "Tue, 6 Mar 2018 17:02:40 +0200")

Hi,

Alexandr Krivulya writes:

> Hi,
>
> is there any way to power off VM from VM portal (4.2.1.7)? I can't
> find "power off" button, just "shutdown".

I don't know about 4.2, but in 4.1 and 4.0 there is a right-click context menu that gives you access to the Power Off feature. If that doesn't work (ISTR discussion about removing that context menu), then there must be a different way to access it now.

-derek

--
Derek Atkins 617-623-3745
derek at ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant

From derek at ihtfp.com Tue Mar 6 17:20:25 2018
From: derek at ihtfp.com (Derek Atkins)
Date: Tue, 06 Mar 2018 12:20:25 -0500
Subject: [ovirt-users] How to setup users to see a subset of VMs in oVirt
In-Reply-To: (Jean Pickard's message of "Mon, 5 Mar 2018 15:03:19 -0800")

Hi,

Jean Pickard writes:

> Hello,
> I need to create user accounts in oVirt that can only manage a specific set of VMs, and I don't want them to see any other ones.
> Example:
> User1 can only see VM1, VM2, VM3, VM4
> User2 can only see VM5, VM6, VM7
> Admin can see all of them.
> How do I accomplish this?

Just set the permissions on the VMs. It works quite well.

> Thank you,
>
> Payman Vazinkhoo

-derek

--
Derek Atkins 617-623-3745
derek at ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant
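To script Derek's suggestion, the same per-VM permission can also be attached through the REST API. A hedged sketch using curl; the engine address and the UUIDs are placeholders, not values from this thread:

    # grant UserRole on a single VM to a single user (ids are hypothetical)
    curl -k -u 'admin@internal:PASSWORD' \
      -H 'Content-Type: application/xml' \
      -d '<permission><role><name>UserRole</name></role><user id="USER_UUID"/></permission>' \
      'https://engine.example.com/ovirt-engine/api/vms/VM_UUID/permissions'

A user holding only UserRole on specific VMs will see just those VMs in the portal.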
From fabrice.soler at ac-guadeloupe.fr Tue Mar 6 18:25:58 2018
From: fabrice.soler at ac-guadeloupe.fr (Fabrice SOLER)
Date: Tue, 6 Mar 2018 14:25:58 -0400
Subject: [ovirt-users] After the export, the import OVA failed
In-Reply-To: <3cbdb40f-f3c6-2106-1c58-a68d984e9407@hs-bremen.de>

Hi,

I have deleted the VM amon and tried to import the OVA. It does not work.
I think there is a problem in the ovf file (XML format), as I posted in the previous mail:

I noticed that the ovf format is not the same when I make the export OVA with VMware and with oVirt.

Export OVA with VMware:

[root at eple-rectorat-proto AntiVirus]# file AntiVirus.ovf
AntiVirus.ovf: XML 1.0 document, ASCII text, with very long lines, with CRLF line terminators

Export OVA with oVirt:

[root at ovirt-eple amon]# file vm.ovf
vm.ovf: XML 1.0 document, ASCII text, with very long lines, with no line terminators

With oVirt there are no line terminators.

Is that normal? Is that why the OVA import does not work?

On 06.03.2018 at 12:11, Oliver Riesener wrote:
> Hi Fabrice,
> try to rename the already existing old VM to another name like amon-old.
> Then import the OVA machine again. [...]

From ykaul at redhat.com Tue Mar 6 18:54:28 2018
From: ykaul at redhat.com (Yaniv Kaul)
Date: Tue, 6 Mar 2018 20:54:28 +0200
Subject: [ovirt-users] Pre-snapshot scripts to run before live snapshot

On Mar 6, 2018 7:02 PM, "Gianluca Cecchi" wrote:

Hello,
this thread last year (started by me...
;-) was very useful in the different aspects involved:
http://lists.ovirt.org/pipermail/users/2017-March/080322.html

We did cover saving memory or not, and the fsfreeze automatically done by the guest agent if installed inside the VM.
What about pre-snapshot scripts/operations to run inside the guest, to have application consistency?
E.g. if I have a database inside the VM and I have scripted my backup job involving a live snapshot (e.g. with the backup.py utility of the thread).

https://github.com/guillon/qemu-plugins/blob/master/scripts/qemu-guest-agent/fsfreeze-hook.d/mysql-flush.sh.sample

Y.

From rjones at redhat.com Tue Mar 6 19:18:24 2018
From: rjones at redhat.com (Richard W.M. Jones)
Date: Tue, 6 Mar 2018 19:18:24 +0000
Subject: [ovirt-users] Unremovable disks created through the API
Message-ID: <20180306191824.GA27515@redhat.com>

I've been playing with disk uploads through the API. As a result I now have lots of disks in the states "Paused by System" and "Paused by User". They are not attached to any VM, and I'm logged in as admin at internal, but there seems to be no way to use them. Even worse, I've now run out of space so can't do anything else.

How can I remove them?

Screenshot: http://oirase.annexia.org/tmp/ovirt.png

It's a pretty recent engine:

ovirt-engine-4.2.2.2-0.0.master.20180225172203.gitd7cf125.el7.centos.noarch

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any software inside the virtual machine. Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/

From fabrice.soler at ac-guadeloupe.fr Tue Mar 6 20:23:00 2018
From: fabrice.soler at ac-guadeloupe.fr (Fabrice SOLER)
Date: Tue, 6 Mar 2018 16:23:00 -0400
Subject: [ovirt-users] After the export, the import OVA failed

Hello,

I have imported a VM with an OVA created with VMware, and it works!
So the problem is in the export! The vm.ovf has no line terminators; that's why it does not work!

[root at ovirt-eple amon]# file vm.ovf
vm.ovf: XML 1.0 document, ASCII text, with very long lines, with no line terminators

Does somebody know a solution?

Sincerely,
Fabrice SOLER
From ahadas at redhat.com Tue Mar 6 21:04:28 2018
From: ahadas at redhat.com (Arik Hadas)
Date: Tue, 6 Mar 2018 23:04:28 +0200
Subject: [ovirt-users] After the export, the import OVA failed

On Tue, Mar 6, 2018 at 10:23 PM, Fabrice SOLER wrote:

> Hello,
>
> I have imported a VM with an OVA created with VMware, and it works!

That's great, but note that importing an OVA that was created by VMware is, implementation-wise, totally different from importing an OVA that was created by oVirt.

> So the problem is in the export! The vm.ovf has no line terminators;
> that's why it does not work!
> [root at ovirt-eple amon]# file vm.ovf
> vm.ovf: XML 1.0 document, ASCII text, with very long lines, with no line terminators
>
> Does somebody know a solution?

I'm afraid that's not the issue here - the OVF inside the OVA seems valid (otherwise, the UI would have failed to "load" the configuration from the OVA file).
You may be facing one of the various issues that were already resolved and will be available in 4.2.2, but it's hard to tell without having the logs.
If you can share the engine+ansible logs, we will gladly look at them and try to shed more light on the issue you're facing. Otherwise, I would suggest waiting for 4.2.2, where the new OVA support is much more mature.

> Sincerely,
> Fabrice SOLER
From ahadas at redhat.com Tue Mar 6 21:14:40 2018
From: ahadas at redhat.com (Arik Hadas)
Date: Tue, 6 Mar 2018 23:14:40 +0200
Subject: [ovirt-users] Unremovable disks created through the API
In-Reply-To: <20180306191824.GA27515@redhat.com>

On Tue, Mar 6, 2018 at 9:18 PM, Richard W.M. Jones wrote:

> I've been playing with disk uploads through the API. As a result
> I now have lots of disks in the states "Paused by System" and
> "Paused by User". They are not attached to any VM, and I'm logged
> in as admin at internal, but there seems to be no way to use them.
> Even worse, I've now run out of space so can't do anything else.
>
> How can I remove them?
>
> Screenshot: http://oirase.annexia.org/tmp/ovirt.png

Hi Richard,

Selecting Upload->Cancel at that tab will remove such a disk.
Note that it may take a minute or two.

From rjones at redhat.com Tue Mar 6 21:19:41 2018
From: rjones at redhat.com (Richard W.M. Jones)
Date: Tue, 6 Mar 2018 21:19:41 +0000
Subject: [ovirt-users] Unremovable disks created through the API
Message-ID: <20180306211941.GI2787@redhat.com>

On Tue, Mar 06, 2018 at 11:14:40PM +0200, Arik Hadas wrote:
> Selecting Upload->Cancel at that tab will remove such a disk.
> Note that it may take a minute or two.

Yes, that works, thanks.

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW

From Oliver.Riesener at hs-bremen.de Tue Mar 6 22:10:20 2018
From: Oliver.Riesener at hs-bremen.de (Oliver Riesener)
Date: Tue, 6 Mar 2018 23:10:20 +0100
Subject: [ovirt-users] After the export, the import OVA failed
Message-ID: <1F88B9DE-4F80-48B7-9644-A5080F6A226C@hs-bremen.de>

Hi Arik, Hi Fabrice,

I have tried the same procedure with 4.2.1, with the same results.

After the commands:

# mkdir /var/log/ovirt-engine/ova
# chown ovirt.ovirt /var/log/ovirt-engine/ova

I can provide log files; see the failed ansible script.

/export/ovirt/export/xx is the correct directory where I have extracted the *.ova file, because the UI searches for a *.ovf file and didn't recognize *.ova files directly?

Please see the attachments.

Cheers
Olri

-------------- next part --------------
A non-text attachment was scrubbed...
Name: ovirt-4.2.1-failed-to-import-OVA.engine.log
Type: application/octet-stream
Size: 32086 bytes
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ovirt-import-ova-ansible-20180306223007-ovn-elem.example.org-45e9a51.log
Type: application/octet-stream
Size: 1275 bytes
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ovirt-query-ova-ansible-20180306222952-ovn-elem.example.org.log
Type: application/octet-stream
Size: 13476 bytes
URL:
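The attachment names above also show where to look on the engine host when an OVA import fails: each attempt leaves a query log and an import log under /var/log/ovirt-engine/ova/. A short sketch (the timestamped file names are just the ones from this report):

    # on the engine host
    ls -lt /var/log/ovirt-engine/ova/
    tail -n 50 /var/log/ovirt-engine/ova/ovirt-import-ova-ansible-*.log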
From ahadas at redhat.com Tue Mar 6 23:03:28 2018
From: ahadas at redhat.com (Arik Hadas)
Date: Wed, 7 Mar 2018 01:03:28 +0200
Subject: [ovirt-users] After the export, the import OVA failed
In-Reply-To: <1F88B9DE-4F80-48B7-9644-A5080F6A226C@hs-bremen.de>

On Wed, Mar 7, 2018 at 12:10 AM, Oliver Riesener <Oliver.Riesener at hs-bremen.de> wrote:

> Hi Arik, Hi Fabrice,
>
> I have tried the same procedure with 4.2.1, with the same results.
>
> After the commands:
>
> # mkdir /var/log/ovirt-engine/ova
> # chown ovirt.ovirt /var/log/ovirt-engine/ova
>
> I can provide log files; see the failed ansible script.
>
> /export/ovirt/export/xx is the correct directory where I have extracted
> the *.ova file, because the UI searches for a *.ovf file and didn't
> recognize *.ova files directly?

Thanks for providing the logs.
We don't support an OVA directory with content (OVF, disks) that was created by oVirt yet (it only works with content that was created by VMware).
The OVA tar file should not be extracted, and the path to the OVA should include the filename, e.g., /export/ovirt/export/xx/my.ova
If you pack the directory's content into an OVA file, please note that the OVF should be placed as the first entry.

There is a bug about changing 'path' to something like 'path to ova file' in the import dialog [1]; I may add a validation for that as well until we also support the OVA directory format.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1547636
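If the export has already been unpacked, it can be re-packed into a form the dialog accepts, honoring the "OVF first" rule Arik describes. A sketch with hypothetical file names; the disk entries are whatever the export originally contained:

    # the OVF descriptor must be the first tar entry
    tar -cf my.ova vm.ovf disk1
    # then point the import dialog at /export/ovirt/export/xx/my.ova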
From Oliver.Riesener at hs-bremen.de Wed Mar 7 05:40:17 2018
From: Oliver.Riesener at hs-bremen.de (Oliver Riesener)
Date: Wed, 7 Mar 2018 06:40:17 +0100
Subject: [ovirt-users] After the export, the import OVA failed
Message-ID: <00B51011-DA01-44B6-8EEA-BF7DED4ECC61@hs-bremen.de>

On 07.03.2018 at 00:03, Arik Hadas wrote:

> The OVA tar file should not be extracted, and the path to the OVA should
> include the filename, e.g., /export/ovirt/export/xx/my.ova

With the full filename to the *.ova, the VM is importable. Yeah!

> If you pack the directory's content into an OVA file, please note that
> the OVF should be placed as the first entry.
>
> There is a bug about changing 'path' to something like 'path to ova file'
> in the import dialog [1]; I may add a validation for that as well until
> we also support the OVA directory format.
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1547636

At this point the path is OK, but *.ova files should be **browsable** too.
From michal.skrivanek at redhat.com Wed Mar 7 10:05:27 2018
From: michal.skrivanek at redhat.com (Michal Skrivanek)
Date: Wed, 7 Mar 2018 11:05:27 +0100
Subject: [ovirt-users] Troubleshooting VM SSO on Windows 10 (ovirt 4.2.1)
Message-ID: <9D9FE387-878B-4690-893E-AE5097FB64A2@redhat.com>

> On 5 Mar 2018, at 09:49, Cristian Mammoli wrote:
>
> Anyone???

What authentication to the portal are you using? SSO only works if you provide user and password in oVirt's login screen.

> Hi, I'm trying to set up SSO on Windows 10. The VM is domain joined, has the
> agent installed and the credential provider registered. Of course I set up an
> AD domain and the VM has SSO enabled.
>
> Whenever I log in to the user portal and open a VM, I'm presented with the
> login screen and nothing happens; it's like the engine doesn't send the
> command to autologin.
>
> In the agent logs there's nothing interesting, but the communication
> between the engine and the agent is OK: for example, the command to
> lock the screen on console close runs and works:
>
> Dummy-2::INFO::2018-03-01 09:01:39,124::ovirtagentlogic::322::root::Received an external command: lock-screen...
>
> This is an extract from the engine logs when I log in to the user portal and
> start a connection:
>
> 2018-03-01 11:30:01,558+01 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-30) [] User c.mammoli at apra.it successfully logged in with scopes: ovirt-app-admin ovirt-app-api ovirt-app-portal ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access
> 2018-03-01 11:30:01,606+01 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-31) [7bc265f] Running command: CreateUserSessionCommand internal: false.
> 2018-03-01 11:30:01,623+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-31) [7bc265f] EVENT_ID: USER_VDC_LOGIN(30), User c.mammoli at apra.it @apra.it connecting from '192.168.1.100' using session '5NMjCbUiehNLAGMeeWsr4L5TatL+uUGsNHOxQtCvSa9i0DaQ7uoGSi6zaZdXu08vrEk5gyQUJAsB2+COzLwtEw==' logged in.
> 2018-03-01 11:30:02,163+01 ERROR [org.ovirt.engine.core.bll.GetSystemStatisticsQuery] (default task-39) [14276418-5de7-44a6-bb64-c60965de0acf] Query execution failed due to insufficient permissions.
> 2018-03-01 11:30:02,664+01 INFO [org.ovirt.engine.core.bll.SetVmTicketCommand] (default task-54) [617f130b] Running command: SetVmTicketCommand internal: false. Entities affected: ID: c0250fe0-5d8b-44de-82bc-04610952f453 Type: VMAction group CONNECT_TO_VM with role type USER
> 2018-03-01 11:30:02,683+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default task-54) [617f130b] START, SetVmTicketVDSCommand(HostName = r630-01.apra.it, SetVmTicketVDSCommandParameters:{hostId='d99a8356-72e8-4130-a1cc-e148762eca57', vmId='c0250fe0-5d8b-44de-82bc-04610952f453', protocol='SPICE', ticket='u2b1nv+rH+pw', validTime='120', userName='c.mammoli at apra.it', userId='39f9d718-6e65-456a-8a6f-71976bcbbf2f', disconnectAction='LOCK_SCREEN'}), log id: 18fa2ef
> 2018-03-01 11:30:02,703+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default task-54) [617f130b] FINISH, SetVmTicketVDSCommand, log id: 18fa2ef
> 2018-03-01 11:30:02,713+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-54) [617f130b] EVENT_ID: VM_SET_TICKET(164), User c.mammoli at apra.it @apra.it initiated console session for VM testvdi02
> 2018-03-01 11:30:11,558+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-49) [] EVENT_ID: VM_CONSOLE_CONNECTED(167), User c.mammoli at apra.it is connected to VM testvdi02.
>
> Any help would be appreciated

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

From shuriku at shurik.kiev.ua Wed Mar 7 12:42:17 2018
From: shuriku at shurik.kiev.ua (Alexandr Krivulya)
Date: Wed, 7 Mar 2018 14:42:17 +0200
Subject: [ovirt-users] Power off VM from VM portal
In-Reply-To: <7fb78f56-cc23-8369-0e54-586f42408a91@ecarnot.net>
Message-ID: <1931fde6-2347-eef9-e6b6-244ddb80517e@shurik.kiev.ua>

On 06.03.2018 17:39, Nicolas Ecarnot wrote:
> On 06/03/2018 at 16:02, Alexandr Krivulya wrote:
>> Hi,
>>
>> is there any way to power off VM from VM portal (4.2.1.7)? I can't
>> find "power off" button, just "shutdown".
>
> Hello Alexandr,
>
> After having clicked on the VM link, you'll notice that on the right
> of the Shutdown button is an arrow allowing you to access the Power
> Off feature.

I can't find this arrow on the Shutdown button.

From shuriku at shurik.kiev.ua Wed Mar 7 12:46:11 2018
From: shuriku at shurik.kiev.ua (Alexandr Krivulya)
Date: Wed, 7 Mar 2018 14:46:11 +0200
Subject: [ovirt-users] Power off VM from VM portal
Message-ID: <59035c6c-6c09-59f2-b36f-f4779ff228c0@shurik.kiev.ua>

On 06.03.2018 19:22, Derek Atkins wrote:
> Alexandr Krivulya writes:
>
>> is there any way to power off VM from VM portal (4.2.1.7)? I can't
>> find "power off" button, just "shutdown".
> I don't know about 4.2, but in 4.1 and 4.0 there is a right-click
> context menu that gives you access to the Power Off feature. If that
> doesn't work (ISTR discussion about removing that context menu), then
> there must be a different way to access it now.

In 4.2 the User portal was replaced with the new VM portal. There is no right-click context menu on the VM list :( From the admin portal I can power off the VM.

From fernando.frediani at upx.com Wed Mar 7 13:03:55 2018
From: fernando.frediani at upx.com (FERNANDO FREDIANI)
Date: Wed, 7 Mar 2018 10:03:55 -0300
Subject: [ovirt-users] Very Slow Console Performance - Windows 10
Message-ID: <60915132-7486-d25c-4e20-11ab0a4aa8d9@upx.com>

Hello Gianluca

Resurrecting this topic. I made the changes as per your instructions below on the Engine configuration, but they had no effect on the VM graphics memory. Is it necessary to restart the Engine after adding the 20-overload.properties file?

Also, I don't think it is necessary to make any changes on the hosts, right?

Has anything changed in the recent updates in terms of how the video memory assigned to any given VM is changed? I guess it is something that has been forgotten over time, especially if you are running a VDI-like environment, which depends very much on the video memory.

Let me know.
Thanks

Fernando Frediani

On 24/11/2017 20:45, Gianluca Cecchi wrote:
> On Fri, Nov 24, 2017 at 5:50 PM, FERNANDO FREDIANI wrote:
>
> I have made an Export of the same VM created in oVirt to a server
> running pure qemu/KVM, which creates new VM profiles with vram
> 65536, and it turned on the Windows 10, which runs perfectly with
> that configuration.
>
> Was reading some documentation that it may be possible to change
> the file /usr/share/ovirt-engine/conf/osinfo-defaults.properties
> in order to change it for the profile you want, but I am not sure
> how these changes should be made: directly in that file, or in
> another one just with custom configs, and also how to apply them
> immediately to any new or existing VM?
>
> Anyone can give a hint about the correct procedure to apply this
> change?
>
> Thanks in advance.
> Fernando
>
> Hi Fernando,
> based on this:
> https://www.ovirt.org/develop/release-management/features/virt/os-info/
>
> you should create a file of the kind
> /etc/ovirt-engine/osinfo.conf.d/20-overload.properties
> but I think you can only overwrite the multiplier, and not directly the
> vgamem (or vgamem_mb in RHEL 7) values,
>
> so you could put something like this inside it:
>
> os.windows_10.devices.display.vramMultiplier.value = 2
> os.windows_10x64.devices.display.vramMultiplier.value = 2
>
> I think there are no values for vgamem_mb.
>
> I found these two threads from 2016:
> http://lists.ovirt.org/pipermail/users/2016-June/073692.html
> which confirms you cannot set vgamem,
> and
> http://lists.ovirt.org/pipermail/users/2016-June/073786.html
> which suggests creating a hook.
>
> Just a hack that came to mind:
> in a CentOS VM of mine in a 4.1.5 environment I see that by default I get this qemu command line:
>
> -device qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2
>
> Based on this:
> https://www.ovirt.org/documentation/draft/video-ram/
>
> you have
> vgamem = 16 MB * number_of_heads
>
> I verified that if I edit the VM in the GUI and set Monitors=4 in the
> console section (but with the aim of using only the first head) and
> then power the VM off and on, I now get:
>
> -device qxl-vga,id=video0,ram_size=268435456,vram_size=134217728,vram64_size_mb=0,vgamem_mb=64,bus=pci.0,addr=0x2
>
> I don't have a client to connect and verify any improvement: I don't
> know if you will be able to use all the new RAM in only the first head
> with a better experience, or if it is partitioned in some way...
> Could you try eventually?
>
> Gianluca

From hariprasanth.l at msystechnologies.com Wed Mar 7 13:20:15 2018
From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan)
Date: Wed, 7 Mar 2018 18:50:15 +0530
Subject: [ovirt-users] Tunable parameters in ovirt engine

Hi Team,

*Description of problem:*

I am trying to achieve 1000 concurrent requests to oVirt. What are the tunable parameters to achieve this?

I tried to perform the benchmarking for the oVirt engine with Apache Bench, using the same SSO token:

ab -n 1000 -c 500 -k -H "accept: application/json" -H "Authorization: Bearer SSOTOKEN" https://172.30.56.70/ovirt-engine/

When the number of concurrent requests is 500, we are getting more than 100 failures with the following error:

SSL read failed (1) - closing connection
139620982339352:error:

NOTE: It is scaling for concurrent requests below 500.

I used the profiler to get the memory and CPU, and they seem very low:

PID   USER  PR NI VIRT    RES    SHR  S %CPU  %MEM TIME+    COMMAND
30413 ovirt 20 0  4226664 882396 6776 S 126.0 23.0 27:48.53 java

Configuration of the machine on which oVirt is deployed:

RAM - 4GB,
Hard disk - 100GB,
core processor - 2,
OS - CentOS 7.x.

Of which 2GB is allocated to oVirt.

Version-Release number of selected component (if applicable):

4.2.2

How reproducible:

If the number of concurrent requests is above 500, we easily face this issue.

*Actual results:*

SSL read failed (1) - closing connection
139620982339352:error:

*Expected results:*

Request success.

Thanks,
Hari
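Before tuning anything, it's worth confirming which multi-processing module the engine's Apache is actually running, since the limits discussed in the reply below differ between prefork and worker. A sketch for the engine host, assuming CentOS 7 paths:

    # show the loaded MPM
    httpd -V | grep -i -A1 mpm
    # on CentOS 7 the MPM is selected here
    cat /etc/httpd/conf.modules.d/00-mpm.conf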
From c.mammoli at apra.it  Wed Mar  7 13:23:20 2018
From: c.mammoli at apra.it (Cristian Mammoli)
Date: Wed, 7 Mar 2018 14:23:20 +0100
Subject: [ovirt-users] Troubleshooting VM SSO on Windows 10 (ovirt 4.2.1)
In-Reply-To: <9D9FE387-878B-4690-893E-AE5097FB64A2@redhat.com>
References: <9D9FE387-878B-4690-893E-AE5097FB64A2@redhat.com>
Message-ID: <0b56cf9f-5462-9eed-1cf5-5bc2032bdf7f@apra.it>

It's LDAP-based with Active Directory, and of course I log in to the user
portal with the correct credentials.

> What authentication to the portal are you using?
> SSO only works if you provide user and password in oVirt's login screen

--
*Cristian Mammoli*
System Administrator
T. +39 0731 719822
www.apra.it

From jhernand at redhat.com  Wed Mar  7 13:35:53 2018
From: jhernand at redhat.com (Juan Hernández)
Date: Wed, 7 Mar 2018 14:35:53 +0100
Subject: [ovirt-users] Tunable parameters in ovirt engine
In-Reply-To:
References:
Message-ID: <786be990-4ea0-613b-d81b-639f70a11398@redhat.com>

The first thing you will need to change for such a test is the number of
simultaneous connections accepted by the Apache web server: by default the
max is 256. See the Apache documentation here:

https://httpd.apache.org/docs/2.4/mod/mpm_common.html#maxrequestworkers

In addition I also suggest that you consider using the "worker"
multi-processing module instead of the "prefork", as it usually works
better when talking to a Java application server, because it re-uses
connections better.

On 03/07/2018 02:20 PM, Hari Prasanth Loganathan wrote:
> [...]
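A sketch of the two changes Juan suggests, assuming the stock httpd 2.4
layout on the CentOS 7 machine hosting the engine (the 00-mpm.conf path,
the prefork default, and the zz-limits.conf file name are assumptions and
placeholders; adjust to your distribution):

# Switch the MPM in /etc/httpd/conf.modules.d/00-mpm.conf:
#   comment out:  LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
#   uncomment:    LoadModule mpm_worker_module modules/mod_mpm_worker.so
# Then raise the connection ceiling; with the worker MPM,
# MaxRequestWorkers must be <= ServerLimit * ThreadsPerChild:
# cat > /etc/httpd/conf.d/zz-limits.conf <<'EOF'
ServerLimit        40
ThreadsPerChild    25
MaxRequestWorkers  1000
EOF
# apachectl configtest && systemctl restart httpd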
From nicolas at ecarnot.net  Wed Mar  7 13:36:34 2018
From: nicolas at ecarnot.net (Nicolas Ecarnot)
Date: Wed, 7 Mar 2018 14:36:34 +0100
Subject: [ovirt-users] Power off VM from VM portal
In-Reply-To: <1931fde6-2347-eef9-e6b6-244ddb80517e@shurik.kiev.ua>
References: <2826607c-14bc-695d-26d6-b20d12f9b755@shurik.kiev.ua>
 <7fb78f56-cc23-8369-0e54-586f42408a91@ecarnot.net>
 <1931fde6-2347-eef9-e6b6-244ddb80517e@shurik.kiev.ua>
Message-ID:

On 07/03/2018 13:42, Alexandr Krivulya wrote:
>
> On 06.03.2018 17:39, Nicolas Ecarnot wrote:
>> On 06/03/2018 16:02, Alexandr Krivulya wrote:
>>> Hi,
>>>
>>> is there any way to power off a VM from the VM portal (4.2.1.7)? I
>>> can't find a "power off" button, just "shutdown".
>>
>> Hello Alexandr,
>>
>> After having clicked on the VM link, you'll notice that on the right
>> of the Shutdown button is an arrow allowing you to access the Power
>> Off feature.
>
> I can't find this arrow on the Shutdown button

Oh sorry, I answered in the context of the admin portal. Indeed, in the
VM portal I don't see this power off button either.

--
Nicolas ECARNOT

From hariprasanth.l at msystechnologies.com  Wed Mar  7 13:43:50 2018
From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan)
Date: Wed, 7 Mar 2018 19:13:50 +0530
Subject: [ovirt-users] Tunable parameters in ovirt engine
In-Reply-To: <786be990-4ea0-613b-d81b-639f70a11398@redhat.com>
References: <786be990-4ea0-613b-d81b-639f70a11398@redhat.com>
Message-ID:

Hi Juan,

Thanks for the response.

I agree a web server can handle only a limited number of concurrent
requests. But why does it fail with an SSL handshake failure for some
requests? Can't JBoss queue and serve those requests? We can tolerate the
delay, but not failed requests. So is there a configuration in oVirt that
can be tuned to achieve this?

Thanks,
Hari
On Wed, Mar 7, 2018 at 7:05 PM, Juan Hernández wrote:
> [...]
From jhernand at redhat.com  Wed Mar  7 13:55:12 2018
From: jhernand at redhat.com (Juan Hernández)
Date: Wed, 7 Mar 2018 14:55:12 +0100
Subject: [ovirt-users] Tunable parameters in ovirt engine
In-Reply-To:
References: <786be990-4ea0-613b-d81b-639f70a11398@redhat.com>
Message-ID: <109ffb44-89ac-f190-29a4-e937157cce81@redhat.com>

With the default configuration of the web server it is impossible to
handle more than 256 *connections* simultaneously. I guess that "ab" is
opening a connection for each concurrent request, so when you reach
request 257 the web server will just reject the connection; there is
nothing that JBoss can do about it. You have to increase the number of
connections supported by the web server.

Or else you may want to re-consider why you want to use 1000 simultaneous
connections. It may be OK for a performance test, but there are better
ways to squeeze performance. For example, you could consider using HTTP
pipelining, which is much more friendly for the server than so many
connections. This is what we use when we need to send a large number of
requests from other systems. There are examples of how to do that with the
Python and Ruby SDKs here:

Python:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/asynchronous_inventory.py

Ruby:
https://github.com/oVirt/ovirt-engine-sdk-ruby/blob/master/sdk/examples/asynchronous_inventory.rb

On 03/07/2018 02:43 PM, Hari Prasanth Loganathan wrote:
> [...]
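True pipelining, as in the SDK examples above, needs client-side support,
but the cheaper part of the win, re-using one TLS connection for many
requests instead of opening hundreds, can be sketched with plain curl.
The engine host name is a placeholder and TOKEN is the same SSO token
used with ab; giving curl several URLs in a single invocation makes it
keep the connection open between them:

# TOKEN=...    # SSO token, as in the ab command earlier in the thread
# curl -k -s -o /dev/null -w '%{http_code}\n' \
    -H "Authorization: Bearer $TOKEN" \
    https://engine.example.com/ovirt-engine/api/vms \
    https://engine.example.com/ovirt-engine/api/hosts \
    https://engine.example.com/ovirt-engine/api/clusters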
From jsiml at plusline.net  Wed Mar  7 14:11:06 2018
From: jsiml at plusline.net (Jan Siml)
Date: Wed, 7 Mar 2018 15:11:06 +0100
Subject: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE
Message-ID: <62f32a4e-c6eb-5363-42df-b09a2fe33d00@plusline.net>

Hello,

we have upgraded one of our oVirt engines to 4.2.1 (from 4.1.9) and
afterwards all nodes too. The cluster compatibility level has been set
to 4.2.

Now we can't start a VM after it has been powered off. The only hint we
found in engine.log is:

2018-03-07 14:51:52,504+01 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic at 491983e9'}), log id: 7d49849e
2018-03-07 14:51:52,509+01 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, UpdateVmDynamicDataVDSCommand, log id: 7d49849e
2018-03-07 14:51:52,531+01 INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'}), log id: 4af1f227
2018-03-07 14:51:52,533+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] START, CreateBrokerVDSCommand(HostName = prod-node-210, CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'}), log id: 71dcc8e7
2018-03-07 14:51:52,545+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Failed in 'CreateBrokerVDS' method, for vds: 'prod-node-210'; host: 'prod-node-210': null
2018-03-07 14:51:52,546+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Command 'CreateBrokerVDSCommand(HostName = prod-node-210, CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM [prod-hub-201]'})' execution failed: null
2018-03-07 14:51:52,546+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, CreateBrokerVDSCommand, log id: 71dcc8e7
2018-03-07 14:51:52,546+01 ERROR [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-25) [f855b54a-56d9-4708-8a67-5609438ddadb] Failed to create VM: java.lang.NullPointerException
    at org.ovirt.engine.core.vdsbroker.builder.vminfo.LibvirtVmXmlBuilder.lambda$writeInterfaces$23(LibvirtVmXmlBuilder.java:1066) [vdsbroker.jar:]

[...]

But this doesn't lead us to the root cause. I haven't found any matching
bug tickets in the release notes for the upcoming 4.2.1. Can anyone help
here?

Kind regards

Jan Siml

From ykaul at redhat.com  Wed Mar  7 14:47:47 2018
From: ykaul at redhat.com (Yaniv Kaul)
Date: Wed, 7 Mar 2018 16:47:47 +0200
Subject: [ovirt-users] New oVirt blog - Your Container Volumes Served By oVirt
Message-ID:

When running a virtualization workload on oVirt, a VM disk is 'natively'
a disk somewhere on your network storage. In the container world, on
Kubernetes (k8s) or OpenShift, there are many options, specifically
because the workload can be totally stateless, i.e. stored on a
host-supplied disk that can be removed when the container terminates.
The more interesting case is *stateful workloads*, i.e. apps that persist
data (think DBs, web servers/services, etc). k8s/OpenShift designed an
API to dynamically provision the container storage (a volume, in k8s
terminology). In this post I want to cover how oVirt can provide volumes
for containers running on a k8s/OpenShift cluster.

Read more @ https://ovirt.org/blog/2018/02/your-container-volumes-served-by-ovirt/

From ahadas at redhat.com  Wed Mar  7 15:20:21 2018
From: ahadas at redhat.com (Arik Hadas)
Date: Wed, 7 Mar 2018 17:20:21 +0200
Subject: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE
In-Reply-To: <62f32a4e-c6eb-5363-42df-b09a2fe33d00@plusline.net>
References: <62f32a4e-c6eb-5363-42df-b09a2fe33d00@plusline.net>
Message-ID:

On Wed, Mar 7, 2018 at 4:11 PM, Jan Siml wrote:
> [...]
What's the mac address of that VM? You can find it in the UI or with:

select mac_addr from vm_interface where vm_guid in (select vm_guid from
vm_static where vm_name='<vm name>');
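For reference, a sketch of running that query directly on the engine host,
assuming the default local PostgreSQL setup where the database is named
engine (the 4.2 default; adjust if your install differs), and using
prod-hub-201 from the log above as the VM name:

# su - postgres -c 'psql engine'
engine=# select mac_addr from vm_interface where vm_guid in
         (select vm_guid from vm_static where vm_name='prod-hub-201');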
From ahadas at redhat.com  Wed Mar  7 15:22:18 2018
From: ahadas at redhat.com (Arik Hadas)
Date: Wed, 7 Mar 2018 17:22:18 +0200
Subject: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE
In-Reply-To:
References: <62f32a4e-c6eb-5363-42df-b09a2fe33d00@plusline.net>
Message-ID:

On Wed, Mar 7, 2018 at 5:20 PM, Arik Hadas wrote:
> What's the mac address of that VM?
> [...]
Actually, a different question: does this VM have an unplugged network
interface?

From jsiml at plusline.net  Wed Mar  7 15:32:38 2018
From: jsiml at plusline.net (Jan Siml)
Date: Wed, 7 Mar 2018 16:32:38 +0100
Subject: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE
In-Reply-To:
References: <62f32a4e-c6eb-5363-42df-b09a2fe33d00@plusline.net>
Message-ID: <9b914e1c-5ded-d61a-18f5-d14f2914823f@plusline.net>

Hello Arik,

> [...]
The VM has two NICs. Both are plugged.

The MAC addresses are 00:1a:4a:18:01:52 for nic1 and 00:1a:4a:36:01:67
for nic2.

Regards

Jan

From hariprasanth.l at msystechnologies.com  Wed Mar  7 15:33:09 2018
From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan)
Date: Wed, 7 Mar 2018 21:03:09 +0530
Subject: [ovirt-users] Tunable parameters in ovirt engine
In-Reply-To: <109ffb44-89ac-f190-29a4-e937157cce81@redhat.com>
References: <786be990-4ea0-613b-d81b-639f70a11398@redhat.com>
 <109ffb44-89ac-f190-29a4-e937157cce81@redhat.com>
Message-ID:

> With the default configuration of the web server it is impossible to
> handle more than 256 *connections* simultaneously. [...]

So does it mean that oVirt cannot serve more than 256 concurrent
requests? My question is: if it is possible, how do we scale this, and
what configuration do we need to change?

Also, we are benchmarking oVirt, so I need to find the maximum number of
requests it can handle. Please let me know which configuration to tune in
oVirt to achieve the maximum number of concurrent requests.

Thanks,
Hari

On Wed, Mar 7, 2018 at 7:25 PM, Juan Hernández wrote:
> [...]
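A rough way to see the ceiling being discussed here is to count
established connections on the engine host while the ab run is in
progress, assuming the engine terminates TLS on the standard HTTPS port
(the count includes a header line and any unrelated clients, so treat it
as approximate):

# ss -tn state established '( sport = :https )' | wc -l

If this plateaus around 256 while ab reports failures, the limit being
hit is the web server's connection cap rather than JBoss or the engine
itself.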
From ahadas at redhat.com  Wed Mar  7 15:49:05 2018
From: ahadas at redhat.com (Arik Hadas)
Date: Wed, 7 Mar 2018 17:49:05 +0200
Subject: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE
In-Reply-To: <9b914e1c-5ded-d61a-18f5-d14f2914823f@plusline.net>
References: <62f32a4e-c6eb-5363-42df-b09a2fe33d00@plusline.net>
 <9b914e1c-5ded-d61a-18f5-d14f2914823f@plusline.net>
Message-ID:

On Wed, Mar 7, 2018 at 5:32 PM, Jan Siml wrote:
> [...]
OK, those seem like two valid mac addresses, so maybe something is wrong
with the vm devices. Could you please provide the output of:

select type, device, address, is_managed, is_plugged, alias from
vm_device where vm_id in (select vm_guid from vm_static where
vm_name='<vm name>');

From jhernand at redhat.com  Wed Mar  7 15:50:50 2018
From: jhernand at redhat.com (Juan Hernández)
Date: Wed, 7 Mar 2018 16:50:50 +0100
Subject: [ovirt-users] Tunable parameters in ovirt engine
In-Reply-To:
References: <786be990-4ea0-613b-d81b-639f70a11398@redhat.com>
 <109ffb44-89ac-f190-29a4-e937157cce81@redhat.com>
Message-ID: <6cb6fbc2-51e3-9c89-23dc-e36a7aefd8b3@redhat.com>

It means that with the default configuration the Apache web server can't
serve more than 256 concurrent connections. This applies to any
application that uses Apache as the web frontend, not just to oVirt. If
you want to change that you have to change the MaxRequestWorkers and
ServerLimit parameters, as explained here:

https://httpd.apache.org/docs/2.4/mod/mpm_common.html#maxrequestworkers
For example, you could consider using HTTP >> pipelining, which is much more friendly for the server than so many >> connections. This is what we use when we need to send a large number of >> requests from other systems. There are examples of how to do that with the >> Python and Ruby SDKs here: >> >> Python: >> >> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/ >> examples/asynchronous_inventory.py >> >> Ruby: >> >> https://github.com/oVirt/ovirt-engine-sdk-ruby/blob/master/ >> sdk/examples/asynchronous_inventory.rb >> >> >> On 03/07/2018 02:43 PM, Hari Prasanth Loganathan wrote: >> >>> Hi Juan, >>> >>> Thanks for the response. >>> >>> I agree web server can handle only limited number of concurrent requests. >>> But Why it is failing with SSL handshake failure for few requests, Can't >>> the JBOSS wait and serve the request? We can spare the delay but not with >>> the request fails. So Is there a configuration in oVirt which can be tuned >>> to achieve this? >>> >>> Thanks, >>> Hari >>> >>> On Wed, Mar 7, 2018 at 7:05 PM, Juan Hern?ndez >>> wrote: >>> >>> The first thing you will need to change for such a test is the number of >>>> simultaneous connections accepted by the Apache web server: by default >>>> the >>>> max is 256. See the Apache documentation here: >>>> >>>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>>> axrequestworkers >>>> >>>> In addition I also suggest that you consider using the "worker" >>>> multi-processing module instead of the "prefork", as it usually works >>>> better when talking to a Java application server, because it re-uses >>>> connections better. >>>> >>>> On 03/07/2018 02:20 PM, Hari Prasanth Loganathan wrote: >>>> >>>> Hi Team, >>>>> >>>>> *Description of problem:* >>>>> >>>>> I am trying to achieve 1000 concurrent request to oVirt. What are the >>>>> tunable parameters to achieve this? >>>>> >>>>> I tried to perform the benchmarking for ovirt engine using Apache >>>>> benchmark >>>>> using the same SSO token. >>>>> >>>>> ab -n 1000 -c 500 -k -H "accept: application/json" -H "Authorization: >>>>> Bearer SSOTOKEN" https://172.30.56.70/ovirt-engine/ >>>>> >>>> b-9ff1-076fc07ebf50/statistics> >>>>> >>>>> When the number of concurrent request is 500, we are getting more than >>>>> 100 >>>>> failures with the following error, >>>>> >>>>> SSL read failed (1) - closing connection >>>>> 139620982339352:error: >>>>> >>>>> NOTE: It is scaling for concurrent request below 500. >>>>> >>>>> I used the profiler to get the memory and CPU and it seems very less, >>>>> >>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ >>>>> COMMAND >>>>> 30413 ovirt 20 0 4226664 882396 6776 S 126.0 23.0 27:48.53 java >>>>> >>>>> Configuration of the machine in which Ovirt is deployed : >>>>> >>>>> RAM - 4GB, >>>>> Hard disk - 100GB, >>>>> core processor - 2, >>>>> OS - Cent7.x. >>>>> >>>>> In which 2GB is allocated to oVirt. >>>>> >>>>> >>>>> Version-Release number of selected component (if applicable): >>>>> >>>>> 4.2.2 >>>>> >>>>> >>>>> How reproducible: >>>>> >>>>> If the number of concurrent requests are above 500, we are easily facing >>>>> this issue. >>>>> >>>>> >>>>> *Actual results:* >>>>> >>>>> SSL read failed (1) - closing connection >>>>> 139620982339352:error: >>>>> >>>>> *Expected results:* >>>>> >>>>> Request success. 
>>>>> >>>>> >>>>> Thanks, >>>>> Hari >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> >>>>> >>>>> >>>> >>> >> > From jsiml at plusline.net Wed Mar 7 15:57:15 2018 From: jsiml at plusline.net (Jan Siml) Date: Wed, 7 Mar 2018 16:57:15 +0100 Subject: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE In-Reply-To: References: <62f32a4e-c6eb-5363-42df-b09a2fe33d00@plusline.net> <9b914e1c-5ded-d61a-18f5-d14f2914823f@plusline.net> Message-ID: Hello Arik, > ? ? ? ? we have upgrade one of our oVirt engines to 4.2.1 (from > 4.1.9) > ? ? ? ? and afterwards all nodes too. The cluster compatibility > level > ? ? ? ? has been set to 4.2. > > ? ? ? ? Now we can't start a VM after it has been powered off. > The only > ? ? ? ? hint we found in engine.log is: > > ? ? ? ? 2018-03-07 14:51:52,504+01 INFO > > [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] > ? ? ? ? (EE-ManagedThreadFactory-engine-Thread-25) > ? ? ? ? [f855b54a-56d9-4708-8a67-5609438ddadb] START, > ? ? ? ? UpdateVmDynamicDataVDSCommand( > ? ? ? ? UpdateVmDynamicDataVDSCommandParameters:{hostId='null', > ? ? ? ? vmId='a7bc4124-06cb-4909-9389-bcf727df1304', > ? ? ? ? vmDynamic='org.ovirt.engine.co > > re.common.businessentities.VmDynamic at 491983e9'}), > > ? ? ? ? log id: 7d49849e > ? ? ? ? 2018-03-07 14:51:52,509+01 INFO > > [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] > ? ? ? ? (EE-ManagedThreadFactory-engine-Thread-25) > ? ? ? ? [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, > ? ? ? ? UpdateVmDynamicDataVDSCommand, log id: 7d49849e > ? ? ? ? 2018-03-07 14:51:52,531+01 INFO > ? ? ? ? [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] > ? ? ? ? (EE-ManagedThreadFactory-engine-Thread-25) > ? ? ? ? [f855b54a-56d9-4708-8a67-5609438ddadb] START, > CreateVDSCommand( > > CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', > ? ? ? ? vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM > ? ? ? ? [prod-hub-201]'}), log id: 4af1f227 > ? ? ? ? 2018-03-07 14:51:52,533+01 INFO > > [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] > ? ? ? ? (EE-ManagedThreadFactory-engine-Thread-25) > ? ? ? ? [f855b54a-56d9-4708-8a67-5609438ddadb] START, > ? ? ? ? CreateBrokerVDSCommand(HostName = prod-node-210, > > CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', > ? ? ? ? vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM > ? ? ? ? [prod-hub-201]'}), log id: 71dcc8e7 > ? ? ? ? 2018-03-07 14:51:52,545+01 ERROR > > [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] > ? ? ? ? (EE-ManagedThreadFactory-engine-Thread-25) > ? ? ? ? [f855b54a-56d9-4708-8a67-5609438ddadb] Failed in > ? ? ? ? 'CreateBrokerVDS' method, for vds: 'prod-node-210'; host: > ? ? ? ? 'prod-node-210': null > ? ? ? ? 2018-03-07 14:51:52,546+01 ERROR > > [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] > ? ? ? ? (EE-ManagedThreadFactory-engine-Thread-25) > ? ? ? ? [f855b54a-56d9-4708-8a67-5609438ddadb] Command > ? ? ? ? 'CreateBrokerVDSCommand(HostName = prod-node-210, > > CreateVDSCommandParameters:{hostId='0add031e-c72f-473f-ab2f-4f7abd1f402b', > ? ? ? ? vmId='a7bc4124-06cb-4909-9389-bcf727df1304', vm='VM > ? ? ? ? [prod-hub-201]'})' execution failed: null > ? ? ? ? 2018-03-07 14:51:52,546+01 INFO > > [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] > ? ? ? ? (EE-ManagedThreadFactory-engine-Thread-25) > ? ? 
? ? [f855b54a-56d9-4708-8a67-5609438ddadb] FINISH, > ? ? ? ? CreateBrokerVDSCommand, log id: 71dcc8e7 > ? ? ? ? 2018-03-07 14:51:52,546+01 ERROR > ? ? ? ? [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] > ? ? ? ? (EE-ManagedThreadFactory-engine-Thread-25) [f855b5 > ? ? ? ? 4a-56d9-4708-8a67-5609438ddadb] Failed to create VM: > ? ? ? ? java.lang.NullPointerException > ? ? ? ? at > > org.ovirt.engine.core.vdsbroker.builder.vminfo.LibvirtVmXmlBuilder.lambda$writeInterfaces$23(LibvirtVmXmlBuilder.java:1066) > ? ? ? ? ??[vdsbroker.jar:] > > ? ? ? ? [...] > > ? ? ? ? But this doesn't lead us to the root cause. I haven't > found any > ? ? ? ? matching bug tickets in release notes for upcoming > 4.2.1. Can > ? ? ? ? anyone help here? > > > ? ? What's the mac address of that VM? > ? ? You can find it in the UI or with: > > ? ? select mac_addr from vm_interface where vm_guid in (select > vm_guid > ? ? from vm_static where vm_name=''); > > > Actually, different question - does this VM has unplugged > network interface? > > > The VM has two NICs. Both are plugged. > > The MAC addresses are 00:1a:4a:18:01:52 for nic1 and > 00:1a:4a:36:01:67 for nic2. > > > OK, those seem like two valid mac addresses so maybe something is wrong > with the vm devices. > Could you please provide the output of: > > select type, device, address, is_managed, is_plugged, alias from > vm_device where vm_id in (select vm_guid from vm_static where > vm_name=''); sure: engine=# select type, device, address, is_managed, is_plugged, alias from vm_device where vm_id in (select vm_guid from vm_static where vm_name='prod-hub-201'); type | device | address | is_managed | is_plugged | alias ------------+---------------+--------------------------------------------------- -----------+------------+------------+---------------- video | qxl | | t | t | controller | virtio-scsi | | t | t | balloon | memballoon | | t | f | balloon0 graphics | spice | | t | t | controller | virtio-serial | {slot=0x06, bus=0x00, domain=0x0000, type=pci, fun ction=0x0} | t | t | virtio-serial0 disk | disk | {slot=0x07, bus=0x00, domain=0x0000, type=pci, fun ction=0x0} | f | t | virtio-disk0 memballoon | memballoon | {slot=0x08, bus=0x00, domain=0x0000, type=pci, fun ction=0x0} | f | t | balloon0 interface | bridge | {slot=0x03, bus=0x00, domain=0x0000, type=pci, fun ction=0x0} | f | t | net0 interface | bridge | {slot=0x09, bus=0x00, domain=0x0000, type=pci, fun ction=0x0} | f | t | net1 controller | scsi | {slot=0x05, bus=0x00, domain=0x0000, type=pci, fun ction=0x0} | f | t | scsi0 controller | ide | {slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1} | f | t | ide controller | usb | {slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x2} | t | t | usb channel | unix | {bus=0, controller=0, type=virtio-serial, port=1} | f | t | channel0 channel | unix | {bus=0, controller=0, type=virtio-serial, port=2} | f | t | channel1 channel | spicevmc | {bus=0, controller=0, type=virtio-serial, port=3} | f | t | channel2 interface | bridge | | t | t | net1 interface | bridge | | t | t | net0 disk | cdrom | | t | f | ide0-1-0 disk | cdrom | {bus=1, controller=0, type=drive, target=0, unit=0} | f | t | ide0-1-0 disk | disk | | t | t | virtio-disk0 (20 rows) Kind regards Jan From oliver.riesener at hs-bremen.de Wed Mar 7 16:02:37 2018 From: oliver.riesener at hs-bremen.de (Oliver Riesener) Date: Wed, 7 Mar 2018 17:02:37 +0100 Subject: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE In-Reply-To: References: 
From oliver.riesener at hs-bremen.de  Wed Mar  7 16:02:37 2018
From: oliver.riesener at hs-bremen.de (Oliver Riesener)
Date: Wed, 7 Mar 2018 17:02:37 +0100
Subject: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE
In-Reply-To:
References: <62f32a4e-c6eb-5363-42df-b09a2fe33d00@plusline.net>
 <9b914e1c-5ded-d61a-18f5-d14f2914823f@plusline.net>
Message-ID: <9C3E31E5-1714-451F-B750-FDCA14224E3B@hs-bremen.de>

Hi,

Enable network and disks on your VM, then do:

Run -> Once, OK. Ignore errors, OK.
Run.

Cheers

Olri

> On 07.03.2018 at 16:49, Arik Hadas wrote:
> [...]
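For anyone who wants to script Oliver's workaround, a hedged sketch of
the REST call behind the plain Run button (the engine host and the
credentials are placeholders, the VM id is the one from Jan's log; a Run
Once start differs only in passing override elements inside the action
body):

# curl -k -u 'admin@internal:PASSWORD' \
    -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
    -d '<action/>' \
    https://engine.example.com/ovirt-engine/api/vms/a7bc4124-06cb-4909-9389-bcf727df1304/start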
[vdsbroker.jar:] >>> >>> [...] >>> >>> But this doesn't lead us to the root cause. I haven't found any >>> matching bug tickets in release notes for upcoming 4.2.1. Can >>> anyone help here? >>> >>> >>> What's the mac address of that VM? >>> You can find it in the UI or with: >>> >>> select mac_addr from vm_interface where vm_guid in (select vm_guid >>> from vm_static where vm_name=''); >>> >>> >>> Actually, different question - does this VM has unplugged network interface? >> >> The VM has two NICs. Both are plugged. >> >> The MAC addresses are 00:1a:4a:18:01:52 for nic1 and 00:1a:4a:36:01:67 for nic2. > > OK, those seem like two valid mac addresses so maybe something is wrong with the vm devices. > Could you please provide the output of: > select type, device, address, is_managed, is_plugged, alias from vm_device where vm_id in (select vm_guid from vm_static where vm_name=''); > >> >> Regards >> >> Jan > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsiml at plusline.net Wed Mar 7 16:06:49 2018 From: jsiml at plusline.net (Jan Siml) Date: Wed, 7 Mar 2018 17:06:49 +0100 Subject: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE In-Reply-To: <9C3E31E5-1714-451F-B750-FDCA14224E3B@hs-bremen.de> References: <62f32a4e-c6eb-5363-42df-b09a2fe33d00@plusline.net> <9b914e1c-5ded-d61a-18f5-d14f2914823f@plusline.net> <9C3E31E5-1714-451F-B750-FDCA14224E3B@hs-bremen.de> Message-ID: Hello Oliver, > Enable network and disks on your VM than do: > Run -> ONCE Ok Ignore errors. Ok > Run > Cheeers WTF! That worked. Did you know, why this works and what happens in the background? Is there a Bugzilla bug ID for this issue? Kind regards Jan From jsiml at plusline.net Wed Mar 7 16:15:01 2018 From: jsiml at plusline.net (Jan Siml) Date: Wed, 7 Mar 2018 17:15:01 +0100 Subject: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE In-Reply-To: References: <62f32a4e-c6eb-5363-42df-b09a2fe33d00@plusline.net> <9b914e1c-5ded-d61a-18f5-d14f2914823f@plusline.net> <9C3E31E5-1714-451F-B750-FDCA14224E3B@hs-bremen.de> Message-ID: <748cb236-647e-b508-f320-bf81a7a24bce@plusline.net> Hello, >> Enable network and disks on your VM than do: >> Run -> ONCE Ok Ignore errors. Ok >> Run >> Cheeers > > WTF! That worked. > > Did you know, why this works and what happens in the background? Is > there a Bugzilla bug ID for this issue? 
BTW, here is the list of devices before: engine=# select type, device, address, is_managed, is_plugged, alias from vm_device where vm_id in (select vm_guid from vm_static where vm_name='prod-hub-201'); type | device | address | is_managed | is_plugged | alias ------------+---------------+--------------------------------------------------------------+------------+------------+---------------- video | qxl | | t | t | controller | virtio-scsi | | t | t | balloon | memballoon | | t | f | balloon0 graphics | spice | | t | t | controller | virtio-serial | {slot=0x06, bus=0x00, domain=0x0000, type=pci, function=0x0} | t | t | virtio-serial0 disk | disk | {slot=0x07, bus=0x00, domain=0x0000, type=pci, function=0x0} | f | t | virtio-disk0 memballoon | memballoon | {slot=0x08, bus=0x00, domain=0x0000, type=pci, function=0x0} | f | t | balloon0 interface | bridge | {slot=0x03, bus=0x00, domain=0x0000, type=pci, function=0x0} | f | t | net0 interface | bridge | {slot=0x09, bus=0x00, domain=0x0000, type=pci, function=0x0} | f | t | net1 controller | scsi | {slot=0x05, bus=0x00, domain=0x0000, type=pci, function=0x0} | f | t | scsi0 controller | ide | {slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1} | f | t | ide controller | usb | {slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x2} | t | t | usb channel | unix | {bus=0, controller=0, type=virtio-serial, port=1} | f | t | channel0 channel | unix | {bus=0, controller=0, type=virtio-serial, port=2} | f | t | channel1 channel | spicevmc | {bus=0, controller=0, type=virtio-serial, port=3} | f | t | channel2 interface | bridge | | t | t | net1 interface | bridge | | t | t | net0 disk | cdrom | | t | f | ide0-1-0 disk | cdrom | {bus=1, controller=0, type=drive, target=0, unit=0} | f | t | ide0-1-0 disk | disk | | t | t | virtio-disk0 (20 rows) and afterwards: engine=# select type, device, address, is_managed, is_plugged, alias from vm_device where vm_id in (select vm_guid from vm_static where vm_name='prod-hub-201'); type | device | address | is_managed | is_plugged | alias ------------+---------------+--------------------------------------------------------------+------------+------------+---------------- channel | spicevmc | {type=virtio-serial, bus=0, controller=0, port=3} | f | t | channel2 channel | unix | {type=virtio-serial, bus=0, controller=0, port=1} | f | t | channel0 interface | bridge | {type=pci, slot=0x04, bus=0x00, domain=0x0000, function=0x0} | t | t | net1 controller | usb | {type=pci, slot=0x01, bus=0x00, domain=0x0000, function=0x2} | t | t | usb controller | virtio-serial | {type=pci, slot=0x06, bus=0x00, domain=0x0000, function=0x0} | t | t | virtio-serial0 interface | bridge | {type=pci, slot=0x03, bus=0x00, domain=0x0000, function=0x0} | t | t | net0 controller | virtio-scsi | {type=pci, slot=0x05, bus=0x00, domain=0x0000, function=0x0} | t | t | scsi0 video | qxl | {type=pci, slot=0x02, bus=0x00, domain=0x0000, function=0x0} | t | t | video0 channel | unix | {type=virtio-serial, bus=0, controller=0, port=2} | f | t | channel1 balloon | memballoon | | t | t | balloon0 graphics | spice | | t | t | disk | cdrom | | t | f | ide0-1-0 disk | disk | {type=pci, slot=0x07, bus=0x00, domain=0x0000, function=0x0} | t | t | virtio-disk0 (13 rows) Regards Jan From ahadas at redhat.com Wed Mar 7 16:22:09 2018 From: ahadas at redhat.com (Arik Hadas) Date: Wed, 7 Mar 2018 18:22:09 +0200 Subject: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE In-Reply-To: <748cb236-647e-b508-f320-bf81a7a24bce@plusline.net> 
References: <62f32a4e-c6eb-5363-42df-b09a2fe33d00@plusline.net> <9b914e1c-5ded-d61a-18f5-d14f2914823f@plusline.net> <9C3E31E5-1714-451F-B750-FDCA14224E3B@hs-bremen.de> <748cb236-647e-b508-f320-bf81a7a24bce@plusline.net> Message-ID: On Wed, Mar 7, 2018 at 6:15 PM, Jan Siml wrote: > Hello, > > Enable network and disks on your VM than do: >>> Run -> ONCE Ok Ignore errors. Ok >>> Run >>> Cheeers >>> >> >> WTF! That worked. >> >> Did you know, why this works and what happens in the background? Is there >> a Bugzilla bug ID for this issue? >> > > BTW, here is the list of devices before: > > engine=# select type, device, address, is_managed, is_plugged, alias from > vm_device where vm_id in (select vm_guid from vm_static where > vm_name='prod-hub-201'); > type | device | address > | is_managed | is_plugged | alias > ------------+---------------+------------------------------- > -------------------------------+------------+------------+---------------- > video | qxl | | t | t > | > controller | virtio-scsi | | t | t > | > balloon | memballoon | | t | f > | balloon0 > graphics | spice | | t | t > | > controller | virtio-serial | {slot=0x06, bus=0x00, domain=0x0000, > type=pci, function=0x0} | t | t | virtio-serial0 > disk | disk | {slot=0x07, bus=0x00, domain=0x0000, > type=pci, function=0x0} | f | t | virtio-disk0 > memballoon | memballoon | {slot=0x08, bus=0x00, domain=0x0000, > type=pci, function=0x0} | f | t | balloon0 > interface | bridge | {slot=0x03, bus=0x00, domain=0x0000, > type=pci, function=0x0} | f | t | net0 > interface | bridge | {slot=0x09, bus=0x00, domain=0x0000, > type=pci, function=0x0} | f | t | net1 > controller | scsi | {slot=0x05, bus=0x00, domain=0x0000, > type=pci, function=0x0} | f | t | scsi0 > controller | ide | {slot=0x01, bus=0x00, domain=0x0000, > type=pci, function=0x1} | f | t | ide > controller | usb | {slot=0x01, bus=0x00, domain=0x0000, > type=pci, function=0x2} | t | t | usb > channel | unix | {bus=0, controller=0, type=virtio-serial, > port=1} | f | t | channel0 > channel | unix | {bus=0, controller=0, type=virtio-serial, > port=2} | f | t | channel1 > channel | spicevmc | {bus=0, controller=0, type=virtio-serial, > port=3} | f | t | channel2 > interface | bridge | | t | t > | net1 > interface | bridge | | t | t > | net0 > disk | cdrom | | t | f > | ide0-1-0 > disk | cdrom | {bus=1, controller=0, type=drive, target=0, > unit=0} | f | t | ide0-1-0 > disk | disk | | t | t > | virtio-disk0 > (20 rows) > > and afterwards: > > engine=# select type, device, address, is_managed, is_plugged, alias from > vm_device where vm_id in (select vm_guid from vm_static where > vm_name='prod-hub-201'); > type | device | address > | is_managed | is_plugged | alias > ------------+---------------+------------------------------- > -------------------------------+------------+------------+---------------- > channel | spicevmc | {type=virtio-serial, bus=0, controller=0, > port=3} | f | t | channel2 > channel | unix | {type=virtio-serial, bus=0, controller=0, > port=1} | f | t | channel0 > interface | bridge | {type=pci, slot=0x04, bus=0x00, > domain=0x0000, function=0x0} | t | t | net1 > controller | usb | {type=pci, slot=0x01, bus=0x00, > domain=0x0000, function=0x2} | t | t | usb > controller | virtio-serial | {type=pci, slot=0x06, bus=0x00, > domain=0x0000, function=0x0} | t | t | virtio-serial0 > interface | bridge | {type=pci, slot=0x03, bus=0x00, > domain=0x0000, function=0x0} | t | t | net0 > controller | virtio-scsi | {type=pci, slot=0x05, bus=0x00, > domain=0x0000, 
function=0x0} | t          | t          | scsi0
> video      | qxl           | {type=pci, slot=0x02, bus=0x00, domain=0x0000, function=0x0} | t | t | video0
> channel    | unix          | {type=virtio-serial, bus=0, controller=0, port=2}            | f | t | channel1
> balloon    | memballoon    |                                                              | t | t | balloon0
> graphics   | spice         |                                                              | t | t |
> disk       | cdrom         |                                                              | t | f | ide0-1-0
> disk       | disk          | {type=pci, slot=0x07, bus=0x00, domain=0x0000, function=0x0} | t | t | virtio-disk0
> (13 rows)
>

Thanks.
The problem was that unmanaged interfaces and disks were created (and thus, you previously had 4 interface devices, 2 disk devices and 2 CD devices).
That is most probably a result of a bug we had when migrating a VM that was started in a cluster < 4.2 to a 4.2 host.
The fix for this bug will be available in 4.2.2.
You could, alternatively, remove the unmanaged (disk and interface) devices and plug the managed ones.

> Regards
>
> Jan

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Oliver.Riesener at hs-bremen.de Wed Mar 7 16:51:18 2018
From: Oliver.Riesener at hs-bremen.de (Oliver Riesener)
Date: Wed, 7 Mar 2018 17:51:18 +0100
Subject: [ovirt-users] Unable to start VM after upgrade vom 4.1.9 to 4.2.1 - NPE
In-Reply-To:
References: <62f32a4e-c6eb-5363-42df-b09a2fe33d00@plusline.net>
 <9b914e1c-5ded-d61a-18f5-d14f2914823f@plusline.net>
 <9C3E31E5-1714-451F-B750-FDCA14224E3B@hs-bremen.de>
 <748cb236-647e-b508-f320-bf81a7a24bce@plusline.net>
Message-ID: <34f9a09e-3e30-549d-adde-fbd4f5b01795@hs-bremen.de>

On 07.03.2018 17:22, Arik Hadas wrote:
> On Wed, Mar 7, 2018 at 6:15 PM, Jan Siml <jsiml at plusline.net> wrote:
>
>     Hello,
>
>         Enable network and disks on your VM than do:
>         Run -> ONCE Ok Ignore errors. Ok
>         Run
>         Cheeers
>
>     WTF! That worked.
>
>     Did you know, why this works and what happens in the background?
>     Is there a Bugzilla bug ID for this issue?
>
I figured it out while attempting to change the VM CPU family on the old VMs, as a last try to get them working again.

After a live upgrade to 4.2 with running VMs, they were all dead once shut down, with disabled network and disks. Deleting them all and recreating them, the other way to fix it, would have been no delight.

Have a nice day.

>     BTW, here is the list of devices before:
>
>     engine=# select type, device, address, is_managed, is_plugged, alias
>     from vm_device where vm_id in (select vm_guid from vm_static where
>     vm_name='prod-hub-201');
>
>     [snip: same 20-row "before" table as quoted above]
>
>     and afterwards:
>
>     engine=# select type, device, address, is_managed, is_plugged, alias
>     from vm_device where vm_id in (select vm_guid from vm_static where
>     vm_name='prod-hub-201');
>
>     [snip: same 13-row "afterwards" table as quoted above]
>
> Thanks.
> The problem was that unmanaged interfaces and disks were created (and
> thus, you previously had 4 interface devices, 2 disk devices and 2 CD
> devices).
> That is most probably a result of a bug we had when migrating a VM that
> was started in a cluster < 4.2 to a 4.2 host.
> The fix for this bug will be available in 4.2.2.
> You could, alternatively, remove the unmanaged (disk and interface)
> devices and plug the managed ones.
>
>     Regards
>
>     Jan
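For anyone who wants to try that cleanup route before 4.2.2 lands, a rough, untested sketch against the vm_device schema shown earlier in the thread (it assumes the usual device_id key column; power the VM off and back up the engine database first):

  -- inspect the unmanaged duplicates before touching anything
  select device_id, type, device, alias
    from vm_device
   where vm_id in (select vm_guid from vm_static where vm_name='prod-hub-201')
     and is_managed = false and type in ('disk', 'interface');

  -- drop them, then make sure the managed devices are plugged
  delete from vm_device
   where vm_id in (select vm_guid from vm_static where vm_name='prod-hub-201')
     and is_managed = false and type in ('disk', 'interface');

  update vm_device set is_plugged = true
   where vm_id in (select vm_guid from vm_static where vm_name='prod-hub-201')
     and is_managed = true and type in ('disk', 'interface');

Starting the VM afterwards should make the engine regenerate a clean device list; that last part is an assumption on my side, not something verified here.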
-- 
Mit freundlichem Gruß

Oliver Riesener

--
Hochschule Bremen
Elektrotechnik und Informatik
Oliver Riesener
Neustadtswall 30
D-28199 Bremen
Tel: 0421 5905-2405, Fax: -2400
e-mail: oliver.riesener at hs-bremen.de

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hsahmed at gmail.com Wed Mar 7 17:50:30 2018
From: hsahmed at gmail.com (Hesham Ahmed)
Date: Wed, 07 Mar 2018 17:50:30 +0000
Subject: [ovirt-users] Gluster Snapshot Schedule Failing on 4.2.1
Message-ID:

I am having issues with the Gluster Snapshot UI since the upgrade to 4.2, and now with 4.2.1. The UI doesn't appear, as I explained in the bug report:

https://bugzilla.redhat.com/show_bug.cgi?id=1530186

I can now see the UI when I clear the cookies and open the snapshots UI from within the volume details screen; however, scheduled snapshots are not being created. The engine log shows a single error:

2018-03-07 20:00:00,051+03 ERROR [org.ovirt.engine.core.utils.timer.JobWrapper] (QuartzOvirtDBScheduler1) [12237b15] Failed to invoke scheduled method onTimer: null

Anyone scheduling snapshots successfully with 4.2?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mgoldboi at redhat.com Wed Mar 7 17:55:13 2018
From: mgoldboi at redhat.com (Moran Goldboim)
Date: Wed, 7 Mar 2018 19:55:13 +0200
Subject: [ovirt-users] Tunable parameters in ovirt engine
In-Reply-To: <6cb6fbc2-51e3-9c89-23dc-e36a7aefd8b3@redhat.com>
References: <786be990-4ea0-613b-d81b-639f70a11398@redhat.com>
 <109ffb44-89ac-f190-29a4-e937157cce81@redhat.com>
 <6cb6fbc2-51e3-9c89-23dc-e36a7aefd8b3@redhat.com>
Message-ID:

Hi Hari,

Any specific use-case you are trying to achieve with this benchmark? What are you trying to simulate? Just wondering whether there are different options to achieve it.

Thanks.

On Wed, Mar 7, 2018 at 5:50 PM, Juan Hernández <jhernand at redhat.com> wrote:

> It means that with the default configuration the Apache web server can't
> serve more than 256 concurrent connections. This applies to any
> application that uses Apache as the web frontend, not just to oVirt. If
> you want to change that you have to change the MaxRequestWorkers and
> ServerLimit parameters, as explained here:
>
> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#maxrequestworkers
>
> So, go to your oVirt engine machine and create a /etc/httpd/conf.d/my.conf
> file with this content:
>
> MaxRequestWorkers 1000
> ServerLimit 1000
>
> Then restart the Apache server:
>
> # systemctl restart httpd
>
> Then your web server should be able to handle 1000 concurrent requests,
> and you will probably start to find other limits, like the amount of
> memory and CPU that those 1000 Apache child processes will consume, the
> number of threads in the JBoss application server, the number of
> connections to the database, etc.
>
> Let me insist a bit that if you base your benchmark solely on the number
> of concurrent requests or connections that the server can handle you may
> end up with meaningless results, as a real world application can/should
> use the server much better than that.
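A quick way to confirm which MPM and limits actually end up in effect after a change like the one above (standard Apache 2.4 commands, not verified against this particular engine host):

  # httpd -V | grep -i mpm          # which MPM the server binary reports
  # httpd -M 2>/dev/null | grep mpm # which mpm_* module is actually loaded
  # httpd -t                        # syntax check after editing the conf files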
> > On 03/07/2018 04:33 PM, Hari Prasanth Loganathan wrote: > >> With the default configuration of the web server it is impossible to >> handle >> more than 256 *connections* simultaneously. I guess that "ab" is opening a >> connection for each concurrent request, so when you reach request 257 the >> web server will just reject the connection, there is nothing that the >> JBoss >> can do about it; you have to increase the number of connections supported >> by the web server. >> >> *So Does it mean that oVirt cannot serve more than 257 request? * >> >> My question is, If its possible How to scale this and what is the >> configuration we need to change? >> >> Also, we are taking a benchmark in using oVirt, So I need to find the >> maximum possible oVirt request. So please let me know the configuration >> tuning for oVirt to achieve maximum concurrent request. >> >> Thanks, >> Hari >> >> On Wed, Mar 7, 2018 at 7:25 PM, Juan Hern?ndez >> wrote: >> >> With the default configuration of the web server it is impossible to >>> handle more than 256 *connections* simultaneously. I guess that "ab" is >>> opening a connection for each concurrent request, so when you reach >>> request >>> 257 the web server will just reject the connection, there is nothing that >>> the JBoss can do about it; you have to increase the number of connections >>> supported by the web server. >>> >>> Or else you may want to re-consider why you want to use 1000 simultaneous >>> connections. It may be OK for a performance test, but there are better >>> ways >>> to squeeze performance. For example, you could consider using HTTP >>> pipelining, which is much more friendly for the server than so many >>> connections. This is what we use when we need to send a large number of >>> requests from other systems. There are examples of how to do that with >>> the >>> Python and Ruby SDKs here: >>> >>> Python: >>> >>> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/ >>> examples/asynchronous_inventory.py >>> >>> Ruby: >>> >>> https://github.com/oVirt/ovirt-engine-sdk-ruby/blob/master/ >>> sdk/examples/asynchronous_inventory.rb >>> >>> >>> On 03/07/2018 02:43 PM, Hari Prasanth Loganathan wrote: >>> >>> Hi Juan, >>>> >>>> Thanks for the response. >>>> >>>> I agree web server can handle only limited number of concurrent >>>> requests. >>>> But Why it is failing with SSL handshake failure for few requests, Can't >>>> the JBOSS wait and serve the request? We can spare the delay but not >>>> with >>>> the request fails. So Is there a configuration in oVirt which can be >>>> tuned >>>> to achieve this? >>>> >>>> Thanks, >>>> Hari >>>> >>>> On Wed, Mar 7, 2018 at 7:05 PM, Juan Hern?ndez >>>> wrote: >>>> >>>> The first thing you will need to change for such a test is the number of >>>> >>>>> simultaneous connections accepted by the Apache web server: by default >>>>> the >>>>> max is 256. See the Apache documentation here: >>>>> >>>>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>>>> axrequestworkers >>>>> >>>>> In addition I also suggest that you consider using the "worker" >>>>> multi-processing module instead of the "prefork", as it usually works >>>>> better when talking to a Java application server, because it re-uses >>>>> connections better. >>>>> >>>>> On 03/07/2018 02:20 PM, Hari Prasanth Loganathan wrote: >>>>> >>>>> Hi Team, >>>>> >>>>>> >>>>>> *Description of problem:* >>>>>> >>>>>> I am trying to achieve 1000 concurrent request to oVirt. What are the >>>>>> tunable parameters to achieve this? 
>>>>>> >>>>>> I tried to perform the benchmarking for ovirt engine using Apache >>>>>> benchmark >>>>>> using the same SSO token. >>>>>> >>>>>> ab -n 1000 -c 500 -k -H "accept: application/json" -H "Authorization: >>>>>> Bearer SSOTOKEN" https://172.30.56.70/ovirt-engine/ >>>>>> >>>>> b-9ff1-076fc07ebf50/statistics> >>>>>> >>>>>> When the number of concurrent request is 500, we are getting more than >>>>>> 100 >>>>>> failures with the following error, >>>>>> >>>>>> SSL read failed (1) - closing connection >>>>>> 139620982339352:error: >>>>>> >>>>>> NOTE: It is scaling for concurrent request below 500. >>>>>> >>>>>> I used the profiler to get the memory and CPU and it seems very less, >>>>>> >>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ >>>>>> COMMAND >>>>>> 30413 ovirt 20 0 4226664 882396 6776 S 126.0 23.0 27:48.53 >>>>>> java >>>>>> >>>>>> Configuration of the machine in which Ovirt is deployed : >>>>>> >>>>>> RAM - 4GB, >>>>>> Hard disk - 100GB, >>>>>> core processor - 2, >>>>>> OS - Cent7.x. >>>>>> >>>>>> In which 2GB is allocated to oVirt. >>>>>> >>>>>> >>>>>> Version-Release number of selected component (if applicable): >>>>>> >>>>>> 4.2.2 >>>>>> >>>>>> >>>>>> How reproducible: >>>>>> >>>>>> If the number of concurrent requests are above 500, we are easily >>>>>> facing >>>>>> this issue. >>>>>> >>>>>> >>>>>> *Actual results:* >>>>>> >>>>>> SSL read failed (1) - closing connection >>>>>> 139620982339352:error: >>>>>> >>>>>> *Expected results:* >>>>>> >>>>>> Request success. >>>>>> >>>>>> >>>>>> Thanks, >>>>>> Hari >>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Users mailing list >>>>>> Users at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>> >>>>>> >>>>>> >>>>>> >>>>> >>>> >>> >> > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hariprasanth.l at msystechnologies.com Wed Mar 7 18:30:58 2018 From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan) Date: Thu, 8 Mar 2018 00:00:58 +0530 Subject: [ovirt-users] Tunable parameters in ovirt engine In-Reply-To: <6cb6fbc2-51e3-9c89-23dc-e36a7aefd8b3@redhat.com> References: <786be990-4ea0-613b-d81b-639f70a11398@redhat.com> <109ffb44-89ac-f190-29a4-e937157cce81@redhat.com> <6cb6fbc2-51e3-9c89-23dc-e36a7aefd8b3@redhat.com> Message-ID: Thanks Juan for your response. Appreciate it. But for some reason still, I am facing the same SSL handshake failed (5). Could you please check this configuration and let me know the issue in my ovirt engine environment. *Configuration of Apache server:* 1) httpd version, # httpd -v Server version: Apache/2.4.6 (CentOS) Server built: Oct 19 2017 20:39:16 2) I checked the status using the following command, # systemctl status httpd -l ? 
httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-03-07 23:46:32 IST; 1min 55s ago
     Docs: man:httpd(8)
           man:apachectl(8)
  Process: 4351 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS)
 Main PID: 4359 (httpd)
   Status: "Total requests: 264; Current requests/sec: 0.1; Current traffic: 204 B/sec"
   CGroup: /system.slice/httpd.service
           ├─4359 /usr/sbin/httpd -DFOREGROUND
           ├─4360 /usr/sbin/httpd -DFOREGROUND
           ├─4362 /usr/sbin/httpd -DFOREGROUND
           ├─5100 /usr/sbin/httpd -DFOREGROUND
           ├─5386 /usr/sbin/httpd -DFOREGROUND
           ├─5415 /usr/sbin/httpd -DFOREGROUND
           └─5416 /usr/sbin/httpd -DFOREGROUND

3) Since the httpd is pointing to the path : /usr/lib/systemd/system/httpd.service

vi /usr/lib/systemd/system/httpd.service

[Unit]
Description=The Apache HTTP Server
After=network.target remote-fs.target nss-lookup.target
Documentation=man:httpd(8)
Documentation=man:apachectl(8)

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/httpd
ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND
ExecReload=/usr/sbin/httpd $OPTIONS -k graceful
ExecStop=/bin/kill -WINCH ${MAINPID}
# We want systemd to give httpd some time to finish gracefully, but still want
# it to kill httpd after TimeoutStopSec if something went wrong during the
# graceful stop. Normally, Systemd sends SIGTERM signal right after the
# ExecStop, which would kill httpd. We are sending useless SIGCONT here to give
# httpd time to finish.
KillSignal=SIGCONT
PrivateTmp=true

[Install]
WantedBy=multi-user.target

4) As per the above command I found the env file is available '/etc/sysconfig/httpd'

vi /etc/sysconfig/httpd

#
# This file can be used to set additional environment variables for
# the httpd process, or pass additional options to the httpd
# executable.
#
# Note: With previous versions of httpd, the MPM could be changed by
# editing an "HTTPD" variable here. With the current version, that
# variable is now ignored. The MPM is a loadable module, and the
# choice of MPM can be changed by editing the configuration file
# /etc/httpd/conf.modules.d/00-mpm.conf
#

#
# To pass additional options (for instance, -D definitions) to the
# httpd binary at startup, set OPTIONS here.
#
#OPTIONS=

#
# This setting ensures the httpd process is started in the "C" locale
# by default. (Some modules will not behave correctly if
# case-sensitive string comparisons are performed in a different
# locale.)
# LANG=C 5) As per the above command, I found that the conf fileis available in the path : /etc/httpd/conf.modules.d/00-mpm.conf vi /etc/httpd/conf.modules.d/00-mpm.conf # Select the MPM module which should be used by uncommenting exactly # one of the following LoadModule lines: # prefork MPM: Implements a non-threaded, pre-forking web server # See: http://httpd.apache.org/docs/2.4/mod/prefork.html #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so # worker MPM: Multi-Processing Module implementing a hybrid # multi-threaded multi-process web server # See: http://httpd.apache.org/docs/2.4/mod/worker.html # LoadModule mpm_worker_module modules/mod_mpm_worker.so # event MPM: A variant of the worker MPM with the goal of consuming # threads only for connections with active processing # See: http://httpd.apache.org/docs/2.4/mod/event.html # #LoadModule mpm_event_module modules/mod_mpm_event.so ServerLimit 1000 MaxRequestWorkers 1000 As per your comment, I enabled the 'LoadModule mpm_worker_module modules/mod_mpm_worker.so' with the ServerLimit and MaxRequestWorkers as 1000 still I am facing the issue for the following command in apache benchmark test. Completed 100 requests SSL handshake failed (5). SSL handshake failed (5). SSL handshake failed (5). SSL handshake failed (5). SSL handshake failed (5). SSL handshake failed (5). NOTE : It always scales when I have concurrent request below 400 What is wrong in this apache configuration, why SSL handshake is failing for concurrent request above 400 ? Thanks, Hari On Wed, Mar 7, 2018 at 9:20 PM, Juan Hern?ndez wrote: > It means that with the default configuration the Apache web server can't > serve more than 256 concurrent connections. This applies to any application > that uses Apache as the web frontend, not just to oVirt. If you want to > change that you have to change the MaxRequestWorkers and ServerLimit > parameters, as explained here: > > https://httpd.apache.org/docs/2.4/mod/mpm_common.html#maxrequestworkers > > So, go to your oVirt engine machine and create a /etc/httpd/conf.d/my.conf > file with this content: > > MaxRequestWorkers 1000 > ServerLimit 1000 > > Then restart the Apache server: > > # systemctl restart httpd > > Then your web server should be able to handle 1000 concurrent requests, > and you will probably start to find other limits, like the amount of memory > and CPU that those 1000 Apache child processes will consume, the number of > threads in the JBoss application server, the number of connections to the > database, etc. > > Let me insist a bit that if you base your benchmark solely on the number > of concurrent requests or connections that the server can handle you may > end up with meaningless results, as a real world application can/should use > the server much better than that. > > On 03/07/2018 04:33 PM, Hari Prasanth Loganathan wrote: > >> With the default configuration of the web server it is impossible to >> handle >> more than 256 *connections* simultaneously. I guess that "ab" is opening a >> connection for each concurrent request, so when you reach request 257 the >> web server will just reject the connection, there is nothing that the >> JBoss >> can do about it; you have to increase the number of connections supported >> by the web server. >> >> *So Does it mean that oVirt cannot serve more than 257 request? * >> >> My question is, If its possible How to scale this and what is the >> configuration we need to change? 
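One thing worth ruling out in the configuration above: with the worker MPM, MaxRequestWorkers is served by up to ServerLimit processes times ThreadsPerChild threads, and the stock Apache 2.4 worker defaults (16 processes x 25 threads) cap the server at exactly 400 concurrent requests. That matches the reported ~400 ceiling suspiciously well, and may mean the new numbers are not actually being applied. A minimal, untested sketch of an explicit worker sizing block (the file name is arbitrary, and the values are an assumption for a 1000-request target):

  # /etc/httpd/conf.d/worker-limits.conf -- untested sketch for Apache 2.4
  <IfModule mpm_worker_module>
      ServerLimit          40
      ThreadsPerChild      25
      # 40 processes x 25 threads = 1000 concurrent requests
      MaxRequestWorkers    1000
  </IfModule>

After a restart, the error_log prints a "server reached MaxRequestWorkers" warning whenever that limit is still being hit, which makes it easy to tell whether the ceiling is Apache or something else (entropy, file descriptors, or the ab client itself).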
>> >> Also, we are taking a benchmark in using oVirt, So I need to find the >> maximum possible oVirt request. So please let me know the configuration >> tuning for oVirt to achieve maximum concurrent request. >> >> Thanks, >> Hari >> >> On Wed, Mar 7, 2018 at 7:25 PM, Juan Hern?ndez >> wrote: >> >> With the default configuration of the web server it is impossible to >>> handle more than 256 *connections* simultaneously. I guess that "ab" is >>> opening a connection for each concurrent request, so when you reach >>> request >>> 257 the web server will just reject the connection, there is nothing that >>> the JBoss can do about it; you have to increase the number of connections >>> supported by the web server. >>> >>> Or else you may want to re-consider why you want to use 1000 simultaneous >>> connections. It may be OK for a performance test, but there are better >>> ways >>> to squeeze performance. For example, you could consider using HTTP >>> pipelining, which is much more friendly for the server than so many >>> connections. This is what we use when we need to send a large number of >>> requests from other systems. There are examples of how to do that with >>> the >>> Python and Ruby SDKs here: >>> >>> Python: >>> >>> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/ >>> examples/asynchronous_inventory.py >>> >>> Ruby: >>> >>> https://github.com/oVirt/ovirt-engine-sdk-ruby/blob/master/ >>> sdk/examples/asynchronous_inventory.rb >>> >>> >>> On 03/07/2018 02:43 PM, Hari Prasanth Loganathan wrote: >>> >>> Hi Juan, >>>> >>>> Thanks for the response. >>>> >>>> I agree web server can handle only limited number of concurrent >>>> requests. >>>> But Why it is failing with SSL handshake failure for few requests, Can't >>>> the JBOSS wait and serve the request? We can spare the delay but not >>>> with >>>> the request fails. So Is there a configuration in oVirt which can be >>>> tuned >>>> to achieve this? >>>> >>>> Thanks, >>>> Hari >>>> >>>> On Wed, Mar 7, 2018 at 7:05 PM, Juan Hern?ndez >>>> wrote: >>>> >>>> The first thing you will need to change for such a test is the number of >>>> >>>>> simultaneous connections accepted by the Apache web server: by default >>>>> the >>>>> max is 256. See the Apache documentation here: >>>>> >>>>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>>>> axrequestworkers >>>>> >>>>> In addition I also suggest that you consider using the "worker" >>>>> multi-processing module instead of the "prefork", as it usually works >>>>> better when talking to a Java application server, because it re-uses >>>>> connections better. >>>>> >>>>> On 03/07/2018 02:20 PM, Hari Prasanth Loganathan wrote: >>>>> >>>>> Hi Team, >>>>> >>>>>> >>>>>> *Description of problem:* >>>>>> >>>>>> I am trying to achieve 1000 concurrent request to oVirt. What are the >>>>>> tunable parameters to achieve this? >>>>>> >>>>>> I tried to perform the benchmarking for ovirt engine using Apache >>>>>> benchmark >>>>>> using the same SSO token. >>>>>> >>>>>> ab -n 1000 -c 500 -k -H "accept: application/json" -H "Authorization: >>>>>> Bearer SSOTOKEN" https://172.30.56.70/ovirt-engine/ >>>>>> >>>>> b-9ff1-076fc07ebf50/statistics> >>>>>> >>>>>> When the number of concurrent request is 500, we are getting more than >>>>>> 100 >>>>>> failures with the following error, >>>>>> >>>>>> SSL read failed (1) - closing connection >>>>>> 139620982339352:error: >>>>>> >>>>>> NOTE: It is scaling for concurrent request below 500. 
>>>>>> >>>>>> I used the profiler to get the memory and CPU and it seems very less, >>>>>> >>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ >>>>>> COMMAND >>>>>> 30413 ovirt 20 0 4226664 882396 6776 S 126.0 23.0 27:48.53 >>>>>> java >>>>>> >>>>>> Configuration of the machine in which Ovirt is deployed : >>>>>> >>>>>> RAM - 4GB, >>>>>> Hard disk - 100GB, >>>>>> core processor - 2, >>>>>> OS - Cent7.x. >>>>>> >>>>>> In which 2GB is allocated to oVirt. >>>>>> >>>>>> >>>>>> Version-Release number of selected component (if applicable): >>>>>> >>>>>> 4.2.2 >>>>>> >>>>>> >>>>>> How reproducible: >>>>>> >>>>>> If the number of concurrent requests are above 500, we are easily >>>>>> facing >>>>>> this issue. >>>>>> >>>>>> >>>>>> *Actual results:* >>>>>> >>>>>> SSL read failed (1) - closing connection >>>>>> 139620982339352:error: >>>>>> >>>>>> *Expected results:* >>>>>> >>>>>> Request success. >>>>>> >>>>>> >>>>>> Thanks, >>>>>> Hari >>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Users mailing list >>>>>> Users at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>> >>>>>> >>>>>> >>>>>> >>>>> >>>> >>> >> > -- DISCLAIMER The information in this e-mail is confidential and may be subject to legal privilege. It is intended solely for the addressee. Access to this e-mail by anyone else is unauthorized. If you have received this communication in error, please address with the subject heading "Received in error," send to it at msystechnologies.com, then delete the e-mail and destroy any copies of it. If you are not the intended recipient, any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it, is prohibited and may be unlawful. The views, opinions, conclusions and other information expressed in this electronic mail and any attachments are not given or endorsed by the company unless otherwise indicated by an authorized representative independent of this message. MSys cannot guarantee that e-mail communications are secure or error-free, as information could be intercepted, corrupted, amended, lost, destroyed, arrive late or incomplete, or contain viruses, though all reasonable precautions have been taken to ensure no viruses are present in this e-mail. As our company cannot accept responsibility for any loss or damage arising from the use of this e-mail or attachments we recommend that you subject these to your virus checking procedures prior to use -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.skrivanek at redhat.com Wed Mar 7 18:43:32 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Wed, 7 Mar 2018 19:43:32 +0100 Subject: [ovirt-users] Very Slow Console Performance - Windows 10 In-Reply-To: <60915132-7486-d25c-4e20-11ab0a4aa8d9@upx.com> References: <781778b2-1d81-4a55-2060-ea570e83fbd1@upx.com> <43c4790c-14d2-dbb7-d074-d8d47d4db913@upx.com> <60915132-7486-d25c-4e20-11ab0a4aa8d9@upx.com> Message-ID: <536C27CE-F311-4344-8067-20D217FC6D79@redhat.com> > On 7 Mar 2018, at 14:03, FERNANDO FREDIANI wrote: > > Hello Gianluca > > Resurrecting this topic. I made the changes as per your instructions below on the Engine configuration but it had no effect on the VM graphics memory. Is it necessary to restart the Engine after adding the 20-overload.properties file ? Also I don't think is necessary to do any changes on the hosts right ? 
> correct on both > On the recent updates has anything changed in the terms on how to change the video memory assigned to any given VM. I guess it is something that has been forgotten overtime, specially if you are running a VDI-like environment whcih depends very much on the video memory. > there were no changes recently, these are the most recent guidelines we got from SPICE people. They might be out of date. Would be good to raise that specifically (the performance difference for default sizes) to them, can you narrow it down and post to spice-devel at lists.freedesktop.org? Thanks, michal > Let me know. > Thanks > > Fernando Frediani > > On 24/11/2017 20:45, Gianluca Cecchi wrote: >> On Fri, Nov 24, 2017 at 5:50 PM, FERNANDO FREDIANI > wrote: >> I have made a Export of the same VM created in oVirt to a server running pure qemu/KVM and which creates new VMs profiles with vram 65536 and it turned on the Windows 10 which run perfectly with that configuration. >> >> Was reading some documentation that it may be possible to change the file /usr/share/ovirt-engine/conf/osinfo-defaults.properties in order to change it for the profile you want but I am not sure how these changed should be made if directly in that file, on another one just with custom configs and also how to apply them immediatelly to any new or existing VM ? I am pretty confident once vram is increased that should resolve the issue with not only Windows 10 VMs, but other as well. >> >> Anyone can give a hint about the correct procedure to apply this change ? >> >> Thanks in advance. >> Fernando >> >> >> >> >> Hi Fernando, >> based on this: >> https://www.ovirt.org/develop/release-management/features/virt/os-info/ >> >> you should create a file of kind >> /etc/ovirt-engine/osinfo.conf.d/20-overload.properties >> but I think you can only overwrite the multiplier and not directly the vgamem (or vgamem_mb in rhel 7) values >> >> so that you could put something like this inside it: >> >> os.windows_10.devices.display.vramMultiplier.value = 2 >> os.windows_10x64.devices.display.vramMultiplier.value = 2 >> >> I think there are no values for vgamem_mb >> >> I found these two threads in 2016 >> http://lists.ovirt.org/pipermail/users/2016-June/073692.html >> that confirms you cannot set vgamem >> and >> http://lists.ovirt.org/pipermail/users/2016-June/073786.html >> that suggests to create a hook >> >> Just a hack that came into mind: >> in a CentOS vm of mine in a 4.1.5 environment I see that by default I get this qemu command line >> >> -device qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 >> >> Based on this: >> https://www.ovirt.org/documentation/draft/video-ram/ >> >> you have >> vgamem = 16 MB * number_of_heads >> >> I verified that if I edit the vm in the gui and set Monitors=4 in console section (but with the aim of using only the first head) and then I power off and power on the VM, I get now >> >> -device qxl-vga,id=video0,ram_size=268435456,vram_size=134217728,vram64_size_mb=0,vgamem_mb=64,bus=pci.0,addr=0x2 >> >> I have not a client to connect and verify any improvement: I don't know if you will be able to use all the new ram in the only first head with a better experience or if it is partitioned in some way... >> Could you try eventually? >> >> Gianluca > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fernando.frediani at upx.com Wed Mar 7 19:32:25 2018 From: fernando.frediani at upx.com (FERNANDO FREDIANI) Date: Wed, 7 Mar 2018 16:32:25 -0300 Subject: [ovirt-users] Very Slow Console Performance - Windows 10 In-Reply-To: <536C27CE-F311-4344-8067-20D217FC6D79@redhat.com> References: <781778b2-1d81-4a55-2060-ea570e83fbd1@upx.com> <43c4790c-14d2-dbb7-d074-d8d47d4db913@upx.com> <60915132-7486-d25c-4e20-11ab0a4aa8d9@upx.com> <536C27CE-F311-4344-8067-20D217FC6D79@redhat.com> Message-ID: Hi I don't think these issue have much to do with Spice, but with the amount of memory oVirt sets to VMs by default, which in some cases for desktop usage seems too little. A field where that could be adjusted without having to edit files in the Engine would probably resolve this issue, or am I missing anything ? Fernando On 07/03/2018 15:43, Michal Skrivanek wrote: > > >> On 7 Mar 2018, at 14:03, FERNANDO FREDIANI > > wrote: >> >> Hello Gianluca >> >> Resurrecting this topic. I made the changes as per your instructions >> below on the Engine configuration but it had no effect on the VM >> graphics memory. Is it necessary to restart the Engine after adding >> the 20-overload.properties file ? Also I don't think is necessary to >> do any changes on the hosts right ? >> > correct on both >> >> On the recent updates has anything changed in the terms on how to >> change the video memory assigned to any given VM. I guess it is >> something that has been forgotten overtime, specially if you are >> running a VDI-like environment whcih depends very much on the video >> memory. >> > there were no changes recently, these are the most recent guidelines > we got from SPICE people. They might be out of date. Would be good to > raise that specifically (the performance difference for default sizes) > to them, can you narrow it down and post to > spice-devel at lists.freedesktop.org > ? > > Thanks, > michal >> >> Let me know. >> Thanks >> >> Fernando Frediani >> >> >> On 24/11/2017 20:45, Gianluca Cecchi wrote: >>> On Fri, Nov 24, 2017 at 5:50 PM, FERNANDO FREDIANI >>> > wrote: >>> >>> I have made a Export of the same VM created in oVirt to a server >>> running pure qemu/KVM and which creates new VMs profiles with >>> vram 65536 and it turned on the Windows 10 which run perfectly >>> with that configuration. >>> >>> Was reading some documentation that it may be possible to change >>> the file /usr/share/ovirt-engine/conf/osinfo-defaults.properties >>> in order to change it for the profile you want but I am not sure >>> how these changed should be made if directly in that file, on >>> another one just with custom configs and also how to apply them >>> immediatelly to any new or existing VM ? I am pretty confident >>> once vram is increased that should resolve the issue with not >>> only Windows 10 VMs, but other as well. >>> >>> Anyone can give a hint about the correct procedure to apply this >>> change ? >>> >>> Thanks in advance. 
>>> Fernando >>> >>> >>> >>> >>> Hi Fernando, >>> based on this: >>> https://www.ovirt.org/develop/release-management/features/virt/os-info/ >>> >>> >>> you should create a file of kind >>> /etc/ovirt-engine/osinfo.conf.d/20-overload.properties >>> but I think you can only overwrite the multiplier and not directly >>> the vgamem (or vgamem_mb in rhel 7) values >>> >>> so that you could put something like this inside it: >>> >>> os.windows_10.devices.display.vramMultiplier.value = 2 >>> os.windows_10x64.devices.display.vramMultiplier.value = 2 >>> >>> I think there are no values for vgamem_mb >>> >>> I found these two threads in 2016 >>> http://lists.ovirt.org/pipermail/users/2016-June/073692.html >>> that confirms you cannot set vgamem >>> and >>> http://lists.ovirt.org/pipermail/users/2016-June/073786.html >>> that suggests to create a hook >>> >>> Just a hack that came into mind: >>> in a CentOS vm of mine in a 4.1.5 environment I see that by default >>> I get this qemu command line >>> >>> -device >>> qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 >>> >>> Based on this: >>> https://www.ovirt.org/documentation/draft/video-ram/ >>> >>> >>> you have >>> vgamem = 16 MB * number_of_heads >>> >>> I verified that if I edit the vm in the gui and set Monitors=4 in >>> console section (but with the aim of using only the first head) and >>> then I power off and power on the VM, I get now >>> >>> -device >>> qxl-vga,id=video0,ram_size=268435456,vram_size=134217728,vram64_size_mb=0,vgamem_mb=64,bus=pci.0,addr=0x2 >>> >>> I have not a client to connect and verify any improvement: I don't >>> know if you will be able to use all the new ram in the only first >>> head with a better experience or if it is partitioned in some way... >>> Could you try eventually? >>> >>> Gianluca >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jhernand at redhat.com Wed Mar 7 19:38:25 2018 From: jhernand at redhat.com (=?UTF-8?Q?Juan_Hern=c3=a1ndez?=) Date: Wed, 7 Mar 2018 20:38:25 +0100 Subject: [ovirt-users] Tunable parameters in ovirt engine In-Reply-To: References: <786be990-4ea0-613b-d81b-639f70a11398@redhat.com> <109ffb44-89ac-f190-29a4-e937157cce81@redhat.com> <6cb6fbc2-51e3-9c89-23dc-e36a7aefd8b3@redhat.com> Message-ID: <5105aa87-fea6-eb2e-5ae8-ef47cbbe6677@redhat.com> If you are still having problems I am inclined to think that it is a client issue. For example, I'd try to remove the "-k" option from the "ab" command. If you use keep alive the server may decide anyhow to close the connection after certain number of requests, even if the client asks to keep it alive. Some clients don't handle that perfectly, "ab" may have that problem. If that makes the SSL error messages disappear then I think you can safely ignore them, and restore the "-k" option, if you want. On 03/07/2018 07:30 PM, Hari Prasanth Loganathan wrote: > Thanks Juan for your response. Appreciate it. > But for some reason still, I am facing the same SSL handshake failed (5). > Could you please check this configuration and let me know the issue in my > ovirt engine environment. 
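Concretely, that means re-running the same benchmark without keep-alive, for example (SSO token and URL elided exactly as in the original command):

  ab -n 1000 -c 500 -H "accept: application/json" -H "Authorization: Bearer SSOTOKEN" https://172.30.56.70/ovirt-engine/...

If the handshake errors disappear with that, they were most likely the client mishandling server-closed connections rather than a limit on the engine side, as suggested above.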
> > *Configuration of Apache server:* > > 1) httpd version, > > # httpd -v > Server version: Apache/2.4.6 (CentOS) > Server built: Oct 19 2017 20:39:16 > > 2) I checked the status using the following command, > > # systemctl status httpd -l > ? httpd.service - The Apache HTTP Server > Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor > preset: disabled) > Active: active (running) since Wed 2018-03-07 23:46:32 IST; 1min 55s ago > Docs: man:httpd(8) > man:apachectl(8) > Process: 4351 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, > status=0/SUCCESS) > Main PID: 4359 (httpd) > Status: "Total requests: 264; Current requests/sec: 0.1; Current > traffic: 204 B/sec" > CGroup: /system.slice/httpd.service > ??4359 /usr/sbin/httpd -DFOREGROUND > ??4360 /usr/sbin/httpd -DFOREGROUND > ??4362 /usr/sbin/httpd -DFOREGROUND > ??5100 /usr/sbin/httpd -DFOREGROUND > ??5386 /usr/sbin/httpd -DFOREGROUND > ??5415 /usr/sbin/httpd -DFOREGROUND > ??5416 /usr/sbin/httpd -DFOREGROUND > > 3) Since the httpd is pointing to the path : > /usr/lib/systemd/system/httpd.service > > vi /usr/lib/systemd/system/httpd.service > > [Unit] > Description=The Apache HTTP Server > After=network.target remote-fs.target nss-lookup.target > Documentation=man:httpd(8) > Documentation=man:apachectl(8) > > [Service] > Type=notify > EnvironmentFile=/etc/sysconfig/httpd > ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND > ExecReload=/usr/sbin/httpd $OPTIONS -k graceful > ExecStop=/bin/kill -WINCH ${MAINPID} > # We want systemd to give httpd some time to finish gracefully, but still > want > # it to kill httpd after TimeoutStopSec if something went wrong during the > # graceful stop. Normally, Systemd sends SIGTERM signal right after the > # ExecStop, which would kill httpd. We are sending useless SIGCONT here to > give > # httpd time to finish. > KillSignal=SIGCONT > PrivateTmp=true > > [Install] > WantedBy=multi-user.target > > > 4) As per the above command I found the env file is available > '/etc/sysconfig/httpd' > > vi /etc/sysconfig/httpd > > # > # This file can be used to set additional environment variables for > # the httpd process, or pass additional options to the httpd > # executable. > # > # Note: With previous versions of httpd, the MPM could be changed by > # editing an "HTTPD" variable here. With the current version, that > # variable is now ignored. The MPM is a loadable module, and the > # choice of MPM can be changed by editing the configuration file > /etc/httpd/conf.modules.d/00-mpm.conf > # > > # > # To pass additional options (for instance, -D definitions) to the > # httpd binary at startup, set OPTIONS here. > # > #OPTIONS= > > # > # This setting ensures the httpd process is started in the "C" locale > # by default. (Some modules will not behave correctly if > # case-sensitive string comparisons are performed in a different > # locale.) 
> # > LANG=C > > > 5) As per the above command, I found that the conf fileis available in the > path : /etc/httpd/conf.modules.d/00-mpm.conf > > vi /etc/httpd/conf.modules.d/00-mpm.conf > > # Select the MPM module which should be used by uncommenting exactly > # one of the following LoadModule lines: > > # prefork MPM: Implements a non-threaded, pre-forking web server > # See: http://httpd.apache.org/docs/2.4/mod/prefork.html > #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so > > # worker MPM: Multi-Processing Module implementing a hybrid > # multi-threaded multi-process web server > # See: http://httpd.apache.org/docs/2.4/mod/worker.html > # > LoadModule mpm_worker_module modules/mod_mpm_worker.so > > # event MPM: A variant of the worker MPM with the goal of consuming > # threads only for connections with active processing > # See: http://httpd.apache.org/docs/2.4/mod/event.html > # > #LoadModule mpm_event_module modules/mod_mpm_event.so > > > ServerLimit 1000 > MaxRequestWorkers 1000 > > > > > As per your comment, I enabled the 'LoadModule mpm_worker_module > modules/mod_mpm_worker.so' with the ServerLimit and MaxRequestWorkers as > 1000 still I am facing the issue for the following command in apache > benchmark test. > > Completed 100 requests > SSL handshake failed (5). > SSL handshake failed (5). > SSL handshake failed (5). > SSL handshake failed (5). > SSL handshake failed (5). > SSL handshake failed (5). > > > NOTE : It always scales when I have concurrent request below 400 > > What is wrong in this apache configuration, why SSL handshake is failing > for concurrent request above 400 ? > > Thanks, > Hari > > > > > > > On Wed, Mar 7, 2018 at 9:20 PM, Juan Hern?ndez wrote: > >> It means that with the default configuration the Apache web server can't >> serve more than 256 concurrent connections. This applies to any application >> that uses Apache as the web frontend, not just to oVirt. If you want to >> change that you have to change the MaxRequestWorkers and ServerLimit >> parameters, as explained here: >> >> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#maxrequestworkers >> >> So, go to your oVirt engine machine and create a /etc/httpd/conf.d/my.conf >> file with this content: >> >> MaxRequestWorkers 1000 >> ServerLimit 1000 >> >> Then restart the Apache server: >> >> # systemctl restart httpd >> >> Then your web server should be able to handle 1000 concurrent requests, >> and you will probably start to find other limits, like the amount of memory >> and CPU that those 1000 Apache child processes will consume, the number of >> threads in the JBoss application server, the number of connections to the >> database, etc. >> >> Let me insist a bit that if you base your benchmark solely on the number >> of concurrent requests or connections that the server can handle you may >> end up with meaningless results, as a real world application can/should use >> the server much better than that. >> >> On 03/07/2018 04:33 PM, Hari Prasanth Loganathan wrote: >> >>> With the default configuration of the web server it is impossible to >>> handle >>> more than 256 *connections* simultaneously. I guess that "ab" is opening a >>> connection for each concurrent request, so when you reach request 257 the >>> web server will just reject the connection, there is nothing that the >>> JBoss >>> can do about it; you have to increase the number of connections supported >>> by the web server. >>> >>> *So Does it mean that oVirt cannot serve more than 257 request? 
* >>> >>> My question is, If its possible How to scale this and what is the >>> configuration we need to change? >>> >>> Also, we are taking a benchmark in using oVirt, So I need to find the >>> maximum possible oVirt request. So please let me know the configuration >>> tuning for oVirt to achieve maximum concurrent request. >>> >>> Thanks, >>> Hari >>> >>> On Wed, Mar 7, 2018 at 7:25 PM, Juan Hern?ndez >>> wrote: >>> >>> With the default configuration of the web server it is impossible to >>>> handle more than 256 *connections* simultaneously. I guess that "ab" is >>>> opening a connection for each concurrent request, so when you reach >>>> request >>>> 257 the web server will just reject the connection, there is nothing that >>>> the JBoss can do about it; you have to increase the number of connections >>>> supported by the web server. >>>> >>>> Or else you may want to re-consider why you want to use 1000 simultaneous >>>> connections. It may be OK for a performance test, but there are better >>>> ways >>>> to squeeze performance. For example, you could consider using HTTP >>>> pipelining, which is much more friendly for the server than so many >>>> connections. This is what we use when we need to send a large number of >>>> requests from other systems. There are examples of how to do that with >>>> the >>>> Python and Ruby SDKs here: >>>> >>>> Python: >>>> >>>> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/ >>>> examples/asynchronous_inventory.py >>>> >>>> Ruby: >>>> >>>> https://github.com/oVirt/ovirt-engine-sdk-ruby/blob/master/ >>>> sdk/examples/asynchronous_inventory.rb >>>> >>>> >>>> On 03/07/2018 02:43 PM, Hari Prasanth Loganathan wrote: >>>> >>>> Hi Juan, >>>>> >>>>> Thanks for the response. >>>>> >>>>> I agree web server can handle only limited number of concurrent >>>>> requests. >>>>> But Why it is failing with SSL handshake failure for few requests, Can't >>>>> the JBOSS wait and serve the request? We can spare the delay but not >>>>> with >>>>> the request fails. So Is there a configuration in oVirt which can be >>>>> tuned >>>>> to achieve this? >>>>> >>>>> Thanks, >>>>> Hari >>>>> >>>>> On Wed, Mar 7, 2018 at 7:05 PM, Juan Hern?ndez >>>>> wrote: >>>>> >>>>> The first thing you will need to change for such a test is the number of >>>>> >>>>>> simultaneous connections accepted by the Apache web server: by default >>>>>> the >>>>>> max is 256. See the Apache documentation here: >>>>>> >>>>>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>>>>> axrequestworkers >>>>>> >>>>>> In addition I also suggest that you consider using the "worker" >>>>>> multi-processing module instead of the "prefork", as it usually works >>>>>> better when talking to a Java application server, because it re-uses >>>>>> connections better. >>>>>> >>>>>> On 03/07/2018 02:20 PM, Hari Prasanth Loganathan wrote: >>>>>> >>>>>> Hi Team, >>>>>> >>>>>>> >>>>>>> *Description of problem:* >>>>>>> >>>>>>> I am trying to achieve 1000 concurrent request to oVirt. What are the >>>>>>> tunable parameters to achieve this? >>>>>>> >>>>>>> I tried to perform the benchmarking for ovirt engine using Apache >>>>>>> benchmark >>>>>>> using the same SSO token. 
>>>>>>> >>>>>>> ab -n 1000 -c 500 -k -H "accept: application/json" -H "Authorization: >>>>>>> Bearer SSOTOKEN" https://172.30.56.70/ovirt-engine/ >>>>>>> >>>>>> b-9ff1-076fc07ebf50/statistics> >>>>>>> >>>>>>> When the number of concurrent request is 500, we are getting more than >>>>>>> 100 >>>>>>> failures with the following error, >>>>>>> >>>>>>> SSL read failed (1) - closing connection >>>>>>> 139620982339352:error: >>>>>>> >>>>>>> NOTE: It is scaling for concurrent request below 500. >>>>>>> >>>>>>> I used the profiler to get the memory and CPU and it seems very less, >>>>>>> >>>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ >>>>>>> COMMAND >>>>>>> 30413 ovirt 20 0 4226664 882396 6776 S 126.0 23.0 27:48.53 >>>>>>> java >>>>>>> >>>>>>> Configuration of the machine in which Ovirt is deployed : >>>>>>> >>>>>>> RAM - 4GB, >>>>>>> Hard disk - 100GB, >>>>>>> core processor - 2, >>>>>>> OS - Cent7.x. >>>>>>> >>>>>>> In which 2GB is allocated to oVirt. >>>>>>> >>>>>>> >>>>>>> Version-Release number of selected component (if applicable): >>>>>>> >>>>>>> 4.2.2 >>>>>>> >>>>>>> >>>>>>> How reproducible: >>>>>>> >>>>>>> If the number of concurrent requests are above 500, we are easily >>>>>>> facing >>>>>>> this issue. >>>>>>> >>>>>>> >>>>>>> *Actual results:* >>>>>>> >>>>>>> SSL read failed (1) - closing connection >>>>>>> 139620982339352:error: >>>>>>> >>>>>>> *Expected results:* >>>>>>> >>>>>>> Request success. >>>>>>> >>>>>>> >>>>>>> Thanks, >>>>>>> Hari >>>>>>> >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Users mailing list >>>>>>> Users at ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > From gianluca.cecchi at gmail.com Wed Mar 7 19:59:41 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Wed, 7 Mar 2018 20:59:41 +0100 Subject: [ovirt-users] Very Slow Console Performance - Windows 10 In-Reply-To: <536C27CE-F311-4344-8067-20D217FC6D79@redhat.com> References: <781778b2-1d81-4a55-2060-ea570e83fbd1@upx.com> <43c4790c-14d2-dbb7-d074-d8d47d4db913@upx.com> <60915132-7486-d25c-4e20-11ab0a4aa8d9@upx.com> <536C27CE-F311-4344-8067-20D217FC6D79@redhat.com> Message-ID: On Wed, Mar 7, 2018 at 7:43 PM, Michal Skrivanek < michal.skrivanek at redhat.com> wrote: > > > On 7 Mar 2018, at 14:03, FERNANDO FREDIANI > wrote: > > Hello Gianluca > > Resurrecting this topic. I made the changes as per your instructions below > on the Engine configuration but it had no effect on the VM graphics memory. > Is it necessary to restart the Engine after adding the > 20-overload.properties file ? Also I don't think is necessary to do any > changes on the hosts right ? > > correct on both > Hello Fernando and Michal, at that time I was doing some tests both with plain virt-manager and oVirt for some Windows 10 VMs. More recently I haven't done anything in that regard again, unfortunately. After you have done what you did suggest yourself and Michal confirmed, then you can test powering off and then on again the VM (so that the new qemu-kvm process starts with the new parameters) and let us know if you enjoy better experience, so that we can ask for adoption as a default (eg for VMs configured as desktops) or as a custom property to give > On the recent updates has anything changed in the terms on how to change > the video memory assigned to any given VM. 
I guess it is something that has
> been forgotten over time, especially if you are running a VDI-like
> environment which depends very much on the video memory.
>
> there were no changes recently, these are the most recent guidelines we
> got from SPICE people. They might be out of date. Would be good to raise
> that specifically (the performance difference for default sizes) to them,
> can you narrow it down and post to spice-devel at lists.freedesktop.org?
>
This could be very useful too
Cheers,
Gianluca
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From gianluca.cecchi at gmail.com  Wed Mar  7 22:47:18 2018
From: gianluca.cecchi at gmail.com (Gianluca Cecchi)
Date: Wed, 7 Mar 2018 23:47:18 +0100
Subject: [ovirt-users] Pre-snapshot scripts to run before live snapshot
In-Reply-To: 
References: 
Message-ID: 

On Tue, Mar 6, 2018 at 7:54 PM, Yaniv Kaul wrote:

> On Mar 6, 2018 7:02 PM, "Gianluca Cecchi" wrote:
>
> Hello,
> this thread last year (started by me... ;-) was very useful in different
> aspects involved
>
> http://lists.ovirt.org/pipermail/users/2017-March/080322.html
>
> We did cover memory save or not and fsfreeze automatically done by guest
> agent if installed inside the VM.
> What about pre-snapshot scripts/operations to run inside guest, to have
> application consistency?
> Eg if I have a database inside the VM and I have scripted my backup job
> involving live-snapshot (eg with the backup.py utility of the thread)
>
> https://github.com/guillon/qemu-plugins/blob/master/scripts/
> qemu-guest-agent/fsfreeze-hook.d/mysql-flush.sh.sample
>

Hello Yaniv,
thanks for the information and the example file link.
So I have understood that the freeze-hook script is run with the "freeze"
argument before the snapshot and again with the "thaw" argument after the
snapshot.
So I can manage what to do by parsing the given argument.
So far so good.
I have tested with a Fedora 27 guest and all is ok there.

Now I would like to do something similar for a Windows 2008 R2 x64 VM.
I see that the qemu-guest-agent has been installed under
C:\Programs\qemu-ga and that the "QEMU Guest Agent" service is run as
"C:\Program Files\qemu-ga\qemu-ga.exe" -d

In fact I see I have a registry key in
HKLM\SYSTEM\CurrentControlSet\services\QEMU-GA
with the same value above for ImagePath ;
"C:\Program Files\qemu-ga\qemu-ga.exe" -d

What should I do to enable a freeze-hook script on Windows now?

BTW: searching around while trying to understand more, I found that:

on the hypervisor running the VM I have

# vdsClient -s 0 getVmStats vm_guid

that works and gives me information confirming the agent seems to
communicate, and I get also
appsList = [.... , 'QEMU guest agent', ...]

Also, the dump of the dynamic xml for the guest contains

# virsh -r dumpxml VM_NAME
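The same guest-agent query can also be attempted through libvirt instead of
the raw socket. A minimal sketch, assuming the host allows direct
guest-agent calls (VM_NAME stands for the libvirt domain name; vdsm may be
holding the agent channel, in which case these calls fail or time out):

# virsh qemu-agent-command VM_NAME '{"execute":"guest-ping"}' --timeout 5
# virsh qemu-agent-command VM_NAME '{"execute":"guest-info"}' --timeout 5

If the channel is free, the second command returns the JSON that lists the
guest-fsfreeze-freeze and guest-fsfreeze-thaw entries discussed below.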
I tried to get its settings from the qemu guest agent using socat and unix
domain sockets but I don't receive an answer

# socat unix-connect:/var/lib/libvirt/qemu/channels/420e5014-9b26-a4c0-9d79-ed9b123304de.org.qemu.guest_agent.0 readline

and then in the interactive prompt

{"execute":"guest-info"}

to get and verify information about fsfreeze, something like
...., {"enabled": true, "name": "guest-fsfreeze-freeze"}, ...
...., {"enabled": true, "name": "guest-fsfreeze-thaw"}, ...

But I didn't get any line....
Is this communication from the OS disabled by design outside of oVirt
management?

Thanks again for any info to configure the freeze-hook in a Windows guest.

Gianluca
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From simone.bruckner at fabasoft.com  Thu Mar  8 07:42:33 2018
From: simone.bruckner at fabasoft.com (Bruckner, Simone)
Date: Thu, 8 Mar 2018 07:42:33 +0000
Subject: [ovirt-users] Cannot activate storage domain
In-Reply-To: <2CB4E8C8E00E594EA06D4AC427E429920FE67150@fabamailserver.fabagl.fabasoft.com>
References: <2CB4E8C8E00E594EA06D4AC427E429920FE500D1@fabamailserver.fabagl.fabasoft.com>
 <2CB4E8C8E00E594EA06D4AC427E429920FE56290@fabamailserver.fabagl.fabasoft.com>
 <2CB4E8C8E00E594EA06D4AC427E429920FE5A323@fabamailserver.fabagl.fabasoft.com>
 <2CB4E8C8E00E594EA06D4AC427E429920FE6670D@fabamailserver.fabagl.fabasoft.com>
 ,
 <2CB4E8C8E00E594EA06D4AC427E429920FE67150@fabamailserver.fabagl.fabasoft.com>
Message-ID: <9896E8032366964791E8E3595991B2602857363B@fabamailserver.fabagl.fabasoft.com>

Hi Shani,

today I again lost access to a storage domain. So currently I have two
storage domains that we cannot activate any more. I uploaded the logfiles
to our Cloud Service:

[ZIP Archive]  logfiles.tar.gz

I lost access today, March 8th 2018 around 0.55am CET
I tried to activate the storage domain around 6.40am CET

Please let me know if there is anything I can do to get this addressed.

Thank you very much,
Simone
________________________________
From: users-bounces at ovirt.org [users-bounces at ovirt.org] on behalf of
Bruckner, Simone [simone.bruckner at fabasoft.com]
Sent: Tuesday, 6 March 2018 10:19
To: Shani Leviim
Cc: users at ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi Shani,

please find the logs attached.

Thank you,
Simone

From: Shani Leviim [mailto:sleviim at redhat.com]
Sent: Tuesday, 6 March 2018 09:48
To: Bruckner, Simone
Cc: users at ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi Simone,
Can you please share your vdsm and engine logs?

Regards,
Shani Leviim

On Tue, Mar 6, 2018 at 7:34 AM, Bruckner, Simone > wrote:
Hello,

I apologize for bringing this one up again, but does anybody know if there
is a chance to recover a storage domain that cannot be activated?

Thank you,
Simone

From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On behalf of
Bruckner, Simone
Sent: Friday, 2 March 2018 17:03
To: users at ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi all,

I managed to get the inactive storage domain to maintenance by stopping all
running VMs that were using it, but I am still not able to activate it.
Trying to activate results in the following events:

For each host: VDSM command GetVGInfoVDS failed: Volume Group does not
exist: (u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',)

And finally: VDSM command ActivateStorageDomainVDS failed: Storage domain
does not exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)

Is there anything I can do to recover this storage domain?
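For anyone hitting the same symptom, a first sanity check is whether the
hosts still see the volume group at all. A rough sketch, run on one of the
hosts, using the vg_uuid from the error above (standard LVM and multipath
commands; output will vary):

# vgs --noheadings -o vg_name,vg_uuid | grep 813oRe || echo "VG not visible on this host"
# pvs -o pv_name,vg_name
# multipath -ll | head

If the VG is missing on every host while multipath still shows the LUN, the
problem is more likely LVM metadata than the FC path itself.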
Thank you and all the best,
Simone

From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On behalf of
Bruckner, Simone
Sent: Thursday, 1 March 2018 17:57
To: users at ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi,

we are still struggling to get a storage domain online again. We tried to
put the storage domain in maintenance mode, which led to "Failed to update
OVF disks 809cc8d7-7687-46cf-a342-3be48674a9b3, OVF data isn't updated on
those OVF stores".

Trying again with ignoring OVF update failures put the storage domain in
"preparing for maintenance".

We see the following message on all hosts: "Error releasing host id 26 for
domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 (monitor:578)".

Querying the storage domain using vdsm-client on the SPM resulted in

# vdsm-client StorageDomain getInfo "storagedomainID"="b83c159c-4ad6-4613-ba16-bab95ccd10c0"
vdsm-client: Command StorageDomain.getInfo with args {'storagedomainID':
'b83c159c-4ad6-4613-ba16-bab95ccd10c0'} failed: (code=358, message=Storage
domain does not exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',))

Any ideas?

Thank you and all the best,
Simone

From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On behalf of
Bruckner, Simone
Sent: Wednesday, 28 February 2018 15:52
To: users at ovirt.org
Subject: [ovirt-users] Cannot activate storage domain

Hi all,

we run a small oVirt installation that we also use for automated testing
(automatically creating, dropping VMs). We got an inactive FC storage
domain that we cannot activate any more. We see several events at that time
starting with:

VM perftest-c17 is down with error. Exit message: Unable to get volume size
for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 volume
686376c1-4be1-44c3-89a3-0a8addc8fdf2.

Trying to activate the storage domain results in the following alert event
for each host:

VDSM command GetVGInfoVDS failed: Volume Group does not exist: (u'vg_uuid:
813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',)

And after those messages from all hosts we get:

VDSM command ActivateStorageDomainVDS failed: Storage domain does not
exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)

Failed to activate Storage Domain VMHOST_LUN_205 (Data Center Production) by

Invalid status on Data Center Production. Setting status to Non Responsive.

Storage Pool Manager runs on Host vmhost003.fabagl.fabasoft.com (Address:
vmhost003.fabagl.fabasoft.com), Data Center Production.

Checking the hosts with multipath -ll we see the LUN without errors.

We run oVirt 4.2.1 on CentOS 7.4. Hosts are CentOS 7.4 hosts with oVirt
installed using oVirt engine. Hosts are connected to about 30 FC LUNs (8 TB
each) on two all-flash storage arrays.

Thank you,
Simone Bruckner

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

From ykaul at redhat.com  Thu Mar  8 08:47:20 2018
From: ykaul at redhat.com (Yaniv Kaul)
Date: Thu, 8 Mar 2018 10:47:20 +0200
Subject: [ovirt-users] Pre-snapshot scripts to run before live snapshot
In-Reply-To: 
References: 
Message-ID: 

On Thu, Mar 8, 2018 at 12:47 AM, Gianluca Cecchi wrote:

> On Tue, Mar 6, 2018 at 7:54 PM, Yaniv Kaul wrote:
>
>> On Mar 6, 2018 7:02 PM, "Gianluca Cecchi" wrote:
>>
>> Hello,
>> this thread last year (started by me...
;-) was very useful in different >> aspects involved >> >> http://lists.ovirt.org/pipermail/users/2017-March/080322.html >> >> We did cover memory save or not and fsfreeze automatically done by guest >> agent if installed inside the VM. >> What about pre-snapshot scripts/operations to run inside guest, to have >> application consistency? >> Eg if I have a database inside the VM and I have scripted my backup job >> involving live-snapshot (eg with the backup.py utility of the thread) >> >> >> https://github.com/guillon/qemu-plugins/blob/master/scripts/ >> qemu-guest-agent/fsfreeze-hook.d/mysql-flush.sh.sample >> >> >> > Hello Yaniv, > thanks for the information and the example file link. > So I have understood that the freeze-hook script is run with the "freeze" > option before snapshot and again but with the "thaw" option after the > snapshot. > So I can manage what to do, parsing the argument given > So far so good. > I have tested with a Fedora 27 guest and all is ok there. > > Now I would like to do something similar for a Windows 2008 R2 x64 VM. > Windows is somewhat different. In fact, it's a bit better than Linux (ARGH! but it's true) with its support for VSS - an API for applications to register to events such as backup. You should have the QEMU guest agent VSS provider installed (Note: need to see where's the latest bits - I found[1]). Then, if your application supports VSS, you are all good (I believe). Y. [1] https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-qemu-ga/qemu-ga-win-7.4.5-1/ I see that the qemu-guest-agent has been installed under > C:\Programs\qemu-ga and that the "QEMU Guest Agent" service is run as > "C:\Program Files\qemu-ga\qemu-ga.exe" -d > > In fact I see I have a registry key in > > HKLM\SYSTEM\CurrentControlSet\services\QEMU-GA > > with the same value above for ImagePath ; > "C:\Program Files\qemu-ga\qemu-ga.exe" -d > > What should I do to enable a freeze-hook script on Windows now? > > BTW: searching around while trying to understand more, I found that: > > on hypervisor running the VM I have > > # vdsClient -s 0 getVmStats vm_guid > > that works and gives me information that confirms agent seems to > communicate and I get also > appsList = [.... , 'QEMU guest agent', ...] > > Also, the dump of the dynamic xml for the guest contains > > # virsh -r dumpxml VM_NAME > > > > state='connected'/> > >
> > > I tried to get its settings from qemu guest agent using socat and unix > domain sockets but I don't receive answer > > # socat unix-connect:/var/lib/libvirt/qemu/channels/420e5014-9b26- > a4c0-9d79-ed9b123304de.org.qemu.guest_agent.0 readline > > and then in the interactive prompt > > {"execute":"guest-info"} > > to get and verify information about fsfreeze, something like > ...., {"enabled": true, "name": "guest-fsfreeze-freeze"}, ... ...., > {"enabled": true, "name": "guest-fsfreeze-thaw"}, ... > > But I didn't get any line.... > Is this communication from OS disabled by design out of oVirt mgmt? > > Thanks again for any info to configure freeze-hook in Windows guest. > > Gianluca > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jm3185951 at gmail.com Thu Mar 8 08:55:42 2018 From: jm3185951 at gmail.com (Jonathan Mathews) Date: Thu, 8 Mar 2018 10:55:42 +0200 Subject: [ovirt-users] Failure to upgrade Cluster Compatibility Version In-Reply-To: References: Message-ID: Hi , this has now become really urgent. Everything I try, I am unable to get the Cluster Compatibility Version to change. The entire platform is running the latest 3.6 release. On Tue, Mar 6, 2018 at 4:20 PM, Jonathan Mathews wrote: > Any chance of getting feedback on this? > > It is becoming urgent. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Thu Mar 8 09:48:44 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 8 Mar 2018 11:48:44 +0200 Subject: [ovirt-users] Failure to upgrade Cluster Compatibility Version In-Reply-To: References: Message-ID: On Thu, Mar 8, 2018 at 10:55 AM, Jonathan Mathews wrote: > Hi , this has now become really urgent. > It's not clear to me why it's urgent. Please look at past replies and provide more information so we can assist you. Y. > > Everything I try, I am unable to get the Cluster Compatibility Version to > change. > > The entire platform is running the latest 3.6 release. > > On Tue, Mar 6, 2018 at 4:20 PM, Jonathan Mathews > wrote: > >> Any chance of getting feedback on this? >> >> It is becoming urgent. >> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hariprasanth.l at msystechnologies.com Thu Mar 8 11:16:37 2018 From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan) Date: Thu, 8 Mar 2018 16:46:37 +0530 Subject: [ovirt-users] Tunable parameters in ovirt engine In-Reply-To: <5105aa87-fea6-eb2e-5ae8-ef47cbbe6677@redhat.com> References: <786be990-4ea0-613b-d81b-639f70a11398@redhat.com> <109ffb44-89ac-f190-29a4-e937157cce81@redhat.com> <6cb6fbc2-51e3-9c89-23dc-e36a7aefd8b3@redhat.com> <5105aa87-fea6-eb2e-5ae8-ef47cbbe6677@redhat.com> Message-ID: No Juan, It is not working with any benchmark / application tool. It fails with the same error SSL handshake failed (5). Could you let me know the configuration of Apache web server is correct? Thanks, Hari On Thu, Mar 8, 2018 at 1:08 AM, Juan Hern?ndez wrote: > If you are still having problems I am inclined to think that it is a > client issue. For example, I'd try to remove the "-k" option from the "ab" > command. If you use keep alive the server may decide anyhow to close the > connection after certain number of requests, even if the client asks to > keep it alive. 
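A quick way to try this suggestion against the same endpoint used earlier
in the thread (a sketch; the SSO token and the full API path are elided
here as in the rest of the archive):

# ab -n 1000 -c 500 -H "accept: application/json" -H "Authorization: Bearer SSOTOKEN" https://172.30.56.70/ovirt-engine/

Dropping -k takes keep-alive out of the picture: if the SSL errors
disappear, the failures were in how ab handles persistent connections that
the server closes, not in Apache itself.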
Some clients don't handle that perfectly, "ab" may have that > problem. If that makes the SSL error messages disappear then I think you > can safely ignore them, and restore the "-k" option, if you want. > > On 03/07/2018 07:30 PM, Hari Prasanth Loganathan wrote: > >> Thanks Juan for your response. Appreciate it. >> But for some reason still, I am facing the same SSL handshake failed (5). >> Could you please check this configuration and let me know the issue in my >> ovirt engine environment. >> >> *Configuration of Apache server:* >> >> >> 1) httpd version, >> >> # httpd -v >> Server version: Apache/2.4.6 (CentOS) >> Server built: Oct 19 2017 20:39:16 >> >> 2) I checked the status using the following command, >> >> # systemctl status httpd -l >> ? httpd.service - The Apache HTTP Server >> Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; >> vendor >> preset: disabled) >> Active: active (running) since Wed 2018-03-07 23:46:32 IST; 1min 55s >> ago >> Docs: man:httpd(8) >> man:apachectl(8) >> Process: 4351 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, >> status=0/SUCCESS) >> Main PID: 4359 (httpd) >> Status: "Total requests: 264; Current requests/sec: 0.1; Current >> traffic: 204 B/sec" >> CGroup: /system.slice/httpd.service >> ??4359 /usr/sbin/httpd -DFOREGROUND >> ??4360 /usr/sbin/httpd -DFOREGROUND >> ??4362 /usr/sbin/httpd -DFOREGROUND >> ??5100 /usr/sbin/httpd -DFOREGROUND >> ??5386 /usr/sbin/httpd -DFOREGROUND >> ??5415 /usr/sbin/httpd -DFOREGROUND >> ??5416 /usr/sbin/httpd -DFOREGROUND >> >> 3) Since the httpd is pointing to the path : >> /usr/lib/systemd/system/httpd.service >> >> vi /usr/lib/systemd/system/httpd.service >> >> [Unit] >> Description=The Apache HTTP Server >> After=network.target remote-fs.target nss-lookup.target >> Documentation=man:httpd(8) >> Documentation=man:apachectl(8) >> >> [Service] >> Type=notify >> EnvironmentFile=/etc/sysconfig/httpd >> ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND >> ExecReload=/usr/sbin/httpd $OPTIONS -k graceful >> ExecStop=/bin/kill -WINCH ${MAINPID} >> # We want systemd to give httpd some time to finish gracefully, but still >> want >> # it to kill httpd after TimeoutStopSec if something went wrong during the >> # graceful stop. Normally, Systemd sends SIGTERM signal right after the >> # ExecStop, which would kill httpd. We are sending useless SIGCONT here to >> give >> # httpd time to finish. >> KillSignal=SIGCONT >> PrivateTmp=true >> >> [Install] >> WantedBy=multi-user.target >> >> >> 4) As per the above command I found the env file is available >> '/etc/sysconfig/httpd' >> >> vi /etc/sysconfig/httpd >> >> # >> # This file can be used to set additional environment variables for >> # the httpd process, or pass additional options to the httpd >> # executable. >> # >> # Note: With previous versions of httpd, the MPM could be changed by >> # editing an "HTTPD" variable here. With the current version, that >> # variable is now ignored. The MPM is a loadable module, and the >> # choice of MPM can be changed by editing the configuration file >> /etc/httpd/conf.modules.d/00-mpm.conf >> # >> >> # >> # To pass additional options (for instance, -D definitions) to the >> # httpd binary at startup, set OPTIONS here. >> # >> #OPTIONS= >> >> # >> # This setting ensures the httpd process is started in the "C" locale >> # by default. (Some modules will not behave correctly if >> # case-sensitive string comparisons are performed in a different >> # locale.) 
>> # >> LANG=C >> >> >> 5) As per the above command, I found that the conf fileis available in the >> path : /etc/httpd/conf.modules.d/00-mpm.conf >> >> vi /etc/httpd/conf.modules.d/00-mpm.conf >> >> # Select the MPM module which should be used by uncommenting exactly >> # one of the following LoadModule lines: >> >> # prefork MPM: Implements a non-threaded, pre-forking web server >> # See: http://httpd.apache.org/docs/2.4/mod/prefork.html >> #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so >> >> # worker MPM: Multi-Processing Module implementing a hybrid >> # multi-threaded multi-process web server >> # See: http://httpd.apache.org/docs/2.4/mod/worker.html >> # >> LoadModule mpm_worker_module modules/mod_mpm_worker.so >> >> # event MPM: A variant of the worker MPM with the goal of consuming >> # threads only for connections with active processing >> # See: http://httpd.apache.org/docs/2.4/mod/event.html >> # >> #LoadModule mpm_event_module modules/mod_mpm_event.so >> >> >> ServerLimit 1000 >> MaxRequestWorkers 1000 >> >> >> >> >> As per your comment, I enabled the 'LoadModule mpm_worker_module >> modules/mod_mpm_worker.so' with the ServerLimit and MaxRequestWorkers as >> 1000 still I am facing the issue for the following command in apache >> benchmark test. >> >> Completed 100 requests >> SSL handshake failed (5). >> SSL handshake failed (5). >> SSL handshake failed (5). >> SSL handshake failed (5). >> SSL handshake failed (5). >> SSL handshake failed (5). >> >> >> NOTE : It always scales when I have concurrent request below 400 >> >> What is wrong in this apache configuration, why SSL handshake is failing >> for concurrent request above 400 ? >> >> Thanks, >> Hari >> >> >> >> >> >> >> On Wed, Mar 7, 2018 at 9:20 PM, Juan Hern?ndez >> wrote: >> >> It means that with the default configuration the Apache web server can't >>> serve more than 256 concurrent connections. This applies to any >>> application >>> that uses Apache as the web frontend, not just to oVirt. If you want to >>> change that you have to change the MaxRequestWorkers and ServerLimit >>> parameters, as explained here: >>> >>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>> axrequestworkers >>> >>> So, go to your oVirt engine machine and create a >>> /etc/httpd/conf.d/my.conf >>> file with this content: >>> >>> MaxRequestWorkers 1000 >>> ServerLimit 1000 >>> >>> Then restart the Apache server: >>> >>> # systemctl restart httpd >>> >>> Then your web server should be able to handle 1000 concurrent requests, >>> and you will probably start to find other limits, like the amount of >>> memory >>> and CPU that those 1000 Apache child processes will consume, the number >>> of >>> threads in the JBoss application server, the number of connections to the >>> database, etc. >>> >>> Let me insist a bit that if you base your benchmark solely on the number >>> of concurrent requests or connections that the server can handle you may >>> end up with meaningless results, as a real world application can/should >>> use >>> the server much better than that. >>> >>> On 03/07/2018 04:33 PM, Hari Prasanth Loganathan wrote: >>> >>> With the default configuration of the web server it is impossible to >>>> handle >>>> more than 256 *connections* simultaneously. 
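A simple way to watch this limit being approached during a run, on the
engine machine (a sketch, assuming the ss tool from iproute2 is available;
the count includes one header line):

# watch -n1 'ss -tan state established "( sport = :443 )" | wc -l'

With the default prefork configuration the established-connection count
should hover near 256 (plus a few queued in the accept backlog) while the
extra requests fail.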
I guess that "ab" is >>>> opening a >>>> connection for each concurrent request, so when you reach request 257 >>>> the >>>> web server will just reject the connection, there is nothing that the >>>> JBoss >>>> can do about it; you have to increase the number of connections >>>> supported >>>> by the web server. >>>> >>>> *So Does it mean that oVirt cannot serve more than 257 request? * >>>> >>>> My question is, If its possible How to scale this and what is the >>>> configuration we need to change? >>>> >>>> Also, we are taking a benchmark in using oVirt, So I need to find the >>>> maximum possible oVirt request. So please let me know the configuration >>>> tuning for oVirt to achieve maximum concurrent request. >>>> >>>> Thanks, >>>> Hari >>>> >>>> On Wed, Mar 7, 2018 at 7:25 PM, Juan Hern?ndez >>>> wrote: >>>> >>>> With the default configuration of the web server it is impossible to >>>> >>>>> handle more than 256 *connections* simultaneously. I guess that "ab" is >>>>> opening a connection for each concurrent request, so when you reach >>>>> request >>>>> 257 the web server will just reject the connection, there is nothing >>>>> that >>>>> the JBoss can do about it; you have to increase the number of >>>>> connections >>>>> supported by the web server. >>>>> >>>>> Or else you may want to re-consider why you want to use 1000 >>>>> simultaneous >>>>> connections. It may be OK for a performance test, but there are better >>>>> ways >>>>> to squeeze performance. For example, you could consider using HTTP >>>>> pipelining, which is much more friendly for the server than so many >>>>> connections. This is what we use when we need to send a large number of >>>>> requests from other systems. There are examples of how to do that with >>>>> the >>>>> Python and Ruby SDKs here: >>>>> >>>>> Python: >>>>> >>>>> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/ >>>>> examples/asynchronous_inventory.py >>>>> >>>>> Ruby: >>>>> >>>>> https://github.com/oVirt/ovirt-engine-sdk-ruby/blob/master/ >>>>> sdk/examples/asynchronous_inventory.rb >>>>> >>>>> >>>>> On 03/07/2018 02:43 PM, Hari Prasanth Loganathan wrote: >>>>> >>>>> Hi Juan, >>>>> >>>>>> >>>>>> Thanks for the response. >>>>>> >>>>>> I agree web server can handle only limited number of concurrent >>>>>> requests. >>>>>> But Why it is failing with SSL handshake failure for few requests, >>>>>> Can't >>>>>> the JBOSS wait and serve the request? We can spare the delay but not >>>>>> with >>>>>> the request fails. So Is there a configuration in oVirt which can be >>>>>> tuned >>>>>> to achieve this? >>>>>> >>>>>> Thanks, >>>>>> Hari >>>>>> >>>>>> On Wed, Mar 7, 2018 at 7:05 PM, Juan Hern?ndez >>>>>> wrote: >>>>>> >>>>>> The first thing you will need to change for such a test is the number >>>>>> of >>>>>> >>>>>> simultaneous connections accepted by the Apache web server: by default >>>>>>> the >>>>>>> max is 256. See the Apache documentation here: >>>>>>> >>>>>>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>>>>>> axrequestworkers >>>>>>> >>>>>>> In addition I also suggest that you consider using the "worker" >>>>>>> multi-processing module instead of the "prefork", as it usually works >>>>>>> better when talking to a Java application server, because it re-uses >>>>>>> connections better. >>>>>>> >>>>>>> On 03/07/2018 02:20 PM, Hari Prasanth Loganathan wrote: >>>>>>> >>>>>>> Hi Team, >>>>>>> >>>>>>> >>>>>>>> *Description of problem:* >>>>>>>> >>>>>>>> I am trying to achieve 1000 concurrent request to oVirt. 
What are >>>>>>>> the >>>>>>>> tunable parameters to achieve this? >>>>>>>> >>>>>>>> I tried to perform the benchmarking for ovirt engine using Apache >>>>>>>> benchmark >>>>>>>> using the same SSO token. >>>>>>>> >>>>>>>> ab -n 1000 -c 500 -k -H "accept: application/json" -H >>>>>>>> "Authorization: >>>>>>>> Bearer SSOTOKEN" https://172.30.56.70/ovirt-engine/ >>>>>>>> >>>>>>> b-9ff1-076fc07ebf50/statistics> >>>>>>>> >>>>>>>> When the number of concurrent request is 500, we are getting more >>>>>>>> than >>>>>>>> 100 >>>>>>>> failures with the following error, >>>>>>>> >>>>>>>> SSL read failed (1) - closing connection >>>>>>>> 139620982339352:error: >>>>>>>> >>>>>>>> NOTE: It is scaling for concurrent request below 500. >>>>>>>> >>>>>>>> I used the profiler to get the memory and CPU and it seems very >>>>>>>> less, >>>>>>>> >>>>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM >>>>>>>> TIME+ >>>>>>>> COMMAND >>>>>>>> 30413 ovirt 20 0 4226664 882396 6776 S 126.0 23.0 27:48.53 >>>>>>>> java >>>>>>>> >>>>>>>> Configuration of the machine in which Ovirt is deployed : >>>>>>>> >>>>>>>> RAM - 4GB, >>>>>>>> Hard disk - 100GB, >>>>>>>> core processor - 2, >>>>>>>> OS - Cent7.x. >>>>>>>> >>>>>>>> In which 2GB is allocated to oVirt. >>>>>>>> >>>>>>>> >>>>>>>> Version-Release number of selected component (if applicable): >>>>>>>> >>>>>>>> 4.2.2 >>>>>>>> >>>>>>>> >>>>>>>> How reproducible: >>>>>>>> >>>>>>>> If the number of concurrent requests are above 500, we are easily >>>>>>>> facing >>>>>>>> this issue. >>>>>>>> >>>>>>>> >>>>>>>> *Actual results:* >>>>>>>> >>>>>>>> SSL read failed (1) - closing connection >>>>>>>> 139620982339352:error: >>>>>>>> >>>>>>>> *Expected results:* >>>>>>>> >>>>>>>> Request success. >>>>>>>> >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Hari >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> Users mailing list >>>>>>>> Users at ovirt.org >>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -- DISCLAIMER The information in this e-mail is confidential and may be subject to legal privilege. It is intended solely for the addressee. Access to this e-mail by anyone else is unauthorized. If you have received this communication in error, please address with the subject heading "Received in error," send to it at msystechnologies.com, then delete the e-mail and destroy any copies of it. If you are not the intended recipient, any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it, is prohibited and may be unlawful. The views, opinions, conclusions and other information expressed in this electronic mail and any attachments are not given or endorsed by the company unless otherwise indicated by an authorized representative independent of this message. MSys cannot guarantee that e-mail communications are secure or error-free, as information could be intercepted, corrupted, amended, lost, destroyed, arrive late or incomplete, or contain viruses, though all reasonable precautions have been taken to ensure no viruses are present in this e-mail. As our company cannot accept responsibility for any loss or damage arising from the use of this e-mail or attachments we recommend that you subject these to your virus checking procedures prior to use -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
From maozza at gmail.com  Thu Mar  8 11:16:46 2018
From: maozza at gmail.com (maoz zadok)
Date: Thu, 8 Mar 2018 13:16:46 +0200
Subject: [ovirt-users] Fibre Channel Protocol (FCP)
Message-ID: 

Hello All,
I connected an existing data center with an NFS storage domain to a new
Fibre Channel storage domain according to the following guide, and I have
three questions.
https://www.ovirt.org/documentation/install-guide/chap-Configuring_Storage/

1. why does the storage domain ask me to "Use Host"?

   - does it mean that if this host is down there is no access to the FC
   storage, or is the next available host elected to manage the FC storage?
   - does it mean that all the traffic to my storage is routed via the
   selected host?
   - what is the best practice? do I need to split the workload to other
   hosts as well by letting each host manage one small LUN, instead of
   letting one host manage all LUNs or one big LUN?

2. is it possible to set the newly created Fibre Channel storage (data
domain) as master?

3. can I easily migrate all VM disks to the FC storage instead of the NFS?

Thank you!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From lorenzetto.luca at gmail.com  Thu Mar  8 11:23:24 2018
From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto)
Date: Thu, 8 Mar 2018 12:23:24 +0100
Subject: [ovirt-users] Fibre Channel Protocol (FCP)
In-Reply-To: 
References: 
Message-ID: 

Hello Maoz,

I'm serving more than 300 VMs in an FC setup (and growing).

On Thu, Mar 8, 2018 at 12:16 PM, maoz zadok wrote:
> Hello All,
> I connected an existing data center with an NFS storage domain to a new
> Fibre Channel storage domain according to the following guide, and I have
> three questions.
> https://www.ovirt.org/documentation/install-guide/chap-Configuring_Storage/
>
> 1. why does the storage domain ask me to "Use Host"?

Because there are some operations (lvm initialization & so on) to be
done while setting up the volumes on FC storage. These operations will
be performed by the given host.

> - does it mean that if this host is down there is no access to the FC
> storage, or is the next available host elected to manage the FC storage?

No, storage is mapped by all the hosts and accessed directly.

> - does it mean that all the traffic to my storage is routed via the
> selected host?

Absolutely not!

> - what is the best practice? do I need to split the workload to other
> hosts as well by letting each host manage one small LUN, instead of
> letting one host manage all LUNs or one big LUN?

I suppose that your question is related to the way that FC is
managed. Since every host sees all the drives, the question is not
really applicable.

> 2. is it possible to set the newly created Fibre Channel storage (data
> domain) as master?

IIRC if you set to maintenance the existing master, another one will
be selected.

> 3. can I easily migrate all VM disks to the FC storage instead of the NFS?

Sure, an NFS storage domain or an FC storage domain doesn't make a
difference in terms of interaction. You can move disks from one to the
other without issues.

Luca

--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è
che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From jhernand at redhat.com Thu Mar 8 11:26:25 2018 From: jhernand at redhat.com (=?UTF-8?Q?Juan_Hern=c3=a1ndez?=) Date: Thu, 8 Mar 2018 12:26:25 +0100 Subject: [ovirt-users] Tunable parameters in ovirt engine In-Reply-To: References: <786be990-4ea0-613b-d81b-639f70a11398@redhat.com> <109ffb44-89ac-f190-29a4-e937157cce81@redhat.com> <6cb6fbc2-51e3-9c89-23dc-e36a7aefd8b3@redhat.com> <5105aa87-fea6-eb2e-5ae8-ef47cbbe6677@redhat.com> Message-ID: But other than those SSL error messages, are the connections really failing? Can you share the results reported by "ab"? On 03/08/2018 12:16 PM, Hari Prasanth Loganathan wrote: > No Juan, It is not working with any benchmark / application tool. It fails > with the same error SSL handshake failed (5). > > Could you let me know the configuration of Apache web server is correct? > > Thanks, > Hari > > On Thu, Mar 8, 2018 at 1:08 AM, Juan Hern?ndez wrote: > >> If you are still having problems I am inclined to think that it is a >> client issue. For example, I'd try to remove the "-k" option from the "ab" >> command. If you use keep alive the server may decide anyhow to close the >> connection after certain number of requests, even if the client asks to >> keep it alive. Some clients don't handle that perfectly, "ab" may have that >> problem. If that makes the SSL error messages disappear then I think you >> can safely ignore them, and restore the "-k" option, if you want. >> >> On 03/07/2018 07:30 PM, Hari Prasanth Loganathan wrote: >> >>> Thanks Juan for your response. Appreciate it. >>> But for some reason still, I am facing the same SSL handshake failed (5). >>> Could you please check this configuration and let me know the issue in my >>> ovirt engine environment. >>> >>> *Configuration of Apache server:* >>> >>> >>> 1) httpd version, >>> >>> # httpd -v >>> Server version: Apache/2.4.6 (CentOS) >>> Server built: Oct 19 2017 20:39:16 >>> >>> 2) I checked the status using the following command, >>> >>> # systemctl status httpd -l >>> ? 
httpd.service - The Apache HTTP Server >>> Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; >>> vendor >>> preset: disabled) >>> Active: active (running) since Wed 2018-03-07 23:46:32 IST; 1min 55s >>> ago >>> Docs: man:httpd(8) >>> man:apachectl(8) >>> Process: 4351 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, >>> status=0/SUCCESS) >>> Main PID: 4359 (httpd) >>> Status: "Total requests: 264; Current requests/sec: 0.1; Current >>> traffic: 204 B/sec" >>> CGroup: /system.slice/httpd.service >>> ??4359 /usr/sbin/httpd -DFOREGROUND >>> ??4360 /usr/sbin/httpd -DFOREGROUND >>> ??4362 /usr/sbin/httpd -DFOREGROUND >>> ??5100 /usr/sbin/httpd -DFOREGROUND >>> ??5386 /usr/sbin/httpd -DFOREGROUND >>> ??5415 /usr/sbin/httpd -DFOREGROUND >>> ??5416 /usr/sbin/httpd -DFOREGROUND >>> >>> 3) Since the httpd is pointing to the path : >>> /usr/lib/systemd/system/httpd.service >>> >>> vi /usr/lib/systemd/system/httpd.service >>> >>> [Unit] >>> Description=The Apache HTTP Server >>> After=network.target remote-fs.target nss-lookup.target >>> Documentation=man:httpd(8) >>> Documentation=man:apachectl(8) >>> >>> [Service] >>> Type=notify >>> EnvironmentFile=/etc/sysconfig/httpd >>> ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND >>> ExecReload=/usr/sbin/httpd $OPTIONS -k graceful >>> ExecStop=/bin/kill -WINCH ${MAINPID} >>> # We want systemd to give httpd some time to finish gracefully, but still >>> want >>> # it to kill httpd after TimeoutStopSec if something went wrong during the >>> # graceful stop. Normally, Systemd sends SIGTERM signal right after the >>> # ExecStop, which would kill httpd. We are sending useless SIGCONT here to >>> give >>> # httpd time to finish. >>> KillSignal=SIGCONT >>> PrivateTmp=true >>> >>> [Install] >>> WantedBy=multi-user.target >>> >>> >>> 4) As per the above command I found the env file is available >>> '/etc/sysconfig/httpd' >>> >>> vi /etc/sysconfig/httpd >>> >>> # >>> # This file can be used to set additional environment variables for >>> # the httpd process, or pass additional options to the httpd >>> # executable. >>> # >>> # Note: With previous versions of httpd, the MPM could be changed by >>> # editing an "HTTPD" variable here. With the current version, that >>> # variable is now ignored. The MPM is a loadable module, and the >>> # choice of MPM can be changed by editing the configuration file >>> /etc/httpd/conf.modules.d/00-mpm.conf >>> # >>> >>> # >>> # To pass additional options (for instance, -D definitions) to the >>> # httpd binary at startup, set OPTIONS here. >>> # >>> #OPTIONS= >>> >>> # >>> # This setting ensures the httpd process is started in the "C" locale >>> # by default. (Some modules will not behave correctly if >>> # case-sensitive string comparisons are performed in a different >>> # locale.) 
>>> # >>> LANG=C >>> >>> >>> 5) As per the above command, I found that the conf fileis available in the >>> path : /etc/httpd/conf.modules.d/00-mpm.conf >>> >>> vi /etc/httpd/conf.modules.d/00-mpm.conf >>> >>> # Select the MPM module which should be used by uncommenting exactly >>> # one of the following LoadModule lines: >>> >>> # prefork MPM: Implements a non-threaded, pre-forking web server >>> # See: http://httpd.apache.org/docs/2.4/mod/prefork.html >>> #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so >>> >>> # worker MPM: Multi-Processing Module implementing a hybrid >>> # multi-threaded multi-process web server >>> # See: http://httpd.apache.org/docs/2.4/mod/worker.html >>> # >>> LoadModule mpm_worker_module modules/mod_mpm_worker.so >>> >>> # event MPM: A variant of the worker MPM with the goal of consuming >>> # threads only for connections with active processing >>> # See: http://httpd.apache.org/docs/2.4/mod/event.html >>> # >>> #LoadModule mpm_event_module modules/mod_mpm_event.so >>> >>> >>> ServerLimit 1000 >>> MaxRequestWorkers 1000 >>> >>> >>> >>> >>> As per your comment, I enabled the 'LoadModule mpm_worker_module >>> modules/mod_mpm_worker.so' with the ServerLimit and MaxRequestWorkers as >>> 1000 still I am facing the issue for the following command in apache >>> benchmark test. >>> >>> Completed 100 requests >>> SSL handshake failed (5). >>> SSL handshake failed (5). >>> SSL handshake failed (5). >>> SSL handshake failed (5). >>> SSL handshake failed (5). >>> SSL handshake failed (5). >>> >>> >>> NOTE : It always scales when I have concurrent request below 400 >>> >>> What is wrong in this apache configuration, why SSL handshake is failing >>> for concurrent request above 400 ? >>> >>> Thanks, >>> Hari >>> >>> >>> >>> >>> >>> >>> On Wed, Mar 7, 2018 at 9:20 PM, Juan Hern?ndez >>> wrote: >>> >>> It means that with the default configuration the Apache web server can't >>>> serve more than 256 concurrent connections. This applies to any >>>> application >>>> that uses Apache as the web frontend, not just to oVirt. If you want to >>>> change that you have to change the MaxRequestWorkers and ServerLimit >>>> parameters, as explained here: >>>> >>>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>>> axrequestworkers >>>> >>>> So, go to your oVirt engine machine and create a >>>> /etc/httpd/conf.d/my.conf >>>> file with this content: >>>> >>>> MaxRequestWorkers 1000 >>>> ServerLimit 1000 >>>> >>>> Then restart the Apache server: >>>> >>>> # systemctl restart httpd >>>> >>>> Then your web server should be able to handle 1000 concurrent requests, >>>> and you will probably start to find other limits, like the amount of >>>> memory >>>> and CPU that those 1000 Apache child processes will consume, the number >>>> of >>>> threads in the JBoss application server, the number of connections to the >>>> database, etc. >>>> >>>> Let me insist a bit that if you base your benchmark solely on the number >>>> of concurrent requests or connections that the server can handle you may >>>> end up with meaningless results, as a real world application can/should >>>> use >>>> the server much better than that. >>>> >>>> On 03/07/2018 04:33 PM, Hari Prasanth Loganathan wrote: >>>> >>>> With the default configuration of the web server it is impossible to >>>>> handle >>>>> more than 256 *connections* simultaneously. 
I guess that "ab" is >>>>> opening a >>>>> connection for each concurrent request, so when you reach request 257 >>>>> the >>>>> web server will just reject the connection, there is nothing that the >>>>> JBoss >>>>> can do about it; you have to increase the number of connections >>>>> supported >>>>> by the web server. >>>>> >>>>> *So Does it mean that oVirt cannot serve more than 257 request? * >>>>> >>>>> My question is, If its possible How to scale this and what is the >>>>> configuration we need to change? >>>>> >>>>> Also, we are taking a benchmark in using oVirt, So I need to find the >>>>> maximum possible oVirt request. So please let me know the configuration >>>>> tuning for oVirt to achieve maximum concurrent request. >>>>> >>>>> Thanks, >>>>> Hari >>>>> >>>>> On Wed, Mar 7, 2018 at 7:25 PM, Juan Hern?ndez >>>>> wrote: >>>>> >>>>> With the default configuration of the web server it is impossible to >>>>> >>>>>> handle more than 256 *connections* simultaneously. I guess that "ab" is >>>>>> opening a connection for each concurrent request, so when you reach >>>>>> request >>>>>> 257 the web server will just reject the connection, there is nothing >>>>>> that >>>>>> the JBoss can do about it; you have to increase the number of >>>>>> connections >>>>>> supported by the web server. >>>>>> >>>>>> Or else you may want to re-consider why you want to use 1000 >>>>>> simultaneous >>>>>> connections. It may be OK for a performance test, but there are better >>>>>> ways >>>>>> to squeeze performance. For example, you could consider using HTTP >>>>>> pipelining, which is much more friendly for the server than so many >>>>>> connections. This is what we use when we need to send a large number of >>>>>> requests from other systems. There are examples of how to do that with >>>>>> the >>>>>> Python and Ruby SDKs here: >>>>>> >>>>>> Python: >>>>>> >>>>>> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/ >>>>>> examples/asynchronous_inventory.py >>>>>> >>>>>> Ruby: >>>>>> >>>>>> https://github.com/oVirt/ovirt-engine-sdk-ruby/blob/master/ >>>>>> sdk/examples/asynchronous_inventory.rb >>>>>> >>>>>> >>>>>> On 03/07/2018 02:43 PM, Hari Prasanth Loganathan wrote: >>>>>> >>>>>> Hi Juan, >>>>>> >>>>>>> >>>>>>> Thanks for the response. >>>>>>> >>>>>>> I agree web server can handle only limited number of concurrent >>>>>>> requests. >>>>>>> But Why it is failing with SSL handshake failure for few requests, >>>>>>> Can't >>>>>>> the JBOSS wait and serve the request? We can spare the delay but not >>>>>>> with >>>>>>> the request fails. So Is there a configuration in oVirt which can be >>>>>>> tuned >>>>>>> to achieve this? >>>>>>> >>>>>>> Thanks, >>>>>>> Hari >>>>>>> >>>>>>> On Wed, Mar 7, 2018 at 7:05 PM, Juan Hern?ndez >>>>>>> wrote: >>>>>>> >>>>>>> The first thing you will need to change for such a test is the number >>>>>>> of >>>>>>> >>>>>>> simultaneous connections accepted by the Apache web server: by default >>>>>>>> the >>>>>>>> max is 256. See the Apache documentation here: >>>>>>>> >>>>>>>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>>>>>>> axrequestworkers >>>>>>>> >>>>>>>> In addition I also suggest that you consider using the "worker" >>>>>>>> multi-processing module instead of the "prefork", as it usually works >>>>>>>> better when talking to a Java application server, because it re-uses >>>>>>>> connections better. 
>>>>>>>> >>>>>>>> On 03/07/2018 02:20 PM, Hari Prasanth Loganathan wrote: >>>>>>>> >>>>>>>> Hi Team, >>>>>>>> >>>>>>>> >>>>>>>>> *Description of problem:* >>>>>>>>> >>>>>>>>> I am trying to achieve 1000 concurrent request to oVirt. What are >>>>>>>>> the >>>>>>>>> tunable parameters to achieve this? >>>>>>>>> >>>>>>>>> I tried to perform the benchmarking for ovirt engine using Apache >>>>>>>>> benchmark >>>>>>>>> using the same SSO token. >>>>>>>>> >>>>>>>>> ab -n 1000 -c 500 -k -H "accept: application/json" -H >>>>>>>>> "Authorization: >>>>>>>>> Bearer SSOTOKEN" https://172.30.56.70/ovirt-engine/ >>>>>>>>> >>>>>>>> b-9ff1-076fc07ebf50/statistics> >>>>>>>>> >>>>>>>>> When the number of concurrent request is 500, we are getting more >>>>>>>>> than >>>>>>>>> 100 >>>>>>>>> failures with the following error, >>>>>>>>> >>>>>>>>> SSL read failed (1) - closing connection >>>>>>>>> 139620982339352:error: >>>>>>>>> >>>>>>>>> NOTE: It is scaling for concurrent request below 500. >>>>>>>>> >>>>>>>>> I used the profiler to get the memory and CPU and it seems very >>>>>>>>> less, >>>>>>>>> >>>>>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM >>>>>>>>> TIME+ >>>>>>>>> COMMAND >>>>>>>>> 30413 ovirt 20 0 4226664 882396 6776 S 126.0 23.0 27:48.53 >>>>>>>>> java >>>>>>>>> >>>>>>>>> Configuration of the machine in which Ovirt is deployed : >>>>>>>>> >>>>>>>>> RAM - 4GB, >>>>>>>>> Hard disk - 100GB, >>>>>>>>> core processor - 2, >>>>>>>>> OS - Cent7.x. >>>>>>>>> >>>>>>>>> In which 2GB is allocated to oVirt. >>>>>>>>> >>>>>>>>> >>>>>>>>> Version-Release number of selected component (if applicable): >>>>>>>>> >>>>>>>>> 4.2.2 >>>>>>>>> >>>>>>>>> >>>>>>>>> How reproducible: >>>>>>>>> >>>>>>>>> If the number of concurrent requests are above 500, we are easily >>>>>>>>> facing >>>>>>>>> this issue. >>>>>>>>> >>>>>>>>> >>>>>>>>> *Actual results:* >>>>>>>>> >>>>>>>>> SSL read failed (1) - closing connection >>>>>>>>> 139620982339352:error: >>>>>>>>> >>>>>>>>> *Expected results:* >>>>>>>>> >>>>>>>>> Request success. >>>>>>>>> >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Hari >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> Users mailing list >>>>>>>>> Users at ovirt.org >>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > From maozza at gmail.com Thu Mar 8 11:34:25 2018 From: maozza at gmail.com (maoz zadok) Date: Thu, 8 Mar 2018 13:34:25 +0200 Subject: [ovirt-users] Fibre Channel Protocol (FCP) In-Reply-To: References: Message-ID: Luca many thanks! that was fast and good! one last thing, how do I migrate to the new data domain? I use oVirt version 4.2.0 Thanks again! On Thu, Mar 8, 2018 at 1:23 PM, Luca 'remix_tj' Lorenzetto < lorenzetto.luca at gmail.com> wrote: > Hello Maoz, > > i'm serving more than 300 vms in fc setup (and growing). > > On Thu, Mar 8, 2018 at 12:16 PM, maoz zadok wrote: > > Hello All, > > I connected existing data center with NFS storage domain to a new > > Fiber-Channel storage domain according to the following guide, and I have > > three questions. > > https://www.ovirt.org/documentation/install-guide/ > chap-Configuring_Storage/ > > > > 1. why does the storage domain asks me to "Use Host"? > > Because there are some operations (lvm initialization & so on) to be > done while setting up the volumes on FC storage. 
These operations will > be performed by the given host > > > > > does it mean that if this host is down there is no access to the FC > storage > > or the next available host elected to manage the FC storage? > > No, storage is mapped by all the hosts and accessed directly > > > does it mean that all the traffic to my storage is routed via the > selected > > host? > > Absolutely no! > > > what is the best practice, do I need to split the workload to other > hosts as > > well by letting each host manage one small LUN instead of letting one > host > > manage all LUNs or one big LUN? > > I supposte that your question is related to the way that FC is > managed. Since every host sees all the drives, there question is no > more appropriate. > > > > > > 2. is it possible to set the newly created fiber-channel storage(data > > domain) as master? > > IIRC if you set to maintenance the existing master, another one will > be selected. > > > > > 3. can I easily migrate all VM disks to the FC storage instead of the > NFS? > > > > Sure, nfs storage domain or fc storage domain doesn't make difference > in terms of interaction. You can move disks from one to the other > without issues. > > Luca > > -- > "E' assurdo impiegare gli uomini di intelligenza eccellente per fare > calcoli che potrebbero essere affidati a chiunque se si usassero delle > macchine" > Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) > > "Internet ? la pi? grande biblioteca del mondo. > Ma il problema ? che i libri sono tutti sparsi sul pavimento" > John Allen Paulos, Matematico (1945-vivente) > > Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , < > lorenzetto.luca at gmail.com> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hariprasanth.l at msystechnologies.com Thu Mar 8 11:34:34 2018 From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan) Date: Thu, 8 Mar 2018 17:04:34 +0530 Subject: [ovirt-users] Tunable parameters in ovirt engine In-Reply-To: References: <786be990-4ea0-613b-d81b-639f70a11398@redhat.com> <109ffb44-89ac-f190-29a4-e937157cce81@redhat.com> <6cb6fbc2-51e3-9c89-23dc-e36a7aefd8b3@redhat.com> <5105aa87-fea6-eb2e-5ae8-ef47cbbe6677@redhat.com> Message-ID: This is the only error message we received from ab. I googled it and found that it is due to the connection drop. So It would be Great, If you could check my Apache server configuration I shared in the thread and let me know your thoughts on this. Thanks, Hari On Thu, Mar 8, 2018 at 4:56 PM, Juan Hern?ndez wrote: > But other than those SSL error messages, are the connections really > failing? Can you share the results reported by "ab"? > > > On 03/08/2018 12:16 PM, Hari Prasanth Loganathan wrote: > >> No Juan, It is not working with any benchmark / application tool. It fails >> with the same error SSL handshake failed (5). >> >> Could you let me know the configuration of Apache web server is correct? >> >> Thanks, >> Hari >> >> On Thu, Mar 8, 2018 at 1:08 AM, Juan Hern?ndez >> wrote: >> >> If you are still having problems I am inclined to think that it is a >>> client issue. For example, I'd try to remove the "-k" option from the >>> "ab" >>> command. If you use keep alive the server may decide anyhow to close the >>> connection after certain number of requests, even if the client asks to >>> keep it alive. Some clients don't handle that perfectly, "ab" may have >>> that >>> problem. 
If that makes the SSL error messages disappear then I think you >>> can safely ignore them, and restore the "-k" option, if you want. >>> >>> On 03/07/2018 07:30 PM, Hari Prasanth Loganathan wrote: >>> >>> Thanks Juan for your response. Appreciate it. >>>> But for some reason still, I am facing the same SSL handshake failed >>>> (5). >>>> Could you please check this configuration and let me know the issue in >>>> my >>>> ovirt engine environment. >>>> >>>> *Configuration of Apache server:* >>>> >>>> >>>> 1) httpd version, >>>> >>>> # httpd -v >>>> Server version: Apache/2.4.6 (CentOS) >>>> Server built: Oct 19 2017 20:39:16 >>>> >>>> 2) I checked the status using the following command, >>>> >>>> # systemctl status httpd -l >>>> ? httpd.service - The Apache HTTP Server >>>> Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; >>>> vendor >>>> preset: disabled) >>>> Active: active (running) since Wed 2018-03-07 23:46:32 IST; 1min >>>> 55s >>>> ago >>>> Docs: man:httpd(8) >>>> man:apachectl(8) >>>> Process: 4351 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, >>>> status=0/SUCCESS) >>>> Main PID: 4359 (httpd) >>>> Status: "Total requests: 264; Current requests/sec: 0.1; Current >>>> traffic: 204 B/sec" >>>> CGroup: /system.slice/httpd.service >>>> ??4359 /usr/sbin/httpd -DFOREGROUND >>>> ??4360 /usr/sbin/httpd -DFOREGROUND >>>> ??4362 /usr/sbin/httpd -DFOREGROUND >>>> ??5100 /usr/sbin/httpd -DFOREGROUND >>>> ??5386 /usr/sbin/httpd -DFOREGROUND >>>> ??5415 /usr/sbin/httpd -DFOREGROUND >>>> ??5416 /usr/sbin/httpd -DFOREGROUND >>>> >>>> 3) Since the httpd is pointing to the path : >>>> /usr/lib/systemd/system/httpd.service >>>> >>>> vi /usr/lib/systemd/system/httpd.service >>>> >>>> [Unit] >>>> Description=The Apache HTTP Server >>>> After=network.target remote-fs.target nss-lookup.target >>>> Documentation=man:httpd(8) >>>> Documentation=man:apachectl(8) >>>> >>>> [Service] >>>> Type=notify >>>> EnvironmentFile=/etc/sysconfig/httpd >>>> ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND >>>> ExecReload=/usr/sbin/httpd $OPTIONS -k graceful >>>> ExecStop=/bin/kill -WINCH ${MAINPID} >>>> # We want systemd to give httpd some time to finish gracefully, but >>>> still >>>> want >>>> # it to kill httpd after TimeoutStopSec if something went wrong during >>>> the >>>> # graceful stop. Normally, Systemd sends SIGTERM signal right after the >>>> # ExecStop, which would kill httpd. We are sending useless SIGCONT here >>>> to >>>> give >>>> # httpd time to finish. >>>> KillSignal=SIGCONT >>>> PrivateTmp=true >>>> >>>> [Install] >>>> WantedBy=multi-user.target >>>> >>>> >>>> 4) As per the above command I found the env file is available >>>> '/etc/sysconfig/httpd' >>>> >>>> vi /etc/sysconfig/httpd >>>> >>>> # >>>> # This file can be used to set additional environment variables for >>>> # the httpd process, or pass additional options to the httpd >>>> # executable. >>>> # >>>> # Note: With previous versions of httpd, the MPM could be changed by >>>> # editing an "HTTPD" variable here. With the current version, that >>>> # variable is now ignored. The MPM is a loadable module, and the >>>> # choice of MPM can be changed by editing the configuration file >>>> /etc/httpd/conf.modules.d/00-mpm.conf >>>> # >>>> >>>> # >>>> # To pass additional options (for instance, -D definitions) to the >>>> # httpd binary at startup, set OPTIONS here. >>>> # >>>> #OPTIONS= >>>> >>>> # >>>> # This setting ensures the httpd process is started in the "C" locale >>>> # by default. 
(Some modules will not behave correctly if >>>> # case-sensitive string comparisons are performed in a different >>>> # locale.) >>>> # >>>> LANG=C >>>> >>>> >>>> 5) As per the above command, I found that the conf file is available in >>>> the >>>> path: /etc/httpd/conf.modules.d/00-mpm.conf >>>> >>>> vi /etc/httpd/conf.modules.d/00-mpm.conf >>>> >>>> # Select the MPM module which should be used by uncommenting exactly >>>> # one of the following LoadModule lines: >>>> >>>> # prefork MPM: Implements a non-threaded, pre-forking web server >>>> # See: http://httpd.apache.org/docs/2.4/mod/prefork.html >>>> #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so >>>> >>>> # worker MPM: Multi-Processing Module implementing a hybrid >>>> # multi-threaded multi-process web server >>>> # See: http://httpd.apache.org/docs/2.4/mod/worker.html >>>> # >>>> LoadModule mpm_worker_module modules/mod_mpm_worker.so >>>> >>>> # event MPM: A variant of the worker MPM with the goal of consuming >>>> # threads only for connections with active processing >>>> # See: http://httpd.apache.org/docs/2.4/mod/event.html >>>> # >>>> #LoadModule mpm_event_module modules/mod_mpm_event.so >>>> >>>> >>>> ServerLimit 1000 >>>> MaxRequestWorkers 1000 >>>> >>>> >>>> >>>> >>>> As per your comment, I enabled the 'LoadModule mpm_worker_module >>>> modules/mod_mpm_worker.so' with the ServerLimit and MaxRequestWorkers as >>>> 1000, but I am still facing the issue with the following command in the Apache >>>> benchmark test. >>>> >>>> Completed 100 requests >>>> SSL handshake failed (5). >>>> SSL handshake failed (5). >>>> SSL handshake failed (5). >>>> SSL handshake failed (5). >>>> SSL handshake failed (5). >>>> SSL handshake failed (5). >>>> >>>> >>>> NOTE: It always scales when I have fewer than 400 concurrent requests. >>>> >>>> What is wrong in this Apache configuration? Why is the SSL handshake failing >>>> for more than 400 concurrent requests? >>>> >>>> Thanks, >>>> Hari >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Wed, Mar 7, 2018 at 9:20 PM, Juan Hernández >>>> wrote: >>>> >>>> It means that with the default configuration the Apache web server can't >>>> >>>>> serve more than 256 concurrent connections. This applies to any >>>>> application >>>>> that uses Apache as the web frontend, not just to oVirt. If you want to >>>>> change that you have to change the MaxRequestWorkers and ServerLimit >>>>> parameters, as explained here: >>>>> >>>>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>>>> axrequestworkers >>>>> >>>>> So, go to your oVirt engine machine and create a >>>>> /etc/httpd/conf.d/my.conf >>>>> file with this content: >>>>> >>>>> MaxRequestWorkers 1000 >>>>> ServerLimit 1000 >>>>> >>>>> Then restart the Apache server: >>>>> >>>>> # systemctl restart httpd >>>>> >>>>> Then your web server should be able to handle 1000 concurrent requests, >>>>> and you will probably start to find other limits, like the amount of >>>>> memory >>>>> and CPU that those 1000 Apache child processes will consume, the number >>>>> of >>>>> threads in the JBoss application server, the number of connections to >>>>> the >>>>> database, etc. >>>>> >>>>> Let me insist a bit that if you base your benchmark solely on the >>>>> number >>>>> of concurrent requests or connections that the server can handle you >>>>> may >>>>> end up with meaningless results, as a real world application can/should >>>>> use >>>>> the server much better than that. 
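
For reference, the HTTP pipelining approach mentioned further down in this thread (see the asynchronous inventory examples linked there) can be sketched with the Python SDK. This is only a minimal sketch, not the full asynchronous_inventory.py example: the engine URL, credentials and CA file are placeholders, and the connections/pipeline options assume a reasonably recent ovirtsdk4.

import ovirtsdk4 as sdk

# A single TCP/TLS connection carrying many pipelined requests is far
# friendlier to the web server than hundreds of parallel sockets:
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',                          # placeholder
    password='secret',                                  # placeholder
    ca_file='ca.pem',                                   # placeholder
    connections=1,  # underlying HTTP connections to open
    pipeline=20,    # requests kept in flight on each connection
)
vms_service = connection.system_service().vms_service()

# wait=False queues the request and returns a future instead of blocking,
# so all of the requests share the pipelined connection:
futures = [vms_service.list(wait=False) for _ in range(50)]
for future in futures:
    print(len(future.wait()))  # wait() returns the list of VMs

connection.close()
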
>>>>> >>>>> On 03/07/2018 04:33 PM, Hari Prasanth Loganathan wrote: >>>>> >>>>> With the default configuration of the web server it is impossible to >>>>> >>>>>> handle >>>>>> more than 256 *connections* simultaneously. I guess that "ab" is >>>>>> opening a >>>>>> connection for each concurrent request, so when you reach request 257 >>>>>> the >>>>>> web server will just reject the connection, there is nothing that the >>>>>> JBoss >>>>>> can do about it; you have to increase the number of connections >>>>>> supported >>>>>> by the web server. >>>>>> >>>>>> *So Does it mean that oVirt cannot serve more than 257 request? * >>>>>> >>>>>> My question is, If its possible How to scale this and what is the >>>>>> configuration we need to change? >>>>>> >>>>>> Also, we are taking a benchmark in using oVirt, So I need to find the >>>>>> maximum possible oVirt request. So please let me know the >>>>>> configuration >>>>>> tuning for oVirt to achieve maximum concurrent request. >>>>>> >>>>>> Thanks, >>>>>> Hari >>>>>> >>>>>> On Wed, Mar 7, 2018 at 7:25 PM, Juan Hern?ndez >>>>>> wrote: >>>>>> >>>>>> With the default configuration of the web server it is impossible to >>>>>> >>>>>> handle more than 256 *connections* simultaneously. I guess that "ab" >>>>>>> is >>>>>>> opening a connection for each concurrent request, so when you reach >>>>>>> request >>>>>>> 257 the web server will just reject the connection, there is nothing >>>>>>> that >>>>>>> the JBoss can do about it; you have to increase the number of >>>>>>> connections >>>>>>> supported by the web server. >>>>>>> >>>>>>> Or else you may want to re-consider why you want to use 1000 >>>>>>> simultaneous >>>>>>> connections. It may be OK for a performance test, but there are >>>>>>> better >>>>>>> ways >>>>>>> to squeeze performance. For example, you could consider using HTTP >>>>>>> pipelining, which is much more friendly for the server than so many >>>>>>> connections. This is what we use when we need to send a large number >>>>>>> of >>>>>>> requests from other systems. There are examples of how to do that >>>>>>> with >>>>>>> the >>>>>>> Python and Ruby SDKs here: >>>>>>> >>>>>>> Python: >>>>>>> >>>>>>> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/ >>>>>>> examples/asynchronous_inventory.py >>>>>>> >>>>>>> Ruby: >>>>>>> >>>>>>> https://github.com/oVirt/ovirt-engine-sdk-ruby/blob/master/ >>>>>>> sdk/examples/asynchronous_inventory.rb >>>>>>> >>>>>>> >>>>>>> On 03/07/2018 02:43 PM, Hari Prasanth Loganathan wrote: >>>>>>> >>>>>>> Hi Juan, >>>>>>> >>>>>>> >>>>>>>> Thanks for the response. >>>>>>>> >>>>>>>> I agree web server can handle only limited number of concurrent >>>>>>>> requests. >>>>>>>> But Why it is failing with SSL handshake failure for few requests, >>>>>>>> Can't >>>>>>>> the JBOSS wait and serve the request? We can spare the delay but not >>>>>>>> with >>>>>>>> the request fails. So Is there a configuration in oVirt which can be >>>>>>>> tuned >>>>>>>> to achieve this? >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Hari >>>>>>>> >>>>>>>> On Wed, Mar 7, 2018 at 7:05 PM, Juan Hern?ndez >>>>>>> > >>>>>>>> wrote: >>>>>>>> >>>>>>>> The first thing you will need to change for such a test is the >>>>>>>> number >>>>>>>> of >>>>>>>> >>>>>>>> simultaneous connections accepted by the Apache web server: by >>>>>>>> default >>>>>>>> >>>>>>>>> the >>>>>>>>> max is 256. 
See the Apache documentation here: >>>>>>>>> >>>>>>>>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>>>>>>>> axrequestworkers >>>>>>>>> >>>>>>>>> In addition I also suggest that you consider using the "worker" >>>>>>>>> multi-processing module instead of the "prefork", as it usually >>>>>>>>> works >>>>>>>>> better when talking to a Java application server, because it >>>>>>>>> re-uses >>>>>>>>> connections better. >>>>>>>>> >>>>>>>>> On 03/07/2018 02:20 PM, Hari Prasanth Loganathan wrote: >>>>>>>>> >>>>>>>>> Hi Team, >>>>>>>>> >>>>>>>>> >>>>>>>>> *Description of problem:* >>>>>>>>>> >>>>>>>>>> I am trying to achieve 1000 concurrent request to oVirt. What are >>>>>>>>>> the >>>>>>>>>> tunable parameters to achieve this? >>>>>>>>>> >>>>>>>>>> I tried to perform the benchmarking for ovirt engine using Apache >>>>>>>>>> benchmark >>>>>>>>>> using the same SSO token. >>>>>>>>>> >>>>>>>>>> ab -n 1000 -c 500 -k -H "accept: application/json" -H >>>>>>>>>> "Authorization: >>>>>>>>>> Bearer SSOTOKEN" https://172.30.56.70/ovirt-engine/ >>>>>>>>>> >>>>>>>>> b-9ff1-076fc07ebf50/statistics> >>>>>>>>>> >>>>>>>>>> When the number of concurrent request is 500, we are getting more >>>>>>>>>> than >>>>>>>>>> 100 >>>>>>>>>> failures with the following error, >>>>>>>>>> >>>>>>>>>> SSL read failed (1) - closing connection >>>>>>>>>> 139620982339352:error: >>>>>>>>>> >>>>>>>>>> NOTE: It is scaling for concurrent request below 500. >>>>>>>>>> >>>>>>>>>> I used the profiler to get the memory and CPU and it seems very >>>>>>>>>> less, >>>>>>>>>> >>>>>>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM >>>>>>>>>> TIME+ >>>>>>>>>> COMMAND >>>>>>>>>> 30413 ovirt 20 0 4226664 882396 6776 S 126.0 23.0 >>>>>>>>>> 27:48.53 >>>>>>>>>> java >>>>>>>>>> >>>>>>>>>> Configuration of the machine in which Ovirt is deployed : >>>>>>>>>> >>>>>>>>>> RAM - 4GB, >>>>>>>>>> Hard disk - 100GB, >>>>>>>>>> core processor - 2, >>>>>>>>>> OS - Cent7.x. >>>>>>>>>> >>>>>>>>>> In which 2GB is allocated to oVirt. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Version-Release number of selected component (if applicable): >>>>>>>>>> >>>>>>>>>> 4.2.2 >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> How reproducible: >>>>>>>>>> >>>>>>>>>> If the number of concurrent requests are above 500, we are easily >>>>>>>>>> facing >>>>>>>>>> this issue. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> *Actual results:* >>>>>>>>>> >>>>>>>>>> SSL read failed (1) - closing connection >>>>>>>>>> 139620982339352:error: >>>>>>>>>> >>>>>>>>>> *Expected results:* >>>>>>>>>> >>>>>>>>>> Request success. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Hari >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> _______________________________________________ >>>>>>>>>> Users mailing list >>>>>>>>>> Users at ovirt.org >>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -- DISCLAIMER The information in this e-mail is confidential and may be subject to legal privilege. It is intended solely for the addressee. Access to this e-mail by anyone else is unauthorized. If you have received this communication in error, please address with the subject heading "Received in error," send to it at msystechnologies.com, then delete the e-mail and destroy any copies of it. If you are not the intended recipient, any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it, is prohibited and may be unlawful. 
The views, opinions, conclusions and other information expressed in this electronic mail and any attachments are not given or endorsed by the company unless otherwise indicated by an authorized representative independent of this message. MSys cannot guarantee that e-mail communications are secure or error-free, as information could be intercepted, corrupted, amended, lost, destroyed, arrive late or incomplete, or contain viruses, though all reasonable precautions have been taken to ensure no viruses are present in this e-mail. As our company cannot accept responsibility for any loss or damage arising from the use of this e-mail or attachments we recommend that you subject these to your virus checking procedures prior to use -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzetto.luca at gmail.com Thu Mar 8 11:38:08 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Thu, 8 Mar 2018 12:38:08 +0100 Subject: [ovirt-users] Fibre Channel Protocol (FCP) In-Reply-To: References: Message-ID: On Thu, Mar 8, 2018 at 12:34 PM, maoz zadok wrote: > Luca many thanks! that was fast and good! one last thing, how do I migrate > to the new data domain? I use oVirt version 4.2.0 > Thanks again! > Hello, you select the VM disks you want to move and with a right click you select the menu entry "Move". You'll get a window asking for the destination storage domain. Once you have set the values, you confirm and the disk will be moved. https://www.ovirt.org/documentation/admin-guide/chap-Virtual_Machine_Disks/#moving-a-virtual-disk Luca -- "It is absurd to employ men of excellent intelligence to perform calculations that could be entrusted to anyone if machines were used" Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716) "The Internet is the biggest library in the world. But the problem is that the books are all scattered on the floor" John Allen Paulos, Mathematician (1945-present) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From maozza at gmail.com Thu Mar 8 11:44:29 2018 From: maozza at gmail.com (maoz zadok) Date: Thu, 8 Mar 2018 13:44:29 +0200 Subject: [ovirt-users] Fibre Channel Protocol (FCP) In-Reply-To: References: Message-ID: It works! Thank you! On Thu, Mar 8, 2018 at 1:38 PM, Luca 'remix_tj' Lorenzetto < lorenzetto.luca at gmail.com> wrote: > On Thu, Mar 8, 2018 at 12:34 PM, maoz zadok wrote: > > Luca many thanks! that was fast and good! one last thing, how do I > migrate > > to the new data domain? I use oVirt version 4.2.0 > > Thanks again! > > > > Hello, > > you select the VM disks you want to move and with a right click you > select the menu entry "Move". You'll get a window asking for the > destination storage domain. > > Once you have set the values, you confirm and the disk will be moved. > > https://www.ovirt.org/documentation/admin-guide/ > chap-Virtual_Machine_Disks/#moving-a-virtual-disk > > Luca > > > -- > "It is absurd to employ men of excellent intelligence to perform > calculations that could be entrusted to anyone if machines were used" > Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716) > > "The Internet is the biggest library in the world. > But the problem is that the books are all scattered on the floor" > John Allen Paulos, Mathematician (1945-present) > > Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , < > lorenzetto.luca at gmail.com> > -------------- next part -------------- An HTML attachment was scrubbed... URL:
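
The move that the UI performs above can also be scripted against the REST API. A minimal sketch with the Python SDK (ovirtsdk4); the engine URL, credentials, disk name 'mydisk01' and target storage domain 'fc_data' are placeholders, not names taken from this thread:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',                          # placeholder
    password='secret',                                  # placeholder
    ca_file='ca.pem',                                   # placeholder
)
disks_service = connection.system_service().disks_service()

# Find the disk by name and ask the engine to move it to the FC data
# domain; the engine then schedules the same copy the UI would:
disk = disks_service.list(search='name=mydisk01')[0]    # hypothetical disk
disks_service.disk_service(disk.id).move(
    storage_domain=types.StorageDomain(name='fc_data'), # hypothetical domain
)
connection.close()

As with the UI, moving the disk of a running VM becomes a live storage migration, so expect it to take a while on large disks.
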
From jhernand at redhat.com Thu Mar 8 11:45:13 2018 From: jhernand at redhat.com (Juan Hernández) Date: Thu, 8 Mar 2018 12:45:13 +0100 Subject: [ovirt-users] Tunable parameters in ovirt engine In-Reply-To: References: <786be990-4ea0-613b-d81b-639f70a11398@redhat.com> <109ffb44-89ac-f190-29a4-e937157cce81@redhat.com> <6cb6fbc2-51e3-9c89-23dc-e36a7aefd8b3@redhat.com> <5105aa87-fea6-eb2e-5ae8-ef47cbbe6677@redhat.com> Message-ID: <5a289de6-a80d-a4c9-e612-c2ea31e3e54a@redhat.com> I think the configuration is good. There may be some connections that the server is closing before the client expected, but that is normal, in my opinion. On 03/08/2018 12:34 PM, Hari Prasanth Loganathan wrote: > This is the only error message we received from ab. > > I googled it and found that it is due to a dropped connection. It would > be great if you could check the Apache server configuration I shared in this > thread and let me know your thoughts on it. > > Thanks, > Hari > > On Thu, Mar 8, 2018 at 4:56 PM, Juan Hernández wrote: > >> But other than those SSL error messages, are the connections really >> failing? Can you share the results reported by "ab"? >> >> >> On 03/08/2018 12:16 PM, Hari Prasanth Loganathan wrote: >> >>> No Juan, it is not working with any benchmark / application tool. It fails >>> with the same error: SSL handshake failed (5). >>> >>> Could you let me know whether the configuration of the Apache web server is correct? >>> >>> Thanks, >>> Hari >>> >>> On Thu, Mar 8, 2018 at 1:08 AM, Juan Hernández >>> wrote: >>> >>> If you are still having problems I am inclined to think that it is a >>>> client issue. For example, I'd try to remove the "-k" option from the >>>> "ab" >>>> command. If you use keep-alive the server may decide anyhow to close the >>>> connection after a certain number of requests, even if the client asks to >>>> keep it alive. Some clients don't handle that perfectly; "ab" may have >>>> that >>>> problem. If that makes the SSL error messages disappear then I think you >>>> can safely ignore them, and restore the "-k" option, if you want. >>>> >>>> On 03/07/2018 07:30 PM, Hari Prasanth Loganathan wrote: >>>> >>>> Thanks Juan for your response. Appreciate it. >>>>> But for some reason I am still facing the same SSL handshake failed >>>>> (5) error. >>>>> Could you please check this configuration and let me know the issue in >>>>> my >>>>> ovirt engine environment. >>>>> >>>>> *Configuration of Apache server:* >>>>> >>>>> >>>>> 1) httpd version, >>>>> >>>>> # httpd -v >>>>> Server version: Apache/2.4.6 (CentOS) >>>>> Server built: Oct 19 2017 20:39:16 >>>>> >>>>> 2) I checked the status using the following command, >>>>> >>>>> # systemctl status httpd -l >>>>> ● 
httpd.service - The Apache HTTP Server >>>>> Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; >>>>> vendor >>>>> preset: disabled) >>>>> Active: active (running) since Wed 2018-03-07 23:46:32 IST; 1min >>>>> 55s >>>>> ago >>>>> Docs: man:httpd(8) >>>>> man:apachectl(8) >>>>> Process: 4351 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, >>>>> status=0/SUCCESS) >>>>> Main PID: 4359 (httpd) >>>>> Status: "Total requests: 264; Current requests/sec: 0.1; Current >>>>> traffic: 204 B/sec" >>>>> CGroup: /system.slice/httpd.service >>>>> ??4359 /usr/sbin/httpd -DFOREGROUND >>>>> ??4360 /usr/sbin/httpd -DFOREGROUND >>>>> ??4362 /usr/sbin/httpd -DFOREGROUND >>>>> ??5100 /usr/sbin/httpd -DFOREGROUND >>>>> ??5386 /usr/sbin/httpd -DFOREGROUND >>>>> ??5415 /usr/sbin/httpd -DFOREGROUND >>>>> ??5416 /usr/sbin/httpd -DFOREGROUND >>>>> >>>>> 3) Since the httpd is pointing to the path : >>>>> /usr/lib/systemd/system/httpd.service >>>>> >>>>> vi /usr/lib/systemd/system/httpd.service >>>>> >>>>> [Unit] >>>>> Description=The Apache HTTP Server >>>>> After=network.target remote-fs.target nss-lookup.target >>>>> Documentation=man:httpd(8) >>>>> Documentation=man:apachectl(8) >>>>> >>>>> [Service] >>>>> Type=notify >>>>> EnvironmentFile=/etc/sysconfig/httpd >>>>> ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND >>>>> ExecReload=/usr/sbin/httpd $OPTIONS -k graceful >>>>> ExecStop=/bin/kill -WINCH ${MAINPID} >>>>> # We want systemd to give httpd some time to finish gracefully, but >>>>> still >>>>> want >>>>> # it to kill httpd after TimeoutStopSec if something went wrong during >>>>> the >>>>> # graceful stop. Normally, Systemd sends SIGTERM signal right after the >>>>> # ExecStop, which would kill httpd. We are sending useless SIGCONT here >>>>> to >>>>> give >>>>> # httpd time to finish. >>>>> KillSignal=SIGCONT >>>>> PrivateTmp=true >>>>> >>>>> [Install] >>>>> WantedBy=multi-user.target >>>>> >>>>> >>>>> 4) As per the above command I found the env file is available >>>>> '/etc/sysconfig/httpd' >>>>> >>>>> vi /etc/sysconfig/httpd >>>>> >>>>> # >>>>> # This file can be used to set additional environment variables for >>>>> # the httpd process, or pass additional options to the httpd >>>>> # executable. >>>>> # >>>>> # Note: With previous versions of httpd, the MPM could be changed by >>>>> # editing an "HTTPD" variable here. With the current version, that >>>>> # variable is now ignored. The MPM is a loadable module, and the >>>>> # choice of MPM can be changed by editing the configuration file >>>>> /etc/httpd/conf.modules.d/00-mpm.conf >>>>> # >>>>> >>>>> # >>>>> # To pass additional options (for instance, -D definitions) to the >>>>> # httpd binary at startup, set OPTIONS here. >>>>> # >>>>> #OPTIONS= >>>>> >>>>> # >>>>> # This setting ensures the httpd process is started in the "C" locale >>>>> # by default. (Some modules will not behave correctly if >>>>> # case-sensitive string comparisons are performed in a different >>>>> # locale.) 
>>>>> # >>>>> LANG=C >>>>> >>>>> >>>>> 5) As per the above command, I found that the conf fileis available in >>>>> the >>>>> path : /etc/httpd/conf.modules.d/00-mpm.conf >>>>> >>>>> vi /etc/httpd/conf.modules.d/00-mpm.conf >>>>> >>>>> # Select the MPM module which should be used by uncommenting exactly >>>>> # one of the following LoadModule lines: >>>>> >>>>> # prefork MPM: Implements a non-threaded, pre-forking web server >>>>> # See: http://httpd.apache.org/docs/2.4/mod/prefork.html >>>>> #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so >>>>> >>>>> # worker MPM: Multi-Processing Module implementing a hybrid >>>>> # multi-threaded multi-process web server >>>>> # See: http://httpd.apache.org/docs/2.4/mod/worker.html >>>>> # >>>>> LoadModule mpm_worker_module modules/mod_mpm_worker.so >>>>> >>>>> # event MPM: A variant of the worker MPM with the goal of consuming >>>>> # threads only for connections with active processing >>>>> # See: http://httpd.apache.org/docs/2.4/mod/event.html >>>>> # >>>>> #LoadModule mpm_event_module modules/mod_mpm_event.so >>>>> >>>>> >>>>> ServerLimit 1000 >>>>> MaxRequestWorkers 1000 >>>>> >>>>> >>>>> >>>>> >>>>> As per your comment, I enabled the 'LoadModule mpm_worker_module >>>>> modules/mod_mpm_worker.so' with the ServerLimit and MaxRequestWorkers as >>>>> 1000 still I am facing the issue for the following command in apache >>>>> benchmark test. >>>>> >>>>> Completed 100 requests >>>>> SSL handshake failed (5). >>>>> SSL handshake failed (5). >>>>> SSL handshake failed (5). >>>>> SSL handshake failed (5). >>>>> SSL handshake failed (5). >>>>> SSL handshake failed (5). >>>>> >>>>> >>>>> NOTE : It always scales when I have concurrent request below 400 >>>>> >>>>> What is wrong in this apache configuration, why SSL handshake is failing >>>>> for concurrent request above 400 ? >>>>> >>>>> Thanks, >>>>> Hari >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Wed, Mar 7, 2018 at 9:20 PM, Juan Hern?ndez >>>>> wrote: >>>>> >>>>> It means that with the default configuration the Apache web server can't >>>>> >>>>>> serve more than 256 concurrent connections. This applies to any >>>>>> application >>>>>> that uses Apache as the web frontend, not just to oVirt. If you want to >>>>>> change that you have to change the MaxRequestWorkers and ServerLimit >>>>>> parameters, as explained here: >>>>>> >>>>>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>>>>> axrequestworkers >>>>>> >>>>>> So, go to your oVirt engine machine and create a >>>>>> /etc/httpd/conf.d/my.conf >>>>>> file with this content: >>>>>> >>>>>> MaxRequestWorkers 1000 >>>>>> ServerLimit 1000 >>>>>> >>>>>> Then restart the Apache server: >>>>>> >>>>>> # systemctl restart httpd >>>>>> >>>>>> Then your web server should be able to handle 1000 concurrent requests, >>>>>> and you will probably start to find other limits, like the amount of >>>>>> memory >>>>>> and CPU that those 1000 Apache child processes will consume, the number >>>>>> of >>>>>> threads in the JBoss application server, the number of connections to >>>>>> the >>>>>> database, etc. >>>>>> >>>>>> Let me insist a bit that if you base your benchmark solely on the >>>>>> number >>>>>> of concurrent requests or connections that the server can handle you >>>>>> may >>>>>> end up with meaningless results, as a real world application can/should >>>>>> use >>>>>> the server much better than that. 
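
One way to "use the server better", as suggested above, is a client that caps its own concurrency and reuses a pool of persistent connections instead of opening a socket per request. A rough sketch with the third-party Python requests library; the URL and token are placeholders, and verify=False is only tolerable against a test engine:

import concurrent.futures
import requests
from requests.adapters import HTTPAdapter

URL = 'https://engine.example.com/ovirt-engine/api/vms'   # placeholder
HEADERS = {'Accept': 'application/json',
           'Authorization': 'Bearer SSOTOKEN'}            # placeholder token

session = requests.Session()
# Keep a pool of 50 persistent connections instead of 500+ short-lived ones
# (the underlying urllib3 pool is thread-safe for these simple GETs):
session.mount('https://', HTTPAdapter(pool_connections=50, pool_maxsize=50))

def fetch(_):
    return session.get(URL, headers=HEADERS, verify=False).status_code

# 1000 requests in total, but never more than 50 in flight at once:
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    codes = list(pool.map(fetch, range(1000)))

print({code: codes.count(code) for code in set(codes)})   # e.g. {200: 1000}
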
>>>>>> >>>>>> On 03/07/2018 04:33 PM, Hari Prasanth Loganathan wrote: >>>>>> >>>>>> With the default configuration of the web server it is impossible to >>>>>> >>>>>>> handle >>>>>>> more than 256 *connections* simultaneously. I guess that "ab" is >>>>>>> opening a >>>>>>> connection for each concurrent request, so when you reach request 257 >>>>>>> the >>>>>>> web server will just reject the connection, there is nothing that the >>>>>>> JBoss >>>>>>> can do about it; you have to increase the number of connections >>>>>>> supported >>>>>>> by the web server. >>>>>>> >>>>>>> *So Does it mean that oVirt cannot serve more than 257 request? * >>>>>>> >>>>>>> My question is, If its possible How to scale this and what is the >>>>>>> configuration we need to change? >>>>>>> >>>>>>> Also, we are taking a benchmark in using oVirt, So I need to find the >>>>>>> maximum possible oVirt request. So please let me know the >>>>>>> configuration >>>>>>> tuning for oVirt to achieve maximum concurrent request. >>>>>>> >>>>>>> Thanks, >>>>>>> Hari >>>>>>> >>>>>>> On Wed, Mar 7, 2018 at 7:25 PM, Juan Hern?ndez >>>>>>> wrote: >>>>>>> >>>>>>> With the default configuration of the web server it is impossible to >>>>>>> >>>>>>> handle more than 256 *connections* simultaneously. I guess that "ab" >>>>>>>> is >>>>>>>> opening a connection for each concurrent request, so when you reach >>>>>>>> request >>>>>>>> 257 the web server will just reject the connection, there is nothing >>>>>>>> that >>>>>>>> the JBoss can do about it; you have to increase the number of >>>>>>>> connections >>>>>>>> supported by the web server. >>>>>>>> >>>>>>>> Or else you may want to re-consider why you want to use 1000 >>>>>>>> simultaneous >>>>>>>> connections. It may be OK for a performance test, but there are >>>>>>>> better >>>>>>>> ways >>>>>>>> to squeeze performance. For example, you could consider using HTTP >>>>>>>> pipelining, which is much more friendly for the server than so many >>>>>>>> connections. This is what we use when we need to send a large number >>>>>>>> of >>>>>>>> requests from other systems. There are examples of how to do that >>>>>>>> with >>>>>>>> the >>>>>>>> Python and Ruby SDKs here: >>>>>>>> >>>>>>>> Python: >>>>>>>> >>>>>>>> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/ >>>>>>>> examples/asynchronous_inventory.py >>>>>>>> >>>>>>>> Ruby: >>>>>>>> >>>>>>>> https://github.com/oVirt/ovirt-engine-sdk-ruby/blob/master/ >>>>>>>> sdk/examples/asynchronous_inventory.rb >>>>>>>> >>>>>>>> >>>>>>>> On 03/07/2018 02:43 PM, Hari Prasanth Loganathan wrote: >>>>>>>> >>>>>>>> Hi Juan, >>>>>>>> >>>>>>>> >>>>>>>>> Thanks for the response. >>>>>>>>> >>>>>>>>> I agree web server can handle only limited number of concurrent >>>>>>>>> requests. >>>>>>>>> But Why it is failing with SSL handshake failure for few requests, >>>>>>>>> Can't >>>>>>>>> the JBOSS wait and serve the request? We can spare the delay but not >>>>>>>>> with >>>>>>>>> the request fails. So Is there a configuration in oVirt which can be >>>>>>>>> tuned >>>>>>>>> to achieve this? >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Hari >>>>>>>>> >>>>>>>>> On Wed, Mar 7, 2018 at 7:05 PM, Juan Hern?ndez >>>>>>>>> >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>> The first thing you will need to change for such a test is the >>>>>>>>> number >>>>>>>>> of >>>>>>>>> >>>>>>>>> simultaneous connections accepted by the Apache web server: by >>>>>>>>> default >>>>>>>>> >>>>>>>>>> the >>>>>>>>>> max is 256. 
See the Apache documentation here: >>>>>>>>>> >>>>>>>>>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>>>>>>>>> axrequestworkers >>>>>>>>>> >>>>>>>>>> In addition I also suggest that you consider using the "worker" >>>>>>>>>> multi-processing module instead of the "prefork", as it usually >>>>>>>>>> works >>>>>>>>>> better when talking to a Java application server, because it >>>>>>>>>> re-uses >>>>>>>>>> connections better. >>>>>>>>>> >>>>>>>>>> On 03/07/2018 02:20 PM, Hari Prasanth Loganathan wrote: >>>>>>>>>> >>>>>>>>>> Hi Team, >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> *Description of problem:* >>>>>>>>>>> >>>>>>>>>>> I am trying to achieve 1000 concurrent request to oVirt. What are >>>>>>>>>>> the >>>>>>>>>>> tunable parameters to achieve this? >>>>>>>>>>> >>>>>>>>>>> I tried to perform the benchmarking for ovirt engine using Apache >>>>>>>>>>> benchmark >>>>>>>>>>> using the same SSO token. >>>>>>>>>>> >>>>>>>>>>> ab -n 1000 -c 500 -k -H "accept: application/json" -H >>>>>>>>>>> "Authorization: >>>>>>>>>>> Bearer SSOTOKEN" https://172.30.56.70/ovirt-engine/ >>>>>>>>>>> >>>>>>>>>> b-9ff1-076fc07ebf50/statistics> >>>>>>>>>>> >>>>>>>>>>> When the number of concurrent request is 500, we are getting more >>>>>>>>>>> than >>>>>>>>>>> 100 >>>>>>>>>>> failures with the following error, >>>>>>>>>>> >>>>>>>>>>> SSL read failed (1) - closing connection >>>>>>>>>>> 139620982339352:error: >>>>>>>>>>> >>>>>>>>>>> NOTE: It is scaling for concurrent request below 500. >>>>>>>>>>> >>>>>>>>>>> I used the profiler to get the memory and CPU and it seems very >>>>>>>>>>> less, >>>>>>>>>>> >>>>>>>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM >>>>>>>>>>> TIME+ >>>>>>>>>>> COMMAND >>>>>>>>>>> 30413 ovirt 20 0 4226664 882396 6776 S 126.0 23.0 >>>>>>>>>>> 27:48.53 >>>>>>>>>>> java >>>>>>>>>>> >>>>>>>>>>> Configuration of the machine in which Ovirt is deployed : >>>>>>>>>>> >>>>>>>>>>> RAM - 4GB, >>>>>>>>>>> Hard disk - 100GB, >>>>>>>>>>> core processor - 2, >>>>>>>>>>> OS - Cent7.x. >>>>>>>>>>> >>>>>>>>>>> In which 2GB is allocated to oVirt. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Version-Release number of selected component (if applicable): >>>>>>>>>>> >>>>>>>>>>> 4.2.2 >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> How reproducible: >>>>>>>>>>> >>>>>>>>>>> If the number of concurrent requests are above 500, we are easily >>>>>>>>>>> facing >>>>>>>>>>> this issue. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> *Actual results:* >>>>>>>>>>> >>>>>>>>>>> SSL read failed (1) - closing connection >>>>>>>>>>> 139620982339352:error: >>>>>>>>>>> >>>>>>>>>>> *Expected results:* >>>>>>>>>>> >>>>>>>>>>> Request success. 
>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Hari >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> Users mailing list >>>>>>>>>>> Users at ovirt.org >>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > From caignec at cines.fr Thu Mar 8 11:56:50 2018 From: caignec at cines.fr (Lionel Caignec) Date: Thu, 8 Mar 2018 12:56:50 +0100 (CET) Subject: [ovirt-users] Ghost Snapshot Disk In-Reply-To: <489433186.2545721.1520320950720.JavaMail.zimbra@cines.fr> References: <2109773819.1728939.1519370725788.JavaMail.zimbra@cines.fr> <1550150634.1827635.1519649308293.JavaMail.zimbra@cines.fr> <280580777.1830731.1519652234852.JavaMail.zimbra@cines.fr> <48154177.1832942.1519654691849.JavaMail.zimbra@cines.fr> <489433186.2545721.1520320950720.JavaMail.zimbra@cines.fr> Message-ID: <736794213.2945423.1520510210879.JavaMail.zimbra@cines.fr> Hi, I finished moving my data, but now when I want to remove my old disk I am stuck on this error: "Cannot detach Virtual Machine Disk. The disk is already configured in a snapshot. In order to detach it, remove the disk's snapshots". But as I said before, there is no snapshot anymore. So what can I do? Delete it manually inside the database? If so, where? Delete the LVM volume manually? If so, how can I find the right one? Please help ;). Lionel ----- Original Message ----- From: "Lionel Caignec" To: "Shani Leviim" Cc: "users" Sent: Tuesday, March 6, 2018 08:22:30 Subject: Re: [ovirt-users] Ghost Snapshot Disk Hi, OK, thank you for the information (sorry for the late response). I will do that. ----- Original Message ----- From: "Shani Leviim" To: "Lionel Caignec" Cc: "users" Sent: Tuesday, February 27, 2018 14:19:45 Subject: Re: [ovirt-users] Ghost Snapshot Disk Hi Lionel, Sorry for the delay in replying to you. If it's possible from your side, syncing the data and destroying the old disk sounds about right. In addition, it seems like you're hitting this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1509629 It was fixed in version 4.1.9 and above. *Regards,* *Shani Leviim* On Mon, Feb 26, 2018 at 4:18 PM, Lionel Caignec wrote: > OK, so I reply to myself: > > Version is 4.1.7.6-1 > > I just deleted manually a previously created snapshot. But this is an > I/O-intensive VM, with big disks (2.5 TB and 5 TB). > > For the log, I cannot paste all of it on a public list for security reasons; > I will send you the full log in private. > Here is an extract relevant to my error: > engine.log-20180210:2018-02-09 23:00:03,200+01 INFO > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (default task-312) [44402a8c-3196-43f0-ba33-307ea78e6f49] EVENT_ID: > USER_CREATE_SNAPSHOT(45), Correlation ID: 44402a8c-3196-43f0-ba33-307ea78e6f49, > Job ID: 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom > ID: null, Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002' > creation for VM 'zz_nil' was initiated by snap_user at internal. > engine.log-20180210:2018-02-09 23:01:06,578+01 INFO > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (DefaultQuartzScheduler5) [] EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_SUCCESS(68), > Correlation ID: 44402a8c-3196-43f0-ba33-307ea78e6f49, Job ID: > 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom ID: null, > Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002' > creation for VM 'zz_nil' has been completed. 
> engine.log-20180220:2018-02-19 17:01:23,800+01 INFO > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (default task-113) [] EVENT_ID: USER_REMOVE_SNAPSHOT(342), Correlation ID: > 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: c9a918a7-b00c-43cf-b6de-3659ac0765da, > Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Snapshot > 'AUTO_7D_zz_nil_20180209_220002' deletion for VM 'zz_nil' was initiated > by acaignec at ldap-cines-authz. > engine.log-20180221:2018-02-20 22:24:45,174+01 ERROR > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (DefaultQuartzScheduler6) [06a9efa4-1b80-4021-bf3e-41ecebe58a88] > EVENT_ID: USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Correlation ID: > 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: c9a918a7-b00c-43cf-b6de-3659ac0765da, > Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Failed to > delete snapshot 'AUTO_7D_zz_nil_20180209_220002' for VM 'zz_nil'. > 2018-02-20 22:24:46,266+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] > (DefaultQuartzScheduler3) [516079c3] SPMAsyncTask::PollTask: Polling task > '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command 'DestroyImage', > Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') > returned status 'finished', result 'success'. > 2018-02-20 22:24:46,267+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] > (DefaultQuartzScheduler3) [516079c3] BaseAsyncTask::onTaskEndSuccess: > Task '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command > 'DestroyImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') > ended successfully. > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endActionIfNecessary: > All tasks of command 'fe8c91f2-386b-4b3f-bbf3-aeda8e9244c6' has ended -> > executing 'endAction' > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endAction: Ending > action for '1' tasks (command ID: 'fe8c91f2-386b-4b3f-bbf3-aeda8e9244c6'): > calling endAction '. > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (org.ovirt.thread.pool-6-thread-20) [516079c3] CommandAsyncTask::endCommandAction > [within thread] context: Attempting to endAction 'DestroyImage', > 2018-02-20 22:24:46,269+01 ERROR [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (org.ovirt.thread.pool-6-thread-20) [516079c3] [within thread]: endAction > for action type DestroyImage threw an exception.: > java.lang.NullPointerException > at org.ovirt.engine.core.bll.tasks.CoCoAsyncTaskHelper. > endAction(CoCoAsyncTaskHelper.java:335) [bll.jar:] > at org.ovirt.engine.core.bll.tasks.CommandCoordinatorImpl. > endAction(CommandCoordinatorImpl.java:340) [bll.jar:] > at org.ovirt.engine.core.bll.tasks.CommandAsyncTask. 
> endCommandAction(CommandAsyncTask.java:154) [bll.jar:] > at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.lambda$ > endActionIfNecessary$0(CommandAsyncTask.java:106) [bll.jar:] > at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$ > InternalWrapperRunnable.run(ThreadPoolUtil.java:84) [utils.jar:] > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [rt.jar:1.8.0_161] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > [rt.jar:1.8.0_161] > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > [rt.jar:1.8.0_161] > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > [rt.jar:1.8.0_161] > at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161] > > ----- Mail original ----- > De: "Shani Leviim" > ?: "Lionel Caignec" > Envoy?: Lundi 26 F?vrier 2018 14:42:38 > Objet: Re: [ovirt-users] Ghost Snapshot Disk > > Yes, please. > Can you detail a bit more regarding the actions you've done? > > I'm assuming that since the snapshot had no description, trying to operate > it caused the nullPointerException you've got. > But I want to examine what was the cause for that. > > Also, can you please answer back to the list? > > > > *Regards,* > > *Shani Leviim* > > On Mon, Feb 26, 2018 at 3:37 PM, Lionel Caignec wrote: > > > Version is 4.1.7.6-1 > > > > Do you want the log from the day i delete snapshot? > > > > ----- Mail original ----- > > De: "Shani Leviim" > > ?: "Lionel Caignec" > > Cc: "users" > > Envoy?: Lundi 26 F?vrier 2018 14:29:16 > > Objet: Re: [ovirt-users] Ghost Snapshot Disk > > > > Hi, > > > > What is your engine version, please? > > I'm trying to reproduce your steps, for understanding better was is the > > cause for that error. Therefore, a full engine log is needed. > > Can you please attach it? > > > > Thanks, > > > > > > *Shani Leviim* > > > > On Mon, Feb 26, 2018 at 2:48 PM, Lionel Caignec > wrote: > > > > > Hi > > > > > > 1) this is error message from ui.log > > > > > > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend. > > > server.gwt.OvirtRemoteLoggingService] (default task-3) [] Permutation > > > name: 8C01181C3B121D0AAE1312275CC96415 > > > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend. > > server.gwt.OvirtRemoteLoggingService] > > > (default task-3) [] Uncaught exception: com.google.gwt.core.client. > > JavaScriptException: > > > (TypeError) > > > __gwt$exception: : Cannot read property 'F' of null > > > at org.ovirt.engine.ui.uicommonweb.models.storage. > > > DisksAllocationModel$3.$onSuccess(DisksAllocationModel.java:120) > > > at org.ovirt.engine.ui.uicommonweb.models.storage. > > > DisksAllocationModel$3.onSuccess(DisksAllocationModel.java:120) > > > at org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess( > > Frontend.java:233) > > > [frontend.jar:] > > > at org.ovirt.engine.ui.frontend.Frontend$2.onSuccess(Frontend. > > java:233) > > > [frontend.jar:] > > > at org.ovirt.engine.ui.frontend.communication. > > > OperationProcessor$2.$onSuccess(OperationProcessor.java:139) > > > [frontend.jar:] > > > at org.ovirt.engine.ui.frontend.communication. > > > OperationProcessor$2.onSuccess(OperationProcessor.java:139) > > > [frontend.jar:] > > > at org.ovirt.engine.ui.frontend.communication. > > > GWTRPCCommunicationProvider$5$1.$onSuccess( > GWTRPCCommunicationProvider. > > java:269) > > > [frontend.jar:] > > > at org.ovirt.engine.ui.frontend.communication. > > > GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider. 
> > java:269) > > > [frontend.jar:] > > > at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter. > > > onResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:] > > > at com.google.gwt.http.client.Request.$fireOnResponseReceived( > > Request.java:237) > > > [gwt-servlet.jar:] > > > at com.google.gwt.http.client.RequestBuilder$1. > > onReadyStateChange(RequestBuilder.java:409) > > > [gwt-servlet.jar:] > > > at Unknown.eval(webadmin-0.js at 65) > > > at com.google.gwt.core.client.impl.Impl.apply(Impl.java:296) > > > [gwt-servlet.jar:] > > > at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:335) > > > [gwt-servlet.jar:] > > > at Unknown.eval(webadmin-0.js at 54) > > > > > > > > > 2) This line seems to be about the bad disk : > > > > > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | > > > 2748779069440 | 00000000-0000-0000-0000-000000000000 | > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > > 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > > > > > > > > 3) Snapshot table is empty for the concerned vm_id. > > > > > > ----- Mail original ----- > > > De: "Shani Leviim" > > > ?: "Lionel Caignec" > > > Cc: "users" > > > Envoy?: Lundi 26 F?vrier 2018 13:31:23 > > > Objet: Re: [ovirt-users] Ghost Snapshot Disk > > > > > > Hi Lionel, > > > > > > The error message you've mentioned sounds like a UI error. > > > Can you please attach your ui log? > > > > > > Also, on the data from 'images' table you've uploaded, can you describe > > > which line is the relevant disk? > > > > > > Finally (for now), in case the snapshot was deleted, can you please > > > validate it by viewing the output of: > > > $ select * from snapshots; > > > > > > > > > > > > *Regards,* > > > > > > *Shani Leviim* > > > > > > On Mon, Feb 26, 2018 at 9:20 AM, Lionel Caignec > > wrote: > > > > > > > Hi Shani, > > > > thank you for helping me with your reply, > > > > i juste make a little mistake on explanation. In fact it's the > snapshot > > > > does not exist anymore. This is the disk(s) relative to her wich > still > > > > exist, and perhaps LVM volume. > > > > So can i delete manually this disk in database? what about the lvm > > > volume? > > > > Is it better to recreate disk sync data and destroy old one? > > > > > > > > > > > > > > > > ----- Mail original ----- > > > > De: "Shani Leviim" > > > > ?: "Lionel Caignec" > > > > Cc: "users" > > > > Envoy?: Dimanche 25 F?vrier 2018 14:26:41 > > > > Objet: Re: [ovirt-users] Ghost Snapshot Disk > > > > > > > > Hi Lionel, > > > > > > > > You can try to delete that snapshot directly from the database. > > > > > > > > In case of using psql [1], once you've logged in to your database, > you > > > can > > > > run this query: > > > > $ select * from snapshots where vm_id = ''; > > > > This one would list the snapshots associated with a VM by its id. > > > > > > > > In case you don't have you vm_id, you can locate it by querying: > > > > $ select * from vms where vm_name = 'nil'; > > > > This one would show you some details about a VM by its name > (including > > > the > > > > vm's id). > > > > > > > > Once you've found the relevant snapshot, you can delete it by > running: > > > > $ delete from snapshots where snapshot_id = ''; > > > > This one would delete the desired snapshot from the database. > > > > > > > > Since it's a delete operation, I would suggest confirming the ids > > before > > > > executing it. > > > > > > > > Hope you've found it useful! 
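
Before deleting rows by hand it may also be worth cross-checking what the engine API itself reports for the VM, so the ghost image can be matched against the 'images' rows quoted earlier. A minimal sketch with the Python SDK (ovirtsdk4); the engine URL and credentials are placeholders, and the VM name is the one from the queries above:

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',                          # placeholder
    password='secret',                                  # placeholder
    ca_file='ca.pem',                                   # placeholder
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=nil')[0]
vm_service = vms_service.vm_service(vm.id)

# Snapshots and disks as the engine sees them; anything present in the
# database but missing here is a candidate for the ghost entry:
for snap in vm_service.snapshots_service().list():
    print('snapshot:', snap.id, snap.description)
for att in vm_service.disk_attachments_service().list():
    disk = connection.follow_link(att.disk)
    print('disk:', disk.id, disk.alias)

connection.close()
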
> > > > > > > > [1] > > > > https://www.ovirt.org/documentation/install-guide/ > > > appe-Preparing_a_Remote_ > > > > PostgreSQL_Database_for_Use_with_the_oVirt_Engine/ > > > > > > > > > > > > *Regards,* > > > > > > > > *Shani Leviim* > > > > > > > > On Fri, Feb 23, 2018 at 9:25 AM, Lionel Caignec > > > wrote: > > > > > > > > > Hi, > > > > > > > > > > i've a problem with snapshot. On one VM i've a "snapshot" ghost > > without > > > > > name or uuid, only information is size (see attachment). In the > > > snapshot > > > > > tab there is no trace about this disk. > > > > > > > > > > In database (table images) i found this : > > > > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | > > > > > 2748779069440 | 00000000-0000-0000-0000-000000000000 | > > > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > > > > 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > > > > | 2 | 4 | 17e26476-cecb-441d-a5f7- > > 46ab3ef387ee > > > | > > > > > 2018-01-17 22:01:29.663334+01 | 2018-01-19 08:40:14.345229+01 | f > > > | > > > > > 1 | 2 > > > > > 1c7650fa-542b-4ec2-83a1-d2c1c31be5fd | 2018-01-17 22:02:03+01 | > > > > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > > > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > > > > 22:01:20.84+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > > > > | 2 | 4 | bf834a91-c69f-4d2c-b639- > > 116ed58296d8 > > > | > > > > > 2018-01-17 22:01:29.836133+01 | 2018-01-19 08:40:19.083508+01 | f > > > | > > > > > 1 | 2 > > > > > 8614b21f-c0de-40f2-b4fb-e5cf193b0743 | 2018-02-09 23:00:44+01 | > > > > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > > > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-02-16 > > > > > 23:00:02.855+01 | 390175dc-baf4-4831-936a-5ea68fa4c969 > > > > > > > > > > > > > > > But i does not know which line is my disk. Is it possible to > delete > > > > > directly into database? > > > > > Or is it better to dump my disk to another new and delete the > > > "corrupted > > > > > one"? > > > > > > > > > > Another thing, when i try to move the disk to another storage > > domain i > > > > > always get "uncaght exeption occured ..." and no error in > engine.log. > > > > > > > > > > > > > > > Thank you for helping. > > > > > > > > > > -- > > > > > Lionel Caignec > > > > > > > > > > _______________________________________________ > > > > > Users mailing list > > > > > Users at ovirt.org > > > > > http://lists.ovirt.org/mailman/listinfo/users > > > > > > > > > > > > > > > > > > > > _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users From tjelinek at redhat.com Thu Mar 8 12:19:02 2018 From: tjelinek at redhat.com (Tomas Jelinek) Date: Thu, 8 Mar 2018 13:19:02 +0100 Subject: [ovirt-users] Power off VM from VM portal In-Reply-To: References: <2826607c-14bc-695d-26d6-b20d12f9b755@shurik.kiev.ua> <7fb78f56-cc23-8369-0e54-586f42408a91@ecarnot.net> <1931fde6-2347-eef9-e6b6-244ddb80517e@shurik.kiev.ua> Message-ID: On Wed, Mar 7, 2018 at 2:36 PM, Nicolas Ecarnot wrote: > Le 07/03/2018 ? 13:42, Alexandr Krivulya a ?crit : > >> >> >> 06.03.2018 17:39, Nicolas Ecarnot ?????: >> >>> Le 06/03/2018 ? 16:02, Alexandr Krivulya a ?crit : >>> >>>> Hi, >>>> >>>> is there any way to power off VM from VM portal (4.2.1.7)? I can't find >>>> "power off" button, just "shutdown". 
>>>> >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>> >>> Hello Alexandr, >>> >>> After having clicked on the VM link, you'll notice that on the right of >>> the Shutdown button is an arrow allowing you to access to the Power Off >>> feature. >>> >> >> I cant find this arrow on Shutdown button >> >> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > Oh sorry I answered in the context of admin portal. > Indeed, in the VM portal, I neither see this poweroff button. > right, I have opened an issue to track this https://github.com/oVirt/ovirt-web-ui/issues/522 > > -- > Nicolas ECARNOT > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hariprasanth.l at msystechnologies.com Thu Mar 8 12:31:47 2018 From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan) Date: Thu, 8 Mar 2018 18:01:47 +0530 Subject: [ovirt-users] Tunable parameters in ovirt engine In-Reply-To: <5a289de6-a80d-a4c9-e612-c2ea31e3e54a@redhat.com> References: <786be990-4ea0-613b-d81b-639f70a11398@redhat.com> <109ffb44-89ac-f190-29a4-e937157cce81@redhat.com> <6cb6fbc2-51e3-9c89-23dc-e36a7aefd8b3@redhat.com> <5105aa87-fea6-eb2e-5ae8-ef47cbbe6677@redhat.com> <5a289de6-a80d-a4c9-e612-c2ea31e3e54a@redhat.com> Message-ID: Hi Juan, Now we got a clue, I try to hit 100 concurrent request in which 1 request got failed with the 503 error, I observed this in apache ssl_access.log 172.30.36.167 - - [08/Mar/2018:17:22:42 +0530] "GET /ovirt-engine/api/vms/ HTTP/1.1" 503 299 Is it thrown from Apache / ovirt engine? On Thu, Mar 8, 2018 at 5:15 PM, Juan Hern?ndez wrote: > I think the configuration is good. There may be some connections that the > server is closing before the client expected, but that is normal, in my > opinion. > > > On 03/08/2018 12:34 PM, Hari Prasanth Loganathan wrote: > >> This is the only error message we received from ab. >> >> I googled it and found that it is due to the connection drop. So It would >> be Great, If you could check my Apache server configuration I shared in >> the >> thread and let me know your thoughts on this. >> >> Thanks, >> Hari >> >> On Thu, Mar 8, 2018 at 4:56 PM, Juan Hern?ndez >> wrote: >> >> But other than those SSL error messages, are the connections really >>> failing? Can you share the results reported by "ab"? >>> >>> >>> On 03/08/2018 12:16 PM, Hari Prasanth Loganathan wrote: >>> >>> No Juan, It is not working with any benchmark / application tool. It >>>> fails >>>> with the same error SSL handshake failed (5). >>>> >>>> Could you let me know the configuration of Apache web server is correct? >>>> >>>> Thanks, >>>> Hari >>>> >>>> On Thu, Mar 8, 2018 at 1:08 AM, Juan Hern?ndez >>>> wrote: >>>> >>>> If you are still having problems I am inclined to think that it is a >>>> >>>>> client issue. For example, I'd try to remove the "-k" option from the >>>>> "ab" >>>>> command. If you use keep alive the server may decide anyhow to close >>>>> the >>>>> connection after certain number of requests, even if the client asks to >>>>> keep it alive. Some clients don't handle that perfectly, "ab" may have >>>>> that >>>>> problem. 
If that makes the SSL error messages disappear then I think >>>>> you >>>>> can safely ignore them, and restore the "-k" option, if you want. >>>>> >>>>> On 03/07/2018 07:30 PM, Hari Prasanth Loganathan wrote: >>>>> >>>>> Thanks Juan for your response. Appreciate it. >>>>> >>>>>> But for some reason still, I am facing the same SSL handshake failed >>>>>> (5). >>>>>> Could you please check this configuration and let me know the issue in >>>>>> my >>>>>> ovirt engine environment. >>>>>> >>>>>> *Configuration of Apache server:* >>>>>> >>>>>> >>>>>> 1) httpd version, >>>>>> >>>>>> # httpd -v >>>>>> Server version: Apache/2.4.6 (CentOS) >>>>>> Server built: Oct 19 2017 20:39:16 >>>>>> >>>>>> 2) I checked the status using the following command, >>>>>> >>>>>> # systemctl status httpd -l >>>>>> ? httpd.service - The Apache HTTP Server >>>>>> Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; >>>>>> vendor >>>>>> preset: disabled) >>>>>> Active: active (running) since Wed 2018-03-07 23:46:32 IST; 1min >>>>>> 55s >>>>>> ago >>>>>> Docs: man:httpd(8) >>>>>> man:apachectl(8) >>>>>> Process: 4351 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, >>>>>> status=0/SUCCESS) >>>>>> Main PID: 4359 (httpd) >>>>>> Status: "Total requests: 264; Current requests/sec: 0.1; Current >>>>>> traffic: 204 B/sec" >>>>>> CGroup: /system.slice/httpd.service >>>>>> ??4359 /usr/sbin/httpd -DFOREGROUND >>>>>> ??4360 /usr/sbin/httpd -DFOREGROUND >>>>>> ??4362 /usr/sbin/httpd -DFOREGROUND >>>>>> ??5100 /usr/sbin/httpd -DFOREGROUND >>>>>> ??5386 /usr/sbin/httpd -DFOREGROUND >>>>>> ??5415 /usr/sbin/httpd -DFOREGROUND >>>>>> ??5416 /usr/sbin/httpd -DFOREGROUND >>>>>> >>>>>> 3) Since the httpd is pointing to the path : >>>>>> /usr/lib/systemd/system/httpd.service >>>>>> >>>>>> vi /usr/lib/systemd/system/httpd.service >>>>>> >>>>>> [Unit] >>>>>> Description=The Apache HTTP Server >>>>>> After=network.target remote-fs.target nss-lookup.target >>>>>> Documentation=man:httpd(8) >>>>>> Documentation=man:apachectl(8) >>>>>> >>>>>> [Service] >>>>>> Type=notify >>>>>> EnvironmentFile=/etc/sysconfig/httpd >>>>>> ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND >>>>>> ExecReload=/usr/sbin/httpd $OPTIONS -k graceful >>>>>> ExecStop=/bin/kill -WINCH ${MAINPID} >>>>>> # We want systemd to give httpd some time to finish gracefully, but >>>>>> still >>>>>> want >>>>>> # it to kill httpd after TimeoutStopSec if something went wrong during >>>>>> the >>>>>> # graceful stop. Normally, Systemd sends SIGTERM signal right after >>>>>> the >>>>>> # ExecStop, which would kill httpd. We are sending useless SIGCONT >>>>>> here >>>>>> to >>>>>> give >>>>>> # httpd time to finish. >>>>>> KillSignal=SIGCONT >>>>>> PrivateTmp=true >>>>>> >>>>>> [Install] >>>>>> WantedBy=multi-user.target >>>>>> >>>>>> >>>>>> 4) As per the above command I found the env file is available >>>>>> '/etc/sysconfig/httpd' >>>>>> >>>>>> vi /etc/sysconfig/httpd >>>>>> >>>>>> # >>>>>> # This file can be used to set additional environment variables for >>>>>> # the httpd process, or pass additional options to the httpd >>>>>> # executable. >>>>>> # >>>>>> # Note: With previous versions of httpd, the MPM could be changed by >>>>>> # editing an "HTTPD" variable here. With the current version, that >>>>>> # variable is now ignored. 
The MPM is a loadable module, and the >>>>>> # choice of MPM can be changed by editing the configuration file >>>>>> /etc/httpd/conf.modules.d/00-mpm.conf >>>>>> # >>>>>> >>>>>> # >>>>>> # To pass additional options (for instance, -D definitions) to the >>>>>> # httpd binary at startup, set OPTIONS here. >>>>>> # >>>>>> #OPTIONS= >>>>>> >>>>>> # >>>>>> # This setting ensures the httpd process is started in the "C" locale >>>>>> # by default. (Some modules will not behave correctly if >>>>>> # case-sensitive string comparisons are performed in a different >>>>>> # locale.) >>>>>> # >>>>>> LANG=C >>>>>> >>>>>> >>>>>> 5) As per the above command, I found that the conf fileis available in >>>>>> the >>>>>> path : /etc/httpd/conf.modules.d/00-mpm.conf >>>>>> >>>>>> vi /etc/httpd/conf.modules.d/00-mpm.conf >>>>>> >>>>>> # Select the MPM module which should be used by uncommenting exactly >>>>>> # one of the following LoadModule lines: >>>>>> >>>>>> # prefork MPM: Implements a non-threaded, pre-forking web server >>>>>> # See: http://httpd.apache.org/docs/2.4/mod/prefork.html >>>>>> #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so >>>>>> >>>>>> # worker MPM: Multi-Processing Module implementing a hybrid >>>>>> # multi-threaded multi-process web server >>>>>> # See: http://httpd.apache.org/docs/2.4/mod/worker.html >>>>>> # >>>>>> LoadModule mpm_worker_module modules/mod_mpm_worker.so >>>>>> >>>>>> # event MPM: A variant of the worker MPM with the goal of consuming >>>>>> # threads only for connections with active processing >>>>>> # See: http://httpd.apache.org/docs/2.4/mod/event.html >>>>>> # >>>>>> #LoadModule mpm_event_module modules/mod_mpm_event.so >>>>>> >>>>>> >>>>>> ServerLimit 1000 >>>>>> MaxRequestWorkers 1000 >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> As per your comment, I enabled the 'LoadModule mpm_worker_module >>>>>> modules/mod_mpm_worker.so' with the ServerLimit and MaxRequestWorkers >>>>>> as >>>>>> 1000 still I am facing the issue for the following command in apache >>>>>> benchmark test. >>>>>> >>>>>> Completed 100 requests >>>>>> SSL handshake failed (5). >>>>>> SSL handshake failed (5). >>>>>> SSL handshake failed (5). >>>>>> SSL handshake failed (5). >>>>>> SSL handshake failed (5). >>>>>> SSL handshake failed (5). >>>>>> >>>>>> >>>>>> NOTE : It always scales when I have concurrent request below 400 >>>>>> >>>>>> What is wrong in this apache configuration, why SSL handshake is >>>>>> failing >>>>>> for concurrent request above 400 ? >>>>>> >>>>>> Thanks, >>>>>> Hari >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On Wed, Mar 7, 2018 at 9:20 PM, Juan Hern?ndez >>>>>> wrote: >>>>>> >>>>>> It means that with the default configuration the Apache web server >>>>>> can't >>>>>> >>>>>> serve more than 256 concurrent connections. This applies to any >>>>>>> application >>>>>>> that uses Apache as the web frontend, not just to oVirt. 
If you want >>>>>>> to >>>>>>> change that you have to change the MaxRequestWorkers and ServerLimit >>>>>>> parameters, as explained here: >>>>>>> >>>>>>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>>>>>> axrequestworkers >>>>>>> >>>>>>> So, go to your oVirt engine machine and create a >>>>>>> /etc/httpd/conf.d/my.conf >>>>>>> file with this content: >>>>>>> >>>>>>> MaxRequestWorkers 1000 >>>>>>> ServerLimit 1000 >>>>>>> >>>>>>> Then restart the Apache server: >>>>>>> >>>>>>> # systemctl restart httpd >>>>>>> >>>>>>> Then your web server should be able to handle 1000 concurrent >>>>>>> requests, >>>>>>> and you will probably start to find other limits, like the amount of >>>>>>> memory >>>>>>> and CPU that those 1000 Apache child processes will consume, the >>>>>>> number >>>>>>> of >>>>>>> threads in the JBoss application server, the number of connections to >>>>>>> the >>>>>>> database, etc. >>>>>>> >>>>>>> Let me insist a bit that if you base your benchmark solely on the >>>>>>> number >>>>>>> of concurrent requests or connections that the server can handle you >>>>>>> may >>>>>>> end up with meaningless results, as a real world application >>>>>>> can/should >>>>>>> use >>>>>>> the server much better than that. >>>>>>> >>>>>>> On 03/07/2018 04:33 PM, Hari Prasanth Loganathan wrote: >>>>>>> >>>>>>> With the default configuration of the web server it is impossible to >>>>>>> >>>>>>> handle >>>>>>>> more than 256 *connections* simultaneously. I guess that "ab" is >>>>>>>> opening a >>>>>>>> connection for each concurrent request, so when you reach request >>>>>>>> 257 >>>>>>>> the >>>>>>>> web server will just reject the connection, there is nothing that >>>>>>>> the >>>>>>>> JBoss >>>>>>>> can do about it; you have to increase the number of connections >>>>>>>> supported >>>>>>>> by the web server. >>>>>>>> >>>>>>>> *So Does it mean that oVirt cannot serve more than 257 request? * >>>>>>>> >>>>>>>> My question is, If its possible How to scale this and what is the >>>>>>>> configuration we need to change? >>>>>>>> >>>>>>>> Also, we are taking a benchmark in using oVirt, So I need to find >>>>>>>> the >>>>>>>> maximum possible oVirt request. So please let me know the >>>>>>>> configuration >>>>>>>> tuning for oVirt to achieve maximum concurrent request. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Hari >>>>>>>> >>>>>>>> On Wed, Mar 7, 2018 at 7:25 PM, Juan Hern?ndez >>>>>>> > >>>>>>>> wrote: >>>>>>>> >>>>>>>> With the default configuration of the web server it is impossible to >>>>>>>> >>>>>>>> handle more than 256 *connections* simultaneously. I guess that "ab" >>>>>>>> >>>>>>>>> is >>>>>>>>> opening a connection for each concurrent request, so when you reach >>>>>>>>> request >>>>>>>>> 257 the web server will just reject the connection, there is >>>>>>>>> nothing >>>>>>>>> that >>>>>>>>> the JBoss can do about it; you have to increase the number of >>>>>>>>> connections >>>>>>>>> supported by the web server. >>>>>>>>> >>>>>>>>> Or else you may want to re-consider why you want to use 1000 >>>>>>>>> simultaneous >>>>>>>>> connections. It may be OK for a performance test, but there are >>>>>>>>> better >>>>>>>>> ways >>>>>>>>> to squeeze performance. For example, you could consider using HTTP >>>>>>>>> pipelining, which is much more friendly for the server than so many >>>>>>>>> connections. This is what we use when we need to send a large >>>>>>>>> number >>>>>>>>> of >>>>>>>>> requests from other systems. 
There are examples of how to do that >>>>>>>>> with >>>>>>>>> the >>>>>>>>> Python and Ruby SDKs here: >>>>>>>>> >>>>>>>>> Python: >>>>>>>>> >>>>>>>>> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/ >>>>>>>>> examples/asynchronous_inventory.py >>>>>>>>> >>>>>>>>> Ruby: >>>>>>>>> >>>>>>>>> https://github.com/oVirt/ovirt-engine-sdk-ruby/blob/master/ >>>>>>>>> sdk/examples/asynchronous_inventory.rb >>>>>>>>> >>>>>>>>> >>>>>>>>> On 03/07/2018 02:43 PM, Hari Prasanth Loganathan wrote: >>>>>>>>> >>>>>>>>> Hi Juan, >>>>>>>>> >>>>>>>>> >>>>>>>>> Thanks for the response. >>>>>>>>>> >>>>>>>>>> I agree web server can handle only limited number of concurrent >>>>>>>>>> requests. >>>>>>>>>> But Why it is failing with SSL handshake failure for few requests, >>>>>>>>>> Can't >>>>>>>>>> the JBOSS wait and serve the request? We can spare the delay but >>>>>>>>>> not >>>>>>>>>> with >>>>>>>>>> the request fails. So Is there a configuration in oVirt which can >>>>>>>>>> be >>>>>>>>>> tuned >>>>>>>>>> to achieve this? >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Hari >>>>>>>>>> >>>>>>>>>> On Wed, Mar 7, 2018 at 7:05 PM, Juan Hern?ndez < >>>>>>>>>> jhernand at redhat.com >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>> The first thing you will need to change for such a test is the >>>>>>>>>> number >>>>>>>>>> of >>>>>>>>>> >>>>>>>>>> simultaneous connections accepted by the Apache web server: by >>>>>>>>>> default >>>>>>>>>> >>>>>>>>>> the >>>>>>>>>>> max is 256. See the Apache documentation here: >>>>>>>>>>> >>>>>>>>>>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>>>>>>>>>> axrequestworkers >>>>>>>>>>> >>>>>>>>>>> In addition I also suggest that you consider using the "worker" >>>>>>>>>>> multi-processing module instead of the "prefork", as it usually >>>>>>>>>>> works >>>>>>>>>>> better when talking to a Java application server, because it >>>>>>>>>>> re-uses >>>>>>>>>>> connections better. >>>>>>>>>>> >>>>>>>>>>> On 03/07/2018 02:20 PM, Hari Prasanth Loganathan wrote: >>>>>>>>>>> >>>>>>>>>>> Hi Team, >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> *Description of problem:* >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> I am trying to achieve 1000 concurrent request to oVirt. What >>>>>>>>>>>> are >>>>>>>>>>>> the >>>>>>>>>>>> tunable parameters to achieve this? >>>>>>>>>>>> >>>>>>>>>>>> I tried to perform the benchmarking for ovirt engine using >>>>>>>>>>>> Apache >>>>>>>>>>>> benchmark >>>>>>>>>>>> using the same SSO token. >>>>>>>>>>>> >>>>>>>>>>>> ab -n 1000 -c 500 -k -H "accept: application/json" -H >>>>>>>>>>>> "Authorization: >>>>>>>>>>>> Bearer SSOTOKEN" https://172.30.56.70/ovirt-engine/ >>>>>>>>>>>> >>>>>>>>>>> b-9ff1-076fc07ebf50/statistics> >>>>>>>>>>>> >>>>>>>>>>>> When the number of concurrent request is 500, we are getting >>>>>>>>>>>> more >>>>>>>>>>>> than >>>>>>>>>>>> 100 >>>>>>>>>>>> failures with the following error, >>>>>>>>>>>> >>>>>>>>>>>> SSL read failed (1) - closing connection >>>>>>>>>>>> 139620982339352:error: >>>>>>>>>>>> >>>>>>>>>>>> NOTE: It is scaling for concurrent request below 500. 
>>>>>>>>>>>>
>>>>>>>>>>>> I used the profiler to get the memory and CPU and they seem very low,
>>>>>>>>>>>>
>>>>>>>>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
>>>>>>>>>>>> 30413 ovirt 20 0 4226664 882396 6776 S 126.0 23.0 27:48.53 java
>>>>>>>>>>>>
>>>>>>>>>>>> Configuration of the machine in which oVirt is deployed:
>>>>>>>>>>>>
>>>>>>>>>>>> RAM - 4GB,
>>>>>>>>>>>> Hard disk - 100GB,
>>>>>>>>>>>> core processor - 2,
>>>>>>>>>>>> OS - CentOS 7.x.
>>>>>>>>>>>>
>>>>>>>>>>>> In which 2GB is allocated to oVirt.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Version-Release number of selected component (if applicable):
>>>>>>>>>>>>
>>>>>>>>>>>> 4.2.2
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> How reproducible:
>>>>>>>>>>>>
>>>>>>>>>>>> If the number of concurrent requests is above 500, we easily face
>>>>>>>>>>>> this issue.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> *Actual results:*
>>>>>>>>>>>>
>>>>>>>>>>>> SSL read failed (1) - closing connection 139620982339352:error:
>>>>>>>>>>>>
>>>>>>>>>>>> *Expected results:*
>>>>>>>>>>>>
>>>>>>>>>>>> Request success.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Hari
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> Users mailing list
>>>>>>>>>>>> Users at ovirt.org
>>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

From sbonazzo at redhat.com Thu Mar 8 13:08:23 2018
From: sbonazzo at redhat.com (sbonazzo at redhat.com)
Date: Thu, 08 Mar 2018 13:08:23 +0000
Subject: [ovirt-users] Invitation: oVirt System Test Hackathon @ Tue Mar 13, 2018 (users@ovirt.org)
Message-ID:

You have been invited to the following event.

Title: oVirt System Test Hackathon

Please join us in an ovirt-system-tests hackathon pushing new tests and improving existing ones for testing Hosted Engine.
Git repo is available:
https://gerrit.ovirt.org/gitweb?p=ovirt-system-tests.git;a=summary

Integration, Node and CI team will be available for helping in the effort and reviewing patches.

When: Tue Mar 13, 2018
Where: #ovirt IRC channel
Calendar: users at ovirt.org
Who:
* sbonazzo at redhat.com - organizer
* devel at ovirt.org
* users at ovirt.org

From hariprasanth.l at msystechnologies.com Thu Mar 8 13:19:28 2018
From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan)
Date: Thu, 8 Mar 2018 18:49:28 +0530
Subject: [ovirt-users] Tunable parameters in ovirt engine
In-Reply-To:
References: <786be990-4ea0-613b-d81b-639f70a11398@redhat.com> <109ffb44-89ac-f190-29a4-e937157cce81@redhat.com> <6cb6fbc2-51e3-9c89-23dc-e36a7aefd8b3@redhat.com> <5105aa87-fea6-eb2e-5ae8-ef47cbbe6677@redhat.com> <5a289de6-a80d-a4c9-e612-c2ea31e3e54a@redhat.com>
Message-ID:

Hi Juan,

Could you help me with this question:

1) Is there a limitation in using the same token? Like one token can serve only X number of requests?

On Thu, Mar 8, 2018 at 6:01 PM, Hari Prasanth Loganathan < hariprasanth.l at msystechnologies.com> wrote:

> Hi Juan,
>
> Now we got a clue: I tried to hit 100 concurrent requests, of which 1 request
> failed with a 503 error; I observed this in the Apache ssl_access.log
>
> 172.30.36.167 - - [08/Mar/2018:17:22:42 +0530] "GET /ovirt-engine/api/vms/ HTTP/1.1" 503 299
>
> Is it thrown from Apache / ovirt engine?
>
> On Thu, Mar 8, 2018 at 5:15 PM, Juan Hernández wrote:
>
>> I think the configuration is good. There may be some connections that the
>> server is closing before the client expected, but that is normal, in my
>> opinion.
>>
>> On 03/08/2018 12:34 PM, Hari Prasanth Loganathan wrote:
>>
>>> This is the only error message we received from ab.
>>>
>>> I googled it and found that it is due to the connection drop. So it would
>>> be great if you could check my Apache server configuration I shared in the
>>> thread and let me know your thoughts on this.
>>>
>>> Thanks,
>>> Hari
>>>
>>> On Thu, Mar 8, 2018 at 4:56 PM, Juan Hernández wrote:
>>>
>>>> >>>> >>>> On 03/08/2018 12:16 PM, Hari Prasanth Loganathan wrote: >>>> >>>> No Juan, It is not working with any benchmark / application tool. It >>>>> fails >>>>> with the same error SSL handshake failed (5). >>>>> >>>>> Could you let me know the configuration of Apache web server is >>>>> correct? >>>>> >>>>> Thanks, >>>>> Hari >>>>> >>>>> On Thu, Mar 8, 2018 at 1:08 AM, Juan Hern?ndez >>>>> wrote: >>>>> >>>>> If you are still having problems I am inclined to think that it is a >>>>> >>>>>> client issue. For example, I'd try to remove the "-k" option from the >>>>>> "ab" >>>>>> command. If you use keep alive the server may decide anyhow to close >>>>>> the >>>>>> connection after certain number of requests, even if the client asks >>>>>> to >>>>>> keep it alive. Some clients don't handle that perfectly, "ab" may have >>>>>> that >>>>>> problem. If that makes the SSL error messages disappear then I think >>>>>> you >>>>>> can safely ignore them, and restore the "-k" option, if you want. >>>>>> >>>>>> On 03/07/2018 07:30 PM, Hari Prasanth Loganathan wrote: >>>>>> >>>>>> Thanks Juan for your response. Appreciate it. >>>>>> >>>>>>> But for some reason still, I am facing the same SSL handshake failed >>>>>>> (5). >>>>>>> Could you please check this configuration and let me know the issue >>>>>>> in >>>>>>> my >>>>>>> ovirt engine environment. >>>>>>> >>>>>>> *Configuration of Apache server:* >>>>>>> >>>>>>> >>>>>>> 1) httpd version, >>>>>>> >>>>>>> # httpd -v >>>>>>> Server version: Apache/2.4.6 (CentOS) >>>>>>> Server built: Oct 19 2017 20:39:16 >>>>>>> >>>>>>> 2) I checked the status using the following command, >>>>>>> >>>>>>> # systemctl status httpd -l >>>>>>> ? httpd.service - The Apache HTTP Server >>>>>>> Loaded: loaded (/usr/lib/systemd/system/httpd.service; >>>>>>> enabled; >>>>>>> vendor >>>>>>> preset: disabled) >>>>>>> Active: active (running) since Wed 2018-03-07 23:46:32 IST; >>>>>>> 1min >>>>>>> 55s >>>>>>> ago >>>>>>> Docs: man:httpd(8) >>>>>>> man:apachectl(8) >>>>>>> Process: 4351 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, >>>>>>> status=0/SUCCESS) >>>>>>> Main PID: 4359 (httpd) >>>>>>> Status: "Total requests: 264; Current requests/sec: 0.1; >>>>>>> Current >>>>>>> traffic: 204 B/sec" >>>>>>> CGroup: /system.slice/httpd.service >>>>>>> ??4359 /usr/sbin/httpd -DFOREGROUND >>>>>>> ??4360 /usr/sbin/httpd -DFOREGROUND >>>>>>> ??4362 /usr/sbin/httpd -DFOREGROUND >>>>>>> ??5100 /usr/sbin/httpd -DFOREGROUND >>>>>>> ??5386 /usr/sbin/httpd -DFOREGROUND >>>>>>> ??5415 /usr/sbin/httpd -DFOREGROUND >>>>>>> ??5416 /usr/sbin/httpd -DFOREGROUND >>>>>>> >>>>>>> 3) Since the httpd is pointing to the path : >>>>>>> /usr/lib/systemd/system/httpd.service >>>>>>> >>>>>>> vi /usr/lib/systemd/system/httpd.service >>>>>>> >>>>>>> [Unit] >>>>>>> Description=The Apache HTTP Server >>>>>>> After=network.target remote-fs.target nss-lookup.target >>>>>>> Documentation=man:httpd(8) >>>>>>> Documentation=man:apachectl(8) >>>>>>> >>>>>>> [Service] >>>>>>> Type=notify >>>>>>> EnvironmentFile=/etc/sysconfig/httpd >>>>>>> ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND >>>>>>> ExecReload=/usr/sbin/httpd $OPTIONS -k graceful >>>>>>> ExecStop=/bin/kill -WINCH ${MAINPID} >>>>>>> # We want systemd to give httpd some time to finish gracefully, but >>>>>>> still >>>>>>> want >>>>>>> # it to kill httpd after TimeoutStopSec if something went wrong >>>>>>> during >>>>>>> the >>>>>>> # graceful stop. Normally, Systemd sends SIGTERM signal right after >>>>>>> the >>>>>>> # ExecStop, which would kill httpd. 
We are sending useless SIGCONT here to
>>>>>>> # give httpd time to finish.
>>>>>>> KillSignal=SIGCONT
>>>>>>> PrivateTmp=true
>>>>>>>
>>>>>>> [Install]
>>>>>>> WantedBy=multi-user.target
>>>>>>>
>>>>>>>
>>>>>>> 4) As per the above command I found the env file is available at
>>>>>>> '/etc/sysconfig/httpd'
>>>>>>>
>>>>>>> vi /etc/sysconfig/httpd
>>>>>>>
>>>>>>> #
>>>>>>> # This file can be used to set additional environment variables for
>>>>>>> # the httpd process, or pass additional options to the httpd
>>>>>>> # executable.
>>>>>>> #
>>>>>>> # Note: With previous versions of httpd, the MPM could be changed by
>>>>>>> # editing an "HTTPD" variable here. With the current version, that
>>>>>>> # variable is now ignored. The MPM is a loadable module, and the
>>>>>>> # choice of MPM can be changed by editing the configuration file
>>>>>>> # /etc/httpd/conf.modules.d/00-mpm.conf
>>>>>>> #
>>>>>>>
>>>>>>> #
>>>>>>> # To pass additional options (for instance, -D definitions) to the
>>>>>>> # httpd binary at startup, set OPTIONS here.
>>>>>>> #
>>>>>>> #OPTIONS=
>>>>>>>
>>>>>>> #
>>>>>>> # This setting ensures the httpd process is started in the "C" locale
>>>>>>> # by default. (Some modules will not behave correctly if
>>>>>>> # case-sensitive string comparisons are performed in a different
>>>>>>> # locale.)
>>>>>>> #
>>>>>>> LANG=C
>>>>>>>
>>>>>>>
>>>>>>> 5) As per the above command, I found that the conf file is available in
>>>>>>> the path: /etc/httpd/conf.modules.d/00-mpm.conf
>>>>>>>
>>>>>>> vi /etc/httpd/conf.modules.d/00-mpm.conf
>>>>>>>
>>>>>>> # Select the MPM module which should be used by uncommenting exactly
>>>>>>> # one of the following LoadModule lines:
>>>>>>>
>>>>>>> # prefork MPM: Implements a non-threaded, pre-forking web server
>>>>>>> # See: http://httpd.apache.org/docs/2.4/mod/prefork.html
>>>>>>> #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
>>>>>>>
>>>>>>> # worker MPM: Multi-Processing Module implementing a hybrid
>>>>>>> # multi-threaded multi-process web server
>>>>>>> # See: http://httpd.apache.org/docs/2.4/mod/worker.html
>>>>>>> #
>>>>>>> LoadModule mpm_worker_module modules/mod_mpm_worker.so
>>>>>>>
>>>>>>> # event MPM: A variant of the worker MPM with the goal of consuming
>>>>>>> # threads only for connections with active processing
>>>>>>> # See: http://httpd.apache.org/docs/2.4/mod/event.html
>>>>>>> #
>>>>>>> #LoadModule mpm_event_module modules/mod_mpm_event.so
>>>>>>>
>>>>>>>
>>>>>>> ServerLimit 1000
>>>>>>> MaxRequestWorkers 1000
>>>>>>>
>>>>>>>
>>>>>>> As per your comment, I enabled 'LoadModule mpm_worker_module
>>>>>>> modules/mod_mpm_worker.so' with ServerLimit and MaxRequestWorkers set to
>>>>>>> 1000; still I am facing the issue for the following command in the Apache
>>>>>>> benchmark test.
>>>>>>>
>>>>>>> Completed 100 requests
>>>>>>> SSL handshake failed (5).
>>>>>>> SSL handshake failed (5).
>>>>>>> SSL handshake failed (5).
>>>>>>> SSL handshake failed (5).
>>>>>>> SSL handshake failed (5).
>>>>>>> SSL handshake failed (5).
>>>>>>>
>>>>>>>
>>>>>>> NOTE: It always scales when I have concurrent requests below 400
>>>>>>>
>>>>>>> What is wrong in this Apache configuration, and why is the SSL handshake
>>>>>>> failing for concurrent requests above 400?
>>>>>>> >>>>>>> Thanks, >>>>>>> Hari >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Wed, Mar 7, 2018 at 9:20 PM, Juan Hern?ndez >>>>>>> wrote: >>>>>>> >>>>>>> It means that with the default configuration the Apache web server >>>>>>> can't >>>>>>> >>>>>>> serve more than 256 concurrent connections. This applies to any >>>>>>>> application >>>>>>>> that uses Apache as the web frontend, not just to oVirt. If you >>>>>>>> want to >>>>>>>> change that you have to change the MaxRequestWorkers and ServerLimit >>>>>>>> parameters, as explained here: >>>>>>>> >>>>>>>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>>>>>>> axrequestworkers >>>>>>>> >>>>>>>> So, go to your oVirt engine machine and create a >>>>>>>> /etc/httpd/conf.d/my.conf >>>>>>>> file with this content: >>>>>>>> >>>>>>>> MaxRequestWorkers 1000 >>>>>>>> ServerLimit 1000 >>>>>>>> >>>>>>>> Then restart the Apache server: >>>>>>>> >>>>>>>> # systemctl restart httpd >>>>>>>> >>>>>>>> Then your web server should be able to handle 1000 concurrent >>>>>>>> requests, >>>>>>>> and you will probably start to find other limits, like the amount of >>>>>>>> memory >>>>>>>> and CPU that those 1000 Apache child processes will consume, the >>>>>>>> number >>>>>>>> of >>>>>>>> threads in the JBoss application server, the number of connections >>>>>>>> to >>>>>>>> the >>>>>>>> database, etc. >>>>>>>> >>>>>>>> Let me insist a bit that if you base your benchmark solely on the >>>>>>>> number >>>>>>>> of concurrent requests or connections that the server can handle you >>>>>>>> may >>>>>>>> end up with meaningless results, as a real world application >>>>>>>> can/should >>>>>>>> use >>>>>>>> the server much better than that. >>>>>>>> >>>>>>>> On 03/07/2018 04:33 PM, Hari Prasanth Loganathan wrote: >>>>>>>> >>>>>>>> With the default configuration of the web server it is impossible to >>>>>>>> >>>>>>>> handle >>>>>>>>> more than 256 *connections* simultaneously. I guess that "ab" is >>>>>>>>> opening a >>>>>>>>> connection for each concurrent request, so when you reach request >>>>>>>>> 257 >>>>>>>>> the >>>>>>>>> web server will just reject the connection, there is nothing that >>>>>>>>> the >>>>>>>>> JBoss >>>>>>>>> can do about it; you have to increase the number of connections >>>>>>>>> supported >>>>>>>>> by the web server. >>>>>>>>> >>>>>>>>> *So Does it mean that oVirt cannot serve more than 257 request? * >>>>>>>>> >>>>>>>>> My question is, If its possible How to scale this and what is the >>>>>>>>> configuration we need to change? >>>>>>>>> >>>>>>>>> Also, we are taking a benchmark in using oVirt, So I need to find >>>>>>>>> the >>>>>>>>> maximum possible oVirt request. So please let me know the >>>>>>>>> configuration >>>>>>>>> tuning for oVirt to achieve maximum concurrent request. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Hari >>>>>>>>> >>>>>>>>> On Wed, Mar 7, 2018 at 7:25 PM, Juan Hern?ndez < >>>>>>>>> jhernand at redhat.com> >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>> With the default configuration of the web server it is impossible >>>>>>>>> to >>>>>>>>> >>>>>>>>> handle more than 256 *connections* simultaneously. I guess that >>>>>>>>> "ab" >>>>>>>>> >>>>>>>>>> is >>>>>>>>>> opening a connection for each concurrent request, so when you >>>>>>>>>> reach >>>>>>>>>> request >>>>>>>>>> 257 the web server will just reject the connection, there is >>>>>>>>>> nothing >>>>>>>>>> that >>>>>>>>>> the JBoss can do about it; you have to increase the number of >>>>>>>>>> connections >>>>>>>>>> supported by the web server. 
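A small client-side probe can make this failure mode more legible, by separating "the TLS/TCP connection was refused or aborted" (what ab reports as "SSL handshake failed") from "the request reached Apache and got an HTTP error such as 503". The sketch below is illustrative only, assuming the Python requests package and placeholder values for the engine host, token and CA file:

    # Illustrative probe: fire N concurrent requests with one Bearer token
    # and count TLS/connection failures separately from HTTP status codes.
    # Host, token and CA path are placeholders, not values from this thread.
    import collections
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = 'https://engine.example.com/ovirt-engine/api/vms'  # hypothetical
    TOKEN = 'SSOTOKEN'  # placeholder, as in the ab command above
    CONCURRENCY = 500

    def probe(_):
        try:
            response = requests.get(
                URL,
                headers={'Accept': 'application/json',
                         'Authorization': 'Bearer ' + TOKEN},
                verify='ca.pem',  # engine CA certificate; placeholder path
                timeout=30,
            )
            return 'http %d' % response.status_code
        except requests.exceptions.SSLError:
            return 'tls failure'  # analogous to ab's "SSL handshake failed"
        except requests.exceptions.ConnectionError:
            return 'connection refused/reset'

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        counts = collections.Counter(pool.map(probe, range(CONCURRENCY)))

    print(counts)  # e.g. Counter({'http 200': 480, 'tls failure': 20})

If the failures land in the TLS/connection buckets, the Apache connection limits (MaxRequestWorkers/ServerLimit) are the first suspect; if they are HTTP 503s, the connection was accepted and the bottleneck sits behind Apache, in the JBoss thread pool or database connections mentioned earlier in the thread.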
>>>>>>>>>> >>>>>>>>>> Or else you may want to re-consider why you want to use 1000 >>>>>>>>>> simultaneous >>>>>>>>>> connections. It may be OK for a performance test, but there are >>>>>>>>>> better >>>>>>>>>> ways >>>>>>>>>> to squeeze performance. For example, you could consider using HTTP >>>>>>>>>> pipelining, which is much more friendly for the server than so >>>>>>>>>> many >>>>>>>>>> connections. This is what we use when we need to send a large >>>>>>>>>> number >>>>>>>>>> of >>>>>>>>>> requests from other systems. There are examples of how to do that >>>>>>>>>> with >>>>>>>>>> the >>>>>>>>>> Python and Ruby SDKs here: >>>>>>>>>> >>>>>>>>>> Python: >>>>>>>>>> >>>>>>>>>> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/ >>>>>>>>>> examples/asynchronous_inventory.py >>>>>>>>>> >>>>>>>>>> Ruby: >>>>>>>>>> >>>>>>>>>> https://github.com/oVirt/ovirt-engine-sdk-ruby/blob/master/ >>>>>>>>>> sdk/examples/asynchronous_inventory.rb >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On 03/07/2018 02:43 PM, Hari Prasanth Loganathan wrote: >>>>>>>>>> >>>>>>>>>> Hi Juan, >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Thanks for the response. >>>>>>>>>>> >>>>>>>>>>> I agree web server can handle only limited number of concurrent >>>>>>>>>>> requests. >>>>>>>>>>> But Why it is failing with SSL handshake failure for few >>>>>>>>>>> requests, >>>>>>>>>>> Can't >>>>>>>>>>> the JBOSS wait and serve the request? We can spare the delay but >>>>>>>>>>> not >>>>>>>>>>> with >>>>>>>>>>> the request fails. So Is there a configuration in oVirt which >>>>>>>>>>> can be >>>>>>>>>>> tuned >>>>>>>>>>> to achieve this? >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Hari >>>>>>>>>>> >>>>>>>>>>> On Wed, Mar 7, 2018 at 7:05 PM, Juan Hern?ndez < >>>>>>>>>>> jhernand at redhat.com >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>> The first thing you will need to change for such a test is the >>>>>>>>>>> number >>>>>>>>>>> of >>>>>>>>>>> >>>>>>>>>>> simultaneous connections accepted by the Apache web server: by >>>>>>>>>>> default >>>>>>>>>>> >>>>>>>>>>> the >>>>>>>>>>>> max is 256. See the Apache documentation here: >>>>>>>>>>>> >>>>>>>>>>>> https://httpd.apache.org/docs/2.4/mod/mpm_common.html#m >>>>>>>>>>>> axrequestworkers >>>>>>>>>>>> >>>>>>>>>>>> In addition I also suggest that you consider using the "worker" >>>>>>>>>>>> multi-processing module instead of the "prefork", as it usually >>>>>>>>>>>> works >>>>>>>>>>>> better when talking to a Java application server, because it >>>>>>>>>>>> re-uses >>>>>>>>>>>> connections better. >>>>>>>>>>>> >>>>>>>>>>>> On 03/07/2018 02:20 PM, Hari Prasanth Loganathan wrote: >>>>>>>>>>>> >>>>>>>>>>>> Hi Team, >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> *Description of problem:* >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> I am trying to achieve 1000 concurrent request to oVirt. What >>>>>>>>>>>>> are >>>>>>>>>>>>> the >>>>>>>>>>>>> tunable parameters to achieve this? >>>>>>>>>>>>> >>>>>>>>>>>>> I tried to perform the benchmarking for ovirt engine using >>>>>>>>>>>>> Apache >>>>>>>>>>>>> benchmark >>>>>>>>>>>>> using the same SSO token. 
>>>>>>>>>>>>> >>>>>>>>>>>>> ab -n 1000 -c 500 -k -H "accept: application/json" -H >>>>>>>>>>>>> "Authorization: >>>>>>>>>>>>> Bearer SSOTOKEN" https://172.30.56.70/ovirt-engine/ >>>>>>>>>>>>> >>>>>>>>>>>> b-9ff1-076fc07ebf50/statistics> >>>>>>>>>>>>> >>>>>>>>>>>>> When the number of concurrent request is 500, we are getting >>>>>>>>>>>>> more >>>>>>>>>>>>> than >>>>>>>>>>>>> 100 >>>>>>>>>>>>> failures with the following error, >>>>>>>>>>>>> >>>>>>>>>>>>> SSL read failed (1) - closing connection >>>>>>>>>>>>> 139620982339352:error: >>>>>>>>>>>>> >>>>>>>>>>>>> NOTE: It is scaling for concurrent request below 500. >>>>>>>>>>>>> >>>>>>>>>>>>> I used the profiler to get the memory and CPU and it seems very >>>>>>>>>>>>> less, >>>>>>>>>>>>> >>>>>>>>>>>>> PID USER PR NI VIRT RES SHR S %CPU %MEM >>>>>>>>>>>>> TIME+ >>>>>>>>>>>>> COMMAND >>>>>>>>>>>>> 30413 ovirt 20 0 4226664 882396 6776 S 126.0 23.0 >>>>>>>>>>>>> 27:48.53 >>>>>>>>>>>>> java >>>>>>>>>>>>> >>>>>>>>>>>>> Configuration of the machine in which Ovirt is deployed : >>>>>>>>>>>>> >>>>>>>>>>>>> RAM - 4GB, >>>>>>>>>>>>> Hard disk - 100GB, >>>>>>>>>>>>> core processor - 2, >>>>>>>>>>>>> OS - Cent7.x. >>>>>>>>>>>>> >>>>>>>>>>>>> In which 2GB is allocated to oVirt. >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Version-Release number of selected component (if applicable): >>>>>>>>>>>>> >>>>>>>>>>>>> 4.2.2 >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> How reproducible: >>>>>>>>>>>>> >>>>>>>>>>>>> If the number of concurrent requests are above 500, we are >>>>>>>>>>>>> easily >>>>>>>>>>>>> facing >>>>>>>>>>>>> this issue. >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> *Actual results:* >>>>>>>>>>>>> >>>>>>>>>>>>> SSL read failed (1) - closing connection >>>>>>>>>>>>> 139620982339352:error: >>>>>>>>>>>>> >>>>>>>>>>>>> *Expected results:* >>>>>>>>>>>>> >>>>>>>>>>>>> Request success. >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> Hari >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>> Users mailing list >>>>>>>>>>>>> Users at ovirt.org >>>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -- DISCLAIMER The information in this e-mail is confidential and may be subject to legal privilege. It is intended solely for the addressee. Access to this e-mail by anyone else is unauthorized. If you have received this communication in error, please address with the subject heading "Received in error," send to it at msystechnologies.com, then delete the e-mail and destroy any copies of it. If you are not the intended recipient, any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it, is prohibited and may be unlawful. The views, opinions, conclusions and other information expressed in this electronic mail and any attachments are not given or endorsed by the company unless otherwise indicated by an authorized representative independent of this message. MSys cannot guarantee that e-mail communications are secure or error-free, as information could be intercepted, corrupted, amended, lost, destroyed, arrive late or incomplete, or contain viruses, though all reasonable precautions have been taken to ensure no viruses are present in this e-mail. 
As our company cannot accept responsibility for any loss or damage arising from the use of this e-mail or attachments we recommend that you subject these to your virus checking procedures prior to use -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Thu Mar 8 13:24:08 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Thu, 8 Mar 2018 14:24:08 +0100 Subject: [ovirt-users] Pre-snapshot scripts to run before live snapshot In-Reply-To: References: Message-ID: On Thu, Mar 8, 2018 at 9:47 AM, Yaniv Kaul wrote: > > >> >> Now I would like to do something similar for a Windows 2008 R2 x64 VM. >> > > Windows is somewhat different. In fact, it's a bit better than Linux > (ARGH! but it's true) with its support for VSS - an API for applications to > register to events such as backup. > You should have the QEMU guest agent VSS provider installed (Note: need to > see where's the latest bits - I found[1]). > > Then, if your application supports VSS, you are all good (I believe). > Y. > > [1] https://fedorapeople.org/groups/virt/virtio-win/direct- > downloads/archive-qemu-ga/qemu-ga-win-7.4.5-1/ > >> >> Yes, I see that there are some VSS events intercepted in event viewer when I run a snapshot. But in my particulr case I have an Oracle database used for Business Intelligence that for performance reasons is not in archive log mode, so it can't interact with VSS layer. Due to its nature I can shutdown this database during the evening and then reopen it before the ETL processing happens during the night. So in that time frame I would like to have a pre-snapshot operation of shutdown db and post-snapshot operation of start db. And then I clone the snapshot and export it in case I have to restore the "blob" as a consistent whole It is for this reason that I'm trying to verify if the freeze-hook is usable also in WIndows environments (based on some threads I find it should be...) Thanks, Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From sabose at redhat.com Thu Mar 8 13:32:40 2018 From: sabose at redhat.com (Sahina Bose) Date: Thu, 8 Mar 2018 19:02:40 +0530 Subject: [ovirt-users] Gluster Snapshot Schedule Failing on 4.2.1 In-Reply-To: References: Message-ID: Thanks for your report, we will take a look. Could you attach the engine.log to the bug? On Wed, Mar 7, 2018 at 11:20 PM, Hesham Ahmed wrote: > I am having issues with the Gluster Snapshot UI since upgrade to 4.2 and > now with 4.2.1. The UI doesn't appear as I explained in the bug report: > https://bugzilla.redhat.com/show_bug.cgi?id=1530186 > > I can now see the UI when I clear the cookies and try the snapshots UI > from within the volume details screen, however scheduled snapshots are not > being created. The engine log shows a single error: > > 2018-03-07 20:00:00,051+03 ERROR [org.ovirt.engine.core.utils.timer.JobWrapper] > (QuartzOvirtDBScheduler1) [12237b15] Failed to invoke scheduled method > onTimer: null > > Anyone scheduling snapshots successfully wtih 4.2? > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sabose at redhat.com Thu Mar 8 13:33:58 2018 From: sabose at redhat.com (Sahina Bose) Date: Thu, 8 Mar 2018 19:03:58 +0530 Subject: [ovirt-users] ovirt 4.2 gluster configuration In-Reply-To: References: Message-ID: On Wed, Feb 14, 2018 at 11:08 PM, Edoardo Mazza wrote: > Hi all, > Scenario: > 3 nodes each with 3 interfaces: 1 for management, 1 for gluster, 1 for VMs > Management interface has it own name and its own ip (es. name = ov1, ip= > 192.168.1.1/24), the same is for gluster interface which has its own name > and its own ip (es. name = gluster1, ip= 192.168.2.1/24). > > When configuring bricks from Ovirt Management tools I get the error: "no > uuid for the name ov1". > Could you provide the relevant log from engine.log? > Network for gluster communication has been defined on network/interface > gluster1. > > What's wrong with this configuration? > > Thanks in advance. > > Edoardo > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fernando.frediani at upx.com Thu Mar 8 13:51:41 2018 From: fernando.frediani at upx.com (FERNANDO FREDIANI) Date: Thu, 8 Mar 2018 10:51:41 -0300 Subject: [ovirt-users] Very Slow Console Performance - Windows 10 In-Reply-To: References: <781778b2-1d81-4a55-2060-ea570e83fbd1@upx.com> <43c4790c-14d2-dbb7-d074-d8d47d4db913@upx.com> <60915132-7486-d25c-4e20-11ab0a4aa8d9@upx.com> <536C27CE-F311-4344-8067-20D217FC6D79@redhat.com> Message-ID: Hello Gianluca. As I mentioned previously I am not sure it has anything to do with SPICE at all, but with the amount of memory the VM has assigned to it. Proff of it is that when you access with via any Remote Desktop protocol it remains slow as if the amount of video memory wasnt being enough and have seen it crashing several times as well. Fernando On 07/03/2018 16:59, Gianluca Cecchi wrote: > On Wed, Mar 7, 2018 at 7:43 PM, Michal Skrivanek > > wrote: > > > >> On 7 Mar 2018, at 14:03, FERNANDO FREDIANI >> > wrote: >> >> Hello Gianluca >> >> Resurrecting this topic. I made the changes as per your >> instructions below on the Engine configuration but it had no >> effect on the VM graphics memory. Is it necessary to restart the >> Engine after adding the 20-overload.properties file ? Also I >> don't think is necessary to do any changes on the hosts right ? >> > correct on both > > > > Hello Fernando and Michal, > at that time I was doing some tests both with plain virt-manager and > oVirt for some Windows 10 VMs. > More recently I haven't done anything in that regard again, unfortunately. > After you have done what you did suggest yourself and Michal > confirmed, then you can test powering off and then on again the VM (so > that the new qemu-kvm process starts with the new parameters) and let > us know if you enjoy better experience, so that we can ask for > adoption as a default (eg for VMs configured as desktops) or as a > custom property to give > >> On the recent updates has anything changed in the terms on how to >> change the video memory assigned to any given VM. I guess it is >> something that has been forgotten overtime, specially if you are >> running a VDI-like environment whcih depends very much on the >> video memory. >> > there were no changes recently, these are the most recent > guidelines we got from SPICE people. They might be out of date. 
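For readers hitting this thread later: the "20-overload.properties" being discussed is an osinfo override file under /etc/ovirt-engine/osinfo.conf.d/ on the engine machine. A sketch of what such a file can look like follows; the property key is an assumption from memory and must be checked against /usr/share/ovirt-engine/conf/osinfo-defaults.properties for your engine version before use.

    # /etc/ovirt-engine/osinfo.conf.d/20-overload.properties
    # Assumed key name -- verify against osinfo-defaults.properties.
    # Doubles the SPICE video RAM for Windows 10 x64 guests.
    os.windows_10x64.devices.display.vramMultiplier.value = 2

As noted above, the engine has to be restarted after adding the file, no host-side changes are needed, and a VM only picks up the new values after a full power off and on, at which point they should be visible in the qemu-kvm command line on the host.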
> Would be good to raise that specifically (the performance > difference for default sizes) to them, can you narrow it down and > post to spice-devel at lists.freedesktop.org > ? > > > > This could be very useful too > > Cheers, > Gianluca > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hariprasanth.l at msystechnologies.com Thu Mar 8 13:52:07 2018 From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan) Date: Thu, 8 Mar 2018 19:22:07 +0530 Subject: [ovirt-users] Limitation in using Ovirt SSO Token Message-ID: Hi Team, I would like to know, Is there any limitation in using the same sso token for multiple request. I observe that when I use the same sso token for more than 900 HTTP Rest request, the application went down. Is there any limitation in using same SSO token? I could see that my status is showing as ACTIVE and memory and CPU seem fine. Still, the oVirt is not reachable and I need to restart it to access again. sudo systemctl status ovirt-engine.service -l ? ovirt-engine.service - oVirt Engine Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2018-03-08 19:15:10 IST; 30s ago Main PID: 10370 (ovirt-engine.py) CGroup: /system.slice/ovirt-engine.service ??10370 /usr/bin/python /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py --redirect-output --systemd=notify start ??10423 ovirt-engine -server -XX:+TieredCompilation -Xms5961M -Xmx5961M -Djava.awt.headless=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djsse.enableSNIExtension=false -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/ovirt-engine/dump -Djava.util.logging.manager=org.jboss.logmanager -Dlogging.configuration=file:///var/lib/ovirt-engine/jboss_runtime/config/ovirt-engine-logging.properties -Dorg.jboss.resolver.warning=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djboss.server.default.config=ovirt-engine -Djboss.home.dir=/usr/share/ovirt-engine-wildfly -Djboss.server.base.dir=/usr/share/ovirt-engine -Djboss.server.data.dir=/var/lib/ovirt-engine -Djboss.server.log.dir=/var/log/ovirt-engine -Djboss.server.config.dir=/var/lib/ovirt-engine/jboss_runtime/config -Djboss.server.temp.dir=/var/lib/ovirt-engine/jboss_runtime/tmp -Djboss.controller.temp.dir=/var/lib/ovirt-engine/jboss_runtime/tmp -jar /usr/share/ovirt-engine-wildfly/jboss-modules.jar -mp /usr/share/ovirt-engine/modules/common:/usr/share/ovirt-engine-extension-aaa-jdbc/modules:/usr/share/ovirt-engine-wildfly/modules -jaxpmodule javax.xml.jaxp-provider org.jboss.as.standalone -c ovirt-engine.xml Mar 08 19:15:10 ovirtengine.localdomain systemd[1]: Starting oVirt Engine... Mar 08 19:15:10 ovirtengine.localdomain ovirt-engine.py[10370]: 2018-03-08 19:15:10,228+0530 ovirt-engine: INFO _detectJBossVersion:187 Detecting JBoss version. 
Running: /usr/lib/jvm/jre/bin/java ['ovirt-engine-version', '-server', '-XX:+TieredCompilation', '-Xms5961M', '-Xmx5961M', '-Djava.awt.headless=true', '-Dsun.rmi.dgc.client.gcInterval=3600000', '-Dsun.rmi.dgc.server.gcInterval=3600000', '-Djsse.enableSNIExtension=false', '-XX:+HeapDumpOnOutOfMemoryError', '-XX:HeapDumpPath=/var/log/ovirt-engine/dump', '-Djava.util.logging.manager=org.jboss.logmanager', '-Dlogging.configuration=file:///var/lib/ovirt-engine/jboss_runtime/config/ovirt-engine-logging.properties', '-Dorg.jboss.resolver.warning=true', '-Djboss.modules.system.pkgs=org.jboss.byteman', '-Djboss.server.default.config=ovirt-engine', '-Djboss.home.dir=/usr/share/ovirt-engine-wildfly', '-Djboss.server.base.dir=/usr/share/ovirt-engine', '-Djboss.server.data.dir=/var/lib/ovirt-engine', '-Djboss.server.log.dir=/var/log/ovirt-engine', '-Djboss.server.config.dir=/var/lib/ovirt-engine/jboss_runtime/config', '-Djboss.server.temp.dir=/var/lib/ovirt-engine/jboss_runtime/tmp', '-Djboss.controller.temp.dir=/var/lib/ovirt-engine/jboss_runtime/tmp', '-jar', '/usr/share/ovirt-engine-wildfly/jboss-modules.jar', '-mp', '/usr/share/ovirt-engine/modules/common:/usr/share/ovirt-engine-extension-aaa-jdbc/modules:/usr/share/ovirt-engine-wildfly/modules', '-jaxpmodule', 'javax.xml.jaxp-provider', 'org.jboss.as.standalone', '-v']
Mar 08 19:15:10 ovirtengine.localdomain ovirt-engine.py[10370]: 2018-03-08 19:15:10,668+0530 ovirt-engine: INFO _detectJBossVersion:207 Return code: 1, | stdout: '[u'WildFly Full 10.1.0.Final (WildFly Core 2.2.0.Final)'], | stderr: '[]'
Mar 08 19:15:10 ovirtengine.localdomain systemd[1]: Started oVirt Engine.

Any help would be appreciated.

Thanks,
Hari

From sbonazzo at redhat.com Thu Mar 8 14:21:16 2018
From: sbonazzo at redhat.com (sbonazzo at redhat.com)
Date: Thu, 08 Mar 2018 14:21:16 +0000
Subject: [ovirt-users] Updated invitation: oVirt System Test Hackathon @ Tue Mar 13, 2018 (users@ovirt.org)
Message-ID: <001a1141225a8625360566e763ad@google.com>

This event has been changed.
Title: oVirt System Test Hackathon

Please join us in an ovirt-system-tests hackathon pushing new tests and improving existing ones for testing Hosted Engine.
Git repo is available: https://gerrit.ovirt.org/gitweb?p=ovirt-system-tests.git;a=summary
Integration, Node and CI team will be available for helping in the effort and reviewing patches.
Here's a public trello board tracking the efforts: https://trello.com/b/Pp76YoRL (changed)

When: Tue Mar 13, 2018
Where: #ovirt IRC channel
Calendar: users at ovirt.org
Who:
* sbonazzo at redhat.com - organizer
* devel at ovirt.org
* users at ovirt.org

From michal.skrivanek at redhat.com Thu Mar 8 14:25:16 2018
From: michal.skrivanek at redhat.com (Michal Skrivanek)
Date: Thu, 8 Mar 2018 15:25:16 +0100
Subject: [ovirt-users] Very Slow Console Performance - Windows 10
In-Reply-To:
References: <781778b2-1d81-4a55-2060-ea570e83fbd1@upx.com> <43c4790c-14d2-dbb7-d074-d8d47d4db913@upx.com> <60915132-7486-d25c-4e20-11ab0a4aa8d9@upx.com> <536C27CE-F311-4344-8067-20D217FC6D79@redhat.com>
Message-ID: <0791F8F1-6E2F-4EC0-AD11-3877BFD6CC18@redhat.com>

> On 8 Mar 2018, at 14:51, FERNANDO FREDIANI wrote:
>
> Hello Gianluca.
>
> As I mentioned previously I am not sure it has anything to do with SPICE at all, but with the amount of memory the VM has assigned to it. Proof of it is that when you access it via any Remote Desktop protocol it remains slow, as if the amount of video memory wasn't enough, and I have seen it crashing several times as well.
>
It does, because we just follow their recommendations :)

> Fernando
>
> On 07/03/2018 16:59, Gianluca Cecchi wrote:
>> On Wed, Mar 7, 2018 at 7:43 PM, Michal Skrivanek wrote:
>>
>>> On 7 Mar 2018, at 14:03, FERNANDO FREDIANI wrote:
>>>
>>> Hello Gianluca
>>>
>>> Resurrecting this topic. I made the changes as per your instructions below on the Engine configuration but it had no effect on the VM graphics memory. Is it necessary to restart the Engine after adding the 20-overload.properties file? Also I don't think it is necessary to do any changes on the hosts, right?
>>>
>> correct on both
>>
>> Hello Fernando and Michal,
>> at that time I was doing some tests both with plain virt-manager and oVirt for some Windows 10 VMs.
>> More recently I haven't done anything in that regard again, unfortunately.
>> After you have done what you did suggest yourself and Michal confirmed, then you can test powering off and then on again the VM (so that the new qemu-kvm process starts with the new parameters) and let us know if you enjoy better experience, so that we can ask for adoption as a default (eg for VMs configured as desktops) or as a custom property to give >> >>> On the recent updates has anything changed in the terms on how to change the video memory assigned to any given VM. I guess it is something that has been forgotten overtime, specially if you are running a VDI-like environment whcih depends very much on the video memory. >>> >> there were no changes recently, these are the most recent guidelines we got from SPICE people. They might be out of date. Would be good to raise that specifically (the performance difference for default sizes) to them, can you narrow it down and post to spice-devel at lists.freedesktop.org ? >> >> >> This could be very useful too >> >> Cheers, >> Gianluca >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.skrivanek at redhat.com Thu Mar 8 14:28:54 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Thu, 8 Mar 2018 15:28:54 +0100 Subject: [ovirt-users] Importing VM fails with "No space left on device" In-Reply-To: <2c2c247f-a16e-5f05-aa49-7f1bae24024b@ecarnot.net> References: <2c2c247f-a16e-5f05-aa49-7f1bae24024b@ecarnot.net> Message-ID: > On 6 Mar 2018, at 11:41, Nicolas Ecarnot wrote: > > Hello, > > When importing a VM, I'm facing the know bug : > https://access.redhat.com/solutions/2770791 > > QImgError: ecode=1, stdout=[], stderr=['qemu-img: error while writing sector 93569024: No space left on device' > > The difference between my case and what is described in the RH webpage is that I have no "Failed to flush the refcount block cache". 
> > Here is what I see : > >> ecfbd1a4-f9d2-463a-ade6-def5bd217b43::DEBUG::2018-03-06 09:57:36,460::utils::718::root::(watchCmd) FAILED: = ['qemu-img: error while writing sector 205517952: No space left on device']; = 1 >> ecfbd1a4-f9d2-463a-ade6-def5bd217b43::ERROR::2018-03-06 09:57:36,460::image::865::Storage.Image::(copyCollapsed) conversion failure for volume ac08bc8d-1eea-449a-a102-cf763c6726c8 Traceback (most recent call last): >> File "/usr/share/vdsm/storage/image.py", line 860, in copyCollapsed >> volume.fmt2str(dstVolFormat)) >> File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 207, in convert >> raise QImgError(rc, out, err) >> QImgError: ecode=1, stdout=[], stderr=['qemu-img: error while writing sector 205517952: No space left on device'], message=None >> ecfbd1a4-f9d2-463a-ade6-def5bd217b43::ERROR::2018-03-06 09:57:36,461::image::878::Storage.Image::(copyCollapsed) Unexpected error >> Traceback (most recent call last): >> File "/usr/share/vdsm/storage/image.py", line 866, in copyCollapsed >> raise se.CopyImageError(str(e)) >> CopyImageError: low level Image copy failed: ("ecode=1, stdout=[], stderr=['qemu-img: error while writing sector 205517952: No space left on device'], message=None",) > > I followed the advices in the RH webpage (check if the figures are correct between the qemu-img sizes and the meta-data file), and they seem to be correct : > > root at serv-hv-adm30:/etc# qemu-img info /rhev/data-center/mnt/serv-lin-adm1.sdis.isere.fr\:_home_vmexport3/be2878c9-2c46-476b-bfae-8b02a4679022/images/a5d68d88-3b54-488d-a61e-7995a1906994/ac08bc8d-1eea-449a-a102-cf763c6726c8 > image: /rhev/data-center/mnt/serv-lin-adm1.sdis.isere.fr:_home_vmexport3/be2878c9-2c46-476b-bfae-8b02a4679022/images/a5d68d88-3b54-488d-a61e-7995a1906994/ac08bc8d-1eea-449a-a102-cf763c6726c8 > file format: qcow2 > virtual size: 98G (105226698752 bytes) > disk size: 97G > cluster_size: 65536 > Format specific information: > compat: 0.10 > refcount bits: 16 > > root at serv-hv-adm30:/etc# cat /rhev/data-center/mnt/serv-lin-adm1.sdis.isere.fr\:_home_vmexport3/be2878c9-2c46-476b-bfae-8b02a4679022/images/a5d68d88-3b54-488d-a61e-7995a1906994/ac08bc8d-1eea-449a-a102-cf763c6726c8.meta DOMAIN=be2878c9-2c46-476b-bfae-8b02a4679022 > CTIME=1520318755 > FORMAT=COW > DISKTYPE=1 > LEGALITY=LEGAL > SIZE=205520896 > VOLTYPE=LEAF > DESCRIPTION= > IMAGE=a5d68d88-3b54-488d-a61e-7995a1906994 > PUUID=00000000-0000-0000-0000-000000000000 > MTIME=0 > POOL_UUID= > TYPE=SPARSE > EOF > > > So I don't see what's wrong? 
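One consistency check can be done straight from the numbers quoted above: the SIZE field in the .meta file is expressed in 512-byte sectors, and it agrees exactly with the virtual size reported by qemu-img, so the volume geometry itself looks sane; meanwhile the failing write (sector 205517952) sits only about 1.4 MiB short of the end of the image, which is what you would expect if the destination genuinely ran out of space near the end of the copy. The arithmetic, for reference:

    # Values taken verbatim from the qemu-img and .meta output above.
    meta_size_sectors = 205520896      # SIZE= field (512-byte sectors)
    virtual_size_bytes = 105226698752  # qemu-img "virtual size"
    failing_sector = 205517952         # sector from the qemu-img error

    assert meta_size_sectors * 512 == virtual_size_bytes  # matches exactly
    short_of_end = (meta_size_sectors - failing_sector) * 512
    print('write failed %.1f MiB before end of image' % (short_of_end / 2.0**20))
    # -> write failed 1.4 MiB before end of image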
worth sharing with libguestfs users list, please attach the v2v logs (and versions used) so they can take a look > > -- > Nicolas ECARNOT > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > From sleviim at redhat.com Thu Mar 8 15:30:53 2018 From: sleviim at redhat.com (Shani Leviim) Date: Thu, 8 Mar 2018 17:30:53 +0200 Subject: [ovirt-users] Ghost Snapshot Disk In-Reply-To: <736794213.2945423.1520510210879.JavaMail.zimbra@cines.fr> References: <2109773819.1728939.1519370725788.JavaMail.zimbra@cines.fr> <1550150634.1827635.1519649308293.JavaMail.zimbra@cines.fr> <280580777.1830731.1519652234852.JavaMail.zimbra@cines.fr> <48154177.1832942.1519654691849.JavaMail.zimbra@cines.fr> <489433186.2545721.1520320950720.JavaMail.zimbra@cines.fr> <736794213.2945423.1520510210879.JavaMail.zimbra@cines.fr> Message-ID: Hi Lionel, Can you please share once again your engine log (or at least the relevant part where that error message occurred)? *Regards,* *Shani Leviim* On Thu, Mar 8, 2018 at 1:56 PM, Lionel Caignec wrote: > Hi, > > i finished to move my data, but now when i want to remove my old disk i > get stuck to this error : > "Cannot detach Virtual Machine Disk. The disk is already configured in a > snapshot. In order to detach it, remove the disk's snapshots". > But like i said before there is no snapshot anymore. > So what can i do? Delete manually inside database? So where? > Delete manually lvm volume, so how can i find the good one? > > Please help ;). > > Lionel > > ----- Mail original ----- > De: "Lionel Caignec" > ?: "Shani Leviim" > Cc: "users" > Envoy?: Mardi 6 Mars 2018 08:22:30 > Objet: Re: [ovirt-users] Ghost Snapshot Disk > > Hi, > > ok thank you for information (sorry for late response). > > I will do that. > > ----- Mail original ----- > De: "Shani Leviim" > ?: "Lionel Caignec" > Cc: "users" > Envoy?: Mardi 27 F?vrier 2018 14:19:45 > Objet: Re: [ovirt-users] Ghost Snapshot Disk > > Hi Lionel, > > Sorry for the delay in replying you. > > If it's possible from your side, syncing the data and destroying old disk > sounds about right. > > In addition, it seems like you're having this bug: > https://bugzilla.redhat.com/show_bug.cgi?id=1509629 > And it was fixed for version 4.1.9. and above. > > > > *Regards,* > > *Shani Leviim* > > On Mon, Feb 26, 2018 at 4:18 PM, Lionel Caignec wrote: > > > Ok so i reply myself, > > > > Version is 4.1.7.6-1 > > > > I just delete manually a snapshot previously created. But this is an io > > intensive vm, whit big disk (2,5To, and 5To). > > > > For the log, i cannot paste all my log on public list security reason, i > > will send you full in private. > > Here is an extract relevant to my error > > engine.log-20180210:2018-02-09 23:00:03,200+01 INFO > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > > (default task-312) [44402a8c-3196-43f0-ba33-307ea78e6f49] EVENT_ID: > > USER_CREATE_SNAPSHOT(45), Correlation ID: 44402a8c-3196-43f0-ba33- > 307ea78e6f49, > > Job ID: 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom > > ID: null, Custom Event ID: -1, Message: Snapshot > 'AUTO_7D_zz_nil_20180209_220002' > > creation for VM 'zz_nil' was initiated by snap_user at internal. 
> > engine.log-20180210:2018-02-09 23:01:06,578+01 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (DefaultQuartzScheduler5) [] EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_SUCCESS(68),
> > Correlation ID: 44402a8c-3196-43f0-ba33-307ea78e6f49, Job ID:
> > 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom ID: null,
> > Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002'
> > creation for VM 'zz_nil' has been completed.
> > engine.log-20180220:2018-02-19 17:01:23,800+01 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (default task-113) [] EVENT_ID: USER_REMOVE_SNAPSHOT(342), Correlation ID:
> > 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: c9a918a7-b00c-43cf-b6de-3659ac0765da,
> > Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Snapshot
> > 'AUTO_7D_zz_nil_20180209_220002' deletion for VM 'zz_nil' was initiated
> > by acaignec at ldap-cines-authz.
> > engine.log-20180221:2018-02-20 22:24:45,174+01 ERROR
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (DefaultQuartzScheduler6) [06a9efa4-1b80-4021-bf3e-41ecebe58a88]
> > EVENT_ID: USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Correlation ID:
> > 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: c9a918a7-b00c-43cf-b6de-3659ac0765da,
> > Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Failed to
> > delete snapshot 'AUTO_7D_zz_nil_20180209_220002' for VM 'zz_nil'.
> > 2018-02-20 22:24:46,266+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> > (DefaultQuartzScheduler3) [516079c3] SPMAsyncTask::PollTask: Polling task
> > '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command 'DestroyImage',
> > Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters')
> > returned status 'finished', result 'success'.
> > 2018-02-20 22:24:46,267+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> > (DefaultQuartzScheduler3) [516079c3] BaseAsyncTask::onTaskEndSuccess:
> > Task '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command
> > 'DestroyImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters')
> > ended successfully.
> > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> > (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endActionIfNecessary:
> > All tasks of command 'fe8c91f2-386b-4b3f-bbf3-aeda8e9244c6' has ended ->
> > executing 'endAction'
> > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> > (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endAction: Ending
> > action for '1' tasks (command ID: 'fe8c91f2-386b-4b3f-bbf3-aeda8e9244c6'):
> > calling endAction '.
> > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> > (org.ovirt.thread.pool-6-thread-20) [516079c3] CommandAsyncTask::endCommandAction
> > [within thread] context: Attempting to endAction 'DestroyImage',
> > 2018-02-20 22:24:46,269+01 ERROR [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> > (org.ovirt.thread.pool-6-thread-20) [516079c3] [within thread]: endAction
> > for action type DestroyImage threw an exception.:
> > java.lang.NullPointerException
> >         at org.ovirt.engine.core.bll.tasks.CoCoAsyncTaskHelper.endAction(CoCoAsyncTaskHelper.java:335) [bll.jar:]
> >         at org.ovirt.engine.core.bll.tasks.CommandCoordinatorImpl.endAction(CommandCoordinatorImpl.java:340) [bll.jar:]
> >         at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.endCommandAction(CommandAsyncTask.java:154) [bll.jar:]
> >         at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.lambda$endActionIfNecessary$0(CommandAsyncTask.java:106) [bll.jar:]
> >         at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:84) [utils.jar:]
> >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_161]
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_161]
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_161]
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_161]
> >         at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161]
> >
> > ----- Original Message -----
> > From: "Shani Leviim"
> > To: "Lionel Caignec"
> > Sent: Monday, February 26, 2018 14:42:38
> > Subject: Re: [ovirt-users] Ghost Snapshot Disk
> >
> > Yes, please.
> > Can you detail a bit more regarding the actions you've done?
> >
> > I'm assuming that since the snapshot had no description, trying to operate
> > on it caused the NullPointerException you've got.
> > But I want to examine what was the cause for that.
> >
> > Also, can you please answer back to the list?
> >
> >
> >
> > *Regards,*
> >
> > *Shani Leviim*
> >
> > On Mon, Feb 26, 2018 at 3:37 PM, Lionel Caignec wrote:
> >
> > > Version is 4.1.7.6-1
> > >
> > > Do you want the log from the day i deleted the snapshot?
> > >
> > > ----- Original Message -----
> > > From: "Shani Leviim"
> > > To: "Lionel Caignec"
> > > Cc: "users"
> > > Sent: Monday, February 26, 2018 14:29:16
> > > Subject: Re: [ovirt-users] Ghost Snapshot Disk
> > >
> > > Hi,
> > >
> > > What is your engine version, please?
> > > I'm trying to reproduce your steps to understand better what is the
> > > cause of that error. Therefore, a full engine log is needed.
> > > Can you please attach it?
> > >
> > > Thanks,
> > >
> > >
> > > *Shani Leviim*
> > >
> > > On Mon, Feb 26, 2018 at 2:48 PM, Lionel Caignec wrote:
> > >
> > > > Hi
> > > >
> > > > 1) this is the error message from ui.log
> > > >
> > > > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-3) [] Permutation
> > > > name: 8C01181C3B121D0AAE1312275CC96415
> > > > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
> > > > (default task-3) [] Uncaught exception: com.google.gwt.core.client.JavaScriptException:
> > > > (TypeError)
> > > > __gwt$exception: : Cannot read property 'F' of null
> > > >         at org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocationModel$3.$onSuccess(DisksAllocationModel.java:120)
> > > >         at org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocationModel$3.onSuccess(DisksAllocationModel.java:120)
> > > >         at org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess(Frontend.java:233) [frontend.jar:]
> > > >         at org.ovirt.engine.ui.frontend.Frontend$2.onSuccess(Frontend.java:233) [frontend.jar:]
> > > >         at org.ovirt.engine.ui.frontend.communication.OperationProcessor$2.$onSuccess(OperationProcessor.java:139) [frontend.jar:]
> > > >         at org.ovirt.engine.ui.frontend.communication.OperationProcessor$2.onSuccess(OperationProcessor.java:139) [frontend.jar:]
> > > >         at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:269) [frontend.jar:]
> > > >         at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:269) [frontend.jar:]
> > > >         at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.onResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:]
> > > >         at com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:237) [gwt-servlet.jar:]
> > > >         at com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409) [gwt-servlet.jar:]
> > > >         at Unknown.eval(webadmin-0.js at 65)
> > > >         at com.google.gwt.core.client.impl.Impl.apply(Impl.java:296) [gwt-servlet.jar:]
> > > >         at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:335) [gwt-servlet.jar:]
> > > >         at Unknown.eval(webadmin-0.js at 54)
> > > >
> > > >
> > > > 2) This line seems to be about the bad disk :
> > > >
> > > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 |
> > > > 2748779069440 | 00000000-0000-0000-0000-000000000000 |
> > > > 00000000-0000-0000-0000-000000000000 |           4 | 2018-01-18
> > > > 22:01:20.5+01   | 0dd2090c-3491-4fa1-98c3-54ae88be793c
> > > >
> > > >
> > > > 3) Snapshot table is empty for the concerned vm_id.
> > > >
> > > > ----- Original Message -----
> > > > From: "Shani Leviim"
> > > > To: "Lionel Caignec"
> > > > Cc: "users"
> > > > Sent: Monday, February 26, 2018 13:31:23
> > > > Subject: Re: [ovirt-users] Ghost Snapshot Disk
> > > >
> > > > Hi Lionel,
> > > >
> > > > The error message you've mentioned sounds like a UI error.
> > > > Can you please attach your ui log?
> > > >
> > > > Also, on the data from the 'images' table you've uploaded, can you describe
> > > > which line is the relevant disk?
> > > >
> > > > Finally (for now), in case the snapshot was deleted, can you please
> > > > validate it by viewing the output of:
> > > > $ select * from snapshots;
> > > >
> > > >
> > > >
> > > > *Regards,*
> > > >
> > > > *Shani Leviim*
> > > >
> > > > On Mon, Feb 26, 2018 at 9:20 AM, Lionel Caignec wrote:
> > > >
> > > > > Hi Shani,
> > > > > thank you for helping me with your reply,
> > > > > i just made a little mistake in my explanation. In fact it's the snapshot
> > > > > that does not exist anymore. It's the disk(s) related to it which still
> > > > > exist, and perhaps the LVM volume.
> > > > > So can i delete manually this disk in the database? what about the lvm
> > > > volume?
> > > > > Is it better to recreate the disk, sync the data and destroy the old one?
> > > > >
> > > > >
> > > > >
> > > > > ----- Original Message -----
> > > > > From: "Shani Leviim"
> > > > > To: "Lionel Caignec"
> > > > > Cc: "users"
> > > > > Sent: Sunday, February 25, 2018 14:26:41
> > > > > Subject: Re: [ovirt-users] Ghost Snapshot Disk
> > > > >
> > > > > Hi Lionel,
> > > > >
> > > > > You can try to delete that snapshot directly from the database.
> > > > >
> > > > > In case of using psql [1], once you've logged in to your database, you
> > > > can
> > > > > run this query:
> > > > > $ select * from snapshots where vm_id = '';
> > > > > This one would list the snapshots associated with a VM by its id.
> > > > >
> > > > > In case you don't have your vm_id, you can locate it by querying:
> > > > > $ select * from vms where vm_name = 'nil';
> > > > > This one would show you some details about a VM by its name (including
> > > > the
> > > > > vm's id).
> > > > >
> > > > > Once you've found the relevant snapshot, you can delete it by running:
> > > > > $ delete from snapshots where snapshot_id = '';
> > > > > This one would delete the desired snapshot from the database.
> > > > >
> > > > > Since it's a delete operation, I would suggest confirming the ids
> > > before
> > > > > executing it.
> > > > >
> > > > > Hope you've found it useful!
> > > > >
> > > > > [1]
> > > > > https://www.ovirt.org/documentation/install-guide/appe-Preparing_a_Remote_PostgreSQL_Database_for_Use_with_the_oVirt_Engine/
> > > > >
> > > > >
> > > > > *Regards,*
> > > > >
> > > > > *Shani Leviim*
> > > > >
> > > > > On Fri, Feb 23, 2018 at 9:25 AM, Lionel Caignec wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > i've a problem with a snapshot. On one VM i've a "snapshot" ghost
> > > without
> > > > > > name or uuid, the only information is its size (see attachment). In the
> > > > snapshot
> > > > > > tab there is no trace of this disk.
> > > > > >
> > > > > > In the database (table images) i found this :
> > > > > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 |
> > > > > > 2748779069440 | 00000000-0000-0000-0000-000000000000 |
> > > > > > 00000000-0000-0000-0000-000000000000 |           4 | 2018-01-18
> > > > > > 22:01:20.5+01   | 0dd2090c-3491-4fa1-98c3-54ae88be793c
> > > > > > |           2 |             4 | 17e26476-cecb-441d-a5f7-46ab3ef387ee |
> > > > > > 2018-01-17 22:01:29.663334+01 | 2018-01-19 08:40:14.345229+01 | f |
> > > > > > 1 | 2
> > > > > > 1c7650fa-542b-4ec2-83a1-d2c1c31be5fd | 2018-01-17 22:02:03+01 |
> > > > > > 5368709120000 | 00000000-0000-0000-0000-000000000000 |
> > > > > > 00000000-0000-0000-0000-000000000000 |           4 | 2018-01-18
> > > > > > 22:01:20.84+01  | 0dd2090c-3491-4fa1-98c3-54ae88be793c
> > > > > > |           2 |             4 | bf834a91-c69f-4d2c-b639-116ed58296d8 |
> > > > > > 2018-01-17 22:01:29.836133+01 | 2018-01-19 08:40:19.083508+01 | f |
> > > > > > 1 | 2
> > > > > > 8614b21f-c0de-40f2-b4fb-e5cf193b0743 | 2018-02-09 23:00:44+01 |
> > > > > > 5368709120000 | 00000000-0000-0000-0000-000000000000 |
> > > > > > 00000000-0000-0000-0000-000000000000 |           4 | 2018-02-16
> > > > > > 23:00:02.855+01 | 390175dc-baf4-4831-936a-5ea68fa4c969
> > > > > >
> > > > > >
> > > > > > But i do not know which line is my disk. Is it possible to delete it
> > > > > > directly in the database?
> > > > > > Or is it better to dump my disk to a new one and delete the
> > > > "corrupted
> > > > > > one"?
> > > > > >
> > > > > > Another thing, when i try to move the disk to another storage
> > > domain i
> > > > > > always get "uncaught exception occurred ..." and no error in
> > engine.log.
> > > > > >
> > > > > > Thank you for helping.
> > > > > >
> > > > > > --
> > > > > > Lionel Caignec
> > > > > >
> > > > > > _______________________________________________
> > > > > > Users mailing list
> > > > > > Users at ovirt.org
> > > > > > http://lists.ovirt.org/mailman/listinfo/users
> > > > > >
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
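For reference, the confirm-then-delete flow described above can be scripted so that the lookups and the delete happen inside one reviewable transaction. This is only a minimal sketch in Python with psycopg2, assuming direct access to the engine database; the credentials are placeholders, the VM name 'nil' is the example from the thread, and the vm_guid column name is an assumption to verify against your schema:

import psycopg2  # PostgreSQL driver, assumed to be installed

# Hypothetical connection details; substitute your real engine DB credentials.
conn = psycopg2.connect(dbname="engine", user="engine",
                        password="secret", host="localhost")
try:
    with conn.cursor() as cur:
        # Locate the VM id by name first ('nil' is the example name above).
        cur.execute("SELECT vm_guid FROM vms WHERE vm_name = %s", ("nil",))
        vm_id = cur.fetchone()[0]  # raises if the VM name does not exist

        # List the snapshots of that VM and review them by hand.
        cur.execute("SELECT snapshot_id, description FROM snapshots "
                    "WHERE vm_id = %s", (vm_id,))
        for snapshot_id, description in cur.fetchall():
            print(snapshot_id, description)

        # Uncomment only after confirming the id really is the ghost snapshot:
        # cur.execute("DELETE FROM snapshots WHERE snapshot_id = %s",
        #             (snapshot_id,))
    conn.commit()  # nothing is committed if an exception is raised first
finally:
    conn.close()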
From enrico.becchetti at pg.infn.it  Thu Mar  8 16:08:58 2018
From: enrico.becchetti at pg.infn.it (Enrico)
Date: Thu, 8 Mar 2018 17:08:58 +0100
Subject: [ovirt-users] Ghost Snapshot Disk
In-Reply-To:
References: <2109773819.1728939.1519370725788.JavaMail.zimbra@cines.fr>
 <1550150634.1827635.1519649308293.JavaMail.zimbra@cines.fr>
 <280580777.1830731.1519652234852.JavaMail.zimbra@cines.fr>
 <48154177.1832942.1519654691849.JavaMail.zimbra@cines.fr>
 <489433186.2545721.1520320950720.JavaMail.zimbra@cines.fr>
 <736794213.2945423.1520510210879.JavaMail.zimbra@cines.fr>
Message-ID: <802bdf10-372b-49b3-5284-c2cea2ba2876@pg.infn.it>

Hi All,
I've a similar question: I can't remove a snapshot. The oVirt version is the
latest stable, 4.2.1.7, with the engine running in non-hosted mode. Before
removing the snapshot I shut down the VM.
These are the logs from the engine:

2018-03-08 16:57:47,153+01 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand] (EE-ManagedThreadFactory-engine-Thread-40253) [711eba7d] Running command: ProcessDownVmCommand internal: true.
2018-03-08 16:57:55,589+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (EE-ManagedThreadFactory-engineScheduled-Thread-71) [] Fetched 8 VMs from VDS '85bbf811-1069-4e67-ba86-e50dec9f5da9'
2018-03-08 16:59:00,561+01 INFO [org.ovirt.engine.core.bll.snapshots.RemoveDiskSnapshotsCommand] (default task-31) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[281a869f-8541-49cf-894b-f583bd26083d=DISK]', sharedLocks=''}'
2018-03-08 16:59:00,599+01 INFO [org.ovirt.engine.core.bll.snapshots.RemoveDiskSnapshotsCommand] (EE-ManagedThreadFactory-engine-Thread-40284) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Running command: RemoveDiskSnapshotsCommand internal: false. Entities affected : ID: 01f9b5f2-9e48-4c24-80e5-dca7f1d4d128 Type: VM
Action group MANIPULATE_VM_SNAPSHOTS with role type USER
2018-03-08 16:59:00,613+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-40284) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] EVENT_ID: USER_REMOVE_DISK_SNAPSHOT(373), Disk 'SOL_Disk1' from Snapshot(s) 'PHP 5.6.29' of VM 'SOL-DEV' deletion was initiated by admin at internal-authz.
2018-03-08 16:59:00,615+01 INFO [org.ovirt.engine.core.bll.snapshots.RemoveDiskSnapshotsCommand] (EE-ManagedThreadFactory-engine-Thread-40284) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Lock freed to object 'EngineLock:{exclusiveLocks='[281a869f-8541-49cf-894b-f583bd26083d=DISK]', sharedLocks=''}'
2018-03-08 16:59:00,993+01 INFO [org.ovirt.engine.core.bll.snapshots.RemoveDiskSnapshotsCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-25) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Starting child command 1 of 1, image '64e8f28d-6c00-41d8-9f60-26a87d51cb8c'
2018-03-08 16:59:01,026+01 INFO [org.ovirt.engine.core.bll.snapshots.ColdMergeSnapshotSingleDiskCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-7) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Running command: ColdMergeSnapshotSingleDiskCommand internal: true. Entities affected :
ID: 00000000-0000-0000-0000-000000000000 Type: Storage 2018-03-08 16:59:02,026+01 INFO [org.ovirt.engine.core.bll.snapshots.ColdMergeSnapshotSingleDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command 'ColdMergeSnapshotSingleDisk' id '2bbd81c3-9fa1-4e48-ab69-5588e1367539' executing step 'PREPARE_MERGE' 2018-03-08 16:59:02,048+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.PrepareMergeCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Running command: PrepareMergeCommand internal: true. Entities affected :? ID: 00000000-0000-0000-0000-000000000000 Type: Storage 2018-03-08 16:59:02,049+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.PrepareMergeVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] START, PrepareMergeVDSCommand( SPMColdMergeVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7', ignoreFailoverLimit='false'}), log id: 447f2b58 2018-03-08 16:59:02,178+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.PrepareMergeVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] FINISH, PrepareMergeVDSCommand, log id: 447f2b58 2018-03-08 16:59:02,221+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 'b5983000-d637-47da-8aa2-fb8fec50b480' 2018-03-08 16:59:02,221+01 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] CommandMultiAsyncTasks::attachTask: Attaching task '0f762196-21ed-4b70-995d-729f3ed72425' to command 'b5983000-d637-47da-8aa2-fb8fec50b480'. 2018-03-08 16:59:02,235+01 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Adding task '0f762196-21ed-4b70-995d-729f3ed72425' (Parent Command 'PrepareMerge', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet.. 2018-03-08 16:59:02,241+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] BaseAsyncTask::startPollingTask: Starting to poll task '0f762196-21ed-4b70-995d-729f3ed72425'. 
2018-03-08 16:59:03,262+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-79) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command 'RemoveDiskSnapshots' (id: '8f600a64-5ec5-4f81-98de-2ced76193aa4') waiting on child command id: '2bbd81c3-9fa1-4e48-ab69-5588e1367539' type:'ColdMergeSnapshotSingleDisk' to complete 2018-03-08 16:59:04,271+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command 'ColdMergeSnapshotSingleDisk' (id: '2bbd81c3-9fa1-4e48-ab69-5588e1367539') waiting on child command id: 'b5983000-d637-47da-8aa2-fb8fec50b480' type:'PrepareMerge' to complete 2018-03-08 16:59:07,322+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-86) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command 'RemoveDiskSnapshots' (id: '8f600a64-5ec5-4f81-98de-2ced76193aa4') waiting on child command id: '2bbd81c3-9fa1-4e48-ab69-5588e1367539' type:'ColdMergeSnapshotSingleDisk' to complete 2018-03-08 16:59:08,331+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-100) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command 'ColdMergeSnapshotSingleDisk' (id: '2bbd81c3-9fa1-4e48-ab69-5588e1367539') waiting on child command id: 'b5983000-d637-47da-8aa2-fb8fec50b480' type:'PrepareMerge' to complete 2018-03-08 16:59:09,986+01 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-38) [] Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now 2018-03-08 16:59:09,998+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-38) [] Failed in 'HSMGetAllTasksStatusesVDS' method 2018-03-08 16:59:10,000+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-38) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM infn-vm05.management command HSMGetAllTasksStatusesVDS failed: Volume does not exist: ('64e8f28d-6c00-41d8-9f60-26a87d51cb8c',) 2018-03-08 16:59:10,000+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-38) [] SPMAsyncTask::PollTask: Polling task '0f762196-21ed-4b70-995d-729f3ed72425' (Parent Command 'PrepareMerge', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'cleanSuccess'. 
2018-03-08 16:59:10,000+01 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-38) [] BaseAsyncTask::logEndTaskFailure: Task '0f762196-21ed-4b70-995d-729f3ed72425' (Parent Command 'PrepareMerge', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended with failure: -- Result: 'cleanSuccess' -- Message: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Volume does not exist: ('64e8f28d-6c00-41d8-9f60-26a87d51cb8c',), code = 201', -- Exception: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Volume does not exist: ('64e8f28d-6c00-41d8-9f60-26a87d51cb8c',), code = 201' 2018-03-08 16:59:10,001+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-38) [] CommandAsyncTask::endActionIfNecessary: All tasks of command 'b5983000-d637-47da-8aa2-fb8fec50b480' has ended -> executing 'endAction' 2018-03-08 16:59:10,001+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-38) [] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: 'b5983000-d637-47da-8aa2-fb8fec50b480'): calling endAction '. 2018-03-08 16:59:10,001+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-40287) [] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'PrepareMerge', 2018-03-08 16:59:10,005+01 ERROR [org.ovirt.engine.core.bll.storage.disk.image.PrepareMergeCommand] (EE-ManagedThreadFactory-engine-Thread-40287) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.PrepareMergeCommand' with failure. 2018-03-08 16:59:10,013+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-40287) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'PrepareMerge' completed, handling the result. 2018-03-08 16:59:10,013+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-40287) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'PrepareMerge' succeeded, clearing tasks. 
2018-03-08 16:59:10,014+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-40287) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] SPMAsyncTask::ClearAsyncTask: Attempting to clear task '0f762196-21ed-4b70-995d-729f3ed72425' 2018-03-08 16:59:10,016+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-40287) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7', ignoreFailoverLimit='false', taskId='0f762196-21ed-4b70-995d-729f3ed72425'}), log id: 5a0c3cc1 2018-03-08 16:59:10,017+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-40287) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] START, HSMClearTaskVDSCommand(HostName = infn-vm05.management, HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc', taskId='0f762196-21ed-4b70-995d-729f3ed72425'}), log id: 608b0bf 2018-03-08 16:59:10,033+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-40287) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] FINISH, HSMClearTaskVDSCommand, log id: 608b0bf 2018-03-08 16:59:10,033+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-40287) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] FINISH, SPMClearTaskVDSCommand, log id: 5a0c3cc1 2018-03-08 16:59:10,035+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-40287) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] BaseAsyncTask::removeTaskFromDB: Removed task '0f762196-21ed-4b70-995d-729f3ed72425' from DataBase 2018-03-08 16:59:10,035+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-40287) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity 'b5983000-d637-47da-8aa2-fb8fec50b480' 2018-03-08 16:59:15,375+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-57) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command 'RemoveDiskSnapshots' (id: '8f600a64-5ec5-4f81-98de-2ced76193aa4') waiting on child command id: '2bbd81c3-9fa1-4e48-ab69-5588e1367539' type:'ColdMergeSnapshotSingleDisk' to complete 2018-03-08 16:59:16,384+01 ERROR [org.ovirt.engine.core.bll.snapshots.ColdMergeSnapshotSingleDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-27) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command 'ColdMergeSnapshotSingleDisk' id '2bbd81c3-9fa1-4e48-ab69-5588e1367539' failed executing step 'PREPARE_MERGE' 2018-03-08 16:59:16,384+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-27) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command 'ColdMergeSnapshotSingleDisk' id: '2bbd81c3-9fa1-4e48-ab69-5588e1367539' child commands '[b5983000-d637-47da-8aa2-fb8fec50b480]' executions were completed, status 'FAILED' 2018-03-08 16:59:17,398+01 ERROR [org.ovirt.engine.core.bll.snapshots.ColdMergeSnapshotSingleDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-77) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Ending command 'org.ovirt.engine.core.bll.snapshots.ColdMergeSnapshotSingleDiskCommand' with failure. 
2018-03-08 16:59:17,409+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-77) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command 'RemoveDiskSnapshots' id: '8f600a64-5ec5-4f81-98de-2ced76193aa4' child commands '[2bbd81c3-9fa1-4e48-ab69-5588e1367539]' executions were completed, status 'FAILED'
2018-03-08 16:59:18,424+01 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveDiskSnapshotsCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-36) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Ending command 'org.ovirt.engine.core.bll.snapshots.RemoveDiskSnapshotsCommand' with failure.
2018-03-08 16:59:18,460+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-36) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] EVENT_ID: USER_REMOVE_DISK_SNAPSHOT_FINISHED_FAILURE(376), Failed to complete deletion of Disk 'SOL_Disk1' from snapshot(s) 'PHP 5.6.29' of VM 'SOL-DEV' (User: admin at internal-authz).
2018-03-08 17:00:18,145+01 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-46) [] Setting new tasks map. The map contains now 0 tasks
2018-03-08 17:00:18,145+01 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-46) [] Cleared all tasks of pool '18d57688-6ed4-43b8-bd7c-0665b55950b7'.

Thanks a lot
Best Regards
Enrico

On 08/03/18 16:30, Shani Leviim wrote:
> Hi Lionel,
>
> Can you please share once again your engine log (or at least the
> relevant part where that error message occurred)?
>
> *Regards,
> *
> *Shani Leviim
> *
>
> On Thu, Mar 8, 2018 at 1:56 PM, Lionel Caignec wrote:
>
>     Hi,
>
>     i finished moving my data, but now when i want to remove my old
>     disk i get stuck on this error :
>     "Cannot detach Virtual Machine Disk. The disk is already
>     configured in a snapshot. In order to detach it, remove the disk's
>     snapshots".
>     But like i said before there is no snapshot anymore.
>     So what can i do? Delete it manually inside the database? If so, where?
>     Delete the lvm volume manually? And how can i find the right one?
>
>     Please help ;).
>
>     Lionel
>
>     ----- Original Message -----
>     From: "Lionel Caignec"
>     To: "Shani Leviim"
>     Cc: "users"
>     Sent: Tuesday, March 6, 2018 08:22:30
>     Subject: Re: [ovirt-users] Ghost Snapshot Disk
>
>     Hi,
>
>     ok thank you for the information (sorry for the late response).
>
>     I will do that.
>
>     ----- Original Message -----
>     From: "Shani Leviim"
>     To: "Lionel Caignec"
>     Cc: "users"
>     Sent: Tuesday, February 27, 2018 14:19:45
>     Subject: Re: [ovirt-users] Ghost Snapshot Disk
>
>     Hi Lionel,
>
>     Sorry for the delay in replying to you.
>
>     If it's possible from your side, syncing the data and destroying the
>     old disk
>     sounds about right.
>
>     In addition, it seems like you're hitting this bug:
>     https://bugzilla.redhat.com/show_bug.cgi?id=1509629
>
>     And it was fixed in version 4.1.9 and above.
>
>
>
>     *Regards,*
>
>     *Shani Leviim*
>
>     On Mon, Feb 26, 2018 at 4:18 PM, Lionel Caignec wrote:
>
>     > Ok so i reply to myself,
>     >
>     > Version is 4.1.7.6-1
>     >
>     > I just manually deleted a previously created snapshot. But this is an
>     io-intensive vm, with big disks (2.5 TB and 5 TB).
>     >
>     > For the log, i cannot paste all my logs on a public list for security
>     reasons, i will send you the full log in private.
> > Here is an extract relevant to my error
> > engine.log-20180210:2018-02-09 23:00:03,200+01 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (default task-312) [44402a8c-3196-43f0-ba33-307ea78e6f49] EVENT_ID:
> > USER_CREATE_SNAPSHOT(45), Correlation ID: 44402a8c-3196-43f0-ba33-307ea78e6f49,
> > Job ID: 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom
> > ID: null, Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002'
> > creation for VM 'zz_nil' was initiated by snap_user at internal.
> > engine.log-20180210:2018-02-09 23:01:06,578+01 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (DefaultQuartzScheduler5) [] EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_SUCCESS(68),
> > Correlation ID: 44402a8c-3196-43f0-ba33-307ea78e6f49, Job ID:
> > 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom ID: null,
> > Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002'
> > creation for VM 'zz_nil' has been completed.
> > engine.log-20180220:2018-02-19 17:01:23,800+01 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (default task-113) [] EVENT_ID: USER_REMOVE_SNAPSHOT(342), Correlation ID:
> > 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: c9a918a7-b00c-43cf-b6de-3659ac0765da,
> > Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Snapshot
> > 'AUTO_7D_zz_nil_20180209_220002' deletion for VM 'zz_nil' was initiated
> > by acaignec at ldap-cines-authz.
> > engine.log-20180221:2018-02-20 22:24:45,174+01 ERROR
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (DefaultQuartzScheduler6) [06a9efa4-1b80-4021-bf3e-41ecebe58a88]
> > EVENT_ID: USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Correlation ID:
> > 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: c9a918a7-b00c-43cf-b6de-3659ac0765da,
> > Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Failed to
> > delete snapshot 'AUTO_7D_zz_nil_20180209_220002' for VM 'zz_nil'.
> > 2018-02-20 22:24:46,266+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> > (DefaultQuartzScheduler3) [516079c3] SPMAsyncTask::PollTask: Polling task
> > '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command 'DestroyImage',
> > Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters')
> > returned status 'finished', result 'success'.
> > 2018-02-20 22:24:46,267+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
> > (DefaultQuartzScheduler3) [516079c3] BaseAsyncTask::onTaskEndSuccess:
> > Task '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command
> > 'DestroyImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters')
> > ended successfully.
> > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> > (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endActionIfNecessary:
> > All tasks of command 'fe8c91f2-386b-4b3f-bbf3-aeda8e9244c6' has ended ->
> > executing 'endAction'
> > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> > (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endAction: Ending
> > action for '1' tasks (command ID: 'fe8c91f2-386b-4b3f-bbf3-aeda8e9244c6'):
> > calling endAction '.
> > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> > (org.ovirt.thread.pool-6-thread-20) [516079c3] CommandAsyncTask::endCommandAction
> > [within thread] context: Attempting to endAction 'DestroyImage',
> > 2018-02-20 22:24:46,269+01 ERROR [org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
> > (org.ovirt.thread.pool-6-thread-20) [516079c3] [within thread]: endAction
> > for action type DestroyImage threw an exception.:
> > java.lang.NullPointerException
> >         at org.ovirt.engine.core.bll.tasks.CoCoAsyncTaskHelper.endAction(CoCoAsyncTaskHelper.java:335) [bll.jar:]
> >         at org.ovirt.engine.core.bll.tasks.CommandCoordinatorImpl.endAction(CommandCoordinatorImpl.java:340) [bll.jar:]
> >         at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.endCommandAction(CommandAsyncTask.java:154) [bll.jar:]
> >         at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.lambda$endActionIfNecessary$0(CommandAsyncTask.java:106) [bll.jar:]
> >         at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:84) [utils.jar:]
> >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_161]
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_161]
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_161]
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_161]
> >         at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161]

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

From ykaul at redhat.com  Thu Mar  8 16:15:10 2018
From: ykaul at redhat.com (Yaniv Kaul)
Date: Thu, 8 Mar 2018 18:15:10 +0200
Subject: [ovirt-users] Limitation in using Ovirt SSO Token
In-Reply-To:
References:
Message-ID:

On Mar 8, 2018 3:53 PM, "Hari Prasanth Loganathan" <
hariprasanth.l at msystechnologies.com> wrote:

Hi Team,

I would like to know, is there any limitation in using the same SSO token
for multiple requests?

Are you reusing it or abusing it? Are you actually using it in a single
session, to avoid reauthentication, or just sharing it between multiple
sessions?
While I still would not want to see the engine affected by such abuse, the
latter is quite an atypical use case.

Can you share engine and server logs?

And again, it'd be very helpful if you could share what you'd like to
achieve and we'll gladly assist.
Y.

I observe that when I use the same SSO token for more than 900 HTTP REST
requests, the application went down. Is there any limitation in using the
same SSO token?

I could see that my status is showing as ACTIVE and memory and CPU seem
fine. Still, oVirt is not reachable and I need to restart it to access it
again.

sudo systemctl status ovirt-engine.service -l
● ovirt-engine.service - oVirt Engine
   Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-03-08 19:15:10 IST; 30s ago
 Main PID: 10370 (ovirt-engine.py)
   CGroup: /system.slice/ovirt-engine.service
           ├─10370 /usr/bin/python /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py --redirect-output --systemd=notify start
           └─10423 ovirt-engine -server -XX:+TieredCompilation -Xms5961M -Xmx5961M -Djava.awt.headless=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djsse.enableSNIExtension=false -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/ovirt-engine/dump -Djava.util.logging.manager=org.jboss.logmanager -Dlogging.configuration=file:///var/lib/ovirt-engine/jboss_runtime/config/ovirt-engine-logging.properties -Dorg.jboss.resolver.warning=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djboss.server.default.config=ovirt-engine -Djboss.home.dir=/usr/share/ovirt-engine-wildfly -Djboss.server.base.dir=/usr/share/ovirt-engine -Djboss.server.data.dir=/var/lib/ovirt-engine -Djboss.server.log.dir=/var/log/ovirt-engine -Djboss.server.config.dir=/var/lib/ovirt-engine/jboss_runtime/config -Djboss.server.temp.dir=/var/lib/ovirt-engine/jboss_runtime/tmp -Djboss.controller.temp.dir=/var/lib/ovirt-engine/jboss_runtime/tmp -jar /usr/share/ovirt-engine-wildfly/jboss-modules.jar -mp /usr/share/ovirt-engine/modules/common:/usr/share/ovirt-engine-extension-aaa-jdbc/modules:/usr/share/ovirt-engine-wildfly/modules -jaxpmodule javax.xml.jaxp-provider org.jboss.as.standalone -c ovirt-engine.xml

Mar 08 19:15:10 ovirtengine.localdomain systemd[1]: Starting oVirt Engine...
Mar 08 19:15:10 ovirtengine.localdomain ovirt-engine.py[10370]: 2018-03-08 19:15:10,228+0530 ovirt-engine: INFO _detectJBossVersion:187 Detecting JBoss version. Running: /usr/lib/jvm/jre/bin/java ['ovirt-engine-version', '-server', '-XX:+TieredCompilation', '-Xms5961M', '-Xmx5961M', '-Djava.awt.headless=true', '-Dsun.rmi.dgc.client.gcInterval=3600000', '-Dsun.rmi.dgc.server.gcInterval=3600000', '-Djsse.enableSNIExtension=false', '-XX:+HeapDumpOnOutOfMemoryError', '-XX:HeapDumpPath=/var/log/ovirt-engine/dump', '-Djava.util.logging.manager=org.jboss.logmanager', '-Dlogging.configuration=file:///var/lib/ovirt-engine/jboss_runtime/config/ovirt-engine-logging.properties', '-Dorg.jboss.resolver.warning=true', '-Djboss.modules.system.pkgs=org.jboss.byteman', '-Djboss.server.default.config=ovirt-engine', '-Djboss.home.dir=/usr/share/ovirt-engine-wildfly', '-Djboss.server.base.dir=/usr/share/ovirt-engine', '-Djboss.server.data.dir=/var/lib/ovirt-engine', '-Djboss.server.log.dir=/var/log/ovirt-engine', '-Djboss.server.config.dir=/var/lib/ovirt-engine/jboss_runtime/config', '-Djboss.server.temp.dir=/var/lib/ovirt-engine/jboss_runtime/tmp', '-Djboss.controller.temp.dir=/var/lib/ovirt-engine/jboss_runtime/tmp', '-jar', '/usr/share/ovirt-engine-wildfly/jboss-modules.jar', '-mp', '/usr/share/ovirt-engine/modules/common:/usr/share/ovirt-engine-extension-aaa-jdbc/modules:/usr/share/ovirt-engine-wildfly/modules', '-jaxpmodule', 'javax.xml.jaxp-provider', 'org.jboss.as.standalone', '-v']
Mar 08 19:15:10 ovirtengine.localdomain ovirt-engine.py[10370]: 2018-03-08 19:15:10,668+0530 ovirt-engine: INFO _detectJBossVersion:207 Return code: 1,  | stdout: '[u'WildFly Full 10.1.0.Final (WildFly Core 2.2.0.Final)'],  | stderr: '[]'
Mar 08 19:15:10 ovirtengine.localdomain systemd[1]: Started oVirt Engine.

Any help would be appreciated.
Thanks,
Hari

DISCLAIMER The information in this e-mail is confidential and may be subject
to legal privilege. It is intended solely for the addressee. Access to this
e-mail by anyone else is unauthorized. If you have received this communication
in error, please address with the subject heading "Received in error," send to
it at msystechnologies.com, then delete the e-mail and destroy any copies of it.
If you are not the intended recipient, any disclosure, copying, distribution
or any action taken or omitted to be taken in reliance on it, is prohibited
and may be unlawful. The views, opinions, conclusions and other information
expressed in this electronic mail and any attachments are not given or
endorsed by the company unless otherwise indicated by an authorized
representative independent of this message.
MSys cannot guarantee that e-mail communications are secure or error-free, as
information could be intercepted, corrupted, amended, lost, destroyed, arrive
late or incomplete, or contain viruses, though all reasonable precautions have
been taken to ensure no viruses are present in this e-mail. As our company
cannot accept responsibility for any loss or damage arising from the use of
this e-mail or attachments we recommend that you subject these to your virus
checking procedures prior to use

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

From hariprasanth.l at msystechnologies.com  Thu Mar  8 17:35:22 2018
From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan)
Date: Thu, 8 Mar 2018 23:05:22 +0530
Subject: [ovirt-users] Limitation in using Ovirt SSO Token
In-Reply-To:
References:
Message-ID:

Hi Yaniv,

To give an example, we are planning to create and manage our VMs using
oVirt.
Instead of managing oVirt using the UI, we will be writing a script which
hits the oVirt engine periodically for different functionality, so we would
like to run a benchmark to finalise the number of hits to oVirt.

This is what we are trying to achieve.

Coming to my query,
What is the difference between the SSO token and session maintenance in
oVirt? If I have a token timeout of 4 days, how does the concept of a
session play a role in oVirt?

Thanks,
Hari
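For reference, the token-reuse pattern discussed in this thread, authenticating once and reusing the SSO token within a session instead of re-authenticating on every call, can be sketched with the oVirt Python SDK (ovirtsdk4). The engine URL, credentials, CA path and loop count are placeholders; verify the keyword names against the SDK version you actually run:

import ovirtsdk4 as sdk

# Hypothetical engine details; replace with your own.
URL = 'https://engine.example.com/ovirt-engine/api'
CA = '/etc/pki/ovirt-engine/ca.pem'

# The first connection authenticates once and yields an SSO token.
conn = sdk.Connection(url=URL, username='admin@internal',
                      password='secret', ca_file=CA)
token = conn.authenticate()  # returns the SSO token as a string
conn.close()

# Later requests reuse the token instead of re-authenticating each time.
conn = sdk.Connection(url=URL, token=token, ca_file=CA)
vms_service = conn.system_service().vms_service()
for _ in range(100):          # a stand-in for the benchmark loop above
    vms = vms_service.list()  # one REST call per iteration, same token
print('%d VMs' % len(vms))
conn.close()

The SDK keeps one HTTP session per Connection object, so this also exercises the single-session reuse case rather than sharing one token across many parallel sessions.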
As our company cannot accept responsibility for any loss or damage arising from the use of this e-mail or attachments we recommend that you subject these to your virus checking procedures prior to use -------------- next part -------------- An HTML attachment was scrubbed... URL: From hariprasanth.l at msystechnologies.com Thu Mar 8 17:38:51 2018 From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan) Date: Thu, 8 Mar 2018 23:08:51 +0530 Subject: [ovirt-users] Limitation in using Ovirt SSO Token In-Reply-To: References: Message-ID: Yes, we will take a SSO token using SSO auth Rest API call and use the same token for all the API hits for next 4 days (Example If 4 days is the SSO token timeout). Is there an issue with using like that? If there is a session concept there for SSO token in using as Rest API? On Thu, Mar 8, 2018 at 11:05 PM, Hari Prasanth Loganathan < hariprasanth.l at msystechnologies.com> wrote: > Hi Yaniv, > > To give an example, We are planning to create and manage our VM's using > oVirt. > Instead of managing oVirt using UI, We will be writing the script which > hits the oVirt engine periodically for different functionality. so We would > like to take a benchmark to finalise the number of hits to ovirt. > > This is what we are trying to achieve. > > Coming to my query, > What is the difference between sso token and session maintance in oVirt, > If I have a token timeout of 4 days, How the concept of session plays a > role in oVirt? > > Thanks, > Hari > > -- DISCLAIMER The information in this e-mail is confidential and may be subject to legal privilege. It is intended solely for the addressee. Access to this e-mail by anyone else is unauthorized. If you have received this communication in error, please address with the subject heading "Received in error," send to it at msystechnologies.com, then delete the e-mail and destroy any copies of it. If you are not the intended recipient, any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it, is prohibited and may be unlawful. The views, opinions, conclusions and other information expressed in this electronic mail and any attachments are not given or endorsed by the company unless otherwise indicated by an authorized representative independent of this message. MSys cannot guarantee that e-mail communications are secure or error-free, as information could be intercepted, corrupted, amended, lost, destroyed, arrive late or incomplete, or contain viruses, though all reasonable precautions have been taken to ensure no viruses are present in this e-mail. As our company cannot accept responsibility for any loss or damage arising from the use of this e-mail or attachments we recommend that you subject these to your virus checking procedures prior to use -------------- next part -------------- An HTML attachment was scrubbed... URL: From nesretep at chem.byu.edu Thu Mar 8 18:28:32 2018 From: nesretep at chem.byu.edu (Kristian Petersen) Date: Thu, 8 Mar 2018 11:28:32 -0700 Subject: [ovirt-users] hosted-engine deploy fails at "Wait for the engine to come up on the target VM" step Message-ID: I am trying to deploy oVirt with a self-hosted engine and the setup seems to go well until near the very end when the status message says: [ INFO ] TASK [Wait for the engine to come up on the target VM] [ ERROR ] fatal: [localhost]: FAILED! 
=> {"attempts": 120, "changed": true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.216412", "end": "2018-03-07 16:02:02.677478", "rc": 0, "start": "2018-03-07 16:02:02.461066", "stderr": "", "stderr_lines": [], "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=4679955 (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=4679956 (Wed Mar 7 16:01:51 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1, \"engine-status\": {\"reason\": \"vm not running on this host\", \"health\": \"bad\", \"vm\": \"down\", \"detail\": \"unknown\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": false, \"crc32\": \"d3a67cf7\", \"local_conf_timestamp\": 4679956, \"host-ts\": 4679955}, \"global_maintenance\": false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=4679955 (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=4679956 (Wed Mar 7 16:01:51 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1, \"engine-status\": {\"reason\": \"vm not running on this host\", \"health\": \"bad\", \"vm\": \"down\", \"detail\": \"unknown\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": false, \"crc32\": \"d3a67cf7\", \"local_conf_timestamp\": 4679956, \"host-ts\": 4679955}, \"global_maintenance\": false}"]}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook

Any ideas that might help?

--
Kristian Petersen
System Administrator
Dept. of Chemistry and Biochemistry
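For reference, the JSON that hosted-engine prints can also be checked programmatically while waiting for the engine VM to come up. A minimal sketch in Python, assuming the output format shown in the report above (host entries keyed by host id, with an "engine-status" dict); the polling interval and attempt count are arbitrary choices:

import json
import subprocess
import time

# Poll 'hosted-engine --vm-status --json' until the engine reports good health.
for attempt in range(120):
    out = subprocess.check_output(["hosted-engine", "--vm-status", "--json"])
    status = json.loads(out)
    # Host entries are dicts keyed by host id ("1" above); skip plain flags
    # such as 'global_maintenance'.
    hosts = [v for v in status.values() if isinstance(v, dict)]
    if any(h.get("engine-status", {}).get("health") == "good" for h in hosts):
        print("engine is up")
        break
    time.sleep(10)
else:
    print("engine never came up; check agent and broker logs")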
From hsahmed at gmail.com  Thu Mar  8 18:37:30 2018
From: hsahmed at gmail.com (Hesham Ahmed)
Date: Thu, 08 Mar 2018 18:37:30 +0000
Subject: [ovirt-users] Gluster Snapshot Schedule Failing on 4.2.1
In-Reply-To:
References:
Message-ID:

Log file attached to the bug. Do let me know if you need anything else.

On Thu, Mar 8, 2018, 4:32 PM Sahina Bose wrote:

> Thanks for your report, we will take a look. Could you attach the
> engine.log to the bug?
>
> On Wed, Mar 7, 2018 at 11:20 PM, Hesham Ahmed wrote:
>
>> I am having issues with the Gluster Snapshot UI since the upgrade to 4.2,
>> and now with 4.2.1. The UI doesn't appear, as I explained in the bug report:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1530186
>>
>> I can now see the UI when I clear the cookies and try the snapshots UI
>> from within the volume details screen, however scheduled snapshots are not
>> being created. The engine log shows a single error:
>>
>> 2018-03-07 20:00:00,051+03 ERROR
>> [org.ovirt.engine.core.utils.timer.JobWrapper] (QuartzOvirtDBScheduler1)
>> [12237b15] Failed to invoke scheduled method onTimer: null
>>
>> Anyone scheduling snapshots successfully with 4.2?
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>

From jtqn42 at gmail.com  Thu Mar  8 20:27:17 2018
From: jtqn42 at gmail.com (John Nguyen)
Date: Thu, 8 Mar 2018 15:27:17 -0500
Subject: [ovirt-users] How to force remove template
Message-ID:

Hi Guys,

I apologize if you may have addressed this earlier. I have a template in
an odd state. The template shows its disk on one storage domain, but in
actuality the disk is on a different domain. I would like to delete this
template since it's not in use and out of date. However when I try, I get
the error "image does not exist in domain."

Is there a way to force remove a template from the database? Any thoughts
would be greatly appreciated.

Thanks,
John

From NasrumMinallah9 at hotmail.com  Thu Mar  8 09:13:13 2018
From: NasrumMinallah9 at hotmail.com (Nasrum Minallah Manzoor)
Date: Thu, 8 Mar 2018 09:13:13 +0000
Subject: [ovirt-users] Open source backup!
In-Reply-To:
References:
Message-ID:

Thank you,
Additionally I have to ask: my SAN is configured with oVirt. Does
multipathing have to be done manually, or will oVirt handle multipathing
automatically?

From: Niyazi Elvan [mailto:niyazielvan at gmail.com]
Sent: 05 March 2018 8:26 PM
To: Nasrum Minallah Manzoor
Cc: users at ovirt.org
Subject: Re: [ovirt-users] Open source backup!

Hi,

If you are looking for VM image backup, you may have a look at Open Bacchus
https://github.com/openbacchus/bacchus

Bacchus is backing up VMs using the oVirt python api and the final image
will reside on the Export domain (which is an NFS share or glusterfs) in
your environment. It does not support moving the images to tapes at the
moment. You need to use another tool to stage your backups to tape.

Hope this helps.

On 5 Mar 2018 Mon at 17:31 Nasrum Minallah Manzoor wrote:

HI,

Can you please suggest me any open source backup solution for oVirt
virtual machines.
My backup media is an FC tape library which is directly attached to my
oVirt node.

I really appreciate your help

Regards,

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

--
Niyazi Elvan
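For reference, the export-domain flow Niyazi describes can be reproduced with a short oVirt Python SDK (ovirtsdk4) script. This is only an illustrative sketch, not Bacchus' actual code: the engine URL, credentials, the VM name 'myvm' and the export domain name 'export' are placeholders, and the export action should be checked against the SDK version in use:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Hypothetical engine details; replace with your own.
conn = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                      username='admin@internal', password='secret',
                      ca_file='/etc/pki/ovirt-engine/ca.pem')

vms_service = conn.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]  # 'myvm' is an example name
vm_service = vms_service.vm_service(vm.id)

# Export the VM image to the export storage domain; the resulting backup
# then lives on the NFS/GlusterFS share backing that domain, and staging
# it to tape is a separate step done with another tool.
vm_service.export(
    exclusive=True,
    discard_snapshots=True,
    storage_domain=types.StorageDomain(name='export'),
)
conn.close()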
Thanks,
Hari

DISCLAIMER

The information in this e-mail is confidential and may be subject to legal
privilege. It is intended solely for the addressee. Access to this e-mail
by anyone else is unauthorized. If you have received this communication in
error, please address with the subject heading "Received in error," send to
it at msystechnologies.com, then delete the e-mail and destroy any copies of
it. If you are not the intended recipient, any disclosure, copying,
distribution or any action taken or omitted to be taken in reliance on it,
is prohibited and may be unlawful. The views, opinions, conclusions and
other information expressed in this electronic mail and any attachments are
not given or endorsed by the company unless otherwise indicated by an
authorized representative independent of this message.
MSys cannot guarantee that e-mail communications are secure or error-free,
as information could be intercepted, corrupted, amended, lost, destroyed,
arrive late or incomplete, or contain viruses, though all reasonable
precautions have been taken to ensure no viruses are present in this
e-mail. As our company cannot accept responsibility for any loss or damage
arising from the use of this e-mail or attachments we recommend that you
subject these to your virus checking procedures prior to use

From NasrumMinallah9 at hotmail.com Thu Mar 8 06:43:51 2018
From: NasrumMinallah9 at hotmail.com (Nasrum Minallah Manzoor)
Date: Thu, 8 Mar 2018 06:43:51 +0000
Subject: [ovirt-users] Tape Library!
Message-ID:

Hi,

I need help in configuring Amanda backup on a virtual machine added to an
oVirt node! How can I assign my FC tape library (TS 3100 in my case) to a
virtual machine?

Regards,

From ccox at endlessnow.com Thu Mar 8 22:35:34 2018
From: ccox at endlessnow.com (Christopher Cox)
Date: Thu, 8 Mar 2018 16:35:34 -0600
Subject: [ovirt-users] Tape Library!
In-Reply-To:
References:
Message-ID: <5bb36ba9-c342-0c20-ca6d-5e4a58da2684@endlessnow.com>

On 03/08/2018 12:43 AM, Nasrum Minallah Manzoor wrote:
> Hi,
>
> I need help in configuring Amanda backup on a virtual machine added to an
> oVirt node! How can I assign my FC tape library (TS 3100 in my case) to a
> virtual machine?

I know at one time there was an issue created to make this work through
virtio. I mean, it was back in the early 3.x days I think. So this might
be possible now (??).

Passthrough LUN?

https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi/

From matt at khoza.com Fri Mar 9 00:16:54 2018
From: matt at khoza.com (Matt Simonsen)
Date: Thu, 8 Mar 2018 16:16:54 -0800
Subject: [ovirt-users] Node Next Install Problem
Message-ID: <0a7ec760-7df7-6441-9bd7-a0798aa2fac2@khoza.com>

I installed based on an older Node Next DVD (4.1.7) that has worked in the
past, and it doesn't appear to be working when I add it to a cluster.

The installer says it cannot queue package iproute.

Is there a repo down or that has changed? Thanks for any suggestions.

It appears yum is also broken:

yum update
Loaded plugins: fastestmirror, imgbased-persist, package_upload, product-id,
              : search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use
subscription-manager to register.
centos-opstools-release                                  | 2.9 kB     00:00
ovirt-4.1                                                | 3.0 kB     00:00
ovirt-4.1-centos-gluster38                               | 2.9 kB     00:00


 One of the configured repositories failed (Unknown),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=<repoid> ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable <repoid>
        or
            subscription-manager repos --disable=<repoid>

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

Cannot retrieve metalink for repository: ovirt-4.1-epel/x86_64. Please
verify its path and try again

From matt at khoza.com Fri Mar 9 02:22:22 2018
From: matt at khoza.com (Matt Simonsen)
Date: Thu, 8 Mar 2018 18:22:22 -0800
Subject: [ovirt-users] Node Next Install Problem
In-Reply-To: <0a7ec760-7df7-6441-9bd7-a0798aa2fac2@khoza.com>
References: <0a7ec760-7df7-6441-9bd7-a0798aa2fac2@khoza.com>
Message-ID: <20eb6607-f0b8-4664-82aa-2f6646678f80@khoza.com>

Doh! Problem solved. Well at least I found it on my own...

The date on the server is wrong, and certs were silently failing.

Matt

On 03/08/2018 04:16 PM, Matt Simonsen wrote:
>
> I installed based on an older Node Next DVD (4.1.7) that has worked in the
> past, and it doesn't appear to be working when I add it to a cluster.
>
> The installer says it cannot queue package iproute.
>
> Is there a repo down or that has changed? Thanks for any suggestions.
>
> It appears yum is also broken:
>
> yum update
> Loaded plugins: fastestmirror, imgbased-persist, package_upload, product-id,
>               : search-disabled-repos, subscription-manager
> This system is not registered with an entitlement server. You can use
> subscription-manager to register.
> centos-opstools-release                                  | 2.9 kB     00:00
> ovirt-4.1                                                | 3.0 kB     00:00
> ovirt-4.1-centos-gluster38                               | 2.9 kB     00:00
>
>
>  One of the configured repositories failed (Unknown),
>  and yum doesn't have enough cached data to continue. At this point the only
>  safe thing yum can do is fail. There are a few ways to work "fix" this:
>
>      1. Contact the upstream for the repository and get them to fix the problem.
>
>      2. Reconfigure the baseurl/etc. for the repository, to point to a working
>         upstream. This is most often useful if you are using a newer
>         distribution release than is supported by the repository (and the
>         packages for the previous distribution release still work).
>
>      3. Run the command with the repository temporarily disabled
>             yum --disablerepo=<repoid> ...
>
>      4. Disable the repository permanently, so yum won't use it by default. Yum
>         will then just ignore the repository until you permanently enable it
>         again or use --enablerepo for temporary usage:
>
>             yum-config-manager --disable <repoid>
>         or
>             subscription-manager repos --disable=<repoid>
>
>      5. Configure the failing repository to be skipped, if it is unavailable.
>         Note that yum will try to contact the repo. when it runs most commands,
>         so will have to try and fail each time (and thus. yum will be be much
>         slower). If it is a very temporary problem though, this is often a nice
>         compromise:
>
>             yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
>
> Cannot retrieve metalink for repository: ovirt-4.1-epel/x86_64. Please
> verify its path and try again
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

From caignec at cines.fr Fri Mar 9 07:12:09 2018
From: caignec at cines.fr (Lionel Caignec)
Date: Fri, 9 Mar 2018 08:12:09 +0100 (CET)
Subject: [ovirt-users] Ghost Snapshot Disk
In-Reply-To:
References: <2109773819.1728939.1519370725788.JavaMail.zimbra@cines.fr> <280580777.1830731.1519652234852.JavaMail.zimbra@cines.fr> <48154177.1832942.1519654691849.JavaMail.zimbra@cines.fr> <489433186.2545721.1520320950720.JavaMail.zimbra@cines.fr> <736794213.2945423.1520510210879.JavaMail.zimbra@cines.fr>
Message-ID: <86708766.3076883.1520579529591.JavaMail.zimbra@cines.fr>

Hi Shani,

this is the log I get in engine.log (tail -f) when trying to remove the
disk from the guest:

2018-03-09 07:59:31,741+01 WARN [org.ovirt.engine.core.bll.storage.disk.DetachDiskFromVmCommand] (default task-88) [e111eddd-63da-4c01-9885-f06dbcfb18e8] Validation of action 'DetachDiskFromVm' failed for user xxxxxxxxxxxxxxxxxx. Reasons: VAR__ACTION__DETACH_ACTION_TO,VAR__TYPE__DISK,ERROR_CANNOT_DETACH_DISK_WITH_SNAPSHOT

No more information, so I'm stuck :)

Regards

----- Original Message -----
From: "Shani Leviim"
To: "Lionel Caignec"
Cc: "users"
Sent: Thursday, 8 March 2018 16:30:53
Subject: Re: [ovirt-users] Ghost Snapshot Disk

Hi Lionel,

Can you please share once again your engine log (or at least the relevant
part where that error message occurred)?

*Regards,*

*Shani Leviim*

On Thu, Mar 8, 2018 at 1:56 PM, Lionel Caignec wrote:

> Hi,
>
> I finished moving my data, but now when I want to remove my old disk I
> get stuck on this error:
> "Cannot detach Virtual Machine Disk. The disk is already configured in a
> snapshot. In order to detach it, remove the disk's snapshots".
> But like I said before, there is no snapshot anymore.
> So what can I do? Delete it manually inside the database? If so, where?
> Delete the LVM volume manually? If so, how can I find the right one?
>
> Please help ;).
>
> Lionel
>
> ----- Original Message -----
> From: "Lionel Caignec"
> To: "Shani Leviim"
> Cc: "users"
> Sent: Tuesday, 6 March 2018 08:22:30
> Subject: Re: [ovirt-users] Ghost Snapshot Disk
>
> Hi,
>
> OK, thank you for the information (sorry for the late response).
>
> I will do that.
>
> ----- Original Message -----
> From: "Shani Leviim"
> To: "Lionel Caignec"
> Cc: "users"
> Sent: Tuesday, 27 February 2018 14:19:45
> Subject: Re: [ovirt-users] Ghost Snapshot Disk
>
> Hi Lionel,
>
> Sorry for the delay in replying to you.
>
> If it's possible from your side, syncing the data and destroying the old
> disk sounds about right.
>
> In addition, it seems like you're having this bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1509629
> And it was fixed for version 4.1.9 and above.
>
>
>
> *Regards,*
>
> *Shani Leviim*
>
> On Mon, Feb 26, 2018 at 4:18 PM, Lionel Caignec wrote:
>
> > OK, so I'm replying to myself,
> >
> > The version is 4.1.7.6-1
> >
> > I just manually deleted a previously created snapshot. But this is an
> > I/O-intensive VM, with big disks (2.5 TB and 5 TB).
> >
> > For the log, I cannot paste all my logs on a public list for security
> > reasons; I will send you the full log in private.
> > Here is an extract relevant to my error:
> > engine.log-20180210:2018-02-09 23:00:03,200+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-312) [44402a8c-3196-43f0-ba33-307ea78e6f49] EVENT_ID: USER_CREATE_SNAPSHOT(45), Correlation ID: 44402a8c-3196-43f0-ba33-307ea78e6f49, Job ID: 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002' creation for VM 'zz_nil' was initiated by snap_user at internal.
> > engine.log-20180210:2018-02-09 23:01:06,578+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) [] EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_SUCCESS(68), Correlation ID: 44402a8c-3196-43f0-ba33-307ea78e6f49, Job ID: 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002' creation for VM 'zz_nil' has been completed.
> > engine.log-20180220:2018-02-19 17:01:23,800+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-113) [] EVENT_ID: USER_REMOVE_SNAPSHOT(342), Correlation ID: 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: c9a918a7-b00c-43cf-b6de-3659ac0765da, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002' deletion for VM 'zz_nil' was initiated by acaignec at ldap-cines-authz.
> > engine.log-20180221:2018-02-20 22:24:45,174+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler6) [06a9efa4-1b80-4021-bf3e-41ecebe58a88] EVENT_ID: USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Correlation ID: 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: c9a918a7-b00c-43cf-b6de-3659ac0765da, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Failed to delete snapshot 'AUTO_7D_zz_nil_20180209_220002' for VM 'zz_nil'.
> > 2018-02-20 22:24:46,266+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler3) [516079c3] SPMAsyncTask::PollTask: Polling task '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command 'DestroyImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'success'.
> > 2018-02-20 22:24:46,267+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler3) [516079c3] BaseAsyncTask::onTaskEndSuccess: Task '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command 'DestroyImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended successfully.
> > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endActionIfNecessary: All tasks of command 'fe8c91f2-386b-4b3f-bbf3-aeda8e9244c6' has ended -> executing 'endAction'
> > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: 'fe8c91f2-386b-4b3f-bbf3-aeda8e9244c6'): calling endAction '.
> > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-20) [516079c3] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'DestroyImage',
> > 2018-02-20 22:24:46,269+01 ERROR [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-20) [516079c3] [within thread]: endAction for action type DestroyImage threw an exception.: java.lang.NullPointerException
> >         at org.ovirt.engine.core.bll.tasks.CoCoAsyncTaskHelper.endAction(CoCoAsyncTaskHelper.java:335) [bll.jar:]
> >         at org.ovirt.engine.core.bll.tasks.CommandCoordinatorImpl.endAction(CommandCoordinatorImpl.java:340) [bll.jar:]
> >         at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.endCommandAction(CommandAsyncTask.java:154) [bll.jar:]
> >         at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.lambda$endActionIfNecessary$0(CommandAsyncTask.java:106) [bll.jar:]
> >         at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:84) [utils.jar:]
> >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_161]
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_161]
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_161]
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_161]
> >         at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161]
> >
> > ----- Original Message -----
> > From: "Shani Leviim"
> > To: "Lionel Caignec"
> > Sent: Monday, 26 February 2018 14:42:38
> > Subject: Re: [ovirt-users] Ghost Snapshot Disk
> >
> > Yes, please.
> > Can you detail a bit more regarding the actions you've done?
> >
> > I'm assuming that since the snapshot had no description, trying to
> > operate on it caused the nullPointerException you've got.
> > But I want to examine what was the cause for that.
> >
> > Also, can you please answer back to the list?
> >
> >
> >
> > *Regards,*
> >
> > *Shani Leviim*
> >
> > On Mon, Feb 26, 2018 at 3:37 PM, Lionel Caignec wrote:
> >
> > > The version is 4.1.7.6-1
> > >
> > > Do you want the log from the day I deleted the snapshot?
> > >
> > > ----- Original Message -----
> > > From: "Shani Leviim"
> > > To: "Lionel Caignec"
> > > Cc: "users"
> > > Sent: Monday, 26 February 2018 14:29:16
> > > Subject: Re: [ovirt-users] Ghost Snapshot Disk
> > >
> > > Hi,
> > >
> > > What is your engine version, please?
> > > I'm trying to reproduce your steps, to understand better what the
> > > cause of that error is.
Therefore, a full engine log is needed.
> > > Can you please attach it?
> > >
> > > Thanks,
> > >
> > >
> > > *Shani Leviim*
> > >
> > > On Mon, Feb 26, 2018 at 2:48 PM, Lionel Caignec wrote:
> > >
> > > > Hi
> > > >
> > > > 1) This is the error message from ui.log:
> > > >
> > > > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-3) [] Permutation name: 8C01181C3B121D0AAE1312275CC96415
> > > > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-3) [] Uncaught exception: com.google.gwt.core.client.JavaScriptException: (TypeError)
> > > > __gwt$exception: : Cannot read property 'F' of null
> > > >         at org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocationModel$3.$onSuccess(DisksAllocationModel.java:120)
> > > >         at org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocationModel$3.onSuccess(DisksAllocationModel.java:120)
> > > >         at org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess(Frontend.java:233) [frontend.jar:]
> > > >         at org.ovirt.engine.ui.frontend.Frontend$2.onSuccess(Frontend.java:233) [frontend.jar:]
> > > >         at org.ovirt.engine.ui.frontend.communication.OperationProcessor$2.$onSuccess(OperationProcessor.java:139) [frontend.jar:]
> > > >         at org.ovirt.engine.ui.frontend.communication.OperationProcessor$2.onSuccess(OperationProcessor.java:139) [frontend.jar:]
> > > >         at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:269) [frontend.jar:]
> > > >         at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:269) [frontend.jar:]
> > > >         at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.onResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:]
> > > >         at com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:237) [gwt-servlet.jar:]
> > > >         at com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409) [gwt-servlet.jar:]
> > > >         at Unknown.eval(webadmin-0.js at 65)
> > > >         at com.google.gwt.core.client.impl.Impl.apply(Impl.java:296) [gwt-servlet.jar:]
> > > >         at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:335) [gwt-servlet.jar:]
> > > >         at Unknown.eval(webadmin-0.js at 54)
> > > >
> > > >
> > > > 2) This line seems to be about the bad disk:
> > > >
> > > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | 2748779069440 | 00000000-0000-0000-0000-000000000000 | 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c
> > > >
> > > >
> > > > 3) The snapshot table is empty for the concerned vm_id.
> > > >
> > > > ----- Original Message -----
> > > > From: "Shani Leviim"
> > > > To: "Lionel Caignec"
> > > > Cc: "users"
> > > > Sent: Monday, 26 February 2018 13:31:23
> > > > Subject: Re: [ovirt-users] Ghost Snapshot Disk
> > > >
> > > > Hi Lionel,
> > > >
> > > > The error message you've mentioned sounds like a UI error.
> > > > Can you please attach your ui log?
> > > >
> > > > Also, on the data from the 'images' table you've uploaded, can you
> > > > describe which line is the relevant disk?
> > > >
> > > > Finally (for now), in case the snapshot was deleted, can you please
> > > > validate it by viewing the output of:
> > > > $ select * from snapshots;
> > > >
> > > >
> > > >
> > > > *Regards,*
> > > >
> > > > *Shani Leviim*
> > > >
> > > > On Mon, Feb 26, 2018 at 9:20 AM, Lionel Caignec wrote:
> > > >
> > > > > Hi Shani,
> > > > > thank you for helping me with your reply,
> > > > > I just made a little mistake in my explanation. In fact, it's the
> > > > > snapshot that does not exist anymore. It is the disk(s) related to
> > > > > it which still exist, and perhaps the LVM volume.
> > > > > So can I manually delete this disk in the database? What about the
> > > > > LVM volume?
> > > > > Is it better to recreate the disk, sync the data, and destroy the
> > > > > old one?
> > > > >
> > > > >
> > > > >
> > > > > ----- Original Message -----
> > > > > From: "Shani Leviim"
> > > > > To: "Lionel Caignec"
> > > > > Cc: "users"
> > > > > Sent: Sunday, 25 February 2018 14:26:41
> > > > > Subject: Re: [ovirt-users] Ghost Snapshot Disk
> > > > >
> > > > > Hi Lionel,
> > > > >
> > > > > You can try to delete that snapshot directly from the database.
> > > > >
> > > > > In case of using psql [1], once you've logged in to your database, you
> > > > > can run this query:
> > > > > $ select * from snapshots where vm_id = '<vm_id>';
> > > > > This one would list the snapshots associated with a VM by its id.
> > > > >
> > > > > In case you don't have your vm_id, you can locate it by querying:
> > > > > $ select * from vms where vm_name = 'nil';
> > > > > This one would show you some details about a VM by its name (including
> > > > > the vm's id).
> > > > >
> > > > > Once you've found the relevant snapshot, you can delete it by running:
> > > > > $ delete from snapshots where snapshot_id = '<snapshot_id>';
> > > > > This one would delete the desired snapshot from the database.
> > > > >
> > > > > Since it's a delete operation, I would suggest confirming the ids
> > > > > before executing it.
> > > > >
> > > > > Hope you've found it useful!
> > > > >
> > > > > [1]
> > > > > https://www.ovirt.org/documentation/install-guide/appe-Preparing_a_Remote_PostgreSQL_Database_for_Use_with_the_oVirt_Engine/
> > > > >
> > > > >
> > > > > *Regards,*
> > > > >
> > > > > *Shani Leviim*
> > > > >
> > > > > On Fri, Feb 23, 2018 at 9:25 AM, Lionel Caignec wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I have a problem with a snapshot. On one VM I have a ghost
> > > > > > "snapshot" without name or UUID; the only information is its size
> > > > > > (see attachment). In the snapshot tab there is no trace of this
> > > > > > disk.
> > > > > > In the database (table images) I found this:
> > > > > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | 2748779069440 | 00000000-0000-0000-0000-000000000000 | 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c | 2 | 4 | 17e26476-cecb-441d-a5f7-46ab3ef387ee | 2018-01-17 22:01:29.663334+01 | 2018-01-19 08:40:14.345229+01 | f | 1 | 2
> > > > > > 1c7650fa-542b-4ec2-83a1-d2c1c31be5fd | 2018-01-17 22:02:03+01 | 5368709120000 | 00000000-0000-0000-0000-000000000000 | 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 22:01:20.84+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c | 2 | 4 | bf834a91-c69f-4d2c-b639-116ed58296d8 | 2018-01-17 22:01:29.836133+01 | 2018-01-19 08:40:19.083508+01 | f | 1 | 2
> > > > > > 8614b21f-c0de-40f2-b4fb-e5cf193b0743 | 2018-02-09 23:00:44+01 | 5368709120000 | 00000000-0000-0000-0000-000000000000 | 00000000-0000-0000-0000-000000000000 | 4 | 2018-02-16 23:00:02.855+01 | 390175dc-baf4-4831-936a-5ea68fa4c969
> > > > > >
> > > > > >
> > > > > > But I do not know which line is my disk. Is it possible to delete
> > > > > > it directly in the database?
> > > > > > Or is it better to dump my disk to a new one and delete the
> > > > > > "corrupted" one?
> > > > > >
> > > > > > Another thing: when I try to move the disk to another storage
> > > > > > domain I always get "uncaught exception occurred ..." and no error
> > > > > > in engine.log.
> > > > > >
> > > > > >
> > > > > > Thank you for helping.
> > > > > >
> > > > > > --
> > > > > > Lionel Caignec
> > > > > >
> > > > > > _______________________________________________
> > > > > > Users mailing list
> > > > > > Users at ovirt.org
> > > > > > http://lists.ovirt.org/mailman/listinfo/users
> > > > > >
> > > > >
> > > >
> > >
> >
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

From tadavis at lbl.gov Fri Mar 9 07:54:43 2018
From: tadavis at lbl.gov (Thomas Davis)
Date: Thu, 8 Mar 2018 23:54:43 -0800
Subject: [ovirt-users] ovirt 4.2.2-rc3 switching from legacy to OVS..
Message-ID: <031b026d-66ec-14a0-9bec-7a4b0e717556@lbl.gov>

I'm getting further along with 4.2.2-rc3 than with 4.2.1 when it comes to
hosted engine and VLANs... it actually does install under 4.2.2-rc3.

But it's a complete failure when I switch the cluster from Linux
Bridge/Legacy to OVS. The first time I try, vdsm does not properly
configure the node; it's all messed up.
I'm getting this in the vdsmd logs:

2018-03-08 23:12:46,610-0800 INFO (jsonrpc/7) [api.network] START setupNetworks(networks={u'ovirtmgmt': {u'ipv6autoconf': True, u'nic': u'eno1', u'vlan': u'50', u'ipaddr': u'192.168.85.49', u'switch': u'ovs', u'mtu': 1500, u'netmask': u'255.255.252.0', u'dhcpv6': False, u'STP': u'no', u'bridged': u'true', u'gateway': u'192.168.85.254', u'defaultRoute': True}}, bondings={}, options={u'connectivityCheck': u'true', u'connectivityTimeout': 120}) from=::ffff:192.168.85.24,56806, flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:46)
2018-03-08 23:12:52,449-0800 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573)
2018-03-08 23:12:52,511-0800 INFO (jsonrpc/7) [api.network] FINISH setupNetworks error=[Errno 19] ovirtmgmt is not present in the system from=::ffff:192.168.85.24,56806, flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:50)
2018-03-08 23:12:52,512-0800 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer] Internal server error (__init__:611)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, in _dynamicMethod
    result = fn(*methodArgs)
  File "<string>", line 2, in setupNetworks
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1527, in setupNetworks
    supervdsm.getProxy().setupNetworks(networks, bondings, options)
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53, in <lambda>
    **kwargs)
  File "<string>", line 2, in setupNetworks
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
IOError: [Errno 19] ovirtmgmt is not present in the system
2018-03-08 23:12:52,512-0800 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.setupNetworks failed (error -32603) in 5.90 seconds (__init__:573)
2018-03-08 23:12:54,769-0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573)
2018-03-08 23:12:54,772-0800 INFO (jsonrpc/5) [api.host] START getCapabilities() from=::1,45562 (api:46)
2018-03-08 23:12:54,906-0800 INFO (jsonrpc/5) [api.host] FINISH getCapabilities error=[Errno 19] ovirtmgmt is not present in the system from=::1,45562 (api:50)
2018-03-08 23:12:54,906-0800 ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer] Internal server error (__init__:611)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, in _dynamicMethod
    result = fn(*methodArgs)
  File "<string>", line 2, in getCapabilities
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1339, in getCapabilities
    c = caps.get()
  File "/usr/lib/python2.7/site-packages/vdsm/host/caps.py", line 168, in get
    net_caps = supervdsm.getProxy().network_caps()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53, in <lambda>
    **kwargs)
  File "<string>", line 2, in network_caps
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
IOError: [Errno 19] ovirtmgmt is not present in the system

So something is dreadfully wrong with the bridge-to-OVS conversion in
4.2.2-rc3.

thomas

From stirabos at redhat.com Fri Mar 9 08:21:43 2018
From: stirabos at redhat.com (Simone Tiraboschi)
Date: Fri, 9 Mar 2018 09:21:43 +0100
Subject: [ovirt-users] hosted-engine deploy fails at "Wait for the engine to come up on the target VM" step
In-Reply-To:
References:
Message-ID:

On Thu, Mar 8, 2018 at 7:28 PM, Kristian Petersen wrote:

> I am trying to deploy oVirt with a self-hosted engine and the setup seems
> to go well until near the very end, when the status message says:
> [ INFO ] TASK [Wait for the engine to come up on the target VM]
>
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed": true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.216412", "end": "2018-03-07 16:02:02.677478", "rc": 0, "start": "2018-03-07 16:02:02.461066", "stderr": "", "stderr_lines": [], "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=4679955 (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=4679956 (Wed Mar 7 16:01:51 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1, \"engine-status\": {\"reason\": \"vm not running on this host\", \"health\": \"bad\", \"vm\": \"down\", \"detail\": \"unknown\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": false, \"crc32\": \"d3a67cf7\", \"local_conf_timestamp\": 4679956, \"host-ts\": 4679955}, \"global_maintenance\": false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=4679955 (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=4679956 (Wed Mar 7 16:01:51 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1, \"engine-status\": {\"reason\": \"vm not running on this host\", \"health\": \"bad\", \"vm\": \"down\", \"detail\": \"unknown\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": false, \"crc32\": \"d3a67cf7\", \"local_conf_timestamp\": 4679956, \"host-ts\": 4679955}, \"global_maintenance\": false}"]}
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
> ansible-playbook
>
> Any ideas that might help?
>

Hi Kristian,
{\"reason\": \"vm not running on this host\" sounds really bad.
It means that ovirt-ha-agent (in charge of restarting the engine VM) thinks
that another host took over, but at that stage you should have just one
host.

Could you please attach /var/log/ovirt-hosted-engine-ha/agent.log and
/var/log/vdsm/vdsm.log for the relevant time frame?

>
>
> --
> Kristian Petersen
> System Administrator
> Dept. of Chemistry and Biochemistry
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>

From arm2arm at gmail.com Fri Mar 9 08:42:33 2018
From: arm2arm at gmail.com (Arman Khalatyan)
Date: Fri, 9 Mar 2018 09:42:33 +0100
Subject: [ovirt-users] Tape Library!
In-Reply-To: <5bb36ba9-c342-0c20-ca6d-5e4a58da2684@endlessnow.com>
References: <5bb36ba9-c342-0c20-ca6d-5e4a58da2684@endlessnow.com>
Message-ID:

Hi,
In our cluster we just passed through the FC card to a VM in order to use
an old LTO3 device... but the drawback is that only one host owns the FC
card we can use. We tested it with oVirt 4.2.x; it looks promising.
a.

On 08.03.2018 at 11:35 PM, "Christopher Cox" wrote:

On 03/08/2018 12:43 AM, Nasrum Minallah Manzoor wrote:

> Hi,
>
> I need help in configuring Amanda backup on a virtual machine added to an
> oVirt node! How can I assign my FC tape library (TS 3100 in my case) to a
> virtual machine?
>

I know at one time there was an issue created to make this work through
virtio. I mean, it was back in the early 3.x days I think. So this might
be possible now (??).

Passthrough LUN?

https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi/

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

From michael.seidel at helmholtz-muenchen.de Fri Mar 9 08:16:52 2018
From: michael.seidel at helmholtz-muenchen.de (Michael Seidel)
Date: Fri, 9 Mar 2018 09:16:52 +0100
Subject: [ovirt-users] ovirt 4.2.1 pre hosted engine deploy failure
Message-ID: <16384f09-1a4c-8582-f89d-13bd562d0656@helmholtz-muenchen.de>

Hi,

I found the messages at
http://lists.ovirt.org/pipermail/users/2018-January/086631.html in your
archive and am running into a similar/identical issue when trying to
install a hosted engine:

After providing all of the information, the installer does create some
files on the NFS share (plantfiler02:/storage/vmx/ovirt) but eventually
dies with:

[ INFO ] TASK [Copy configuration files to the right location on host]
[ INFO ] TASK [Copy configuration archive to storage]
[ ERROR ] [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin
[ ERROR ] ( at 0x25c86d0>):
[ ERROR ] 'ascii' codec can't encode character u'\u2018' in position 489: ordinal not in
[ ERROR ] range(128)
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up

The relevant part in the logfile, I believe, is the following:

2018-03-09 09:05:05,762+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 {u'_ansible_parsed': True, u'stderr_lines': [u'dd: failed to open \u2018/rhev/data-center/mnt/plantfiler02:_storage_vmx_ovirt/017fcf64-45c8-4289-87f7-c1195f7ec584/images/f36c079b-ba17-4c14-96a2-2b2beea7d989/56cd3448-4ecf-490e-99cc-ace36b977a9a\u2019: Permission denied'], u'cmd': [u'dd', u'bs=20480', u'count=1', u'oflag=direct', u'if=/var/tmp/localvmf0uaFh/56cd3448-4ecf-490e-99cc-ace36b977a9a', u'of=/rhev/data-center/mnt/plantfiler02:_storage_vmx_ovirt/017fcf64-45c8-4289-87f7-c1195f7ec584/images/f36c079b-ba17-4c14-96a2-2b2beea7d989/56cd3448-4ecf-490e-99cc-ace36b977a9a'], u'end': u'2018-03-09 09:05:05.565013', u'_ansible_no_log': False, u'stdout': u'', u'changed': True, u'start': u'2018-03-09 09:05:05.557703', u'delta': u'0:00:00.007310', u'stderr': u'dd: failed to open \u2018/rhev/data-center/mnt/plantfiler02:_storage_vmx_ovirt/017fcf64-45c8-4289-87f7-c1195f7ec584/images/f36c079b-ba17-4c14-96a2-2b2beea7d989/56cd3448-4ecf-490e-99cc-ace36b977a9a\u2019: Permission denied', u'rc': 1, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'dd bs=20480 count=1 oflag=direct if="/var/tmp/localvmf0uaFh/56cd3448-4ecf-490e-99cc-ace36b977a9a" of="/rhev/data-center/mnt/plantfiler02:_storage_vmx_ovirt/017fcf64-45c8-4289-87f7-c1195f7ec584/images/f36c079b-ba17-4c14-96a2-2b2beea7d989/56cd3448-4ecf-490e-99cc-ace36b977a9a"', u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, u'stdout_lines': [], u'ms

I did create the vdsm user and the kvm user and group on the NFS server,
and I successfully ran the nfs-check.py script from the host where oVirt
should be installed:

# python nfs-check.py plantfiler02:/storage/vmx/ovirt/
Current hostname: hyena.******.de - IP addr 10.216.60.21
Trying to /bin/mount -t nfs plantfiler02:/storage/vmx/ovirt/...
Executing NFS tests..
Removing vdsmTest file..
Status of tests [OK]
Disconnecting from NFS Server..
Done!

The target directory has the following permissions:

drwxr-xr-x 3 vdsm kvm 86 Mar 9 09:12 ovirt

I am aware of the issue
https://bugzilla.redhat.com/show_bug.cgi?id=1533500 but the underlying
problem seems to be the error message issued by dd (as has been mentioned
in the earlier posts).

Am I missing the obvious somewhere regarding permissions? Is there a known
solution/workaround to this?

Best,
- Michael

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe
Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Heinrich Bassler, Dr. Alfons Enhsen
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671

From sbonazzo at redhat.com Fri Mar 9 09:02:13 2018
From: sbonazzo at redhat.com (Sandro Bonazzola)
Date: Fri, 9 Mar 2018 10:02:13 +0100
Subject: [ovirt-users] qemu-kvm-ev-2.9.0-16.el7_4.14.1 has been released
Message-ID:

Hi,
qemu-kvm-ev-2.9.0-16.el7_4.14.1 has been tagged for release and should land
on mirrors.centos.org on Monday, March 12th 2018.

Here's the ChangeLog:

* Thu Mar 08 2018 Sandro Bonazzola - ev-2.9.0-16.el7_4.14.1
- Removing RH branding from package name

* Thu Jan 18 2018 Miroslav Rezanina - rhev-2.9.0-16.el7_4.14
- kvm-fw_cfg-fix-memory-corruption-when-all-fw_cfg-slots-a.patch [bz#1534649]
- kvm-mirror-Fix-inconsistent-backing-AioContext-for-after.patch [bz#1535125]
- Resolves: bz#1534649 (Qemu crashes when all fw_cfg slots are used [rhel-7.4.z])
- Resolves: bz#1535125 (Mirror jobs for drives with iothreads make QEMU to abort with "block.c:1895: bdrv_attach_child: Assertion `bdrv_get_aio_context(parent_bs) == bdrv_get_aio_context(child_bs)' failed." [rhel-7.4.z])

Regards,

--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA
TRIED. TESTED. TRUSTED.

From O.Dietzel at rto.de Fri Mar 9 09:08:11 2018
From: O.Dietzel at rto.de (Oliver Dietzel)
Date: Fri, 9 Mar 2018 09:08:11 +0000
Subject: [ovirt-users] 'Sanlock lockspace add failure' when trying to set up hosted engine 4.2 on existing gluster cluster
Message-ID:

Installing from the node ISO on Gluster works fine; the hosted engine VM
installs on Gluster, but after installation is finished and rebooted, the
Gluster cluster is not added as data storage.
Hosted engine is able to boot from Gluster but not able to use it. It looks
like the HE doesn't use the Gluster volume it booted from, but tries to add
the same Gluster volume a second time.

Is there a workaround?
Error message in the web GUI:
VDSM ovirt-gluster.rto.de command CreateStoragePoolVDS failed: Cannot
acquire host id: (u'e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb',
SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))

Error message in sanlock.log:

[root at ovirt-gluster ~]# tail /var/log/sanlock.log
2018-03-09 09:37:05 812 [1082]: s5 host 1 2 791 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
2018-03-09 09:37:05 812 [1082]: s5 host 250 1 0 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
2018-03-09 09:37:20 828 [1093]: s5:r4 resource e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:e555168d-719d-4f73-8541-62395c97c1ff:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/images/7b6675f7-f739-40b8-a554-b19795fe57c0/e555168d-719d-4f73-8541-62395c97c1ff.lease:0 for 3,12,5566
2018-03-09 09:40:58 1046 [1093]: s6 lockspace hosted-engine:1:/var/run/vdsm/storage/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/3b22b2fe-f4d5-4d0c-995a-03b2850b674b/eb7218ff-bdbc-49c5-af17-a62a7385d299:0
2018-03-09 09:41:20 1067 [1082]: s6 host 1 1 1046 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
2018-03-09 09:42:23 1130 [1093]: s5:r5 resource e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:e555168d-719d-4f73-8541-62395c97c1ff:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/images/7b6675f7-f739-40b8-a554-b19795fe57c0/e555168d-719d-4f73-8541-62395c97c1ff.lease:0 for 2,9,11635
2018-03-09 09:44:30 1258 [1093]: add_lockspace e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:250:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0 conflicts with name of list1 s5 e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:1:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0
2018-03-09 09:44:37 1264 [1093]: s7 lockspace hosted-engine:1:/var/run/vdsm/storage/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/3b22b2fe-f4d5-4d0c-995a-03b2850b674b/eb7218ff-bdbc-49c5-af17-a62a7385d299:0
2018-03-09 09:44:58 1285 [1082]: s7 host 1 2 1264 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
2018-03-09 09:46:42 1389 [1093]: add_lockspace e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:250:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0 conflicts with name of list1 s5 e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:1:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0
___________________________________________________________
Oliver Dietzel
RTO GmbH
Hanauer Landstraße 439
60314 Frankfurt

From msivak at redhat.com Fri Mar 9 10:13:13 2018
From: msivak at redhat.com (Martin Sivak)
Date: Fri, 9 Mar 2018 11:13:13 +0100
Subject: [ovirt-users] 'Sanlock lockspace add failure' when trying to set up hosted engine 4.2 on existing gluster cluster
In-Reply-To:
References:
Message-ID:

Hi Oliver,

which version of oVirt are you running? The issue seems to be that the
correctly deployed hosted engine does not have any storage available in the
webadmin, if I understand you correctly. Is that right?

We used to require two separate storage domains in 4.1 and older releases.
One for hosted engine and one for the rest of the VMs. 4.2.1 changed that.

So if you are running 4.1, just add another storage domain [1] and the
engine will finish the hosted engine initialization automatically after
that.
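For reference, adding that extra data domain can also be scripted against
the engine REST API rather than done in the webadmin UI. The sketch below
is illustrative only: the engine URL, credentials, host name, and NFS
export are assumed placeholder values, not details taken from this thread,
and attaching the new domain to the data center is a separate call.

# POST a new NFS data storage domain to the engine REST API (all values illustrative)
curl -s -k -u 'admin@internal:password' \
     -H 'Content-Type: application/xml' \
     -d '<storage_domain>
           <name>data1</name>
           <type>data</type>
           <storage>
             <type>nfs</type>
             <address>nfs.example.com</address>
             <path>/exports/data1</path>
           </storage>
           <host><name>host1</name></host>
         </storage_domain>' \
     'https://engine.example.com/ovirt-engine/api/storagedomains'

Once such a domain is attached and activated, the data center initializes
and the engine VM becomes visible, as described above.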
[1] https://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/

See the paragraph: "Important: Log in as the admin at internal user to
continue configuring the Engine and add further resources. You must create
another data domain for the data center to be initialized to host regular
virtual machine data, and for the Engine virtual machine to be visible. See
"Storage" in the Administration Guide for different storage options and on
how to add a data storage domain."

Best regards

Martin Sivak

On Fri, Mar 9, 2018 at 10:08 AM, Oliver Dietzel wrote:
> Installing from the node ISO on Gluster works fine; the hosted engine VM
> installs on Gluster, but after installation is finished and rebooted, the
> Gluster cluster is not added as data storage.
> Hosted engine is able to boot from Gluster but not able to use it. It
> looks like the HE doesn't use the Gluster volume it booted from, but tries
> to add the same Gluster volume a second time.
>
> Is there a workaround?
>
> Error message in the web GUI:
> VDSM ovirt-gluster.rto.de command CreateStoragePoolVDS failed: Cannot
> acquire host id: (u'e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb',
> SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
>
> Error message in sanlock.log:
>
> [root at ovirt-gluster ~]# tail /var/log/sanlock.log
> 2018-03-09 09:37:05 812 [1082]: s5 host 1 2 791 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
> 2018-03-09 09:37:05 812 [1082]: s5 host 250 1 0 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
> 2018-03-09 09:37:20 828 [1093]: s5:r4 resource e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:e555168d-719d-4f73-8541-62395c97c1ff:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/images/7b6675f7-f739-40b8-a554-b19795fe57c0/e555168d-719d-4f73-8541-62395c97c1ff.lease:0 for 3,12,5566
> 2018-03-09 09:40:58 1046 [1093]: s6 lockspace hosted-engine:1:/var/run/vdsm/storage/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/3b22b2fe-f4d5-4d0c-995a-03b2850b674b/eb7218ff-bdbc-49c5-af17-a62a7385d299:0
> 2018-03-09 09:41:20 1067 [1082]: s6 host 1 1 1046 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
> 2018-03-09 09:42:23 1130 [1093]: s5:r5 resource e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:e555168d-719d-4f73-8541-62395c97c1ff:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/images/7b6675f7-f739-40b8-a554-b19795fe57c0/e555168d-719d-4f73-8541-62395c97c1ff.lease:0 for 2,9,11635
> 2018-03-09 09:44:30 1258 [1093]: add_lockspace e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:250:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0 conflicts with name of list1 s5 e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:1:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0
> 2018-03-09 09:44:37 1264 [1093]: s7 lockspace hosted-engine:1:/var/run/vdsm/storage/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/3b22b2fe-f4d5-4d0c-995a-03b2850b674b/eb7218ff-bdbc-49c5-af17-a62a7385d299:0
> 2018-03-09 09:44:58 1285 [1082]: s7 host 1 2 1264 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
> 2018-03-09 09:46:42 1389 [1093]: add_lockspace e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:250:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0 conflicts with name of list1 s5 e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:1:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0
> ___________________________________________________________
> Oliver Dietzel
> RTO GmbH
> Hanauer Landstraße 439
> 60314 Frankfurt
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

From O.Dietzel at rto.de Fri Mar 9 10:17:17 2018
From: O.Dietzel at rto.de (Oliver Dietzel)
Date: Fri, 9 Mar 2018 10:17:17 +0000
Subject: [ovirt-users] 'Sanlock lockspace add failure' when trying to set up hosted engine 4.2 on existing gluster cluster
In-Reply-To:
References:
Message-ID:

Thanks a lot, we already did this a couple of minutes ago; it worked!

-----Original Message-----
From: Martin Sivak [mailto:msivak at redhat.com]
Sent: Friday, 9 March 2018 11:13
To: Oliver Dietzel
Cc: users at ovirt.org
Subject: Re: [ovirt-users] 'Sanlock lockspace add failure' when trying to set up hosted engine 4.2 on existing gluster cluster

Hi Oliver,

which version of oVirt are you running? The issue seems to be that the
correctly deployed hosted engine does not have any storage available in the
webadmin, if I understand you correctly. Is that right?

We used to require two separate storage domains in 4.1 and older releases.
One for hosted engine and one for the rest of the VMs. 4.2.1 changed that.

So if you are running 4.1, just add another storage domain [1] and the
engine will finish the hosted engine initialization automatically after
that.

[1] https://www.ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine/

See the paragraph: "Important: Log in as the admin at internal user to
continue configuring the Engine and add further resources. You must create
another data domain for the data center to be initialized to host regular
virtual machine data, and for the Engine virtual machine to be visible. See
"Storage" in the Administration Guide for different storage options and on
how to add a data storage domain."

Best regards

Martin Sivak

On Fri, Mar 9, 2018 at 10:08 AM, Oliver Dietzel wrote:
> Installing from the node ISO on Gluster works fine; the hosted engine VM
> installs on Gluster, but after installation is finished and rebooted, the
> Gluster cluster is not added as data storage.
> Hosted engine is able to boot from Gluster but not able to use it. It
> looks like the HE doesn't use the Gluster volume it booted from, but tries
> to add the same Gluster volume a second time.
>
> Is there a workaround?
>
> Error message in the web GUI:
> VDSM ovirt-gluster.rto.de command CreateStoragePoolVDS failed: Cannot
> acquire host id: (u'e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb',
> SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
>
> Error message in sanlock.log:
>
> [root at ovirt-gluster ~]# tail /var/log/sanlock.log
> 2018-03-09 09:37:05 812 [1082]: s5 host 1 2 791 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
> 2018-03-09 09:37:05 812 [1082]: s5 host 250 1 0 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
> 2018-03-09 09:37:20 828 [1093]: s5:r4 resource e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:e555168d-719d-4f73-8541-62395c97c1ff:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/images/7b6675f7-f739-40b8-a554-b19795fe57c0/e555168d-719d-4f73-8541-62395c97c1ff.lease:0 for 3,12,5566
> 2018-03-09 09:40:58 1046 [1093]: s6 lockspace hosted-engine:1:/var/run/vdsm/storage/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/3b22b2fe-f4d5-4d0c-995a-03b2850b674b/eb7218ff-bdbc-49c5-af17-a62a7385d299:0
> 2018-03-09 09:41:20 1067 [1082]: s6 host 1 1 1046 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
> 2018-03-09 09:42:23 1130 [1093]: s5:r5 resource e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:e555168d-719d-4f73-8541-62395c97c1ff:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/images/7b6675f7-f739-40b8-a554-b19795fe57c0/e555168d-719d-4f73-8541-62395c97c1ff.lease:0 for 2,9,11635
> 2018-03-09 09:44:30 1258 [1093]: add_lockspace e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:250:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0 conflicts with name of list1 s5 e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:1:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0
> 2018-03-09 09:44:37 1264 [1093]: s7 lockspace hosted-engine:1:/var/run/vdsm/storage/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/3b22b2fe-f4d5-4d0c-995a-03b2850b674b/eb7218ff-bdbc-49c5-af17-a62a7385d299:0
> 2018-03-09 09:44:58 1285 [1082]: s7 host 1 2 1264 f571ebc1-2572-4689-b64e-6999433f0597.ovirt-glus
> 2018-03-09 09:46:42 1389 [1093]: add_lockspace e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:250:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0 conflicts with name of list1 s5 e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb:1:/rhev/data-center/mnt/glusterSD/gluster01:_gv0/e6d008f7-e5e8-4064-9f6a-6ab7c8723eeb/dom_md/ids:0
> ___________________________________________________________
> Oliver Dietzel
> RTO GmbH
> Hanauer Landstraße 439
> 60314 Frankfurt
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

From simone.bruckner at fabasoft.com Fri Mar 9 11:07:28 2018
From: simone.bruckner at fabasoft.com (Bruckner, Simone)
Date: Fri, 9 Mar 2018 11:07:28 +0000
Subject: [ovirt-users] Faulty multipath only cleared with VDSM restart
Message-ID: <2CB4E8C8E00E594EA06D4AC427E429920FE728E2@fabamailserver.fabagl.fabasoft.com>

Hi,

after rebooting SAN switches we see faulty multipath entries in VDSM.
Running vdsm-client Host getStats shows multipathHealth entries

  "multipathHealth": {
      "3600601603cc04500a2f9cd597080db0e": {
          "valid_paths": 2,
          "failed_paths": [
              "sdcl",
              "sdde"
          ]
      },
  ...

Running multipath -ll does not show any errors.

After restarting VDSM, the multipathHealth entries from vdsm-client are
empty again.
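For reference, the two checks described above boil down to something like
the following sketch (the grep patterns are just one illustrative way to
pull out the relevant part of the output; device names will differ):

# show the multipath health section that VDSM reports in its host stats
vdsm-client Host getStats | grep -A 8 '"multipathHealth"'
# cross-check against the kernel's own multipath view for failed/faulty paths
multipath -ll | grep -i -B 2 fail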
Is the a way to clear those multipathHealth entires without restarting VDSM? Thank you and all the best, Simone -------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Fri Mar 9 18:09:49 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Fri, 9 Mar 2018 19:09:49 +0100 Subject: [ovirt-users] ovirt 4.2.1 pre hosted engine deploy failure In-Reply-To: <16384f09-1a4c-8582-f89d-13bd562d0656@helmholtz-muenchen.de> References: <16384f09-1a4c-8582-f89d-13bd562d0656@helmholtz-muenchen.de> Message-ID: On Fri, Mar 9, 2018 at 9:16 AM, Michael Seidel < michael.seidel at helmholtz-muenchen.de> wrote: > Hi, > > I found the messages at > http://lists.ovirt.org/pipermail/users/2018-January/086631.html in your > archive and am running into a similar/identical issue when trying to > install a hosted engine: > > After providing all of the information, the installer does create some > files on the nfs share (plantfiler02:/storage/vmx/ovirt) but eventually > dies with: > > [ INFO ] TASK [Copy configuration files to the right location on host] > [ INFO ] TASK [Copy configuration archive to storage] > [ ERROR ] [WARNING]: Failure using method (v2_runner_on_failed) in > callback plugin > [ ERROR ] ( at 0x25c86d0>): > [ ERROR ] 'ascii' codec can't encode character u'\u2018' in position > 489: ordinal not in > [ ERROR ] range(128) > [ ERROR ] Failed to execute stage 'Closing up': Failed executing > ansible-playbook > [ INFO ] Stage: Clean up > > The relevant part in the logfile I believe is the following: > > 2018-03-09 09:05:05,762+0100 DEBUG > otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:94 {u'_ansible_parsed': True, > u'stderr_lines': [u'dd: failed to open > \u2018/rhev/data-center/mnt/plantfiler02:_storage_vmx_ > ovirt/017fcf64-45c8-4289-87f7-c1195f7ec584/images/f36c079b- > ba17-4c14-96a2-2b2beea7d989/56cd3448-4ecf-490e-99cc-ace36b977a9a\u2019: > Permission denied'], u'cmd': [u'dd', u'bs=20480', u'count=1', > u'oflag=direct', > u'if=/var/tmp/localvmf0uaFh/56cd3448-4ecf-490e-99cc-ace36b977a9a', > u'of=/rhev/data-center/mnt/plantfiler02:_storage_vmx_ > ovirt/017fcf64-45c8-4289-87f7-c1195f7ec584/images/f36c079b- > ba17-4c14-96a2-2b2beea7d989/56cd3448-4ecf-490e-99cc-ace36b977a9a'], > u'end': u'2018-03-09 09:05:05.565013', u'_ansible_no_log': False, > u'stdout': u'', u'changed': True, u'start': u'2018-03-09 > 09:05:05.557703', u'delta': u'0:00:00.007310', u'stderr': u'dd: failed > to open > \u2018/rhev/data-center/mnt/plantfiler02:_storage_vmx_ > ovirt/017fcf64-45c8-4289-87f7-c1195f7ec584/images/f36c079b- > ba17-4c14-96a2-2b2beea7d989/56cd3448-4ecf-490e-99cc-ace36b977a9a\u2019: > Permission denied', u'rc': 1, u'invocation': {u'module_args': {u'warn': > True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'dd > bs=20480 count=1 oflag=direct > if="/var/tmp/localvmf0uaFh/56cd3448-4ecf-490e-99cc-ace36b977a9a" > of="/rhev/data-center/mnt/plantfiler02:_storage_vmx_ > ovirt/017fcf64-45c8-4289-87f7-c1195f7ec584/images/f36c079b- > ba17-4c14-96a2-2b2beea7d989/56cd3448-4ecf-490e-99cc-ace36b977a9a"', > u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, > u'stdout_lines': [], u'ms > > > I did create vdsm user and kvm user and group on the NFS server and I > succesfully ran the nfs-check.py script from the host where ovirt should > be installed: > > # python nfs-check.py plantfiler02:/storage/vmx/ovirt/ > Current hostname: hyena.******.de - IP addr 10.216.60.21 > Trying to /bin/mount -t nfs 
plantfiler02:/storage/vmx/ovirt/... > Executing NFS tests.. > Removing vdsmTest file.. > Status of tests [OK] > Disconnecting from NFS Server.. > Done! > > > The target directory has following permissions: > > drwxr-xr-x 3 vdsm kvm 86 Mar 9 09:12 ovirt > > > I am aware of the issue > https://bugzilla.redhat.com/show_bug.cgi?id=1533500 but the underlying > problem seems to be the error message issued by dd (as has been > mentioned the earlier posts). > It's will be solved in the next build but it's not the root cause of your issue but just a consequence. > > > Am I missing the obvious somewhere regarding permissions? Is there a > known solution/workaround to this? > Can you please check the permissions and the ownership of the files created at storage domain creation time? > > Best, > - Michael > Helmholtz Zentrum Muenchen > Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH) > Ingolstaedter Landstr. 1 > 85764 Neuherberg > www.helmholtz-muenchen.de > Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe > Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Heinrich Bassler, Dr. Alfons > Enhsen > Registergericht: Amtsgericht Muenchen HRB 6466 > USt-IdNr: DE 129521671 > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabrice.soler at ac-guadeloupe.fr Fri Mar 9 19:12:24 2018 From: fabrice.soler at ac-guadeloupe.fr (Fabrice SOLER) Date: Fri, 9 Mar 2018 15:12:24 -0400 Subject: [ovirt-users] firewall node Message-ID: <0e4473e7-daa2-0d33-0373-ff3702c37227@ac-guadeloupe.fr> Hello, I am trying to open a port on the node. For that, in the cluster configuration I have choosed firewalld, I have created the |*/etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml* file.| | - name: Enable additional port on firewalld ? firewalld: ??? port: "12345/tcp" ??? permanent: yes ??? immediate: yes ??? state: enabled | |then I have rebooted the node like it is noticed on this link : | |https://www.ovirt.org/blog/2017/12/host-deploy-customization/ | |On the node, after the reboot, I read the iptables (iptables -L) and the port is not open. | |I have just updated the engine and the node is 4.2.1.1.| |Is there some change about the firewalld in this version ? (in 4.2.0 it worked) | |Sincerery | -- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fabrice SOLER.PNG Type: image/png Size: 16525 bytes Desc: not available URL: From nicolas at ecarnot.net Fri Mar 9 20:10:11 2018 From: nicolas at ecarnot.net (Nicolas Ecarnot) Date: Fri, 9 Mar 2018 21:10:11 +0100 Subject: [ovirt-users] firewall node In-Reply-To: <0e4473e7-daa2-0d33-0373-ff3702c37227@ac-guadeloupe.fr> References: <0e4473e7-daa2-0d33-0373-ff3702c37227@ac-guadeloupe.fr> Message-ID: <701d44bf-9090-5ba5-663a-4816d2135e16@ecarnot.net> https://www.mail-archive.com/users at ovirt.org/msg46608.html Le 09/03/2018 ? 20:12, Fabrice SOLER a ?crit?: > Hello, > > I am trying to open a port on the node. > > For that, in the cluster configuration I have choosed firewalld, I have > created the > |*/etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml* file.| > > | > - name: Enable additional port on firewalld > ? firewalld: > ??? port: "12345/tcp" > ??? permanent: yes > ??? immediate: yes > ??? 
From nicolas at ecarnot.net Fri Mar 9 20:10:11 2018
From: nicolas at ecarnot.net (Nicolas Ecarnot)
Date: Fri, 9 Mar 2018 21:10:11 +0100
Subject: [ovirt-users] firewall node
In-Reply-To: <0e4473e7-daa2-0d33-0373-ff3702c37227@ac-guadeloupe.fr>
References: <0e4473e7-daa2-0d33-0373-ff3702c37227@ac-guadeloupe.fr>
Message-ID: <701d44bf-9090-5ba5-663a-4816d2135e16@ecarnot.net>

https://www.mail-archive.com/users at ovirt.org/msg46608.html

On 09/03/2018 at 20:12, Fabrice SOLER wrote:
> Hello,
>
> I am trying to open a port on the node. For that, in the cluster
> configuration I have chosen firewalld, and I have created the
> /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml file:
>
> - name: Enable additional port on firewalld
>   firewalld:
>     port: "12345/tcp"
>     permanent: yes
>     immediate: yes
>     state: enabled
>
> Then I rebooted the node as described at this link:
> https://www.ovirt.org/blog/2017/12/host-deploy-customization/
>
> On the node, after the reboot, I read the iptables rules (iptables -L)
> and the port is not open.
>
> I have just updated the engine, and the node is 4.2.1.1. Has something
> changed about firewalld in this version? (In 4.2.0 it worked.)
>
> Sincerely

--
Nicolas ECARNOT

From stirabos at redhat.com Fri Mar 9 23:26:48 2018
From: stirabos at redhat.com (Simone Tiraboschi)
Date: Sat, 10 Mar 2018 00:26:48 +0100
Subject: [ovirt-users] hosted-engine deploy fails at "Wait for the engine to come up on the target VM" step
In-Reply-To:
References:
Message-ID:

On Fri, Mar 9, 2018 at 8:33 PM, Kristian Petersen wrote:

> I have attached the relevant log files as requested.
> vdsm.log.1

The real issue is here (from vdsm.log; the XML tags of the domain definition were stripped by the archive, leaving the CPU model and the destroy actions run together):

BroadwellIBRS destroydestroydestroy (vm:2751)

2018-03-08 08:04:13,757-0700 ERROR (vm/9a1e133d) [virt.vm]
(vmId='9a1e133d-13d8-4613-b1a5-fd3ca81ffcc3') The vm start process failed (vm:927)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, in _startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2756, in _run
    dom.createWithFlags(flags)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: internal error: Unknown CPU model BroadwellIBRS

Indeed it should be Broadwell-IBRS.

Can you please report which rpm version of ovirt-hosted-engine-setup you used?

You can fix it in this way: copy /var/run/ovirt-hosted-engine-ha/vm.conf somewhere, edit it and update the cpuType field. Then start the engine VM with your custom vm.conf with something like:

hosted-engine --vm-start --vm-conf=/root/my_vm.conf

Keep the engine up for at least one hour and it will generate the OVF_STORE disks with the right configuration for the hosted-engine VM. It really failed at the end of the setup, so anything else should be fine.
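A minimal sketch of that workaround, assuming cpuType sits in vm.conf as a plain key=value line and that Broadwell-IBRS is the model this cluster needs (both assumptions follow from the error above):

    # work on a copy; the file under /var/run is managed by ovirt-ha-agent
    cp /var/run/ovirt-hosted-engine-ha/vm.conf /root/my_vm.conf
    # fix the broken CPU model name (assumed to be the only bad value)
    sed -i 's/BroadwellIBRS/Broadwell-IBRS/' /root/my_vm.conf
    # start the engine VM from the corrected configuration
    hosted-engine --vm-start --vm-conf=/root/my_vm.conf

After that, leaving the engine up so the OVF_STORE disks get rewritten (as described above) makes the fix stick.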
=> {"attempts": 120, "changed": >>> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:0 >>> 0:00.216412", "end": "2018-03-07 16:02:02.677478", "rc": 0, "start": >>> "2018-03-07 16:02:02.461066", "stderr": "", "stderr_lines": [], "stdout >>> ": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, >>> \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntim >>> estamp=4679955 (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400 >>> \\nvm_conf_refresh_time=4679956 (Wed Mar 7 16:01:51 >>> 2018)\\nconf_on_share >>> d_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>> \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1, \"engin >>> e-status\": {\"reason\": \"vm not running on this host\", \"health\": >>> \"bad\", \"vm\": \"down\", \"detail\": \"unknown\"}, \"score\": 3400, >>> \"stopped\": false, \"maintenance\": false, \"crc32\": \"d3a67cf7\", >>> \"local_conf_timestamp\": 4679956, \"host-ts\": 4679955}, \"global_main >>> tenance\": false}", "stdout_lines": ["{\"1\": >>> {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": >>> \"metadata_parse_version=1\ >>> \nmetadata_feature_version=1\\ntimestamp=4679955 (Wed Mar 7 16:01:50 >>> 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=4679956 (Wed Mar >>> 7 16:01:51 2018)\\nconf_on_shared_storage >>> =True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>> \"hostname\": \"rhv1.cpms. >>> byu.edu\", \"host-id\": 1, \"engine-status\": {\"reason\": \"vm not >>> running on this host\", \"health\": \"bad\", \"vm\": \"down\", \"detail\ >>> ": \"unknown\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": >>> false, \"crc32\": \"d3a67cf7\", \"local_conf_timestamp\": 4679956, \" >>> host-ts\": 4679955}, \"global_maintenance\": false}"]} >>> [ ERROR ] Failed to execute stage 'Closing up': Failed executing >>> ansible-playbook >>> >>> Any ideas that might help? >>> >> >> >> Hi Kristian, >> {\"reason\": \"vm not running on this host\" sonds really bad. >> I means that ovirt-ha-agent (in charge of restarting the engine VM) think >> that another host took over but at that stage you should have just one host. >> >> Could you please attach /var/log/ovirt-hosted-engine-ha/agent.log and >> /var/log/vdsm/vdsm.log for the relevant time frame? >> >> >>> >>> >>> -- >>> Kristian Petersen >>> System Administrator >>> Dept. of Chemistry and Biochemistry >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> > > > -- > Kristian Petersen > System Administrator > Dept. of Chemistry and Biochemistry > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gchlip2 at uic.edu Fri Mar 9 21:46:21 2018 From: gchlip2 at uic.edu (Chlipala, George Edward) Date: Fri, 9 Mar 2018 21:46:21 +0000 Subject: [ovirt-users] Cannot use noVNC in VM portal, oVirt 4.2.1.7 Message-ID: Our oVirt installation (4.2.1.7) is configured to use noVNC as the default console (ClientModeVncDefault = noVnc). This works perfectly fine in the Administrator portal. However if a user logs in to the VM portal when they click the VNC console option it generates a virt-viewer file (native VNC client) instead of opening a noVNC session. We cannot seem to find any options on the VM portal to use noVNC or set as the default. Is there another option that we need to set to allow noVNC via the VM portal? Thanks! George Chlipala, Ph.D. 
From simone.bruckner at fabasoft.com Sat Mar 10 08:36:21 2018
From: simone.bruckner at fabasoft.com (Bruckner, Simone)
Date: Sat, 10 Mar 2018 08:36:21 +0000
Subject: [ovirt-users] Multiple "Active VM before the preview" snapshots
Message-ID: <2CB4E8C8E00E594EA06D4AC427E429920FE757F6@fabamailserver.fabagl.fabasoft.com>

Hi,

we see some VMs that show an inconsistent view of snapshots. Checking the database for one example VM shows the following result:

select snapshot_id, status, description from snapshots where vm_id = '420a6445-df02-da6a-e4e3-ddc451b2914d';

             snapshot_id              |   status   |         description
--------------------------------------+------------+------------------------------
 602f6aa2-f9fa-4fc6-8349-a8afa1f55137 | OK         | Active VM before the preview
 12097e2a-7baf-497f-bd60-bfec7b111828 | IN_PREVIEW | base
 dd82844c-c46a-4010-9af3-bec836abea2c | OK         | Active VM
 e70a4715-0511-4ee4-b309-d24a54c56c2f | OK         | Active VM before the preview

We cannot perform any snapshot, clone, or copy operations on those VMs. Is there a way to get this cleared?

All the best,
Simone

From fedele.stabile at fis.unical.it Fri Mar 9 23:16:35 2018
From: fedele.stabile at fis.unical.it (Fedele Stabile)
Date: Sat, 10 Mar 2018 00:16:35 +0100
Subject: [ovirt-users] ovirt 4 and private networks
Message-ID: <06c301d3b7fc$ad522600$07f67200$@fis.unical.it>

Hello,

I have installed 3 ovirt-nodes managed by 1 self-hosted engine, all on a public network. Now I would like to separate the network traffic by creating a private network, but I am not able to. I want to use NFS and gluster storage.

Can anyone help me?

Thank you in advance,
Fedele Stabile

From ishaby at redhat.com Sun Mar 11 06:15:15 2018
From: ishaby at redhat.com (Idan Shaby)
Date: Sun, 11 Mar 2018 08:15:15 +0200
Subject: [ovirt-users] How to force remove template
In-Reply-To:
References:
Message-ID:

Hi John,

Indeed this looks odd. Can you attach engine and vdsm logs from when you tried to delete the template?
Also, any idea how it got there? Do you remember anything special that you did with the storage domain?

Regards,
Idan

On Thu, Mar 8, 2018 at 10:27 PM, John Nguyen wrote:

> Hi Guys,
>
> I apologize if you may have addressed this earlier. I have a template in
> an odd state. The template shows its disk on one storage domain, but in
> actuality the disk is on a different domain. I would like to delete this
> template since it's not in use and is out of date. However, when I try, I get
> the error "image does not exist in domain."
>
> Is there a way to force remove a template from the database? Any thoughts
> would be greatly appreciated.
>
> Thanks,
> John
From frolland at redhat.com Sun Mar 11 10:22:58 2018
From: frolland at redhat.com (Fred Rolland)
Date: Sun, 11 Mar 2018 12:22:58 +0200
Subject: [ovirt-users] Faulty multipath only cleared with VDSM restart
In-Reply-To: <2CB4E8C8E00E594EA06D4AC427E429920FE728E2@fabamailserver.fabagl.fabasoft.com>
References: <2CB4E8C8E00E594EA06D4AC427E429920FE728E2@fabamailserver.fabagl.fabasoft.com>
Message-ID:

Hi Simone,

The multipath health is built on VDSM start from the current multipath state, and after that it is maintained based on events sent by udev. You can read about the implementation details in [1].

It seems that in your scenario, either udev did not send the needed clearing events or Vdsm mishandled them. Therefore only a restart of Vdsm will clear the report.

In order to be able to debug the issue, we will need Vdsm logs with debug level (on the storage log) when the issue is happening.

Thanks,
Fred

[1] https://ovirt.org/develop/release-management/features/storage/multipath-events/

On Fri, Mar 9, 2018 at 1:07 PM, Bruckner, Simone <simone.bruckner at fabasoft.com> wrote:

> Hi,
>
> after rebooting SAN switches we see faulty multipath entries in VDSM.
>
> Running vdsm-client Host getStats shows multipathHealth entries:
>
> "multipathHealth": {
>     "3600601603cc04500a2f9cd597080db0e": {
>         "valid_paths": 2,
>         "failed_paths": [
>             "sdcl",
>             "sdde"
>         ]
>     },
>     ...
>
> Running multipath -ll does not show any errors.
>
> After restarting VDSM, the multipathHealth entries from vdsm-client are empty again.
>
> Is there a way to clear those multipathHealth entries without restarting VDSM?
>
> Thank you and all the best,
> Simone

From sleviim at redhat.com Sun Mar 11 10:51:10 2018
From: sleviim at redhat.com (Shani Leviim)
Date: Sun, 11 Mar 2018 12:51:10 +0200
Subject: [ovirt-users] Ghost Snapshot Disk
In-Reply-To: <802bdf10-372b-49b3-5284-c2cea2ba2876@pg.infn.it>
References: <2109773819.1728939.1519370725788.JavaMail.zimbra@cines.fr> <1550150634.1827635.1519649308293.JavaMail.zimbra@cines.fr> <280580777.1830731.1519652234852.JavaMail.zimbra@cines.fr> <48154177.1832942.1519654691849.JavaMail.zimbra@cines.fr> <489433186.2545721.1520320950720.JavaMail.zimbra@cines.fr> <736794213.2945423.1520510210879.JavaMail.zimbra@cines.fr> <802bdf10-372b-49b3-5284-c2cea2ba2876@pg.infn.it>
Message-ID:

Hi Enrico,

Can you please send your question in a new, separate mail to the users' list? (So we won't mix up the answers for different ovirt-engine versions.)

Thanks for your cooperation.

Regards,
Shani Leviim

On Thu, Mar 8, 2018 at 6:08 PM, Enrico wrote:

> Hi All,
> I have a similar question: I can't remove a snapshot. The oVirt version is the
> latest stable 4.2.1.7, with the engine running in non-hosted mode. Before
> removing the snapshot I shut down the VM.
> These are the logs from the engine:
>
> 2018-03-08 16:57:47,153+01 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand]
> (EE-ManagedThreadFactory-engine-Thread-40253) [711eba7d] Running command:
> ProcessDownVmCommand internal: true.
> 2018-03-08 16:57:55,589+01 INFO [org.ovirt.engine.core.
> vdsbroker.monitoring.VmsStatisticsFetcher] (EE-ManagedThreadFactory-engineScheduled-Thread-71) > [] Fetched 8 VMs from VDS '85bbf811-1069-4e67-ba86-e50dec9f5da9' > 2018-03-08 16:59:00,561+01 INFO [org.ovirt.engine.core.bll.snapshots.RemoveDiskSnapshotsCommand] > (default task-31) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Lock Acquired to > object 'EngineLock:{exclusiveLocks='[281a869f-8541-49cf-894b-f583bd26083d=DISK]', > sharedLocks=''}' > 2018-03-08 16:59:00,599+01 INFO [org.ovirt.engine.core.bll.snapshots.RemoveDiskSnapshotsCommand] > (EE-ManagedThreadFactory-engine-Thread-40284) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] > Running command: RemoveDiskSnapshotsCommand internal: false. Entities > affected : ID: 01f9b5f2-9e48-4c24-80e5-dca7f1d4d128 Type: VMAction group > MANIPULATE_VM_SNAPSHOTS with role type USER > 2018-03-08 16:59:00,613+01 INFO [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-40284) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] EVENT_ID: > USER_REMOVE_DISK_SNAPSHOT(373), Disk 'SOL_Disk1' from Snapshot(s) 'PHP > 5.6.29' of VM 'SOL-DEV' deletion was initiated by admin at internal-authz. > 2018-03-08 16:59:00,615+01 INFO [org.ovirt.engine.core.bll.snapshots.RemoveDiskSnapshotsCommand] > (EE-ManagedThreadFactory-engine-Thread-40284) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] > Lock freed to object 'EngineLock:{exclusiveLocks='[ > 281a869f-8541-49cf-894b-f583bd26083d=DISK]', sharedLocks=''}' > 2018-03-08 16:59:00,993+01 INFO [org.ovirt.engine.core.bll.snapshots.RemoveDiskSnapshotsCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-25) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Starting child command 1 of 1, > image '64e8f28d-6c00-41d8-9f60-26a87d51cb8c' > 2018-03-08 16:59:01,026+01 INFO [org.ovirt.engine.core.bll.snapshots. > ColdMergeSnapshotSingleDiskCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-7) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Running command: > ColdMergeSnapshotSingleDiskCommand internal: true. Entities affected : > ID: 00000000-0000-0000-0000-000000000000 Type: Storage > 2018-03-08 16:59:02,026+01 INFO [org.ovirt.engine.core.bll.snapshots. > ColdMergeSnapshotSingleDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command > 'ColdMergeSnapshotSingleDisk' id '2bbd81c3-9fa1-4e48-ab69-5588e1367539' > executing step 'PREPARE_MERGE' > 2018-03-08 16:59:02,048+01 INFO [org.ovirt.engine.core.bll. > storage.disk.image.PrepareMergeCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Running command: > PrepareMergeCommand internal: true. Entities affected : ID: > 00000000-0000-0000-0000-000000000000 Type: Storage > 2018-03-08 16:59:02,049+01 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker.PrepareMergeVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] START, PrepareMergeVDSCommand( > SPMColdMergeVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7', > ignoreFailoverLimit='false'}), log id: 447f2b58 > 2018-03-08 16:59:02,178+01 INFO [org.ovirt.engine.core. 
> vdsbroker.irsbroker.PrepareMergeVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] FINISH, PrepareMergeVDSCommand, > log id: 447f2b58 > 2018-03-08 16:59:02,221+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] CommandAsyncTask::Adding > CommandMultiAsyncTasks object for command 'b5983000-d637-47da-8aa2- > fb8fec50b480' > 2018-03-08 16:59:02,221+01 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] > (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] CommandMultiAsyncTasks::attachTask: > Attaching task '0f762196-21ed-4b70-995d-729f3ed72425' to command > 'b5983000-d637-47da-8aa2-fb8fec50b480'. > 2018-03-08 16:59:02,235+01 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] > (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Adding task > '0f762196-21ed-4b70-995d-729f3ed72425' (Parent Command 'PrepareMerge', > Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), > polling hasn't started yet.. > 2018-03-08 16:59:02,241+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] > (EE-ManagedThreadFactory-engineScheduled-Thread-98) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] BaseAsyncTask::startPollingTask: > Starting to poll task '0f762196-21ed-4b70-995d-729f3ed72425'. > 2018-03-08 16:59:03,262+01 INFO [org.ovirt.engine.core.bll. > SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-79) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command 'RemoveDiskSnapshots' (id: > '8f600a64-5ec5-4f81-98de-2ced76193aa4') waiting on child command id: > '2bbd81c3-9fa1-4e48-ab69-5588e1367539' type:'ColdMergeSnapshotSingleDisk' > to complete > 2018-03-08 16:59:04,271+01 INFO [org.ovirt.engine.core.bll. > SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-67) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command > 'ColdMergeSnapshotSingleDisk' (id: '2bbd81c3-9fa1-4e48-ab69-5588e1367539') > waiting on child command id: 'b5983000-d637-47da-8aa2-fb8fec50b480' > type:'PrepareMerge' to complete > 2018-03-08 16:59:07,322+01 INFO [org.ovirt.engine.core.bll. > SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-86) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command 'RemoveDiskSnapshots' (id: > '8f600a64-5ec5-4f81-98de-2ced76193aa4') waiting on child command id: > '2bbd81c3-9fa1-4e48-ab69-5588e1367539' type:'ColdMergeSnapshotSingleDisk' > to complete > 2018-03-08 16:59:08,331+01 INFO [org.ovirt.engine.core.bll. > SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-100) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command > 'ColdMergeSnapshotSingleDisk' (id: '2bbd81c3-9fa1-4e48-ab69-5588e1367539') > waiting on child command id: 'b5983000-d637-47da-8aa2-fb8fec50b480' > type:'PrepareMerge' to complete > 2018-03-08 16:59:09,986+01 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] > (EE-ManagedThreadFactory-engineScheduled-Thread-38) [] Polling and > updating Async Tasks: 1 tasks, 1 tasks to poll now > 2018-03-08 16:59:09,998+01 ERROR [org.ovirt.engine.core. > vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-38) [] Failed in > 'HSMGetAllTasksStatusesVDS' method > 2018-03-08 16:59:10,000+01 ERROR [org.ovirt.engine.core.dal. 
> dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-38) > [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM > infn-vm05.management command HSMGetAllTasksStatusesVDS failed: Volume does > not exist: ('64e8f28d-6c00-41d8-9f60-26a87d51cb8c',) > 2018-03-08 16:59:10,000+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] > (EE-ManagedThreadFactory-engineScheduled-Thread-38) [] > SPMAsyncTask::PollTask: Polling task '0f762196-21ed-4b70-995d-729f3ed72425' > (Parent Command 'PrepareMerge', Parameters Type > 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned > status 'finished', result 'cleanSuccess'. > 2018-03-08 16:59:10,000+01 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] > (EE-ManagedThreadFactory-engineScheduled-Thread-38) [] BaseAsyncTask::logEndTaskFailure: > Task '0f762196-21ed-4b70-995d-729f3ed72425' (Parent Command > 'PrepareMerge', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') > ended with failure: > -- Result: 'cleanSuccess' > -- Message: 'VDSGenericException: VDSErrorException: Failed to > HSMGetAllTasksStatusesVDS, error = Volume does not exist: > ('64e8f28d-6c00-41d8-9f60-26a87d51cb8c',), code = 201', > -- Exception: 'VDSGenericException: VDSErrorException: Failed to > HSMGetAllTasksStatusesVDS, error = Volume does not exist: > ('64e8f28d-6c00-41d8-9f60-26a87d51cb8c',), code = 201' > 2018-03-08 16:59:10,001+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (EE-ManagedThreadFactory-engineScheduled-Thread-38) [] CommandAsyncTask::endActionIfNecessary: > All tasks of command 'b5983000-d637-47da-8aa2-fb8fec50b480' has ended -> > executing 'endAction' > 2018-03-08 16:59:10,001+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (EE-ManagedThreadFactory-engineScheduled-Thread-38) [] > CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: > 'b5983000-d637-47da-8aa2-fb8fec50b480'): calling endAction '. > 2018-03-08 16:59:10,001+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (EE-ManagedThreadFactory-engine-Thread-40287) [] CommandAsyncTask::endCommandAction > [within thread] context: Attempting to endAction 'PrepareMerge', > 2018-03-08 16:59:10,005+01 ERROR [org.ovirt.engine.core.bll. > storage.disk.image.PrepareMergeCommand] (EE-ManagedThreadFactory-engine-Thread-40287) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Ending command > 'org.ovirt.engine.core.bll.storage.disk.image.PrepareMergeCommand' with > failure. > 2018-03-08 16:59:10,013+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (EE-ManagedThreadFactory-engine-Thread-40287) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] > CommandAsyncTask::HandleEndActionResult [within thread]: endAction for > action type 'PrepareMerge' completed, handling the result. > 2018-03-08 16:59:10,013+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (EE-ManagedThreadFactory-engine-Thread-40287) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] > CommandAsyncTask::HandleEndActionResult [within thread]: endAction for > action type 'PrepareMerge' succeeded, clearing tasks. > 2018-03-08 16:59:10,014+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] > (EE-ManagedThreadFactory-engine-Thread-40287) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] > SPMAsyncTask::ClearAsyncTask: Attempting to clear task > '0f762196-21ed-4b70-995d-729f3ed72425' > 2018-03-08 16:59:10,016+01 INFO [org.ovirt.engine.core. 
> vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-40287) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] START, SPMClearTaskVDSCommand( > SPMTaskGuidBaseVDSCommandParameters:{storagePoolId=' > 18d57688-6ed4-43b8-bd7c-0665b55950b7', ignoreFailoverLimit='false', > taskId='0f762196-21ed-4b70-995d-729f3ed72425'}), log id: 5a0c3cc1 > 2018-03-08 16:59:10,017+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-40287) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] START, HSMClearTaskVDSCommand(HostName > = infn-vm05.management, HSMTaskGuidBaseVDSCommandParam > eters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc', > taskId='0f762196-21ed-4b70-995d-729f3ed72425'}), log id: 608b0bf > 2018-03-08 16:59:10,033+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-40287) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] FINISH, HSMClearTaskVDSCommand, > log id: 608b0bf > 2018-03-08 16:59:10,033+01 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-40287) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] FINISH, SPMClearTaskVDSCommand, > log id: 5a0c3cc1 > 2018-03-08 16:59:10,035+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] > (EE-ManagedThreadFactory-engine-Thread-40287) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] > BaseAsyncTask::removeTaskFromDB: Removed task '0f762196-21ed-4b70-995d-729f3ed72425' > from DataBase > 2018-03-08 16:59:10,035+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (EE-ManagedThreadFactory-engine-Thread-40287) [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] > CommandAsyncTask::HandleEndActionResult [within thread]: Removing > CommandMultiAsyncTasks object for entity 'b5983000-d637-47da-8aa2- > fb8fec50b480' > 2018-03-08 16:59:15,375+01 INFO [org.ovirt.engine.core.bll. > SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-57) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command 'RemoveDiskSnapshots' (id: > '8f600a64-5ec5-4f81-98de-2ced76193aa4') waiting on child command id: > '2bbd81c3-9fa1-4e48-ab69-5588e1367539' type:'ColdMergeSnapshotSingleDisk' > to complete > 2018-03-08 16:59:16,384+01 ERROR [org.ovirt.engine.core.bll.snapshots. > ColdMergeSnapshotSingleDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-27) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command > 'ColdMergeSnapshotSingleDisk' id '2bbd81c3-9fa1-4e48-ab69-5588e1367539' > failed executing step 'PREPARE_MERGE' > 2018-03-08 16:59:16,384+01 INFO [org.ovirt.engine.core.bll. > SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-27) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command > 'ColdMergeSnapshotSingleDisk' id: '2bbd81c3-9fa1-4e48-ab69-5588e1367539' > child commands '[b5983000-d637-47da-8aa2-fb8fec50b480]' executions were > completed, status 'FAILED' > 2018-03-08 16:59:17,398+01 ERROR [org.ovirt.engine.core.bll.snapshots. > ColdMergeSnapshotSingleDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-77) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Ending command > 'org.ovirt.engine.core.bll.snapshots.ColdMergeSnapshotSingleDiskCommand' > with failure. > 2018-03-08 16:59:17,409+01 INFO [org.ovirt.engine.core.bll. 
> SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-77) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Command 'RemoveDiskSnapshots' id: > '8f600a64-5ec5-4f81-98de-2ced76193aa4' child commands > '[2bbd81c3-9fa1-4e48-ab69-5588e1367539]' executions were completed, > status 'FAILED' > 2018-03-08 16:59:18,424+01 ERROR [org.ovirt.engine.core.bll.snapshots.RemoveDiskSnapshotsCommand] > (EE-ManagedThreadFactory-engineScheduled-Thread-36) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] Ending command > 'org.ovirt.engine.core.bll.snapshots.RemoveDiskSnapshotsCommand' with > failure. > 2018-03-08 16:59:18,460+01 ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-36) > [ceb64d67-ce9e-41a8-93dd-ffa512bfd18e] EVENT_ID: > USER_REMOVE_DISK_SNAPSHOT_FINISHED_FAILURE(376), Failed to complete > deletion of Disk 'SOL_Disk1' from snapshot(s) 'PHP 5.6.29' of VM 'SOL-DEV' > (User: admin at internal-authz). > 2018-03-08 17:00:18,145+01 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] > (EE-ManagedThreadFactory-engineScheduled-Thread-46) [] Setting new tasks > map. The map contains now 0 tasks > 2018-03-08 17:00:18,145+01 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] > (EE-ManagedThreadFactory-engineScheduled-Thread-46) [] Cleared all tasks > of pool '18d57688-6ed4-43b8-bd7c-0665b55950b7'. > > Thanks a lot > Best Regards > Enrico > > > Il 08/03/18 16:30, Shani Leviim ha scritto: > > Hi Lionel, > > Can you please share once again your engine log (or at least the relevant > part where that error message occurred)? > > > *Regards, * > > *Shani Leviim * > > On Thu, Mar 8, 2018 at 1:56 PM, Lionel Caignec wrote: > >> Hi, >> >> i finished to move my data, but now when i want to remove my old disk i >> get stuck to this error : >> "Cannot detach Virtual Machine Disk. The disk is already configured in a >> snapshot. In order to detach it, remove the disk's snapshots". >> But like i said before there is no snapshot anymore. >> So what can i do? Delete manually inside database? So where? >> Delete manually lvm volume, so how can i find the good one? >> >> Please help ;). >> >> Lionel >> >> ----- Mail original ----- >> De: "Lionel Caignec" >> ?: "Shani Leviim" >> Cc: "users" >> Envoy?: Mardi 6 Mars 2018 08:22:30 >> Objet: Re: [ovirt-users] Ghost Snapshot Disk >> >> Hi, >> >> ok thank you for information (sorry for late response). >> >> I will do that. >> >> ----- Mail original ----- >> De: "Shani Leviim" >> ?: "Lionel Caignec" >> Cc: "users" >> Envoy?: Mardi 27 F?vrier 2018 14:19:45 >> Objet: Re: [ovirt-users] Ghost Snapshot Disk >> >> Hi Lionel, >> >> Sorry for the delay in replying you. >> >> If it's possible from your side, syncing the data and destroying old disk >> sounds about right. >> >> In addition, it seems like you're having this bug: >> https://bugzilla.redhat.com/show_bug.cgi?id=1509629 >> And it was fixed for version 4.1.9. and above. >> >> >> >> *Regards,* >> >> *Shani Leviim* >> >> On Mon, Feb 26, 2018 at 4:18 PM, Lionel Caignec wrote: >> >> > Ok so i reply myself, >> > >> > Version is 4.1.7.6-1 >> > >> > I just delete manually a snapshot previously created. But this is an io >> > intensive vm, whit big disk (2,5To, and 5To). >> > >> > For the log, i cannot paste all my log on public list security reason, i >> > will send you full in private. 
>> > Here is an extract relevant to my error >> > engine.log-20180210:2018-02-09 23:00:03,200+01 INFO >> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> > (default task-312) [44402a8c-3196-43f0-ba33-307ea78e6f49] EVENT_ID: >> > USER_CREATE_SNAPSHOT(45), Correlation ID: 44402a8c-3196-43f0-ba33-307ea7 >> 8e6f49, >> > Job ID: 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom >> > ID: null, Custom Event ID: -1, Message: Snapshot >> 'AUTO_7D_zz_nil_20180209_220002' >> > creation for VM 'zz_nil' was initiated by snap_user at internal. >> > engine.log-20180210:2018-02-09 23:01:06,578+01 INFO >> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> > (DefaultQuartzScheduler5) [] EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_ >> SUCCESS(68), >> > Correlation ID: 44402a8c-3196-43f0-ba33-307ea78e6f49, Job ID: >> > 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom ID: >> null, >> > Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002' >> > creation for VM 'zz_nil' has been completed. >> > engine.log-20180220:2018-02-19 17:01:23,800+01 INFO >> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> > (default task-113) [] EVENT_ID: USER_REMOVE_SNAPSHOT(342), Correlation >> ID: >> > 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: >> c9a918a7-b00c-43cf-b6de-3659ac0765da, >> > Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: >> Snapshot >> > 'AUTO_7D_zz_nil_20180209_220002' deletion for VM 'zz_nil' was initiated >> > by acaignec at ldap-cines-authz. >> > engine.log-20180221:2018-02-20 22:24:45,174+01 ERROR >> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> > (DefaultQuartzScheduler6) [06a9efa4-1b80-4021-bf3e-41ecebe58a88] >> > EVENT_ID: USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Correlation ID: >> > 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: >> c9a918a7-b00c-43cf-b6de-3659ac0765da, >> > Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Failed >> to >> > delete snapshot 'AUTO_7D_zz_nil_20180209_220002' for VM 'zz_nil'. >> > 2018-02-20 22:24:46,266+01 INFO [org.ovirt.engine.core.bll.tas >> ks.SPMAsyncTask] >> > (DefaultQuartzScheduler3) [516079c3] SPMAsyncTask::PollTask: Polling >> task >> > '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command 'DestroyImage', >> > Parameters Type 'org.ovirt.engine.core.common. >> asynctasks.AsyncTaskParameters') >> > returned status 'finished', result 'success'. >> > 2018-02-20 22:24:46,267+01 INFO [org.ovirt.engine.core.bll.tas >> ks.SPMAsyncTask] >> > (DefaultQuartzScheduler3) [516079c3] BaseAsyncTask::onTaskEndSuccess: >> > Task '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command >> > 'DestroyImage', Parameters Type 'org.ovirt.engine.core.common. >> asynctasks.AsyncTaskParameters') >> > ended successfully. >> > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tas >> ks.CommandAsyncTask] >> > (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endActionIfN >> ecessary: >> > All tasks of command 'fe8c91f2-386b-4b3f-bbf3-aeda8e9244c6' has ended >> -> >> > executing 'endAction' >> > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tas >> ks.CommandAsyncTask] >> > (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endAction: Ending >> > action for '1' tasks (command ID: 'fe8c91f2-386b-4b3f-bbf3-aeda8 >> e9244c6'): >> > calling endAction '. 
>> > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tas >> ks.CommandAsyncTask] >> > (org.ovirt.thread.pool-6-thread-20) [516079c3] >> CommandAsyncTask::endCommandAction >> > [within thread] context: Attempting to endAction 'DestroyImage', >> > 2018-02-20 22:24:46,269+01 ERROR [org.ovirt.engine.core.bll.tas >> ks.CommandAsyncTask] >> > (org.ovirt.thread.pool-6-thread-20) [516079c3] [within thread]: >> endAction >> > for action type DestroyImage threw an exception.: >> > java.lang.NullPointerException >> > at org.ovirt.engine.core.bll.tasks.CoCoAsyncTaskHelper. >> > endAction(CoCoAsyncTaskHelper.java:335) [bll.jar:] >> > at org.ovirt.engine.core.bll.tasks.CommandCoordinatorImpl. >> > endAction(CommandCoordinatorImpl.java:340) [bll.jar:] >> > at org.ovirt.engine.core.bll.tasks.CommandAsyncTask. >> > endCommandAction(CommandAsyncTask.java:154) [bll.jar:] >> > at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.lambda$ >> > endActionIfNecessary$0(CommandAsyncTask.java:106) [bll.jar:] >> > at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$ >> > InternalWrapperRunnable.run(ThreadPoolUtil.java:84) [utils.jar:] >> > at java.util.concurrent.Executors$RunnableAdapter.call( >> Executors.java:511) >> > [rt.jar:1.8.0_161] >> > at java.util.concurrent.FutureTask.run(FutureTask.java:266) >> > [rt.jar:1.8.0_161] >> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool >> Executor.java:1149) >> > [rt.jar:1.8.0_161] >> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo >> lExecutor.java:624) >> > [rt.jar:1.8.0_161] >> > at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161] >> > >> > ----- Mail original ----- >> > De: "Shani Leviim" >> > ?: "Lionel Caignec" >> > Envoy?: Lundi 26 F?vrier 2018 14:42:38 >> > Objet: Re: [ovirt-users] Ghost Snapshot Disk >> > >> > Yes, please. >> > Can you detail a bit more regarding the actions you've done? >> > >> > I'm assuming that since the snapshot had no description, trying to >> operate >> > it caused the nullPointerException you've got. >> > But I want to examine what was the cause for that. >> > >> > Also, can you please answer back to the list? >> > >> > >> > >> > *Regards,* >> > >> > *Shani Leviim* >> > >> > On Mon, Feb 26, 2018 at 3:37 PM, Lionel Caignec >> wrote: >> > >> > > Version is 4.1.7.6-1 >> > > >> > > Do you want the log from the day i delete snapshot? >> > > >> > > ----- Mail original ----- >> > > De: "Shani Leviim" >> > > ?: "Lionel Caignec" >> > > Cc: "users" >> > > Envoy?: Lundi 26 F?vrier 2018 14:29:16 >> > > Objet: Re: [ovirt-users] Ghost Snapshot Disk >> > > >> > > Hi, >> > > >> > > What is your engine version, please? >> > > I'm trying to reproduce your steps, for understanding better was is >> the >> > > cause for that error. Therefore, a full engine log is needed. >> > > Can you please attach it? >> > > >> > > Thanks, >> > > >> > > >> > > *Shani Leviim* >> > > >> > > On Mon, Feb 26, 2018 at 2:48 PM, Lionel Caignec >> > wrote: >> > > >> > > > Hi >> > > > >> > > > 1) this is error message from ui.log >> > > > >> > > > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend. >> > > > server.gwt.OvirtRemoteLoggingService] (default task-3) [] >> Permutation >> > > > name: 8C01181C3B121D0AAE1312275CC96415 >> > > > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend. >> > > server.gwt.OvirtRemoteLoggingService] >> > > > (default task-3) [] Uncaught exception: com.google.gwt.core.client. 
>> > > JavaScriptException: >> > > > (TypeError) >> > > > __gwt$exception: : Cannot read property 'F' of null >> > > > at org.ovirt.engine.ui.uicommonweb.models.storage. >> > > > DisksAllocationModel$3.$onSuccess(DisksAllocationModel.java:120) >> > > > at org.ovirt.engine.ui.uicommonweb.models.storage. >> > > > DisksAllocationModel$3.onSuccess(DisksAllocationModel.java:120) >> > > > at org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess( >> > > Frontend.java:233) >> > > > [frontend.jar:] >> > > > at org.ovirt.engine.ui.frontend.F >> rontend$2.onSuccess(Frontend. >> > > java:233) >> > > > [frontend.jar:] >> > > > at org.ovirt.engine.ui.frontend.communication. >> > > > OperationProcessor$2.$onSuccess(OperationProcessor.java:139) >> > > > [frontend.jar:] >> > > > at org.ovirt.engine.ui.frontend.communication. >> > > > OperationProcessor$2.onSuccess(OperationProcessor.java:139) >> > > > [frontend.jar:] >> > > > at org.ovirt.engine.ui.frontend.communication. >> > > > GWTRPCCommunicationProvider$5$1.$onSuccess( >> > GWTRPCCommunicationProvider. >> > > java:269) >> > > > [frontend.jar:] >> > > > at org.ovirt.engine.ui.frontend.communication. >> > > > GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicatio >> nProvider. >> > > java:269) >> > > > [frontend.jar:] >> > > > at com.google.gwt.user.client.rpc >> .impl.RequestCallbackAdapter. >> > > > onResponseReceived(RequestCallbackAdapter.java:198) >> [gwt-servlet.jar:] >> > > > at com.google.gwt.http.client.Req >> uest.$fireOnResponseReceived( >> > > Request.java:237) >> > > > [gwt-servlet.jar:] >> > > > at com.google.gwt.http.client.RequestBuilder$1. >> > > onReadyStateChange(RequestBuilder.java:409) >> > > > [gwt-servlet.jar:] >> > > > at Unknown.eval(webadmin-0.js at 65) >> > > > at com.google.gwt.core.client.imp >> l.Impl.apply(Impl.java:296) >> > > > [gwt-servlet.jar:] >> > > > at com.google.gwt.core.client.imp >> l.Impl.entry0(Impl.java:335) >> > > > [gwt-servlet.jar:] >> > > > at Unknown.eval(webadmin-0.js at 54) >> > > > >> > > > >> > > > 2) This line seems to be about the bad disk : >> > > > >> > > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | >> > > > 2748779069440 | 00000000-0000-0000-0000-000000000000 | >> > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 >> > > > 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c >> > > > >> > > > >> > > > 3) Snapshot table is empty for the concerned vm_id. >> > > > >> > > > ----- Mail original ----- >> > > > De: "Shani Leviim" >> > > > ?: "Lionel Caignec" >> > > > Cc: "users" >> > > > Envoy?: Lundi 26 F?vrier 2018 13:31:23 >> > > > Objet: Re: [ovirt-users] Ghost Snapshot Disk >> > > > >> > > > Hi Lionel, >> > > > >> > > > The error message you've mentioned sounds like a UI error. >> > > > Can you please attach your ui log? >> > > > >> > > > Also, on the data from 'images' table you've uploaded, can you >> describe >> > > > which line is the relevant disk? >> > > > >> > > > Finally (for now), in case the snapshot was deleted, can you please >> > > > validate it by viewing the output of: >> > > > $ select * from snapshots; >> > > > >> > > > >> > > > >> > > > *Regards,* >> > > > >> > > > *Shani Leviim* >> > > > >> > > > On Mon, Feb 26, 2018 at 9:20 AM, Lionel Caignec >> > > wrote: >> > > > >> > > > > Hi Shani, >> > > > > thank you for helping me with your reply, >> > > > > i juste make a little mistake on explanation. In fact it's the >> > snapshot >> > > > > does not exist anymore. 
This is the disk(s) relative to her wich >> > still >> > > > > exist, and perhaps LVM volume. >> > > > > So can i delete manually this disk in database? what about the lvm >> > > > volume? >> > > > > Is it better to recreate disk sync data and destroy old one? >> > > > > >> > > > > >> > > > > >> > > > > ----- Mail original ----- >> > > > > De: "Shani Leviim" >> > > > > ?: "Lionel Caignec" >> > > > > Cc: "users" >> > > > > Envoy?: Dimanche 25 F?vrier 2018 14:26:41 >> > > > > Objet: Re: [ovirt-users] Ghost Snapshot Disk >> > > > > >> > > > > Hi Lionel, >> > > > > >> > > > > You can try to delete that snapshot directly from the database. >> > > > > >> > > > > In case of using psql [1], once you've logged in to your >> database, >> > you >> > > > can >> > > > > run this query: >> > > > > $ select * from snapshots where vm_id = ''; >> > > > > This one would list the snapshots associated with a VM by its id. >> > > > > >> > > > > In case you don't have you vm_id, you can locate it by querying: >> > > > > $ select * from vms where vm_name = 'nil'; >> > > > > This one would show you some details about a VM by its name >> > (including >> > > > the >> > > > > vm's id). >> > > > > >> > > > > Once you've found the relevant snapshot, you can delete it by >> > running: >> > > > > $ delete from snapshots where snapshot_id = ''; >> > > > > This one would delete the desired snapshot from the database. >> > > > > >> > > > > Since it's a delete operation, I would suggest confirming the ids >> > > before >> > > > > executing it. >> > > > > >> > > > > Hope you've found it useful! >> > > > > >> > > > > [1] >> > > > > https://www.ovirt.org/documentation/install-guide/ >> > > > appe-Preparing_a_Remote_ >> > > > > PostgreSQL_Database_for_Use_with_the_oVirt_Engine/ >> > > > > >> > > > > >> > > > > *Regards,* >> > > > > >> > > > > *Shani Leviim* >> > > > > >> > > > > On Fri, Feb 23, 2018 at 9:25 AM, Lionel Caignec > > >> > > > wrote: >> > > > > >> > > > > > Hi, >> > > > > > >> > > > > > i've a problem with snapshot. On one VM i've a "snapshot" ghost >> > > without >> > > > > > name or uuid, only information is size (see attachment). In the >> > > > snapshot >> > > > > > tab there is no trace about this disk. 
>> > > > > > >> > > > > > In database (table images) i found this : >> > > > > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 >> 22:02:00+01 | >> > > > > > 2748779069440 | 00000000-0000-0000-0000-000000000000 | >> > > > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 >> > > > > > 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c >> > > > > > | 2 | 4 | 17e26476-cecb-441d-a5f7- >> > > 46ab3ef387ee >> > > > | >> > > > > > 2018-01-17 22:01:29.663334+01 | 2018-01-19 08:40:14.345229+01 | >> f >> > > > | >> > > > > > 1 | 2 >> > > > > > 1c7650fa-542b-4ec2-83a1-d2c1c31be5fd | 2018-01-17 22:02:03+01 >> | >> > > > > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | >> > > > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 >> > > > > > 22:01:20.84+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c >> > > > > > | 2 | 4 | bf834a91-c69f-4d2c-b639- >> > > 116ed58296d8 >> > > > | >> > > > > > 2018-01-17 22:01:29.836133+01 | 2018-01-19 08:40:19.083508+01 | >> f >> > > > | >> > > > > > 1 | 2 >> > > > > > 8614b21f-c0de-40f2-b4fb-e5cf193b0743 | 2018-02-09 23:00:44+01 >> | >> > > > > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | >> > > > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-02-16 >> > > > > > 23:00:02.855+01 | 390175dc-baf4-4831-936a-5ea68fa4c969 >> > > > > > >> > > > > > >> > > > > > But i does not know which line is my disk. Is it possible to >> > delete >> > > > > > directly into database? >> > > > > > Or is it better to dump my disk to another new and delete the >> > > > "corrupted >> > > > > > one"? >> > > > > > >> > > > > > Another thing, when i try to move the disk to another storage >> > > domain i >> > > > > > always get "uncaght exeption occured ..." and no error in >> > engine.log. >> > > > > > >> > > > > > >> > > > > > Thank you for helping. >> > > > > > >> > > > > > -- >> > > > > > Lionel Caignec >> > > > > > >> > > > > > _______________________________________________ >> > > > > > Users mailing list >> > > > > > Users at ovirt.org >> > > > > > http://lists.ovirt.org/mailman/listinfo/users >> > > > > > >> > > > > > >> > > > > >> > > > >> > > >> > >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > > > > _______________________________________________ > Users mailing listUsers at ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simone.bruckner at fabasoft.com Sun Mar 11 12:23:03 2018 From: simone.bruckner at fabasoft.com (Bruckner, Simone) Date: Sun, 11 Mar 2018 12:23:03 +0000 Subject: [ovirt-users] Faulty multipath only cleared with VDSM restart In-Reply-To: References: <2CB4E8C8E00E594EA06D4AC427E429920FE728E2@fabamailserver.fabagl.fabasoft.com> Message-ID: <2CB4E8C8E00E594EA06D4AC427E429920FE78DB5@fabamailserver.fabagl.fabasoft.com> Hi Fred, thank you for the explanation. I restarted VDSM and will monitor the behaviour. Does that faulty multipath report have any side effects on stability and performance? All the best, Oliver Von: Fred Rolland [mailto:frolland at redhat.com] Gesendet: Sonntag, 11. 
März 2018 11:23
An: Bruckner, Simone
Cc: users at ovirt.org
Betreff: Re: [ovirt-users] Faulty multipath only cleared with VDSM restart

Hi Simone,

The multipath health is built on VDSM start from the current multipath state, and after that it is maintained based on events sent by udev. You can read about the implementation details in [1].

It seems that in your scenario, either udev did not send the needed clearing events or Vdsm mishandled them. Therefore only a restart of Vdsm will clear the report.

In order to be able to debug the issue, we will need Vdsm logs with debug level (on the storage log) when the issue is happening.

Thanks,
Fred

[1] https://ovirt.org/develop/release-management/features/storage/multipath-events/

On Fri, Mar 9, 2018 at 1:07 PM, Bruckner, Simone wrote:

Hi,

after rebooting SAN switches we see faulty multipath entries in VDSM.

Running vdsm-client Host getStats shows multipathHealth entries:

"multipathHealth": {
    "3600601603cc04500a2f9cd597080db0e": {
        "valid_paths": 2,
        "failed_paths": [
            "sdcl",
            "sdde"
        ]
    },
    ...

Running multipath -ll does not show any errors.

After restarting VDSM, the multipathHealth entries from vdsm-client are empty again.

Is there a way to clear those multipathHealth entries without restarting VDSM?

Thank you and all the best,
Simone

From frolland at redhat.com Sun Mar 11 12:29:33 2018
From: frolland at redhat.com (Fred Rolland)
Date: Sun, 11 Mar 2018 14:29:33 +0200
Subject: [ovirt-users] Faulty multipath only cleared with VDSM restart
In-Reply-To: <2CB4E8C8E00E594EA06D4AC427E429920FE78DB5@fabamailserver.fabagl.fabasoft.com>
References: <2CB4E8C8E00E594EA06D4AC427E429920FE728E2@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE78DB5@fabamailserver.fabagl.fabasoft.com>
Message-ID:

The multipath report does not have any effect on stability/performance. It is used for creating events for the user in the engine, in order to detect issues in their storage earlier. [1]

In your case, you would have a wrong alarm in the Engine that would not be cleared until the next Vdsm restart.

[1] https://ovirt.org/develop/release-management/features/storage/multipath-events/#event-reporting

On Sun, Mar 11, 2018 at 2:23 PM, Bruckner, Simone <simone.bruckner at fabasoft.com> wrote:

> Hi Fred,
>
> thank you for the explanation. I restarted VDSM and will monitor the behaviour.
>
> Does that faulty multipath report have any side effects on stability and
> performance?
>
> All the best,
> Oliver
>
> Von: Fred Rolland [mailto:frolland at redhat.com]
> Gesendet: Sonntag, 11. März 2018 11:23
> An: Bruckner, Simone
> Cc: users at ovirt.org
> Betreff: Re: [ovirt-users] Faulty multipath only cleared with VDSM restart
>
> Hi Simone,
>
> The multipath health is built on VDSM start from the current multipath
> state, and after that it is maintained based on events sent by udev.
> You can read about the implementation details in [1].
>
> It seems that in your scenario, either udev did not send the needed
> clearing events or Vdsm mishandled them.
> Therefore only a restart of Vdsm will clear the report.
>
> In order to be able to debug the issue, we will need Vdsm logs with debug
> level (on the storage log) when the issue is happening.
>
> Thanks,
> Fred
>
> [1] https://ovirt.org/develop/release-management/features/storage/multipath-events/
>
> On Fri, Mar 9, 2018 at 1:07 PM, Bruckner, Simone <simone.bruckner at fabasoft.com> wrote:
>
>> Hi,
>>
>> after rebooting SAN switches we see faulty multipath entries in VDSM.
>> Running vdsm-client Host getStats shows multipathHealth entries.
>> Running multipath -ll does not show any errors.
>> After restarting VDSM, the multipathHealth entries from vdsm-client are empty again.
>>
>> Is there a way to clear those multipathHealth entries without restarting VDSM?
>>
>> Thank you and all the best,
>> Simone
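For anyone needing to capture those debug-level storage logs, a rough sketch — the runtime verb is assumed to exist in this vdsm version, and the logger section name is taken from vdsm's stock configuration, so check both first:

    # raise the storage logger to DEBUG at runtime, if your vdsm supports it
    vdsm-client Host setLogLevel level=DEBUG name=storage
    # fallback: set level=DEBUG under [logger_storage] in /etc/vdsm/logger.conf
    # and restart vdsmd -- but note the restart itself clears the stale report,
    # so the debug level has to be in place before the SAN event happens
    systemctl restart vdsmd

Then reproduce the stale multipathHealth entries and collect /var/log/vdsm/vdsm.log.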
From sleviim at redhat.com Sun Mar 11 13:08:51 2018
From: sleviim at redhat.com (Shani Leviim)
Date: Sun, 11 Mar 2018 15:08:51 +0200
Subject: [ovirt-users] Cannot activate storage domain
In-Reply-To: <9896E8032366964791E8E3595991B2602857363B@fabamailserver.fabagl.fabasoft.com>
References: <2CB4E8C8E00E594EA06D4AC427E429920FE500D1@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE56290@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE5A323@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE6670D@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE67150@fabamailserver.fabagl.fabasoft.com> <9896E8032366964791E8E3595991B2602857363B@fabamailserver.fabagl.fabasoft.com>
Message-ID:

Hi Simone,

Sorry for the delay in replying to you.

Is the second storage domain you've mentioned also an FC domain? If so, please execute the following command on one host of each inactive storage domain and share the results you get:

/usr/libexec/vdsm/fc-scan -v

Regards,
Shani Leviim

On Thu, Mar 8, 2018 at 9:42 AM, Bruckner, Simone <simone.bruckner at fabasoft.com> wrote:

> Hi Shani,
>
> today I again lost access to a storage domain. So currently I have two
> storage domains that we cannot activate any more.
>
> I uploaded the logfiles to our Cloud Service: [ZIP Archive]
> logfiles.tar.gz 1zf8q45o1e9l8334agryb2crdd>
> I lost access today, March 8th 2018 around 0.55am CET
> I tried to activate the storage domain around 6.40am CET
>
> Please let me know if there is anything I can do to get this addressed.
>
> Thank you very much,
> Simone
>
> ________________________________
> Von: users-bounces at ovirt.org [users-bounces at ovirt.org] im Auftrag von
> Bruckner, Simone [simone.bruckner at fabasoft.com]
> Gesendet: Dienstag, 06. März 2018 10:19
> An: Shani Leviim
> Cc: users at ovirt.org
> Betreff: Re: [ovirt-users] Cannot activate storage domain
>
> Hi Shani,
>
> please find the logs attached.
>
> Thank you,
> Simone
>
> Von: Shani Leviim [mailto:sleviim at redhat.com]
> Gesendet: Dienstag, 6. März 2018 09:48
> An: Bruckner, Simone
> Cc: users at ovirt.org
> Betreff: Re: [ovirt-users] Cannot activate storage domain
>
> Hi Simone,
> Can you please share your vdsm and engine logs?
>
> Regards,
> Shani Leviim
>
> On Tue, Mar 6, 2018 at 7:34 AM, Bruckner, Simone <simone.bruckner at fabasoft.com> wrote:
>
> Hello, I apologize for bringing this one up again, but does anybody know
> if there is a chance to recover a storage domain that cannot be activated?
>
> Thank you,
> Simone
>
> Von: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] Im Auftrag von Bruckner, Simone
> Gesendet: Freitag, 2. März 2018 17:03
> An: users at ovirt.org
> Betreff: Re: [ovirt-users] Cannot activate storage domain
>
> Hi all,
>
> I managed to get the inactive storage domain to maintenance by stopping
> all running VMs that were using it, but I am still not able to activate it.
>
> Trying to activate results in the following events:
>
> For each host:
> VDSM command GetVGInfoVDS failed: Volume Group does not exist:
> (u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',)
>
> And finally:
> VDSM command ActivateStorageDomainVDS failed: Storage domain does not
> exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)
>
> Is there anything I can do to recover this storage domain?
>
> Thank you and all the best,
> Simone
>
> Von: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] Im Auftrag von Bruckner, Simone
> Gesendet: Donnerstag, 1. März 2018 17:57
> An: users at ovirt.org
> Betreff: Re: [ovirt-users] Cannot activate storage domain
>
> Hi,
>
> we are still struggling to get a storage domain online again. We tried
> to put the storage domain in maintenance mode, which led to "Failed to
> update OVF disks 809cc8d7-7687-46cf-a342-3be48674a9b3, OVF data isn't
> updated on those OVF stores".
>
> Trying again with ignoring OVF update failures put the storage domain in
> "preparing for maintenance". We see the following message on all hosts:
> "Error releasing host id 26 for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0
> (monitor:578)".
>
> Querying the storage domain using vdsm-client on the SPM resulted in
> # vdsm-client StorageDomain getInfo "storagedomainID"="b83c159c-4ad6-4613-ba16-bab95ccd10c0"
> vdsm-client: Command StorageDomain.getInfo with args {'storagedomainID':
> 'b83c159c-4ad6-4613-ba16-bab95ccd10c0'} failed:
> (code=358, message=Storage domain does not exist:
> (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',))
>
> Any ideas?
>
> Thank you and all the best,
> Simone
>
> Von: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] Im Auftrag von Bruckner, Simone
> Gesendet: Mittwoch, 28. Februar 2018 15:52
> An: users at ovirt.org
> Betreff: [ovirt-users] Cannot activate storage domain
>
> Hi all,
>
> we run a small oVirt installation that we also use for automated testing
> (automatically creating and dropping VMs).
>
> We got an inactive FC storage domain that we cannot activate any more. We
> see several events at that time, starting with:
>
> VM perftest-c17 is down with error. Exit message: Unable to get volume
> size for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 volume
> 686376c1-4be1-44c3-89a3-0a8addc8fdf2.
>
> Trying to activate the storage domain results in the following alert
> event for each host:
>
> VDSM command GetVGInfoVDS failed: Volume Group does not exist:
> (u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',)
>
> And after those messages from all hosts we get:
>
> VDSM command ActivateStorageDomainVDS failed: Storage domain does not
> exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)
> Failed to activate Storage Domain VMHOST_LUN_205 (Data Center Production) by
> Invalid status on Data Center Production.
> Setting status to Non Responsive.
> Storage Pool Manager runs on Host vmhost003.fabagl.fabasoft.com (Address:
> vmhost003.fabagl.fabasoft.com), Data Center Production.
>
> Checking the hosts with multipath -ll we see the LUN without errors.
>
> We run oVirt 4.2.1 on CentOS 7.4. Hosts are CentOS 7.4 hosts with oVirt
> installed using oVirt engine.
> Hosts are connected to about 30 FC LUNs (8 TB each) on two all-flash
> storage arrays.
>
> Thank you,
> Simone Bruckner

From simone.bruckner at fabasoft.com Sun Mar 11 16:49:05 2018
From: simone.bruckner at fabasoft.com (Bruckner, Simone)
Date: Sun, 11 Mar 2018 16:49:05 +0000
Subject: [ovirt-users] Cannot activate storage domain
In-Reply-To:
References: <2CB4E8C8E00E594EA06D4AC427E429920FE500D1@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE56290@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE5A323@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE6670D@fabamailserver.fabagl.fabasoft.com> <2CB4E8C8E00E594EA06D4AC427E429920FE67150@fabamailserver.fabagl.fabasoft.com> <9896E8032366964791E8E3595991B2602857363B@fabamailserver.fabagl.fabasoft.com>
Message-ID: <2CB4E8C8E00E594EA06D4AC427E429920FE792B3@fabamailserver.fabagl.fabasoft.com>

Shani,

all storage domains are FC. In the meantime I could track it down to corrupt VG metadata. I filed a bug (https://bugzilla.redhat.com/show_bug.cgi?id=1553133).

I ran vgcfgrestore, copied the VMs I could recover, and recreated the LUNs. Would there have been another way of recovering?

All the best,
Simone
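For the record, a rough sketch of the vgcfgrestore path mentioned above — assuming LVM kept metadata archives under /etc/lvm/archive on a host that saw the VG, and assuming the VG is named after the storage-domain UUID (an inference from the GetVGInfoVDS errors quoted earlier); restoring metadata over a live storage domain is risky, so test carefully:

    # list the archived metadata versions for the volume group
    vgcfgrestore --list b83c159c-4ad6-4613-ba16-bab95ccd10c0
    # restore a known-good version; the file name comes from the listing above
    vgcfgrestore -f /etc/lvm/archive/<archive-file>.vg b83c159c-4ad6-4613-ba16-bab95ccd10c0

The <archive-file> placeholder is hypothetical and must be taken from the actual listing.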
On Tue, Mar 6, 2018 at 7:34 AM, Bruckner, Simone wrote:

Hello, I apologize for bringing this one up again, but does anybody know
if there is a chance to recover a storage domain that cannot be activated?

Thank you,
Simone

From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On behalf of Bruckner, Simone
Sent: Friday, March 2, 2018 17:03
To: users at ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi all,

I managed to get the inactive storage domain to maintenance by stopping all
running VMs that were using it, but I am still not able to activate it.

Trying to activate results in the following events:

For each host:
VDSM command GetVGInfoVDS failed: Volume Group does not exist:
(u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',)

And finally:
VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist:
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)

Is there anything I can do to recover this storage domain?

Thank you and all the best,
Simone

From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On behalf of Bruckner, Simone
Sent: Thursday, March 1, 2018 17:57
To: users at ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain

Hi,

we are still struggling getting a storage domain online again. We tried to
put the storage domain in maintenance mode, that led to "Failed to update
OVF disks 809cc8d7-7687-46cf-a342-3be48674a9b3, OVF data isn't updated on
those OVF stores".

Trying again with ignoring OVF update failures put the storage domain in
"preparing for maintenance". We see the following message on all hosts:
"Error releasing host id 26 for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0
(monitor:578)".

Querying the storage domain using vdsm-client on the SPM resulted in

# vdsm-client StorageDomain getInfo "storagedomainID"="b83c159c-4ad6-4613-ba16-bab95ccd10c0"
vdsm-client: Command StorageDomain.getInfo with args {'storagedomainID':
'b83c159c-4ad6-4613-ba16-bab95ccd10c0'} failed:
(code=358, message=Storage domain does not exist:
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',))

Any ideas?

Thank you and all the best,
Simone

From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On behalf of Bruckner, Simone
Sent: Wednesday, February 28, 2018 15:52
To: users at ovirt.org
Subject: [ovirt-users] Cannot activate storage domain

Hi all,

we run a small oVirt installation that we also use for automated testing
(automatically creating, dropping vms).

We got an inactive FC storage domain that we cannot activate any more. We
see several events at that time starting with:

VM perftest-c17 is down with error. Exit message: Unable to get volume size
for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 volume
686376c1-4be1-44c3-89a3-0a8addc8fdf2.

Trying to activate the storage domain results in the following alert event
for each host:

VDSM command GetVGInfoVDS failed: Volume Group does not exist:
(u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',)

And after those messages from all hosts we get:

VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist:
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',)
Failed to activate Storage Domain VMHOST_LUN_205 (Data Center Production) by
Invalid status on Data Center Production.
Setting status to Non Responsive.
Storage Pool Manager runs on Host vmhost003.fabagl.fabasoft.com (Address:
vmhost003.fabagl.fabasoft.com),
Data Center Production.

Checking the hosts with multipath -ll we see the LUN without errors.
We run oVirt 4.2.1 on CentOS 7.4. Hosts are CentOS 7.4 hosts with oVirt
installed using oVirt engine.
Hosts are connected to about 30 FC LUNs (8 TB each) on two all-flash
storage arrays.

Thank you,
Simone Bruckner

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

From nicolas.vaye at province-sud.nc  Sun Mar 11 22:27:59 2018
From: nicolas.vaye at province-sud.nc (Nicolas Vaye)
Date: Sun, 11 Mar 2018 22:27:59 +0000
Subject: [ovirt-users] storage domain ovirt-image-repository doesn't work
Message-ID: <1520807274.18402.56.camel at province-sud.nc>

Hello,

I have installed one oVirt platform with 2 nodes and 1 HE, version 4.2.1.7-1.

It seems to work fine, but I have an issue with the ovirt-image-repository:
it is impossible to get the list of available images for this domain.

[inline screenshot scrubbed from the archive]

My cluster is on a private network, so there is a proxy to get internet
access. I have tried with a specific proxy configuration on each node
(https://www.server-world.info/en/note?os=CentOS_7&p=squid&f=2), so yum
update, wget and curl against http://glance.ovirt.org:9292/ all succeed,
but nothing shows up in the webui for the ovirt-image-repository domain.

I have tried another test with a transparent proxy and the result is the
same: yum update, wget and curl against http://glance.ovirt.org:9292/
succeed, but nothing in the webui for the ovirt-image-repository domain.

I don't know where the specific log for this part is.

Can I have help with this issue?

Thanks.

Nicolas VAYE
DSI - Nouméa
NEW CALEDONIA
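This one goes unanswered in the archive, so one diagnostic step is worth
sketching. It is the engine, not the hypervisor hosts, that queries
glance.ovirt.org for the image list, so yum/wget/curl succeeding on the
nodes proves little. A minimal check, run on the engine machine itself
(the proxy host and port below are placeholders, not values from the
thread):

  # As the engine would try by default, without a proxy:
  curl -sv http://glance.ovirt.org:9292/ -o /dev/null
  # Through the proxy that yum/wget are known to work with:
  curl -sv -x http://proxy.example.lan:3128 http://glance.ovirt.org:9292/ -o /dev/null

If only the proxied call succeeds, the engine's Java process needs the
proxy configuration too; exporting http_proxy in an interactive shell does
not affect the already-running ovirt-engine service.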
From nicolas.vaye at province-sud.nc  Sun Mar 11 22:56:32 2018
From: nicolas.vaye at province-sud.nc (Nicolas Vaye)
Date: Sun, 11 Mar 2018 22:56:32 +0000
Subject: [ovirt-users] VM guest agent
Message-ID: <1520808989.18402.58.camel at province-sud.nc>

Hello,

I have installed one oVirt platform with 2 nodes and 1 HE, version 4.2.1.7-1.

It seems to work fine, but I would like more information on the guest agent.
For the HE, the guest agent seems to be OK; on this VM I have spotted that
the ovirt-guest-agent and qemu-guest-agent are installed.

I have 2 VMs, 1 Debian 9 and 1 RHEL 6.5. I've tried to install the same
services on each VM, but the result is the same: no info about IP, FQDN, or
apps installed for these VMs, and there is an orange ! for each VM on the
web UI (indicating that I need to install the latest guest agent).

I have tried different tests with spice-vdagent, ovirt-guest-agent and
qemu-guest-agent, but no way.

ovirt-guest-agent doesn't start on Debian 9 and RHEL 6.5:

MainThread::INFO::2018-03-11 22:46:02,984::ovirt-guest-agent::59::root::Starting oVirt guest agent
MainThread::ERROR::2018-03-11 22:46:02,986::ovirt-guest-agent::141::root::Unhandled exception in oVirt guest agent!
Traceback (most recent call last):
  File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 135, in <module>
    agent.run(daemon, pidfile)
  File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 65, in run
    self.agent = LinuxVdsAgent(config)
  File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 472, in __init__
    AgentLogicBase.__init__(self, config)
  File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 188, in __init__
    self.vio = VirtIoChannel(config.get("virtio", "device"))
  File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 153, in __init__
    self._stream = VirtIoStream(vport_name)
  File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 134, in __init__
    self._vport = os.open(vport_name, os.O_RDWR)
OSError: [Errno 2] No such file or directory: '/dev/virtio-ports/com.redhat.rhevm.vdsm'

Can I have help with this problem?

Thanks.

Nicolas VAYE
DSI - Nouméa
NEW CALEDONIA

From nsoffer at redhat.com  Sun Mar 11 23:07:46 2018
From: nsoffer at redhat.com (Nir Soffer)
Date: Sun, 11 Mar 2018 23:07:46 +0000
Subject: [ovirt-users] VM guest agent
In-Reply-To: <1520808989.18402.58.camel@province-sud.nc>
References: <1520808989.18402.58.camel@province-sud.nc>
Message-ID:

On Mon, Mar 12, 2018 at 12:57 AM Nicolas Vaye wrote:
> Hello,
>
> i have installed one oVirt platform with 2 node and 1 HE version 4.2.1.7-1
> ...
> OSError: [Errno 2] No such file or directory: '/dev/virtio-ports/com.redhat.rhevm.vdsm'
>
> Can i have help for this problem ?

I hope that Tomáš can help.

Nir
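The traceback fails opening /dev/virtio-ports/com.redhat.rhevm.vdsm, so
before reinstalling agents it is worth confirming, inside the guest,
whether the VirtIO serial channel exists at all. A small sketch; the
channel name is taken from the traceback above, the rest is generic sysfs:

  # List every virtio serial port the guest kernel sees:
  for p in /sys/class/virtio-ports/*/name; do
      printf '%s -> %s\n' "$p" "$(cat "$p")"
  done
  # The agent expects a udev-created node here:
  ls -l /dev/virtio-ports/

If com.redhat.rhevm.vdsm is absent from sysfs, the channel device is not
attached to the VM (check the VM's device configuration and reboot the
guest after changes); if it shows up in sysfs but not under
/dev/virtio-ports, the guest's udev rules are not creating the symlink,
which matches the udev issue described later in this thread.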
From andreil1 at starlett.lv  Sun Mar 11 23:45:06 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Mon, 12 Mar 2018 01:45:06 +0200
Subject: [ovirt-users] Auto-restart VM from Linux Shell
Message-ID: <661271b7-6e4d-d906-b1ef-fdcae9da1c13@starlett.lv>

Hi !

I have a stubborn VM which freezes from time to time, and the watchdog for
whatever reason doesn't restart it.

Basically I would like to combine these 3 commands into one script.

ovirt-shell -l https://node00.mydomain.com.lv/ovirt-engine/api --ca-file="/etc/pki/ovirt-engine/ca.pem" -u "admin at internal" --password "secret"

action vm MyVM stop
action vm MyVM start

Now I have problems.
1) Option --password "secret" is not recognized anymore in oVirt Shell 4.2.
2) What is the proper syntax to connect & run a certain command in oVirt
Shell 4.2? Something like:

ovirt-shell -l https://node00.mydomain.com.lv/ovirt-engine/api --ca-file="/etc/pki/ovirt-engine/ca.pem" -u "admin at internal" --password "secret" && action vm MyVM stop

Thanks in advance
Andrei

From recreationh at gmail.com  Mon Mar 12 02:32:28 2018
From: recreationh at gmail.com (Terry hey)
Date: Mon, 12 Mar 2018 10:32:28 +0800
Subject: [ovirt-users] Cannot use virt-viewer to open VM console
Message-ID:

Dear all,

I would like to ask which version of virt-viewer are you using?
I downloaded virt-viewer 6.0.msi and installed it.
But I could not open the VM console (I have set the graphics protocol to
SPICE). It shows the following error.

"At least Remote Viewer version 2.0-160 is required to setup this
connection, see
http://www.ovirt.org/documentation/admin-guide/virt/console-client-resources
for details"

Also, how can I verify the version of virt-viewer that I have installed?

Regards
Terry

From michael.seidel at helmholtz-muenchen.de  Mon Mar 12 08:45:51 2018
From: michael.seidel at helmholtz-muenchen.de (Michael Seidel)
Date: Mon, 12 Mar 2018 09:45:51 +0100
Subject: [ovirt-users] ovirt 4.2.1 pre hosted engine deploy failure
In-Reply-To:
References: <16384f09-1a4c-8582-f89d-13bd562d0656@helmholtz-muenchen.de>
Message-ID: <5f58d4c6-5781-e206-cc32-bb033bead04b@helmholtz-muenchen.de>

Hi,

thanks for your response. I checked the permissions and ownership of the
files, they seem okay to me.
However, the installation still fails: [root at plantfiler02 ovirt]# ll total 0 drwxr-xr-x 5 vdsm kvm 61 Mar 9 09:02 017fcf64-45c8-4289-87f7-c1195f7ec584 -rwxr-xr-x 1 vdsm kvm 0 Mar 9 22:07 __DIRECT_IO_TEST__ [root at plantfiler02 ovirt]# ll 017fcf64-45c8-4289-87f7-c1195f7ec584/ total 4 drwxr-xr-x 2 vdsm kvm 111 Mar 9 09:01 dom_md drwxr-xr-x 6 vdsm kvm 4096 Mar 9 09:04 images drwxr-xr-x 4 vdsm kvm 40 Mar 9 09:02 master [root at plantfiler02 ovirt]# ll 017fcf64-45c8-4289-87f7-c1195f7ec584/images/ total 16 drwxr-xr-x 2 vdsm kvm 4096 Mar 9 09:02 3780dd3c-c248-4bbd-ae5d-5977780853dc drwxr-xr-x 2 vdsm kvm 4096 Mar 9 09:04 66aadb48-31c5-49a6-a5ea-cc7a66b388eb drwxr-xr-x 2 vdsm kvm 4096 Mar 9 09:03 e158fb00-c885-4443-9288-184045d6ab1d drwxr-xr-x 2 vdsm kvm 4096 Mar 9 09:03 f36c079b-ba17-4c14-96a2-2b2beea7d989 [root at plantfiler02 ovirt]# ll 017fcf64-45c8-4289-87f7-c1195f7ec584/images/3780dd3c-c248-4bbd-ae5d-5977780853dc/ total 1049604 -rw-rw---- 1 vdsm kvm 1073741824 Mar 9 09:02 3909a6b6-2cf7-4bca-81e0-56959e8ec9b5 -rw-rw---- 1 vdsm kvm 1048576 Mar 9 09:02 3909a6b6-2cf7-4bca-81e0-56959e8ec9b5.lease -rw-r--r-- 1 vdsm kvm 320 Mar 9 09:02 3909a6b6-2cf7-4bca-81e0-56959e8ec9b5.meta Cheers, - Michael On 03/09/2018 07:09 PM, Simone Tiraboschi wrote: > > > On Fri, Mar 9, 2018 at 9:16 AM, Michael Seidel > > wrote: > > Hi, > > I found the messages at > http://lists.ovirt.org/pipermail/users/2018-January/086631.html > in > your > archive and am running into a similar/identical issue when trying to > install a hosted engine: > > After providing all of the information, the installer does create some > files on the nfs share (plantfiler02:/storage/vmx/ovirt) but eventually > dies with: > > [ INFO? ] TASK [Copy configuration files to the right location on host] > [ INFO? ] TASK [Copy configuration archive to storage] > [ ERROR ]? [WARNING]: Failure using method (v2_runner_on_failed) in > callback plugin > [ ERROR ] ( at 0x25c86d0>): > [ ERROR ] 'ascii' codec can't encode character u'\u2018' in position > 489: ordinal not in > [ ERROR ] range(128) > [ ERROR ] Failed to execute stage 'Closing up': Failed executing > ansible-playbook > [ INFO? 
] Stage: Clean up > > The relevant part in the logfile I believe is the following: > > 2018-03-09 09:05:05,762+0100 DEBUG > otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:94 {u'_ansible_parsed': True, > u'stderr_lines': [u'dd: failed to open > \u2018/rhev/data-center/mnt/plantfiler02:_storage_vmx_ovirt/017fcf64-45c8-4289-87f7-c1195f7ec584/images/f36c079b-ba17-4c14-96a2-2b2beea7d989/56cd3448-4ecf-490e-99cc-ace36b977a9a\u2019: > Permission denied'], u'cmd': [u'dd', u'bs=20480', u'count=1', > u'oflag=direct', > u'if=/var/tmp/localvmf0uaFh/56cd3448-4ecf-490e-99cc-ace36b977a9a', > u'of=/rhev/data-center/mnt/plantfiler02:_storage_vmx_ovirt/017fcf64-45c8-4289-87f7-c1195f7ec584/images/f36c079b-ba17-4c14-96a2-2b2beea7d989/56cd3448-4ecf-490e-99cc-ace36b977a9a'], > u'end': u'2018-03-09 09:05:05.565013', u'_ansible_no_log': False, > u'stdout': u'', u'changed': True, u'start': u'2018-03-09 > 09:05:05.557703', u'delta': u'0:00:00.007310', u'stderr': u'dd: failed > to open > \u2018/rhev/data-center/mnt/plantfiler02:_storage_vmx_ovirt/017fcf64-45c8-4289-87f7-c1195f7ec584/images/f36c079b-ba17-4c14-96a2-2b2beea7d989/56cd3448-4ecf-490e-99cc-ace36b977a9a\u2019: > Permission denied', u'rc': 1, u'invocation': {u'module_args': {u'warn': > True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'dd > bs=20480 count=1 oflag=direct > if="/var/tmp/localvmf0uaFh/56cd3448-4ecf-490e-99cc-ace36b977a9a" > of="/rhev/data-center/mnt/plantfiler02:_storage_vmx_ovirt/017fcf64-45c8-4289-87f7-c1195f7ec584/images/f36c079b-ba17-4c14-96a2-2b2beea7d989/56cd3448-4ecf-490e-99cc-ace36b977a9a"', > u'removes': None, u'creates': None, u'chdir': None, u'stdin': None}}, > u'stdout_lines': [], u'ms > > > I did create vdsm user and kvm user and group on the NFS server and I > succesfully ran the nfs-check.py script from the host where ovirt should > be installed: > > # python nfs-check.py plantfiler02:/storage/vmx/ovirt/ > Current hostname: hyena.******.de - IP addr 10.216.60.21 > Trying to /bin/mount -t nfs plantfiler02:/storage/vmx/ovirt/... > Executing NFS tests.. > Removing vdsmTest file.. > Status of tests [OK] > Disconnecting from NFS Server.. > Done! > > > The target directory has following permissions: > > drwxr-xr-x 3 vdsm? ? ? kvm? 86 Mar? 9 09:12 ovirt > > > I am aware of the issue > https://bugzilla.redhat.com/show_bug.cgi?id=1533500 > but the underlying > problem seems to be the error message issued by dd (as has been > mentioned the earlier posts). > > > It's will be solved in the next build but it's not the root cause of > your issue but just a consequence. > ? > > > > Am I missing the obvious somewhere regarding permissions? Is there a > known solution/workaround to this? > > > Can you please check the permissions and the ownership of the files > created at storage domain creation time? > ? > > > Best, > - Michael > Helmholtz Zentrum Muenchen > Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH) > Ingolstaedter Landstr. 1 > 85764 Neuherberg > www.helmholtz-muenchen.de > Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe > Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Heinrich Bassler, Dr. > Alfons Enhsen > Registergericht: Amtsgericht Muenchen HRB 6466 > USt-IdNr: DE 129521671 > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > Helmholtz Zentrum Muenchen Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH) Ingolstaedter Landstr. 
1 85764 Neuherberg www.helmholtz-muenchen.de Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Heinrich Bassler, Dr. Alfons Enhsen Registergericht: Amtsgericht Muenchen HRB 6466 USt-IdNr: DE 129521671 From didi at redhat.com Mon Mar 12 09:31:12 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Mon, 12 Mar 2018 11:31:12 +0200 Subject: [ovirt-users] Auto-restart VM from Linux Shell In-Reply-To: <661271b7-6e4d-d906-b1ef-fdcae9da1c13@starlett.lv> References: <661271b7-6e4d-d906-b1ef-fdcae9da1c13@starlett.lv> Message-ID: On Mon, Mar 12, 2018 at 1:45 AM, Andrei Verovski wrote: > Hi ! > > I have stubborn VM which time to time freezes, and watchdog for whatever > reason don't restart it. > > Basically I would like to combine these 3 command into one script. > > ovirt-shell -l https://node00.mydomain.com.lv/ovirt-engine/api > --ca-file="/etc/pki/ovirt-engine/ca.pem" -u "admin at internal" --password > "secret" > > action vm MyVM stop > action vm MyVM start > > Now I have problems. > 1) Option --password "secret" is not recognized anymore in oVirt Shell 4.2. > 2) What is the proper syntax to connect & run certain command in oVirt > Shell 4.2? Something like: > > ovirt-shell -l https://node00.mydomain.com.lv/ovirt-engine/api > --ca-file="/etc/pki/ovirt-engine/ca.pem" -u "admin at internal" --password > "secret" && action vm MyVM stop ovirt-shell is considered deprecated. Did you consider using ansible? Best regards, -- Didi From Oliver.Riesener at hs-bremen.de Mon Mar 12 09:48:48 2018 From: Oliver.Riesener at hs-bremen.de (Oliver Riesener) Date: Mon, 12 Mar 2018 10:48:48 +0100 Subject: [ovirt-users] VM guest agent In-Reply-To: <1520808989.18402.58.camel@province-sud.nc> References: <1520808989.18402.58.camel@province-sud.nc> Message-ID: <97548a92-ad64-7968-43b9-9167bc41e3a0@hs-bremen.de> Hi, on Debian stretch the problem is the old version of agent from stretch repository. I downloaded 1.0.13 from Debian testing repo as *.deb file. With these new versions of guest-agent then is also a udev rules issue. The serial channels have been renamed and the rules didn`t match for ovirt. See my install script, as attachement. Cheers. On 11.03.2018 23:56, Nicolas Vaye wrote: > Hello, > > i have installed one oVirt platform with 2 node and 1 HE version 4.2.1.7-1 > > It seem to work fine, but i would like more information on the guest agent. > For the HE, the guest agent seem to be OK, on this vm i 've spotted that the ovirt-guest-agent and qemu-guest-agent are installed. > > I have 2 VM, 1 debian 9 and 1 RHEL 6.5. I've tried to install the same service on each VM, but the result is the same : > no info about IP, fqdn, or app installed for these vm, and there is a orange ! for each vm on the web ui (indicate that i need to install latest guest agent) . > > I have tried different test with spice-vdagent, or ovirt-guest-agent or qemu-guest-agent but no way. 
> ovirt-guest-agent doesn't start on debian 9 and RHEL 6.5 :
> MainThread::INFO::2018-03-11 22:46:02,984::ovirt-guest-agent::59::root::Starting oVirt guest agent
> MainThread::ERROR::2018-03-11 22:46:02,986::ovirt-guest-agent::141::root::Unhandled exception in oVirt guest agent!
> Traceback (most recent call last):
> ...
> OSError: [Errno 2] No such file or directory: '/dev/virtio-ports/com.redhat.rhevm.vdsm'
>
> Can i have help for this problem ?
>
> Thanks.
>
> Nicolas VAYE
> DSI - Nouméa
> NEW CALEDONIA
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

--
Mit freundlichem Gruß

Oliver Riesener

--
Hochschule Bremen
Elektrotechnik und Informatik
Oliver Riesener
Neustadtswall 30
D-28199 Bremen
Tel: 0421 5905-2405, Fax: -2400
e-mail: oliver.riesener at hs-bremen.de

-------------- next part --------------
A non-text attachment was scrubbed...
Name: install-ovirt.sh
Type: application/x-shellscript
Size: 2658 bytes
Desc: not available

From didi at redhat.com  Mon Mar 12 09:52:12 2018
From: didi at redhat.com (Yedidyah Bar David)
Date: Mon, 12 Mar 2018 11:52:12 +0200
Subject: [ovirt-users] Cannot use virt-viewer to open VM console
In-Reply-To:
References:
Message-ID:

On Mon, Mar 12, 2018 at 4:32 AM, Terry hey wrote:
> Dear all,
>
> I would like to ask which version of virt-viewer are you using?
> I downloaded virt-viewer 6.0.msi and installed.
> ...
> Also, how can I verify the version of virt-viewer that I have installed?

Please see:

https://bugzilla.redhat.com/show_bug.cgi?id=1285883
http://lists.ovirt.org/pipermail/users/2017-June/thread.html#82343

Are you sure you use the 6.0 msi? I think it should work.

Best regards,
--
Didi

From andreil1 at starlett.lv  Mon Mar 12 10:04:45 2018
From: andreil1 at starlett.lv (andreil1 at starlett.lv)
Date: Mon, 12 Mar 2018 12:04:45 +0200
Subject: [ovirt-users] Auto-restart VM from Linux Shell
In-Reply-To:
References: <661271b7-6e4d-d906-b1ef-fdcae9da1c13@starlett.lv>
Message-ID:

You meant this:
https://github.com/machacekondra/ovirt-ansible-example/wiki
https://github.com/machacekondra/ovirt-ansible-example

Seems like overkill for so simple a task. If a bash script works, it's OK
for now.
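For the bash route, the v4 REST API can stand in for the deprecated
ovirt-shell directly. A rough, untested sketch using the URL, user and CA
file from the original mail; it assumes exactly one VM matches the name,
and the crude sed-based id extraction plus the fixed sleep are only to
keep the sketch short:

  #!/bin/bash
  API="https://node00.mydomain.com.lv/ovirt-engine/api"
  AUTH="admin@internal:secret"
  CA="/etc/pki/ovirt-engine/ca.pem"

  # Resolve the VM id from its name
  VMID=$(curl -s --cacert "$CA" -u "$AUTH" "$API/vms?search=name%3DMyVM" \
         | sed -n 's/.*<vm href="[^"]*" id="\([^"]*\)".*/\1/p' | head -1)

  # Stop, give the VM time to go down, then start again
  curl -s --cacert "$CA" -u "$AUTH" -H 'Content-Type: application/xml' \
       -d '<action/>' "$API/vms/$VMID/stop"
  sleep 60
  curl -s --cacert "$CA" -u "$AUTH" -H 'Content-Type: application/xml' \
       -d '<action/>' "$API/vms/$VMID/start"

A production version would poll the VM status between the two actions
instead of sleeping.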
> On 12 Mar 2018, at 11:31, Yedidyah Bar David wrote:
>
> On Mon, Mar 12, 2018 at 1:45 AM, Andrei Verovski wrote:
>> Hi !
>>
>> I have a stubborn VM which freezes from time to time, and the watchdog
>> for whatever reason doesn't restart it.
>>
>> Basically I would like to combine these 3 commands into one script.
>>
>> ovirt-shell -l https://node00.mydomain.com.lv/ovirt-engine/api
>> --ca-file="/etc/pki/ovirt-engine/ca.pem" -u "admin at internal" --password
>> "secret"
>>
>> action vm MyVM stop
>> action vm MyVM start
>>
>> Now I have problems.
>> 1) Option --password "secret" is not recognized anymore in oVirt Shell 4.2.
>> 2) What is the proper syntax to connect & run a certain command in oVirt
>> Shell 4.2? Something like:
>>
>> ovirt-shell -l https://node00.mydomain.com.lv/ovirt-engine/api
>> --ca-file="/etc/pki/ovirt-engine/ca.pem" -u "admin at internal" --password
>> "secret" && action vm MyVM stop
>
> ovirt-shell is considered deprecated. Did you consider using ansible?
>
> Best regards,
> --
> Didi

From amureini at redhat.com  Mon Mar 12 10:59:53 2018
From: amureini at redhat.com (Allon Mureinik)
Date: Mon, 12 Mar 2018 12:59:53 +0200
Subject: [ovirt-users] qemu-kvm-ev-2.9.0-16.el7_4.14.1 has been released
In-Reply-To:
References:
Message-ID:

From oVirt's perspective - this build includes a fix that allows for live
storage migration of a disk that uses iothreads.

I've already posted a vdsm patch to require it, reviews are welcome:
https://gerrit.ovirt.org/#/c/88770/

And thanks for the quick turnaround here, Sandro!

On Fri, Mar 9, 2018 at 11:02 AM, Sandro Bonazzola wrote:

> Hi, qemu-kvm-ev-2.9.0-16.el7_4.14.1 has been tagged for release and
> should land on mirrors.centos.org on Monday, March 12th 2018.
>
> Here's the ChangeLog:
>
> * Thu Mar 08 2018 Sandro Bonazzola - ev-2.9.0-16.el7_4.14.1
> - Removing RH branding from package name
> * Thu Jan 18 2018 Miroslav Rezanina - rhev-2.9.0-16.el7_4.14
> - kvm-fw_cfg-fix-memory-corruption-when-all-fw_cfg-slots-a.patch [bz#1534649]
> - kvm-mirror-Fix-inconsistent-backing-AioContext-for-after.patch [bz#1535125]
> - Resolves: bz#1534649 (Qemu crashes when all fw_cfg slots are used [rhel-7.4.z])
> - Resolves: bz#1535125 (Mirror jobs for drives with iothreads make QEMU to
>   abort with "block.c:1895: bdrv_attach_child: Assertion
>   `bdrv_get_aio_context(parent_bs) == bdrv_get_aio_context(child_bs)'
>   failed." [rhel-7.4.z])

Regards,

--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA
TRIED. TESTED. TRUSTED.

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

From andreil1 at starlett.lv  Mon Mar 12 11:40:00 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Mon, 12 Mar 2018 13:40:00 +0200
Subject: [ovirt-users] Weekly fstrim & Ubuntu 16.04 LTS guest freeze
Message-ID: <5360FB0D-9AC7-4212-96D7-9871DFCF805A@starlett.lv>

Hi !

I have a stubborn VM (Ubuntu 16.04 LTS) which randomly freezes about every
2 - 3 weeks at weekends (when load is close to zero).
No updates or kernel upgrades help.
The freeze is not detected by the oVirt watchdog and the VM is not
automatically restarted.

Since it happens only on weekends, I suspect some weekly cron job may
cause this.
Ubuntu 16.04 LTS is installed on a qcow2 disk image (thin provision).
Is it possible that fstrim (which discards / trims unused blocks) is a
source of this problem ?

Thanks in advance
Andrei
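Two things are cheap to check inside the guest before blaming fstrim:
whether the disk advertises discard support at all, and what the weekly
job actually runs. On Ubuntu 16.04 the trim job normally lives in
/etc/cron.weekly/fstrim; that path is the stock one and may differ on a
customized install:

  # All zeroes in DISC-GRAN/DISC-MAX mean the device accepts no discards,
  # so fstrim on it is a no-op rather than a freeze candidate:
  lsblk --discard
  # See what the weekly job executes, then run it by hand during a quiet
  # window and watch whether the guest stalls:
  cat /etc/cron.weekly/fstrim
  time fstrim -av

If running it manually reproduces the hang, the problem is the discard
path underneath the qcow2 image rather than a cron coincidence.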
From ykaul at redhat.com  Mon Mar 12 11:50:14 2018
From: ykaul at redhat.com (Yaniv Kaul)
Date: Mon, 12 Mar 2018 13:50:14 +0200
Subject: [ovirt-users] Weekly fstrim & Ubuntu 16.04 LTS guest freeze
In-Reply-To: <5360FB0D-9AC7-4212-96D7-9871DFCF805A@starlett.lv>
References: <5360FB0D-9AC7-4212-96D7-9871DFCF805A@starlett.lv>
Message-ID:

On Mon, Mar 12, 2018 at 1:40 PM, Andrei Verovski wrote:

> Hi !
>
> I have a stubborn VM (Ubuntu 16.04 LTS) which randomly freezes about every
> 2 - 3 weeks at weekends (when load is close to zero).
> ...
> Is it possible that fstrim (which discards / trims unused blocks) is a
> source of this problem ?

Are you using virtio-blk (which doesn't support it) or virtio-SCSI (which
does)?
Also, does the storage support trimming? Not all do.
Y.

From fab.soler at laposte.net  Mon Mar  5 20:29:03 2018
From: fab.soler at laposte.net (Fabrice SOLER)
Date: Mon, 5 Mar 2018 16:29:03 -0400
Subject: [ovirt-users] After the export, the import OVA failed
In-Reply-To: <744cc79b-4e81-f327-18e1-32533d7c7c21@ac-guadeloupe.fr>
References: <5064936d-798c-8c3a-853b-2bb0c19e8f3c@ac-guadeloupe.fr>
 <744cc79b-4e81-f327-18e1-32533d7c7c21@ac-guadeloupe.fr>
Message-ID: <4662a02c-2c01-8fc0-0501-00b153fefb74@laposte.net>

Hello,

I found this KB: https://bugzilla.redhat.com/show_bug.cgi?id=1529607
and put a description on the VM disk, and the OVA export works ! :-)

Now, the import does not work :-(
The error is:

Failed to load VM configuration from OVA file: /data/ova/amon

I have tried two ways. First, I left the ova file as it was. Secondly I
did: tar xvf file.ova and specified the directory where the ovf file is.

In the engine log, I have found this:

2018-03-05 16:15:58,319-04 INFO [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default task-6) [e2a953ca-1460-4899-a958-7dbe37e40a21] Ansible playbook command has exited with value: 2
2018-03-05 16:15:58,319-04 ERROR [org.ovirt.engine.core.bll.GetVmFromOvaQuery] (default task-6) [e2a953ca-1460-4899-a958-7dbe37e40a21] Failed to query OVA info
2018-03-05 16:15:58,319-04 ERROR [org.ovirt.engine.core.bll.GetVmFromOvaQuery] (default task-6) [e2a953ca-1460-4899-a958-7dbe37e40a21] Query 'GetVmFromOvaQuery' failed: EngineException: Failed to query OVA info (Failed with error GeneralException and code 100)

I have found this KB: https://bugzilla.redhat.com/show_bug.cgi?id=1529965
Is the only solution an update?

Sincerely
Fabrice

On 05/03/2018 at 13:13, Fabrice SOLER wrote:
> Hello,
>
> Thank you for your answer, I have put all permissions on the directory
> and I always get errors.
>
> Here are the errors in the logs on the engine:
>
> 2018-03-05 13:03:18,525-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@90e3c610, log id: 84f89c3
> 2018-03-05 13:03:18,529-04 ERROR [org.ovirt.engine.core.bll.CreateOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Command 'org.ovirt.engine.core.bll.CreateOvaCommand' failed: null
> 2018-03-05 13:03:18,529-04 ERROR [org.ovirt.engine.core.bll.CreateOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Exception: java.lang.NullPointerException
> ...
> 2018-03-05 13:03:18,533-04 ERROR [org.ovirt.engine.core.bll.exportimport.ExportOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Failed to create OVA file
> 2018-03-05 13:03:18,533-04 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Command 'ExportOva' id: 'c484392e-3540-4a11-97bf-3fecbc13e080' failed when attempting to perform the next operation, marking as FAILED '[d5d4381b-ec82-4927-91a4-74597cd2511d]'
> 2018-03-05 13:03:18,533-04 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [7dcef072] Command 'ExportOva' id: 'c484392e-3540-4a11-97bf-3fecbc13e080' child commands '[d5d4381b-ec82-4927-91a4-74597cd2511d]' executions were completed, status 'FAILED'
> 2018-03-05 13:03:19,542-04 ERROR [org.ovirt.engine.core.bll.exportimport.ExportOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [c99e94b0-a9dd-486f-9274-9aa17c9590a0] Ending command 'org.ovirt.engine.core.bll.exportimport.ExportOvaCommand' with failure.
> 2018-03-05 13:03:19,543-04 INFO [org.ovirt.engine.core.bll.exportimport.ExportOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [c99e94b0-a9dd-486f-9274-9aa17c9590a0] Lock freed to object 'EngineLock:{exclusiveLocks='[3ae307cb-53d6-4d70-87b6-4e073c6f5eb6=VM]', sharedLocks=''}'
> 2018-03-05 13:03:19,550-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-98) [c99e94b0-a9dd-486f-9274-9aa17c9590a0] EVENT_ID: IMPORTEXPORT_EXPORT_VM_TO_OVA_FAILED(1,225), Failed to export Vm pfSense as a Virtual Appliance to path /ova/pfSense.ova on Host eple-rectorat-proto
> ...
>
> Sincerely,
> Fabrice
>
> On 05/03/2018 at 12:50, Arik Hadas wrote:
>>
>> On Mon, Mar 5, 2018 at 6:48 PM, Arik Hadas wrote:
>>
>> On Mon, Mar 5, 2018 at 6:29 PM, Fabrice SOLER wrote:
>>
>> Hello,
>>
>> I need to export my VM to OVA format from the Ovirt administration
>> portal. It fails with this message:
>>
>> Failed to export Vm CentOS as a Virtual Appliance to path
>> /data/CentOS.ova on Host eple-rectorat-proto
>>
>> My storage is local (not NFS or iSCSI), are there any particular
>> permissions to set on the destination directory ?
>>
>> No, the script that packs the OVA is executed with root permissions.
>>
>> Is the path the path to an export domain ?
>>
>> Not necessarily.
>>
>> Oh, and please share the (engine, ansible) logs if you want more eyes
>> looking at that failure.
>>
>> Sincerely,
>> Fabrice SOLER

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

From fab.soler at laposte.net  Tue Mar  6 16:17:10 2018
From: fab.soler at laposte.net (Fabrice SOLER)
Date: Tue, 6 Mar 2018 12:17:10 -0400
Subject: [ovirt-users] After the export, the import OVA failed
In-Reply-To: <34f50392-8681-e105-e699-b8ebe08cf6ea@ac-guadeloupe.fr>
References: <5064936d-798c-8c3a-853b-2bb0c19e8f3c@ac-guadeloupe.fr>
 <744cc79b-4e81-f327-18e1-32533d7c7c21@ac-guadeloupe.fr>
 <34f50392-8681-e105-e699-b8ebe08cf6ea@ac-guadeloupe.fr>
Message-ID: <16b5cbcc-766f-aa9a-47e6-d787e10c812a@laposte.net>

Hello,

I noticed that the ovf format is not the same when I make the OVA export
with VMware and with oVirt.

Export OVA with VMware:

[root at eple-rectorat-proto AntiVirus]# file AntiVirus.ovf
AntiVirus.ovf: XML 1.0 document, ASCII text, with very long lines, with CRLF line terminators

Export OVA with oVirt:

[root at ovirt-eple amon]# file vm.ovf
vm.ovf: XML 1.0 document, ASCII text, with very long lines, with no line terminators

With oVirt there are no line terminators.

Is that normal ? Is that why the OVA import does not work ?

Sincerely,
Fabrice SOLER

On 06/03/2018 at 10:33, Fabrice SOLER wrote:
> Hello,
>
> I have upgraded the engine and the node, so the version is: 4.2.1.1.1-1.el7
> To import, I made a "tar xvf file.ova".
> Then from the portal, I import the VM.
>
> I saw that:
>
> [inline screenshot scrubbed from the archive]
>
> After that the amon VM was removed, as we can see in the events:
>
> [inline screenshot scrubbed from the archive]
>
> It seems it does not work. Maybe the VM is hidden somewhere ?
>
> Sincerely,
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
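A point worth correcting in this exchange: XML does not require line
terminators, so `file` reporting "no line terminators" is cosmetic and not
by itself a reason for the import to fail. To compare the two descriptors
properly, pretty-print them instead; xmllint ships with libxml2, and the
file names are the ones mentioned in the thread:

  # List the OVA members without unpacking everything:
  tar tvf file.ova
  # Extract just the descriptor to stdout and pretty-print it:
  tar xOf file.ova vm.ovf | xmllint --format - | less

Diffing the pretty-printed VMware and oVirt descriptors is a much more
reliable way to spot what the import code objects to.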
From fab.soler at laposte.net  Tue Mar  6 18:25:14 2018
From: fab.soler at laposte.net (Fabrice SOLER)
Date: Tue, 6 Mar 2018 14:25:14 -0400
Subject: [ovirt-users] After the export, the import OVA failed
In-Reply-To: <3cbdb40f-f3c6-2106-1c58-a68d984e9407@hs-bremen.de>
References: <5064936d-798c-8c3a-853b-2bb0c19e8f3c@ac-guadeloupe.fr>
 <744cc79b-4e81-f327-18e1-32533d7c7c21@ac-guadeloupe.fr>
 <34f50392-8681-e105-e699-b8ebe08cf6ea@ac-guadeloupe.fr>
 <3cbdb40f-f3c6-2106-1c58-a68d984e9407@hs-bremen.de>
Message-ID: <66cff3d5-a09f-6c3c-7ef7-1b00199dc069@laposte.net>

Hi,

I have deleted the VM amon and tried to import the OVA. It does not work.
I think there is a problem in the ovf file (XML format), as I posted in
the previous mail:

I noticed that the ovf format is not the same when I make the OVA export
with VMware and with oVirt.

Export OVA with VMware:

[root at eple-rectorat-proto AntiVirus]# file AntiVirus.ovf
AntiVirus.ovf: XML 1.0 document, ASCII text, with very long lines, with CRLF line terminators

Export OVA with oVirt:

[root at ovirt-eple amon]# file vm.ovf
vm.ovf: XML 1.0 document, ASCII text, with very long lines, with no line terminators

With oVirt there are no line terminators.

Is that normal ? Is that why the OVA import does not work ?

On 06/03/2018 at 12:11, Oliver Riesener wrote:
> Hi Fabrice,
> try to rename the already existing old VM to another name like amon-old.
> Then import the OVA machine again.
>
> On 06.03.2018 15:33, Fabrice SOLER wrote:
>> Hello,
>>
>> I have upgraded the engine and the node, so the version is: 4.2.1.1.1-1.el7
>> To import, I made a "tar xvf file.ova".
>> Then from the portal, I import the VM.
>> ...
>> It seems it does not work. Maybe the VM is hidden somewhere ?
>>
>> Sincerely,

From nesretep at chem.byu.edu  Tue Mar  6 16:42:31 2018
From: nesretep at chem.byu.edu (Kristian Petersen)
Date: Tue, 6 Mar 2018 09:42:31 -0700
Subject: [ovirt-users] Having trouble setting up Ovirt
Message-ID:

I am trying to setup Ovirt with a self hosted engine and NFS storage for
said engine.
The storage appears to mounting OK, but when it gets to the point that it is initializing the lockspace it fails spectacularly and shows a Python traceback which I cleaned up and have included below: Traceback (most recent call last): File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py", line 30, in ha_cli.reset_lockspace(force) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 270, in reset_lockspace stats = broker.get_stats_from_storage() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 135, in get_stats_from_storage result = self._proxy.get_stats() File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request verbose=self.__verbose File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request return self.single_request(host, handler, request_body, verbose) File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request self.send_content(h, request_body) File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content connection.endheaders(request_body) File "/usr/lib64/python2.7/httplib.py", line 1013, in endheaders self._send_output(message_body) File "/usr/lib64/python2.7/httplib.py", line 864, in _send_output self.send(msg) File "/usr/lib64/python2.7/httplib.py", line 826, in send self.connect() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 52, in connect self.sock.connect(base64.b16decode(self.host)) File "/usr/lib64/python2.7/socket.py", line 224, in meth return getattr(self._sock,name)(*args) socket.error: [Errno 2] No such file or directory The messier ansible output is below: [ INFO ] TASK [Initialize lockspace volume] [ ERROR ] fatal: [localhost]: FAILED! 
=> {"attempts": 5, "changed": true, "cmd": ["hosted-engine", "--reinitialize-lockspace", "--force"], " delta": "0:00:01.007879", "end": "2018-03-05 14:03:00.474295", "msg": "non-zero return code", "rc": 1, "start": "2018-03-05 14:02:59.466416" , "stderr": "Traceback (most recent call last):\n File \"/usr/lib64/python2.7/runpy.py\", line 162, in _run_module_as_main\n \"__main__\ ", fname, loader, pkg_name)\n File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/usr/li b/python2.7/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py\", line 30, in \n ha_cli.reset_lockspace(force)\n File \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py\", line 270, in reset_lockspace\n stats = broker.get_stat s_from_storage()\n File \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py\", line 135, in get_stats_from_storage\ n result = self._proxy.get_stats()\n File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1233, in __call__\n return self.__send(self.__n ame, args)\n File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1587, in __request\n verbose=self.__verbose\n File \"/usr/lib64/python2.7 /xmlrpclib.py\", line 1273, in request\n return self.single_request(host, handler, request_body, verbose)\n File \"/usr/lib64/python2.7/ xmlrpclib.py\", line 1301, in single_request\n self.send_content(h, request_body)\n File \"/usr/lib64/python2.7/xmlrpclib.py\", line 144 8, in send_content\n connection.endheaders(request_body)\n File \"/usr/lib64/python2.7/httplib.py\", line 1013, in endheaders\n self. _send_output(message_body)\n File \"/usr/lib64/python2.7/httplib.py\", line 864, in _send_output\n self.send(msg)\n File \"/usr/lib64/p ython2.7/httplib.py\", line 826, in send\n self.connect()\n File \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.p y\", line 52, in connect\n self.sock.connect(base64.b16decode(self.host))\n File \"/usr/lib64/python2.7/socket.py\", line 224, in meth\n return getattr(self._sock,name)(*args)\nsocket.error: [Errno 2] No such file or directory", "stderr_lines": ["Traceback (most recent cal l last):", " File \"/usr/lib64/python2.7/runpy.py\", line 162, in _run_module_as_main", " \"__main__\", fname, loader, pkg_name)", " Fi le \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code", " exec code in run_globals", " File \"/usr/lib/python2.7/site-packages/ovi rt_hosted_engine_setup/reinitialize_lockspace.py\", line 30, in ", " ha_cli.reset_lockspace(force)", " File \"/usr/lib/python2.7 /site-packages/ovirt_hosted_engine_ha/client/client.py\", line 270, in reset_lockspace", " stats = broker.get_stats_from_storage()", " F ile \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py\", line 135, in get_stats_from_storage", " result = self. 
_proxy.get_stats()", " File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1233, in __call__", " return self.__send(self.__name, args)", " File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1587, in __request", " verbose=self.__verbose", " File \"/usr/lib64/python2.7/xmlrpcli b.py\", line 1273, in request", " return self.single_request(host, handler, request_body, verbose)", " File \"/usr/lib64/python2.7/xmlrp clib.py\", line 1301, in single_request", " self.send_content(h, request_body)", " File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1448 , in send_content", " connection.endheaders(request_body)", " File \"/usr/lib64/python2.7/httplib.py\", line 1013, in endheaders", " self._send_output(message_body)", " File \"/usr/lib64/python2.7/httplib.py\", line 864, in _send_output", " self.send(msg)", " File \"/ usr/lib64/python2.7/httplib.py\", line 826, in send", " self.connect()", " File \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_h a/lib/unixrpc.py\", line 52, in connect", " self.sock.connect(base64.b16decode(self.host))", " File \"/usr/lib64/python2.7/socket.py\", line 224, in meth", " return getattr(self._sock,name)(*args)", "socket.error: [Errno 2] No such file or directory"], "stdout": "", "stdou t_lines": []} [ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook I am a bit lost on how to proceed as I haven't implemented a self-hosted engine before and I haven't found anything that helps online so far. Thanks in advance for any assistance. -- Kristian Petersen System Administrator Dept. of Chemistry and Biochemistry -------------- next part -------------- An HTML attachment was scrubbed... URL: From ksnull01 at gmail.com Mon Mar 12 11:37:51 2018 From: ksnull01 at gmail.com (KSNull Zero) Date: Mon, 12 Mar 2018 14:37:51 +0300 Subject: [ovirt-users] 4.2 upgrade question Message-ID: Hello! Currently we run 4.1.9 and try to upgrade to the latest 4.2 release. Our DB server is on separate machine and run PostgreSQL 9.2.23. During upgrade the following error occurs: [WARNING] This release requires PostgreSQL server 9.5.9 but the engine database is currently hosted on PostgreSQL server 9.2.23 [ ERROR ] Please upgrade the PostgreSQL instance that serves the engine database to 9.5.9 and retry. Ok, so we need to upgrade PostgreSQL. The question is - do we need to have exact 9.5.9 version of PostgreSQL ? Because if we upgrade PostgreSQL to the latest available 9.5.12 the same error occurs saying that client and server version mismatched and upgrade terminates. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From didi at redhat.com Mon Mar 12 12:33:47 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Mon, 12 Mar 2018 14:33:47 +0200 Subject: [ovirt-users] 4.2 upgrade question In-Reply-To: References: Message-ID: On Mon, Mar 12, 2018 at 1:37 PM, KSNull Zero wrote: > Hello! > Currently we run 4.1.9 and try to upgrade to the latest 4.2 release. > Our DB server is on separate machine and run PostgreSQL 9.2.23. > > During upgrade the following error occurs: > [WARNING] This release requires PostgreSQL server 9.5.9 but the engine > database is currently hosted on PostgreSQL server 9.2.23 > [ ERROR ] Please upgrade the PostgreSQL instance that serves the engine > database to 9.5.9 and retry. > > Ok, so we need to upgrade PostgreSQL. > The question is - do we need to have exact 9.5.9 version of PostgreSQL ? '9.5.9' is not hard-coded, but is the version shipped by SCL [1]. 
The CentOS 7 engine build pulls that in and uses it, for both client
(always) and server (if configured to). This is the only combination
that's tested and known to work.

To use this on your remote PG machine, add the SCL repos there and use
them. You will need to upgrade your database to the new version,
similarly to what engine-setup does if it's a local db. I do not think we
have docs for this, see e.g. [2].

If you want to use some other (non-SCL) build of PG also on the client, I
think it should not be too hard to make everything work, as this is what
we do in the fedora build, but I didn't try this myself, nor know about
anyone that did. It's probably enough to remove the file:

/etc/ovirt-engine-setup.env.d/10-setup-scl-postgres-95.env

If you go this way, note that you'll have to repeat removing it per each
upgrade. Alternatively, you can add your own file there, with a later
number, clearing the variables set in this file, e.g.:

# cat << __EOF__ > /etc/ovirt-engine-setup.env.d/99-unset-postgresql.env
unset RHPOSTGRESQL95BASE
unset RHPOSTGRESQL95DATA
unset sclenv
unset POSTGRESQLENV
__EOF__

And also install the postgresql client/libraries/etc matching what you
have on your server.

[1] https://www.softwarecollections.org/en/scls/rhscl/rh-postgresql95/
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1498351#c12

> Because if we upgrade PostgreSQL to the latest available 9.5.12 the same
> error occurs saying that client and server version mismatched and upgrade
> terminates.
> Thank you.

Best regards,
--
Didi

From didi at redhat.com  Mon Mar 12 12:39:33 2018
From: didi at redhat.com (Yedidyah Bar David)
Date: Mon, 12 Mar 2018 14:39:33 +0200
Subject: [ovirt-users] Having trouble setting up Ovirt
In-Reply-To:
References:
Message-ID:

On Tue, Mar 6, 2018 at 6:42 PM, Kristian Petersen wrote:
> I am trying to setup Ovirt with a self hosted engine and NFS storage for
> said engine.
The storage appears to mounting OK, but when it gets to the > point that it is initializing the lockspace it fails spectacularly and shows > a Python traceback which I cleaned up and have included below: > > Traceback (most recent call last): > File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main > "__main__", fname, loader, pkg_name) > File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code > exec code in run_globals > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py", > line 30, in > ha_cli.reset_lockspace(force) > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", > line 270, in reset_lockspace > stats = broker.get_stats_from_storage() > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", > line 135, in get_stats_from_storage > result = self._proxy.get_stats() > File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ > return self.__send(self.__name, args) > File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request > verbose=self.__verbose > File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request > return self.single_request(host, handler, request_body, verbose) > File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request > self.send_content(h, request_body) > File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content > connection.endheaders(request_body) > File "/usr/lib64/python2.7/httplib.py", line 1013, in endheaders > self._send_output(message_body) > File "/usr/lib64/python2.7/httplib.py", line 864, in _send_output > self.send(msg) > File "/usr/lib64/python2.7/httplib.py", line 826, in send > self.connect() > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", > line 52, in connect > self.sock.connect(base64.b16decode(self.host)) > File "/usr/lib64/python2.7/socket.py", line 224, in meth > return getattr(self._sock,name)(*args) > socket.error: [Errno 2] No such file or directory > > The messier ansible output is below: > [ INFO ] TASK [Initialize lockspace volume] > [ ERROR ] fatal: [localhost]: FAILED! 
=> {"attempts": 5, "changed": true, > "cmd": ["hosted-engine", "--reinitialize-lockspace", "--force"], " > delta": "0:00:01.007879", "end": "2018-03-05 14:03:00.474295", "msg": > "non-zero return code", "rc": 1, "start": "2018-03-05 14:02:59.466416" > , "stderr": "Traceback (most recent call last):\n File > \"/usr/lib64/python2.7/runpy.py\", line 162, in _run_module_as_main\n > \"__main__\ > ", fname, loader, pkg_name)\n File \"/usr/lib64/python2.7/runpy.py\", line > 72, in _run_code\n exec code in run_globals\n File \"/usr/li > b/python2.7/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py\", > line 30, in \n ha_cli.reset_lockspace(force)\n > File > \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py\", > line 270, in reset_lockspace\n stats = broker.get_stat > s_from_storage()\n File > \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py\", > line 135, in get_stats_from_storage\ > n result = self._proxy.get_stats()\n File > \"/usr/lib64/python2.7/xmlrpclib.py\", line 1233, in __call__\n return > self.__send(self.__n > ame, args)\n File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1587, in > __request\n verbose=self.__verbose\n File \"/usr/lib64/python2.7 > /xmlrpclib.py\", line 1273, in request\n return self.single_request(host, > handler, request_body, verbose)\n File \"/usr/lib64/python2.7/ > xmlrpclib.py\", line 1301, in single_request\n self.send_content(h, > request_body)\n File \"/usr/lib64/python2.7/xmlrpclib.py\", line 144 > 8, in send_content\n connection.endheaders(request_body)\n File > \"/usr/lib64/python2.7/httplib.py\", line 1013, in endheaders\n self. > _send_output(message_body)\n File \"/usr/lib64/python2.7/httplib.py\", line > 864, in _send_output\n self.send(msg)\n File \"/usr/lib64/p > ython2.7/httplib.py\", line 826, in send\n self.connect()\n File > \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.p > y\", line 52, in connect\n > self.sock.connect(base64.b16decode(self.host))\n File > \"/usr/lib64/python2.7/socket.py\", line 224, in meth\n > return getattr(self._sock,name)(*args)\nsocket.error: [Errno 2] No such > file or directory", "stderr_lines": ["Traceback (most recent cal > l last):", " File \"/usr/lib64/python2.7/runpy.py\", line 162, in > _run_module_as_main", " \"__main__\", fname, loader, pkg_name)", " Fi > le \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code", " exec code > in run_globals", " File \"/usr/lib/python2.7/site-packages/ovi > rt_hosted_engine_setup/reinitialize_lockspace.py\", line 30, in ", " > ha_cli.reset_lockspace(force)", " File \"/usr/lib/python2.7 > /site-packages/ovirt_hosted_engine_ha/client/client.py\", line 270, in > reset_lockspace", " stats = broker.get_stats_from_storage()", " F > ile > \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py\", > line 135, in get_stats_from_storage", " result = self. 
> _proxy.get_stats()", " File \"/usr/lib64/python2.7/xmlrpclib.py\", line > 1233, in __call__", " return self.__send(self.__name, args)", " > File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1587, in __request", " > verbose=self.__verbose", " File \"/usr/lib64/python2.7/xmlrpcli > b.py\", line 1273, in request", " return self.single_request(host, > handler, request_body, verbose)", " File \"/usr/lib64/python2.7/xmlrp > clib.py\", line 1301, in single_request", " self.send_content(h, > request_body)", " File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1448 > , in send_content", " connection.endheaders(request_body)", " File > \"/usr/lib64/python2.7/httplib.py\", line 1013, in endheaders", " > self._send_output(message_body)", " File > \"/usr/lib64/python2.7/httplib.py\", line 864, in _send_output", " > self.send(msg)", " File \"/ > usr/lib64/python2.7/httplib.py\", line 826, in send", " self.connect()", > " File \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_h > a/lib/unixrpc.py\", line 52, in connect", " > self.sock.connect(base64.b16decode(self.host))", " File > \"/usr/lib64/python2.7/socket.py\", > line 224, in meth", " return getattr(self._sock,name)(*args)", > "socket.error: [Errno 2] No such file or directory"], "stdout": "", "stdou > t_lines": []} > [ ERROR ] Failed to execute stage 'Closing up': Failed executing > ansible-playbook Adding Simone. Can you please check/share these logs: /var/log/ovirt-hosted-engine-setup/* /var/log/ovirt-hosted-engine-ha/* /var/log/vdsm/* > > > I am a bit lost on how to proceed as I haven't implemented a self-hosted > engine before and I haven't found anything that helps online so far. Thanks > in advance for any assistance. The new ansible-based setup was enabled by default only very recently, so there is not much experience with it, nor can you find much online. You can try the old behavior by running: hosted-engine --deploy --noansible However, if the failure is due to some misconfiguration on the storage, such as wrong permissions or whatever, this won't help much. Best regards, -- Didi From jm3185951 at gmail.com Mon Mar 12 12:42:59 2018 From: jm3185951 at gmail.com (Jonathan Mathews) Date: Mon, 12 Mar 2018 14:42:59 +0200 Subject: [ovirt-users] Failure to upgrade Cluster Compatibility Version In-Reply-To: References: Message-ID: Hi Everyone Is it possible to get some feedback on this? On Thu, Mar 8, 2018 at 10:55 AM, Jonathan Mathews wrote: > Hi , this has now become really urgent. > > Everything I try, I am unable to get the Cluster Compatibility Version to > change. > > The entire platform is running the latest 3.6 release. > > On Tue, Mar 6, 2018 at 4:20 PM, Jonathan Mathews > wrote: > >> Any chance of getting feedback on this? >> >> It is becoming urgent. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreil1 at starlett.lv Mon Mar 12 12:50:28 2018 From: andreil1 at starlett.lv (Andrei Verovski) Date: Mon, 12 Mar 2018 14:50:28 +0200 Subject: [ovirt-users] Weekly fstrim & Ubuntu 16.04 LTS guest freeze In-Reply-To: References: <5360FB0D-9AC7-4212-96D7-9871DFCF805A@starlett.lv> Message-ID: On 03/12/2018 01:50 PM, Yaniv Kaul wrote: > > > On Mon, Mar 12, 2018 at 1:40 PM, Andrei Verovski > wrote: > > Hi ! > > > I have stubborn VM (Ubuntu 16.04 LTS) which randomly freezes about > each 2 - 3 weeks at weekends (when load is close to zero). > No updates or kernel upgrades help. > Freeze is not detected by oVirt watchdog and VM is not > automatically restarted. 
From jm3185951 at gmail.com  Mon Mar 12 12:42:59 2018
From: jm3185951 at gmail.com (Jonathan Mathews)
Date: Mon, 12 Mar 2018 14:42:59 +0200
Subject: [ovirt-users] Failure to upgrade Cluster Compatibility Version
In-Reply-To: References: Message-ID:

Hi everyone

Is it possible to get some feedback on this?

On Thu, Mar 8, 2018 at 10:55 AM, Jonathan Mathews wrote:

> Hi, this has now become really urgent.
>
> Whatever I try, I am unable to get the Cluster Compatibility Version to
> change.
>
> The entire platform is running the latest 3.6 release.
>
> On Tue, Mar 6, 2018 at 4:20 PM, Jonathan Mathews wrote:
>
>> Any chance of getting feedback on this?
>>
>> It is becoming urgent.

From andreil1 at starlett.lv  Mon Mar 12 12:50:28 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Mon, 12 Mar 2018 14:50:28 +0200
Subject: [ovirt-users] Weekly fstrim & Ubuntu 16.04 LTS guest freeze
In-Reply-To: References: <5360FB0D-9AC7-4212-96D7-9871DFCF805A@starlett.lv> Message-ID:

On 03/12/2018 01:50 PM, Yaniv Kaul wrote:
> On Mon, Mar 12, 2018 at 1:40 PM, Andrei Verovski wrote:
>
>> Hi!
>>
>> I have a stubborn VM (Ubuntu 16.04 LTS) which randomly freezes about
>> every 2-3 weeks, always at weekends (when load is close to zero).
>> No updates or kernel upgrades help.
>> The freeze is not detected by the oVirt watchdog, so the VM is not
>> automatically restarted.
>>
>> Since it happens only on weekends, I suspect some weekly cron job may
>> cause this. Ubuntu 16.04 LTS is installed on a qcow2 disk image (thin
>> provision). Is it possible that fstrim (which discards/trims unused
>> blocks) is a source of this problem?
>
> Are you using virtio-blk (which doesn't support it) or virtio-SCSI
> (which does)?
> Also, does the storage support trimming? Not all do.
> Y.

I'm using virtio-SCSI; the Ubuntu thin-provisioned disk is formatted as
ext4 boot + ext4 root + swap, and the disk image sits on ext4 on RAID 1.

>> Thanks in advance
>> Andrei
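Since the weekly trim is the suspect here, it is worth lining the freeze times up against the trim schedule inside the guest. A quick check, as a sketch: the paths are the stock Ubuntu 16.04 ones (where the weekly trim is normally a cron script rather than a systemd timer), and the manual trim assumes the disk is exposed with discard support through virtio-SCSI:

    # Confirm how the weekly trim is scheduled and when it last ran:
    cat /etc/cron.weekly/fstrim
    systemctl list-timers | grep -i fstrim
    grep -i cron.weekly /var/log/syslog

    # Trigger a trim by hand, verbosely, while watching the VM from oVirt:
    sudo fstrim -v /

If a manual fstrim reproduces the hang, a reasonable workaround is to disable the weekly job and trim manually in supervised steps while the storage side (discard support on the storage domain) is investigated.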
From jm3185951 at gmail.com  Mon Mar 12 13:08:02 2018
From: jm3185951 at gmail.com (Jonathan Mathews)
Date: Mon, 12 Mar 2018 15:08:02 +0200
Subject: [ovirt-users] Failure to upgrade Cluster Compatibility Version
In-Reply-To: References: Message-ID:

Hi

I do apologise; somehow all these emails seem to be going directly to my
trash, so I thought there was no reply.

It appears that I need to shut down all VMs in that cluster in order to
change the Cluster Compatibility Version.

On Thu, Mar 8, 2018 at 11:48 AM, Yaniv Kaul wrote:

> On Thu, Mar 8, 2018 at 10:55 AM, Jonathan Mathews wrote:
>
>> Hi, this has now become really urgent.
>
> It's not clear to me why it's urgent.
> Please look at past replies and provide more information so we can
> assist you.
> Y.
>
>> Whatever I try, I am unable to get the Cluster Compatibility Version
>> to change.
>>
>> The entire platform is running the latest 3.6 release.
>>
>> On Tue, Mar 6, 2018 at 4:20 PM, Jonathan Mathews wrote:
>>
>>> Any chance of getting feedback on this?
>>>
>>> It is becoming urgent.

From fabrice.bacchella at orange.fr  Mon Mar 12 15:01:36 2018
From: fabrice.bacchella at orange.fr (Fabrice Bacchella)
Date: Mon, 12 Mar 2018 16:01:36 +0100
Subject: [ovirt-users] Problem with repo virtio-win-stable
Message-ID: <6958F64A-A073-47AA-98CC-9278DBD9DAF6@orange.fr>

I'm trying to set up a local cache of the virtio-win-stable repository,
using Sonatype's Nexus 3. But Nexus is a little picky about content types,
and this repository is not to its taste; it says:

2018-03-12 15:47:21,025+0100 WARN [qtp2016749412-36] *UNKNOWN org.sonatype.nexus.repository.view.handlers.ExceptionHandler - Invalid content: GET /repodata/5048716d95c37bb6e0df68263c13daea16145384c34bd950dba45bf69e39ea98-primary.xml.gz: org.sonatype.nexus.repository.InvalidContentException: Detected content type [application/xml, application/x-xml, text/xml], but expected [application/x-gzip, application/gzip, application/x-tgz, application/gzip-compressed, application/gzipped, application/x-gunzip, application/x-gzip-compressed, gzip/document]: repodata/5048716d95c37bb6e0df68263c13daea16145384c34bd950dba45bf69e39ea98-primary.xml.gz

And indeed:

$ curl -JORLv https://fedorapeople.org/groups/virt/virtio-win/repo/stable/repodata/5048716d95c37bb6e0df68263c13daea16145384c34bd950dba45bf69e39ea98-primary.xml.gz

returns:

< Content-Encoding: gzip
< Content-Type: text/plain; charset=UTF-8

The file content itself is fine:

$ zless 5048716d95c37bb6e0df68263c13daea16145384c34bd950dba45bf69e39ea98-primary.xml.gz

For sac-gdeploy, by comparison, I get:

$ curl -JORLv https://copr-be.cloud.fedoraproject.org/results/sac/gdeploy/epel-7-x86_64/repodata/0f79cb019e43ae53bda93bae802a611f1fb025859729da143c5459ca2b5590b6-primary.xml.gz
...
< Content-Type: application/x-gzip

This repository is the only broken one out of the 14 I have already set
up, from Elastic, Postgres and other oVirt repositories. With whom should
I get in touch to correct this problem?

From hariprasanth.l at msystechnologies.com  Mon Mar 12 15:40:45 2018
From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan)
Date: Mon, 12 Mar 2018 21:10:45 +0530
Subject: [ovirt-users] PostgreSQL tuning in oVirt
Message-ID:

Hi Team,

We increased the number of threads in the Apache web server and in JBoss,
and we are able to scale up to 500 concurrent requests in oVirt. But after
a long run we get the error below from PostgreSQL, while oVirt itself is
still running:

org.springframework.dao.DataAccessResourceFailureException: PreparedStatementCallback; SQL [select * from getvdcoptionbyname(?, ?)]; This connection has been closed.; nested exception is org.postgresql.util.PSQLException: This connection has been closed.

1) Could somebody explain this error?
2) What are the tuning parameters for PostgreSQL?

System configuration for reference: 16 GB RAM, 5 GB for oVirt; OS: CentOS

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 58
Model name:            Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
Stepping:              9
CPU MHz:               3369.191
BogoMIPS:              6200.27
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              6144K
NUMA node0 CPU(s):     0-3

Thanks,
Hari
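On the PostgreSQL question above: "This connection has been closed" is typically what the JDBC driver reports when the engine reuses a pooled connection that the server, or something between JBoss and the server, has already dropped. So the first thing to compare is the engine's pool size against the database's connection limit. A hedged starting point; the parameter and file names are the standard PostgreSQL/oVirt ones, but the values are illustrative, not tested recommendations for this host:

    # Engine side: the connection pool size the engine was configured with
    grep ENGINE_DB_MAX_CONNECTIONS /etc/ovirt-engine/engine.conf.d/10-setup-database.conf

    # PostgreSQL side: current limits
    sudo -u postgres psql -c 'SHOW max_connections;'
    sudo -u postgres psql -c 'SHOW shared_buffers;'

    # Illustrative postgresql.conf values for a 16 GB host
    # (restart PostgreSQL after editing):
    #   max_connections     = 500   # must exceed the sum of all client pools
    #   shared_buffers      = 2GB   # 10-25% of RAM is a common rule of thumb
    #   tcp_keepalives_idle = 60    # surface silently dropped connections sooner

If max_connections is already comfortably above the pool size, the next suspect is an idle timeout (firewall or keepalive) killing connections underneath the pool, which produces exactly this exception on the next reuse.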
From nesretep at chem.byu.edu  Mon Mar 12 16:00:17 2018
From: nesretep at chem.byu.edu (Kristian Petersen)
Date: Mon, 12 Mar 2018 10:00:17 -0600
Subject: [ovirt-users] hosted-engine deploy fails at "Wait for the engine to come up on the target VM" step
In-Reply-To: References: Message-ID:

I have v2.2.9 of ovirt-hosted-engine-setup currently installed. I'll try
out the other suggestion you made as well. Thanks for the help.

On Fri, Mar 9, 2018 at 4:26 PM, Simone Tiraboschi wrote:

> On Fri, Mar 9, 2018 at 8:33 PM, Kristian Petersen wrote:
>
>> I have attached the relevant log files as requested (vdsm.log.1).
>
> The real issue is here:
>
>   <model>BroadwellIBRS</model>
>   <on_poweroff>destroy</on_poweroff><on_reboot>destroy</on_reboot><on_crash>destroy</on_crash> (vm:2751)
>
> 2018-03-08 08:04:13,757-0700 ERROR (vm/9a1e133d) [virt.vm]
> (vmId='9a1e133d-13d8-4613-b1a5-fd3ca81ffcc3') The vm start process failed (vm:927)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, in _startUnderlyingVm
>     self._run()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2756, in _run
>     dom.createWithFlags(flags)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
>     ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
>     return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in createWithFlags
>     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
> libvirtError: internal error: Unknown CPU model BroadwellIBRS
>
> Indeed, it should be Broadwell-IBRS.
>
> Can you please report which rpm version of ovirt-hosted-engine-setup you
> used?
>
> You can fix it in this way: copy /var/run/ovirt-hosted-engine-ha/vm.conf
> somewhere, edit it, and update the cpuType field.
>
> Then start the engine VM with your custom vm.conf with something like:
>
>     hosted-engine --vm-start --vm-conf=/root/my_vm.conf
>
> Keep the engine up for at least one hour and it will generate the
> OVF_STORE disks with the right configuration for the hosted-engine VM.
>
> It failed only at the very end of the setup, so anything else should be
> fine.
>
>> On Fri, Mar 9, 2018 at 1:21 AM, Simone Tiraboschi wrote:
>>
>>> On Thu, Mar 8, 2018 at 7:28 PM, Kristian Petersen wrote:
>>>
>>>> I am trying to deploy oVirt with a self-hosted engine, and the setup
>>>> seems to go well until near the very end, when the status message says:
>>>>
>>>> [ INFO ] TASK [Wait for the engine to come up on the target VM]
>>>> [ ERROR ] fatal: [localhost]: FAILED!
=> {"attempts": 120, "changed": >>>> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:0 >>>> 0:00.216412", "end": "2018-03-07 16:02:02.677478", "rc": 0, "start": >>>> "2018-03-07 16:02:02.461066", "stderr": "", "stderr_lines": [], "stdout >>>> ": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, >>>> \"extra\": \"metadata_parse_version=1\\nmetadata_feature_version=1\\nti >>>> m >>>> estamp=4679955 (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400 >>>> \\nvm_conf_refresh_time=4679956 (Wed Mar 7 16:01:51 >>>> 2018)\\nconf_on_share >>>> d_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>>> \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1, \"engin >>>> e-status\": {\"reason\": \"vm not running on this host\", \"health\": >>>> \"bad\", \"vm\": \"down\", \"detail\": \"unknown\"}, \"score\": 3400, >>>> \"stopped\": false, \"maintenance\": false, \"crc32\": \"d3a67cf7\", >>>> \"local_conf_timestamp\": 4679956, \"host-ts\": 4679955}, \"global_main >>>> tenance\": false}", "stdout_lines": ["{\"1\": >>>> {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": >>>> \"metadata_parse_version=1\ >>>> \nmetadata_feature_version=1\\ntimestamp=4679955 (Wed Mar 7 16:01:50 >>>> 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=4679956 (Wed Mar >>>> 7 16:01:51 2018)\\nconf_on_shared_storage >>>> =True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>>> \"hostname\": \"rhv1.cpms. >>>> byu.edu\", \"host-id\": 1, \"engine-status\": {\"reason\": \"vm not >>>> running on this host\", \"health\": \"bad\", \"vm\": \"down\", \"detail\ >>>> ": \"unknown\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": >>>> false, \"crc32\": \"d3a67cf7\", \"local_conf_timestamp\": 4679956, \" >>>> host-ts\": 4679955}, \"global_maintenance\": false}"]} >>>> [ ERROR ] Failed to execute stage 'Closing up': Failed executing >>>> ansible-playbook >>>> >>>> Any ideas that might help? >>>> >>> >>> >>> Hi Kristian, >>> {\"reason\": \"vm not running on this host\" sonds really bad. >>> I means that ovirt-ha-agent (in charge of restarting the engine VM) >>> think that another host took over but at that stage you should have just >>> one host. >>> >>> Could you please attach /var/log/ovirt-hosted-engine-ha/agent.log and >>> /var/log/vdsm/vdsm.log for the relevant time frame? >>> >>> >>>> >>>> >>>> -- >>>> Kristian Petersen >>>> System Administrator >>>> Dept. of Chemistry and Biochemistry >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> >>> >> >> >> -- >> Kristian Petersen >> System Administrator >> Dept. of Chemistry and Biochemistry >> > > -- Kristian Petersen System Administrator BYU Dept. of Chemistry and Biochemistry -------------- next part -------------- An HTML attachment was scrubbed... URL: From nesretep at chem.byu.edu Mon Mar 12 16:19:05 2018 From: nesretep at chem.byu.edu (Kristian Petersen) Date: Mon, 12 Mar 2018 10:19:05 -0600 Subject: [ovirt-users] Having trouble setting up Ovirt In-Reply-To: References: Message-ID: The problem changed a little since I initially posted this to the list. It is getting past the lockspace task now. I posted again with the new situation but did it separate from this. Simone has responded to that one and if you want I can add you to that loop. Using the --noansible flag seems to cause it to fail even sooner. 
It fails almost right off the bat, which is interesting to say the least. On Mon, Mar 12, 2018 at 6:39 AM, Yedidyah Bar David wrote: > On Tue, Mar 6, 2018 at 6:42 PM, Kristian Petersen > wrote: > > I am trying to setup Ovirt with a self hosted engine and NFS storage for > > said engine. The storage appears to mounting OK, but when it gets to the > > point that it is initializing the lockspace it fails spectacularly and > shows > > a Python traceback which I cleaned up and have included below: > > > > Traceback (most recent call last): > > File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main > > "__main__", fname, loader, pkg_name) > > File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code > > exec code in run_globals > > File > > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ > setup/reinitialize_lockspace.py", > > line 30, in > > ha_cli.reset_lockspace(force) > > File > > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ > ha/client/client.py", > > line 270, in reset_lockspace > > stats = broker.get_stats_from_storage() > > File > > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ > ha/lib/brokerlink.py", > > line 135, in get_stats_from_storage > > result = self._proxy.get_stats() > > File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ > > return self.__send(self.__name, args) > > File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request > > verbose=self.__verbose > > File "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request > > return self.single_request(host, handler, request_body, verbose) > > File "/usr/lib64/python2.7/xmlrpclib.py", line 1301, in single_request > > self.send_content(h, request_body) > > File "/usr/lib64/python2.7/xmlrpclib.py", line 1448, in send_content > > connection.endheaders(request_body) > > File "/usr/lib64/python2.7/httplib.py", line 1013, in endheaders > > self._send_output(message_body) > > File "/usr/lib64/python2.7/httplib.py", line 864, in _send_output > > self.send(msg) > > File "/usr/lib64/python2.7/httplib.py", line 826, in send > > self.connect() > > File > > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ > ha/lib/unixrpc.py", > > line 52, in connect > > self.sock.connect(base64.b16decode(self.host)) > > File "/usr/lib64/python2.7/socket.py", line 224, in meth > > return getattr(self._sock,name)(*args) > > socket.error: [Errno 2] No such file or directory > > > > The messier ansible output is below: > > [ INFO ] TASK [Initialize lockspace volume] > > [ ERROR ] fatal: [localhost]: FAILED! 
=> {"attempts": 5, "changed": true, > > "cmd": ["hosted-engine", "--reinitialize-lockspace", "--force"], " > > delta": "0:00:01.007879", "end": "2018-03-05 14:03:00.474295", "msg": > > "non-zero return code", "rc": 1, "start": "2018-03-05 14:02:59.466416" > > , "stderr": "Traceback (most recent call last):\n File > > \"/usr/lib64/python2.7/runpy.py\", line 162, in _run_module_as_main\n > > \"__main__\ > > ", fname, loader, pkg_name)\n File \"/usr/lib64/python2.7/runpy.py\", > line > > 72, in _run_code\n exec code in run_globals\n File \"/usr/li > > b/python2.7/site-packages/ovirt_hosted_engine_setup/ > reinitialize_lockspace.py\", > > line 30, in \n ha_cli.reset_lockspace(force)\n > > File > > \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ > ha/client/client.py\", > > line 270, in reset_lockspace\n stats = broker.get_stat > > s_from_storage()\n File > > \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ > ha/lib/brokerlink.py\", > > line 135, in get_stats_from_storage\ > > n result = self._proxy.get_stats()\n File > > \"/usr/lib64/python2.7/xmlrpclib.py\", line 1233, in __call__\n > return > > self.__send(self.__n > > ame, args)\n File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1587, in > > __request\n verbose=self.__verbose\n File \"/usr/lib64/python2.7 > > /xmlrpclib.py\", line 1273, in request\n return > self.single_request(host, > > handler, request_body, verbose)\n File \"/usr/lib64/python2.7/ > > xmlrpclib.py\", line 1301, in single_request\n self.send_content(h, > > request_body)\n File \"/usr/lib64/python2.7/xmlrpclib.py\", line 144 > > 8, in send_content\n connection.endheaders(request_body)\n File > > \"/usr/lib64/python2.7/httplib.py\", line 1013, in endheaders\n self. > > _send_output(message_body)\n File \"/usr/lib64/python2.7/httplib.py\", > line > > 864, in _send_output\n self.send(msg)\n File \"/usr/lib64/p > > ython2.7/httplib.py\", line 826, in send\n self.connect()\n File > > \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.p > > y\", line 52, in connect\n > > self.sock.connect(base64.b16decode(self.host))\n File > > \"/usr/lib64/python2.7/socket.py\", line 224, in meth\n > > return getattr(self._sock,name)(*args)\nsocket.error: [Errno 2] No > such > > file or directory", "stderr_lines": ["Traceback (most recent cal > > l last):", " File \"/usr/lib64/python2.7/runpy.py\", line 162, in > > _run_module_as_main", " \"__main__\", fname, loader, pkg_name)", " Fi > > le \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code", " exec > code > > in run_globals", " File \"/usr/lib/python2.7/site-packages/ovi > > rt_hosted_engine_setup/reinitialize_lockspace.py\", line 30, in > ", " > > ha_cli.reset_lockspace(force)", " File \"/usr/lib/python2.7 > > /site-packages/ovirt_hosted_engine_ha/client/client.py\", line 270, in > > reset_lockspace", " stats = broker.get_stats_from_storage()", " F > > ile > > \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ > ha/lib/brokerlink.py\", > > line 135, in get_stats_from_storage", " result = self. 
> > _proxy.get_stats()", " File \"/usr/lib64/python2.7/xmlrpclib.py\", line > > 1233, in __call__", " return self.__send(self.__name, args)", " > > File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1587, in __request", " > > verbose=self.__verbose", " File \"/usr/lib64/python2.7/xmlrpcli > > b.py\", line 1273, in request", " return self.single_request(host, > > handler, request_body, verbose)", " File \"/usr/lib64/python2.7/xmlrp > > clib.py\", line 1301, in single_request", " self.send_content(h, > > request_body)", " File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1448 > > , in send_content", " connection.endheaders(request_body)", " File > > \"/usr/lib64/python2.7/httplib.py\", line 1013, in endheaders", " > > self._send_output(message_body)", " File > > \"/usr/lib64/python2.7/httplib.py\", line 864, in _send_output", " > > self.send(msg)", " File \"/ > > usr/lib64/python2.7/httplib.py\", line 826, in send", " > self.connect()", > > " File \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_h > > a/lib/unixrpc.py\", line 52, in connect", " > > self.sock.connect(base64.b16decode(self.host))", " File > > \"/usr/lib64/python2.7/socket.py\", > > line 224, in meth", " return getattr(self._sock,name)(*args)", > > "socket.error: [Errno 2] No such file or directory"], "stdout": "", > "stdou > > t_lines": []} > > [ ERROR ] Failed to execute stage 'Closing up': Failed executing > > ansible-playbook > > Adding Simone. > > Can you please check/share these logs: > > /var/log/ovirt-hosted-engine-setup/* > /var/log/ovirt-hosted-engine-ha/* > /var/log/vdsm/* > > > > > > > I am a bit lost on how to proceed as I haven't implemented a self-hosted > > engine before and I haven't found anything that helps online so far. > Thanks > > in advance for any assistance. > > The new ansible-based setup was enabled by default only very recently, so > there is not much experience with it, nor can you find much online. > > You can try the old behavior by running: > > hosted-engine --deploy --noansible > > However, if the failure is due to some misconfiguration on the storage, > such as wrong permissions or whatever, this won't help much. > > Best regards, > -- > Didi > -- Kristian Petersen System Administrator BYU Dept. of Chemistry and Biochemistry -------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Mon Mar 12 16:20:10 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Mon, 12 Mar 2018 17:20:10 +0100 Subject: [ovirt-users] hosted-engine deploy fails at "Wait for the engine to come up on the target VM" step In-Reply-To: References: Message-ID: On Mon, Mar 12, 2018 at 5:00 PM, Kristian Petersen wrote: > I have v2.2.9 of ovirt-hosted-engine-setup currently installed. > OK, makes sense: https://gerrit.ovirt.org/#/c/87060/ fixes it but it comes only with v2.2.10 > I'll try out the other suggestion you made also. Thanks for the help. > > On Fri, Mar 9, 2018 at 4:26 PM, Simone Tiraboschi > wrote: > >> >> >> On Fri, Mar 9, 2018 at 8:33 PM, Kristian Petersen >> wrote: >> >>> I have attached the relevant log files as requested.? >>> vdsm.log.1 >>> >>> ? 
>>> >> >> >> The real issue is here: >> >> >> BroadwellIBRS >> >> destroydestroy> reboot>destroy (vm:2751) >> 2018-03-08 08:04:13,757-0700 ERROR (vm/9a1e133d) [virt.vm] >> (vmId='9a1e133d-13d8-4613-b1a5-fd3ca81ffcc3') The vm start process >> failed (vm:927) >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, in >> _startUnderlyingVm >> self._run() >> File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2756, in >> _run >> dom.createWithFlags(flags) >> File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", >> line 130, in wrapper >> ret = f(*args, **kwargs) >> File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line >> 92, in wrapper >> return func(inst, *args, **kwargs) >> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in >> createWithFlags >> if ret == -1: raise libvirtError ('virDomainCreateWithFlags() >> failed', dom=self) >> libvirtError: internal error: Unknown CPU model BroadwellIBRS >> >> Indeed it should be Broadwell-IBRS >> >> Can you please report which rpm version of ovirt-hosted-engine-setup did >> you used? >> >> You can fix it in this way: >> copy /var/run/ovirt-hosted-engine-ha/vm.conf somewhere, edit it and >> update the cpuType field. >> >> Then start the engine VM with your custom vm.conf with something like: >> hosted-engine --vm-start --vm-conf=/root/my_vm.conf >> keep the engine up for at least one hour and it will generate the >> OVF_STORE disks with the right configuration for the hosted-engine VM. >> >> It failed really at the end of the setup so anything else should be fine. >> >> >> >>> >>> On Fri, Mar 9, 2018 at 1:21 AM, Simone Tiraboschi >>> wrote: >>> >>>> >>>> >>>> On Thu, Mar 8, 2018 at 7:28 PM, Kristian Petersen < >>>> nesretep at chem.byu.edu> wrote: >>>> >>>>> I am trying to deploy oVirt with a self-hosted engine and the setup >>>>> seems to go well until near the very end when the status message says: >>>>> [ INFO ] TASK [Wait for the engine to come up on the target VM] >>>>> >>>>> [ ERROR ] fatal: [localhost]: FAILED! 
=> {"attempts": 120, "changed": >>>>> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:0 >>>>> 0:00.216412", "end": "2018-03-07 16:02:02.677478", "rc": 0, "start": >>>>> "2018-03-07 16:02:02.461066", "stderr": "", "stderr_lines": [], "stdout >>>>> ": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, >>>>> \"extra\": \"metadata_parse_version=1\\nm >>>>> etadata_feature_version=1\\ntim >>>>> estamp=4679955 (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400 >>>>> \\nvm_conf_refresh_time=4679956 (Wed Mar 7 16:01:51 >>>>> 2018)\\nconf_on_share >>>>> d_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>>>> \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1, \"engin >>>>> e-status\": {\"reason\": \"vm not running on this host\", \"health\": >>>>> \"bad\", \"vm\": \"down\", \"detail\": \"unknown\"}, \"score\": 3400, >>>>> \"stopped\": false, \"maintenance\": false, \"crc32\": \"d3a67cf7\", >>>>> \"local_conf_timestamp\": 4679956, \"host-ts\": 4679955}, \"global_main >>>>> tenance\": false}", "stdout_lines": ["{\"1\": >>>>> {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": >>>>> \"metadata_parse_version=1\ >>>>> \nmetadata_feature_version=1\\ntimestamp=4679955 (Wed Mar 7 16:01:50 >>>>> 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=4679956 (Wed Mar >>>>> 7 16:01:51 2018)\\nconf_on_shared_storage >>>>> =True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>>>> \"hostname\": \"rhv1.cpms. >>>>> byu.edu\", \"host-id\": 1, \"engine-status\": {\"reason\": \"vm not >>>>> running on this host\", \"health\": \"bad\", \"vm\": \"down\", \"detail\ >>>>> ": \"unknown\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": >>>>> false, \"crc32\": \"d3a67cf7\", \"local_conf_timestamp\": 4679956, \" >>>>> host-ts\": 4679955}, \"global_maintenance\": false}"]} >>>>> [ ERROR ] Failed to execute stage 'Closing up': Failed executing >>>>> ansible-playbook >>>>> >>>>> Any ideas that might help? >>>>> >>>> >>>> >>>> Hi Kristian, >>>> {\"reason\": \"vm not running on this host\" sonds really bad. >>>> I means that ovirt-ha-agent (in charge of restarting the engine VM) >>>> think that another host took over but at that stage you should have just >>>> one host. >>>> >>>> Could you please attach /var/log/ovirt-hosted-engine-ha/agent.log and >>>> /var/log/vdsm/vdsm.log for the relevant time frame? >>>> >>>> >>>>> >>>>> >>>>> -- >>>>> Kristian Petersen >>>>> System Administrator >>>>> Dept. of Chemistry and Biochemistry >>>>> >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> >>>>> >>>> >>> >>> >>> -- >>> Kristian Petersen >>> System Administrator >>> Dept. of Chemistry and Biochemistry >>> >> >> > > > -- > Kristian Petersen > System Administrator > BYU Dept. of Chemistry and Biochemistry > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nesretep at chem.byu.edu Mon Mar 12 16:25:32 2018 From: nesretep at chem.byu.edu (Kristian Petersen) Date: Mon, 12 Mar 2018 10:25:32 -0600 Subject: [ovirt-users] hosted-engine deploy fails at "Wait for the engine to come up on the target VM" step In-Reply-To: References: Message-ID: I'm guessing that v2.2.10 is not in the oVirt repo yet. When I looked at vm.conf, the CPU name has a space in it like the one mentioned in the link you included. So replacing that space with an underscore should do the trick prehaps? 
On Mon, Mar 12, 2018 at 10:00 AM, Kristian Petersen wrote: > I have v2.2.9 of ovirt-hosted-engine-setup currently installed. I'll try > out the other suggestion you made also. Thanks for the help. > > On Fri, Mar 9, 2018 at 4:26 PM, Simone Tiraboschi > wrote: > >> >> >> On Fri, Mar 9, 2018 at 8:33 PM, Kristian Petersen >> wrote: >> >>> I have attached the relevant log files as requested.? >>> vdsm.log.1 >>> >>> ? >>> >> >> >> The real issue is here: >> >> >> BroadwellIBRS >> >> destroydestroy> reboot>destroy (vm:2751) >> 2018-03-08 08:04:13,757-0700 ERROR (vm/9a1e133d) [virt.vm] >> (vmId='9a1e133d-13d8-4613-b1a5-fd3ca81ffcc3') The vm start process >> failed (vm:927) >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, in >> _startUnderlyingVm >> self._run() >> File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2756, in >> _run >> dom.createWithFlags(flags) >> File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", >> line 130, in wrapper >> ret = f(*args, **kwargs) >> File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line >> 92, in wrapper >> return func(inst, *args, **kwargs) >> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in >> createWithFlags >> if ret == -1: raise libvirtError ('virDomainCreateWithFlags() >> failed', dom=self) >> libvirtError: internal error: Unknown CPU model BroadwellIBRS >> >> Indeed it should be Broadwell-IBRS >> >> Can you please report which rpm version of ovirt-hosted-engine-setup did >> you used? >> >> You can fix it in this way: >> copy /var/run/ovirt-hosted-engine-ha/vm.conf somewhere, edit it and >> update the cpuType field. >> >> Then start the engine VM with your custom vm.conf with something like: >> hosted-engine --vm-start --vm-conf=/root/my_vm.conf >> keep the engine up for at least one hour and it will generate the >> OVF_STORE disks with the right configuration for the hosted-engine VM. >> >> It failed really at the end of the setup so anything else should be fine. >> >> >> >>> >>> On Fri, Mar 9, 2018 at 1:21 AM, Simone Tiraboschi >>> wrote: >>> >>>> >>>> >>>> On Thu, Mar 8, 2018 at 7:28 PM, Kristian Petersen < >>>> nesretep at chem.byu.edu> wrote: >>>> >>>>> I am trying to deploy oVirt with a self-hosted engine and the setup >>>>> seems to go well until near the very end when the status message says: >>>>> [ INFO ] TASK [Wait for the engine to come up on the target VM] >>>>> >>>>> [ ERROR ] fatal: [localhost]: FAILED! 
=> {"attempts": 120, "changed": >>>>> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:0 >>>>> 0:00.216412", "end": "2018-03-07 16:02:02.677478", "rc": 0, "start": >>>>> "2018-03-07 16:02:02.461066", "stderr": "", "stderr_lines": [], "stdout >>>>> ": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, >>>>> \"extra\": \"metadata_parse_version=1\\nm >>>>> etadata_feature_version=1\\ntim >>>>> estamp=4679955 (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400 >>>>> \\nvm_conf_refresh_time=4679956 (Wed Mar 7 16:01:51 >>>>> 2018)\\nconf_on_share >>>>> d_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>>>> \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1, \"engin >>>>> e-status\": {\"reason\": \"vm not running on this host\", \"health\": >>>>> \"bad\", \"vm\": \"down\", \"detail\": \"unknown\"}, \"score\": 3400, >>>>> \"stopped\": false, \"maintenance\": false, \"crc32\": \"d3a67cf7\", >>>>> \"local_conf_timestamp\": 4679956, \"host-ts\": 4679955}, \"global_main >>>>> tenance\": false}", "stdout_lines": ["{\"1\": >>>>> {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": >>>>> \"metadata_parse_version=1\ >>>>> \nmetadata_feature_version=1\\ntimestamp=4679955 (Wed Mar 7 16:01:50 >>>>> 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=4679956 (Wed Mar >>>>> 7 16:01:51 2018)\\nconf_on_shared_storage >>>>> =True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>>>> \"hostname\": \"rhv1.cpms. >>>>> byu.edu\", \"host-id\": 1, \"engine-status\": {\"reason\": \"vm not >>>>> running on this host\", \"health\": \"bad\", \"vm\": \"down\", \"detail\ >>>>> ": \"unknown\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": >>>>> false, \"crc32\": \"d3a67cf7\", \"local_conf_timestamp\": 4679956, \" >>>>> host-ts\": 4679955}, \"global_maintenance\": false}"]} >>>>> [ ERROR ] Failed to execute stage 'Closing up': Failed executing >>>>> ansible-playbook >>>>> >>>>> Any ideas that might help? >>>>> >>>> >>>> >>>> Hi Kristian, >>>> {\"reason\": \"vm not running on this host\" sonds really bad. >>>> I means that ovirt-ha-agent (in charge of restarting the engine VM) >>>> think that another host took over but at that stage you should have just >>>> one host. >>>> >>>> Could you please attach /var/log/ovirt-hosted-engine-ha/agent.log and >>>> /var/log/vdsm/vdsm.log for the relevant time frame? >>>> >>>> >>>>> >>>>> >>>>> -- >>>>> Kristian Petersen >>>>> System Administrator >>>>> Dept. of Chemistry and Biochemistry >>>>> >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> >>>>> >>>> >>> >>> >>> -- >>> Kristian Petersen >>> System Administrator >>> Dept. of Chemistry and Biochemistry >>> >> >> > > > -- > Kristian Petersen > System Administrator > BYU Dept. of Chemistry and Biochemistry > -- Kristian Petersen System Administrator BYU Dept. of Chemistry and Biochemistry -------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Mon Mar 12 16:31:32 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Mon, 12 Mar 2018 17:31:32 +0100 Subject: [ovirt-users] hosted-engine deploy fails at "Wait for the engine to come up on the target VM" step In-Reply-To: References: Message-ID: On Mon, Mar 12, 2018 at 5:25 PM, Kristian Petersen wrote: > I'm guessing that v2.2.10 is not in the oVirt repo yet. 
When I looked at > vm.conf, the CPU name has a space in it like the one mentioned in the link > you included. So replacing that space with an underscore should do the > trick prehaps? > v2.2.12 is in -pre repo. You should replace the space with a dash: Broadwell-IBRS > > On Mon, Mar 12, 2018 at 10:00 AM, Kristian Petersen > wrote: > >> I have v2.2.9 of ovirt-hosted-engine-setup currently installed. I'll try >> out the other suggestion you made also. Thanks for the help. >> >> On Fri, Mar 9, 2018 at 4:26 PM, Simone Tiraboschi >> wrote: >> >>> >>> >>> On Fri, Mar 9, 2018 at 8:33 PM, Kristian Petersen >> > wrote: >>> >>>> I have attached the relevant log files as requested.? >>>> vdsm.log.1 >>>> >>>> ? >>>> >>> >>> >>> The real issue is here: >>> >>> >>> BroadwellIBRS >>> >>> destroydestroy>> oot>destroy (vm:2751) >>> 2018-03-08 08:04:13,757-0700 ERROR (vm/9a1e133d) [virt.vm] >>> (vmId='9a1e133d-13d8-4613-b1a5-fd3ca81ffcc3') The vm start process >>> failed (vm:927) >>> Traceback (most recent call last): >>> File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, in >>> _startUnderlyingVm >>> self._run() >>> File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2756, >>> in _run >>> dom.createWithFlags(flags) >>> File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", >>> line 130, in wrapper >>> ret = f(*args, **kwargs) >>> File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line >>> 92, in wrapper >>> return func(inst, *args, **kwargs) >>> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in >>> createWithFlags >>> if ret == -1: raise libvirtError ('virDomainCreateWithFlags() >>> failed', dom=self) >>> libvirtError: internal error: Unknown CPU model BroadwellIBRS >>> >>> Indeed it should be Broadwell-IBRS >>> >>> Can you please report which rpm version of ovirt-hosted-engine-setup did >>> you used? >>> >>> You can fix it in this way: >>> copy /var/run/ovirt-hosted-engine-ha/vm.conf somewhere, edit it and >>> update the cpuType field. >>> >>> Then start the engine VM with your custom vm.conf with something like: >>> hosted-engine --vm-start --vm-conf=/root/my_vm.conf >>> keep the engine up for at least one hour and it will generate the >>> OVF_STORE disks with the right configuration for the hosted-engine VM. >>> >>> It failed really at the end of the setup so anything else should be fine. >>> >>> >>> >>>> >>>> On Fri, Mar 9, 2018 at 1:21 AM, Simone Tiraboschi >>>> wrote: >>>> >>>>> >>>>> >>>>> On Thu, Mar 8, 2018 at 7:28 PM, Kristian Petersen < >>>>> nesretep at chem.byu.edu> wrote: >>>>> >>>>>> I am trying to deploy oVirt with a self-hosted engine and the setup >>>>>> seems to go well until near the very end when the status message says: >>>>>> [ INFO ] TASK [Wait for the engine to come up on the target VM] >>>>>> >>>>>> [ ERROR ] fatal: [localhost]: FAILED! 
=> {"attempts": 120, "changed": >>>>>> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:0 >>>>>> 0:00.216412", "end": "2018-03-07 16:02:02.677478", "rc": 0, "start": >>>>>> "2018-03-07 16:02:02.461066", "stderr": "", "stderr_lines": [], "stdout >>>>>> ": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, >>>>>> \"extra\": \"metadata_parse_version=1\\nm >>>>>> etadata_feature_version=1\\ntim >>>>>> estamp=4679955 (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400 >>>>>> \\nvm_conf_refresh_time=4679956 (Wed Mar 7 16:01:51 >>>>>> 2018)\\nconf_on_share >>>>>> d_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>>>>> \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1, \"engin >>>>>> e-status\": {\"reason\": \"vm not running on this host\", \"health\": >>>>>> \"bad\", \"vm\": \"down\", \"detail\": \"unknown\"}, \"score\": 3400, >>>>>> \"stopped\": false, \"maintenance\": false, \"crc32\": \"d3a67cf7\", >>>>>> \"local_conf_timestamp\": 4679956, \"host-ts\": 4679955}, \"global_main >>>>>> tenance\": false}", "stdout_lines": ["{\"1\": >>>>>> {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": >>>>>> \"metadata_parse_version=1\ >>>>>> \nmetadata_feature_version=1\\ntimestamp=4679955 (Wed Mar 7 >>>>>> 16:01:50 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=4679956 >>>>>> (Wed Mar >>>>>> 7 16:01:51 2018)\\nconf_on_shared_storage >>>>>> =True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>>>>> \"hostname\": \"rhv1.cpms. >>>>>> byu.edu\", \"host-id\": 1, \"engine-status\": {\"reason\": \"vm not >>>>>> running on this host\", \"health\": \"bad\", \"vm\": \"down\", \"detail\ >>>>>> ": \"unknown\"}, \"score\": 3400, \"stopped\": false, >>>>>> \"maintenance\": false, \"crc32\": \"d3a67cf7\", \"local_conf_timestamp\": >>>>>> 4679956, \" >>>>>> host-ts\": 4679955}, \"global_maintenance\": false}"]} >>>>>> [ ERROR ] Failed to execute stage 'Closing up': Failed executing >>>>>> ansible-playbook >>>>>> >>>>>> Any ideas that might help? >>>>>> >>>>> >>>>> >>>>> Hi Kristian, >>>>> {\"reason\": \"vm not running on this host\" sonds really bad. >>>>> I means that ovirt-ha-agent (in charge of restarting the engine VM) >>>>> think that another host took over but at that stage you should have just >>>>> one host. >>>>> >>>>> Could you please attach /var/log/ovirt-hosted-engine-ha/agent.log and >>>>> /var/log/vdsm/vdsm.log for the relevant time frame? >>>>> >>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Kristian Petersen >>>>>> System Administrator >>>>>> Dept. of Chemistry and Biochemistry >>>>>> >>>>>> _______________________________________________ >>>>>> Users mailing list >>>>>> Users at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>> >>>>>> >>>>> >>>> >>>> >>>> -- >>>> Kristian Petersen >>>> System Administrator >>>> Dept. of Chemistry and Biochemistry >>>> >>> >>> >> >> >> -- >> Kristian Petersen >> System Administrator >> BYU Dept. of Chemistry and Biochemistry >> > > > > -- > Kristian Petersen > System Administrator > BYU Dept. of Chemistry and Biochemistry > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From lorenzetto.luca at gmail.com  Mon Mar 12 16:34:41 2018
From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto)
Date: Mon, 12 Mar 2018 17:34:41 +0100
Subject: [ovirt-users] Auto-restart VM from Linux Shell
In-Reply-To: References: <661271b7-6e4d-d906-b1ef-fdcae9da1c13@starlett.lv> Message-ID:

Hello Andrei,

I'd simply do it this way with Ansible:

- name: Start oVirt VMs
  hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    # Run the restart inside a block so the SSO token is always revoked,
    # even if stopping or starting the VM fails.
    - block:
        - name: Obtain SSO token
          ovirt_auth:
            url: https://engine/ovirt-engine/api
            username: admin@internal
            password: password
            insecure: True
          ignore_errors: False

        - name: Stop VM
          ovirt_vms:
            state: stopped
            name: "server1"
            auth: "{{ ovirt_auth }}"

        - name: Start VM
          ovirt_vms:
            state: running
            name: "server1"
            auth: "{{ ovirt_auth }}"

      always:
        - name: Revoke the SSO token
          ovirt_auth:
            state: absent
            ovirt_auth: "{{ ovirt_auth }}"

On Mon, Mar 12, 2018 at 11:04 AM, wrote:
> You meant this:
> https://github.com/machacekondra/ovirt-ansible-example/wiki
> https://github.com/machacekondra/ovirt-ansible-example
>
> Seems like overkill for so simple a task. If a bash script works, it's OK
> for now.
>
>> On 12 Mar 2018, at 11:31, Yedidyah Bar David wrote:
>>
>> On Mon, Mar 12, 2018 at 1:45 AM, Andrei Verovski wrote:
>>> Hi!
>>>
>>> I have a stubborn VM which freezes from time to time, and the watchdog
>>> for whatever reason doesn't restart it.
>>>
>>> Basically I would like to combine these 3 commands into one script:
>>>
>>> ovirt-shell -l https://node00.mydomain.com.lv/ovirt-engine/api
>>> --ca-file="/etc/pki/ovirt-engine/ca.pem" -u "admin@internal" --password
>>> "secret"
>>>
>>> action vm MyVM stop
>>> action vm MyVM start
>>>
>>> Now I have problems.
>>> 1) The option --password "secret" is not recognized anymore in oVirt
>>> Shell 4.2.
>>> 2) What is the proper syntax to connect & run a certain command in
>>> oVirt Shell 4.2? Something like:
>>>
>>> ovirt-shell -l https://node00.mydomain.com.lv/ovirt-engine/api
>>> --ca-file="/etc/pki/ovirt-engine/ca.pem" -u "admin@internal" --password
>>> "secret" && action vm MyVM stop
>>
>> ovirt-shell is considered deprecated. Did you consider using ansible?
>>
>> Best regards,
>> --
>> Didi

--
"It is absurd to employ men of excellent intelligence to do calculations
that could be entrusted to anyone if machines were used."
Gottfried Wilhelm von Leibniz, philosopher and mathematician (1646-1716)

"The Internet is the biggest library in the world. But the problem is
that the books are all scattered on the floor."
John Allen Paulos, mathematician (1945-)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net

From nesretep at chem.byu.edu  Mon Mar 12 17:27:15 2018
From: nesretep at chem.byu.edu (Kristian Petersen)
Date: Mon, 12 Mar 2018 11:27:15 -0600
Subject: [ovirt-users] hosted-engine deploy fails at "Wait for the engine to come up on the target VM" step
In-Reply-To: References: Message-ID:

I tried using my customized vm.conf with the fix in the CPU name as you
suggested. When I ran

    hosted-engine --vm-start --vm-conf=/root/myvm.conf

it failed: it said the VM didn't exist. It sounds like I might need to
get the updated package from the ovirt-4.2-pre repo and try deploying
again.

On Mon, Mar 12, 2018 at 10:31 AM, Simone Tiraboschi wrote:

> On Mon, Mar 12, 2018 at 5:25 PM, Kristian Petersen wrote:
>
>> I'm guessing that v2.2.10 is not in the oVirt repo yet.
When I looked at >> vm.conf, the CPU name has a space in it like the one mentioned in the link >> you included. So replacing that space with an underscore should do the >> trick prehaps? >> > > v2.2.12 is in -pre repo. > > You should replace the space with a dash: Broadwell-IBRS > > >> >> On Mon, Mar 12, 2018 at 10:00 AM, Kristian Petersen < >> nesretep at chem.byu.edu> wrote: >> >>> I have v2.2.9 of ovirt-hosted-engine-setup currently installed. I'll >>> try out the other suggestion you made also. Thanks for the help. >>> >>> On Fri, Mar 9, 2018 at 4:26 PM, Simone Tiraboschi >>> wrote: >>> >>>> >>>> >>>> On Fri, Mar 9, 2018 at 8:33 PM, Kristian Petersen < >>>> nesretep at chem.byu.edu> wrote: >>>> >>>>> I have attached the relevant log files as requested.? >>>>> vdsm.log.1 >>>>> >>>>> ? >>>>> >>>> >>>> >>>> The real issue is here: >>>> >>>> >>>> BroadwellIBRS >>>> >>>> destroydestroy>>> oot>destroy (vm:2751) >>>> 2018-03-08 08:04:13,757-0700 ERROR (vm/9a1e133d) [virt.vm] >>>> (vmId='9a1e133d-13d8-4613-b1a5-fd3ca81ffcc3') The vm start process >>>> failed (vm:927) >>>> Traceback (most recent call last): >>>> File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, >>>> in _startUnderlyingVm >>>> self._run() >>>> File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2756, >>>> in _run >>>> dom.createWithFlags(flags) >>>> File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", >>>> line 130, in wrapper >>>> ret = f(*args, **kwargs) >>>> File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", >>>> line 92, in wrapper >>>> return func(inst, *args, **kwargs) >>>> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in >>>> createWithFlags >>>> if ret == -1: raise libvirtError ('virDomainCreateWithFlags() >>>> failed', dom=self) >>>> libvirtError: internal error: Unknown CPU model BroadwellIBRS >>>> >>>> Indeed it should be Broadwell-IBRS >>>> >>>> Can you please report which rpm version of ovirt-hosted-engine-setup >>>> did you used? >>>> >>>> You can fix it in this way: >>>> copy /var/run/ovirt-hosted-engine-ha/vm.conf somewhere, edit it and >>>> update the cpuType field. >>>> >>>> Then start the engine VM with your custom vm.conf with something like: >>>> hosted-engine --vm-start --vm-conf=/root/my_vm.conf >>>> keep the engine up for at least one hour and it will generate the >>>> OVF_STORE disks with the right configuration for the hosted-engine VM. >>>> >>>> It failed really at the end of the setup so anything else should be >>>> fine. >>>> >>>> >>>> >>>>> >>>>> On Fri, Mar 9, 2018 at 1:21 AM, Simone Tiraboschi >>>> > wrote: >>>>> >>>>>> >>>>>> >>>>>> On Thu, Mar 8, 2018 at 7:28 PM, Kristian Petersen < >>>>>> nesretep at chem.byu.edu> wrote: >>>>>> >>>>>>> I am trying to deploy oVirt with a self-hosted engine and the setup >>>>>>> seems to go well until near the very end when the status message says: >>>>>>> [ INFO ] TASK [Wait for the engine to come up on the target VM] >>>>>>> >>>>>>> [ ERROR ] fatal: [localhost]: FAILED! 
=> {"attempts": 120, >>>>>>> "changed": true, "cmd": ["hosted-engine", "--vm-status", "--json"], >>>>>>> "delta": "0:0 >>>>>>> 0:00.216412", "end": "2018-03-07 16:02:02.677478", "rc": 0, "start": >>>>>>> "2018-03-07 16:02:02.461066", "stderr": "", "stderr_lines": [], "stdout >>>>>>> ": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, >>>>>>> \"extra\": \"metadata_parse_version=1\\nm >>>>>>> etadata_feature_version=1\\ntim >>>>>>> estamp=4679955 (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400 >>>>>>> \\nvm_conf_refresh_time=4679956 (Wed Mar 7 16:01:51 >>>>>>> 2018)\\nconf_on_share >>>>>>> d_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>>>>>> \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1, \"engin >>>>>>> e-status\": {\"reason\": \"vm not running on this host\", >>>>>>> \"health\": \"bad\", \"vm\": \"down\", \"detail\": \"unknown\"}, \"score\": >>>>>>> 3400, >>>>>>> \"stopped\": false, \"maintenance\": false, \"crc32\": \"d3a67cf7\", >>>>>>> \"local_conf_timestamp\": 4679956, \"host-ts\": 4679955}, \"global_main >>>>>>> tenance\": false}", "stdout_lines": ["{\"1\": >>>>>>> {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": >>>>>>> \"metadata_parse_version=1\ >>>>>>> \nmetadata_feature_version=1\\ntimestamp=4679955 (Wed Mar 7 >>>>>>> 16:01:50 2018)\\nhost-id=1\\nscore=3400 >>>>>>> \\nvm_conf_refresh_time=4679956 (Wed Mar >>>>>>> 7 16:01:51 2018)\\nconf_on_shared_storage >>>>>>> =True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>>>>>> \"hostname\": \"rhv1.cpms. >>>>>>> byu.edu\", \"host-id\": 1, \"engine-status\": {\"reason\": \"vm not >>>>>>> running on this host\", \"health\": \"bad\", \"vm\": \"down\", \"detail\ >>>>>>> ": \"unknown\"}, \"score\": 3400, \"stopped\": false, >>>>>>> \"maintenance\": false, \"crc32\": \"d3a67cf7\", \"local_conf_timestamp\": >>>>>>> 4679956, \" >>>>>>> host-ts\": 4679955}, \"global_maintenance\": false}"]} >>>>>>> [ ERROR ] Failed to execute stage 'Closing up': Failed executing >>>>>>> ansible-playbook >>>>>>> >>>>>>> Any ideas that might help? >>>>>>> >>>>>> >>>>>> >>>>>> Hi Kristian, >>>>>> {\"reason\": \"vm not running on this host\" sonds really bad. >>>>>> I means that ovirt-ha-agent (in charge of restarting the engine VM) >>>>>> think that another host took over but at that stage you should have just >>>>>> one host. >>>>>> >>>>>> Could you please attach /var/log/ovirt-hosted-engine-ha/agent.log >>>>>> and /var/log/vdsm/vdsm.log for the relevant time frame? >>>>>> >>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Kristian Petersen >>>>>>> System Administrator >>>>>>> Dept. of Chemistry and Biochemistry >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Users mailing list >>>>>>> Users at ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>> >>>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Kristian Petersen >>>>> System Administrator >>>>> Dept. of Chemistry and Biochemistry >>>>> >>>> >>>> >>> >>> >>> -- >>> Kristian Petersen >>> System Administrator >>> BYU Dept. of Chemistry and Biochemistry >>> >> >> >> >> -- >> Kristian Petersen >> System Administrator >> BYU Dept. of Chemistry and Biochemistry >> > > -- Kristian Petersen System Administrator BYU Dept. of Chemistry and Biochemistry -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stirabos at redhat.com Mon Mar 12 17:36:17 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Mon, 12 Mar 2018 18:36:17 +0100 Subject: [ovirt-users] hosted-engine deploy fails at "Wait for the engine to come up on the target VM" step In-Reply-To: References: Message-ID: On Mon, Mar 12, 2018 at 6:27 PM, Kristian Petersen wrote: > I tried using my customized vm.conf with the fix in the CPU name as you > suggested. When I ran hosted-engine --vm-start --vm-conf=/root/myvm.conf > and that failed. > This is fine if the VM doesn't exist. Can you please share your vdsm.log? > It said the vm didn't exist. It sounds like I might need to get the > updated package from the ovirt-4.2-pre repo and try deploying again. > > On Mon, Mar 12, 2018 at 10:31 AM, Simone Tiraboschi > wrote: > >> >> >> On Mon, Mar 12, 2018 at 5:25 PM, Kristian Petersen > > wrote: >> >>> I'm guessing that v2.2.10 is not in the oVirt repo yet. When I looked >>> at vm.conf, the CPU name has a space in it like the one mentioned in the >>> link you included. So replacing that space with an underscore should do >>> the trick prehaps? >>> >> >> v2.2.12 is in -pre repo. >> >> You should replace the space with a dash: Broadwell-IBRS >> >> >>> >>> On Mon, Mar 12, 2018 at 10:00 AM, Kristian Petersen < >>> nesretep at chem.byu.edu> wrote: >>> >>>> I have v2.2.9 of ovirt-hosted-engine-setup currently installed. I'll >>>> try out the other suggestion you made also. Thanks for the help. >>>> >>>> On Fri, Mar 9, 2018 at 4:26 PM, Simone Tiraboschi >>>> wrote: >>>> >>>>> >>>>> >>>>> On Fri, Mar 9, 2018 at 8:33 PM, Kristian Petersen < >>>>> nesretep at chem.byu.edu> wrote: >>>>> >>>>>> I have attached the relevant log files as requested.? >>>>>> vdsm.log.1 >>>>>> >>>>>> ? >>>>>> >>>>> >>>>> >>>>> The real issue is here: >>>>> >>>>> >>>>> BroadwellIBRS >>>>> >>>>> destroydestroy>>>> oot>destroy (vm:2751) >>>>> 2018-03-08 08:04:13,757-0700 ERROR (vm/9a1e133d) [virt.vm] >>>>> (vmId='9a1e133d-13d8-4613-b1a5-fd3ca81ffcc3') The vm start process >>>>> failed (vm:927) >>>>> Traceback (most recent call last): >>>>> File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, >>>>> in _startUnderlyingVm >>>>> self._run() >>>>> File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2756, >>>>> in _run >>>>> dom.createWithFlags(flags) >>>>> File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", >>>>> line 130, in wrapper >>>>> ret = f(*args, **kwargs) >>>>> File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", >>>>> line 92, in wrapper >>>>> return func(inst, *args, **kwargs) >>>>> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in >>>>> createWithFlags >>>>> if ret == -1: raise libvirtError ('virDomainCreateWithFlags() >>>>> failed', dom=self) >>>>> libvirtError: internal error: Unknown CPU model BroadwellIBRS >>>>> >>>>> Indeed it should be Broadwell-IBRS >>>>> >>>>> Can you please report which rpm version of ovirt-hosted-engine-setup >>>>> did you used? >>>>> >>>>> You can fix it in this way: >>>>> copy /var/run/ovirt-hosted-engine-ha/vm.conf somewhere, edit it and >>>>> update the cpuType field. >>>>> >>>>> Then start the engine VM with your custom vm.conf with something like: >>>>> hosted-engine --vm-start --vm-conf=/root/my_vm.conf >>>>> keep the engine up for at least one hour and it will generate the >>>>> OVF_STORE disks with the right configuration for the hosted-engine VM. >>>>> >>>>> It failed really at the end of the setup so anything else should be >>>>> fine. 
>>>>> >>>>> >>>>> >>>>>> >>>>>> On Fri, Mar 9, 2018 at 1:21 AM, Simone Tiraboschi < >>>>>> stirabos at redhat.com> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Thu, Mar 8, 2018 at 7:28 PM, Kristian Petersen < >>>>>>> nesretep at chem.byu.edu> wrote: >>>>>>> >>>>>>>> I am trying to deploy oVirt with a self-hosted engine and the setup >>>>>>>> seems to go well until near the very end when the status message says: >>>>>>>> [ INFO ] TASK [Wait for the engine to come up on the target VM] >>>>>>>> >>>>>>>> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, >>>>>>>> "changed": true, "cmd": ["hosted-engine", "--vm-status", "--json"], >>>>>>>> "delta": "0:0 >>>>>>>> 0:00.216412", "end": "2018-03-07 16:02:02.677478", "rc": 0, >>>>>>>> "start": "2018-03-07 16:02:02.461066", "stderr": "", "stderr_lines": [], >>>>>>>> "stdout >>>>>>>> ": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, >>>>>>>> \"extra\": \"metadata_parse_version=1\\nm >>>>>>>> etadata_feature_version=1\\ntim >>>>>>>> estamp=4679955 (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400 >>>>>>>> \\nvm_conf_refresh_time=4679956 (Wed Mar 7 16:01:51 >>>>>>>> 2018)\\nconf_on_share >>>>>>>> d_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>>>>>>> \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1, \"engin >>>>>>>> e-status\": {\"reason\": \"vm not running on this host\", >>>>>>>> \"health\": \"bad\", \"vm\": \"down\", \"detail\": \"unknown\"}, \"score\": >>>>>>>> 3400, >>>>>>>> \"stopped\": false, \"maintenance\": false, \"crc32\": >>>>>>>> \"d3a67cf7\", \"local_conf_timestamp\": 4679956, \"host-ts\": 4679955}, >>>>>>>> \"global_main >>>>>>>> tenance\": false}", "stdout_lines": ["{\"1\": >>>>>>>> {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": >>>>>>>> \"metadata_parse_version=1\ >>>>>>>> \nmetadata_feature_version=1\\ntimestamp=4679955 (Wed Mar 7 >>>>>>>> 16:01:50 2018)\\nhost-id=1\\nscore=3400 >>>>>>>> \\nvm_conf_refresh_time=4679956 (Wed Mar >>>>>>>> 7 16:01:51 2018)\\nconf_on_shared_storage >>>>>>>> =True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>>>>>>> \"hostname\": \"rhv1.cpms. >>>>>>>> byu.edu\", \"host-id\": 1, \"engine-status\": {\"reason\": \"vm >>>>>>>> not running on this host\", \"health\": \"bad\", \"vm\": \"down\", \"detail\ >>>>>>>> ": \"unknown\"}, \"score\": 3400, \"stopped\": false, >>>>>>>> \"maintenance\": false, \"crc32\": \"d3a67cf7\", \"local_conf_timestamp\": >>>>>>>> 4679956, \" >>>>>>>> host-ts\": 4679955}, \"global_maintenance\": false}"]} >>>>>>>> [ ERROR ] Failed to execute stage 'Closing up': Failed executing >>>>>>>> ansible-playbook >>>>>>>> >>>>>>>> Any ideas that might help? >>>>>>>> >>>>>>> >>>>>>> >>>>>>> Hi Kristian, >>>>>>> {\"reason\": \"vm not running on this host\" sonds really bad. >>>>>>> I means that ovirt-ha-agent (in charge of restarting the engine VM) >>>>>>> think that another host took over but at that stage you should have just >>>>>>> one host. >>>>>>> >>>>>>> Could you please attach /var/log/ovirt-hosted-engine-ha/agent.log >>>>>>> and /var/log/vdsm/vdsm.log for the relevant time frame? >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Kristian Petersen >>>>>>>> System Administrator >>>>>>>> Dept. 
of Chemistry and Biochemistry >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> Users mailing list >>>>>>>> Users at ovirt.org >>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Kristian Petersen >>>>>> System Administrator >>>>>> Dept. of Chemistry and Biochemistry >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> Kristian Petersen >>>> System Administrator >>>> BYU Dept. of Chemistry and Biochemistry >>>> >>> >>> >>> >>> -- >>> Kristian Petersen >>> System Administrator >>> BYU Dept. of Chemistry and Biochemistry >>> >> >> > > > -- > Kristian Petersen > System Administrator > BYU Dept. of Chemistry and Biochemistry > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Mon Mar 12 17:55:57 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Mon, 12 Mar 2018 18:55:57 +0100 Subject: [ovirt-users] hosted-engine deploy fails at "Wait for the engine to come up on the target VM" step In-Reply-To: References: Message-ID: On Mon, Mar 12, 2018 at 6:44 PM, Kristian Petersen wrote: > I think I accidentally sent that reply before I was really finished with > it. The error said the the VM mentioned in the conf file didn't exist. I > included the log file as requested. > As far as I can see from the logs, the engine VM went up as expected now: 2018-03-12 11:15:00,183-0600 INFO (jsonrpc/5) [api.virt] FINISH getStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': [{'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': u'vnc', 'port': '5900'}], 'memUsage': '17', 'acpiEnable': 'true', 'guestFQDN': u' rhv-engine.cpms.byu.edu', 'vmId': 'cbe9b80f-9c18-409c-b7b1-54d95f4734ca', 'session': 'Unknown', 'displaySecurePort': '-1', 'timeOffset': '0', 'balloonInfo': {'balloon_max': '16777216', 'balloon_min': '0', 'balloon_target': '16777216', 'balloon_cur': '16777216'}, 'pauseCode': 'NOERR', 'disksUsage': [{u'path': u'/', u'total': '6565134336', u'fs': u'xfs', u'used': '1950617600'}, {u'path': u'/boot', u'total': '1063256064', u'fs': u'xfs', u'used': '170590208'}, {u'path': u'/home', u'total': '1063256064', u'fs': u'xfs', u'used': '33792000'}, {u'path': u'/var', u'total': '21464350720', u'fs': u'xfs', u'used': '396513280'}, {u'path': u'/var/log', u'total': '10726932480', u'fs': u'xfs', u'used': '42823680'}, {u'path': u'/tmp', u'total': '2136997888', u'fs': u'xfs', u'used': '34058240'}, {u'path': u'/var/log/audit', u'total': '1063256064', u'fs': u'xfs', u'used': '34586624'}], 'network': {'vnet0': {'macAddr': u'00:16:3e:54:f3:8e', 'rxDropped': '0', 'tx': '30742', 'rxErrors': '0', 'txDropped': '0', 'rx': '167904', 'txErrors': '0', 'state': 'unknown', 'sampleTime': 9385842.59, 'speed': '1000', 'name': 'vnet0'}}, 'vmJobs': {}, 'cpuUser': '7.43', 'elapsedTime': '81', 'memoryStats': {'swap_out': '0', 'majflt': '0', 'mem_cached': '452216', 'mem_free': '13404476', 'mem_buffers': '2104', 'swap_in': '0', 'pageflt': '418', 'mem_total': '16263704', 'mem_unused': '13404476'}, 'cpuSys': '1.67', 'appsList': (u'ovirt-guest-agent-common-1.0.14-1.el7', u'kernel-3.10.0-693.17.1.el7', u'cloud-init-0.7.9-9.el7.centos.2'), 'guestOs': u'3.10.0-693.17.1.el7.x86_64', 'vmName': 'HostedEngine', 'displayType': 'vnc', 'vcpuCount': '4', 'clientIp': '', 'hash': '-7630705381253994604', 'guestCPUCount': 4, 'vmType': 'kvm', 'displayIp': '0', 'cpuUsage': '9110000000', 'vcpuPeriod': 100000L, 'displayPort': '5900', 'guestTimezone': {u'zone': u'America/Denver', u'offset': -420}, 'vcpuQuota': 
'-1', 'statusTime': '9385842590', 'kvmEnable': 'true', 'disks': {'vda': {'readLatency': '387098', 'writtenBytes': '36851200', 'writeOps': '465', 'apparentsize': '125627793408', 'readOps': '15711', 'writeLatency': '1931806', 'imageID': u'ec964354-ac01-4799-9c20-4bc923d285d4', 'readBytes': '480176128', 'flushLatency': '237153', 'readRate': '545.769487017', 'truesize': '2503184384', 'writeRate': '76646.5023329'}, 'hdc': {'readLatency': '0', 'writtenBytes': '0', 'writeOps': '0', 'apparentsize': '0', 'readOps': '4', 'writeLatency': '0', 'readBytes': '152', 'flushLatency': '0', 'readRate': '0.0', 'truesize': '0', 'writeRate': '0.0'}}, 'monitorResponse': '0', 'guestOsInfo': {u'kernel': u'3.10.0-693.17.1.el7.x86_64', u'arch': u'x86_64', u'version': u'7.4.1708', u'distribution': u'CentOS Linux', u'type': u'linux', u'codename': u'Core'}, 'username': u'None', 'guestName': u'rhv-engine.cpms.byu.edu', 'status': 'Up', 'lastLogin': 1520874832.982759, 'guestIPs': u'192.168.1.22', 'guestContainers': [], 'netIfaces': [{u'inet6': [u'fe80::216:3eff:fe54:f38e'], u'hw': u'00:16:3e:54:f3:8e', u'inet': [u'192.168.1.22'], u'name': u'eth0'}]}]} from=::1,52334 (api:52) > > > On Mon, Mar 12, 2018 at 11:36 AM, Simone Tiraboschi > wrote: > >> >> >> On Mon, Mar 12, 2018 at 6:27 PM, Kristian Petersen > > wrote: >> >>> I tried using my customized vm.conf with the fix in the CPU name as you >>> suggested. When I ran hosted-engine --vm-start --vm-conf=/root/myvm.conf >>> and that failed. >>> >> >> This is fine if the VM doesn't exist. >> Can you please share your vdsm.log? >> >> >>> It said the vm didn't exist. It sounds like I might need to get the >>> updated package from the ovirt-4.2-pre repo and try deploying again. >>> >>> On Mon, Mar 12, 2018 at 10:31 AM, Simone Tiraboschi >> > wrote: >>> >>>> >>>> >>>> On Mon, Mar 12, 2018 at 5:25 PM, Kristian Petersen < >>>> nesretep at chem.byu.edu> wrote: >>>> >>>>> I'm guessing that v2.2.10 is not in the oVirt repo yet. When I looked >>>>> at vm.conf, the CPU name has a space in it like the one mentioned in the >>>>> link you included. So replacing that space with an underscore should do >>>>> the trick prehaps? >>>>> >>>> >>>> v2.2.12 is in -pre repo. >>>> >>>> You should replace the space with a dash: Broadwell-IBRS >>>> >>>> >>>>> >>>>> On Mon, Mar 12, 2018 at 10:00 AM, Kristian Petersen < >>>>> nesretep at chem.byu.edu> wrote: >>>>> >>>>>> I have v2.2.9 of ovirt-hosted-engine-setup currently installed. I'll >>>>>> try out the other suggestion you made also. Thanks for the help. >>>>>> >>>>>> On Fri, Mar 9, 2018 at 4:26 PM, Simone Tiraboschi < >>>>>> stirabos at redhat.com> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Fri, Mar 9, 2018 at 8:33 PM, Kristian Petersen < >>>>>>> nesretep at chem.byu.edu> wrote: >>>>>>> >>>>>>>> I have attached the relevant log files as requested.? >>>>>>>> vdsm.log.1 >>>>>>>> >>>>>>>> ? 
>>>>>>>> >>>>>>> >>>>>>> >>>>>>> The real issue is here: >>>>>>> >>>>>>> >>>>>>> BroadwellIBRS >>>>>>> >>>>>>> destroydestroy>>>>>> oot>destroy (vm:2751) >>>>>>> 2018-03-08 08:04:13,757-0700 ERROR (vm/9a1e133d) [virt.vm] >>>>>>> (vmId='9a1e133d-13d8-4613-b1a5-fd3ca81ffcc3') The vm start process >>>>>>> failed (vm:927) >>>>>>> Traceback (most recent call last): >>>>>>> File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line >>>>>>> 856, in _startUnderlyingVm >>>>>>> self._run() >>>>>>> File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line >>>>>>> 2756, in _run >>>>>>> dom.createWithFlags(flags) >>>>>>> File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", >>>>>>> line 130, in wrapper >>>>>>> ret = f(*args, **kwargs) >>>>>>> File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", >>>>>>> line 92, in wrapper >>>>>>> return func(inst, *args, **kwargs) >>>>>>> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, >>>>>>> in createWithFlags >>>>>>> if ret == -1: raise libvirtError ('virDomainCreateWithFlags() >>>>>>> failed', dom=self) >>>>>>> libvirtError: internal error: Unknown CPU model BroadwellIBRS >>>>>>> >>>>>>> Indeed it should be Broadwell-IBRS >>>>>>> >>>>>>> Can you please report which rpm version of ovirt-hosted-engine-setup >>>>>>> did you used? >>>>>>> >>>>>>> You can fix it in this way: >>>>>>> copy /var/run/ovirt-hosted-engine-ha/vm.conf somewhere, edit it and >>>>>>> update the cpuType field. >>>>>>> >>>>>>> Then start the engine VM with your custom vm.conf with something >>>>>>> like: >>>>>>> hosted-engine --vm-start --vm-conf=/root/my_vm.conf >>>>>>> keep the engine up for at least one hour and it will generate the >>>>>>> OVF_STORE disks with the right configuration for the hosted-engine VM. >>>>>>> >>>>>>> It failed really at the end of the setup so anything else should be >>>>>>> fine. >>>>>>> >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> On Fri, Mar 9, 2018 at 1:21 AM, Simone Tiraboschi < >>>>>>>> stirabos at redhat.com> wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On Thu, Mar 8, 2018 at 7:28 PM, Kristian Petersen < >>>>>>>>> nesretep at chem.byu.edu> wrote: >>>>>>>>> >>>>>>>>>> I am trying to deploy oVirt with a self-hosted engine and the >>>>>>>>>> setup seems to go well until near the very end when the status message says: >>>>>>>>>> [ INFO ] TASK [Wait for the engine to come up on the target VM] >>>>>>>>>> >>>>>>>>>> [ ERROR ] fatal: [localhost]: FAILED! 
=> {"attempts": 120, >>>>>>>>>> "changed": true, "cmd": ["hosted-engine", "--vm-status", "--json"], >>>>>>>>>> "delta": "0:0 >>>>>>>>>> 0:00.216412", "end": "2018-03-07 16:02:02.677478", "rc": 0, >>>>>>>>>> "start": "2018-03-07 16:02:02.461066", "stderr": "", "stderr_lines": [], >>>>>>>>>> "stdout >>>>>>>>>> ": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": >>>>>>>>>> true, \"extra\": \"metadata_parse_version=1\\nm >>>>>>>>>> etadata_feature_version=1\\ntim >>>>>>>>>> estamp=4679955 (Wed Mar 7 16:01:50 2018)\\nhost-id=1\\nscore=3400 >>>>>>>>>> \\nvm_conf_refresh_time=4679956 (Wed Mar 7 16:01:51 >>>>>>>>>> 2018)\\nconf_on_share >>>>>>>>>> d_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>>>>>>>>> \"hostname\": \"rhv1.cpms.byu.edu\", \"host-id\": 1, \"engin >>>>>>>>>> e-status\": {\"reason\": \"vm not running on this host\", >>>>>>>>>> \"health\": \"bad\", \"vm\": \"down\", \"detail\": \"unknown\"}, \"score\": >>>>>>>>>> 3400, >>>>>>>>>> \"stopped\": false, \"maintenance\": false, \"crc32\": >>>>>>>>>> \"d3a67cf7\", \"local_conf_timestamp\": 4679956, \"host-ts\": 4679955}, >>>>>>>>>> \"global_main >>>>>>>>>> tenance\": false}", "stdout_lines": ["{\"1\": >>>>>>>>>> {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": >>>>>>>>>> \"metadata_parse_version=1\ >>>>>>>>>> \nmetadata_feature_version=1\\ntimestamp=4679955 (Wed Mar 7 >>>>>>>>>> 16:01:50 2018)\\nhost-id=1\\nscore=3400 >>>>>>>>>> \\nvm_conf_refresh_time=4679956 (Wed Mar >>>>>>>>>> 7 16:01:51 2018)\\nconf_on_shared_storage >>>>>>>>>> =True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\", >>>>>>>>>> \"hostname\": \"rhv1.cpms. >>>>>>>>>> byu.edu\", \"host-id\": 1, \"engine-status\": {\"reason\": \"vm >>>>>>>>>> not running on this host\", \"health\": \"bad\", \"vm\": \"down\", \"detail\ >>>>>>>>>> ": \"unknown\"}, \"score\": 3400, \"stopped\": false, >>>>>>>>>> \"maintenance\": false, \"crc32\": \"d3a67cf7\", \"local_conf_timestamp\": >>>>>>>>>> 4679956, \" >>>>>>>>>> host-ts\": 4679955}, \"global_maintenance\": false}"]} >>>>>>>>>> [ ERROR ] Failed to execute stage 'Closing up': Failed executing >>>>>>>>>> ansible-playbook >>>>>>>>>> >>>>>>>>>> Any ideas that might help? >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> Hi Kristian, >>>>>>>>> {\"reason\": \"vm not running on this host\" sonds really bad. >>>>>>>>> I means that ovirt-ha-agent (in charge of restarting the engine >>>>>>>>> VM) think that another host took over but at that stage you should have >>>>>>>>> just one host. >>>>>>>>> >>>>>>>>> Could you please attach /var/log/ovirt-hosted-engine-ha/agent.log >>>>>>>>> and /var/log/vdsm/vdsm.log for the relevant time frame? >>>>>>>>> >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Kristian Petersen >>>>>>>>>> System Administrator >>>>>>>>>> Dept. of Chemistry and Biochemistry >>>>>>>>>> >>>>>>>>>> _______________________________________________ >>>>>>>>>> Users mailing list >>>>>>>>>> Users at ovirt.org >>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Kristian Petersen >>>>>>>> System Administrator >>>>>>>> Dept. of Chemistry and Biochemistry >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Kristian Petersen >>>>>> System Administrator >>>>>> BYU Dept. of Chemistry and Biochemistry >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> Kristian Petersen >>>>> System Administrator >>>>> BYU Dept. 
of Chemistry and Biochemistry >>>>> >>>> >>>> >>> >>> >>> -- >>> Kristian Petersen >>> System Administrator >>> BYU Dept. of Chemistry and Biochemistry >>> >> >> > > > -- > Kristian Petersen > System Administrator > BYU Dept. of Chemistry and Biochemistry > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vic.ad94 at gmail.com Mon Mar 12 18:33:02 2018 From: vic.ad94 at gmail.com (=?UTF-8?Q?Victor_Jos=C3=A9_Acosta_Dom=C3=ADnguez?=) Date: Mon, 12 Mar 2018 15:33:02 -0300 Subject: [ovirt-users] Ovirt VMS backup Message-ID: http://blog.infratic.com/blog/2017/07/07/create-ovirtrhevs-vm-backup/ Victor Acosta RHCE - RHCSA - RHCVA - VCA-DCV -------------- next part -------------- An HTML attachment was scrubbed... URL: From NasrumMinallah9 at hotmail.com Mon Mar 12 10:39:56 2018 From: NasrumMinallah9 at hotmail.com (Nasrum Minallah Manzoor) Date: Mon, 12 Mar 2018 10:39:56 +0000 Subject: [ovirt-users] Assistance needed... Message-ID: Hi, I need assistance regarding encircled in red in the attached! How can I remove the error "The latest guest agent needs to be installed and running on the guest". Else everything is working fine! Kindly response as soon as possible! Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Untitled_LI.jpg Type: image/jpeg Size: 425604 bytes Desc: Untitled_LI.jpg URL: From dyasny at gmail.com Mon Mar 12 20:13:39 2018 From: dyasny at gmail.com (Dan Yasny) Date: Mon, 12 Mar 2018 16:13:39 -0400 Subject: [ovirt-users] Assistance needed... In-Reply-To: References: Message-ID: Have you tried installing the guest agent? On Mon, Mar 12, 2018 at 6:39 AM, Nasrum Minallah Manzoor < NasrumMinallah9 at hotmail.com> wrote: > Hi, > > > > I need assistance regarding encircled in red in the attached! How can I > remove the error ?The latest guest agent needs to be installed and running > on the guest?. > > > > Else everything is working fine! > > > > > > Kindly response as soon as possible! > > > > > > > > Regards, > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giuseppe.ragusa at hotmail.com Tue Mar 13 00:10:03 2018 From: giuseppe.ragusa at hotmail.com (Giuseppe Ragusa) Date: Tue, 13 Mar 2018 01:10:03 +0100 Subject: [ovirt-users] Self Hosted Engine installation - does the OVEHOSTED_NETWORK/gateway parameter have an "overloaded" meaning? Message-ID: <1520899803.1522798.1300794152.45A72343@webmail.messagingengine.com> Hi all, I have a question about the best interpretation/choice for the installation parameter OVEHOSTED_NETWORK/gateway It is my understanding that the IP specified as OVEHOSTED_NETWORK/gateway will be used (by means of ping) to verify the ongoing network-wise status of oVirt cluster nodes, with any problems leading to classifications/actions which could even bring to fencing of the "faulty" node. If this is the case, I find it debatable that such a role should be referred to as "gateway", since (particularly in small setups) it should be delegated to an always reachable IP, not connected to mundane tasks such as routers/gateways: Internet (or wider network) reachability (think of an old, cheap router whose power supply starts to misbehave/fail...) 
should not determine the status of the local oVirt cluster, whose
nodes typically could be directly connected (especially with respect
to the management ovirtmgmt network) on the same network segment
without any need for routing.
I suggest that in such a small setup, the console IP of something like
the central (managed and stackable) switch could be used: if the
central switch (i.e. all the stacked parts of it) goes down, then
really there will be no communication between nodes anyway.
It is also my understanding that the above mentioned
OVEHOSTED_NETWORK/gateway parameter is automatically passed to
cloud-init to configure the actual default gateway of the Self Hosted
Engine appliance, without any means to override this choice with an
ad-hoc specialized parameter.
If this is the case, I think that, in light of the above mentioned
scenario, a specific override could be provided, without requiring the
admin to reconfigure the appliance after it is deployed (by the way:
the appliance, at least in version 4.1.9, does not contain the
NetworkManager-glib package, so Ansible playbooks trying to configure
the default gateway by means of the nmcli module always fail, and
without a working default gateway it is not so easy to add
packages... think chicken and egg... :-) ).
Any thoughts/suggestions?
Many thanks in advance.
Best regards,
Giuseppe

From ishaby at redhat.com  Tue Mar 13 05:25:07 2018
From: ishaby at redhat.com (Idan Shaby)
Date: Tue, 13 Mar 2018 07:25:07 +0200
Subject: [ovirt-users] storage domain ovirt-image-repository doesn't work
In-Reply-To: <1520807274.18402.56.camel@province-sud.nc>
References: <1520807274.18402.56.camel@province-sud.nc>
Message-ID: 

Hi Nicolas,
Let me make sure that I understand the issue here - you click on the
domain, and on the Images sub tab nothing is displayed?
Can you please clear your engine log, click on the ovirt-image-repository
domain and attach the log to the mail?
When I do it, I get the following audit log:

2018-03-13 07:19:25,983+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-86) [6af6ee81-ce9a-46b7-a371-c5c3b0c6bf2a] EVENT_ID: REFRESH_REPOSITORY_IMAGE_LIST_SUCCEEDED(998), Refresh image list succeeded for domain(s): ovirt-image-repository (All file type)

Maybe you get an error there that can help us understand the problem.


Regards,
Idan

On Mon, Mar 12, 2018 at 12:27 AM, Nicolas Vaye wrote:

> Hello,
>
> I have installed an oVirt platform with 2 nodes and 1 hosted engine,
> version 4.2.1.7-1.
>
> It seems to work fine, but I have an issue with the
> ovirt-image-repository: it is impossible to get the list of available
> images for this domain.
> [cid:1520807274.29800.1.camel at province-sud.nc]
>
> My cluster is on a private network, so there is a proxy to get internet
> access.
> I have tried with a specific proxy configuration on each node (
> https://www.server-world.info/en/note?os=CentOS_7&p=squid&f=2),
> and with that in place yum update, wget and curl against
> http://glance.ovirt.org:9292/ all succeed, but nothing shows up in the
> web UI for the ovirt-image-repository domain.
>
> I have tried another test with a transparent proxy and the result is
> the same: yum update, wget and curl succeed, but nothing in the web UI
> for the ovirt-image-repository domain.
>
> I don't know where the specific log for this part is.
>
> Can I have some help with this issue?
>
> Thanks.
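One more thing worth checking while you gather that log: the image list
for this domain is fetched by the engine itself, not by the hosts, so a
proxy configured only on the nodes will not help - the engine machine
must be able to reach http://glance.ovirt.org:9292/ too. As a rough,
untested sketch (the ENGINE_PROPERTIES mechanism for passing JVM system
properties exists, but please verify the proxy property names for your
version, and the proxy host/port below are placeholders):

# on the engine machine
cat > /etc/ovirt-engine/engine.conf.d/99-proxy.conf << 'EOF'
ENGINE_PROPERTIES="${ENGINE_PROPERTIES} http.proxyHost=proxy.example.com http.proxyPort=3128"
EOF
systemctl restart ovirt-engine

Then refresh the image list again and see whether the audit log entry
above changes.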
> > Nicolas VAYE > DSI - Noum?a > NEW CALEDONIA > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Tue Mar 13 07:17:43 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Tue, 13 Mar 2018 09:17:43 +0200 Subject: [ovirt-users] Ovirt VMS backup In-Reply-To: References: Message-ID: On Mon, Mar 12, 2018 at 8:33 PM, Victor Jos? Acosta Dom?nguez < vic.ad94 at gmail.com> wrote: > http://blog.infratic.com/blog/2017/07/07/create-ovirtrhevs-vm-backup/ > The code referred to in this blog, https://github.com/vacosta94/VirtBKP, has no license associated with it. Can you please add an open source license to it? Y. > Victor Acosta > > RHCE - RHCSA - RHCVA - VCA-DCV > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahadas at redhat.com Tue Mar 13 09:30:37 2018 From: ahadas at redhat.com (Arik Hadas) Date: Tue, 13 Mar 2018 11:30:37 +0200 Subject: [ovirt-users] Fwd: question about importing VMware-produced OVA In-Reply-To: References: Message-ID: Sharing the thread below with the list. ---------- Forwarded message ---------- From: Shao-Da Huang Date: Tue, Mar 13, 2018 at 11:23 AM Subject: Re: question about importing VMware-produced OVA To: Arik Hadas 2018-03-13 15:51 GMT+08:00 Arik Hadas : > > On Tue, Mar 13, 2018 at 6:10 AM, Shao-Da Huang > wrote: > >> Hi Arik, >> >> I'm Michael, an oVirt user from Taiwan, and I'm trying to import OVAs >> produced by VMware into my data center. >> I've tried the functionality in the 'Virtual Machine' tab -> Import -> >> Source 'VMware Virtual Appliance (OVA)', but I wanna know that is there a >> corresponding REST API so I can use it to automate this procedure? >> >> I found the >> POST /externalvmimports API, >> but it seems not to be used for OVA. >> > > Hi Michael, > > You can import an OVA via REST-API using that post command. Let's say that > you would like to import an OVA that is located in /home/user/vm.ova on > host 'myhost' then you should set the host to 'myhost', the URL to > 'ova:///home/user/vm.ova' and the provider to VMware. > > Please let me know if that did the trick. > Thank you very much! It works! > Btw, any reason not to send this to the users-list? > If no such REST API exists, how to use CLI commands combinations to achieve >> the same goal as in the engine UI? >> >> Could you give me some advices? >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From grafuls at redhat.com Tue Mar 13 10:01:33 2018 From: grafuls at redhat.com (Gonzalo Rafuls) Date: Tue, 13 Mar 2018 11:01:33 +0100 Subject: [ovirt-users] Assistance needed... In-Reply-To: References: Message-ID: Here [1] you can find some clarification as to what guest agent is plus instructions on how to install on different OSs. [1] https://www.ovirt.org/documentation/internal/guest- agent/understanding-guest-agents-and-other-tools/ Cheers, Gonza.- On Mon, Mar 12, 2018 at 9:13 PM, Dan Yasny wrote: > Have you tried installing the guest agent? > > On Mon, Mar 12, 2018 at 6:39 AM, Nasrum Minallah Manzoor < > NasrumMinallah9 at hotmail.com> wrote: > >> Hi, >> >> >> >> I need assistance regarding encircled in red in the attached! 
How can I >> remove the error ?The latest guest agent needs to be installed and running >> on the guest?. >> >> >> >> Else everything is working fine! >> >> >> >> >> >> Kindly response as soon as possible! >> >> >> >> >> >> >> >> Regards, >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From blanchet at abes.fr Tue Mar 13 11:57:23 2018 From: blanchet at abes.fr (=?UTF-8?Q?Nathana=c3=abl_Blanchet?=) Date: Tue, 13 Mar 2018 12:57:23 +0100 Subject: [ovirt-users] Ovirt VMS backup In-Reply-To: References: Message-ID: <64ce41ee-2ed5-ce90-fc23-29e85a22e257@abes.fr> sorry, but your link is broken Le 12/03/2018 ? 19:33, Victor Jos? Acosta Dom?nguez a ?crit?: > http://blog.infratic.com/blog/2017/07/07/create-ovirtrhevs-vm-backup/ > > Victor Acosta > > RHCE - RHCSA - RHCVA - VCA-DCV > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -- Nathana?l Blanchet Supervision r?seau P?le Infrastrutures Informatiques 227 avenue Professeur-Jean-Louis-Viala 34193 MONTPELLIER CEDEX 5 T?l. 33 (0)4 67 54 84 55 Fax 33 (0)4 67 54 84 14 blanchet at abes.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From hariprasanth.l at msystechnologies.com Tue Mar 13 12:22:27 2018 From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan) Date: Tue, 13 Mar 2018 17:52:27 +0530 Subject: [ovirt-users] Status in oVirt Message-ID: Hi Guys, What is the best way to get the oVirt status like 1) Apache web server is running, 2) Jboss server is running, 3) postGreSQL server is running Thanks, Hari -- DISCLAIMER The information in this e-mail is confidential and may be subject to legal privilege. It is intended solely for the addressee. Access to this e-mail by anyone else is unauthorized. If you have received this communication in error, please address with the subject heading "Received in error," send to it at msystechnologies.com, then delete the e-mail and destroy any copies of it. If you are not the intended recipient, any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it, is prohibited and may be unlawful. The views, opinions, conclusions and other information expressed in this electronic mail and any attachments are not given or endorsed by the company unless otherwise indicated by an authorized representative independent of this message. MSys cannot guarantee that e-mail communications are secure or error-free, as information could be intercepted, corrupted, amended, lost, destroyed, arrive late or incomplete, or contain viruses, though all reasonable precautions have been taken to ensure no viruses are present in this e-mail. As our company cannot accept responsibility for any loss or damage arising from the use of this e-mail or attachments we recommend that you subject these to your virus checking procedures prior to use -------------- next part -------------- An HTML attachment was scrubbed... URL: From NasrumMinallah9 at hotmail.com Mon Mar 12 18:00:08 2018 From: NasrumMinallah9 at hotmail.com (Nasrum Minallah Manzoor) Date: Mon, 12 Mar 2018 18:00:08 +0000 Subject: [ovirt-users] Assistance needed... 
Message-ID: 

Hi,
Can anyone assist me in getting a VNC native console through the oVirt
engine to my guest machine (Windows 7)?
Thanks.

From: Nasrum Minallah Manzoor
Sent: 12 March 2018 3:40 PM
To: users at ovirt.org
Cc: 'junaid8756 at gmail.com'
Subject: Assistance needed...

Hi,
I need assistance regarding encircled in red in the attached! How can I
remove the error "The latest guest agent needs to be installed and
running on the guest"?
Else everything is working fine!
Kindly respond as soon as possible!
Regards,

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ehaas at redhat.com  Tue Mar 13 13:27:24 2018
From: ehaas at redhat.com (Edward Haas)
Date: Tue, 13 Mar 2018 15:27:24 +0200
Subject: [ovirt-users] ovirt 4.2.2-rc3 switching from legacy to OVS..
In-Reply-To: <031b026d-66ec-14a0-9bec-7a4b0e717556@lbl.gov>
References: <031b026d-66ec-14a0-9bec-7a4b0e717556@lbl.gov>
Message-ID: 

OVS switch support is experimental at this stage, and in some cases
changing from one switch type to the other fails.
It was also not checked against a hosted engine setup, which handles
networking a bit differently for the management network (ovirtmgmt).
Nevertheless, we are interested in understanding all the problems that
exist today, so if you can, please share the supervdsm log; it has the
interesting networking traces.

We plan to block cluster switch editing until these problems are
resolved. It will only be possible to define a new cluster as OVS, not
to convert an existing one from Linux Bridge to OVS.

On Fri, Mar 9, 2018 at 9:54 AM, Thomas Davis wrote:

> I'm getting further along with 4.2.2rc3 than the 4.2.1 when it comes to
> hosted engine and vlans.. it actually does install
> under 4.2.2rc3.
>
> But it's a complete failure when I switch the cluster from Linux
> Bridge/Legacy to OVS. The first time I try, vdsm does
> not properly configure the node, it's all messed up.
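A note on where to look: the networking trace ends up in
/var/log/vdsm/supervdsm.log on the host, not only in vdsm.log. If the
file is large, something along these lines should pull out the relevant
part (adjust the pattern and line counts to taste):

grep -n setupNetworks /var/log/vdsm/supervdsm.log | tail
# then grab the context around the last attempt:
grep -B 2 -A 40 setupNetworks /var/log/vdsm/supervdsm.log | tail -n 100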
> > I'm getting this in vdsmd logs: > > 2018-03-08 23:12:46,610-0800 INFO (jsonrpc/7) [api.network] START > setupNetworks(networks={u'ovirtmgmt': {u'ipv6autoconf': True, u'nic': > u'eno1', u'vlan': u'50', u'ipaddr': u'192.168.85.49', u'switch': u'ovs', > u'mtu': 1500, u'netmask': u'255.255.252.0', u'dhcpv6': False, u'STP': > u'no', u'bridged': u'true', u'gateway': u'192.168.85.254', u'defaultRoute': > True}}, bondings={}, options={u'connectivityCheck': u'true', > u'connectivityTimeout': 120}) from=::ffff:192.168.85.24,56806, > flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:46) > > 2018-03-08 23:12:52,449-0800 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC > call Host.ping2 succeeded in 0.00 seconds (__init__:573) > > 2018-03-08 23:12:52,511-0800 INFO (jsonrpc/7) [api.network] FINISH > setupNetworks error=[Errno 19] ovirtmgmt is not present in the system > from=::ffff:192.168.85.24,56806, flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 > (api:50) > 2018-03-08 23:12:52,512-0800 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer] > Internal server error (__init__:611) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line > 606, in _handle_request > res = method(**params) > File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, > in _dynamicMethod > result = fn(*methodArgs) > File "", line 2, in setupNetworks > File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in > method > ret = func(*args, **kwargs) > File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1527, in > setupNetworks > supervdsm.getProxy().setupNetworks(networks, bondings, options) > File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line > 55, in __call__ > return callMethod() > File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line > 53, in > **kwargs) > File "", line 2, in setupNetworks > File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in > _callmethod > raise convert_to_error(kind, result) > IOError: [Errno 19] ovirtmgmt is not present in the system > 2018-03-08 23:12:52,512-0800 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC > call Host.setupNetworks failed (error -32603) in 5.90 seconds (__init__:573) > 2018-03-08 23:12:54,769-0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC > call Host.ping2 succeeded in 0.00 seconds (__init__:573) > 2018-03-08 23:12:54,772-0800 INFO (jsonrpc/5) [api.host] START > getCapabilities() from=::1,45562 (api:46) > 2018-03-08 23:12:54,906-0800 INFO (jsonrpc/5) [api.host] FINISH > getCapabilities error=[Errno 19] ovirtmgmt is not present in the system > from=::1,45562 (api:50) > 2018-03-08 23:12:54,906-0800 ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer] > Internal server error (__init__:611) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line > 606, in _handle_request > res = method(**params) > File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, > in _dynamicMethod > result = fn(*methodArgs) > File "", line 2, in getCapabilities > File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in > method > ret = func(*args, **kwargs) > File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1339, in > getCapabilities > c = caps.get() > File "/usr/lib/python2.7/site-packages/vdsm/host/caps.py", line 168, in > get > net_caps = supervdsm.getProxy().network_caps() > File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line > 55, in __call__ > return callMethod() > File 
"/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line > 53, in > **kwargs) > File "", line 2, in network_caps > File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in > _callmethod > raise convert_to_error(kind, result) > IOError: [Errno 19] ovirtmgmt is not present in the system > > So something is dreadfully wrong with the bridge to ovs conversion in > 4.2.2rc3. > > thomas > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtqn42 at gmail.com Tue Mar 13 14:15:17 2018 From: jtqn42 at gmail.com (John Nguyen) Date: Tue, 13 Mar 2018 10:15:17 -0400 Subject: [ovirt-users] VM stuck in Locked mode after failed migration Message-ID: Hi Guys, My environment is running Ovirt 3.6 and due to power fluctuation caused by recent weather. I have a handful of VMs stuck in a locked state as they failed to migrate between panicked host. I have purged zombie tasks and ran the unlocked utility as documented in https://www.ovirt.org/develop/developer-guide/db-issues/helperutilities/ unlock_entity.sh completed successfully, however the vms are stilled in a locked state. Unfortunately I'm unable to attach logs because of my companies security posture. Any help would be really appreciated Thanks, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From junaid8756 at gmail.com Tue Mar 13 14:19:01 2018 From: junaid8756 at gmail.com (Junaid Jadoon) Date: Tue, 13 Mar 2018 19:19:01 +0500 Subject: [ovirt-users] change CD not working Message-ID: hi, when i tried to change CD within a Windows VM and getting following error message. Ovirt engine and node version are 4.2. "Error while executing action Change CD: Failed to perform "Change CD" operation, CD might be still in use by the VM. Please try to manually detach the CD from withing the VM: 1. Log in to the VM 2 For Linux VMs, un-mount the CD using umount command; For Windows VMs, right click on the CD drive and click 'Eject';" Initially its working fine suddenly it giving above error. please help me out Regards, Junaid -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtqn42 at gmail.com Tue Mar 13 14:34:35 2018 From: jtqn42 at gmail.com (John Nguyen) Date: Tue, 13 Mar 2018 10:34:35 -0400 Subject: [ovirt-users] How to force remove template In-Reply-To: References: Message-ID: Hello Idan, Thank you for responding. I'm sorry the logs are unavailable to post due to security concerns from my management. To make it even more difficult there are several developers creating and removing VM's at any given time. The system is a 12 hosts Ovirt 3.6 cluster running Gluster for the original storage. There is new storage domain is will be hosted on an external storage (NFS from a Netapp array). I was brought in to upgrade/migrate to Ovirt 4.2 running entirely on NFS. I believe the template is in this odd state because of issues with the Gluster which dates back to the original setup. I've been able to move all the VM and template disks to the NFS domain except this one. Since template is not of value and can be removed. I was hoping there is way to delete the template from command line and be entirely off the Gluster domain before upgrading to 4.2 Thanks again, John On Sun, Mar 11, 2018 at 1:15 AM, Idan Shaby wrote: > Hi John, > > Indeed looks odd. 
> Can you attach engine and vdsm logs from when you tried to delete the > template? > Also, any idea how it got there? Remember anything special that you did > with the storage domain? > > > Regards, > Idan > > On Thu, Mar 8, 2018 at 10:27 PM, John Nguyen wrote: > >> Hi Guys, >> >> I apologize if you may have addressed this earlier. I have a template in >> an odd state. The template show it's disk on one storage domain, but in >> actuality the disk is on a different domain. I would like to delete this >> template since its not in uses and out of date. However when I try, I get >> the error "image does not exit in domain." >> >> Is there a way to force remove a template from the database? Any >> thoughts would be greatly appreciated. >> >> Thanks, >> John >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phudec at cnc.sk Tue Mar 13 14:33:49 2018 From: phudec at cnc.sk (Peter Hudec) Date: Tue, 13 Mar 2018 15:33:49 +0100 Subject: [ovirt-users] cocpit is not running on hosts Message-ID: Hi, after upgrade to 4.2. there was running the cockpit on each host. Right now, there is no service on port 9090. Is there any special setup how to put it back? [PROD] root at dipovirt03.cnc.sk: /home/phudec # rpm -qa | grep ovirt ovirt-imageio-common-1.2.1-0.el7.centos.noarch ovirt-vmconsole-1.0.4-1.el7.centos.noarch ovirt-provider-ovn-driver-1.2.5-1.el7.centos.noarch ovirt-setup-lib-1.1.4-1.el7.centos.noarch ovirt-host-4.2.1-1.el7.centos.x86_64 ovirt-host-deploy-1.7.2-1.el7.centos.noarch ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch ovirt-host-dependencies-4.2.1-1.el7.centos.x86_64 ovirt-hosted-engine-setup-2.2.9-1.el7.centos.noarch ovirt-vmconsole-host-1.0.4-1.el7.centos.noarch python-ovirt-engine-sdk4-4.2.4-2.el7.centos.x86_64 ovirt-imageio-daemon-1.2.1-0.el7.centos.noarch cockpit-ovirt-dashboard-0.11.11-0.1.el7.centos.noarch ovirt-hosted-engine-ha-2.2.4-1.el7.centos.noarch ovirt-engine-appliance-4.2-20180214.1.el7.centos.noarch ovirt-release42-4.2.1.1-1.el7.centos.noarch [PROD] root at dipovirt03.cnc.sk: /home/phudec # rpm -qa | grep cockpit cockpit-system-160-1.el7.centos.noarch cockpit-networkmanager-160-1.el7.centos.noarch cockpit-160-1.el7.centos.x86_64 cockpit-bridge-160-1.el7.centos.x86_64 cockpit-dashboard-160-1.el7.centos.x86_64 cockpit-storaged-160-1.el7.centos.noarch cockpit-ovirt-dashboard-0.11.11-0.1.el7.centos.noarch cockpit-ws-160-1.el7.centos.x86_64 regards Peter -- *Peter Hudec* Infra?trukt?rny architekt phudec at cnc.sk *CNC,?a.s.* Borsk? 6,?841 04 Bratislava Recepcia:?+421 2? 35 000 100 Mobil:+421?905 997 203 *www.cnc.sk* From phudec at cnc.sk Tue Mar 13 14:36:09 2018 From: phudec at cnc.sk (Peter Hudec) Date: Tue, 13 Mar 2018 15:36:09 +0100 Subject: [ovirt-users] Host has no default route. Message-ID: Hi, the hosted engine shows warning on each node about - Host has no default route. my routing on one of the node [PROD] root at dipovirt03.cnc.sk: /home/phudec # ip r s default via 192.168.16.1 dev ovirtmgmt 192.168.16.0/24 dev ovirtmgmt proto kernel scope link src 192.168.16.23 192.168.85.0/24 dev storage proto kernel scope link src 192.168.85.23 regards Peter -- *Peter Hudec* Infra?trukt?rny architekt phudec at cnc.sk *CNC,?a.s.* Borsk? 6,?841 04 Bratislava Recepcia:?+421 2? 
35 000 100 Mobil:+421?905 997 203 *www.cnc.sk* From ahadas at redhat.com Tue Mar 13 14:39:40 2018 From: ahadas at redhat.com (Arik Hadas) Date: Tue, 13 Mar 2018 16:39:40 +0200 Subject: [ovirt-users] VM stuck in Locked mode after failed migration In-Reply-To: References: Message-ID: On Tue, Mar 13, 2018 at 4:15 PM, John Nguyen wrote: > Hi Guys, > > My environment is running Ovirt 3.6 and due to power fluctuation caused by > recent weather. I have a handful of VMs stuck in a locked state as they > failed to migrate between panicked host. > > I have purged zombie tasks and ran the unlocked utility as documented in > https://www.ovirt.org/develop/developer-guide/db-issues/helperutilities/ > unlock_entity.sh completed successfully, however the vms are stilled in a > locked state. > > Unfortunately I'm unable to attach logs because of my companies security > posture. > > Any help would be really appreciated > Did you restart the engine? > > Thanks, > John > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mburman at redhat.com Tue Mar 13 15:02:31 2018 From: mburman at redhat.com (Michael Burman) Date: Tue, 13 Mar 2018 17:02:31 +0200 Subject: [ovirt-users] Host has no default route. In-Reply-To: References: Message-ID: Hi Peter It's a bug and it was fixed in https://bugzilla.redhat.com/show_bug.cgi?id=1477589 (fixed in 4.2.2-0.1.el7) Any how it's should not affect in any way, it's just a wrong report in the UI. Cheers) On Tue, Mar 13, 2018 at 4:36 PM, Peter Hudec wrote: > Hi, > > the hosted engine shows warning on each node about > - Host has no default route. > > my routing on one of the node > > [PROD] root at dipovirt03.cnc.sk: /home/phudec # ip r s > default via 192.168.16.1 dev ovirtmgmt > 192.168.16.0/24 dev ovirtmgmt proto kernel scope link src 192.168.16.23 > 192.168.85.0/24 dev storage proto kernel scope link src 192.168.85.23 > > regards > Peter > -- > *Peter Hudec* > Infra?trukt?rny architekt > phudec at cnc.sk > > *CNC, a.s.* > Borsk? 6, 841 04 Bratislava > Recepcia: +421 2 35 000 100 > > Mobil:+421 905 997 203 > *www.cnc.sk* > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman -------------- next part -------------- An HTML attachment was scrubbed... URL: From msivak at redhat.com Tue Mar 13 15:14:24 2018 From: msivak at redhat.com (Martin Sivak) Date: Tue, 13 Mar 2018 16:14:24 +0100 Subject: [ovirt-users] cocpit is not running on hosts In-Reply-To: References: Message-ID: Hi, make sure the service is actually started and the firewall is configured properly: systemctl status cockpit firewall-cmd --list-all You can make sure all is fine by doing the following: systemctl enable cockpit systemctl start cockpit firewall-cmd --add-service=cockpit --permanent firewall-cmd --reload Best regards Martin Sivak On Tue, Mar 13, 2018 at 3:33 PM, Peter Hudec wrote: > Hi, > > after upgrade to 4.2. there was running the cockpit on each host. > Right now, there is no service on port 9090. Is there any special setup > how to put it back? 
> > [PROD] root at dipovirt03.cnc.sk: /home/phudec # rpm -qa | grep ovirt > ovirt-imageio-common-1.2.1-0.el7.centos.noarch > ovirt-vmconsole-1.0.4-1.el7.centos.noarch > ovirt-provider-ovn-driver-1.2.5-1.el7.centos.noarch > ovirt-setup-lib-1.1.4-1.el7.centos.noarch > ovirt-host-4.2.1-1.el7.centos.x86_64 > ovirt-host-deploy-1.7.2-1.el7.centos.noarch > ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch > ovirt-host-dependencies-4.2.1-1.el7.centos.x86_64 > ovirt-hosted-engine-setup-2.2.9-1.el7.centos.noarch > ovirt-vmconsole-host-1.0.4-1.el7.centos.noarch > python-ovirt-engine-sdk4-4.2.4-2.el7.centos.x86_64 > ovirt-imageio-daemon-1.2.1-0.el7.centos.noarch > cockpit-ovirt-dashboard-0.11.11-0.1.el7.centos.noarch > ovirt-hosted-engine-ha-2.2.4-1.el7.centos.noarch > ovirt-engine-appliance-4.2-20180214.1.el7.centos.noarch > ovirt-release42-4.2.1.1-1.el7.centos.noarch > > [PROD] root at dipovirt03.cnc.sk: /home/phudec # rpm -qa | grep cockpit > cockpit-system-160-1.el7.centos.noarch > cockpit-networkmanager-160-1.el7.centos.noarch > cockpit-160-1.el7.centos.x86_64 > cockpit-bridge-160-1.el7.centos.x86_64 > cockpit-dashboard-160-1.el7.centos.x86_64 > cockpit-storaged-160-1.el7.centos.noarch > cockpit-ovirt-dashboard-0.11.11-0.1.el7.centos.noarch > cockpit-ws-160-1.el7.centos.x86_64 > > regards > Peter > > -- > *Peter Hudec* > Infra?trukt?rny architekt > phudec at cnc.sk > > *CNC, a.s.* > Borsk? 6, 841 04 Bratislava > Recepcia: +421 2 35 000 100 > > Mobil:+421 905 997 203 > *www.cnc.sk* > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From phudec at cnc.sk Tue Mar 13 15:24:48 2018 From: phudec at cnc.sk (Peter Hudec) Date: Tue, 13 Mar 2018 16:24:48 +0100 Subject: [ovirt-users] Host has no default route. In-Reply-To: References: Message-ID: Hi Michael, thanks. Seems to be fixed int 4.2.1. I was just upgrading the whole oVirt cluster while writing the mail. Right now i do not see it any more. Peter On 13/03/2018 16:02, Michael Burman wrote: > Hi Peter > > It's a bug and it was fixed in > https://bugzilla.redhat.com/show_bug.cgi?id=1477589 (fixed in 4.2.2-0.1.el7) > Any how it's should not affect in any way, it's just a wrong report in > the UI. > > Cheers) > > > On Tue, Mar 13, 2018 at 4:36 PM, Peter Hudec > wrote: > > Hi, > > the hosted engine shows warning on each node about > ?- Host has no default route. > > my routing on one of the node > > [PROD] root at dipovirt03.cnc.sk : > /home/phudec # ip r s > default via 192.168.16.1 dev ovirtmgmt > 192.168.16.0/24 dev ovirtmgmt proto kernel > scope link src 192.168.16.23 > 192.168.85.0/24 dev storage proto kernel > scope link src 192.168.85.23 > > ? ? ? ? regards > ? ? ? ? ? ? ? ? Peter > -- > *Peter Hudec* > Infra?trukt?rny architekt > phudec at cnc.sk > > > *CNC,?a.s.* > Borsk? 6,?841 04 Bratislava > Recepcia:?+421 2? 35 000 100 > > Mobil:+421?905 997 203 > *www.cnc.sk * > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > > > -- > > Michael Burman > > Senior Quality engineer - rhv network - redhat israel > > Red Hat > > > > mburman at redhat.com ? ? M: 0545355725 > ? ? IM: mburman > > > -- *Peter Hudec* Infra?trukt?rny architekt phudec at cnc.sk *CNC,?a.s.* Borsk? 6,?841 04 Bratislava Recepcia:?+421 2? 
35 000 100 Mobil:+421?905 997 203 *www.cnc.sk* From mburman at redhat.com Tue Mar 13 15:27:33 2018 From: mburman at redhat.com (Michael Burman) Date: Tue, 13 Mar 2018 17:27:33 +0200 Subject: [ovirt-users] Host has no default route. In-Reply-To: References: Message-ID: Correct, the spesific issue you saw was fixed in 4.2.1 and the whole RFE was implemented in 4.2.2 Cheers) On Tue, Mar 13, 2018 at 5:24 PM, Peter Hudec wrote: > Hi Michael, > > thanks. > Seems to be fixed int 4.2.1. I was just upgrading the whole oVirt > cluster while writing the mail. > Right now i do not see it any more. > > Peter > > On 13/03/2018 16:02, Michael Burman wrote: > > Hi Peter > > > > It's a bug and it was fixed in > > https://bugzilla.redhat.com/show_bug.cgi?id=1477589 (fixed in > 4.2.2-0.1.el7) > > Any how it's should not affect in any way, it's just a wrong report in > > the UI. > > > > Cheers) > > > > > > On Tue, Mar 13, 2018 at 4:36 PM, Peter Hudec > > wrote: > > > > Hi, > > > > the hosted engine shows warning on each node about > > - Host has no default route. > > > > my routing on one of the node > > > > [PROD] root at dipovirt03.cnc.sk : > > /home/phudec # ip r s > > default via 192.168.16.1 dev ovirtmgmt > > 192.168.16.0/24 dev ovirtmgmt proto kernel > > scope link src 192.168.16.23 > > 192.168.85.0/24 dev storage proto kernel > > scope link src 192.168.85.23 > > > > regards > > Peter > > -- > > *Peter Hudec* > > Infra?trukt?rny architekt > > phudec at cnc.sk > > > > > > *CNC, a.s.* > > Borsk? 6, 841 04 Bratislava > > Recepcia: +421 2 35 000 100 > > > > Mobil:+421 905 997 203 > > *www.cnc.sk * > > > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > > > > > > > > > > > -- > > > > Michael Burman > > > > Senior Quality engineer - rhv network - redhat israel > > > > Red Hat > > > > > > > > mburman at redhat.com M: 0545355725 > > IM: mburman > > > > > > > > > -- > *Peter Hudec* > Infra?trukt?rny architekt > phudec at cnc.sk > > *CNC, a.s.* > Borsk? 6, 841 04 Bratislava > Recepcia: +421 2 35 000 100 > > Mobil:+421 905 997 203 > *www.cnc.sk* > > -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman -------------- next part -------------- An HTML attachment was scrubbed... URL: From phudec at cnc.sk Tue Mar 13 15:28:03 2018 From: phudec at cnc.sk (Peter Hudec) Date: Tue, 13 Mar 2018 16:28:03 +0100 Subject: [ovirt-users] cocpit is not running on hosts In-Reply-To: References: Message-ID: <6bb03ae0-6b59-b826-1548-c47b17a7f1e7@cnc.sk> Hi Martin, thanks. It works. I just wantd to known if any special 'ovirt' commands are needed, seems not. regards Peter On 13/03/2018 16:14, Martin Sivak wrote: > Hi, > > make sure the service is actually started and the firewall is > configured properly: > > systemctl status cockpit > firewall-cmd --list-all > > You can make sure all is fine by doing the following: > > systemctl enable cockpit > systemctl start cockpit > firewall-cmd --add-service=cockpit --permanent > firewall-cmd --reload > > Best regards > > Martin Sivak > > On Tue, Mar 13, 2018 at 3:33 PM, Peter Hudec wrote: >> Hi, >> >> after upgrade to 4.2. there was running the cockpit on each host. >> Right now, there is no service on port 9090. Is there any special setup >> how to put it back? 
>> >> [PROD] root at dipovirt03.cnc.sk: /home/phudec # rpm -qa | grep ovirt >> ovirt-imageio-common-1.2.1-0.el7.centos.noarch >> ovirt-vmconsole-1.0.4-1.el7.centos.noarch >> ovirt-provider-ovn-driver-1.2.5-1.el7.centos.noarch >> ovirt-setup-lib-1.1.4-1.el7.centos.noarch >> ovirt-host-4.2.1-1.el7.centos.x86_64 >> ovirt-host-deploy-1.7.2-1.el7.centos.noarch >> ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch >> ovirt-host-dependencies-4.2.1-1.el7.centos.x86_64 >> ovirt-hosted-engine-setup-2.2.9-1.el7.centos.noarch >> ovirt-vmconsole-host-1.0.4-1.el7.centos.noarch >> python-ovirt-engine-sdk4-4.2.4-2.el7.centos.x86_64 >> ovirt-imageio-daemon-1.2.1-0.el7.centos.noarch >> cockpit-ovirt-dashboard-0.11.11-0.1.el7.centos.noarch >> ovirt-hosted-engine-ha-2.2.4-1.el7.centos.noarch >> ovirt-engine-appliance-4.2-20180214.1.el7.centos.noarch >> ovirt-release42-4.2.1.1-1.el7.centos.noarch >> >> [PROD] root at dipovirt03.cnc.sk: /home/phudec # rpm -qa | grep cockpit >> cockpit-system-160-1.el7.centos.noarch >> cockpit-networkmanager-160-1.el7.centos.noarch >> cockpit-160-1.el7.centos.x86_64 >> cockpit-bridge-160-1.el7.centos.x86_64 >> cockpit-dashboard-160-1.el7.centos.x86_64 >> cockpit-storaged-160-1.el7.centos.noarch >> cockpit-ovirt-dashboard-0.11.11-0.1.el7.centos.noarch >> cockpit-ws-160-1.el7.centos.x86_64 >> >> regards >> Peter >> >> -- >> *Peter Hudec* >> Infra?trukt?rny architekt >> phudec at cnc.sk >> >> *CNC, a.s.* >> Borsk? 6, 841 04 Bratislava >> Recepcia: +421 2 35 000 100 >> >> Mobil:+421 905 997 203 >> *www.cnc.sk* >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users -- *Peter Hudec* Infra?trukt?rny architekt phudec at cnc.sk *CNC,?a.s.* Borsk? 6,?841 04 Bratislava Recepcia:?+421 2? 35 000 100 Mobil:+421?905 997 203 *www.cnc.sk* From jtqn42 at gmail.com Tue Mar 13 15:38:37 2018 From: jtqn42 at gmail.com (John Nguyen) Date: Tue, 13 Mar 2018 11:38:37 -0400 Subject: [ovirt-users] VM stuck in Locked mode after failed migration In-Reply-To: References: Message-ID: It worked! Thank you so much On Tue, Mar 13, 2018 at 10:39 AM, Arik Hadas wrote: > > > On Tue, Mar 13, 2018 at 4:15 PM, John Nguyen wrote: > >> Hi Guys, >> >> My environment is running Ovirt 3.6 and due to power fluctuation caused >> by recent weather. I have a handful of VMs stuck in a locked state as they >> failed to migrate between panicked host. >> >> I have purged zombie tasks and ran the unlocked utility as documented in >> https://www.ovirt.org/develop/developer-guide/db-issues/helperutilities/ >> unlock_entity.sh completed successfully, however the vms are stilled in a >> locked state. >> >> Unfortunately I'm unable to attach logs because of my companies security >> posture. >> >> Any help would be really appreciated >> > > Did you restart the engine? > > >> >> Thanks, >> John >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gianluca.cecchi at gmail.com Tue Mar 13 15:45:51 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Tue, 13 Mar 2018 16:45:51 +0100 Subject: [ovirt-users] cocpit is not running on hosts In-Reply-To: References: Message-ID: On Tue, Mar 13, 2018 at 4:14 PM, Martin Sivak wrote: > Hi, > > make sure the service is actually started and the firewall is > configured properly: > > systemctl status cockpit > firewall-cmd --list-all > > You can make sure all is fine by doing the following: > > systemctl enable cockpit > systemctl start cockpit > firewall-cmd --add-service=cockpit --permanent > firewall-cmd --reload > > Best regards > > Martin Sivak > > Actually from what I understood, the "cockpit service" has to remain configured as "static", while the "cockpit socket" has to be enabled. And in cockpit.service unit file in [Unit] section: Requires=cockpit.socket So in my case on a plain CentOS server acting as a node I executed: systemctl enable cockpit.socket systemctl start cockpit.socket And I verified I could connect to the hypervisor on port 9090 and then also the status of cockpit.service was "active". In messages: Mar 13 16:30:29 ov42 systemd: Starting Cockpit Web Service Socket. Mar 13 16:30:29 ov42 systemd: Listening on Cockpit Web Service Socket. And when I connect with browser to port 9090 some seconds later: Mar 13 16:30:47 ov42 systemd: Starting Cockpit Web Service... Mar 13 16:30:47 ov42 systemd: Started Cockpit Web Service. Mar 13 16:30:47 ov42 cockpit-ws: Using certificate: /etc/cockpit/ws-certs.d/0-self-signed.cert Mar 13 16:30:47 ov42 cockpit-ws: couldn't read from connection: Error reading data from TLS socket: A TLS fatal alert has been received. Mar 13 16:30:57 ov42 cockpit-session: pam_ssh_add: Failed adding some keys Mar 13 16:30:57 ov42 systemd-logind: New session 3407 of user root. Mar 13 16:30:57 ov42 systemd: Started Session 3407 of user root. Mar 13 16:30:57 ov42 systemd: Starting Session 3407 of user root. Mar 13 16:30:58 ov42 cockpit-ws: logged in user session Mar 13 16:30:58 ov42 cockpit-ws: New connection to session from 10.4.4.12 ... For further stop/start of cockpit.socket, I see that the start of the cockpit.service is instead immediate when cockpit.socket starts eg: Mar 13 16:37:37 ov42 systemd: Starting Cockpit Web Service Socket. Mar 13 16:37:37 ov42 systemd: Listening on Cockpit Web Service Socket. Mar 13 16:37:37 ov42 systemd: Starting Cockpit Web Service... Mar 13 16:37:37 ov42 systemd: Started Cockpit Web Service. Mar 13 16:37:37 ov42 cockpit-ws: Using certificate: /etc/cockpit/ws-certs.d/0-self-signed.cert Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From msivak at redhat.com Tue Mar 13 15:54:05 2018 From: msivak at redhat.com (Martin Sivak) Date: Tue, 13 Mar 2018 16:54:05 +0100 Subject: [ovirt-users] cocpit is not running on hosts In-Reply-To: References: Message-ID: > systemctl enable cockpit.socket > systemctl start cockpit.socket There is slight difference on when the service gets activated (on startup vs. on first access), but I guess either way is fine. 
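You can see why the socket is the unit to enable right in the shipped
definitions (output trimmed; this is how the stock units look here):

systemctl cat cockpit.socket cockpit.service
# cockpit.socket carries the listener:
#   [Socket]
#   ListenStream=9090
# and cockpit.service declares "Requires=cockpit.socket", as you noted,
# so enabling the socket is enough - the web service is then started on
# first access.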
The documentation indeed mentions the socket way: http://cockpit-project.org/guide/133/startup.html Best regards Martin Sivak On Tue, Mar 13, 2018 at 4:45 PM, Gianluca Cecchi wrote: > On Tue, Mar 13, 2018 at 4:14 PM, Martin Sivak wrote: >> >> Hi, >> >> make sure the service is actually started and the firewall is >> configured properly: >> >> systemctl status cockpit >> firewall-cmd --list-all >> >> You can make sure all is fine by doing the following: >> >> systemctl enable cockpit >> systemctl start cockpit >> firewall-cmd --add-service=cockpit --permanent >> firewall-cmd --reload >> >> Best regards >> >> Martin Sivak >> > > Actually from what I understood, the "cockpit service" has to remain > configured as "static", while the "cockpit socket" has to be enabled. > And in cockpit.service unit file in [Unit] section: > > Requires=cockpit.socket > > > So in my case on a plain CentOS server acting as a node I executed: > > systemctl enable cockpit.socket > systemctl start cockpit.socket > > And I verified I could connect to the hypervisor on port 9090 and then also > the status of cockpit.service was "active". > > In messages: > > Mar 13 16:30:29 ov42 systemd: Starting Cockpit Web Service Socket. > Mar 13 16:30:29 ov42 systemd: Listening on Cockpit Web Service Socket. > > And when I connect with browser to port 9090 some seconds later: > > Mar 13 16:30:47 ov42 systemd: Starting Cockpit Web Service... > Mar 13 16:30:47 ov42 systemd: Started Cockpit Web Service. > Mar 13 16:30:47 ov42 cockpit-ws: Using certificate: > /etc/cockpit/ws-certs.d/0-self-signed.cert > Mar 13 16:30:47 ov42 cockpit-ws: couldn't read from connection: Error > reading data from TLS socket: A TLS fatal alert has been received. > Mar 13 16:30:57 ov42 cockpit-session: pam_ssh_add: Failed adding some keys > Mar 13 16:30:57 ov42 systemd-logind: New session 3407 of user root. > Mar 13 16:30:57 ov42 systemd: Started Session 3407 of user root. > Mar 13 16:30:57 ov42 systemd: Starting Session 3407 of user root. > Mar 13 16:30:58 ov42 cockpit-ws: logged in user session > Mar 13 16:30:58 ov42 cockpit-ws: New connection to session from 10.4.4.12 > ... > > For further stop/start of cockpit.socket, I see that the start of the > cockpit.service is instead immediate when cockpit.socket starts > > eg: > > Mar 13 16:37:37 ov42 systemd: Starting Cockpit Web Service Socket. > Mar 13 16:37:37 ov42 systemd: Listening on Cockpit Web Service Socket. > Mar 13 16:37:37 ov42 systemd: Starting Cockpit Web Service... > Mar 13 16:37:37 ov42 systemd: Started Cockpit Web Service. > Mar 13 16:37:37 ov42 cockpit-ws: Using certificate: > /etc/cockpit/ws-certs.d/0-self-signed.cert > > Gianluca > > > From tadavis at lbl.gov Tue Mar 13 16:22:07 2018 From: tadavis at lbl.gov (Thomas Davis) Date: Tue, 13 Mar 2018 09:22:07 -0700 Subject: [ovirt-users] ovirt 4.2.2-rc3 switching from legacy to OVS.. In-Reply-To: References: <031b026d-66ec-14a0-9bec-7a4b0e717556@lbl.gov> Message-ID: I'll work on it some more. I have 2 different clusters in the data center (1 is the Hosted Engine systems, another is not..) I had trouble with both. I'll try again on the non-hosted engine cluster to see what it is doing. I have it working in 4.1, but we are trying to do a clean wipe since the 4.1 engine has been upgraded so many times from v3.5 plus we want to move to hosted-engine-ha from a single engine node and the ansible modules/roles (which also have problems..) 
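When it fails again I'll capture the host-side state right away -
roughly this checklist (my own notes, not an official procedure, and
the vdsm-tool verbs may differ between versions):

ovs-vsctl show          # what OVS actually created
ip -d link show         # bridges/vlans left behind
vdsm-tool list-nets     # what vdsm thinks exists
tar czf ovs-fail-logs.tgz /var/log/vdsm/supervdsm.log /var/log/vdsm/vdsm.log
# if the host is left unusable, roll back to the last persisted config:
vdsm-tool restore-nets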
thomas On Tue, Mar 13, 2018 at 6:27 AM, Edward Haas wrote: > > OVS switch support is experimental at this stage and in some cases when > trying to change from one switch to the other, it fails. > It was also not checked against a hosted engine setup, which handles > networking a bit differently for the management network (ovirtmgmt). > Nevertheless, we are interested in understanding all the problems that > exists today, so if you can, please share the supervdsm log, it has the > interesting networking traces. > > We plan to block cluster switch editing until these problems are resolved. > It will be only allowed to define a new cluster as OVS, not convert an > existing one from Linux Bridge to OVS. > > On Fri, Mar 9, 2018 at 9:54 AM, Thomas Davis wrote: > >> I'm getting further along with 4.2.2rc3 than the 4.2.1 when it comes to >> hosted engine and vlans.. it actually does install >> under 4.2.2rc3. >> >> But it's a complete failure when I switch the cluster from Linux >> Bridge/Legacy to OVS. The first time I try, vdsm does >> not properly configure the node, it's all messed up. >> >> I'm getting this in vdsmd logs: >> >> 2018-03-08 23:12:46,610-0800 INFO (jsonrpc/7) [api.network] START >> setupNetworks(networks={u'ovirtmgmt': {u'ipv6autoconf': True, u'nic': >> u'eno1', u'vlan': u'50', u'ipaddr': u'192.168.85.49', u'switch': u'ovs', >> u'mtu': 1500, u'netmask': u'255.255.252.0', u'dhcpv6': False, u'STP': >> u'no', u'bridged': u'true', u'gateway': u'192.168.85.254', u'defaultRoute': >> True}}, bondings={}, options={u'connectivityCheck': u'true', >> u'connectivityTimeout': 120}) from=::ffff:192.168.85.24,56806, >> flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:46) >> >> 2018-03-08 23:12:52,449-0800 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] >> RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573) >> >> 2018-03-08 23:12:52,511-0800 INFO (jsonrpc/7) [api.network] FINISH >> setupNetworks error=[Errno 19] ovirtmgmt is not present in the system >> from=::ffff:192.168.85.24,56806, flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 >> (api:50) >> 2018-03-08 23:12:52,512-0800 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer] >> Internal server error (__init__:611) >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line >> 606, in _handle_request >> res = method(**params) >> File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, >> in _dynamicMethod >> result = fn(*methodArgs) >> File "", line 2, in setupNetworks >> File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, >> in method >> ret = func(*args, **kwargs) >> File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1527, in >> setupNetworks >> supervdsm.getProxy().setupNetworks(networks, bondings, options) >> File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line >> 55, in __call__ >> return callMethod() >> File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line >> 53, in >> **kwargs) >> File "", line 2, in setupNetworks >> File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in >> _callmethod >> raise convert_to_error(kind, result) >> IOError: [Errno 19] ovirtmgmt is not present in the system >> 2018-03-08 23:12:52,512-0800 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] >> RPC call Host.setupNetworks failed (error -32603) in 5.90 seconds >> (__init__:573) >> 2018-03-08 23:12:54,769-0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] >> RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573) >> 2018-03-08 23:12:54,772-0800 INFO 
(jsonrpc/5) [api.host] START >> getCapabilities() from=::1,45562 (api:46) >> 2018-03-08 23:12:54,906-0800 INFO (jsonrpc/5) [api.host] FINISH >> getCapabilities error=[Errno 19] ovirtmgmt is not present in the system >> from=::1,45562 (api:50) >> 2018-03-08 23:12:54,906-0800 ERROR (jsonrpc/5) [jsonrpc.JsonRpcServer] >> Internal server error (__init__:611) >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line >> 606, in _handle_request >> res = method(**params) >> File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 201, >> in _dynamicMethod >> result = fn(*methodArgs) >> File "", line 2, in getCapabilities >> File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, >> in method >> ret = func(*args, **kwargs) >> File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1339, in >> getCapabilities >> c = caps.get() >> File "/usr/lib/python2.7/site-packages/vdsm/host/caps.py", line 168, >> in get >> net_caps = supervdsm.getProxy().network_caps() >> File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line >> 55, in __call__ >> return callMethod() >> File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line >> 53, in >> **kwargs) >> File "", line 2, in network_caps >> File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in >> _callmethod >> raise convert_to_error(kind, result) >> IOError: [Errno 19] ovirtmgmt is not present in the system >> >> So something is dreadfully wrong with the bridge to ovs conversion in >> 4.2.2rc3. >> >> thomas >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbonazzo at redhat.com Tue Mar 13 16:26:04 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Tue, 13 Mar 2018 17:26:04 +0100 Subject: [ovirt-users] ovirt-system-tests hackathon report Message-ID: 4 people accepted calendar invite: - Devin A. Bougie - Francesco Romani - Jiri Belka - suporte, logicworks 4 people tentatively accepted calendar invite: - Amnon Maimon - Andreas Bleischwitz - Arnaud Lauriou - Stephen Pesini 2 mailing lists accepted calendar invite: users at ovirt.org, devel at ovirt.org (don't ask me how) so I may have missed someone in above list 4 patches got merged: Add check for host update to the 1st host. Merged Yaniv Kaul ovirt-system-tests master (add_upgrade_check) 4:10 PM basic-suite-master: add vnic_profile_mappings to register vm Merged Eitan Raviv ovirt-system-tests master (register-template-vnic-mapping) 2:50 PM Revert "ovirt-4.2: Skipping 002_bootstrap.update_default_cluster" Merged Eyal Edri ovirt-system-tests master 11:36 AM seperate 4.2 tests and utils from master Merged Eyal Edri ovirt-system-tests master 11:35 AM 13 patches has been pushed / reviewed / rebased Add gdeploy to ovirt-4.2.repo Daniel Belenky ovirt-system-tests master 4:53 PM Cleanup of test code - next() replaced with any() Martin Siv?k ovirt-system-tests master 4:51 PM Add network queues custom property and use it in the vnic profile for VM0 Yaniv Kaul ovirt-system-tests master (multi_queue_config) 4:49 PM new suite: he-basic-iscsi-suite-master Yuval Turgeman ovirt-system-tests master (he-basic-iscsi-suite-master) 4:47 PM Collect host-deploy bundle from the engine Yedidyah Bar David ovirt-system-tests master 4:41 PM network-suite-master: Make openstack_client_config fixture available to all ... 
Merge Conflict Marcin Mirecki ovirt-system-tests master 3:39 PM new suite: he-basic-ng-ansible-suite-master Sandro Bonazzola ovirt-system-tests master (he-basic-ng-ansible-suite-master) 3:37 PM Enable and move additional tests to 002 Yaniv Kaul ovirt-system-tests master (move_more_to_002) 3:08 PM common: ovirt-4.2.repo Sandro Bonazzola ovirt-system-tests master 2:34 PM networking: Introducing test_stateless_vm_duplicate_mac_addr_vnic_does_not_be... Leon Goldberg ovirt-system-tests master 2:08 PM hc: Updating gdeploy conf to create vdo volumes Sahina Bose ovirt-system-tests master 12:55 PM master: add USB to the sanity VM Michal Skrivanek ovirt-system-tests master 12:54 PM Test hosted-engine cleanup Yedidyah Bar David ovirt-system-tests master 9:39 AM Feedback from the event: - "if we want to add many more tests to OST, and I think we do, we need to do some change there to allow that. Current framework is simply not scalable enough" - not joining the hackathon because "I'd be like an elephant in a porcelain shop" - "I'm not sure I'm OK with the flood of suites that we have - the more we have, the harder it is to sync and maintain but more importantly - to run." - "We can't keep adding new suite for each parameter we want to test, it adds overhead to monitoring, resources and maintenance." - invite wasn't clear enough. I found people on #ovirt on Freenode and on Red Hat IRC servers and redirected them to OFTC IRC server (my fault, hopefully managed to workaround it by talking to people) Lessons learned: - Calendar invites to mailing lists doesn't work well, need a different way to track mailing list members joining the events. - Invites needs to be pedantic on how to join the event, not leaving space for interpretation and misunderstanding. - We need a contribution guide to ovirt-system-test: we need to make people comfortable in trying to add a new test and we need to ensure that we won't reject a day of work because the patch doesn't match core contributors plannings on number of suites, resources and so on - The ovirt-system-tests check patch script is not good enough. It triggers too many sequential suites on every single patch pushed, and fails due to timeout taking more than 6 hours to complete. - The way ovirt-system-test collects rpms from defined repos is not smart enough: it doesn't take the latest version of a given package, just the first found in sequential order of the repos, Thanks everyone who participated to the event! if you have time please continue improving ovirt-system-test even if today event is almost completed! -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Tue Mar 13 18:00:05 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Tue, 13 Mar 2018 20:00:05 +0200 Subject: [ovirt-users] ovirt-system-tests hackathon report In-Reply-To: References: Message-ID: On Mar 13, 2018 6:27 PM, "Sandro Bonazzola" wrote: 4 people accepted calendar invite: - Devin A. Bougie - Francesco Romani - Jiri Belka - suporte, logicworks 4 people tentatively accepted calendar invite: - Amnon Maimon - Andreas Bleischwitz - Arnaud Lauriou - Stephen Pesini 2 mailing lists accepted calendar invite: users at ovirt.org, devel at ovirt.org (don't ask me how) so I may have missed someone in above list 4 patches got merged: Add check for host update to the 1st host. 
Merged Yaniv Kaul ovirt-system-tests master (add_upgrade_check) 4:10 PM basic-suite-master: add vnic_profile_mappings to register vm Merged Eitan Raviv ovirt-system-tests master (register-template-vnic-mapping) 2:50 PM Revert "ovirt-4.2: Skipping 002_bootstrap.update_default_cluster" Merged Eyal Edri ovirt-system-tests master 11:36 AM seperate 4.2 tests and utils from master Merged Eyal Edri ovirt-system-tests master 11:35 AM 13 patches has been pushed / reviewed / rebased Add gdeploy to ovirt-4.2.repo Daniel Belenky ovirt-system-tests master 4:53 PM Cleanup of test code - next() replaced with any() Martin Siv?k ovirt-system-tests master 4:51 PM Add network queues custom property and use it in the vnic profile for VM0 Yaniv Kaul ovirt-system-tests master (multi_queue_config) 4:49 PM new suite: he-basic-iscsi-suite-master Yuval Turgeman ovirt-system-tests master (he-basic-iscsi-suite-master) 4:47 PM Collect host-deploy bundle from the engine Yedidyah Bar David ovirt-system-tests master 4:41 PM network-suite-master: Make openstack_client_config fixture available to all ... Merge Conflict Marcin Mirecki ovirt-system-tests master 3:39 PM new suite: he-basic-ng-ansible-suite-master Sandro Bonazzola ovirt-system-tests master (he-basic-ng-ansible-suite-master) 3:37 PM Enable and move additional tests to 002 Yaniv Kaul ovirt-system-tests master (move_more_to_002) 3:08 PM common: ovirt-4.2.repo Sandro Bonazzola ovirt-system-tests master 2:34 PM networking: Introducing test_stateless_vm_duplicate_ mac_addr_vnic_does_not_be... Leon Goldberg ovirt-system-tests master 2:08 PM hc: Updating gdeploy conf to create vdo volumes Sahina Bose ovirt-system-tests master 12:55 PM master: add USB to the sanity VM Michal Skrivanek ovirt-system-tests master 12:54 PM Test hosted-engine cleanup Yedidyah Bar David ovirt-system-tests master 9:39 AM Nice list of patches! Feedback from the event: - "if we want to add many more tests to OST, and I think we do, we need to do some change there to allow that. Current framework is simply not scalable enough" More specific and constructive ideas are welcome. We know we want to move to pytest, for example. - not joining the hackathon because "I'd be like an elephant in a porcelain shop" - "I'm not sure I'm OK with the flood of suites that we have - the more we have, the harder it is to sync and maintain but more importantly - to run." - "We can't keep adding new suite for each parameter we want to test, it adds overhead to monitoring, resources and maintenance." I tend to agree, but not sure how to elegantly solve it. - invite wasn't clear enough. I found people on #ovirt on Freenode and on Red Hat IRC servers and redirected them to OFTC IRC server (my fault, hopefully managed to workaround it by talking to people) Lessons learned: - Calendar invites to mailing lists doesn't work well, need a different way to track mailing list members joining the events. - Invites needs to be pedantic on how to join the event, not leaving space for interpretation and misunderstanding. - We need a contribution guide to ovirt-system-test: we need to make people comfortable in trying to add a new test and we need to ensure that we won't reject a day of work because the patch doesn't match core contributors plannings on number of suites, resources and so on Agree. Y. - The ovirt-system-tests check patch script is not good enough. It triggers too many sequential suites on every single patch pushed, and fails due to timeout taking more than 6 hours to complete. 
- The way ovirt-system-test collects rpms from defined repos is not smart enough: it doesn't take the latest version of a given package, just the first found in sequential order of the repos, Thanks everyone who participated to the event! if you have time please continue improving ovirt-system-test even if today event is almost completed! -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA TRIED. TESTED. TRUSTED. _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.skrivanek at redhat.com Tue Mar 13 19:32:12 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Tue, 13 Mar 2018 20:32:12 +0100 Subject: [ovirt-users] change CD not working In-Reply-To: References: Message-ID: > On 13 Mar 2018, at 15:19, Junaid Jadoon wrote: > > hi, > when i tried to change CD within a Windows VM and getting following error message. > > Ovirt engine and node version are 4.2. Hi, a more concrete version would help, but still without logs from host and engine it?s hard to say anything. Please add that Thanks, michal > "Error while executing action Change CD: Failed to perform "Change CD" operation, CD might be still in use by the VM. > Please try to manually detach the CD from withing the VM: > 1. Log in to the VM > 2 For Linux VMs, un-mount the CD using umount command; > For Windows VMs, right click on the CD drive and click 'Eject';" > > Initially its working fine suddenly it giving above error. > > please help me out > Regards, > Junaid > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From rgolan at redhat.com Tue Mar 13 19:44:48 2018 From: rgolan at redhat.com (Roy Golan) Date: Tue, 13 Mar 2018 19:44:48 +0000 Subject: [ovirt-users] Status in oVirt In-Reply-To: References: Message-ID: On Tue, Mar 13, 2018, 2:26 PM Hari Prasanth Loganathan < hariprasanth.l at msystechnologies.com> wrote: > Hi Guys, > > What is the best way to get the oVirt status like > 1) Apache web server is running, > 2) Jboss server is running, > 3) postGreSQL server is running > > Thanks, > Hari > >> >> Check the health url: https://..../ovirt-engine/services/health There are more elaborated ways, but it's a good start. Works for you? DISCLAIMER > > The information in this e-mail is confidential and may be subject to legal > privilege. It is intended solely for the addressee. Access to this e-mail > by anyone else is unauthorized. If you have received this communication in > error, please address with the subject heading "Received in error," send to > it at msystechnologies.com, then delete the e-mail and destroy any copies > of it. If you are not the intended recipient, any disclosure, copying, > distribution or any action taken or omitted to be taken in reliance on it, > is prohibited and may be unlawful. The views, opinions, conclusions and > other information expressed in this electronic mail and any attachments are > not given or endorsed by the company unless otherwise indicated by an > authorized representative independent of this message. 
> MSys cannot guarantee that e-mail communications are secure or error-free, > as information could be intercepted, corrupted, amended, lost, destroyed, > arrive late or incomplete, or contain viruses, though all reasonable > precautions have been taken to ensure no viruses are present in this e-mail. > As our company cannot accept responsibility for any loss or damage arising > from the use of this e-mail or attachments we recommend that you subject > these to your virus checking procedures prior to use > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eedri at redhat.com Tue Mar 13 19:55:59 2018 From: eedri at redhat.com (Eyal Edri) Date: Tue, 13 Mar 2018 21:55:59 +0200 Subject: [ovirt-users] ovirt-system-tests hackathon report In-Reply-To: References: Message-ID: On Tue, Mar 13, 2018 at 8:00 PM, Yaniv Kaul wrote: > > > On Mar 13, 2018 6:27 PM, "Sandro Bonazzola" wrote: > > 4 people accepted calendar invite: > - Devin A. Bougie > - Francesco Romani > - Jiri Belka > - suporte, logicworks > > 4 people tentatively accepted calendar invite: > - Amnon Maimon > - Andreas Bleischwitz > - Arnaud Lauriou > - Stephen Pesini > > 2 mailing lists accepted calendar invite: users at ovirt.org, devel at ovirt.org > (don't ask me how) so I may have missed someone in above list > > > 4 patches got merged: > Add check for host update to the 1st host. > Merged Yaniv Kaul > > ovirt-system-tests > master > (add_upgrade_check) > 4:10 > PM > basic-suite-master: add vnic_profile_mappings to register vm > Merged Eitan Raviv > > ovirt-system-tests > master > (register-template-vnic-mapping) > 2:50 > PM > Revert "ovirt-4.2: Skipping 002_bootstrap.update_default_cluster" > Merged Eyal Edri > > ovirt-system-tests > > master > 11:36 > AM > seperate 4.2 tests and utils from master > Merged Eyal Edri > > ovirt-system-tests > > master > 11:35 > AM > > 13 patches has been pushed / reviewed / rebased > > Add gdeploy to ovirt-4.2.repo > Daniel Belenky > > ovirt-system-tests > > master > 4:53 > PM > Cleanup of test code - next() replaced with any() > > Martin Siv?k > > ovirt-system-tests > > master > 4:51 > PM > Add network queues custom property and use it in the vnic profile for VM0 > > Yaniv Kaul > > ovirt-system-tests > master > (multi_queue_config) > 4:49 > PM > new suite: he-basic-iscsi-suite-master > Yuval Turgeman > > ovirt-system-tests > master > (he-basic-iscsi-suite-master) > 4:47 > PM > Collect host-deploy bundle from the engine > > Yedidyah Bar David > > ovirt-system-tests > > master > 4:41 > PM > network-suite-master: Make openstack_client_config fixture available to > all ... Merge Conflict Marcin Mirecki > > ovirt-system-tests > > master > 3:39 > PM > new suite: he-basic-ng-ansible-suite-master > > Sandro Bonazzola > > ovirt-system-tests > master > (he-basic-ng-ansible-suite-master) > 3:37 > PM > Enable and move additional tests to 002 > Yaniv Kaul > > ovirt-system-tests > master > (move_more_to_002) > 3:08 > PM > common: ovirt-4.2.repo > Sandro Bonazzola > > ovirt-system-tests > > master > 2:34 > PM > networking: Introducing test_stateless_vm_duplicate_ma > c_addr_vnic_does_not_be... 
> Leon Goldberg, ovirt-system-tests, master, 2:08 PM
> hc: Updating gdeploy conf to create vdo volumes - Sahina Bose, ovirt-system-tests, master, 12:55 PM
> master: add USB to the sanity VM - Michal Skrivanek, ovirt-system-tests, master, 12:54 PM
> Test hosted-engine cleanup - Yedidyah Bar David, ovirt-system-tests, master, 9:39 AM
>
> Nice list of patches!
>
> Feedback from the event:
> - "if we want to add many more tests to OST, and I think we do, we need to do some change there to allow that. Current framework is simply not scalable enough"
>
> More specific and constructive ideas are welcome. We know we want to move to pytest, for example.

+1, there are a lot of things we need to improve in OST; the network suite is a good example of going forward with PyTest. We have other ideas for improvements, like splitting the suites into multiple projects using a new feature in std-ci, and dropping the reposync file or making it optional (actually ovirt-demo-tool is already doing that, using release rpms).

If there are more ideas, we can consider doing an infra hackathon where we can focus on improving infrastructure and moving most of the tests to PyTest (at least in master).

>> - not joining the hackathon because "I'd be like an elephant in a porcelain shop"
>> - "I'm not sure I'm OK with the flood of suites that we have - the more we have, the harder it is to sync and maintain but more importantly - to run."
>> - "We can't keep adding new suite for each parameter we want to test, it adds overhead to monitoring, resources and maintenance."
>>
>> I tend to agree, but not sure how to elegantly solve it.

We need to think about a new design for it which includes scalability and multiple maintainers; the requirements and usage have changed significantly over the years. I also don't have an easy solution for it, yet at least.

>> - invite wasn't clear enough. I found people on #ovirt on Freenode and on Red Hat IRC servers and redirected them to OFTC IRC server (my fault, hopefully managed to workaround it by talking to people)
>>
>> Lessons learned:
>> - Calendar invites to mailing lists doesn't work well, need a different way to track mailing list members joining the events.
>> - Invites needs to be pedantic on how to join the event, not leaving space for interpretation and misunderstanding.
>> - We need a contribution guide to ovirt-system-test: we need to make people comfortable in trying to add a new test and we need to ensure that we won't reject a day of work because the patch doesn't match core contributors plannings on number of suites, resources and so on

This could have been better if the single OST maintainer had been part of the planning and able to participate and give feedback and assistance; unfortunately it was decided to do the hackathon without him present. I agree we need to improve our contribution guide, and work has started towards it, but any specific feedback or tickets on what can be improved will surely help.

>> Agree.
>> Y.
>>
>> - The ovirt-system-tests check patch script is not good enough. It triggers too many sequential suites on every single patch pushed, and fails due to timeout taking more than 6 hours to complete.

This is very close to being much better: we're 1-2 weeks away from implementing a new feature in STD CI V2 which will allow us to replace the existing 'change resolver' and eventually run all suites in check-patch in parallel, dramatically reducing runtime.
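To make the PyTest direction above a bit more concrete, a converted OST check could look roughly like the sketch below. This is illustrative only: the 'engine_connection' fixture is a made-up name, not an existing OST fixture, but the ovirtsdk4 calls are the standard ones:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types
    import pytest

    @pytest.fixture(scope="session")
    def engine_connection():
        # Hypothetical fixture: OST would build this from the deployed engine.
        connection = sdk.Connection(
            url="https://engine/ovirt-engine/api",
            username="admin@internal",
            password="secret",
            insecure=True,
        )
        yield connection
        connection.close()

    def test_all_hosts_are_up(engine_connection):
        # Plain asserts instead of nose-style assert_* helpers.
        hosts = engine_connection.system_service().hosts_service().list()
        assert hosts, "no hosts registered in the engine"
        assert all(h.status == types.HostStatus.UP for h in hosts)

Plain fixtures and asserts would already remove a lot of the boilerplate the current framework carries, and pytest parametrization could replace some of the per-suite duplication.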
> - The way ovirt-system-test collects rpms from defined repos is not smart > enough: it doesn't take the latest version of a given package, just the > first found in sequential order of the repos, > > Its not that its not smart, its by design, repoman uses "only-latest" option to get the first RPM he finds from the repos, I'm pretty sure there was a good reason behind it when it was written a few years ago, I'll try to remember and update. In any case, the relevant patch where this was needed had a different problem with dynamic replacement of repos which we need to think on a solution for. > > Thanks everyone who participated to the event! if you have time please > continue improving ovirt-system-test even if today event is almost > completed! > > > -- > > SANDRO BONAZZOLA > > ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D > > Red Hat EMEA > > TRIED. TESTED. TRUSTED. > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > _______________________________________________ > Infra mailing list > Infra at ovirt.org > http://lists.ovirt.org/mailman/listinfo/infra > > -- Eyal edri MANAGER RHV DevOps EMEA VIRTUALIZATION R&D Red Hat EMEA TRIED. TESTED. TRUSTED. phone: +972-9-7692018 irc: eedri (on #tlv #rhev-dev #rhev-integ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreil1 at starlett.lv Tue Mar 13 20:23:05 2018 From: andreil1 at starlett.lv (Andrei Verovski) Date: Tue, 13 Mar 2018 22:23:05 +0200 Subject: [ovirt-users] qemu-guest-agent on oVirt/KVM guest Message-ID: <86198584-21dd-e0e1-d695-2e28db609141@starlett.lv> Hi ! Is this daemon (qemu-guest-agent) required on oVirt/KVM guest? Or ovirt-guest-agent is enough ? Thanks. Andrei From nicolas.vaye at province-sud.nc Tue Mar 13 20:35:23 2018 From: nicolas.vaye at province-sud.nc (Nicolas Vaye) Date: Tue, 13 Mar 2018 20:35:23 +0000 Subject: [ovirt-users] Assistance needed... In-Reply-To: References: Message-ID: <1520973318.6088.97.camel@province-sud.nc> Hi, have you ever seen this documentation ? https://www.ovirt.org/documentation/internal/guest-agent/understanding-guest-agents-and-other-tools/ You can see the link for windows 7 guest agent : https://community.redhat.com/blog/2015/05/how-to-install-and-use-ovirts-windows-guest-tools/ May be it can help you. Regards, Nicolas VAYE -------- Message initial -------- Date: Mon, 12 Mar 2018 18:00:08 +0000 Objet: Re: [ovirt-users] Assistance needed... ?: users at ovirt.org > De: Nasrum Minallah Manzoor > Hi, Can anyone assist me in getting vnc native console through ovirt engine to my guest machine(window 7). Thanks. From: Nasrum Minallah Manzoor Sent: 12 March 2018 3:40 PM To: users at ovirt.org Cc: 'junaid8756 at gmail.com' Subject: Assistance needed... Hi, I need assistance regarding encircled in red in the attached! How can I remove the error ?The latest guest agent needs to be installed and running on the guest?. Else everything is working fine! Kindly response as soon as possible! Regards, _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users From rgolan at redhat.com Tue Mar 13 20:48:17 2018 From: rgolan at redhat.com (Roy Golan) Date: Tue, 13 Mar 2018 20:48:17 +0000 Subject: [ovirt-users] [ovirt-devel] ovirt-system-tests hackathon report In-Reply-To: References: Message-ID: I missed it :( indeed calendar invite to the list didn't work. 
On Tue, 13 Mar 2018 at 21:58 Eyal Edri wrote: > On Tue, Mar 13, 2018 at 8:00 PM, Yaniv Kaul wrote: > >> >> >> On Mar 13, 2018 6:27 PM, "Sandro Bonazzola" wrote: >> >> 4 people accepted calendar invite: >> - Devin A. Bougie >> - Francesco Romani >> - Jiri Belka >> - suporte, logicworks >> >> 4 people tentatively accepted calendar invite: >> - Amnon Maimon >> - Andreas Bleischwitz >> - Arnaud Lauriou >> - Stephen Pesini >> >> 2 mailing lists accepted calendar invite: users at ovirt.org, >> devel at ovirt.org (don't ask me how) so I may have missed someone in above >> list >> >> >> 4 patches got merged: >> Add check for host update to the 1st host. >> Merged Yaniv Kaul >> >> ovirt-system-tests >> master >> (add_upgrade_check) >> 4:10 >> PM >> basic-suite-master: add vnic_profile_mappings to register vm >> Merged Eitan Raviv >> >> ovirt-system-tests >> master >> (register-template-vnic-mapping) >> 2:50 >> PM >> Revert "ovirt-4.2: Skipping 002_bootstrap.update_default_cluster" >> Merged Eyal Edri >> >> ovirt-system-tests >> >> master >> 11:36 >> AM >> seperate 4.2 tests and utils from master >> Merged Eyal Edri >> >> ovirt-system-tests >> >> master >> 11:35 >> AM >> >> 13 patches has been pushed / reviewed / rebased >> >> Add gdeploy to ovirt-4.2.repo >> Daniel Belenky >> >> ovirt-system-tests >> >> master >> 4:53 >> PM >> Cleanup of test code - next() replaced with any() >> >> Martin Siv?k >> >> ovirt-system-tests >> >> master >> 4:51 >> PM >> Add network queues custom property and use it in the vnic profile for VM0 >> >> Yaniv Kaul >> >> ovirt-system-tests >> master >> (multi_queue_config) >> 4:49 >> PM >> new suite: he-basic-iscsi-suite-master >> Yuval Turgeman >> >> ovirt-system-tests >> master >> (he-basic-iscsi-suite-master) >> 4:47 >> PM >> Collect host-deploy bundle from the engine >> >> Yedidyah Bar David >> >> ovirt-system-tests >> >> master >> 4:41 >> PM >> network-suite-master: Make openstack_client_config fixture available to >> all ... Merge Conflict Marcin Mirecki >> >> ovirt-system-tests >> >> master >> 3:39 >> PM >> new suite: he-basic-ng-ansible-suite-master >> >> Sandro Bonazzola >> >> ovirt-system-tests >> master >> (he-basic-ng-ansible-suite-master) >> 3:37 >> PM >> Enable and move additional tests to 002 >> Yaniv Kaul >> >> ovirt-system-tests >> master >> (move_more_to_002) >> 3:08 >> PM >> common: ovirt-4.2.repo >> Sandro Bonazzola >> >> ovirt-system-tests >> >> master >> 2:34 >> PM >> networking: Introducing >> test_stateless_vm_duplicate_mac_addr_vnic_does_not_be... >> >> Leon Goldberg >> >> ovirt-system-tests >> >> master >> 2:08 >> PM >> hc: Updating gdeploy conf to create vdo volumes >> >> Sahina Bose >> >> ovirt-system-tests >> >> master >> 12:55 >> PM >> master: add USB to the sanity VM >> Michal Skrivanek >> >> ovirt-system-tests >> >> master >> 12:54 >> PM >> Test hosted-engine cleanup >> Yedidyah Bar David >> >> ovirt-system-tests >> >> master >> 9:39 >> AM >> >> >> Nice list of patches! >> >> >> >> Feedback from the event: >> - "if we want to add many more tests to OST, and I think we do, we need >> to do some change there to allow that. Current framework is simply not >> scalable enough" >> >> >> More specific and constructive ideas are welcome. We know we want to move >> to pytest, for example. 
>> > > +1, there is a lot of things we need to improve in OST, the network suite > is a good example of going forward with PyTest, we have other idea for > improvements like splitting the suites into multiple projects using a new > feature in std-ci, > dropping the reposync file or making it optional ( actually > ovirt-demo-tool is already doing that, using release rpms ). > > If there are more ideas, we can consider doing an infra hackathon where we > can focus in improving infrastructure and moving to PyTest most of the > tests ( at least in master ) . > > >> >> - not joining the hackathon because "I'd be like an elephant in a >> porcelain shop" >> - "I'm not sure I'm OK with the flood of suites that we have - the more >> we have, the harder it is to sync and maintain but more importantly - to >> run." >> - "We can't keep adding new suite for each parameter we want to test, it >> adds overhead to monitoring, resources and maintenance." >> >> >> I tend to agree, but not sure how to elegantly solve it. >> > > We need to think on a new design for it which includes scalability and > multiple maintainers, the requirements and usage has changed significantly > over the years, > I also don't have an easy solution for it, yet at least. > > >> >> - invite wasn't clear enough. I found people on #ovirt on Freenode and on >> Red Hat IRC servers and redirected them to OFTC IRC server (my fault, >> hopefully managed to workaround it by talking to people) >> >> >> Lessons learned: >> - Calendar invites to mailing lists doesn't work well, need a different >> way to track mailing list members joining the events. >> - Invites needs to be pedantic on how to join the event, not leaving >> space for interpretation and misunderstanding. >> - We need a contribution guide to ovirt-system-test: we need to make >> people comfortable in trying to add a new test and we need to ensure that >> we won't reject a day of work because the patch doesn't match core >> contributors plannings on number of suites, resources and so on >> >> > This could have been better if the single OST maintainer could be part of > the planning and also participate and give feedback and assistance, > unfourtunately it was decided to do the hackhaton without him present. > I agree we need to improve our contribution guide, and work has started > towards it, but any specific feedback or tickets on what can be improved > will surely help. > > >> >> Agree. >> Y. >> >> - The ovirt-system-tests check patch script is not good enough. It >> triggers too many sequential suites on every single patch pushed, and fails >> due to timeout taking more than 6 hours to complete. >> >> > This is very close to be much better, we're 1-2 weeks away from > implemeting a new feature in STD CI V2 which will allow us to replace the > existing 'change resolver' and eventually run all suites in check-patch in > parallel, dramatically reducing runtime. > > >> - The way ovirt-system-test collects rpms from defined repos is not smart >> enough: it doesn't take the latest version of a given package, just the >> first found in sequential order of the repos, >> >> > Its not that its not smart, its by design, repoman uses "only-latest" > option to get the first RPM he finds from the repos, I'm pretty sure there > was a good reason behind it when it was written a few years ago, I'll try > to remember and update. > In any case, the relevant patch where this was needed had a different > problem with dynamic replacement of repos which we need to think on a > solution for. 
>> Thanks everyone who participated to the event! if you have time please continue improving ovirt-system-test even if today event is almost completed!
>>
>> --
>> SANDRO BONAZZOLA
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>> Red Hat EMEA
>> TRIED. TESTED. TRUSTED.
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>> _______________________________________________
>> Infra mailing list
>> Infra at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>
> --
> Eyal edri
> MANAGER
> RHV DevOps
> EMEA VIRTUALIZATION R&D
> Red Hat EMEA
> TRIED. TESTED. TRUSTED.
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
> _______________________________________________
> Devel mailing list
> Devel at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From michal.skrivanek at redhat.com Tue Mar 13 21:07:22 2018
From: michal.skrivanek at redhat.com (Michal Skrivanek)
Date: Tue, 13 Mar 2018 22:07:22 +0100
Subject: [ovirt-users] qemu-guest-agent on oVirt/KVM guest
In-Reply-To: <86198584-21dd-e0e1-d695-2e28db609141@starlett.lv>
References: <86198584-21dd-e0e1-d695-2e28db609141@starlett.lv>
Message-ID: <4D423276-F8D4-4ED7-BDDB-CC430416DAAA@redhat.com>

> On 13 Mar 2018, at 21:23, Andrei Verovski wrote:
>
> Hi !
> Is this daemon (qemu-guest-agent) required on oVirt/KVM guest?

Yes it is, for things like consistent live snapshots.

> Or ovirt-guest-agent is enough ?

That's just part of the functionality (more reporting, SSO).

> Thanks.
> Andrei
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

From ccox at endlessnow.com Tue Mar 13 21:47:04 2018
From: ccox at endlessnow.com (Christopher Cox)
Date: Tue, 13 Mar 2018 16:47:04 -0500
Subject: [ovirt-users] Quick question about oVirt 3.6 and vdsm log in DEBUG mode (apparently by default??)
Message-ID: <9114b59a-8ffb-77a6-366c-06d91ba5be26@endlessnow.com>

We're running oVirt 3.6 on 9 Dell Blades but with just two fairly fat fabrics, one for LAN stuff, ovirtmgmt and one for iSCSI to the storage domains.

15 VM Storage Domains
iSCSI has 4 paths going through a 40Gbit i/o blade to switch
115 VMs or thereabouts
9 VLANS, sharing an i/o blade with ovirtmgmt 40Gbit to switch
500+ virtual disks

What we are seeing more and more is that if we do an operation like expose a new LUN and configure a new storage domain, all of the hypervisors go "red triangle" and "Connecting..." and it takes a very long time (all day) to straighten out. My guess is that there's too much to look at vdsm wise, and so it's waiting a short(er) period of time for a completed response than what vdsm is going to give us, and it just cycles over and over until it just happens to work.

I'm thinking that vdsm having DEBUG enabled isn't helping the latency, but as far as I know it came this way by default. Can we safely disable DEBUG on the hypervisor hosts for vdsm? Can we do this while things are roughly in a steady state? Remember, just doing the moves could throw everything into vdsm la-la-land (actually, that might not be true, might take a new storage thing to do that).

Just thinking out loud... can we safely turn off DEBUG logging on the vdsms?
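(For reference, the knob I'm looking at is /etc/vdsm/logger.conf on each hypervisor. My assumption, from reading the file rather than from any doc, is that turning the levels down there and restarting vdsmd is the supported way, roughly:

    # /etc/vdsm/logger.conf (excerpt; section names as on our 3.6 hosts,
    # double-check yours -- this is my guess, not verified advice)
    [logger_root]
    level=WARNING        ; was DEBUG
    handlers=syslog,logfile

    [logger_vds]
    level=WARNING        ; was DEBUG
    handlers=syslog,logfile
    qualname=vds

followed by a restart of vdsmd on that host.)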
Can we do this "live" through bouncing of the vdsm if everything is "steady state"? Do you think this might help the problems we're having with storage operations? (I can see all the blades logging in iSCSI wise, but ovirt engine does the whole red triangle connecting thing, for many, many, many hours). Thanks, Christopher From nicolas.vaye at province-sud.nc Tue Mar 13 23:36:06 2018 From: nicolas.vaye at province-sud.nc (Nicolas Vaye) Date: Tue, 13 Mar 2018 23:36:06 +0000 Subject: [ovirt-users] storage domain ovirt-image-repository doesn't work In-Reply-To: References: <1520807274.18402.56.camel@province-sud.nc> Message-ID: <1520984162.6088.104.camel@province-sud.nc> Hi Idan, here are the logs requested : 2018-03-14 10:25:52,097+11 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (default task-6) [61b5b46f-0ea3-496a-af90-bf82e7d204f3] transaction rolled back 2018-03-14 10:25:52,097+11 ERROR [org.ovirt.engine.core.bll.storage.repoimage.GetImagesListQuery] (default task-6) [61b5b46f-0ea3-496a-af90-bf82e7d204f3] Failed to retrieve image list: Connection timed out (Connection timed out) 2018-03-14 10:25:57,083+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'commandCoordinator' is using 0 threads out of 10 and 10 tasks are waiting in the queue. 2018-03-14 10:25:57,083+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'default' is using 0 threads out of 1 and 5 tasks are waiting in the queue. 2018-03-14 10:25:57,083+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engine' is using 0 threads out of 500, 16 threads waiting for tasks and 0 tasks in queue. 2018-03-14 10:25:57,084+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineScheduled' is using 0 threads out of 100 and 100 tasks are waiting in the queue. 2018-03-14 10:25:57,084+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineThreadMonitoring' is using 1 threads out of 1 and 0 tasks are waiting in the queue. 2018-03-14 10:25:57,084+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5 and 4 tasks are waiting in the queue. Connection timed out seems to indicate that it doesn't use the proxy to get web access ? or a firewall issue ? 
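If it is a proxy problem on the engine side, my guess is that the engine JVM does not inherit the system proxy settings, and would need the standard Java proxy properties passed explicitly. Something like this in a new file /etc/ovirt-engine/engine.conf.d/99-proxy.conf, followed by a restart of ovirt-engine (my assumption only, i have not verified that the engine honors it, and the proxy host below is a placeholder):

    # /etc/ovirt-engine/engine.conf.d/99-proxy.conf -- hypothetical, untested:
    # pass standard Java proxy properties to the engine JVM
    ENGINE_PROPERTIES="${ENGINE_PROPERTIES} http.proxyHost=proxy.example.com http.proxyPort=3128"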
but on each ovirt node, i try to curl the url and the result is OK : curl http://glance.ovirt.org:9292/ {"versions": [{"status": "CURRENT", "id": "v2.3", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.2", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.1", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.0", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v1.1", "links": [{"href": "http://glance.ovirt.org:9292/v1/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v1.0", "links": [{"href": "http://glance.ovirt.org:9292/v1/", "rel": "self"}]}]} I don't know what is wrong !! Regards, Nicolas -------- Message initial -------- Date: Tue, 13 Mar 2018 07:25:07 +0200 Objet: Re: [ovirt-users] storage domain ovirt-image-repository doesn't work Cc: users at ovirt.org > ?: Nicolas Vaye > De: Idan Shaby > Hi Nicolas, Let me make sure that I understand what's the issue here - you click on the domain and on the Images sub tab nothing is displayed? Can you please clear your engine log, click on the ovirt-image-repository domain and attach the log to the mail? When I do it, I get the following audit log: 2018-03-13 07:19:25,983+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-86) [6af6ee81-ce9a-46b7-a371-c5c3b0c6bf2a] EVENT_ID: REFRESH_REPOSITORY_IMAGE_LIST_SUCCEEDED(998), Refresh image list succeeded for domain(s): ovirt-image-repository (All file type) Maybe you get an error there that can help us understand the problem. Regards, Idan On Mon, Mar 12, 2018 at 12:27 AM, Nicolas Vaye > wrote: Hello, i have installed one oVirt platform with 2 node and 1 HE version 4.2.1.7-1 It seem to work fine, but i have issue with the ovirt-image-repository. Impossible to get the list of available images for this domain : [cid:1520807274.29800.1.camel at province-sud.nc] My cluster is on a private network, so there is a proxy to get internet access. I have tried with a specific proxy configuration on each node (https://www.server-world.info/en/note?os=CentOS_7&p=squid&f=2) so it's a success with yum update, wget or curl with http://glance.ovirt.org:9292/, but nothing in the webui for the ovirt-image-repository domain. I have tried another test with a transparent proxy and the result is the same : success with yum update, wget or curl with http://glance.ovirt.org:9292/, but nothing in the webui for the ovirt-image-repository domain. I don't know where is the specific log for this technical part. Can i have help for this issue. Thanks. 
Nicolas VAYE DSI - Noum?a NEW CALEDONIA _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users From nicolas.vaye at province-sud.nc Tue Mar 13 23:44:11 2018 From: nicolas.vaye at province-sud.nc (Nicolas Vaye) Date: Tue, 13 Mar 2018 23:44:11 +0000 Subject: [ovirt-users] storage domain ovirt-image-repository doesn't work In-Reply-To: <1520984162.6088.104.camel@province-sud.nc> References: <1520807274.18402.56.camel@province-sud.nc> <1520984162.6088.104.camel@province-sud.nc> Message-ID: <1520984648.6088.106.camel@province-sud.nc> the logs during the test of the ovirt-image-repository provider : 2018-03-14 10:39:43,337+11 INFO [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand] (default task-17) [6c8c6a9f-2c24-4a77-af75-47352c6df887] Running command: TestProviderConnectivityCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_POOL with role type ADMIN 2018-03-14 10:41:30,465+11 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (default task-27) [42cb88a3-2614-4aa9-a3bf-b56102a83c35] transaction rolled back 2018-03-14 10:41:30,465+11 ERROR [org.ovirt.engine.core.bll.storage.repoimage.GetImagesListQuery] (default task-27) [42cb88a3-2614-4aa9-a3bf-b56102a83c35] Failed to retrieve image list: Connection timed out (Connection timed out) 2018-03-14 10:41:50,560+11 ERROR [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand] (default task-17) [6c8c6a9f-2c24-4a77-af75-47352c6df887] Command 'org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand' failed: EngineException: (Failed with error PROVIDER_FAILURE and code 5050) -------- Message initial -------- Date: Tue, 13 Mar 2018 23:36:06 +0000 Objet: Re: [ovirt-users] storage domain ovirt-image-repository doesn't work Cc: users at ovirt.org > ?: ishaby at redhat.com > Reply-to: Nicolas Vaye De: Nicolas Vaye > Hi Idan, here are the logs requested : 2018-03-14 10:25:52,097+11 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (default task-6) [61b5b46f-0ea3-496a-af90-bf82e7d204f3] transaction rolled back 2018-03-14 10:25:52,097+11 ERROR [org.ovirt.engine.core.bll.storage.repoimage.GetImagesListQuery] (default task-6) [61b5b46f-0ea3-496a-af90-bf82e7d204f3] Failed to retrieve image list: Connection timed out (Connection timed out) 2018-03-14 10:25:57,083+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'commandCoordinator' is using 0 threads out of 10 and 10 tasks are waiting in the queue. 2018-03-14 10:25:57,083+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'default' is using 0 threads out of 1 and 5 tasks are waiting in the queue. 2018-03-14 10:25:57,083+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engine' is using 0 threads out of 500, 16 threads waiting for tasks and 0 tasks in queue. 2018-03-14 10:25:57,084+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineScheduled' is using 0 threads out of 100 and 100 tasks are waiting in the queue. 
2018-03-14 10:25:57,084+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineThreadMonitoring' is using 1 threads out of 1 and 0 tasks are waiting in the queue. 2018-03-14 10:25:57,084+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5 and 4 tasks are waiting in the queue. Connection timed out seems to indicate that it doesn't use the proxy to get web access ? or a firewall issue ? but on each ovirt node, i try to curl the url and the result is OK : curl http://glance.ovirt.org:9292/ {"versions": [{"status": "CURRENT", "id": "v2.3", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.2", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.1", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.0", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v1.1", "links": [{"href": "http://glance.ovirt.org:9292/v1/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v1.0", "links": [{"href": "http://glance.ovirt.org:9292/v1/", "rel": "self"}]}]} I don't know what is wrong !! Regards, Nicolas -------- Message initial -------- Date: Tue, 13 Mar 2018 07:25:07 +0200 Objet: Re: [ovirt-users] storage domain ovirt-image-repository doesn't work Cc: users at ovirt.org %3e>> ?: Nicolas Vaye > De: Idan Shaby > Hi Nicolas, Let me make sure that I understand what's the issue here - you click on the domain and on the Images sub tab nothing is displayed? Can you please clear your engine log, click on the ovirt-image-repository domain and attach the log to the mail? When I do it, I get the following audit log: 2018-03-13 07:19:25,983+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-86) [6af6ee81-ce9a-46b7-a371-c5c3b0c6bf2a] EVENT_ID: REFRESH_REPOSITORY_IMAGE_LIST_SUCCEEDED(998), Refresh image list succeeded for domain(s): ovirt-image-repository (All file type) Maybe you get an error there that can help us understand the problem. Regards, Idan On Mon, Mar 12, 2018 at 12:27 AM, Nicolas Vaye > wrote: Hello, i have installed one oVirt platform with 2 node and 1 HE version 4.2.1.7-1 It seem to work fine, but i have issue with the ovirt-image-repository. Impossible to get the list of available images for this domain : [cid:1520807274.29800.1.camel at province-sud.nc] My cluster is on a private network, so there is a proxy to get internet access. I have tried with a specific proxy configuration on each node (https://www.server-world.info/en/note?os=CentOS_7&p=squid&f=2) so it's a success with yum update, wget or curl with http://glance.ovirt.org:9292/, but nothing in the webui for the ovirt-image-repository domain. I have tried another test with a transparent proxy and the result is the same : success with yum update, wget or curl with http://glance.ovirt.org:9292/, but nothing in the webui for the ovirt-image-repository domain. I don't know where is the specific log for this technical part. Can i have help for this issue. Thanks. 
Nicolas VAYE DSI - Noum?a NEW CALEDONIA _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users From nicolas.vaye at province-sud.nc Wed Mar 14 01:30:31 2018 From: nicolas.vaye at province-sud.nc (Nicolas Vaye) Date: Wed, 14 Mar 2018 01:30:31 +0000 Subject: [ovirt-users] Assistance needed... In-Reply-To: <1520973318.6088.97.camel@province-sud.nc> References: <1520973318.6088.97.camel@province-sud.nc> Message-ID: <1520991028.6088.108.camel@province-sud.nc> I did the installation with windows 7 and i have installed the ovirt-tools-setup via the iso and everything is OK. The simple way is to install ovirt-guest-tools-iso on the hosted-engine and then get the iso to put on your ISO domain. After that just put this iso on the cdrom of the windows 7 VM and execute ovirt-guest-tools-setup. Regards, Nicolas VAYE -------- Message initial -------- Date: Tue, 13 Mar 2018 20:35:23 +0000 Objet: Re: [ovirt-users] Assistance needed... ?: users at ovirt.org >, NasrumMinallah9 at hotmail.com > Reply-to: Nicolas Vaye De: Nicolas Vaye > Hi, have you ever seen this documentation ? https://www.ovirt.org/documentation/internal/guest-agent/understanding-guest-agents-and-other-tools/ You can see the link for windows 7 guest agent : https://community.redhat.com/blog/2015/05/how-to-install-and-use-ovirts-windows-guest-tools/ May be it can help you. Regards, Nicolas VAYE -------- Message initial -------- Date: Mon, 12 Mar 2018 18:00:08 +0000 Objet: Re: [ovirt-users] Assistance needed... ?: users at ovirt.org %3e>> De: Nasrum Minallah Manzoor > Hi, Can anyone assist me in getting vnc native console through ovirt engine to my guest machine(window 7). Thanks. From: Nasrum Minallah Manzoor Sent: 12 March 2018 3:40 PM To: users at ovirt.org Cc: 'junaid8756 at gmail.com' > Subject: Assistance needed... Hi, I need assistance regarding encircled in red in the attached! How can I remove the error ?The latest guest agent needs to be installed and running on the guest?. Else everything is working fine! Kindly response as soon as possible! Regards, _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users From hariprasanth.l at msystechnologies.com Wed Mar 14 02:34:36 2018 From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan) Date: Wed, 14 Mar 2018 02:34:36 +0000 Subject: [ovirt-users] Status in oVirt In-Reply-To: References: Message-ID: Ya Thanks, your help is appreciated. On Wed, 14 Mar 2018 at 1:14 AM, Roy Golan wrote: > > > On Tue, Mar 13, 2018, 2:26 PM Hari Prasanth Loganathan < > hariprasanth.l at msystechnologies.com> wrote: > >> Hi Guys, >> >> What is the best way to get the oVirt status like >> 1) Apache web server is running, >> 2) Jboss server is running, >> 3) postGreSQL server is running >> >> Thanks, >> Hari >> >>> >>> > Check the health url: > > https://..../ovirt-engine/services/health > > There are more elaborated ways, but it's a good start. Works for you? > > DISCLAIMER >> >> The information in this e-mail is confidential and may be subject to >> legal privilege. It is intended solely for the addressee. Access to this >> e-mail by anyone else is unauthorized. 
If you have received this >> communication in error, please address with the subject heading "Received >> in error," send to it at msystechnologies.com, then delete the e-mail and >> destroy any copies of it. If you are not the intended recipient, any >> disclosure, copying, distribution or any action taken or omitted to be >> taken in reliance on it, is prohibited and may be unlawful. The views, >> opinions, conclusions and other information expressed in this electronic >> mail and any attachments are not given or endorsed by the company unless >> otherwise indicated by an authorized representative independent of this >> message. >> MSys cannot guarantee that e-mail communications are secure or >> error-free, as information could be intercepted, corrupted, amended, lost, >> destroyed, arrive late or incomplete, or contain viruses, though all >> reasonable precautions have been taken to ensure no viruses are present in >> this e-mail. As our company cannot accept responsibility for any loss or >> damage arising from the use of this e-mail or attachments we recommend that >> you subject these to your virus checking procedures prior to use >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > -- DISCLAIMER The information in this e-mail is confidential and may be subject to legal privilege. It is intended solely for the addressee. Access to this e-mail by anyone else is unauthorized. If you have received this communication in error, please address with the subject heading "Received in error," send to it at msystechnologies.com, then delete the e-mail and destroy any copies of it. If you are not the intended recipient, any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it, is prohibited and may be unlawful. The views, opinions, conclusions and other information expressed in this electronic mail and any attachments are not given or endorsed by the company unless otherwise indicated by an authorized representative independent of this message. MSys cannot guarantee that e-mail communications are secure or error-free, as information could be intercepted, corrupted, amended, lost, destroyed, arrive late or incomplete, or contain viruses, though all reasonable precautions have been taken to ensure no viruses are present in this e-mail. As our company cannot accept responsibility for any loss or damage arising from the use of this e-mail or attachments we recommend that you subject these to your virus checking procedures prior to use -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.vaye at province-sud.nc Wed Mar 14 03:17:16 2018 From: nicolas.vaye at province-sud.nc (Nicolas Vaye) Date: Wed, 14 Mar 2018 03:17:16 +0000 Subject: [ovirt-users] change CD not working In-Reply-To: References: Message-ID: <1520997431.6088.113.camel@province-sud.nc> Hi, i have had the same issue today with a windows 2012 standard R2 x64. Impossible to get cdrom suddenly. stop/restart vm don't change anything. for my part, i 'have 2 ovirt node with HE in version 4.2.1.7-1. The only tricks to resolve that (for me) is to stop the vm, detach the hard disk, create new vm, reattach the hard disk, and everything is fine again. The issue has not been identified and resolved but if that would help you. I join my engine.log and vdsm.log in attachment. 
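In case it helps anyone scripting around this, ejecting the CD through the Python SDK should look roughly like the following. It is a sketch based on the ovirtsdk4 examples; the VM name and credentials are placeholders, and i have not tested it against a stuck VM:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url="https://engine.example.com/ovirt-engine/api",
        username="admin@internal",
        password="secret",
        insecure=True,  # use ca_file=... in a real setup
    )
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search="name=myvm")[0]
    cdroms_service = vms_service.vm_service(vm.id).cdroms_service()
    cdrom = cdroms_service.list()[0]
    # An empty file id ejects the CD; current=True applies the change to
    # the running VM instead of only the next boot.
    cdroms_service.cdrom_service(cdrom.id).update(
        cdrom=types.Cdrom(file=types.File(id="")),
        current=True,
    )
    connection.close()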
Regards,
Nicolas VAYE

-------- Original message --------
Date: Tue, 13 Mar 2018 20:32:12 +0100
Subject: Re: [ovirt-users] change CD not working
Cc: users
To: Junaid Jadoon
From: Michal Skrivanek

> On 13 Mar 2018, at 15:19, Junaid Jadoon wrote:
> hi, when i tried to change CD within a Windows VM and getting following error message.
> Ovirt engine and node version are 4.2.

Hi, a more concrete version would help, but still without logs from host and engine it's hard to say anything. Please add that.
Thanks, michal

> "Error while executing action Change CD: Failed to perform "Change CD" operation, CD might be still in use by the VM.
> Please try to manually detach the CD from withing the VM:
> 1. Log in to the VM
> 2 For Linux VMs, un-mount the CD using umount command;
> For Windows VMs, right click on the CD drive and click 'Eject';"
> Initially its working fine suddenly it giving above error.
> please help me out
> Regards,
> Junaid

_______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users
_______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users

-------------- next part -------------- A non-text attachment was scrubbed... Name: engine.log.xz Type: application/x-xz Size: 54484 bytes Desc: engine.log.xz URL:
-------------- next part -------------- A non-text attachment was scrubbed... Name: vdsm.log.4.xz Type: application/x-xz Size: 368448 bytes Desc: vdsm.log.4.xz URL:

From dhy336 at sina.com Wed Mar 14 03:15:45 2018
From: dhy336 at sina.com (dhy336 at sina.com)
Date: Wed, 14 Mar 2018 11:15:45 +0800
Subject: [ovirt-users] ovirt-engine add host failed
Message-ID: <20180314031545.AF2387200CF@webmail.sinamail.sina.com.cn>

I added a host to ovirt-engine, but SetupNetworks failed and server.log has some errors. I do not know how to fix it; could someone give me some advice?

-------------- next part -------------- An HTML attachment was scrubbed... URL:
-------------- next part -------------- A non-text attachment was scrubbed... Name: server.log Type: application/octet-stream Size: 49901 bytes Desc: not available URL:
-------------- next part -------------- A non-text attachment was scrubbed... Name: engine.log Type: application/octet-stream Size: 161798 bytes Desc: not available URL:

From recreationh at gmail.com Wed Mar 14 03:59:20 2018
From: recreationh at gmail.com (Terry hey)
Date: Wed, 14 Mar 2018 11:59:20 +0800
Subject: [ovirt-users] Cannot use virt-viewer to open VM console
In-Reply-To: References: Message-ID:

I downloaded the virt-viewer 6.0 msi from this website: https://virt-manager.org/download/ and chose Win x86 MSI (gpg) or Win x64 MSI (gpg). Am I right?

Regards,

2018-03-12 17:52 GMT+08:00 Yedidyah Bar David :
> On Mon, Mar 12, 2018 at 4:32 AM, Terry hey wrote:
> > Dear all,
> > I would like to ask which version of virt-viewer are you using?
> > I downloaded virt-viewer 6.0.msi and installed.
> > But i could not open VM console( i have set the graphic protocol is SPICE).
> > It shows the following error.
> > "At least Remote Viewer version 2.0-160 is required to setup this
> > connection, see
> > http://www.ovirt.org/documentation/admin-guide/virt/console-client-resources
> > for details"
> > Also, i can i verify the version of virt-viewer that i have installed?
>
> Please see:
> https://bugzilla.redhat.com/show_bug.cgi?id=1285883
> http://lists.ovirt.org/pipermail/users/2017-June/thread.html#82343
>
> Are you sure you use the 6.0 msi? I think it should work.
From dhy336 at sina.com  Wed Mar 14 03:15:45 2018
From: dhy336 at sina.com (dhy336 at sina.com)
Date: Wed, 14 Mar 2018 11:15:45 +0800
Subject: [ovirt-users] ovirt-engine add host failed
Message-ID: <20180314031545.AF2387200CF@webmail.sinamail.sina.com.cn>

I added a host to ovirt-engine and SetupNetworks failed; server.log has some errors. I do not know how to fix it -- could someone give me some advice?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: server.log
Type: application/octet-stream
Size: 49901 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: engine.log
Type: application/octet-stream
Size: 161798 bytes
Desc: not available
URL: 

From recreationh at gmail.com  Wed Mar 14 03:59:20 2018
From: recreationh at gmail.com (Terry hey)
Date: Wed, 14 Mar 2018 11:59:20 +0800
Subject: [ovirt-users] Cannot use virt-viewer to open VM console
In-Reply-To: 
References: 
Message-ID: 

I downloaded the virt-viewer 6.0 MSI from this website
https://virt-manager.org/download/
and chose Win x86 MSI (gpg) or Win x64 MSI (gpg).
Am I right?

Regards,

2018-03-12 17:52 GMT+08:00 Yedidyah Bar David :

> On Mon, Mar 12, 2018 at 4:32 AM, Terry hey wrote:
> > Dear all,
> >
> > I would like to ask which version of virt-viewer are you using?
> > I downloaded virt-viewer 6.0.msi and installed.
> > But I could not open the VM console (I have set the graphics protocol to SPICE).
> > It shows the following error.
> >
> > "At least Remote Viewer version 2.0-160 is required to setup this
> > connection, see
> > http://www.ovirt.org/documentation/admin-guide/virt/console-client-resources
> > for details"
> >
> > Also, how can I verify the version of virt-viewer that I have installed?
>
> Please see:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1285883
> http://lists.ovirt.org/pipermail/users/2017-June/thread.html#82343
>
> Are you sure you use the 6.0 msi? I think it should work.
>
> Best regards,
> --
> Didi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
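On Windows, the installed Remote Viewer version shows up under Help -> About; it can also be queried on the command line. A quick sketch (the Windows install path is an assumption -- adjust to wherever the MSI put it):

    # Linux
    remote-viewer --version
    # Windows (from cmd.exe)
    "C:\Program Files\VirtViewer v6.0-256\bin\remote-viewer.exe" --version

If the reported version really is 6.0, it is well above the required 2.0-160, so an older stray installation would be the thing to look for.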
From nicolas.vaye at province-sud.nc  Wed Mar 14 04:04:18 2018
From: nicolas.vaye at province-sud.nc (Nicolas Vaye)
Date: Wed, 14 Mar 2018 04:04:18 +0000
Subject: [ovirt-users] improvement for web ui during the create template stage.
Message-ID: <1521000255.6088.116.camel@province-sud.nc>

Hi,
I have 2 oVirt nodes with a hosted engine, version 4.2.1.7-1.

If I make a template from a VM's snapshot in the web UI, there is a form to enter several parameters.
[cid:1521000255.509.1.camel at province-sud.nc]

If the name of the template is missing and we click on the OK button, there is a highlighting red border on the name field to indicate the problem.
If I enter a long name for the template and click on the OK button, nothing happens, and there is no highlight or error message to indicate that there is a problem with the long name.
Could you improve that?

Thanks,

Regards,

Nicolas VAYE
-------------- next part --------------
A non-text attachment was scrubbed...
Name: unknown-FDK2FZ
Type: image/png
Size: 35659 bytes
Desc: unknown-FDK2FZ
URL: 

From junaid8756 at gmail.com  Wed Mar 14 05:00:54 2018
From: junaid8756 at gmail.com (Junaid Jadoon)
Date: Wed, 14 Mar 2018 10:00:54 +0500
Subject: [ovirt-users] change CD not working
In-Reply-To: 
References: 
Message-ID: 

Hi,
I have attached the log files.

On Tue, Mar 13, 2018 at 7:19 PM, Junaid Jadoon wrote:

> hi,
> when I tried to change the CD within a Windows VM I got the following error
> message.
>
> Ovirt engine and node versions are 4.2.
>
> "Error while executing action Change CD: Failed to perform "Change CD" operation, CD might be still in use by the VM.
> Please try to manually detach the CD from within the VM:
> 1. Log in to the VM
> 2. For Linux VMs, un-mount the CD using the umount command;
>    For Windows VMs, right click on the CD drive and click 'Eject';"
>
> Initially it was working fine; suddenly it gives the above error.
>
> please help me out
>
> Regards,
> Junaid
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: logs folder.rar
Type: application/octet-stream
Size: 474283 bytes
Desc: not available
URL: 

From NasrumMinallah9 at hotmail.com  Tue Mar 13 10:06:15 2018
From: NasrumMinallah9 at hotmail.com (Nasrum Minallah Manzoor)
Date: Tue, 13 Mar 2018 10:06:15 +0000
Subject: [ovirt-users] Issue...
Message-ID: 

Hi,
I am facing an issue: when I click on the "Change CD" option in oVirt's engine, it doesn't work. It was working fine before, and I don't know how it stopped working!
Any suggestions, guys!

Regards,
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ykaul at redhat.com  Wed Mar 14 06:34:16 2018
From: ykaul at redhat.com (Yaniv Kaul)
Date: Wed, 14 Mar 2018 08:34:16 +0200
Subject: [ovirt-users] Quick question about oVirt 3.6 and vdsm log in DEBUG mode (apparently by default??)
In-Reply-To: <9114b59a-8ffb-77a6-366c-06d91ba5be26@endlessnow.com>
References: <9114b59a-8ffb-77a6-366c-06d91ba5be26@endlessnow.com>
Message-ID: 

On Mar 13, 2018 11:48 PM, "Christopher Cox" wrote:

We're running oVirt 3.6 on 9 Dell blades, but with just two fairly fat
fabrics: one for LAN stuff including ovirtmgmt, and one for iSCSI to the
storage domains.

15 VM storage domains
iSCSI has 4 paths going through a 40Gbit i/o blade to switch
115 VMs or thereabouts
9 VLANs, sharing an i/o blade with ovirtmgmt, 40Gbit to switch
500+ virtual disks

What we are seeing more and more is that if we do an operation like expose a
new LUN and configure a new storage domain, all of the hypervisors go
"red triangle" and "Connecting..." and it takes a very long time (all day)
to straighten out.

My guess is that there's too much to look at vdsm-wise, so the engine waits a
short(er) period of time for a completed response than what vdsm is going to
give us, and it just cycles over and over until it happens to work.

Please upgrade. We have solved issues and improved performance and scale
substantially since 3.6.
You may also wish to apply lvm filters.
Y.

I'm thinking that vdsm having DEBUG enabled isn't helping the latency, but as
far as I know it came this way by default.

Can we safely disable DEBUG on the hypervisor hosts for vdsm? Can we do this
while things are roughly in a steady state?

Remember, just doing the moves could throw everything into vdsm la-la-land
(actually, that might not be true; it might take a new storage operation to do
that).

Just thinking out loud... can we safely turn off DEBUG logging on the vdsms?
Can we do this "live" through bouncing of the vdsm if everything is "steady
state"? Do you think this might help the problems we're having with storage
operations? (I can see all the blades logging in iSCSI-wise, but the ovirt
engine does the whole red-triangle connecting thing, for many, many, many
hours.)

Thanks,
Christopher
_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
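On the hosts, vdsm's verbosity is just logging configuration and can be lowered without touching running VMs. A sketch, assuming the stock /etc/vdsm/logger.conf layout (back the file up first; the exact section and key names can differ between vdsm builds, so inspect the file before running the sed):

    # On each hypervisor: lower every DEBUG logger to INFO
    cp /etc/vdsm/logger.conf /etc/vdsm/logger.conf.bak
    sed -i 's/DEBUG/INFO/g' /etc/vdsm/logger.conf
    systemctl restart vdsmd

Restarting vdsmd does not stop running VMs, but it is still safest to do one host at a time while the cluster is in a steady state.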
From kstatsenko at gmail.com  Mon Mar 12 18:13:44 2018
From: kstatsenko at gmail.com (Konstantin Statsenko)
Date: Mon, 12 Mar 2018 21:13:44 +0300
Subject: [ovirt-users] 4.2 upgrade question
In-Reply-To: 
References: 
Message-ID: 

Well, it does seem cleaner. Thank you.

2018-03-12 15:33 GMT+03:00 Yedidyah Bar David :

> On Mon, Mar 12, 2018 at 1:37 PM, KSNull Zero wrote:
> > Hello!
> > Currently we run 4.1.9 and try to upgrade to the latest 4.2 release.
> > Our DB server is on a separate machine and runs PostgreSQL 9.2.23.
> >
> > During upgrade the following error occurs:
> > [WARNING] This release requires PostgreSQL server 9.5.9 but the engine
> > database is currently hosted on PostgreSQL server 9.2.23
> > [ ERROR ] Please upgrade the PostgreSQL instance that serves the engine
> > database to 9.5.9 and retry.
> >
> > Ok, so we need to upgrade PostgreSQL.
> > The question is - do we need to have exactly the 9.5.9 version of PostgreSQL?
>
> '9.5.9' is not hard-coded, but is the version shipped by SCL [1].
>
> The CentOS 7 engine build pulls that in and uses it, for both client (always)
> and server (if configured to).
>
> This is the only combination that's tested and known to work. To use this
> on your remote PG machine, add the SCL repos there and use them. You will need
> to upgrade your database to the new version, similarly to what engine-setup
> does if it's a local db. I do not think we have docs for this, see e.g.
> [2].
>
> If you want to use some other (non-SCL) build of PG also on the client,
> I think it should not be too hard to make everything work, as this is
> what we do in the fedora build, but I didn't try this myself, nor know
> about anyone that did.
>
> It's probably enough to remove the file:
>
> /etc/ovirt-engine-setup.env.d/10-setup-scl-postgres-95.env
>
> If you go this way, note that you'll have to repeat removing it per
> each upgrade. Alternatively, you can add your own file there, with
> a later number, clearing the variables set in this file, e.g.:
>
> # cat << __EOF__ > /etc/ovirt-engine-setup.env.d/99-unset-postgresql.env
> unset RHPOSTGRESQL95BASE
> unset RHPOSTGRESQL95DATA
> unset sclenv
> unset POSTGRESQLENV
> __EOF__
>
> And also install the postgresql client/libraries/etc matching what you
> have on your server.
>
> [1] https://www.softwarecollections.org/en/scls/rhscl/rh-postgresql95/
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1498351#c12
>
> > Because if we upgrade PostgreSQL to the latest available 9.5.12 the same
> > error occurs saying that client and server version mismatched and upgrade
> > terminates.
> > Thank you.
>
> Best regards,
> --
> Didi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
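For the remote-DB case, the server-side upgrade can mirror what engine-setup does locally: install the rh-postgresql95 collection, initialize a new data directory, and run pg_upgrade from the 9.2 cluster. A rough sketch, assuming CentOS 7 and the default SCL paths (this is not a documented procedure, just the usual SCL layout; test against a copy of the data first):

    yum install -y centos-release-scl
    yum install -y rh-postgresql95 rh-postgresql95-postgresql-server rh-postgresql95-postgresql-contrib
    systemctl stop postgresql
    # Initialize the new 9.5 cluster
    su - postgres -c 'scl enable rh-postgresql95 -- initdb -D /var/opt/rh/rh-postgresql95/lib/pgsql/data'
    # Migrate the old 9.2 data into it
    su - postgres -c 'scl enable rh-postgresql95 -- pg_upgrade \
        --old-bindir=/usr/bin \
        --new-bindir=/opt/rh/rh-postgresql95/root/usr/bin \
        --old-datadir=/var/lib/pgsql/data \
        --new-datadir=/var/opt/rh/rh-postgresql95/lib/pgsql/data'
    systemctl enable --now rh-postgresql95-postgresql

Remember to carry the pg_hba.conf/postgresql.conf customizations (listen_addresses, the md5 entries for the engine) over into the new data directory.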
vdsm debug info : 2018-03-14 15:39:46,163+0800 INFO (jsonrpc/1) [DynamicBridge] ----------------duhy test--------------------- (Bridge:189)2018-03-14 15:39:46,163+0800 INFO (jsonrpc/1) [DynamicBridge] cmd = StoragePool_connectStorageServer (Bridge:190)2018-03-14 15:39:46,164+0800 INFO (jsonrpc/1) [vdsm.api] START connectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'a16bcfa2-fd7c-49f9-84b2-3a905b4f4135', u'connection': u'192.168.122.134:/home/exports/vmstorage', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:192.168.122.134,45460, flow_id=61f0b27f, task_id=ecd65a2e-3687-49b6-a4d6-11b9fce74775 (api:46)2018-03-14 15:39:46,190+0800 INFO (jsonrpc/1) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'a16bcfa2-fd7c-49f9-84b2-3a905b4f4135'}]} from=::ffff:192.168.122.134,45460, flow_id=61f0b27f, task_id=ecd65a2e-3687-49b6-a4d6-11b9fce74775 (api:52)2018-03-14 15:39:46,191+0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.03 seconds (__init__:630)2018-03-14 15:39:46,204+0800 INFO (jsonrpc/3) [DynamicBridge] ----------------duhy test--------------------- (Bridge:189)2018-03-14 15:39:46,205+0800 INFO (jsonrpc/3) [DynamicBridge] cmd = StorageDomain_create (Bridge:190)2018-03-14 15:39:46,205+0800 INFO (jsonrpc/3) [vdsm.api] START createStorageDomain(storageType=1, sdUUID=u'80586557-820f-4e12-9763-11beed4259c8', domainName=u'vmstorage', typeSpecificArg=u'192.168.122.134:/home/exports/vmstorage', domClass=1, domVersion=u'4', options=None) from=::ffff:192.168.122.134,45460, flow_id=61f0b27f, task_id=4b38bebc-5e9f-49cf-aafd-0de9155abe06 (api:46)2018-03-14 15:39:46,206+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] -------------------duhy test-------------- (sdc:106)2018-03-14 15:39:46,206+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] /rhev/data-center (sdc:107)2018-03-14 15:39:46,206+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] ------------------------------------------- (sdc:119)2018-03-14 15:39:46,206+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] {} (sdc:120)2018-03-14 15:39:46,207+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] sdUUID not in __inProgress (sdc:128)2018-03-14 15:39:46,207+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] set([]) (sdc:129)2018-03-14 15:39:46,668+0800 INFO (itmap/0) [IOProcessClient] Starting client ioprocess-1 (__init__:330)2018-03-14 15:39:46,693+0800 INFO (ioprocess/31894) [IOProcess] Starting ioprocess (__init__:452)2018-03-14 15:39:46,704+0800 INFO (jsonrpc/3) [storage.StorageDomain] sdUUID=80586557-820f-4e12-9763-11beed4259c8 domainName=vmstorage remotePath=192.168.122.134:/home/exports/vmstorage domClass=1 (nfsSD:70)2018-03-14 15:39:46,745+0800 INFO (jsonrpc/3) [IOProcessClient] Starting client ioprocess-2 (__init__:330)2018-03-14 15:39:46,780+0800 INFO (ioprocess/31903) [IOProcess] Starting ioprocess (__init__:452)2018-03-14 15:39:47,108+0800 INFO (jsonrpc/3) [storage.xlease] Formatting index for lockspace u'80586557-820f-4e12-9763-11beed4259c8' (version=1) (xlease:641)2018-03-14 15:39:47,489+0800 INFO (jsonrpc/3) [storage.HSM] knownSDs: {80586557-820f-4e12-9763-11beed4259c8: vdsm.storage.nfsSD.findDomain} (hsm:2581)2018-03-14 15:39:47,490+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] --------------manuallyAddDomain------------ (sdc:204)2018-03-14 15:39:47,490+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] 
(sdc:205)2018-03-14 15:39:47,490+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] 80586557-820f-4e12-9763-11beed4259c8 (sdc:206)2018-03-14 15:39:47,490+0800 INFO (jsonrpc/3) [vdsm.api] FINISH createStorageDomain return=None from=::ffff:192.168.122.134,45460, flow_id=61f0b27f, task_id=4b38bebc-5e9f-49cf-aafd-0de9155abe06 (api:52)2018-03-14 15:39:47,491+0800 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call StorageDomain.create succeeded in 1.29 seconds (__init__:630)2018-03-14 15:39:47,508+0800 INFO (jsonrpc/4) [DynamicBridge] ----------------duhy test--------------------- (Bridge:189)2018-03-14 15:39:47,509+0800 INFO (jsonrpc/4) [DynamicBridge] cmd = StorageDomain_create (Bridge:190)2018-03-14 15:39:47,510+0800 INFO (jsonrpc/4) [vdsm.api] START createStorageDomain(storageType=1, sdUUID=u'80586557-820f-4e12-9763-11beed4259c8', domainName=u'vmstorage', typeSpecificArg=u'192.168.122.134:/home/exports/vmstorage', domClass=1, domVersion=u'4', options=None) from=::ffff:192.168.122.134,45460, flow_id=61f0b27f, task_id=e419bf88-6f4e-43ff-a6ae-0153b3a98c65 (api:46)2018-03-14 15:39:47,510+0800 INFO (jsonrpc/4) [storage.StorageDomainCache] -------------------duhy test-------------- (sdc:106)2018-03-14 15:39:47,511+0800 INFO (jsonrpc/4) [storage.StorageDomainCache] /rhev/data-center (sdc:107)2018-03-14 15:39:47,511+0800 INFO (jsonrpc/4) [storage.StorageDomainCache] ------------------------------------------- (sdc:119)2018-03-14 15:39:47,511+0800 INFO (jsonrpc/4) [storage.StorageDomainCache] {u'80586557-820f-4e12-9763-11beed4259c8': } (sdc:120)2018-03-14 15:39:47,511+0800 INFO (jsonrpc/4) [storage.StorageDomainCache] domain is not None (sdc:123)2018-03-14 15:39:47,511+0800 INFO (jsonrpc/4) [storage.StorageDomainCache] (sdc:124)2018-03-14 15:39:47,512+0800 INFO (jsonrpc/4) [vdsm.api] FINISH createStorageDomain error=Storage domain already exists: (u'80586557-820f-4e12-9763-11beed4259c8',) from=::ffff:192.168.122.134,45460, flow_id=61f0b27f, task_id=e419bf88-6f4e-43ff-a6ae-0153b3a98c65 (api:50)2018-03-14 15:39:47,512+0800 ERROR (jsonrpc/4) [storage.TaskManager.Task] (Task='e419bf88-6f4e-43ff-a6ae-0153b3a98c65') Unexpected error (task:875)Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File "", line 2, in createStorageDomain File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method ret = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2550, in createStorageDomain self.validateNonDomain(sdUUID) File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 322, in validateNonDomain raise se.StorageDomainAlreadyExists(sdUUID)StorageDomainAlreadyExists: Storage domain already exists: (u'80586557-820f-4e12-9763-11beed4259c8',)2018-03-14 15:39:47,515+0800 INFO (jsonrpc/4) [storage.TaskManager.Task] (Task='e419bf88-6f4e-43ff-a6ae-0153b3a98c65') aborting: Task is aborted: "Storage domain already exists: (u'80586557-820f-4e12-9763-11beed4259c8',)" - code 365 (task:1181)2018-03-14 15:39:47,516+0800 ERROR (jsonrpc/4) [storage.Dispatcher] FINISH createStorageDomain error=Storage domain already exists: (u'80586557-820f-4e12-9763-11beed4259c8',) (dispatcher:82)2018-03-14 15:39:47,516+0800 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StorageDomain.create failed (error 365) in 0.01 seconds (__init__:630)2018-03-14 15:39:47,973+0800 INFO (jsonrpc/5) [DynamicBridge] ----------------duhy test--------------------- (Bridge:189)2018-03-14 15:39:47,973+0800 INFO 
(jsonrpc/5) [DynamicBridge] cmd = StoragePool_disconnectStorageServer (Bridge:190)2018-03-14 15:39:47,974+0800 INFO (jsonrpc/5) [vdsm.api] START disconnectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'a16bcfa2-fd7c-49f9-84b2-3a905b4f4135', u'connection': u'192.168.122.134:/home/exports/vmstorage', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:192.168.122.134,45460, flow_id=fc7e65db-8a83-46c5-b38a-12c3837bc453, task_id=250f443a-40e9-4a91-83b4-799b515da2cd (api:46)2018-03-14 15:39:47,975+0800 INFO (jsonrpc/5) [storage.Mount] unmounting /rhev/data-center/mnt/192.168.122.134:_home_exports_vmstorage (mount:213)2018-03-14 15:39:48,061+0800 INFO (jsonrpc/6) [DynamicBridge] ----------------duhy test--------------------- (Bridge:189)2018-03-14 15:39:48,062+0800 INFO (jsonrpc/6) [DynamicBridge] cmd = Host_getAllVmStats (Bridge:190)2018-03-14 15:39:48,062+0800 INFO (jsonrpc/6) [api.host] START getAllVmStats() from=::1,33990 (api:46)2018-03-14 15:39:48,272+0800 INFO (jsonrpc/6) [root] /usr/libexec/vdsm/hooks/after_get_all_vm_stats/10_fakevmstats: rc=0 err= (hooks:109)2018-03-14 15:39:48,273+0800 INFO (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,33990 (api:52)2018-03-14 15:39:48,274+0800 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.21 seconds (__init__:630)2018-03-14 15:39:48,412+0800 INFO (jsonrpc/5) [vdsm.api] FINISH disconnectStorageServer return={'statuslist': [{'status': 0, 'id': u'a16bcfa2-fd7c-49f9-84b2-3a905b4f4135'}]} from=::ffff:192.168.122.134,45460, flow_id=fc7e65db-8a83-46c5-b38a-12c3837bc453, task_id=250f443a-40e9-4a91-83b4-799b515da2cd (api:52)2018-03-14 15:39:48,413+0800 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StoragePool.disconnectStorageServer succeeded in 0.44 seconds (__init__:630)2018-03-14 15:39:48,431+0800 INFO (jsonrpc/7) [DynamicBridge] ----------------duhy test--------------------- (Bridge:189)2018-03-14 15:39:48,432+0800 INFO (jsonrpc/7) [DynamicBridge] cmd = StoragePool_disconnectStorageServer (Bridge:190)2018-03-14 15:39:48,432+0800 INFO (jsonrpc/7) [vdsm.api] START disconnectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'a16bcfa2-fd7c-49f9-84b2-3a905b4f4135', u'connection': u'192.168.122.134:/home/exports/vmstorage', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:192.168.122.134,45460, flow_id=fc7e65db-8a83-46c5-b38a-12c3837bc453, task_id=5014c75e-67b6-4401-a55b-480d5373f112 (api:46)2018-03-14 15:39:48,433+0800 INFO (jsonrpc/7) [storage.Mount] unmounting /rhev/data-center/mnt/192.168.122.134:_home_exports_vmstorage (mount:213)2018-03-14 15:39:48,491+0800 ERROR (jsonrpc/7) [storage.HSM] Could not disconnect from storageServer (hsm:2466)Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2462, in disconnectStorageServer conObj.disconnect() File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 387, in disconnect return self._mountCon.disconnect() File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 185, in disconnect self._mount.umount(True, True) File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 216, in umount timeout=timeout) File 
"/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in __call__ return callMethod() File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 51, in **kwargs) File "", line 2, in umount File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod raise convert_to_error(kind, result)MountError: (32, ';umount: /rhev/data-center/mnt/192.168.122.134:_home_exports_vmstorage: mountpoint not found\n')2018-03-14 15:39:48,768+0800 INFO (jsonrpc/7) [vdsm.api] FINISH disconnectStorageServer return={'statuslist': [{'status': 477, 'id': u'a16bcfa2-fd7c-49f9-84b2-3a905b4f4135'}]} from=::ffff:192.168.122.134,45460, flow_id=fc7e65db-8a83-46c5-b38a-12c3837bc453, task_id=5014c75e-67b6-4401-a55b-480d5373f112 (api:52)2018-03-14 15:39:48,769+0800 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call StoragePool.disconnectStorageServer succeeded in 0.34 seconds (__init__:630)2018-03-14 15:39:50,305+0800 INFO (jsonrpc/0) [DynamicBridge] ----------------duhy test--------------------- (Bridge:189)2018-03-14 15:39:50,306+0800 INFO (jsonrpc/0) [DynamicBridge] cmd = Host_getAllVmStats (Bridge:190)2018-03-14 15:39:50,307+0800 INFO (jsonrpc/0) [api.host] START getAllVmStats() from=::ffff:192.168.122.134,45460 (api:46)2018-03-14 15:39:50,465+0800 INFO (jsonrpc/0) [root] /usr/libexec/vdsm/hooks/after_get_all_vm_stats/10_fakevmstats: rc=0 err= (hooks:109)2018-03-14 15:39:50,467+0800 INFO (jsonrpc/0) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.122.134,45460 (api:52)2018-03-14 15:39:50,468+0800 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.16 seconds (__init__:630) I find twice StorageDomain_create ,first is sucessed , cache has this uuid, so second is failed ,because cache has this uuid. finally "Error while executing action New NFS Storage Domain: Storage domain already exists". -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ??1.png Type: image/png Size: 62970 bytes Desc: not available URL: From dhy336 at sina.com Wed Mar 14 08:31:31 2018 From: dhy336 at sina.com (dhy336 at sina.com) Date: Wed, 14 Mar 2018 16:31:31 +0800 Subject: [ovirt-users] =?gbk?q?=BB=D8=B8=B4=A3=BA_create_new_domain_failed?= Message-ID: <20180314083131.1C4D61000DA@webmail.sinamail.sina.com.cn> ----- ???? ----- ???? ????"users" ???[ovirt-users] create new domain failed ???2018?03?14? 16?01? Hi I find a issue, I create new domain failed error info is "Error while executing action New NFS Storage Domain: Storage domain already exists". 
From dhy336 at sina.com  Wed Mar 14 08:31:31 2018
From: dhy336 at sina.com (dhy336 at sina.com)
Date: Wed, 14 Mar 2018 16:31:31 +0800
Subject: [ovirt-users] Re: create new domain failed
Message-ID: <20180314083131.1C4D61000DA@webmail.sinamail.sina.com.cn>

----- Original Message -----
From: dhy336 at sina.com
To: "users"
Subject: [ovirt-users] create new domain failed
Date: 2018-03-14 16:01

Hi,

I found an issue: creating a new domain failed, and the error info is "Error while executing action New NFS Storage Domain: Storage domain already exists".

vdsm debug info:

...snip...
"/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in __call__ return callMethod() File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 51, in **kwargs) File "", line 2, in umount File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod raise convert_to_error(kind, result)MountError: (32, ';umount: /rhev/data-center/mnt/192.168.122.134:_home_exports_vmstorage: mountpoint not found\n')2018-03-14 15:39:48,768+0800 INFO (jsonrpc/7) [vdsm.api] FINISH disconnectStorageServer return={'statuslist': [{'status': 477, 'id': u'a16bcfa2-fd7c-49f9-84b2-3a905b4f4135'}]} from=::ffff:192.168.122.134,45460, flow_id=fc7e65db-8a83-46c5-b38a-12c3837bc453, task_id=5014c75e-67b6-4401-a55b-480d5373f112 (api:52)2018-03-14 15:39:48,769+0800 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call StoragePool.disconnectStorageServer succeeded in 0.34 seconds (__init__:630)2018-03-14 15:39:50,305+0800 INFO (jsonrpc/0) [DynamicBridge] ----------------duhy test--------------------- (Bridge:189)2018-03-14 15:39:50,306+0800 INFO (jsonrpc/0) [DynamicBridge] cmd = Host_getAllVmStats (Bridge:190)2018-03-14 15:39:50,307+0800 INFO (jsonrpc/0) [api.host] START getAllVmStats() from=::ffff:192.168.122.134,45460 (api:46)2018-03-14 15:39:50,465+0800 INFO (jsonrpc/0) [root] /usr/libexec/vdsm/hooks/after_get_all_vm_stats/10_fakevmstats: rc=0 err= (hooks:109)2018-03-14 15:39:50,467+0800 INFO (jsonrpc/0) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.122.134,45460 (api:52)2018-03-14 15:39:50,468+0800 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.16 seconds (__init__:630) I find twice StorageDomain_create ,first is sucessed , cache has this uuid, so second is failed ,because cache has this uuid. finally "Error while executing action New NFS Storage Domain: Storage domain already exists". _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users I debug engine, engine send twice StorageDomain_create command to vdsm, I want to know why to send twice StorageDomain_create command, Is this a bug or my enviroment is not work. 2018-03-14 04:23:01,604-04 INFO [org.ovirt.engine.core.bll.profiles.AddDiskProfileCommand] (default task-55) [4eb2816d] Running command: AddDiskProfileCommand internal: true. 
2018-03-14 04:23:01,604-04 INFO [org.ovirt.engine.core.bll.profiles.AddDiskProfileCommand] (default task-55) [4eb2816d] Running command: AddDiskProfileCommand internal: true. Entities affected : ID: 82483a27-d046-4082-9371-3c159c957191 Type: StorageAction group CREATE_STORAGE_DISK_PROFILE with role type ADMIN
2018-03-14 04:23:01,633-04 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-55) [4eb2816d] EVENT_ID: USER_ADDED_DISK_PROFILE(10,120), Disk Profile vmstorage was successfully added (User: admin at internal-authz).
2018-03-14 04:23:01,642-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-55) [4eb2816d] START, ConnectStorageServerVDSCommand(HostName = 192.168.122.54, StorageServerConnectionManagementVDSParameters:{hostId='72b8ea98-563b-4b8b-b48a-69571b17ff56', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='NFS', connectionList='[StorageServerConnections:{id='ecce7868-d8fd-4871-8a22-c5e6d1102c86', connection='192.168.122.134:/home/exports/vmstorage', iqn='null', vfsType='null', mountOptions='null', nfsVersion='AUTO', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 58ccf
2018-03-14 04:23:01,690-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-55) [4eb2816d] FINISH, ConnectStorageServerVDSCommand, return: {ecce7868-d8fd-4871-8a22-c5e6d1102c86=0}, log id: 58ccf
2018-03-14 04:23:01,695-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-55) [4eb2816d] START, ConnectStorageServerVDSCommand(HostName = 192.168.122.54, StorageServerConnectionManagementVDSParameters:{hostId='72b8ea98-563b-4b8b-b48a-69571b17ff56', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='NFS', connectionList='[StorageServerConnections:{id='ecce7868-d8fd-4871-8a22-c5e6d1102c86', connection='192.168.122.134:/home/exports/vmstorage', iqn='null', vfsType='null', mountOptions='null', nfsVersion='AUTO', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 7da62ba6
2018-03-14 04:23:01,737-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-55) [4eb2816d] FINISH, ConnectStorageServerVDSCommand, return: {ecce7868-d8fd-4871-8a22-c5e6d1102c86=0}, log id: 7da62ba6
2018-03-14 04:23:01,744-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-55) [4eb2816d] START, CreateStorageDomainVDSCommand(HostName = 192.168.122.54, CreateStorageDomainVDSCommandParameters:{hostId='72b8ea98-563b-4b8b-b48a-69571b17ff56', storageDomain='StorageDomainStatic:{name='vmstorage', id='82483a27-d046-4082-9371-3c159c957191'}', args='192.168.122.134:/home/exports/vmstorage'}), log id: 9532378
2018-03-14 04:23:01,744-04 INFO [org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer] (default task-55) [4eb2816d] ---------------------duhy test--------------------------
2018-03-14 04:23:01,744-04 INFO [org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer] (default task-55) [4eb2816d] StorageDomain.createstoragedomainID=82483a27-d046-4082-9371-3c159c957191domainType=1domainClass=1storageFormatType4
2018-03-14 04:23:02,907-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-55) [4eb2816d] FINISH, CreateStorageDomainVDSCommand, log id: 9532378
2018-03-14 04:23:02,917-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-55) [4eb2816d] START, CreateStorageDomainVDSCommand(HostName = 192.168.122.54, CreateStorageDomainVDSCommandParameters:{hostId='72b8ea98-563b-4b8b-b48a-69571b17ff56', storageDomain='StorageDomainStatic:{name='vmstorage', id='82483a27-d046-4082-9371-3c159c957191'}', args='192.168.122.134:/home/exports/vmstorage'}), log id: 705b9575
2018-03-14 04:23:02,917-04 INFO [org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer] (default task-55) [4eb2816d] ---------------------duhy test--------------------------
2018-03-14 04:23:02,917-04 INFO [org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer] (default task-55) [4eb2816d] StorageDomain.createstoragedomainID=82483a27-d046-4082-9371-3c159c957191domainType=1domainClass=1storageFormatType4
2018-03-14 04:23:02,944-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-55) [4eb2816d] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM 192.168.122.54 command CreateStorageDomainVDS failed: Storage domain already exists: (u'82483a27-d046-4082-9371-3c159c957191',)
2018-03-14 04:23:02,944-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-55) [4eb2816d] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand' return value 'StatusOnlyReturn [status=Status [code=365, message=Storage domain already exists: (u'82483a27-d046-4082-9371-3c159c957191',)]]'
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ??1.png
Type: image/png
Size: 235094 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ??1.png
Type: image/png
Size: 62970 bytes
Desc: not available
URL: 

From Joseph.Kelly at tradingscreen.com  Wed Mar 14 08:32:55 2018
From: Joseph.Kelly at tradingscreen.com (Joseph Kelly)
Date: Wed, 14 Mar 2018 08:32:55 +0000
Subject: [ovirt-users] Query about an ovirt-4.2.1 engine supporting 3.x nodes?
Message-ID: 

Hello All,

I have two hopefully easy questions regarding an ovirt-4.2.1 engine and 3.x nodes:

1) Does an ovirt-4.2.x engine support 3.x nodes? This page states:

"The cluster compatibility is set according to the version of the least capable host operating system in the cluster."
https://www.ovirt.org/documentation/upgrade-guide/chap-Post-Upgrade_Tasks/

which seems to indicate that you can run, say, a 4.2.1 engine with lower-version nodes. Is that correct?

2) And can you just upgrade the nodes directly from 3.x to 4.2.x as per these steps (see the sketch after this message)?

1. Move the node to maintenance
2. Add 4.2.x repos
3. yum update
4. reboot
5. Activate (exit maintenance)

I've looked in the release notes but wasn't able to find much detail on ovirt-node upgrades.

Thanks,
Joe.

--
J. Kelly
Infrastructure Engineer
TradingScreen
www.tradingscreen.com
Follow TradingScreen on Twitter, Facebook, or our blog, Trading Smarter

This message is intended only for the recipient(s) named above and may contain confidential information. If you are not an intended recipient, you should not review, distribute or copy this message. Please notify the sender immediately by e-mail if you have received this message in error and delete it from your system.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
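A sketch of steps 2-4 for an EL7-based node (the release RPM URL follows the usual oVirt pattern -- verify it against the 4.2 release notes before use; an EL6-based 3.6 node cannot be upgraded in place, and the cluster compatibility level is raised separately in the engine afterwards):

    # On the node, after moving it to maintenance in the engine:
    yum install -y http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
    yum update -y
    reboot
    # Then activate the node from the engine UI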
From spfma.tech at e.mail.fr  Wed Mar 14 10:44:43 2018
From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr)
Date: Wed, 14 Mar 2018 11:44:43 +0100
Subject: [ovirt-users] NFS 4.1 support and migration
Message-ID: <20180314104443.CAA09E4471@smtp01.mail.de>

Hi,

Is NFS 4.1 supported and working flawlessly in oVirt?

I would like to give it a try (for the performance with parallel transfers), but as it requires changes in my network design if I want to add new links, I want to be sure it is worth the effort.

Is there an easy way to "migrate" an NFS3 datastore to an NFS 4.1 one? The options are greyed out when the domain is up, so maybe it requires more care than just a simple click.

Regards

-------------------------------------------------------------------------------------------------
FreeMail powered by mail.fr
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alkaplan at redhat.com  Wed Mar 14 10:46:59 2018
From: alkaplan at redhat.com (Alona Kaplan)
Date: Wed, 14 Mar 2018 12:46:59 +0200
Subject: [ovirt-users] ovirt-engine add host failed
In-Reply-To: <20180314031545.AF2387200CF@webmail.sinamail.sina.com.cn>
References: <20180314031545.AF2387200CF@webmail.sinamail.sina.com.cn>
Message-ID: 

Hi,

It seems the communication with the host is slow and you get a timeout error.

Is there a chance you opened the Setup Networks dialog while installing the host (it may slow the host down, since the dialog queries the host for LLDP information)?

Please try to re-install (or remove and re-add) the host, and don't open the Setup Networks dialog during the installation.

More technically, there is a bug in the code - collecting and persisting the data from the host is done inside a transaction, while only the persisting should be done in a transaction. According to the attached log, it seems the collection of the data from the host finished successfully (it was slow, but finished without a timeout), but the persistence of the data fails, since during the persistence the transaction reached its timeout.

Can you please file a bug in bugzilla and attach the relevant logs (engine.log, server.log and vdsm.log)?

Thanks.
Alona.

On Wed, Mar 14, 2018 at 5:15 AM, wrote:

> I added a host to ovirt-engine and SetupNetworks failed; server.log has
> some errors. I do not know how to fix it -- could someone give me some advice?
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From eshenitz at redhat.com  Wed Mar 14 10:54:19 2018
From: eshenitz at redhat.com (Eyal Shenitzky)
Date: Wed, 14 Mar 2018 12:54:19 +0200
Subject: [ovirt-users] NFS 4.1 support and migration
In-Reply-To: <20180314104443.CAA09E4471@smtp01.mail.de>
References: <20180314104443.CAA09E4471@smtp01.mail.de>
Message-ID: 

Hi,

NFS 4.1 is supported and working since version 3.6 (according to this bug fix [1]).

[1] Support NFS v4.1 connections -
https://bugzilla.redhat.com/show_bug.cgi?id=1283964
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
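Before rewiring anything, it is worth confirming the storage server actually negotiates 4.1; a quick manual check from a host (server name and export path are placeholders):

    mkdir -p /tmp/nfstest
    mount -t nfs -o vers=4.1 nfsserver:/export/data /tmp/nfstest
    # "vers=4.1" in the live mount options confirms the negotiation worked
    mount | grep nfstest
    umount /tmp/nfstest

As for the greyed-out options: the NFS version of an existing storage domain can only be edited once the domain is in maintenance, which is why the fields are disabled while it is up.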
From sbonazzo at redhat.com  Wed Mar 14 11:00:41 2018
From: sbonazzo at redhat.com (Sandro Bonazzola)
Date: Wed, 14 Mar 2018 12:00:41 +0100
Subject: [ovirt-users] Localization needs some Italian, Czech and Russian updates
Message-ID: 

Hi, if you have some time, want to help the oVirt project, and you're fluent in the following languages, you can help translate oVirt!
The project is hosted here: https://translate.zanata.org/iteration/view/ovirt/ovirt-4.2

Italian translation is at 84%
Czech translation is at 35%
Russian translation is at 26%

Thanks,

--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA
TRIED. TESTED. TRUSTED.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dhy336 at sina.com  Wed Mar 14 11:39:18 2018
From: dhy336 at sina.com (dhy336 at sina.com)
Date: Wed, 14 Mar 2018 19:39:18 +0800
Subject: [ovirt-users] Re: ovirt-engine add host failed
Message-ID: <20180314113918.B8D7B6C01CA@webmail.sinamail.sina.com.cn>

Hi, Alona,

Thanks for your reply. I found the node network was not set up: there was no ovirtmgmt bridge. I solved this issue as shown in the figure: ovirtmgmt was not in "Assigned Logical Networks", so I dragged ovirtmgmt into "Assigned Logical Networks" and then clicked OK. The host's status became Up, so this works.

But when adding the host, the network setup fails and the host ends up "Non Responsive".

----- Original Message -----
From: Alona Kaplan
To: dhy336 at sina.com
Cc: users
Subject: Re: [ovirt-users] ovirt-engine add host failed
Date: 2018-03-14 18:47

> Hi,
>
> It seems the communication with the host is slow and you get a timeout error.
> ...snip...
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ??1.png
Type: image/png
Size: 47070 bytes
Desc: not available
URL: 

From dhy336 at sina.com  Wed Mar 14 11:42:44 2018
From: dhy336 at sina.com (dhy336 at sina.com)
Date: Wed, 14 Mar 2018 19:42:44 +0800
Subject: [ovirt-users] Re: ovirt-engine add host failed
Message-ID: <20180314114244.E99EE19000D1@webmail.sinamail.sina.com.cn>
More technically, there is a bug in the code -Collecting and persisting the data from the host is done inside a transaction.Only the persisting should be done in a transaction.According to the attached log seems the collection of the data from the host finished successfully (was slow, but finished without a timeout), but the persistence of the data fails since during the persistence the transaction reached to the timeout. Can you please file a bug in the bugzilla and attach the relevant logs (engine.log, server.log and vdsm.log) Thanks.Alona. On Wed, Mar 14, 2018 at 5:15 AM, wrote: I add host for ovirt-engine, when SetupNetworks faild, server.log has some error. But I do not how fix it, could someone give me some advise? _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From awels at redhat.com Wed Mar 14 12:36:48 2018 From: awels at redhat.com (Alexander Wels) Date: Wed, 14 Mar 2018 08:36:48 -0400 Subject: [ovirt-users] improvement for web ui during the create template stage. In-Reply-To: <1521000255.6088.116.camel@province-sud.nc> References: <1521000255.6088.116.camel@province-sud.nc> Message-ID: <5595754.jVs0GWpncT@awels> On Wednesday, March 14, 2018 12:04:18 AM EDT Nicolas Vaye wrote: > Hi, > I 'have 2 ovirt node with HE in version 4.2.1.7-1. > > If i make a template from a VM's snapshot in the web ui, there is a form ui > to enter several parameter [cid:1521000255.509.1.camel at province-sud.nc] > > if the name of the template is missing and if we clic on the OK button, > there is an highlighting red border on the name to indicate the problem. > if i enter a long name for the template and if we clic on the OK button, > nothing happend, and there is no highlight or error message to indicate > there is a problem with the long name. > Could you improve that ? > > Thanks, > > Regards, > > Nicolas VAYE > It appears to me it already does that, this a screenshot of me putting in a long template name, and it is highlighted right and if I hover I see a tooltip explaining I can't have more than 64 characters. -------------- next part -------------- A non-text attachment was scrubbed... Name: template_long.png Type: image/png Size: 61147 bytes Desc: not available URL: From ccox at endlessnow.com Wed Mar 14 14:10:19 2018 From: ccox at endlessnow.com (Christopher Cox) Date: Wed, 14 Mar 2018 09:10:19 -0500 Subject: [ovirt-users] Quick question about oVirt 3.6 and vdsm log in DEBUG mode (apprently by default??) In-Reply-To: References: <9114b59a-8ffb-77a6-366c-06d91ba5be26@endlessnow.com> Message-ID: <907984b6-9352-e619-2fc5-a5f0b077c8dc@endlessnow.com> On 03/14/2018 01:34 AM, Yaniv Kaul wrote: > > > On Mar 13, 2018 11:48 PM, "Christopher Cox" > wrote: > ...snip... > > What we are seeing more and more is that if we do an operation like expose a > new LUN and configure a new storage domain, that all of the hyervisors go > "red triangle" and "Connecting..." and it takes a very long time (all day) > to straighten out. > > My guess is that there's too much to look at vdsm wise and so it's waiting a > short(er) period of time for a completed response than what vdsm is going to > us, and it just cycles over and over until it just happens to work. > > > Please upgrade. We have solved issues and improved performance and scale > substantially since 3.6. > You may also wish to apply lvm filters. > Y. 
Oh, we know and are looking at what we'll have to do to upgrade. With that said, is there more information on what you mentioned as "lvm filters" posted somewhere? Also, would VM reduction, and IMHO, virtual disk reduction help this problem? Is there and engine config parameters that might help as well? Thanks for any help on this. From alkaplan at redhat.com Wed Mar 14 14:56:07 2018 From: alkaplan at redhat.com (Alona Kaplan) Date: Wed, 14 Mar 2018 16:56:07 +0200 Subject: [ovirt-users] ovirt-engine add host failed In-Reply-To: <20180314114244.E99EE19000D1@webmail.sinamail.sina.com.cn> References: <20180314114244.E99EE19000D1@webmail.sinamail.sina.com.cn> Message-ID: Hi, I posted a patch with a proposed fix (https://gerrit.ovirt.org/#/c/88999/). Please open a bug so you can track the version the fix is included in. Please attach to the bug vdsm.log, engine.log and server.log (If you had several attempts to reinstall/remove-add the host please add the logs with all the attempts). On Wed, Mar 14, 2018 at 1:42 PM, wrote: > Hi, Alona > > 1. not opened the SetupNetwoks dialog. while installing the host . > 2. I try to many times to re-install (or remove and re-add) the host > ----- ???? ----- > ????Alona Kaplan > ????dhy336 at sina.com > ????users > ???Re: [ovirt-users] ovirt-engine add host failed > ???2018?03?14? 18?47? > > Hi, > > Seems the communication with the host is slow and you get a timeout error. > > Is there a chance you opened the SetupNetwoks dialog while installing the > host (it may slow down the host since it queries the host for lldp > information)? > > Please try to re-install (or remove and re-add) the host (don't open the > setup network dialog during the installation). > > More technically, there is a bug in the code - > Collecting and persisting the data from the host is done inside a > transaction. > Only the persisting should be done in a transaction. > According to the attached log seems the collection of the data from the > host finished successfully (was slow, but finished without a timeout), but > the persistence of the data fails since during the persistence the > transaction reached to the timeout. > > Can you please file a bug in the bugzilla and attach the relevant logs > (engine.log, server.log and vdsm.log) > > Thanks. > Alona. > > > > On Wed, Mar 14, 2018 at 5:15 AM, wrote: > > I add host for ovirt-engine, when SetupNetworks faild, server.log has > some error. But I do not how fix it, could someone give me some advise? > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Wed Mar 14 15:19:51 2018 From: rightkicktech at gmail.com (Alex K) Date: Wed, 14 Mar 2018 17:19:51 +0200 Subject: [ovirt-users] Ovirt VMS backup In-Reply-To: <64ce41ee-2ed5-ce90-fc23-29e85a22e257@abes.fr> References: <64ce41ee-2ed5-ce90-fc23-29e85a22e257@abes.fr> Message-ID: This link was working. Can someone check this to restore access? Thanx, Alex On Tue, Mar 13, 2018 at 1:57 PM, Nathana?l Blanchet wrote: > sorry, but your link is broken > > Le 12/03/2018 ? 19:33, Victor Jos? 
Acosta Domínguez a écrit : > > http://blog.infratic.com/blog/2017/07/07/create-ovirtrhevs-vm-backup/ > > Victor Acosta > > RHCE - RHCSA - RHCVA - VCA-DCV > > > _______________________________________________ > Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users > > > -- > Nathanaël Blanchet > > Supervision réseau > Pôle Infrastrutures Informatiques > 227 avenue Professeur-Jean-Louis-Viala > 34193 MONTPELLIER CEDEX 5 > Tél. 33 (0)4 67 54 84 55 > Fax 33 (0)4 67 54 84 14 blanchet at abes.fr > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Wed Mar 14 15:25:06 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Wed, 14 Mar 2018 17:25:06 +0200 Subject: [ovirt-users] Ovirt VMS backup In-Reply-To: References: <64ce41ee-2ed5-ce90-fc23-29e85a22e257@abes.fr> Message-ID: On Wed, Mar 14, 2018 at 5:19 PM, Alex K wrote: > This link was working. > Can someone check this to restore access? > Surprisingly, you can go to http://blog.infratic.com/ and just scroll down to the blog entry. Y. > > Thanx, > Alex > > On Tue, Mar 13, 2018 at 1:57 PM, Nathanaël Blanchet > wrote: > >> sorry, but your link is broken >> >> Le 12/03/2018 à 19:33, Victor José Acosta Domínguez a écrit : >> >> http://blog.infratic.com/blog/2017/07/07/create-ovirtrhevs-vm-backup/ >> >> Victor Acosta >> >> RHCE - RHCSA - RHCVA - VCA-DCV >> >> >> _______________________________________________ >> Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users >> >> >> -- >> Nathanaël Blanchet >> >> Supervision réseau >> Pôle Infrastrutures Informatiques >> 227 avenue Professeur-Jean-Louis-Viala >> 34193 MONTPELLIER CEDEX 5 >> Tél. 33 (0)4 67 54 84 55 >> Fax 33 (0)4 67 54 84 14 blanchet at abes.fr >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Wed Mar 14 15:38:32 2018 From: rightkicktech at gmail.com (Alex K) Date: Wed, 14 Mar 2018 17:38:32 +0200 Subject: [ovirt-users] Ovirt VMS backup In-Reply-To: References: <64ce41ee-2ed5-ce90-fc23-29e85a22e257@abes.fr> Message-ID: Thanx. Now the link is ok. Alex On Wed, Mar 14, 2018 at 5:25 PM, Yaniv Kaul wrote: > > > On Wed, Mar 14, 2018 at 5:19 PM, Alex K wrote: > >> This link was working. >> Can someone check this to restore access? >> > > Surprisingly, you can go to http://blog.infratic.com/ and just scroll > down to the blog entry. > Y. > > >> >> Thanx, >> Alex >> >> On Tue, Mar 13, 2018 at 1:57 PM, Nathanaël Blanchet >> wrote: >> >>> sorry, but your link is broken >>> >>> Le 12/03/2018 à 19:33, Victor José Acosta Domínguez a écrit : >>> >>> http://blog.infratic.com/blog/2017/07/07/create-ovirtrhevs-vm-backup/ >>> >>> Victor Acosta >>> >>> RHCE - RHCSA - RHCVA - VCA-DCV >>> >>> >>> _______________________________________________ >>> Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users >>> >>> >>> -- >>> Nathanaël Blanchet >>> >>> Supervision réseau >>> Pôle Infrastrutures Informatiques >>> 227 avenue Professeur-Jean-Louis-Viala >>> 34193 MONTPELLIER CEDEX 5 >>> Tél.
33 (0)4 67 54 84 55 >>> Fax 33 (0)4 67 54 84 14 blanchet at abes.fr >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdacrema at enter.eu Wed Mar 14 15:43:48 2018 From: mdacrema at enter.eu (Matteo Dacrema) Date: Wed, 14 Mar 2018 16:43:48 +0100 Subject: [ovirt-users] Ceph Cinder QoS In-Reply-To: <496EE5D1-45B2-4C59-839A-A51F4FF42869@enter.eu> References: <496EE5D1-45B2-4C59-839A-A51F4FF42869@enter.eu> Message-ID: <84883351-55B9-4688-8933-A98744AAAE85@enter.eu> Hi all, has someone experienced the same problem? Is there someone who has a working cinder qos? Thank you Matteo > Il giorno 25 gen 2018, alle ore 11:36, Matteo Dacrema ha scritto: > > Hi All, > > I'm running a 4.2 cluster with all VMs on Ceph using the cinder external provider. > > I'm trying to limit IOPS with cinder qos and volume type, but it doesn't work. > The VM xml doesn't show anything about it. > > Is it expected to work, or is it not implemented yet? > > Thank you > > Matteo > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- > Questo messaggio e' stato analizzato con Libra ESVA ed e' risultato non infetto. > Seguire il link qui sotto per segnalarlo come spam: > http://mx01.enter.it/cgi-bin/learn-msg.cgi?id=BD22F40423.A457C > > From andreil1 at starlett.lv Wed Mar 14 16:56:42 2018 From: andreil1 at starlett.lv (Andrei Verovski) Date: Wed, 14 Mar 2018 18:56:42 +0200 Subject: [ovirt-users] oVirt Engine 4.2 -> 4.2.1 Upgrade Message-ID: Hi ! After oVirt Engine 4.2 -> 4.2.1 Upgrade do I need to run these commands as described in this article https://www.ovirt.org/release/4.2.1/ su - postgres -c "scl enable rh-postgresql95 – psql -d engine" postgres=# DROP FUNCTION IF EXISTS uuid_generate_v1(); postgres=# CREATE EXTENSION "uuid-ossp”; BTW, this yields an error: [root at node00 ~]# su - postgres -c "scl enable rh-postgresql95 – psql -d engine" Unable to open /etc/scl/conf/–! node00 is a dedicated PC with CentOS and oVirt Host Engine. Thanks. Andrei From NasrumMinallah9 at hotmail.com Wed Mar 14 16:28:53 2018 From: NasrumMinallah9 at hotmail.com (Nasrum Minallah Manzoor) Date: Wed, 14 Mar 2018 16:28:53 +0000 Subject: [ovirt-users] Change CD Issue... Message-ID: Hello, At the window, after selecting an ISO and clicking OK I get: Error while executing action Change CD: Drive image file could not be found In the webadmin GUI events list I get: Failed to change disk in VM. It happens with all ISO files I have used before. ISO_DOMAIN is up and the files are listed in the IMAGES pane. Kindly look into the attached logs (engine log and vdsm log) and advise me what to do! Thank you Regards, Nasrum Minallah -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Logs.rar Type: application/octet-stream Size: 474873 bytes Desc: Logs.rar URL:
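On the "Drive image file could not be found" error: a quick way to cross-check what is actually present in the ISO domain - a sketch, where the mount path and domain UUID are placeholders, and 11111111-1111-1111-1111-111111111111 is the fixed image directory ISO domains use:

    # on the engine machine: list the ISO storage domains the engine knows about
    engine-iso-uploader list
    # on a host: the ISO files themselves live under the domain's fixed image directory
    ls /rhev/data-center/mnt/<nfs-server>:_path_to_iso/<domain-uuid>/images/11111111-1111-1111-1111-111111111111/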
From NasrumMinallah9 at hotmail.com Wed Mar 14 05:22:10 2018 From: NasrumMinallah9 at hotmail.com (Nasrum Minallah Manzoor) Date: Wed, 14 Mar 2018 05:22:10 +0000 Subject: [ovirt-users] Assistance needed... In-Reply-To: <1520991028.6088.108.camel@province-sud.nc> References: <1520973318.6088.97.camel@province-sud.nc> <1520991028.6088.108.camel@province-sud.nc> Message-ID: Thank you Nicolas! The issue mentioned below is resolved! Now the issue that I am facing is that I cannot use the "Change CD" function, which generates the following error: Error while executing Change CD action. Failed to perform change cd...... Thank you -----Original Message----- From: Nicolas Vaye [mailto:nicolas.vaye at province-sud.nc] Sent: 14 March 2018 6:31 AM To: users at ovirt.org; NasrumMinallah9 at hotmail.com Subject: Re: [ovirt-users] Assistance needed... I did the installation with Windows 7 and I have installed the ovirt-tools-setup via the iso and everything is OK. The simple way is to install ovirt-guest-tools-iso on the hosted-engine and then get the iso to put on your ISO domain. After that just put this iso in the CD-ROM of the Windows 7 VM and execute ovirt-guest-tools-setup. Regards, Nicolas VAYE -------- Message initial -------- Date: Tue, 13 Mar 2018 20:35:23 +0000 Objet: Re: [ovirt-users] Assistance needed... À: users at ovirt.org, NasrumMinallah9 at hotmail.com Reply-to: Nicolas Vaye De: Nicolas Vaye > Hi, have you ever seen this documentation ? https://www.ovirt.org/documentation/internal/guest-agent/understanding-guest-agents-and-other-tools/ You can see the link for the Windows 7 guest agent: https://community.redhat.com/blog/2015/05/how-to-install-and-use-ovirts-windows-guest-tools/ Maybe it can help you. Regards, Nicolas VAYE -------- Message initial -------- Date: Mon, 12 Mar 2018 18:00:08 +0000 Objet: Re: [ovirt-users] Assistance needed... À: users at ovirt.org De: Nasrum Minallah Manzoor > Hi, Can anyone assist me in getting a VNC native console through ovirt engine to my guest machine (Windows 7). Thanks. From: Nasrum Minallah Manzoor Sent: 12 March 2018 3:40 PM To: users at ovirt.org Cc: 'junaid8756 at gmail.com' > Subject: Assistance needed... Hi, I need assistance regarding the item encircled in red in the attached! How can I remove the error "The latest guest agent needs to be installed and running on the guest". Else everything is working fine! Kindly respond as soon as possible! Regards, _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users From nicolas.vaye at province-sud.nc Wed Mar 14 21:19:55 2018 From: nicolas.vaye at province-sud.nc (Nicolas Vaye) Date: Wed, 14 Mar 2018 21:19:55 +0000 Subject: [ovirt-users] improvement for web ui during the create template stage. In-Reply-To: <5595754.jVs0GWpncT@awels> References: <1521000255.6088.116.camel@province-sud.nc> <5595754.jVs0GWpncT@awels> Message-ID: <1521062392.6088.138.camel@province-sud.nc> Hi, I thought it was the problem. I did a test again and I have recorded the test (in attachment). What is the problem ? Regards, Nicolas VAYE -------- Message initial -------- Date: Wed, 14 Mar 2018 08:36:48 -0400 Objet: Re: [ovirt-users] improvement for web ui during the create template stage. À: users at ovirt.org, Nicolas Vaye De: Alexander Wels > On Wednesday, March 14, 2018 12:04:18 AM EDT Nicolas Vaye wrote: Hi, I have 2 ovirt nodes with HE in version 4.2.1.7-1.
If I make a template from a VM's snapshot in the web UI, there is a form to enter several parameters [cid:1521000255.509.1.camel at province-sud.nc] If the name of the template is missing and we click on the OK button, there is a highlighted red border on the name to indicate the problem. If I enter a long name for the template and we click on the OK button, nothing happens, and there is no highlight or error message to indicate there is a problem with the long name. Could you improve that ? Thanks, Regards, Nicolas VAYE It appears to me it already does that; this is a screenshot of me putting in a long template name, and it is highlighted red, and if I hover I see a tooltip explaining I can't have more than 64 characters. -------------- next part -------------- A non-text attachment was scrubbed... Name: bug_create_template.mp4 Type: video/mp4 Size: 964905 bytes Desc: bug_create_template.mp4 URL: From tadavis at lbl.gov Wed Mar 14 23:50:10 2018 From: tadavis at lbl.gov (Thomas Davis) Date: Wed, 14 Mar 2018 16:50:10 -0700 Subject: [ovirt-users] ovirt 4.2.2-rc3 switching from legacy to OVS.. In-Reply-To: References: <031b026d-66ec-14a0-9bec-7a4b0e717556@lbl.gov> Message-ID: <4418d2d0-73d4-aced-049b-5a2971a91274@lbl.gov> Well, I just hit https://bugzilla.redhat.com/show_bug.cgi?id=1513991 And it's been closed, which means with vdsm-4.20.17-1.el7.centos.x86_64 OVS networking is totally borked.. I know OVS is Experimental, but it worked in 4.1.x, and now we have to step back to the legacy bridge just to use 4.2.x, which in a vlan environment just wreaks havoc (every VLAN needs a unique MAC assigned to the bridge, which vdsm does not do, so suddenly you get the kernel complaining about seeing its MAC address several times.) There is zero documentation on how to use OVN instead of OVS. thomas On 03/13/2018 09:22 AM, Thomas Davis wrote: > I'll work on it some more. I have 2 different clusters in the data > center (1 is the Hosted Engine systems, another is not..) I had trouble > with both. I'll try again on the non-hosted engine cluster to see what > it is doing. I have it working in 4.1, but we are trying to do a clean > wipe since the 4.1 engine has been upgraded so many times from v3.5 plus > we want to move to hosted-engine-ha from a single engine node and the > ansible modules/roles (which also have problems..) > > thomas > > On Tue, Mar 13, 2018 at 6:27 AM, Edward Haas > wrote: > > > OVS switch support is experimental at this stage and in some cases > when trying to change from one switch to the other, it fails. > It was also not checked against a hosted engine setup, which handles > networking a bit differently for the management network (ovirtmgmt). > Nevertheless, we are interested in understanding all the problems > that exist today, so if you can, please share the supervdsm log, it > has the interesting networking traces. > > We plan to block cluster switch editing until these problems are > resolved. It will be only allowed to define a new cluster as OVS, > not convert an existing one from Linux Bridge to OVS. > > On Fri, Mar 9, 2018 at 9:54 AM, Thomas Davis > wrote: > > I'm getting further along with 4.2.2rc3 than the 4.2.1 when it > comes to hosted engine and vlans.. it actually does install > under 4.2.2rc3. > > But it's a complete failure when I switch the cluster from Linux > Bridge/Legacy to OVS. The first time I try, vdsm does > not properly configure the node, it's all messed up. > > I'm getting this in vdsmd logs:
(jsonrpc/7) [api.network] > START setupNetworks(networks={u'ovirtmgmt': {u'ipv6autoconf': > True, u'nic': u'eno1', u'vlan': u'50', u'ipaddr': > u'192.168.85.49', u'switch': u'ovs', u'mtu': 1500, u'netmask': > u'255.255.252.0', u'dhcpv6': False, u'STP': u'no', u'bridged': > u'true', u'gateway': u'192.168.85.254', u'defaultRoute': True}}, > bondings={}, options={u'connectivityCheck': u'true', > u'connectivityTimeout': 120}) from=::ffff:192.168.85.24,56806, > flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:46) > > 2018-03-08 23:12:52,449-0800 INFO? (jsonrpc/2) > [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 > seconds (__init__:573) > > 2018-03-08 23:12:52,511-0800 INFO? (jsonrpc/7) [api.network] > FINISH setupNetworks error=[Errno 19] ovirtmgmt is not present > in the system from=::ffff:192.168.85.24,56806, > flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:50) > 2018-03-08 23:12:52,512-0800 ERROR (jsonrpc/7) > [jsonrpc.JsonRpcServer] Internal server error (__init__:611) > Traceback (most recent call last): > ? File > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line > 606, in _handle_request > ? ? res = method(**params) > ? File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", > line 201, in _dynamicMethod > ? ? result = fn(*methodArgs) > ? File "", line 2, in setupNetworks > ? File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", > line 48, in method > ? ? ret = func(*args, **kwargs) > ? File "/usr/lib/python2.7/site-packages/vdsm/API.py", line > 1527, in setupNetworks > ? ? supervdsm.getProxy().setupNetworks(networks, bondings, options) > ? File > "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", > line 55, in __call__ > ? ? return callMethod() > ? File > "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", > line 53, in > ? ? **kwargs) > ? File "", line 2, in setupNetworks > ? File "/usr/lib64/python2.7/multiprocessing/managers.py", line > 773, in _callmethod > ? ? raise convert_to_error(kind, result) > IOError: [Errno 19] ovirtmgmt is not present in the system > 2018-03-08 23:12:52,512-0800 INFO? (jsonrpc/7) > [jsonrpc.JsonRpcServer] RPC call Host.setupNetworks failed > (error -32603) in 5.90 seconds (__init__:573) > 2018-03-08 23:12:54,769-0800 INFO? (jsonrpc/1) > [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 > seconds (__init__:573) > 2018-03-08 23:12:54,772-0800 INFO? (jsonrpc/5) [api.host] START > getCapabilities() from=::1,45562 (api:46) > 2018-03-08 23:12:54,906-0800 INFO? (jsonrpc/5) [api.host] FINISH > getCapabilities error=[Errno 19] ovirtmgmt is not present in the > system from=::1,45562 (api:50) > 2018-03-08 23:12:54,906-0800 ERROR (jsonrpc/5) > [jsonrpc.JsonRpcServer] Internal server error (__init__:611) > Traceback (most recent call last): > ? File > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line > 606, in _handle_request > ? ? res = method(**params) > ? File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", > line 201, in _dynamicMethod > ? ? result = fn(*methodArgs) > ? File "", line 2, in getCapabilities > ? File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", > line 48, in method > ? ? ret = func(*args, **kwargs) > ? File "/usr/lib/python2.7/site-packages/vdsm/API.py", line > 1339, in getCapabilities > ? ? c = caps.get() > ? File "/usr/lib/python2.7/site-packages/vdsm/host/caps.py", > line 168, in get > ? ? net_caps = supervdsm.getProxy().network_caps() > ? File > "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", > line 55, in __call__ > ? ? 
return callMethod() > File > "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", > line 53, in > **kwargs) > File "", line 2, in network_caps > File "/usr/lib64/python2.7/multiprocessing/managers.py", line > 773, in _callmethod > raise convert_to_error(kind, result) > IOError: [Errno 19] ovirtmgmt is not present in the system > > So something is dreadfully wrong with the bridge to ovs > conversion in 4.2.2rc3. > > thomas > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > From k0ste at k0ste.ru Thu Mar 15 02:36:52 2018 From: k0ste at k0ste.ru (Konstantin Shalygin) Date: Thu, 15 Mar 2018 09:36:52 +0700 Subject: [ovirt-users] Ceph Cinder QoS In-Reply-To: <84883351-55B9-4688-8933-A98744AAAE85@enter.eu> References: <84883351-55B9-4688-8933-A98744AAAE85@enter.eu> Message-ID: <3081640f-5ffe-0449-eec9-3637dc7e0c04@k0ste.ru> > has someone experienced the same problem? > Is there someone who has a working cinder qos? How exactly? Storage profiles are not present for external providers - hence the lack of this feature. For now the only way to do that is a vdsm hook. https://bugzilla.redhat.com/show_bug.cgi?id=1550145 k From dhy336 at sina.com Thu Mar 15 02:32:25 2018 From: dhy336 at sina.com (dhy336 at sina.com) Date: Thu, 15 Mar 2018 10:32:25 +0800 Subject: [ovirt-users] =?gbk?b?u9i4tKO6UmU6IFJlOiAgb3ZpcnQtZW5naW5lIGFk?= =?gbk?q?d_host_failed?= Message-ID: <20180315023225.5042A1000ED@webmail.sinamail.sina.com.cn> Hi Alona, I have filed a bug in Bugzilla. The URL is https://bugzilla.redhat.com/show_bug.cgi?id=1556668, thanks. ----- ???? ----- ????Alona Kaplan ????dhy336 at sina.com ????users ???Re: Re: [ovirt-users] ovirt-engine add host failed ???2018?03?14? 22?56? Hi, I posted a patch with a proposed fix (https://gerrit.ovirt.org/#/c/88999/). Please open a bug so you can track the version the fix is included in. Please attach to the bug vdsm.log, engine.log and server.log (if you had several attempts to reinstall/remove-add the host, please add the logs with all the attempts). On Wed, Mar 14, 2018 at 1:42 PM, wrote: Hi, Alona 1. I did not open the SetupNetworks dialog while installing the host. 2. I tried many times to re-install (or remove and re-add) the host ----- ???? ----- ????Alona Kaplan ????dhy336 at sina.com ????users ???Re: [ovirt-users] ovirt-engine add host failed ???2018?03?14? 18?47? Hi, Seems the communication with the host is slow and you get a timeout error. Is there a chance you opened the SetupNetworks dialog while installing the host (it may slow down the host, since it queries the host for lldp information)? Please try to re-install (or remove and re-add) the host (don't open the setup network dialog during the installation). More technically, there is a bug in the code - collecting and persisting the data from the host is done inside a transaction, while only the persisting should be done in a transaction. According to the attached log it seems the collection of the data from the host finished successfully (it was slow, but finished without a timeout), but the persistence of the data fails, since during the persistence the transaction reached its timeout. Can you please file a bug in Bugzilla and attach the relevant logs (engine.log, server.log and vdsm.log)? Thanks. Alona. On Wed, Mar 14, 2018 at 5:15 AM, wrote: I added a host to ovirt-engine, and when SetupNetworks failed, server.log had some errors. But I do not know how to fix it; could someone give me some advice?
_______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhy336 at sina.com Thu Mar 15 03:03:34 2018 From: dhy336 at sina.com (dhy336 at sina.com) Date: Thu, 15 Mar 2018 11:03:34 +0800 Subject: [ovirt-users] =?gbk?b?u9i4tKO6ILvYuLSjulJlOiBSZTogIG92aXJ0LWVu?= =?gbk?q?gine_add=5Fhost=5Ffailed?= Message-ID: <20180315030334.9F99D1000DD@webmail.sinamail.sina.com.cn> Hi Alona, I tested your patch and it works, thank you very much, but I do not seem to know how this bug is triggered? ----- ???? ----- ???? ????"Alona Kaplan" ????users ???[ovirt-users] ???Re: Re: ovirt-engine add_host_failed ???2018?03?15? 10?42? Hi Alona, I have filed a bug in Bugzilla. The URL is https://bugzilla.redhat.com/show_bug.cgi?id=1556668, thanks. ----- ???? ----- ????Alona Kaplan ????dhy336 at sina.com ????users ???Re: Re: [ovirt-users] ovirt-engine add host failed ???2018?03?14? 22?56? Hi, I posted a patch with a proposed fix (https://gerrit.ovirt.org/#/c/88999/). Please open a bug so you can track the version the fix is included in. Please attach to the bug vdsm.log, engine.log and server.log (if you had several attempts to reinstall/remove-add the host, please add the logs with all the attempts). On Wed, Mar 14, 2018 at 1:42 PM, wrote: Hi, Alona 1. I did not open the SetupNetworks dialog while installing the host. 2. I tried many times to re-install (or remove and re-add) the host ----- ???? ----- ????Alona Kaplan ????dhy336 at sina.com ????users ???Re: [ovirt-users] ovirt-engine add host failed ???2018?03?14? 18?47? Hi, Seems the communication with the host is slow and you get a timeout error. Is there a chance you opened the SetupNetworks dialog while installing the host (it may slow down the host, since it queries the host for lldp information)? Please try to re-install (or remove and re-add) the host (don't open the setup network dialog during the installation). More technically, there is a bug in the code - collecting and persisting the data from the host is done inside a transaction, while only the persisting should be done in a transaction. According to the attached log it seems the collection of the data from the host finished successfully (it was slow, but finished without a timeout), but the persistence of the data fails, since during the persistence the transaction reached its timeout. Can you please file a bug in Bugzilla and attach the relevant logs (engine.log, server.log and vdsm.log)? Thanks. Alona. On Wed, Mar 14, 2018 at 5:15 AM, wrote: I added a host to ovirt-engine, and when SetupNetworks failed, server.log had some errors. But I do not know how to fix it; could someone give me some advice? _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhy336 at sina.com Thu Mar 15 04:41:50 2018 From: dhy336 at sina.com (dhy336 at sina.com) Date: Thu, 15 Mar 2018 12:41:50 +0800 Subject: [ovirt-users] =?gbk?b?u9i4tKO6u9i4tKO6IGNyZWF0ZSBuZXcgZG9tYWlu?= =?gbk?q?_failed?= Message-ID: <20180315044150.7D13E4800BA@webmail.sinamail.sina.com.cn> Hi, I find that when I create a new domain via "New Domain", "AddStorageDomainCommand" in bll is called once, but "CreateStorageDomainVDSCommand" in vdsbroker is called twice.
first " CreateStorageDomainVDSCommand" is successed finish, second "CreateStorageDomainVDSCommand" is failed, so message show "VDSM 192.168.122.246 command CreateStorageDomainVDS failed: Storage domain already exists: (u'6b14935f-8e1b-47ca-93fe-a4b1fdf80ead',)", has someone experienced the same problem? ----- ???? ----- ???? ????"dhy336" , "users" ??????[ovirt-users] create new domain failed ???2018?03?14? 16?31? ----- ???? ----- ???? ????"users" ???[ovirt-users] create new domain failed ???2018?03?14? 16?01? Hi I find a issue, I create new domain failed error info is "Error while executing action New NFS Storage Domain: Storage domain already exists". vdsm debug info : 2018-03-14 15:39:46,163+0800 INFO (jsonrpc/1) [DynamicBridge] ----------------duhy test--------------------- (Bridge:189)2018-03-14 15:39:46,163+0800 INFO (jsonrpc/1) [DynamicBridge] cmd = StoragePool_connectStorageServer (Bridge:190)2018-03-14 15:39:46,164+0800 INFO (jsonrpc/1) [vdsm.api] START connectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'a16bcfa2-fd7c-49f9-84b2-3a905b4f4135', u'connection': u'192.168.122.134:/home/exports/vmstorage', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:192.168.122.134,45460, flow_id=61f0b27f, task_id=ecd65a2e-3687-49b6-a4d6-11b9fce74775 (api:46)2018-03-14 15:39:46,190+0800 INFO (jsonrpc/1) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'a16bcfa2-fd7c-49f9-84b2-3a905b4f4135'}]} from=::ffff:192.168.122.134,45460, flow_id=61f0b27f, task_id=ecd65a2e-3687-49b6-a4d6-11b9fce74775 (api:52)2018-03-14 15:39:46,191+0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.03 seconds (__init__:630)2018-03-14 15:39:46,204+0800 INFO (jsonrpc/3) [DynamicBridge] ----------------duhy test--------------------- (Bridge:189)2018-03-14 15:39:46,205+0800 INFO (jsonrpc/3) [DynamicBridge] cmd = StorageDomain_create (Bridge:190)2018-03-14 15:39:46,205+0800 INFO (jsonrpc/3) [vdsm.api] START createStorageDomain(storageType=1, sdUUID=u'80586557-820f-4e12-9763-11beed4259c8', domainName=u'vmstorage', typeSpecificArg=u'192.168.122.134:/home/exports/vmstorage', domClass=1, domVersion=u'4', options=None) from=::ffff:192.168.122.134,45460, flow_id=61f0b27f, task_id=4b38bebc-5e9f-49cf-aafd-0de9155abe06 (api:46)2018-03-14 15:39:46,206+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] -------------------duhy test-------------- (sdc:106)2018-03-14 15:39:46,206+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] /rhev/data-center (sdc:107)2018-03-14 15:39:46,206+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] ------------------------------------------- (sdc:119)2018-03-14 15:39:46,206+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] {} (sdc:120)2018-03-14 15:39:46,207+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] sdUUID not in __inProgress (sdc:128)2018-03-14 15:39:46,207+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] set([]) (sdc:129)2018-03-14 15:39:46,668+0800 INFO (itmap/0) [IOProcessClient] Starting client ioprocess-1 (__init__:330)2018-03-14 15:39:46,693+0800 INFO (ioprocess/31894) [IOProcess] Starting ioprocess (__init__:452)2018-03-14 15:39:46,704+0800 INFO (jsonrpc/3) [storage.StorageDomain] sdUUID=80586557-820f-4e12-9763-11beed4259c8 domainName=vmstorage remotePath=192.168.122.134:/home/exports/vmstorage domClass=1 (nfsSD:70)2018-03-14 15:39:46,745+0800 INFO (jsonrpc/3) 
[IOProcessClient] Starting client ioprocess-2 (__init__:330)2018-03-14 15:39:46,780+0800 INFO (ioprocess/31903) [IOProcess] Starting ioprocess (__init__:452)2018-03-14 15:39:47,108+0800 INFO (jsonrpc/3) [storage.xlease] Formatting index for lockspace u'80586557-820f-4e12-9763-11beed4259c8' (version=1) (xlease:641)2018-03-14 15:39:47,489+0800 INFO (jsonrpc/3) [storage.HSM] knownSDs: {80586557-820f-4e12-9763-11beed4259c8: vdsm.storage.nfsSD.findDomain} (hsm:2581)2018-03-14 15:39:47,490+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] --------------manuallyAddDomain------------ (sdc:204)2018-03-14 15:39:47,490+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] (sdc:205)2018-03-14 15:39:47,490+0800 INFO (jsonrpc/3) [storage.StorageDomainCache] 80586557-820f-4e12-9763-11beed4259c8 (sdc:206)2018-03-14 15:39:47,490+0800 INFO (jsonrpc/3) [vdsm.api] FINISH createStorageDomain return=None from=::ffff:192.168.122.134,45460, flow_id=61f0b27f, task_id=4b38bebc-5e9f-49cf-aafd-0de9155abe06 (api:52)2018-03-14 15:39:47,491+0800 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call StorageDomain.create succeeded in 1.29 seconds (__init__:630)2018-03-14 15:39:47,508+0800 INFO (jsonrpc/4) [DynamicBridge] ----------------duhy test--------------------- (Bridge:189)2018-03-14 15:39:47,509+0800 INFO (jsonrpc/4) [DynamicBridge] cmd = StorageDomain_create (Bridge:190)2018-03-14 15:39:47,510+0800 INFO (jsonrpc/4) [vdsm.api] START createStorageDomain(storageType=1, sdUUID=u'80586557-820f-4e12-9763-11beed4259c8', domainName=u'vmstorage', typeSpecificArg=u'192.168.122.134:/home/exports/vmstorage', domClass=1, domVersion=u'4', options=None) from=::ffff:192.168.122.134,45460, flow_id=61f0b27f, task_id=e419bf88-6f4e-43ff-a6ae-0153b3a98c65 (api:46)2018-03-14 15:39:47,510+0800 INFO (jsonrpc/4) [storage.StorageDomainCache] -------------------duhy test-------------- (sdc:106)2018-03-14 15:39:47,511+0800 INFO (jsonrpc/4) [storage.StorageDomainCache] /rhev/data-center (sdc:107)2018-03-14 15:39:47,511+0800 INFO (jsonrpc/4) [storage.StorageDomainCache] ------------------------------------------- (sdc:119)2018-03-14 15:39:47,511+0800 INFO (jsonrpc/4) [storage.StorageDomainCache] {u'80586557-820f-4e12-9763-11beed4259c8': } (sdc:120)2018-03-14 15:39:47,511+0800 INFO (jsonrpc/4) [storage.StorageDomainCache] domain is not None (sdc:123)2018-03-14 15:39:47,511+0800 INFO (jsonrpc/4) [storage.StorageDomainCache] (sdc:124)2018-03-14 15:39:47,512+0800 INFO (jsonrpc/4) [vdsm.api] FINISH createStorageDomain error=Storage domain already exists: (u'80586557-820f-4e12-9763-11beed4259c8',) from=::ffff:192.168.122.134,45460, flow_id=61f0b27f, task_id=e419bf88-6f4e-43ff-a6ae-0153b3a98c65 (api:50)2018-03-14 15:39:47,512+0800 ERROR (jsonrpc/4) [storage.TaskManager.Task] (Task='e419bf88-6f4e-43ff-a6ae-0153b3a98c65') Unexpected error (task:875)Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File "", line 2, in createStorageDomain File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method ret = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2550, in createStorageDomain self.validateNonDomain(sdUUID) File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 322, in validateNonDomain raise se.StorageDomainAlreadyExists(sdUUID)StorageDomainAlreadyExists: Storage domain already exists: (u'80586557-820f-4e12-9763-11beed4259c8',)2018-03-14 15:39:47,515+0800 INFO (jsonrpc/4) [storage.TaskManager.Task] 
(Task='e419bf88-6f4e-43ff-a6ae-0153b3a98c65') aborting: Task is aborted: "Storage domain already exists: (u'80586557-820f-4e12-9763-11beed4259c8',)" - code 365 (task:1181)2018-03-14 15:39:47,516+0800 ERROR (jsonrpc/4) [storage.Dispatcher] FINISH createStorageDomain error=Storage domain already exists: (u'80586557-820f-4e12-9763-11beed4259c8',) (dispatcher:82)2018-03-14 15:39:47,516+0800 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StorageDomain.create failed (error 365) in 0.01 seconds (__init__:630)2018-03-14 15:39:47,973+0800 INFO (jsonrpc/5) [DynamicBridge] ----------------duhy test--------------------- (Bridge:189)2018-03-14 15:39:47,973+0800 INFO (jsonrpc/5) [DynamicBridge] cmd = StoragePool_disconnectStorageServer (Bridge:190)2018-03-14 15:39:47,974+0800 INFO (jsonrpc/5) [vdsm.api] START disconnectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'a16bcfa2-fd7c-49f9-84b2-3a905b4f4135', u'connection': u'192.168.122.134:/home/exports/vmstorage', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:192.168.122.134,45460, flow_id=fc7e65db-8a83-46c5-b38a-12c3837bc453, task_id=250f443a-40e9-4a91-83b4-799b515da2cd (api:46)2018-03-14 15:39:47,975+0800 INFO (jsonrpc/5) [storage.Mount] unmounting /rhev/data-center/mnt/192.168.122.134:_home_exports_vmstorage (mount:213)2018-03-14 15:39:48,061+0800 INFO (jsonrpc/6) [DynamicBridge] ----------------duhy test--------------------- (Bridge:189)2018-03-14 15:39:48,062+0800 INFO (jsonrpc/6) [DynamicBridge] cmd = Host_getAllVmStats (Bridge:190)2018-03-14 15:39:48,062+0800 INFO (jsonrpc/6) [api.host] START getAllVmStats() from=::1,33990 (api:46)2018-03-14 15:39:48,272+0800 INFO (jsonrpc/6) [root] /usr/libexec/vdsm/hooks/after_get_all_vm_stats/10_fakevmstats: rc=0 err= (hooks:109)2018-03-14 15:39:48,273+0800 INFO (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,33990 (api:52)2018-03-14 15:39:48,274+0800 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.21 seconds (__init__:630)2018-03-14 15:39:48,412+0800 INFO (jsonrpc/5) [vdsm.api] FINISH disconnectStorageServer return={'statuslist': [{'status': 0, 'id': u'a16bcfa2-fd7c-49f9-84b2-3a905b4f4135'}]} from=::ffff:192.168.122.134,45460, flow_id=fc7e65db-8a83-46c5-b38a-12c3837bc453, task_id=250f443a-40e9-4a91-83b4-799b515da2cd (api:52)2018-03-14 15:39:48,413+0800 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StoragePool.disconnectStorageServer succeeded in 0.44 seconds (__init__:630)2018-03-14 15:39:48,431+0800 INFO (jsonrpc/7) [DynamicBridge] ----------------duhy test--------------------- (Bridge:189)2018-03-14 15:39:48,432+0800 INFO (jsonrpc/7) [DynamicBridge] cmd = StoragePool_disconnectStorageServer (Bridge:190)2018-03-14 15:39:48,432+0800 INFO (jsonrpc/7) [vdsm.api] START disconnectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'a16bcfa2-fd7c-49f9-84b2-3a905b4f4135', u'connection': u'192.168.122.134:/home/exports/vmstorage', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:192.168.122.134,45460, flow_id=fc7e65db-8a83-46c5-b38a-12c3837bc453, task_id=5014c75e-67b6-4401-a55b-480d5373f112 (api:46)2018-03-14 15:39:48,433+0800 INFO (jsonrpc/7) [storage.Mount] unmounting /rhev/data-center/mnt/192.168.122.134:_home_exports_vmstorage 
(mount:213)2018-03-14 15:39:48,491+0800 ERROR (jsonrpc/7) [storage.HSM] Could not disconnect from storageServer (hsm:2466)Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2462, in disconnectStorageServer conObj.disconnect() File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 387, in disconnect return self._mountCon.disconnect() File "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 185, in disconnect self._mount.umount(True, True) File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 216, in umount timeout=timeout) File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in __call__ return callMethod() File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 51, in **kwargs) File "", line 2, in umount File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod raise convert_to_error(kind, result)MountError: (32, ';umount: /rhev/data-center/mnt/192.168.122.134:_home_exports_vmstorage: mountpoint not found\n')2018-03-14 15:39:48,768+0800 INFO (jsonrpc/7) [vdsm.api] FINISH disconnectStorageServer return={'statuslist': [{'status': 477, 'id': u'a16bcfa2-fd7c-49f9-84b2-3a905b4f4135'}]} from=::ffff:192.168.122.134,45460, flow_id=fc7e65db-8a83-46c5-b38a-12c3837bc453, task_id=5014c75e-67b6-4401-a55b-480d5373f112 (api:52)2018-03-14 15:39:48,769+0800 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call StoragePool.disconnectStorageServer succeeded in 0.34 seconds (__init__:630)2018-03-14 15:39:50,305+0800 INFO (jsonrpc/0) [DynamicBridge] ----------------duhy test--------------------- (Bridge:189)2018-03-14 15:39:50,306+0800 INFO (jsonrpc/0) [DynamicBridge] cmd = Host_getAllVmStats (Bridge:190)2018-03-14 15:39:50,307+0800 INFO (jsonrpc/0) [api.host] START getAllVmStats() from=::ffff:192.168.122.134,45460 (api:46)2018-03-14 15:39:50,465+0800 INFO (jsonrpc/0) [root] /usr/libexec/vdsm/hooks/after_get_all_vm_stats/10_fakevmstats: rc=0 err= (hooks:109)2018-03-14 15:39:50,467+0800 INFO (jsonrpc/0) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.122.134,45460 (api:52)2018-03-14 15:39:50,468+0800 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.16 seconds (__init__:630) I find twice StorageDomain_create ,first is sucessed , cache has this uuid, so second is failed ,because cache has this uuid. finally "Error while executing action New NFS Storage Domain: Storage domain already exists". _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users I debug engine, engine send twice StorageDomain_create command to vdsm, I want to know why to send twice StorageDomain_create command, Is this a bug or my enviroment is not work. 2018-03-14 04:23:01,604-04 INFO [org.ovirt.engine.core.bll.profiles.AddDiskProfileCommand] (default task-55) [4eb2816d] Running command: AddDiskProfileCommand internal: true. 
Entities affected : ID: 82483a27-d046-4082-9371-3c159c957191 Type: StorageAction group CREATE_STORAGE_DISK_PROFILE with role type ADMIN2018-03-14 04:23:01,633-04 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-55) [4eb2816d] EVENT_ID: USER_ADDED_DISK_PROFILE(10,120), Disk Profile vmstorage was successfully added (User: admin at internal-authz).2018-03-14 04:23:01,642-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-55) [4eb2816d] START, ConnectStorageServerVDSCommand(HostName = 192.168.122.54, StorageServerConnectionManagementVDSParameters:{hostId='72b8ea98-563b-4b8b-b48a-69571b17ff56', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='NFS', connectionList='[StorageServerConnections:{id='ecce7868-d8fd-4871-8a22-c5e6d1102c86', connection='192.168.122.134:/home/exports/vmstorage', iqn='null', vfsType='null', mountOptions='null', nfsVersion='AUTO', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 58ccf2018-03-14 04:23:01,690-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-55) [4eb2816d] FINISH, ConnectStorageServerVDSCommand, return: {ecce7868-d8fd-4871-8a22-c5e6d1102c86=0}, log id: 58ccf2018-03-14 04:23:01,695-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-55) [4eb2816d] START, ConnectStorageServerVDSCommand(HostName = 192.168.122.54, StorageServerConnectionManagementVDSParameters:{hostId='72b8ea98-563b-4b8b-b48a-69571b17ff56', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='NFS', connectionList='[StorageServerConnections:{id='ecce7868-d8fd-4871-8a22-c5e6d1102c86', connection='192.168.122.134:/home/exports/vmstorage', iqn='null', vfsType='null', mountOptions='null', nfsVersion='AUTO', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 7da62ba62018-03-14 04:23:01,737-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-55) [4eb2816d] FINISH, ConnectStorageServerVDSCommand, return: {ecce7868-d8fd-4871-8a22-c5e6d1102c86=0}, log id: 7da62ba62018-03-14 04:23:01,744-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-55) [4eb2816d] START, CreateStorageDomainVDSCommand(HostName = 192.168.122.54, CreateStorageDomainVDSCommandParameters:{hostId='72b8ea98-563b-4b8b-b48a-69571b17ff56', storageDomain='StorageDomainStatic:{name='vmstorage', id='82483a27-d046-4082-9371-3c159c957191'}', args='192.168.122.134:/home/exports/vmstorage'}), log id: 95323782018-03-14 04:23:01,744-04 INFO [org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer] (default task-55) [4eb2816d] ---------------------duhy test--------------------------2018-03-14 04:23:01,744-04 INFO [org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer] (default task-55) [4eb2816d] StorageDomain.createstoragedomainID=82483a27-d046-4082-9371-3c159c957191domainType=1domainClass=1storageFormatType42018-03-14 04:23:02,907-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-55) [4eb2816d] FINISH, CreateStorageDomainVDSCommand, log id: 95323782018-03-14 04:23:02,917-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-55) [4eb2816d] START, CreateStorageDomainVDSCommand(HostName = 192.168.122.54, CreateStorageDomainVDSCommandParameters:{hostId='72b8ea98-563b-4b8b-b48a-69571b17ff56', 
storageDomain='StorageDomainStatic:{name='vmstorage', id='82483a27-d046-4082-9371-3c159c957191'}', args='192.168.122.134:/home/exports/vmstorage'}), log id: 705b95752018-03-14 04:23:02,917-04 INFO [org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer] (default task-55) [4eb2816d] ---------------------duhy test--------------------------2018-03-14 04:23:02,917-04 INFO [org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer] (default task-55) [4eb2816d] StorageDomain.createstoragedomainID=82483a27-d046-4082-9371-3c159c957191domainType=1domainClass=1storageFormatType42018-03-14 04:23:02,944-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-55) [4eb2816d] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM 192.168.122.54 command CreateStorageDomainVDS failed: Storage domain already exists: (u'82483a27-d046-4082-9371-3c159c957191',)2018-03-14 04:23:02,944-04 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-55) [4eb2816d] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand' return value 'StatusOnlyReturn [status=Status [code=365, message=Storage domain already exists: (u'82483a27-d046-4082-9371-3c159c957191',)]]' -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ??1.png Type: image/png Size: 62970 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ??1.png Type: image/png Size: 235094 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: create-domain-failed.tar.gz Type: application/x-gzip Size: 250052 bytes Desc: not available URL: From kirin.vanderveer at planetinnovation.com.au Thu Mar 15 05:03:15 2018 From: kirin.vanderveer at planetinnovation.com.au (Kirin van der Veer) Date: Thu, 15 Mar 2018 16:03:15 +1100 Subject: [ovirt-users] Using VDSM to edit management interface Message-ID: Hi oVirt people, I have setup a new cluster consisting of many oVirt Nodes with a single dedicated oVirt Engine machine. For the most part things are working, however despite entering the DNS search domain during install on the Nodes the management interface is not aware of my search domain and it has not been added to /etc/resolv.conf (perhaps that is unnecessary?). I eventually worked out that the DNS search domain should be included in /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt However as per the header/warning, that file is generated by VDSM. I assumed that I should be able to edit the search domain with vdsClient, but when I run "vdsClient -m" I don't see any options related to network config. I found the following page on DNS config: https://www.ovirt.org/develop/release-management/features/network/allowExplicitDnsConfiguration/ But it does not seem to offer a way of specifying the DNS search domain (other than perhaps directly editing /etc/resolv.conf - which is generated/managed by Network Manager). nmcli reports that all of my interfaces (including ovirtmgmt) are "unmanaged". Indeed when I attempt to run nmtui there is nothing listed to configure. This should be really simple! I just want to add my local search domain so I can use the short name for my NFS server. I'd appreciate any advice. Thanks in advance, Kirin. . -- *IMPORTANT NOTE. 
*If you are NOT AN AUTHORISED RECIPIENT of this e-mail, please contact Planet Innovation Pty Ltd by return e-mail or by telephone on +613 9945 7510. In this case, you should not read, print, re-transmit, store or act in reliance on this e-mail or any attachments, and should destroy all copies of them. This e-mail and any attachments are confidential and may contain legally privileged information and/or copyright material of Planet Innovation Pty Ltd or third parties. You should only re-transmit, distribute or commercialise the material if you are authorised to do so. Although we use virus scanning software, we deny all liability for viruses or alike in any message or attachment. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From phudec at cnc.sk Thu Mar 15 06:17:57 2018 From: phudec at cnc.sk (Peter Hudec) Date: Thu, 15 Mar 2018 07:17:57 +0100 Subject: [ovirt-users] Using VDSM to edit management interface In-Reply-To: References: Message-ID: <5eff8a00-9a45-7c9f-fe6c-6583b6d8aae9@cnc.sk> Hi Kirin, I suggest doing it the old way and editing /etc/resolv.conf manually. And one piece of advice: do not rely on DNS on infrastructure servers. Use /etc/hosts. If the DNS is not accessible, you will have problems bringing the infrastructure up/working. As a side effect, the hosts file allows you to use short names to access servers. If you are Ansible-positive, you could use hudecof.resolv https://galaxy.ansible.com/hudecof/resolv/ hudecof.hosts https://galaxy.ansible.com/hudecof/hosts/ Peter On 15/03/2018 06:03, Kirin van der Veer wrote: > Hi oVirt people, I have set up a new cluster consisting of many > oVirt Nodes with a single dedicated oVirt Engine machine. For the > most part things are working, however, despite entering the DNS > search domain during install on the Nodes, the management interface > is not aware of my search domain and it has not been added to > /etc/resolv.conf (perhaps that is unnecessary?). I eventually > worked out that the DNS search domain should be included in > /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt. However, as per the > header/warning, that file is generated by VDSM. I assumed that I > should be able to edit the search domain with vdsClient, but when I > run "vdsClient -m" I don't see any options related to network > config. I found the following page on DNS config: > https://www.ovirt.org/develop/release-management/features/network/allowExplicitDnsConfiguration/ > > But it does not seem to offer a way of specifying the DNS search domain > (other than perhaps directly editing /etc/resolv.conf - which is > generated/managed by Network Manager). nmcli reports that all of my > interfaces (including ovirtmgmt) are "unmanaged". Indeed when I > attempt to run nmtui there is nothing listed to configure. This > should be really simple! I just want to add my local search domain > so I can use the short name for my NFS server. I'd appreciate any > advice. > > Thanks in advance, Kirin. > > > . > > *IMPORTANT NOTE. *If you are NOT AN AUTHORISED RECIPIENT of this > e-mail, please contact Planet Innovation Pty Ltd by return e-mail > or by telephone on +613 9945 7510. In this case, you should not > read, print, re-transmit, store or act in reliance on this e-mail > or any attachments, and should destroy all copies of them. This > e-mail and any attachments are confidential and may contain legally > privileged information and/or copyright material of Planet > Innovation Pty Ltd or third parties. You should only re-transmit, > distribute or commercialise the material if you are authorised to > do so. Although we use virus scanning software, we deny all > liability for viruses or alike in any message or attachment. This > notice should not be removed. > > ** > > > _______________________________________________ Users mailing list > Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users > -- *Peter Hudec* Infraštruktúrny architekt phudec at cnc.sk *CNC, a.s.* Borská 6, 841 04 Bratislava Recepcia: +421 2 35 000 100 Mobil: +421 905 997 203 *www.cnc.sk*
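The plain-file approach suggested above looks like this - a sketch with example names and addresses; on oVirt Node, verify afterwards that NetworkManager/vdsm are not rewriting the file:

    # /etc/resolv.conf on the host (example search domain and resolver)
    search example.lan
    nameserver 192.168.1.1

    # /etc/hosts entry so the NFS server resolves even without DNS (example address/name)
    192.168.1.20    nfs01.example.lan nfs01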
From didi at redhat.com Thu Mar 15 06:33:49 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Thu, 15 Mar 2018 08:33:49 +0200 Subject: [ovirt-users] oVirt Engine 4.2 -> 4.2.1 Upgrade In-Reply-To: References: Message-ID: On Wed, Mar 14, 2018 at 6:56 PM, Andrei Verovski wrote: > Hi ! > > After oVirt Engine 4.2 -> 4.2.1 Upgrade do I need to run these commands as described in this article > https://www.ovirt.org/release/4.2.1/ Please note that the start of this section says: "For databases managed by engine-setup this is performed automatically, but non-managed databases (usually remote databases) this needs to be done manually by administrators." Is your database remote or managed manually, or local and managed by engine-setup? If the latter, it's enough to run 'engine-setup'. > > su - postgres -c "scl enable rh-postgresql95 – psql -d engine" > postgres=# DROP FUNCTION IF EXISTS uuid_generate_v1(); > postgres=# CREATE EXTENSION "uuid-ossp”; Sorry, this is a bug in the documentation. The commands should be: su - postgres -c "scl enable rh-postgresql95 -- psql -d engine" postgres=# DROP FUNCTION IF EXISTS uuid_generate_v1(); postgres=# CREATE EXTENSION "uuid-ossp"; > > BTW, this yields an error: > [root at node00 ~]# su - postgres -c "scl enable rh-postgresql95 – psql -d engine" > Unable to open /etc/scl/conf/–! Indeed. The reason that this bug happened is that the release notes are (partially) auto-generated, where this part is taken from the doc-text of the linked bug: https://bugzilla.redhat.com/show_bug.cgi?id=1515635 The text there is ok. We should somehow make the resultant markdown mark such doc text as pre-formatted, to prevent it from replacing '--' with '–'. Adding Sandro for this. Thanks for the report! > > node00 is a dedicated PC with CentOS and oVirt Host Engine. Sounds to me like you are in the "this will happen automatically" case. Best regards, -- Didi
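To confirm the extension is in place after running the corrected commands, a quick check - a sketch, assuming the engine database is named 'engine':

    su - postgres -c "scl enable rh-postgresql95 -- psql -d engine -c 'SELECT uuid_generate_v1();'"
    # a UUID in the output means uuid-ossp is installed and usable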
> https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine Component - bll/network Version - 4.2.0 > ----- ???? ----- > ???? > ????"Alona Kaplan" > ????users > ???[ovirt-users] ???Re: Re: ovirt-engine add_host_failed > ???2018?03?15? 10?42? > > Hi Alona, > I has file a bug in the bugzilla. url is https://bugzilla.redhat.com/ > show_bug.cgi?id=1556668, thanks. > > ----- ???? ----- > ????Alona Kaplan > ????dhy336 at sina.com > ????users > ???Re: Re: [ovirt-users] ovirt-engine add host failed > ???2018?03?14? 22?56? > > Hi, > > I posted a patch with a proposed fix (https://gerrit.ovirt.org/#/c/88999/ > ). > Please open a bug so you can track the version the fix is included in. > > Please attach to the bug vdsm.log, engine.log and server.log (If you had > several attempts to reinstall/remove-add the host please add the logs with > all the attempts). > > > On Wed, Mar 14, 2018 at 1:42 PM, wrote: > > Hi, Alona > > 1. not opened the SetupNetwoks dialog. while installing the host . > 2. I try to many times to re-install (or remove and re-add) the host > ----- ???? ----- > ????Alona Kaplan > ????dhy336 at sina.com > ????users > ???Re: [ovirt-users] ovirt-engine add host failed > ???2018?03?14? 18?47? > > Hi, > > Seems the communication with the host is slow and you get a timeout error. > > Is there a chance you opened the SetupNetwoks dialog while installing the > host (it may slow down the host since it queries the host for lldp > information)? > > Please try to re-install (or remove and re-add) the host (don't open the > setup network dialog during the installation). > > More technically, there is a bug in the code - > Collecting and persisting the data from the host is done inside a > transaction. > Only the persisting should be done in a transaction. > According to the attached log seems the collection of the data from the > host finished successfully (was slow, but finished without a timeout), but > the persistence of the data fails since during the persistence the > transaction reached to the timeout. > > Can you please file a bug in the bugzilla and attach the relevant logs > (engine.log, server.log and vdsm.log) > > Thanks. > Alona. > > > > On Wed, Mar 14, 2018 at 5:15 AM, wrote: > > I add host for ovirt-engine, when SetupNetworks faild, server.log has > some error. But I do not how fix it, could someone give me some advise? > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From didi at redhat.com Thu Mar 15 07:18:20 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Thu, 15 Mar 2018 09:18:20 +0200 Subject: [ovirt-users] 4.2.2.2-1 Starting hosted engine on all hosts In-Reply-To: References: Message-ID: On Thu, Mar 15, 2018 at 8:50 AM, Maton, Brett wrote: > The last three 4.2.2 release candidates that I've tried have been starting > self hosted engine all all physical hosts at the same time. > > Same with the latest RC, what logs do you need to investigate the problem? /var/log/ovirt-hosted-engine-ha/* /var/log/sanlock.log /var/log/vdsm/* Adding Martin. 
From spfma.tech at e.mail.fr Thu Mar 15 08:46:49 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Thu, 15 Mar 2018 09:46:49 +0100 Subject: [ovirt-users] NFS 4.1 support and migration Message-ID: <20180315084649.99952E4477@smtp01.mail.de> Thanks for your answer. And in order to use V4.1 instead of V3 on a domain, do I just have to disconnect it and change its settings? Seems to be doable with VM domains, but how to do it with the hosted storage domain? I haven't found a command-line way to do this yet. Regards Le 14-Mar-2018 11:54:52 +0100, eshenitz at redhat.com a écrit: Hi, NFS 4.1 is supported and working since version 3.6 (according to this bug fix [1]) [1] Support NFS v4.1 connections - https://bugzilla.redhat.com/show_bug.cgi?id=1283964 ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From spfma.tech at e.mail.fr Thu Mar 15 08:45:01 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Thu, 15 Mar 2018 09:45:01 +0100 Subject: [ovirt-users] NFS 4.1 support and migration In-Reply-To: References: Message-ID: <20180315084502.3623AE4475@smtp01.mail.de> Thanks for your answer. And to use V4.1 instead of V3 on a domain, do I just have to disconnect it and change its settings? Seems to be easy to do with VM domains, but how to do it with the hosted storage domain? Regards Le 14-Mar-2018 11:54:52 +0100, eshenitz at redhat.com a écrit: Hi, NFS 4.1 is supported and working since version 3.6 (according to this bug fix [1]) [1] Support NFS v4.1 connections - https://bugzilla.redhat.com/show_bug.cgi?id=1283964 ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From eshenitz at redhat.com Thu Mar 15 08:56:34 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Thu, 15 Mar 2018 10:56:34 +0200 Subject: [ovirt-users] NFS 4.1 support and migration In-Reply-To: <20180315084502.3623AE4475@smtp01.mail.de> References: <20180315084502.3623AE4475@smtp01.mail.de> Message-ID: I am not sure what you mean. Can you please try to explain what the difference is between "VM domains" and the "hosted storage domain" according to you? Thanks, On Thu, Mar 15, 2018 at 10:45 AM, wrote: > Thanks for your answer. > > And to use V4.1 instead of V3 on a domain, do I just have to disconnect it > and change its settings? Seems to be easy to do with VM domains, but how > to do it with the hosted storage domain? > > Regards > > > > Le 14-Mar-2018 11:54:52 +0100, eshenitz at redhat.com a écrit: > > > Hi, > > NFS 4.1 is supported and working since version 3.6 (according to this bug fix > [1]) > > [1] Support NFS v4.1 connections - https://bugzilla.redhat.com/show_bug.cgi?id=1283964 > > > ------------------------------ > FreeMail powered by mail.fr > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spfma.tech at e.mail.fr Thu Mar 15 09:05:24 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Thu, 15 Mar 2018 10:05:24 +0100 Subject: [ovirt-users] [4.2.2-1.el7.centos] Image locked and unending task Message-ID: <20180315090524.3E5B5E4474@smtp01.mail.de> Hi, I tried to roll back to a snapshot on a VM, but the preview never ended. The task has been running for about 15 hours, with this state: { "916b67fb-8808-43d2-850c-1c12650ccc49": { "verb": "createVolume", "code": 0, "state": "finished", "tag": "spm", "result": { "uuid": "d37ca118-820f-46a3-b99b-714018ea8b42" }, "message": "1 jobs completed successfully", "id": "916b67fb-8808-43d2-850c-1c12650ccc49" } } I just canceled it: the task list is now empty on the CLI but no change on the GUI. So I restarted the engine VM, but no success. With "/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh" I was able to manually unlock the image, but the task is still "finalizing". Is this a bug? Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From carre.fabien at gmail.com Thu Mar 15 09:09:41 2018 From: carre.fabien at gmail.com (Fabien Carré) Date: Thu, 15 Mar 2018 09:09:41 +0000 Subject: [ovirt-users] Ovirt and OVH vRack Message-ID: Hello, I am trying to set up an oVirt environment using OVH servers. So far I have installed an engine (4.2) and a node. The node is attached to a NAS through a vRack https://www.ovh.co.uk/solutions/vrack/network-technology.xml. I am using one vlan to connect it, which is the management network. However, it is not a perfect setup (cf attached screenshot). The management network is in "Out-of-Sync" state. on the node: # ip addr 28: eno4.100 at eno4: mtu 1500 qdisc noqueue master ovirtmgmt state UP qlen 1000 link/ether 0c:c4:ff:7a:6c:13 brd ff:ff:ff:ff:ff:ff 29: ovirtmgmt: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 0c:c4:ff:7a:6c:13 brd ff:ff:ff:ff:ff:ff inet 10.100.0.11/24 brd 10.100.0.255 scope global ovirtmgmt valid_lft forever preferred_lft forever [image: Screenshot from 2018-03-15 09-03-37.png] It does not seem possible to add an extra VLAN. I wanted to have one for the vms and one for the hosts. Can you give me some help or guidance? Also generally speaking do you think such a setup is fine? Thank you Fabien -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot from 2018-03-15 09-03-37.png Type: image/png Size: 53119 bytes Desc: not available URL: From spfma.tech at e.mail.fr Thu Mar 15 09:12:07 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Thu, 15 Mar 2018 10:12:07 +0100 Subject: [ovirt-users] NFS 4.1 support and migration In-Reply-To: References: Message-ID: <20180315091207.2A5E3E4476@smtp01.mail.de> In fact I don't really know how to change storage domains settings (like nfs version or export path, ...), if it is only possible. I thought they could be disabled after stopping all related VMS, and maybe the settings panel would then unlock? But this should be impossible with the hosted engine dedicated storage domain as it is required for the GUI itself. So I am stuck.
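While a domain is active, one read-only way to confirm which NFS version a host actually negotiated for each storage domain is the kernel's mount table on that host. A small sketch, assuming the default /rhev mount prefix oVirt uses:

    # show the NFS mount options in use for each storage domain;
    # look for vers=3, vers=4.1, etc. in the output
    grep /rhev/data-center /proc/mounts

This does not change anything; it only reports what is currently mounted.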
On 15-Mar-2018 09:59:30 +0100, eshenitz at redhat.com wrote: I am not sure what you mean, Can you please try to explain what is the difference between "VMs domain" and "hosted storage domain" according to you? Thanks, On Thu, Mar 15, 2018 at 10:45 AM, wrote: Thanks for your answer. And to use V4.1 instead of V3 on a domain, do I just have to disconnect it and change its settings? Seems to be easy to do with VMs domains, but how to do it with the hosted storage domain? Regards On 14-Mar-2018 11:54:52 +0100, eshenitz at redhat.com wrote: Hi, NFS 4.1 is supported and working since version 3.6 (according to this bug fix [1]) [1] Support NFS v4.1 connections - https://bugzilla.redhat.com/show_bug.cgi?id=1283964 ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Regards, Eyal Shenitzky ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From eshenitz at redhat.com Thu Mar 15 09:17:24 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Thu, 15 Mar 2018 11:17:24 +0200 Subject: [ovirt-users] [4.2.2-1.el7.centos] Image locked and unending task In-Reply-To: <20180315090524.3E5B5E4474@smtp01.mail.de> References: <20180315090524.3E5B5E4474@smtp01.mail.de> Message-ID: Hi, Can you please specify the version of the engine and supply the engine.log and the vdsm.log? Moreover, can you please specify the steps that you did that led you to this issue? Thanks, On Thu, Mar 15, 2018 at 11:05 AM, wrote: > Hi, > > I tried to roll back to a snapshot on a VM, but the preview never ended. > > The task has been running for about 15 hours, with this state: > > { > "916b67fb-8808-43d2-850c-1c12650ccc49": { > "verb": "createVolume", > "code": 0, > "state": "finished", > "tag": "spm", > "result": { > "uuid": "d37ca118-820f-46a3-b99b-714018ea8b42" > }, > "message": "1 jobs completed successfully", > "id": "916b67fb-8808-43d2-850c-1c12650ccc49" > } > } > > I just canceled it: the task list is now empty on the CLI but no change > on the GUI. > > So I restarted the engine VM, but no success. > > With "/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh" I was able > to manually unlock the image, but the task is still "finalizing". > > Is this a bug? > > Regards > > ------------------------------ > FreeMail powered by mail.fr > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From eshenitz at redhat.com Thu Mar 15 09:22:29 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Thu, 15 Mar 2018 11:22:29 +0200 Subject: [ovirt-users] NFS 4.1 support and migration In-Reply-To: <20180315091207.2A5E3E4476@smtp01.mail.de> References: <20180315091207.2A5E3E4476@smtp01.mail.de> Message-ID: You can edit the storage domain settings after the storage domain is deactivated (entered maintenance mode). On Thu, Mar 15, 2018 at 11:12 AM, wrote: > In fact I don't really know how to change storage domains settings (like > nfs version or export path, ...), if it is only possible.
> > I thought they could be disabled after stopping all related VMS, and maybe > the settings panel would then unlock? > > But this should be impossible with the hosted engine dedicated storage domain > as it is required for the GUI itself. > > So I am stuck. > > On 15-Mar-2018 09:59:30 +0100, eshenitz at redhat.com wrote: > > > I am not sure what you mean, > Can you please try to explain what is the difference between "VMs domain" > and "hosted storage domain" according to you? > > Thanks, > > On Thu, Mar 15, 2018 at 10:45 AM, wrote: > >> Thanks for your answer. >> >> And to use V4.1 instead of V3 on a domain, do I just have to disconnect >> it and change its settings? Seems to be easy to do with VMs domains, but >> how to do it with the hosted storage domain? >> >> Regards >> >> >> >> On 14-Mar-2018 11:54:52 +0100, eshenitz at redhat.com wrote: >> >> >> Hi, >> >> NFS 4.1 is supported and working since version 3.6 (according to this bug >> fix [1]) >> >> [1] Support NFS v4.1 connections - https://bugzilla.redhat.com/ >> show_bug.cgi?id=1283964 >> >> >> ------------------------------ >> FreeMail powered by mail.fr >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > Regards, > Eyal Shenitzky > > > ------------------------------ > FreeMail powered by mail.fr > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From eshenitz at redhat.com Thu Mar 15 09:51:04 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Thu, 15 Mar 2018 11:51:04 +0200 Subject: [ovirt-users] [4.2.2-1.el7.centos] Image locked and unending task In-Reply-To: <20180315093535.79298E446D@smtp01.mail.de> References: <20180315093535.79298E446D@smtp01.mail.de> Message-ID: Thank you for sending the logs. According to the logs, it seems that you had some connectivity issue while you tried to preview the snapshot. The preview operation rolled back but according to you failed to finish. It seems like you still have a connectivity issue with that host ('pfm-srv-virt-1.pfm-ad.pfm.loc'), try to see what happens to it. Here is the relevant part from the log: 2018-03-14 17:00:48,652+01 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Heartbeat exceeded for host 'pfm-srv-virt-1.pfm-ad.pfm.loc', last response arrived 2003 ms ago. 2018-03-14 17:00:53,561+01 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to pfm-srv-virt-1.pfm-ad.pfm.loc/10.100.1.50 2018-03-14 17:02:21,832+01 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (EE-ManagedThreadFactory-engine-Thread-118906) [] transaction rolled back 2018-03-14 17:02:21,836+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-118906) [] EVENT_ID: USER_TRY_BACK_TO_SNAPSHOT_FINISH_FAILURE(99), Failed to complete Snapshot-Preview AFTER_INSTALL for VM pfm-ltsp-1.
2018-03-14 17:02:21,836+01 ERROR [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-118906) [] [within thread]: endAction for action type TryBackToAllSnapshotsOfVm threw an exception.: java.lang.NullPointerException at org.ovirt.engine.core.bll.snapshots.SnapshotsManager.deviceCanBeRemoved(SnapshotsManager.java:463) [bll.jar:] at org.ovirt.engine.core.bll.snapshots.SnapshotsManager.attempToRestoreVmConfigurationFromSnapshot(SnapshotsManager.java:415) [bll.jar:] at org.ovirt.engine.core.bll.snapshots.TryBackToAllSnapshotsOfVmCommand.restoreVmConfigFromSnapshot(TryBackToAllSnapshotsOfVmCommand.java:204) [bll.jar:] at org.ovirt.engine.core.bll.snapshots.TryBackToAllSnapshotsOfVmCommand.endSuccessfully(TryBackToAllSnapshotsOfVmCommand.java:168) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.internalEndSuccessfully(CommandBase.java:675) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.endActionInTransactionScope(CommandBase.java:630) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1936) [bll.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:202) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInRequired(TransactionSupport.java:137) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:105) [utils.jar:] at org.ovirt.engine.core.bll.CommandBase.endAction(CommandBase.java:495) [bll.jar:] at org.ovirt.engine.core.bll.tasks.DecoratedCommand.endAction(DecoratedCommand.java:17) [bll.jar:] at org.ovirt.engine.core.bll.tasks.CoCoAsyncTaskHelper.endAction(CoCoAsyncTaskHelper.java:353) [bll.jar:] at org.ovirt.engine.core.bll.tasks.CommandCoordinatorImpl.endAction(CommandCoordinatorImpl.java:347) [bll.jar:] at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.endCommandAction(CommandAsyncTask.java:160) [bll.jar:] at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.lambda$endActionIfNecessary$0(CommandAsyncTask.java:112) [bll.jar:] at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:96) [utils.jar:] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_161] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_161] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_161] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_161] at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161] at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent-1.0.jar:] at org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ElytronManagedThread.run(ElytronManagedThreadFactory.java:78) 2018-03-14 17:02:21,838+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-118906) [] CommandAsyncTask::HandleEndActionResult: endAction for action type 'TryBackToAllSnapshotsOfVm' threw an unrecoverable RuntimeException the task will be cleared. 2018-03-14 17:02:21,841+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-118906) On Thu, Mar 15, 2018 at 11:35 AM, wrote: > Thanks for your reply. 
> > Yesterday, I realized I was doing nothing good with some of the software I > planned to install in a VM, so I tried to revert to a snapshot I took just > after OS installation, as I always do. > > As I had added a second disk to the VM in between, I chose to revert to > the snapshot without taking care of the second disk contents. > > But the preview operation never ended. So I restarted the engine vm but > nothing changed. > > This morning I tried to clean up things, using "taskcleaner" and > "unlock_entity". I could regain control over the VM, but the task is still > in "finalizing" state in the GUI. > > I even removed the second disk to see if it was better, but nothing. > > You will find the engine logfile and the "vdsm.log" from the server the > task is running on. > > I am not sure how to check the engine version precisely, so I queried the rpm > database in the vm: ovirt-engine-4.2.2-1.el7.centos.noarch > > Regards > > > > > On 15-Mar-2018 10:17:59 +0100, eshenitz at redhat.com wrote: > > > Hi, > > Can you please specify the version of the engine and supply the engine.log > and the vdsm.log? > > Moreover, can you please specify the steps that you did that led you to > this issue? > > Thanks, > > On Thu, Mar 15, 2018 at 11:05 AM, wrote: > >> Hi, >> >> I tried to roll back to a snapshot on a VM, but the preview never ended. >> >> The task has been running for about 15 hours, with this state: >> >> { >> "916b67fb-8808-43d2-850c-1c12650ccc49": { >> "verb": "createVolume", >> "code": 0, >> "state": "finished", >> "tag": "spm", >> "result": { >> "uuid": "d37ca118-820f-46a3-b99b-714018ea8b42" >> }, >> "message": "1 jobs completed successfully", >> "id": "916b67fb-8808-43d2-850c-1c12650ccc49" >> } >> } >> >> I just canceled it: the task list is now empty on the CLI but no change >> on the GUI. >> >> So I restarted the engine VM, but no success. >> >> With "/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh" I was able >> to manually unlock the image, but the task is still "finalizing". >> >> Is this a bug? >> >> Regards >> >> ------------------------------ >> FreeMail powered by mail.fr >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > Regards, > Eyal Shenitzky > > > ------------------------------ > FreeMail powered by mail.fr -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From danken at redhat.com Thu Mar 15 10:21:52 2018 From: danken at redhat.com (Dan Kenigsberg) Date: Thu, 15 Mar 2018 12:21:52 +0200 Subject: [ovirt-users] ovirt 4.2.2-rc3 switching from legacy to OVS.. In-Reply-To: <4418d2d0-73d4-aced-049b-5a2971a91274@lbl.gov> References: <031b026d-66ec-14a0-9bec-7a4b0e717556@lbl.gov> <4418d2d0-73d4-aced-049b-5a2971a91274@lbl.gov> Message-ID: On Thu, Mar 15, 2018 at 1:50 AM, Thomas Davis wrote: > Well, I just hit > > https://bugzilla.redhat.com/show_bug.cgi?id=1513991 > > And it's been closed, which means with vdsm-4.20.17-1.el7.centos.x86_64 > OVS networking is totally borked.. You are welcome to reopen that bug, specifying your use case for OvS. I cannot promise fixing this bug, as our resources are limited, and that bug, which was introduced in 4.2, was not deemed as urgently needed. https://gerrit.ovirt.org/#/c/86932/ attempts to fix the bug, but it still needs a lot of work.
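A hedged aside for anyone who needs to step an affected cluster back to the legacy switch while the fix is pending: the cluster's switch type is exposed through the engine REST API. In the sketch below, the engine host name, credentials and cluster id are placeholders, and the engine may refuse the change on a populated cluster:

    # inspect the current switch type of a cluster
    curl -k -u 'admin@internal:PASSWORD' \
        'https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID' | grep switch_type

    # request a change back to the legacy (Linux bridge) switch
    curl -k -u 'admin@internal:PASSWORD' -X PUT -H 'Content-Type: application/xml' \
        -d '<cluster><switch_type>legacy</switch_type></cluster>' \
        'https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID'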
> > I know OVS is Experimental, but it worked in 4.1.x, and now we have to do a > step back to legacy bridge just to use 4.2.x, which in a vlan environment > just wreaks havoc (every VLAN needs a unique mac assigned to the bridge, > which vdsm does not do, so suddenly you get the kernel complaining about > seeing its mac address several times.) Could you elaborate on this issue? What is wrong with a bridge that learns its mac from its underlying device? What would you like Vdsm to do, in your opinion? You can file a bug (or even send a patch) if there is a functionality that you'd like to fix. > > There is zero documentation on how to use OVN instead of OVS. I hope that https://ovirt.org/develop/release-management/features/network/provider-physical-network/ can help. > thomas > > On 03/13/2018 09:22 AM, Thomas Davis wrote: >> >> I'll work on it some more. I have 2 different clusters in the data center >> (1 is the Hosted Engine systems, another is not..) I had trouble with both. >> I'll try again on the non-hosted engine cluster to see what it is doing. I >> have it working in 4.1, but we are trying to do a clean wipe since the 4.1 >> engine has been upgraded so many times from v3.5 plus we want to move to >> hosted-engine-ha from a single engine node and the ansible modules/roles >> (which also have problems..) >> >> thomas >> >> On Tue, Mar 13, 2018 at 6:27 AM, Edward Haas > > wrote: >> >> >> OVS switch support is experimental at this stage and in some cases >> when trying to change from one switch to the other, it fails. >> It was also not checked against a hosted engine setup, which handles >> networking a bit differently for the management network (ovirtmgmt). >> Nevertheless, we are interested in understanding all the problems >> that exist today, so if you can, please share the supervdsm log, it >> has the interesting networking traces. >> >> We plan to block cluster switch editing until these problems are >> resolved. It will only be allowed to define a new cluster as OVS, >> not to convert an existing one from Linux Bridge to OVS. >> >> On Fri, Mar 9, 2018 at 9:54 AM, Thomas Davis > > wrote: >> >> I'm getting further along with 4.2.2rc3 than the 4.2.1 when it >> comes to hosted engine and vlans.. it actually does install >> under 4.2.2rc3. >> >> But it's a complete failure when I switch the cluster from Linux >> Bridge/Legacy to OVS. The first time I try, vdsm does >> not properly configure the node, it's all messed up.
>> >> I'm getting this in vdsmd logs: >> >> 2018-03-08 23:12:46,610-0800 INFO (jsonrpc/7) [api.network] >> START setupNetworks(networks={u'ovirtmgmt': {u'ipv6autoconf': >> True, u'nic': u'eno1', u'vlan': u'50', u'ipaddr': >> u'192.168.85.49', u'switch': u'ovs', u'mtu': 1500, u'netmask': >> u'255.255.252.0', u'dhcpv6': False, u'STP': u'no', u'bridged': >> u'true', u'gateway': u'192.168.85.254', u'defaultRoute': True}}, >> bondings={}, options={u'connectivityCheck': u'true', >> u'connectivityTimeout': 120}) from=::ffff:192.168.85.24,56806, >> flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:46) >> >> 2018-03-08 23:12:52,449-0800 INFO (jsonrpc/2) >> [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 >> seconds (__init__:573) >> >> 2018-03-08 23:12:52,511-0800 INFO (jsonrpc/7) [api.network] >> FINISH setupNetworks error=[Errno 19] ovirtmgmt is not present >> in the system from=::ffff:192.168.85.24,56806, >> flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:50) >> 2018-03-08 23:12:52,512-0800 ERROR (jsonrpc/7) >> [jsonrpc.JsonRpcServer] Internal server error (__init__:611) >> Traceback (most recent call last): >> File >> "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line >> 606, in _handle_request >> res = method(**params) >> File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", >> line 201, in _dynamicMethod >> result = fn(*methodArgs) >> File "", line 2, in setupNetworks >> File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", >> line 48, in method >> ret = func(*args, **kwargs) >> File "/usr/lib/python2.7/site-packages/vdsm/API.py", line >> 1527, in setupNetworks >> supervdsm.getProxy().setupNetworks(networks, bondings, >> options) >> File >> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", >> line 55, in __call__ >> return callMethod() >> File >> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", >> line 53, in >> **kwargs) >> File "", line 2, in setupNetworks >> File "/usr/lib64/python2.7/multiprocessing/managers.py", line >> 773, in _callmethod >> raise convert_to_error(kind, result) >> IOError: [Errno 19] ovirtmgmt is not present in the system >> 2018-03-08 23:12:52,512-0800 INFO (jsonrpc/7) >> [jsonrpc.JsonRpcServer] RPC call Host.setupNetworks failed >> (error -32603) in 5.90 seconds (__init__:573) >> 2018-03-08 23:12:54,769-0800 INFO (jsonrpc/1) >> [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 >> seconds (__init__:573) >> 2018-03-08 23:12:54,772-0800 INFO (jsonrpc/5) [api.host] START >> getCapabilities() from=::1,45562 (api:46) >> 2018-03-08 23:12:54,906-0800 INFO (jsonrpc/5) [api.host] FINISH >> getCapabilities error=[Errno 19] ovirtmgmt is not present in the >> system from=::1,45562 (api:50) >> 2018-03-08 23:12:54,906-0800 ERROR (jsonrpc/5) >> [jsonrpc.JsonRpcServer] Internal server error (__init__:611) >> Traceback (most recent call last): >> File >> "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line >> 606, in _handle_request >> res = method(**params) >> File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", >> line 201, in _dynamicMethod >> result = fn(*methodArgs) >> File "", line 2, in getCapabilities >> File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", >> line 48, in method >> ret = func(*args, **kwargs) >> File "/usr/lib/python2.7/site-packages/vdsm/API.py", line >> 1339, in getCapabilities >> c = caps.get() >> File "/usr/lib/python2.7/site-packages/vdsm/host/caps.py", >> line 168, in get >> net_caps = supervdsm.getProxy().network_caps() >> File >> 
"/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", >> line 55, in __call__ >> return callMethod() >> File >> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", >> line 53, in >> **kwargs) >> File "", line 2, in network_caps >> File "/usr/lib64/python2.7/multiprocessing/managers.py", line >> 773, in _callmethod >> raise convert_to_error(kind, result) >> IOError: [Errno 19] ovirtmgmt is not present in the system >> >> So something is dreadfully wrong with the bridge to ovs >> conversion in 4.2.2rc3. >> >> thomas >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> >> >> > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From didi at redhat.com Thu Mar 15 10:37:17 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Thu, 15 Mar 2018 12:37:17 +0200 Subject: [ovirt-users] oVirt Engine 4.2 -> 4.2.1 Upgrade In-Reply-To: References: Message-ID: On Thu, Mar 15, 2018 at 8:33 AM, Yedidyah Bar David wrote: > On Wed, Mar 14, 2018 at 6:56 PM, Andrei Verovski wrote: >> Hi ! >> >> After oVirt Engine 4.2 -> 4.2.1 Upgrade do I need to run these commands as described in this article >> https://www.ovirt.org/release/4.2.1/ > > Please note that the start of this section says: > > "For databases managed by engine-setup this is performed > automatically, but non-managed databases (usually remote databases) > this needs to be done manually by administrators." > > Is your database remote or managed manually, or local and managed by > engine-setup? If latter, it's enough to run 'engine-setup'. > >> >> su - postgres -c "scl enable rh-postgresql95 ? psql -d engine" >> postgres=# DROP FUNCTION IF EXISTS uuid_generate_v1(); >> postgres=# CREATE EXTENSION "uuid-ossp?; > > Sorry, this is a bug in the documentation. The commands should be: > > su - postgres -c "scl enable rh-postgresql95 -- psql -d engine" > postgres=# DROP FUNCTION IF EXISTS uuid_generate_v1(); > postgres=# CREATE EXTENSION "uuid-ossp"; > >> >> BTW, this yelds error: >> [root at node00 ~]# su - postgres -c "scl enable rh-postgresql95 ? psql -d engine" >> Unable to open /etc/scl/conf/?! > > Indeed. > > The reason that this bug happened is that the release notes are (partially) > auto-generated, where this part is taken from the doc-text of the linked bug: > > https://bugzilla.redhat.com/show_bug.cgi?id=1515635 > > The text there is ok. > > We should somehow make the resultant markdown mark such doc text as > pre-formatted, > to prevent it from replacing '--' with '?'. Adding Sandro for this. > > Thanks for the report! This should fix 4.2.1 release notes page: https://github.com/oVirt/ovirt-site/pull/1552 This should fix similar cases for future release notes pages: https://gerrit.ovirt.org/89033 Best regards, > >> >> node00 is a dedicated PC with CentOS and oVirt Host Engine. > > Sounds to me like you are in the "this will happen automatically" case. > > Best regards, > -- > Didi -- Didi From spfma.tech at e.mail.fr Thu Mar 15 10:38:11 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Thu, 15 Mar 2018 11:38:11 +0100 Subject: [ovirt-users] [4.2.2-1.el7.centos] Image locked and unending task In-Reply-To: References: Message-ID: <20180315103811.B449DE4474@smtp01.mail.de> Thanks for that quick answer ! Yes indeed I had some connectivity troubles on this server, a strange bonding problem I am investigating on since yesterday. 
But with just one link, it is working ok, I have no similar errors after the ones you saw. What can I do to really remove the task from the list? Manual database cleanup? On 15-Mar-2018 11:15:40 +0100, eshenitz at redhat.com wrote: Thank you for sending the logs. According to the logs, it seems that you had some connectivity issue while you tried to preview the snapshot. The preview operation rolled back but according to you failed to finish. It seems like you still have a connectivity issue with that host ('pfm-srv-virt-1.pfm-ad.pfm.loc'), try to see what happens to it. Here is the relevant part from the log: 2018-03-14 17:00:48,652+01 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Heartbeat exceeded for host 'pfm-srv-virt-1.pfm-ad.pfm.loc', last response arrived 2003 ms ago. 2018-03-14 17:00:53,561+01 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to pfm-srv-virt-1.pfm-ad.pfm.loc/10.100.1.50 2018-03-14 17:02:21,832+01 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (EE-ManagedThreadFactory-engine-Thread-118906) [] transaction rolled back 2018-03-14 17:02:21,836+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-118906) [] EVENT_ID: USER_TRY_BACK_TO_SNAPSHOT_FINISH_FAILURE(99), Failed to complete Snapshot-Preview AFTER_INSTALL for VM pfm-ltsp-1. 2018-03-14 17:02:21,836+01 ERROR [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-118906) [] [within thread]: endAction for action type TryBackToAllSnapshotsOfVm threw an exception.: java.lang.NullPointerException at org.ovirt.engine.core.bll.snapshots.SnapshotsManager.deviceCanBeRemoved(SnapshotsManager.java:463) [bll.jar:] at org.ovirt.engine.core.bll.snapshots.SnapshotsManager.attempToRestoreVmConfigurationFromSnapshot(SnapshotsManager.java:415) [bll.jar:] at org.ovirt.engine.core.bll.snapshots.TryBackToAllSnapshotsOfVmCommand.restoreVmConfigFromSnapshot(TryBackToAllSnapshotsOfVmCommand.java:204) [bll.jar:] at org.ovirt.engine.core.bll.snapshots.TryBackToAllSnapshotsOfVmCommand.endSuccessfully(TryBackToAllSnapshotsOfVmCommand.java:168) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.internalEndSuccessfully(CommandBase.java:675) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.endActionInTransactionScope(CommandBase.java:630) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1936) [bll.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:202) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInRequired(TransactionSupport.java:137) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:105) [utils.jar:] at org.ovirt.engine.core.bll.CommandBase.endAction(CommandBase.java:495) [bll.jar:] at org.ovirt.engine.core.bll.tasks.DecoratedCommand.endAction(DecoratedCommand.java:17) [bll.jar:] at org.ovirt.engine.core.bll.tasks.CoCoAsyncTaskHelper.endAction(CoCoAsyncTaskHelper.java:353) [bll.jar:] at org.ovirt.engine.core.bll.tasks.CommandCoordinatorImpl.endAction(CommandCoordinatorImpl.java:347) [bll.jar:] at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.endCommandAction(CommandAsyncTask.java:160) [bll.jar:] at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.lambda$endActionIfNecessary$0(CommandAsyncTask.java:112) [bll.jar:] at
org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:96) [utils.jar:] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_161] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_161] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_161] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_161] at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161] at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent-1.0.jar:] at org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ElytronManagedThread.run(ElytronManagedThreadFactory.java:78) 2018-03-14 17:02:21,838+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-118906) [] CommandAsyncTask::HandleEndActionResult: endAction for action type 'TryBackToAllSnapshotsOfVm' threw an unrecoverable RuntimeException the task will be cleared. 2018-03-14 17:02:21,841+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-118906) On Thu, Mar 15, 2018 at 11:35 AM, wrote: Thanks for your reply. Yesterday, I realized I was doing nothing good with some the software I planed to install in a VM, so I tried to revert to a snapshot a took just after OS installation, as I always do. As I had added a second disk to the VM in between, I choose to revert to snapshot without taking care of the second disk contents. But the preview operation never ended. So I restarted the engine vm but nothing changed. This morning I tried to cleanup things, using "taskcleaner" and "unlock_entity". I could regain control over the VM, but the task is still in "finalizing" state in the GUI. I even remove the second disk to see if it was better, but nothing. You will find the engine logfile and the "vdsm.log" from the server the task is running on. I am not sure how to check engine version precisely, so I queried the rpm database in the vm : ovirt-engine-4.2.2-1.el7.centos.noarch Regards Le 15-Mar-2018 10:17:59 +0100, eshenitz at redhat.com a crit: Hi, Can you please specify the version of the engine and supply the engine.log and the vdsm.log? Moreover, can you please specify the steps that you did that led you to this issue? Thanks, On Thu, Mar 15, 2018 at 11:05 AM, wrote: Hi, I tried to rollback to a snapshot on a VM, but the preview never ended. The task has been running for about 15 hours, with this state : { "916b67fb-8808-43d2-850c-1c12650ccc49": { "verb": "createVolume", "code": 0, "state": "finished", "tag": "spm", "result": { "uuid": "d37ca118-820f-46a3-b99b-714018ea8b42" }, "message": "1 jobs completed successfully", "id": "916b67fb-8808-43d2-850c-1c12650ccc49" } } I just canceled it : the task list is now empty on the CLI but no change on GUI. So I restared the engine VM, but no success. With "/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh" I was able to manually unlock the image, but the task is still "finalizing". Is this a bug ? 
Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Regards, Eyal Shenitzky ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -- Regards, Eyal Shenitzky ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholler at redhat.com Thu Mar 15 10:59:50 2018 From: dholler at redhat.com (Dominik Holler) Date: Thu, 15 Mar 2018 11:59:50 +0100 Subject: [ovirt-users] Ovirt and OVH vRack In-Reply-To: References: Message-ID: <20180315115950.14f3ee04@t460p> On Thu, 15 Mar 2018 09:09:41 +0000 Fabien Carr? wrote: > Hello, > I am trying to set up an oVirt environment using OVH servers. So far > I have installed an engine (4.2) and a node. The node is attached to > a NAS through a vRack > https://www.ovh.co.uk/solutions/vrack/network-technology.xml. I am > using one vlan to connect it, which is the management network > > > However it is not a perfect setup (cf attached screenshot). The > management network is in "Out-of-Sync" state. > on the node : > # ip addr > 28: eno4.100 at eno4: mtu 1500 qdisc > noqueue master ovirtmgmt state UP qlen 1000 > link/ether 0c:c4:ff:7a:6c:13 brd ff:ff:ff:ff:ff:ff > 29: ovirtmgmt: mtu 1500 qdisc > noqueue state UP qlen 1000 > link/ether 0c:c4:ff:7a:6c:13 brd ff:ff:ff:ff:ff:ff > inet 10.100.0.11/24 brd 10.100.0.255 scope global ovirtmgmt > valid_lft forever preferred_lft forever > > > [image: Screenshot from 2018-03-15 09-03-37.png] > It does not seem possible to add extra vlan. I wanted to have one for > the vms and one for the hosts. > Can you give me some help or guidance ? > > Also generally speaking do you think such a setup is fine ? > > Thank you > Fabien If you are limited to a single VLAN, I would use this as a single logical network in oVirt. To isolate the traffic between VMs, I would use external logical networks on the ovirt-provider-ovn. From pbrilla at redhat.com Thu Mar 15 11:10:13 2018 From: pbrilla at redhat.com (Pavol Brilla) Date: Thu, 15 Mar 2018 12:10:13 +0100 Subject: [ovirt-users] Issue... In-Reply-To: References: Message-ID: Is your highlighted VM up? I just check in 4.2 and Change CD is greyed out for VMs which are down, I have it ok for VM which are UP On Tue, Mar 13, 2018 at 11:06 AM, Nasrum Minallah Manzoor < NasrumMinallah9 at hotmail.com> wrote: > Hi, > > > > I am facing issue when I click on ?change CD? option in ovirt?s engine it > doesn?t works. It was working fine, I don?t how it stopped working! > > > > Any suggestions guys! > > > > > > > > Regards, > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- PAVOL BRILLA RHV QUALITY ENGINEER, CLOUD Red Hat Czech Republic, Brno TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL: From awels at redhat.com Thu Mar 15 12:38:57 2018 From: awels at redhat.com (Alexander Wels) Date: Thu, 15 Mar 2018 08:38:57 -0400 Subject: [ovirt-users] improvement for web ui during the create template stage. 
In-Reply-To: <1521062392.6088.138.camel@province-sud.nc> References: <1521000255.6088.116.camel@province-sud.nc> <5595754.jVs0GWpncT@awels> <1521062392.6088.138.camel@province-sud.nc> Message-ID: <1994720.iLrxEI1RTZ@awels> On Wednesday, March 14, 2018 5:19:55 PM EDT Nicolas Vaye wrote: > Hi, > > I thought it was the problem. > I did a test again and i have recorded the test (in attachment). > What is the problem ? > > Regards, > > Nicolas VAYE > Interesting, as that name is not long enough to trigger the name length validation. You have 64 characters total for the name. I just tried your scenario on the latest master branch, and it worked as expected, it created the template from the snapshot without issues with that exact same name. I don't see any recent changes to the frontend code for that dialog either. If you look in the engine.log does it say anything? I can only assume some validation is failing, and that validation message is not properly propagated to the frontend, but it should show something in the backend log regardless. > -------- Message initial -------- > > Date: Wed, 14 Mar 2018 08:36:48 -0400 > Objet: Re: [ovirt-users] improvement for web ui during the create template > stage. ?: users at ovirt.org, Nicolas Vaye > nce-sud.nc%3e>> De: Alexander Wels > > > On Wednesday, March 14, 2018 12:04:18 AM EDT Nicolas Vaye wrote: > > > Hi, > I 'have 2 ovirt node with HE in version 4.2.1.7-1. > > If i make a template from a VM's snapshot in the web ui, there is a form ui > to enter several parameter > > > [cid:1521000255.509.1.camel at province-sud.nc rovince-sud.nc>] > > > if the name of the template is missing and if we clic on the OK button, > there is an highlighting red border on the name to indicate the problem. > if i enter a long name for the template and if we clic on the OK button, > nothing happend, and there is no highlight or error message to indicate > there is a problem with the long name. > Could you improve that ? > > Thanks, > > Regards, > > Nicolas VAYE > > > > > It appears to me it already does that, this a screenshot of me putting in a > long template name, and it is highlighted right and if I hover I see a > tooltip explaining I can't have more than 64 characters. > From okane at khm.de Thu Mar 15 12:32:06 2018 From: okane at khm.de (Robert O'Kane) Date: Thu, 15 Mar 2018 13:32:06 +0100 Subject: [ovirt-users] ovirt 4.2.2-rc3 switching from legacy to OVS.. In-Reply-To: References: <031b026d-66ec-14a0-9bec-7a4b0e717556@lbl.gov> <4418d2d0-73d4-aced-049b-5a2971a91274@lbl.gov> Message-ID: <51058e4e-9a8d-0a1c-18ad-277b3cc39254@khm.de> Make sure STP is OFF for each bridge. Then the warnings go away. Cheers, Robert O'Kane On 03/15/2018 11:21 AM, Dan Kenigsberg wrote: >> I know OVS is Experimental, but it worked in 4.1.x, and now we have to do a >> step back to legacy bridge just to use 4.2.x, which in a vlan environment >> just wreaks havoc (every VLAN need's a unique mac assigned to the bridge, >> which vdsm does not do, so suddenly you get the kernel complaining about >> seeing it's mac address several times.) > Could you elaborate on this issue? What is wrong with a bridge that > learns its mac from its underlying device? What wold like Vdsm to do, > in your opinion? You can file a bug (or even send a patch) if there is > a functionality that you'd like to fix. 
> -- Robert O'Kane Systems Administrator Kunsthochschule f?r Medien K?ln Peter-Welter-Platz 2 50676 K?ln fon: +49(221)20189-223 fax: +49(221)20189-49223 From carre.fabien at gmail.com Thu Mar 15 12:48:13 2018 From: carre.fabien at gmail.com (=?UTF-8?Q?Fabien_Carr=C3=A9?=) Date: Thu, 15 Mar 2018 12:48:13 +0000 Subject: [ovirt-users] Ovirt and OVH vRack In-Reply-To: <20180315115950.14f3ee04@t460p> References: <20180315115950.14f3ee04@t460p> Message-ID: Hello, Thank you for the answer. I'll try your solution. Fabien On Thu, 15 Mar 2018 at 11:59 Dominik Holler wrote: > On Thu, 15 Mar 2018 09:09:41 +0000 > Fabien Carr? wrote: > > > Hello, > > I am trying to set up an oVirt environment using OVH servers. So far > > I have installed an engine (4.2) and a node. The node is attached to > > a NAS through a vRack > > https://www.ovh.co.uk/solutions/vrack/network-technology.xml. I am > > using one vlan to connect it, which is the management network > > > > > > However it is not a perfect setup (cf attached screenshot). The > > management network is in "Out-of-Sync" state. > > on the node : > > # ip addr > > 28: eno4.100 at eno4: mtu 1500 qdisc > > noqueue master ovirtmgmt state UP qlen 1000 > > link/ether 0c:c4:ff:7a:6c:13 brd ff:ff:ff:ff:ff:ff > > 29: ovirtmgmt: mtu 1500 qdisc > > noqueue state UP qlen 1000 > > link/ether 0c:c4:ff:7a:6c:13 brd ff:ff:ff:ff:ff:ff > > inet 10.100.0.11/24 brd 10.100.0.255 scope global ovirtmgmt > > valid_lft forever preferred_lft forever > > > > > > [image: Screenshot from 2018-03-15 09-03-37.png] > > It does not seem possible to add extra vlan. I wanted to have one for > > the vms and one for the hosts. > > Can you give me some help or guidance ? > > > > Also generally speaking do you think such a setup is fine ? > > > > Thank you > > Fabien > > > If you are limited to a single VLAN, I would use this as a single > logical network in oVirt. To isolate the traffic between VMs, I > would use external logical networks on the ovirt-provider-ovn. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spfma.tech at e.mail.fr Thu Mar 15 13:25:14 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Thu, 15 Mar 2018 14:25:14 +0100 Subject: [ovirt-users] NFS 4.1 support and migration In-Reply-To: References: Message-ID: <20180315132514.65118E4473@smtp01.mail.de> Thanks, I totally missed that :-/ And this wil also work for the hosted engine dedicated domain, putting the storage domain the virtual machine is depending on in maintenance ? Le 15-Mar-2018 10:38:48 +0100, eshenitz at redhat.com a crit: You can edit the storage domain setting after the storage domain deactivated (entered to maintenance mode). On Thu, Mar 15, 2018 at 11:12 AM, wrote: In fact I don't really know how to change storage domains setttings (like nfs version or export path, ...), if it is only possible. I thought they could be disabled after stopping all related VMS, and maybe settings panel would then unlock ? But this should be impossible with hosted engine dedicated storage domain as it is required for the GUI itself. So I am stuck. Le 15-Mar-2018 09:59:30 +0100, eshenitz at redhat.com a crit: I am not sure what you mean, Can you please try to explain what is the difference between "VMs domain" to "hosted storage domain" according to you? Thanks, On Thu, Mar 15, 2018 at 10:45 AM, wrote: Thanks for your answer. And to use V4.1 instead of V3 on a domain, do I just have to disconnect it and change its settings ? 
Seems to be easy to do with VMs domains, but how to do it with hosted storage domain ? Regards Le 14-Mar-2018 11:54:52 +0100, eshenitz at redhat.com a crit: Hi, NFS 4.1 supported and working since version 3.6 (according to this bug fix [1]) [1] Support NFS v4.1 connections - https://bugzilla.redhat.com/show_bug.cgi?id=1283964 ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Regards, Eyal Shenitzky ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Regards, Eyal Shenitzky ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From eshenitz at redhat.com Thu Mar 15 13:30:06 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Thu, 15 Mar 2018 15:30:06 +0200 Subject: [ovirt-users] NFS 4.1 support and migration In-Reply-To: <20180315132514.65118E4473@smtp01.mail.de> References: <20180315132514.65118E4473@smtp01.mail.de> Message-ID: Have to admit that I didn't play with the hosted engine thing, but maybe you can find the answers in the documentation: - https://ovirt.org/documentation/self-hosted/Self-Hosted_Engine_Guide/ - https://www.ovirt.org/documentation/how-to/hosted-engine/ - https://ovirt.org/develop/release-management/features/sla/self-hosted-engine/ On Thu, Mar 15, 2018 at 3:25 PM, wrote: > Thanks, I totally missed that :-/ > > And this wil also work for the hosted engine dedicated domain, putting the > storage domain the virtual machine is depending on in maintenance ? > > > > Le 15-Mar-2018 10:38:48 +0100, eshenitz at redhat.com a ?crit: > > > You can edit the storage domain setting after the storage domain > deactivated (entered to maintenance mode). > > > On Thu, Mar 15, 2018 at 11:12 AM, wrote: > >> In fact I don't really know how to change storage domains setttings (like >> nfs version or export path, ...), if it is only possible. >> >> I thought they could be disabled after stopping all related VMS, and >> maybe settings panel would then unlock ? >> >> But this should be impossible with hosted engine dedicated storage domain >> as it is required for the GUI itself. >> >> So I am stuck. >> >> Le 15-Mar-2018 09:59:30 +0100, eshenitz at redhat.com a ?crit: >> >> >> I am not sure what you mean, >> Can you please try to explain what is the difference between "VMs domain" >> to "hosted storage domain" according to you? >> >> Thanks, >> >> On Thu, Mar 15, 2018 at 10:45 AM, wrote: >> >>> Thanks for your answer. >>> >>> And to use V4.1 instead of V3 on a domain, do I just have to disconnect >>> it and change its settings ? Seems to be easy to do with VMs domains, but >>> how to do it with hosted storage domain ? 
>>> >>> Regards >>> >>> >>> >>> Le 14-Mar-2018 11:54:52 +0100, eshenitz at redhat.com a ?crit: >>> >>> >>> Hi, >>> >>> NFS 4.1 supported and working since version 3.6 (according to this bug >>> fix [1]) >>> >>> [1] Support NFS v4.1 connections - https://bugzilla.redhat.com/ >>> show_bug.cgi?id=1283964 >>> >>> >>> ------------------------------ >>> FreeMail powered by mail.fr >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> >> -- >> Regards, >> Eyal Shenitzky >> >> >> ------------------------------ >> FreeMail powered by mail.fr >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > Regards, > Eyal Shenitzky > > > ------------------------------ > FreeMail powered by mail.fr > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergey at vavilov.org Thu Mar 15 13:26:00 2018 From: sergey at vavilov.org (=?UTF-8?B?U2VyZ2V5IFZhdmlsb3Y=?=) Date: Thu, 15 Mar 2018 13:26:00 +0000 Subject: [ovirt-users] =?utf-8?q?how_to_run_command_in_windows_guest_via_a?= =?utf-8?q?gent=3F?= Message-ID: <20180315132600.C37FA2D002F6@eu.vavilov.org> Hello! I'm wondering how can I run a cmd.exe command (or powershell) inside windows guest virtual machine by ovirt-agent from outside (from ovirt-engine or from host)? What's ovirt's analogue for virsh qemu-agent-command ? Actually, I want to setup ip on virtual NIC. So I can't run a command via windows' protocols. ovirt-agent service works in windows virtual machine and successfully reports its state to ovirt engine. But it looks like ovirt-agent isn't suitable to run commands in virtual machines? Or am I wrong? Should I configure a coexistence of ovirt-agent and qemu-agent inside vm somehow? How did you solve such task before? Thank you, all! -- Sergey Vavilov ___________________________________ NOCC, http://nocc.sourceforge.net From gianluca.cecchi at gmail.com Thu Mar 15 14:26:16 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Thu, 15 Mar 2018 15:26:16 +0100 Subject: [ovirt-users] how to run command in windows guest via agent? In-Reply-To: <20180315132600.C37FA2D002F6@eu.vavilov.org> References: <20180315132600.C37FA2D002F6@eu.vavilov.org> Message-ID: On Thu, Mar 15, 2018 at 2:26 PM, Sergey Vavilov wrote: > Hello! > > I'm wondering > how can I run a cmd.exe command (or powershell) inside windows guest > virtual machine by > ovirt-agent from outside (from ovirt-engine or from host)? > What's ovirt's analogue for > virsh qemu-agent-command > ? > Actually, I want to setup ip on virtual NIC. > So I can't run a command via windows' protocols. > ovirt-agent service works in windows virtual machine and successfully > reports its state to ovirt engine. > But it looks like ovirt-agent isn't suitable to run commands in virtual > machines? > Or am I wrong? > Should I configure a coexistence of ovirt-agent and qemu-agent inside vm > somehow? > How did you solve such task before? > > Thank you, all! > > > -- > Sergey Vavilov > > Hi Sergey, I recently had a need for a Windows 2008 R2 x86_64 VM that I moved from vSphere to RHV (but the considerations below are the same for oVirt). 
In my case I had to run a backup of an Oracle database that is in NOARCHIVELOG mode, due to performance reasons: it is an RDBMS used for BI that is refreshed every day and doesn't need a point in time recovery but only a consistent state before the "new day" processing is run at 01:00 in the night. In vSphere the backup of the VM was implemented with Veeam, that uses snaphsot technology and interacts with VSS layer of M$ operating systems. Oracle fully supports on Windows the backup of database using VSS only if it is in ARCHIVELOG mode. https://docs.oracle.com/cd/E11882_01/win.112/e10845/vss.htm#NTQRF475 Veeam automatically executed shutdown / snapshot / start of the dabatase because I think it implements what in the link above is called as PRE_SQLCMD in OnPrepareBackup callback (through the VeeamGuestHelper.exe executable). Coming back to oVirt, the interaction with Windows VSS layer is indeed done by "QEMU guest agent"too; you have to install it together with the oVirt Guest Agent when you install oVirt Guest Tools So in my case I would need for Windows something similar to the "fsfreeze/thaw hooks" I have in Linux QEMU guest agent. But searching through the code it seems this functionality is only present in Linux component... In fact some days ago I wrote to the qemu-devel mailing list to ask for possible implementation: http://lists.nongnu.org/archive/html/qemu-devel/2018-03/msg02823.html No answer so far. Feel free (you and others) to contribute to the thread if you think it can help... It could also be useful to have at least writer control commands implementation in VSS interaction of QEMU Guest Agent in Windows, so that one could do something similar to what Veeam on vSphere already does... Based on what I have wrote above, I doubt you could interact at all through the QEMU guest agent on Windows to run any command... And also for Linux guests, I think you are restricted to the default actions (eg shutdown) or forced to use as a workaround the approach to run a snapshot (that then you delete) and "attach" the desired command to the freeze (or thaw) hook... HIH, Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjelinek at redhat.com Thu Mar 15 15:31:02 2018 From: tjelinek at redhat.com (Tomas Jelinek) Date: Thu, 15 Mar 2018 16:31:02 +0100 Subject: [ovirt-users] Cannot use noVNC in VM portal, oVirt 4.2.1.7 In-Reply-To: References: Message-ID: On Fri, Mar 9, 2018 at 10:46 PM, Chlipala, George Edward wrote: > Our oVirt installation (4.2.1.7) is configured to use noVNC as the default > console (ClientModeVncDefault = noVnc). This works perfectly fine in the > Administrator portal. However if a user logs in to the VM portal when they > click the VNC console option it generates a virt-viewer file (native VNC > client) instead of opening a noVNC session. We cannot seem to find any > options on the VM portal to use noVNC or set as the default. Is there > another option that we need to set to allow noVNC via the VM portal? > it is not yet there - the issue tracking it is here: https://github.com/oVirt/ovirt-web-ui/issues/490 > > > Thanks! > > > > George Chlipala, Ph.D. 
> > Associate Director, Core for Research Informatics > > Research Resources Center > > University of Illinois at Chicago > > phone: 312-413-1700 <(312)%20413-1700> > email: gchlip2 at uic.edu > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreil1 at starlett.lv Thu Mar 15 15:39:48 2018 From: andreil1 at starlett.lv (Andrei Verovski) Date: Thu, 15 Mar 2018 17:39:48 +0200 Subject: [ovirt-users] Warning: CentOS Upgrade with Host Engine - 503 Service Temporarily Unavailable Message-ID: <92209F67-B328-41A0-BA8E-09AF40905A32@starlett.lv> Hi ! I have upgraded CentOS 7.4 with oVirt 4.2.1 host engine (with yum upgrade), and its resulted in broken system - "503 Service Temporarily Unavailable" when connecting to the host engine via web. Service ovirt-engine failed to starts (logs attached at the bottom of this email), other ovirt services seem to run fine. yum update "ovirt-*-setup*? (upgrade 4.2 -> 4.2.1) engine-setup yum upgrade (OS upgrade) Is this issue somehow related to JDK as described here? https://bugzilla.redhat.com/show_bug.cgi?id=1217023 New packages: java-1.8.0-openjdk x86_64 1:1.8.0.161-0.b14.el7_4 java-1.8.0-openjdk-devel x86_64 1:1.8.0.161-0.b14.el7_4 java-1.8.0-openjdk-headless x86_64 1:1.8.0.161-0.b14.el7_4 installed packages (seem to be also 1.8x): [root at node00 ~]# rpm -qa | grep jdk java-1.8.0-openjdk-1.8.0.151-5.b12.el7_4.x86_64 java-1.8.0-openjdk-devel-1.8.0.151-5.b12.el7_4.x86_64 java-1.8.0-openjdk-headless-1.8.0.151-5.b12.el7_4.x86_64 copy-jdk-configs-2.2-5.el7_4.noarch Since my host engine is actually a KVM appliance under another SuSE server, I simply discarded it and reverted back old qcow2 image. So this is a warning to anyone - don?t upgrade CentOS, or at least keep a copy of disk image before ANY upgrade ! ********** LOGS ********* ****************************** [root at node00 ~]# service ovirt-engine status -l Redirecting to /bin/systemctl status -l ovirt-engine.service ? ovirt-engine.service - oVirt Engine Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled; vendor preset: disabled) Active: failed (Result: timeout) since Thu 2018-03-15 15:44:08 EET; 2min 18s ago Process: 1474 ExecStart=/usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py --redirect-output --systemd=notify $EXTRA_ARGS start (code=killed, signal=TERM) Main PID: 1474 (code=killed, signal=TERM) CGroup: /system.slice/ovirt-engine.service Mar 15 15:42:37 node00.myhost.lv systemd[1]: Starting oVirt Engine... Mar 15 15:42:50 node00.myhost.lv ovirt-engine.py[1474]: 2018-03-15 15:42:50,869+0200 ovirt-engine: INFO _detectJBossVersion:187 Detecting JBoss version. 
Running: /usr/lib/jvm/jre/bin/java ['ovirt-engine-version', '-server', '-XX:+TieredCompilation', '-Xms1024M', '-Xmx1024M', '-Djava.awt.headless=true', '-Dsun.rmi.dgc.client.gcInterval=3600000', '-Dsun.rmi.dgc.server.gcInterval=3600000', '-Djsse.enableSNIExtension=false', '-XX:+HeapDumpOnOutOfMemoryError', '-XX:HeapDumpPath=/var/log/ovirt-engine/dump', '-Djava.util.logging.manager=org.jboss.logmanager', '-Dlogging.configuration=file:///var/lib/ovirt-engine/jboss_runtime/config/ovirt-engine-logging.properties', '-Dorg.jboss.resolver.warning=true', '-Djboss.modules.system.pkgs=org.jboss.byteman', '-Djboss.server.default.config=ovirt-engine', '-Djboss.home.dir=/usr/share/ovirt-engine-wildfly', '-Djboss.server.base.dir=/usr/share/ovirt-engine', '-Djboss.server.data.dir=/var/lib/ovirt-engine', '-Djboss.server.log.dir=/var/log/ovirt-engine', '-Djboss.server.config.dir=/var/lib/ovirt-engine/jboss_runtime/config', '-Djboss.server.temp.dir=/var/lib/ovirt-engine/jboss_runtime/tmp', '-Djboss.controller.temp.dir=/var/lib/ovirt-engine/jboss_runtime/tmp', '-jar', '/usr/share/ovirt-engine-wildfly/jboss-modules.jar', '-mp', '/usr/share/ovirt-engine-wildfly-overlay/modules:/usr/share/ovirt-engine/modules/common:/usr/share/ovirt-engine-extension-aaa-jdbc/modules:/usr/share/ovirt-engine-wildfly/modules', '-jaxpmodule', 'javax.xml.jaxp-provider', 'org.jboss.as.standalone', '-v'] Mar 15 15:44:08 node00.myhost.lv systemd[1]: ovirt-engine.service start operation timed out. Terminating. Mar 15 15:44:08 node00.myhost.lv systemd[1]: Failed to start oVirt Engine. Mar 15 15:44:08 node00.myhost.lv systemd[1]: Unit ovirt-engine.service entered failed state. Mar 15 15:44:08 node00.myhost.lv systemd[1]: ovirt-engine.service failed. [root at node00 ~]# [root at node00 ~]# tail -n 20 /var/log/ovirt-engine/server.log 2018-03-15 15:39:03,196+02 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-4) Scheduler QuartzOvirtDBScheduler_$_NON_CLUSTERED shutting down. 2018-03-15 15:39:03,196+02 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-4) Scheduler QuartzOvirtDBScheduler_$_NON_CLUSTERED paused. 2018-03-15 15:39:03,201+02 INFO [org.wildfly.extension.undertow] (MSC service thread 1-2) WFLYUT0019: Host default-host stopping 2018-03-15 15:39:03,201+02 INFO [org.wildfly.extension.undertow] (MSC service thread 1-2) WFLYUT0004: Undertow 1.4.18.Final stopping 2018-03-15 15:39:03,202+02 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-4) Scheduler QuartzOvirtDBScheduler_$_NON_CLUSTERED shutdown complete. 
2018-03-15 15:39:03,213+02 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) WFLYJCA0010: Unbound data source [java:/DWHDataSource] 2018-03-15 15:39:03,213+02 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) WFLYJCA0010: Unbound data source [java:/ENGINEDataSource] 2018-03-15 15:39:03,226+02 INFO [org.jboss.as.clustering.infinispan] (MSC service thread 1-4) WFLYCLINF0003: Stopped timeout-base cache from ovirt-engine container 2018-03-15 15:39:03,228+02 INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-2) WFLYJCA0019: Stopped Driver service with driver-name = postgresql 2018-03-15 15:39:03,233+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-3) WFLYSRV0208: Stopped subdeployment (runtime-name: root.war) in 271ms 2018-03-15 15:39:03,234+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) WFLYSRV0208: Stopped subdeployment (runtime-name: enginesso.war) in 279ms 2018-03-15 15:39:03,234+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) WFLYSRV0208: Stopped subdeployment (runtime-name: welcome.war) in 279ms 2018-03-15 15:39:03,235+02 INFO [org.jboss.as.clustering.infinispan] (MSC service thread 1-4) WFLYCLINF0003: Stopped inventory cache from ovirt-engine container 2018-03-15 15:39:03,237+02 INFO [org.jboss.as.clustering.infinispan] (MSC service thread 1-4) WFLYCLINF0003: Stopped dashboard cache from ovirt-engine container 2018-03-15 15:39:03,237+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) WFLYSRV0208: Stopped subdeployment (runtime-name: webadmin.war) in 281ms 2018-03-15 15:39:03,237+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-5) WFLYSRV0208: Stopped subdeployment (runtime-name: docs.war) in 289ms 2018-03-15 15:39:03,234+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-3) WFLYSRV0208: Stopped subdeployment (runtime-name: services.war) in 267ms 2018-03-15 15:39:03,239+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-2) WFLYSRV0208: Stopped subdeployment (runtime-name: bll.jar) in 290ms 2018-03-15 15:39:03,242+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-5) WFLYSRV0028: Stopped deployment engine.ear (runtime-name: engine.ear) in 286ms 2018-03-15 15:39:03,265+02 INFO [org.jboss.as] (MSC service thread 1-5) WFLYSRV0050: WildFly Full 11.0.0.Final (WildFly Core 3.0.8.Final) stopped in 302ms [root at node00 ~]# [root at node00 ~]# tail -n 30 /var/log/ovirt-engine/server.log 2018-03-15 15:39:03,070+02 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 67) WFLYUT0022: Unregistered web context: '/ovirt-engine/sso' from server 'default-server' 2018-03-15 15:39:03,070+02 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 12) WFLYUT0022: Unregistered web context: '/ovirt-engine/webadmin' from server 'default-server' 2018-03-15 15:39:03,071+02 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 66) WFLYUT0022: Unregistered web context: '/ovirt-engine/docs' from server 'default-server' 2018-03-15 15:39:03,132+02 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 69) WFLYUT0022: Unregistered web context: '/ovirt-engine/services' from server 'default-server' 2018-03-15 15:39:03,187+02 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-4) Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED shutting down. 
2018-03-15 15:39:03,187+02 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-4) Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED paused. 2018-03-15 15:39:03,194+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-5) WFLYSRV0028: Stopped deployment ovirt-web-ui.war (runtime-name: ovirt-web-ui.war) in 233ms 2018-03-15 15:39:03,195+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-6) WFLYSRV0028: Stopped deployment apidoc.war (runtime-name: apidoc.war) in 233ms 2018-03-15 15:39:03,195+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) WFLYSRV0028: Stopped deployment restapi.war (runtime-name: restapi.war) in 247ms 2018-03-15 15:39:03,196+02 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-4) Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED shutdown complete. 2018-03-15 15:39:03,196+02 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-4) Scheduler QuartzOvirtDBScheduler_$_NON_CLUSTERED shutting down. 2018-03-15 15:39:03,196+02 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-4) Scheduler QuartzOvirtDBScheduler_$_NON_CLUSTERED paused. 2018-03-15 15:39:03,201+02 INFO [org.wildfly.extension.undertow] (MSC service thread 1-2) WFLYUT0019: Host default-host stopping 2018-03-15 15:39:03,201+02 INFO [org.wildfly.extension.undertow] (MSC service thread 1-2) WFLYUT0004: Undertow 1.4.18.Final stopping 2018-03-15 15:39:03,202+02 INFO [org.quartz.core.QuartzScheduler] (MSC service thread 1-4) Scheduler QuartzOvirtDBScheduler_$_NON_CLUSTERED shutdown complete. 2018-03-15 15:39:03,213+02 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) WFLYJCA0010: Unbound data source [java:/DWHDataSource] 2018-03-15 15:39:03,213+02 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) WFLYJCA0010: Unbound data source [java:/ENGINEDataSource] 2018-03-15 15:39:03,226+02 INFO [org.jboss.as.clustering.infinispan] (MSC service thread 1-4) WFLYCLINF0003: Stopped timeout-base cache from ovirt-engine container 2018-03-15 15:39:03,228+02 INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-2) WFLYJCA0019: Stopped Driver service with driver-name = postgresql 2018-03-15 15:39:03,233+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-3) WFLYSRV0208: Stopped subdeployment (runtime-name: root.war) in 271ms 2018-03-15 15:39:03,234+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) WFLYSRV0208: Stopped subdeployment (runtime-name: enginesso.war) in 279ms 2018-03-15 15:39:03,234+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) WFLYSRV0208: Stopped subdeployment (runtime-name: welcome.war) in 279ms 2018-03-15 15:39:03,235+02 INFO [org.jboss.as.clustering.infinispan] (MSC service thread 1-4) WFLYCLINF0003: Stopped inventory cache from ovirt-engine container 2018-03-15 15:39:03,237+02 INFO [org.jboss.as.clustering.infinispan] (MSC service thread 1-4) WFLYCLINF0003: Stopped dashboard cache from ovirt-engine container 2018-03-15 15:39:03,237+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-7) WFLYSRV0208: Stopped subdeployment (runtime-name: webadmin.war) in 281ms 2018-03-15 15:39:03,237+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-5) WFLYSRV0208: Stopped subdeployment (runtime-name: docs.war) in 289ms 2018-03-15 15:39:03,234+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-3) WFLYSRV0208: Stopped subdeployment (runtime-name: services.war) in 267ms 2018-03-15 15:39:03,239+02 INFO 
[org.jboss.as.server.deployment] (MSC service thread 1-2) WFLYSRV0208: Stopped subdeployment (runtime-name: bll.jar) in 290ms
2018-03-15 15:39:03,242+02 INFO [org.jboss.as.server.deployment] (MSC service thread 1-5) WFLYSRV0028: Stopped deployment engine.ear (runtime-name: engine.ear) in 286ms
2018-03-15 15:39:03,265+02 INFO [org.jboss.as] (MSC service thread 1-5) WFLYSRV0050: WildFly Full 11.0.0.Final (WildFly Core 3.0.8.Final) stopped in 302ms
[root at node00 ~]#

From vszocs at redhat.com  Thu Mar 15 15:42:24 2018
From: vszocs at redhat.com (Vojtech Szocs)
Date: Thu, 15 Mar 2018 16:42:24 +0100
Subject: [ovirt-users] UI plugin API updates
Message-ID:

Dear community,

the UI plugin API will be updated to reflect recent oVirt web administration UI design changes. The relevant patch is already merged in the master branch [1] and the associated BZ [2] is targeted for the 4.3 release.

*What's new*

Two new API functions, addPrimaryMenuContainer and addSecondaryMenuPlace, allow you to add custom secondary menu items to the vertical navigation menu. You can target both existing (core) and custom primary menu items when adding secondary ones.

*What's changed*

Some API functions were renamed to stay consistent with the current UI design, i.e. reflecting the absence of "main" and "sub" tabs:

- addMainTab => addPrimaryMenuPlace
- addSubTab => addDetailPlace
- setTabContentUrl => setPlaceContentUrl
- setTabAccessible => setPlaceAccessible
- addMainTabActionButton => addMenuPlaceActionButton
- addSubTabActionButton => addDetailPlaceActionButton

You can still use the original functions mentioned above, but doing so will yield a warning in the browser console, for example:

*addMainTab is deprecated, please use addPrimaryMenuPlace instead.*

In addition, for functions that used to deal with "main" or "sub" tabs, the options object no longer supports the alignRight (boolean) parameter. That's because the PatternFly tabs widget [3] expects all tabs to be aligned next to each other, flowing from left to right.

We'll be updating the UI plugins feature page shortly to reflect all the changes.

[1] https://gerrit.ovirt.org/#/c/88690/
[2] https://bugzilla.redhat.com/1553902
[3] http://www.patternfly.org/pattern-library/widgets/#tabs

Regards,
Vojtech

From lveyde at redhat.com  Thu Mar 15 15:57:40 2018
From: lveyde at redhat.com (Lev Veyde)
Date: Thu, 15 Mar 2018 17:57:40 +0200
Subject: [ovirt-users] [ANN] oVirt 4.2.2 Fourth Release Candidate is now available
Message-ID:

The oVirt Project is pleased to announce the availability of the oVirt 4.2.2 Fourth Release Candidate, as of March 15th, 2018.

This update is a release candidate of the second in a series of stabilization updates to the 4.2 series. This is pre-release software and should not be used in production.

This release is available now for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.2

See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed.

Notes:
- oVirt Appliance will be available soon
- oVirt Node will be available soon [2]

Additional Resources:
* Read more about the oVirt 4.2.2 release highlights: http://www.ovirt.org/release/4.2.2/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.2.2/
[2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/

--
Lev Veyde
Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
lev at redhat.com | lveyde at redhat.com
TRIED. TESTED. TRUSTED.

From stirabos at redhat.com  Thu Mar 15 17:33:35 2018
From: stirabos at redhat.com (Simone Tiraboschi)
Date: Thu, 15 Mar 2018 18:33:35 +0100
Subject: [ovirt-users] 4.2.2.2-1 Starting hosted engine on all hosts
In-Reply-To: References: Message-ID:

On Thu, Mar 15, 2018 at 8:18 AM, Yedidyah Bar David wrote:
> On Thu, Mar 15, 2018 at 8:50 AM, Maton, Brett wrote:
> > The last three 4.2.2 release candidates that I've tried have been
> > starting the self-hosted engine on all physical hosts at the same time.
> >
> > Same with the latest RC, what logs do you need to investigate the
> > problem?

It was this one: https://bugzilla.redhat.com/show_bug.cgi?id=1547479

It got fixed today but is still not available in RC4.

> /var/log/ovirt-hosted-engine-ha/*
> /var/log/sanlock.log
> /var/log/vdsm/*
>
> Adding Martin.
>
> Thanks and best regards,
> --
> Didi

From gianluca.cecchi at gmail.com  Thu Mar 15 17:47:31 2018
From: gianluca.cecchi at gmail.com (Gianluca Cecchi)
Date: Thu, 15 Mar 2018 18:47:31 +0100
Subject: [ovirt-users] 4.2.2.2-1 Starting hosted engine on all hosts
In-Reply-To: References: Message-ID:

On 15 Mar 2018 6:34 PM, "Simone Tiraboschi" wrote:
> It was this one: https://bugzilla.redhat.com/show_bug.cgi?id=1547479
> It got fixed today but is still not available in RC4.

If I understood correctly, this kind of risk is not present in 4.1.x and in 4.2.y for every x and for y <= 1?

From junaid8756 at gmail.com  Thu Mar 15 18:55:34 2018
From: junaid8756 at gmail.com (Junaid Jadoon)
Date: Thu, 15 Mar 2018 18:55:34 +0000
Subject: [ovirt-users] change CD not working
In-Reply-To: References: Message-ID:

> Ovirt engine and node versions are 4.2.
>
> "Error while executing action Change CD: Failed to perform "Change CD" operation, CD might be still in use by the VM.
> Please try to manually detach the CD from within the VM:
> 1. Log in to the VM
> 2. For Linux VMs, unmount the CD using the umount command;
>    For Windows VMs, right click on the CD drive and click 'Eject';"
>
> Initially it was working fine; suddenly it started giving the above error.
>
> Logs are attached.
>
> Please help me out.
>
> Regards,
> Junaid

-------------- next part --------------
A non-text attachment was scrubbed...
Name: logs folder.rar
Type: application/rar
Size: 474283 bytes
Desc: not available
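A workaround worth trying while this is investigated: drive the same "change CD" operation through the REST API instead of the UI. This is only a sketch - the engine FQDN, credentials and the two UUIDs are placeholders, and an empty file id should request an eject (check with a GET first); current=true applies the change to the running VM:

    # list the VM's cdrom device to find CDROM_UUID
    curl -k -u 'admin@internal:PASSWORD' \
      'https://ENGINE_FQDN/ovirt-engine/api/vms/VM_UUID/cdroms'

    # eject the CD from the running VM
    curl -k -u 'admin@internal:PASSWORD' \
      -X PUT -H 'Content-Type: application/xml' \
      -d '<cdrom><file id=""/></cdrom>' \
      'https://ENGINE_FQDN/ovirt-engine/api/vms/VM_UUID/cdroms/CDROM_UUID?current=true'

If the API call fails with the same error, that points at the VM/qemu side rather than the web UI.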
From matonb at ltresources.co.uk  Thu Mar 15 19:20:30 2018
From: matonb at ltresources.co.uk (Maton, Brett)
Date: Thu, 15 Mar 2018 19:20:30 +0000
Subject: [ovirt-users] 4.2.2.2-1 Starting hosted engine on all hosts
In-Reply-To: References: Message-ID:

Ok cool, glad you already have enough information, as it's trashed my hosted-engine beyond recovery...

On 15 March 2018 at 17:47, Gianluca Cecchi wrote:
> If I understood correctly, this kind of risk is not present in 4.1.x and
> in 4.2.y for every x and for y <= 1?
> [...]

From stirabos at redhat.com  Thu Mar 15 20:32:19 2018
From: stirabos at redhat.com (Simone Tiraboschi)
Date: Thu, 15 Mar 2018 21:32:19 +0100
Subject: [ovirt-users] 4.2.2.2-1 Starting hosted engine on all hosts
In-Reply-To: References: Message-ID:

On Thu, Mar 15, 2018 at 6:47 PM, Gianluca Cecchi wrote:
> If I understood correctly, this kind of risk is not present in 4.1.x and
> in 4.2.y for every x and for y <= 1?
> [...]

Right.
This issue was introduced with https://gerrit.ovirt.org/#/c/86435/ which came in ovirt-hosted-engine-ha-2.2.5, so 4.2.1 is not affected, since it ships ovirt-hosted-engine-ha-2.2.4.

From jim at palousetech.com  Thu Mar 15 21:15:17 2018
From: jim at palousetech.com (Jim Kusznir)
Date: Thu, 15 Mar 2018 14:15:17 -0700
Subject: [ovirt-users] gluster self-heal takes cluster offline
Message-ID:

Hi all:

I'm trying to understand why/how (and most importantly, how to fix) a substantial issue I had last night.
This happened one other time, but I didn't know/understand all the parts associated with it until last night.

I have a 3-node hyperconverged (self-hosted engine, Gluster on each node) cluster. Gluster is replica 2 + arbiter. The current network configuration is 2x GigE in load balance ("LAG group" on the switch), plus one GigE from each server on a separate VLAN, intended for Gluster (but not used). Server hardware is Dell R610's; each server has an SSD in it. Servers 1 and 2 have the full replica, server 3 is the arbiter.

I put server 2 into maintenance so I could work on the hardware, including turning it off and such. In the course of the work, I found that I needed to reconfigure the SSD's partitioning somewhat, and that resulted in wiping the data partition (storing VM images). I figured it was no big deal, gluster would rebuild that in short order. I did take care of the extended attr settings and the like, and when I booted it up, gluster came up as expected and began rebuilding the disk.

The problem is that suddenly my entire cluster got very sluggish. The engine was marking nodes and VMs failed and unfailing them throughout the system, fairly randomly. It didn't matter what node the engine or VM was on. At one point, it power cycled server 1 for "non-responsive" (even though everything was running on it, and the gluster rebuild was working on it). As a result of this, about 6 VMs were killed and my entire gluster system went down hard (suspending all remaining VMs and the engine), as there were no remaining full copies of the data. After several minutes (these are Dell servers, after all...), server 1 came back up, gluster resumed the rebuild and came online in the cluster. I had to manually (with a virsh command) unpause the engine, and then struggle through trying to get critical VMs back up. Everything was super slow, and load averages on the servers were often seen in excess of 80 (these are 8 core / 16 thread boxes). Actual CPU usage (reported by top) was rarely above 40% (inclusive of all CPUs) for any one server. Glusterfs was often seen using 180%-350% of a CPU on servers 1 and 2.

I ended up putting the cluster in global HA maintenance mode and disabling power fencing on the nodes until the process finished. On at least two occasions it appeared that a functional node was marked bad, and had fencing not been disabled, a node would have rebooted, further exacerbating the problem.

It's clear that the gluster rebuild overloaded things and caused the problem. I don't know why the load was so high (even IOWait was low), but load averages were definitely tied to the glusterfs CPU utilization %. At no point did I have any problems pinging any machine (host or VM) unless the engine decided it was dead and killed it.

Why did my system bite it so hard with the rebuild? I babied it along until the rebuild was complete, after which it returned to normal operation. As of this event, all networking (host/engine management, gluster, and VM network) was on the same VLAN. I'd love to move things off, but so far any attempt to do so breaks my cluster. How can I move my management interfaces to a separate VLAN/IP space? I also want to move Gluster to its own private space, but it seems that if I change anything in the peers file, the entire gluster cluster goes down. The dedicated gluster network is already listed as a secondary hostname for all peers.

Will the above network reconfigurations be enough? I got the impression that the issue may not have been purely network based, but possibly server IO overload.
Is this likely / right? I appreciate input. I don't think gluster's recovery is supposed to do as much damage as it did the last two or three times any healing was required.

Thanks!
--Jim

From endre.karlson at gmail.com  Thu Mar 15 22:15:53 2018
From: endre.karlson at gmail.com (Endre Karlson)
Date: Thu, 15 Mar 2018 23:15:53 +0100
Subject: [ovirt-users] Ovirt vm's paused due to storage error
Message-ID:

Hi, this issue is here again, and we are getting several VMs going into storage error in our 4-node cluster running on CentOS 7.4 with gluster and oVirt 4.2.1.

Gluster version: 3.12.6

Volume status:

[root at ovirt3 ~]# gluster volume status
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt0:/gluster/brick3/data           49152     0          Y       9102
Brick ovirt2:/gluster/brick3/data           49152     0          Y       28063
Brick ovirt3:/gluster/brick3/data           49152     0          Y       28379
Brick ovirt0:/gluster/brick4/data           49153     0          Y       9111
Brick ovirt2:/gluster/brick4/data           49153     0          Y       28069
Brick ovirt3:/gluster/brick4/data           49153     0          Y       28388
Brick ovirt0:/gluster/brick5/data           49154     0          Y       9120
Brick ovirt2:/gluster/brick5/data           49154     0          Y       28075
Brick ovirt3:/gluster/brick5/data           49154     0          Y       28397
Brick ovirt0:/gluster/brick6/data           49155     0          Y       9129
Brick ovirt2:/gluster/brick6_1/data         49155     0          Y       28081
Brick ovirt3:/gluster/brick6/data           49155     0          Y       28404
Brick ovirt0:/gluster/brick7/data           49156     0          Y       9138
Brick ovirt2:/gluster/brick7/data           49156     0          Y       28089
Brick ovirt3:/gluster/brick7/data           49156     0          Y       28411
Brick ovirt0:/gluster/brick8/data           49157     0          Y       9145
Brick ovirt2:/gluster/brick8/data           49157     0          Y       28095
Brick ovirt3:/gluster/brick8/data           49157     0          Y       28418
Brick ovirt1:/gluster/brick3/data           49152     0          Y       23139
Brick ovirt1:/gluster/brick4/data           49153     0          Y       23145
Brick ovirt1:/gluster/brick5/data           49154     0          Y       23152
Brick ovirt1:/gluster/brick6/data           49155     0          Y       23159
Brick ovirt1:/gluster/brick7/data           49156     0          Y       23166
Brick ovirt1:/gluster/brick8/data           49157     0          Y       23173
Self-heal Daemon on localhost               N/A       N/A        Y       7757
Bitrot Daemon on localhost                  N/A       N/A        Y       7766
Scrubber Daemon on localhost                N/A       N/A        Y       7785
Self-heal Daemon on ovirt2                  N/A       N/A        Y       8205
Bitrot Daemon on ovirt2                     N/A       N/A        Y       8216
Scrubber Daemon on ovirt2                   N/A       N/A        Y       8227
Self-heal Daemon on ovirt0                  N/A       N/A        Y       32665
Bitrot Daemon on ovirt0                     N/A       N/A        Y       32674
Scrubber Daemon on ovirt0                   N/A       N/A        Y       32712
Self-heal Daemon on ovirt1                  N/A       N/A        Y       31759
Bitrot Daemon on ovirt1                     N/A       N/A        Y       31768
Scrubber Daemon on ovirt1                   N/A       N/A        Y       31790

Task Status of Volume data
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 62942ba3-db9e-4604-aa03-4970767f4d67
Status               : completed

Status of volume: engine
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt0:/gluster/brick1/engine         49158     0          Y       9155
Brick ovirt2:/gluster/brick1/engine         49158     0          Y       28107
Brick ovirt3:/gluster/brick1/engine         49158     0          Y       28427
Self-heal Daemon on localhost               N/A       N/A        Y       7757
Self-heal Daemon on ovirt1                  N/A       N/A        Y       31759
Self-heal Daemon on ovirt0                  N/A       N/A        Y       32665
Self-heal Daemon on ovirt2                  N/A       N/A        Y       8205

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: iso
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovirt0:/gluster/brick2/iso            49159     0          Y       9164
Brick ovirt2:/gluster/brick2/iso            49159     0          Y       28116
Brick ovirt3:/gluster/brick2/iso            49159     0          Y       28436
NFS Server on localhost                     2049      0          Y       7746
Self-heal Daemon on localhost               N/A       N/A        Y       7757
NFS Server on ovirt1                        2049      0          Y       31748
Self-heal Daemon on ovirt1                  N/A       N/A        Y       31759
NFS Server on ovirt0                        2049      0          Y       32656
Self-heal Daemon on ovirt0                  N/A       N/A        Y       32665
NFS Server on ovirt2                        2049      0          Y       8194
Self-heal Daemon on ovirt2                  N/A       N/A        Y       8205

Task Status of Volume iso
------------------------------------------------------------------------------
There are no active volume tasks

From jonbae77 at gmail.com  Thu Mar 15 22:41:38 2018
From: jonbae77 at gmail.com (Jonathan Baecker)
Date: Thu, 15 Mar 2018 23:41:38 +0100
Subject: [ovirt-users] Host can not boot any more with current kernel
Message-ID: <1a051b84-b9f0-e325-a8cf-8e7cd82485c9@gmail.com>

Hello everybody,

Today I updated my engine from 4.2.1.6 to 4.2.1.7, and later I updated two hosts. All are running on CentOS 7.4. Both hosts now have kernel 3.10.0-693.21; one host boots normally, but the other one always reboots shortly after the menu where I can select the kernel. There is not even an error message, and I cannot switch to the boot log screen. I see the same behavior with kernel 3.10.0-693.17; kernels *.11 and older are working.

Has anybody experienced this issue and know what to do?

Regards
Jonathan

From tadavis at lbl.gov  Thu Mar 15 23:25:27 2018
From: tadavis at lbl.gov (Thomas Davis)
Date: Thu, 15 Mar 2018 16:25:27 -0700
Subject: [ovirt-users] ovirt 4.2.2-rc3 switching from legacy to OVS..
In-Reply-To: References: <031b026d-66ec-14a0-9bec-7a4b0e717556@lbl.gov> <4418d2d0-73d4-aced-049b-5a2971a91274@lbl.gov>
Message-ID: <639a1908-229d-18fd-8557-fe32306bd2cc@lbl.gov>

Alrighty, I figured it out.

0) To set up a node in a cluster, make sure the cluster is in OVS mode, not legacy.

1) Make sure you have an OVN controller set up somewhere. The default appears to be the ovirt-hosted-engine.
   a) You should also have the external network provider for OVN configured; see the web interface.

2) When you install the node, make sure it has openvswitch installed and running, i.e.:
   a) 'systemctl status openvswitch' says it's up and running (be sure it's enabled also).
   b) 'ovs-vsctl show' has vdsm bridges listed, and possibly a br-int bridge.
3) if there is no br-int bridge, do 'vdsm-tool ovn-config ovn-controller-ip host-ip' 4) when you have configured several nodes in the OVN, you should see them listed as geneve devices in 'ovs-vsctl show', ie: This is a 4 node cluster, so the other 3 nodes are expected: [root at d8-r12-c1-n3 ~]# ovs-vsctl show 42df28ba-ffd6-4e61-b7b2-219576da51ab Bridge br-int fail_mode: secure Port "ovn-27461b-0" Interface "ovn-27461b-0" type: geneve options: {csum="true", key=flow, remote_ip="192.168.85.91"} Port "vnet1" Interface "vnet1" Port "ovn-a1c08f-0" Interface "ovn-a1c08f-0" type: geneve options: {csum="true", key=flow, remote_ip="192.168.85.87"} Port "patch-br-int-to-f7a19c7d-021a-455d-bf3a-c15e212d8831" Interface "patch-br-int-to-f7a19c7d-021a-455d-bf3a-c15e212d8831" type: patch options: {peer="patch-f7a19c7d-021a-455d-bf3a-c15e212d8831-to-br-int"} Port "vnet0" Interface "vnet0" Port "patch-br-int-to-7874ba85-8f6f-4e43-9535-5a1b1353a9ec" Interface "patch-br-int-to-7874ba85-8f6f-4e43-9535-5a1b1353a9ec" type: patch options: {peer="patch-7874ba85-8f6f-4e43-9535-5a1b1353a9ec-to-br-int"} Port "ovn-8da92c-0" Interface "ovn-8da92c-0" type: geneve options: {csum="true", key=flow, remote_ip="192.168.85.95"} Port br-int Interface br-int type: internal Bridge "vdsmbr_LZmj3uJ1" Port "vdsmbr_LZmj3uJ1" Interface "vdsmbr_LZmj3uJ1" type: internal Port "net211" tag: 211 Interface "net211" type: internal Port "eno2" Interface "eno2" Bridge "vdsmbr_e7rcnufp" Port "vdsmbr_e7rcnufp" Interface "vdsmbr_e7rcnufp" type: internal Port ipmi tag: 20 Interface ipmi type: internal Port ovirtmgmt tag: 50 Interface ovirtmgmt type: internal Port "patch-f7a19c7d-021a-455d-bf3a-c15e212d8831-to-br-int" Interface "patch-f7a19c7d-021a-455d-bf3a-c15e212d8831-to-br-int" type: patch options: {peer="patch-br-int-to-f7a19c7d-021a-455d-bf3a-c15e212d8831"} Port "eno1" Interface "eno1" Port "patch-7874ba85-8f6f-4e43-9535-5a1b1353a9ec-to-br-int" Interface "patch-7874ba85-8f6f-4e43-9535-5a1b1353a9ec-to-br-int" type: patch options: {peer="patch-br-int-to-7874ba85-8f6f-4e43-9535-5a1b1353a9ec"} ovs_version: "2.7.3" 5) Create in the cluster the legacy style bridge networks - ie, ovirtmgmt, etc. Do this just like you where creating them for the legacy network. Define the VLAN #, the MTU, etc. 6) Now, create in the network config, the OVN networks - ie, ovn-ovirtmgmt is on an external provider (select OVN), and make sure 'connect to physical network' is checked, and the correct network from step 5 is picked. Save this off. This will connect the two networks together in a bridge, and all services are visible to both ie dhcp, dns.. 7) when you create the VM, select the OVN network interface, not the legacy bridge interface (this is why I decided to prefix with 'ovn-'). 8) Create the vm, start it, migrate, stop, re-start, etc, it all should work now. Lots of reading.. lots of interesting stuff found.. finally figured this out after reading a bunch of bug fixes for the latest RC (released today) thomas On 03/15/2018 03:21 AM, Dan Kenigsberg wrote: > On Thu, Mar 15, 2018 at 1:50 AM, Thomas Davis wrote: >> Well, I just hit >> >> https://bugzilla.redhat.com/show_bug.cgi?id=1513991 >> >> And it's been closed, which means with vdsm-4.20.17-1.el7.centos.x86_64 >> OVS networking is totally borked.. > > You are welcome to reopen that bug, specifying your use case for OvS. > I cannot promise fixing this bug, as our resources are limited, and > that bug, which was introduced in 4.2, was not deemed as urgently > needed. 
https://gerrit.ovirt.org/#/c/86932/ attempts to fix the bug, > but it still needs a lot of work. > >> >> I know OVS is Experimental, but it worked in 4.1.x, and now we have to do a >> step back to legacy bridge just to use 4.2.x, which in a vlan environment >> just wreaks havoc (every VLAN need's a unique mac assigned to the bridge, >> which vdsm does not do, so suddenly you get the kernel complaining about >> seeing it's mac address several times.) > > Could you elaborate on this issue? What is wrong with a bridge that > learns its mac from its underlying device? What wold like Vdsm to do, > in your opinion? You can file a bug (or even send a patch) if there is > a functionality that you'd like to fix. > >> >> There is zero documentation on how to use OVN instead of OVS. > > I hope that https://ovirt.org/develop/release-management/features/network/provider-physical-network/ > can help. > >> thomas >> >> On 03/13/2018 09:22 AM, Thomas Davis wrote: >>> >>> I'll work on it some more. I have 2 different clusters in the data center >>> (1 is the Hosted Engine systems, another is not..) I had trouble with both. >>> I'll try again on the non-hosted engine cluster to see what it is doing. I >>> have it working in 4.1, but we are trying to do a clean wipe since the 4.1 >>> engine has been upgraded so many times from v3.5 plus we want to move to >>> hosted-engine-ha from a single engine node and the ansible modules/roles >>> (which also have problems..) >>> >>> thomas >>> >>> On Tue, Mar 13, 2018 at 6:27 AM, Edward Haas >> > wrote: >>> >>> >>> OVS switch support is experimental at this stage and in some cases >>> when trying to change from one switch to the other, it fails. >>> It was also not checked against a hosted engine setup, which handles >>> networking a bit differently for the management network (ovirtmgmt). >>> Nevertheless, we are interested in understanding all the problems >>> that exists today, so if you can, please share the supervdsm log, it >>> has the interesting networking traces. >>> >>> We plan to block cluster switch editing until these problems are >>> resolved. It will be only allowed to define a new cluster as OVS, >>> not convert an existing one from Linux Bridge to OVS. >>> >>> On Fri, Mar 9, 2018 at 9:54 AM, Thomas Davis >> > wrote: >>> >>> I'm getting further along with 4.2.2rc3 than the 4.2.1 when it >>> comes to hosted engine and vlans.. it actually does install >>> under 4.2.2rc3. >>> >>> But it's a complete failure when I switch the cluster from Linux >>> Bridge/Legacy to OVS. The first time I try, vdsm does >>> not properly configure the node, it's all messed up. 
>>> >>> I'm getting this in vdsmd logs: >>> >>> 2018-03-08 23:12:46,610-0800 INFO (jsonrpc/7) [api.network] >>> START setupNetworks(networks={u'ovirtmgmt': {u'ipv6autoconf': >>> True, u'nic': u'eno1', u'vlan': u'50', u'ipaddr': >>> u'192.168.85.49', u'switch': u'ovs', u'mtu': 1500, u'netmask': >>> u'255.255.252.0', u'dhcpv6': False, u'STP': u'no', u'bridged': >>> u'true', u'gateway': u'192.168.85.254', u'defaultRoute': True}}, >>> bondings={}, options={u'connectivityCheck': u'true', >>> u'connectivityTimeout': 120}) from=::ffff:192.168.85.24,56806, >>> flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:46) >>> >>> 2018-03-08 23:12:52,449-0800 INFO (jsonrpc/2) >>> [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 >>> seconds (__init__:573) >>> >>> 2018-03-08 23:12:52,511-0800 INFO (jsonrpc/7) [api.network] >>> FINISH setupNetworks error=[Errno 19] ovirtmgmt is not present >>> in the system from=::ffff:192.168.85.24,56806, >>> flow_id=4147e25f-0a23-4f47-a0a4-d424a3437d11 (api:50) >>> 2018-03-08 23:12:52,512-0800 ERROR (jsonrpc/7) >>> [jsonrpc.JsonRpcServer] Internal server error (__init__:611) >>> Traceback (most recent call last): >>> File >>> "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line >>> 606, in _handle_request >>> res = method(**params) >>> File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", >>> line 201, in _dynamicMethod >>> result = fn(*methodArgs) >>> File "", line 2, in setupNetworks >>> File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", >>> line 48, in method >>> ret = func(*args, **kwargs) >>> File "/usr/lib/python2.7/site-packages/vdsm/API.py", line >>> 1527, in setupNetworks >>> supervdsm.getProxy().setupNetworks(networks, bondings, >>> options) >>> File >>> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", >>> line 55, in __call__ >>> return callMethod() >>> File >>> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", >>> line 53, in >>> **kwargs) >>> File "", line 2, in setupNetworks >>> File "/usr/lib64/python2.7/multiprocessing/managers.py", line >>> 773, in _callmethod >>> raise convert_to_error(kind, result) >>> IOError: [Errno 19] ovirtmgmt is not present in the system >>> 2018-03-08 23:12:52,512-0800 INFO (jsonrpc/7) >>> [jsonrpc.JsonRpcServer] RPC call Host.setupNetworks failed >>> (error -32603) in 5.90 seconds (__init__:573) >>> 2018-03-08 23:12:54,769-0800 INFO (jsonrpc/1) >>> [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 >>> seconds (__init__:573) >>> 2018-03-08 23:12:54,772-0800 INFO (jsonrpc/5) [api.host] START >>> getCapabilities() from=::1,45562 (api:46) >>> 2018-03-08 23:12:54,906-0800 INFO (jsonrpc/5) [api.host] FINISH >>> getCapabilities error=[Errno 19] ovirtmgmt is not present in the >>> system from=::1,45562 (api:50) >>> 2018-03-08 23:12:54,906-0800 ERROR (jsonrpc/5) >>> [jsonrpc.JsonRpcServer] Internal server error (__init__:611) >>> Traceback (most recent call last): >>> File >>> "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line >>> 606, in _handle_request >>> res = method(**params) >>> File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", >>> line 201, in _dynamicMethod >>> result = fn(*methodArgs) >>> File "", line 2, in getCapabilities >>> File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", >>> line 48, in method >>> ret = func(*args, **kwargs) >>> File "/usr/lib/python2.7/site-packages/vdsm/API.py", line >>> 1339, in getCapabilities >>> c = caps.get() >>> File "/usr/lib/python2.7/site-packages/vdsm/host/caps.py", >>> line 168, in get >>> 
net_caps = supervdsm.getProxy().network_caps() >>> File >>> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", >>> line 55, in __call__ >>> return callMethod() >>> File >>> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", >>> line 53, in >>> **kwargs) >>> File "", line 2, in network_caps >>> File "/usr/lib64/python2.7/multiprocessing/managers.py", line >>> 773, in _callmethod >>> raise convert_to_error(kind, result) >>> IOError: [Errno 19] ovirtmgmt is not present in the system >>> >>> So something is dreadfully wrong with the bridge to ovs >>> conversion in 4.2.2rc3. >>> >>> thomas >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >>> >>> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users From jose.fernandes at locaweb.com.br Thu Mar 15 23:26:09 2018 From: jose.fernandes at locaweb.com.br (Jose Fernandes) Date: Thu, 15 Mar 2018 23:26:09 +0000 Subject: [ovirt-users] Setting up a LDAP conf Message-ID: <209153e2c13f470483d0faab7189aa2a@locaweb.com.br> Hello, I have an OpenDJ LDAP server, and I need some help to do query on a specific filter search. We can't figure out how to create a "aaa/profile1.properties" file with these configs. This is how we can filter the users with ldapsearch on our ldap server: -H ldaps://server:port -D uid=user,ou=OU,dc=SERVER,dc=com,dc=br -W -b ou=aa,dc=bb,dc=cc,dc=dd uid=jose.fernandes - My configuration does not permit I search the users on base, so I need to do this filter on "ou=aa,dc=bb,dc=cc,dc=dd" - Port is different from common. Someone can help me to create the config file? Regards, Jos? Fernandes -------------- next part -------------- An HTML attachment was scrubbed... URL: From kirin.vanderveer at planetinnovation.com.au Fri Mar 16 01:15:55 2018 From: kirin.vanderveer at planetinnovation.com.au (Kirin van der Veer) Date: Fri, 16 Mar 2018 12:15:55 +1100 Subject: [ovirt-users] Using VDSM to edit management interface Message-ID: ?akujem Peter, but this doesn't seem to work in my case. /etc/resolv.conf is regenerated by Network Manager after a reboot and my domain settings are lost. Your comments regarding the reliance on DNS make sense for most installations, but in my case oVirt is a secondary service that I would not expect to run unless our core infrastructure is working correctly. I'm hesitant to edit /etc/hosts directly, since that can lead to confusion when the underlying IP addresses change. For now I will hardcode the IPs of my servers. It's frustrating (and surprising) that there is not an easy way to do this. Kirin. On Thu, Mar 15, 2018 at 5:17 PM, Peter Hudec wrote: > Hi Kirin, > > I suggest to do it old way and edit the /etc/resolv.conf manually. > > And one advice. Do not relay on the DNS on infrastructure servers. Use > /etc/hosts. If he DNS will not be accessible, you will have problem to > put it infrastructure up/working. As side effect the hosts allow you > to use short names to access servers. > > If you are ansible positive, you could use > > hudecof.resolv https://galaxy.ansible.com/hudecof/resolv/ > hudecof.hosts https://galaxy.ansible.com/hudecof/hosts/ > > > Peter > > On 15/03/2018 06:03, Kirin van der Veer wrote: > > Hi oVirt people, I have setup a new cluster consisting of many > > oVirt Nodes with a single dedicated oVirt Engine machine. 
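A condensed version of Thomas's steps 2)-4) above as plain commands, for anyone following along. This is only a sketch of what the message describes; both IP addresses are placeholders (the first is the OVN central/provider host, usually the engine, the second is this host's own tunnel endpoint):

    # confirm Open vSwitch is enabled and running on the host
    systemctl enable openvswitch
    systemctl start openvswitch
    ovs-vsctl show          # expect the vdsm bridges, possibly br-int

    # point ovn-controller at the OVN central and set the local tunnel IP
    vdsm-tool ovn-config 192.168.85.24 192.168.85.91

    # once several hosts are configured, each peer shows up as a geneve port
    ovs-vsctl show | grep -A 3 geneve

The IPs must match your own management/tunnel network, as in Thomas's output above.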
For the > > most part things are working, however despite entering the DNS > > search domain during install on the Nodes the management interface > > is not aware of my search domain and it has not been added to > > /etc/resolv.conf (perhaps that is unnecessary?). I eventually > > worked out that the DNS search domain should be included in > > /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt However as per the > > header/warning, that file is generated by VDSM. I assumed that I > > should be able to edit the search domain with vdsClient, but when I > > run "vdsClient -m" I don't see any options related to network > > config. I found the following page on DNS config: > > https://www.ovirt.org/develop/release-management/features/network/ > allowExplicitDnsConfiguration/ > > > > > But it does not seem to offer a way of specifying the DNS search domain > > (other than perhaps directly editing /etc/resolv.conf - which is > > generated/managed by Network Manager). nmcli reports that all of my > > interfaces (including ovirtmgmt) are "unmanaged". Indeed when I > > attempt to run nmtui there is nothing listed to configure. This > > should be really simple! I just want to add my local search domain > > so I can use the short name for my NFS server. I'd appreciate any > > advice. > > > > Thanks in advance, Kirin. > > > > > > . > > > > *IMPORTANT NOTE. *If you are NOT AN AUTHORISED RECIPIENT of this > > e-mail, please contact Planet Innovation Pty Ltd by return e-mail > > or by telephone on +613 9945 7510. In this case, you should not > > read, print, re-transmit, store or act in reliance on this e-mail > > or any attachments, and should destroy all copies of them. This > > e-mail and any attachments are confidential and may contain legally > > privileged information and/or copyright material of Planet > > Innovation Pty Ltd or third parties. You should only re-transmit, > > distribute or commercialise the material if you are authorised to > > do so. Although we use virus scanning software, we deny all > > liability for viruses or alike in any message or attachment. This > > notice should not be removed. > > > > ** > > > > > > _______________________________________________ Users mailing list > > Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users > > > > > -- > *Peter Hudec* > Infra?trukt?rny architekt > phudec at cnc.sk > > *CNC, a.s.* > Borsk? 6, 841 04 Bratislava > Recepcia: +421 2 35 000 100 > > Mobil:+421 905 997 203 > *www.cnc.sk* -- *IMPORTANT NOTE. *If you are NOT AN AUTHORISED RECIPIENT of this e-mail, please contact Planet Innovation Pty Ltd by return e-mail or by telephone on +613 9945 7510. In this case, you should not read, print, re-transmit, store or act in reliance on this e-mail or any attachments, and should destroy all copies of them. This e-mail and any attachments are confidential and may contain legally privileged information and/or copyright material of Planet Innovation Pty Ltd or third parties. You should only re-transmit, distribute or commercialise the material if you are authorised to do so. Although we use virus scanning software, we deny all liability for viruses or alike in any message or attachment. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From phudec at cnc.sk Fri Mar 16 06:12:40 2018 From: phudec at cnc.sk (Peter Hudec) Date: Fri, 16 Mar 2018 07:12:40 +0100 Subject: [ovirt-users] Using VDSM to edit management interface In-Reply-To: References: Message-ID: <99a8bbef-545e-a5b5-9388-b4486f10b4d7@cnc.sk> Remove any settings about dns from the network manager adn the /etc/resolv.conf won't be auto generated. https://ma.ttias.be/centos-7-networkmanager-keeps-overwriting-etcresolv-conf/ Peter On 16/03/2018 02:15, Kirin van der Veer wrote: > ?akujem Peter, but this doesn't seem to work in my case. > /etc/resolv.conf is regenerated by Network Manager after a reboot > and my domain settings are lost. Your comments regarding the > reliance on DNS make sense for most installations, but in my case > oVirt is a secondary service that I would not expect to run unless > our core infrastructure is working correctly. I'm hesitant to edit > /etc/hosts directly, since that can lead to confusion when the > underlying IP addresses change. For now I will hardcode the IPs of > my servers. It's frustrating (and surprising) that there is not an > easy way to do this. > > Kirin. > > On Thu, Mar 15, 2018 at 5:17 PM, Peter Hudec > wrote: > > Hi Kirin, > > I suggest to do it old way and edit the /etc/resolv.conf manually. > > And one advice. Do not relay on the DNS on infrastructure servers. > Use /etc/hosts. If he DNS will not be accessible, you will have > problem to put it infrastructure up/working. As side effect the > hosts allow you to use short names to access servers. > > If you are ansible positive, you could use > > hudecof.resolv https://galaxy.ansible.com/hudecof/resolv/ > hudecof.hosts > https://galaxy.ansible.com/hudecof/hosts/ > > > > Peter > > On 15/03/2018 06:03, Kirin van der Veer wrote: >> Hi oVirt people, I have setup a new cluster consisting of many >> oVirt Nodes with a single dedicated oVirt Engine machine. For >> the most part things are working, however despite entering the >> DNS search domain during install on the Nodes the management >> interface is not aware of my search domain and it has not been >> added to /etc/resolv.conf (perhaps that is unnecessary?). I >> eventually worked out that the DNS search domain should be >> included in /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt >> However as per the header/warning, that file is generated by >> VDSM. I assumed that I should be able to edit the search domain >> with vdsClient, but when I run "vdsClient -m" I don't see any >> options related to network config. I found the following page on >> DNS config: >> > https://www.ovirt.org/develop/release-management/features/network/allowExplicitDnsConfiguration/ > > >> >> > But it does not seem to offer a way of specifying the DNS search > domain >> (other than perhaps directly editing /etc/resolv.conf - which is >> generated/managed by Network Manager). nmcli reports that all of >> my interfaces (including ovirtmgmt) are "unmanaged". Indeed when >> I attempt to run nmtui there is nothing listed to configure. >> This should be really simple! I just want to add my local search >> domain so I can use the short name for my NFS server. I'd >> appreciate any advice. >> >> Thanks in advance, Kirin. >> >> >> . >> >> *IMPORTANT NOTE. *If you are NOT AN AUTHORISED RECIPIENT of this >> e-mail, please contact Planet Innovation Pty Ltd by return >> e-mail or by telephone on +613 9945 7510 >> . 
In > this case, you should not >> read, print, re-transmit, store or act in reliance on this >> e-mail or any attachments, and should destroy all copies of them. >> This e-mail and any attachments are confidential and may contain >> legally privileged information and/or copyright material of >> Planet Innovation Pty Ltd or third parties. You should only >> re-transmit, distribute or commercialise the material if you are >> authorised to do so. Although we use virus scanning software, we >> deny all liability for viruses or alike in any message or >> attachment. This notice should not be removed. >> >> ** >> >> >> _______________________________________________ Users mailing >> list Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > >> > > > -- *Peter Hudec* Infra?trukt?rny architekt phudec at cnc.sk > > > > *CNC, a.s.* Borsk? 6, 841 04 Bratislava Recepcia: +421 2 35 000 100 > > > Mobil:+421 905 997 203 *www.cnc.sk > * > > > > > *IMPORTANT NOTE. *If you are NOT AN AUTHORISED RECIPIENT of this > e-mail, please contact Planet Innovation Pty Ltd by return e-mail > or by telephone on +613 9945 7510. In this case, you should not > read, print, re-transmit, store or act in reliance on this e-mail > or any attachments, and should destroy all copies of them. This > e-mail and any attachments are confidential and may contain legally > privileged information and/or copyright material of Planet > Innovation Pty Ltd or third parties. You should only re-transmit, > distribute or commercialise the material if you are authorised to > do so. Although we use virus scanning software, we deny all > liability for viruses or alike in any message or attachment. This > notice should not be removed. > > ** -- *Peter Hudec* Infra?trukt?rny architekt phudec at cnc.sk *CNC, a.s.* Borsk? 6, 841 04 Bratislava Recepcia: +421 2 35 000 100 Mobil:+421 905 997 203 *www.cnc.sk* From spfma.tech at e.mail.fr Fri Mar 16 07:57:47 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Fri, 16 Mar 2018 08:57:47 +0100 Subject: [ovirt-users] NFS 4.1 support and migration In-Reply-To: References: Message-ID: <20180316075747.3230FE4472@smtp01.mail.de> As I thought, the GUI didn't allow me to put the domain she depends on in maintenance, so the engine domain wil stay in v3 for now. Maybe I should delete the engine vm, recreate it with new NFS options and then restore the last backup ? I may try this later. For the other domains, auto negociation of NFS settings worked and they are now all mounted using 4.1 version. Le 15-Mar-2018 14:30:39 +0100, eshenitz at redhat.com a crit: Have to admit that I didn't play with the hosted engine thing, but maybe you can find the answers in the documentation: * https://ovirt.org/documentation/self-hosted/Self-Hosted_Engine_Guide/ * https://www.ovirt.org/documentation/how-to/hosted-engine/ * https://ovirt.org/develop/release-management/features/sla/self-hosted-engine/ On Thu, Mar 15, 2018 at 3:25 PM, wrote: Thanks, I totally missed that :-/ And this wil also work for the hosted engine dedicated domain, putting the storage domain the virtual machine is depending on in maintenance ? Le 15-Mar-2018 10:38:48 +0100, eshenitz at redhat.com a crit: You can edit the storage domain setting after the storage domain deactivated (entered to maintenance mode). On Thu, Mar 15, 2018 at 11:12 AM, wrote: In fact I don't really know how to change storage domains setttings (like nfs version or export path, ...), if it is only possible. 
I thought they could be disabled after stopping all related VMS, and maybe settings panel would then unlock ? But this should be impossible with hosted engine dedicated storage domain as it is required for the GUI itself. So I am stuck. Le 15-Mar-2018 09:59:30 +0100, eshenitz at redhat.com a crit: I am not sure what you mean, Can you please try to explain what is the difference between "VMs domain" to "hosted storage domain" according to you? Thanks, On Thu, Mar 15, 2018 at 10:45 AM, wrote: Thanks for your answer. And to use V4.1 instead of V3 on a domain, do I just have to disconnect it and change its settings ? Seems to be easy to do with VMs domains, but how to do it with hosted storage domain ? Regards Le 14-Mar-2018 11:54:52 +0100, eshenitz at redhat.com a crit: Hi, NFS 4.1 supported and working since version 3.6 (according to this bug fix [1]) [1] Support NFS v4.1 connections - https://bugzilla.redhat.com/show_bug.cgi?id=1283964 ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Regards, Eyal Shenitzky ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Regards, Eyal Shenitzky ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Regards, Eyal Shenitzky ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Fri Mar 16 10:04:47 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 16 Mar 2018 12:04:47 +0200 Subject: [ovirt-users] 4.2.2.2-1 Starting hosted engine on all hosts In-Reply-To: References: Message-ID: On Mar 15, 2018 9:21 PM, "Maton, Brett" wrote: Ok cool, glad you already have enough information as it's trashed my hosted-engine beyond recovery... Why did it trash it? Y. On 15 March 2018 at 17:47, Gianluca Cecchi wrote: > > Il 15 Mar 2018 6:34 PM, "Simone Tiraboschi" ha > scritto: > > > > On Thu, Mar 15, 2018 at 8:18 AM, Yedidyah Bar David > wrote: > >> On Thu, Mar 15, 2018 at 8:50 AM, Maton, Brett >> wrote: >> > The last three 4.2.2 release candidates that I've tried have been >> starting >> > self hosted engine all all physical hosts at the same time. >> > >> > Same with the latest RC, what logs do you need to investigate the >> problem? >> > > It was this one: https://bugzilla.redhat.com/show_bug.cgi?id=1547479 > > It got fixed today but still not available in RC4. > > >> >> /var/log/ovirt-hosted-engine-ha/* >> /var/log/sanlock.log >> /var/log/vdsm/* >> >> Adding Martin. >> >> Thanks and best regards, >> -- >> Didi >> ______________________________________________ > > > If I understood correctly, this kind of risk is not present in 4.1.x and > in 4.2.y for every x and for y <= 1? 
From matonb at ltresources.co.uk Fri Mar 16 10:11:23 2018
From: matonb at ltresources.co.uk (Maton, Brett)
Date: Fri, 16 Mar 2018 10:11:23 +0000
Subject: [ovirt-users] 4.2.2.2-1 Starting hosted engine on all hosts
In-Reply-To: References: Message-ID:

root (/) partition is now unreadable

On 16 March 2018 at 10:04, Yaniv Kaul wrote:
> Why did it trash it?
> Y.

From msivak at redhat.com Fri Mar 16 10:21:52 2018
From: msivak at redhat.com (Martin Sivak)
Date: Fri, 16 Mar 2018 11:21:52 +0100
Subject: [ovirt-users] 4.2.2.2-1 Starting hosted engine on all hosts
In-Reply-To: References: Message-ID:

> > Why did it trash it?

Split brain and concurrent filesystem access...

The bug only happened in 4.2.2 and was never released officially apart
from development builds. And it should be fixed now.

Martin

From matonb at ltresources.co.uk Fri Mar 16 10:48:39 2018
From: matonb at ltresources.co.uk (Maton, Brett)
Date: Fri, 16 Mar 2018 10:48:39 +0000
Subject: [ovirt-users] 4.2.2.2-1 Starting hosted engine on all hosts
In-Reply-To: References: Message-ID:

Yup, no big surprise really :)

On 16 March 2018 at 10:21, Martin Sivak wrote:
> Split brain and concurrent filesystem access...
>
> The bug only happened in 4.2.2 and was never released officially apart
> from development builds. And it should be fixed now.
From bilias at edu.physics.uoc.gr Fri Mar 16 10:46:13 2018
From: bilias at edu.physics.uoc.gr (Kapetanakis Giannis)
Date: Fri, 16 Mar 2018 12:46:13 +0200
Subject: [ovirt-users] Failed to synchronize networks of Provider ovirt-provider-ovn
Message-ID:

Hi,

After upgrading to 4.2.1 I have problems with the ovn provider.
I'm getting "Failed to synchronize networks of Provider ovirt-provider-ovn."

I use a custom SSL certificate in apache, and I guess this is the reason.

I've tried to update ovirt-provider-ovn.conf with

[OVIRT]
#ovirt-ca-file=/etc/pki/ovirt-engine/ca.pem
ovirt-ca-file=/etc/pki/ovirt-engine/apache-ca.pem

but still no go.

Any tips on this?

thanks

G

From enrico.becchetti at pg.infn.it Fri Mar 16 11:25:57 2018
From: enrico.becchetti at pg.infn.it (Enrico Becchetti)
Date: Fri, 16 Mar 2018 12:25:57 +0100
Subject: [ovirt-users] VM has been paused due to NO STORAGE SPACE ERROR ?!?!?!?!
Message-ID:

Dear All,
Has anyone seen this error? When I run this command from my virtual machine:

# time dd if=/dev/zero of=enrico.dd bs=4k count=10000000

the VM was paused due to some kind of storage error/problem. A strange
message, because it talks about a "no storage space error", yet oVirt puts
the virtual machine in a paused state.

In the events of the oVirt web interface I see this:

"VM has been paused due to lack of storage space"

but no ERROR is found in /var/log/vdsm.log.

My oVirt 4.2.1 environment has three hypervisors with FC storage, and until
now I hadn't seen any other problem during the normal functioning of the
VMs; it seems that this error occurs only when there is massive I/O.

Any ideas?
Thanks a lot.
Best Regards
Enrico

--
Enrico Becchetti - Servizio di Calcolo e Reti
Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
Via Pascoli, c/o Dipartimento di Fisica, 06123 Perugia (ITALY)
Phone: +39 075 5852777 Mail: Enrico.Becchettipg.infn.it

From karli at inparadise.se Fri Mar 16 12:28:10 2018
From: karli at inparadise.se (Karli Sjöberg)
Date: Fri, 16 Mar 2018 13:28:10 +0100 (CET)
Subject: [ovirt-users] VM has been paused due to NO STORAGE SPACE ERROR ?!?!?!?!
In-Reply-To: References: Message-ID:

An HTML attachment was scrubbed...

From enrico.becchetti at pg.infn.it Fri Mar 16 12:43:03 2018
From: enrico.becchetti at pg.infn.it (Enrico Becchetti)
Date: Fri, 16 Mar 2018 13:43:03 +0100
Subject: [ovirt-users] VM has been paused due to NO STORAGE SPACE ERROR ?!?!?!?!
In-Reply-To: References: Message-ID:

Yes ... it's thin provisioned; in fact, with a preallocated disk type I
don't have any problem.
Thank you so much.
Best Regards
Enrico

On 16/03/2018 13:28, Karli Sjöberg wrote:
> I think I remember something to do with thin provisioning and not
> being able to grow fast enough, so out of space. Are the VM's disks
> thick or thin?
>
> /K

--
Enrico Becchetti - Servizio di Calcolo e Reti
Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
Via Pascoli, c/o Dipartimento di Fisica, 06123 Perugia (ITALY)
Phone: +39 075 5852777 Mail: Enrico.Becchettipg.infn.it

From spfma.tech at e.mail.fr Fri Mar 16 12:48:16 2018
From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr)
Date: Fri, 16 Mar 2018 13:48:16 +0100
Subject: [ovirt-users] Hosted engine : rebuild without backups ?
Message-ID: <20180316124816.91ABAE446E@smtp01.mail.de>

Hi,

In case of a total failure of the hosted engine VM, it is recommended to
recreate a new one and restore a backup. I hope it works; I will probably
have to do this very soon.

But is there some kind of "plug and play" feature, able to rebuild the
configuration by browsing the storage domains, if the restore process
doesn't work? Something like identifying VMs and their snapshots in the
subdirectories, and then guessing what is linked to what, ...?

I have a few machines, but if I have to rebuild all the engine setup and
content, I would like to be able to identify resources easily.
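For reference, the backup/restore path mentioned above is normally driven by
the engine-backup tool on the engine VM; a minimal sketch, with file names
as examples only:

# on the running engine VM: full backup of database + configuration
engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=backup.log

# on a freshly deployed engine, to restore it:
engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log \
  --provision-db --restore-permissions

If no such backup exists there is no automatic rebuild from the storage
domains alone, although existing data domains can be imported into a new
setup and the VMs on them registered from there.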
A while ago, I was doing some experiments with XenServer and
destroyed/recreated some setup items: I ended up with a lot of orphan
resources, and it was a mess to reattach snapshots to their respective VMs.
So I hope oVirt is more helpful in that regard ...

Regards

From blanchet at abes.fr Fri Mar 16 12:58:01 2018
From: blanchet at abes.fr (Nathanaël Blanchet)
Date: Fri, 16 Mar 2018 13:58:01 +0100
Subject: [ovirt-users] dns vm and ovirt
Message-ID:

Hi all,

I'd need some good-practice advice about running a DNS server in or out of
ovirt.

Until now we never wanted to integrate the DNS VM into ovirt because of the
strong dependency: if the DNS server fails for any reason, it becomes
difficult to reach the webadmin (except with a static /etc/hosts), and the
nodes may become unavailable if they had been configured with FQDNs.

We could consider a DNS failover setup, but in a self hosted engine setup
(and more globally a hyperconverged setup), it doesn't make sense to set up
a standalone DNS VM outside of ovirt.

So what about imitating the engine VM status in a hosted engine setup? Is
there a way to install the DNS VM outside of ovirt but on the ovirt host
(and why not in an HA mode)?

A second option could be installing the named service on the hosted engine
VM?
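For what it's worth, the static /etc/hosts fallback mentioned above would
just be entries like the following on the engine and on each node (names
and addresses are examples only):

192.0.2.10   engine.example.org   engine
192.0.2.21   node1.example.org    node1
192.0.2.22   node2.example.org    node2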
ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 191, in _run_agent return action(he) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 64, in action_proper return he.start_monitoring() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 421, in start_monitoring self._config.refresh_vm_conf() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", line 496, in refresh_vm_conf content_from_ovf = self._get_vm_conf_content_from_ovf_store() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", line 438, in _get_vm_conf_content_from_ovf_store conf = ovf2VmParams.confFromOvf(heovf) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py", line 283, in confFromOvf vmConf = toDict(ovf) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py", line 210, in toDict vmParams['vmId'] = tree.find('Content/Section').attrib[OVF_NS + 'id'] File "lxml.etree.pyx", line 2272, in lxml.etree._Attrib.__getitem__ (src/lxml/lxml.etree.c:55336) KeyError: '{http://schemas.dmtf.org/ovf/envelope/1/}id' ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Unable to refresh vm.conf from the shared storage. Has this HE cluster correctly reached 3.6 level? If anyone could give a hint on where to look at would be very helpful. Thank you, Sven -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholler at redhat.com Fri Mar 16 13:21:13 2018 From: dholler at redhat.com (Dominik Holler) Date: Fri, 16 Mar 2018 14:21:13 +0100 Subject: [ovirt-users] Failed to synchronize networks of Provider ovirt-provider-ovn In-Reply-To: References: Message-ID: <20180316142113.71599c96@t460p> On Fri, 16 Mar 2018 12:46:13 +0200 Kapetanakis Giannis wrote: > Hi, > > After upgrading to 4.2.1 I have problems with ovn provider. > I'm getting "Failed to synchronize networks of Provider > ovirt-provider-ovn." > > I use custom SSL certificate in apache and I guess this is the reason. > > I've tried to update ovirt-provider-ovn.conf with > [OVIRT] > #ovirt-ca-file=/etc/pki/ovirt-engine/ca.pem > ovirt-ca-file=/etc/pki/ovirt-engine/apache-ca.pem > > but still no go > > Any tips on this? > > thanks > > G > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users Would you share the lines in engine.log produced by clicking the "Test" button in the "Edit Provider" dialog? On Clicking the test button, are you asked about "Import provider certificate"? From msivak at redhat.com Fri Mar 16 13:21:16 2018 From: msivak at redhat.com (Martin Sivak) Date: Fri, 16 Mar 2018 14:21:16 +0100 Subject: [ovirt-users] Hosted-Engine Agent broken In-Reply-To: <94c8a2d4a7c347269b1e642ba1690946@eps.aero> References: <94c8a2d4a7c347269b1e642ba1690946@eps.aero> Message-ID: Hi, make sure you have at least ovirt-hosted-engine-ha-2.2.1 and the service was properly restarted. The situation you are describing can happen when you run older hosted engine agent with 4.2 ovirt-engine. 
It was tracked as: https://bugzilla.redhat.com/1518887 Best regards Martin Sivak On Fri, Mar 16, 2018 at 2:09 PM, Sven Achtelik wrote: > Hi All, > > > > after upgrading my engine to 4.2 and upgrading my hosts to the latest > versions the HA for the hosted engine is not working anymore. The Agent > fails with the following errors. Did I miss anything while upgrading ? The > Engine is still running ? what would be the correct approach to get the HA > services up and running ? > > > > ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback > (most recent call last): > > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", > line 191, in _run_agent > > > return action(he) > > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", > line 64, in action_proper > > > return he.start_monitoring() > > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", > line 421, in start_monitoring > > > self._config.refresh_vm_conf() > > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", > line 496, in refresh_vm_conf > > > content_from_ovf = self._get_vm_conf_content_from_ovf_store() > > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", > line 438, in _get_vm_conf_content_from_ovf_store > > > conf = ovf2VmParams.confFromOvf(heovf) > > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py", > line 283, in confFromOvf > > > vmConf = toDict(ovf) > > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ovf/ovf2VmParams.py", > line 210, in toDict > > > vmParams['vmId'] = tree.find('Content/Section').attrib[OVF_NS + 'id'] > > File > "lxml.etree.pyx", line 2272, in lxml.etree._Attrib.__getitem__ > (src/lxml/lxml.etree.c:55336) > > > KeyError: '{http://schemas.dmtf.org/ovf/envelope/1/}id' > > ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to > restart agent > > ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR > Unable to refresh vm.conf from the shared storage. Has this HE cluster > correctly reached 3.6 level? > > > > If anyone could give a hint on where to look at would be very helpful. > > > > Thank you, > > Sven > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From nicolas at ecarnot.net Fri Mar 16 13:46:35 2018 From: nicolas at ecarnot.net (Nicolas Ecarnot) Date: Fri, 16 Mar 2018 14:46:35 +0100 Subject: [ovirt-users] VM has been paused due to NO STORAGE SPACE ERROR ?!?!?!?! In-Reply-To: References: Message-ID: Le 16/03/2018 ? 13:28, Karli Sj?berg a ?crit?: > > > Den 16 mars 2018 12:26 skrev Enrico Becchetti : > > ? Dear All, > Does someone had seen that error ? Yes, I experienced it dozens of times on 3.6 (my 4.2 setup has insufficient workload to trigger such event). And in every case, there was no actual lack of space. > Enrico Becchetti Servizio di Calcolo e Reti > I think I remember something to do with thin provisioning and not being > able to grow fast enough, so out of space. Are the VM's disk thick or thin? All our storage domains are thin-prov. and served by iSCSI (Equallogic PS6xxx and 4xxx). Enrico, do you know if a bug has been filed about this? 
-- Nicolas ECARNOT From acrow at integrafin.co.uk Fri Mar 16 14:48:44 2018 From: acrow at integrafin.co.uk (Alex Crow) Date: Fri, 16 Mar 2018 14:48:44 +0000 Subject: [ovirt-users] VM has been paused due to NO STORAGE SPACE ERROR ?!?!?!?! In-Reply-To: References: Message-ID: On 16/03/18 13:46, Nicolas Ecarnot wrote: > Le 16/03/2018 ? 13:28, Karli Sj?berg a ?crit?: >> >> >> Den 16 mars 2018 12:26 skrev Enrico Becchetti >> : >> >> ???? ? Dear All, >> ??? Does someone had seen that error ? > > Yes, I experienced it dozens of times on 3.6 (my 4.2 setup has > insufficient workload to trigger such event). > And in every case, there was no actual lack of space. > >> ??? Enrico Becchetti Servizio di Calcolo e Reti >> I think I remember something to do with thin provisioning and not >> being able to grow fast enough, so out of space. Are the VM's disk >> thick or thin? > > All our storage domains are thin-prov. and served by iSCSI (Equallogic > PS6xxx and 4xxx). > > Enrico, do you know if a bug has been filed about this? > Did the VM remain paused? In my experience the VM just gets temporarily paused while the storage is expanded. RH confirmed to me in a ticket that this is expected behaviour. If you need high write performance your VM disks should always be preallocated. We only use Thin Provision for VMs where we know that disk writes are low (eg network services, CPU-bound apps, etc). Alex -- This message is intended only for the addressee and may contain confidential information. Unless you are that person, you may not disclose its contents or use it in any way and are requested to delete the message along with any attachments and notify us immediately. This email is not intended to, nor should it be taken to, constitute advice. The information provided is correct to our knowledge & belief and must not be used as a substitute for obtaining tax, regulatory, investment, legal or any other appropriate advice. "Transact" is operated by Integrated Financial Arrangements Ltd. 29 Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608 5300. (Registered office: as above; Registered in England and Wales under number: 3727592). Authorised and regulated by the Financial Conduct Authority (entered on the Financial Services Register; no. 190856). From acrow at integrafin.co.uk Fri Mar 16 14:50:34 2018 From: acrow at integrafin.co.uk (Alex Crow) Date: Fri, 16 Mar 2018 14:50:34 +0000 Subject: [ovirt-users] change CD not working In-Reply-To: References: Message-ID: <3adead2d-7da8-9fdd-eea7-e5725d1b9286@integrafin.co.uk> On 15/03/18 18:55, Junaid Jadoon wrote: > > > Ovirt engine and node version are 4.2. > > "Error while executing action Change CD: Failed to perform "Change CD" operation, CD might be still in use by the VM. > Please try to manually detach the CD from withing the VM: > 1. Log in to the VM > 2 For Linux VMs, un-mount the CD using umount command; > For Windows VMs, right click on the CD drive and click 'Eject';" > > Initially its working fine suddenly it giving above error. > > Logs are attached. > > please help me out > > Regards, > > Junaid > Detach and re-attach of the ISO domain should resolve this. It worked for me. Alex -- This message is intended only for the addressee and may contain confidential information. Unless you are that person, you may not disclose its contents or use it in any way and are requested to delete the message along with any attachments and notify us immediately. This email is not intended to, nor should it be taken to, constitute advice. 
The information provided is correct to our knowledge & belief and must not be used as a substitute for obtaining tax, regulatory, investment, legal or any other appropriate advice. "Transact" is operated by Integrated Financial Arrangements Ltd. 29 Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608 5300. (Registered office: as above; Registered in England and Wales under number: 3727592). Authorised and regulated by the Financial Conduct Authority (entered on the Financial Services Register; no. 190856). -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccox at endlessnow.com Fri Mar 16 15:08:30 2018 From: ccox at endlessnow.com (Christopher Cox) Date: Fri, 16 Mar 2018 10:08:30 -0500 Subject: [ovirt-users] Quick question about oVirt 3.6 and vdsm log in DEBUG mode (apprently by default??) In-Reply-To: <907984b6-9352-e619-2fc5-a5f0b077c8dc@endlessnow.com> References: <9114b59a-8ffb-77a6-366c-06d91ba5be26@endlessnow.com> <907984b6-9352-e619-2fc5-a5f0b077c8dc@endlessnow.com> Message-ID: On 03/14/2018 09:10 AM, Christopher Cox wrote: > On 03/14/2018 01:34 AM, Yaniv Kaul wrote: >> >> >> On Mar 13, 2018 11:48 PM, "Christopher Cox" > > wrote: >> > ...snip... >> >> ??? What we are seeing more and more is that if we do an operation >> like expose a >> ??? new LUN and configure a new storage domain, that all of the >> hyervisors go >> ??? "red triangle" and "Connecting..." and it takes a very long time >> (all day) >> ??? to straighten out. >> >> ??? My guess is that there's too much to look at vdsm wise and so it's >> waiting a >> ??? short(er) period of time for a completed response than what vdsm >> is going to >> ??? us, and it just cycles over and over until it just happens to work. >> >> >> Please upgrade. We have solved issues and improved performance and scale >> substantially since 3.6. >> You may also wish to apply lvm filters. >> Y. > > Oh, we know and are looking at what we'll have to do to upgrade.? With > that said, is there more information on what you mentioned as "lvm > filters" posted somewhere? > > Also, would VM reduction, and IMHO, virtual disk reduction help this > problem? > > Is there and engine config parameters that might help as well? > > Thanks for any help on this. Based on a different older thread about having lots of virtual networks which sounded somewhat similar, I have increased our vdsTimeout value. Any opinions on whether or not that might help? Right now I'm forced to tell my management that we'll have to "roll the dice" to find out. But kind of hoping to hear someone "say" it should help. Anyone? Just looking for something more substantial... From omachace at redhat.com Fri Mar 16 15:32:38 2018 From: omachace at redhat.com (Ondra Machacek) Date: Fri, 16 Mar 2018 16:32:38 +0100 Subject: [ovirt-users] Setting up a LDAP conf In-Reply-To: <209153e2c13f470483d0faab7189aa2a@locaweb.com.br> References: <209153e2c13f470483d0faab7189aa2a@locaweb.com.br> Message-ID: On 03/16/2018 12:26 AM, Jose Fernandes wrote: > Hello, > > > I have an OpenDJ LDAP server, and I need some help to do query on a > specific filter search. I remember I used to setup OpenDJ some time ago, please check this blog post: http://machacekondra.blogspot.cz/2015/05/saml-and-ovirt-35.html The important part there for you is the file: /usr/share/ovirt-engine-extension-aaa-ldap/profiles/opendj.properties Then you can use it as 'include = ' in authz/authn. > > > We can't figure out how to create a "aaa/profile1.properties" file with > these configs. 
> > > This is how we can filter the users with?ldapsearch on our ldap server: > > > -H ldaps://server:port-D uid=user,ou=OU,dc=SERVER,dc=com,dc=br -W -b > ou=aa,dc=bb,dc=cc,dc=dd uid=jose.fernandes > > > ?- My configuration does not permit I search the users on base, so I > need to do this filter on "ou=aa,dc=bb,dc=cc,dc=dd" > > ?-?Port is different from common. > > > Someone can help me to create the config file? > > > Regards, > > Jos? Fernandes > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From nicolas at ecarnot.net Fri Mar 16 15:37:49 2018 From: nicolas at ecarnot.net (Nicolas Ecarnot) Date: Fri, 16 Mar 2018 16:37:49 +0100 Subject: [ovirt-users] VM has been paused due to NO STORAGE SPACE ERROR ?!?!?!?! In-Reply-To: References: Message-ID: Le 16/03/2018 ? 15:48, Alex Crow a ?crit?: > On 16/03/18 13:46, Nicolas Ecarnot wrote: >> Le 16/03/2018 ? 13:28, Karli Sj?berg a ?crit?: >>> >>> >>> Den 16 mars 2018 12:26 skrev Enrico Becchetti >>> : >>> >>> ???? ? Dear All, >>> ??? Does someone had seen that error ? >> >> Yes, I experienced it dozens of times on 3.6 (my 4.2 setup has >> insufficient workload to trigger such event). >> And in every case, there was no actual lack of space. >> >>> ??? Enrico Becchetti Servizio di Calcolo e Reti >>> I think I remember something to do with thin provisioning and not >>> being able to grow fast enough, so out of space. Are the VM's disk >>> thick or thin? >> >> All our storage domains are thin-prov. and served by iSCSI (Equallogic >> PS6xxx and 4xxx). >> >> Enrico, do you know if a bug has been filed about this? >> > Did the VM remain paused? In my experience the VM just gets temporarily > paused while the storage is expanded. RH confirmed to me in a ticket > that this is expected behaviour. AFAIR, most of them went back up and running by themselves (we had to manually some of them from times to times). The storage side weakness is an interesting trail to follow. We also experienced this behavior when migrating lots of VMs at once, yet using a dedicated storage network. Being on this mailing list since long, I remember we already discussed several times about how some users feel how oVirt can appear sensitive to storage latencies. On my side, the site where most of our workload resides is still in 3.6, so I can not yet witness the efforts oVirt devs have made to cope with this in 4.2 but I'm sure they did. -- Nicolas ECARNOT From farkey_2000 at yahoo.com Fri Mar 16 15:40:02 2018 From: farkey_2000 at yahoo.com (Andy) Date: Fri, 16 Mar 2018 15:40:02 +0000 (UTC) Subject: [ovirt-users] Trunk'd Network References: <1900160491.1855275.1521214802539.ref@mail.yahoo.com> Message-ID: <1900160491.1855275.1521214802539@mail.yahoo.com> Community, I am trying to trunk two VLAN's to a VM on OVIRT 4.2 and all the research/docs I have seen is to create a standard VM network and to NOT tag anything.? That is "Supposed" to pass all traffic where the attached VM will need to tag said traffic.? Everything I have tried has been unsuccessful and I am not seeing any traffic pass to the VM.? Has anyone successfully accomplished sending mulitple VLAN's downstream (vswitch) to a VM?? On the VMWARE side the 4095 VLAN would accomplish this and I can do so in my VMWARE test lab. ? thanks Andy? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bilias at edu.physics.uoc.gr Fri Mar 16 15:40:46 2018 From: bilias at edu.physics.uoc.gr (Kapetanakis Giannis) Date: Fri, 16 Mar 2018 17:40:46 +0200 Subject: [ovirt-users] Failed to synchronize networks of Provider ovirt-provider-ovn In-Reply-To: <20180316142113.71599c96@t460p> References: <20180316142113.71599c96@t460p> Message-ID: On 16/03/18 15:21, Dominik Holler wrote: > On Fri, 16 Mar 2018 12:46:13 +0200 > Kapetanakis Giannis wrote: > >> Hi, >> >> After upgrading to 4.2.1 I have problems with ovn provider. >> I'm getting "Failed to synchronize networks of Provider >> ovirt-provider-ovn." >> >> I use custom SSL certificate in apache and I guess this is the reason. >> >> I've tried to update ovirt-provider-ovn.conf with >> [OVIRT] >> #ovirt-ca-file=/etc/pki/ovirt-engine/ca.pem >> ovirt-ca-file=/etc/pki/ovirt-engine/apache-ca.pem >> >> but still no go > > Would you share the lines in engine.log produced by clicking the "Test" > button in the "Edit Provider" dialog? > On Clicking the test button, are you asked about "Import provider > certificate"? > I get ok in test: Test succeeded, managed to access provider. 2018-03-16 17:35:20,024+02 INFO [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand] (default task-28) [9920f622-b878-45e1-a421-e76c0ab23470] Running command: TestProviderConnectivityCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_POOL with role type ADMIN However a little bit later: ovirt-provider-ovn.log: 2018-03-16 17:37:27,827 requests.packages.urllib3.connectionpool Starting new HTTPS connection (1): engine-host 2018-03-16 17:37:27,827 requests.packages.urllib3.connectionpool Starting new HTTPS connection (1): engine-host 2018-03-16 17:37:27,832 root [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579) Traceback (most recent call last): File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line 131, in _handle_request method, path_parts, content) File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", line 175, in handle_request return self.call_response_handler(handler, content, parameters) File "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in call_response_handler return response_handler(content, parameters) File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py", line 62, in post_tokens user_password=user_password) File "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26, in create_token return auth.core.plugin.create_token(user_at_domain, user_password) File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", line 48, in create_token timeout=self._timeout()) File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 75, in create_token username, password, engine_url, ca_file, timeout) File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 91, in _get_sso_token timeout=timeout File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 54, in wrapper response = func(*args, **kwargs) File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 47, in wrapper raise BadGateway(e) BadGateway: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579) and in engine log: 2018-03-16 17:37:27,834+02 ERROR [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-27) [621c2b23] Command 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand' failed: 
EngineException: (Failed with error PROVIDER_FAILURE and code 5050) 2018-03-16 17:37:27,850+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-27) [621c2b23] EVENT_ID: PROVIDER_SYNCHRONIZED_FAILED(216), Failed to synchronize networks of Provider ovirt-provider-ovn. So the engine can talk with ovn but not the other way around as I understand. I think it might have to do with [SSL] settings of ovirt-provider-ovn.conf G From bilias at edu.physics.uoc.gr Fri Mar 16 15:46:36 2018 From: bilias at edu.physics.uoc.gr (Kapetanakis Giannis) Date: Fri, 16 Mar 2018 17:46:36 +0200 Subject: [ovirt-users] Failed to synchronize networks of Provider ovirt-provider-ovn In-Reply-To: References: <20180316142113.71599c96@t460p> Message-ID: <31fde430-1d81-29fb-4674-d34e4089afe1@edu.physics.uoc.gr> On 16/03/18 17:40, Kapetanakis Giannis wrote: > On 16/03/18 15:21, Dominik Holler wrote: >> On Fri, 16 Mar 2018 12:46:13 +0200 >> Kapetanakis Giannis wrote: >> >>> Hi, >>> >>> After upgrading to 4.2.1 I have problems with ovn provider. >>> I'm getting "Failed to synchronize networks of Provider >>> ovirt-provider-ovn." >>> >>> I use custom SSL certificate in apache and I guess this is the reason. >>> >>> I've tried to update ovirt-provider-ovn.conf with >>> [OVIRT] >>> #ovirt-ca-file=/etc/pki/ovirt-engine/ca.pem >>> ovirt-ca-file=/etc/pki/ovirt-engine/apache-ca.pem >>> >>> but still no go > >> >> Would you share the lines in engine.log produced by clicking the "Test" >> button in the "Edit Provider" dialog? >> On Clicking the test button, are you asked about "Import provider >> certificate"? SORRY wrong provider. It asks for the cert. Failed to communicate with the external provider, see log for additional details. 2018-03-16 17:44:08,262+02 INFO [org.ovirt.engine.core.bll.provider.ImportProviderCertificateCommand] (default task-52) [4731d25d-fce3-4408-99ea-8f9d1b5ee5b6] Running command: ImportProviderCertificateCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_POOL with role type ADMIN 2018-03-16 17:44:08,275+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-52) [4731d25d-fce3-4408-99ea-8f9d1b5ee5b6] EVENT_ID: PROVIDER_CERTIFICATE_IMPORTED(213), Certificate for provider ovirt-provider-ovn was imported. (User: admin at internal) 2018-03-16 17:44:08,302+02 INFO [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand] (default task-44) [f4b2c57b-60c7-4ef9-a59f-0c5b22fa0356] Running command: TestProviderConnectivityCommand internal: false. 
Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_POOL with role type ADMIN 2018-03-16 17:44:08,360+02 ERROR [org.ovirt.engine.core.bll.provider.network.openstack.BaseNetworkProviderProxy] (default task-44) [f4b2c57b-60c7-4ef9-a59f-0c5b22fa0356] Bad Gateway (OpenStack response error code: 502) 2018-03-16 17:44:08,360+02 ERROR [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand] (default task-44) [f4b2c57b-60c7-4ef9-a59f-0c5b22fa0356] Command 'org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand' failed: EngineException: (Failed with error PROVIDER_FAILURE and code 5050) and in provider log: 2018-03-16 17:45:33,961 requests.packages.urllib3.connectionpool Starting new HTTPS connection (1): engine-host 2018-03-16 17:45:33,961 requests.packages.urllib3.connectionpool Starting new HTTPS connection (1): engine-host 2018-03-16 17:45:33,966 root [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579) Traceback (most recent call last): File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line 131, in _handle_request method, path_parts, content) File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", line 175, in handle_request return self.call_response_handler(handler, content, parameters) File "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in call_response_handler return response_handler(content, parameters) File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py", line 62, in post_tokens user_password=user_password) File "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26, in create_token return auth.core.plugin.create_token(user_at_domain, user_password) File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", line 48, in create_token timeout=self._timeout()) File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 75, in create_token username, password, engine_url, ca_file, timeout) File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 91, in _get_sso_token timeout=timeout File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 54, in wrapper response = func(*args, **kwargs) File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 47, in wrapper raise BadGateway(e) BadGateway: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579) From ccox at endlessnow.com Fri Mar 16 15:48:01 2018 From: ccox at endlessnow.com (Christopher Cox) Date: Fri, 16 Mar 2018 10:48:01 -0500 Subject: [ovirt-users] dns vm and ovirt In-Reply-To: References: Message-ID: On 03/16/2018 07:58 AM, Nathana?l Blanchet wrote: > Hi all, > > I'd need some piece of good practice about dealing a DNS server in or > out of ovirt. > Until now we never wanted to integrate the DNS vm into ovirt because of > the strong dependency. if the DNS server fails for any reason, it > becomes difficult ot join the webadmin (except with a static etc hosts) > and the nodes may become unvailable if they had been configured with fqdn. > We could consider a DNS failover setup, but in a self hosted engine > setup (and more globally an hyperconverged setup) , it doesn't make > sense of setting up a stand alone DNS vm outside of ovirt. > > So what about imitating engine vm status in a hosted engine setup? Is > there a way to install the DNS vm outside of ovirt but on the ovirt host > (and why not in a HA mode)? > Second option could be installing the named service on the hosted engine > vm? 
> > Any suggestion or return of experience would be much appreciated. > You are wise to think of this as a dependency problem. When dealing with any "in band" vs. "out of band" type of scenario you want to properly address how things work "without" the dependency. So.. for example, you could maintain a static host table setup for your ovirt nodes. Thus, they could find each other without DNS. Also, those nodes might have an external DNS configured for lookups (something you don't own) just so things like updates can happen. There are risks to everything. Putting key (normally) out of band infrastructure into your oVirt, including the engine, always involves more risk. With that said, if you think about you key infrastructure being as a separate oVirt datacenter, it would have things like the "static host" maps and such. Some of the infrastructure VMs housed there could include the engine for the "general" datacenters (the ones not providing VMs for key infrastructure). This these "general" purpose datacenters would house the normal VMs and use potentially VMs out of the "infrastructure" datacenter. Does that make sense? It's not unlike how a lot of cloud providers operate. In fact, one well known provider used to house their core cloud infrastructure in VMware and use "cheaper" hypervisors for their cloud clients. Summary: static confs for infrastructure ovirt datacenter containing key core infrastructure VMs (including things like DNS, DHCP, Active Directory, and oVirt engines) used by general purpose ovirt datacenters. Obviously the infrastructure datacenter becomes very important, much like your base network and should be thought of as "first" priority, much like the network. And much like the network, depends on some kickstarter static configs. From jose.fernandes at locaweb.com.br Fri Mar 16 16:39:29 2018 From: jose.fernandes at locaweb.com.br (Jose Fernandes) Date: Fri, 16 Mar 2018 16:39:29 +0000 Subject: [ovirt-users] Setting up a LDAP conf In-Reply-To: References: <209153e2c13f470483d0faab7189aa2a@locaweb.com.br>, Message-ID: <7b10a173ef8f4cfd99fe787350060ab6@locaweb.com.br> Thanks Machacek! ________________________________ De: Ondra Machacek Enviado: sexta-feira, 16 de mar?o de 2018 12:32:38 Para: Jose Fernandes; users at ovirt.org Assunto: Re: [ovirt-users] Setting up a LDAP conf On 03/16/2018 12:26 AM, Jose Fernandes wrote: > Hello, > > > I have an OpenDJ LDAP server, and I need some help to do query on a > specific filter search. I remember I used to setup OpenDJ some time ago, please check this blog post: http://machacekondra.blogspot.cz/2015/05/saml-and-ovirt-35.html The important part there for you is the file: /usr/share/ovirt-engine-extension-aaa-ldap/profiles/opendj.properties Then you can use it as 'include = ' in authz/authn. > > > We can't figure out how to create a "aaa/profile1.properties" file with > these configs. > > > This is how we can filter the users with ldapsearch on our ldap server: > > > -H ldaps://server:port-D uid=user,ou=OU,dc=SERVER,dc=com,dc=br -W -b > ou=aa,dc=bb,dc=cc,dc=dd uid=jose.fernandes > > > - My configuration does not permit I search the users on base, so I > need to do this filter on "ou=aa,dc=bb,dc=cc,dc=dd" > > - Port is different from common. > > > Someone can help me to create the config file? > > > Regards, > > Jos? 
Fernandes > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholler at redhat.com Fri Mar 16 16:40:35 2018 From: dholler at redhat.com (Dominik Holler) Date: Fri, 16 Mar 2018 17:40:35 +0100 Subject: [ovirt-users] Failed to synchronize networks of Provider ovirt-provider-ovn In-Reply-To: <31fde430-1d81-29fb-4674-d34e4089afe1@edu.physics.uoc.gr> References: <20180316142113.71599c96@t460p> <31fde430-1d81-29fb-4674-d34e4089afe1@edu.physics.uoc.gr> Message-ID: <20180316174035.3d02a185@t460p> On Fri, 16 Mar 2018 17:46:36 +0200 Kapetanakis Giannis wrote: > On 16/03/18 17:40, Kapetanakis Giannis wrote: > > On 16/03/18 15:21, Dominik Holler wrote: > >> On Fri, 16 Mar 2018 12:46:13 +0200 > >> Kapetanakis Giannis wrote: > >> > >>> Hi, > >>> > >>> After upgrading to 4.2.1 I have problems with ovn provider. > >>> I'm getting "Failed to synchronize networks of Provider > >>> ovirt-provider-ovn." > >>> > >>> I use custom SSL certificate in apache and I guess this is the > >>> reason. > >>> > >>> I've tried to update ovirt-provider-ovn.conf with > >>> [OVIRT] > >>> #ovirt-ca-file=/etc/pki/ovirt-engine/ca.pem > >>> ovirt-ca-file=/etc/pki/ovirt-engine/apache-ca.pem > >>> > >>> but still no go > > > >> > >> Would you share the lines in engine.log produced by clicking the > >> "Test" button in the "Edit Provider" dialog? > >> On Clicking the test button, are you asked about "Import provider > >> certificate"? > > SORRY wrong provider. > > It asks for the cert. > Failed to communicate with the external provider, see log for > additional details. > > 2018-03-16 17:44:08,262+02 INFO > [org.ovirt.engine.core.bll.provider.ImportProviderCertificateCommand] > (default task-52) [4731d25d-fce3-4408-99ea-8f9d1b5ee5b6] Running > command: ImportProviderCertificateCommand internal: false. Entities > affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: > SystemAction group CREATE_STORAGE_POOL with role type ADMIN > 2018-03-16 17:44:08,275+02 INFO > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (default task-52) [4731d25d-fce3-4408-99ea-8f9d1b5ee5b6] EVENT_ID: > PROVIDER_CERTIFICATE_IMPORTED(213), Certificate for provider > ovirt-provider-ovn was imported. (User: admin at internal) 2018-03-16 > 17:44:08,302+02 INFO > [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand] > (default task-44) [f4b2c57b-60c7-4ef9-a59f-0c5b22fa0356] Running > command: TestProviderConnectivityCommand internal: false. 
Thanks. Yes, the ovirt-provider-ovn refuses to connect to ovirt-engine for
authentication because ovirt-provider-ovn does not trust the
ssl-certificate, and propagates this as the BadGateway error.

Please note that engine-setup creates the file
/etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf
which overwrites the default values from
/etc/ovirt-provider-ovn/ovirt-provider-ovn.conf

If you want to check if the referenced /etc/pki/ovirt-engine/apache-ca.pem
is correct, you can use the following python snippet:

import requests
response = requests.get('https://ENGINE_FQDN/', verify='/etc/pki/ovirt-engine/apache-ca.pem')
assert response.status_code == 200

Does this help to solve the issue?

From blanchet at abes.fr Fri Mar 16 17:28:03 2018
From: blanchet at abes.fr (Nathanaël Blanchet)
Date: Fri, 16 Mar 2018 18:28:03 +0100
Subject: [ovirt-users] dns vm and ovirt
In-Reply-To: References: Message-ID: <8dd344da-9ce2-9689-cb0d-8fdbe3197862@abes.fr>

Thanks for the precious advice!

So
it means that the people who designed the hosted engine feature didn't share
your philosophy of running the engine in a second datacenter.

On 16/03/2018 16:48, Christopher Cox wrote:
> You are wise to think of this as a dependency problem. When dealing with
> any "in band" vs. "out of band" type of scenario, you want to properly
> address how things work "without" the dependency.
> [...]
> Obviously the infrastructure datacenter becomes very important, much like
> your base network, and should be thought of as "first" priority, much
> like the network. And much like the network, it depends on some
> kickstarter static configs.

--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél.
33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet at abes.fr

From ccox at endlessnow.com Fri Mar 16 18:38:23 2018
From: ccox at endlessnow.com (Christopher Cox)
Date: Fri, 16 Mar 2018 13:38:23 -0500
Subject: [ovirt-users] dns vm and ovirt
In-Reply-To: <8dd344da-9ce2-9689-cb0d-8fdbe3197862@abes.fr> References: <8dd344da-9ce2-9689-cb0d-8fdbe3197862@abes.fr> Message-ID:

On 03/16/2018 12:28 PM, Nathanaël Blanchet wrote:
> Thanks for the precious advice!
>
> So it means that the people who designed the hosted engine feature didn't
> share your philosophy of running the engine in a second datacenter.

Again, strictly a "risk" thing. Hosted engine is by definition a "chicken
and egg" thing. It's great for learning and for a lab... but if you're
going to run production, I'd at least consider the latter option I
presented.

With that said, we run dedicated engines today, not hosted. Remember, ovirt
nodes keep running even while the engine is down. So you can tolerate an
engine outage for a period of time; you just lose reliability in case of
node failures, etc. So for us, most of the risk is in rebuilding a new
engine if we have to... but that's certainly considered a "rare" case.

Putting key infrastructure inside the very thing that needs the key
infrastructure to run is just fraught with problems.

Everything has costs, and typically the more robust/reliable your setup,
the more it's going to cost. I just wanted to present an "in between" style
setup that gives you more reliability, but perhaps not the "best", while
keeping costs way down.

To me, if you're running any datacenter cluster (for example), you need a
minimum of 3 nodes. People might not like that, but it's my minimum for
reliability and flexibility. So... if you wanted to use VMs for core
infrastructure, that's 3 nodes. That core infrastructure datacenter might
have a hosted engine, but it likely also has "static definitions". It's
part of the "core"; at least several parts of it are. But the idea is it
could hold: DNS, DHCP, Active Directory/LDAP, file shares (even storage
domains for other datacenters), etc.

Obviously a "core" failure is a "core" failure and thus needs the same
treatment as whatever you consider to be "core" today. (Thus on a
total-outage bring-up, you bring up the core, which now includes this core
infrastructure datacenter... your core "tests" are run to verify, and then
the rest is brought up.)

Then each general production datacenter cluster would have 3 nodes, with
the engine(s) being VM(s) off the infrastructure datacenter, using core
infrastructure off that infrastructure datacenter as well.

Again, this is very much like most cloud service providers today. Again,
just ideas, mainly thinking on the "cheap", though some might not think so
(you'll just have to trust me, what I'm presenting here is incredibly cheap
for the reliability and flexibility it provides).

Just my opinion.

From bilias at edu.physics.uoc.gr Fri Mar 16 23:20:03 2018
From: bilias at edu.physics.uoc.gr (Kapetanakis Giannis)
Date: Sat, 17 Mar 2018 01:20:03 +0200
Subject: [ovirt-users] Failed to synchronize networks of Provider ovirt-provider-ovn
In-Reply-To: <20180316174035.3d02a185@t460p> References: <20180316142113.71599c96@t460p> <31fde430-1d81-29fb-4674-d34e4089afe1@edu.physics.uoc.gr> <20180316174035.3d02a185@t460p> Message-ID: <3097420a-d0bc-dd92-9417-0813e2746e71@edu.physics.uoc.gr>

On 16/03/18 18:40, Dominik Holler wrote:
> Thanks.
> Yes, the ovirt-provider-ovn refuses to connect to ovirt-engine
> for authentication because ovirt-provider-ovn does not trust the
> ssl-certificate and propagates this as the BadGateway error.
>
> Please note that engine-setup creates the file
> /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf
> which overwrites the default values from
> /etc/ovirt-provider-ovn/ovirt-provider-ovn.conf

Thanks,

I didn't notice the conf.d dir.
Changing ovirt-ca-file there fixed it.

regards,

G

From bilias at edu.physics.uoc.gr Fri Mar 16 23:32:40 2018
From: bilias at edu.physics.uoc.gr (Kapetanakis Giannis)
Date: Sat, 17 Mar 2018 01:32:40 +0200
Subject: [ovirt-users] Failed to synchronize networks of Provider ovirt-provider-ovn
In-Reply-To: <3097420a-d0bc-dd92-9417-0813e2746e71@edu.physics.uoc.gr> References: <20180316142113.71599c96@t460p> <31fde430-1d81-29fb-4674-d34e4089afe1@edu.physics.uoc.gr> <20180316174035.3d02a185@t460p> <3097420a-d0bc-dd92-9417-0813e2746e71@edu.physics.uoc.gr> Message-ID: <03d81b18-4ed3-c9db-54d7-3e3bbbf7f893@edu.physics.uoc.gr>

On 17/03/18 01:20, Kapetanakis Giannis wrote:
> I didn't notice the conf.d dir.
> Changing ovirt-ca-file there fixed it.

Going forward, it would make sense to change the default to
/etc/pki/ovirt-engine/apache-ca.pem, since by default it's a symlink to
ca.pem (which is the current default). That way both the default and a
custom cert would work.

G

From gianluca.cecchi at gmail.com Fri Mar 16 23:42:26 2018
From: gianluca.cecchi at gmail.com (Gianluca Cecchi)
Date: Sat, 17 Mar 2018 00:42:26 +0100
Subject: [ovirt-users] ovirt node pxe and thinpool error in status
Message-ID:

Hello,
I'm experimenting with installing oVirt Node NG 4.2 via PXE, and it seems
all went OK.
Only problem I have is that opening terminal session I get this traceback that seems related to thinpool Traceback (most recent call last): File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/usr/lib/python2.7/site-packages/nodectl/__main__.py", line 42, in CliApplication() File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 200, in CliApplication return cmdmap.command(args) File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 118, in command return self.commands[command](**kwargs) File "/usr/lib/python2.7/site-packages/nodectl/__init__.py", line 101, in motd Motd(Status(Health(self.imgbased).status(), File "/usr/lib/python2.7/site-packages/imgbased/plugins/core.py", line 358, in status status.results.append(group().run()) File "/usr/lib/python2.7/site-packages/imgbased/plugins/core.py", line 385, in check_thin pool = self.app.imgbase._thinpool() File "/usr/lib/python2.7/site-packages/imgbased/imgbase.py", line 120, in _thinpool return LVM.Thinpool.from_tag(self.thinpool_tag) File "/usr/lib/python2.7/site-packages/imgbased/lvm.py", line 191, in from_tag assert len(lvs) == 1 AssertionError Admin Console: https://192.168.122.196:9090/ [root at localhost ~]# Current disk layout is [root at localhost ~]# fdisk -l /dev/vda Disk /dev/vda: 53.7 GB, 53687091200 bytes, 104857600 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk label type: dos Disk identifier: 0x000a3595 Device Boot Start End Blocks Id System /dev/vda1 * 2048 2099199 1048576 83 Linux /dev/vda2 2099200 104857599 51379200 8e Linux LVM [root at localhost ~]# pvs PV VG Fmt Attr PSize PFree /dev/vda2 centos lvm2 a-- <49.00g 9.80g [root at localhost ~]# vgs VG #PV #LV #SN Attr VSize VFree centos 1 3 0 wz--n- <49.00g 9.80g [root at localhost ~]# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert pool00 centos twi-aotz-- <34.16g 10.28 9.18 root centos Vwi-aotz-- <34.16g pool00 10.28 swap centos -wi-ao---- 5.00g [root at localhost ~]# Any hint? I installed from 4.2 iso and then applied the ovirt-node-ng-image-update-4.2.1.1-1.el7.centos.noarch yum update The problem remains At the moment it is not linked with any engine, also installed. I can go to the Admin Console web page and I see, clicking inside "Virtualization" left tab, that on the right dashboard I have a yellow triangle to the right of the "Health" word... But I don't exactly understand the source of the warning... Thanks for any hint related, Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Sat Mar 17 09:01:12 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Sat, 17 Mar 2018 09:01:12 +0000 Subject: [ovirt-users] Trunk'd Network In-Reply-To: <1900160491.1855275.1521214802539@mail.yahoo.com> References: <1900160491.1855275.1521214802539.ref@mail.yahoo.com> <1900160491.1855275.1521214802539@mail.yahoo.com> Message-ID: Il Ven 16 Mar 2018, 16:40 Andy ha scritto: > Community, > > I am trying to trunk two VLAN's to a VM on OVIRT 4.2 and all the > research/docs I have seen is to create a standard VM network and to NOT tag > anything. That is "Supposed" to pass all traffic where the attached VM > will need to tag said traffic. 
Everything I have tried has been > unsuccessful and I am not seeing any traffic pass to the VM. Has anyone > successfully accomplished sending mulitple VLAN's downstream (vswitch) to a > VM? On the VMWARE side the 4095 VLAN would accomplish this and I can do so > in my VMWARE test lab. > > thanks > > > See here answer for same question lists.ovirt.org/pipermail/users/2017-November/085253.html Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincent at epicenergy.ca Sat Mar 17 09:50:52 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Sat, 17 Mar 2018 02:50:52 -0700 Subject: [ovirt-users] Postgresql read only user difficulties Message-ID: Hi, I followed these instructions on Ovirt self hosted engine 4.2.1: https://www.ovirt.org/documentation/data-warehouse/Allowing_Read_Only_Access_to_the_History_Database/ when connecting to the db from an external host I receive this error: pq: no pg_hba.conf entry for host "", user "", database "ovirt_engine_history", SSL off I looked in the normal place for pg_hba.conf but the file does not exist, /data does not exist in /var/lib/pgsql Do i need to run engine-setup again to configure this? Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From matonb at ltresources.co.uk Sat Mar 17 10:13:17 2018 From: matonb at ltresources.co.uk (Maton, Brett) Date: Sat, 17 Mar 2018 10:13:17 +0000 Subject: [ovirt-users] oVirt 4.2 Fresh Install - Storage Domain Message-ID: I've just reinstalled ovirt 4.2 (ovirt-release42.rpm) on a clean CentOS 7.4 host, all went fine however Now that the hosted engine is up, I've added a data storage domain but it looks like the next step that detects / promotes the new data domain to master isn't being triggered. Hosted engine and data domain are on NFS storage, /var/log/messages is filling with these messages journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. Please ensure you already added your first data domain for regular VMs I ran into the same issue when I imported an existing domain. -------------- next part -------------- An HTML attachment was scrubbed... URL: From matonb at ltresources.co.uk Sat Mar 17 10:42:01 2018 From: matonb at ltresources.co.uk (Maton, Brett) Date: Sat, 17 Mar 2018 10:42:01 +0000 Subject: [ovirt-users] Postgresql read only user difficulties In-Reply-To: References: Message-ID: Hi Vincent, oVirt isn't using the stock PostgreSQL but an SCL version You should find pg_hba.conf here /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf Hope this helps. On 17 March 2018 at 09:50, Vincent Royer wrote: > Hi, > > I followed these instructions on Ovirt self hosted engine 4.2.1: > > https://www.ovirt.org/documentation/data-warehouse/ > Allowing_Read_Only_Access_to_the_History_Database/ > > when connecting to the db from an external host I receive this error: > > pq: no pg_hba.conf entry for host "", user "", database > "ovirt_engine_history", SSL off > > I looked in the normal place for pg_hba.conf but the file does not exist, > /data does not exist in /var/lib/pgsql > > Do i need to run engine-setup again to configure this? > > Thank you! > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vincent at epicenergy.ca Sat Mar 17 11:20:35 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Sat, 17 Mar 2018 04:20:35 -0700 Subject: [ovirt-users] Postgresql read only user difficulties In-Reply-To: References: Message-ID: ok thanks, I did see it there but assumed that was a temp file. I updated it according to the instructions, but I still get the same error. # TYPE DATABASE USER ADDRESS METHOD # "local" is for Unix domain socket connections only local all all peer host ovirt_engine_history ovirt_engine_history 0.0.0.0/0 md5 host ovirt_engine_history ovirt_engine_history ::0/0 md5 host ovirt_engine_history grafana 172.16.30.10 /0 md5 host ovirt_engine_history grafana ::0/0 md5 host engine engine 0.0.0.0/0 md5 host engine engine ::0/0 md5 did a systemctl restart postgresql.service and I get "Unit not found". So I did systemctl restart ovirt-engine.service... and the error I get when accessing from 172.16.30.10 is: pq: no pg_hba.conf entry for host "172.16.30.10", user "grafana", database "ovirt_engine_history", SSL off *Vincent Royer* *778-825-1057* *SUSTAINABLE MOBILE ENERGY SOLUTIONS* On Sat, Mar 17, 2018 at 3:42 AM, Maton, Brett wrote: > Hi Vincent, > > oVirt isn't using the stock PostgreSQL but an SCL version > You should find pg_hba.conf here /var/opt/rh/rh-postgresql95/ > lib/pgsql/data/pg_hba.conf > > Hope this helps. > > On 17 March 2018 at 09:50, Vincent Royer wrote: > >> Hi, >> >> I followed these instructions on Ovirt self hosted engine 4.2.1: >> >> https://www.ovirt.org/documentation/data-warehouse/Allowing_ >> Read_Only_Access_to_the_History_Database/ >> >> when connecting to the db from an external host I receive this error: >> >> pq: no pg_hba.conf entry for host "", user "", database >> "ovirt_engine_history", SSL off >> >> I looked in the normal place for pg_hba.conf but the file does not exist, >> /data does not exist in /var/lib/pgsql >> >> Do i need to run engine-setup again to configure this? >> >> Thank you! >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matonb at ltresources.co.uk Sat Mar 17 11:34:18 2018 From: matonb at ltresources.co.uk (Maton, Brett) Date: Sat, 17 Mar 2018 11:34:18 +0000 Subject: [ovirt-users] Postgresql read only user difficulties In-Reply-To: References: Message-ID: You could always try reloading the configuration, pretty sure pg_hba gets reloaded these days: su - postgres scl enable rh-postgresql95 bash pg_ctl reload -D /var/opt/rh/rh-postgresql95/lib/pgsql/data or as root systemctl restart rh-postgresql95-postgresql.service On 17 March 2018 at 11:20, Vincent Royer wrote: > ok thanks, I did see it there but assumed that was a temp file. I updated > it according to the instructions, but I still get the same error. > > # TYPE DATABASE USER ADDRESS METHOD > > # "local" is for Unix domain socket connections only > local all all peer > host ovirt_engine_history ovirt_engine_history 0.0.0.0/0 > md5 > host ovirt_engine_history ovirt_engine_history ::0/0 > md5 > host ovirt_engine_history grafana 172.16.30.10 /0 md5 > host ovirt_engine_history grafana ::0/0 md5 > host engine engine 0.0.0.0/0 md5 > host engine engine ::0/0 md5 > > > did a systemctl restart postgresql.service and I get "Unit not found". > So I did systemctl restart ovirt-engine.service... 
> > and the error I get when accessing from 172.16.30.10 is: > > pq: no pg_hba.conf entry for host "172.16.30.10", user "grafana", database > "ovirt_engine_history", SSL off > > > > > *Vincent Royer* > *778-825-1057* > > > > *SUSTAINABLE MOBILE ENERGY SOLUTIONS* > > > > > On Sat, Mar 17, 2018 at 3:42 AM, Maton, Brett > wrote: > >> Hi Vincent, >> >> oVirt isn't using the stock PostgreSQL but an SCL version >> You should find pg_hba.conf here /var/opt/rh/rh-postgresql95/li >> b/pgsql/data/pg_hba.conf >> >> Hope this helps. >> >> On 17 March 2018 at 09:50, Vincent Royer wrote: >> >>> Hi, >>> >>> I followed these instructions on Ovirt self hosted engine 4.2.1: >>> >>> https://www.ovirt.org/documentation/data-warehouse/Allowing_ >>> Read_Only_Access_to_the_History_Database/ >>> >>> when connecting to the db from an external host I receive this error: >>> >>> pq: no pg_hba.conf entry for host "", user "", database >>> "ovirt_engine_history", SSL off >>> >>> I looked in the normal place for pg_hba.conf but the file does not >>> exist, /data does not exist in /var/lib/pgsql >>> >>> Do i need to run engine-setup again to configure this? >>> >>> Thank you! >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pablo.localhost at gmail.com Sat Mar 17 17:24:03 2018 From: pablo.localhost at gmail.com (Juan Pablo) Date: Sat, 17 Mar 2018 14:24:03 -0300 Subject: [ovirt-users] oVirt 4.2 Fresh Install - Storage Domain In-Reply-To: References: Message-ID: you need to configure first a domain (can be temporary) then, import the storage domain. 2018-03-17 7:13 GMT-03:00 Maton, Brett : > I've just reinstalled ovirt 4.2 (ovirt-release42.rpm) on a clean CentOS > 7.4 host, all went fine however > > Now that the hosted engine is up, I've added a data storage domain but it > looks like the next step that detects / promotes the new data domain to > master isn't being triggered. > > Hosted engine and data domain are on NFS storage, /var/log/messages is > filling with these messages > > journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent. > hosted_engine.HostedEngine.config ERROR Unable to identify the OVF_STORE > volume, falling back to initial vm.conf. Please ensure you already added > your first data domain for regular VMs > > > I ran into the same issue when I imported an existing domain. > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincent at epicenergy.ca Sat Mar 17 17:44:58 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Sat, 17 Mar 2018 10:44:58 -0700 Subject: [ovirt-users] Postgresql read only user difficulties In-Reply-To: References: Message-ID: hmmm. not a great result... rh-postgresql95-postgresql.service:...1 Mar 17 10:36:32 ovirt-engine systemd[1]: Failed to start PostgreSQL database.... Mar 17 10:36:32 ovirt-engine systemd[1]: Unit rh-postgresql95-postgresql.ser.... Mar 17 10:36:32 ovirt-engine systemd[1]: rh-postgresql95-postgresql.service .... and can no longer login to ovirt-engine gui: server_error: Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections. tried to restart ovirt-engine and it won't come up - internal server error. 
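For reference, my current understanding (unverified, so correct me if I'm wrong) is that the ADDRESS column must be a single CIDR token with no embedded whitespace, so the grafana line should presumably read:

  host  ovirt_engine_history  grafana  172.16.30.10/32  md5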
*Vincent Royer* *778-825-1057* *SUSTAINABLE MOBILE ENERGY SOLUTIONS* On Sat, Mar 17, 2018 at 4:34 AM, Maton, Brett wrote: > You could always try reloading the configuration, pretty sure pg_hba gets > reloaded these days: > > su - postgres > scl enable rh-postgresql95 bash > pg_ctl reload -D /var/opt/rh/rh-postgresql95/lib/pgsql/data > > or as root > > systemctl restart rh-postgresql95-postgresql.service > > > On 17 March 2018 at 11:20, Vincent Royer wrote: > >> ok thanks, I did see it there but assumed that was a temp file. I >> updated it according to the instructions, but I still get the same error. >> >> # TYPE DATABASE USER ADDRESS METHOD >> >> # "local" is for Unix domain socket connections only >> local all all peer >> host ovirt_engine_history ovirt_engine_history 0.0.0.0/0 >> md5 >> host ovirt_engine_history ovirt_engine_history ::0/0 >> md5 >> host ovirt_engine_history grafana 172.16.30.10 /0 md5 >> host ovirt_engine_history grafana ::0/0 md5 >> host engine engine 0.0.0.0/0 md5 >> host engine engine ::0/0 md5 >> >> >> did a systemctl restart postgresql.service and I get "Unit not found". >> So I did systemctl restart ovirt-engine.service... >> >> and the error I get when accessing from 172.16.30.10 is: >> >> pq: no pg_hba.conf entry for host "172.16.30.10", user "grafana", >> database "ovirt_engine_history", SSL off >> >> >> >> >> *Vincent Royer* >> *778-825-1057 <(778)%20825-1057>* >> >> >> >> *SUSTAINABLE MOBILE ENERGY SOLUTIONS* >> >> >> >> >> On Sat, Mar 17, 2018 at 3:42 AM, Maton, Brett >> wrote: >> >>> Hi Vincent, >>> >>> oVirt isn't using the stock PostgreSQL but an SCL version >>> You should find pg_hba.conf here /var/opt/rh/rh-postgresql95/li >>> b/pgsql/data/pg_hba.conf >>> >>> Hope this helps. >>> >>> On 17 March 2018 at 09:50, Vincent Royer wrote: >>> >>>> Hi, >>>> >>>> I followed these instructions on Ovirt self hosted engine 4.2.1: >>>> >>>> https://www.ovirt.org/documentation/data-warehouse/Allowing_ >>>> Read_Only_Access_to_the_History_Database/ >>>> >>>> when connecting to the db from an external host I receive this error: >>>> >>>> pq: no pg_hba.conf entry for host "", user "", database >>>> "ovirt_engine_history", SSL off >>>> >>>> I looked in the normal place for pg_hba.conf but the file does not >>>> exist, /data does not exist in /var/lib/pgsql >>>> >>>> Do i need to run engine-setup again to configure this? >>>> >>>> Thank you! >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincent at epicenergy.ca Sat Mar 17 18:11:52 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Sat, 17 Mar 2018 11:11:52 -0700 Subject: [ovirt-users] Postgresql read only user difficulties In-Reply-To: References: Message-ID: I think I see the issue. Extra space after the IP address in pg_hba.conf I'll try again later. Thanks for your help! On Sat, Mar 17, 2018 at 10:44 AM, Vincent Royer wrote: > hmmm. not a great result... > > rh-postgresql95-postgresql.service:...1 > Mar 17 10:36:32 ovirt-engine systemd[1]: Failed to start PostgreSQL > database.... > Mar 17 10:36:32 ovirt-engine systemd[1]: Unit > rh-postgresql95-postgresql.ser.... > Mar 17 10:36:32 ovirt-engine systemd[1]: rh-postgresql95-postgresql.service > .... > > and can no longer login to ovirt-engine gui: > > server_error: Connection refused. 
Check that the hostname and port are > correct and that the postmaster is accepting TCP/IP connections. > > tried to restart ovirt-engine and it won't come up - internal server > error. > > > > *Vincent Royer* > *778-825-1057 <(778)%20825-1057>* > > > > *SUSTAINABLE MOBILE ENERGY SOLUTIONS* > > > > > On Sat, Mar 17, 2018 at 4:34 AM, Maton, Brett > wrote: > >> You could always try reloading the configuration, pretty sure pg_hba gets >> reloaded these days: >> >> su - postgres >> scl enable rh-postgresql95 bash >> pg_ctl reload -D /var/opt/rh/rh-postgresql95/lib/pgsql/data >> >> or as root >> >> systemctl restart rh-postgresql95-postgresql.service >> >> >> On 17 March 2018 at 11:20, Vincent Royer wrote: >> >>> ok thanks, I did see it there but assumed that was a temp file. I >>> updated it according to the instructions, but I still get the same error. >>> >>> # TYPE DATABASE USER ADDRESS METHOD >>> >>> # "local" is for Unix domain socket connections only >>> local all all peer >>> host ovirt_engine_history ovirt_engine_history 0.0.0.0/0 >>> md5 >>> host ovirt_engine_history ovirt_engine_history ::0/0 >>> md5 >>> host ovirt_engine_history grafana 172.16.30.10 /0 md5 >>> host ovirt_engine_history grafana ::0/0 md5 >>> host engine engine 0.0.0.0/0 md5 >>> host engine engine ::0/0 md5 >>> >>> >>> did a systemctl restart postgresql.service and I get "Unit not found". >>> So I did systemctl restart ovirt-engine.service... >>> >>> and the error I get when accessing from 172.16.30.10 is: >>> >>> pq: no pg_hba.conf entry for host "172.16.30.10", user "grafana", >>> database "ovirt_engine_history", SSL off >>> >>> >>> >>> >>> *Vincent Royer* >>> *778-825-1057 <(778)%20825-1057>* >>> >>> >>> >>> *SUSTAINABLE MOBILE ENERGY SOLUTIONS* >>> >>> >>> >>> >>> On Sat, Mar 17, 2018 at 3:42 AM, Maton, Brett >>> wrote: >>> >>>> Hi Vincent, >>>> >>>> oVirt isn't using the stock PostgreSQL but an SCL version >>>> You should find pg_hba.conf here /var/opt/rh/rh-postgresql95/li >>>> b/pgsql/data/pg_hba.conf >>>> >>>> Hope this helps. >>>> >>>> On 17 March 2018 at 09:50, Vincent Royer wrote: >>>> >>>>> Hi, >>>>> >>>>> I followed these instructions on Ovirt self hosted engine 4.2.1: >>>>> >>>>> https://www.ovirt.org/documentation/data-warehouse/Allowing_ >>>>> Read_Only_Access_to_the_History_Database/ >>>>> >>>>> when connecting to the db from an external host I receive this error: >>>>> >>>>> pq: no pg_hba.conf entry for host "", user "", database >>>>> "ovirt_engine_history", SSL off >>>>> >>>>> I looked in the normal place for pg_hba.conf but the file does not >>>>> exist, /data does not exist in /var/lib/pgsql >>>>> >>>>> Do i need to run engine-setup again to configure this? >>>>> >>>>> Thank you! >>>>> >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matonb at ltresources.co.uk Sat Mar 17 18:43:22 2018 From: matonb at ltresources.co.uk (Maton, Brett) Date: Sat, 17 Mar 2018 18:43:22 +0000 Subject: [ovirt-users] Postgresql read only user difficulties In-Reply-To: References: Message-ID: Yeah if postgres won't start you've probably got a typo in pg_hba.conf On 17 March 2018 at 18:11, Vincent Royer wrote: > I think I see the issue. Extra space after the IP address in pg_hba.conf > > I'll try again later. > > Thanks for your help! 
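(For the archives: pg_hba.conf rows are parsed as whitespace-separated fields, so an entry like "172.16.30.10 /0" splits into two tokens and the whole line is rejected. If I remember the behaviour correctly, a reload, unlike a restart, just logs the parse error and keeps the previous rules active, which makes

  pg_ctl reload -D /var/opt/rh/rh-postgresql95/lib/pgsql/data

the safer way to test edits.)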
> > > > > On Sat, Mar 17, 2018 at 10:44 AM, Vincent Royer > wrote: > >> hmmm. not a great result... >> >> rh-postgresql95-postgresql.service:...1 >> Mar 17 10:36:32 ovirt-engine systemd[1]: Failed to start PostgreSQL >> database.... >> Mar 17 10:36:32 ovirt-engine systemd[1]: Unit >> rh-postgresql95-postgresql.ser.... >> Mar 17 10:36:32 ovirt-engine systemd[1]: rh-postgresql95-postgresql.service >> .... >> >> and can no longer login to ovirt-engine gui: >> >> server_error: Connection refused. Check that the hostname and port are >> correct and that the postmaster is accepting TCP/IP connections. >> >> tried to restart ovirt-engine and it won't come up - internal server >> error. >> >> >> >> *Vincent Royer* >> *778-825-1057 <(778)%20825-1057>* >> >> >> >> *SUSTAINABLE MOBILE ENERGY SOLUTIONS* >> >> >> >> >> On Sat, Mar 17, 2018 at 4:34 AM, Maton, Brett >> wrote: >> >>> You could always try reloading the configuration, pretty sure pg_hba >>> gets reloaded these days: >>> >>> su - postgres >>> scl enable rh-postgresql95 bash >>> pg_ctl reload -D /var/opt/rh/rh-postgresql95/lib/pgsql/data >>> >>> or as root >>> >>> systemctl restart rh-postgresql95-postgresql.service >>> >>> >>> On 17 March 2018 at 11:20, Vincent Royer wrote: >>> >>>> ok thanks, I did see it there but assumed that was a temp file. I >>>> updated it according to the instructions, but I still get the same error. >>>> >>>> # TYPE DATABASE USER ADDRESS METHOD >>>> >>>> # "local" is for Unix domain socket connections only >>>> local all all peer >>>> host ovirt_engine_history ovirt_engine_history 0.0.0.0/0 >>>> md5 >>>> host ovirt_engine_history ovirt_engine_history ::0/0 >>>> md5 >>>> host ovirt_engine_history grafana 172.16.30.10 /0 md5 >>>> host ovirt_engine_history grafana ::0/0 md5 >>>> host engine engine 0.0.0.0/0 md5 >>>> host engine engine ::0/0 md5 >>>> >>>> >>>> did a systemctl restart postgresql.service and I get "Unit not found". >>>> So I did systemctl restart ovirt-engine.service... >>>> >>>> and the error I get when accessing from 172.16.30.10 is: >>>> >>>> pq: no pg_hba.conf entry for host "172.16.30.10", user "grafana", >>>> database "ovirt_engine_history", SSL off >>>> >>>> >>>> >>>> >>>> *Vincent Royer* >>>> *778-825-1057 <(778)%20825-1057>* >>>> >>>> >>>> >>>> *SUSTAINABLE MOBILE ENERGY SOLUTIONS* >>>> >>>> >>>> >>>> >>>> On Sat, Mar 17, 2018 at 3:42 AM, Maton, Brett >>> > wrote: >>>> >>>>> Hi Vincent, >>>>> >>>>> oVirt isn't using the stock PostgreSQL but an SCL version >>>>> You should find pg_hba.conf here /var/opt/rh/rh-postgresql95/li >>>>> b/pgsql/data/pg_hba.conf >>>>> >>>>> Hope this helps. >>>>> >>>>> On 17 March 2018 at 09:50, Vincent Royer >>>>> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> I followed these instructions on Ovirt self hosted engine 4.2.1: >>>>>> >>>>>> https://www.ovirt.org/documentation/data-warehouse/Allowing_ >>>>>> Read_Only_Access_to_the_History_Database/ >>>>>> >>>>>> when connecting to the db from an external host I receive this error: >>>>>> >>>>>> pq: no pg_hba.conf entry for host "", user "", database >>>>>> "ovirt_engine_history", SSL off >>>>>> >>>>>> I looked in the normal place for pg_hba.conf but the file does not >>>>>> exist, /data does not exist in /var/lib/pgsql >>>>>> >>>>>> Do i need to run engine-setup again to configure this? >>>>>> >>>>>> Thank you! 
>>>>>> >>>>>> _______________________________________________ >>>>>> Users mailing list >>>>>> Users at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matonb at ltresources.co.uk Sat Mar 17 18:45:07 2018 From: matonb at ltresources.co.uk (Maton, Brett) Date: Sat, 17 Mar 2018 18:45:07 +0000 Subject: [ovirt-users] oVirt 4.2 Fresh Install - Storage Domain In-Reply-To: References: Message-ID: I have added a new domain, but it's not playing ball. hosted_storage Data (master) Active vm_storage Data Active On 17 March 2018 at 17:24, Juan Pablo wrote: > you need to configure first a domain (can be temporary) then, import the > storage domain. > > > > 2018-03-17 7:13 GMT-03:00 Maton, Brett : > >> I've just reinstalled ovirt 4.2 (ovirt-release42.rpm) on a clean CentOS >> 7.4 host, all went fine however >> >> Now that the hosted engine is up, I've added a data storage domain but it >> looks like the next step that detects / promotes the new data domain to >> master isn't being triggered. >> >> Hosted engine and data domain are on NFS storage, /var/log/messages is >> filling with these messages >> >> journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.h >> osted_engine.HostedEngine.config ERROR Unable to identify the OVF_STORE >> volume, falling back to initial vm.conf. Please ensure you already added >> your first data domain for regular VMs >> >> >> I ran into the same issue when I imported an existing domain. >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pablo.localhost at gmail.com Sat Mar 17 18:49:04 2018 From: pablo.localhost at gmail.com (Juan Pablo) Date: Sat, 17 Mar 2018 15:49:04 -0300 Subject: [ovirt-users] error adding second host on 4.2.1 Message-ID: Hi, Im having a strange situation with 4.2.1.7.el7, Im always getting " Failed Adding new Host node03 to Cluster Default " after adding a second host to the cluster. I tried re installing both nodes and hosted engine, just in case there's something wrong with the initial config, so starting from zero Im getting the same result. Im currently running centos7.4 on both hosts, fully updated; hosted engine storage is nfs, reachable and mountable by both hosts, iscsi for the VM's infra. each host has 3 interfaces: eno1-- not used eno2-- ovirtmgmt bridge (+vlans) enp3s0f0 -10GBe to iscsi storage (disabled for troubleshooting purpose) enp3s0f1 -10GBe to iscsi storage (in use) all the interfaces are reachable and pingable. any hint? Im attaching the deploy log. thanks! JP -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: inst.log.tar.gz Type: application/gzip Size: 757397 bytes Desc: not available URL: From pablo.localhost at gmail.com Sat Mar 17 18:54:23 2018 From: pablo.localhost at gmail.com (Juan Pablo) Date: Sat, 17 Mar 2018 15:54:23 -0300 Subject: [ovirt-users] oVirt 4.2 Fresh Install - Storage Domain In-Reply-To: References: Message-ID: as soon as you install ovirt when you add a second storage domain, it should switch to it as the master, not hosted storage as you have. are both nfs? JP 2018-03-17 15:45 GMT-03:00 Maton, Brett : > I have added a new domain, but it's not playing ball. 
> > hosted_storage Data (master) Active > vm_storage Data Active > > On 17 March 2018 at 17:24, Juan Pablo wrote: > >> you need to configure first a domain (can be temporary) then, import the >> storage domain. >> >> >> >> 2018-03-17 7:13 GMT-03:00 Maton, Brett : >> >>> I've just reinstalled ovirt 4.2 (ovirt-release42.rpm) on a clean CentOS >>> 7.4 host, all went fine however >>> >>> Now that the hosted engine is up, I've added a data storage domain but >>> it looks like the next step that detects / promotes the new data domain to >>> master isn't being triggered. >>> >>> Hosted engine and data domain are on NFS storage, /var/log/messages is >>> filling with these messages >>> >>> journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.h >>> osted_engine.HostedEngine.config ERROR Unable to identify the OVF_STORE >>> volume, falling back to initial vm.conf. Please ensure you already added >>> your first data domain for regular VMs >>> >>> >>> I ran into the same issue when I imported an existing domain. >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdarnell at gmail.com Sat Mar 17 20:54:52 2018 From: mdarnell at gmail.com (Matthew Darnell) Date: Sat, 17 Mar 2018 10:54:52 -1000 Subject: [ovirt-users] Looking for Commercial assistance setting up small ovirt cluster Message-ID: Aloha, We are going to be moving from Xen to ovirt and would like to contract someone to help setup the new cluster and migrate from Xen. We are looking at 3 ovirt hosts and Ceph for the redundant storage but would be willing to listen to other proposals. If you are interested in talking more, email mdarnell at comtel dot cloud or reply to this thread. Mahalo. Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From farkey_2000 at yahoo.com Sun Mar 18 03:29:10 2018 From: farkey_2000 at yahoo.com (Andy) Date: Sun, 18 Mar 2018 03:29:10 +0000 (UTC) Subject: [ovirt-users] Trunk'd Network In-Reply-To: References: <1900160491.1855275.1521214802539.ref@mail.yahoo.com> <1900160491.1855275.1521214802539@mail.yahoo.com> Message-ID: <765157741.2554312.1521343750275@mail.yahoo.com> Thanks for the info and I have seen this article.? I have done some additional testing and can now get VLAN traffic to pass to the VM provided there isnt an already created OVIRT network with the assigned VLAN.? I am testing a Cisco vWLC, which allows for two ports on the VM (service port and data port).? I need to send three VLAN's through the data port, which one of them is the same VLAN as ovirtmgmt.? Shouldnt the untagged network pass all VLAN's regardless of already asisgned or is there a network filter/setting on the OVIRT side that needs to be adjusted? Thanks? On Saturday, March 17, 2018, 5:01:25 AM EDT, Gianluca Cecchi wrote: Il Ven 16 Mar 2018, 16:40 Andy ha scritto: Community, I am trying to trunk two VLAN's to a VM on OVIRT 4.2 and all the research/docs I have seen is to create a standard VM network and to NOT tag anything.? That is "Supposed" to pass all traffic where the attached VM will need to tag said traffic.? Everything I have tried has been unsuccessful and I am not seeing any traffic pass to the VM.? Has anyone successfully accomplished sending mulitple VLAN's downstream (vswitch) to a VM?? On the VMWARE side the 4095 VLAN would accomplish this and I can do so in my VMWARE test lab. ? 
thanks See here answer for same question lists.ovirt.org/pipermail/users/2017-November/085253.html Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From didi at redhat.com Sun Mar 18 06:33:14 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Sun, 18 Mar 2018 08:33:14 +0200 Subject: [ovirt-users] Warning: CentOS Upgrade with Host Engine - 503 Service Temporarily Unavailable In-Reply-To: <92209F67-B328-41A0-BA8E-09AF40905A32@starlett.lv> References: <92209F67-B328-41A0-BA8E-09AF40905A32@starlett.lv> Message-ID: On Thu, Mar 15, 2018 at 5:39 PM, Andrei Verovski wrote: > Hi ! > > I have upgraded CentOS 7.4 with oVirt 4.2.1 host engine (with yum upgrade), > and its resulted in broken system - "503 Service Temporarily Unavailable" > when connecting to the host engine via web. > Service ovirt-engine failed to starts (logs attached at the bottom of this > email), other ovirt services seem to run fine. > > yum update "ovirt-*-setup*? (upgrade 4.2 -> 4.2.1) > engine-setup > yum upgrade (OS upgrade) > > Is this issue somehow related to JDK as described here? > https://bugzilla.redhat.com/show_bug.cgi?id=1217023 Not very likely. This is a very old bug. > > New packages: > java-1.8.0-openjdk x86_64 1:1.8.0.161-0.b14.el7_4 > java-1.8.0-openjdk-devel x86_64 1:1.8.0.161-0.b14.el7_4 > java-1.8.0-openjdk-headless x86_64 1:1.8.0.161-0.b14.el7_4 > > installed packages (seem to be also 1.8x): > [root at node00 ~]# rpm -qa | grep jdk > java-1.8.0-openjdk-1.8.0.151-5.b12.el7_4.x86_64 > java-1.8.0-openjdk-devel-1.8.0.151-5.b12.el7_4.x86_64 > java-1.8.0-openjdk-headless-1.8.0.151-5.b12.el7_4.x86_64 > copy-jdk-configs-2.2-5.el7_4.noarch Indeed. Some versions of the engine could work with both openjdk 1.7 and 1.8, but recent versions require 1.8. > > Since my host engine is actually a KVM appliance under another SuSE server, > I simply discarded it and reverted back old qcow2 image. OK. Any chance you kept a copy, or at least more logs, for further debugging? > > So this is a warning to anyone - don?t upgrade CentOS, or at least keep a > copy of disk image before ANY upgrade ! Having regular backups, both before significant changes like upgrades and also routine, on-going, is always a good idea :-) If you have all of server.log, we can try to understand the cause for the failure. If not, and you try again to upgrade and if fails again, please share all relevant logs before reverting to the backup. Thanks! Best regards, -- Didi From didi at redhat.com Sun Mar 18 07:28:12 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Sun, 18 Mar 2018 09:28:12 +0200 Subject: [ovirt-users] Hosted engine : rebuild without backups ? In-Reply-To: <20180316124816.91ABAE446E@smtp01.mail.de> References: <20180316124816.91ABAE446E@smtp01.mail.de> Message-ID: On Fri, Mar 16, 2018 at 2:48 PM, wrote: > Hi, > > In case of a total failure of the hosted engine VM, it is recommended to > recreate a new one and restore a backup. I hope it works, I will probably > have to do this very soon. > > But is there some kind of "plug and play" features, able to rebuild > configuration by browsing storage domains, if the restore process doesn't > work ? It's called "Import Storage Domain" in oVirt. > > Something like identifying VMs and their snapshots in the subdirectories, > and the guess what is linked to what, ... ? > > I have a few machines but if I have to rebuild all the engine setup and > content, I would like to be able to identify resources easily. 
> > A few times ago, I was doing some experiments with XenServer and > destroyed/recreated some setup items : I ended with a lot of oprhan > resources, and it was a mess to reattach snapshots to their respective VMs. > So if oVirt is more helpful in that way ... If you try this: 1. Try first on a test setup, as always 2. Make sure to _not_ import the hosted-storage domain, the one used to host the hosted-engine VM. 3. So: setup a new hosted-engine system, then import your _other_ storage domains. Ideally make sure the old hosted storage is not accessible to the new system, so that the new engine does not try to import it accidentally. 4. If you do try to import, for testing, the old hosted-storage, would be interesting if you share the results... Best regards, -- Didi From ykaul at redhat.com Sun Mar 18 08:17:31 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Sun, 18 Mar 2018 10:17:31 +0200 Subject: [ovirt-users] VM has been paused due to NO STORAGE SPACE ERROR ?!?!?!?! In-Reply-To: References: Message-ID: On Fri, Mar 16, 2018 at 1:25 PM, Enrico Becchetti < enrico.becchetti at pg.infn.it> wrote: > Dear All, > Does someone had seen that error ? When I run this command from my virtual > machine: > > # time dd if=/dev/zero of=enrico.dd bs=4k count=10000000 > I don't think it's a very interesting test case for IO performance, but in any case, it may cause the VM to try to write faster than its thin provisioned disk can be extended. A simple workaround would be to change in VDSM the threshold of when it gets extended and by how much. For example: [irs] volume_utilization_percent = 15 volume_utilization_chunk_mb = 4048 Y. > VM was paused due to kind a storage error/problem. Strange message > because tell about "no storage space error" but ovirt puts virtual machine > in > a paused state. > > Inside events from ovirt web interface I see this: > > "VM has been paused due to lack of storage space" > > but no ERROR found in /var/log/vdsm.log. > > My oVirt enviroment 4.2.1 has three hypervivosr with FC storage and before > now > I haven't see any other problem during the normal functioning of the vm , > it's seem > that this error occurs only when there is massive I/O. > > Any ideas ? > Thanks a lot. > Best Regards > Enrico > > > -- > _______________________________________________________________________ > > Enrico Becchetti Servizio di Calcolo e Reti > > Istituto Nazionale di Fisica Nucleare - Sezione di Perugia > Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) > Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it > ______________________________________________________________________ > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matonb at ltresources.co.uk Sun Mar 18 10:00:17 2018 From: matonb at ltresources.co.uk (Maton, Brett) Date: Sun, 18 Mar 2018 10:00:17 +0000 Subject: [ovirt-users] oVirt 4.2 Fresh Install - Storage Domain In-Reply-To: References: Message-ID: Yes, both are on NFS and both are mounted. For what it's worth all the storage this cluster will connect to is on the same NAS device, same permissions etc etc, and it does connect it's just not 'flipping' the master domain. This is the first time I've tried a fresh install of 4.2.1.7-1.el7.centos, I generally in-place upgrade on this cluster. If there are any logs they might give a clue as to what's happening I'm happy to shre those. 
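The ones I'd start with, I assume, are /var/log/ovirt-hosted-engine-ha/agent.log and /var/log/vdsm/vdsm.log on the host, plus /var/log/ovirt-engine/engine.log on the engine VM.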
In the mean time I'll flatten the host and deploy from scratch again. On 17 March 2018 at 18:54, Juan Pablo wrote: > as soon as you install ovirt when you add a second storage domain, it > should switch to it as the master, not hosted storage as you have. are both > nfs? > > JP > > > 2018-03-17 15:45 GMT-03:00 Maton, Brett : > >> I have added a new domain, but it's not playing ball. >> >> hosted_storage Data (master) Active >> vm_storage Data Active >> >> On 17 March 2018 at 17:24, Juan Pablo wrote: >> >>> you need to configure first a domain (can be temporary) then, import the >>> storage domain. >>> >>> >>> >>> 2018-03-17 7:13 GMT-03:00 Maton, Brett : >>> >>>> I've just reinstalled ovirt 4.2 (ovirt-release42.rpm) on a clean CentOS >>>> 7.4 host, all went fine however >>>> >>>> Now that the hosted engine is up, I've added a data storage domain but >>>> it looks like the next step that detects / promotes the new data domain to >>>> master isn't being triggered. >>>> >>>> Hosted engine and data domain are on NFS storage, /var/log/messages is >>>> filling with these messages >>>> >>>> journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.h >>>> osted_engine.HostedEngine.config ERROR Unable to identify the >>>> OVF_STORE volume, falling back to initial vm.conf. Please ensure you >>>> already added your first data domain for regular VMs >>>> >>>> >>>> I ran into the same issue when I imported an existing domain. >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nsoffer at redhat.com Sun Mar 18 10:55:52 2018 From: nsoffer at redhat.com (Nir Soffer) Date: Sun, 18 Mar 2018 10:55:52 +0000 Subject: [ovirt-users] oVirt 4.2 Fresh Install - Storage Domain In-Reply-To: References: Message-ID: On Sun, Mar 18, 2018 at 12:01 PM Maton, Brett wrote: > Yes, both are on NFS and both are mounted. > > For what it's worth all the storage this cluster will connect to is on the > same NAS device, same permissions etc etc, and it does connect it's just > not 'flipping' the master domain. > > This is the first time I've tried a fresh install of 4.2.1.7-1.el7.centos, > I generally in-place upgrade on this cluster. > > If there are any logs they might give a clue as to what's happening I'm > happy to shre those. > > In the mean time I'll flatten the host and deploy from scratch again. > No need to flatten the host, I think the behavior is expected in 4.2. The installation process was streamlines and there is no need now to create another storage domain to force the system to import the hosted engine domain. Since the hosted engine storage domain cannot be deactivated, using it for master is good. it meas you can deactivate any other storage domain if needed. Maybe the documentation needs update? On 17 March 2018 at 18:54, Juan Pablo wrote: > >> as soon as you install ovirt when you add a second storage domain, it >> should switch to it as the master, not hosted storage as you have. are both >> nfs? >> >> JP >> >> >> 2018-03-17 15:45 GMT-03:00 Maton, Brett : >> >>> I have added a new domain, but it's not playing ball. >>> >>> hosted_storage Data (master) Active >>> vm_storage Data Active >>> >>> On 17 March 2018 at 17:24, Juan Pablo wrote: >>> >>>> you need to configure first a domain (can be temporary) then, import >>>> the storage domain. 
>>>> >>>> >>>> >>>> 2018-03-17 7:13 GMT-03:00 Maton, Brett : >>>> >>>>> I've just reinstalled ovirt 4.2 (ovirt-release42.rpm) on a clean >>>>> CentOS 7.4 host, all went fine however >>>>> >>>>> Now that the hosted engine is up, I've added a data storage domain but >>>>> it looks like the next step that detects / promotes the new data domain to >>>>> master isn't being triggered. >>>>> >>>>> Hosted engine and data domain are on NFS storage, /var/log/messages is >>>>> filling with these messages >>>>> >>>>> journal: ovirt-ha-agent >>>>> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR Unable >>>>> to identify the OVF_STORE volume, falling back to initial vm.conf. Please >>>>> ensure you already added your first data domain for regular VMs >>>>> >>>>> >>>>> I ran into the same issue when I imported an existing domain. >>>>> >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> >>>>> >>>> >>> >> > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matonb at ltresources.co.uk Sun Mar 18 11:12:06 2018 From: matonb at ltresources.co.uk (Maton, Brett) Date: Sun, 18 Mar 2018 11:12:06 +0000 Subject: [ovirt-users] oVirt 4.2 Fresh Install - Storage Domain In-Reply-To: References: Message-ID: Ok, that does make sense. I'm rebuilding again at the moment anyway, I'll keep this thread updated as deployment progresses. On 18 March 2018 at 10:55, Nir Soffer wrote: > On Sun, Mar 18, 2018 at 12:01 PM Maton, Brett > wrote: > >> Yes, both are on NFS and both are mounted. >> >> For what it's worth all the storage this cluster will connect to is on >> the same NAS device, same permissions etc etc, and it does connect it's >> just not 'flipping' the master domain. >> >> This is the first time I've tried a fresh install of >> 4.2.1.7-1.el7.centos, I generally in-place upgrade on this cluster. >> >> If there are any logs they might give a clue as to what's happening I'm >> happy to shre those. >> >> In the mean time I'll flatten the host and deploy from scratch again. >> > > No need to flatten the host, I think the behavior is expected in 4.2. The > installation process > was streamlines and there is no need now to create another storage domain > to force the > system to import the hosted engine domain. > > Since the hosted engine storage domain cannot be deactivated, using it for > master is good. > it meas you can deactivate any other storage domain if needed. > > Maybe the documentation needs update? > > On 17 March 2018 at 18:54, Juan Pablo wrote: >> >>> as soon as you install ovirt when you add a second storage domain, it >>> should switch to it as the master, not hosted storage as you have. are both >>> nfs? >>> >>> JP >>> >>> >>> 2018-03-17 15:45 GMT-03:00 Maton, Brett : >>> >>>> I have added a new domain, but it's not playing ball. >>>> >>>> hosted_storage Data (master) Active >>>> vm_storage Data Active >>>> >>>> On 17 March 2018 at 17:24, Juan Pablo >>>> wrote: >>>> >>>>> you need to configure first a domain (can be temporary) then, import >>>>> the storage domain. 
>>>>> >>>>> >>>>> >>>>> 2018-03-17 7:13 GMT-03:00 Maton, Brett : >>>>> >>>>>> I've just reinstalled ovirt 4.2 (ovirt-release42.rpm) on a clean >>>>>> CentOS 7.4 host, all went fine however >>>>>> >>>>>> Now that the hosted engine is up, I've added a data storage domain >>>>>> but it looks like the next step that detects / promotes the new data domain >>>>>> to master isn't being triggered. >>>>>> >>>>>> Hosted engine and data domain are on NFS storage, /var/log/messages >>>>>> is filling with these messages >>>>>> >>>>>> journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent. >>>>>> hosted_engine.HostedEngine.config ERROR Unable to identify the >>>>>> OVF_STORE volume, falling back to initial vm.conf. Please ensure you >>>>>> already added your first data domain for regular VMs >>>>>> >>>>>> >>>>>> I ran into the same issue when I imported an existing domain. >>>>>> >>>>>> _______________________________________________ >>>>>> Users mailing list >>>>>> Users at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>> >>>>>> >>>>> >>>> >>> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matonb at ltresources.co.uk Sun Mar 18 11:46:09 2018 From: matonb at ltresources.co.uk (Maton, Brett) Date: Sun, 18 Mar 2018 11:46:09 +0000 Subject: [ovirt-users] oVirt 4.2 Fresh Install - Storage Domain In-Reply-To: References: Message-ID: Right fresh install of CentOS 7.4 all updated Installed ovirt using the repositories from provided by ovirt-release42.rpm The install seems to go fine, once the hosted engine is up /var/log/message is getting filled with the following messages: journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR Unable to identify the OVF_STORE volume, falling back to initial vm.conf. Please ensure you already added your first data domain for regular VMs I've added a new storage domain as used to be required, but as Nir suggested earlier the expected behaviour in 4.2 is that the hosted_storage domain remains master and /var/log/messages continues to fill with the above messages. One other niggle I've found in the current installer is the space required to generate hosted_engine, the instructions on this page Deploying Self-Hosted Engine suggest that the installer will check that there is enough available space or prompt for an alternative location if there isn't enough at /var/tmp. Actually it does neither and fails the deployment only saying to check the log files for a possible cause. On 18 March 2018 at 11:12, Maton, Brett wrote: > Ok, that does make sense. > > I'm rebuilding again at the moment anyway, I'll keep this thread updated > as deployment progresses. > > On 18 March 2018 at 10:55, Nir Soffer wrote: > >> On Sun, Mar 18, 2018 at 12:01 PM Maton, Brett >> wrote: >> >>> Yes, both are on NFS and both are mounted. >>> >>> For what it's worth all the storage this cluster will connect to is on >>> the same NAS device, same permissions etc etc, and it does connect it's >>> just not 'flipping' the master domain. >>> >>> This is the first time I've tried a fresh install of >>> 4.2.1.7-1.el7.centos, I generally in-place upgrade on this cluster. >>> >>> If there are any logs they might give a clue as to what's happening I'm >>> happy to shre those. >>> >>> In the mean time I'll flatten the host and deploy from scratch again. 
>>> >> >> No need to flatten the host, I think the behavior is expected in 4.2. The >> installation process >> was streamlines and there is no need now to create another storage domain >> to force the >> system to import the hosted engine domain. >> >> Since the hosted engine storage domain cannot be deactivated, using it >> for master is good. >> it meas you can deactivate any other storage domain if needed. >> >> Maybe the documentation needs update? >> >> On 17 March 2018 at 18:54, Juan Pablo wrote: >>> >>>> as soon as you install ovirt when you add a second storage domain, it >>>> should switch to it as the master, not hosted storage as you have. are both >>>> nfs? >>>> >>>> JP >>>> >>>> >>>> 2018-03-17 15:45 GMT-03:00 Maton, Brett : >>>> >>>>> I have added a new domain, but it's not playing ball. >>>>> >>>>> hosted_storage Data (master) Active >>>>> vm_storage Data Active >>>>> >>>>> On 17 March 2018 at 17:24, Juan Pablo >>>>> wrote: >>>>> >>>>>> you need to configure first a domain (can be temporary) then, import >>>>>> the storage domain. >>>>>> >>>>>> >>>>>> >>>>>> 2018-03-17 7:13 GMT-03:00 Maton, Brett : >>>>>> >>>>>>> I've just reinstalled ovirt 4.2 (ovirt-release42.rpm) on a clean >>>>>>> CentOS 7.4 host, all went fine however >>>>>>> >>>>>>> Now that the hosted engine is up, I've added a data storage domain >>>>>>> but it looks like the next step that detects / promotes the new data domain >>>>>>> to master isn't being triggered. >>>>>>> >>>>>>> Hosted engine and data domain are on NFS storage, /var/log/messages >>>>>>> is filling with these messages >>>>>>> >>>>>>> journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.h >>>>>>> osted_engine.HostedEngine.config ERROR Unable to identify the >>>>>>> OVF_STORE volume, falling back to initial vm.conf. Please ensure you >>>>>>> already added your first data domain for regular VMs >>>>>>> >>>>>>> >>>>>>> I ran into the same issue when I imported an existing domain. >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Users mailing list >>>>>>> Users at ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maozza at gmail.com Sun Mar 18 13:43:19 2018 From: maozza at gmail.com (maoz zadok) Date: Sun, 18 Mar 2018 15:43:19 +0200 Subject: [ovirt-users] Failed to run VM Message-ID: Hello All, I'm receiving this message every time that I try to start VM, any idea? 
2018-03-18 09:39:51,479-04 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-11) [] Failed during monitoring vm: 2445a47e-b102-11e6-ad11-1866da511add , error is: {}: java.lang.NullPointerException 2018-03-18 09:39:51,480-04 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-11) [] Exception:: java.lang.NullPointerException 2018-03-18 09:39:57,802-04 INFO [org.ovirt.engine.core.bll.RunVmCommand] (default task-79) [b3b98e2c-6176-413d-9d75-315626918150] Lock Acquired to object 'EngineLock:{exclusiveLocks='[fccee2da-92a3-4187-8aad-d6c1f9d5a3fb=VM]', sharedLocks=''}' 2018-03-18 09:39:57,822-04 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-79) [b3b98e2c-6176-413d-9d75-315626918150] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='fccee2da-92a3-4187-8aad-d6c1f9d5a3fb'}), log id: 2217c887 2018-03-18 09:39:57,822-04 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-79) [b3b98e2c-6176-413d-9d75-315626918150] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 2217c887 2018-03-18 09:39:57,927-04 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] Running command: RunVmCommand internal: false. Entities affected : ID: fccee2da-92a3-4187-8aad-d6c1f9d5a3fb Type: VMAction group RUN_VM with role type USER 2018-03-18 09:39:57,944-04 INFO [org.ovirt.engine.core.bll.scheduling.policyunits.PreferredHostsWeightPolicyUnit] (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] Penalizing host 'kvm5' because it is not preferred. 2018-03-18 09:39:57,944-04 INFO [org.ovirt.engine.core.bll.scheduling.policyunits.PreferredHostsWeightPolicyUnit] (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] Penalizing host 'kvm6' because it is not preferred. 2018-03-18 09:39:57,944-04 INFO [org.ovirt.engine.core.bll.scheduling.policyunits.PreferredHostsWeightPolicyUnit] (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] Penalizing host 'kvm7' because it is not preferred. 
2018-03-18 09:39:57,945-04 WARN [org.ovirt.engine.core.bll.scheduling.pending.PendingResourceManager] (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] Clearing stale pending resource org.ovirt.engine.core.bll.scheduling.pending.PendingCpuCores at 1d15d667 (host: e044db3f-c49b-467f-9f04-21b44ffe78c4, vm: fccee2da-92a3-4187-8aad-d6c1f9d5a3fb) 2018-03-18 09:39:57,945-04 WARN [org.ovirt.engine.core.bll.scheduling.pending.PendingResourceManager] (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] Clearing stale pending resource org.ovirt.engine.core.bll.scheduling.pending.PendingMemory at 1d15d667 (host: e044db3f-c49b-467f-9f04-21b44ffe78c4, vm: fccee2da-92a3-4187-8aad-d6c1f9d5a3fb) 2018-03-18 09:39:57,946-04 WARN [org.ovirt.engine.core.bll.scheduling.pending.PendingResourceManager] (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] Clearing stale pending resource org.ovirt.engine.core.bll.scheduling.pending.PendingOvercommitMemory at 1d15d667 (host: e044db3f-c49b-467f-9f04-21b44ffe78c4, vm: fccee2da-92a3-4187-8aad-d6c1f9d5a3fb) 2018-03-18 09:39:57,946-04 WARN [org.ovirt.engine.core.bll.scheduling.pending.PendingResourceManager] (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] Clearing stale pending resource org.ovirt.engine.core.bll.scheduling.pending.PendingVM at 1d15d667 (host: e044db3f-c49b-467f-9f04-21b44ffe78c4, vm: fccee2da-92a3-4187-8aad-d6c1f9d5a3fb) 2018-03-18 09:39:57,946-04 ERROR [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] Command 'org.ovirt.engine.core.bll.RunVmCommand' failed: null 2018-03-18 09:39:57,946-04 ERROR [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] Exception: java.lang.NullPointerException at org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring.refreshCommitedMemory(HostMonitoring.java:687) [vdsbroker.jar:] at org.ovirt.engine.core.vdsbroker.VdsManager.updatePendingData(VdsManager.java:476) [vdsbroker.jar:] at org.ovirt.engine.core.bll.scheduling.pending.PendingResourceManager.notifyHostManagers(PendingResourceManager.java:227) [bll.jar:] at org.ovirt.engine.core.bll.scheduling.SchedulingManager.schedule(SchedulingManager.java:361) [bll.jar:] at org.ovirt.engine.core.bll.RunVmCommand.getVdsToRunOn(RunVmCommand.java:878) [bll.jar:] at org.ovirt.engine.core.bll.RunVmCommand.runVm(RunVmCommand.java:266) [bll.jar:] at org.ovirt.engine.core.bll.RunVmCommand.perform(RunVmCommand.java:440) [bll.jar:] at org.ovirt.engine.core.bll.RunVmCommand.executeVmCommand(RunVmCommand.java:365) [bll.jar:] at org.ovirt.engine.core.bll.VmCommand.executeCommand(VmCommand.java:147) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1205) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1345) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1987) [bll.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103) [utils.jar:] at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1405) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:412) 
[bll.jar:]
at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.executeValidatedCommand(PrevalidatingMultipleActionsRunner.java:204) [bll.jar:]
at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.runCommands(PrevalidatingMultipleActionsRunner.java:176) [bll.jar:]
at org.ovirt.engine.core.bll.SortedMultipleActionsRunnerBase.runCommands(SortedMultipleActionsRunnerBase.java:20) [bll.jar:]
at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner.lambda$invokeCommands$3(PrevalidatingMultipleActionsRunner.java:182) [bll.jar:]
at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:96) [utils.jar:]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_161]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_161]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161]
at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent-1.0.jar:]
at org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ElytronManagedThread.run(ElytronManagedThreadFactory.java:78)

2018-03-18 09:39:57,969-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM centos7-template (User: admin at internal-authz).
2018-03-18 09:39:57,976-04 INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] Lock freed to object 'EngineLock:{exclusiveLocks='[fccee2da-92a3-4187-8aad-d6c1f9d5a3fb=VM]', sharedLocks=''}'

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ahadas at redhat.com Sun Mar 18 14:11:53 2018
From: ahadas at redhat.com (Arik Hadas)
Date: Sun, 18 Mar 2018 16:11:53 +0200
Subject: [ovirt-users] Failed to run VM
In-Reply-To:
References:
Message-ID:

On Sun, Mar 18, 2018 at 3:43 PM, maoz zadok wrote:

> Hello All,
> I'm receiving this message every time that I try to start VM,
> any idea?

It seems you encountered [1], which is solved in 4.2.2.
I assume you're using 4.2.1, right? Just by looking at the code, it seems that restarting the engine could be a possible workaround for this.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1532884

> 2018-03-18 09:39:51,479-04 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-11) [] Failed during monitoring vm: 2445a47e-b102-11e6-ad11-1866da511add , error is: {}: java.lang.NullPointerException
>
> 2018-03-18 09:39:51,480-04 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-11) [] Exception:: java.lang.NullPointerException
>
> 2018-03-18 09:39:57,802-04 INFO [org.ovirt.engine.core.bll.RunVmCommand] (default task-79) [b3b98e2c-6176-413d-9d75-315626918150] Lock Acquired to object 'EngineLock:{exclusiveLocks='[fccee2da-92a3-4187-8aad-d6c1f9d5a3fb=VM]', sharedLocks=''}'
> 2018-03-18 09:39:57,822-04 INFO [org.ovirt.engine.core.vdsbroker.
> IsVmDuringInitiatingVDSCommand] (default task-79) > [b3b98e2c-6176-413d-9d75-315626918150] START, > IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommand > Parameters:{vmId='fccee2da-92a3-4187-8aad-d6c1f9d5a3fb'}), log id: > 2217c887 > 2018-03-18 09:39:57,822-04 INFO [org.ovirt.engine.core.vdsbroker. > IsVmDuringInitiatingVDSCommand] (default task-79) > [b3b98e2c-6176-413d-9d75-315626918150] FINISH, > IsVmDuringInitiatingVDSCommand, return: false, log id: 2217c887 > 2018-03-18 09:39:57,927-04 INFO [org.ovirt.engine.core.bll.RunVmCommand] > (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] > Running command: RunVmCommand internal: false. Entities affected : ID: > fccee2da-92a3-4187-8aad-d6c1f9d5a3fb Type: VMAction group RUN_VM with > role type USER > 2018-03-18 09:39:57,944-04 INFO [org.ovirt.engine.core.bll. > scheduling.policyunits.PreferredHostsWeightPolicyUnit] > (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] > Penalizing host 'kvm5' because it is not preferred. > 2018-03-18 09:39:57,944-04 INFO [org.ovirt.engine.core.bll. > scheduling.policyunits.PreferredHostsWeightPolicyUnit] > (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] > Penalizing host 'kvm6' because it is not preferred. > 2018-03-18 09:39:57,944-04 INFO [org.ovirt.engine.core.bll. > scheduling.policyunits.PreferredHostsWeightPolicyUnit] > (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] > Penalizing host 'kvm7' because it is not preferred. > 2018-03-18 09:39:57,945-04 WARN [org.ovirt.engine.core.bll. > scheduling.pending.PendingResourceManager] (EE-ManagedThreadFactory-engine-Thread-929544) > [b3b98e2c-6176-413d-9d75-315626918150] Clearing stale pending resource > org.ovirt.engine.core.bll.scheduling.pending.PendingCpuCores at 1d15d667 > (host: e044db3f-c49b-467f-9f04-21b44ffe78c4, vm: fccee2da-92a3-4187-8aad- > d6c1f9d5a3fb) > 2018-03-18 09:39:57,945-04 WARN [org.ovirt.engine.core.bll. > scheduling.pending.PendingResourceManager] (EE-ManagedThreadFactory-engine-Thread-929544) > [b3b98e2c-6176-413d-9d75-315626918150] Clearing stale pending resource > org.ovirt.engine.core.bll.scheduling.pending.PendingMemory at 1d15d667 > (host: e044db3f-c49b-467f-9f04-21b44ffe78c4, vm: fccee2da-92a3-4187-8aad- > d6c1f9d5a3fb) > 2018-03-18 09:39:57,946-04 WARN [org.ovirt.engine.core.bll. > scheduling.pending.PendingResourceManager] (EE-ManagedThreadFactory-engine-Thread-929544) > [b3b98e2c-6176-413d-9d75-315626918150] Clearing stale pending resource > org.ovirt.engine.core.bll.scheduling.pending.PendingOvercommitMemory at 1d15d667 > (host: e044db3f-c49b-467f-9f04-21b44ffe78c4, vm: fccee2da-92a3-4187-8aad- > d6c1f9d5a3fb) > 2018-03-18 09:39:57,946-04 WARN [org.ovirt.engine.core.bll. 
> scheduling.pending.PendingResourceManager] (EE-ManagedThreadFactory-engine-Thread-929544) > [b3b98e2c-6176-413d-9d75-315626918150] Clearing stale pending resource > org.ovirt.engine.core.bll.scheduling.pending.PendingVM at 1d15d667 (host: > e044db3f-c49b-467f-9f04-21b44ffe78c4, vm: fccee2da-92a3-4187-8aad- > d6c1f9d5a3fb) > 2018-03-18 09:39:57,946-04 ERROR [org.ovirt.engine.core.bll.RunVmCommand] > (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] > Command 'org.ovirt.engine.core.bll.RunVmCommand' failed: null > 2018-03-18 09:39:57,946-04 ERROR [org.ovirt.engine.core.bll.RunVmCommand] > (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] > Exception: java.lang.NullPointerException > at org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring. > refreshCommitedMemory(HostMonitoring.java:687) [vdsbroker.jar:] > at org.ovirt.engine.core.vdsbroker.VdsManager. > updatePendingData(VdsManager.java:476) [vdsbroker.jar:] > at org.ovirt.engine.core.bll.scheduling.pending. > PendingResourceManager.notifyHostManagers(PendingResourceManager.java:227) > [bll.jar:] > at org.ovirt.engine.core.bll.scheduling.SchedulingManager. > schedule(SchedulingManager.java:361) [bll.jar:] > at org.ovirt.engine.core.bll.RunVmCommand.getVdsToRunOn(RunVmCommand.java:878) > [bll.jar:] > at org.ovirt.engine.core.bll.RunVmCommand.runVm(RunVmCommand.java:266) > [bll.jar:] > at org.ovirt.engine.core.bll.RunVmCommand.perform(RunVmCommand.java:440) > [bll.jar:] > at org.ovirt.engine.core.bll.RunVmCommand.executeVmCommand(RunVmCommand.java:365) > [bll.jar:] > at org.ovirt.engine.core.bll.VmCommand.executeCommand(VmCommand.java:147) > [bll.jar:] > at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1205) > [bll.jar:] > at org.ovirt.engine.core.bll.CommandBase. > executeActionInTransactionScope(CommandBase.java:1345) [bll.jar:] > at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1987) > [bll.jar:] > at org.ovirt.engine.core.utils.transaction.TransactionSupport. > executeInSuppressed(TransactionSupport.java:164) [utils.jar:] > at org.ovirt.engine.core.utils.transaction.TransactionSupport. 
> executeInScope(TransactionSupport.java:103) [utils.jar:] > at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1405) > [bll.jar:] > at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:412) > [bll.jar:] > at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRu > nner.executeValidatedCommand(PrevalidatingMultipleActionsRunner.java:204) > [bll.jar:] > at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRu > nner.runCommands(PrevalidatingMultipleActionsRunner.java:176) [bll.jar:] > at org.ovirt.engine.core.bll.SortedMultipleActionsRunnerBas > e.runCommands(SortedMultipleActionsRunnerBase.java:20) [bll.jar:] > at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRu > nner.lambda$invokeCommands$3(PrevalidatingMultipleActionsRunner.java:182) > [bll.jar:] > at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$ > InternalWrapperRunnable.run(ThreadPoolUtil.java:96) [utils.jar:] > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [rt.jar:1.8.0_161] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > [rt.jar:1.8.0_161] > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > [rt.jar:1.8.0_161] > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > [rt.jar:1.8.0_161] > at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161] > at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ > ManagedThread.run(ManagedThreadFactoryImpl.java:250) > [javax.enterprise.concurrent-1.0.jar:] > at org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ > ElytronManagedThread.run(ElytronManagedThreadFactory.java:78) > > 2018-03-18 09:39:57,969-04 ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-929544) > [b3b98e2c-6176-413d-9d75-315626918150] EVENT_ID: USER_FAILED_RUN_VM(54), > Failed to run VM centos7-template (User: admin at internal-authz). > 2018-03-18 09:39:57,976-04 INFO [org.ovirt.engine.core.bll.RunVmCommand] > (EE-ManagedThreadFactory-engine-Thread-929544) [b3b98e2c-6176-413d-9d75-315626918150] > Lock freed to object 'EngineLock:{exclusiveLocks='[ > fccee2da-92a3-4187-8aad-d6c1f9d5a3fb=VM]', sharedLocks=''}' > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincent at epicenergy.ca Sun Mar 18 18:49:12 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Sun, 18 Mar 2018 11:49:12 -0700 Subject: [ovirt-users] Postgresql read only user difficulties In-Reply-To: References: Message-ID: well this is frustrating. I added the read only user, but still can't connect. pq: password authentication failed for user "grafana" *Vincent Royer* *778-825-1057* *SUSTAINABLE MOBILE ENERGY SOLUTIONS* On Sat, Mar 17, 2018 at 11:43 AM, Maton, Brett wrote: > Yeah if postgres won't start you've probably got a typo in pg_hba.conf > > On 17 March 2018 at 18:11, Vincent Royer wrote: > >> I think I see the issue. Extra space after the IP address in pg_hba.conf >> >> I'll try again later. >> >> Thanks for your help! >> >> >> >> >> On Sat, Mar 17, 2018 at 10:44 AM, Vincent Royer >> wrote: >> >>> hmmm. not a great result... >>> >>> rh-postgresql95-postgresql.service:...1 >>> Mar 17 10:36:32 ovirt-engine systemd[1]: Failed to start PostgreSQL >>> database.... 
>>> Mar 17 10:36:32 ovirt-engine systemd[1]: Unit >>> rh-postgresql95-postgresql.ser.... >>> Mar 17 10:36:32 ovirt-engine systemd[1]: rh-postgresql95-postgresql.service >>> .... >>> >>> and can no longer login to ovirt-engine gui: >>> >>> server_error: Connection refused. Check that the hostname and port are >>> correct and that the postmaster is accepting TCP/IP connections. >>> >>> tried to restart ovirt-engine and it won't come up - internal server >>> error. >>> >>> >>> >>> *Vincent Royer* >>> *778-825-1057 <(778)%20825-1057>* >>> >>> >>> >>> *SUSTAINABLE MOBILE ENERGY SOLUTIONS* >>> >>> >>> >>> >>> On Sat, Mar 17, 2018 at 4:34 AM, Maton, Brett >>> wrote: >>> >>>> You could always try reloading the configuration, pretty sure pg_hba >>>> gets reloaded these days: >>>> >>>> su - postgres >>>> scl enable rh-postgresql95 bash >>>> pg_ctl reload -D /var/opt/rh/rh-postgresql95/lib/pgsql/data >>>> >>>> or as root >>>> >>>> systemctl restart rh-postgresql95-postgresql.service >>>> >>>> >>>> On 17 March 2018 at 11:20, Vincent Royer wrote: >>>> >>>>> ok thanks, I did see it there but assumed that was a temp file. I >>>>> updated it according to the instructions, but I still get the same error. >>>>> >>>>> # TYPE DATABASE USER ADDRESS METHOD >>>>> >>>>> # "local" is for Unix domain socket connections only >>>>> local all all peer >>>>> host ovirt_engine_history ovirt_engine_history 0.0.0.0/0 >>>>> md5 >>>>> host ovirt_engine_history ovirt_engine_history ::0/0 >>>>> md5 >>>>> host ovirt_engine_history grafana 172.16.30.10 /0 md5 >>>>> host ovirt_engine_history grafana ::0/0 md5 >>>>> host engine engine 0.0.0.0/0 md5 >>>>> host engine engine ::0/0 md5 >>>>> >>>>> >>>>> did a systemctl restart postgresql.service and I get "Unit not >>>>> found". So I did systemctl restart ovirt-engine.service... >>>>> >>>>> and the error I get when accessing from 172.16.30.10 is: >>>>> >>>>> pq: no pg_hba.conf entry for host "172.16.30.10", user "grafana", >>>>> database "ovirt_engine_history", SSL off >>>>> >>>>> >>>>> >>>>> >>>>> *Vincent Royer* >>>>> *778-825-1057 <(778)%20825-1057>* >>>>> >>>>> >>>>> >>>>> *SUSTAINABLE MOBILE ENERGY SOLUTIONS* >>>>> >>>>> >>>>> >>>>> >>>>> On Sat, Mar 17, 2018 at 3:42 AM, Maton, Brett < >>>>> matonb at ltresources.co.uk> wrote: >>>>> >>>>>> Hi Vincent, >>>>>> >>>>>> oVirt isn't using the stock PostgreSQL but an SCL version >>>>>> You should find pg_hba.conf here /var/opt/rh/rh-postgresql95/li >>>>>> b/pgsql/data/pg_hba.conf >>>>>> >>>>>> Hope this helps. >>>>>> >>>>>> On 17 March 2018 at 09:50, Vincent Royer >>>>>> wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I followed these instructions on Ovirt self hosted engine 4.2.1: >>>>>>> >>>>>>> https://www.ovirt.org/documentation/data-warehouse/Allowing_ >>>>>>> Read_Only_Access_to_the_History_Database/ >>>>>>> >>>>>>> when connecting to the db from an external host I receive this error: >>>>>>> >>>>>>> pq: no pg_hba.conf entry for host "", user "", database >>>>>>> "ovirt_engine_history", SSL off >>>>>>> >>>>>>> I looked in the normal place for pg_hba.conf but the file does not >>>>>>> exist, /data does not exist in /var/lib/pgsql >>>>>>> >>>>>>> Do i need to run engine-setup again to configure this? >>>>>>> >>>>>>> Thank you! >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Users mailing list >>>>>>> Users at ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
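For anyone hitting the same wall as this thread: the failing entry quoted above has a space between the address and the mask ("172.16.30.10 /0"), which PostgreSQL cannot parse as an address/mask pair, and an unparsable pg_hba.conf is fatal at startup, which is why the whole service refused to come up. A minimal sketch of the corrected entry plus a reload, assuming the Grafana client really is 172.16.30.10 and the SCL PostgreSQL 9.5 layout that oVirt 4.2 ships (paths differ on other versions):

    host ovirt_engine_history grafana 172.16.30.10/32 md5

    # reload the configuration without a full restart, as root:
    su - postgres -c 'scl enable rh-postgresql95 -- pg_ctl reload -D /var/opt/rh/rh-postgresql95/lib/pgsql/data'

The /32 restricts the rule to that single client; 0.0.0.0/0, as on the other lines, would match any IPv4 source.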
URL: From maozza at gmail.com Sun Mar 18 19:34:28 2018 From: maozza at gmail.com (maoz zadok) Date: Sun, 18 Mar 2018 21:34:28 +0200 Subject: [ovirt-users] Failed to run VM In-Reply-To: References: Message-ID: Thank you Arik you are right, my version is 4.2.0.2, how can I upgrade? On Sun, Mar 18, 2018 at 4:11 PM, Arik Hadas wrote: > > > On Sun, Mar 18, 2018 at 3:43 PM, maoz zadok wrote: > >> Hello All, >> I'm receiving this message every time that I try to start VM, >> any idea? >> > > Seems you enounter [1] that is solved in 4.2.2. > I assume you're using 4.2.1, right? By just looking at the code, it seems > that restarting the engine could be a possible workaround for this. > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1532884 > > >> >> >> >> 2018-03-18 09:39:51,479-04 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] >> (EE-ManagedThreadFactory-engineScheduled-Thread-11) [] Failed during >> monitoring vm: 2445a47e-b102-11e6-ad11-1866da511add , error is: {}: >> java.lang.NullPointerException >> >> 2018-03-18 09:39:51,480-04 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] >> (EE-ManagedThreadFactory-engineScheduled-Thread-11) [] Exception:: >> java.lang.NullPointerException >> >> 2018-03-18 09:39:57,802-04 INFO [org.ovirt.engine.core.bll.RunVmCommand] >> (default task-79) [b3b98e2c-6176-413d-9d75-315626918150] Lock Acquired >> to object 'EngineLock:{exclusiveLocks='[fccee2da-92a3-4187-8aad-d6c1f9d5a3fb=VM]', >> sharedLocks=''}' >> 2018-03-18 09:39:57,822-04 INFO [org.ovirt.engine.core.vdsbrok >> er.IsVmDuringInitiatingVDSCommand] (default task-79) >> [b3b98e2c-6176-413d-9d75-315626918150] START, >> IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommand >> Parameters:{vmId='fccee2da-92a3-4187-8aad-d6c1f9d5a3fb'}), log id: >> 2217c887 >> 2018-03-18 09:39:57,822-04 INFO [org.ovirt.engine.core.vdsbrok >> er.IsVmDuringInitiatingVDSCommand] (default task-79) >> [b3b98e2c-6176-413d-9d75-315626918150] FINISH, >> IsVmDuringInitiatingVDSCommand, return: false, log id: 2217c887 >> 2018-03-18 09:39:57,927-04 INFO [org.ovirt.engine.core.bll.RunVmCommand] >> (EE-ManagedThreadFactory-engine-Thread-929544) >> [b3b98e2c-6176-413d-9d75-315626918150] Running command: RunVmCommand >> internal: false. Entities affected : ID: fccee2da-92a3-4187-8aad-d6c1f9d5a3fb >> Type: VMAction group RUN_VM with role type USER >> 2018-03-18 09:39:57,944-04 INFO [org.ovirt.engine.core.bll.sch >> eduling.policyunits.PreferredHostsWeightPolicyUnit] >> (EE-ManagedThreadFactory-engine-Thread-929544) >> [b3b98e2c-6176-413d-9d75-315626918150] Penalizing host 'kvm5' because it >> is not preferred. >> 2018-03-18 09:39:57,944-04 INFO [org.ovirt.engine.core.bll.sch >> eduling.policyunits.PreferredHostsWeightPolicyUnit] >> (EE-ManagedThreadFactory-engine-Thread-929544) >> [b3b98e2c-6176-413d-9d75-315626918150] Penalizing host 'kvm6' because it >> is not preferred. >> 2018-03-18 09:39:57,944-04 INFO [org.ovirt.engine.core.bll.sch >> eduling.policyunits.PreferredHostsWeightPolicyUnit] >> (EE-ManagedThreadFactory-engine-Thread-929544) >> [b3b98e2c-6176-413d-9d75-315626918150] Penalizing host 'kvm7' because it >> is not preferred. 
>> 2018-03-18 09:39:57,945-04 WARN [org.ovirt.engine.core.bll.sch >> eduling.pending.PendingResourceManager] (EE-ManagedThreadFactory-engine-Thread-929544) >> [b3b98e2c-6176-413d-9d75-315626918150] Clearing stale pending resource >> org.ovirt.engine.core.bll.scheduling.pending.PendingCpuCores at 1d15d667 >> (host: e044db3f-c49b-467f-9f04-21b44ffe78c4, vm: >> fccee2da-92a3-4187-8aad-d6c1f9d5a3fb) >> 2018-03-18 09:39:57,945-04 WARN [org.ovirt.engine.core.bll.sch >> eduling.pending.PendingResourceManager] (EE-ManagedThreadFactory-engine-Thread-929544) >> [b3b98e2c-6176-413d-9d75-315626918150] Clearing stale pending resource >> org.ovirt.engine.core.bll.scheduling.pending.PendingMemory at 1d15d667 >> (host: e044db3f-c49b-467f-9f04-21b44ffe78c4, vm: >> fccee2da-92a3-4187-8aad-d6c1f9d5a3fb) >> 2018-03-18 09:39:57,946-04 WARN [org.ovirt.engine.core.bll.sch >> eduling.pending.PendingResourceManager] (EE-ManagedThreadFactory-engine-Thread-929544) >> [b3b98e2c-6176-413d-9d75-315626918150] Clearing stale pending resource >> org.ovirt.engine.core.bll.scheduling.pending.PendingOvercomm >> itMemory at 1d15d667 (host: e044db3f-c49b-467f-9f04-21b44ffe78c4, vm: >> fccee2da-92a3-4187-8aad-d6c1f9d5a3fb) >> 2018-03-18 09:39:57,946-04 WARN [org.ovirt.engine.core.bll.sch >> eduling.pending.PendingResourceManager] (EE-ManagedThreadFactory-engine-Thread-929544) >> [b3b98e2c-6176-413d-9d75-315626918150] Clearing stale pending resource >> org.ovirt.engine.core.bll.scheduling.pending.PendingVM at 1d15d667 (host: >> e044db3f-c49b-467f-9f04-21b44ffe78c4, vm: fccee2da-92a3-4187-8aad-d6c1f9 >> d5a3fb) >> 2018-03-18 09:39:57,946-04 ERROR [org.ovirt.engine.core.bll.RunVmCommand] >> (EE-ManagedThreadFactory-engine-Thread-929544) >> [b3b98e2c-6176-413d-9d75-315626918150] Command >> 'org.ovirt.engine.core.bll.RunVmCommand' failed: null >> 2018-03-18 09:39:57,946-04 ERROR [org.ovirt.engine.core.bll.RunVmCommand] >> (EE-ManagedThreadFactory-engine-Thread-929544) >> [b3b98e2c-6176-413d-9d75-315626918150] Exception: >> java.lang.NullPointerException >> at org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring.re >> freshCommitedMemory(HostMonitoring.java:687) [vdsbroker.jar:] >> at org.ovirt.engine.core.vdsbroker.VdsManager.updatePendingData(VdsManager.java:476) >> [vdsbroker.jar:] >> at org.ovirt.engine.core.bll.scheduling.pending.PendingResource >> Manager.notifyHostManagers(PendingResourceManager.java:227) [bll.jar:] >> at org.ovirt.engine.core.bll.scheduling.SchedulingManager.sched >> ule(SchedulingManager.java:361) [bll.jar:] >> at org.ovirt.engine.core.bll.RunVmCommand.getVdsToRunOn(RunVmCommand.java:878) >> [bll.jar:] >> at org.ovirt.engine.core.bll.RunVmCommand.runVm(RunVmCommand.java:266) >> [bll.jar:] >> at org.ovirt.engine.core.bll.RunVmCommand.perform(RunVmCommand.java:440) >> [bll.jar:] >> at org.ovirt.engine.core.bll.RunVmCommand.executeVmCommand(RunVmCommand.java:365) >> [bll.jar:] >> at org.ovirt.engine.core.bll.VmCommand.executeCommand(VmCommand.java:147) >> [bll.jar:] >> at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1205) >> [bll.jar:] >> at org.ovirt.engine.core.bll.CommandBase.executeActionInTransac >> tionScope(CommandBase.java:1345) [bll.jar:] >> at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1987) >> [bll.jar:] >> at org.ovirt.engine.core.utils.transaction.TransactionSupport.e >> xecuteInSuppressed(TransactionSupport.java:164) [utils.jar:] >> at org.ovirt.engine.core.utils.transaction.TransactionSupport.e >> 
xecuteInScope(TransactionSupport.java:103) [utils.jar:] >> at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1405) >> [bll.jar:] >> at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:412) >> [bll.jar:] >> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner >> .executeValidatedCommand(PrevalidatingMultipleActionsRunner.java:204) >> [bll.jar:] >> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner >> .runCommands(PrevalidatingMultipleActionsRunner.java:176) [bll.jar:] >> at org.ovirt.engine.core.bll.SortedMultipleActionsRunnerBase. >> runCommands(SortedMultipleActionsRunnerBase.java:20) [bll.jar:] >> at org.ovirt.engine.core.bll.PrevalidatingMultipleActionsRunner >> .lambda$invokeCommands$3(PrevalidatingMultipleActionsRunner.java:182) >> [bll.jar:] >> at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$Intern >> alWrapperRunnable.run(ThreadPoolUtil.java:96) [utils.jar:] >> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) >> [rt.jar:1.8.0_161] >> at java.util.concurrent.FutureTask.run(FutureTask.java:266) >> [rt.jar:1.8.0_161] >> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) >> [rt.jar:1.8.0_161] >> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) >> [rt.jar:1.8.0_161] >> at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161] >> at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl >> $ManagedThread.run(ManagedThreadFactoryImpl.java:250) >> [javax.enterprise.concurrent-1.0.jar:] >> at org.jboss.as.ee.concurrent.service.ElytronManagedThreadFacto >> ry$ElytronManagedThread.run(ElytronManagedThreadFactory.java:78) >> >> 2018-03-18 09:39:57,969-04 ERROR [org.ovirt.engine.core.dal.dbb >> roker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-929544) >> [b3b98e2c-6176-413d-9d75-315626918150] EVENT_ID: USER_FAILED_RUN_VM(54), >> Failed to run VM centos7-template (User: admin at internal-authz). >> 2018-03-18 09:39:57,976-04 INFO [org.ovirt.engine.core.bll.RunVmCommand] >> (EE-ManagedThreadFactory-engine-Thread-929544) >> [b3b98e2c-6176-413d-9d75-315626918150] Lock freed to object >> 'EngineLock:{exclusiveLocks='[fccee2da-92a3-4187-8aad-d6c1f9d5a3fb=VM]', >> sharedLocks=''}' >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreil1 at starlett.lv Sun Mar 18 20:01:30 2018 From: andreil1 at starlett.lv (Andrei Verovski) Date: Sun, 18 Mar 2018 22:01:30 +0200 Subject: [ovirt-users] Failed to run VM In-Reply-To: References: Message-ID: <484d9d8c-f7fb-6732-3f56-853aaf91dbc8@starlett.lv> On 03/18/2018 09:34 PM, maoz zadok wrote: > Thank you Arik you are right, my version is 4.2.0.2, how can I upgrade? 1) upgrade engine as described here https://www.ovirt.org/release/4.2.1/ 2) login into ovirt engine, stop all VMs, choose host -> maintenance, then upgrade node software. > > On Sun, Mar 18, 2018 at 4:11 PM, Arik Hadas > wrote: > > > > On Sun, Mar 18, 2018 at 3:43 PM, maoz zadok > wrote: > > Hello All, > I'm receiving this message every time that I try to start VM, > any idea? > > > Seems you enounter [1] that is solved in 4.2.2. > I assume you're using 4.2.1, right? By just looking at the code, > it seems that restarting the engine could be a possible workaround > for this. 
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1532884
>
> [...]
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From andreil1 at starlett.lv Sun Mar 18 20:59:52 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Sun, 18 Mar 2018 22:59:52 +0200
Subject: [ovirt-users] #2 Failed to run VM
In-Reply-To:
References:
Message-ID: <3fcfce64-78f1-ee55-aa0c-4d838893889d at starlett.lv>

On 03/18/2018 09:34 PM, maoz zadok wrote:
> Thank you Arik you are right, my version is 4.2.0.2, how can I upgrade?

Sorry, I didn't notice that this problem is solved only in the 4.2.2 pre-release. In that case:
1) upgrade the engine as described here: https://www.ovirt.org/release/4.2.2/
2) log into the oVirt engine, stop all VMs, put each host into maintenance, then upgrade the node software.

> On Sun, Mar 18, 2018 at 4:11 PM, Arik Hadas wrote:
>> On Sun, Mar 18, 2018 at 3:43 PM, maoz zadok wrote:
>>> Hello All,
>>> I'm receiving this message every time that I try to start VM,
>>> any idea?
>>
>> It seems you encountered [1], which is solved in 4.2.2.
>> I assume you're using 4.2.1, right? Just by looking at the code, it seems that restarting the engine could be a possible workaround for this.
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1532884
>>
>> [...]
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From simone.bruckner at fabasoft.com Sun Mar 18 21:15:07 2018
From: simone.bruckner at fabasoft.com (Bruckner, Simone)
Date: Sun, 18 Mar 2018 21:15:07 +0000
Subject: [ovirt-users] Cannot start VM after live storage migration - Bad volume specification
Message-ID: <2CB4E8C8E00E594EA06D4AC427E429920FE9C36B at fabamailserver.fabagl.fabasoft.com>

Hi all,

we did a live storage migration of one of three disks of a VM that failed because the VM became unresponsive while the auto-generated snapshot was being deleted:

2018-03-16 15:07:32.084+01 | 0 | Snapshot 'VMNAME_Disk2 Auto-generated for Live Storage Migration' creation for VM 'VMNAME' was initiated by xxx
2018-03-16 15:07:32.097+01 | 0 | User xxx moving disk VMNAME_Disk2 to domain VMHOST_LUN_211.
2018-03-16 15:08:56.304+01 | 0 | Snapshot 'VMNAME_Disk2 Auto-generated for Live Storage Migration' creation for VM 'VMNAME' has been completed.
2018-03-16 16:40:48.89+01 | 0 | Snapshot 'VMNAME_Disk2 Auto-generated for Live Storage Migration' deletion for VM 'VMNAME' was initiated by xxx.
2018-03-16 16:44:44.813+01 | 1 | VM VMNAME is not responding.
2018-03-18 18:40:51.258+01 | 2 | Failed to delete snapshot 'VMNAME_Disk2 Auto-generated for Live Storage Migration' for VM 'VMNAME'.
2018-03-18 18:40:54.506+01 | 1 | Possible failure while deleting VMNAME_Disk2 from the source Storage Domain VMHOST_LUN_211 during the move operation. The Storage Domain may be manually cleaned-up from possible leftovers (User:xxx).

Now we cannot start the VM anymore as long as this disk is online.
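A note on what the excerpts below mean: the start failure is VDSM refusing to prepare a volume whose metadata is still flagged ILLEGAL. The live-merge flow marks the snapshot volume ILLEGAL once its data has been committed to its parent and before it deletes it, so an interrupted merge can leave the flag behind. A hedged way to inspect the flag before attempting any repair, assuming vdsm-client as shipped with VDSM 4.20 (oVirt 4.2; older hosts only have the legacy vdsClient), with the pool, domain, image and volume UUIDs taken from the log below:

    vdsm-client Volume getInfo \
        storagepoolID=5849b030-626e-47cb-ad90-3ce782d831b3 \
        storagedomainID=ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a \
        imageID=c1a05108-90d7-421d-a9b4-d4cc65c48429 \
        volumeID=4c6475b1-352a-4114-b647-505cccbe6663

If it reports "legality": "ILLEGAL", do not flip it back by hand without first confirming (for example with qemu-img info on the volume chain) whether the merge actually completed; marking a half-merged volume LEGAL can corrupt the disk.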
Error message is "VM VMNAME is down with error. Exit message: Bad volume specification {'index': 2, 'domainID': 'ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a', 'reqsize': '0', 'name': 'vdc', 'truesize': '2147483648', 'format': 'cow', 'discard': False, 'volumeID': '4659b5e0-93c1-478d-97d0-ec1cf4052028', 'apparentsize': '2147483648', 'imageID': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'specParams': {}, 'iface': 'virtio', 'cache': 'none', 'propagateErrors': 'off', 'poolID': '5849b030-626e-47cb-ad90-3ce782d831b3', 'device': 'disk', 'path': '/rhev/data-center/mnt/blockSD/ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a/images/c1a05108-90d7-421d-a9b4-d4cc65c48429/4659b5e0-93c1-478d-97d0-ec1cf4052028', 'serial': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'diskType': 'block', 'type': 'block'}." vdsm.log: 2018-03-18 21:53:33,815+0100 ERROR (vm/7d05e511) [storage.TaskManager.Task] (Task='fc3bac16-64f3-4910-8bc4-6cfdd4d270da') Unexpected error (task:875) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File "", line 2, in prepareImage File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method ret = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3179, in prepareImage raise se.prepareIllegalVolumeError(volUUID) prepareIllegalVolumeError: Cannot prepare illegal volume: ('4c6475b1-352a-4114-b647-505cccbe6663',) 2018-03-18 21:53:33,816+0100 INFO (vm/7d05e511) [storage.TaskManager.Task] (Task='fc3bac16-64f3-4910-8bc4-6cfdd4d270da') aborting: Task is aborted: "Cannot prepare illegal volume: ('4c6475b1-352a-4114-b647-505cccbe6663',)" - code 227 (task:1181) 2018-03-18 21:53:33,816+0100 ERROR (vm/7d05e511) [storage.Dispatcher] FINISH prepareImage error=Cannot prepare illegal volume: ('4c6475b1-352a-4114-b647-505cccbe6663',) (dispatcher:82) 2018-03-18 21:53:33,816+0100 ERROR (vm/7d05e511) [virt.vm] (vmId='7d05e511-2e97-4002-bded-285ec4e30587') The vm start process failed (vm:927) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, in _startUnderlyingVm self._run() File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2661, in _run self._devices = self._make_devices() File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2608, in _make_devices self._preparePathsForDrives(disk_params) File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1001, in _preparePathsForDrives drive['path'] = self.cif.prepareVolumePath(drive, self.id) File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 393, in prepareVolumePath raise vm.VolumeError(drive) VolumeError: Bad volume specification {'index': 2, 'domainID': 'ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a', 'reqsize': '0', 'name': 'vdc', 'truesize': '2147483648', 'format': 'cow', 'discard': False, 'volumeID': '4659b5e0-93c1-478d-97d0-ec1cf4052028', 'apparentsize': '2147483648', 'imageID': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'specParams': {}, 'iface': 'virtio', 'cache': 'none', 'propagateErrors': 'off', 'poolID': '5849b030-626e-47cb-ad90-3ce782d831b3', 'device': 'disk', 'path': '/rhev/data-center/mnt/blockSD/ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a/images/c1a05108-90d7-421d-a9b4-d4cc65c48429/4659b5e0-93c1-478d-97d0-ec1cf4052028', 'serial': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'diskType': 'block', 'type': 'block'} 2018-03-18 21:53:33,817+0100 INFO (vm/7d05e511) [virt.vm] (vmId='7d05e511-2e97-4002-bded-285ec4e30587') Changed state to Down: Bad volume specification {'index': 2, 
'domainID': 'ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a', 'reqsize': '0', 'name': 'vdc', 'truesize': '2147483648', 'format': 'cow', 'discard': False, 'volumeID': '4659b5e0-93c1-478d-97d0-ec1cf4052028', 'apparentsize': '2147483648', 'imageID': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'specParams': {}, 'iface': 'virtio', 'cache': 'none', 'propagateErrors': 'off', 'poolID': '5849b030-626e-47cb-ad90-3ce782d831b3', 'device': 'disk', 'path': '/rhev/data-center/mnt/blockSD/ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a/images/c1a05108-90d7-421d-a9b4-d4cc65c48429/4659b5e0-93c1-478d-97d0-ec1cf4052028', 'serial': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'diskType': 'block', 'type': 'block'} (code=1) (vm:1646)

Is there a way to recover this disk?

Thank you,
Simone

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Sven.Achtelik at eps.aero Sun Mar 18 21:45:15 2018
From: Sven.Achtelik at eps.aero (Sven Achtelik)
Date: Sun, 18 Mar 2018 21:45:15 +0000
Subject: [ovirt-users] Workflow after restoring engine from backup
Message-ID: <831f30ed018b4739a2491cbd24f2429d at eps.aero>

Hi All,

I had an issue with the storage that hosted my engine VM. The disk got corrupted and I needed to restore the engine from a backup. That worked as expected; I just didn't start the engine yet. I know that after the backup was taken, some machines were migrated around before the engine disks failed. My question is: what will happen once I start the engine service with the restored backup on it? Will it query the hosts for the running VMs, or will it assume that the VMs are still on the hosts they resided on at the point of the backup? Would I need to change the DB manually to let the engine know where VMs are up at this point? What will happen to HA VMs? I feel that it might try to start them a second time. My biggest issue is that I can't get a service window to shut down all VMs and then let them be restarted by the engine. Is there a known workflow for that?

Thank you,
Sven

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
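On the question above about changing the DB manually: before starting ovirt-engine, a read-only look at what the restored database believes can at least scope the problem. A hedged sketch, assuming the long-standing vm_static/vm_dynamic schema (column names can drift between versions) and the SCL PostgreSQL of a 4.x engine; run as root on the engine machine:

    su - postgres -c "scl enable rh-postgresql95 -- psql engine -c 'SELECT s.vm_name, d.status, d.run_on_vds FROM vm_dynamic d JOIN vm_static s ON s.vm_guid = d.vm_guid ORDER BY s.vm_name;'"

run_on_vds is the host UUID the engine last recorded for each VM, and status is a numeric enum. This only shows the stale picture; whether the engine reconciles it against the hosts on startup is exactly the open question above, so treat it as a diagnostic, not a fix.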
From nicolas.vaye at province-sud.nc Sun Mar 18 21:45:50 2018
From: nicolas.vaye at province-sud.nc (Nicolas Vaye)
Date: Sun, 18 Mar 2018 21:45:50 +0000
Subject: [ovirt-users] improvement for web ui during the create template stage.
In-Reply-To: <1994720.iLrxEI1RTZ at awels>
References: <1521000255.6088.116.camel at province-sud.nc> <5595754.jVs0GWpncT at awels> <1521062392.6088.138.camel at province-sud.nc> <1994720.iLrxEI1RTZ at awels>
Message-ID: <1521409547.1710.32.camel at province-sud.nc>

-------- Original message --------
Date: Thu, 15 Mar 2018 08:38:57 -0400
Subject: Re: [ovirt-users] improvement for web ui during the create template stage.
Cc: users at ovirt.org
To: Nicolas Vaye
From: Alexander Wels

On Wednesday, March 14, 2018 5:19:55 PM EDT Nicolas Vaye wrote:

Hi, I thought it was the problem. I did a test again and I have recorded the test (in attachment). What is the problem?

Regards,
Nicolas VAYE

Interesting, as that name is not long enough to trigger the name length validation. You have 64 characters total for the name. I just tried your scenario on the latest master branch and it worked as expected: it created the template from the snapshot without issues, with that exact same name. I don't see any recent changes to the frontend code for that dialog either. If you look in the engine.log, does it say anything? I can only assume some validation is failing and the validation message is not properly propagated to the frontend, but it should show something in the backend log regardless.

-------- Original message --------
Date: Wed, 14 Mar 2018 08:36:48 -0400
Subject: Re: [ovirt-users] improvement for web ui during the create template stage.
To: users at ovirt.org, Nicolas Vaye
From: Alexander Wels

On Wednesday, March 14, 2018 12:04:18 AM EDT Nicolas Vaye wrote:

Hi, I have 2 oVirt nodes with HE, in version 4.2.1.7-1. If I make a template from a VM's snapshot in the web UI, there is a form in which to enter several parameters.

[screenshot: the template creation dialog]

Bug submitted: https://bugzilla.redhat.com/show_bug.cgi?id=1557803

If the name of the template is missing and we click on the OK button, there is a highlighted red border on the name to indicate the problem. If I enter a long name for the template and we click on the OK button, nothing happens, and there is no highlight or error message to indicate that there is a problem with the long name. Could you improve that?

Thanks,
Regards,
Nicolas VAYE

It appears to me it already does that; this is a screenshot of me putting in a long template name, and it is highlighted red, and if I hover I see a tooltip explaining I can't have more than 64 characters.

From NasrumMinallah9 at hotmail.com Sun Mar 18 16:59:03 2018
From: NasrumMinallah9 at hotmail.com (Nasrum Minallah Manzoor)
Date: Sun, 18 Mar 2018 16:59:03 +0000
Subject: [ovirt-users] Change CD Issue...
Message-ID:

Hello everyone,

I have been waiting a long time for anyone to come up with a solution for the issue below!

At the Change CD window, after selecting the ISO and clicking OK, I get:

Error while executing action Change CD: Drive image file could not be found

Kindly look into the attached logs (engine log and vdsm log) and advise me on what to do!

Thank you,
Regards,
Nasrum Minallah

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Logs.rar
Type: application/octet-stream
Size: 474873 bytes
Desc: Logs.rar
URL:

From kirin.vanderveer at planetinnovation.com.au Sun Mar 18 23:00:35 2018
From: kirin.vanderveer at planetinnovation.com.au (Kirin van der Veer)
Date: Mon, 19 Mar 2018 10:00:35 +1100
Subject: [ovirt-users] Using VDSM to edit management interface
Message-ID:

Hi Peter,

Thanks again for trying to help out on this one. Unfortunately this sent me right back to where I started. When you grep for DNS entries, the only matching file is /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt. As per my previous email, this file is also generated (by VDSM this time) and I don't see how to update the VDSM database so that it generates the file with different entries. It's turtles all the way down :)

Kirin.

On Fri, Mar 16, 2018 at 5:12 PM, Peter Hudec wrote:

> Remove any settings about DNS from NetworkManager and /etc/resolv.conf won't be auto-generated.
>
> https://ma.ttias.be/centos-7-networkmanager-keeps-overwriting-etcresolv-conf/
>
> Peter
>
> On 16/03/2018 02:15, Kirin van der Veer wrote:
> > Ďakujem Peter, but this doesn't seem to work in my case. /etc/resolv.conf is regenerated by Network Manager after a reboot and my domain settings are lost. Your comments regarding the reliance on DNS make sense for most installations, but in my case oVirt is a secondary service that I would not expect to run unless our core infrastructure is working correctly.
> > I'm hesitant to edit /etc/hosts directly, since that can lead to confusion when the underlying IP addresses change. For now I will hardcode the IPs of my servers. It's frustrating (and surprising) that there is no easy way to do this.
> >
> > Kirin.
> >
> > On Thu, Mar 15, 2018 at 5:17 PM, Peter Hudec wrote:
> >
> > Hi Kirin,
> >
> > I suggest doing it the old way and editing /etc/resolv.conf manually.
> >
> > And one piece of advice: do not rely on DNS for infrastructure servers. Use /etc/hosts. If the DNS is not accessible, you will have problems bringing the infrastructure up. As a side effect, the hosts file lets you use short names to access servers.
> >
> > If you are Ansible-positive, you could use
> >
> > hudecof.resolv https://galaxy.ansible.com/hudecof/resolv/
> > hudecof.hosts https://galaxy.ansible.com/hudecof/hosts/
> >
> > Peter
> >
> > On 15/03/2018 06:03, Kirin van der Veer wrote:
> >> Hi oVirt people, I have set up a new cluster consisting of many oVirt Nodes with a single dedicated oVirt Engine machine. For the most part things are working; however, despite entering the DNS search domain during install on the Nodes, the management interface is not aware of my search domain and it has not been added to /etc/resolv.conf (perhaps that is unnecessary?). I eventually worked out that the DNS search domain should be included in /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt. However, as per the header/warning, that file is generated by VDSM. I assumed that I should be able to edit the search domain with vdsClient, but when I run "vdsClient -m" I don't see any options related to network config. I found the following page on DNS config:
> >> https://www.ovirt.org/develop/release-management/features/network/allowExplicitDnsConfiguration/
> >> But it does not seem to offer a way of specifying the DNS search domain (other than perhaps directly editing /etc/resolv.conf, which is generated/managed by Network Manager). nmcli reports that all of my interfaces (including ovirtmgmt) are "unmanaged". Indeed, when I attempt to run nmtui there is nothing listed to configure. This should be really simple! I just want to add my local search domain so I can use the short name for my NFS server. I'd appreciate any advice.
> >>
> >> Thanks in advance, Kirin.

> --
> *Peter Hudec*
> Infraštruktúrny architekt
> phudec at cnc.sk
>
> *CNC, a.s.*
> Borská 6, 841 04 Bratislava
> Recepcia: +421 2 35 000 100
> Mobil: +421 905 997 203
> *www.cnc.sk*

*IMPORTANT NOTE. *If you are NOT AN AUTHORISED RECIPIENT of this e-mail, please contact Planet Innovation Pty Ltd by return e-mail or by telephone on +613 9945 7510. In this case, you should not read, print, re-transmit, store or act in reliance on this e-mail or any attachments, and should destroy all copies of them. This e-mail and any attachments are confidential and may contain legally privileged information and/or copyright material of Planet Innovation Pty Ltd or third parties. You should only re-transmit, distribute or commercialise the material if you are authorised to do so. Although we use virus scanning software, we deny all liability for viruses or alike in any message or attachment. This notice should not be removed.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
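A concrete version of Peter's NetworkManager suggestion above (and the article he linked), as a sketch with placeholders: example.lan and 192.0.2.1 stand in for the real search domain and resolver. This only stops NetworkManager from regenerating /etc/resolv.conf; VDSM will keep rewriting ifcfg-ovirtmgmt, but that no longer matters for the resolver config:

    # /etc/NetworkManager/NetworkManager.conf
    [main]
    dns=none

    # apply it:
    systemctl restart NetworkManager

    # then maintain /etc/resolv.conf by hand:
    search example.lan
    nameserver 192.0.2.1

With dns=none set, NetworkManager leaves /etc/resolv.conf alone after reboots, so a search domain added by hand should survive.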
> >> > >> ** > >> > >> > >> _______________________________________________ Users mailing > >> list Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > > >> > > > > > > -- *Peter Hudec* Infra?trukt?rny architekt phudec at cnc.sk > > > > > > > > *CNC, a.s.* Borsk? 6, 841 04 Bratislava Recepcia: +421 2 35 000 100 > > > > > > Mobil:+421 905 997 203 *www.cnc.sk > > * > > > > > > > > > *IMPORTANT NOTE. *If you are NOT AN AUTHORISED RECIPIENT of this > > e-mail, please contact Planet Innovation Pty Ltd by return e-mail > > or by telephone on +613 9945 7510. In this case, you should not > > read, print, re-transmit, store or act in reliance on this e-mail > > or any attachments, and should destroy all copies of them. This > > e-mail and any attachments are confidential and may contain legally > > privileged information and/or copyright material of Planet > > Innovation Pty Ltd or third parties. You should only re-transmit, > > distribute or commercialise the material if you are authorised to > > do so. Although we use virus scanning software, we deny all > > liability for viruses or alike in any message or attachment. This > > notice should not be removed. > > > > ** > > > -- > *Peter Hudec* > Infra?trukt?rny architekt > phudec at cnc.sk > > *CNC, a.s.* > Borsk? 6, 841 04 Bratislava > Recepcia: +421 2 35 000 100 > > Mobil:+421 905 997 203 > *www.cnc.sk* -- *IMPORTANT NOTE. *If you are NOT AN AUTHORISED RECIPIENT of this e-mail, please contact Planet Innovation Pty Ltd by return e-mail or by telephone on +613 9945 7510. In this case, you should not read, print, re-transmit, store or act in reliance on this e-mail or any attachments, and should destroy all copies of them. This e-mail and any attachments are confidential and may contain legally privileged information and/or copyright material of Planet Innovation Pty Ltd or third parties. You should only re-transmit, distribute or commercialise the material if you are authorised to do so. Although we use virus scanning software, we deny all liability for viruses or alike in any message or attachment. This notice should not be removed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From punaatua.pk at gmail.com Sun Mar 18 23:13:56 2018 From: punaatua.pk at gmail.com (Punaatua PAINT-KOUI) Date: Sun, 18 Mar 2018 13:13:56 -1000 Subject: [ovirt-users] VDSM SSL validity In-Reply-To: References: Message-ID: Up 2018-02-17 2:57 GMT-10:00 Punaatua PAINT-KOUI : > Any idea someone ? > > Le 14 f?vr. 2018 23:19, "Punaatua PAINT-KOUI" a > ?crit : > >> Hi, >> >> I setup an hyperconverged solution with 3 nodes, hosted engine on >> glusterfs. >> We run this setup in a PCI-DSS environment. According to PCI-DSS >> requirements, we are required to reduce the validity of any certificate >> under 39 months. >> >> I saw in this link https://www.ovirt.org/dev >> elop/release-management/features/infra/pki/ that i can use the option >> VdsCertificateValidityInYears at engine-config. >> >> I'm running ovirt engine 4.2.1 and i checked when i was on 4.2 how to >> edit the option with engine-config --all and engine-config --list but the >> option is not listed >> >> Am i missing something ? >> >> I thing i can regenerate a VDSM certificate with openssl and the CA conf >> in /etc/pki/ovirt-engine on the hosted-engine but i would rather modifiy >> the option for future host that I will add. 
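(A hedged sketch for anyone auditing the same requirement, assuming the default oVirt PKI paths; whether VdsCertificateValidityInYears is exposed by engine-config varies by version:

# on the engine: ask engine-config for the key (it may report the key as unknown)
engine-config -g VdsCertificateValidityInYears

# on a host: print the validity window of the VDSM certificate that was actually issued
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -dates

Comparing the notBefore/notAfter dates against the 39-month ceiling shows whether re-issuing is needed at all.)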
>> --
>> -------------------------------------
>> PAINT-KOUI Punaatua

--
-------------------------------------
PAINT-KOUI Punaatua
Licence Pro Réseaux et Télécom IAR
Université du Sud Toulon Var
La Garde France

From jim at palousetech.com Mon Mar 19 02:09:01 2018 From: jim at palousetech.com (Jim Kusznir) Date: Sun, 18 Mar 2018 19:09:01 -0700 Subject: [ovirt-users] Major Performance Issues with gluster Message-ID:

Hello:

This past week, I created a new gluster store, as I was running out of disk space on my main, SSD-backed storage pool. I used 2TB Seagate FireCuda drives (hybrid SSD/spinning). Hardware is Dell R610's with integral PERC/6i cards. I placed one disk per machine, exported the disk as a single-disk volume from the RAID controller, formatted it XFS, mounted it, and dedicated it to a new replica 3 gluster volume.

Since doing so, I've been having major performance problems. One of my Windows VMs sits at 100% disk utilization nearly continuously, and it's painful to do anything on it. A Zabbix install on CentOS using MySQL as the backing store has 70%+ iowait nearly all the time, and I can't seem to get graphs loaded from the web console. It's also always spewing errors that ultimately come down to insufficient disk performance.

All of this was working OK before the changes. There are two:

Old storage was SSD-backed, replica 2 + arbiter, and running on the same GigE network as management and the main VM network.

New storage was created using the dedicated Gluster network (running on em4 on these servers, on a completely different subnet (174.x vs 192.x)), and was created replica 3 (no arbiter), on the FireCuda disks (they seem to be the fastest I could afford for non-SSD, as I needed a lot more storage).

My attempts to watch so far have NOT shown maxed network interfaces (using bwm-ng on the command line); in fact, the gluster interface is usually below 20% utilized.

I'm not sure how to meaningfully measure the performance of the disk itself, and I'm not sure what else to look at. My cluster is not very usable currently, though. IOWait on my hosts appears to be below 0.5%, usually 0.0 to 0.1. Inside the VMs it is a whole different story.

My cluster is currently running oVirt 4.1. I'm interested in going to 4.2, but I think I need to fix this first.

Thanks!
--Jim

From Joseph.Kelly at tradingscreen.com Mon Mar 19 03:19:04 2018 From: Joseph.Kelly at tradingscreen.com (Joseph Kelly) Date: Mon, 19 Mar 2018 03:19:04 +0000 Subject: [ovirt-users] Query about running ovirt-4.2.1 engine support 3.x nodes ? In-Reply-To: References: Message-ID:

Sorry to ask again, but I can see from the link below that nodes and engines should work between minor version upgrades. But is ovirt 4.2.x backward compatible with, say, 3.6 nodes? Does anyone know? Is this documented anywhere?

[ovirt-users] compatibility relationship between datacenter, ovirt and cluster
https://www.mail-archive.com/users at ovirt.org/msg17092.html

Thanks,
Joe.

________________________________
From: Joseph Kelly
Sent: Wednesday, March 14, 2018 5:32 PM
To: users at ovirt.org
Subject: Query about running ovirt-4.2.1 engine support 3.x nodes ?

Hello All,

I have two hopefully easy questions regarding ovirt-4.2.1 engine support and 3.x nodes:

1) Does an ovirt-4.2.x engine support 3.x nodes?
As this page states:

"The cluster compatibility is set according to the version of the least capable host operating system in the cluster."

https://www.ovirt.org/documentation/upgrade-guide/chap-Post-Upgrade_Tasks/

Which seems to indicate that you can run, say, a 4.2.1 engine with lower-version nodes, but is that correct?

2) And can you just upgrade the nodes directly from 3.x to 4.2.x as per these steps?

1. Move the node to maintenance
2. Add 4.2.x repos
3. yum update
4. reboot
5. Activate (exit maintenance)

I've looked in the release notes but wasn't able to find much detail on ovirt-node upgrades.

Thanks,
Joe.

--
J. Kelly
Infrastructure Engineer
TradingScreen
www.tradingscreen.com

Follow TradingScreen on Twitter, Facebook, or our blog, Trading Smarter

This message is intended only for the recipient(s) named above and may contain confidential information. If you are not an intended recipient, you should not review, distribute or copy this message. Please notify the sender immediately by e-mail if you have received this message in error and delete it from your system.

From matonb at ltresources.co.uk Mon Mar 19 03:50:55 2018 From: matonb at ltresources.co.uk (Maton, Brett) Date: Mon, 19 Mar 2018 03:50:55 +0000 Subject: [ovirt-users] Postgresql read only user difficulties In-Reply-To: References: Message-ID:

Ok, that may well be the case :)

If you generated the password as per the instructions with ENCRYPTED PASSWORD, Postgres is expecting the password to be pre-encrypted.

If you connect to postgres as postgres (psql), try

ALTER ROLE grafana WITH PASSWORD 'clear txt password';

On 18 March 2018 at 18:49, Vincent Royer wrote:

> well this is frustrating. I added the read-only user, but still can't connect.
>
> pq: password authentication failed for user "grafana"
>
> *Vincent Royer*
> *778-825-1057*
> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>
> On Sat, Mar 17, 2018 at 11:43 AM, Maton, Brett wrote:
>
>> Yeah, if postgres won't start you've probably got a typo in pg_hba.conf
>>
>> On 17 March 2018 at 18:11, Vincent Royer wrote:
>>
>>> I think I see the issue. Extra space after the IP address in pg_hba.conf
>>>
>>> I'll try again later.
>>>
>>> Thanks for your help!
>>>
>>> On Sat, Mar 17, 2018 at 10:44 AM, Vincent Royer wrote:
>>>
>>>> hmmm. not a great result...
>>>>
>>>> rh-postgresql95-postgresql.service:...1
>>>> Mar 17 10:36:32 ovirt-engine systemd[1]: Failed to start PostgreSQL database....
>>>> Mar 17 10:36:32 ovirt-engine systemd[1]: Unit rh-postgresql95-postgresql.ser....
>>>> Mar 17 10:36:32 ovirt-engine systemd[1]: rh-postgresql95-postgresql.service ....
>>>>
>>>> and can no longer log in to the ovirt-engine gui:
>>>>
>>>> server_error: Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
>>>>
>>>> tried to restart ovirt-engine and it won't come up - internal server error.
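(For readers following along, a minimal sketch of the pg_hba.conf entry that avoids the extra-space problem flagged above, assuming the SCL data directory mentioned later in the thread; the address and mask must be one token in CIDR form:

# /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf
# TYPE  DATABASE              USER     ADDRESS          METHOD
host    ovirt_engine_history  grafana  172.16.30.10/32  md5

# reload the config without a full restart, as suggested in the thread
su - postgres
scl enable rh-postgresql95 bash
pg_ctl reload -D /var/opt/rh/rh-postgresql95/lib/pgsql/data

A stray space, as in "172.16.30.10 /0", makes PostgreSQL parse the mask as a separate field and refuse to start.)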
>>>> *Vincent Royer*
>>>> *778-825-1057*
>>>> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>>>>
>>>> On Sat, Mar 17, 2018 at 4:34 AM, Maton, Brett wrote:
>>>>
>>>>> You could always try reloading the configuration, pretty sure pg_hba gets reloaded these days:
>>>>>
>>>>> su - postgres
>>>>> scl enable rh-postgresql95 bash
>>>>> pg_ctl reload -D /var/opt/rh/rh-postgresql95/lib/pgsql/data
>>>>>
>>>>> or as root
>>>>>
>>>>> systemctl restart rh-postgresql95-postgresql.service
>>>>>
>>>>> On 17 March 2018 at 11:20, Vincent Royer wrote:
>>>>>
>>>>>> ok thanks, I did see it there but assumed that was a temp file. I updated it according to the instructions, but I still get the same error.
>>>>>>
>>>>>> # TYPE DATABASE USER ADDRESS METHOD
>>>>>>
>>>>>> # "local" is for Unix domain socket connections only
>>>>>> local all all peer
>>>>>> host ovirt_engine_history ovirt_engine_history 0.0.0.0/0 md5
>>>>>> host ovirt_engine_history ovirt_engine_history ::0/0 md5
>>>>>> host ovirt_engine_history grafana 172.16.30.10 /0 md5
>>>>>> host ovirt_engine_history grafana ::0/0 md5
>>>>>> host engine engine 0.0.0.0/0 md5
>>>>>> host engine engine ::0/0 md5
>>>>>>
>>>>>> did a systemctl restart postgresql.service and I get "Unit not found". So I did systemctl restart ovirt-engine.service...
>>>>>>
>>>>>> and the error I get when accessing from 172.16.30.10 is:
>>>>>>
>>>>>> pq: no pg_hba.conf entry for host "172.16.30.10", user "grafana", database "ovirt_engine_history", SSL off
>>>>>>
>>>>>> On Sat, Mar 17, 2018 at 3:42 AM, Maton, Brett wrote:
>>>>>>
>>>>>>> Hi Vincent,
>>>>>>>
>>>>>>> oVirt isn't using the stock PostgreSQL but an SCL version
>>>>>>> You should find pg_hba.conf here /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf
>>>>>>>
>>>>>>> Hope this helps.
>>>>>>>
>>>>>>> On 17 March 2018 at 09:50, Vincent Royer wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I followed these instructions on Ovirt self hosted engine 4.2.1:
>>>>>>>>
>>>>>>>> https://www.ovirt.org/documentation/data-warehouse/Allowing_Read_Only_Access_to_the_History_Database/
>>>>>>>>
>>>>>>>> when connecting to the db from an external host I receive this error:
>>>>>>>>
>>>>>>>> pq: no pg_hba.conf entry for host "", user "", database "ovirt_engine_history", SSL off
>>>>>>>>
>>>>>>>> I looked in the normal place for pg_hba.conf but the file does not exist; /data does not exist in /var/lib/pgsql
>>>>>>>>
>>>>>>>> Do I need to run engine-setup again to configure this?
>>>>>>>>
>>>>>>>> Thank you!

From vincent at epicenergy.ca Mon Mar 19 04:16:09 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Sun, 18 Mar 2018 21:16:09 -0700 Subject: [ovirt-users] Postgresql read only user difficulties In-Reply-To: References: Message-ID:

I just did it again with another user and this time it seemed to work.

Thanks for your help.
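(For completeness, a minimal sketch of a read-only reporting role of the kind discussed above, with a hypothetical password; run from psql as the postgres user:

-- create the login role with a plain-text password (PostgreSQL hashes it itself)
CREATE ROLE grafana WITH LOGIN PASSWORD 'changeme';
GRANT CONNECT ON DATABASE ovirt_engine_history TO grafana;
-- connected to ovirt_engine_history, allow SELECT on the existing history tables
GRANT SELECT ON ALL TABLES IN SCHEMA public TO grafana;

Passing the password in plain text to CREATE ROLE or ALTER ROLE avoids the pre-encrypted-password mismatch described earlier.)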
*Vincent Royer*
*778-825-1057*
*SUSTAINABLE MOBILE ENERGY SOLUTIONS*

On Sun, Mar 18, 2018 at 8:50 PM, Maton, Brett wrote:
> Ok, that may well be the case :)
> [...]

From matonb at ltresources.co.uk Mon Mar 19 04:35:13 2018 From: matonb at ltresources.co.uk (Maton, Brett) Date: Mon, 19 Mar 2018 04:35:13 +0000 Subject: [ovirt-users] Postgresql read only user difficulties In-Reply-To: References: Message-ID:

No worries, glad you got there in the end

On 19 March 2018 at 04:16, Vincent Royer wrote:
> I just did it again with another user and this time it seemed to work.
>
> Thanks for your help.
> [...]
From eshenitz at redhat.com Mon Mar 19 05:22:12 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Mon, 19 Mar 2018 07:22:12 +0200 Subject: [ovirt-users] improvement for web ui during the create template stage. In-Reply-To: <1521409442.1710.31.camel@province-sud.nc> References: <1521000255.6088.116.camel@province-sud.nc> <1521409442.1710.31.camel@province-sud.nc> Message-ID:

Thanks Nicolas

On Sun, Mar 18, 2018 at 11:44 PM, Nicolas Vaye wrote:
> Yes, bug submitted
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1557803
>
> -------- Original message --------
> Date: Wed, 14 Mar 2018 11:25:56 +0200
> Subject: Re: [ovirt-users] improvement for web ui during the create template stage.
> To: Nicolas Vaye
> From: Eyal Shenitzky
>
> Hi Nicolas,
>
> Please submit a bug using - https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine.
>
> Thanks,
>
> On Wed, Mar 14, 2018 at 6:04 AM, Nicolas Vaye wrote:
> Hi,
> I have 2 oVirt nodes with HE in version 4.2.1.7-1.
>
> If I make a template from a VM's snapshot in the web UI, there is a form to enter several parameters
> [cid:1521000255.509.1.camel at province-sud.nc]
>
> If the name of the template is missing and we click on the OK button, there is a highlighted red border on the name to indicate the problem.
> If I enter a long name for the template and we click on the OK button, nothing happens, and there is no highlight or error message to indicate there is a problem with the long name.
>
> Could you improve that?
>
> Thanks,
>
> Regards,
>
> Nicolas VAYE

--
Regards,
Eyal Shenitzky

From junaid8756 at gmail.com Mon Mar 19 06:08:58 2018 From: junaid8756 at gmail.com (Junaid Jadoon) Date: Mon, 19 Mar 2018 11:08:58 +0500 Subject: [ovirt-users] change CD not working In-Reply-To: <3adead2d-7da8-9fdd-eea7-e5725d1b9286@integrafin.co.uk> References: <3adead2d-7da8-9fdd-eea7-e5725d1b9286@integrafin.co.uk> Message-ID:

Detached and re-attached the ISO domain; still no luck.
The CD drive is not showing in the Windows VM.

On Fri, Mar 16, 2018 at 7:50 PM, Alex Crow wrote:
> On 15/03/18 18:55, Junaid Jadoon wrote:
>
>> Ovirt engine and node version are 4.2.
>>
>> "Error while executing action Change CD: Failed to perform "Change CD" operation, CD might be still in use by the VM.
>> Please try to manually detach the CD from withing the VM:
>> 1. Log in to the VM
>> 2 For Linux VMs, un-mount the CD using umount command;
>> For Windows VMs, right click on the CD drive and click 'Eject';"
>>
>> Initially it was working fine; suddenly it started giving the above error.
>>
>> Logs are attached.
>>
>> Please help me out.
>>
>> Regards,
>>
>> Junaid
>
> Detach and re-attach of the ISO domain should resolve this. It worked for me.
>
> Alex
>
> --
> This message is intended only for the addressee and may contain
> confidential information. Unless you are that person, you may not
> disclose its contents or use it in any way and are requested to delete
> the message along with any attachments and notify us immediately.
> This email is not intended to, nor should it be taken to, constitute advice.
> The information provided is correct to our knowledge & belief and must not
> be used as a substitute for obtaining tax, regulatory, investment, legal or
> any other appropriate advice.
>
> "Transact" is operated by Integrated Financial Arrangements Ltd.
> 29 Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608 5300.
> (Registered office: as above; Registered in England and Wales under
> number: 3727592). Authorised and regulated by the Financial Conduct
> Authority (entered on the Financial Services Register; no. 190856).

From didi at redhat.com Mon Mar 19 06:32:36 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Mon, 19 Mar 2018 08:32:36 +0200 Subject: [ovirt-users] Workflow after restoring engine from backup In-Reply-To: <831f30ed018b4739a2491cbd24f2429d@eps.aero> References: <831f30ed018b4739a2491cbd24f2429d@eps.aero> Message-ID:

On Sun, Mar 18, 2018 at 11:45 PM, Sven Achtelik wrote:
> Hi All,
>
> I had an issue with the storage that hosted my engine VM. The disk got
> corrupted and I needed to restore the engine from a backup.

How did you back up, and how did you restore?

Which version was used for each?

> That worked as
> expected, I just didn't start the engine yet.

OK.

> I know that after the backup
> was taken some machines were migrated around before the engine disks
> failed.

Are these machines HA?

> My question is what will happen once I start the engine service
> which has the restored backup on it? Will it query the hosts for the
> running VMs

It will, but HA machines are handled differently.

See also:

https://bugzilla.redhat.com/show_bug.cgi?id=1441322
https://bugzilla.redhat.com/show_bug.cgi?id=1446055

> or will it assume that the VMs are still on the hosts as they
> resided at the point of backup?

It does, initially, but then updates status according to what it
gets from hosts.

But polling the hosts takes time, especially if you have many, and
HA policy might require faster handling. So if it polls first a
host that had a machine on it during backup, and sees that it's
gone, and didn't yet poll the new host, HA handling starts immediately,
To prevent that, the fixes to above bugs make the restore process mark HA VMs that do not have leases on the storage as "dead". > Would I need to change the DB manual to let > the engine know where VMs are up at this point ? You might need to, if you have HA VMs and a too-old version of restore. > What will happen to HA VMs > ? I feel that it might try to start them a second time. My biggest issue is > that I can?t get a service Windows to shutdown all VMs and then lat them > restart by the engine. > > > > Is there a known workflow for that ? I am not aware of a tested procedure for handling above if you have a too-old version, but you can check the patches linked from above bugs and manually run the SQL command(s) they include. They are essentially comment 4 of the first bug. Good luck and best regards, -- Didi From didi at redhat.com Mon Mar 19 06:55:50 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Mon, 19 Mar 2018 08:55:50 +0200 Subject: [ovirt-users] Query about running ovirt-4.2.1 engine support 3.x nodes ? In-Reply-To: References: Message-ID: On Mon, Mar 19, 2018 at 5:19 AM, Joseph Kelly < Joseph.Kelly at tradingscreen.com> wrote: > Sorry to ask again, But I can see from the link below the that nodes and > engines should work between minor number > > upgrades. > Indeed. > But is ovirt 4.2.x backward compatible with, say, 3.6 nodes. Does anyone > know ? Is this documented anywhere ? > You can search the release notes pages of 4.2.z releases for '3.6' to find relevant bugs. Just _using_ such hosts, should work. Adding a 3.6 host to a 4.2 engine will likely break. It's definitely not intended to be used for long times - you are encouraged to upgrade your hosts too, soon after the engine. If you plan a very long transition period, I suggest to create a list of operations you might want/need to do, and test everything in a test environment. > > [ovirt-users] compatibility relationship between datacenter, ovirt and > cluster > https://www.mail-archive.com/users at ovirt.org/msg17092.html > > Thanks, > Joe. > > ------------------------------ > *From:* Joseph Kelly > *Sent:* Wednesday, March 14, 2018 5:32 PM > *To:* users at ovirt.org > *Subject:* Query about running ovirt-4.2.1 engine support 3.x nodes ? > > > Hello All, > > > I have two hopefully easy questions regarding an ovirt-4.2.1 engine > support and 3.x nodes ? > > > 1) Does an ovirt-4.2.x engine support 3.x nodes ? As This page states: > > > "The cluster compatibility is set according to the version of the least > capable host operating system in the cluster." > > > https://www.ovirt.org/documentation/upgrade-guide/chap-Post-Upgrade_Tasks/ > > > Which seems to indicate that you can run say a 4.2.1 engine with lower > version nodes, but is that correct ? > > > 2) And can you just upgrade the nodes directly from 3.x to 4.2.x as per > these steps ? > > > 1. Move the node to maintenance > 2. Add 4.2.x repos > 3. yum update > 4. reboot > 5. Activate (exit maintenance) > This should work. You can also use the admin web ui for updates, which might be better, didn't check recently. See also e.g.: https://bugzilla.redhat.com/show_bug.cgi?id=1344020 Best regards, -- Didi -------------- next part -------------- An HTML attachment was scrubbed... URL: From sdhuang32 at gmail.com Mon Mar 19 08:10:39 2018 From: sdhuang32 at gmail.com (Shao-Da Huang) Date: Mon, 19 Mar 2018 16:10:39 +0800 Subject: [ovirt-users] How to disable the QoS settings (go back to 'Unlimited' state) in the vNIC profile by using oVirt REST API? 
Message-ID: Hi, I have a vNIC profile with a QoS object: vnic_test disabled false Now I try to update this object using PUT method and set the 'pass_through' mode to 'enabled', But I always got the error message "Cannot edit VM network interface profile. 'Port Mirroring' and 'Qos' are not supported on passthrough profiles." no matter I send the request body like: vnic_test enabled false OR vnic_test enabled Could anyone tell me how to disable the related QoS settings (namely go back to 'Unlimited' state) in a vNIC profile by using REST API? -------------- next part -------------- An HTML attachment was scrubbed... URL: From junaid8756 at gmail.com Mon Mar 19 08:17:41 2018 From: junaid8756 at gmail.com (Junaid Jadoon) Date: Mon, 19 Mar 2018 13:17:41 +0500 Subject: [ovirt-users] CD drive not showing Message-ID: Hi, Cd drive is not showing in windows 7 VM. Please help me out??? -------------- next part -------------- An HTML attachment was scrubbed... URL: From tbaror at gmail.com Sun Mar 18 12:01:43 2018 From: tbaror at gmail.com (Tal Bar-Or) Date: Sun, 18 Mar 2018 14:01:43 +0200 Subject: [ovirt-users] Ovirt with ZFS+ Gluster Message-ID: Hello, I started to do new modest system planing and the system will be mounted on top of 3~4 Dell r720 with each 2xe5-2640 v2 and 128GB memory and 12xsas 10k 1.2tb and 3x ssd's my plan is to use zfs on top of glusterfs , and my question is since i didn't saw any doc on it Is this kind of deployment is done in the past and recommended. any way if yes is there any doc how to ? Thanks -- Tal Bar-or -------------- next part -------------- An HTML attachment was scrubbed... URL: From karli at inparadise.se Mon Mar 19 08:36:41 2018 From: karli at inparadise.se (Karli =?ISO-8859-1?Q?Sj=F6berg?=) Date: Mon, 19 Mar 2018 09:36:41 +0100 Subject: [ovirt-users] Ovirt with ZFS+ Gluster In-Reply-To: References: Message-ID: <1521448601.2879.41.camel@inparadise.se> On Sun, 2018-03-18 at 14:01 +0200, Tal Bar-Or wrote: > Hello, > > I started to do new modest system planing and the system will be > mounted on top of 3~4 Dell r720 with each 2xe5-2640 v2 and 128GB > memory and 12xsas 10k 1.2tb and 3x ssd's > my plan is to use zfs on top of glusterfs , and my question is since > i didn't saw any doc on it > Is this kind of deployment is done in the past and recommended. > any way if yes is there any doc how to ? > Thanks > > > -- > Tal Bar-or > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users There aren?t any specific documentation about using ZFS underneath Gluster together with oVirt, but there?s nothing wrong IMO about using ZFS with Gluster. E.g. 45 Drives are using it and posting really funny videos about it: https://www.youtube.com/watch?v=A0wV4k58RIs Are you planning this as a standalone Gluster cluster or do you want to use it hyperconverged? /K -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: This is a digitally signed message part URL: From rightkicktech at gmail.com Mon Mar 19 09:03:56 2018 From: rightkicktech at gmail.com (Alex K) Date: Mon, 19 Mar 2018 11:03:56 +0200 Subject: [ovirt-users] Open source backup! In-Reply-To: References: Message-ID: I was testing Open Bacchus and backups were ok. One issue that I see is that one cannot define how many backup copies to retain, unless I missed sth. 
Alex On Mon, Mar 5, 2018 at 5:25 PM, Niyazi Elvan wrote: > Hi, > > If you are looking for VM image backup, you may have a look at Open > Bacchus https://github.com/openbacchus/bacchus > > Bacchus is backing up VMs using the oVirt python api and final image will > reside on the Export domain (which is an NFS share or glusterfs) in your > environment. It does not support moving the images to tapes at the moment. > You need to use another tool to stage your backups to tape. > > Hope this helps. > > > On 5 Mar 2018 Mon at 17:31 Nasrum Minallah Manzoor < > NasrumMinallah9 at hotmail.com> wrote: > >> HI, >> Can you please suggest me any open source backup solution for ovirt >> Virtual machines. >> My backup media is FC tape library which is directly attached to my ovirt >> node. I really appreciate your help >> >> >> >> >> >> >> >> Regards, >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > -- > Niyazi Elvan > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sven.Achtelik at eps.aero Mon Mar 19 09:03:57 2018 From: Sven.Achtelik at eps.aero (Sven Achtelik) Date: Mon, 19 Mar 2018 09:03:57 +0000 Subject: [ovirt-users] Workflow after restoring engine from backup In-Reply-To: References: <831f30ed018b4739a2491cbd24f2429d@eps.aero>, Message-ID: Hi Didi, my backups where taken with the end. Backup utility. I have 3 Data centers, two of them with just one host and the third one with 3 hosts running the engine. The backup three days old, was taken on engine version 4.1 (4.1.7) and the restored engine is running on 4.1.9. I have three HA VMs that would be affected. All others are just normal vms. Sounds like it would be the safest to shut down the HA vm S to make sure that nothing happens ? Or can I disable the HA action in the DB for now ? Thank you, Sven Von meinem Samsung Galaxy Smartphone gesendet. -------- Urspr?ngliche Nachricht -------- Von: Yedidyah Bar David Datum: 19.03.18 07:33 (GMT+01:00) An: Sven Achtelik Cc: users at ovirt.org Betreff: Re: [ovirt-users] Workflow after restoring engine from backup On Sun, Mar 18, 2018 at 11:45 PM, Sven Achtelik wrote: > Hi All, > > > > I had issue with the storage that hosted my engine vm. The disk got > corrupted and I needed to restore the engine from a backup. How did you backup, and how did you restore? Which version was used for each? > That worked as > expected, I just didn?t start the engine yet. OK. > I know that after the backup > was taken some machines where migrated around before the engine disks > failed. Are these machines HA? > My question is what will happen once I start the engine service > which has the restored backup on it ? Will it query the hosts for the > running VMs It will, but HA machines are handled differently. See also: https://bugzilla.redhat.com/show_bug.cgi?id=1441322 https://bugzilla.redhat.com/show_bug.cgi?id=1446055 > or will it assume that the VMs are still on the hosts as they > resided at the point of backup ? It does, initially, but then updates status according to what it gets from hosts. But polling the hosts takes time, especially if you have many, and HA policy might require faster handling. 
So if it polls first a host that had a machine on it during backup, and sees that it's gone, and didn't yet poll the new host, HA handling starts immediately, which eventually might lead to starting the VM on another host. To prevent that, the fixes to above bugs make the restore process mark HA VMs that do not have leases on the storage as "dead". > Would I need to change the DB manual to let > the engine know where VMs are up at this point ? You might need to, if you have HA VMs and a too-old version of restore. > What will happen to HA VMs > ? I feel that it might try to start them a second time. My biggest issue is > that I can?t get a service Windows to shutdown all VMs and then lat them > restart by the engine. > > > > Is there a known workflow for that ? I am not aware of a tested procedure for handling above if you have a too-old version, but you can check the patches linked from above bugs and manually run the SQL command(s) they include. They are essentially comment 4 of the first bug. Good luck and best regards, -- Didi -------------- next part -------------- An HTML attachment was scrubbed... URL: From raghav at exzatechconsulting.com Mon Mar 19 09:07:33 2018 From: raghav at exzatechconsulting.com (Anantha Raghava) Date: Mon, 19 Mar 2018 14:37:33 +0530 Subject: [ovirt-users] Failing to upload qcow2 disk image Message-ID: <42edbcfa-e202-5d7d-f30c-7e6c7a60fa0d@exzatechconsulting.com> Hi, I am trying to upload the disk image which is in qcow2 format. After uploading about 38 GB the status turns to "Paused by system" and it does not resume at all. Any attempt to manually resume, will result back in paused status. Ovirt engine version : 4.2.1.6-1.el7.centos Any guidance to finish this upload task? -- Thanks & Regards, Anantha Raghava -------------- next part -------------- An HTML attachment was scrubbed... URL: From sabose at redhat.com Mon Mar 19 09:14:30 2018 From: sabose at redhat.com (Sahina Bose) Date: Mon, 19 Mar 2018 14:44:30 +0530 Subject: [ovirt-users] Major Performance Issues with gluster In-Reply-To: References: Message-ID: On Mon, Mar 19, 2018 at 7:39 AM, Jim Kusznir wrote: > Hello: > > This past week, I created a new gluster store, as I was running out of > disk space on my main, SSD-backed storage pool. I used 2TB Seagate > FireCuda drives (hybrid SSD/spinning). Hardware is Dell R610's with > integral PERC/6i cards. I placed one disk per machine, exported the disk > as a single disk volume from the raid controller, formatted it XFS, mounted > it, and dedicated it to a new replica 3 gluster volume. > > Since doing so, I've been having major performance problems. One of my > windows VMs sits at 100% disk utilization nearly continously, and its > painful to do anything on it. A Zabbix install on CentOS using mysql as > the backing has 70%+ iowait nearly all the time, and I can't seem to get > graphs loaded from the web console. Its also always spewing errors that > ultimately come down to insufficient disk performance issues. > > All of this was working OK before the changes. There are two: > > Old storage was SSD backed, Replica 2 + arb, and running on the same GigE > network as management and main VM network. > > New storage was created using the dedicated Gluster network (running on > em4 on these servers, completely different subnet (174.x vs 192.x), and was > created replica 3 (no arb), on the FireCuda disks (seem to be the fastest I > could afford for non-SSD, as I needed a lot more storage). 
> > My attempts to watch so far have NOT shown maxed network interfaces (using > bwm-ng on the command line); in fact, the gluster interface is usually > below 20% utilized. > > I'm not sure how to meaningfully measure the performance of the disk > itself; I'm not sure what else to look at. My cluster is not very usable > currently, though. IOWait on my hosts appears to be below 0.5%, usually > 0.0 to 0.1. Inside the VMs is a whole different story. > > My cluster is currently running ovirt 4.1. I'm interested in going to > 4.2, but I think I need to fix this first. > Can you provide the info of the volume using "gluster volume info" and also profile the volume while running the tests where you experience the performance issue, and share results? For info on how to profile (server-side profiling) - https://docs.gluster.org/en/latest/Administrator%20Guide/Performance%20Testing/ > Thanks! > --Jim > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From didi at redhat.com Mon Mar 19 09:17:35 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Mon, 19 Mar 2018 11:17:35 +0200 Subject: [ovirt-users] Workflow after restoring engine from backup In-Reply-To: References: <831f30ed018b4739a2491cbd24f2429d@eps.aero> Message-ID: On Mon, Mar 19, 2018 at 11:03 AM, Sven Achtelik wrote: > Hi Didi, > > my backups where taken with the end. Backup utility. I have 3 Data centers, > two of them with just one host and the third one with 3 hosts running the > engine. The backup three days old, was taken on engine version 4.1 (4.1.7) > and the restored engine is running on 4.1.9. Since the bug I mentioned was fixed in 4.1.3, you should be covered. > I have three HA VMs that would > be affected. All others are just normal vms. Sounds like it would be the > safest to shut down the HA vm S to make sure that nothing happens ? If you can have downtime, I agree it sounds safer to shutdown the VMs. > Or can I > disable the HA action in the DB for now ? No need to. If you restored with 4.1.9 engine-backup, it should have done this for you. If you still have the restore log, you can verify this by checking it. It should contain 'Resetting HA VM status', and then the result of the sql that it ran. Best regards, > > Thank you, > > Sven > > > > Von meinem Samsung Galaxy Smartphone gesendet. > > > -------- Urspr?ngliche Nachricht -------- > Von: Yedidyah Bar David > Datum: 19.03.18 07:33 (GMT+01:00) > An: Sven Achtelik > Cc: users at ovirt.org > Betreff: Re: [ovirt-users] Workflow after restoring engine from backup > > On Sun, Mar 18, 2018 at 11:45 PM, Sven Achtelik > wrote: >> Hi All, >> >> >> >> I had issue with the storage that hosted my engine vm. The disk got >> corrupted and I needed to restore the engine from a backup. > > How did you backup, and how did you restore? > > Which version was used for each? > >> That worked as >> expected, I just didn?t start the engine yet. > > OK. > >> I know that after the backup >> was taken some machines where migrated around before the engine disks >> failed. > > Are these machines HA? > >> My question is what will happen once I start the engine service >> which has the restored backup on it ? Will it query the hosts for the >> running VMs > > It will, but HA machines are handled differently. 
> > See also: > > https://bugzilla.redhat.com/show_bug.cgi?id=1441322 > https://bugzilla.redhat.com/show_bug.cgi?id=1446055 > >> or will it assume that the VMs are still on the hosts as they >> resided at the point of backup ? > > It does, initially, but then updates status according to what it > gets from hosts. > > But polling the hosts takes time, especially if you have many, and > HA policy might require faster handling. So if it polls first a > host that had a machine on it during backup, and sees that it's > gone, and didn't yet poll the new host, HA handling starts immediately, > which eventually might lead to starting the VM on another host. > > To prevent that, the fixes to above bugs make the restore process > mark HA VMs that do not have leases on the storage as "dead". > >> Would I need to change the DB manual to let >> the engine know where VMs are up at this point ? > > You might need to, if you have HA VMs and a too-old version of restore. > >> What will happen to HA VMs >> ? I feel that it might try to start them a second time. My biggest issue >> is >> that I can?t get a service Windows to shutdown all VMs and then lat them >> restart by the engine. >> >> >> >> Is there a known workflow for that ? > > I am not aware of a tested procedure for handling above if you have > a too-old version, but you can check the patches linked from above bugs > and manually run the SQL command(s) they include. They are essentially > comment 4 of the first bug. > > Good luck and best regards, > -- > Didi -- Didi From eshenitz at redhat.com Mon Mar 19 09:34:37 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Mon, 19 Mar 2018 11:34:37 +0200 Subject: [ovirt-users] Failing to upload qcow2 disk image In-Reply-To: <42edbcfa-e202-5d7d-f30c-7e6c7a60fa0d@exzatechconsulting.com> References: <42edbcfa-e202-5d7d-f30c-7e6c7a60fa0d@exzatechconsulting.com> Message-ID: Hi Idan, Can you please take a look? On Mon, Mar 19, 2018 at 11:07 AM, Anantha Raghava < raghav at exzatechconsulting.com> wrote: > Hi, > > I am trying to upload the disk image which is in qcow2 format. After > uploading about 38 GB the status turns to "Paused by system" and it does > not resume at all. Any attempt to manually resume, will result back in > paused status. > > Ovirt engine version : 4.2.1.6-1.el7.centos > > Any guidance to finish this upload task? > > -- > > Thanks & Regards, > > > Anantha Raghava > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From enrico.becchetti at pg.infn.it Mon Mar 19 09:48:02 2018 From: enrico.becchetti at pg.infn.it (Enrico Becchetti) Date: Mon, 19 Mar 2018 10:48:02 +0100 Subject: [ovirt-users] VM has been paused due to NO STORAGE SPACE ERROR ?!?!?!?! In-Reply-To: References: Message-ID: Il 16/03/2018 15:48, Alex Crow ha scritto: > On 16/03/18 13:46, Nicolas Ecarnot wrote: >> Le 16/03/2018 ? 13:28, Karli Sj?berg a ?crit?: >>> >>> >>> Den 16 mars 2018 12:26 skrev Enrico Becchetti >>> : >>> >>> ???? ? Dear All, >>> ??? Does someone had seen that error ? >> >> Yes, I experienced it dozens of times on 3.6 (my 4.2 setup has >> insufficient workload to trigger such event). >> And in every case, there was no actual lack of space. >> >>> ??? 
Enrico Becchetti Servizio di Calcolo e Reti >>> I think I remember something to do with thin provisioning and not >>> being able to grow fast enough, so out of space. Are the VM's disk >>> thick or thin? >> >> All our storage domains are thin-prov. and served by iSCSI >> (Equallogic PS6xxx and 4xxx). >> >> Enrico, do you know if a bug has been filed about this? >> > Did the VM remain paused? In my experience the VM just gets > temporarily paused while the storage is expanded. RH confirmed to me > in a ticket that this is expected behaviour. > > If you need high write performance your VM disks should always be > preallocated. We only use Thin Provision for VMs where we know that > disk writes are low (eg network services, CPU-bound apps, etc). > Thanks a lot !!! Best Regards Enrico > Alex > -- > This message is intended only for the addressee and may contain > confidential information. Unless you are that person, you may not > disclose its contents or use it in any way and are requested to delete > the message along with any attachments and notify us immediately. > This email is not intended to, nor should it be taken to, constitute > advice. > The information provided is correct to our knowledge & belief and must > not > be used as a substitute for obtaining tax, regulatory, investment, > legal or > any other appropriate advice. > > "Transact" is operated by Integrated Financial Arrangements Ltd. > 29 Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) > 7608 5300. > (Registered office: as above; Registered in England and Wales under > number: 3727592). Authorised and regulated by the Financial Conduct > Authority (entered on the Financial Services Register; no. 190856). > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -- _______________________________________________________________________ Enrico Becchetti Servizio di Calcolo e Reti Istituto Nazionale di Fisica Nucleare - Sezione di Perugia Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it ______________________________________________________________________ From derez at redhat.com Mon Mar 19 11:39:27 2018 From: derez at redhat.com (Daniel Erez) Date: Mon, 19 Mar 2018 11:39:27 +0000 Subject: [ovirt-users] storage domain ovirt-image-repository doesn't work In-Reply-To: <1520984648.6088.106.camel@province-sud.nc> References: <1520807274.18402.56.camel@province-sud.nc> <1520984162.6088.104.camel@province-sud.nc> <1520984648.6088.106.camel@province-sud.nc> Message-ID: Hi Nicolas, Can you please try navigating to "Administration -> Providers", select "ovirt-image-repository" provider and click "Edit" button. Make sure that "Requires Authentication" isn't checked, and click the "Test" button - is it accessing the provider successfully? On Wed, Mar 14, 2018 at 1:45 AM Nicolas Vaye wrote: > the logs during the test of the ovirt-image-repository provider : > > > 2018-03-14 10:39:43,337+11 INFO > [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand] > (default task-17) [6c8c6a9f-2c24-4a77-af75-47352c6df887] Running command: > TestProviderConnectivityCommand internal: false. 
Entities affected : ID: > aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group > CREATE_STORAGE_POOL with role type ADMIN > 2018-03-14 10:41:30,465+11 INFO > [org.ovirt.engine.core.utils.transaction.TransactionSupport] (default > task-27) [42cb88a3-2614-4aa9-a3bf-b56102a83c35] transaction rolled back > 2018-03-14 10:41:30,465+11 ERROR > [org.ovirt.engine.core.bll.storage.repoimage.GetImagesListQuery] (default > task-27) [42cb88a3-2614-4aa9-a3bf-b56102a83c35] Failed to retrieve image > list: Connection timed out (Connection timed out) > 2018-03-14 10:41:50,560+11 ERROR > [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand] > (default task-17) [6c8c6a9f-2c24-4a77-af75-47352c6df887] Command > 'org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand' > failed: EngineException: (Failed with error PROVIDER_FAILURE and code 5050) > > > > > > -------- Message initial -------- > > Date: Tue, 13 Mar 2018 23:36:06 +0000 > Objet: Re: [ovirt-users] storage domain ovirt-image-repository doesn't work > Cc: users at ovirt.org 22%20%3cusers at ovirt.org%3e>> > ?: ishaby at redhat.com 22%20%3cishaby at redhat.com%3e>> > Reply-to: Nicolas Vaye > De: Nicolas Vaye Nicolas%20Vaye%20%3cnicolas.vaye at province-sud.nc%3e>> > > Hi Idan, > > here are the logs requested : > > 2018-03-14 10:25:52,097+11 INFO > [org.ovirt.engine.core.utils.transaction.TransactionSupport] (default > task-6) [61b5b46f-0ea3-496a-af90-bf82e7d204f3] transaction rolled back > 2018-03-14 10:25:52,097+11 ERROR > [org.ovirt.engine.core.bll.storage.repoimage.GetImagesListQuery] (default > task-6) [61b5b46f-0ea3-496a-af90-bf82e7d204f3] Failed to retrieve image > list: Connection timed out (Connection timed out) > 2018-03-14 10:25:57,083+11 INFO > [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'commandCoordinator' is using 0 threads out of 10 and 10 tasks are waiting > in the queue. > 2018-03-14 10:25:57,083+11 INFO > [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'default' is using 0 threads out of 1 and 5 tasks are waiting in the queue. > 2018-03-14 10:25:57,083+11 INFO > [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'engine' is using 0 threads out of 500, 16 threads waiting for tasks and 0 > tasks in queue. > 2018-03-14 10:25:57,084+11 INFO > [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'engineScheduled' is using 0 threads out of 100 and 100 tasks are waiting > in the queue. > 2018-03-14 10:25:57,084+11 INFO > [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'engineThreadMonitoring' is using 1 threads out of 1 and 0 tasks are > waiting in the queue. > 2018-03-14 10:25:57,084+11 INFO > [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] > (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool > 'hostUpdatesChecker' is using 0 threads out of 5 and 4 tasks are waiting in > the queue. > > > Connection timed out seems to indicate that it doesn't use the proxy to > get web access ? or a firewall issue ? 
> > but on each oVirt node, I tried to curl the URL and the result is OK :
> >
> > curl http://glance.ovirt.org:9292/
> >
> > {"versions": [{"status": "CURRENT", "id": "v2.3", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.2", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.1", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.0", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v1.1", "links": [{"href": "http://glance.ovirt.org:9292/v1/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v1.0", "links": [{"href": "http://glance.ovirt.org:9292/v1/", "rel": "self"}]}]}
> >
> I don't know what is wrong !!
>
> Regards,
>
> Nicolas
>
> -------- Message initial --------
>
> Date: Tue, 13 Mar 2018 07:25:07 +0200
> Subject: Re: [ovirt-users] storage domain ovirt-image-repository doesn't work
> Cc: users at ovirt.org
> To: Nicolas Vaye <nicolas.vaye at province-sud.nc>
> From: Idan Shaby <ishaby at redhat.com>
>
> Hi Nicolas,
>
> Let me make sure that I understand what's the issue here - you click on
> the domain and on the Images sub tab nothing is displayed?
> Can you please clear your engine log, click on the ovirt-image-repository
> domain and attach the log to the mail?
> When I do it, I get the following audit log:
>
> 2018-03-13 07:19:25,983+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-86) [6af6ee81-ce9a-46b7-a371-c5c3b0c6bf2a] EVENT_ID: REFRESH_REPOSITORY_IMAGE_LIST_SUCCEEDED(998), Refresh image list succeeded for domain(s): ovirt-image-repository (All file type)
>
> Maybe you get an error there that can help us understand the problem.
>
>
> Regards,
> Idan
>
> On Mon, Mar 12, 2018 at 12:27 AM, Nicolas Vaye <nicolas.vaye at province-sud.nc> wrote:
> Hello,
>
> I have installed one oVirt platform with 2 nodes and 1 hosted engine, version 4.2.1.7-1.
>
> It seems to work fine, but I have an issue with the ovirt-image-repository:
> it is impossible to get the list of available images for this domain
> (screenshot attachment scrubbed).
>
> My cluster is on a private network, so there is a proxy to get internet
> access.
> I have tried with a specific proxy configuration on each node (
> https://www.server-world.info/en/note?os=CentOS_7&p=squid&f=2),
> so it's a success with yum update, wget or curl with
> http://glance.ovirt.org:9292/, but nothing in the webui for the
> ovirt-image-repository domain.
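> Since the nodes themselves can reach glance but the engine apparently
> cannot, one untested idea -- the drop-in file name and whether the engine's
> glance client honors these properties are assumptions to verify -- is to
> pass the standard Java proxy properties to the engine JVM and restart it:
>
> # /etc/ovirt-engine/engine.conf.d/99-proxy.conf (hypothetical drop-in)
> ENGINE_PROPERTIES="${ENGINE_PROPERTIES} http.proxyHost=proxy.example.local http.proxyPort=3128"
>
> systemctl restart ovirt-engine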
> I have tried another test with a transparent proxy and the result is the
> same :
> success with yum update, wget or curl with http://glance.ovirt.org:9292/,
> but nothing in the webui for the ovirt-image-repository domain.
>
> I don't know where the specific log for this technical part is.
>
> Can I have some help with this issue?
>
> Thanks.
>
> Nicolas VAYE
> DSI - Nouméa
> NEW CALEDONIA
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
-------------- next part -------------- An HTML attachment was scrubbed... URL: From tbaror at gmail.com Mon Mar 19 08:55:12 2018 From: tbaror at gmail.com (Tal Bar-Or) Date: Mon, 19 Mar 2018 10:55:12 +0200 Subject: [ovirt-users] Ovirt with ZFS+ Gluster In-Reply-To: <1521448601.2879.41.camel@inparadise.se> References: <1521448601.2879.41.camel@inparadise.se> Message-ID:

wow, that's a nice demonstration
Thanks

On Mon, Mar 19, 2018 at 10:36 AM, Karli Sjöberg wrote:

> On Sun, 2018-03-18 at 14:01 +0200, Tal Bar-Or wrote:
> > Hello,
> >
> > I started planning a new modest system. It will be mounted on top of
> > 3~4 Dell R720s, each with 2x E5-2640 v2, 128GB of memory, 12x 10k SAS
> > 1.2TB drives and 3x SSDs.
> > My plan is to use ZFS under GlusterFS, and my question is, since I
> > didn't see any doc on it:
> > has this kind of deployment been done in the past, and is it
> > recommended? If so, is there any doc on how to?
> > Thanks
> >
> >
> > --
> > Tal Bar-or
> > _______________________________________________
> > Users mailing list
> > Users at ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> There isn't any specific documentation about using ZFS underneath
> Gluster together with oVirt, but there's nothing wrong IMO with using
> ZFS with Gluster. E.g. 45 Drives are using it and posting really funny
> videos about it:
>
> https://www.youtube.com/watch?v=A0wV4k58RIs
>
> Are you planning this as a standalone Gluster cluster or do you want to
> use it hyperconverged?
>
> /K

--
Tal Bar-or
-------------- next part -------------- An HTML attachment was scrubbed... URL: From jaymef at gmail.com Mon Mar 19 12:27:57 2018 From: jaymef at gmail.com (Jayme) Date: Mon, 19 Mar 2018 09:27:57 -0300 Subject: [ovirt-users] GlusterFS performance with only one drive per host? Message-ID:

I'm spec'ing a new oVirt build using three Dell R720's w/ 256GB. I'm
considering storage options. I don't have a requirement for large amounts
of storage: I have a little over 1TB to store, but I want some overhead, so
I'm thinking 2TB of usable space would be sufficient.

I've been doing some research on Micron 1100 2TB SSDs and they seem to
offer a lot of value for the money. I'm considering using smaller, cheaper
SSDs for boot drives and using one 2TB Micron SSD in each host for a
GlusterFS replica 3 setup (I'm on the fence about using an arbiter; I like
the extra redundancy replica 3 will give me).

My question is, would I see a performance hit using only one drive in each
host with GlusterFS, or should I try to add more physical disks, such as
6x 1TB drives instead of 3x 2TB drives?

Also one other question. I've read that gluster can only be done in groups
of three, meaning you need 3, 6, or 9 hosts.
Is this true? If I had an operational replica 3 GlusterFS setup and wanted
to add more capacity, would I have to add 3 more hosts, or is it possible
for me to add a 4th host into the mix for extra processing power down the
road?

Thanks!
-------------- next part -------------- An HTML attachment was scrubbed... URL: From budic at onholyground.com Mon Mar 19 14:39:33 2018 From: budic at onholyground.com (Darrell Budic) Date: Mon, 19 Mar 2018 09:39:33 -0500 Subject: [ovirt-users] Ovirt with ZFS+ Gluster In-Reply-To: <1521448601.2879.41.camel@inparadise.se> References: <1521448601.2879.41.camel@inparadise.se> Message-ID:

Most of this is still valid, if getting a bit long in the tooth:
https://docs.gluster.org/en/latest/Administrator%20Guide/Gluster%20On%20ZFS/

I've got it running on several production clusters. I'm using the ZFS on
Linux 0.7.6 kmod installation myself. I use a zvol per brick, and only one
brick per machine from the zpool per gluster volume. If I had more disks, I
might have two zvols with a brick each per gluster volume, but not now. My
local settings:

# zfs get all v0 | grep local
v0  compression  lz4       local
v0  xattr        sa        local
v0  acltype      posixacl  local
v0  relatime     on        local

> From: Karli Sjöberg
> Subject: Re: [ovirt-users] Ovirt with ZFS+ Gluster
> Date: March 19, 2018 at 3:36:41 AM CDT
> To: Tal Bar-Or; users
>
> On Sun, 2018-03-18 at 14:01 +0200, Tal Bar-Or wrote:
>> Hello,
>>
>> I started planning a new modest system. It will be mounted on top of
>> 3~4 Dell R720s, each with 2x E5-2640 v2, 128GB of memory, 12x 10k SAS
>> 1.2TB drives and 3x SSDs.
>> My plan is to use ZFS under GlusterFS, and my question is, since I
>> didn't see any doc on it:
>> has this kind of deployment been done in the past, and is it
>> recommended? If so, is there any doc on how to?
>> Thanks
>>
>>
>> --
>> Tal Bar-or
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
> There isn't any specific documentation about using ZFS underneath
> Gluster together with oVirt, but there's nothing wrong IMO with using
> ZFS with Gluster. E.g. 45 Drives are using it and posting really funny
> videos about it:
>
> https://www.youtube.com/watch?v=A0wV4k58RIs
>
> Are you planning this as a standalone Gluster cluster or do you want to
> use it hyperconverged?
>
> /K
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From simone.bruckner at fabasoft.com Mon Mar 19 14:48:40 2018 From: simone.bruckner at fabasoft.com (Bruckner, Simone) Date: Mon, 19 Mar 2018 14:48:40 +0000 Subject: [ovirt-users] Cannot start VM after live storage migration - Bad volume specification In-Reply-To: <2CB4E8C8E00E594EA06D4AC427E429920FE9C36B@fabamailserver.fabagl.fabasoft.com> References: <2CB4E8C8E00E594EA06D4AC427E429920FE9C36B@fabamailserver.fabagl.fabasoft.com> Message-ID: <2CB4E8C8E00E594EA06D4AC427E429920FEA02C7@fabamailserver.fabagl.fabasoft.com> Hi, it seems that there is a broken chain - we see two "empty" parent_ids in the database: engine=# SELECT b.disk_alias, s.description,s.snapshot_id, i.creation_date, s.status, i.imagestatus, i.size,i.parentid,i.image_group_id, i.vm_snapshot_id, i.image_guid, i.parentid, i.active FROM images as i JOIN snapshots AS s ON (i.vm_snapshot_id = s.snapshot_id) LEFT JOIN vm_static AS v ON (s.vm_id = v.vm_guid) JOIN base_disks AS b ON (i.image_group_id = b.disk_id) WHERE v.vm_name = 'VMNAME' and disk_alias = 'VMNAME_Disk2' ORDER BY creation_date, description, disk_alias ; disk_alias | description | snapshot_id | creation_date | status | imagestatus | size | parentid | image_group_id | vm_snapshot_id | image_guid | parentid | active ------------------+------------------------------------------------------------+--------------------------------------+------------------------+--------+-------------+---------------+--------------------------------------+--------------------------------------+--------------------------------------+--------------------------------------+--------------------------------------+-------- VMNAME_Disk2 | tmp | 3920a4e1-fc3f-45e2-84d5-0d3f1b8ad608 | 2018-01-28 10:09:37+01 | OK | 1 | 1979979923456 | 00000000-0000-0000-0000-000000000000 | c1a05108-90d7-421d-a9b4-d4cc65c48429 | 3920a4e1-fc3f-45e2-84d5-0d3f1b8ad608 | 946ee7b7-0770-49c9-ac76-0ce95a433d0f | 00000000-0000-0000-0000-000000000000 | f VMNAME_Disk2 | VMNAME_Disk2 Auto-generated for Live Storage Migration | 51f68304-e1a9-4400-aabc-8e3341d55fdc | 2018-03-16 15:07:35+01 | OK | 1 | 1979979923456 | 00000000-0000-0000-0000-000000000000 | c1a05108-90d7-421d-a9b4-d4cc65c48429 | 51f68304-e1a9-4400-aabc-8e3341d55fdc | 4c6475b1-352a-4114-b647-505cccbe6663 | 00000000-0000-0000-0000-000000000000 | f VMNAME_Disk2 | Active VM | d59a9f9d-f0dc-48ec-97e8-9e7a8b81d76d | 2018-03-18 20:54:23+01 | OK | 1 | 1979979923456 | 946ee7b7-0770-49c9-ac76-0ce95a433d0f | c1a05108-90d7-421d-a9b4-d4cc65c48429 | d59a9f9d-f0dc-48ec-97e8-9e7a8b81d76d | 4659b5e0-93c1-478d-97d0-ec1cf4052028 | 946ee7b7-0770-49c9-ac76-0ce95a433d0f | t Is there a way to recover that disk? All the best, Simone Von: users-bounces at ovirt.org Im Auftrag von Bruckner, Simone Gesendet: Sonntag, 18. M?rz 2018 22:15 An: users at ovirt.org Betreff: [ovirt-users] Cannot start VM after live storage migration - Bad volume specification Hi all, we did a live storage migration of one of three disks of a vm that failed because the vm became not responding when deleting the auto-snapshot: 2018-03-16 15:07:32.084+01 | 0 | Snapshot 'VMNAME_Disk2 Auto-generated for Live Storage Migration' creation for VM 'VMNAME' was initiated by xxx 2018-03-16 15:07:32.097+01 | 0 | User xxx moving disk VMNAME_Disk2 to domain VMHOST_LUN_211. 2018-03-16 15:08:56.304+01 | 0 | Snapshot 'VMNAME_Disk2 Auto-generated for Live Storage Migration' creation for VM 'VMNAME' has been completed. 
2018-03-16 16:40:48.89+01 | 0 | Snapshot 'VMNAME_Disk2 Auto-generated for Live Storage Migration' deletion for VM 'VMNAME' was initiated by xxx. 2018-03-16 16:44:44.813+01 | 1 | VM VMNAME is not responding. 2018-03-18 18:40:51.258+01 | 2 | Failed to delete snapshot 'VMNAME_Disk2 Auto-generated for Live Storage Migration' for VM 'VMNAME'. 2018-03-18 18:40:54.506+01 | 1 | Possible failure while deleting VMNAME_Disk2 from the source Storage Domain VMHOST_LUN_211 during the move operation. The Storage Domain may be manually cleaned-up from possible leftover s (User:xxx). Now we cannot start the vm anymore as long as this disk is online. Error message is "VM VMNAME is down with error. Exit message: Bad volume specification {'index': 2, 'domainID': 'ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a', 'reqsize': '0', 'name': 'vdc', 'truesize': '2147483648', 'format': 'cow', 'discard': False, 'volumeID': '4659b5e0-93c1-478d-97d0-ec1cf4052028', 'apparentsize': '2147483648', 'imageID': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'specParams': {}, 'iface': 'virtio', 'cache': 'none', 'propagateErrors': 'off', 'poolID': '5849b030-626e-47cb-ad90-3ce782d831b3', 'device': 'disk', 'path': '/rhev/data-center/mnt/blockSD/ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a/images/c1a05108-90d7-421d-a9b4-d4cc65c48429/4659b5e0-93c1-478d-97d0-ec1cf4052028', 'serial': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'diskType': 'block', 'type': 'block'}." vdsm.log: 2018-03-18 21:53:33,815+0100 ERROR (vm/7d05e511) [storage.TaskManager.Task] (Task='fc3bac16-64f3-4910-8bc4-6cfdd4d270da') Unexpected error (task:875) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File "", line 2, in prepareImage File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method ret = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3179, in prepareImage raise se.prepareIllegalVolumeError(volUUID) prepareIllegalVolumeError: Cannot prepare illegal volume: ('4c6475b1-352a-4114-b647-505cccbe6663',) 2018-03-18 21:53:33,816+0100 INFO (vm/7d05e511) [storage.TaskManager.Task] (Task='fc3bac16-64f3-4910-8bc4-6cfdd4d270da') aborting: Task is aborted: "Cannot prepare illegal volume: ('4c6475b1-352a-4114-b647-505cccbe6663',)" - code 227 (task:1181) 2018-03-18 21:53:33,816+0100 ERROR (vm/7d05e511) [storage.Dispatcher] FINISH prepareImage error=Cannot prepare illegal volume: ('4c6475b1-352a-4114-b647-505cccbe6663',) (dispatcher:82) 2018-03-18 21:53:33,816+0100 ERROR (vm/7d05e511) [virt.vm] (vmId='7d05e511-2e97-4002-bded-285ec4e30587') The vm start process failed (vm:927) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, in _startUnderlyingVm self._run() File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2661, in _run self._devices = self._make_devices() File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2608, in _make_devices self._preparePathsForDrives(disk_params) File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 1001, in _preparePathsForDrives drive['path'] = self.cif.prepareVolumePath(drive, self.id) File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 393, in prepareVolumePath raise vm.VolumeError(drive) VolumeError: Bad volume specification {'index': 2, 'domainID': 'ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a', 'reqsize': '0', 'name': 'vdc', 'truesize': '2147483648', 'format': 'cow', 'discard': False, 'volumeID': '4659b5e0-93c1-478d-97d0-ec1cf4052028', 
'apparentsize': '2147483648', 'imageID': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'specParams': {}, 'iface': 'virtio', 'cache': 'none', 'propagateErrors': 'off', 'poolID': '5849b030-626e-47cb-ad90-3ce782d831b3', 'device': 'disk', 'path': '/rhev/data-center/mnt/blockSD/ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a/images/c1a05108-90d7-421d-a9b4-d4cc65c48429/4659b5e0-93c1-478d-97d0-ec1cf4052028', 'serial': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'diskType': 'block', 'type': 'block'} 2018-03-18 21:53:33,817+0100 INFO (vm/7d05e511) [virt.vm] (vmId='7d05e511-2e97-4002-bded-285ec4e30587') Changed state to Down: Bad volume specification {'index': 2, 'domainID': 'ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a', 'reqsize': '0', 'name': 'vdc', 'truesize': '2147483648', 'format': 'cow', 'discard': False, 'volumeID': '4659b5e0-93c1-478d-97d0-ec1cf4052028', 'apparentsize': '2147483648', 'imageID': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'specParams': {}, 'iface': 'virtio', 'cache': 'none', 'propagateErrors': 'off', 'poolID': '5849b030-626e-47cb-ad90-3ce782d831b3', 'device': 'disk', 'path': '/rhev/data-center/mnt/blockSD/ecc71a64-62c1-43f4-bf1f-3bc1b22c7a8a/images/c1a05108-90d7-421d-a9b4-d4cc65c48429/4659b5e0-93c1-478d-97d0-ec1cf4052028', 'serial': 'c1a05108-90d7-421d-a9b4-d4cc65c48429', 'diskType': 'block', 'type': 'block'} (code=1) (vm:1646) Is there a way to recover this disk? Thank you, Simone -------------- next part -------------- An HTML attachment was scrubbed... URL: From acrow at integrafin.co.uk Mon Mar 19 14:54:27 2018 From: acrow at integrafin.co.uk (Alex Crow) Date: Mon, 19 Mar 2018 14:54:27 +0000 Subject: [ovirt-users] CD drive not showing In-Reply-To: References: Message-ID: <6691bf4d-8ab5-c969-9807-6a58f87b87f9@integrafin.co.uk> Maybe try removing the ISO domain and then importing it. Alex On 19/03/18 08:17, Junaid Jadoon wrote: > Hi, > Cd drive is not showing in windows 7 VM. > > Please help me out??? > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -- This message is intended only for the addressee and may contain confidential information. Unless you are that person, you may not disclose its contents or use it in any way and are requested to delete the message along with any attachments and notify us immediately. This email is not intended to, nor should it be taken to, constitute advice. The information provided is correct to our knowledge & belief and must not be used as a substitute for obtaining tax, regulatory, investment, legal or any other appropriate advice. "Transact" is operated by Integrated Financial Arrangements Ltd. 29 Clement's Lane, London EC4N 7AE. Tel: (020) 7608 4900 Fax: (020) 7608 5300. (Registered office: as above; Registered in England and Wales under number: 3727592). Authorised and regulated by the Financial Conduct Authority (entered on the Financial Services Register; no. 190856). -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma at cmadams.net Mon Mar 19 15:04:45 2018 From: cma at cmadams.net (Chris Adams) Date: Mon, 19 Mar 2018 10:04:45 -0500 Subject: [ovirt-users] Sizing hardware for hyperconverged with Gluster? Message-ID: <20180319150445.GA10823@cmadams.net> I have a reasonable feel for how to size hardware for an oVirt cluster with external storage (our current setups all use iSCSI to talk to a SAN). I'm looking at a hyperconverged oVirt+Gluster setup; are there guides for figuring out the additional Gluster resource requirements? 
I assume I need to allow for additional CPU and RAM, I just don't know how to size it (based on I/O I guess?). -- Chris Adams From SBERGER at qg.com Mon Mar 19 13:17:08 2018 From: SBERGER at qg.com (Berger, Sandy) Date: Mon, 19 Mar 2018 13:17:08 +0000 Subject: [ovirt-users] Network issues with oVirt 4.2 and cloud-init Message-ID: We're using cloud-init to customize VMs built from a template. We're using static IPV4 settings so we're specifying an IP address, subnet mask, and gateway. There is apparently a bug in the current version of cloud-init shipping as part of CentOS 7.4 (https://bugzilla.redhat.com/show_bug.cgi?id=1492726) that fails to set the gateway properly. In the description of the bug, it says it is fixed in RHEL 7.5 but also says one can use https://people.redhat.com/rmccabe/cloud-init/cloud-init-0.7.9-20.el7.x86_64.rpm which is what we're doing. When the new VM first boots, the 3 IPv4 settings are all set correctly. Reboots of the VM maintain the settings properly. But, if the VM is shut down and started again via the oVirt GUI, all of the IPV4 settings on the eth0 virtual NIC are lost and the /etc/sysconfig/network-scripts/ifcfg-eth0 shows that the NIC is now set up for DHCP. Are we doing something incorrectly? Sandy Berger IT - Infrastructure Engineer II Quad/Graphics Performance through Innovation Sussex, Wisconsin 414.566.2123 phone 414.566.4010/2123 pager/PIN sandy.berger at qg.com www.QG.com Follow Us: Facebook | Twitter | LinkedIn | YouTube -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzetto.luca at gmail.com Mon Mar 19 15:56:04 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Mon, 19 Mar 2018 16:56:04 +0100 Subject: [ovirt-users] Network issues with oVirt 4.2 and cloud-init In-Reply-To: References: Message-ID: Hello Sandy, i had the same issue and the cause was cloud-init running again at boot even if Run-Once hasn't been selected as boot option. The way i'm using to solve the problem is to remove cloud-init after the first run, since we don't need it anymore. In case also disabling is enoug: touch /etc/cloud/cloud-init.disabled Luca On Mon, Mar 19, 2018 at 2:17 PM, Berger, Sandy wrote: > We?re using cloud-init to customize VMs built from a template. We?re using > static IPV4 settings so we?re specifying an IP address, subnet mask, and > gateway. There is apparently a bug in the current version of cloud-init > shipping as part of CentOS 7.4 > (https://bugzilla.redhat.com/show_bug.cgi?id=1492726) that fails to set the > gateway properly. In the description of the bug, it says it is fixed in RHEL > 7.5 but also says one can use > https://people.redhat.com/rmccabe/cloud-init/cloud-init-0.7.9-20.el7.x86_64.rpm > which is what we?re doing. > > > > When the new VM first boots, the 3 IPv4 settings are all set correctly. > Reboots of the VM maintain the settings properly. But, if the VM is shut > down and started again via the oVirt GUI, all of the IPV4 settings on the > eth0 virtual NIC are lost and the /etc/sysconfig/network-scripts/ifcfg-eth0 > shows that the NIC is now set up for DHCP. > > > > Are we doing something incorrectly? > > > > Sandy Berger > > IT ? 
Infrastructure Engineer II > > > > Quad/Graphics > > Performance through Innovation > > > > Sussex, Wisconsin > > 414.566.2123 phone > > 414.566.4010/2123 pager/PIN > > > > sandy.berger at qg.com > > www.QG.com > > > > Follow Us: Facebook | Twitter | LinkedIn | YouTube > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet ? la pi? grande biblioteca del mondo. Ma il problema ? che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From spfma.tech at e.mail.fr Mon Mar 19 15:56:31 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Mon, 19 Mar 2018 16:56:31 +0100 Subject: [ovirt-users] Hosted engine deployment error Message-ID: <20180319155631.DB556E446F@smtp01.mail.de> Hi, I wanted to rebuild a new hosted engine setup, as the old was corrupted (too much violent poweroff !) So the server was not reinstalled, I just runned "ovirt-hosted-engine-cleanup". The network setup generated by vdsm seems to be still in place, so I haven't changed anything there. Then I decided to update the packages to the latest versions avaible, rebooted the server and run "ovirt-hosted-engine-setup". But the process never succeeds, as I get an error after a long time spent in "[ INFO ] TASK [Wait for the host to be up]" [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": [{"address": "pfm-srv-virt-1.pfm-ad.pfm.loc", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "pfm.loc", "subject": "O=pfm.loc,CN=pfm-srv-virt-1.pfm-ad.pfm.loc"}, "cluster": {"href": "/ovirt-engine/api/clusters/d6c9358e-2b8b-11e8-bc86-00163e152701", "id": "d6c9358e-2b8b-11e8-bc86-00163e152701"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/542566c4-fc85-4398-9402-10c8adaa9554", "id": "542566c4-fc85-4398-9402-10c8adaa9554", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, "name": "pfm-srv-virt-1.pfm-ad.pfm.loc", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:J75BVLFnmGBGFosXzaxCRnuIYcOc75HUBQZ4pOKpDg8", "port": 22}, "statistics": [], "status": "non_responsive", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "rhel", "unmanaged_networks": [], "update_available": false}]}, "attempts": 120, "changed": false} [ INFO ] TASK [Remove local vm dir] [ INFO ] TASK [Notify the user about a failure] [ ERROR ] fatal: [localhost]: FAILED! 
=> {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"} I made another try with Cockpit, it is the same. Am I doing something wrong or is there a bug ? Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at palousetech.com Mon Mar 19 16:28:06 2018 From: jim at palousetech.com (Jim Kusznir) Date: Mon, 19 Mar 2018 09:28:06 -0700 Subject: [ovirt-users] Major Performance Issues with gluster In-Reply-To: References: Message-ID: Here's gluster volume info: [root at ovirt2 ~]# gluster volume info Volume Name: data Type: Replicate Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59 Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: ovirt1.nwfiber.com:/gluster/brick2/data Brick2: ovirt2.nwfiber.com:/gluster/brick2/data Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter) Options Reconfigured: changelog.changelog: on geo-replication.ignore-pid-check: on geo-replication.indexing: on server.allow-insecure: on performance.readdir-ahead: on performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.stat-prefetch: off cluster.eager-lock: enable network.remote-dio: enable cluster.quorum-type: auto cluster.server-quorum-type: server storage.owner-uid: 36 storage.owner-gid: 36 features.shard: on features.shard-block-size: 512MB performance.low-prio-threads: 32 cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-wait-qlength: 10000 cluster.shd-max-threads: 8 network.ping-timeout: 30 user.cifs: off nfs.disable: on performance.strict-o-direct: on Volume Name: data-hdd Type: Replicate Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 172.172.1.11:/gluster/brick3/data-hdd Brick2: 172.172.1.12:/gluster/brick3/data-hdd Brick3: 172.172.1.13:/gluster/brick3/data-hdd Options Reconfigured: changelog.changelog: on geo-replication.ignore-pid-check: on geo-replication.indexing: on transport.address-family: inet performance.readdir-ahead: on Volume Name: engine Type: Replicate Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter) Options Reconfigured: changelog.changelog: on geo-replication.ignore-pid-check: on geo-replication.indexing: on performance.readdir-ahead: on performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.stat-prefetch: off cluster.eager-lock: enable network.remote-dio: off cluster.quorum-type: auto cluster.server-quorum-type: server storage.owner-uid: 36 storage.owner-gid: 36 features.shard: on features.shard-block-size: 512MB performance.low-prio-threads: 32 cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-wait-qlength: 10000 cluster.shd-max-threads: 6 network.ping-timeout: 30 user.cifs: off nfs.disable: on performance.strict-o-direct: on Volume Name: iso Type: Replicate Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92 Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 
1) = 3 Transport-type: tcp Bricks: Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter) Options Reconfigured: performance.readdir-ahead: on performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.stat-prefetch: off cluster.eager-lock: enable network.remote-dio: off cluster.quorum-type: auto cluster.server-quorum-type: server storage.owner-uid: 36 storage.owner-gid: 36 features.shard: on features.shard-block-size: 512MB performance.low-prio-threads: 32 cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-wait-qlength: 10000 cluster.shd-max-threads: 6 network.ping-timeout: 30 user.cifs: off nfs.disable: on performance.strict-o-direct: on -------------- When I try and turn on profiling, I get: [root at ovirt2 ~]# gluster volume profile data-hdd start Another transaction is in progress for data-hdd. Please try again after sometime. I don't know what that other transaction is, but I am having some "odd behavior" this morning, like a vm disk move between data and data-hdd that stuck at 84% overnight. I've been asking on IRC how to "un-stick" this transfer, as the VM cannot be started, and I can't seem to do anything about it. --Jim On Mon, Mar 19, 2018 at 2:14 AM, Sahina Bose wrote: > > > On Mon, Mar 19, 2018 at 7:39 AM, Jim Kusznir wrote: > >> Hello: >> >> This past week, I created a new gluster store, as I was running out of >> disk space on my main, SSD-backed storage pool. I used 2TB Seagate >> FireCuda drives (hybrid SSD/spinning). Hardware is Dell R610's with >> integral PERC/6i cards. I placed one disk per machine, exported the disk >> as a single disk volume from the raid controller, formatted it XFS, mounted >> it, and dedicated it to a new replica 3 gluster volume. >> >> Since doing so, I've been having major performance problems. One of my >> windows VMs sits at 100% disk utilization nearly continously, and its >> painful to do anything on it. A Zabbix install on CentOS using mysql as >> the backing has 70%+ iowait nearly all the time, and I can't seem to get >> graphs loaded from the web console. Its also always spewing errors that >> ultimately come down to insufficient disk performance issues. >> >> All of this was working OK before the changes. There are two: >> >> Old storage was SSD backed, Replica 2 + arb, and running on the same GigE >> network as management and main VM network. >> >> New storage was created using the dedicated Gluster network (running on >> em4 on these servers, completely different subnet (174.x vs 192.x), and was >> created replica 3 (no arb), on the FireCuda disks (seem to be the fastest I >> could afford for non-SSD, as I needed a lot more storage). >> >> My attempts to watch so far have NOT shown maxed network interfaces >> (using bwm-ng on the command line); in fact, the gluster interface is >> usually below 20% utilized. >> >> I'm not sure how to meaningfully measure the performance of the disk >> itself; I'm not sure what else to look at. My cluster is not very usable >> currently, though. IOWait on my hosts appears to be below 0.5%, usually >> 0.0 to 0.1. Inside the VMs is a whole different story. >> >> My cluster is currently running ovirt 4.1. I'm interested in going to >> 4.2, but I think I need to fix this first. 
>> > > > Can you provide the info of the volume using "gluster volume info" and > also profile the volume while running the tests where you experience the > performance issue, and share results? > > For info on how to profile (server-side profiling) - > https://docs.gluster.org/en/latest/Administrator%20Guide/ > Performance%20Testing/ > > >> Thanks! >> --Jim >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at palousetech.com Mon Mar 19 16:38:40 2018 From: jim at palousetech.com (Jim Kusznir) Date: Mon, 19 Mar 2018 09:38:40 -0700 Subject: [ovirt-users] Gluster: VM disk stuck in transfer; georep gone wonky Message-ID: Hi all: Sorry for yet another semi-related message to the list. In my attempts to troubleshoot and verify some suspicions on the nature of the performance problems I posted under "Major Performance Issues with gluster", I attempted to move one of my problem VM's back to the original storage (SSD-backed). It appeared to be moving fine, but last night froze at 84%. This morning (8hrs later), its still at 84%. I need to get that VM back up and running, but I don't know how...It seems to be stuck in limbo. The only thing I explicitly did last night as well that may have caused an issue is finally set up and activated georep to an offsite backup machine. That too seems to have gone a bit wonky. On the ovirt server side, it shows normal with all but data-hdd show a last sync'ed time of 3am (which matches my bandwidth graphs for the WAN connections involved). data-hdd (the new disk-backed storage with most of my data in it) shows not yet synced, but I'm also not currently seeing bandwidth usage anymore. I logged into the georep destination box, and found system load a bit high, a bunch of gluster and rsync processes running, and both data and data-hdd using MORE disk space than the origional (data-hdd using 4x more disk space than is on the master node). Not sure what to do about this; I paused the replication from the cluster, but that hasn't seem to had an effect on the georep destination. I promise I'll stop trying things until I get guidance from the list! Please do help; I need the VM HDD unstuck so I can start it. Thanks! --Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From fernando.frediani at upx.com Mon Mar 19 16:38:42 2018 From: fernando.frediani at upx.com (FERNANDO FREDIANI) Date: Mon, 19 Mar 2018 13:38:42 -0300 Subject: [ovirt-users] Migration oVirt Engine from Dedicated Host to Self-Hosted Engine Message-ID: Hello folks I currently have a oVirt Engine which runs in a Dedicated Virtual Machine in another ans separate environment. It is very nice to have it like that because every time I do a oVirt Version Upgrade I take a snapshot before and if it failed (and it did failed in the past several times) I just go back in time before the snapshot and all comes back to normal. Two quick questions: - Going to a Self-Hosted Engine will snapshots or recoverable ways be possible ? - To migrate the Engine from the current environment to the self-hosted engine is it just a question to backup the Database, restore it into the self-hosted engine keeping it with the same IP address ? Are there any special points to take in consideration when doing this migration ? Thanks Fernando -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fernando.frediani at upx.com Mon Mar 19 16:41:36 2018 From: fernando.frediani at upx.com (FERNANDO FREDIANI) Date: Mon, 19 Mar 2018 13:41:36 -0300 Subject: [ovirt-users] Migration oVirt Engine from Dedicated Host to Self-Hosted Engine In-Reply-To: References: Message-ID: Just to add up, for the second question I am following this URL: https://ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/ So the question is more of anything else that may be good to take in attention other than what is already there. Thanks Fernando 2018-03-19 13:38 GMT-03:00 FERNANDO FREDIANI : > Hello folks > > I currently have a oVirt Engine which runs in a Dedicated Virtual Machine > in another ans separate environment. It is very nice to have it like that > because every time I do a oVirt Version Upgrade I take a snapshot before > and if it failed (and it did failed in the past several times) I just go > back in time before the snapshot and all comes back to normal. > > Two quick questions: > > - Going to a Self-Hosted Engine will snapshots or recoverable ways be > possible ? > > - To migrate the Engine from the current environment to the self-hosted > engine is it just a question to backup the Database, restore it into the > self-hosted engine keeping it with the same IP address ? Are there any > special points to take in consideration when doing this migration ? > > Thanks > Fernando > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Mon Mar 19 16:48:06 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Mon, 19 Mar 2018 17:48:06 +0100 Subject: [ovirt-users] Hosted engine deployment error In-Reply-To: <20180319155631.DB556E446F@smtp01.mail.de> References: <20180319155631.DB556E446F@smtp01.mail.de> Message-ID: On Mon, Mar 19, 2018 at 4:56 PM, wrote: > Hi, > > I wanted to rebuild a new hosted engine setup, as the old was corrupted > (too much violent poweroff !) > > So the server was not reinstalled, I just runned > "ovirt-hosted-engine-cleanup". The network setup generated by vdsm seems to > be still in place, so I haven't changed anything there. > > Then I decided to update the packages to the latest versions avaible, > rebooted the server and run "ovirt-hosted-engine-setup". > > But the process never succeeds, as I get an error after a long time spent > in "[ INFO ] TASK [Wait for the host to be up]" > > > [ ERROR ] fatal: [localhost]: FAILED! 
=> {"ansible_facts": {"ovirt_hosts": > [{"address": "pfm-srv-virt-1.pfm-ad.pfm.loc", "affinity_labels": [], > "auto_numa_status": "unknown", "certificate": {"organization": "pfm.loc", > "subject": "O=pfm.loc,CN=pfm-srv-virt-1.pfm-ad.pfm.loc"}, "cluster": > {"href": "/ovirt-engine/api/clusters/d6c9358e-2b8b-11e8-bc86-00163e152701", > "id": "d6c9358e-2b8b-11e8-bc86-00163e152701"}, "comment": "", "cpu": > {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, > "devices": [], "external_network_provider_configurations": [], > "external_status": "ok", "hardware_information": {"supported_rng_sources": > []}, "hooks": [], "href": "/ovirt-engine/api/hosts/ > 542566c4-fc85-4398-9402-10c8adaa9554", "id": "542566c4-fc85-4398-9402-10c8adaa9554", > "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, > "max_scheduling_memory": 0, "memory": 0, "name": "pfm-srv-virt-1.pfm-ad.pfm.loc", > "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": > false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": > 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, > "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", > "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": > {"fingerprint": "SHA256:J75BVLFnmGBGFosXzaxCRnuIYcOc75HUBQZ4pOKpDg8", > "port": 22}, "statistics": [], "status": "non_responsive", > "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], > "transparent_huge_pages": {"enabled": false}, "type": "rhel", > "unmanaged_networks": [], "update_available": false}]}, "attempts": 120, > "changed": false} > [ INFO ] TASK [Remove local vm dir] > [ INFO ] TASK [Notify the user about a failure] > [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The > system may not be provisioned according to the playbook results: please > check the logs for the issue, fix accordingly or re-deploy from scratch.\n"} > > > I made another try with Cockpit, it is the same. > > Am I doing something wrong or is there a bug ? > I suppose that your host was condifured with DHCP, if so it's this one: https://bugzilla.redhat.com/1549642 The fix will come with 4.2.2. > > Regards > > > > ------------------------------ > FreeMail powered by mail.fr > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spfma.tech at e.mail.fr Mon Mar 19 17:09:40 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Mon, 19 Mar 2018 18:09:40 +0100 Subject: [ovirt-users] Hosted engine deployment error In-Reply-To: References: Message-ID: <20180319170940.88A5DE446D@smtp01.mail.de> Hi, Thanks for your answer. No, it was configured with static ip. I checked the answer file from the first install, I used the same options. Regards Le 19-Mar-2018 17:48:41 +0100, stirabos at redhat.com a crit: On Mon, Mar 19, 2018 at 4:56 PM, wrote: Hi, I wanted to rebuild a new hosted engine setup, as the old was corrupted (too much violent poweroff !) So the server was not reinstalled, I just runned "ovirt-hosted-engine-cleanup". The network setup generated by vdsm seems to be still in place, so I haven't changed anything there. Then I decided to update the packages to the latest versions avaible, rebooted the server and run "ovirt-hosted-engine-setup". 
But the process never succeeds, as I get an error after a long time spent in "[ INFO ] TASK [Wait for the host to be up]" [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": [{"address": "pfm-srv-virt-1.pfm-ad.pfm.loc", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "pfm.loc", "subject": "O=pfm.loc,CN=pfm-srv-virt-1.pfm-ad.pfm.loc"}, "cluster": {"href": "/ovirt-engine/api/clusters/d6c9358e-2b8b-11e8-bc86-00163e152701", "id": "d6c9358e-2b8b-11e8-bc86-00163e152701"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/542566c4-fc85-4398-9402-10c8adaa9554", "id": "542566c4-fc85-4398-9402-10c8adaa9554", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, "name": "pfm-srv-virt-1.pfm-ad.pfm.loc", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:J75BVLFnmGBGFosXzaxCRnuIYcOc75HUBQZ4pOKpDg8", "port": 22}, "statistics": [], "status": "non_responsive", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "rhel", "unmanaged_networks": [], "update_available": false}]}, "attempts": 120, "changed": false} [ INFO ] TASK [Remove local vm dir] [ INFO ] TASK [Notify the user about a failure] [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"} I made another try with Cockpit, it is the same. Am I doing something wrong or is there a bug ? I suppose that your host was condifured with DHCP, if so it's this one: https://bugzilla.redhat.com/1549642 The fix will come with 4.2.2. Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreil1 at starlett.lv Mon Mar 19 21:23:16 2018 From: andreil1 at starlett.lv (Andrei Verovski) Date: Mon, 19 Mar 2018 23:23:16 +0200 Subject: [ovirt-users] Migration oVirt Engine from Dedicated Host to Self-Hosted Engine In-Reply-To: References: Message-ID: <5fa32c49-7f8a-fc2b-8439-6e059ecc29c0@starlett.lv> Your current setup is optimal. For example - if node running self-hosted engine dies for whatever reason, what happens next? As an option you can buy a couple of fanless Celeron mini PCs (1 active + 2nd backup with 8GB RAM and 128 - 256 GB SSD) and run host engine here under KVM, so image can be easily cloned/restored/moved if necessary. 
On 03/19/2018 06:41 PM, FERNANDO FREDIANI wrote: > Just to add up, for the second question I am following this URL: > > https://ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/ > > So the question is more of anything else that may be good to take in > attention other than what is already there. > > Thanks > Fernando > > 2018-03-19 13:38 GMT-03:00 FERNANDO FREDIANI > >: > > Hello folks > > I currently have a oVirt Engine which runs in a Dedicated Virtual > Machine in another ans separate environment. It is very nice to > have it like that because every time I do a oVirt Version Upgrade > I take a snapshot before and if it failed (and it did failed in > the past several times) I just go back in time before the snapshot > and all comes back to normal. > > Two quick questions: > > - Going to a Self-Hosted Engine will snapshots or recoverable ways > be possible ? > > - To migrate the Engine from the current environment to the > self-hosted engine is it just a question to backup the Database, > restore it into the self-hosted engine keeping it with the same IP > address ? Are there any special points to take in consideration > when doing this migration ? > > Thanks > Fernando > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From fernando.frediani at upx.com Mon Mar 19 21:26:42 2018 From: fernando.frediani at upx.com (FERNANDO FREDIANI) Date: Mon, 19 Mar 2018 18:26:42 -0300 Subject: [ovirt-users] Migration oVirt Engine from Dedicated Host to Self-Hosted Engine In-Reply-To: <5fa32c49-7f8a-fc2b-8439-6e059ecc29c0@starlett.lv> References: <5fa32c49-7f8a-fc2b-8439-6e059ecc29c0@starlett.lv> Message-ID: The goal is to use Self-Hosted Engine and use any snapshot technique, if possible. Fernando 2018-03-19 18:23 GMT-03:00 Andrei Verovski : > > Your current setup is optimal. > For example - if node running self-hosted engine dies for whatever reason, > what happens next? > > As an option you can buy a couple of fanless Celeron mini PCs (1 active + > 2nd backup with 8GB RAM and 128 - 256 GB SSD) and run host engine here > under KVM, so image can be easily cloned/restored/moved if necessary. > > > > On 03/19/2018 06:41 PM, FERNANDO FREDIANI wrote: > > Just to add up, for the second question I am following this URL: > > https://ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_ > Metal_to_an_EL-Based_Self-Hosted_Environment/ > > So the question is more of anything else that may be good to take in > attention other than what is already there. > > Thanks > Fernando > > 2018-03-19 13:38 GMT-03:00 FERNANDO FREDIANI : > >> Hello folks >> >> I currently have a oVirt Engine which runs in a Dedicated Virtual Machine >> in another ans separate environment. It is very nice to have it like that >> because every time I do a oVirt Version Upgrade I take a snapshot before >> and if it failed (and it did failed in the past several times) I just go >> back in time before the snapshot and all comes back to normal. >> >> Two quick questions: >> >> - Going to a Self-Hosted Engine will snapshots or recoverable ways be >> possible ? >> >> - To migrate the Engine from the current environment to the self-hosted >> engine is it just a question to backup the Database, restore it into the >> self-hosted engine keeping it with the same IP address ? 
Are there any >> special points to take in consideration when doing this migration ? >> >> Thanks >> Fernando >> >> >> > > > _______________________________________________ > Users mailing listUsers at ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roupas_zois at hotmail.com Mon Mar 19 10:15:38 2018 From: roupas_zois at hotmail.com (zois roupas) Date: Mon, 19 Mar 2018 10:15:38 +0000 Subject: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a Message-ID: Hello everyone I've made a rookie mistake by installing ovirt 4.2 in centos 7 with dhcp instead of a static ip configuration. Both engine and host are in the same machine cause of limited resources and i was so happy that everything worked so well that i kept configuring and installing vm's ,adding local and nfs storage and setting up the backup! As you understand i must change the configuration to static ip and i can't find any guide describing the correct procedure. Is there an official guide to change configuration without causing any trouble? I've found this thread http://lists.ovirt.org/pipermail/users/2014-May/024432.html but this is for a hosted engine and doesn't help when both engine and host are in the same machine Thanx in advance Best Regards Zois -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Mon Mar 19 22:50:16 2018 From: donny at fortnebula.com (Donny Davis) Date: Mon, 19 Mar 2018 22:50:16 +0000 Subject: [ovirt-users] Major Performance Issues with gluster In-Reply-To: References: Message-ID: Try hitting the optimize for virt option in the volumes tab on oVirt for this volume. This might help with some of it, but that should have been done before you connected it as a storage domain. The sharding feature helps with performance, and so do some of the other options that are present on your other volumes. 
On Mon, Mar 19, 2018, 12:28 PM Jim Kusznir wrote: > Here's gluster volume info: > > [root at ovirt2 ~]# gluster volume info > > Volume Name: data > Type: Replicate > Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59 > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x (2 + 1) = 3 > Transport-type: tcp > Bricks: > Brick1: ovirt1.nwfiber.com:/gluster/brick2/data > Brick2: ovirt2.nwfiber.com:/gluster/brick2/data > Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter) > Options Reconfigured: > changelog.changelog: on > geo-replication.ignore-pid-check: on > geo-replication.indexing: on > server.allow-insecure: on > performance.readdir-ahead: on > performance.quick-read: off > performance.read-ahead: off > performance.io-cache: off > performance.stat-prefetch: off > cluster.eager-lock: enable > network.remote-dio: enable > cluster.quorum-type: auto > cluster.server-quorum-type: server > storage.owner-uid: 36 > storage.owner-gid: 36 > features.shard: on > features.shard-block-size: 512MB > performance.low-prio-threads: 32 > cluster.data-self-heal-algorithm: full > cluster.locking-scheme: granular > cluster.shd-wait-qlength: 10000 > cluster.shd-max-threads: 8 > network.ping-timeout: 30 > user.cifs: off > nfs.disable: on > performance.strict-o-direct: on > > Volume Name: data-hdd > Type: Replicate > Volume ID: d342a3ab-16f3-49f0-bbcf-f788be8ac5f1 > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x 3 = 3 > Transport-type: tcp > Bricks: > Brick1: 172.172.1.11:/gluster/brick3/data-hdd > Brick2: 172.172.1.12:/gluster/brick3/data-hdd > Brick3: 172.172.1.13:/gluster/brick3/data-hdd > Options Reconfigured: > changelog.changelog: on > geo-replication.ignore-pid-check: on > geo-replication.indexing: on > transport.address-family: inet > performance.readdir-ahead: on > > Volume Name: engine > Type: Replicate > Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x (2 + 1) = 3 > Transport-type: tcp > Bricks: > Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine > Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine > Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter) > Options Reconfigured: > changelog.changelog: on > geo-replication.ignore-pid-check: on > geo-replication.indexing: on > performance.readdir-ahead: on > performance.quick-read: off > performance.read-ahead: off > performance.io-cache: off > performance.stat-prefetch: off > cluster.eager-lock: enable > network.remote-dio: off > cluster.quorum-type: auto > cluster.server-quorum-type: server > storage.owner-uid: 36 > storage.owner-gid: 36 > features.shard: on > features.shard-block-size: 512MB > performance.low-prio-threads: 32 > cluster.data-self-heal-algorithm: full > cluster.locking-scheme: granular > cluster.shd-wait-qlength: 10000 > cluster.shd-max-threads: 6 > network.ping-timeout: 30 > user.cifs: off > nfs.disable: on > performance.strict-o-direct: on > > Volume Name: iso > Type: Replicate > Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92 > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x (2 + 1) = 3 > Transport-type: tcp > Bricks: > Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso > Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso > Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter) > Options Reconfigured: > performance.readdir-ahead: on > performance.quick-read: off > performance.read-ahead: off > performance.io-cache: off > performance.stat-prefetch: off > cluster.eager-lock: enable > network.remote-dio: off > cluster.quorum-type: auto > 
cluster.server-quorum-type: server > storage.owner-uid: 36 > storage.owner-gid: 36 > features.shard: on > features.shard-block-size: 512MB > performance.low-prio-threads: 32 > cluster.data-self-heal-algorithm: full > cluster.locking-scheme: granular > cluster.shd-wait-qlength: 10000 > cluster.shd-max-threads: 6 > network.ping-timeout: 30 > user.cifs: off > nfs.disable: on > performance.strict-o-direct: on > > -------------- > > When I try and turn on profiling, I get: > > [root at ovirt2 ~]# gluster volume profile data-hdd start > Another transaction is in progress for data-hdd. Please try again after > sometime. > > I don't know what that other transaction is, but I am having some "odd > behavior" this morning, like a vm disk move between data and data-hdd that > stuck at 84% overnight. > > I've been asking on IRC how to "un-stick" this transfer, as the VM cannot > be started, and I can't seem to do anything about it. > > --Jim > > On Mon, Mar 19, 2018 at 2:14 AM, Sahina Bose wrote: > >> >> >> On Mon, Mar 19, 2018 at 7:39 AM, Jim Kusznir wrote: >> >>> Hello: >>> >>> This past week, I created a new gluster store, as I was running out of >>> disk space on my main, SSD-backed storage pool. I used 2TB Seagate >>> FireCuda drives (hybrid SSD/spinning). Hardware is Dell R610's with >>> integral PERC/6i cards. I placed one disk per machine, exported the disk >>> as a single disk volume from the raid controller, formatted it XFS, mounted >>> it, and dedicated it to a new replica 3 gluster volume. >>> >>> Since doing so, I've been having major performance problems. One of my >>> windows VMs sits at 100% disk utilization nearly continously, and its >>> painful to do anything on it. A Zabbix install on CentOS using mysql as >>> the backing has 70%+ iowait nearly all the time, and I can't seem to get >>> graphs loaded from the web console. Its also always spewing errors that >>> ultimately come down to insufficient disk performance issues. >>> >>> All of this was working OK before the changes. There are two: >>> >>> Old storage was SSD backed, Replica 2 + arb, and running on the same >>> GigE network as management and main VM network. >>> >>> New storage was created using the dedicated Gluster network (running on >>> em4 on these servers, completely different subnet (174.x vs 192.x), and was >>> created replica 3 (no arb), on the FireCuda disks (seem to be the fastest I >>> could afford for non-SSD, as I needed a lot more storage). >>> >>> My attempts to watch so far have NOT shown maxed network interfaces >>> (using bwm-ng on the command line); in fact, the gluster interface is >>> usually below 20% utilized. >>> >>> I'm not sure how to meaningfully measure the performance of the disk >>> itself; I'm not sure what else to look at. My cluster is not very usable >>> currently, though. IOWait on my hosts appears to be below 0.5%, usually >>> 0.0 to 0.1. Inside the VMs is a whole different story. >>> >>> My cluster is currently running ovirt 4.1. I'm interested in going to >>> 4.2, but I think I need to fix this first. >>> >> >> >> Can you provide the info of the volume using "gluster volume info" and >> also profile the volume while running the tests where you experience the >> performance issue, and share results? >> >> For info on how to profile (server-side profiling) - >> https://docs.gluster.org/en/latest/Administrator%20Guide/Performance%20Testing/ >> >> >>> Thanks! 
>>
>>> Thanks!
>>> --Jim
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users at ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jlawrence at squaretrade.com Mon Mar 19 23:47:00 2018
From: jlawrence at squaretrade.com (Jamie Lawrence)
Date: Mon, 19 Mar 2018 16:47:00 -0700
Subject: [ovirt-users] Iso upload success, no GUI popup option
Message-ID: <6CF44967-E2D3-4346-8BFE-8A6A9116A8E8@squaretrade.com>

Hello,

I'm trying to iron out the last few oddities of this setup, and one of them is the iso upload process. This worked in the last rebuild, but... well.

So, uploading from one of the hosts to an ISO domain claims success, and manually checking shows the ISO uploaded just fine, perms set correctly to 36:36. But it doesn't appear in the GUI popup when creating a new VM.

Verified that the VDSM user can fully traverse the directory path - presumably that was tested by uploading it in the first place, but I double-checked. Looked in various logs, but didn't see any action in ovirt-imageio-daemon or -proxy. Didn't see anything in engine.log that looked relevant.

What is the troubleshooting method for this? Googling, it seemed most folks' problems were related to permissions. I scanned DB table names for something that seemed like it might have ISO-related info in it, but couldn't find anything, and am not sure what else to check.

Thanks,

-j

From mburman at redhat.com Tue Mar 20 06:37:27 2018
From: mburman at redhat.com (Michael Burman)
Date: Tue, 20 Mar 2018 08:37:27 +0200
Subject: [ovirt-users] Network issues with oVirt 4.2 and cloud-init
In-Reply-To:
References:
Message-ID:

Hi Berger,

Have you sealed the template? If you didn't seal the template on creation, then your new VM has the same ifcfg-eth0 settings as your original VM. In order to avoid this, you need to check the 'Seal Template (Linux only)' checkbox in the New Template dialog.
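If you ever need to seal an image by hand, virt-sysprep from libguestfs does a similar cleanup. A rough sketch (the disk path is just an example, and the operation list is trimmed to the ssh/network bits - the checkbox is still the supported way):

# virt-sysprep -a /path/to/template-disk.img --operations ssh-hostkeys,udev-persistent-net,net-hwaddr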
Cheers)

On Mon, Mar 19, 2018 at 5:56 PM, Luca 'remix_tj' Lorenzetto < lorenzetto.luca at gmail.com> wrote:

> Hello Sandy,
>
> i had the same issue and the cause was cloud-init running again at boot even if Run-Once hasn't been selected as boot option. The way I'm using to solve the problem is to remove cloud-init after the first run, since we don't need it anymore.
>
> In case disabling it is also enough:
>
> touch /etc/cloud/cloud-init.disabled
>
> Luca
>
> On Mon, Mar 19, 2018 at 2:17 PM, Berger, Sandy wrote:
> > We're using cloud-init to customize VMs built from a template. We're using static IPV4 settings so we're specifying an IP address, subnet mask, and gateway. There is apparently a bug in the current version of cloud-init shipping as part of CentOS 7.4 (https://bugzilla.redhat.com/show_bug.cgi?id=1492726) that fails to set the gateway properly. In the description of the bug, it says it is fixed in RHEL 7.5 but also says one can use https://people.redhat.com/rmccabe/cloud-init/cloud-init-0.7.9-20.el7.x86_64.rpm which is what we're doing.
> >
> > When the new VM first boots, the 3 IPv4 settings are all set correctly. Reboots of the VM maintain the settings properly. But, if the VM is shut down and started again via the oVirt GUI, all of the IPV4 settings on the eth0 virtual NIC are lost and the /etc/sysconfig/network-scripts/ifcfg-eth0 shows that the NIC is now set up for DHCP.
> >
> > Are we doing something incorrectly?
> >
> > Sandy Berger
> >
> > IT - Infrastructure Engineer II
> >
> > Quad/Graphics
> > Performance through Innovation
> >
> > Sussex, Wisconsin
> > 414.566.2123 phone
> > 414.566.4010/2123 pager/PIN
> >
> > sandy.berger at qg.com
> > www.QG.com
> >
> > Follow Us: Facebook | Twitter | LinkedIn | YouTube
> >
> > _______________________________________________
> > Users mailing list
> > Users at ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> --
> "It is absurd to employ men of excellent intelligence to do calculations that could be entrusted to anyone if machines were used"
> Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716)
>
> "The Internet is the biggest library in the world. But the problem is that the books are all scattered on the floor"
> John Allen Paulos, Mathematician (1945-living)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , < lorenzetto.luca at gmail.com>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

--
Michael Burman
Senior Quality engineer - rhv network - redhat israel
Red Hat
mburman at redhat.com    M: 0545355725    IM: mburman
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mburman at redhat.com Tue Mar 20 06:46:53 2018
From: mburman at redhat.com (Michael Burman)
Date: Tue, 20 Mar 2018 08:46:53 +0200
Subject: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a
In-Reply-To:
References:
Message-ID:

Hello Zois,

It's pretty easy to do via the webadmin UI: go to the Hosts main tab > choose the host > go to the 'Network Interfaces' sub tab > press the 'Setup Host Networks' button > press the pencil icon on your management network > choose Static IP > press OK and OK to approve the operation.

- Note that in some cases, especially if this is an SPM host, you will lose connectivity to the host for a few seconds and the host may go to a non-responsive state; on a non-SPM host this usually works without any specific issues.

- If the host in question is an SPM host, I recommend setting it to maintenance mode first, doing the switch and then activating. For a non-SPM host this will work fine as well while the host is UP.
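If it worked, the persisted config on the host (/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt) should end up looking roughly like this - the addresses here are made up, use your own:

DEVICE=ovirtmgmt
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes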
Cheers)

On Mon, Mar 19, 2018 at 12:15 PM, zois roupas wrote:

> Hello everyone
>
> I've made a rookie mistake by installing ovirt 4.2 in centos 7 with dhcp instead of a static ip configuration. Both engine and host are in the same machine because of limited resources, and I was so happy that everything worked so well that I kept configuring and installing vm's, adding local and nfs storage and setting up the backup!
>
> As you understand I must change the configuration to static ip and I can't find any guide describing the correct procedure. Is there an official guide to change the configuration without causing any trouble?
>
> I've found this thread http://lists.ovirt.org/pipermail/users/2014-May/024432.html but this is for a hosted engine and doesn't help when both engine and host are in the same machine
>
> Thanx in advance
> Best Regards
> Zois
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

--
Michael Burman
Senior Quality engineer - rhv network - redhat israel
Red Hat
mburman at redhat.com    M: 0545355725    IM: mburman
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jim at palousetech.com Tue Mar 20 06:48:04 2018
From: jim at palousetech.com (Jim Kusznir)
Date: Mon, 19 Mar 2018 23:48:04 -0700
Subject: [ovirt-users] Gluster: VM disk stuck in transfer; georep gone wonky
In-Reply-To:
References:
Message-ID:

Unfortunately, I came under heavy pressure to get this vm back up. So, I did more googling and attempted to recover myself. I've gotten closer, but still not quite.

I found this post:

http://lists.ovirt.org/pipermail/users/2015-November/035686.html

Which gave me the unlock tool, which was successful in unlocking the disk. Unfortunately, it did not delete the task, nor did ovirt do so on its own after the disk was unlocked.

So I found the taskcleaner.sh in the same directory and attempted to clean the task out... except it doesn't seem to see the task (none of the show tasks options seemed to work, or the delete all options). I did still have the task uuid from the gui, so I attempted to use that, but all I got back was a "t" on one line and a "0" on the next, so I have no idea what that was supposed to mean. In any case, the web UI still shows the task, still won't let me start the VM and appears convinced it's still copying. I've tried restarting the engine and vdsm on the SPM; neither has helped. I can't find any evidence of the task on the command line; only in the UI.
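For reference, what I ran was along these lines - paths as shipped with the engine's dbutils, the UUIDs are placeholders from the UI, and -h on each script lists the exact options for your version:

# cd /usr/share/ovirt-engine/setup/dbutils
# ./unlock_entity.sh -t disk -u engine <disk-uuid>
# ./taskcleaner.sh -u engine -t <task-uuid>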
I'd create a new VM if I could rescue the image, but I'm not sure I can manage to get this image accepted in another VM

How do I recover now?

--Jim

On Mon, Mar 19, 2018 at 9:38 AM, Jim Kusznir wrote:

> Hi all:
>
> Sorry for yet another semi-related message to the list. In my attempts to troubleshoot and verify some suspicions on the nature of the performance problems I posted under "Major Performance Issues with gluster", I attempted to move one of my problem VM's back to the original storage (SSD-backed). It appeared to be moving fine, but last night froze at 84%. This morning (8hrs later), it's still at 84%.
>
> I need to get that VM back up and running, but I don't know how... It seems to be stuck in limbo.
>
> The only thing I explicitly did last night as well that may have caused an issue is finally set up and activated georep to an offsite backup machine. That too seems to have gone a bit wonky. On the ovirt server side, it shows normal with all but data-hdd showing a last synced time of 3am (which matches my bandwidth graphs for the WAN connections involved). data-hdd (the new disk-backed storage with most of my data in it) shows not yet synced, but I'm also not currently seeing bandwidth usage anymore.
>
> I logged into the georep destination box, and found system load a bit high, a bunch of gluster and rsync processes running, and both data and data-hdd using MORE disk space than the original (data-hdd using 4x more disk space than is on the master node). Not sure what to do about this; I paused the replication from the cluster, but that doesn't seem to have had an effect on the georep destination.
>
> I promise I'll stop trying things until I get guidance from the list! Please do help; I need the VM HDD unstuck so I can start it.
>
> Thanks!
> --Jim
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eshenitz at redhat.com Tue Mar 20 06:52:11 2018
From: eshenitz at redhat.com (Eyal Shenitzky)
Date: Tue, 20 Mar 2018 08:52:11 +0200
Subject: [ovirt-users] Gluster: VM disk stuck in transfer; georep gone wonky
In-Reply-To:
References:
Message-ID:

Hi,

Can you please send the VDSM and Engine log?
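(vdsm.log lives under /var/log/vdsm/ on the host, and engine.log under /var/log/ovirt-engine/ on the engine machine.)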
Thanks

On Mon, Mar 19, 2018 at 6:38 PM, Jim Kusznir wrote:

> Hi all:
>
> Sorry for yet another semi-related message to the list. In my attempts to troubleshoot and verify some suspicions on the nature of the performance problems I posted under "Major Performance Issues with gluster", I attempted to move one of my problem VM's back to the original storage (SSD-backed). It appeared to be moving fine, but last night froze at 84%. This morning (8hrs later), it's still at 84%.
>
> I need to get that VM back up and running, but I don't know how... It seems to be stuck in limbo.
>
> The only thing I explicitly did last night as well that may have caused an issue is finally set up and activated georep to an offsite backup machine. That too seems to have gone a bit wonky. On the ovirt server side, it shows normal with all but data-hdd showing a last synced time of 3am (which matches my bandwidth graphs for the WAN connections involved). data-hdd (the new disk-backed storage with most of my data in it) shows not yet synced, but I'm also not currently seeing bandwidth usage anymore.
>
> I logged into the georep destination box, and found system load a bit high, a bunch of gluster and rsync processes running, and both data and data-hdd using MORE disk space than the original (data-hdd using 4x more disk space than is on the master node). Not sure what to do about this; I paused the replication from the cluster, but that doesn't seem to have had an effect on the georep destination.
>
> I promise I'll stop trying things until I get guidance from the list! Please do help; I need the VM HDD unstuck so I can start it.
>
> Thanks!
> --Jim
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

--
Regards,
Eyal Shenitzky
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eshenitz at redhat.com Tue Mar 20 06:55:38 2018
From: eshenitz at redhat.com (Eyal Shenitzky)
Date: Tue, 20 Mar 2018 08:55:38 +0200
Subject: [ovirt-users] Gluster: VM disk stuck in transfer; georep gone wonky
In-Reply-To:
References:
Message-ID:

Can you please check if you can detach the disk from the VM and attach it to the created VM?

On Tue, Mar 20, 2018 at 8:48 AM, Jim Kusznir wrote:

> Unfortunately, I came under heavy pressure to get this vm back up. So, I did more googling and attempted to recover myself. I've gotten closer, but still not quite.
>
> I found this post:
>
> http://lists.ovirt.org/pipermail/users/2015-November/035686.html
>
> Which gave me the unlock tool, which was successful in unlocking the disk. Unfortunately, it did not delete the task, nor did ovirt do so on its own after the disk was unlocked.
>
> So I found the taskcleaner.sh in the same directory and attempted to clean the task out... except it doesn't seem to see the task (none of the show tasks options seemed to work, or the delete all options). I did still have the task uuid from the gui, so I attempted to use that, but all I got back was a "t" on one line and a "0" on the next, so I have no idea what that was supposed to mean. In any case, the web UI still shows the task, still won't let me start the VM and appears convinced it's still copying. I've tried restarting the engine and vdsm on the SPM; neither has helped. I can't find any evidence of the task on the command line; only in the UI.
>
> I'd create a new VM if I could rescue the image, but I'm not sure I can manage to get this image accepted in another VM
>
> How do I recover now?
>
> --Jim
>
> On Mon, Mar 19, 2018 at 9:38 AM, Jim Kusznir wrote:
>
>> Hi all:
>>
>> Sorry for yet another semi-related message to the list. In my attempts to troubleshoot and verify some suspicions on the nature of the performance problems I posted under "Major Performance Issues with gluster", I attempted to move one of my problem VM's back to the original storage (SSD-backed). It appeared to be moving fine, but last night froze at 84%. This morning (8hrs later), it's still at 84%.
>>
>> I need to get that VM back up and running, but I don't know how... It seems to be stuck in limbo.
>>
>> The only thing I explicitly did last night as well that may have caused an issue is finally set up and activated georep to an offsite backup machine. That too seems to have gone a bit wonky. On the ovirt server side, it shows normal with all but data-hdd showing a last synced time of 3am (which matches my bandwidth graphs for the WAN connections involved). data-hdd (the new disk-backed storage with most of my data in it) shows not yet synced, but I'm also not currently seeing bandwidth usage anymore.
>>
>> I logged into the georep destination box, and found system load a bit high, a bunch of gluster and rsync processes running, and both data and data-hdd using MORE disk space than the original (data-hdd using 4x more disk space than is on the master node). Not sure what to do about this; I paused the replication from the cluster, but that doesn't seem to have had an effect on the georep destination.
>>
>> I promise I'll stop trying things until I get guidance from the list! Please do help; I need the VM HDD unstuck so I can start it.
>>
>> Thanks!
>> --Jim
>>

--
Regards,
Eyal Shenitzky
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tba at kb.dk Tue Mar 20 06:55:48 2018
From: tba at kb.dk (Tony Brian Albers)
Date: Tue, 20 Mar 2018 06:55:48 +0000
Subject: [ovirt-users] Gluster: VM disk stuck in transfer; georep gone wonky
In-Reply-To:
References:
Message-ID: <25f81b85-344c-3dd2-761a-a7c0e9b78a46@kb.dk>

I read somewhere about clearing out wrong stuff from the UI by manually editing the database, maybe you can try searching for something like that.

With regards to the VM, I'd probably just delete it, edit the DB and remove all sorts of references to it and then recover it from backup.

Is there nothing about all this in the ovirt logs on the engine and the host? It might point you in the right direction.
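Something like this on the engine DB might show what's stuck - going from memory of the 4.x schema, so verify before touching anything (status 2 should be LOCKED):

# su - postgres -c "psql engine"
engine=# SELECT image_guid, imagestatus FROM images WHERE imagestatus = 2;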
HTH

/tony

On 20/03/18 07:48, Jim Kusznir wrote:
> Unfortunately, I came under heavy pressure to get this vm back up. So, I did more googling and attempted to recover myself. I've gotten closer, but still not quite.
>
> I found this post:
>
> http://lists.ovirt.org/pipermail/users/2015-November/035686.html
>
> Which gave me the unlock tool, which was successful in unlocking the disk. Unfortunately, it did not delete the task, nor did ovirt do so on its own after the disk was unlocked.
>
> So I found the taskcleaner.sh in the same directory and attempted to clean the task out... except it doesn't seem to see the task (none of the show tasks options seemed to work, or the delete all options). I did still have the task uuid from the gui, so I attempted to use that, but all I got back was a "t" on one line and a "0" on the next, so I have no idea what that was supposed to mean. In any case, the web UI still shows the task, still won't let me start the VM and appears convinced it's still copying. I've tried restarting the engine and vdsm on the SPM; neither has helped. I can't find any evidence of the task on the command line; only in the UI.
>
> I'd create a new VM if I could rescue the image, but I'm not sure I can manage to get this image accepted in another VM
>
> How do I recover now?
>
> --Jim
>
> On Mon, Mar 19, 2018 at 9:38 AM, Jim Kusznir wrote:
>
> Hi all:
>
> Sorry for yet another semi-related message to the list. In my attempts to troubleshoot and verify some suspicions on the nature of the performance problems I posted under "Major Performance Issues with gluster", I attempted to move one of my problem VM's back to the original storage (SSD-backed). It appeared to be moving fine, but last night froze at 84%. This morning (8hrs later), it's still at 84%.
>
> I need to get that VM back up and running, but I don't know how... It seems to be stuck in limbo.
>
> The only thing I explicitly did last night as well that may have caused an issue is finally set up and activated georep to an offsite backup machine. That too seems to have gone a bit wonky. On the ovirt server side, it shows normal with all but data-hdd showing a last synced time of 3am (which matches my bandwidth graphs for the WAN connections involved). data-hdd (the new disk-backed storage with most of my data in it) shows not yet synced, but I'm also not currently seeing bandwidth usage anymore.
>
> I logged into the georep destination box, and found system load a bit high, a bunch of gluster and rsync processes running, and both data and data-hdd using MORE disk space than the original (data-hdd using 4x more disk space than is on the master node). Not sure what to do about this; I paused the replication from the cluster, but that doesn't seem to have had an effect on the georep destination.
>
> I promise I'll stop trying things until I get guidance from the list! Please do help; I need the VM HDD unstuck so I can start it.
>
> Thanks!
> --Jim
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

--
Tony Albers
Systems administrator, IT-development
Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark.
Tel: +45 2566 2383 / +45 8946 2316

From jim at palousetech.com Tue Mar 20 07:22:28 2018
From: jim at palousetech.com (Jim Kusznir)
Date: Tue, 20 Mar 2018 00:22:28 -0700
Subject: [ovirt-users] Gluster: VM disk stuck in transfer; georep gone wonky
In-Reply-To: <25f81b85-344c-3dd2-761a-a7c0e9b78a46@kb.dk>
References: <25f81b85-344c-3dd2-761a-a7c0e9b78a46@kb.dk>
Message-ID:

Thank you for the replies. While waiting, I found one more google response that said to run engine-setup. I did that, and it fixed the issue. The VM is now running again.

As to checking the logs, I'm not sure which ones to check... there are so many in so many different places.

I was not able to detach the disk, as "an operation is currently in process". No matter what I did to the disk, it was essentially still locked, even though it no longer said "locked" after I removed it with the unlock script.

So, it appears running engine-setup can really fix a bunch of stuff! An important tip to remember...

--Jim

On Mon, Mar 19, 2018 at 11:55 PM, Tony Brian Albers wrote:

> I read somewhere about clearing out wrong stuff from the UI by manually editing the database, maybe you can try searching for something like that.
>
> With regards to the VM, I'd probably just delete it, edit the DB and remove all sorts of references to it and then recover it from backup.
>
> Is there nothing about all this in the ovirt logs on the engine and the host? It might point you in the right direction.
>
> HTH
>
> /tony
>
> On 20/03/18 07:48, Jim Kusznir wrote:
> > Unfortunately, I came under heavy pressure to get this vm back up. So, I did more googling and attempted to recover myself. I've gotten closer, but still not quite.
> >
> > I found this post:
> >
> > http://lists.ovirt.org/pipermail/users/2015-November/035686.html
> >
> > Which gave me the unlock tool, which was successful in unlocking the disk. Unfortunately, it did not delete the task, nor did ovirt do so on its own after the disk was unlocked.
> >
> > So I found the taskcleaner.sh in the same directory and attempted to clean the task out... except it doesn't seem to see the task (none of the show tasks options seemed to work, or the delete all options). I did still have the task uuid from the gui, so I attempted to use that, but all I got back was a "t" on one line and a "0" on the next, so I have no idea what that was supposed to mean. In any case, the web UI still shows the task, still won't let me start the VM and appears convinced it's still copying. I've tried restarting the engine and vdsm on the SPM; neither has helped. I can't find any evidence of the task on the command line; only in the UI.
> >
> > I'd create a new VM if I could rescue the image, but I'm not sure I can manage to get this image accepted in another VM
> >
> > How do I recover now?
> >
> > --Jim
> >
> > On Mon, Mar 19, 2018 at 9:38 AM, Jim Kusznir wrote:
> >
> > Hi all:
> >
> > Sorry for yet another semi-related message to the list. In my attempts to troubleshoot and verify some suspicions on the nature of the performance problems I posted under "Major Performance Issues with gluster", I attempted to move one of my problem VM's back to the original storage (SSD-backed). It appeared to be moving fine, but last night froze at 84%. This morning (8hrs later), it's still at 84%.
> >
> > I need to get that VM back up and running, but I don't know how... It seems to be stuck in limbo.
> >
> > The only thing I explicitly did last night as well that may have caused an issue is finally set up and activated georep to an offsite backup machine. That too seems to have gone a bit wonky. On the ovirt server side, it shows normal with all but data-hdd showing a last synced time of 3am (which matches my bandwidth graphs for the WAN connections involved). data-hdd (the new disk-backed storage with most of my data in it) shows not yet synced, but I'm also not currently seeing bandwidth usage anymore.
> >
> > I logged into the georep destination box, and found system load a bit high, a bunch of gluster and rsync processes running, and both data and data-hdd using MORE disk space than the original (data-hdd using 4x more disk space than is on the master node). Not sure what to do about this; I paused the replication from the cluster, but that doesn't seem to have had an effect on the georep destination.
> >
> > I promise I'll stop trying things until I get guidance from the list! Please do help; I need the VM HDD unstuck so I can start it.
> >
> > Thanks!
> > --Jim
> >
> > _______________________________________________
> > Users mailing list
> > Users at ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> --
> Tony Albers
> Systems administrator, IT-development
> Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark.
> Tel: +45 2566 2383 / +45 8946 2316
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eraviv at redhat.com Tue Mar 20 08:41:38 2018
From: eraviv at redhat.com (Eitan Raviv)
Date: Tue, 20 Mar 2018 10:41:38 +0200
Subject: [ovirt-users] Fw: Network issues with oVirt 4.2 and cloud-init
In-Reply-To: <20180320092231.29eb7bbd@t460p>
References: <20180320092231.29eb7bbd@t460p>
Message-ID:

Hi Sandy,

Can you elaborate some more about the steps you have taken? Specifically, how\where do you apply the cloud-init-0.7.9-20 rpm? Can you make sure that rpm -q cloud-init after VM reboot still shows this one?

How do you apply the static IP settings that do persist to the VM - via oVirt web-admin\REST API\other?

When you restart the VM via the oVirt GUI - do you 'Run' it or 'Run Once'?

Thanks,
Eitan
oVirt networking team

On Tue, Mar 20, 2018 at 10:22 AM, Dominik Holler wrote:
>
> Begin forwarded message:
>
> Date: Mon, 19 Mar 2018 13:17:08 +0000
> From: "Berger, Sandy"
> To: "users at ovirt.org"
> Subject: [ovirt-users] Network issues with oVirt 4.2 and cloud-init
>
> We're using cloud-init to customize VMs built from a template. We're using static IPV4 settings so we're specifying an IP address, subnet mask, and gateway. There is apparently a bug in the current version of cloud-init shipping as part of CentOS 7.4 (https://bugzilla.redhat.com/show_bug.cgi?id=1492726) that fails to set the gateway properly. In the description of the bug, it says it is fixed in RHEL 7.5 but also says one can use https://people.redhat.com/rmccabe/cloud-init/cloud-init-0.7.9-20.el7.x86_64.rpm which is what we're doing.
>
> When the new VM first boots, the 3 IPv4 settings are all set correctly. Reboots of the VM maintain the settings properly.
> But, if the VM is shut down and started again via the oVirt GUI, all of the IPV4 settings on the eth0 virtual NIC are lost and the /etc/sysconfig/network-scripts/ifcfg-eth0 shows that the NIC is now set up for DHCP.
>
> Are we doing something incorrectly?
>
> Sandy Berger
> IT - Infrastructure Engineer II
>
> Quad/Graphics
> Performance through Innovation
>
> Sussex, Wisconsin
> 414.566.2123 phone
> 414.566.4010/2123 pager/PIN
>
> sandy.berger at qg.com
> www.QG.com
>
> Follow Us: Facebook | Twitter | LinkedIn | YouTube
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

--
Eitan Raviv
IRC: erav (#ovirt #vdsm #devel #rhev-dev)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From derez at redhat.com Tue Mar 20 09:04:40 2018
From: derez at redhat.com (Daniel Erez)
Date: Tue, 20 Mar 2018 09:04:40 +0000
Subject: [ovirt-users] Failing to upload qcow2 disk image
In-Reply-To: <42edbcfa-e202-5d7d-f30c-7e6c7a60fa0d@exzatechconsulting.com>
References: <42edbcfa-e202-5d7d-f30c-7e6c7a60fa0d@exzatechconsulting.com>
Message-ID:

Hi Anantha,

The issue seems similar to https://bugzilla.redhat.com/1554226

Please try to increase the value of the ImageTransferClientTicketValidityInSeconds configuration option. E.g.

# engine-config -s ImageTransferClientTicketValidityInSeconds=360000

As the value was exposed to engine-config only in version 4.2.2, either update the engine to the latest version or update it manually in the vdc_options table.
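(If you go the engine-config route, note that a change like this generally needs an ovirt-engine restart to take effect, and you can read the current value back to verify:

# engine-config -g ImageTransferClientTicketValidityInSeconds
# systemctl restart ovirt-engine
)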
=> {"ansible_facts": {"ovirt_hosts": [{"address": "pfm-srv-virt-1.pfm-ad.pfm.loc", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "pfm.loc", "subject": "O=pfm.loc,CN=pfm-srv-virt-1.pfm-ad.pfm.loc"}, "cluster": {"href": "/ovirt-engine/api/clusters/d6c9358e-2b8b-11e8-bc86-00163e152701", "id": "d6c9358e-2b8b-11e8-bc86-00163e152701"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/542566c4-fc85-4398-9402-10c8adaa9554", "id": "542566c4-fc85-4398-9402-10c8adaa9554", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, "name": "pfm-srv-virt-1.pfm-ad.pfm.loc", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:J75BVLFnmGBGFosXzaxCRnuIYcOc75HUBQZ4pOKpDg8", "port": 22}, "statistics": [], "status": "non_responsive", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "rhel", "unmanaged_networks": [], "update_available": false}]}, "attempts": 120, "changed": false} [ INFO ] TASK [Remove local vm dir] [ INFO ] TASK [Notify the user about a failure] [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"} I made another try with Cockpit, it is the same. Am I doing something wrong or is there a bug ? Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From mburman at redhat.com Tue Mar 20 09:15:25 2018 From: mburman at redhat.com (Michael Burman) Date: Tue, 20 Mar 2018 11:15:25 +0200 Subject: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a In-Reply-To: References: Message-ID: Indeed very odd, this shouldn't behave this way, just tested it my self and it is working as expected. Unless i miss understood you here, do you use a different IP address when switching to static or the same IP that you got from dhcp? if yes, then this is another flow.. Can you please share the vdsm version and vdsm log with us? Edy, any idea what can cause this? On Tue, Mar 20, 2018 at 11:10 AM, zois roupas wrote: > Hi Michael and thanks a lot for the time > > > Great step by step instructions but something strange is happening while > trying to change to static ip. I tried to do the change while the host > was in maintenance mode and in activate mode but again after some minutes > the system reverts to the ip that dhcp is serving! > > What am i missing here? Do you have any ideas? 
>
> Best Regards
> Zois
> ------------------------------
> *From:* Michael Burman
> *Sent:* Tuesday, March 20, 2018 8:46 AM
> *To:* zois roupas
> *Cc:* users at ovirt.org
> *Subject:* Re: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a
>
> Hello Zois,
>
> It's pretty easy to do via the webadmin UI: go to the Hosts main tab > choose the host > go to the 'Network Interfaces' sub tab > press the 'Setup Host Networks' button > press the pencil icon on your management network > choose Static IP > press OK and OK to approve the operation.
>
> - Note that in some cases, especially if this is an SPM host, you will lose connectivity to the host for a few seconds and the host may go to a non-responsive state; on a non-SPM host this usually works without any specific issues.
>
> - If the host in question is an SPM host, I recommend setting it to maintenance mode first, doing the switch and then activating. For a non-SPM host this will work fine as well while the host is UP.
>
> Cheers)
>
> On Mon, Mar 19, 2018 at 12:15 PM, zois roupas wrote:
>
> Hello everyone
>
> I've made a rookie mistake by installing ovirt 4.2 in centos 7 with dhcp instead of a static ip configuration. Both engine and host are in the same machine because of limited resources, and I was so happy that everything worked so well that I kept configuring and installing vm's, adding local and nfs storage and setting up the backup!
>
> As you understand I must change the configuration to static ip and I can't find any guide describing the correct procedure. Is there an official guide to change the configuration without causing any trouble?
>
> I've found this thread http://lists.ovirt.org/pipermail/users/2014-May/024432.html but this is for a hosted engine and doesn't help when both engine and host are in the same machine
>
> Thanx in advance
> Best Regards
> Zois
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
> --
> Michael Burman
> Senior Quality engineer - rhv network - redhat israel
> Red Hat
> mburman at redhat.com    M: 0545355725    IM: mburman

--
Michael Burman
Senior Quality engineer - rhv network - redhat israel
Red Hat
mburman at redhat.com    M: 0545355725    IM: mburman
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From amusil at redhat.com Tue Mar 20 09:28:51 2018
From: amusil at redhat.com (Ales Musil)
Date: Tue, 20 Mar 2018 10:28:51 +0100
Subject: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a
In-Reply-To:
References:
Message-ID:

One thing to note: if you are changing the IP to a different one than was assigned by DHCP, you should uncheck "Verify connectivity between Host and Engine". This makes sure that the engine won't lose connectivity, which otherwise happens when switching the IP.

On Tue, Mar 20, 2018 at 10:15 AM, Michael Burman wrote:

> Indeed very odd, this shouldn't behave this way; I just tested it myself and it is working as expected. Unless I misunderstood you here: do you use a different IP address when switching to static, or the same IP that you got from dhcp? If yes, then this is another flow..
>
> Can you please share the vdsm version and vdsm log with us?
>
> Edy, any idea what can cause this?
>
> On Tue, Mar 20, 2018 at 11:10 AM, zois roupas wrote:
>
>> Hi Michael and thanks a lot for the time
>>
>> Great step by step instructions but something strange is happening while trying to change to static ip.
>> I tried to do the change while the host was in maintenance mode and in activate mode but again after some minutes the system reverts to the ip that dhcp is serving!
>>
>> What am I missing here? Do you have any ideas?
>>
>> Best Regards
>> Zois
>> ------------------------------
>> *From:* Michael Burman
>> *Sent:* Tuesday, March 20, 2018 8:46 AM
>> *To:* zois roupas
>> *Cc:* users at ovirt.org
>> *Subject:* Re: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a
>>
>> Hello Zois,
>>
>> It's pretty easy to do via the webadmin UI: go to the Hosts main tab > choose the host > go to the 'Network Interfaces' sub tab > press the 'Setup Host Networks' button > press the pencil icon on your management network > choose Static IP > press OK and OK to approve the operation.
>>
>> - Note that in some cases, especially if this is an SPM host, you will lose connectivity to the host for a few seconds and the host may go to a non-responsive state; on a non-SPM host this usually works without any specific issues.
>>
>> - If the host in question is an SPM host, I recommend setting it to maintenance mode first, doing the switch and then activating. For a non-SPM host this will work fine as well while the host is UP.
>>
>> Cheers)
>>
>> On Mon, Mar 19, 2018 at 12:15 PM, zois roupas wrote:
>>
>> Hello everyone
>>
>> I've made a rookie mistake by installing ovirt 4.2 in centos 7 with dhcp instead of a static ip configuration. Both engine and host are in the same machine because of limited resources, and I was so happy that everything worked so well that I kept configuring and installing vm's, adding local and nfs storage and setting up the backup!
>>
>> As you understand I must change the configuration to static ip and I can't find any guide describing the correct procedure. Is there an official guide to change the configuration without causing any trouble?
>>
>> I've found this thread http://lists.ovirt.org/pipermail/users/2014-May/024432.html but this is for a hosted engine and doesn't help when both engine and host are in the same machine
>>
>> Thanx in advance
>> Best Regards
>> Zois
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>> --
>> Michael Burman
>> Senior Quality engineer - rhv network - redhat israel
>> Red Hat
>> mburman at redhat.com    M: 0545355725    IM: mburman
>
> --
> Michael Burman
> Senior Quality engineer - rhv network - redhat israel
> Red Hat
> mburman at redhat.com    M: 0545355725    IM: mburman

--
ALES MUSIL
INTERN - rhv network
Red Hat EMEA
amusil at redhat.com    IM: amusil
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rightkicktech at gmail.com Tue Mar 20 09:44:19 2018
From: rightkicktech at gmail.com (Alex K)
Date: Tue, 20 Mar 2018 11:44:19 +0200
Subject: [ovirt-users] Disk upload cancel/remove
Message-ID:

Hi All,

I was trying to upload a VM disk at a data storage domain using a python script.

I did cancel the upload twice, and at the third time the upload was successful, but I see two disks from the previous attempts with status "transferring via API" (see attached). They have stayed in this status for more than 8 hours and I cannot remove them.

Is there any way to clean them from the disks inventory?
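For context, the script drives the image-transfer API roughly like the upload_disk.py example from the SDK. A trimmed sketch - the URL, credentials and disk id are placeholders, and the actual byte upload to the proxy is omitted:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='***',
    insecure=True,
)
# Start a transfer for an existing disk, PUT the qcow2 bytes to
# transfer.proxy_url, then finalize (otherwise the disk stays locked).
transfers_service = connection.system_service().image_transfers_service()
transfer = transfers_service.add(
    types.ImageTransfer(image=types.Image(id='<disk-id>'))
)
transfer_service = transfers_service.image_transfer_service(transfer.id)
# ... upload the data to transfer.proxy_url here ...
transfer_service.finalize()
connection.close()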
I am using ovirt 4.1.9.1-1.el7.centos with self hosted engine on 3 nodes.

Thanx,
Alex

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ovirt-disk-upload.png
Type: image/png
Size: 34142 bytes
Desc: not available
URL:

From eshenitz at redhat.com Tue Mar 20 09:49:26 2018
From: eshenitz at redhat.com (Eyal Shenitzky)
Date: Tue, 20 Mar 2018 11:49:26 +0200
Subject: [ovirt-users] Disk upload cancel/remove
In-Reply-To:
References:
Message-ID:

Idan/Daniel,

Can you please take a look?

Thanks,

On Tue, Mar 20, 2018 at 11:44 AM, Alex K wrote:

> Hi All,
>
> I was trying to upload a VM disk at a data storage domain using a python script.
>
> I did cancel the upload twice, and at the third time the upload was successful, but I see two disks from the previous attempts with status "transferring via API" (see attached). They have stayed in this status for more than 8 hours and I cannot remove them.
>
> Is there any way to clean them from the disks inventory?
>
> I am using ovirt 4.1.9.1-1.el7.centos with self hosted engine on 3 nodes.
>
> Thanx,
> Alex
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

--
Regards,
Eyal Shenitzky
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ovirt-disk-upload.png
Type: image/png
Size: 34142 bytes
Desc: not available
URL:

From spfma.tech at e.mail.fr Tue Mar 20 10:44:05 2018
From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr)
Date: Tue, 20 Mar 2018 11:44:05 +0100
Subject: [ovirt-users] Hosted engine deployment error
Message-ID: <20180320104405.A347DE4473@smtp01.mail.de>

Hi,

In fact it is a workaround coming from you I found in the bugtrack that helped me:

chmod 644 /var/cache/vdsm/schema/*

As the only thing looking like a weird error I have found was:

ERROR Exception raised#012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/vdsm/vdsmd.py", line 156, in run#012 serve_clients(log)#012 File "/usr/lib/python2.7/site-packages/vdsm/vdsmd.py", line 103, in serve_clients#012 cif = clientIF.getInstance(irs, log, scheduler)#012 File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 250, in getInstance#012 cls._instance = clientIF(irs, log, scheduler)#012 File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 144, in __init__#012 self._prepareJSONRPCServer()#012 File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 307, in _prepareJSONRPCServer#012 bridge = Bridge.DynamicBridge()#012 File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 67, in __init__#012 self._schema = vdsmapi.Schema(paths, api_strict_mode)#012 File "/usr/lib/python2.7/site-packages/vdsm/api/vdsmapi.py", line 217, in __init__#012 raise SchemaNotFound("Unable to find API schema file")#012SchemaNotFound: Unable to find API schema file

So I can go one step further, but the installation still fails in the end, with file permission problems in datastore files (I chose NFS 4.1). I can't even touch files or get information on them when logged in as root. But I can create and delete files in the same directory.

Is there a workaround for this too?
Regards

On 19-Mar-2018 17:48:41 +0100, stirabos at redhat.com wrote:

On Mon, Mar 19, 2018 at 4:56 PM, wrote:

Hi,

I wanted to rebuild a new hosted engine setup, as the old one was corrupted (too many violent poweroffs!)

So the server was not reinstalled, I just ran "ovirt-hosted-engine-cleanup". The network setup generated by vdsm seems to be still in place, so I haven't changed anything there.

Then I decided to update the packages to the latest versions available, rebooted the server and ran "ovirt-hosted-engine-setup".

But the process never succeeds, as I get an error after a long time spent in "[ INFO ] TASK [Wait for the host to be up]"

[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": [{"address": "pfm-srv-virt-1.pfm-ad.pfm.loc", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "pfm.loc", "subject": "O=pfm.loc,CN=pfm-srv-virt-1.pfm-ad.pfm.loc"}, "cluster": {"href": "/ovirt-engine/api/clusters/d6c9358e-2b8b-11e8-bc86-00163e152701", "id": "d6c9358e-2b8b-11e8-bc86-00163e152701"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/542566c4-fc85-4398-9402-10c8adaa9554", "id": "542566c4-fc85-4398-9402-10c8adaa9554", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, "name": "pfm-srv-virt-1.pfm-ad.pfm.loc", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:J75BVLFnmGBGFosXzaxCRnuIYcOc75HUBQZ4pOKpDg8", "port": 22}, "statistics": [], "status": "non_responsive", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "rhel", "unmanaged_networks": [], "update_available": false}]}, "attempts": 120, "changed": false}
[ INFO ] TASK [Remove local vm dir]
[ INFO ] TASK [Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}

I made another try with Cockpit, it is the same.

Am I doing something wrong or is there a bug?

I suppose that your host was configured with DHCP; if so, it's this one: https://bugzilla.redhat.com/1549642

The fix will come with 4.2.2.

Regards

-------------------------------------------------------------------------------------------------
FreeMail powered by mail.fr
_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

-------------------------------------------------------------------------------------------------
FreeMail powered by mail.fr
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From andreil1 at starlett.lv Tue Mar 20 10:54:16 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Tue, 20 Mar 2018 12:54:16 +0200
Subject: [ovirt-users] Q: Removing stuff from EXPORT domain
Message-ID: <8E40EB8E-3AD9-4A56-A7E9-E642DF60BA94@starlett.lv>

Hi !

What is the proper oVirt way to remove unused stuff from an EXPORT domain?
Simply "rm -Rv xxx" and "rm -Rv xxx.meta"
via SSH inside export -> 1d7208ce-d3a1-4406-9638-fe7051562994 -> images ?

Thanks
Andrei

From eshenitz at redhat.com Tue Mar 20 10:59:12 2018
From: eshenitz at redhat.com (Eyal Shenitzky)
Date: Tue, 20 Mar 2018 12:59:12 +0200
Subject: [ovirt-users] Q: Removing stuff from EXPORT domain
In-Reply-To: <8E40EB8E-3AD9-4A56-A7E9-E642DF60BA94@starlett.lv>
References: <8E40EB8E-3AD9-4A56-A7E9-E642DF60BA94@starlett.lv>
Message-ID:

Hi Andrei,

You can remove entities from the export domain via the UI.

On Tue, Mar 20, 2018 at 12:54 PM, Andrei Verovski wrote:

> Hi !
>
> What is the proper oVirt way to remove unused stuff from an EXPORT domain?
> Simply "rm -Rv xxx" and "rm -Rv xxx.meta"
> via SSH inside export -> 1d7208ce-d3a1-4406-9638-fe7051562994 -> images ?
>
> Thanks
> Andrei
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

--
Regards,
Eyal Shenitzky
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From andreil1 at starlett.lv Tue Mar 20 11:17:50 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Tue, 20 Mar 2018 13:17:50 +0200
Subject: [ovirt-users] Q: Removing stuff from EXPORT domain
In-Reply-To:
References: <8E40EB8E-3AD9-4A56-A7E9-E642DF60BA94@starlett.lv>
Message-ID:

> On 20 Mar 2018, at 12:59, Eyal Shenitzky wrote:
>
> Hi Andrei,
>
> You can remove entities from the export domain via the UI.

in 4.2

Storage -> Disks - the exported image is not visible here
Storage -> Storage Domains -> Manage Domains - nothing that allows to see the content of the domain

I can see the content of the Export domain only in the 'Import Virtual Machine(s)' dialog, but can't alter it in any way.

What am I missing here?

> On Tue, Mar 20, 2018 at 12:54 PM, Andrei Verovski wrote:
>
> Hi !
>
> What is the proper oVirt way to remove unused stuff from an EXPORT domain?
> Simply "rm -Rv xxx" and "rm -Rv xxx.meta"
> via SSH inside export -> 1d7208ce-d3a1-4406-9638-fe7051562994 -> images ?
>
> Thanks
> Andrei
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
> --
> Regards,
> Eyal Shenitzky
-------------- next part --------------
An HTML attachment was scrubbed...
=> {"ansible_facts": >> {"ovirt_hosts": [{"address": "pfm-srv-virt-1.pfm-ad.pfm.loc", >> "affinity_labels": [], "auto_numa_status": "unknown", "certificate": >> {"organization": "pfm.loc", "subject": "O=pfm.loc,CN=pfm-srv-virt-1.pfm-ad.pfm.loc"}, >> "cluster": {"href": "/ovirt-engine/api/clusters/d6c9358e-2b8b-11e8-bc86-00163e152701", >> "id": "d6c9358e-2b8b-11e8-bc86-00163e152701"}, "comment": "", "cpu": >> {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, >> "devices": [], "external_network_provider_configurations": [], >> "external_status": "ok", "hardware_information": {"supported_rng_sources": >> []}, "hooks": [], "href": "/ovirt-engine/api/hosts/ >> 542566c4-fc85-4398-9402-10c8adaa9554", "id": "542566c4-fc85-4398-9402-10c8adaa9554", >> "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, >> "max_scheduling_memory": 0, "memory": 0, "name": "pfm-srv-virt-1.pfm-ad.pfm.loc", >> "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": >> false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": >> 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, >> "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", >> "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": >> {"fingerprint": "SHA256:J75BVLFnmGBGFosXzaxCRnuIYcOc75HUBQZ4pOKpDg8", >> "port": 22}, "statistics": [], "status": "non_responsive", >> "storage_connection_extensions": [], "summary": {"total": 0}, "tags": >> [], "transparent_huge_pages": {"enabled": false}, "type": "rhel", >> "unmanaged_networks": [], "update_available": false}]}, "attempts": 120, >> "changed": false} >> [ INFO ] TASK [Remove local vm dir] >> [ INFO ] TASK [Notify the user about a failure] >> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The >> system may not be provisioned according to the playbook results: please >> check the logs for the issue, fix accordingly or re-deploy from scratch.n"} >> >> >> I made another try with Cockpit, it is the same. >> >> Am I doing something wrong or is there a bug ? >> > > I suppose that your host was condifured with DHCP, if so it's this one: > https://bugzilla.redhat.com/1549642 > > The fix will come with 4.2.2. > > >> >> Regards >> >> >> >> ------------------------------ >> FreeMail powered by mail.fr >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > ------------------------------ > FreeMail powered by mail.fr > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eshenitz at redhat.com Tue Mar 20 11:30:40 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Tue, 20 Mar 2018 13:30:40 +0200 Subject: [ovirt-users] Q: Removing stuff from EXPORT domain In-Reply-To: References: <8E40EB8E-3AD9-4A56-A7E9-E642DF60BA94@starlett.lv> Message-ID: The export domain should contain entities for export: VMs, Template etc.. You can remove them from the domain via Storage -> Export Domain -> Import Virtual Machine(s) / Templates -> select the unsed entetiy -> remove (upper right side of the window). On Tue, Mar 20, 2018 at 1:17 PM, Andrei Verovski wrote: > > > On 20 Mar 2018, at 12:59, Eyal Shenitzky wrote: > > Hi Andrei, > > You can remove entities from export domain via the UI. 
> > > in 4.2 > > Storage -> Disks - exported image is not visible here > Storage -> Storage Domains -> Manage Domains - nothing that allows to see > content of domain > > I can see content of Export domain only in ?Import Virtual Machine(s)? > dialog, but can?t alter it in any way. > > What I?m missing here ? > > > > > On Tue, Mar 20, 2018 at 12:54 PM, Andrei Verovski > wrote: > >> Hi ! >> >> >> What is the proper oVirt way to remove unused stuff from EXPORT domain ? >> Simply "rm -Rv xxx" and "rm -Rv xxx.meta" >> via SSH inside export -> 1d7208ce-d3a1-4406-9638-fe7051562994 -> images ? >> >> Thanks >> Andrei >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > > > > -- > Regards, > Eyal Shenitzky > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From awels at redhat.com Tue Mar 20 12:00:27 2018 From: awels at redhat.com (Alexander Wels) Date: Tue, 20 Mar 2018 08:00:27 -0400 Subject: [ovirt-users] Iso upload success, no GUI popup option In-Reply-To: <6CF44967-E2D3-4346-8BFE-8A6A9116A8E8@squaretrade.com> References: <6CF44967-E2D3-4346-8BFE-8A6A9116A8E8@squaretrade.com> Message-ID: <1683696.bEs5auB0my@awels> On Monday, March 19, 2018 7:47:00 PM EDT Jamie Lawrence wrote: > Hello, > > I'm trying to iron out the last few oddities of this setup, and one of them > is the iso upload process. This worked in the last rebuild, but... well. > > So, uploading from one of the hosts to an ISO domain claims success, and > manually checking shows the ISO uploaded just fine, perms set correctly to > 36:36. But it doesn't appear in the GUI popup when creating a new VM. > You probably need to refresh the ISO list, assuming 4.2 go to Storage -> Storage Domains -> , click on the name, and go to the images detail tab. This should refresh the list of ISOs in the list and the ISO should be listed, once that is done, it should show up in the drop down when you change the CD. > Verified that the VDSM user can fully traverse the directory path - > presumably that was tested by uploading it in the first place, but I > double-checked. Looked in various logs, but didn't see any action in > ovirt-imageio-daemon or -proxy. Didn't see anything in engine.log that > looked relevant. > > What is the troubleshooting method for this? Googling, it seemed most folks' > problems were related to permissions. I scanned DB table names for > something that seemed like it might have ISO-related info in it, but > couldn't find anything, and am not sure what else to check. > > Thanks, > > -j > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From sbonazzo at redhat.com Tue Mar 20 12:05:14 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Tue, 20 Mar 2018 12:05:14 +0000 Subject: [ovirt-users] Q: Removing stuff from EXPORT domain In-Reply-To: References: <8E40EB8E-3AD9-4A56-A7E9-E642DF60BA94@starlett.lv> Message-ID: Il mar 20 mar 2018, 12:32 Eyal Shenitzky ha scritto: > The export domain should contain entities for export: VMs, Template etc.. > > You can remove them from the domain via Storage -> Export Domain -> Import > Virtual Machine(s) / Templates -> select the unsed entetiy -> remove (upper > right side of the window). 
From sbonazzo at redhat.com  Tue Mar 20 12:05:14 2018
From: sbonazzo at redhat.com (Sandro Bonazzola)
Date: Tue, 20 Mar 2018 12:05:14 +0000
Subject: [ovirt-users] Q: Removing stuff from EXPORT domain
In-Reply-To: 
References: <8E40EB8E-3AD9-4A56-A7E9-E642DF60BA94@starlett.lv>
Message-ID: 

On Tue, Mar 20, 2018 at 12:32, Eyal Shenitzky wrote:

> The export domain should contain only entities meant for export: VMs,
> Templates, etc.
>
> You can remove them from the domain via Storage -> Export Domain -> Import
> Virtual Machine(s) / Templates -> select the unused entity -> remove
> (upper right side of the window).

Isn't it a bit unintuitive that for removing something you need to open the
import command?

> [...]

From eshenitz at redhat.com  Tue Mar 20 13:14:19 2018
From: eshenitz at redhat.com (Eyal Shenitzky)
Date: Tue, 20 Mar 2018 15:14:19 +0200
Subject: [ovirt-users] Q: Removing stuff from EXPORT domain
In-Reply-To: 
References: <8E40EB8E-3AD9-4A56-A7E9-E642DF60BA94@starlett.lv>
Message-ID: 

It doesn't behave differently than removing a disk via Storage domains ->
Storage domain -> disks. The entities do not belong to any data center, so
the removal must be done from inside the storage domain view.

On Tue, Mar 20, 2018 at 2:05 PM, Sandro Bonazzola wrote:

> Isn't it a bit unintuitive that for removing something you need to open
> the import command?
>
> [...]

-- 
Regards,
Eyal Shenitzky

From mmartinezp at uci.cu  Tue Mar 20 13:21:30 2018
From: mmartinezp at uci.cu (Marcos Michel Martinez Perez)
Date: Tue, 20 Mar 2018 09:21:30 -0400
Subject: [ovirt-users] help, ovirt 4.2
Message-ID: <5e4dc257-9c2b-6a98-0696-c005c97479f8@uci.cu>

I recently installed oVirt 4.2 on CentOS 7 and it turns out that it gave me
the following error when trying to execute the command engine-setup:

[root at localhost ~]# engine-setup
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
          Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
          Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20180320051815-eon9vg.log
          Version: otopi-1.7.7 (otopi-1.7.7-1.el7)
[ ERROR ] "before" parameter of method otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn.Plugin._misc_configure_ovn_pki is a string, should probably be a tuple. Perhaps a missing comma?
          methodinfo: {'priority': 5000, 'name': None, 'before': 'osetup.ovn.provider.service.restart', 'after': ('osetup.pki.ca.available', 'osetup.ovn.services.restart'), 'method': <bound method Plugin._misc_configure_ovn_pki of <otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn.Plugin object at 0x1592790>>, 'condition': <bound method ... of <otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn.Plugin object at 0x1592790>>, 'stage': 11}
[ ERROR ] "before" parameter of method otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn.Plugin._misc_configure_provider is a string, should probably be a tuple. Perhaps a missing comma?
          methodinfo: {'priority': 5000, 'name': None, 'before': 'osetup.ovn.provider.service.restart', 'after': ('osetup.pki.ca.available', 'osetup.ovn.services.restart'), 'method': <bound method Plugin._misc_configure_provider of <otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn.Plugin object at 0x1592790>>, 'condition': <bound method ... of <otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn.Plugin object at 0x1592790>>, 'stage': 11}
[ ERROR ] Failed to execute stage 'Environment setup': Found bad "before" or "after" parameters
[ INFO  ] Stage: Clean up
          Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20180320051815-eon9vg.log
[ ERROR ] Failed to execute stage 'Clean up': 'NoneType' object has no attribute 'cleanup'
[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20180320051817-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Execution of setup failed
From andreil1 at starlett.lv  Tue Mar 20 13:45:06 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Tue, 20 Mar 2018 15:45:06 +0200
Subject: [ovirt-users] Q: Removing stuff from EXPORT domain
In-Reply-To: 
References: <8E40EB8E-3AD9-4A56-A7E9-E642DF60BA94@starlett.lv>
Message-ID: 

> On 20 Mar 2018, at 13:30, Eyal Shenitzky wrote:
>
> The export domain should contain only entities meant for export: VMs,
> Templates, etc.
>
> You can remove them from the domain via Storage -> Export Domain -> Import
> Virtual Machine(s) / Templates -> select the unused entity -> remove
> (upper right side of the window).

in oVirt 4.2.1:

Storage -> Storage Domains
I see a list of domains, but there is NO "Import Virtual Machine(s)" option,
see screenshot 1.
"Import VM" is inside Compute -> Virtual Machines, yet I don't see the
option you mention - screenshot 2.

Is it possible you have a more recent GIT version ?

> [...]

From didi at redhat.com  Tue Mar 20 13:48:37 2018
From: didi at redhat.com (Yedidyah Bar David)
Date: Tue, 20 Mar 2018 15:48:37 +0200
Subject: [ovirt-users] help, ovirt 4.2
In-Reply-To: <5e4dc257-9c2b-6a98-0696-c005c97479f8@uci.cu>
References: <5e4dc257-9c2b-6a98-0696-c005c97479f8@uci.cu>
Message-ID: 

On Tue, Mar 20, 2018 at 3:21 PM, Marcos Michel Martinez Perez <
mmartinezp at uci.cu> wrote:

> I recently installed oVirt 4.2 on CentOS 7 and it turns out that it gave
> me the following error when trying to execute the command engine-setup
>

Which version? What's the output of:

rpm -qi ovirt-engine-setup-plugin-ovirt-engine

> [root at localhost ~]# engine-setup
> [...]
> [ ERROR ] "before" parameter of method otopi.plugins.ovirt_engine_setup.ovirt_engine.network.ovirtproviderovn.Plugin._misc_configure_ovn_pki
> is a string, should probably be a tuple. Perhaps a missing comma?
>

This error was fixed in 4.2.1, by this patch:

https://gerrit.ovirt.org/#/q/Id4e0e621624efff0ff0e8a9fd6f23f53f2aa54d2,n,z

The bug existed for some time, but was only exposed by this patch:

https://gerrit.ovirt.org/#/c/86679/

which was included in otopi-1.7.7, which was released as part of 4.2.1.

So it seems like you somehow have 4.2 repos, updated otopi, but didn't
update the engine setup packages.

> [...]

-- 
Didi
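A quick way to confirm that kind of split (newer otopi alongside older setup
packages) is to compare the installed versions directly. A minimal sketch -
the output lines are only an illustration of what a mismatched system might
report, and the update command is the one given later in this thread:

  # rpm -q otopi ovirt-engine-setup ovirt-engine-setup-plugin-ovirt-engine
  otopi-1.7.7-1.el7.noarch
  ovirt-engine-setup-4.2.0.2-1.el7.centos.noarch
  ovirt-engine-setup-plugin-ovirt-engine-4.2.0.2-1.el7.centos.noarch

  # bring the setup packages up to the same release, then re-run setup
  yum update ovirt\*setup\*
  engine-setup

Any ovirt-engine-setup package older than 4.2.1 combined with otopi-1.7.7
reproduces the "before"/"after" parameter error shown above.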
From eshenitz at redhat.com  Tue Mar 20 14:01:41 2018
From: eshenitz at redhat.com (Eyal Shenitzky)
Date: Tue, 20 Mar 2018 16:01:41 +0200
Subject: [ovirt-users] Q: Removing stuff from EXPORT domain
In-Reply-To: 
References: <8E40EB8E-3AD9-4A56-A7E9-E642DF60BA94@starlett.lv>
Message-ID: 

Attached screen-shot for deleting a VM from an export domain:

1) go to the storage domains list
2) select the export domain
3) in the export domain - navigate to the 'VM Import' tab
4) select a VM and press the 'remove' button

On Tue, Mar 20, 2018 at 3:45 PM, Andrei Verovski wrote:

> [...]
>
> Is it possible you have a more recent GIT version ?

-- 
Regards,
Eyal Shenitzky

From andreil1 at starlett.lv  Tue Mar 20 14:04:31 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Tue, 20 Mar 2018 16:04:31 +0200
Subject: [ovirt-users] Q: Copying VMs between Export domains of different data centres
Message-ID: <8B667B41-D6A7-4BA1-A9EF-49A6EEB13A8F@starlett.lv>

Hi,

I have 2 data centers (with 1 node each, because one has a local data
domain).

I copied an exported VM from DC #1, exports ->
1d7208ce-d3a1-4406-9638-fe7051562994 -> images ->
12f48f07-7e93-4c66-b0e9-00efc1fec418, with 2 files inside:

fc469474-94fd-416b-b921-58604f46411c - 171 GB (seems like the disk image)
fc469474-94fd-416b-b921-58604f46411c.meta

to DC #2, export -> 36bc8d5d-30e9-4df5-94cd-c837483c5e41 -> images ->
12f48f07-7e93-4c66-b0e9-00efc1fec418, with these above listed files inside
(screenshot attached).

However, in the "Import Virtual Machine(s)" dialog this VM is not visible,
even after running the "Load" command inside the import dialog.
Looks like for whatever reason oVirt doesn't refresh the content of this
directory.

How to instruct oVirt to refresh and index these files?

Or won't this method work at all, so one has to import/export OVA images,
or use the lengthy procedure described by Fred Roland here?

http://lists.ovirt.org/pipermail/users/2018-February/087304.html

Thanks.
Andrei
From sabose at redhat.com  Tue Mar 20 14:08:36 2018
From: sabose at redhat.com (Sahina Bose)
Date: Tue, 20 Mar 2018 19:38:36 +0530
Subject: [ovirt-users] Gluster: VM disk stuck in transfer; georep gone wonky
In-Reply-To: 
References: 
Message-ID: 

On Mon, Mar 19, 2018 at 10:08 PM, Jim Kusznir wrote:

> Hi all:
>
> Sorry for yet another semi-related message to the list. In my attempts to
> troubleshoot and verify some suspicions on the nature of the performance
> problems I posted under "Major Performance Issues with gluster", I
> attempted to move one of my problem VM's back to the original storage
> (SSD-backed). It appeared to be moving fine, but last night froze at 84%.
> This morning (8hrs later), it's still at 84%.
>
> I need to get that VM back up and running, but I don't know how... It
> seems to be stuck in limbo.
>
> The only thing I explicitly did last night as well that may have caused
> an issue is finally set up and activated georep to an offsite backup
> machine. That too seems to have gone a bit wonky. On the oVirt server
> side, it shows normal with all but data-hdd showing a last synced time of
> 3am (which matches my bandwidth graphs for the WAN connections involved).
> data-hdd (the new disk-backed storage with most of my data in it) shows
> not yet synced, but I'm also not currently seeing bandwidth usage anymore.
>
> I logged into the georep destination box, and found system load a bit
> high, a bunch of gluster and rsync processes running, and both data and
> data-hdd using MORE disk space than the original (data-hdd using 4x more
> disk space than is on the master node). Not sure what to do about this; I
> paused the replication from the cluster, but that hasn't seemed to have
> had an effect on the georep destination.
>

For the geo-rep gone wonky - can you provide some more information to
debug this? The logs are at /var/log/glusterfs/geo-replication. Please
provide the logs from the master and the slave.

> I promise I'll stop trying things until I get guidance from the list!
> Please do help; I need the VM HDD unstuck so I can start it.
>
> Thanks!
> --Jim
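When collecting that information, the session state as gluster itself sees
it is also useful. A minimal sketch - the volume name "data-hdd" is taken
from the message above, while the slave host and volume names are
placeholders for the actual geo-replication target:

  # on a master node: session health, per-brick status and last-synced times
  gluster volume geo-replication data-hdd backuphost::data-hdd status detail

  # pause/resume the session explicitly, rather than from the engine UI
  gluster volume geo-replication data-hdd backuphost::data-hdd pause
  gluster volume geo-replication data-hdd backuphost::data-hdd resume

A session stuck in "Faulty" here usually pairs with a traceback in the
/var/log/glusterfs/geo-replication logs mentioned above.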
From mburman at redhat.com  Tue Mar 20 14:10:59 2018
From: mburman at redhat.com (Michael Burman)
Date: Tue, 20 Mar 2018 16:10:59 +0200
Subject: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a
In-Reply-To: 
References: 
Message-ID: 

Then you need to remove the host from the engine, change the IP manually on
the host via the ifcfg file, restart the network service, and install the
host again via the new IP address.

On Tue, Mar 20, 2018 at 2:50 PM, zois roupas wrote:

> Hi again all,
>
> "Unless i miss understood you here, do you use a different IP address
> when switching to static or the same IP that you got from dhcp? if yes,
> then this is another flow.."
>
> To answer your question Michael, i'm trying to configure a different ip
> outside of my dhcp pool. The dhcp ip is 10.0.0.245 from the range
> 10.0.0.245-10.0.0.250 and i want to configure the ip 10.0.0.9 as the
> host's ip.
>
> "One thing to note if you are changing the IP to a different one than was
> assigned by DHCP you should uncheck "Verify connectivity between Host and
> Engine""
>
> Ales, i also tried to follow your advice and uncheck the "Verify
> connectivity between Host and Engine" as proposed. Again the same
> results, it keeps reverting to the previous dhcp ip.
>
> I will extract the vdsm log and i'll get back to you, in the meanwhile
> this is the error that i see after the assignment of the static ip in the
> log:
>
> 2018-03-20 14:16:57,576+0200 ERROR (monitor/38f4464) [storage.Monitor]
> Error checking domain 38f4464b-74b9-4468-891b-03cd65d72fec (monitor:424)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 405, in _checkDomainStatus
>     self.domain.selftest()
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 688, in selftest
>     self.oop.os.statvfs(self.domaindir)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py", line 243, in statvfs
>     return self._iop.statvfs(path)
>   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 488, in statvfs
>     resdict = self._sendCommand("statvfs", {"path": path}, self.timeout)
>   File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 455, in _sendCommand
>     raise Timeout(os.strerror(errno.ETIMEDOUT))
> Timeout: Connection timed out
>
> Best Regards
> Zois
>
> ------------------------------
> *From:* Michael Burman
> *Sent:* Tuesday, March 20, 2018 8:46 AM
>
> Hello Zois,
>
> It's pretty easy to do via the webadmin UI: go to the Hosts main tab >
> choose the host > go to the 'Network Interfaces' sub tab > press the
> 'Setup Host Networks' button > press the pencil icon on your management
> network > and choose Static IP > press OK and OK to approve the
> operation.
>
> - Note that in some cases, especially if this is the SPM host, you will
> lose connectivity to the host for a few seconds and the host may go to
> non-responsive state; on a non-SPM host this usually works without any
> specific issues.
>
> - If the host in question is the SPM host, I recommend setting it to
> maintenance mode first, doing the switch and then activating it. For a
> non-SPM host this will work fine as well when the host is UP.
>
> [...]
>
>> I've made a rookie mistake by installing ovirt 4.2 in centos 7 with dhcp
>> instead of a static ip configuration. Both engine and host are on the
>> same machine because of limited resources, and i was so happy that
>> everything worked so well that i kept configuring and installing vm's,
>> adding local and nfs storage and setting up the backup!
>>
>> As you understand i must change the configuration to a static ip and i
>> can't find any guide describing the correct procedure. Is there an
>> official guide to change the configuration without causing any trouble?
>>
>> I've found this thread
>> http://lists.ovirt.org/pipermail/users/2014-May/024432.html but this is
>> for a hosted engine and doesn't help when both engine and host are on
>> the same machine.
>>
>> Thanx in advance
>>
>> Best Regards
>> Zois

-- 
Michael Burman
Senior Quality engineer - rhv network - redhat israel
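For the manual step, the static address goes into the management bridge's
ifcfg file before the host is re-installed. A minimal sketch, assuming the
default bridge name ovirtmgmt and the 10.0.0.9 address mentioned above; the
PREFIX, GATEWAY and DNS1 values are placeholders to be adjusted to the real
network:

  # /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
  DEVICE=ovirtmgmt
  TYPE=Bridge
  ONBOOT=yes
  BOOTPROTO=none
  IPADDR=10.0.0.9
  PREFIX=24
  GATEWAY=10.0.0.1
  DNS1=10.0.0.1

  # apply the change
  systemctl restart network

The important part is switching BOOTPROTO from dhcp to none, so the host no
longer picks up the DHCP lease after a network restart.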
From andreil1 at starlett.lv  Tue Mar 20 14:22:50 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Tue, 20 Mar 2018 16:22:50 +0200
Subject: [ovirt-users] Q: Removing stuff from EXPORT domain
In-Reply-To: 
References: <8E40EB8E-3AD9-4A56-A7E9-E642DF60BA94@starlett.lv>
Message-ID: <136E207C-1972-4A50-BC40-56C616D135D9@starlett.lv>

> On 20 Mar 2018, at 16:01, Eyal Shenitzky wrote:
>
> Attached screen-shot for deleting a VM from an export domain:
>
> 1) go to the storage domains list
> 2) select the export domain
> 3) in the export domain - navigate to the 'VM Import' tab
> 4) select a VM and press the 'remove' button

I can't navigate to this point. In Storage -> Storage Domains there is a
list of domains as on your screenshot, but they are double-clickable items.

"3) in the export domain - navigate to the 'VM Import' tab"

Double click on a list item opens the "Manage Domain" dialog.
Control-Click opens a popup menu: "New Domain", "Import Domain", "Manage
Domain". "Manage Domain" allows only editing of basic parameters.

Or must I add a new domain by means of running "Import Domain", and then go
to the points you have described ?

Running FireFox on Mac and Linux. Is it possible it doesn't display
something correctly? What browser do you use ?

> [...]

From eshenitz at redhat.com  Tue Mar 20 14:26:52 2018
From: eshenitz at redhat.com (Eyal Shenitzky)
Date: Tue, 20 Mar 2018 16:26:52 +0200
Subject: [ovirt-users] Q: Copying VMs between Export domains of different data centres
In-Reply-To: 
References: <8B667B41-D6A7-4BA1-A9EF-49A6EEB13A8F@starlett.lv>
Message-ID: 

Hi Andrei,

I think you misunderstand the concept of the export domain.

An export domain allows you to pass entities from one data center to
another.

The flow is (a REST API sketch of the same flow follows below):

1) Create an export domain in DC-A
2) Export the required entities to the export domain
3) Deactivate (put the storage domain into maintenance mode) and detach
   the export domain
4) Attach the export domain to DC-B and import the entities from it

You can see more information here:
- https://www.ovirt.org/documentation/admin-guide/chap-Storage/

On Tue, Mar 20, 2018 at 4:04 PM, Andrei Verovski wrote:

> [...]
>
> How to instruct oVirt to refresh and index these files?

-- 
Regards,
Eyal Shenitzky
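The same detach/attach flow can also be driven from a shell against the
engine's REST API. A rough sketch - the engine host name, the admin
password and the DCA_ID/DCB_ID/SD_ID placeholders are all assumptions, to
be replaced with values from GET /ovirt-engine/api/datacenters and
/storagedomains:

  # 3) deactivate the export domain in DC-A (puts it into maintenance)
  curl -k -u 'admin@internal:password' -X POST \
       -H 'Content-Type: application/xml' -d '<action/>' \
       'https://engine.example.com/ovirt-engine/api/datacenters/DCA_ID/storagedomains/SD_ID/deactivate'

  # 3) detach it from DC-A
  curl -k -u 'admin@internal:password' -X DELETE \
       'https://engine.example.com/ovirt-engine/api/datacenters/DCA_ID/storagedomains/SD_ID'

  # 4) attach it to DC-B; the VMs can then be imported from the 'VM Import' tab
  curl -k -u 'admin@internal:password' -X POST \
       -H 'Content-Type: application/xml' -d '<storage_domain id="SD_ID"/>' \
       'https://engine.example.com/ovirt-engine/api/datacenters/DCB_ID/storagedomains'

The point of the flow is that the export domain itself moves between the
two data centers. Copying image files by hand between two export
directories (as tried above) leaves the target domain's own metadata
unaware of the new images, which is most likely why the "Load" button shows
nothing.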
From eshenitz at redhat.com  Tue Mar 20 14:29:21 2018
From: eshenitz at redhat.com (Eyal Shenitzky)
Date: Tue, 20 Mar 2018 16:29:21 +0200
Subject: [ovirt-users] Q: Removing stuff from EXPORT domain
In-Reply-To: <136E207C-1972-4A50-BC40-56C616D135D9@starlett.lv>
References: <8E40EB8E-3AD9-4A56-A7E9-E642DF60BA94@starlett.lv>
	<136E207C-1972-4A50-BC40-56C616D135D9@starlett.lv>
Message-ID: 

Please try to press on the storage domain *name* while you are at the
storage domains list.

On Tue, Mar 20, 2018 at 4:22 PM, Andrei Verovski wrote:

> I can't navigate to this point. [...]
>
> What browser do you use ?

-- 
Regards,
Eyal Shenitzky

From didi at redhat.com  Tue Mar 20 14:32:53 2018
From: didi at redhat.com (Yedidyah Bar David)
Date: Tue, 20 Mar 2018 16:32:53 +0200
Subject: [ovirt-users] help, ovirt 4.2
In-Reply-To: <77a9a9fc-4d89-5c4f-d5fa-718348cd90e9@uci.cu>
References: <5e4dc257-9c2b-6a98-0696-c005c97479f8@uci.cu>
	<77a9a9fc-4d89-5c4f-d5fa-718348cd90e9@uci.cu>
Message-ID: 

On Tue, Mar 20, 2018 at 3:50 PM, Marcos Michel Martinez Perez wrote:

> Name        : ovirt-engine-setup-plugin-ovirt-engine
> Version     : 4.2.0.2
> Release     : 1.el7.centos
> Architecture: noarch
> Install Date: lun 19 mar 2018 12:21:46 CDT
> Group       : Virtualization/Management
> Size        : 828601
> License     : ASL 2.0
> Signature   : RSA/SHA1, lun 11 dic 2017 12:50:01 CST, Key ID ab8c4f9dfe590cb7
> Source RPM  : ovirt-engine-4.2.0.2-1.el7.centos.src.rpm
> Build Date  : lun 11 dic 2017 12:42:36 CST
> Build Host  : vm0043.workers-phx.ovirt.org
> Relocations : (not relocatable)
> URL         : http://www.ovirt.org
> Summary     : Setup and upgrade specific plugins for oVirt Engine
> Description :
> Setup and upgrade specific plugins for oVirt Engine

OK, so you have old ovirt-engine setup packages with newer otopi.

Please try to update the setup packages and try again:

yum update ovirt\*setup\*

engine-setup

Best regards,
-- 
Didi

From andreil1 at starlett.lv  Tue Mar 20 14:42:20 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Tue, 20 Mar 2018 16:42:20 +0200
Subject: [ovirt-users] Q: Removing stuff from EXPORT domain
In-Reply-To: 
References: <8E40EB8E-3AD9-4A56-A7E9-E642DF60BA94@starlett.lv>
Message-ID: 

> On 20 Mar 2018, at 16:29, Eyal Shenitzky wrote:
>
> Please try to press on the storage domain name while you are at the
> storage domains list.

OK, that works, yet it is really not intuitive. An extra button in the top
bar, something like "more options...", or an extra menu item in the popup
at the top right corner would be really helpful.

> [...]
From spfma.tech at e.mail.fr  Tue Mar 20 15:12:09 2018
From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr)
Date: Tue, 20 Mar 2018 16:12:09 +0100
Subject: [ovirt-users] Hosted engine deployment error
In-Reply-To: 
References: 
Message-ID: <20180320151209.438EBE4471@smtp01.mail.de>

I tried to make a cleaner install: after cleanup, I recreated
"/rhev/data-center/mnt/" and ran the installer again.

As you can see, it crashed again with the same access denied error on this
file:

[ INFO ] TASK [Copy configuration archive to storage]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["dd", "bs=20480", "count=1", "oflag=direct", "if=/var/tmp/localvmVBRLpL/b1884198-69e6-4096-939d-03c87112de10", "of=/rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__self__hosted/015d9546-af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495-aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10"], "delta": "0:00:00.004468", "end": "2018-03-20 15:57:34.199405", "msg": "non-zero return code", "rc": 1, "start": "2018-03-20 15:57:34.194937", "stderr": "dd: impossible d'ouvrir « /rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__self__hosted/015d9546-af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495-aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10 »: Permission non accordée", "stderr_lines": [...], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook

But the file permissions look OK to me:

-rw-rw----. 1 vdsm kvm 1,0G 20 mars 2018 /rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__self__hosted/015d9546-af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495-aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10

So I decided to test something: I set a shell for "vdsm", so I could log in:

su - vdsm -c "touch /rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__self__hosted/015d9546-af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495-aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10" && echo "OK"
OK

As far as I can see, still no permission problem.

But if I try the same as "root":

touch /rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__self__hosted/015d9546-af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495-aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10 && echo "OK"
touch: impossible de faire un touch « /rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__self__hosted/015d9546-af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495-aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10 »: Permission non accordée

Of course, "root" and "vdsm" can create, touch and delete other files
flawlessly in this share.

It looks like some kind of immutable file, but such a thing is not supposed
to exist on NFS, is it?

Regards

On 20-Mar-2018 12:22:50 +0100, stirabos at redhat.com wrote:

> On Tue, Mar 20, 2018 at 11:44 AM, <spfma.tech at e.mail.fr> wrote:
>
>> In fact it is a workaround coming from you I found in the bugtrack that
>> helped me:
>>
>> chmod 644 /var/cache/vdsm/schema/*
>>
>> As the only thing looking like a weird error I had found was:
>>
>> ERROR Exception raised#012Traceback (most recent call last):#012 [...] File "/usr/lib/python2.7/site-packages/vdsm/api/vdsmapi.py", line 217, in __init__#012 raise SchemaNotFound("Unable to find API schema file")#012SchemaNotFound: Unable to find API schema file
>
> Thanks, it's tracked here:
> https://bugzilla.redhat.com/1552565
>
> A fix will come in the next build.
>
>> So I can go one step further, but the installation still fails in the
>> end, with file permission problems in datastore files (I chose NFS 4.1).
>> I can't indeed touch them or get information on them even logged in as
>> root. But I can create and delete files in the same directory.
>>
>> Is there a workaround for this too?
>
> Everything should get written and read on the NFS export as vdsm:kvm
> (36:36); can you please ensure that everything is fine with that?
>
>> On 19-Mar-2018 17:48:41 +0100, stirabos at redhat.com wrote:
>>
>>> On Mon, Mar 19, 2018 at 4:56 PM, <spfma.tech at e.mail.fr> wrote:
>>>
>>>> I wanted to rebuild a new hosted engine setup, as the old one was
>>>> corrupted (too many violent poweroffs!)
>>>>
>>>> So the server was not reinstalled; I just ran
>>>> "ovirt-hosted-engine-cleanup". The network setup generated by vdsm
>>>> seems to be still in place, so I haven't changed anything there.
>>>>
>>>> Then I decided to update the packages to the latest versions
>>>> available, rebooted the server and ran "ovirt-hosted-engine-setup".
>>>>
>>>> But the process never succeeds, as I get an error after a long time
>>>> spent in "[ INFO ] TASK [Wait for the host to be up]":
>>>>
>>>> [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": [{"address": "pfm-srv-virt-1.pfm-ad.pfm.loc", [...] "status": "non_responsive", [...]}]}, "attempts": 120, "changed": false}
>>>> [ INFO ] TASK [Remove local vm dir]
>>>> [ INFO ] TASK [Notify the user about a failure]
>>>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
>>>>
>>>> I made another try with Cockpit, it is the same.
>>>>
>>>> Am I doing something wrong or is there a bug?
>>>
>>> I suppose that your host was configured with DHCP; if so, it's this one:
>>> https://bugzilla.redhat.com/1549642
>>>
>>> The fix will come with 4.2.2.
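The symptom described above (vdsm can write to the file, root gets
"Permission denied" on the very same file) is typically what root squashing
on the NFS server looks like: root is remapped to an anonymous uid, so it
can fail on files it does not own even though the local permissions look
fine. A sketch of the export options usually recommended for oVirt storage
domains - the export path below is only an assumption based on the mount
point in the error:

  # /etc/exports on the NFS server
  /volume3/ovirt_engine_self_hosted  *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

  # re-export and verify from the server
  exportfs -ra
  exportfs -v

With all_squash plus anonuid=36/anongid=36, every client user (including
root) is mapped to vdsm:kvm, which matches the "everything as 36:36"
guidance given earlier in the thread.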
URL: From andreil1 at starlett.lv Tue Mar 20 15:51:51 2018 From: andreil1 at starlett.lv (Andrei Verovski) Date: Tue, 20 Mar 2018 17:51:51 +0200 Subject: [ovirt-users] Q: Copying VMs between Export domains of different data centres In-Reply-To: References: <8B667B41-D6A7-4BA1-A9EF-49A6EEB13A8F@starlett.lv> Message-ID: > On Tue, Mar 20, 2018 at 4:26 PM, Eyal Shenitzky > wrote: > Hi Andrei, > > I think you miss understand the concept of export domain. > > Export domain allows you to pass entities from one data center to another. OK, now I?ve got it. Thanks for so clear and short explanation. It should go straight into the oVirt manual and QA. One Data Center can have only one Export storage domain, right ? So 1 (single) export domain used as some kind of shared exchange buffer (VM ?clipboard? in desktop metaphor) in whole oVirt setup, per 1 host engine. > > The flow is: > > 1) Create an export domain in DC-A > 2) Export required entities to the export domain > 3) Deactivate (enter the storage domain to maintenance mode) and detach the export domain > 4) Attach the export domain to DC-B and import the entities to it. > > You can see more information here: > - https://www.ovirt.org/documentation/admin-guide/chap-Storage/ > > > > > > On Tue, Mar 20, 2018 at 4:04 PM, Andrei Verovski > wrote: > Hi, > > I have 2 data centers (with 1 node each because 1 have local data domain) > > Copied exported from DC #1, exports -> 1d7208ce-d3a1-4406-9638-fe7051562994 -> images -> 12f48f07-7e93-4c66-b0e9-00efc1fec418, with 2 files inside > fc469474-94fd-416b-b921-58604f46411c - 171 GB (seems like disk image) > fc469474-94fd-416b-b921-58604f46411c.meta > > to DC #2, export -> 36bc8d5d-30e9-4df5-94cd-c837483c5e41 -> images -> 12f48f07-7e93-4c66-b0e9-00efc1fec418, with these above listed files inside. > (screenshot attached) > > However, in ?Import Virtual machine(s)? dialog this VM is not visible even after running ?Load? command inside import dialog. > Looks like for whatever reason oVirt don?t refresh content of this directory. > > How to instruct oVirt to refresh and index these files? > > Or this method won?t work at all, and one have to import/export OVA images, or use lengthy procedure described by Fred Roland here ? > http://lists.ovirt.org/pipermail/users/2018-February/087304.html > > Thanks. > Andrei > > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > > -- > Regards, > Eyal Shenitzky > > > > -- > Regards, > Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Tue Mar 20 15:56:48 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Tue, 20 Mar 2018 16:56:48 +0100 Subject: [ovirt-users] Hosted engine deployment error In-Reply-To: <20180320151209.438EBE4471@smtp01.mail.de> References: <20180320151209.438EBE4471@smtp01.mail.de> Message-ID: On Tue, Mar 20, 2018 at 4:12 PM, wrote: > I tried to make a cleaner install : after cleanup, I recreated > "/rhev/data-center/mnt/" and ran the installer again. > It should be automatically created by vdsm, can you please avoid that? > > As you can see, it crashed again with the same access denied error on this > file : > > [ INFO ] TASK [Copy configuration archive to storage] > [ ERROR ] fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["dd", > "bs=20480", "count=1", "oflag=direct", "if=/var/tmp/localvmVBRLpL/ > b1884198-69e6-4096-939d-03c87112de10", "of=/rhev/data-center/mnt/10. 
> 100.2.132:_volume3_ovirt__engine__self__hosted/015d9546- > af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495- > aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10"], "delta": > "0:00:00.004468", "end": "2018-03-20 15:57:34.199405", "msg": "non-zero > return code", "rc": 1, "start": "2018-03-20 15:57:34.194937", "stderr": > "dd: impossible d'ouvrir ? /rhev/data-center/mnt/10. > 100.2.132:_volume3_ovirt__engine__self__hosted/015d9546- > af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495- > aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10 ?: Permission non > accord?e", "stderr_lines": ["dd: impossible d'ouvrir > ? /rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__ > engine__self__hosted/015d9546-af01-4fb2-891e-e28683db3387/ > images/589d0768-c935-4495-aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10 ?: > Permission non accord?e"], "stdout": "", "stdout_lines": []} > [ ERROR ] Failed to execute stage 'Closing up': Failed executing > ansible-playbook > > But the file permissions look ok to me : > > -rw-rw----. 1 vdsm kvm 1,0G 20 mars 2018 /rhev/data-center/mnt/10.100. > 2.132:_volume3_ovirt__engine__self__hosted/015d9546-af01- > 4fb2-891e-e28683db3387/images/589d0768-c935-4495-aa57- > 45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10 > > So I decided to test something : I set a shell for "vdsm", so I could > login : > > su - vdsm -c "touch /rhev/data-center/mnt/10.100. > 2.132:_volume3_ovirt__engine__self__hosted/015d9546-af01- > 4fb2-891e-e28683db3387/images/589d0768-c935-4495-aa57- > 45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10" && echo "OK" > OK > > As far as I can see,still no permission problem > > But if I try the same as "root" : > > touch /rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__ > self__hosted/015d9546-af01-4fb2-891e-e28683db3387/images/ > 589d0768-c935-4495-aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10 > && echo "OK" > touch: impossible de faire un touch ? /rhev/data-center/mnt/10. > 100.2.132:_volume3_ovirt__engine__self__hosted/015d9546- > af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495- > aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10 ?: Permission non > accord?e > > Of course, "root" and "vdsm" can create, touch and delete other files > flawlessly in this share. > > It looks like some kind of immutable file, but is is not suppose to exist > on NFS, does it ? 
> > Regards > > > > > > Le 20-Mar-2018 12:22:50 +0100, stirabos at redhat.com a ?crit: > > > > > On Tue, Mar 20, 2018 at 11:44 AM, wrote: > >> >> Hi, >> >> >> >> In fact it is a workaround coming from you I found in the bugtrack that >> helped me : >> >> >> >> >> chmod 644 /var/cache/vdsm/schema/* >> >> >> >> As the only thing looking like a weird error I have found was : >> >> >> >> ERROR Exception raised#012Traceback (most recent call last):#012 File >> "/usr/lib/python2.7/site-packages/vdsm/vdsmd.py", line 156, in >> run#012 serve_clients(log)#012 File "/usr/lib/python2.7/site-packages/vdsm/vdsmd.py", >> line 103, in serve_clients#012 cif = clientIF.getInstance(irs, log, >> scheduler)#012 File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", >> line 250, in getInstance#012 cls._instance = clientIF(irs, log, >> scheduler)#012 File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", >> line 144, in __init__#012 self._prepareJSONRPCServer()#012 File >> "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 307, in >> _prepareJSONRPCServer#012 bridge = Bridge.DynamicBridge()#012 File >> "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 67, in >> __init__#012 self._schema = vdsmapi.Schema(paths, api_strict_mode)#012 >> File "/usr/lib/python2.7/site-packages/vdsm/api/vdsmapi.py", line 217, >> in __init__#012 raise SchemaNotFound("Unable to find API schema >> file")#012SchemaNotFound: Unable to find API schema file >> > > Thanks, it's tracked here: > https://bugzilla.redhat.com/1552565 > > A fix will come in the next build. > > >> >> >> So I can go one step futher, but the installation still fails in the end, >> with file permission problems in datastore files (i chose NFS 4.1). I can't >> indeed touch or get informations even logged in root. But I can create and >> delete files in the same directory. >> >> Is there a workaround for this too ? >> > > Everything should get wrote and read on the NFS export as vdsm:kvm > (36:36); can you please ensure that everything is fine with that? > > >> >> Regards >> >> >> >> Le 19-Mar-2018 17:48:41 +0100, stirabos at redhat.com a ?crit: >> >> >> >> >> >> >> On Mon, Mar 19, 2018 at 4:56 PM, wrote: >> >>> Hi, >>> >>> I wanted to rebuild a new hosted engine setup, as the old was corrupted >>> (too much violent poweroff !) >>> >>> So the server was not reinstalled, I just runned >>> "ovirt-hosted-engine-cleanup". The network setup generated by vdsm seems to >>> be still in place, so I haven't changed anything there. >>> >>> Then I decided to update the packages to the latest versions avaible, >>> rebooted the server and run "ovirt-hosted-engine-setup". >>> >>> But the process never succeeds, as I get an error after a long time >>> spent in "[ INFO ] TASK [Wait for the host to be up]" >>> >>> >>> [ ERROR ] fatal: [localhost]: FAILED! 
>>> => {"ansible_facts": {"ovirt_hosts": [{"address": "pfm-srv-virt-1.pfm-ad.pfm.loc", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "pfm.loc", "subject": "O=pfm.loc,CN=pfm-srv-virt-1.pfm-ad.pfm.loc"}, "cluster": {"href": "/ovirt-engine/api/clusters/d6c9358e-2b8b-11e8-bc86-00163e152701", "id": "d6c9358e-2b8b-11e8-bc86-00163e152701"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/542566c4-fc85-4398-9402-10c8adaa9554", "id": "542566c4-fc85-4398-9402-10c8adaa9554", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, "name": "pfm-srv-virt-1.pfm-ad.pfm.loc", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:J75BVLFnmGBGFosXzaxCRnuIYcOc75HUBQZ4pOKpDg8", "port": 22}, "statistics": [], "status": "non_responsive", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "rhel", "unmanaged_networks": [], "update_available": false}]}, "attempts": 120, "changed": false}
>>> [ INFO ] TASK [Remove local vm dir]
>>> [ INFO ] TASK [Notify the user about a failure]
>>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
>>>
>>> I made another try with Cockpit; it is the same.
>>>
>>> Am I doing something wrong or is there a bug?
>>
>> I suppose that your host was configured with DHCP; if so, it's this one:
>> https://bugzilla.redhat.com/1549642
>>
>> The fix will come with 4.2.2.
>>
>>> Regards
>>>
>>> ------------------------------
>>> FreeMail powered by mail.fr
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users at ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users

From spfma.tech at e.mail.fr Tue Mar 20 16:02:00 2018
From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr)
Date: Tue, 20 Mar 2018 17:02:00 +0100
Subject: [ovirt-users] Hosted engine deployment error
In-Reply-To: References:
Message-ID: <20180320160201.043C7E446F@smtp01.mail.de>

Just to be sure I hadn't altered something, I renamed "mnt" to something else, and it was indeed recreated.
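Since this thread keeps circling back to ownership, a quick host-side sanity check is worth doing before anything else. This is only a sketch: the mount path is the one quoted in this thread, and 36:36 are the numeric IDs oVirt uses for vdsm:kvm.

  # Numeric owner/group of the mount point and the image tree
  ls -ldn /rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__self__hosted
  # If anything under it is not 36:36, this resets it (assumption: the share is mounted here)
  # chown -R 36:36 /rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__self__hosted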
On 20-Mar-2018 16:57:22 +0100, stirabos at redhat.com wrote:

On Tue, Mar 20, 2018 at 4:12 PM, wrote:

I tried to make a cleaner install: after cleanup, I recreated "/rhev/data-center/mnt/" and ran the installer again.

It should be automatically created by vdsm, can you please avoid that?

As you can see, it crashed again with the same access denied error on this file:

[ INFO ] TASK [Copy configuration archive to storage]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["dd", "bs=20480", "count=1", "oflag=direct", "if=/var/tmp/localvmVBRLpL/b1884198-69e6-4096-939d-03c87112de10", "of=/rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__self__hosted/015d9546-af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495-aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10"], "delta": "0:00:00.004468", "end": "2018-03-20 15:57:34.199405", "msg": "non-zero return code", "rc": 1, "start": "2018-03-20 15:57:34.194937", "stderr": "dd: impossible d'ouvrir « /rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__self__hosted/015d9546-af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495-aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10 »: Permission non accordée", "stderr_lines": ["dd: impossible d'ouvrir « /rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__self__hosted/015d9546-af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495-aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10 »: Permission non accordée"], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook

But the file permissions look OK to me:

-rw-rw----. 1 vdsm kvm 1,0G 20 mars 2018 /rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__self__hosted/015d9546-af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495-aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10

So I decided to test something: I set a shell for "vdsm", so I could log in:

su - vdsm -c "touch /rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__self__hosted/015d9546-af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495-aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10" && echo "OK"
OK

As far as I can see, still no permission problem.

But if I try the same as "root":

touch /rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__self__hosted/015d9546-af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495-aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10 && echo "OK"
touch: impossible de faire un touch « /rhev/data-center/mnt/10.100.2.132:_volume3_ovirt__engine__self__hosted/015d9546-af01-4fb2-891e-e28683db3387/images/589d0768-c935-4495-aa57-45b9b2a18526/b1884198-69e6-4096-939d-03c87112de10 »: Permission non accordée

Of course, "root" and "vdsm" can create, touch and delete other files flawlessly in this share.

It looks like some kind of immutable file, but that is not supposed to exist on NFS, is it?
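The symptom described above (vdsm can write, but root gets "Permission denied" on the same file) is also exactly what an NFS export with root squashing produces, since root is remapped to the anonymous UID on the server while UID 36 passes through. A hedged way to check, assuming shell access to the NFS server; the export path is inferred from this thread and may differ:

  # On the NFS server: show the effective options of the exports
  exportfs -v
  # oVirt's documented export line squashes everyone to 36:36, e.g. in /etc/exports:
  # /volume3/ovirt_engine_self_hosted *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)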
Regards

On 20-Mar-2018 12:22:50 +0100, stirabos at redhat.com wrote:

On Tue, Mar 20, 2018 at 11:44 AM, wrote:

Hi,

In fact it is a workaround coming from you, which I found in the bug tracker, that helped me:

chmod 644 /var/cache/vdsm/schema/*

As the only thing looking like a weird error I found was:

ERROR Exception raised#012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/vdsm/vdsmd.py", line 156, in run#012 serve_clients(log)#012 File "/usr/lib/python2.7/site-packages/vdsm/vdsmd.py", line 103, in serve_clients#012 cif = clientIF.getInstance(irs, log, scheduler)#012 File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 250, in getInstance#012 cls._instance = clientIF(irs, log, scheduler)#012 File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 144, in __init__#012 self._prepareJSONRPCServer()#012 File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 307, in _prepareJSONRPCServer#012 bridge = Bridge.DynamicBridge()#012 File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 67, in __init__#012 self._schema = vdsmapi.Schema(paths, api_strict_mode)#012 File "/usr/lib/python2.7/site-packages/vdsm/api/vdsmapi.py", line 217, in __init__#012 raise SchemaNotFound("Unable to find API schema file")#012SchemaNotFound: Unable to find API schema file

Thanks, it's tracked here: https://bugzilla.redhat.com/1552565

A fix will come in the next build.

So I can go one step further, but the installation still fails in the end, with file permission problems on data store files (I chose NFS 4.1). I can't touch them or get information on them even when logged in as root. But I can create and delete files in the same directory.

Is there a workaround for this too?

Everything should get written and read on the NFS export as vdsm:kvm (36:36); can you please ensure that everything is fine with that?

Regards

On 19-Mar-2018 17:48:41 +0100, stirabos at redhat.com wrote:

On Mon, Mar 19, 2018 at 4:56 PM, wrote:

Hi,

I wanted to rebuild a new hosted engine setup, as the old one was corrupted (too many violent poweroffs!)

The server was not reinstalled; I just ran "ovirt-hosted-engine-cleanup". The network setup generated by vdsm seems to be still in place, so I haven't changed anything there.

Then I decided to update the packages to the latest versions available, rebooted the server and ran "ovirt-hosted-engine-setup".

But the process never succeeds, as I get an error after a long time spent in "[ INFO ] TASK [Wait for the host to be up]"

[ ERROR ] fatal: [localhost]: FAILED!
=> {"ansible_facts": {"ovirt_hosts": [{"address": "pfm-srv-virt-1.pfm-ad.pfm.loc", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "pfm.loc", "subject": "O=pfm.loc,CN=pfm-srv-virt-1.pfm-ad.pfm.loc"}, "cluster": {"href": "/ovirt-engine/api/clusters/d6c9358e-2b8b-11e8-bc86-00163e152701", "id": "d6c9358e-2b8b-11e8-bc86-00163e152701"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/542566c4-fc85-4398-9402-10c8adaa9554", "id": "542566c4-fc85-4398-9402-10c8adaa9554", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, "name": "pfm-srv-virt-1.pfm-ad.pfm.loc", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:J75BVLFnmGBGFosXzaxCRnuIYcOc75HUBQZ4pOKpDg8", "port": 22}, "statistics": [], "status": "non_responsive", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "rhel", "unmanaged_networks": [], "update_available": false}]}, "attempts": 120, "changed": false}
[ INFO ] TASK [Remove local vm dir]
[ INFO ] TASK [Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}

I made another try with Cockpit; it is the same.

Am I doing something wrong or is there a bug?

I suppose that your host was configured with DHCP; if so, it's this one: https://bugzilla.redhat.com/1549642

The fix will come with 4.2.2.

Regards

------------------------------
FreeMail powered by mail.fr

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

From fschmid at ubimet.com Tue Mar 20 16:42:39 2018
From: fschmid at ubimet.com (Florian Schmid)
Date: Tue, 20 Mar 2018 16:42:39 +0000 (UTC)
Subject: [ovirt-users] GPG Key of evilissimo repo for ovirt-guest-agent is expired
Message-ID: <687643086.7795430.1521564159592.JavaMail.zimbra@ubimet.com>

Hi,

it looks like the GPG key for this repo is expired:
http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04/

Does someone know whom I should contact so that this key gets renewed? Or does someone know another repo where I can download the latest ovirt-guest-agent for Ubuntu 16.04?

BR Florian

From hanson at andrewswireless.net Tue Mar 20 16:11:29 2018
From: hanson at andrewswireless.net (Hanson Turner)
Date: Tue, 20 Mar 2018 12:11:29 -0400
Subject: [ovirt-users] Adding host to hosted-engine /w gluster cluster. (On ovirt Node 4.2.1.1)
Message-ID: <9352191a-76dd-13ed-463a-61033dc3fe6a@andrewswireless.net>

Hi Guys,

I've got a 3-machine pool running gluster with replica 3 and want to add two more machines. This would change it to a replica 5...

In oVirt 4.0, I'd done everything manually. No problem there. In oVirt 4.2, I'd used the wizard for the hosted engine.

It looks like the fourth node has been added to the pool but will not go active. It complains gluster isn't running (I have not manually configured /dev/sdb for gluster). Host install+deploy fails. The host can go into maintenance without issue. (Meaning the host has been added to the cluster, but isn't operational.)

What do I need to do to get the node up and running properly, with gluster syncing properly? Manually restarting gluster tells me there are no peers and no volumes.

Do we have a wizard for this too? Or do I need to go find the setup scripts, configure hosts 4 + 5 manually, and run the deploy again?

Thanks,
Hanson

From jlawrence at squaretrade.com Tue Mar 20 17:59:56 2018
From: jlawrence at squaretrade.com (Jamie Lawrence)
Date: Tue, 20 Mar 2018 10:59:56 -0700
Subject: [ovirt-users] Iso upload success, no GUI popup option
In-Reply-To: <1683696.bEs5auB0my@awels>
References: <6CF44967-E2D3-4346-8BFE-8A6A9116A8E8@squaretrade.com> <1683696.bEs5auB0my@awels>
Message-ID: <8E47137E-7E9B-4CFD-B614-A12C75FEADC0@squaretrade.com>

> On Mar 20, 2018, at 5:00 AM, Alexander Wels wrote:
>
> On Monday, March 19, 2018 7:47:00 PM EDT Jamie Lawrence wrote:
>> So, uploading from one of the hosts to an ISO domain claims success, and
>> manually checking shows the ISO uploaded just fine, perms set correctly to
>> 36:36. But it doesn't appear in the GUI popup when creating a new VM.
>
> You probably need to refresh the ISO list. Assuming 4.2, go to Storage ->
> Storage Domains, click on the name of the ISO domain, and go to the images
> detail tab. This should refresh the list of ISOs, and the ISO should be
> listed; once that is done, it should show up in the drop down when
> you change the CD.

That did it, thanks so much.

-j

From ggkkrr55 at gmail.com Tue Mar 20 20:03:35 2018
From: ggkkrr55 at gmail.com (Jean Pickard)
Date: Tue, 20 Mar 2018 13:03:35 -0700
Subject: [ovirt-users] Fiber Channel Storage not coming up
Message-ID:

Hello,

Today I noticed that most of my VMs weren't accessible. Once I checked the system, I noticed my FC SAN data domain is inactive.
2018-03-19 16:01:05,308-0700 ERROR (periodic/32) [virt.vm] (vmId='43700e39-8812-41c6-9cd6-e555a2f19e35') Unable to get watermarks for drive vda: failed to open block device '/rhev/data-center/00000001-0001-0001-0001-000000000311/cecbec42-bff1-4e8d-9c37-c260b7305af7/images/05563eaf-ea90-4364-b92e-4b53173a3a5f/242a97ec-9459-4ce1-b61a-3331a7617f2d': No such file or directory (vm:814)
2018-03-19 16:01:06,018-0700 ERROR (monitor/cecbec4) [storage.Monitor] Error checking domain cecbec42-bff1-4e8d-9c37-c260b7305af7 (monitor:426)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/monitor.py", line 407, in _checkDomainStatus
    self.domain.selftest()
  File "/usr/share/vdsm/storage/sdc.py", line 50, in __getattr__
    return getattr(self.getRealDomain(), attrName)
  File "/usr/share/vdsm/storage/sdc.py", line 53, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 136, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 153, in _findDomain
    return findMethod(sdUUID)
  File "/usr/share/vdsm/storage/blockSD.py", line 1610, in findDomain
    return BlockStorageDomain(BlockStorageDomain.findDomainPath(sdUUID))
  File "/usr/share/vdsm/storage/blockSD.py", line 1550, in findDomainPath
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'cecbec42-bff1-4e8d-9c37-c260b7305af7',)
2018-03-19 16:01:07,331-0700 ERROR (periodic/32) [virt.vm] (vmId='235702f2-286e-403b-b395-db9c1d603d99') Unable to get watermarks for drive vda: failed to open block device '/rhev/data-center/00000001-0001-0001-0001-000000000311/cecbec42-bff1-4e8d-9c37-c260b7305af7/images/8eb9847f-4e4e-4bda-b4fa-a44119137ee2/ddf7c854-fa95-4c3f-b76a-4a90c8067c7e': No such file or directory (vm:814)

My attempts to activate my data center storage are failing as well. I don't see any errors in my engine log under /var/log/ovirt-engine.
Any idea why my storage is not coming up?

Thank you,

Jean

From nicolas.vaye at province-sud.nc Tue Mar 20 20:32:43 2018
From: nicolas.vaye at province-sud.nc (Nicolas Vaye)
Date: Tue, 20 Mar 2018 20:32:43 +0000
Subject: [ovirt-users] storage domain ovirt-image-repository doesn't work
In-Reply-To: References: <1520807274.18402.56.camel@province-sud.nc> <1520984162.6088.104.camel@province-sud.nc> <1520984648.6088.106.camel@province-sud.nc>
Message-ID: <1521577960.1710.82.camel@province-sud.nc>

Hi Daniel,

I checked again this morning, and no, it's the same result.

What can I do to investigate further, with more debugging, in order to solve this issue?

Nicolas

-------- Original message --------
Date: Mon, 19 Mar 2018 11:39:27 +0000
Subject: Re: [ovirt-users] storage domain ovirt-image-repository doesn't work
Cc: ishaby at redhat.com, users at ovirt.org
To: Nicolas Vaye
From: Daniel Erez

Hi Nicolas,

Can you please try navigating to "Administration -> Providers", select the "ovirt-image-repository" provider and click the "Edit" button. Make sure that "Requires Authentication" isn't checked, and click the "Test" button - is it accessing the provider successfully?

On Wed, Mar 14, 2018 at 1:45 AM Nicolas Vaye wrote:

the logs during the test of the ovirt-image-repository provider:

2018-03-14 10:39:43,337+11 INFO [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand] (default task-17) [6c8c6a9f-2c24-4a77-af75-47352c6df887] Running command: TestProviderConnectivityCommand internal: false.
Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_POOL with role type ADMIN
2018-03-14 10:41:30,465+11 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (default task-27) [42cb88a3-2614-4aa9-a3bf-b56102a83c35] transaction rolled back
2018-03-14 10:41:30,465+11 ERROR [org.ovirt.engine.core.bll.storage.repoimage.GetImagesListQuery] (default task-27) [42cb88a3-2614-4aa9-a3bf-b56102a83c35] Failed to retrieve image list: Connection timed out (Connection timed out)
2018-03-14 10:41:50,560+11 ERROR [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand] (default task-17) [6c8c6a9f-2c24-4a77-af75-47352c6df887] Command 'org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand' failed: EngineException: (Failed with error PROVIDER_FAILURE and code 5050)

-------- Original message --------
Date: Tue, 13 Mar 2018 23:36:06 +0000
Subject: Re: [ovirt-users] storage domain ovirt-image-repository doesn't work
Cc: users at ovirt.org
To: ishaby at redhat.com
Reply-to: Nicolas Vaye
From: Nicolas Vaye

Hi Idan,

here are the logs requested:

2018-03-14 10:25:52,097+11 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (default task-6) [61b5b46f-0ea3-496a-af90-bf82e7d204f3] transaction rolled back
2018-03-14 10:25:52,097+11 ERROR [org.ovirt.engine.core.bll.storage.repoimage.GetImagesListQuery] (default task-6) [61b5b46f-0ea3-496a-af90-bf82e7d204f3] Failed to retrieve image list: Connection timed out (Connection timed out)
2018-03-14 10:25:57,083+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'commandCoordinator' is using 0 threads out of 10 and 10 tasks are waiting in the queue.
2018-03-14 10:25:57,083+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'default' is using 0 threads out of 1 and 5 tasks are waiting in the queue.
2018-03-14 10:25:57,083+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engine' is using 0 threads out of 500, 16 threads waiting for tasks and 0 tasks in queue.
2018-03-14 10:25:57,084+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineScheduled' is using 0 threads out of 100 and 100 tasks are waiting in the queue.
2018-03-14 10:25:57,084+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'engineThreadMonitoring' is using 1 threads out of 1 and 0 tasks are waiting in the queue.
2018-03-14 10:25:57,084+11 INFO [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 'hostUpdatesChecker' is using 0 threads out of 5 and 4 tasks are waiting in the queue.

"Connection timed out" seems to indicate that the engine doesn't use the proxy to get web access, or that there is a firewall issue.
But on each oVirt node, when I try to curl the URL, the result is OK:

curl http://glance.ovirt.org:9292/
{"versions": [{"status": "CURRENT", "id": "v2.3", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.2", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.1", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v2.0", "links": [{"href": "http://glance.ovirt.org:9292/v2/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v1.1", "links": [{"href": "http://glance.ovirt.org:9292/v1/", "rel": "self"}]}, {"status": "SUPPORTED", "id": "v1.0", "links": [{"href": "http://glance.ovirt.org:9292/v1/", "rel": "self"}]}]}

I don't know what is wrong!

Regards,

Nicolas

-------- Original message --------
Date: Tue, 13 Mar 2018 07:25:07 +0200
Subject: Re: [ovirt-users] storage domain ovirt-image-repository doesn't work
Cc: users at ovirt.org
To: Nicolas Vaye
From: Idan Shaby

Hi Nicolas,

Let me make sure that I understand the issue - you click on the domain, and on the Images sub tab nothing is displayed?
Can you please clear your engine log, click on the ovirt-image-repository domain and attach the log to the mail?
When I do it, I get the following audit log:

2018-03-13 07:19:25,983+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-86) [6af6ee81-ce9a-46b7-a371-c5c3b0c6bf2a] EVENT_ID: REFRESH_REPOSITORY_IMAGE_LIST_SUCCEEDED(998), Refresh image list succeeded for domain(s): ovirt-image-repository (All file type)

Maybe you get an error there that can help us understand the problem.

Regards,
Idan

On Mon, Mar 12, 2018 at 12:27 AM, Nicolas Vaye wrote:

Hello,

I have installed an oVirt platform with 2 nodes and 1 hosted engine, version 4.2.1.7-1.
It seems to work fine, but I have an issue with the ovirt-image-repository: it is impossible to get the list of available images for this domain (see screenshot).

My cluster is on a private network, so there is a proxy for internet access.
I tried a specific proxy configuration on each node (https://www.server-world.info/en/note?os=CentOS_7&p=squid&f=2), and it is a success for yum update, wget or curl with http://glance.ovirt.org:9292/, but there is nothing in the web UI for the ovirt-image-repository domain.
I tried another test with a transparent proxy, and the result is the same: success with yum update, wget or curl with http://glance.ovirt.org:9292/, but nothing in the web UI for the ovirt-image-repository domain.

I don't know where the specific log for this technical part is. Can I have help with this issue?

Thanks.

Nicolas VAYE
DSI - Nouméa
NEW CALEDONIA

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

From SBERGER at qg.com Tue Mar 20 20:49:33 2018
From: SBERGER at qg.com (Berger, Sandy)
Date: Tue, 20 Mar 2018 20:49:33 +0000
Subject: [ovirt-users] Network issues with oVirt 4.2 and cloud-init
In-Reply-To: References:
Message-ID:

Yes, I had the checkbox marked when I made the template.
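A quick way to tell whether cloud-init really ran again on that later boot (which would explain the rewritten ifcfg-eth0) is to look at its state and log inside the guest. These are cloud-init's default paths, so adjust if your build relocates them:

  # One semaphore directory per instance cloud-init has configured
  ls /var/lib/cloud/instances/
  # Recent datasource activity shows whether it re-ran at the second start
  grep -i datasource /var/log/cloud-init.log | tail -n 5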
From: Michael Burman [mailto:mburman at redhat.com]
Sent: Tuesday, March 20, 2018 1:37 AM
To: Berger, Sandy
Cc: users at ovirt.org; Luca 'remix_tj' Lorenzetto
Subject: Re: [ovirt-users] Network issues with oVirt 4.2 and cloud-init

Hi Berger,

Have you sealed the template? If you didn't seal the template on creation, then your new VM has the same ifcfg-eth0 settings as your origin VM. In order to avoid this, you need to check the 'Seal Template (Linux only)' checkbox in the New Template dialog.

Cheers)

On Mon, Mar 19, 2018 at 5:56 PM, Luca 'remix_tj' Lorenzetto wrote:

Hello Sandy,

i had the same issue, and the cause was cloud-init running again at boot even if Run Once hasn't been selected as the boot option.

The way i'm using to solve the problem is to remove cloud-init after the first run, since we don't need it anymore.

In case disabling is also enough:

touch /etc/cloud/cloud-init.disabled

Luca

On Mon, Mar 19, 2018 at 2:17 PM, Berger, Sandy wrote:
> We're using cloud-init to customize VMs built from a template. We're using
> static IPV4 settings so we're specifying an IP address, subnet mask, and
> gateway. There is apparently a bug in the current version of cloud-init
> shipping as part of CentOS 7.4
> (https://bugzilla.redhat.com/show_bug.cgi?id=1492726) that fails to set the
> gateway properly. In the description of the bug, it says it is fixed in RHEL
> 7.5 but also says one can use
> https://people.redhat.com/rmccabe/cloud-init/cloud-init-0.7.9-20.el7.x86_64.rpm
> which is what we're doing.
>
> When the new VM first boots, the 3 IPv4 settings are all set correctly.
> Reboots of the VM maintain the settings properly. But, if the VM is shut
> down and started again via the oVirt GUI, all of the IPV4 settings on the
> eth0 virtual NIC are lost and the /etc/sysconfig/network-scripts/ifcfg-eth0
> shows that the NIC is now set up for DHCP.
>
> Are we doing something incorrectly?
>
> Sandy Berger
> IT - Infrastructure Engineer II
> Quad/Graphics
> Performance through Innovation
> Sussex, Wisconsin
> 414.566.2123 phone
> 414.566.4010/2123 pager/PIN
> sandy.berger at qg.com
> www.QG.com
> Follow Us: Facebook | Twitter | LinkedIn | YouTube
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo. Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net

--
Michael Burman
Senior Quality engineer - rhv network - redhat israel
Red Hat
mburman at redhat.com    M: 0545355725    IM: mburman

Follow Us: Facebook | Twitter | LinkedIn | YouTube

From SBERGER at qg.com Tue Mar 20 20:48:57 2018
From: SBERGER at qg.com (Berger, Sandy)
Date: Tue, 20 Mar 2018 20:48:57 +0000
Subject: [ovirt-users] Network issues with oVirt 4.2 and cloud-init
In-Reply-To: References:
Message-ID:

Thanks for the suggestion.
I was hoping to avoid that, but will consider doing it as a last resort.

-----Original Message-----
From: Luca 'remix_tj' Lorenzetto [mailto:lorenzetto.luca at gmail.com]
Sent: Monday, March 19, 2018 10:56 AM
To: Berger, Sandy
Cc: users at ovirt.org
Subject: Re: [ovirt-users] Network issues with oVirt 4.2 and cloud-init

Hello Sandy,

i had the same issue, and the cause was cloud-init running again at boot even if Run Once hasn't been selected as the boot option.

The way i'm using to solve the problem is to remove cloud-init after the first run, since we don't need it anymore.

In case disabling is also enough:

touch /etc/cloud/cloud-init.disabled

Luca

On Mon, Mar 19, 2018 at 2:17 PM, Berger, Sandy wrote:
> We're using cloud-init to customize VMs built from a template. We're using
> static IPV4 settings so we're specifying an IP address, subnet mask, and
> gateway. There is apparently a bug in the current version of cloud-init
> shipping as part of CentOS 7.4
> (https://bugzilla.redhat.com/show_bug.cgi?id=1492726) that fails to set the
> gateway properly. In the description of the bug, it says it is fixed in RHEL
> 7.5 but also says one can use
> https://people.redhat.com/rmccabe/cloud-init/cloud-init-0.7.9-20.el7.x86_64.rpm
> which is what we're doing.
>
> When the new VM first boots, the 3 IPv4 settings are all set correctly.
> Reboots of the VM maintain the settings properly. But, if the VM is shut
> down and started again via the oVirt GUI, all of the IPV4 settings on the
> eth0 virtual NIC are lost and the /etc/sysconfig/network-scripts/ifcfg-eth0
> shows that the NIC is now set up for DHCP.
>
> Are we doing something incorrectly?
>
> Sandy Berger
> IT - Infrastructure Engineer II
> Quad/Graphics
> Performance through Innovation
> Sussex, Wisconsin
> 414.566.2123 phone
> 414.566.4010/2123 pager/PIN
> sandy.berger at qg.com
> www.QG.com
> Follow Us: Facebook | Twitter | LinkedIn | YouTube

--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo. Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net

From SBERGER at qg.com Tue Mar 20 21:05:02 2018
From: SBERGER at qg.com (Berger, Sandy)
Date: Tue, 20 Mar 2018 21:05:02 +0000
Subject: [ovirt-users] Fw: Network issues with oVirt 4.2 and cloud-init
In-Reply-To: References: <20180320092231.29eb7bbd@t460p>
Message-ID:

* I have a base VM that was created with a normal kickstart.
* I shut the VM down in preparation for a snapshot.
* I snapshot it so I can put it back to its original state.
* I start the VM and install https://people.redhat.com/rmccabe/cloud-init/cloud-init-0.7.9-20.el7.x86_64.rpm
* I shut the VM down in preparation for the template creation.
* I create a template, checking the
'seal the template' box.
* I modify the template and check 'use cloud-init/sysprep', set a generic hostname that will be changed when creating the VM, specify a time zone, specify root authentication and its password, expand networks and supply a DNS server list and a search domain, check an in-guest network interface name of eth0, specify the IPV4 boot protocol as static, enter a dummy IP address and the correct netmask and gateway, and set the IPV6 boot protocol to none.
* I create a VM from the template and supply a hostname under 'Name' in the General tab, and then in the Initial Run tab I enter an FQDN in the VM hostname field and replace the dummy IP address with the correct one.
* After the VM is created I click Run, not Run Once, and everything boots up correctly.
* If I reboot the server with 'shutdown -r', all of the networking comes up correctly on these boots.
* If I shut down the server and click Run in oVirt, not Run Once, the VM comes up with a DHCP address, and the /etc/sysconfig/network-scripts/ifcfg-eth0 file is missing all of the network configuration and is instead set up for DHCP.

Checking the version of cloud-init on the newly created VM shows the correct version as listed above.

I have not used the REST API for any of this; it's all been done with the standard GUI interface.

Thanks,
--Sandy

From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On Behalf Of Eitan Raviv
Sent: Tuesday, March 20, 2018 3:42 AM
To: users at ovirt.org; Berger, Sandy
Subject: Re: [ovirt-users] Fw: Network issues with oVirt 4.2 and cloud-init

Hi Sandy,

Can you elaborate some more about the steps you have taken?
Specifically, how\where do you apply the cloud-init-0.7.9-20 rpm?
Can you make sure that rpm -q cloud-init after VM reboot still reports this version?
How do you apply the static IP settings that do persist to the VM - via oVirt web-admin\REST API\other?
When you restart the VM via the oVirt GUI - do you 'Run' it or 'Run Once'?

Thanks,
Eitan
oVirt networking team

On Tue, Mar 20, 2018 at 10:22 AM, Dominik Holler wrote:

Begin forwarded message:

Date: Mon, 19 Mar 2018 13:17:08 +0000
From: "Berger, Sandy"
To: "users at ovirt.org"
Subject: [ovirt-users] Network issues with oVirt 4.2 and cloud-init

We're using cloud-init to customize VMs built from a template. We're using static IPV4 settings so we're specifying an IP address, subnet mask, and gateway. There is apparently a bug in the current version of cloud-init shipping as part of CentOS 7.4 (https://bugzilla.redhat.com/show_bug.cgi?id=1492726) that fails to set the gateway properly. In the description of the bug, it says it is fixed in RHEL 7.5 but also says one can use https://people.redhat.com/rmccabe/cloud-init/cloud-init-0.7.9-20.el7.x86_64.rpm which is what we're doing.

When the new VM first boots, the 3 IPv4 settings are all set correctly. Reboots of the VM maintain the settings properly. But, if the VM is shut down and started again via the oVirt GUI, all of the IPV4 settings on the eth0 virtual NIC are lost and the /etc/sysconfig/network-scripts/ifcfg-eth0 shows that the NIC is now set up for DHCP.

Are we doing something incorrectly?
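If removing the package feels too drastic, the disable-file workaround quoted earlier in this thread can be baked into the cloud-init payload itself, so that the first (and only wanted) run switches cloud-init off for all later boots. A sketch of a custom script for the Initial Run tab, untested against this exact setup:

  #cloud-config
  # runcmd executes at the end of the first boot; the marker file
  # (Luca's workaround above) stops cloud-init on subsequent boots.
  runcmd:
    - [ touch, /etc/cloud/cloud-init.disabled ]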
Sandy Berger
IT - Infrastructure Engineer II

Quad/Graphics
Performance through Innovation

Sussex, Wisconsin
414.566.2123 phone
414.566.4010/2123 pager/PIN

sandy.berger at qg.com
www.QG.com

Follow Us: Facebook | Twitter | LinkedIn | YouTube

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

--
Eitan Raviv
IRC: erav (#ovirt #vdsm #devel #rhev-dev)

From roupas_zois at hotmail.com Tue Mar 20 12:50:48 2018
From: roupas_zois at hotmail.com (zois roupas)
Date: Tue, 20 Mar 2018 12:50:48 +0000
Subject: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a
In-Reply-To: References:
Message-ID:

Hi again all,

"Unless I misunderstood you here, do you use a different IP address when switching to static, or the same IP that you got from DHCP? If yes, then this is another flow.."

To answer your question Michael, I'm trying to configure a different IP outside of my DHCP pool. The DHCP IP is 10.0.0.245 from the range 10.0.0.245-10.0.0.250, and I want to configure the IP 10.0.0.9 as the host's IP.

"One thing to note if you are changing the IP to a different one than was assigned by DHCP: you should uncheck "Verify connectivity between Host and Engine""

Ales, I also tried to follow your advice and uncheck "Verify connectivity between Host and Engine" as proposed. Again the same result: it keeps reverting to the previous DHCP IP.
I will extract the vdsm log and get back to you; in the meanwhile, this is the error that I see after the assignment of the static IP in the log:

2018-03-20 14:16:57,576+0200 ERROR (monitor/38f4464) [storage.Monitor] Error checking domain 38f4464b-74b9-4468-891b-03cd65d72fec (monitor:424)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 405, in _checkDomainStatus
    self.domain.selftest()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 688, in selftest
    self.oop.os.statvfs(self.domaindir)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py", line 243, in statvfs
    return self._iop.statvfs(path)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 488, in statvfs
    resdict = self._sendCommand("statvfs", {"path": path}, self.timeout)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 455, in _sendCommand
    raise Timeout(os.strerror(errno.ETIMEDOUT))
Timeout: Connection timed out

Best Regards
Zois

________________________________
From: Ales Musil
Sent: Tuesday, March 20, 2018 11:28 AM
To: Michael Burman
Cc: zois roupas; users
Subject: Re: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a

One thing to note if you are changing the IP to a different one than was assigned by DHCP: you should uncheck "Verify connectivity between Host and Engine". This makes sure that the engine won't lose connectivity, and in the case of switching IPs that happens.

On Tue, Mar 20, 2018 at 10:15 AM, Michael Burman wrote:

Indeed very odd; this shouldn't behave this way. I just tested it myself and it is working as expected. Unless I misunderstood you here, do you use a different IP address when switching to static, or the same IP that you got from DHCP? If yes, then this is another flow..

Can you please share the vdsm version and vdsm log with us?

Edy, any idea what can cause this?
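While the vdsm log is being collected, one more thing can be checked on the host: on reboot or vdsm restart, vdsm restores the network definition it has persisted, so if the persisted copy still says DHCP, a static setting will keep being undone. A read-only check, assuming vdsm's default unified persistence location:

  # Show what vdsm will restore for the management network
  cat /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt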
On Tue, Mar 20, 2018 at 11:10 AM, zois roupas wrote:

Hi Michael, and thanks a lot for the time.

Great step-by-step instructions, but something strange is happening while trying to change to a static IP. I tried to do the change while the host was in maintenance mode and in active mode, but again, after some minutes the system reverts to the IP that DHCP is serving!

What am I missing here? Do you have any ideas?

Best Regards
Zois

________________________________
From: Michael Burman
Sent: Tuesday, March 20, 2018 8:46 AM
To: zois roupas
Cc: users at ovirt.org
Subject: Re: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a

Hello Zois,

It's pretty easy to do via the webadmin UI: go to the Hosts main tab > choose the host > go to the 'Network Interfaces' sub tab > press the 'Setup Host Networks' button > press the pencil icon on your management network > choose Static IP > press OK and OK to approve the operation.

- Note that in some cases, especially if this is an SPM host, you will lose connectivity to the host for a few seconds and the host may go to a non-responsive state; on a non-SPM host this usually works without any specific issues.

- If the host in question is an SPM host, I recommend setting it to maintenance mode first, doing the switch, and then activating. For a non-SPM host this will work fine as well when the host is UP.
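For the manual fallback that comes up later in this thread (removing the host, editing the ifcfg file directly and re-adding it), the management IP lives in the bridge's config file. A minimal static sketch with the addresses from this thread filled in; the netmask and gateway are assumptions, so substitute your real ones:

  # /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
  DEVICE=ovirtmgmt
  TYPE=Bridge
  ONBOOT=yes
  BOOTPROTO=none
  IPADDR=10.0.0.9
  NETMASK=255.255.255.0   # assumption
  GATEWAY=10.0.0.1        # assumption - use your actual gateway
  # afterwards: systemctl restart network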
>> >> Export domain allows you to pass entities from one data center to another. >> > > > OK, now I?ve got it. Thanks for so clear and short explanation. It should > go straight into the oVirt manual and QA. > > One Data Center can have only one Export storage domain, right ? > > So 1 (single) export domain used as some kind of shared exchange buffer > (VM ?clipboard? in desktop metaphor) in whole oVirt setup, per 1 host > engine. > > > >> The flow is: >> >> 1) Create an export domain in DC-A >> 2) Export required entities to the export domain >> 3) Deactivate (enter the storage domain to maintenance mode) and detach >> the export domain >> 4) Attach the export domain to DC-B and import the entities to it. >> >> You can see more information here: >> - https://www.ovirt.org/documentation/admin-guide/chap-Storage/ >> >> >> >> >> >> On Tue, Mar 20, 2018 at 4:04 PM, Andrei Verovski >> wrote: >> >>> Hi, >>> >>> I have 2 data centers (with 1 node each because 1 have local data domain) >>> >>> Copied exported from DC #1, exports -> 1d7208ce-d3a1-4406-9638-fe7051562994 >>> -> images -> 12f48f07-7e93-4c66-b0e9-00efc1fec418, with 2 files inside >>> fc469474-94fd-416b-b921-58604f46411c - 171 GB (seems like disk image) >>> fc469474-94fd-416b-b921-58604f46411c.meta >>> >>> to DC #2, export -> 36bc8d5d-30e9-4df5-94cd-c837483c5e41 -> images >>> -> 12f48f07-7e93-4c66-b0e9-00efc1fec418, with these above listed files >>> inside. >>> (screenshot attached) >>> >>> However, in ?Import Virtual machine(s)? dialog this VM is not visible >>> even after running ?Load? command inside import dialog. >>> Looks like for whatever reason oVirt don?t refresh content of this >>> directory. >>> >>> How to instruct oVirt to refresh and index these files? >>> >>> Or this method won?t work at all, and one have to import/export OVA >>> images, or use lengthy procedure described by Fred Roland here ? >>> http://lists.ovirt.org/pipermail/users/2018-February/087304.html >>> >>> Thanks. >>> Andrei >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> >> -- >> Regards, >> Eyal Shenitzky >> > > > > -- > Regards, > Eyal Shenitzky > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.vaye at province-sud.nc Wed Mar 21 05:30:25 2018 From: nicolas.vaye at province-sud.nc (Nicolas Vaye) Date: Wed, 21 Mar 2018 05:30:25 +0000 Subject: [ovirt-users] how update the network name of a vmnic with API SDK python ? Message-ID: <1521610222.1710.85.camel@province-sud.nc> Hi, i want to change the network name of the existing nic for a VM with python SDK API ? Can i have some help please ? the VM name is testnico the nic name is nic1 the new network name is vlan_NEW and here is the source file written by me (which doesn't work) : #!/usr/bin/env python # -*- coding: utf-8 -*- import logging import time import ovirtsdk4 as sdk import ovirtsdk4.types as types logging.basicConfig(level=logging.DEBUG, filename='example.log') # This example will connect to the server and start a virtual machine # with cloud-init, in order to automatically configure the network and # the password of the `root` user. 
# Create the connection to the server: connection = sdk.Connection( url='https://ocenter.province-sud.prod/ovirt-engine/api', username='admin at internal', password='xxxx', ca_file='CA_ocenter.pem', debug=True, log=logging.getLogger(), ) # Find the virtual machine: vms_service = connection.system_service().vms_service() vm = vms_service.list(search = 'name=testnico')[0] # Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # In order to specify the network that the new interface will be # connected to we need to specify the identifier of the virtual network # interface profile, so we need to find it: profiles_service = connection.system_service().vnic_profiles_service() profile_id = None for profile in profiles_service.list(): print "profile "+profile.name+","+profile.id if profile.name == 'vlan_NEW': profile_id = profile.id break # Locate the service that manages the network interface cards of the # virtual machine: nics_service = vm_service.nics_service() #print nics_service # Find the nic1 of the VM for nic in nics_service.list(): print "nic "+nic.name+","+nic.id+','+nic.vnic_profile.id if nic.name == 'nic1': nic_service = nics_service.nic_service(nic.id) break print "nic_service nic1 ==>"+str(nic_service) #pprint(vars(nic_service.network_filter_parameters_service().parameter_service())) #nic_service.vnic_profile.id=profile_id #nic_service.update() nic_service.update( vnic_profile=types.VnicProfile( id=profile_id, ) ) # Close the connection to the server: connection.close() The result is : Traceback (most recent call last): File "start_vm_with_cloud_init.py", line 85, in id=profile_id, TypeError: update() got an unexpected keyword argument 'vnic_profile' How can i do ? Thanks. Nicolas VAYE From mburman at redhat.com Wed Mar 21 06:07:36 2018 From: mburman at redhat.com (Michael Burman) Date: Wed, 21 Mar 2018 08:07:36 +0200 Subject: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a In-Reply-To: References: Message-ID: If you changing the host's management IP address then this is the only way to do it. If you have only one host in the cluster, then you will need to shut them down :( On Wed, Mar 21, 2018 at 12:08 AM, zois roupas wrote: > Is this a safe procedure? I mean i only have this host in my cluster, what > will happen at the vm's that are assigned to the host? > > > Thanks again > ------------------------------ > *From:* Michael Burman > *Sent:* Tuesday, March 20, 2018 4:10 PM > *To:* zois roupas > *Cc:* Ales Musil; users > > *Subject:* Re: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a > > Then you need to remove the host from engine, change the IP manually on > the host, via ifcfg-file, restart network service and install the host > again via the new IP address. > > On Tue, Mar 20, 2018 at 2:50 PM, zois roupas > wrote: > > Hi again all, > > > "Unless i miss understood you here, do you use a different IP address > when switching to static or the same IP that you got from dhcp? if yes, > then this is another flow.." > > > To answer your question Michael , i'm trying to configure a different ip > outside of my dhcp pool. The dhcp ip is 10.0.0.245 from the range > 10.0.0.245-10.0.0.250 and i want to configure the ip 10.0.0.9 as the hosts > ip > > "One thing to note if you are changing the IP to different one that was > assigned by DHCP you should uncheck "Verify connectivity between Host and > Engine"" > > Ales, i also tried to follow your advise and uncheck the "Verify > connectivity between Host and Engine" as proposed. 
Again the same results, > it keeps reverting to previous dhcp ip > I will extract the vdsm log and i'll get back to you, in the meanwhile > this is the error that i see after the assignment of the static ip in the > log > > 2018-03-20 14:16:57,576+0200 ERROR (monitor/38f4464) [storage.Monitor] > Error checking domain 38f4464b-74b9-4468-891b-03cd65d72fec (monitor:424) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line > 405, in _checkDomainStatus > self.domain.selftest() > File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line > 688, in selftest > self.oop.os.statvfs(self.domaindir) > File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py", > line 243, in statvfs > return self._iop.statvfs(path) > File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line > 488, in statvfs > resdict = self._sendCommand("statvfs", {"path": path}, self.timeout) > File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line > 455, in _sendCommand > raise Timeout(os.strerror(errno.ETIMEDOUT)) > Timeout: Connection timed out > > Best Regards > Zois > > > > > ------------------------------ > *From:* Ales Musil > *Sent:* Tuesday, March 20, 2018 11:28 AM > *To:* Michael Burman > *Cc:* zois roupas; users > > *Subject:* Re: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a > > One thing to note if you are changing the IP to different one that was > assigned by DHCP you should uncheck "Verify connectivity between Host and > Engine". This makes sure that the engine won't lost connectivity and in > case of switching IP it happens. > > On Tue, Mar 20, 2018 at 10:15 AM, Michael Burman > wrote: > > Indeed very odd, this shouldn't behave this way, just tested it my self > and it is working as expected. Unless i miss understood you here, do you > use a different IP address when switching to static or the same IP that you > got from dhcp? if yes, then this is another flow.. > > Can you please share the vdsm version and vdsm log with us? > > Edy, any idea what can cause this? > > > On Tue, Mar 20, 2018 at 11:10 AM, zois roupas > wrote: > > Hi Michael and thanks a lot for the time > > > Great step by step instructions but something strange is happening while > trying to change to static ip. I tried to do the change while the host > was in maintenance mode and in activate mode but again after some minutes > the system reverts to the ip that dhcp is serving! > > What am i missing here? Do you have any ideas? > > > Best Regards > Zois > ------------------------------ > *From:* Michael Burman > *Sent:* Tuesday, March 20, 2018 8:46 AM > *To:* zois roupas > *Cc:* users at ovirt.org > *Subject:* Re: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a > > Hello Zois, > > It pretty easy to do, via the webadmin UI , go to Hosts main tab > Choose > host > go to 'Network Interfaces' sub tab > Press the 'Setup Host Networks' > button > press the pencil icon on your management network > and choose > Static IP > press OK and OK to approve the operation. > > - Note that in some cases, specially if this is a SPM host you will loose > connectivity to host for few seconds and host may go to non-responsive > state, on a non-SPM host usually this woks without any specific issues. > > - If the spoken host is a SPM host, I recommend to set it first to > maintenance mode, do the switch and then activate. For non-SPM host this > will work fine as well when the host is UP. 
> > Cheers) > > On Mon, Mar 19, 2018 at 12:15 PM, zois roupas > wrote: > > Hello everyone > > > I've made a rookie mistake by installing ovirt 4.2 in centos 7 with dhcp > instead of a static ip configuration. Both engine and host are in the same > machine cause of limited resources and i was so happy that everything > worked so well that i kept configuring and installing vm's ,adding local > and nfs storage and setting up the backup! > > As you understand i must change the configuration to static ip and i can't > find any guide describing the correct procedure. Is there an official guide > to change configuration without causing any trouble? > > I've found this thread http://lists.ovirt.org/ > pipermail/users/2014-May/024432.html but this is for a hosted engine and > doesn't help when both engine and host are in the same machine > > > Thanx in advance > > Best Regards > > Zois > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > > -- > > Michael Burman > > Senior Quality engineer - rhv network - redhat israel > > Red Hat > > > > mburman at redhat.com M: 0545355725 IM: mburman > > > > > > -- > > Michael Burman > > Senior Quality engineer - rhv network - redhat israel > > Red Hat > > > > mburman at redhat.com M: 0545355725 IM: mburman > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > > -- > > ALES MUSIL > INTERN - rhv network > > Red Hat EMEA > > > amusil at redhat.com IM: amusil > > > > > > -- > > Michael Burman > > Senior Quality engineer - rhv network - redhat israel > > Red Hat > > > > mburman at redhat.com M: 0545355725 IM: mburman > > -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman -------------- next part -------------- An HTML attachment was scrubbed... URL: From chesterton.adam at gmail.com Wed Mar 21 06:17:23 2018 From: chesterton.adam at gmail.com (Adam Chesterton) Date: Wed, 21 Mar 2018 06:17:23 +0000 Subject: [ovirt-users] Can't Add Host To New Hosted Engine - "Server is already part of another cluster" Message-ID: Hi Everyone, I'm running a 3-host hyperconverged Gluster setup for testing (on some old desktops), and recently the hosted engine died on me, so I have attempted to just clean up my existing hosts, leaving Gluster configured, and re-deploy a fresh hosted engine setup on them. I have successfully got the first host setup and the hosted engine is running on that host. However, when I try to add the other two hosts via the web GUI (as I can no longer add them via CLI) I get this error: "Error while executing action: Server XXXXX is already part of another cluster." I've tried to find where this would still be configured on the two other hosts, but I cannot find anywhere. Does anyone know how I can stop these two hosts from thinking they are still in a cluster? Or, does anyone have some information that might help, or am I going to just have to start a fresh CentOS install? Regards, Adam -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From didi at redhat.com Wed Mar 21 07:03:04 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Wed, 21 Mar 2018 09:03:04 +0200 Subject: [ovirt-users] Can't Add Host To New Hosted Engine - "Server is already part of another cluster" In-Reply-To: References: Message-ID: On Wed, Mar 21, 2018 at 8:17 AM, Adam Chesterton wrote: > Hi Everyone, > > I'm running a 3-host hyperconverged Gluster setup for testing (on some old > desktops), and recently the hosted engine died on me, so I have attempted to > just clean up my existing hosts, leaving Gluster configured, and re-deploy a > fresh hosted engine setup on them. > > I have successfully got the first host setup and the hosted engine is > running on that host. However, when I try to add the other two hosts via the > web GUI (as I can no longer add them via CLI) I get this error: "Error while > executing action: Server XXXXX is already part of another cluster." This message might be a result of the host's participation in a gluster cluster, not hosted-engine cluster. Please share engine.log from the engine. Adding Sahina. > > I've tried to find where this would still be configured on the two other > hosts, but I cannot find anywhere. If it's only about hosted-engine, you can check /etc/ovirt-hosted-engine . You might try using ovirt-hosted-engine-cleanup, although it was not designed for such cases. > > Does anyone know how I can stop these two hosts from thinking they are still > in a cluster? Or, does anyone have some information that might help, or am I > going to just have to start a fresh CentOS install? If you do not need the data, a reinstall might be simplest. If you do, not sure what's your exact plan. You intend to rely on the replication? So that you reinstall one host, add it, wait until syncing finished, then reinstall the other? Might work, no idea. Best regards, -- Didi From dholler at redhat.com Wed Mar 21 07:19:38 2018 From: dholler at redhat.com (Dominik Holler) Date: Wed, 21 Mar 2018 08:19:38 +0100 Subject: [ovirt-users] how update the network name of a vmnic with API SDK python ? In-Reply-To: <1521610222.1710.85.camel@province-sud.nc> References: <1521610222.1710.85.camel@province-sud.nc> Message-ID: <20180321081938.2fd06c8d@t460p> On Wed, 21 Mar 2018 05:30:25 +0000 Nicolas Vaye wrote: > Hi, > > i want to change the network name of the existing nic for a VM with > python SDK API ? Can i have some help please ? > > the VM name is testnico > the nic name is nic1 > the new network name is vlan_NEW > > and here is the source file written by me (which doesn't work) : > > > #!/usr/bin/env python > # -*- coding: utf-8 -*- > > > import logging > import time > > import ovirtsdk4 as sdk > import ovirtsdk4.types as types > > logging.basicConfig(level=logging.DEBUG, filename='example.log') > > # This example will connect to the server and start a virtual machine > # with cloud-init, in order to automatically configure the network and > # the password of the `root` user. 
From dholler at redhat.com Wed Mar 21 07:19:38 2018
From: dholler at redhat.com (Dominik Holler)
Date: Wed, 21 Mar 2018 08:19:38 +0100
Subject: [ovirt-users] how update the network name of a vmnic with API SDK python ?
In-Reply-To: <1521610222.1710.85.camel@province-sud.nc>
References: <1521610222.1710.85.camel@province-sud.nc>
Message-ID: <20180321081938.2fd06c8d@t460p>

On Wed, 21 Mar 2018 05:30:25 +0000
Nicolas Vaye wrote:

> Hi,
>
> I want to change the network of the existing NIC of a VM with the
> Python SDK API. Can I have some help please?
>
> the VM name is testnico
> the nic name is nic1
> the new network name is vlan_NEW
>
> and here is the source file written by me (which doesn't work):
>
> #!/usr/bin/env python
> # -*- coding: utf-8 -*-
>
> import logging
> import time
>
> import ovirtsdk4 as sdk
> import ovirtsdk4.types as types
>
> logging.basicConfig(level=logging.DEBUG, filename='example.log')
>
> # This example will connect to the server and start a virtual machine
> # with cloud-init, in order to automatically configure the network and
> # the password of the `root` user.
>
> # Create the connection to the server:
> connection = sdk.Connection(
>     url='https://ocenter.province-sud.prod/ovirt-engine/api',
>     username='admin@internal',
>     password='xxxx',
>     ca_file='CA_ocenter.pem',
>     debug=True,
>     log=logging.getLogger(),
> )
>
> # Find the virtual machine:
> vms_service = connection.system_service().vms_service()
> vm = vms_service.list(search='name=testnico')[0]
>
> # Find the service that manages the virtual machine:
> vm_service = vms_service.vm_service(vm.id)
>
> # In order to specify the network that the new interface will be
> # connected to we need to specify the identifier of the virtual
> # network interface profile, so we need to find it:
> profiles_service = connection.system_service().vnic_profiles_service()
> profile_id = None
> for profile in profiles_service.list():
>     print "profile " + profile.name + "," + profile.id
>     if profile.name == 'vlan_NEW':
>         profile_id = profile.id
>         break
>
> # Locate the service that manages the network interface cards of the
> # virtual machine:
> nics_service = vm_service.nics_service()
>
> #print nics_service
>
> # Find the nic1 of the VM
> for nic in nics_service.list():
>     print "nic " + nic.name + "," + nic.id + ',' + nic.vnic_profile.id
>     if nic.name == 'nic1':
>         nic_service = nics_service.nic_service(nic.id)
>         break
>
> print "nic_service nic1 ==>" + str(nic_service)
> #pprint(vars(nic_service.network_filter_parameters_service().parameter_service()))
>
> #nic_service.vnic_profile.id=profile_id
> #nic_service.update()
>
> nic_service.update(
>     vnic_profile=types.VnicProfile(
>         id=profile_id,
>     )
> )

nic_service.update(
    types.Nic(
        vnic_profile=types.VnicProfile(
            id=profile_id,
        )
    )
)

> # Close the connection to the server:
> connection.close()
>
> The result is:
>
> Traceback (most recent call last):
>   File "start_vm_with_cloud_init.py", line 85, in <module>
>     id=profile_id,
> TypeError: update() got an unexpected keyword argument 'vnic_profile'
>
> How can I do it?

update() expects a parameter of type types.Nic, which has the parameter
vnic_profile; the corrected call is inserted above in the quoted script.

> Thanks.
>
> Nicolas VAYE
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

From klaasdemter at gmail.com Wed Mar 21 07:22:35 2018
From: klaasdemter at gmail.com (Klaas Demter)
Date: Wed, 21 Mar 2018 08:22:35 +0100
Subject: [ovirt-users] How to deal with "Used memory of host ... exceeded defined threshold [95%]" messages
Message-ID:

Hi,
I'm trying to figure out how I could stop the manager from spamming "used
memory exceeded threshold" messages. I have a hypervisor that has 1.5 TB
of memory and it's using about 98% of that with hugepages for SAP HANA
VMs (configured like https://access.redhat.com/articles/3347431). It
still has about 40-50 GB of memory left for the hypervisor itself, which
should be plenty.

The only option I could figure out on my own: I can adjust the global
limit of LogMaxPhysicalMemoryUsedThresholdInPercentage, but I'm not sure
if that is a good idea for my "normal" hypervisors that often have far
less memory.

My questions are:
- can I adjust the limit on a per-hypervisor basis?
- can I skip the memory check for certain hypervisors, or skip all checks
  for certain hypervisors?
- would a check for free memory in GB make more sense than a percentage
  check?

Setup is still on 4.1.9.

Greetings
Klaas
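For reference, a hedged sketch of the engine-config route mentioned above
(assumptions: it runs on the engine machine, engine-config is on the PATH,
and the option name is the one quoted in the message; the value 85 is an
example, not a recommendation). As far as I can tell the option is
engine-wide, so there is no supported per-hypervisor override.

#!/usr/bin/env python
# Hedged sketch: read and set the global threshold with engine-config.
# Run on the engine machine; restarting ovirt-engine applies the change.

import subprocess

OPTION = 'LogMaxPhysicalMemoryUsedThresholdInPercentage'

# Show the current value.
subprocess.check_call(['engine-config', '-g', OPTION])

# Set a new engine-wide value (affects every host!), then restart.
subprocess.check_call(['engine-config', '-s', '%s=85' % OPTION])
subprocess.check_call(['systemctl', 'restart', 'ovirt-engine'])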
From rightkicktech at gmail.com Wed Mar 21 07:33:23 2018
From: rightkicktech at gmail.com (Alex K)
Date: Wed, 21 Mar 2018 09:33:23 +0200
Subject: [ovirt-users] Disk upload cancel/remove
In-Reply-To:
References:
Message-ID:

Even after rebooting the engine the disks are still there with the same
status "Transferring via API"

Alex

On Tue, Mar 20, 2018 at 11:49 AM, Eyal Shenitzky wrote:

> Idan/Daniel,
>
> Can you please take a look?
>
> Thanks,
>
> On Tue, Mar 20, 2018 at 11:44 AM, Alex K wrote:
>
>> Hi All,
>>
>> I was trying to upload a VM disk to a data storage domain using a
>> Python script.
>> I cancelled the upload twice, and on the third attempt the upload was
>> successful, but I see two disks from the previous attempts with status
>> "transferring via API" (see attached). This status has persisted for
>> more than 8 hours and I cannot remove them.
>>
>> Is there any way to clean them from the disks inventory?
>>
>> I am using ovirt 4.1.9.1-1.el7.centos with self-hosted engine on 3
>> nodes.
>>
>> Thanks,
>> Alex
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
> --
> Regards,
> Eyal Shenitzky
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ovirt-disk-upload.png
Type: image/png
Size: 34142 bytes
Desc: not available
URL:
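A hedged sketch of one way to clean these up from the API side, using the
image transfers service of the same Python SDK the upload script used.
Caveats: listing transfers and cancel() may not exist on every 4.1-era
SDK/engine combination, and if the transfers cannot be cancelled, the
leftover locked disks usually need engine-side cleanup instead (e.g. the
unlock_entity.sh dbutils script). Connection details are placeholders.

#!/usr/bin/env python
# Hedged sketch: list image transfers and try to cancel unfinished ones.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='xxxx',                                    # placeholder
    ca_file='ca.pem',                                   # placeholder
)

transfers_service = connection.system_service().image_transfers_service()
for transfer in transfers_service.list():
    print('transfer %s phase=%s' % (transfer.id, transfer.phase))
    if transfer.phase != types.ImageTransferPhase.FINISHED_SUCCESS:
        # May raise if this SDK/engine version has no cancel action.
        transfers_service.image_transfer_service(transfer.id).cancel()

connection.close()

Once the transfers are gone, the disks themselves can be removed normally
from the Disks tab (or via the SDK's disks service), assuming they are no
longer locked.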
From sabose at redhat.com Wed Mar 21 07:46:53 2018
From: sabose at redhat.com (Sahina Bose)
Date: Wed, 21 Mar 2018 13:16:53 +0530
Subject: [ovirt-users] Sizing hardware for hyperconverged with Gluster?
In-Reply-To: <20180319150445.GA10823@cmadams.net>
References: <20180319150445.GA10823@cmadams.net>
Message-ID:

On Mon, Mar 19, 2018 at 8:34 PM, Chris Adams wrote:

> I have a reasonable feel for how to size hardware for an oVirt cluster
> with external storage (our current setups all use iSCSI to talk to a
> SAN). I'm looking at a hyperconverged oVirt+Gluster setup; are there
> guides for figuring out the additional Gluster resource requirements? I
> assume I need to allow for additional CPU and RAM, I just don't know how
> to size it (based on I/O I guess?).
>

Sizing for hyperconverged is very similar: ensure there are enough
resources for Gluster to run. Gluster would need at least 2 cores; there
is also a systemd slice configuration for Gluster on HC setups that limits
its CPU allocation to 1/3 of the available cores in case of contention.

RAM requirements are workload-specific. In general, 16 GB of RAM is
sufficient for Gluster.

> --
> Chris Adams
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nicolas at devels.es Wed Mar 21 09:05:37 2018
From: nicolas at devels.es (nicolas at devels.es)
Date: Wed, 21 Mar 2018 09:05:37 +0000
Subject: [ovirt-users] GPG Key of evilissimo repo for ovirt-guest-agent is expired
In-Reply-To: <687643086.7795430.1521564159592.JavaMail.zimbra@ubimet.com>
References: <687643086.7795430.1521564159592.JavaMail.zimbra@ubimet.com>
Message-ID:

Also noticed this. As per the CHANGELOG of the package, the maintainer is
a member of Red Hat at least until Aug 2014. Not sure if he's subscribed
to this list, though. Maybe send him an e-mail directly? You can find it
in the /usr/share/doc/ovirt-guest-agent/changelog.Debian.gz file.

Regards.

On 2018-03-20 16:42, Florian Schmid wrote:
> Hi,
>
> it looks like the GPG key for this repo is expired.
> http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04/
>
> Does someone know whom I should contact so that this key will be
> renewed?
>
> Or does someone know another repo where I can download the latest
> ovirt-guest-agent for Ubuntu 16.04
>
> BR Florian
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

From andreil1 at starlett.lv Wed Mar 21 09:21:40 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Wed, 21 Mar 2018 11:21:40 +0200
Subject: [ovirt-users] Attach Export Domain to another Data Center - Failure
Message-ID:

Hi,

I have a 2-host oVirt setup with 2 Data Centers, one with a local storage
domain (DC #1) for VMs plus an Export domain on NFS, another with
everything on shared NFS (DC #2).
I'm trying to export VMs from DC #1 to DC #2.
VMs are exported to DC #1's export domain (NFS), then the domain is put
into maintenance mode and detached from DC #1.

Unfortunately, attaching it to DC #2 failed. Logs attached. I tried to run
this command twice.
Workarounds are possible in order to accomplish this task, yet it would be
better to do it the way it was designed.
Thanks.

2018-03-21 10:43:24,024+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128904) [570ec5d9-fff5-4656-afbd-90b3207a616e] EVENT_ID: USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage Domain node10-NFS-EXPORTS to Data Center StrDataCenter11. (User: admin at internal-authz)
2018-03-21 10:46:16,512+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128956) [1435fc81] EVENT_ID: VDS_STORAGES_CONNECTION_FAILED(188), Failed to connect Host node11 to the Storage Domains node10-NFS-EXPORTS.
2018-03-21 10:46:16,651+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128955) [a81ffa4a-5a58-41a0-888a-f0edc321609b] EVENT_ID: USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage Domain node10-NFS-EXPORTS to Data Center StrDataCenter11.
(User: admin at internal-authz) tail -n 1000 engine.log | grep 570ec5d9-fff5-4656-afbd-90b3207a616e 2018-03-21 10:41:14,643+02 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (default task-2) [570ec5d9-fff5-4656-afbd-90b3207a616e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', sharedLocks=''}' 2018-03-21 10:41:16,129+02 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128904) [570ec5d9-fff5-4656-afbd-90b3207a616e] Running command: AttachStorageDomainToPoolCommand internal: false. Entities affected : ID: 1d7208ce-d3a1-4406-9638-fe7051562994 Type: StorageAction group MANIPULATE_STORAGE_DOMAIN with role type ADMIN, ID: 80cc922f-8dea-4fed-b951-1060ba116ad5 Type: StoragePoolAction group MANIPULATE_STORAGE_DOMAIN with role type ADMIN 2018-03-21 10:43:23,564+02 ERROR [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128904) [570ec5d9-fff5-4656-afbd-90b3207a616e] Cannot connect storage connection server, aborting attach storage domain operation. 2018-03-21 10:43:23,567+02 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedThreadFactory-engine-Thread-128904) [570ec5d9-fff5-4656-afbd-90b3207a616e] Command [id=921ca7cd-4f93-46aa-8de2-91b13b8f96cb]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: StoragePoolIsoMapId:{storagePoolId='80cc922f-8dea-4fed-b951-1060ba116ad5', storageId='1d7208ce-d3a1-4406-9638-fe7051562994'}. 2018-03-21 10:43:24,024+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128904) [570ec5d9-fff5-4656-afbd-90b3207a616e] EVENT_ID: USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage Domain node10-NFS-EXPORTS to Data Center StrDataCenter11. (User: admin at internal-authz) 2018-03-21 10:43:24,114+02 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128904) [570ec5d9-fff5-4656-afbd-90b3207a616e] Lock freed to object 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', sharedLocks=''}' [root at node00 ovirt-engine]# tail -n 1000 engine.log | grep a81ffa4a-5a58-41a0-888a-f0edc321609b 2018-03-21 10:44:11,025+02 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (default task-16) [a81ffa4a-5a58-41a0-888a-f0edc321609b] Lock Acquired to object 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', sharedLocks=''}' 2018-03-21 10:44:11,236+02 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128955) [a81ffa4a-5a58-41a0-888a-f0edc321609b] Running command: AttachStorageDomainToPoolCommand internal: false. Entities affected : ID: 1d7208ce-d3a1-4406-9638-fe7051562994 Type: StorageAction group MANIPULATE_STORAGE_DOMAIN with role type ADMIN, ID: 80cc922f-8dea-4fed-b951-1060ba116ad5 Type: StoragePoolAction group MANIPULATE_STORAGE_DOMAIN with role type ADMIN 2018-03-21 10:46:16,567+02 ERROR [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128955) [a81ffa4a-5a58-41a0-888a-f0edc321609b] Cannot connect storage connection server, aborting attach storage domain operation. 
2018-03-21 10:46:16,568+02 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedThreadFactory-engine-Thread-128955) [a81ffa4a-5a58-41a0-888a-f0edc321609b] Command [id=b5c25100-1a8a-4db0-9509-99cfa60995b1]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: StoragePoolIsoMapId:{storagePoolId='80cc922f-8dea-4fed-b951-1060ba116ad5', storageId='1d7208ce-d3a1-4406-9638-fe7051562994'}. 2018-03-21 10:46:16,651+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128955) [a81ffa4a-5a58-41a0-888a-f0edc321609b] EVENT_ID: USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage Domain node10-NFS-EXPORTS to Data Center StrDataCenter11. (User: admin at internal-authz) 2018-03-21 10:46:16,681+02 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128955) [a81ffa4a-5a58-41a0-888a-f0edc321609b] Lock freed to object 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', sharedLocks='?}' -------------- next part -------------- An HTML attachment was scrubbed... URL: From frolland at redhat.com Wed Mar 21 09:33:40 2018 From: frolland at redhat.com (Fred Rolland) Date: Wed, 21 Mar 2018 11:33:40 +0200 Subject: [ovirt-users] Attach Export Domain to another Data Center - Failure In-Reply-To: References: Message-ID: Can you provide the vdsm logs from the host. It looks the vdsm failed to connect to the server. On Wed, Mar 21, 2018 at 11:21 AM, Andrei Verovski wrote: > Hi, > > I have 2-host oVirt setup with 2 Data Centers, one with local storage > domain (DC #1) for VMs + Export domain on NFS, another with all NFS shared > (DC #2). > Trying to export VMs from DC #1 to DC #2. > VMs are exported to DC #1 export domain (NFS), then domain put into > maintenance mode and detached from DC #1. > > Unfortunately, attaching it to DC #2 failed. Logs attached. Tried to run > this command twice. > Workaround are possible in order to accomplish this task, yet it would be > better to do in a way as it was designed. > Thanks. > > > 2018-03-21 10:43:24,024+02 ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128904) > [570ec5d9-fff5-4656-afbd-90b3207a616e] EVENT_ID: > USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage > Domain node10-NFS-EXPORTS to Data Center StrDataCenter11. (User: > admin at internal-authz) > 2018-03-21 10:46:16,512+02 ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128956) > [1435fc81] EVENT_ID: VDS_STORAGES_CONNECTION_FAILED(188), Failed to > connect Host node11 to the Storage Domains node10-NFS-EXPORTS. > 2018-03-21 10:46:16,651+02 ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128955) > [a81ffa4a-5a58-41a0-888a-f0edc321609b] EVENT_ID: > USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage > Domain node10-NFS-EXPORTS to Data Center StrDataCenter11. (User: > admin at internal-authz) > > > *tail -n 1000 engine.log | grep 570ec5d9-fff5-4656-afbd-90b3207a616e* > 2018-03-21 10:41:14,643+02 INFO [org.ovirt.engine.core.bll. 
> storage.domain.AttachStorageDomainToPoolCommand] (default task-2) > [570ec5d9-fff5-4656-afbd-90b3207a616e] Lock Acquired to object > 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', > sharedLocks=''}' > 2018-03-21 10:41:16,129+02 INFO [org.ovirt.engine.core.bll. > storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128904) > [570ec5d9-fff5-4656-afbd-90b3207a616e] Running command: > AttachStorageDomainToPoolCommand internal: false. Entities affected : > ID: 1d7208ce-d3a1-4406-9638-fe7051562994 Type: StorageAction group > MANIPULATE_STORAGE_DOMAIN with role type ADMIN, ID: > 80cc922f-8dea-4fed-b951-1060ba116ad5 Type: StoragePoolAction group > MANIPULATE_STORAGE_DOMAIN with role type ADMIN > 2018-03-21 10:43:23,564+02 ERROR [org.ovirt.engine.core.bll. > storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128904) > [570ec5d9-fff5-4656-afbd-90b3207a616e] Cannot connect storage connection > server, aborting attach storage domain operation. > 2018-03-21 10:43:23,567+02 INFO [org.ovirt.engine.core.bll.CommandCompensator] > (EE-ManagedThreadFactory-engine-Thread-128904) [570ec5d9-fff5-4656-afbd-90b3207a616e] > Command [id=921ca7cd-4f93-46aa-8de2-91b13b8f96cb]: Compensating > NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; > snapshot: StoragePoolIsoMapId:{storagePoolId='80cc922f-8dea-4fed-b951-1060ba116ad5', > storageId='1d7208ce-d3a1-4406-9638-fe7051562994'}. > 2018-03-21 10:43:24,024+02 ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128904) > [570ec5d9-fff5-4656-afbd-90b3207a616e] EVENT_ID: > USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage > Domain node10-NFS-EXPORTS to Data Center StrDataCenter11. (User: > admin at internal-authz) > 2018-03-21 10:43:24,114+02 INFO [org.ovirt.engine.core.bll. > storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128904) > [570ec5d9-fff5-4656-afbd-90b3207a616e] Lock freed to object > 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', > sharedLocks=''}' > > > > *[root at node00 ovirt-engine]# tail -n 1000 engine.log | grep > a81ffa4a-5a58-41a0-888a-f0edc321609b* > 2018-03-21 10:44:11,025+02 INFO [org.ovirt.engine.core.bll. > storage.domain.AttachStorageDomainToPoolCommand] (default task-16) > [a81ffa4a-5a58-41a0-888a-f0edc321609b] Lock Acquired to object > 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', > sharedLocks=''}' > 2018-03-21 10:44:11,236+02 INFO [org.ovirt.engine.core.bll. > storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128955) > [a81ffa4a-5a58-41a0-888a-f0edc321609b] Running command: > AttachStorageDomainToPoolCommand internal: false. Entities affected : > ID: 1d7208ce-d3a1-4406-9638-fe7051562994 Type: StorageAction group > MANIPULATE_STORAGE_DOMAIN with role type ADMIN, ID: > 80cc922f-8dea-4fed-b951-1060ba116ad5 Type: StoragePoolAction group > MANIPULATE_STORAGE_DOMAIN with role type ADMIN > 2018-03-21 10:46:16,567+02 ERROR [org.ovirt.engine.core.bll. > storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128955) > [a81ffa4a-5a58-41a0-888a-f0edc321609b] Cannot connect storage connection > server, aborting attach storage domain operation. 
> 2018-03-21 10:46:16,568+02 INFO [org.ovirt.engine.core.bll.CommandCompensator] > (EE-ManagedThreadFactory-engine-Thread-128955) [a81ffa4a-5a58-41a0-888a-f0edc321609b] > Command [id=b5c25100-1a8a-4db0-9509-99cfa60995b1]: Compensating > NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; > snapshot: StoragePoolIsoMapId:{storagePoolId='80cc922f-8dea-4fed-b951-1060ba116ad5', > storageId='1d7208ce-d3a1-4406-9638-fe7051562994'}. > 2018-03-21 10:46:16,651+02 ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128955) > [a81ffa4a-5a58-41a0-888a-f0edc321609b] EVENT_ID: > USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage > Domain node10-NFS-EXPORTS to Data Center StrDataCenter11. (User: > admin at internal-authz) > 2018-03-21 10:46:16,681+02 INFO [org.ovirt.engine.core.bll. > storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128955) > [a81ffa4a-5a58-41a0-888a-f0edc321609b] Lock freed to object > 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', > sharedLocks='?}' > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From frolland at redhat.com Wed Mar 21 09:36:10 2018 From: frolland at redhat.com (Fred Rolland) Date: Wed, 21 Mar 2018 11:36:10 +0200 Subject: [ovirt-users] Fiber Channel Storage not coming up In-Reply-To: References: Message-ID: Can you see the LUNs on the host? To check the status, run: multipath -ll Try also to execute the following command /usr/libexec/vdsm/fc-scan -v On Tue, Mar 20, 2018 at 10:03 PM, Jean Pickard wrote: > Hello, > Today I noticed that most of my VMs weren't accessible. > Once i checked the system, I noticed my FC channel SAN data domain is > inactive. 
> 2018-03-19 16:01:05,308-0700 ERROR (periodic/32) [virt.vm]
> (vmId='43700e39-8812-41c6-9cd6-e555a2f19e35') Unable to get watermarks
> for drive vda: failed to open block device '/rhev/data-center/00000001-
> 0001-0001-0001-000000000311/cecbec42-bff1-4e8d-9c37-
> c260b7305af7/images/05563eaf-ea90-4364-b92e-4b53173a3a5f/
> 242a97ec-9459-4ce1-b61a-3331a7617f2d': No such file or directory (vm:814)
> 2018-03-19 16:01:06,018-0700 ERROR (monitor/cecbec4) [storage.Monitor]
> Error checking domain cecbec42-bff1-4e8d-9c37-c260b7305af7 (monitor:426)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/monitor.py", line 407, in _checkDomainStatus
>     self.domain.selftest()
>   File "/usr/share/vdsm/storage/sdc.py", line 50, in __getattr__
>     return getattr(self.getRealDomain(), attrName)
>   File "/usr/share/vdsm/storage/sdc.py", line 53, in getRealDomain
>     return self._cache._realProduce(self._sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 136, in _realProduce
>     domain = self._findDomain(sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 153, in _findDomain
>     return findMethod(sdUUID)
>   File "/usr/share/vdsm/storage/blockSD.py", line 1610, in findDomain
>     return BlockStorageDomain(BlockStorageDomain.findDomainPath(sdUUID))
>   File "/usr/share/vdsm/storage/blockSD.py", line 1550, in findDomainPath
>     raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist:
> (u'cecbec42-bff1-4e8d-9c37-c260b7305af7',)
> 2018-03-19 16:01:07,331-0700 ERROR (periodic/32) [virt.vm]
> (vmId='235702f2-286e-403b-b395-db9c1d603d99') Unable to get watermarks
> for drive vda: failed to open block device '/rhev/data-center/00000001-
> 0001-0001-0001-000000000311/cecbec42-bff1-4e8d-9c37-
> c260b7305af7/images/8eb9847f-4e4e-4bda-b4fa-a44119137ee2/
> ddf7c854-fa95-4c3f-b76a-4a90c8067c7e': No such file or directory (vm:814)
>
> My attempts to activate my data center storage are failing as well.
> I don't see any errors in my engine log under /var/log/ovirt-engine.
>
> Any idea why my storage is not coming up?
>
> Thank you,
>
> Jean
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tgolembi at redhat.com Wed Mar 21 09:59:32 2018
From: tgolembi at redhat.com (Tomáš Golembiovský)
Date: Wed, 21 Mar 2018 10:59:32 +0100
Subject: [ovirt-users] GPG Key of evilissimo repo for ovirt-guest-agent is expired
In-Reply-To: <687643086.7795430.1521564159592.JavaMail.zimbra@ubimet.com>
References: <687643086.7795430.1521564159592.JavaMail.zimbra@ubimet.com>
Message-ID: <20180321105932.526d1198@fiorina>

Hi,

On Tue, 20 Mar 2018 16:42:39 +0000 (UTC)
Florian Schmid wrote:

> Hi,
>
> it looks like the GPG key for this repo is expired.
> http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04/
>
> Does someone know whom I should contact so that this key will be
> renewed?

The repository belongs to Vinzenz Feenstra, as you can see in the package
metadata.

> Or does someone know another repo where I can download the latest
> ovirt-guest-agent for Ubuntu 16.04

You can get ovirt-guest-agent packages from the Debian repository:

https://packages.debian.org/search?suite=all&searchon=names&keywords=ovirt-guest-agent

Tomas

> BR Florian
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

--
Tomáš Golembiovský

From angel.gonzalez at uam.es Wed Mar 21 09:54:23 2018
From: angel.gonzalez at uam.es (Angel R. Gonzalez)
Date: Wed, 21 Mar 2018 10:54:23 +0100
Subject: [ovirt-users] Cannot remove VM
Message-ID: <4169fb5e-eb62-da31-b2a9-9b5a41c8ff47@uam.es>

Hi,

I can't edit/remove/modify a VM. It always shows the message:
    "Cannot remove VM. Related operation is currently in progress.
    Please try again later"

I also used the REST API:
curl -k -u admin@internal:passwd -X DELETE https://engine/ovirt-engine/api/vms/XXXXXXXX
and the output message is the same.

How can I execute actions on the VM, or how can I remove it from the
cluster?

Thanks in advance.

Ángel González.
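A hedged sketch of the same removal done with the Python SDK instead of
curl, retrying while the engine reports the conflict; it usually clears
once whatever task holds the VM lock finishes. If it never clears, the
stale lock has to be dealt with on the engine side (e.g. with the
unlock_entity.sh dbutils script), not through REST. The VM name and
connection details are placeholders.

#!/usr/bin/env python
# Hedged sketch: retry VM removal while "operation in progress" persists.

import time

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='xxxx',                                    # placeholder
    ca_file='ca.pem',                                   # placeholder
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]            # placeholder name
vm_service = vms_service.vm_service(vm.id)

for attempt in range(10):
    try:
        vm_service.remove()
        print('removed')
        break
    except sdk.Error as e:
        # Print the error and the current VM status, then wait and retry.
        print('attempt %d failed: %s (status: %s)' % (
            attempt, e, vm_service.get().status))
        time.sleep(30)

connection.close()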
From andreil1 at starlett.lv Wed Mar 21 10:04:58 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Wed, 21 Mar 2018 12:04:58 +0200
Subject: [ovirt-users] Attach Export Domain to another Data Center - Failure
In-Reply-To:
References:
Message-ID: <53908113-1739-44A9-BE0A-3B4BDB22F230@starlett.lv>

Hi,

Errors occurred at 10:43 AM and 10:46 AM.

node00.starlett.lv - 192.168.0.4 - oVirt host engine (separate PC, not hosted)
node10.starlett.lv - 192.168.0.5 - host #1 of DC #1, from which the export
domain was detached
node11.starlett.lv - 192.168.0.6 - host #1 of DC #2

Logs below are from DC #2's node11, to which I'm trying to attach the
export domain located on the NFS share of node10.

grep 570ec5d9-fff5-4656-afbd-90b3207a616e within vdsm.log returned
nothing, so I ran

grep -n 10:43 vdsm.log | tail -1000

1011:2018-03-21 06:10:43,077+0200 INFO (jsonrpc/3) [api.host] START getAllVmStats() from=::1,36114 (api:46)
1012:2018-03-21 06:10:43,077+0200 INFO (jsonrpc/3) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,36114 (api:52)
1013:2018-03-21 06:10:43,077+0200 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.01 seconds (__init__:573)
1014:2018-03-21 06:10:43,868+0200 INFO (jsonrpc/7) [api.host] START getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46)
1015:2018-03-21 06:10:43,868+0200 INFO (jsonrpc/7) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52)
1016:2018-03-21 06:10:43,868+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573)
19729:2018-03-21 09:10:43,641+0200 INFO (jsonrpc/5) [api.host] START getAllVmStats() from=::1,36114 (api:46)
19730:2018-03-21 09:10:43,641+0200 INFO (jsonrpc/5) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,36114 (api:52)
19731:2018-03-21 09:10:43,642+0200 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573)
29302:2018-03-21 10:43:00,085+0200 INFO (periodic/2) [vdsm.api] START repoStats(domains=()) from=internal, task_id=7b690c8a-5470-44da-a8f3-7e7e9b018e88 (api:46)
29303:2018-03-21 10:43:00,085+0200 INFO (periodic/2) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True,
'version': 4, 'acquired': True, 'delay': '0.000477038', 'lastCheck': '0.5', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000498252', 'lastCheck': '0.4', 'valid': True}} from=internal, task_id=7b690c8a-5470-44da-a8f3-7e7e9b018e88 (api:52) 29304:2018-03-21 10:43:00,086+0200 INFO (periodic/2) [vdsm.api] START multipath_health() from=internal, task_id=9a625d43-a03c-429c-856f-9aa8e5ff65b5 (api:46) 29305:2018-03-21 10:43:00,086+0200 INFO (periodic/2) [vdsm.api] FINISH multipath_health return={} from=internal, task_id=9a625d43-a03c-429c-856f-9aa8e5ff65b5 (api:52) 29306:2018-03-21 10:43:05,382+0200 INFO (jsonrpc/5) [api.host] START getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) 29307:2018-03-21 10:43:05,383+0200 INFO (jsonrpc/5) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) 29308:2018-03-21 10:43:05,383+0200 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573) 29309:2018-03-21 10:43:06,248+0200 INFO (jsonrpc/3) [api.host] START getStats() from=::ffff:192.168.0.4,49914 (api:46) 29310:2018-03-21 10:43:06,249+0200 INFO (jsonrpc/3) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.0.4,49914, task_id=f80f9bb8-2e84-4f45-ab58-09a88e808cf3 (api:46) 29311:2018-03-21 10:43:06,250+0200 INFO (jsonrpc/3) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000477038', 'lastCheck': '6.7', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000498252', 'lastCheck': '6.6', 'valid': True}} from=::ffff:192.168.0.4,49914, task_id=f80f9bb8-2e84-4f45-ab58-09a88e808cf3 (api:52) 29312:2018-03-21 10:43:06,250+0200 INFO (jsonrpc/3) [vdsm.api] START multipath_health() from=::ffff:192.168.0.4,49914, task_id=ddd7064d-cd12-43bf-a177-b2a93cc9035e (api:46) 29313:2018-03-21 10:43:06,250+0200 INFO (jsonrpc/3) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.0.4,49914, task_id=ddd7064d-cd12-43bf-a177-b2a93cc9035e (api:52) 29314:2018-03-21 10:43:06,256+0200 INFO (jsonrpc/3) [api.host] FINISH getStats return={'status': {'message': 'Done', 'code': 0}, 'info': {'cpuStatistics': {'11': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.60'}, '10': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.40', 'cpuIdle': '99.47'}, '13': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '12': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.67'}, '15': {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': '0.47', 'cpuIdle': '99.33'}, '14': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.60'}, '1': {'cpuUser': '1.13', 'nodeIndex': 1, 'cpuSys': '1.60', 'cpuIdle': '97.27'}, '0': {'cpuUser': '0.47', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.26'}, '3': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '2': {'cpuUser': '0.33', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.54'}, '5': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.74'}, '4': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.67'}, '7': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': '0.60', 'cpuIdle': '99.27'}, '6': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': '99.54'}, '9': 
{'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.66'}, '8': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': '99.54'}}, 'numaNodeMemFree': {'1': {'memPercent': 7, 'memFree': '15142'}, '0': {'memPercent': 5, 'memFree': '15349'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000477038', 'lastCheck': '6.7', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000498252', 'lastCheck': '6.6', 'valid': True}}, 'incomingVmMigrations': 0, 'network': {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621780.072418, 'name': 'enp3s0f0', 'tx': '1091287978', 'txDropped': '0', 'rx': '11914868563', 'rxErrors': '0', 'speed': '100', 'rxDropped': '1439'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621780.072418, 'name': 'ovirtmgmt', 'tx': '1048639112', 'txDropped': '0', 'rx': '11615209092', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621780.072418, 'name': 'lo', 'tx': '58057075945', 'txDropped': '0', 'rx': '58057075945', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621780.072418, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': '66839', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621780.072418, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621780.072418, 'name': 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621780.072418, 'name': 'enp4s0f1', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}}, 'txDropped': '0', 'anonHugePages': '204', 'ksmPages': 100, 'elapsedTime': '76345.09', 'cpuLoad': '0.16', 'cpuSys': '0.36', 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.07', 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31081, 'bootTime': '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': 'active', 'multipathHealth': {}, 'rxDropped': '1439', 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, 'hugepages': defaultdict(, {2048: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': '2018-03-21T08:43:06 GMT', 'cpuUser': '0.21', 'memFree': 31337, 'cpuIdle': '99.43', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.93'}} from=::ffff:192.168.0.4,49914 (api:52) 29315:2018-03-21 10:43:06,258+0200 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:573) 29316:2018-03-21 10:43:06,632+0200 INFO 
(jsonrpc/0) [api.host] START getAllVmStats() from=::1,36114 (api:46) 29317:2018-03-21 10:43:06,632+0200 INFO (jsonrpc/0) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,36114 (api:52) 29318:2018-03-21 10:43:06,633+0200 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573) 29319:2018-03-21 10:43:07,415+0200 INFO (jsonrpc/4) [vdsm.api] START getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49914, task_id=276c13c7-61ec-4782-a0d9-5cd0f0b0a729 (api:46) 29320:2018-03-21 10:43:07,420+0200 INFO (jsonrpc/4) [vdsm.api] FINISH getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 6L}} from=::ffff:192.168.0.4,49914, task_id=276c13c7-61ec-4782-a0d9-5cd0f0b0a729 (api:52) 29321:2018-03-21 10:43:07,420+0200 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 seconds (__init__:573) 29322:2018-03-21 10:43:07,427+0200 INFO (jsonrpc/6) [vdsm.api] START getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49920, task_id=344b18df-a428-429a-b49f-381f901d1618 (api:46) 29323:2018-03-21 10:43:07,434+0200 INFO (jsonrpc/6) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, task_id=344b18df-a428-429a-b49f-381f901d1618 (api:52) 29324:2018-03-21 10:43:07,435+0200 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 seconds (__init__:573) 29325:2018-03-21 10:43:13,839+0200 WARN (vdsm.Scheduler) [Executor] Worker blocked: timeout=60, duration=120 at 0x402fe50> task#=3754 at 0x353e690>, traceback: 29376:2018-03-21 10:43:15,104+0200 INFO (periodic/2) [vdsm.api] START repoStats(domains=()) from=internal, task_id=672a883e-1d60-4981-9eee-bdcac2ec30c1 (api:46) 29377:2018-03-21 10:43:15,105+0200 INFO (periodic/2) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000488777', 'lastCheck': '5.6', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000481962', 'lastCheck': '5.4', 'valid': True}} from=internal, task_id=672a883e-1d60-4981-9eee-bdcac2ec30c1 (api:52) 29378:2018-03-21 10:43:15,105+0200 INFO (periodic/2) [vdsm.api] START multipath_health() from=internal, task_id=55f1b797-da2a-4b07-affe-5b9e41a74307 (api:46) 29379:2018-03-21 10:43:15,106+0200 INFO (periodic/2) [vdsm.api] FINISH 
multipath_health return={} from=internal, task_id=55f1b797-da2a-4b07-affe-5b9e41a74307 (api:52) 29380:2018-03-21 10:43:17,645+0200 INFO (jsonrpc/7) [vdsm.api] START getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49914, task_id=52a6ddeb-3c20-4f3f-b27d-94d19580b473 (api:46) 29381:2018-03-21 10:43:17,649+0200 INFO (jsonrpc/7) [vdsm.api] FINISH getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 6L}} from=::ffff:192.168.0.4,49914, task_id=52a6ddeb-3c20-4f3f-b27d-94d19580b473 (api:52) 29382:2018-03-21 10:43:17,650+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 seconds (__init__:573) 29383:2018-03-21 10:43:17,656+0200 INFO (jsonrpc/2) [vdsm.api] START getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49920, task_id=1a103883-6a6a-4d61-ade6-96559591ddb0 (api:46) 29384:2018-03-21 10:43:17,663+0200 INFO (jsonrpc/2) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, task_id=1a103883-6a6a-4d61-ade6-96559591ddb0 (api:52) 29385:2018-03-21 10:43:17,663+0200 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 seconds (__init__:573) 29386:2018-03-21 10:43:18,882+0200 ERROR (jsonrpc/1) [storage.HSM] Could not connect to storageServer (hsm:2407) 29406:2018-03-21 10:43:18,883+0200 INFO (jsonrpc/1) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 477, 'id': u'461f65a9-3a81-4f3f-a46d-c5ed12520524'}]} from=::ffff:192.168.0.4,49914, flow_id=4a53b512, task_id=e598cbe0-cde8-4c73-b526-4398df05e67f (api:52) 29407:2018-03-21 10:43:18,883+0200 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 125.04 seconds (__init__:573) 29408:2018-03-21 10:43:20,387+0200 INFO (jsonrpc/5) [api.host] START getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) 29409:2018-03-21 10:43:20,388+0200 INFO (jsonrpc/5) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) 29410:2018-03-21 10:43:20,388+0200 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.01 seconds (__init__:573) 29411:2018-03-21 10:43:21,497+0200 INFO (jsonrpc/3) [api.host] START getStats() from=::ffff:192.168.0.4,49914 (api:46) 29412:2018-03-21 10:43:21,498+0200 INFO (jsonrpc/3) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.0.4,49914, task_id=a7469c1f-3d6d-42e7-8798-5c5335da4a9b (api:46) 29413:2018-03-21 
10:43:21,498+0200 INFO (jsonrpc/3) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000445096', 'lastCheck': '1.9', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000296586', 'lastCheck': '1.8', 'valid': True}} from=::ffff:192.168.0.4,49914, task_id=a7469c1f-3d6d-42e7-8798-5c5335da4a9b (api:52) 29414:2018-03-21 10:43:21,499+0200 INFO (jsonrpc/3) [vdsm.api] START multipath_health() from=::ffff:192.168.0.4,49914, task_id=9fa5dae3-e2bd-4186-9a1d-afb0aa6e2511 (api:46) 29415:2018-03-21 10:43:21,499+0200 INFO (jsonrpc/3) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.0.4,49914, task_id=9fa5dae3-e2bd-4186-9a1d-afb0aa6e2511 (api:52) 29416:2018-03-21 10:43:21,505+0200 INFO (jsonrpc/3) [api.host] FINISH getStats return={'status': {'message': 'Done', 'code': 0}, 'info': {'cpuStatistics': {'11': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '10': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '13': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '12': {'cpuUser': '0.33', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.47'}, '15': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '14': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': '99.60'}, '1': {'cpuUser': '1.00', 'nodeIndex': 1, 'cpuSys': '1.00', 'cpuIdle': '98.00'}, '0': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '3': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.93'}, '2': {'cpuUser': '0.27', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.66'}, '5': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '4': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '7': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.66'}, '6': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '9': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '8': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.93'}}, 'numaNodeMemFree': {'1': {'memPercent': 7, 'memFree': '15142'}, '0': {'memPercent': 5, 'memFree': '15351'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000445096', 'lastCheck': '1.9', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000296586', 'lastCheck': '1.8', 'valid': True}}, 'incomingVmMigrations': 0, 'network': {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621795.091263, 'name': 'enp3s0f0', 'tx': '1091295628', 'txDropped': '0', 'rx': '11914875210', 'rxErrors': '0', 'speed': '100', 'rxDropped': '1442'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621795.091263, 'name': 'ovirtmgmt', 'tx': '1048646480', 'txDropped': '0', 'rx': '11615214750', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621795.091263, 'name': 'lo', 'tx': '58066397668', 'txDropped': '0', 'rx': '58066397668', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': 
1521621795.091263, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': '66839', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621795.091263, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621795.091263, 'name': 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621795.091263, 'name': 'enp4s0f1', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}}, 'txDropped': '0', 'anonHugePages': '204', 'ksmPages': 100, 'elapsedTime': '76360.34', 'cpuLoad': '0.15', 'cpuSys': '0.18', 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.07', 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31031, 'bootTime': '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': 'active', 'multipathHealth': {}, 'rxDropped': '1442', 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, 'hugepages': defaultdict(, {2048: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': '2018-03-21T08:43:21 GMT', 'cpuUser': '0.15', 'memFree': 31287, 'cpuIdle': '99.67', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.67'}} from=::ffff:192.168.0.4,49914 (api:52) 29417:2018-03-21 10:43:21,508+0200 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.02 seconds (__init__:573) 29418:2018-03-21 10:43:21,651+0200 INFO (jsonrpc/0) [api.host] START getAllVmStats() from=::1,36114 (api:46) 29419:2018-03-21 10:43:21,651+0200 INFO (jsonrpc/0) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,36114 (api:52) 29420:2018-03-21 10:43:21,652+0200 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573) 29421:2018-03-21 10:43:27,764+0200 INFO (jsonrpc/4) [vdsm.api] START getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49914, task_id=46d19cab-8903-4c52-9bb2-4dd8f370997e (api:46) 29422:2018-03-21 10:43:27,769+0200 INFO (jsonrpc/4) [vdsm.api] FINISH getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 6L}} from=::ffff:192.168.0.4,49914, task_id=46d19cab-8903-4c52-9bb2-4dd8f370997e (api:52) 29423:2018-03-21 10:43:27,770+0200 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 seconds (__init__:573) 29424:2018-03-21 10:43:27,815+0200 INFO (jsonrpc/6) [vdsm.api] START getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49920, task_id=290acc73-8f25-4fd7-9263-7bd05d46430a (api:46) 29425:2018-03-21 10:43:27,822+0200 INFO (jsonrpc/6) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': 
u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, task_id=290acc73-8f25-4fd7-9263-7bd05d46430a (api:52) 29426:2018-03-21 10:43:27,823+0200 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 seconds (__init__:573) 29427:2018-03-21 10:43:30,121+0200 INFO (periodic/3) [vdsm.api] START repoStats(domains=()) from=internal, task_id=0abdebcf-2b0b-4d33-b552-c0f2ed3e567b (api:46) 29428:2018-03-21 10:43:30,122+0200 INFO (periodic/3) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000402077', 'lastCheck': '0.6', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000344445', 'lastCheck': '0.5', 'valid': True}} from=internal, task_id=0abdebcf-2b0b-4d33-b552-c0f2ed3e567b (api:52) 29429:2018-03-21 10:43:30,122+0200 INFO (periodic/3) [vdsm.api] START multipath_health() from=internal, task_id=acde7e7d-ab4a-4811-934f-4d312da33208 (api:46) 29430:2018-03-21 10:43:30,123+0200 INFO (periodic/3) [vdsm.api] FINISH multipath_health return={} from=internal, task_id=acde7e7d-ab4a-4811-934f-4d312da33208 (api:52) 29431:2018-03-21 10:43:35,393+0200 INFO (jsonrpc/7) [api.host] START getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) 29432:2018-03-21 10:43:35,394+0200 INFO (jsonrpc/7) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) 29433:2018-03-21 10:43:35,394+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573) 29434:2018-03-21 10:43:36,671+0200 INFO (jsonrpc/2) [api.host] START getAllVmStats() from=::1,36114 (api:46) 29435:2018-03-21 10:43:36,671+0200 INFO (jsonrpc/2) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,36114 (api:52) 29436:2018-03-21 10:43:36,672+0200 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573) 29437:2018-03-21 10:43:37,356+0200 INFO (jsonrpc/1) [api.host] START getStats() from=::ffff:192.168.0.4,49914 (api:46) 29438:2018-03-21 10:43:37,357+0200 INFO (jsonrpc/1) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.0.4,49914, task_id=bc14e253-d3d4-4cc3-9ae1-c0e6ac6e37f7 (api:46) 29439:2018-03-21 10:43:37,357+0200 INFO (jsonrpc/1) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000402077', 'lastCheck': '7.8', 'valid': True}, 
u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000344445', 'lastCheck': '7.7', 'valid': True}} from=::ffff:192.168.0.4,49914, task_id=bc14e253-d3d4-4cc3-9ae1-c0e6ac6e37f7 (api:52) 29440:2018-03-21 10:43:37,358+0200 INFO (jsonrpc/1) [vdsm.api] START multipath_health() from=::ffff:192.168.0.4,49914, task_id=23c698b8-d5bd-4746-b738-97b10f3e4bfb (api:46) 29441:2018-03-21 10:43:37,358+0200 INFO (jsonrpc/1) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.0.4,49914, task_id=23c698b8-d5bd-4746-b738-97b10f3e4bfb (api:52) 29442:2018-03-21 10:43:37,365+0200 INFO (jsonrpc/1) [api.host] FINISH getStats return={'status': {'message': 'Done', 'code': 0}, 'info': {'cpuStatistics': {'11': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '10': {'cpuUser': '3.06', 'nodeIndex': 0, 'cpuSys': '0.80', 'cpuIdle': '96.14'}, '13': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.93'}, '12': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '15': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '14': {'cpuUser': '0.27', 'nodeIndex': 0, 'cpuSys': '0.40', 'cpuIdle': '99.33'}, '1': {'cpuUser': '1.13', 'nodeIndex': 1, 'cpuSys': '1.33', 'cpuIdle': '97.54'}, '0': {'cpuUser': '0.47', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': '99.20'}, '3': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.93'}, '2': {'cpuUser': '0.67', 'nodeIndex': 0, 'cpuSys': '0.40', 'cpuIdle': '98.93'}, '5': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.74'}, '4': {'cpuUser': '0.47', 'nodeIndex': 0, 'cpuSys': '0.40', 'cpuIdle': '99.13'}, '7': {'cpuUser': '0.33', 'nodeIndex': 1, 'cpuSys': '0.73', 'cpuIdle': '98.94'}, '6': {'cpuUser': '5.39', 'nodeIndex': 0, 'cpuSys': '0.47', 'cpuIdle': '94.14'}, '9': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.66'}, '8': {'cpuUser': '0.53', 'nodeIndex': 0, 'cpuSys': '0.40', 'cpuIdle': '99.07'}}, 'numaNodeMemFree': {'1': {'memPercent': 7, 'memFree': '15143'}, '0': {'memPercent': 5, 'memFree': '15348'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000402077', 'lastCheck': '7.8', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000344445', 'lastCheck': '7.7', 'valid': True}}, 'incomingVmMigrations': 0, 'network': {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621810.108511, 'name': 'enp3s0f0', 'tx': '1091305295', 'txDropped': '0', 'rx': '11914880953', 'rxErrors': '0', 'speed': '100', 'rxDropped': '1442'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621810.108511, 'name': 'ovirtmgmt', 'tx': '1048655897', 'txDropped': '0', 'rx': '11615220341', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621810.108511, 'name': 'lo', 'tx': '58078891247', 'txDropped': '0', 'rx': '58078891247', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621810.108511, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': '66839', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621810.108511, 'name': ';vdsmdummy;', 
'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621810.108511, 'name': 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621810.108511, 'name': 'enp4s0f1', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}}, 'txDropped': '0', 'anonHugePages': '204', 'ksmPages': 100, 'elapsedTime': '76376.19', 'cpuLoad': '0.16', 'cpuSys': '0.39', 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.00', 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31080, 'bootTime': '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': 'active', 'multipathHealth': {}, 'rxDropped': '1442', 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, 'hugepages': defaultdict(<type 'dict'>, {2048: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': '2018-03-21T08:43:37 GMT', 'cpuUser': '0.79', 'memFree': 31336, 'cpuIdle': '98.82', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.73'}} from=::ffff:192.168.0.4,49914 (api:52) 29443:2018-03-21 10:43:37,367+0200 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:573) 29444:2018-03-21 10:43:38,124+0200 INFO (jsonrpc/5) [vdsm.api] START getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49914, task_id=9d2a82dd-c76c-4eb5-a1e1-b49bf004384e (api:46) 29445:2018-03-21 10:43:38,128+0200 INFO (jsonrpc/5) [vdsm.api] FINISH getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 6L}} from=::ffff:192.168.0.4,49914, task_id=9d2a82dd-c76c-4eb5-a1e1-b49bf004384e (api:52) 29446:2018-03-21 10:43:38,129+0200 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 seconds (__init__:573) 29447:2018-03-21 10:43:38,171+0200 INFO (jsonrpc/3) [vdsm.api] START getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49920, task_id=62d44b0b-20b2-47ae-bfb8-d4717e301b69 (api:46) 29448:2018-03-21 10:43:38,177+0200 INFO (jsonrpc/3) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': 
u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, task_id=62d44b0b-20b2-47ae-bfb8-d4717e301b69 (api:52) 29449:2018-03-21 10:43:38,178+0200 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 seconds (__init__:573) 29450:2018-03-21 10:43:45,147+0200 INFO (periodic/2) [vdsm.api] START repoStats(domains=()) from=internal, task_id=49c33285-a222-464d-8984-4e75a5c6354a (api:46) 29451:2018-03-21 10:43:45,148+0200 INFO (periodic/2) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000454102', 'lastCheck': '5.6', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000347381', 'lastCheck': '5.5', 'valid': True}} from=internal, task_id=49c33285-a222-464d-8984-4e75a5c6354a (api:52) 29452:2018-03-21 10:43:45,148+0200 INFO (periodic/2) [vdsm.api] START multipath_health() from=internal, task_id=eed6b564-76b1-4f9b-80a2-1f0f6d9bbe32 (api:46) 29453:2018-03-21 10:43:45,149+0200 INFO (periodic/2) [vdsm.api] FINISH multipath_health return={} from=internal, task_id=eed6b564-76b1-4f9b-80a2-1f0f6d9bbe32 (api:52) 29454:2018-03-21 10:43:48,405+0200 INFO (jsonrpc/0) [vdsm.api] START getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49914, task_id=84651a37-7d13-41a9-9dcf-2169a9c46fc5 (api:46) 29455:2018-03-21 10:43:48,410+0200 INFO (jsonrpc/0) [vdsm.api] FINISH getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 6L}} from=::ffff:192.168.0.4,49914, task_id=84651a37-7d13-41a9-9dcf-2169a9c46fc5 (api:52) 29456:2018-03-21 10:43:48,411+0200 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 seconds (__init__:573) 29457:2018-03-21 10:43:48,417+0200 INFO (jsonrpc/4) [vdsm.api] START getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49920, task_id=4ebd1d1a-11ac-41ee-b459-7e45f008776b (api:46) 29458:2018-03-21 10:43:48,423+0200 INFO (jsonrpc/4) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, task_id=4ebd1d1a-11ac-41ee-b459-7e45f008776b (api:52) 29459:2018-03-21 10:43:48,424+0200 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 seconds (__init__:573) 29460:2018-03-21 10:43:50,399+0200 INFO 
(jsonrpc/6) [api.host] START getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) 29461:2018-03-21 10:43:50,400+0200 INFO (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) 29462:2018-03-21 10:43:50,400+0200 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573) 29463:2018-03-21 10:43:51,694+0200 INFO (jsonrpc/7) [api.host] START getAllVmStats() from=::1,36114 (api:46) 29464:2018-03-21 10:43:51,694+0200 INFO (jsonrpc/7) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,36114 (api:52) 29465:2018-03-21 10:43:51,695+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573) 29466:2018-03-21 10:43:52,516+0200 INFO (jsonrpc/2) [api.host] START getStats() from=::ffff:192.168.0.4,49914 (api:46) 29467:2018-03-21 10:43:52,517+0200 INFO (jsonrpc/2) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.0.4,49914, task_id=0a88f0dc-eccf-4d24-9ac2-22e079a1480a (api:46) 29468:2018-03-21 10:43:52,517+0200 INFO (jsonrpc/2) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000487937', 'lastCheck': '3.0', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000455956', 'lastCheck': '2.9', 'valid': True}} from=::ffff:192.168.0.4,49914, task_id=0a88f0dc-eccf-4d24-9ac2-22e079a1480a (api:52) 29469:2018-03-21 10:43:52,518+0200 INFO (jsonrpc/2) [vdsm.api] START multipath_health() from=::ffff:192.168.0.4,49914, task_id=a7a44ff7-a137-41b6-9ad5-2c0b32ab79ec (api:46) 29470:2018-03-21 10:43:52,518+0200 INFO (jsonrpc/2) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.0.4,49914, task_id=a7a44ff7-a137-41b6-9ad5-2c0b32ab79ec (api:52) 29471:2018-03-21 10:43:52,524+0200 INFO (jsonrpc/2) [api.host] FINISH getStats return={'status': {'message': 'Done', 'code': 0}, 'info': {'cpuStatistics': {'11': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '10': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '13': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.93'}, '12': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '15': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '14': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '1': {'cpuUser': '1.13', 'nodeIndex': 1, 'cpuSys': '1.13', 'cpuIdle': '97.74'}, '0': {'cpuUser': '0.33', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.47'}, '3': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '99.93'}, '2': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '5': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '4': {'cpuUser': '0.27', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.60'}, '7': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.33', 'cpuIdle': '99.67'}, '6': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '9': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '8': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.80'}}, 'numaNodeMemFree': {'1': {'memPercent': 7, 'memFree': '15143'}, '0': {'memPercent': 5, 'memFree': 
'15349'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000487937', 'lastCheck': '3.0', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000455956', 'lastCheck': '2.9', 'valid': True}}, 'incomingVmMigrations': 0, 'network': {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621825.13588, 'name': 'enp3s0f0', 'tx': '1091312739', 'txDropped': '0', 'rx': '11914899051', 'rxErrors': '0', 'speed': '100', 'rxDropped': '1443'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621825.13588, 'name': 'ovirtmgmt', 'tx': '1048663107', 'txDropped': '0', 'rx': '11615236520', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621825.13588, 'name': 'lo', 'tx': '58090295494', 'txDropped': '0', 'rx': '58090295494', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621825.13588, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': '66839', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621825.13588, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621825.13588, 'name': 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621825.13588, 'name': 'enp4s0f1', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}}, 'txDropped': '0', 'anonHugePages': '204', 'ksmPages': 100, 'elapsedTime': '76391.35', 'cpuLoad': '0.15', 'cpuSys': '0.20', 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.13', 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31080, 'bootTime': '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': 'active', 'multipathHealth': {}, 'rxDropped': '1443', 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, 'hugepages': defaultdict(<type 'dict'>, {2048: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': '2018-03-21T08:43:52 GMT', 'cpuUser': '0.14', 'memFree': 31336, 'cpuIdle': '99.66', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.73'}} from=::ffff:192.168.0.4,49914 (api:52) 29472:2018-03-21 10:43:52,526+0200 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:573) 29473:2018-03-21 10:43:58,686+0200 INFO (jsonrpc/1) [vdsm.api] START getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49914, task_id=d6d29cf6-c083-4639-91a2-f062eae4e629 (api:46) 29474:2018-03-21 10:43:58,690+0200 INFO (jsonrpc/1) [vdsm.api] FINISH 
getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 6L}} from=::ffff:192.168.0.4,49914, task_id=d6d29cf6-c083-4639-91a2-f062eae4e629 (api:52) 29475:2018-03-21 10:43:58,690+0200 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 seconds (__init__:573) 29476:2018-03-21 10:43:58,697+0200 INFO (jsonrpc/5) [vdsm.api] START getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49920, task_id=9dbd91df-dd9c-469c-8308-bfbfc228937c (api:46) 29477:2018-03-21 10:43:58,703+0200 INFO (jsonrpc/5) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, task_id=9dbd91df-dd9c-469c-8308-bfbfc228937c (api:52) 29478:2018-03-21 10:43:58,704+0200 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 seconds (__init__:573)

grep -n 10:46 vdsm.log | tail -1000

1017:2018-03-21 06:10:46,882+0200 INFO (jsonrpc/2) [api.host] START getStats() from=::ffff:192.168.0.4,49914 (api:46) 1018:2018-03-21 06:10:46,883+0200 INFO (jsonrpc/2) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.0.4,49914, task_id=6e045b00-88b2-469b-80d6-af38303f8c32 (api:46) 1019:2018-03-21 06:10:46,883+0200 INFO (jsonrpc/2) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000332785', 'lastCheck': '7.3', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000203914', 'lastCheck': '5.6', 'valid': True}} from=::ffff:192.168.0.4,49914, task_id=6e045b00-88b2-469b-80d6-af38303f8c32 (api:52) 1020:2018-03-21 06:10:46,884+0200 INFO (jsonrpc/2) [vdsm.api] START multipath_health() from=::ffff:192.168.0.4,49914, task_id=475a5b40-4a43-42ee-9524-2bdd220aed83 (api:46) 1021:2018-03-21 06:10:46,884+0200 INFO (jsonrpc/2) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.0.4,49914, task_id=475a5b40-4a43-42ee-9524-2bdd220aed83 (api:52) 1022:2018-03-21 06:10:46,889+0200 INFO (jsonrpc/2) [api.host] FINISH getStats return={'status': {'message': 'Done', 'code': 0}, 'info': {'cpuStatistics': {'11': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '10': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.67'}, '13': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.66'}, '12': {'cpuUser': '0.33', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.47'}, '15': {'cpuUser': '0.33', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.40'}, 
'14': {'cpuUser': '0.47', 'nodeIndex': 0, 'cpuSys': '0.40', 'cpuIdle': '99.13'}, '1': {'cpuUser': '3.00', 'nodeIndex': 1, 'cpuSys': '1.40', 'cpuIdle': '95.60'}, '0': {'cpuUser': '0.73', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.07'}, '3': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '2': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.66'}, '5': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '4': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': '99.54'}, '7': {'cpuUser': '0.80', 'nodeIndex': 1, 'cpuSys': '0.80', 'cpuIdle': '98.40'}, '6': {'cpuUser': '5.46', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '94.27'}, '9': {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.53'}, '8': {'cpuUser': '1.27', 'nodeIndex': 0, 'cpuSys': '0.47', 'cpuIdle': '98.26'}}, 'numaNodeMemFree': {'1': {'memPercent': 7, 'memFree': '15128'}, '0': {'memPercent': 5, 'memFree': '15366'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000332785', 'lastCheck': '7.3', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000203914', 'lastCheck': '5.6', 'valid': True}}, 'incomingVmMigrations': 0, 'network': {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521605437.203776, 'name': 'enp3s0f0', 'tx': '1082264809', 'txDropped': '0', 'rx': '11907188594', 'rxErrors': '0', 'speed': '100', 'rxDropped': '289'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521605437.203776, 'name': 'ovirtmgmt', 'tx': '1039869554', 'txDropped': '0', 'rx': '11608697383', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521605437.203776, 'name': 'lo', 'tx': '45587320171', 'txDropped': '0', 'rx': '45587320171', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521605437.203776, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': '53620', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521605437.203776, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521605437.203776, 'name': 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521605437.203776, 'name': 'enp4s0f1', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}}, 'txDropped': '0', 'anonHugePages': '204', 'ksmPages': 100, 'elapsedTime': '60005.72', 'cpuLoad': '0.06', 'cpuSys': '0.38', 'diskStats': {'/var/log': {'free': '7357'}, '/var/run/vdsm/': {'free': '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.07', 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31084, 'bootTime': '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': 'active', 'multipathHealth': {}, 'rxDropped': '289', 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, 'hugepages': defaultdict(<type 'dict'>, {2048: 
{'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': '2018-03-21T04:10:46 GMT', 'cpuUser': '0.83', 'memFree': 31340, 'cpuIdle': '98.79', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.80'}} from=::ffff:192.168.0.4,49914 (api:52) 1023:2018-03-21 06:10:46,891+0200 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:573) 13522:2018-03-21 08:10:46,827+0200 INFO (jsonrpc/7) [api.host] START getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) 13523:2018-03-21 08:10:46,828+0200 INFO (jsonrpc/7) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) 13524:2018-03-21 08:10:46,828+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573) 29734:2018-03-21 10:46:00,338+0200 INFO (periodic/2) [vdsm.api] START repoStats(domains=()) from=internal, task_id=af099ce6-0f44-461b-9e8e-99de63b3884f (api:46) 29735:2018-03-21 10:46:00,339+0200 INFO (periodic/2) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000470245', 'lastCheck': '0.8', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000476935', 'lastCheck': '0.7', 'valid': True}} from=internal, task_id=af099ce6-0f44-461b-9e8e-99de63b3884f (api:52) 29736:2018-03-21 10:46:00,339+0200 INFO (periodic/2) [vdsm.api] START multipath_health() from=internal, task_id=4a5709d9-e322-44eb-a8a4-bbe3f5e5e1bd (api:46) 29737:2018-03-21 10:46:00,340+0200 INFO (periodic/2) [vdsm.api] FINISH multipath_health return={} from=internal, task_id=4a5709d9-e322-44eb-a8a4-bbe3f5e5e1bd (api:52) 29738:2018-03-21 10:46:04,637+0200 INFO (jsonrpc/7) [vdsm.api] START getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49914, task_id=5b9300f7-f8c4-4f63-a49c-45ca8119d3b9 (api:46) 29739:2018-03-21 10:46:04,642+0200 INFO (jsonrpc/7) [vdsm.api] FINISH getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 6L}} from=::ffff:192.168.0.4,49914, task_id=5b9300f7-f8c4-4f63-a49c-45ca8119d3b9 (api:52) 29740:2018-03-21 10:46:04,642+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 seconds (__init__:573) 29741:2018-03-21 10:46:04,648+0200 INFO (jsonrpc/1) [vdsm.api] START getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49920, task_id=f4db5bfa-e131-4f9a-baaa-e7555c996ab0 (api:46) 29742:2018-03-21 10:46:04,655+0200 INFO (jsonrpc/1) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': 
{u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, task_id=f4db5bfa-e131-4f9a-baaa-e7555c996ab0 (api:52) 29743:2018-03-21 10:46:04,656+0200 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.00 seconds (__init__:573) 29744:2018-03-21 10:46:05,451+0200 INFO (jsonrpc/5) [api.host] START getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) 29745:2018-03-21 10:46:05,452+0200 INFO (jsonrpc/5) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) 29746:2018-03-21 10:46:05,452+0200 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573) 29747:2018-03-21 10:46:06,903+0200 INFO (jsonrpc/3) [api.host] START getAllVmStats() from=::1,36114 (api:46) 29748:2018-03-21 10:46:06,904+0200 INFO (jsonrpc/3) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,36114 (api:52) 29749:2018-03-21 10:46:06,904+0200 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573) 29750:2018-03-21 10:46:08,791+0200 WARN (vdsm.Scheduler) [Executor] Worker blocked: timeout=60, duration=120 at 0x3604e10> task#=3762 at 0x3541650>, traceback: 29801:2018-03-21 10:46:12,852+0200 INFO (jsonrpc/0) [api.host] START getStats() from=::ffff:192.168.0.4,49914 (api:46) 29802:2018-03-21 10:46:12,853+0200 INFO (jsonrpc/0) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.0.4,49914, task_id=391fdab9-d3c2-4075-9c34-177a07a21ec3 (api:46) 29803:2018-03-21 10:46:12,854+0200 INFO (jsonrpc/0) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000492277', 'lastCheck': '3.3', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000479153', 'lastCheck': '3.2', 'valid': True}} from=::ffff:192.168.0.4,49914, task_id=391fdab9-d3c2-4075-9c34-177a07a21ec3 (api:52) 29804:2018-03-21 10:46:12,854+0200 INFO (jsonrpc/0) [vdsm.api] START multipath_health() from=::ffff:192.168.0.4,49914, task_id=7e53adbc-3974-4f51-bfc8-8fbd3ca6b749 (api:46) 29805:2018-03-21 10:46:12,854+0200 INFO (jsonrpc/0) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.0.4,49914, task_id=7e53adbc-3974-4f51-bfc8-8fbd3ca6b749 (api:52) 29806:2018-03-21 10:46:12,861+0200 INFO (jsonrpc/0) [api.host] FINISH getStats return={'status': {'message': 'Done', 'code': 0}, 'info': {'cpuStatistics': {'11': {'cpuUser': '0.47', 'nodeIndex': 1, 'cpuSys': '0.40', 'cpuIdle': '99.13'}, '10': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.67'}, '13': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '12': {'cpuUser': '0.20', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.53'}, '15': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.60'}, '14': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': 
'99.54'}, '1': {'cpuUser': '1.00', 'nodeIndex': 1, 'cpuSys': '1.33', 'cpuIdle': '97.67'}, '0': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.60'}, '3': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '99.93'}, '2': {'cpuUser': '0.20', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': '99.47'}, '5': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '4': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '7': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.40', 'cpuIdle': '99.53'}, '6': {'cpuUser': '0.47', 'nodeIndex': 0, 'cpuSys': '0.40', 'cpuIdle': '99.13'}, '9': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '8': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': '99.54'}}, 'numaNodeMemFree': {'1': {'memPercent': 7, 'memFree': '15142'}, '0': {'memPercent': 5, 'memFree': '15348'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000492277', 'lastCheck': '3.3', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000479153', 'lastCheck': '3.2', 'valid': True}}, 'incomingVmMigrations': 0, 'network': {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621960.326184, 'name': 'enp3s0f0', 'tx': '1091387790', 'txDropped': '0', 'rx': '11914959714', 'rxErrors': '0', 'speed': '100', 'rxDropped': '1451'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621960.326184, 'name': 'ovirtmgmt', 'tx': '1048735814', 'txDropped': '0', 'rx': '11615289173', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621960.326184, 'name': 'lo', 'tx': '58192068637', 'txDropped': '0', 'rx': '58192068637', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621960.326184, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': '66839', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621960.326184, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621960.326184, 'name': 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621960.326184, 'name': 'enp4s0f1', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}}, 'txDropped': '0', 'anonHugePages': '204', 'ksmPages': 100, 'elapsedTime': '76531.69', 'cpuLoad': '0.17', 'cpuSys': '0.32', 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.00', 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31081, 'bootTime': '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': 'active', 'multipathHealth': {}, 'rxDropped': '1451', 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, 'hugepages': defaultdict(<type 'dict'>, {2048: {'resv_hugepages': 0, 'free_hugepages': 0, 
'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': '2018-03-21T08:46:12 GMT', 'cpuUser': '0.21', 'memFree': 31337, 'cpuIdle': '99.46', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.80'}} from=::ffff:192.168.0.4,49914 (api:52) 29807:2018-03-21 10:46:12,863+0200 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:573) 29808:2018-03-21 10:46:13,829+0200 ERROR (jsonrpc/6) [storage.HSM] Could not connect to storageServer (hsm:2407) 29828:2018-03-21 10:46:13,829+0200 INFO (jsonrpc/6) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 477, 'id': u'461f65a9-3a81-4f3f-a46d-c5ed12520524'}]} from=::ffff:192.168.0.4,49914, flow_id=1435fc81, task_id=0a828d2c-d9f4-4f83-a9e9-7393159d5323 (api:52) 29829:2018-03-21 10:46:13,830+0200 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 125.04 seconds (__init__:573) 29830:2018-03-21 10:46:14,767+0200 INFO (jsonrpc/4) [vdsm.api] START getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49914, task_id=49cd2ae6-7943-47c9-ad86-9e0b7e58bca3 (api:46) 29831:2018-03-21 10:46:14,772+0200 INFO (jsonrpc/4) [vdsm.api] FINISH getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 6L}} from=::ffff:192.168.0.4,49914, task_id=49cd2ae6-7943-47c9-ad86-9e0b7e58bca3 (api:52) 29832:2018-03-21 10:46:14,773+0200 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 seconds (__init__:573) 29833:2018-03-21 10:46:14,815+0200 INFO (jsonrpc/2) [vdsm.api] START getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49920, task_id=19909616-efb4-490d-982c-66eda9ca4381 (api:46) 29834:2018-03-21 10:46:14,822+0200 INFO (jsonrpc/2) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, task_id=19909616-efb4-490d-982c-66eda9ca4381 (api:52) 29835:2018-03-21 10:46:14,823+0200 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 seconds (__init__:573) 29836:2018-03-21 10:46:15,360+0200 INFO (periodic/2) [vdsm.api] START repoStats(domains=()) from=internal, task_id=da74ea9f-164b-450d-b836-6818caa3fdc5 (api:46) 29837:2018-03-21 10:46:15,360+0200 INFO (periodic/2) [vdsm.api] FINISH repoStats 
return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000492277', 'lastCheck': '5.8', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000479153', 'lastCheck': '5.7', 'valid': True}} from=internal, task_id=da74ea9f-164b-450d-b836-6818caa3fdc5 (api:52) 29838:2018-03-21 10:46:15,361+0200 INFO (periodic/2) [vdsm.api] START multipath_health() from=internal, task_id=06245e2b-e8ca-41c4-90bf-4294d0c699b8 (api:46) 29839:2018-03-21 10:46:15,361+0200 INFO (periodic/2) [vdsm.api] FINISH multipath_health return={} from=internal, task_id=06245e2b-e8ca-41c4-90bf-4294d0c699b8 (api:52) 29840:2018-03-21 10:46:20,456+0200 INFO (jsonrpc/7) [api.host] START getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) 29841:2018-03-21 10:46:20,457+0200 INFO (jsonrpc/7) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) 29842:2018-03-21 10:46:20,457+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.01 seconds (__init__:573) 29843:2018-03-21 10:46:21,925+0200 INFO (jsonrpc/1) [api.host] START getAllVmStats() from=::1,36114 (api:46) 29844:2018-03-21 10:46:21,926+0200 INFO (jsonrpc/1) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,36114 (api:52) 29845:2018-03-21 10:46:21,926+0200 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573) 29846:2018-03-21 10:46:24,930+0200 INFO (jsonrpc/5) [vdsm.api] START getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49914, task_id=d964535d-e5eb-42bc-b324-432a05c364da (api:46) 29847:2018-03-21 10:46:24,935+0200 INFO (jsonrpc/5) [vdsm.api] FINISH getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 6L}} from=::ffff:192.168.0.4,49914, task_id=d964535d-e5eb-42bc-b324-432a05c364da (api:52) 29848:2018-03-21 10:46:24,935+0200 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.00 seconds (__init__:573) 29849:2018-03-21 10:46:24,987+0200 INFO (jsonrpc/3) [vdsm.api] START getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49920, task_id=167dcd23-04b5-4de3-9c81-7d25ba56407a (api:46) 29850:2018-03-21 10:46:25,022+0200 INFO (jsonrpc/3) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, 
task_id=167dcd23-04b5-4de3-9c81-7d25ba56407a (api:52) 29851:2018-03-21 10:46:25,023+0200 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.03 seconds (__init__:573) 29852:2018-03-21 10:46:28,010+0200 INFO (jsonrpc/0) [api.host] START getStats() from=::ffff:192.168.0.4,49914 (api:46) 29853:2018-03-21 10:46:28,011+0200 INFO (jsonrpc/0) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.0.4,49914, task_id=011616ce-53c9-4cfc-8b29-247b1be03409 (api:46) 29854:2018-03-21 10:46:28,011+0200 INFO (jsonrpc/0) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000218955', 'lastCheck': '8.4', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000351741', 'lastCheck': '8.4', 'valid': True}} from=::ffff:192.168.0.4,49914, task_id=011616ce-53c9-4cfc-8b29-247b1be03409 (api:52) 29855:2018-03-21 10:46:28,012+0200 INFO (jsonrpc/0) [vdsm.api] START multipath_health() from=::ffff:192.168.0.4,49914, task_id=85b42a8d-1bf1-495d-ad32-a1f9710a6468 (api:46) 29856:2018-03-21 10:46:28,012+0200 INFO (jsonrpc/0) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.0.4,49914, task_id=85b42a8d-1bf1-495d-ad32-a1f9710a6468 (api:52) 29857:2018-03-21 10:46:28,018+0200 INFO (jsonrpc/0) [api.host] FINISH getStats return={'status': {'message': 'Done', 'code': 0}, 'info': {'cpuStatistics': {'11': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.67'}, '10': {'cpuUser': '1.73', 'nodeIndex': 0, 'cpuSys': '0.53', 'cpuIdle': '97.74'}, '13': {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.53'}, '12': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '15': {'cpuUser': '0.67', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.06'}, '14': {'cpuUser': '0.20', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': '99.47'}, '1': {'cpuUser': '1.26', 'nodeIndex': 1, 'cpuSys': '1.33', 'cpuIdle': '97.41'}, '0': {'cpuUser': '0.40', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.53'}, '3': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '2': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.73'}, '5': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '4': {'cpuUser': '13.78', 'nodeIndex': 0, 'cpuSys': '0.80', 'cpuIdle': '85.42'}, '7': {'cpuUser': '0.47', 'nodeIndex': 1, 'cpuSys': '0.67', 'cpuIdle': '98.86'}, '6': {'cpuUser': '1.07', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '98.66'}, '9': {'cpuUser': '1.20', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '98.53'}, '8': {'cpuUser': '0.80', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': '98.87'}}, 'numaNodeMemFree': {'1': {'memPercent': 7, 'memFree': '15136'}, '0': {'memPercent': 5, 'memFree': '15357'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000218955', 'lastCheck': '8.4', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000351741', 'lastCheck': '8.4', 'valid': True}}, 'incomingVmMigrations': 0, 'network': {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621975.346633, 'name': 'enp3s0f0', 'tx': '1091397599', 'txDropped': '0', 'rx': '11914966591', 'rxErrors': '0', 
'speed': '100', 'rxDropped': '1454'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621975.346633, 'name': 'ovirtmgmt', 'tx': '1048745347', 'txDropped': '0', 'rx': '11615294897', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621975.346633, 'name': 'lo', 'tx': '58205583384', 'txDropped': '0', 'rx': '58205583384', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621975.346633, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': '66839', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621975.346633, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621975.346633, 'name': 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621975.346633, 'name': 'enp4s0f1', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}}, 'txDropped': '0', 'anonHugePages': '204', 'ksmPages': 100, 'elapsedTime': '76546.85', 'cpuLoad': '0.18', 'cpuSys': '0.39', 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.20', 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31031, 'bootTime': '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': 'active', 'multipathHealth': {}, 'rxDropped': '1454', 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, 'hugepages': defaultdict(<type 'dict'>, {2048: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': '2018-03-21T08:46:28 GMT', 'cpuUser': '1.38', 'memFree': 31287, 'cpuIdle': '98.23', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.87'}} from=::ffff:192.168.0.4,49914 (api:52) 29858:2018-03-21 10:46:28,020+0200 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:573) 29859:2018-03-21 10:46:29,667+0200 INFO (itmap/0) [IOProcessClient] Starting client ioprocess-575 (__init__:308) 29860:2018-03-21 10:46:29,700+0200 INFO (itmap/1) [IOProcessClient] Starting client ioprocess-576 (__init__:308) 29861:2018-03-21 10:46:29,714+0200 INFO (ioprocess/1711202) [IOProcess] Starting ioprocess (__init__:437) 29862:2018-03-21 10:46:29,725+0200 INFO (ioprocess/1711208) [IOProcess] Starting ioprocess (__init__:437) 29863:2018-03-21 10:46:30,375+0200 INFO (periodic/0) [vdsm.api] START repoStats(domains=()) from=internal, task_id=bdacf1c0-3fcb-43b1-82e2-a041a26d0d7f (api:46) 29864:2018-03-21 10:46:30,376+0200 INFO (periodic/0) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00031111', 'lastCheck': '0.8', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': 
'0.00045594', 'lastCheck': '0.6', 'valid': True}} from=internal, task_id=bdacf1c0-3fcb-43b1-82e2-a041a26d0d7f (api:52) 29865:2018-03-21 10:46:30,376+0200 INFO (periodic/0) [vdsm.api] START multipath_health() from=internal, task_id=3d3cad70-e5c8-49b2-9f5d-92a249cc102d (api:46) 29866:2018-03-21 10:46:30,377+0200 INFO (periodic/0) [vdsm.api] FINISH multipath_health return={} from=internal, task_id=3d3cad70-e5c8-49b2-9f5d-92a249cc102d (api:52) 29867:2018-03-21 10:46:35,108+0200 INFO (jsonrpc/6) [vdsm.api] START getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49914, task_id=304d7e7c-03fc-44d6-be32-d083ada09b30 (api:46) 29868:2018-03-21 10:46:35,114+0200 INFO (jsonrpc/6) [vdsm.api] FINISH getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 6L}} from=::ffff:192.168.0.4,49914, task_id=304d7e7c-03fc-44d6-be32-d083ada09b30 (api:52) 29869:2018-03-21 10:46:35,114+0200 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.00 seconds (__init__:573) 29870:2018-03-21 10:46:35,170+0200 INFO (jsonrpc/4) [vdsm.api] START getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49920, task_id=76736a93-d4c4-4b2f-9848-7b8731fe2b67 (api:46) 29871:2018-03-21 10:46:35,176+0200 INFO (jsonrpc/4) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, task_id=76736a93-d4c4-4b2f-9848-7b8731fe2b67 (api:52) 29872:2018-03-21 10:46:35,177+0200 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 seconds (__init__:573) 29873:2018-03-21 10:46:35,462+0200 INFO (jsonrpc/2) [api.host] START getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) 29874:2018-03-21 10:46:35,462+0200 INFO (jsonrpc/2) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) 29875:2018-03-21 10:46:35,463+0200 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573) 29876:2018-03-21 10:46:36,948+0200 INFO (jsonrpc/7) [api.host] START getAllVmStats() from=::1,36114 (api:46) 29877:2018-03-21 10:46:36,948+0200 INFO (jsonrpc/7) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,36114 (api:52) 29878:2018-03-21 10:46:36,949+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573) 29879:2018-03-21 10:46:43,179+0200 INFO (jsonrpc/1) [api.host] START getStats() 
from=::ffff:192.168.0.4,49914 (api:46) 29880:2018-03-21 10:46:43,180+0200 INFO (jsonrpc/1) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.0.4,49914, task_id=83bbc95c-6d59-4eec-82b1-e8993be28759 (api:46) 29881:2018-03-21 10:46:43,181+0200 INFO (jsonrpc/1) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000486101', 'lastCheck': '3.6', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00043444', 'lastCheck': '3.4', 'valid': True}} from=::ffff:192.168.0.4,49914, task_id=83bbc95c-6d59-4eec-82b1-e8993be28759 (api:52) 29882:2018-03-21 10:46:43,181+0200 INFO (jsonrpc/1) [vdsm.api] START multipath_health() from=::ffff:192.168.0.4,49914, task_id=9b781324-b50b-49a9-bf31-5eca835dc75c (api:46) 29883:2018-03-21 10:46:43,181+0200 INFO (jsonrpc/1) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.0.4,49914, task_id=9b781324-b50b-49a9-bf31-5eca835dc75c (api:52) 29884:2018-03-21 10:46:43,188+0200 INFO (jsonrpc/1) [api.host] FINISH getStats return={'status': {'message': 'Done', 'code': 0}, 'info': {'cpuStatistics': {'11': {'cpuUser': '0.40', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.33'}, '10': {'cpuUser': '0.60', 'nodeIndex': 0, 'cpuSys': '0.47', 'cpuIdle': '98.93'}, '13': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.73'}, '12': {'cpuUser': '1.46', 'nodeIndex': 0, 'cpuSys': '0.60', 'cpuIdle': '97.94'}, '15': {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': '0.33', 'cpuIdle': '99.47'}, '14': {'cpuUser': '0.40', 'nodeIndex': 0, 'cpuSys': '0.47', 'cpuIdle': '99.13'}, '1': {'cpuUser': '1.33', 'nodeIndex': 1, 'cpuSys': '1.93', 'cpuIdle': '96.74'}, '0': {'cpuUser': '0.60', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.13'}, '3': {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': '0.40', 'cpuIdle': '99.40'}, '2': {'cpuUser': '6.99', 'nodeIndex': 0, 'cpuSys': '0.40', 'cpuIdle': '92.61'}, '5': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.67'}, '4': {'cpuUser': '0.27', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.46'}, '7': {'cpuUser': '0.47', 'nodeIndex': 1, 'cpuSys': '0.60', 'cpuIdle': '98.93'}, '6': {'cpuUser': '0.53', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.20'}, '9': {'cpuUser': '0.47', 'nodeIndex': 1, 'cpuSys': '0.33', 'cpuIdle': '99.20'}, '8': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.80'}}, 'numaNodeMemFree': {'1': {'memPercent': 7, 'memFree': '15131'}, '0': {'memPercent': 5, 'memFree': '15357'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000486101', 'lastCheck': '3.6', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00043444', 'lastCheck': '3.4', 'valid': True}}, 'incomingVmMigrations': 0, 'network': {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621990.363726, 'name': 'enp3s0f0', 'tx': '1091404888', 'txDropped': '0', 'rx': '11914970667', 'rxErrors': '0', 'speed': '100', 'rxDropped': '1455'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621990.363726, 'name': 'ovirtmgmt', 'tx': '1048752406', 'txDropped': '0', 'rx': '11615298347', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'lo': {'txErrors': '0', 'state': 
'up', 'sampleTime': 1521621990.363726, 'name': 'lo', 'tx': '58215969455', 'txDropped': '0', 'rx': '58215969455', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521621990.363726, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': '66839', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621990.363726, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621990.363726, 'name': 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621990.363726, 'name': 'enp4s0f1', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}}, 'txDropped': '0', 'anonHugePages': '206', 'ksmPages': 100, 'elapsedTime': '76562.02', 'cpuLoad': '0.18', 'cpuSys': '0.45', 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.27', 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31080, 'bootTime': '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': 'active', 'multipathHealth': {}, 'rxDropped': '1455', 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, 'hugepages': defaultdict(<type 'dict'>, {2048: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': '2018-03-21T08:46:43 GMT', 'cpuUser': '0.87', 'memFree': 31336, 'cpuIdle': '98.68', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '1.20'}} from=::ffff:192.168.0.4,49914 (api:52) 29885:2018-03-21 10:46:43,190+0200 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:573) 29886:2018-03-21 10:46:45,288+0200 INFO (jsonrpc/5) [vdsm.api] START getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49914, task_id=ffe57349-2532-4b64-b32c-8a558273d8e5 (api:46) 29887:2018-03-21 10:46:45,292+0200 INFO (jsonrpc/5) [vdsm.api] FINISH getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 6L}} from=::ffff:192.168.0.4,49914, task_id=ffe57349-2532-4b64-b32c-8a558273d8e5 (api:52) 29888:2018-03-21 10:46:45,293+0200 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.00 seconds (__init__:573) 29889:2018-03-21 10:46:45,298+0200 INFO (jsonrpc/3) [vdsm.api] START getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49920, task_id=b2ea28f0-abf0-4258-8152-6035d93aa5dc (api:46) 29890:2018-03-21 10:46:45,305+0200 INFO (jsonrpc/3) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 6L, 'domains': 
u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, task_id=b2ea28f0-abf0-4258-8152-6035d93aa5dc (api:52) 29891:2018-03-21 10:46:45,306+0200 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.00 seconds (__init__:573) 29892:2018-03-21 10:46:45,395+0200 INFO (periodic/2) [vdsm.api] START repoStats(domains=()) from=internal, task_id=4010bf72-5c86-4daf-a8f9-12ec229a86e4 (api:46) 29893:2018-03-21 10:46:45,395+0200 INFO (periodic/2) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000486101', 'lastCheck': '5.8', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00043444', 'lastCheck': '5.7', 'valid': True}} from=internal, task_id=4010bf72-5c86-4daf-a8f9-12ec229a86e4 (api:52) 29894:2018-03-21 10:46:45,396+0200 INFO (periodic/2) [vdsm.api] START multipath_health() from=internal, task_id=37a1132f-452b-4805-9cc0-b8e74e8d6b02 (api:46) 29895:2018-03-21 10:46:45,396+0200 INFO (periodic/2) [vdsm.api] FINISH multipath_health return={} from=internal, task_id=37a1132f-452b-4805-9cc0-b8e74e8d6b02 (api:52) 29896:2018-03-21 10:46:50,467+0200 INFO (jsonrpc/0) [api.host] START getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) 29897:2018-03-21 10:46:50,468+0200 INFO (jsonrpc/0) [throttled] Current getAllVmStats: {} (throttledlog:103) 29898:2018-03-21 10:46:50,468+0200 INFO (jsonrpc/0) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) 29899:2018-03-21 10:46:50,468+0200 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.01 seconds (__init__:573) 29900:2018-03-21 10:46:51,967+0200 INFO (jsonrpc/6) [api.host] START getAllVmStats() from=::1,36114 (api:46) 29901:2018-03-21 10:46:51,968+0200 INFO (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,36114 (api:52) 29902:2018-03-21 10:46:51,968+0200 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573) 29903:2018-03-21 10:46:55,648+0200 INFO (jsonrpc/4) [vdsm.api] START getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49914, task_id=a975635f-59d7-4f40-b9a7-a5ea7150368e (api:46) 29904:2018-03-21 10:46:55,653+0200 INFO (jsonrpc/4) [vdsm.api] FINISH getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 6L}} from=::ffff:192.168.0.4,49914, task_id=a975635f-59d7-4f40-b9a7-a5ea7150368e (api:52) 29905:2018-03-21 10:46:55,653+0200 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.00 seconds (__init__:573) 29906:2018-03-21 
10:46:55,659+0200 INFO (jsonrpc/2) [vdsm.api] START getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', options=None) from=::ffff:192.168.0.4,49920, task_id=97d4ae6c-3719-4149-85fd-28e924ad8883 (api:46) 29907:2018-03-21 10:46:55,666+0200 INFO (jsonrpc/2) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, task_id=97d4ae6c-3719-4149-85fd-28e924ad8883 (api:52) 29908:2018-03-21 10:46:55,667+0200 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.00 seconds (__init__:573) 29909:2018-03-21 10:46:59,473+0200 INFO (jsonrpc/7) [api.host] START getStats() from=::ffff:192.168.0.4,49914 (api:46) 29910:2018-03-21 10:46:59,474+0200 INFO (jsonrpc/7) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.0.4,49914, task_id=177b551b-9bd3-45a4-84d0-5244630c08e5 (api:46) 29911:2018-03-21 10:46:59,474+0200 INFO (jsonrpc/7) [vdsm.api] FINISH repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000470906', 'lastCheck': '9.9', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000334928', 'lastCheck': '9.7', 'valid': True}} from=::ffff:192.168.0.4,49914, task_id=177b551b-9bd3-45a4-84d0-5244630c08e5 (api:52) 29912:2018-03-21 10:46:59,475+0200 INFO (jsonrpc/7) [vdsm.api] START multipath_health() from=::ffff:192.168.0.4,49914, task_id=30753d5c-9bfd-4b0f-ac83-e32278d82b63 (api:46) 29913:2018-03-21 10:46:59,475+0200 INFO (jsonrpc/7) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.0.4,49914, task_id=30753d5c-9bfd-4b0f-ac83-e32278d82b63 (api:52) 29914:2018-03-21 10:46:59,482+0200 INFO (jsonrpc/7) [api.host] FINISH getStats return={'status': {'message': 'Done', 'code': 0}, 'info': {'cpuStatistics': {'11': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '10': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '13': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '12': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.93'}, '15': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '14': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '1': {'cpuUser': '1.07', 'nodeIndex': 1, 'cpuSys': '1.07', 'cpuIdle': '97.86'}, '0': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '3': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '2': {'cpuUser': '0.00', 'nodeIndex': 0, 
'cpuSys': '0.13', 'cpuIdle': '99.87'}, '5': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '4': {'cpuUser': '0.47', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.26'}, '7': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.47', 'cpuIdle': '99.53'}, '6': {'cpuUser': '0.20', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.73'}, '9': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '8': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.93'}}, 'numaNodeMemFree': {'1': {'memPercent': 7, 'memFree': '15125'}, '0': {'memPercent': 5, 'memFree': '15364'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000470906', 'lastCheck': '9.9', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000334928', 'lastCheck': '9.7', 'valid': True}}, 'incomingVmMigrations': 0, 'network': {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521622005.383289, 'name': 'enp3s0f0', 'tx': '1091412332', 'txDropped': '0', 'rx': '11914976557', 'rxErrors': '0', 'speed': '100', 'rxDropped': '1456'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521622005.383289, 'name': 'ovirtmgmt', 'tx': '1048761326', 'txDropped': '0', 'rx': '11615304242', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521622005.383289, 'name': 'lo', 'tx': '58228460774', 'txDropped': '0', 'rx': '58228460774', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': 1521622005.383289, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': '66839', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521622005.383289, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521622005.383289, 'name': 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': 'down', 'sampleTime': 1521622005.383289, 'name': 'enp4s0f1', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}}, 'txDropped': '0', 'anonHugePages': '208', 'ksmPages': 100, 'elapsedTime': '76578.31', 'cpuLoad': '0.17', 'cpuSys': '0.20', 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.13', 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31078, 'bootTime': '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': 'active', 'multipathHealth': {}, 'rxDropped': '1456', 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, 'hugepages': defaultdict(, {2048: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 
'dateTime': '2018-03-21T08:46:59 GMT', 'cpuUser': '0.15', 'memFree': 31334, 'cpuIdle': '99.65', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.60'}} from=::ffff:192.168.0.4,49914 (api:52)
29915:2018-03-21 10:46:59,484+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:573)

> On 21 Mar 2018, at 11:33, Fred Rolland wrote:
>
> Can you provide the vdsm logs from the host.
> It looks like vdsm failed to connect to the server.
>
> On Wed, Mar 21, 2018 at 11:21 AM, Andrei Verovski wrote:
> Hi,
>
> I have a 2-host oVirt setup with 2 Data Centers, one with a local storage domain (DC #1) for VMs + Export domain on NFS, another with all NFS shared (DC #2).
> Trying to export VMs from DC #1 to DC #2.
> VMs are exported to the DC #1 export domain (NFS), then the domain is put into maintenance mode and detached from DC #1.
>
> Unfortunately, attaching it to DC #2 failed. Logs attached. Tried to run this command twice.
> Workarounds are possible to accomplish this task, yet it would be better to do it the way it was designed.
> Thanks.
>
>
> 2018-03-21 10:43:24,024+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128904) [570ec5d9-fff5-4656-afbd-90b3207a616e] EVENT_ID: USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage Domain node10-NFS-EXPORTS to Data Center StrDataCenter11. (User: admin at internal-authz)
> 2018-03-21 10:46:16,512+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128956) [1435fc81] EVENT_ID: VDS_STORAGES_CONNECTION_FAILED(188), Failed to connect Host node11 to the Storage Domains node10-NFS-EXPORTS.
> 2018-03-21 10:46:16,651+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128955) [a81ffa4a-5a58-41a0-888a-f0edc321609b] EVENT_ID: USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage Domain node10-NFS-EXPORTS to Data Center StrDataCenter11. (User: admin at internal-authz)
>
>
> tail -n 1000 engine.log | grep 570ec5d9-fff5-4656-afbd-90b3207a616e
> 2018-03-21 10:41:14,643+02 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (default task-2) [570ec5d9-fff5-4656-afbd-90b3207a616e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', sharedLocks=''}'
> 2018-03-21 10:41:16,129+02 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128904) [570ec5d9-fff5-4656-afbd-90b3207a616e] Running command: AttachStorageDomainToPoolCommand internal: false. Entities affected : ID: 1d7208ce-d3a1-4406-9638-fe7051562994 Type: StorageAction group MANIPULATE_STORAGE_DOMAIN with role type ADMIN, ID: 80cc922f-8dea-4fed-b951-1060ba116ad5 Type: StoragePoolAction group MANIPULATE_STORAGE_DOMAIN with role type ADMIN
> 2018-03-21 10:43:23,564+02 ERROR [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128904) [570ec5d9-fff5-4656-afbd-90b3207a616e] Cannot connect storage connection server, aborting attach storage domain operation.
> 2018-03-21 10:43:23,567+02 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedThreadFactory-engine-Thread-128904) [570ec5d9-fff5-4656-afbd-90b3207a616e] Command [id=921ca7cd-4f93-46aa-8de2-91b13b8f96cb]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: StoragePoolIsoMapId:{storagePoolId='80cc922f-8dea-4fed-b951-1060ba116ad5', storageId='1d7208ce-d3a1-4406-9638-fe7051562994'}. > 2018-03-21 10:43:24,024+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128904) [570ec5d9-fff5-4656-afbd-90b3207a616e] EVENT_ID: USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage Domain node10-NFS-EXPORTS to Data Center StrDataCenter11. (User: admin at internal-authz) > 2018-03-21 10:43:24,114+02 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128904) [570ec5d9-fff5-4656-afbd-90b3207a616e] Lock freed to object 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', sharedLocks=''}' > > > > [root at node00 ovirt-engine]# tail -n 1000 engine.log | grep a81ffa4a-5a58-41a0-888a-f0edc321609b > 2018-03-21 10:44:11,025+02 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (default task-16) [a81ffa4a-5a58-41a0-888a-f0edc321609b] Lock Acquired to object 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', sharedLocks=''}' > 2018-03-21 10:44:11,236+02 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128955) [a81ffa4a-5a58-41a0-888a-f0edc321609b] Running command: AttachStorageDomainToPoolCommand internal: false. Entities affected : ID: 1d7208ce-d3a1-4406-9638-fe7051562994 Type: StorageAction group MANIPULATE_STORAGE_DOMAIN with role type ADMIN, ID: 80cc922f-8dea-4fed-b951-1060ba116ad5 Type: StoragePoolAction group MANIPULATE_STORAGE_DOMAIN with role type ADMIN > 2018-03-21 10:46:16,567+02 ERROR [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128955) [a81ffa4a-5a58-41a0-888a-f0edc321609b] Cannot connect storage connection server, aborting attach storage domain operation. > 2018-03-21 10:46:16,568+02 INFO [org.ovirt.engine.core.bll.CommandCompensator] (EE-ManagedThreadFactory-engine-Thread-128955) [a81ffa4a-5a58-41a0-888a-f0edc321609b] Command [id=b5c25100-1a8a-4db0-9509-99cfa60995b1]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: StoragePoolIsoMapId:{storagePoolId='80cc922f-8dea-4fed-b951-1060ba116ad5', storageId='1d7208ce-d3a1-4406-9638-fe7051562994'}. > 2018-03-21 10:46:16,651+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128955) [a81ffa4a-5a58-41a0-888a-f0edc321609b] EVENT_ID: USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage Domain node10-NFS-EXPORTS to Data Center StrDataCenter11. 
(User: admin at internal-authz)
> 2018-03-21 10:46:16,681+02 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (EE-ManagedThreadFactory-engine-Thread-128955) [a81ffa4a-5a58-41a0-888a-f0edc321609b] Lock freed to object 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', sharedLocks=''}'
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

From sabose at redhat.com Wed Mar 21 10:12:54 2018
From: sabose at redhat.com (Sahina Bose)
Date: Wed, 21 Mar 2018 15:42:54 +0530
Subject: [ovirt-users] Adding host to hosted-engine /w gluster cluster. (On ovirt Node 4.2.1.1)
In-Reply-To: <9352191a-76dd-13ed-463a-61033dc3fe6a@andrewswireless.net>
References: <9352191a-76dd-13ed-463a-61033dc3fe6a@andrewswireless.net>
Message-ID:

On Tue, Mar 20, 2018 at 9:41 PM, Hanson Turner wrote:

> Hi Guys,
>
> I've a 3 machine pool running gluster with replica 3 and want to add two more machines.
>
> This would change to a replica 5...

Adding 2 more nodes to the cluster will not change it to a replica 5. replica 3 is a configuration on the gluster volume. I assume you don't need a replica 5, but just to add more nodes (and possibly new gluster volumes) to the cluster?

> In ovirt 4.0, I'd done everything manually. No problem there.
>
> In ovirt 4.2, I'd used the wizard for the hosted-engine. It looks like the fourth node has been added to the pool but will not go active. It complains gluster isn't running (though I've not manually configured /dev/sdb for gluster). Host install+deploy fails. Host can go into maintenance w/o issue. (Meaning the host has been added to the cluster, but isn't operational.)

Are the repos configured correctly on the new nodes? Does the oVirt cluster where the nodes are being added have "Enable Gluster Service" enabled?

> What do I need to do to get the node up and running with gluster syncing properly? Manually restarting gluster tells me there are no peers and no volumes.
>
> Do we have a wizard for this too? Or do I need to go find the setup scripts and configure hosts 4 + 5 manually and run the deploy again?

The host addition flow should take care of installing gluster. Can you share the engine log from when the host was added to when it's reported non-operational?

> Thanks,
>
> Hanson
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
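To illustrate the point above, the replica count lives on the gluster volume, not on the pool of nodes, so pool membership and volume layout are checked separately; a minimal sketch (the volume name "data" and the new host name are placeholders, not taken from the thread):

# membership of the trusted storage pool
gluster peer status

# replica layout of one volume: look for "Type: Replicate"
# and "Number of Bricks: 1 x 3 = 3"
gluster volume info data

# probing a new node enlarges the pool but leaves every
# existing volume's replica count untouched
gluster peer probe host4.example.com

New bricks would still have to be added explicitly (or a new volume created) before the added nodes hold any data.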
From fschmid at ubimet.com Wed Mar 21 10:24:55 2018
From: fschmid at ubimet.com (Florian Schmid)
Date: Wed, 21 Mar 2018 10:24:55 +0000 (UTC)
Subject: [ovirt-users] GPG Key of evilissimo repo for ovirt-guest-agent is expired
In-Reply-To: <20180321105932.526d1198@fiorina>
References: <687643086.7795430.1521564159592.JavaMail.zimbra@ubimet.com> <20180321105932.526d1198@fiorina>
Message-ID: <2026987727.7884119.1521627895285.JavaMail.zimbra@ubimet.com>

Hi,

thank you very much for the answers. I also wrote Mr Feenstra.

BR Florian

--------------------------------------------------------------------------------------------------------------------
UBIMET GmbH - weather matters
Ing. Florian Schmid · IT Infrastruktur
Austria A-1220 Wien · Donau-City-Straße 11 · Tel +43 1 263 11 22 DW 469 · Fax +43 1 263 11 22 219
fschmid at ubimet.com · www.ubimet.com · Mobile: +43 664 8323379
Sitz: Wien · Firmenbuchgericht: Handelsgericht Wien · FN 248415 t
--------------------------------------------------------------------------------------------------------------------
The information contained in this message (including any attachments) is confidential and may be legally privileged or otherwise protected from disclosure. This message is intended solely for the addressee(s). If you are not the intended recipient, please notify the sender by return e-mail and delete this message from your system. Any unauthorized use, reproduction, or dissemination of this message is strictly prohibited. Please note that e-mails are susceptible to change. UBIMET GmbH shall not be liable for the improper or incomplete transmission of the information contained in this communication, nor shall it be liable for any delay in its receipt. UBIMET GmbH accepts no liability for loss or damage caused by software viruses and you are advised to carry out a virus check on any attachments contained in this message.

----- Original Message -----
From: "Tomáš Golembiovský"
To: "Florian Schmid"
CC: "users", "Vinzenz Feenstra"
Sent: Wednesday, 21 March 2018 10:59:32
Subject: Re: [ovirt-users] GPG Key of evilissimo repo for ovirt-guest-agent is expired

Hi,

On Tue, 20 Mar 2018 16:42:39 +0000 (UTC)
Florian Schmid wrote:

> Hi,
>
> it looks like for this repo, the GPG key is expired.
> http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04/
>
> Does someone know whom I should contact so that this key will be renewed?

The repository belongs to Vinzenz Feenstra, as you can see in the package metadata.

> Or does someone know another repo where I can download the latest ovirt-guest-agent for Ubuntu 16.04?

You can get ovirt-guest-agent packages from the Debian repository:

https://packages.debian.org/search?suite=all&searchon=names&keywords=ovirt-guest-agent

Tomas

>
> BR Florian
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

--
Tomáš Golembiovský
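For anyone hitting the same expired-key error before switching repos, the symptom and a stop-gap can be sketched roughly like this on an Ubuntu 16.04 guest (the exact "expired" wording of apt-key depends on the gnupg version, and the sources.list filename is a guess, not taken from the thread):

# apt-get update fails with a KEYEXPIRED / EXPKEYSIG GPG error for the repo;
# list the trusted keys and look for the expired one
apt-key list | grep -i -B1 -A1 expired

# disable the evilissimo repo until its key is renewed
mv /etc/apt/sources.list.d/evilissimo.list /etc/apt/sources.list.d/evilissimo.list.disabled
apt-get update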
From tgolembi at redhat.com Wed Mar 21 10:32:52 2018
From: tgolembi at redhat.com (Tomáš Golembiovský)
Date: Wed, 21 Mar 2018 11:32:52 +0100
Subject: [ovirt-users] VM guest agent
In-Reply-To: <97548a92-ad64-7968-43b9-9167bc41e3a0@hs-bremen.de>
References: <1520808989.18402.58.camel@province-sud.nc> <97548a92-ad64-7968-43b9-9167bc41e3a0@hs-bremen.de>
Message-ID: <20180321113252.14da142a@fiorina>

Hi,

On Mon, 12 Mar 2018 10:48:48 +0100
Oliver Riesener wrote:

> Hi,
> on Debian stretch the problem is the old version of the agent from the stretch repository.
> I downloaded 1.0.13 from the Debian testing repo as a *.deb file.
> With these new versions of the guest agent there is also a udev rules issue.
> The serial channels have been renamed and the rules didn't match for ovirt.

thanks for noticing. I opened a bug on Debian here:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=893698

Tomas

> See my install script, as attachment.
> Cheers.
>
> On 11.03.2018 23:56, Nicolas Vaye wrote:
> > Hello,
> >
> > I have installed one oVirt platform with 2 nodes and 1 HE, version 4.2.1.7-1
> >
> > It seems to work fine, but I would like more information on the guest agent.
> > For the HE, the guest agent seems to be OK; on this vm I've spotted that the ovirt-guest-agent and qemu-guest-agent are installed.
> >
> > I have 2 VMs, 1 debian 9 and 1 RHEL 6.5. I've tried to install the same service on each VM, but the result is the same:
> > no info about IP, fqdn, or apps installed for these vms, and there is an orange ! for each vm on the web ui (indicating that I need to install the latest guest agent).
> >
> > I have tried different tests with spice-vdagent, ovirt-guest-agent and qemu-guest-agent, but no way.
> >
> > ovirt-guest-agent doesn't start on debian 9 and RHEL 6.5:
> > MainThread::INFO::2018-03-11 22:46:02,984::ovirt-guest-agent::59::root::Starting oVirt guest agent
> > MainThread::ERROR::2018-03-11 22:46:02,986::ovirt-guest-agent::141::root::Unhandled exception in oVirt guest agent!
> > Traceback (most recent call last):
> >   File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 135, in <module>
> >     agent.run(daemon, pidfile)
> >   File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 65, in run
> >     self.agent = LinuxVdsAgent(config)
> >   File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 472, in __init__
> >     AgentLogicBase.__init__(self, config)
> >   File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 188, in __init__
> >     self.vio = VirtIoChannel(config.get("virtio", "device"))
> >   File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 153, in __init__
> >     self._stream = VirtIoStream(vport_name)
> >   File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 134, in __init__
> >     self._vport = os.open(vport_name, os.O_RDWR)
> > OSError: [Errno 2] No such file or directory: '/dev/virtio-ports/com.redhat.rhevm.vdsm'
> >
> > Can I have help for this problem ?
> >
> > Thanks.
> >
> > Nicolas VAYE
> > DSI - Nouméa
> > NEW CALEDONIA
> > _______________________________________________
> > Users mailing list
> > Users at ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> --
> Mit freundlichem Gruß
>
> Oliver Riesener
>
> --
> Hochschule Bremen
> Elektrotechnik und Informatik
> Oliver Riesener
> Neustadtswall 30
> D-28199 Bremen
>
> Tel: 0421 5905-2405, Fax: -2400
> e-mail: oliver.riesener at hs-bremen.de

--
Tomáš Golembiovský
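The OSError above is the agent failing to open its virtio serial channel; the udev-rule mismatch Oliver mentions can be checked from inside the guest with standard tools (the vport device name below is an example; on a VM with a single channel it is usually vport0p1):

# symlink the agent expects, normally created by a udev rule
ls -l /dev/virtio-ports/

# names of the channels the kernel actually sees
cat /sys/class/virtio-ports/vport*/name

# what udev knows about one port (example device name)
udevadm info --query=all --name=/dev/vport0p1

If a port named com.redhat.rhevm.vdsm shows up under /sys but the symlink under /dev/virtio-ports is missing, the shipped udev rule is not matching the renamed channel; if no port is listed at all, the VM was started without the guest-agent channel defined.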
From tgolembi at redhat.com Wed Mar 21 10:34:08 2018
From: tgolembi at redhat.com (Tomáš Golembiovský)
Date: Wed, 21 Mar 2018 11:34:08 +0100
Subject: [ovirt-users] VM guest agent
In-Reply-To: <1520808989.18402.58.camel@province-sud.nc>
References: <1520808989.18402.58.camel@province-sud.nc>
Message-ID: <20180321113408.53ba8749@fiorina>

Hi,

On Sun, 11 Mar 2018 22:56:32 +0000
Nicolas Vaye wrote:

> Hello,
>
> I have installed one oVirt platform with 2 nodes and 1 HE, version 4.2.1.7-1
>
> It seems to work fine, but I would like more information on the guest agent.
> For the HE, the guest agent seems to be OK; on this vm I've spotted that the ovirt-guest-agent and qemu-guest-agent are installed.
>
> I have 2 VMs, 1 debian 9 and 1 RHEL 6.5. I've tried to install the same service on each VM, but the result is the same:
> no info about IP, fqdn, or apps installed for these vms, and there is an orange ! for each vm on the web ui (indicating that I need to install the latest guest agent).

What version of the guest agent do you have installed on RHEL 6.5?

Tomas

> I have tried different tests with spice-vdagent, ovirt-guest-agent and qemu-guest-agent, but no way.
>
> ovirt-guest-agent doesn't start on debian 9 and RHEL 6.5:
> MainThread::INFO::2018-03-11 22:46:02,984::ovirt-guest-agent::59::root::Starting oVirt guest agent
> MainThread::ERROR::2018-03-11 22:46:02,986::ovirt-guest-agent::141::root::Unhandled exception in oVirt guest agent!
> Traceback (most recent call last):
>   File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 135, in <module>
>     agent.run(daemon, pidfile)
>   File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 65, in run
>     self.agent = LinuxVdsAgent(config)
>   File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 472, in __init__
>     AgentLogicBase.__init__(self, config)
>   File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 188, in __init__
>     self.vio = VirtIoChannel(config.get("virtio", "device"))
>   File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 153, in __init__
>     self._stream = VirtIoStream(vport_name)
>   File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 134, in __init__
>     self._vport = os.open(vport_name, os.O_RDWR)
> OSError: [Errno 2] No such file or directory: '/dev/virtio-ports/com.redhat.rhevm.vdsm'
>
> Can I have help for this problem ?
>
> Thanks.
>
> Nicolas VAYE
> DSI - Nouméa
> NEW CALEDONIA
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

--
Tomáš Golembiovský

From frolland at redhat.com Wed Mar 21 11:29:54 2018
From: frolland at redhat.com (Fred Rolland)
Date: Wed, 21 Mar 2018 13:29:54 +0200
Subject: [ovirt-users] Attach Export Domain to another Data Center - Failure
In-Reply-To: <53908113-1739-44A9-BE0A-3B4BDB22F230@starlett.lv>
References: <53908113-1739-44A9-BE0A-3B4BDB22F230@starlett.lv>
Message-ID:

node11 fails to mount the export domain from node10. You can try manually to see if you have access from node11.
Look at this page to debug the NFS connection:
https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/

On Wed, Mar 21, 2018 at 12:04 PM, Andrei Verovski wrote:

> Hi,
>
> Errors occurred at 10.43 AM and 10.46 AM
> node00.starlett.lv - 192.168.0.4 - oVirt host engine (separate PC, not hosted)
> node10.starlett.lv - 192.168.0.5 - host #1 of DC #1, export domain from which was detached from DC #1
> node11.starlett.lv - 192.168.0.6 - host #1 of DC #2
>
> Logs from DC #2 node11, to which I'm trying to attach the export domain located at NFS share node10.
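A minimal manual check of that access could look like the following, run as root on node11 (the mount point is arbitrary; the export path node10.starlett.lv:/vmdata/nfs/exports is taken from the connectStorageServer parameters in the vdsm.log below):

# does node10 export anything to this host at all?
showmount -e node10.starlett.lv

# try the same mount vdsm would attempt
mkdir -p /mnt/nfs-check
mount -t nfs node10.starlett.lv:/vmdata/nfs/exports /mnt/nfs-check
touch /mnt/nfs-check/write-test && rm /mnt/nfs-check/write-test
umount /mnt/nfs-check

If the mount itself hangs or times out, as the "Worker blocked ... StoragePool.connectStorageServer" warning in the log below suggests, the usual suspects from the linked troubleshooting page are the export options and firewall on node10 and the vdsm:kvm (36:36) ownership of the exported directory.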
> > *grep 570ec5d9-fff5-4656-afbd-90b3207a616e* >> > within vdsm.log returned nothing, so I did > > > *grep -n 10:43 vdsm.log | tail -1000* > 1011:2018-03-21 06:10:43,077+0200 INFO (jsonrpc/3) [api.host] START > getAllVmStats() from=::1,36114 (api:46) > 1012:2018-03-21 06:10:43,077+0200 INFO (jsonrpc/3) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::1,36114 (api:52) > 1013:2018-03-21 06:10:43,077+0200 INFO (jsonrpc/3) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.01 > seconds (__init__:573) > 1014:2018-03-21 06:10:43,868+0200 INFO (jsonrpc/7) [api.host] START > getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) > 1015:2018-03-21 06:10:43,868+0200 INFO (jsonrpc/7) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) > 1016:2018-03-21 06:10:43,868+0200 INFO (jsonrpc/7) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 19729:2018-03-21 09:10:43,641+0200 INFO (jsonrpc/5) [api.host] START > getAllVmStats() from=::1,36114 (api:46) > 19730:2018-03-21 09:10:43,641+0200 INFO (jsonrpc/5) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::1,36114 (api:52) > 19731:2018-03-21 09:10:43,642+0200 INFO (jsonrpc/5) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 29302:2018-03-21 10:43:00,085+0200 INFO (periodic/2) [vdsm.api] START > repoStats(domains=()) from=internal, task_id=7b690c8a-5470-44da-a8f3-7e7e9b018e88 > (api:46) > 29303:2018-03-21 10:43:00,085+0200 INFO (periodic/2) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000477038', > 'lastCheck': '0.5', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000498252', 'lastCheck': '0.4', 'valid': True}} from=internal, > task_id=7b690c8a-5470-44da-a8f3-7e7e9b018e88 (api:52) > 29304:2018-03-21 10:43:00,086+0200 INFO (periodic/2) [vdsm.api] START > multipath_health() from=internal, task_id=9a625d43-a03c-429c-856f-9aa8e5ff65b5 > (api:46) > 29305:2018-03-21 10:43:00,086+0200 INFO (periodic/2) [vdsm.api] FINISH > multipath_health return={} from=internal, task_id=9a625d43-a03c-429c-856f-9aa8e5ff65b5 > (api:52) > 29306:2018-03-21 10:43:05,382+0200 INFO (jsonrpc/5) [api.host] START > getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) > 29307:2018-03-21 10:43:05,383+0200 INFO (jsonrpc/5) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) > 29308:2018-03-21 10:43:05,383+0200 INFO (jsonrpc/5) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 29309:2018-03-21 10:43:06,248+0200 INFO (jsonrpc/3) [api.host] START > getStats() from=::ffff:192.168.0.4,49914 (api:46) > 29310:2018-03-21 10:43:06,249+0200 INFO (jsonrpc/3) [vdsm.api] START > repoStats(domains=()) from=::ffff:192.168.0.4,49914, > task_id=f80f9bb8-2e84-4f45-ab58-09a88e808cf3 (api:46) > 29311:2018-03-21 10:43:06,250+0200 INFO (jsonrpc/3) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000477038', > 'lastCheck': 
'6.7', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000498252', 'lastCheck': '6.6', 'valid': True}} > from=::ffff:192.168.0.4,49914, task_id=f80f9bb8-2e84-4f45-ab58-09a88e808cf3 > (api:52) > 29312:2018-03-21 10:43:06,250+0200 INFO (jsonrpc/3) [vdsm.api] START > multipath_health() from=::ffff:192.168.0.4,49914, > task_id=ddd7064d-cd12-43bf-a177-b2a93cc9035e (api:46) > 29313:2018-03-21 10:43:06,250+0200 INFO (jsonrpc/3) [vdsm.api] FINISH > multipath_health return={} from=::ffff:192.168.0.4,49914, > task_id=ddd7064d-cd12-43bf-a177-b2a93cc9035e (api:52) > 29314:2018-03-21 10:43:06,256+0200 INFO (jsonrpc/3) [api.host] FINISH > getStats return={'status': {'message': 'Done', 'code': 0}, 'info': > {'cpuStatistics': {'11': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': > '0.27', 'cpuIdle': '99.60'}, '10': {'cpuUser': '0.13', 'nodeIndex': 0, > 'cpuSys': '0.40', 'cpuIdle': '99.47'}, '13': {'cpuUser': '0.07', > 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '12': {'cpuUser': > '0.13', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.67'}, '15': > {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': '0.47', 'cpuIdle': '99.33'}, > '14': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': > '99.60'}, '1': {'cpuUser': '1.13', 'nodeIndex': 1, 'cpuSys': '1.60', > 'cpuIdle': '97.27'}, '0': {'cpuUser': '0.47', 'nodeIndex': 0, 'cpuSys': > '0.27', 'cpuIdle': '99.26'}, '3': {'cpuUser': '0.07', 'nodeIndex': 1, > 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '2': {'cpuUser': '0.33', > 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.54'}, '5': {'cpuUser': > '0.13', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.74'}, '4': > {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.67'}, > '7': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': '0.60', 'cpuIdle': > '99.27'}, '6': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.33', > 'cpuIdle': '99.54'}, '9': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': > '0.27', 'cpuIdle': '99.66'}, '8': {'cpuUser': '0.13', 'nodeIndex': 0, > 'cpuSys': '0.33', 'cpuIdle': '99.54'}}, 'numaNodeMemFree': {'1': > {'memPercent': 7, 'memFree': '15142'}, '0': {'memPercent': 5, 'memFree': > '15349'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': > True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, > 'version': 4, 'acquired': True, 'delay': '0.000477038', 'lastCheck': '6.7', > 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000498252', > 'lastCheck': '6.6', 'valid': True}}, 'incomingVmMigrations': 0, 'network': > {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621780.072418, 'name': 'enp3s0f0', 'tx': '1091287978', 'txDropped': > '0', 'rx': '11914868563', 'rxErrors': '0', 'speed': '100', 'rxDropped': > '1439'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621780.072418, 'name': 'ovirtmgmt', 'tx': '1048639112', 'txDropped': > '0', 'rx': '11615209092', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621780.072418, 'name': 'lo', 'tx': '58057075945', 'txDropped': '0', > 'rx': '58057075945', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621780.072418, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': > '66839', 'rxErrors': '0', 'speed': '1000', 
'rxDropped': '0'}, > ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': > 1521621780.072418, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', > 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': > {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621780.072418, 'name': > 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', > 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': > 'down', 'sampleTime': 1521621780.072418, 'name': 'enp4s0f1', 'tx': '0', > 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}}, 'txDropped': '0', 'anonHugePages': '204', 'ksmPages': 100, > 'elapsedTime': '76345.09', 'cpuLoad': '0.16', 'cpuSys': '0.36', > 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': > '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.07', > 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, > 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31081, 'bootTime': > '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, > 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': > 'active', 'multipathHealth': {}, 'rxDropped': '1439', > 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, > 'hugepages': defaultdict(, {2048: {'resv_hugepages': 0, > 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, > 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, > 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, > 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': > 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': > '2018-03-21T08:43:06 GMT', 'cpuUser': '0.21', 'memFree': 31337, 'cpuIdle': > '99.43', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.93'}} > from=::ffff:192.168.0.4,49914 (api:52) > 29315:2018-03-21 10:43:06,258+0200 INFO (jsonrpc/3) > [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds > (__init__:573) > 29316:2018-03-21 10:43:06,632+0200 INFO (jsonrpc/0) [api.host] START > getAllVmStats() from=::1,36114 (api:46) > 29317:2018-03-21 10:43:06,632+0200 INFO (jsonrpc/0) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::1,36114 (api:52) > 29318:2018-03-21 10:43:06,633+0200 INFO (jsonrpc/0) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 29319:2018-03-21 10:43:07,415+0200 INFO (jsonrpc/4) [vdsm.api] START > getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49914, task_id=276c13c7-61ec-4782-a0d9-5cd0f0b0a729 > (api:46) > 29320:2018-03-21 10:43:07,420+0200 INFO (jsonrpc/4) [vdsm.api] FINISH > getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': > 6L}} from=::ffff:192.168.0.4,49914, task_id=276c13c7-61ec-4782-a0d9-5cd0f0b0a729 > (api:52) > 29321:2018-03-21 10:43:07,420+0200 INFO (jsonrpc/4) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 > seconds (__init__:573) > 29322:2018-03-21 10:43:07,427+0200 INFO (jsonrpc/6) [vdsm.api] START > getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49920, task_id=344b18df-a428-429a-b49f-381f901d1618 > (api:46) > 29323:2018-03-21 10:43:07,434+0200 INFO (jsonrpc/6) [vdsm.api] FINISH > getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': > 
u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', > 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3- > 85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', > 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', > 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', > 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': > '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': > '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, > task_id=344b18df-a428-429a-b49f-381f901d1618 (api:52) > 29324:2018-03-21 10:43:07,435+0200 INFO (jsonrpc/6) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 > seconds (__init__:573) > 29325:2018-03-21 10:43:13,839+0200 WARN (vdsm.Scheduler) [Executor] > Worker blocked: {'params': {u'connectionParams': [{u'id': u'461f65a9-3a81-4f3f-a46d-c5ed12520524', > u'connection': u'node10.starlett.lv:/vmdata/nfs/exports', u'iqn': u'', > u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': > '********', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', > u'domainType': 1}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', > 'id': u'5f98dfd5-998f-44a6-8594-e5f40144b0ed'} at 0x4076150> timeout=60, > duration=120 at 0x402fe50> task#=3754 at 0x353e690>, traceback: > 29376:2018-03-21 10:43:15,104+0200 INFO (periodic/2) [vdsm.api] START > repoStats(domains=()) from=internal, task_id=672a883e-1d60-4981-9eee-bdcac2ec30c1 > (api:46) > 29377:2018-03-21 10:43:15,105+0200 INFO (periodic/2) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000488777', > 'lastCheck': '5.6', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000481962', 'lastCheck': '5.4', 'valid': True}} from=internal, > task_id=672a883e-1d60-4981-9eee-bdcac2ec30c1 (api:52) > 29378:2018-03-21 10:43:15,105+0200 INFO (periodic/2) [vdsm.api] START > multipath_health() from=internal, task_id=55f1b797-da2a-4b07-affe-5b9e41a74307 > (api:46) > 29379:2018-03-21 10:43:15,106+0200 INFO (periodic/2) [vdsm.api] FINISH > multipath_health return={} from=internal, task_id=55f1b797-da2a-4b07-affe-5b9e41a74307 > (api:52) > 29380:2018-03-21 10:43:17,645+0200 INFO (jsonrpc/7) [vdsm.api] START > getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49914, task_id=52a6ddeb-3c20-4f3f-b27d-94d19580b473 > (api:46) > 29381:2018-03-21 10:43:17,649+0200 INFO (jsonrpc/7) [vdsm.api] FINISH > getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': > 6L}} from=::ffff:192.168.0.4,49914, task_id=52a6ddeb-3c20-4f3f-b27d-94d19580b473 > (api:52) > 29382:2018-03-21 10:43:17,650+0200 INFO (jsonrpc/7) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 > seconds (__init__:573) > 29383:2018-03-21 10:43:17,656+0200 INFO (jsonrpc/2) [vdsm.api] START > 
getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49920, task_id=1a103883-6a6a-4d61-ade6-96559591ddb0 > (api:46) > 29384:2018-03-21 10:43:17,663+0200 INFO (jsonrpc/2) [vdsm.api] FINISH > getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', > 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3- > 85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', > 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', > 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', > 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': > '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': > '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, > task_id=1a103883-6a6a-4d61-ade6-96559591ddb0 (api:52) > 29385:2018-03-21 10:43:17,663+0200 INFO (jsonrpc/2) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 > seconds (__init__:573) > 29386:2018-03-21 10:43:18,882+0200 ERROR (jsonrpc/1) [storage.HSM] Could > not connect to storageServer (hsm:2407) > 29406:2018-03-21 10:43:18,883+0200 INFO (jsonrpc/1) [vdsm.api] FINISH > connectStorageServer return={'statuslist': [{'status': 477, 'id': > u'461f65a9-3a81-4f3f-a46d-c5ed12520524'}]} from=::ffff:192.168.0.4,49914, > flow_id=4a53b512, task_id=e598cbe0-cde8-4c73-b526-4398df05e67f (api:52) > 29407:2018-03-21 10:43:18,883+0200 INFO (jsonrpc/1) > [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer > succeeded in 125.04 seconds (__init__:573) > 29408:2018-03-21 10:43:20,387+0200 INFO (jsonrpc/5) [api.host] START > getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) > 29409:2018-03-21 10:43:20,388+0200 INFO (jsonrpc/5) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) > 29410:2018-03-21 10:43:20,388+0200 INFO (jsonrpc/5) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.01 > seconds (__init__:573) > 29411:2018-03-21 10:43:21,497+0200 INFO (jsonrpc/3) [api.host] START > getStats() from=::ffff:192.168.0.4,49914 (api:46) > 29412:2018-03-21 10:43:21,498+0200 INFO (jsonrpc/3) [vdsm.api] START > repoStats(domains=()) from=::ffff:192.168.0.4,49914, > task_id=a7469c1f-3d6d-42e7-8798-5c5335da4a9b (api:46) > 29413:2018-03-21 10:43:21,498+0200 INFO (jsonrpc/3) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000445096', > 'lastCheck': '1.9', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000296586', 'lastCheck': '1.8', 'valid': True}} > from=::ffff:192.168.0.4,49914, task_id=a7469c1f-3d6d-42e7-8798-5c5335da4a9b > (api:52) > 29414:2018-03-21 10:43:21,499+0200 INFO (jsonrpc/3) [vdsm.api] START > multipath_health() from=::ffff:192.168.0.4,49914, > task_id=9fa5dae3-e2bd-4186-9a1d-afb0aa6e2511 (api:46) > 
29415:2018-03-21 10:43:21,499+0200 INFO (jsonrpc/3) [vdsm.api] FINISH > multipath_health return={} from=::ffff:192.168.0.4,49914, > task_id=9fa5dae3-e2bd-4186-9a1d-afb0aa6e2511 (api:52) > 29416:2018-03-21 10:43:21,505+0200 INFO (jsonrpc/3) [api.host] FINISH > getStats return={'status': {'message': 'Done', 'code': 0}, 'info': > {'cpuStatistics': {'11': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': > '0.07', 'cpuIdle': '99.86'}, '10': {'cpuUser': '0.07', 'nodeIndex': 0, > 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '13': {'cpuUser': '0.07', > 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '12': {'cpuUser': > '0.33', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.47'}, '15': > {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, > '14': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': > '99.60'}, '1': {'cpuUser': '1.00', 'nodeIndex': 1, 'cpuSys': '1.00', > 'cpuIdle': '98.00'}, '0': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': > '0.07', 'cpuIdle': '99.86'}, '3': {'cpuUser': '0.00', 'nodeIndex': 1, > 'cpuSys': '0.07', 'cpuIdle': '99.93'}, '2': {'cpuUser': '0.27', > 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.66'}, '5': {'cpuUser': > '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '4': > {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, > '7': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': > '99.66'}, '6': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.07', > 'cpuIdle': '99.86'}, '9': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': > '0.13', 'cpuIdle': '99.80'}, '8': {'cpuUser': '0.00', 'nodeIndex': 0, > 'cpuSys': '0.07', 'cpuIdle': '99.93'}}, 'numaNodeMemFree': {'1': > {'memPercent': 7, 'memFree': '15142'}, '0': {'memPercent': 5, 'memFree': > '15351'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': > True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, > 'version': 4, 'acquired': True, 'delay': '0.000445096', 'lastCheck': '1.9', > 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000296586', > 'lastCheck': '1.8', 'valid': True}}, 'incomingVmMigrations': 0, 'network': > {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621795.091263, 'name': 'enp3s0f0', 'tx': '1091295628', 'txDropped': > '0', 'rx': '11914875210', 'rxErrors': '0', 'speed': '100', 'rxDropped': > '1442'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621795.091263, 'name': 'ovirtmgmt', 'tx': '1048646480', 'txDropped': > '0', 'rx': '11615214750', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621795.091263, 'name': 'lo', 'tx': '58066397668', 'txDropped': '0', > 'rx': '58066397668', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621795.091263, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': > '66839', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': > 1521621795.091263, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', > 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': > {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621795.091263, 'name': > 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', > 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 
'state': > 'down', 'sampleTime': 1521621795.091263, 'name': 'enp4s0f1', 'tx': '0', > 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}}, 'txDropped': '0', 'anonHugePages': '204', 'ksmPages': 100, > 'elapsedTime': '76360.34', 'cpuLoad': '0.15', 'cpuSys': '0.18', > 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': > '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.07', > 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, > 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31031, 'bootTime': > '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, > 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': > 'active', 'multipathHealth': {}, 'rxDropped': '1442', > 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, > 'hugepages': defaultdict(, {2048: {'resv_hugepages': 0, > 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, > 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, > 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, > 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': > 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': > '2018-03-21T08:43:21 GMT', 'cpuUser': '0.15', 'memFree': 31287, 'cpuIdle': > '99.67', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.67'}} > from=::ffff:192.168.0.4,49914 (api:52) > 29417:2018-03-21 10:43:21,508+0200 INFO (jsonrpc/3) > [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.02 seconds > (__init__:573) > 29418:2018-03-21 10:43:21,651+0200 INFO (jsonrpc/0) [api.host] START > getAllVmStats() from=::1,36114 (api:46) > 29419:2018-03-21 10:43:21,651+0200 INFO (jsonrpc/0) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::1,36114 (api:52) > 29420:2018-03-21 10:43:21,652+0200 INFO (jsonrpc/0) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 29421:2018-03-21 10:43:27,764+0200 INFO (jsonrpc/4) [vdsm.api] START > getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49914, task_id=46d19cab-8903-4c52-9bb2-4dd8f370997e > (api:46) > 29422:2018-03-21 10:43:27,769+0200 INFO (jsonrpc/4) [vdsm.api] FINISH > getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': > 6L}} from=::ffff:192.168.0.4,49914, task_id=46d19cab-8903-4c52-9bb2-4dd8f370997e > (api:52) > 29423:2018-03-21 10:43:27,770+0200 INFO (jsonrpc/4) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 > seconds (__init__:573) > 29424:2018-03-21 10:43:27,815+0200 INFO (jsonrpc/6) [vdsm.api] START > getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49920, task_id=290acc73-8f25-4fd7-9263-7bd05d46430a > (api:46) > 29425:2018-03-21 10:43:27,822+0200 INFO (jsonrpc/6) [vdsm.api] FINISH > getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', > 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3- > 85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', > 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', > 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': 
u'Active', > 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': > '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': > '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, > task_id=290acc73-8f25-4fd7-9263-7bd05d46430a (api:52) > 29426:2018-03-21 10:43:27,823+0200 INFO (jsonrpc/6) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 > seconds (__init__:573) > 29427:2018-03-21 10:43:30,121+0200 INFO (periodic/3) [vdsm.api] START > repoStats(domains=()) from=internal, task_id=0abdebcf-2b0b-4d33-b552-c0f2ed3e567b > (api:46) > 29428:2018-03-21 10:43:30,122+0200 INFO (periodic/3) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000402077', > 'lastCheck': '0.6', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000344445', 'lastCheck': '0.5', 'valid': True}} from=internal, > task_id=0abdebcf-2b0b-4d33-b552-c0f2ed3e567b (api:52) > 29429:2018-03-21 10:43:30,122+0200 INFO (periodic/3) [vdsm.api] START > multipath_health() from=internal, task_id=acde7e7d-ab4a-4811-934f-4d312da33208 > (api:46) > 29430:2018-03-21 10:43:30,123+0200 INFO (periodic/3) [vdsm.api] FINISH > multipath_health return={} from=internal, task_id=acde7e7d-ab4a-4811-934f-4d312da33208 > (api:52) > 29431:2018-03-21 10:43:35,393+0200 INFO (jsonrpc/7) [api.host] START > getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) > 29432:2018-03-21 10:43:35,394+0200 INFO (jsonrpc/7) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) > 29433:2018-03-21 10:43:35,394+0200 INFO (jsonrpc/7) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 29434:2018-03-21 10:43:36,671+0200 INFO (jsonrpc/2) [api.host] START > getAllVmStats() from=::1,36114 (api:46) > 29435:2018-03-21 10:43:36,671+0200 INFO (jsonrpc/2) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::1,36114 (api:52) > 29436:2018-03-21 10:43:36,672+0200 INFO (jsonrpc/2) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 29437:2018-03-21 10:43:37,356+0200 INFO (jsonrpc/1) [api.host] START > getStats() from=::ffff:192.168.0.4,49914 (api:46) > 29438:2018-03-21 10:43:37,357+0200 INFO (jsonrpc/1) [vdsm.api] START > repoStats(domains=()) from=::ffff:192.168.0.4,49914, > task_id=bc14e253-d3d4-4cc3-9ae1-c0e6ac6e37f7 (api:46) > 29439:2018-03-21 10:43:37,357+0200 INFO (jsonrpc/1) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000402077', > 'lastCheck': '7.8', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000344445', 'lastCheck': '7.7', 'valid': True}} > from=::ffff:192.168.0.4,49914, task_id=bc14e253-d3d4-4cc3-9ae1-c0e6ac6e37f7 > (api:52) > 29440:2018-03-21 10:43:37,358+0200 INFO (jsonrpc/1) [vdsm.api] START > multipath_health() from=::ffff:192.168.0.4,49914, 
> task_id=23c698b8-d5bd-4746-b738-97b10f3e4bfb (api:46) > 29441:2018-03-21 10:43:37,358+0200 INFO (jsonrpc/1) [vdsm.api] FINISH > multipath_health return={} from=::ffff:192.168.0.4,49914, > task_id=23c698b8-d5bd-4746-b738-97b10f3e4bfb (api:52) > 29442:2018-03-21 10:43:37,365+0200 INFO (jsonrpc/1) [api.host] FINISH > getStats return={'status': {'message': 'Done', 'code': 0}, 'info': > {'cpuStatistics': {'11': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': > '0.07', 'cpuIdle': '99.86'}, '10': {'cpuUser': '3.06', 'nodeIndex': 0, > 'cpuSys': '0.80', 'cpuIdle': '96.14'}, '13': {'cpuUser': '0.00', > 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.93'}, '12': {'cpuUser': > '0.07', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '15': > {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, > '14': {'cpuUser': '0.27', 'nodeIndex': 0, 'cpuSys': '0.40', 'cpuIdle': > '99.33'}, '1': {'cpuUser': '1.13', 'nodeIndex': 1, 'cpuSys': '1.33', > 'cpuIdle': '97.54'}, '0': {'cpuUser': '0.47', 'nodeIndex': 0, 'cpuSys': > '0.33', 'cpuIdle': '99.20'}, '3': {'cpuUser': '0.00', 'nodeIndex': 1, > 'cpuSys': '0.07', 'cpuIdle': '99.93'}, '2': {'cpuUser': '0.67', > 'nodeIndex': 0, 'cpuSys': '0.40', 'cpuIdle': '98.93'}, '5': {'cpuUser': > '0.13', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.74'}, '4': > {'cpuUser': '0.47', 'nodeIndex': 0, 'cpuSys': '0.40', 'cpuIdle': '99.13'}, > '7': {'cpuUser': '0.33', 'nodeIndex': 1, 'cpuSys': '0.73', 'cpuIdle': > '98.94'}, '6': {'cpuUser': '5.39', 'nodeIndex': 0, 'cpuSys': '0.47', > 'cpuIdle': '94.14'}, '9': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': > '0.27', 'cpuIdle': '99.66'}, '8': {'cpuUser': '0.53', 'nodeIndex': 0, > 'cpuSys': '0.40', 'cpuIdle': '99.07'}}, 'numaNodeMemFree': {'1': > {'memPercent': 7, 'memFree': '15143'}, '0': {'memPercent': 5, 'memFree': > '15348'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': > True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, > 'version': 4, 'acquired': True, 'delay': '0.000402077', 'lastCheck': '7.8', > 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000344445', > 'lastCheck': '7.7', 'valid': True}}, 'incomingVmMigrations': 0, 'network': > {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621810.108511, 'name': 'enp3s0f0', 'tx': '1091305295', 'txDropped': > '0', 'rx': '11914880953', 'rxErrors': '0', 'speed': '100', 'rxDropped': > '1442'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621810.108511, 'name': 'ovirtmgmt', 'tx': '1048655897', 'txDropped': > '0', 'rx': '11615220341', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621810.108511, 'name': 'lo', 'tx': '58078891247', 'txDropped': '0', > 'rx': '58078891247', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621810.108511, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': > '66839', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': > 1521621810.108511, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', > 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': > {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621810.108511, 'name': > 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', > 'speed': 
'1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': > 'down', 'sampleTime': 1521621810.108511, 'name': 'enp4s0f1', 'tx': '0', > 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}}, 'txDropped': '0', 'anonHugePages': '204', 'ksmPages': 100, > 'elapsedTime': '76376.19', 'cpuLoad': '0.16', 'cpuSys': '0.39', > 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': > '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.00', > 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, > 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31080, 'bootTime': > '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, > 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': > 'active', 'multipathHealth': {}, 'rxDropped': '1442', > 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, > 'hugepages': defaultdict(, {2048: {'resv_hugepages': 0, > 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, > 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, > 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, > 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': > 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': > '2018-03-21T08:43:37 GMT', 'cpuUser': '0.79', 'memFree': 31336, 'cpuIdle': > '98.82', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.73'}} > from=::ffff:192.168.0.4,49914 (api:52) > 29443:2018-03-21 10:43:37,367+0200 INFO (jsonrpc/1) > [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds > (__init__:573) > 29444:2018-03-21 10:43:38,124+0200 INFO (jsonrpc/5) [vdsm.api] START > getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49914, task_id=9d2a82dd-c76c-4eb5-a1e1-b49bf004384e > (api:46) > 29445:2018-03-21 10:43:38,128+0200 INFO (jsonrpc/5) [vdsm.api] FINISH > getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': > 6L}} from=::ffff:192.168.0.4,49914, task_id=9d2a82dd-c76c-4eb5-a1e1-b49bf004384e > (api:52) > 29446:2018-03-21 10:43:38,129+0200 INFO (jsonrpc/5) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 > seconds (__init__:573) > 29447:2018-03-21 10:43:38,171+0200 INFO (jsonrpc/3) [vdsm.api] START > getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49920, task_id=62d44b0b-20b2-47ae-bfb8-d4717e301b69 > (api:46) > 29448:2018-03-21 10:43:38,177+0200 INFO (jsonrpc/3) [vdsm.api] FINISH > getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', > 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3- > 85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', > 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', > 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', > 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': > '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': > 
'1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, > task_id=62d44b0b-20b2-47ae-bfb8-d4717e301b69 (api:52) > 29449:2018-03-21 10:43:38,178+0200 INFO (jsonrpc/3) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 > seconds (__init__:573) > 29450:2018-03-21 10:43:45,147+0200 INFO (periodic/2) [vdsm.api] START > repoStats(domains=()) from=internal, task_id=49c33285-a222-464d-8984-4e75a5c6354a > (api:46) > 29451:2018-03-21 10:43:45,148+0200 INFO (periodic/2) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000454102', > 'lastCheck': '5.6', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000347381', 'lastCheck': '5.5', 'valid': True}} from=internal, > task_id=49c33285-a222-464d-8984-4e75a5c6354a (api:52) > 29452:2018-03-21 10:43:45,148+0200 INFO (periodic/2) [vdsm.api] START > multipath_health() from=internal, task_id=eed6b564-76b1-4f9b-80a2-1f0f6d9bbe32 > (api:46) > 29453:2018-03-21 10:43:45,149+0200 INFO (periodic/2) [vdsm.api] FINISH > multipath_health return={} from=internal, task_id=eed6b564-76b1-4f9b-80a2-1f0f6d9bbe32 > (api:52) > 29454:2018-03-21 10:43:48,405+0200 INFO (jsonrpc/0) [vdsm.api] START > getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49914, task_id=84651a37-7d13-41a9-9dcf-2169a9c46fc5 > (api:46) > 29455:2018-03-21 10:43:48,410+0200 INFO (jsonrpc/0) [vdsm.api] FINISH > getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': > 6L}} from=::ffff:192.168.0.4,49914, task_id=84651a37-7d13-41a9-9dcf-2169a9c46fc5 > (api:52) > 29456:2018-03-21 10:43:48,411+0200 INFO (jsonrpc/0) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 > seconds (__init__:573) > 29457:2018-03-21 10:43:48,417+0200 INFO (jsonrpc/4) [vdsm.api] START > getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49920, task_id=4ebd1d1a-11ac-41ee-b459-7e45f008776b > (api:46) > 29458:2018-03-21 10:43:48,423+0200 INFO (jsonrpc/4) [vdsm.api] FINISH > getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', > 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3- > 85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', > 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', > 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', > 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': > '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': > '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, > task_id=4ebd1d1a-11ac-41ee-b459-7e45f008776b (api:52) > 29459:2018-03-21 10:43:48,424+0200 INFO (jsonrpc/4) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 > seconds (__init__:573) > 29460:2018-03-21 10:43:50,399+0200 INFO (jsonrpc/6) [api.host] START > getAllVmStats() 
from=::ffff:192.168.0.4,49914 (api:46) > 29461:2018-03-21 10:43:50,400+0200 INFO (jsonrpc/6) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) > 29462:2018-03-21 10:43:50,400+0200 INFO (jsonrpc/6) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 29463:2018-03-21 10:43:51,694+0200 INFO (jsonrpc/7) [api.host] START > getAllVmStats() from=::1,36114 (api:46) > 29464:2018-03-21 10:43:51,694+0200 INFO (jsonrpc/7) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::1,36114 (api:52) > 29465:2018-03-21 10:43:51,695+0200 INFO (jsonrpc/7) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 29466:2018-03-21 10:43:52,516+0200 INFO (jsonrpc/2) [api.host] START > getStats() from=::ffff:192.168.0.4,49914 (api:46) > 29467:2018-03-21 10:43:52,517+0200 INFO (jsonrpc/2) [vdsm.api] START > repoStats(domains=()) from=::ffff:192.168.0.4,49914, > task_id=0a88f0dc-eccf-4d24-9ac2-22e079a1480a (api:46) > 29468:2018-03-21 10:43:52,517+0200 INFO (jsonrpc/2) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000487937', > 'lastCheck': '3.0', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000455956', 'lastCheck': '2.9', 'valid': True}} > from=::ffff:192.168.0.4,49914, task_id=0a88f0dc-eccf-4d24-9ac2-22e079a1480a > (api:52) > 29469:2018-03-21 10:43:52,518+0200 INFO (jsonrpc/2) [vdsm.api] START > multipath_health() from=::ffff:192.168.0.4,49914, > task_id=a7a44ff7-a137-41b6-9ad5-2c0b32ab79ec (api:46) > 29470:2018-03-21 10:43:52,518+0200 INFO (jsonrpc/2) [vdsm.api] FINISH > multipath_health return={} from=::ffff:192.168.0.4,49914, > task_id=a7a44ff7-a137-41b6-9ad5-2c0b32ab79ec (api:52) > 29471:2018-03-21 10:43:52,524+0200 INFO (jsonrpc/2) [api.host] FINISH > getStats return={'status': {'message': 'Done', 'code': 0}, 'info': > {'cpuStatistics': {'11': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': > '0.13', 'cpuIdle': '99.80'}, '10': {'cpuUser': '0.00', 'nodeIndex': 0, > 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '13': {'cpuUser': '0.00', > 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.93'}, '12': {'cpuUser': > '0.00', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '15': > {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, > '14': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': > '99.87'}, '1': {'cpuUser': '1.13', 'nodeIndex': 1, 'cpuSys': '1.13', > 'cpuIdle': '97.74'}, '0': {'cpuUser': '0.33', 'nodeIndex': 0, 'cpuSys': > '0.20', 'cpuIdle': '99.47'}, '3': {'cpuUser': '0.07', 'nodeIndex': 1, > 'cpuSys': '0.00', 'cpuIdle': '99.93'}, '2': {'cpuUser': '0.00', > 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '5': {'cpuUser': > '0.00', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '4': > {'cpuUser': '0.27', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.60'}, > '7': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.33', 'cpuIdle': > '99.67'}, '6': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.13', > 'cpuIdle': '99.87'}, '9': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': > '0.13', 'cpuIdle': '99.80'}, '8': {'cpuUser': '0.07', 'nodeIndex': 0, > 'cpuSys': '0.13', 'cpuIdle': '99.80'}}, 'numaNodeMemFree': {'1': > 
{'memPercent': 7, 'memFree': '15143'}, '0': {'memPercent': 5, 'memFree': > '15349'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': > True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, > 'version': 4, 'acquired': True, 'delay': '0.000487937', 'lastCheck': '3.0', > 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000455956', > 'lastCheck': '2.9', 'valid': True}}, 'incomingVmMigrations': 0, 'network': > {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621825.13588, 'name': 'enp3s0f0', 'tx': '1091312739', 'txDropped': '0', > 'rx': '11914899051', 'rxErrors': '0', 'speed': '100', 'rxDropped': '1443'}, > 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621825.13588, 'name': 'ovirtmgmt', 'tx': '1048663107', 'txDropped': > '0', 'rx': '11615236520', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621825.13588, 'name': 'lo', 'tx': '58090295494', 'txDropped': '0', > 'rx': '58090295494', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621825.13588, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': > '66839', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': > 1521621825.13588, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'rx': > '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': > {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621825.13588, 'name': > 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', > 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': > 'down', 'sampleTime': 1521621825.13588, 'name': 'enp4s0f1', 'tx': '0', > 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}}, 'txDropped': '0', 'anonHugePages': '204', 'ksmPages': 100, > 'elapsedTime': '76391.35', 'cpuLoad': '0.15', 'cpuSys': '0.20', > 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': > '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.13', > 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, > 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31080, 'bootTime': > '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, > 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': > 'active', 'multipathHealth': {}, 'rxDropped': '1443', > 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, > 'hugepages': defaultdict(, {2048: {'resv_hugepages': 0, > 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, > 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, > 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, > 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': > 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': > '2018-03-21T08:43:52 GMT', 'cpuUser': '0.14', 'memFree': 31336, 'cpuIdle': > '99.66', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.73'}} > from=::ffff:192.168.0.4,49914 (api:52) > 29472:2018-03-21 10:43:52,526+0200 INFO (jsonrpc/2) > [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds > (__init__:573) > 29473:2018-03-21 10:43:58,686+0200 INFO (jsonrpc/1) [vdsm.api] START > getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', 
> options=None) from=::ffff:192.168.0.4,49914, task_id=d6d29cf6-c083-4639-91a2-f062eae4e629 > (api:46) > 29474:2018-03-21 10:43:58,690+0200 INFO (jsonrpc/1) [vdsm.api] FINISH > getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': > 6L}} from=::ffff:192.168.0.4,49914, task_id=d6d29cf6-c083-4639-91a2-f062eae4e629 > (api:52) > 29475:2018-03-21 10:43:58,690+0200 INFO (jsonrpc/1) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 > seconds (__init__:573) > 29476:2018-03-21 10:43:58,697+0200 INFO (jsonrpc/5) [vdsm.api] START > getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49920, task_id=9dbd91df-dd9c-469c-8308-bfbfc228937c > (api:46) > 29477:2018-03-21 10:43:58,703+0200 INFO (jsonrpc/5) [vdsm.api] FINISH > getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', > 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3- > 85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', > 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', > 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', > 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': > '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': > '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, > task_id=9dbd91df-dd9c-469c-8308-bfbfc228937c (api:52) > 29478:2018-03-21 10:43:58,704+0200 INFO (jsonrpc/5) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 > seconds (__init__:573) > > > > *grep -n 10:46 vdsm.log | tail -1000* > 1017:2018-03-21 06:10:46,882+0200 INFO (jsonrpc/2) [api.host] START > getStats() from=::ffff:192.168.0.4,49914 (api:46) > 1018:2018-03-21 06:10:46,883+0200 INFO (jsonrpc/2) [vdsm.api] START > repoStats(domains=()) from=::ffff:192.168.0.4,49914, > task_id=6e045b00-88b2-469b-80d6-af38303f8c32 (api:46) > 1019:2018-03-21 06:10:46,883+0200 INFO (jsonrpc/2) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000332785', > 'lastCheck': '7.3', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000203914', 'lastCheck': '5.6', 'valid': True}} > from=::ffff:192.168.0.4,49914, task_id=6e045b00-88b2-469b-80d6-af38303f8c32 > (api:52) > 1020:2018-03-21 06:10:46,884+0200 INFO (jsonrpc/2) [vdsm.api] START > multipath_health() from=::ffff:192.168.0.4,49914, > task_id=475a5b40-4a43-42ee-9524-2bdd220aed83 (api:46) > 1021:2018-03-21 06:10:46,884+0200 INFO (jsonrpc/2) [vdsm.api] FINISH > multipath_health return={} from=::ffff:192.168.0.4,49914, > task_id=475a5b40-4a43-42ee-9524-2bdd220aed83 (api:52) > 1022:2018-03-21 06:10:46,889+0200 INFO (jsonrpc/2) [api.host] FINISH > getStats return={'status': {'message': 'Done', 'code': 0}, 'info': > {'cpuStatistics': {'11': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': > '0.20', 'cpuIdle': '99.73'}, '10': {'cpuUser': '0.13', 
'nodeIndex': 0, > 'cpuSys': '0.20', 'cpuIdle': '99.67'}, '13': {'cpuUser': '0.07', > 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.66'}, '12': {'cpuUser': > '0.33', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.47'}, '15': > {'cpuUser': '0.33', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.40'}, > '14': {'cpuUser': '0.47', 'nodeIndex': 0, 'cpuSys': '0.40', 'cpuIdle': > '99.13'}, '1': {'cpuUser': '3.00', 'nodeIndex': 1, 'cpuSys': '1.40', > 'cpuIdle': '95.60'}, '0': {'cpuUser': '0.73', 'nodeIndex': 0, 'cpuSys': > '0.20', 'cpuIdle': '99.07'}, '3': {'cpuUser': '0.07', 'nodeIndex': 1, > 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '2': {'cpuUser': '0.07', > 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.66'}, '5': {'cpuUser': > '0.07', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '4': > {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': '99.54'}, > '7': {'cpuUser': '0.80', 'nodeIndex': 1, 'cpuSys': '0.80', 'cpuIdle': > '98.40'}, '6': {'cpuUser': '5.46', 'nodeIndex': 0, 'cpuSys': '0.27', > 'cpuIdle': '94.27'}, '9': {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': > '0.27', 'cpuIdle': '99.53'}, '8': {'cpuUser': '1.27', 'nodeIndex': 0, > 'cpuSys': '0.47', 'cpuIdle': '98.26'}}, 'numaNodeMemFree': {'1': > {'memPercent': 7, 'memFree': '15128'}, '0': {'memPercent': 5, 'memFree': > '15366'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': > True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, > 'version': 4, 'acquired': True, 'delay': '0.000332785', 'lastCheck': '7.3', > 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000203914', > 'lastCheck': '5.6', 'valid': True}}, 'incomingVmMigrations': 0, 'network': > {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521605437.203776, 'name': 'enp3s0f0', 'tx': '1082264809', 'txDropped': > '0', 'rx': '11907188594', 'rxErrors': '0', 'speed': '100', 'rxDropped': > '289'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521605437.203776, 'name': 'ovirtmgmt', 'tx': '1039869554', 'txDropped': > '0', 'rx': '11608697383', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521605437.203776, 'name': 'lo', 'tx': '45587320171', 'txDropped': '0', > 'rx': '45587320171', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521605437.203776, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': > '53620', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': > 1521605437.203776, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', > 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': > {'txErrors': '0', 'state': 'down', 'sampleTime': 1521605437.203776, 'name': > 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', > 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': > 'down', 'sampleTime': 1521605437.203776, 'name': 'enp4s0f1', 'tx': '0', > 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}}, 'txDropped': '0', 'anonHugePages': '204', 'ksmPages': 100, > 'elapsedTime': '60005.72', 'cpuLoad': '0.06', 'cpuSys': '0.38', > 'diskStats': {'/var/log': {'free': '7357'}, '/var/run/vdsm/': {'free': > '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.07', > 'netConfigDirty': 'False', 
'memCommitted': 0, 'ksmState': False, > 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31084, 'bootTime': > '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, > 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': > 'active', 'multipathHealth': {}, 'rxDropped': '289', > 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, > 'hugepages': defaultdict(, {2048: {'resv_hugepages': 0, > 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, > 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, > 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, > 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': > 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': > '2018-03-21T04:10:46 GMT', 'cpuUser': '0.83', 'memFree': 31340, 'cpuIdle': > '98.79', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.80'}} > from=::ffff:192.168.0.4,49914 (api:52) > 1023:2018-03-21 06:10:46,891+0200 INFO (jsonrpc/2) > [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds > (__init__:573) > 13522:2018-03-21 08:10:46,827+0200 INFO (jsonrpc/7) [api.host] START > getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) > 13523:2018-03-21 08:10:46,828+0200 INFO (jsonrpc/7) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) > 13524:2018-03-21 08:10:46,828+0200 INFO (jsonrpc/7) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 29734:2018-03-21 10:46:00,338+0200 INFO (periodic/2) [vdsm.api] START > repoStats(domains=()) from=internal, task_id=af099ce6-0f44-461b-9e8e-99de63b3884f > (api:46) > 29735:2018-03-21 10:46:00,339+0200 INFO (periodic/2) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000470245', > 'lastCheck': '0.8', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000476935', 'lastCheck': '0.7', 'valid': True}} from=internal, > task_id=af099ce6-0f44-461b-9e8e-99de63b3884f (api:52) > 29736:2018-03-21 10:46:00,339+0200 INFO (periodic/2) [vdsm.api] START > multipath_health() from=internal, task_id=4a5709d9-e322-44eb-a8a4-bbe3f5e5e1bd > (api:46) > 29737:2018-03-21 10:46:00,340+0200 INFO (periodic/2) [vdsm.api] FINISH > multipath_health return={} from=internal, task_id=4a5709d9-e322-44eb-a8a4-bbe3f5e5e1bd > (api:52) > 29738:2018-03-21 10:46:04,637+0200 INFO (jsonrpc/7) [vdsm.api] START > getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49914, task_id=5b9300f7-f8c4-4f63-a49c-45ca8119d3b9 > (api:46) > 29739:2018-03-21 10:46:04,642+0200 INFO (jsonrpc/7) [vdsm.api] FINISH > getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': > 6L}} from=::ffff:192.168.0.4,49914, task_id=5b9300f7-f8c4-4f63-a49c-45ca8119d3b9 > (api:52) > 29740:2018-03-21 10:46:04,642+0200 INFO (jsonrpc/7) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 > seconds (__init__:573) > 29741:2018-03-21 10:46:04,648+0200 INFO (jsonrpc/1) [vdsm.api] START > getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49920, task_id=f4db5bfa-e131-4f9a-baaa-e7555c996ab0 > (api:46) > 29742:2018-03-21 10:46:04,655+0200 INFO (jsonrpc/1) [vdsm.api] FINISH > 
getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', > 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3- > 85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', > 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', > 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', > 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': > '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': > '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, > task_id=f4db5bfa-e131-4f9a-baaa-e7555c996ab0 (api:52) > 29743:2018-03-21 10:46:04,656+0200 INFO (jsonrpc/1) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.00 > seconds (__init__:573) > 29744:2018-03-21 10:46:05,451+0200 INFO (jsonrpc/5) [api.host] START > getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) > 29745:2018-03-21 10:46:05,452+0200 INFO (jsonrpc/5) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) > 29746:2018-03-21 10:46:05,452+0200 INFO (jsonrpc/5) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 29747:2018-03-21 10:46:06,903+0200 INFO (jsonrpc/3) [api.host] START > getAllVmStats() from=::1,36114 (api:46) > 29748:2018-03-21 10:46:06,904+0200 INFO (jsonrpc/3) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::1,36114 (api:52) > 29749:2018-03-21 10:46:06,904+0200 INFO (jsonrpc/3) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 29750:2018-03-21 10:46:08,791+0200 WARN (vdsm.Scheduler) [Executor] > Worker blocked: <Worker name=jsonrpc/6 running <Task <JsonRpcTask {'params': {u'connectionParams': [{u'id': u'461f65a9-3a81-4f3f-a46d-c5ed12520524', > u'connection': u'node10.starlett.lv:/vmdata/nfs/exports', u'iqn': u'', > u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': > '********', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', > u'domainType': 1}, 'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer', > 'id': u'71143eef-aac6-4996-9783-3e0e3da180c3'} at 0x3604fd0> timeout=60, > duration=120 at 0x3604e10> task#=3762 at 0x3541650>, traceback: > 29801:2018-03-21 10:46:12,852+0200 INFO (jsonrpc/0) [api.host] START > getStats() from=::ffff:192.168.0.4,49914 (api:46) > 29802:2018-03-21 10:46:12,853+0200 INFO (jsonrpc/0) [vdsm.api] START > repoStats(domains=()) from=::ffff:192.168.0.4,49914, > task_id=391fdab9-d3c2-4075-9c34-177a07a21ec3 (api:46) > 29803:2018-03-21 10:46:12,854+0200 INFO (jsonrpc/0) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000492277', > 'lastCheck': '3.3', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000479153', 'lastCheck': '3.2', 'valid': True}} >
from=::ffff:192.168.0.4,49914, task_id=391fdab9-d3c2-4075-9c34-177a07a21ec3 > (api:52) > 29804:2018-03-21 10:46:12,854+0200 INFO (jsonrpc/0) [vdsm.api] START > multipath_health() from=::ffff:192.168.0.4,49914, > task_id=7e53adbc-3974-4f51-bfc8-8fbd3ca6b749 (api:46) > 29805:2018-03-21 10:46:12,854+0200 INFO (jsonrpc/0) [vdsm.api] FINISH > multipath_health return={} from=::ffff:192.168.0.4,49914, > task_id=7e53adbc-3974-4f51-bfc8-8fbd3ca6b749 (api:52) > 29806:2018-03-21 10:46:12,861+0200 INFO (jsonrpc/0) [api.host] FINISH > getStats return={'status': {'message': 'Done', 'code': 0}, 'info': > {'cpuStatistics': {'11': {'cpuUser': '0.47', 'nodeIndex': 1, 'cpuSys': > '0.40', 'cpuIdle': '99.13'}, '10': {'cpuUser': '0.13', 'nodeIndex': 0, > 'cpuSys': '0.20', 'cpuIdle': '99.67'}, '13': {'cpuUser': '0.07', > 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '12': {'cpuUser': > '0.20', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.53'}, '15': > {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.60'}, > '14': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': > '99.54'}, '1': {'cpuUser': '1.00', 'nodeIndex': 1, 'cpuSys': '1.33', > 'cpuIdle': '97.67'}, '0': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': > '0.27', 'cpuIdle': '99.60'}, '3': {'cpuUser': '0.07', 'nodeIndex': 1, > 'cpuSys': '0.00', 'cpuIdle': '99.93'}, '2': {'cpuUser': '0.20', > 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': '99.47'}, '5': {'cpuUser': > '0.07', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '4': > {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, > '7': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.40', 'cpuIdle': > '99.53'}, '6': {'cpuUser': '0.47', 'nodeIndex': 0, 'cpuSys': '0.40', > 'cpuIdle': '99.13'}, '9': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': > '0.20', 'cpuIdle': '99.73'}, '8': {'cpuUser': '0.13', 'nodeIndex': 0, > 'cpuSys': '0.33', 'cpuIdle': '99.54'}}, 'numaNodeMemFree': {'1': > {'memPercent': 7, 'memFree': '15142'}, '0': {'memPercent': 5, 'memFree': > '15348'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': > True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, > 'version': 4, 'acquired': True, 'delay': '0.000492277', 'lastCheck': '3.3', > 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000479153', > 'lastCheck': '3.2', 'valid': True}}, 'incomingVmMigrations': 0, 'network': > {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621960.326184, 'name': 'enp3s0f0', 'tx': '1091387790', 'txDropped': > '0', 'rx': '11914959714', 'rxErrors': '0', 'speed': '100', 'rxDropped': > '1451'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621960.326184, 'name': 'ovirtmgmt', 'tx': '1048735814', 'txDropped': > '0', 'rx': '11615289173', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621960.326184, 'name': 'lo', 'tx': '58192068637', 'txDropped': '0', > 'rx': '58192068637', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621960.326184, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': > '66839', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': > 1521621960.326184, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', > 'rx': '0', 'rxErrors': '0', 
'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': > {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621960.326184, 'name': > 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', > 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': > 'down', 'sampleTime': 1521621960.326184, 'name': 'enp4s0f1', 'tx': '0', > 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}}, 'txDropped': '0', 'anonHugePages': '204', 'ksmPages': 100, > 'elapsedTime': '76531.69', 'cpuLoad': '0.17', 'cpuSys': '0.32', > 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': > '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.00', > 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, > 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31081, 'bootTime': > '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, > 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': > 'active', 'multipathHealth': {}, 'rxDropped': '1451', > 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, > 'hugepages': defaultdict(, {2048: {'resv_hugepages': 0, > 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, > 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, > 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, > 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': > 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': > '2018-03-21T08:46:12 GMT', 'cpuUser': '0.21', 'memFree': 31337, 'cpuIdle': > '99.46', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.80'}} > from=::ffff:192.168.0.4,49914 (api:52) > 29807:2018-03-21 10:46:12,863+0200 INFO (jsonrpc/0) > [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds > (__init__:573) > 29808:2018-03-21 10:46:13,829+0200 ERROR (jsonrpc/6) [storage.HSM] Could > not connect to storageServer (hsm:2407) > 29828:2018-03-21 10:46:13,829+0200 INFO (jsonrpc/6) [vdsm.api] FINISH > connectStorageServer return={'statuslist': [{'status': 477, 'id': > u'461f65a9-3a81-4f3f-a46d-c5ed12520524'}]} from=::ffff:192.168.0.4,49914, > flow_id=1435fc81, task_id=0a828d2c-d9f4-4f83-a9e9-7393159d5323 (api:52) > 29829:2018-03-21 10:46:13,830+0200 INFO (jsonrpc/6) > [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer > succeeded in 125.04 seconds (__init__:573) > 29830:2018-03-21 10:46:14,767+0200 INFO (jsonrpc/4) [vdsm.api] START > getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49914, task_id=49cd2ae6-7943-47c9-ad86-9e0b7e58bca3 > (api:46) > 29831:2018-03-21 10:46:14,772+0200 INFO (jsonrpc/4) [vdsm.api] FINISH > getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': > 6L}} from=::ffff:192.168.0.4,49914, task_id=49cd2ae6-7943-47c9-ad86-9e0b7e58bca3 > (api:52) > 29832:2018-03-21 10:46:14,773+0200 INFO (jsonrpc/4) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 > seconds (__init__:573) > 29833:2018-03-21 10:46:14,815+0200 INFO (jsonrpc/2) [vdsm.api] START > getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49920, task_id=19909616-efb4-490d-982c-66eda9ca4381 > (api:46) > 29834:2018-03-21 10:46:14,822+0200 INFO (jsonrpc/2) [vdsm.api] FINISH > getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 
11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', > 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3- > 85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', > 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', > 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', > 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': > '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': > '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, > task_id=19909616-efb4-490d-982c-66eda9ca4381 (api:52) > 29835:2018-03-21 10:46:14,823+0200 INFO (jsonrpc/2) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 > seconds (__init__:573) > 29836:2018-03-21 10:46:15,360+0200 INFO (periodic/2) [vdsm.api] START > repoStats(domains=()) from=internal, task_id=da74ea9f-164b-450d-b836-6818caa3fdc5 > (api:46) > 29837:2018-03-21 10:46:15,360+0200 INFO (periodic/2) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000492277', > 'lastCheck': '5.8', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000479153', 'lastCheck': '5.7', 'valid': True}} from=internal, > task_id=da74ea9f-164b-450d-b836-6818caa3fdc5 (api:52) > 29838:2018-03-21 10:46:15,361+0200 INFO (periodic/2) [vdsm.api] START > multipath_health() from=internal, task_id=06245e2b-e8ca-41c4-90bf-4294d0c699b8 > (api:46) > 29839:2018-03-21 10:46:15,361+0200 INFO (periodic/2) [vdsm.api] FINISH > multipath_health return={} from=internal, task_id=06245e2b-e8ca-41c4-90bf-4294d0c699b8 > (api:52) > 29840:2018-03-21 10:46:20,456+0200 INFO (jsonrpc/7) [api.host] START > getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) > 29841:2018-03-21 10:46:20,457+0200 INFO (jsonrpc/7) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) > 29842:2018-03-21 10:46:20,457+0200 INFO (jsonrpc/7) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.01 > seconds (__init__:573) > 29843:2018-03-21 10:46:21,925+0200 INFO (jsonrpc/1) [api.host] START > getAllVmStats() from=::1,36114 (api:46) > 29844:2018-03-21 10:46:21,926+0200 INFO (jsonrpc/1) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::1,36114 (api:52) > 29845:2018-03-21 10:46:21,926+0200 INFO (jsonrpc/1) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 29846:2018-03-21 10:46:24,930+0200 INFO (jsonrpc/5) [vdsm.api] START > getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49914, task_id=d964535d-e5eb-42bc-b324-432a05c364da > (api:46) > 29847:2018-03-21 10:46:24,935+0200 INFO (jsonrpc/5) [vdsm.api] FINISH > getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': > 6L}} from=::ffff:192.168.0.4,49914, task_id=d964535d-e5eb-42bc-b324-432a05c364da > (api:52) > 29848:2018-03-21 10:46:24,935+0200 INFO (jsonrpc/5) > [jsonrpc.JsonRpcServer] RPC 
call StoragePool.getSpmStatus succeeded in 0.00 > seconds (__init__:573) > 29849:2018-03-21 10:46:24,987+0200 INFO (jsonrpc/3) [vdsm.api] START > getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49920, task_id=167dcd23-04b5-4de3-9c81-7d25ba56407a > (api:46) > 29850:2018-03-21 10:46:25,022+0200 INFO (jsonrpc/3) [vdsm.api] FINISH > getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', > 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3- > 85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', > 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', > 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', > 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': > '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': > '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, > task_id=167dcd23-04b5-4de3-9c81-7d25ba56407a (api:52) > 29851:2018-03-21 10:46:25,023+0200 INFO (jsonrpc/3) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.03 > seconds (__init__:573) > 29852:2018-03-21 10:46:28,010+0200 INFO (jsonrpc/0) [api.host] START > getStats() from=::ffff:192.168.0.4,49914 (api:46) > 29853:2018-03-21 10:46:28,011+0200 INFO (jsonrpc/0) [vdsm.api] START > repoStats(domains=()) from=::ffff:192.168.0.4,49914, > task_id=011616ce-53c9-4cfc-8b29-247b1be03409 (api:46) > 29854:2018-03-21 10:46:28,011+0200 INFO (jsonrpc/0) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000218955', > 'lastCheck': '8.4', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000351741', 'lastCheck': '8.4', 'valid': True}} > from=::ffff:192.168.0.4,49914, task_id=011616ce-53c9-4cfc-8b29-247b1be03409 > (api:52) > 29855:2018-03-21 10:46:28,012+0200 INFO (jsonrpc/0) [vdsm.api] START > multipath_health() from=::ffff:192.168.0.4,49914, > task_id=85b42a8d-1bf1-495d-ad32-a1f9710a6468 (api:46) > 29856:2018-03-21 10:46:28,012+0200 INFO (jsonrpc/0) [vdsm.api] FINISH > multipath_health return={} from=::ffff:192.168.0.4,49914, > task_id=85b42a8d-1bf1-495d-ad32-a1f9710a6468 (api:52) > 29857:2018-03-21 10:46:28,018+0200 INFO (jsonrpc/0) [api.host] FINISH > getStats return={'status': {'message': 'Done', 'code': 0}, 'info': > {'cpuStatistics': {'11': {'cpuUser': '0.13', 'nodeIndex': 1, 'cpuSys': > '0.20', 'cpuIdle': '99.67'}, '10': {'cpuUser': '1.73', 'nodeIndex': 0, > 'cpuSys': '0.53', 'cpuIdle': '97.74'}, '13': {'cpuUser': '0.20', > 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.53'}, '12': {'cpuUser': > '0.07', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.73'}, '15': > {'cpuUser': '0.67', 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.06'}, > '14': {'cpuUser': '0.20', 'nodeIndex': 0, 'cpuSys': '0.33', 'cpuIdle': > '99.47'}, '1': {'cpuUser': '1.26', 'nodeIndex': 1, 'cpuSys': '1.33', > 'cpuIdle': 
'97.41'}, '0': {'cpuUser': '0.40', 'nodeIndex': 0, 'cpuSys': > '0.07', 'cpuIdle': '99.53'}, '3': {'cpuUser': '0.07', 'nodeIndex': 1, > 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '2': {'cpuUser': '0.00', > 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.73'}, '5': {'cpuUser': > '0.00', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '4': > {'cpuUser': '13.78', 'nodeIndex': 0, 'cpuSys': '0.80', 'cpuIdle': '85.42'}, > '7': {'cpuUser': '0.47', 'nodeIndex': 1, 'cpuSys': '0.67', 'cpuIdle': > '98.86'}, '6': {'cpuUser': '1.07', 'nodeIndex': 0, 'cpuSys': '0.27', > 'cpuIdle': '98.66'}, '9': {'cpuUser': '1.20', 'nodeIndex': 1, 'cpuSys': > '0.27', 'cpuIdle': '98.53'}, '8': {'cpuUser': '0.80', 'nodeIndex': 0, > 'cpuSys': '0.33', 'cpuIdle': '98.87'}}, 'numaNodeMemFree': {'1': > {'memPercent': 7, 'memFree': '15136'}, '0': {'memPercent': 5, 'memFree': > '15357'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': > True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, > 'version': 4, 'acquired': True, 'delay': '0.000218955', 'lastCheck': '8.4', > 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000351741', > 'lastCheck': '8.4', 'valid': True}}, 'incomingVmMigrations': 0, 'network': > {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621975.346633, 'name': 'enp3s0f0', 'tx': '1091397599', 'txDropped': > '0', 'rx': '11914966591', 'rxErrors': '0', 'speed': '100', 'rxDropped': > '1454'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621975.346633, 'name': 'ovirtmgmt', 'tx': '1048745347', 'txDropped': > '0', 'rx': '11615294897', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621975.346633, 'name': 'lo', 'tx': '58205583384', 'txDropped': '0', > 'rx': '58205583384', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621975.346633, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': > '66839', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': > 1521621975.346633, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', > 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': > {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621975.346633, 'name': > 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', > 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': > 'down', 'sampleTime': 1521621975.346633, 'name': 'enp4s0f1', 'tx': '0', > 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}}, 'txDropped': '0', 'anonHugePages': '204', 'ksmPages': 100, > 'elapsedTime': '76546.85', 'cpuLoad': '0.18', 'cpuSys': '0.39', > 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': > '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.20', > 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, > 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31031, 'bootTime': > '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, > 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': > 'active', 'multipathHealth': {}, 'rxDropped': '1454', > 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, > 'hugepages': defaultdict(, {2048: {'resv_hugepages': 0, > 'free_hugepages': 0, 
'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, > 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, > 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, > 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': > 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': > '2018-03-21T08:46:28 GMT', 'cpuUser': '1.38', 'memFree': 31287, 'cpuIdle': > '98.23', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.87'}} > from=::ffff:192.168.0.4,49914 (api:52) > 29858:2018-03-21 10:46:28,020+0200 INFO (jsonrpc/0) > [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds > (__init__:573) > 29859:2018-03-21 10:46:29,667+0200 INFO (itmap/0) [IOProcessClient] > Starting client ioprocess-575 (__init__:308) > 29860:2018-03-21 10:46:29,700+0200 INFO (itmap/1) [IOProcessClient] > Starting client ioprocess-576 (__init__:308) > 29861:2018-03-21 10:46:29,714+0200 INFO (ioprocess/1711202) [IOProcess] > Starting ioprocess (__init__:437) > 29862:2018-03-21 10:46:29,725+0200 INFO (ioprocess/1711208) [IOProcess] > Starting ioprocess (__init__:437) > 29863:2018-03-21 10:46:30,375+0200 INFO (periodic/0) [vdsm.api] START > repoStats(domains=()) from=internal, task_id=bdacf1c0-3fcb-43b1-82e2-a041a26d0d7f > (api:46) > 29864:2018-03-21 10:46:30,376+0200 INFO (periodic/0) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00031111', > 'lastCheck': '0.8', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.00045594', 'lastCheck': '0.6', 'valid': True}} from=internal, > task_id=bdacf1c0-3fcb-43b1-82e2-a041a26d0d7f (api:52) > 29865:2018-03-21 10:46:30,376+0200 INFO (periodic/0) [vdsm.api] START > multipath_health() from=internal, task_id=3d3cad70-e5c8-49b2-9f5d-92a249cc102d > (api:46) > 29866:2018-03-21 10:46:30,377+0200 INFO (periodic/0) [vdsm.api] FINISH > multipath_health return={} from=internal, task_id=3d3cad70-e5c8-49b2-9f5d-92a249cc102d > (api:52) > 29867:2018-03-21 10:46:35,108+0200 INFO (jsonrpc/6) [vdsm.api] START > getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49914, task_id=304d7e7c-03fc-44d6-be32-d083ada09b30 > (api:46) > 29868:2018-03-21 10:46:35,114+0200 INFO (jsonrpc/6) [vdsm.api] FINISH > getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': > 6L}} from=::ffff:192.168.0.4,49914, task_id=304d7e7c-03fc-44d6-be32-d083ada09b30 > (api:52) > 29869:2018-03-21 10:46:35,114+0200 INFO (jsonrpc/6) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.00 > seconds (__init__:573) > 29870:2018-03-21 10:46:35,170+0200 INFO (jsonrpc/4) [vdsm.api] START > getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49920, task_id=76736a93-d4c4-4b2f-9848-7b8731fe2b67 > (api:46) > 29871:2018-03-21 10:46:35,176+0200 INFO (jsonrpc/4) [vdsm.api] FINISH > getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', > 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3- > 85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', > 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', > 'spm_id': 1, 'type': 'NFS', 'master_ver': 
3}, 'dominfo': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', > 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': > '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': > '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, > task_id=76736a93-d4c4-4b2f-9848-7b8731fe2b67 (api:52) > 29872:2018-03-21 10:46:35,177+0200 INFO (jsonrpc/4) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 > seconds (__init__:573) > 29873:2018-03-21 10:46:35,462+0200 INFO (jsonrpc/2) [api.host] START > getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) > 29874:2018-03-21 10:46:35,462+0200 INFO (jsonrpc/2) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) > 29875:2018-03-21 10:46:35,463+0200 INFO (jsonrpc/2) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 29876:2018-03-21 10:46:36,948+0200 INFO (jsonrpc/7) [api.host] START > getAllVmStats() from=::1,36114 (api:46) > 29877:2018-03-21 10:46:36,948+0200 INFO (jsonrpc/7) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::1,36114 (api:52) > 29878:2018-03-21 10:46:36,949+0200 INFO (jsonrpc/7) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 29879:2018-03-21 10:46:43,179+0200 INFO (jsonrpc/1) [api.host] START > getStats() from=::ffff:192.168.0.4,49914 (api:46) > 29880:2018-03-21 10:46:43,180+0200 INFO (jsonrpc/1) [vdsm.api] START > repoStats(domains=()) from=::ffff:192.168.0.4,49914, > task_id=83bbc95c-6d59-4eec-82b1-e8993be28759 (api:46) > 29881:2018-03-21 10:46:43,181+0200 INFO (jsonrpc/1) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000486101', > 'lastCheck': '3.6', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.00043444', 'lastCheck': '3.4', 'valid': True}} > from=::ffff:192.168.0.4,49914, task_id=83bbc95c-6d59-4eec-82b1-e8993be28759 > (api:52) > 29882:2018-03-21 10:46:43,181+0200 INFO (jsonrpc/1) [vdsm.api] START > multipath_health() from=::ffff:192.168.0.4,49914, > task_id=9b781324-b50b-49a9-bf31-5eca835dc75c (api:46) > 29883:2018-03-21 10:46:43,181+0200 INFO (jsonrpc/1) [vdsm.api] FINISH > multipath_health return={} from=::ffff:192.168.0.4,49914, > task_id=9b781324-b50b-49a9-bf31-5eca835dc75c (api:52) > 29884:2018-03-21 10:46:43,188+0200 INFO (jsonrpc/1) [api.host] FINISH > getStats return={'status': {'message': 'Done', 'code': 0}, 'info': > {'cpuStatistics': {'11': {'cpuUser': '0.40', 'nodeIndex': 1, 'cpuSys': > '0.27', 'cpuIdle': '99.33'}, '10': {'cpuUser': '0.60', 'nodeIndex': 0, > 'cpuSys': '0.47', 'cpuIdle': '98.93'}, '13': {'cpuUser': '0.00', > 'nodeIndex': 1, 'cpuSys': '0.27', 'cpuIdle': '99.73'}, '12': {'cpuUser': > '1.46', 'nodeIndex': 0, 'cpuSys': '0.60', 'cpuIdle': '97.94'}, '15': > {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': '0.33', 'cpuIdle': '99.47'}, > '14': {'cpuUser': '0.40', 'nodeIndex': 0, 'cpuSys': '0.47', 'cpuIdle': > '99.13'}, '1': {'cpuUser': 
'1.33', 'nodeIndex': 1, 'cpuSys': '1.93', > 'cpuIdle': '96.74'}, '0': {'cpuUser': '0.60', 'nodeIndex': 0, 'cpuSys': > '0.27', 'cpuIdle': '99.13'}, '3': {'cpuUser': '0.20', 'nodeIndex': 1, > 'cpuSys': '0.40', 'cpuIdle': '99.40'}, '2': {'cpuUser': '6.99', > 'nodeIndex': 0, 'cpuSys': '0.40', 'cpuIdle': '92.61'}, '5': {'cpuUser': > '0.13', 'nodeIndex': 1, 'cpuSys': '0.20', 'cpuIdle': '99.67'}, '4': > {'cpuUser': '0.27', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.46'}, > '7': {'cpuUser': '0.47', 'nodeIndex': 1, 'cpuSys': '0.60', 'cpuIdle': > '98.93'}, '6': {'cpuUser': '0.53', 'nodeIndex': 0, 'cpuSys': '0.27', > 'cpuIdle': '99.20'}, '9': {'cpuUser': '0.47', 'nodeIndex': 1, 'cpuSys': > '0.33', 'cpuIdle': '99.20'}, '8': {'cpuUser': '0.07', 'nodeIndex': 0, > 'cpuSys': '0.13', 'cpuIdle': '99.80'}}, 'numaNodeMemFree': {'1': > {'memPercent': 7, 'memFree': '15131'}, '0': {'memPercent': 5, 'memFree': > '15357'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': > True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, > 'version': 4, 'acquired': True, 'delay': '0.000486101', 'lastCheck': '3.6', > 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00043444', > 'lastCheck': '3.4', 'valid': True}}, 'incomingVmMigrations': 0, 'network': > {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621990.363726, 'name': 'enp3s0f0', 'tx': '1091404888', 'txDropped': > '0', 'rx': '11914970667', 'rxErrors': '0', 'speed': '100', 'rxDropped': > '1455'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621990.363726, 'name': 'ovirtmgmt', 'tx': '1048752406', 'txDropped': > '0', 'rx': '11615298347', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621990.363726, 'name': 'lo', 'tx': '58215969455', 'txDropped': '0', > 'rx': '58215969455', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521621990.363726, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': > '66839', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': > 1521621990.363726, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', > 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': > {'txErrors': '0', 'state': 'down', 'sampleTime': 1521621990.363726, 'name': > 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', > 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': > 'down', 'sampleTime': 1521621990.363726, 'name': 'enp4s0f1', 'tx': '0', > 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}}, 'txDropped': '0', 'anonHugePages': '206', 'ksmPages': 100, > 'elapsedTime': '76562.02', 'cpuLoad': '0.18', 'cpuSys': '0.45', > 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': > '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.27', > 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, > 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31080, 'bootTime': > '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, > 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': > 'active', 'multipathHealth': {}, 'rxDropped': '1455', > 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, > 'hugepages': defaultdict(, 
{2048: {'resv_hugepages': 0, > 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, > 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, > 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, > 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': > 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': > '2018-03-21T08:46:43 GMT', 'cpuUser': '0.87', 'memFree': 31336, 'cpuIdle': > '98.68', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '1.20'}} > from=::ffff:192.168.0.4,49914 (api:52) > 29885:2018-03-21 10:46:43,190+0200 INFO (jsonrpc/1) > [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds > (__init__:573) > 29886:2018-03-21 10:46:45,288+0200 INFO (jsonrpc/5) [vdsm.api] START > getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49914, task_id=ffe57349-2532-4b64-b32c-8a558273d8e5 > (api:46) > 29887:2018-03-21 10:46:45,292+0200 INFO (jsonrpc/5) [vdsm.api] FINISH > getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': > 6L}} from=::ffff:192.168.0.4,49914, task_id=ffe57349-2532-4b64-b32c-8a558273d8e5 > (api:52) > 29888:2018-03-21 10:46:45,293+0200 INFO (jsonrpc/5) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.00 > seconds (__init__:573) > 29889:2018-03-21 10:46:45,298+0200 INFO (jsonrpc/3) [vdsm.api] START > getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49920, task_id=b2ea28f0-abf0-4258-8152-6035d93aa5dc > (api:46) > 29890:2018-03-21 10:46:45,305+0200 INFO (jsonrpc/3) [vdsm.api] FINISH > getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', > 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3- > 85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', > 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', > 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', > 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': > '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': > '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, > task_id=b2ea28f0-abf0-4258-8152-6035d93aa5dc (api:52) > 29891:2018-03-21 10:46:45,306+0200 INFO (jsonrpc/3) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.00 > seconds (__init__:573) > 29892:2018-03-21 10:46:45,395+0200 INFO (periodic/2) [vdsm.api] START > repoStats(domains=()) from=internal, task_id=4010bf72-5c86-4daf-a8f9-12ec229a86e4 > (api:46) > 29893:2018-03-21 10:46:45,395+0200 INFO (periodic/2) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000486101', > 'lastCheck': '5.8', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.00043444', 'lastCheck': '5.7', 'valid': True}} from=internal, > task_id=4010bf72-5c86-4daf-a8f9-12ec229a86e4 
(api:52) > 29894:2018-03-21 10:46:45,396+0200 INFO (periodic/2) [vdsm.api] START > multipath_health() from=internal, task_id=37a1132f-452b-4805-9cc0-b8e74e8d6b02 > (api:46) > 29895:2018-03-21 10:46:45,396+0200 INFO (periodic/2) [vdsm.api] FINISH > multipath_health return={} from=internal, task_id=37a1132f-452b-4805-9cc0-b8e74e8d6b02 > (api:52) > 29896:2018-03-21 10:46:50,467+0200 INFO (jsonrpc/0) [api.host] START > getAllVmStats() from=::ffff:192.168.0.4,49914 (api:46) > 29897:2018-03-21 10:46:50,468+0200 INFO (jsonrpc/0) [throttled] Current > getAllVmStats: {} (throttledlog:103) > 29898:2018-03-21 10:46:50,468+0200 INFO (jsonrpc/0) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::ffff:192.168.0.4,49914 (api:52) > 29899:2018-03-21 10:46:50,468+0200 INFO (jsonrpc/0) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.01 > seconds (__init__:573) > 29900:2018-03-21 10:46:51,967+0200 INFO (jsonrpc/6) [api.host] START > getAllVmStats() from=::1,36114 (api:46) > 29901:2018-03-21 10:46:51,968+0200 INFO (jsonrpc/6) [api.host] FINISH > getAllVmStats return={'status': {'message': 'Done', 'code': 0}, > 'statsList': (suppressed)} from=::1,36114 (api:52) > 29902:2018-03-21 10:46:51,968+0200 INFO (jsonrpc/6) > [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 > seconds (__init__:573) > 29903:2018-03-21 10:46:55,648+0200 INFO (jsonrpc/4) [vdsm.api] START > getSpmStatus(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49914, task_id=a975635f-59d7-4f40-b9a7-a5ea7150368e > (api:46) > 29904:2018-03-21 10:46:55,653+0200 INFO (jsonrpc/4) [vdsm.api] FINISH > getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': > 6L}} from=::ffff:192.168.0.4,49914, task_id=a975635f-59d7-4f40-b9a7-a5ea7150368e > (api:52) > 29905:2018-03-21 10:46:55,653+0200 INFO (jsonrpc/4) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.00 > seconds (__init__:573) > 29906:2018-03-21 10:46:55,659+0200 INFO (jsonrpc/2) [vdsm.api] START > getStoragePoolInfo(spUUID=u'80cc922f-8dea-4fed-b951-1060ba116ad5', > options=None) from=::ffff:192.168.0.4,49920, task_id=97d4ae6c-3719-4149-85fd-28e924ad8883 > (api:46) > 29907:2018-03-21 10:46:55,666+0200 INFO (jsonrpc/2) [vdsm.api] FINISH > getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'pool_status': 'connected', > 'lver': 6L, 'domains': u'ef184b28-1dbc-45ed-b0b3- > 85e780cce5d8:Active,8a2c304b-c8ae-438b-af54-fc8797ea149f:Active', > 'master_uuid': u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8', 'version': '4', > 'spm_id': 1, 'type': 'NFS', 'master_ver': 3}, 'dominfo': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'status': u'Active', > 'diskfree': '1868692455424', 'isoprefix': '', 'alerts': [], 'disktotal': > '1968811540480', 'version': 4}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'status': u'Active', 'diskfree': '1868692455424', 'isoprefix': > u'/rhev/data-center/mnt/node11.starlett.lv:_vmraid_ > nfs_iso/8a2c304b-c8ae-438b-af54-fc8797ea149f/images/ > 11111111-1111-1111-1111-111111111111', 'alerts': [], 'disktotal': > '1968811540480', 'version': 0}}} from=::ffff:192.168.0.4,49920, > task_id=97d4ae6c-3719-4149-85fd-28e924ad8883 (api:52) > 29908:2018-03-21 10:46:55,667+0200 INFO (jsonrpc/2) > [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo 
succeeded in 0.00 > seconds (__init__:573) > 29909:2018-03-21 10:46:59,473+0200 INFO (jsonrpc/7) [api.host] START > getStats() from=::ffff:192.168.0.4,49914 (api:46) > 29910:2018-03-21 10:46:59,474+0200 INFO (jsonrpc/7) [vdsm.api] START > repoStats(domains=()) from=::ffff:192.168.0.4,49914, > task_id=177b551b-9bd3-45a4-84d0-5244630c08e5 (api:46) > 29911:2018-03-21 10:46:59,474+0200 INFO (jsonrpc/7) [vdsm.api] FINISH > repoStats return={u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000470906', > 'lastCheck': '9.9', 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000334928', 'lastCheck': '9.7', 'valid': True}} > from=::ffff:192.168.0.4,49914, task_id=177b551b-9bd3-45a4-84d0-5244630c08e5 > (api:52) > 29912:2018-03-21 10:46:59,475+0200 INFO (jsonrpc/7) [vdsm.api] START > multipath_health() from=::ffff:192.168.0.4,49914, > task_id=30753d5c-9bfd-4b0f-ac83-e32278d82b63 (api:46) > 29913:2018-03-21 10:46:59,475+0200 INFO (jsonrpc/7) [vdsm.api] FINISH > multipath_health return={} from=::ffff:192.168.0.4,49914, > task_id=30753d5c-9bfd-4b0f-ac83-e32278d82b63 (api:52) > 29914:2018-03-21 10:46:59,482+0200 INFO (jsonrpc/7) [api.host] FINISH > getStats return={'status': {'message': 'Done', 'code': 0}, 'info': > {'cpuStatistics': {'11': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': > '0.13', 'cpuIdle': '99.80'}, '10': {'cpuUser': '0.07', 'nodeIndex': 0, > 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '13': {'cpuUser': '0.07', > 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '12': {'cpuUser': > '0.00', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.93'}, '15': > {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, > '14': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': > '99.73'}, '1': {'cpuUser': '1.07', 'nodeIndex': 1, 'cpuSys': '1.07', > 'cpuIdle': '97.86'}, '0': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': > '0.07', 'cpuIdle': '99.86'}, '3': {'cpuUser': '0.07', 'nodeIndex': 1, > 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '2': {'cpuUser': '0.00', > 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.87'}, '5': {'cpuUser': > '0.07', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '4': > {'cpuUser': '0.47', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.26'}, > '7': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.47', 'cpuIdle': > '99.53'}, '6': {'cpuUser': '0.20', 'nodeIndex': 0, 'cpuSys': '0.07', > 'cpuIdle': '99.73'}, '9': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': > '0.13', 'cpuIdle': '99.80'}, '8': {'cpuUser': '0.00', 'nodeIndex': 0, > 'cpuSys': '0.07', 'cpuIdle': '99.93'}}, 'numaNodeMemFree': {'1': > {'memPercent': 7, 'memFree': '15125'}, '0': {'memPercent': 5, 'memFree': > '15364'}}, 'memShared': 0, 'thpState': 'always', 'ksmMergeAcrossNodes': > True, 'vmCount': 0, 'memUsed': '3', 'storageDomains': > {u'ef184b28-1dbc-45ed-b0b3-85e780cce5d8': {'code': 0, 'actual': True, > 'version': 4, 'acquired': True, 'delay': '0.000470906', 'lastCheck': '9.9', > 'valid': True}, u'8a2c304b-c8ae-438b-af54-fc8797ea149f': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000334928', > 'lastCheck': '9.7', 'valid': True}}, 'incomingVmMigrations': 0, 'network': > {'enp3s0f0': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521622005.383289, 'name': 'enp3s0f0', 'tx': '1091412332', 'txDropped': > '0', 'rx': '11914976557', 'rxErrors': '0', 'speed': '100', 'rxDropped': > '1456'}, 'ovirtmgmt': 
{'txErrors': '0', 'state': 'up', 'sampleTime': > 1521622005.383289, 'name': 'ovirtmgmt', 'tx': '1048761326', 'txDropped': > '0', 'rx': '11615304242', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}, 'lo': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521622005.383289, 'name': 'lo', 'tx': '58228460774', 'txDropped': '0', > 'rx': '58228460774', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > 'enp3s0f1': {'txErrors': '0', 'state': 'up', 'sampleTime': > 1521622005.383289, 'name': 'enp3s0f1', 'tx': '0', 'txDropped': '0', 'rx': > '66839', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, > ';vdsmdummy;': {'txErrors': '0', 'state': 'down', 'sampleTime': > 1521622005.383289, 'name': ';vdsmdummy;', 'tx': '0', 'txDropped': '0', > 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f0': > {'txErrors': '0', 'state': 'down', 'sampleTime': 1521622005.383289, 'name': > 'enp4s0f0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', > 'speed': '1000', 'rxDropped': '0'}, 'enp4s0f1': {'txErrors': '0', 'state': > 'down', 'sampleTime': 1521622005.383289, 'name': 'enp4s0f1', 'tx': '0', > 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': > '0'}}, 'txDropped': '0', 'anonHugePages': '208', 'ksmPages': 100, > 'elapsedTime': '76578.31', 'cpuLoad': '0.17', 'cpuSys': '0.20', > 'diskStats': {'/var/log': {'free': '7344'}, '/var/run/vdsm/': {'free': > '16060'}, '/tmp': {'free': '906'}}, 'cpuUserVdsmd': '1.13', > 'netConfigDirty': 'False', 'memCommitted': 0, 'ksmState': False, > 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 31078, 'bootTime': > '1521545382', 'haStats': {'active': False, 'configured': False, 'score': 0, > 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus': > 'active', 'multipathHealth': {}, 'rxDropped': '1456', > 'outgoingVmMigrations': 0, 'swapTotal': 12287, 'swapFree': 12287, > 'hugepages': defaultdict(, {2048: {'resv_hugepages': 0, > 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, > 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}, > 1048576: {'resv_hugepages': 0, 'free_hugepages': 0, > 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0, 'vm.free_hugepages': > 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0}}), 'dateTime': > '2018-03-21T08:46:59 GMT', 'cpuUser': '0.15', 'memFree': 31334, 'cpuIdle': > '99.65', 'vmActive': 0, 'v2vJobs': {}, 'cpuSysVdsmd': '0.60'}} > from=::ffff:192.168.0.4,49914 (api:52) > 29915:2018-03-21 10:46:59,484+0200 INFO (jsonrpc/7) > [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds > (__init__:573) > > > On 21 Mar 2018, at 11:33, Fred Rolland wrote: > > Can you provide the vdsm logs from the host. > It looks the vdsm failed to connect to the server. > > On Wed, Mar 21, 2018 at 11:21 AM, Andrei Verovski > wrote: > >> Hi, >> >> I have 2-host oVirt setup with 2 Data Centers, one with local storage >> domain (DC #1) for VMs + Export domain on NFS, another with all NFS shared >> (DC #2). >> Trying to export VMs from DC #1 to DC #2. >> VMs are exported to DC #1 export domain (NFS), then domain put into >> maintenance mode and detached from DC #1. >> >> Unfortunately, attaching it to DC #2 failed. Logs attached. Tried to run >> this command twice. >> Workaround are possible in order to accomplish this task, yet it would be >> better to do in a way as it was designed. >> Thanks. 
>> >> >> 2018-03-21 10:43:24,024+02 ERROR [org.ovirt.engine.core.dal.dbb >> roker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128904) >> [570ec5d9-fff5-4656-afbd-90b3207a616e] EVENT_ID: >> USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage >> Domain node10-NFS-EXPORTS to Data Center StrDataCenter11. (User: >> admin at internal-authz) >> 2018-03-21 10:46:16,512+02 ERROR [org.ovirt.engine.core.dal.dbb >> roker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128956) >> [1435fc81] EVENT_ID: VDS_STORAGES_CONNECTION_FAILED(188), Failed to >> connect Host node11 to the Storage Domains node10-NFS-EXPORTS. >> 2018-03-21 10:46:16,651+02 ERROR [org.ovirt.engine.core.dal.dbb >> roker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128955) >> [a81ffa4a-5a58-41a0-888a-f0edc321609b] EVENT_ID: >> USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage >> Domain node10-NFS-EXPORTS to Data Center StrDataCenter11. (User: >> admin at internal-authz) >> >> >> *tail -n 1000 engine.log | grep 570ec5d9-fff5-4656-afbd-90b3207a616e* >> 2018-03-21 10:41:14,643+02 INFO [org.ovirt.engine.core.bll.sto >> rage.domain.AttachStorageDomainToPoolCommand] (default task-2) >> [570ec5d9-fff5-4656-afbd-90b3207a616e] Lock Acquired to object >> 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', >> sharedLocks=''}' >> 2018-03-21 10:41:16,129+02 INFO [org.ovirt.engine.core.bll.sto >> rage.domain.AttachStorageDomainToPoolCommand] >> (EE-ManagedThreadFactory-engine-Thread-128904) >> [570ec5d9-fff5-4656-afbd-90b3207a616e] Running command: >> AttachStorageDomainToPoolCommand internal: false. Entities affected : >> ID: 1d7208ce-d3a1-4406-9638-fe7051562994 Type: StorageAction group >> MANIPULATE_STORAGE_DOMAIN with role type ADMIN, ID: >> 80cc922f-8dea-4fed-b951-1060ba116ad5 Type: StoragePoolAction group >> MANIPULATE_STORAGE_DOMAIN with role type ADMIN >> 2018-03-21 10:43:23,564+02 ERROR [org.ovirt.engine.core.bll.sto >> rage.domain.AttachStorageDomainToPoolCommand] >> (EE-ManagedThreadFactory-engine-Thread-128904) >> [570ec5d9-fff5-4656-afbd-90b3207a616e] Cannot connect storage connection >> server, aborting attach storage domain operation. >> 2018-03-21 10:43:23,567+02 INFO [org.ovirt.engine.core.bll.CommandCompensator] >> (EE-ManagedThreadFactory-engine-Thread-128904) >> [570ec5d9-fff5-4656-afbd-90b3207a616e] Command >> [id=921ca7cd-4f93-46aa-8de2-91b13b8f96cb]: Compensating NEW_ENTITY_ID of >> org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; >> snapshot: StoragePoolIsoMapId:{storagePoolId='80cc922f-8dea-4fed-b951-1060ba116ad5', >> storageId='1d7208ce-d3a1-4406-9638-fe7051562994'}. >> 2018-03-21 10:43:24,024+02 ERROR [org.ovirt.engine.core.dal.dbb >> roker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128904) >> [570ec5d9-fff5-4656-afbd-90b3207a616e] EVENT_ID: >> USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage >> Domain node10-NFS-EXPORTS to Data Center StrDataCenter11. 
(User: >> admin at internal-authz) >> 2018-03-21 10:43:24,114+02 INFO [org.ovirt.engine.core.bll.sto >> rage.domain.AttachStorageDomainToPoolCommand] >> (EE-ManagedThreadFactory-engine-Thread-128904) >> [570ec5d9-fff5-4656-afbd-90b3207a616e] Lock freed to object >> 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', >> sharedLocks=''}' >> >> >> >> *[root at node00 ovirt-engine]# tail -n 1000 engine.log | grep >> a81ffa4a-5a58-41a0-888a-f0edc321609b* >> 2018-03-21 10:44:11,025+02 INFO [org.ovirt.engine.core.bll.sto >> rage.domain.AttachStorageDomainToPoolCommand] (default task-16) >> [a81ffa4a-5a58-41a0-888a-f0edc321609b] Lock Acquired to object >> 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', >> sharedLocks=''}' >> 2018-03-21 10:44:11,236+02 INFO [org.ovirt.engine.core.bll.sto >> rage.domain.AttachStorageDomainToPoolCommand] >> (EE-ManagedThreadFactory-engine-Thread-128955) >> [a81ffa4a-5a58-41a0-888a-f0edc321609b] Running command: >> AttachStorageDomainToPoolCommand internal: false. Entities affected : >> ID: 1d7208ce-d3a1-4406-9638-fe7051562994 Type: StorageAction group >> MANIPULATE_STORAGE_DOMAIN with role type ADMIN, ID: >> 80cc922f-8dea-4fed-b951-1060ba116ad5 Type: StoragePoolAction group >> MANIPULATE_STORAGE_DOMAIN with role type ADMIN >> 2018-03-21 10:46:16,567+02 ERROR [org.ovirt.engine.core.bll.sto >> rage.domain.AttachStorageDomainToPoolCommand] >> (EE-ManagedThreadFactory-engine-Thread-128955) >> [a81ffa4a-5a58-41a0-888a-f0edc321609b] Cannot connect storage connection >> server, aborting attach storage domain operation. >> 2018-03-21 10:46:16,568+02 INFO [org.ovirt.engine.core.bll.CommandCompensator] >> (EE-ManagedThreadFactory-engine-Thread-128955) >> [a81ffa4a-5a58-41a0-888a-f0edc321609b] Command >> [id=b5c25100-1a8a-4db0-9509-99cfa60995b1]: Compensating NEW_ENTITY_ID of >> org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; >> snapshot: StoragePoolIsoMapId:{storagePoolId='80cc922f-8dea-4fed-b951-1060ba116ad5', >> storageId='1d7208ce-d3a1-4406-9638-fe7051562994'}. >> 2018-03-21 10:46:16,651+02 ERROR [org.ovirt.engine.core.dal.dbb >> roker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-128955) >> [a81ffa4a-5a58-41a0-888a-f0edc321609b] EVENT_ID: >> USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage >> Domain node10-NFS-EXPORTS to Data Center StrDataCenter11. (User: >> admin at internal-authz) >> 2018-03-21 10:46:16,681+02 INFO [org.ovirt.engine.core.bll.sto >> rage.domain.AttachStorageDomainToPoolCommand] >> (EE-ManagedThreadFactory-engine-Thread-128955) >> [a81ffa4a-5a58-41a0-888a-f0edc321609b] Lock freed to object >> 'EngineLock:{exclusiveLocks='[1d7208ce-d3a1-4406-9638-fe7051562994=STORAGE]', >> sharedLocks='?}' >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas at devels.es Wed Mar 21 12:37:51 2018 From: nicolas at devels.es (nicolas at devels.es) Date: Wed, 21 Mar 2018 12:37:51 +0000 Subject: [ovirt-users] Bad volume specification Message-ID: <18e2bb8f7b64871a722d4355ef51a56c@devels.es> Hi, We're running oVirt 4.1.9, today I put a host on maintenance, I saw one of the VMs was taking too long to migrate so I shut it down. It seems that just in that moment the machine ended migrating, but the shutdown did happen as well. 
Now, when I try to start the VM I'm getting the following error:

2018-03-21 12:31:02,309Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler3) [7985a4e2] EVENT_ID: VM_DOWN_ERROR(119), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: VM openmaint.iaas.domain.com is down with error. Exit message: Bad volume specification {'index': '0', u'domainID': u'04cb5bd0-d94e-4d14-a71a-e63a669e11b9', 'reqsize': '0', u'format': u'cow', u'optional': u'false', u'address': {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'slot': u'0x06'}, u'volumeID': u'68ee7a04-ceff-49f0-bf91-256870543921', 'apparentsize': '3221225472', u'imageID': u'9d087e6b-0832-46db-acb0-16d5131afa0c', u'discard': False, u'specParams': {}, u'readonly': u'false', u'iface': u'virtio', u'deviceId': u'9d087e6b-0832-46db-acb0-16d5131afa0c', 'truesize': '3221225472', u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type': u'disk'}.

It looks quite bad... I'm attaching the engine.log from the moment I start the VM.

Is there anything I can do to recover the VM? oVirt says the disk is OK in the 'Disks' tab.

Thanks.
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: engine.log
URL: 

From tbaror at gmail.com  Wed Mar 21 10:41:47 2018
From: tbaror at gmail.com (Tal Bar-Or)
Date: Wed, 21 Mar 2018 12:41:47 +0200
Subject: [ovirt-users] Ovirt nodes NFS connection
Message-ID: 

Hello All,

I am about to deploy a new oVirt platform. The platform will consist of 4 oVirt nodes including management; all server nodes and the storage will have the following config:

*nodes server*
4x10G ports network cards
2x10G will be used for VM network.
2x10G will be used for storage connection
2x1GbE, 1GbE for node management

*Storage*
4x10G ports network cards
3x10G for NFS storage mounted by the oVirt nodes

Given the network layout above, what is the suggested best practice for the nodes' NFS storage connection, in terms of throughput and path resilience?
The first option: each node with 2x10G in LACP, and 3x10G in LACP on the storage side.
The second option: create 3 VLANs, assign each node to those 3 VLANs across 2 NICs, and on the storage side assign 3 NICs across the 3 VLANs.
Thanks

-- 
Tal Bar-or
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
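For reference, the LACP half of the first option would look roughly like the ifcfg sketch below on a node. All names and addresses are illustrative, not from Tal's setup, and the member NICs would each need their own ifcfg file with MASTER=bond1 and SLAVE=yes. One caveat worth keeping in mind: a single NFS mount typically rides one TCP connection, so LACP here mainly buys path resilience rather than extra single-stream throughput.

    # /etc/sysconfig/network-scripts/ifcfg-bond1  (illustrative storage bond)
    DEVICE=bond1
    TYPE=Bond
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
    BOOTPROTO=none
    IPADDR=10.10.10.11   # assumed storage-network address
    PREFIX=24            # assumed
    ONBOOT=yes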
From vfeenstr at redhat.com  Wed Mar 21 12:01:35 2018
From: vfeenstr at redhat.com (Vinzenz Feenstra)
Date: Wed, 21 Mar 2018 13:01:35 +0100
Subject: Re: [ovirt-users] GPG Key of evilissimo repo for ovirt-guest-agent is expired
In-Reply-To: <20180321105932.526d1198@fiorina>
References: <687643086.7795430.1521564159592.JavaMail.zimbra@ubimet.com> <20180321105932.526d1198@fiorina>
Message-ID: <21A3BDDD-B56D-4767-8A95-21959CB69666@redhat.com>

> On 21 Mar 2018, at 10:59, Tomáš Golembiovský wrote:
>
> Hi,
>
> On Tue, 20 Mar 2018 16:42:39 +0000 (UTC) Florian Schmid wrote:
>
>> Hi,
>>
>> it looks like for this repo, the GPG key is expired.
>> http://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04/
>>
>> Does someone know whom I should contact so that this key will be renewed?
>
> The repository belongs to Vinzenz Feenstra, as you can see in the package metadata.
>
>> Or does someone know another repo where I can download the latest ovirt-guest-agent for Ubuntu 16.04?
>
> You can get ovirt-guest-agent packages from the Debian repository:
>
> https://packages.debian.org/search?suite=all&searchon=names&keywords=ovirt-guest-agent

And they are now in xenial and newer on Ubuntu as well.

> Tomas

I did rebuild the packages with the updated keys - you might have to re-import them from there - not sure.
https://download.opensuse.org/repositories/home:/evilissimo:/ubuntu:/16.04/xUbuntu_16.04/

>> BR Florian
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
> --
> Tomáš Golembiovský

-- 
Vinzenz Feenstra
Senior Software Developer
Red Hat Czech
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hanson at andrewswireless.net  Wed Mar 21 13:31:00 2018
From: hanson at andrewswireless.net (Hanson Turner)
Date: Wed, 21 Mar 2018 09:31:00 -0400
Subject: Re: [ovirt-users] Adding host to hosted-engine /w gluster cluster. (On ovirt Node 4.2.1.1)
In-Reply-To: References: <9352191a-76dd-13ed-463a-61033dc3fe6a@andrewswireless.net>
Message-ID: 

Hi Sahina,

On the fourth node, I've found /var/log/glusterfs/rhev-data-center-mnt-glusterSD-ovirtnode1.core\:_engine.log ... is this the engine.log you're referring to, or do you want one from the hosted engine?

I actually do want to go replica 5. Most VMs it runs are small (1 core, 1GB RAM, 8GB HDD) and HA is needed. I'd like a bigger critical margin than one node failing.

As far as the repos, it's a straight oVirt Node ISO install - I think it was Node 4.2.0, which was then yum-updated to 4.2.1.1. When I installed 4.0, I'd installed on top of CentOS. This round I went straight with the Node OS because of simplicity in updating.

I can manually restart gluster from the CLI; the peer and volume status show no peers and no volumes.

One thing of note: the networking is still as set up from the Node install. I cannot change the networking info from the oVirt GUI/dashboard. The host goes unresponsive and then another host power cycles it.

Thanks,

Hanson

On 03/21/2018 06:12 AM, Sahina Bose wrote:
> On Tue, Mar 20, 2018 at 9:41 PM, Hanson Turner wrote:
>> Hi Guys,
>>
>> I've a 3 machine pool running gluster with replica 3 and want to add
>> two more machines.
>> This would change to a replica 5...
>
> Adding 2 more nodes to the cluster will not change it to a replica 5.
> replica 3 is a configuration on the gluster volume. I assume you don't
> need a replica 5, but just to add more nodes (and possibly new gluster
> volumes) to the cluster?
>
>> In ovirt 4.0, I'd done everything manually. No problem there.
>> In ovirt 4.2, I'd used the wizard for the hosted-engine. It looks
>> like the fourth node has been added to the pool but will not go
>> active. It complains gluster isn't running (I have not manually
>> configured /dev/sdb for gluster). Host install+deploy fails. Host can
>> go into maintenance w/o issue. (Meaning the host has been added to
>> the cluster, but isn't operational.)
>
> Are the repos configured correctly on the new nodes? Does the oVirt
> cluster where the nodes are being added have "Enable Gluster Service"
> enabled?
>
>> What do I need to do to get the node up and running properly, with
>> gluster syncing properly? Manually restarting gluster tells me
>> there's no peers and no volumes.
>> Do we have a wizard for this too? Or do I need to go find the setup
>> scripts and configure hosts 4 + 5 manually and run the deploy again?
>
> The host addition flow should take care of installing gluster.
> Can you share the engine log from when the host was added to when it's
> reported non-operational?
>
>> Thanks,
>> Hanson
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
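As a side note on Sahina's point that the replica count belongs to the volume rather than the pool, this can be confirmed from any node roughly as follows (the volume name "engine" is taken from Hanson's log path; the commented output lines are abridged examples):

    gluster volume info engine
    # Type: Replicate
    # Number of Bricks: 1 x 3 = 3   <- stays replica 3 however many peers join
    gluster peer status             # peer count is independent of any volume's replica count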
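A sketch of what that ifcfg change could look like for the management bridge, using zois's target address from earlier in the thread; the prefix, gateway and file location are assumptions to adjust to the real subnet:

    # /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
    DEVICE=ovirtmgmt
    TYPE=Bridge
    BOOTPROTO=none        # was "dhcp"
    IPADDR=10.0.0.9       # desired static address
    PREFIX=24             # assumed
    GATEWAY=10.0.0.1      # assumed
    ONBOOT=yes

    # then restart networking:
    systemctl restart network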
On Tue, Mar 20, 2018 at 2:50 PM, zois roupas wrote:

Hi again all,

"Unless I misunderstood you here, do you use a different IP address when switching to static or the same IP that you got from DHCP? If yes, then this is another flow.."

To answer your question Michael, I'm trying to configure a different IP outside of my DHCP pool. The DHCP IP is 10.0.0.245 from the range 10.0.0.245-10.0.0.250 and I want to configure the IP 10.0.0.9 as the host's IP.

"One thing to note if you are changing the IP to different one that was assigned by DHCP you should uncheck "Verify connectivity between Host and Engine""

Ales, I also tried to follow your advice and uncheck "Verify connectivity between Host and Engine" as proposed. Again the same results: it keeps reverting to the previous DHCP IP. I will extract the vdsm log and I'll get back to you; in the meanwhile, this is the error that I see after the assignment of the static IP in the log:

2018-03-20 14:16:57,576+0200 ERROR (monitor/38f4464) [storage.Monitor] Error checking domain 38f4464b-74b9-4468-891b-03cd65d72fec (monitor:424)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 405, in _checkDomainStatus
    self.domain.selftest()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 688, in selftest
    self.oop.os.statvfs(self.domaindir)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py", line 243, in statvfs
    return self._iop.statvfs(path)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 488, in statvfs
    resdict = self._sendCommand("statvfs", {"path": path}, self.timeout)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 455, in _sendCommand
    raise Timeout(os.strerror(errno.ETIMEDOUT))
Timeout: Connection timed out

Best Regards
Zois
________________________________
From: Ales Musil
Sent: Tuesday, March 20, 2018 11:28 AM
To: Michael Burman
Cc: zois roupas; users
Subject: Re: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a

One thing to note: if you are changing the IP to a different one than was assigned by DHCP, you should uncheck "Verify connectivity between Host and Engine". This makes sure that the engine won't lose connectivity, and in the case of switching IP that happens.

On Tue, Mar 20, 2018 at 10:15 AM, Michael Burman wrote:

Indeed very odd, this shouldn't behave this way; I just tested it myself and it is working as expected. Unless I misunderstood you here, do you use a different IP address when switching to static or the same IP that you got from DHCP? If yes, then this is another flow.. Can you please share the vdsm version and vdsm log with us?

Edy, any idea what can cause this?

On Tue, Mar 20, 2018 at 11:10 AM, zois roupas wrote:

Hi Michael and thanks a lot for the time

Great step by step instructions, but something strange is happening while trying to change to a static IP. I tried to do the change while the host was in maintenance mode and in activated mode, but again after some minutes the system reverts to the IP that DHCP is serving! What am I missing here? Do you have any ideas?

Best Regards
Zois
________________________________
From: Michael Burman
Sent: Tuesday, March 20, 2018 8:46 AM
To: zois roupas
Cc: users at ovirt.org
Subject: Re: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a

Hello Zois,

It's pretty easy to do via the webadmin UI: go to the Hosts main tab > choose host > go to the 'Network Interfaces' sub tab > press the 'Setup Host Networks' button > press the pencil icon on your management network > choose Static IP > press OK and OK to approve the operation.

- Note that in some cases, especially if this is an SPM host, you will lose connectivity to the host for a few seconds and the host may go to a non-responsive state; on a non-SPM host this usually works without any specific issues.
- If the spoken host is an SPM host, I recommend setting it first to maintenance mode, doing the switch and then activating. For a non-SPM host this will work fine as well when the host is UP.

Cheers)

On Mon, Mar 19, 2018 at 12:15 PM, zois roupas wrote:

Hello everyone

I've made a rookie mistake by installing oVirt 4.2 on CentOS 7 with DHCP instead of a static IP configuration. Both engine and host are on the same machine because of limited resources, and I was so happy that everything worked so well that I kept configuring and installing VMs, adding local and NFS storage and setting up the backup!
Both engine and host are in the same machine cause of limited resources and i was so happy that everything worked so well that i kept configuring and installing vm's ,adding local and nfs storage and setting up the backup! As you understand i must change the configuration to static ip and i can't find any guide describing the correct procedure. Is there an official guide to change configuration without causing any trouble? I've found this thread http://lists.ovirt.org/pipermail/users/2014-May/024432.html but this is for a hosted engine and doesn't help when both engine and host are in the same machine Thanx in advance Best Regards Zois _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman [https://www.redhat.com/files/brand/email/sig-redhat.png] -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman [https://www.redhat.com/files/brand/email/sig-redhat.png] _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- ALES MUSIL INTERN - rhv network Red Hat EMEA amusil at redhat.com IM: amusil [https://www.redhat.com/files/brand/email/sig-redhat.png] -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman [https://www.redhat.com/files/brand/email/sig-redhat.png] -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman [https://www.redhat.com/files/brand/email/sig-redhat.png] -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmartinezp at uci.cu Wed Mar 21 14:32:32 2018 From: mmartinezp at uci.cu (Marcos Michel Martinez Perez) Date: Wed, 21 Mar 2018 10:32:32 -0400 Subject: [ovirt-users] ovirt do not start postgresql Message-ID: <3abe88f1-a0bf-5b9d-fae0-03465716531a@uci.cu> good morning list, can someone tell me how I can solve this error? [ ERROR ] Failed to execute stage 'Misc configuration': Failed to start service 'rh-postgresql95-postgresql' [ INFO? ] Stage: Clean up ????????? Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20180321103734-b7svhc.log [ INFO? ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20180321103949-setup.conf' [ INFO? ] Stage: Pre-termination [ INFO? ] Stage: Termination [ ERROR ] Execution of setup failed UCIENCIA 2018: III Conferencia Cient?fica Internacional de la Universidad de las Ciencias Inform?ticas. Del 24-26 de septiembre, 2018 http://uciencia.uci.cu http://eventos.uci.cu -------------- next part -------------- An HTML attachment was scrubbed... URL: From emayoral at arsys.es Wed Mar 21 14:17:17 2018 From: emayoral at arsys.es (Eduardo Mayoral) Date: Wed, 21 Mar 2018 15:17:17 +0100 Subject: [ovirt-users] Memory hot plug Message-ID: Hi, ??? I recently tried to hot plug some memory on a VM running CentOS 6.9 with ovirt-guest-tools installed. ??? oVirt version is 4.2.1.6-1.el7.centos ??? I was careful to increment a multiple of 256 MB as specified in https://www.ovirt.org/develop/release-management/features/virt/hot-plug-memory/? (4 GB -> 6 GB , maximum memory configured for the VM was 8 GB) ??? However the VM was marked as with a pending change for next reboot. 
From emayoral at arsys.es  Wed Mar 21 14:17:17 2018
From: emayoral at arsys.es (Eduardo Mayoral)
Date: Wed, 21 Mar 2018 15:17:17 +0100
Subject: [ovirt-users] Memory hot plug
Message-ID: 

Hi,

I recently tried to hot plug some memory on a VM running CentOS 6.9 with ovirt-guest-tools installed.

oVirt version is 4.2.1.6-1.el7.centos

I was careful to increment by a multiple of 256 MB as specified in https://www.ovirt.org/develop/release-management/features/virt/hot-plug-memory/ (4 GB -> 6 GB; maximum memory configured for the VM was 8 GB).

However, the VM was marked with a pending change for next reboot. Am I missing some non-default configuration required for memory hot-plug to work?

Thanks in advance!

-- 
Eduardo Mayoral Jimeno (emayoral at arsys.es)
Administrador de sistemas. Departamento de Plataformas. Arsys internet.
+34 941 620 145 ext. 5153
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lorenzetto.luca at gmail.com  Wed Mar 21 14:45:59 2018
From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto)
Date: Wed, 21 Mar 2018 15:45:59 +0100
Subject: Re: [ovirt-users] Memory hot plug
In-Reply-To: References: Message-ID: 

Hello Eduardo,

On Wed, Mar 21, 2018 at 3:17 PM, Eduardo Mayoral wrote:
> Hi,
>
> I recently tried to hot plug some memory on a VM running CentOS 6.9 with
> ovirt-guest-tools installed.
>
> oVirt version is 4.2.1.6-1.el7.centos
>
> I was careful to increment by a multiple of 256 MB as specified in
> https://www.ovirt.org/develop/release-management/features/virt/hot-plug-memory/
> (4 GB -> 6 GB; maximum memory configured for the VM was 8 GB).
>
> However, the VM was marked with a pending change for next reboot. Am I
> missing some non-default configuration required for memory hot-plug to
> work?

No, you're not. Now the VM is seeing the base memory (4GB) you were seeing at boot, plus additional memory (2GB added). Next time you boot the VM, the system will see a single block of 6 GB.

That's the difference.

Luca

-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo. Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net

From didi at redhat.com  Wed Mar 21 15:08:12 2018
From: didi at redhat.com (Yedidyah Bar David)
Date: Wed, 21 Mar 2018 17:08:12 +0200
Subject: Re: [ovirt-users] ovirt do not start postgresql
In-Reply-To: <3abe88f1-a0bf-5b9d-fae0-03465716531a@uci.cu>
References: <3abe88f1-a0bf-5b9d-fae0-03465716531a@uci.cu>
Message-ID: 

On Wed, Mar 21, 2018 at 4:32 PM, Marcos Michel Martinez Perez <mmartinezp at uci.cu> wrote:
> Good morning list, can someone tell me how I can solve this error?
>
> [ ERROR ] Failed to execute stage 'Misc configuration': Failed to start
> service 'rh-postgresql95-postgresql'

Please check/share relevant logs, including probably:

/var/log/ovirt-engine/setup/*
/var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_log
/var/log/messages

Thanks,
-- 
Didi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mmartinezp at uci.cu  Wed Mar 21 15:12:32 2018
From: mmartinezp at uci.cu (Marcos Michel Martinez Perez)
Date: Wed, 21 Mar 2018 11:12:32 -0400
Subject: Re: [ovirt-users] ovirt do not start postgresql
In-Reply-To: References: <3abe88f1-a0bf-5b9d-fae0-03465716531a@uci.cu>
Message-ID: <896c0853-4d6c-854b-896f-a9982b0e77a6@uci.cu>

previous error

[ ERROR ] schema.sh: FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/create_functions.sql

UCIENCIA 2018: III Conferencia Científica Internacional de la Universidad de las Ciencias Informáticas. Del 24-26 de septiembre, 2018 http://uciencia.uci.cu http://eventos.uci.cu
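To get at the SQL error hiding behind that schema.sh FATAL line, one option is to run the failing script by hand against the freshly created database; a sketch, under the assumption that the database is the default 'engine' (the real name for a failed run, e.g. engine_20180321121404, is recorded in the setup log):

    # as root on the engine host
    su - postgres -c "scl enable rh-postgresql95 -- psql -d engine -f /usr/share/ovirt-engine/dbscripts/create_functions.sql"

Running it this way prints the first offending statement instead of just the wrapper's generic failure.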
From didi at redhat.com  Wed Mar 21 15:22:24 2018
From: didi at redhat.com (Yedidyah Bar David)
Date: Wed, 21 Mar 2018 17:22:24 +0200
Subject: Re: [ovirt-users] ovirt do not start postgresql
In-Reply-To: <896c0853-4d6c-854b-896f-a9982b0e77a6@uci.cu>
References: <3abe88f1-a0bf-5b9d-fae0-03465716531a@uci.cu> <896c0853-4d6c-854b-896f-a9982b0e77a6@uci.cu>
Message-ID: 

On Wed, Mar 21, 2018 at 5:12 PM, Marcos Michel Martinez Perez wrote:
> previous error
>
> [ ERROR ] schema.sh: FATAL: Cannot execute sql command:
> --file=/usr/share/ovirt-engine/dbscripts/create_functions.sql

Thanks. Please share the entire logs.

You can use some file sharing service and add a link to it.

Best regards,

> UCIENCIA 2018: III Conferencia Científica Internacional de la Universidad de
> las Ciencias Informáticas. Del 24-26 de septiembre, 2018
> http://uciencia.uci.cu http://eventos.uci.cu

-- 
Didi

From emayoral at arsys.es  Wed Mar 21 15:36:59 2018
From: emayoral at arsys.es (Eduardo Mayoral)
Date: Wed, 21 Mar 2018 16:36:59 +0100
Subject: Re: [ovirt-users] Memory hot plug
In-Reply-To: References: Message-ID: <5eb788c7-8be1-138f-d2c9-53680b775d5c@arsys.es>

Thanks for your reply! You are right.

I tried with a CentOS 6 and a CentOS 7 guest and both picked up the extra memory with no problem. It seems it was a particular issue with the guest that I tried first: its kernel version was older and I had to activate the memory bank as described in https://kb.vmware.com/s/article/1012764 (different hypervisor, but the principle is the same).

Best regards,

Eduardo Mayoral Jimeno (emayoral at arsys.es)
Administrador de sistemas. Departamento de Plataformas. Arsys internet.
+34 941 620 145 ext. 5153

On 21/03/18 15:45, Luca 'remix_tj' Lorenzetto wrote:
> Hello Eduardo,
>
> On Wed, Mar 21, 2018 at 3:17 PM, Eduardo Mayoral wrote:
>> Hi,
>>
>> I recently tried to hot plug some memory on a VM running CentOS 6.9 with
>> ovirt-guest-tools installed.
>>
>> oVirt version is 4.2.1.6-1.el7.centos
>>
>> I was careful to increment by a multiple of 256 MB as specified in
>> https://www.ovirt.org/develop/release-management/features/virt/hot-plug-memory/
>> (4 GB -> 6 GB; maximum memory configured for the VM was 8 GB).
>>
>> However, the VM was marked with a pending change for next reboot. Am I
>> missing some non-default configuration required for memory hot-plug to
>> work?
>
> No, you're not. Now the VM is seeing the base memory (4GB) you were
> seeing at boot, plus additional memory (2GB added). Next time you boot
> the VM, the system will see a single block of 6 GB.
>
> That's the difference.
>
> Luca
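The manual activation Eduardo mentions boils down to onlining the new memory blocks through sysfs on older guest kernels that do not do it automatically; roughly like this sketch (run inside the guest as root; block numbers are guest-specific, memory40 is only an example):

    # list memory blocks still offline after the hot plug
    grep -l offline /sys/devices/system/memory/memory*/state

    # online each reported block, e.g.:
    echo online > /sys/devices/system/memory/memory40/state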
Regards Le 20-Mar-2018 12:22:50 +0100, stirabos at redhat.com a crit: On Tue, Mar 20, 2018 at 11:44 AM, wrote: Hi, In fact it is a workaround coming from you I found in the bugtrack that helped me : chmod 644 /var/cache/vdsm/schema/* As the only thing looking like a weird error I have found was : ERROR Exception raised#012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/vdsm/vdsmd.py", line 156, in run#012 serve_clients(log)#012 File "/usr/lib/python2.7/site-packages/vdsm/vdsmd.py", line 103, in serve_clients#012 cif = clientIF.getInstance(irs, log, scheduler)#012 File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 250, in getInstance#012 cls._instance = clientIF(irs, log, scheduler)#012 File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 144, in __init__#012 self._prepareJSONRPCServer()#012 File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 307, in _prepareJSONRPCServer#012 bridge = Bridge.DynamicBridge()#012 File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 67, in __init__#012 self._schema = vdsmapi.Schema(paths, api_strict_mode)#012 File "/usr/lib/python2.7/site-packages/vdsm/api/vdsmapi.py", line 217, in __init__#012 raise SchemaNotFound("Unable to find API schema file")#012SchemaNotFound: Unable to find API schema file Thanks, it's tracked here: https://bugzilla.redhat.com/1552565 A fix will come in the next build. So I can go one step futher, but the installation still fails in the end, with file permission problems in datastore files (i chose NFS 4.1). I can't indeed touch or get informations even logged in root. But I can create and delete files in the same directory. Is there a workaround for this too ? Everything should get wrote and read on the NFS export as vdsm:kvm (36:36); can you please ensure that everything is fine with that? Regards Le 19-Mar-2018 17:48:41 +0100, stirabos at redhat.com a crit: On Mon, Mar 19, 2018 at 4:56 PM, wrote: Hi, I wanted to rebuild a new hosted engine setup, as the old was corrupted (too much violent poweroff !) So the server was not reinstalled, I just runned "ovirt-hosted-engine-cleanup". The network setup generated by vdsm seems to be still in place, so I haven't changed anything there. Then I decided to update the packages to the latest versions avaible, rebooted the server and run "ovirt-hosted-engine-setup". But the process never succeeds, as I get an error after a long time spent in "[ INFO ] TASK [Wait for the host to be up]" [ ERROR ] fatal: [localhost]: FAILED! 
=> {"ansible_facts": {"ovirt_hosts": [{"address": "pfm-srv-virt-1.pfm-ad.pfm.loc", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "pfm.loc", "subject": "O=pfm.loc,CN=pfm-srv-virt-1.pfm-ad.pfm.loc"}, "cluster": {"href": "/ovirt-engine/api/clusters/d6c9358e-2b8b-11e8-bc86-00163e152701", "id": "d6c9358e-2b8b-11e8-bc86-00163e152701"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/542566c4-fc85-4398-9402-10c8adaa9554", "id": "542566c4-fc85-4398-9402-10c8adaa9554", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, "name": "pfm-srv-virt-1.pfm-ad.pfm.loc", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:J75BVLFnmGBGFosXzaxCRnuIYcOc75HUBQZ4pOKpDg8", "port": 22}, "statistics": [], "status": "non_responsive", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "rhel", "unmanaged_networks": [], "update_available": false}]}, "attempts": 120, "changed": false} [ INFO ] TASK [Remove local vm dir] [ INFO ] TASK [Notify the user about a failure] [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.n"} I made another try with Cockpit, it is the same. Am I doing something wrong or is there a bug ? I suppose that your host was condifured with DHCP, if so it's this one: https://bugzilla.redhat.com/1549642 The fix will come with 4.2.2. Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mmartinezp at uci.cu Wed Mar 21 17:24:12 2018 From: mmartinezp at uci.cu (Marcos Michel Martinez Perez) Date: Wed, 21 Mar 2018 13:24:12 -0400 Subject: [ovirt-users] ovirt do not start postgresql In-Reply-To: References: <3abe88f1-a0bf-5b9d-fae0-03465716531a@uci.cu> <896c0853-4d6c-854b-896f-a9982b0e77a6@uci.cu> Message-ID: <9810e78d-8781-6c98-054d-7db851fab669@uci.cu> After performing the configuration of ovirt, that is, executing the engine-config, and accepting or not the parameters that it asks for, the wizard starts to make a series of configurations and then it throws me this error. [ INFO? ] Stage: Transaction setup [ INFO? ] Stopping engine service [ INFO? ] Stopping ovirt-fence-kdump-listener service [ INFO? ] Stopping dwh service [ INFO? ] Stopping Image I/O Proxy service [ INFO? ] Stopping vmconsole-proxy service [ INFO? ] Stopping websocket-proxy service [ INFO? ] Stage: Misc configuration [ INFO? ] Stage: Package installation [ INFO? ] Stage: Misc configuration [ INFO? ] Upgrading CA [ INFO? ] Creating PostgreSQL 'engine' database [ INFO? ] Configuring PostgreSQL [ INFO? ] Creating PostgreSQL 'ovirt_engine_history' database [ INFO? ] Configuring PostgreSQL [ INFO? ] Creating CA [ INFO? ] Creating/refreshing DWH database schema [ INFO? ] Configuring Image I/O Proxy [ INFO? ] Setting up ovirt-vmconsole proxy helper PKI artifacts [ INFO? ] Setting up ovirt-vmconsole SSH PKI artifacts [ INFO? ] Configuring WebSocket Proxy [ INFO? ] Creating/refreshing Engine database schema [ ERROR ] schema.sh: FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/create_functions.sql [ ERROR ] Failed to execute stage 'Misc configuration': Engine schema refresh failed [ INFO? ] Rolling back DWH database schema [ INFO? ] Clearing DWH database ovirt_engine_history_20180321121409 [ INFO? ] Rolling back database schema [ INFO? ] Clearing Engine database engine_20180321121404 [ INFO? ] Stage: Clean up ????????? Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20180321121319-mvrexc.log [ INFO? ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20180321121431-setup.conf' [ INFO? ] Stage: Pre-termination [ INFO? ] Stage: Termination [ ERROR ] Execution of setup failed UCIENCIA 2018: III Conferencia Cient?fica Internacional de la Universidad de las Ciencias Inform?ticas. Del 24-26 de septiembre, 2018 http://uciencia.uci.cu http://eventos.uci.cu -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbonazzo at redhat.com Wed Mar 21 18:19:51 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Wed, 21 Mar 2018 18:19:51 +0000 Subject: [ovirt-users] ovirt do not start postgresql In-Reply-To: <9810e78d-8781-6c98-054d-7db851fab669@uci.cu> References: <3abe88f1-a0bf-5b9d-fae0-03465716531a@uci.cu> <896c0853-4d6c-854b-896f-a9982b0e77a6@uci.cu> <9810e78d-8781-6c98-054d-7db851fab669@uci.cu> Message-ID: Il mer 21 mar 2018, 18:25 Marcos Michel Martinez Perez ha scritto: > After performing the configuration of ovirt, that is, executing the > engine-config, and accepting or not the parameters that it asks for, the > wizard starts to make a series of configurations and then it throws me this > error. 
> > [ INFO ] Stage: Transaction setup > [ INFO ] Stopping engine service > [ INFO ] Stopping ovirt-fence-kdump-listener service > [ INFO ] Stopping dwh service > [ INFO ] Stopping Image I/O Proxy service > [ INFO ] Stopping vmconsole-proxy service > [ INFO ] Stopping websocket-proxy service > [ INFO ] Stage: Misc configuration > [ INFO ] Stage: Package installation > [ INFO ] Stage: Misc configuration > [ INFO ] Upgrading CA > [ INFO ] Creating PostgreSQL 'engine' database > [ INFO ] Configuring PostgreSQL > [ INFO ] Creating PostgreSQL 'ovirt_engine_history' database > [ INFO ] Configuring PostgreSQL > [ INFO ] Creating CA > [ INFO ] Creating/refreshing DWH database schema > [ INFO ] Configuring Image I/O Proxy > [ INFO ] Setting up ovirt-vmconsole proxy helper PKI artifacts > [ INFO ] Setting up ovirt-vmconsole SSH PKI artifacts > [ INFO ] Configuring WebSocket Proxy > [ INFO ] Creating/refreshing Engine database schema > [ ERROR ] schema.sh: FATAL: Cannot execute sql command: > --file=/usr/share/ovirt-engine/dbscripts/create_functions.sql > [ ERROR ] Failed to execute stage 'Misc configuration': Engine schema > refresh failed > [ INFO ] Rolling back DWH database schema > [ INFO ] Clearing DWH database ovirt_engine_history_20180321121409 > [ INFO ] Rolling back database schema > [ INFO ] Clearing Engine database engine_20180321121404 > [ INFO ] Stage: Clean up > Log file is located at > /var/log/ovirt-engine/setup/ovirt-engine-setup-20180321121319-mvrexc.log > Can you please share this log? [ INFO ] Generating answer file > '/var/lib/ovirt-engine/setup/answers/20180321121431-setup.conf' > [ INFO ] Stage: Pre-termination > [ INFO ] Stage: Termination > [ ERROR ] Execution of setup failed > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mmartinezp at uci.cu Wed Mar 21 18:29:36 2018 From: mmartinezp at uci.cu (Marcos Michel Martinez Perez) Date: Wed, 21 Mar 2018 14:29:36 -0400 Subject: [ovirt-users] ovirt do not start postgresql In-Reply-To: References: <3abe88f1-a0bf-5b9d-fae0-03465716531a@uci.cu> <896c0853-4d6c-854b-896f-a9982b0e77a6@uci.cu> <9810e78d-8781-6c98-054d-7db851fab669@uci.cu> Message-ID: # action=setup [environment:default] OVESETUP_DIALOG/confirmSettings=bool:True OVESETUP_CONFIG/applicationMode=str:both OVESETUP_CONFIG/remoteEngineSetupStyle=none:None OVESETUP_CONFIG/sanWipeAfterDelete=bool:False OVESETUP_CONFIG/storageIsLocal=bool:False OVESETUP_CONFIG/firewallManager=none:None OVESETUP_CONFIG/remoteEngineHostRootPassword=none:None OVESETUP_CONFIG/firewallChangesReview=none:None OVESETUP_CONFIG/updateFirewall=bool:False OVESETUP_CONFIG/remoteEngineHostSshPort=none:None OVESETUP_CONFIG/fqdn=str:ovirt.sige.uci.cu OVESETUP_CONFIG/storageType=none:None OSETUP_RPMDISTRO/requireRollback=none:None OSETUP_RPMDISTRO/enableUpgrade=none:None OVESETUP_PROVISIONING/postgresProvisioningEnabled=bool:True OVESETUP_APACHE/configureRootRedirection=bool:True OVESETUP_APACHE/configureSsl=bool:True OVESETUP_DB/secured=bool:False OVESETUP_DB/fixDbConfiguration=none:None OVESETUP_DB/user=str:engine_20180321121404 OVESETUP_DB/dumper=str:pg_custom OVESETUP_DB/database=str:engine_20180321121404 OVESETUP_DB/fixDbViolations=none:None OVESETUP_DB/engineVacuumFull=none:None OVESETUP_DB/host=str:localhost OVESETUP_DB/port=int:5432 OVESETUP_DB/filter=none:None OVESETUP_DB/restoreJobs=int:2 OVESETUP_DB/securedHostValidation=bool:False OVESETUP_ENGINE_CORE/enable=bool:True OVESETUP_CORE/engineStop=none:None OVESETUP_SYSTEM/memCheckEnabled=bool:True OVESETUP_SYSTEM/nfsConfigEnabled=none:None OVESETUP_PKI/organization=str:sige.uci.cu OVESETUP_PKI/renew=none:None OVESETUP_CONFIG/isoDomainName=none:None OVESETUP_CONFIG/engineHeapMax=str:1024M OVESETUP_CONFIG/ignoreVdsgroupInNotifier=none:None OVESETUP_CONFIG/adminPassword=str:Ins03dmesa OVESETUP_CONFIG/isoDomainACL=none:None OVESETUP_CONFIG/isoDomainMountPoint=none:None OVESETUP_ENGINE_CONFIG/fqdn=str:ovirt.sige.uci.cu OVESETUP_CONFIG/engineDbBackupDir=str:/var/lib/ovirt-engine/backups OVESETUP_CONFIG/engineHeapMin=str:1024M OVESETUP_OVN/ovirtProviderOvn=bool:True OVESETUP_OVN/ovirtProviderOvnUser=str:admin at internal OVESETUP_OVN/ovirtProviderOvnPassword=str:Ins03dmesa OVESETUP_CONFIG/websocketProxyConfig=bool:True OVESETUP_DWH_CORE/enable=bool:True OVESETUP_DWH_CONFIG/scale=str:1 OVESETUP_DWH_CONFIG/dwhDbBackupDir=str:/var/lib/ovirt-engine-dwh/backups OVESETUP_DWH_DB/secured=bool:False OVESETUP_DWH_DB/restoreBackupLate=bool:True OVESETUP_DWH_DB/disconnectExistingDwh=none:None OVESETUP_DWH_DB/host=str:localhost OVESETUP_DWH_DB/user=str:ovirt_engine_history_20180321121409 OVESETUP_DWH_DB/dumper=str:pg_custom OVESETUP_DWH_DB/database=str:ovirt_engine_history_20180321121409 OVESETUP_DWH_DB/performBackup=none:None OVESETUP_DWH_DB/port=int:5432 OVESETUP_DWH_DB/filter=none:None OVESETUP_DWH_DB/restoreJobs=int:2 OVESETUP_DWH_DB/securedHostValidation=bool:False OVESETUP_DB/dwhVacuumFull=none:None OVESETUP_DWH_PROVISIONING/postgresProvisioningEnabled=bool:True OVESETUP_CONFIG/imageioProxyConfig=bool:True OVESETUP_VMCONSOLE_PROXY_CONFIG/vmconsoleProxyConfig=bool:True UCIENCIA 2018: III Conferencia Cient?fica Internacional de la Universidad de las Ciencias Inform?ticas. 
Del 24-26 de septiembre, 2018 http://uciencia.uci.cu http://eventos.uci.cu From roupas_zois at hotmail.com Tue Mar 20 22:08:49 2018 From: roupas_zois at hotmail.com (zois roupas) Date: Tue, 20 Mar 2018 22:08:49 +0000 Subject: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a In-Reply-To: References: , Message-ID: Is this a safe procedure? I mean i only have this host in my cluster, what will happen at the vm's that are assigned to the host? Thanks again ________________________________ From: Michael Burman Sent: Tuesday, March 20, 2018 4:10 PM To: zois roupas Cc: Ales Musil; users Subject: Re: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a Then you need to remove the host from engine, change the IP manually on the host, via ifcfg-file, restart network service and install the host again via the new IP address. On Tue, Mar 20, 2018 at 2:50 PM, zois roupas > wrote: Hi again all, "Unless i miss understood you here, do you use a different IP address when switching to static or the same IP that you got from dhcp? if yes, then this is another flow.." To answer your question Michael , i'm trying to configure a different ip outside of my dhcp pool. The dhcp ip is 10.0.0.245 from the range 10.0.0.245-10.0.0.250 and i want to configure the ip 10.0.0.9 as the hosts ip "One thing to note if you are changing the IP to different one that was assigned by DHCP you should uncheck "Verify connectivity between Host and Engine"" Ales, i also tried to follow your advise and uncheck the "Verify connectivity between Host and Engine" as proposed. Again the same results, it keeps reverting to previous dhcp ip I will extract the vdsm log and i'll get back to you, in the meanwhile this is the error that i see after the assignment of the static ip in the log 2018-03-20 14:16:57,576+0200 ERROR (monitor/38f4464) [storage.Monitor] Error checking domain 38f4464b-74b9-4468-891b-03cd65d72fec (monitor:424) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 405, in _checkDomainStatus self.domain.selftest() File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 688, in selftest self.oop.os.statvfs(self.domaindir) File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py", line 243, in statvfs return self._iop.statvfs(path) File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 488, in statvfs resdict = self._sendCommand("statvfs", {"path": path}, self.timeout) File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 455, in _sendCommand raise Timeout(os.strerror(errno.ETIMEDOUT)) Timeout: Connection timed out Best Regards Zois ________________________________ From: Ales Musil > Sent: Tuesday, March 20, 2018 11:28 AM To: Michael Burman Cc: zois roupas; users Subject: Re: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a One thing to note if you are changing the IP to different one that was assigned by DHCP you should uncheck "Verify connectivity between Host and Engine". This makes sure that the engine won't lost connectivity and in case of switching IP it happens. On Tue, Mar 20, 2018 at 10:15 AM, Michael Burman > wrote: Indeed very odd, this shouldn't behave this way, just tested it my self and it is working as expected. Unless i miss understood you here, do you use a different IP address when switching to static or the same IP that you got from dhcp? if yes, then this is another flow.. Can you please share the vdsm version and vdsm log with us? Edy, any idea what can cause this? 
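For anyone scripting this change instead of clicking through the webadmin UI, the same attachment edit can be expressed with the Python SDK. The sketch below is untested; the host name 'myhost', the NIC name 'em1', the engine URL and the addresses are placeholder assumptions, not values from this thread. Passing check_connectivity=False mirrors unchecking "Verify connectivity between Host and Engine", and commit_net_config() saves the new setup on the host so it persists:

#!/usr/bin/env python
# Rough sketch (untested): switch a host NIC's ovirtmgmt attachment to a
# static IP through the Python SDK. Host name, NIC name, URL and addresses
# below are placeholders, adjust them to your setup.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)
hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=myhost')[0]
host_service = hosts_service.host_service(host.id)
host_service.setup_networks(
    modified_network_attachments=[
        types.NetworkAttachment(
            network=types.Network(name='ovirtmgmt'),
            host_nic=types.HostNic(name='em1'),
            ip_address_assignments=[
                types.IpAddressAssignment(
                    assignment_method=types.BootProtocol.STATIC,
                    ip=types.Ip(
                        address='10.0.0.9',
                        netmask='255.255.255.0',
                        gateway='10.0.0.1',
                    ),
                ),
            ],
        ),
    ],
    # Equivalent of unchecking "Verify connectivity between Host and Engine":
    check_connectivity=False,
)
# Persist the configuration on the host so it survives a reboot:
host_service.commit_net_config()
connection.close()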
On Tue, Mar 20, 2018 at 11:10 AM, zois roupas > wrote: Hi Michael and thanks a lot for the time Great step by step instructions but something strange is happening while trying to change to static ip. I tried to do the change while the host was in maintenance mode and in activate mode but again after some minutes the system reverts to the ip that dhcp is serving! What am i missing here? Do you have any ideas? Best Regards Zois ________________________________ From: Michael Burman > Sent: Tuesday, March 20, 2018 8:46 AM To: zois roupas Cc: users at ovirt.org Subject: Re: [ovirt-users] Change ovirtmgmt ip from dhcp to static in a Hello Zois, It pretty easy to do, via the webadmin UI , go to Hosts main tab > Choose host > go to 'Network Interfaces' sub tab > Press the 'Setup Host Networks' button > press the pencil icon on your management network > and choose Static IP > press OK and OK to approve the operation. - Note that in some cases, specially if this is a SPM host you will loose connectivity to host for few seconds and host may go to non-responsive state, on a non-SPM host usually this woks without any specific issues. - If the spoken host is a SPM host, I recommend to set it first to maintenance mode, do the switch and then activate. For non-SPM host this will work fine as well when the host is UP. Cheers) On Mon, Mar 19, 2018 at 12:15 PM, zois roupas > wrote: Hello everyone I've made a rookie mistake by installing ovirt 4.2 in centos 7 with dhcp instead of a static ip configuration. Both engine and host are in the same machine cause of limited resources and i was so happy that everything worked so well that i kept configuring and installing vm's ,adding local and nfs storage and setting up the backup! As you understand i must change the configuration to static ip and i can't find any guide describing the correct procedure. Is there an official guide to change configuration without causing any trouble? I've found this thread http://lists.ovirt.org/pipermail/users/2014-May/024432.html but this is for a hosted engine and doesn't help when both engine and host are in the same machine Thanx in advance Best Regards Zois _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman [https://www.redhat.com/files/brand/email/sig-redhat.png] -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman [https://www.redhat.com/files/brand/email/sig-redhat.png] _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- ALES MUSIL INTERN - rhv network Red Hat EMEA amusil at redhat.com IM: amusil [https://www.redhat.com/files/brand/email/sig-redhat.png] -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman [https://www.redhat.com/files/brand/email/sig-redhat.png] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ykaul at redhat.com Wed Mar 21 19:41:35 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Wed, 21 Mar 2018 19:41:35 +0000 Subject: [ovirt-users] Ovirt nodes NFS connection In-Reply-To: References: Message-ID: On Wed, Mar 21, 2018 at 12:41 PM, Tal Bar-Or wrote: > Hello All, > > I am about to deploy a new Ovirt platform, the platform will consist 4 > Ovirt nodes including management, all servers nodes and storage will have > the following config: > > *nodes server* > 4x10G ports network cards > 2x10G will be used for VM network. > 2x10G will be used for storage connection > 2x1Ge 1xGe for nodes management > > > *Storage *4x10G ports network cards > 3 x10G for NFS storage mount Ovirt nodes > > Now given above network configuration layout, what is best practices in > terms of nodes for storage NFS connection, throughput and path resilience > suggested to use > First option each node 2x 10G lacp and on storage side 3x10G lacp? > I'm not sure how you'd get more throughout than you can get in a single physical link. You will get redundancy. Of course, on the storage side you might benefit from multiple bonded interfaces. > The second option creates 3 VLAN's assign each node on that 3 VLAN's > across 2 nic, and on storage, side assigns 3 nice across 3 VLANs? > Interesting - but I assume it'll still stick to a single physical link. Y. Thanks > > > > > > -- > Tal Bar-or > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckozleriii at gmail.com Wed Mar 21 20:37:47 2018 From: ckozleriii at gmail.com (Charles Kozler) Date: Wed, 21 Mar 2018 16:37:47 -0400 Subject: [ovirt-users] Juniper vSRX Cluster on oVirt/RHEV Message-ID: Hi All - Recently did this and thought it would be worth documenting. I couldnt find any solid information on vsrx with kvm outside of flat KVM. This outlines some of the things I hit along the way and how to fix. This is my one small way of giving back to such an incredible open source tool https://ckozler.net/vsrx-cluster-on-ovirtrhev/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.vaye at province-sud.nc Wed Mar 21 20:50:57 2018 From: nicolas.vaye at province-sud.nc (Nicolas Vaye) Date: Wed, 21 Mar 2018 20:50:57 +0000 Subject: [ovirt-users] how update the network name of a vmnic with API SDK python ? In-Reply-To: <20180321081938.2fd06c8d@t460p> References: <1521610222.1710.85.camel@province-sud.nc> <20180321081938.2fd06c8d@t460p> Message-ID: <1521665454.1710.117.camel@province-sud.nc> Thank you Dominik, it works. -------- Message initial -------- Date: Wed, 21 Mar 2018 08:19:38 +0100 Objet: Re: [ovirt-users] how update the network name of a vmnic with API SDK python ? Cc: users at ovirt.org > ?: Nicolas Vaye > De: Dominik Holler > On Wed, 21 Mar 2018 05:30:25 +0000 Nicolas Vaye > wrote: Hi, i want to change the network name of the existing nic for a VM with python SDK API ? Can i have some help please ? 
the VM name is testnico the nic name is nic1 the new network name is vlan_NEW and here is the source file written by me (which doesn't work) : #!/usr/bin/env python # -*- coding: utf-8 -*- import logging import time import ovirtsdk4 as sdk import ovirtsdk4.types as types logging.basicConfig(level=logging.DEBUG, filename='example.log') # This example will connect to the server and start a virtual machine # with cloud-init, in order to automatically configure the network and # the password of the `root` user. # Create the connection to the server: connection = sdk.Connection( url='https://ocenter.province-sud.prod/ovirt-engine/api', username='admin at internal', password='xxxx', ca_file='CA_ocenter.pem', debug=True, log=logging.getLogger(), ) # Find the virtual machine: vms_service = connection.system_service().vms_service() vm = vms_service.list(search = 'name=testnico')[0] # Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # In order to specify the network that the new interface will be # connected to we need to specify the identifier of the virtual network # interface profile, so we need to find it: profiles_service = connection.system_service().vnic_profiles_service() profile_id = None for profile in profiles_service.list(): print "profile "+profile.name+","+profile.id if profile.name == 'vlan_NEW': profile_id = profile.id break # Locate the service that manages the network interface cards of the # virtual machine: nics_service = vm_service.nics_service() #print nics_service # Find the nic1 of the VM for nic in nics_service.list(): print "nic "+nic.name+","+nic.id+','+nic.vnic_profile.id if nic.name == 'nic1': nic_service = nics_service.nic_service(nic.id) break print "nic_service nic1 ==>"+str(nic_service) #pprint(vars(nic_service.network_filter_parameters_service().parameter_service())) #nic_service.vnic_profile.id=profile_id #nic_service.update() nic_service.update( vnic_profile=types.VnicProfile( id=profile_id, ) ) nic_service.update( types.Nic( vnic_profile=types.VnicProfile( id=profile_id, ) ) ) # Close the connection to the server: connection.close() The result is : Traceback (most recent call last): File "start_vm_with_cloud_init.py", line 85, in id=profile_id, TypeError: update() got an unexpected keyword argument 'vnic_profile' How can i do ? update() expects a parameter of type types.Nic, which has the parameter vnic_profile. Thanks. Nicolas VAYE _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users From nicolas.vaye at province-sud.nc Wed Mar 21 21:12:07 2018 From: nicolas.vaye at province-sud.nc (Nicolas Vaye) Date: Wed, 21 Mar 2018 21:12:07 +0000 Subject: [ovirt-users] VM guest agent In-Reply-To: <20180321113408.53ba8749@fiorina> References: <1520808989.18402.58.camel@province-sud.nc> <20180321113408.53ba8749@fiorina> Message-ID: <1521666724.1710.120.camel@province-sud.nc> Hi Tomas, for my RHEL 6.5, i have installed ovirt-guest-agent 1.0.12-2.el6.noarch. Thanks. -------- Message initial -------- Date: Wed, 21 Mar 2018 11:34:08 +0100 Objet: Re: [ovirt-users] VM guest agent Cc: users at ovirt.org > ?: Nicolas Vaye > De: Tom?? Golembiovsk? > Hi, On Sun, 11 Mar 2018 22:56:32 +0000 Nicolas Vaye > wrote: Hello, i have installed one oVirt platform with 2 node and 1 HE version 4.2.1.7-1 It seem to work fine, but i would like more information on the guest agent. 
For the HE, the guest agent seem to be OK, on this vm i 've spotted that the ovirt-guest-agent and qemu-guest-agent are installed. I have 2 VM, 1 debian 9 and 1 RHEL 6.5. I've tried to install the same service on each VM, but the result is the same : no info about IP, fqdn, or app installed for these vm, and there is a orange ! for each vm on the web ui (indicate that i need to install latest guest agent) . What version of the guest agent do you have installed on RHEL 6.5? Tomas I have tried different test with spice-vdagent, or ovirt-guest-agent or qemu-guest-agent but no way. ovirt-guest-agent doesn't start on debian 9 and RHEL 6.5 : MainThread::INFO::2018-03-11 22:46:02,984::ovirt-guest-agent::59::root::Starting oVirt guest agentMainThread::ERROR::2018-03-11 22:46:02,986::ovirt-guest-agent::141::root::Unhandled exception in oVirt guest agent!Traceback (most recent call last): File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 135, in agent.run(daemon, pidfile) File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 65, in run self.agent = LinuxVdsAgent(config) File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 472, in __init__ AgentLogicBase.__init__(self, config) File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 188, in __init__ self.vio = VirtIoChannel(config.get("virtio", "device")) File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 153, in __init__ self._stream = VirtIoStream(vport_name) File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 134, in __init__ self._vport = os.open(vport_name, os.O_RDWR)OSError: [Errno 2] No such file or directory: '/dev/virtio-ports/com.redhat.rhevm.vdsm' Can i have help for this problem ? Thanks. Nicolas VAYE DSI - Noum?a NEW CALEDONIA _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users From dlandgra at redhat.com Wed Mar 21 21:14:21 2018 From: dlandgra at redhat.com (Douglas Landgraf) Date: Wed, 21 Mar 2018 17:14:21 -0400 Subject: [ovirt-users] ovirt do not start postgresql In-Reply-To: References: <3abe88f1-a0bf-5b9d-fae0-03465716531a@uci.cu> <896c0853-4d6c-854b-896f-a9982b0e77a6@uci.cu> <9810e78d-8781-6c98-054d-7db851fab669@uci.cu> Message-ID: Hi Marcos, On Wed, Mar 21, 2018 at 2:29 PM, Marcos Michel Martinez Perez wrote: > # action=setup > [environment:default] > OVESETUP_DIALOG/confirmSettings=bool:True > OVESETUP_CONFIG/applicationMode=str:both > OVESETUP_CONFIG/remoteEngineSetupStyle=none:None > OVESETUP_CONFIG/sanWipeAfterDelete=bool:False > OVESETUP_CONFIG/storageIsLocal=bool:False > OVESETUP_CONFIG/firewallManager=none:None > OVESETUP_CONFIG/remoteEngineHostRootPassword=none:None > OVESETUP_CONFIG/firewallChangesReview=none:None > OVESETUP_CONFIG/updateFirewall=bool:False > OVESETUP_CONFIG/remoteEngineHostSshPort=none:None > OVESETUP_CONFIG/fqdn=str:ovirt.sige.uci.cu > OVESETUP_CONFIG/storageType=none:None > OSETUP_RPMDISTRO/requireRollback=none:None > OSETUP_RPMDISTRO/enableUpgrade=none:None > OVESETUP_PROVISIONING/postgresProvisioningEnabled=bool:True > OVESETUP_APACHE/configureRootRedirection=bool:True > OVESETUP_APACHE/configureSsl=bool:True > OVESETUP_DB/secured=bool:False > OVESETUP_DB/fixDbConfiguration=none:None > OVESETUP_DB/user=str:engine_20180321121404 > OVESETUP_DB/dumper=str:pg_custom > OVESETUP_DB/database=str:engine_20180321121404 > OVESETUP_DB/fixDbViolations=none:None > OVESETUP_DB/engineVacuumFull=none:None > OVESETUP_DB/host=str:localhost > OVESETUP_DB/port=int:5432 > 
OVESETUP_DB/filter=none:None > OVESETUP_DB/restoreJobs=int:2 > OVESETUP_DB/securedHostValidation=bool:False > OVESETUP_ENGINE_CORE/enable=bool:True > OVESETUP_CORE/engineStop=none:None > OVESETUP_SYSTEM/memCheckEnabled=bool:True > OVESETUP_SYSTEM/nfsConfigEnabled=none:None > OVESETUP_PKI/organization=str:sige.uci.cu > OVESETUP_PKI/renew=none:None > OVESETUP_CONFIG/isoDomainName=none:None > OVESETUP_CONFIG/engineHeapMax=str:1024M > OVESETUP_CONFIG/ignoreVdsgroupInNotifier=none:None > OVESETUP_CONFIG/adminPassword=str:Ins03dmesa > OVESETUP_CONFIG/isoDomainACL=none:None > OVESETUP_CONFIG/isoDomainMountPoint=none:None > OVESETUP_ENGINE_CONFIG/fqdn=str:ovirt.sige.uci.cu > OVESETUP_CONFIG/engineDbBackupDir=str:/var/lib/ovirt-engine/backups > OVESETUP_CONFIG/engineHeapMin=str:1024M > OVESETUP_OVN/ovirtProviderOvn=bool:True > OVESETUP_OVN/ovirtProviderOvnUser=str:admin at internal > OVESETUP_OVN/ovirtProviderOvnPassword=str:Ins03dmesa > OVESETUP_CONFIG/websocketProxyConfig=bool:True > OVESETUP_DWH_CORE/enable=bool:True > OVESETUP_DWH_CONFIG/scale=str:1 > OVESETUP_DWH_CONFIG/dwhDbBackupDir=str:/var/lib/ovirt-engine-dwh/backups > OVESETUP_DWH_DB/secured=bool:False > OVESETUP_DWH_DB/restoreBackupLate=bool:True > OVESETUP_DWH_DB/disconnectExistingDwh=none:None > OVESETUP_DWH_DB/host=str:localhost > OVESETUP_DWH_DB/user=str:ovirt_engine_history_20180321121409 > OVESETUP_DWH_DB/dumper=str:pg_custom > OVESETUP_DWH_DB/database=str:ovirt_engine_history_20180321121409 > OVESETUP_DWH_DB/performBackup=none:None > OVESETUP_DWH_DB/port=int:5432 > OVESETUP_DWH_DB/filter=none:None > OVESETUP_DWH_DB/restoreJobs=int:2 > OVESETUP_DWH_DB/securedHostValidation=bool:False > OVESETUP_DB/dwhVacuumFull=none:None > OVESETUP_DWH_PROVISIONING/postgresProvisioningEnabled=bool:True > OVESETUP_CONFIG/imageioProxyConfig=bool:True > OVESETUP_VMCONSOLE_PROXY_CONFIG/vmconsoleProxyConfig=bool:True > > UCIENCIA 2018: III Conferencia Cient?fica Internacional de la Universidad de > las Ciencias Inform?ticas. Del 24-26 de septiembre, 2018 > http://uciencia.uci.cu http://eventos.uci.cu > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users As Didi mentioned, we are looking for errors in the below files/dir. Do you mind to take a look and share the error message? /var/log/ovirt-engine/setup/* /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_log /var/log/messages -- Cheers Douglas From jlawrence at squaretrade.com Wed Mar 21 23:39:38 2018 From: jlawrence at squaretrade.com (Jamie Lawrence) Date: Wed, 21 Mar 2018 16:39:38 -0700 Subject: [ovirt-users] Host down/activation loop Message-ID: Hello, Have an issue that feels sanlock related, but I can't get sorted with our installation. This is 4.2.1, hosted engine. One of our hosts is stuck in a loop. It: - gets a VDSM GetStatsVDS timeout, is marked as down, - throws a warning about not being fenced (because that's not enabled yet, because of this problem). - and is set up Up about a minute later. This repeats every 4 minutes and 20 seconds. The hosted engine is running on the host that is stuck like this, and it doesn't appear to get in the way of creating new VMs or other operations, but obviously I can't use fencing, which is a big part of the point of running Ovirt in the first place. I tried setting global maintenance and running hosted-engine --reinitialize-lockspace, which (a) took nearly exactly 2 minutes to run, making me think something timed out, (b) exited with rc 0, and (c) didn't fix the problem. 
Anyone have an idea of how to fix this? -j - - details - - I still can't quite figure out how to interpret what sanlock says, but the -1s look like wrongness. [sc5-ovirt-1]# sanlock client status daemon bedae69e-03cc-49f8-88f4-9674a85a3185.sc5-ovirt- p -1 helper p -1 listener p 122268 HostedEngine p -1 status s 1aabcd3a-3fd3-4902-b92e-17beaf8fe3fd:1:/rhev/data-center/mnt/glusterSD/172.16.0.151\:_sc5-images/1aabcd3a-3fd3-4902-b92e-17beaf8fe3fd/dom_md/ids:0 s b41eb20a-eafb-481b-9a50-a135cf42b15e:1:/rhev/data-center/mnt/glusterSD/sc5-gluster-10g-1\:_sc5-ovirt__engine/b41eb20a-eafb-481b-9a50-a135cf42b15e/dom_md/ids:0 r b41eb20a-eafb-481b-9a50-a135cf42b15e:8f0c9f7a-ae6a-476e-b6f3-a830dcb79e87:/rhev/data-center/mnt/glusterSD/172.16.0.153\:_sc5-ovirt__engine/b41eb20a-eafb-481b-9a50-a135cf42b15e/images/a9d01d59-f146-47e5-b514-d10f8867678e/8f0c9f7a-ae6a-476e-b6f3-a830dcb79e87.lease:0:5 p 122268 engine.log: 2018-03-21 16:09:26,081-07 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM sc5-ovirt-1 command GetStatsVDS failed: Message timeout which can be caused by communication issues 2018-03-21 16:09:26,081-07 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Command 'GetStatsVDSCommand(HostName = sc5-ovirt-1, VdsIdAndVdsVDSCommandParametersBase:{hostId='be3517e0-f79d-464c-8169-f786d13ac287', vds='Host[sc5-ovirt-1,be3517e0-f79d-464c-8169-f786d13ac287]'})' execution failed: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues 2018-03-21 16:09:26,081-07 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Failed getting vds stats, host='sc5-ovirt-1'(be3517e0-f79d-464c-8169-f786d13ac287): org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues 2018-03-21 16:09:26,081-07 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Failure to refresh host 'sc5-ovirt-1' runtime info: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues 2018-03-21 16:09:26,081-07 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Failed to refresh VDS, network error, continuing, vds='sc5-ovirt-1'(be3517e0-f79d-464c-8169-f786d13ac287): VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues 2018-03-21 16:09:26,081-07 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedThreadFactory-engine-Thread-102682) [] Host 'sc5-ovirt-1' is not responding. 2018-03-21 16:09:26,088-07 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-102682) [] EVENT_ID: VDS_HOST_NOT_RESPONDING(9,027), Host sc5-ovirt-1 is not responding. Host cannot be fenced automatically because power management for the host is disabled. 
2018-03-21 16:09:27,070-07 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to sc5-ovirt-1/10.181.26.129 2018-03-21 16:09:27,918-07 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler4) [493fb316] START, GlusterServersListVDSCommand(HostName = sc5-gluster-2, VdsIdVDSCommandParametersBase:{hostId='797cbf42-6553-4a75-b8b1-93b2adbbc0db'}), log id: 6afccc01 2018-03-21 16:09:28,579-07 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler4) [493fb316] FINISH, GlusterServersListVDSCommand, return: [192.168.122.1/24:CONNECTED, sc5-gluster-3:CONNECTED, sc5-gluster-10g-1:CONNECTED], log id: 6afccc01 2018-03-21 16:09:28,606-07 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler4) [493fb316] START, GlusterVolumesListVDSCommand(HostName = sc5-gluster-2, GlusterVolumesListVDSParameters:{hostId='797cbf42-6553-4a75-b8b1-93b2adbbc0db'}), log id: 44e90100 2018-03-21 16:09:29,015-07 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler4) [493fb316] FINISH, GlusterVolumesListVDSCommand, return: {6fe949b5-894a-4843-b3e4-af81545574dc=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 140a4a60, bc29ba89-8fc0-494d-9fe5-bc7b34396b65=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 29637467}, log id: 44e90100 2018-03-21 16:09:29,686-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-40) [] START, GetHardwareInfoVDSCommand(HostName = sc5-ovirt-1, VdsIdAndVdsVDSCommandParametersBase:{hostId='be3517e0-f79d-464c-8169-f786d13ac287', vds='Host[sc5-ovirt-1,be3517e0-f79d-464c-8169-f786d13ac287]'}), log id: 6b1cb74b 2018-03-21 16:09:29,692-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-40) [] FINISH, GetHardwareInfoVDSCommand, log id: 6b1cb74b 2018-03-21 16:09:29,900-07 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-40) [576fddcc] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: be3517e0-f79d-464c-8169-f786d13ac287 Type: VDS 2018-03-21 16:09:29,944-07 INFO [org.ovirt.engine.core.bll.InitVdsOnUpCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-40) [26c5f844] Running command: InitVdsOnUpCommand internal: true. Entities affected : ID: c4e2ca40-1e72-11e8-beac-00163e0994d8 Type: StoragePool 2018-03-21 16:09:29,977-07 INFO [org.ovirt.engine.core.bll.storage.pool.ConnectHostToStoragePoolServersCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-40) [41e6da49] Running command: ConnectHostToStoragePoolServersCommand internal: true. 
Entities affected : ID: c4e2ca40-1e72-11e8-beac-00163e0994d8 Type: StoragePool
2018-03-21 16:09:30,002-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-40) [41e6da49] START, ConnectStorageServerVDSCommand(HostName = sc5-ovirt-1, StorageServerConnectionManagementVDSParameters:{hostId='be3517e0-f79d-464c-8169-f786d13ac287', storagePoolId='c4e2ca40-1e72-11e8-beac-00163e0994d8', storageType='GLUSTERFS', connectionList='[StorageServerConnections:{id='0e2e93f1-3904-4d70-82aa-16bcc83ea314', connection='172.16.0.153:/sc5-ovirt_engine', iqn='null', vfsType='glusterfs', mountOptions='backup-volfile-servers=172.16.0.152:172.16.0.151', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='26c9dbd8-f550-4b7a-9f84-3e905f1a00db', connection='172.16.0.151:/sc5-images', iqn='null', vfsType='glusterfs', mountOptions='backup-volfile-servers=172.16.0.152:172.16.0.153', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: acd504a
2018-03-21 16:09:30,099-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-40) [41e6da49] FINISH, ConnectStorageServerVDSCommand, return: {26c9dbd8-f550-4b7a-9f84-3e905f1a00db=0, 0e2e93f1-3904-4d70-82aa-16bcc83ea314=0}, log id: acd504a
2018-03-21 16:09:30,107-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-40) [41e6da49] START, ConnectStorageServerVDSCommand(HostName = sc5-ovirt-1, StorageServerConnectionManagementVDSParameters:{hostId='be3517e0-f79d-464c-8169-f786d13ac287', storagePoolId='c4e2ca40-1e72-11e8-beac-00163e0994d8', storageType='NFS', connectionList='[StorageServerConnections:{id='2239cb49-a8bb-49ee-9a5a-90d72c4602d0', connection='sc5-archive-10g-1:/var/ovirt/ovirt_iso_new', iqn='null', vfsType='null', mountOptions='null', nfsVersion='AUTO', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 35528d0f
2018-03-21 16:09:30,099-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-40) [41e6da49] FINISH, ConnectStorageServerVDSCommand, return: {26c9dbd8-f550-4b7a-9f84-3e905f1a00db=0, 0e2e93f1-3904-4d70-82aa-16bcc83ea314=0}, log id: acd504a
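One rough way to watch this down/up cycle from outside is to poll the engine's event log over the Python SDK. The sketch below is untested; the engine URL and credentials are placeholders, and the search filter is an assumption (it uses the same syntax as the webadmin search bar, so the host filter may need adjusting):

#!/usr/bin/env python
# Rough sketch (untested): print new engine events for the flapping host,
# to time the not-responding/up cycle. URL and credentials are placeholders.
import time
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)
events_service = connection.system_service().events_service()
last_seen = 0
while True:
    # Assumed search filter; same syntax as the webadmin search bar.
    for event in events_service.list(search='host.name=sc5-ovirt-1', max=20):
        if int(event.id) > last_seen:
            last_seen = int(event.id)
            print('%s %s %s' % (event.time, event.severity, event.description))
    time.sleep(10)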
From sdhuang32 at gmail.com Thu Mar 22 03:40:58 2018
From: sdhuang32 at gmail.com (Shao-Da Huang)
Date: Thu, 22 Mar 2018 11:40:58 +0800
Subject: [ovirt-users] How to generate Swagger specification of the oVirt API?
Message-ID: 

Hi Juan,

I saw the discussion in users-list: http://lists.ovirt.org/pipermail/users/2017-April/081618.html and I'm curious about how to generate the Swagger specification of the existing oVirt API (or maybe it can be generated by ovirt-engine-api-model?). Could you give me some advice on the generating tools, or maybe some points to change during the procedure of building ovirt-engine?

Michael
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From recreationh at gmail.com Thu Mar 22 03:41:48 2018
From: recreationh at gmail.com (Terry hey)
Date: Thu, 22 Mar 2018 11:41:48 +0800
Subject: [ovirt-users] Any monitoring tool provided?
Message-ID: 

Dear all,
Now, we can just read how much storage is used and the CPU usage on the oVirt dashboard. But is there any monitoring tool for monitoring virtual machines from time to time?
If yes, could you guys give me the procedure? Regards Terry -------------- next part -------------- An HTML attachment was scrubbed... URL: From recreationh at gmail.com Thu Mar 22 03:47:08 2018 From: recreationh at gmail.com (Terry hey) Date: Thu, 22 Mar 2018 11:47:08 +0800 Subject: [ovirt-users] virtual machine actual size is not right Message-ID: Dear all, I would like to know the actual size that the virtual machine is used. I found that there is a field to show "actual size". But the number is not right. I entered the server and type df -h. The storage size of the server is about 30GB but the actual size field shows 60 GB. Regards Terry -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.vaye at province-sud.nc Thu Mar 22 07:09:33 2018 From: nicolas.vaye at province-sud.nc (Nicolas Vaye) Date: Thu, 22 Mar 2018 07:09:33 +0000 Subject: [ovirt-users] create a cloned virtual machine based on a template with SDK API python Message-ID: <1521702568.1710.131.camel@province-sud.nc> Hi, I want to create a cloned virtual machine based on a template with SDK API python and i don't find the parameter to indicate the clone action for the disk here is my code : #!/usr/bin/env python # -*- coding: utf-8 -*- # # Copyright (c) 2016 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # from pprint import pprint import logging import time import ovirtsdk4 as sdk import ovirtsdk4.types as types template_name='test_debian_9.4' template_version=1 cluster_name='nico-cluster' data_domain_name='OVIRT-TEST2' logging.basicConfig(level=logging.DEBUG, filename='example.log') # This example will connect to the server and start a virtual machine # with cloud-init, in order to automatically configure the network and # the password of the `root` user. # Create the connection to the server: connection = sdk.Connection( url='https://ocenter.province-sud.prod/ovirt-engine/api', username='admin at internal', password='admin', ca_file='CA_ocenter.pem', debug=True, log=logging.getLogger(), ) ################################## ############ TEMPLATE ############ ################################## # Get the reference to the root of the tree of services: system_service = connection.system_service() # Get the reference to the service that manages the storage domains: storage_domains_service = system_service.storage_domains_service() # Find the storage domain we want to be used for virtual machine disks: storage_domain = storage_domains_service.list(search='name='+data_domain_name)[0] # Get the reference to the service that manages the templates: templates_service = system_service.templates_service() # When a template has multiple versions they all have the same name, so # we need to explicitly find the one that has the version name or # version number that we want to use. In this case we want to use # version 1 of the template. 
templates = templates_service.list(search='name='+template_name) template_id = None for template in templates: if template.version.version_number == template_version: template_id = template.id break if template_id == None: print "ERREUR le template "+template_name+"en version "+str(template_version)+" n'a pas ?t? trouv?!!" # Find the template disk we want be created on specific storage domain # for our virtual machine: template_service = templates_service.template_service(template_id) disk_attachments = connection.follow_link(template_service.get().disk_attachments) print "disk_attachments=" + str(len(disk_attachments)) template_disk_attachments = [] for disk in disk_attachments: template_disk_attachments.append(types.DiskAttachment( disk=types.Disk( id=disk.id, format=types.DiskFormat.COW, storage_domains=[ types.StorageDomain( id=storage_domain.id, ), ], ), ) ) # Get the reference to the service that manages the virtual machines: vms_service = system_service.vms_service() # Add a new virtual machine explicitly indicating the identifier of the # template version that we want to use and indicating that template disk # should be created on specific storage domain for the virtual machine: vm = vms_service.add( types.Vm( name='myvm', cluster=types.Cluster( name=cluster_name ), stateless=False, type=types.VmType('server'), comment='based on template '+template_name+'en version '+str(template_version), template=types.Template( id=template_id ), disk_attachments=template_disk_attachments, ) ) # Get a reference to the service that manages the virtual machine that # was created in the previous step: vm_service = vms_service.vm_service(vm.id) # Wait till the virtual machine is down, which indicats that all the # disks have been created: while True: time.sleep(1) vm = vm_service.get() if vm.status == types.VmStatus.DOWN: break # Close the connection to the server: connection.close() If the data_domain_name is the same as the template data domain, then this script seem to create a vm but not with cloned disk. If the data_domain_name is NOT the same as the template data domain, then this script produce an error ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Cannot add VM. The selected Storage Domain does not contain the VM Template.]". HTTP response code is 400. So how to create a cloned virtual machine based on a template ? I think i'm looking for the same parameter on the web ui, "Storage Allocation" => Thin/Clone [cid:1521702568.14216.1.camel at province-sud.nc] Thanks. Nicolas VAYE -------------- next part -------------- A non-text attachment was scrubbed... Name: unknown-L44QGZ Type: image/png Size: 75487 bytes Desc: unknown-L44QGZ URL: From pbrilla at redhat.com Thu Mar 22 07:44:30 2018 From: pbrilla at redhat.com (Pavol Brilla) Date: Thu, 22 Mar 2018 08:44:30 +0100 Subject: [ovirt-users] virtual machine actual size is not right In-Reply-To: References: Message-ID: Hi just small clarification I entered the server and type df -h. - do you meant you run this command on host which is running mentioned VM? On Thu, Mar 22, 2018 at 4:47 AM, Terry hey wrote: > Dear all, > > I would like to know the actual size that the virtual machine is used. > I found that there is a field to show "actual size". But the number is not > right. > > I entered the server and type df -h. > The storage size of the server is about 30GB but the actual size field > shows 60 GB. 
> > Regards > Terry > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- PAVOL BRILLA RHV QUALITY ENGINEER, CLOUD Red Hat Czech Republic, Brno TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbonazzo at redhat.com Thu Mar 22 08:12:01 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Thu, 22 Mar 2018 09:12:01 +0100 Subject: [ovirt-users] ovirt do not start postgresql In-Reply-To: References: <3abe88f1-a0bf-5b9d-fae0-03465716531a@uci.cu> <896c0853-4d6c-854b-896f-a9982b0e77a6@uci.cu> <9810e78d-8781-6c98-054d-7db851fab669@uci.cu> Message-ID: > > As Didi mentioned, we are looking for errors in the below files/dir. > Do you mind to take a look and share the error message? > > /var/log/ovirt-engine/setup/* > /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_log > /var/log/messages > > > Didi, Eli, can you please have a look? Looking at the logs Marcos shared with me I see: 2018-03-21 15:02:30,373-0400 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.executeRaw:863 execute-result: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', ' -u', 'engine_20180321150206', '-d', 'engine_20180321150206', '-l', '/var/log/ovirt-engine/setup/ovirt-engine-setup-20180321150115-jjxbh3.log', '-c', 'apply'], rc=1 2018-03-21 15:02:30,374-0400 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:921 execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', '-u' , 'engine_20180321150206', '-d', 'engine_20180321150206', '-l', '/var/log/ovirt-engine/setup/ovirt-engine-setup-20180321150115-jjxbh3.log', '-c', 'apply'] stdout: Creating schema engine_20180321150206 at localhost:5432/engine_20180321150206 Creating fresh schema Creating tables... Creating functions... 
2018-03-21 15:02:30,374-0400 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:926 execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', '-u' , 'engine_20180321150206', '-d', 'engine_20180321150206', '-l', '/var/log/ovirt-engine/setup/ovirt-engine-setup-20180321150115-jjxbh3.log', '-c', 'apply'] stderr: psql:/usr/share/ovirt-engine/dbscripts/create_functions.sql:1095: ERROR: must be owner of function uuid_generate_v1 FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/create_functions.sql 2018-03-21 15:02:30,374-0400 ERROR otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema schema._misc:428 schema.sh: FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/create_functions .sql 2018-03-21 15:02:30,375-0400 DEBUG otopi.context context._executeMethod:143 method exception Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod method['method']() File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", line 430, in _misc raise RuntimeError(_('Engine schema refresh failed')) RuntimeError: Engine schema refresh failed 2018-03-21 15:02:30,376-0400 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Misc configuration': Engine schema refresh failed 2018-03-21 15:02:30,377-0400 DEBUG otopi.transaction transaction.abort:119 aborting 'DWH Engine database Transaction' 2018-03-21 15:02:30,377-0400 DEBUG otopi.transaction transaction.abort:119 aborting 'Database Transaction' A few lines above in the log I see: ********* QUERY ********** CREATE OR REPLACE FUNCTION uuid_generate_v1() RETURNS uuid STABLE AS $PROCEDURE$ DECLARE v_val BIGINT; v_4_1_part CHAR(4); v_4_2_part CHAR(4); v_4_3_part CHAR(4); v_8_part CHAR(8); v_12_part CHAR(12); v_4_part_max INT; BEGIN PERFORM setseed(random()); -- The only part we should use modulo is the 4 digit part, all the -- rest are really big numbers (i.e 16^8 - 1 and 16^12 - 1) -- The use of round(random() * 1000 is for getting a different id -- for DC/Cluster in different installations v_4_part_max = 65535;-- this is 16^4 -1 v_val := nextval('uuid_sequence'); v_4_1_part := lpad(to_hex((v_val + (round(random() * 1000))::BIGINT)), 4, '0'); v_4_2_part := lpad(to_hex((v_val + (round(random() * 1000))::BIGINT)), 4, '0'); v_4_3_part := lpad(to_hex((v_val + (round(random() * 1000))::BIGINT)), 4, '0'); -- generate this part using the clock timestamp v_8_part := lpad(to_hex(cast(FLOOR(EXTRACT(EPOCH FROM clock_timestamp())) as bigint)), 8 , '0'); v_12_part := lpad(to_hex((v_val + (round(random() * 1000))::BIGINT)), 12, '0'); RETURN v_8_part || v_4_1_part || v_4_2_part || v_4_3_part || v_12_part; END;$PROCEDURE$ LANGUAGE plpgsql; ************************** The log is OVESETUP_CORE/generatedByVersion=str:'4.2.1.7' -------------- next part -------------- An HTML attachment was scrubbed... URL: From didi at redhat.com Thu Mar 22 08:40:15 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Thu, 22 Mar 2018 10:40:15 +0200 Subject: [ovirt-users] ovirt do not start postgresql In-Reply-To: References: <3abe88f1-a0bf-5b9d-fae0-03465716531a@uci.cu> <896c0853-4d6c-854b-896f-a9982b0e77a6@uci.cu> <9810e78d-8781-6c98-054d-7db851fab669@uci.cu> Message-ID: Hi all, It's very hard to debug this without getting a full picture of the state of the machine, including logs, versions, flow, etc. Some comments inside. 
On Thu, Mar 22, 2018 at 10:12 AM, Sandro Bonazzola wrote: > >> >> As Didi mentioned, we are looking for errors in the below files/dir. >> Do you mind to take a look and share the error message? >> >> /var/log/ovirt-engine/setup/* >> /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_log >> /var/log/messages >> >> > > Didi, Eli, can you please have a look? > > > > Looking at the logs Marcos shared with me I see: > > 2018-03-21 15:02:30,373-0400 DEBUG > otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema > plugin.executeRaw:863 execute-result: > ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p', > '5432', ' > -u', 'engine_20180321150206', '-d', 'engine_20180321150206', '-l', The fact that your database name is 'engine_20180321150206' most likely means you have/had some problem, see also: https://bugzilla.redhat.com/show_bug.cgi?id=1259782 Please check if you have other databases on your machine, why this one was created, etc. Some information to understand this can be found in the setup logs. > '/var/log/ovirt-engine/setup/ovirt-engine-setup-20180321150115-jjxbh3.log', > '-c', 'apply'], rc=1 > 2018-03-21 15:02:30,374-0400 DEBUG > otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:921 > execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', > 'localhost', '-p', '5432', '-u' > , 'engine_20180321150206', '-d', 'engine_20180321150206', '-l', > '/var/log/ovirt-engine/setup/ovirt-engine-setup-20180321150115-jjxbh3.log', > '-c', 'apply'] stdout: > Creating schema engine_20180321150206 at localhost:5432/engine_20180321150206 > Creating fresh schema > Creating tables... > Creating functions... > > 2018-03-21 15:02:30,374-0400 DEBUG > otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:926 > execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', > 'localhost', '-p', '5432', '-u' > , 'engine_20180321150206', '-d', 'engine_20180321150206', '-l', > '/var/log/ovirt-engine/setup/ovirt-engine-setup-20180321150115-jjxbh3.log', > '-c', 'apply'] stderr: > psql:/usr/share/ovirt-engine/dbscripts/create_functions.sql:1095: ERROR: > must be owner of function uuid_generate_v1 > FATAL: Cannot execute sql command: > --file=/usr/share/ovirt-engine/dbscripts/create_functions.sql > > 2018-03-21 15:02:30,374-0400 ERROR > otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema schema._misc:428 > schema.sh: FATAL: Cannot execute sql command: > --file=/usr/share/ovirt-engine/dbscripts/create_functions > .sql > 2018-03-21 15:02:30,375-0400 DEBUG otopi.context context._executeMethod:143 > method exception > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in > _executeMethod > method['method']() > File > "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", > line 430, in _misc > raise RuntimeError(_('Engine schema refresh failed')) > RuntimeError: Engine schema refresh failed > 2018-03-21 15:02:30,376-0400 ERROR otopi.context context._executeMethod:152 > Failed to execute stage 'Misc configuration': Engine schema refresh failed > 2018-03-21 15:02:30,377-0400 DEBUG otopi.transaction transaction.abort:119 > aborting 'DWH Engine database Transaction' > 2018-03-21 15:02:30,377-0400 DEBUG otopi.transaction transaction.abort:119 > aborting 'Database Transaction' > > > A few lines above in the log I see: > > ********* QUERY ********** > CREATE OR REPLACE FUNCTION uuid_generate_v1() > RETURNS uuid STABLE > AS $PROCEDURE$ > DECLARE > v_val BIGINT; > 
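The database check suggested here can also be scripted. A rough, untested sketch, assuming the rh-postgresql95 software collection that oVirt 4.2 uses and root access on the engine machine:

#!/usr/bin/env python
# Rough sketch (untested): list PostgreSQL databases on the engine machine
# to spot leftover engine_<timestamp> databases. Assumes the rh-postgresql95
# software collection used by oVirt 4.2; run as root.
import subprocess

out = subprocess.check_output([
    'su', '-', 'postgres', '-c',
    'scl enable rh-postgresql95 -- psql -At -c "SELECT datname FROM pg_database;"',
]).decode()
for name in out.splitlines():
    print(name)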
v_4_1_part CHAR(4); > v_4_2_part CHAR(4); > v_4_3_part CHAR(4); > v_8_part CHAR(8); > v_12_part CHAR(12); > v_4_part_max INT; > BEGIN > PERFORM setseed(random()); > > -- The only part we should use modulo is the 4 digit part, all the > -- rest are really big numbers (i.e 16^8 - 1 and 16^12 - 1) > -- The use of round(random() * 1000 is for getting a different id > -- for DC/Cluster in different installations > v_4_part_max = 65535;-- this is 16^4 -1 > > v_val := nextval('uuid_sequence'); > v_4_1_part := lpad(to_hex((v_val + (round(random() * 1000))::BIGINT)), > 4, '0'); > v_4_2_part := lpad(to_hex((v_val + (round(random() * 1000))::BIGINT)), > 4, '0'); > v_4_3_part := lpad(to_hex((v_val + (round(random() * 1000))::BIGINT)), > 4, '0'); > > -- generate this part using the clock timestamp > v_8_part := lpad(to_hex(cast(FLOOR(EXTRACT(EPOCH FROM > clock_timestamp())) as bigint)), 8 , '0'); > > v_12_part := lpad(to_hex((v_val + (round(random() * 1000))::BIGINT)), > 12, '0'); > > RETURN v_8_part || v_4_1_part || v_4_2_part || v_4_3_part || v_12_part; > END;$PROCEDURE$ > LANGUAGE plpgsql; > ************************** This function was removed by: https://gerrit.ovirt.org/#/c/84832/ which should be included in 4.2.1 : https://bugzilla.redhat.com/show_bug.cgi?id=1515635 > > The log is OVESETUP_CORE/generatedByVersion=str:'4.2.1.7' So not sure how you still have it. I suggest to open a bug in bugzilla and attach a sosreport of the machine. You can use this link: https://bugzilla.redhat.com/enter_bug.cgi?alias=&assigned_to=sbonazzo%40redhat.com&attach_text=&blocked=&bug_file_loc=http%3A%2F%2F&bug_severity=unspecified&bug_status=NEW&cf_build_id=&cf_category=---&cf_clone_of=&cf_cloudforms_team=---&cf_compliance_control_group=---&cf_compliance_level=---&cf_crm=&cf_cust_facing=---&cf_devel_whiteboard=&cf_docs_score=&cf_documentation_action=---&cf_environment=&cf_internal_whiteboard=&cf_mount_type=---&cf_ovirt_team=Integration&cf_pm_score=&cf_regression_status=---&cf_story_points=---&cf_subsystem_team=---&cf_type=Bug&comment=Description%20of%20problem%3A%0D%0A%0D%0A%0D%0AVersion-Release%20number%20of%20selected%20component%20%28if%20applicable%29%3A%0D%0A%0D%0A%0D%0AHow%20reproducible%3A%0D%0A%0D%0A%0D%0ASteps%20to%20Reproduce%3A%0D%0A1.%0D%0A2.%0D%0A3.%0D%0A%0D%0AActual%20results%3A%0D%0A%0D%0A%0D%0AExpected%20results%3A%0D%0A%0D%0A%0D%0AAdditional%20info%3A%0D%0A&component=Setup.Engine&contenttypeentry=&contenttypemethod=autodetect&contenttypeselection=text%2Fplain&data=&deadline=&defined_cf_layered_products=&defined_cf_partner=&defined_groups=1&defined_rh_sub_component=0&dependson=&description=&docs_contact=&estimated_time=&external_bug_id_1=&external_id_1=0&flag_type-155=X&flag_type-16=X&flag_type-415=X&flag_type-817=X&flag_type-818=X&flag_type-819=X&flag_type-820=X&flag_type-821=X&flag_type-823=X&form_name=enter_bug&keywords=&maketemplate=Remember%20values%20as%20bookmarkable%20template&op_sys=Unspecified&priority=unspecified&product=ovirt-engine&qa_contact=pstehlik%40redhat.com&rep_platform=Unspecified&requestee_type-155=&requestee_type-16=&rh_sub_component=&short_desc=&status_whiteboard=&target_milestone=---&target_release=---&version=4.2.0 My guess for the reason of the failure: You updated the setup packages to 4.2.1.7, but the engine itself failed to update - either because you ran engine-setup with '--offline' or because you removed the repos, they were unavailable, something like that. 
Even if I am right, I'd still try to understand why it tries to use database engine_20180321150206, as there might be some other problems. Best regards, -- Didi From NasrumMinallah9 at hotmail.com Thu Mar 22 07:10:08 2018 From: NasrumMinallah9 at hotmail.com (Nasrum Minallah Manzoor) Date: Thu, 22 Mar 2018 07:10:08 +0000 Subject: [ovirt-users] Revised Query for moving Ovirt Engine... Message-ID: Hi Every one, To make it more clear I am revising my below query! I have installed 2 nodes in a cluster and ovirt engine is installed on separate machine on Vmware workstation. Now I want to move ovirt engine to one of my any nodes. What process should I go through. Need solution from experts! Regards, From: Nasrum Minallah Manzoor Sent: 22 March 2018 11:54 AM To: 'users at ovirt.org' Cc: 'junaid8756 at gmail.com' Subject: Query for moving Ovirt Engine... Hello Every one, I want to move ovirt engine from one of my machine to node installed on another machine! What steps I should take to accomplish the query! Help would be appreciated! Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From frolland at redhat.com Thu Mar 22 09:02:32 2018 From: frolland at redhat.com (Fred Rolland) Date: Thu, 22 Mar 2018 11:02:32 +0200 Subject: [ovirt-users] create a cloned virtual machine based on a template with SDK API python In-Reply-To: <1521702568.1710.131.camel@province-sud.nc> References: <1521702568.1710.131.camel@province-sud.nc> Message-ID: Hi Nicolas, You can find an example here: https://github.com/oVirt/ovirt-engine-sdk/blob/21f637345597729240f217cfe84fe2a2cf39a655/sdk/examples/add_independet_vm.py#L56 Regards, Fred On Thu, Mar 22, 2018 at 9:09 AM, Nicolas Vaye wrote: > Hi, > > I want to create a cloned virtual machine based on a template with SDK API > python and i don't find the parameter to indicate the clone action for the > disk > > here is my code : > > #!/usr/bin/env python > # -*- coding: utf-8 -*- > > # > # Copyright (c) 2016 Red Hat, Inc. > # > # Licensed under the Apache License, Version 2.0 (the "License"); > # you may not use this file except in compliance with the License. > # You may obtain a copy of the License at > # > # http://www.apache.org/licenses/LICENSE-2.0 > # > # Unless required by applicable law or agreed to in writing, software > # distributed under the License is distributed on an "AS IS" BASIS, > # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. > # See the License for the specific language governing permissions and > # limitations under the License. > # > from pprint import pprint > import logging > import time > > import ovirtsdk4 as sdk > import ovirtsdk4.types as types > > > > template_name='test_debian_9.4' > template_version=1 > cluster_name='nico-cluster' > data_domain_name='OVIRT-TEST2' > > > > logging.basicConfig(level=logging.DEBUG, filename='example.log') > > # This example will connect to the server and start a virtual machine > # with cloud-init, in order to automatically configure the network and > # the password of the `root` user. 
> > # Create the connection to the server: > connection = sdk.Connection( > url='https://ocenter.province-sud.prod/ovirt-engine/api', > username='admin at internal', > password='admin', > ca_file='CA_ocenter.pem', > debug=True, > log=logging.getLogger(), > ) > > > > ################################## > ############ TEMPLATE ############ > ################################## > > # Get the reference to the root of the tree of services: > system_service = connection.system_service() > > # Get the reference to the service that manages the storage domains: > storage_domains_service = system_service.storage_domains_service() > > # Find the storage domain we want to be used for virtual machine disks: > storage_domain = storage_domains_service.list(search='name='+data_domain_ > name)[0] > > > # Get the reference to the service that manages the templates: > templates_service = system_service.templates_service() > > # When a template has multiple versions they all have the same name, so > # we need to explicitly find the one that has the version name or > # version number that we want to use. In this case we want to use > # version 1 of the template. > templates = templates_service.list(search='name='+template_name) > template_id = None > for template in templates: > if template.version.version_number == template_version: > template_id = template.id > break > > if template_id == None: > print "ERREUR le template "+template_name+"en version > "+str(template_version)+" n'a pas ?t? trouv?!!" > > # Find the template disk we want be created on specific storage domain > # for our virtual machine: > template_service = templates_service.template_service(template_id) > disk_attachments = connection.follow_link(template_service.get().disk_ > attachments) > > print "disk_attachments=" + str(len(disk_attachments)) > > template_disk_attachments = [] > for disk in disk_attachments: > template_disk_attachments.append(types.DiskAttachment( > disk=types.Disk( > id=disk.id, > format=types.DiskFormat.COW, > storage_domains=[ > types.StorageDomain( > id=storage_domain.id, > ), > ], > ), > ) > ) > > > > # Get the reference to the service that manages the virtual machines: > vms_service = system_service.vms_service() > > # Add a new virtual machine explicitly indicating the identifier of the > # template version that we want to use and indicating that template disk > # should be created on specific storage domain for the virtual machine: > vm = vms_service.add( > types.Vm( > name='myvm', > cluster=types.Cluster( > name=cluster_name > ), > stateless=False, > type=types.VmType('server'), > comment='based on template '+template_name+'en version > '+str(template_version), > template=types.Template( > id=template_id > ), > disk_attachments=template_disk_attachments, > ) > ) > > > > # Get a reference to the service that manages the virtual machine that > # was created in the previous step: > vm_service = vms_service.vm_service(vm.id) > > # Wait till the virtual machine is down, which indicats that all the > # disks have been created: > while True: > time.sleep(1) > vm = vm_service.get() > if vm.status == types.VmStatus.DOWN: > break > > > # Close the connection to the server: > connection.close() > > > > > If the data_domain_name is the same as the template data domain, then this > script seem to create a vm but not with cloned disk. > > If the data_domain_name is NOT the same as the template data domain, then > this script produce an error > ovirtsdk4.Error: Fault reason is "Operation Failed". 
Fault detail is > "[Cannot add VM. The selected Storage Domain does not contain the VM > Template.]". HTTP response code is 400. > > So how to create a cloned virtual machine based on a template ? > > I think i'm looking for the same parameter on the web ui, "Storage > Allocation" => Thin/Clone > > [cid:1521702568.14216.1.camel at province-sud.nc] > > > Thanks. > > Nicolas VAYE > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Thu Mar 22 09:04:39 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 22 Mar 2018 11:04:39 +0200 Subject: [ovirt-users] Any monitoring tool provided? In-Reply-To: References: Message-ID: On Thu, Mar 22, 2018 at 5:41 AM, Terry hey wrote: > Dear all, > > Now, we can just read how many storage used, cpu usage on ovirt dashboard. > But is there any monitoring tool for monitoring virtual machine time to > time? > If yes, could you guys give me the procedure? > https://ovirt.org/blog/2017/12/ovirt-metrics-store/ might be helpful. Y. > > > Regards > Terry > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sabose at redhat.com Thu Mar 22 09:14:02 2018 From: sabose at redhat.com (Sahina Bose) Date: Thu, 22 Mar 2018 14:44:02 +0530 Subject: [ovirt-users] Adding host to hosted-engine /w gluster cluster. (On ovirt Node 4.2.1.1) In-Reply-To: References: <9352191a-76dd-13ed-463a-61033dc3fe6a@andrewswireless.net> Message-ID: On Wed, Mar 21, 2018 at 7:01 PM, Hanson Turner wrote: > Hi Sahina, > > On the fourth node, I've found /var/log/glusterfs/rhev-data- > center-mnt-glusterSD-ovirtnode1.core\:_engine.log ... is this the > engine.log you're referring to or do you want one from the hosted engine? > I was referring to oVirt's engine.log. Found under /var/log/ovirt-engine/engine.log I actually do want to go replica 5. Most VM's it runs are small(1 Core,1gb > Ram,8gb HDD) and HA is needed. I'd like a bigger critical margin than one > node failing. > Ok, in this case - in addition to adding the new nodes to cluster, you will also need to add bricks to the volume and increase the replica count to 5. You can do this using the "Add Bricks" from the Bricks tab on selection of a gluster volume. Ensure that you set the replica count to 5 here. This should be done once you successfully add the new hosts to the cluster. > As far as the repos, it's a straight the ovirtnode iso install, I think > it's Node 4.2.0... which is yum updated to 4.2.1.1 > When I installed 4.0 I'd installed on top of centos. This round I went > straight with the node os because of simplicity in updating. > > I can manually restart gluster from cli, the peer and volume status show > no peers or volumes. > This is because the nodes have not been added (or peer probed) to existing gluster cluster. I will need the logs I requested to understand why. > One thing of note, the networking is still as setup from the node install. > I cannot change the networking info from the ovirt gui/dashboard. The host > goes unresponsive and then another host power cycles it. 
> > Thanks, > Hanson > > > On 03/21/2018 06:12 AM, Sahina Bose wrote: > > > > On Tue, Mar 20, 2018 at 9:41 PM, Hanson Turner > wrote: > >> Hi Guys, >> >> I've a 3 machine pool running gluster with replica 3 and want to add two >> more machines. >> >> This would change to a replica 5... >> > > Adding 2 more nodes to cluster will not change it to a replica 5. replica > 3 is a configuration on the gluster volume. I assume you don't need a > replica 5, but just to add more nodes (and possibly new gluster volumes) to > the cluster? > > >> In ovirt 4.0, I'd done everything manually. No problem there. >> >> In ovirt 4.2, I'd used the wizard for the hosted-engine. It looks like >> the fourth node has been added to the pool but will not go active. It >> complains gluster isn't running (which I've not manually configured >> /dev/sdb for gluster). Host install+deploy fails. Host can go into >> maintenance w/o issue. (Meaning the host has been added to the cluster, but >> isn't operational) >> > > Are the repos configured correctly on the new nodes? Does the oVirt > cluster where the nodes are being added have "Enable Gluster Service" > enabled? > > >> What do I need to do to get the node up and running proper with gluster >> syncing properly? Manually restarting gluster, tells me there's no peers >> and no volumes. >> >> Do we have a wizard for this too? Or do I need to go find the setup >> scripts and configure hosts 4 + 5 manually and run the deploy again? >> > > The host addition flow should take care of installing gluster. > Can you share the engine log from when the host was added to when it's > reported non-operational? > > >> >> Thanks, >> >> Hanson >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Thu Mar 22 09:17:16 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 22 Mar 2018 11:17:16 +0200 Subject: [ovirt-users] Juniper vSRX Cluster on oVirt/RHEV In-Reply-To: References: Message-ID: On Wed, Mar 21, 2018 at 10:37 PM, Charles Kozler wrote: > Hi All - > > Recently did this and thought it would be worth documenting. I couldnt > find any solid information on vsrx with kvm outside of flat KVM. This > outlines some of the things I hit along the way and how to fix. This is my > one small way of giving back to such an incredible open source tool > > https://ckozler.net/vsrx-cluster-on-ovirtrhev/ > Thanks for sharing! Why didn't you just upload the qcow2 disk via the UI/API though? There's quite a bit of manual work that I hope is not needed? Y. > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luis_souza at outlook.com Tue Mar 20 12:20:42 2018 From: luis_souza at outlook.com (Luis Ricardo de Souza) Date: Tue, 20 Mar 2018 12:20:42 +0000 Subject: [ovirt-users] Ovirt 404 - Not found Message-ID: Hello, My ovirt 4.1.9 install is throwing error 404 - Not found. Follow the log files. 
DHW 2018-03-20 09:14:31|ETL Service Stopped 2018-03-20 09:16:32|ETL Service Started ovirtEngineDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory hoursToKeepDaily|0 hoursToKeepHourly|720 ovirtEngineDbPassword|********************** runDeleteTime|3 ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory runInterleave|60 limitRows|limit 1000 ovirtEngineHistoryDbUser|ovirt_engine_history ovirtEngineDbUser|engine deleteIncrement|10 timeBetweenErrorEvents|300000 hoursToKeepSamples|24 deleteMultiplier|1000 lastErrorSent|2011-07-03 12:46:47.000000 etlVersion|4.1.9 dwhAggregationDebug|false dwhUuid|f01b63b4-97c6-4cc4-b363-01f0928822cf ovirtEngineHistoryDbDriverClass|org.postgresql.Driver ovirtEngineHistoryDbPassword|********************** Engine 2018-03-20 09:16:43,213-03 WARN [org.ovirt.engine.core.utils.ConfigUtilsBase] (ServerService Thread Pool -- 46) [] Could not find enum value for option: 'VdsFenceOptions' 2018-03-20 09:16:43,213-03 WARN [org.ovirt.engine.core.utils.ConfigUtilsBase] (ServerService Thread Pool -- 46) [] Could not find enum value for option: 'DbJustRestored' 2018-03-20 09:16:43,215-03 INFO [org.ovirt.engine.core.utils.osinfo.OsInfoPreferencesLoader] (ServerService Thread Pool -- 46) [] Loading file '/etc/ovirt-engine/osinfo.conf.d/00-defaults.properties' 2018-03-20 09:16:43,252-03 ERROR [org.ovirt.engine.core.bll.Backend] (ServerService Thread Pool -- 46) [] Error during initialization: org.springframework.jdbc.BadSqlGrammarException: CallableStatementCallback; bad SQL grammar [{call clear_osinfo()}]; nested exception is org.postgresql.util.PSQLException: ERROR: relation "dwh_osinfo" does not exist Where: SQL statement "TRUNCATE dwh_osinfo" PL/pgSQL function clear_osinfo() line 3 at SQL statement at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:231) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1094) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.call(JdbcTemplate.java:1130) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.core.simple.AbstractJdbcCall.executeCallInternal(AbstractJdbcCall.java:405) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.core.simple.AbstractJdbcCall.doExecute(AbstractJdbcCall.java:377) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:188) [spring-jdbc.jar:4.2.4.RELEASE] at org.ovirt.engine.core.dao.dwh.OsInfoDaoImpl.populateDwhOsInfo(OsInfoDaoImpl.java:52) [dal.jar:] at org.ovirt.engine.core.bll.Backend.initOsRepository(Backend.java:742) [bll.jar:] at org.ovirt.engine.core.bll.Backend.initialize(Backend.java:258) [bll.jar:] at org.ovirt.engine.core.bll.Backend.create(Backend.java:197) [bll.jar:] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.8.0_161] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [rt.jar:1.8.0_161] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161] at 
org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:96) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437) at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:70) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final] at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doLifecycleInterception(Jsr299BindingsInterceptor.java:117) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final] at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:103) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437) at org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73) [weld-core-impl-2.3.5.Final.jar:2.3.5.Final] at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.weld.injection.WeldInjectionInterceptor.processInvocation(WeldInjectionInterceptor.java:53) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.ee.component.AroundConstructInterceptorFactory$1.processInvocation(AroundConstructInterceptorFactory.java:28) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.weld.injection.WeldInterceptorInjectionInterceptor.processInvocation(WeldInterceptorInjectionInterceptor.java:56) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.ee.component.ComponentInstantiatorInterceptor.processInvocation(ComponentInstantiatorInterceptor.java:74) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.weld.ejb.Jsr299BindingsCreateInterceptor.processInvocation(Jsr299BindingsCreateInterceptor.java:100) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:263) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:374) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.as.ejb3.tx.LifecycleCMTTxInterceptor.processInvocation(LifecycleCMTTxInterceptor.java:70) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.weld.injection.WeldInjectionContextInterceptor.processInvocation(WeldInjectionContextInterceptor.java:43) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at 
org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45) [wildfly-ee-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:64) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.ejb3.component.singleton.StartupCountDownInterceptor.processInvocation(StartupCountDownInterceptor.java:25) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356) at org.jboss.invocation.PrivilegedWithCombinerInterceptor.processInvocation(PrivilegedWithCombinerInterceptor.java:80) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61) at org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:161) at org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:134) at org.jboss.as.ee.component.BasicComponent.createInstance(BasicComponent.java:88) at org.jboss.as.ejb3.component.singleton.SingletonComponent.getComponentInstance(SingletonComponent.java:124) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.as.ejb3.component.singleton.SingletonComponent.start(SingletonComponent.java:138) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.as.ee.component.ComponentStartService$1.run(ComponentStartService.java:54) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_161] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_161] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_161] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_161] at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161] at org.jboss.threads.JBossThread.run(JBossThread.java:320) Caused by: org.postgresql.util.PSQLException: ERROR: relation "dwh_osinfo" does not exist Where: SQL statement "TRUNCATE dwh_osinfo" PL/pgSQL function clear_osinfo() line 3 at SQL statement at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255) at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555) at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:417) at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:410) at org.jboss.jca.adapters.jdbc.CachedPreparedStatement.execute(CachedPreparedStatement.java:303) at org.jboss.jca.adapters.jdbc.WrappedPreparedStatement.execute(WrappedPreparedStatement.java:442) at org.springframework.jdbc.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1133) [spring-jdbc.jar:4.2.4.RELEASE] at 
org.springframework.jdbc.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1130) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1078) [spring-jdbc.jar:4.2.4.RELEASE] ... 65 more -------------- next part -------------- An HTML attachment was scrubbed... URL: From jhernand at redhat.com Thu Mar 22 09:33:51 2018 From: jhernand at redhat.com (=?UTF-8?Q?Juan_Hern=c3=a1ndez?=) Date: Thu, 22 Mar 2018 10:33:51 +0100 Subject: [ovirt-users] How to generate Swagger specification of the oVirt API? In-Reply-To: References: Message-ID: <4c69ff4b-2cc7-9b42-2688-a7a5ee1a8674@redhat.com> On 03/22/2018 04:40 AM, Shao-Da Huang wrote: > Hi Juan, > > I saw the discussion in users-list: > http://lists.ovirt.org/pipermail/users/2017-April/081618.html > and I'm curious about how to generate the Swagger specfication of existing > oVirt API (or maybe can generate by ovirt-engine-api-model?). > Could you give me some advices on the generating tools or maybe some points > to change during the procedure of building ovirt-engine? > > Michael > The JSON file that I mentioned in that mail was generated using I tool that I started to write, but that I never finished. I just have uploaded the patch that adds it to the ovirt-engine-api-metamodel project: [WIP] Generate Swagger specification https://gerrit.ovirt.org/89337 Note that it is by no means complete, it is just an experiment. Take a look if you are curious. Truth is that I don't plan to work on that. Would be nice if you can take it and complete it. If you complete it, then it could be integrated in the build process of the ovirt-engine-api-model project, so that it is generated automatically. From spfma.tech at e.mail.fr Thu Mar 22 09:39:12 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Thu, 22 Mar 2018 10:39:12 +0100 Subject: [ovirt-users] Engine and nodes ssh setup Message-ID: <20180322093913.0BCA4E446E@smtp01.mail.de> Hi, I am still trying to make my restored hosted engine communicate with the nodes without success. There is a step I am not sure : is root user on the engine supposed to be able to log into nodes without password or not ? In my case it doesn't By the way, where are located the certificates actually used for these communications ? Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From sabose at redhat.com Thu Mar 22 09:48:44 2018 From: sabose at redhat.com (Sahina Bose) Date: Thu, 22 Mar 2018 15:18:44 +0530 Subject: [ovirt-users] Can't Add Host To New Hosted Engine - "Server is already part of another cluster" In-Reply-To: References: Message-ID: On Wed, Mar 21, 2018 at 12:33 PM, Yedidyah Bar David wrote: > On Wed, Mar 21, 2018 at 8:17 AM, Adam Chesterton > wrote: > > Hi Everyone, > > > > I'm running a 3-host hyperconverged Gluster setup for testing (on some > old > > desktops), and recently the hosted engine died on me, so I have > attempted to > > just clean up my existing hosts, leaving Gluster configured, and > re-deploy a > > fresh hosted engine setup on them. > > > > I have successfully got the first host setup and the hosted engine is > > running on that host. However, when I try to add the other two hosts via > the > > web GUI (as I can no longer add them via CLI) I get this error: "Error > while > > executing action: Server XXXXX is already part of another cluster." 
> > This message might be a result of the host's participation in a gluster > cluster, > not hosted-engine cluster. Please share engine.log from the engine. > > Adding Sahina. > Yes, it does look like that. Can you share details of # gluster peer status from your 3 nodes And also the address of the first host in the oVirt engine and below from the HE engine: # su - postgres -c "psql -d engine -c \"select * from gluster_server; \"" > > > > I've tried to find where this would still be configured on the two other > > hosts, but I cannot find anywhere. > > If it's only about hosted-engine, you can check /etc/ovirt-hosted-engine . > > You might try using ovirt-hosted-engine-cleanup, although it was not > designed > for such cases. > > > > > Does anyone know how I can stop these two hosts from thinking they are > still > > in a cluster? Or, does anyone have some information that might help, or > am I > > going to just have to start a fresh CentOS install? > > If you do not need the data, a reinstall might be simplest. > If you do, not sure what's your exact plan. > You intend to rely on the replication? So that you reinstall one host, add > it, > wait until syncing finished, then reinstall the other? Might work, no idea. > > Best regards, > -- > Didi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sabose at redhat.com Thu Mar 22 09:56:13 2018 From: sabose at redhat.com (Sahina Bose) Date: Thu, 22 Mar 2018 15:26:13 +0530 Subject: [ovirt-users] Ovirt vm's paused due to storage error In-Reply-To: References: Message-ID: Can you provide "gluster volume info" and the mount logs of the data volume (I assume that this hosts the vdisks for the VM's with storage error). Also vdsm.log at the corresponding time. On Fri, Mar 16, 2018 at 3:45 AM, Endre Karlson wrote: > Hi, this is is here again and we are getting several vm's going into > storage error in our 4 node cluster running on centos 7.4 with gluster and > ovirt 4.2.1. 
> > Gluster version: 3.12.6 > > volume status > [root at ovirt3 ~]# gluster volume status > Status of volume: data > Gluster process TCP Port RDMA Port Online > Pid > ------------------------------------------------------------ > ------------------ > Brick ovirt0:/gluster/brick3/data 49152 0 Y > 9102 > Brick ovirt2:/gluster/brick3/data 49152 0 Y > 28063 > Brick ovirt3:/gluster/brick3/data 49152 0 Y > 28379 > Brick ovirt0:/gluster/brick4/data 49153 0 Y > 9111 > Brick ovirt2:/gluster/brick4/data 49153 0 Y > 28069 > Brick ovirt3:/gluster/brick4/data 49153 0 Y > 28388 > Brick ovirt0:/gluster/brick5/data 49154 0 Y > 9120 > Brick ovirt2:/gluster/brick5/data 49154 0 Y > 28075 > Brick ovirt3:/gluster/brick5/data 49154 0 Y > 28397 > Brick ovirt0:/gluster/brick6/data 49155 0 Y > 9129 > Brick ovirt2:/gluster/brick6_1/data 49155 0 Y > 28081 > Brick ovirt3:/gluster/brick6/data 49155 0 Y > 28404 > Brick ovirt0:/gluster/brick7/data 49156 0 Y > 9138 > Brick ovirt2:/gluster/brick7/data 49156 0 Y > 28089 > Brick ovirt3:/gluster/brick7/data 49156 0 Y > 28411 > Brick ovirt0:/gluster/brick8/data 49157 0 Y > 9145 > Brick ovirt2:/gluster/brick8/data 49157 0 Y > 28095 > Brick ovirt3:/gluster/brick8/data 49157 0 Y > 28418 > Brick ovirt1:/gluster/brick3/data 49152 0 Y > 23139 > Brick ovirt1:/gluster/brick4/data 49153 0 Y > 23145 > Brick ovirt1:/gluster/brick5/data 49154 0 Y > 23152 > Brick ovirt1:/gluster/brick6/data 49155 0 Y > 23159 > Brick ovirt1:/gluster/brick7/data 49156 0 Y > 23166 > Brick ovirt1:/gluster/brick8/data 49157 0 Y > 23173 > Self-heal Daemon on localhost N/A N/A Y > 7757 > Bitrot Daemon on localhost N/A N/A Y > 7766 > Scrubber Daemon on localhost N/A N/A Y > 7785 > Self-heal Daemon on ovirt2 N/A N/A Y > 8205 > Bitrot Daemon on ovirt2 N/A N/A Y > 8216 > Scrubber Daemon on ovirt2 N/A N/A Y > 8227 > Self-heal Daemon on ovirt0 N/A N/A Y > 32665 > Bitrot Daemon on ovirt0 N/A N/A Y > 32674 > Scrubber Daemon on ovirt0 N/A N/A Y > 32712 > Self-heal Daemon on ovirt1 N/A N/A Y > 31759 > Bitrot Daemon on ovirt1 N/A N/A Y > 31768 > Scrubber Daemon on ovirt1 N/A N/A Y > 31790 > > Task Status of Volume data > ------------------------------------------------------------ > ------------------ > Task : Rebalance > ID : 62942ba3-db9e-4604-aa03-4970767f4d67 > Status : completed > > Status of volume: engine > Gluster process TCP Port RDMA Port Online > Pid > ------------------------------------------------------------ > ------------------ > Brick ovirt0:/gluster/brick1/engine 49158 0 Y > 9155 > Brick ovirt2:/gluster/brick1/engine 49158 0 Y > 28107 > Brick ovirt3:/gluster/brick1/engine 49158 0 Y > 28427 > Self-heal Daemon on localhost N/A N/A Y > 7757 > Self-heal Daemon on ovirt1 N/A N/A Y > 31759 > Self-heal Daemon on ovirt0 N/A N/A Y > 32665 > Self-heal Daemon on ovirt2 N/A N/A Y > 8205 > > Task Status of Volume engine > ------------------------------------------------------------ > ------------------ > There are no active volume tasks > > Status of volume: iso > Gluster process TCP Port RDMA Port Online > Pid > ------------------------------------------------------------ > ------------------ > Brick ovirt0:/gluster/brick2/iso 49159 0 Y > 9164 > Brick ovirt2:/gluster/brick2/iso 49159 0 Y > 28116 > Brick ovirt3:/gluster/brick2/iso 49159 0 Y > 28436 > NFS Server on localhost 2049 0 Y > 7746 > Self-heal Daemon on localhost N/A N/A Y > 7757 > NFS Server on ovirt1 2049 0 Y > 31748 > Self-heal Daemon on ovirt1 N/A N/A Y > 31759 > NFS Server on ovirt0 2049 0 Y > 32656 > Self-heal Daemon on ovirt0 N/A N/A Y > 32665 > NFS 
Server on ovirt2 2049 0 Y > 8194 > Self-heal Daemon on ovirt2 N/A N/A Y > 8205 > > Task Status of Volume iso > ------------------------------------------------------------ > ------------------ > There are no active volume tasks > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sabose at redhat.com Thu Mar 22 09:58:07 2018 From: sabose at redhat.com (Sahina Bose) Date: Thu, 22 Mar 2018 15:28:07 +0530 Subject: [ovirt-users] VDSM SSL validity In-Reply-To: References: Message-ID: Didi, Sandro - Do you know if this option VdsCertificateValidityInYears is present in 4.2? On Mon, Mar 19, 2018 at 4:43 AM, Punaatua PAINT-KOUI wrote: > Up > > 2018-02-17 2:57 GMT-10:00 Punaatua PAINT-KOUI : > >> Any idea someone ? >> >> Le 14 f?vr. 2018 23:19, "Punaatua PAINT-KOUI" a >> ?crit : >> >>> Hi, >>> >>> I setup an hyperconverged solution with 3 nodes, hosted engine on >>> glusterfs. >>> We run this setup in a PCI-DSS environment. According to PCI-DSS >>> requirements, we are required to reduce the validity of any certificate >>> under 39 months. >>> >>> I saw in this link https://www.ovirt.org/dev >>> elop/release-management/features/infra/pki/ that i can use the option >>> VdsCertificateValidityInYears at engine-config. >>> >>> I'm running ovirt engine 4.2.1 and i checked when i was on 4.2 how to >>> edit the option with engine-config --all and engine-config --list but the >>> option is not listed >>> >>> Am i missing something ? >>> >>> I thing i can regenerate a VDSM certificate with openssl and the CA conf >>> in /etc/pki/ovirt-engine on the hosted-engine but i would rather modifiy >>> the option for future host that I will add. >>> >>> -- >>> ------------------------------------- >>> PAINT-KOUI Punaatua >>> >> > > > -- > ------------------------------------- > PAINT-KOUI Punaatua > Licence Pro R?seaux et T?lecom IAR > Universit? du Sud Toulon Var > La Garde France > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sdhuang32 at gmail.com Thu Mar 22 09:59:25 2018 From: sdhuang32 at gmail.com (Shao-Da Huang) Date: Thu, 22 Mar 2018 17:59:25 +0800 Subject: [ovirt-users] How to generate Swagger specification of the oVirt API? In-Reply-To: <4c69ff4b-2cc7-9b42-2688-a7a5ee1a8674@redhat.com> References: <4c69ff4b-2cc7-9b42-2688-a7a5ee1a8674@redhat.com> Message-ID: Got it, thank you for your information! 2018-03-22 17:33 GMT+08:00 Juan Hern?ndez : > On 03/22/2018 04:40 AM, Shao-Da Huang wrote: > >> Hi Juan, >> >> I saw the discussion in users-list: >> http://lists.ovirt.org/pipermail/users/2017-April/081618.html >> and I'm curious about how to generate the Swagger specfication of existing >> oVirt API (or maybe can generate by ovirt-engine-api-model?). >> Could you give me some advices on the generating tools or maybe some >> points >> to change during the procedure of building ovirt-engine? >> >> Michael >> >> > The JSON file that I mentioned in that mail was generated using I tool > that I started to write, but that I never finished. 
I just have uploaded > the patch that adds it to the ovirt-engine-api-metamodel project: > > [WIP] Generate Swagger specification > https://gerrit.ovirt.org/89337 > > Note that it is by no means complete, it is just an experiment. Take a > look if you are curious. > > Truth is that I don't plan to work on that. Would be nice if you can take > it and complete it. If you complete it, then it could be integrated in the > build process of the ovirt-engine-api-model project, so that it is > generated automatically. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sabose at redhat.com Thu Mar 22 10:01:21 2018 From: sabose at redhat.com (Sahina Bose) Date: Thu, 22 Mar 2018 15:31:21 +0530 Subject: [ovirt-users] GlusterFS performance with only one drive per host? In-Reply-To: References: Message-ID: On Mon, Mar 19, 2018 at 5:57 PM, Jayme wrote: > I'm spec'ing a new oVirt build using three Dell R720's w/ 256GB. I'm > considering storage options. I don't have a requirement for high amounts > of storage, I have a little over 1TB to store but want some overhead so I'm > thinking 2TB of usable space would be sufficient. > > I've been doing some research on Micron 1100 2TB ssd's and they seem to > offer a lot of value for the money. I'm considering using smaller cheaper > SSDs for boot drives and using one 2TB micron SSD in each host for a > glusterFS replica 3 setup (on the fence about using an arbiter, I like the > extra redundancy replicate 3 will give me). > > My question is, would I see a performance hit using only one drive in each > host with glusterFS or should I try to add more physical disks. Such as 6 > 1TB drives instead of 3 2TB drives? > [Adding gluster-users for inputs here] > Also one other question. I've read that gluster can only be done in > groups of three. Meaning you need 3, 6, or 9 hosts. Is this true? If I > had an operational replicate 3 glusterFS setup and wanted to add more > capacity I would have to add 3 more hosts, or is it possible for me to add > a 4th host in to the mix for extra processing power down the road? > In oVirt, we support replica 3 or replica 3 with arbiter (where one of the 3 bricks is a low storage arbiter brick). To expand storage, you would need to add in multiples of 3 bricks. However if you only want to expand compute capacity in your HC environment, you can add a 4th node. > Thanks! > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From didi at redhat.com Thu Mar 22 10:49:25 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Thu, 22 Mar 2018 12:49:25 +0200 Subject: [ovirt-users] VDSM SSL validity In-Reply-To: References: Message-ID: On Thu, Mar 22, 2018 at 11:58 AM, Sahina Bose wrote: > Didi, Sandro - Do you know if this option VdsCertificateValidityInYears is > present in 4.2? I do not think it ever was exposed to engine-config - I think it's a bug in that page. You should be able to update it with psql, if needed - something like this: select fn_db_update_config_value('VdsCertificateValidityInYears','2','general'); I didn't try this myself. 
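For example, using the same psql-as-postgres pattern that appears elsewhere in this thread (a sketch, untested, adjust the database name if yours differs):

su - postgres -c "psql -d engine -c \"select fn_db_update_config_value('VdsCertificateValidityInYears','2','general');\""

You will probably also need to restart ovirt-engine afterwards so the new value is picked up.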
To get an SQL prompt, you can use engine-psql, which should be available in 4.2.2, or simply copy the script from the patch page:
https://gerrit.ovirt.org/#/q/I4d9737ea72df0d7e654776a1085901284a523b7f

Also, some people claim that the use of certificates for communication between the engine and the hosts is an internal implementation detail, which should not be relevant to PCI DSS requirements. See e.g.:
https://ovirt.org/develop/release-management/features/infra/pkireduce/

> > On Mon, Mar 19, 2018 at 4:43 AM, Punaatua PAINT-KOUI > wrote: >> >> Up >> >> 2018-02-17 2:57 GMT-10:00 Punaatua PAINT-KOUI : >>> >>> Any idea, someone? >>> >>> On 14 Feb 2018 23:19, "Punaatua PAINT-KOUI" >>> wrote: >>>> >>>> Hi, >>>> >>>> I set up a hyperconverged solution with 3 nodes, hosted engine on >>>> glusterfs. >>>> We run this setup in a PCI-DSS environment. According to PCI-DSS >>>> requirements, we are required to reduce the validity of any certificate >>>> to under 39 months. >>>> >>>> I saw in this link https://www.ovirt.org/develop/release-management/features/infra/pki/ that I >>>> can use the option VdsCertificateValidityInYears at engine-config. >>>> >>>> I'm running ovirt engine 4.2.1, and I checked when I was on 4.2 how to >>>> edit the option with engine-config --all and engine-config --list, but the >>>> option is not listed. >>>> >>>> Am I missing something? >>>> >>>> I think I can regenerate a VDSM certificate with openssl and the CA conf >>>> in /etc/pki/ovirt-engine on the hosted-engine, but I would rather modify the >>>> option for future hosts that I will add. >>>> >>>> -- >>>> ------------------------------------- >>>> PAINT-KOUI Punaatua >> >> >> >> >> -- >> ------------------------------------- >> PAINT-KOUI Punaatua >> Licence Pro Réseaux et Télécom IAR >> Université du Sud Toulon Var >> La Garde France >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > -- Didi
From hanson at andrewswireless.net Thu Mar 22 13:59:19 2018 From: hanson at andrewswireless.net (Hanson Turner) Date: Thu, 22 Mar 2018 09:59:19 -0400 Subject: [ovirt-users] HostedEngine with HA In-Reply-To: <1471424789.2159.5.camel at eurotux.com> References: <1471022610.23180.8.camel at eurotux.com> <1471344823.2213.6.camel at eurotux.com> <1471424789.2159.5.camel at eurotux.com> Message-ID: <3fc9fbf4-631d-b6ce-11cf-e4caa1e10579 at andrewswireless.net> Hi Carlos, If you're using shared storage across the nodes/hypervisors, the HostedEngine VM should already be HA. Sometimes this means the engine briefly drops while it is restarted on another node. Usually when this happens the rest of the nodes running VMs stay up, and things resync when the downed node comes back, i.e. the only one to lose pings is the Hosted-Engine. Unless of course there were VMs on the same node, in which case, if they were HA VMs, they will be restarted/resumed depending on your settings. Thanks, Hanson On 08/17/2016 05:06 AM, Carlos Rodrigues wrote: > Can anyone help me to build HA on the HostedEngine VM? > > How can I guarantee that if the host with the HostedEngine VM goes down, the > HostedEngine VM moves to another host?
> > Regards, > Carlos Rodrigues > > On Tue, 2016-08-16 at 11:53 +0100, Carlos Rodrigues wrote: >> On Sun, 2016-08-14 at 14:22 +0300, Roy Golan wrote: >>> >>> >>> On 12 August 2016 at 20:23, Carlos Rodrigues >>> wrote: >>>> Hello, >>>> >>>> I have one cluster with two hosts with power management correctly >>>> configured and one virtual machine with HostedEngine over shared >>>> storage with Fibre Channel. >>>> >>>> When I shut down the network of the host with the HostedEngine VM, >>>> should it be >>>> possible for the HostedEngine VM to migrate automatically to another >>>> host? >>>> >>> migrate on which network? >>> >>>> What is the expected behaviour in this HA scenario? >>> After a few minutes your vm will be shut down by the High >>> Availability >>> agent, as it can't see the network, and started on another host. >> >> I'm testing this scenario: after shutting down the network, it should be >> expected that the agent shuts the VM down and starts it on another host, but >> after a >> couple of minutes nothing happens, and on the host with network we are getting >> the >> following messages: >> >> Aug 16 11:44:08 ied-blade11.install.eurotux.local ovirt-ha- >> agent[2779]: >> ovirt-ha-agent >> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR >> Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf >> >> I think the HA agent is trying to get the vm configuration, but somehow >> it >> can't get vm.conf to start the VM. >> >> Regards, >> Carlos Rodrigues >> >> >>>> >>>> Regards, >>>> >>>> -- >>>> Carlos Rodrigues >>>> >>>> Engenheiro de Software Sénior >>>> >>>> Eurotux Informática, S.A. | www.eurotux.com >>>> (t) +351 253 680 300 (m) +351 911 926 110 >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>>
From NasrumMinallah9 at hotmail.com Thu Mar 22 06:54:07 2018 From: NasrumMinallah9 at hotmail.com (Nasrum Minallah Manzoor) Date: Thu, 22 Mar 2018 06:54:07 +0000 Subject: [ovirt-users] Query for moving Ovirt Engine... Message-ID: Hello everyone, I want to move the ovirt engine from one of my machines to a node installed on another machine! What steps should I take to accomplish this? Help would be appreciated! Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL:
From lveyde at redhat.com Thu Mar 22 14:36:04 2018 From: lveyde at redhat.com (Lev Veyde) Date: Thu, 22 Mar 2018 16:36:04 +0200 Subject: [ovirt-users] [ANN] oVirt 4.2.2 Fifth Release Candidate is now available Message-ID: The oVirt Project is pleased to announce the availability of the oVirt 4.2.2 Fifth Release Candidate, as of March 22nd, 2018. This update is a release candidate of the second in a series of stabilization updates to the 4.2 series. This is pre-release software. This pre-release should not be used in production. This release is available now for: * Red Hat Enterprise Linux 7.4 or later * CentOS Linux (or similar) 7.4 or later This release supports Hypervisor Hosts running: * Red Hat Enterprise Linux 7.4 or later * CentOS Linux (or similar) 7.4 or later * oVirt Node 4.2 See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed. Notes: - oVirt Appliance is available - oVirt Node will be available soon [2] Additional Resources: * Read more about the oVirt 4.2.2 release highlights: http://www.ovirt.org/release/4.2.2/ * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt * Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/ [1] http://www.ovirt.org/release/4.2.2/ [2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/ -- Lev Veyde Software Engineer, RHCE | RHCVA | MCITP Red Hat Israel lev at redhat.com | lveyde at redhat.com TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From didi at redhat.com Thu Mar 22 14:40:27 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Thu, 22 Mar 2018 16:40:27 +0200 Subject: Re: [ovirt-users] Query for moving Ovirt Engine... In-Reply-To: References: Message-ID: On Thu, Mar 22, 2018 at 8:54 AM, Nasrum Minallah Manzoor wrote: > Hello everyone, > > > > I want to move the ovirt engine from one of my machines to a node installed on > another machine! What steps should I take to accomplish this? > > > > Help would be appreciated! Not sure what exactly you mean: 1. You want to copy the engine to another machine and remove it from the first? Use engine-backup 2. It's a hosted-engine and you want to migrate the engine vm to another host? You can use the GUI and migrate it like any other VM 3. Something else? Best regards, -- Didi
From didi at redhat.com Thu Mar 22 15:03:06 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Thu, 22 Mar 2018 17:03:06 +0200 Subject: Re: [ovirt-users] Query for moving Ovirt Engine... In-Reply-To: References: Message-ID: On Thu, Mar 22, 2018 at 4:45 PM, Junaid Jadoon wrote: > Hi yedidyah, > below is our main objective > > > > To make it more clear I am revising my query! > > > > We have installed 2 nodes in a cluster, and the ovirt engine is installed on a > separate machine on VMware Workstation. Now I want to move the ovirt engine to > any one of my nodes. What process should I go through? Need solution from > experts! Thanks for the clarification. I suggest to start with: https://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/ Best regards, > > > > On Thu 22 Mar 2018 7:40 pm Yedidyah Bar David, wrote: >> On Thu, Mar 22, 2018 at 8:54 AM, Nasrum Minallah Manzoor >> wrote: >> > Hello everyone, >> > >> > >> > >> > I want to move the ovirt engine from one of my machines to a node installed on >> > another machine! What steps should I take to accomplish this? >> > >> > >> > >> > Help would be appreciated! >> >> Not sure what exactly you mean: >> >> 1. You want to copy the engine to another machine and remove it from the first? >> >> Use engine-backup >> >> 2. It's a hosted-engine and you want to migrate the engine vm to another host? >> >> You can use the GUI and migrate it like any other VM >> >> 3. Something else? >> >> Best regards, >> -- >> Didi >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users -- Didi
From didi at redhat.com Thu Mar 22 15:18:03 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Thu, 22 Mar 2018 17:18:03 +0200 Subject: Re: [ovirt-users] Query for moving Ovirt Engine... In-Reply-To: References: Message-ID: On Thu, Mar 22, 2018 at 5:07 PM, Nasrum Minallah Manzoor wrote: > Thank you Yedidyah, > > Let me clarify it more: I have 2 ovirt nodes in a cluster. The Ovirt engine is on a separate machine on VMware Workstation. Now I want to move this ovirt engine to one of my nodes (which are in a cluster). I need help with the above so that I can complete my task. > > Thanks in advance!
Please see the link I sent before for migrating to hosted-engine. Another option is to simply install the engine on one of the machines. I didn't try this myself, but people reported that it works. This will be simpler than hosted-engine, but will not provide the HA that hosted-engine does. If you want to do this, you should do something like this:

On the existing engine machine:
1. engine-backup --mode=backup --file=b1 --log=l1
2. Stop and disable the engine and dwh on this machine
3. Copy b1 to the host you want to install it on

On the host you want the engine on:
4. Install the ovirt-release package (same as you installed on the engine machine)
5. Install the package ovirt-engine
6. engine-backup --mode=restore --file=b1 --log=r1 --provision-db --provision-dwh-db
7. engine-setup
8. Make sure the name of the old engine now resolves to the IP address of the new host, by updating your DNS (e.g. add a CNAME record) or /etc/hosts

For more details check the engine backup/restore documentation.

Best regards,

> > Regards, > > > > ________________________________________ > From: Yedidyah Bar David > Sent: Thursday, March 22, 2018 7:40:27 PM > To: Nasrum Minallah Manzoor > Cc: users at ovirt.org > Subject: Re: [ovirt-users] Query for moving Ovirt Engine... > > On Thu, Mar 22, 2018 at 8:54 AM, Nasrum Minallah Manzoor > wrote: >> Hello everyone, >> >> >> >> I want to move the ovirt engine from one of my machines to a node installed on >> another machine! What steps should I take to accomplish this? >> >> >> >> Help would be appreciated! > > Not sure what exactly you mean: > > 1. You want to copy the engine to another machine and remove it from the first? > > Use engine-backup > > 2. It's a hosted-engine and you want to migrate the engine vm to another host? > > You can use the GUI and migrate it like any other VM > > 3. Something else? > > Best regards, > -- > Didi -- Didi
From fernando.frediani at upx.com Thu Mar 22 16:24:29 2018 From: fernando.frediani at upx.com (FERNANDO FREDIANI) Date: Thu, 22 Mar 2018 13:24:29 -0300 Subject: [ovirt-users] Ovirt nodes NFS connection In-Reply-To: References: Message-ID: Hello Tal

It seems you have a very big overkill in your environment. I would say that normally 2 x 10Gb interfaces can do A LOT for nodes with proper redundancy. Just by creating VLANs you can separate traffic and apply, if necessary, QoS per VLAN to guarantee which one has more priority. If you have 2 x 10Gb in a LACP 802.3ad aggregation, in theory you can do 20Gbps of aggregated traffic. 10Gb of constant storage traffic is already huge, so I normally consider that storage will not go over a few Gbps and VMs another few Gb, which fits perfectly within even 10Gb.

The only exception I would make is if you have very intensive traffic (and I am not talking about IOPS, but throughput) from your storage; then it may be worth having 2 x 10Gb for storage and 2 x 10Gb for all other networks (management, VM traffic, migration (with a cap on traffic), etc).

Regards
Fernando

2018-03-21 16:41 GMT-03:00 Yaniv Kaul : > > > On Wed, Mar 21, 2018 at 12:41 PM, Tal Bar-Or wrote: > >> Hello All, >> >> I am about to deploy a new Ovirt platform; the platform will consist of 4 >> Ovirt nodes including management, and all server nodes and storage will have >> the following config: >> >> *nodes server* >> 4x10G ports network cards >> 2x10G will be used for VM network.
>> 2x10G will be used for storage connection >> 2x1Ge 1xGe for nodes management >> >> >> *Storage *4x10G ports network cards >> 3 x10G for NFS storage mount Ovirt nodes >> >> Now given above network configuration layout, what is best practices in >> terms of nodes for storage NFS connection, throughput and path resilience >> suggested to use >> First option each node 2x 10G lacp and on storage side 3x10G lacp? >> > > I'm not sure how you'd get more throughout than you can get in a single > physical link. You will get redundancy. > > Of course, on the storage side you might benefit from multiple bonded > interfaces. > > >> The second option creates 3 VLAN's assign each node on that 3 VLAN's >> across 2 nic, and on storage, side assigns 3 nice across 3 VLANs? >> > > Interesting - but I assume it'll still stick to a single physical link. > Y. > > Thanks >> >> >> >> >> >> -- >> Tal Bar-or >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Thu Mar 22 16:39:57 2018 From: rightkicktech at gmail.com (Alex K) Date: Thu, 22 Mar 2018 18:39:57 +0200 Subject: [ovirt-users] Disk upload cancel/remove In-Reply-To: References: Message-ID: After 48 hours it seems the issue has been resolved. No disks are shown with status "Transferring via API" Alex On Wed, Mar 21, 2018 at 9:33 AM, Alex K wrote: > Even after rebooting the engine the disks are still there with same status > "Transferring via API" > > Alex > > On Tue, Mar 20, 2018 at 11:49 AM, Eyal Shenitzky > wrote: > >> Idan/Daniel, >> >> Can you please take a look? >> >> Thanks, >> >> On Tue, Mar 20, 2018 at 11:44 AM, Alex K wrote: >> >>> Hi All, >>> >>> I was trying to upload a VM disk at data storage domain using a python >>> script. >>> I did cancel the upload twice and at the third time the upload was >>> successful, but I see two disks from the previous attempts with status >>> "transferring via API" (see attached). This status of for more then 8 hours >>> and I cannot remove them. >>> >>> Is there any way to clean them from the disks inventory? >>> >>> >>> >>> I am using ovirt 4.1.9.1-1.el7.centos with self hosted engine on 3 >>> nodes. >>> >>> Thanx, >>> Alex >>> >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> >> -- >> Regards, >> Eyal Shenitzky >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ovirt-disk-upload.png Type: image/png Size: 34142 bytes Desc: not available URL: From recreationh at gmail.com Thu Mar 22 16:56:40 2018 From: recreationh at gmail.com (Terry hey) Date: Fri, 23 Mar 2018 00:56:40 +0800 Subject: [ovirt-users] virtual machine actual size is not right In-Reply-To: References: Message-ID: Hello~ i type this command on the running vm, not the hypervisor ( ovirt node). -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From blanchet at abes.fr Thu Mar 22 17:12:12 2018 From: blanchet at abes.fr (Nathanaël Blanchet) Date: Thu, 22 Mar 2018 18:12:12 +0100 Subject: [ovirt-users] can't select network in network section of initial run Message-ID: <83d869e8-90fe-dfd6-4f16-66cdb249f619 at abes.fr> Hi all, In ovirt 4.1.9, the "select network above" list is blank when I want to add a network with cloud-init. Is it a known bug, corrected in the 4.2.x branch? -- Nathanaël Blanchet Supervision réseau Pôle Infrastructures Informatiques 227 avenue Professeur-Jean-Louis-Viala 34193 MONTPELLIER CEDEX 5 Tél. 33 (0)4 67 54 84 55 Fax 33 (0)4 67 54 84 14 blanchet at abes.fr
From alex.duckers at gmail.com Thu Mar 22 17:18:21 2018 From: alex.duckers at gmail.com (aduckers) Date: Thu, 22 Mar 2018 10:18:21 -0700 Subject: [ovirt-users] FCP storage domain wiped, recovery process Message-ID: <5F1072EA-C581-46AD-8B21-5559F515D312 at gmail.com> I've got an environment with: 13 servers in the oVirt default cluster Hosted Engine 4.1.3.5-1.el7.centos Storage provided by FC SAN One of the FC LUNs, being used as a Data Domain, was wiped/reformatted through a confluence of events. How can I clean this up and re-add this LUN? I can't Detach it, as there are VMs/Templates attached to it. When I try to remove the Templates, it tells me it can't, since the storage domain is inactive. What's the best way to clean up from this? Thanks
From budic at onholyground.com Thu Mar 22 18:23:29 2018 From: budic at onholyground.com (Darrell Budic) Date: Thu, 22 Mar 2018 13:23:29 -0500 Subject: [ovirt-users] Ovirt vm's paused due to storage error In-Reply-To: References: Message-ID: I've also encountered something similar on my setup, ovirt 3.1.9 with a gluster 3.12.3 storage cluster. All the storage domains in question are set up as gluster volumes & sharded, and I've enabled libgfapi support in the engine. It's happened primarily to VMs that haven't been restarted to switch to gfapi yet (these still have fuse mounts), but also to one or two VMs that have been switched to gfapi mounts. I started updating the storage cluster to gluster 3.12.6 yesterday and got more annoying/bad behavior as well. Many VMs that were "high disk use" VMs experienced hangs, but not as storage-related pauses. Instead, they hang and their watchdogs eventually reported CPU hangs. All did eventually resume normal operation, but it was annoying, to be sure. The Ovirt Engine also lost contact with all of my VMs (unknown status, ? in GUI), even though it still had contact with the hosts. My gluster cluster reported no errors, volume status was normal, and all peers and bricks were connected. Didn't see anything in the gluster logs that indicated problems, but there were reports of failed heals that eventually went away. Seems like something in vdsm and/or libgfapi isn't handling the gfapi mounts well during healing and the related locks, but I can't tell what it is. I've got two more servers in the cluster to upgrade to 3.12.6 yet, and I'll keep an eye on more logs while I'm doing it, will report on it after I get more info. -Darrell > From: Sahina Bose > Subject: Re: [ovirt-users] Ovirt vm's paused due to storage error > Date: March 22, 2018 at 4:56:13 AM CDT > To: Endre Karlson > Cc: users > > Can you provide "gluster volume info" and the mount logs of the data volume (I assume that this hosts the vdisks for the VM's with storage error). > > Also vdsm.log at the corresponding time.
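(For reference, gathering that should be quick. Something like the following, assuming the volume is named "data" as in Endre's output below, and the fuse mount log location mentioned earlier in this thread:

gluster volume info data
grep -iE "error|warn" /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log

plus the matching time window from /var/log/vdsm/vdsm.log on the host that ran the paused VMs.)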
>
> On Fri, Mar 16, 2018 at 3:45 AM, Endre Karlson wrote:
>
> Hi, this issue is here again and we are getting several VMs going into
> storage error in our 4 node cluster running on CentOS 7.4 with gluster
> and oVirt 4.2.1.
>
> Gluster version: 3.12.6
>
> volume status
> [root at ovirt3 ~]# gluster volume status
> Status of volume: data
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick ovirt0:/gluster/brick3/data           49152     0          Y       9102
> Brick ovirt2:/gluster/brick3/data           49152     0          Y       28063
> Brick ovirt3:/gluster/brick3/data           49152     0          Y       28379
> Brick ovirt0:/gluster/brick4/data           49153     0          Y       9111
> Brick ovirt2:/gluster/brick4/data           49153     0          Y       28069
> Brick ovirt3:/gluster/brick4/data           49153     0          Y       28388
> Brick ovirt0:/gluster/brick5/data           49154     0          Y       9120
> Brick ovirt2:/gluster/brick5/data           49154     0          Y       28075
> Brick ovirt3:/gluster/brick5/data           49154     0          Y       28397
> Brick ovirt0:/gluster/brick6/data           49155     0          Y       9129
> Brick ovirt2:/gluster/brick6_1/data         49155     0          Y       28081
> Brick ovirt3:/gluster/brick6/data           49155     0          Y       28404
> Brick ovirt0:/gluster/brick7/data           49156     0          Y       9138
> Brick ovirt2:/gluster/brick7/data           49156     0          Y       28089
> Brick ovirt3:/gluster/brick7/data           49156     0          Y       28411
> Brick ovirt0:/gluster/brick8/data           49157     0          Y       9145
> Brick ovirt2:/gluster/brick8/data           49157     0          Y       28095
> Brick ovirt3:/gluster/brick8/data           49157     0          Y       28418
> Brick ovirt1:/gluster/brick3/data           49152     0          Y       23139
> Brick ovirt1:/gluster/brick4/data           49153     0          Y       23145
> Brick ovirt1:/gluster/brick5/data           49154     0          Y       23152
> Brick ovirt1:/gluster/brick6/data           49155     0          Y       23159
> Brick ovirt1:/gluster/brick7/data           49156     0          Y       23166
> Brick ovirt1:/gluster/brick8/data           49157     0          Y       23173
> Self-heal Daemon on localhost               N/A       N/A        Y       7757
> Bitrot Daemon on localhost                  N/A       N/A        Y       7766
> Scrubber Daemon on localhost                N/A       N/A        Y       7785
> Self-heal Daemon on ovirt2                  N/A       N/A        Y       8205
> Bitrot Daemon on ovirt2                     N/A       N/A        Y       8216
> Scrubber Daemon on ovirt2                   N/A       N/A        Y       8227
> Self-heal Daemon on ovirt0                  N/A       N/A        Y       32665
> Bitrot Daemon on ovirt0                     N/A       N/A        Y       32674
> Scrubber Daemon on ovirt0                   N/A       N/A        Y       32712
> Self-heal Daemon on ovirt1                  N/A       N/A        Y       31759
> Bitrot Daemon on ovirt1                     N/A       N/A        Y       31768
> Scrubber Daemon on ovirt1                   N/A       N/A        Y       31790
>
> Task Status of Volume data
> ------------------------------------------------------------------------------
> Task                 : Rebalance
> ID                   : 62942ba3-db9e-4604-aa03-4970767f4d67
> Status               : completed
>
> Status of volume: engine
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick ovirt0:/gluster/brick1/engine         49158     0          Y       9155
> Brick ovirt2:/gluster/brick1/engine         49158     0          Y       28107
> Brick ovirt3:/gluster/brick1/engine         49158     0          Y       28427
> Self-heal Daemon on localhost               N/A       N/A        Y       7757
> Self-heal Daemon on ovirt1                  N/A       N/A        Y       31759
> Self-heal Daemon on ovirt0                  N/A       N/A        Y       32665
> Self-heal Daemon on ovirt2                  N/A       N/A        Y       8205
>
> Task Status of Volume engine
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> Status of volume: iso
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick ovirt0:/gluster/brick2/iso            49159     0          Y       9164
> Brick ovirt2:/gluster/brick2/iso            49159     0          Y       28116
> Brick ovirt3:/gluster/brick2/iso            49159     0          Y       28436
> NFS Server on localhost                     2049      0          Y       7746
> Self-heal Daemon on localhost               N/A       N/A        Y       7757
> NFS Server on ovirt1                        2049      0          Y       31748
> Self-heal Daemon on ovirt1                  N/A       N/A        Y       31759
> NFS Server on ovirt0                        2049      0          Y       32656
> Self-heal Daemon on ovirt0                  N/A       N/A        Y       32665
> NFS Server on ovirt2                        2049      0          Y       8194
> Self-heal Daemon on ovirt2                  N/A       N/A        Y       8205
>
> Task Status of Volume iso
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vincent at epicenergy.ca Thu Mar 22 19:05:32 2018
From: vincent at epicenergy.ca (Vincent Royer)
Date: Thu, 22 Mar 2018 12:05:32 -0700
Subject: [ovirt-users] Any monitoring tool provided?
In-Reply-To:
References:
Message-ID:

I set up Grafana using the instructions I found on accessing the oVirt
history database. However, the instructions didn't work as written.
Regardless, it does work, but it's not easy to set up. The update rate
also leaves something to be desired: it's OK for historical info, but it's
not a good real-time monitoring solution (although it's possible I could
set it up differently and it would work better).

Also using Grafana, I have set up Telegraf agents on most of my VMs.

Lastly, I also installed Telegraf on the CentOS hosts in my oVirt cluster.

*Vincent Royer*
*778-825-1057*

*SUSTAINABLE MOBILE ENERGY SOLUTIONS*

On Wed, Mar 21, 2018 at 8:41 PM, Terry hey wrote:

> Dear all,
>
> Now, we can just read how much storage is used and the CPU usage on the
> oVirt dashboard. But is there any monitoring tool for monitoring virtual
> machines from time to time?
> If yes, could you guys give me the procedure?
>
>
> Regards
> Terry
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 145730 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 68566 bytes
Desc: not available
URL:

From ccox at endlessnow.com Thu Mar 22 19:28:19 2018
From: ccox at endlessnow.com (Christopher Cox)
Date: Thu, 22 Mar 2018 14:28:19 -0500
Subject: [ovirt-users] Any monitoring tool provided?
In-Reply-To:
References:
Message-ID: <32f428b6-dc46-de63-6072-b1fff2eb0b28 at endlessnow.com>

On 03/21/2018 10:41 PM, Terry hey wrote:
> Dear all,
>
> Now, we can just read how much storage is used and the CPU usage on the
> oVirt dashboard. But is there any monitoring tool for monitoring virtual
> machines from time to time?
> If yes, could you guys give me the procedure?

A possible option, for a full OS with network connectivity, is to monitor
the VM like you would any other host. We use omd/check_mk. Right now there
isn't an oVirt-specific monitoring plugin for check_mk.

I know what I said is probably pretty obvious, but just in case.
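
If you do go the omd/check_mk route, per-VM checks are easy to extend with
a "local check" inside the guest. A rough sketch (the check name and the
90% threshold are made-up examples, nothing oVirt-specific) - drop an
executable script into /usr/lib/check_mk_agent/local/ on the guest:

#!/bin/bash
# check_mk local check: print one status line per service.
# Format: <status> <item> <perfdata> <detail>, status 0=OK 1=WARN 2=CRIT.
USED=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
if [ "$USED" -ge 90 ]; then
    echo "2 root_fs used=$USED% CRIT - root filesystem at $USED%"
else
    echo "0 root_fs used=$USED% OK - root filesystem at $USED%"
fi

The agent picks up anything in that directory on its next poll, so no
server-side plugin is needed.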
From nicolas.vaye at province-sud.nc Thu Mar 22 20:54:12 2018
From: nicolas.vaye at province-sud.nc (Nicolas Vaye)
Date: Thu, 22 Mar 2018 20:54:12 +0000
Subject: [ovirt-users] create a cloned virtual machine based on a template with SDK API python
In-Reply-To: <1521702568.1710.131.camel at province-sud.nc>
References: <1521702568.1710.131.camel at province-sud.nc>
Message-ID: <1521752049.1710.151.camel at province-sud.nc>

ok, got it! Just add the clone parameter to the add function:

vm = vms_service.add(
    types.Vm(
        name=vm_name,
        cluster=types.Cluster(
            name=vm_cluster_name
        ),
        stateless=False,
        type=types.VmType('server'),
        comment='cloned from template '+template_name+' version '+str(template_version),
        template=types.Template(
            id=template_id
        ),
        disk_attachments=template_disk_attachments,
    ),
    clone=True,
)

-------- Original message --------
Date: Thu, 22 Mar 2018 07:09:33 +0000
Subject: [ovirt-users] create a cloned virtual machine based on a template with SDK API python
To: users at ovirt.org
Reply-to: Nicolas Vaye
From: Nicolas Vaye

Hi,

I want to create a cloned virtual machine based on a template with the
python SDK API, and I can't find the parameter that indicates the clone
action for the disks.

Here is my code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from pprint import pprint
import logging
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

template_name='test_debian_9.4'
template_version=1
cluster_name='nico-cluster'
data_domain_name='OVIRT-TEST2'

logging.basicConfig(level=logging.DEBUG, filename='example.log')

# This example will connect to the server and start a virtual machine
# with cloud-init, in order to automatically configure the network and
# the password of the `root` user.

# Create the connection to the server:
connection = sdk.Connection(
    url='https://ocenter.province-sud.prod/ovirt-engine/api',
    username='admin@internal',
    password='admin',
    ca_file='CA_ocenter.pem',
    debug=True,
    log=logging.getLogger(),
)

##################################
############ TEMPLATE ############
##################################

# Get the reference to the root of the tree of services:
system_service = connection.system_service()

# Get the reference to the service that manages the storage domains:
storage_domains_service = system_service.storage_domains_service()

# Find the storage domain we want to be used for virtual machine disks:
storage_domain = storage_domains_service.list(search='name='+data_domain_name)[0]

# Get the reference to the service that manages the templates:
templates_service = system_service.templates_service()

# When a template has multiple versions they all have the same name, so
# we need to explicitly find the one that has the version name or
# version number that we want to use. In this case we want to use
# version 1 of the template.
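# (Comment added for clarity, not part of the original script: the search
# below matches on the template name only, so the engine returns every
# version of that template and the loop filters on version_number
# client-side; as far as I know there is no server-side search keyword
# for a template's version number.)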
templates = templates_service.list(search='name='+template_name)
template_id = None
for template in templates:
    if template.version.version_number == template_version:
        template_id = template.id
        break

if template_id is None:
    print "ERROR: template "+template_name+" version "+str(template_version)+" was not found!!"

# Find the template disks that we want created on a specific storage
# domain for our virtual machine:
template_service = templates_service.template_service(template_id)
disk_attachments = connection.follow_link(template_service.get().disk_attachments)
print "disk_attachments=" + str(len(disk_attachments))
template_disk_attachments = []
for disk in disk_attachments:
    template_disk_attachments.append(types.DiskAttachment(
        disk=types.Disk(
            id=disk.id,
            format=types.DiskFormat.COW,
            storage_domains=[
                types.StorageDomain(
                    id=storage_domain.id,
                ),
            ],
        ),
    ))

# Get the reference to the service that manages the virtual machines:
vms_service = system_service.vms_service()

# Add a new virtual machine explicitly indicating the identifier of the
# template version that we want to use, and indicating that the template
# disks should be created on a specific storage domain for the virtual
# machine:
vm = vms_service.add(
    types.Vm(
        name='myvm',
        cluster=types.Cluster(
            name=cluster_name
        ),
        stateless=False,
        type=types.VmType('server'),
        comment='based on template '+template_name+' version '+str(template_version),
        template=types.Template(
            id=template_id
        ),
        disk_attachments=template_disk_attachments,
    )
)

# Get a reference to the service that manages the virtual machine that
# was created in the previous step:
vm_service = vms_service.vm_service(vm.id)

# Wait till the virtual machine is down, which indicates that all the
# disks have been created:
while True:
    time.sleep(1)
    vm = vm_service.get()
    if vm.status == types.VmStatus.DOWN:
        break

# Close the connection to the server:
connection.close()

If the data_domain_name is the same as the template's data domain, then
this script seems to create a VM, but not with cloned disks.
If the data_domain_name is NOT the same as the template's data domain,
then this script produces an error:

ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is
"[Cannot add VM. The selected Storage Domain does not contain the VM
Template.]". HTTP response code is 400.

So how do I create a cloned virtual machine based on a template?
I think I'm looking for the same parameter as in the web UI, "Storage
Allocation" => Thin/Clone.

Thanks.

Nicolas VAYE

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

From rightkicktech at gmail.com Thu Mar 22 21:00:50 2018
From: rightkicktech at gmail.com (Alex K)
Date: Thu, 22 Mar 2018 23:00:50 +0200
Subject: [ovirt-users] FCP storage domain wiped, recovery process
In-Reply-To: <5F1072EA-C581-46AD-8B21-5559F515D312 at gmail.com>
References: <5F1072EA-C581-46AD-8B21-5559F515D312 at gmail.com>
Message-ID:

Hi,

Did you try to put it into maintenance and then destroy it?

On Mar 22, 2018 19:18, "aduckers" wrote:

> I've got an environment with:
> 13 servers in the oVirt default cluster
> Hosted Engine 4.1.3.5-1.el7.centos
> Storage provided by FC SAN
>
> One of the FC LUNs, being used as a Data Domain, was wiped/reformatted
> through a confluence of events.
>
> How can I clean this up and re-add this LUN? I can't Detach it, as
> there are VMs/Templates attached to it.
> When I try to remove the Templates, it tells me it can't since the
> storage domain is inactive.
>
> What's the best way to clean up from this?
>
> Thanks
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alex.duckers at gmail.com Thu Mar 22 21:03:18 2018
From: alex.duckers at gmail.com (aduckers)
Date: Thu, 22 Mar 2018 14:03:18 -0700
Subject: [ovirt-users] FCP storage domain wiped, recovery process
In-Reply-To:
References: <5F1072EA-C581-46AD-8B21-5559F515D312 at gmail.com>
Message-ID:

I have not tried to destroy it yet - I didn't want to try that without
understanding how it would affect all the other objects that are related
(VMs, templates, etc.). If destroying it will also clean those up, or
allow me to delete them, then that sounds like it might be the way to go.

> On Mar 22, 2018, at 2:00 PM, Alex K wrote:
>
> Hi,
>
> Did you try to put it into maintenance and then destroy it?
>
> On Mar 22, 2018 19:18, "aduckers" wrote:
> I've got an environment with:
> 13 servers in the oVirt default cluster
> Hosted Engine 4.1.3.5-1.el7.centos
> Storage provided by FC SAN
>
> One of the FC LUNs, being used as a Data Domain, was wiped/reformatted
> through a confluence of events.
>
> How can I clean this up and re-add this LUN? I can't Detach it, as there
> are VMs/Templates attached to it. When I try to remove the Templates, it
> tells me it can't since the storage domain is inactive.
>
> What's the best way to clean up from this?
>
> Thanks
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Bryan.Sockel at mdaemon.com Thu Mar 22 22:25:00 2018
From: Bryan.Sockel at mdaemon.com (Bryan Sockel)
Date: Thu, 22 Mar 2018 17:25:00 -0500
Subject: [ovirt-users] Authentication
Message-ID:

Hey Guys,

Was working on switching my authentication over to TLS, and during the
process I lost the Internal Authentication option from my drop-down list.
I need to know how to add it back to the list of drop-down items.

Thanks

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nasrumminallah9 at hotmail.com Thu Mar 22 15:07:01 2018
From: nasrumminallah9 at hotmail.com (Nasrum Minallah Manzoor)
Date: Thu, 22 Mar 2018 15:07:01 +0000
Subject: [ovirt-users] Query for moving Ovirt Engine...
In-Reply-To:
References:
Message-ID:

Thank you Yedidyah,

Let me clarify a bit more: I have 2 oVirt nodes in a cluster. The oVirt
engine is on a separate machine on VMware Workstation. Now I want to move
this oVirt engine to one of my nodes (which are in a cluster).

I need help with the above so that I can complete my task.

Thanks in advance!

Regards,

________________________________________
From: Yedidyah Bar David
Sent: Thursday, March 22, 2018 7:40:27 PM
To: Nasrum Minallah Manzoor
Cc: users at ovirt.org
Subject: Re: [ovirt-users] Query for moving Ovirt Engine...

On Thu, Mar 22, 2018 at 8:54 AM, Nasrum Minallah Manzoor wrote:
> Hello Every one,
>
> I want to move the ovirt engine from one of my machines to a node
> installed on another machine! What steps should I take to accomplish
> this?
>
> Help would be appreciated!

Not sure what exactly you mean:

1. You want to copy the engine to another machine and remove it from the
first?
Use engine-backup 2. It's a hosted-engine and you want to migrate the engine vm to another host? You can use the gui and migrate it like any other vm 3. Something else? Best regards, -- Didi From chesterton.adam at gmail.com Fri Mar 23 01:10:51 2018 From: chesterton.adam at gmail.com (Adam Chesterton) Date: Fri, 23 Mar 2018 01:10:51 +0000 Subject: [ovirt-users] Can't Add Host To New Hosted Engine - "Server is already part of another cluster" In-Reply-To: References: Message-ID: Hi Sahina and Yedidyah, Thanks for the information and offers of help. I am pleased to report that I've resolved the issue I had, thanks to the prompting your requests gave me, and everything is functional. I shall attempt to explain what happened and how I fixed it. When I looked at the Gluster peer status, the Host01 was rejected by Host02 and Host03 (I did check this back at the start, but didn't check it again and things had changed). I followed the Gluster docs to fix the rejected peer ( http://gluster-documentations.readthedocs.io/en/latest/Administrator%20Guide/Resolving%20Peer%20Rejected/ ). This then gave me a different error message when trying to add Host02 or Host03, "no available server in the cluster to probe the new server", which only further confirmed that it was a Gluster issue, as was suggested. After some hair-pulling and wondering, I finally discovered that, in Compute > Hosts > Host01 under the General tab, it was complaining that Gluster was not active (even though it was running). I clicked the action item link to resolve that, and oVirt appeared to start actually managing the Gluster service. I could then add my other hosts, import the existing storage domains, and everything appears good now. Thanks again for the assistance and prompting me towards the right places to help me resolve it. Regards, Adam On Thu, 22 Mar 2018 at 20:48 Sahina Bose wrote: > On Wed, Mar 21, 2018 at 12:33 PM, Yedidyah Bar David > wrote: > >> On Wed, Mar 21, 2018 at 8:17 AM, Adam Chesterton >> wrote: >> > Hi Everyone, >> > >> > I'm running a 3-host hyperconverged Gluster setup for testing (on some >> old >> > desktops), and recently the hosted engine died on me, so I have >> attempted to >> > just clean up my existing hosts, leaving Gluster configured, and >> re-deploy a >> > fresh hosted engine setup on them. >> > >> > I have successfully got the first host setup and the hosted engine is >> > running on that host. However, when I try to add the other two hosts >> via the >> > web GUI (as I can no longer add them via CLI) I get this error: "Error >> while >> > executing action: Server XXXXX is already part of another cluster." >> >> This message might be a result of the host's participation in a gluster >> cluster, >> not hosted-engine cluster. Please share engine.log from the engine. >> >> Adding Sahina. >> > > Yes, it does look like that. > > Can you share details of > # gluster peer status > from your 3 nodes > > And also the address of the first host in the oVirt engine and below from > the HE engine: > > # su - postgres -c "psql -d engine -c \"select * from gluster_server; \"" > > >> > >> > I've tried to find where this would still be configured on the two other >> > hosts, but I cannot find anywhere. >> >> If it's only about hosted-engine, you can check /etc/ovirt-hosted-engine . >> >> You might try using ovirt-hosted-engine-cleanup, although it was not >> designed >> for such cases. >> >> > >> > Does anyone know how I can stop these two hosts from thinking they are >> still >> > in a cluster? 
>> > Or, does anyone have some information that might help, or am I
>> > going to just have to start a fresh CentOS install?
>>
>> If you do not need the data, a reinstall might be simplest.
>> If you do, not sure what your exact plan is.
>> You intend to rely on the replication? So that you reinstall one host,
>> add it, wait until syncing finishes, then reinstall the other? Might
>> work, no idea.
>>
>> Best regards,
>> --
>> Didi
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From omachace at redhat.com Fri Mar 23 06:15:39 2018
From: omachace at redhat.com (Ondra Machacek)
Date: Fri, 23 Mar 2018 07:15:39 +0100
Subject: [ovirt-users] Authentication
In-Reply-To:
References:
Message-ID: <050fee7e-0e07-f919-a4d7-11f168bb44da at redhat.com>

On 03/22/2018 11:25 PM, Bryan Sockel wrote:
> Hey Guys,
>
> Was working on switching my authentication over to TLS, and during the
> process I lost the Internal Authentication option from my drop-down
> list. I need to know how to add it back to the list of drop-down items.

Just re-run engine-setup.

>
> Thanks
>
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

From sabose at redhat.com Fri Mar 23 06:26:01 2018
From: sabose at redhat.com (Sahina Bose)
Date: Fri, 23 Mar 2018 11:56:01 +0530
Subject: [ovirt-users] gluster self-heal takes cluster offline
In-Reply-To:
References:
Message-ID:

On Fri, Mar 16, 2018 at 2:45 AM, Jim Kusznir wrote:

> Hi all:
>
> I'm trying to understand why/how (and most importantly, how to fix) a
> substantial issue I had last night. This happened one other time, but I
> didn't know/understand all the parts associated with it until last night.
>
> I have a 3 node hyperconverged (self-hosted engine, Gluster on each node)
> cluster. Gluster is Replica 2 + arbiter. The current network
> configuration is 2x GigE on load balance ("LAG Group" on switch), plus
> one GigE from each server on a separate VLAN, intended for Gluster (but
> not used). Server hardware is Dell R610's; each server has an SSD in it.
> Servers 1 and 2 have the full replica, server 3 is the arbiter.
>
> I put server 2 into maintenance so I could work on the hardware,
> including turning it off and such. In the course of the work, I found
> that I needed to reconfigure the SSD's partitioning somewhat, and it
> resulted in wiping the data partition (storing VM images). I figured,
> it's no big deal, gluster will rebuild that in short order. I did take
> care of the extended attr settings and the like, and when I booted it up,
> gluster came up as expected and began rebuilding the disk.
>

How big was the data on this partition? What was the shard size set on the
gluster volume?
Out of curiosity, how long did it take to heal and come back to
operational?


> The problem is that suddenly my entire cluster got very sluggish. The
> engine was marking nodes and VMs failed and unfailing them throughout the
> system, fairly randomly. It didn't matter what node the engine or VM was
> on. At one point, it power cycled server 1 for "non-responsive" (even
> though everything was running on it, and the gluster rebuild was working
> on it). As a result of this, about 6 VMs were killed and my entire
> gluster system went down hard (suspending all remaining VMs and the
> engine), as there were no remaining full copies of the data.
> After several minutes (these are Dell servers, after all...), server 1
> came back up, and gluster resumed the rebuild, and came online on the
> cluster. I had to manually (virsh command) unpause the engine, and then
> struggle through trying to get critical VMs back up. Everything was super
> slow, and load averages on the servers were often seen in excess of 80
> (these are 8 core / 16 thread boxes). Actual CPU usage (reported by top)
> was rarely above 40% (inclusive of all CPUs) for any one server.
> Glusterfs was often seen using 180%-350% of a CPU on servers 1 and 2.
>
> I ended up putting the cluster in global HA maintenance mode and
> disabling power fencing on the nodes until the process finished. It
> appeared on at least two occasions a functional node was marked bad, and
> had the fencing not been disabled, a node would have rebooted, just
> further exacerbating the problem.
>
> It's clear that the gluster rebuild overloaded things and caused the
> problem. I don't know why the load was so high (even IOWait was low), but
> load averages were definitely tied to the glusterfs CPU utilization %. At
> no point did I have any problems pinging any machine (host or VM) unless
> the engine decided it was dead and killed it.
>
> Why did my system bite it so hard with the rebuild? I babied it along
> until the rebuild was complete, after which it returned to normal
> operation.
>
> As of this event, all networking (host/engine management, gluster, and VM
> network) was on the same VLAN. I'd love to move things off, but so far
> any attempt to do so breaks my cluster. How can I move my management
> interfaces to a separate VLAN/IP space? I also want to move Gluster to
> its own private space, but it seems if I change anything in the peers
> file, the entire gluster cluster goes down. The dedicated gluster network
> is listed as a secondary hostname for all peers already.
>
> Will the above network reconfigurations be enough? I got the impression
> that the issue may not have been purely network based, but possibly
> server IO overload. Is this likely / right?
>
> I appreciate input. I don't think gluster's recovery is supposed to do as
> much damage as it did the last two or three times any healing was
> required.
>
> Thanks!
> --Jim
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Sven.Achtelik at eps.aero Fri Mar 23 07:35:08 2018
From: Sven.Achtelik at eps.aero (Sven Achtelik)
Date: Fri, 23 Mar 2018 07:35:08 +0000
Subject: [ovirt-users] Workflow after restoring engine from backup
In-Reply-To:
References: <831f30ed018b4739a2491cbd24f2429d at eps.aero>
Message-ID: <9abbbd52e96b4cd1949a37e863130a13 at eps.aero>

It looks like I can't get a chance to shut down the HA VMs. I checked the
restore log and it did mention that it changed the HA-VM entries. Just to
make sure, I looked at the DB, and for the VMs in question it looks like
this:
engine=# select vm_guid,status,vm_host,exit_status,exit_reason from vm_dynamic
         Where vm_guid IN (SELECT vm_guid FROM vm_static WHERE auto_startup='t' AND lease_sd_id is NULL);

                vm_guid                | status | vm_host  | exit_status | exit_reason
---------------------------------------+--------+----------+-------------+-------------
 8733d4a6-0844-xxxx-804f-6b919e93e076  |      0 | DXXXX    |           2 |          -1
 4eeaa622-17f9-xxxx-b99a-cddb3ad942de  |      0 | xxxxAPP  |           2 |          -1
 fbbdc0a0-23a4-4d32-xxxx-a35c59eb790d  |      0 | xxxxDB0  |           2 |          -1
 45a4e7ce-19a9-4db9-xxxxx-66bd1b9d83af |      0 | xxxxxWOR |           2 |          -1
(4 rows)

Should that be enough to have a safe start of the engine, without any HA
action kicking in?

-----Original Message-----
From: Yedidyah Bar David [mailto:didi at redhat.com]
Sent: Monday, March 19, 2018 10:18
To: Sven Achtelik
Cc: users at ovirt.org
Subject: Re: [ovirt-users] Workflow after restoring engine from backup

On Mon, Mar 19, 2018 at 11:03 AM, Sven Achtelik wrote:
> Hi Didi,
>
> my backups were taken with the engine-backup utility. I have 3 data
> centers, two of them with just one host and the third one with 3 hosts
> running the engine. The backup is three days old, was taken on engine
> version 4.1 (4.1.7), and the restored engine is running on 4.1.9.

Since the bug I mentioned was fixed in 4.1.3, you should be covered.

> I have three HA VMs that would
> be affected. All others are just normal VMs. Sounds like it would be
> the safest to shut down the HA VMs to make sure that nothing happens?

If you can have downtime, I agree it sounds safer to shutdown the VMs.

> Or can I
> disable the HA action in the DB for now?

No need to. If you restored with 4.1.9 engine-backup, it should have done
this for you. If you still have the restore log, you can verify this by
checking it. It should contain 'Resetting HA VM status', and then the
result of the sql that it ran.

Best regards,

>
> Thank you,
>
> Sven
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>
> -------- Original message --------
> From: Yedidyah Bar David
> Date: 19.03.18 07:33 (GMT+01:00)
> To: Sven Achtelik
> Cc: users at ovirt.org
> Subject: Re: [ovirt-users] Workflow after restoring engine from backup
>
> On Sun, Mar 18, 2018 at 11:45 PM, Sven Achtelik wrote:
>> Hi All,
>>
>> I had an issue with the storage that hosted my engine VM. The disk got
>> corrupted and I needed to restore the engine from a backup.
>
> How did you back up, and how did you restore?
>
> Which version was used for each?
>
>> That worked as
>> expected, I just didn't start the engine yet.
>
> OK.
>
>> I know that after the backup
>> was taken some machines were migrated around before the engine disks
>> failed.
>
> Are these machines HA?
>
>> My question is what will happen once I start the engine service which
>> has the restored backup on it? Will it query the hosts for the
>> running VMs
>
> It will, but HA machines are handled differently.
>
> See also:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1441322
> https://bugzilla.redhat.com/show_bug.cgi?id=1446055
>
>> or will it assume that the VMs are still on the hosts as they resided
>> at the point of backup?
>
> It does, initially, but then updates status according to what it gets
> from hosts.
>
> But polling the hosts takes time, especially if you have many, and HA
> policy might require faster handling.
> So if it polls first a host that had a machine on it during backup, and
> sees that it's gone, and didn't yet poll the new host, HA handling starts
> immediately, which eventually might lead to starting the VM on another
> host.
>
> To prevent that, the fixes to the above bugs make the restore process
> mark HA VMs that do not have leases on the storage as "dead".
>
>> Would I need to change the DB manually to let the engine know where VMs
>> are up at this point?
>
> You might need to, if you have HA VMs and a too-old version of restore.
>
>> What will happen to HA VMs?
>> I feel that it might try to start them a second time. My biggest
>> issue is that I can't get a service window to shut down all VMs and
>> then let them be restarted by the engine.
>>
>> Is there a known workflow for that?
>
> I am not aware of a tested procedure for handling the above if you have
> a too-old version, but you can check the patches linked from the above
> bugs and manually run the SQL command(s) they include. They are
> essentially comment 4 of the first bug.
>
> Good luck and best regards,
> --
> Didi
--
Didi

From jvdwege at xs4all.nl Fri Mar 23 07:58:43 2018
From: jvdwege at xs4all.nl (Joop)
Date: Fri, 23 Mar 2018 08:58:43 +0100
Subject: [ovirt-users] Juniper vSRX Cluster on oVirt/RHEV
In-Reply-To:
References:
Message-ID: <5AB4B3B3.9050308 at xs4all.nl>

On 22-3-2018 10:17, Yaniv Kaul wrote:
>
> On Wed, Mar 21, 2018 at 10:37 PM, Charles Kozler wrote:
>
>     Hi All -
>
>     Recently did this and thought it would be worth documenting. I
>     couldn't find any solid information on vSRX with KVM outside of
>     flat KVM. This outlines some of the things I hit along the way and
>     how to fix them. This is my one small way of giving back to such an
>     incredible open source tool
>
>     https://ckozler.net/vsrx-cluster-on-ovirtrhev/
>
>
> Thanks for sharing!
> Why didn't you just upload the qcow2 disk via the UI/API though?
> There's quite a bit of manual work that I hope is not needed?
>
@Work we're using Juniper too and out of curiosity I downloaded the qcow2
image and used the UI to upload it and add it to a VM. It just works :-)

oVirt++

Joop

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nicolas at devels.es Fri Mar 23 08:03:15 2018
From: nicolas at devels.es (nicolas at devels.es)
Date: Fri, 23 Mar 2018 08:03:15 +0000
Subject: [ovirt-users] Bad volume specification
In-Reply-To: <18e2bb8f7b64871a722d4355ef51a56c at devels.es>
References: <18e2bb8f7b64871a722d4355ef51a56c at devels.es>
Message-ID: <476be5841a4d4e7daa0db59388981cfd at devels.es>

Guys, any hints on this?

On 2018-03-21 12:37, nicolas at devels.es wrote:
> Hi,
>
> We're running oVirt 4.1.9, today I put a host on maintenance, I saw
> one of the VMs was taking too long to migrate so I shut it down. It
> seems that just in that moment the machine ended migrating, but the
> shutdown did happen as well.
>
> Now, when I try to start the VM I'm getting the following error:
>
> 2018-03-21 12:31:02,309Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (DefaultQuartzScheduler3) [7985a4e2] EVENT_ID: VM_DOWN_ERROR(119),
> Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event
> ID: -1, Message: VM openmaint.iaas.domain.com is down with error.
Exit > message: Bad volume specification {'index': '0', u'domainID': > u'04cb5bd0-d94e-4d14-a71a-e63a669e11b9', 'reqsize': '0', u'format': > u'cow', u'optional': u'false', u'address': {u'function': u'0x0', > u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'slot': > u'0x06'}, u'volumeID': u'68ee7a04-ceff-49f0-bf91-256870543921', > 'apparentsize': '3221225472', u'imageID': > u'9d087e6b-0832-46db-acb0-16d5131afa0c', u'discard': False, > u'specParams': {}, u'readonly': u'false', u'iface': u'virtio', > u'deviceId': u'9d087e6b-0832-46db-acb0-16d5131afa0c', 'truesize': > '3221225472', u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', > u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off', > u'type': u'disk'}. > > It looks quite bad... I'm attaching the engine.log since the moment I > start the VM. > > Is there anything I can do to recover the VM? oVirt says the disk is > OK in the 'Disks' tab. > > Thanks. > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From punaatua.pk at gmail.com Fri Mar 23 08:27:00 2018 From: punaatua.pk at gmail.com (Punaatua PAINT-KOUI) Date: Fri, 23 Mar 2018 08:27:00 +0000 Subject: [ovirt-users] VDSM SSL validity In-Reply-To: References: Message-ID: Thanks, I'll check it out. Le jeu. 22 mars 2018 00:49, Yedidyah Bar David a ?crit : > On Thu, Mar 22, 2018 at 11:58 AM, Sahina Bose wrote: > > Didi, Sandro - Do you know if this option VdsCertificateValidityInYears > is > > present in 4.2? > > I do not think it ever was exposed to engine-config - I think it's a > bug in that page. > > You should be able to update it with psql, if needed - something like this: > > select > fn_db_update_config_value('VdsCertificateValidityInYears','2','general'); > > I didn't try this myself. > > To get an sql prompt, you can use engine-psql, which should be > available in 4.2.2, > or simply copy the script from the patch page: > > https://gerrit.ovirt.org/#/q/I4d9737ea72df0d7e654776a1085901284a523b7f > > Also, some people claim that the use of certificates for communication > between > the engine and the hosts is an internal implementation detail, which > should not > be relevant to PCI DSS requirements. See e.g.: > > https://ovirt.org/develop/release-management/features/infra/pkireduce/ > > > > > On Mon, Mar 19, 2018 at 4:43 AM, Punaatua PAINT-KOUI < > punaatua.pk at gmail.com> > > wrote: > >> > >> Up > >> > >> 2018-02-17 2:57 GMT-10:00 Punaatua PAINT-KOUI : > >>> > >>> Any idea someone ? > >>> > >>> Le 14 f?vr. 2018 23:19, "Punaatua PAINT-KOUI" > a > >>> ?crit : > >>>> > >>>> Hi, > >>>> > >>>> I setup an hyperconverged solution with 3 nodes, hosted engine on > >>>> glusterfs. > >>>> We run this setup in a PCI-DSS environment. According to PCI-DSS > >>>> requirements, we are required to reduce the validity of any > certificate > >>>> under 39 months. > >>>> > >>>> I saw in this link > >>>> https://www.ovirt.org/develop/release-management/features/infra/pki/ > that i > >>>> can use the option VdsCertificateValidityInYears at engine-config. > >>>> > >>>> I'm running ovirt engine 4.2.1 and i checked when i was on 4.2 how to > >>>> edit the option with engine-config --all and engine-config --list but > the > >>>> option is not listed > >>>> > >>>> Am i missing something ? 
> >>>> > >>>> I thing i can regenerate a VDSM certificate with openssl and the CA > conf > >>>> in /etc/pki/ovirt-engine on the hosted-engine but i would rather > modifiy the > >>>> option for future host that I will add. > >>>> > >>>> -- > >>>> ------------------------------------- > >>>> PAINT-KOUI Punaatua > >> > >> > >> > >> > >> -- > >> ------------------------------------- > >> PAINT-KOUI Punaatua > >> Licence Pro R?seaux et T?lecom IAR > >> Universit? du Sud Toulon Var > >> La Garde France > >> > >> _______________________________________________ > >> Users mailing list > >> Users at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/users > >> > > > > > > -- > Didi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Fri Mar 23 09:30:44 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Fri, 23 Mar 2018 10:30:44 +0100 Subject: [ovirt-users] how to run command in windows guest via agent? In-Reply-To: References: <20180315132600.C37FA2D002F6@eu.vavilov.org> Message-ID: On Thu, Mar 15, 2018 at 3:26 PM, Gianluca Cecchi wrote: > On Thu, Mar 15, 2018 at 2:26 PM, Sergey Vavilov > wrote: > >> Hello! >> >> I'm wondering >> how can I run a cmd.exe command (or powershell) inside windows guest >> virtual machine by >> ovirt-agent from outside (from ovirt-engine or from host)? >> What's ovirt's analogue for >> virsh qemu-agent-command >> ? >> Actually, I want to setup ip on virtual NIC. >> So I can't run a command via windows' protocols. >> ovirt-agent service works in windows virtual machine and successfully >> reports its state to ovirt engine. >> But it looks like ovirt-agent isn't suitable to run commands in virtual >> machines? >> Or am I wrong? >> Should I configure a coexistence of ovirt-agent and qemu-agent inside vm >> somehow? >> How did you solve such task before? >> >> Thank you, all! >> >> >> -- >> Sergey Vavilov >> >> > Hi Sergey, > I recently had a need for a Windows 2008 R2 x86_64 VM that I moved from > vSphere to RHV (but the considerations below are the same for oVirt). > In my case I had to run a backup of an Oracle database that is in > NOARCHIVELOG mode, due to performance reasons: it is an RDBMS used for BI > that is refreshed every day and doesn't need a point in time recovery but > only a consistent state before the "new day" processing is run at 01:00 in > the night. > In vSphere the backup of the VM was implemented with Veeam, that uses > snaphsot technology and interacts with VSS layer of M$ operating systems. > Oracle fully supports on Windows the backup of database using VSS only if > it is in ARCHIVELOG mode. > https://docs.oracle.com/cd/E11882_01/win.112/e10845/vss.htm#NTQRF475 > > Veeam automatically executed shutdown / snapshot / start of the dabatase > because I think it implements what in the link above is called as > PRE_SQLCMD in OnPrepareBackup callback (through the VeeamGuestHelper.exe > executable). > > Coming back to oVirt, the interaction with Windows VSS layer is indeed > done by "QEMU guest agent"too; you have to install it together with the > oVirt Guest Agent when you install oVirt Guest Tools > > So in my case I would need for Windows something similar to the > "fsfreeze/thaw hooks" I have in Linux QEMU guest agent. > But searching through the code it seems this functionality is only present > in Linux component... 
> In fact some days ago I wrote to the qemu-devel mailing list to ask for
> a possible implementation:
> http://lists.nongnu.org/archive/html/qemu-devel/2018-03/msg02823.html
>
> No answer so far. Feel free (you and others) to contribute to the thread
> if you think it can help...
> It could also be useful to have at least the writer control commands
> implemented in the VSS interaction of the QEMU Guest Agent on Windows,
> so that one could do something similar to what Veeam on vSphere already
> does...
>
> Based on what I have written above, I doubt you could interact at all
> through the QEMU guest agent on Windows to run any command...
> And also for Linux guests, I think you are restricted to the default
> actions (eg shutdown) or forced to use as a workaround the approach of
> running a snapshot (that you then delete) and "attaching" the desired
> command to the freeze (or thaw) hook...
>
> HIH,
> Gianluca

Hello, in an off-list message it appears that the QEMU guest agent offers
a "guest-exec" command, but that it is not enabled in the build at
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.141-1/

Can anyone confirm this and its reasons (security? isolation?...)?
Could the policy change and, for example, allow a choice to have it or not
at runtime?
Any hint on how to recompile enabling the feature?

Thanks,
Gianluca
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From adellam-lists at sevenseas.org Fri Mar 23 10:00:15 2018
From: adellam-lists at sevenseas.org (Andrea Dell'Amico)
Date: Fri, 23 Mar 2018 11:00:15 +0100
Subject: [ovirt-users] hyperconverged hosted engine configuration problem with the storage network after the installation
Message-ID: <59DC9CDC-BB7B-40FB-A7C5-6A10D185E0EB at sevenseas.org>

Hello all,
I'm configuring a hyperconverged setup of oVirt 4.2.1. The hosted engine
setup went well (I had to manually restart the VM to complete the last
step).
I then tried to set up a storage network for gluster, but I cannot find,
in my dashboard, any of the menus that should be present and permit
associating the new virtual network with a physical one and then with the
hosts.
I'm talking about the second and third menu you can see here
https://ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
under the "Storage network" paragraph.

What I have can be seen here:
https://admj.sevenseas.org/owncloud/index.php/s/qgHaqRdJeNMGmWA/preview
and here: https://admj.sevenseas.org/owncloud/index.php/s/qgHaqRdJeNMGmWA

Any clues?

Thanks in advance,
Andrea

--
Andrea Dell'Amico
http://adellam.sevenseas.org/

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 699 bytes
Desc: Message signed with OpenPGP
URL:

From pbrilla at redhat.com Fri Mar 23 10:45:47 2018
From: pbrilla at redhat.com (Pavol Brilla)
Date: Fri, 23 Mar 2018 11:45:47 +0100
Subject: [ovirt-users] virtual machine actual size is not right
In-Reply-To:
References:
Message-ID:

Hi

For such a big difference between the size outside of the VM and inside,
it looks more like the disk is not fully partitioned. df provides
information only about mounted filesystems.
Could you try to run the following inside the VM? It should match all
local disks, and you should see the size of each disk:

# parted -l /dev/[sv]d[a-z] | grep ^Disk

(Output from one of my VMs):

# parted -l /dev/[sv]d[a-z] | grep ^Disk
Disk /dev/sda: 26.8GB
Disk Flags:
Disk /dev/mapper/rootvg-lv_tmp: 2147MB
Disk Flags:
Disk /dev/mapper/rootvg-lv_home: 210MB
Disk Flags:
Disk /dev/mapper/rootvg-lv_swap: 2147MB
Disk Flags:
Disk /dev/mapper/rootvg-lv_root: 21.8GB

So I see that the VM has a 26.8GB disk.

On Thu, Mar 22, 2018 at 5:56 PM, Terry hey wrote:

> Hello~
> I typed this command on the running VM, not the hypervisor (oVirt node).
>

--

PAVOL BRILLA

RHV QUALITY ENGINEER, CLOUD

Red Hat Czech Republic, Brno

TRIED. TESTED. TRUSTED.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From adellam-lists at sevenseas.org Fri Mar 23 11:18:29 2018
From: adellam-lists at sevenseas.org (Andrea Dell'Amico)
Date: Fri, 23 Mar 2018 12:18:29 +0100
Subject: [ovirt-users] hyperconverged hosted engine configuration problem with the storage network after the installation
In-Reply-To: <59DC9CDC-BB7B-40FB-A7C5-6A10D185E0EB at sevenseas.org>
References: <59DC9CDC-BB7B-40FB-A7C5-6A10D185E0EB at sevenseas.org>
Message-ID:

Never mind, I've found that we have to "click" on the network name and not
select and edit it.

Andrea

> On 23 Mar 2018, at 11:00, Andrea Dell'Amico wrote:
>
> Hello all,
> I'm configuring a hyperconverged setup of oVirt 4.2.1. The hosted engine
> setup went well (I had to manually restart the VM to complete the last
> step).
> I then tried to set up a storage network for gluster, but I cannot find,
> in my dashboard, any of the menus that should be present and permit
> associating the new virtual network with a physical one and then with
> the hosts.
> I'm talking about the second and third menu you can see here
> https://ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/
> under the "Storage network" paragraph.
>
> What I have can be seen here:
> https://admj.sevenseas.org/owncloud/index.php/s/qgHaqRdJeNMGmWA/preview
> and here:
> https://admj.sevenseas.org/owncloud/index.php/s/qgHaqRdJeNMGmWA
>
> Any clues?
>
> Thanks in advance,
> Andrea
>
> --
> Andrea Dell'Amico
> http://adellam.sevenseas.org/

--
Andrea Dell'Amico
http://adellam.sevenseas.org/

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 699 bytes
Desc: Message signed with OpenPGP
URL:

From spfma.tech at e.mail.fr Fri Mar 23 11:34:34 2018
From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr)
Date: Fri, 23 Mar 2018 12:34:34 +0100
Subject: [ovirt-users] Engine and nodes ssh setup
In-Reply-To:
References:
Message-ID: <20180323113434.9F230E4474 at smtp01.mail.de>

Hi,

I have tweaked "/usr/lib/python2.7/site-packages/vdsm/sslutils.py" in
order to get more informative errors. So here is what I get:

2018-03-23 12:26:17,367+0100 ERROR (Reactor thread) [ProtocolDetector.SSLHandshakeDispatcher] ssl handshake: SSLError, address: ::ffff:10.100.1.100 error : [EOF occurred in violation of protocol (_ssl.c:579)] dispatcher: socket: ('::ffff:10.100.1.51', 54321, 0, 0) family: 10 protocol: 6 (sslutils:259)

Can someone explain what it means?

Regards

On 22-Mar-2018 10:55:03 +0100, msivak at redhat.com wrote:

Hi,

> There is a step I am not sure about: is the root user on the engine
> supposed to be able to log into the nodes without a password or not?
In my case it doesn't No, the webadmin application uses Java implementation of the ssh protocol and you give it the needed password when you add a host for the first time. It prepares ssh keys for itself and stores them in database (iirc). The root user on the machine running the webadmin app does not have any access to hosts afaik. Best regards Martin Sivak On Thu, Mar 22, 2018 at 10:39 AM, wrote: > Hi, > > I am still trying to make my restored hosted engine communicate with the > nodes without success. > > There is a step I am not sure : is root user on the engine supposed to be > able to log into nodes without password or not ? In my case it doesn't > > By the way, where are located the certificates actually used for these > communications ? > > Regards > > ________________________________ > FreeMail powered by mail.fr > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbonazzo at redhat.com Fri Mar 23 12:16:23 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Fri, 23 Mar 2018 13:16:23 +0100 Subject: [ovirt-users] Bad volume specification In-Reply-To: <18e2bb8f7b64871a722d4355ef51a56c@devels.es> References: <18e2bb8f7b64871a722d4355ef51a56c@devels.es> Message-ID: 2018-03-21 13:37 GMT+01:00 : > Hi, > > We're running oVirt 4.1.9, today I put a host on maintenance, I saw one of > the VMs was taking too long to migrate so I shut it down. It seems that > just in that moment the machine ended migrating, but the shutdown did > happen as well. > I would suggest to update to 4.2 as soon as possible since 4.1 is not supported anymore now that 4.2 is available > > Now, when I try to start the VM I'm getting the following error: > > 2018-03-21 12:31:02,309Z ERROR [org.ovirt.engine.core.dal.dbb > roker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler3) > [7985a4e2] EVENT_ID: VM_DOWN_ERROR(119), Correlation ID: null, Call Stack: > null, Custom ID: null, Custom Event ID: -1, Message: VM > openmaint.iaas.domain.com is down with error. Exit message: Bad volume > specification {'index': '0', u'domainID': u'04cb5bd0-d94e-4d14-a71a-e63a669e11b9', > 'reqsize': '0', u'format': u'cow', u'optional': u'false', u'address': > {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x0000', u'type': > u'pci', u'slot': u'0x06'}, u'volumeID': u'68ee7a04-ceff-49f0-bf91-256870543921', > 'apparentsize': '3221225472', u'imageID': u'9d087e6b-0832-46db-acb0-16d5131afa0c', > u'discard': False, u'specParams': {}, u'readonly': u'false', u'iface': > u'virtio', u'deviceId': u'9d087e6b-0832-46db-acb0-16d5131afa0c', > 'truesize': '3221225472', u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', > u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off', > u'type': u'disk'}. > > It looks quite bad... I'm attaching the engine.log since the moment I > start the VM. > > Is there anything I can do to recover the VM? oVirt says the disk is OK in > the 'Disks' tab. > Adding some people who may be able to help. Once solved, please consider upgrade. > > Thanks. 
> _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA sbonazzo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas at devels.es Fri Mar 23 12:20:59 2018 From: nicolas at devels.es (nicolas at devels.es) Date: Fri, 23 Mar 2018 12:20:59 +0000 Subject: [ovirt-users] Bad volume specification In-Reply-To: References: <18e2bb8f7b64871a722d4355ef51a56c@devels.es> Message-ID: El 2018-03-23 12:16, Sandro Bonazzola escribi?: > 2018-03-21 13:37 GMT+01:00 : > >> Hi, >> >> We're running oVirt 4.1.9, today I put a host on maintenance, I saw >> one of the VMs was taking too long to migrate so I shut it down. It >> seems that just in that moment the machine ended migrating, but the >> shutdown did happen as well. > > I would suggest to update to 4.2 as soon as possible since 4.1 is not > supported anymore now that 4.2 is available > We have 2 oVirt infrastructures. One is migrated to 4.2, we can't migrate the other one since most of the user portal features in 4.1 are not present in 4.2 and our users do a massive usage of this portal to create/tune VMs. I know several issues were created on Github to implement missing features, but we cannot upgrade until they are implemented. Thanks. > ? > >> Now, when I try to start the VM I'm getting the following error: >> >> 2018-03-21 12:31:02,309Z ERROR >> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> (DefaultQuartzScheduler3) [7985a4e2] EVENT_ID: VM_DOWN_ERROR(119), >> Correlation ID: null, Call Stack: null, Custom ID: null, Custom >> Event ID: -1, Message: VM openmaint.iaas.domain.com [1] is down with >> error. Exit message: Bad volume specification {'index': '0', >> u'domainID': u'04cb5bd0-d94e-4d14-a71a-e63a669e11b9', 'reqsize': >> '0', u'format': u'cow', u'optional': u'false', u'address': >> {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x0000', >> u'type': u'pci', u'slot': u'0x06'}, u'volumeID': >> u'68ee7a04-ceff-49f0-bf91-256870543921', 'apparentsize': >> '3221225472', u'imageID': u'9d087e6b-0832-46db-acb0-16d5131afa0c', >> u'discard': False, u'specParams': {}, u'readonly': u'false', >> u'iface': u'virtio', u'deviceId': >> u'9d087e6b-0832-46db-acb0-16d5131afa0c', 'truesize': '3221225472', >> u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'device': >> u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type': >> u'disk'}. >> >> It looks quite bad... I'm attaching the engine.log since the moment >> I start the VM. >> >> Is there anything I can do to recover the VM? oVirt says the disk >> is OK in the 'Disks' tab. > > Adding some people who may be able to help. Once solved, please > consider upgrade. > > ? > >> Thanks. >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users [2] > > -- > > SANDRO?BONAZZOLA > > ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D > > Red Hat?EMEA [3] > > sbonazzo at redhat.com? ? 
> > [4] > > [5] > > > > Links: > ------ > [1] http://openmaint.iaas.domain.com > [2] http://lists.ovirt.org/mailman/listinfo/users > [3] https://www.redhat.com/ > [4] https://red.ht/sig > [5] https://redhat.com/summit From sbonazzo at redhat.com Fri Mar 23 12:23:25 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Fri, 23 Mar 2018 13:23:25 +0100 Subject: [ovirt-users] Bad volume specification In-Reply-To: References: <18e2bb8f7b64871a722d4355ef51a56c@devels.es> Message-ID: 2018-03-23 13:20 GMT+01:00 : > El 2018-03-23 12:16, Sandro Bonazzola escribi?: > >> 2018-03-21 13:37 GMT+01:00 : >> >> Hi, >>> >>> We're running oVirt 4.1.9, today I put a host on maintenance, I saw >>> one of the VMs was taking too long to migrate so I shut it down. It >>> seems that just in that moment the machine ended migrating, but the >>> shutdown did happen as well. >>> >> >> I would suggest to update to 4.2 as soon as possible since 4.1 is not >> supported anymore now that 4.2 is available >> >> > We have 2 oVirt infrastructures. One is migrated to 4.2, we can't migrate > the other one since most of the user portal features in 4.1 are not present > in 4.2 and our users do a massive usage of this portal to create/tune VMs. > I know several issues were created on Github to implement missing features, > but we cannot upgrade until they are implemented. > Understood, thanks for the feedback! > > Thanks. > > >> >> Now, when I try to start the VM I'm getting the following error: >>> >>> 2018-03-21 12:31:02,309Z ERROR >>> >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> >>> (DefaultQuartzScheduler3) [7985a4e2] EVENT_ID: VM_DOWN_ERROR(119), >>> Correlation ID: null, Call Stack: null, Custom ID: null, Custom >>> Event ID: -1, Message: VM openmaint.iaas.domain.com [1] is down with >>> error. Exit message: Bad volume specification {'index': '0', >>> u'domainID': u'04cb5bd0-d94e-4d14-a71a-e63a669e11b9', 'reqsize': >>> '0', u'format': u'cow', u'optional': u'false', u'address': >>> {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x0000', >>> u'type': u'pci', u'slot': u'0x06'}, u'volumeID': >>> u'68ee7a04-ceff-49f0-bf91-256870543921', 'apparentsize': >>> '3221225472', u'imageID': u'9d087e6b-0832-46db-acb0-16d5131afa0c', >>> u'discard': False, u'specParams': {}, u'readonly': u'false', >>> u'iface': u'virtio', u'deviceId': >>> u'9d087e6b-0832-46db-acb0-16d5131afa0c', 'truesize': '3221225472', >>> u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'device': >>> u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type': >>> u'disk'}. >>> >>> It looks quite bad... I'm attaching the engine.log since the moment >>> I start the VM. >>> >>> Is there anything I can do to recover the VM? oVirt says the disk >>> is OK in the 'Disks' tab. >>> >> >> Adding some people who may be able to help. Once solved, please >> consider upgrade. >> >> >> >> Thanks. 
>>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users [2] >>> >> >> -- >> >> SANDRO BONAZZOLA >> >> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D >> >> Red Hat EMEA [3] >> >> sbonazzo at redhat.com >> >> [4] >> >> [5] >> >> >> >> Links: >> ------ >> [1] http://openmaint.iaas.domain.com >> [2] http://lists.ovirt.org/mailman/listinfo/users >> [3] https://www.redhat.com/ >> [4] https://red.ht/sig >> [5] https://redhat.com/summit >> > -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA sbonazzo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From giulio at di.unimi.it Fri Mar 23 13:30:37 2018 From: giulio at di.unimi.it (Giulio Casella) Date: Fri, 23 Mar 2018 14:30:37 +0100 Subject: [ovirt-users] ovirt-guest-agent behaviour Message-ID: <87427ced-3e03-210b-118c-061102bfadfe@di.unimi.it> Hi, I just installed a Fedora 27 guest, with ovirt-guest-agent. This VM runs on oVirt (to say the truth is RedHat Virtualization version 4.1.9.2-0.1.el7). I noticed a strange behaviour: in the guest tab "Logged-in user" is always reported as None, but agent seems to be running correctly (other guest data as kernel version, OS, Console client IP, etc. is reported correctly). Nothing relevant is showing in agent log file. Same behaviour for login via kdm and console login. Version of installed agent is ovirt-guest-agent-common-1.0.14-1.fc27.noarch. Other VMs on the same cluster are reporting correct logged-in user as expected (1500+ VM, Windows 7, Fedora 27, Fedora 24, CentOS, ...), so I think is a guest issue. Any ideas? TIA, g From sbonazzo at redhat.com Fri Mar 23 13:44:08 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Fri, 23 Mar 2018 14:44:08 +0100 Subject: [ovirt-users] ovirt-guest-agent behaviour In-Reply-To: <87427ced-3e03-210b-118c-061102bfadfe@di.unimi.it> References: <87427ced-3e03-210b-118c-061102bfadfe@di.unimi.it> Message-ID: 2018-03-23 14:30 GMT+01:00 Giulio Casella : > Hi, > I just installed a Fedora 27 guest, with ovirt-guest-agent. > This VM runs on oVirt (to say the truth is RedHat Virtualization version > 4.1.9.2-0.1.el7). > I noticed a strange behaviour: in the guest tab "Logged-in user" is always > reported as None, but agent seems to be running correctly (other guest data > as kernel version, OS, Console client IP, etc. is reported correctly). > Nothing relevant is showing in agent log file. > > Same behaviour for login via kdm and console login. > > Version of installed agent is ovirt-guest-agent-common-1.0.1 > 4-1.fc27.noarch. > > Other VMs on the same cluster are reporting correct logged-in user as > expected (1500+ VM, Windows 7, Fedora 27, Fedora 24, CentOS, ...), so I > think is a guest issue. > > Any ideas? > Tomas, maybe you can help here > > TIA, > g > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA sbonazzo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From giulio at di.unimi.it Fri Mar 23 14:27:57 2018 From: giulio at di.unimi.it (Giulio Casella) Date: Fri, 23 Mar 2018 15:27:57 +0100 Subject: [ovirt-users] ovirt-guest-agent behaviour In-Reply-To: <87427ced-3e03-210b-118c-061102bfadfe@di.unimi.it> References: <87427ced-3e03-210b-118c-061102bfadfe@di.unimi.it> Message-ID: Fixed! It was a problem independent from ovirt/ovirt agent. It was a problem in writing to /var/run/utmp, causing /usr/bin/users (used by ovirt-guest-agent) to show an empty list of connected users. Cheers, g On 23/03/2018 14:30, Giulio Casella wrote: > Hi, > I just installed a Fedora 27 guest, with ovirt-guest-agent. > This VM runs on oVirt (truth be told, it is Red Hat Virtualization version > 4.1.9.2-0.1.el7). > I noticed a strange behaviour: in the guest tab "Logged-in user" is > always reported as None, but the agent seems to be running correctly (other > guest data such as kernel version, OS, Console client IP, etc. is reported > correctly). Nothing relevant shows up in the agent log file. > > Same behaviour for login via kdm and console login. > > Version of installed agent is > ovirt-guest-agent-common-1.0.14-1.fc27.noarch. > > Other VMs on the same cluster are reporting the correct logged-in user as > expected (1500+ VMs, Windows 7, Fedora 27, Fedora 24, CentOS, ...), so I > think it is a guest issue. > > Any ideas? > > TIA, > g > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From budic at onholyground.com Fri Mar 23 14:29:56 2018 From: budic at onholyground.com (Darrell Budic) Date: Fri, 23 Mar 2018 09:29:56 -0500 Subject: [ovirt-users] gluster self-heal takes cluster offline In-Reply-To: References: Message-ID: What version of ovirt and gluster? Sounds like something I just saw with gluster 3.12.x, are you using libgfapi or just fuse mounts? > From: Sahina Bose > Subject: Re: [ovirt-users] gluster self-heal takes cluster offline > Date: March 23, 2018 at 1:26:01 AM CDT > To: Jim Kusznir > Cc: Ravishankar Narayanankutty; users > > > > On Fri, Mar 16, 2018 at 2:45 AM, Jim Kusznir > wrote: > Hi all: > > I'm trying to understand why/how (and most importantly, how to fix) a substantial issue I had last night. This happened one other time, but I didn't know/understand all the parts associated with it until last night. > > I have a 3 node hyperconverged (self-hosted engine, Gluster on each node) cluster. Gluster is Replica 2 + arbiter. Current network configuration is 2x GigE on load balance ("LAG Group" on switch), plus one GigE from each server on a separate VLAN, intended for Gluster (but not used). Server hardware is Dell R610's; each server has an SSD in it. Server 1 and 2 have the full replica, server 3 is the arbiter. > > I put server 2 into maintenance so I could work on the hardware, including turning it off and such. In the course of the work, I found that I needed to reconfigure the SSD's partitioning somewhat, and it resulted in wiping the data partition (storing VM images). I figured it was no big deal; gluster would rebuild that in short order. I did take care of the extended attr settings and the like, and when I booted it up, gluster came up as expected and began rebuilding the disk. > > How big was the data on this partition? What was the shard size set on the gluster volume? > Out of curiosity, how long did it take to heal and come back to operational? > > > The problem is that suddenly my entire cluster got very sluggish.
The engine was marking nodes and VMs failed and un-failing them throughout the system, fairly randomly. It didn't matter what node the engine or VM was on. At one point, it power cycled server 1 for "non-responsive" (even though everything was running on it, and the gluster rebuild was working on it). As a result of this, about 6 VMs were killed and my entire gluster system went down hard (suspending all remaining VMs and the engine), as there were no remaining full copies of the data. After several minutes (these are Dell servers, after all...), server 1 came back up, and gluster resumed the rebuild, and came online on the cluster. I had to manually (virsh command) unpause the engine, and then struggle through trying to get critical VMs back up. Everything was super slow, and load averages on the servers were often seen in excess of 80 (these are 8 core / 16 thread boxes). Actual CPU usage (reported by top) was rarely above 40% (inclusive of all CPUs) for any one server. Glusterfs was often seen using 180%-350% of a CPU on server 1 and 2. > > I ended up putting the cluster in global HA maintenance mode and disabling power fencing on the nodes until the process finished. It appeared on at least two occasions that a functional node was marked bad, and had the fencing not been disabled, a node would have rebooted, just further exacerbating the problem. > > It's clear that the gluster rebuild overloaded things and caused the problem. I don't know why the load was so high (even IOWait was low), but load averages were definitely tied to the glusterfs CPU utilization %. At no point did I have any problems pinging any machine (host or VM) unless the engine decided it was dead and killed it. > > Why did my system bite it so hard with the rebuild? I babied it along until the rebuild was complete, after which it returned to normal operation. > > As of this event, all networking (host/engine management, gluster, and VM network) was on the same VLAN. I'd love to move things off, but so far any attempt to do so breaks my cluster. How can I move my management interfaces to a separate VLAN/IP space? I also want to move Gluster to its own private space, but it seems if I change anything in the peers file, the entire gluster cluster goes down. The dedicated gluster network is listed as a secondary hostname for all peers already. > > Will the above network reconfigurations be enough? I got the impression that the issue may not have been purely network based, but possibly server IO overload. Is this likely / right? > > I appreciate input. I don't think gluster's recovery is supposed to do as much damage as it did the last two or three times any healing was required. > > Thanks! > --Jim > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckozleriii at gmail.com Fri Mar 23 14:34:44 2018 From: ckozleriii at gmail.com (Charles Kozler) Date: Fri, 23 Mar 2018 10:34:44 -0400 Subject: [ovirt-users] Juniper vSRX Cluster on oVirt/RHEV In-Reply-To: <5AB4B3B3.9050308@xs4all.nl> References: <5AB4B3B3.9050308@xs4all.nl> Message-ID: I hit a lot of errors when I tried to upload through the web UI. I tried both remote URI and local file and both failed for me.
I can't remember exactly what they were, but I recall it's where I spent a lot of time initially. I think it had something to do with the ovirt-imageio function... something around that I couldn't get working right. Also, doing it the way I did allowed me to quickly restart if I needed to by creating an alias around the dd command. I had to restart a bunch, so it was useful. I did this all on 4.0.1.1-1.el7.centos On Fri, Mar 23, 2018 at 3:58 AM, Joop wrote: > On 22-3-2018 10:17, Yaniv Kaul wrote: > > > > On Wed, Mar 21, 2018 at 10:37 PM, Charles Kozler < > ckozleriii at gmail.com> wrote: > >> Hi All - >> >> Recently did this and thought it would be worth documenting. I couldn't >> find any solid information on vsrx with kvm outside of flat KVM. This >> outlines some of the things I hit along the way and how to fix them. This is my >> one small way of giving back to such an incredible open source tool >> >> https://ckozler.net/vsrx-cluster-on-ovirtrhev/ >> > > Thanks for sharing! > Why didn't you just upload the qcow2 disk via the UI/API though? > There's quite a bit of manual work that I hope is not needed? > > @Work we're using Juniper too and out of curiosity I downloaded the qcow2 > image and used the UI to upload it and add it to a VM. It just works :-) > oVirt++ > > Joop > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Fri Mar 23 15:38:50 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 23 Mar 2018 18:38:50 +0300 Subject: [ovirt-users] Bad volume specification In-Reply-To: References: <18e2bb8f7b64871a722d4355ef51a56c@devels.es> Message-ID: On Fri, Mar 23, 2018 at 3:20 PM, wrote: > El 2018-03-23 12:16, Sandro Bonazzola escribió: > >> 2018-03-21 13:37 GMT+01:00 : >> >> Hi, >>> >>> We're running oVirt 4.1.9, today I put a host on maintenance, I saw >>> one of the VMs was taking too long to migrate so I shut it down. It >>> seems that just in that moment the machine ended migrating, but the >>> shutdown did happen as well. >>> >> >> I would suggest updating to 4.2 as soon as possible since 4.1 is not >> supported anymore now that 4.2 is available >> >> > We have 2 oVirt infrastructures. One is migrated to 4.2, we can't migrate > the other one since most of the user portal features in 4.1 are not present > in 4.2 and our users make massive usage of this portal to create/tune VMs. > I know several issues were created on Github to implement missing features, > but we cannot upgrade until they are implemented. > Have you checked the latest oVirt 4.2.2 RC? We have brought back several features to the user portal. Y. > > Thanks. > > >> >> Now, when I try to start the VM I'm getting the following error: >>> >>> 2018-03-21 12:31:02,309Z ERROR >>> >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> >>> (DefaultQuartzScheduler3) [7985a4e2] EVENT_ID: VM_DOWN_ERROR(119), >>> Correlation ID: null, Call Stack: null, Custom ID: null, Custom >>> Event ID: -1, Message: VM openmaint.iaas.domain.com [1] is down with >>> error.
Exit message: Bad volume specification {'index': '0', >>> u'domainID': u'04cb5bd0-d94e-4d14-a71a-e63a669e11b9', 'reqsize': >>> '0', u'format': u'cow', u'optional': u'false', u'address': >>> {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x0000', >>> u'type': u'pci', u'slot': u'0x06'}, u'volumeID': >>> u'68ee7a04-ceff-49f0-bf91-256870543921', 'apparentsize': >>> '3221225472', u'imageID': u'9d087e6b-0832-46db-acb0-16d5131afa0c', >>> u'discard': False, u'specParams': {}, u'readonly': u'false', >>> u'iface': u'virtio', u'deviceId': >>> u'9d087e6b-0832-46db-acb0-16d5131afa0c', 'truesize': '3221225472', >>> u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'device': >>> u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type': >>> u'disk'}. >>> >>> It looks quite bad... I'm attaching the engine.log since the moment >>> I start the VM. >>> >>> Is there anything I can do to recover the VM? oVirt says the disk >>> is OK in the 'Disks' tab. >>> >> >> Adding some people who may be able to help. Once solved, please >> consider upgrade. >> >> >> >> Thanks. >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users [2] >>> >> >> -- >> >> SANDRO BONAZZOLA >> >> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D >> >> Red Hat EMEA [3] >> >> sbonazzo at redhat.com >> >> [4] >> >> [5] >> >> >> >> Links: >> ------ >> [1] http://openmaint.iaas.domain.com >> [2] http://lists.ovirt.org/mailman/listinfo/users >> [3] https://www.redhat.com/ >> [4] https://red.ht/sig >> [5] https://redhat.com/summit >> > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From statsenko_ky at interrao.ru Fri Mar 23 16:53:15 2018 From: statsenko_ky at interrao.ru (=?utf-8?B?0KHRgtCw0YbQtdC90LrQviDQmtC+0L3RgdGC0LDQvdGC0LjQvSDQrtGA0Yw=?= =?utf-8?B?0LXQstC40Yc=?=) Date: Fri, 23 Mar 2018 16:53:15 +0000 Subject: [ovirt-users] FC LUN Message-ID: <439dffd7724541748fe3564493cac704@msk1-exchmb07.interrao.ru> Hello! Can you, please, help ? There is a strange error while trying to connect FC LUN from the old SAN storage system to oVirt 4.1.9 hosts. In the same time there are no problems with LUN?s from other storage systems. The error in vdsm.log is: 2018-03-23 16:41:55,920+0300 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.09 seconds (__init__:539) 2018-03-23 16:41:55,965+0300 WARN (jsonrpc/1) [storage.LVM] lvm pvs failed: 5 [] [' Failed to find device for physical volume "/dev/mapper/320080013789309c0".'] (lvm :323) 2018-03-23 16:41:55,966+0300 WARN (jsonrpc/1) [storage.HSM] getPV failed for guid: 320080013789309c0 (hsm:1966) Traceback (most recent call last): File "/usr/share/vdsm/storage/hsm.py", line 1963, in _getDeviceList pv = lvm.getPV(guid) File "/usr/share/vdsm/storage/lvm.py", line 853, in getPV raise se.InaccessiblePhysDev((pvName,)) InaccessiblePhysDev: Multipath cannot access physical device(s): "devices=(u'320080013789309c0',)" Thank you. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marceloltmm at gmail.com Fri Mar 23 18:09:28 2018 From: marceloltmm at gmail.com (Marcelo Leandro) Date: Fri, 23 Mar 2018 15:09:28 -0300 Subject: [ovirt-users] Problem to upgrade level cluster to 4.1 Message-ID: Hello, I am trying to update the cluster level but I got this error message: Error while executing action: Update of cluster compatibility version failed because there are VMs/Templates [VPS-NAME01, VPS-NAME02, VPS-NAME03] with incorrect configuration. To fix the issue, please go to each of them, edit and press OK. If the save does not pass, fix the dialog validation. 23/03/2018 15:03:07 Cannot update compatibility version of Vm/Template: [VPS-NAME01], Message: [No Message] 23/03/2018 15:03:07 Cannot update compatibility version of Vm/Template: [VPS-NAME02], Message: [No Message] 23/03/2018 15:03:07 Cannot update compatibility version of Vm/Template: [VPS-NAME03], Message: [No Message] I have already opened each VM's edit dialog and closed it with the OK button, as suggested in the error message: To fix the issue, please go to each of them, edit and press OK. If the save does not pass, fix the dialog validation. But no error is returned when I save. Can anyone help me? -------------- next part -------------- An HTML attachment was scrubbed... URL: From blanchet at abes.fr Fri Mar 23 18:58:04 2018 From: blanchet at abes.fr (=?UTF-8?Q?Nathana=c3=abl_Blanchet?=) Date: Fri, 23 Mar 2018 19:58:04 +0100 Subject: [ovirt-users] can't select network in network section of initial run In-Reply-To: <83d869e8-90fe-dfd6-4f16-66cdb249f619@abes.fr> References: <83d869e8-90fe-dfd6-4f16-66cdb249f619@abes.fr> Message-ID: <5e8ab63b-5478-4c41-5796-fad796ff9294@abes.fr> Okay, I'll answer myself: I found how to do it. At the first network step, I shouldn't expect an existing network or anything else to be available; I should choose the name of the interface, which then becomes available... a very strange way to do it, but it is OK. On 22/03/2018 at 18:12, Nathanaël Blanchet wrote: > Hi all, > > In ovirt 4.1.9, the "select network above" is blank when I want to add > a network with cloud-init. > > Is it a known bug corrected in the 4.2.x branch? > -- Nathanaël Blanchet Supervision réseau Pôle Infrastructures Informatiques 227 avenue Professeur-Jean-Louis-Viala 34193 MONTPELLIER CEDEX 5 Tél. 33 (0)4 67 54 84 55 Fax 33 (0)4 67 54 84 14 blanchet at abes.fr From fernando.frediani at upx.com Fri Mar 23 19:27:55 2018 From: fernando.frediani at upx.com (FERNANDO FREDIANI) Date: Fri, 23 Mar 2018 16:27:55 -0300 Subject: [ovirt-users] Juniper vSRX Cluster on oVirt/RHEV In-Reply-To: <5AB4B3B3.9050308@xs4all.nl> References: <5AB4B3B3.9050308@xs4all.nl> Message-ID: Out of curiosity, how much traffic can it handle running in these virtual machines on top of reasonable hardware? Fernando 2018-03-23 4:58 GMT-03:00 Joop : > On 22-3-2018 10:17, Yaniv Kaul wrote: > > > > On Wed, Mar 21, 2018 at 10:37 PM, Charles Kozler < > ckozleriii at gmail.com> wrote: > >> Hi All - >> >> Recently did this and thought it would be worth documenting. I couldn't >> find any solid information on vsrx with kvm outside of flat KVM. This >> outlines some of the things I hit along the way and how to fix them. This is my >> one small way of giving back to such an incredible open source tool >> >> https://ckozler.net/vsrx-cluster-on-ovirtrhev/ >> > > Thanks for sharing! > Why didn't you just upload the qcow2 disk via the UI/API though? > There's quite a bit of manual work that I hope is not needed?
> > @Work we're using Juniper too and out of curiosity I downloaded the qcow2 > image and used the UI to upload it and add it to a VM. It just works :-) > oVirt++ > > Joop > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marceloltmm at gmail.com Fri Mar 23 19:29:00 2018 From: marceloltmm at gmail.com (Marcelo Leandro) Date: Fri, 23 Mar 2018 16:29:00 -0300 Subject: [ovirt-users] Any monitoring tool provided? In-Reply-To: <32f428b6-dc46-de63-6072-b1fff2eb0b28@endlessnow.com> References: <32f428b6-dc46-de63-6072-b1fff2eb0b28@endlessnow.com> Message-ID: Hello, I am following this how-to: https://www.ovirt.org/develop/release-management/features/metrics/metrics-store-installation/ but when I run this command: /usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_machines_for_metrics.sh --playbook=ovirt-metrics-store-installation.yml I get this error message: ansible-playbook: error: no such option: --playbook my version: ovirt-engine-metrics-1.0.8-1.el7.centos.noarch Can anyone help me? 2018-03-22 16:28 GMT-03:00 Christopher Cox : > On 03/21/2018 10:41 PM, Terry hey wrote: > >> Dear all, >> >> Right now, we can only see how much storage is used and the CPU usage on the ovirt dashboard. >> But is there any monitoring tool for monitoring virtual machines from time to >> time? >> If yes, could you guys give me the procedure? >> > > A possible option, for a full OS with network connectivity, is to monitor > the VM like you would any other host. > > We use omd/check_mk. > > Right now there isn't an oVirt specific monitor plugin for check_mk. > > I know what I said is probably pretty obvious, but just in case. > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas at devels.es Fri Mar 23 20:02:30 2018 From: nicolas at devels.es (nicolas at devels.es) Date: Fri, 23 Mar 2018 20:02:30 +0000 Subject: [ovirt-users] Bad volume specification In-Reply-To: References: <18e2bb8f7b64871a722d4355ef51a56c@devels.es> Message-ID: <7e9376e27f2437f3b4c42d19672b14d2@devels.es> El 2018-03-23 15:38, Yaniv Kaul escribió: > On Fri, Mar 23, 2018 at 3:20 PM, wrote: > >> El 2018-03-23 12:16, Sandro Bonazzola escribió: >> 2018-03-21 13:37 GMT+01:00 : >> >> Hi, >> >> We're running oVirt 4.1.9, today I put a host on maintenance, I saw >> one of the VMs was taking too long to migrate so I shut it down. It >> seems that just in that moment the machine ended migrating, but the >> shutdown did happen as well. >> >> I would suggest updating to 4.2 as soon as possible since 4.1 is >> not >> supported anymore now that 4.2 is available > > We have 2 oVirt infrastructures. One is migrated to 4.2, we can't > migrate the other one since most of the user portal features in 4.1 > are not present in 4.2 and our users make massive usage of this portal > to create/tune VMs. I know several issues were created on Github to > implement missing features, but we cannot upgrade until they are > implemented. > > Have you checked the latest oVirt 4.2.2 RC? We have brought back > several features to the user portal. > Y. > Yes, I'm aware.
I'm about to find some time to test it, still I think there will be some features missing (I think I've read that it won't be possible to deploy a VM without a template), but I need to test it for a while. Still I guess we can upgrade and let some teachers test if they can get used to the new user portal. Thank you! > >> Thanks. >> >> ? >> >> Now, when I try to start the VM I'm getting the following error: >> >> 2018-03-21 12:31:02,309Z ERROR >> >> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> (DefaultQuartzScheduler3) [7985a4e2] EVENT_ID: VM_DOWN_ERROR(119), >> Correlation ID: null, Call Stack: null, Custom ID: null, Custom >> Event ID: -1, Message: VM openmaint.iaas.domain.com [1] [1] is down >> with >> error. Exit message: Bad volume specification {'index': '0', >> u'domainID': u'04cb5bd0-d94e-4d14-a71a-e63a669e11b9', 'reqsize': >> '0', u'format': u'cow', u'optional': u'false', u'address': >> {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x0000', >> u'type': u'pci', u'slot': u'0x06'}, u'volumeID': >> u'68ee7a04-ceff-49f0-bf91-256870543921', 'apparentsize': >> '3221225472', u'imageID': u'9d087e6b-0832-46db-acb0-16d5131afa0c', >> u'discard': False, u'specParams': {}, u'readonly': u'false', >> u'iface': u'virtio', u'deviceId': >> u'9d087e6b-0832-46db-acb0-16d5131afa0c', 'truesize': '3221225472', >> u'poolID': u'75bf8f48-970f-42bc-8596-f8ab6efb2b63', u'device': >> u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type': >> u'disk'}. >> >> It looks quite bad... I'm attaching the engine.log since the moment >> I start the VM. >> >> Is there anything I can do to recover the VM? oVirt says the disk >> is OK in the 'Disks' tab. >> >> Adding some people who may be able to help. Once solved, please >> consider upgrade. >> >> ? >> >> Thanks. >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users [2] [2] >> >> -- >> >> SANDRO?BONAZZOLA >> >> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION >> R&D >> >> Red Hat?EMEA [3] >> >> sbonazzo at redhat.com? ? >> >> ? ? ? ? ? ? ? ? ?[4] >> >> ?[5] >> >> Links: >> ------ >> [1] http://openmaint.iaas.domain.com [1] >> [2] http://lists.ovirt.org/mailman/listinfo/users [2] >> [3] https://www.redhat.com/ [3] >> [4] https://red.ht/sig [4] >> [5] https://redhat.com/summit [5] > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users [2] > > > > Links: > ------ > [1] http://openmaint.iaas.domain.com > [2] http://lists.ovirt.org/mailman/listinfo/users > [3] https://www.redhat.com/ > [4] https://red.ht/sig > [5] https://redhat.com/summit From mskrivan at redhat.com Fri Mar 23 20:10:43 2018 From: mskrivan at redhat.com (Michal Skrivanek) Date: Fri, 23 Mar 2018 21:10:43 +0100 Subject: [ovirt-users] Bad volume specification In-Reply-To: <7e9376e27f2437f3b4c42d19672b14d2@devels.es> References: <18e2bb8f7b64871a722d4355ef51a56c@devels.es> <7e9376e27f2437f3b4c42d19672b14d2@devels.es> Message-ID: <06C9F954-C05E-47E7-ABF2-FFA853206AF2@redhat.com> > On 23 Mar 2018, at 21:02, nicolas at devels.es wrote: > > El 2018-03-23 15:38, Yaniv Kaul escribi?: >> On Fri, Mar 23, 2018 at 3:20 PM, wrote: >>> El 2018-03-23 12:16, Sandro Bonazzola escribi?: >>> 2018-03-21 13:37 GMT+01:00 : >>> Hi, >>> We're running oVirt 4.1.9, today I put a host on maintenance, I saw >>> one of the VMs was taking too long to migrate so I shut it down. 
It >>> seems that just in that moment the machine ended migrating, but the >>> shutdown did happen as well. >>> I would suggest updating to 4.2 as soon as possible since 4.1 is >>> not >>> supported anymore now that 4.2 is available >> We have 2 oVirt infrastructures. One is migrated to 4.2, we can't >> migrate the other one since most of the user portal features in 4.1 >> are not present in 4.2 and our users make massive usage of this portal >> to create/tune VMs. I know several issues were created on Github to >> implement missing features, but we cannot upgrade until they are >> implemented. >> Have you checked the latest oVirt 4.2.2 RC? We have brought back >> several features to the user portal. >> Y. >> > > Yes, I'm aware. I'm about to find some time to test it, still I think there will be some features missing (I think I've read that it won't be possible to deploy a VM without a template), 'Blank' is also a template :) There's basic disk and network creation > but I need to test it for a while. Still I guess we can upgrade and let some teachers test if they can get used to the new user portal. It's not yet out; we had some issues building dependencies today. Should be ready early next week > > Thank you! > >>> Thanks.
>>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users [2] [2] >>> -- >>> SANDRO BONAZZOLA >>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION >>> R&D >>> Red Hat EMEA [3] >>> sbonazzo at redhat.com >>> [4] >>> [5] >>> Links: >>> ------ >>> [1] http://openmaint.iaas.domain.com [1] >>> [2] http://lists.ovirt.org/mailman/listinfo/users [2] >>> [3] https://www.redhat.com/ [3] >>> [4] https://red.ht/sig [4] >>> [5] https://redhat.com/summit [5] >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users [2] >> Links: >> ------ >> [1] http://openmaint.iaas.domain.com >> [2] http://lists.ovirt.org/mailman/listinfo/users >> [3] https://www.redhat.com/ >> [4] https://red.ht/sig >> [5] https://redhat.com/summit -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaymef at gmail.com Fri Mar 23 22:44:14 2018 From: jaymef at gmail.com (Jayme) Date: Fri, 23 Mar 2018 19:44:14 -0300 Subject: [ovirt-users] [Gluster-users] GlusterFS performance with only one drive per host? In-Reply-To: References: Message-ID: Do you feel that SSDs are worth the extra cost or am I better off using regular HDDs? I'm looking for the best performance I can get with glusterFS On Fri, Mar 23, 2018 at 12:03 AM, Manoj Pillai wrote: > > > On Thu, Mar 22, 2018 at 3:31 PM, Sahina Bose wrote: > >> >> >> On Mon, Mar 19, 2018 at 5:57 PM, Jayme wrote: >> >>> I'm spec'ing a new oVirt build using three Dell R720's w/ 256GB. I'm >>> considering storage options. I don't have a requirement for high amounts >>> of storage, I have a little over 1TB to store but want some overhead so I'm >>> thinking 2TB of usable space would be sufficient. >>> >>> I've been doing some research on Micron 1100 2TB ssd's and they seem to >>> offer a lot of value for the money. I'm considering using smaller cheaper >>> SSDs for boot drives and using one 2TB micron SSD in each host for a >>> glusterFS replica 3 setup (on the fence about using an arbiter, I like the >>> extra redundancy replicate 3 will give me). >>> >>> My question is, would I see a performance hit using only one drive in >>> each host with glusterFS or should I try to add more physical disks. Such >>> as 6 1TB drives instead of 3 2TB drives? >>> >> > It is possible. With SSDs the rpc layer can become the bottleneck with > some workloads, especially if there are not enough connections out to the > server side. We had experimented with a multi-connection model for this > reason: https://review.gluster.org/#/c/19133/. > > -- Manoj > >> >> [Adding gluster-users for inputs here] >> >> >>> Also one other question. I've read that gluster can only be done in >>> groups of three. Meaning you need 3, 6, or 9 hosts. Is this true? If I >>> had an operational replicate 3 glusterFS setup and wanted to add more >>> capacity I would have to add 3 more hosts, or is it possible for me to add >>> a 4th host in to the mix for extra processing power down the road? >>> >> >> In oVirt, we support replica 3 or replica 3 with arbiter (where one of >> the 3 bricks is a low storage arbiter brick). To expand storage, you would >> need to add in multiples of 3 bricks. However if you only want to expand >> compute capacity in your HC environment, you can add a 4th node. >> >> >>> Thanks! 
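Sahina's "replica 3 with arbiter" layout quoted above maps onto a volume created roughly as follows; the hostnames and brick paths here are illustrative assumptions, not taken from the thread:

# two full data bricks plus one metadata-only arbiter brick
gluster volume create data replica 3 arbiter 1 \
  host1:/gluster/bricks/data host2:/gluster/bricks/data host3:/gluster/bricks/data
gluster volume start data
# host3 stores only file metadata, so its brick can be far smaller than the data bricks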
>>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> _______________________________________________ >> Gluster-users mailing list >> Gluster-users at gluster.org >> http://lists.gluster.org/mailman/listinfo/gluster-users >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ckozleriii at gmail.com Sat Mar 24 00:04:09 2018 From: ckozleriii at gmail.com (Charles Kozler) Date: Fri, 23 Mar 2018 20:04:09 -0400 Subject: [ovirt-users] Juniper vSRX Cluster on oVirt/RHEV In-Reply-To: References: <5AB4B3B3.9050308@xs4all.nl> Message-ID: Truth be told, I don't really know. What I am going to be doing with it is pretty much mostly some lab stuff and getting working with VRFs a bit. There is a known limitation in that the virtio backend driver uses interrupt mode to receive packets while vSRX uses DPDK - https://dpdk.readthedocs.io/en/stable/nics/virtio.html - which in turn creates a bottleneck into the guest VM. It is more ideal to use something like SR-IOV instead and remove as many buffer layers as possible with PCI passthrough. One easier way too is to use DPDK OVS. I know ovirt supports OVS in later versions more natively, so I just didn't go after it, and I don't know if there is any difference between just regular OVS and DPDK OVS. I don't have a huge requirement of insane throughput; I just need to get packets from Amazon back to my lab and support overlapping subnets. This exercise was somewhat of a POC for me to see if it could be done. A lot of Juniper's documentation does not take into account such things as ovirt or proxmox or any Linux overlay to hypervisors like it does for vmware / vcenter, which is no fault of their own. They assume a flat KVM host (or 2 if clustered), whereas stuff like ovirt can introduce variables (eg: no MAC spoofing) On Fri, Mar 23, 2018 at 3:27 PM, FERNANDO FREDIANI < fernando.frediani at upx.com> wrote: > Out of curiosity, how much traffic can it handle running in these virtual > machines on top of reasonable hardware? > > Fernando > > 2018-03-23 4:58 GMT-03:00 Joop : > >> On 22-3-2018 10:17, Yaniv Kaul wrote: >> >> >> >> On Wed, Mar 21, 2018 at 10:37 PM, Charles Kozler < >> ckozleriii at gmail.com> wrote: >> >>> Hi All - >>> >>> Recently did this and thought it would be worth documenting. I couldn't >>> find any solid information on vsrx with kvm outside of flat KVM. This >>> outlines some of the things I hit along the way and how to fix them. This is my >>> one small way of giving back to such an incredible open source tool >>> >>> https://ckozler.net/vsrx-cluster-on-ovirtrhev/ >>> >> >> Thanks for sharing! >> Why didn't you just upload the qcow2 disk via the UI/API though? >> There's quite a bit of manual work that I hope is not needed? >> >> @Work we're using Juniper too and out of curiosity I downloaded the qcow2 >> image and used the UI to upload it and add it to a VM. It just works :-) >> oVirt++ >> >> Joop >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From vincent at epicenergy.ca Sat Mar 24 00:55:18 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Fri, 23 Mar 2018 17:55:18 -0700 Subject: [ovirt-users] Resilient Storage for Ovirt Message-ID: Hi, I have a 2 node cluster with Hosted Engine attached to a storage Domain (NFS share) served by WS2016. I run about a dozen VMs. I need to improve availability / resilience of the storage domain, and also the I/O performance. Anytime we need to reboot the Windows Server, its a nightmare for the cluster, we have to put it all into maintenance and take it down. When the Storage server crashes (has happened once) or Windows decides to install an update and reboot (has happened once), the storage domain obviously goes down and sometimes the hosts have a difficult time re-connecting. I can afford a second bare metal server and am looking for input in the best way to provide a highly available storage domain. Ideally I'd like to be able to reboot either storage server without disrupting Ovirt. Should I be looking at clustering with Windows Server, or moving to a different OS? I currently run the Storage in RAID10 (spinning discs) and have the option of adding CacheCade to the array w/ SSD. Would that help I/O for small random R/W? What are the suggested options for this scenario? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Sat Mar 24 07:19:43 2018 From: rightkicktech at gmail.com (Alex K) Date: Sat, 24 Mar 2018 07:19:43 +0000 Subject: [ovirt-users] [Gluster-users] GlusterFS performance with only one drive per host? Message-ID: I would go with at least 4 HDDs per host in RAID 10. Then focus on network performance where bottleneck usualy is for gluster. On Sat, Mar 24, 2018, 00:44 Jayme wrote: > Do you feel that SSDs are worth the extra cost or am I better off using > regular HDDs? I'm looking for the best performance I can get with glusterFS > > On Fri, Mar 23, 2018 at 12:03 AM, Manoj Pillai wrote: > >> >> >> On Thu, Mar 22, 2018 at 3:31 PM, Sahina Bose wrote: >> >>> >>> >>> On Mon, Mar 19, 2018 at 5:57 PM, Jayme wrote: >>> >>>> I'm spec'ing a new oVirt build using three Dell R720's w/ 256GB. I'm >>>> considering storage options. I don't have a requirement for high amounts >>>> of storage, I have a little over 1TB to store but want some overhead so I'm >>>> thinking 2TB of usable space would be sufficient. >>>> >>>> I've been doing some research on Micron 1100 2TB ssd's and they seem to >>>> offer a lot of value for the money. I'm considering using smaller cheaper >>>> SSDs for boot drives and using one 2TB micron SSD in each host for a >>>> glusterFS replica 3 setup (on the fence about using an arbiter, I like the >>>> extra redundancy replicate 3 will give me). >>>> >>>> My question is, would I see a performance hit using only one drive in >>>> each host with glusterFS or should I try to add more physical disks. Such >>>> as 6 1TB drives instead of 3 2TB drives? >>>> >>> >> It is possible. With SSDs the rpc layer can become the bottleneck with >> some workloads, especially if there are not enough connections out to the >> server side. We had experimented with a multi-connection model for this >> reason: https://review.gluster.org/#/c/19133/. >> >> -- Manoj >> >>> >>> [Adding gluster-users for inputs here] >>> >>> >>>> Also one other question. I've read that gluster can only be done in >>>> groups of three. Meaning you need 3, 6, or 9 hosts. Is this true? 
If I >>>> had an operational replica 3 glusterFS setup and wanted to add more >>>> capacity I would have to add 3 more hosts, or is it possible for me to add >>>> a 4th host into the mix for extra processing power down the road? >>>> >>> >>> In oVirt, we support replica 3 or replica 3 with arbiter (where one of >>> the 3 bricks is a low storage arbiter brick). To expand storage, you would >>> need to add in multiples of 3 bricks. However if you only want to expand >>> compute capacity in your HC environment, you can add a 4th node. >>> >>> >>>> Thanks! >>>> >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> >>> >>> _______________________________________________ >>> Gluster-users mailing list >>> Gluster-users at gluster.org >>> http://lists.gluster.org/mailman/listinfo/gluster-users >>> >> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andy.michielsen at gmail.com Sat Mar 24 08:33:53 2018 From: andy.michielsen at gmail.com (Andy Michielsen) Date: Sat, 24 Mar 2018 09:33:53 +0100 Subject: [ovirt-users] Which hardware are you using for oVirt Message-ID: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com> Hi all, Not sure if this is the place to be asking this, but I was wondering which hardware you all are using and why, in order for me to see what I would be needing. I would like to set up a HA cluster consisting of 3 hosts to be able to run 30 VMs. The engine, I can run on another server. The hosts can be fitted with the storage and share the space through glusterfs. I would think I will be needing at least 3 NICs but would be able to install ovn. (Are 1 Gb NICs sufficient?) Any input you guys would like to share would be greatly appreciated. Thanks, From andreil1 at starlett.lv Sat Mar 24 09:35:38 2018 From: andreil1 at starlett.lv (Andrei Verovski) Date: Sat, 24 Mar 2018 11:35:38 +0200 Subject: [ovirt-users] Which hardware are you using for oVirt In-Reply-To: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com> References: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com> Message-ID: Hi, HP ProLiant DL380, dual Xeon 120 GB RAID L1 for system 2 TB RAID L10 for VM disks 5 VMs, 3 Linux, 2 Windows Total CPU load most of the time is low, with a high level of activity related to disk. Host engine under KVM appliance on SuSE, can be easily moved, backed up, copied, experimented with, etc. You'll have to use servers with more RAM and storage than mine. More than one NIC is required if some of your VMs are on different subnets, e.g. 1 in the internal zone and a 2nd on the DMZ. For your setup 10 GB NICs + L3 Switch for ovirtmgmt. BTW, I would suggest having several separate hardware RAIDs unless you have SSDs, otherwise the limit of the disk system I/O will be a bottleneck. Consider an SSD L1 RAID for heavy-loaded databases. *Please note many cheap SSDs do NOT work reliably with SAS controllers even in SATA mode*. For example, I was going to use 2 x WD Green SSDs configured as RAID L1 for the OS. It was possible to install the system, yet under heavy load simulated with iozone the disk system froze, rendering the OS unbootable. The same crash was experienced with a 512GB KingFast SSD connected to a broadcom/AMCC SAS RAID Card.
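An iozone burn-in of the kind Andrei describes is a good way to catch such controller/SSD freezes before going to production. A minimal run might look like the sketch below; the flags and target path are assumptions, not the original command:

# full automatic read/write sweep, capped at a 4 GB test file, on the new array
iozone -a -g 4g -f /mnt/newraid/iozone.tmp
# repeat several times; a flaky SSD/SAS-controller combination usually locks up here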
On 03/24/2018 10:33 AM, Andy Michielsen wrote: > Hi all, > > Not sure if this is the place to be asking this but I was wondering which hardware you all are using and why in order for me to see what I would be needing. > > I would like to set up a HA cluster consisting off 3 hosts to be able to run 30 vm?s. > The engine, I can run on an other server. The hosts can be fitted with the storage and share the space through glusterfs. I would think I will be needing at least 3 nic?s but would be able to install ovn. (Are 1gb nic?s sufficient ?) > > Any input you guys would like to share would be greatly appriciated. > > Thanks, > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Sat Mar 24 11:13:17 2018 From: wodel.youchi at gmail.com (wodel youchi) Date: Sat, 24 Mar 2018 12:13:17 +0100 Subject: [ovirt-users] Testing oVirt 4.2 Message-ID: Hi, I am testing oVirt 4.2, I am using nested KVM for that. I am using two hypervisors Centos 7 updated and the hosted-Engine deployment using the ovirt appliance. For storage I am using iscsi and NFS4 Versions I am using : ovirt-engine-appliance-4.2-20180214.1.el7.centos.noarch ovirt-hosted-engine-setup-2.2.9-1.el7.centos.noarch kernel-3.10.0-693.21.1.el7.x86_64 I have a problem deploying the hosted-engine VM, when configuring the deployment (hosted-engine --deploy), it asks for the engine's hostname then the engine's IP address, I use static IP, in my lab I used *192.168.1.104* as IP for the VM engine, and I choose to add the it's hostname entry to the hypervisors's /etc/hosts But the deployment get stuck every time in the same place : *TASK [Wait for the host to become non operational]* After some time, it gave up and the deployment fails. I don't know the reason for now, but I have seen this behavior in */etc/hosts *of the hypervisor. In the beginning of the deployment the entry *192.168.2.104 engine01.example.local* is added, then sometime after that it's deleted, then a new entry is added with this IP *192.168.122.65 engine01.wodel.wd* which has nothing to do with the network I am using. Here is the error I am seeing in the deployment log 2018-03-24 11:51:31,398+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Wait for the host to become non operational] 2018-03-24 12:02:07,284+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 {u'_ansible _parsed': True, u'_ansible_no_log': False, u'changed': False, u'attempts': 150, u'invocation': {u'module_args': {u'pattern': u'name=hyperv01.wodel.wd', u'fetch_nested': False, u'nested_attributes': []}}, u'ansible_facts': {u'ovirt_hosts': []}} 2018-03-24 12:02:07,385+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 fatal: [loc alhost]: FAILED! 
=> {"ansible_facts": {"ovirt_hosts": []}, "attempts": 150, "changed": false} 2018-03-24 12:02:07,587+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [engine01.wodel.wd] : ok: 15 changed: 8 unreachable: 0 skipped: 4 failed: 0 2018-03-24 12:02:07,688+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 41 changed: 14 unreachable: 0 skipped: 3 failed: 1 2018-03-24 12:02:07,789+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 2 2018-03-24 12:02:07,790+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 ansible-playbook stdout: 2018-03-24 12:02:07,791+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:189 to retry, use: --limit @/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry 2018-03-24 12:02:07,791+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stderr: 2018-03-24 12:02:07,792+0100 DEBUG otopi.context context._executeMethod:143 method exception Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod method['method']() File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py", line 186, in _closeup r = ah.run() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", line 194, in run raise RuntimeError(_('Failed executing ansible-playbook')) RuntimeError: Failed executing ansible-playbook 2018-03-24 12:02:07,795+0100 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Closing up': Failed executing ansible-playbook any idea???? Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From andy.michielsen at gmail.com Sat Mar 24 11:24:32 2018 From: andy.michielsen at gmail.com (Andy Michielsen) Date: Sat, 24 Mar 2018 12:24:32 +0100 Subject: [ovirt-users] Which hardware are you using for oVirt In-Reply-To: References: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com> Message-ID: <1701ED12-9ED4-46B9-96E5-1FD3E32419DD@gmail.com> Hello Andrei, Thank you very much for sharing info on your hardware setup. Very informative. At this moment I have my ovirt engine on our vmware environment, which is fine for good backup and restore. I have 4 nodes running now, all different in make and model, with local storage, and it works but lacks performance a bit. But I can get my hands on some old Dell R415s with 96 GB of RAM, 2 quad-cores and 6 x 1 Gb NICs. They all come with 2 x 146 GB 15000 rpm hard disks. This isn't bad but I will add more RAM for starters. Also I would like to have some good redundant storage for this too and the servers have limited space to add that. Hopefully others will also share their setups and experience like you did. Kind regards. > On 24 Mar 2018, at 10:35, Andrei Verovski wrote: > > Hi, > > HP ProLiant DL380, dual Xeon > 120 GB RAID L1 for system > 2 TB RAID L10 for VM disks > 5 VMs, 3 Linux, 2 Windows > Total CPU load most of the time is low, with a high level of activity related to disk. > Host engine under KVM appliance on SuSE, can be easily moved, backed up, copied, experimented with, etc. > > You'll have to use servers with more RAM and storage than mine. > More than one NIC is required if some of your VMs are on different subnets, e.g. 1 in the internal zone and a 2nd on the DMZ.
> For your setup 10 GB NICs + L3 Switch for ovirtmgmt. > > BTW, I would suggest to have several separate hardware RAIDs unless you have SSD, otherwise limit of the disk system I/O will be a bottleneck. Consider SSD L1 RAID for heavy-loaded databases. > > Please note many cheap SSDs do NOT work reliably with SAS controllers even in SATA mode. > > For example, I supposed to use 2 x WD Green SSD configures as RAID L1 for OS. > It was possible to install system, yet under heavy load simulated with iozone disk system freeze, rendering OS unbootable. > Same crash was experienced with 512GB KingFast SSD connected to broadcom/AMCC SAS RAID Card. > > >> On 03/24/2018 10:33 AM, Andy Michielsen wrote: >> Hi all, >> >> Not sure if this is the place to be asking this but I was wondering which hardware you all are using and why in order for me to see what I would be needing. >> >> I would like to set up a HA cluster consisting off 3 hosts to be able to run 30 vm?s. >> The engine, I can run on an other server. The hosts can be fitted with the storage and share the space through glusterfs. I would think I will be needing at least 3 nic?s but would be able to install ovn. (Are 1gb nic?s sufficient ?) >> >> Any input you guys would like to share would be greatly appriciated. >> >> Thanks, >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From statsenko_ky at interrao.ru Sat Mar 24 11:38:14 2018 From: statsenko_ky at interrao.ru (=?utf-8?B?0KHRgtCw0YbQtdC90LrQviDQmtC+0L3RgdGC0LDQvdGC0LjQvSDQrtGA0Yw=?= =?utf-8?B?0LXQstC40Yc=?=) Date: Sat, 24 Mar 2018 11:38:14 +0000 Subject: [ovirt-users] FC LUN In-Reply-To: <439dffd7724541748fe3564493cac704@msk1-exchmb07.interrao.ru> References: <439dffd7724541748fe3564493cac704@msk1-exchmb07.interrao.ru> Message-ID: Anyone ? Any ideas ? From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On Behalf Of ???????? ?????????? ??????? Sent: Friday, March 23, 2018 7:53 PM To: users Subject: [ovirt-users] FC LUN Hello! Can you, please, help ? There is a strange error while trying to connect FC LUN from the old SAN storage system to oVirt 4.1.9 hosts. In the same time there are no problems with LUN?s from other storage systems. The error in vdsm.log is: 2018-03-23 16:41:55,920+0300 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.09 seconds (__init__:539) 2018-03-23 16:41:55,965+0300 WARN (jsonrpc/1) [storage.LVM] lvm pvs failed: 5 [] [' Failed to find device for physical volume "/dev/mapper/320080013789309c0".'] (lvm :323) 2018-03-23 16:41:55,966+0300 WARN (jsonrpc/1) [storage.HSM] getPV failed for guid: 320080013789309c0 (hsm:1966) Traceback (most recent call last): File "/usr/share/vdsm/storage/hsm.py", line 1963, in _getDeviceList pv = lvm.getPV(guid) File "/usr/share/vdsm/storage/lvm.py", line 853, in getPV raise se.InaccessiblePhysDev((pvName,)) InaccessiblePhysDev: Multipath cannot access physical device(s): "devices=(u'320080013789309c0',)" Thank you. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andy.michielsen at gmail.com Sat Mar 24 11:40:36 2018 From: andy.michielsen at gmail.com (Andy Michielsen) Date: Sat, 24 Mar 2018 12:40:36 +0100 Subject: [ovirt-users] Testing oVirt 4.2 In-Reply-To: References: Message-ID: <41136397-BF14-4DCE-9762-C4FA40EFBBA8@gmail.com> Hello, I also have done a installation on my host running KVM and I ?m pretty sure my vm?s can only use the 192.168.122.0/24 range if you install them with NAT networking when creating them. So that might explain why you see that address appear in your log and also explain why the engine system can?t be reached. Kind regards. > On 24 Mar 2018, at 12:13, wodel youchi wrote: > > Hi, > > I am testing oVirt 4.2, I am using nested KVM for that. > I am using two hypervisors Centos 7 updated and the hosted-Engine deployment using the ovirt appliance. > For storage I am using iscsi and NFS4 > > Versions I am using : > ovirt-engine-appliance-4.2-20180214.1.el7.centos.noarch > ovirt-hosted-engine-setup-2.2.9-1.el7.centos.noarch > kernel-3.10.0-693.21.1.el7.x86_64 > > I have a problem deploying the hosted-engine VM, when configuring the deployment (hosted-engine --deploy), it asks for the engine's hostname then the engine's IP address, I use static IP, in my lab I used 192.168.1.104 as IP for the VM engine, and I choose to add the it's hostname entry to the hypervisors's /etc/hosts > > But the deployment get stuck every time in the same place : TASK [Wait for the host to become non operational] > > After some time, it gave up and the deployment fails. > > I don't know the reason for now, but I have seen this behavior in /etc/hosts of the hypervisor. > > In the beginning of the deployment the entry 192.168.2.104 engine01.example.local is added, then sometime after that it's deleted, then a new entry is added with this IP 192.168.122.65 engine01.wodel.wd which has nothing to do with the network I am using. > > Here is the error I am seeing in the deployment log > > 2018-03-24 11:51:31,398+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Wait > for the host to become non operational] > 2018-03-24 12:02:07,284+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 {u'_ansible > _parsed': True, u'_ansible_no_log': False, u'changed': False, u'attempts': 150, u'invocation': {u'module_args': {u'pattern': > u'name=hyperv01.wodel.wd', u'fetch_nested': False, u'nested_attributes': []}}, u'ansible_facts': {u'ovirt_hosts': []}} > 2018-03-24 12:02:07,385+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 fatal: [loc > alhost]: FAILED! 
=> {"ansible_facts": {"ovirt_hosts": []}, "attempts": 150, "changed": false} > 2018-03-24 12:02:07,587+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP > [engine01.wodel.wd] : ok: 15 changed: 8 unreachable: 0 skipped: 4 failed: 0 > 2018-03-24 12:02:07,688+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP > [localhost] : ok: 41 changed: 14 unreachable: 0 skipped: 3 failed: 1 > 2018-03-24 12:02:07,789+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 2 > 2018-03-24 12:02:07,790+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 ansible-playbook stdou > t: > 2018-03-24 12:02:07,791+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:189 to retry, use: --limi > t @/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry > > 2018-03-24 12:02:07,791+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stder > r: > 2018-03-24 12:02:07,792+0100 DEBUG otopi.context context._executeMethod:143 method exception > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod > method['method']() > File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py", line 186, in _closeup > r = ah.run() > File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", line 194, in run > raise RuntimeError(_('Failed executing ansible-playbook')) > RuntimeError: Failed executing ansible-playbook > 2018-03-24 12:02:07,795+0100 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Closing up': Failed exec > uting ansible-playbook > > > any idea???? > > > Regards > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreil1 at starlett.lv Sat Mar 24 15:25:17 2018 From: andreil1 at starlett.lv (Andrei Verovski) Date: Sat, 24 Mar 2018 17:25:17 +0200 Subject: [ovirt-users] Testing oVirt 4.2 In-Reply-To: <41136397-BF14-4DCE-9762-C4FA40EFBBA8@gmail.com> References: <41136397-BF14-4DCE-9762-C4FA40EFBBA8@gmail.com> Message-ID: <5503cc47-bcef-8789-3707-d4b36fd7885f@starlett.lv> On 03/24/2018 01:40 PM, Andy Michielsen wrote: > Hello, > > I also have done a installation on my host running KVM and I ?m pretty > sure my vm?s can only use the 192.168.122.0/24 range if you install > them with NAT networking when creating them. So that might explain why > you see that address appear in your log and also explain why the > engine system can?t be reached. Can't tell fo sure about other installations, yet IMHO problem is with networking schema. One need to set bridge to real ethernet interface and add it to KVM VM definition. For example, my SuSE box have 2 ethernet cards, 192.168.0.aa for SMB fle server and another bridged with IP 192.168.0.bb defined within KVM guest (CentOS 7.4 with oVirt host engine). See configs below. Another SuSE box have 10 Ethernet interfaces, one for for its own needs, and 4 + 3 for VyOS routers running as KVM guests. 
****************************** SU47:/etc/sysconfig/network # tail -n 100 ifcfg-br0 BOOTPROTO='static' BRIDGE='yes' BRIDGE_FORWARDDELAY='0' BRIDGE_PORTS='eth0' BRIDGE_STP='off' BROADCAST='' DHCLIENT_SET_DEFAULT_ROUTE='no' ETHTOOL_OPTIONS='' IPADDR='' MTU='' NETWORK='' PREFIXLEN='24' REMOTE_IPADDR='' STARTMODE='auto' NAME='' SU47:/etc/sysconfig/network # tail -n 100 ifcfg-eth0 BOOTPROTO='none' BROADCAST='' DHCLIENT_SET_DEFAULT_ROUTE='no' ETHTOOL_OPTIONS='' IPADDR='' MTU='' NAME='82579LM Gigabit Network Connection' NETMASK='' NETWORK='' REMOTE_IPADDR='' STARTMODE='auto' PREFIXLEN='' > > Kind regards. > > On 24 Mar 2018, at 12:13, wodel youchi > wrote: > >> Hi, >> >> I am testing oVirt 4.2, I am using nested KVM for that. >> I am using two hypervisors Centos 7 updated and the hosted-Engine >> deployment using the ovirt appliance. >> For storage I am using iscsi and NFS4 >> >> Versions I am using : >> ovirt-engine-appliance-4.2-20180214.1.el7.centos.noarch >> ovirt-hosted-engine-setup-2.2.9-1.el7.centos.noarch >> kernel-3.10.0-693.21.1.el7.x86_64 >> >> I have a problem deploying the hosted-engine VM, when configuring the >> deployment (hosted-engine --deploy), it asks for the engine's >> hostname then the engine's IP address, I use static IP, in my lab I >> used *192.168.1.104*?as IP for the VM engine, and I choose to add the >> it's hostname entry to the hypervisors's /etc/hosts >> >> But the deployment get stuck every time in the same place :?*TASK >> [Wait for the host to become non operational]* >> >> After some time, it gave up and the deployment fails. >> >> I don't know the reason for now, but I have seen this behavior in >> */etc/hosts *of the hypervisor. >> >> In the beginning of the deployment the entry? *192.168.2.104 >> engine01.example.local* is added, then sometime after that it's >> deleted, then a new entry is added with this >> IP?*192.168.122.65engine01.wodel.wd*?which has nothing to do with the >> network I am using. >> >> Here is the error I am seeing in the deployment log >> >> 2018-03-24 11:51:31,398+0100 INFO >> otopi.ovirt_hosted_engine_setup.ansible_utils >> ansible_utils._process_output:100 TASK [Wait >> for the host to become non operational] >> 2018-03-24 12:02:07,284+0100 DEBUG >> otopi.ovirt_hosted_engine_setup.ansible_utils >> ansible_utils._process_output:94 {u'_ansible >> _parsed': True, u'_ansible_no_log': False, u'changed': False, >> u'attempts': 150, u'invocation': {u'module_args': {u'pattern': >> u'name=hyperv01.wodel.wd', u'fetch_nested': False, >> u'nested_attributes': []}}, u'ansible_facts': {u'ovirt_hosts': []}} >> 2018-03-24 12:02:07,385+0100 ERROR >> otopi.ovirt_hosted_engine_setup.ansible_utils >> ansible_utils._process_output:98 fatal: [loc >> alhost]: FAILED! 
=> {"ansible_facts": {"ovirt_hosts": []}, >> "attempts": 150, "changed": false} >> 2018-03-24 12:02:07,587+0100 DEBUG >> otopi.ovirt_hosted_engine_setup.ansible_utils >> ansible_utils._process_output:94 PLAY RECAP >> [engine01.wodel.wd] : ok: 15 changed: 8 unreachable: 0 skipped: 4 >> failed: 0 >> 2018-03-24 12:02:07,688+0100 DEBUG >> otopi.ovirt_hosted_engine_setup.ansible_utils >> ansible_utils._process_output:94 PLAY RECAP >> [localhost] : ok: 41 changed: 14 unreachable: 0 skipped: 3 failed: 1 >> 2018-03-24 12:02:07,789+0100 DEBUG >> otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 >> ansible-playbook rc: 2 >> 2018-03-24 12:02:07,790+0100 DEBUG >> otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 >> ansible-playbook stdou >> t: >> 2018-03-24 12:02:07,791+0100 DEBUG >> otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:189 >> ?to retry, use: --limi >> t @/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry >> >> 2018-03-24 12:02:07,791+0100 DEBUG >> otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 >> ansible-playbook stder >> r: >> 2018-03-24 12:02:07,792+0100 DEBUG otopi.context >> context._executeMethod:143 method exception >> Traceback (most recent call last): >> ?File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, >> in _executeMethod >> ???method['method']() >> ?File >> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py", >> line 186, in _closeup >> ???r = ah.run() >> ?File >> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", >> line 194, in run >> ???raise RuntimeError(_('Failed executing ansible-playbook')) >> RuntimeError: Failed executing ansible-playbook >> 2018-03-24 12:02:07,795+0100 ERROR otopi.context >> context._executeMethod:152 Failed to execute stage 'Closing up': >> Failed exec >> uting ansible-playbook >> >> >> any idea???? >> >> >> Regards >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexeynikolaev.post at yandex.ru Sat Mar 24 16:18:40 2018 From: alexeynikolaev.post at yandex.ru (=?utf-8?B?0J3QuNC60L7Qu9Cw0LXQsiDQkNC70LXQutGB0LXQuQ==?=) Date: Sat, 24 Mar 2018 19:18:40 +0300 Subject: [ovirt-users] FC LUN In-Reply-To: References: <439dffd7724541748fe3564493cac704@msk1-exchmb07.interrao.ru> Message-ID: <5279901521908320@web30j.yandex.ru> An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Sat Mar 24 19:08:14 2018 From: rightkicktech at gmail.com (Alex K) Date: Sat, 24 Mar 2018 21:08:14 +0200 Subject: [ovirt-users] Which hardware are you using for oVirt In-Reply-To: <1701ED12-9ED4-46B9-96E5-1FD3E32419DD@gmail.com> References: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com> <1701ED12-9ED4-46B9-96E5-1FD3E32419DD@gmail.com> Message-ID: I have 2 or 3 node clusters with following hardware (all with self-hosted engine) : 2 node cluster: RAM: 64 GB per host CPU: 8 cores per host Storage: 4x 1TB SAS in RAID10 NIC: 2x Gbit VMs: 20 The above, although I would like to have had a third NIC for gluster storage redundancy, it is running smoothly for quite some time and without performance issues. The VMs it is running are not high on IO (mostly small Linux servers). 
3 node clusters:
RAM: 32 GB per host
CPU: 16 cores per host
Storage: 5x 600GB in RAID5 (not ideal but I had to gain some storage space without purchasing extra disks)
NIC: 6x Gbit
VMs: less than 10 large Windows VMs (Windows 2016 server and Windows 10)

For your setup (30 VMs) I would rather go with RAID10 SAS disks and at least a dual 10Gbit NIC dedicated to the gluster traffic only.

Alex

On Sat, Mar 24, 2018 at 1:24 PM, Andy Michielsen wrote:

> Hello Andrei,
>
> Thank you very much for sharing info on your hardware setup. Very informative.
>
> At this moment I have my ovirt engine on our vmware environment which is fine for good backup and restore.
>
> I have 4 nodes running now, all different in make and model, with local storage, and it works but lacks performance a bit.
>
> But I can get my hands on some old Dell R415s with 96 GB of RAM and 2 quad-cores and 6 x 1 Gb NICs. They all come with 2 x 146 GB 15000 rpm hard disks. This isn't bad, but I will add more RAM for starters. Also I would like to have some good redundant storage for this too and the servers have limited space to add that.
>
> Hopefully others will also share their setups and experience like you did.
>
> Kind regards.
>
> On 24 Mar 2018, at 10:35, Andrei Verovski wrote:
>
> Hi,
>
> HP ProLiant DL380, dual Xeon
> 120 GB RAID L1 for system
> 2 TB RAID L10 for VM disks
> 5 VMs, 3 Linux, 2 Windows
> Total CPU load most of the time is low, high level of activity related to disk.
> Hosted engine runs as a KVM appliance on SuSE, can be easily moved, backed up, copied, experimented with, etc.
>
> You'll have to use servers with more RAM and storage than mine.
> More than one NIC is required if some of your VMs are on different subnets, e.g. 1 in the internal zone and a 2nd on the DMZ.
> For your setup 10 GB NICs + L3 switch for ovirtmgmt.
>
> BTW, I would suggest having several separate hardware RAIDs unless you have SSD, otherwise the limit of the disk system I/O will be a bottleneck. Consider an SSD L1 RAID for heavy-loaded databases.
>
> *Please note many cheap SSDs do NOT work reliably with SAS controllers even in SATA mode*.
>
> For example, I tried to use 2 x WD Green SSDs configured as RAID L1 for the OS.
> It was possible to install the system, yet under heavy load simulated with iozone the disk system froze, rendering the OS unbootable.
> The same crash was experienced with a 512GB KingFast SSD connected to a broadcom/AMCC SAS RAID card.
>
> On 03/24/2018 10:33 AM, Andy Michielsen wrote:
>
> Hi all,
>
> Not sure if this is the place to be asking this, but I was wondering which hardware you all are using and why, in order for me to see what I would be needing.
>
> I would like to set up a HA cluster consisting of 3 hosts to be able to run 30 VMs.
> The engine, I can run on another server. The hosts can be fitted with the storage and share the space through glusterfs. I would think I will be needing at least 3 NICs but would be able to install ovn. (Are 1 Gb NICs sufficient?)
>
> Any input you guys would like to share would be greatly appreciated.
>
> Thanks,
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
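Since several people in this thread are sizing 3-node gluster setups, here is a rough sketch of creating the kind of replica 3 (arbiter 1) volume discussed above. Hostnames, brick paths and the volume name are assumptions for illustration, not taken from anyone's actual setup:

# Run from host1; assumes bricks are already formatted and mounted on all hosts.
gluster peer probe host2
gluster peer probe host3
gluster volume create data replica 3 arbiter 1 \
    host1:/gluster_bricks/data/brick \
    host2:/gluster_bricks/data/brick \
    host3:/gluster_bricks/data/brick
# Apply the virt option group commonly used for VM stores (assumes the
# group file shipped with the gluster packages is present):
gluster volume set data group virt
gluster volume start data

With a dedicated storage NIC, the hostnames used in the probes would resolve to addresses on the storage network, keeping gluster traffic off the management network.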
From ishaby at redhat.com Sun Mar 25 05:16:51 2018
From: ishaby at redhat.com (Idan Shaby)
Date: Sun, 25 Mar 2018 08:16:51 +0300
Subject: [ovirt-users] Disk upload cancel/remove
In-Reply-To:
References:
Message-ID:

Hi Alex,

How did you cancel the uploads? Can you please attach the engine log so we can see what happened there?

Thanks,
Idan

On Thu, Mar 22, 2018 at 6:39 PM, Alex K wrote:

> After 48 hours it seems the issue has been resolved.
> No disks are shown with status "Transferring via API"
>
> Alex
>
> On Wed, Mar 21, 2018 at 9:33 AM, Alex K wrote:
>
>> Even after rebooting the engine the disks are still there with the same status "Transferring via API"
>>
>> Alex
>>
>> On Tue, Mar 20, 2018 at 11:49 AM, Eyal Shenitzky wrote:
>>
>>> Idan/Daniel,
>>>
>>> Can you please take a look?
>>>
>>> Thanks,
>>>
>>> On Tue, Mar 20, 2018 at 11:44 AM, Alex K wrote:
>>>
>>>> Hi All,
>>>>
>>>> I was trying to upload a VM disk to a data storage domain using a python script.
>>>> I cancelled the upload twice, and on the third attempt the upload was successful, but I see two disks from the previous attempts with status "transferring via API" (see attached). They have been in this status for more than 8 hours and I cannot remove them.
>>>>
>>>> Is there any way to clean them from the disks inventory?
>>>>
>>>> I am using ovirt 4.1.9.1-1.el7.centos with self hosted engine on 3 nodes.
>>>>
>>>> Thanx,
>>>> Alex
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>> --
>>> Regards,
>>> Eyal Shenitzky

[Attachment: ovirt-disk-upload.png]

From didi at redhat.com Sun Mar 25 05:36:52 2018
From: didi at redhat.com (Yedidyah Bar David)
Date: Sun, 25 Mar 2018 08:36:52 +0300
Subject: [ovirt-users] Can't Add Host To New Hosted Engine - "Server is already part of another cluster"
In-Reply-To:
References:
Message-ID:

On Fri, Mar 23, 2018 at 4:10 AM, Adam Chesterton wrote:
> Hi Sahina and Yedidyah,
>
> Thanks for the information and offers of help. I am pleased to report that I've resolved the issue I had, thanks to the prompting your requests gave me, and everything is functional. I shall attempt to explain what happened and how I fixed it.

Glad to hear that :-) Thanks for the report!

You might want to open bugs about the misleading error messages/texts, or anything else you think might have helped you more quickly understand what your problem was and/or lead you in the right direction.

Best regards,

>
> When I looked at the Gluster peer status, Host01 was rejected by Host02 and Host03 (I did check this back at the start, but didn't check it again and things had changed). I followed the Gluster docs to fix the rejected peer (http://gluster-documentations.readthedocs.io/en/latest/Administrator%20Guide/Resolving%20Peer%20Rejected/).
>
> This then gave me a different error message when trying to add Host02 or Host03, "no available server in the cluster to probe the new server", which only further confirmed that it was a Gluster issue, as was suggested.
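For anyone else hitting the same "Peer Rejected" state, the procedure in the Gluster doc linked above boils down to roughly the following on the rejected node. A hedged sketch only; check it against the docs for your Gluster version before running it:

# On the rejected node: stop glusterd and clear its state, keeping the node UUID.
systemctl stop glusterd
cd /var/lib/glusterd
find . -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
systemctl start glusterd
# Probe a healthy peer so peer and volume info is re-synced, then restart once more:
gluster peer probe <good-peer>
systemctl restart glusterd
gluster peer status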
> After some hair-pulling and wondering, I finally discovered that, in Compute > Hosts > Host01 under the General tab, it was complaining that Gluster was not active (even though it was running). I clicked the action item link to resolve that, and oVirt appeared to start actually managing the Gluster service. I could then add my other hosts, import the existing storage domains, and everything appears good now.
>
> Thanks again for the assistance and prompting me towards the right places to help me resolve it.
>
> Regards,
> Adam
>
> On Thu, 22 Mar 2018 at 20:48 Sahina Bose wrote:
>>
>> On Wed, Mar 21, 2018 at 12:33 PM, Yedidyah Bar David wrote:
>>>
>>> On Wed, Mar 21, 2018 at 8:17 AM, Adam Chesterton wrote:
>>> > Hi Everyone,
>>> >
>>> > I'm running a 3-host hyperconverged Gluster setup for testing (on some old desktops), and recently the hosted engine died on me, so I have attempted to just clean up my existing hosts, leaving Gluster configured, and re-deploy a fresh hosted engine setup on them.
>>> >
>>> > I have successfully got the first host set up and the hosted engine is running on that host. However, when I try to add the other two hosts via the web GUI (as I can no longer add them via CLI) I get this error: "Error while executing action: Server XXXXX is already part of another cluster."
>>>
>>> This message might be a result of the host's participation in a gluster cluster, not a hosted-engine cluster. Please share engine.log from the engine.
>>>
>>> Adding Sahina.
>>
>> Yes, it does look like that.
>>
>> Can you share details of
>> # gluster peer status
>> from your 3 nodes
>>
>> And also the address of the first host in the oVirt engine and below from the HE engine:
>>
>> # su - postgres -c "psql -d engine -c \"select * from gluster_server; \""
>>
>>> >
>>> > I've tried to find where this would still be configured on the two other hosts, but I cannot find it anywhere.
>>>
>>> If it's only about hosted-engine, you can check /etc/ovirt-hosted-engine .
>>>
>>> You might try using ovirt-hosted-engine-cleanup, although it was not designed for such cases.
>>>
>>> >
>>> > Does anyone know how I can stop these two hosts from thinking they are still in a cluster? Or, does anyone have some information that might help, or am I going to just have to start a fresh CentOS install?
>>>
>>> If you do not need the data, a reinstall might be simplest.
>>> If you do, not sure what's your exact plan.
>>> You intend to rely on the replication? So that you reinstall one host, add it, wait until syncing finished, then reinstall the other? Might work, no idea.
>>>
>>> Best regards,
>>> --
>>> Didi

--
Didi

From didi at redhat.com Sun Mar 25 05:45:32 2018
From: didi at redhat.com (Yedidyah Bar David)
Date: Sun, 25 Mar 2018 08:45:32 +0300
Subject: [ovirt-users] Workflow after restoring engine from backup
In-Reply-To: <9abbbd52e96b4cd1949a37e863130a13@eps.aero>
References: <831f30ed018b4739a2491cbd24f2429d@eps.aero> <9abbbd52e96b4cd1949a37e863130a13@eps.aero>
Message-ID:

On Fri, Mar 23, 2018 at 10:35 AM, Sven Achtelik wrote:
> It looks like I can't get a chance to shut down the HA VMs. I checked the restore log and it did mention that it changed the HA-VM entries. Just to make sure, I looked at the DB, and for the VMs in question it looks like this.
> engine=# SELECT vm_guid,status,vm_host,exit_status,exit_reason FROM vm_dynamic WHERE vm_guid IN (SELECT vm_guid FROM vm_static WHERE auto_startup='t' AND lease_sd_id IS NULL);
> vm_guid | status | vm_host | exit_status | exit_reason
> --------------------------------------+--------+-----------------+-------------+-------------
> 8733d4a6-0844-xxxx-804f-6b919e93e076 | 0 | DXXXX | 2 | -1
> 4eeaa622-17f9-xxxx-b99a-cddb3ad942de | 0 | xxxxAPP | 2 | -1
> fbbdc0a0-23a4-4d32-xxxx-a35c59eb790d | 0 | xxxxDB0 | 2 | -1
> 45a4e7ce-19a9-4db9-xxxxx-66bd1b9d83af | 0 | xxxxxWOR | 2 | -1
> (4 rows)
>
> Should that be enough to have a safe start of the engine without any HA action kicking in?

Looks ok, but check also run_on_vds and migrating_to_vds. See also bz 1446055.

Best regards,

>
> -----Ursprüngliche Nachricht-----
> Von: Yedidyah Bar David [mailto:didi at redhat.com]
> Gesendet: Montag, 19. März 2018 10:18
> An: Sven Achtelik
> Cc: users at ovirt.org
> Betreff: Re: [ovirt-users] Workflow after restoring engine from backup
>
> On Mon, Mar 19, 2018 at 11:03 AM, Sven Achtelik wrote:
>> Hi Didi,
>>
>> my backups were taken with the engine-backup utility. I have 3 data centers, two of them with just one host and the third one with 3 hosts running the engine. The backup, three days old, was taken on engine version 4.1 (4.1.7) and the restored engine is running on 4.1.9.
>
> Since the bug I mentioned was fixed in 4.1.3, you should be covered.
>
>> I have three HA VMs that would be affected. All others are just normal VMs. Sounds like it would be the safest to shut down the HA VMs to make sure that nothing happens?
>
> If you can have downtime, I agree it sounds safer to shut down the VMs.
>
>> Or can I disable the HA action in the DB for now?
>
> No need to. If you restored with 4.1.9 engine-backup, it should have done this for you. If you still have the restore log, you can verify this by checking it. It should contain 'Resetting HA VM status', and then the result of the sql that it ran.
>
> Best regards,
>
>>
>> Thank you,
>>
>> Sven
>>
>> Von meinem Samsung Galaxy Smartphone gesendet.
>>
>> -------- Ursprüngliche Nachricht --------
>> Von: Yedidyah Bar David
>> Datum: 19.03.18 07:33 (GMT+01:00)
>> An: Sven Achtelik
>> Cc: users at ovirt.org
>> Betreff: Re: [ovirt-users] Workflow after restoring engine from backup
>>
>> On Sun, Mar 18, 2018 at 11:45 PM, Sven Achtelik wrote:
>>> Hi All,
>>>
>>> I had an issue with the storage that hosted my engine VM. The disk got corrupted and I needed to restore the engine from a backup.
>>
>> How did you back up, and how did you restore?
>>
>> Which version was used for each?
>>
>>> That worked as expected, I just didn't start the engine yet.
>>
>> OK.
>>
>>> I know that after the backup was taken some machines were migrated around before the engine disks failed.
>>
>> Are these machines HA?
>>
>>> My question is what will happen once I start the engine service which has the restored backup on it? Will it query the hosts for the running VMs
>>
>> It will, but HA machines are handled differently.
>>
>> See also:
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1441322
>> https://bugzilla.redhat.com/show_bug.cgi?id=1446055
>>
>>> or will it assume that the VMs are still on the hosts as they resided at the point of backup?
>>
>> It does, initially, but then updates status according to what it gets from hosts.
>> But polling the hosts takes time, especially if you have many, and HA policy might require faster handling. So if it polls first a host that had a machine on it during backup, and sees that it's gone, and didn't yet poll the new host, HA handling starts immediately, which eventually might lead to starting the VM on another host.
>>
>> To prevent that, the fixes to above bugs make the restore process mark HA VMs that do not have leases on the storage as "dead".
>>
>>> Would I need to change the DB manually to let the engine know where VMs are up at this point?
>>
>> You might need to, if you have HA VMs and a too-old version of restore.
>>
>>> What will happen to HA VMs? I feel that it might try to start them a second time. My biggest issue is that I can't get a service window to shut down all VMs and then let them be restarted by the engine.
>>>
>>> Is there a known workflow for that?
>>
>> I am not aware of a tested procedure for handling above if you have a too-old version, but you can check the patches linked from above bugs and manually run the SQL command(s) they include. They are essentially comment 4 of the first bug.
>>
>> Good luck and best regards,
>> --
>> Didi

--
Didi

From ahadas at redhat.com Sun Mar 25 06:29:18 2018
From: ahadas at redhat.com (Arik Hadas)
Date: Sun, 25 Mar 2018 09:29:18 +0300
Subject: [ovirt-users] Problem to upgrade level cluster to 4.1
In-Reply-To:
References:
Message-ID:

On Fri, Mar 23, 2018 at 9:09 PM, Marcelo Leandro wrote:

> Hello,
> I am trying to update the cluster level but I got this error message:
>
> Error while executing action: Update of cluster compatibility version failed because there are VMs/Templates [VPS-NAME01, VPS-NAME02, VPS-NAME03] with incorrect configuration. To fix the issue, please go to each of them, edit and press OK. If the save does not pass, fix the dialog validation.
>
> 23/03/2018 15:03:07
> Cannot update compatibility version of Vm/Template: [VPS-NAME01], Message: [No Message]
> 23/03/2018 15:03:07
> Cannot update compatibility version of Vm/Template: [VPS-NAME02], Message: [No Message]
> 23/03/2018 15:03:07
> Cannot update compatibility version of Vm/Template: [VPS-NAME03], Message: [No Message]
>
> I have already opened each VM's edit dialog and closed it with the OK button, as the error message instructs:
> To fix the issue, please go to each of them, edit and press OK. If the save does not pass, fix the dialog validation.
>
> But no error is returned when I save.
>
> Can anyone help me?

Can you please share the engine.log?

>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

From statsenko_ky at interrao.ru Sun Mar 25 07:23:55 2018
From: statsenko_ky at interrao.ru (Konstantin Statsenko)
Date: Sun, 25 Mar 2018 07:23:55 +0000
Subject: [ovirt-users] FC LUN
In-Reply-To: <5279901521908320@web30j.yandex.ru>
References: <439dffd7724541748fe3564493cac704@msk1-exchmb07.interrao.ru> <5279901521908320@web30j.yandex.ru>
Message-ID:

Zoning is correct. Multipath shows this LUN on every host in the DC.

From: Alexey Nikolaev [mailto:alexeynikolaev.post at yandex.ru]
Sent: Saturday, March 24, 2018 7:19 PM
To: Konstantin Statsenko ; users
Subject: Re: [ovirt-users] FC LUN

Hi, Konstantin. What does the "multipath -ll" command say?
Execute it on every node that has access to the SAN. Is there any zoning on the SAN fabrics?

--
Sent from Yandex.Mail mobile

24.03.2018, 14:38, "Konstantin Statsenko" wrote:

Anyone ? Any ideas ?

From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On Behalf Of Konstantin Statsenko
Sent: Friday, March 23, 2018 7:53 PM
To: users
Subject: [ovirt-users] FC LUN

Hello!

Can you, please, help ?

There is a strange error while trying to connect an FC LUN from an old SAN storage system to oVirt 4.1.9 hosts. At the same time there are no problems with LUNs from other storage systems.

The error in vdsm.log is:

2018-03-23 16:41:55,920+0300 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.09 seconds (__init__:539)
2018-03-23 16:41:55,965+0300 WARN (jsonrpc/1) [storage.LVM] lvm pvs failed: 5 [] [' Failed to find device for physical volume "/dev/mapper/320080013789309c0".'] (lvm:323)
2018-03-23 16:41:55,966+0300 WARN (jsonrpc/1) [storage.HSM] getPV failed for guid: 320080013789309c0 (hsm:1966)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 1963, in _getDeviceList
    pv = lvm.getPV(guid)
  File "/usr/share/vdsm/storage/lvm.py", line 853, in getPV
    raise se.InaccessiblePhysDev((pvName,))
InaccessiblePhysDev: Multipath cannot access physical device(s): "devices=(u'320080013789309c0',)"

Thank you.

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

From vladkopy at gmail.com Thu Mar 22 15:45:12 2018
From: vladkopy at gmail.com (Vlad Kopylov)
Date: Thu, 22 Mar 2018 11:45:12 -0400
Subject: [ovirt-users] [Gluster-users] GlusterFS performance with only one drive per host?
In-Reply-To:
References:
Message-ID:

The bottleneck is definitely not the disk speed with glusterFS, no point in using SSDs for bricks whatsoever

-v

On Thu, Mar 22, 2018 at 6:01 AM, Sahina Bose wrote:
>
> On Mon, Mar 19, 2018 at 5:57 PM, Jayme wrote:
>>
>> I'm spec'ing a new oVirt build using three Dell R720's w/ 256GB. I'm considering storage options. I don't have a requirement for high amounts of storage, I have a little over 1TB to store but want some overhead so I'm thinking 2TB of usable space would be sufficient.
>>
>> I've been doing some research on Micron 1100 2TB ssd's and they seem to offer a lot of value for the money. I'm considering using smaller cheaper SSDs for boot drives and using one 2TB micron SSD in each host for a glusterFS replica 3 setup (on the fence about using an arbiter, I like the extra redundancy replicate 3 will give me).
>>
>> My question is, would I see a performance hit using only one drive in each host with glusterFS or should I try to add more physical disks. Such as 6 1TB drives instead of 3 2TB drives?
>
> [Adding gluster-users for inputs here]
>
>>
>> Also one other question. I've read that gluster can only be done in groups of three. Meaning you need 3, 6, or 9 hosts. Is this true? If I had an operational replicate 3 glusterFS setup and wanted to add more capacity I would have to add 3 more hosts, or is it possible for me to add a 4th host in to the mix for extra processing power down the road?
>
> In oVirt, we support replica 3 or replica 3 with arbiter (where one of the 3 bricks is a low storage arbiter brick). To expand storage, you would need to add in multiples of 3 bricks.
However if you only want to expand compute > capacity in your HC environment, you can add a 4th node. > >> >> Thanks! >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > > > _______________________________________________ > Gluster-users mailing list > Gluster-users at gluster.org > http://lists.gluster.org/mailman/listinfo/gluster-users From mpillai at redhat.com Fri Mar 23 03:03:16 2018 From: mpillai at redhat.com (Manoj Pillai) Date: Fri, 23 Mar 2018 08:33:16 +0530 Subject: [ovirt-users] [Gluster-users] GlusterFS performance with only one drive per host? In-Reply-To: References: Message-ID: On Thu, Mar 22, 2018 at 3:31 PM, Sahina Bose wrote: > > > On Mon, Mar 19, 2018 at 5:57 PM, Jayme wrote: > >> I'm spec'ing a new oVirt build using three Dell R720's w/ 256GB. I'm >> considering storage options. I don't have a requirement for high amounts >> of storage, I have a little over 1TB to store but want some overhead so I'm >> thinking 2TB of usable space would be sufficient. >> >> I've been doing some research on Micron 1100 2TB ssd's and they seem to >> offer a lot of value for the money. I'm considering using smaller cheaper >> SSDs for boot drives and using one 2TB micron SSD in each host for a >> glusterFS replica 3 setup (on the fence about using an arbiter, I like the >> extra redundancy replicate 3 will give me). >> >> My question is, would I see a performance hit using only one drive in >> each host with glusterFS or should I try to add more physical disks. Such >> as 6 1TB drives instead of 3 2TB drives? >> > It is possible. With SSDs the rpc layer can become the bottleneck with some workloads, especially if there are not enough connections out to the server side. We had experimented with a multi-connection model for this reason: https://review.gluster.org/#/c/19133/. -- Manoj > > [Adding gluster-users for inputs here] > > >> Also one other question. I've read that gluster can only be done in >> groups of three. Meaning you need 3, 6, or 9 hosts. Is this true? If I >> had an operational replicate 3 glusterFS setup and wanted to add more >> capacity I would have to add 3 more hosts, or is it possible for me to add >> a 4th host in to the mix for extra processing power down the road? >> > > In oVirt, we support replica 3 or replica 3 with arbiter (where one of the > 3 bricks is a low storage arbiter brick). To expand storage, you would need > to add in multiples of 3 bricks. However if you only want to expand compute > capacity in your HC environment, you can add a 4th node. > > >> Thanks! >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > _______________________________________________ > Gluster-users mailing list > Gluster-users at gluster.org > http://lists.gluster.org/mailman/listinfo/gluster-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mpillai at redhat.com Sat Mar 24 17:56:14 2018 From: mpillai at redhat.com (Manoj Pillai) Date: Sat, 24 Mar 2018 23:26:14 +0530 Subject: [ovirt-users] [Gluster-users] GlusterFS performance with only one drive per host? In-Reply-To: References: Message-ID: My take is that unless you have loads of data and are trying to optimize for cost/TB, HDDs are probably not the right choice. This is particularly true for random I/O workloads for which HDDs are really quite bad. 
I'd recommend a recent gluster release, and some tuning because the default settings are not optimized for performance. Some options to consider: client.event-threads server.event-threads cluster.choose-local performance.client-io-threads You can toggle the last two and see what works for you. You'd probably need to set event-threads to 4 or more. Ideally you'd tune some of the thread pools based on observed bottlenecks in collected stats. top (top -bHd 10 > top_threads.out.txt) is great for this. Using 6 small drives/bricks instead of 3 is also a good idea to reduce likelihood of rpc bottlenecks. There has been an effort to improve gluster performance over fast SSDs. Hence the recommendation to try with a recent release. You can also check in on some of the issues being worked on: https://github.com/gluster/glusterfs/issues/412 https://github.com/gluster/glusterfs/issues/410 -- Manoj On Sat, Mar 24, 2018 at 4:14 AM, Jayme wrote: > Do you feel that SSDs are worth the extra cost or am I better off using > regular HDDs? I'm looking for the best performance I can get with glusterFS > > On Fri, Mar 23, 2018 at 12:03 AM, Manoj Pillai wrote: > >> >> >> On Thu, Mar 22, 2018 at 3:31 PM, Sahina Bose wrote: >> >>> >>> >>> On Mon, Mar 19, 2018 at 5:57 PM, Jayme wrote: >>> >>>> I'm spec'ing a new oVirt build using three Dell R720's w/ 256GB. I'm >>>> considering storage options. I don't have a requirement for high amounts >>>> of storage, I have a little over 1TB to store but want some overhead so I'm >>>> thinking 2TB of usable space would be sufficient. >>>> >>>> I've been doing some research on Micron 1100 2TB ssd's and they seem to >>>> offer a lot of value for the money. I'm considering using smaller cheaper >>>> SSDs for boot drives and using one 2TB micron SSD in each host for a >>>> glusterFS replica 3 setup (on the fence about using an arbiter, I like the >>>> extra redundancy replicate 3 will give me). >>>> >>>> My question is, would I see a performance hit using only one drive in >>>> each host with glusterFS or should I try to add more physical disks. Such >>>> as 6 1TB drives instead of 3 2TB drives? >>>> >>> >> It is possible. With SSDs the rpc layer can become the bottleneck with >> some workloads, especially if there are not enough connections out to the >> server side. We had experimented with a multi-connection model for this >> reason: https://review.gluster.org/#/c/19133/. >> >> -- Manoj >> >>> >>> [Adding gluster-users for inputs here] >>> >>> >>>> Also one other question. I've read that gluster can only be done in >>>> groups of three. Meaning you need 3, 6, or 9 hosts. Is this true? If I >>>> had an operational replicate 3 glusterFS setup and wanted to add more >>>> capacity I would have to add 3 more hosts, or is it possible for me to add >>>> a 4th host in to the mix for extra processing power down the road? >>>> >>> >>> In oVirt, we support replica 3 or replica 3 with arbiter (where one of >>> the 3 bricks is a low storage arbiter brick). To expand storage, you would >>> need to add in multiples of 3 bricks. However if you only want to expand >>> compute capacity in your HC environment, you can add a 4th node. >>> >>> >>>> Thanks! 
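To make the tuning advice above concrete, applying those options might look like the sketch below. The volume name "data" and the values are assumptions to adjust per workload; the option names are the ones Manoj lists:

gluster volume set data client.event-threads 4
gluster volume set data server.event-threads 4
# Manoj suggests toggling these two and measuring what works for you:
gluster volume set data cluster.choose-local off
gluster volume set data performance.client-io-threads on
# Per-thread CPU stats while the workload runs, as suggested:
top -bHd 10 > top_threads.out.txt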
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users

From andy.michielsen at gmail.com Sun Mar 25 07:36:22 2018
From: andy.michielsen at gmail.com (Andy Michielsen)
Date: Sun, 25 Mar 2018 09:36:22 +0200
Subject: [ovirt-users] Which hardware are you using for oVirt
In-Reply-To:
References: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com> <1701ED12-9ED4-46B9-96E5-1FD3E32419DD@gmail.com>
Message-ID: <545F49C7-9AFE-4BE3-B03B-3FB1DEE1F09E@gmail.com>

Hello Alex,

Thanks for sharing. Much appreciated.

I believe my setup would need 96 GB of RAM in each host, and would need at least about 3 TB of storage. Probably 4 TB would be better if I want to work with snapshots. (Will be running mostly Windows 2016 servers or Windows 10 desktops with 6 GB of RAM and 100 GB of disks.)

I agree that a 10 Gb network for storage would be very beneficial.

Now if I can figure out how to set up GlusterFS on a 3-node cluster in oVirt 4.2 just for the data storage, I'm golden to get started. :-)

Kind regards.

> On 24 Mar 2018, at 20:08, Alex K wrote:
>
> I have 2 or 3 node clusters with following hardware (all with self-hosted engine) :
>
> 2 node cluster:
> RAM: 64 GB per host
> CPU: 8 cores per host
> Storage: 4x 1TB SAS in RAID10
> NIC: 2x Gbit
> VMs: 20
>
> The above, although I would like to have had a third NIC for gluster storage redundancy, is running smoothly for quite some time and without performance issues.
> The VMs it is running are not high on IO (mostly small Linux servers).
>
> 3 node clusters:
> RAM: 32 GB per host
> CPU: 16 cores per host
> Storage: 5x 600GB in RAID5 (not ideal but I had to gain some storage space without purchasing extra disks)
> NIC: 6x Gbit
> VMs: less than 10 large Windows VMs (Windows 2016 server and Windows 10)
>
> For your setup (30 VMs) I would rather go with RAID10 SAS disks and at least a dual 10Gbit NIC dedicated to the gluster traffic only.
>
> Alex
>
>> On Sat, Mar 24, 2018 at 1:24 PM, Andy Michielsen wrote:
>> Hello Andrei,
>>
>> Thank you very much for sharing info on your hardware setup. Very informative.
>>
>> At this moment I have my ovirt engine on our vmware environment which is fine for good backup and restore.
>>
>> I have 4 nodes running now, all different in make and model, with local storage, and it works but lacks performance a bit.
>>
>> But I can get my hands on some old Dell R415s with 96 GB of RAM and 2 quad-cores and 6 x 1 Gb NICs. They all come with 2 x 146 GB 15000 rpm hard disks. This isn't bad, but I will add more RAM for starters. Also I would like to have some good redundant storage for this too and the servers have limited space to add that.
>>
>> Hopefully others will also share their setups and experience like you did.
>>
>> Kind regards.
>>
>>> On 24 Mar 2018, at 10:35, Andrei Verovski wrote:
>>>
>>> Hi,
>>>
>>> HP ProLiant DL380, dual Xeon
>>> 120 GB RAID L1 for system
>>> 2 TB RAID L10 for VM disks
>>> 5 VMs, 3 Linux, 2 Windows
>>> Total CPU load most of the time is low, high level of activity related to disk.
>>> Hosted engine runs as a KVM appliance on SuSE, can be easily moved, backed up, copied, experimented with, etc.
>>>
>>> You'll have to use servers with more RAM and storage than mine.
>>> More than one NIC is required if some of your VMs are on different subnets, e.g. 1 in the internal zone and a 2nd on the DMZ.
>>> For your setup 10 GB NICs + L3 switch for ovirtmgmt.
>>>
>>> BTW, I would suggest having several separate hardware RAIDs unless you have SSD, otherwise the limit of the disk system I/O will be a bottleneck. Consider an SSD L1 RAID for heavy-loaded databases.
>>>
>>> Please note many cheap SSDs do NOT work reliably with SAS controllers even in SATA mode.
>>>
>>> For example, I tried to use 2 x WD Green SSDs configured as RAID L1 for the OS.
>>> It was possible to install the system, yet under heavy load simulated with iozone the disk system froze, rendering the OS unbootable.
>>> The same crash was experienced with a 512GB KingFast SSD connected to a broadcom/AMCC SAS RAID card.
>>>
>>>> On 03/24/2018 10:33 AM, Andy Michielsen wrote:
>>>> Hi all,
>>>>
>>>> Not sure if this is the place to be asking this, but I was wondering which hardware you all are using and why, in order for me to see what I would be needing.
>>>>
>>>> I would like to set up a HA cluster consisting of 3 hosts to be able to run 30 VMs.
>>>> The engine, I can run on another server. The hosts can be fitted with the storage and share the space through glusterfs. I would think I will be needing at least 3 NICs but would be able to install ovn. (Are 1 Gb NICs sufficient?)
>>>>
>>>> Any input you guys would like to share would be greatly appreciated.
>>>>
>>>> Thanks,
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users

From ahadas at redhat.com Sun Mar 25 07:53:33 2018
From: ahadas at redhat.com (Arik Hadas)
Date: Sun, 25 Mar 2018 10:53:33 +0300
Subject: [ovirt-users] Cannot remove VM
In-Reply-To: <4169fb5e-eb62-da31-b2a9-9b5a41c8ff47@uam.es>
References: <4169fb5e-eb62-da31-b2a9-9b5a41c8ff47@uam.es>
Message-ID:

On Wed, Mar 21, 2018 at 11:54 AM, Angel R. Gonzalez wrote:

> Hi,
>
> I can't edit/remove/modify a VM. It always shows the message:
> "Cannot remove VM. Related operation is currently in progress. Please try again later"

That means that the VM is locked using an in-memory lock (rather than in the database, so the script for unlocking entities would not help here).
The message you get was deprecated; when a lock is taken before executing a long operation we generate a more informative message. So that is almost certainly caused by a bug, and we have fixed various bugs that could have led to that. What version of oVirt do you use? Could you share the engine.log that covers the latest operation that was made on this VM?
Anyway, restarting the engine would most probably release the lock of that VM and allow removing it.

>
> I also used the REST API
> curl -k -u admin at internal:passwd -X DELETE https://engine/ovirt-engine/api/vms/XXXXXXXX
> and the output message is the same.

Yeah, it doesn't matter what client you use, since this error is generated on the back-end side.

>
> How can I execute actions on the VM, or how can I remove it from the cluster?
>
> Thanks in advance.
>
> Ángel González.
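For completeness, the two remedies implied above might look like this on the engine machine. A hedged sketch; as noted, the unlock script only helps with database-side locks, while an in-memory lock needs an engine restart (script flags may vary by version):

# Database-side locks (will not help for in-memory locks):
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t vm -q      # list locked VMs
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t vm <vm_id>
# In-memory lock: restart the engine service to release it:
systemctl restart ovirt-engine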
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

From ahadas at redhat.com Sun Mar 25 08:44:42 2018
From: ahadas at redhat.com (Arik Hadas)
Date: Sun, 25 Mar 2018 11:44:42 +0300
Subject: [ovirt-users] Fwd: Can't add/remove usb device to VM
In-Reply-To:
References:
Message-ID:

On Mon, Feb 12, 2018 at 6:57 PM, Jon bae wrote:

> A little update:
> I found out that I have to activate *Hostdev Passthrough* (is this really necessary for USB devices?). Now I see all the devices and I can also add a USB device.

Theoretically, no, it is not needed. Practically, it is needed in previous versions of 4.2 due to a bug that would be solved in the upcoming version of 4.2 [1].

> But I'm still not able to disconnect the USB from the old VM.

Yep, that's another bug that would be solved in the upcoming version of 4.2 [2].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1548344
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1557777

>
> ---------- Forwarded message ----------
> From: Jon bae
> Date: 2018-02-12 16:14 GMT+01:00
> Subject: Can't add/remove usb device to VM
> To: users
>
> Hello,
> I run oVirt 4.2.1; in 4.1 I added a USB device to a VM and now I wanted to move this device to a different VM.
> But I found out that I'm not able to remove this device, nor am I able to add any device to any VM. The device list is empty.
>
> I found a bug report that describes this error:
> https://bugzilla.redhat.com/show_bug.cgi?id=1531847
>
> Is there a solution for that?
>
> The USB device is a hardware dongle and it is very important for us to change this.
>
> Any workaround is welcome!

If you can't wait for the upcoming version of 4.2 you can try removing it manually by manipulating the database (assuming there is only one host device attached to the VM):
(1) delete from vm_device where vm_id in (select vm_guid from vm_static where vm_name='') and type='hostdev';
(2) update host_device set vm_id = NULL where vm_id in (select vm_guid from vm_static where vm_name='');

>
> Regards
>
> Jonathan

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

From nsoffer at redhat.com Sun Mar 25 08:58:34 2018
From: nsoffer at redhat.com (Nir Soffer)
Date: Sun, 25 Mar 2018 08:58:34 +0000
Subject: [ovirt-users] FC LUN
In-Reply-To: <439dffd7724541748fe3564493cac704@msk1-exchmb07.interrao.ru>
References: <439dffd7724541748fe3564493cac704@msk1-exchmb07.interrao.ru>
Message-ID:

On Fri, 23 Mar 2018 at 20:04, Konstantin Statsenko <statsenko_ky at interrao.ru> wrote:

> Hello!
>
> Can you, please, help ?
>
> There is a strange error while trying to connect an FC LUN from an old SAN storage system to oVirt 4.1.9 hosts.
>
> At the same time there are no problems with LUNs from other storage systems.
>
> The error in vdsm.log is:
>
> 2018-03-23 16:41:55,920+0300 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.09 seconds (__init__:539)
> 2018-03-23 16:41:55,965+0300 WARN (jsonrpc/1) [storage.LVM] lvm pvs failed: 5 [] [' Failed to find device for physical volume "/dev/mapper/320080013789309c0".'] (lvm:323)
> 2018-03-23 16:41:55,966+0300 WARN (jsonrpc/1) [storage.HSM] getPV failed for guid: 320080013789309c0 (hsm:1966)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 1963, in _getDeviceList
>     pv = lvm.getPV(guid)
>   File "/usr/share/vdsm/storage/lvm.py", line 853, in getPV
>     raise se.InaccessiblePhysDev((pvName,))
> InaccessiblePhysDev: Multipath cannot access physical device(s): "devices=(u'320080013789309c0',)"

This error is bogus, you can ignore it. It was fixed in 4.2 by not trying to get pv info for a lun that was just added and likely will not have pv info.

Nir

>
> Thank you.
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

From sradco at redhat.com Sun Mar 25 09:01:23 2018
From: sradco at redhat.com (Shirly Radco)
Date: Sun, 25 Mar 2018 12:01:23 +0300
Subject: [ovirt-users] Any monitoring tool provided?
In-Reply-To:
References: <32f428b6-dc46-de63-6072-b1fff2eb0b28@endlessnow.com>
Message-ID:

--
SHIRLY RADCO
BI SENIOR SOFTWARE ENGINEER
Red Hat Israel
TRIED. TESTED. TRUSTED.

On Fri, Mar 23, 2018 at 10:29 PM, Marcelo Leandro wrote:

> Hello,
>
> I am trying this how-to:
>
> https://www.ovirt.org/develop/release-management/features/metrics/metrics-store-installation/
>
> but when I run this command:
> /usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_machines_for_metrics.sh --playbook=ovirt-metrics-store-installation.yml
>
> I got this error message:
> ansible-playbook: error: no such option: --playbook
>
> my version:
>
> ovirt-engine-metrics-1.0.8-1.el7.centos.noarch

Hi,

You are using an old rpm.

Please upgrade to the latest, ovirt-engine-metrics-1.1.3.3-1.el7.centos.noarch

I also added some documentation that is still in pull requests:

Add Viaq installation guide to the oVirt metrics store repo - https://github.com/oVirt/ovirt-site/pull/1551 - This one is meaningful. I introduced a lot of automation that saves time when installing.

Add prerequisites for installing OpenShift Logging - https://github.com/oVirt/ovirt-site/pull/1561

Added how to import dashboards examples to kibana - https://github.com/oVirt/ovirt-site/pull/1559

Please review them. I'll try to get them merged ASAP.

>
> Can anyone help me?
>
> 2018-03-22 16:28 GMT-03:00 Christopher Cox :
>
>> On 03/21/2018 10:41 PM, Terry hey wrote:
>>
>>> Dear all,
>>>
>>> Now, we can just read how much storage is used and the CPU usage on the oVirt dashboard.
>>> But is there any monitoring tool for monitoring virtual machines from time to time?
>>> If yes, could you guys give me the procedure?
>>
>> A possible option, for a full OS with network connectivity, is to monitor the VM like you would any other host.
>>
>> We use omd/check_mk.
>>
>> Right now there isn't an oVirt specific monitor plugin for check_mk.
>>
>> I know what I said is probably pretty obvious, but just in case.
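Putting the fix above into commands: a hedged sketch, with the script path and --playbook option taken from Marcelo's original command, and the package version from Shirly's reply:

# On the engine machine:
yum update ovirt-engine-metrics
rpm -q ovirt-engine-metrics    # expect 1.1.3.3 or later
/usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_machines_for_metrics.sh \
    --playbook=ovirt-metrics-store-installation.yml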
>> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marceloltmm at gmail.com Sun Mar 25 11:19:28 2018 From: marceloltmm at gmail.com (Marcelo Leandro) Date: Sun, 25 Mar 2018 11:19:28 +0000 Subject: [ovirt-users] Any monitoring tool provided? In-Reply-To: References: <32f428b6-dc46-de63-6072-b1fff2eb0b28@endlessnow.com> Message-ID: Good morning , Very thanks shirly, I will try. Marcelo Leandro Em Dom, 25 de mar de 2018 06:01, Shirly Radco escreveu: > > > -- > > SHIRLY RADCO > > BI SeNIOR SOFTWARE ENGINEER > > Red Hat Israel > > TRIED. TESTED. TRUSTED. > > On Fri, Mar 23, 2018 at 10:29 PM, Marcelo Leandro > wrote: > >> Hello, >> >> I am try this how to: >> >> >> https://www.ovirt.org/develop/release-management/features/metrics/metrics-store-installation/ >> >> but when i run this command: >> >> /usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_machines_for_metrics.sh --playbook=ovirt-metrics-store-installation.yml >> >> I am had this error mensagem: >> ansible-playbook: error: no such option: --playbook >> >> my version: >> >> ovirt-engine-metrics-1.0.8-1.el7.centos.noarch >> > > Hi, > > You are using an old rpm. > > Please upgrade to latest, ovirt-engine-metrics-1.1.3.3-1.el7.centos.noarch > > I also added some documentation that is still in pull request: > > Add Viaq installation guide to the oVirt metrics store repo - > https://github.com/oVirt/ovirt-site/pull/1551 - This one is meaningful. I > introduced a lot of automation that save time when installing. > > Add prerequisites for installing OpenShift Logging - > https://github.com/oVirt/ovirt-site/pull/1561 > > Added how to import dashboards examples to kibana - > https://github.com/oVirt/ovirt-site/pull/1559 > > Please review them.I'll try to get them merged asap. > > >> >> Anyone can help me? >> >> >> 2018-03-22 16:28 GMT-03:00 Christopher Cox : >> >>> On 03/21/2018 10:41 PM, Terry hey wrote: >>> >>>> Dear all, >>>> >>>> Now, we can just read how many storage used, cpu usage on ovirt >>>> dashboard. >>>> But is there any monitoring tool for monitoring virtual machine time to >>>> time? >>>> If yes, could you guys give me the procedure? >>>> >>> >>> A possible option, for a full OS with network connectivity, is to >>> monitor the VM like you would any other host. >>> >>> We use omd/check_mk. >>> >>> Right now there isn't an oVirt specific monitor plugin for check_mk. >>> >>> I know what I said is probably pretty obvious, but just in case. >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sradco at redhat.com Sun Mar 25 11:44:13 2018 From: sradco at redhat.com (Shirly Radco) Date: Sun, 25 Mar 2018 14:44:13 +0300 Subject: [ovirt-users] Any monitoring tool provided? In-Reply-To: References: Message-ID: Hi Vincent, I'm sorry it was not an easy setup. Can you please share what did not work for you in the instructions? I see you did manage to get it working... 
:) If you want data from the last 24 hours at a 60-second interval (still not real time, but better granularity), you can use the samples tables.

Also, please make sure to update your views prefix to the version you are using. In the example the prefix is v4_2_*. If you are using oVirt 4.1, the prefix should be v4_1_*.

For example: (Did not get to test this query yet)

SELECT DISTINCT
    min(time) AS time,
    MEM_Usage,
    host_name || 'MEM_Usage' as metric
FROM (
    SELECT
        stats_hosts.host_id,
        CASE
            WHEN delete_date IS NULL
            THEN host_name
            ELSE host_name || ' (Removed on ' || CAST ( CAST ( delete_date AS date ) AS varchar ) || ')'
        END AS host_name,
        stats_hosts.history_datetime AS time,
        SUM ( COALESCE ( stats_hosts.cpu_usage_percent, 0 ) * COALESCE ( stats_hosts.minutes_in_status, 0 ) ) / SUM ( COALESCE ( stats_hosts.minutes_in_status, 0 ) ) AS CPU_Usage,
        SUM ( COALESCE ( stats_hosts.memory_usage_percent, 0 ) * COALESCE ( stats_hosts.minutes_in_status, 0 ) ) / SUM ( COALESCE ( stats_hosts.minutes_in_status, 0 ) ) AS MEM_Usage
    FROM v4_2_statistics_hosts_resources_usage_samples AS stats_hosts
        INNER JOIN v4_2_configuration_history_hosts
            ON ( v4_2_configuration_history_hosts.host_id = stats_hosts.host_id )
    WHERE stats_hosts.history_datetime >= $__timeFrom()
        AND stats_hosts.history_datetime < $__timeTo()
        -- Here we get the latest hosts configuration
        AND v4_2_configuration_history_hosts.history_id IN (
            SELECT MAX ( a.history_id ) FROM v4_2_configuration_history_hosts AS a GROUP BY a.host_id )
        AND stats_hosts.host_id IN (
            SELECT a.host_id
            FROM v4_2_statistics_hosts_resources_usage_samples a
                INNER JOIN v4_2_configuration_history_hosts b ON ( a.host_id = b.host_id )
            WHERE
                -- Here we filter by active hosts only
                a.host_status = 1
                -- Here we filter by the datacenter chosen by the user
                AND b.cluster_id IN (
                    SELECT v4_2_configuration_history_clusters.cluster_id
                    FROM v4_2_configuration_history_clusters
                    WHERE v4_2_configuration_history_clusters.datacenter_id = $datacenter_id )
                -- Here we filter by the clusters chosen by the user
                AND b.cluster_id IN ($cluster_id)
                AND a.history_datetime >= $__timeFrom()
                AND a.history_datetime < $__timeTo()
                -- Here we get the latest hosts configuration
                AND b.history_id IN (
                    SELECT MAX (g.history_id) FROM v4_2_configuration_history_hosts g GROUP BY g.host_id )
            GROUP BY a.host_id
            ORDER BY
                -- Hosts will be ordered according to the summary of
                -- memory and CPU usage percent.
                -- This determines the busiest hosts.
                SUM ( COALESCE ( a.memory_usage_percent * a.minutes_in_status, 0 ) ) / SUM ( COALESCE ( a.minutes_in_status, 0 ) )
                + SUM ( COALESCE ( a.cpu_usage_percent * a.minutes_in_status, 0 ) ) / SUM ( COALESCE ( a.minutes_in_status, 0 ) ) DESC
            LIMIT 5 )
    GROUP BY stats_hosts.host_id, host_name, delete_date, history_datetime ) AS a
GROUP BY a.host_name, a.mem_usage
ORDER BY time

--
SHIRLY RADCO
BI SENIOR SOFTWARE ENGINEER
Red Hat Israel
TRIED. TESTED. TRUSTED.

On Thu, Mar 22, 2018 at 9:05 PM, Vincent Royer wrote:

> I set up Grafana using the instructions I found on accessing the Ovirt history database. However, the instructions didn't work as written. Regardless, it does work, but it's not easy to set up. The update rate also leaves something to be desired; it's OK for historical info, but it's not a good real time monitoring solution (although it's possible I could set it up differently and it would work better)
>
> Also using Grafana, I have set up Telegraf agents on most of my VMs.
>
> Lastly, I also installed Telegraf on the Centos hosts in my Ovirt Cluster
>
> *Vincent Royer*
> *778-825-1057*
>
> *SUSTAINABLE MOBILE ENERGY SOLUTIONS*
>
> On Wed, Mar 21, 2018 at 8:41 PM, Terry hey wrote:
>
>> Dear all,
>>
>> Now, we can just read how much storage is used and the CPU usage on the oVirt dashboard.
>> But is there any monitoring tool for monitoring virtual machines from time to time?
>> If yes, could you guys give me the procedure?
>>
>> Regards
>> Terry
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

[Attachments: image.png (68566 bytes), image.png (145730 bytes)]

From sradco at redhat.com Sun Mar 25 11:50:14 2018
From: sradco at redhat.com (Shirly Radco)
Date: Sun, 25 Mar 2018 14:50:14 +0300
Subject: [ovirt-users] Any monitoring tool provided?
In-Reply-To:
References:
Message-ID:

It is possible to use DWH with Grafana to get VM stats.

https://www.ovirt.org/blog/2018/01/ovirt-report-using-grafana/

Relevant views are with the following prefixes:

v4_2_statistics_vms_*
v4_2_latest_configuration_vms*
v4_2_configuration_history_vms*

They are all available also in earlier versions, but the prefix will be different, like v4_1_* for 4.1.

Another option is our new real time monitoring solution, which is based on OpenShift and Elasticsearch, Kibana and Fluentd.
https://ovirt.org/blog/2017/12/ovirt-metrics-store

--
SHIRLY RADCO
BI SENIOR SOFTWARE ENGINEER
Red Hat Israel
TRIED. TESTED. TRUSTED.

On Thu, Mar 22, 2018 at 5:41 AM, Terry hey wrote:

> Dear all,
>
> Now, we can just read how much storage is used and the CPU usage on the oVirt dashboard.
> But is there any monitoring tool for monitoring virtual machines from time to time?
> If yes, could you guys give me the procedure?
>
> Regards
> Terry
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

From rightkicktech at gmail.com Sun Mar 25 16:30:39 2018
From: rightkicktech at gmail.com (Alex K)
Date: Sun, 25 Mar 2018 19:30:39 +0300
Subject: [ovirt-users] ovirt snapshot issue
Message-ID:

Hi folks,

I am frequently facing the following issue: on some large VMs (Windows 2016 with two disk drives, 60GB and 500GB), when attempting to create a snapshot of the VM, the VM becomes unresponsive.
The errors that I managed to collect were: vdsm error at host hosting the VM: 2018-03-25 14:40:13,442+0000 WARN (vdsm.Scheduler) [Executor] Worker blocked: timeout=60, duration=60 at 0x39d8210> task#=155842 at 0x2240e10> (executor:351) 2018-03-25 14:40:15,261+0000 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call VM.getStats failed (error 1) in 0.01 seconds (__init__:539) 2018-03-25 14:40:17,471+0000 WARN (jsonrpc/5) [virt.vm] (vmId='a5c761a2-41cd-40c2-b65f-f3819293e8a4') monitor became unresponsive (command timeout, age=67.9100000001) (vm:5132) engine.log: 2018-03-25 14:40:19,875Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler2) [1d737df7] EVENT_ID: VM_NOT_RESPONDING(126), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: VM Data-Server is not responding. 2018-03-25 14:42:13,708Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) [17789048-009a-454b-b8ad-2c72c7cd37aa] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: VDSM v1.cluster command SnapshotVDS failed: Message timeout which can be caused by communication issues 2018-03-25 14:42:13,708Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (DefaultQuartzScheduler5) [17789048-009a-454b-b8ad-2c72c7cd37aa] Command 'SnapshotVDSCommand(HostName = v1.cluster, SnapshotVDSCommandParameters:{runAsync='true', hostId='a713d988-ee03-4ff0-a0cd-dc4cde1507f4', vmId='a5c761a2-41cd-40c2-b65f-f3819293e8a4'})' execution failed: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues 2018-03-25 14:42:13,708Z WARN [org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand] (DefaultQuartzScheduler5) [17789048-009a-454b-b8ad-2c72c7cd37aa] Could not perform live snapshot due to error, VM will still be configured to the new created snapshot: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues (Failed with error VDS_NETWORK_ERROR and code 5022) 2018-03-25 14:42:13,708Z WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (org.ovirt.thread.pool-6-thread-15) [17789048-009a-454b-b8ad-2c72c7cd37aa] Host 'v1.cluster' is not responding. It will stay in Connecting state for a grace period of 61 seconds and after that an attempt to fence the host will be issued. 2018-03-25 14:42:13,725Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-15) [17789048-009a-454b-b8ad-2c72c7cd37aa] EVENT_ID: VDS_HOST_NOT_RESPONDING_CONNECTING(9,008), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Host v1.cluster is not responding. It will stay in Connecting state for a grace period of 61 seconds and after that an attempt to fence the host will be issued. 
2018-03-25 14:42:13,751Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) [17789048-009a-454b-b8ad-2c72c7cd37aa] EVENT_ID: USER_CREATE_LIVE_SNAPSHOT_FINISHED_FAILURE(170), Correlation ID: 17789048-009a-454b-b8ad-2c72c7cd37aa, Job ID: 16e48c28-a8c7-4841-bd81-1f2d370f345d, Call Stack: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues (Failed with error VDS_NETWORK_ERROR and code 5022)

2018-03-25 14:42:14,372Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) [] EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_FAILURE(69), Correlation ID: 17789048-009a-454b-b8ad-2c72c7cd37aa, Job ID: 16e48c28-a8c7-4841-bd81-1f2d370f345d, Call Stack: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues (Failed with error VDS_NETWORK_ERROR and code 5022)

2018-03-25 14:42:14,372Z WARN [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (DefaultQuartzScheduler5) [] Command 'CreateAllSnapshotsFromVm' id: 'bad4f5be-5306-413f-a86a-513b3cfd3c66' end method execution failed, as the command isn't marked for endAction() retries silently ignoring

2018-03-25 14:42:15,951Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) [5017c163] EVENT_ID: VDS_NO_SELINUX_ENFORCEMENT(25), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Host v1.cluster does not enforce SELinux. Current status: DISABLED

2018-03-25 14:42:15,951Z WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (DefaultQuartzScheduler5) [5017c163] Host 'v1.cluster' is running with SELinux in 'DISABLED' mode

As soon as the VM is unresponsive, the VM console that was already open freezes. I can resume the VM only by powering it off and on.

I am using ovirt 4.1.9 with 3 nodes and a self-hosted engine. I am running mostly Windows 10 and Windows 2016 server VMs. I have installed the latest guest agents from:
http://resources.ovirt.org/pub/ovirt-4.2/iso/oVirt-toolsSetup/4.2-1.el7.centos/

At the screen where one takes a snapshot I get a warning saying "Could not detect guest agent on the VM. Note that without guest agent the data on the created snapshot may be inconsistent". See attached. I have verified that the ovirt guest tools are installed and shown in the installed apps in the engine GUI. Also Ovirt Guest Agent (32 bit) and qemu-ga are listed as running in the Windows Task Manager. Shouldn't the ovirt guest agent be 64 bit on 64-bit Windows?

Any advice will be much appreciated.

Alex
(attachment scrubbed: ovirt-ga-error.png, image/png)

From pablo.localhost at gmail.com Sun Mar 25 19:35:59 2018 From: pablo.localhost at gmail.com (Juan Pablo) Date: Sun, 25 Mar 2018 16:35:59 -0300 Subject: [ovirt-users] Which hardware are you using for oVirt In-Reply-To: <545F49C7-9AFE-4BE3-B03B-3FB1DEE1F09E@gmail.com> References: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com> <1701ED12-9ED4-46B9-96E5-1FD3E32419DD@gmail.com> <545F49C7-9AFE-4BE3-B03B-3FB1DEE1F09E@gmail.com> Message-ID:

Andy, I'm using a 2 node cluster:
- 2x Supermicro 6017 (2x Intel 2420, 12C/24T per node), 384 GB RAM total, 10 GbE, all hosted-engine via NFS.
Storage side: 2x SC836BE16-R1K28B (192 GB ARC cache) with RAID 10 ZFS + Intel SLOG, serving iSCSI at 10 GbE.
80 VMs, more or less.

regards,

2018-03-25 4:36 GMT-03:00 Andy Michielsen :

> Hello Alex,
>
> Thanks for sharing. Much appreciated.
>
> I believe my setup would need 96 Gb of RAM in each host, and would need
> about at least 3 Tb of storage. Probably 4 Tb would be better if I want to
> work with snapshots. (Will be running mostly Windows 2016 servers or
> Windows 10 desktops with 6 Gb of RAM and 100 Gb of disks.)
>
> I agree that a 10 Gb network for storage would be very beneficial.
>
> Now if I can figure out how to set up glusterfs on a 3 node cluster in
> oVirt 4.2 just for the data storage, I'm golden to get started. :-)
>
> Kind regards.
>
> On 24 Mar 2018, at 20:08, Alex K wrote:
>
> I have 2 or 3 node clusters with the following hardware (all with self-hosted
> engine):
>
> 2 node cluster:
> RAM: 64 GB per host
> CPU: 8 cores per host
> Storage: 4x 1TB SAS in RAID10
> NIC: 2x Gbit
> VMs: 20
>
> The above, although I would like to have had a third NIC for gluster
> storage redundancy, is running smoothly for quite some time and without
> performance issues.
> The VMs it is running are not high on IO (mostly small Linux servers).
>
> 3 node clusters:
> RAM: 32 GB per host
> CPU: 16 cores per host
> Storage: 5x 600GB in RAID5 (not ideal but I had to gain some storage space
> without purchasing extra disks)
> NIC: 6x Gbit
> VMs: less than 10 large Windows VMs (Windows 2016 server and Windows 10)
>
> For your setup (30 VMs) I would rather go with RAID10 SAS disks and at
> least a dual 10Gbit NIC dedicated to the gluster traffic only.
>
> Alex
>
> On Sat, Mar 24, 2018 at 1:24 PM, Andy Michielsen wrote:
>
>> Hello Andrei,
>>
>> Thank you very much for sharing info on your hardware setup. Very
>> informative.
>>
>> At this moment I have my ovirt engine on our vmware environment which is
>> fine for good backup and restore.
>>
>> I have 4 nodes running now, all different in make and model, with local
>> storage, and it works but lacks performance a bit.
>>
>> But I can get my hands on some old Dell R415s with 96 Gb of RAM and 2
>> quadcores and 6 x 1 Gb NICs. They all come with 2 x 146 Gb 15000 rpm
>> harddisks. This isn't bad but I will add more RAM for starters. Also I
>> would like to have some good redundant storage for this too and the servers
>> have limited space to add that.
>>
>> Hopefully others will also share their setups and experience like you did.
>>
>> Kind regards.
>>
>> On 24 Mar 2018, at 10:35, Andrei Verovski wrote:
>>
>> Hi,
>>
>> HP ProLiant DL380, dual Xeon
>> 120 GB RAID L1 for system
>> 2 TB RAID L10 for VM disks
>> 5 VMs, 3 Linux, 2 Windows
>> Total CPU load most of the time is low, high level of activity related to disk.
>> Host engine under KVM appliance on SuSE, can be easily moved, backed up,
>> copied, experimented with, etc.
>>
>> You'll have to use servers with more RAM and storage than mine.
>> More than one NIC is required if some of your VMs are on different subnets,
>> e.g. 1 in the internal zone and a 2nd on the DMZ.
>> For your setup 10 GB NICs + L3 switch for ovirtmgmt.
>>
>> BTW, I would suggest having several separate hardware RAIDs unless you
>> have SSD, otherwise the limit of the disk system I/O will be a bottleneck.
>> Consider an SSD L1 RAID for heavy-loaded databases.
>>
>> *Please note many cheap SSDs do NOT work reliably with SAS controllers
>> even in SATA mode*.
>>
>> For example, I was going to use 2 x WD Green SSDs configured as RAID L1 for
>> the OS.
>> It was possible to install the system, yet under heavy load simulated with
>> iozone the disk system froze, rendering the OS unbootable.
>> The same crash was experienced with a 512GB KingFast SSD connected to a
>> broadcom/AMCC SAS RAID card.
>>
>> On 03/24/2018 10:33 AM, Andy Michielsen wrote:
>>
>> Hi all,
>>
>> Not sure if this is the place to be asking this but I was wondering which hardware you all are using and why, in order for me to see what I would be needing.
>>
>> I would like to set up a HA cluster consisting of 3 hosts to be able to run 30 VMs.
>> The engine, I can run on another server. The hosts can be fitted with the storage and share the space through glusterfs. I would think I will be needing at least 3 NICs but would be able to install ovn. (Are 1gb NICs sufficient?)
>>
>> Any input you guys would like to share would be greatly appreciated.
>>
>> Thanks,

From phudec at cnc.sk Mon Mar 26 05:50:13 2018 From: phudec at cnc.sk (Peter Hudec) Date: Mon, 26 Mar 2018 07:50:13 +0200 Subject: [ovirt-users] Any monitoring tool provided? In-Reply-To: References: Message-ID: <008007e4-7c25-73ed-db8e-60c49fc3ad7a@cnc.sk>

Hi Terry,

I started to work on ZABBIX integration based on the oVirt API.
Basically it should be like the VmWare integration in ZABBIX, with full
hosts/VMs discovery and statistics gathering.

The API provides a statistics service for each NIC and VM, as well as CPU
and MEM utilization.

There is also a solution based on reading data from VDSM into prometheus:
http://rmohr.github.io/virtualization/2016/04/12/monitor-your-ovirt-datacenter-with-prometheus

Peter

On 22/03/2018 04:41, Terry hey wrote:
> Dear all,
>
> Now, we can just read how much storage is used and the CPU usage on the
> ovirt dashboard. But is there any monitoring tool for monitoring virtual
> machines from time to time? If yes, could you guys give me the procedure?
>
> Regards Terry

--
*Peter Hudec*
Infraštruktúrny architekt
phudec at cnc.sk

*CNC, a.s.*
Borská 6, 841 04 Bratislava
Recepcia: +421 2 35 000 100
Mobil: +421 905 997 203
*www.cnc.sk*
From rightkicktech at gmail.com Mon Mar 26 06:02:21 2018 From: rightkicktech at gmail.com (Alex K) Date: Mon, 26 Mar 2018 06:02:21 +0000 Subject: [ovirt-users] Any monitoring tool provided? In-Reply-To: <008007e4-7c25-73ed-db8e-60c49fc3ad7a@cnc.sk> References: <008007e4-7c25-73ed-db8e-60c49fc3ad7a@cnc.sk> Message-ID:

Hi Peter,

This is interesting. Is it going to be a template with an external python script?

Alex

On Mon, Mar 26, 2018, 08:50 Peter Hudec wrote:

> Hi Terry,
>
> I started to work on ZABBIX integration based on the oVirt API.
> Basically it should be like the VmWare integration in ZABBIX, with full
> hosts/VMs discovery and statistics gathering.
>
> The API provides a statistics service for each NIC and VM, as well as CPU
> and MEM utilization.
>
> There is also a solution based on reading data from VDSM into prometheus:
> http://rmohr.github.io/virtualization/2016/04/12/monitor-your-ovirt-datacenter-with-prometheus
>
> Peter
>
> On 22/03/2018 04:41, Terry hey wrote:
> > Dear all,
> >
> > Now, we can just read how much storage is used and the CPU usage on the
> > ovirt dashboard. But is there any monitoring tool for monitoring virtual
> > machines from time to time? If yes, could you guys give me the procedure?
> >
> > Regards Terry
>
> --
> *Peter Hudec*
> Infraštruktúrny architekt
> phudec at cnc.sk
>
> *CNC, a.s.*
> Borská 6, 841 04 Bratislava
> Recepcia: +421 2 35 000 100
> Mobil: +421 905 997 203
> *www.cnc.sk*
From phudec at cnc.sk Mon Mar 26 06:13:15 2018 From: phudec at cnc.sk (Peter Hudec) Date: Mon, 26 Mar 2018 08:13:15 +0200 Subject: [ovirt-users] Any monitoring tool provided? In-Reply-To: References: <008007e4-7c25-73ed-db8e-60c49fc3ad7a@cnc.sk> Message-ID:

Yes, template and python at this moment.

I understand the oVirt API. I needed to write my own small SDK, since the oVirt SDK is using SSO for login; that means it does some additional requests for the login process. There is no option to use Basic Auth. The SESSION reuse could be useful, but I do not want to add more.

First I need to understand some basics of Zabbix Discovery Rules / Host Prototypes. I would like to have VMs as separate hosts in Zabbix, like the VMWare integration does. The other stuff is quite easy.

There is a plugin for nagios/icinga if someone is using that monitoring tool:
https://github.com/ovido/check_rhev3
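To illustrate the discovery part, a rough sketch of what such a Zabbix LLD feed could look like (untested; it assumes the v4 REST API, an admin@internal user and jq — the host name and password are placeholders):

# Hypothetical example: list VMs as Zabbix low-level-discovery macros.
# Plain basic auth works against the API itself, even though the SDK insists on SSO.
curl -s -k -u 'admin@internal:PASSWORD' \
  -H 'Accept: application/json' \
  'https://engine.example.com/ovirt-engine/api/vms' \
  | jq '{data: [.vm[] | {"{#VM.ID}": .id, "{#VM.NAME}": .name}]}'

Items for each discovered VM could then read from the per-VM statistics sub-collection (/ovirt-engine/api/vms/<id>/statistics).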
On 26/03/2018 08:02, Alex K wrote:
> Hi Peter,
>
> This is interesting. Is it going to be a template with an external
> python script?
>
> Alex
>
> On Mon, Mar 26, 2018, 08:50 Peter Hudec wrote:
>
>> Hi Terry,
>>
>> I started to work on ZABBIX integration based on the oVirt API.
>> Basically it should be like the VmWare integration in ZABBIX, with full
>> hosts/VMs discovery and statistics gathering.
>>
>> The API provides a statistics service for each NIC and VM, as well as
>> CPU and MEM utilization.
>>
>> There is also a solution based on reading data from VDSM into prometheus:
>> http://rmohr.github.io/virtualization/2016/04/12/monitor-your-ovirt-datacenter-with-prometheus
>>
>> Peter
>>
>> On 22/03/2018 04:41, Terry hey wrote:
>> > Dear all,
>> >
>> > Now, we can just read how much storage is used and the CPU usage on the
>> > ovirt dashboard. But is there any monitoring tool for monitoring virtual
>> > machines from time to time? If yes, could you guys give me the procedure?
>> >
>> > Regards Terry

--
*Peter Hudec*
Infraštruktúrny architekt
phudec at cnc.sk

*CNC, a.s.*
Borská 6, 841 04 Bratislava
Recepcia: +421 2 35 000 100
Mobil: +421 905 997 203
*www.cnc.sk*

From ahadas at redhat.com Mon Mar 26 06:56:02 2018 From: ahadas at redhat.com (Arik Hadas) Date: Mon, 26 Mar 2018 09:56:02 +0300 Subject: [ovirt-users] Problem to upgrade level cluster to 4.1 In-Reply-To: References: Message-ID:

On Sun, Mar 25, 2018 at 3:06 PM, Marcelo Leandro wrote:

> Good morning,
> follow the log:
>
> The real name of vms :
> VPS-Jarauto
> VPS-Jarauto-Firewall
> VPS-Varejo-Mais
>
> Thanks,

Thanks for sharing the log.

Two of these VMs are configured with the custom-property 'macspoof'. That property is not supposed to be used since 4.0 (IIRC). Nevertheless, defining this property for version 4.1 should solve this problem, see [1].

The third VM is in a weird state - it cannot be updated because the host it runs on doesn't support the number of CPUs defined for this VM. You can either shut this VM down during the cluster upgrade or try to migrate it to another host with more CPUs available.

Unfortunately, I can't correlate those issues with the VMs you mentioned, but if you start with the first issue you'll find which one of them is that 'third VM'.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1373573
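For reference, re-declaring the property for cluster level 4.1 on the engine machine would look something like this (a sketch — adapt the value regex to what you actually use):

# Hypothetical example: define the 'macspoof' custom property for 4.1
engine-config -s "UserDefinedVMProperties=macspoof=^(true|false)$" --cver=4.1
systemctl restart ovirt-engine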
> 2018-03-25 3:29 GMT-03:00 Arik Hadas :
>
>> On Fri, Mar 23, 2018 at 9:09 PM, Marcelo Leandro wrote:
>>
>>> Hello,
>>> I am trying to update the cluster level but I had this error message:
>>>
>>> Erro durante a execução da ação: Update of cluster compatibility version
>>> failed because there are VMs/Templates [VPS-NAME01, VPS-NAME02,
>>> VPS-NAME03] with incorrect configuration. To fix the issue, please go
>>> to each of them, edit and press OK. If the save does not pass, fix the
>>> dialog validation.
>>>
>>> 23/03/2018 15:03:07
>>> Cannot update compatibility version of Vm/Template: [VPS-NAME01], Message: [No Message]
>>> 23/03/2018 15:03:07
>>> Cannot update compatibility version of Vm/Template: [VPS-NAME02], Message: [No Message]
>>> 23/03/2018 15:03:07
>>> Cannot update compatibility version of Vm/Template: [VPS-NAME03], Message: [No Message]
>>>
>>> I have already opened the VM edit box and closed it with the OK button as
>>> the error message says: To fix the issue, please go to each of them, edit
>>> and press OK. If the save does not pass, fix the dialog validation.
>>>
>>> But it returns no error when saving.
>>>
>>> Anyone can help me?
>>
>> Can you please share the engine.log?

From lorenzetto.luca at gmail.com Mon Mar 26 07:02:03 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Mon, 26 Mar 2018 09:02:03 +0200 Subject: [ovirt-users] Which hardware are you using for oVirt In-Reply-To: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com> References: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com> Message-ID:

On Sat, Mar 24, 2018 at 9:33 AM, Andy Michielsen wrote:
> Hi all,
>
> Not sure if this is the place to be asking this but I was wondering which hardware you all are using and why, in order for me to see what I would be needing.
>
> I would like to set up a HA cluster consisting of 3 hosts to be able to run 30 VMs.
> The engine, I can run on another server. The hosts can be fitted with the storage and share the space through glusterfs. I would think I will be needing at least 3 NICs but would be able to install ovn. (Are 1gb NICs sufficient?)
>
> Any input you guys would like to share would be greatly appreciated.

Hello Andy,

I'm not running a hyperconverged setup, but just for reference I'll describe my setup:

2 clusters (6+2) of HPE BL460 G9 with 512 GB of RAM each. On the cluster composed of two nodes we're running the self-hosted engine.

The storage backend is FC for both (EMC VNX8000 for the biggest one and EMC VPLEX with VMAX as disk backend on the smallest one).

Luca

--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net

From lorenzetto.luca at gmail.com Mon Mar 26 07:03:18 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Mon, 26 Mar 2018 09:03:18 +0200 Subject: [ovirt-users] Any monitoring tool provided? In-Reply-To: References: <008007e4-7c25-73ed-db8e-60c49fc3ad7a@cnc.sk> Message-ID:

On Mon, Mar 26, 2018 at 8:13 AM, Peter Hudec wrote:
[cut]
> There is a plugin for nagios/icinga if someone is using that monitoring tool
> https://github.com/ovido/check_rhev3

Hello,

we're using this nagios plugin with success. Works well, with few configs.

Luca

--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net

From rightkicktech at gmail.com Mon Mar 26 07:19:18 2018 From: rightkicktech at gmail.com (Alex K) Date: Mon, 26 Mar 2018 07:19:18 +0000 Subject: [ovirt-users] Which hardware are you using for oVirt In-Reply-To: References: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com> Message-ID:

Hi Luca,

You have a 2 node cluster with 512 GB per host to run the engine only?

How many VMs are you running on the compute nodes?
Alex

On Mon, Mar 26, 2018, 10:02 Luca 'remix_tj' Lorenzetto <lorenzetto.luca at gmail.com> wrote:

> On Sat, Mar 24, 2018 at 9:33 AM, Andy Michielsen wrote:
> > Hi all,
> >
> > Not sure if this is the place to be asking this but I was wondering
> > which hardware you all are using and why, in order for me to see what I
> > would be needing.
> >
> > I would like to set up a HA cluster consisting of 3 hosts to be able to
> > run 30 VMs.
> > The engine, I can run on another server. The hosts can be fitted with
> > the storage and share the space through glusterfs. I would think I will be
> > needing at least 3 NICs but would be able to install ovn. (Are 1gb NICs
> > sufficient?)
> >
> > Any input you guys would like to share would be greatly appreciated.
>
> Hello Andy,
>
> I'm not running a hyperconverged setup, but just for reference I'll
> describe my setup:
>
> 2 clusters (6+2) of HPE BL460 G9 with 512 GB of RAM each. On the cluster
> composed of two nodes we're running the self-hosted engine.
>
> The storage backend is FC for both (EMC VNX8000 for the biggest one
> and EMC VPLEX with VMAX as disk backend on the smallest one).
>
> Luca

From lorenzetto.luca at gmail.com Mon Mar 26 07:27:28 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Mon, 26 Mar 2018 09:27:28 +0200 Subject: [ovirt-users] Which hardware are you using for oVirt In-Reply-To: References: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com> Message-ID:

Sorry Andy, forgot to write the density.

On the first cluster (6 nodes) we're running 225 VMs at the moment.
On the second one (2 nodes) we're running 42 VMs + the engine.

This setup has been running since last July.

Just for completeness, we've just set up another environment with the same sizing (2 sockets + 512GB RAM) and distribution (6+2) but different hardware (Lenovo x240 M5 IIRC) for hosting production VMs. At the moment there are a few VMs, but we're planning to migrate over 300 VMs. Same storage backend.

Luca

On Mon, Mar 26, 2018 at 9:19 AM, Alex K wrote:
> Hi Luca,
>
> You have a 2 node cluster with 512 GB per host to run the engine only?
>
> How many VMs are you running on the compute nodes?
>
> Alex
>
> On Mon, Mar 26, 2018, 10:02 Luca 'remix_tj' Lorenzetto wrote:
>>
>> On Sat, Mar 24, 2018 at 9:33 AM, Andy Michielsen wrote:
>> > Hi all,
>> >
>> > Not sure if this is the place to be asking this but I was wondering
>> > which hardware you all are using and why, in order for me to see what I would
>> > be needing.
>> >
>> > I would like to set up a HA cluster consisting of 3 hosts to be able to
>> > run 30 VMs.
>> > The engine, I can run on another server. The hosts can be fitted with
>> > the storage and share the space through glusterfs. I would think I will be
>> > needing at least 3 NICs but would be able to install ovn. (Are 1gb NICs
>> > sufficient?)
>> >
>> > Any input you guys would like to share would be greatly appreciated.
>>
>> Hello Andy,
>>
>> I'm not running a hyperconverged setup, but just for reference I'll
>> describe my setup:
>>
>> 2 clusters (6+2) of HPE BL460 G9 with 512 GB of RAM each. On the cluster
>> composed of two nodes we're running the self-hosted engine.
>>
>> The storage backend is FC for both (EMC VNX8000 for the biggest one
>> and EMC VPLEX with VMAX as disk backend on the smallest one).
>>
>> Luca

--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net

From hawk at tbi.univie.ac.at Mon Mar 26 07:42:57 2018 From: hawk at tbi.univie.ac.at (Richard Neuboeck) Date: Mon, 26 Mar 2018 09:42:57 +0200 Subject: [ovirt-users] Which hardware are you using for oVirt In-Reply-To: <545F49C7-9AFE-4BE3-B03B-3FB1DEE1F09E@gmail.com> References: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com> <1701ED12-9ED4-46B9-96E5-1FD3E32419DD@gmail.com> <545F49C7-9AFE-4BE3-B03B-3FB1DEE1F09E@gmail.com> Message-ID:

Hi Andy,

we have 3 hosts for virtualization. Each 40 cores, 512GB RAM, RAID 1 for the system, 4 bonded (onboard) 1Gbit NICs for client access (to the VMs) and a 10Gbit NIC for the storage network.

The storage is built of 3 hosts, 10Gbit NIC, RAID 6 (5TB HDDs and SSDs for caching) and gluster in replica 3 mode.

Cheers
Richard

On 25.03.18 09:36, Andy Michielsen wrote:
> Hello Alex,
>
> Thanks for sharing. Much appreciated.
>
> I believe my setup would need 96 Gb of RAM in each host, and would need
> about at least 3 Tb of storage. Probably 4 Tb would be better if I want
> to work with snapshots. (Will be running mostly Windows 2016 servers or
> Windows 10 desktops with 6 Gb of RAM and 100 Gb of disks.)
>
> I agree that a 10 Gb network for storage would be very beneficial.
>
> Now if I can figure out how to set up glusterfs on a 3 node cluster in
> oVirt 4.2 just for the data storage, I'm golden to get started. :-)
>
> Kind regards.
>
> On 24 Mar 2018, at 20:08, Alex K wrote:
>
>> I have 2 or 3 node clusters with the following hardware (all with
>> self-hosted engine):
>>
>> 2 node cluster:
>> RAM: 64 GB per host
>> CPU: 8 cores per host
>> Storage: 4x 1TB SAS in RAID10
>> NIC: 2x Gbit
>> VMs: 20
>>
>> The above, although I would like to have had a third NIC for gluster
>> storage redundancy, is running smoothly for quite some time and
>> without performance issues.
>> The VMs it is running are not high on IO (mostly small Linux servers).
>>
>> 3 node clusters:
>> RAM: 32 GB per host
>> CPU: 16 cores per host
>> Storage: 5x 600GB in RAID5 (not ideal but I had to gain some storage
>> space without purchasing extra disks)
>> NIC: 6x Gbit
>> VMs: less than 10 large Windows VMs (Windows 2016 server and Windows 10)
>>
>> For your setup (30 VMs) I would rather go with RAID10 SAS disks and at
>> least a dual 10Gbit NIC dedicated to the gluster traffic only.
>>
>> Alex
>>
>> On Sat, Mar 24, 2018 at 1:24 PM, Andy Michielsen wrote:
>>
>> Hello Andrei,
>>
>> Thank you very much for sharing info on your hardware setup. Very
>> informative.
>>
>> At this moment I have my ovirt engine on our vmware environment
>> which is fine for good backup and restore.
>>
>> I have 4 nodes running now, all different in make and model, with
>> local storage, and it works but lacks performance a bit.
>>
>> But I can get my hands on some old Dell R415s with 96 Gb of RAM
>> and 2 quadcores and 6 x 1 Gb NICs. They all come with 2 x 146 Gb
>> 15000 rpm harddisks. This isn't bad but I will add more RAM for
>> starters. Also I would like to have some good redundant storage
>> for this too and the servers have limited space to add that.
>>
>> Hopefully others will also share their setups and experience like
>> you did.
>>
>> Kind regards.
>>
>> On 24 Mar 2018, at 10:35, Andrei Verovski wrote:
>>
>>> Hi,
>>>
>>> HP ProLiant DL380, dual Xeon
>>> 120 GB RAID L1 for system
>>> 2 TB RAID L10 for VM disks
>>> 5 VMs, 3 Linux, 2 Windows
>>> Total CPU load most of the time is low, high level of activity
>>> related to disk.
>>> Host engine under KVM appliance on SuSE, can be easily moved,
>>> backed up, copied, experimented with, etc.
>>>
>>> You'll have to use servers with more RAM and storage than mine.
>>> More than one NIC is required if some of your VMs are on different
>>> subnets, e.g. 1 in the internal zone and a 2nd on the DMZ.
>>> For your setup 10 GB NICs + L3 switch for ovirtmgmt.
>>>
>>> BTW, I would suggest having several separate hardware RAIDs
>>> unless you have SSD, otherwise the limit of the disk system I/O will
>>> be a bottleneck. Consider an SSD L1 RAID for heavy-loaded databases.
>>>
>>> *Please note many cheap SSDs do NOT work reliably with SAS
>>> controllers even in SATA mode*.
>>>
>>> For example, I was going to use 2 x WD Green SSDs configured as
>>> RAID L1 for the OS.
>>> It was possible to install the system, yet under heavy load simulated
>>> with iozone the disk system froze, rendering the OS unbootable.
>>> The same crash was experienced with a 512GB KingFast SSD connected to a
>>> broadcom/AMCC SAS RAID card.
>>>
>>> On 03/24/2018 10:33 AM, Andy Michielsen wrote:
>>>> Hi all,
>>>>
>>>> Not sure if this is the place to be asking this but I was wondering which hardware you all are using and why, in order for me to see what I would be needing.
>>>>
>>>> I would like to set up a HA cluster consisting of 3 hosts to be able to run 30 VMs.
>>>> The engine, I can run on another server. The hosts can be fitted with the storage and share the space through glusterfs. I would think I will be needing at least 3 NICs but would be able to install ovn. (Are 1gb NICs sufficient?)
>>>>
>>>> Any input you guys would like to share would be greatly appreciated.
>>>> Thanks,

From jaganz at gmail.com Mon Mar 26 08:36:50 2018 From: jaganz at gmail.com (yayo (j)) Date: Mon, 26 Mar 2018 10:36:50 +0200 Subject: [ovirt-users] Filter engine log to get only logs shown on the "event console" Message-ID:

Hi all,

I want to reproduce the log shown on the "event console" on my monitoring system but I can't find the right filter to use. I'm parsing the engine.log on the hosted engine VM. Is it the right log file to parse? Is there any other way?

Thank you

From frolland at redhat.com Mon Mar 26 08:52:30 2018 From: frolland at redhat.com (Fred Rolland) Date: Mon, 26 Mar 2018 11:52:30 +0300 Subject: [ovirt-users] Filter engine log to get only logs shown on the "event console" In-Reply-To: References: Message-ID:

Hi,

You could try:

grep AuditLogDirector /var/log/ovirt-engine/engine.log

You can also get the events as SNMP traps, or read them from the DB.
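For the DB route, something like this sketch may work (it assumes a local engine database named 'engine'; the audit_log table is what feeds the event console, but verify the column names on your version first):

# Hypothetical example: pull the latest events straight from the engine DB
sudo -u postgres psql engine -c \
  "select log_time, severity, message from audit_log order by log_time desc limit 20;"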
Regards,
Freddy

On Mon, Mar 26, 2018 at 11:36 AM, yayo (j) wrote:

> Hi all,
>
> I want to reproduce the log shown on the "event console" on my monitoring
> system but I can't find the right filter to use. I'm parsing the engine.log
> on the hosted engine VM. Is it the right log file to parse? Is there any
> other way?
>
> Thank you

From ddqlo at 126.com Mon Mar 26 09:36:42 2018 From: ddqlo at 126.com (=?GBK?B?tq3H4MH6?=) Date: Mon, 26 Mar 2018 17:36:42 +0800 (CST) Subject: [ovirt-users] novnc_websocket_proxy_fqdn Message-ID: <3b8559ec.a4a8.16261aabcf0.Coremail.ddqlo@126.com>

Hi all,

I am using novnc to connect to VMs in my ovirt 4.1 environment. My engine fqdn had been set to "engine1.test.org" before I executed "engine-setup". I could connect to VMs using "engine1.test.org" on client1, where "engine1.test.org" can be resolved. Now I want to connect to VMs using "engine2.test.org" on client2, where only "engine2.test.org" can be resolved. I have set "SSO_ALTERNATE_ENGINE_FQDNS="engine2.test.org"" in /etc/ovirt-engine/engine.conf.d/99-alternate-engine-fqdns.conf. But I failed. It said that "can't connect to websocket proxy server wss://engine1.test.org:6100". So where can I modify this websocket proxy parameter? Anyone can help? Thanks!

From sverker at abrahamsson.com Mon Mar 26 11:41:35 2018 From: sverker at abrahamsson.com (Sverker Abrahamsson) Date: Mon, 26 Mar 2018 13:41:35 +0200 Subject: [ovirt-users] Network interface persistence Message-ID: <556f1a9f-62a6-d73e-7ca8-b8efeb143dff@abrahamsson.com>

I have a number of VMs running under oVirt 4.2; most are CentOS 7 and one is Debian.

The issue is that when multiple network interfaces are assigned, they don't persist: which interface will be which varies at boot. I've tried various methods, setting UUID and HWADDRESS in the ifcfg file, and a /etc/udev/rules.d/70-persistent-net.rules file like the below:

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:1a:4a:16:01:63", KERNEL=="eth*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:1a:4a:16:01:6d", KERNEL=="eth*", NAME="eth1"

I've also tried in the GUI to change the network profile each virtual card is attached to, but even then the interfaces in the VM will be the opposite of what I had intended.

How can I get network interface persistence with oVirt? The Debian VM is an appliance so I can't log in to it to change network interfaces.
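One alternative I have been meaning to try is systemd .link files, which pin the interface name to the MAC address early in boot (an untested sketch, for systemd-based guests only; the file name is arbitrary):

# /etc/systemd/network/10-persistent-eth0.link — hypothetical example
[Match]
MACAddress=00:1a:4a:16:01:63

[Link]
Name=eth0

plus a second file mapping 00:1a:4a:16:01:6d to eth1, and possibly a rebuild of the initramfs (dracut -f on CentOS) so the files are seen at early boot.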
/Sverker

From spfma.tech at e.mail.fr Mon Mar 26 13:36:15 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Mon, 26 Mar 2018 15:36:15 +0200 Subject: [ovirt-users] Hosted engine : rebuild without backups ? In-Reply-To: References: Message-ID: <20180326133615.34773E446D@smtp01.mail.de>

Hi,

I gave up trying to restore a backup of my former hosted engine; I never managed to get the engine and nodes communicating again, and I need to move on. So I installed a new engine (let's try a dedicated one this time) and imported the storage domains successfully. After attaching and activating them, it seems to be OK and the first VMs I imported are now running fine.

So I just have to redefine the logical networks and configure them on the hosts, and that is it?

Maybe the final step will be turning the dedicated engine into a hosted one, I don't know yet.

Regards

Le 18-Mar-2018 08:28:15 +0100, didi at redhat.com a écrit :

On Fri, Mar 16, 2018 at 2:48 PM, wrote:
> Hi,
>
> In case of a total failure of the hosted engine VM, it is recommended to
> recreate a new one and restore a backup. I hope it works, I will probably
> have to do this very soon.
>
> But is there some kind of "plug and play" feature, able to rebuild
> configuration by browsing storage domains, if the restore process doesn't
> work ?

It's called "Import Storage Domain" in oVirt.

> Something like identifying VMs and their snapshots in the subdirectories,
> and then guessing what is linked to what, ... ?
>
> I have a few machines but if I have to rebuild all the engine setup and
> content, I would like to be able to identify resources easily.
>
> A while ago, I was doing some experiments with XenServer and
> destroyed/recreated some setup items: I ended up with a lot of orphan
> resources, and it was a mess to reattach snapshots to their respective VMs.
> So if oVirt is more helpful in that way ...

If you try this:

1. Try first on a test setup, as always

2. Make sure to _not_ import the hosted-storage domain, the one used to host the hosted-engine VM.

3. So: set up a new hosted-engine system, then import your _other_ storage domains. Ideally make sure the old hosted storage is not accessible to the new system, so that the new engine does not try to import it accidentally.

4. If you do try to import, for testing, the old hosted-storage, it would be interesting if you share the results...

Best regards,
--
Didi

-------------------------------------------------------------------------------------------------
FreeMail powered by mail.fr

From blanchet at abes.fr Mon Mar 26 13:49:32 2018 From: blanchet at abes.fr (Nathanaël Blanchet) Date: Mon, 26 Mar 2018 15:49:32 +0200 Subject: [ovirt-users] Passing VLAN trunk to VM In-Reply-To: <93c80ca8-cbb4-4d38-405e-a22ffcd55811@upx.com> References: <93c80ca8-cbb4-4d38-405e-a22ffcd55811@upx.com> Message-ID: <0a6ba86f-bbee-210b-e40c-9a2c89cbc2a0@abes.fr>

Hello,

I managed to do this inside a VM, so now I can create subinterfaces inside a VM. But:

* I can do such a setup only on a different physical NIC from the first NIC where the desired VLAN is already configured as a VLAN network
* I have to set a fixed IP address because DHCP doesn't work, for some reason

Can anyone confirm whether this behaviour is expected?

Le 14/03/2017 à 07:51, Edward Haas a écrit :
> In oVirt, a network can be classified with a specific VLAN or as "all
> the rest".
> So the non-vlan network is in fact handling all non-defined network
> vlans, including the non-tagged one.
>
> Specifying ranges is indeed not supported, but it will be nice if you
> can file an RFE asking for the scenario you are looking for.
>
> Thanks,
> Edy.
>
> On Mon, Mar 13, 2017 at 5:12 PM, Rogério Ceni Coelho wrote:
>
>     Thanks Edward. Unfortunately that does not achieve my needs. For some
>     reason, I need to bring up many networks on a single virtual machine
>     (more than 20). It seems vmware and Hyper-V have support for that.
>
>     Hyper-V
> > > Em Dom, 12 de mar de 2017 10:40, FERNANDO FREDIANI > > escreveu: > > Great ! > > What about a range of VLANs, is it also > supported ? > > > That was the OVS note all about, only with an OVS > bridge it is possible to define/select the vlans which > are exposed > to the VM vnic. But this is not available at the moment. > As Rog?rio mentioned, define the VLAN/s on the VM vnic. > > > 2017-03-11 17:47 GMT-03:00 Edward Haas > >: > > Passing a trunk to the vnic is supported > long ago. > Just create a network over a nic/bond that > is connected to a trunk port and do not > define any VLAN (we call it non vlan network). > In oVirt, a non-vlan network will ignore > the VLAN tag and will forward the packets > as is onward. > It is up to the VM vnic to define vlans or > use a promisc mode to see everything. > > OVS can add a layer of security over the > existing, by defining explicitly which > vlans are allowed for a specific vnic, but > it is not > currently available. > > > On Thu, Mar 9, 2017 at 11:40 PM, Simon > Vincent > wrote: > > I was wondering if open vswitch will > get round this problem. Has anyone > tried it? > > On 9 Mar 2017 7:41 pm, "Rog?rio Ceni > Coelho" > > wrote: > > Hi, > > Ovirt user interface does not > allow to input 4095 as a tag vlan > number ... Only values between 0 > and 4094. > > This is useful to me too. Maybe > any other way ? > > Em qui, 9 de mar de 2017 ?s 16:15, > FERNANDO FREDIANI > > > escreveu: > > Have you tried use Vlan 4095 ? > On VMware it used to be the > way to pass all Vlans from a > vSwitch to a Vlan in a single > port. And yes I have used it > also for pfSense. > > Fernando > > > On 09/03/2017 16:09, Simon > Vincent wrote: >> Is it possible to pass >> multiple VLANs to a VM >> (pfSense) using a single >> virtual NIC? All my existing >> oVirt networks are setup as a >> single tagged VLAN. I know >> this didn't used to be >> supported but wondered if >> this has changed. My other >> option is to pass each VLAN >> as a separate NIC to the VM >> however if I needed to add a >> new VLAN I would have to add >> a new interface and reboot >> the VM as hot-add of NICs is >> not supported by pfSense. >> >> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> >> http://lists.ovirt.org/mailman/listinfo/users >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -- Nathana?l Blanchet Supervision r?seau P?le Infrastrutures Informatiques 227 avenue Professeur-Jean-Louis-Viala 34193 MONTPELLIER CEDEX 5 T?l. 33 (0)4 67 54 84 55 Fax 33 (0)4 67 54 84 14 blanchet at abes.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: pasted3 Type: image/png Size: 27134 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: pasted4, pasted5, pasted2, pasted1 (image/png, scrubbed)

From andreil1 at starlett.lv Mon Mar 26 14:29:16 2018 From: andreil1 at starlett.lv (Andrei Verovski) Date: Mon, 26 Mar 2018 17:29:16 +0300 Subject: [ovirt-users] ovirt-guest-agent failure on latest Debian 9 Stretch Message-ID: <6A9BB6DE-9031-4A5A-A799-E24A49FB8055@starlett.lv>

Hi,

I just installed the latest Debian 9 Stretch under oVirt 4.2 and got this error:

# tail -n 1000 ovirt-guest-agent.log
MainThread::INFO::2018-03-26 17:09:57,400::ovirt-guest-agent::59::root::Starting oVirt guest agent
MainThread::ERROR::2018-03-26 17:09:57,402::ovirt-guest-agent::141::root::Unhandled exception in oVirt guest agent!
Traceback (most recent call last):
  File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 135, in
    agent.run(daemon, pidfile)
  File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 65, in run
    self.agent = LinuxVdsAgent(config)
  File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 472, in __init__
    AgentLogicBase.__init__(self, config)
  File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 188, in __init__
    self.vio = VirtIoChannel(config.get("virtio", "device"))
  File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 153, in __init__
    self._stream = VirtIoStream(vport_name)
  File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 134, in __init__
    self._vport = os.open(vport_name, os.O_RDWR)
OSError: [Errno 2] No such file or directory: '/dev/virtio-ports/com.redhat.rhevm.vdsm'

Followed this manual: https://bugzilla.redhat.com/show_bug.cgi?id=1472293 and ran

touch /etc/udev/rules.d/55-ovirt-guest-agent.rules
# edit /etc/udev/rules.d/55-ovirt-guest-agent.rules
SYMLINK=="virtio-ports/ovirt-guest-agent.0", OWNER="ovirtagent", GROUP="ovirtagent"
udevadm trigger --subsystem-match="virtio-ports"

AND this http://lists.ovirt.org/pipermail/users/2018-January/086101.html

touch /etc/ovirt-guest-agent/ovirt-guest-agent.conf
#

References: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com> Message-ID:

On 03/24/2018 03:33 AM, Andy Michielsen wrote:
> Hi all,
>
> Not sure if this is the place to be asking this but I was wondering which hardware you all are using and why, in order for me to see what I would be needing.
>
> I would like to set up a HA cluster consisting of 3 hosts to be able to run 30 VMs.
> The engine, I can run on another server. The hosts can be fitted with the storage and share the space through glusterfs. I would think I will be needing at least 3 NICs but would be able to install ovn. (Are 1gb NICs sufficient?)

Just because you asked, but not because this is helpful to you....

But first, a comment on "3 hosts to be able to run 30 VMs". The SPM node shouldn't run a lot of VMs. There are settings (the setting slips my mind) on the engine to give it a "virtual set" of VMs in order to keep VMs off of it.

With that said, CPU wise, it doesn't require a lot to run 30 VM's. The costly thing is memory (in general).
So while a cheap set of 3 machines might handle the CPU requirements of 30 VM's, those cheap machines might not be able to give you the memory you need (depends). You might be fine. I mean, there are cheap desktop like machines that do 64G (and sometimes more). Just something to keep in mind. Memory and storage will be the most costly items. It's simple math. Linux hosts, of course, don't necessarily need much memory (or storage). But Windows... 1Gbit NIC's are "ok", but again, depends on storage. Glusterfs is no speed demon. But you might not need "fast" storage. Lastly, your setup is just for "fun", right? Otherwise, read on. Running oVirt 3.6 (this is a production setup) ovirt engine (manager): Dell PowerEdge 430, 32G ovirt cluster nodes: Dell m1000e 1.1 backplane Blade Enclosure 9 x M630 Blades (2xE5-2669v3, 384GB), 4 iSCSI paths, 4 bonded LAN, all 10GbE, CentOS 7.2 4 x MXL 10/40GbE (2x40Gbit LAN, 2x40Gbit iSCSI SAN to the S4810's) 120 VM's, CentOS 6, CentOS 7, Windows 10 Ent., Windows Server 2012 We've run on as few as 3 nodes. Network, SAN and Storage (for ovirt Domains): 2 x S4810 (part is used for SAN, part for LAN) Equallogic dual controller (note: passive/active) PS6610S (84 x 4TB 7.2K SAS) Equallogic dual controller (note: passive/active) PS6610X (84 x 1TB 10K SAS ISO and Export Domains are handled by: Dell PE R620, 32G, 2x10Gbit LAN, 2x10Gbit iSCSI to the SAN (above), CentOS 7.4, NFS What I like: * Easy setup. * Relatively good network and storage. What I don't like: * 2 "effective" networks, LAN and iSCSI. All networking uses the same effective path. Would be nice to have more physical isolation for mgmt vs motion vs VMs. QoS is provided in oVirt, but still, would be nice to have the full pathways. * Storage doesn't use active/active controllers, so controller failover is VERY slow. * We have a fast storage system, and somewhat slower storage system (matter of IOPS), neither is SSD, so there isn't a huge difference. No real redundancy or flexibility. * vdsm can no longer respond fast enough for the amount of disks defined (in the event of a new Storage Domain add). We have raised vdsTimeout, but have not tested yet. I inherited the "style" above. My recommendation of where to start for a reasonable production instance, minimum (assumes the S4810's above, not priced here): 1 x ovirt manager/engine, approx $1500 4 x Dell R620, 2xE5-2660, 768G, 6x10GbE (LAN, Storage, Motion), approx $42K 3 x Nexsan 18P 108TB, approx $96K While significantly cheaper (by 6 figures), it provides active/active controllers, storage reliability and flexibility and better network pathways. Why 4 x nodes? Need at least N+1 for reliability. The extra 4th node is merely capacity. Why 3 x storage? Need at least N+1 for reliability. Obviously, you'll still want to back things up and test the ability to restore components like the ovirt engine from scratch. Btw, my recommended minimum above is regardless of hypervisor cluster choice (could be VMware). From marceloltmm at gmail.com Mon Mar 26 17:28:32 2018 From: marceloltmm at gmail.com (Marcelo Leandro) Date: Mon, 26 Mar 2018 14:28:32 -0300 Subject: [ovirt-users] Problem to upgrade level cluster to 4.1 In-Reply-To: References: Message-ID: Thank you for help. This resolved my problem. 2018-03-26 3:56 GMT-03:00 Arik Hadas : > > > On Sun, Mar 25, 2018 at 3:06 PM, Marcelo Leandro > wrote: > >> Good morning, >> follow the log: >> >> The real name of vms : >> VPS-Jarauto >> VPS-Jarauto-Firewall >> VPS-Varejo-Mais >> >> Thanks, >> > > Thanks for sharing the log. 
>
> Two of these VMs are configured with the custom-property 'macspoof'. That
> property is not supposed to be used since 4.0 (IIRC).
> Nevertheless, defining this property for version 4.1 should solve this
> problem, see [1].
>
> The third VM is in a weird state - it cannot be updated because the host
> it runs on doesn't support the number of CPUs defined for this VM.
> You can either shut this VM down during the cluster upgrade or try to
> migrate it to another host with more CPUs available.
>
> Unfortunately, I can't correlate those issues with the VMs you mentioned,
> but if you start with the first issue you'll find which one
> of them is that 'third VM'.
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1373573
>
>> 2018-03-25 3:29 GMT-03:00 Arik Hadas :
>>
>>> On Fri, Mar 23, 2018 at 9:09 PM, Marcelo Leandro wrote:
>>>
>>>> Hello,
>>>> I am trying to update the cluster level but I had this error message:
>>>>
>>>> Erro durante a execução da ação: Update of cluster compatibility
>>>> version failed because there are VMs/Templates [VPS-NAME01, VPS-NAME02,
>>>> VPS-NAME03] with incorrect configuration. To fix the issue, please go
>>>> to each of them, edit and press OK. If the save does not pass, fix the
>>>> dialog validation.
>>>>
>>>> 23/03/2018 15:03:07
>>>> Cannot update compatibility version of Vm/Template: [VPS-NAME01], Message: [No Message]
>>>> 23/03/2018 15:03:07
>>>> Cannot update compatibility version of Vm/Template: [VPS-NAME02], Message: [No Message]
>>>> 23/03/2018 15:03:07
>>>> Cannot update compatibility version of Vm/Template: [VPS-NAME03], Message: [No Message]
>>>>
>>>> I have already opened the VM edit box and closed it with the OK button as
>>>> the error message says: To fix the issue, please go to each of them, edit
>>>> and press OK. If the save does not pass, fix the dialog validation.
>>>>
>>>> But it returns no error when saving.
>>>>
>>>> Anyone can help me?
>>>
>>> Can you please share the engine.log?

From fernando.frediani at upx.com Mon Mar 26 19:27:57 2018 From: fernando.frediani at upx.com (FERNANDO FREDIANI) Date: Mon, 26 Mar 2018 16:27:57 -0300 Subject: [ovirt-users] Juniper vSRX Cluster on oVirt/RHEV In-Reply-To: References: <5AB4B3B3.9050308@xs4all.nl> Message-ID:

Indeed, there is this problem with the virtio driver, which creates this sometimes huge bottleneck for machines that do a fair amount of traffic. Other than using DPDK OVS, I would love to hear of an alternative or a fix for it. Currently being hit by this issue with no solution.

As you mention, for a lab it is fine, but it would be lovely to have a pretty redundant scenario like this in production.

Fernando

2018-03-23 21:04 GMT-03:00 Charles Kozler :

> Truth be told I don't really know. What I am going to be doing with it is
> pretty much mostly some lab stuff and getting working with VRFs a bit.
>
> There is a known limitation: the virtio backend driver uses interrupt mode
> to receive packets and vSRX uses DPDK - https://dpdk.readthedocs.io/en/stable/nics/virtio.html -
> which in turn creates a bottleneck into the guest VM. It is more ideal to
> use something like SR-IOV instead and remove as many buffer layers as
> possible with PCI passthrough.
>
> One easier way too is to use DPDK OVS.
I know ovirt supports OVS in later > versions more natively so I just didnt go after it and I dont know if there > is any difference between just regular OVS and DPDK OVS. I dont have a huge > requirement of insane throughput, just need to get packets from amazon back > to my lab and support overlapping subnets > > This exercise was somewhat of a POC for me to see if it can be done. A lot > of Junipers documentation does not take in to account such things as ovirt > or proxmox or any linux overlay to hypervisors like it does for vmware / > vcenter which is no fault of their own. They assume flat KVM host (or 2 if > clustered) whereas stuff like ovirt can introduce variables (eg: no MAC > spoofing) > > On Fri, Mar 23, 2018 at 3:27 PM, FERNANDO FREDIANI < > fernando.frediani at upx.com> wrote: > >> Out of curiosity how much traffic can it handle running in these Virtual >> Machines on the top of reasonable hardware ? >> >> Fernando >> >> 2018-03-23 4:58 GMT-03:00 Joop : >> >>> On 22-3-2018 10:17, Yaniv Kaul wrote: >>> >>> >>> >>> On Wed, Mar 21, 2018 at 10:37 PM, Charles Kozler < >>> ckozleriii at gmail.com> wrote: >>> >>>> Hi All - >>>> >>>> Recently did this and thought it would be worth documenting. I couldnt >>>> find any solid information on vsrx with kvm outside of flat KVM. This >>>> outlines some of the things I hit along the way and how to fix. This is my >>>> one small way of giving back to such an incredible open source tool >>>> >>>> https://ckozler.net/vsrx-cluster-on-ovirtrhev/ >>>> >>> >>> Thanks for sharing! >>> Why didn't you just upload the qcow2 disk via the UI/API though? >>> There's quite a bit of manual work that I hope is not needed? >>> >>> @Work we're using Juniper too and oud of curiosity I downloaded the >>> qcow2 image and used the UI to upload it and add it to a VM. It just works >>> :-) oVirt++ >>> >>> Joop >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From punaatua.pk at gmail.com Mon Mar 26 19:34:35 2018 From: punaatua.pk at gmail.com (Punaatua PAINT-KOUI) Date: Mon, 26 Mar 2018 09:34:35 -1000 Subject: [ovirt-users] VDSM SSL validity In-Reply-To: References: Message-ID: I just tried, it works ! Thank for your help. Here are the steps that i followed: connect to the engine database using psql - use the request as you give it select fn_db_update_config_value(' VdsCertificateValidityInYears','2','general'); - verify the option by running select * from vdc_options where option_name like '%VdsCer%'; - restart ovirt-engine New host would have their certificates with the validity under 2 years. I tested with an existing host by put it in maintenance then reinstall Thanks ! those links helped me also: https://www.ovirt.org/develop/developer-guide/db-issues/dbupgrade/ https://www.ovirt.org/documentation/internal/database-upgrade-procedure/ 2018-03-23 17:52 GMT-10:00 Punaatua PAINT-KOUI : > I just tried, it works ! Thank for your help. 
> > Here are the steps that i followed: > > connect to the engine database using psql > > - use the request as you give it select fn_db_update_config_value(' > VdsCertificateValidityInYears','2','general'); > > - verify the option by running select * from vdc_options where option_name > like '%VdsCer%'; > > - restart ovirt-engine > > New host would have their certificates with the validity under 2 years. I > tested with an existing host by put it in maintenance then reinstall > > Thanks ! > > those links helped me also: > > https://www.ovirt.org/develop/developer-guide/db-issues/dbupgrade/ > > https://www.ovirt.org/documentation/internal/database-upgrade-procedure/ > > > > 2018-03-22 0:49 GMT-10:00 Yedidyah Bar David : > >> On Thu, Mar 22, 2018 at 11:58 AM, Sahina Bose wrote: >> > Didi, Sandro - Do you know if this option VdsCertificateValidityInYears >> is >> > present in 4.2? >> >> I do not think it ever was exposed to engine-config - I think it's a >> bug in that page. >> >> You should be able to update it with psql, if needed - something like >> this: >> >> select fn_db_update_config_value('VdsCertificateValidityInYears',' >> 2','general'); >> >> I didn't try this myself. >> >> To get an sql prompt, you can use engine-psql, which should be >> available in 4.2.2, >> or simply copy the script from the patch page: >> >> https://gerrit.ovirt.org/#/q/I4d9737ea72df0d7e654776a1085901284a523b7f >> >> Also, some people claim that the use of certificates for communication >> between >> the engine and the hosts is an internal implementation detail, which >> should not >> be relevant to PCI DSS requirements. See e.g.: >> >> https://ovirt.org/develop/release-management/features/infra/pkireduce/ >> >> > >> > On Mon, Mar 19, 2018 at 4:43 AM, Punaatua PAINT-KOUI < >> punaatua.pk at gmail.com> >> > wrote: >> >> >> >> Up >> >> >> >> 2018-02-17 2:57 GMT-10:00 Punaatua PAINT-KOUI : >> >>> >> >>> Any idea someone ? >> >>> >> >>> Le 14 f?vr. 2018 23:19, "Punaatua PAINT-KOUI" >> a >> >>> ?crit : >> >>>> >> >>>> Hi, >> >>>> >> >>>> I setup an hyperconverged solution with 3 nodes, hosted engine on >> >>>> glusterfs. >> >>>> We run this setup in a PCI-DSS environment. According to PCI-DSS >> >>>> requirements, we are required to reduce the validity of any >> certificate >> >>>> under 39 months. >> >>>> >> >>>> I saw in this link >> >>>> https://www.ovirt.org/develop/release-management/features/infra/pki/ >> that i >> >>>> can use the option VdsCertificateValidityInYears at engine-config. >> >>>> >> >>>> I'm running ovirt engine 4.2.1 and i checked when i was on 4.2 how to >> >>>> edit the option with engine-config --all and engine-config --list >> but the >> >>>> option is not listed >> >>>> >> >>>> Am i missing something ? >> >>>> >> >>>> I thing i can regenerate a VDSM certificate with openssl and the CA >> conf >> >>>> in /etc/pki/ovirt-engine on the hosted-engine but i would rather >> modifiy the >> >>>> option for future host that I will add. >> >>>> >> >>>> -- >> >>>> ------------------------------------- >> >>>> PAINT-KOUI Punaatua >> >> >> >> >> >> >> >> >> >> -- >> >> ------------------------------------- >> >> PAINT-KOUI Punaatua >> >> Licence Pro R?seaux et T?lecom IAR >> >> Universit? 
du Sud Toulon Var >> >> La Garde France >> >> >> >> _______________________________________________ >> >> Users mailing list >> >> Users at ovirt.org >> >> http://lists.ovirt.org/mailman/listinfo/users >> >> >> > >> >> >> >> -- >> Didi >> > > > > -- > ------------------------------------- > PAINT-KOUI Punaatua > Licence Pro R?seaux et T?lecom IAR > Universit? du Sud Toulon Var > La Garde France > -- ------------------------------------- PAINT-KOUI Punaatua Licence Pro R?seaux et T?lecom IAR Universit? du Sud Toulon Var La Garde France -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.vaye at province-sud.nc Mon Mar 26 20:16:53 2018 From: nicolas.vaye at province-sud.nc (Nicolas Vaye) Date: Mon, 26 Mar 2018 20:16:53 +0000 Subject: [ovirt-users] overt-guest-agent Failure on latest Debian 9 Sretch In-Reply-To: <6A9BB6DE-9031-4A5A-A799-E24A49FB8055@starlett.lv> References: <6A9BB6DE-9031-4A5A-A799-E24A49FB8055@starlett.lv> Message-ID: <1522095410.1710.217.camel@province-sud.nc> Hello Andrei, i have had the same problem and on Debian stretch the problem is the old version of agent from stretch repository. I downloaded 1.0.13 from Debian testing repo as *.deb file. With these new versions of guest-agent then is also a udev rules issue. The serial channels have been renamed and the rules didn`t match for ovirt. See the install script, as attachement (provided by Oliver.Riesener at hs-bremen.de). May be it can help you. Regards, Nicolas Vaye -------- Message initial -------- Date: Mon, 26 Mar 2018 17:29:16 +0300 Objet: [ovirt-users] overt-guest-agent Failure on latest Debian 9 Sretch ?: users at ovirt.org De: Andrei Verovski > Hi, I just installed latest Debian 9 Sretch under oVirt 4.2 and got this error: # tail -n 1000 ovirt-guest-agent.log MainThread::INFO::2018-03-26 17:09:57,400::ovirt-guest-agent::59::root::Starting oVirt guest agent MainThread::ERROR::2018-03-26 17:09:57,402::ovirt-guest-agent::141::root::Unhandled exception in oVirt guest agent! Traceback (most recent call last): File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 135, in agent.run(daemon, pidfile) File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 65, in run self.agent = LinuxVdsAgent(config) File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 472, in __init__ AgentLogicBase.__init__(self, config) File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 188, in __init__ self.vio = VirtIoChannel(config.get("virtio", "device")) File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 153, in __init__ self._stream = VirtIoStream(vport_name) File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 134, in __init__ self._vport = os.open(vport_name, os.O_RDWR) OSError: [Errno 2] No such file or directory: '/dev/virtio-ports/com.redhat.rhevm.vdsm? Followed this manual: https://bugzilla.redhat.com/show_bug.cgi?id=1472293 and run touch /etc/udev/rules.d/55-ovirt-guest-agent.rules # edit /etc/udev/rules.d/55-ovirt-guest-agent.rules SYMLINK=="virtio-ports/ovirt-guest-agent.0", OWNER="ovirtagent", GROUP="ovirtagent" udevadm trigger --subsystem-match="virtio-ports? AND this http://lists.ovirt.org/pipermail/users/2018-January/086101.html touch /etc/ovirt-guest-agent/ovirt-guest-agent.conf # http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ovirt_install_guest_agent_debian.sh Type: application/x-shellscript Size: 2659 bytes Desc: ovirt_install_guest_agent_debian.sh URL: From sverker at abrahamsson.com Mon Mar 26 22:45:35 2018 From: sverker at abrahamsson.com (Sverker Abrahamsson) Date: Tue, 27 Mar 2018 00:45:35 +0200 Subject: [ovirt-users] Network interface persistence In-Reply-To: <556f1a9f-62a6-d73e-7ca8-b8efeb143dff@abrahamsson.com> References: <556f1a9f-62a6-d73e-7ca8-b8efeb143dff@abrahamsson.com> Message-ID: This discussion seems relevant: https://access.redhat.com/discussions/916973 For my CentOS machines there are various alternatives described here which should work, but what to do about my debian-based appliance where I can't get access enough to edit such things? I then need to be able to set up a consistent naming from the virtual bios. If it matters the switch type used is OVS. /Sverker Den 2018-03-26 kl. 13:41, skrev Sverker Abrahamsson: > I have a number of vm's running under Ovirt 4.2, most are CentOS 7 and > one Debian. Issue is that when multiple network interfaces are > assigned they don't persist but it variates at boot which interface > will be which. > > I've tried various methods, setting UUID and HWADDRESS in the ifcfg > file and a file /etc/udev/rules.d/70-persistent-net.rules file like > the below: > > SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", > ATTR{address}=="00:1a:4a:16:01:63", KERNEL=="eth*", NAME="eth0" > SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", > ATTR{address}=="00:1a:4a:16:01:6d", KERNEL=="eth*", NAME="eth1" > > I've also tried in the gui to change the network profile each virtual > card is attached, but even then the interfaces in vm will be oposide > of what I had intended. > > How to accomplish with Ovirt to get network interface persistence? The > Debian vm is an appliance so I can't log in to it to change network > interfaces. > > /Sverker > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From nicolas.vaye at province-sud.nc Tue Mar 27 05:32:23 2018 From: nicolas.vaye at province-sud.nc (Nicolas Vaye) Date: Tue, 27 Mar 2018 05:32:23 +0000 Subject: [ovirt-users] Any monitoring tool provided? In-Reply-To: References: <32f428b6-dc46-de63-6072-b1fff2eb0b28@endlessnow.com> Message-ID: <1522128740.1710.221.camel@province-sud.nc> Hi Shirly, I'm trying to install ovirt metric store with ViaQ on Origin. And on https://github.com/oVirt/ovirt-site/pull/1551/files, you mention **WARNING** DO NOT INSTALL `libvirt` on the OpenShift machine! 
on my VM "metric store", i requested rpm for libvirt and here are the results : [root at ometricstore .ssh]# rpm -qa | grep libvirt libvirt-daemon-driver-secret-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-core-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-logical-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-config-network-3.2.0-14.el7_4.9.x86_64 libvirt-gconfig-1.0.0-1.el7.x86_64 libvirt-daemon-driver-storage-disk-3.2.0-14.el7_4.9.x86_64 libvirt-glib-1.0.0-1.el7.x86_64 libvirt-libs-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-nwfilter-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-rbd-3.2.0-14.el7_4.9.x86_64 libvirt-gobject-1.0.0-1.el7.x86_64 libvirt-daemon-driver-interface-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-scsi-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-qemu-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-kvm-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-network-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-iscsi-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-nodedev-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-mpath-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-3.2.0-14.el7_4.9.x86_64 should i remove all this package ? Also on the web page https://www.ovirt.org/develop/release-management/features/metrics/metrics-store-installation/#run-ovirt-metrics-store-installation-playbook, you mention /usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_hosts_for_metrics.sh --playbook=ovirt-metrics-store-installation.yml But on my hosted engine 4.2.1.7 with ovirt-engine-metrics-1.1.3.3-1.el7.centos.noarch, i notice that the script configure_ovirt_hosts_for_metrics.sh doesn't exist but there is a configure_ovirt_machines_for_metrics.sh script. Is it the good one ? Next, you mention : Allow connections on the following ports/protocols: + * tcp ports 22, 80, 443, 8443 (openshift console), 9200 (Elasticsearch) following by ViaQ on Origin requires these [Yum Repos](centos7-viaq.repo). +You will need to install the following packages: docker, iptables-services. That means that i must uninstall firewalld ? And the ports/protocols will be managed by iptables-services ? What is the command to allow connections on tcp ports 22,80, 443 etc.... ? Is it managed automatically with docker or openshift or other program ? Thanks for all. Nicolas VAYE -------- Message initial -------- Date: Sun, 25 Mar 2018 12:01:23 +0300 Objet: Re: [ovirt-users] Any monitoring tool provided? Cc: users > ?: Marcelo Leandro > De: Shirly Radco > -- SHIRLY RADCO BI SeNIOR SOFTWARE ENGINEER Red Hat Israel [https://www.redhat.com/files/brand/email/sig-redhat.png] TRIED. TESTED. TRUSTED. On Fri, Mar 23, 2018 at 10:29 PM, Marcelo Leandro > wrote: Hello, I am try this how to: https://www.ovirt.org/develop/release-management/features/metrics/metrics-store-installation/ but when i run this command: /usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_machines_for_metrics.sh --playbook=ovirt-metrics-store-installation.yml I am had this error mensagem: ansible-playbook: error: no such option: --playbook my version: ovirt-engine-metrics-1.0.8-1.el7.centos.noarch Hi, You are using an old rpm. Please upgrade to latest, ovirt-engine-metrics-1.1.3.3-1.el7.centos.noarch I also added some documentation that is still in pull request: Add Viaq installation guide to the oVirt metrics store repo - https://github.com/oVirt/ovirt-site/pull/1551 - This one is meaningful. 
I introduced a lot of automation that save time when installing. Add prerequisites for installing OpenShift Logging - https://github.com/oVirt/ovirt-site/pull/1561 Added how to import dashboards examples to kibana - https://github.com/oVirt/ovirt-site/pull/1559 Please review them.I'll try to get them merged asap. Anyone can help me? 2018-03-22 16:28 GMT-03:00 Christopher Cox >: On 03/21/2018 10:41 PM, Terry hey wrote: Dear all, Now, we can just read how many storage used, cpu usage on ovirt dashboard. But is there any monitoring tool for monitoring virtual machine time to time? If yes, could you guys give me the procedure? A possible option, for a full OS with network connectivity, is to monitor the VM like you would any other host. We use omd/check_mk. Right now there isn't an oVirt specific monitor plugin for check_mk. I know what I said is probably pretty obvious, but just in case. _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users From info at linuxfabrik.ch Tue Mar 27 06:45:04 2018 From: info at linuxfabrik.ch (info at linuxfabrik.ch) Date: Tue, 27 Mar 2018 08:45:04 +0200 Subject: [ovirt-users] Ping::(action) Failed to ping x.x.x.x, (4 out of 5) Message-ID: <1522133104.2919.8.camel@linuxfabrik.ch> Hi all, we randomly and constantly have this message in our /var/log/ovirt- hosted-engine-ha/broker.log: /var/log/ovirt-hosted-engine-ha/broker.log:Thread-1::WARNING::2018-03- 27 08:17:25,891::ping::63::ping.Ping::(action) Failed to ping x.x.x.x, (4 out of 5) The pinged device is a switch (not a gateway). We know that a switch might drop icmp packets if it needs to. The interesting thing about that is if it fails it fails always at "4 out of 5", but in the end (5 of 5) it always succeeds. Is there a way to increase the amount of pings or to have another way instead of ping? Regards Markus From recreationh at gmail.com Tue Mar 27 07:02:18 2018 From: recreationh at gmail.com (Terry hey) Date: Tue, 27 Mar 2018 15:02:18 +0800 Subject: [ovirt-users] virtual machine actual size is not right In-Reply-To: References: Message-ID: Hello, thank you for helping me. On the storage domain size: Alias: host1 Disk: 1 Template: Blank Virtual Size: 30 GB Actual Size: 13 GB Creation Date: Jan 29,2018 11:22:54 AM On the server size: # df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg01-root 10G 7.2G 2.9G 72% / devtmpfs 1.9G 0 1.9G 0% /dev tmpfs 1.9G 4.0K 1.9G 1% /dev/shm tmpfs 1.9G 17M 1.9G 1% /run tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup /dev/sda1 1014M 188M 827M 19% /boot /dev/mapper/vg01-var 15G 996M 15G 7% /var XXX.XXX.XXX.XXX:/nfs_share 249G 96G 141G 41% /mnt/nfs_share tmpfs 379M 0 379M 0% /run/user/0 # parted -l /dev/[sv]d[a-z] | grep ^Disk Disk /dev/sda: 32.2GB Disk Flags: Disk /dev/mapper/vg01-var: 16.1GB Disk Flags: Disk /dev/mapper/vg01-swap: 2147MB Disk Flags: Disk /dev/mapper/vg01-root: 10.7GB Disk Flags: # It still not the same. Also, do you know the upper limitation for thin provision? For example, if i allocated 30 GB to the hosts, what is the upper limitation that the host can use? 
Regards,
Terry

2018-03-23 18:45 GMT+08:00 Pavol Brilla :

> Hi
>
> For such a big difference between the size outside of the VM and inside,
> it looks more like the disk is not fully partitioned.
> df is providing you information only about mounted filesystems.
> Could you try to run this inside the VM? It should match all local disks,
> and you should see the size of each disk:
> # parted -l /dev/[sv]d[a-z] | grep ^Disk
>
> ( Output of 1 of my VMs ):
> # parted -l /dev/[sv]d[a-z] | grep ^Disk
> Disk /dev/sda: 26.8GB
> Disk Flags:
> Disk /dev/mapper/rootvg-lv_tmp: 2147MB
> Disk Flags:
> Disk /dev/mapper/rootvg-lv_home: 210MB
> Disk Flags:
> Disk /dev/mapper/rootvg-lv_swap: 2147MB
> Disk Flags:
> Disk /dev/mapper/rootvg-lv_root: 21.8GB
>
> So I see that this VM has a 26.8GB disk.
>
> On Thu, Mar 22, 2018 at 5:56 PM, Terry hey wrote:
>
>> Hello~
>> i type this command on the running vm, not the hypervisor ( ovirt node).
>
> --
> PAVOL BRILLA
> RHV QUALITY ENGINEER, CLOUD
> Red Hat Czech Republic, Brno
> TRIED. TESTED. TRUSTED.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sradco at redhat.com Tue Mar 27 07:07:52 2018
From: sradco at redhat.com (Shirly Radco)
Date: Tue, 27 Mar 2018 10:07:52 +0300
Subject: [ovirt-users] Any monitoring tool provided?
In-Reply-To:
References: <32f428b6-dc46-de63-6072-b1fff2eb0b28@endlessnow.com>
 <1522128740.1710.221.camel@province-sud.nc>
Message-ID:

--
SHIRLY RADCO
BI SeNIOR SOFTWARE ENGINEER
Red Hat Israel
TRIED. TESTED. TRUSTED.

On Tue, Mar 27, 2018 at 8:32 AM, Nicolas Vaye wrote:

> Hi Shirly,
>
> I'm trying to install ovirt metric store with ViaQ on Origin.
>
> And on https://github.com/oVirt/ovirt-site/pull/1551/files, you mention
> **WARNING** DO NOT INSTALL `libvirt` on the OpenShift machine!
>
> on my VM "metric store", i requested rpm for libvirt and here are the
> results :
>
> [root at ometricstore .ssh]# rpm -qa | grep libvirt
> libvirt-daemon-driver-secret-3.2.0-14.el7_4.9.x86_64
> libvirt-daemon-driver-storage-core-3.2.0-14.el7_4.9.x86_64
> libvirt-daemon-driver-storage-logical-3.2.0-14.el7_4.9.x86_64
> libvirt-daemon-config-network-3.2.0-14.el7_4.9.x86_64
> libvirt-gconfig-1.0.0-1.el7.x86_64
> libvirt-daemon-driver-storage-disk-3.2.0-14.el7_4.9.x86_64
> libvirt-glib-1.0.0-1.el7.x86_64
> libvirt-libs-3.2.0-14.el7_4.9.x86_64
> libvirt-daemon-driver-nwfilter-3.2.0-14.el7_4.9.x86_64
> libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.9.x86_64
> libvirt-daemon-driver-storage-rbd-3.2.0-14.el7_4.9.x86_64
> libvirt-gobject-1.0.0-1.el7.x86_64
> libvirt-daemon-driver-interface-3.2.0-14.el7_4.9.x86_64
> libvirt-daemon-driver-storage-scsi-3.2.0-14.el7_4.9.x86_64
> libvirt-daemon-driver-qemu-3.2.0-14.el7_4.9.x86_64
> libvirt-daemon-kvm-3.2.0-14.el7_4.9.x86_64
> libvirt-daemon-3.2.0-14.el7_4.9.x86_64
> libvirt-daemon-driver-network-3.2.0-14.el7_4.9.x86_64
> libvirt-daemon-driver-storage-iscsi-3.2.0-14.el7_4.9.x86_64
> libvirt-daemon-driver-nodedev-3.2.0-14.el7_4.9.x86_64
> libvirt-daemon-driver-storage-mpath-3.2.0-14.el7_4.9.x86_64
> libvirt-daemon-driver-storage-3.2.0-14.el7_4.9.x86_64
>
> should i remove all this package ?

Yes. You will need to remove them.
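For example, one way to clear them out in one shot (a sketch; this assumes plain yum on CentOS 7 and that nothing else on the machine still needs these packages):

    yum remove 'libvirt*'
    rpm -qa | grep libvirt    # should print nothing once the removal is done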
> > > Also on the web page https://www.ovirt.org/develop/ > release-management/features/metrics/metrics-store-installation/#run-ovirt- > metrics-store-installation-playbook, you mention > > /usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_hosts_for_metrics.sh > --playbook=ovirt-metrics-store-installation.yml > I apologise for that. It is indeed a documentation bug. Its should be /usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_machines_for_metrics.sh --playbook=ovirt-metrics-store-installation.yml as you mentioned. I'll fix it asap. > > > But on my hosted engine 4.2.1.7 with ovirt-engine-metrics-1.1.3.3-1.el7.centos.noarch, > i notice that the script > > configure_ovirt_hosts_for_metrics.sh doesn't exist but there is a > configure_ovirt_machines_for_metrics.sh script. > > Is it the good one ? > > > > Next, you mention : > Allow connections on the following ports/protocols: > > > + * tcp ports 22, 80, 443, 8443 (openshift console), 9200 (Elasticsearch) > > > following by > > ViaQ on Origin requires these [Yum Repos](centos7-viaq.repo). > > > > > > > > > > > +You will need to install the following packages: docker, > iptables-services. > > > That means that i must uninstall firewalld ? And the ports/protocols will > be managed by iptables-services ? > No , you can just open the ports in firewalld. > > What is the command to allow connections on tcp ports 22,80, 443 etc.... ? > To open a Port on CentOS/RHEL 7 run sudo firewall-cmd --zone=public --add-port=80/tcp --permanent for each port and sudo firewall-cmd --reload Check the updated rules with: firewall-cmd --list-all > > Is it managed automatically with docker or openshift or other program ? > The metrics store is managed by OpenShift. > > > Thanks for all. > > > Nicolas VAYE > > > -------- Message initial -------- > > Date: Sun, 25 Mar 2018 12:01:23 +0300 > Objet: Re: [ovirt-users] Any monitoring tool provided? > Cc: users > > ?: Marcelo Leandro 3cmarceloltmm at gmail.com%3e>> > De: Shirly Radco ly%20Radco%20%3csradco at redhat.com%3e>> > > > > -- > > SHIRLY RADCO > > BI SeNIOR SOFTWARE ENGINEER > > Red Hat Israel > > [https://www.redhat.com/files/brand/email/sig-redhat.png] tps://red.ht/sig> > TRIED. TESTED. TRUSTED. > > > On Fri, Mar 23, 2018 at 10:29 PM, Marcelo Leandro > wrote: > Hello, > > I am try this how to: > > https://www.ovirt.org/develop/release-management/features/ > metrics/metrics-store-installation/ > > but when i run this command: > /usr/share/ovirt-engine-metrics/setup/ansible/ > configure_ovirt_machines_for_metrics.sh --playbook=ovirt-metrics- > store-installation.yml > > I am had this error mensagem: > ansible-playbook: error: no such option: --playbook > > my version: > > ovirt-engine-metrics-1.0.8-1.el7.centos.noarch > > > Hi, > > You are using an old rpm. > > Please upgrade to latest, ovirt-engine-metrics-1.1.3.3-1.el7.centos.noarch > > I also added some documentation that is still in pull request: > > Add Viaq installation guide to the oVirt metrics store repo - > https://github.com/oVirt/ovirt-site/pull/1551 - This one is meaningful. I > introduced a lot of automation that save time when installing. > > Add prerequisites for installing OpenShift Logging - > https://github.com/oVirt/ovirt-site/pull/1561 > > Added how to import dashboards examples to kibana - > https://github.com/oVirt/ovirt-site/pull/1559 > > Please review them.I'll try to get them merged asap. > > > Anyone can help me? 
> > > 2018-03-22 16:28 GMT-03:00 Christopher Cox ox at endlessnow.com>>: > On 03/21/2018 10:41 PM, Terry hey wrote: > Dear all, > > Now, we can just read how many storage used, cpu usage on ovirt dashboard. > But is there any monitoring tool for monitoring virtual machine time to > time? > If yes, could you guys give me the procedure? > > > A possible option, for a full OS with network connectivity, is to monitor > the VM like you would any other host. > > We use omd/check_mk. > > Right now there isn't an oVirt specific monitor plugin for check_mk. > > I know what I said is probably pretty obvious, but just in case. > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbonazzo at redhat.com Tue Mar 27 07:20:46 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Tue, 27 Mar 2018 07:20:46 +0000 Subject: [ovirt-users] overt-guest-agent Failure on latest Debian 9 Sretch In-Reply-To: <1522095410.1710.217.camel@province-sud.nc> References: <6A9BB6DE-9031-4A5A-A799-E24A49FB8055@starlett.lv> <1522095410.1710.217.camel@province-sud.nc> Message-ID: Tomas, anything we can do here? Il lun 26 mar 2018, 22:18 Nicolas Vaye ha scritto: > Hello Andrei, > > i have had the same problem and on Debian stretch the problem is the old > version of agent from stretch repository. > > I downloaded 1.0.13 from Debian testing repo as *.deb file. > > With these new versions of guest-agent then is also a udev rules issue. > > The serial channels have been renamed and the rules didn`t match for ovirt. > > See the install script, as attachement (provided by Oliver%20Riesener%20%3cOliver.Riesener at hs-bremen.de%3e> > Oliver.Riesener at hs-bremen.de). > > May be it can help you. > > Regards, > > Nicolas Vaye > > -------- Message initial -------- > > Date: Mon, 26 Mar 2018 17:29:16 +0300 > Objet: [ovirt-users] overt-guest-agent Failure on latest Debian 9 Sretch > ?: users at ovirt.org > De: Andrei Verovski Andrei%20Verovski%20%3candreil1 at starlett.lv%3e>> > > Hi, > > I just installed latest Debian 9 Sretch under oVirt 4.2 and got this error: > > # tail -n 1000 ovirt-guest-agent.log > MainThread::INFO::2018-03-26 > 17:09:57,400::ovirt-guest-agent::59::root::Starting oVirt guest agent > MainThread::ERROR::2018-03-26 > 17:09:57,402::ovirt-guest-agent::141::root::Unhandled exception in oVirt > guest agent! 
> Traceback (most recent call last): > File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 135, in > > agent.run(daemon, pidfile) > File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 65, in run > self.agent = LinuxVdsAgent(config) > File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 472, in > __init__ > AgentLogicBase.__init__(self, config) > File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 188, in > __init__ > self.vio = VirtIoChannel(config.get("virtio", "device")) > File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 153, in > __init__ > self._stream = VirtIoStream(vport_name) > File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 134, in > __init__ > self._vport = os.open(vport_name, os.O_RDWR) > OSError: [Errno 2] No such file or directory: > '/dev/virtio-ports/com.redhat.rhevm.vdsm? > > Followed this manual: > https://bugzilla.redhat.com/show_bug.cgi?id=1472293 > and run > touch /etc/udev/rules.d/55-ovirt-guest-agent.rules > # edit /etc/udev/rules.d/55-ovirt-guest-agent.rules > SYMLINK=="virtio-ports/ovirt-guest-agent.0", OWNER="ovirtagent", > GROUP="ovirtagent" > udevadm trigger --subsystem-match="virtio-ports? > > AND this > > http://lists.ovirt.org/pipermail/users/2018-January/086101.html > touch /etc/ovirt-guest-agent/ovirt-guest-agent.conf # existed so I created it. > # edit /etc/ovirt-guest-agent.conf > [virtio] > device = /dev/virtio-ports/ovirt-guest-agent.0 > > reboot > > Yet still have same problem and error message. > How to solve it ? > > Thanks in advance > Andrei > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sven.Achtelik at eps.aero Tue Mar 27 07:56:34 2018 From: Sven.Achtelik at eps.aero (Sven Achtelik) Date: Tue, 27 Mar 2018 07:56:34 +0000 Subject: [ovirt-users] Workflow after restoring engine from backup In-Reply-To: References: <831f30ed018b4739a2491cbd24f2429d@eps.aero> <9abbbd52e96b4cd1949a37e863130a13@eps.aero> Message-ID: I did look at this, for the VMs in question there are no entries on the run_on_vds and migrating_to_vds fields. I'm thinking of giving this a try. -----Urspr?ngliche Nachricht----- Von: Yedidyah Bar David [mailto:didi at redhat.com] Gesendet: Sonntag, 25. M?rz 2018 07:46 An: Sven Achtelik Cc: users at ovirt.org Betreff: Re: [ovirt-users] Workflow after restoring engine from backup On Fri, Mar 23, 2018 at 10:35 AM, Sven Achtelik wrote: > It looks like I can't get a chance to shut down the HA VMs. I check the restore log and it did mention that it change the HA-VM entries. Just to make sure I looked at the DB and for the vms in question it looks like this. 
> > engine=# select vm_guid,status,vm_host,exit_status,exit_reason from vm_dynamic Where vm_guid IN (SELECT vm_guid FROM vm_static WHERE auto_startup='t' AND lease_sd_id is NULL); > vm_guid | status | vm_host | exit_status | exit_reason > --------------------------------------+--------+-----------------+-------------+------------- > 8733d4a6-0844-xxxx-804f-6b919e93e076 | 0 | DXXXX | 2 | -1 > 4eeaa622-17f9-xxxx-b99a-cddb3ad942de | 0 | xxxxAPP | 2 | -1 > fbbdc0a0-23a4-4d32-xxxx-a35c59eb790d | 0 | xxxxDB0 | 2 | -1 > 45a4e7ce-19a9-4db9-xxxxx-66bd1b9d83af | 0 | xxxxxWOR | 2 | -1 > (4 rows) > > Should that be enough to have a safe start of the engine without any HA action kicking in. ? Looks ok, but check also run_on_vds and migrating_to_vds. See also bz 1446055. Best regards, > > > -----Urspr?ngliche Nachricht----- > Von: Yedidyah Bar David [mailto:didi at redhat.com] > Gesendet: Montag, 19. M?rz 2018 10:18 > An: Sven Achtelik > Cc: users at ovirt.org > Betreff: Re: [ovirt-users] Workflow after restoring engine from backup > > On Mon, Mar 19, 2018 at 11:03 AM, Sven Achtelik wrote: >> Hi Didi, >> >> my backups where taken with the end. Backup utility. I have 3 Data >> centers, two of them with just one host and the third one with 3 >> hosts running the engine. The backup three days old, was taken on >> engine version 4.1 (4.1.7) and the restored engine is running on 4.1.9. > > Since the bug I mentioned was fixed in 4.1.3, you should be covered. > >> I have three HA VMs that would >> be affected. All others are just normal vms. Sounds like it would be >> the safest to shut down the HA vm S to make sure that nothing happens ? > > If you can have downtime, I agree it sounds safer to shutdown the VMs. > >> Or can I >> disable the HA action in the DB for now ? > > No need to. If you restored with 4.1.9 engine-backup, it should have done this for you. If you still have the restore log, you can verify this by checking it. It should contain 'Resetting HA VM status', and then the result of the sql that it ran. > > Best regards, > >> >> Thank you, >> >> Sven >> >> >> >> Von meinem Samsung Galaxy Smartphone gesendet. >> >> >> -------- Urspr?ngliche Nachricht -------- >> Von: Yedidyah Bar David >> Datum: 19.03.18 07:33 (GMT+01:00) >> An: Sven Achtelik >> Cc: users at ovirt.org >> Betreff: Re: [ovirt-users] Workflow after restoring engine from >> backup >> >> On Sun, Mar 18, 2018 at 11:45 PM, Sven Achtelik >> >> wrote: >>> Hi All, >>> >>> >>> >>> I had issue with the storage that hosted my engine vm. The disk got >>> corrupted and I needed to restore the engine from a backup. >> >> How did you backup, and how did you restore? >> >> Which version was used for each? >> >>> That worked as >>> expected, I just didn?t start the engine yet. >> >> OK. >> >>> I know that after the backup >>> was taken some machines where migrated around before the engine >>> disks failed. >> >> Are these machines HA? >> >>> My question is what will happen once I start the engine service >>> which has the restored backup on it ? Will it query the hosts for >>> the running VMs >> >> It will, but HA machines are handled differently. >> >> See also: >> >> https://bugzilla.redhat.com/show_bug.cgi?id=1441322 >> https://bugzilla.redhat.com/show_bug.cgi?id=1446055 >> >>> or will it assume that the VMs are still on the hosts as they >>> resided at the point of backup ? >> >> It does, initially, but then updates status according to what it gets >> from hosts. 
>> >> But polling the hosts takes time, especially if you have many, and HA >> policy might require faster handling. So if it polls first a host >> that had a machine on it during backup, and sees that it's gone, and >> didn't yet poll the new host, HA handling starts immediately, which >> eventually might lead to starting the VM on another host. >> >> To prevent that, the fixes to above bugs make the restore process >> mark HA VMs that do not have leases on the storage as "dead". >> >>> Would I need to change the DB manual to let the engine know where >>> VMs are up at this point ? >> >> You might need to, if you have HA VMs and a too-old version of restore. >> >>> What will happen to HA VMs >>> ? I feel that it might try to start them a second time. My biggest >>> issue is that I can?t get a service Windows to shutdown all VMs and >>> then lat them restart by the engine. >>> >>> >>> >>> Is there a known workflow for that ? >> >> I am not aware of a tested procedure for handling above if you have a >> too-old version, but you can check the patches linked from above bugs >> and manually run the SQL command(s) they include. They are >> essentially comment 4 of the first bug. >> >> Good luck and best regards, >> -- >> Didi > > > > -- > Didi -- Didi From spfma.tech at e.mail.fr Tue Mar 27 10:08:26 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Tue, 27 Mar 2018 12:08:26 +0200 Subject: [ovirt-users] Host non-responsive after engine failure Message-ID: <20180327100826.DDCB2E447A@smtp01.mail.de> Hi, I had an electrical failure on the server hosting the engine. After the reboot it was able to gain access to it again, log into the GUI, but the currently online node is not leaving "not responsive" status. Of course, the network storage paths are still mounted, the VMs are running, but I can't gain control again. 
In vdsmd.log, I have a lot of messages like this one : 2018-03-27 12:03:11,281+0200 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:674) 2018-03-27 12:03:16,286+0200 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList(options=None) from=internal, task_id=b90f550e-ee68-4a91-a7c6-3b60f11c3978 (api:46) 2018-03-27 12:03:16,286+0200 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=b90f550e-ee68-4a91-a7c6-3b60f11c3978 (api:52) 2018-03-27 12:03:16,287+0200 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:674) 2018-03-27 12:03:18,413+0200 INFO (periodic/3) [vdsm.api] START repoStats(domains=()) from=internal, task_id=067714b4-8172-4eec-92bb-6ac16586a657 (api:46) 2018-03-27 12:03:18,413+0200 INFO (periodic/3) [vdsm.api] FINISH repoStats return={} from=internal, task_id=067714b4-8172-4eec-92bb-6ac16586a657 (api:52) 2018-03-27 12:03:18,413+0200 INFO (periodic/3) [vdsm.api] START multipath_health() from=internal, task_id=e97421fb-5d5a-4291-9231-94bc1961cc49 (api:46) 2018-03-27 12:03:18,413+0200 INFO (periodic/3) [vdsm.api] FINISH multipath_health return={} from=internal, task_id=e97421fb-5d5a-4291-9231-94bc1961cc49 (api:52) 2018-03-27 12:03:20,458+0200 INFO (jsonrpc/6) [api.host] START getAllVmStats() from=::1,57576 (api:46) 2018-03-27 12:03:20,462+0200 INFO (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,57576 (api:52) 2018-03-27 12:03:20,464+0200 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.01 seconds (__init__:573) 2018-03-27 12:03:20,474+0200 INFO (jsonrpc/7) [api.host] START getAllVmIoTunePolicies() from=::1,57576 (api:46) 2018-03-27 12:03:20,475+0200 INFO (jsonrpc/7) [api.host] FINISH getAllVmIoTunePolicies return={'status': {'message': 'Done', 'code': 0}, 'io_tune_policies_dict': {'c33a30ba-7fe8-4ff4-aeac-80cb396b9670': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/593f6f61-cb7f-4c53-b6e7-617964c222e9/329b2e8b-6cf9-4b39-9190-14a32697ce44', 'name': 'sda'}]}, 'e8a90739-7737-413e-8edc-a373192f4476': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/97e078f7-69c6-46c2-b620-26474cd65929/bbb4a1fb-5594-4750-be71-c6b55dca3257', 'name': 'vda'}]}, '3aec5ce4-691f-487c-a916-aa7f7a664d8c': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/46a65a1b-d00a-452d-ab9b-70862bb5c053/a4d2ad44-5577-4412-9a8c-819d1f12647a', 'name': 'sda'}, {'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': 
'/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/0c3a13ce-8f7a-4034-a8cc-12f795b8aa17/c48e0e37-e54b-4ca3-b3ed-b66ead9fad44', 'name': 'sdb'}]}, '5de1de8f-ac01-459f-b4b8-6d1ed05c8ca3': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/320ac81c-7db7-4ec0-a271-755e91442b6a/8bfc95c5-318c-43dd-817f-6c7a8a7a5b43', 'name': 'sda'}, {'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/e7ad86bb-3c63-466b-82cf-687164c46f7b/613ea0ce-ed14-4185-b3fd-36490441f889', 'name': 'sdb'}]}, '5d548a09-a397-4aac-8b1f-39002e014f5f': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/c7421014-7c5f-45ad-a948-caa83b8ce3e7/ae0ba893-69af-4b67-a262-b739596d5c95', 'name': 'sda'}]}, '168b01b1-5ec8-41dd-808e-fa9f66cea718': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/b9b7902a-7a62-4826-bfda-dff260b9fcd1/d05db17c-9908-4bfb-a74b-4aa944510a56', 'name': 'vda'}, {'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/564b3848-b6d5-4deb-910f-5b6f2fdbccc5/4f89ff25-2d3b-40b9-9bbc-9a6b6995346c', 'name': 'vdb'}, {'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/10.100.2.132:_volume2_ovirt__vms__1/07efa4fe-06bc-498e-8f42-035461aef900/images/738e0704-8484-483b-ae67-091715496152/2f811423-6bab-4966-9c00-9d3b72429328', 'name': 'vdc'}]}}} from=::1,57576 (api:52) 2018-03-27 12:03:20,475+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmIoTunePolicies succeeded in 0.00 seconds (__init__:573) 2018-03-27 12:03:21,292+0200 INFO (vmrecovery) [vdsm.api] START getConnectedStoragePoolsList(options=None) from=internal, task_id=a35602b2-7d5c-4e87-86cd-ede17c62488f (api:46) 2018-03-27 12:03:21,292+0200 INFO (vmrecovery) [vdsm.api] FINISH getConnectedStoragePoolsList return={'poollist': []} from=internal, task_id=a35602b2-7d5c-4e87-86cd-ede17c62488f (api:52) 2018-03-27 12:03:21,293+0200 INFO (vmrecovery) [vds] recovery: waiting for storage pool to go up (clientIF:674) So i see no error. But in messages : Mar 27 12:01:43 pfm-srv-virt-2 libvirtd: 2018-03-27 10:01:43.569+0000: 71793: error : qemuDomainAgentAvailable:6030 : Guest agent is not responding: QEMU guest agent is not connected I have restarted libvirtd and vdsmd services. Is there something else to do ? 
Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Tue Mar 27 12:34:45 2018 From: rightkicktech at gmail.com (Alex K) Date: Tue, 27 Mar 2018 15:34:45 +0300 Subject: [ovirt-users] ovirt snapshot issue In-Reply-To: References: Message-ID: Hi All, Any idea on the below? I am using oVirt Guest Tools 4.2-1.el7.centos for the VM. The Window 2016 server VM (which it the one with the relatively big disks: 500 GB) it is consistently rendered unresponsive when trying to get a snapshot. I amy provide any other additional logs if needed. Alex On Sun, Mar 25, 2018 at 7:30 PM, Alex K wrote: > Hi folks, > > I am facing frequently the following issue: > > On some large VMs (Windows 2016 with two disk drives, 60GB and 500GB) when > attempting to create a snapshot of the VM, the VM becomes unresponsive. > > The errors that I managed to collect were: > > vdsm error at host hosting the VM: > 2018-03-25 14:40:13,442+0000 WARN (vdsm.Scheduler) [Executor] Worker > blocked: {u'frozen': False, u'vmID': u'a5c761a2-41cd-40c2-b65f-f3819293e8a4', > u'snapDrives': [{u'baseVolumeID': u'2a33e585-ece8-4f4d-b45d-5ecc9239200e', > u'domainID': u'888e3aae-f49f-42f7-a7fa-76700befabea', u'volumeID': > u'e9a01ebd-83dd-40c3-8c83-5302b0d15e04', u'imageID': > u'c75b8e93-3067-4472-bf24-dafada224e4d'}, {u'baseVolumeID': > u'3fb2278c-1b0d-4677-a529-99084e4b08af', u'domainID': > u'888e3aae-f49f-42f7-a7fa-76700befabea', u'volumeID': > u'78e6b6b1-2406-4393-8d92-831a6d4f1337', u'imageID': > u'd4223744-bf5d-427b-bec2-f14b9bc2ef81'}]}, 'jsonrpc': '2.0', 'method': > u'VM.snapshot', 'id': u'89555c87-9701-4260-9952-789965261e65'} at > 0x7fca4004cc90> timeout=60, duration=60 at 0x39d8210> task#=155842 at > 0x2240e10> (executor:351) > 2018-03-25 14:40:15,261+0000 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC > call VM.getStats failed (error 1) in 0.01 seconds (__init__:539) > 2018-03-25 14:40:17,471+0000 WARN (jsonrpc/5) [virt.vm] > (vmId='a5c761a2-41cd-40c2-b65f-f3819293e8a4') monitor became unresponsive > (command timeout, age=67.9100000001) (vm:5132) > > engine.log: > 2018-03-25 14:40:19,875Z WARN [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler2) > [1d737df7] EVENT_ID: VM_NOT_RESPONDING(126), Correlation ID: null, Call > Stack: null, Custom ID: null, Custom Event ID: -1, Message: VM Data-Server > is not responding. > > 2018-03-25 14:42:13,708Z ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) > [17789048-009a-454b-b8ad-2c72c7cd37aa] EVENT_ID: > VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: > null, Custom ID: null, Custom Event ID: -1, Message: VDSM v1.cluster > command SnapshotVDS failed: Message timeout which can be caused by > communication issues > 2018-03-25 14:42:13,708Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] > (DefaultQuartzScheduler5) [17789048-009a-454b-b8ad-2c72c7cd37aa] Command > 'SnapshotVDSCommand(HostName = v1.cluster, SnapshotVDSCommandParameters:{runAsync='true', > hostId='a713d988-ee03-4ff0-a0cd-dc4cde1507f4', > vmId='a5c761a2-41cd-40c2-b65f-f3819293e8a4'})' execution failed: > VDSGenericException: VDSNetworkException: Message timeout which can be > caused by communication issues > 2018-03-25 14:42:13,708Z WARN [org.ovirt.engine.core.bll.snapshots. 
> CreateAllSnapshotsFromVmCommand] (DefaultQuartzScheduler5) > [17789048-009a-454b-b8ad-2c72c7cd37aa] Could not perform live snapshot > due to error, VM will still be configured to the new created snapshot: > EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: > VDSGenericException: VDSNetworkException: Message timeout which can be > caused by communication issues (Failed with error VDS_NETWORK_ERROR and > code 5022) > 2018-03-25 14:42:13,708Z WARN [org.ovirt.engine.core.vdsbroker.VdsManager] > (org.ovirt.thread.pool-6-thread-15) [17789048-009a-454b-b8ad-2c72c7cd37aa] > Host 'v1.cluster' is not responding. It will stay in Connecting state for a > grace period of 61 seconds and after that an attempt to fence the host will > be issued. > 2018-03-25 14:42:13,725Z WARN [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-15) > [17789048-009a-454b-b8ad-2c72c7cd37aa] EVENT_ID: VDS_HOST_NOT_RESPONDING_CONNECTING(9,008), > Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: > -1, Message: Host v1.cluster is not responding. It will stay in Connecting > state for a grace period of 61 seconds and after that an attempt to fence > the host will be issued. > 2018-03-25 14:42:13,751Z WARN [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) > [17789048-009a-454b-b8ad-2c72c7cd37aa] EVENT_ID: > USER_CREATE_LIVE_SNAPSHOT_FINISHED_FAILURE(170), Correlation ID: > 17789048-009a-454b-b8ad-2c72c7cd37aa, Job ID: 16e48c28-a8c7-4841-bd81-1f2d370f345d, > Call Stack: org.ovirt.engine.core.common.errors.EngineException: > EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: > VDSGenericException: VDSNetworkException: Message timeout which can be > caused by communication issues (Failed with error VDS_NETWORK_ERROR and > code 5022) > 2018-03-25 14:42:14,372Z ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) [] > EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_FAILURE(69), Correlation ID: > 17789048-009a-454b-b8ad-2c72c7cd37aa, Job ID: 16e48c28-a8c7-4841-bd81-1f2d370f345d, > Call Stack: org.ovirt.engine.core.common.errors.EngineException: > EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: > VDSGenericException: VDSNetworkException: Message timeout which can be > caused by communication issues (Failed with error VDS_NETWORK_ERROR and > code 5022) > 2018-03-25 14:42:14,372Z WARN [org.ovirt.engine.core.bll. > ConcurrentChildCommandsExecutionCallback] (DefaultQuartzScheduler5) [] > Command 'CreateAllSnapshotsFromVm' id: 'bad4f5be-5306-413f-a86a-513b3cfd3c66' > end method execution failed, as the command isn't marked for endAction() > retries silently ignoring > 2018-03-25 14:42:15,951Z WARN [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) > [5017c163] EVENT_ID: VDS_NO_SELINUX_ENFORCEMENT(25), Correlation ID: > null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Host > v1.cluster does not enforce SELinux. Current status: DISABLED > 2018-03-25 14:42:15,951Z WARN [org.ovirt.engine.core.vdsbroker.VdsManager] > (DefaultQuartzScheduler5) [5017c163] Host 'v1.cluster' is running with > SELinux in 'DISABLED' mode > > As soon as the VM is unresponsive, the VM console that was already open > freezes. I can resume the VM only by powering off and on. > > I am using ovirt 4.1.9 with 3 nodes and self-hosted engine. 
I am running > mostly Windows 10 and Windows 2016 server VMs. I have installed latest > guest agents from: > > http://resources.ovirt.org/pub/ovirt-4.2/iso/oVirt- > toolsSetup/4.2-1.el7.centos/ > > At the screen where one takes a snapshot I get a warning saying "Could not > detect guest agent on the VM. Note that without guest agent the data on the > created snapshot may be inconsistent". See attached. I have verified that > ovirt guest tools are installed and shown at installed apps at engine GUI. > Also Ovirt Guest Agent (32 bit) and qemu-ga are listed as running at the > windows tasks manager. Shouldn't ovirt guest agent be 64 bit on Windows 64 > bit? > > Any advice will be much appreciated. > > Alex > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbonazzo at redhat.com Tue Mar 27 12:38:22 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Tue, 27 Mar 2018 14:38:22 +0200 Subject: [ovirt-users] ovirt snapshot issue In-Reply-To: References: Message-ID: 2018-03-27 14:34 GMT+02:00 Alex K : > Hi All, > > Any idea on the below? > > I am using oVirt Guest Tools 4.2-1.el7.centos for the VM. > The Window 2016 server VM (which it the one with the relatively big disks: > 500 GB) it is consistently rendered unresponsive when trying to get a > snapshot. > I amy provide any other additional logs if needed. > Adding some people to the thread > > Alex > > On Sun, Mar 25, 2018 at 7:30 PM, Alex K wrote: > >> Hi folks, >> >> I am facing frequently the following issue: >> >> On some large VMs (Windows 2016 with two disk drives, 60GB and 500GB) >> when attempting to create a snapshot of the VM, the VM becomes >> unresponsive. >> >> The errors that I managed to collect were: >> >> vdsm error at host hosting the VM: >> 2018-03-25 14:40:13,442+0000 WARN (vdsm.Scheduler) [Executor] Worker >> blocked: > {u'frozen': False, u'vmID': u'a5c761a2-41cd-40c2-b65f-f3819293e8a4', >> u'snapDrives': [{u'baseVolumeID': u'2a33e585-ece8-4f4d-b45d-5ecc9239200e', >> u'domainID': u'888e3aae-f49f-42f7-a7fa-76700befabea', u'volumeID': >> u'e9a01ebd-83dd-40c3-8c83-5302b0d15e04', u'imageID': >> u'c75b8e93-3067-4472-bf24-dafada224e4d'}, {u'baseVolumeID': >> u'3fb2278c-1b0d-4677-a529-99084e4b08af', u'domainID': >> u'888e3aae-f49f-42f7-a7fa-76700befabea', u'volumeID': >> u'78e6b6b1-2406-4393-8d92-831a6d4f1337', u'imageID': >> u'd4223744-bf5d-427b-bec2-f14b9bc2ef81'}]}, 'jsonrpc': '2.0', 'method': >> u'VM.snapshot', 'id': u'89555c87-9701-4260-9952-789965261e65'} at >> 0x7fca4004cc90> timeout=60, duration=60 at 0x39d8210> task#=155842 at >> 0x2240e10> (executor:351) >> 2018-03-25 14:40:15,261+0000 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] >> RPC call VM.getStats failed (error 1) in 0.01 seconds (__init__:539) >> 2018-03-25 14:40:17,471+0000 WARN (jsonrpc/5) [virt.vm] >> (vmId='a5c761a2-41cd-40c2-b65f-f3819293e8a4') monitor became >> unresponsive (command timeout, age=67.9100000001) (vm:5132) >> >> engine.log: >> 2018-03-25 14:40:19,875Z WARN [org.ovirt.engine.core.dal.dbb >> roker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler2) >> [1d737df7] EVENT_ID: VM_NOT_RESPONDING(126), Correlation ID: null, Call >> Stack: null, Custom ID: null, Custom Event ID: -1, Message: VM Data-Server >> is not responding. 
>> >> 2018-03-25 14:42:13,708Z ERROR [org.ovirt.engine.core.dal.dbb >> roker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) >> [17789048-009a-454b-b8ad-2c72c7cd37aa] EVENT_ID: >> VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: >> null, Custom ID: null, Custom Event ID: -1, Message: VDSM v1.cluster >> command SnapshotVDS failed: Message timeout which can be caused by >> communication issues >> 2018-03-25 14:42:13,708Z ERROR [org.ovirt.engine.core.vdsbrok >> er.vdsbroker.SnapshotVDSCommand] (DefaultQuartzScheduler5) >> [17789048-009a-454b-b8ad-2c72c7cd37aa] Command >> 'SnapshotVDSCommand(HostName = v1.cluster, SnapshotVDSCommandParameters:{runAsync='true', >> hostId='a713d988-ee03-4ff0-a0cd-dc4cde1507f4', >> vmId='a5c761a2-41cd-40c2-b65f-f3819293e8a4'})' execution failed: >> VDSGenericException: VDSNetworkException: Message timeout which can be >> caused by communication issues >> 2018-03-25 14:42:13,708Z WARN [org.ovirt.engine.core.bll.sna >> pshots.CreateAllSnapshotsFromVmCommand] (DefaultQuartzScheduler5) >> [17789048-009a-454b-b8ad-2c72c7cd37aa] Could not perform live snapshot >> due to error, VM will still be configured to the new created snapshot: >> EngineException: org.ovirt.engine.core.vdsbroke >> r.vdsbroker.VDSNetworkException: VDSGenericException: >> VDSNetworkException: Message timeout which can be caused by communication >> issues (Failed with error VDS_NETWORK_ERROR and code 5022) >> 2018-03-25 14:42:13,708Z WARN [org.ovirt.engine.core.vdsbroker.VdsManager] >> (org.ovirt.thread.pool-6-thread-15) [17789048-009a-454b-b8ad-2c72c7cd37aa] >> Host 'v1.cluster' is not responding. It will stay in Connecting state for a >> grace period of 61 seconds and after that an attempt to fence the host will >> be issued. >> 2018-03-25 14:42:13,725Z WARN [org.ovirt.engine.core.dal.dbb >> roker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-15) >> [17789048-009a-454b-b8ad-2c72c7cd37aa] EVENT_ID: >> VDS_HOST_NOT_RESPONDING_CONNECTING(9,008), Correlation ID: null, Call >> Stack: null, Custom ID: null, Custom Event ID: -1, Message: Host v1.cluster >> is not responding. It will stay in Connecting state for a grace period of >> 61 seconds and after that an attempt to fence the host will be issued. 
>> 2018-03-25 14:42:13,751Z WARN [org.ovirt.engine.core.dal.dbb >> roker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) >> [17789048-009a-454b-b8ad-2c72c7cd37aa] EVENT_ID: >> USER_CREATE_LIVE_SNAPSHOT_FINISHED_FAILURE(170), Correlation ID: >> 17789048-009a-454b-b8ad-2c72c7cd37aa, Job ID: >> 16e48c28-a8c7-4841-bd81-1f2d370f345d, Call Stack: >> org.ovirt.engine.core.common.errors.EngineException: EngineException: >> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >> VDSGenericException: VDSNetworkException: Message timeout which can be >> caused by communication issues (Failed with error VDS_NETWORK_ERROR and >> code 5022) >> 2018-03-25 14:42:14,372Z ERROR [org.ovirt.engine.core.dal.dbb >> roker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) [] >> EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_FAILURE(69), Correlation ID: >> 17789048-009a-454b-b8ad-2c72c7cd37aa, Job ID: >> 16e48c28-a8c7-4841-bd81-1f2d370f345d, Call Stack: >> org.ovirt.engine.core.common.errors.EngineException: EngineException: >> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >> VDSGenericException: VDSNetworkException: Message timeout which can be >> caused by communication issues (Failed with error VDS_NETWORK_ERROR and >> code 5022) >> 2018-03-25 14:42:14,372Z WARN [org.ovirt.engine.core.bll.Con >> currentChildCommandsExecutionCallback] (DefaultQuartzScheduler5) [] >> Command 'CreateAllSnapshotsFromVm' id: 'bad4f5be-5306-413f-a86a-513b3cfd3c66' >> end method execution failed, as the command isn't marked for endAction() >> retries silently ignoring >> 2018-03-25 14:42:15,951Z WARN [org.ovirt.engine.core.dal.dbb >> roker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) >> [5017c163] EVENT_ID: VDS_NO_SELINUX_ENFORCEMENT(25), Correlation ID: >> null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Host >> v1.cluster does not enforce SELinux. Current status: DISABLED >> 2018-03-25 14:42:15,951Z WARN [org.ovirt.engine.core.vdsbroker.VdsManager] >> (DefaultQuartzScheduler5) [5017c163] Host 'v1.cluster' is running with >> SELinux in 'DISABLED' mode >> >> As soon as the VM is unresponsive, the VM console that was already open >> freezes. I can resume the VM only by powering off and on. >> >> I am using ovirt 4.1.9 with 3 nodes and self-hosted engine. I am running >> mostly Windows 10 and Windows 2016 server VMs. I have installed latest >> guest agents from: >> >> http://resources.ovirt.org/pub/ovirt-4.2/iso/oVirt-toolsSetu >> p/4.2-1.el7.centos/ >> >> At the screen where one takes a snapshot I get a warning saying "Could >> not detect guest agent on the VM. Note that without guest agent the data on >> the created snapshot may be inconsistent". See attached. I have verified >> that ovirt guest tools are installed and shown at installed apps at engine >> GUI. Also Ovirt Guest Agent (32 bit) and qemu-ga are listed as running at >> the windows tasks manager. Shouldn't ovirt guest agent be 64 bit on Windows >> 64 bit? >> >> Any advice will be much appreciated. >> >> Alex >> >> >> >> >> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA sbonazzo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fernando.frediani at upx.com Tue Mar 27 13:23:44 2018 From: fernando.frediani at upx.com (FERNANDO FREDIANI) Date: Tue, 27 Mar 2018 10:23:44 -0300 Subject: [ovirt-users] Snapshot of the Self-Hosted Engine Message-ID: Hello Is it possible to snapshot the Self-Hosted Engine before an Upgrade ? If so how ? Thanks Fernando -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahuser at 7five-edv.de Tue Mar 27 15:10:38 2018 From: ahuser at 7five-edv.de (Andreas Huser) Date: Tue, 27 Mar 2018 17:10:38 +0200 (CEST) Subject: [ovirt-users] vdi over wan optimation Message-ID: <325317583.21988.1522163438636.JavaMail.zimbra@7five-edv.de> Hi, i have a question about vdi over wan. The traffic is very high when i look videos or online streams. 100% of my capacity of Internet Bandwidth is used. Does anyone an idea how can i optimise spice for wan? Thanks a lot Andreas From michal.skrivanek at redhat.com Tue Mar 27 16:40:35 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Tue, 27 Mar 2018 18:40:35 +0200 Subject: [ovirt-users] overt-guest-agent Failure on latest Debian 9 Sretch In-Reply-To: References: <6A9BB6DE-9031-4A5A-A799-E24A49FB8055@starlett.lv> <1522095410.1710.217.camel@province-sud.nc> Message-ID: > On 27 Mar 2018, at 09:20, Sandro Bonazzola wrote: > > Tomas, anything we can do here? More people complaining the better chance it gets fixed, but that should happen within Debian community. Package updates are coming from debian package maintainer > > Il lun 26 mar 2018, 22:18 Nicolas Vaye > ha scritto: > Hello Andrei, > > i have had the same problem and on Debian stretch the problem is the old version of agent from stretch repository. > > I downloaded 1.0.13 from Debian testing repo as *.deb file. > > With these new versions of guest-agent then is also a udev rules issue. > > The serial channels have been renamed and the rules didn`t match for ovirt. > > See the install script, as attachement (provided by %3e> Oliver.Riesener at hs-bremen.de >). > > May be it can help you. > > Regards, > > Nicolas Vaye > > -------- Message initial -------- > > Date: Mon, 26 Mar 2018 17:29:16 +0300 > Objet: [ovirt-users] overt-guest-agent Failure on latest Debian 9 Sretch > ?: users at ovirt.org > > De: Andrei Verovski %3e>> > > Hi, > > I just installed latest Debian 9 Sretch under oVirt 4.2 and got this error: > > # tail -n 1000 ovirt-guest-agent.log > MainThread::INFO::2018-03-26 17:09:57,400::ovirt-guest-agent::59::root::Starting oVirt guest agent > MainThread::ERROR::2018-03-26 17:09:57,402::ovirt-guest-agent::141::root::Unhandled exception in oVirt guest agent! > Traceback (most recent call last): > File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 135, in > agent.run(daemon, pidfile) > File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 65, in run > self.agent = LinuxVdsAgent(config) > File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 472, in __init__ > AgentLogicBase.__init__(self, config) > File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 188, in __init__ > self.vio = VirtIoChannel(config.get("virtio", "device")) > File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 153, in __init__ > self._stream = VirtIoStream(vport_name) > File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 134, in __init__ > self._vport = os.open(vport_name, os.O_RDWR) > OSError: [Errno 2] No such file or directory: '/dev/virtio-ports/com.redhat.rhevm.vdsm? 
>
> Followed this manual:
> https://bugzilla.redhat.com/show_bug.cgi?id=1472293
> and run
> touch /etc/udev/rules.d/55-ovirt-guest-agent.rules
> # edit /etc/udev/rules.d/55-ovirt-guest-agent.rules
> SYMLINK=="virtio-ports/ovirt-guest-agent.0", OWNER="ovirtagent", GROUP="ovirtagent"
> udevadm trigger --subsystem-match="virtio-ports"
>
> AND this
>
> http://lists.ovirt.org/pipermail/users/2018-January/086101.html
> touch /etc/ovirt-guest-agent/ovirt-guest-agent.conf #
> # edit /etc/ovirt-guest-agent.conf
> [virtio]
> device = /dev/virtio-ports/ovirt-guest-agent.0
>
> reboot
>
> Yet still have same problem and error message.
> How to solve it ?
>
> Thanks in advance
> Andrei
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
>
> http://lists.ovirt.org/mailman/listinfo/users
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ccox at endlessnow.com  Tue Mar 27 17:02:07 2018
From: ccox at endlessnow.com (Christopher Cox)
Date: Tue, 27 Mar 2018 12:02:07 -0500
Subject: [ovirt-users] vdi over wan optimation
In-Reply-To: <325317583.21988.1522163438636.JavaMail.zimbra@7five-edv.de>
References: <325317583.21988.1522163438636.JavaMail.zimbra@7five-edv.de>
Message-ID: <980d35c5-4cc3-ba33-6398-f6499974d65d@endlessnow.com>

On 03/27/2018 10:10 AM, Andreas Huser wrote:
> Hi, i have a question about vdi over wan. The traffic is very high when i look videos or online streams. 100% of my capacity of Internet Bandwidth is used.
>
> Does anyone an idea how can i optimise spice for wan?

You can always look into QoS, but you might have to apply that uniformly to the guest spice traffic (likely). And by QoS, that likely means something you do outside of oVirt (Internet vs Intranet).

Of course, applying QoS for "video" may make things horrible. Pumping large amounts of "live" important synchronized data costs a lot; it's the nature of the beast. Restrict it and you often end up with less than usable audio/video, especially if the data is unknown (e.g. a full remote desktop).

Ideally, from a thin client perspective, the solution is to run as much of that as possible outside of the remote desktop. There's a reason why those very setup-specific options exist out there for handling these types of VDI things (transparently). They are very restricted and of course have a slew of dependencies and usually a lot of cost (and IMHO, often times go "belly up" within 5 years).

I've seen some of these. Basically the thin client uses remote desktop, but when an embedded video happens, that is offloaded as a direct connection handled by the thin client (kind of like "casting" in today's world).

If these VDI guests are Windows, RDP is likely going to do better than Spice, especially for remote audio and video. That doesn't mean it won't occupy all your bandwidth, just saying it performs better. With that said, remote desktop via other means, be that RDP, or NX, etc., might be "better" than Spice.

PSA: If these are Windows, be aware of Microsoft's VDI tax (the VDA). This is an arbitrary invented tax that Microsoft created strictly to get people to only use their hypervisor platform. It can cost a lot and it's required annually.

In the past I used NX for my Linux "desktops".
This worked well even over very low bandwidth connects, however, it assumed my business was not the source of network bottlenecks on the Internet. Just saying. Even so, things that did massive amount of work, be that large AV or IntelliJ (which does a gazillion window creates/destroys) were still some concern. We tweaked our IntelliJ profiles to help reduce the load there. Not a whole lot we could do with regards to audio/video but educate people. And no, I do not recommend 10 users playing PUBG via VDI. :-) From sbonazzo at redhat.com Tue Mar 27 17:05:26 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Tue, 27 Mar 2018 17:05:26 +0000 Subject: [ovirt-users] overt-guest-agent Failure on latest Debian 9 Sretch In-Reply-To: References: <6A9BB6DE-9031-4A5A-A799-E24A49FB8055@starlett.lv> <1522095410.1710.217.camel@province-sud.nc> Message-ID: Il mar 27 mar 2018, 18:40 Michal Skrivanek ha scritto: > > > On 27 Mar 2018, at 09:20, Sandro Bonazzola wrote: > > Tomas, anything we can do here? > > > More people complaining the better chance it gets fixed, but that should > happen within Debian community. > Package updates are coming from debian package maintainer > Nicolas, can you please open a ticket on Debian and post here the link? > > Il lun 26 mar 2018, 22:18 Nicolas Vaye ha > scritto: > >> Hello Andrei, >> >> i have had the same problem and on Debian stretch the problem is the old >> version of agent from stretch repository. >> >> I downloaded 1.0.13 from Debian testing repo as *.deb file. >> >> With these new versions of guest-agent then is also a udev rules issue. >> >> The serial channels have been renamed and the rules didn`t match for >> ovirt. >> >> See the install script, as attachement (provided by > Oliver%20Riesener%20%3cOliver.Riesener at hs-bremen.de%3e> >> Oliver.Riesener at hs-bremen.de). >> >> May be it can help you. >> >> Regards, >> >> Nicolas Vaye >> >> -------- Message initial -------- >> >> Date: Mon, 26 Mar 2018 17:29:16 +0300 >> Objet: [ovirt-users] overt-guest-agent Failure on latest Debian 9 Sretch >> ?: users at ovirt.org >> De: Andrei Verovski > Andrei%20Verovski%20%3candreil1 at starlett.lv%3e>> >> >> Hi, >> >> I just installed latest Debian 9 Sretch under oVirt 4.2 and got this >> error: >> >> # tail -n 1000 ovirt-guest-agent.log >> MainThread::INFO::2018-03-26 >> 17:09:57,400::ovirt-guest-agent::59::root::Starting oVirt guest agent >> MainThread::ERROR::2018-03-26 >> 17:09:57,402::ovirt-guest-agent::141::root::Unhandled exception in oVirt >> guest agent! >> Traceback (most recent call last): >> File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 135, in >> >> agent.run(daemon, pidfile) >> File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 65, in >> run >> self.agent = LinuxVdsAgent(config) >> File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 472, in >> __init__ >> AgentLogicBase.__init__(self, config) >> File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 188, in >> __init__ >> self.vio = VirtIoChannel(config.get("virtio", "device")) >> File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 153, in >> __init__ >> self._stream = VirtIoStream(vport_name) >> File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 134, in >> __init__ >> self._vport = os.open(vport_name, os.O_RDWR) >> OSError: [Errno 2] No such file or directory: >> '/dev/virtio-ports/com.redhat.rhevm.vdsm? 
>>
>> Followed this manual:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1472293
>> and run
>> touch /etc/udev/rules.d/55-ovirt-guest-agent.rules
>> # edit /etc/udev/rules.d/55-ovirt-guest-agent.rules
>> SYMLINK=="virtio-ports/ovirt-guest-agent.0", OWNER="ovirtagent",
>> GROUP="ovirtagent"
>> udevadm trigger --subsystem-match="virtio-ports"
>>
>> AND this
>>
>> http://lists.ovirt.org/pipermail/users/2018-January/086101.html
>> touch /etc/ovirt-guest-agent/ovirt-guest-agent.conf # > existed so I created it.
>> # edit /etc/ovirt-guest-agent.conf
>> [virtio]
>> device = /dev/virtio-ports/ovirt-guest-agent.0
>>
>> reboot
>>
>> Yet still have same problem and error message.
>> How to solve it ?
>>
>> Thanks in advance
>> Andrei
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Sven.Achtelik at eps.aero  Tue Mar 27 18:14:32 2018
From: Sven.Achtelik at eps.aero (Sven Achtelik)
Date: Tue, 27 Mar 2018 18:14:32 +0000
Subject: [ovirt-users] Recovering oVirt-Engine with a backup before upgrading to 4.2
Message-ID: <41e0a4df7d7b4b04824f154982fe953f@eps.aero>

Hi All,

I'm still facing issues with my HE engine. Here are the steps that I took to end up in this situation:

- Update Engine from 4.1.7 to 4.1.9
  o That worked as expected
- Automatic backup of Engine DB in the night
- Upgraded Engine from 4.1.9 to 4.2.1
  o That worked fine
- Noticed issues with the HA support for HE
  o Cause was not having the latest ovirt-ha agent/broker version on hosts
- After updating the first host with the latest packages for the Agent/Broker, the engine was started twice
  o As a result the Engine VM disk was corrupted and there is no backup of the disk
  o There is also no backup of the Engine DB with version 4.2
- VM disk was repaired with fsck.ext4, but the DB is corrupt
  o Can't restore the Engine DB because the backup DB is from Engine v4.1
- Rolled back all changes on the Engine VM to 4.1.9 and imported the backup
  o Checked for HA VMs to set as disabled and started the Engine
- Login is fine but the Engine is having trouble picking up any information from the hosts
  o No information on running VMs or host status
- Final situation
  o 2 hosts have VMs still running and I can't stop those
  o I still have the image of my corrupted Engine VM (v4.2)

Since there were no major changes after upgrading from 4.1 to 4.2, would it be possible to manually restore the 4.1 DB to the 4.2 Engine VM to get this up and running again, or are there modifications made to the DB on upgrading that are relevant for this? All my work on rolling back to 4.1.9 with the DB restore failed, as the Engine is not capable of picking up information from the hosts. Lesson learned: always make a copy/snapshot of the engine VM disk before upgrading anything. What are my options on getting back to a working environment? Any help or hint is greatly appreciated.

Thank you,

Sven

-------------- next part --------------
An HTML attachment was scrubbed...
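For reference, the kind of nightly engine DB backup Sven mentions is taken with engine-backup; a minimal sketch of a full pre-upgrade backup (the file and log paths are illustrative):

    # on the engine machine, before upgrading
    engine-backup --scope=all --mode=backup \
        --file=/root/engine-pre-upgrade.bck \
        --log=/root/engine-backup.log

Note that this covers the engine database and configuration, not the engine VM disk itself, which is why a separate disk-level copy/snapshot is still worth having where the setup allows it.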
URL: From nicolas.vaye at province-sud.nc Tue Mar 27 22:10:06 2018 From: nicolas.vaye at province-sud.nc (Nicolas Vaye) Date: Tue, 27 Mar 2018 22:10:06 +0000 Subject: [ovirt-users] Any monitoring tool provided? In-Reply-To: <1522128740.1710.221.camel@province-sud.nc> References: <32f428b6-dc46-de63-6072-b1fff2eb0b28@endlessnow.com> <1522128740.1710.221.camel@province-sud.nc> Message-ID: <1522188601.1710.246.camel@province-sud.nc> OK, i have removed all libvirt-* packages i have seen that in the playbook, firewalld will be disable and iptables manage the firewall. I have a problem because my metric server was behind a proxy web and when i run ANSIBLE_LOG_PATH=/tmp/ansible.log ansible-playbook -vvv -e @/root/vars.yaml -i /root/ansible-inventory-origin-37-aio playbooks/byo/config.yml i get CHECK [memory_availability : localhost] ******************************************************************************************************************************************** fatal: [localhost]: FAILED! => { "changed": false, "checks": { "disk_availability": {}, "docker_image_availability": { "failed": true, "failures": [ [ "OpenShiftCheckException", "One or more required container images are not available:\n cockpit/kubernetes:latest,\n openshift/origin-deployer:latest,\n openshift/origin-docker-registry:latest,\n openshift/origin-haproxy-router:latest,\n openshift/origin-pod:latest\nChecked with: skopeo inspect [--tls-verify=false] [--creds=:] docker:///\nDefault registries searched: docker.io\nFailed connecting to: docker.io\n" ] ], "msg": "One or more required container images are not available:\n cockpit/kubernetes:latest,\n openshift/origin-deployer:latest,\n openshift/origin-docker-registry:latest,\n openshift/origin-haproxy-router:latest,\n openshift/origin-pod:latest\nChecked with: skopeo inspect [--tls-verify=false] [--creds=:] docker:///\nDefault registries searched: docker.io\nFailed connecting to: docker.io\n" }, "docker_storage": { "skipped": true, "skipped_reason": "Disabled by user request" }, "memory_availability": {}, "package_availability": { "changed": false, "invocation": { "module_args": { "packages": [ "PyYAML", "bash-completion", "bind", "ceph-common", "cockpit-bridge", "cockpit-docker", "cockpit-system", "cockpit-ws", "dnsmasq", "docker", "etcd", "firewalld", "flannel", "glusterfs-fuse", "httpd-tools", "iptables", "iptables-services", "iscsi-initiator-utils", "libselinux-python", "nfs-utils", "ntp", "openssl", "origin", "origin-clients", "origin-master", "origin-node", "origin-sdn-ovs", "pyparted", "python-httplib2", "yum-utils" ] } } }, "package_version": { "skipped": true, "skipped_reason": "Disabled by user request" } }, "msg": "One or more checks failed", "playbook_context": "install" } NO MORE HOSTS LEFT ***************************************************************************************************************************************************************** to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/byo/config.retry PLAY RECAP ************************************************************************************************************************************************************************* localhost : ok=67 changed=4 unreachable=0 failed=1 INSTALLER STATUS ******************************************************************************************************************************************************************* Initialization : Complete Health Check : In Progress This phase can be restarted by running: 
playbooks/byo/openshift-checks/pre-install.yml Failure summary: 1. Hosts: localhost Play: OpenShift Health Checks Task: Run health checks (install) - EL Message: One or more checks failed Details: check "docker_image_availability": One or more required container images are not available: cockpit/kubernetes:latest, openshift/origin-deployer:latest, openshift/origin-docker-registry:latest, openshift/origin-haproxy-router:latest, openshift/origin-pod:latest Checked with: skopeo inspect [--tls-verify=false] [--creds=:] docker:/// Default registries searched: docker.io Failed connecting to: docker.io The execution of "playbooks/byo/config.yml" includes checks designed to fail early if the requirements of the playbook are not met. One or more of these checks failed. To disregard these results,explicitly disable checks by setting an Ansible variable: openshift_disable_check=docker_image_availability Failing check names are shown in the failure details above. Some checks may be configurable by variables if your requirements are different from the defaults; consult check documentation. Variables can be set in the inventory or passed on the command line using the -e flag to ansible-playbook. How can i set a proxy parameter ? Thanks. Nicolas VAYE -------- Message initial -------- Date: Tue, 27 Mar 2018 05:32:23 +0000 Objet: Re: [ovirt-users] Any monitoring tool provided? Cc: users at ovirt.org > ?: marceloltmm at gmail.com >, sradco at redhat.com > Reply-to: Nicolas Vaye De: Nicolas Vaye > Hi Shirly, I'm trying to install ovirt metric store with ViaQ on Origin. And on https://github.com/oVirt/ovirt-site/pull/1551/files, you mention **WARNING** DO NOT INSTALL `libvirt` on the OpenShift machine! on my VM "metric store", i requested rpm for libvirt and here are the results : [root at ometricstore .ssh]# rpm -qa | grep libvirt libvirt-daemon-driver-secret-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-core-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-logical-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-config-network-3.2.0-14.el7_4.9.x86_64 libvirt-gconfig-1.0.0-1.el7.x86_64 libvirt-daemon-driver-storage-disk-3.2.0-14.el7_4.9.x86_64 libvirt-glib-1.0.0-1.el7.x86_64 libvirt-libs-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-nwfilter-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-rbd-3.2.0-14.el7_4.9.x86_64 libvirt-gobject-1.0.0-1.el7.x86_64 libvirt-daemon-driver-interface-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-scsi-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-qemu-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-kvm-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-network-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-iscsi-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-nodedev-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-mpath-3.2.0-14.el7_4.9.x86_64 libvirt-daemon-driver-storage-3.2.0-14.el7_4.9.x86_64 should i remove all this package ? Also on the web page https://www.ovirt.org/develop/release-management/features/metrics/metrics-store-installation/#run-ovirt-metrics-store-installation-playbook, you mention /usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_hosts_for_metrics.sh --playbook=ovirt-metrics-store-installation.yml But on my hosted engine 4.2.1.7 with ovirt-engine-metrics-1.1.3.3-1.el7.centos.noarch, i notice that the script configure_ovirt_hosts_for_metrics.sh doesn't exist but there is a configure_ovirt_machines_for_metrics.sh script. 
Is it the right one?

Next, you mention:

Allow connections on the following ports/protocols:

* tcp ports 22, 80, 443, 8443 (openshift console), 9200 (Elasticsearch)

followed by

ViaQ on Origin requires these [Yum Repos](centos7-viaq.repo).

You will need to install the following packages: docker, iptables-services.

Does that mean that I must uninstall firewalld, and that the ports/protocols will be managed by iptables-services?

What is the command to allow connections on tcp ports 22, 80, 443, etc.?

Is it managed automatically by docker or openshift or some other program?

Thanks for everything.

Nicolas VAYE

-------- Original message --------

Date: Sun, 25 Mar 2018 12:01:23 +0300
Subject: Re: [ovirt-users] Any monitoring tool provided?
Cc: users <users at ovirt.org>
To: Marcelo Leandro <marceloltmm at gmail.com>
From: Shirly Radco <sradco at redhat.com>

--

SHIRLY RADCO

BI SENIOR SOFTWARE ENGINEER

Red Hat Israel

TRIED. TESTED. TRUSTED.

On Fri, Mar 23, 2018 at 10:29 PM, Marcelo Leandro <marceloltmm at gmail.com> wrote:

Hello,

I am trying this how-to:

https://www.ovirt.org/develop/release-management/features/metrics/metrics-store-installation/

but when I run this command:
/usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_machines_for_metrics.sh --playbook=ovirt-metrics-store-installation.yml

I got this error message:
ansible-playbook: error: no such option: --playbook

my version:

ovirt-engine-metrics-1.0.8-1.el7.centos.noarch

Hi,

You are using an old rpm.

Please upgrade to the latest, ovirt-engine-metrics-1.1.3.3-1.el7.centos.noarch

I also added some documentation that is still in pull request:

Add Viaq installation guide to the oVirt metrics store repo - https://github.com/oVirt/ovirt-site/pull/1551 - This one is meaningful. I introduced a lot of automation that saves time when installing.

Add prerequisites for installing OpenShift Logging - https://github.com/oVirt/ovirt-site/pull/1561

Added how to import dashboards examples to kibana - https://github.com/oVirt/ovirt-site/pull/1559

Please review them. I'll try to get them merged asap.

Can anyone help me?

2018-03-22 16:28 GMT-03:00 Christopher Cox <ccox at endlessnow.com>:

On 03/21/2018 10:41 PM, Terry hey wrote:

Dear all,

Right now, we can just see how much storage is used and the CPU usage on the oVirt dashboard. But is there any monitoring tool for monitoring virtual machines over time? If yes, could you guys give me the procedure?

A possible option, for a full OS with network connectivity, is to monitor the VM like you would any other host.

We use omd/check_mk.

Right now there isn't an oVirt specific monitor plugin for check_mk.

I know what I said is probably pretty obvious, but just in case.
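On the proxy question earlier in this thread: the openshift-ansible 3.x playbooks have global proxy variables that can be set in the inventory; a minimal sketch, with the proxy host being illustrative:

    [OSEv3:vars]
    openshift_http_proxy=http://proxy.example.com:3128
    openshift_https_proxy=http://proxy.example.com:3128
    openshift_no_proxy='.cluster.local,localhost,127.0.0.1'

Since the failing docker_image_availability check pulls from docker.io, the docker daemon on the node will likely also need the proxy (e.g. HTTP_PROXY= and HTTPS_PROXY= lines in /etc/sysconfig/docker on CentOS) before the playbook is re-run.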
_______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users From wodel.youchi at gmail.com Tue Mar 27 22:15:54 2018 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 27 Mar 2018 23:15:54 +0100 Subject: [ovirt-users] Testing oVirt 4.2 In-Reply-To: <5503cc47-bcef-8789-3707-d4b36fd7885f@starlett.lv> References: <41136397-BF14-4DCE-9762-C4FA40EFBBA8@gmail.com> <5503cc47-bcef-8789-3707-d4b36fd7885f@starlett.lv> Message-ID: Hi and thanks for your replies, I cleaned up everything and started from scratch. I using nested-kvm for my test with host-passthrough to expose vmx to the VM hypervisor, my physical CPU is a Core i5 6500 This time I had another problem, the VM engine won't start because of this error in vdsm.log 2018-03-27 22:48:31,893+0100 ERROR (vm/c9c0640e) [virt.vm] (vmId='c9c0640e-d8f1-4ade-95f3-40f2982b1d8c') The vm start process failed (vm:927) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 856, in _startUnderlyingVm self._run() File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2756, in _run dom.createWithFlags(flags) File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper ret = f(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper return func(inst, *args, **kwargs) File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1069, in createWithFlags if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self) libvirtError:* internal error: Unknown CPU model SkylakeClient* The CPU model *SkylakeClient* presented to the VM engine is not recognized. is there a way to bypass this? Regards. 2018-03-24 16:25 GMT+01:00 Andrei Verovski : > On 03/24/2018 01:40 PM, Andy Michielsen wrote: > > Hello, > > I also have done a installation on my host running KVM and I ?m pretty > sure my vm?s can only use the 192.168.122.0/24 range if you install them > with NAT networking when creating them. So that might explain why you see > that address appear in your log and also explain why the engine system > can?t be reached. > > > Can't tell fo sure about other installations, yet IMHO problem is with > networking schema. > > One need to set bridge to real ethernet interface and add it to KVM VM > definition. > > For example, my SuSE box have 2 ethernet cards, 192.168.0.aa for SMB fle > server and another bridged with IP 192.168.0.bb defined within KVM guest > (CentOS 7.4 with oVirt host engine). See configs below. > > Another SuSE box have 10 Ethernet interfaces, one for for its own needs, > and 4 + 3 for VyOS routers running as KVM guests. 
> > ****************************** > > SU47:/etc/sysconfig/network # tail -n 100 ifcfg-br0 > BOOTPROTO='static' > BRIDGE='yes' > BRIDGE_FORWARDDELAY='0' > BRIDGE_PORTS='eth0' > BRIDGE_STP='off' > BROADCAST='' > DHCLIENT_SET_DEFAULT_ROUTE='no' > ETHTOOL_OPTIONS='' > IPADDR='' > MTU='' > NETWORK='' > PREFIXLEN='24' > REMOTE_IPADDR='' > STARTMODE='auto' > NAME='' > > SU47:/etc/sysconfig/network # tail -n 100 ifcfg-eth0 > BOOTPROTO='none' > BROADCAST='' > DHCLIENT_SET_DEFAULT_ROUTE='no' > ETHTOOL_OPTIONS='' > IPADDR='' > MTU='' > NAME='82579LM Gigabit Network Connection' > NETMASK='' > NETWORK='' > REMOTE_IPADDR='' > STARTMODE='auto' > PREFIXLEN='' > > > > > Kind regards. > > On 24 Mar 2018, at 12:13, wodel youchi wrote: > > Hi, > > I am testing oVirt 4.2, I am using nested KVM for that. > I am using two hypervisors Centos 7 updated and the hosted-Engine > deployment using the ovirt appliance. > For storage I am using iscsi and NFS4 > > Versions I am using : > ovirt-engine-appliance-4.2-20180214.1.el7.centos.noarch > ovirt-hosted-engine-setup-2.2.9-1.el7.centos.noarch > kernel-3.10.0-693.21.1.el7.x86_64 > > I have a problem deploying the hosted-engine VM, when configuring the > deployment (hosted-engine --deploy), it asks for the engine's hostname then > the engine's IP address, I use static IP, in my lab I used *192.168.1.104* as > IP for the VM engine, and I choose to add the it's hostname entry to the > hypervisors's /etc/hosts > > But the deployment get stuck every time in the same place : *TASK [Wait > for the host to become non operational]* > > After some time, it gave up and the deployment fails. > > I don't know the reason for now, but I have seen this behavior in */etc/hosts > *of the hypervisor. > > In the beginning of the deployment the entry *192.168.2.104 > engine01.example.local* is added, then sometime after that it's deleted, > then a new entry is added with this IP *192.168.122.65 engine01.wodel.wd* which > has nothing to do with the network I am using. > > Here is the error I am seeing in the deployment log > > 2018-03-24 11:51:31,398+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:100 TASK [Wait > for the host to become non operational] > 2018-03-24 12:02:07,284+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:94 {u'_ansible > _parsed': True, u'_ansible_no_log': False, u'changed': False, u'attempts': > 150, u'invocation': {u'module_args': {u'pattern': > u'name=hyperv01.wodel.wd', u'fetch_nested': False, u'nested_attributes': > []}}, u'ansible_facts': {u'ovirt_hosts': []}} > 2018-03-24 12:02:07,385+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:98 fatal: [loc > alhost]: FAILED! 
=> {"ansible_facts": {"ovirt_hosts": []}, "attempts": > 150, "changed": false} > 2018-03-24 12:02:07,587+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:94 PLAY RECAP > [engine01.wodel.wd] : ok: 15 changed: 8 unreachable: 0 skipped: 4 failed: > 0 > 2018-03-24 12:02:07,688+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:94 PLAY RECAP > [localhost] : ok: 41 changed: 14 unreachable: 0 skipped: 3 failed: 1 > 2018-03-24 12:02:07,789+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils.run:180 ansible-playbook rc: 2 > 2018-03-24 12:02:07,790+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils.run:187 ansible-playbook stdou > t: > 2018-03-24 12:02:07,791+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils.run:189 to retry, use: --limi > t @/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry > > 2018-03-24 12:02:07,791+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils.run:190 ansible-playbook stder > r: > 2018-03-24 12:02:07,792+0100 DEBUG otopi.context > context._executeMethod:143 method exception > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in > _executeMethod > method['method']() > File "/usr/share/ovirt-hosted-engine-setup/scripts/../ > plugins/gr-he-ansiblesetup/core/misc.py", line 186, in _closeup > r = ah.run() > File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", > line 194, in run > raise RuntimeError(_('Failed executing ansible-playbook')) > RuntimeError: Failed executing ansible-playbook > 2018-03-24 12:02:07,795+0100 ERROR otopi.context > context._executeMethod:152 Failed to execute stage 'Closing up': Failed exec > uting ansible-playbook > > > any idea???? > > > Regards > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > _______________________________________________ > Users mailing listUsers at ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at khoza.com Wed Mar 28 00:30:42 2018 From: matt at khoza.com (Matt Simonsen) Date: Tue, 27 Mar 2018 17:30:42 -0700 Subject: [ovirt-users] oVirt Node Resize tool for local storage Message-ID: Hello, We have a development box with local storage, running ovirt Node 4.1 It appears that using the admin interface on port 9090 I can resize a live partition to a smaller size. Our storage is a seperate LVM partition, ext4 formated. My question is, both theoretically and practically, if anyone has feedback on: #1: Does this work (ie- will it shrink the filesystem then shrink the LV)? #2: May we do this with VMs running? Thanks Matt From sradco at redhat.com Wed Mar 28 05:19:56 2018 From: sradco at redhat.com (Shirly Radco) Date: Wed, 28 Mar 2018 08:19:56 +0300 Subject: [ovirt-users] Any monitoring tool provided? In-Reply-To: <1522188601.1710.246.camel@province-sud.nc> References: <32f428b6-dc46-de63-6072-b1fff2eb0b28@endlessnow.com> <1522128740.1710.221.camel@province-sud.nc> <1522188601.1710.246.camel@province-sud.nc> Message-ID: Adding Rich Megginson. -- SHIRLY RADCO BI SeNIOR SOFTWARE ENGINEER Red Hat Israel TRIED. TESTED. TRUSTED. 
On Wed, Mar 28, 2018 at 1:10 AM, Nicolas Vaye wrote: > OK, i have removed all libvirt-* packages > > i have seen that in the playbook, firewalld will be disable and iptables > manage the firewall. > > I have a problem because my metric server was behind a proxy web and when > i run > > ANSIBLE_LOG_PATH=/tmp/ansible.log ansible-playbook -vvv -e > @/root/vars.yaml -i /root/ansible-inventory-origin-37-aio > playbooks/byo/config.yml > > i get > > > CHECK [memory_availability : localhost] ****************************** > ************************************************************ > ************************************************** > > fatal: [localhost]: FAILED! => { > > "changed": false, > > "checks": { > > "disk_availability": {}, > > "docker_image_availability": { > > "failed": true, > > "failures": [ > > [ > > "OpenShiftCheckException", > > "One or more required container images are not > available:\n cockpit/kubernetes:latest,\n openshift/origin-deployer:latest,\n > openshift/origin-docker-registry:latest,\n openshift/origin-haproxy-router:latest,\n > openshift/origin-pod:latest\nChecked with: skopeo inspect > [--tls-verify=false] [--creds=:] docker:///\nDefault > registries searched: docker.io\nFailed connecting to: docker.io\n" > > ] > > ], > > "msg": "One or more required container images are not > available:\n cockpit/kubernetes:latest,\n openshift/origin-deployer:latest,\n > openshift/origin-docker-registry:latest,\n openshift/origin-haproxy-router:latest,\n > openshift/origin-pod:latest\nChecked with: skopeo inspect > [--tls-verify=false] [--creds=:] docker:///\nDefault > registries searched: docker.io\nFailed connecting to: docker.io\n" > > }, > > "docker_storage": { > > "skipped": true, > > "skipped_reason": "Disabled by user request" > > }, > > "memory_availability": {}, > > "package_availability": { > > "changed": false, > > "invocation": { > > "module_args": { > > "packages": [ > > "PyYAML", > > "bash-completion", > > "bind", > > "ceph-common", > > "cockpit-bridge", > > "cockpit-docker", > > "cockpit-system", > > "cockpit-ws", > > "dnsmasq", > > "docker", > > "etcd", > > "firewalld", > > "flannel", > > "glusterfs-fuse", > > "httpd-tools", > > "iptables", > > "iptables-services", > > "iscsi-initiator-utils", > > "libselinux-python", > > "nfs-utils", > > "ntp", > > "openssl", > > "origin", > > "origin-clients", > > "origin-master", > > "origin-node", > > "origin-sdn-ovs", > > "pyparted", > > "python-httplib2", > > "yum-utils" > > ] > > } > > } > > }, > > "package_version": { > > "skipped": true, > > "skipped_reason": "Disabled by user request" > > } > > }, > > "msg": "One or more checks failed", > > "playbook_context": "install" > > } > > > NO MORE HOSTS LEFT ****************************** > ************************************************************ > *********************************************************************** > > to retry, use: --limit @/usr/share/ansible/openshift- > ansible/playbooks/byo/config.retry > > > PLAY RECAP ************************************************************ > ************************************************************ > ************************************************* > > localhost : ok=67 changed=4 unreachable=0 failed=1 > > > > INSTALLER STATUS ****************************** > ************************************************************ > ************************************************************************* > > Initialization : Complete > > Health Check : In Progress > > This phase can be restarted by running: playbooks/byo/openshift- 
> checks/pre-install.yml > > > > > Failure summary: > > > > 1. Hosts: localhost > > Play: OpenShift Health Checks > > Task: Run health checks (install) - EL > > Message: One or more checks failed > > Details: check "docker_image_availability": > > One or more required container images are not available: > > cockpit/kubernetes:latest, > > openshift/origin-deployer:latest, > > openshift/origin-docker-registry:latest, > > openshift/origin-haproxy-router:latest, > > openshift/origin-pod:latest > > Checked with: skopeo inspect [--tls-verify=false] > [--creds=:] docker:/// > > Default registries searched: docker.io > > Failed connecting to: docker.io > > > > > The execution of "playbooks/byo/config.yml" includes checks designed to > fail early if the requirements of the playbook are not met. One or more of > these checks failed. To disregard these results,explicitly disable checks > by setting an Ansible variable: > > openshift_disable_check=docker_image_availability > > Failing check names are shown in the failure details above. Some checks > may be configurable by variables if your requirements are different from > the defaults; consult check documentation. > > Variables can be set in the inventory or passed on the command line using > the -e flag to ansible-playbook. > > > How can i set a proxy parameter ? > > > Thanks. > > Nicolas VAYE > > > -------- Message initial -------- > > Date: Tue, 27 Mar 2018 05:32:23 +0000 > Objet: Re: [ovirt-users] Any monitoring tool provided? > Cc: users at ovirt.org 22%20%3cusers at ovirt.org%3e>> > ?: marceloltmm at gmail.com 22marceloltmm at gmail.com%22%20%3cmarceloltmm at gmail.com%3e>>, > sradco at redhat.com 3csradco at redhat.com%3e>> > Reply-to: Nicolas Vaye > De: Nicolas Vaye 3cnicolas.vaye at province-sud.nc%3e>> > > Hi Shirly, > > I'm trying to install ovirt metric store with ViaQ on Origin. > > And on https://github.com/oVirt/ovirt-site/pull/1551/files, you mention > **WARNING** DO NOT INSTALL `libvirt` on the OpenShift machine! > > > on my VM "metric store", i requested rpm for libvirt and here are the > results : > > [root at ometricstore .ssh]# rpm -qa | grep libvirt > libvirt-daemon-driver-secret-3.2.0-14.el7_4.9.x86_64 > libvirt-daemon-driver-storage-core-3.2.0-14.el7_4.9.x86_64 > libvirt-daemon-driver-storage-logical-3.2.0-14.el7_4.9.x86_64 > libvirt-daemon-config-network-3.2.0-14.el7_4.9.x86_64 > libvirt-gconfig-1.0.0-1.el7.x86_64 > libvirt-daemon-driver-storage-disk-3.2.0-14.el7_4.9.x86_64 > libvirt-glib-1.0.0-1.el7.x86_64 > libvirt-libs-3.2.0-14.el7_4.9.x86_64 > libvirt-daemon-driver-nwfilter-3.2.0-14.el7_4.9.x86_64 > libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.9.x86_64 > libvirt-daemon-driver-storage-rbd-3.2.0-14.el7_4.9.x86_64 > libvirt-gobject-1.0.0-1.el7.x86_64 > libvirt-daemon-driver-interface-3.2.0-14.el7_4.9.x86_64 > libvirt-daemon-driver-storage-scsi-3.2.0-14.el7_4.9.x86_64 > libvirt-daemon-driver-qemu-3.2.0-14.el7_4.9.x86_64 > libvirt-daemon-kvm-3.2.0-14.el7_4.9.x86_64 > libvirt-daemon-3.2.0-14.el7_4.9.x86_64 > libvirt-daemon-driver-network-3.2.0-14.el7_4.9.x86_64 > libvirt-daemon-driver-storage-iscsi-3.2.0-14.el7_4.9.x86_64 > libvirt-daemon-driver-nodedev-3.2.0-14.el7_4.9.x86_64 > libvirt-daemon-driver-storage-mpath-3.2.0-14.el7_4.9.x86_64 > libvirt-daemon-driver-storage-3.2.0-14.el7_4.9.x86_64 > > should i remove all this package ? 
> > > Also on the web page https://www.ovirt.org/develop/ > release-management/features/metrics/metrics-store-installation/#run-ovirt- > metrics-store-installation-playbook, you mention > > /usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_hosts_for_metrics.sh > --playbook=ovirt-metrics-store-installation.yml > > > But on my hosted engine 4.2.1.7 with ovirt-engine-metrics-1.1.3.3-1.el7.centos.noarch, > i notice that the script > > configure_ovirt_hosts_for_metrics.sh doesn't exist but there is a > configure_ovirt_machines_for_metrics.sh script. > > Is it the good one ? > > > > Next, you mention : > Allow connections on the following ports/protocols: > > > + * tcp ports 22, 80, 443, 8443 (openshift console), 9200 (Elasticsearch) > > > following by > > ViaQ on Origin requires these [Yum Repos](centos7-viaq.repo). > > > > > > > > > > > +You will need to install the following packages: docker, > iptables-services. > > > That means that i must uninstall firewalld ? And the ports/protocols will > be managed by iptables-services ? > > What is the command to allow connections on tcp ports 22,80, 443 etc.... ? > > Is it managed automatically with docker or openshift or other program ? > > > > Thanks for all. > > > Nicolas VAYE > > > -------- Message initial -------- > > Date: Sun, 25 Mar 2018 12:01:23 +0300 > Objet: Re: [ovirt-users] Any monitoring tool provided? > Cc: users 3cusers at ovirt.org%3e>> > ?: Marcelo Leandro >> > De: Shirly Radco Shirly%20Radco%20%3csradco at redhat.com%3e>> > > > > -- > > SHIRLY RADCO > > BI SeNIOR SOFTWARE ENGINEER > > Red Hat Israel > > [https://www.redhat.com/files/brand/email/sig-redhat.png] tps://red.ht/sig> > TRIED. TESTED. TRUSTED. > > > On Fri, Mar 23, 2018 at 10:29 PM, Marcelo Leandro > wrote: > Hello, > > I am try this how to: > > https://www.ovirt.org/develop/release-management/features/ > metrics/metrics-store-installation/ > > but when i run this command: > /usr/share/ovirt-engine-metrics/setup/ansible/ > configure_ovirt_machines_for_metrics.sh --playbook=ovirt-metrics- > store-installation.yml > > I am had this error mensagem: > ansible-playbook: error: no such option: --playbook > > my version: > > ovirt-engine-metrics-1.0.8-1.el7.centos.noarch > > > Hi, > > You are using an old rpm. > > Please upgrade to latest, ovirt-engine-metrics-1.1.3.3-1.el7.centos.noarch > > I also added some documentation that is still in pull request: > > Add Viaq installation guide to the oVirt metrics store repo - > https://github.com/oVirt/ovirt-site/pull/1551 - This one is meaningful. I > introduced a lot of automation that save time when installing. > > Add prerequisites for installing OpenShift Logging - > https://github.com/oVirt/ovirt-site/pull/1561 > > Added how to import dashboards examples to kibana - > https://github.com/oVirt/ovirt-site/pull/1559 > > Please review them.I'll try to get them merged asap. > > > Anyone can help me? > > > 2018-03-22 16:28 GMT-03:00 Christopher Cox ox at endlessnow.com>>: > On 03/21/2018 10:41 PM, Terry hey wrote: > Dear all, > > Now, we can just read how many storage used, cpu usage on ovirt dashboard. > But is there any monitoring tool for monitoring virtual machine time to > time? > If yes, could you guys give me the procedure? > > > A possible option, for a full OS with network connectivity, is to monitor > the VM like you would any other host. > > We use omd/check_mk. > > Right now there isn't an oVirt specific monitor plugin for check_mk. > > I know what I said is probably pretty obvious, but just in case. 
> > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From didi at redhat.com Wed Mar 28 06:07:57 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Wed, 28 Mar 2018 09:07:57 +0300 Subject: [ovirt-users] Ping::(action) Failed to ping x.x.x.x, (4 out of 5) In-Reply-To: <1522133104.2919.8.camel@linuxfabrik.ch> References: <1522133104.2919.8.camel@linuxfabrik.ch> Message-ID: On Tue, Mar 27, 2018 at 9:45 AM, info at linuxfabrik.ch wrote: > Hi all, > > we randomly and constantly have this message in our /var/log/ovirt- > hosted-engine-ha/broker.log: > > /var/log/ovirt-hosted-engine-ha/broker.log:Thread-1::WARNING::2018-03- > 27 08:17:25,891::ping::63::ping.Ping::(action) Failed to ping x.x.x.x, > (4 out of 5) > > The pinged device is a switch (not a gateway). We know that a switch > might drop icmp packets if it needs to. The interesting thing about > that is if it fails it fails always at "4 out of 5", but in the end (5 > of 5) it always succeeds. > > Is there a way to increase the amount of pings or to have another way > instead of ping? Now looked at the source, and I do not think there is a way. It might be useful to add one, though - and might be not too hard - perhaps something like this (didn't test): https://gerrit.ovirt.org/89528 That said, I think it's about time we change the text(s) to imply that we use this gateway to test network connectivity with ping, so do not really need to be a gateway, but do need to reliably reply to pings, and if we do have places where we use it as a gateway, we should simply ask two questions, or at least allow overriding in the config file. Best regards, -- Didi From pbrilla at redhat.com Wed Mar 28 06:37:35 2018 From: pbrilla at redhat.com (Pavol Brilla) Date: Wed, 28 Mar 2018 08:37:35 +0200 Subject: [ovirt-users] oVirt Node Resize tool for local storage In-Reply-To: References: Message-ID: Hi AFAIK ext4 is not supporting online shrinking of filesystem, to shrink storage you would need to unmount filesystem, thus it is not possible to do with VM online. On Wed, Mar 28, 2018 at 2:30 AM, Matt Simonsen wrote: > Hello, > > We have a development box with local storage, running ovirt Node 4.1 > > It appears that using the admin interface on port 9090 I can resize a live > partition to a smaller size. > > Our storage is a seperate LVM partition, ext4 formated. > > My question is, both theoretically and practically, if anyone has feedback > on: > > > #1: Does this work (ie- will it shrink the filesystem then shrink the LV)? > > #2: May we do this with VMs running? > > > Thanks > > Matt > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- PAVOL BRILLA RHV QUALITY ENGINEER, CLOUD Red Hat Czech Republic, Brno TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From andy.michielsen at gmail.com  Wed Mar 28 06:50:05 2018
From: andy.michielsen at gmail.com (Andy Michielsen)
Date: Wed, 28 Mar 2018 08:50:05 +0200
Subject: [ovirt-users] Which hardware are you using for oVirt
In-Reply-To:
References: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com>
Message-ID:

Hello Christopher,

Thank you very much for sharing.

It started out just for fun, but now people at work are looking to me to provide an environment to do testing, simulate problems they have encountered, etc. And more and more of them see the benefits of this.

At work we are running vmware, but that was far too expensive to use for these tests. But I suspected as much in the beginning, and I knew I had to be able to expand, so whenever an old server was decommissioned from production I converted it to a node. I now have 4 in use and demands keep growing.

So now I want to ask my boss to invest in new hardware, as people are now asking me why I do not have proper backups and even why they cannot use the VMs when I perform administrative tasks or upgrades. So that's why I'm very interested in what others are using.

Kind regards.

> On 26 Mar 2018, at 18:03, Christopher Cox wrote:
>
>> On 03/24/2018 03:33 AM, Andy Michielsen wrote:
>> Hi all,
>> Not sure if this is the place to be asking this, but I was wondering which hardware you all are using and why, in order for me to see what I would be needing.
>> I would like to set up a HA cluster consisting of 3 hosts to be able to run 30 VMs.
>> The engine I can run on another server. The hosts can be fitted with the storage and share the space through glusterfs. I would think I will be needing at least 3 NICs but would be able to install OVN. (Are 1Gb NICs sufficient?)
>
> Just because you asked, but not because this is helpful to you....
>
> But first, a comment on "3 hosts to be able to run 30 VMs". The SPM node shouldn't run a lot of VMs. There are settings (the setting slips my mind) on the engine to give it a "virtual set" of VMs in order to keep VMs off of it.
>
> With that said, CPU wise, it doesn't require a lot to run 30 VMs. The costly thing is memory (in general). So while a cheap set of 3 machines might handle the CPU requirements of 30 VMs, those cheap machines might not be able to give you the memory you need (depends). You might be fine. I mean, there are cheap desktop-like machines that do 64G (and sometimes more). Just something to keep in mind. Memory and storage will be the most costly items. It's simple math. Linux hosts, of course, don't necessarily need much memory (or storage). But Windows...
>
> 1Gbit NICs are "ok", but again, depends on storage. Glusterfs is no speed demon. But you might not need "fast" storage.
>
> Lastly, your setup is just for "fun", right? Otherwise, read on.
>
>
> Running oVirt 3.6 (this is a production setup)
>
> ovirt engine (manager):
> Dell PowerEdge 430, 32G
>
> ovirt cluster nodes:
> Dell m1000e 1.1 backplane Blade Enclosure
> 9 x M630 Blades (2xE5-2669v3, 384GB), 4 iSCSI paths, 4 bonded LAN, all 10GbE, CentOS 7.2
> 4 x MXL 10/40GbE (2x40Gbit LAN, 2x40Gbit iSCSI SAN to the S4810's)
>
> 120 VMs, CentOS 6, CentOS 7, Windows 10 Ent., Windows Server 2012
> We've run on as few as 3 nodes.
> > Network, SAN and Storage (for ovirt Domains): > 2 x S4810 (part is used for SAN, part for LAN) > Equallogic dual controller (note: passive/active) PS6610S (84 x 4TB 7.2K SAS) > Equallogic dual controller (note: passive/active) PS6610X (84 x 1TB 10K SAS > > ISO and Export Domains are handled by: > Dell PE R620, 32G, 2x10Gbit LAN, 2x10Gbit iSCSI to the SAN (above), CentOS 7.4, NFS > > What I like: > * Easy setup. > * Relatively good network and storage. > > What I don't like: > * 2 "effective" networks, LAN and iSCSI. All networking uses the same effective path. Would be nice to have more physical isolation for mgmt vs motion vs VMs. QoS is provided in oVirt, but still, would be nice to have the full pathways. > * Storage doesn't use active/active controllers, so controller failover is VERY slow. > * We have a fast storage system, and somewhat slower storage system (matter of IOPS), neither is SSD, so there isn't a huge difference. No real redundancy or flexibility. > * vdsm can no longer respond fast enough for the amount of disks defined (in the event of a new Storage Domain add). We have raised vdsTimeout, but have not tested yet. > > I inherited the "style" above. My recommendation of where to start for a reasonable production instance, minimum (assumes the S4810's above, not priced here): > > 1 x ovirt manager/engine, approx $1500 > 4 x Dell R620, 2xE5-2660, 768G, 6x10GbE (LAN, Storage, Motion), approx $42K > 3 x Nexsan 18P 108TB, approx $96K > > While significantly cheaper (by 6 figures), it provides active/active controllers, storage reliability and flexibility and better network pathways. Why 4 x nodes? Need at least N+1 for reliability. The extra 4th node is merely capacity. Why 3 x storage? Need at least N+1 for reliability. > > Obviously, you'll still want to back things up and test the ability to restore components like the ovirt engine from scratch. > > Btw, my recommended minimum above is regardless of hypervisor cluster choice (could be VMware). > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From didi at redhat.com Wed Mar 28 07:06:39 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Wed, 28 Mar 2018 10:06:39 +0300 Subject: [ovirt-users] ovirt snapshot issue In-Reply-To: References: Message-ID: On Tue, Mar 27, 2018 at 3:38 PM, Sandro Bonazzola wrote: > > > 2018-03-27 14:34 GMT+02:00 Alex K : > >> Hi All, >> >> Any idea on the below? >> >> I am using oVirt Guest Tools 4.2-1.el7.centos for the VM. >> The Window 2016 server VM (which it the one with the relatively big >> disks: 500 GB) it is consistently rendered unresponsive when trying to get >> a snapshot. >> I amy provide any other additional logs if needed. >> > > Adding some people to the thread > Adding more people for this part. > > > >> >> Alex >> >> On Sun, Mar 25, 2018 at 7:30 PM, Alex K wrote: >> >>> Hi folks, >>> >>> I am facing frequently the following issue: >>> >>> On some large VMs (Windows 2016 with two disk drives, 60GB and 500GB) >>> when attempting to create a snapshot of the VM, the VM becomes >>> unresponsive. 
>>> >>> The errors that I managed to collect were: >>> >>> vdsm error at host hosting the VM: >>> 2018-03-25 14:40:13,442+0000 WARN (vdsm.Scheduler) [Executor] Worker >>> blocked: >> {u'frozen': False, u'vmID': u'a5c761a2-41cd-40c2-b65f-f3819293e8a4', >>> u'snapDrives': [{u'baseVolumeID': u'2a33e585-ece8-4f4d-b45d-5ecc9239200e', >>> u'domainID': u'888e3aae-f49f-42f7-a7fa-76700befabea', u'volumeID': >>> u'e9a01ebd-83dd-40c3-8c83-5302b0d15e04', u'imageID': >>> u'c75b8e93-3067-4472-bf24-dafada224e4d'}, {u'baseVolumeID': >>> u'3fb2278c-1b0d-4677-a529-99084e4b08af', u'domainID': >>> u'888e3aae-f49f-42f7-a7fa-76700befabea', u'volumeID': >>> u'78e6b6b1-2406-4393-8d92-831a6d4f1337', u'imageID': >>> u'd4223744-bf5d-427b-bec2-f14b9bc2ef81'}]}, 'jsonrpc': '2.0', 'method': >>> u'VM.snapshot', 'id': u'89555c87-9701-4260-9952-789965261e65'} at >>> 0x7fca4004cc90> timeout=60, duration=60 at 0x39d8210> task#=155842 at >>> 0x2240e10> (executor:351) >>> 2018-03-25 14:40:15,261+0000 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] >>> RPC call VM.getStats failed (error 1) in 0.01 seconds (__init__:539) >>> 2018-03-25 14:40:17,471+0000 WARN (jsonrpc/5) [virt.vm] >>> (vmId='a5c761a2-41cd-40c2-b65f-f3819293e8a4') monitor became >>> unresponsive (command timeout, age=67.9100000001) (vm:5132) >>> >>> engine.log: >>> 2018-03-25 14:40:19,875Z WARN [org.ovirt.engine.core.dal.dbb >>> roker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler2) >>> [1d737df7] EVENT_ID: VM_NOT_RESPONDING(126), Correlation ID: null, Call >>> Stack: null, Custom ID: null, Custom Event ID: -1, Message: VM Data-Server >>> is not responding. >>> >>> 2018-03-25 14:42:13,708Z ERROR [org.ovirt.engine.core.dal.dbb >>> roker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) >>> [17789048-009a-454b-b8ad-2c72c7cd37aa] EVENT_ID: >>> VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: >>> null, Custom ID: null, Custom Event ID: -1, Message: VDSM v1.cluster >>> command SnapshotVDS failed: Message timeout which can be caused by >>> communication issues >>> 2018-03-25 14:42:13,708Z ERROR [org.ovirt.engine.core.vdsbrok >>> er.vdsbroker.SnapshotVDSCommand] (DefaultQuartzScheduler5) >>> [17789048-009a-454b-b8ad-2c72c7cd37aa] Command >>> 'SnapshotVDSCommand(HostName = v1.cluster, SnapshotVDSCommandParameters:{runAsync='true', >>> hostId='a713d988-ee03-4ff0-a0cd-dc4cde1507f4', >>> vmId='a5c761a2-41cd-40c2-b65f-f3819293e8a4'})' execution failed: >>> VDSGenericException: VDSNetworkException: Message timeout which can be >>> caused by communication issues >>> 2018-03-25 14:42:13,708Z WARN [org.ovirt.engine.core.bll.sna >>> pshots.CreateAllSnapshotsFromVmCommand] (DefaultQuartzScheduler5) >>> [17789048-009a-454b-b8ad-2c72c7cd37aa] Could not perform live snapshot >>> due to error, VM will still be configured to the new created snapshot: >>> EngineException: org.ovirt.engine.core.vdsbroke >>> r.vdsbroker.VDSNetworkException: VDSGenericException: >>> VDSNetworkException: Message timeout which can be caused by communication >>> issues (Failed with error VDS_NETWORK_ERROR and code 5022) >>> 2018-03-25 14:42:13,708Z WARN [org.ovirt.engine.core.vdsbroker.VdsManager] >>> (org.ovirt.thread.pool-6-thread-15) [17789048-009a-454b-b8ad-2c72c7cd37aa] >>> Host 'v1.cluster' is not responding. It will stay in Connecting state for a >>> grace period of 61 seconds and after that an attempt to fence the host will >>> be issued. 
>>> 2018-03-25 14:42:13,725Z WARN [org.ovirt.engine.core.dal.dbb >>> roker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-15) >>> [17789048-009a-454b-b8ad-2c72c7cd37aa] EVENT_ID: >>> VDS_HOST_NOT_RESPONDING_CONNECTING(9,008), Correlation ID: null, Call >>> Stack: null, Custom ID: null, Custom Event ID: -1, Message: Host v1.cluster >>> is not responding. It will stay in Connecting state for a grace period of >>> 61 seconds and after that an attempt to fence the host will be issued. >>> 2018-03-25 14:42:13,751Z WARN [org.ovirt.engine.core.dal.dbb >>> roker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) >>> [17789048-009a-454b-b8ad-2c72c7cd37aa] EVENT_ID: >>> USER_CREATE_LIVE_SNAPSHOT_FINISHED_FAILURE(170), Correlation ID: >>> 17789048-009a-454b-b8ad-2c72c7cd37aa, Job ID: >>> 16e48c28-a8c7-4841-bd81-1f2d370f345d, Call Stack: >>> org.ovirt.engine.core.common.errors.EngineException: EngineException: >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>> VDSGenericException: VDSNetworkException: Message timeout which can be >>> caused by communication issues (Failed with error VDS_NETWORK_ERROR and >>> code 5022) >>> 2018-03-25 14:42:14,372Z ERROR [org.ovirt.engine.core.dal.dbb >>> roker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) [] >>> EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_FAILURE(69), Correlation ID: >>> 17789048-009a-454b-b8ad-2c72c7cd37aa, Job ID: >>> 16e48c28-a8c7-4841-bd81-1f2d370f345d, Call Stack: >>> org.ovirt.engine.core.common.errors.EngineException: EngineException: >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>> VDSGenericException: VDSNetworkException: Message timeout which can be >>> caused by communication issues (Failed with error VDS_NETWORK_ERROR and >>> code 5022) >>> 2018-03-25 14:42:14,372Z WARN [org.ovirt.engine.core.bll.Con >>> currentChildCommandsExecutionCallback] (DefaultQuartzScheduler5) [] >>> Command 'CreateAllSnapshotsFromVm' id: 'bad4f5be-5306-413f-a86a-513b3cfd3c66' >>> end method execution failed, as the command isn't marked for endAction() >>> retries silently ignoring >>> 2018-03-25 14:42:15,951Z WARN [org.ovirt.engine.core.dal.dbb >>> roker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) >>> [5017c163] EVENT_ID: VDS_NO_SELINUX_ENFORCEMENT(25), Correlation ID: >>> null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Host >>> v1.cluster does not enforce SELinux. Current status: DISABLED >>> 2018-03-25 14:42:15,951Z WARN [org.ovirt.engine.core.vdsbroker.VdsManager] >>> (DefaultQuartzScheduler5) [5017c163] Host 'v1.cluster' is running with >>> SELinux in 'DISABLED' mode >>> >>> As soon as the VM is unresponsive, the VM console that was already open >>> freezes. I can resume the VM only by powering off and on. >>> >>> I am using ovirt 4.1.9 with 3 nodes and self-hosted engine. I am running >>> mostly Windows 10 and Windows 2016 server VMs. I have installed latest >>> guest agents from: >>> >>> http://resources.ovirt.org/pub/ovirt-4.2/iso/oVirt-toolsSetu >>> p/4.2-1.el7.centos/ >>> >>> At the screen where one takes a snapshot I get a warning saying "Could >>> not detect guest agent on the VM. Note that without guest agent the data on >>> the created snapshot may be inconsistent". See attached. I have verified >>> that ovirt guest tools are installed and shown at installed apps at engine >>> GUI. Also Ovirt Guest Agent (32 bit) and qemu-ga are listed as running at >>> the windows tasks manager. 
>>> Shouldn't ovirt guest agent be 64 bit on Windows 64 bit?
>>
No idea, but I do not think it's related to your problem of freezing while
taking a snapshot.

This error was already discussed in the past, see e.g.:

http://lists.ovirt.org/pipermail/users/2017-June/082577.html

Best regards,
--
Didi
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From didi at redhat.com  Wed Mar 28 07:13:59 2018
From: didi at redhat.com (Yedidyah Bar David)
Date: Wed, 28 Mar 2018 10:13:59 +0300
Subject: [ovirt-users] Snapshot of the Self-Hosted Engine
In-Reply-To: References: Message-ID:

On Tue, Mar 27, 2018 at 4:23 PM, FERNANDO FREDIANI wrote:
> Hello
>
> Is it possible to snapshot the Self-Hosted Engine before an Upgrade? If so,
> how?

I do not think so - I do not think anything changed since this:

http://lists.ovirt.org/pipermail/users/2016-November/044103.html

I agree it sounds like a useful thing to have. Not sure how hard it
can be to implement it. Feel free to open an RFE bz.

Basically, we'll have to:

1. Make sure everything continues to work sensibly - engine/vdsm do
the right things, ha agent works as expected, etc.

2. Provide means to start the vm from a snapshot, and/or revert to a
snapshot. This is going to be quite ugly, because it will have to
duplicate in ovirt-hosted-engine-setup/-ha functionality that already
exists in the engine, because at that point the engine is not
available to assist with this.

Best regards,
--
Didi

From didi at redhat.com  Wed Mar 28 08:06:05 2018
From: didi at redhat.com (Yedidyah Bar David)
Date: Wed, 28 Mar 2018 11:06:05 +0300
Subject: [ovirt-users] Recovering oVirt-Engine with a backup before upgrading to 4.2
In-Reply-To: <41e0a4df7d7b4b04824f154982fe953f@eps.aero>
References: <41e0a4df7d7b4b04824f154982fe953f@eps.aero>
Message-ID:

On Tue, Mar 27, 2018 at 9:14 PM, Sven Achtelik wrote:
> Hi All,
>
> I'm still facing issues with my HE engine. Here are the steps that I took to
> end up in this situation:
>
> - Update Engine from 4.1.7 to 4.1.9
> o That worked as expected
> - Automatic Backup of Engine DB in the night
> - Upgraded Engine from 4.1.9 to 4.2.1
> o That worked fine
> - Noticed Issues with the HA support for HE
> o Cause was not having the latest ovirt-ha agent/broker version on hosts
> - After updating the first host with the latest packages for the
> Agent/Broker, the engine was started twice
> o As a result the Engine VM Disk was corrupted and there is no Backup of
> the Disk
> o There is also no Backup of the Engine DB with version 4.2
> - VM disk was repaired with fsck.ext4, but DB is corrupt
> o Can't restore the Engine DB because the Backup DB is from Engine V 4.1
> - Rolled back all changes on Engine VM to 4.1.9 and imported Backup
> o Checked for HA VMs to set as disabled and started the Engine
> - Login is fine but the Engine is having trouble picking up any
> information from the Hosts
> o No information on running VMs or hosts status
> - Final Situation
> o 2 Hosts have VMs still running and I can't stop those
> o I still have the image of my corrupted Engine VM (v4.2)
>
> Since there were no major changes after upgrading from 4.1 to 4.2, would it
> be possible to manually restore the 4.1 DB to the 4.2 Engine VM to get this
> up and running again, or are there modifications made to the DB on upgrading
> that are relevant for this?

engine-backup requires restoring to the same version used to take the
backup, with a single exception - on 4.0, it can restore 3.6.
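For reference, a backup of this kind is typically taken with something
roughly like the following; file names are placeholders, and the exact
flags are worth double-checking against engine-backup --help for your
version:

# full backup of the engine database plus configuration files
engine-backup --mode=backup --scope=all \
    --file=/root/engine-backup-$(date +%F).tar.bz2 \
    --log=/root/engine-backup.log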
It's very easy to patch it to allow also 4.1->4.2, search inside it for
"VALID_BACKUP_RESTORE_PAIRS". However, I do not think anyone ever
tested this, so no idea what might break. In 3.6->4.0 days, we did have
to fix a few other things, notably apache httpd and iptables->firewalld:

https://bugzilla.redhat.com/show_bug.cgi?id=1318580

> All my work on rolling back to 4.1.9 with the
> DB restore failed as the Engine is not capable of picking up information
> from the hosts.

No idea why, but not sure it's related to your restore flow.

> Lessons learned is to always make a copy/snapshot of the
> engine VM disk before upgrading anything.

If it's a hosted-engine, this isn't supported - see my reply on the list ~ 1
hour ago...

> What are my options on getting
> back to a working environment? Any help or hint is greatly appreciated.

Restore again with either method - what you tried, or patching
engine-backup and restoring directly into 4.2 - and if the engine fails to
talk to the hosts, try to debug/fix this.

If you suspect corruption more severe than just the db, you can install a
fresh engine machine from scratch and restore to it. If it's a
hosted-engine, you'll need to deploy hosted-engine from scratch; check the
docs about hosted-engine backup/restore.

Best regards,
--
Didi

From Sven.Achtelik at eps.aero  Wed Mar 28 08:50:18 2018
From: Sven.Achtelik at eps.aero (Sven Achtelik)
Date: Wed, 28 Mar 2018 08:50:18 +0000
Subject: [ovirt-users] Recovering oVirt-Engine with a backup before upgrading to 4.2
In-Reply-To: References: <41e0a4df7d7b4b04824f154982fe953f@eps.aero>
Message-ID:

> -----Original Message-----
> From: Yedidyah Bar David [mailto:didi at redhat.com]
> Sent: Wednesday, 28 March 2018 10:06
> To: Sven Achtelik
> Cc: users at ovirt.org
> Subject: Re: [ovirt-users] Recovering oVirt-Engine with a backup before
> upgrading to 4.2
>
> On Tue, Mar 27, 2018 at 9:14 PM, Sven Achtelik wrote:
> > Hi All,
> >
> > I'm still facing issues with my HE engine.
> > Here are the steps I took to end up in this situation: [...]
>
> Restore again with either method - what you tried, or patching
> engine-backup and restoring directly into 4.2 - and if the engine fails to
> talk to the hosts, try to debug/fix this.
>
> If you suspect corruption more severe than just the db, you can install a
> fresh engine machine from scratch and restore to it. If it's a
> hosted-engine, you'll need to deploy hosted-engine from scratch, check docs
> about hosted-engine backup/restore.

Will the setup of hosted engine from scratch require a new storage domain
for the new engine, or can I use the one that is already there? What about
the VMs running on my hosts, will they be affected by that? It might be
best to start with a fresh VM.
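For the restore side on a freshly installed engine machine, a rough sketch;
the backup file name is a placeholder, and the flags below should be
double-checked against engine-backup --help for the exact version in use:

# on a clean machine with the ovirt-engine packages installed, restore
# the db and config from the backup, provisioning a new local database
engine-backup --mode=restore --scope=all \
    --file=/root/engine-backup-2018-03-27.tar.bz2 \
    --log=/root/engine-restore.log \
    --provision-db --restore-permissions
# then run engine-setup to finish configuring the restored engine
engine-setup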
> > Best regards,
> --
> Didi

From stirabos at redhat.com  Wed Mar 28 11:42:28 2018
From: stirabos at redhat.com (Simone Tiraboschi)
Date: Wed, 28 Mar 2018 13:42:28 +0200
Subject: [ovirt-users] Snapshot of the Self-Hosted Engine
In-Reply-To: References: Message-ID:

On Wed, Mar 28, 2018 at 9:13 AM, Yedidyah Bar David wrote:
> On Tue, Mar 27, 2018 at 4:23 PM, FERNANDO FREDIANI
> wrote:
> > Hello
> >
> > Is it possible to snapshot the Self-Hosted Engine before an Upgrade? If
> > so, how?
>
> I do not think so - I do not think anything changed since this:
>
> http://lists.ovirt.org/pipermail/users/2016-November/044103.html
>
> I agree it sounds like a useful thing to have. Not sure how hard it
> can be to implement it. Feel free to open an RFE bz.

We are using a disk lease to prevent split brains; AFAIK snapshots are not
compatible with disk leases.

> [...]
>
> Best regards,
> --
> Didi
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fernando.frediani at upx.com  Wed Mar 28 13:41:01 2018
From: fernando.frediani at upx.com (FERNANDO FREDIANI)
Date: Wed, 28 Mar 2018 10:41:01 -0300
Subject: [ovirt-users] Snapshot of the Self-Hosted Engine
In-Reply-To: References: Message-ID:

Hello Sven and all.

Yes, storage does have the snapshot function and could possibly be used,
but I was wondering about an even easier way through the oVirt Node CLI or
something similar that can use the qcow2 image snapshot to do that with the
Self-Hosted Engine in Global Maintenance.

I used to run the oVirt Engine in a Libvirt KVM Virtual Machine on a
separate Host and it has always been extremely handy to have this feature.
There have been times when the upgrade was not successful, and just turning
off the VM and starting it from a snapshot saved my day.

Regards
Fernando

2018-03-27 14:14 GMT-03:00 Sven Achtelik :

> Hi Fernando,
>
> depending on where you're having your storage you could set everything to
> global maintenance, stop the vm and copy the disk image. Or if your storage
> system is able to do snapshots you could use that function once the engine
> is stopped. It's the easiest way I can think of right now. What kind of
> storage are you using?
>
> Sven
>
> *From:* users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] *On
> behalf of *FERNANDO FREDIANI
> *Sent:* Tuesday, 27 March 2018 15:24
> *To:* users
> *Subject:* [ovirt-users] Snapshot of the Self-Hosted Engine
>
> Hello
>
> Is it possible to snapshot the Self-Hosted Engine before an Upgrade? If
> so, how?
>
> Thanks
>
> Fernando
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ccox at endlessnow.com  Wed Mar 28 14:07:43 2018
From: ccox at endlessnow.com (Christopher Cox)
Date: Wed, 28 Mar 2018 09:07:43 -0500
Subject: [ovirt-users] oVirt Node Resize tool for local storage
In-Reply-To: References: Message-ID: <9336feb8-0355-47f4-0ccc-4cb396b8a332@endlessnow.com>

On 03/28/2018 01:37 AM, Pavol Brilla wrote:
> Hi
>
> AFAIK ext4 is not supporting online shrinking of filesystem,
> to shrink storage you would need to unmount filesystem,
> thus it is not possible to do with VM online.

Correct. Just saying it's not possible at all with XFS, be that online or
offline.

From phudec at cnc.sk  Wed Mar 28 14:38:55 2018
From: phudec at cnc.sk (Peter Hudec)
Date: Wed, 28 Mar 2018 16:38:55 +0200
Subject: [ovirt-users] Any monitoring tool provided?
In-Reply-To: <008007e4-7c25-73ed-db8e-60c49fc3ad7a@cnc.sk>
References: <008007e4-7c25-73ed-db8e-60c49fc3ad7a@cnc.sk>
Message-ID: <5f8dd076-9adf-0878-b4ec-681f365de81b@cnc.sk>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

I have a working proof-of-concept. There could still be bugs in the
templates, and not all parameters are monitored yet, but HOST and VM
discovery is working.

I will try to share it on github. At this moment I have some performance
issues, since it's using the zabbix-agent, which needs to fork a new
process for each query ;(

Peter

On 26/03/2018 08:13, Peter Hudec wrote:
> Yes, template and python
>
> at this moment I understand the oVirt API. I needed to write my own small
> SDK, since the oVirt SDK is using SSO for login, which means it does
> some additional requests for the login process. There is no option to
> use Basic Auth. The SESSION reuse could be useful, but I do not
> want to add more.
>
> First I need to understand some basics of Zabbix Discovery
> Rules / Host Prototypes. I would like to have VMs as separate hosts
> in Zabbix, like VMware does. The other stuff is quite easy.
>
> There is a plugin for nagios/icinga if someone is using that monitoring
> tool: https://github.com/ovido/check_rhev3
>
> On 26/03/2018 08:02, Alex K wrote:
>> Hi Peter,
>
>> This is interesting. Is it going to be a template with an
>> external python script?
>
>> Alex
>
>> On Mon, Mar 26, 2018, 08:50 Peter Hudec wrote:
>
>> Hi Terry,
>
>> I started to work on ZABBIX integration based on the oVirt API.
>> Basically it should be like the VMware integration in ZABBIX, with
>> full hosts/vms discovery and statistics gathering.
>
>> The API provides a statistics service for each NIC and VM, as well as
>> CPU and MEM utilization.
>
>> There is also a solution based on reading data from VDSM into
>> prometheus:
>> http://rmohr.github.io/virtualization/2016/04/12/monitor-your-ovirt-datacenter-with-prometheus
>
>> Peter
>
>> On 22/03/2018 04:41, Terry hey wrote:
>>> Dear all,
>
>>> Now, we can just read how much storage is used and the cpu usage on the
>>> ovirt dashboard. But is there any monitoring tool for
>>> monitoring virtual machines from time to time? If yes, could you guys
>>> give me the procedure?
>
>>> Regards Terry
>
>>> _______________________________________________ Users mailing
>>> list Users at ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>
>> _______________________________________________ Users mailing
>> list Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users

- --
*Peter Hudec*
Infraštruktúrny architekt
phudec at cnc.sk

*CNC, a.s.* Borská
6, 841 04 Bratislava
Recepcia: +421 2 35 000 100
Mobil: +421 905 997 203
*www.cnc.sk*

From fernando.frediani at upx.com  Wed Mar 28 16:05:10 2018
From: fernando.frediani at upx.com (FERNANDO FREDIANI)
Date: Wed, 28 Mar 2018 13:05:10 -0300
Subject: [ovirt-users] Deploy Self-Hosted Engine in a Active Host
Message-ID:

Hello

As I mentioned in another thread I am migrating a 'Bare-metal'
oVirt-Engine to a Self-Hosted Engine. For that I am following this
documentation:
https://ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/

However one thing caught my attention and I wanted to clarify: must the
Host that will deploy the Self-Hosted Engine be in Maintenance mode, and
therefore with no other VMs running?

I have a Node which is currently part of a Cluster and wish to deploy the
Self-Hosted Engine to it. Must I put it into Maintenance mode first, or can
I just run 'hosted-engine --deploy'?

Note: this Self-Hosted Engine will manage the existing cluster where this
Node exists. I guess that is not an issue at all and is part of what
Self-Hosted Engine is intended for.

Thanks
Fernando
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Sven.Achtelik at eps.aero  Wed Mar 28 16:27:58 2018
From: Sven.Achtelik at eps.aero (Sven Achtelik)
Date: Wed, 28 Mar 2018 16:27:58 +0000
Subject: [ovirt-users] Recovering oVirt-Engine with a backup before upgrading to 4.2
In-Reply-To: References: <41e0a4df7d7b4b04824f154982fe953f@eps.aero>
Message-ID: <390ad7391c6d4dc1b1d92a762c509e88@eps.aero>

> -----Original Message-----
> From: Yedidyah Bar David [mailto:didi at redhat.com]
> Sent: Wednesday, 28 March 2018 10:06
> To: Sven Achtelik
> Cc: users at ovirt.org
> Subject: Re: [ovirt-users] Recovering oVirt-Engine with a backup before
> upgrading to 4.2
>
> On Tue, Mar 27, 2018 at 9:14 PM, Sven Achtelik wrote:
> > Hi All,
> >
> > I'm still facing issues with my HE engine.
> > Here are the steps I took to end up in this situation: [...]
>
> Restore again with either method - what you tried, or patching
> engine-backup and restoring directly into 4.2 - and if the engine fails to
> talk to the hosts, try to debug/fix this.
>
> If you suspect corruption more severe than just the db, you can install a
> fresh engine machine from scratch and restore to it. If it's a
> hosted-engine, you'll need to deploy hosted-engine from scratch, check docs
> about hosted-engine backup/restore.

I read through those documents and it seems that I would need an extra
Host/Hardware, which I don't have.
https://ovirt.org/documentation/self-hosted/chap-Backing_up_and_Restoring_an_EL-Based_Self-Hosted_Environment/

So how would I be able to get a new setup working when I would like to use
the Engine-VM-Image? At this point it sounds like I would have to manually
reinstall the machine that is left over and running. I'm lost at this point.
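Before any further recovery attempts, preserving a copy of the current
engine disk is cheap insurance. A rough sketch, assuming the cluster is in
global maintenance and the engine VM can be stopped; the source path is a
placeholder for the storage domain, image and volume UUIDs of the engine
disk:

hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
# copy the hosted-engine disk out of the storage domain to a safe place;
# the source path below is a placeholder, not a real layout
qemu-img convert -O qcow2 \
    /rhev/data-center/mnt/<server:_path>/<sd-uuid>/images/<img-uuid>/<vol-uuid> \
    /backup/engine-disk.qcow2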
> > Best regards,
> --
> Didi

From fedele.stabile at fis.unical.it  Wed Mar 28 10:31:22 2018
From: fedele.stabile at fis.unical.it (Fedele Stabile Nuovo Server)
Date: Wed, 28 Mar 2018 12:31:22 +0200
Subject: [ovirt-users] How is oVirt used in your Department?
Message-ID: <1522233082.6801.129.camel@fis.unical.it>

My question is mainly addressed at those of you who use oVirt not only for
creating services on virtual machines. What is your experience and what did
you make? Is there anyone who has virtualized an HPC cluster? What is, for
you, the advantage of virtualizing a cluster? Or, for a classroom of PCs or
Raspberry Pis, is it better to use LTSP or PiNet, or to virtualize the
desktops? I would like to have a lot of feedback to start a discussion
about the best way to use oVirt in different contexts.

Fedele Stabile

From lveyde at redhat.com  Wed Mar 28 17:45:12 2018
From: lveyde at redhat.com (Lev Veyde)
Date: Wed, 28 Mar 2018 20:45:12 +0300
Subject: [ovirt-users] [ANN] oVirt 4.2.2 GA Release is now available
Message-ID:

The oVirt Project is pleased to announce the availability of the oVirt
4.2.2 GA release, as of March 28th, 2018.

This update is a GA release of the second in a series of stabilization
updates to the 4.2 series.

This release is available now for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.2

See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.

Notes:
- oVirt Appliance is available
- oVirt Node is available [2]

Additional Resources:
* Read more about the oVirt 4.2.2 release highlights:
http://www.ovirt.org/release/4.2.2/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.2.2/
[2] http://resources.ovirt.org/pub/ovirt-4.2/iso/

--
Lev Veyde
Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
lev at redhat.com | lveyde at redhat.com
TRIED. TESTED. TRUSTED.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jlawrence at squaretrade.com  Wed Mar 28 17:48:55 2018
From: jlawrence at squaretrade.com (Jamie Lawrence)
Date: Wed, 28 Mar 2018 10:48:55 -0700
Subject: [ovirt-users] Hosted engine VDSM issue with sanlock
Message-ID:

I still can't resolve this issue.

I have a host that is stuck in a cycle; it will be marked non-responsive,
then come back up, ending with a "finished activation" message in the GUI.
Then it repeats.

The root cause seems to be sanlock. I'm just unclear on why it started or
how to resolve it. The only "approved" knob I'm aware of is
--reinitialize-lockspace and the manual equivalent, neither of which fix
anything.

Anyone have a guess?
-j - - - vdsm.log - - - - 2018-03-28 10:38:22,207-0700 INFO (monitor/b41eb20) [storage.SANLock] Acquiring host id for domain b41eb20a-eafb-481b-9a50-a135cf42b15e (id=1, async=True) (clusterlock:284) 2018-03-28 10:38:22,208-0700 ERROR (monitor/b41eb20) [storage.Monitor] Error acquiring host id 1 for domain b41eb20a-eafb-481b-9a50-a135cf42b15e (monitor:568) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 565, in _acquireHostId self.domain.acquireHostId(self.hostId, async=True) File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 828, in acquireHostId self._manifest.acquireHostId(hostId, async) File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 453, in acquireHostId self._domainLock.acquireHostId(hostId, async) File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 315, in acquireHostId raise se.AcquireHostIdFailure(self._sdUUID, e) AcquireHostIdFailure: Cannot acquire host id: (u'b41eb20a-eafb-481b-9a50-a135cf42b15e', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument')) 2018-03-28 10:38:23,078-0700 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573) 2018-03-28 10:38:23,085-0700 INFO (jsonrpc/6) [vdsm.api] START repoStats(domains=[u'b41eb20a-eafb-481b-9a50-a135cf42b15e']) from=::1,54450, task_id=186d7e8b-7b4e-485d-a9e0-c0cb46eed621 (api:46) 2018-03-28 10:38:23,085-0700 INFO (jsonrpc/6) [vdsm.api] FINISH repoStats return={u'b41eb20a-eafb-481b-9a50-a135cf42b15e': {'code': 0, 'actual': True, 'version': 4, 'acquired': False, 'delay': '0.000812547', 'lastCheck': '0.4', 'valid': True}} from=::1,54450, task_id=186d7e8b-7b4e-485d-a9e0-c0cb46eed621 (api:52) 2018-03-28 10:38:23,086-0700 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:573) 2018-03-28 10:38:23,092-0700 WARN (vdsm.Scheduler) [Executor] Worker blocked: at 0x1d44150> timeout=15, duration=150 at 0x7f076c05fb90> task#=83985 at 0x7f082c08e510>, traceback: File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap self.__bootstrap_inner() File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner self.run() File: "/usr/lib64/python2.7/threading.py", line 765, in run self.__target(*self.__args, **self.__kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 194, in run ret = func(*args, **kwargs) File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run self._execute_task() File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task task() File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__ self._callable() File: "/usr/lib/python2.7/site-packages/vdsm/virt/periodic.py", line 213, in __call__ self._func() File: "/usr/lib/python2.7/site-packages/vdsm/virt/sampling.py", line 578, in __call__ stats = hostapi.get_stats(self._cif, self._samples.stats()) File: "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 77, in get_stats ret['haStats'] = _getHaInfo() File: "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 182, in _getHaInfo stats = instance.get_all_stats() File: "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 93, in get_all_stats stats = broker.get_stats_from_storage() File: "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 135, in get_stats_from_storage result = self._proxy.get_stats() File: 
"/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File: "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request verbose=self.__verbose File: "/usr/lib64/python2.7/xmlrpclib.py", line 1273, in request return self.single_request(host, handler, request_body, verbose) File: "/usr/lib64/python2.7/xmlrpclib.py", line 1303, in single_request response = h.getresponse(buffering=True) File: "/usr/lib64/python2.7/httplib.py", line 1089, in getresponse response.begin() File: "/usr/lib64/python2.7/httplib.py", line 444, in begin version, status, reason = self._read_status() File: "/usr/lib64/python2.7/httplib.py", line 400, in _read_status line = self.fp.readline(_MAXLINE + 1) File: "/usr/lib64/python2.7/socket.py", line 476, in readline data = self._sock.recv(self._rbufsize) (executor:363) 2018-03-28 10:38:23,274-0700 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573) 2018-03-28 10:38:24,297-0700 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573) 2018-03-28 10:38:24,306-0700 INFO (jsonrpc/2) [vdsm.api] START repoStats(domains=[u'b41eb20a-eafb-481b-9a50-a135cf42b15e']) from=::1,54450, task_id=6a60e316-e4d7-415d-970a-a998710a5899 (api:46) 2018-03-28 10:38:24,306-0700 INFO (jsonrpc/2) [vdsm.api] FINISH repoStats return={u'b41eb20a-eafb-481b-9a50-a135cf42b15e': {'code': 0, 'actual': True, 'version': 4, 'acquired': False, 'delay': '0.000812547', 'lastCheck': '1.6', 'valid': True}} from=::1,54450, task_id=6a60e316-e4d7-415d-970a-a998710a5899 (api:52) 2018-03-28 10:38:24,307-0700 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:573) 2018-03-28 10:38:24,374-0700 INFO (jsonrpc/7) [api.host] START getAllVmStats() from=::ffff:10.181.26.150,46064 (api:46) 2018-03-28 10:38:24,377-0700 INFO (jsonrpc/7) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:10.181.26.150,46064 (api:52) 2018-03-28 10:38:24,379-0700 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573) 2018-03-28 10:38:24,529-0700 INFO (jsonrpc/5) [api.host] START getAllVmStats() from=::1,54454 (api:46) 2018-03-28 10:38:24,532-0700 INFO (jsonrpc/5) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::1,54454 (api:52) 2018-03-28 10:38:24,533-0700 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.01 seconds (__init__:573) 2018-03-28 10:38:24,545-0700 INFO (jsonrpc/6) [api.host] START getAllVmIoTunePolicies() from=::1,54454 (api:46) 2018-03-28 10:38:24,546-0700 INFO (jsonrpc/6) [api.host] FINISH getAllVmIoTunePolicies return={'status': {'message': 'Done', 'code': 0}, 'io_tune_policies_dict': {'588a1394-4f28-4fb8-bcad-5b08d78ecd00': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': u'/var/run/vdsm/storage/b41eb20a-eafb-481b-9a50-a135cf42b15e/a9d01d59-f146-47e5-b514-d10f8867678e/8f0c9f7a-ae6a-476e-b6f3-a830dcb79e87', 'name': 'vda'}]}}} from=::1,54454 (api:52) 2018-03-28 10:38:24,547-0700 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmIoTunePolicies succeeded in 0.00 seconds (__init__:573) 2018-03-28 10:38:29,319-0700 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.ping2 
succeeded in 0.00 seconds (__init__:573) 2018-03-28 10:38:29,327-0700 INFO (jsonrpc/0) [vdsm.api] START repoStats(domains=[u'b41eb20a-eafb-481b-9a50-a135cf42b15e']) from=::1,54450, task_id=c27c5e13-3b31-4182-9c14-11463c9b590a (api:46) 2018-03-28 10:38:29,327-0700 INFO (jsonrpc/0) [vdsm.api] FINISH repoStats return={u'b41eb20a-eafb-481b-9a50-a135cf42b15e': {'code': 0, 'actual': True, 'version': 4, 'acquired': False, 'delay': '0.000812547', 'lastCheck': '6.6', 'valid': True}} from=::1,54450, task_id=c27c5e13-3b31-4182-9c14-11463c9b590a (api:52) 2018-03-28 10:38:29,328-0700 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getStorageRepoStats succeeded in 0.00 seconds (__init__:573) 2018-03-28 10:38:30,471-0700 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.ping2 succeeded in 0.00 seconds (__init__:573) 2018-03-28 10:38:30,475-0700 INFO (jsonrpc/7) [api.host] START getCapabilities() from=::1,54450 (api:46) From vincent at epicenergy.ca Wed Mar 28 18:35:22 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Wed, 28 Mar 2018 11:35:22 -0700 Subject: [ovirt-users] Any monitoring tool provided? In-Reply-To: References: Message-ID: Shirly, Sorry it took so long to reply. The issue I found was that the code for the variables required single quotes around some of the fields, such as this one: WHERE datacenter_id ='$datacenter_id' without the single quotes I was getting errors. *Vincent Royer* *778-825-1057* *SUSTAINABLE MOBILE ENERGY SOLUTIONS* On Sun, Mar 25, 2018 at 4:44 AM, Shirly Radco wrote: > Hi Vincent, > > > I'm sorry it was not an easy setup. > > Can you please share what did not work for you in the instructions? I see > you did manage to get it working... :) > > If you want data from the last 24 hours in 60 seconds interval (Still not > real time but can give you a better granularity), > You can use the samples tables. 
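For a quick look at what those samples tables hold, outside Grafana, the
history database can also be queried directly on the engine machine. A
sketch, assuming the default history database name ovirt_engine_history
and the 4.2 view prefix used in the query below:

# run as the postgres user on the engine machine
psql ovirt_engine_history -c "
    SELECT history_datetime, cpu_usage_percent, memory_usage_percent
    FROM v4_2_statistics_hosts_resources_usage_samples
    ORDER BY history_datetime DESC
    LIMIT 5;"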
> > Also, Please make sure to update your* views prefix *to the version you > are using, In the example prefix is * v4_2_** > *If you are using oVirt 4.1, prefix should be ** v4_1_** > > For example: (Did not get to test this query yet) > > SELECT DISTINCT > min(time) AS time, > MEM_Usage, > host_name || 'MEM_Usage' as metric > FROM ( > SELECT > stats_hosts.host_id, > CASE > WHEN delete_date IS NULL > THEN host_name > ELSE > host_name > || > ' (Removed on ' > || > CAST ( CAST ( delete_date AS date ) AS varchar ) > || > ')' > END AS host_name, > stats_hosts.history_datetime AS time, > SUM ( > COALESCE ( > stats_hosts.cpu_usage_percent, > 0 > ) * > COALESCE ( > stats_hosts.minutes_in_status, > 0 > ) > ) / > SUM ( > COALESCE ( > stats_hosts.minutes_in_status, > 0 > ) > ) AS CPU_Usage, > SUM ( > COALESCE ( > stats_hosts.memory_usage_percent, > 0 > ) * > COALESCE ( > stats_hosts.minutes_in_status, > 0 > ) > ) / > SUM ( > COALESCE ( > stats_hosts.minutes_in_status, > 0 > ) > ) AS MEM_Usage > FROM* v4_2_statistics_hosts_resources_usage_samples* AS stats_hosts > INNER JOIN v4_2_configuration_history_hosts > ON ( > v4_2_configuration_history_hosts.host_id = > stats_hosts.host_id > ) > WHERE stats_hosts.history_datetime >= $__timeFrom() > AND stats_hosts.history_datetime < $__timeTo() > -- Here we get the latest hosts configuration > AND v4_2_configuration_history_hosts.history_id IN ( > SELECT MAX ( a.history_id ) > FROM v4_2_configuration_history_hosts AS a > GROUP BY a.host_id > ) > AND stats_hosts.host_id IN ( > SELECT a.host_id > FROM* v4_2_statistics_hosts_resources_usage_samples* a > INNER JOIN v4_2_configuration_history_hosts b > ON ( a.host_id = b.host_id ) > WHERE > -- Here we filter by active hosts only > a.host_status = 1 > -- Here we filter by the datacenter chosen by the user > AND b.cluster_id IN ( > SELECT v4_2_configuration_history_clusters.cluster_id > FROM v4_2_configuration_history_clusters > WHERE > v4_2_configuration_history_clusters.datacenter_id > = > $datacenter_id > ) > -- Here we filter by the clusters chosen by the user > AND b.cluster_id IN ($cluster_id) > AND a. history_datetime >= $__timeFrom() > AND a.history_datetime < $__timeTo() > -- Here we get the latest hosts configuration > AND b.history_id IN ( > SELECT MAX (g.history_id) > FROM v4_2_configuration_history_hosts g > GROUP BY g.host_id > ) > GROUP BY a.host_id > ORDER BY > -- Hosts will be ordered according to the summery of > -- memory and CPU usage percent. > --This determines the busiest hosts. > SUM ( > COALESCE ( > a.memory_usage_percent * a.minutes_in_status, > 0 > ) > ) / > SUM ( > COALESCE ( > a.minutes_in_status, > 0 > ) > ) + > SUM ( > COALESCE ( > a.cpu_usage_percent * a.minutes_in_status, > 0 > ) > ) / > SUM ( > COALESCE ( > a.minutes_in_status, > 0 > ) > ) DESC > LIMIT 5 > ) > GROUP BY > stats_hosts.host_id, > host_name, > delete_date, > history_datetime > ) AS a > GROUP BY a.host_name, a.mem_usage > ORDER BY time > > -- > > SHIRLY RADCO > > BI SeNIOR SOFTWARE ENGINEER > > Red Hat Israel > > TRIED. TESTED. TRUSTED. > > On Thu, Mar 22, 2018 at 9:05 PM, Vincent Royer > wrote: > >> I setup Grafana using the instructions I found on accessing the Ovirt >> history database. However, the instructions didn't work as written. >> Regardless, it does work, but it's not easy to setup. 
The update rate also >> leaves something to be desired, its ok for historical info, but it's not a >> good real time monitoring solution (although its possible I could set it up >> differently and it would work better) >> >> Also using Grafana, I have setup Telegraf agents on most of my VMs. >> >> Lastly, I also installed Telegraf on the Centos hosts in my Ovirt Cluster >> >> >> >> >> >> >> *Vincent Royer* >> *778-825-1057 <(778)%20825-1057>* >> >> >> >> *SUSTAINABLE MOBILE ENERGY SOLUTIONS* >> >> >> >> >> On Wed, Mar 21, 2018 at 8:41 PM, Terry hey wrote: >> >>> Dear all, >>> >>> Now, we can just read how many storage used, cpu usage on ovirt >>> dashboard. >>> But is there any monitoring tool for monitoring virtual machine time to >>> time? >>> If yes, could you guys give me the procedure? >>> >>> >>> Regards >>> Terry >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 68566 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 145730 bytes Desc: not available URL: From marceloltmm at gmail.com Wed Mar 28 20:51:05 2018 From: marceloltmm at gmail.com (Marcelo Leandro) Date: Wed, 28 Mar 2018 17:51:05 -0300 Subject: [ovirt-users] Cache to NFS Message-ID: Hello, I have 1 server configured with RAID 6 36TB HDD, I would like improve the performance, I read about lvmcache with SSD and would like know if its indicated to configure with NFS and how can calculete the size necessary to ssd. Very Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From colin.coe at gmail.com Thu Mar 29 01:13:35 2018 From: colin.coe at gmail.com (Colin Coe) Date: Thu, 29 Mar 2018 09:13:35 +0800 Subject: [ovirt-users] Host affinity rule Message-ID: Hi all I suspect one of our hypervisors is faulty but at this stage I can't prove it. We're running RHV 4.1.7 (about to upgrade to v4.1.10 in a few days). I'm planning on create a negative host affinity rule to prevent all current existing VMs from running on the suspect host. Afterwards I'll create a couple of test VMs and put them in a positive host affinity rule so they only run on the suspect host. There are about 150 existing VMs, are there any known problems with host affinity rules and putting 150 or so VMs in the group? This is production so I need to be careful. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From tomkcpr at mdevsys.com Thu Mar 29 05:24:18 2018 From: tomkcpr at mdevsys.com (TomK) Date: Thu, 29 Mar 2018 01:24:18 -0400 Subject: [ovirt-users] ILO2 Fencing Message-ID: <3bb23f19-232e-0291-3982-683a3ea34b94@mdevsys.com> Hey Guy's, I've tested my ILO2 fence from the ovirt engine CLI and that works: fence_ilo2 -a 192.168.0.37 -l --password="" --ssl-insecure --tls1.0 -v -o status The UI gives me: Test failed: Failed to run fence status-check on host 'ph-host01.my.dom'. No other host was available to serve as proxy for the operation. Going to add a second host in a bit but anyway to get this working with just one host? 
I'm just adding the one host to oVirt for some POC we are doing atm but the UI forces me to adjust Power Management settings before proceeding. Also: 2018-03-28 02:04:15,183-04 WARN [org.ovirt.engine.core.bll.network.NetworkConfigurator] (EE-ManagedThreadFactory-engine-Thread-335) [2d691be9] Failed to find a valid interface for the management network of host ph-host01.my.dom. If the interface br0 is a bridge, it should be torn-down manually. 2018-03-28 02:04:15,184-04 ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-335) [2d691be9] Exception: org.ovirt.engine.core.bll.network.NetworkConfigurator$NetworkConfiguratorException: Interface br0 is invalid for management network I've these defined as such but not clear what it is expecting: [root at ph-host01 ~]# ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff 3: eth1: mtu 1500 qdisc mq master bond0 state DOWN qlen 1000 link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff 4: eth2: mtu 1500 qdisc mq master bond0 state DOWN qlen 1000 link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff 5: eth3: mtu 1500 qdisc mq master bond0 state DOWN qlen 1000 link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff 21: bond0: mtu 1500 qdisc noqueue master br0 state UP qlen 1000 link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff inet6 fe80::7ae7:d1ff:fe8c:b1ba/64 scope link valid_lft forever preferred_lft forever 23: ;vdsmdummy;: mtu 1500 qdisc noop state DOWN qlen 1000 link/ether fe:69:c7:50:0d:dd brd ff:ff:ff:ff:ff:ff 24: br0: mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff inet 192.168.0.39/23 brd 192.168.1.255 scope global br0 valid_lft forever preferred_lft forever inet6 fe80::7ae7:d1ff:fe8c:b1ba/64 scope link valid_lft forever preferred_lft forever [root at ph-host01 ~]# cd /etc/sysconfig/network-scripts/ [root at ph-host01 network-scripts]# cat ifcfg-br0 DEVICE=br0 TYPE=Bridge BOOTPROTO=none IPADDR=192.168.0.39 NETMASK=255.255.254.0 GATEWAY=192.168.0.1 ONBOOT=yes DELAY=0 USERCTL=no DEFROUTE=yes NM_CONTROLLED=no DOMAIN="my.dom nix.my.dom" SEARCH="my.dom nix.my.dom" HOSTNAME=ph-host01.my.dom DNS1=192.168.0.224 DNS2=192.168.0.44 DNS3=192.168.0.45 ZONE=public [root at ph-host01 network-scripts]# cat ifcfg-bond0 DEVICE=bond0 ONBOOT=yes BOOTPROTO=none USERCTL=no NM_CONTROLLED=no BONDING_OPTS="miimon=100 mode=2" BRIDGE=br0 # # # IPADDR=192.168.0.39 # NETMASK=255.255.254.0 # GATEWAY=192.168.0.1 # DNS1=192.168.0.1 [root at ph-host01 network-scripts]# -- Cheers, Tom K. ------------------------------------------------------------------------------------- Living on earth is expensive, but it includes a free trip around the sun. From artem.tambovskiy at gmail.com Thu Mar 29 05:59:10 2018 From: artem.tambovskiy at gmail.com (Artem Tambovskiy) Date: Thu, 29 Mar 2018 05:59:10 +0000 Subject: [ovirt-users] Hosted engine VDSM issue with sanlock In-Reply-To: References: Message-ID: Hi, How many hosts you have? Check hosted-engine.conf on all hosts including the one you have problem with and look if all host_id values are unique. It might happen that you have several hosts with host_id=1 Regards, Artem ??, 28 ???. 
2018, 20:49 Jamie Lawrence :

> I still can't resolve this issue.
>
> I have a host that is stuck in a cycle; it will be marked non-responsive,
> then come back up, ending with a "finished activation" message in the GUI.
> Then it repeats.
>
> The root cause seems to be sanlock. I'm just unclear on why it started or
> how to resolve it. The only "approved" knob I'm aware of is
> --reinitialize-lockspace and the manual equivalent, neither of which fix
> anything.
>
> Anyone have a guess?
>
> [...]
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
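A quick way to check the host_id suggestion above across all hosts of the
cluster; hostnames below are placeholders:

for h in host1 host2 host3; do
    echo "== $h =="
    # every host must carry a unique host_id in hosted-engine.conf;
    # duplicates produce exactly this kind of lockspace add failure
    ssh root@"$h" grep '^host_id' /etc/ovirt-hosted-engine/hosted-engine.conf
    # sanlock's own view of the lockspaces held on that host
    ssh root@"$h" sanlock client status
done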
At this moment I have some > performance issues, since it's using the zabbix-agent which needs to > for a new process for each query ;( > > Peter > > On 26/03/2018 08:13, Peter Hudec wrote: > > Yes, template and python > > > > at this moment I understand oVirt API. I needed to write own small > > SDK, since the oVirt SDK is using SSO for login, that means it do > > some additional requests for login process. There is no option to > > use Basic Auth. The SESSION reuse could be useful, but I do no > > want to add more > > > > First I need to understand some basics from Zabbix Discovery > > Rules / Host Prototypes. I would like to have VM as separate hosts > > in Zabbix, like VMWare does. The other stuff is quite easy. > > > > There plugin for nagios/icinga if someone using this monitoring > > tool https://github.com/ovido/check_rhev3 > > > > On 26/03/2018 08:02, Alex K wrote: > >> Hi Peter, > > > >> This is interesting. Is it going to be a template with an > >> external python script? > > > >> Alex > > > >> On Mon, Mar 26, 2018, 08:50 Peter Hudec >> > wrote: > > > >> Hi Terry, > > > >> I started to work on ZABBIX integration based on oVirt API. > >> Basically it should be like VmWare integration in ZABBIX with > >> full hosts/vms discovery and statistic gathering. > > > >> The API provides for each statistics service for NIC, VM as wall > >> CPU and MEM utilization. > > > >> There is also solution based on reading data from VDSM to > >> prometheus > >> http://rmohr.github.io/virtualization/2016/04/12/monitor-your-ovirt-d > a > > > >> > >> > ta > > > > > > center-with-prometheus > >> d > > > >> > >> > atacenter-with-prometheus>. > > > > > > > >> Peter > > > >> On 22/03/2018 04:41, Terry hey wrote: > >>> Dear all, > > > >>> Now, we can just read how many storage used, cpu usage on > >>> ovirt dashboard. But is there any monitoring tool for > >>> monitoring virtual machine time to time? If yes, could you guys > >>> give me the procedure? > > > > > > > >>> Regards Terry > > > > > >>> _______________________________________________ Users mailing > >>> list Users at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/users > > > > > > > >> _______________________________________________ Users mailing > >> list Users at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/users > > > > > > > > - -- > *Peter Hudec* > Infra?trukt?rny architekt > phudec at cnc.sk > > *CNC, a.s.* > Borsk? 
From phudec at cnc.sk  Thu Mar 29 07:26:04 2018
From: phudec at cnc.sk (Peter Hudec)
Date: Thu, 29 Mar 2018 09:26:04 +0200
Subject: [ovirt-users] Any monitoring tool provided?
In-Reply-To:
References: <008007e4-7c25-73ed-db8e-60c49fc3ad7a@cnc.sk>
	<5f8dd076-9adf-0878-b4ec-681f365de81b@cnc.sk>
Message-ID:

Hi Shirly,

First, why Zabbix: Zabbix is our primary monitoring tool, and it has a
native implementation for VMware monitoring, which we of course use. The
idea was to use the same monitoring and alerting system for our NOC.

The performance problem is due to how Zabbix works, especially the Zabbix
agent. You can return only one value per query, so to get the stats for
one VM you have to do about 10-15 queries, and the agent needs to fork a
process for each of them. So the overhead is doubled: getting the data
over the API, plus forking the process.

I see the Grafana integration,
https://www.ovirt.org/blog/2018/01/ovirt-report-using-grafana/. This seems
to depend on the oVirt version, judging by the table names.

So my next steps are:
a) Take a look at the oVirt metrics store; it looks good. Is it stable on
   4.2?
b) Take a look at zabbix_sender, but this has another set of issues - for
   example, how to send data only for some VMs/hosts rather than for all
   of them, if I do not want to monitor the whole platform. The data could
   be collected from the API or the DWH. (A sketch of this approach
   follows below.)
c) Try to write an oVirt poller for Zabbix.

Generally, what we need is a monitoring and alerting system:
- alert if a VM is down
- alert if VM CPU usage is high for a period of time
- alert if there is a storage problem (latency, usage, ...)

Peter
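To make option (b) concrete: a minimal sketch of pushing one collected value to Zabbix by shelling out to zabbix_sender. The server name, host name and item key are placeholders; a real poller would batch values with zabbix_sender's --input-file instead of one process per value:

    import subprocess

    # Placeholders throughout: Zabbix server, the host name as registered
    # in Zabbix, and the trapper item key are assumptions for illustration.
    subprocess.run(
        [
            'zabbix_sender',
            '-z', 'zabbix.example.com',          # Zabbix server/proxy
            '-s', 'vm01',                        # host name known to Zabbix
            '-k', 'ovirt.vm.cpu.current.guest',  # hypothetical item key
            '-o', '12.5',                        # value read from the oVirt API
        ],
        check=True,
    )

The corresponding Zabbix items would be of type "Zabbix trapper", which avoids the per-query agent fork described above.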
From mperina at redhat.com  Thu Mar 29 07:39:40 2018
From: mperina at redhat.com (Martin Perina)
Date: Thu, 29 Mar 2018 09:39:40 +0200
Subject: [ovirt-users] ILO2 Fencing
In-Reply-To: <3bb23f19-232e-0291-3982-683a3ea34b94@mdevsys.com>
References: <3bb23f19-232e-0291-3982-683a3ea34b94@mdevsys.com>
Message-ID:

On Thu, Mar 29, 2018 at 7:24 AM, TomK wrote:

> Hey Guys,
>
> I've tested my ILO2 fence from the ovirt engine CLI and that works:
>
> fence_ilo2 -a 192.168.0.37 -l --password="" --ssl-insecure
> --tls1.0 -v -o status

You are using additional options on the command line, so please add the
following to the Options field in the Edit fence agent dialog and retry:

ssl_insecure=1,tls1.0=1

> The UI gives me:
>
> Test failed: Failed to run fence status-check on host 'ph-host01.my.dom'.
> No other host was available to serve as proxy for the operation.

This is normal; fencing requires at least 2 working hosts in the setup.

> Going to add a second host in a bit, but is there any way to get this
> working with just one host? I'm just adding the one host to oVirt for
> some POC we are doing atm, but the UI forces me to adjust Power
> Management settings before proceeding.

You have the option to disable fencing completely for the cluster; it's
enough to turn off the 'Enable fencing' option in the Fencing Policy tab
of the Edit Cluster dialog.

> Also:
>
> 2018-03-28 02:04:15,183-04 WARN [org.ovirt.engine.core.bll.network.NetworkConfigurator] (EE-ManagedThreadFactory-engine-Thread-335) [2d691be9] Failed to find a valid interface for the management network of host ph-host01.my.dom. If the interface br0 is a bridge, it should be torn-down manually.
> 2018-03-28 02:04:15,184-04 ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-335) [2d691be9] Exception: org.ovirt.engine.core.bll.network.NetworkConfigurator$NetworkConfiguratorException: Interface br0 is invalid for management network

Petr/Edward, could you please take a look?
> I've these defined as such, but it's not clear what it is expecting:
>
> [root at ph-host01 ~]# ip a
> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: eth0: mtu 1500 qdisc mq master bond0 state UP qlen 1000
>     link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
> 3: eth1: mtu 1500 qdisc mq master bond0 state DOWN qlen 1000
>     link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
> 4: eth2: mtu 1500 qdisc mq master bond0 state DOWN qlen 1000
>     link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
> 5: eth3: mtu 1500 qdisc mq master bond0 state DOWN qlen 1000
>     link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
> 21: bond0: mtu 1500 qdisc noqueue master br0 state UP qlen 1000
>     link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::7ae7:d1ff:fe8c:b1ba/64 scope link
>        valid_lft forever preferred_lft forever
> 23: ;vdsmdummy;: mtu 1500 qdisc noop state DOWN qlen 1000
>     link/ether fe:69:c7:50:0d:dd brd ff:ff:ff:ff:ff:ff
> 24: br0: mtu 1500 qdisc noqueue state UP qlen 1000
>     link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
>     inet 192.168.0.39/23 brd 192.168.1.255 scope global br0
>        valid_lft forever preferred_lft forever
>     inet6 fe80::7ae7:d1ff:fe8c:b1ba/64 scope link
>        valid_lft forever preferred_lft forever
> [root at ph-host01 ~]# cd /etc/sysconfig/network-scripts/
> [root at ph-host01 network-scripts]# cat ifcfg-br0
> DEVICE=br0
> TYPE=Bridge
> BOOTPROTO=none
> IPADDR=192.168.0.39
> NETMASK=255.255.254.0
> GATEWAY=192.168.0.1
> ONBOOT=yes
> DELAY=0
> USERCTL=no
> DEFROUTE=yes
> NM_CONTROLLED=no
> DOMAIN="my.dom nix.my.dom"
> SEARCH="my.dom nix.my.dom"
> HOSTNAME=ph-host01.my.dom
> DNS1=192.168.0.224
> DNS2=192.168.0.44
> DNS3=192.168.0.45
> ZONE=public
> [root at ph-host01 network-scripts]# cat ifcfg-bond0
> DEVICE=bond0
> ONBOOT=yes
> BOOTPROTO=none
> USERCTL=no
> NM_CONTROLLED=no
> BONDING_OPTS="miimon=100 mode=2"
> BRIDGE=br0
> #
> # IPADDR=192.168.0.39
> # NETMASK=255.255.254.0
> # GATEWAY=192.168.0.1
> # DNS1=192.168.0.1
> [root at ph-host01 network-scripts]#
>
> --
> Cheers,
> Tom K.
>
> Living on earth is expensive, but it includes a free trip around the sun.

From hariprasanth.l at msystechnologies.com  Thu Mar 29 09:09:04 2018
From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan)
Date: Thu, 29 Mar 2018 14:39:04 +0530
Subject: [ovirt-users] Query on VM Clone
Message-ID:

Hi Team,

1) I perform the VM clone using the following API:

   api/vms/{vmId}/clone

2) The above API returns a job id.

3) Using the job id, we continuously query oVirt to get the status of the
   clone operation:

   /api/jobs/${vmCloneJobId}

   We are able to successfully get the status of the clone operation.

But the problem is, we are not able to identify the newly created VM
(created using the clone). AFAIK, the only way to find the newly created
VM is to get the whole VM list from oVirt. Is there an easy way to
identify the newly created VM using the jobId?

Thanks,
Hari
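For illustration, the polling in step 3 could look like this with the oVirt Python SDK (ovirtsdk4) instead of raw REST - the job UUID and connection details are placeholders:

    import time
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Placeholder connection details.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )

    # Poll the clone job until it leaves the STARTED state.
    job_service = connection.system_service().jobs_service().job_service('JOB-UUID')
    job = job_service.get()
    while job.status == types.JobStatus.STARTED:
        time.sleep(5)
        job = job_service.get()
    print(job.status)  # FINISHED, FAILED or ABORTED

    connection.close()

A real loop would also want a timeout, since a job can in principle stay in STARTED for a long time.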
From omachace at redhat.com  Thu Mar 29 09:21:34 2018
From: omachace at redhat.com (Ondra Machacek)
Date: Thu, 29 Mar 2018 11:21:34 +0200
Subject: [ovirt-users] Query on VM Clone
In-Reply-To:
References:
Message-ID: <8c8756dd-531f-9968-9874-36d0a18552b5@redhat.com>

On 03/29/2018 11:09 AM, Hari Prasanth Loganathan wrote:
> Is there an easy way to identify the newly created VM using the jobId?

In order to run the clone operation you must pass the VM name, so you
know the name, and later you can fetch the VM by just running:

api/vms?search=name=thenameofclonnedvm

Is this approach OK for you?
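A hedged sketch of that suggestion end to end with the Python SDK - start the clone with a name we choose, then fetch it back by that name once the job has finished. The source VM UUID, the clone name and the connection details are placeholders:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Placeholder connection details.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )
    vms_service = connection.system_service().vms_service()

    # We choose the clone's name ourselves when starting the clone...
    vms_service.vm_service('SOURCE-VM-UUID').clone(vm=types.Vm(name='myclone'))

    # ...so after the job completes we can fetch the new VM back by that name.
    matches = vms_service.list(search='name=myclone')
    new_vm = matches[0] if matches else None

    connection.close()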
From karli at inparadise.se  Thu Mar 29 11:02:01 2018
From: karli at inparadise.se (Karli Sjöberg)
Date: Thu, 29 Mar 2018 13:02:01 +0200
Subject: [ovirt-users] Query on VM Clone
In-Reply-To: <8c8756dd-531f-9968-9874-36d0a18552b5@redhat.com>
References: <8c8756dd-531f-9968-9874-36d0a18552b5@redhat.com>
Message-ID: <1522321321.2879.94.camel@inparadise.se>

On Thu, 2018-03-29 at 11:21 +0200, Ondra Machacek wrote:
> In order to run the clone operation you must pass the VM name, so you
> know the name, and later you can fetch the VM by just running:
>
> api/vms?search=name=thenameofclonnedvm

Hijacking this a little, because I got curious about something :)

Is it possible to do regex searches? Because I remember, from working on
something different, that the searches could potentially end up with
multiple matched objects, like "thenameofclonnedvm",
"thenameofclonnedvm-berta", "thenameofclonnedvm3" and so on. So I was
always forced to treat the result as a potential array, loop over the
objects (this was with Python) and test for an exact match, even if it
was just one object. It would be nicer if you could go like:

api/vms?search=name='^thenameofclonnedvm$'

and be sure to have an exact match every time. Is that possible?

TIA

/K
From omachace at redhat.com  Thu Mar 29 11:59:36 2018
From: omachace at redhat.com (Ondra Machacek)
Date: Thu, 29 Mar 2018 13:59:36 +0200
Subject: [ovirt-users] Query on VM Clone
In-Reply-To: <1522321321.2879.94.camel@inparadise.se>
References: <8c8756dd-531f-9968-9874-36d0a18552b5@redhat.com>
	<1522321321.2879.94.camel@inparadise.se>
Message-ID: <10782978-1e33-7a67-a1b6-3e3466799428@redhat.com>

On 03/29/2018 01:02 PM, Karli Sjöberg wrote:
> Is it possible to do regex searches? [...]
> It would be nicer if you could go like:
>
> api/vms?search=name='^thenameofclonnedvm$'
>
> and be sure to have an exact match every time. Is that possible?
You can read more about the search engine here:

https://www.ovirt.org/documentation/admin-guide/appe-Using_Search_Bookmarks_and_Tags/

So if you have, for example, the following VMs in the system:

 vm
 vm1
 vm2
 vm3

and you search like:

 api/vms?search=name=vm

it will return only the single VM called 'vm' (it always returns a
collection, but with just that single item). If you search like:

 api/vms?search=name=vm*

it will return all VMs whose name starts with the string 'vm', so the
collection of vm, vm1, vm2 and vm3.

So by default it searches for the exact string, but you may use wildcards
to broaden the search.
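The same two searches through the Python SDK, for illustration (connection details are placeholders; the search strings are exactly the ones above):

    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',                          # placeholder
        password='password',                                # placeholder
        ca_file='ca.pem',
    )
    vms_service = connection.system_service().vms_service()

    # Exact match: a one-item collection if 'vm' exists.
    print([vm.name for vm in vms_service.list(search='name=vm')])

    # Wildcard match: vm, vm1, vm2, vm3.
    print([vm.name for vm in vms_service.list(search='name=vm*')])

    connection.close()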
From awels at redhat.com  Thu Mar 29 12:15:35 2018
From: awels at redhat.com (Alexander Wels)
Date: Thu, 29 Mar 2018 08:15:35 -0400
Subject: [ovirt-users] Query on VM Clone
In-Reply-To: <10782978-1e33-7a67-a1b6-3e3466799428@redhat.com>
References: <1522321321.2879.94.camel@inparadise.se>
	<10782978-1e33-7a67-a1b6-3e3466799428@redhat.com>
Message-ID: <4975876.8G4zl8xcu9@awels>

You can also 'and' and 'or' different parameters, for instance

 api/vms?search=name%3DVM1+or+name%3DVM2

which will return VM1 and VM2 if they exist. Note that you will need to
URL-encode your search string to replace all the '=' with %3D, spaces
with '+', etc. The only '=' that shouldn't be encoded is the one right
after 'search'.

For an easy way to find what is available to search on for a particular
entity: if you go into the webadmin search bar and start typing, it will
auto-complete the different available options. AFAIC those match exactly
the search options in the REST API.
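For illustration, Python's standard library produces exactly that encoding (the engine URL is a placeholder):

    from urllib.parse import quote_plus

    base = 'https://engine.example.com/ovirt-engine/api'  # placeholder
    query = quote_plus('name=VM1 or name=VM2')            # name%3DVM1+or+name%3DVM2
    url = base + '/vms?search=' + query

quote_plus encodes '=' as %3D and spaces as '+', matching the example string above; the literal '=' after 'search' is added outside the encoded part.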
From hariprasanth.l at msystechnologies.com  Thu Mar 29 12:56:15 2018
From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan)
Date: Thu, 29 Mar 2018 18:26:15 +0530
Subject: [ovirt-users] Query on VM Clone
In-Reply-To:
References: <8c8756dd-531f-9968-9874-36d0a18552b5@redhat.com>
Message-ID:

Thanks, Ondra. Appreciated.
From Riaan at networkedge.co.nz  Thu Mar 29 13:29:58 2018
From: Riaan at networkedge.co.nz (Riaan Timmerman)
Date: Thu, 29 Mar 2018 13:29:58 +0000
Subject: [ovirt-users] firewalld rules - snmp
Message-ID: <0C462889D648A54688D93207355E3247031D01B7AA@NELEX01.nel.local>

Hi

I am running oVirt 4.2 and need to open the firewall (firewalld) to allow
an external monitoring system to connect via snmp.

The documentation is not exactly clear on how to do this.

Regards

Riaan
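Martin's reply below points to the host-deploy customization blog post. As a rough illustration of the kind of Ansible tasks file that post describes - treat the file path and exact format as assumptions to verify against the post itself; 'snmp' is firewalld's stock service definition:

    # /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml (on the
    # engine; path per the blog post, to be verified there)
    ---
    - name: Allow SNMP through firewalld on deployed hosts
      firewalld:
        service: snmp
        permanent: yes
        immediate: yes
        state: enabled

If the post is accurate, no script needs to be run by hand - the tasks file is picked up automatically the next time a host is deployed or reinstalled from the engine.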
From mperina at redhat.com  Thu Mar 29 13:53:26 2018
From: mperina at redhat.com (Martin Perina)
Date: Thu, 29 Mar 2018 15:53:26 +0200
Subject: [ovirt-users] firewalld rules - snmp
In-Reply-To: <0C462889D648A54688D93207355E3247031D01B7AA@NELEX01.nel.local>
References: <0C462889D648A54688D93207355E3247031D01B7AA@NELEX01.nel.local>
Message-ID:

Hi,

please take a look at the blog post about customizing the host deploy
process:

https://www.ovirt.org/blog/2017/12/host-deploy-customization/

Regards

Martin

From Riaan at networkedge.co.nz  Thu Mar 29 14:01:10 2018
From: Riaan at networkedge.co.nz (Riaan Timmerman)
Date: Thu, 29 Mar 2018 14:01:10 +0000
Subject: [ovirt-users] firewalld rules - snmp
In-Reply-To:
References: <0C462889D648A54688D93207355E3247031D01B7AA@NELEX01.nel.local>
Message-ID: <0C462889D648A54688D93207355E3247031D01B8A3@NELEX01.nel.local>

Thanks. I did look at this today but did not know if it applied. I am
still a new user, so I am a bit unsure of how it works.

Which script do you run after creating the yaml file?

Riaan

From jlawrence at squaretrade.com  Thu Mar 29 17:29:43 2018
From: jlawrence at squaretrade.com (Jamie Lawrence)
Date: Thu, 29 Mar 2018 10:29:43 -0700
Subject: [ovirt-users] Hosted engine VDSM issue with sanlock
In-Reply-To:
References:
Message-ID: <90618429-6621-4B03-9213-C7821EE6DFD6@squaretrade.com>

> On Mar 28, 2018, at 10:59 PM, Artem Tambovskiy wrote:
>
> Hi,
>
> How many hosts do you have? Check hosted-engine.conf on all hosts,
> including the one you have the problem with, and look whether all
> host_id values are unique. It might happen that you have several hosts
> with host_id=1

Hi Artem,

Thanks. 3 compute hosts, 3 gluster hosts. Checked them all; they're all
unique (1, 2, 6, 101, 102 and 103), so that isn't the problem.

Found another datapoint that I'm not entirely sure what to do with. Tried
reinstalling the afflicted host - call it host1. Moved the HE off of it,
removed it from the GUI, reinstalled. At this point, the SPM was on host3.
After it was back up, we moved the SPM to host1. The problem ceased on
host1 for several hours and then returned. But most notably, the problem
started happening on host3! So it seems somehow related to/influenced by
the SPM.

And I'm deeply confused.

-j

From rightkicktech at gmail.com  Thu Mar 29 18:42:15 2018
From: rightkicktech at gmail.com (Alex K)
Date: Thu, 29 Mar 2018 21:42:15 +0300
Subject: [ovirt-users] ovirt snapshot issue
In-Reply-To:
References:
Message-ID:

Any ideas on this issue? I am still trying to understand what may be
causing it.

Many thanks for any assistance.

Alex

On Wed, Mar 28, 2018 at 10:06 AM, Yedidyah Bar David wrote:

> On Tue, Mar 27, 2018 at 3:38 PM, Sandro Bonazzola wrote:
>
>> 2018-03-27 14:34 GMT+02:00 Alex K:
>>
>>> Hi All,
>>>
>>> Any idea on the below?
>>>
>>> I am using oVirt Guest Tools 4.2-1.el7.centos for the VM.
>>> The Windows 2016 server VM (which is the one with the relatively big
>>> disks: 500 GB) is consistently rendered unresponsive when trying to
>>> take a snapshot. I may provide any other additional logs if needed.
>>
>> Adding some people to the thread
>
> Adding more people for this part.
>
>>> Alex
>>>
>>> On Sun, Mar 25, 2018 at 7:30 PM, Alex K wrote:
>>>
>>>> Hi folks,
>>>>
>>>> I am frequently facing the following issue:
>>>>
>>>> On some large VMs (Windows 2016 with two disk drives, 60GB and
>>>> 500GB), when attempting to create a snapshot of the VM, the VM
>>>> becomes unresponsive.
>>>>
>>>> The errors that I managed to collect were:
>>>>
>>>> vdsm error at the host hosting the VM:
>>>> 2018-03-25 14:40:13,442+0000 WARN (vdsm.Scheduler) [Executor] Worker blocked: {u'frozen': False, u'vmID': u'a5c761a2-41cd-40c2-b65f-f3819293e8a4', u'snapDrives': [{u'baseVolumeID': u'2a33e585-ece8-4f4d-b45d-5ecc9239200e', u'domainID': u'888e3aae-f49f-42f7-a7fa-76700befabea', u'volumeID': u'e9a01ebd-83dd-40c3-8c83-5302b0d15e04', u'imageID': u'c75b8e93-3067-4472-bf24-dafada224e4d'}, {u'baseVolumeID': u'3fb2278c-1b0d-4677-a529-99084e4b08af', u'domainID': u'888e3aae-f49f-42f7-a7fa-76700befabea', u'volumeID': u'78e6b6b1-2406-4393-8d92-831a6d4f1337', u'imageID': u'd4223744-bf5d-427b-bec2-f14b9bc2ef81'}]}, 'jsonrpc': '2.0', 'method': u'VM.snapshot', 'id': u'89555c87-9701-4260-9952-789965261e65'} at 0x7fca4004cc90> timeout=60, duration=60 at 0x39d8210> task#=155842 at 0x2240e10> (executor:351)
>>>> 2018-03-25 14:40:15,261+0000 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call VM.getStats failed (error 1) in 0.01 seconds (__init__:539)
>>>> 2018-03-25 14:40:17,471+0000 WARN (jsonrpc/5) [virt.vm] (vmId='a5c761a2-41cd-40c2-b65f-f3819293e8a4') monitor became unresponsive (command timeout, age=67.9100000001) (vm:5132)
>>>>
>>>> engine.log:
>>>> 2018-03-25 14:40:19,875Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler2) [1d737df7] EVENT_ID: VM_NOT_RESPONDING(126), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: VM Data-Server is not responding.
>>>> 2018-03-25 14:42:13,708Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) [17789048-009a-454b-b8ad-2c72c7cd37aa] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: VDSM v1.cluster command SnapshotVDS failed: Message timeout which can be caused by communication issues
>>>> 2018-03-25 14:42:13,708Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (DefaultQuartzScheduler5) [17789048-009a-454b-b8ad-2c72c7cd37aa] Command 'SnapshotVDSCommand(HostName = v1.cluster, SnapshotVDSCommandParameters:{runAsync='true', hostId='a713d988-ee03-4ff0-a0cd-dc4cde1507f4', vmId='a5c761a2-41cd-40c2-b65f-f3819293e8a4'})' execution failed: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues
>>>> 2018-03-25 14:42:13,708Z WARN [org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand] (DefaultQuartzScheduler5) [17789048-009a-454b-b8ad-2c72c7cd37aa] Could not perform live snapshot due to error, VM will still be configured to the new created snapshot: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues (Failed with error VDS_NETWORK_ERROR and code 5022)
>>>> 2018-03-25 14:42:13,708Z WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (org.ovirt.thread.pool-6-thread-15) [17789048-009a-454b-b8ad-2c72c7cd37aa] Host 'v1.cluster' is not responding. It will stay in Connecting state for a grace period of 61 seconds and after that an attempt to fence the host will be issued.
>>>> 2018-03-25 14:42:13,725Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-15) [17789048-009a-454b-b8ad-2c72c7cd37aa] EVENT_ID: VDS_HOST_NOT_RESPONDING_CONNECTING(9,008), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Host v1.cluster is not responding. It will stay in Connecting state for a grace period of 61 seconds and after that an attempt to fence the host will be issued.
URL: From lveyde at redhat.com Thu Mar 29 20:08:48 2018 From: lveyde at redhat.com (Lev Veyde) Date: Thu, 29 Mar 2018 23:08:48 +0300 Subject: [ovirt-users] Re-spin of the 4.2.2 GA Message-ID: Hi, We found an issue [1] in the ovirt-engine, and it was decided that even though we already released the 4.2.2 GA, that the issue is serious enough for the re-release of the GA to be performed. We also rebuilt the appliance, to use this new ovirt-engine (version 4.2.2.6). Users that managed to already install the originally released version, are encouraged to get the latest one. Our apologies for the possible inconvenience. [1] https://bugzilla.redhat.com/show_bug.cgi?id=1560684 Thanks in advance, -- Lev Veyde Software Engineer, RHCE | RHCVA | MCITP Red Hat Israel lev at redhat.com | lveyde at redhat.com TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.vaye at province-sud.nc Thu Mar 29 23:55:43 2018 From: nicolas.vaye at province-sud.nc (Nicolas Vaye) Date: Thu, 29 Mar 2018 23:55:43 +0000 Subject: [ovirt-users] upgrade ovirt HE from 4.2.1-7 to 4.2.2-2 : [ ERROR ] Failed to execute stage 'Environment customization': 'OVEHOSTED_STORAGE/spUUID' References: <1522367211.11257.4.camel@province-sud.nc> Message-ID: <1522367738.11257.6.camel@province-sud.nc> Hi, i have 2 nodes ovirtnode in one cluster with hosted engine in 4.2.1-7 I want to upgrade to 4.2.2, so i'm using this procedure https://www.ovirt.org/documentation/self-hosted/chap-Maintenance_and_Upgrading_Resources/#upgrading-an-el-based-self-hosted-engine-environment The hosted-engine --upgrade-appliance failed with [ ERROR ] Failed to execute stage 'Environment customization': 'OVEHOSTED_STORAGE/spUUID' [root at ov1 ~]# hosted-engine --upgrade-appliance [ INFO ] Stage: Initializing [ INFO ] Stage: Environment setup During customization use CTRL-D to abort. ================================================================================== Welcome to the oVirt Self Hosted Engine setup/Upgrade tool. Please refer to the oVirt install guide: https://www.ovirt.org/documentation/how-to/hosted-engine/#fresh-install Please refer to the oVirt upgrade guide: https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine ================================================================================== Continuing will upgrade the engine VM running on this hosts deploying and configuring a new appliance. If your engine VM is already based on el7 you can also simply upgrade the engine there. This procedure will create a new disk on the hosted-engine storage domain and it will backup there the content of your current engine VM disk. The new el7 based appliance will be deployed over the existing disk destroying its content; at any time you will be able to rollback using the content of the backup disk. You will be asked to take a backup of the running engine and copy it to this host. The engine backup will be automatically injected and recovered on the new appliance. Are you sure you want to continue? 
(Yes, No)[Yes]: Configuration files: [] Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180330101944-vtqn9r.log Version: otopi-1.7.7 (otopi-1.7.7-1.el7.centos) [ INFO ] Detecting available oVirt engine appliances [ INFO ] Stage: Environment packages setup [ INFO ] Stage: Programs detection [ INFO ] Stage: Environment setup [ INFO ] Checking maintenance mode [ INFO ] The engine VM is running on this host [ INFO ] Stage: Environment customization --== STORAGE CONFIGURATION ==-- [ ERROR ] Failed to execute stage 'Environment customization': 'OVEHOSTED_STORAGE/spUUID' [ INFO ] Stage: Clean up [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180330101948.conf' [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination [ ERROR ] Hosted Engine upgrade failed Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180330101944-vtqn9r.log What's wrong ? Regards, Nicolas VAYE in Attached logs.zip file : - ovirt-hosted-engine-setup-20180330101944-vtqn9r.log - answers-20180330101948.conf. - vdsm.log -------------- next part -------------- A non-text attachment was scrubbed... Name: logs.zip Type: application/zip Size: 629965 bytes Desc: logs.zip URL: From sbonazzo at redhat.com Fri Mar 30 06:00:48 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Fri, 30 Mar 2018 08:00:48 +0200 Subject: [ovirt-users] upgrade ovirt HE from 4.2.1-7 to 4.2.2-2 : [ ERROR ] Failed to execute stage 'Environment customization': 'OVEHOSTED_STORAGE/spUUID' In-Reply-To: <1522367738.11257.6.camel@province-sud.nc> References: <1522367211.11257.4.camel@province-sud.nc> <1522367738.11257.6.camel@province-sud.nc> Message-ID: 2018-03-30 1:55 GMT+02:00 Nicolas Vaye : > Hi, > > i have 2 nodes ovirtnode in one cluster with hosted engine in 4.2.1-7 > I want to upgrade to 4.2.2, so i'm using this procedure > https://www.ovirt.org/documentation/self-hosted/chap-Maintenance_and_ > Upgrading_Resources/#upgrading-an-el-based-self-hosted-engine-environment > > > The hosted-engine --upgrade-appliance failed with > [ ERROR ] Failed to execute stage 'Environment customization': > 'OVEHOSTED_STORAGE/spUUID' > > > Hi Nicolas, please note that this procedure was intended only for migrating across different distribution version (el6 -> el7) and not for usual engine upgrades. Our existing documentation would need some rework, but the correct procedure here would be set the hosted engine to global maintenance following https://www.ovirt.org/documentation/self-hosted/chap-Maintenance_and_Upgrading_Resources/#maintaining-the-self-hosted-engine and then perform an engine upgrade following https://www.ovirt.org/documentation/upgrade-guide/chap-Updates_between_Minor_Releases/ > > [root at ov1 ~]# hosted-engine --upgrade-appliance > [ INFO ] Stage: Initializing > [ INFO ] Stage: Environment setup > During customization use CTRL-D to abort. > > ============================================================ > ====================== > Welcome to the oVirt Self Hosted Engine setup/Upgrade tool. > > Please refer to the oVirt install guide: > https://www.ovirt.org/documentation/how-to/hosted- > engine/#fresh-install > Please refer to the oVirt upgrade guide: > https://www.ovirt.org/documentation/how-to/hosted- > engine/#upgrade-hosted-engine > ============================================================ > ====================== > Continuing will upgrade the engine VM running on this hosts > deploying and configuring a new appliance. 
From Bryan.Sockel at mdaemon.com  Fri Mar 30 13:50:00 2018
From: Bryan.Sockel at mdaemon.com (Bryan Sockel)
Date: Fri, 30 Mar 2018 08:50:00 -0500
Subject: [ovirt-users] Moving Templates
Message-ID: <1b6bc924.1d3c82e.81252b2.29@mdaemon.com>

Hi,

We are in the process of re-doing one of our storage domains. As part of
the process I needed to relocate my templates over to a temporary domain.
To do this, I copied each disk from one domain to the other.

In the past I have been able to go into Disks and remove the template disk
from the storage domain I no longer want it on. Now when I go to Storage
-> Disks -> <disk> -> Storage and select the storage domain I wish to
remove it from, the box is grayed out.

Currently running oVirt version 4.2.2.5-1.el7.centos.

Thank You,
Bryan Sockel

From Oliver.Riesener at hs-bremen.de  Fri Mar 30 14:42:16 2018
From: Oliver.Riesener at hs-bremen.de (Oliver Riesener)
Date: Fri, 30 Mar 2018 16:42:16 +0200
Subject: [ovirt-users] UI Filter does not work as expected [tag != Server]
Message-ID: <1BEFADCA-2941-465C-AF6F-EF9677491C20@hs-bremen.de>

Hi,

I have a list of multiple tagged VMs.

In the UI, the filter (Compute >> Virtual Machines >> VMS: [ tag = Server ])
works fine.

The opposite, [tag != Server], lists ALL VMs, not only the expected list
of VMs without the tag Server.
I run ovirt 4.2.2-6 with ovirt-web-ui.noarch in version: 1.3.7 release: 2.el7.centos.

Regards

Olri

From sbonazzo at redhat.com Fri Mar 30 14:54:43 2018
From: sbonazzo at redhat.com (Sandro Bonazzola)
Date: Fri, 30 Mar 2018 16:54:43 +0200
Subject: [ovirt-users] UI Filter does not work as expected [tag != Server]
In-Reply-To: <1BEFADCA-2941-465C-AF6F-EF9677491C20@hs-bremen.de>
References: <1BEFADCA-2941-465C-AF6F-EF9677491C20@hs-bremen.de>
Message-ID:

2018-03-30 16:42 GMT+02:00 Oliver Riesener:

> Hi,
>
> I have a list of multiple tagged VMs.
>
> At the UI the filter (Compute >> Virtual Machines >> VMS: [ tag = Server ]) works fine.
>
> The opposite, [tag != Server], lists ALL VMs, not only the expected list of VMs without the tag Server.
>
> I run ovirt 4.2.2-6 with ovirt-web-ui.noarch in version: 1.3.7 release: 2.el7.centos.

Can you please open an issue on https://github.com/oVirt/ovirt-web-ui/issues if it's not already tracked there?
Thanks,

> Regards
>
> Olri

--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA
sbonazzo at redhat.com

From Oliver.Riesener at hs-bremen.de Fri Mar 30 16:51:02 2018
From: Oliver.Riesener at hs-bremen.de (Oliver Riesener)
Date: Fri, 30 Mar 2018 18:51:02 +0200
Subject: [ovirt-users] UI Filter does not work as expected [tag != Server]
In-Reply-To: References: <1BEFADCA-2941-465C-AF6F-EF9677491C20@hs-bremen.de>
Message-ID: <33459EEA-2017-42EC-AAC9-2E8D9F2D0AD8@hs-bremen.de>

Issue opened at: https://github.com/oVirt/ovirt-web-ui/issues/548

> On 30.03.2018 at 16:54, Sandro Bonazzola wrote:
>
> 2018-03-30 16:42 GMT+02:00 Oliver Riesener:
> Hi,
>
> I have a list of multiple tagged VMs.
>
> At the UI the filter (Compute >> Virtual Machines >> VMS: [ tag = Server ]) works fine.
>
> The opposite, [tag != Server], lists ALL VMs, not only the expected list of VMs without the tag Server.
>
> I run ovirt 4.2.2-6 with ovirt-web-ui.noarch in version: 1.3.7 release: 2.el7.centos.
>
> Can you please open an issue on https://github.com/oVirt/ovirt-web-ui/issues if it's not already tracked there?
> Thanks,
>
> Regards
>
> Olri
>
> --
> SANDRO BONAZZOLA
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
> Red Hat EMEA
> sbonazzo at redhat.com

From budic at onholyground.com Fri Mar 30 17:30:23 2018
From: budic at onholyground.com (Darrell Budic)
Date: Fri, 30 Mar 2018 12:30:23 -0500
Subject: [ovirt-users] Ovirt vm's paused due to storage error
In-Reply-To:
References:
Message-ID:

Found (and caused) my problem. I'd been evaluating different settings for (default settings shown):

cluster.shd-max-threads 1
cluster.shd-wait-qlength 1024

and had forgotten to reset them after testing. I had them at max-threads 8 and qlength 10000. It worked, in that the cluster healed in approximately half the time, and was a total failure, in that my cluster experienced IO pauses and at least one VM abnormal shutdown.

I have 6 core processors in these boxes, and it looks like I just overloaded them to the point that normal IO wasn't getting serviced because the self-heal was getting too much priority. I've reverted to the defaults for these, and things are now behaving normally, no pauses during healing at all.
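In case anyone else needs to put these back, it is roughly the following (a sketch; substitute your own volume name for "data" and repeat per volume):

gluster volume set data cluster.shd-max-threads 1
gluster volume set data cluster.shd-wait-qlength 1024

# or simply return to the built-in defaults:
gluster volume reset data cluster.shd-max-threads
gluster volume reset data cluster.shd-wait-qlength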
Moral of the story is don't forget to undo testing settings when done, and really don't test extreme settings in production! Back to upgrading my test cluster so I can properly abuse things like this.

  -Darrell

> From: Darrell Budic
> Subject: Re: [ovirt-users] Ovirt vm's paused due to storage error
> Date: March 22, 2018 at 1:23:29 PM CDT
> To: users
>
> I've also encountered something similar on my setup, ovirt 3.1.9 with a gluster 3.12.3 storage cluster. All the storage domains in question are set up as gluster volumes & sharded, and I've enabled libgfapi support in the engine. It's happened primarily to VMs that haven't been restarted to switch to gfapi yet (still have fuse mounts for these), but one or two VMs that have been switched to gfapi mounts as well.
>
> I started updating the storage cluster to gluster 3.12.6 yesterday and got more annoying/bad behavior as well. Many VMs that were 'high disk use' VMs experienced hangs, but not as storage related pauses. Instead, they hang and their watchdogs eventually reported CPU hangs. All did eventually resume normal operation, but it was annoying, to be sure. The Ovirt Engine also lost contact with all of my VMs (unknown status, ? in GUI), even though it still had contact with the hosts. My gluster cluster reported no errors, volume status was normal, and all peers and bricks were connected. Didn't see anything in the gluster logs that indicated problems, but there were reports of failed heals that eventually went away.
>
> Seems like something in vdsm and/or libgfapi isn't handling the gfapi mounts well during healing and the related locks, but I can't tell what it is. I've got two more servers in the cluster to upgrade to 3.12.6 yet, and I'll keep an eye on more logs while I'm doing it, will report on it after I get more info.
>
> -Darrell
>
>> From: Sahina Bose
>> Subject: Re: [ovirt-users] Ovirt vm's paused due to storage error
>> Date: March 22, 2018 at 4:56:13 AM CDT
>> To: Endre Karlson
>> Cc: users
>>
>> Can you provide "gluster volume info" and the mount logs of the data volume (I assume that this hosts the vdisks for the VM's with storage error).
>>
>> Also vdsm.log at the corresponding time.
>>
>> On Fri, Mar 16, 2018 at 3:45 AM, Endre Karlson wrote:
>> Hi, this is here again and we are getting several vm's going into storage error in our 4 node cluster running on centos 7.4 with gluster and ovirt 4.2.1.
>>
>> Gluster version: 3.12.6
>>
>> volume status
>> [root at ovirt3 ~]# gluster volume status
>> Status of volume: data
>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick ovirt0:/gluster/brick3/data           49152     0          Y       9102
>> Brick ovirt2:/gluster/brick3/data           49152     0          Y       28063
>> Brick ovirt3:/gluster/brick3/data           49152     0          Y       28379
>> Brick ovirt0:/gluster/brick4/data           49153     0          Y       9111
>> Brick ovirt2:/gluster/brick4/data           49153     0          Y       28069
>> Brick ovirt3:/gluster/brick4/data           49153     0          Y       28388
>> Brick ovirt0:/gluster/brick5/data           49154     0          Y       9120
>> Brick ovirt2:/gluster/brick5/data           49154     0          Y       28075
>> Brick ovirt3:/gluster/brick5/data           49154     0          Y       28397
>> Brick ovirt0:/gluster/brick6/data           49155     0          Y       9129
>> Brick ovirt2:/gluster/brick6_1/data         49155     0          Y       28081
>> Brick ovirt3:/gluster/brick6/data           49155     0          Y       28404
>> Brick ovirt0:/gluster/brick7/data           49156     0          Y       9138
>> Brick ovirt2:/gluster/brick7/data           49156     0          Y       28089
>> Brick ovirt3:/gluster/brick7/data           49156     0          Y       28411
>> Brick ovirt0:/gluster/brick8/data           49157     0          Y       9145
>> Brick ovirt2:/gluster/brick8/data           49157     0          Y       28095
>> Brick ovirt3:/gluster/brick8/data           49157     0          Y       28418
>> Brick ovirt1:/gluster/brick3/data           49152     0          Y       23139
>> Brick ovirt1:/gluster/brick4/data           49153     0          Y       23145
>> Brick ovirt1:/gluster/brick5/data           49154     0          Y       23152
>> Brick ovirt1:/gluster/brick6/data           49155     0          Y       23159
>> Brick ovirt1:/gluster/brick7/data           49156     0          Y       23166
>> Brick ovirt1:/gluster/brick8/data           49157     0          Y       23173
>> Self-heal Daemon on localhost               N/A       N/A        Y       7757
>> Bitrot Daemon on localhost                  N/A       N/A        Y       7766
>> Scrubber Daemon on localhost                N/A       N/A        Y       7785
>> Self-heal Daemon on ovirt2                  N/A       N/A        Y       8205
>> Bitrot Daemon on ovirt2                     N/A       N/A        Y       8216
>> Scrubber Daemon on ovirt2                   N/A       N/A        Y       8227
>> Self-heal Daemon on ovirt0                  N/A       N/A        Y       32665
>> Bitrot Daemon on ovirt0                     N/A       N/A        Y       32674
>> Scrubber Daemon on ovirt0                   N/A       N/A        Y       32712
>> Self-heal Daemon on ovirt1                  N/A       N/A        Y       31759
>> Bitrot Daemon on ovirt1                     N/A       N/A        Y       31768
>> Scrubber Daemon on ovirt1                   N/A       N/A        Y       31790
>>
>> Task Status of Volume data
>> ------------------------------------------------------------------------------
>> Task                 : Rebalance
>> ID                   : 62942ba3-db9e-4604-aa03-4970767f4d67
>> Status               : completed
>>
>> Status of volume: engine
>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick ovirt0:/gluster/brick1/engine         49158     0          Y       9155
>> Brick ovirt2:/gluster/brick1/engine         49158     0          Y       28107
>> Brick ovirt3:/gluster/brick1/engine         49158     0          Y       28427
>> Self-heal Daemon on localhost               N/A       N/A        Y       7757
>> Self-heal Daemon on ovirt1                  N/A       N/A        Y       31759
>> Self-heal Daemon on ovirt0                  N/A       N/A        Y       32665
>> Self-heal Daemon on ovirt2                  N/A       N/A        Y       8205
>>
>> Task Status of Volume engine
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> Status of volume: iso
>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick ovirt0:/gluster/brick2/iso            49159     0          Y       9164
>> Brick ovirt2:/gluster/brick2/iso            49159     0          Y       28116
>> Brick ovirt3:/gluster/brick2/iso            49159     0          Y       28436
>> NFS Server on localhost                     2049      0          Y       7746
>> Self-heal Daemon on localhost               N/A       N/A        Y       7757
>> NFS Server on ovirt1                        2049      0          Y       31748
>> Self-heal Daemon on ovirt1                  N/A       N/A        Y       31759
>> NFS Server on ovirt0                        2049      0          Y       32656
>> Self-heal Daemon on ovirt0                  N/A       N/A        Y       32665
>> NFS Server on ovirt2                        2049      0          Y       8194
>> Self-heal Daemon on ovirt2                  N/A       N/A        Y       8205
>>
>> Task Status of Volume iso
>> ------------------------------------------------------------------------------
>> There are no active volume tasks

From Oliver.Riesener at hs-bremen.de Fri Mar 30 19:06:55 2018
From: Oliver.Riesener at hs-bremen.de (Oliver Riesener)
Date: Fri, 30 Mar 2018 21:06:55 +0200
Subject: [ovirt-users] Issue adding network interface to VM failed with HotPlugNicVDS
Message-ID: <9D829BFC-E203-47DF-B15A-6777CC169E01@hs-bremen.de>

Hi,

running ovirt 4.2.2-6 with firewalld enabled.

Failed to HotPlugNicVDS, error = The name org.fedoraproject.FirewallD1 was not provided by any .service files, code = 49

Can't hot plug any new network interfaces.

30d6c2ab', vmId='20abce62-a558-4aee-b3e3-3fa70f1d1918'}', device='bridge', type='INTERFACE', specParams='[inbound={}, outbound={}]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}'})' execution failed: VDSGenericException: VDSErrorException: Failed to HotPlugNicVDS, error = The name org.fedoraproject.FirewallD1 was not provided by any .service files, code = 49

2018-03-30 20:56:08,620+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugNicVDSCommand] (default task-106) [e732710] FINISH, HotPlugNicVDSCommand, log id: 210cb07
2018-03-30 20:56:08,620+02 ERROR [org.ovirt.engine.core.bll.network.vm.ActivateDeactivateVmNicCommand] (default task-106) [e732710] Command 'org.ovirt.engine.core.bll.network.vm.ActivateDeactivateVmNicCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to HotPlugNicVDS, error = The name org.fedoraproject.FirewallD1 was not provided by any .service files, code = 49 (Failed with error ACTIVATE_NIC_FAILED and code 49)
2018-03-30 20:56:08,627+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-106) [e732710] EVENT_ID: NETWORK_ACTIVATE_VM_INTERFACE_FAILURE(1,013), Failed to plug Network Interface nic3 (VirtIO) to VM v-srv-opt. (User: admin at internal)
2018-03-30 20:56:08,629+02 INFO [org.ovirt.engine.core.bll.CommandCompensator] (default task-106) [e732710] Command [id=0db21508-1eeb-40f5-912e-58af9bb3fa9b]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.VmDevice; snapshot: VmDeviceId:{deviceId='6d7a5b68-0eb3-4531-bc06-3aff30d6c2ab', vmId='20abce62-a558-4aee-b3e3-3fa70f1d1918'}.
2018-03-30 20:56:08,630+02 INFO [org.ovirt.engine.core.bll.CommandCompensator] (default task-106) [e732710] Command [id=0db21508-1eeb-40f5-912e-58af9bb3fa9b]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.network.VmNetworkStatistics; snapshot: 6d7a5b68-0eb3-4531-bc06-3aff30d6c2ab.
2018-03-30 20:56:08,631+02 INFO [org.ovirt.engine.core.bll.CommandCompensator] (default task-106) [e732710] Command [id=0db21508-1eeb-40f5-912e-58af9bb3fa9b]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.network.VmNetworkInterface; snapshot: 6d7a5b68-0eb3-4531-bc06-3aff30d6c2ab.
2018-03-30 20:56:08,638+02 INFO [org.ovirt.engine.core.bll.CommandCompensator] (default task-106) [e732710] Command [id=0db21508-1eeb-40f5-912e-58af9bb3fa9b]: Compensating DELETED_OR_UPDATED_ENTITY of org.ovirt.engine.core.common.businessentities.VmStatic; snapshot: id=20abce62-a558-4aee-b3e3-3fa70f1d1918.
2018-03-30 20:56:08,642+02 INFO [org.ovirt.engine.core.bll.CommandCompensator] (default task-106) [e732710] Command [id=0db21508-1eeb-40f5-912e-58af9bb3fa9b]: Compensating TRANSIENT_ENTITY of org.ovirt.engine.core.common.businessentities.ReleaseMacsTransientCompensation; snapshot: org.ovirt.engine.core.common.businessentities.ReleaseMacsTransientCompensation at 581c941c.
2018-03-30 20:56:08,720+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-106) [e732710] EVENT_ID: NETWORK_ADD_VM_INTERFACE_FAILED(933), Failed to add Interface nic3 (VirtIO) to VM v-srv-opt. (User: admin at internal)
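As far as I can tell, this error means the org.fedoraproject.FirewallD1 name was simply not on the system bus, i.e. vdsm could not reach firewalld over D-Bus at that moment. What I am going to check next on the host, a quick sketch (plain systemd commands, nothing oVirt-specific):

systemctl status dbus firewalld
systemctl restart firewalld
journalctl -u firewalld -e    # look for why the D-Bus name went away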
From ykaul at redhat.com Fri Mar 30 20:15:36 2018
From: ykaul at redhat.com (Yaniv Kaul)
Date: Fri, 30 Mar 2018 20:15:36 +0000
Subject: [ovirt-users] Host affinity rule
In-Reply-To:
References:
Message-ID:

On Thu, Mar 29, 2018, 4:14 AM Colin Coe wrote:

> Hi all
>
> I suspect one of our hypervisors is faulty but at this stage I can't prove it.
>
> We're running RHV 4.1.7 (about to upgrade to v4.1.10 in a few days).
>
> I'm planning on creating a negative host affinity rule to prevent all current existing VMs from running on the suspect host. Afterwards I'll create a couple of test VMs and put them in a positive host affinity rule so they only run on the suspect host.
>
> There are about 150 existing VMs, are there any known problems with host affinity rules and putting 150 or so VMs in the group?
>
> This is production so I need to be careful.

Why not move it to maintenance, then create a new test cluster for it and use it for testing?
Y.

> Thanks

From ykaul at redhat.com Fri Mar 30 20:42:40 2018
From: ykaul at redhat.com (Yaniv Kaul)
Date: Fri, 30 Mar 2018 20:42:40 +0000
Subject: [ovirt-users] Which hardware are you using for oVirt
In-Reply-To:
References: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com>
Message-ID:

On Mon, Mar 26, 2018, 7:04 PM Christopher Cox wrote:

> On 03/24/2018 03:33 AM, Andy Michielsen wrote:
> > Hi all,
> >
> > Not sure if this is the place to be asking this but I was wondering which hardware you all are using and why in order for me to see what I would be needing.
> >
> > I would like to set up a HA cluster consisting of 3 hosts to be able to run 30 VMs.
> > The engine, I can run on another server. The hosts can be fitted with the storage and share the space through glusterfs. I would think I will be needing at least 3 NICs but would be able to install ovn. (Are 1Gb NICs sufficient?)
>
> Just because you asked, but not because this is helpful to you....
>
> But first, a comment on "3 hosts to be able to run 30 VMs". The SPM node shouldn't run a lot of VMs.
> There are settings (the setting slips my mind) on the engine to give it a "virtual set" of VMs in order to keep VMs off of it.
>
> With that said, CPU wise, it doesn't require a lot to run 30 VM's. The costly thing is memory (in general). So while a cheap set of 3 machines might handle the CPU requirements of 30 VM's, those cheap machines might not be able to give you the memory you need (depends). You might be fine. I mean, there are cheap desktop like machines that do 64G (and sometimes more). Just something to keep in mind. Memory and storage will be the most costly items. It's simple math. Linux hosts, of course, don't necessarily need much memory (or storage). But Windows...
>
> 1Gbit NIC's are "ok", but again, depends on storage. Glusterfs is no speed demon. But you might not need "fast" storage.
>
> Lastly, your setup is just for "fun", right? Otherwise, read on.
>
> Running oVirt 3.6 (this is a production setup)
>
> ovirt engine (manager):
> Dell PowerEdge 430, 32G
>
> ovirt cluster nodes:
> Dell m1000e 1.1 backplane Blade Enclosure
> 9 x M630 Blades (2xE5-2669v3, 384GB), 4 iSCSI paths, 4 bonded LAN, all 10GbE, CentOS 7.2
> 4 x MXL 10/40GbE (2x40Gbit LAN, 2x40Gbit iSCSI SAN to the S4810's)
>
> 120 VM's, CentOS 6, CentOS 7, Windows 10 Ent., Windows Server 2012
> We've run on as few as 3 nodes.
>
> Network, SAN and Storage (for ovirt Domains):
> 2 x S4810 (part is used for SAN, part for LAN)
> Equallogic dual controller (note: passive/active) PS6610S (84 x 4TB 7.2K SAS)
> Equallogic dual controller (note: passive/active) PS6610X (84 x 1TB 10K SAS)
>
> ISO and Export Domains are handled by:
> Dell PE R620, 32G, 2x10Gbit LAN, 2x10Gbit iSCSI to the SAN (above), CentOS 7.4, NFS
>
> What I like:
> * Easy setup.
> * Relatively good network and storage.
>
> What I don't like:
> * 2 "effective" networks, LAN and iSCSI. All networking uses the same effective path. Would be nice to have more physical isolation for mgmt vs motion vs VMs. QoS is provided in oVirt, but still, would be nice to have the full pathways.
> * Storage doesn't use active/active controllers, so controller failover is VERY slow.
> * We have a fast storage system, and somewhat slower storage system (matter of IOPS), neither is SSD, so there isn't a huge difference. No real redundancy or flexibility.
> * vdsm can no longer respond fast enough for the amount of disks defined (in the event of a new Storage Domain add). We have raised vdsTimeout, but have not tested yet.

We have substantially changed and improved VDSM for better scale since 3.6. How many disks are defined, in how many storage domains and LUNs? (also the OS itself has improved).

> I inherited the "style" above. My recommendation of where to start for a reasonable production instance, minimum (assumes the S4810's above, not priced here):
>
> 1 x ovirt manager/engine, approx $1500

What about high availability for the engine?

> 4 x Dell R620, 2xE5-2660, 768G, 6x10GbE (LAN, Storage, Motion), approx $42K
> 3 x Nexsan 18P 108TB, approx $96K

Alternatively, how many reasonable SSDs can you buy? Samsung 860 EVO, 4TB costs in Amazon (US) $1300. You could buy tens (70+) of those and be left with some change. Can you instead use them in a fast storage setup? https://www.backblaze.com/blog/open-source-data-storage-server/ for example is interesting.

> While significantly cheaper (by 6 figures), it provides active/active controllers, storage reliability and flexibility and better network pathways. Why 4 x nodes?
> Need at least N+1 for reliability. The extra 4th node is merely capacity. Why 3 x storage? Need at least N+1 for reliability.

Are they running in some cluster?

> Obviously, you'll still want to back things up and test the ability to restore components like the ovirt engine from scratch.

+1.
Y.

> Btw, my recommended minimum above is regardless of hypervisor cluster choice (could be VMware).

From colin.coe at gmail.com Fri Mar 30 23:25:08 2018
From: colin.coe at gmail.com (Colin Coe)
Date: Fri, 30 Mar 2018 23:25:08 +0000
Subject: [ovirt-users] Host affinity rule
In-Reply-To:
References:
Message-ID:

Thank you, that's a much better idea.

On Sat, Mar 31, 2018, 4:15 AM Yaniv Kaul wrote:

> On Thu, Mar 29, 2018, 4:14 AM Colin Coe wrote:
>
>> Hi all
>>
>> I suspect one of our hypervisors is faulty but at this stage I can't prove it.
>>
>> We're running RHV 4.1.7 (about to upgrade to v4.1.10 in a few days).
>>
>> I'm planning on creating a negative host affinity rule to prevent all current existing VMs from running on the suspect host. Afterwards I'll create a couple of test VMs and put them in a positive host affinity rule so they only run on the suspect host.
>>
>> There are about 150 existing VMs, are there any known problems with host affinity rules and putting 150 or so VMs in the group?
>>
>> This is production so I need to be careful.
>
> Why not move it to maintenance, then create a new test cluster for it and use it for testing?
> Y.
>
>> Thanks

From wattersm at watters.ws Sat Mar 31 19:20:10 2018
From: wattersm at watters.ws (Michael Watters)
Date: Sat, 31 Mar 2018 15:20:10 -0400
Subject: [ovirt-users] Which hardware are you using for oVirt
In-Reply-To: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com>
References: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com>
Message-ID: <1d867092-a97e-6e30-400a-686a46c64489@watters.ws>

We run Dell Poweredge R720s and R730s with 32 GB of RAM and quad Xeon processors. Storage is provided by Dell MD3800i and Promise arrays using iSCSI. The network is all 10 gigabit interfaces using 802.3ad bonds. We actually just upgraded from 1 gigabit NICs since there were some performance issues with storage causing high IOwait on VMs. I'd recommend avoiding 1 gigabit if you can.

On 3/24/18 4:33 AM, Andy Michielsen wrote:
> Hi all,
>
> Not sure if this is the place to be asking this but I was wondering which hardware you all are using and why in order for me to see what I would be needing.
>
> I would like to set up a HA cluster consisting of 3 hosts to be able to run 30 VMs.
> The engine, I can run on another server. The hosts can be fitted with the storage and share the space through glusterfs. I would think I will be needing at least 3 NICs but would be able to install ovn. (Are 1Gb NICs sufficient?)
>
> Any input you guys would like to share would be greatly appreciated.
>
> Thanks,

From vincent at epicenergy.ca Sat Mar 31 20:10:00 2018
From: vincent at epicenergy.ca (Vincent Royer)
Date: Sat, 31 Mar 2018 20:10:00 +0000
Subject: [ovirt-users] Which hardware are you using for oVirt
In-Reply-To: <1d867092-a97e-6e30-400a-686a46c64489@watters.ws>
References: <815987B5-31DA-4316-809D-A03363A1E3C3@gmail.com> <1d867092-a97e-6e30-400a-686a46c64489@watters.ws>
Message-ID:

Lots of great hardware knowledge in this thread! I'm also making the move to 10Gb. I'm adding a 3rd host to my deployment and moving to Glusterfs on 3 nodes, from my current NFS share on a separate storage server. Each of the 3 nodes has dual E5-2640 V4s with 128Gb RAM.

I have some hardware choices I would love some advice about:

- Should I use Intel X520-DA2 or X710-DA2 NICs for the storage network? No significant price difference. The hosts are running oVirt Node 4.2. I hope to use them in bridge mode so that I don't need a 10GbE switch. I do have a single 10GbE port left on my router.

- The hosts have 12Gbps 520i SAS cards; should I spec 6 or 12 Gbps SSD drives? Here there is a large price difference, also a large difference between Enterprise performance, Enterprise mainstream, and Enterprise entry. I'm not sure how to estimate the value of those different options in a Glusterfs deployment. The workload is pretty I/O intensive with fairly small read/write operations (under 128Kb) on Windows VMs.

Any obvious weak links with this plan?

On Sat, Mar 31, 2018, 12:27 PM Michael Watters wrote:

> We run Dell Poweredge R720s and R730s with 32 GB of RAM and quad Xeon processors. Storage is provided by Dell MD3800i and Promise arrays using iSCSI. The network is all 10 gigabit interfaces using 802.3ad bonds. We actually just upgraded from 1 gigabit NICs since there were some performance issues with storage causing high IOwait on VMs. I'd recommend avoiding 1 gigabit if you can.
>
> On 3/24/18 4:33 AM, Andy Michielsen wrote:
> > Hi all,
> >
> > Not sure if this is the place to be asking this but I was wondering which hardware you all are using and why in order for me to see what I would be needing.
> >
> > I would like to set up a HA cluster consisting of 3 hosts to be able to run 30 VMs.
> > The engine, I can run on another server. The hosts can be fitted with the storage and share the space through glusterfs. I would think I will be needing at least 3 NICs but would be able to install ovn. (Are 1Gb NICs sufficient?)
> >
> > Any input you guys would like to share would be greatly appreciated.
> >
> > Thanks,

From cryptic5000 at hotmail.com Sat Mar 31 08:55:30 2018
From: cryptic5000 at hotmail.com (Cryptic)
Date: Sat, 31 Mar 2018 08:55:30 +0000
Subject: [ovirt-users] Data Operations On Any Host
Message-ID:

Hi,

In relation to the change made to distribute data operations between all the hosts in a data center rather than burden the SPM: I am having trouble finding information on this and need assistance to prevent this happening on my development oVirt 4.2 system.
The issue I have is that I have a cluster which hosts all the storage volumes using gluster, and its hosts have 10G NICs. I also have a separate cluster which is virtualisation only, where each host only has 3 x 1G aggregated NICs. When I perform disk moves between storage domains, it often uses one of the virtualisation hosts, which drastically increases the time taken to move the disk.

Can I restrict these types of operations to a set of hosts, or turn it off altogether so that it just uses the SPM like it used to in the past? Distributing it is a great feature but unfortunately is no good in my current setup.

Regards,
Jeremy

From tbaror at gmail.com Sun Mar 25 14:43:36 2018
From: tbaror at gmail.com (Tal Bar-Or)
Date: Sun, 25 Mar 2018 17:43:36 +0300
Subject: [ovirt-users] Ovirt nodes NFS connection
In-Reply-To:
References:
Message-ID:

Thanks all for your answers, it's clearer now.

On Thu, Mar 22, 2018 at 7:24 PM, FERNANDO FREDIANI <fernando.frediani at upx.com> wrote:

> Hello Tal
>
> It seems you have quite an overkill on your environment. I would say that normally 2 x 10Gb interfaces can do A LOT for nodes with proper redundancy. Just by creating Vlans you can separate traffic and apply, if necessary, QoS per Vlan to decide which one has more priority.
>
> If you have 2 x 10Gb in a LACP 802.3ad Aggregation, in theory you can do 20Gbps of aggregated traffic. If you have 10Gb of constant storage traffic it is already huge, so I normally consider that Storage will not go over a few Gbps and VMs another few Gb, which fit perfectly within even 10Gb.
>
> The only exception I would make is if you have very intensive traffic (and I am not talking about IOPS, but throughput) from your storage; then it may be worth having 2 x 10Gb for Storage and 2 x 10Gb for all other networks (Management, VM Traffic, Migration (with cap on traffic), etc.).
>
> Regards
> Fernando
>
> 2018-03-21 16:41 GMT-03:00 Yaniv Kaul:
>
>> On Wed, Mar 21, 2018 at 12:41 PM, Tal Bar-Or wrote:
>>
>>> Hello All,
>>>
>>> I am about to deploy a new Ovirt platform. The platform will consist of 4 Ovirt nodes including management; all server nodes and storage will have the following config:
>>>
>>> *nodes server*
>>> 4x10G ports network cards
>>> 2x10G will be used for VM network.
>>> 2x10G will be used for storage connection
>>> 2x1Ge 1xGe for nodes management
>>>
>>> *Storage*
>>> 4x10G ports network cards
>>> 3 x10G for NFS storage mount Ovirt nodes
>>>
>>> Now, given the above network configuration layout, what is the suggested best practice for the nodes' storage NFS connection, in terms of throughput and path resilience?
>>> First option: each node 2x10G LACP, and on the storage side 3x10G LACP?
>>
>> I'm not sure how you'd get more throughput than you can get in a single physical link. You will get redundancy.
>>
>> Of course, on the storage side you might benefit from multiple bonded interfaces.
>>
>>> The second option: create 3 VLANs, assign each node to those 3 VLANs across 2 NICs, and on the storage side assign 3 NICs across the 3 VLANs?
>>
>> Interesting - but I assume it'll still stick to a single physical link.
>> Y.
>>> Thanks
>>>
>>> --
>>> Tal Bar-or

--
Tal Bar-or

From tbaror at gmail.com Sun Mar 25 14:54:35 2018
From: tbaror at gmail.com (Tal Bar-Or)
Date: Sun, 25 Mar 2018 17:54:35 +0300
Subject: [ovirt-users] Issues with ZFS volume creation
Message-ID:

Hello All,

I know this question might be out of Ovirt's scope, but I don't have anywhere else to ask about this issue (the ZFS users mailing list doesn't work), so I am trying my luck here anyway. The issue goes as follows:

I installed ZFS on top of CentOS 7.4 with Ovirt 4.2, on a physical Dell R720 with 15 SAS 10k 1.2TB disks attached to a PERC H310 adapter; the disks are configured as non-RAID. All went OK, but when I try to create a new zfs pool using the following command:

> zpool create -m none -o ashift=12 zvol raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm

I get the following error below:

> /dev/sda is in use and contains a unknown filesystem.
> /dev/sdb is in use and contains a unknown filesystem.
> /dev/sdc is in use and contains a unknown filesystem.
> /dev/sdd is in use and contains a unknown filesystem.
> /dev/sde is in use and contains a unknown filesystem.
> /dev/sdf is in use and contains a unknown filesystem.
> /dev/sdg is in use and contains a unknown filesystem.
> /dev/sdh is in use and contains a unknown filesystem.
> /dev/sdi is in use and contains a unknown filesystem.
> /dev/sdj is in use and contains a unknown filesystem.
> /dev/sdk is in use and contains a unknown filesystem.
> /dev/sdl is in use and contains a unknown filesystem.
> /dev/sdm is in use and contains a unknown filesystem.

When typing the command lsblk I get the following output below; all seems OK. Any idea what could be wrong? Please advise.
Thanks

# lsblk
> NAME                    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
> sda                       8:0    0  1.1T  0 disk
> └─35000cca07245c0ec     253:2    0  1.1T  0 mpath
> sdb                       8:16   0  1.1T  0 disk
> └─35000cca072463898     253:10   0  1.1T  0 mpath
> sdc                       8:32   0  1.1T  0 disk
> └─35000cca0724540e8     253:8    0  1.1T  0 mpath
> sdd                       8:48   0  1.1T  0 disk
> └─35000cca072451b68     253:7    0  1.1T  0 mpath
> sde                       8:64   0  1.1T  0 disk
> └─35000cca07245f578     253:3    0  1.1T  0 mpath
> sdf                       8:80   0  1.1T  0 disk
> └─35000cca07246c568     253:11   0  1.1T  0 mpath
> sdg                       8:96   0  1.1T  0 disk
> └─35000cca0724620c8     253:12   0  1.1T  0 mpath
> sdh                       8:112  0  1.1T  0 disk
> └─35000cca07245d2b8     253:13   0  1.1T  0 mpath
> sdi                       8:128  0  1.1T  0 disk
> └─35000cca07245f0e8     253:4    0  1.1T  0 mpath
> sdj                       8:144  0  1.1T  0 disk
> └─35000cca072418958     253:5    0  1.1T  0 mpath
> sdk                       8:160  0  1.1T  0 disk
> └─35000cca072429700     253:1    0  1.1T  0 mpath
> sdl                       8:176  0  1.1T  0 disk
> └─35000cca07245d848     253:9    0  1.1T  0 mpath
> sdm                       8:192  0  1.1T  0 disk
> └─35000cca0724625a8     253:0    0  1.1T  0 mpath
> sdn                       8:208  0  1.1T  0 disk
> └─35000cca07245f5ac     253:6    0  1.1T  0 mpath

--
Tal Bar-or

From tomkcpr at mdevsys.com Tue Mar 27 07:03:37 2018
From: tomkcpr at mdevsys.com (TomK)
Date: Tue, 27 Mar 2018 03:03:37 -0400
Subject: [ovirt-users] [graph.y:363:graphyyerror] 0-parser: syntax error: line 19 (volume 'management'): "cluster.server-quorum-type:", allowed tokens are 'volume', 'type', 'subvolumes', 'option', 'end-volume'()
Message-ID:

Hey All,

Wondering if anyone has seen this happen and can provide some hints. After numerous failed attempts to add a physical host to an oVirt VM engine that already had a gluster volume, I get these errors and I'm unable to start up gluster anymore:

[2018-03-27 07:01:37.511304] E [MSGID: 101021] [graph.y:363:graphyyerror] 0-parser: syntax error: line 19 (volume 'management'): "cluster.server-quorum-type:" allowed tokens are 'volume', 'type', 'subvolumes', 'option', 'end-volume'()
[2018-03-27 07:01:37.511597] E [MSGID: 100026] [glusterfsd.c:2403:glusterfs_process_volfp] 0-: failed to construct the graph
[2018-03-27 07:01:37.511791] E [graph.c:1102:glusterfs_graph_destroy] (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x55f06827d0cd] -->/usr/sbin/glusterd(glusterfs_process_volfp+0x150) [0x55f06827cf60] -->/lib64/libglusterfs.so.0(glusterfs_graph_destroy+0x84) [0x7f519a816c64] ) 0-graph: invalid argument: graph [Invalid argument]
[2018-03-27 07:01:37.511839] W [glusterfsd.c:1393:cleanup_and_exit] (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x55f06827d0cd] -->/usr/sbin/glusterd(glusterfs_process_volfp+0x163) [0x55f06827cf73] -->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x55f06827c49b] ) 0-: received signum (-1), shutting down
[2018-03-27 07:02:52.223358] I [MSGID: 100030] [glusterfsd.c:2556:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.13.2 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2018-03-27 07:02:52.229816] E [MSGID: 101021] [graph.y:363:graphyyerror] 0-parser: syntax error: line 19 (volume 'management'): "cluster.server-quorum-type:" allowed tokens are 'volume', 'type', 'subvolumes', 'option', 'end-volume'()
[2018-03-27 07:02:52.230125] E [MSGID: 100026] [glusterfsd.c:2403:glusterfs_process_volfp] 0-: failed to construct the graph
[2018-03-27 07:02:52.230320] E [graph.c:1102:glusterfs_graph_destroy] (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x55832612b0cd] -->/usr/sbin/glusterd(glusterfs_process_volfp+0x150) [0x55832612af60] -->/lib64/libglusterfs.so.0(glusterfs_graph_destroy+0x84) [0x7f9a1ded4c64] ) 0-graph: invalid argument: graph [Invalid argument]
[2018-03-27 07:02:52.230369] W [glusterfsd.c:1393:cleanup_and_exit] (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x55832612b0cd] -->/usr/sbin/glusterd(glusterfs_process_volfp+0x163) [0x55832612af73] -->/usr/sbin/glusterd(cleanup_and_exit+0x6b) [0x55832612a49b] ) 0-: received signum (-1), shutting down
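Judging from the allowed tokens in that parser error, glusterd only accepts lines of the form "option <key> <value>" inside a volume block, so line 19 of the vol file apparently ended up as a bare "cluster.server-quorum-type:" key/value line instead. For comparison, a well-formed stanza would look roughly like this (a sketch; the surrounding option lines are hypothetical, only the option syntax matters here):

volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option cluster.server-quorum-type server
end-volume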
--
Cheers,
Tom K.
-------------------------------------------------------------------------------------
Living on earth is expensive, but it includes a free trip around the sun.

From tomkcpr at mdevsys.com Wed Mar 28 06:17:03 2018
From: tomkcpr at mdevsys.com (TomK)
Date: Wed, 28 Mar 2018 02:17:03 -0400
Subject: [ovirt-users] ILO2 Fencing
Message-ID: <1a85fde6-0627-058b-6224-616521a2a1a4@mdevsys.com>

Hey Guys,

I've tested my ILO2 fence from the ovirt engine CLI and that works:

fence_ilo2 -a 192.168.0.37 -l --password="" --ssl-insecure --tls1.0 -v -o status

The UI gives me:

Test failed: Failed to run fence status-check on host 'ph-host01.my.dom'. No other host was available to serve as proxy for the operation.
Going to add a second host in a bit, but is there any way to get this working with just one host? I'm just adding the one host to oVirt for some POC we are doing atm, but the UI forces me to adjust Power Management settings before proceeding.

Also:

2018-03-28 02:04:15,183-04 WARN [org.ovirt.engine.core.bll.network.NetworkConfigurator] (EE-ManagedThreadFactory-engine-Thread-335) [2d691be9] Failed to find a valid interface for the management network of host ph-host01.my.dom. If the interface br0 is a bridge, it should be torn-down manually.
2018-03-28 02:04:15,184-04 ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-335) [2d691be9] Exception: org.ovirt.engine.core.bll.network.NetworkConfigurator$NetworkConfiguratorException: Interface br0 is invalid for management network

I have these defined as follows, but it's not clear what it is expecting:

[root at ph-host01 ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
3: eth1: mtu 1500 qdisc mq master bond0 state DOWN qlen 1000
    link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
4: eth2: mtu 1500 qdisc mq master bond0 state DOWN qlen 1000
    link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
5: eth3: mtu 1500 qdisc mq master bond0 state DOWN qlen 1000
    link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
21: bond0: mtu 1500 qdisc noqueue master br0 state UP qlen 1000
    link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7ae7:d1ff:fe8c:b1ba/64 scope link
       valid_lft forever preferred_lft forever
23: ;vdsmdummy;: mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether fe:69:c7:50:0d:dd brd ff:ff:ff:ff:ff:ff
24: br0: mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.39/23 brd 192.168.1.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::7ae7:d1ff:fe8c:b1ba/64 scope link
       valid_lft forever preferred_lft forever

[root at ph-host01 ~]# cd /etc/sysconfig/network-scripts/
[root at ph-host01 network-scripts]# cat ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=none
IPADDR=192.168.0.39
NETMASK=255.255.254.0
GATEWAY=192.168.0.1
ONBOOT=yes
DELAY=0
USERCTL=no
DEFROUTE=yes
NM_CONTROLLED=no
DOMAIN="my.dom nix.my.dom"
SEARCH="my.dom nix.my.dom"
HOSTNAME=ph-host01.my.dom
DNS1=192.168.0.224
DNS2=192.168.0.44
DNS3=192.168.0.45
ZONE=public

[root at ph-host01 network-scripts]# cat ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
NM_CONTROLLED=no
BONDING_OPTS="miimon=100 mode=2"
BRIDGE=br0
#
# IPADDR=192.168.0.39
# NETMASK=255.255.254.0
# GATEWAY=192.168.0.1
# DNS1=192.168.0.1
[root at ph-host01 network-scripts]#

--
Cheers,
Tom K.
-------------------------------------------------------------------------------------
Living on earth is expensive, but it includes a free trip around the sun.

From joe.paolicelli at keysight.com Thu Mar 29 15:00:00 2018
From: joe.paolicelli at keysight.com (joe.paolicelli at keysight.com)
Date: Thu, 29 Mar 2018 15:00:00 +0000
Subject: [ovirt-users] SR-IOV and oVirt
Message-ID:

I am working with a customer on enabling SR-IOV within oVirt and we're noticing a couple of issues.

1. Whenever we assign the number of VFs to a physical adapter in one of our hosts, it seems to set the MAC addresses of each of the VFs to something other than all zeros. Ex. 02:00:00:00:00:01

2. The above behavior seems to create duplicate MAC addresses when we assign 2 or more VFs to a guest VM. All zeros will tell the guest VM that it needs to set the MAC. If the guest VM sees something other than all zeros, it will think that it was administratively assigned already and leave it as is.

3. We were expecting oVirt to set all of the MAC addresses of the VFs initially to all zeros. Then when we assign these VFs to the guest VM, the guest VM will assign a unique MAC to each of the VFs.

4. Please note that we are assigning the VF to the guest VM by adding a Host Device (the specific PCI host device for the VF). This seems to be different than your docs, which show adding a Network Interface with type PCI Passthrough.

5. If we manually run the following command from an ssh session:

echo 4 > /sys/class/net/ens4f0/device/sriov_numvfs

it will set all of the VFs' MAC addresses to all zeros. Then when we assign the PCI host device to the guest VM through oVirt, it creates unique MACs for both vnics. However, when we reboot the host, it seems to revert back to the oVirt-assigned MACs of 02:00:00:00:00:01.

Do you know why this might be happening? Should we be assigning the VFs to the guest VM by adding a network interface with type PCI Passthrough?

Ultimately our goal is to enable SR-IOV within oVirt and be able to assign multiple VFs to the guest VMs with each getting a unique MAC. We also want to do the VLAN tagging via an application running on the guest VM (not at the host level).
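For reference, we have been checking the VF MAC assignments straight from the PF with sysfs and iproute2; a minimal sketch (the interface name and VF count are from our lab, nothing oVirt-specific):

# how many VFs are currently enabled on this PF
cat /sys/class/net/ens4f0/device/sriov_numvfs

# list each VF and the MAC it currently carries
# (prints lines like "vf 0 MAC 02:00:00:00:00:01, spoof checking on, ...")
ip link show ens4f0

This is how we noticed the 02:00:00:00:00:01-style addresses coming back after a host reboot.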
Thank you for any help,
jp

Joe Paolicelli (JP)
Virtualization Specialist, Ixia Solutions Group
Keysight Technologies
e: jp at keysight.com
t: 469.556.6042
www.ixiacom.com