[Users] Addition of nodes to oVirt-engine
by jose garcia
Good evening to all,
I have added two Fedora 17 nodes with vdsm 4.10 installed to an oVirt
3.1 engine. I was having problems
with SSL, so I have disabled it. When I added the nodes there was no
installation attempt, as there was
with oVirt 3.0, but the nodes get activated, provided that the
ovirtmgmt bridge is present.
I have been told that there is a configuration in the database that makes
this happen. I just recreated the
database via the script /dbscripts/create_db_devel.sh and ran
engine-setup, after removing all oVirt 3.0 and jboss packages and
installing the oVirt 3.1 basic packages.
My question is: What would be the 'standard' procedure to get oVirt 3.1
running?
Regards,
Jose Garcia
12 years, 5 months
Re: [Users] BSTRAP component='CreateConf' status='FAIL' message='Basic configuration failed to import default values'
by Michal Skrivanek
On Jun 28, 2012, at 15:55 , Karli Sjöberg wrote:
>
> 28 jun 2012 kl. 15.45 skrev Michal Skrivanek:
>
>> Hi,
>> well, I'd check whatever the error is saying…i.e. do you have libjpeg installed?
>>
>> 2012-06-28 12:25:53,223 ERROR [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='libjpeg' message='package libjpeg is not installed '/>
>
> Yes, and also earlier in the log is:
>> <BSTRAP component='VDS PACKAGES' status='WARN' result='libjpeg' message='package libjpeg is not installed '/>
>
> But it is installed:
> # rpm -qa | grep libjpeg
> libjpeg-turbo-1.2.0-1.fc17.x86_64
libjpeg is not libjpeg-turbo
It seems libjpeg-turbo is the default libjpeg library in Fedora 17 now…
The bootstrap code would need a fix, I suppose...
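A sketch of what such a fix could look like — this is illustrative only, not the actual engine or bootstrap code: parse the BSTRAP status attributes, and accept any installed package that Provides the required capability, so that libjpeg-turbo (which, assuming Fedora's packaging here, declares "Provides: libjpeg") would satisfy a check for libjpeg where a name-based `rpm -q libjpeg` fails.

```python
import xml.etree.ElementTree as ET

def parse_bstrap(line):
    """Parse one <BSTRAP .../> status line into a dict of its attributes."""
    return dict(ET.fromstring(line).attrib)

def package_check_ok(required, provides):
    # `provides` would be built on the host from something like
    # `rpm -q --whatprovides <name>`; a capability-based check passes
    # for libjpeg-turbo where a name-based query does not.
    return required in provides

warn = parse_bstrap("<BSTRAP component='VDS PACKAGES' status='WARN' "
                    "result='libjpeg' message='package libjpeg is not installed '/>")
```

With a capability set containing libjpeg-turbo's Provides, `package_check_ok('libjpeg', ...)` would return True and the WARN/FAIL would not be emitted.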
> # yum install -y libjpeg
> Loaded plugins: langpacks, presto, refresh-packagekit, versionlock
> Package libjpeg-turbo-1.2.0-1.fc17.x86_64 already installed and latest version
> Nothing to do
>
>
>>
>> Thanks,
>> michal
>>
>> On Jun 28, 2012, at 12:32 , Karli Sjöberg wrote:
>>
>>> Hi,
>>>
>>> I am running Fedora 17 and added the oVirt beta repository to get the webadmin portal, since F17 only comes with the CLI by default.
>>>
>>> # wget http://ovirt.org/releases/beta/ovirt-engine.repo -O /etc/yum.repos.d/ovirt-engine_beta.repo
>>> # sed -i -- 's/fedora\/16/fedora\/17/' /etc/yum.repos.d/ovirt-engine_beta.repo
>>> # yum install -y ovirt-engine
>>> # rpm -qa | grep ovirt
>>> ovirt-engine-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-engine-genericapi-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-engine-webadmin-portal-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-iso-uploader-3.1.0-0.git1841d9.fc17.noarch
>>> ovirt-engine-sdk-3.1.0.2-gita89f4e.fc17.noarch
>>> ovirt-engine-restapi-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-engine-userportal-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-engine-tools-common-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-log-collector-3.1.0-0.fc17.noarch
>>> ovirt-engine-dbscripts-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-engine-setup-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-engine-notification-service-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-image-uploader-3.1.0-0.git9c42c8.fc17.noarch
>>> ovirt-engine-config-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>> ovirt-engine-backend-3.1.0-0.1.20120620git6ef9f8.fc17.noarch
>>>
>>> I need to get both the oVirt engine and a host up and running on the same machine, because I want it to be able to configure and execute power management on the rest of the hosts in the cluster.
>>> Source: http://lists.ovirt.org/pipermail/users/2012-February/000361.html
>>> "Yes, the ovirt backend does not shut down or power up any hosts directly, it can work only through vdsm. Therefore you need one running host per datacenter to be able to manage the rest of the hosts."
>>>
>>> I am then following this article:
>>> http://blog.jebpages.com/archives/how-to-get-up-and-running-with-ovirt/
>>>
>>> where I get to the step of adding the machine itself as a host in its own cluster, and then:
>>> (This is a "Re-Install" from the WUI)
>>> 2012-06-28 12:25:00,000 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Checking autorecoverable hosts
>>> 2012-06-28 12:25:00,004 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Autorecovering 0 hosts
>>> 2012-06-28 12:25:00,005 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Checking autorecoverable hosts done
>>> 2012-06-28 12:25:00,006 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Checking autorecoverable storage domains
>>> 2012-06-28 12:25:00,009 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Autorecovering 0 storage domains
>>> 2012-06-28 12:25:00,009 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-70) Checking autorecoverable storage domains done
>>> 2012-06-28 12:25:05,875 INFO [org.ovirt.engine.core.bll.UpdateVdsCommand] (ajp--0.0.0.0-8009-2) [13427c0] Running command: UpdateVdsCommand internal: false. Entities affected : ID: 105460c0-c0ea-11e1-b737-9b694eb255f6 Type: VDS
>>> 2012-06-28 12:25:05,889 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (ajp--0.0.0.0-8009-2) [13427c0] START, SetVdsStatusVDSCommand(vdsId = 105460c0-c0ea-11e1-b737-9b694eb255f6, status=Installing, nonOperationalReason=NONE), log id: 5ef5ea33
>>> 2012-06-28 12:25:05,895 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (ajp--0.0.0.0-8009-2) [13427c0] FINISH, SetVdsStatusVDSCommand, log id: 5ef5ea33
>>> 2012-06-28 12:25:05,910 INFO [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-3-thread-7) [3a635c45] Running command: InstallVdsCommand internal: true. Entities affected : ID: 105460c0-c0ea-11e1-b737-9b694eb255f6 Type: VDS
>>> 2012-06-28 12:25:05,914 INFO [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-3-thread-7) [3a635c45] Before Installation pool-3-thread-7
>>> 2012-06-28 12:25:05,915 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executing installation stage. (Stage: Starting Host installation)
>>> 2012-06-28 12:25:05,916 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executing installation stage. (Stage: Connecting to Host)
>>> 2012-06-28 12:25:05,959 INFO [org.ovirt.engine.core.utils.hostinstall.HostKeyVerifier] (NioProcessor-15) SSH key fingerprint f6:4d:81:ba:a0:f6:c4:09:85:18:10:5f:6f:47:09:58 for host njord.sto.slu.se (172.22.8.14) has been successfully verified.
>>> 2012-06-28 12:25:06,032 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='RHEV_INSTALL' status='OK' message='Connected to Host 172.22.8.14 with SSH key fingerprint: f6:4d:81:ba:a0:f6:c4:09:85:18:10:5f:6f:47:09:58'/>. FYI. (Stage: Connecting to Host)
>>> 2012-06-28 12:25:06,044 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Successfully connected to server ssh. (Stage: Connecting to Host)
>>> 2012-06-28 12:25:06,048 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executing installation stage. (Stage: Get the unique vds id)
>>> 2012-06-28 12:25:06,052 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Invoking /bin/echo -e `/bin/bash -c /usr/sbin/dmidecode|/bin/awk ' /UUID/{ print $2; } ' | /usr/bin/tr '
>>> ' '_' && cat /sys/class/net/*/address | /bin/grep -v '00:00:00:00' | /bin/sort -u | /usr/bin/head --lines=1` on 172.22.8.14
>>> 2012-06-28 12:25:06,145 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: 44454C4C-3200-1052-8050-B7C04F354431_00:15:17:36:60:4c
>>> . FYI. (Stage: Get the unique vds id)
>>> 2012-06-28 12:25:06,149 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Assigning unique id 44454C4C-3200-1052-8050-B7C04F354431_00:15:17:36:60:4c to Host. (Stage: Get the unique vds id)
>>> 2012-06-28 12:25:06,162 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) RunSSHCommand returns true
>>> 2012-06-28 12:25:06,164 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executing installation stage. (Stage: Upload Installation script to Host)
>>> 2012-06-28 12:25:06,166 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Uploading file /usr/share/ovirt-engine/scripts/vds_installer.py to /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py on 172.22.8.14
>>> 2012-06-28 12:25:06,170 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Uploading file /usr/share/ovirt-engine/scripts/vds_installer.py to /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py on 172.22.8.14
>>> 2012-06-28 12:25:09,363 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. successfully done sftp operation ( Stage: Upload Installation script to Host)
>>> 2012-06-28 12:25:09,364 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) return true
>>> 2012-06-28 12:25:09,365 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Uploading file /tmp/firewall.conf5272561799347733512.tmp to /tmp/firewall.conf.ca67f0a5-115c-4943-a9ef-157654586da5 on 172.22.8.14
>>> 2012-06-28 12:25:09,366 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Uploading file /tmp/firewall.conf5272561799347733512.tmp to /tmp/firewall.conf.ca67f0a5-115c-4943-a9ef-157654586da5 on 172.22.8.14
>>> 2012-06-28 12:25:12,504 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. successfully done sftp operation ( Stage: Upload Installation script to Host)
>>> 2012-06-28 12:25:12,509 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) return true
>>> 2012-06-28 12:25:12,512 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Executing installation stage. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:12,516 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Sending SSH Command chmod +x /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py; /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py -c 'ssl=true;management_port=54321' -O 'slu' -t 2012-06-28T10:25:05 -f /tmp/firewall.conf.ca67f0a5-115c-4943-a9ef-157654586da5 -p 80 -b http://xcp-cms.data.slu.se:80/Components/vds/ http://xcp-cms.data.slu.se:80/Components/vds/ 172.22.8.14 ca67f0a5-115c-4943-a9ef-157654586da5 False. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:12,530 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) Invoking chmod +x /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py; /tmp/vds_installer_ca67f0a5-115c-4943-a9ef-157654586da5.py -c 'ssl=true;management_port=54321' -O 'slu' -t 2012-06-28T10:25:05 -f /tmp/firewall.conf.ca67f0a5-115c-4943-a9ef-157654586da5 -p 80 -b http://xcp-cms.data.slu.se:80/Components/vds/ http://xcp-cms.data.slu.se:80/Components/vds/ 172.22.8.14 ca67f0a5-115c-4943-a9ef-157654586da5 False on 172.22.8.14
>>> 2012-06-28 12:25:13,545 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='INSTALLER' status='OK' message='Test platform succeeded'/>
>>> <BSTRAP component='INSTALLER LIB' status='OK' message='Install library already exists'/>
>>> <BSTRAP component='INSTALLER' status='OK' message='vds_bootstrap.py download succeeded'/>
>>> <BSTRAP component='RHN_REGISTRATION' status='OK' message='Host properly registered with RHN/Satellite.'/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:14,615 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDSM_MAJOR_VER' status='OK' message='Available VDSM matches requirements'/>
>>> <BSTRAP component='VT_SVM' status='OK' processor='Intel' message='Server supports virtualization'/>
>>> <BSTRAP component='OS' status='OK' type='FEDORA' message='Supported platform version'/>
>>> <BSTRAP component='KERNEL' status='OK' version='0' message='Skipped kernel version check'/>
>>> <BSTRAP component='CONFLICTING PACKAGES' status='OK' result='cman.x86_64' message='package cman.x86_64 is not installed '/>
>>> <BSTRAP component='REQ PACKAGES' status='OK' result='SDL.x86_64' message='SDL-1.2.14-16.fc17.x86_64 '/>
>>> <BSTRAP component='REQ PACKAGES' status='OK' result='bridge-utils.x86_64' message='bridge-utils-1.5-3.fc17.x86_64 '/>
>>> <BSTRAP component='REQ PACKAGES' status='OK' result='mesa-libGLU.x86_64' message='mesa-libGLU-8.0.3-1.fc17.x86_64 '/>
>>> <BSTRAP component='REQ PACKAGES' status='OK' result='openssl.x86_64' message='openssl-1.0.0j-1.fc17.x86_64 '/>
>>> <BSTRAP component='REQ PACKAGES' status='OK' result='m2crypto.x86_64' message='m2crypto-0.21.1-8.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:15,705 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='REQ PACKAGES' status='OK' result='rsync.x86_64' message='rsync-3.0.9-2.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='qemu-kvm' message='qemu-kvm-1.0-17.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='qemu-kvm-tools' message='qemu-kvm-tools-1.0-17.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='vdsm' message='vdsm-4.10.0-2.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='vdsm-cli' message='vdsm-cli-4.10.0-2.fc17.noarch '/>
>>> <BSTRAP component='VDS PACKAGES' status='WARN' result='libjpeg' message='package libjpeg is not installed '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='spice-server' message='spice-server-0.10.1-2.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='pixman' message='pixman-0.24.4-2.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='seabios' message='seabios-1.7.0-1.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='qemu-img' message='qemu-img-1.0-17.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:16,751 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='fence-agents' message='fence-agents-3.1.8-1.fc17.x86_64 '/>
>>> <BSTRAP component='VDS PACKAGES' status='OK' result='libselinux-python' message='libselinux-python-2.1.10-3.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:27,770 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='qemu-kvm' message='qemu-kvm-1.0-17.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:29,778 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='qemu-kvm-tools' message='qemu-kvm-tools-1.0-17.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:32,785 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='vdsm' message='vdsm-4.10.0-2.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:35,794 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='vdsm-cli' message='vdsm-cli-4.10.0-2.fc17.noarch '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:38,805 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='spice-server' message='spice-server-0.10.1-2.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:40,819 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='pixman' message='pixman-0.24.4-2.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:43,826 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='seabios' message='seabios-1.7.0-1.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:46,833 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='qemu-img' message='qemu-img-1.0-17.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:49,856 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='fence-agents' message='fence-agents-3.1.8-1.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:52,898 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='libselinux-python' message='libselinux-python-2.1.10-3.fc17.x86_64 '/>
>>> . FYI. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:53,223 ERROR [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Received message: <BSTRAP component='VDS PACKAGES' status='OK' result='libjpeg' message='package libjpeg is not installed '/>
>>> <BSTRAP component='CreateConf' status='FAIL' message='Basic configuration failed to import default values'/>
>>> <BSTRAP component='RHEV_INSTALL' status='FAIL'/>
>>> . Error occured. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:53,236 INFO [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] (pool-3-thread-7) RunSSHCommand returns true
>>> 2012-06-28 12:25:53,236 INFO [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] RunScript ended:true
>>> 2012-06-28 12:25:53,237 ERROR [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-7) [3a635c45] Installation of 172.22.8.14. Operation failure. (Stage: Running first installation script on Host)
>>> 2012-06-28 12:25:53,246 INFO [org.ovirt.engine.core.bll.InstallVdsCommand] (pool-3-thread-7) [3a635c45] After Installation pool-3-thread-7
>>> 2012-06-28 12:25:53,248 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (pool-3-thread-7) [3a635c45] START, SetVdsStatusVDSCommand(vdsId = 105460c0-c0ea-11e1-b737-9b694eb255f6, status=InstallFailed, nonOperationalReason=NONE), log id: 3cf757b4
>>> 2012-06-28 12:25:53,264 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (pool-3-thread-7) [3a635c45] FINISH, SetVdsStatusVDSCommand, log id: 3cf757b4
>>>
>>>
>>> The action in question "CreateConf" looks like:
>>>
>>> /usr/share/vdsm-bootstrap/vds_bootstrap.py
>>>
>>> def _makeConfig(self):
>>> import datetime
>>> from config import config
>>>
>>> if not os.path.exists(VDSM_CONF):
>>> logging.debug("makeConfig: generating conf.")
>>> lines = []
>>> lines.append ("# Auto-generated by vds_bootstrap at:" + str(datetime.datetime.now()) + "\n")
>>> lines.append ("\n")
>>>
>>> lines.append ("[vars]\n") #Adding ts for the coming scripts.
>>> lines.append ("trust_store_path = " + config.get('vars', 'trust_store_path') + "\n")
>>> lines.append ("ssl = " + config.get('vars', 'ssl') + "\n")
>>> lines.append ("\n")
>>>
>>> lines.append ("[addresses]\n") #Adding mgt port for the coming scripts.
>>> lines.append ("management_port = " + config.get('addresses', 'management_port') + "\n")
>>>
>>> logging.debug("makeConfig: writing the following to " + VDSM_CONF)
>>> logging.debug(lines)
>>> fd, tmpName = tempfile.mkstemp()
>>> f = os.fdopen(fd, 'w')
>>> f.writelines(lines)
>>> f.close()
>>> os.chmod(tmpName, 0644)
>>> shutil.move(tmpName, VDSM_CONF)
>>> else:
>>> self.message = 'Basic configuration found, skipping this step'
>>> logging.debug(self.message)
>>>
>>> def createConf(self):
>>> """
>>> Generate initial configuration file for VDSM. Must run after package installation!
>>> """
>>> self.message = 'Basic configuration set'
>>> self.rc = True
>>> self.status = 'OK'
>>>
>>> try:
>>> self._makeConfig()
>>> except Exception, e:
>>> logging.error('', exc_info=True)
>>> self.message = 'Basic configuration failed'
>>> if isinstance(e, ImportError):
>>> self.message = self.message + ' to import default values'
>>> self.rc = False
>>> self.status = 'FAIL'
>>>
>>> self._xmlOutput('CreateConf', self.status, None, None, self.message)
>>> return self.rc
>>>
>>>
>>> What now? Can anyone tell me why it fails? Besides the obvious "it's beta", of course :)
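Reading the quoted handler, the ' to import default values' suffix is appended only when _makeConfig() raises ImportError — so the FAIL most likely means `from config import config` (vdsm's config module) failed on the host, not that writing the file failed. A minimal sketch of that message logic (Python 3 here; the original is Python 2):

```python
def conf_failure_message(exc):
    # Mirrors the quoted createConf() error handling: only an
    # ImportError earns the "to import default values" suffix.
    message = 'Basic configuration failed'
    if isinstance(exc, ImportError):
        message += ' to import default values'
    return message
```

So a quick check on the host would be whether vdsm's config module imports cleanly in the interpreter the bootstrap script uses.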
>>>
>>>
>>> Med Vänliga Hälsningar
>>> -------------------------------------------------------------------------------
>>> Karli Sjöberg
>>> Swedish University of Agricultural Sciences
>>> Box 7079 (Visiting Address Kronåsvägen 8)
>>> S-750 07 Uppsala, Sweden
>>> Phone: +46-(0)18-67 15 66
>>> karli.sjoberg(a)slu.se
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
> Med Vänliga Hälsningar
> -------------------------------------------------------------------------------
> Karli Sjöberg
> Swedish University of Agricultural Sciences
> Box 7079 (Visiting Address Kronåsvägen 8)
> S-750 07 Uppsala, Sweden
> Phone: +46-(0)18-67 15 66
> karli.sjoberg(a)slu.se
>
Re: [Users] How can I create a snapshot for virtual disks only?
by Shu Ming
On 2012-6-27 23:30, Michal Skrivanek wrote:
> That would make sense only when you are powered off, wouldn't it?
> Otherwise what's the point of disk snapshot with open transactions, buffers in mem, etc, without saving the state of the VM too?
I understand that syncing virtual disk state with the OS cache is quite a
challenge for snapshot tool developers. However, it is a handy feature
for users who just want to back up the data on their disks without
stopping the VM. As other virtualization solutions support this feature,
we should narrow the gap to attract more users to oVirt.
>
> Thanks,
> michal
>
> On 27 Jun 2012, at 17:26, Shu Ming <shuming(a)linux.vnet.ibm.com> wrote:
>
>> On 2012-6-27 23:19, Dafna Ron wrote:
>>> Hi,
>>>
>>> no. the snapshot is for the entire vm and it's disks.
>>>
>>>
>>> On 06/27/2012 05:31 PM, Shu Ming wrote:
>>>> Hi,
>>>>
>>>> I am testing oVirt engine 3.1 beta release 06-07. I tried
>>>> "snapshots-->create" button for a VM and it looked like that the
>>>> snapshot to be created was for both the running state of the system
>>>> and the virtual disks. I am wondering if there is any way to create
>>>> the snapshot only for the virtual disks of the VM in engine?
>>>>
>> Do we have a future plan to make snapshots of only the VM's disks?
>>
>> --
>> Shu Ming <shuming(a)linux.vnet.ibm.com>
>> IBM China Systems and Technology Laboratory
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
--
Shu Ming <shuming(a)linux.vnet.ibm.com>
IBM China Systems and Technology Laboratory
[Users] How can I create a snapshot for virtual disks only?
by Shu Ming
Hi,
I am testing the oVirt engine 3.1 beta release 06-07. I tried the
"snapshots-->create" button for a VM, and it looked like the
snapshot to be created covered both the running state of the system and
the virtual disks. I am wondering if there is any way to create the
snapshot only for the virtual disks of the VM in the engine?
--
Shu Ming <shuming(a)linux.vnet.ibm.com>
IBM China Systems and Technology Laboratory
[Users] oVirt Weekly Meeting Minutes -- 2012-06-27
by Ofer Schreiber
Minutes: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-06-27-14.00.html
Minutes (text): http://ovirt.org/meetings/ovirt/2012/ovirt.2012-06-27-14.00.txt
Log: http://ovirt.org/meetings/ovirt/2012/ovirt.2012-06-27-14.00.log.html
=========================
#ovirt: oVirt Weekly Sync
=========================
Meeting started by oschreib at 14:00:19 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2012/ovirt.2012-06-27-14.00.log.html
.
Meeting summary
---------------
* agenda and roll call (oschreib, 14:00:29)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822145
(oschreib, 14:12:20)
* Status of next release (oschreib, 14:12:41)
* GA date is July 9th (oschreib, 14:12:53)
* 11 blocker currently in the 3.1 tracker (oschreib, 14:13:12)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=822145
(oschreib, 14:13:32)
* Sanlock locking failed for readonly devices (component: libvirt,
status: ASSIGNED) (oschreib, 14:16:17)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=828633
(oschreib, 14:16:32)
* IDEA: the feature owner can put the wiki page in a different
category e.g. [[Category:Feature complete]] (quaid, 14:16:56)
* patch waiting for a review (oschreib, 14:18:12)
* It's impossible to create bond with setupNetworks (component: vdsm,
status: ASSIGNED) (oschreib, 14:19:13)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=831998
(oschreib, 14:19:23)
* merged upstream, waiting for the Fedora backport (oschreib,
14:23:01)
* vdsmd init script times out due to lengthy semanage operation
(component: vdsm, status: POST) (oschreib, 14:23:48)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=832199
(oschreib, 14:23:57)
* patch in review (oschreib, 14:26:55)
* 3.1: sshd daemon is not starting correctly after complete the
installation of oVirt Node (component: node, status: MODIFIED)
(oschreib, 14:27:26)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=832517
(oschreib, 14:27:35)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=832199 (danken,
14:29:05)
* bug fixed, new node build today if possible (oschreib, 14:30:01)
* ACTION: mburns to build and upload new node version (oschreib,
14:30:14)
* 3.1: iptables blocking communication between node and engine
(component: node, status: MODIFIED) (oschreib, 14:31:30)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=832539
(oschreib, 14:31:36)
* should be in next build (oschreib, 14:33:07)
* ovirt-node can't be approved due to missing /rhev/data-center
(component: vdsm, status: ON_QA) (oschreib, 14:33:45)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=832577
(oschreib, 14:33:51)
* waiting for a verification (oschreib, 14:35:29)
* 3.1 Allow to create VLANed network on top of existing bond
(component: vdsm, status: POST) (oschreib, 14:36:05)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=833119
(oschreib, 14:36:20)
* pushed upstream, should be backported tomorrow to Fedora (oschreib,
14:40:56)
* Failed to add host - does not accept vdsm 4.10 (component: vdsm,
status: ON_QA) (oschreib, 14:41:24)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=833201
(oschreib, 14:41:33)
* waiting for a verification (oschreib, 14:43:51)
* [vdsm][bridgeless] BOOTPROTO/IPADDR/NETMASK options are not set on
interface (component: vdsm, status: NEW) (oschreib, 14:44:18)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=834281
(oschreib, 14:44:26)
* not a 3.1 blocker (oschreib, 14:51:06)
* Leak of keystore file descriptors in ovirt-engine (component:
engine, status: MODIFIED) (oschreib, 14:52:00)
* LINK: https://bugzilla.redhat.com/show_bug.cgi?id=834399
(oschreib, 14:52:08)
* pushed and backported, waiting for a build (oschreib, 14:52:22)
* Sub-project reports (oschreib, 14:55:08)
* status covered in BZ overview (oschreib, 14:57:17)
* Upcoming workshops (oschreib, 14:57:29)
* no updates about upcoming workshops (oschreib, 14:59:25)
* Release notes (oschreib, 14:59:56)
* 3.1 release notes should be given to sgordon by mail (oschreib,
15:04:51)
* ACTION: oschreib to nag maintainers about RN (oschreib, 15:06:04)
Meeting ended at 15:08:51 UTC.
Action Items
------------
* mburns to build and upload new node version
* oschreib to nag maintainers about RN
Action Items, by person
-----------------------
* mburns
* mburns to build and upload new node version
* oschreib
* oschreib to nag maintainers about RN
* **UNASSIGNED**
* (none)
People Present (lines said)
---------------------------
* oschreib (119)
* sgordon (24)
* danken (18)
* RobertM (10)
* ilvovsky (8)
* quaid (6)
* fabiand (6)
* mburns (4)
* ofrenkel (3)
* READ10 (2)
* ovirtbot (2)
* Guest1297 (1)
* crobinso (1)
* dustins (1)
Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
[Users] Adding disk via rest api
by Jakub Libosvar
Hi guys,
I'm struggling with creating a disk via the API. I tried to POST this body:
<disk>
<name>my_cool_disk</name>
<provisioned_size>1073741824</provisioned_size>
<storage_domains>
<storage_domain>
<name>master_sd</name>
</storage_domain>
</storage_domains>
<size>1073741824</size>
<interface>virtio</interface>
<format>cow</format>
</disk>
but I am getting an error from CanDoAction:
2012-06-25 17:37:14,497 WARN [org.ovirt.engine.core.bll.AddDiskCommand]
(ajp--0.0.0.0-8009-11) [26a7e908] CanDoAction of action AddDisk failed.
Reasons:VAR__ACTION__ADD,VAR__TYPE__VM_DISK,ACTION_TYPE_FAILED_STORAGE_DOMAIN_NOT_EXIST
2012-06-25 17:37:14,502 ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
(ajp--0.0.0.0-8009-11) Operation Failed: [Cannot add Virtual Machine
Disk. Storage Domain doesn't exist.]
The storage domain 'master_sd' is operational, and I can create a disk
from the webadmin. According to the RSDL, provisioned_size is not a child of the disk
element:
<parameter required="true" type="xs:int">
<name>provisioned_size</name>
</parameter>
<parameter required="true" type="xs:string">
<name>disk.interface</name>
</parameter>
<parameter required="true" type="xs:string">
<name>disk.format</name>
</parameter>
but under api/disks it is.
Any ideas about what I am doing wrong?
Thanks,
Kuba
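[One thing that might be worth trying — purely a guess prompted by the 'Storage Domain doesn't exist' CanDoAction message, not a confirmed fix — is referencing the storage domain by its UUID (as returned by GET /api/storagedomains) instead of by name:

```xml
<disk>
    <name>my_cool_disk</name>
    <storage_domains>
        <!-- placeholder UUID; substitute the real one from GET /api/storagedomains -->
        <storage_domain id="00000000-0000-0000-0000-000000000000"/>
    </storage_domains>
    <provisioned_size>1073741824</provisioned_size>
    <size>1073741824</size>
    <interface>virtio</interface>
    <format>cow</format>
</disk>
```
]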
[Users] Two updates from the oVirt team
by Ofer Schreiber
Hey everyone,
There are two updates I'd like to share with you:
1. oVirt 3.1 release date has been delayed to the 9th of July.
As you all know, we're in the middle of creating a new build of oVirt.
With your help, we found multiple issues which we consider release blockers. Some of these issues require a few days to be solved properly, and as a result, we had to delay the general availability of oVirt.
2. New ovirt-engine rpms are available for download at http://ovirt.org/releases/beta/fedora/17
This build contains multiple bug fixes, as well as a new versioning schema, which will ensure future updates are done correctly.
Please note that due to the new versioning, we don't support in-beta upgrades. Please make sure you clean up your environment (using engine-cleanup and yum remove) before installing the new rpms (version 3.1.0-0.1.20120620git6ef9f8.fc17).
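[That cleanup sequence, sketched as shell commands — adapt to your setup; the exact package glob may differ:

```shell
engine-cleanup                 # remove the engine configuration and database
yum remove 'ovirt-engine*'     # remove the old beta rpms
yum install ovirt-engine       # install the new build
engine-setup                   # reconfigure from scratch
```
]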
New VDSM rpms should be available at the beginning of next week.
Regards,
--
Ofer Schreiber
oVirt Release Manager
[Users] oVirt 3.1 -- GlusterFS 3.3
by Nathan Stratton
Steps:
1) Installed ovirt-engine, configured Cluster for GlusterFS
2) Installed 8 ovirt 3.1 nodes
3) Joined all 8 nodes with ovirt-engine (all up and happy)
4) Manually added all 8 peers for GlusterFS on hosts (all peers happy)
5) Create Volume errors:
2012-06-25 16:44:42,412 WARN
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector]
(ajp--0.0.0.0-8009-4) [6c02c1f5] The message key CreateGlusterVolume is
missing from bundles/ExecutionMessages
2012-06-25 16:44:42,483 INFO
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Running command:
CreateGlusterVolumeCommand internal: false. Entities affected : ID:
99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups
2012-06-25 16:44:42,486 INFO
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] START,
CreateGlusterVolumeVDSCommand(vdsId =
8324ff12-bf1c-11e1-b235-43d3f71a81d8), log id: 15757d4c
2012-06-25 16:44:42,593 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Failed in CreateGlusterVolumeVDS method
2012-06-25 16:44:42,594 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Error code unexpected and error message
VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS,
error = Unexpected exception
2012-06-25 16:44:42,594 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--0.0.0.0-8009-4)
[6c02c1f5] Command CreateGlusterVolumeVDS execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
CreateGlusterVolumeVDS, error = Unexpected exception
2012-06-25 16:44:42,595 INFO
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] FINISH, CreateGlusterVolumeVDSCommand,
log id: 15757d4c
2012-06-25 16:44:42,596 ERROR
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Command
org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand throw Vdc Bll
exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS,
error = Unexpected exception
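When trawling engine.log dumps like the one above, a small filter that keeps only the ERROR entries (timestamp and logger) saves some scrolling. A throwaway convenience sketch, not part of oVirt; note that in the archived log above each entry wraps across several lines, so this matches only on the line carrying the timestamp:

```python
import re

# Matches engine.log lines of the form:
# 2012-06-25 16:44:42,593 ERROR [org.ovirt....BrokerCommandBase] (ajp--...) ...
LOG_RE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) (\w+)\s+\[([^\]]+)\]")

def errors_only(lines):
    """Yield (timestamp, logger) pairs for ERROR-level entries."""
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group(2) == "ERROR":
            yield m.group(1), m.group(3)

sample = [
    "2012-06-25 16:44:42,486 INFO",
    "2012-06-25 16:44:42,593 ERROR "
    "[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] "
    "(ajp--0.0.0.0-8009-4) Failed in CreateGlusterVolumeVDS method",
]
print(list(errors_only(sample)))
```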
><>
Nathan Stratton
nathan at robotics.net
http://www.robotics.net
[Users] Unable to add iSCSI storage domain - 3.1 beta
by Trey Dockendorf
I'm attempting to add iSCSI storage to a data center via the web
interface, and while I'm able to see the target iSCSI device and log
in, clicking the "OK" button doesn't submit the storage selection.
I noticed from looking at the oVirt manual that somewhere in this
interface I have to select the LUN. However, once logged into the
target, I never see any LUNs to select.
I've attached two screenshots that show the UI and what could be
errors in how it's being displayed. I also attached logs from vdsm
host and engine that were captured while attempting to add the storage
domain.
After logging into the iSCSI target via the web interface, this is the
output on the target host, showing that my ovirt node (10.20.1.240) is
connected.
# tgtadm --lld iscsi --op show --mode target
Target 1: iqn.2012-06.edu.tamu.dc-store:lv_kvm_pool
System information:
Driver: iscsi
node dc-store0.tamu.edu
State: ready
I_T nexus information:
I_T nexus: 3
Initiator: iqn.1994-05.com.redhat:ad499aa5f37e
Connection: 0
IP Address: 10.20.1.240
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: p_lu_coraid23_lu
SCSI SN: (stdin)=
Size: 4398047 MB, Block size: 512
Online: Yes
Removable media: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/vg_coraid23/lv_kvm_pool
Backing store flags:
Account information:
ACL information:
ALL
Based on some archived emails, I've included the previously mentioned
outputs that may help.
From the ovirt node:
# iscsiadm -m discovery -p 10.20.1.250 -t sendtargets --login
10.20.1.250:3260,1 iqn.2012-06.edu.tamu.dc-store:lv_kvm_pool
Logging in to [iface: default, target:
iqn.2012-06.edu.tamu.dc-store:lv_kvm_pool, portal: 10.20.1.250,3260]
(multiple)
Login to [iface: default, target:
iqn.2012-06.edu.tamu.dc-store:lv_kvm_pool, portal: 10.20.1.250,3260]
successful.
# multipath -r
Jun 25 14:19:22 | 3600508e0000000004ecf45ecb0c2ec0a: ignoring map
reload: 1p_lu_coraid23_lu undef IET,VIRTUAL-DISK
size=4.0T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=1 status=undef
`- 8:0:0:1 sdc 8:32 active ready running
# iscsiadm -m session
tcp: [4] 10.20.1.250:3260,1 iqn.2012-06.edu.tamu.dc-store:lv_kvm_pool
# vdsClient -s 0 getDeviceList
[{'GUID': '1p_lu_coraid23_lu',
'capacity': '4398046511104',
'devtype': 'iSCSI',
'fwrev': '0001',
'logicalblocksize': '512',
'partitioned': False,
'pathlist': [{'connection': '10.20.1.250',
'initiatorname': 'default',
'iqn': 'iqn.2012-06.edu.tamu.dc-store:lv_kvm_pool',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '1',
'physdev': 'sdc',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'VIRTUAL-DISK',
'pvUUID': '7MHtfm-352q-zWrT-KfU8-sJ1l-e4rP-aKsuhu',
'serial': 'SIET_VIRTUAL-DISK',
'vendorID': 'IET',
'vgUUID': 'YCso68-YCo8-w817-I5Nf-mFx3-aoPe-LWkYco'}]
# vgs -o+pv_name
VG #PV #LV #SN Attr VSize VFree
PV
f18b2342-4713-4319-a878-3025b38556a4 1 6 0 wz--n- 4.00t 4.00t
/dev/mapper/1p_lu_coraid23_lu
vg_dc-kvm0 1 2 0 wz--n- 67.50g 0
/dev/sda2
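One thing worth noting in the outputs above: the LUN already carries an LVM physical volume (getDeviceList reports a non-empty pvUUID/vgUUID, and vgs shows VG f18b2342-4713-4319-a878-3025b38556a4 on /dev/mapper/1p_lu_coraid23_lu). oVirt typically refuses to offer LUNs that are already in use as PVs, which may be why nothing is selectable in the dialog. A quick check, using the dict structure vdsClient returned (field names and values copied from the output above, trimmed to the fields used):

```python
# Device record as returned by `vdsClient -s 0 getDeviceList`, trimmed
# to the fields this check needs; values are from the output above.
device = {
    "GUID": "1p_lu_coraid23_lu",
    "capacity": "4398046511104",
    "pvUUID": "7MHtfm-352q-zWrT-KfU8-sJ1l-e4rP-aKsuhu",
    "vgUUID": "YCso68-YCo8-w817-I5Nf-mFx3-aoPe-LWkYco",
}

def lun_in_use(dev):
    """A LUN with a non-empty pvUUID already belongs to an LVM setup."""
    return bool(dev.get("pvUUID"))

print(lun_in_use(device))  # True: the LUN is already a PV
```

If that is the cause, wiping the existing VG/PV from the LUN (after confirming nothing needs the data) should make it selectable.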
Thanks
- Trey