[Users] Booting oVirt node image 2.3.0, no install option

Hi folks,

I'm trying to install oVirt node v2.3.0 on a Dell C2100 server. I can boot up just fine, but the two menu options I see are "Start oVirt node" and "Troubleshooting". When I choose "Start oVirt node", it does just that, and I am soon after given a console login prompt. I've checked the docs, and I don't see what I'm supposed to do next, as in a password etc. Am I missing something?

Thanks,

-Adam

On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote:

> Hi folks,
>
> I'm trying to install oVirt node v2.3.0 on a Dell C2100 server. I can boot up just fine, but the two menu options I see are "Start oVirt node" and "Troubleshooting". When I choose "Start oVirt node", it does just that, and I am soon after given a console login prompt. I've checked the docs, and I don't see what I'm supposed to do next, as in a password etc. Am I missing something?
>
> Thanks,
>
> -Adam

Hi Adam,

Something is breaking in the boot process. You should be getting a TUI screen that will let you configure and install ovirt-node. I just added an entry on the Node Troubleshooting wiki page [1] for you to follow.

Mike

[1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
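The wiki page Mike points at asks for exactly the kind of information that shows up later in this thread: the node's log files and what actually failed in them. As a minimal sketch of pulling the failure lines out of such a log — `filter_errors` is a hypothetical helper, not an oVirt tool, and the sample lines are copied from the logs posted later in this thread:

```shell
#!/bin/sh
# Hypothetical helper: print only the ERROR/FAILED lines from a log file.
filter_errors() {
    grep -E 'ERROR|FAILED' "$1"
}

# Sample log content taken from /tmp/ovirt.log as quoted in this thread.
cat > /tmp/sample_ovirt.log <<'EOF'
2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live
2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
EOF

filter_errors /tmp/sample_ovirt.log
```

On a real node the same filter would be pointed at /tmp/ovirt.log, /var/log/ovirt.log, and /var/log/vdsm/vdsm.log from the console login.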

Thanks very much, Mike. Below is some additional info now that I can get in. Also, when I "su - admin" it tries to start graphical mode, and just goes to a blank screen and stays there. Any insight is much appreciated, and please let me know if there's anything else I can try / provide.

Thanks,

-Adam

/tmp/ovirt.log
==============
/sbin/restorecon set context /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 failed:'Read-only file system'
/sbin/restorecon reset /var/cache/yum context unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t:s0
/sbin/restorecon reset /etc/sysctl.conf context system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0
/sbin/restorecon reset /boot-kdump context system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0
2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - ::::live device:::: /dev/sdb
2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions -
2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live
2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions -
2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()

/var/log/ovirt.log
==================
Apr 16 09:35:53 Starting ovirt-early oVirt Node Hypervisor release 2.3.0 (1.0.fc16)
Apr 16 09:35:53 Updating /etc/default/ovirt
Apr 16 09:35:54 Updating OVIRT_BOOTIF to ''
Apr 16 09:35:54 Updating OVIRT_INIT to ''
Apr 16 09:35:54 Updating OVIRT_UPGRADE to ''
Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1'
Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset crashkernel=512M-2G:64M,2G-:128M elevator=deadline quiet rd_NO_LVM rhgb rd.luks=0 rd.md=0 rd.dm=0'
Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic'
Apr 16 09:35:54 Updating OVIRT_INSTALL to '1'
Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1'
Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw
Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw
Apr 16 09:36:09 Skip runtime mode configuration.
Apr 16 09:36:09 Completed ovirt-early
Apr 16 09:36:09 Starting ovirt-awake.
Apr 16 09:36:09 Node is operating in unmanaged mode.
Apr 16 09:36:09 Completed ovirt-awake: RETVAL=0
Apr 16 09:36:09 Starting ovirt
Apr 16 09:36:09 Completed ovirt
Apr 16 09:36:10 Starting ovirt-post
Apr 16 09:36:20 Hardware virtualization detected
Volume group "HostVG" not found
Skipping volume group HostVG
Restarting network (via systemctl): [ OK ]
Apr 16 09:36:20 Starting ovirt-post
Apr 16 09:36:21 Hardware virtualization detected
Volume group "HostVG" not found
Skipping volume group HostVG
Restarting network (via systemctl): [ OK ]
Apr 16 09:36:22 Starting ovirt-cim
Apr 16 09:36:22 Completed ovirt-cim
WARNING: persistent config storage not available

/var/log/vdsm/vdsm.log
======================
MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I am the actual vdsm 4.9-0
MainThread::DEBUG::2012-04-16 09:36:23,873::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage'
MainThread::DEBUG::2012-04-16 09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-04-16 09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I am the actual vdsm 4.9-0
MainThread::DEBUG::2012-04-16 09:36:25,199::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage'
MainThread::DEBUG::2012-04-16 09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-04-16 09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) multipath Defaulting to False
MainThread::DEBUG::2012-04-16 09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc, prefixName: multipath.conf, versions: 5
MainThread::DEBUG::2012-04-16 09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions found: [0]
MainThread::DEBUG::2012-04-16 09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /bin/cp /etc/multipath.conf /etc/multipath.conf.1' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipath) FAILED: <err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n'; <rc> = 1
MainThread::DEBUG::2012-04-16 09:36:25,256::multipath::118::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf.1' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,269::multipath::118::Storage.Misc.excCmd::(setupMultipath) FAILED: <err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n'; <rc> = 1
MainThread::DEBUG::2012-04-16 09:36:25,270::multipath::123::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /bin/cp /tmp/tmpnPcvWi /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,283::multipath::123::Storage.Misc.excCmd::(setupMultipath) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:25,283::multipath::128::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,294::multipath::128::Storage.Misc.excCmd::(setupMultipath) FAILED: <err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n'; <rc> = 1
MainThread::DEBUG::2012-04-16 09:36:25,295::multipath::131::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/multipath -F' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,323::multipath::131::Storage.Misc.excCmd::(setupMultipath) FAILED: <err> = ''; <rc> = 1
MainThread::DEBUG::2012-04-16 09:36:25,323::multipath::134::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:26,397::multipath::134::Storage.Misc.excCmd::(setupMultipath) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:26,398::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:26,443::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:26,445::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
MainThread::DEBUG::2012-04-16 09:36:26,447::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
MainThread::DEBUG::2012-04-16 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
MainThread::DEBUG::2012-04-16 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
MainThread::INFO::2012-04-16 09:36:29,307::vdsm::71::vds::(run) I am the actual vdsm 4.9-0
MainThread::DEBUG::2012-04-16 09:36:29,514::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage'
MainThread::DEBUG::2012-04-16 09:36:29,515::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-04-16 09:36:29,551::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,564::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:29,565::multipath::101::Storage.Multipath::(isEnabled) Current revision of multipath.conf detected, preserving
MainThread::DEBUG::2012-04-16 09:36:29,565::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,606::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:29,606::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
MainThread::DEBUG::2012-04-16 09:36:29,608::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,714::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:29,715::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
MainThread::DEBUG::2012-04-16 09:36:29,716::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
MainThread::DEBUG::2012-04-16 09:36:29,716::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,813::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
MainThread::DEBUG::2012-04-16 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0
Thread-11::DEBUG::2012-04-16 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
MainThread::INFO::2012-04-16 09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) Starting StorageDispatcher...
Thread-11::DEBUG::2012-04-16 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to sampling method
Thread-11::DEBUG::2012-04-16 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-11::DEBUG::2012-04-16 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to sampling method
Thread-11::DEBUG::2012-04-16 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) '/usr/bin/pgrep -xf ksmd' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-11::DEBUG::2012-04-16 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21
Thread-11::DEBUG::2012-04-16 09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning last result
Thread-11::DEBUG::2012-04-16 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) Could not kill old Super Vdsm [Errno 2] No such file or directory: '/var/run/vdsm/svdsm.pid'
Thread-11::DEBUG::2012-04-16 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) Launching Super Vdsm
Thread-11::DEBUG::2012-04-16 09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) '/usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.pyc bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) Making sure I'm root
MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) Parsing cmd args
MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) Creating PID file
MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) Cleaning old socket
MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) Setting up keep alive thread
MainThread::DEBUG::2012-04-16 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) Creating remote object manager
MainThread::DEBUG::2012-04-16 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) Started serving super vdsm object
Thread-11::DEBUG::2012-04-16 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm
Thread-11::DEBUG::2012-04-16 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected to Super Vdsm
Thread-11::DEBUG::2012-04-16 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/multipath' (cwd None)
Thread-11::DEBUG::2012-04-16 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-11::DEBUG::2012-04-16 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-11::DEBUG::2012-04-16 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-11::DEBUG::2012-04-16 09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-11::DEBUG::2012-04-16 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-11::DEBUG::2012-04-16 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-11::DEBUG::2012-04-16 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-11::DEBUG::2012-04-16 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning last result
Thread-11::DEBUG::2012-04-16 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center'
Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']
Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) Mount list: ['/rhev/data-center']
Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers
Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'

On 4/16/12 8:38 AM, "Mike Burns" <mburns@redhat.com> wrote:
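The logs above show two related symptoms: ovirt-config-installer fails in mount_live(), and sudo fails with "unable to mkdir /var/db/sudo/vdsm: Read-only file system" — both point at paths the node expects to be writable. A minimal sketch of a writability probe one could run from the console; `check_writable` is a hypothetical helper, not part of oVirt:

```shell
#!/bin/sh
# Hypothetical helper: report whether a directory accepts writes,
# e.g. to confirm which parts of the node's filesystem are read-only.
check_writable() {
    dir="$1"
    if touch "$dir/.rwtest" 2>/dev/null; then
        rm -f "$dir/.rwtest"
        echo "$dir: writable"
    else
        echo "$dir: read-only or missing"
    fi
}

check_writable /tmp
```

On the failing node the interesting targets would be the paths from the errors, such as /var/db/sudo and the /live mount point.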
On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote:

> Hi folks,
>
> I'm trying to install oVirt node v2.3.0 on a Dell C2100 server. I can boot up just fine, but the two menu options I see are "Start oVirt node" and "Troubleshooting". When I choose "Start oVirt node", it does just that, and I am soon after given a console login prompt. I've checked the docs, and I don't see what I'm supposed to do next, as in a password etc. Am I missing something?
>
> Thanks,
>
> -Adam

Hi Adam,

Something is breaking in the boot process. You should be getting a TUI screen that will let you configure and install ovirt-node.

I just added an entry on the Node Troubleshooting wiki page [1] for you to follow.

Mike

[1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems

Hi folks,

Still hoping someone can give me a hand with this. I can't install oVirt node 2.3.0 on a Dell C2100 server because it won't start the graphical interface. I booted up a standard F16 image this morning, and the graphical installer does start during that process. The logs (/tmp/ovirt.log, /var/log/ovirt.log, /var/log/vdsm/vdsm.log) are the same as in my previous message.

Thanks very much,

-Adam
Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt'] Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) Mount list: ['/rhev/data-center'] Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'
On 4/16/12 8:38 AM, "Mike Burns" <mburns@redhat.com> wrote:
On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote:
Hi folks,
I'm trying to install oVirt node v2.3.0 on a Dell C2100 server. I can boot up just fine, but the two menu options I see are "Start oVirt node" and "Troubleshooting". When I choose "Start oVirt node", it does just that, and I am soon after given a console login prompt. I've checked the docs, and I don't see what I'm supposed to do next, as in a password etc. Am I missing something?
Hi Adam,
Something is breaking in the boot process. You should be getting a TUI screen that will let you configure and install ovirt-node.
I just added an entry on the Node Troubleshooting wiki page[1] for you to follow.
Mike
[1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems
Thanks,
-Adam _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On 04/17/2012 09:45 AM, Adam vonNieda wrote:
Hi folks,
Still hoping someone can give me a hand with this. I can't install ovirt-node 2.3.0 on a Dell C2100 server because it won't start the graphical interface. I booted up a standard F16 image this morning, and the graphical installer does start during that process. Logs are below.
Thanks very much,
-Adam
/tmp/ovirt.log ==============
/sbin/restorecon set context /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 failed:'Read-only file system' /sbin/restorecon reset /var/cache/yum context unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t:s0 /sbin/restorecon reset /etc/sysctl.conf context system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0 /sbin/restorecon reset /boot-kdump context system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0 2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - ::::live device:::: /dev/sdb 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live" 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
/var/log/ovirt.log ==================
Apr 16 09:35:53 Starting ovirt-early oVirt Node Hypervisor release 2.3.0 (1.0.fc16) Apr 16 09:35:53 Updating /etc/default/ovirt Apr 16 09:35:54 Updating OVIRT_BOOTIF to '' Apr 16 09:35:54 Updating OVIRT_INIT to '' Apr 16 09:35:54 Updating OVIRT_UPGRADE to '' Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1' Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset crashkernel=512M-2G:64M,2G-:128M elevator=deadline quiet rd_NO_LVM rhgb rd.luks=0 rd.md=0 rd.dm=0' Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic' Apr 16 09:35:54 Updating OVIRT_INSTALL to '1' Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1' Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw Apr 16 09:36:09 Skip runtime mode configuration. Apr 16 09:36:09 Completed ovirt-early Apr 16 09:36:09 Starting ovirt-awake. Apr 16 09:36:09 Node is operating in unmanaged mode. Apr 16 09:36:09 Completed ovirt-awake: RETVAL=0 Apr 16 09:36:09 Starting ovirt Apr 16 09:36:09 Completed ovirt Apr 16 09:36:10 Starting ovirt-post Apr 16 09:36:20 Hardware virtualization detected Volume group "HostVG" not found Skipping volume group HostVG Restarting network (via systemctl): [ OK ] Apr 16 09:36:20 Starting ovirt-post Apr 16 09:36:21 Hardware virtualization detected Volume group "HostVG" not found Skipping volume group HostVG Restarting network (via systemctl): [ OK ] Apr 16 09:36:22 Starting ovirt-cim Apr 16 09:36:22 Completed ovirt-cim WARNING: persistent config storage not available
/var/log/vdsm/vdsm.log =======================
MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I am the actual vdsm 4.9-0 MainThread::DEBUG::2012-04-16 09:36:23,873::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage' MainThread::DEBUG::2012-04-16 09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 MainThread::DEBUG::2012-04-16 09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I am the actual vdsm 4.9-0 MainThread::DEBUG::2012-04-16 09:36:25,199::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage' MainThread::DEBUG::2012-04-16 09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 MainThread::DEBUG::2012-04-16 09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) MainThread::DEBUG::2012-04-16 09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS: <err> = '';<rc> = 0 MainThread::DEBUG::2012-04-16 09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) multipath Defaulting to False MainThread::DEBUG::2012-04-16 09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc, prefixName: multipath.conf, versions: 5 MainThread::DEBUG::2012-04-16 09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions found: [0] MainThread::DEBUG::2012-04-16 09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /bin/cp /etc/multipath.conf /etc/multipath.conf.1' (cwd None) MainThread::DEBUG::2012-04-16 09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipath) FAILED:<err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n';<rc> = 1 MainThread::DEBUG::2012-04-16 
09:36:25,256::multipath::118::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf.1' (cwd None) MainThread::DEBUG::2012-04-16 09:36:25,269::multipath::118::Storage.Misc.excCmd::(setupMultipath) FAILED:<err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n';<rc> = 1 MainThread::DEBUG::2012-04-16 09:36:25,270::multipath::123::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /bin/cp /tmp/tmpnPcvWi /etc/multipath.conf' (cwd None) MainThread::DEBUG::2012-04-16 09:36:25,283::multipath::123::Storage.Misc.excCmd::(setupMultipath) SUCCESS:<err> = '';<rc> = 0 MainThread::DEBUG::2012-04-16 09:36:25,283::multipath::128::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf' (cwd None) MainThread::DEBUG::2012-04-16 09:36:25,294::multipath::128::Storage.Misc.excCmd::(setupMultipath) FAILED:<err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n';<rc> = 1 MainThread::DEBUG::2012-04-16 09:36:25,295::multipath::131::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/multipath -F' (cwd None) MainThread::DEBUG::2012-04-16 09:36:25,323::multipath::131::Storage.Misc.excCmd::(setupMultipath) FAILED:<err> = '';<rc> = 1 MainThread::DEBUG::2012-04-16 09:36:25,323::multipath::134::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None) MainThread::DEBUG::2012-04-16 09:36:26,397::multipath::134::Storage.Misc.excCmd::(setupMultipath) SUCCESS:<err> = '';<rc> = 0 MainThread::DEBUG::2012-04-16 09:36:26,398::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None) MainThread::DEBUG::2012-04-16 09:36:26,443::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS:<err> = '';<rc> = 0 MainThread::DEBUG::2012-04-16 
09:36:26,445::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex MainThread::DEBUG::2012-04-16 09:36:26,447::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count, d ev_size' (cwd None) MainThread::DEBUG::2012-04-16 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> = ''; <rc> = 0 MainThread::DEBUG::2012-04-16 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex MainThread::DEBUG::2012-04-16 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex MainThread::DEBUG::2012-04-16 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_s i ze,vg_mda_free' (cwd None) MainThread::INFO::2012-04-16 09:36:29,307::vdsm::71::vds::(run) I am the actual vdsm 4.9-0 MainThread::DEBUG::2012-04-16 09:36:29,514::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage' MainThread::DEBUG::2012-04-16 
This is definitely the cause of the installer failing:

2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()

What kind of media are you installing from: USB/CD/remote console?
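For reference, a minimal sketch of the check behind that ovirtfunctions log line: mount_live() is apparently treated as successful only when a "none /live" entry shows up in /proc/mounts. The helper name check_live_mounted below is purely illustrative, not an oVirt function.

```shell
#!/bin/sh
# Sketch of the live-media mount check suggested by the log line above.
# check_live_mounted reads a /proc/mounts-style table on stdin and exits
# 0 only if the live image appears mounted at /live.
check_live_mounted() {
    grep -q "none /live"
}

# On a running node you would feed it the real mount table:
#   check_live_mounted < /proc/mounts || echo "live image not mounted"
```

If that grep finds no match, the installer bails out exactly as in the ERROR line above, so checking /proc/mounts by hand from the troubleshooting shell is a quick way to confirm the media never mounted.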

Thanks for the reply, Joey. I saw that too, and thought maybe my USB thumb drive was set to read-only, but it's not. This box doesn't have a DVD drive, so I'll try a different USB drive, and if that doesn't work, I'll dig up an external DVD drive.

Thanks again,

-Adam

Adam vonNieda
Adam@vonNieda.org

On Apr 17, 2012, at 9:07, Joey Boggs <jboggs@redhat.com> wrote:
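One way to double-check a write-protected stick: the kernel exposes a per-device read-only flag at /sys/block/&lt;dev&gt;/ro (1 = read-only). The device name sdb matches the "live device" in the logs above but may differ on your system, and is_readonly is a hypothetical helper for illustration, not an oVirt tool.

```shell
#!/bin/sh
# Interpret the kernel's block-device read-only flag, as found in
# /sys/block/<dev>/ro. Expects the flag value ("0" or "1") on stdin
# and exits 0 when the device is write-protected.
is_readonly() {
    read -r flag
    [ "$flag" = "1" ]
}

# Usage on a live system (device name assumed from the logs):
#   is_readonly < /sys/block/sdb/ro && echo "sdb is write-protected"
```

A hardware write-protect switch would show up here as 1 even when the filesystem on the stick looks writable from another machine.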
On 04/17/2012 09:45 AM, Adam vonNieda wrote:
Hi folks,
Still hoping someone can give me a hand with this. I can't install ovirt-node 2.3.0 on a Dell C2100 server because it won't start the graphical interface. I booted up a standard F16 image this morning, and the graphical installer does start during that process. Logs are below.
Thanks very much,
-Adam
Thread-11::DEBUG::2012-04-16 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to sampling method Thread-11::DEBUG::2012-04-16 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-11::DEBUG::2012-04-16 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to sampling method Thread-11::DEBUG::2012-04-16 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None) MainThread::DEBUG::2012-04-16 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) '/usr/bin/pgrep -xf ksmd' (cwd None) MainThread::DEBUG::2012-04-16 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS:<err> = '';<rc> = 0 Thread-11::DEBUG::2012-04-16 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) FAILED:<err> = 'iscsiadm: No session found.\n';<rc> = 21 Thread-11::DEBUG::2012-04-16 09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning last result Thread-11::DEBUG::2012-04-16 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) Could not kill old Super Vdsm [Errno 2] No such file or directory: '/var/run/vdsm/svdsm.pid' Thread-11::DEBUG::2012-04-16 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) Launching Super Vdsm Thread-11::DEBUG::2012-04-16 09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) '/usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.pyc bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None) MainThread::DEBUG::2012-04-16 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) Making sure I'm root MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) Parsing cmd args MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) Creating PID file MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) Cleaning old socket MainThread::DEBUG::2012-04-16 
09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) Setting up keep alive thread MainThread::DEBUG::2012-04-16 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) Creating remote object manager MainThread::DEBUG::2012-04-16 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) Started serving super vdsm object Thread-11::DEBUG::2012-04-16 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm Thread-11::DEBUG::2012-04-16 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected to Super Vdsm Thread-11::DEBUG::2012-04-16 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/multipath' (cwd None) Thread-11::DEBUG::2012-04-16 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) SUCCESS:<err> = '';<rc> = 0 Thread-11::DEBUG::2012-04-16 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-11::DEBUG::2012-04-16 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-11::DEBUG::2012-04-16 09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-11::DEBUG::2012-04-16 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex Thread-11::DEBUG::2012-04-16 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-11::DEBUG::2012-04-16 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-11::DEBUG::2012-04-16 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning last result Thread-11::DEBUG::2012-04-16 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center' 
Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt'] Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) Mount list: ['/rhev/data-center'] Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'
On 4/16/12 8:38 AM, "Mike Burns" <mburns@redhat.com> wrote:
On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote:
Hi folks,
I'm trying to install oVirt node v2.3.0 on a Dell C2100 server. I can boot up just fine, but the two menu options I see are "Start oVirt node" and "Troubleshooting". When I choose "Start oVirt node", it does just that, and I am soon after given a console login prompt. I've checked the docs, and I don't see what I'm supposed to do next, such as a password, etc. Am I missing something?
Hi Adam,
Something is breaking in the boot process. You should be getting a TUI screen that will let you configure and install ovirt-node.
I just added an entry on the Node Troubleshooting wiki page[1] for you to follow.
Mike
[1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems
Thanks,
-Adam

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
This is definitely the cause of the installer failing:

2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
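For anyone following along, that failing check can be reproduced by hand from the troubleshooting shell. A minimal sketch (the mountpoint string is taken straight from the log entry above):

```shell
# The installer greps /proc/mounts for the live image mount; if the
# entry is absent, mount_live() is the step that fails.
if grep -q "none /live" /proc/mounts 2>/dev/null; then
    echo "live image is mounted"
else
    echo "live mount missing -- mount_live() would fail here"
fi
```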
What kind of media are you installing from: usb/cd/remote console?

On 04/17/2012 10:51 AM, Adam vonNieda wrote:
Thanks for the reply Joey. I saw that too, and thought maybe my USB thumb drive was set to read only, but it's not. This box doesn't have a DVD drive, I'll try a different USB drive, and if that doesn't work, I'll dig up an external DVD drive.
Thanks again,
-Adam
Adam vonNieda Adam@vonNieda.org
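A write-protected stick can also be checked from the node's shell via the kernel's per-device read-only flag (a quick sketch; device names vary, and on the node above the live device was /dev/sdb):

```shell
# /sys/block/<dev>/ro is 1 when the kernel sees the device as
# write-protected, 0 when it is writable.
for flag in /sys/block/*/ro; do
    printf '%s: %s\n' "$flag" "$(cat "$flag" 2>/dev/null)"
done
```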
On Apr 17, 2012, at 9:07, Joey Boggs <jboggs@redhat.com> wrote:
On 04/17/2012 09:45 AM, Adam vonNieda wrote:
Hi folks,
Still hoping someone can give me a hand with this. I can't install ovirt-node 2.3.0 on a Dell C2100 server because it won't start the graphical interface. I booted up a standard F16 image this morning, and the graphical installer does start during that process. Logs are below.
Thanks very much,
-Adam
/tmp/ovirt.log ==============
/sbin/restorecon set context /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 failed:'Read-only file system'
/sbin/restorecon reset /var/cache/yum context unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t:s0
/sbin/restorecon reset /etc/sysctl.conf context system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0
/sbin/restorecon reset /boot-kdump context system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0
2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - ::::live device:::: /dev/sdb
2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions -
2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live
2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions -
2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
/var/log/ovirt.log ==================
Apr 16 09:35:53 Starting ovirt-early oVirt Node Hypervisor release 2.3.0 (1.0.fc16)
Apr 16 09:35:53 Updating /etc/default/ovirt
Apr 16 09:35:54 Updating OVIRT_BOOTIF to ''
Apr 16 09:35:54 Updating OVIRT_INIT to ''
Apr 16 09:35:54 Updating OVIRT_UPGRADE to ''
Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1'
Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset crashkernel=512M-2G:64M,2G-:128M elevator=deadline quiet rd_NO_LVM rhgb rd.luks=0 rd.md=0 rd.dm=0'
Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic'
Apr 16 09:35:54 Updating OVIRT_INSTALL to '1'
Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1'
Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw
Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw
Apr 16 09:36:09 Skip runtime mode configuration.
Apr 16 09:36:09 Completed ovirt-early
Apr 16 09:36:09 Starting ovirt-awake.
Apr 16 09:36:09 Node is operating in unmanaged mode.
Apr 16 09:36:09 Completed ovirt-awake: RETVAL=0
Apr 16 09:36:09 Starting ovirt
Apr 16 09:36:09 Completed ovirt
Apr 16 09:36:10 Starting ovirt-post
Apr 16 09:36:20 Hardware virtualization detected
Volume group "HostVG" not found
Skipping volume group HostVG
Restarting network (via systemctl): [ OK ]
Apr 16 09:36:20 Starting ovirt-post
Apr 16 09:36:21 Hardware virtualization detected
Volume group "HostVG" not found
Skipping volume group HostVG
Restarting network (via systemctl): [ OK ]
Apr 16 09:36:22 Starting ovirt-cim
Apr 16 09:36:22 Completed ovirt-cim
WARNING: persistent config storage not available
/var/log/vdsm/vdsm.log =======================
MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I am the actual vdsm 4.9-0 MainThread::DEBUG::2012-04-16 09:36:23,873::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage' MainThread::DEBUG::2012-04-16 09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 MainThread::DEBUG::2012-04-16 09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I am the actual vdsm 4.9-0 MainThread::DEBUG::2012-04-16 09:36:25,199::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage' MainThread::DEBUG::2012-04-16 09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 MainThread::DEBUG::2012-04-16 09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) MainThread::DEBUG::2012-04-16 09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS: <err> = '';<rc> = 0 MainThread::DEBUG::2012-04-16 09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) multipath Defaulting to False MainThread::DEBUG::2012-04-16 09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc, prefixName: multipath.conf, versions: 5 MainThread::DEBUG::2012-04-16 09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions found: [0] MainThread::DEBUG::2012-04-16 09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /bin/cp /etc/multipath.conf /etc/multipath.conf.1' (cwd None) MainThread::DEBUG::2012-04-16 09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipath) FAILED:<err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n';<rc> = 1 MainThread::DEBUG::2012-04-16 
I did go back and take a look at mount_live and made sure it contains a specific patch to handle USB drives properly. If you can get back to a shell prompt, run blkid and capture the output. If it's way too much to type, then just the USB drive output should be ok.
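Capturing just the USB drive's blkid output can look like the sketch below (/dev/sdb is the live device named in the log above; adjust the device name for your stick, and note that blkid generally needs root to probe devices):

```shell
# Print filesystem type/label/UUID for just the live USB device,
# with a fallback message when blkid can't probe it.
blkid /dev/sdb /dev/sdb1 2>/dev/null || echo "no blkid output (not root, or device absent)"
```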

Turns out that there might be an issue with my thumb drive. I tried another, and it worked fine. Thanks very much for the responses folks!

-Adam
0 MainThread::DEBUG::2012-04-16 09:36:25,283::multipath::128::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf' (cwd None) MainThread::DEBUG::2012-04-16 09:36:25,294::multipath::128::Storage.Misc.excCmd::(setupMultipath) FAILED:<err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n';<rc> = 1 MainThread::DEBUG::2012-04-16 09:36:25,295::multipath::131::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/multipath -F' (cwd None) MainThread::DEBUG::2012-04-16 09:36:25,323::multipath::131::Storage.Misc.excCmd::(setupMultipath) FAILED:<err> = '';<rc> = 1 MainThread::DEBUG::2012-04-16 09:36:25,323::multipath::134::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None) MainThread::DEBUG::2012-04-16 09:36:26,397::multipath::134::Storage.Misc.excCmd::(setupMultipath) SUCCESS:<err> = '';<rc> = 0 MainThread::DEBUG::2012-04-16
09:36:26,398::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None) MainThread::DEBUG::2012-04-16
09:36:26,443::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType ) SUCCESS:<err> = '';<rc> = 0 MainThread::DEBUG::2012-04-16 09:36:26,445::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex MainThread::DEBUG::2012-04-16 09:36:26,447::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o
uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_co unt, d ev_size' (cwd None) MainThread::DEBUG::2012-04-16 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> = ''; <rc> = 0 MainThread::DEBUG::2012-04-16 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex MainThread::DEBUG::2012-04-16 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex MainThread::DEBUG::2012-04-16 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None) MainThread::INFO::2012-04-16 09:36:29,307::vdsm::71::vds::(run) I am the actual vdsm 4.9-0 MainThread::DEBUG::2012-04-16
09:36:29,514::resourceManager::376::ResourceManager::(registerNamespac e) Registering namespace 'Storage' MainThread::DEBUG::2012-04-16 09:36:29,515::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 MainThread::DEBUG::2012-04-16 09:36:29,551::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) MainThread::DEBUG::2012-04-16 09:36:29,564::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS: <err> = '';<rc> = 0 MainThread::DEBUG::2012-04-16 09:36:29,565::multipath::101::Storage.Multipath::(isEnabled) Current revision of multipath.conf detected, preserving MainThread::DEBUG::2012-04-16
09:36:29,565::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None) MainThread::DEBUG::2012-04-16
09:36:29,606::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType ) SUCCESS:<err> = '';<rc> = 0 MainThread::DEBUG::2012-04-16 09:36:29,606::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex MainThread::DEBUG::2012-04-16 09:36:29,608::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o
uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_co unt, d ev_size' (cwd None) MainThread::DEBUG::2012-04-16 09:36:29,714::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> = ''; <rc> = 0 MainThread::DEBUG::2012-04-16 09:36:29,715::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex MainThread::DEBUG::2012-04-16 09:36:29,716::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex MainThread::DEBUG::2012-04-16 09:36:29,716::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_m da_s i ze,vg_mda_free' (cwd None) MainThread::DEBUG::2012-04-16 09:36:29,813::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> = ' No volume groups found\n';<rc> = 0 MainThread::DEBUG::2012-04-16 09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex MainThread::DEBUG::2012-04-16 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None) MainThread::DEBUG::2012-04-16 09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> = ' No volume groups found\n';<rc> = 0 Thread-11::DEBUG::2012-04-16 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) MainThread::INFO::2012-04-16 09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) Starting StorageDispatcher... 
Thread-11::DEBUG::2012-04-16 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to sampling method Thread-11::DEBUG::2012-04-16 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-11::DEBUG::2012-04-16 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to sampling method Thread-11::DEBUG::2012-04-16 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None) MainThread::DEBUG::2012-04-16 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) '/usr/bin/pgrep -xf ksmd' (cwd None) MainThread::DEBUG::2012-04-16 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS:<err> = '';<rc> = 0 Thread-11::DEBUG::2012-04-16 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) FAILED:<err> = 'iscsiadm: No session found.\n';<rc> = 21 Thread-11::DEBUG::2012-04-16 09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning last result Thread-11::DEBUG::2012-04-16 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) Could not kill old Super Vdsm [Errno 2] No such file or directory: '/var/run/vdsm/svdsm.pid' Thread-11::DEBUG::2012-04-16 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) Launching Super Vdsm Thread-11::DEBUG::2012-04-16 09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) '/usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.pyc bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None) MainThread::DEBUG::2012-04-16 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) Making sure I'm root MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) Parsing cmd args MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) Creating PID file MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) Cleaning old socket MainThread::DEBUG::2012-04-16 
09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) Setting up keep alive thread MainThread::DEBUG::2012-04-16 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) Creating remote object manager MainThread::DEBUG::2012-04-16 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) Started serving super vdsm object Thread-11::DEBUG::2012-04-16 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm Thread-11::DEBUG::2012-04-16 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected to Super Vdsm Thread-11::DEBUG::2012-04-16 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/multipath' (cwd None) Thread-11::DEBUG::2012-04-16 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) SUCCESS:<err> = '';<rc> = 0 Thread-11::DEBUG::2012-04-16 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-11::DEBUG::2012-04-16 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-11::DEBUG::2012-04-16 09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-11::DEBUG::2012-04-16 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex Thread-11::DEBUG::2012-04-16 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-11::DEBUG::2012-04-16 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-11::DEBUG::2012-04-16 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning last result Thread-11::DEBUG::2012-04-16 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center' 
Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt'] Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) Mount list: ['/rhev/data-center'] Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'
On 4/16/12 8:38 AM, "Mike Burns" <mburns@redhat.com> wrote:
On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote:
> Hi folks,
>
> I'm trying to install oVirt node v2.3.0 on a Dell C2100 server. I can boot up just fine, but the two menu options I see are "Start oVirt node", and "Troubleshooting". When I choose "Start oVirt node", it does just that, and I am soon after given a console login prompt. I've checked the docs, and I don't see what I'm supposed to do next, as in a password etc. Am I missing something?

Hi Adam,
Something is breaking in the boot process. You should be getting a TUI screen that will let you configure and install ovirt-node.
I just added an entry on the Node Troubleshooting wiki page[1] for you to follow.
Mike
[1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems
> Thanks,
>
> -Adam
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
This is definitely the cause of the installer failing:
2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
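For anyone hitting the same error, the check the installer runs here can be reproduced by hand from the node's shell. This is only a minimal sketch mirroring the grep in the log above; the real mount_live() in ovirtfunctions does considerably more than this one test:

```shell
#!/bin/sh
# Mirror the installer's test: is the live image mounted at /live?
# (Same pattern the log shows being grepped out of /proc/mounts.)
live_mounted() {
    grep -q "none /live" /proc/mounts
}

if live_mounted; then
    echo "/live is mounted"
else
    echo "/live is not mounted - this is the state in which mount_live() fails"
fi
```

If /live is missing, the boot media never got mounted, which matches the read-only-filesystem errors later in vdsm.log.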
What kind of media are you installing from: usb/cd/remote console?
I did go back and take a look at mount_live and made sure it contains a specific patch to handle USB drives properly. If you can get back to a shell prompt, run blkid and capture the output. If it's way too much to type, then just the USB drive output should be OK.
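For reference, here is one way to capture just the stick's entry rather than the whole blkid listing. The device name /dev/sdb is an assumption taken from the "live device" line in ovirt.log; adjust it to match your hardware:

```shell
#!/bin/sh
# Print UUID/LABEL/TYPE for block devices via blkid (part of util-linux),
# failing gracefully if the tool is not on the PATH.
usb_blkid() {
    dev="${1:-}"
    if command -v blkid >/dev/null 2>&1; then
        # $dev is deliberately unquoted so an empty argument
        # expands to nothing and blkid lists every device.
        blkid $dev
    else
        echo "blkid not found (it ships with util-linux)" >&2
        return 127
    fi
}

usb_blkid              # all devices
usb_blkid /dev/sdb     # just the suspected USB stick (hypothetical name)
```

Running it as root avoids permission-related gaps in blkid's cache on some systems.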

No prob. I am glad to hear it works!

dk

On Tue, Apr 17, 2012 at 2:48 PM, Adam vonNieda <adam@vonnieda.org> wrote:
Turns out that there might be an issue with my thumb drive. I tried another, and it worked fine. Thanks very much for the responses, folks!
-Adam
On 4/17/12 10:11 AM, "Joey Boggs" <jboggs@redhat.com> wrote:
On 04/17/2012 10:51 AM, Adam vonNieda wrote:
Thanks for the reply Joey. I saw that too, and thought maybe my USB thumb drive was set to read only, but it's not. This box doesn't have a DVD drive, I'll try a different USB drive, and if that doesn't work, I'll dig up an external DVD drive.
Thanks again,
-Adam
Adam vonNieda Adam@vonNieda.org
On Apr 17, 2012, at 9:07, Joey Boggs <jboggs@redhat.com> wrote:
On 04/17/2012 09:45 AM, Adam vonNieda wrote:
Hi folks,
Still hoping someone can give me a hand with this. I can't install ovirt-node 2.3.0 on a Dell C2100 server because it won't start the graphical interface. I booted up a standard F16 image this morning, and the graphical installer does start during that process. Logs are below.
Thanks very much,
-Adam
/tmp/ovirt.log ==============
/sbin/restorecon set context /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 failed:'Read-only file system'
/sbin/restorecon reset /var/cache/yum context unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t:s0
/sbin/restorecon reset /etc/sysctl.conf context system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0
On 4/16/12 8:38 AM, "Mike Burns" <mburns@redhat.com> wrote:
> On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote:
>> Hi folks,
>>
>> I'm trying to install oVirt node v2.3.0 on a Dell C2100 server. I can
>> boot up just fine, but the two menu options I see are "Start oVirt
>> node" and "Troubleshooting". When I choose "Start oVirt node", it does
>> just that, and I am soon after given a console login prompt. I've
>> checked the docs, and I don't see what I'm supposed to do next, as in
>> a password etc. Am I missing something?
>
> Hi Adam,
>
> Something is breaking in the boot process. You should be getting a TUI
> screen that will let you configure and install ovirt-node.
>
> I just added an entry on the Node Troubleshooting wiki page [1] for you
> to follow.
>
> Mike
>
> [1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems
>
>> Thanks,
>>
>> -Adam
>> _______________________________________________
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
This is definitely the cause of the installer failing:

2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
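For anyone who wants to reproduce this check by hand from a shell on the node: the log above shows the installer testing /proc/mounts for a "none /live" entry before mount_live() gives up. A minimal sketch (the helper name live_mounted is mine, not the installer's; the grep pattern is taken verbatim from the log line):

```shell
#!/bin/sh
# Sketch of the check the installer logs above: mount_live() expects a
# live filesystem entry at /live. The function name is hypothetical;
# the grep pattern is the one shown in the ovirtfunctions log line.
live_mounted() {
    # $1: path to a mounts table (normally /proc/mounts)
    grep -q "none /live" "$1"
}

if live_mounted /proc/mounts; then
    echo "live image mounted at /live"
else
    echo "no live mount -- this is the state in which mount_live() fails"
fi
```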
What kind of media are you installing from: usb/cd/remote console?
I did go back and take a look at mount_live and made sure it contains a specific patch to handle usb drives properly. If you can get back to a shell prompt, run blkid and capture the output. If it's way too much to type then just the usb drive output should be ok.
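For anyone following along, a hedged sketch of capturing just the stick's blkid lines so they can be posted to the list. The /dev/sdb name is an assumption (the ovirt.log in this thread reports "::::live device:::: /dev/sdb"); substitute your device:

```shell
#!/bin/sh
# Keep only the blkid lines for one device node and save them to a file,
# so the output doesn't have to be retyped from the console.
# /dev/sdb is a placeholder; adjust for your hardware.
filter_dev() {
    # keep only lines for the given device node
    grep "^$1:"
}

if command -v blkid >/dev/null 2>&1; then
    blkid | filter_dev /dev/sdb | tee /tmp/blkid-usb.txt
fi
```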
--
Dominic Kaiser
Greater Boston Vineyard
Director of Operations
cell: 617-230-1412
fax: 617-252-0238
email: dominic@bostonvineyard.org

I think I just hit the exact same issue with a SanDisk Cruzer Blade 4GB USB stick. I bought 4 of them to try to set up a test system (before we commit to real hardware) and at least 2 of them failed with both the 2.2 and 2.3 oVirt ISOs, copied using dd from a Mac.

I copied to an old 8GB "Strontium" USB stick I had lying around and it worked without issue. So it appears to be an issue with the stick. I can provide more specific information on the stick if that is useful.

It wouldn't surprise me if it's due to the low-cost nature of the stick (cost $5 AUD), but I am curious, as it booted the kernel fine.

Jason

On 18/04/2012, at 4:48 AM, Adam vonNieda wrote:
Turns out that there might be an issue with my thumb drive. I tried another, and it worked fine. Thanks very much for the responses folks!
-Adam
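Since the failures in this thread came from sticks that still booted the kernel, a quick read-back comparison after dd can catch a silently bad stick early. A minimal sketch; the names ovirt-node.iso and /dev/sdb are placeholders, not anything the installer defines:

```shell
#!/bin/sh
# After dd'ing the ISO to a stick, read back the first ISO-sized chunk
# of the device and compare checksums. A mismatch means the stick did
# not take the image correctly even if it still boots a kernel.
prefix_sum() {
    # checksum of the first $2 bytes of file/device $1
    head -c "$2" "$1" | cksum | awk '{print $1}'
}

iso=ovirt-node.iso   # placeholder name
dev=/dev/sdb         # placeholder device
if [ -r "$iso" ] && [ -r "$dev" ]; then
    size=$(wc -c < "$iso")
    if [ "$(prefix_sum "$iso" "$size")" = "$(prefix_sum "$dev" "$size")" ]; then
        echo "stick matches the ISO"
    else
        echo "MISMATCH: rewrite the stick or try another one"
    fi
fi
```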
On 4/17/12 10:11 AM, "Joey Boggs" <jboggs@redhat.com> wrote:
On 04/17/2012 10:51 AM, Adam vonNieda wrote:
Thanks for the reply Joey. I saw that too, and thought maybe my USB thumb drive was set to read only, but it's not. This box doesn't have a DVD drive, I'll try a different USB drive, and if that doesn't work, I'll dig up an external DVD drive.
Thanks again,
-Adam
Adam vonNieda Adam@vonNieda.org
On Apr 17, 2012, at 9:07, Joey Boggs <jboggs@redhat.com> wrote:
On 04/17/2012 09:45 AM, Adam vonNieda wrote:
Hi folks,
Still hoping someone can give me a hand with this. I can't install ovirt-node 2.3.0 on a Dell C2100 server because it won't start the graphical interface. I booted up a standard F16 image this morning, and the graphical installer did start during that process. Logs are below.
Thanks very much,
-Adam
/tmp/ovirt.log
==============

/sbin/restorecon set context /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 failed: 'Read-only file system'
/sbin/restorecon reset /var/cache/yum context unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t:s0
/sbin/restorecon reset /etc/sysctl.conf context system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0
/sbin/restorecon reset /boot-kdump context system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0
2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - ::::live device:::: /dev/sdb
2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions -
2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live
2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions -
2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
/var/log/ovirt.log
==================

Apr 16 09:35:53 Starting ovirt-early
oVirt Node Hypervisor release 2.3.0 (1.0.fc16)
Apr 16 09:35:53 Updating /etc/default/ovirt
Apr 16 09:35:54 Updating OVIRT_BOOTIF to ''
Apr 16 09:35:54 Updating OVIRT_INIT to ''
Apr 16 09:35:54 Updating OVIRT_UPGRADE to ''
Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1'
Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset crashkernel=512M-2G:64M,2G-:128M elevator=deadline quiet rd_NO_LVM rhgb rd.luks=0 rd.md=0 rd.dm=0'
Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic'
Apr 16 09:35:54 Updating OVIRT_INSTALL to '1'
Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1'
Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw
Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw
Apr 16 09:36:09 Skip runtime mode configuration.
Apr 16 09:36:09 Completed ovirt-early
Apr 16 09:36:09 Starting ovirt-awake.
Apr 16 09:36:09 Node is operating in unmanaged mode.
Apr 16 09:36:09 Completed ovirt-awake: RETVAL=0
Apr 16 09:36:09 Starting ovirt
Apr 16 09:36:09 Completed ovirt
Apr 16 09:36:10 Starting ovirt-post
Apr 16 09:36:20 Hardware virtualization detected
Volume group "HostVG" not found
Skipping volume group HostVG
Restarting network (via systemctl):  [  OK  ]
Apr 16 09:36:20 Starting ovirt-post
Apr 16 09:36:21 Hardware virtualization detected
Volume group "HostVG" not found
Skipping volume group HostVG
Restarting network (via systemctl):  [  OK  ]
Apr 16 09:36:22 Starting ovirt-cim
Apr 16 09:36:22 Completed ovirt-cim
WARNING: persistent config storage not available
/var/log/vdsm/vdsm.log
======================

MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I am the actual vdsm 4.9-0
MainThread::DEBUG::2012-04-16 09:36:23,873::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage'
MainThread::DEBUG::2012-04-16 09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-04-16 09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I am the actual vdsm 4.9-0
MainThread::DEBUG::2012-04-16 09:36:25,199::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage'
MainThread::DEBUG::2012-04-16 09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-04-16 09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) multipath Defaulting to False
MainThread::DEBUG::2012-04-16 09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc, prefixName: multipath.conf, versions: 5
MainThread::DEBUG::2012-04-16 09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions found: [0]
MainThread::DEBUG::2012-04-16 09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /bin/cp /etc/multipath.conf /etc/multipath.conf.1' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipath) FAILED: <err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n'; <rc> = 1
MainThread::DEBUG::2012-04-16 09:36:25,256::multipath::118::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf.1' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,269::multipath::118::Storage.Misc.excCmd::(setupMultipath) FAILED: <err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n'; <rc> = 1
MainThread::DEBUG::2012-04-16 09:36:25,270::multipath::123::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /bin/cp /tmp/tmpnPcvWi /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,283::multipath::123::Storage.Misc.excCmd::(setupMultipath) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:25,283::multipath::128::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,294::multipath::128::Storage.Misc.excCmd::(setupMultipath) FAILED: <err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n'; <rc> = 1
MainThread::DEBUG::2012-04-16 09:36:25,295::multipath::131::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/multipath -F' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,323::multipath::131::Storage.Misc.excCmd::(setupMultipath) FAILED: <err> = ''; <rc> = 1
MainThread::DEBUG::2012-04-16 09:36:25,323::multipath::134::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:26,397::multipath::134::Storage.Misc.excCmd::(setupMultipath) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:26,398::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:26,443::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:26,445::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
MainThread::DEBUG::2012-04-16 09:36:26,447::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
MainThread::DEBUG::2012-04-16 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
MainThread::DEBUG::2012-04-16 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
MainThread::INFO::2012-04-16 09:36:29,307::vdsm::71::vds::(run) I am the actual vdsm 4.9-0
MainThread::DEBUG::2012-04-16 09:36:29,514::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage'
MainThread::DEBUG::2012-04-16 09:36:29,515::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-04-16 09:36:29,551::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,564::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:29,565::multipath::101::Storage.Multipath::(isEnabled) Current revision of multipath.conf detected, preserving
MainThread::DEBUG::2012-04-16 09:36:29,565::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,606::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:29,606::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
MainThread::DEBUG::2012-04-16 09:36:29,608::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,714::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:29,715::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
MainThread::DEBUG::2012-04-16 09:36:29,716::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
MainThread::DEBUG::2012-04-16 09:36:29,716::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,813::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0
MainThread::DEBUG::2012-04-16 09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
MainThread::DEBUG::2012-04-16 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0
Thread-11::DEBUG::2012-04-16 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
MainThread::INFO::2012-04-16 09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) Starting StorageDispatcher...

Yep, that's exactly the same issue. Mine was a 16GB SanDisk Cruzer. When I switched to a no-name older 4GB stick, it worked fine. I set mine up exactly as you did as well, dd from a Mac. Mine booted the kernel just fine as well.

I tried booting up setting the "rootpw=<hash>" option as well, but that didn't work for me, so I was unable to collect any information from the "blkid" command. I tried it three times, and I know I was doing it correctly. Joey's comments below..

-Adam

<Joey's comments>
I did go back and take a look at mount_live and made sure it contains a specific patch to handle usb drives properly. If you can get back to a shell prompt, run blkid and capture the output. If it's way too much to type then just the usb drive output should be ok.

<Link to shell prompt instructions>
http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems

On 4/18/12 5:02 AM, "Jason Lawer" <akula@thegeekhood.net> wrote:
09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm > reload > operation' released the operation mutex > MainThread::DEBUG::2012-04-16 > 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >-n > /sbin/lvm lvs --config " devices { preferred_names = > [\\"^/dev/mapper/\\"] > ignore_suspended_devices=1 write_cache_state=0 > disable_after_error_count=3 > filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", > \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 > wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " > --noheadings --units b --nosuffix --separator | -o > uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None) > MainThread::DEBUG::2012-04-16 > 09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> >= > ' No > volume groups found\n';<rc> = 0 > Thread-11::DEBUG::2012-04-16 > 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to >enter > sampling method (storage.sdc.refreshStorage) > MainThread::INFO::2012-04-16 > 09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) > Starting > StorageDispatcher... 
> Thread-11::DEBUG::2012-04-16 > 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to > sampling > method > Thread-11::DEBUG::2012-04-16 > 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to >enter > sampling method (storage.iscsi.rescan) > Thread-11::DEBUG::2012-04-16 > 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to > sampling > method > Thread-11::DEBUG::2012-04-16 > 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) > '/usr/bin/sudo -n > /sbin/iscsiadm -m session -R' (cwd None) > MainThread::DEBUG::2012-04-16 > 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) > '/usr/bin/pgrep > -xf ksmd' (cwd None) > MainThread::DEBUG::2012-04-16 > 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) > SUCCESS:<err> = > '';<rc> = 0 > Thread-11::DEBUG::2012-04-16 > 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) >FAILED:<err> > = > 'iscsiadm: No session found.\n';<rc> = 21 > Thread-11::DEBUG::2012-04-16 > 09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning last > result > Thread-11::DEBUG::2012-04-16 > 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) Could > not > kill old Super Vdsm [Errno 2] No such file or directory: > '/var/run/vdsm/svdsm.pid' > Thread-11::DEBUG::2012-04-16 > 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) > Launching > Super Vdsm > Thread-11::DEBUG::2012-04-16 > >09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) > '/usr/bin/sudo -n /usr/bin/python >/usr/share/vdsm/supervdsmServer.pyc > bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None) > MainThread::DEBUG::2012-04-16 > 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) Making > sure > I'm root > MainThread::DEBUG::2012-04-16 > 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) >Parsing > cmd > args > MainThread::DEBUG::2012-04-16 > 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) > Creating PID > file > MainThread::DEBUG::2012-04-16 > 
09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) > Cleaning old > socket > MainThread::DEBUG::2012-04-16 > 09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) >Setting > up > keep alive thread > MainThread::DEBUG::2012-04-16 > 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) >Creating > remote object manager > MainThread::DEBUG::2012-04-16 > 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) >Started > serving super vdsm object > Thread-11::DEBUG::2012-04-16 > 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to > connect > to Super Vdsm > Thread-11::DEBUG::2012-04-16 > 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected >to > Super > Vdsm > Thread-11::DEBUG::2012-04-16 > 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) > '/usr/bin/sudo > -n /sbin/multipath' (cwd None) > Thread-11::DEBUG::2012-04-16 > 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) > SUCCESS:<err> > = '';<rc> = 0 > Thread-11::DEBUG::2012-04-16 > 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) > Operation 'lvm > invalidate operation' got the operation mutex > Thread-11::DEBUG::2012-04-16 > 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) > Operation 'lvm > invalidate operation' released the operation mutex > Thread-11::DEBUG::2012-04-16 > 09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) > Operation 'lvm > invalidate operation' got the operation mutex > Thread-11::DEBUG::2012-04-16 > 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) > Operation 'lvm > invalidate operation' released the operation mutex > Thread-11::DEBUG::2012-04-16 > 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) > Operation 'lvm > invalidate operation' got the operation mutex > Thread-11::DEBUG::2012-04-16 > 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) > Operation 'lvm > invalidate operation' released the operation mutex > Thread-11::DEBUG::2012-04-16 > 
09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning last > result > Thread-11::DEBUG::2012-04-16 > 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) > Started > cleaning storage repository at '/rhev/data-center' > Thread-11::DEBUG::2012-04-16 > 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) >White > list: ['/rhev/data-center/hsm-tasks', > '/rhev/data-center/hsm-tasks/*', > '/rhev/data-center/mnt'] > Thread-11::DEBUG::2012-04-16 > 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) >Mount > list: ['/rhev/data-center'] > Thread-11::DEBUG::2012-04-16 > 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) > Cleaning > leftovers > Thread-11::DEBUG::2012-04-16 > 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) > Finished > cleaning storage repository at '/rhev/data-center' > > > > > > > > > > On 4/16/12 8:38 AM, "Mike Burns"<mburns@redhat.com> wrote: > >> On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote: >>> Hi folks, >>> >>> >>> I'm trying to install oVirt node v2.3.0 on A Dell C2100 >>> server. I >>> can boot up just fine, but the two menu options I see are "Start >>> oVirt >>> node", and "Troubleshooting". When I choose "Start oVirt node", >>>it >>> does just that, and I am soon after given a console login prompt. >>> I've >>> checked the docs, and I don't see what I'm supposed to do next, >>>as >>> in >>> a password etc. Am I missing something? >> Hi Adam, >> >> Something is breaking in the boot process. You should be getting >>a >> TUI >> screen that will let you configure and install ovirt-node. >> >> I just added an entry on the Node Troublesooting wiki page[1] for >> you to >> follow. 
>> >> Mike >> >> [1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems >> >> >>> Thanks, >>> >>> >>> -Adam >>> _______________________________________________ >>> Users mailing list >>> Users@ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users This is definitely the cause of the installer failing
2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
What kind of media are you installing from: USB, CD, or remote console?
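For reference, the check that fails in the quoted log can be reproduced by hand. This is only a sketch of what ovirtfunctions appears to run (the actual function body isn't shown in the thread), demonstrated against a sample mounts table instead of the real /proc/mounts so it can run anywhere:

```shell
# Hedged sketch: mount_live() fails when no "none /live" entry exists
# in /proc/mounts. A sample mounts table stands in for the real file.
mounts='/dev/sdb1 /run/initramfs/live vfat rw 0 0
none /live tmpfs rw 0 0'
if printf '%s\n' "$mounts" | grep -q "none /live"; then
    echo "live image mounted"          # prints: live image mounted
else
    echo "live image NOT mounted - mount_live() would fail"
fi
```

On the node itself the equivalent check would be `grep -q "none /live" /proc/mounts`.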
I did go back and take a look at mount_live and made sure it contains a specific patch to handle USB drives properly. If you can get back to a shell prompt, run blkid and capture the output. If it's way too much to type, then just the USB drive's output should be OK.

On 04/18/2012 08:40 AM, Adam vonNieda wrote:
Yep, that's exactly the same issue. Mine was a 16 GB SanDisk Cruzer. When I switched to a no-name older 4 GB stick, it worked fine. I set mine up exactly as you did as well: dd from a Mac. Mine booted the kernel just fine too. I tried booting with "rootpw=<hash>" set as well, but that didn't work for me, so I was unable to collect any information from the "blkid" command. I tried it three times, and I know I was doing it correctly. Joey's comments are below..
-Adam
<Joey's comments>
I did go back and take a look at mount_live and made sure it contains a specific patch to handle USB drives properly. If you can get back to a shell prompt, run blkid and capture the output. If it's way too much to type, then just the USB drive's output should be OK.
<Link to shell prompt instructions>
http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems
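On the "rootpw=<hash>" parameter mentioned above: the value is a crypt(3) password hash, not a plaintext password. One common way to generate one is with openssl (using openssl here is an assumption; any crypt-capable tool works, and "secret" and the fixed salt are placeholders):

```shell
# Generate an MD5-crypt hash suitable for a rootpw=<hash> style boot
# parameter. The fixed salt makes the output reproducible for this
# example; normally omit -salt so openssl picks a random one.
hash=$(openssl passwd -1 -salt xyzzy secret)
echo "$hash"
```

The output starts with `$1$xyzzy$`; the whole string is what would go after `rootpw=` on the kernel command line.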
On 4/18/12 5:02 AM, "Jason Lawer"<akula@thegeekhood.net> wrote:
I think I just hit the exact same issue with a SanDisk Cruzer Blade 4 GB USB stick. I bought four of them to set up a test system (before we commit to real hardware), and at least two of them failed with both the 2.2 and 2.3 oVirt ISOs, copied using dd from a Mac.
I copied the image to an old 8 GB "Strontium" USB stick I had lying around, and it worked without issue. So it appears to be a problem with the stick.
I can provide more specific information on the stick if that would be useful.
It wouldn't surprise me if it's due to the low-cost nature of the stick (it cost $5 AUD), but I am curious, since it booted the kernel fine.
Jason

On 18/04/2012, at 4:48 AM, Adam vonNieda wrote:
Turns out that there might be an issue with my thumb drive. I tried another, and it worked fine. Thanks very much for the responses folks!
-Adam
On 4/17/12 10:11 AM, "Joey Boggs"<jboggs@redhat.com> wrote:
On 04/17/2012 10:51 AM, Adam vonNieda wrote:
Thanks for the reply, Joey. I saw that too, and thought maybe my USB thumb drive was set to read-only, but it's not. This box doesn't have a DVD drive; I'll try a different USB drive, and if that doesn't work, I'll dig up an external DVD drive.
Thanks again,
-Adam
Adam vonNieda Adam@vonNieda.org
On Apr 17, 2012, at 9:07, Joey Boggs<jboggs@redhat.com> wrote:
On 04/17/2012 09:45 AM, Adam vonNieda wrote: > Hi folks, > > Still hoping someone can give me a hand with this. I can't > install > overt-node 2.3.0 on a on a Dell C2100 server because it won't start > the > graphical interface. I booted up a standard F16 image this morning, > and > the graphical installer does start during that process. Logs are > below. > > Thanks very much, > > -Adam > > >> /tmp/ovirt.log >> ============== >> >> /sbin/restorecon set context >> /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 >> failed:'Read-only >> file system' >> /sbin/restorecon reset /var/cache/yum context >> >> >> unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache >> _t >> :s0 >> /sbin/restorecon reset /etc/sysctl.conf context >> >> >> system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t: >> s0 >> /sbin/restorecon reset /boot-kdump context >> system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0 >> 2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - ::::live >> device:::: >> /dev/sdb >> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat >> /proc/mounts|grep >> -q "none /live" >> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - >> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live >> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - >> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to >> mount_live() >> >> /var/log/ovirt.log >> ================== >> >> Apr 16 09:35:53 Starting ovirt-early >> oVirt Node Hypervisor release 2.3.0 (1.0.fc16) >> Apr 16 09:35:53 Updating /etc/default/ovirt >> Apr 16 09:35:54 Updating OVIRT_BOOTIF to '' >> Apr 16 09:35:54 Updating OVIRT_INIT to '' >> Apr 16 09:35:54 Updating OVIRT_UPGRADE to '' >> Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1' >> Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset >> crashkernel=512M-2G:64M,2G-:128M elevator=deadline quiet rd_NO_LVM >> rhgb >> rd.luks=0 rd.md=0 rd.dm=0' >> Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic' >> 
Apr 16 09:35:54 Updating OVIRT_INSTALL to '1' >> Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1' >> Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw >> Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw >> Apr 16 09:36:09 Skip runtime mode configuration. >> Apr 16 09:36:09 Completed ovirt-early >> Apr 16 09:36:09 Starting ovirt-awake. >> Apr 16 09:36:09 Node is operating in unmanaged mode. >> Apr 16 09:36:09 Completed ovirt-awake: RETVAL=0 >> Apr 16 09:36:09 Starting ovirt >> Apr 16 09:36:09 Completed ovirt >> Apr 16 09:36:10 Starting ovirt-post >> Apr 16 09:36:20 Hardware virtualization detected >> Volume group "HostVG" not found >> Skipping volume group HostVG >> Restarting network (via systemctl): [ OK ] >> Apr 16 09:36:20 Starting ovirt-post >> Apr 16 09:36:21 Hardware virtualization detected >> Volume group "HostVG" not found >> Skipping volume group HostVG >> Restarting network (via systemctl): [ OK ] >> Apr 16 09:36:22 Starting ovirt-cim >> Apr 16 09:36:22 Completed ovirt-cim >> WARNING: persistent config storage not available >> >> /var/log/vdsm/vdsm.log >> ======================= >> >> MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I >> am >> the >> actual vdsm 4.9-0 >> MainThread::DEBUG::2012-04-16 >> >> >> 09:36:23,873::resourceManager::376::ResourceManager::(registerNamesp >> ac >> e) >> Registering namespace 'Storage' >> MainThread::DEBUG::2012-04-16 >> 09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter - >> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >> MainThread::DEBUG::2012-04-16 >> 09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled) >> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >> MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I >> am >> the >> actual vdsm 4.9-0 >> MainThread::DEBUG::2012-04-16 >> >> >> 09:36:25,199::resourceManager::376::ResourceManager::(registerNamesp >> ac >> e) >> Registering namespace 'Storage' >> 
MainThread::DEBUG::2012-04-16 >> 09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter - >> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >> MainThread::DEBUG::2012-04-16 >> 09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled) >> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >> MainThread::DEBUG::2012-04-16 >> 09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) >> SUCCESS: >> <err> = '';<rc> = 0 >> MainThread::DEBUG::2012-04-16 >> 09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) >> multipath >> Defaulting to False >> MainThread::DEBUG::2012-04-16 >> 09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc, >> prefixName: multipath.conf, versions: 5 >> MainThread::DEBUG::2012-04-16 >> 09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions >> found: >> [0] >> MainThread::DEBUG::2012-04-16 >> 09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipath) >> '/usr/bin/sudo -n /bin/cp /etc/multipath.conf >> /etc/multipath.conf.1' >> (cwd >> None) >> MainThread::DEBUG::2012-04-16 >> 09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipath) >> FAILED:<err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: >> Read-only >> file >> system\nsudo: sorry, a password is required to run sudo\n';<rc> >> = 1 >> MainThread::DEBUG::2012-04-16 >> 09:36:25,256::multipath::118::Storage.Misc.excCmd::(setupMultipath) >> '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf.1' (cwd >> None) >> MainThread::DEBUG::2012-04-16 >> 09:36:25,269::multipath::118::Storage.Misc.excCmd::(setupMultipath) >> FAILED:<err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: >> Read-only >> file >> system\nsudo: sorry, a password is required to run sudo\n';<rc> >> = 1 >> MainThread::DEBUG::2012-04-16 >> 09:36:25,270::multipath::123::Storage.Misc.excCmd::(setupMultipath) >> '/usr/bin/sudo -n /bin/cp /tmp/tmpnPcvWi /etc/multipath.conf' (cwd >> None) >> MainThread::DEBUG::2012-04-16 >> 
09:36:25,283::multipath::123::Storage.Misc.excCmd::(setupMultipath) >> SUCCESS:<err> = '';<rc> = 0 >> MainThread::DEBUG::2012-04-16 >> 09:36:25,283::multipath::128::Storage.Misc.excCmd::(setupMultipath) >> '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf' (cwd None) >> MainThread::DEBUG::2012-04-16 >> 09:36:25,294::multipath::128::Storage.Misc.excCmd::(setupMultipath) >> FAILED:<err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: >> Read-only >> file >> system\nsudo: sorry, a password is required to run sudo\n';<rc> >> = 1 >> MainThread::DEBUG::2012-04-16 >> 09:36:25,295::multipath::131::Storage.Misc.excCmd::(setupMultipath) >> '/usr/bin/sudo -n /sbin/multipath -F' (cwd None) >> MainThread::DEBUG::2012-04-16 >> 09:36:25,323::multipath::131::Storage.Misc.excCmd::(setupMultipath) >> FAILED:<err> = '';<rc> = 1 >> MainThread::DEBUG::2012-04-16 >> 09:36:25,323::multipath::134::Storage.Misc.excCmd::(setupMultipath) >> '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None) >> MainThread::DEBUG::2012-04-16 >> 09:36:26,397::multipath::134::Storage.Misc.excCmd::(setupMultipath) >> SUCCESS:<err> = '';<rc> = 0 >> MainThread::DEBUG::2012-04-16 >> >> >> 09:36:26,398::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy >> pe >> ) >> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd >> None) >> MainThread::DEBUG::2012-04-16 >> >> >> 09:36:26,443::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy >> pe >> ) >> SUCCESS:<err> = '';<rc> = 0 >> MainThread::DEBUG::2012-04-16 >> 09:36:26,445::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm >> reload >> operation' got the operation mutex >> MainThread::DEBUG::2012-04-16 >> 09:36:26,447::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >> -n >> /sbin/lvm pvs --config " devices { preferred_names = >> [\\"^/dev/mapper/\\"] >> ignore_suspended_devices=1 write_cache_state=0 >> disable_after_error_count=3 >> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >> \\"r%.*%\\" ] } global { 
locking_type=1 prioritise_write_locks=1 >> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " >> --noheadings --units b --nosuffix --separator | -o >> >> >> uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_ >> co >> unt, >> d >> ev_size' (cwd None) >> MainThread::DEBUG::2012-04-16 >> 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> >> = >> ''; >> <rc> = 0 >> MainThread::DEBUG::2012-04-16 >> 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm >> reload >> operation' released the operation mutex >> MainThread::DEBUG::2012-04-16 >> 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm >> reload >> operation' got the operation mutex >> MainThread::DEBUG::2012-04-16 >> 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >> -n >> /sbin/lvm vgs --config " devices { preferred_names = >> [\\"^/dev/mapper/\\"] >> ignore_suspended_devices=1 write_cache_state=0 >> disable_after_error_count=3 >> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >> \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 >> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " >> --noheadings --units b --nosuffix --separator | -o >> >> >> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg >> _m >> da_s >> i >> ze,vg_mda_free' (cwd None) >> MainThread::INFO::2012-04-16 09:36:29,307::vdsm::71::vds::(run) I >> am >> the >> actual vdsm 4.9-0 >> MainThread::DEBUG::2012-04-16 >> >> >> 09:36:29,514::resourceManager::376::ResourceManager::(registerNamesp >> ac >> e) >> Registering namespace 'Storage' >> MainThread::DEBUG::2012-04-16 >> 09:36:29,515::threadPool::45::Misc.ThreadPool::(__init__) Enter - >> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >> MainThread::DEBUG::2012-04-16 >> 09:36:29,551::multipath::85::Storage.Misc.excCmd::(isEnabled) >> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >> MainThread::DEBUG::2012-04-16 >> 
09:36:29,564::multipath::85::Storage.Misc.excCmd::(isEnabled) >> SUCCESS: >> <err> = '';<rc> = 0 >> MainThread::DEBUG::2012-04-16 >> 09:36:29,565::multipath::101::Storage.Multipath::(isEnabled) >> Current >> revision of multipath.conf detected, preserving >> MainThread::DEBUG::2012-04-16 >> >> >> 09:36:29,565::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy >> pe >> ) >> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd >> None) >> MainThread::DEBUG::2012-04-16 >> >> >> 09:36:29,606::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy >> pe >> ) >> SUCCESS:<err> = '';<rc> = 0 >> MainThread::DEBUG::2012-04-16 >> 09:36:29,606::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm >> reload >> operation' got the operation mutex >> MainThread::DEBUG::2012-04-16 >> 09:36:29,608::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >> -n >> /sbin/lvm pvs --config " devices { preferred_names = >> [\\"^/dev/mapper/\\"] >> ignore_suspended_devices=1 write_cache_state=0 >> disable_after_error_count=3 >> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >> \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 >> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " >> --noheadings --units b --nosuffix --separator | -o >> >> >> uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_ >> co >> unt, >> d >> ev_size' (cwd None) >> MainThread::DEBUG::2012-04-16 >> 09:36:29,714::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> >> = >> ''; >> <rc> = 0 >> MainThread::DEBUG::2012-04-16 >> 09:36:29,715::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm >> reload >> operation' released the operation mutex >> MainThread::DEBUG::2012-04-16 >> 09:36:29,716::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm >> reload >> operation' got the operation mutex >> MainThread::DEBUG::2012-04-16 >> 09:36:29,716::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >> -n >> /sbin/lvm vgs --config " devices { 
preferred_names = >> [\\"^/dev/mapper/\\"] >> ignore_suspended_devices=1 write_cache_state=0 >> disable_after_error_count=3 >> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >> \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 >> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " >> --noheadings --units b --nosuffix --separator | -o >> >> >> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg >> _m >> da_s >> i >> ze,vg_mda_free' (cwd None) >> MainThread::DEBUG::2012-04-16 >> 09:36:29,813::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> >> = >> ' No >> volume groups found\n';<rc> = 0 >> MainThread::DEBUG::2012-04-16 >> 09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm >> reload >> operation' released the operation mutex >> MainThread::DEBUG::2012-04-16 >> 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >> -n >> /sbin/lvm lvs --config " devices { preferred_names = >> [\\"^/dev/mapper/\\"] >> ignore_suspended_devices=1 write_cache_state=0 >> disable_after_error_count=3 >> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >> \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 >> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " >> --noheadings --units b --nosuffix --separator | -o >> uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None) >> MainThread::DEBUG::2012-04-16 >> 09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> >> = >> ' No >> volume groups found\n';<rc> = 0 >> Thread-11::DEBUG::2012-04-16 >> 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to >> enter >> sampling method (storage.sdc.refreshStorage) >> MainThread::INFO::2012-04-16 >> 09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) >> Starting >> StorageDispatcher... 
>> Thread-11::DEBUG::2012-04-16 >> 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to >> sampling >> method >> Thread-11::DEBUG::2012-04-16 >> 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to >> enter >> sampling method (storage.iscsi.rescan) >> Thread-11::DEBUG::2012-04-16 >> 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to >> sampling >> method >> Thread-11::DEBUG::2012-04-16 >> 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) >> '/usr/bin/sudo -n >> /sbin/iscsiadm -m session -R' (cwd None) >> MainThread::DEBUG::2012-04-16 >> 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) >> '/usr/bin/pgrep >> -xf ksmd' (cwd None) >> MainThread::DEBUG::2012-04-16 >> 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) >> SUCCESS:<err> = >> '';<rc> = 0 >> Thread-11::DEBUG::2012-04-16 >> 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) >> FAILED:<err> >> = >> 'iscsiadm: No session found.\n';<rc> = 21 >> Thread-11::DEBUG::2012-04-16 >> 09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning last >> result >> Thread-11::DEBUG::2012-04-16 >> 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) Could >> not >> kill old Super Vdsm [Errno 2] No such file or directory: >> '/var/run/vdsm/svdsm.pid' >> Thread-11::DEBUG::2012-04-16 >> 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) >> Launching >> Super Vdsm >> Thread-11::DEBUG::2012-04-16 >> >> 09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) >> '/usr/bin/sudo -n /usr/bin/python >> /usr/share/vdsm/supervdsmServer.pyc >> bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None) >> MainThread::DEBUG::2012-04-16 >> 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) Making >> sure >> I'm root >> MainThread::DEBUG::2012-04-16 >> 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) >> Parsing >> cmd >> args >> MainThread::DEBUG::2012-04-16 >> 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) >> Creating 
PID >> file >> MainThread::DEBUG::2012-04-16 >> 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) >> Cleaning old >> socket >> MainThread::DEBUG::2012-04-16 >> 09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) >> Setting >> up >> keep alive thread >> MainThread::DEBUG::2012-04-16 >> 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) >> Creating >> remote object manager >> MainThread::DEBUG::2012-04-16 >> 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) >> Started >> serving super vdsm object >> Thread-11::DEBUG::2012-04-16 >> 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to >> connect >> to Super Vdsm >> Thread-11::DEBUG::2012-04-16 >> 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected >> to >> Super >> Vdsm >> Thread-11::DEBUG::2012-04-16 >> 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) >> '/usr/bin/sudo >> -n /sbin/multipath' (cwd None) >> Thread-11::DEBUG::2012-04-16 >> 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) >> SUCCESS:<err> >> = '';<rc> = 0 >> Thread-11::DEBUG::2012-04-16 >> 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) >> Operation 'lvm >> invalidate operation' got the operation mutex >> Thread-11::DEBUG::2012-04-16 >> 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) >> Operation 'lvm >> invalidate operation' released the operation mutex >> Thread-11::DEBUG::2012-04-16 >> 09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) >> Operation 'lvm >> invalidate operation' got the operation mutex >> Thread-11::DEBUG::2012-04-16 >> 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) >> Operation 'lvm >> invalidate operation' released the operation mutex >> Thread-11::DEBUG::2012-04-16 >> 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) >> Operation 'lvm >> invalidate operation' got the operation mutex >> Thread-11::DEBUG::2012-04-16 >> 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) >> Operation 
'lvm >> invalidate operation' released the operation mutex >> Thread-11::DEBUG::2012-04-16 >> 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning last >> result >> Thread-11::DEBUG::2012-04-16 >> 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) >> Started >> cleaning storage repository at '/rhev/data-center' >> Thread-11::DEBUG::2012-04-16 >> 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) >> White >> list: ['/rhev/data-center/hsm-tasks', >> '/rhev/data-center/hsm-tasks/*', >> '/rhev/data-center/mnt'] >> Thread-11::DEBUG::2012-04-16 >> 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) >> Mount >> list: ['/rhev/data-center'] >> Thread-11::DEBUG::2012-04-16 >> 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) >> Cleaning >> leftovers >> Thread-11::DEBUG::2012-04-16 >> 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) >> Finished >> cleaning storage repository at '/rhev/data-center' >> >> >> >> >> >> >> >> >> >> On 4/16/12 8:38 AM, "Mike Burns"<mburns@redhat.com> wrote: >> >>> On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote: >>>> Hi folks, >>>> >>>> >>>> I'm trying to install oVirt node v2.3.0 on A Dell C2100 >>>> server. I >>>> can boot up just fine, but the two menu options I see are "Start >>>> oVirt >>>> node", and "Troubleshooting". When I choose "Start oVirt node", >>>> it >>>> does just that, and I am soon after given a console login prompt. >>>> I've >>>> checked the docs, and I don't see what I'm supposed to do next, >>>> as >>>> in >>>> a password etc. Am I missing something? >>> Hi Adam, >>> >>> Something is breaking in the boot process. You should be getting >>> a >>> TUI >>> screen that will let you configure and install ovirt-node. >>> >>> I just added an entry on the Node Troublesooting wiki page[1] for >>> you to >>> follow. 
>>> >>> Mike >>> >>> [1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems >>> >>> >>>> Thanks, >>>> >>>> >>>> -Adam >>>> _______________________________________________ >>>> Users mailing list >>>> Users@ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users > _______________________________________________ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users This is definitely the cause of the installer failing
2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
What kind of media are you installing from: usb/cd/remote console?
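For anyone following along from a shell on the node, the failing check can be reproduced by hand. This is a minimal sketch of the test that ovirtfunctions logs above, not the installer's actual code:

```shell
# Sketch of the check logged by ovirtfunctions: the installer expects
# the live image to appear as a "none /live" entry in /proc/mounts
# before it proceeds.
if grep -q "none /live" /proc/mounts 2>/dev/null; then
    echo "live image mounted"
else
    echo "live image not mounted - this is where mount_live() gives up"
fi
```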
Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
I did go back and take a look at mount_live and made sure it contains a specific patch to handle usb drives properly. If you can get back to a shell prompt, run blkid and capture the output. If it's way too much to type, then just the usb drive output should be ok.
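If typing it out is the constraint, the output can be narrowed to just the stick. A sketch, assuming the drive shows up as /dev/sdb as in the "live device: /dev/sdb" line of the installer log (that device path is an assumption to check against your own system):

```shell
# Narrow blkid to the suspect stick; /dev/sdb is an assumption taken
# from the installer log above. Guarded so the sketch degrades
# gracefully on systems where blkid is absent.
if command -v blkid >/dev/null 2>&1; then
    blkid 2>/dev/null | grep '^/dev/sdb' || echo "no /dev/sdb entries reported"
else
    echo "blkid not available on this system"
fi
```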
Just curious, do those SanDisk drives still come with the U3 software on them? If so, you may want to remove it, since it can alter the way the drive is presented, and that could be causing this. I've got a 2-3 year old 8GB SanDisk Cruzer with the U3 software removed, and that works fine. Not sure if it's related, but it might be worth checking.

Dunno if they do or not, but if they did, it would have been wiped out by the "dd", as that's going straight to the device, not a partition within.

On 4/18/12 8:19 AM, "Joey Boggs" <jboggs@redhat.com> wrote:
On 04/18/2012 08:40 AM, Adam vonNieda wrote:
Yep, that's exactly the same issue. Mine was a 16GB SanDisk Cruzer. When I switched to a no-name older 4GB stick, it worked fine. I set mine up exactly as you did as well: dd from a Mac. Mine booted the kernel just fine as well. I tried booting up with "rootpw=<hash>" set as well, but that didn't work for me, so I was unable to collect any information from the "blkid" command. I tried it three times, and I know I was doing it correctly. Joey's comments below:
-Adam
<Joey's comments>
I did go back and take a look at mount_live and made sure it contains a specific patch to handle usb drives properly. If you can get back to a shell prompt, run blkid and capture the output. If it's way too much to type, then just the usb drive output should be ok.
<Link to shell prompt instructions>
http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems
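On the "rootpw=<hash>" attempt mentioned above: the parameter takes a crypt-format hash on the kernel command line. A sketch of generating one locally; the password here is a throwaway example, and whether a given node build accepts MD5-crypt versus SHA-512-crypt is an assumption to verify:

```shell
# Generate a crypt-format hash for the rootpw= boot parameter.
# "s3cret" is a throwaway example password; -1 produces an MD5-crypt
# hash ("$1$..."). Some builds may prefer SHA-512 ("openssl passwd -6"
# where supported).
hash=$(openssl passwd -1 's3cret')
echo "append to the kernel line: rootpw=$hash"
```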
On 4/18/12 5:02 AM, "Jason Lawer"<akula@thegeekhood.net> wrote:
I think I just hit the exact same issue with a SanDisk Cruzer Blade 4GB USB stick. I bought four of them to set up a test system (before we commit to real hardware), and at least two of them failed with both the 2.2 and 2.3 oVirt ISOs, copied using dd from a Mac.
I copied the image to an old 8GB "Strontium" USB stick I had lying around, and it worked without issue. So it appears to be a problem with the stick.
I can provide more specific information on the stick or such if that is useful.
It wouldn't surprise me if it's due to the low-cost nature of the stick (they cost $5 AUD each), but I am curious, as it booted the kernel fine.
Jason

On 18/04/2012, at 4:48 AM, Adam vonNieda wrote:
Turns out that there might be an issue with my thumb drive. I tried another, and it worked fine. Thanks very much for the responses folks!
-Adam
On 4/17/12 10:11 AM, "Joey Boggs"<jboggs@redhat.com> wrote:
On 04/17/2012 10:51 AM, Adam vonNieda wrote:
Thanks for the reply, Joey. I saw that too, and thought maybe my USB thumb drive was set to read-only, but it's not. This box doesn't have a DVD drive, so I'll try a different USB drive, and if that doesn't work, I'll dig up an external DVD drive.
Thanks again,
-Adam
Adam vonNieda
Adam@vonNieda.org
On Apr 17, 2012, at 9:07, Joey Boggs<jboggs@redhat.com> wrote:
> On 04/17/2012 09:45 AM, Adam vonNieda wrote: >> Hi folks, >> >> Still hoping someone can give me a hand with this. I can't >> install >> overt-node 2.3.0 on a on a Dell C2100 server because it won't >>start >> the >> graphical interface. I booted up a standard F16 image this >>morning, >> and >> the graphical installer does start during that process. Logs are >> below. >> >> Thanks very much, >> >> -Adam >> >> >>> /tmp/ovirt.log >>> ============== >>> >>> /sbin/restorecon set context >>> /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 >>> failed:'Read-only >>> file system' >>> /sbin/restorecon reset /var/cache/yum context >>> >>> >>> >>>unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cac >>>he >>> _t >>> :s0 >>> /sbin/restorecon reset /etc/sysctl.conf context >>> >>> >>> >>>system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_ >>>t: >>> s0 >>> /sbin/restorecon reset /boot-kdump context >>> system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0 >>> 2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - >>>::::live >>> device:::: >>> /dev/sdb >>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat >>> /proc/mounts|grep >>> -q "none /live" >>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - >>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live >>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - >>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to >>> mount_live() >>> >>> /var/log/ovirt.log >>> ================== >>> >>> Apr 16 09:35:53 Starting ovirt-early >>> oVirt Node Hypervisor release 2.3.0 (1.0.fc16) >>> Apr 16 09:35:53 Updating /etc/default/ovirt >>> Apr 16 09:35:54 Updating OVIRT_BOOTIF to '' >>> Apr 16 09:35:54 Updating OVIRT_INIT to '' >>> Apr 16 09:35:54 Updating OVIRT_UPGRADE to '' >>> Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1' >>> Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset >>> crashkernel=512M-2G:64M,2G-:128M elevator=deadline quiet >>>rd_NO_LVM >>> rhgb 
>>> rd.luks=0 rd.md=0 rd.dm=0' >>> Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic' >>> Apr 16 09:35:54 Updating OVIRT_INSTALL to '1' >>> Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1' >>> Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw >>> Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw >>> Apr 16 09:36:09 Skip runtime mode configuration. >>> Apr 16 09:36:09 Completed ovirt-early >>> Apr 16 09:36:09 Starting ovirt-awake. >>> Apr 16 09:36:09 Node is operating in unmanaged mode. >>> Apr 16 09:36:09 Completed ovirt-awake: RETVAL=0 >>> Apr 16 09:36:09 Starting ovirt >>> Apr 16 09:36:09 Completed ovirt >>> Apr 16 09:36:10 Starting ovirt-post >>> Apr 16 09:36:20 Hardware virtualization detected >>> Volume group "HostVG" not found >>> Skipping volume group HostVG >>> Restarting network (via systemctl): [ OK ] >>> Apr 16 09:36:20 Starting ovirt-post >>> Apr 16 09:36:21 Hardware virtualization detected >>> Volume group "HostVG" not found >>> Skipping volume group HostVG >>> Restarting network (via systemctl): [ OK ] >>> Apr 16 09:36:22 Starting ovirt-cim >>> Apr 16 09:36:22 Completed ovirt-cim >>> WARNING: persistent config storage not available >>> >>> /var/log/vdsm/vdsm.log >>> ======================= >>> >>> MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I >>> am >>> the >>> actual vdsm 4.9-0 >>> MainThread::DEBUG::2012-04-16 >>> >>> >>> >>>09:36:23,873::resourceManager::376::ResourceManager::(registerName >>>sp >>> ac >>> e) >>> Registering namespace 'Storage' >>> MainThread::DEBUG::2012-04-16 >>> 09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled) >>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >>> MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I >>> am >>> the >>> actual vdsm 4.9-0 >>> MainThread::DEBUG::2012-04-16 
>>> >>> >>> >>>09:36:25,199::resourceManager::376::ResourceManager::(registerName >>>sp >>> ac >>> e) >>> Registering namespace 'Storage' >>> MainThread::DEBUG::2012-04-16 >>> 09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled) >>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) >>> SUCCESS: >>> <err> = '';<rc> = 0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) >>> multipath >>> Defaulting to False >>> MainThread::DEBUG::2012-04-16 >>> 09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc, >>> prefixName: multipath.conf, versions: 5 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions >>> found: >>> [0] >>> MainThread::DEBUG::2012-04-16 >>> >>>09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipath >>>) >>> '/usr/bin/sudo -n /bin/cp /etc/multipath.conf >>> /etc/multipath.conf.1' >>> (cwd >>> None) >>> MainThread::DEBUG::2012-04-16 >>> >>>09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipath >>>) >>> FAILED:<err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: >>> Read-only >>> file >>> system\nsudo: sorry, a password is required to run sudo\n';<rc> >>> = 1 >>> MainThread::DEBUG::2012-04-16 >>> >>>09:36:25,256::multipath::118::Storage.Misc.excCmd::(setupMultipath >>>) >>> '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf.1' (cwd >>> None) >>> MainThread::DEBUG::2012-04-16 >>> >>>09:36:25,269::multipath::118::Storage.Misc.excCmd::(setupMultipath >>>) >>> FAILED:<err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: >>> Read-only >>> file >>> system\nsudo: sorry, a password is required to run sudo\n';<rc> >>> = 1 >>> MainThread::DEBUG::2012-04-16 >>> 
>>>09:36:25,270::multipath::123::Storage.Misc.excCmd::(setupMultipath >>>) >>> '/usr/bin/sudo -n /bin/cp /tmp/tmpnPcvWi /etc/multipath.conf' >>>(cwd >>> None) >>> MainThread::DEBUG::2012-04-16 >>> >>>09:36:25,283::multipath::123::Storage.Misc.excCmd::(setupMultipath >>>) >>> SUCCESS:<err> = '';<rc> = 0 >>> MainThread::DEBUG::2012-04-16 >>> >>>09:36:25,283::multipath::128::Storage.Misc.excCmd::(setupMultipath >>>) >>> '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf' (cwd >>>None) >>> MainThread::DEBUG::2012-04-16 >>> >>>09:36:25,294::multipath::128::Storage.Misc.excCmd::(setupMultipath >>>) >>> FAILED:<err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: >>> Read-only >>> file >>> system\nsudo: sorry, a password is required to run sudo\n';<rc> >>> = 1 >>> MainThread::DEBUG::2012-04-16 >>> >>>09:36:25,295::multipath::131::Storage.Misc.excCmd::(setupMultipath >>>) >>> '/usr/bin/sudo -n /sbin/multipath -F' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> >>>09:36:25,323::multipath::131::Storage.Misc.excCmd::(setupMultipath >>>) >>> FAILED:<err> = '';<rc> = 1 >>> MainThread::DEBUG::2012-04-16 >>> >>>09:36:25,323::multipath::134::Storage.Misc.excCmd::(setupMultipath >>>) >>> '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> >>>09:36:26,397::multipath::134::Storage.Misc.excCmd::(setupMultipath >>>) >>> SUCCESS:<err> = '';<rc> = 0 >>> MainThread::DEBUG::2012-04-16 >>> >>> >>> >>>09:36:26,398::hsm::248::Storage.Misc.excCmd::(__validateLvmLocking >>>Ty >>> pe >>> ) >>> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd >>> None) >>> MainThread::DEBUG::2012-04-16 >>> >>> >>> >>>09:36:26,443::hsm::248::Storage.Misc.excCmd::(__validateLvmLocking >>>Ty >>> pe >>> ) >>> SUCCESS:<err> = '';<rc> = 0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:26,445::lvm::319::OperationMutex::(_reloadpvs) Operation >>>'lvm >>> reload >>> operation' got the operation mutex >>> MainThread::DEBUG::2012-04-16 >>> 
09:36:26,447::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>> -n >>> /sbin/lvm pvs --config " devices { preferred_names = >>> [\\"^/dev/mapper/\\"] >>> ignore_suspended_devices=1 write_cache_state=0 >>> disable_after_error_count=3 >>> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>> \\"r%.*%\\" ] } global { locking_type=1 >>>prioritise_write_locks=1 >>> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } >>>" >>> --noheadings --units b --nosuffix --separator | -o >>> >>> >>> >>>uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,md >>>a_ >>> co >>> unt, >>> d >>> ev_size' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> >>> = >>> ''; >>> <rc> = 0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation >>>'lvm >>> reload >>> operation' released the operation mutex >>> MainThread::DEBUG::2012-04-16 >>> 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation >>>'lvm >>> reload >>> operation' got the operation mutex >>> MainThread::DEBUG::2012-04-16 >>> 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>> -n >>> /sbin/lvm vgs --config " devices { preferred_names = >>> [\\"^/dev/mapper/\\"] >>> ignore_suspended_devices=1 write_cache_state=0 >>> disable_after_error_count=3 >>> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>> \\"r%.*%\\" ] } global { locking_type=1 >>>prioritise_write_locks=1 >>> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } >>>" >>> --noheadings --units b --nosuffix --separator | -o >>> >>> >>> >>>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags, >>>vg >>> _m >>> da_s >>> i >>> ze,vg_mda_free' (cwd None) >>> MainThread::INFO::2012-04-16 09:36:29,307::vdsm::71::vds::(run) I >>> am >>> the >>> actual vdsm 4.9-0 >>> MainThread::DEBUG::2012-04-16 >>> >>> >>> >>>09:36:29,514::resourceManager::376::ResourceManager::(registerName 
>>>sp >>> ac >>> e) >>> Registering namespace 'Storage' >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,515::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,551::multipath::85::Storage.Misc.excCmd::(isEnabled) >>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,564::multipath::85::Storage.Misc.excCmd::(isEnabled) >>> SUCCESS: >>> <err> = '';<rc> = 0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,565::multipath::101::Storage.Multipath::(isEnabled) >>> Current >>> revision of multipath.conf detected, preserving >>> MainThread::DEBUG::2012-04-16 >>> >>> >>> >>>09:36:29,565::hsm::248::Storage.Misc.excCmd::(__validateLvmLocking >>>Ty >>> pe >>> ) >>> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd >>> None) >>> MainThread::DEBUG::2012-04-16 >>> >>> >>> >>>09:36:29,606::hsm::248::Storage.Misc.excCmd::(__validateLvmLocking >>>Ty >>> pe >>> ) >>> SUCCESS:<err> = '';<rc> = 0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,606::lvm::319::OperationMutex::(_reloadpvs) Operation >>>'lvm >>> reload >>> operation' got the operation mutex >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,608::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>> -n >>> /sbin/lvm pvs --config " devices { preferred_names = >>> [\\"^/dev/mapper/\\"] >>> ignore_suspended_devices=1 write_cache_state=0 >>> disable_after_error_count=3 >>> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>> \\"r%.*%\\" ] } global { locking_type=1 >>>prioritise_write_locks=1 >>> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } >>>" >>> --noheadings --units b --nosuffix --separator | -o >>> >>> >>> >>>uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,md >>>a_ >>> co >>> unt, >>> d >>> ev_size' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,714::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> 
>>> = >>> ''; >>> <rc> = 0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,715::lvm::342::OperationMutex::(_reloadpvs) Operation >>>'lvm >>> reload >>> operation' released the operation mutex >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,716::lvm::352::OperationMutex::(_reloadvgs) Operation >>>'lvm >>> reload >>> operation' got the operation mutex >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,716::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>> -n >>> /sbin/lvm vgs --config " devices { preferred_names = >>> [\\"^/dev/mapper/\\"] >>> ignore_suspended_devices=1 write_cache_state=0 >>> disable_after_error_count=3 >>> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>> \\"r%.*%\\" ] } global { locking_type=1 >>>prioritise_write_locks=1 >>> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } >>>" >>> --noheadings --units b --nosuffix --separator | -o >>> >>> >>> >>>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags, >>>vg >>> _m >>> da_s >>> i >>> ze,vg_mda_free' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,813::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> >>> = >>> ' No >>> volume groups found\n';<rc> = 0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation >>>'lvm >>> reload >>> operation' released the operation mutex >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>> -n >>> /sbin/lvm lvs --config " devices { preferred_names = >>> [\\"^/dev/mapper/\\"] >>> ignore_suspended_devices=1 write_cache_state=0 >>> disable_after_error_count=3 >>> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>> \\"r%.*%\\" ] } global { locking_type=1 >>>prioritise_write_locks=1 >>> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } >>>" >>> --noheadings --units b --nosuffix --separator | -o >>> uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None) >>> MainThread::DEBUG::2012-04-16 
>>> 09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> >>> = >>> ' No >>> volume groups found\n';<rc> = 0 >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to >>> enter >>> sampling method (storage.sdc.refreshStorage) >>> MainThread::INFO::2012-04-16 >>> 09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) >>> Starting >>> StorageDispatcher... >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to >>> sampling >>> method >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to >>> enter >>> sampling method (storage.iscsi.rescan) >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to >>> sampling >>> method >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) >>> '/usr/bin/sudo -n >>> /sbin/iscsiadm -m session -R' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) >>> '/usr/bin/pgrep >>> -xf ksmd' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) >>> SUCCESS:<err> = >>> '';<rc> = 0 >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) >>> FAILED:<err> >>> = >>> 'iscsiadm: No session found.\n';<rc> = 21 >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning >>>last >>> result >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) >>>Could >>> not >>> kill old Super Vdsm [Errno 2] No such file or directory: >>> '/var/run/vdsm/svdsm.pid' >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) >>> Launching >>> Super Vdsm >>> Thread-11::DEBUG::2012-04-16 >>> >>> >>>09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervds >>>m) >>> 
'/usr/bin/sudo -n /usr/bin/python >>> /usr/share/vdsm/supervdsmServer.pyc >>> bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) >>>Making >>> sure >>> I'm root >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) >>> Parsing >>> cmd >>> args >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) >>> Creating PID >>> file >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) >>> Cleaning old >>> socket >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) >>> Setting >>> up >>> keep alive thread >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) >>> Creating >>> remote object manager >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) >>> Started >>> serving super vdsm object >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to >>> connect >>> to Super Vdsm >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected >>> to >>> Super >>> Vdsm >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) >>> '/usr/bin/sudo >>> -n /sbin/multipath' (cwd None) >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) >>> SUCCESS:<err> >>> = '';<rc> = 0 >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) >>> Operation 'lvm >>> invalidate operation' got the operation mutex >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) >>> Operation 'lvm >>> invalidate operation' released the operation mutex >>> Thread-11::DEBUG::2012-04-16 >>> 
09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) >>> Operation 'lvm >>> invalidate operation' got the operation mutex >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) >>> Operation 'lvm >>> invalidate operation' released the operation mutex >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) >>> Operation 'lvm >>> invalidate operation' got the operation mutex >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) >>> Operation 'lvm >>> invalidate operation' released the operation mutex >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning >>>last >>> result >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) >>> Started >>> cleaning storage repository at '/rhev/data-center' >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) >>> White >>> list: ['/rhev/data-center/hsm-tasks', >>> '/rhev/data-center/hsm-tasks/*', >>> '/rhev/data-center/mnt'] >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) >>> Mount >>> list: ['/rhev/data-center'] >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) >>> Cleaning >>> leftovers >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) >>> Finished >>> cleaning storage repository at '/rhev/data-center' >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> On 4/16/12 8:38 AM, "Mike Burns"<mburns@redhat.com> wrote: >>> >>>> On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote: >>>>> Hi folks, >>>>> >>>>> >>>>> I'm trying to install oVirt node v2.3.0 on A Dell C2100 >>>>> server. I >>>>> can boot up just fine, but the two menu options I see are >>>>>"Start >>>>> oVirt >>>>> node", and "Troubleshooting". 
When I choose "Start oVirt node", >>>>> it >>>>> does just that, and I am soon after given a console login >>>>>prompt. >>>>> I've >>>>> checked the docs, and I don't see what I'm supposed to do next, >>>>> as >>>>> in >>>>> a password etc. Am I missing something? >>>> Hi Adam, >>>> >>>> Something is breaking in the boot process. You should be >>>>getting >>>> a >>>> TUI >>>> screen that will let you configure and install ovirt-node. >>>> >>>> I just added an entry on the Node Troublesooting wiki page[1] >>>>for >>>> you to >>>> follow. >>>> >>>> Mike >>>> >>>> [1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems >>>> >>>> >>>>> Thanks, >>>>> >>>>> >>>>> -Adam >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users@ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >> _______________________________________________ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > This is definitely the cause of the installer failing > > 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat > /proc/mounts|grep -q "none /live" > 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to > mount_live() > > > > What kind of media are you installing from: usb/cd/remote console? _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Typed out as I am yet to install the Remote Management Card.

# blkid
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="DM_snapshot_cow"
/dev/loop2: TYPE="squashfs"
/dev/loop3: LABEL="ovirt-node-iso" UUID="f1fffd44-6664-48ef-8105-1d986f23127b" TYPE="ext2"
/dev/sdb1: LABEL="ovirt-node-iso" TYPE="iso9660"
/dev/mapper/1SanDisk: LABEL="ovirt-node-iso" TYPE="iso9660"
/dev/mapper/1SanDiskp1: LABEL="ovirt-node-iso" TYPE="iso9660"
/dev/mapper/live-rw: LABEL="ovirt-node-iso" UUID="f1fffd44-6664-48ef-8105-1d986f23127b" TYPE="ext2"
/dev/mapper/live-osimg-min: LABEL="ovirt-node-iso" UUID="f1fffd44-6664-48ef-8105-1d986f23127b" TYPE="ext2"

Jason
09:36:26,447::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>> -n >>> /sbin/lvm pvs --config " devices { preferred_names = >>> [\\"^/dev/mapper/\\"] >>> ignore_suspended_devices=1 write_cache_state=0 >>> disable_after_error_count=3 >>> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>> \\"r%.*%\\" ] } global { locking_type=1 >>> prioritise_write_locks=1 >>> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 >>> } " >>> --noheadings --units b --nosuffix --separator | -o >>> >>> >>> uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_ >>> >>> co >>> unt, >>> d >>> ev_size' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> >>> = >>> ''; >>> <rc> = 0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation >>> 'lvm >>> reload >>> operation' released the operation mutex >>> MainThread::DEBUG::2012-04-16 >>> 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation >>> 'lvm >>> reload >>> operation' got the operation mutex >>> MainThread::DEBUG::2012-04-16 >>> 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>> -n >>> /sbin/lvm vgs --config " devices { preferred_names = >>> [\\"^/dev/mapper/\\"] >>> ignore_suspended_devices=1 write_cache_state=0 >>> disable_after_error_count=3 >>> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>> \\"r%.*%\\" ] } global { locking_type=1 >>> prioritise_write_locks=1 >>> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 >>> } " >>> --noheadings --units b --nosuffix --separator | -o >>> >>> >>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg >>> >>> _m >>> da_s >>> i >>> ze,vg_mda_free' (cwd None) >>> MainThread::INFO::2012-04-16 09:36:29,307::vdsm::71::vds::(run) I >>> am >>> the >>> actual vdsm 4.9-0 >>> MainThread::DEBUG::2012-04-16 >>> >>> >>> 09:36:29,514::resourceManager::376::ResourceManager::(registerNamesp 
>>> >>> ac >>> e) >>> Registering namespace 'Storage' >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,515::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,551::multipath::85::Storage.Misc.excCmd::(isEnabled) >>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,564::multipath::85::Storage.Misc.excCmd::(isEnabled) >>> SUCCESS: >>> <err> = '';<rc> = 0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,565::multipath::101::Storage.Multipath::(isEnabled) >>> Current >>> revision of multipath.conf detected, preserving >>> MainThread::DEBUG::2012-04-16 >>> >>> >>> 09:36:29,565::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy >>> >>> pe >>> ) >>> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd >>> None) >>> MainThread::DEBUG::2012-04-16 >>> >>> >>> 09:36:29,606::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy >>> >>> pe >>> ) >>> SUCCESS:<err> = '';<rc> = 0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,606::lvm::319::OperationMutex::(_reloadpvs) Operation >>> 'lvm >>> reload >>> operation' got the operation mutex >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,608::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>> -n >>> /sbin/lvm pvs --config " devices { preferred_names = >>> [\\"^/dev/mapper/\\"] >>> ignore_suspended_devices=1 write_cache_state=0 >>> disable_after_error_count=3 >>> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>> \\"r%.*%\\" ] } global { locking_type=1 >>> prioritise_write_locks=1 >>> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 >>> } " >>> --noheadings --units b --nosuffix --separator | -o >>> >>> >>> uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_ >>> >>> co >>> unt, >>> d >>> ev_size' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,714::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> >>> = >>> 
''; >>> <rc> = 0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,715::lvm::342::OperationMutex::(_reloadpvs) Operation >>> 'lvm >>> reload >>> operation' released the operation mutex >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,716::lvm::352::OperationMutex::(_reloadvgs) Operation >>> 'lvm >>> reload >>> operation' got the operation mutex >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,716::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>> -n >>> /sbin/lvm vgs --config " devices { preferred_names = >>> [\\"^/dev/mapper/\\"] >>> ignore_suspended_devices=1 write_cache_state=0 >>> disable_after_error_count=3 >>> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>> \\"r%.*%\\" ] } global { locking_type=1 >>> prioritise_write_locks=1 >>> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 >>> } " >>> --noheadings --units b --nosuffix --separator | -o >>> >>> >>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg >>> >>> _m >>> da_s >>> i >>> ze,vg_mda_free' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,813::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> >>> = >>> ' No >>> volume groups found\n';<rc> = 0 >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation >>> 'lvm >>> reload >>> operation' released the operation mutex >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>> -n >>> /sbin/lvm lvs --config " devices { preferred_names = >>> [\\"^/dev/mapper/\\"] >>> ignore_suspended_devices=1 write_cache_state=0 >>> disable_after_error_count=3 >>> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>> \\"r%.*%\\" ] } global { locking_type=1 >>> prioritise_write_locks=1 >>> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 >>> } " >>> --noheadings --units b --nosuffix --separator | -o >>> uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 
09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS:<err> >>> = >>> ' No >>> volume groups found\n';<rc> = 0 >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to >>> enter >>> sampling method (storage.sdc.refreshStorage) >>> MainThread::INFO::2012-04-16 >>> 09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) >>> Starting >>> StorageDispatcher... >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to >>> sampling >>> method >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to >>> enter >>> sampling method (storage.iscsi.rescan) >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to >>> sampling >>> method >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) >>> '/usr/bin/sudo -n >>> /sbin/iscsiadm -m session -R' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) >>> '/usr/bin/pgrep >>> -xf ksmd' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) >>> SUCCESS:<err> = >>> '';<rc> = 0 >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) >>> FAILED:<err> >>> = >>> 'iscsiadm: No session found.\n';<rc> = 21 >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning >>> last >>> result >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) >>> Could >>> not >>> kill old Super Vdsm [Errno 2] No such file or directory: >>> '/var/run/vdsm/svdsm.pid' >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) >>> Launching >>> Super Vdsm >>> Thread-11::DEBUG::2012-04-16 >>> >>> 09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) >>> >>> '/usr/bin/sudo -n 
/usr/bin/python >>> /usr/share/vdsm/supervdsmServer.pyc >>> bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None) >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) >>> Making >>> sure >>> I'm root >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) >>> Parsing >>> cmd >>> args >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) >>> Creating PID >>> file >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) >>> Cleaning old >>> socket >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) >>> Setting >>> up >>> keep alive thread >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) >>> Creating >>> remote object manager >>> MainThread::DEBUG::2012-04-16 >>> 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) >>> Started >>> serving super vdsm object >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to >>> connect >>> to Super Vdsm >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected >>> to >>> Super >>> Vdsm >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) >>> '/usr/bin/sudo >>> -n /sbin/multipath' (cwd None) >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) >>> SUCCESS:<err> >>> = '';<rc> = 0 >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) >>> Operation 'lvm >>> invalidate operation' got the operation mutex >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) >>> Operation 'lvm >>> invalidate operation' released the operation mutex >>> Thread-11::DEBUG::2012-04-16 >>> 
09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) >>> Operation 'lvm >>> invalidate operation' got the operation mutex >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) >>> Operation 'lvm >>> invalidate operation' released the operation mutex >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) >>> Operation 'lvm >>> invalidate operation' got the operation mutex >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) >>> Operation 'lvm >>> invalidate operation' released the operation mutex >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning >>> last >>> result >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) >>> Started >>> cleaning storage repository at '/rhev/data-center' >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) >>> White >>> list: ['/rhev/data-center/hsm-tasks', >>> '/rhev/data-center/hsm-tasks/*', >>> '/rhev/data-center/mnt'] >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) >>> Mount >>> list: ['/rhev/data-center'] >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) >>> Cleaning >>> leftovers >>> Thread-11::DEBUG::2012-04-16 >>> 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) >>> Finished >>> cleaning storage repository at '/rhev/data-center' >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> On 4/16/12 8:38 AM, "Mike Burns"<mburns@redhat.com> wrote: >>> >>>> On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote: >>>>> Hi folks, >>>>> >>>>> >>>>> I'm trying to install oVirt node v2.3.0 on A Dell C2100 >>>>> server. I >>>>> can boot up just fine, but the two menu options I see are >>>>> "Start >>>>> oVirt >>>>> node", and "Troubleshooting". 
When I choose "Start oVirt node", >>>>> it >>>>> does just that, and I am soon after given a console login >>>>> prompt. >>>>> I've >>>>> checked the docs, and I don't see what I'm supposed to do next, >>>>> as >>>>> in >>>>> a password etc. Am I missing something? >>>> Hi Adam, >>>> >>>> Something is breaking in the boot process. You should be >>>> getting >>>> a >>>> TUI >>>> screen that will let you configure and install ovirt-node. >>>> >>>> I just added an entry on the Node Troublesooting wiki page[1] >>>> for >>>> you to >>>> follow. >>>> >>>> Mike >>>> >>>> [1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems >>>> >>>> >>>>> Thanks, >>>>> >>>>> >>>>> -Adam >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users@ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >> _______________________________________________ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > This is definitely the cause of the installer failing > > 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat > /proc/mounts|grep -q "none /live" > 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to > mount_live() > > > > What kind of media are you installing from: usb/cd/remote console? _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
I did go back and take a look at mount_live() and made sure it contains a specific patch to handle USB drives properly. If you can get back to a shell prompt, run blkid and capture the output. If it's way too much to type out, just the USB drive's output should be OK.
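For reference, the check that mount_live() is failing can be reproduced by hand from the node's shell. This is a minimal sketch, assuming a working shell on the node; `live_is_mounted` is an illustrative helper of ours, not a function from ovirtfunctions:

```shell
# Hypothetical helper (not part of ovirtfunctions): succeeds only when the
# given mount table contains a "none /live" entry, which is exactly what
# the installer's failing check in the log above greps for.
live_is_mounted() {
  # $1: contents of a mount table (pass "$(cat /proc/mounts)" on the node)
  printf '%s\n' "$1" | grep -q "none /live"
}

# On the node itself you would run:
#   live_is_mounted "$(cat /proc/mounts)" || echo "/live is NOT mounted"
#   blkid          # capture this output for the list
```

If the helper fails against the node's real /proc/mounts, the live media was never mounted at /live, matching the "Failed to mount_live()" error.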
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Just curious: do those SanDisk drives still come with the U3 software on them? If so, you may want to remove it, since U3 can alter the way the drive is presented to the host, and that could be causing this. I've got a 2-3 year old 8 GB SanDisk Cruzer with the U3 software removed, and it works fine. Not sure if it's related, but it's worth checking.
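One way to spot the U3 software from a Linux shell: a U3-enabled stick presents an extra virtual CD-ROM device alongside the regular disk. A minimal sketch, assuming `lsblk -rno NAME,TYPE` style output is available; the helper name is illustrative, not a real tool:

```shell
# Hypothetical helper: scans "NAME TYPE" lines (as printed by
# `lsblk -rno NAME,TYPE`) and succeeds if any device is of type "rom",
# i.e. a virtual CD-ROM (such as the U3 partition) is present.
has_virtual_cdrom() {
  printf '%s\n' "$1" | awk '$2 == "rom" { found = 1 } END { exit !found }'
}

# On a real machine:
#   has_virtual_cdrom "$(lsblk -rno NAME,TYPE)" && echo "virtual CD-ROM present"
```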
participants (5): Adam vonNieda, Dominic Kaiser, Jason Lawer, Joey Boggs, Mike Burns