From adam at vonnieda.org Mon Apr 16 09:14:49 2012
From: Adam vonNieda
To: users at ovirt.org
Subject: [Users] Booting oVirt node image 2.3.0, no install option
Date: Mon, 16 Apr 2012 08:14:43 -0500

   Hi folks,

   I'm trying to install oVirt node v2.3.0 on a Dell C2100 server. I can
boot up just fine, but the two menu options I see are "Start oVirt node"
and "Troubleshooting". When I choose "Start oVirt node", it does just
that, and I am soon after given a console login prompt. I've checked the
docs, and I don't see what I'm supposed to do next, as in a password etc.
Am I missing something?

   Thanks,

      -Adam

From mburns at redhat.com Mon Apr 16 09:38:11 2012
From: Mike Burns
To: users at ovirt.org
Subject: Re: [Users] Booting oVirt node image 2.3.0, no install option
Date: Mon, 16 Apr 2012 09:38:09 -0400
Message-ID: <1334583489.3279.13.camel@beelzebub.mburnsfire.net>
In-Reply-To: CBB18573.19215%adam@vonnieda.org

On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote:
>    Hi folks,
>
>    I'm trying to install oVirt node v2.3.0 on a Dell C2100 server. I
> can boot up just fine, but the two menu options I see are "Start oVirt
> node" and "Troubleshooting". When I choose "Start oVirt node", it
> does just that, and I am soon after given a console login prompt. I've
> checked the docs, and I don't see what I'm supposed to do next, as in
> a password etc. Am I missing something?

Hi Adam,

Something is breaking in the boot process.  You should be getting a TUI
screen that will let you configure and install ovirt-node.

I just added an entry on the Node Troubleshooting wiki page[1] for you
to follow.
Mike

[1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems

> Thanks,
>
> -Adam
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

From adam at vonnieda.org Mon Apr 16 11:07:10 2012
From: Adam vonNieda
To: users at ovirt.org
Subject: Re: [Users] Booting oVirt node image 2.3.0, no install option
Date: Mon, 16 Apr 2012 10:06:59 -0500
In-Reply-To: 1334583489.3279.13.camel@beelzebub.mburnsfire.net

   Thanks very much Mike. Below is some additional info now that I can
get in. Also, when I "su - admin" it tries to start graphical mode, and
just goes to a blank screen and stays there. Any insight is much
appreciated, and please let me know if there's anything else I can try /
provide.
   Thanks,

      -Adam

/tmp/ovirt.log
==============

/sbin/restorecon set context /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 failed:'Read-only file system'
/sbin/restorecon reset /var/cache/yum context unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t:s0
/sbin/restorecon reset /etc/sysctl.conf context system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0
/sbin/restorecon reset /boot-kdump context system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0
2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - ::::live device:::: /dev/sdb
2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions -
2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live
2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions -
2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()

/var/log/ovirt.log
==================

Apr 16 09:35:53 Starting ovirt-early
oVirt Node Hypervisor release 2.3.0 (1.0.fc16)
Apr 16 09:35:53 Updating /etc/default/ovirt
Apr 16 09:35:54 Updating OVIRT_BOOTIF to ''
Apr 16 09:35:54 Updating OVIRT_INIT to ''
Apr 16 09:35:54 Updating OVIRT_UPGRADE to ''
Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1'
Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset crashkernel=512M-2G:64M,2G-:128M elevator=deadline quiet rd_NO_LVM rhgb rd.luks=0 rd.md=0 rd.dm=0'
Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic'
Apr 16 09:35:54 Updating OVIRT_INSTALL to '1'
Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1'
Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw
Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw
Apr 16 09:36:09 Skip runtime mode configuration.
Apr 16 09:36:09 Completed ovirt-early
Apr 16 09:36:09 Starting ovirt-awake.
Apr 16 09:36:09 Node is operating in unmanaged mode.
Apr 16 09:36:09 Completed ovirt-awake: RETVAL=0
Apr 16 09:36:09 Starting ovirt
Apr 16 09:36:09 Completed ovirt
Apr 16 09:36:10 Starting ovirt-post
Apr 16 09:36:20 Hardware virtualization detected
  Volume group "HostVG" not found
  Skipping volume group HostVG
Restarting network (via systemctl):  [  OK  ]
Apr 16 09:36:20 Starting ovirt-post
Apr 16 09:36:21 Hardware virtualization detected
  Volume group "HostVG" not found
  Skipping volume group HostVG
Restarting network (via systemctl):  [  OK  ]
Apr 16 09:36:22 Starting ovirt-cim
Apr 16 09:36:22 Completed ovirt-cim
WARNING: persistent config storage not available

/var/log/vdsm/vdsm.log
======================

MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I am the actual vdsm 4.9-0
MainThread::DEBUG::2012-04-16 09:36:23,873::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage'
MainThread::DEBUG::2012-04-16 09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-04-16 09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I am the actual vdsm 4.9-0
MainThread::DEBUG::2012-04-16 09:36:25,199::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage'
MainThread::DEBUG::2012-04-16 09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-04-16 09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS: = ''; = 0
MainThread::DEBUG::2012-04-16 09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) multipath Defaulting to False
MainThread::DEBUG::2012-04-16 09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc, prefixName: multipath.conf, versions: 5
MainThread::DEBUG::2012-04-16 09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions found: [0]
MainThread::DEBUG::2012-04-16 09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /bin/cp /etc/multipath.conf /etc/multipath.conf.1' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipath) FAILED: = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n'; = 1
MainThread::DEBUG::2012-04-16 09:36:25,256::multipath::118::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf.1' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,269::multipath::118::Storage.Misc.excCmd::(setupMultipath) FAILED: = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n'; = 1
MainThread::DEBUG::2012-04-16 09:36:25,270::multipath::123::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /bin/cp /tmp/tmpnPcvWi /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,283::multipath::123::Storage.Misc.excCmd::(setupMultipath) SUCCESS: = ''; = 0
MainThread::DEBUG::2012-04-16 09:36:25,283::multipath::128::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,294::multipath::128::Storage.Misc.excCmd::(setupMultipath) FAILED: = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n'; = 1
MainThread::DEBUG::2012-04-16 09:36:25,295::multipath::131::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/multipath -F' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:25,323::multipath::131::Storage.Misc.excCmd::(setupMultipath) FAILED: = ''; = 1
MainThread::DEBUG::2012-04-16 09:36:25,323::multipath::134::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:26,397::multipath::134::Storage.Misc.excCmd::(setupMultipath) SUCCESS: = ''; = 0
MainThread::DEBUG::2012-04-16 09:36:26,398::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:26,443::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: = ''; = 0
MainThread::DEBUG::2012-04-16 09:36:26,445::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
MainThread::DEBUG::2012-04-16 09:36:26,447::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = ''; = 0
MainThread::DEBUG::2012-04-16 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
MainThread::DEBUG::2012-04-16 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
MainThread::DEBUG::2012-04-16 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
MainThread::INFO::2012-04-16 09:36:29,307::vdsm::71::vds::(run) I am the actual vdsm 4.9-0
MainThread::DEBUG::2012-04-16 09:36:29,514::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage'
MainThread::DEBUG::2012-04-16 09:36:29,515::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-04-16 09:36:29,551::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,564::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS: = ''; = 0
MainThread::DEBUG::2012-04-16 09:36:29,565::multipath::101::Storage.Multipath::(isEnabled) Current revision of multipath.conf detected, preserving
MainThread::DEBUG::2012-04-16 09:36:29,565::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,606::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: = ''; = 0
MainThread::DEBUG::2012-04-16 09:36:29,606::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
MainThread::DEBUG::2012-04-16 09:36:29,608::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,714::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = ''; = 0
MainThread::DEBUG::2012-04-16 09:36:29,715::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
MainThread::DEBUG::2012-04-16 09:36:29,716::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
MainThread::DEBUG::2012-04-16 09:36:29,716::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,813::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = ' No volume groups found\n'; = 0
MainThread::DEBUG::2012-04-16 09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
MainThread::DEBUG::2012-04-16 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = ' No volume groups found\n'; = 0
Thread-11::DEBUG::2012-04-16 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
MainThread::INFO::2012-04-16 09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) Starting StorageDispatcher...
Thread-11::DEBUG::2012-04-16 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to sampling method
Thread-11::DEBUG::2012-04-16 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-11::DEBUG::2012-04-16 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to sampling method
Thread-11::DEBUG::2012-04-16 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) '/usr/bin/pgrep -xf ksmd' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS: = ''; = 0
Thread-11::DEBUG::2012-04-16 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) FAILED: = 'iscsiadm: No session found.\n'; = 21
Thread-11::DEBUG::2012-04-16 09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning last result
Thread-11::DEBUG::2012-04-16 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) Could not kill old Super Vdsm [Errno 2] No such file or directory: '/var/run/vdsm/svdsm.pid'
Thread-11::DEBUG::2012-04-16 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) Launching Super Vdsm
Thread-11::DEBUG::2012-04-16 09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) '/usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.pyc bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None)
MainThread::DEBUG::2012-04-16 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) Making sure I'm root
MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) Parsing cmd args
MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) Creating PID file
MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) Cleaning old socket
MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) Setting up keep alive thread
MainThread::DEBUG::2012-04-16 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) Creating remote object manager
MainThread::DEBUG::2012-04-16 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) Started serving super vdsm object
Thread-11::DEBUG::2012-04-16 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm
Thread-11::DEBUG::2012-04-16 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected to Super Vdsm
Thread-11::DEBUG::2012-04-16 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/multipath' (cwd None)
Thread-11::DEBUG::2012-04-16 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) SUCCESS: = ''; = 0
Thread-11::DEBUG::2012-04-16 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-11::DEBUG::2012-04-16 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-11::DEBUG::2012-04-16 09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-11::DEBUG::2012-04-16 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-11::DEBUG::2012-04-16 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-11::DEBUG::2012-04-16 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-11::DEBUG::2012-04-16 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning last result
Thread-11::DEBUG::2012-04-16 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center'
Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']
Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) Mount list: ['/rhev/data-center']
Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers
Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'

On 4/16/12 8:38 AM, "Mike Burns" wrote:

>Hi Adam,
>
>Something is breaking in the boot process.  You should be getting a TUI
>screen that will let you configure and install ovirt-node.
>
>I just added an entry on the Node Troubleshooting wiki page[1] for you to
>follow.
>
>Mike
>
>[1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems

From adam at vonnieda.org Tue Apr 17 09:45:08 2012
From: Adam vonNieda
To: users at ovirt.org
Subject: Re: [Users] Booting oVirt node image 2.3.0, no install option
Date: Tue, 17 Apr 2012 08:45:00 -0500
In-Reply-To: CBB19DEE.1921C%adam@vonnieda.org

   Hi folks,

   Still hoping someone can give me a hand with this. I can't install
ovirt-node 2.3.0 on a Dell C2100 server because it won't start the
graphical interface. I booted up a standard F16 image this morning, and
the graphical installer does start during that process. Logs are below.
   Thanks very much,

      -Adam

>/tmp/ovirt.log
>==============
>
>/sbin/restorecon set context /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 failed:'Read-only file system'
>/sbin/restorecon reset /var/cache/yum context unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t:s0
>/sbin/restorecon reset /etc/sysctl.conf context system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0
>/sbin/restorecon reset /boot-kdump context system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0
>2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - ::::live device:::: /dev/sdb
>2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
>2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions -
>2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live
>2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions -
>2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
>ignore_suspended_devices=3D1 write_cache_state=3D0 disable_after_error_cou= nt=3D3 >filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >\\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks=3D1 >wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D 0 } " >--noheadings --units b --nosuffix --separator | -o >uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None) >MainThread::DEBUG::2012-04-16 >09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: =3D ' No >volume groups found\n'; =3D 0 >Thread-11::DEBUG::2012-04-16 >09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to enter >sampling method (storage.sdc.refreshStorage) >MainThread::INFO::2012-04-16 >09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) Starting >StorageDispatcher... >Thread-11::DEBUG::2012-04-16 >09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to sampling >method >Thread-11::DEBUG::2012-04-16 >09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to enter >sampling method (storage.iscsi.rescan) >Thread-11::DEBUG::2012-04-16 >09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to sampling >method >Thread-11::DEBUG::2012-04-16 >09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n >/sbin/iscsiadm -m session -R' (cwd None) >MainThread::DEBUG::2012-04-16 >09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) '/usr/bin/pgrep >-xf ksmd' (cwd None) >MainThread::DEBUG::2012-04-16 >09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS: = =3D >''; =3D 0 >Thread-11::DEBUG::2012-04-16 >09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) FAILED: =3D >'iscsiadm: No session found.\n'; =3D 21 >Thread-11::DEBUG::2012-04-16 >09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning last result >Thread-11::DEBUG::2012-04-16 >09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) Could not >kill old Super Vdsm [Errno 2] No such file or directory: >'/var/run/vdsm/svdsm.pid' 
>Thread-11::DEBUG::2012-04-16 >09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) Launching >Super Vdsm >Thread-11::DEBUG::2012-04-16 >09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) >'/usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.pyc >bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None) >MainThread::DEBUG::2012-04-16 >09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) Making sure >I'm root >MainThread::DEBUG::2012-04-16 >09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) Parsing cmd >args >MainThread::DEBUG::2012-04-16 >09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) Creating PID >file >MainThread::DEBUG::2012-04-16 >09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) Cleaning old >socket >MainThread::DEBUG::2012-04-16 >09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) Setting up >keep alive thread >MainThread::DEBUG::2012-04-16 >09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) Creating >remote object manager >MainThread::DEBUG::2012-04-16 >09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) Started >serving super vdsm object >Thread-11::DEBUG::2012-04-16 >09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to connect >to Super Vdsm >Thread-11::DEBUG::2012-04-16 >09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected to Super >Vdsm >Thread-11::DEBUG::2012-04-16 >09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo >-n /sbin/multipath' (cwd None) >Thread-11::DEBUG::2012-04-16 >09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) SUCCESS: >=3D ''; =3D 0 >Thread-11::DEBUG::2012-04-16 >09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) Operation 'lvm >invalidate operation' got the operation mutex >Thread-11::DEBUG::2012-04-16 >09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) Operation 'lvm >invalidate operation' released the operation mutex >Thread-11::DEBUG::2012-04-16 
>09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) Operation 'lvm >invalidate operation' got the operation mutex >Thread-11::DEBUG::2012-04-16 >09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) Operation 'lvm >invalidate operation' released the operation mutex >Thread-11::DEBUG::2012-04-16 >09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) Operation 'lvm >invalidate operation' got the operation mutex >Thread-11::DEBUG::2012-04-16 >09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) Operation 'lvm >invalidate operation' released the operation mutex >Thread-11::DEBUG::2012-04-16 >09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning last result >Thread-11::DEBUG::2012-04-16 >09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) Started >cleaning storage repository at '/rhev/data-center' >Thread-11::DEBUG::2012-04-16 >09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) White >list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', >'/rhev/data-center/mnt'] >Thread-11::DEBUG::2012-04-16 >09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) Mount >list: ['/rhev/data-center'] >Thread-11::DEBUG::2012-04-16 >09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) Cleaning >leftovers >Thread-11::DEBUG::2012-04-16 >09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) Finished >cleaning storage repository at '/rhev/data-center' > = > > > > > > > > >On 4/16/12 8:38 AM, "Mike Burns" wrote: > >>On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote: >>> = >>> = >>> Hi folks, >>> = >>> = >>> I'm trying to install oVirt node v2.3.0 on A Dell C2100 server. I >>> can boot up just fine, but the two menu options I see are "Start oVirt >>> node", and "Troubleshooting". When I choose "Start oVirt node", it >>> does just that, and I am soon after given a console login prompt. I've >>> checked the docs, and I don't see what I'm supposed to do next, as in >>> a password etc. 
>>> Am I missing something?
>>
>>Hi Adam,
>>
>>Something is breaking in the boot process. You should be getting a TUI
>>screen that will let you configure and install ovirt-node.
>>
>>I just added an entry on the Node Troubleshooting wiki page[1] for you to
>>follow.
>>
>>Mike
>>
>>[1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems
>>
>>>
>>> Thanks,
>>>
>>> -Adam
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users

From jboggs at redhat.com Tue Apr 17 10:07:37 2012
From: Joey Boggs
To: users at ovirt.org
Subject: Re: [Users] Booting oVirt node image 2.3.0, no install option
Date: Tue, 17 Apr 2012 10:07:34 -0400
Message-ID: <4F8D7926.20207@redhat.com>
In-Reply-To: CBB2DD2A.1930C%adam@vonnieda.org

On 04/17/2012 09:45 AM, Adam vonNieda wrote:
> Hi folks,
>
> Still hoping someone can give me a hand with this. I can't install
> ovirt-node 2.3.0 on a Dell C2100 server because it won't start the
> graphical interface. I booted up a standard F16 image this morning, and
> the graphical installer does start during that process. Logs are below.
>
> Thanks very much,
>
> -Adam
>
>
>> /tmp/ovirt.log
>> ==============
>>
>> /sbin/restorecon set context
>> /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 failed:'Read-only
>> file system'
>> /sbin/restorecon reset /var/cache/yum context
>> unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t:s0
>> /sbin/restorecon reset /etc/sysctl.conf context
>> system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0
>> /sbin/restorecon reset /boot-kdump context
>> system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0
>> 2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - ::::live device:::: /dev/sdb
>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions -
>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live
>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions -
>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
>>
>> /var/log/ovirt.log
>> ==================
>>
>> Apr 16 09:35:53 Starting ovirt-early
>> oVirt Node Hypervisor release 2.3.0 (1.0.fc16)
>> Apr 16 09:35:53 Updating /etc/default/ovirt
>> Apr 16 09:35:54 Updating OVIRT_BOOTIF to ''
>> Apr 16 09:35:54 Updating OVIRT_INIT to ''
>> Apr 16 09:35:54 Updating OVIRT_UPGRADE to ''
>> Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1'
>> Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset crashkernel=512M-2G:64M,2G-:128M elevator=deadline quiet rd_NO_LVM rhgb rd.luks=0 rd.md=0 rd.dm=0'
>> Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic'
>> Apr 16 09:35:54 Updating OVIRT_INSTALL to '1'
>> Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1'
>> Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw
>> Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw
>> Apr 16 09:36:09 Skip runtime mode configuration.
>> Apr 16 09:36:09 Completed ovirt-early
>> Apr 16 09:36:09 Starting ovirt-awake.
>> Apr 16 09:36:09 Node is operating in unmanaged mode.
>> Apr 16 09:36:09 Completed ovirt-awake: RETVAL=0
>> Apr 16 09:36:09 Starting ovirt
>> Apr 16 09:36:09 Completed ovirt
>> Apr 16 09:36:10 Starting ovirt-post
>> Apr 16 09:36:20 Hardware virtualization detected
>> Volume group "HostVG" not found
>> Skipping volume group HostVG
>> Restarting network (via systemctl):  [  OK  ]
>> Apr 16 09:36:20 Starting ovirt-post
>> Apr 16 09:36:21 Hardware virtualization detected
>> Volume group "HostVG" not found
>> Skipping volume group HostVG
>> Restarting network (via systemctl):  [  OK  ]
>> Apr 16 09:36:22 Starting ovirt-cim
>> Apr 16 09:36:22 Completed ovirt-cim
>> WARNING: persistent config storage not available
>>
>> /var/log/vdsm/vdsm.log
>> ======================
>>
>> MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I am the actual vdsm 4.9-0
>> MainThread::DEBUG::2012-04-16 09:36:23,873::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage'
>> MainThread::DEBUG::2012-04-16 09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
>> MainThread::DEBUG::2012-04-16 09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
>> MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I am the actual vdsm 4.9-0
>> MainThread::DEBUG::2012-04-16 09:36:25,199::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage'
>> MainThread::DEBUG::2012-04-16 09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
>> MainThread::DEBUG::2012-04-16 09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat
/etc/multipath.conf' (cwd None)
>> MainThread::DEBUG::2012-04-16 09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS: = ''; = 0
>>
>> [remainder of quoted vdsm.log snipped; identical to the vdsm.log quoted in full earlier in the thread]
>>
>> On 4/16/12 8:38 AM, "Mike Burns" wrote:
>>
>>> On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote:
>>>>
>>>> Hi folks,
>>>>
>>>> I'm trying to install oVirt node v2.3.0 on a Dell C2100 server. I
>>>> can boot up just fine, but the two menu options I see are "Start oVirt
>>>> node", and "Troubleshooting". When I choose "Start oVirt node", it
>>>> does just that, and I am soon after given a console login prompt. I've
>>>> checked the docs, and I don't see what I'm supposed to do next, as in
>>>> a password etc. Am I missing something?
>>> Hi Adam,
>>>
>>> Something is breaking in the boot process. You should be getting a TUI
>>> screen that will let you configure and install ovirt-node.
>>>
>>> I just added an entry on the Node Troubleshooting wiki page[1] for you to
>>> follow.
>>>
>>> Mike
>>>
>>> [1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems
>>>
>>>> Thanks,
>>>>
>>>> -Adam
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users(a)ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

This is definitely the cause of the installer failing:

2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()

What kind of media are you installing from: usb/cd/remote console?

From adam at vonnieda.org Tue Apr 17 10:52:21 2012
From: Adam vonNieda
To: users at ovirt.org
Subject: Re: [Users] Booting oVirt node image 2.3.0, no install option
Date: Tue, 17 Apr 2012 09:51:14 -0500
In-Reply-To: 4F8D7926.20207@redhat.com

   Thanks for the reply Joey. I saw that too, and thought maybe my USB thumb
drive was set to read-only, but it's not. This box doesn't have a DVD drive;
I'll try a different USB drive, and if that doesn't work, I'll dig up an
external DVD drive.

   Thanks again,

      -Adam

Adam vonNieda
Adam(a)vonNieda.org

On Apr 17, 2012, at 9:07, Joey Boggs wrote:

> On 04/17/2012 09:45 AM, Adam vonNieda wrote:
>> Hi folks,
>>
>> Still hoping someone can give me a hand with this. I can't install
>> ovirt-node 2.3.0 on a Dell C2100 server because it won't start the
>> graphical interface. I booted up a standard F16 image this morning, and
>> the graphical installer does start during that process. Logs are below.
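For anyone retracing this failure: the check that ovirtfunctions logs is just a grep over /proc/mounts, and it can be re-run by hand from the node's console. A minimal POSIX-sh sketch follows; the `live_mounted` helper name is ours, not part of oVirt's code.

```shell
#!/bin/sh
# Re-run the installer's live-media check by hand. Per the /tmp/ovirt.log
# excerpt above, ovirtfunctions effectively does
#   cat /proc/mounts | grep -q "none /live"
# and reports "Failed to mount_live()" when no such mount exists.
# The live_mounted helper name is ours, not oVirt's.

live_mounted() {
    # $1: the contents of a mounts table (e.g. from /proc/mounts)
    printf '%s\n' "$1" | grep -q "none /live"
}

if live_mounted "$(cat /proc/mounts 2>/dev/null)"; then
    echo "live image is mounted at /live"
else
    echo "live image is NOT mounted -- mount_live() would fail"
fi
```

If the second branch fires on a booted node, the live image never made it to /live, which matches the `Failed to mount_live()` error above and is why the install media itself (USB stick, CD, remote console) is the first thing to check.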
>>
>> Thanks very much,
>>
>> -Adam
>>
>>> /tmp/ovirt.log
>>> ==============
>>>
>>> /sbin/restorecon set context
>>> /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 failed:'Read-only
>>> file system'
>>> /sbin/restorecon reset /var/cache/yum context
>>> unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t:s0
>>> /sbin/restorecon reset /etc/sysctl.conf context
>>> system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0
>>> /sbin/restorecon reset /boot-kdump context
>>> system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0
>>> 2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - ::::live
>>> device::::
>>> /dev/sdb
>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep
>>> -q "none /live"
>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions -
>>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live
>>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions -
>>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
>>>
>>> /var/log/ovirt.log
>>> ==================
>>>
>>> Apr 16 09:35:53 Starting ovirt-early
>>> oVirt Node Hypervisor release 2.3.0 (1.0.fc16)
>>> Apr 16 09:35:53 Updating /etc/default/ovirt
>>> Apr 16 09:35:54 Updating OVIRT_BOOTIF to ''
>>> Apr 16 09:35:54 Updating OVIRT_INIT to ''
>>> Apr 16 09:35:54 Updating OVIRT_UPGRADE to ''
>>> Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1'
>>> Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset
>>> crashkernel=512M-2G:64M,2G-:128M elevator=deadline quiet rd_NO_LVM rhgb
>>> rd.luks=0 rd.md=0 rd.dm=0'
>>> Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic'
>>> Apr 16 09:35:54 Updating OVIRT_INSTALL to '1'
>>> Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1'
>>> Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw
>>> Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw
>>> Apr 16 09:36:09 Skip runtime mode configuration.
>>> Apr 16 09:36:09 Completed ovirt-early
>>> Apr 16 09:36:09 Starting ovirt-awake.
>>> Apr 16 09:36:09 Node is operating in unmanaged mode.
>>> Apr 16 09:36:09 Completed ovirt-awake: RETVAL=0
>>> Apr 16 09:36:09 Starting ovirt
>>> Apr 16 09:36:09 Completed ovirt
>>> Apr 16 09:36:10 Starting ovirt-post
>>> Apr 16 09:36:20 Hardware virtualization detected
>>> Volume group "HostVG" not found
>>> Skipping volume group HostVG
>>> Restarting network (via systemctl): [ OK ]
>>> Apr 16 09:36:20 Starting ovirt-post
>>> Apr 16 09:36:21 Hardware virtualization detected
>>> Volume group "HostVG" not found
>>> Skipping volume group HostVG
>>> Restarting network (via systemctl): [ OK ]
>>> Apr 16 09:36:22 Starting ovirt-cim
>>> Apr 16 09:36:22 Completed ovirt-cim
>>> WARNING: persistent config storage not available
>>>
>>> /var/log/vdsm/vdsm.log
>>> ======================
>>>
>>> MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I am the
>>> actual vdsm 4.9-0
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:23,873::resourceManager::376::ResourceManager::(registerNamespace)
>>> Registering namespace 'Storage'
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter -
>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled)
>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
>>> MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I am the
>>> actual vdsm 4.9-0
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,199::resourceManager::376::ResourceManager::(registerNamespace)
>>> Registering namespace 'Storage'
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter -
>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled)
>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS:
>>> = ''; = 0
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) multipath
>>> Defaulting to False
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc,
>>> prefixName: multipath.conf, versions: 5
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions found: [0]
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipath)
>>> '/usr/bin/sudo -n /bin/cp /etc/multipath.conf /etc/multipath.conf.1' (cwd
>>> None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipath)
>>> FAILED: = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file
>>> system\nsudo: sorry, a password is required to run sudo\n'; = 1
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,256::multipath::118::Storage.Misc.excCmd::(setupMultipath)
>>> '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf.1' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,269::multipath::118::Storage.Misc.excCmd::(setupMultipath)
>>> FAILED: = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file
>>> system\nsudo: sorry, a password is required to run sudo\n'; = 1
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,270::multipath::123::Storage.Misc.excCmd::(setupMultipath)
>>> '/usr/bin/sudo -n /bin/cp /tmp/tmpnPcvWi /etc/multipath.conf' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,283::multipath::123::Storage.Misc.excCmd::(setupMultipath)
>>> SUCCESS: = ''; = 0
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,283::multipath::128::Storage.Misc.excCmd::(setupMultipath)
>>> '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,294::multipath::128::Storage.Misc.excCmd::(setupMultipath)
>>> FAILED: = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file
>>> system\nsudo: sorry, a password is required to run sudo\n'; = 1
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,295::multipath::131::Storage.Misc.excCmd::(setupMultipath)
>>> '/usr/bin/sudo -n /sbin/multipath -F' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,323::multipath::131::Storage.Misc.excCmd::(setupMultipath)
>>> FAILED: = ''; = 1
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:25,323::multipath::134::Storage.Misc.excCmd::(setupMultipath)
>>> '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:26,397::multipath::134::Storage.Misc.excCmd::(setupMultipath)
>>> SUCCESS: = ''; = 0
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:26,398::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType)
>>> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:26,443::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType)
>>> SUCCESS: = ''; = 0
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:26,445::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload
>>> operation' got the operation mutex
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:26,447::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n
>>> /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"]
>>> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
>>> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\",
>>> \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1
>>> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } "
>>> --noheadings --units b --nosuffix --separator | -o
>>> uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = '';
>>> = 0
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload
>>> operation' released the operation mutex
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload
>>> operation' got the operation mutex
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n
>>> /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"]
>>> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
>>> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\",
>>> \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1
>>> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } "
>>> --noheadings --units b --nosuffix --separator | -o
>>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
>>> MainThread::INFO::2012-04-16 09:36:29,307::vdsm::71::vds::(run) I am the
>>> actual vdsm 4.9-0
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,514::resourceManager::376::ResourceManager::(registerNamespace)
>>> Registering namespace 'Storage'
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,515::threadPool::45::Misc.ThreadPool::(__init__) Enter -
>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,551::multipath::85::Storage.Misc.excCmd::(isEnabled)
>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,564::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS:
>>> = ''; = 0
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,565::multipath::101::Storage.Multipath::(isEnabled) Current
>>> revision of multipath.conf detected, preserving
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,565::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType)
>>> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,606::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType)
>>> SUCCESS: = ''; = 0
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,606::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload
>>> operation' got the operation mutex
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,608::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n
>>> /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"]
>>> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
>>> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\",
>>> \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1
>>> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } "
>>> --noheadings --units b --nosuffix --separator | -o
>>> uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,714::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = '';
>>> = 0
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,715::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload
>>> operation' released the operation mutex
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,716::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload
>>> operation' got the operation mutex
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,716::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n
>>> /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"]
>>> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
>>> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\",
>>> \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1
>>> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } "
>>> --noheadings --units b --nosuffix --separator | -o
>>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,813::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = ' No
>>> volume groups found\n'; = 0
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm reload
>>> operation' released the operation mutex
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n
>>> /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"]
>>> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
>>> filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\",
>>> \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1
>>> wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } "
>>> --noheadings --units b --nosuffix --separator | -o
>>> uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = ' No
>>> volume groups found\n'; = 0
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to enter
>>> sampling method (storage.sdc.refreshStorage)
>>> MainThread::INFO::2012-04-16
>>> 09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) Starting
>>> StorageDispatcher...
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to sampling
>>> method
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to enter
>>> sampling method (storage.iscsi.rescan)
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to sampling
>>> method
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n
>>> /sbin/iscsiadm -m session -R' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) '/usr/bin/pgrep
>>> -xf ksmd' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS: =
>>> ''; = 0
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) FAILED: =
>>> 'iscsiadm: No session found.\n'; = 21
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning last result
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) Could not
>>> kill old Super Vdsm [Errno 2] No such file or directory:
>>> '/var/run/vdsm/svdsm.pid'
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) Launching
>>> Super Vdsm
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm)
>>> '/usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.pyc
>>> bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None)
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) Making sure
>>> I'm root
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) Parsing cmd
>>> args
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) Creating PID
>>> file
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) Cleaning old
>>> socket
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) Setting up
>>> keep alive thread
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) Creating
>>> remote object manager
>>> MainThread::DEBUG::2012-04-16
>>> 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) Started
>>> serving super vdsm object
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to connect
>>> to Super Vdsm
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected to Super
>>> Vdsm
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo
>>> -n /sbin/multipath' (cwd None)
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) SUCCESS:
>>> = ''; = 0
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) Operation 'lvm
>>> invalidate operation' got the operation mutex
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) Operation 'lvm
>>> invalidate operation' released the operation mutex
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) Operation 'lvm
>>> invalidate operation' got the operation mutex
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) Operation 'lvm
>>> invalidate operation' released the operation mutex
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) Operation 'lvm
>>> invalidate operation' got the operation mutex
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) Operation 'lvm
>>> invalidate operation' released the operation mutex
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning last result
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) Started
>>> cleaning storage repository at '/rhev/data-center'
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) White
>>> list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*',
>>> '/rhev/data-center/mnt']
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) Mount
>>> list: ['/rhev/data-center']
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) Cleaning
>>> leftovers
>>> Thread-11::DEBUG::2012-04-16
>>> 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) Finished
>>> cleaning storage repository at '/rhev/data-center'
>>>
>>> On 4/16/12 8:38 AM, "Mike Burns" wrote:
>>>
>>>> On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote:
>>>>>
>>>>> Hi folks,
>>>>>
>>>>> I'm trying to install oVirt node v2.3.0 on a Dell C2100 server. I
>>>>> can boot up just fine, but the two menu options I see are "Start oVirt
>>>>> node", and "Troubleshooting". When I choose "Start oVirt node", it
>>>>> does just that, and I am soon after given a console login prompt. I've
>>>>> checked the docs, and I don't see what I'm supposed to do next, as in
>>>>> a password etc. Am I missing something?
>>>> Hi Adam,
>>>>
>>>> Something is breaking in the boot process. You should be getting a TUI
>>>> screen that will let you configure and install ovirt-node.
>>>>
>>>> I just added an entry on the Node Troubleshooting wiki page[1] for you to
>>>> follow.
>>>>
>>>> Mike
>>>>
>>>> [1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems
>>>>
>>>>> Thanks,
>>>>>
>>>>> -Adam
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users(a)ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
> This is definitely the cause of the installer failing:
>
> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
>
> What kind of media are you installing from: usb/cd/remote console?

--===============8377735419297577718==--

From jboggs at redhat.com Tue Apr 17 11:11:03 2012
Content-Type: multipart/mixed; boundary="===============2471813910248482882=="
MIME-Version: 1.0
From: Joey Boggs
To: users at ovirt.org
Subject: Re: [Users] Booting oVirt node image 2.3.0, no install option
Date: Tue, 17 Apr 2012 11:11:00 -0400
Message-ID: <4F8D8804.4050605@redhat.com>
In-Reply-To: DA32D702-0AE0-4454-86B1-BB647E9C82BA@vonnieda.org

--===============2471813910248482882==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

On 04/17/2012 10:51 AM, Adam vonNieda wrote:
> Thanks for the reply, Joey. I saw that too, and thought maybe my USB thumb drive was set to read-only, but it's not. This box doesn't have a DVD drive; I'll try a different USB drive, and if that doesn't work, I'll dig up an external DVD drive.
>
> Thanks again,
>
> -Adam
>
> Adam vonNieda
> Adam(a)vonNieda.org
>
> On Apr 17, 2012, at 9:07, Joey Boggs wrote:
>
>> On 04/17/2012 09:45 AM, Adam vonNieda wrote:
>>> Hi folks,
>>>
>>> Still hoping someone can give me a hand with this.
>>> I can't install
>>> ovirt-node 2.3.0 on a Dell C2100 server because it won't start the
>>> graphical interface. I booted up a standard F16 image this morning, and
>>> the graphical installer does start during that process. Logs are below.
>>>
>>> Thanks very much,
>>>
>>> -Adam
>>>
>>>> [snip: /tmp/ovirt.log, /var/log/ovirt.log and /var/log/vdsm/vdsm.log quoted in full, identical to the previous message]
Parsing c= md >>>> args >>>> MainThread::DEBUG::2012-04-16 >>>> 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) Creating = PID >>>> file >>>> MainThread::DEBUG::2012-04-16 >>>> 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) Cleaning = old >>>> socket >>>> MainThread::DEBUG::2012-04-16 >>>> 09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) Setting up >>>> keep alive thread >>>> MainThread::DEBUG::2012-04-16 >>>> 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) Creating >>>> remote object manager >>>> MainThread::DEBUG::2012-04-16 >>>> 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) Started >>>> serving super vdsm object >>>> Thread-11::DEBUG::2012-04-16 >>>> 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to conn= ect >>>> to Super Vdsm >>>> Thread-11::DEBUG::2012-04-16 >>>> 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected to S= uper >>>> Vdsm >>>> Thread-11::DEBUG::2012-04-16 >>>> 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) '/usr/bin/s= udo >>>> -n /sbin/multipath' (cwd None) >>>> Thread-11::DEBUG::2012-04-16 >>>> 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) SUCCESS: >>>> =3D ''; =3D 0 >>>> Thread-11::DEBUG::2012-04-16 >>>> 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) Operation = 'lvm >>>> invalidate operation' got the operation mutex >>>> Thread-11::DEBUG::2012-04-16 >>>> 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) Operation = 'lvm >>>> invalidate operation' released the operation mutex >>>> Thread-11::DEBUG::2012-04-16 >>>> 09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) Operation = 'lvm >>>> invalidate operation' got the operation mutex >>>> Thread-11::DEBUG::2012-04-16 >>>> 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) Operation = 'lvm >>>> invalidate operation' released the operation mutex >>>> Thread-11::DEBUG::2012-04-16 >>>> 
09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) Operation = 'lvm >>>> invalidate operation' got the operation mutex >>>> Thread-11::DEBUG::2012-04-16 >>>> 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) Operation = 'lvm >>>> invalidate operation' released the operation mutex >>>> Thread-11::DEBUG::2012-04-16 >>>> 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning last re= sult >>>> Thread-11::DEBUG::2012-04-16 >>>> 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) Started >>>> cleaning storage repository at '/rhev/data-center' >>>> Thread-11::DEBUG::2012-04-16 >>>> 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) White >>>> list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', >>>> '/rhev/data-center/mnt'] >>>> Thread-11::DEBUG::2012-04-16 >>>> 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) Mount >>>> list: ['/rhev/data-center'] >>>> Thread-11::DEBUG::2012-04-16 >>>> 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) Cleani= ng >>>> leftovers >>>> Thread-11::DEBUG::2012-04-16 >>>> 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) Finish= ed >>>> cleaning storage repository at '/rhev/data-center' >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> On 4/16/12 8:38 AM, "Mike Burns" wrote: >>>> >>>>> On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote: >>>>>> Hi folks, >>>>>> >>>>>> >>>>>> I'm trying to install oVirt node v2.3.0 on A Dell C2100 server. I >>>>>> can boot up just fine, but the two menu options I see are "Start oVi= rt >>>>>> node", and "Troubleshooting". When I choose "Start oVirt node", it >>>>>> does just that, and I am soon after given a console login prompt. I'= ve >>>>>> checked the docs, and I don't see what I'm supposed to do next, as in >>>>>> a password etc. Am I missing something? >>>>> Hi Adam, >>>>> >>>>> Something is breaking in the boot process. 
You should be getting a TUI >>>>> screen that will let you configure and install ovirt-node. >>>>> >>>>> I just added an entry on the Node Troubleshooting wiki page[1] for you to >>>>> follow. >>>>> >>>>> Mike >>>>> >>>>> [1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems >>>>> >>>>> >>>>>> Thanks, >>>>>> >>>>>> >>>>>> -Adam >>>>>> _______________________________________________ >>>>>> Users mailing list >>>>>> Users(a)ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/users >>> _______________________________________________ >>> Users mailing list >>> Users(a)ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >> This is definitely the cause of the installer failing: >> >> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live" >> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live() >> >> >> >> What kind of media are you installing from: usb/cd/remote console? > _______________________________________________ > Users mailing list > Users(a)ovirt.org > http://lists.ovirt.org/mailman/listinfo/users I did go back and take a look at mount_live and made sure it contains a specific patch to handle USB drives properly. If you can get back to a shell prompt, run blkid and capture the output. If it's way too much to type then just the USB drive output should be ok. --===============2471813910248482882==-- From adam at vonnieda.org Tue Apr 17 14:49:05 2012 Content-Type: multipart/mixed; boundary="===============2307832771376665993==" MIME-Version: 1.0 From: Adam vonNieda To: users at ovirt.org Subject: Re: [Users] Booting oVirt node image 2.3.0, no install option Date: Tue, 17 Apr 2012 13:48:54 -0500 Message-ID: In-Reply-To: 4F8D8804.4050605@redhat.com --===============2307832771376665993== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Turns out that there might be an issue with my thumb drive.
I tried another, and it worked fine. Thanks very much for the responses folks! -Adam On 4/17/12 10:11 AM, "Joey Boggs" wrote: >On 04/17/2012 10:51 AM, Adam vonNieda wrote: >> Thanks for the reply Joey. I saw that too, and thought maybe my USB >>thumb drive was set to read only, but it's not. This box doesn't have a >>DVD drive, I'll try a different USB drive, and if that doesn't work, >>I'll dig up an external DVD drive. >> >> Thanks again, >> >> -Adam >> >> Adam vonNieda >> Adam(a)vonNieda.org >> >> On Apr 17, 2012, at 9:07, Joey Boggs wrote: >> >>> On 04/17/2012 09:45 AM, Adam vonNieda wrote: >>>> Hi folks, >>>> >>>> Still hoping someone can give me a hand with this. I can't install >>>> ovirt-node 2.3.0 on a Dell C2100 server because it won't start >>>>the >>>> graphical interface. I booted up a standard F16 image this morning, >>>>and >>>> the graphical installer does start during that process. Logs are >>>>below. >>>> >>>> Thanks very much, >>>> >>>> -Adam >>>> >>>> >>>>> /tmp/ovirt.log >>>>> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D >>>>> >>>>> /sbin/restorecon set context >>>>> /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 >>>>>failed:'Read-only >>>>> file system' >>>>> /sbin/restorecon reset /var/cache/yum context >>>>> = >>>>>unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t >>>>>:s0 >>>>> /sbin/restorecon reset /etc/sysctl.conf context >>>>> = >>>>>system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0 >>>>> /sbin/restorecon reset /boot-kdump context >>>>> system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0 >>>>> 2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - ::::live >>>>> device:::: >>>>> /dev/sdb >>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat >>>>>/proc/mounts|grep >>>>> -q "none /live" >>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - >>>>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live >>>>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions
- >>>>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to >>>>>mount_live() >>>>> >>>>> /var/log/ovirt.log >>>>> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D >>>>> >>>>> Apr 16 09:35:53 Starting ovirt-early >>>>> oVirt Node Hypervisor release 2.3.0 (1.0.fc16) >>>>> Apr 16 09:35:53 Updating /etc/default/ovirt >>>>> Apr 16 09:35:54 Updating OVIRT_BOOTIF to '' >>>>> Apr 16 09:35:54 Updating OVIRT_INIT to '' >>>>> Apr 16 09:35:54 Updating OVIRT_UPGRADE to '' >>>>> Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1' >>>>> Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset >>>>> crashkernel=3D512M-2G:64M,2G-:128M elevator=3Ddeadline quiet rd_NO_LVM >>>>>rhgb >>>>> rd.luks=3D0 rd.md=3D0 rd.dm=3D0' >>>>> Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic' >>>>> Apr 16 09:35:54 Updating OVIRT_INSTALL to '1' >>>>> Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1' >>>>> Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw >>>>> Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw >>>>> Apr 16 09:36:09 Skip runtime mode configuration. >>>>> Apr 16 09:36:09 Completed ovirt-early >>>>> Apr 16 09:36:09 Starting ovirt-awake. >>>>> Apr 16 09:36:09 Node is operating in unmanaged mode. 
>>>>> Apr 16 09:36:09 Completed ovirt-awake: RETVAL=3D0 >>>>> Apr 16 09:36:09 Starting ovirt >>>>> Apr 16 09:36:09 Completed ovirt >>>>> Apr 16 09:36:10 Starting ovirt-post >>>>> Apr 16 09:36:20 Hardware virtualization detected >>>>> Volume group "HostVG" not found >>>>> Skipping volume group HostVG >>>>> Restarting network (via systemctl): [ OK ] >>>>> Apr 16 09:36:20 Starting ovirt-post >>>>> Apr 16 09:36:21 Hardware virtualization detected >>>>> Volume group "HostVG" not found >>>>> Skipping volume group HostVG >>>>> Restarting network (via systemctl): [ OK ] >>>>> Apr 16 09:36:22 Starting ovirt-cim >>>>> Apr 16 09:36:22 Completed ovirt-cim >>>>> WARNING: persistent config storage not available >>>>> >>>>> /var/log/vdsm/vdsm.log >>>>> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D >>>>> >>>>> MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I am >>>>>the >>>>> actual vdsm 4.9-0 >>>>> MainThread::DEBUG::2012-04-16 >>>>> = >>>>>09:36:23,873::resourceManager::376::ResourceManager::(registerNamespac >>>>>e) >>>>> Registering namespace 'Storage' >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >>>>> MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I am >>>>>the >>>>> actual vdsm 4.9-0 >>>>> MainThread::DEBUG::2012-04-16 >>>>> = >>>>>09:36:25,199::resourceManager::376::ResourceManager::(registerNamespac >>>>>e) >>>>> Registering namespace 'Storage' >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>> '/usr/bin/sudo -n 
/bin/cat /etc/multipath.conf' (cwd None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>SUCCESS: >>>>> =3D ''; =3D 0 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) >>>>>multipath >>>>> Defaulting to False >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc, >>>>> prefixName: multipath.conf, versions: 5 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions found: >>>>>[0] >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipath) >>>>> '/usr/bin/sudo -n /bin/cp /etc/multipath.conf /etc/multipath.conf.1' >>>>>(cwd >>>>> None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipath) >>>>> FAILED: =3D 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only >>>>>file >>>>> system\nsudo: sorry, a password is required to run sudo\n'; =3D= 1 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,256::multipath::118::Storage.Misc.excCmd::(setupMultipath) >>>>> '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf.1' (cwd None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,269::multipath::118::Storage.Misc.excCmd::(setupMultipath) >>>>> FAILED: =3D 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only >>>>>file >>>>> system\nsudo: sorry, a password is required to run sudo\n'; =3D= 1 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,270::multipath::123::Storage.Misc.excCmd::(setupMultipath) >>>>> '/usr/bin/sudo -n /bin/cp /tmp/tmpnPcvWi /etc/multipath.conf' (cwd >>>>>None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,283::multipath::123::Storage.Misc.excCmd::(setupMultipath) >>>>> SUCCESS: =3D ''; =3D 0 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,283::multipath::128::Storage.Misc.excCmd::(setupMultipath) >>>>> '/usr/bin/sudo -n /usr/sbin/persist 
/etc/multipath.conf' (cwd None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,294::multipath::128::Storage.Misc.excCmd::(setupMultipath) >>>>> FAILED: =3D 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only >>>>>file >>>>> system\nsudo: sorry, a password is required to run sudo\n'; =3D= 1 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,295::multipath::131::Storage.Misc.excCmd::(setupMultipath) >>>>> '/usr/bin/sudo -n /sbin/multipath -F' (cwd None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,323::multipath::131::Storage.Misc.excCmd::(setupMultipath) >>>>> FAILED: =3D ''; =3D 1 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:25,323::multipath::134::Storage.Misc.excCmd::(setupMultipath) >>>>> '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:26,397::multipath::134::Storage.Misc.excCmd::(setupMultipath) >>>>> SUCCESS: =3D ''; =3D 0 >>>>> MainThread::DEBUG::2012-04-16 >>>>> = >>>>>09:36:26,398::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType >>>>>) >>>>> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd >>>>>None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> = >>>>>09:36:26,443::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType >>>>>) >>>>> SUCCESS: =3D ''; =3D 0 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:26,445::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm >>>>>reload >>>>> operation' got the operation mutex >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:26,447::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n >>>>> /sbin/lvm pvs --config " devices { preferred_names =3D >>>>>[\\"^/dev/mapper/\\"] >>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>disable_after_error_count=3D3 >>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 >>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D 0 = } " >>>>> --noheadings --units b 
--nosuffix --separator | -o >>>>> = >>>>>uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_co >>>>>unt, >>>>> d >>>>> ev_size' (cwd None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = =3D >>>>>''; >>>>> =3D 0 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm >>>>>reload >>>>> operation' released the operation mutex >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm >>>>>reload >>>>> operation' got the operation mutex >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n >>>>> /sbin/lvm vgs --config " devices { preferred_names =3D >>>>>[\\"^/dev/mapper/\\"] >>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>disable_after_error_count=3D3 >>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 >>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D 0 = } " >>>>> --noheadings --units b --nosuffix --separator | -o >>>>> = >>>>>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_m >>>>>da_s >>>>> i >>>>> ze,vg_mda_free' (cwd None) >>>>> MainThread::INFO::2012-04-16 09:36:29,307::vdsm::71::vds::(run) I am >>>>>the >>>>> actual vdsm 4.9-0 >>>>> MainThread::DEBUG::2012-04-16 >>>>> = >>>>>09:36:29,514::resourceManager::376::ResourceManager::(registerNamespac >>>>>e) >>>>> Registering namespace 'Storage' >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:29,515::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:29,551::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> 
09:36:29,564::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>SUCCESS: >>>>> =3D ''; =3D 0 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:29,565::multipath::101::Storage.Multipath::(isEnabled) Current >>>>> revision of multipath.conf detected, preserving >>>>> MainThread::DEBUG::2012-04-16 >>>>> = >>>>>09:36:29,565::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType >>>>>) >>>>> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd >>>>>None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> = >>>>>09:36:29,606::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType >>>>>) >>>>> SUCCESS: =3D ''; =3D 0 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:29,606::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm >>>>>reload >>>>> operation' got the operation mutex >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:29,608::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n >>>>> /sbin/lvm pvs --config " devices { preferred_names =3D >>>>>[\\"^/dev/mapper/\\"] >>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>disable_after_error_count=3D3 >>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 >>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D 0 = } " >>>>> --noheadings --units b --nosuffix --separator | -o >>>>> = >>>>>uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_co >>>>>unt, >>>>> d >>>>> ev_size' (cwd None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:29,714::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = =3D >>>>>''; >>>>> =3D 0 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:29,715::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm >>>>>reload >>>>> operation' released the operation mutex >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:29,716::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm >>>>>reload >>>>> operation' got the operation mutex >>>>> MainThread::DEBUG::2012-04-16 >>>>> 
09:36:29,716::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n >>>>> /sbin/lvm vgs --config " devices { preferred_names =3D >>>>>[\\"^/dev/mapper/\\"] >>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>disable_after_error_count=3D3 >>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 >>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D 0 = } " >>>>> --noheadings --units b --nosuffix --separator | -o >>>>> = >>>>>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_m >>>>>da_s >>>>> i >>>>> ze,vg_mda_free' (cwd None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:29,813::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = =3D >>>>>' No >>>>> volume groups found\n'; =3D 0 >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm >>>>>reload >>>>> operation' released the operation mutex >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n >>>>> /sbin/lvm lvs --config " devices { preferred_names =3D >>>>>[\\"^/dev/mapper/\\"] >>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>disable_after_error_count=3D3 >>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 >>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D 0 = } " >>>>> --noheadings --units b --nosuffix --separator | -o >>>>> uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = =3D >>>>>' No >>>>> volume groups found\n'; =3D 0 >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to enter >>>>> sampling method (storage.sdc.refreshStorage) >>>>> MainThread::INFO::2012-04-16 >>>>> 
09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) >>>>>Starting >>>>> StorageDispatcher... >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to >>>>>sampling >>>>> method >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to enter >>>>> sampling method (storage.iscsi.rescan) >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to >>>>>sampling >>>>> method >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) >>>>>'/usr/bin/sudo -n >>>>> /sbin/iscsiadm -m session -R' (cwd None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) >>>>>'/usr/bin/pgrep >>>>> -xf ksmd' (cwd None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) >>>>>SUCCESS: =3D >>>>> ''; =3D 0 >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) FAILED: >>>>> =3D >>>>> 'iscsiadm: No session found.\n'; =3D 21 >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning last >>>>>result >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) Could >>>>>not >>>>> kill old Super Vdsm [Errno 2] No such file or directory: >>>>> '/var/run/vdsm/svdsm.pid' >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) >>>>>Launching >>>>> Super Vdsm >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) >>>>> '/usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.pyc >>>>> bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None) >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) Making >>>>>sure >>>>> I'm root >>>>> 
MainThread::DEBUG::2012-04-16 >>>>> 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) Parsing >>>>>cmd >>>>> args >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) >>>>>Creating PID >>>>> file >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) >>>>>Cleaning old >>>>> socket >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) Setting >>>>>up >>>>> keep alive thread >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) Creating >>>>> remote object manager >>>>> MainThread::DEBUG::2012-04-16 >>>>> 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) Started >>>>> serving super vdsm object >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to >>>>>connect >>>>> to Super Vdsm >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected to >>>>>Super >>>>> Vdsm >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) >>>>>'/usr/bin/sudo >>>>> -n /sbin/multipath' (cwd None) >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) >>>>>SUCCESS: >>>>> =3D ''; =3D 0 >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) >>>>>Operation 'lvm >>>>> invalidate operation' got the operation mutex >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) >>>>>Operation 'lvm >>>>> invalidate operation' released the operation mutex >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) >>>>>Operation 'lvm >>>>> invalidate operation' got the operation mutex >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 
09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) >>>>>Operation 'lvm >>>>> invalidate operation' released the operation mutex >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) >>>>>Operation 'lvm >>>>> invalidate operation' got the operation mutex >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) >>>>>Operation 'lvm >>>>> invalidate operation' released the operation mutex >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning last >>>>>result >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) >>>>>Started >>>>> cleaning storage repository at '/rhev/data-center' >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) White >>>>> list: ['/rhev/data-center/hsm-tasks', >>>>>'/rhev/data-center/hsm-tasks/*', >>>>> '/rhev/data-center/mnt'] >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) Mount >>>>> list: ['/rhev/data-center'] >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) >>>>>Cleaning >>>>> leftovers >>>>> Thread-11::DEBUG::2012-04-16 >>>>> 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) >>>>>Finished >>>>> cleaning storage repository at '/rhev/data-center' >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On 4/16/12 8:38 AM, "Mike Burns" wrote: >>>>> >>>>>> On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote: >>>>>>> Hi folks, >>>>>>> >>>>>>> >>>>>>> I'm trying to install oVirt node v2.3.0 on A Dell C2100 >>>>>>>server. I >>>>>>> can boot up just fine, but the two menu options I see are "Start >>>>>>>oVirt >>>>>>> node", and "Troubleshooting". When I choose "Start oVirt node", it >>>>>>> does just that, and I am soon after given a console login prompt. 
>>>>>>I've >>>>>>> checked the docs, and I don't see what I'm supposed to do next, as >>>>>>>in >>>>>>> a password etc. Am I missing something? >>>>>> Hi Adam, >>>>>> >>>>>> Something is breaking in the boot process. You should be getting a >>>>>>TUI >>>>>> screen that will let you configure and install ovirt-node. >>>>>> >>>>>> I just added an entry on the Node Troubleshooting wiki page[1] for >>>>>>you to >>>>>> follow. >>>>>> >>>>>> Mike >>>>>> >>>>>> [1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems >>>>>> >>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> >>>>>>> -Adam >>>>>>> _______________________________________________ >>>>>>> Users mailing list >>>>>>> Users(a)ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>> _______________________________________________ >>>> Users mailing list >>>> Users(a)ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>> This is definitely the cause of the installer failing >>> >>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat >>>/proc/mounts|grep -q "none /live" >>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to >>>mount_live() >>> >>> >>> >>> What kind of media are you installing from: usb/cd/remote console? >> _______________________________________________ >> Users mailing list >> Users(a)ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > >I did go back and take a look at mount_live and made sure it contains a >specific patch to handle usb drives properly. If you can get back to a >shell prompt, run blkid and capture the output. If it's way too much to >type then just the usb drive output should be ok.
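[Editor's note: the two checks discussed in this reply (the installer's /proc/mounts grep and the blkid capture) can be reproduced by hand from the node's shell. A minimal sketch, assuming a root prompt on the node; the /tmp/blkid.out path and the echoed messages are illustrative, not part of the installer:]

```shell
#!/bin/sh
# 1. mount_live() fails when the live image never gets mounted;
#    reproduce the installer's own check directly:
if grep -q "none /live" /proc/mounts; then
    echo "live image is mounted"
else
    echo "live image missing - consistent with the mount_live() error"
fi

# 2. Capture filesystem labels/UUIDs so the boot media can be identified
#    (only the USB stick's line is strictly needed when reporting back):
blkid 2>/dev/null | tee /tmp/blkid.out
# Keep the script's exit status 0 even if blkid is unavailable here:
true
```

[The same grep is what produces the "cat /proc/mounts|grep -q" DEBUG line in /tmp/ovirt.log above.]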
--===============2307832771376665993==--

From dominic at bostonvineyard.org Tue Apr 17 14:57:22 2012
Content-Type: multipart/mixed; boundary="===============4452646685188216127=="
MIME-Version: 1.0
From: Dominic Kaiser
To: users at ovirt.org
Subject: Re: [Users] Booting oVirt node image 2.3.0, no install option
Date: Tue, 17 Apr 2012 14:57:19 -0400
Message-ID:
In-Reply-To: CBB3250C.195DB%adam@vonnieda.org

--===============4452646685188216127==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

No prob. I am glad to hear it works!

dk

On Tue, Apr 17, 2012 at 2:48 PM, Adam vonNieda wrote:
>
> Turns out that there might be an issue with my thumb drive. I tried
> another, and it worked fine. Thanks very much for the responses folks!
>
> -Adam
>
> On 4/17/12 10:11 AM, "Joey Boggs" wrote:
>
> >On 04/17/2012 10:51 AM, Adam vonNieda wrote:
> >> Thanks for the reply Joey. I saw that too, and thought maybe my USB
> >> thumb drive was set to read only, but it's not. This box doesn't have
> >> a DVD drive. I'll try a different USB drive, and if that doesn't work,
> >> I'll dig up an external DVD drive.
> >>
> >> Thanks again,
> >>
> >> -Adam
> >>
> >> Adam vonNieda
> >> Adam(a)vonNieda.org
> >>
> >> On Apr 17, 2012, at 9:07, Joey Boggs wrote:
> >>
> >>> On 04/17/2012 09:45 AM, Adam vonNieda wrote:
> >>>> Hi folks,
> >>>>
> >>>> Still hoping someone can give me a hand with this. I can't install
> >>>> ovirt-node 2.3.0 on a Dell C2100 server because it won't start the
> >>>> graphical interface. I booted up a standard F16 image this morning,
> >>>> and the graphical installer does start during that process. Logs are
> >>>> below.
> >>>>
> >>>> Thanks very much,
> >>>>
> >>>> -Adam
> >>>>
> >>>>
> >>>>> /tmp/ovirt.log
> >>>>> ==============
> >>>>>
> >>>>> /sbin/restorecon set context
> >>>>> /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0
> >>>>> failed:'Read-only file system'
> >>>>> /sbin/restorecon reset /var/cache/yum context
> >>>>> unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t:s0
> >>>>> /sbin/restorecon reset /etc/sysctl.conf context
> >>>>> system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0
> >>>>> /sbin/restorecon reset /boot-kdump context
> >>>>> system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0
> >>>>> 2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - ::::live device::::
> >>>>> /dev/sdb
> >>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
> >>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions -
> >>>>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live
> >>>>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions -
> >>>>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
> >>>>>
> >>>>> /var/log/ovirt.log
> >>>>> ==================
> >>>>>
> >>>>> Apr 16 09:35:53 Starting ovirt-early
> >>>>> oVirt Node Hypervisor release 2.3.0 (1.0.fc16)
> >>>>> Apr 16 09:35:53 Updating /etc/default/ovirt
> >>>>> Apr 16 09:35:54 Updating OVIRT_BOOTIF to ''
> >>>>> Apr 16 09:35:54 Updating OVIRT_INIT to ''
> >>>>> Apr 16 09:35:54 Updating OVIRT_UPGRADE to ''
> >>>>> Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1'
> >>>>> Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset
> >>>>> crashkernel=512M-2G:64M,2G-:128M elevator=deadline quiet rd_NO_LVM rhgb
> >>>>> rd.luks=0 rd.md=0 rd.dm=0'
> >>>>> Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic'
> >>>>> Apr 16 09:35:54 Updating OVIRT_INSTALL to '1'
>>>>> Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1' > >>>>> Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw > >>>>> Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw > >>>>> Apr 16 09:36:09 Skip runtime mode configuration. > >>>>> Apr 16 09:36:09 Completed ovirt-early > >>>>> Apr 16 09:36:09 Starting ovirt-awake. > >>>>> Apr 16 09:36:09 Node is operating in unmanaged mode. > >>>>> Apr 16 09:36:09 Completed ovirt-awake: RETVAL=3D0 > >>>>> Apr 16 09:36:09 Starting ovirt > >>>>> Apr 16 09:36:09 Completed ovirt > >>>>> Apr 16 09:36:10 Starting ovirt-post > >>>>> Apr 16 09:36:20 Hardware virtualization detected > >>>>> Volume group "HostVG" not found > >>>>> Skipping volume group HostVG > >>>>> Restarting network (via systemctl): [ OK ] > >>>>> Apr 16 09:36:20 Starting ovirt-post > >>>>> Apr 16 09:36:21 Hardware virtualization detected > >>>>> Volume group "HostVG" not found > >>>>> Skipping volume group HostVG > >>>>> Restarting network (via systemctl): [ OK ] > >>>>> Apr 16 09:36:22 Starting ovirt-cim > >>>>> Apr 16 09:36:22 Completed ovirt-cim > >>>>> WARNING: persistent config storage not available > >>>>> > >>>>> /var/log/vdsm/vdsm.log > >>>>> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D > >>>>> > >>>>> MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I am > >>>>>the > >>>>> actual vdsm 4.9-0 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> > >>>>>09:36:23,873::resourceManager::376::ResourceManager::(registerNamesp= ac > >>>>>e) > >>>>> Registering namespace 'Storage' > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter - > >>>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled) > >>>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) > >>>>> MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I am > >>>>>the 
> >>>>> actual vdsm 4.9-0 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> > >>>>>09:36:25,199::resourceManager::376::ResourceManager::(registerNamesp= ac > >>>>>e) > >>>>> Registering namespace 'Storage' > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter - > >>>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled) > >>>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) > >>>>>SUCCESS: > >>>>> =3D ''; =3D 0 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) > >>>>>multipath > >>>>> Defaulting to False > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc, > >>>>> prefixName: multipath.conf, versions: 5 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions found: > >>>>>[0] > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipath) > >>>>> '/usr/bin/sudo -n /bin/cp /etc/multipath.conf /etc/multipath.conf.1' > >>>>>(cwd > >>>>> None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipath) > >>>>> FAILED: =3D 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-o= nly > >>>>>file > >>>>> system\nsudo: sorry, a password is required to run sudo\n'; = =3D 1 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,256::multipath::118::Storage.Misc.excCmd::(setupMultipath) > >>>>> '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf.1' (cwd Non= e) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,269::multipath::118::Storage.Misc.excCmd::(setupMultipath) > >>>>> FAILED: =3D 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-o= 
nly > >>>>>file > >>>>> system\nsudo: sorry, a password is required to run sudo\n'; = =3D 1 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,270::multipath::123::Storage.Misc.excCmd::(setupMultipath) > >>>>> '/usr/bin/sudo -n /bin/cp /tmp/tmpnPcvWi /etc/multipath.conf' (cwd > >>>>>None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,283::multipath::123::Storage.Misc.excCmd::(setupMultipath) > >>>>> SUCCESS: =3D ''; =3D 0 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,283::multipath::128::Storage.Misc.excCmd::(setupMultipath) > >>>>> '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf' (cwd None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,294::multipath::128::Storage.Misc.excCmd::(setupMultipath) > >>>>> FAILED: =3D 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-o= nly > >>>>>file > >>>>> system\nsudo: sorry, a password is required to run sudo\n'; = =3D 1 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,295::multipath::131::Storage.Misc.excCmd::(setupMultipath) > >>>>> '/usr/bin/sudo -n /sbin/multipath -F' (cwd None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,323::multipath::131::Storage.Misc.excCmd::(setupMultipath) > >>>>> FAILED: =3D ''; =3D 1 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:25,323::multipath::134::Storage.Misc.excCmd::(setupMultipath) > >>>>> '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:26,397::multipath::134::Storage.Misc.excCmd::(setupMultipath) > >>>>> SUCCESS: =3D ''; =3D 0 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> > >>>>>09:36:26,398::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy= pe > >>>>>) > >>>>> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd > >>>>>None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> > >>>>>09:36:26,443::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy= pe > >>>>>) > >>>>> SUCCESS: =3D ''; =3D 0 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 
09:36:26,445::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm > >>>>>reload > >>>>> operation' got the operation mutex > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:26,447::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n > >>>>> /sbin/lvm pvs --config " devices { preferred_names =3D > >>>>>[\\"^/dev/mapper/\\"] > >>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 > >>>>>disable_after_error_count=3D3 > >>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", > >>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 > >>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D = 0 } " > >>>>> --noheadings --units b --nosuffix --separator | -o > >>>>> > >>>>>uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_= co > >>>>>unt, > >>>>> d > >>>>> ev_size' (cwd None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = =3D > >>>>>''; > >>>>> =3D 0 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm > >>>>>reload > >>>>> operation' released the operation mutex > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm > >>>>>reload > >>>>> operation' got the operation mutex > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n > >>>>> /sbin/lvm vgs --config " devices { preferred_names =3D > >>>>>[\\"^/dev/mapper/\\"] > >>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 > >>>>>disable_after_error_count=3D3 > >>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", > >>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 > >>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D = 0 } " > >>>>> --noheadings --units b --nosuffix --separator | -o > >>>>> > 
>>>>>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg= _m > >>>>>da_s > >>>>> i > >>>>> ze,vg_mda_free' (cwd None) > >>>>> MainThread::INFO::2012-04-16 09:36:29,307::vdsm::71::vds::(run) I am > >>>>>the > >>>>> actual vdsm 4.9-0 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> > >>>>>09:36:29,514::resourceManager::376::ResourceManager::(registerNamesp= ac > >>>>>e) > >>>>> Registering namespace 'Storage' > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:29,515::threadPool::45::Misc.ThreadPool::(__init__) Enter - > >>>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:29,551::multipath::85::Storage.Misc.excCmd::(isEnabled) > >>>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:29,564::multipath::85::Storage.Misc.excCmd::(isEnabled) > >>>>>SUCCESS: > >>>>> =3D ''; =3D 0 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:29,565::multipath::101::Storage.Multipath::(isEnabled) Current > >>>>> revision of multipath.conf detected, preserving > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> > >>>>>09:36:29,565::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy= pe > >>>>>) > >>>>> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd > >>>>>None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> > >>>>>09:36:29,606::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy= pe > >>>>>) > >>>>> SUCCESS: =3D ''; =3D 0 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:29,606::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm > >>>>>reload > >>>>> operation' got the operation mutex > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:29,608::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n > >>>>> /sbin/lvm pvs --config " devices { preferred_names =3D > >>>>>[\\"^/dev/mapper/\\"] > >>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 > >>>>>disable_after_error_count=3D3 > >>>>> filter =3D [ 
\\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", > >>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 > >>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D = 0 } " > >>>>> --noheadings --units b --nosuffix --separator | -o > >>>>> > >>>>>uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_= co > >>>>>unt, > >>>>> d > >>>>> ev_size' (cwd None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:29,714::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = =3D > >>>>>''; > >>>>> =3D 0 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:29,715::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm > >>>>>reload > >>>>> operation' released the operation mutex > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:29,716::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm > >>>>>reload > >>>>> operation' got the operation mutex > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:29,716::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n > >>>>> /sbin/lvm vgs --config " devices { preferred_names =3D > >>>>>[\\"^/dev/mapper/\\"] > >>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 > >>>>>disable_after_error_count=3D3 > >>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", > >>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 > >>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D = 0 } " > >>>>> --noheadings --units b --nosuffix --separator | -o > >>>>> > >>>>>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg= _m > >>>>>da_s > >>>>> i > >>>>> ze,vg_mda_free' (cwd None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:29,813::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = =3D > >>>>>' No > >>>>> volume groups found\n'; =3D 0 > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm > >>>>>reload > >>>>> operation' released the operation mutex > >>>>> 
MainThread::DEBUG::2012-04-16 > >>>>> 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n > >>>>> /sbin/lvm lvs --config " devices { preferred_names =3D > >>>>>[\\"^/dev/mapper/\\"] > >>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 > >>>>>disable_after_error_count=3D3 > >>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", > >>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 > >>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D = 0 } " > >>>>> --noheadings --units b --nosuffix --separator | -o > >>>>> uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = =3D > >>>>>' No > >>>>> volume groups found\n'; =3D 0 > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to enter > >>>>> sampling method (storage.sdc.refreshStorage) > >>>>> MainThread::INFO::2012-04-16 > >>>>> 09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) > >>>>>Starting > >>>>> StorageDispatcher... 
> >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to > >>>>>sampling > >>>>> method > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to enter > >>>>> sampling method (storage.iscsi.rescan) > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to > >>>>>sampling > >>>>> method > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) > >>>>>'/usr/bin/sudo -n > >>>>> /sbin/iscsiadm -m session -R' (cwd None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) > >>>>>'/usr/bin/pgrep > >>>>> -xf ksmd' (cwd None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) > >>>>>SUCCESS: =3D > >>>>> ''; =3D 0 > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) FAILED: > >>>>> =3D > >>>>> 'iscsiadm: No session found.\n'; =3D 21 > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning last > >>>>>result > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) Could > >>>>>not > >>>>> kill old Super Vdsm [Errno 2] No such file or directory: > >>>>> '/var/run/vdsm/svdsm.pid' > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) > >>>>>Launching > >>>>> Super Vdsm > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) > >>>>> '/usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.p= yc > >>>>> bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None) > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) Making > >>>>>sure > >>>>> I'm root > >>>>> 
MainThread::DEBUG::2012-04-16 > >>>>> 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) Parsing > >>>>>cmd > >>>>> args > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) > >>>>>Creating PID > >>>>> file > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) > >>>>>Cleaning old > >>>>> socket > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) Setting > >>>>>up > >>>>> keep alive thread > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) Creati= ng > >>>>> remote object manager > >>>>> MainThread::DEBUG::2012-04-16 > >>>>> 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) Started > >>>>> serving super vdsm object > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to > >>>>>connect > >>>>> to Super Vdsm > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected to > >>>>>Super > >>>>> Vdsm > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) > >>>>>'/usr/bin/sudo > >>>>> -n /sbin/multipath' (cwd None) > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) > >>>>>SUCCESS: > >>>>> =3D ''; =3D 0 > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) > >>>>>Operation 'lvm > >>>>> invalidate operation' got the operation mutex > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) > >>>>>Operation 'lvm > >>>>> invalidate operation' released the operation mutex > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) > >>>>>Operation 'lvm > >>>>> invalidate operation' got the operation 
mutex > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) > >>>>>Operation 'lvm > >>>>> invalidate operation' released the operation mutex > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) > >>>>>Operation 'lvm > >>>>> invalidate operation' got the operation mutex > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) > >>>>>Operation 'lvm > >>>>> invalidate operation' released the operation mutex > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning last > >>>>>result > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) > >>>>>Started > >>>>> cleaning storage repository at '/rhev/data-center' > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) Whi= te > >>>>> list: ['/rhev/data-center/hsm-tasks', > >>>>>'/rhev/data-center/hsm-tasks/*', > >>>>> '/rhev/data-center/mnt'] > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) Mou= nt > >>>>> list: ['/rhev/data-center'] > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) > >>>>>Cleaning > >>>>> leftovers > >>>>> Thread-11::DEBUG::2012-04-16 > >>>>> 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) > >>>>>Finished > >>>>> cleaning storage repository at '/rhev/data-center' > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> On 4/16/12 8:38 AM, "Mike Burns" wrote: > >>>>> > >>>>>> On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote: > >>>>>>> Hi folks, > >>>>>>> > >>>>>>> > >>>>>>> I'm trying to install oVirt node v2.3.0 on A Dell C2100 > >>>>>>>server. 
> >>>>>>> I can boot up just fine, but the two menu options I see are
> >>>>>>> "Start oVirt node", and "Troubleshooting". When I choose "Start
> >>>>>>> oVirt node", it does just that, and I am soon after given a
> >>>>>>> console login prompt. I've checked the docs, and I don't see what
> >>>>>>> I'm supposed to do next, as in a password etc. Am I missing
> >>>>>>> something?
> >>>>>> Hi Adam,
> >>>>>>
> >>>>>> Something is breaking in the boot process. You should be getting a
> >>>>>> TUI screen that will let you configure and install ovirt-node.
> >>>>>>
> >>>>>> I just added an entry on the Node Troubleshooting wiki page[1] for
> >>>>>> you to follow.
> >>>>>>
> >>>>>> Mike
> >>>>>>
> >>>>>> [1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems
> >>>>>>
> >>>>>>> Thanks,
> >>>>>>>
> >>>>>>> -Adam
> >>>>>>> _______________________________________________
> >>>>>>> Users mailing list
> >>>>>>> Users(a)ovirt.org
> >>>>>>> http://lists.ovirt.org/mailman/listinfo/users
> >>>> _______________________________________________
> >>>> Users mailing list
> >>>> Users(a)ovirt.org
> >>>> http://lists.ovirt.org/mailman/listinfo/users
> >>> This is definitely the cause of the installer failing:
> >>>
> >>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
> >>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
> >>>
> >>> What kind of media are you installing from: usb/cd/remote console?
> >> _______________________________________________
> >> Users mailing list
> >> Users(a)ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >
> > I did go back and take a look at mount_live and made sure it contains a
> > specific patch to handle USB drives properly. If you can get back to a
> > shell prompt, run blkid and capture the output. If it's way too much to
> > type, then just the USB drive's output should be OK.
> >
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

--
Dominic Kaiser
Greater Boston Vineyard
Director of Operations

cell: 617-230-1412
fax: 617-252-0238
email: dominic(a)bostonvineyard.org

--===============4452646685188216127==--
YmluL2NhdCAvZXRjL211bHRpcGF0aC5jb25mJiMzOTsgKGN3ZCBOb25lKTxicj4KJmd0OyZndDsm Z3Q7Jmd0OyZndDsgTWFpblRocmVhZDo6REVCVUc6OjIwMTItMDQtMTY8YnI+CiZndDsmZ3Q7Jmd0 OyZndDsmZ3Q7IDA5OjM2OjI1LDI0Mzo6bXVsdGlwYXRoOjo4NTo6U3RvcmFnZS5NaXNjLmV4Y0Nt ZDo6KGlzRW5hYmxlZCk8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7U1VDQ0VTUzo8YnI+CiZndDsm Z3Q7Jmd0OyZndDsmZ3Q7ICZsdDtlcnImZ3Q7IKAgPSAmIzM5OyYjMzk7OyZsdDtyYyZndDsgoCA9 IDA8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6OkRFQlVHOjoyMDEyLTA0LTE2 PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjoyNSwyNDQ6Om11bHRpcGF0aDo6MTA5OjpT dG9yYWdlLk11bHRpcGF0aDo6KGlzRW5hYmxlZCk8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7bXVs dGlwYXRoPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBEZWZhdWx0aW5nIHRvIEZhbHNlPGJyPgom Z3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBNYWluVGhyZWFkOjpERUJVRzo6MjAxMi0wNC0xNjxicj4KJmd0 OyZndDsmZ3Q7Jmd0OyZndDsgMDk6MzY6MjUsMjQ0OjptaXNjOjo0ODc6OlN0b3JhZ2UuTWlzYzo6 KHJvdGF0ZUZpbGVzKSBkaXI6IC9ldGMsPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBwcmVmaXhO YW1lOiBtdWx0aXBhdGguY29uZiwgdmVyc2lvbnM6IDU8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7 IE1haW5UaHJlYWQ6OkRFQlVHOjoyMDEyLTA0LTE2PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAw OTozNjoyNSwyNDQ6Om1pc2M6OjUwODo6U3RvcmFnZS5NaXNjOjoocm90YXRlRmlsZXMpIHZlcnNp b25zIGZvdW5kOjxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDtbMF08YnI+CiZndDsmZ3Q7Jmd0OyZn dDsmZ3Q7IE1haW5UaHJlYWQ6OkRFQlVHOjoyMDEyLTA0LTE2PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7 Jmd0OyAwOTozNjoyNSwyNDQ6Om11bHRpcGF0aDo6MTE4OjpTdG9yYWdlLk1pc2MuZXhjQ21kOjoo c2V0dXBNdWx0aXBhdGgpPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAmIzM5Oy91c3IvYmluL3N1 ZG8gLW4gL2Jpbi9jcCAvZXRjL211bHRpcGF0aC5jb25mIC9ldGMvbXVsdGlwYXRoLmNvbmYuMSYj Mzk7PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Oyhjd2Q8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7 IE5vbmUpPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBNYWluVGhyZWFkOjpERUJVRzo6MjAxMi0w NC0xNjxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgMDk6MzY6MjUsMjU1OjptdWx0aXBhdGg6OjEx ODo6U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KHNldHVwTXVsdGlwYXRoKTxicj4KJmd0OyZndDsmZ3Q7 Jmd0OyZndDsgRkFJTEVEOiZsdDtlcnImZ3Q7IKAgPSAmIzM5O3N1ZG86IHVuYWJsZSB0byBta2Rp 
ciAvdmFyL2RiL3N1ZG8vdmRzbTogUmVhZC1vbmx5PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0O2Zp bGU8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IHN5c3RlbVxuc3Vkbzogc29ycnksIGEgcGFzc3dv cmQgaXMgcmVxdWlyZWQgdG8gcnVuIHN1ZG9cbiYjMzk7OyZsdDtyYyZndDsgoCA9IDE8YnI+CiZn dDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6OkRFQlVHOjoyMDEyLTA0LTE2PGJyPgomZ3Q7 Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjoyNSwyNTY6Om11bHRpcGF0aDo6MTE4OjpTdG9yYWdlLk1p c2MuZXhjQ21kOjooc2V0dXBNdWx0aXBhdGgpPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAmIzM5 Oy91c3IvYmluL3N1ZG8gLW4gL3Vzci9zYmluL3BlcnNpc3QgL2V0Yy9tdWx0aXBhdGguY29uZi4x JiMzOTsgKGN3ZCBOb25lKTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgTWFpblRocmVhZDo6REVC VUc6OjIwMTItMDQtMTY8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IDA5OjM2OjI1LDI2OTo6bXVs dGlwYXRoOjoxMTg6OlN0b3JhZ2UuTWlzYy5leGNDbWQ6OihzZXR1cE11bHRpcGF0aCk8YnI+CiZn dDsmZ3Q7Jmd0OyZndDsmZ3Q7IEZBSUxFRDombHQ7ZXJyJmd0OyCgID0gJiMzOTtzdWRvOiB1bmFi bGUgdG8gbWtkaXIgL3Zhci9kYi9zdWRvL3Zkc206IFJlYWQtb25seTxicj4KJmd0OyZndDsmZ3Q7 Jmd0OyZndDtmaWxlPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBzeXN0ZW1cbnN1ZG86IHNvcnJ5 LCBhIHBhc3N3b3JkIGlzIHJlcXVpcmVkIHRvIHJ1biBzdWRvXG4mIzM5OzsmbHQ7cmMmZ3Q7IKAg PSAxPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBNYWluVGhyZWFkOjpERUJVRzo6MjAxMi0wNC0x Njxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgMDk6MzY6MjUsMjcwOjptdWx0aXBhdGg6OjEyMzo6 U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KHNldHVwTXVsdGlwYXRoKTxicj4KJmd0OyZndDsmZ3Q7Jmd0 OyZndDsgJiMzOTsvdXNyL2Jpbi9zdWRvIC1uIC9iaW4vY3AgL3RtcC90bXBuUGN2V2kgL2V0Yy9t dWx0aXBhdGguY29uZiYjMzk7IChjd2Q8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Tm9uZSk8YnI+ CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6OkRFQlVHOjoyMDEyLTA0LTE2PGJyPgom Z3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjoyNSwyODM6Om11bHRpcGF0aDo6MTIzOjpTdG9yYWdl Lk1pc2MuZXhjQ21kOjooc2V0dXBNdWx0aXBhdGgpPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBT VUNDRVNTOiZsdDtlcnImZ3Q7IKAgPSAmIzM5OyYjMzk7OyZsdDtyYyZndDsgoCA9IDA8YnI+CiZn dDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6OkRFQlVHOjoyMDEyLTA0LTE2PGJyPgomZ3Q7 Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjoyNSwyODM6Om11bHRpcGF0aDo6MTI4OjpTdG9yYWdlLk1p 
c2MuZXhjQ21kOjooc2V0dXBNdWx0aXBhdGgpPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAmIzM5 Oy91c3IvYmluL3N1ZG8gLW4gL3Vzci9zYmluL3BlcnNpc3QgL2V0Yy9tdWx0aXBhdGguY29uZiYj Mzk7IChjd2QgTm9uZSk8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6OkRFQlVH OjoyMDEyLTA0LTE2PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjoyNSwyOTQ6Om11bHRp cGF0aDo6MTI4OjpTdG9yYWdlLk1pc2MuZXhjQ21kOjooc2V0dXBNdWx0aXBhdGgpPGJyPgomZ3Q7 Jmd0OyZndDsmZ3Q7Jmd0OyBGQUlMRUQ6Jmx0O2VyciZndDsgoCA9ICYjMzk7c3VkbzogdW5hYmxl IHRvIG1rZGlyIC92YXIvZGIvc3Vkby92ZHNtOiBSZWFkLW9ubHk8YnI+CiZndDsmZ3Q7Jmd0OyZn dDsmZ3Q7ZmlsZTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgc3lzdGVtXG5zdWRvOiBzb3JyeSwg YSBwYXNzd29yZCBpcyByZXF1aXJlZCB0byBydW4gc3Vkb1xuJiMzOTs7Jmx0O3JjJmd0OyCgID0g MTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgTWFpblRocmVhZDo6REVCVUc6OjIwMTItMDQtMTY8 YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IDA5OjM2OjI1LDI5NTo6bXVsdGlwYXRoOjoxMzE6OlN0 b3JhZ2UuTWlzYy5leGNDbWQ6OihzZXR1cE11bHRpcGF0aCk8YnI+CiZndDsmZ3Q7Jmd0OyZndDsm Z3Q7ICYjMzk7L3Vzci9iaW4vc3VkbyAtbiAvc2Jpbi9tdWx0aXBhdGggLUYmIzM5OyAoY3dkIE5v bmUpPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBNYWluVGhyZWFkOjpERUJVRzo6MjAxMi0wNC0x Njxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgMDk6MzY6MjUsMzIzOjptdWx0aXBhdGg6OjEzMTo6 U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KHNldHVwTXVsdGlwYXRoKTxicj4KJmd0OyZndDsmZ3Q7Jmd0 OyZndDsgRkFJTEVEOiZsdDtlcnImZ3Q7IKAgPSAmIzM5OyYjMzk7OyZsdDtyYyZndDsgoCA9IDE8 YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6OkRFQlVHOjoyMDEyLTA0LTE2PGJy PgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjoyNSwzMjM6Om11bHRpcGF0aDo6MTM0OjpTdG9y YWdlLk1pc2MuZXhjQ21kOjooc2V0dXBNdWx0aXBhdGgpPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0 OyAmIzM5Oy91c3IvYmluL3N1ZG8gLW4gL3NiaW4vc2VydmljZSBtdWx0aXBhdGhkIHJlc3RhcnQm IzM5OyAoY3dkIE5vbmUpPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBNYWluVGhyZWFkOjpERUJV Rzo6MjAxMi0wNC0xNjxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgMDk6MzY6MjYsMzk3OjptdWx0 aXBhdGg6OjEzNDo6U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KHNldHVwTXVsdGlwYXRoKTxicj4KJmd0 OyZndDsmZ3Q7Jmd0OyZndDsgU1VDQ0VTUzombHQ7ZXJyJmd0OyCgID0gJiMzOTsmIzM5OzsmbHQ7 
cmMmZ3Q7IKAgPSAwPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBNYWluVGhyZWFkOjpERUJVRzo6 MjAxMi0wNC0xNjxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDs8YnI+CiZndDsmZ3Q7Jmd0OyZndDsm Z3Q7MDk6MzY6MjYsMzk4Ojpoc206OjI0ODo6U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KF9fdmFsaWRh dGVMdm1Mb2NraW5nVHlwZTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDspPGJyPgomZ3Q7Jmd0OyZn dDsmZ3Q7Jmd0OyAmIzM5Oy91c3IvYmluL3N1ZG8gLW4gL3NiaW4vbHZtIGR1bXBjb25maWcgZ2xv YmFsL2xvY2tpbmdfdHlwZSYjMzk7IChjd2Q8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7Tm9uZSk8 YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6OkRFQlVHOjoyMDEyLTA0LTE2PGJy PgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Ozxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDswOTozNjoyNiw0 NDM6OmhzbTo6MjQ4OjpTdG9yYWdlLk1pc2MuZXhjQ21kOjooX192YWxpZGF0ZUx2bUxvY2tpbmdU eXBlPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0Oyk8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IFNV Q0NFU1M6Jmx0O2VyciZndDsgoCA9ICYjMzk7JiMzOTs7Jmx0O3JjJmd0OyCgID0gMDxicj4KJmd0 OyZndDsmZ3Q7Jmd0OyZndDsgTWFpblRocmVhZDo6REVCVUc6OjIwMTItMDQtMTY8YnI+CiZndDsm Z3Q7Jmd0OyZndDsmZ3Q7IDA5OjM2OjI2LDQ0NTo6bHZtOjozMTk6Ok9wZXJhdGlvbk11dGV4Ojoo X3JlbG9hZHB2cykgT3BlcmF0aW9uICYjMzk7bHZtPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0O3Jl bG9hZDxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgb3BlcmF0aW9uJiMzOTsgZ290IHRoZSBvcGVy YXRpb24gbXV0ZXg8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6OkRFQlVHOjoy MDEyLTA0LTE2PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjoyNiw0NDc6Omx2bTo6Mjg3 OjpTdG9yYWdlLk1pc2MuZXhjQ21kOjooY21kKSAmIzM5Oy91c3IvYmluL3N1ZG8gLW48YnI+CiZn dDsmZ3Q7Jmd0OyZndDsmZ3Q7IC9zYmluL2x2bSBwdnMgLS1jb25maWcgJnF1b3Q7IGRldmljZXMg eyBwcmVmZXJyZWRfbmFtZXMgPTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDtbXFwmcXVvdDteL2Rl di9tYXBwZXIvXFwmcXVvdDtdPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBpZ25vcmVfc3VzcGVu ZGVkX2RldmljZXM9MSB3cml0ZV9jYWNoZV9zdGF0ZT0wPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0 O2Rpc2FibGVfYWZ0ZXJfZXJyb3JfY291bnQ9Mzxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgZmls dGVyID0gWyBcXCZxdW90O2ElMVNhbkRpc2t8MzYwMDYwNWIwMDQzNmJkODAxNzFiMTA1YzIyNTM3 N2NlJVxcJnF1b3Q7LDxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgXFwmcXVvdDtyJS4qJVxcJnF1 
b3Q7IF0gfSCgZ2xvYmFsIHsgoGxvY2tpbmdfdHlwZT0xIKBwcmlvcml0aXNlX3dyaXRlX2xvY2tz PTE8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IHdhaXRfZm9yX2xvY2tzPTEgfSCgYmFja3VwIHsg oHJldGFpbl9taW4gPSA1MCCgcmV0YWluX2RheXMgPSAwIH0gJnF1b3Q7PGJyPgomZ3Q7Jmd0OyZn dDsmZ3Q7Jmd0OyAtLW5vaGVhZGluZ3MgLS11bml0cyBiIC0tbm9zdWZmaXggLS1zZXBhcmF0b3Ig fCAtbzxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDs8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7dXVp ZCxuYW1lLHNpemUsdmdfbmFtZSx2Z191dWlkLHBlX3N0YXJ0LHBlX2NvdW50LHBlX2FsbG9jX2Nv dW50LG1kYV9jbzxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDt1bnQsPGJyPgomZ3Q7Jmd0OyZndDsm Z3Q7Jmd0OyBkPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBldl9zaXplJiMzOTsgKGN3ZCBOb25l KTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgTWFpblRocmVhZDo6REVCVUc6OjIwMTItMDQtMTY8 YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IDA5OjM2OjI2LDgxMTo6bHZtOjoyODc6OlN0b3JhZ2Uu TWlzYy5leGNDbWQ6OihjbWQpIFNVQ0NFU1M6Jmx0O2VyciZndDsgoCA9PGJyPgomZ3Q7Jmd0OyZn dDsmZ3Q7Jmd0OyYjMzk7JiMzOTs7PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAmbHQ7cmMmZ3Q7 IKAgPSAwPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBNYWluVGhyZWFkOjpERUJVRzo6MjAxMi0w NC0xNjxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgMDk6MzY6MjYsODExOjpsdm06OjM0Mjo6T3Bl cmF0aW9uTXV0ZXg6OihfcmVsb2FkcHZzKSBPcGVyYXRpb24gJiMzOTtsdm08YnI+CiZndDsmZ3Q7 Jmd0OyZndDsmZ3Q7cmVsb2FkPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBvcGVyYXRpb24mIzM5 OyByZWxlYXNlZCB0aGUgb3BlcmF0aW9uIG11dGV4PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBN YWluVGhyZWFkOjpERUJVRzo6MjAxMi0wNC0xNjxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgMDk6 MzY6MjYsODEyOjpsdm06OjM1Mjo6T3BlcmF0aW9uTXV0ZXg6OihfcmVsb2FkdmdzKSBPcGVyYXRp b24gJiMzOTtsdm08YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7cmVsb2FkPGJyPgomZ3Q7Jmd0OyZn dDsmZ3Q7Jmd0OyBvcGVyYXRpb24mIzM5OyBnb3QgdGhlIG9wZXJhdGlvbiBtdXRleDxicj4KJmd0 OyZndDsmZ3Q7Jmd0OyZndDsgTWFpblRocmVhZDo6REVCVUc6OjIwMTItMDQtMTY8YnI+CiZndDsm Z3Q7Jmd0OyZndDsmZ3Q7IDA5OjM2OjI2LDgxMjo6bHZtOjoyODc6OlN0b3JhZ2UuTWlzYy5leGND bWQ6OihjbWQpICYjMzk7L3Vzci9iaW4vc3VkbyAtbjxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsg L3NiaW4vbHZtIHZncyAtLWNvbmZpZyAmcXVvdDsgZGV2aWNlcyB7IHByZWZlcnJlZF9uYW1lcyA9 
PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0O1tcXCZxdW90O14vZGV2L21hcHBlci9cXCZxdW90O108 YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IGlnbm9yZV9zdXNwZW5kZWRfZGV2aWNlcz0xIHdyaXRl X2NhY2hlX3N0YXRlPTA8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7ZGlzYWJsZV9hZnRlcl9lcnJv cl9jb3VudD0zPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBmaWx0ZXIgPSBbIFxcJnF1b3Q7YSUx U2FuRGlza3wzNjAwNjA1YjAwNDM2YmQ4MDE3MWIxMDVjMjI1Mzc3Y2UlXFwmcXVvdDssPGJyPgom Z3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBcXCZxdW90O3IlLiolXFwmcXVvdDsgXSB9IKBnbG9iYWwgeyCg bG9ja2luZ190eXBlPTEgoHByaW9yaXRpc2Vfd3JpdGVfbG9ja3M9MTxicj4KJmd0OyZndDsmZ3Q7 Jmd0OyZndDsgd2FpdF9mb3JfbG9ja3M9MSB9IKBiYWNrdXAgeyCgcmV0YWluX21pbiA9IDUwIKBy ZXRhaW5fZGF5cyA9IDAgfSAmcXVvdDs8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IC0tbm9oZWFk aW5ncyAtLXVuaXRzIGIgLS1ub3N1ZmZpeCAtLXNlcGFyYXRvciB8IC1vPGJyPgomZ3Q7Jmd0OyZn dDsmZ3Q7Jmd0Ozxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDt1dWlkLG5hbWUsYXR0cixzaXplLGZy ZWUsZXh0ZW50X3NpemUsZXh0ZW50X2NvdW50LGZyZWVfY291bnQsdGFncyx2Z19tPGJyPgomZ3Q7 Jmd0OyZndDsmZ3Q7Jmd0O2RhX3M8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IGk8YnI+CiZndDsm Z3Q7Jmd0OyZndDsmZ3Q7IHplLHZnX21kYV9mcmVlJiMzOTsgKGN3ZCBOb25lKTxicj4KJmd0OyZn dDsmZ3Q7Jmd0OyZndDsgTWFpblRocmVhZDo6SU5GTzo6MjAxMi0wNC0xNiAwOTozNjoyOSwzMDc6 OnZkc206OjcxOjp2ZHM6OihydW4pIEkgYW08YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7dGhlPGJy PgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBhY3R1YWwgdmRzbSA0LjktMDxicj4KJmd0OyZndDsmZ3Q7 Jmd0OyZndDsgTWFpblRocmVhZDo6REVCVUc6OjIwMTItMDQtMTY8YnI+CiZndDsmZ3Q7Jmd0OyZn dDsmZ3Q7PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OzA5OjM2OjI5LDUxNDo6cmVzb3VyY2VNYW5h Z2VyOjozNzY6OlJlc291cmNlTWFuYWdlcjo6KHJlZ2lzdGVyTmFtZXNwYWM8YnI+CiZndDsmZ3Q7 Jmd0OyZndDsmZ3Q7ZSk8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IFJlZ2lzdGVyaW5nIG5hbWVz cGFjZSAmIzM5O1N0b3JhZ2UmIzM5Ozxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgTWFpblRocmVh ZDo6REVCVUc6OjIwMTItMDQtMTY8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IDA5OjM2OjI5LDUx NTo6dGhyZWFkUG9vbDo6NDU6Ok1pc2MuVGhyZWFkUG9vbDo6KF9faW5pdF9fKSBFbnRlciAtPGJy PgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBudW1UaHJlYWRzOiAxMC4wLCB3YWl0VGltZW91dDogMywg 
bWF4VGFza3M6IDUwMC4wPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBNYWluVGhyZWFkOjpERUJV Rzo6MjAxMi0wNC0xNjxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgMDk6MzY6MjksNTUxOjptdWx0 aXBhdGg6Ojg1OjpTdG9yYWdlLk1pc2MuZXhjQ21kOjooaXNFbmFibGVkKTxicj4KJmd0OyZndDsm Z3Q7Jmd0OyZndDsgJiMzOTsvdXNyL2Jpbi9zdWRvIC1uIC9iaW4vY2F0IC9ldGMvbXVsdGlwYXRo LmNvbmYmIzM5OyAoY3dkIE5vbmUpPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBNYWluVGhyZWFk OjpERUJVRzo6MjAxMi0wNC0xNjxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgMDk6MzY6MjksNTY0 OjptdWx0aXBhdGg6Ojg1OjpTdG9yYWdlLk1pc2MuZXhjQ21kOjooaXNFbmFibGVkKTxicj4KJmd0 OyZndDsmZ3Q7Jmd0OyZndDtTVUNDRVNTOjxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgJmx0O2Vy ciZndDsgoCA9ICYjMzk7JiMzOTs7Jmx0O3JjJmd0OyCgID0gMDxicj4KJmd0OyZndDsmZ3Q7Jmd0 OyZndDsgTWFpblRocmVhZDo6REVCVUc6OjIwMTItMDQtMTY8YnI+CiZndDsmZ3Q7Jmd0OyZndDsm Z3Q7IDA5OjM2OjI5LDU2NTo6bXVsdGlwYXRoOjoxMDE6OlN0b3JhZ2UuTXVsdGlwYXRoOjooaXNF bmFibGVkKSBDdXJyZW50PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyByZXZpc2lvbiBvZiBtdWx0 aXBhdGguY29uZiBkZXRlY3RlZCwgcHJlc2VydmluZzxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsg TWFpblRocmVhZDo6REVCVUc6OjIwMTItMDQtMTY8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7PGJy PgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OzA5OjM2OjI5LDU2NTo6aHNtOjoyNDg6OlN0b3JhZ2UuTWlz Yy5leGNDbWQ6OihfX3ZhbGlkYXRlTHZtTG9ja2luZ1R5cGU8YnI+CiZndDsmZ3Q7Jmd0OyZndDsm Z3Q7KTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgJiMzOTsvdXNyL2Jpbi9zdWRvIC1uIC9zYmlu L2x2bSBkdW1wY29uZmlnIGdsb2JhbC9sb2NraW5nX3R5cGUmIzM5OyAoY3dkPGJyPgomZ3Q7Jmd0 OyZndDsmZ3Q7Jmd0O05vbmUpPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBNYWluVGhyZWFkOjpE RUJVRzo6MjAxMi0wNC0xNjxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDs8YnI+CiZndDsmZ3Q7Jmd0 OyZndDsmZ3Q7MDk6MzY6MjksNjA2Ojpoc206OjI0ODo6U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KF9f dmFsaWRhdGVMdm1Mb2NraW5nVHlwZTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDspPGJyPgomZ3Q7 Jmd0OyZndDsmZ3Q7Jmd0OyBTVUNDRVNTOiZsdDtlcnImZ3Q7IKAgPSAmIzM5OyYjMzk7OyZsdDty YyZndDsgoCA9IDA8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6OkRFQlVHOjoy MDEyLTA0LTE2PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjoyOSw2MDY6Omx2bTo6MzE5 
OjpPcGVyYXRpb25NdXRleDo6KF9yZWxvYWRwdnMpIE9wZXJhdGlvbiAmIzM5O2x2bTxicj4KJmd0 OyZndDsmZ3Q7Jmd0OyZndDtyZWxvYWQ8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IG9wZXJhdGlv biYjMzk7IGdvdCB0aGUgb3BlcmF0aW9uIG11dGV4PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBN YWluVGhyZWFkOjpERUJVRzo6MjAxMi0wNC0xNjxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgMDk6 MzY6MjksNjA4Ojpsdm06OjI4Nzo6U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KGNtZCkgJiMzOTsvdXNy L2Jpbi9zdWRvIC1uPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAvc2Jpbi9sdm0gcHZzIC0tY29u ZmlnICZxdW90OyBkZXZpY2VzIHsgcHJlZmVycmVkX25hbWVzID08YnI+CiZndDsmZ3Q7Jmd0OyZn dDsmZ3Q7W1xcJnF1b3Q7Xi9kZXYvbWFwcGVyL1xcJnF1b3Q7XTxicj4KJmd0OyZndDsmZ3Q7Jmd0 OyZndDsgaWdub3JlX3N1c3BlbmRlZF9kZXZpY2VzPTEgd3JpdGVfY2FjaGVfc3RhdGU9MDxicj4K Jmd0OyZndDsmZ3Q7Jmd0OyZndDtkaXNhYmxlX2FmdGVyX2Vycm9yX2NvdW50PTM8YnI+CiZndDsm Z3Q7Jmd0OyZndDsmZ3Q7IGZpbHRlciA9IFsgXFwmcXVvdDthJTFTYW5EaXNrfDM2MDA2MDViMDA0 MzZiZDgwMTcxYjEwNWMyMjUzNzdjZSVcXCZxdW90Oyw8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7 IFxcJnF1b3Q7ciUuKiVcXCZxdW90OyBdIH0goGdsb2JhbCB7IKBsb2NraW5nX3R5cGU9MSCgcHJp b3JpdGlzZV93cml0ZV9sb2Nrcz0xPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyB3YWl0X2Zvcl9s b2Nrcz0xIH0goGJhY2t1cCB7IKByZXRhaW5fbWluID0gNTAgoHJldGFpbl9kYXlzID0gMCB9ICZx dW90Ozxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgLS1ub2hlYWRpbmdzIC0tdW5pdHMgYiAtLW5v c3VmZml4IC0tc2VwYXJhdG9yIHwgLW88YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7PGJyPgomZ3Q7 Jmd0OyZndDsmZ3Q7Jmd0O3V1aWQsbmFtZSxzaXplLHZnX25hbWUsdmdfdXVpZCxwZV9zdGFydCxw ZV9jb3VudCxwZV9hbGxvY19jb3VudCxtZGFfY288YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7dW50 LDxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgZDxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgZXZf c2l6ZSYjMzk7IChjd2QgTm9uZSk8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6 OkRFQlVHOjoyMDEyLTA0LTE2PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjoyOSw3MTQ6 Omx2bTo6Mjg3OjpTdG9yYWdlLk1pc2MuZXhjQ21kOjooY21kKSBTVUNDRVNTOiZsdDtlcnImZ3Q7 IKAgPTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsmIzM5OyYjMzk7Ozxicj4KJmd0OyZndDsmZ3Q7 Jmd0OyZndDsgJmx0O3JjJmd0OyCgID0gMDxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgTWFpblRo 
cmVhZDo6REVCVUc6OjIwMTItMDQtMTY8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IDA5OjM2OjI5 LDcxNTo6bHZtOjozNDI6Ok9wZXJhdGlvbk11dGV4OjooX3JlbG9hZHB2cykgT3BlcmF0aW9uICYj Mzk7bHZtPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0O3JlbG9hZDxicj4KJmd0OyZndDsmZ3Q7Jmd0 OyZndDsgb3BlcmF0aW9uJiMzOTsgcmVsZWFzZWQgdGhlIG9wZXJhdGlvbiBtdXRleDxicj4KJmd0 OyZndDsmZ3Q7Jmd0OyZndDsgTWFpblRocmVhZDo6REVCVUc6OjIwMTItMDQtMTY8YnI+CiZndDsm Z3Q7Jmd0OyZndDsmZ3Q7IDA5OjM2OjI5LDcxNjo6bHZtOjozNTI6Ok9wZXJhdGlvbk11dGV4Ojoo X3JlbG9hZHZncykgT3BlcmF0aW9uICYjMzk7bHZtPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0O3Jl bG9hZDxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgb3BlcmF0aW9uJiMzOTsgZ290IHRoZSBvcGVy YXRpb24gbXV0ZXg8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6OkRFQlVHOjoy MDEyLTA0LTE2PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjoyOSw3MTY6Omx2bTo6Mjg3 OjpTdG9yYWdlLk1pc2MuZXhjQ21kOjooY21kKSAmIzM5Oy91c3IvYmluL3N1ZG8gLW48YnI+CiZn dDsmZ3Q7Jmd0OyZndDsmZ3Q7IC9zYmluL2x2bSB2Z3MgLS1jb25maWcgJnF1b3Q7IGRldmljZXMg eyBwcmVmZXJyZWRfbmFtZXMgPTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDtbXFwmcXVvdDteL2Rl di9tYXBwZXIvXFwmcXVvdDtdPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBpZ25vcmVfc3VzcGVu ZGVkX2RldmljZXM9MSB3cml0ZV9jYWNoZV9zdGF0ZT0wPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0 O2Rpc2FibGVfYWZ0ZXJfZXJyb3JfY291bnQ9Mzxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgZmls dGVyID0gWyBcXCZxdW90O2ElMVNhbkRpc2t8MzYwMDYwNWIwMDQzNmJkODAxNzFiMTA1YzIyNTM3 N2NlJVxcJnF1b3Q7LDxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgXFwmcXVvdDtyJS4qJVxcJnF1 b3Q7IF0gfSCgZ2xvYmFsIHsgoGxvY2tpbmdfdHlwZT0xIKBwcmlvcml0aXNlX3dyaXRlX2xvY2tz PTE8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IHdhaXRfZm9yX2xvY2tzPTEgfSCgYmFja3VwIHsg oHJldGFpbl9taW4gPSA1MCCgcmV0YWluX2RheXMgPSAwIH0gJnF1b3Q7PGJyPgomZ3Q7Jmd0OyZn dDsmZ3Q7Jmd0OyAtLW5vaGVhZGluZ3MgLS11bml0cyBiIC0tbm9zdWZmaXggLS1zZXBhcmF0b3Ig fCAtbzxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDs8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7dXVp ZCxuYW1lLGF0dHIsc2l6ZSxmcmVlLGV4dGVudF9zaXplLGV4dGVudF9jb3VudCxmcmVlX2NvdW50 LHRhZ3MsdmdfbTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDtkYV9zPGJyPgomZ3Q7Jmd0OyZndDsm 
Z3Q7Jmd0OyBpPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyB6ZSx2Z19tZGFfZnJlZSYjMzk7IChj d2QgTm9uZSk8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6OkRFQlVHOjoyMDEy LTA0LTE2PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjoyOSw4MTM6Omx2bTo6Mjg3OjpT dG9yYWdlLk1pc2MuZXhjQ21kOjooY21kKSBTVUNDRVNTOiZsdDtlcnImZ3Q7IKAgPTxicj4KJmd0 OyZndDsmZ3Q7Jmd0OyZndDsmIzM5OyCgTm88YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IHZvbHVt ZSBncm91cHMgZm91bmRcbiYjMzk7OyZsdDtyYyZndDsgoCA9IDA8YnI+CiZndDsmZ3Q7Jmd0OyZn dDsmZ3Q7IE1haW5UaHJlYWQ6OkRFQlVHOjoyMDEyLTA0LTE2PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7 Jmd0OyAwOTozNjoyOSw4MTQ6Omx2bTo6Mzc5OjpPcGVyYXRpb25NdXRleDo6KF9yZWxvYWR2Z3Mp IE9wZXJhdGlvbiAmIzM5O2x2bTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDtyZWxvYWQ8YnI+CiZn dDsmZ3Q7Jmd0OyZndDsmZ3Q7IG9wZXJhdGlvbiYjMzk7IHJlbGVhc2VkIHRoZSBvcGVyYXRpb24g bXV0ZXg8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6OkRFQlVHOjoyMDEyLTA0 LTE2PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjoyOSw4MTU6Omx2bTo6Mjg3OjpTdG9y YWdlLk1pc2MuZXhjQ21kOjooY21kKSAmIzM5Oy91c3IvYmluL3N1ZG8gLW48YnI+CiZndDsmZ3Q7 Jmd0OyZndDsmZ3Q7IC9zYmluL2x2bSBsdnMgLS1jb25maWcgJnF1b3Q7IGRldmljZXMgeyBwcmVm ZXJyZWRfbmFtZXMgPTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDtbXFwmcXVvdDteL2Rldi9tYXBw ZXIvXFwmcXVvdDtdPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBpZ25vcmVfc3VzcGVuZGVkX2Rl dmljZXM9MSB3cml0ZV9jYWNoZV9zdGF0ZT0wPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0O2Rpc2Fi bGVfYWZ0ZXJfZXJyb3JfY291bnQ9Mzxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgZmlsdGVyID0g WyBcXCZxdW90O2ElMVNhbkRpc2t8MzYwMDYwNWIwMDQzNmJkODAxNzFiMTA1YzIyNTM3N2NlJVxc JnF1b3Q7LDxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgXFwmcXVvdDtyJS4qJVxcJnF1b3Q7IF0g fSCgZ2xvYmFsIHsgoGxvY2tpbmdfdHlwZT0xIKBwcmlvcml0aXNlX3dyaXRlX2xvY2tzPTE8YnI+ CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IHdhaXRfZm9yX2xvY2tzPTEgfSCgYmFja3VwIHsgoHJldGFp bl9taW4gPSA1MCCgcmV0YWluX2RheXMgPSAwIH0gJnF1b3Q7PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7 Jmd0OyAtLW5vaGVhZGluZ3MgLS11bml0cyBiIC0tbm9zdWZmaXggLS1zZXBhcmF0b3IgfCAtbzxi cj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgdXVpZCxuYW1lLHZnX25hbWUsYXR0cixzaXplLHNlZ19z 
dGFydF9wZSxkZXZpY2VzLHRhZ3MmIzM5OyAoY3dkIE5vbmUpPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7 Jmd0OyBNYWluVGhyZWFkOjpERUJVRzo6MjAxMi0wNC0xNjxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZn dDsgMDk6MzY6MjksOTE2Ojpsdm06OjI4Nzo6U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KGNtZCkgU1VD Q0VTUzombHQ7ZXJyJmd0OyCgID08YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7JiMzOTsgoE5vPGJy PgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyB2b2x1bWUgZ3JvdXBzIGZvdW5kXG4mIzM5OzsmbHQ7cmMm Z3Q7IKAgPSAwPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBUaHJlYWQtMTE6OkRFQlVHOjoyMDEy LTA0LTE2PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjoyOSw5MTc6Om1pc2M6OjEwMTc6 OlNhbXBsaW5nTWV0aG9kOjooX19jYWxsX18pIFRyeWluZyB0byBlbnRlcjxicj4KJmd0OyZndDsm Z3Q7Jmd0OyZndDsgc2FtcGxpbmcgbWV0aG9kIChzdG9yYWdlLnNkYy5yZWZyZXNoU3RvcmFnZSk8 YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6OklORk86OjIwMTItMDQtMTY8YnI+ CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IDA5OjM2OjI5LDkxOTo6ZGlzcGF0Y2hlcjo6MTIxOjpTdG9y YWdlLkRpc3BhdGNoZXI6OihfX2luaXRfXyk8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7U3RhcnRp bmc8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IFN0b3JhZ2VEaXNwYXRjaGVyLi4uPGJyPgomZ3Q7 Jmd0OyZndDsmZ3Q7Jmd0OyBUaHJlYWQtMTE6OkRFQlVHOjoyMDEyLTA0LTE2PGJyPgomZ3Q7Jmd0 OyZndDsmZ3Q7Jmd0OyAwOTozNjoyOSw5MTk6Om1pc2M6OjEwMTk6OlNhbXBsaW5nTWV0aG9kOjoo X19jYWxsX18pIEdvdCBpbiB0bzxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDtzYW1wbGluZzxicj4K Jmd0OyZndDsmZ3Q7Jmd0OyZndDsgbWV0aG9kPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBUaHJl YWQtMTE6OkRFQlVHOjoyMDEyLTA0LTE2PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjoy OSw5MjE6Om1pc2M6OjEwMTc6OlNhbXBsaW5nTWV0aG9kOjooX19jYWxsX18pIFRyeWluZyB0byBl bnRlcjxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgc2FtcGxpbmcgbWV0aG9kIChzdG9yYWdlLmlz Y3NpLnJlc2Nhbik8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IFRocmVhZC0xMTo6REVCVUc6OjIw MTItMDQtMTY8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IDA5OjM2OjI5LDkyMTo6bWlzYzo6MTAx OTo6U2FtcGxpbmdNZXRob2Q6OihfX2NhbGxfXykgR290IGluIHRvPGJyPgomZ3Q7Jmd0OyZndDsm Z3Q7Jmd0O3NhbXBsaW5nPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBtZXRob2Q8YnI+CiZndDsm Z3Q7Jmd0OyZndDsmZ3Q7IFRocmVhZC0xMTo6REVCVUc6OjIwMTItMDQtMTY8YnI+CiZndDsmZ3Q7 
Jmd0OyZndDsmZ3Q7IDA5OjM2OjI5LDkyMTo6aXNjc2k6OjM4OTo6U3RvcmFnZS5NaXNjLmV4Y0Nt ZDo6KHJlc2Nhbik8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7JiMzOTsvdXNyL2Jpbi9zdWRvIC1u PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAvc2Jpbi9pc2NzaWFkbSAtbSBzZXNzaW9uIC1SJiMz OTsgKGN3ZCBOb25lKTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgTWFpblRocmVhZDo6REVCVUc6 OjIwMTItMDQtMTY8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IDA5OjM2OjI5LDkzMDo6dXRpbHM6 OjU5NTo6U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KGV4ZWNDbWQpPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7 Jmd0OyYjMzk7L3Vzci9iaW4vcGdyZXA8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IC14ZiBrc21k JiMzOTsgKGN3ZCBOb25lKTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgTWFpblRocmVhZDo6REVC VUc6OjIwMTItMDQtMTY8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IDA5OjM2OjMwLDEwODo6dXRp bHM6OjU5NTo6U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KGV4ZWNDbWQpPGJyPgomZ3Q7Jmd0OyZndDsm Z3Q7Jmd0O1NVQ0NFU1M6Jmx0O2VyciZndDsgoCA9PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAm IzM5OyYjMzk7OyZsdDtyYyZndDsgoCA9IDA8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IFRocmVh ZC0xMTo6REVCVUc6OjIwMTItMDQtMTY8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IDA5OjM2OjMw LDExNjo6aXNjc2k6OjM4OTo6U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KHJlc2NhbikgRkFJTEVEOiZs dDtlcnImZ3Q7PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyCgPTxicj4KJmd0OyZndDsmZ3Q7Jmd0 OyZndDsgJiMzOTtpc2NzaWFkbTogTm8gc2Vzc2lvbiBmb3VuZC5cbiYjMzk7OyZsdDtyYyZndDsg oCA9IDIxPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBUaHJlYWQtMTE6OkRFQlVHOjoyMDEyLTA0 LTE2PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjozMCwxMTY6Om1pc2M6OjEwMjc6OlNh bXBsaW5nTWV0aG9kOjooX19jYWxsX18pIFJldHVybmluZyBsYXN0PGJyPgomZ3Q7Jmd0OyZndDsm Z3Q7Jmd0O3Jlc3VsdDxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgVGhyZWFkLTExOjpERUJVRzo6 MjAxMi0wNC0xNjxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgMDk6MzY6MzAsMTE3OjpzdXBlcnZk c206OjgzOjpTdXBlclZkc21Qcm94eTo6KF9raWxsU3VwZXJ2ZHNtKSBDb3VsZDxicj4KJmd0OyZn dDsmZ3Q7Jmd0OyZndDtub3Q8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IGtpbGwgb2xkIFN1cGVy IFZkc20gW0Vycm5vIDJdIE5vIHN1Y2ggZmlsZSBvciBkaXJlY3Rvcnk6PGJyPgomZ3Q7Jmd0OyZn dDsmZ3Q7Jmd0OyAmIzM5Oy92YXIvcnVuL3Zkc20vc3Zkc20ucGlkJiMzOTs8YnI+CiZndDsmZ3Q7 
Jmd0OyZndDsmZ3Q7IFRocmVhZC0xMTo6REVCVUc6OjIwMTItMDQtMTY8YnI+CiZndDsmZ3Q7Jmd0 OyZndDsmZ3Q7IDA5OjM2OjMwLDExNzo6c3VwZXJ2ZHNtOjo3MTo6U3VwZXJWZHNtUHJveHk6Oihf bGF1bmNoU3VwZXJ2ZHNtKTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDtMYXVuY2hpbmc8YnI+CiZn dDsmZ3Q7Jmd0OyZndDsmZ3Q7IFN1cGVyIFZkc208YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IFRo cmVhZC0xMTo6REVCVUc6OjIwMTItMDQtMTY8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IDA5OjM2 OjMwLDExODo6c3VwZXJ2ZHNtOjo3NDo6U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KF9sYXVuY2hTdXBl cnZkc20pPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAmIzM5Oy91c3IvYmluL3N1ZG8gLW4gL3Vz ci9iaW4vcHl0aG9uIC91c3Ivc2hhcmUvdmRzbS9zdXBlcnZkc21TZXJ2ZXIucHljPGJyPgomZ3Q7 Jmd0OyZndDsmZ3Q7Jmd0OyBiZDRiM2FlNy0zZTUxLTRkNmItYjY4MS1kNWY2Y2I1YmFlMDcgMjk0 NSYjMzk7IChjd2QgTm9uZSk8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6OkRF QlVHOjoyMDEyLTA0LTE2PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjozMCwyNTQ6OnN1 cGVydmRzbVNlcnZlcjo6MTcwOjpTdXBlclZkc20uU2VydmVyOjoobWFpbikgTWFraW5nPGJyPgom Z3Q7Jmd0OyZndDsmZ3Q7Jmd0O3N1cmU8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IEkmIzM5O20g cm9vdDxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgTWFpblRocmVhZDo6REVCVUc6OjIwMTItMDQt MTY8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IDA5OjM2OjMwLDI1NTo6c3VwZXJ2ZHNtU2VydmVy OjoxNzQ6OlN1cGVyVmRzbS5TZXJ2ZXI6OihtYWluKSBQYXJzaW5nPGJyPgomZ3Q7Jmd0OyZndDsm Z3Q7Jmd0O2NtZDxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDsgYXJnczxicj4KJmd0OyZndDsmZ3Q7 Jmd0OyZndDsgTWFpblRocmVhZDo6REVCVUc6OjIwMTItMDQtMTY8YnI+CiZndDsmZ3Q7Jmd0OyZn dDsmZ3Q7IDA5OjM2OjMwLDI1NTo6c3VwZXJ2ZHNtU2VydmVyOjoxNzc6OlN1cGVyVmRzbS5TZXJ2 ZXI6OihtYWluKTxicj4KJmd0OyZndDsmZ3Q7Jmd0OyZndDtDcmVhdGluZyBQSUQ8YnI+CiZndDsm Z3Q7Jmd0OyZndDsmZ3Q7IGZpbGU8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6 OkRFQlVHOjoyMDEyLTA0LTE2PGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyAwOTozNjozMCwyNTU6 OnN1cGVydmRzbVNlcnZlcjo6MTgxOjpTdXBlclZkc20uU2VydmVyOjoobWFpbik8YnI+CiZndDsm Z3Q7Jmd0OyZndDsmZ3Q7Q2xlYW5pbmcgb2xkPGJyPgomZ3Q7Jmd0OyZndDsmZ3Q7Jmd0OyBzb2Nr ZXQ8YnI+CiZndDsmZ3Q7Jmd0OyZndDsmZ3Q7IE1haW5UaHJlYWQ6OkRFQlVHOjoyMDEyLTA0LTE2 
--===============4452646685188216127==--

From akula at thegeekhood.net Wed Apr 18 06:02:21 2012
Content-Type: multipart/mixed; boundary="===============2043866780810260203=="
MIME-Version: 1.0
From: Jason Lawer
To: users at ovirt.org
Subject: Re: [Users] Booting oVirt node image 2.3.0, no install option
Date: Wed, 18 Apr 2012 20:02:39 +1000
Message-ID:
In-Reply-To: CBB3250C.195DB%adam@vonnieda.org

--===============2043866780810260203==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

I think I just hit the exact same issue with a SanDisk Cruzer Blade 4GB USB
stick. I bought 4 of them to try and set up a test system (before we commit
to real hardware), and at least 2 of them failed with both the 2.2 and 2.3
oVirt ISOs, copied using dd from a Mac.

I copied one to an old 8GB "Strontium" USB stick I had lying around, and it
worked without issue. So it appears to be an issue with the stick.

I can provide more specific information on the stick if that is useful.

It wouldn't surprise me if it's due to the low-cost nature of the stick (it
cost $5 AUD), but I am curious, as it booted the kernel fine.
Jason

On 18/04/2012, at 4:48 AM, Adam vonNieda wrote:

>
> Turns out that there might be an issue with my thumb drive. I tried
> another, and it worked fine. Thanks very much for the responses, folks!
>
> -Adam
>
>
> On 4/17/12 10:11 AM, "Joey Boggs" wrote:
>
>> On 04/17/2012 10:51 AM, Adam vonNieda wrote:
>>> Thanks for the reply, Joey. I saw that too, and thought maybe my USB
>>> thumb drive was set to read-only, but it's not. This box doesn't have a
>>> DVD drive. I'll try a different USB drive, and if that doesn't work,
>>> I'll dig up an external DVD drive.
>>>
>>> Thanks again,
>>>
>>> -Adam
>>>
>>> Adam vonNieda
>>> Adam(a)vonNieda.org
>>>
>>> On Apr 17, 2012, at 9:07, Joey Boggs wrote:
>>>
>>>> On 04/17/2012 09:45 AM, Adam vonNieda wrote:
>>>>> Hi folks,
>>>>>
>>>>> Still hoping someone can give me a hand with this. I can't install
>>>>> ovirt-node 2.3.0 on a Dell C2100 server because it won't start the
>>>>> graphical interface. I booted up a standard F16 image this morning,
>>>>> and the graphical installer does start during that process. Logs are
>>>>> below.
>>>>> = >>>>> Thanks very much, >>>>> = >>>>> -Adam >>>>> = >>>>> = >>>>>> /tmp/ovirt.log >>>>>> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D >>>>>> = >>>>>> /sbin/restorecon set context >>>>>> /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 >>>>>> failed:'Read-only >>>>>> file system' >>>>>> /sbin/restorecon reset /var/cache/yum context >>>>>> = >>>>>> unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache= _t >>>>>> :s0 >>>>>> /sbin/restorecon reset /etc/sysctl.conf context >>>>>> = >>>>>> system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:= s0 >>>>>> /sbin/restorecon reset /boot-kdump context >>>>>> system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0 >>>>>> 2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - ::::live >>>>>> device:::: >>>>>> /dev/sdb >>>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat >>>>>> /proc/mounts|grep >>>>>> -q "none /live" >>>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - >>>>>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live >>>>>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - >>>>>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to >>>>>> mount_live() >>>>>> = >>>>>> /var/log/ovirt.log >>>>>> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D >>>>>> = >>>>>> Apr 16 09:35:53 Starting ovirt-early >>>>>> oVirt Node Hypervisor release 2.3.0 (1.0.fc16) >>>>>> Apr 16 09:35:53 Updating /etc/default/ovirt >>>>>> Apr 16 09:35:54 Updating OVIRT_BOOTIF to '' >>>>>> Apr 16 09:35:54 Updating OVIRT_INIT to '' >>>>>> Apr 16 09:35:54 Updating OVIRT_UPGRADE to '' >>>>>> Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1' >>>>>> Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset >>>>>> crashkernel=3D512M-2G:64M,2G-:128M elevator=3Ddeadline quiet rd_NO_L= VM >>>>>> rhgb >>>>>> rd.luks=3D0 rd.md=3D0 rd.dm=3D0' >>>>>> Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic' >>>>>> Apr 16 09:35:54 Updating OVIRT_INSTALL to '1' >>>>>> Apr 16 09:35:54 
Updating OVIRT_ISCSI_INSTALL to '1' >>>>>> Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw >>>>>> Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw >>>>>> Apr 16 09:36:09 Skip runtime mode configuration. >>>>>> Apr 16 09:36:09 Completed ovirt-early >>>>>> Apr 16 09:36:09 Starting ovirt-awake. >>>>>> Apr 16 09:36:09 Node is operating in unmanaged mode. >>>>>> Apr 16 09:36:09 Completed ovirt-awake: RETVAL=3D0 >>>>>> Apr 16 09:36:09 Starting ovirt >>>>>> Apr 16 09:36:09 Completed ovirt >>>>>> Apr 16 09:36:10 Starting ovirt-post >>>>>> Apr 16 09:36:20 Hardware virtualization detected >>>>>> Volume group "HostVG" not found >>>>>> Skipping volume group HostVG >>>>>> Restarting network (via systemctl): [ OK ] >>>>>> Apr 16 09:36:20 Starting ovirt-post >>>>>> Apr 16 09:36:21 Hardware virtualization detected >>>>>> Volume group "HostVG" not found >>>>>> Skipping volume group HostVG >>>>>> Restarting network (via systemctl): [ OK ] >>>>>> Apr 16 09:36:22 Starting ovirt-cim >>>>>> Apr 16 09:36:22 Completed ovirt-cim >>>>>> WARNING: persistent config storage not available >>>>>> = >>>>>> /var/log/vdsm/vdsm.log >>>>>> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D >>>>>> = >>>>>> MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I am >>>>>> the >>>>>> actual vdsm 4.9-0 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> = >>>>>> 09:36:23,873::resourceManager::376::ResourceManager::(registerNamesp= ac >>>>>> e) >>>>>> Registering namespace 'Storage' >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>>>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >>>>>> MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I am >>>>>> the >>>>>> actual vdsm 4.9-0 >>>>>> 
MainThread::DEBUG::2012-04-16 >>>>>> = >>>>>> 09:36:25,199::resourceManager::376::ResourceManager::(registerNamesp= ac >>>>>> e) >>>>>> Registering namespace 'Storage' >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>>>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>> SUCCESS: >>>>>> =3D ''; =3D 0 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) >>>>>> multipath >>>>>> Defaulting to False >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc, >>>>>> prefixName: multipath.conf, versions: 5 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions found: >>>>>> [0] >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipath) >>>>>> '/usr/bin/sudo -n /bin/cp /etc/multipath.conf /etc/multipath.conf.1' >>>>>> (cwd >>>>>> None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipath) >>>>>> FAILED: =3D 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-on= ly >>>>>> file >>>>>> system\nsudo: sorry, a password is required to run sudo\n'; = =3D 1 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,256::multipath::118::Storage.Misc.excCmd::(setupMultipath) >>>>>> '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf.1' (cwd None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,269::multipath::118::Storage.Misc.excCmd::(setupMultipath) >>>>>> FAILED: =3D 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-on= ly >>>>>> file >>>>>> system\nsudo: sorry, a password is required 
to run sudo\n'; = =3D 1 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,270::multipath::123::Storage.Misc.excCmd::(setupMultipath) >>>>>> '/usr/bin/sudo -n /bin/cp /tmp/tmpnPcvWi /etc/multipath.conf' (cwd >>>>>> None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,283::multipath::123::Storage.Misc.excCmd::(setupMultipath) >>>>>> SUCCESS: =3D ''; =3D 0 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,283::multipath::128::Storage.Misc.excCmd::(setupMultipath) >>>>>> '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf' (cwd None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,294::multipath::128::Storage.Misc.excCmd::(setupMultipath) >>>>>> FAILED: =3D 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-on= ly >>>>>> file >>>>>> system\nsudo: sorry, a password is required to run sudo\n'; = =3D 1 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,295::multipath::131::Storage.Misc.excCmd::(setupMultipath) >>>>>> '/usr/bin/sudo -n /sbin/multipath -F' (cwd None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,323::multipath::131::Storage.Misc.excCmd::(setupMultipath) >>>>>> FAILED: =3D ''; =3D 1 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:25,323::multipath::134::Storage.Misc.excCmd::(setupMultipath) >>>>>> '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:26,397::multipath::134::Storage.Misc.excCmd::(setupMultipath) >>>>>> SUCCESS: =3D ''; =3D 0 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> = >>>>>> 09:36:26,398::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy= pe >>>>>> ) >>>>>> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd >>>>>> None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> = >>>>>> 09:36:26,443::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy= pe >>>>>> ) >>>>>> SUCCESS: =3D ''; =3D 0 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:26,445::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm >>>>>> reload >>>>>> operation' got the 
operation mutex >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:26,447::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n >>>>>> /sbin/lvm pvs --config " devices { preferred_names =3D >>>>>> [\\"^/dev/mapper/\\"] >>>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>> disable_after_error_count=3D3 >>>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 >>>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D 0= } " >>>>>> --noheadings --units b --nosuffix --separator | -o >>>>>> = >>>>>> uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_= co >>>>>> unt, >>>>>> d >>>>>> ev_size' (cwd None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = =3D >>>>>> ''; >>>>>> =3D 0 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm >>>>>> reload >>>>>> operation' released the operation mutex >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm >>>>>> reload >>>>>> operation' got the operation mutex >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n >>>>>> /sbin/lvm vgs --config " devices { preferred_names =3D >>>>>> [\\"^/dev/mapper/\\"] >>>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>> disable_after_error_count=3D3 >>>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 >>>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D 0= } " >>>>>> --noheadings --units b --nosuffix --separator | -o >>>>>> = >>>>>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg= _m >>>>>> da_s >>>>>> i >>>>>> ze,vg_mda_free' (cwd None) >>>>>> MainThread::INFO::2012-04-16 
09:36:29,307::vdsm::71::vds::(run) I am >>>>>> the >>>>>> actual vdsm 4.9-0 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> = >>>>>> 09:36:29,514::resourceManager::376::ResourceManager::(registerNamesp= ac >>>>>> e) >>>>>> Registering namespace 'Storage' >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:29,515::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>>>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:29,551::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:29,564::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>> SUCCESS: >>>>>> =3D ''; =3D 0 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:29,565::multipath::101::Storage.Multipath::(isEnabled) Current >>>>>> revision of multipath.conf detected, preserving >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> = >>>>>> 09:36:29,565::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy= pe >>>>>> ) >>>>>> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd >>>>>> None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> = >>>>>> 09:36:29,606::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy= pe >>>>>> ) >>>>>> SUCCESS: =3D ''; =3D 0 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:29,606::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm >>>>>> reload >>>>>> operation' got the operation mutex >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:29,608::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n >>>>>> /sbin/lvm pvs --config " devices { preferred_names =3D >>>>>> [\\"^/dev/mapper/\\"] >>>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>> disable_after_error_count=3D3 >>>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 >>>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D 0= } " >>>>>> 
--noheadings --units b --nosuffix --separator | -o >>>>>> = >>>>>> uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_= co >>>>>> unt, >>>>>> d >>>>>> ev_size' (cwd None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:29,714::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = =3D >>>>>> ''; >>>>>> =3D 0 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:29,715::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm >>>>>> reload >>>>>> operation' released the operation mutex >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:29,716::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm >>>>>> reload >>>>>> operation' got the operation mutex >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:29,716::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n >>>>>> /sbin/lvm vgs --config " devices { preferred_names =3D >>>>>> [\\"^/dev/mapper/\\"] >>>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>> disable_after_error_count=3D3 >>>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 >>>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D 0= } " >>>>>> --noheadings --units b --nosuffix --separator | -o >>>>>> = >>>>>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg= _m >>>>>> da_s >>>>>> i >>>>>> ze,vg_mda_free' (cwd None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:29,813::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = =3D >>>>>> ' No >>>>>> volume groups found\n'; =3D 0 >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm >>>>>> reload >>>>>> operation' released the operation mutex >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n >>>>>> /sbin/lvm lvs --config " devices { preferred_names =3D >>>>>> [\\"^/dev/mapper/\\"] >>>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>> 
disable_after_error_count=3D3 >>>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 >>>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D 0= } " >>>>>> --noheadings --units b --nosuffix --separator | -o >>>>>> uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: = =3D >>>>>> ' No >>>>>> volume groups found\n'; =3D 0 >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to enter >>>>>> sampling method (storage.sdc.refreshStorage) >>>>>> MainThread::INFO::2012-04-16 >>>>>> 09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) >>>>>> Starting >>>>>> StorageDispatcher... >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to >>>>>> sampling >>>>>> method >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to enter >>>>>> sampling method (storage.iscsi.rescan) >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to >>>>>> sampling >>>>>> method >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) >>>>>> '/usr/bin/sudo -n >>>>>> /sbin/iscsiadm -m session -R' (cwd None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) >>>>>> '/usr/bin/pgrep >>>>>> -xf ksmd' (cwd None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) >>>>>> SUCCESS: =3D >>>>>> ''; =3D 0 >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) FAILED: >>>>>> =3D >>>>>> 'iscsiadm: No session found.\n'; =3D 21 >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 
09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning last >>>>>> result >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) Could >>>>>> not >>>>>> kill old Super Vdsm [Errno 2] No such file or directory: >>>>>> '/var/run/vdsm/svdsm.pid' >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) >>>>>> Launching >>>>>> Super Vdsm >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) >>>>>> '/usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.pyc >>>>>> bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None) >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) Making >>>>>> sure >>>>>> I'm root >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) Parsing >>>>>> cmd >>>>>> args >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) >>>>>> Creating PID >>>>>> file >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) >>>>>> Cleaning old >>>>>> socket >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) Setting >>>>>> up >>>>>> keep alive thread >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) Creating >>>>>> remote object manager >>>>>> MainThread::DEBUG::2012-04-16 >>>>>> 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) Started >>>>>> serving super vdsm object >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to >>>>>> connect >>>>>> to Super Vdsm >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected to >>>>>> Super >>>>>> Vdsm >>>>>> Thread-11::DEBUG::2012-04-16 
>>>>>> 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) >>>>>> '/usr/bin/sudo >>>>>> -n /sbin/multipath' (cwd None) >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) >>>>>> SUCCESS: >>>>>> =3D ''; =3D 0 >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) >>>>>> Operation 'lvm >>>>>> invalidate operation' got the operation mutex >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) >>>>>> Operation 'lvm >>>>>> invalidate operation' released the operation mutex >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) >>>>>> Operation 'lvm >>>>>> invalidate operation' got the operation mutex >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) >>>>>> Operation 'lvm >>>>>> invalidate operation' released the operation mutex >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) >>>>>> Operation 'lvm >>>>>> invalidate operation' got the operation mutex >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) >>>>>> Operation 'lvm >>>>>> invalidate operation' released the operation mutex >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning last >>>>>> result >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) >>>>>> Started >>>>>> cleaning storage repository at '/rhev/data-center' >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) White >>>>>> list: ['/rhev/data-center/hsm-tasks', >>>>>> '/rhev/data-center/hsm-tasks/*', >>>>>> '/rhev/data-center/mnt'] >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) Mount 
>>>>>> list: ['/rhev/data-center'] >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) >>>>>> Cleaning >>>>>> leftovers >>>>>> Thread-11::DEBUG::2012-04-16 >>>>>> 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) >>>>>> Finished >>>>>> cleaning storage repository at '/rhev/data-center' >>>>>> On 4/16/12 8:38 AM, "Mike Burns" wrote: >>>>>>> On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote: >>>>>>>> Hi folks, >>>>>>>> I'm trying to install oVirt node v2.3.0 on a Dell C2100 >>>>>>>> server. I >>>>>>>> can boot up just fine, but the two menu options I see are "Start >>>>>>>> oVirt >>>>>>>> node", and "Troubleshooting". When I choose "Start oVirt node", it >>>>>>>> does just that, and I am soon after given a console login prompt. >>>>>>>> I've >>>>>>>> checked the docs, and I don't see what I'm supposed to do next, as >>>>>>>> in >>>>>>>> a password etc. Am I missing something? >>>>>>> Hi Adam, >>>>>>> Something is breaking in the boot process. You should be getting a >>>>>>> TUI >>>>>>> screen that will let you configure and install ovirt-node. >>>>>>> I just added an entry on the Node Troubleshooting wiki page[1] for >>>>>>> you to >>>>>>> follow. 
>>>>>>> Mike >>>>>>> [1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems >>>>>>>> Thanks, >>>>>>>> -Adam >>>>>>>> _______________________________________________ >>>>>>>> Users mailing list >>>>>>>> Users(a)ovirt.org >>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users(a)ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >>>> This is definitely the cause of the installer failing >>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat >>>> /proc/mounts|grep -q "none /live" >>>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to >>>> mount_live() >>>> What kind of media are you installing from: usb/cd/remote console? >>> _______________________________________________ >>> Users mailing list >>> Users(a)ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >> I did go back and take a look at mount_live and made sure it contains a >> specific patch to handle usb drives properly. If you can get back to a >> shell prompt, run blkid and capture the output. 
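[Editor's note: the blkid capture Joey asks for can be trimmed to just the stick's entry, so less has to be typed up from the console. A minimal sketch; the device name /dev/sdb is taken from the ovirt.log excerpt in this thread, and the sample blkid output below is an illustrative stand-in, not real output from this machine:]

```shell
# Keep only the USB stick's line from blkid-style output. /dev/sdb is the
# live device named in ovirt.log; adjust to match your stick. The sample
# text here stands in for what `blkid` would print on the node.
blkid_out='/dev/sda1: UUID="0a1b2c3d-0000-4000-8000-123456789abc" TYPE="ext4"
/dev/sdb: LABEL="LIVE" TYPE="vfat"'
usb_line=$(printf '%s\n' "$blkid_out" | grep '^/dev/sdb')
echo "$usb_line"
```

[On the node itself this would just be `blkid | grep '^/dev/sdb'` run from the shell prompt.]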
> _______________________________________________ > Users mailing list > Users(a)ovirt.org > http://lists.ovirt.org/mailman/listinfo/users --===============2043866780810260203==-- From adam at vonnieda.org Wed Apr 18 08:40:33 2012 Content-Type: multipart/mixed; boundary="===============7837393033395956942==" MIME-Version: 1.0 From: Adam vonNieda To: users at ovirt.org Subject: Re: [Users] Booting oVirt node image 2.3.0, no install option Date: Wed, 18 Apr 2012 07:40:21 -0500 Message-ID: In-Reply-To: A452C671-FF2C-44ED-897D-6921B4119D9E@thegeekhood.net --===============7837393033395956942== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Yep, that's exactly the same issue. Mine was a 16GB SanDisk Cruzer. When I switched to a no-name older 4GB stick, it worked fine. I set mine up exactly as you did as well, dd from a Mac. Mine booted the kernel just fine as well. I tried booting up setting the "rootpw=" as well, but that didn't work for me, so I was unable to collect any information from the "blkid" command. I tried it three times, and I know I was doing it correctly. Joey's comments below.. -Adam I did go back and take a look at mount_live and made sure it contains a specific patch to handle usb drives properly. If you can get back to a shell prompt, run blkid and capture the output. If it's way too much to type then just the usb drive output should be ok. http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems On 4/18/12 5:02 AM, "Jason Lawer" wrote: >I think I just hit the exact same issue with a SanDisk Cruzer Blade 4GB >USB stick. I bought 4 of them to try and set up a test system (before we >commit to real hardware) and at least 2 of them failed with both 2.2 and >2.3 ovirt isos being copied using dd from a Mac. > >I copied to an old 8GB "Strontium" USB stick I had lying around and it worked >without issue. So it appears to be an issue with the stick. 
> >I can provide more specific information on the stick or such if that is >useful. > >It wouldn't surprise me if it's due to the low-cost nature of the stick >(cost $5 AUD) but I am curious as it booted the kernel fine. > >Jason >On 18/04/2012, at 4:48 AM, Adam vonNieda wrote: > >> Turns out that there might be an issue with my thumb drive. I tried >> another, and it worked fine. Thanks very much for the responses folks! >> -Adam >> On 4/17/12 10:11 AM, "Joey Boggs" wrote: >>> On 04/17/2012 10:51 AM, Adam vonNieda wrote: >>>> Thanks for the reply Joey. I saw that too, and thought maybe my USB >>>> thumb drive was set to read only, but it's not. This box doesn't have >>>> a >>>> DVD drive, I'll try a different USB drive, and if that doesn't work, >>>> I'll dig up an external DVD drive. >>>> Thanks again, >>>> -Adam >>>> Adam vonNieda >>>> Adam(a)vonNieda.org >>>> On Apr 17, 2012, at 9:07, Joey Boggs wrote: >>>>> On 04/17/2012 09:45 AM, Adam vonNieda wrote: >>>>>> Hi folks, >>>>>> Still hoping someone can give me a hand with this. I can't install >>>>>> ovirt-node 2.3.0 on a Dell C2100 server because it won't start >>>>>> the >>>>>> graphical interface. I booted up a standard F16 image this morning, >>>>>> and >>>>>> the graphical installer does start during that process. Logs are >>>>>> below. 
>>>>>> Thanks very much, >>>>>> -Adam >>>>>>> /tmp/ovirt.log >>>>>>> ============== >>>>>>> /sbin/restorecon set context >>>>>>> /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 >>>>>>> failed:'Read-only >>>>>>> file system' >>>>>>> /sbin/restorecon reset /var/cache/yum context >>>>>>> unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t:s0 >>>>>>> /sbin/restorecon reset /etc/sysctl.conf context >>>>>>> system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0 >>>>>>> /sbin/restorecon reset /boot-kdump context >>>>>>> system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0 >>>>>>> 2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - ::::live >>>>>>> device:::: >>>>>>> /dev/sdb >>>>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat >>>>>>> /proc/mounts|grep >>>>>>> -q "none /live" >>>>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - >>>>>>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live >>>>>>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - >>>>>>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to >>>>>>> mount_live() >>>>>>> /var/log/ovirt.log >>>>>>> ================== >>>>>>> Apr 16 09:35:53 Starting ovirt-early >>>>>>> oVirt Node Hypervisor release 2.3.0 (1.0.fc16) >>>>>>> Apr 16 09:35:53 Updating /etc/default/ovirt >>>>>>> Apr 16 09:35:54 Updating OVIRT_BOOTIF to '' >>>>>>> Apr 16 09:35:54 Updating OVIRT_INIT to '' >>>>>>> Apr 16 09:35:54 Updating OVIRT_UPGRADE to '' >>>>>>> Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1' >>>>>>> Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset >>>>>>> crashkernel=512M-2G:64M,2G-:128M elevator=deadline quiet rd_NO_LVM >>>>>>> rhgb >>>>>>> rd.luks=0 rd.md=0 rd.dm=0' >>>>>>> Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic' 
>>>>>>> Apr 16 09:35:54 Updating OVIRT_INSTALL to '1' >>>>>>> Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1' >>>>>>> Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw >>>>>>> Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw >>>>>>> Apr 16 09:36:09 Skip runtime mode configuration. >>>>>>> Apr 16 09:36:09 Completed ovirt-early >>>>>>> Apr 16 09:36:09 Starting ovirt-awake. >>>>>>> Apr 16 09:36:09 Node is operating in unmanaged mode. >>>>>>> Apr 16 09:36:09 Completed ovirt-awake: RETVAL=0 >>>>>>> Apr 16 09:36:09 Starting ovirt >>>>>>> Apr 16 09:36:09 Completed ovirt >>>>>>> Apr 16 09:36:10 Starting ovirt-post >>>>>>> Apr 16 09:36:20 Hardware virtualization detected >>>>>>> Volume group "HostVG" not found >>>>>>> Skipping volume group HostVG >>>>>>> Restarting network (via systemctl): [ OK ] >>>>>>> Apr 16 09:36:20 Starting ovirt-post >>>>>>> Apr 16 09:36:21 Hardware virtualization detected >>>>>>> Volume group "HostVG" not found >>>>>>> Skipping volume group HostVG >>>>>>> Restarting network (via systemctl): [ OK ] >>>>>>> Apr 16 09:36:22 Starting ovirt-cim >>>>>>> Apr 16 09:36:22 Completed ovirt-cim >>>>>>> WARNING: persistent config storage not available >>>>>>> /var/log/vdsm/vdsm.log >>>>>>> ====================== >>>>>>> MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I >>>>>>> am >>>>>>> the >>>>>>> actual vdsm 4.9-0 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:23,873::resourceManager::376::ResourceManager::(registerNamespace) >>>>>>> Registering namespace 'Storage' >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>>>>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) 
>>>>>>> MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I >>>>>>>am >>>>>>> the >>>>>>> actual vdsm 4.9-0 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> = >>>>>>> = >>>>>>>09:36:25,199::resourceManager::376::ResourceManager::(registerNamesp >>>>>>>ac >>>>>>> e) >>>>>>> Registering namespace 'Storage' >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>>>>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>>> SUCCESS: >>>>>>> =3D ''; =3D 0 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) >>>>>>> multipath >>>>>>> Defaulting to False >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc, >>>>>>> prefixName: multipath.conf, versions: 5 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions >>>>>>>found: >>>>>>> [0] >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipath) >>>>>>> '/usr/bin/sudo -n /bin/cp /etc/multipath.conf >>>>>>>/etc/multipath.conf.1' >>>>>>> (cwd >>>>>>> None) >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipath) >>>>>>> FAILED: =3D 'sudo: unable to mkdir /var/db/sudo/vdsm: >>>>>>>Read-only >>>>>>> file >>>>>>> system\nsudo: sorry, a password is required to run sudo\n'; >>>>>>>=3D 1 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,256::multipath::118::Storage.Misc.excCmd::(setupMultipath) >>>>>>> '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf.1' (cwd >>>>>>>None) >>>>>>> 
MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,269::multipath::118::Storage.Misc.excCmd::(setupMultipath) >>>>>>> FAILED: =3D 'sudo: unable to mkdir /var/db/sudo/vdsm: >>>>>>>Read-only >>>>>>> file >>>>>>> system\nsudo: sorry, a password is required to run sudo\n'; >>>>>>>=3D 1 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,270::multipath::123::Storage.Misc.excCmd::(setupMultipath) >>>>>>> '/usr/bin/sudo -n /bin/cp /tmp/tmpnPcvWi /etc/multipath.conf' (cwd >>>>>>> None) >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,283::multipath::123::Storage.Misc.excCmd::(setupMultipath) >>>>>>> SUCCESS: =3D ''; =3D 0 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,283::multipath::128::Storage.Misc.excCmd::(setupMultipath) >>>>>>> '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf' (cwd None) >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,294::multipath::128::Storage.Misc.excCmd::(setupMultipath) >>>>>>> FAILED: =3D 'sudo: unable to mkdir /var/db/sudo/vdsm: >>>>>>>Read-only >>>>>>> file >>>>>>> system\nsudo: sorry, a password is required to run sudo\n'; >>>>>>>=3D 1 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,295::multipath::131::Storage.Misc.excCmd::(setupMultipath) >>>>>>> '/usr/bin/sudo -n /sbin/multipath -F' (cwd None) >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,323::multipath::131::Storage.Misc.excCmd::(setupMultipath) >>>>>>> FAILED: =3D ''; =3D 1 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:25,323::multipath::134::Storage.Misc.excCmd::(setupMultipath) >>>>>>> '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None) >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:26,397::multipath::134::Storage.Misc.excCmd::(setupMultipath) >>>>>>> SUCCESS: =3D ''; =3D 0 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> = >>>>>>> = >>>>>>>09:36:26,398::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy >>>>>>>pe >>>>>>> ) >>>>>>> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd >>>>>>> None) >>>>>>> 
MainThread::DEBUG::2012-04-16 >>>>>>> = >>>>>>> = >>>>>>>09:36:26,443::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy >>>>>>>pe >>>>>>> ) >>>>>>> SUCCESS: =3D ''; =3D 0 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:26,445::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm >>>>>>> reload >>>>>>> operation' got the operation mutex >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:26,447::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>>>>>>-n >>>>>>> /sbin/lvm pvs --config " devices { preferred_names =3D >>>>>>> [\\"^/dev/mapper/\\"] >>>>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>>> disable_after_error_count=3D3 >>>>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 >>>>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D = 0 } " >>>>>>> --noheadings --units b --nosuffix --separator | -o >>>>>>> = >>>>>>> = >>>>>>>uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_ >>>>>>>co >>>>>>> unt, >>>>>>> d >>>>>>> ev_size' (cwd None) >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: >>>>>>>=3D >>>>>>> ''; >>>>>>> =3D 0 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm >>>>>>> reload >>>>>>> operation' released the operation mutex >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm >>>>>>> reload >>>>>>> operation' got the operation mutex >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>>>>>>-n >>>>>>> /sbin/lvm vgs --config " devices { preferred_names =3D >>>>>>> [\\"^/dev/mapper/\\"] >>>>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>>> disable_after_error_count=3D3 >>>>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>>>> 
\\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 >>>>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D = 0 } " >>>>>>> --noheadings --units b --nosuffix --separator | -o >>>>>>> = >>>>>>> = >>>>>>>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg >>>>>>>_m >>>>>>> da_s >>>>>>> i >>>>>>> ze,vg_mda_free' (cwd None) >>>>>>> MainThread::INFO::2012-04-16 09:36:29,307::vdsm::71::vds::(run) I >>>>>>>am >>>>>>> the >>>>>>> actual vdsm 4.9-0 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> = >>>>>>> = >>>>>>>09:36:29,514::resourceManager::376::ResourceManager::(registerNamesp >>>>>>>ac >>>>>>> e) >>>>>>> Registering namespace 'Storage' >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:29,515::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>>>>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:29,551::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:29,564::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>>> SUCCESS: >>>>>>> =3D ''; =3D 0 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:29,565::multipath::101::Storage.Multipath::(isEnabled) >>>>>>>Current >>>>>>> revision of multipath.conf detected, preserving >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> = >>>>>>> = >>>>>>>09:36:29,565::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy >>>>>>>pe >>>>>>> ) >>>>>>> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd >>>>>>> None) >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> = >>>>>>> = >>>>>>>09:36:29,606::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingTy >>>>>>>pe >>>>>>> ) >>>>>>> SUCCESS: =3D ''; =3D 0 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:29,606::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm >>>>>>> reload >>>>>>> operation' got the operation mutex >>>>>>> MainThread::DEBUG::2012-04-16 
>>>>>>> 09:36:29,608::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>>>>>>-n >>>>>>> /sbin/lvm pvs --config " devices { preferred_names =3D >>>>>>> [\\"^/dev/mapper/\\"] >>>>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>>> disable_after_error_count=3D3 >>>>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 >>>>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D = 0 } " >>>>>>> --noheadings --units b --nosuffix --separator | -o >>>>>>> = >>>>>>> = >>>>>>>uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_ >>>>>>>co >>>>>>> unt, >>>>>>> d >>>>>>> ev_size' (cwd None) >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:29,714::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: >>>>>>>=3D >>>>>>> ''; >>>>>>> =3D 0 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:29,715::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm >>>>>>> reload >>>>>>> operation' released the operation mutex >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:29,716::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm >>>>>>> reload >>>>>>> operation' got the operation mutex >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:29,716::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>>>>>>-n >>>>>>> /sbin/lvm vgs --config " devices { preferred_names =3D >>>>>>> [\\"^/dev/mapper/\\"] >>>>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>>> disable_after_error_count=3D3 >>>>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 >>>>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D = 0 } " >>>>>>> --noheadings --units b --nosuffix --separator | -o >>>>>>> = >>>>>>> = >>>>>>>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg >>>>>>>_m >>>>>>> da_s >>>>>>> i >>>>>>> ze,vg_mda_free' (cwd None) >>>>>>> 
MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:29,813::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: >>>>>>>=3D >>>>>>> ' No >>>>>>> volume groups found\n'; =3D 0 >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm >>>>>>> reload >>>>>>> operation' released the operation mutex >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>>>>>>-n >>>>>>> /sbin/lvm lvs --config " devices { preferred_names =3D >>>>>>> [\\"^/dev/mapper/\\"] >>>>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>>> disable_after_error_count=3D3 >>>>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>>>> \\"r%.*%\\" ] } global { locking_type=3D1 prioritise_write_locks= =3D1 >>>>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days =3D = 0 } " >>>>>>> --noheadings --units b --nosuffix --separator | -o >>>>>>> uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None) >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: >>>>>>>=3D >>>>>>> ' No >>>>>>> volume groups found\n'; =3D 0 >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to >>>>>>>enter >>>>>>> sampling method (storage.sdc.refreshStorage) >>>>>>> MainThread::INFO::2012-04-16 >>>>>>> 09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) >>>>>>> Starting >>>>>>> StorageDispatcher... 
>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to >>>>>>> sampling >>>>>>> method >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to >>>>>>>enter >>>>>>> sampling method (storage.iscsi.rescan) >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to >>>>>>> sampling >>>>>>> method >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) >>>>>>> '/usr/bin/sudo -n >>>>>>> /sbin/iscsiadm -m session -R' (cwd None) >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) >>>>>>> '/usr/bin/pgrep >>>>>>> -xf ksmd' (cwd None) >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) >>>>>>> SUCCESS: =3D >>>>>>> ''; =3D 0 >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) >>>>>>>FAILED: >>>>>>> =3D >>>>>>> 'iscsiadm: No session found.\n'; =3D 21 >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning last >>>>>>> result >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) Could >>>>>>> not >>>>>>> kill old Super Vdsm [Errno 2] No such file or directory: >>>>>>> '/var/run/vdsm/svdsm.pid' >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) >>>>>>> Launching >>>>>>> Super Vdsm >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> = >>>>>>>09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) >>>>>>> '/usr/bin/sudo -n /usr/bin/python >>>>>>>/usr/share/vdsm/supervdsmServer.pyc >>>>>>> bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None) >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) Making >>>>>>> sure 
>>>>>>> I'm root >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) >>>>>>>Parsing >>>>>>> cmd >>>>>>> args >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) >>>>>>> Creating PID >>>>>>> file >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) >>>>>>> Cleaning old >>>>>>> socket >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) >>>>>>>Setting >>>>>>> up >>>>>>> keep alive thread >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) >>>>>>>Creating >>>>>>> remote object manager >>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>> 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) >>>>>>>Started >>>>>>> serving super vdsm object >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to >>>>>>> connect >>>>>>> to Super Vdsm >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected >>>>>>>to >>>>>>> Super >>>>>>> Vdsm >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) >>>>>>> '/usr/bin/sudo >>>>>>> -n /sbin/multipath' (cwd None) >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) >>>>>>> SUCCESS: >>>>>>> =3D ''; =3D 0 >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) >>>>>>> Operation 'lvm >>>>>>> invalidate operation' got the operation mutex >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) >>>>>>> Operation 'lvm >>>>>>> invalidate operation' released the operation mutex >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) 
>>>>>>> Operation 'lvm >>>>>>> invalidate operation' got the operation mutex >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) >>>>>>> Operation 'lvm >>>>>>> invalidate operation' released the operation mutex >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) >>>>>>> Operation 'lvm >>>>>>> invalidate operation' got the operation mutex >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) >>>>>>> Operation 'lvm >>>>>>> invalidate operation' released the operation mutex >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning last >>>>>>> result >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) >>>>>>> Started >>>>>>> cleaning storage repository at '/rhev/data-center' >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) >>>>>>>White >>>>>>> list: ['/rhev/data-center/hsm-tasks', >>>>>>> '/rhev/data-center/hsm-tasks/*', >>>>>>> '/rhev/data-center/mnt'] >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) >>>>>>>Mount >>>>>>> list: ['/rhev/data-center'] >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) >>>>>>> Cleaning >>>>>>> leftovers >>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>> 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) >>>>>>> Finished >>>>>>> cleaning storage repository at '/rhev/data-center' >>>>>>> = >>>>>>> = >>>>>>> = >>>>>>> = >>>>>>> = >>>>>>> = >>>>>>> = >>>>>>> = >>>>>>> = >>>>>>> On 4/16/12 8:38 AM, "Mike Burns" wrote: >>>>>>> = >>>>>>>> On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote: >>>>>>>>> Hi folks, >>>>>>>>> = >>>>>>>>> = >>>>>>>>> I'm trying to install oVirt node v2.3.0 on A Dell C2100 
>>>>>>>>> server. I >>>>>>>>> can boot up just fine, but the two menu options I see are "Start >>>>>>>>> oVirt >>>>>>>>> node", and "Troubleshooting". When I choose "Start oVirt node", it >>>>>>>>> does just that, and I am soon after given a console login prompt. >>>>>>>>> I've >>>>>>>>> checked the docs, and I don't see what I'm supposed to do next, as >>>>>>>>> in >>>>>>>>> a password etc. Am I missing something? >>>>>>>> Hi Adam, >>>>>>>> Something is breaking in the boot process. You should be getting a >>>>>>>> TUI >>>>>>>> screen that will let you configure and install ovirt-node. >>>>>>>> I just added an entry on the Node Troubleshooting wiki page[1] for >>>>>>>> you to >>>>>>>> follow. >>>>>>>> Mike >>>>>>>> [1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems >>>>>>>>> Thanks, >>>>>>>>> -Adam >>>>>>>>> _______________________________________________ >>>>>>>>> Users mailing list >>>>>>>>> Users(a)ovirt.org >>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>> _______________________________________________ >>>>>> Users mailing list >>>>>> Users(a)ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> This is definitely the cause of the installer failing >>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat >>>>> /proc/mounts|grep -q "none /live" >>>>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to >>>>> mount_live() >>>>> What kind of media are you installing from: usb/cd/remote console? >>>> _______________________________________________ >>>> Users mailing list >>>> Users(a)ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>> I did go back and take a look at mount_live and made sure it contains a >>> specific patch to handle usb drives properly. If you can get back to a >>> shell prompt, run blkid and capture the output. 
If it's way too much to >>> type then just the usb drive output should be ok. >> _______________________________________________ >> Users mailing list >> Users(a)ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > >_______________________________________________ >Users mailing list >Users(a)ovirt.org >http://lists.ovirt.org/mailman/listinfo/users --===============7837393033395956942==-- From jboggs at redhat.com Wed Apr 18 09:19:10 2012 Content-Type: multipart/mixed; boundary="===============8056007549429923409==" MIME-Version: 1.0 From: Joey Boggs To: users at ovirt.org Subject: Re: [Users] Booting oVirt node image 2.3.0, no install option Date: Wed, 18 Apr 2012 09:19:08 -0400 Message-ID: <4F8EBF4C.8030706@redhat.com> In-Reply-To: CBB41EC4.198B6%adam@vonnieda.org --===============8056007549429923409== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable On 04/18/2012 08:40 AM, Adam vonNieda wrote: > Yep, that's exactly the same issue. Mine was a 16GB SanDisk Cruzer. > When I switched to a no-name older 4GB stick, it worked fine. I set mine > up exactly as you did as well, dd from a Mac. Mine booted the kernel just > fine as well. I tried booting up setting the "rootpw=" as well, but > that didn't work for me, so I was unable to collect any information from > the "blkid" command. I tried it three times, and I know I was doing it > correctly. Joey's comments below.. > > -Adam > > > > I did go back and take a look at mount_live and made sure it contains a > specific patch to handle usb drives properly. If you can get back to a > shell prompt, run blkid and capture the output. If it's way too much to > type then just the usb drive output should be ok. > > > > http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems > > > > > > On 4/18/12 5:02 AM, "Jason Lawer" wrote: > >> I think I just hit the exact same issue with a SanDisk Cruzer Blade 4GB >> USB stick. 
I bought 4 of them to try and set up a test system (before we >> commit to real hardware) and at least 2 of them failed with both 2.2 and >> 2.3 ovirt isos being copied using dd from a Mac. >> >> I copied to an old 8GB "Strontium" USB stick I had lying around and it worked >> without issue. So it appears to be an issue with the stick. >> >> I can provide more specific information on the stick or such if that is >> useful. >> >> It wouldn't surprise me if it's due to the low-cost nature of the stick >> (cost $5 AUD) but I am curious as it booted the kernel fine. >> >> Jason >> On 18/04/2012, at 4:48 AM, Adam vonNieda wrote: >> >>> Turns out that there might be an issue with my thumb drive. I tried >>> another, and it worked fine. Thanks very much for the responses folks! >>> >>> -Adam >>> >>> >>> On 4/17/12 10:11 AM, "Joey Boggs" wrote: >>> >>>> On 04/17/2012 10:51 AM, Adam vonNieda wrote: >>>>> Thanks for the reply Joey. I saw that too, and thought maybe my USB >>>>> thumb drive was set to read only, but it's not. This box doesn't have >>>>> a >>>>> DVD drive, I'll try a different USB drive, and if that doesn't work, >>>>> I'll dig up an external DVD drive. >>>>> >>>>> Thanks again, >>>>> >>>>> -Adam >>>>> >>>>> Adam vonNieda >>>>> Adam(a)vonNieda.org >>>>> >>>>> On Apr 17, 2012, at 9:07, Joey Boggs wrote: >>>>> >>>>>> On 04/17/2012 09:45 AM, Adam vonNieda wrote: >>>>>>> Hi folks, >>>>>>> >>>>>>> Still hoping someone can give me a hand with this. I can't >>>>>>> install >>>>>>> ovirt-node 2.3.0 on a Dell C2100 server because it won't start >>>>>>> the >>>>>>> graphical interface. I booted up a standard F16 image this morning, >>>>>>> and >>>>>>> the graphical installer does start during that process. Logs are >>>>>>> below. 
>>>>>>>
>>>>>>> Thanks very much,
>>>>>>>
>>>>>>> -Adam
>>>>>>>
>>>>>>>> /tmp/ovirt.log
>>>>>>>> ==============
>>>>>>>>
>>>>>>>> /sbin/restorecon set context /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 failed:'Read-only file system'
>>>>>>>> /sbin/restorecon reset /var/cache/yum context unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t:s0
>>>>>>>> /sbin/restorecon reset /etc/sysctl.conf context system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0
>>>>>>>> /sbin/restorecon reset /boot-kdump context system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0
>>>>>>>> 2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - ::::live device:::: /dev/sdb
>>>>>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
>>>>>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions -
>>>>>>>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live
>>>>>>>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions -
>>>>>>>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
>>>>>>>>
>>>>>>>> /var/log/ovirt.log
>>>>>>>> ==================
>>>>>>>>
>>>>>>>> Apr 16 09:35:53 Starting ovirt-early
>>>>>>>> oVirt Node Hypervisor release 2.3.0 (1.0.fc16)
>>>>>>>> Apr 16 09:35:53 Updating /etc/default/ovirt
>>>>>>>> Apr 16 09:35:54 Updating OVIRT_BOOTIF to ''
>>>>>>>> Apr 16 09:35:54 Updating OVIRT_INIT to ''
>>>>>>>> Apr 16 09:35:54 Updating OVIRT_UPGRADE to ''
>>>>>>>> Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1'
>>>>>>>> Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset crashkernel=512M-2G:64M,2G-:128M elevator=deadline quiet rd_NO_LVM rhgb rd.luks=0 rd.md=0 rd.dm=0'
>>>>>>>> Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic'
>>>>>>>> Apr 16 09:35:54 Updating OVIRT_INSTALL to '1'
>>>>>>>> Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1'
>>>>>>>> Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw
>>>>>>>> Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw
>>>>>>>> Apr 16 09:36:09 Skip runtime mode configuration.
>>>>>>>> Apr 16 09:36:09 Completed ovirt-early
>>>>>>>> Apr 16 09:36:09 Starting ovirt-awake.
>>>>>>>> Apr 16 09:36:09 Node is operating in unmanaged mode.
>>>>>>>> Apr 16 09:36:09 Completed ovirt-awake: RETVAL=0
>>>>>>>> Apr 16 09:36:09 Starting ovirt
>>>>>>>> Apr 16 09:36:09 Completed ovirt
>>>>>>>> Apr 16 09:36:10 Starting ovirt-post
>>>>>>>> Apr 16 09:36:20 Hardware virtualization detected
>>>>>>>> Volume group "HostVG" not found
>>>>>>>> Skipping volume group HostVG
>>>>>>>> Restarting network (via systemctl): [ OK ]
>>>>>>>> Apr 16 09:36:20 Starting ovirt-post
>>>>>>>> Apr 16 09:36:21 Hardware virtualization detected
>>>>>>>> Volume group "HostVG" not found
>>>>>>>> Skipping volume group HostVG
>>>>>>>> Restarting network (via systemctl): [ OK ]
>>>>>>>> Apr 16 09:36:22 Starting ovirt-cim
>>>>>>>> Apr 16 09:36:22 Completed ovirt-cim
>>>>>>>> WARNING: persistent config storage not available
>>>>>>>>
>>>>>>>> /var/log/vdsm/vdsm.log
>>>>>>>> ======================
>>>>>>>>
>>>>>>>> MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I am the actual vdsm 4.9-0
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:23,873::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage'
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
>>>>>>>> MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I am the actual vdsm 4.9-0
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,199::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage'
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) multipath Defaulting to False
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc, prefixName: multipath.conf, versions: 5
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions found: [0]
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /bin/cp /etc/multipath.conf /etc/multipath.conf.1' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipath) FAILED: <err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n'; <rc> = 1
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,256::multipath::118::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf.1' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,269::multipath::118::Storage.Misc.excCmd::(setupMultipath) FAILED: <err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n'; <rc> = 1
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,270::multipath::123::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /bin/cp /tmp/tmpnPcvWi /etc/multipath.conf' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,283::multipath::123::Storage.Misc.excCmd::(setupMultipath) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,283::multipath::128::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,294::multipath::128::Storage.Misc.excCmd::(setupMultipath) FAILED: <err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n'; <rc> = 1
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,295::multipath::131::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/multipath -F' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,323::multipath::131::Storage.Misc.excCmd::(setupMultipath) FAILED: <err> = ''; <rc> = 1
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,323::multipath::134::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,397::multipath::134::Storage.Misc.excCmd::(setupMultipath) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,398::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,443::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,445::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,447::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
>>>>>>>> MainThread::INFO::2012-04-16 09:36:29,307::vdsm::71::vds::(run) I am the actual vdsm 4.9-0
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,514::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage'
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,515::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,551::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,564::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,565::multipath::101::Storage.Multipath::(isEnabled) Current revision of multipath.conf detected, preserving
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,565::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,606::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,606::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,608::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,714::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,715::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,716::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,716::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,813::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
>>>>>>>> MainThread::INFO::2012-04-16 09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) Starting StorageDispatcher...
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to sampling method
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to sampling method
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) '/usr/bin/pgrep -xf ksmd' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning last result
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) Could not kill old Super Vdsm [Errno 2] No such file or directory: '/var/run/vdsm/svdsm.pid'
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) Launching Super Vdsm
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) '/usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.pyc bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None)
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) Making sure I'm root
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) Parsing cmd args
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) Creating PID file
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) Cleaning old socket
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) Setting up keep alive thread
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) Creating remote object manager
>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) Started serving super vdsm object
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected to Super Vdsm
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/multipath' (cwd None)
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning last result
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center'
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) Mount list: ['/rhev/data-center']
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers
>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'
>>>>>>>>
>>>>>>>> On 4/16/12 8:38 AM, "Mike Burns" wrote:
>>>>>>>>
>>>>>>>>> On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote:
>>>>>>>>>> Hi folks,
>>>>>>>>>>
>>>>>>>>>> I'm trying to install oVirt node v2.3.0 on a Dell C2100 server. I
>>>>>>>>>> can boot up just fine, but the two menu options I see are "Start
>>>>>>>>>> oVirt node", and "Troubleshooting". When I choose "Start oVirt
>>>>>>>>>> node", it does just that, and I am soon after given a console login
>>>>>>>>>> prompt. I've checked the docs, and I don't see what I'm supposed to
>>>>>>>>>> do next, as in a password etc. Am I missing something?
>>>>>>>>> Hi Adam,
>>>>>>>>>
>>>>>>>>> Something is breaking in the boot process. You should be getting a
>>>>>>>>> TUI screen that will let you configure and install ovirt-node.
>>>>>>>>>
>>>>>>>>> I just added an entry on the Node Troubleshooting wiki page[1] for
>>>>>>>>> you to follow.
>>>>>>>>>
>>>>>>>>> Mike
>>>>>>>>>
>>>>>>>>> [1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems
>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>>
>>>>>>>>>> -Adam
>>>>>>>>>> _______________________________________________
>>>>>>>>>> Users mailing list
>>>>>>>>>> Users(a)ovirt.org
>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>> This is definitely the cause of the installer failing:
>>>>>>
>>>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
>>>>>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
>>>>>>
>>>>>> What kind of media are you installing from: usb/cd/remote console?
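The check Joey highlights is just a grep over /proc/mounts. A minimal shell sketch of that probe, run standalone (the echoed messages are illustrative, not oVirt's actual strings):

```shell
# Probe for the live image mount the installer expects, the same way the
# ovirtfunctions log line shows: look for a "none /live" entry in /proc/mounts.
if grep -q "none /live" /proc/mounts; then
  echo "live image is mounted"
else
  # On the failing node this branch is taken, and mount_live() then errors out.
  echo "live image is not mounted"
fi
```

On a node that booted correctly from the live media, the first branch is taken; the ERROR in /tmp/ovirt.log shows the second branch, which is why the TUI installer never starts.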
>>>> I did go back and take a look at mount_live and made sure it contains a
>>>> specific patch to handle USB drives properly. If you can get back to a
>>>> shell prompt, run blkid and capture the output. If it's way too much to
>>>> type, then just the USB drive output should be OK.

Just curious: do those SanDisk drives still come with the U3 software on
them? If so, you may want to remove it, since it can alter the way the
drive is presented, and that could be causing this. I've got a 2-3 year
old 8GB SanDisk Cruzer with the U3 software removed, and that works fine.
Not sure if it's related, but you might want to check.

From adam at vonnieda.org Wed Apr 18 09:38:27 2012
From: Adam vonNieda
To: users at ovirt.org
Subject: Re: [Users] Booting oVirt node image 2.3.0, no install option
Date: Wed, 18 Apr 2012 08:38:16 -0500
In-Reply-To: 4F8EBF4C.8030706@redhat.com

Dunno if they do or not, but if they did, it would have been wiped out
by the "dd", as that's going straight to the device, not a partition
within.
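Adam's point about dd is that imaging the whole device (e.g. /dev/sdb, not /dev/sdb1) rewrites the partition table, and any vendor payload such as U3 along with it. A hedged sketch of that workflow as a small guard-railed helper (`write_node_image` is a hypothetical name, not an oVirt tool):

```shell
# Hypothetical helper for imaging an oVirt Node ISO onto a USB stick.
# CAUTION: dd overwrites the target device, partition table included --
# which is exactly why preloaded U3 software would not survive it.
write_node_image() {
  iso=$1
  dev=$2
  [ -f "$iso" ] || { echo "ISO not found: $iso" >&2; return 1; }
  [ -b "$dev" ] || { echo "not a block device: $dev" >&2; return 1; }
  case $dev in
    # Refuse names ending in a digit: that is usually a partition, and the
    # image must go to the whole device for the stick to boot.
    *[0-9]) echo "refusing: $dev looks like a partition" >&2; return 1 ;;
  esac
  dd if="$iso" of="$dev" bs=4M && sync
}
```

After writing, running blkid against the stick is one way to capture the output Joey asked for; double-check the device name with lsblk before invoking the helper.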
On 4/18/12 8:19 AM, "Joey Boggs" wrote:

>On 04/18/2012 08:40 AM, Adam vonNieda wrote:
>> Yep, that's exactly the same issue. Mine was a 16GB SanDisk Cruzer.
>> When I switched to a no-name older 4GB stick, it worked fine. I set mine
>> up exactly as you did as well, dd from a Mac. Mine booted the kernel just
>> fine as well. I tried booting up setting the "rootpw=" as well, but
>> that didn't work for me, so I was unable to collect any information from
>> the "blkid" command. I tried it three times, and I know I was doing it
>> correctly. Joey's comments below..
>>
>> -Adam
>>
>> I did go back and take a look at mount_live and made sure it contains a
>> specific patch to handle USB drives properly. If you can get back to a
>> shell prompt, run blkid and capture the output. If it's way too much to
>> type, then just the USB drive output should be OK.
>>
>> http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems
>>
>> On 4/18/12 5:02 AM, "Jason Lawer" wrote:
>>
>>> I think I just hit the exact same issue with a SanDisk Cruzer Blade 4GB
>>> USB stick. I bought 4 of them to try and set up a test system (before we
>>> commit to real hardware) and at least 2 of them failed with both the 2.2
>>> and 2.3 ovirt ISOs being copied using dd from a Mac.
>>>
>>> I copied to an old 8GB "Strontium" USB stick I had lying around and it
>>> worked without issue. So it appears to be an issue with the stick.
>>>
>>> I can provide more specific information on the stick or such if that is
>>> useful.
>>>
>>> It wouldn't surprise me if it's due to the low-cost nature of the stick
>>> (cost $5 AUD), but I am curious, as it booted the kernel fine.
>>>
>>> Jason
>>> On 18/04/2012, at 4:48 AM, Adam vonNieda wrote:
>>>
>>>> Turns out that there might be an issue with my thumb drive. I tried
>>>> another, and it worked fine. Thanks very much for the responses folks!
>>>>
>>>> -Adam
>>>>
>>>> On 4/17/12 10:11 AM, "Joey Boggs" wrote:
>>>>
>>>>> On 04/17/2012 10:51 AM, Adam vonNieda wrote:
>>>>>> Thanks for the reply Joey. I saw that too, and thought maybe my USB
>>>>>> thumb drive was set to read only, but it's not. This box doesn't have
>>>>>> a DVD drive; I'll try a different USB drive, and if that doesn't work,
>>>>>> I'll dig up an external DVD drive.
>>>>>>
>>>>>> Thanks again,
>>>>>>
>>>>>> -Adam
>>>>>>
>>>>>> Adam vonNieda
>>>>>> Adam(a)vonNieda.org
>>>>>>
>>>>>> On Apr 17, 2012, at 9:07, Joey Boggs wrote:
>>>>>>
>>>>>>> On 04/17/2012 09:45 AM, Adam vonNieda wrote:
>>>>>>>> Hi folks,
>>>>>>>>
>>>>>>>> Still hoping someone can give me a hand with this. I can't install
>>>>>>>> ovirt-node 2.3.0 on a Dell C2100 server because it won't start the
>>>>>>>> graphical interface. I booted up a standard F16 image this morning,
>>>>>>>> and the graphical installer does start during that process. Logs
>>>>>>>> are below.
>>>>>>>>
>>>>>>>> Thanks very much,
>>>>>>>>
>>>>>>>> -Adam
>>>>>>>>
>>>>>>>> [...]
>>>>>>>>> co >>>>>>>>> unt, >>>>>>>>> d >>>>>>>>> ev_size' (cwd None) >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: >>>>>>>>> =3D >>>>>>>>> ''; >>>>>>>>> =3D 0 >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation >>>>>>>>>'lvm >>>>>>>>> reload >>>>>>>>> operation' released the operation mutex >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation >>>>>>>>>'lvm >>>>>>>>> reload >>>>>>>>> operation' got the operation mutex >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>>>>>>>> -n >>>>>>>>> /sbin/lvm vgs --config " devices { preferred_names =3D >>>>>>>>> [\\"^/dev/mapper/\\"] >>>>>>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>>>>> disable_after_error_count=3D3 >>>>>>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>>>>>> \\"r%.*%\\" ] } global { locking_type=3D1 >>>>>>>>>prioritise_write_locks=3D1 >>>>>>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days = =3D 0 } >>>>>>>>>" >>>>>>>>> --noheadings --units b --nosuffix --separator | -o >>>>>>>>> >>>>>>>>> >>>>>>>>> = >>>>>>>>>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags, >>>>>>>>>vg >>>>>>>>> _m >>>>>>>>> da_s >>>>>>>>> i >>>>>>>>> ze,vg_mda_free' (cwd None) >>>>>>>>> MainThread::INFO::2012-04-16 09:36:29,307::vdsm::71::vds::(run) I >>>>>>>>> am >>>>>>>>> the >>>>>>>>> actual vdsm 4.9-0 >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> >>>>>>>>> >>>>>>>>> = >>>>>>>>>09:36:29,514::resourceManager::376::ResourceManager::(registerName >>>>>>>>>sp >>>>>>>>> ac >>>>>>>>> e) >>>>>>>>> Registering namespace 'Storage' >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,515::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>>>>>>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>>>>>>>> 
MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,551::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>>>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,564::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>>>>> SUCCESS: >>>>>>>>> =3D ''; =3D 0 >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,565::multipath::101::Storage.Multipath::(isEnabled) >>>>>>>>> Current >>>>>>>>> revision of multipath.conf detected, preserving >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> >>>>>>>>> >>>>>>>>> = >>>>>>>>>09:36:29,565::hsm::248::Storage.Misc.excCmd::(__validateLvmLocking >>>>>>>>>Ty >>>>>>>>> pe >>>>>>>>> ) >>>>>>>>> '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd >>>>>>>>> None) >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> >>>>>>>>> >>>>>>>>> = >>>>>>>>>09:36:29,606::hsm::248::Storage.Misc.excCmd::(__validateLvmLocking >>>>>>>>>Ty >>>>>>>>> pe >>>>>>>>> ) >>>>>>>>> SUCCESS: =3D ''; =3D 0 >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,606::lvm::319::OperationMutex::(_reloadpvs) Operation >>>>>>>>>'lvm >>>>>>>>> reload >>>>>>>>> operation' got the operation mutex >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,608::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>>>>>>>> -n >>>>>>>>> /sbin/lvm pvs --config " devices { preferred_names =3D >>>>>>>>> [\\"^/dev/mapper/\\"] >>>>>>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>>>>> disable_after_error_count=3D3 >>>>>>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>>>>>> \\"r%.*%\\" ] } global { locking_type=3D1 >>>>>>>>>prioritise_write_locks=3D1 >>>>>>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days = =3D 0 } >>>>>>>>>" >>>>>>>>> --noheadings --units b --nosuffix --separator | -o >>>>>>>>> >>>>>>>>> >>>>>>>>> = >>>>>>>>>uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,md >>>>>>>>>a_ >>>>>>>>> co >>>>>>>>> unt, >>>>>>>>> d 
>>>>>>>>> ev_size' (cwd None) >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,714::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: >>>>>>>>> =3D >>>>>>>>> ''; >>>>>>>>> =3D 0 >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,715::lvm::342::OperationMutex::(_reloadpvs) Operation >>>>>>>>>'lvm >>>>>>>>> reload >>>>>>>>> operation' released the operation mutex >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,716::lvm::352::OperationMutex::(_reloadvgs) Operation >>>>>>>>>'lvm >>>>>>>>> reload >>>>>>>>> operation' got the operation mutex >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,716::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>>>>>>>> -n >>>>>>>>> /sbin/lvm vgs --config " devices { preferred_names =3D >>>>>>>>> [\\"^/dev/mapper/\\"] >>>>>>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>>>>> disable_after_error_count=3D3 >>>>>>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>>>>>> \\"r%.*%\\" ] } global { locking_type=3D1 >>>>>>>>>prioritise_write_locks=3D1 >>>>>>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days = =3D 0 } >>>>>>>>>" >>>>>>>>> --noheadings --units b --nosuffix --separator | -o >>>>>>>>> >>>>>>>>> >>>>>>>>> = >>>>>>>>>uuid,name,attr,size,free,extent_size,extent_count,free_count,tags, >>>>>>>>>vg >>>>>>>>> _m >>>>>>>>> da_s >>>>>>>>> i >>>>>>>>> ze,vg_mda_free' (cwd None) >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,813::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: >>>>>>>>> =3D >>>>>>>>> ' No >>>>>>>>> volume groups found\n'; =3D 0 >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation >>>>>>>>>'lvm >>>>>>>>> reload >>>>>>>>> operation' released the operation mutex >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo >>>>>>>>> -n >>>>>>>>> /sbin/lvm lvs --config " devices { preferred_names =3D >>>>>>>>> 
[\\"^/dev/mapper/\\"] >>>>>>>>> ignore_suspended_devices=3D1 write_cache_state=3D0 >>>>>>>>> disable_after_error_count=3D3 >>>>>>>>> filter =3D [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", >>>>>>>>> \\"r%.*%\\" ] } global { locking_type=3D1 >>>>>>>>>prioritise_write_locks=3D1 >>>>>>>>> wait_for_locks=3D1 } backup { retain_min =3D 50 retain_days = =3D 0 } >>>>>>>>>" >>>>>>>>> --noheadings --units b --nosuffix --separator | -o >>>>>>>>> uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None) >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: >>>>>>>>> =3D >>>>>>>>> ' No >>>>>>>>> volume groups found\n'; =3D 0 >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to >>>>>>>>> enter >>>>>>>>> sampling method (storage.sdc.refreshStorage) >>>>>>>>> MainThread::INFO::2012-04-16 >>>>>>>>> 09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) >>>>>>>>> Starting >>>>>>>>> StorageDispatcher... 
>>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to >>>>>>>>> sampling >>>>>>>>> method >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to >>>>>>>>> enter >>>>>>>>> sampling method (storage.iscsi.rescan) >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to >>>>>>>>> sampling >>>>>>>>> method >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) >>>>>>>>> '/usr/bin/sudo -n >>>>>>>>> /sbin/iscsiadm -m session -R' (cwd None) >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) >>>>>>>>> '/usr/bin/pgrep >>>>>>>>> -xf ksmd' (cwd None) >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) >>>>>>>>> SUCCESS: =3D >>>>>>>>> ''; =3D 0 >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) >>>>>>>>> FAILED: >>>>>>>>> =3D >>>>>>>>> 'iscsiadm: No session found.\n'; =3D 21 >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning >>>>>>>>>last >>>>>>>>> result >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) >>>>>>>>>Could >>>>>>>>> not >>>>>>>>> kill old Super Vdsm [Errno 2] No such file or directory: >>>>>>>>> '/var/run/vdsm/svdsm.pid' >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) >>>>>>>>> Launching >>>>>>>>> Super Vdsm >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> >>>>>>>>> = >>>>>>>>>09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervds >>>>>>>>>m) >>>>>>>>> '/usr/bin/sudo -n /usr/bin/python >>>>>>>>> /usr/share/vdsm/supervdsmServer.pyc >>>>>>>>> bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd 
None) >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) >>>>>>>>>Making >>>>>>>>> sure >>>>>>>>> I'm root >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) >>>>>>>>> Parsing >>>>>>>>> cmd >>>>>>>>> args >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) >>>>>>>>> Creating PID >>>>>>>>> file >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) >>>>>>>>> Cleaning old >>>>>>>>> socket >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) >>>>>>>>> Setting >>>>>>>>> up >>>>>>>>> keep alive thread >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) >>>>>>>>> Creating >>>>>>>>> remote object manager >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) >>>>>>>>> Started >>>>>>>>> serving super vdsm object >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to >>>>>>>>> connect >>>>>>>>> to Super Vdsm >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected >>>>>>>>> to >>>>>>>>> Super >>>>>>>>> Vdsm >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) >>>>>>>>> '/usr/bin/sudo >>>>>>>>> -n /sbin/multipath' (cwd None) >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) >>>>>>>>> SUCCESS: >>>>>>>>> =3D ''; =3D 0 >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) >>>>>>>>> Operation 'lvm >>>>>>>>> invalidate operation' got the operation mutex >>>>>>>>> Thread-11::DEBUG::2012-04-16 
>>>>>>>>> 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) >>>>>>>>> Operation 'lvm >>>>>>>>> invalidate operation' released the operation mutex >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) >>>>>>>>> Operation 'lvm >>>>>>>>> invalidate operation' got the operation mutex >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) >>>>>>>>> Operation 'lvm >>>>>>>>> invalidate operation' released the operation mutex >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) >>>>>>>>> Operation 'lvm >>>>>>>>> invalidate operation' got the operation mutex >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) >>>>>>>>> Operation 'lvm >>>>>>>>> invalidate operation' released the operation mutex >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning >>>>>>>>>last >>>>>>>>> result >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) >>>>>>>>> Started >>>>>>>>> cleaning storage repository at '/rhev/data-center' >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) >>>>>>>>> White >>>>>>>>> list: ['/rhev/data-center/hsm-tasks', >>>>>>>>> '/rhev/data-center/hsm-tasks/*', >>>>>>>>> '/rhev/data-center/mnt'] >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) >>>>>>>>> Mount >>>>>>>>> list: ['/rhev/data-center'] >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) >>>>>>>>> Cleaning >>>>>>>>> leftovers >>>>>>>>> Thread-11::DEBUG::2012-04-16 >>>>>>>>> 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) >>>>>>>>> Finished >>>>>>>>> cleaning storage 
repository at '/rhev/data-center' >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On 4/16/12 8:38 AM, "Mike Burns" wrote: >>>>>>>>> >>>>>>>>>> On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote: >>>>>>>>>>> Hi folks, >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> I'm trying to install oVirt node v2.3.0 on a Dell C2100 >>>>>>>>>>> server. I >>>>>>>>>>> can boot up just fine, but the two menu options I see are >>>>>>>>>>> "Start >>>>>>>>>>> oVirt >>>>>>>>>>> node", and "Troubleshooting". When I choose "Start oVirt node", >>>>>>>>>>> it >>>>>>>>>>> does just that, and I am soon after given a console login >>>>>>>>>>> prompt. >>>>>>>>>>> I've >>>>>>>>>>> checked the docs, and I don't see what I'm supposed to do next, >>>>>>>>>>> as >>>>>>>>>>> in >>>>>>>>>>> a password etc. Am I missing something? >>>>>>>>>> Hi Adam, >>>>>>>>>> >>>>>>>>>> Something is breaking in the boot process. You should be >>>>>>>>>> getting >>>>>>>>>> a >>>>>>>>>> TUI >>>>>>>>>> screen that will let you configure and install ovirt-node. >>>>>>>>>> >>>>>>>>>> I just added an entry on the Node Troubleshooting wiki page[1] >>>>>>>>>> for >>>>>>>>>> you to >>>>>>>>>> follow. 
>>>>>>>>>> >>>>>>>>>> Mike >>>>>>>>>> >>>>>>>>>> [1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -Adam >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> Users mailing list >>>>>>>>>>> Users(a)ovirt.org >>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>> This is definitely the cause of the installer failing: >>>>>>> >>>>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat >>>>>>> /proc/mounts|grep -q "none /live" >>>>>>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to >>>>>>> mount_live() >>>>>>> >>>>>>> >>>>>>> >>>>>>> What kind of media are you installing from: USB/CD/remote console? >>>>> I did go back and take a look at mount_live and made sure it >>>>> contains a >>>>> specific patch to handle USB drives properly. If you can get back to >>>>> a >>>>> shell prompt, run blkid and capture the output. If it's way too much >>>>> to >>>>> type, then just the USB drive output should be OK. > >Just curious, do those SanDisk drives still come with the U3 software on >them? 
If so, you may want to remove it, since it can alter the way the drive >is presented and that could be causing it. I've got a 2-3 year old 8GB >SanDisk Cruzer with the U3 software removed and that works fine; not sure >if it's related, but you might want to check. --===============2058142780009839646==-- From akula at thegeekhood.net Wed Apr 18 19:01:29 2012 Content-Type: multipart/mixed; boundary="===============4000893774111980453==" MIME-Version: 1.0 From: Jason Lawer To: users at ovirt.org Subject: Re: [Users] Booting oVirt node image 2.3.0, no install option Date: Thu, 19 Apr 2012 09:01:20 +1000 Message-ID: <4F8F47C0.8010106@thegeekhood.net> In-Reply-To: 4F8EBF4C.8030706@redhat.com --===============4000893774111980453== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Typed out as I am yet to install the Remote Management Card. # blkid /dev/loop0: TYPE="squashfs" /dev/loop1: TYPE="DM_snapshot_cow" /dev/loop2: TYPE="squashfs" /dev/loop3: LABEL="ovirt-node-iso" UUID="f1fffd44-6664-48ef-8105-1d986f23127b" TYPE="ext2" /dev/sdb1: LABEL="ovirt-node-iso" TYPE="iso9660" /dev/mapper/1SanDisk: LABEL="ovirt-node-iso" TYPE="iso9660" /dev/mapper/1SanDiskp1: LABEL="ovirt-node-iso" TYPE="iso9660" /dev/mapper/live-rw: LABEL="ovirt-node-iso" UUID="f1fffd44-6664-48ef-8105-1d986f23127b" TYPE="ext2" /dev/mapper/live-osimg-min: LABEL="ovirt-node-iso" UUID="f1fffd44-6664-48ef-8105-1d986f23127b" TYPE="ext2" Jason On 18/04/12 11:19 PM, Joey Boggs wrote: > On 04/18/2012 08:40 AM, Adam vonNieda wrote: >> Yep, that's exactly the same issue. Mine was a 16GB SanDisk Cruzer. >> When I switched to a no-name older 4GB stick, it worked fine. I set mine >> up exactly as you did as well, dd from a Mac. Mine booted the kernel >> just >> fine as well. 
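[Editor's note: the writes discussed in this thread (dd'ing the ISO to a stick from a Mac, with some sticks then failing at mount_live) can be sanity-checked by reading the image back off the stick and comparing checksums. A minimal sketch; the device and file names are assumptions — the stick might be /dev/rdisk2 on a Mac (see diskutil list) or /dev/sdX on Linux, and GNU dd spells the block size 1M while the BSD dd on macOS spells it 1m:]

```shell
#!/bin/sh
# Sketch: write an installer ISO to a stick and verify the copy.
# $1: path to the ISO; $2: target device (unmounted). Hypothetical names.
write_and_verify() {
  iso=$1; dev=$2
  dd if="$iso" of="$dev" bs=1M 2>/dev/null || return 1
  sync
  # Read back exactly the ISO's size and compare checksums; a mismatch
  # points at a faulty stick even when the kernel still boots from it.
  size=$(stat -c%s "$iso" 2>/dev/null || stat -f%z "$iso")
  sum_iso=$(head -c "$size" "$iso" | md5sum | cut -d' ' -f1)
  sum_dev=$(head -c "$size" "$dev" | md5sum | cut -d' ' -f1)
  if [ "$sum_iso" = "$sum_dev" ]; then
    echo "verify: OK"
  else
    echo "verify: MISMATCH - suspect the stick"
    return 1
  fi
}
```

[A mismatch here would explain a stick that boots the kernel but later fails the installer's live-media checks.]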
I tried booting up setting the "rootpw=" boot parameter as well, >> but >> that didn't work for me, so I was unable to collect any information from >> the "blkid" command. I tried it three times, and I know I was doing it >> correctly. Joey's comments below: >> >> -Adam >> >> >> >> I did go back and take a look at mount_live and made sure it contains a >> specific patch to handle USB drives properly. If you can get back to a >> shell prompt, run blkid and capture the output. If it's way too much to >> type, then just the USB drive output should be OK. >> >> >> >> http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems >> >> >> >> >> >> On 4/18/12 5:02 AM, "Jason Lawer" wrote: >> >>> I think I just hit the exact same issue with a SanDisk Cruzer Blade >>> 4GB >>> USB stick. I bought 4 of them to try to set up a test system (before we >>> commit to real hardware), and at least 2 of them failed with both the 2.2 >>> and >>> 2.3 oVirt ISOs being copied using dd from a Mac. >>> >>> I copied to an old 8GB "Strontium" USB stick I had lying around and it >>> worked >>> without issue. So it appears to be an issue with the stick. >>> >>> I can provide more specific information on the stick or such if that is >>> useful. >>> >>> It wouldn't surprise me if it's due to the low-cost nature of the stick >>> (cost $5 AUD), but I am curious, as it booted the kernel fine. >>> >>> Jason >>> On 18/04/2012, at 4:48 AM, Adam vonNieda wrote: >>> >>>> Turns out that there might be an issue with my thumb drive. I tried >>>> another, and it worked fine. Thanks very much for the responses, folks! >>>> >>>> -Adam >>>> >>>> >>>> On 4/17/12 10:11 AM, "Joey Boggs" wrote: >>>> >>>>> On 04/17/2012 10:51 AM, Adam vonNieda wrote: >>>>>> Thanks for the reply, Joey. I saw that too, and thought maybe >>>>>> my USB >>>>>> thumb drive was set to read only, but it's not. 
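[Editor's note: the two checks this exchange keeps circling — whether the stick presents itself read-only, and whether the live image got mounted (the test ovirtfunctions logs as cat /proc/mounts|grep -q "none /live") — can be run by hand from the node's shell prompt. A minimal sketch; the /dev/sdb device name is an assumption taken from the log above:]

```shell
#!/bin/sh
# Sketch: manual versions of the installer's live-media checks.

# Does the mounts table (normally /proc/mounts) show the live image?
live_mounted() {
  grep -q "none /live" "${1:-/proc/mounts}"
}

check_node() {
  dev=${1:-/dev/sdb}   # hypothetical; match the "live device" log line
  # blockdev prints 1 when the kernel sees the device as read-only
  [ "$(blockdev --getro "$dev" 2>/dev/null)" = "1" ] && \
    echo "$dev is read-only"
  if live_mounted /proc/mounts; then
    echo "live image mounted"
  else
    echo "live image not mounted: mount_live() would fail"
  fi
}
```

[If live_mounted fails while the stick itself looks writable, the problem is the medium or the image copy, not a password or menu option.]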
This box doesn't >>>>>> have >>>>>> a >>>>>> DVD drive, I'll try a different USB drive, and if that doesn't work, >>>>>> I'll dig up an external DVD drive. >>>>>> >>>>>> Thanks again, >>>>>> >>>>>> -Adam >>>>>> >>>>>> Adam vonNieda >>>>>> Adam(a)vonNieda.org >>>>>> >>>>>> On Apr 17, 2012, at 9:07, Joey Boggs wrote: >>>>>> >>>>>>> On 04/17/2012 09:45 AM, Adam vonNieda wrote: >>>>>>>> Hi folks, >>>>>>>> >>>>>>>> Still hoping someone can give me a hand with this. I can't >>>>>>>> install >>>>>>>> ovirt-node 2.3.0 on a Dell C2100 server because it won't >>>>>>>> start >>>>>>>> the >>>>>>>> graphical interface. I booted up a standard F16 image this >>>>>>>> morning, >>>>>>>> and >>>>>>>> the graphical installer does start during that process. Logs are >>>>>>>> below. >>>>>>>> >>>>>>>> Thanks very much, >>>>>>>> >>>>>>>> -Adam >>>>>>>> >>>>>>>> >>>>>>>>> /tmp/ovirt.log >>>>>>>>> ============== >>>>>>>>> >>>>>>>>> /sbin/restorecon set context >>>>>>>>> /var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 >>>>>>>>> failed:'Read-only >>>>>>>>> file system' >>>>>>>>> /sbin/restorecon reset /var/cache/yum context >>>>>>>>> >>>>>>>>> >>>>>>>>> unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t:s0 >>>>>>>>> /sbin/restorecon reset /etc/sysctl.conf context >>>>>>>>> >>>>>>>>> >>>>>>>>> system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0 >>>>>>>>> /sbin/restorecon reset /boot-kdump context >>>>>>>>> system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0 >>>>>>>>> 2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - >>>>>>>>> ::::live >>>>>>>>> device:::: >>>>>>>>> /dev/sdb >>>>>>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat >>>>>>>>> /proc/mounts|grep >>>>>>>>> -q "none /live" >>>>>>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - >>>>>>>>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live 
>>>>>>>>> 2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - >>>>>>>>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to >>>>>>>>> mount_live() >>>>>>>>> >>>>>>>>> /var/log/ovirt.log >>>>>>>>> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D >>>>>>>>> >>>>>>>>> Apr 16 09:35:53 Starting ovirt-early >>>>>>>>> oVirt Node Hypervisor release 2.3.0 (1.0.fc16) >>>>>>>>> Apr 16 09:35:53 Updating /etc/default/ovirt >>>>>>>>> Apr 16 09:35:54 Updating OVIRT_BOOTIF to '' >>>>>>>>> Apr 16 09:35:54 Updating OVIRT_INIT to '' >>>>>>>>> Apr 16 09:35:54 Updating OVIRT_UPGRADE to '' >>>>>>>>> Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1' >>>>>>>>> Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset >>>>>>>>> crashkernel=3D512M-2G:64M,2G-:128M elevator=3Ddeadline quiet = >>>>>>>>> rd_NO_LVM >>>>>>>>> rhgb >>>>>>>>> rd.luks=3D0 rd.md=3D0 rd.dm=3D0' >>>>>>>>> Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic' >>>>>>>>> Apr 16 09:35:54 Updating OVIRT_INSTALL to '1' >>>>>>>>> Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1' >>>>>>>>> Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw >>>>>>>>> Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw >>>>>>>>> Apr 16 09:36:09 Skip runtime mode configuration. >>>>>>>>> Apr 16 09:36:09 Completed ovirt-early >>>>>>>>> Apr 16 09:36:09 Starting ovirt-awake. >>>>>>>>> Apr 16 09:36:09 Node is operating in unmanaged mode. 
>>>>>>>>> Apr 16 09:36:09 Completed ovirt-awake: RETVAL=3D0 >>>>>>>>> Apr 16 09:36:09 Starting ovirt >>>>>>>>> Apr 16 09:36:09 Completed ovirt >>>>>>>>> Apr 16 09:36:10 Starting ovirt-post >>>>>>>>> Apr 16 09:36:20 Hardware virtualization detected >>>>>>>>> Volume group "HostVG" not found >>>>>>>>> Skipping volume group HostVG >>>>>>>>> Restarting network (via systemctl): [ OK ] >>>>>>>>> Apr 16 09:36:20 Starting ovirt-post >>>>>>>>> Apr 16 09:36:21 Hardware virtualization detected >>>>>>>>> Volume group "HostVG" not found >>>>>>>>> Skipping volume group HostVG >>>>>>>>> Restarting network (via systemctl): [ OK ] >>>>>>>>> Apr 16 09:36:22 Starting ovirt-cim >>>>>>>>> Apr 16 09:36:22 Completed ovirt-cim >>>>>>>>> WARNING: persistent config storage not available >>>>>>>>> >>>>>>>>> /var/log/vdsm/vdsm.log >>>>>>>>> =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D >>>>>>>>> >>>>>>>>> MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I >>>>>>>>> am >>>>>>>>> the >>>>>>>>> actual vdsm 4.9-0 >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> >>>>>>>>> >>>>>>>>> 09:36:23,873::resourceManager::376::ResourceManager::(registerNam= esp = >>>>>>>>> >>>>>>>>> ac >>>>>>>>> e) >>>>>>>>> Registering namespace 'Storage' >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>>>>>>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>>>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >>>>>>>>> MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I >>>>>>>>> am >>>>>>>>> the >>>>>>>>> actual vdsm 4.9-0 >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> >>>>>>>>> >>>>>>>>> 09:36:25,199::resourceManager::376::ResourceManager::(registerNam= esp = >>>>>>>>> >>>>>>>>> ac >>>>>>>>> e) >>>>>>>>> Registering namespace 'Storage' >>>>>>>>> 
MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter - >>>>>>>>> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0 >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>>>>> '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None) >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) >>>>>>>>> SUCCESS: >>>>>>>>> =3D ''; =3D 0 >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) >>>>>>>>> multipath >>>>>>>>> Defaulting to False >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc, >>>>>>>>> prefixName: multipath.conf, versions: 5 >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions >>>>>>>>> found: >>>>>>>>> [0] >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipat= h) = >>>>>>>>> >>>>>>>>> '/usr/bin/sudo -n /bin/cp /etc/multipath.conf >>>>>>>>> /etc/multipath.conf.1' >>>>>>>>> (cwd >>>>>>>>> None) >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipat= h) = >>>>>>>>> >>>>>>>>> FAILED: =3D 'sudo: unable to mkdir /var/db/sudo/vdsm: >>>>>>>>> Read-only >>>>>>>>> file >>>>>>>>> system\nsudo: sorry, a password is required to run sudo\n'; >>>>>>>>> =3D 1 >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:25,256::multipath::118::Storage.Misc.excCmd::(setupMultipat= h) = >>>>>>>>> >>>>>>>>> '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf.1' (cwd >>>>>>>>> None) >>>>>>>>> MainThread::DEBUG::2012-04-16 >>>>>>>>> 09:36:25,269::multipath::118::Storage.Misc.excCmd::(setupMultipat= h) = >>>>>>>>> >>>>>>>>> FAILED: =3D 'sudo: unable to mkdir /var/db/sudo/vdsm: >>>>>>>>> Read-only >>>>>>>>> file 
>>>>>>>>> system\nsudo: sorry, a password is required to run sudo\n'; <rc> = 1
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,270::multipath::123::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /bin/cp /tmp/tmpnPcvWi /etc/multipath.conf' (cwd None)
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,283::multipath::123::Storage.Misc.excCmd::(setupMultipath) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,283::multipath::128::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf' (cwd None)
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,294::multipath::128::Storage.Misc.excCmd::(setupMultipath) FAILED: <err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file system\nsudo: sorry, a password is required to run sudo\n'; <rc> = 1
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,295::multipath::131::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/multipath -F' (cwd None)
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,323::multipath::131::Storage.Misc.excCmd::(setupMultipath) FAILED: <err> = ''; <rc> = 1
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:25,323::multipath::134::Storage.Misc.excCmd::(setupMultipath) '/usr/bin/sudo -n /sbin/service multipathd restart' (cwd None)
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,397::multipath::134::Storage.Misc.excCmd::(setupMultipath) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,398::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,443::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,445::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,447::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,811::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,811::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,812::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:26,812::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
>>>>>>>>> MainThread::INFO::2012-04-16 09:36:29,307::vdsm::71::vds::(run) I am the actual vdsm 4.9-0
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,514::resourceManager::376::ResourceManager::(registerNamespace) Registering namespace 'Storage'
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,515::threadPool::45::Misc.ThreadPool::(__init__) Enter - numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,551::multipath::85::Storage.Misc.excCmd::(isEnabled) '/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,564::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,565::multipath::101::Storage.Multipath::(isEnabled) Current revision of multipath.conf detected, preserving
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,565::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) '/usr/bin/sudo -n /sbin/lvm dumpconfig global/locking_type' (cwd None)
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,606::hsm::248::Storage.Misc.excCmd::(__validateLvmLockingType) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,606::lvm::319::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,608::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,714::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,715::lvm::342::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,716::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,716::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,813::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,814::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%1SanDisk|3600605b00436bd80171b105c225377ce%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags' (cwd None)
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,916::lvm::287::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' No volume groups found\n'; <rc> = 0
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:29,917::misc::1017::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
>>>>>>>>> MainThread::INFO::2012-04-16 09:36:29,919::dispatcher::121::Storage.Dispatcher::(__init__) Starting StorageDispatcher...
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:29,919::misc::1019::SamplingMethod::(__call__) Got in to sampling method
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:29,921::misc::1017::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:29,921::misc::1019::SamplingMethod::(__call__) Got in to sampling method
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:29,921::iscsi::389::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:29,930::utils::595::Storage.Misc.excCmd::(execCmd) '/usr/bin/pgrep -xf ksmd' (cwd None)
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,108::utils::595::Storage.Misc.excCmd::(execCmd) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:30,116::iscsi::389::Storage.Misc.excCmd::(rescan) FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:30,116::misc::1027::SamplingMethod::(__call__) Returning last result
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:30,117::supervdsm::83::SuperVdsmProxy::(_killSupervdsm) Could not kill old Super Vdsm [Errno 2] No such file or directory: '/var/run/vdsm/svdsm.pid'
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:30,117::supervdsm::71::SuperVdsmProxy::(_launchSupervdsm) Launching Super Vdsm
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:30,118::supervdsm::74::Storage.Misc.excCmd::(_launchSupervdsm) '/usr/bin/sudo -n /usr/bin/python /usr/share/vdsm/supervdsmServer.pyc bd4b3ae7-3e51-4d6b-b681-d5f6cb5bae07 2945' (cwd None)
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,254::supervdsmServer::170::SuperVdsm.Server::(main) Making sure I'm root
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::174::SuperVdsm.Server::(main) Parsing cmd args
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::177::SuperVdsm.Server::(main) Creating PID file
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::181::SuperVdsm.Server::(main) Cleaning old socket
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,255::supervdsmServer::185::SuperVdsm.Server::(main) Setting up keep alive thread
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,256::supervdsmServer::190::SuperVdsm.Server::(main) Creating remote object manager
>>>>>>>>> MainThread::DEBUG::2012-04-16 09:36:30,256::supervdsmServer::201::SuperVdsm.Server::(main) Started serving super vdsm object
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:32,124::supervdsm::92::SuperVdsmProxy::(_connect) Trying to connect to Super Vdsm
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:32,133::supervdsm::64::SuperVdsmProxy::(__init__) Connected to Super Vdsm
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,070::multipath::71::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/multipath' (cwd None)
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,130::multipath::71::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,131::lvm::460::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,131::lvm::462::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,132::lvm::472::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,132::lvm::474::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,133::lvm::493::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,133::lvm::495::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,133::misc::1027::SamplingMethod::(__call__) Returning last result
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,133::hsm::272::Storage.HSM::(__cleanStorageRepository) Started cleaning storage repository at '/rhev/data-center'
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::304::Storage.HSM::(__cleanStorageRepository) White list: ['/rhev/data-center/hsm-tasks', '/rhev/data-center/hsm-tasks/*', '/rhev/data-center/mnt']
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::305::Storage.HSM::(__cleanStorageRepository) Mount list: ['/rhev/data-center']
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::307::Storage.HSM::(__cleanStorageRepository) Cleaning leftovers
>>>>>>>>> Thread-11::DEBUG::2012-04-16 09:36:34,136::hsm::350::Storage.HSM::(__cleanStorageRepository) Finished cleaning storage repository at '/rhev/data-center'
>>>>>>>>>
>>>>>>>>> On 4/16/12 8:38 AM, "Mike Burns" wrote:
>>>>>>>>>
>>>>>>>>>> On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote:
>>>>>>>>>>> Hi folks,
>>>>>>>>>>>
>>>>>>>>>>> I'm trying to install oVirt node v2.3.0 on a Dell C2100 server. I can boot up just fine, but the two menu options I see are "Start oVirt node" and "Troubleshooting". When I choose "Start oVirt node", it does just that, and I am soon after given a console login prompt. I've checked the docs, and I don't see what I'm supposed to do next, as in a password etc. Am I missing something?
>>>>>>>>>>
>>>>>>>>>> Hi Adam,
>>>>>>>>>>
>>>>>>>>>> Something is breaking in the boot process. You should be getting a TUI screen that will let you configure and install ovirt-node.
>>>>>>>>>>
>>>>>>>>>> I just added an entry on the Node Troubleshooting wiki page[1] for you to follow.
>>>>>>>>>>
>>>>>>>>>> Mike
>>>>>>>>>>
>>>>>>>>>> [1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems
>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>>
>>>>>>>>>>> -Adam
>>>>>>>>>>> _______________________________________________
>>>>>>>>>>> Users mailing list
>>>>>>>>>>> users@ovirt.org
>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>> This is definitely the cause of the installer failing:
>>>>>>>
>>>>>>> 2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep -q "none /live"
>>>>>>> 2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()
>>>>>>>
>>>>>>> What kind of media are you installing from: usb/cd/remote console?
>>>>> I did go back and take a look at mount_live and made sure it contains a specific patch to handle usb drives properly. If you can get back to a shell prompt, run blkid and capture the output. If it's way too much to type, then just the usb drive output should be ok.
> Just curious, do those Sandisk drives still come with the U3 software on them?
> If so, you may want to remove it, since it can alter the way the drive is presented, and that could be causing this. I've got a 2-3 year old 8GB Sandisk Cruzer with the U3 software removed, and that works fine. Not sure if it's related, but you might want to check.
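For anyone landing on this thread later, the checks suggested above can be sketched as a small shell snippet. This is an illustration, not the installer's actual code: the `/proc/mounts` grep mirrors the ovirtfunctions check quoted in the log, `blkid` availability is assumed, and the `filter_line` string is copied from the vdsm log in this thread.

```shell
#!/bin/sh
# Sketch of the troubleshooting steps discussed in this thread.

# 1. The check the installer log shows failing ("Failed to mount_live()"):
#    ovirtfunctions greps /proc/mounts for a "none /live" entry.
if grep -q "none /live" /proc/mounts 2>/dev/null; then
    echo "live image mounted"
else
    echo "live image NOT mounted (matches the Failed to mount_live() error)"
fi

# 2. Capture blkid output for the USB stick, as requested earlier in the
#    thread (skipped quietly if blkid is not installed).
command -v blkid >/dev/null 2>&1 && blkid

# 3. The LVM --config in the vdsm log whitelists exactly one device; the
#    accepted ID can be pulled out of that filter line like this:
filter_line='filter = [ "a%1SanDisk|3600605b00436bd80171b105c225377ce%", "r%.*%" ]'
accepted=$(printf '%s\n' "$filter_line" | sed -n 's/.*a%\([^%]*\)%.*/\1/p')
echo "whitelisted device: $accepted"
```

Comparing that whitelisted ID against the blkid output is one quick way to see whether the USB stick is being presented under a different identity (as the U3 discussion suggests it might be).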