[Users] Fwd: Re: Permission issues

Alex Leonhardt alex.tuxx at gmail.com
Mon Feb 25 21:20:14 UTC 2013


Hi,

Thread-348::WARNING::2013-02-25 16:20:47,167::fileUtils::185::fileUtils::(createdir) Dir /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3 already exists

AND

AcquireHostIdFailure: Cannot acquire host id: ('6169a495-6ae0-40ba-9734-e0bf0ec0e73d', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))
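
For what it's worth, sanlock is surfacing plain errno values here, so the two codes in this thread decode with stock Python (nothing oVirt-specific assumed):

```python
import errno
import os

# SanlockException(19, ...) above and the "open error -13" /
# "add_lockspace fail result -19" entries in the sanlock log further
# down the thread are (negated) standard errno values:
print(errno.errorcode[13], "->", os.strerror(13))  # EACCES -> Permission denied
print(errno.errorcode[19], "->", os.strerror(19))  # ENODEV -> No such device
```

That matches the sanlock log later in the thread: the open of dom_md/ids fails with -13 (a permission problem on the NFS export), and add_lockspace then fails with -19, which vdsm re-raises as the 'No such device' SanlockException.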

--

Are you sure your config is clean? Is there anything else claiming to 
be an NFS iso domain? Maybe one that's not attached to the DC?

Alex
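
(The vdsm log excerpts quoted below all share the same "::"-delimited prefix; if you need to grep or filter them, a small throwaway helper like this works — the field names are my own labels for the layout, not vdsm's:)

```python
# Split a vdsm log line of the form
# Thread::LEVEL::timestamp::module::lineno::logger::(func) message
# Field names are illustrative labels, not vdsm's own terminology.
def parse_vdsm_line(line):
    parts = line.split("::", 5)
    if len(parts) < 6:
        return None  # not a standard vdsm log line
    thread, level, timestamp, module, lineno, rest = parts
    logger, _, message = rest.partition("::")
    return {
        "thread": thread,
        "level": level,
        "timestamp": timestamp,
        "module": module,
        "lineno": int(lineno),
        "logger": logger,
        "message": message,
    }

rec = parse_vdsm_line(
    "Thread-348::WARNING::2013-02-25 16:20:47,167::fileUtils::185::"
    "fileUtils::(createdir) Dir /rhev/data-center/"
    "5849b030-626e-47cb-ad90-3ce782d831b3 already exists"
)
print(rec["level"], rec["module"], rec["lineno"])  # WARNING fileUtils 185
```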


On 02/25/2013 06:09 PM, suporte at logicworks.pt wrote:
> I'm a little bit lost here. I was able to attach a QNAP NFS storage 
> Data (Master) domain, but cannot attach the local NFS iso domain. I also 
> cannot attach an openfiler NFS share.
>
> I was able to get it all working with release 3.1 and node 
> 2.5.5-0.1fc17.
>
> ------------------------------------------------------------------------
> *From: *suporte at logicworks.pt
> *To: *users at ovirt.org
> *Sent: *Monday, 25 February 2013 16:23:33
> *Subject: *Re: [Users] Fwd: Re: Permission issues
>
> and the vdsm log:
>
> Thread-348::DEBUG::2013-02-25 16:20:47,159::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=NAS2', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.5.178:/mnt/nfs-share/nfs/nfs-share', 'ROLE=Regular', 'SDUUID=6169a495-6ae0-40ba-9734-e0bf0ec0e73d', 'TYPE=NFS', 'VERSION=3', '_SHA_CKSUM=69dc9fc040125ab339e65297e958369e513d5ebe']
> Thread-348::DEBUG::2013-02-25 16:20:47,166::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=NAS2', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.5.178:/mnt/nfs-share/nfs/nfs-share', 'ROLE=Regular', 'SDUUID=6169a495-6ae0-40ba-9734-e0bf0ec0e73d', 'TYPE=NFS', 'VERSION=3', '_SHA_CKSUM=69dc9fc040125ab339e65297e958369e513d5ebe']
> Thread-348::WARNING::2013-02-25 16:20:47,167::fileUtils::185::fileUtils::(createdir) Dir /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3 already exists
> Thread-348::DEBUG::2013-02-25 16:20:47,167::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction
> Thread-348::DEBUG::2013-02-25 16:20:47,167::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction
> Thread-348::INFO::2013-02-25 16:20:47,167::clusterlock::172::SANLock::(acquireHostId) Acquiring host id for domain 6169a495-6ae0-40ba-9734-e0bf0ec0e73d (id: 250)
> Thread-348::ERROR::2013-02-25 16:20:48,168::task::833::TaskManager.Task::(_setError) Task=`bb9c54e3-3b03-46db-acab-2c327a19823f`::Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 840, in _run
>     return fn(*args, **kargs)
>   File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 895, in createStoragePool
>     masterVersion, leaseParams)
>   File "/usr/share/vdsm/storage/sp.py", line 567, in create
>     self._acquireTemporaryClusterLock(msdUUID, leaseParams)
>   File "/usr/share/vdsm/storage/sp.py", line 509, in _acquireTemporaryClusterLock
>     msd.acquireHostId(self.id)
>   File "/usr/share/vdsm/storage/sd.py", line 436, in acquireHostId
>     self._clusterLock.acquireHostId(hostId, async)
>   File "/usr/share/vdsm/storage/clusterlock.py", line 187, in acquireHostId
>     raise se.AcquireHostIdFailure(self._sdUUID, e)
> AcquireHostIdFailure: Cannot acquire host id: ('6169a495-6ae0-40ba-9734-e0bf0ec0e73d', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))
> Thread-348::DEBUG::2013-02-25 16:20:48,168::task::852::TaskManager.Task::(_run) Task=`bb9c54e3-3b03-46db-acab-2c327a19823f`::Task._run: bb9c54e3-3b03-46db-acab-2c327a19823f (None, '5849b030-626e-47cb-ad90-3ce782d831b3', 'Default', '6169a495-6ae0-40ba-9734-e0bf0ec0e73d', ['6169a495-6ae0-40ba-9734-e0bf0ec0e73d'], 24, None, 5, 60, 10, 3) {} failed - stopping task
> Thread-348::DEBUG::2013-02-25 16:20:48,169::task::1177::TaskManager.Task::(stop) Task=`bb9c54e3-3b03-46db-acab-2c327a19823f`::stopping in state preparing (force False)
> Thread-348::DEBUG::2013-02-25 16:20:48,169::task::957::TaskManager.Task::(_decref) Task=`bb9c54e3-3b03-46db-acab-2c327a19823f`::ref 1 aborting True
> Thread-348::INFO::2013-02-25 16:20:48,169::task::1134::TaskManager.Task::(prepare) Task=`bb9c54e3-3b03-46db-acab-2c327a19823f`::aborting: Task is aborted: 'Cannot acquire host id' - code 661
> Thread-348::DEBUG::2013-02-25 16:20:48,169::task::1139::TaskManager.Task::(prepare) Task=`bb9c54e3-3b03-46db-acab-2c327a19823f`::Prepare: aborted: Cannot acquire host id
> Thread-348::DEBUG::2013-02-25 16:20:48,169::task::957::TaskManager.Task::(_decref) Task=`bb9c54e3-3b03-46db-acab-2c327a19823f`::ref 0 aborting True
> Thread-348::DEBUG::2013-02-25 16:20:48,169::task::892::TaskManager.Task::(_doAbort) Task=`bb9c54e3-3b03-46db-acab-2c327a19823f`::Task._doAbort: force False
> Thread-348::DEBUG::2013-02-25 16:20:48,169::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> Thread-348::DEBUG::2013-02-25 16:20:48,169::task::568::TaskManager.Task::(_updateState) Task=`bb9c54e3-3b03-46db-acab-2c327a19823f`::moving from state preparing -> state aborting
> Thread-348::DEBUG::2013-02-25 16:20:48,169::task::523::TaskManager.Task::(__state_aborting) Task=`bb9c54e3-3b03-46db-acab-2c327a19823f`::_aborting: recover policy none
> Thread-348::DEBUG::2013-02-25 16:20:48,169::task::568::TaskManager.Task::(_updateState) Task=`bb9c54e3-3b03-46db-acab-2c327a19823f`::moving from state aborting -> state failed
> Thread-348::DEBUG::2013-02-25 16:20:48,169::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.6169a495-6ae0-40ba-9734-e0bf0ec0e73d': < ResourceRef 'Storage.6169a495-6ae0-40ba-9734-e0bf0ec0e73d', isValid: 'True' obj: 'None'>, 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3': < ResourceRef 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', isValid: 'True' obj: 'None'>}
> Thread-348::DEBUG::2013-02-25 16:20:48,169::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> Thread-348::DEBUG::2013-02-25 16:20:48,170::resourceManager::557::ResourceManager::(releaseResource) Trying to release resource 'Storage.6169a495-6ae0-40ba-9734-e0bf0ec0e73d'
> Thread-348::DEBUG::2013-02-25 16:20:48,170::resourceManager::573::ResourceManager::(releaseResource) Released resource 'Storage.6169a495-6ae0-40ba-9734-e0bf0ec0e73d' (0 active users)
> Thread-348::DEBUG::2013-02-25 16:20:48,170::resourceManager::578::ResourceManager::(releaseResource) Resource 'Storage.6169a495-6ae0-40ba-9734-e0bf0ec0e73d' is free, finding out if anyone is waiting for it.
> Thread-348::DEBUG::2013-02-25 16:20:48,170::resourceManager::585::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.6169a495-6ae0-40ba-9734-e0bf0ec0e73d', Clearing records.
> Thread-348::DEBUG::2013-02-25 16:20:48,170::resourceManager::557::ResourceManager::(releaseResource) Trying to release resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3'
> Thread-348::DEBUG::2013-02-25 16:20:48,170::resourceManager::573::ResourceManager::(releaseResource) Released resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' (0 active users)
> Thread-348::DEBUG::2013-02-25 16:20:48,170::resourceManager::578::ResourceManager::(releaseResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free, finding out if anyone is waiting for it.
> Thread-348::DEBUG::2013-02-25 16:20:48,170::resourceManager::585::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', Clearing records.
> Thread-348::ERROR::2013-02-25 16:20:48,170::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status': {'message': "Cannot acquire host id: ('6169a495-6ae0-40ba-9734-e0bf0ec0e73d', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))", 'code': 661}}
> Thread-350::DEBUG::2013-02-25 16:20:50,379::BindingXMLRPC::913::vds::(wrapper) client [192.168.5.180]::call volumesList with () {}
> MainProcess|Thread-350::DEBUG::2013-02-25 16:20:50,380::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster --mode=script volume info --xml' (cwd None)
> MainProcess|Thread-350::DEBUG::2013-02-25 16:20:50,427::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
> Thread-350::DEBUG::2013-02-25 16:20:50,427::BindingXMLRPC::920::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {}}
>
>
> ------------------------------------------------------------------------
> *From: *suporte at logicworks.pt
> *To: *users at ovirt.org
> *Sent: *Monday, 25 February 2013 15:46:13
> *Subject: *Re: [Users] Fwd: Re: Permission issues
>
> I have the node activated but now cannot attach an NFS domain. I 
> always get the message: "failed to attach Storage domain to Data 
> Center default".
>
> I have SELinux disabled on both the host and the engine.
>
> The sanlock log shows:
>
> 2013-02-25 15:33:41+0000 9535 [21168]: open error -13 /rhev/data-center/mnt/192.168.5.178:_mnt_nfs-share_nfs_nfs-share/43baa487-aad8-4d60-8e48-f7a3b3899cd3/dom_md/ids
> 2013-02-25 15:33:41+0000 9535 [21168]: s2 open_disk /rhev/data-center/mnt/192.168.5.178:_mnt_nfs-share_nfs_nfs-share/43baa487-aad8-4d60-8e48-f7a3b3899cd3/dom_md/ids error -13
> 2013-02-25 15:33:42+0000 9536 [938]: s2 add_lockspace fail result -19
> 2013-02-25 15:33:58+0000 9553 [937]: s3 lockspace 43baa487-aad8-4d60-8e48-f7a3b3899cd3:250:/rhev/data-center/mnt/192.168.5.178:_mnt_nfs-share_nfs_nfs-share/43baa487-aad8-4d60-8e48-f7a3b3899cd3/dom_md/ids:0
> 2013-02-25 15:33:58+0000 9553 [21208]: open error -13 /rhev/data-center/mnt/192.168.5.178:_mnt_nfs-share_nfs_nfs-share/43baa487-aad8-4d60-8e48-f7a3b3899cd3/dom_md/ids
> 2013-02-25 15:33:58+0000 9553 [21208]: s3 open_disk /rhev/data-center/mnt/192.168.5.178:_mnt_nfs-share_nfs_nfs-share/43baa487-aad8-4d60-8e48-f7a3b3899cd3/dom_md/ids error -13
> 2013-02-25 15:33:59+0000 9554 [937]: s3 add_lockspace fail result -19
>
> I cannot find a way to fix it.
> Any ideas?
>
> ------------------------------------------------------------------------
> *From: *"Jakub Bittner" <j.bittner at nbu.cz>
> *To: *users at ovirt.org
> *Sent: *Monday, 25 February 2013 12:58:01
> *Subject: *Re: [Users] Fwd: Re: Permission issues
>
> This problem occurs if SELINUX is disabled; changing its state to 
> PERMISSIVE solves the issue.
>
>
> On 25.2.2013 13:35, Jakub Bittner wrote:
>
>     Thank you for sharing your process. I tried it too, but I am getting
>     errors while installing vdsm:
>
>     ERROR: Could not determine running system's policy version.
>     ERROR: Unable to open policy /etc/selinux/targeted/policy/policy.27.
>     /var/tmp/rpm-tmp.yqBAt7: line 1:  1087 Unauthorized access to memory (SIGSEGV)  /usr/bin/vdsm-tool sebool-config
>     ERROR: Could not determine running system's policy version.
>     ERROR: Unable to open policy /etc/selinux/targeted/policy/policy.27.
>     /var/tmp/rpm-tmp.yqBAt7: line 3:  1088 Unauthorized access to memory (SIGSEGV)  /usr/bin/vdsm-tool set-saslpasswd
>       Verifying  : vdsm-4.10.3-8.fc18.x86_64
>
>     kernel: vdsm-tool[1173]: segfault at 0 ip 00007f1dc72905f8 sp 00007fff60814750 error 4 in libapol.so.4.3[7f1dc7269000+34000]
>     kernel: vdsm-tool[1174]: segfault at 0 ip 00007f10de8975f8 sp 00007fff4fa8be90 error 4 in libapol.so.4.3[7f10de870000+34000]
>
>
>     vdsm-tool
>     ERROR: Could not determine running system's policy version.
>     ERROR: Unable to open policy /etc/selinux/targeted/policy/policy.27.
>     Unauthorized access to memory (SIGSEGV)
>
>
>     rpm -qa|grep vdsm
>     vdsm-xmlrpc-4.10.3-8.fc18.noarch
>     vdsm-cli-4.10.3-8.fc18.noarch
>     vdsm-4.10.3-8.fc18.x86_64
>     vdsm-python-4.10.3-8.fc18.x86_64
>
>     rpm -qa|grep systemd
>     systemd-libs-197-1.fc18.2.x86_64
>     systemd-197-1.fc18.2.x86_64
>     systemd-sysv-197-1.fc18.2.x86_64
>
>
>     On 22.2.2013 18:45, suporte at logicworks.pt wrote:
>
>         Well, it's working now.
>         I removed vdsm (vdsm-gluster too, a dependency rpm),
>
>         then installed vdsm first and then vdsm-gluster, and the host
>         is active!!
>
>
>         ------------------------------------------------------------------------
>         *From: *suporte at logicworks.pt
>         *To: *users at ovirt.org
>         *Sent: *Friday, 22 February 2013 16:58:58
>         *Subject: *Re: [Users] Fwd: Re: Permission issues
>
>         I noticed that the vdsm service is not running:
>         systemctl status vdsmd.service
>         vdsmd.service - Virtual Desktop Server Manager
>                   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
>                   Active: failed (Result: exit-code) since Fri 2013-02-22 16:42:24 WET; 1min 45s ago
>                  Process: 1880 ExecStart=/lib/systemd/systemd-vdsmd start (code=exited, status=1/FAILURE)
>
>         Feb 22 16:42:24 node2.acloud.pt python[2011]: DIGEST-MD5 client step 2
>         Feb 22 16:42:24 node2.acloud.pt python[2011]: DIGEST-MD5 client step 2
>         Feb 22 16:42:24 node2.acloud.pt python[2011]: DIGEST-MD5 client step 2
>         Feb 22 16:42:24 node2.acloud.pt python[2011]: DIGEST-MD5 client step 2
>         Feb 22 16:42:24 node2.acloud.pt python[2011]: DIGEST-MD5 client step 2
>         Feb 22 16:42:24 node2.acloud.pt python[2011]: DIGEST-MD5 client step 2
>         Feb 22 16:42:24 node2.acloud.pt systemd-vdsmd[1880]: vdsm: Failed to define network filters on libvirt[FAILED]
>         Feb 22 16:42:24 node2.acloud.pt systemd[1]: vdsmd.service: control process exited, code=exited status=1
>         Feb 22 16:42:24 node2.acloud.pt systemd[1]: Failed to start Virtual Desktop Server Manager.
>         Feb 22 16:42:24 node2.acloud.pt systemd[1]: Unit vdsmd.service entered failed state
>
>          rpm -qa|grep vdsm
>         vdsm-python-4.10.3-8.fc18.x86_64
>         vdsm-gluster-4.10.3-8.fc18.noarch
>         vdsm-4.10.3-8.fc18.x86_64
>         vdsm-xmlrpc-4.10.3-8.fc18.noarch
>         vdsm-cli-4.10.3-8.fc18.noarch
>
>
>
>         ------------------------------------------------------------------------
>         *From: *suporte at logicworks.pt
>         *To: *users at ovirt.org
>         *Sent: *Friday, 22 February 2013 15:35:49
>         *Subject: *Re: [Users] Fwd: Re: Permission issues
>
>         Hi,
>
>         I cannot install an F18 host.
>
>         I installed a minimal F18, then:
>         yum install net-tools
>
>         systemctl stop NetworkManager.service
>
>         systemctl disable NetworkManager.service
>
>         add a gateway to /etc/sysconfig/network
>         remove /usr/lib/udev/rules.d/60-net.rules
>         systemctl enable network.service
>         systemctl start network.service
>         chkconfig network on
>
>         reboot
>
>         rpm -qa|grep systemd
>         systemd-sysv-197-1.fc18.2.x86_64
>         systemd-197-1.fc18.2.x86_64
>         systemd-libs-197-1.fc18.2.x86_64
>
>         SELinux is disabled
>
>         yum localinstall http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm
>
>         Then I added it to the engine via the portal.
>         I got no error during the install, but it never gets out of the "This host is in non responding
>         state".
>
>
>         Did I miss something?
>
>         Thanks
>
>         Jose
>
>
>
>
>          iptables -L
>         Chain INPUT (policy ACCEPT)
>         target     prot opt source               destination
>         ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
>         ACCEPT     all  --  anywhere             anywhere
>         ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:54321
>         ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
>         ACCEPT     udp  --  anywhere             anywhere             udp dpt:snmp
>         ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:16514
>         ACCEPT     tcp  --  anywhere             anywhere             multiport dports xprtld:6166
>         ACCEPT     tcp  --  anywhere             anywhere             multiport dports 49152:49216
>         ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:24007
>         ACCEPT     udp  --  anywhere             anywhere             udp dpt:sunrpc
>         ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:38465
>         ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:38466
>         ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:38467
>         ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:39543
>         ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:55863
>         ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:38468
>         ACCEPT     udp  --  anywhere             anywhere             udp dpt:963
>         ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:965
>         ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ctdb
>         ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:netbios-ssn
>         ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:microsoft-ds
>         ACCEPT     tcp  --  anywhere             anywhere             tcp dpts:24009:24108
>         REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited
>
>         Chain FORWARD (policy ACCEPT)
>         target     prot opt source               destination
>         REJECT     all  --  anywhere             anywhere             PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited
>
>         Chain OUTPUT (policy ACCEPT)
>         target     prot opt source               destination
>
>
>         ------------------------------------------------------------------------
>         *From: *"Jeff Bailey" <bailey at cs.kent.edu>
>         *To: *users at ovirt.org
>         *Sent: *Wednesday, 20 February 2013 21:50:38
>         *Subject: *Re: [Users] Fwd: Re: Permission issues
>
>
>         On 2/20/2013 2:55 PM, suporte at logicworks.pt wrote:
>         > How can I update systemd in the node?
>
>         You would need to install from a newer node iso.  If you don't
>         want to
>         wait, you could install a minimal F18, configure your
>         networking, add
>         the ovirt repo and then just add that host using the engine
>         GUI.  At
>         this stage, you will still have the same problem you currently
>         have.
>         You then need to:
>
>         yum --enablerepo=updates-testing update systemd
>
>         After that, remove /usr/lib/udev/rules.d/60-net.rules <-
>          typing from
>         memory but should be close
>
>         Reboot and everything *should* work :)
>
>         There are other little things like disabling firewalld, tweaking
>         multipath.conf, etc. that I do, but the steps above basically
>         cover it.
>
>         > Thanks
>         >
>         > ----- Original message -----
>         > From: "Kevin Maziere Aubry" <kevin.maziere at alterway.fr>
>         > To: "Jakub Bittner" <j.bittner at nbu.cz>
>         > Cc: "users" <users at ovirt.org>
>         > Sent: Wednesday, 20 February 2013 17:48:29
>         > Subject: Re: [Users] Fwd: Re: Permission issues
>         >
>         >
>         >
>         > Sure.
>         > I updated systemd and removed the udev network conf file.
>         > I also stopped NetworkManager and added a gateway to
>         /etc/sysconfig/network.
>         >
>         > I also added the manager's DNS name to the /etc/hosts file to avoid DNS
>         issues.
>         >
>         > It works ;)
>         > On 20 Feb 2013 18:42, "Jakub Bittner" <j.bittner at nbu.cz>
>         wrote:
>         >
>         >
>         > I wonder if there is any way to create an ovirt-node from a
>         running Fedora 18 netinstalled server. Does anybody know which
>         packages I should install?
>         >
>         > Thanks
>         > _______________________________________________
>         > Users mailing list
>         > Users at ovirt.org
>         > http://lists.ovirt.org/mailman/listinfo/users
>         >
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>

