Import Netapp snapshot volume into ovirt and start VM from volume snapshot
by Igor Zinovik
Hello.
I'm trying to implement the following backup scenario for my VMs:
the storage domain is a volume on a NetApp appliance connected to the oVirt
cluster via NFSv3. The NetApp takes its zero-copy snapshots of this volume.
I want to be able to restore VMs from these snapshots: NetApp can export a
snapshot to oVirt as another NFS volume, after which I want to import a VM,
or attach disks from the snapshot volume to a VM, and start it. As far as I
understand, this is the basic backup/migration scenario where a user moves
VMs from one oVirt DC to another.
Right now I cannot import the snapshot volume, because it contains exactly
the same domain metadata (in the dom_md/ directory) as the original volume,
which I cannot detach since it hosts disks for running VMs. I tried renaming
dom_md/, but the domain import still failed.
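To show concretely what collides: the identifiers below mirror the usual KEY=VALUE layout of a domain's dom_md/metadata file. This is only a sketch with made-up sample values, not oVirt code, but it illustrates why the engine sees the snapshot as the same domain:

```python
def parse_domain_metadata(text):
    """Parse an oVirt dom_md/metadata file (KEY=VALUE lines) into a dict."""
    meta = {}
    for line in text.splitlines():
        line = line.strip()
        if line and "=" in line:
            key, _, value = line.partition("=")
            meta[key] = value
    return meta

def colliding_ids(original, snapshot):
    """Return the identifier keys that are identical in both domains."""
    return sorted(k for k in ("SDUUID", "POOL_UUID")
                  if k in original and original.get(k) == snapshot.get(k))

# A NetApp volume snapshot is a byte-identical copy, so both domains carry
# the same UUIDs (sample values below are invented for illustration):
orig = parse_domain_metadata("SDUUID=11111111-aaaa\nPOOL_UUID=22222222-bbbb\nSDNAME=data")
snap = parse_domain_metadata("SDUUID=11111111-aaaa\nPOOL_UUID=22222222-bbbb\nSDNAME=data")
print(colliding_ids(orig, snap))  # -> ['POOL_UUID', 'SDUUID']
```

Since the engine identifies domains by these UUIDs, renaming dom_md/ alone does not make the copy importable.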
Maybe someone on this list uses the same scenario for VM backup and can give
some advice.
VM User with UserRole missing permissions to activate console and other actions
by Callum Smith
Dear All,
Please see the errors below. I'm seeing this in the engine.log when, as a user, I try to activate a VM console or reboot a VM that I have access to ("UserRole" permission assigned on the VM).
2018-07-18 10:51:33,554+01 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-9) [557ca876] Running command: CreateUserSessionCommand internal: false.
2018-07-18 10:51:33,575+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-9) [557ca876] EVENT_ID: USER_VDC_LOGIN(30), User callum@Biomedical Research Computing connecting from '192.168.1.241' using session 'wiWA25wdaRP1zayiyTSGBJKpvi89LdzgKqeX12BcZhNVhpV2BIA+zkAnT50xOSDglxnhfAi3S2ZiODls8JYFUA==' logged in.
2018-07-18 10:51:34,135+01 ERROR [org.ovirt.engine.core.bll.GetSystemStatisticsQuery] (default task-5) [8d830cdb-fc11-4e68-94e6-7330965c4488] Query execution failed due to insufficient permissions.
2018-07-18 10:51:34,205+01 ERROR [org.ovirt.engine.core.bll.GetPermissionsForObjectQuery] (default task-26) [ba1825f1-60fb-44cd-8b57-ea701cf698c0] Query execution failed due to insufficient permissions.
2018-07-18 10:51:34,242+01 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-26) [] Operation Failed: query execution failed due to insufficient permissions.
2018-07-18 10:51:34,389+01 ERROR [org.ovirt.engine.core.bll.storage.domain.GetStorageDomainListByIdQuery] (default task-17) [02965366-44b0-4370-ab83-4781065e46c2] Query execution failed due to insufficient permissions.
2018-07-18 10:51:34,393+01 ERROR [org.ovirt.engine.core.bll.storage.domain.GetStorageDomainListByIdQuery] (default task-17) [02965366-44b0-4370-ab83-4781065e46c2] Query execution failed due to insufficient permissions.
2018-07-18 10:51:34,394+01 ERROR [org.ovirt.engine.core.bll.storage.domain.GetStorageDomainListByIdQuery] (default task-17) [02965366-44b0-4370-ab83-4781065e46c2] Query execution failed due to insufficient permissions.
2018-07-18 10:51:34,396+01 ERROR [org.ovirt.engine.core.bll.storage.domain.GetStorageDomainListByIdQuery] (default task-17) [02965366-44b0-4370-ab83-4781065e46c2] Query execution failed due to insufficient permissions.
2018-07-18 10:51:59,195+01 WARN [org.ovirt.engine.core.bll.SetVmTicketCommand] (default task-18) [7881a832] User '9386d6f5-f172-4cdb-abca-62492a357888' is trying to take the console of virtual machine 'ddb23e0a-01d5-403c-89ab-37c400d2c938', but the console is already taken by user 'd021fc10-4f7c-11e8-88cb-00163e6a7aff'.
2018-07-18 10:51:59,197+01 INFO [org.ovirt.engine.core.bll.SetVmTicketCommand] (default task-18) [7881a832] No permission found for user '9386d6f5-f172-4cdb-abca-62492a357888' or one of the groups he is member of, when running action 'SetVmTicket', Required permissions are: Action type: 'USER' Action group: 'RECONNECT_TO_VM' Object type: 'VM' Object ID: 'ddb23e0a-01d5-403c-89ab-37c400d2c938'.
2018-07-18 10:51:59,197+01 WARN [org.ovirt.engine.core.bll.SetVmTicketCommand] (default task-18) [7881a832] Validation of action 'SetVmTicket' failed for user callum@Biomedical Research Computing. Reasons: VAR__ACTION__SET,VAR__TYPE__VM_TICKET,USER_CANNOT_FORCE_RECONNECT_TO_VM
2018-07-18 10:51:59,198+01 ERROR [org.ovirt.engine.api.restapi.resource.BackendVmGraphicsConsoleResource] (default task-18) [] Operation Failed: USER_CANNOT_FORCE_RECONNECT_TO_VM
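In case it helps anyone triage a similar failure: the log spells out exactly which capability is missing ("Action group: 'RECONNECT_TO_VM'"). A small generic sketch (plain log parsing, not oVirt code) that pulls the required action group out of such an engine.log line:

```python
import re

# Matches the "Action group: 'SOMETHING'" fragment that the engine emits in
# its "No permission found" messages.
PATTERN = re.compile(r"Action group: '([A-Z_]+)'")

def missing_action_groups(log_text):
    """Return every action group named as required in the given log text."""
    return PATTERN.findall(log_text)

line = ("No permission found for user '9386d6f5-f172-4cdb-abca-62492a357888', "
        "Required permissions are: Action type: 'USER' "
        "Action group: 'RECONNECT_TO_VM' Object type: 'VM'")
print(missing_action_groups(line))  # -> ['RECONNECT_TO_VM']
```

Here that points at RECONNECT_TO_VM: the plain UserRole apparently does not let a user take over a console that another user already holds.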
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum(a)well.ox.ac.uk
oVirt 4.2.5.1-1.el7 JSON-RPC statistics error
by Maton, Brett
I've got one physical host in a 3-host CentOS 7.5 cluster that reports the
following error several times a day:
VDSM node3.example.com command Get Host Statistics failed: Internal
JSON-RPC error: {'reason': '<Fault 1: "<type
\'exceptions.AttributeError\'>:\'NoneType\' object has no attribute
\'statistics\'">'}
Any ideas what the problem might be?
Fwd: Re: up and running with ovirt 4.2 and gluster
by Jayme
---------- Forwarded message ---------
From: Jayme <jaymef(a)gmail.com>
Date: Sun, Jul 29, 2018, 10:09 PM
Subject: Re: [ovirt-users] Re: up and running with ovirt 4.2 and gluster
To: Mike <xenithorb(a)fedoraproject.org>
On this same subject, one thing I'm currently hung up on re: HCI setup is
the next step in the cockpit config for glusterfs. I have three nodes set
up with two drives each in JBOD. On the brick config screen you have to
specify the device location and size of each of the volumes you configured
on the previous step, i.e. engine and data. It seems that I can only specify
one of my drives for the data volume. So I'd have to pick /dev/sda for
engine and /dev/sdb for data, but I have 2x2TB drives; what do I do with
all the other space on the device I put the engine volume on? If I choose
to use sdb for data at 1.8TB, does that mean the data storage will only use
one single drive in each of my three nodes? I want to make use of all six
drives to increase I/O performance.
I don't understand how the JBOD config of gluster is supposed to work. What
if I had even more drives in each node, say 4 separate drives: how would I
make use of all of them when I can only specify one device for each volume?
I'm thinking about creating a data and data2 volume and specifying sda for
data and sdb for data2. Am I completely misunderstanding the setup here?
It's very confusing and there is next to no documentation on HCI available.
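For what it's worth, the arithmetic of using every disk as its own brick works out like this. A back-of-the-envelope sketch (not oVirt or gluster code; it just assumes a replica-3 distributed-replicate layout, which is what you get when the brick count is a multiple of the replica count):

```python
def usable_capacity_tb(nodes, disks_per_node, disk_tb, replica=3):
    """Usable capacity of a replica-N gluster volume with one brick per disk,
    assuming equally sized disks and a distributed-replicate layout."""
    total_bricks = nodes * disks_per_node
    assert total_bricks % replica == 0, "brick count must be a multiple of the replica count"
    subvolumes = total_bricks // replica  # the "distribute" count
    # Each replica subvolume stores one full copy on each of its bricks,
    # so usable space is one brick's worth per subvolume.
    return subvolumes * disk_tb

# Three nodes, two 1.8TB bricks each -> two replica-3 subvolumes,
# so files are spread over both disks on every node:
print(usable_capacity_tb(3, 2, 1.8))  # -> 3.6
```

So with one brick on sda and one on sdb in each node, a single data volume can span all six drives; the question is whether the cockpit wizard lets you declare two bricks per node, or whether you have to add the second set of bricks to the volume afterwards.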
On Sun, Jul 29, 2018, 6:34 PM Mike, <xenithorb(a)fedoraproject.org> wrote:
> > Why would they be set up by default via the cockpit if they are no longer
> > needed?
> >
> > On Sat, Jul 28, 2018, 1:13 PM femi adegoke, <ovirt(a)fateknollogee.com>
> wrote:
>
> I agree, that step alone is very confusing
>
> - vmstore is/was "export" and is no longer needed
> - iso domains are no longer needed
> - The only domains you need in the HCI setup are :/data and :/engine
Hosted engine unavailable after storage rebuild
by sebastian.w@gmx.ch
Hello all,
My hosted engine does not start up anymore, because the node can't reach the storage.
The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
What happened is that my NAS crashed and I had to rebuild it from the backup. I'm quite sure that I named all the NFS shares the same way as they were before the issue.
How can I figure out what the issue with the storage is?
The logs weren't really helpful for me (but maybe it's me and not the logs to blame).
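One sanity check worth doing is comparing the storage path the agent expects with what the rebuilt NAS actually exports. A hypothetical sketch of that comparison; the "storage" key mirrors the key=value style of /etc/ovirt-hosted-engine/hosted-engine.conf, but treat the exact key name and the paths below as assumptions, not a verified layout:

```python
def parse_conf(text):
    """Parse a simple key=value config file into a dict, skipping comments."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            conf[key] = value
    return conf

# Sample config contents (invented hostname/paths for illustration):
conf = parse_conf("storage=nas.example.com:/exports/hosted_engine")

# Export list as you would collect it from the NAS, e.g. via `showmount -e`:
exports = {"/exports/hosted_engine", "/exports/data"}

host, path = conf["storage"].split(":", 1)
print(path in exports)  # -> True when the rebuilt NAS still exports that path
```

If the configured path is missing from the export list (or permissions/squash options changed during the rebuild), the agent would fail exactly as described even though the share "looks" the same.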
Regards and thanks
Sebi
Changing Network Interfaces on host causes bond to go offline
by oBSD Nub
Every time I try to change assigned logical networks (add IP addresses,
assign a logical network, etc.) the bond0 interface winds up going down. I
have to run an ifup bond0 from the console, and the engine reconnects.
After a lot of digging, I think this is the offending command, but I don't
know how to read it or what the next troubleshooting steps are. Anyone want
to point me in the right direction?
Thanks!
2018-07-28 18:07:36,661-0500 WARN (vdsm.Scheduler) [Executor] Worker
blocked: <Worker name=jsonrpc/6 running <Task <JsonRpcTask {'params':
{u'connectionParams': [{u'netIfaceName': u'bond0.3001', u'id':
u'0808bc71-b5f7-4886-9dc8-e21b0d2abf14', u'connection':
u'192.168.200.81', u'iqn':
u'iqn.2002-10.com.infortrend:raid.uid577981.001', u'user': u'',
u'tpgt': u'1', u'ifaceName': u'bond0.3001', u'password': '********',
u'port': u'3260'}, {u'netIfaceName': u'bond0.3002', u'id':
u'77c7e9dd-cd46-48be-ade1-06645772c405', u'connection':
u'192.168.200.81', u'iqn':
u'iqn.2002-10.com.infortrend:raid.uid577981.001', u'user': u'',
u'tpgt': u'1', u'ifaceName': u'bond0.3002', u'password': '********',
u'port': u'3260'}], u'storagepoolID':
u'2cde92be-7b43-11e8-b08d-00505690367f', u'domainType': 3}, 'jsonrpc':
'2.0', 'method': u'StoragePool.connectStorageServer', 'id':
u'9a414389-2b42-4a20-b59c-c3bc9d19c7b4'} at 0x7f29980cdcd0>
timeout=60, duration=60 at 0x7f29980cdfd0> task#=4 at 0x7f299837af10>,
traceback:
File: "/usr/lib64/python2.7/threading.py", line 785, in __bootstrap
  self.__bootstrap_inner()
File: "/usr/lib64/python2.7/threading.py", line 812, in __bootstrap_inner
  self.run()
File: "/usr/lib64/python2.7/threading.py", line 765, in run
  self.__target(*self.__args, **self.__kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line 194, in run
  ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 301, in _run
  self._execute_task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 315, in _execute_task
  task()
File: "/usr/lib/python2.7/site-packages/vdsm/executor.py", line 391, in __call__
  self._callable()
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 523, in __call__
  self._handler(self._ctx, self._req)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 566, in _serveRequest
  response = self._handle_request(req, ctx)
File: "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in _handle_request
  res = method(**params)
File: "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 197, in _dynamicMethod
  result = fn(*methodArgs)
File: "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1070, in connectStorageServer
  connectionParams)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 73, in wrapper
  result = ctask.prepare(func, *args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper
  return m(self, *a, **kw)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1179, in prepare
  result = self._run(func, *args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
  return fn(*args, **kargs)
File: "<string>", line 2, in connectStorageServer
File: "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
  ret = func(*args, **kwargs)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2395, in connectStorageServer
  conObj.connect()
File: "/usr/lib/python2.7/site-packages/vdsm/storage/storageServer.py", line 487, in connect
  iscsi.addIscsiNode(self._iface, self._target, self._cred)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/iscsi.py", line 217, in addIscsiNode
  iscsiadm.node_login(iface.name, target.address, target.iqn)
File: "/usr/lib/python2.7/site-packages/vdsm/storage/iscsiadm.py", line 327, in node_login
  portal, "-l"])
File: "/usr/lib/python2.7/site-packages/vdsm/storage/iscsiadm.py", line 122, in _runCmd
  return misc.execCmd(cmd, printable=printCmd, sudo=True, sync=sync)
File: "/usr/lib/python2.7/site-packages/vdsm/common/commands.py", line 80, in execCmd
  (out, err) = p.communicate(data)
File: "/usr/lib64/python2.7/site-packages/subprocess32.py", line 924, in communicate
  stdout, stderr = self._communicate(input, endtime, timeout)
File: "/usr/lib64/python2.7/site-packages/subprocess32.py", line 1706, in _communicate
  orig_timeout)
File: "/usr/lib64/python2.7/site-packages/subprocess32.py", line 1779, in _communicate_with_poll
  ready = poller.poll(self._remaining_time(endtime)) (executor:363)
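Reading that entry: the blocked worker is a StoragePool.connectStorageServer call stuck in an iscsiadm login over the bond's VLAN sub-interfaces (bond0.3001/bond0.3002), which is consistent with the bond having just gone down under it. A small sketch (generic log parsing, not vdsm code) that pulls the stuck RPC method and the interfaces out of such a "Worker blocked" line:

```python
import re

def summarize_blocked_worker(entry):
    """Extract the stuck JSON-RPC method and the iSCSI network interfaces
    from a vdsm 'Worker blocked' log entry."""
    method = re.search(r"'method': u?'([\w.]+)'", entry)
    ifaces = re.findall(r"u?'netIfaceName': u?'([\w.]+)'", entry)
    return (method.group(1) if method else None, sorted(set(ifaces)))

# Abbreviated sample in the same shape as the pasted log entry:
entry = ("{'params': {u'connectionParams': [{u'netIfaceName': u'bond0.3001'}, "
         "{u'netIfaceName': u'bond0.3002'}], u'domainType': 3}, "
         "'jsonrpc': '2.0', 'method': u'StoragePool.connectStorageServer'}")
print(summarize_blocked_worker(entry))
# -> ('StoragePool.connectStorageServer', ['bond0.3001', 'bond0.3002'])
```

If the iSCSI portal is only reachable via those VLANs on bond0, the login blocking for the full 60-second worker timeout whenever the bond flaps would explain the symptom.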