Tokens

The GRID_LRT.token module is responsible for interactions with CouchDB using the PiCaS token framework. It contains a TokenHandler object which manages a single _design document on CouchDB, intended for a set of jobs that are logically linked together; in the LOFAR Surveys case, this holds the jobs of a single Observation. Additionally, a TokenSet object can create tokens in batches, upload attachments to them in bulk, and change token fields in bulk. This module is used in combination with the srmlist class to automatically create sets of jobs with N files each, as sketched below.
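Grouping a list of srm links into sets of N files is, at its core, a dictionary operation, and the resulting dictionary can be handed to TokenSet.create_dict_tokens (documented below). A minimal sketch with illustrative links, standing in for what the srmlist class automates:

>>> # Group 25 illustrative srm links into sets of 10 files each
>>> srms = ['srm://example/path/SB%03d.MS' % i for i in range(25)]
>>> groups = {str(i // 10): srms[i:i + 10] for i in range(0, len(srms), 10)}
>>> # 'groups' maps a group id to a list of up to 10 links, ready for
>>> # TokenSet.create_dict_tokens(iterable=groups, ...)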

token.py

Location: GRID_LRT/token.py

Imports:

>>> from GRID_LRT.token import Token, TokenBuilder #Abstract classes
>>> from GRID_LRT.token import caToken, TokenJsonBuilder, TokenList, TokenView
>>> from GRID_LRT.token import TokenSet

Usage:

>>> # Example: creating a token and uploading it to CouchDB
>>> from GRID_LRT.auth.get_picas_credentials import picas_cred
>>> pc = picas_cred() # Gets the PiCaS credentials
>>>
>>> from cloudant.client import CouchDB
>>> from GRID_LRT.token import caToken #Token that implements the cloudant interface
>>> from GRID_LRT.token import TokenList
>>> client = CouchDB(pc.user, pc.password, url='https://picas-lofar.grid.surfsara.nl:6984', connect=True)
>>> db = client[pc.database]
>>> 'token_id' in db # Checks whether the database contains the token
>>> db['token_id'] # Pulls the token document
>>> tl = TokenList(database=db, token_type='token_type') # Makes an empty token list
>>> tl.add_view(TokenView('temp', "doc.type == \"{}\" ".format(tl.token_type))) # Adds a view that selects tokens of this type
>>> for token in tl.list_view_tokens('temp'): # Appends every token in the view to the local list (ids only)
...     tl.append(caToken(token_type=tl.token_type, token_id=token['_id'], database=db))
>>> tl.fetch() # Fetches the actual data for every token in the list
>>> t1 = caToken(database=db, token_type='restructure_test', token_id='token_id') # Makes a token (locally)
>>> t1.build(TokenJsonBuilder('/path/to/token/data.json'))
>>> t1.save() #upload to the database
>>> t1.add_attachment(attachment_name='attachment_name_in_db.txt', filename='/path/to/attachment/file') # Adds an attachment to the token

Tokens

class GRID_LRT.token.Token(token_type, token_id=None, **kwargs)[source]
__init__(token_type, token_id=None, **kwargs)[source]

x.__init__(…) initializes x; see help(type(x)) for signature

add_attachment()[source]
build(token_builder)[source]
reset()[source]
synchronize(db, prefer_local=False, upload=False)[source]

Synchronizes the token with the database.
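A usage sketch continuing the example above (argument values illustrative):

>>> # Push local edits on t1 to the database, preferring the local copy
>>> # when it disagrees with the database copy
>>> t1.synchronize(db, prefer_local=True, upload=True)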

TokenSet

class GRID_LRT.token.TokenSet(th=None, tok_config=None)[source]

The TokenSet object can automatically create a group of tokens from a yaml configuration file and a dictionary. It keeps track of the set of tokens internally and allows users to batch-attach files to the entire TokenSet or to alter fields of all tokens in the set.

__init__(th=None, tok_config=None)[source]

The TokenSet object is created with a TokenHandler object, which is responsible for the interface to the CouchDB views and documents. This also ensures that only one job type is contained in a TokenSet.

Args:
param th: The TokenHandler associated with the job tokens
type th: GRID_LRT.token.TokenHandler
param tok_config: Location of the token yaml file on the host filesystem
type tok_config: str
raises: AttributeError, KeyError
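A minimal construction sketch, assuming a TokenHandler th for the job type already exists and that tokens.yml is a token configuration file on the local disk (both illustrative):

>>> from GRID_LRT.token import TokenSet
>>> ts = TokenSet(th=th, tok_config='/path/to/tokens.yml')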
add_attach_to_list(attachment, tok_list=None, name=None)[source]

Adds an attachment to all the tokens in the TokenSet, or to another list of tokens if explicitly specified.
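For example (paths illustrative, and assuming attachment takes a path on the local filesystem):

>>> # Attach the same file to every token in the set; 'name' is the
>>> # attachment name the file will carry in CouchDB
>>> ts.add_attach_to_list(attachment='/path/to/srm.txt', name='srm.txt')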

add_keys_to_list(key, val, tok_list=None)[source]
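Going by the signature and the class description above, a sketch that sets one field on every token in the set (field name and value illustrative):

>>> # Assumed behavior: set the 'OBSID' field to 'L123456' on all tokens
>>> ts.add_keys_to_list('OBSID', 'L123456')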
create_dict_tokens(iterable={}, id_prefix='SB', id_append='L000000', key_name='STARTSB', file_upload=None)[source]

A function that accepts a dictionary and creates one token per entry (key) of the dictionary. The values of the dict are lists of strings that are attached to each token if the file_upload argument is given; see the sketch after the argument list below.

Args:
param iterable: The dictionary which determines how many tokens will be created; the values are attached to each token
type iterable: dict
param id_append: Option to append the OBSID to each token
type id_append: str
param key_name: The token field which will hold the value of the dictionary's keys for each token
type key_name: str
param file_upload: The name of the file to upload to the tokens (typically srm.txt)
type file_upload: str
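A usage sketch based on the arguments above (dictionary contents illustrative): one token is created per key, each token's STARTSB field holds its key, and the value list is uploaded under the name srm.txt.

>>> groups = {'000': ['srm://example/SB000.MS'], '001': ['srm://example/SB001.MS']}
>>> ts.create_dict_tokens(iterable=groups, id_append='L123456', key_name='STARTSB', file_upload='srm.txt')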

tokens
update_local_tokens()[source]
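A sketch of the assumed refresh pattern (behavior inferred from the names):

>>> ts.update_local_tokens() # Assumed: re-reads the set's tokens from the database
>>> ts.tokens # Assumed: the locally cached tokens of the set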