Interfacing with

The API batch decomposition feature allows a user to send a set of spectra to the server to be decomposed onto a particular motifset. The spectra are passed as the arguments of a POST request to the following URL:

The argument should be a dictionary with the following two <key, value> pairs:

  • Key: 'motifset', Value: the name of the motifset to decompose onto, e.g. 'massbank_motifset'
  • Key: 'spectra', Value: the spectral information, serialized into a JSON string (e.g. using json.dumps)

The spectra value should be a list, with one item per spectrum. Each item should be a tuple with three elements:

(string: doc_name, float: parentmass, list: peaks)

peaks is a list of tuples, each representing a peak in the form

(float: mz, float: intensity)
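For concreteness, json.dumps turns the tuples into JSON arrays, so serializing a one-spectrum list (using the example peak below) produces a string such as:

```python
import json

# a single spectrum: (doc_name, parentmass, peaks), with one peak
spectra = [('spec_name', 188.0818, [(53.0384, 331117.7)])]

print(json.dumps(spectra))
# -> [["spec_name", 188.0818, [[53.0384, 331117.7]]]]
```

This is the string that should be sent as the 'spectra' value.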

Python Example

For example, in Python, using the requests package:

        import requests
        import json

        # one spectrum: (doc_name, parentmass, peaks); the peak list is
        # truncated in this document -- add the remaining (mz, intensity) pairs
        spectrum = ('spec_name', 188.0818, [(53.0384, 331117.7)])

        spectra = [spectrum]  # or add more spectra to the list

        args = {'spectra': json.dumps(spectra), 'motifset': 'massbank_motifset'}

        url = ''  # the batch decomposition endpoint (elided here)

        r = requests.post(url, data=args)

Because decomposition is computationally intensive, it is run as a scheduled task, so the POST request doesn't return the results immediately. Instead it returns a summary, including the ID of the results entry. To get the results (in JSON), do the following:

        result_id = r.json()['result_id']
        url2 = '{}/'.format(result_id)  # results URL; base elided here
        r2 = requests.get(url2)
        print(r2.json())

If r2.json() contains a 'status' field, the job is still running or waiting. Otherwise, you get back a dictionary with the document names as keys and lists as values. Each list element has the form:

(string: globalmotifname, string: originalmotifname, float: theta, float: overlap_score)
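Putting the polling and parsing together, a minimal sketch might look like the following. Note that wait_for_results and parse_results are helper names invented here, the results URL is elided in this document and must be filled in, and the five-second polling interval is an arbitrary choice:

```python
import time
import requests

def parse_results(payload):
    """Map each document name to a list of
    (globalmotifname, originalmotifname, theta, overlap_score) tuples."""
    return {doc: [tuple(entry) for entry in entries]
            for doc, entries in payload.items()}

def wait_for_results(url, interval=5, timeout=300):
    """Poll the results URL until the 'status' field disappears,
    then return the parsed results."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        payload = requests.get(url).json()
        if 'status' not in payload:  # job finished
            return parse_results(payload)
        time.sleep(interval)
    raise RuntimeError('decomposition did not finish within the timeout')
```

Polling with a sleep keeps load on the server low while still returning shortly after the scheduled task completes.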