BioCreative (Critical Assessment of Information Extraction in Biology) data sets are available from Resources/Corpora and require registration.

MetaServer

Annotation Server scripts [2009-05-17]

Synopsis

The online environment consists of two distinct parts: the server itself (available as Java, Perl, and Python variants in the java, perl, and python directories) and a test client (this directory) that can be called with one or more full-text files (either the XML or the UTF-8 files). The text is sent to the Annotation Server (i.e., the server [script] has to be running when you start the test script), and after the Annotation Server responds, the test client asserts the integrity of the received data. Additionally, the result data for BC II.5 can be written to output files if these are given as optional arguments when starting the test script.

Setup

For a test run, simply start one of the server implementations (Perl, Python: "bc-annotation-server.*"; for Java, see the README file in the java directory) from the command line without any additional options. This starts a constantly running Annotation Server on localhost, port 8000. Then run the Python 2.5/2.6 script "bcms-test-client.py" from the directory it is in (or make sure bcms_client_lib is on your PYTHONPATH) and give it one of the UTF-8 files from the training set as an argument. The server will respond with a canned result set, and in default mode you should see no messages at all (i.e., the test script exiting silently is a good sign). If you get this ("non-") result, the scripts are working.
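For example, a test run with the Python implementation might look like this (the directory and file name are only illustrative; use any UTF-8 file from the training set, and keep the server running in its own terminal):

  python bc-annotation-server.py
  python bcms-test-client.py some/path/10.1016_j.febslet.2008.01.064.utf8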

If you wish, you can write one or more BioCreative II.5 relevant result types (ACT/interaction, INT/normalizations, IPT/pairs) to output files by giving the output files as optional arguments (see the -h or --help output of the test script). This will produce correctly formatted result files that the BC II.5 evaluation script can read from the data the server sent; e.g., for INT/normalization results the file would contain something like this (for the canned results the sample Annotation Servers respond with):

10.1016/j.febslet.2008.01.064 P1234 1 1.000000
10.1016/j.febslet.2008.01.064 P1235 2 0.500000
10.1016/j.febslet.2008.01.064 P1236 3 0.000100

Next, you should edit the Annotation Server script for the language of your choice (Java, Perl, and Python are available) to call your pipeline with the UTF-8 or XML full-text and return the results in a structure that the BCMS platform understands (see Result Structure below). There are three principal options to integrate your pipeline with the Annotation Server:

  1. Replace the API functions/classes in the bc-annotation-server.* file with your own versions (for Java, the files Fulltext.java and Medline.java).
  2. Insert calls to your pipeline's API in those functions/methods (see the sketch after this list).
  3. Insert command line calls for your pipeline into the functions/methods in bc-annotation-server.* and parse the result into the script (least recommended).
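As an illustration of option 2, the Python server's full-text handler might be adapted along these lines. This is only a minimal sketch: the handler name, the my_pipeline module, and its attributes are hypothetical and do not match the actual code in bc-annotation-server.py, which you should consult for the real hooks and the exact result structure.

# Hypothetical sketch for option 2 -- the real method lives in
# bc-annotation-server.py and its names will differ.
import my_pipeline  # your own extraction pipeline (assumed to exist)

def get_annotation(fulltext):
    """Called by the RPC layer with the article full-text.

    Replaces the canned response with a call into your pipeline and maps
    its output onto the Result Structure described below."""
    output = my_pipeline.annotate(fulltext)  # hypothetical pipeline API

    normalizations = []
    for rank, (accession, confidence) in enumerate(output.normalizations):
        normalizations.append({'confidence': confidence,
                               'rank': rank + 1,
                               'accession': accession})

    return {
        'interaction': [{'confidence': output.act_confidence,
                         'has_interaction': output.has_interaction}],
        'normalizations': normalizations,
    }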

The choice of method is yours; use whatever is easiest for you. However, keep in mind that if you have to do a lot of data preprocessing (loading gene dictionaries, for example) before your pipeline can start, it is recommended to load these data when you start the Annotation Server, to reduce the time your server needs to respond to requests. Also, note that the scripts are all written so you can choose either XML or UTF-8 input. All scripts print informative help when called with --help. To process the XML input files instead of UTF-8, call the server scripts with the option -x. See the help output for additional options.
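For instance, a one-time load at server startup (rather than inside the per-request handler) could be sketched like this in Python; the file name and format are purely illustrative:

# Illustrative only: load expensive resources once, at server startup.
GENE_DICTIONARY = None

def load_resources(path='gene_dictionary.tsv'):  # hypothetical file
    global GENE_DICTIONARY
    with open(path) as handle:
        GENE_DICTIONARY = dict(line.rstrip('\n').split('\t', 1)
                               for line in handle)

# Call load_resources() before starting the request loop, so every
# request handled by the server can reuse GENE_DICTIONARY directly.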

Evaluation and Deployment

To evaluate your server's results with respect to the BioCreative II.5 tasks, you can use the created output files (if they were given as optional arguments when starting the test script) as input for the BioCreative evaluation script. This way, you can measure how well your server performs - in terms of the evaluation function - and how fast - by timing your server/pipeline on one or all files (using a wildcard in the filename argument, e.g. "bcms-test-client.py some/path/10.1016_j.febslet.2008.*.utf8"). If you can run the complete training data set through your Annotation Server in less than four days using the test client, without any warning, error, or critical messages, your server is ready for online deployment to the BCMS. Note that you are free to deploy without this testing via the client script, but it is practically guaranteed that you will have a painful time trying to debug your server online - you have been warned.

If you do not yet have a team account on the BCMS homepage, contact Florian Leitner or Martin Krallinger at the CNIO with your Annotation Server's URL (and team ID, if you want to take part in BC II.5). You will then be supplied with the necessary instructions to access your team's server management page.

Input Structure

The Annotation Server's full-text RPC method (Fulltext.getAnnotation) should accept either the XML (with the BCMS test script) or the UTF-8 (from the BCMS or with the script) full-text as input. In the case of XML, the input is simply a string (which you can encode to UTF-8) containing the raw XML exactly as found in the XML files of the training set. In the case of UTF-8 full-text, the input is a list of content blocks (i.e., Hash[Map]s/Dictionaries) as follows (a short usage sketch follows the remarks below):

[ /* UTF-8 full-text list of content blocks */
  { /* sample content block */
    'section':   "section-name",   /* required */
    'content':   "content text",   /* required */
    'qualifier': "qualifier text", /* optional */
    'number':    1                 /* optional */
  }, ...
]

With the following remarks:

  • section String is always part of a content block, section-name being one of: 'article-class', 'title', 'abstract', 'keyword', 'definition', 'body', 'figure', 'table', 'textbox', 'appendix', or 'glossary' (i.e., just as in the UTF-8 files);
  • content String is always part of the dictionary/hash, containing the actual text content, usually one entire paragraph from the full-text;
  • qualifier String may be part of the content block hash, identifying (i.e., qualifying) subsections related to that section, e.g., the heading of a paragraph, or the acronym of the keyword/definition;
  • number Integer may be part of the content block hash, to identify the exact content block if multiple content blocks with the same qualifier exist.
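As a minimal illustration of consuming this input in Python (the function name and the section filter are only an example, not part of the provided scripts):

def blocks_to_text(blocks):
    """Join the 'content' of selected content blocks into one plain-text string.

    `blocks` is the list of dictionaries described above; which sections
    your pipeline actually needs is up to you."""
    wanted = ('title', 'abstract', 'body')
    parts = [block['content'] for block in blocks if block['section'] in wanted]
    return '\n\n'.join(parts)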

Result Structure

As with the BCMS prototype and discussed earlier, the results should be returned as lists of structs encapsulated in one struct having keys for each result type you are returning. For those familiar with the prototype, the only changes are:

  1. 'rank' is a new, required value for all results except interaction,
  2. some data key names have changed ('dbname' to 'database', 'dbid' to 'accession', 'taxid' to 'accession'), and
  3. the interaction annotation type is now a list with just one allowed struct (it was a plain list consisting of a bool and a double before), to make sure the data structure is the same for all annotation types (as this is much easier to handle).

A sample of such a result structure can be found in each Annotation Server implementation here and is used as the "canned" response. The base data structure is a dictionary of lists, where each list contains the individual results as dictionaries:

{ /* result struct */
  'annotation_type_key': [
    { /* data struct 1 */
      'data_key1': <value1>,
      'data_key2': <value2>,
      ...
    },
    { /* data struct 2 */
      'data_key1': <value1>,
      'data_key2': <value2>,
      ...
    },
    ...
  ],
  ...
}

The possible data keys are listed here (keys with "+" are required, keys with "-" are optional and might have a default value). All data structs must/can contain the following keys:

  + confidence [double, range ]0..1]]
  + rank [int, range [1..65536], except interaction]
  - evidence [string, freeform text to explain the annotation]
  - version [string, sortable version ID of annotation server]

You do not need to report evidence and version for the challenge; they are planned for later production usage of the BCMS system. For each annotation type (marked with * below), the following specific data keys are available:

* interaction
  + has_interaction [bool]
  /* NB: rank is invalid for interactions (only one result allowed) */
* normalizations
  + accession [string, Identifier]
  - database [string="UniProt", Database]
  - is_interactor [bool=False] /* NB: automatically True for BC II.5! */
* pairs
  + accession_a [string, Identifier]
  + accession_b [string, Identifier]
  - database_a [string="UniProt", Database]
  - database_b [string="UniProt", Database]
  - is_auto [bool=False] /* NB: not evaluated in BC II.5! */

I.e., the ACT result for an article is reported via 'interaction', the INT results via 'normalizations', and the IPT results via 'pairs'. Additionally, the BCMS understands the following other data types ('mappings' is new); you do not need to use or understand these, they are listed here only for the sake of completeness, but if you already know them you can add them to your Annotation Server:

* mappings
  + mention [string]
  + offset [integer]
  + section [string, 'body' or 'title' for MEDLINE articles]
  - qualifier [string=""; used for Fulltext articles]
  - number [int=NULL; used for Fulltext articles]
  + accession [string, Identifier]
  - database [string="UniProt", Database]
  - is_interactor [bool=False]
* mentions
  + mention [string]
  + offset [int]
  + section [string, 'body' or 'title' for MEDLINE articles]
  - qualifier [string=""; used for Fulltext articles]
  - number [int=NULL; used for Fulltext articles]
* taxons
  + accession [string, Identifier]

In summary: for every result you must return a 'confidence' score; for all IPT and INT results also a unique 'rank'; for ACT a boolean value 'has_interaction'; for INT the 'accession'; and for IPT both 'accession_a' and 'accession_b'. If the result structure is correct, it should produce valid output when used with the bcms-test-client.py script, which in turn can be read by the [bc-]evaluate.py script.
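Put together, a complete Python result structure consistent with the canned INT sample above might look like this (the accessions and confidence values are the illustrative ones from the sample output, the ACT and IPT entries are made up to match; your real results will differ, and omitted optional keys such as 'database' fall back to their defaults):

result = {
    'interaction': [
        {'confidence': 1.0, 'has_interaction': True}
    ],
    'normalizations': [
        {'confidence': 1.0,    'rank': 1, 'accession': 'P1234'},
        {'confidence': 0.5,    'rank': 2, 'accession': 'P1235'},
        {'confidence': 0.0001, 'rank': 3, 'accession': 'P1236'}
    ],
    'pairs': [
        {'confidence': 0.5, 'rank': 1,
         'accession_a': 'P1234', 'accession_b': 'P1235'}
    ]
}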

Downloads