Internal LT Service API specification

Note

This specification details the API that LT tool containers need to implement in order to be runnable as functional services within the ELG infrastructure. This is distinct from (though closely related to) the public-facing service execution API that outside users use to send requests to ELG services - the public APIs are documented separately.

Where possible, this document uses the key words MUST, SHOULD and MAY as defined in RFC 2119 to indicate requirement levels.

Basic API pattern

In order to integrate an LT tool as a functional service in the ELG infrastructure, the tool MUST offer at least one endpoint that can accept HTTP (1.1 or 2 - preferably cleartext HTTP/2) POST requests conforming to the appropriate request schema, and return an appropriate response as application/json. This specification also details a response pattern based on Server-Sent Events (SSE, a protocol defined as part of HTML5) that long-running tools can use to report progress information - support for this mechanism is RECOMMENDED for all tools but not required. Tools are encouraged to use the standard response formats as far as possible, but if a service needs to return other types of data not easily representable within the JSON message structures (e.g. images) it may use the temporary storage helper service described below.

Endpoints may be sent multiple parallel requests by the ELG platform, and there is no requirement that a service must respond to requests in any particular order - certain services may, for example, be more efficient if they can batch up several requests into one back end process (e.g. for GPU computing) and send the responses in one go. If a tool has limits on the number of concurrent requests a single instance can handle then this information should be supplied to the ELG platform administrators as part of the on-boarding process, so the platform can use this data to decide how to scale the pod replicas to match the level of load on the service at any given time.

Where a tool already has its own native HTTP API it may be more convenient for integrators to provide a separate service adapter image which can handle requests matching the ELG specification and transform them into calls on the tool’s native API. The tool container and the adapter container will run within the same “pod” in Kubernetes and can access each other as localhost.

Utility datatypes

The following JSON structures are used in several places in this specification; they are documented here once to avoid duplication.

Status message

Since the ELG is supposed to be a multilingual platform, error and other status messages are handled using an approach modelled on the i18n mechanism from the Spring Framework - the message is represented by a code, along with a template text with numbered placeholders that are zero-based indices into an array of params replacement values.

{
  "code":"elg.example.no.translation",
  "text":"Default text to use for the {0} if no {1} can be found",
  "params":["message", "translation"],
  "detail":{
    // arbitrary further details that don't need translation,
    // such as a stack trace, service-native error code, etc.
  }
}

ELG provides a common library of fully-translated message codes for service developers to use, as detailed below - developers are free to use their own codes in their own namespaces (i.e. not prefixed elg.) on the understanding that it is their responsibility to provide translations. A mechanism for developers to contribute their translated messages to the platform is under development but not yet generally available.
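
As a concrete illustration, the sketch below shows how a service written in Python might assemble such a status message and embed it in the failure structure described under “Failure message” later in this document (the helper function name is hypothetical):

def status_message(code, text, params=None, detail=None):
    """Build an ELG i18n status message: code, template text and placeholder values."""
    message = {"code": code, "text": text, "params": params or []}
    if detail is not None:
        message["detail"] = detail
    return message

# e.g. reporting an unsupported request type with one of the standard codes
failure = {
    "failure": {
        "errors": [
            status_message(
                "elg.request.type.unsupported",
                "Request type {0} not supported by this service",
                ["audio"],
            )
        ]
    }
}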

Annotations

Many of the request and response types need to represent annotations - pieces of metadata about specific parts of a text or audio data stream, rather than about the stream as a whole. For example, a named entity recogniser might want to state that characters 10 to 15 in the request text represent the name of a female person, or a speech recogniser might want to state that characters 75 to 80 in the transcription represent a word, and map to the time period 1.37 to 1.6 seconds in the source audio. Such structures are represented in a consistent way across all the ELG API messages:

"annotations":{
    "<annotation type>":[
      {
        "start":number,
        "end":number,
        "sourceStart":number,
        "sourceEnd":number,
        "features":{ /* arbitrary JSON */ }
      }
    ]
  }

The <annotation type> is an arbitrary string representing the type of annotation, e.g. “Person” or “Word” in the examples above. For each type of annotation, the matching value is a JSON array of objects, each object representing one annotation of that type. Note that when generating these structures in your API responses the value here MUST be an array even if there is only one annotation of the relevant type - some JSON generation libraries “unwrap” singleton arrays by default. The properties of each annotation object are:

start and end

The position of the annotation in the main data stream to which it refers - this is typically the content directly associated with this annotations structure (for example the text of a translation). When the stream is text these would be Unicode character offsets from the start of the text, for audio they would typically be time points in seconds, etc. Subtracting the start value from the end value should give the length of the annotated area - there are several equivalent ways to conceptualise this, for example with text you could consider the characters as numbered from zero with the start offset inclusive and the end offset exclusive, or you could consider the offsets to represent the positions between characters (so 0 is before the first character, 1 is between the first and second, etc.).

sourceStart and sourceEnd

Where these annotations are relative to a data stream that has been generated from another “source” data stream (e.g. a translation of text in another language, or a transcription of audio), these properties can be optionally used to link to the positions in the source stream (e.g. to align words in the translation with words in the original).

features

Arbitrary JSON representing other properties of the annotation, e.g. a “Person” annotation might have a feature for “gender”, a “Word” from a morphological analyser might have “root” and “suffix”, etc.
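
To make the offset semantics concrete, here is a small Python sketch (with a made-up “Person” annotation) showing how start and end delimit a span of text:

text = "Nikolas went home"
person = {"start": 0, "end": 7}   # hypothetical "Person" annotation over the name

print(text[person["start"]:person["end"]])   # -> "Nikolas"
print(person["end"] - person["start"])       # length of the annotated span: 7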

Request structure

There are three main types of endpoint currently supported by this specification: one for services whose input is structured or unstructured text, one for services whose input is audio, and one for services whose input is an image.

Text requests

Services that take plain text (or something from which plain text can be extracted, e.g. HTML) as their input are expected to offer an endpoint that accepts POST requests with Content-Type: application/json that conforms to the following structure.

{
  "type":"text",
  "params":{...},   /* optional */
  "content":"The text of the request",
  // mimeType optional - this is the default if omitted
  "mimeType":"text/plain",
  "features":{ /* arbitrary JSON metadata about this content, optional */ },
  "annotations":{ /* optional */
    "<annotation type>":[
      {
        "start":number,
        "end":number,
        "features":{ /* arbitrary JSON */ }
      }
    ]
  }
}

We expect that, from amongst the large number of possible and supported document types, a smaller set will emerge across the ELG as preferred and well supported (for example plain text, HTML and XML). We do not intend to support binary formats such as PDF or Word as “text” requests, but may introduce other formats to this specification at a later date.

The only parts of this request that are guaranteed to be present are the type (which will always be “text”) and the content. So a minimal request would look like this:

{"type":"text", "content":"This is an example request"}

The optional elements are:

mimeType

the MIME type of the content, if it is not simply plain text

params

vendor-specific parameters - it is up to the individual service implementor to decide how (or indeed whether) to interpret these

features

metadata about the input as a whole

annotations

as described above - the start and end are Unicode character offsets within the content and the sourceStart and sourceEnd are ignored.

Tools that are able to accept text requests are RECOMMENDED to also offer an endpoint that can accept just the plain text (or other types of) “content” posted directly, and treat that the same as they would a message with the "content" property equal to the post data, the "mimeType" taken from the request Content-Type header, and no features or annotations. The "params" should be populated from the URL query string parameters. This endpoint will not be called by the ELG platform internally but it will make the service easier to test outside of the ELG platform infrastructure, and for open-source tools it will allow users to easily download and run the tool locally in Docker on their own hardware.
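
As a sketch of how a tool might expose both endpoints, the following minimal Python service (written with Flask; the endpoint paths and the trivial processing function are illustrative, not mandated by this specification) accepts the JSON text request as well as raw posted content:

from flask import Flask, jsonify, request

app = Flask(__name__)

def run_tool(content, params=None):
    # stand-in for the real LT processing - here it returns an empty annotations response
    return {"response": {"type": "annotations", "annotations": {}}}

@app.route("/process", methods=["POST"])
def process_json():
    """Endpoint accepting the standard ELG text request as application/json."""
    msg = request.get_json()
    if msg.get("type") != "text":
        return jsonify({"failure": {"errors": [{
            "code": "elg.request.type.unsupported",
            "text": "Request type {0} not supported by this service",
            "params": [msg.get("type")]}]}})
    return jsonify(run_tool(msg["content"], msg.get("params")))

@app.route("/process_raw", methods=["POST"])
def process_raw():
    """Convenience endpoint accepting the bare content, as recommended above."""
    content = request.get_data(as_text=True)
    return jsonify(run_tool(content, dict(request.args)))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)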

Structured text request

This is very similar to the plain text request, but for services that require some structure to their input, for example a list of sentences for some MT services, a list of words for a service that re-segments a stream of ASR output into a list of sentences, etc. Again, services that accept this kind of input should provide a POST endpoint that accepts Content-Type: application/json conforming to the following structure:

{
  "type":"structuredText",
  "params":{...},   /* optional */
  "texts":[
    {
      "content":"The text of this node",           // either
      "texts":[/* same structure, recursive */],   // or
      // mimeType optional - this is the default if omitted
      "mimeType":"text/plain",
      "features":{ /* arbitrary JSON metadata about this node, optional */ },
      "annotations":{ /* optional */
        "<annotation type>":[
          {
            "start":number,
            "end":number,
            "features":{ /* arbitrary JSON */ }
          }
        ]
      }
    }
  ]
}

The type will always be “structuredText”, params (optional) allows for vendor-specific parameters whose interpretation is up to the individual service implementor, and texts will always be an array of at least one JSON object. The texts property forms a recursive tree-shaped data structure: each object will be either a leaf node containing a piece of content or a branch node containing another list of texts.

Leaf nodes have one required property content containing the text of this node, plus zero or more of the following optional properties:

mimeType

the MIME type of the content, if it is not simply plain text

features

metadata about this node as a whole

annotations

as described above - the start and end are Unicode character offsets within the content and the sourceStart and sourceEnd are ignored.

Branch nodes have one required property texts containing an array of child nodes (which may in turn be branch or leaf nodes), plus zero or more of the following optional properties:

features

metadata about this node as a whole

annotations

as described above - the start and end are array offsets within the texts array (e.g. "start":0, "end":2 would refer to the first and second children - treat them as zero-based array indices where the start is inclusive and the end is exclusive) and the sourceStart and sourceEnd are ignored.

Here is the simplest possible example of a structured text request representing two sentences, each with several words, with no features and no annotations.

{
  "type":"structuredText",
  "texts":[
    {
      "texts":[
        {"content":"The"},{"content":"European"},{"content":"Language"},{"content":"Grid"}
      ]
    },
    {
      "texts":[
        {"content":"An"},{"content":"API"},{"content":"example"}
      ]
    }
  ]
}
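
Because the texts property is recursive, a service consuming this request type typically walks the tree; the following Python sketch (the function name is illustrative) collects the content of every leaf node in document order:

def collect_leaf_contents(nodes):
    """Recursively gather the 'content' of every leaf node of a structuredText request."""
    contents = []
    for node in nodes:
        if "texts" in node:          # branch node: recurse into its children
            contents.extend(collect_leaf_contents(node["texts"]))
        else:                        # leaf node: carries the actual text
            contents.append(node["content"])
    return contents

# For the example request above this returns:
# ['The', 'European', 'Language', 'Grid', 'An', 'API', 'example']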

Audio requests

Services that accept audio as input (e.g. speech recognition) are slightly more complex, given that the input data cannot easily be encoded directly in JSON. Audio services MUST accept a POST of Content-Type: multipart/form-data with two parts: the first part, named “request”, will be application/json conforming to the following structure, and the second part, named “content”, will be audio/x-wav or audio/mpeg containing the actual audio data.

{
  "type":"audio",
  "params":{...}, // optional
  "format":"string", // LINEAR16 for WAV or MP3 for MP3, other types are service specific
  "sampleRate":number, // deprecated - use the sample rate from the audio file metadata
  "features":{ /* arbitrary JSON metadata about this content, optional */ },
  "annotations":{ /* optional */
    "<annotation type>":[
      {
        "start":number,
        "end":number,
        "features":{ /* arbitrary JSON */ }
      }
    ]
  }
}

The ELG platform typically expects audio to be a single channel - this is not guaranteed, as it depends on what the requesting user submits, and a service receiving multiple audio channels may handle this situation in any way it sees fit, including processing only the first channel or mixing down the multi-channel stream to mono before processing.

As with text requests we expect that there will be a small number of standard audio formats that are well supported across services (e.g. 16kHz uncompressed WAV) but individual services may support other types.

Optional properties of this request type are:

params

vendor-specific parameters - it is up to the individual service implementor to decide how (or indeed whether) to interpret these

features

metadata about the input as a whole

annotations

as described above - the start and end are floating point timestamps in seconds from the start of the audio and the sourceStart and sourceEnd are ignored.

sampleRate

an earlier version of this specification included this property to specify the sample rate of the audio, but this is no longer used and will rarely be set when services are called by the front end API gateway in the ELG platform. You should not depend on this property, instead simply use the sample rate declared in the audio file header of the “content” part.
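
For local testing, a request of this shape can be assembled with any HTTP client that supports multipart/form-data; the Python sketch below (the service URL and file name are placeholders) sends a WAV file to an audio endpoint using the requests library:

import json

import requests

request_part = {"type": "audio", "format": "LINEAR16"}

with open("speech.wav", "rb") as audio_file:
    resp = requests.post(
        "http://localhost:8000/process",   # placeholder URL for a locally running service
        files={
            "request": (None, json.dumps(request_part), "application/json"),
            "content": ("speech.wav", audio_file, "audio/x-wav"),
        },
    )

print(resp.json())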

Image requests

Services that accept images as input (e.g. OCR) work in a similar way to audio requests, as the image data too cannot easily be encoded into a single JSON message. Image services must also accept a POST of Content-Type: multipart/form-data with two parts. Again, the first part, named “request”, will be application/json conforming to the structure below, and the second part, named “content”, will have a Content-Type of image/png, image/bmp, image/jpeg, image/gif or image/tiff and will contain the actual image data.

{
  "type":"image",
  "params":{...}, // optional,
  "format":"string", // PNG, JPEG, TIFF, GIF or BMP
  "features":{ /* arbitrary JSON metadata about this content, optional */ },
}

Services are not necessarily required to support all the above image formats, but they should return a suitable error message when presented with a format they do not understand. The one-dimensional “annotations” format used by text and audio requests is not appropriate for two-dimensional images. A future version of this specification may define a standard way to provide image annotations, but at present services requiring this kind of information will need to define their own structure in the features container.

Response structure

Services are expected to return their responses as JSON as described in the rest of this document. The minimal requirement is for services to be able to respond with Content-Type: application/json containing a successful or failed response message, but long-running services may also choose to offer Content-Type: text/event-stream to be able to stream progress reports during processing of the request. This mechanism is described at the end of this document.

Failure message

If processing fails for any reason (whether due to bad input, overloading of the service, or internal errors during processing) then the service should return the following JSON structure to describe the failure.

{
  "failure":{
    "errors":[array of status messages]
  }
}

The errors property is an array of i18n status messages (JSON objects with properties “code”, “text” and “params”) as described above - standard message codes are given in the appendix to this document.

Successful response message

All the successful responses follow this basic format:

{
  "response":{
    "type":"Response type code",
    "warnings":[/* array of status messages, optional*/],
    // other properties type-specific
  }
}

As with the request, the response type code will likely be constant for any given service. The exact format of the rest of a successful response message depends on the type of the service.

The warnings list is a slot to report warning messages that did not cause processing to fail entirely but may need to be fed back to the user (e.g. if the process involves several independent steps and only some of the steps failed, or the input was too long and the service chose to truncate it rather than fail altogether). Again, the individual messages in this array are i18n status messages as described above.

Annotations response

This response is suitable for any service that returns standoff annotations that are anchored to locations in text (e.g. named entity recognition) or time points in an audio/video stream (in general: anything compatible with a 1-dimensional coordinate system that uses a single number).

{
  "response":{
    "type":"annotations",
    "warnings":[...], /* optional */
    "features":{...}, /* optional */
    "annotations":{
      "<annotation type>":[
        {
          "start":number,
          "end":number,
          "features":{ /* arbitrary JSON */ }
        }
      ]
    }
  }
}

features (optional)

metadata about the input as a whole

annotations (required, but may be empty "annotations":{})

as described above - for plain text data start and end would be character offsets into the text (Unicode code points), for audio data they would be the time point within the audio in seconds. The sourceStart and sourceEnd are ignored since there are no separate “source” and “target” data streams in this situation.
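
As a sketch, a named entity recogniser whose underlying tool produces (start, end, type, features) tuples (a hypothetical internal format) could assemble this response as follows; note that each value in the annotations map is always a list, even for a single annotation:

def build_annotations_response(entities):
    """entities: iterable of (start, end, annotation type, features dict) tuples."""
    annotations = {}
    for start, end, ann_type, features in entities:
        annotations.setdefault(ann_type, []).append(
            {"start": start, "end": end, "features": features}
        )
    return {"response": {"type": "annotations", "annotations": annotations}}

# e.g. a single Person annotation covering characters 10 to 15
build_annotations_response([(10, 15, "Person", {"gender": "female"})])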

Classification response

For document-level (or more generally whole-input-level) classification services, e.g. language identification:

{
  "response":{
    "type":"classification",
    "warnings":[...], /* optional */
    "classes":[
      {
        "class":"string",
        "score":number /* optional */
      }
    ]
  }
}

We allow for zero or more classifications, each with an optional score. Services should return multiple classes in whatever order they feel is most useful (e.g. “most probable class” first); this order need not correspond to a monotonic ordering by score - we don’t assume scores are all mutually comparable - and the order will be preserved by any subsequent processing steps.

Classification tools that classify segments of the input rather than the whole input should use the annotations or texts response formats instead of this one.

Texts response

A response consisting of one or more new texts with optional annotations, for example multiple alternative possible translations from an MT service or transcriptions from an ASR service.

{
  "response":{
    "type":"texts",
    "warnings":[...], /* optional */
    "texts":[
      {
        "role":"string", /* optional */
        "content":"string of translated/transcribed text", // either
        "texts":[/* same structure, recursive */],         // or
        "score":number, /* optional */
        "features":{ /* arbitrary JSON, optional */ },
        "annotations":{ /* optional */
          "<annotation type>":[
            {
              "start":number,
              "end":number,
              "sourceStart":number, // optional
              "sourceEnd":number,   // optional
              "features":{ /* arbitrary JSON */ }
            }
          ]
        }
      }
    ]
  }
}

As with the structured text request format above, this texts response structure is recursive, so it is possible for each object in the list to be a branch node containing a set of child texts or a leaf node containing a single string.

Leaf nodes have one required property content, plus zero or more of the following optional properties:

role

the role of this node in the response, “alternative” if it represents one of a list of alternative translations/transcriptions, “segment” if it represents a segment of a longer text, or “paragraph”, “sentence”, “word” etc. for specific types of text segment.

score

if this is one of a list of alternatives, each alternative may have a score representing the quality of the alternative

features

metadata about this node as a whole

annotations

as described above - the start and end are Unicode character offsets within the content and the sourceStart and sourceEnd are the offsets into the source data (the interpretation depends on the nature of the source data).

Branch nodes have one required property texts containing an array of child nodes (which may in turn be branch or leaf nodes), plus zero or more of the following optional properties:

role

the role of this node in the response, “alternative” if it represents one of a list of alternative translations/transcriptions, “segment” if it represents a segment of a longer text, or “paragraph”, “sentence”, “word” etc. for specific types of text segment.

features

metadata about this node as a whole

annotations

as described above - the start and end are array offsets within the texts array (e.g. "start":0, "end":2 would refer to the first and second children - treat them as zero-based array indices where the start is inclusive and the end is exclusive) and the sourceStart and sourceEnd are the offsets into the source data (the interpretation depends on the nature of the source data).

The texts response type will typically be used in two different ways, either

  • the top-level list of texts is interpreted as a set of alternatives for the whole result - in this case we would expect the content property to be populated but not the texts one, and a “role” value of “alternative” - tools should return the alternatives in whatever order they feel is most useful, typically descending order of likelihood (though as for classification results we don’t assume scores are mutually comparable and the order of alternatives in the array need not correspond to a monotonic ordering by score).

  • the top-level list of texts is interpreted as a set of segments of the result, where each segment can have N-best alternatives (e.g. a list of sentences, with N possible translations for each sentence). In this case we would expect texts to be populated but not content, and a “role” value of either “segment” or something more detailed indicating the nature of the segmentation such as “sentence”, “paragraph”, “turn” (for speaker detection), etc. - in this case the order of the texts should correspond to the order of the segments in the result.

Audio response

A response consisting of a piece of audio (e.g. an audio rendering of text in a text-to-speech tool), optionally with annotations linked to either or both of the source and target data.

{
  "response":{
    "type":"audio",
    "warnings":[...], /* optional */
    "content":"base64 encoded audio for shorter snippets",
    "format":"string",
    "features":{/* arbitrary JSON, optional */},
    "annotations":{
      "<annotation type>":[
        {
          "start":number,
          "end":number,
          "sourceStart":number, // optional
          "sourceEnd":number,   // optional
          "features":{ /* arbitrary JSON */ }
        }
      ]
    }
  }
}

Here the content property contains base64-encoded audio data, and the format specifies the audio format used - in this version of the ELG platform the supported formats are LINEAR16 (uncompressed WAV) or MP3. In addition the response may contain zero or more of the following optional properties:

features

metadata about this node as a whole

annotations

as described above - the start and end are time offsets within the audio content expressed as floating point numbers of seconds, and the sourceStart and sourceEnd are the offsets into the source data (the interpretation depends on the nature of the source data).

As an alternative to embedding the audio data in base64 encoding within the JSON payload, a service MAY simply return the audio data directly with the appropriate Content-Type (audio/x-wav or audio/mpeg); however, this approach means the service will be unable to return features or annotations over the audio, and will be unable to report partial progress.

A note about image processing services

There is not currently a standardised representation for “annotations” over regions of images (as opposed to text or audio). Services that process images should use the “features” section of an annotations or texts response to define image regions in a way that makes sense for that service (rectangular bounding boxes, SVG “path” syntax, etc.).

Services that wish to return images should use the temporary storage system.

Progress Reporting

Some LT services can take a long time to process each request, and in these cases it may be useful to be able to send intermediate progress reports back to the caller. This serves both to reassure the caller that processing has not silently failed, and also to ensure the HTTP connection is kept alive. The mechanism for this in ELG leverages the standard “Server-Sent Events” (SSE) protocol format - if the client sends an Accept header that announces that it is able to understand the text/event-stream response type, then the service may choose to immediately return a 200 “OK” response with Content-Type: text/event-stream and hold the connection open (using chunked transfer encoding in HTTP/1.1 or simply not sending a Content-Length in HTTP/2). It may then dispatch zero or more SSE “events” with JSON data in the following structure:

{
  "progress":{
    "percent"://number between 0.0 and 100.0,
    "message":{
      // optional status message, with code, text and params as above
    }
  }
}

followed by exactly one successful or failed response in the usual format. Services should not send any further progress messages once the success or failure response has been sent. Note that if a message is provided in a progress report it must be an i18n status message, not simply a plain string.

For example:

Content-Type: text/event-stream

data:{"progress":{"percent":0.0}}

data:{"progress":{"percent":20.0}}

data:{"progress":{
data:    "percent":70.0
data:  }
data:}

data:{"response":{...}}

As per the SSE specification, each line of data within an event is prefixed with data:, and an event is terminated by a blank line - there MUST be two consecutive newlines or CRLF sequences between the end of one event and the start of the next.

One would normally expect the progress percentage to increase over time but this is not necessarily a requirement of the specification - services are free to publish progress messages without a "percent" property if they wish to provide a status update message but cannot quantify their progress numerically, or even with a lower percentage than the previous message if they now have information to suggest that the overall process will take longer than first estimated.

Services are RECOMMENDED to support this response format, and to send it if the client indicates they can accept text/event-stream, but it is not required. The clients which will call your services within the ELG infrastructure will accept both text/event-stream and application/json responses, and you are encouraged to return an event stream if you can, but you are free to return application/json if it makes more sense for your service, and you MUST return application/json if the calling client does not indicate in the Accept header that they can understand text/event-stream.
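
The following Python sketch (again using Flask; the processing steps and percentages are purely illustrative) shows one way a service might implement this behaviour, falling back to plain application/json when the client does not accept text/event-stream:

import json

from flask import Flask, Response, jsonify, request

app = Flask(__name__)

def run_tool(msg):
    # hypothetical stand-in for the actual (long-running) LT processing
    return {"response": {"type": "annotations", "annotations": {}}}

def sse_event(payload):
    # one SSE event: every line prefixed with "data:", terminated by a blank line
    return "data:" + json.dumps(payload) + "\n\n"

@app.route("/process", methods=["POST"])
def process():
    msg = request.get_json()
    if "text/event-stream" not in request.headers.get("Accept", ""):
        return jsonify(run_tool(msg))          # client cannot handle SSE

    def stream():
        yield sse_event({"progress": {"percent": 0.0}})
        # ... long-running work happens here, emitting progress as it goes ...
        yield sse_event({"progress": {"percent": 50.0}})
        yield sse_event(run_tool(msg))         # exactly one final response event

    return Response(stream(), mimetype="text/event-stream")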

Helper services

The ELG platform provides certain “helper” services that may be called as required by LT tools that are running within the infrastructure, at specific fixed URLs. The following service is currently generally available.

Temporary file storage

The temporary storage service provides a way for LT tools running within the ELG infrastructure to store arbitrary data for a short time at a URL that is accessible from outside the platform. This URL may then be included in the service response (e.g. as a feature value on an annotations or texts response) allowing the caller to retrieve the data before the URL expires. The intended use case for this is for services that need to generate and return data of types such as images or short video segments that cannot easily be represented in the standard JSON response structure - where possible service implementors are encouraged to use the standard JSON representations, but the temporary storage service is available where necessary.

To store data, simply make an HTTP POST request to the fixed URL http://storage.elg/store. The data to be stored should be provided in its raw form in the POST body, and an appropriate Content-Type header should be provided. The maximum size for any single temporary storage file is 10MB. If the upload is successful, the /store endpoint will respond with a JSON response in the same format as used by the asynchronous public API:

{
  "response":{
    "type":"stored",
    "uri":"<download URL>"
  }
}

The “download URL” is a globally-accessible URL, to which a GET request will respond with the same data that was originally stored, served with the same Content-Type as was sent in the /store call. By default the data is available for download for 15 minutes from the time of uploading; this can be configured by passing a query parameter ?ttl=<seconds> to the call, i.e. a POST to http://storage.elg/store?ttl=60 would generate a URL valid for only one minute (60 seconds). The maximum permitted ttl is 86400 seconds (24 hours); any ttl parameter longer than that will be treated as 24 hours.

If the upload fails for any reason the /store endpoint responds with a failure message in exactly the same format as LT services use to report their own failures - indeed, an LT service receiving a failure response from /store could legitimately echo the same failure response message back to its own caller.
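
From inside the platform, storing a generated image and retrieving the temporary URL might look like the following Python sketch (the image bytes are assumed to come from the service’s own processing):

import requests

image_bytes = b"..."   # e.g. PNG data produced by the service

resp = requests.post(
    "http://storage.elg/store",
    params={"ttl": 600},                       # keep the file available for 10 minutes
    data=image_bytes,
    headers={"Content-Type": "image/png"},
)

result = resp.json()
if "response" in result:
    download_url = result["response"]["uri"]   # include this URL in the service response
else:
    errors = result["failure"]["errors"]       # the failure may be echoed back to the caller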

Note

The uploading endpoint http://storage.elg/store is only visible inside the ELG infrastructure. This is a deliberate design decision for security reasons - the ELG is not an internet file transfer service, and we do not support the upload of temporary files from the internet.


Appendix: best practice suggestions (non-normative)

The API specification above defines the syntactic requirements for a service to be compliant with the ELG API, but this still leaves a lot of flexibility for service providers in terms of which of the available formats to choose, how to use parameters, which annotation types and features to use in their responses, etc. This section aims to give some “best practice” advice on how to design your services to best fit in with the rest of the ELG ecosystem.

Service parameters

All request types include an optional params section allowing the service to take vendor-specific parameters. While this is specified as accepting any JSON, in practice the public API will send all parameters as either a string (if the parameter has a single value) or an array of strings (if the parameter has multiple values). If a service requires numeric or boolean parameters then it should be written to accept string values as well as proper numeric or boolean literals in the JSON, and parse the string value to the appropriate type if possible rather than simply responding with a “type mismatch” error. If you can, you should prefer using multiple top-level parameters rather than nested JSON structures, i.e. prefer

{
  "params":{
    "threshold_person":0.7,
    "threshold_location":0.8
  }
}

rather than

{
  "params":{
    "thresholds":{
      "person":0.7,
      "location":0.8
    }
  }
}

With nested params, users must pass a full JSON request when calling your service via the public API; if you use all top-level parameters (and you handle parsing the values from strings) then users have the option to call the public API endpoint with plain text or audio and put the parameters in the URL query string.
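
A small helper along the following lines (a Python sketch; the function name is illustrative) covers the common case of a numeric parameter arriving either as a JSON number, a string, or an array of strings:

def param_as_float(params, name, default):
    """Read a parameter that may arrive as a number, a string or an array of strings."""
    value = params.get(name, default)
    if isinstance(value, list):      # multi-valued parameters arrive as arrays of strings
        value = value[0]
    try:
        return float(value)
    except (TypeError, ValueError):
        raise ValueError('Value "{0}" is not valid for parameter {1}'.format(value, name))

# both calls return 0.7
param_as_float({"threshold_person": 0.7}, "threshold_person", 0.5)
param_as_float({"threshold_person": "0.7"}, "threshold_person", 0.5)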

Annotations or texts?

For some types of services there may be several response formats that are equally reasonable options. For example a part of speech tagger receiving text input {"type":"text", "content":"This is an example."} could choose to return its response as standoff annotations, all the same type:

{
  "response":{
    "type":"annotations",
    "annotations":{
      "Token":[
        {"start":0, "end":4, "features":{"category":"PRON"}},
        {"start":5, "end":7, "features":{"category":"AUX"}}
      ]
    }
  }
}

or as a separate annotation type for each POS tag:

{
  "response":{
    "type":"annotations",
    "annotations":{
      "PRON":[
        {"start":0, "end":4}
      ],
      "AUX":[
        {"start":5, "end":7}
      ]
    }
  }
}

or as a “texts” response with one item per word:

{
  "response":{
    "type":"texts",
    "texts":[
      {"content":"This", "features":{"category":"PRON"}},
      {"content":"is", "features":{"category":"AUX"}}
    ]
  }
}

(or one item per sentence, where each sentence has one item per word, etc.). All of these options have their pros and cons, but in general the “annotations” form provides the most flexibility for the caller and we would recommend choosing that format if the tool you are integrating makes the required character offsets available to you. It is simple for the caller to map from a standoff annotations response to a “list of words” kind of structure if they need to but it is difficult or impossible for them to convert the other way and reconstruct the offsets from just a list of tokens.

However if your tool only gives you a list of words (or other segments) without links back to the original text then feel free to use the “texts” response format as appropriate. If your tool segments the text into sentences or similar, then reflect that segmentation in the response structure. Note: each node in a “texts” response must have either “content” or another layer of “texts”; it cannot have both.

Granularity of annotation types

In some cases it is not obvious what should constitute a separate type of annotation, and what should be expressed as a feature value on a single annotation type. In general most services so far deployed on the ELG prefer to have a smaller number of annotation types and put the detail into the features. For example

  • for a word-level tagger such as part-of-speech or morphosyntactic category, have one annotation for “token” or “word” with a feature for the tag. This is particularly the case for large tag sets, e.g. morphological annotations in richly inflected languages.

  • for named entities, use one annotation type for each high level class of entity (“Person”, “Location”, etc) with features for more fine-grained classes (“city”, “country”, …)

The general recommendation is to use a small, closed set of top-level types with short names. Avoid using spaces in annotation types - instead of “Named Entity” prefer camel case “NamedEntity” or underscores “named_entity” - annotation types that are valid identifiers in most programming languages make for cleaner client code.

Representation of dependency trees

Dependency parser services have many choices for how to represent their dependency graphs between tokens. Essentially the output of a dependency parser is

  • a list of sentences, each of which has

  • a list of tokens or words, each of which has

  • one link to its parent word in the dependency tree (except for the head word of the whole sentence, which has no parent).

A given word may have many incoming links from children but can have no more than one outgoing link to its parent or “head” word. This structure can be represented in several ways: as an annotations response with the sentences and tokens as annotations anchored to locations in the input text:

{
  "response": {
    "type": "annotations",
    "annotations": {
      "Sentence": [
        {"start": 0, "end": 10}
      ],
      "Token": [
        {"start": 0, "end": 2, "features": {"id": "tok1"}},
        {"start": 3, "end": 5, "features": {"id": "tok2", "parent": "tok1"}}
      ]
    }
  }
}

or as a texts response with an element for each sentence, which in turn has an element for each token:

{
  "response": {
    "type": "texts",
    "texts": [ // list of sentences
      {
        "texts": [ // list of tokens
          {"content": "This", "features": {"id": "tok1", "parent": "tok2"}},
          {"content": "is", "features": {"id": "tok2"}}
        ]
      }
    ]
  }
}

It is also possible to use a mixture of these two approaches, using a texts response with a leaf node for each sentence, which in turn then uses standoff annotations for the words. As discussed above under “annotations or texts”, if you have access to the original sentence and token offsets from the tool you are integrating then it is better to use the annotations response format, as it is easy for a caller to construct a list of tokens from a set of offsets but much harder to reliably map back the other way.

The clearest way to encode dependency links is to give each word a pair of features, one (“id” in the examples above) giving a unique identifier for this word and the other (“parent”) giving the corresponding identifier of the parent node in the tree. Other features may be used to encode the type of dependency relation and any other information such as POS tags. Some existing dependency parser services in the ELG catalogue do not provide unique word IDs and instead express the links in terms of the index of the target word in the sentence’s word list. All of these are valid options but explicit IDs are clearer and more robust.
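
For example, a caller receiving the annotations-style response above could rebuild the tree from these id/parent features with a few lines of Python (a sketch assuming every token carries an “id” feature):

def build_dependency_tree(tokens):
    """tokens: the 'Token' annotation list; returns (root ids, parent id -> child ids)."""
    children = {}
    roots = []
    for token in tokens:
        features = token.get("features", {})
        parent = features.get("parent")
        if parent is None:
            roots.append(features["id"])       # head word of the sentence
        else:
            children.setdefault(parent, []).append(features["id"])
    return roots, children

# For the annotations example above:
# roots == ['tok1'], children == {'tok1': ['tok2']}
build_dependency_tree([
    {"start": 0, "end": 2, "features": {"id": "tok1"}},
    {"start": 3, "end": 5, "features": {"id": "tok2", "parent": "tok1"}},
])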

Finally, some parsers use “multi-word tokens”; they decompose tokens into smaller sub-token units, for example to unpack compound nouns in languages like German. For cases like these, add a “words” feature to the token whose value is a list of objects, and place the word ID, parent link and other features in the word object rather than directly in the token features:

{
  "response": {
    "type": "texts",
    "texts": [
      {
        "content": "Supercapacitors are...",
        "annotations": {
          "Token": [
            {"start": 0, "end": 15, "features": {"words": [
              {"str": "Super", "id": "w1", "parent": "w2"},
              {"str": "capacitors", "id": "w2", "parent": "w3"}
            ]}},
            {"start": 16, "end": 19, "features": {"words": [
              {"str": "are", "id": "w3"}
            ]}}
          ]
        }
      }
    ]
  }
}

In this case each word will also need a separate feature giving the word’s surface form (“str” above) since the token text is split across more than one word.

Appendix: Standard status message codes

#
#   Copyright 2019 The European Language Grid
#
#   Licensed under the Apache License, Version 2.0 (the "License");
#   you may not use this file except in compliance with the License.
#   You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.
#
# This file contains the standard ELG status messages, translations should
# be placed in files named elg-messages_LANG.properties alongside this file.
#

# general bad request errors
elg.request.invalid=Invalid request message
elg.request.missing=No request provided in message
elg.request.type.unsupported=Request type {0} not supported by this service
elg.request.property.unsupported=Unsupported property {0} in request

elg.request.too.large=Request size too large

elg.request.parameter.missing=Required parameter {0} missing from request
elg.request.parameter.invalid=Value "{1}" is not valid for parameter {0}

# Errors specific to text requests
elg.request.text.mimeType.unsupported=MIME type {0} not supported by this service

# Errors specific to audio requests
elg.request.audio.format.unsupported=Audio format {0} not supported by this service
elg.request.audio.sampleRate.unsupported=Audio sample rate {0} not supported by this service

# Errors specific to image requests
elg.request.image.format.unsupported=Image format {0} not supported by this service

# Errors specific to structured text requests
elg.request.structuredText.property.unsupported=Unsupported property {0} in "texts" of structuredText request

# General bad response errors
elg.response.invalid=Invalid response message
elg.response.type.unsupported=Response type {0} not supported

# Unknown property in response
elg.response.property.unsupported=Unsupported property {0} in response
elg.response.texts.property.unsupported=Unsupported property {0} in "texts" of texts response
elg.response.classification.property.unsupported=Unsupported property {0} in "classes" of classification response

# User requested a service that does not exist
elg.service.not.found=Service {0} not found
elg.async.call.not.found=Async call {0} not found

# Permission problems
elg.permissions.quotaExceeded=Authorized quota exceeded
elg.permissions.accessDenied=Access denied
elg.permissions.accessManagerError=Error in access manager: {0}

# Temporary file storage service
elg.file.not.found=File {0} not found
elg.file.expired=Requested file {0} no longer available
elg.upload.too.large=Upload too large

# generic internal error when there's no more specific option
elg.service.internalError=Internal error during processing: {0}