Public LT API specification

LT services can be called via the API endpoints given on the “code samples” page. The format of the various requests and responses is closely related to the internal LT service API used within the ELG infrastructure, but the public endpoints also offer shortcuts to simplify common interactions. There are three different types of endpoint, depending on the kind of data the service requires as input: flat text, structured text or audio.

All endpoints are authenticated with an OAuth2 bearer token; a token suitable for test use can be copied from the “code samples” page. Future versions of this document will include details of how to obtain and renew access tokens programmatically. The token is passed via the HTTP Authorization header in the usual way: Authorization: Bearer <tokenValue>

Input formats

Services that process flat text

https://{domain}/execution/processText/{ltServiceID}

Services that process a single flat stream of text can be called via an endpoint of this form. Make an HTTP POST request to the endpoint with one of the following Content-Type headers (a minimal request sketch follows the list):

application/json
A JSON object as described in the “text request” section of the LT Service API specification. For example { "type":"text", "content":"The text to process", "params":{"genre":"news"} }. The type must be the string “text”, the content is the text to be processed, and params are specific to the individual service - see the per-service documentation for details of any parameters the service accepts.
text/plain or text/html
Just the text to be processed. In this case, any URL query parameters added to the endpoint URL will be passed on to the service as params.
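
As an illustration, the following sketch (using Python and the requests library, with placeholder values for the domain, service ID and token) shows both ways of calling a processText endpoint. The “genre” parameter is just the example from above; real services define their own parameters.

import requests

DOMAIN = "<domain>"              # placeholder - your ELG installation's domain
SERVICE_ID = "<ltServiceID>"     # placeholder - ID of the target service
headers = {"Authorization": "Bearer <tokenValue>"}

url = f"https://{DOMAIN}/execution/processText/{SERVICE_ID}"

# Option 1: a full JSON "text request", with service-specific params in the body
body = {"type": "text", "content": "The text to process", "params": {"genre": "news"}}
response = requests.post(url, headers=headers, json=body)

# Option 2: the plain-text shortcut - params become URL query parameters
response = requests.post(url,
                         headers={**headers, "Content-Type": "text/plain; charset=utf-8"},
                         params={"genre": "news"},
                         data="The text to process".encode("utf-8"))
print(response.json())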

Services that process “structured” text

https://{domain}/execution/processStructured/{ltServiceID}

Some services require text that has been pre-segmented in some way, for example split into tokens, sentences or paragraphs. For this endpoint the following Content-Type values are supported:

application/json
A JSON object as described in the “structured text request” section of the LT Service API specification. For example {"type":"structuredText", "texts":[{"content":"First sentence."},{"content":"Second sentence"}]}. As with text requests above, you may also add params to the JSON; these are specific to the individual service - see the per-service documentation for details of any parameters the service accepts.
text/plain

As a convenience shortcut, the endpoint can also accept a POST of plain text. In this case, how the text is segmented depends on the split URL query parameter (see the sketch after this list):

processStructured/{service} (without a split parameter)
the whole text is treated as a single segment
processStructured/{service}?split=line
the text is divided at line breaks, and each line is treated as a separate segment. Leading or trailing white space on each line is not trimmed, and blank lines become empty segments {"content":""}
processStructured/{service}?split=paragraph
the text is divided at each run of one or more blank lines (i.e. two or more consecutive line breaks, possibly with white space in between). Again, leading or trailing whitespace around each segment is not trimmed.

All query parameters (including split) are passed on to the underlying service.
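
A minimal sketch of both styles of structured request, again using Python's requests library with placeholder domain, service ID and token:

import requests

DOMAIN = "<domain>"              # placeholder
SERVICE_ID = "<ltServiceID>"     # placeholder
headers = {"Authorization": "Bearer <tokenValue>"}

url = f"https://{DOMAIN}/execution/processStructured/{SERVICE_ID}"

# Option 1: a full JSON "structured text request" with pre-segmented content
body = {"type": "structuredText",
        "texts": [{"content": "First sentence."}, {"content": "Second sentence"}]}
response = requests.post(url, headers=headers, json=body)

# Option 2: the plain-text shortcut - here each line becomes one segment
response = requests.post(url,
                         headers={**headers, "Content-Type": "text/plain; charset=utf-8"},
                         params={"split": "line"},
                         data="First sentence.\nSecond sentence".encode("utf-8"))
print(response.json())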

Services that process audio

https://{domain}/execution/processAudio/{ltServiceID}

Services that process a stream of audio can be called via an endpoint of this form. Make an HTTP POST request whose body is the audio data, with an appropriate Content-Type: audio/mpeg for MP3 audio or Content-Type: audio/x-wav for uncompressed WAV audio.

Any URL query parameters added to the endpoint will be passed on to the service; see the per-service documentation for details of which (if any) parameters the service accepts.
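
For example, the following sketch (Python requests, placeholder values) POSTs the raw bytes of a local WAV file to such a service; the file name is hypothetical:

import requests

DOMAIN = "<domain>"              # placeholder
SERVICE_ID = "<ltServiceID>"     # placeholder
headers = {"Authorization": "Bearer <tokenValue>",
           "Content-Type": "audio/x-wav"}        # use audio/mpeg for MP3 input

# The request body is simply the raw audio data
with open("speech.wav", "rb") as audio_file:
    response = requests.post(f"https://{DOMAIN}/execution/processAudio/{SERVICE_ID}",
                             headers=headers,
                             data=audio_file)
print(response.json())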

Service responses

The response formats returned from service calls are identical to their counterparts in the internal LT Service API and will not be repeated here. However, there is one shortcut for services such as text-to-speech that return audio data. Ordinarily these services return a response of Content-Type: application/json with the audio data encoded in base64. If you supply a parameter audioOnly with the value “true” or “yes” (in the params for a JSON request, or as a URL query parameter for an unwrapped text/HTML/audio request), then instead of the full JSON response you will receive just the binary audio data, with an appropriate Content-Type of audio/mpeg or audio/x-wav.
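
For instance, a hypothetical text-to-speech service could be called as follows (Python requests, placeholder values), using the plain-text shortcut plus audioOnly to save the returned audio straight to a file:

import requests

DOMAIN = "<domain>"              # placeholder
TTS_SERVICE_ID = "<ltServiceID>" # placeholder - a service that returns audio
headers = {"Authorization": "Bearer <tokenValue>",
           "Content-Type": "text/plain; charset=utf-8"}

# audioOnly=yes requests the raw audio bytes instead of the base64-wrapped JSON
response = requests.post(f"https://{DOMAIN}/execution/processText/{TTS_SERVICE_ID}",
                         headers=headers,
                         params={"audioOnly": "yes"},
                         data="Text to be spoken".encode("utf-8"))

with open("output.mp3", "wb") as out:    # or .wav, depending on the service
    out.write(response.content)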

Failed requests return a special type of response as follows:

{
  "failure":{
    "errors":[array of status messages]
  }
}

The errors property is an array of internationalization-compatible status message objects - the ELG platform provides another endpoint https://{domain}/i18n/resolve to which you can POST a JSON array of these objects and receive an array of resolved message strings in response.
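
For example (Python requests, placeholder values), a failed call could be handled by passing its errors array straight to the resolver:

import requests

DOMAIN = "<domain>"              # placeholder
headers = {"Authorization": "Bearer <tokenValue>"}

response = requests.post(f"https://{DOMAIN}/execution/processText/<ltServiceID>",
                         headers={**headers, "Content-Type": "text/plain"},
                         data=b"some input")
body = response.json()

if "failure" in body:
    # Resolve the i18n status message objects into human-readable strings
    resolved = requests.post(f"https://{DOMAIN}/i18n/resolve",
                             headers=headers,
                             json=body["failure"]["errors"])
    print(resolved.json())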

Asynchronous processing

Some services may take several seconds or more to respond, either because their processing is naturally complex or because many requests for the same service are being processed at the same time. To avoid the risk of dropped connections in such cases, the ELG platform offers an alternative “asynchronous” interaction style. To use this, send the same POST request but add /async to the endpoint URL before the /process* segment, e.g.

https://{domain}/execution/async/processAudio/{ltServiceID}

When called in async mode, the initial request should return immediately with a response of the following form:

{
  "response":{
    "type":"stored",
    "uri":"<polling URL>"
  }
}

The uri property is a URL that you should then poll at regular intervals with a GET request (using the same Authorization token). Each time you poll, if processing is still ongoing you will receive a “progress” response of the form:

{
  "progress":{
    "percent": /* number between 0.0 and 100.0 */,
    "message":{
      /* optional status message */
    }
  }
}

(The message is optional; if provided, it is a message object as in the failure response case above, which can be resolved to a message string by the /i18n/resolve endpoint.) Some services report true progress percentages; for those that do not provide real updates, the endpoint will always return {"progress":{"percent":0.0}} to show that processing is still ongoing.

Once processing is complete, the poll URL will return the JSON response (successful or failed) exactly as you would have received from the normal synchronous API endpoint.
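
Putting this together, an asynchronous call could be made and polled roughly as follows (Python requests, placeholder values; the two-second polling interval is just an illustrative choice):

import time
import requests

DOMAIN = "<domain>"              # placeholder
SERVICE_ID = "<ltServiceID>"     # placeholder
headers = {"Authorization": "Bearer <tokenValue>"}

# Submit the request in async mode - same body as a synchronous call
response = requests.post(f"https://{DOMAIN}/execution/async/processText/{SERVICE_ID}",
                         headers={**headers, "Content-Type": "text/plain; charset=utf-8"},
                         data="A long text to process".encode("utf-8"))
poll_url = response.json()["response"]["uri"]

# Keep polling while the body is a "progress" response; anything else is final
while True:
    result = requests.get(poll_url, headers=headers).json()
    if "progress" in result:
        print("still processing:", result["progress"].get("percent"))
        time.sleep(2)
    else:
        print(result)   # the normal success or failure response
        break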