Public LT API specification

LT services can be called via the API endpoints given on the “code samples” page, which will generally be of the form https://{domain}/execution/process/{ltServiceID}. The format of the various requests and responses is closely related to the internal LT service API used within the ELG infrastructure, but the public endpoints also offer shortcuts to simplify common interactions. All endpoints require authentication with an OAuth2 Bearer token; a token suitable for test use can be copied from the “code samples” page, or obtained and renewed programmatically using the ELG Python SDK. The token is passed via the HTTP Authorization header in the usual way: Authorization: Bearer <tokenValue>
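As a sketch of these conventions (the domain and service ID below are placeholders, not real values), the endpoint URL and authentication headers might be assembled like this in Python:

```python
def endpoint(domain, lt_service_id):
    """Build the standard synchronous processing endpoint URL."""
    return f"https://{domain}/execution/process/{lt_service_id}"

def auth_headers(token, content_type="application/json"):
    """Build the HTTP headers for a call to an ELG public endpoint.

    `token` is an OAuth2 access token copied from the "code samples"
    page or obtained via the ELG Python SDK.
    """
    return {
        "Authorization": "Bearer " + token,
        "Content-Type": content_type,
    }
```

These headers can then be passed to any HTTP client; the appropriate Content-Type depends on the request style, as described in the sections below.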

Input formats

Services that process flat text

Services that process a single flat stream of text can be called via the standard endpoint https://{domain}/execution/process/{ltServiceID} described above. Make an HTTP POST request to the endpoint with one of the following Content-Type headers:


application/json

A JSON object as described in the “text request” section of the LT Service API specification. For example { "type":"text", "content":"The text to process", "params":{"genre":"news"} }. The type must be the string “text”, the content is the text to be processed, and params are specific to the individual service - see the per-service documentation for details of any parameters the service accepts.

text/plain or text/html

Just the text to be processed. In this case any URL query parameters added to the endpoint URL will be passed on to the service as params.
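For illustration, a small helper that builds the JSON body of a “text request” (the genre parameter used in the example is hypothetical; real parameter names are service-specific):

```python
import json

def text_request(content, **params):
    """Build the JSON body of a "text request" as described above.

    Keyword arguments become the service-specific `params` object.
    """
    body = {"type": "text", "content": content}
    if params:
        body["params"] = params
    return json.dumps(body)
```

For example, `text_request("The text to process", genre="news")` produces the example body shown above.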

Services that process “structured” text

Some services require text that has been pre-segmented in some way, for example split into tokens, sentences or paragraphs. For this case, either:

  • Make a POST to the standard endpoint https://{domain}/execution/process/{ltServiceID} with Content-Type: application/json passing a JSON object as described in the “structured text request” section of the LT Service API specification. For example {"type":"structuredText", "texts":[{"content":"First sentence."},{"content":"Second sentence"}]}. As with text requests above, you may also add params to the JSON; these are specific to the individual service - see the per-service documentation for details of any parameters the service accepts.

  • Make a POST to the special endpoint https://{domain}/execution/processStructured/{ltServiceID}?split=... with Content-Type: text/plain. In this case, how the text is segmented depends on the split query parameter:

    processStructured/{service} (without a split parameter)

    the whole text is treated as a single segment


    processStructured/{service}?split=line

    the text is divided at line breaks, and each line is treated as a separate segment. Leading or trailing white space on each line is not trimmed, and blank lines become empty segments {"content":""}


    processStructured/{service}?split=paragraph

    the text is divided at each run of one or more blank lines (i.e. two or more consecutive line breaks, possibly with white space in between). Again, leading or trailing whitespace around each segment is not trimmed.

    All query parameters (including split) are passed on to the underlying service.
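To make the segmentation rules concrete, here is a client-side sketch that mimics them (assuming, per the behaviours described above, that the two modes are selected with split=line and split=paragraph; this is an illustration, not the server's actual implementation):

```python
import re

def segment(text, split=None):
    """Mimic how processStructured turns a plain-text body into
    {"content": ...} segments for the three cases described above."""
    if split is None:
        parts = [text]                       # whole text, one segment
    elif split == "line":
        parts = text.split("\n")             # no trimming; blank lines kept
    elif split == "paragraph":
        parts = re.split(r"\n\s*\n", text)   # runs of one or more blank lines
    else:
        raise ValueError("unknown split value: " + split)
    return [{"content": p} for p in parts]
```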

Services that process audio

Services that process a stream of audio accept an HTTP POST request whose body is the audio data, with an appropriate Content-Type: audio/mpeg for MP3 audio or Content-Type: audio/x-wav for uncompressed WAV audio. Any URL query parameters added to the endpoint will be passed on to the service, see the per-service documentation for details of which (if any) parameters the service accepts.
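As a sketch using only the Python standard library (the endpoint URL, token and lang parameter below are placeholders), a WAV request could be prepared like this:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_audio_request(endpoint_url, wav_bytes, token, **query_params):
    """Prepare (but do not send) a POST of uncompressed WAV audio.

    Any query parameters are forwarded to the service as its params.
    """
    url = endpoint_url
    if query_params:
        url += "?" + urlencode(query_params)
    return Request(
        url,
        data=wav_bytes,
        method="POST",
        headers={
            "Authorization": "Bearer " + token,
            "Content-Type": "audio/x-wav",  # or audio/mpeg for MP3
        },
    )
```

The prepared Request can then be sent with urllib.request.urlopen, or the same shape translated to any other HTTP client.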

For services that require the input audio to be supplied with pre-existing annotations (e.g. if the audio has already been segmented into different speakers), the same endpoint accepts the multipart format described in the “audio request” section of the internal API specification. In this case URL query parameters are not forwarded to the service; any required parameters are assumed to be already provided in the JSON part of the request bundle.

Services that process images

Services that process images accept an HTTP POST request whose body is the image data, with an appropriate Content-Type (image/png, image/jpeg, image/bmp, image/tiff or image/gif, depending on the image type).

Any URL query parameters added to the endpoint will be passed on to the service. See the per-service documentation for details of which (if any) parameters the service accepts.

A note about parameters

As noted above, when calling the endpoints with “plain” text, audio or image data, parameters may be passed to the service via the URL query string. The parameters (if any) accepted by a particular service are detailed in the per-service documentation. Query parameters are mapped into the JSON structure expected by the service container as follows:

  • A single value for a parameter paramName=paramValue will be passed to the service as a single string {"paramName":"paramValue"}

  • Multiple values for the same parameter paramName=value1&paramName=value2 will be passed as an array {"paramName":["value1","value2"]}

  • In order to force the array format even when passing a single value, append [] to the parameter name (note that these characters must be escaped in the query string): param%5B%5D=value will be passed as {"param":["value"]}
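The mapping rules above can be mimicked client-side as follows (an illustration of the rules, not ELG's own code):

```python
from urllib.parse import parse_qs

def params_from_query(query):
    """Map a URL query string to the JSON params structure using the
    rules described above: single values become strings, repeated
    names become arrays, and a trailing "[]" forces the array form."""
    out = {}
    for name, values in parse_qs(query).items():
        if name.endswith("[]"):
            out[name[:-2]] = values          # forced array form
        elif len(values) > 1:
            out[name] = values               # repeated name -> array
        else:
            out[name] = values[0]            # single value -> string
    return out
```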

Service responses

The response formats returned from service calls are identical to their counterparts in the internal LT Service API and will not be repeated here. However, there is one shortcut for services, such as text-to-speech, that return audio data. Ordinarily these services return a response of Content-Type: application/json including the audio data encoded in base64, but if you supply a parameter audioOnly (in the params for a JSON request, or as a URL query parameter for an unwrapped text/HTML/audio request) with the value “true” or “yes”, you will receive just the binary audio data, with an appropriate Content-Type of audio/mpeg or audio/x-wav, instead of the full JSON response.

Failed responses return a special type of response as follows:

    {"errors":[array of status messages]}

The errors property is an array of internationalization-compatible status message objects - the ELG platform provides another endpoint https://{domain}/i18n/resolve to which you can POST a JSON array of these objects and receive an array of resolved message strings in response.
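As a sketch (the domain below is a placeholder, the message objects are treated as opaque, and it is assumed here that the resolve endpoint accepts the same bearer token), such a request could be prepared like this:

```python
import json
from urllib.request import Request

def build_resolve_request(domain, messages, token):
    """Prepare (but do not send) a POST of status message objects to
    the i18n/resolve endpoint; the response body is an array of
    resolved message strings."""
    return Request(
        f"https://{domain}/i18n/resolve",
        data=json.dumps(messages).encode("utf-8"),
        method="POST",
        headers={
            "Authorization": "Bearer " + token,
            "Content-Type": "application/json",
        },
    )
```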

Asynchronous processing

Some services may take several seconds or more to respond, either because their processing is naturally complex or because there are many requests for the same service being processed at the same time. To avoid the risk of dropped connections in such cases, the ELG platform offers an alternative “asynchronous” interaction style. To use this, send the same POST request, but add /async to the endpoint URL ahead of the /process, e.g.

    https://{domain}/execution/async/process/{ltServiceID}

When called in async mode, the initial request should return immediately with a response of the following form:

    {"uri":"<polling URL>"}

The uri property is a URL which you should then begin to poll on a regular basis with a GET request (using the same Authorization token). Each time you poll, if processing is still ongoing you will receive a “progress” response of the form

    {
      "progress":{
        "percent":<number between 0.0 and 100.0>,
        "message":<optional status message object>
      }
    }

(The message is optional; if provided it is a message object as in the failure response case above, which can be resolved to a message string by the /i18n/resolve endpoint.) Some services return true progress percentages; for those that do not provide real updates, the endpoint will always return {"progress":{"percent":0.0}} to show that processing is still ongoing.

Once the processing is complete the poll URL will return the JSON response (successful or failed) exactly as you would have got from the normal synchronous API endpoint.
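The polling loop can be sketched as follows (the HTTP fetch is injected as a callable so the loop itself carries no network code; a real client would make each fetch a GET on the polling URL with the same Authorization header):

```python
import time

def poll_until_done(fetch, interval=1.0):
    """Poll an async job until the final response arrives.

    `fetch` returns the parsed JSON body of one GET on the polling
    URL; interim bodies look like {"progress": {"percent": ...}}.
    """
    while True:
        body = fetch()
        if "progress" not in body:
            return body  # final response, successful or failed
        time.sleep(interval)
```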