Introduction
Welcome to the XSignals API! You can use our API to search, list and download files.
Authentication
The XSignals API accepts both Basic Auth and bearer tokens. One of these authentication methods must be included in every API request. For brevity, we will leave the authentication method out of the examples.
To authenticate, use Bearer Token or Basic Authentication:
curl "https://api.xsignals.xpansiv.com/"
-H "Authorization: Bearer yourAccessToken"
curl "https://api.xsignals.xpansiv.com/"
-u username:password ...
Change Password
Allows the user to change their password.
curl -X POST \
  https://api.xsignals.xpansiv.com/auth/changePassword \
  -d '{
    "password": "new user password"
  }'
If successful, the above command returns status code 200; otherwise, see the Errors section.
HTTP Request
POST https://api.xsignals.xpansiv.com/auth/changePassword
Request Body
Parameter | Description |
---|---|
password | new user password |
Log In
This endpoint returns a JWT access token and a refresh token. When the access token expires, the refresh token can be used to generate a new one. You will need to use Basic Auth to retrieve the JWT tokens.
curl -u 'login:password' \
  https://api.xsignals.xpansiv.com/auth/login
If successful, the above command returns status code 200 and the tokens; otherwise, see the Errors section.
{
"token": "your access token",
"refreshToken": "your refresh token"
}
HTTP Request
GET https://api.xsignals.xpansiv.com/auth/login
Request Header
Parameter | Description |
---|---|
userName | userName of account |
password | password of account |
Refresh Access Token
This endpoint returns a new access token.
curl -X POST \
  https://api.xsignals.xpansiv.com/auth/refresh \
  -d '{
    "refreshToken": "your refresh token"
  }'
If successful, the above command returns status code 200 and the tokens; otherwise, see the Errors section.
{
"token": "your access token",
"refreshToken": "your refresh token"
}
HTTP Request
POST https://api.xsignals.xpansiv.com/auth/refresh
Request Body
Parameter | Description |
---|---|
refreshToken | your refresh token |
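The combined flow can be scripted. Below is a minimal Python sketch, using the requests library and placeholder credentials, that logs in with Basic Auth and later exchanges the refresh token for a new access token; it assumes the response fields shown above.
import requests

API = "https://api.xsignals.xpansiv.com"

# Log in with Basic Auth to obtain the JWT access and refresh tokens
login = requests.get(f"{API}/auth/login", auth=("yourUsername", "yourPassword"))
login.raise_for_status()
tokens = login.json()                     # {"token": ..., "refreshToken": ...}
headers = {"Authorization": f"Bearer {tokens['token']}"}

# Later, when the access token expires, exchange the refresh token for a new one
refreshed = requests.post(f"{API}/auth/refresh",
                          json={"refreshToken": tokens["refreshToken"]})
refreshed.raise_for_status()
tokens = refreshed.json()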
Log out
Log out the user by invalidating the refresh token.
curl -X POST \
  https://api.xsignals.xpansiv.com/auth/logout \
  -H "Authorization: Bearer yourAccessToken" \
  -d '{
    "refreshToken": "your refresh token"
  }'
If successful, the above command returns status code 200; otherwise, see the Errors section.
HTTP Request
POST https://api.xsignals.xpansiv.com/auth/logout
Request Body
Parameter | Description |
---|---|
refreshToken | your refresh token |
SSO (Auth0) tokens
In order to use the API via SSO, an Auth0 access token is necessary. This token can be obtained by calling the token endpoint in the Auth0 server with the appropriate credentials. The Auth0 server should respond with a valid token that can be used in the API requests, passed in the Authorization header ("Authorization: Bearer auth0Token").
curl -X POST \
  https://auth.xpansiv.com/oauth/token \
  -d '{
    "username": "your username",
    "password": "your Auth0 password",
    "grant_type": "http://auth0.com/oauth/grant-type/password-realm",
    "realm": "Username-Password-Authentication",
    "scope": "SCOPE",
    "audience": "https://xpansiv/platform",
    "client_id": "your client ID",
    "client_secret": "your client secret"
  }'
If successful, the above command returns status code 200; otherwise, see the Errors section.
HTTP Request
POST https://auth.xpansiv.com/oauth/token
Request Body
Parameter | Description | Payload (for non-sensitive parameters) |
---|---|---|
username | your username | |
password | your Auth0 password | |
grant_type | Auth0 grant type | "http://auth0.com/oauth/grant-type/password-realm" |
realm | Auth0 realm | "Username-Password-Authentication" |
scope | Auth0 scope | "SCOPE" |
audience | Auth0 audience | "https://xpansiv/platform" |
client_id | your client ID | |
client_secret | your client secret | |
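For reference, the same token request can be made from Python with the requests library. This is a minimal sketch with placeholder credentials; it assumes a standard Auth0 token response containing an access_token field.
import requests

payload = {
    "username": "your username",
    "password": "your Auth0 password",
    "grant_type": "http://auth0.com/oauth/grant-type/password-realm",
    "realm": "Username-Password-Authentication",
    "scope": "SCOPE",
    "audience": "https://xpansiv/platform",
    "client_id": "your client ID",
    "client_secret": "your client secret",
}
resp = requests.post("https://auth.xpansiv.com/oauth/token", json=payload)
resp.raise_for_status()
auth0_token = resp.json()["access_token"]   # assumes the standard Auth0 response shape

# Pass the token on subsequent API requests
headers = {"Authorization": f"Bearer {auth0_token}"}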
File Groups
List Groups
curl -X GET \
  "https://api.xsignals.xpansiv.com/file/group?from=0&size=10&query=value1&writeOnly=false"
List all groups you are entitled to
Response:
{
"from": 0,
"totalSize": 1,
"items": [
{
"access": "group access level [R|W|S], sample: RW",
"shared": false,
"name": "group name",
"description": "group description",
"fields": [
{
"name": "field name",
"type": "field type [string|int|float|date|array]",
"defaultValue": "field value"
},{
"name": "field name 2",
"type": "field type 2 [string|int|float|date|array]"
}
]
}
]
}
curl -X GET \
  "https://api.xsignals.xpansiv.com/file/group?from=0&size=10&query=<searchString>&writeOnly=false"
List all groups containing 'searchString' that you are entitled to read or write
curl -X GET \
  "https://api.xsignals.xpansiv.com/file/group?query=name=*&withSymbol=true"
List of file groups which have symbols
This endpoint allows you to list all the groups you are entitled to. In addition, you can use it to search for groups with a given value in the name, description or fields. By default, the endpoint returns a maximum of 100 items per request.
HTTP Request
GET https://api.xsignals.xpansiv.com/file/group
URL Parameters
Parameter | Description |
---|---|
from | Index of the first item to return (pagination offset). |
size | Maximum number of items to return per request. |
query | Value of the description, name or fields to look for. |
writeOnly | If this parameter is set to true, only groups to which user has write access are returned. |
withSymbol | If this parameter is set to true, only groups which have some symbols are returned. |
sortBy | Field by which the results will be sorted. Either createdOn or name. Default to createdOn. |
sortOrder | Order in which items will be returned. Available values are asc and desc. Default to desc. |
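The same call can be made from Python. The sketch below is illustrative only: it uses the requests library, a placeholder bearer token, and an arbitrary search value.
import requests

API = "https://api.xsignals.xpansiv.com"
headers = {"Authorization": "Bearer yourAccessToken"}

params = {"from": 0, "size": 10, "query": "Carbon", "writeOnly": "false"}
resp = requests.get(f"{API}/file/group", headers=headers, params=params)
resp.raise_for_status()
for group in resp.json()["items"]:
    print(group["name"], "-", group.get("description", ""))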
Group Details
curl https://api.xsignals.xpansiv.com/file/group/{name} \
  -H "Authorization: Bearer yourNgToken"
The above command returns a JSON file structured as follows:
{
"name": "group name",
"description": "group description",
"fields": [
{
"name": "field name",
"type": "field type [string|int|float|date|array]"
},{
"name": "field name 2",
"type": "field type 2 [string|int|float|date|array]",
"defaultValue": "field value"
}
]
}
This endpoint retrieves details about a specific file group.
HTTP Request
GET https://api.xsignals.xpansiv.com/file/group/{name}
URL Parameters
Parameter | Description |
---|---|
name | Name of the group to retrieve |
File Download
To download a file, you must first find the id of the file. Using our search API, you can quickly find the file you are looking for.
File Search
Search files within a given group
curl https://api.xsignals.xpansiv.com/file/search?query=groupName%3D<groupName>
Response:
{
"from": 0,
"totalSize": 4,
"files": [
{
"fileName": "file name",
"fileType": "file type [SOURCE|NCSV|DOWNLOADER|DATA_PREP|GDP]",
"groupName": "group name",
"fid": "file id",
"fields": {
"text": "some text",
...
},
"arrivalTime": "2018-01-01T12:00:00",
"owner": "file owner",
"size": 100
},
...
]
}
This endpoint lists all the files found in the given groupName.
NOTE: The query must be urlencoded, e.g. groupName=CBL_Exchange Data Voluntary Carbon Daily File should be encoded as groupName%3DCBL_Exchange%20Data%20Voluntary%20Carbon%20Daily%20File
HTTP Request
GET https://api.xsignals.xpansiv.com/file/search?size=<size>&from=<from>&query=<value>
URL Parameters
Parameter | Description |
---|---|
size | Maximum number of files to return per request. |
from | Index of the first result to return (pagination offset). |
query | Value of the metadata field to look for. |
HTTP Response
The response is a JSON object with a list of files. See the right pane.
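The urlencoding requirement mentioned in the note above can be handled programmatically. The following minimal Python sketch (requests library, placeholder token, example group name taken from the note) encodes the query before calling the endpoint.
import requests
from urllib.parse import quote

API = "https://api.xsignals.xpansiv.com"
headers = {"Authorization": "Bearer yourAccessToken"}

# quote() with safe="" also encodes '=' to %3D, as required by the note above
query = quote("groupName=CBL_Exchange Data Voluntary Carbon Daily File", safe="")
resp = requests.get(f"{API}/file/search?size=10&from=0&query={query}", headers=headers)
resp.raise_for_status()
for f in resp.json().get("files", []):
    print(f["fid"], f["fileName"], f["arrivalTime"])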
File Metadata
curl https://api.xsignals.xpansiv.com/file/<fileId>
This endpoint retrieves the metadata of a specific file. Included in the headers you will find a presigned URL to the location of the file's content. This URL is good for one hour.
Example: Location: https://datalake-xpansivprod-ng.s3.amazonaws.com/c651dd45-edf5-461b-8c59-e7dd457932a9?response-content-disposition=attachment%3Bfilename%3DCBL%20Ex......
Using the URL you can simply GET the file.
curl -XGET LocationURL
Alternatively, you can use the File Content endpoint.
HTTP Request
GET https://api.xsignals.xpansiv.com/file/<fileId>
URL Parameters
Parameter | Description |
---|---|
fileId | The ID of the file to retrieve |
HTTP Response
Response
{
"fileName": "file name",
"fileType": "file type [SOURCE|NCSV]",
"groupName": "group name",
"fid": "file id",
"fields": {
"fieldName": "field value"
},
"arrivalTime": "2018-01-01T12:00:00",
"owner": "file owner",
"size": 0
}
The response includes a Location header with a presigned URL to the file's content. If the file doesn't exist, the response code is 404.
Response Body
- fileName - file name
- fileType - file type [SOURCE, NCSV]
- groupName - group name
- fid - file id
- fields - map with metadata of this file
- arrivalTime - file upload date
- owner - username of the user who uploaded the file
- size - size in bytes
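A minimal Python sketch of this flow is shown below, assuming the requests library and a placeholder token and file ID. It follows the behaviour described above: the metadata body is JSON and the presigned URL is read from the Location response header.
import requests

API = "https://api.xsignals.xpansiv.com"
headers = {"Authorization": "Bearer yourAccessToken"}
file_id = "yourFileId"   # placeholder

resp = requests.get(f"{API}/file/{file_id}", headers=headers, allow_redirects=False)
resp.raise_for_status()
metadata = resp.json()                        # fileName, groupName, fields, ...
presigned_url = resp.headers.get("Location")  # valid for one hour

if presigned_url:
    # The presigned URL can be fetched directly, without the Authorization header
    content = requests.get(presigned_url)
    with open(metadata["fileName"], "wb") as f:
        f.write(content.content)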
File Content
Get the file's content
curl https://api.xsignals.xpansiv.com/file/<id>/download
This endpoint will download the contents of a given fileId.
HTTP Request
GET https://api.xsignals.xpansiv.com/file/<id>/download
URL Parameters
Parameter | Description |
---|---|
fileId | The ID of the file to download |
HTTP Response
The response is the file's content.
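Equivalently, the content can be streamed to disk from Python. A minimal sketch with the requests library and a placeholder token and file ID:
import requests

API = "https://api.xsignals.xpansiv.com"
headers = {"Authorization": "Bearer yourAccessToken"}

resp = requests.get(f"{API}/file/yourFileId/download", headers=headers, stream=True)
resp.raise_for_status()
with open("downloaded_file.csv", "wb") as out:
    for chunk in resp.iter_content(chunk_size=8192):
        out.write(chunk)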
File Search by Metadata
Search files with metadata field1 = value1 and return the first 10 hits:
curl "https://api.xsignals.xpansiv.com/file/search?size=10&from=0&query=value1"
Response:
{
"from": 0,
"totalSize": 4,
"files": [
{
"fileName": "file name",
"fileType": "file type [SOURCE|NCSV|DOWNLOADER|DATA_PREP|GDP]",
"groupName": "group name",
"fid": "file id",
"fields": {
"text": "some text",
...
},
"arrivalTime": "2018-01-01T12:00:00",
"owner": "file owner",
"size": 100
},
...
]
}
This endpoint executes a search for files with given metadata fields and values.
HTTP Request
GET https://api.xsignals.xpansiv.com/file/search?size=<size>&from=<from>&query=<value>
URL Parameters
Parameter | Description |
---|---|
size | Maximum number of files to return per request. |
from | Index of the first result to return (pagination offset). |
query | Value of the metadata field to look for. |
HTTP Response
The response is a JSON object with a list of files. See the right pane.
File Search By Regexp
Search files matching value1 and return the first 10 hits:
curl "https://api.xsignals.xpansiv.com/file/search/regexp?size=10&from=0&query=value1" \
  -H "Authorization: Bearer yourNgToken"
Response:
{
"from": 0,
"totalSize": 4,
"files": [
{
"fileName": "file name",
"fileType": "file type [SOURCE|NCSV|DOWNLOADER|DATA_PREP|GDP]",
"groupName": "group name",
"fid": "file id",
"fields": {
"text": "some text",
...
},
"arrivalTime": "2018-01-01T12:00:00",
"owner": "file owner",
"size": 100
},
...
]
}
This endpoint executes a regexp search for files with the given metadata fields and values.
HTTP Request
GET https://api.xsignals.xpansiv.com/file/search/regexp?size=<size>&from=<from>&query=<value>
URL Parameters
Parameter | Description |
---|---|
size | Maximum number of files to return per request. |
from | Index of the first result to return (pagination offset). |
query | Value of the metadata field to look for. |
HTTP Response
The response is a JSON object with a list of files. See the right pane.
Time Series
Search Deltas and Corrections
curl -X POST \
  https://api.xsignals.xpansiv.com/ts \
  -d '{
    "corrections": true,
    "delta": true,
    "range": "BETWEEN",
    "startDate": "2022-01-01T00:00:00",
    "endDate": "2022-01-07T23:59:59",
    "formatType": "NCSV",
    "groupName": "groupName"
  }'
The above command returns the dataset in csv format, e.g.:
Instrument,Market,Order Ref,Original,Side,Date(America/New_York),Balance,Ccy(S),Country(S),Date In(S),Instrument Mkt Name(S),Instrument Name(S),Price,Project ID(S),Project Type(S),Quantity,Time In(S),Type(S),ValueTax,Vintage(S)
INST1,SIP,123,321,BID,2022-01-01T16:00:00,20,USD,,2022.01.01,INST1-SIP,Instrument 1 Emission Offset,1.23,,,20,9:54:01,LIMIT,,
INST2,Voluntary,456,654,ASK,2022-01-01T16:00:00,1000,USD,Asia/China,2022.01.01,INST2-Voluntary,Instrument 2,2.3,ABC,Biogas - Cogeneration,1300,8:10:01,LIMIT,,
INST3,Voluntary,789,987,ASK,2022-01-01T16:00:00,20000,USD,Asia/India,2022.01.01,INST3-Voluntary,Instrument 2,4.5,2026,Energy Industries - renewable/non-renewable sources,500,13:48:17,AON,,
This endpoint returns deltas (latest values) and optionally corrections for a given time period.
HTTP Request
POST https://api.xsignals.xpansiv.com/ts
Body Parameters
Parameter | Description |
---|---|
corrections | [false / true] - Return all versions of updated values |
delta | [false / true] - Return new values |
range | [BETWEEN] |
startDate | Date from (ISO 8601) |
endDate | Date to (ISO 8601) |
formatType | [NCSV / STANDARD] |
groupName | Name of group to be searched |
dateFormat | (Optional) - Date format string, defaults to yyyy-MM-dd'T'HH:mm:ss.SSS |
onDateTime | (Optional) - Date, shows results as it was at the given time (ISO 8601) |
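The same request can be issued from Python and the NCSV output written to disk. A minimal sketch with the requests library and a placeholder token and group name:
import requests

API = "https://api.xsignals.xpansiv.com"
body = {
    "corrections": True,
    "delta": True,
    "range": "BETWEEN",
    "startDate": "2022-01-01T00:00:00",
    "endDate": "2022-01-07T23:59:59",
    "formatType": "NCSV",
    "groupName": "groupName",          # placeholder
}
resp = requests.post(f"{API}/ts", json=body,
                     headers={"Authorization": "Bearer yourAccessToken"})
resp.raise_for_status()
with open("deltas.csv", "w") as f:
    f.write(resp.text)                 # CSV output as shown on the right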
Search Time Series Symbols
curl "https://api.xsignals.xpansiv.com/ts/symbol/search?query=<symbol>&sortBy=<sortFields>"
<symbol> is a collection of symbol field names and values separated by '&', e.g.
symbol.Country=USA&symbol.name=test
<sortFields> is a collection of fields to sort by, asc or desc, e.g.
symbol.Country=desc&symbol.name=asc
The above command returns a JSON-structured response, e.g.:
{
"from": 0,
"totalSize": 2,
"items": [
{
"groupName": "group name",
"symbols": {
"SYM1": "A1",
"SYM2": "B1"
},
"metadata": ["meta 1", "meta 2"],
"values": ["val 1", "val 2"]
},
...
]
}
This endpoint returns symbols for data stored in timeseries.
HTTP Request
GET https://api.xsignals.xpansiv.com/ts/symbol/search?query=<symbol>
URL Parameters
Parameter | Description |
---|---|
symbol | Name of the symbol to search. |
size | Maximum number of items to return per request. |
from | Index of the first item to return (pagination offset). |
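A minimal Python sketch of a symbol search is shown below; it uses the requests library, a placeholder token, and an illustrative groupName query.
import requests

API = "https://api.xsignals.xpansiv.com"
headers = {"Authorization": "Bearer yourAccessToken"}

params = {"query": "groupName=My Group", "size": 100, "from": 0}
resp = requests.get(f"{API}/ts/symbol/search", headers=headers, params=params)
resp.raise_for_status()
for item in resp.json()["items"]:
    print(item["groupName"], item["symbols"])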
Get Time Series Data by Symbols
curl -X POST \
  "https://api.xsignals.xpansiv.com/ts?size=5" \
  -d '{
    "startDate": "2017-01-11T10:01:03",
    "endDate": "2017-12-11T10:01:03",
    "stats": "true",
    "timeZone": "America/New_York",
    "corrections": true,
    "onDateTime": "2018-12-01T10:01:03.999",
    "formatType": "STANDARD",
    "keys": [{
      "columns": ["Open", "Close", "High"],
      "symbols": {
        "SYM1": "A1",
        "SYM2": "B1"
      },
      "groupName": "fileGroupName",
      "timeZone": "America/Chicago"
    }]
  }'
With the "formatType": "STANDARD", which is the default value, the above command returns JSON or CSV determined by the 'Accept' header i.e. -H 'Accept: application/json' or -H 'Accept: text/csv' respectively.
Standard format type:
{
"hasNextPage": true,
"columns": [
{
"symbols": {
"SYM2": "B1",
"SYM1": "A1"
},
"valueName": "Value"
}
],
"values": [
{
"time": "2018-01-01T12:00:00",
"values": [
14.1
]
}, ....
],
"stats": [
{
"name": "count",
"values": [
20.0
]
}, ....
]
}
CSV format(by header):
time,"[B1, A1]|High","[B1, A1]|Close","[B1, A1]|Open"
1501218000000,73.11,72.11,71.11
1503896400000,83.11,82.11,81.11
1506574800000,93.11,92.11,91.11
...
This endpoint retrieves timeseries data for specific symbols in the provided time range. If a symbol does not exist, its column is not displayed.
HTTP Request
POST https://api.xsignals.xpansiv.com/ts?size=<size>
URL Parameters
Parameter | Description |
---|---|
size | Maximum number of rows returned per request. |
Request Body
- startDate - date from (ISO 8601)
- endDate - date to (ISO 8601)
- stats - [true / false] statistics
- timeZone - time zone offset (default value is UTC, must be valid)
- corrections - false / true
- onDateTime - show results as it was at the given time (ISO 8601)
- keys - symbols to get from timeseries
- symbols - map of name value
- groupName - group name
- columns - list of column names; if the columns value is missing, the API returns all columns
- timeZone - timezone name for the symbol
- formatType - type of format returned by timeseries (optional, default STANDARD); available types: STANDARD, PANDAS, or NCSV
- metadata - [false / true] includes the symbol's metadata in the response
- metadataType - [false / true] includes metadata types in the response
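A minimal Python sketch of this request is shown below, assuming the requests library; the token, keys, columns and group name are placeholders, and the Accept header selects JSON as described above.
import requests

API = "https://api.xsignals.xpansiv.com"
body = {
    "startDate": "2017-01-11T10:01:03",
    "endDate": "2017-12-11T10:01:03",
    "formatType": "STANDARD",
    "keys": [{
        "columns": ["Open", "Close", "High"],
        "symbols": {"SYM1": "A1", "SYM2": "B1"},
        "groupName": "fileGroupName",
    }],
}
resp = requests.post(f"{API}/ts?size=5", json=body,
                     headers={"Authorization": "Bearer yourAccessToken",
                              "Accept": "application/json"})
resp.raise_for_status()
data = resp.json()
print(data["columns"], len(data["values"]), "rows")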
Errors
The XSignals API uses the following error codes:
Error Code | Meaning |
---|---|
400 | Bad Request -- The request you sent is incorrect. |
401 | Unauthorized -- Your API key is wrong. |
403 | Forbidden -- The requested resource is hidden and may be accessed by the administrators only. |
404 | Not Found -- The specified resource could not be found. |
405 | Method Not Allowed -- You tried to access a resource with an invalid method. |
406 | Not Acceptable -- You requested a format that isn't json. |
410 | Gone -- The resource requested has been removed from our servers. |
418 | I'm a teapot. |
429 | Too Many Requests -- You're requesting too many times! Slow down! |
500 | Internal Server Error -- We had a problem with our server. Try again later. |
503 | Service Unavailable -- We're temporarily offline for maintenance. Please try again later. |
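Client-side handling of these codes is up to you; the retry policy below is only an illustrative Python sketch (requests library), not something mandated by the API.
import time
import requests

def get_with_retry(url, headers, retries=3):
    for attempt in range(retries):
        resp = requests.get(url, headers=headers)
        if resp.status_code == 429:     # Too Many Requests - back off and retry
            time.sleep(2 ** attempt)
            continue
        if resp.status_code == 401:     # Unauthorized - obtain a fresh token and retry manually
            raise PermissionError("Access token rejected; re-authenticate")
        resp.raise_for_status()         # surface any other 4xx/5xx as an exception
        return resp
    raise RuntimeError("Exhausted retries after repeated 429 responses")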
API Examples
The following are examples of common use cases of our API.
Pre-Requisites
- You must have been on-boarded to the XSignals platform and have been entitled to the appropriate data products.
- You will need to authenticate to the XSignals platform to obtain the bearer token. For more information, see: Authentication
Downloading a File via cURL
This walkthrough uses 3 steps to retrieve the latest file contents for a given group.
List Available Groups
curl --request GET 'https://api.xsignals.xpansiv.com/file/group/' --header 'Authorization: Bearer <REDACTED>'
{
"from": 0,
"totalSize": 2,
"items": [
{
...
"groupId": 1234,
"name": "CBL_Active Orders Voluntary Carbon",
"description": "Carbon Active orders by date on CBL.",
"type": "DATA",
"createTime": 1632871153976,
"fields": []
},
...
]
}
Finding Latest File in Given Group
curl --request GET \
'https://api.xsignals.xpansiv.com/file/search?size=1&query=groupName%3dCBL_Active%20Orders%20Voluntary%20Carbon&sortBy=arrivalTime&sortOrder=desc' \
--header 'Authorization: Bearer <REDACTED>'
{
"from": 0,
"totalSize": 34,
"items": [
{
"fileName": "Orders Active Voluntary Carbon.csv",
"fileType": "NCSV",
"groupName": "CBL_Active Orders Voluntary Carbon",
"fid": "69a432b6-9526-4d92-8a89-d86330ca95c4",
"fields": {
...
},
...
}
]
}
Downloading a File
curl --request GET \
'https://api.xsignals.xpansiv.com/file/69a432b6-9526-4d92-8a89-d86330ca95c4/download' \
--header 'Authorization: Bearer <REDACTED>'
Instrument,Market,Order Ref,Original,Side,Date(America/New_York),Balance,Ccy(S),Country(S),Date In(S),Instrument Mkt Name(S),Instrument Name(S),Price,Project ID(S),Project Type(S),Quantity,Time In(S),Type(S),ValueTax,Vintage(S),TimeZone(MD-S)
GEO,SIP,0001234567,0001234567,BID,2021-11-16T16:00:00,5000.00,USD,,2021.11.05,GEO-VCS,Global Emissions Offset,9.99,,,5000.00,17:28:39,LIMIT,0.00,,America/New_York
GEO,SIP,0007898765,0007898765,ASK,2021-11-16T16:00:00,10000.00,USD,,2021.11.08,GEO-VCS,Global Emissions Offset,9.99,,,10000.00,18:27:55,LIMIT,0.00,,America/New_York
...
Instructions
- List Available Groups - Retrieve the list of groups you are entitled to.
  - HTTP Request:
    curl --request GET 'https://api.xsignals.xpansiv.com/file/group/' --header 'Authorization: Bearer <REDACTED>'
  - Headers:
    Authorization: Bearer <>
  - Note the name of the group you would like to search.
- Find Latest File in Given Group - Retrieve the details of the latest file available within a given group.
  - HTTP Request:
    curl --request GET 'https://api.xsignals.xpansiv.com/file/search?size=1&query=groupName%3d<groupName>&sortBy=arrivalTime&sortOrder=desc' --header 'Authorization: Bearer <REDACTED>'
  - Headers:
    Authorization: Bearer <>
  - URL Parameters:
    query=groupName%3d<groupName> - use the group name returned in step 1.
    Important: The query string MUST be urlencoded.
  - Note the fid of the file returned in the response above.
- Downloading a File - Download the contents of the latest file.
  - HTTP Request:
    curl --request GET 'https://api.xsignals.xpansiv.com/file/<fid>/download' --header 'Authorization: Bearer <REDACTED>'
  - Headers:
    Authorization: Bearer <>
  - URL Parameters:
    <fid> - the fid returned in step 2.
Pulling Deltas from Time Series for Given Group and Time Range
This walkthrough uses 2 steps to retrieve the latest values from time series for a given group and time range.
List Available Groups
curl --request GET 'https://api.xsignals.xpansiv.com/file/group/' --header 'Authorization: Bearer <REDACTED>'
{
"from": 0,
"totalSize": 2,
"items": [
{
...
"groupId": 1234,
"name": "CBL_Active Orders Voluntary Carbon",
"description": "Carbon Active orders by date on CBL.",
"type": "DATA",
"createTime": 1632871153976,
"fields": []
},
...
]
}
Request Deltas for Given Group and Date Range
curl --location --request POST 'https://api.xsignals.xpansiv.com/ts' \
--header 'Authorization: Bearer <REDACTED>' \
--data-raw '{
"corrections": true,
"delta": true,
"range": "BETWEEN",
"startDate": "2022-01-20T19:38:00",
"endDate": "2022-01-20T19:49:59",
"formatType": "NCSV",
"groupName": "CBL_Active Orders Voluntary Carbon"
}
'
Instrument,Market,Order Ref,Original,Side,Date(America/New_York),Balance,Ccy(S),Country(S),Date In(S),Instrument Mkt Name(S),Instrument Name(S),Price,Project ID(S),Project Type(S),Quantity,Time In(S),Type(S),ValueTax,Vintage(S)
INST1,SIP,123,321,BID,2022-01-01T16:00:00,20,USD,,2022.01.01,INST1-SIP,Instrument 1 Emission Offset,1.23,,,20,9:54:01,LIMIT,,
INST2,Voluntary,456,654,ASK,2022-01-01T16:00:00,1000,USD,Asia/China,2022.01.01,INST2-Voluntary,Instrument 2,2.3,ABC,Biogas - Cogeneration,1300,8:10:01,LIMIT,,
INST3,Voluntary,789,987,ASK,2022-01-01T16:00:00,20000,USD,Asia/India,2022.01.01,INST3-Voluntary,Instrument 2,4.5,2026,Energy Industries - renewable/non-renewable sources,500,13:48:17,AON,,
Instructions
- List Available Groups - Retrieve the list of groups you are entitled to.
  - HTTP Request:
    curl --request GET 'https://api.xsignals.xpansiv.com/file/group/' --header 'Authorization: Bearer <REDACTED>'
  - Headers:
    Authorization: Bearer <>
  - Note the name of the group you would like to search.
- Get Time Series Data - Retrieve time series deltas for a given group and date range.
  - HTTP Request:
    curl --location --request POST 'https://api.xsignals.xpansiv.com/ts' --header 'Authorization: Bearer <REDACTED>' --data-raw '{ <SEE BODY PARAMETERS> }'
  - Headers:
    Authorization: Bearer <>
  - Body Parameters:
    - corrections -> [false / true]
    - delta -> [false / true]
    - range -> [BETWEEN]
    - startDate -> date from (ISO 8601)
    - endDate -> date to (ISO 8601)
    - formatType -> [NCSV / STANDARD]
    - groupName -> Name of group to be searched
Automating Time Series Pulls
#!/bin/bash
GROUPNAME=CBL_Active%20Orders%20Voluntary%20Carbon
#GROUPNAME=CBL_Trades%20Voluntary%20Carbon
SYMS=$(curl -s -n "https://api.xsignals.xpansiv.com/ts/symbol/search?size=500&query=groupName=${GROUPNAME}&tokenExpiration=10m")
ITEMS=$(jq -r '.items[] | {symbols, groupName}' <<< "$SYMS" | jq -s '.')
TOTALSIZE=$(jq -c '.totalSize' <<< "$SYMS")
LENGTH=$(jq -r '.items | length' <<< "$SYMS")
TOKEN=$(jq -r '.nextPageToken' <<< "$SYMS")
PL="{\"formatType\": \"NCSV\", \"keys\": ${ITEMS} }"
curl -s -n "https://api.xsignals.xpansiv.com/ts" -d "${PL}" > data.csv
while [ $LENGTH -lt $TOTALSIZE ]; do
SYMS=$(curl -s -n "https://api.xsignals.xpansiv.com/ts/symbol/search?size=500&query=groupName=${GROUPNAME}&tokenExpiration=10m&nextPageToken=${TOKEN}")
ITEMS=$(jq -r '.items[] | {symbols, groupName}' <<< "$SYMS" | jq -s '.')
NLENGTH=$(jq -r '.items | length' <<< "$SYMS")
# refresh the page token for the next iteration
TOKEN=$(jq -r '.nextPageToken' <<< "$SYMS")
PL="{\"formatType\": \"NCSV\", \"keys\": ${ITEMS} }"
curl -s -n "https://api.xsignals.xpansiv.com/ts" -d "${PL}" | grep -v 'Instrument,Market' >> data.csv
let LENGTH=LENGTH+NLENGTH
done
A sample bash script that iterates through a day's results, concatenating them into a file named data.csv.
Note: This script requires jq to be available locally.
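For reference, the following is a minimal Python sketch of the same pagination loop, using the requests library and a placeholder bearer token; like the bash script, it relies on nextPageToken to walk through the symbol pages.
import requests

API = "https://api.xsignals.xpansiv.com"
headers = {"Authorization": "Bearer yourAccessToken"}
group = "CBL_Active Orders Voluntary Carbon"

pages = []
params = {"size": 500, "query": f"groupName={group}", "tokenExpiration": "10m"}
while True:
    page = requests.get(f"{API}/ts/symbol/search", headers=headers, params=params).json()
    items = page.get("items", [])
    if not items:
        break
    keys = [{"symbols": i["symbols"], "groupName": i["groupName"]} for i in items]
    ts = requests.post(f"{API}/ts", headers=headers,
                       json={"formatType": "NCSV", "keys": keys})
    pages.append(ts.text)
    token = page.get("nextPageToken")
    if not token:
        break
    params["nextPageToken"] = token

# Repeated NCSV header rows would still need stripping, as the bash script does with grep
with open("data.csv", "w") as f:
    f.write("\n".join(pages))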
Data Dictionary
Misc
XPANSIV XSIGNALS PYTHON SDK
This document describes the Xpansiv Python SDK which enables external users to use the Xpansiv platform tools within their Python scripts.
The Python SDK can be used within:
a single Python script that is run via the Python Runner task within a pipeline in the Xpansiv application
a single Jupyter Notebook that is run via the Jupyter Runner task within a pipeline in the Xpansiv application
an ensemble of Python scripts that are part of a container, for a Task created by the user, used in a pipeline in the Xpansiv application
Examples of usage can be found at the bottom of the documentation.
Note that the SDK does not cover everything from API documentation but rather commonly used features.
The scope of the SDK:
Datalake Handler - downloading / uploading / reading files from the data lake
Status Handler - sending statuses about the task run
Task Handler - enables communication between tasks within a pipeline and reading/writing parameters
Time Series Handler - retrieves data directly from the time series database
How to install and set up the package: Install
pip3 install xpansiv_xsignals==1.0.0
As the library is available from pip, it can be installed as a specific version within a Python Task from within requirements.txt just by adding:
xpansiv_xsignals==1.0.0
The package relies on the requests library, so the user must also add this library to the project's requirements.txt file.
pip3 install requests
Environment Variables
The package uses information from environment variables. They are automatically provided when running a script within a pipeline (as a Task or within the Python/Jupyter Runners). If running the script locally, users must set them in the project.
Mandatory environment variables to set:
LOGIN → login received from Xpansiv
PASSWORD → password to log in. Credentials are used to generate the token so that each request is authenticated.
NG_API_ENDPOINT → the URL to the Xpansiv platform API (by default, the url is set to https://api.xsignals.xpansiv.com)
This allows the authentication process to complete and directs users' requests to the Xpansiv environment API. Alternatively, instead of LOGIN and PASSWORD, you may set the NG_API_KEY environment variable to an API key generated within the Xpansiv platform to enable the authentication process.
For SSO-only user accounts, the following environment variables for Auth0 must also be set:
CLIENT_ID
CLIENT_SECRET
When using an NG_API_ENDPOINT different from the default, also set the following environment variables: DOMAIN, GRANT_TYPE, REALM, SCOPE, AUDIENCE.
The full SSO user setup in code would look like the following:
import os
os.environ['LOGIN'] = ''
os.environ['PASSWORD'] = ''
os.environ['CLIENT_ID'] = ''
os.environ['CLIENT_SECRET'] = ''
os.environ['NG_API_ENDPOINT'] = ''
os.environ['DOMAIN'] = ''
os.environ['GRANT_TYPE'] = ''
os.environ['REALM'] = ''
os.environ['SCOPE'] = ''
os.environ['AUDIENCE'] = ''
import xpansiv_xsignals as xp
Other variables may be useful when creating the tasks within the platform:
NG_STATUS_GROUP_NAME → the group on the data lake where the pipeline is located, and is used to display the statuses
JOBID → any value; when the pipeline is executed, this value is set by the Xpansiv platform
PIPELINE_ID → any value; when the pipeline is created, this value is set by the Xpansiv platform
Datalake Handler
How to download or read a file from data lake by its name? The DatalakeHandler class can be used as follows within a script to download or upload a file:
import xpansiv_xsignals as xp
import pandas as pd
dh = xp.DatalakeHandler()
dh.download_by_name(file_name='my_file.csv',
group_name='My Group',
file_type='SOURCE',
dest_file_name='folder/local_name.csv',
save=True,
unzip=False)
fileIO = dh.download_by_name(file_name='my_file.csv',
group_name='My Group',
file_type='SOURCE',
dest_file_name=None,
save=False,
unzip=False)
df = pd.read_csv(fileIO)
The download method allows you to either:
- download and save the wanted file locally, if save=True, or
- read the file directly from the datalake and get a BytesIO object (kept in memory only, which can, for example, be read by pandas as a dataframe directly).
Note that by default:
- the file is NOT saved locally, but returned as a BytesIO object (streamed from the datalake);
- the argument dest_file_name=None, which (when saving) stores the downloaded file in the root folder with its original name.
How to download or read a file from data lake by its ID? If the file ID is known, the file can be directly downloaded/read as follows:
import xpansiv_xsignals as xp
import pandas as pd
dh = xp.DatalakeHandler()
dh.download_by_id(file_id='XXXX-XXXX',
dest_file_name='folder/local_name.csv',
save=True,
unzip=False)
fileIO = dh.download_by_id(file_id='XXXX-XXXX',
dest_file_name=None,
save=False,
unzip=False)
df = pd.read_csv(fileIO)
The download method allows you to either:
- download and save the wanted file locally, if save=True, or
- read the file directly from the datalake and get a BytesIO object (kept in memory only, which can, for example, be read by pandas as a dataframe directly).
Note that by default:
- the file is NOT saved locally, but returned as a BytesIO object (streamed from the datalake);
- the argument dest_file_name=None, which (when saving) stores the downloaded file in the root folder with its original name.
How to upload a file to the data lake? The uploading method uploads the file at the specified path to the given group and returns its ID on the lake:
import xpansiv_xsignals as xp
dh = xp.DatalakeHandler()
file_id = dh.upload_file(file='path/local_name.csv',
group_name='My Group',
file_upload_name='name_in_the_datalake.csv')
It is also possible to stream a Python object's content directly to the datalake from memory, without having to save the file to disk.
The prerequisite is to pass a BytesIO object to the uploading method (not other objects such as a pandas DataFrame).
import xpansiv_xsignals as xp
import io
dh = xp.DatalakeHandler()
fileIO = io.BytesIO(df.to_csv().encode())
file_id = dh.upload_file(file=fileIO,
group_name='My Group',
file_upload_name='name_in_the_datalake.csv')
Timeseries Queries
How to get the list of existing symbols for a given group? Data saved in the time series database is structured by group, keys and timestamp. Each set of keys has unique date entries in the database, with corresponding column values.
To explore the available symbols for a given group, the following method can be used:
import xpansiv_xsignals as xp
ts = xp.Timeseries()
group = 'My Group'
group_symbols = ts.get_symbols(group_name=group)
The default size of the returned list is 1 000 items.
Note that the return object from the get_symbols() method is a JSON (python dict) where the keys and columns are accessible in the items key of the JSON.
How to query by metadata or descriptions?
To find symbols by querying metadata, column or symbol names, the search_for parameter may be used. It will look for the passed string in the whole time series database and return JSON with the keys and columns where the searched string appears.
import xpansiv_xsignals as xp
ts = xp.Timeseries()
search = 'Data description'
searched_symbols = ts.get_symbols(search_for=search)
Passing both the group_name and search_for parameters to the get_symbols() method narrows the results down to the selected group. The user must provide either group_name or search_for to the method in order to obtain the symbols.
If trying to get the list of symbols from a group that contains more than 1 000 items, the results will be paginated (by default into chunks of 1 000 items). To navigate large result sets, the get_symbols() method takes the size of the returned list and the from page as extra arguments:
import xpansiv_xsignals as xp
ts = xp.Timeseries()
group = 'My Group'
group_symbols = ts.get_symbols(group_name=group, _size=200, _from=5)
By default, these parameters are _size=2000 (the maximum limit for the items lengths) and _from=0.
How to read data from Timeseries database? It is possible to use the SDK to directly query the TimeSeries database for data, given the symbol's keys, columns and the datalake group it is stored on.
In the application, this is similar to creating a Dataprep instance that selects a set of symbols from groups into a basket.
The retrieved data can be:
- streamed directly to memory and retrieved as a BytesIO object, by setting file_name to None (the default value), or
- saved as a CSV file locally, with the provided path and name as file_name.
The symbols are the keys the data was saved with in the database. For a given symbol, all the keys must be passed as a dictionary object with the key name and value. It is possible to use * as a wildcard for a key's values, to get all the values for that key.
The wanted columns are then passed as a list that can contain one or more items. If an empty list [ ] is passed to the function, it returns all the available columns.
To read all available data for specific symbols and columns with no time frame, do not pass a start or end date to the method.
Extra settings are also available when querying data:
- Metadata: whether or not to return it with the query
- Format: either a Normalized CSV (NCSV) or a dataframe format
- Timezone: get the timestamps in the timezone of the user's account, or get the data in a specific timezone
- Corrections: how to handle corrections to the TimeSeries database (corrections set to 'yes', 'no', 'history' or 'only')
- Delta: whether to show values by data timestamp (delta=False) or insert time (delta=True)
The following code shows an example of how to query the TimeSeries database:
import xpansiv_xsignals as xp
import pandas as pd
ts = xp.Timeseries()
symbols = {'Key1': "Val1", "Key2": "Val2"}
columns = ['Open', 'Close']
ts.retrieve_data_as_csv(file_name='test/query.csv',
symbols=symbols,
columns=columns,
group_name='My Group'
)
df = pd.read_csv("test/query.csv")
fileIO = ts.retrieve_data_as_csv(file_name=None,
symbols=symbols,
columns=columns,
group_name='My Group'
)
df = pd.read_csv(fileIO)
How to read data from Timeseries for specific dates? To retrieve data within a specific time frame, the user can specify the start and end date.
The start and end date can take two forms:
only date (e.g., 2021-01-04)
date and time (e.g., 2021-02-01T12:00:00; ISO format must be followed)
For example, if the user specifies start_date=2021-02-01 and end_date=2021-02-06, data will be retrieved from 2021-02-01 00:00:00 until 2021-02-06 23:59:59.
If date and time are specified, data will be retrieved exactly for the specified time frame.
Note that the ISO format must be followed: YYYY-MM-DDTHH:mm:ss. Pay attention to the "T" letter between the date and time.
import xpansiv_xsignals as xp
import pandas as pd
ts = xp.Timeseries()
# Symbols to query from database
symbols = {'Key1': "Val1", "Key2": "Val2"}
columns = 'Open'
ts.retrieve_data_as_csv(file_name='test/test.csv',
symbols=symbols,
columns=columns,
group_name='My Group',
start_date='2021-01-04',
end_date='2021-02-05'
)
ts.retrieve_data_as_csv(file_name='test/test.csv',
symbols=symbols,
columns=columns,
group_name='My Group',
start_date='2021-01-04T12:30:00',
end_date='2021-02-05T09:15:00'
)
fileIO = ts.retrieve_data_as_csv(file_name=None,
symbols=symbols,
columns=columns,
group_name='My Group',
start_date='2021-01-04',
end_date='2021-02-05'
)
df = pd.read_csv(fileIO)
How to use a wildcard for a key's values? To get all the values for one or several keys in the query, the character * can be used as a wildcard. The argument allow_wildcard should be set to True in the retrieval function to enable the use of wildcards.
Please note that by default, the use of wildcards is DISABLED.
import xpansiv_xsignals as xp
import pandas as pd
ts = xp.Timeseries()
symbols = {'Key1': "Val1", "Key2": "Val2", "Key3": "*"}
columns = ['Open', 'Close']
fileIO = ts.retrieve_data_as_csv(file_name=None,
symbols=symbols,
columns=columns,
group_name='My Group',
allow_wildcard=True
)
df = pd.read_csv(fileIO)
How to get all the columns for a given set of keys? To get all the column values for a given set of keys in the database, the query can take an empty list as the queried columns, as follows:
import xpansiv_xsignals as xp
import pandas as pd
ts = xp.Timeseries()
symbols = {'Key1': "Val1", "Key2": "Val2", "Key3": "Val3"}
columns = []
fileIO = ts.retrieve_data_as_csv(file_name=None,
symbols=symbols,
columns=columns,
group_name='My Group'
)
df = pd.read_csv(fileIO)
Note that this configuration can be used with key wildcards (with allow_wildcard=True) and any other setting.
How to modify the Time Zone of the data? The timestamps in the queried time series are set by default to the timezone of the account of the user who created the script or pipeline. It is shown in the Date column header between brackets (for example Date(UTC)).
To modify the time zone of the retrieved dataset, the timezone can be passed directly to the retrieval function as follows. It must respect the Continent/City format.
import xpansiv_xsignals as xp
import pandas as pd
ts = xp.Timeseries()
symbols = {'Key1': "Val1", "Key2": "Val2"}
columns = ['Open', 'Close']
fileIO = ts.retrieve_data_as_csv(file_name=None,
symbols=symbols,
columns=columns,
group_name='My Group',
timezone='Europe/London'
)
df = pd.read_csv(fileIO)
How to get the metadata along with the data? It is possible to get extra columns in the retrieved data, along with the keys & columns values, containing the metadata of the symbols. It is done by setting the argument metadata=True in the retrieval function.
By default, no metadata is included in the queried data.
import xpansiv_xsignals as xp
import pandas as pd
ts = xp.Timeseries()
symbols = {'Key1': "Val1", "Key2": "Val2"}
columns = ['Open', 'Close']
fileIO = ts.retrieve_data_as_csv(file_name=None,
symbols=symbols,
columns=columns,
group_name='My Group',
metadata=True
)
df = pd.read_csv(fileIO)
How to modify the format of the received data? The queried data comes by default in Normalized CSV format (NCSV), with, in this order:
- the key columns,
- the date column, with timestamps in either the default timezone or the specified one (timezone argument in the function),
- the value columns,
- the metadata columns, if wanted (metadata=True).
By setting NCSV=False in the retrieval method, the data will be returned as Dataframe format (PANDAS in API docs), as a JSON. The JSON (python dict) has timestamps as keys and a dictionary containing pairs of symbols_columns and their value.
import xpansiv_xsignals as xp
import pandas as pd
ts = xp.Timeseries()
symbols = {'Key1': "Val1", "Key2": "Val2"}
columns = ['Open', 'Close']
file_json = ts.retrieve_data_as_csv(file_name=None,
symbols=symbols,
columns=columns,
group_name='My Group',
metadata=True,
NCSV=False
)
df = pd.DataFrame(file_json).T
Note that the dataframe, created from the JSON containing the data, is then transposed to have timestamps as DatetimeIndex (along rows axis).
Task Handler
Users can extend the existing set of tasks on Xpansiv platform by executing scripts or notebooks respectively from the Python Runner Task or the Jupyter Runner Task.
This task can then be used in a pipeline and communicate with other tasks by:
- reading outputs from other tasks, as inputs
- writing outputs that can be used by other tasks as inputs
A task can also receive inputs directly as a file picked from the datalake, either as a specific file or as the newest available version of a file on the lake, given its name and group.
Within a Python script run in either of the Runner Tasks, this is implemented as follows:
Read a Task Input
The input file passed to the Task can be:
- either downloaded to the disk,
- or read on the fly (useful when disk space is limited but memory is not).
import xpansiv_xsignals as xp
import pandas as pd
th = xp.TaskHandler()
th.download_from_input_parameter(arg_name='Input #1',
dest_file_name='data.csv',
save=True)
file_content = th.download_from_input_parameter(arg_name='Input #2',
dest_file_name=None,
save=False)
df = pd.read_csv(file_content)
If dest_file_name=None then the file is saved on the disk with its original name from the datalake.
How to obtain additional file information
There is a possibility to retrieve information related to the input parameters by using the DatalakeHandler (dh.get_info_from_id()) and TaskHandler (th.read_task_parameter_value()) together.
import xpansiv_xsignals as xp
import pandas as pd
th = xp.TaskHandler()
dh = xp.DatalakeHandler()
dh.get_info_from_id(th.read_task_parameter_value('Input #1'))
This can be used to retrieve any metadata associated with the file itself, such as the file name or arrival time (the time it was uploaded to the system).
Set a Task Output
The output of the Task can be set:
- either by uploading a file saved on the disk to the datalake,
- or by streaming the Python object's content to the datalake as the destination file.
Once uploaded, the Output is set to point to the file on the datalake (by its ID, name, and group name).
import xpansiv_xsignals as xp
import io
th = xp.TaskHandler()
th.upload_to_output_parameter(output_name='Output #1',
file='path/dataset.csv',
group_name='My Final Group',
file_upload_name=None,
file_type='SOURCE')
df_io = io.BytesIO(df.to_csv().encode())
th.upload_to_output_parameter(output_name='Output #1',
file=df_io,
group_name='My Final Group',
file_upload_name='dataset.csv',
file_type='SOURCE')
If file_upload_name=None then the saved file will be uploaded with its original name. If the file is streamed directly to the datalake, the file_upload_name argument must be set.
Statuses
Sending statuses can be used to show the progress of the task execution in the application. Three levels are available:
- FINISHED (green),
- WARNING (orange),
- ERROR (red).
Sending statuses remains optional, as the Xpansiv platform sends general statuses. It is only worth using if you need to pass specific information in the status.
import xpansiv_xsignals as xp
sh = xp.StatusHandler()
sh.send_status(status='INFO', message='Crucial Information')
sh.info(message='Pipeline Finished Successfully')
sh.warn(message='Something suspicious is happening ...')
sh.error(message='Oops, the task failed ...')
Note that the info status informs the status service that the task executed successfully and is finished.
There is also a possibility to send a warning status with a custom warning message under some circumstances and immediately stop the execution of the pipeline.
from xpansiv_xsignals.ExceptionHandler import PythonStepWarnException
i=1
if i>1:
    raise PythonStepWarnException(message='The value of i is bigger than 1! Stopping pipeline execution.')
Example 1 - OOP
To simplify the use of the SDK methods in a script, Python SDK methods can be inherited by the user’s main class.
Below is an example of a class that has 3 methods:
- Download raw data (or take from the previous task)
- Process the data
- Upload the data to datalake and pass it to the next task
import io
import xpansiv_xsignals as xp
import pandas as pd
class Runner:
    def __init__(self):
        self.handler = xp.TaskHandler()
        self.df = None

    def download_data(self):
        self.handler.download_from_input_parameter(arg_name='Dataset', dest_file_name='data.csv', save=True)
        return pd.read_csv("data.csv")

    def process_data(self, df):
        # placeholder for the user's own processing logic
        df_processed = df
        return df_processed

    def upload_data(self, df_processed):
        fileIO = io.BytesIO(df_processed.to_csv().encode())
        self.handler.upload_to_output_parameter(output_name='Processed Dataset', file=fileIO, group_name='Final Group')

    def run(self):
        df = self.download_data()
        df_processed = self.process_data(df)
        self.upload_data(df_processed)

status = xp.StatusHandler()
Runner().run()
status.info('Test Pipeline Finished')
Example 2 - functional programming
The SDK methods may also be used in simple functional-programming scripts. Below is an example of a script that:
- Downloads the data from Input #1
- Processes the data
- Uploads the data to the datalake and passes it to the next task
import os
os.environ['LOGIN'] = ''
os.environ['PASSWORD'] = ''
os.environ['Input #1'] = "{'name':'', 'groupName':''}"
os.environ['NG_STATUS_GROUP_NAME'] = ''
os.environ['NG_API_ENDPOINT'] = ''
import os, sys
import io
import xpansiv_xsignals as xp
import pandas as pd
import logging ; logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
log = logging.getLogger()
th = xp.TaskHandler()
df_io = th.download_from_input_parameter('Input #1')
df = pd.read_csv(df_io)
log.info('Data from Input #1 downloaded.')
df_processed = your_processing_function(df)
file_io = io.BytesIO(df_processed.to_csv(index=False).encode())
th.upload_to_output_parameter(output_name='Output #1',
file=file_io,
file_upload_name='',
group_name=os.environ.get('NG_STATUS_GROUP_NAME'),
file_type='SOURCE')
Who do I talk to?
- Admin: Xpansiv info@xpansiv.com
Subscription Notification Webhook API
This API allows clients to register webhooks to receive notifications for events related to file arrivals in the datalake and timeseries processing completion.
Clients can manage their subscriptions via a set of CRUD endpoints. The notifications include an HMAC signature (if a secret is provided) so that the client can verify the integrity and authenticity of the payload.
Note:
You must be a valid XPAN user. When making requests, include your NG token in the Authorization header.
For each subscription, the client supplies a callback URL (i.e., the webhook endpoint) and optionally a secret. This secret is later used to compute an HMAC SHA-256 signature of the notification payload.
Endpoints Overview
- POST https://api.xsignals.xpansiv.com/notification/subscription – Create a new subscription
- PUT https://api.xsignals.xpansiv.com/notification/subscription – Update an existing subscription
- DELETE https://api.xsignals.xpansiv.com/notification/subscription/{id} – Delete a subscription
- GET https://api.xsignals.xpansiv.com/notification/subscription/{id} – Retrieve a subscription by ID
- GET https://api.xsignals.xpansiv.com/notification/subscription – List all subscriptions for a user
Create Subscription
HTTP Request
curl -X POST "https://api.xsignals.xpansiv.com/notification/subscription" \
-H "Authorization: Bearer {yourAuthToken}" \
-H "Content-Type: application/json" \
-d '{
"callbackUrl": "https://your-callback-url.com/webhook",
"secret": "optionalSecret",
"groupEvents": [
{
"groupName": "DatalakeGroup1",
"eventType": "FILE_ARRIVAL",
"fileType": "SOURCE"
},
{
"groupName": "DatalakeGroup2",
"eventType": "TS_COMPLETE",
"fileType": ""
}
]
}'
Request Body Parameters
Parameter | Type | Description |
---|---|---|
callbackUrl | String, required | The URL to which notifications will be sent. |
secret | String, optional | A client-provided secret used to compute an HMAC signature for outgoing notifications. If omitted, no signature header is included. |
groupEvents | Array, required | A list of group events for which notifications are desired. Each event object must include the following fields: |
Each Group Event Object
Parameter | Type | Description |
---|---|---|
groupName | String, required | The name of the group. |
eventType | String, required | The event type. Valid values are: - FILE_ARRIVAL – Indicates that a file has arrived in the datalake. - TS_COMPLETE – Indicates that timeseries processing has finished. Note: For TS_COMPLETE events, the fileType field is ignorable. |
fileType | String, optional | The type of file that triggers the event. This field is required for FILE_ARRIVAL events. Valid file types are defined in the FileType enumeration below. |
FileType Enumeration
The valid file types are:
- SOURCE
- NCSV
- D_NCSV
- SYM_NCSV
- COMMAND_JSON
- DATA_PREP
- GDP
- PY_SCRIPT
- PY_EXEC
- DOWNLOADER
- BASKET
- SM_GDP
- INSIGHT
- OBJECTIVE
- PIPELINE
- TASK
- BLV
- EMAIL
- MARKDOWN
- CONFIG
- UICONFIG
Response
Code | Description |
---|---|
200 | Success |
400 | Bad Request. "Group with name=GroupName doesn't exist." - you try to create event for group that doesn't exist. |
400 | Bad Request. "Invalid eventType: FILE_ARRIVAL2. Valid values are: [FILE_ARRIVAL, TS_COMPLETE]" - provide valid eventType. |
400 | Bad Request. "Invalid fileType: fileType" - provide valid fileType. |
Update Subscription
This endpoint updates an existing subscription.
You can activate and deactivate a subscription using the active flag.
HTTP Request
curl -X PUT "https://api.xsignals.xpansiv.com/notification/subscription" \
-H "Authorization: Bearer {yourAuthToken}" \
-H "Content-Type: application/json" \
-d '{
"id": 10,
"callbackUrl": "https://your-callback-url.com/webhook",
"secret": "optionalSecret",
"active": true,
"groupEvents": [
{
"groupName": "DatalakeGroup1",
"eventType": "FILE_ARRIVAL",
"fileType": "NCSV"
},
{
"groupName": "DatalakeGroup2",
"eventType": "TS_COMPLETE",
"fileType": ""
}
]
}'
Request Body Parameters
Parameter | Type | Description |
---|---|---|
callbackUrl | String, required | The URL to which notifications will be sent. |
secret | String, optional | A client-provided secret used to compute an HMAC signature for outgoing notifications. If omitted, no signature header is included. |
active | Boolean, required | The active status of the subscription. |
groupEvents | Array, required | A list of group events for which notifications are desired. Each event object must include the following fields: |
Each Group Event Object
Parameter | Type | Description |
---|---|---|
groupName | String, required | The name of the group. |
eventType | String, required | The event type. Valid values are: - FILE_ARRIVAL – Indicates that a file has arrived in the datalake. - TS_COMPLETE – Indicates that timeseries processing has finished. Note: For TS_COMPLETE events, the fileType field is ignorable. |
fileType | String, optional | The type of file that triggers the event. This field is required for FILE_ARRIVAL events. Valid file types are defined in the FileType enumeration below. |
Response
Code | Description |
---|---|
200 | Success |
400 | Bad Request. "Group with name=GroupName doesn't exist." - you try to create event for group that doesn't exist. |
400 | Bad Request. "Invalid eventType: FILE_ARRIVAL2. Valid values are: [FILE_ARRIVAL, TS_COMPLETE]" - provide valid eventType. |
400 | Bad Request. "Invalid fileType: fileType" - provide valid fileType. |
Delete Subscription
This endpoint deletes an existing subscription.
HTTP Request
curl -X DELETE "https://api.xsignals.xpansiv.com/notification/subscription/{id}" \
-H "Authorization: Bearer {yourAuthToken}"
Request Path Parameters
Parameter | Type | Description |
---|---|---|
id | Number, required | The unique ID of the subscription to delete, provided in the URL. |
Response
Code | Description |
---|---|
200 | Success |
Get Subscription
This endpoint gets an existing subscription.
HTTP Request
curl -X GET "https://api.xsignals.xpansiv.com/notification/subscription/{id}" \
-H "Authorization: Bearer {yourAuthToken}"
Request Path Parameters
Parameter | Type | Description |
---|---|---|
id | Number, required | The unique ID of the subscription to get, provided in the URL. |
Response
Code | Description |
---|---|
200 | Success |
404 | Subscription not found |
Get List of Subscriptions For User
This endpoint gets the list of existing subscriptions for the requesting user.
HTTP Request
curl -X GET "https://api.xsignals.xpansiv.com/notification/subscription" \
-H "Authorization: Bearer {yourAuthToken}"
Response
Code | Description |
---|---|
200 | Success |
HMAC Signature Authentication
When sending webhook notifications, the server computes an HMAC SHA-256 signature of the payload using a secret provided by the client during subscription creation. This signature is added to the HTTP request as the X-Signature header. The client can then verify the authenticity and integrity of the payload by re-computing the HMAC with its own copy of the secret.
Client-Side Verification
On the client side, to verify the webhook notification:
Extract the Payload and Signature
- Retrieve the JSON payload and the value of the X-Signature header from the incoming request.
Recompute the HMAC
- Using the same algorithm and the secret (which the client stored when creating the subscription), compute the HMAC for the received payload.
Compare Signatures
- Compare the recomputed HMAC with the X-Signature value.
Webhook API Python Client Example
Overview
This guide shows how to create a webhook subscription that listens for TS_COMPLETE events and how to verify and process those events in a Python client. When a TS_COMPLETE event fires, the server sends a JSON payload to your callback URL. If you supplied a secret during subscription, the payload is signed with HMAC-SHA-256 so you can verify its integrity before triggering any follow-up action (e.g., pulling timeseries data).
1. Webhook Subscription
Create a subscription by registering:
- callbackUrl – the public endpoint that will receive notifications
- secret (optional but recommended) – key for HMAC verification
Example request (subscribing to TS_COMPLETE for DatalakeGroup2):
curl -X POST "https://api.xsignals.xpansiv.com/notification/subscription" \
-H "Authorization: Bearer {yourToken}" \
-H "Content-Type: application/json" \
-d '{
"callbackUrl": "https://your-callback-url.com",
"secret": "randomsecret",
"groupEvents": [
{
"groupName": "DatalakeGroup2",
"eventType": "TS_COMPLETE",
"fileType": ""
}
]
}'
Note: Replace groupName with your data-lake group as needed.
2. Notification Payload
When an event fires, NorthGravity sends an HTTP POST to your callbackUrl that contains:
Component | Description |
---|---|
Header | X-Signature: <HMAC-SHA-256 digest> (present only if you supplied a secret ) |
Body | JSON payload whose message string identifies the event and context |
Examples
TS_COMPLETE
{
"message": "TS data loading completed for group fileGroupName"
}
FILE_ARRIVAL
{
"message": "File arrival completed for group myGroup with fileId 9a8b7c and fileType CSV"
}
Generated from the Java template:
String payload = String.format("File arrival completed for group %s with fileId %s and fileType %s",
    groupName, fileId, fileType);
3. Python Client (Flask)
A minimal Flask server that verifies the signature and, when the message matches, pulls timeseries data:
from flask import Flask, request
import hmac, hashlib, json, requests
app = Flask(__name__)
SECRET = "randomsecret" # Same secret used at subscription
TIMESERIES_ENDPOINT = "https://api.xsignals.xpansiv.com/ts?size=5"
@app.route("/webhook/receive", methods=["POST"])
def receive_webhook():
payload = request.get_data(as_text=True)
signature = request.headers.get("X-Signature")
# --- Verify HMAC signature ---
if signature:
calc = hmac.new(SECRET.encode(), payload.encode(), hashlib.sha256).hexdigest()
if calc != signature:
return "Signature mismatch", 401
# --- Parse JSON payload ---
try:
data = json.loads(payload)
except json.JSONDecodeError:
return "Invalid JSON", 400
# --- Trigger follow-up action ---
if data.get("message") == "TS data loading completed for group webhook1":
ts_payload = {
"startDate": "2017-01-11T10:01:03",
"endDate": "2017-12-11T10:01:03",
"stats": True,
"timeZone": "America/NewYork",
"corrections": True,
"onDateTime": "2018-12-01T10:01:03.999",
"formatType": "STANDARD",
"keys": [{
"columns": ["Open", "Close", "High"],
"symbols": {"SYM1": "A1", "SYM2": "B1"},
"groupName": "fileGroupName", # change if necessary
"timeZone": "America/Chicago",
"pattern": True,
"patternExactMatch": False
}]
}
try:
r = requests.post(TIMESERIES_ENDPOINT, json=ts_payload)
print("Timeseries status:", r.status_code, r.text)
except Exception as exc:
print("Timeseries error:", exc)
return "Error sending timeseries request", 500
return "Webhook processed", 200
if __name__ == "__main__":
app.run(host="0.0.0.0", port=8088)
Run this service continuously (e.g., systemd, Docker, or a cloud function) so no notifications are missed.