
This data set contains the list of polling places. It can be organized by ward/division, accessibility rating, or type of building. This list is used to assign poll workers, send the machines and necessary accessibility materials, etc. **Endpoint:** http://api.phila.gov/polling-places/v1

Hereafter is the documentation of the private API of [Pims: Pointages Intelligents pour le Monde du Spectacle](https://pims.io). This API is designed for third-party software, editors and partners. Its main purpose is to give access to the core data of a Pims customer (i.e. events, ticket counts and promotions).
## Authentication
The API uses [basic access authentication](https://en.wikipedia.org/wiki/Basic_access_authentication), meaning you will need a username and password to get authorized. As each customer in Pims has its own domain (e.g. caramba.pims.io, gdp.pims.io...), each set of credentials is valid for one domain/customer only. If you need dedicated credentials for one domain, please contact us. (In any case, we will need an explicit agreement from the customer before we create these credentials.)

To make your life easy, you can try all endpoints with the public credentials below, pointing to our [demo domain](https://demo.pims.io):
  • Base path: `https://demo.pims.io/api`
  • Username: `demo`
  • Password: `q83792db2GCvgYVdKpU3yG3R`
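For example, a first call with these public demo credentials could look like the sketch below (the `/v1/events` collection used here is the one shown in the pagination example further down):

```bash
# List events on the demo domain using HTTP basic authentication
# with the public demo credentials listed above.
curl -u demo:q83792db2GCvgYVdKpU3yG3R \
  "https://demo.pims.io/api/v1/events"
```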
## Response format
The API returns JSON and matches the [HAL specification](http://stateless.co/hal_specification.html). The `Content-Type` of each response will be `application/hal+json`, unless an error occurs. Please note that this documentation describes all responses “as if” they were plain JSON. The specificities of HAL are ignored on purpose, in order to remain compact and avoid repetition.
So when you read in the doc:
```json
{
  "id": 123,
  "property1": "Lorem ipsum",
  "object": {
    "id": 456,
    "property2": 7.89
  }
}
```
... you'll get in the Real World®:
```json
{
  "id": 123,
  "property1": "Lorem ipsum",
  "_embedded": {
    "object": {
      "id": 456,
      "property2": 7.89,
      "_links": {
        "self": {
          "href": "https://api.mydomain.com/other-item/456"
        }
      }
    }
  },
  "_links": {
    "self": {
      "href": "https://api.mydomain.com/item/123"
    }
  }
}
```
### Errors
Errors are returned as JSON too and try to match the [Problem Details for HTTP APIs specification](https://tools.ietf.org/html/rfc7807). If a response does not match this spec, that's either a bug or a compatibility issue; please contact us to solve the problem. The `Content-Type` of errors will be `application/problem+json`. The content will match the following JSON:
```json
{
  "type": "https://tools.ietf.org/html/rfc2616#section-10",
  "title": "Not Found",
  "status": 404,
  "detail": "Entity not found"
}
```
## Versioning
The API is fully versioned, using a URL-versioning scheme: `https://demo.pims.io/api/v1/events`, `https://demo.pims.io/api/v2/events`, ... The version part of the URL is optional, and will be completed with the latest stable version if omitted.
## Pagination
All responses corresponding to a collection of resources (e.g. `/venues` or `/series/:id/events`) are paginated. When so, you will only get the first 25 resources you asked for. If you need to get more resources in one call, you can use the `page_size` query parameter, e.g. `/venues?page_size=50` to get the first 50 venues. Also note that with HAL, navigating paginated responses is a piece of cake, as you can see below:
```json
{
  "_links": {
    "self": { "href": "https://demo.pims.io/api/v1/events?page=1" },
    "first": { "href": "https://demo.pims.io/api/v1/events" },
    "last": { "href": "https://demo.pims.io/api/v1/events?page=14" },
    "next": { "href": "https://demo.pims.io/api/v1/events?page=2" }
  },
  "_embedded": {
    ... // data content goes here
  },
  "page_count": 14,
  "page_size": 25,
  "total_items": 331,
  "page": 1
}
```
## Filtering and sorting
Every textual filter (e.g. `/events?label=U2`) and/or sort (e.g. `/events?sort=label`) performed with the API uses the UTF8_UNICODE_CI collation, meaning it is:
- Case insensitive: “Chloé” will be considered the same as “CHLOÉ”;
- Diacritic insensitive: “Chloé” will be considered the same as “Chloe”.
A sort is always *ascending* by default. To make it *descending*, just use a minus sign (`-`) in front of the parameter value (e.g. `/events?sort=-label`).
## I18n
In responses, some labels can be translated (e.g. promotion types, event input types, etc.). These translatable labels are clearly indicated in the documentation below. By default, they will be displayed in English, but you can change this behaviour via the `Accept-Language` header, e.g. use `fr` as a value for French.
## PHP SDK
We provide a simple yet convenient SDK for the PHP language; see [the GitHub page of the project](https://github.com/pimssas/pims-api-client-php).
## And now?
Generally, you will start by [fetching one or more events](#tag/Events). An event can be anything that occurs in one venue at one given date and time: a concert, a play, a match, a conference, etc. Additionally, you can explore the [series](#tag/Series): a series is just a group of events (e.g. a tour or a festival). Once you have retrieved the events you are interested in, you can look for the sales (ticket counts):
- Get a quick overview with [`/events/:id/ticket-counts`](#operation/fetchAllTicketCounts)
- Or get a full insight by calling these endpoints:
  1. [`/events/:id/categories`](#operation/fetchAllEventsCategories)
  2. [`/events/:id/channels`](#operation/fetchAllEventsChannels)
  3. [`/events/:id/ticket-counts/detailed`](#operation/fetchAllDetailedTicketCounts)
Eventually, you may also want to fetch the [promotions](#tag/Promotions).
A promotion can be anything meant to boost sales: ads, marketing campaigns, buzz or news around the event, etc. A promotion can be linked to any combination of events and/or series.
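Tying the sections above together, a request that filters, sorts and paginates events while asking for French labels might look like the sketch below (all query parameters and the `Accept-Language` header are the ones described above, used against the public demo domain):

```bash
# Events whose label matches "U2", sorted by label in descending order,
# returned 50 per page, with translatable labels in French.
curl -u demo:q83792db2GCvgYVdKpU3yG3R \
  -H "Accept-Language: fr" \
  "https://demo.pims.io/api/v1/events?label=U2&sort=-label&page_size=50"
```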

Pinecone is a vector database. This is an unofficial, community-managed OpenAPI spec that (should) accurately model the Pinecone API. This project was developed independently of, and is unaffiliated with, Pinecone Systems. Users should switch to the official API spec if and when Pinecone releases one.

The Plaid REST API. Please see https://plaid.com/docs/api for more details.

The PocketSmith API

The future of fintech.

Portfolio Optimizer is a [Web API](https://en.wikipedia.org/wiki/Web_API) to analyze and optimize investment portfolios (collections of financial assets such as stocks, bonds, ETFs, crypto-currencies) using modern portfolio theory algorithms (mean-variance, VaR, etc.).
# API General Information
Portfolio Optimizer is based on [REST](https://en.wikipedia.org/wiki/Representational_state_transfer) for easy integration, uses [JSON](https://en.wikipedia.org/wiki/JSON) for the exchange of data and uses a standard [HTTP verb](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Request_methods) (`POST`) to represent the action(s). Portfolio Optimizer is also as secure as a Web API can be:
* [256-bit HTTPS Encryption](https://en.wikipedia.org/wiki/HTTPS)
* No usage of cookies
* No usage of personal data
## API Headers
The following HTTP header(s) are required when calling Portfolio Optimizer endpoints:
* `Content-Type: application/json` This header specifies that the data provided in input to the endpoint is in JSON format
The following HTTP header(s) are optional when calling Portfolio Optimizer endpoints:
* `Content-Encoding: gzip` This header indicates that the data provided in input to the endpoint is compressed with gzip
* `X-API-Key: ` This header enables [authenticated users](#auth) to provide their private [API key](#overview--api-key) in order to [benefit from higher API limits](#overview--api-limits)
## API Key
Portfolio Optimizer is free to use, but not free to run. In order to obtain an API key and benefit from [higher API limits](#overview--api-limits), a *small* contribution to Portfolio Optimizer's running costs is required. This contribution takes the form of coffee(s), with one coffee = one month of usage.

Buy a Coffee at buymeacoffee.com

> **Notes:**
> * Please make sure not to expose your API key publicly!

## API Limits
Portfolio Optimizer comes with *fairly reasonable* API limits.
For anonymous users:
* The API requests are restricted to a subset of all the available endpoints and/or endpoint features
* The API requests are limited to 1 request per second for all the anonymous users combined, with concurrent requests rejected
* The API requests are limited to 1 second of execution time
* The API requests are limited to 20 assets, 250 portfolios, 500 series data points and 5 factors
For authenticated users with an [API key](#overview--api-key):
* The API requests have access to all the available endpoints and endpoint features
* The API requests are limited to 10,000 requests per 24 hours per API key, with concurrent requests queued
* The API requests are limited to 2.5 seconds of execution time
* The API requests are limited to 100 assets, 1250 portfolios, 2500 series data points and 25 factors
> **Notes:**
> * It is possible to further relax the API limits, or to disable the API limits altogether; please [contact the support](https://portfoliooptimizer.io/contact/) for more details.
> * Information on the API rate limits is provided in the response message HTTP headers `x-ratelimit-*`:
>   * `x-ratelimit-limit-second`, the limit on the number of API requests per second
>   * `x-ratelimit-remaining-second`, the number of remaining API requests in the current second
>   * `x-ratelimit-limit-minute`, the limit on the number of API requests per minute
>   * ...

## API Regions
Portfolio Optimizer servers are located in Western Europe.
> **Notes:**
> * It is possible to deploy Portfolio Optimizer in other geographical regions, for example to improve the API latency; please [contact the support](https://portfoliooptimizer.io/contact/) for more details.

## API Response Codes
Standard [HTTP response codes](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes) are used by Portfolio Optimizer to provide details on the status of API requests.

| HTTP Code | Description | Notes |
| --------- | ----------- | ----- |
| 200 | Request successfully processed | - |
| 400 | Request failed to be processed because of incorrect content | The response message body contains information on the incorrect content |
| 401 | Request failed to be processed because of an invalid API key | - |
| 404 | Request failed to be processed because of a non-existent endpoint | The requested endpoint might exist, but needs to be accessed with another HTTP method (e.g., `POST` instead of `GET`) |
| 429 | Request failed to be processed because of API limits violated | The response message HTTP headers `x-ratelimit-*` contain information on the [API limits](#overview--api-limits) |
| 500 | Request failed to be processed because of an internal error | Something went wrong on Portfolio Optimizer side, do not hesitate to [report the issue](#overview--support) |
| 502 | Request failed to be processed because of a temporary connectivity error | Something went wrong on Portfolio Optimizer side, please check the [API status](#overview--api-status) and do not hesitate to [report the issue](#overview--support) |

## API Status
Portfolio Optimizer is monitored 24/7 by [UptimeRobot](https://stats.uptimerobot.com/wgW71SL1AW).

# Support
For any issue or question about Portfolio Optimizer, please do not hesitate to [contact the support](https://portfoliooptimizer.io/contact/).
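To illustrate the headers and rate-limit information described above, a call might be shaped like the sketch below. The base URL and endpoint path are placeholders rather than values taken from this description; only the headers and the JSON body format are documented above.

```bash
# Sketch of a Portfolio Optimizer request with the documented headers.
# BASE_URL and ENDPOINT are placeholders; substitute a real endpoint from the reference.
# The -i flag prints the response headers, including the x-ratelimit-* values.
curl -i -X POST "$BASE_URL/$ENDPOINT" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $YOUR_API_KEY" \
  -d '{"comment": "endpoint-specific JSON input goes here"}'
```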

Postmark makes sending and receiving email incredibly easy. The Account-level API allows users to configure all Servers, Domains, and Sender Signatures associated with an Account.

Postmark makes sending and receiving email incredibly easy.

This API converts OpenXML and OpenOffice document formats to vector files (e.g., SVG).

This API helps users convert Excel and PowerPoint documents into rich, live dashboards and stories.

This API is the main entry point for creating, editing and publishing analytics through the Presalytics API.

Welcome to the API Reference Docs page for the Press Association TV API (v2).

Probely is a Web Vulnerability Scanning suite for Agile Teams. It provides continuous scanning of your Web Applications and lets you efficiently manage the lifecycle of the vulnerabilities found, in a sleek and intuitive ~~web interface~~ API. ## Quick-Start ### Authentication To use the API, you first need to create a token (API Key). To create a token, select a target from the drop-down list, go to the "Settings" page, and click on the "Integrations" tab. Write a name for the API Key. For example, if you want to use the API Key for travis, you could name it "travis". In this example, we chose "**example.com_key**" ![Creating API key][1] [1]: assets/qs/create_api_key_1.png The API key was created successfully: ![API key created][2] [2]: assets/qs/create_api_key_2.png On every request, you need to pass this token in the authorization header, like this: ```yaml Authorization: JWT eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJqdGkiOiJBRlNJQlp 3elFsMDEiLCJ1c2VybmFtZSI6IkNIZ2tkSUROdzV0NSJ9.90UwiPGS2hlvgOLktFU0LfKuatNKm mEP79u17VnqT9M ``` **WARNING: Treat this token as a password. With this token, you have the power to fully manage the target.** In the following examples, the token will be named as *PROBELY_AUTH_TOKEN*. ### Scan target First let's view our target list: ```bash curl https://api.probely.com/targets/ \ -X GET \ -H "Content-Type: application/json" \ -H "Authorization: JWT PROBELY_AUTH_TOKEN" ``` From the results, we need the **target id**: ```json { "count":1, "page_total":1, "page":1, "length":10, "results":[ { "id":"AxtkqTE0v3E-", "name":"test-site", "desc":"", "url":"https://test-site.example.com", "settings": "(...)" , "stack": "(...)" , "verified":true, "(...)": "(...)" } ] } ``` Now we can send a request to start a scan on target id **AxtkqTE0v3E-** ```bash curl https://api.probely.com/targets/AxtkqTE0v3E-/scan_now/ \ -X POST \ -H "Content-Type: application/json" \ -H "Authorization: JWT PROBELY_AUTH_TOKEN" ``` And we get a response saying that the scan is scheduled: the status is **queued**, and we've got a **scan id**: ```json { "changed":"2017-08-01T13:37:00.843339Z", "started":null, "completed":null, "mediums":0, "changed_by": "(...)" , "highs":0, "status":"queued", "id":"S6dOMPn0SnoH", "created_by": "(...)" , "target": "(...)" , "created":"2017-08-01T13:37:00.843339Z", "lows":0 } ``` Using the scan id **S6dOMPn0SnoH**, we can poll the scan status: ```bash curl https://api.probely.com/targets/AxtkqTE0v3E-/scans/S6dOMPn0SnoH/ \ -X GET \ -H "Content-Type: application/json" \ -H "Authorization: JWT PROBELY_AUTH_TOKEN" ``` And we get a response saying that the scan status is now **started**: ```json { "id":"S6dOMPn0SnoH", "changed":"2017-08-01T13:38:12.623650Z", "started":null, "completed":null, "mediums":0, "changed_by": "(...)" , "highs":0, "status":"started", "created_by": "(...)" , "target": "(...)" , "created":"2017-08-01T13:37:00.843339Z", "lows":0 } ``` The possible statuses are: | Status | Name | Description | | ------ | ---- | ----------- | | queued | Queued | The scan is queued to start | | started | Started | The scan is currently running | | under_review | Under Review | The scan is complete but has some findings under review | | completed | Completed | The scan is complete | | completed_with_errors | Completed with errors | The scan is complete even after getting some error(s) | | failed | Failed | The scan failed | | canceled | Canceled | The scan was canceled | | canceling | Canceling | The scan is being canceled | During the scan, the keys "lows", "mediums", and 
"highs" will be updated with the findings, as they are being found. After we get either the status **completed** or **completed_with_errors**, we can view the findings. ### Get vulnerabilities Using the previous scan id **S6dOMPn0SnoH**, we can get the scan results: ```bash curl https://api.probely.com/targets/AxtkqTE0v3E-/scans/S6dOMPn0SnoH/ \ -X GET \ -H "Content-Type: application/json" \ -H "Authorization: JWT PROBELY_AUTH_TOKEN" ``` We get a response saying that the scan status is now **completed**, and that **45** vulnerabilities were found. **14** low, **11** medium and **20** high: ```json { "id":"S6dOMPn0SnoH", "target": "(...)" , "status":"completed", "started":"2017-08-01T13:37:12.623650Z", "completed":"2017-08-01T14:17:48.559514Z", "lows":14, "mediums":11, "highs":20, "created":"2017-08-01T13:37:00.843339Z", "created_by": "(...)" , "changed":"2017-08-01T14:17:48.559514Z", "changed_by": "(...)" } ``` You can now view the results of this scan, or the target findings. Let's start with the scan results: ```bash curl https://api.probely.com/targets/AxtkqTE0v3E-/findings/?scan=S6dOMPn0SnoH&page=1 \ -X GET \ -H "Content-Type: application/json" \ -H "Authorization: JWT PROBELY_AUTH_TOKEN" ``` ```json { "count":45, "page_total":5, "page":1, "length":10, "results":[ { "id":79, "target": "(...)" , "scans": "(...)" , "labels": "(...)" , "fix":"To fix an SQL Injection in PHP, you should use Prepared Statements. Prepared Statements can be thought of as a kind of compiled template for the SQL that an application wants to run, that can be customized using variable parameters.\n\nPHP's PDO extension supports Prepared Statements, so that's probably your best option.\n\nIn the example below you can see the use of prepared statements. Variables ```$username``` and ```$hashedPassword``` come from user input.\n\n```\n$stmt = $dbg->prepare(\"SELECT id, name FROM users\n WHERE username=? AND password=?\");\n$stmt->bindParam(1, $username);\n$stmt->bindParam(2, $hashedPassword);\nif ($stmt->execute()) {\n\t$user = $stmt->fetch();\n\tif ($user) {\n\t\t$_SESSION['authID'] = $user['id'];\n\t\techo \"Hello \" . $user['name'];\n\t} else {\n\t\techo \"Invalid Login\";\n\t}\n}\n``` \n\nAs an added bonus, if you're executing the same query several times, then it'll be even faster than when you're not using prepared statements. This is because when using prepared statements, the query needs to be parsed (prepared) only once, but can be executed multiple times with the same or different parameters. \n", "requests":[ { "request":"(...)", "response":"(...)" }, { "request":"(...)", "response":"(...)" } ], "evidence":null, "extra":"", "definition":{ "id":"xnV8PJVmSoLS", "name":"SQL Injection", "desc":"SQL Injections are the most common form of injections because SQL databases are very popular in dynamic web applications. This vulnerability allows an attacker to tamper existing SQL queries performed by the web application. 
Depending on the queries, the attacker might be able to access, modify or even destroy data from the database.\n\nSince databases are commonly used to store private data, such as authentication information, personal user data and site content, if an attacker gains access to it, the consequences are typically very severe, ranging from defacement of the web application to users data leakage or loss, or even full control of the web application or database server.", }, "url":"http://test-site.example.com/login.php", "path":"login.php", "method":"post", "parameter":"username", "value":"", "params":{ "username":[ "probely'" ], "password":[ "probely" ] }, "reporter": "(...)" , "assignee":null, "state":"notfixed", "severity":30, "last_found":"2017-08-01T14:03:56.207794Z", "changed":"2017-08-01T14:03:56.207794Z", "changed_by": "(...)" , "comment":"" }, "(...)" ] } ``` You can also view all the target findings, which will show all the findings that are not yet fixed. \\ The structure is similar to the previous result. ```bash curl https://api.probely.com/targets/AxtkqTE0v3E-/findings/ \ -X GET \ -H "Content-Type: application/json" \ -H "Authorization: JWT PROBELY_AUTH_TOKEN" ``` ### Get vulnerability details You can also get details for a particular finding in a target. \\ In this example we will get the details for the same finding as in the previous section: ```bash curl https://api.probely.com/targets/AxtkqTE0v3E-/findings/79/ \ -X GET \ -H "Content-Type: application/json" \ -H "Authorization: JWT PROBELY_AUTH_TOKEN" ``` This will result on the same information, but just for this particular finding: ```json { "id":79, "target": "(...)" , "scans": "(...)" , "labels": "(...)" , "fix":"To fix an SQL Injection in PHP, you should use Prepared Statements. Prepared Statements can be thought of as a kind of compiled template for the SQL that an application wants to run, that can be customized using variable parameters.\n\nPHP's PDO extension supports Prepared Statements, so that's probably your best option.\n\nIn the example below you can see the use of prepared statements. Variables ```$username``` and ```$hashedPassword``` come from user input.\n\n```\n$stmt = $dbg->prepare(\"SELECT id, name FROM users\n WHERE username=? AND password=?\");\n$stmt->bindParam(1, $username);\n$stmt->bindParam(2, $hashedPassword);\nif ($stmt->execute()) {\n\t$user = $stmt->fetch();\n\tif ($user) {\n\t\t$_SESSION['authID'] = $user['id'];\n\t\techo \"Hello \" . $user['name'];\n\t} else {\n\t\techo \"Invalid Login\";\n\t}\n}\n``` \n\nAs an added bonus, if you're executing the same query several times, then it'll be even faster than when you're not using prepared statements. This is because when using prepared statements, the query needs to be parsed (prepared) only once, but can be executed multiple times with the same or different parameters. \n", "requests":[ { "request":"(...)", "response":"(...)" }, { "request":"(...)", "response":"(...)" } ], "evidence":null, "extra":"", "definition":{ "id":"xnV8PJVmSoLS", "name":"SQL Injection", "desc":"SQL Injections are the most common form of injections because SQL databases are very popular in dynamic web applications. This vulnerability allows an attacker to tamper existing SQL queries performed by the web application. 
Depending on the queries, the attacker might be able to access, modify or even destroy data from the database.\n\nSince databases are commonly used to store private data, such as authentication information, personal user data and site content, if an attacker gains access to it, the consequences are typically very severe, ranging from defacement of the web application to users data leakage or loss, or even full control of the web application or database server.", }, "url":"http://test-site.example.com/login.php", "path":"login.php", "method":"post", "parameter":"username", "value":"", "params":{ "username":[ "probely'" ], "password":[ "probely" ] }, "reporter": "(...)" , "assignee":null, "state":"notfixed", "severity":30, "last_found":"2017-08-01T14:03:56.207794Z", "changed":"2017-08-01T14:03:56.207794Z", "changed_by": "(...)" , "comment":"" } ``` ## Concepts The short version is that you run *scans* on *targets*, and *findings* are created for any issue that is found. However, there are a few more concepts that must be explained in order to get a complete picture of how Probely works. We will spend the next few sections detailing the most important concepts. ### Target A *target* defines the scope of a scan, what will and won't be included in the scan plan. This is done by filling a *target*'s *site* and *assets*. The entry point for the web application (and authentication) is set up in the *target*'s *site*. In modern web applications, you are probably loading resources from multiple domains. A single-page app, for example, will usually load the page from one domain and make AJAX requests to another. This is what *assets* are for: they specify what domains our scanner should follow and create requests for. ### Site A URL is probably not the only thing you will need to set up when scanning your application. Does the application have an authenticated area? Does it use basic auth? Does it expect a certain cookie or header? These parameters are all configured in the *target*'s *site*. We need to ensure that only allowed web applications are scanned. Therefore, we must verify that you have control of any site you wish to include. This can be done by: * Placing a file at a well-known location on the site's server; * Creating specific DNS records. ### Asset An *asset* is very similar to a *site*. The difference is that it is a domain instead of a URL. Additionally, an *asset* has no login or basic auth support. You can still have custom cookies and headers per *asset*. As with the *site*, you will need to prove an *asset*'s ownership. We have added some rules to make your life easier: if you have already verified a *site* and the domains match, the validation is fast-tracked. ### Scans This is what you're here for. After configuring your *target*, you will want to run *scans* against it. You can either start a one-off scan, or schedule one for later - recurring or not. During the *scan*, we will spider and run several modules to check for security issues, which we call *findings*. You can check the *findings* even before a scan ends. If everything goes well, the scan will complete and that is it. With some *findings*, our automated processes may have difficulties determining if it is a false positive or a legitimate issue. In these instances, a scan will be marked as under review, and we will further analyze the finding before making a decision. We will only show findings that, with some degree of confidence, are true positives. A finding that we are not sure of will never be displayed. 
As much as we try to prevent it, a *scan* (or a sub-module) can malfunction. If this happens, a *scan* is marked as: * "failed": the problem was irrecoverable; * "completed with errors": some module failed but the scan itself completed. During a scan, we try to determine what *frameworks* you are using and add this information to the *site* and *asset* objects discussed previously. ### Findings The last core concept is the *finding*: a security issue that we have found during our scans. If the same issue is found in a new scan, it will not open a new finding but update the previous one. A *finding* will have a lot of information about the issue: namely, where it was found (URL, insertion point (e.g. cookie), parameter, and method), the evidence we gathered, the full request and response that we used, and suggestions on how to go about fixing it. A full description of the vulnerability is also present in the *definition* property. We also assign a severity and calculate the CVSS score for each. Besides all this, there are also actions that you can perform on a *finding*. You can assign it to one user, leave comments for your team or add labels, and reduce or increase the severity. If you don't plan on fixing the *finding* and accept the risk, or you think we reported a false positive, you can mark the finding to reflect that.

ContentDepot hosts a range of API’s that allow clients to manage, discover, and obtain content. The API spans many parts of the ContentDepot functionality including MetaPub (a.k.a. metadata distribution) and content management. ## MetaPub MetaPub collects, normalizes and distributes publicly available program, episode, and piece metadata through the public radio system. Backed by ContentDepot and its data model, MetaPub allows producers to supply metadata through various methods: 1. MetaPub Agents that collect producer metadata by "crawling" existing public feeds (e.g. C24, BBC) or the producer's production system (e.g. ATC, ME, TED Radio Hour). 2. Manually enter metadata in the ContentDepot Portal on each program and episode. 3. Publish/push the metadata to the MetaPub upload API and execute an ingest job. MetaPub then distributes this data to stations through an electronic program guide (EPG model) for display on various listener devices such as smart phones, tablets, web streams, HD radios, RDBS enabled FM radios, and more. The EPG format is based on the RadioDNS specifications. ### RadioDNS The RadioDNS Service and Programme Information Specification ([ETSI TS 102 818 v3.4.1](https://www.etsi.org/deliver/etsi_ts/102800_102899/102818/03.04.01_60/ts_102818v030401p.pdf)) defines three primary documents: Service Information, Program Information, and Group Information. These documents, along with the core RadioDNS Hybrid Lookup for Radio Services Specification ([ETSI TS 103 270 v1.4.1](https://www.etsi.org/deliver/etsi_ts/103200_103299/103270/01.04.01_60/ts_103270v010401p.pdf)), define a system where an end listener device can dynamically discover program metadata and fetch the metadata via Internet Protocol (IP) requests. MetaPub's use of RadioDNS differs slightly in that MetaPub (a.k.a PRSS) acts as the "service provider" while the stations and related middleware act as the end devices. While this is not the primary use case of RadioDNS, the flexibility in the specification, service definitions, and DNS resolution allows this model to be easily represented. MetaPub provides both _National Metadata_ and _Station Metadata_. It is strongly recommended that the related [RadioDNS specifications](https://radiodns.org/developers/documentation/) be read for implementation details, definitions, and required XML schemas. ## ContentDepot Drive ContentDepot Drive (CD Drive) provides a private, per customer file storage solution similar to other cloud storage solutions such as Google Drive, Box, and Dropbox. The CD Drive is used to stage content uploads such as metadata files, images, or segment audio before associating the content with specific programs or episodes. CD Drive content can be referenced using a URI by some operations such as synchronizing metadata. There are two possible CD Drive URI formats supported: ID and hierarchical path. The ID reference takes the form ```cddrive:id:{value}``` where value is the integer ID of the file or folder being referenced. The hierarchical path reference takes the form ```cddrive://{path}``` where path is the full, UNIX style path to the file or folder starting with '/'. For example, two CD Drive URIs pointing to the same file may be ```cddrive:id:12345``` and ```cddrive:///show1/episode2/metadata.xml```. More information about URIs can be found at [Wikipedia](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier). ## Authentication The API currently uses OAuth 2.0. 
All operations require ```cd:full``` access; client access is limited only by the permissions of the ContentDepot user after authentication. Limiting access scope per client is not currently supported.
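As a sketch only: with a standard OAuth 2.0 bearer token carrying the ```cd:full``` scope, a request would typically be authorized as shown below. The host and path are hypothetical placeholders, not taken from this description.

```bash
# Hypothetical example of authorizing a ContentDepot API call with an
# OAuth 2.0 bearer token; the host and path below are placeholders.
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://contentdepot.example.org/api/v1/some-resource"
```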

The PTV Timetable API provides direct access to Public Transport Victoria’s public transport timetable data. The API returns scheduled timetable, route and stop data for all metropolitan and regional train, tram and bus services in Victoria, including Night Network (Night Train and Night Tram data are included in metropolitan train and tram services data, respectively, whereas Night Bus is a separate route type). The API also returns real-time data for metropolitan train, tram and bus services (where this data is made available to PTV), as well as disruption information, stop facility information, and access to myki ticket outlet data. This Swagger is for Version 3 of the PTV Timetable API. By using this documentation you agree to comply with the licence and terms of service. Train timetable data is updated daily, while the remaining data is updated weekly, taking into account any planned timetable changes (for example, due to holidays or planned disruptions). The PTV Timetable API is the same API used by PTV for its apps. To access the most up-to-date data PTV has (including real-time data) you must use the API dynamically. You can access the PTV Timetable API through an HTTP or HTTPS interface, as follows:
base URL / version number / API name / query string
The base URL is either:
* http://timetableapi.ptv.vic.gov.au or
* https://timetableapi.ptv.vic.gov.au
The Swagger JSON file is available at http://timetableapi.ptv.vic.gov.au/swagger/docs/v3
Frequently asked questions are available on the PTV website at http://ptv.vic.gov.au/apifaq
Links to the following information are also provided on the PTV website at http://ptv.vic.gov.au/ptv-timetable-api/
* How to register for an API key and calculate a signature
* PTV Timetable API V2 to V3 Migration Guide
* Documentation for Version 2 of the PTV Timetable API
* PTV Timetable API Data Quality Statement
All information about how to use the API is in this documentation. PTV cannot provide technical support for the API.
Credits: This page has been based on Steve Bennett's http://opentransportdata.org/, used with permission.
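As a sketch of the documented request structure (base URL / version number / API name / query string), a call might look like the example below. The API name and the query-string parameter names are illustrative placeholders; how to obtain your key and calculate the signature is covered in the registration guide linked above.

```bash
# Illustrative only: base URL + version (v3) + API name + query string.
# "route_types" and the key/signature parameter names are placeholders;
# see the PTV registration guide for the actual authentication details.
curl "https://timetableapi.ptv.vic.gov.au/v3/route_types?devid=YOUR_DEVELOPER_ID&signature=YOUR_CALCULATED_SIGNATURE"
```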

This document describes the Qualpay Payment Gateway API.
