OpenAPI Directory

An OpenAPI description of the OpenLink Smart Data Bot REST API v1

The _Open Build Service API_ is an XML API. To authenticate, use [HTTP basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) by passing the _Authorization_ header in the form `Authorization: Basic <base64-encoded username:password>`. There is no API versioning, as there is no need for it right now. Only rudimentary rate limiting is implemented, so please be gentle when using the API concurrently, especially with potentially expensive operations. In case of abuse, we will limit/remove your access. For command-line users, we recommend using [osc](https://github.com/openSUSE/osc) with its _api_ command to interact with the API. It's as simple as this example: `osc api /about` (_about_ is one of the endpoints documented below).
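The same `/about` call can also be made with plain `curl`; a minimal sketch, assuming your Open Build Service instance is reachable at `https://api.example-obs.org` (substitute your own API host and credentials):

```bash
# Minimal sketch: call the /about endpoint with HTTP basic authentication.
# curl's -u flag builds the "Authorization: Basic ..." header for you.
curl -u "USERNAME:PASSWORD" "https://api.example-obs.org/about"
```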

Please see the complete Orbit API documentation at [https://api.orbit.love/](https://api.orbit.love/).

This is the full documentation of the [REST API](https://book.orthanc-server.com/users/rest.html) of Orthanc.

This reference is automatically generated from the source code of Orthanc. A [shorter cheat sheet](https://book.orthanc-server.com/users/rest-cheatsheet.html) is part of the Orthanc Book.

An earlier, manually crafted version from August 2019 is [still available](2019-08-orthanc-openapi.html), but is no longer up to date ([source](https://groups.google.com/g/orthanc-users/c/NUiJTEICSl8/m/xKeqMrbqAAAJ)).

Swagger Spec file that describes PI Web API

Search for information on companies using a website or company name and get access to Company Data, News, Blog Posts, Competitor Lists and much more.

# Introduction

Whether you're looking to build an official Pandascore integration for your service, or you just want to build something awesome, [we can help you get started](/home).

The API works over the HTTPS protocol and is accessed from the `api.pandascore.co` domain.

- The current endpoint is [https://api.pandascore.co](https://api.pandascore.co).
- All data is sent and received as JSON by default.
- Blank fields are included with `null` values instead of being omitted.
- All timestamps are returned in ISO 8601 format.

### About this documentation

Clicking on a query parameter like `filter` or `search` will show you the available options: ![filter](/doc/images/query_param_details.jpg)

You can also click on a response to see the detailed response schema: ![response](/doc/images/response_schema.jpg)

## Events hierarchy

The PandaScore API allows you to access data about eSports events by using a certain structure detailed below.

**Leagues**

Leagues are the top-level events. They don't have a date and represent a regular competition. A league is composed of one or several series. Some League of Legends leagues are: _EU LCS, NA LCS, LCK, etc._ Some Dota 2 leagues are: _ESL One, GESC, The International, PGL, etc._

**Series**

A Serie represents an occurrence of a league event. The EU LCS league has two series per year: _spring 2017, summer 2017, spring 2016, summer 2016, etc._ Some Dota 2 Series examples would be: _Changsha Major, Open Bucharest, Frankfurt, i-League Invitational, etc._

**Tournaments**

Tournaments group all the matches of a Serie under "stages" and "groups". The tournaments of the EU LCS of summer 2017 are: _Group A, Group B, Playoffs, etc._ Some Dota 2 tournaments are: _Group A, Group B, Playoffs, etc._

**Matches**

Finally we have matches, which have two players or teams (depending on the played videogame) and several games (the rounds of the match). Matches of the group A in the EU LCS of summer 2017 are: _G2 vs FNC, MSF vs NIP, etc._ Matches of the group A in the ESL One Genting tournament are: _Lower Round 1, Quarterfinal, Upper Final, etc._

**Please note that some matches may be listed as "TBD vs TBD" if the matchup is not announced yet; for example, the date of the Final match is known but the quarterfinal is still being played.**

![Structure](/doc/images/structure.png)

## Formats

<!-- The API currently supports the JSON format by default, as well as the XML format. Add the desired extension to your request URL in order to get that format. -->

The API currently supports the JSON format by default. Other formats may be added depending on user needs.

## Pagination

The Pandascore API paginates all resources on the index method. Requests that return multiple items will be paginated to 50 items by default. You can specify further pages with the `page[number]` parameter. You can also set a custom page size (up to 100) with the `page[size]` parameter.

The `Link` HTTP response header contains pagination data with `first`, `previous`, `next` and `last` raw page links when available, under the format:

```
Link: <https://api.pandascore.co/{Resource}?page=X+1>; rel="next", <https://api.pandascore.co/{Resource}?page=X-1>; rel="prev", <https://api.pandascore.co/{Resource}?page=1>; rel="first", <https://api.pandascore.co/{Resource}?page=X+n>; rel="last"
```

There is also:

* An `X-Page` header field, which contains the current page.
* An `X-Per-Page` header field, which contains the current pagination length.
* An `X-Total` header field, which contains the total count of items across all pages.

## Filtering

The `filter` query parameter can be used to filter a collection by one or several fields for one or several values. The `filter` parameter takes the field to filter as a key, and the values to filter as the value. Multiple values must be comma-separated (`,`).

For example, the following is a request for all the champions with a name matching Twitch or Brand exactly, but only with 21 armor:

```
GET /lol/champions?filter[name]=Brand,Twitch&filter[armor]=21&token=YOUR_ACCESS_TOKEN
```

## Range

The `range` parameter is a hash that allows filtering fields by an interval. Only values between the given two comma-separated bounds will be returned. The bounds are inclusive.

For example, the following is a request for all the champions with `hp` between 500 and 1000:

```
GET /lol/champions?range[hp]=500,1000&token=YOUR_ACCESS_TOKEN
```

## Search

The `search` parameter is a bit like the `filter` parameter, but it will return all results where the values **contain** the given parameter.

Note: this only works on strings. Searching with integer values is not supported, and the `filter` or `range` parameters may be better suited for your needs here.

For example, to get all the champions with a name containing `"twi"`:

```
$ curl -sg -H 'Authorization: Bearer YOUR_ACCESS_TOKEN' 'https://api.pandascore.co/lol/champions?search[name]=twi' | jq -S '.[].name'
"Twitch"
"Twisted Fate"
```

## Sorting

All index endpoints support multiple sort fields with comma-separation (`,`); the fields are applied in the order specified. The sort order for each field is ascending unless it is prefixed with a minus (U+002D HYPHEN-MINUS, "-"), in which case it is descending.

For example, `GET /lol/champions?sort=attackdamage,-name&token=YOUR_ACCESS_TOKEN` will return all the champions sorted by attack damage. Any champions with the same attack damage will then be sorted by their names in descending alphabetical order.

## Rate limiting

Depending on your current plan, you will have a different rate limit. Your plan and your current request count [are available on your dashboard](https://pandascore.co/settings).

With the **free plan**, you have a limit of 1000 requests per hour; other plans have a limit of 4000 requests per hour. The number of remaining requests is available in the `X-Rate-Limit-Remaining` response header.

Your API key is included in all the examples on this page, so you can test any example right away. **Only you can see this value.**

# Authentication

The authentication on the Pandascore API works with access tokens. All developers need to [create an account](https://pandascore.co/users/sign_in) before getting started, in order to get an access token. The access token should not be shared.

**Your token can be found and regenerated from [your dashboard](https://pandascore.co/settings).**

The access token can be passed in the URL with the `token` query string parameter, or in the `Authorization: Bearer` header field.

<!-- ReDoc-Inject: <security-definitions> -->
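To tie the pagination and authentication sections together, here is a hedged sketch of such a request; the `lol/champions` resource and the `YOUR_ACCESS_TOKEN` placeholder are reused from the examples above, and `jq` is only used to display the result:

```bash
# Sketch: request page 2 of the League of Legends champions index, 100 items
# per page, authenticating with the "Authorization: Bearer" header.
# -g disables curl's URL globbing so the [] characters are sent as-is;
# -D - prints the response headers (X-Page, X-Per-Page, X-Total, Link) while
# -o /dev/null discards the body.
curl -sg -D - -o /dev/null \
  -H 'Authorization: Bearer YOUR_ACCESS_TOKEN' \
  'https://api.pandascore.co/lol/champions?page[number]=2&page[size]=100'

# The same request, passing the token as a query string parameter instead.
curl -sg 'https://api.pandascore.co/lol/champions?page[number]=2&page[size]=100&token=YOUR_ACCESS_TOKEN' \
  | jq -S '.[].name'
```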

AIaaS provides API access to our bot hosting platform and SDKs, allowing developers to easily integrate conversational interfaces into applications.

papinet API is a global initiative for the Forest and Paper supply chain.

Validate and generate passwords using open source tools

This is the API documentation for Patrowl Engines usage.

## Who is this for?

This documentation is for developers creating their own integration with [Feedback's](https://www.pendo.io/product/feedback/) API. If you are doing a standard integration, there's a really easy [JavaScript integration](https://help.receptive.io/hc/en-us/articles/209221969-How-to-integrate-Receptive-with-your-app) that you should know about before you go to the effort of building your own integration.

## Authentication

API calls generally need to be authenticated. Generate an API Key at https://feedback.pendo.io/app/#/vendor/settings?section=integrate. This key should then be added to every request as a request header named 'auth-token' (preferred), or as a query parameter named 'auth-token'.

## Endpoint

The API endpoint is https://api.feedback.eu.pendo.io or https://api.feedback.us.pendo.io, depending on where your datacenter is located.

## Notes

API endpoints are being added to this documentation as needed by customers. If you don't see an endpoint you need, please contact support, and if it exists we'll publish the docs here.

The 'try it out' feature on this documentation page will probably be blocked by your browser, because the Access-Control-Allow-Origin header has its value set by the Feedback server depending on your hostname.

## Generating client code

This documentation is automatically generated from an OpenAPI spec available [here](http://apidoc.receptive.io/receptive.swagger.json). You can use Swagger to auto-generate API client code in many languages using the [Swagger Editor](http://editor.swagger.io/#/).
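As an illustration of the authentication scheme above, a request with the `auth-token` header could look like the following sketch; `/SOME_ENDPOINT` is only a placeholder for one of the endpoints documented below, and `YOUR_API_KEY` is the key generated in your settings:

```bash
# Sketch: authenticate by sending the API key in the auth-token request header.
# /SOME_ENDPOINT is a placeholder -- substitute an endpoint from the reference
# below, and pick the EU or US host to match your datacenter.
curl -s \
  -H 'auth-token: YOUR_API_KEY' \
  'https://api.feedback.eu.pendo.io/SOME_ENDPOINT'

# Alternatively, pass the key as a query parameter (the header form is preferred).
curl -s 'https://api.feedback.eu.pendo.io/SOME_ENDPOINT?auth-token=YOUR_API_KEY'
```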

Self Service Developer API documentation and demo.

## Getting Started

You will need an API access profile user and password in order to access the search endpoints. Your access profile user and password are used for authenticating all requests to our search API. You MUST pass the user and password each time you perform a search request.

Personio Authentication API

API for reading and writing personnel data, including data about attendances and absences

Hereafter is the documentation of the private API of [Pims: Pointages Intelligents pour le Monde du Spectacle](https://pims.io). This API is designed for third-party software, editors and partners. Its main purpose is to give access to the core data of a Pims customer (i.e. events, ticket counts and promotions).

## Authentication

The API uses [basic access authentication](https://en.wikipedia.org/wiki/Basic_access_authentication), meaning you will need a username and password to get authorized. As each customer in Pims has its own domain (e.g. caramba.pims.io, gdp.pims.io...), each set of credentials is valid for one domain/customer only. If you need dedicated credentials for one domain, please contact us. (In any case, we will need an explicit agreement from the customer before we create these credentials.)

To make your life easy, you can try all endpoints with the public credentials below, pointing to our [demo domain](https://demo.pims.io); an example request follows the list:

- Base path: `https://demo.pims.io/api`
- Username: `demo`
- Password: `q83792db2GCvgYVdKpU3yG3R`
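A minimal sketch of a first call with these demo credentials (`/v1/events` is the endpoint used in the pagination example later in this documentation):

```bash
# Sketch: list events on the demo domain using HTTP basic authentication.
# curl's -u flag sends the username and password as a Basic Authorization header.
curl -s -u 'demo:q83792db2GCvgYVdKpU3yG3R' \
  'https://demo.pims.io/api/v1/events'
```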
## Response format

The API returns JSON and matches the [HAL specification](http://stateless.co/hal_specification.html). The `Content-Type` of each response will be `application/hal+json`, unless an error occurs. Please note that this documentation describes all responses “as if” they were plain JSON. The specificities of HAL are ignored on purpose, in order to remain compact and avoid repetition.
So when you read in the doc:
```json
{
  "id": 123,
  "property1": "Lorem ipsum",
  "object": {
    "id": 456,
    "property2": 7.89
  }
}
```
... you'll get in the Real World®:
```json
{
  "id": 123,
  "property1": "Lorem ipsum",
  "_embedded": {
    "object": {
      "id": 456,
      "property2": 7.89,
      "_links": {
        "self": {
          "href": "https://api.mydomain.com/other-item/456"
        }
      }
    }
  },
  "_links": {
    "self": {
      "href": "https://api.mydomain.com/item/123"
    }
  }
}
```
### Errors

Errors return JSON too, and try to match the [Problem Details for HTTP APIs specification](https://tools.ietf.org/html/rfc7807). If a response does not match this spec, that's either a bug or a compatibility issue. Please contact us to solve the problem. The `Content-Type` of errors will be `application/problem+json`. The content will match the following JSON:

```json
{
  "type": "https://tools.ietf.org/html/rfc2616#section-10",
  "title": "Not Found",
  "status": 404,
  "detail": "Entity not found"
}
```

## Versioning

The API is fully versioned, using a URL-versioning scheme: `https://demo.pims.io/api/v1/events`, `https://demo.pims.io/api/v2/events`, ... The version part of the URL is optional, and will be completed with the latest stable version if omitted.

## Pagination

All responses corresponding to a collection of resources (e.g. `/venues` or `/series/:id/events`) are paginated. When so, you will only get the first 25 resources you asked for. If you need to get more resources in one call, you can use the `page_size` query parameter, e.g. `/venues?page_size=50` to get the first 50 venues. Also note that with HAL, the navigation in paginated responses is a piece of cake, as you can see below:

```json
{
  "_links": {
    "self": { "href": "https://demo.pims.io/api/v1/events?page=1" },
    "first": { "href": "https://demo.pims.io/api/v1/events" },
    "last": { "href": "https://demo.pims.io/api/v1/events?page=14" },
    "next": { "href": "https://demo.pims.io/api/v1/events?page=2" }
  },
  "_embedded": {
    ... // data content goes here
  },
  "page_count": 14,
  "page_size": 25,
  "total_items": 331,
  "page": 1
}
```

## Filtering and sorting

Every textual filter (e.g. `/events?label=U2`) and/or sort (e.g. `/events?sort=label`) performed with the API uses UTF8_UNICODE_CI collation, meaning it is:

- Case insensitive: “Chloé” will be considered the same as “CHLOÉ”;
- Diacritic insensitive: “Chloé” will be considered the same as “Chloe”.

When performing a sort, it will always be *ascending* by default. To make it *descending*, just use a minus sign (`-`) in front of the parameter value (e.g. `/events?sort=-label`).

## I18n

In responses, some labels can be translated (e.g. promotion types, event input types, etc.). These translatable labels are clearly indicated in the documentation below. By default, they will be displayed in English, but you can change this behaviour via the `Accept-Language` header. E.g., use `fr` as a value for French.

## PHP SDK

We provide a simple yet convenient SDK for the PHP language, see [the Github page of the project](https://github.com/pimssas/pims-api-client-php).

## And now?

Generally, you will start by [fetching one or more events](#tag/Events). An event can be anything that occurs in one venue at one given date and time: a concert, a play, a match, a conference, etc. Additionally, you can explore the [series](#tag/Series): a series is just a group of events (e.g. a tour or a festival).

Once you have retrieved the events you were interested in, you can look for the sales (ticket counts):

- Get a quick overview with [`/events/:id/ticket-counts`](#operation/fetchAllTicketCounts)
- Or get a full insight by calling these endpoints:
  1. [`/events/:id/categories`](#operation/fetchAllEventsCategories)
  2. [`/events/:id/channels`](#operation/fetchAllEventsChannels)
  3. [`/events/:id/ticket-counts/detailed`](#operation/fetchAllDetailedTicketCounts)

Eventually, you may also want to fetch the [promotions](#tag/Promotions). A promotion can be anything meant to leverage the sales: ads, marketing campaigns, buzz or news around the event, etc. A promotion can be linked to any combination of events and/or series.
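As a quick recap of the options above, here is a hedged sketch of a few requests against the demo domain, using only endpoints and parameters documented in this section:

```bash
# Sketch: fetch the first 50 venues in one call using the page_size parameter.
curl -s -u 'demo:q83792db2GCvgYVdKpU3yG3R' 'https://demo.pims.io/api/v1/venues?page_size=50'

# Sketch: fetch events sorted by label in descending order (note the minus sign).
curl -s -u 'demo:q83792db2GCvgYVdKpU3yG3R' 'https://demo.pims.io/api/v1/events?sort=-label'

# Sketch: request French labels via the Accept-Language header.
curl -s -u 'demo:q83792db2GCvgYVdKpU3yG3R' -H 'Accept-Language: fr' 'https://demo.pims.io/api/v1/events'
```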

Probely is a Web Vulnerability Scanning suite for Agile Teams. It provides continuous scanning of your Web Applications and lets you efficiently manage the lifecycle of the vulnerabilities found, in a sleek and intuitive ~~web interface~~ API.

## Quick-Start

### Authentication

To use the API, you first need to create a token (API Key). To create a token, select a target from the drop-down list, go to the "Settings" page, and click on the "Integrations" tab. Write a name for the API Key. For example, if you want to use the API Key for travis, you could name it "travis". In this example, we chose "**example.com_key**".

![Creating API key][1]

[1]: assets/qs/create_api_key_1.png

The API key was created successfully:

![API key created][2]

[2]: assets/qs/create_api_key_2.png

On every request, you need to pass this token in the authorization header, like this:

```yaml
Authorization: JWT eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJqdGkiOiJBRlNJQlp3elFsMDEiLCJ1c2VybmFtZSI6IkNIZ2tkSUROdzV0NSJ9.90UwiPGS2hlvgOLktFU0LfKuatNKmmEP79u17VnqT9M
```

**WARNING: Treat this token as a password. With this token, you have the power to fully manage the target.**

In the following examples, the token will be referred to as *PROBELY_AUTH_TOKEN*.

### Scan target

First, let's view our target list:

```bash
curl https://api.probely.com/targets/ \
  -X GET \
  -H "Content-Type: application/json" \
  -H "Authorization: JWT PROBELY_AUTH_TOKEN"
```

From the results, we need the **target id**:

```json
{
  "count": 1,
  "page_total": 1,
  "page": 1,
  "length": 10,
  "results": [
    {
      "id": "AxtkqTE0v3E-",
      "name": "test-site",
      "desc": "",
      "url": "https://test-site.example.com",
      "settings": "(...)",
      "stack": "(...)",
      "verified": true,
      "(...)": "(...)"
    }
  ]
}
```

Now we can send a request to start a scan on target id **AxtkqTE0v3E-**:

```bash
curl https://api.probely.com/targets/AxtkqTE0v3E-/scan_now/ \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: JWT PROBELY_AUTH_TOKEN"
```

And we get a response saying that the scan is scheduled: the status is **queued**, and we've got a **scan id**:

```json
{
  "changed": "2017-08-01T13:37:00.843339Z",
  "started": null,
  "completed": null,
  "mediums": 0,
  "changed_by": "(...)",
  "highs": 0,
  "status": "queued",
  "id": "S6dOMPn0SnoH",
  "created_by": "(...)",
  "target": "(...)",
  "created": "2017-08-01T13:37:00.843339Z",
  "lows": 0
}
```

Using the scan id **S6dOMPn0SnoH**, we can poll the scan status:

```bash
curl https://api.probely.com/targets/AxtkqTE0v3E-/scans/S6dOMPn0SnoH/ \
  -X GET \
  -H "Content-Type: application/json" \
  -H "Authorization: JWT PROBELY_AUTH_TOKEN"
```

And we get a response saying that the scan status is now **started**:

```json
{
  "id": "S6dOMPn0SnoH",
  "changed": "2017-08-01T13:38:12.623650Z",
  "started": null,
  "completed": null,
  "mediums": 0,
  "changed_by": "(...)",
  "highs": 0,
  "status": "started",
  "created_by": "(...)",
  "target": "(...)",
  "created": "2017-08-01T13:37:00.843339Z",
  "lows": 0
}
```

The possible statuses are:

| Status | Name | Description |
| ------ | ---- | ----------- |
| queued | Queued | The scan is queued to start |
| started | Started | The scan is currently running |
| under_review | Under Review | The scan is complete but has some findings under review |
| completed | Completed | The scan is complete |
| completed_with_errors | Completed with errors | The scan is complete even after getting some error(s) |
| failed | Failed | The scan failed |
| canceled | Canceled | The scan was canceled |
| canceling | Canceling | The scan is being canceled |

During the scan, the keys "lows", "mediums", and "highs" will be updated with the findings, as they are being found.

After we get either the status **completed** or **completed_with_errors**, we can view the findings.

### Get vulnerabilities

Using the previous scan id **S6dOMPn0SnoH**, we can get the scan results:

```bash
curl https://api.probely.com/targets/AxtkqTE0v3E-/scans/S6dOMPn0SnoH/ \
  -X GET \
  -H "Content-Type: application/json" \
  -H "Authorization: JWT PROBELY_AUTH_TOKEN"
```

We get a response saying that the scan status is now **completed**, and that **45** vulnerabilities were found: **14** low, **11** medium and **20** high:

```json
{
  "id": "S6dOMPn0SnoH",
  "target": "(...)",
  "status": "completed",
  "started": "2017-08-01T13:37:12.623650Z",
  "completed": "2017-08-01T14:17:48.559514Z",
  "lows": 14,
  "mediums": 11,
  "highs": 20,
  "created": "2017-08-01T13:37:00.843339Z",
  "created_by": "(...)",
  "changed": "2017-08-01T14:17:48.559514Z",
  "changed_by": "(...)"
}
```

You can now view the results of this scan, or the target findings. Let's start with the scan results:

```bash
curl "https://api.probely.com/targets/AxtkqTE0v3E-/findings/?scan=S6dOMPn0SnoH&page=1" \
  -X GET \
  -H "Content-Type: application/json" \
  -H "Authorization: JWT PROBELY_AUTH_TOKEN"
```

```json
{ "count":45, "page_total":5, "page":1, "length":10, "results":[ { "id":79, "target": "(...)" , "scans": "(...)" , "labels": "(...)" , "fix":"To fix an SQL Injection in PHP, you should use Prepared Statements. Prepared Statements can be thought of as a kind of compiled template for the SQL that an application wants to run, that can be customized using variable parameters.\n\nPHP's PDO extension supports Prepared Statements, so that's probably your best option.\n\nIn the example below you can see the use of prepared statements. Variables ```$username``` and ```$hashedPassword``` come from user input.\n\n```\n$stmt = $dbg->prepare(\"SELECT id, name FROM users\n WHERE username=? AND password=?\");\n$stmt->bindParam(1, $username);\n$stmt->bindParam(2, $hashedPassword);\nif ($stmt->execute()) {\n\t$user = $stmt->fetch();\n\tif ($user) {\n\t\t$_SESSION['authID'] = $user['id'];\n\t\techo \"Hello \" . $user['name'];\n\t} else {\n\t\techo \"Invalid Login\";\n\t}\n}\n``` \n\nAs an added bonus, if you're executing the same query several times, then it'll be even faster than when you're not using prepared statements. This is because when using prepared statements, the query needs to be parsed (prepared) only once, but can be executed multiple times with the same or different parameters. \n", "requests":[ { "request":"(...)", "response":"(...)" }, { "request":"(...)", "response":"(...)" } ], "evidence":null, "extra":"", "definition":{ "id":"xnV8PJVmSoLS", "name":"SQL Injection", "desc":"SQL Injections are the most common form of injections because SQL databases are very popular in dynamic web applications. This vulnerability allows an attacker to tamper existing SQL queries performed by the web application. Depending on the queries, the attacker might be able to access, modify or even destroy data from the database.\n\nSince databases are commonly used to store private data, such as authentication information, personal user data and site content, if an attacker gains access to it, the consequences are typically very severe, ranging from defacement of the web application to users data leakage or loss, or even full control of the web application or database server." }, "url":"http://test-site.example.com/login.php", "path":"login.php", "method":"post", "parameter":"username", "value":"", "params":{ "username":[ "probely'" ], "password":[ "probely" ] }, "reporter": "(...)" , "assignee":null, "state":"notfixed", "severity":30, "last_found":"2017-08-01T14:03:56.207794Z", "changed":"2017-08-01T14:03:56.207794Z", "changed_by": "(...)" , "comment":"" }, "(...)" ] }
```

You can also view all the target findings, which will show all the findings that are not yet fixed. The structure is similar to the previous result.

```bash
curl https://api.probely.com/targets/AxtkqTE0v3E-/findings/ \
  -X GET \
  -H "Content-Type: application/json" \
  -H "Authorization: JWT PROBELY_AUTH_TOKEN"
```

### Get vulnerability details

You can also get details for a particular finding in a target. In this example we will get the details for the same finding as in the previous section:

```bash
curl https://api.probely.com/targets/AxtkqTE0v3E-/findings/79/ \
  -X GET \
  -H "Content-Type: application/json" \
  -H "Authorization: JWT PROBELY_AUTH_TOKEN"
```

This will result in the same information, but just for this particular finding:

```json
{ "id":79, "target": "(...)" , "scans": "(...)" , "labels": "(...)" , "fix":"To fix an SQL Injection in PHP, you should use Prepared Statements. Prepared Statements can be thought of as a kind of compiled template for the SQL that an application wants to run, that can be customized using variable parameters.\n\nPHP's PDO extension supports Prepared Statements, so that's probably your best option.\n\nIn the example below you can see the use of prepared statements. Variables ```$username``` and ```$hashedPassword``` come from user input.\n\n```\n$stmt = $dbg->prepare(\"SELECT id, name FROM users\n WHERE username=? AND password=?\");\n$stmt->bindParam(1, $username);\n$stmt->bindParam(2, $hashedPassword);\nif ($stmt->execute()) {\n\t$user = $stmt->fetch();\n\tif ($user) {\n\t\t$_SESSION['authID'] = $user['id'];\n\t\techo \"Hello \" . $user['name'];\n\t} else {\n\t\techo \"Invalid Login\";\n\t}\n}\n``` \n\nAs an added bonus, if you're executing the same query several times, then it'll be even faster than when you're not using prepared statements. This is because when using prepared statements, the query needs to be parsed (prepared) only once, but can be executed multiple times with the same or different parameters. \n", "requests":[ { "request":"(...)", "response":"(...)" }, { "request":"(...)", "response":"(...)" } ], "evidence":null, "extra":"", "definition":{ "id":"xnV8PJVmSoLS", "name":"SQL Injection", "desc":"SQL Injections are the most common form of injections because SQL databases are very popular in dynamic web applications. This vulnerability allows an attacker to tamper existing SQL queries performed by the web application. Depending on the queries, the attacker might be able to access, modify or even destroy data from the database.\n\nSince databases are commonly used to store private data, such as authentication information, personal user data and site content, if an attacker gains access to it, the consequences are typically very severe, ranging from defacement of the web application to users data leakage or loss, or even full control of the web application or database server." }, "url":"http://test-site.example.com/login.php", "path":"login.php", "method":"post", "parameter":"username", "value":"", "params":{ "username":[ "probely'" ], "password":[ "probely" ] }, "reporter": "(...)" , "assignee":null, "state":"notfixed", "severity":30, "last_found":"2017-08-01T14:03:56.207794Z", "changed":"2017-08-01T14:03:56.207794Z", "changed_by": "(...)" , "comment":"" }
```

## Concepts

The short version is that you run *scans* on *targets*, and *findings* are created for any issue that is found. However, there are a few more concepts that must be explained in order to get a complete picture of how Probely works. We will spend the next few sections detailing the most important concepts.

### Target

A *target* defines the scope of a scan: what will and won't be included in the scan plan. This is done by filling in a *target*'s *site* and *assets*. The entry point for the web application (and authentication) is set up in the *target*'s *site*. In modern web applications, you are probably loading resources from multiple domains. A single-page app, for example, will usually load the page from one domain and make AJAX requests to another. This is what *assets* are for: they specify which domains our scanner should follow and create requests for.

### Site

A URL is probably not the only thing you will need to set up when scanning your application. Does the application have an authenticated area? Does it use basic auth? Does it expect a certain cookie or header? These parameters are all configured in the *target*'s *site*.

We need to ensure that only allowed web applications are scanned. Therefore, we must verify that you have control of any site you wish to include. This can be done by:

* Placing a file on a well-known location, on the site's server;
* Creating specific DNS records.

### Asset

An *asset* is very similar to a *site*. The difference is that it is a domain instead of a URL. Additionally, an *asset* has no login or basic auth support. You can still have custom cookies and headers per *asset*. As with the *site*, you will need to prove an *asset*'s ownership. We have added some rules to make your life easier: if you have already verified a *site* and the domains match, the validation is fast-tracked.

### Scans

This is what you're here for. After configuring your *target*, you will want to run *scans* against it. You can either start a one-off scan, or schedule one for later, recurring or not. During the *scan*, we will spider and run several modules to check for security issues, which we call *findings*. You can check the *findings* even before a scan ends. If everything goes well, the scan will complete, and that is it.

With some *findings*, our automated processes may have difficulty determining whether a finding is a false positive or a legitimate issue. In these instances, a scan will be marked as under review, and we will further analyze the finding before making a decision. We will only show findings that, with some degree of confidence, are true positives. A finding that we are not sure of will never be displayed.

As much as we try to prevent it, a *scan* (or a sub-module) can malfunction. If this happens, a *scan* is marked as:

* "failed": the problem was irrecoverable;
* "completed with errors": some module failed but the scan itself completed.

During a scan, we try to determine which *frameworks* you are using and add this information to the *site* and *asset* objects discussed previously.

### Findings

The last core concept is the *finding*: a security issue that we have found during our scans. If the same issue is found in a new scan, it will not open a new finding but update the previous one.

A *finding* has a lot of information about the issue: where it was found (URL, insertion point (e.g. cookie), parameter, and method), the evidence we gathered, the full request and response that we used, and suggestions on how to go about fixing it. A full description of the vulnerability is also present in the *definition* property. We also assign a severity and calculate the CVSS score for each.

Besides all this, there are also actions that you can perform on a *finding*. You can assign it to one user, leave comments for your team or add labels, and reduce or increase the severity. If you don't plan on fixing the *finding* and accept the risk, or you think we reported a false positive, you can mark the finding to reflect that.
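Putting the quick-start together, the status polling described above can be scripted; a minimal sketch, assuming `jq` is available and reusing the example target and scan ids (replace *PROBELY_AUTH_TOKEN* with your own token):

```bash
# Sketch: poll the example scan until it reaches a terminal status, then list
# the findings of that scan. Endpoints and status values are the ones shown
# in the quick-start above.
TARGET_ID="AxtkqTE0v3E-"
SCAN_ID="S6dOMPn0SnoH"

while true; do
  STATUS=$(curl -s \
    -H "Content-Type: application/json" \
    -H "Authorization: JWT PROBELY_AUTH_TOKEN" \
    "https://api.probely.com/targets/${TARGET_ID}/scans/${SCAN_ID}/" | jq -r '.status')
  echo "scan status: ${STATUS}"
  case "${STATUS}" in
    completed|completed_with_errors|failed|canceled) break ;;
  esac
  sleep 60
done

# Once the scan is complete (or complete with errors), fetch its findings.
curl -s \
  -H "Content-Type: application/json" \
  -H "Authorization: JWT PROBELY_AUTH_TOKEN" \
  "https://api.probely.com/targets/${TARGET_ID}/findings/?scan=${SCAN_ID}&page=1"
```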
