API and Plugin Development

InterroBot offers powerful API and plugin development capabilities, allowing you to leverage InterroBot's best-in-class crawl filtering toward your own ends. This guide covers the basics of working with the InterroBot API and building plugins on top of your crawl data.

InterroBot API Overview

The Plugin API exposes InterroBot's web crawler and scraping data to developers, enabling you to create a custom report, visualization, or interactive app from raw crawl data.

Technical Sandbox

If you're just getting started, InterroBot provides a technical API sandbox in-app. You can get there from the Options page by navigating to the "Crawler Data API."

The sandbox is the best place to suss out plugin capabilities and data access. The sandbox provides a programmable web crawler interface where you can experiment with different queries and see live results.

The box above the JSON response shows the query generated from the form arguments; use it to reproduce the result, or as a quick start from which to build your own queries.

Navigating to the API interface via Options, in-app.

API Usage

JavaScript API Access

JavaScript can access the data through the interrobot-plugin library. With the library, you can build web crawler plugins that extend InterroBot's functionality to suit your specific needs. Please refer to the repository for details on setting up a development environment.
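
For a rough idea of what a plugin script can look like, here is a minimal sketch. Only Plugin.postApiRequest is drawn from the examples on this page; the import path, the ProjectLister class, and its render method are illustrative assumptions, so defer to the repository examples for the actual interface.

// Sketch of a minimal plugin script. Plugin.postApiRequest matches the
// examples below; the import path, subclass, and render() are assumptions.
import { Plugin } from "interrobot-plugin"; // hypothetical import path

class ProjectLister extends Plugin {
    async render() {
        // fetch all projects and dump the raw response to the page
        const projects = await Plugin.postApiRequest("GetProjects", {
            fields: [],
            projects: [],
        });
        document.body.textContent = JSON.stringify(projects, null, 2);
    }
}

new ProjectLister().render();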

Core API Methods

Most of the heavy lifting comes in the form of three API methods: GetProjects, GetResources, and GetCrawls. These methods form the backbone of InterroBot's data API. Between them, you can transform your website data into tables, visualizations, or interactive UIs.

Technical documentation for the plugin is hosted on GitHub.

Example API Requests

Retrieve a list of all projects, with icon image data.

// the "image" field adds each project's icon data; per this example,
// an empty projects list returns all projects
let result = await Plugin.postApiRequest("GetProjects", {
    fields: ["image"],
    projects: []
});

Retrieve a list of pages with header data.

// <projectId> is the numeric id of the project to query
result = await Plugin.postApiRequest("GetResources", {
    external: true,          // include offsite (external) resources
    fields: ["headers"],     // return HTTP headers with each result
    offset: 0,               // start of the result window, for paging
    query: "headers: image", // filter to resources with "image" in headers
    project: <projectId>,
    type: "any",             // no restriction on resource type
});
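
Because GetResources accepts an offset, larger result sets can be pulled in pages. The getAllImageHeaders helper below is a sketch, not library code; it assumes the response carries a results array and that an empty page signals the end, both of which you should confirm in the sandbox.

// Sketch: page through GetResources via the offset argument. The helper
// name, the `results` property, and the empty-page stop condition are
// assumptions; verify the real response shape in the Crawler Data API sandbox.
async function getAllImageHeaders(projectId) {
    const all = [];
    let offset = 0;
    while (true) {
        const page = await Plugin.postApiRequest("GetResources", {
            external: true,
            fields: ["headers"],
            offset: offset,
            query: "headers: image",
            project: projectId,
            type: "any",
        });
        const results = page.results ?? []; // shape is an assumption
        if (results.length === 0) {
            break;
        }
        all.push(...results);
        offset += results.length;
    }
    return all;
}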

Retrieve a list of crawls by project.

const crawls = await Plugin.postApiRequest("GetCrawls", {
    complete: "any",         // include complete and incomplete crawls alike
    fields: [],              // no extra fields requested
    project: <projectId>,    // numeric project id, as above
});
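
These calls chain together naturally. The getCrawlsByProject helper below is a sketch that collects crawls per project; the response shapes it assumes (a results array from GetProjects, an id per project) should be verified in the sandbox.

// Sketch: collect crawls per project. Response shapes are assumptions;
// the guard below handles either a bare array or a wrapped results list.
async function getCrawlsByProject() {
    const response = await Plugin.postApiRequest("GetProjects", {
        fields: [],
        projects: [],
    });
    const projects = Array.isArray(response) ? response : response.results ?? [];
    const crawlsByProject = new Map();
    for (const project of projects) {
        // `id` as the project key is an assumption; confirm in the sandbox
        const crawls = await Plugin.postApiRequest("GetCrawls", {
            complete: "any",
            fields: [],
            project: project.id,
        });
        crawlsByProject.set(project.id, crawls);
    }
    return crawlsByProject;
}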

To get a feel for the available data and filters, see the sandbox; to get an idea of what is possible, check out the plugins directory, full of plugin examples, many of which are open source.

The API/Plugin Connection

With InterroBot, the API and plugin development are intrinsically linked. Every API interaction occurs within a plugin context, and the API exists to serve the plugins. This design allows you to create a specialized web crawler for content analysis, SEO audits, or any other specific use case you might have.

For scenarios requiring data access outside the plugin context, InterroBot offers alternative solutions:

  1. Most pages within InterroBot provide exportable data in standard formats.
  2. For comprehensive data access, you can export the entire SQLite database, which contains all crawl data and associated metadata (see the sketch below).
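
If you go the SQLite route, any standard client can open the export. Below is a sketch using the better-sqlite3 Node package; the filename is hypothetical, and because the schema isn't documented here, the query lists the tables rather than assuming their names.

// Sketch: inspect an exported InterroBot database from Node. The filename
// is hypothetical; sqlite_master lists whatever tables the export contains.
const Database = require("better-sqlite3");

const db = new Database("interrobot-export.db", { readonly: true });
const tables = db
    .prepare("SELECT name FROM sqlite_master WHERE type = 'table'")
    .all();
console.log(tables.map((row) => row.name));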

Future Development

InterroBot plugins are a new feature, and more technical details are forthcoming; at this stage, the interrobot-plugin GitHub repo may be more active than this webpage. Contact me through the app if you have any questions.

