Why Desktop? Why not Cloud?
Intranets and Localhost
A desktop crawler can operate within restricted networks. If you're trying to crawl an intranet from behind a firewall, there's no need to ask IT to open firewall rules or resolve network-access issues on your behalf.
The same holds for localhost (loopback), development servers, and the like. If you can access the website on-network, InterroBot can too.
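The loopback case is easy to demonstrate. The sketch below (using only the Python standard library) spins up a throwaway local server as a stand-in for a dev site, then fetches it from the same machine, exactly as a desktop crawler would; a cloud crawler outside the network has no route to this address.

```python
import http.server
import threading
import urllib.request

# Throwaway loopback server standing in for a local dev site.
# Port 0 asks the OS for any free port.
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any process on the same machine, a desktop crawler included,
# can reach the loopback interface directly.
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
server.shutdown()
print(status)  # 200
```

The same reasoning extends to any host resolvable only inside your network: reachability is a property of where the crawler runs, not of the crawler itself.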
Avoiding Cloud Costs
Cloud computing runs "on the meter," and crawlers occupy an especially resource-hungry niche of cloud apps: they consume network, compute, and storage in high volume, all at once.
Ultimately, any crawler in the cloud is reselling services and passing on costs. While there are many advantages to the cloud (access from any browser, faster network, scalability, scheduling), the hosting costs of an application like InterroBot would necessitate a pricey subscription model to cover operations.
Other crawlers have arrived at a similar conclusion: desktop/on-prem solutions provide superior value for the typical case.
Data Privacy
If you're concerned about sending your indexed website data to a third party, running on the desktop keeps your crawl data off the Internet. That said, your data is only as secure as the laptop or desktop it lives on. You can remove the data either by deleting a project and then vacuuming the database (via the Options page), or by uninstalling InterroBot.
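The delete-then-vacuum step matters because many embedded databases don't shrink on delete. The sketch below is a hypothetical illustration using SQLite (InterroBot's actual storage internals may differ): deleting rows leaves free pages inside the file, while VACUUM rewrites the file so the removed data no longer occupies local storage.

```python
import os
import sqlite3
import tempfile

# Hypothetical stand-in for a crawler's local store (assumption:
# a SQLite-style single-file database).
path = os.path.join(tempfile.mkdtemp(), "crawl.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE pages (url TEXT, body TEXT)")
con.executemany(
    "INSERT INTO pages VALUES (?, ?)", [("https://example.com", "x" * 4096)] * 500
)
con.commit()
before = os.path.getsize(path)

# Deleting alone marks pages as free; the file keeps its size.
con.execute("DELETE FROM pages")
con.commit()

# VACUUM rebuilds the file, reclaiming the freed space on disk.
con.execute("VACUUM")
con.close()
after = os.path.getsize(path)
print(before > after)  # True
```

This is why the Options page pairs project deletion with a vacuum step rather than relying on deletion alone.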