Frontera at a glance

Frontera is an application framework meant to be used as part of a crawling system, allowing you to easily define and manage tasks related to a crawl frontier.

Although it was originally designed for Scrapy, it can also be used with any other crawling framework or system, since it offers generic frontier functionality.

The purpose of this document is to introduce you to the concepts behind Frontera, so that you can get an idea of how it works and decide whether it suits your needs.

1. Create your crawler

Create your Scrapy project as you usually do. Enter a directory where you’d like to store your code and then run:

scrapy startproject tutorial

This will create a tutorial directory with the following contents:

tutorial/
    scrapy.cfg
    tutorial/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            ...

These are basically:

  • scrapy.cfg: the project configuration file
  • tutorial/: the project’s Python module, you’ll later import your code from here.
  • tutorial/items.py: the project’s items file.
  • tutorial/pipelines.py: the project’s pipelines file.
  • tutorial/settings.py: the project’s settings file.
  • tutorial/spiders/: a directory where you’ll later put your spiders.

2. Integrate your crawler with the frontier

This article about integration with Scrapy explains this step in detail.
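In short, the integration amounts to pointing Scrapy at Frontera's scheduler and middlewares in your project's settings.py. The sketch below follows the middleware and scheduler paths from Frontera's Scrapy integration docs; check the article above for the exact configuration for your version:

```python
# tutorial/settings.py -- sketch of the Frontera-Scrapy glue.
# The dotted paths below are the ones documented by Frontera's
# Scrapy integration; verify them against your installed version.

SPIDER_MIDDLEWARES = {
    'frontera.contrib.scrapy.middlewares.schedulers.SchedulerSpiderMiddleware': 1000,
}
DOWNLOADER_MIDDLEWARES = {
    'frontera.contrib.scrapy.middlewares.schedulers.SchedulerDownloaderMiddleware': 1000,
}

# Replace Scrapy's default scheduler with Frontera's, so that requests
# are pulled from and fed back into the crawl frontier.
SCHEDULER = 'frontera.contrib.scrapy.schedulers.frontier.FronteraScheduler'
```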

3. Choose your backend

Configure the frontier settings to use a built-in backend, such as the in-memory BFS backend:

BACKEND = 'frontera.contrib.backends.memory.heapq.BFS'

4. Run the spider

Run your Scrapy spider as usual from the command line:

scrapy crawl myspider

And that’s it! Your spider is now running, integrated with Frontera.

What else?

You’ve seen a simple example of how to use Frontera with Scrapy, but this only scratches the surface. Frontera provides many powerful features for making frontier management easy and efficient.

What’s next?

The next obvious steps are for you to install Frontera, and read the architecture overview and API docs. Thanks for your interest!
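Frontera is published on PyPI, so installation is typically a one-liner (assuming a working Python environment with pip; see the installation docs for extras and optional dependencies):

```shell
pip install frontera
```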