Frontera 0.4 documentation
Frontera is a web crawling toolbox that lets you build crawlers of any scale and purpose.
Frontera also provides replication, sharding and isolation of all crawler components, so a crawl can be scaled and distributed.
Frontera contains the components needed to build a fully operational web crawler with Scrapy. Although it was originally designed for Scrapy, it can also be used with any other crawling framework/system, as it offers a generic toolbox.
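As a taste of the Scrapy integration covered later in this documentation, a Scrapy project is typically pointed at Frontera through its settings module alone. A minimal sketch (module paths as used in the 0.4 line; verify them against your installed version):

```python
# settings.py of a Scrapy project wired to Frontera.
# Hand request scheduling over to Frontera's scheduler:
SCHEDULER = 'frontera.contrib.scrapy.schedulers.frontier.FronteraScheduler'

# Route requests/responses through Frontera's middlewares:
SPIDER_MIDDLEWARES = {
    'frontera.contrib.scrapy.middlewares.schedulers.SchedulerSpiderMiddleware': 1000,
}
DOWNLOADER_MIDDLEWARES = {
    'frontera.contrib.scrapy.middlewares.schedulers.SchedulerDownloaderMiddleware': 1000,
}

# Choose a frontier backend; the in-memory FIFO backend is the
# simplest one to start with before moving to a distributed setup.
BACKEND = 'frontera.contrib.backends.memory.FIFO'
```

With these settings in place the spider code stays ordinary Scrapy; Frontera decides which requests are crawled and in what order.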
The purpose of this chapter is to introduce you to the concepts behind Frontera so that you can get an idea of how it works and decide if it is suited to your needs.
- Installation Guide
- Installation HOWTO and dependency options.
- Frontier objects
- Understand the classes used to represent requests and responses.
- Middlewares
- Filter or alter information for links and documents.
- Canonical URL Solver
- Identify and make use of the canonical URL of a document.
- Backends
- Define your own crawling policy and custom storage.
- Message bus
- Built-in message bus reference.
- Crawling strategy
- Implementing your own crawling strategy for the distributed backend.
- Using the Frontier with Scrapy
- Learn how to use Frontera with Scrapy.
- Settings
- Settings reference.
- Architecture overview
- See how Frontera works and its different components.
- Frontera API
- Learn how to use the frontier.
- Using the Frontier with Requests
- Learn how to use Frontera with Requests.
- Examples
- Some example projects and scripts using Frontera.
- Tests
- How to run and write Frontera tests.
- Testing a Frontier
- Test your frontier in an easy way.
- F.A.Q.
- Frequently asked questions.
- Contribution guidelines
- How to contribute.
- Glossary
- Glossary of terms.