Backends¶
Frontier Backend is where the crawling logic and policies live; it is essentially the brain of your crawler. Queue, Metadata and States are the classes where all low-level code is meant to be placed, while Backend, by contrast, operates at a higher level. Frontera is bundled with database and in-memory implementations of Queue, Metadata and States, which can be combined in your custom backends or used standalone by directly instantiating FrontierManager and Backend.
Backend methods are called by the FrontierManager after Middleware, using hooks for Request and Response processing according to the frontier data flow.
Unlike Middleware, of which many different instances can be activated, only one Backend can be used per frontier.
Activating a backend¶
To activate the frontier backend component, set it through the BACKEND setting.
Here’s an example:
BACKEND = 'frontera.contrib.backends.memory.FIFO'
Keep in mind that some backends may need to be additionally configured through a particular setting. See backends documentation for more info.
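For instance, the SQLAlchemy backends read their storage location from a separate setting; the database path below is only an example:

BACKEND = 'frontera.contrib.backends.sqlalchemy.FIFO'
SQLALCHEMYBACKEND_ENGINE = 'sqlite:///frontier.db'  # backend-specific setting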
Writing your own backend¶
Each backend component is a single Python class inheriting from Backend or DistributedBackend and using one or all of Queue, Metadata and States.
FrontierManager will communicate with the active backend through the methods described below.
DistributedBackend inherits all methods of Backend, and has two more class methods, which are called during strategy and DB worker instantiation.
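As an illustration, here is a minimal sketch of a custom backend built around a plain FIFO list. Method names follow the Backend interface described above, but the exact abstract interface (for example, whether links_extracted() and the queue/metadata/states properties are required) varies between Frontera versions, so treat this as an outline rather than a drop-in implementation:

from frontera.core.components import Backend


class SimpleFIFOBackend(Backend):
    """Sketch of a backend keeping a plain FIFO list of pending requests."""

    def __init__(self, manager):
        self.manager = manager
        self.pending = []   # requests waiting to be dispatched
        self.seen = set()   # fingerprints of requests enqueued so far

    @classmethod
    def from_manager(cls, manager):
        return cls(manager)

    def frontier_start(self):
        pass  # open storage connections here

    def frontier_stop(self):
        pass  # flush and close storage here

    def add_seeds(self, seeds):
        for seed in seeds:
            self._enqueue(seed)

    def page_crawled(self, response):
        pass  # record per-document metadata here

    def links_extracted(self, request, links):
        for link in links:
            self._enqueue(link)

    def request_error(self, request, error):
        pass  # record the failure, possibly reschedule

    def get_next_requests(self, max_n_requests, **kwargs):
        batch = self.pending[:max_n_requests]
        self.pending = self.pending[max_n_requests:]
        return batch

    def finished(self):
        return not self.pending

    # Property stubs; this sketch keeps everything inline instead of
    # delegating to separate Queue/Metadata/States components.
    @property
    def queue(self):
        return None

    @property
    def metadata(self):
        return None

    @property
    def states(self):
        return None

    def _enqueue(self, request):
        # The fingerprint is set by the URL fingerprint middleware; the meta
        # key may be bytes or str depending on the Frontera version.
        fp = request.meta.get(b'fingerprint') or request.meta.get('fingerprint')
        if fp not in self.seen:
            self.seen.add(fp)
            self.pending.append(request)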
Backend should communicate with low-level storage by means of these classes:
Metadata¶
Known implementations are: MemoryMetadata and sqlalchemy.components.Metadata.
Queue¶
Known implementations are: MemoryQueue and sqlalchemy.components.Queue.
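A custom Queue only needs a handful of methods. The sketch below follows the schedule/get_next_requests/count interface; the assumed layout of the items in schedule()'s batch argument is a guess that should be checked against your version:

from collections import deque

from frontera.core.components import Queue


class SimpleQueue(Queue):
    """Sketch of a FIFO Queue component backed by a deque."""

    def __init__(self):
        self._requests = deque()

    def frontier_start(self):
        pass  # nothing to open for an in-memory queue

    def frontier_stop(self):
        pass  # nothing to flush

    def schedule(self, batch):
        # Assumed batch item layout: (fingerprint, score, request, schedule_flag);
        # verify against your version's Queue.schedule() contract.
        for fprint, score, request, schedule in batch:
            if schedule:
                self._requests.append(request)

    def get_next_requests(self, max_n_requests, partition_id, **kwargs):
        return [self._requests.popleft()
                for _ in range(min(max_n_requests, len(self._requests)))]

    def count(self):
        return len(self._requests)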
States¶
Known implementations are: MemoryStates and sqlalchemy.components.States.
Built-in backend reference¶
This article describes all backend components that come bundled with Frontera.
To see which Backend is activated by default, check the BACKEND setting.
Basic algorithms¶
Some of the built-in Backend objects implement basic algorithms such as FIFO/LIFO or DFS/BFS for page visit ordering.
The differences between them lie in the storage engine used: for instance, memory.FIFO and sqlalchemy.FIFO use the same logic but different storage engines.
All these backend variations use the same CommonBackend class, which implements a one-time visit crawling policy with a priority queue.
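Choosing an ordering is just a matter of pointing the BACKEND setting at the corresponding class, e.g.:

# Pick exactly one; the memory.* and sqlalchemy.* variants share the same
# ordering logic but use different storage engines.
BACKEND = 'frontera.contrib.backends.memory.FIFO'
# BACKEND = 'frontera.contrib.backends.sqlalchemy.FIFO'
# BACKEND = 'frontera.contrib.backends.sqlalchemy.LIFO'
# BACKEND = 'frontera.contrib.backends.memory.RANDOM'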
Memory backends¶
This set of Backend objects uses the heapq module as a queue and native dictionaries as storage for the basic algorithms.
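To illustrate the idea (this is not Frontera's actual code), a heapq-based priority queue pops candidate URLs best-score-first:

import heapq

# heapq keeps the lowest tuple first, so scores are negated to pop best-first.
heap = []
for seq, (url, score) in enumerate([('http://example.com/a', 0.5),
                                    ('http://example.com/b', 0.9)]):
    heapq.heappush(heap, (-score, seq, url))  # seq breaks score ties FIFO-style

while heap:
    _, _, url = heapq.heappop(heap)
    print(url)  # b first (score 0.9), then a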
- class frontera.contrib.backends.memory.BASE¶
Base class for in-memory Backend objects.
- class frontera.contrib.backends.memory.RANDOM¶
In-memory Backend implementation of a random selection algorithm.
SQLAlchemy backends¶
This set of Backend objects uses SQLAlchemy as storage for the basic algorithms.
By default it uses an in-memory SQLite database as the storage engine, but any database supported by SQLAlchemy can be used.
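Switching databases is done through the SQLALCHEMYBACKEND_ENGINE setting, which takes a standard SQLAlchemy connection URL; the credentials below are hypothetical:

SQLALCHEMYBACKEND_ENGINE = 'postgresql://user:password@localhost/frontera'
SQLALCHEMYBACKEND_ENGINE_ECHO = True  # log generated SQL, useful when debugging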
If you need to use your own declarative SQLAlchemy models, you can do so through the SQLALCHEMYBACKEND_MODELS setting.
This setting is a dictionary where each key is the name of the model to define and each value is the model to use.
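For example, to replace only the metadata model while keeping the bundled ones (the custom module path is hypothetical; the other paths follow the defaults from Frontera's sqlalchemy models module, so verify them against your version):

SQLALCHEMYBACKEND_MODELS = {
    'MetadataModel': 'myproject.models.CustomMetadataModel',  # hypothetical
    'StateModel': 'frontera.contrib.backends.sqlalchemy.models.StateModel',
    'QueueModel': 'frontera.contrib.backends.sqlalchemy.models.QueueModel',
}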
For a complete list of all settings used for SQLAlchemy backends, check the settings section.
- class frontera.contrib.backends.sqlalchemy.BASE¶
Base class for SQLAlchemy Backend objects.
- class frontera.contrib.backends.sqlalchemy.FIFO¶
SQLAlchemy Backend implementation of FIFO algorithm.
- class frontera.contrib.backends.sqlalchemy.LIFO¶
SQLAlchemy Backend implementation of LIFO algorithm.
- class frontera.contrib.backends.sqlalchemy.RANDOM¶
SQLAlchemy Backend implementation of a random selection algorithm.
Revisiting backend¶
Based on a custom SQLAlchemy backend and queue. Crawling starts with seeds. After the seeds are crawled, every newly discovered document is scheduled for immediate crawling. On fetching, every document is scheduled for recrawling after a fixed interval set by SQLALCHEMYBACKEND_REVISIT_INTERVAL.
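A minimal configuration sketch (verify the backend class path against your version's settings reference; the interval value is only an example):

from datetime import timedelta

BACKEND = 'frontera.contrib.backends.sqlalchemy.revisiting.Backend'
SQLALCHEMYBACKEND_REVISIT_INTERVAL = timedelta(days=3)  # recrawl every 3 days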
The current implementation of the revisiting backend has no prioritization. During long-term runs the spider could go idle, because there are no documents available for crawling yet, while documents are waiting for their scheduled revisit time.
HBase backend¶
This backend is more suitable for large-scale web crawlers. The settings reference can be found in HBase backend. Consider tuning the block cache so that the states for an average-size website fit within one block. To achieve this it is recommended to use hostname_local_fingerprint, which keeps documents from the same host close together. This function can be selected with the URL_FINGERPRINT_FUNCTION setting.
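For example:

URL_FINGERPRINT_FUNCTION = 'frontera.utils.fingerprint.hostname_local_fingerprint'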