Settings

The Frontera settings allow you to customize the behaviour of all components, including the FrontierManager, Middleware and Backend themselves.

The infrastructure of the settings provides a global namespace of key-value mappings that can be used to pull configuration values from. The settings can be populated through different mechanisms, which are described below.

For a list of available built-in settings see: Built-in settings reference.

Designating the settings

When you use Frontera, you have to tell it which settings you’re using. As FrontierManager is the main entry point to Frontier usage, you can do this by using the method described in the Loading from settings section.

When using a string path pointing to a settings file for the frontier, we propose the following directory structure:

my_project/
    frontier/
        __init__.py
        settings.py
        middlewares.py
        backends.py
    ...

These are basically:

  • frontier/settings.py: the frontier settings file.
  • frontier/middlewares.py: the middlewares used by the frontier.
  • frontier/backends.py: the backend(s) used by the frontier.
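
For example, a minimal frontier/settings.py might look like the sketch below. The setting names are described in the Built-in settings reference later on this page; the values are only illustrative.

# frontier/settings.py -- illustrative values only
BACKEND = 'frontera.contrib.backends.memory.FIFO'
MAX_REQUESTS = 2000
MAX_NEXT_REQUESTS = 256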

How to access settings

Settings can be accessed through the FrontierManager.settings attribute, which is passed to the Middleware.from_manager and Backend.from_manager class methods:

class MyMiddleware(Component):

    @classmethod
    def from_manager(cls, manager):
        # The settings object is available on the manager passed to from_manager.
        settings = manager.settings
        if settings.TEST_MODE:
            print("test mode is enabled!")

In other words, settings can be accessed as attributes of the Settings object.

Settings class

class frontera.settings.Settings(module=None, attributes=None)
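
A minimal usage sketch of the constructor arguments, assuming a frontier.settings module as in the layout proposed above (the override value is arbitrary): settings can be built from a module path and/or a dictionary of attribute overrides, then read back as attributes.

from frontera.settings import Settings

# Load values from a settings module (hypothetical 'frontier.settings' path)
# and override one of them through the attributes dict.
settings = Settings(module='frontier.settings',
                    attributes={'MAX_REQUESTS': 1000})

# Settings values are exposed as attributes of the Settings object.
print(settings.MAX_REQUESTS)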

Built-in frontier settings

Here’s a list of all available Frontera settings, in alphabetical order, along with their default values and the scope where they apply.

AUTO_START

Default: True

Whether to enable frontier automatic start. See Starting/Stopping the frontier.

BACKEND

Default: 'frontera.contrib.backends.memory.FIFO'

The Backend to be used by the frontier. For more info see Activating a backend.

CANONICAL_SOLVER

Default: frontera.contrib.canonicalsolvers.Basic

The CanonicalSolver to be used by the frontier for resolving canonical URLs. For more info see Canonical URL Solver.

CONSUMER_BATCH_SIZE

Default: 512

This is the batch size used by strategy and db workers for consuming the spider log and scoring log streams. Increasing it will cause a worker to spend more time on every task, but to process more items per task, therefore leaving less time for other tasks during a fixed time interval. Reducing it will result in running more tasks within the same time interval, but with less overall efficiency. Tune it when your consumers are too slow, or too fast.

CRAWLING_STRATEGY

Default: None

The path to the crawling strategy class, instantiated and used in the strategy worker to prioritize and stop crawling in distributed run mode.

DELAY_ON_EMPTY

Default: 5.0

Delay between calls to the backend for new batches in the Scrapy scheduler, when the queue size gets below CONCURRENT_REQUESTS. When the backend has no requests to fetch, this delay helps to exhaust the rest of the buffer without hitting the backend on every request. Increase it if calls to your backend are taking too long, and decrease it if you need a fast spider bootstrap from seeds.

KAFKA_GET_TIMEOUT

Default: 5.0

Time (in seconds) the process should block while waiting for the requested amount of data to arrive from the message bus.

LOGGING_CONFIG

Default: logging.conf

The path to a file with logging module configuration. See https://docs.python.org/2/library/logging.config.html#logging-config-fileformat. If the file is absent, the logging system will be initialized with logging.basicConfig() and the CONSOLE handler will be used. This option is used only in the db worker and strategy worker.

MAX_NEXT_REQUESTS

Default: 64

The maximum number of requests returned by the get_next_requests API method. In a distributed context this is the number of requests produced per spider by the db worker, or the number of requests read from the message bus per attempt to fill the spider queue. In a single process it's the number of requests to get from the backend per call to the get_next_requests method.

MAX_REQUESTS

Default: 0

Maximum number of returned requests after which Frontera is finished. If value is 0 (default), the frontier will continue indefinitely. See Finishing the frontier.

MESSAGE_BUS

Default: frontera.contrib.messagebus.zeromq.MessageBus

Points Frontera to message bus implementation. Defaults to ZeroMQ.
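
For example, to switch to the Kafka message bus described below, this setting would be overridden in the settings module (a sketch, assuming the class path is given as a string):

# Select the Kafka message bus instead of the default ZeroMQ one.
MESSAGE_BUS = 'frontera.contrib.messagebus.kafkabus.MessageBus'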

MIDDLEWARES

A list containing the middlewares enabled in the frontier. For more info see Activating a middleware.

Default:

[
    'frontera.contrib.middlewares.fingerprint.UrlFingerprintMiddleware',
]
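
To enable additional middlewares, extend this list in your settings module. A sketch adding the DomainFingerprintMiddleware mentioned later on this page (its module path is assumed to mirror that of UrlFingerprintMiddleware; see Activating a middleware for details):

MIDDLEWARES = [
    'frontera.contrib.middlewares.fingerprint.UrlFingerprintMiddleware',
    'frontera.contrib.middlewares.fingerprint.DomainFingerprintMiddleware',
]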

NEW_BATCH_DELAY

Default: 30.0

Used in the DB worker; the time interval between the production of new batches for all partitions. If a partition is busy, it will be skipped.

OVERUSED_SLOT_FACTOR

Default: 5.0

(in progress + queued requests in that slot) / max allowed concurrent downloads per slot, before the slot is considered overused. This affects only the Scrapy scheduler.

REQUEST_MODEL

Default: 'frontera.core.models.Request'

The Request model to be used by the frontier.

RESPONSE_MODEL

Default: 'frontera.core.models.Response'

The Response model to be used by the frontier.

SCORING_PARTITION_ID

Default: 0

Used by the strategy worker, and represents the partition the strategy worker is assigned to.

SPIDER_LOG_PARTITIONS

Default: 1

Number of spider log stream partitions. This affects the number of required strategy workers; each strategy worker is assigned to its own partition.

SPIDER_FEED_PARTITIONS

Default: 1

Number of spider feed partitions. This directly affects the number of spider processes running. Every spider is assigned to its own partition.

SPIDER_PARTITION_ID

Default: 0

Per-spider setting, pointing the spider to its assigned partition.

STATE_CACHE_SIZE

Default: 1000000

Maximum count of elements in the state cache before it gets cleared.

STORE_CONTENT

Default: False

Determines whether the content should be sent over the message bus and stored in the backend; this is a serious performance killer.

TEST_MODE

Default: False

Whether to enable frontier test mode. See Frontier test mode.

Built-in fingerprint middleware settings

Settings used by the UrlFingerprintMiddleware and DomainFingerprintMiddleware.

URL_FINGERPRINT_FUNCTION

Default: frontera.utils.fingerprint.sha1

The function used to calculate the url fingerprint.

DOMAIN_FINGERPRINT_FUNCTION

Default: frontera.utils.fingerprint.sha1

The function used to calculate the domain fingerprint.

TLDEXTRACT_DOMAIN_INFO

Default: False

If set to True, tldextract will be used to attach extra domain information (second-level, top-level and subdomain) to the meta field (see Adding additional data to objects).

Built-in backends settings

SQLAlchemy

SQLALCHEMYBACKEND_CACHE_SIZE

Default: 10000

SQLAlchemy Metadata LRU cache size. It's used for caching objects which would otherwise be requested from the DB every time already-known documents are crawled. This mainly saves DB throughput; increase it if you're experiencing problems with a too high volume of SELECTs to the Metadata table, or decrease it if you need to save memory.

SQLALCHEMYBACKEND_CLEAR_CONTENT

Default: True

Set to False if you need to disable table content clean up on backend instantiation (e.g. every Scrapy spider run).

SQLALCHEMYBACKEND_DROP_ALL_TABLES

Default: True

Set to False if you need to disable dropping of DB tables on backend instantiation (e.g. every Scrapy spider run).

SQLALCHEMYBACKEND_ENGINE

Default: sqlite:///:memory:

SQLAlchemy database URL. The default is an in-memory SQLite database.

SQLALCHEMYBACKEND_ENGINE_ECHO

Default: False

Turn on/off SQLAlchemy verbose output. Useful for debugging SQL queries.

SQLALCHEMYBACKEND_MODELS

Default:

{
    'MetadataModel': 'frontera.contrib.backends.sqlalchemy.models.MetadataModel',
    'StateModel': 'frontera.contrib.backends.sqlalchemy.models.StateModel',
    'QueueModel': 'frontera.contrib.backends.sqlalchemy.models.QueueModel'
}

This is a mapping of the SQLAlchemy models used by backends. It is mainly used for customization.
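
For example, to plug in a custom queue model, the mapping can be overridden in the settings module; the my_project.models.CustomQueueModel path below is purely hypothetical:

SQLALCHEMYBACKEND_MODELS = {
    'MetadataModel': 'frontera.contrib.backends.sqlalchemy.models.MetadataModel',
    'StateModel': 'frontera.contrib.backends.sqlalchemy.models.StateModel',
    'QueueModel': 'my_project.models.CustomQueueModel',  # hypothetical custom model
}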

Revisiting backend

SQLALCHEMYBACKEND_REVISIT_INTERVAL

Default: timedelta(days=1)

Time between document visits, expressed as a datetime.timedelta object. Changing this setting will only affect documents scheduled after the change; all previously queued documents will be crawled with the old periodicity.

HBase backend

HBASE_BATCH_SIZE

Default: 9216

Count of accumulated PUT operations before they are sent to HBase.

HBASE_DROP_ALL_TABLES

Default: False

Enables dropping and creation of new HBase tables on worker start.

HBASE_METADATA_TABLE

Default: metadata

Name of the documents metadata table.

HBASE_NAMESPACE

Default: crawler

Name of HBase namespace where all crawler related tables will reside.

HBASE_QUEUE_TABLE

Default: queue

Name of HBase priority queue table.

HBASE_STATE_CACHE_SIZE_LIMIT

Default: 3000000

Number of items in the state cache of the strategy worker before it gets flushed to HBase and cleared.

HBASE_THRIFT_HOST

Default: localhost

HBase Thrift server host.

HBASE_THRIFT_PORT

Default: 9090

HBase Thrift server port.

HBASE_USE_FRAMED_COMPACT

Default: False

Enabling this option dramatically reduces transmission overhead, but the server needs to be properly configured to use Thrift's framed transport and compact protocol.

HBASE_USE_SNAPPY

Default: False

Whether to compress content and metadata in HBase using Snappy. Decreases the amount of disk and network IO within HBase, lowering response times. HBase has to be properly configured to support Snappy compression.

ZeroMQ message bus settings

The message bus class is frontera.contrib.messagebus.zeromq.MessageBus

ZMQ_ADDRESS

Default: 127.0.0.1

Defines where the ZeroMQ socket should bind or connect. Can be a hostname or an IP address. Right now ZMQ has only been properly tested with IPv4. Proper IPv6 support will be added in the near future.

ZMQ_BASE_PORT

Default: 5550

The base port for all ZeroMQ sockets. The message bus uses 6 sockets overall, with ports assigned starting from the base port in steps of 1. Make sure the interval [base:base+5] is available.

Kafka message bus settings

The message bus class is frontera.contrib.messagebus.kafkabus.MessageBus

KAFKA_LOCATION

Hostname and port of the Kafka broker, separated with ':'. Can also be a string with several hostname:port pairs separated with commas (,).
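
For example (the broker addresses are illustrative):

# Single broker
KAFKA_LOCATION = 'localhost:9092'

# Several brokers, comma-separated
KAFKA_LOCATION = 'broker1:9092,broker2:9092'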

FRONTIER_GROUP

Default: general

Kafka consumer group name, used for almost everything.

INCOMING_TOPIC

Default: frontier-done

Spider log stream topic name.

OUTGOING_TOPIC

Default: frontier-todo

Spider feed stream topic name.

SCORING_GROUP

Default: strategy-workers

A group used by strategy workers for spider log reading. Needs to be different from FRONTIER_GROUP.

SCORING_TOPIC

Kafka topic used for scoring log stream.

Default settings

If no settings are specified, frontier will use the built-in default ones. For a complete list of default values see: Built-in settings reference. All default settings can be overridden.