requests_cache.backends package

Classes and functions for cache persistence

requests_cache.backends.get_placeholder_backend(original_exception=None)[source]

Create a placeholder type for a backend class whose dependencies are not installed. This allows an ImportError to be raised at init time rather than at import time.

Return type

Type[BaseCache]

requests_cache.backends.init_backend(backend, *args, **kwargs)[source]

Initialize a backend given a name, class, or instance

Return type

BaseCache
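
For example, a backend can be specified in any of these forms (a minimal sketch; the 'sqlite' name and db_path value are illustrative):

from requests_cache.backends import init_backend
from requests_cache.backends.sqlite import DbCache

backend = init_backend('sqlite', db_path='demo_cache')    # by name
backend = init_backend(DbCache, db_path='demo_cache')     # by class
backend = init_backend(DbCache(db_path='demo_cache'))     # by instance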

Submodules

requests_cache.backends.base module

class requests_cache.backends.base.BaseCache(*args, include_get_headers=False, ignored_parameters=None, **kwargs)[source]

Bases: object

Base class for cache implementations, which can also be used as an in-memory cache.

See Custom Backends for details on creating your own implementation.

clear()[source]

Delete all items from the cache

create_key(request, **kwargs)[source]

Create a normalized cache key from a request object

Return type

str

delete(key)[source]

Delete key from cache. Also deletes all responses from response history

delete_history(key)[source]

Delete redirect history associated with a response, if any

delete_url(url)[source]

Delete the response and any redirects associated with url from the cache. Works only for GET requests.

get_response(key, default=None)[source]

Retrieves response for key if it’s stored in cache, otherwise returns default

Parameters
  • key (str) – Key of resource

  • default – Value to return if key is not in cache

Return type

CachedResponse

has_key(key)[source]

Returns True if cache has key, False otherwise

Return type

bool

has_url(url)[source]

Returns True if cache has url, False otherwise. Works only for GET request urls

Return type

bool

remove_expired_responses(expire_after=None)[source]

Remove expired responses from the cache, optionally with revalidation

Parameters

expire_after (Union[None, int, float, datetime, timedelta]) – A new expiration time used to revalidate the cache
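
For example, a sketch of revalidating all cached responses against a new one-day expiration (assuming cache is an existing BaseCache instance):

from datetime import timedelta

cache.remove_expired_responses(expire_after=timedelta(days=1))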

remove_old_entries(*args, **kwargs)[source]

save_redirect(request, response_key)[source]

Map a redirect request to a response. This makes it possible to associate many keys with a single response.

Parameters
  • request (PreparedRequest) – Request object for redirect URL

  • response_key (str) – Cache key which can be found in responses

save_response(key, response, expire_after=None)[source]

Save response to cache

Parameters
  • key (str) – Cache key for this response

  • response – Response object to save

  • expire_after – Expiration time to set for this response, overriding the cache default

property urls

Get all URLs currently in the cache (excluding redirects)

Return type

List[str]
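
Since BaseCache can be used directly as an in-memory cache, here is a minimal sketch of saving and retrieving a response (the URL and surrounding details are illustrative, not a definitive recipe):

import requests
from requests_cache.backends.base import BaseCache

cache = BaseCache()
response = requests.get('https://httpbin.org/get')

# Create a normalized key from the prepared request and store the response
key = cache.create_key(response.request)
cache.save_response(key, response)

# Check for and retrieve the cached response, then remove it by URL
print(cache.has_url('https://httpbin.org/get'))
cached = cache.get_response(key)
cache.delete_url('https://httpbin.org/get')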

class requests_cache.backends.base.BaseStorage(secret_key=None, salt=b'requests-cache', suppress_warnings=False, serializer=None, **kwargs)[source]

Bases: collections.abc.MutableMapping, abc.ABC

Base class for backend storage implementations

Parameters
  • secret_key (Union[Iterable, str, bytes, None]) – Optional secret key used to sign cache items for added security

  • salt (Union[str, bytes]) – Optional salt used to sign cache items

  • suppress_warnings (bool) – Don’t show a warning when not using secret_key

  • serializer – Custom serializer that provides loads and dumps methods

deserialize(item)[source]

Deserialize a cached URL or response

Return type

Union[CachedResponse, str]

serialize(item)[source]

Serialize a URL or response into bytes

Return type

bytes
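
The serializer only needs to provide loads() and dumps() methods; for example, the standard library's pickle module already satisfies that interface. A sketch using DbDict as the concrete storage class (assuming extra keyword arguments are passed through to BaseStorage):

import pickle
from requests_cache.backends.sqlite import DbDict

# pickle provides loads()/dumps(), so it can stand in as a custom serializer
storage = DbDict('demo_cache', table_name='responses', serializer=pickle)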

requests_cache.backends.dynamodb module

class requests_cache.backends.dynamodb.DynamoDbCache(table_name='http_cache', **kwargs)[source]

Bases: requests_cache.backends.base.BaseCache

DynamoDB cache backend

Parameters
  • table_name – DynamoDb table name

  • namespace – Name of DynamoDb hash map

  • connection – DynamoDb Resource object (boto3.resource('dynamodb')) to use instead of creating a new one
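
A sketch of installing this backend with a custom boto3 resource (the endpoint URL is illustrative, e.g. a local DynamoDB instance):

import boto3
import requests_cache

connection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
requests_cache.install_cache(backend='dynamodb', connection=connection)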

class requests_cache.backends.dynamodb.DynamoDbDict(table_name, namespace='http_cache', connection=None, endpoint_url=None, region_name='us-east-1', aws_access_key_id=None, aws_secret_access_key=None, read_capacity_units=1, write_capacity_units=1, **kwargs)[source]

Bases: requests_cache.backends.base.BaseStorage

A dictionary-like interface for DynamoDB key-value store

Note: The actual key name on the DynamoDB server will be namespace:table_name

To accommodate how DynamoDB stores data, all keys and values are pickled.

Parameters
  • table_name – DynamoDb table name

  • namespace – Name of DynamoDb hash map

  • connection – DynamoDb Resource object (boto3.resource('dynamodb')) to use instead of creating a new one

  • endpoint_url – Alternative URL of dynamodb server.

clear() → None. Remove all items from D. [source]

requests_cache.backends.gridfs module

class requests_cache.backends.gridfs.GridFSCache(db_name, **kwargs)[source]

Bases: requests_cache.backends.base.BaseCache

GridFS cache backend. Use this backend to store documents greater than 16MB.

Usage:

requests_cache.install_cache(backend='gridfs')

Or:

from pymongo import MongoClient
requests_cache.install_cache(backend='gridfs', connection=MongoClient('another-host.local'))

class requests_cache.backends.gridfs.GridFSPickleDict(db_name, collection_name=None, connection=None, **kwargs)[source]

Bases: requests_cache.backends.base.BaseStorage

A dictionary-like interface for a GridFS collection

clear() → None. Remove all items from D. [source]

requests_cache.backends.mongo module

class requests_cache.backends.mongo.MongoCache(db_name='http_cache', **kwargs)[source]

Bases: requests_cache.backends.base.BaseCache

MongoDB cache backend
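
A sketch of installing this backend, assuming the 'mongodb' backend name and an optional existing client (the host name is illustrative):

from pymongo import MongoClient
import requests_cache

requests_cache.install_cache(backend='mongodb', connection=MongoClient('mongodb-host.local'))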

class requests_cache.backends.mongo.MongoDict(db_name, collection_name='http_cache', connection=None, **kwargs)[source]

Bases: requests_cache.backends.base.BaseStorage

A dictionary-like interface for a MongoDB collection

clear() → None. Remove all items from D. [source]

class requests_cache.backends.mongo.MongoPickleDict(db_name, collection_name='http_cache', connection=None, **kwargs)[source]

Bases: requests_cache.backends.mongo.MongoDict

Same as MongoDict, but pickles values before saving

requests_cache.backends.redis module

class requests_cache.backends.redis.RedisCache(namespace='http_cache', **kwargs)[source]

Bases: requests_cache.backends.base.BaseCache

Redis cache backend.

Parameters
  • namespace – Redis namespace (default: 'http_cache')

  • connection – (optional) Redis connection instance to use instead of creating a new one
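
A sketch of installing this backend with an existing connection instance (host and port are illustrative):

from redis import StrictRedis
import requests_cache

connection = StrictRedis(host='localhost', port=6379)
requests_cache.install_cache(backend='redis', connection=connection)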

class requests_cache.backends.redis.RedisDict(namespace, collection_name='http_cache', connection=None, **kwargs)[source]

Bases: requests_cache.backends.base.BaseStorage

A dictionary-like interface for redis key-value store.

Notes

  • To accommodate how Redis stores data, all keys and values are pickled.

  • The actual key name on the Redis server will be namespace:collection_name.

Parameters
  • namespace – Redis namespace

  • collection_name – Name of the Redis hash map

  • connection – (optional) Redis connection instance to use instead of creating a new one

clear() → None. Remove all items from D. [source]

requests_cache.backends.sqlite module

class requests_cache.backends.sqlite.DbCache(db_path='http_cache', fast_save=False, **kwargs)[source]

Bases: requests_cache.backends.base.BaseCache

SQLite cache backend.

Reading is fast, saving is a bit slower. It can store large amounts of data with low memory usage.

Parameters
  • db_path (Union[Path, str]) – Database file path (expands user paths and creates parent dirs)

  • fast_save (bool) – Speed up cache saving by up to 50x, with the possibility of data loss. See DbDict for more info

  • timeout – Timeout for acquiring a database lock
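
A sketch of installing this backend (the cache name is illustrative; fast_save trades durability for speed as described above):

import requests_cache

requests_cache.install_cache('demo_cache', backend='sqlite', fast_save=True)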

remove_expired_responses(*args, **kwargs)[source]

Remove expired responses from the cache, with additional cleanup

class requests_cache.backends.sqlite.DbDict(db_path, table_name='http_cache', fast_save=False, timeout=5.0, **kwargs)[source]

Bases: requests_cache.backends.base.BaseStorage

A dictionary-like interface for SQLite.

It’s possible to create multiple DbDict instances, which will be stored as separate tables in one database:

d1 = DbDict('test', 'table1')
d2 = DbDict('test', 'table2')
d3 = DbDict('test', 'table3')

All data will be stored in separate tables in the file test.sqlite.

Parameters
  • db_path – Database file path

  • table_name – Table name

  • fast_save – Use “PRAGMA synchronous = 0;” to speed up cache saving, but with the potential for data loss

  • timeout – Timeout for acquiring a database lock

bulk_commit()[source]

Context manager used to speed up insertion of a large number of records

>>> d1 = DbDict('test')
>>> with d1.bulk_commit():
...     for i in range(1000):
...         d1[i] = i * 2
clear() → None. Remove all items from D. [source]

commit(force=False)[source]

Commits pending transaction if can_commit or force is True

Parameters

force – Force a commit, ignoring can_commit

connection(commit_on_success=False)[source]

vacuum()[source]

class requests_cache.backends.sqlite.DbPickleDict(db_path, table_name='http_cache', fast_save=False, timeout=5.0, **kwargs)[source]

Bases: requests_cache.backends.sqlite.DbDict

Same as DbDict, but pickles values before saving