API

This part of the documentation covers all the interfaces of requests-cache.

Public API

requests_cache.core

Core functions for configuring cache and monkey patching requests

class requests_cache.core.CachedSession(cache_name='cache', backend=None, expire_after=None, allowable_codes=(200, ), allowable_methods=('GET', ), old_data_on_error=False, **backend_options)

Requests Sessions with caching support.

Parameters:
  • cache_name

    for the sqlite backend: the cache file will start with this prefix, e.g. cache.sqlite

    for mongodb: used as the database name

    for redis: used as the namespace, meaning all keys are prefixed with 'cache_name:'

  • backend – cache backend name, e.g. 'sqlite', 'mongodb', 'redis' or 'memory' (see Persistence), or an instance of a backend implementation. The default is None, which means use 'sqlite' if available, otherwise fall back to 'memory'.
  • expire_after (float) – timedelta or number of seconds after which cached responses expire, or None (default) to ignore expiration
  • allowable_codes (tuple) – cache only responses with these status codes (default: 200)
  • allowable_methods (tuple) – cache only requests made with these HTTP methods (default: 'GET')
  • backend_options – options for the chosen backend; see the corresponding sqlite, mongo and redis backend API documentation
  • include_get_headers – if True, headers will be part of the cache key, e.g. after get('some_link', headers={'Accept': 'application/json'}), a subsequent get('some_link', headers={'Accept': 'application/xml'}) is not served from the cache
  • ignored_parameters – list of parameters to be excluded from the cache key; useful when requesting the same resource with different credentials or access tokens passed as parameters
  • old_data_on_error – if True, return the expired cached response if the update fails
cache_disabled(*args, **kwds)

Context manager for temporarily disabling the cache

>>> s = CachedSession()
>>> with s.cache_disabled():
...     s.get('http://httpbin.org/ip')
remove_expired_responses()

Removes expired responses from storage

requests_cache.core.install_cache(cache_name='cache', backend=None, expire_after=None, allowable_codes=(200, ), allowable_methods=('GET', ), session_factory=<class 'requests_cache.core.CachedSession'>, **backend_options)

Installs the cache for all requests by monkey-patching requests.Session

Parameters are the same as in CachedSession. Additional parameters:

Parameters: session_factory – session factory; must be a class that inherits from CachedSession (the default)
requests_cache.core.configure(cache_name='cache', backend=None, expire_after=None, allowable_codes=(200, ), allowable_methods=('GET', ), session_factory=<class 'requests_cache.core.CachedSession'>, **backend_options)

Installs the cache for all requests by monkey-patching requests.Session

Parameters are the same as in CachedSession. Additional parameters:

Parameters: session_factory – session factory; must be a class that inherits from CachedSession (the default)
requests_cache.core.uninstall_cache()

Restores requests.Session and disables cache

requests_cache.core.disabled(*args, **kwds)

Context manager for temporarily disabling the globally installed cache

Warning

not thread-safe

>>> with requests_cache.disabled():
...     requests.get('http://httpbin.org/ip')
...     requests.get('http://httpbin.org/get')
requests_cache.core.enabled(*args, **kwds)

Context manager for temporarily installing the global cache.

Accepts the same arguments as install_cache()

Warning

not thread-safe

>>> with requests_cache.enabled('cache_db'):
...     requests.get('http://httpbin.org/get')
requests_cache.core.get_cache()

Returns the internal cache object from the globally installed CachedSession

requests_cache.core.clear()

Clears the globally installed cache

requests_cache.core.remove_expired_responses()

Removes expired responses from storage


Cache backends

requests_cache.backends.base

Contains the BaseCache class, which can be used as an in-memory cache backend or extended to support persistence.

class requests_cache.backends.base.BaseCache(*args, **kwargs)

Base class for cache implementations; can be used as an in-memory cache.

To extend it, provide dictionary-like objects for keys_map and responses, or override the public methods.

keys_map = None

key -> key_in_responses mapping

responses = None

key_in_cache -> response mapping

save_response(key, response)

Save response to cache

Parameters:
  • key – key for this response
  • response – response to save

Note

Response is reduced before saving (with reduce_response()) to make it picklable

add_key_mapping(new_key, key_to_response)

Adds a mapping from new_key to key_to_response, making it possible to associate many keys with a single response

Parameters:
  • new_key – new key (e.g. url from redirect)
  • key_to_response – key which can be found in responses
get_response_and_time(key, default=(None, None))

Retrieves response and timestamp for key if it’s stored in cache, otherwise returns default

Parameters:
  • key – key of resource
  • default – return this if key not found in cache
Returns:

tuple (response, datetime)

Note

Response is restored after unpickling with restore_response()

delete(key)

Delete key from cache. Also deletes all responses from response history

delete_url(url)

Delete response associated with url from cache. Also deletes all responses from response history. Works only for GET requests

clear()

Clear cache

remove_old_entries(created_before)

Deletes entries from cache with creation time older than created_before

has_key(key)

Returns True if cache has key, False otherwise

has_url(url)

Returns True if cache has url, False otherwise. Works only for GET request urls

reduce_response(response, seen=None)

Reduce response object to make it compatible with pickle

restore_response(response, seen=None)

Restore response object after unpickling
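The keys_map/responses split above can be illustrated with plain dictionaries. The TinyCache class below is a hypothetical stand-in, not the library's implementation; it only shows why add_key_mapping() lets several keys (e.g. redirect URLs) resolve to one stored response:

```python
class TinyCache:
    """Simplified sketch of BaseCache's two-level storage (hypothetical)."""

    def __init__(self):
        self.keys_map = {}    # key -> key_in_responses
        self.responses = {}   # key_in_responses -> (response, timestamp)

    def save_response(self, key, response, timestamp):
        self.responses[key] = (response, timestamp)

    def add_key_mapping(self, new_key, key_to_response):
        # Associate another key (e.g. a redirect url) with a stored response
        self.keys_map[new_key] = key_to_response

    def get_response_and_time(self, key, default=(None, None)):
        # Resolve an alias first, then look up the stored response
        key = self.keys_map.get(key, key)
        return self.responses.get(key, default)

cache = TinyCache()
cache.save_response('http://example.com/final', '<body>', '2014-01-01')
cache.add_key_mapping('http://example.com/redirect', 'http://example.com/final')
```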

requests_cache.backends.sqlite

sqlite3 cache backend

class requests_cache.backends.sqlite.DbCache(location='cache', fast_save=False, extension='.sqlite', **options)

sqlite cache backend.

Reading is fast, saving is a bit slower. It can store large amounts of data with low memory usage.

Parameters:
  • location – database filename prefix (default: 'cache')
  • fast_save – speed up cache saving up to 50 times, but with the possibility of data loss. See backends.DbDict for more info
  • extension – extension for filename (default: '.sqlite')

requests_cache.backends.mongo

mongo cache backend

class requests_cache.backends.mongo.MongoCache(db_name='requests-cache', **options)

mongo cache backend.

Parameters:
  • db_name – database name (default: 'requests-cache')
  • connection – (optional) pymongo.Connection

requests_cache.backends.redis

redis cache backend

class requests_cache.backends.redis.RedisCache(namespace='requests-cache', **options)

redis cache backend.

Parameters:
  • namespace – redis namespace (default: 'requests-cache')
  • connection – (optional) redis.StrictRedis

Internal modules which can be used outside

requests_cache.backends.dbdict

Dictionary-like objects for saving large data sets to sqlite database

class requests_cache.backends.storage.dbdict.DbDict(filename, table_name='data', fast_save=False, **options)

DbDict - a dictionary-like object for saving large datasets to sqlite database

It’s possible to create multiple DbDict instances, which will be stored as separate tables in one database:

d1 = DbDict('test', 'table1')
d2 = DbDict('test', 'table2')
d3 = DbDict('test', 'table3')

All data will be stored in the test.sqlite database, in the corresponding tables: table1, table2 and table3

Parameters:
  • filename – filename for database (without extension)
  • table_name – table name
  • fast_save – If True, sqlite will be configured with “PRAGMA synchronous = 0;” to speed up cache saving, but be careful: it’s dangerous. Tests showed that the insertion order of records can be wrong with this option.
can_commit = None

Transactions can be committed if this property is set to True

commit(force=False)

Commits pending transaction if can_commit or force is True

Parameters: force – force the commit, ignoring can_commit
bulk_commit(*args, **kwds)

Context manager used to speed up insertion of a large number of records

>>> d1 = DbDict('test')
>>> with d1.bulk_commit():
...     for i in range(1000):
...         d1[i] = i * 2
class requests_cache.backends.storage.dbdict.DbPickleDict(filename, table_name='data', fast_save=False, **options)

Same as DbDict, but pickles values before saving

Parameters:
  • filename – filename for database (without extension)
  • table_name – table name
  • fast_save

    If True, sqlite will be configured with “PRAGMA synchronous = 0;” to speed up cache saving, but be careful: it’s dangerous. Tests showed that the insertion order of records can be wrong with this option.

requests_cache.backends.mongodict

Dictionary-like objects for saving large data sets to mongodb database

class requests_cache.backends.storage.mongodict.MongoDict(db_name, collection_name='mongo_dict_data', connection=None)

MongoDict - a dictionary-like interface for mongo database

Parameters:
  • db_name – database name (be careful with production databases)
  • collection_name – collection name (default: mongo_dict_data)
  • connection – pymongo.Connection instance. If None (default), a new connection with default options will be created
class requests_cache.backends.storage.mongodict.MongoPickleDict(db_name, collection_name='mongo_dict_data', connection=None)

Same as MongoDict, but pickles values before saving

Parameters:
  • db_name – database name (be careful with production databases)
  • collection_name – collection name (default: mongo_dict_data)
  • connection – pymongo.Connection instance. If None (default), a new connection with default options will be created

requests_cache.backends.redisdict

Dictionary-like objects for saving large data sets to redis key-store

class requests_cache.backends.storage.redisdict.RedisDict(namespace, collection_name='redis_dict_data', connection=None)

RedisDict - a dictionary-like interface for redis key-stores

The actual key name on the redis server will be namespace:collection_name

Because of how redis stores data, everything (both keys and values) must be pickled.

Parameters:
  • namespace – namespace to use
  • collection_name – name of the hash map stored in redis (default: redis_dict_data)
  • connection – redis.StrictRedis instance. If None (default), a new connection with default options will be created