Content Caching

This document supplements the mod_cache, mod_cache_disk, mod_file_cache and htcacheclean reference documentation. It describes how to use the Apache HTTP Server's caching features to accelerate web and proxy serving, while avoiding common problems and misconfigurations.

Introduction

The Apache HTTP server offers a range of caching features that are designed to improve the performance of the server in various ways.
To get the most from this document, you should be familiar with the basics of HTTP, and have read the Users' Guides to Mapping URLs to the Filesystem and Content negotiation.

Three-state RFC2616 HTTP caching
The HTTP protocol contains built-in support for an in-line caching mechanism described by section 13 of RFC2616, and the mod_cache module can be used to take advantage of this.

Unlike a simple two state key/value cache where the content disappears completely when no longer fresh, an HTTP cache includes a mechanism to retain stale content, and to ask the origin server whether this stale content has changed and if not, make it fresh again. An entry in an HTTP cache exists in one of three states:

Fresh: If the content is new enough (younger than its freshness lifetime), it is considered fresh. An HTTP cache is free to serve fresh content without making any calls to the origin server at all.

Stale: If the content is too old (older than its freshness lifetime), it is considered stale. An HTTP cache should contact the origin server and check whether the content is still fresh before serving stale content to a client. If the content is unchanged, the origin can declare it fresh again without resending it, and the cycle continues.

Non Existent: If the cache runs out of space, it reserves the option to delete content from the cache to make room. Content can be deleted at any time, for example by the htcacheclean tool; if deleted content is requested again, it must be fetched afresh from the origin server.
Full details of how HTTP caching works can be found in Section 13 of RFC2616.

Interaction with the Server

The mod_cache module interacts with the server as follows: if the URL is found within the cache, and the cached content is fresh, the response is served directly from the cache without involving the origin handler.
If the URL is not found within the cache, mod_cache adds a filter to record the response as it is generated, saves it to the cache if it is cacheable, and otherwise steps aside. If the content found within the cache is stale, the mod_cache module converts the request into a conditional request: a 304 Not Modified response from the origin makes the cached content fresh again, while a full response replaces the stale content.
Improving Cache Hits

When a virtual host is known by one of many different server aliases, ensuring that UseCanonicalName is set to On can dramatically improve the ratio of cache hits. This is because the hostname of the virtual host serving the content is used within the cache key.

Freshness Lifetime

Well formed content that is intended to be cached should declare an
explicit freshness lifetime with the max-age or s-maxage fields of the Cache-Control header, or by including an Expires header.

At the same time, the origin server defined freshness lifetime can
be overridden by a client when the client presents their own Cache-Control header within the request. In this case, the lowest freshness lifetime between request and response wins.
When this freshness lifetime is missing from the request or the
response, a default freshness lifetime is applied. The default
freshness lifetime for cached entities is one hour, however
this can be easily overridden by using the CacheDefaultExpire directive.

If a response does not include an Expires header but does include a Last-Modified header, mod_cache can infer a freshness lifetime based on a heuristic, which can be controlled through the use of the CacheLastModifiedFactor directive.

For local content, or for remote content that does not define its own
freshness lifetime, mod_expires may be used to fine-tune the freshness lifetime. The maximum freshness lifetime may also be controlled by using the CacheMaxExpire directive.
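As an illustration of the freshness lifetime directives described above, a configuration might combine them as follows (the paths, times and content type here are assumed example values, not recommendations):

```apache
# Hypothetical example values - tune for your own site.
CacheEnable disk /
CacheRoot "/var/cache/apache/"

# Default freshness lifetime of 10 minutes where none is declared,
# but never consider anything fresh for longer than one day.
CacheDefaultExpire 600
CacheMaxExpire 86400

# For local static content, declare an explicit freshness lifetime
# via mod_expires.
ExpiresActive On
ExpiresByType image/png "access plus 1 month"
```

Note that CacheDefaultExpire only applies when neither the response nor the heuristic based on Last-Modified yields a freshness lifetime.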
A Brief Guide to Conditional Requests

When content expires from the cache and becomes stale, rather than pass on the original request, httpd will modify the request to make it conditional instead.

When an ETag header exists in the original cached response, mod_cache will add an If-None-Match header to the request to the origin server. When a Last-Modified header exists in the original cached response, mod_cache will add an If-Modified-Since header to the request to the origin server. Performing either of these actions makes the request conditional.

When a conditional request is received by an origin server, the origin server should check whether the ETag or the Last-Modified parameter has changed, as appropriate for the request. If not, the origin should respond with a terse "304 Not Modified" response. This signals to the cache that the stale content is still fresh and should be used for subsequent requests until the content's new freshness lifetime is reached again. If the content has changed, then the content is served as if the request were not conditional to begin with.

Conditional requests offer two benefits. Firstly, when making such a request to the origin server, if the content from the origin matches the content in the cache, this can be determined easily and without the overhead of transferring the entire resource. Secondly, a well designed origin server will be designed in such a way that conditional requests will be significantly cheaper to produce than a full response. For static files, typically all that is involved is a call to stat() or a similar system call, to see whether the file has changed in size or modification time.

Origin servers should make every effort to support conditional requests as far as is practical; however, if conditional requests are not supported, the origin will respond as if the request was not conditional, and the cache will respond as if the content had changed and save the new content to the cache. In this case, the cache will behave like a simple two state cache, where content is effectively either fresh or deleted.

What Can be Cached?

The full definition of which responses can be cached by an HTTP cache is defined in RFC2616 Section 13.4 Response Cacheability, and can be summed up as follows: caching must be enabled for the URL; the response must be a success response (such as 200 OK) to a GET request; the response must not be marked uncacheable, for example with Cache-Control: no-store or private; and responses to requests carrying an Authorization header are not cached unless the response explicitly permits it.
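The revalidation flow described in the conditional requests section above can be sketched as follows. The hostname, dates and validator values in this exchange are invented for illustration:

```http
GET /index.html HTTP/1.1
Host: www.example.com
If-None-Match: "686897696a7c876b7e"
If-Modified-Since: Tue, 10 Jan 2023 12:00:00 GMT

HTTP/1.1 304 Not Modified
ETag: "686897696a7c876b7e"
Cache-Control: max-age=3600
```

On receiving the 304, the cache keeps its stored copy of the body and simply resets its freshness lifetime; no content is transferred.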
What Should Not be Cached?

It should be up to the client creating the request, or the origin server constructing the response, to decide whether or not the content should be cacheable, by correctly setting the Cache-Control header.
Content that is time sensitive, or which varies depending on the
particulars of the request that are not covered by HTTP negotiation,
should not be cached. This content should declare itself uncacheable
using the Cache-Control: no-store header.

If content changes often, expressed by a freshness lifetime of minutes or seconds, the content can still be cached, however it is highly desirable that the origin server supports conditional requests correctly to ensure that full responses do not have to be generated on a regular basis.

Content that varies based on client provided request headers can be
cached through intelligent use of the Vary response header.

Variable/Negotiated Content

When the origin server is designed to respond with different content based on the value of headers in the request, for example to serve multiple languages at the same URL, HTTP's caching mechanism makes it possible to cache multiple variants of the same page at the same URL. This is done by the origin server adding a Vary header to indicate which headers must be taken into account by a cache when determining whether two variants are different from one another.

If, for example, a response is received with a Vary header such as:

Vary: negotiate,accept-language,accept-charset

mod_cache will only serve the cached content to requesters whose accept-language and accept-charset headers match those of the original request.

Multiple variants of the content can be cached side by side; mod_cache uses the Vary header and the corresponding values of the request headers listed by Vary to decide which of many variants to return to the client.
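The Cache-Control and Vary behaviour described above can be set from the server side with mod_headers. The paths in this sketch are hypothetical:

```apache
# Hypothetical: mark a time-sensitive endpoint uncacheable.
<Location "/live-status">
    Header set Cache-Control "no-store"
</Location>

# Hypothetical: declare that a negotiated resource varies by
# language, so caches store one variant per Accept-Language value.
<Location "/greeting">
    Header merge Vary "Accept-Language"
</Location>
```

Header merge (rather than set) is used for Vary so that any value already present in the response is preserved.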
Cache Setup Examples
Caching to Disk

The mod_cache module relies on specific backend store implementations to manage the cache; for caching to disk, mod_cache_disk is provided.

Typically the module will be configured as follows:

CacheRoot "/var/cache/apache/"
CacheEnable disk /
CacheDirLevels 2
CacheDirLength 1

Importantly, as the cached files are locally stored, operating system in-memory caching will typically be applied to their access also. So although the files are stored on disk, if they are frequently accessed it is likely the operating system will ensure that they are actually served from memory.

Understanding the Cache-Store

To store items in the cache, mod_cache_disk creates a 22 character hash of the URL being requested. This hash incorporates the hostname, protocol, port, path and any CGI arguments to the URL, as well as any elements defined by the Vary header, to ensure that multiple URLs do not collide with one another.
Each character may be any one of 64 different characters, which means that overall there are 64^22 possible hashes. For example, a URL might be hashed to a string such as xyTGxSMO2b68mBCykqkp1w. This hash is used as a prefix for the naming of the files specific to that URL within the cache; however, first it is split up into directories as per the CacheDirLevels and CacheDirLength directives.

The overall aim of this technique is to reduce the number of subdirectories or files that may be in a particular directory, as most file-systems slow down as this number increases. With a setting of "1" for CacheDirLength there can be at most 64 subdirectories at any particular level. With a setting of 2 there can be 64 * 64 subdirectories, and so on. Unless you have a good reason not to, a setting of 1 for CacheDirLength is recommended in most cases.

Setting CacheDirLevels depends on how many files you anticipate to store in the cache. With the setting of "2" used in the above example, a grand total of 4096 subdirectories can ultimately be created. With one million files cached, this works out at an average of roughly 244 cached URLs per directory.
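The directory arithmetic above can be sketched as a small calculation. This is illustrative only and not part of httpd; the function names are invented:

```python
# Sketch: estimate how mod_cache_disk's CacheDirLevels and
# CacheDirLength settings spread cached URLs across directories.

def leaf_directories(dir_levels: int, dir_length: int) -> int:
    # Each level consumes dir_length characters of the 22-character
    # hash; each character has 64 possible values.
    return 64 ** (dir_levels * dir_length)

def urls_per_directory(cached_urls: int, dir_levels: int, dir_length: int) -> float:
    # Average number of cached URLs per leaf directory.
    return cached_urls / leaf_directories(dir_levels, dir_length)

print(leaf_directories(2, 1))                       # 4096
print(round(urls_per_directory(1_000_000, 2, 1)))   # 244
```

Increasing CacheDirLevels beyond what the anticipated cache size requires only adds filesystem overhead without improving lookup times.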
Each URL uses at least two files in the cache-store. Typically there is a ".header" file, which includes meta-information about the URL, such as when it is due to expire, and a ".data" file which is a verbatim copy of the content to be served.

In the case of content negotiated via the "Vary" header, a ".vary" directory will be created for the URL in question. This directory will have multiple ".data" files corresponding to the differently negotiated content.

Maintaining the Disk Cache

The mod_cache_disk module makes no attempt to regulate the amount of disk space used by the cache, although it will gracefully stand down on any disk error and behave as if the cache was never present.

Instead, provided with httpd is the htcacheclean tool which allows you to clean the cache periodically. Determining how frequently to run htcacheclean and what target size to use for the cache is somewhat complex and trial and error may be needed to select optimal values.

htcacheclean has two modes of operation. It can be run as a persistent daemon, or periodically from cron. htcacheclean can take up to an hour or more to process very large (tens of gigabytes) caches, and if you are running it from cron it is recommended that you determine how long a typical run takes, to avoid running more than one instance at a time.

It is also recommended that an appropriate "nice" level is chosen for htcacheclean so that the tool does not cause excessive disk I/O while the server is running.
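As a sketch of the two modes of operation described above (the path and the 512 MByte target here are assumptions, not recommendations):

```
# From cron, hourly: run htcacheclean niced (-n), deleting empty
# directories (-t), trimming /var/cache/apache to 512 MBytes (-l).
0 * * * * htcacheclean -n -t -p/var/cache/apache -l512M

# Alternatively, as a persistent daemon waking every 30 minutes:
#   htcacheclean -n -t -d30 -p/var/cache/apache -l512M
```

Whichever mode is chosen, the size limit should be comfortably below the space actually available on the cache filesystem.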
Figure 1: Typical cache growth / clean sequence.

Because mod_cache_disk does not itself pay attention to how much space is used, you should ensure that htcacheclean is configured to leave sufficient "grow room" following a clean.

Caching to memcached

Using the mod_cache_socache module, mod_cache can cache data via a variety of underlying shared object cache implementations. Using the mod_socache_memcache module, for example, memcached can be specified as the backend storage mechanism.

Typically the module will be configured as follows:

CacheEnable socache /
CacheSocache memcache:memcd.example.com:11211

Additional memcached servers can be specified by appending them to the end of the CacheSocache memcache: line, separated by commas:

CacheEnable socache /
CacheSocache memcache:mem1.example.com:11211,mem2.example.com:11212

This format is also used with the other various mod_cache_socache providers. For example:

CacheEnable socache /
CacheSocache shmcb:/path/to/datafile(512000)

CacheEnable socache /
CacheSocache dbm:/path/to/datafile

General Two-state Key/Value Shared Object Caching
The Apache HTTP server offers a low level shared object cache for caching information such as SSL sessions, or authentication credentials, within the socache interface. Additional modules are provided for each implementation, offering the following backends:

mod_socache_dbm: DBM based shared object cache.
mod_socache_dc: Distcache based shared object cache.
mod_socache_memcache: Memcache based shared object cache.
mod_socache_shmcb: Shared memory based shared object cache.
Caching Authentication Credentials
The mod_authn_socache module allows the result of authentication to be cached, relieving load on authentication backends.

Caching SSL Sessions
The mod_ssl module uses the socache interface to provide a session cache and a stapling cache.

Specialized File Caching
On platforms where a filesystem might be slow, or where file handles are expensive, the option exists to pre-load files into memory on startup. On systems where opening files is slow, the option exists to open the file on startup and cache the file handle. These options can help on systems where access to static files is slow.

File-Handle Caching

The act of opening a file can itself be a source of delay, particularly on network filesystems. By maintaining a cache of open file descriptors for commonly served files, httpd can avoid this delay. Currently httpd provides one implementation of file-handle caching.

CacheFile

The most basic form of caching present in httpd is the file-handle caching provided by mod_file_cache. Rather than caching file-contents, this cache maintains a table of open file descriptors. Files to be cached in this manner are specified in the configuration file using the CacheFile directive, which instructs httpd to open the file when it is started and to re-use this file-handle for all subsequent access to the file:
CacheFile /usr/local/apache2/htdocs/index.html

If you intend to cache a large number of files in this manner, you must ensure that your operating system's limit for the number of open files is set appropriately.

Although using CacheFile saves the cost of opening the file on each request, it comes with a caveat: if the file is removed while httpd is running, httpd will continue to maintain an open file descriptor and serve the file as it was when httpd was started. This usually also means that although the file will have been deleted, and not show up on the filesystem, extra free space will not be recovered until httpd is stopped and the file descriptor closed.

In-Memory Caching

Serving directly from system memory is universally the fastest method of serving content. Reading files from a disk controller or, even worse, from a remote network is orders of magnitude slower. Disk controllers usually involve physical processes, and network access is limited by your available bandwidth. Memory access on the other hand can take mere nanoseconds.

System memory isn't cheap though; byte for byte it's by far the most expensive type of storage and it's important to ensure that it is used efficiently. By caching files in memory you decrease the amount of memory available on the system. As we'll see, in the case of operating system caching, this is not so much of an issue, but when using httpd's own in-memory caching it is important to make sure that you do not allocate too much memory to a cache. Otherwise the system will be forced to swap out memory, which will likely degrade performance.

Operating System Caching

Almost all modern operating systems cache file-data in memory managed directly by the kernel. This is a powerful feature, and for the most part operating systems get it right.
For example, on Linux, let's look at the difference in the time it takes to read a file for the first time and the second time:

colm@coroebus:~$ time cat testfile > /dev/null
real    0m0.065s
user    0m0.000s
sys     0m0.001s
colm@coroebus:~$ time cat testfile > /dev/null
real    0m0.003s
user    0m0.003s
sys     0m0.000s

Even for this small file, there is a huge difference in the amount of time it takes to read the file. This is because the kernel has cached the file contents in memory.

By ensuring there is "spare" memory on your system, you can ensure that more and more file-contents will be stored in this cache. This can be a very efficient means of in-memory caching, and involves no extra configuration of httpd at all.

Additionally, because the operating system knows when files are deleted or modified, it can automatically remove file contents from the cache when necessary. This is a big advantage over httpd's in-memory caching which has no way of knowing when a file has changed.

Despite the performance and advantages of automatic operating system caching there are some circumstances in which in-memory caching may be better performed by httpd.

MMapFile Caching

mod_file_cache also provides the MMapFile directive, which allows you to have httpd map a static file's contents into memory at start time (using the mmap system call). httpd will use the in-memory contents for all subsequent accesses to this file:

MMapFile /usr/local/apache2/htdocs/index.html

As with the CacheFile directive, any changes in these files will not be picked up by httpd after it has started.
The MMapFile directive does not keep track of how much memory it allocates, so care must be taken not to over-use it. Each httpd child process will replicate this memory, so it is critically important to ensure that the files mapped are not so large as to cause the system to swap memory.

Security Considerations

Authorization and Access Control

Using mod_cache in its default state where CacheQuickHandler is set to On is very much like having a caching reverse proxy bolted to the front of the server. Requests will be served by the caching module unless it determines that the origin server should be queried, just as an external cache would, and this drastically changes the security model of httpd.

As traversing a filesystem hierarchy to examine potential .htaccess files would be a very expensive operation, partially defeating the point of caching (to speed up requests), mod_cache makes no decision about whether a cached entity is authorised for serving. In other words, if mod_cache has cached some content, it will be served from the cache as long as that content has not expired.
If, for example, your configuration permits access to a resource by IP address you should ensure that this content is not cached. You can do this by using the CacheDisable directive, or mod_expires. Left unchecked, mod_cache, very much like a reverse proxy, would cache the content when served and then serve it to any client, on any IP address.

When the CacheQuickHandler directive is set to Off, the full set of request processing phases are executed and the security model remains unchanged.

Local exploits

As requests to end-users can be served from the cache, the cache itself can become a target for those wishing to deface or interfere with content. It is important to bear in mind that the cache must at all times be writable by the user which httpd is running as. This is in stark contrast to the usually recommended situation of maintaining all content unwritable by the Apache user.

If the Apache user is compromised, for example through
a CGI process, it is possible that the cache may be targeted. When using mod_cache_disk, it is relatively easy to insert or modify a cached entity.

This presents a somewhat elevated risk in comparison to the other types of attack it is possible to make as the Apache user. If you are using mod_cache_disk, you should bear this in mind: ensure you upgrade httpd when security upgrades are announced, and run CGI processes as a non-Apache user using suEXEC if possible.

Cache Poisoning

When running httpd as a caching proxy server, there is also the potential for so-called cache poisoning. Cache poisoning is a broad term for attacks in which an attacker causes the proxy server to retrieve incorrect (and usually undesirable) content from the origin server.

For example, if the DNS servers used by your system running httpd are vulnerable to DNS cache poisoning, an attacker may be able to control where httpd connects to when requesting content from the origin server. Another example is so-called HTTP request-smuggling attacks.

This document is not the correct place for an in-depth discussion of HTTP request smuggling (instead, try your favourite search engine); however, it is important to be aware that it is possible to make a series of requests, and to exploit a vulnerability on an origin webserver, such that the attacker can entirely control the content retrieved by the proxy.

Denial of Service / Cachebusting

The Vary mechanism allows multiple variants of the same URL to be
cached side by side. Depending on header values provided by the client,
the cache will select the correct variant to return to the client. This
mechanism can become a problem when an attempt is made to vary on a
header that is known to contain a wide range of possible values under
normal use, for example the User-Agent header. Depending on the popularity of the web site, thousands or millions of duplicate cache entries could be created for the same URL, crowding out other entries in the cache.

In other cases, there may be a need to change the URL of a particular
resource on every request, usually by adding a "cachebuster" string to
the URL. If this content is declared cacheable by a server for a
significant freshness lifetime, these entries can crowd out
legitimate entries in a cache. While mod_cache provides a CacheIgnoreURLSessionIdentifiers directive to mitigate this, it should be used with care, to ensure that downstream proxy or browser caches aren't subjected to the same denial of service issue.
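As a sketch of the mitigation mentioned above, the session identifier name here is a hypothetical example:

```apache
# Hypothetical: ignore a "jsessionid" identifier in the URL when
# constructing the cache key, so every request for the same
# resource maps to a single cache entry.
CacheIgnoreURLSessionIdentifiers jsessionid
```

This only changes how httpd's own cache keys URLs; downstream caches that do not strip the identifier will still store one entry per unique URL.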